I0925 02:21:07.625303 7 e2e.go:243] Starting e2e run "32d5fbe4-05b5-4e69-a18c-272909b3d97e" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1601000454 - Will randomize all specs
Will run 215 of 4413 specs
Sep 25 02:21:09.004: INFO: >>> kubeConfig: /root/.kube/config
Sep 25 02:21:09.056: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 25 02:21:09.239: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 25 02:21:09.418: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 25 02:21:09.419: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 25 02:21:09.419: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 25 02:21:09.460: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 25 02:21:09.460: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 25 02:21:09.460: INFO: e2e test version: v1.15.12
Sep 25 02:21:09.464: INFO: kube-apiserver version: v1.15.11
SSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:21:09.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Sep 25 02:21:09.538: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:22:09.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2750" for this suite.
Sep 25 02:22:31.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:22:31.808: INFO: namespace container-probe-2750 deletion completed in 22.204399931s

• [SLOW TEST:82.341 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
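The probe test above exercises a pod whose readiness probe always fails: the pod stays Running but is never reported Ready, and its restart count stays at 0, because readiness failures only remove a pod from Service endpoints and never trigger a container restart (only liveness failures do). A minimal sketch of such a pod, as an illustrative manifest (the image, name, and probe timings here are assumptions, not the exact spec the e2e test creates):

```python
import json

# Hypothetical pod in the spirit of the test: a long-running container with
# an exec readiness probe that always fails (/bin/false exits non-zero).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "never-ready"},          # illustrative name
    "spec": {
        "restartPolicy": "Always",
        "containers": [{
            "name": "busybox",
            "image": "busybox:1.29",              # any long-running image works
            "args": ["/bin/sh", "-c", "sleep 3600"],
            "readinessProbe": {
                # Always fails -> pod never becomes Ready, but is NOT restarted.
                "exec": {"command": ["/bin/false"]},
                "initialDelaySeconds": 5,
                "periodSeconds": 5,
            },
        }],
    },
}
print(json.dumps(pod, indent=2))
```

With a liveness probe this same command would cause repeated restarts; as a readiness probe it only keeps the pod out of endpoints, which is exactly what the test asserts ("never be ready and never restart").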
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:22:31.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:22:32.012: INFO: Creating deployment "nginx-deployment"
Sep 25 02:22:32.031: INFO: Waiting for observed generation 1
Sep 25 02:22:34.100: INFO: Waiting for all required pods to come up
Sep 25 02:22:34.114: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 25 02:22:44.131: INFO: Waiting for deployment "nginx-deployment" to complete
Sep 25 02:22:44.152: INFO: Updating deployment "nginx-deployment" with a non-existent image
Sep 25 02:22:44.167: INFO: Updating deployment nginx-deployment
Sep 25 02:22:44.167: INFO: Waiting for observed generation 2
Sep 25 02:22:46.181: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 25 02:22:46.187: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 25 02:22:46.192: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 25 02:22:46.209: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 25 02:22:46.210: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 25 02:22:46.214: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Sep 25 02:22:46.221: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Sep 25 02:22:46.221: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Sep 25 02:22:46.231: INFO: Updating deployment nginx-deployment
Sep 25 02:22:46.231: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Sep 25 02:22:46.595: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 25 02:22:46.658: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 25 02:22:49.320: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4169,SelfLink:/apis/apps/v1/namespaces/deployment-4169/deployments/nginx-deployment,UID:a546c38f-e653-47b9-aacd-02dcf868f1cd,ResourceVersion:315847,Generation:3,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-09-25 02:22:46 +0000 UTC 2020-09-25 02:22:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-25 02:22:47 +0000 UTC 2020-09-25 02:22:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Sep 25 02:22:49.768: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment 
"nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4169,SelfLink:/apis/apps/v1/namespaces/deployment-4169/replicasets/nginx-deployment-55fb7cb77f,UID:90d1eccb-0511-43a6-8954-cab58f4a81f4,ResourceVersion:315842,Generation:3,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a546c38f-e653-47b9-aacd-02dcf868f1cd 0x864b217 0x864b218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Sep 25 02:22:49.768: INFO: All old ReplicaSets of Deployment "nginx-deployment": Sep 25 02:22:49.769: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4169,SelfLink:/apis/apps/v1/namespaces/deployment-4169/replicasets/nginx-deployment-7b8c6f4498,UID:63952d69-ef4b-4bb6-92eb-50f654675278,ResourceVersion:315829,Generation:3,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a546c38f-e653-47b9-aacd-02dcf868f1cd 0x864b2e7 0x864b2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Sep 25 02:22:49.871: INFO: Pod "nginx-deployment-55fb7cb77f-79hpf" is not available: 
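The replica counts dumped above show proportional scaling at work: mid-rollout the old ReplicaSet sits at 8 replicas and the new (broken-image) one at 5, and when the Deployment is scaled from 10 to 30 with maxSurge=3 (so at most 33 pods total), the surplus of 20 is split in proportion to each ReplicaSet's current size, landing at 20 and 13. The controller's real implementation also tracks desired/max-replicas annotations and handles leftovers carefully, but a simplified sketch of the arithmetic (helper name is mine, not from Kubernetes) reproduces the numbers in the log:

```python
def proportional_scale(old_rs: int, new_rs: int, target: int, max_surge: int):
    """Split a scale-up surplus across two ReplicaSets in proportion
    to their current sizes (simplified sketch, not the controller's code)."""
    allowed = target + max_surge        # 30 + 3 = 33 pods allowed mid-rollout
    current = old_rs + new_rs           # 8 + 5 = 13 currently requested
    surplus = allowed - current         # 20 replicas to distribute
    add_old = round(surplus * old_rs / current)   # round(20 * 8 / 13) = 12
    add_new = surplus - add_old                   # remainder goes to the new RS
    return old_rs + add_old, new_rs + add_new

print(proportional_scale(8, 5, 30, 3))  # -> (20, 13), matching the log
```

This matches the verifications above: first rollout's ReplicaSet ends at `.spec.replicas = 20`, the second at `.spec.replicas = 13`, and the Deployment status reports Replicas:33 (the maxSurge ceiling).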
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-79hpf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-79hpf,UID:c6148410-a761-4cec-aec2-2dc6453f2d2d,ResourceVersion:315905,Generation:0,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afcfe7 0x8afcfe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0x8afd060} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.128,StartTime:2020-09-25 02:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.872: INFO: Pod "nginx-deployment-55fb7cb77f-7j6xp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7j6xp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-7j6xp,UID:bf979c9c-eec6-45f8-ae7b-a17df6ac3f2d,ResourceVersion:315831,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd170 
0x8afd171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afd1f0} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.873: INFO: Pod "nginx-deployment-55fb7cb77f-7rrx6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7rrx6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-7rrx6,UID:38e8fc4a-93db-4e4b-92bf-6c9f86940f85,ResourceVersion:315850,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd2e0 0x8afd2e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afd360} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.874: INFO: Pod "nginx-deployment-55fb7cb77f-cp9tn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cp9tn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-cp9tn,UID:3b364e25-11f4-4e69-8176-bc2760059976,ResourceVersion:315762,Generation:0,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd450 0x8afd451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0x8afd4d0} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.875: INFO: Pod "nginx-deployment-55fb7cb77f-dpw86" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpw86,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-dpw86,UID:d06bdd63-3bdc-4183-8d1a-f75935d5bcaf,ResourceVersion:315900,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd5c0 0x8afd5c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afd640} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil 
nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.876: INFO: Pod "nginx-deployment-55fb7cb77f-fzb9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fzb9p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-fzb9p,UID:e9e6550d-f512-40b9-86e9-9e6c5cab59f8,ResourceVersion:315844,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd730 0x8afd731}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afd7b0} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.877: INFO: Pod "nginx-deployment-55fb7cb77f-gcfnm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gcfnm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-gcfnm,UID:6e30e7d2-f35e-49c9-8a37-ea9b80f53dad,ResourceVersion:315891,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afd8a0 0x8afd8a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0x8afd920} {node.kubernetes.io/unreachable Exists NoExecute 0x8afd940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.878: INFO: Pod "nginx-deployment-55fb7cb77f-hr495" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hr495,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-hr495,UID:82e795dd-cd06-41c0-981a-d862faf0aced,ResourceVersion:315764,Generation:0,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afda10 0x8afda11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afda90} {node.kubernetes.io/unreachable Exists NoExecute 0x8afdab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} 
nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.879: INFO: Pod "nginx-deployment-55fb7cb77f-l6lpn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l6lpn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-l6lpn,UID:5a43b769-1e91-4fba-8a77-c0443fc651c8,ResourceVersion:315899,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afdb80 0x8afdb81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afdc00} {node.kubernetes.io/unreachable Exists NoExecute 0x8afdc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.880: INFO: Pod "nginx-deployment-55fb7cb77f-mccq5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mccq5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-mccq5,UID:71fb5efb-86eb-4d25-915e-deef7d4e345c,ResourceVersion:315885,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afdcf0 0x8afdcf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0x8afdd70} {node.kubernetes.io/unreachable Exists NoExecute 0x8afdd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.881: INFO: Pod "nginx-deployment-55fb7cb77f-rf46q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rf46q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-rf46q,UID:98379347-5d47-4318-9124-3eaba0537cf5,ResourceVersion:315738,Generation:0,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afde60 0x8afde61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8afdee0} {node.kubernetes.io/unreachable Exists NoExecute 0x8afdf00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} 
nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.882: INFO: Pod "nginx-deployment-55fb7cb77f-v9qdf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v9qdf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-v9qdf,UID:c9cb4403-e419-4f63-8b6d-1a6ea13f2849,ResourceVersion:315886,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x8afdfd0 0x8afdfd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422050} {node.kubernetes.io/unreachable Exists NoExecute 0x7422070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.883: INFO: Pod "nginx-deployment-55fb7cb77f-vj66r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vj66r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-55fb7cb77f-vj66r,UID:e821fc1e-bc03-48c1-a84e-3ad0836ad0ab,ResourceVersion:315904,Generation:0,CreationTimestamp:2020-09-25 02:22:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 90d1eccb-0511-43a6-8954-cab58f4a81f4 0x7422140 0x7422141}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0x74221c0} {node.kubernetes.io/unreachable Exists NoExecute 0x74221e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.99,StartTime:2020-09-25 02:22:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.884: INFO: Pod "nginx-deployment-7b8c6f4498-567vs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-567vs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-567vs,UID:a9dfe4bb-c470-4626-b680-8f7438063989,ResourceVersion:315863,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x74222d0 
0x74222d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422340} {node.kubernetes.io/unreachable Exists NoExecute 0x7422360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.885: INFO: Pod "nginx-deployment-7b8c6f4498-8sg9j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8sg9j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-8sg9j,UID:8633967c-9d71-42c0-8e81-23a6a0d10607,ResourceVersion:315874,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422420 0x7422421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422490} {node.kubernetes.io/unreachable Exists NoExecute 0x74224b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.886: INFO: Pod "nginx-deployment-7b8c6f4498-bg9g4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bg9g4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-bg9g4,UID:06024ffd-dcd3-4973-90e5-d5887274918a,ResourceVersion:315851,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422570 0x7422571}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74225e0} {node.kubernetes.io/unreachable Exists NoExecute 0x7422600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.887: INFO: Pod "nginx-deployment-7b8c6f4498-dccv2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dccv2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-dccv2,UID:776e2566-5fd3-4d38-b157-698048316f1b,ResourceVersion:315864,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x74226c0 0x74226c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422730} {node.kubernetes.io/unreachable Exists NoExecute 0x7422750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.889: INFO: Pod "nginx-deployment-7b8c6f4498-f6pwl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f6pwl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-f6pwl,UID:5338782b-5e42-400e-9bc2-5f44e73596bc,ResourceVersion:315682,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422810 0x7422811}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422880} {node.kubernetes.io/unreachable Exists NoExecute 0x74228a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.93,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bc4c49090c32dfc6c3045240b053142a0ded2d1a3921838c543cd231be15138a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.890: INFO: Pod "nginx-deployment-7b8c6f4498-fg4b2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fg4b2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-fg4b2,UID:6edf0761-bd85-4d28-8a61-9e0069102085,ResourceVersion:315839,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422970 0x7422971}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74229e0} {node.kubernetes.io/unreachable Exists NoExecute 0x7422a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.891: INFO: Pod "nginx-deployment-7b8c6f4498-fj5jj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fj5jj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-fj5jj,UID:9719f47a-4396-4a10-a127-717399d3351a,ResourceVersion:315856,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422ac0 0x7422ac1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422b30} {node.kubernetes.io/unreachable Exists NoExecute 0x7422b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.891: INFO: Pod "nginx-deployment-7b8c6f4498-hs5kt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hs5kt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-hs5kt,UID:8ac87764-9cea-4b7b-b4b4-e9c9a4621fe6,ResourceVersion:315660,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422c10 0x7422c11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422c80} {node.kubernetes.io/unreachable Exists NoExecute 0x7422ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.125,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:38 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0f23cb1ae39eac672eadad11dac1742b6176d6a37f6daef3ad27eedd2ba9c8d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.892: INFO: Pod "nginx-deployment-7b8c6f4498-jkkb8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jkkb8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-jkkb8,UID:9d2aacce-ed73-41a7-9c2a-3b30ba0f5df8,ResourceVersion:315892,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422d70 0x7422d71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422de0} {node.kubernetes.io/unreachable Exists NoExecute 0x7422e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.893: INFO: Pod "nginx-deployment-7b8c6f4498-lx9vn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lx9vn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-lx9vn,UID:a0502fe8-b86c-4669-a543-d7ff11aa6831,ResourceVersion:315827,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7422ec0 0x7422ec1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7422f30} {node.kubernetes.io/unreachable Exists NoExecute 0x7422f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.894: INFO: Pod "nginx-deployment-7b8c6f4498-nnd65" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nnd65,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-nnd65,UID:f2d09c29-f71f-431d-9d8b-a16d3cb82cb4,ResourceVersion:315692,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423010 0x7423011}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423080} {node.kubernetes.io/unreachable Exists NoExecute 0x74230a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.127,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://27cb51ae8becce4504af5157d5f3cac269d1d235339b655e0156cbbd7c5b8c5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.895: INFO: Pod "nginx-deployment-7b8c6f4498-nprqs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nprqs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-nprqs,UID:d1e391da-11bf-4256-8887-66f23ba3b2b2,ResourceVersion:315857,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423170 0x7423171}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74231e0} {node.kubernetes.io/unreachable Exists NoExecute 0x7423200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.896: INFO: Pod "nginx-deployment-7b8c6f4498-q56wp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q56wp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-q56wp,UID:fcf21173-79ef-47e1-b7eb-b3feb0612bbe,ResourceVersion:315644,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x74232c0 0x74232c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423330} {node.kubernetes.io/unreachable Exists NoExecute 0x7423350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.123,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://249da25017cf2d82f8d7aecc34ceed34af797319495b7f56b84851f5c1aae40f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.897: INFO: Pod "nginx-deployment-7b8c6f4498-qdmdz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qdmdz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-qdmdz,UID:79598947-f5e7-4b29-abe2-2345ef42b6d0,ResourceVersion:315835,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423420 0x7423421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423490} {node.kubernetes.io/unreachable Exists NoExecute 0x74234b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.898: INFO: Pod "nginx-deployment-7b8c6f4498-qz5vz" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qz5vz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-qz5vz,UID:e534ef88-5d87-4b7a-98f4-ca131d258174,ResourceVersion:315701,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423570 0x7423571}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74235e0} {node.kubernetes.io/unreachable Exists NoExecute 0x7423600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.97,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://05fdc4491101f464e623938a2fe1e29a0e2004aadbc52aa959af4c841fdcd039}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.899: INFO: Pod "nginx-deployment-7b8c6f4498-rgtsn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rgtsn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-rgtsn,UID:c45e9734-03ea-4276-9557-22acff36959c,ResourceVersion:315675,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x74236d0 0x74236d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423740} {node.kubernetes.io/unreachable Exists NoExecute 0x7423760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.94,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://82d92b309ab6793b8bcb3810197f47f65af48e9a914c1823e9c19fef8684e2b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.900: INFO: Pod "nginx-deployment-7b8c6f4498-svdtl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-svdtl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-svdtl,UID:f66c90cd-747d-4f8c-ae66-ebc9a53e506c,ResourceVersion:315873,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423830 0x7423831}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74238a0} {node.kubernetes.io/unreachable Exists NoExecute 0x74238c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-09-25 02:22:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.901: INFO: Pod "nginx-deployment-7b8c6f4498-t2qwz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t2qwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-t2qwz,UID:550104ff-2924-4eb3-bbbf-e10514fd7f5d,ResourceVersion:315870,Generation:0,CreationTimestamp:2020-09-25 02:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423980 0x7423981}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x74239f0} {node.kubernetes.io/unreachable Exists NoExecute 0x7423a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:46 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 02:22:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.902: INFO: Pod "nginx-deployment-7b8c6f4498-vvfb7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vvfb7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-vvfb7,UID:430c54d6-c05c-455b-8317-6956420b5ca2,ResourceVersion:315683,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423ad0 0x7423ad1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423b40} {node.kubernetes.io/unreachable Exists NoExecute 0x7423b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.126,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://179009670eaaa3021988d7d7d3b917e9025edbc126a4130a8f0e8eb44c537f01}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Sep 25 02:22:49.903: INFO: Pod "nginx-deployment-7b8c6f4498-xtv88" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xtv88,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4169,SelfLink:/api/v1/namespaces/deployment-4169/pods/nginx-deployment-7b8c6f4498-xtv88,UID:afc46b40-93b9-4f20-ad60-a4e49d491bf6,ResourceVersion:315686,Generation:0,CreationTimestamp:2020-09-25 02:22:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 63952d69-ef4b-4bb6-92eb-50f654675278 0x7423c30 0x7423c31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xtbws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xtbws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xtbws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x7423ca0} {node.kubernetes.io/unreachable Exists NoExecute 0x7423cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:22:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.124,StartTime:2020-09-25 02:22:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-09-25 02:22:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5219b57610f77b9847bf31f830f2c8afb0e3ff3837a0eb798374b582cc8cf8d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:22:49.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready
STEP: Destroying namespace "deployment-4169" for this suite.
Sep 25 02:23:13.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:23:14.087: INFO: namespace deployment-4169 deletion completed in 24.176681453s
• [SLOW TEST:42.271 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:23:14.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4771c217-9c43-48f0-9f74-04bb17411c67
STEP: Creating a pod to test consume configMaps
Sep 25 02:23:14.193: INFO: Waiting up to 5m0s for pod
"pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769" in namespace "projected-9009" to be "success or failure"
Sep 25 02:23:14.226: INFO: Pod "pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769": Phase="Pending", Reason="", readiness=false. Elapsed: 31.944706ms
Sep 25 02:23:16.233: INFO: Pod "pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039145017s
Sep 25 02:23:18.243: INFO: Pod "pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049093179s
STEP: Saw pod success
Sep 25 02:23:18.243: INFO: Pod "pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769" satisfied condition "success or failure"
Sep 25 02:23:18.249: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769 container projected-configmap-volume-test:
STEP: delete the pod
Sep 25 02:23:18.283: INFO: Waiting for pod pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769 to disappear
Sep 25 02:23:18.318: INFO: Pod pod-projected-configmaps-1b5f8225-2924-49d2-a1f5-eacc7d516769 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:23:18.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9009" for this suite.
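The pod dumps earlier in this run show volume file modes serialized as `DefaultMode:*420`: the Kubernetes API stores modes as decimal integers, so octal 0644 appears as 420 on the wire, and the defaultMode test above exercises exactly this field. A minimal sketch of the kind of manifest such a test builds — plain Python dicts standing in for the API objects, with illustrative names rather than the test's generated ones:

```python
# Sketch (not the e2e framework's actual code) of a pod that mounts a
# ConfigMap through a projected volume with an explicit defaultMode.
def projected_configmap_pod(configmap_name, default_mode=0o644):
    """Build a pod manifest mounting a ConfigMap via a projected volume."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps-example"},  # illustrative
        "spec": {
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "docker.io/library/busybox:1.29",  # assumed image
                "volumeMounts": [{"name": "projected-vol",
                                  "mountPath": "/etc/projected"}],
            }],
            "volumes": [{
                "name": "projected-vol",
                "projected": {
                    # Octal 0644 round-trips through JSON as decimal 420,
                    # which is why the logs print DefaultMode:*420.
                    "defaultMode": default_mode,
                    "sources": [{"configMap": {"name": configmap_name}}],
                },
            }],
            "restartPolicy": "Never",
        },
    }

pod = projected_configmap_pod("projected-configmap-test-volume")
mode = pod["spec"]["volumes"][0]["projected"]["defaultMode"]
print(mode)       # 420  (decimal form seen in the logs)
print(oct(mode))  # 0o644 (octal form users write in manifests)
```

The conformance test then reads the mounted file's permissions from inside the container to confirm the mode was applied.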
Sep 25 02:23:24.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:23:24.495: INFO: namespace projected-9009 deletion completed in 6.168926004s
• [SLOW TEST:10.405 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:23:24.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 25 02:23:24.574: INFO: Waiting up to 5m0s for pod "pod-a1d78305-e443-4958-88d8-7dba0cc28460" in namespace "emptydir-6114" to be "success or failure"
Sep 25 02:23:24.590: INFO: Pod "pod-a1d78305-e443-4958-88d8-7dba0cc28460": Phase="Pending", Reason="",
readiness=false. Elapsed: 16.444103ms
Sep 25 02:23:26.597: INFO: Pod "pod-a1d78305-e443-4958-88d8-7dba0cc28460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023179839s
Sep 25 02:23:28.605: INFO: Pod "pod-a1d78305-e443-4958-88d8-7dba0cc28460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030600256s
STEP: Saw pod success
Sep 25 02:23:28.605: INFO: Pod "pod-a1d78305-e443-4958-88d8-7dba0cc28460" satisfied condition "success or failure"
Sep 25 02:23:28.610: INFO: Trying to get logs from node iruya-worker pod pod-a1d78305-e443-4958-88d8-7dba0cc28460 container test-container:
STEP: delete the pod
Sep 25 02:23:28.714: INFO: Waiting for pod pod-a1d78305-e443-4958-88d8-7dba0cc28460 to disappear
Sep 25 02:23:28.718: INFO: Pod pod-a1d78305-e443-4958-88d8-7dba0cc28460 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:23:28.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6114" for this suite.
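The `(non-root,0666,tmpfs)` case above exercises an emptyDir volume backed by memory (`medium: Memory`, i.e. tmpfs) that must be usable with mode 0666 by a container running as a non-root user. A hedged sketch of such a pod spec, again as plain Python dicts with illustrative names (the real test uses its own mounttest image and flags):

```python
# Sketch of an emptyDir/tmpfs test pod; names, image, and command are
# illustrative assumptions, not the conformance test's exact values.
def emptydir_tmpfs_pod(uid=1000):
    """Pod spec: a memory-backed emptyDir mounted into a non-root container
    that would stat the mount to verify its file mode."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-example"},
        "spec": {
            "securityContext": {"runAsUser": uid},  # non-root part of the test
            "containers": [{
                "name": "test-container",
                "image": "docker.io/library/busybox:1.29",  # assumed image
                # Simplified stand-in for the mounttest binary's checks.
                "command": ["sh", "-c", "stat -c %a /test-volume"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            "volumes": [{
                "name": "test-volume",
                "emptyDir": {"medium": "Memory"},  # "tmpfs" in the test name
            }],
            "restartPolicy": "Never",
        },
    }

# Mode 0666 is 438 in the API's decimal serialization of file modes.
print(oct(438))  # 0o666
```

As with the configMap case, the pod runs to completion and the framework inspects its logs, which is why the log shows the Pending → Succeeded phase transitions.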
Sep 25 02:23:34.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:23:34.918: INFO: namespace emptydir-6114 deletion completed in 6.191694195s
• [SLOW TEST:10.422 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:23:34.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-7b6c8ff8-1fcd-4448-b397-5d1f2dd1571d
STEP: Creating a pod to test consume secrets
Sep 25 02:23:35.451: INFO: Waiting up to 5m0s for pod "pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0" in namespace "secrets-7059" to be "success or failure"
Sep 25 02:23:35.464: INFO: Pod
"pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.660124ms
Sep 25 02:23:37.471: INFO: Pod "pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019778311s
Sep 25 02:23:39.478: INFO: Pod "pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026196435s
STEP: Saw pod success
Sep 25 02:23:39.478: INFO: Pod "pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0" satisfied condition "success or failure"
Sep 25 02:23:39.483: INFO: Trying to get logs from node iruya-worker pod pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0 container secret-volume-test:
STEP: delete the pod
Sep 25 02:23:39.775: INFO: Waiting for pod pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0 to disappear
Sep 25 02:23:39.786: INFO: Pod pod-secrets-d1c51994-df76-48fa-9d71-a389b9dc55a0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:23:39.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7059" for this suite.
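"With mappings" in the Secrets test above refers to the volume's `items` field, which projects individual secret keys to chosen file paths instead of exposing every key under its own name. A sketch of such a volume definition under the same assumptions as before (dict-based manifest; the key and path names are illustrative, not necessarily the test's):

```python
# Sketch of a secret volume that remaps one key to a custom path; the
# key/path names here are illustrative assumptions.
def secret_volume_with_mappings(secret_name):
    """Secret volume whose `items` list maps the key `data-1` to the file
    `new-path-data-1`; only listed keys are projected into the mount."""
    return {
        "name": "secret-volume",
        "secret": {
            "secretName": secret_name,
            "items": [
                {"key": "data-1", "path": "new-path-data-1"},
            ],
        },
    }

vol = secret_volume_with_mappings("secret-test-map")
print([item["path"] for item in vol["secret"]["items"]])  # ['new-path-data-1']
```

The test's `secret-volume-test` container then reads the remapped file and the framework compares its contents (and mode) against the secret's data.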
Sep 25 02:23:45.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:23:45.924: INFO: namespace secrets-7059 deletion completed in 6.130600816s
• [SLOW TEST:11.004 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime
  blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:23:45.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set Sep 25 02:23:50.149: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:23:50.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7371" for this suite. Sep 25 02:23:56.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:23:56.389: INFO: namespace container-runtime-7371 deletion completed in 6.194389564s • [SLOW TEST:10.464 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:23:56.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:24:01.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8356" for this suite. 
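Editor's note: the "orphan pod is adopted" step above relies on the ReplicationController adoption rule: a matching label selector plus the absence of an existing controller owner reference. A simplified model of that rule (a sketch, not controller-manager code; pod objects are plain dicts here):

```python
def selector_matches(selector, labels):
    """An equality-based selector matches when every key/value pair is present."""
    return all(labels.get(k) == v for k, v in selector.items())

def adoptable(pod, selector):
    """An RC adopts a pod when the selector matches and no controller owns it yet."""
    has_controller = any(ref.get("controller")
                         for ref in pod.get("ownerReferences", []))
    return selector_matches(selector, pod.get("labels", {})) and not has_controller
```

In the test above, the pre-created `pod-adoption` pod has the matching `name` label and no owner, so the new RC takes ownership of it instead of creating a replacement.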
Sep 25 02:24:23.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:24:23.795: INFO: namespace replication-controller-8356 deletion completed in 22.262319554s • [SLOW TEST:27.405 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:24:23.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-bpv4 STEP: Creating a pod to test atomic-volume-subpath Sep 25 02:24:23.905: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-secret-bpv4" in namespace "subpath-6720" to be "success or failure" Sep 25 02:24:23.924: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.1309ms Sep 25 02:24:25.930: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024642806s Sep 25 02:24:27.938: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 4.033157068s Sep 25 02:24:29.948: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 6.042830036s Sep 25 02:24:31.971: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 8.06645628s Sep 25 02:24:33.978: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 10.072981894s Sep 25 02:24:35.986: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 12.080788961s Sep 25 02:24:37.994: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 14.089111556s Sep 25 02:24:40.002: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 16.09686636s Sep 25 02:24:42.009: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 18.104516987s Sep 25 02:24:44.015: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 20.110592243s Sep 25 02:24:46.022: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Running", Reason="", readiness=true. Elapsed: 22.117327559s Sep 25 02:24:48.029: INFO: Pod "pod-subpath-test-secret-bpv4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.124472526s STEP: Saw pod success Sep 25 02:24:48.030: INFO: Pod "pod-subpath-test-secret-bpv4" satisfied condition "success or failure" Sep 25 02:24:48.064: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-bpv4 container test-container-subpath-secret-bpv4: STEP: delete the pod Sep 25 02:24:48.107: INFO: Waiting for pod pod-subpath-test-secret-bpv4 to disappear Sep 25 02:24:48.111: INFO: Pod pod-subpath-test-secret-bpv4 no longer exists STEP: Deleting pod pod-subpath-test-secret-bpv4 Sep 25 02:24:48.111: INFO: Deleting pod "pod-subpath-test-secret-bpv4" in namespace "subpath-6720" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:24:48.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6720" for this suite. Sep 25 02:24:54.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:24:54.279: INFO: namespace subpath-6720 deletion completed in 6.155073521s • [SLOW TEST:30.481 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:24:54.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Sep 25 02:24:54.357: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix699249380/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:24:55.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4093" for this suite. 
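Editor's note: `kubectl proxy --unix-socket=/path` serves the API over an AF_UNIX stream socket instead of TCP. The sketch below (Linux assumed; a stub handler stands in for the proxy, and this is not kubectl's implementation) shows the mechanism: ordinary HTTP exchanged over a unix socket, as in the "retrieving proxy /api/ output" step above.

```python
import http.client
import socket
from http.server import BaseHTTPRequestHandler
from socketserver import UnixStreamServer

class ProxyStubHandler(BaseHTTPRequestHandler):
    """Toy stand-in for the proxy side; answers every GET with a fixed body."""
    def do_GET(self):
        body = b'{"kind":"APIVersions"}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def get_over_unix_socket(socket_path, url="/api/"):
    """GET `url` from an HTTP server listening on a unix socket."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", url)
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()
```

Pairing `UnixStreamServer(path, ProxyStubHandler)` with `get_over_unix_socket(path)` exercises the same request path the test drives through `/tmp/kubectl-proxy-unix…/test`.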
Sep 25 02:25:01.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:25:01.428: INFO: namespace kubectl-4093 deletion completed in 6.152939438s • [SLOW TEST:7.146 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:25:01.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Sep 25 02:25:01.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c" in namespace "downward-api-5639" to be "success or failure" Sep 25 02:25:01.507: INFO: Pod "downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.169737ms Sep 25 02:25:03.514: INFO: Pod "downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019731963s Sep 25 02:25:05.522: INFO: Pod "downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027478039s STEP: Saw pod success Sep 25 02:25:05.523: INFO: Pod "downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c" satisfied condition "success or failure" Sep 25 02:25:05.528: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c container client-container: STEP: delete the pod Sep 25 02:25:05.553: INFO: Waiting for pod downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c to disappear Sep 25 02:25:05.558: INFO: Pod downwardapi-volume-6c71c571-72eb-4456-83eb-52ed936cce0c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:25:05.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5639" for this suite. 
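Editor's note: the `Elapsed:`, "Waiting up to 5m0s", and "deletion completed in …" values above use Go's duration formatting (`time.Duration.String`). When post-processing these logs, a small parser converts them to seconds. A sketch (simplified: positive durations only, common unit suffixes):

```python
import re

_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0,
          "ms": 1e-3, "us": 1e-6, "µs": 1e-6, "ns": 1e-9}
# One number+unit token; 'm(?!s)' keeps minutes from swallowing the 'm' of 'ms'.
_TOKEN = re.compile(r"(\d+(?:\.\d+)?)(h|m(?!s)|s|ms|us|µs|ns)")

def parse_go_duration(text):
    """Convert a Go-style duration ('5m0s', '12.169737ms') to float seconds."""
    total, pos = 0.0, 0
    for m in _TOKEN.finditer(text):
        if m.start() != pos:          # tokens must be contiguous
            raise ValueError(f"bad duration: {text!r}")
        total += float(m.group(1)) * _UNITS[m.group(2)]
        pos = m.end()
    if pos != len(text) or pos == 0:  # trailing junk or no tokens at all
        raise ValueError(f"bad duration: {text!r}")
    return total
```

For example, the `5m0s` pod-wait budget parses to 300 seconds, and `12.169737ms` to about 0.0122 seconds.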
Sep 25 02:25:11.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:25:11.795: INFO: namespace downward-api-5639 deletion completed in 6.209339687s • [SLOW TEST:10.366 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:25:11.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-726j STEP: Creating a pod to test 
atomic-volume-subpath Sep 25 02:25:12.053: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-726j" in namespace "subpath-7477" to be "success or failure" Sep 25 02:25:12.079: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Pending", Reason="", readiness=false. Elapsed: 26.37776ms Sep 25 02:25:14.092: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039120259s Sep 25 02:25:16.122: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 4.069000211s Sep 25 02:25:18.157: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 6.104687819s Sep 25 02:25:20.165: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 8.112031164s Sep 25 02:25:22.172: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 10.119487308s Sep 25 02:25:24.180: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 12.127045018s Sep 25 02:25:26.187: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 14.13467868s Sep 25 02:25:28.195: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 16.142130904s Sep 25 02:25:30.202: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 18.149352514s Sep 25 02:25:32.209: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 20.156721089s Sep 25 02:25:34.217: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Running", Reason="", readiness=true. Elapsed: 22.16404429s Sep 25 02:25:36.235: INFO: Pod "pod-subpath-test-downwardapi-726j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.182389443s STEP: Saw pod success Sep 25 02:25:36.235: INFO: Pod "pod-subpath-test-downwardapi-726j" satisfied condition "success or failure" Sep 25 02:25:36.241: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-726j container test-container-subpath-downwardapi-726j: STEP: delete the pod Sep 25 02:25:36.263: INFO: Waiting for pod pod-subpath-test-downwardapi-726j to disappear Sep 25 02:25:36.267: INFO: Pod pod-subpath-test-downwardapi-726j no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-726j Sep 25 02:25:36.267: INFO: Deleting pod "pod-subpath-test-downwardapi-726j" in namespace "subpath-7477" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:25:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7477" for this suite. Sep 25 02:25:42.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:25:42.487: INFO: namespace subpath-7477 deletion completed in 6.207241228s • [SLOW TEST:30.689 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification 
[NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:25:42.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Sep 25 02:25:47.176: INFO: Successfully updated pod "annotationupdate4bd66092-adc1-43b2-a00c-6ecf76a44f09" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:25:49.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5405" for this suite. 
Sep 25 02:26:11.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:26:11.378: INFO: namespace projected-5405 deletion completed in 22.156824435s • [SLOW TEST:28.887 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:26:11.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Sep 25 02:26:11.474: INFO: Waiting up to 5m0s for pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5" in namespace "downward-api-9665" to be "success or failure" Sep 25 02:26:11.528: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5": Phase="Pending", 
Reason="", readiness=false. Elapsed: 53.68236ms Sep 25 02:26:13.535: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060704545s Sep 25 02:26:15.542: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067752982s Sep 25 02:26:17.549: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5": Phase="Running", Reason="", readiness=true. Elapsed: 6.074950721s Sep 25 02:26:19.556: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082312032s STEP: Saw pod success Sep 25 02:26:19.557: INFO: Pod "downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5" satisfied condition "success or failure" Sep 25 02:26:19.563: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5 container dapi-container: STEP: delete the pod Sep 25 02:26:19.611: INFO: Waiting for pod downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5 to disappear Sep 25 02:26:19.627: INFO: Pod downward-api-4b551891-2149-4c70-b470-c2f4e7c992d5 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:26:19.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9665" for this suite. 
Sep 25 02:26:25.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:26:25.779: INFO: namespace downward-api-9665 deletion completed in 6.144841361s • [SLOW TEST:14.398 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:26:25.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Sep 25 02:26:32.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-240" for this suite. Sep 25 02:27:10.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 25 02:27:10.650: INFO: namespace kubelet-test-240 deletion completed in 38.19222952s • [SLOW TEST:44.871 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Sep 25 02:27:10.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod 
communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6246
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 25 02:27:10.719: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 25 02:27:32.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.150:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6246 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:27:32.947: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:27:33.057380 7 log.go:172] (0x780ab60) (0x780ac40) Create stream
I0925 02:27:33.057890 7 log.go:172] (0x780ab60) (0x780ac40) Stream added, broadcasting: 1
I0925 02:27:33.081836 7 log.go:172] (0x780ab60) Reply frame received for 1
I0925 02:27:33.082381 7 log.go:172] (0x780ab60) (0x7748000) Create stream
I0925 02:27:33.082468 7 log.go:172] (0x780ab60) (0x7748000) Stream added, broadcasting: 3
I0925 02:27:33.084816 7 log.go:172] (0x780ab60) Reply frame received for 3
I0925 02:27:33.085521 7 log.go:172] (0x780ab60) (0x6a78af0) Create stream
I0925 02:27:33.085718 7 log.go:172] (0x780ab60) (0x6a78af0) Stream added, broadcasting: 5
I0925 02:27:33.087460 7 log.go:172] (0x780ab60) Reply frame received for 5
I0925 02:27:33.198624 7 log.go:172] (0x780ab60) Data frame received for 5
I0925 02:27:33.199000 7 log.go:172] (0x6a78af0) (5) Data frame handling
I0925 02:27:33.199313 7 log.go:172] (0x780ab60) Data frame received for 3
I0925 02:27:33.199581 7 log.go:172] (0x7748000) (3) Data frame handling
I0925 02:27:33.199851 7 log.go:172] (0x780ab60) Data frame received for 1
I0925 02:27:33.200013 7 log.go:172] (0x780ac40) (1) Data frame handling
I0925 02:27:33.202422 7 log.go:172] (0x7748000) (3) Data frame sent
I0925 02:27:33.202751 7 log.go:172] (0x780ac40) (1) Data frame sent
I0925 02:27:33.203127 7 log.go:172] (0x780ab60) Data frame received for 3
I0925 02:27:33.203809 7 log.go:172] (0x780ab60) (0x780ac40) Stream removed, broadcasting: 1
I0925 02:27:33.204463 7 log.go:172] (0x7748000) (3) Data frame handling
I0925 02:27:33.205002 7 log.go:172] (0x780ab60) Go away received
I0925 02:27:33.207635 7 log.go:172] (0x780ab60) (0x780ac40) Stream removed, broadcasting: 1
I0925 02:27:33.208058 7 log.go:172] (0x780ab60) (0x7748000) Stream removed, broadcasting: 3
I0925 02:27:33.208344 7 log.go:172] (0x780ab60) (0x6a78af0) Stream removed, broadcasting: 5
Sep 25 02:27:33.209: INFO: Found all expected endpoints: [netserver-0]
Sep 25 02:27:33.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.120:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6246 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:27:33.222: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:27:33.321321 7 log.go:172] (0x9a5a770) (0x9a5a930) Create stream
I0925 02:27:33.321469 7 log.go:172] (0x9a5a770) (0x9a5a930) Stream added, broadcasting: 1
I0925 02:27:33.326731 7 log.go:172] (0x9a5a770) Reply frame received for 1
I0925 02:27:33.327100 7 log.go:172] (0x9a5a770) (0x77481c0) Create stream
I0925 02:27:33.327304 7 log.go:172] (0x9a5a770) (0x77481c0) Stream added, broadcasting: 3
I0925 02:27:33.329343 7 log.go:172] (0x9a5a770) Reply frame received for 3
I0925 02:27:33.329477 7 log.go:172] (0x9a5a770) (0x9a5aa10) Create stream
I0925 02:27:33.329551 7 log.go:172] (0x9a5a770) (0x9a5aa10) Stream added, broadcasting: 5
I0925 02:27:33.331004 7 log.go:172] (0x9a5a770) Reply frame received for 5
I0925 02:27:33.411024 7 log.go:172] (0x9a5a770) Data frame received for 3
I0925 02:27:33.411238 7 log.go:172] (0x77481c0) (3) Data frame handling
I0925 02:27:33.411402 7 log.go:172] (0x9a5a770) Data frame received for 5
I0925 02:27:33.411647 7 log.go:172] (0x9a5aa10) (5) Data frame handling
I0925 02:27:33.411858 7 log.go:172] (0x77481c0) (3) Data frame sent
I0925 02:27:33.412016 7 log.go:172] (0x9a5a770) Data frame received for 3
I0925 02:27:33.412259 7 log.go:172] (0x77481c0) (3) Data frame handling
I0925 02:27:33.413626 7 log.go:172] (0x9a5a770) Data frame received for 1
I0925 02:27:33.413810 7 log.go:172] (0x9a5a930) (1) Data frame handling
I0925 02:27:33.414000 7 log.go:172] (0x9a5a930) (1) Data frame sent
I0925 02:27:33.414190 7 log.go:172] (0x9a5a770) (0x9a5a930) Stream removed, broadcasting: 1
I0925 02:27:33.415058 7 log.go:172] (0x9a5a770) (0x9a5a930) Stream removed, broadcasting: 1
I0925 02:27:33.415239 7 log.go:172] (0x9a5a770) (0x77481c0) Stream removed, broadcasting: 3
I0925 02:27:33.415374 7 log.go:172] (0x9a5a770) (0x9a5aa10) Stream removed, broadcasting: 5
Sep 25 02:27:33.415: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:27:33.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0925 02:27:33.417153 7 log.go:172] (0x9a5a770) Go away received
STEP: Destroying namespace "pod-network-test-6246" for this suite.
Sep 25 02:27:57.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:27:57.618: INFO: namespace pod-network-test-6246 deletion completed in 24.192520096s
• [SLOW TEST:46.962 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:27:57.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep 25 02:27:57.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2165'
Sep 25 02:28:01.399: INFO: stderr: ""
Sep 25 02:28:01.399: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 25 02:28:02.457: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:02.458: INFO: Found 0 / 1
Sep 25 02:28:03.409: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:03.410: INFO: Found 0 / 1
Sep 25 02:28:04.410: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:04.410: INFO: Found 0 / 1
Sep 25 02:28:05.408: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:05.409: INFO: Found 1 / 1
Sep 25 02:28:05.409: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Sep 25 02:28:05.415: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:05.415: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 25 02:28:05.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-zwmpw --namespace=kubectl-2165 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 25 02:28:06.509: INFO: stderr: ""
Sep 25 02:28:06.510: INFO: stdout: "pod/redis-master-zwmpw patched\n"
STEP: checking annotations
Sep 25 02:28:06.516: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 02:28:06.516: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:28:06.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2165" for this suite.
Sep 25 02:28:28.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:28:28.723: INFO: namespace kubectl-2165 deletion completed in 22.199848005s
• [SLOW TEST:31.102 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:28:28.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9716
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 25 02:28:28.798: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 25 02:28:54.957: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.122:8080/dial?request=hostName&protocol=http&host=10.244.2.121&port=8080&tries=1'] Namespace:pod-network-test-9716 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:28:54.957: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:28:55.051827 7 log.go:172] (0x712a700) (0x712a770) Create stream
I0925 02:28:55.051966 7 log.go:172] (0x712a700) (0x712a770) Stream added, broadcasting: 1
I0925 02:28:55.055360 7 log.go:172] (0x712a700) Reply frame received for 1
I0925 02:28:55.055541 7 log.go:172] (0x712a700) (0x7e12000) Create stream
I0925 02:28:55.055628 7 log.go:172] (0x712a700) (0x7e12000) Stream added, broadcasting: 3
I0925 02:28:55.057071 7 log.go:172] (0x712a700) Reply frame received for 3
I0925 02:28:55.057202 7 log.go:172] (0x712a700) (0x7e121c0) Create stream
I0925 02:28:55.057276 7 log.go:172] (0x712a700) (0x7e121c0) Stream added, broadcasting: 5
I0925 02:28:55.058847 7 log.go:172] (0x712a700) Reply frame received for 5
I0925 02:28:55.135457 7 log.go:172] (0x712a700) Data frame received for 3
I0925 02:28:55.135650 7 log.go:172] (0x7e12000) (3) Data frame handling
I0925 02:28:55.135807 7 log.go:172] (0x7e12000) (3) Data frame sent
I0925 02:28:55.135957 7 log.go:172] (0x712a700) Data frame received for 5
I0925 02:28:55.136134 7 log.go:172] (0x7e121c0) (5) Data frame handling
I0925 02:28:55.136246 7 log.go:172] (0x712a700) Data frame received for 3
I0925 02:28:55.136415 7 log.go:172] (0x7e12000) (3) Data frame handling
I0925 02:28:55.137934 7 log.go:172] (0x712a700) Data frame received for 1
I0925 02:28:55.138122 7 log.go:172] (0x712a770) (1) Data frame handling
I0925 02:28:55.138350 7 log.go:172] (0x712a770) (1) Data frame sent
I0925 02:28:55.138517 7 log.go:172] (0x712a700) (0x712a770) Stream removed, broadcasting: 1
I0925 02:28:55.138703 7 log.go:172] (0x712a700) Go away received
I0925 02:28:55.139044 7 log.go:172] (0x712a700) (0x712a770) Stream removed, broadcasting: 1
I0925 02:28:55.139194 7 log.go:172] (0x712a700) (0x7e12000) Stream removed, broadcasting: 3
I0925 02:28:55.139294 7 log.go:172] (0x712a700) (0x7e121c0) Stream removed, broadcasting: 5
Sep 25 02:28:55.139: INFO: Waiting for endpoints: map[]
Sep 25 02:28:55.145: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.122:8080/dial?request=hostName&protocol=http&host=10.244.1.154&port=8080&tries=1'] Namespace:pod-network-test-9716 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:28:55.145: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:28:55.246934 7 log.go:172] (0x712aa80) (0x712ab60) Create stream
I0925 02:28:55.247141 7 log.go:172] (0x712aa80) (0x712ab60) Stream added, broadcasting: 1
I0925 02:28:55.253680 7 log.go:172] (0x712aa80) Reply frame received for 1
I0925 02:28:55.253954 7 log.go:172] (0x712aa80) (0x918ed20) Create stream
I0925 02:28:55.254122 7 log.go:172] (0x712aa80) (0x918ed20) Stream added, broadcasting: 3
I0925 02:28:55.258236 7 log.go:172] (0x712aa80) Reply frame received for 3
I0925 02:28:55.258361 7 log.go:172] (0x712aa80) (0x712abd0) Create stream
I0925 02:28:55.258434 7 log.go:172] (0x712aa80) (0x712abd0) Stream added, broadcasting: 5
I0925 02:28:55.259739 7 log.go:172] (0x712aa80) Reply frame received for 5
I0925 02:28:55.340670 7 log.go:172] (0x712aa80) Data frame received for 3
I0925 02:28:55.340922 7 log.go:172] (0x918ed20) (3) Data frame handling
I0925 02:28:55.341027 7 log.go:172] (0x712aa80) Data frame received for 5
I0925 02:28:55.341162 7 log.go:172] (0x712abd0) (5) Data frame handling
I0925 02:28:55.341332 7 log.go:172] (0x918ed20) (3) Data frame sent
I0925 02:28:55.341418 7 log.go:172] (0x712aa80) Data frame received for 3
I0925 02:28:55.341491 7 log.go:172] (0x918ed20) (3) Data frame handling
I0925 02:28:55.343012 7 log.go:172] (0x712aa80) Data frame received for 1
I0925 02:28:55.343101 7 log.go:172] (0x712ab60) (1) Data frame handling
I0925 02:28:55.343208 7 log.go:172] (0x712ab60) (1) Data frame sent
I0925 02:28:55.343327 7 log.go:172] (0x712aa80) (0x712ab60) Stream removed, broadcasting: 1
I0925 02:28:55.343781 7 log.go:172] (0x712aa80) Go away received
I0925 02:28:55.343973 7 log.go:172] (0x712aa80) (0x712ab60) Stream removed, broadcasting: 1
I0925 02:28:55.344094 7 log.go:172] (0x712aa80) (0x918ed20) Stream removed, broadcasting: 3
I0925 02:28:55.344204 7 log.go:172] (0x712aa80) (0x712abd0) Stream removed, broadcasting: 5
Sep 25 02:28:55.344: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:28:55.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9716" for this suite.
Sep 25 02:29:17.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:29:17.531: INFO: namespace pod-network-test-9716 deletion completed in 22.176446433s
• [SLOW TEST:48.806 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:29:17.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 25 02:29:21.656: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:29:21.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7566" for this suite.
Sep 25 02:29:27.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:29:27.865: INFO: namespace container-runtime-7566 deletion completed in 6.165765324s
• [SLOW TEST:10.333 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:29:27.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 02:29:28.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a" in namespace "projected-4320" to be "success or failure"
Sep 25 02:29:28.458: INFO: Pod "downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a": Phase="Pending", Reason="", readiness=false. Elapsed: 264.838636ms
Sep 25 02:29:30.466: INFO: Pod "downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272819481s
Sep 25 02:29:32.473: INFO: Pod "downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.279953543s
STEP: Saw pod success
Sep 25 02:29:32.474: INFO: Pod "downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a" satisfied condition "success or failure"
Sep 25 02:29:32.529: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a container client-container:
STEP: delete the pod
Sep 25 02:29:32.578: INFO: Waiting for pod downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a to disappear
Sep 25 02:29:32.582: INFO: Pod downwardapi-volume-1f0dacd9-11be-41f9-97d6-1743f717236a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:29:32.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4320" for this suite.
Sep 25 02:29:38.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:29:38.753: INFO: namespace projected-4320 deletion completed in 6.160467686s
• [SLOW TEST:10.887 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:29:38.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-c4e4a374-2319-4faf-b563-b20d7167f6a8
STEP: Creating a pod to test consume configMaps
Sep 25 02:29:38.867: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e" in namespace "configmap-2081" to be "success or failure"
Sep 25 02:29:38.893: INFO: Pod "pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.49268ms
Sep 25 02:29:40.899: INFO: Pod "pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031682838s
Sep 25 02:29:42.906: INFO: Pod "pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038266564s
STEP: Saw pod success
Sep 25 02:29:42.906: INFO: Pod "pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e" satisfied condition "success or failure"
Sep 25 02:29:42.910: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e container configmap-volume-test:
STEP: delete the pod
Sep 25 02:29:42.937: INFO: Waiting for pod pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e to disappear
Sep 25 02:29:42.941: INFO: Pod pod-configmaps-dd2c3c5a-a2b5-49ba-b1cc-35b2ddbd244e no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:29:42.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2081" for this suite.
Sep 25 02:29:48.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:29:49.097: INFO: namespace configmap-2081 deletion completed in 6.147014896s
• [SLOW TEST:10.342 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:29:49.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce in namespace container-probe-1143
Sep 25 02:29:53.196: INFO: Started pod liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce in namespace container-probe-1143
STEP: checking the pod's current state and verifying that restartCount is present
Sep 25 02:29:53.201: INFO: Initial restart count of pod liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is 0
Sep 25 02:30:09.314: INFO: Restart count of pod container-probe-1143/liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is now 1 (16.111909279s elapsed)
Sep 25 02:30:35.406: INFO: Restart count of pod container-probe-1143/liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is now 2 (42.204607351s elapsed)
Sep 25 02:30:51.493: INFO: Restart count of pod container-probe-1143/liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is now 3 (58.291295183s elapsed)
Sep 25 02:31:09.577: INFO: Restart count of pod container-probe-1143/liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is now 4 (1m16.375878942s elapsed)
Sep 25 02:32:09.992: INFO: Restart count of pod container-probe-1143/liveness-2ae24b1c-f288-48e8-a892-e4d5ae99b1ce is now 5 (2m16.790273696s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:32:10.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1143" for this suite.
Sep 25 02:32:16.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:32:16.214: INFO: namespace container-probe-1143 deletion completed in 6.154201341s
• [SLOW TEST:147.114 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:32:16.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Sep 25 02:32:16.341: INFO: Waiting up to 5m0s for pod "var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea" in namespace "var-expansion-5685" to be "success or failure"
Sep 25 02:32:16.350: INFO: Pod "var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.760443ms
Sep 25 02:32:18.360: INFO: Pod "var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017996357s
Sep 25 02:32:20.366: INFO: Pod "var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024457723s
STEP: Saw pod success
Sep 25 02:32:20.366: INFO: Pod "var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea" satisfied condition "success or failure"
Sep 25 02:32:20.370: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea container dapi-container:
STEP: delete the pod
Sep 25 02:32:20.412: INFO: Waiting for pod var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea to disappear
Sep 25 02:32:20.417: INFO: Pod var-expansion-2ec97add-2345-4818-862a-3a503d6f88ea no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:32:20.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5685" for this suite.
Sep 25 02:32:26.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:32:26.599: INFO: namespace var-expansion-5685 deletion completed in 6.174944335s
• [SLOW TEST:10.382 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:32:26.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40977c41-116f-4658-826e-b851ddfb979f
STEP: Creating a pod to test consume configMaps
Sep 25 02:32:26.718: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74" in namespace "projected-3489" to be "success or failure"
Sep 25 02:32:26.724: INFO: Pod "pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74": Phase="Pending", Reason="", readiness=false. Elapsed: 5.6494ms
Sep 25 02:32:28.732: INFO: Pod "pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013026618s
Sep 25 02:32:30.739: INFO: Pod "pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020386066s
STEP: Saw pod success
Sep 25 02:32:30.739: INFO: Pod "pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74" satisfied condition "success or failure"
Sep 25 02:32:30.745: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74 container projected-configmap-volume-test:
STEP: delete the pod
Sep 25 02:32:30.767: INFO: Waiting for pod pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74 to disappear
Sep 25 02:32:30.771: INFO: Pod pod-projected-configmaps-9ceb1195-f0a7-41e8-ac64-aac3ede79a74 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:32:30.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3489" for this suite.
Sep 25 02:32:36.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:32:36.958: INFO: namespace projected-3489 deletion completed in 6.178593466s

• [SLOW TEST:10.358 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:32:36.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-97646822-d987-4f13-a642-3fc761fd0954
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:32:37.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9160" for this suite.
Sep 25 02:32:43.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:32:43.442: INFO: namespace configmap-9160 deletion completed in 6.389721374s

• [SLOW TEST:6.481 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:32:43.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 25 02:32:43.824: INFO: Waiting up to 5m0s for pod "downward-api-e29860b8-55ce-47c1-be70-90a615125fad" in namespace "downward-api-3755" to be "success or failure"
Sep 25 02:32:43.864: INFO: Pod "downward-api-e29860b8-55ce-47c1-be70-90a615125fad": Phase="Pending", Reason="", readiness=false. Elapsed: 40.131728ms
Sep 25 02:32:45.871: INFO: Pod "downward-api-e29860b8-55ce-47c1-be70-90a615125fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046903751s
Sep 25 02:32:47.879: INFO: Pod "downward-api-e29860b8-55ce-47c1-be70-90a615125fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055061887s
STEP: Saw pod success
Sep 25 02:32:47.879: INFO: Pod "downward-api-e29860b8-55ce-47c1-be70-90a615125fad" satisfied condition "success or failure"
Sep 25 02:32:47.916: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e29860b8-55ce-47c1-be70-90a615125fad container dapi-container: 
STEP: delete the pod
Sep 25 02:32:47.940: INFO: Waiting for pod downward-api-e29860b8-55ce-47c1-be70-90a615125fad to disappear
Sep 25 02:32:48.049: INFO: Pod downward-api-e29860b8-55ce-47c1-be70-90a615125fad no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:32:48.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3755" for this suite.
Sep 25 02:32:54.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:32:54.230: INFO: namespace downward-api-3755 deletion completed in 6.171594476s

• [SLOW TEST:10.782 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:32:54.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-c1e839b3-de5d-4fe7-9383-5074fbd238a8 in namespace container-probe-8531
Sep 25 02:32:58.385: INFO: Started pod liveness-c1e839b3-de5d-4fe7-9383-5074fbd238a8 in namespace container-probe-8531
STEP: checking the pod's current state and verifying that restartCount is present
Sep 25 02:32:58.390: INFO: Initial restart count of pod liveness-c1e839b3-de5d-4fe7-9383-5074fbd238a8 is 0
Sep 25 02:33:14.482: INFO: Restart count of pod container-probe-8531/liveness-c1e839b3-de5d-4fe7-9383-5074fbd238a8 is now 1 (16.091995839s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:33:14.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8531" for this suite.
Sep 25 02:33:20.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:33:20.674: INFO: namespace container-probe-8531 deletion completed in 6.162457125s

• [SLOW TEST:26.441 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:33:20.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-8699/secret-test-4896d8c1-159c-4a00-8954-8660b064e9b6
STEP: Creating a pod to test consume secrets
Sep 25 02:33:20.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23" in namespace "secrets-8699" to be "success or failure"
Sep 25 02:33:20.834: INFO: Pod "pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23": Phase="Pending", Reason="", readiness=false. Elapsed: 47.929679ms
Sep 25 02:33:22.907: INFO: Pod "pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120744604s
Sep 25 02:33:24.914: INFO: Pod "pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12792109s
STEP: Saw pod success
Sep 25 02:33:24.914: INFO: Pod "pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23" satisfied condition "success or failure"
Sep 25 02:33:24.920: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23 container env-test: 
STEP: delete the pod
Sep 25 02:33:24.956: INFO: Waiting for pod pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23 to disappear
Sep 25 02:33:24.988: INFO: Pod pod-configmaps-eec1824a-4a57-4ef0-a5e7-3f6c34f09e23 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:33:24.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8699" for this suite.
Sep 25 02:33:31.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:33:31.190: INFO: namespace secrets-8699 deletion completed in 6.191934545s

• [SLOW TEST:10.514 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:33:31.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 25 02:33:31.260: INFO: Waiting up to 5m0s for pod "downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2" in namespace "downward-api-1438" to be "success or failure"
Sep 25 02:33:31.270: INFO: Pod "downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.781895ms
Sep 25 02:33:33.278: INFO: Pod "downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017956544s
Sep 25 02:33:35.290: INFO: Pod "downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029570095s
STEP: Saw pod success
Sep 25 02:33:35.290: INFO: Pod "downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2" satisfied condition "success or failure"
Sep 25 02:33:35.295: INFO: Trying to get logs from node iruya-worker pod downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2 container dapi-container: 
STEP: delete the pod
Sep 25 02:33:35.330: INFO: Waiting for pod downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2 to disappear
Sep 25 02:33:35.360: INFO: Pod downward-api-4c6c443c-50fa-4ae6-a69a-a874863bdba2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:33:35.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1438" for this suite.
Sep 25 02:33:41.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:33:41.531: INFO: namespace downward-api-1438 deletion completed in 6.161710016s

• [SLOW TEST:10.338 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:33:41.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0925 02:34:21.977970       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 02:34:21.979: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:34:21.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7645" for this suite.
Sep 25 02:34:30.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:34:30.202: INFO: namespace gc-7645 deletion completed in 8.215961383s

• [SLOW TEST:48.670 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:34:30.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1797def9-f9c7-471d-9e71-551fde5380a4
STEP: Creating a pod to test consume configMaps
Sep 25 02:34:30.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31" in namespace "configmap-6568" to be "success or failure"
Sep 25 02:34:30.882: INFO: Pod "pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31": Phase="Pending", Reason="", readiness=false. Elapsed: 20.918657ms
Sep 25 02:34:32.889: INFO: Pod "pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027852021s
Sep 25 02:34:34.896: INFO: Pod "pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034954227s
STEP: Saw pod success
Sep 25 02:34:34.896: INFO: Pod "pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31" satisfied condition "success or failure"
Sep 25 02:34:34.902: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31 container configmap-volume-test: 
STEP: delete the pod
Sep 25 02:34:34.946: INFO: Waiting for pod pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31 to disappear
Sep 25 02:34:34.971: INFO: Pod pod-configmaps-c09469a2-70a3-4997-b3dc-21b9d970ad31 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:34:34.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6568" for this suite.
Sep 25 02:34:41.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:34:41.181: INFO: namespace configmap-6568 deletion completed in 6.20260567s

• [SLOW TEST:10.977 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:34:41.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Sep 25 02:34:41.781: INFO: created pod pod-service-account-defaultsa
Sep 25 02:34:41.782: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Sep 25 02:34:41.798: INFO: created pod pod-service-account-mountsa
Sep 25 02:34:41.798: INFO: pod pod-service-account-mountsa service account token volume mount: true
Sep 25 02:34:41.810: INFO: created pod pod-service-account-nomountsa
Sep 25 02:34:41.810: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Sep 25 02:34:41.884: INFO: created pod pod-service-account-defaultsa-mountspec
Sep 25 02:34:41.884: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Sep 25 02:34:41.918: INFO: created pod pod-service-account-mountsa-mountspec
Sep 25 02:34:41.918: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Sep 25 02:34:41.943: INFO: created pod pod-service-account-nomountsa-mountspec
Sep 25 02:34:41.943: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Sep 25 02:34:42.021: INFO: created pod pod-service-account-defaultsa-nomountspec
Sep 25 02:34:42.022: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Sep 25 02:34:42.039: INFO: created pod pod-service-account-mountsa-nomountspec
Sep 25 02:34:42.039: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Sep 25 02:34:42.075: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 25 02:34:42.075: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:34:42.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3670" for this suite.
Sep 25 02:35:10.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:35:10.347: INFO: namespace svcaccounts-3670 deletion completed in 28.224825364s

• [SLOW TEST:29.164 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:35:10.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:35:10.458: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 25 02:35:16.812: INFO: Waiting up to 5m0s for pod "pod-743a424c-30e6-4357-832b-8c206c23c28f" in namespace "emptydir-8971" to be "success or failure"
Sep 25 02:35:16.820: INFO: Pod "pod-743a424c-30e6-4357-832b-8c206c23c28f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.48109ms
Sep 25 02:35:18.827: INFO: Pod "pod-743a424c-30e6-4357-832b-8c206c23c28f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014563609s
Sep 25 02:35:20.834: INFO: Pod "pod-743a424c-30e6-4357-832b-8c206c23c28f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021815071s
STEP: Saw pod success
Sep 25 02:35:20.835: INFO: Pod "pod-743a424c-30e6-4357-832b-8c206c23c28f" satisfied condition "success or failure"
Sep 25 02:35:20.839: INFO: Trying to get logs from node iruya-worker pod pod-743a424c-30e6-4357-832b-8c206c23c28f container test-container: 
STEP: delete the pod
Sep 25 02:35:20.894: INFO: Waiting for pod pod-743a424c-30e6-4357-832b-8c206c23c28f to disappear
Sep 25 02:35:20.955: INFO: Pod pod-743a424c-30e6-4357-832b-8c206c23c28f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:35:20.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8971" for this suite.
Sep 25 02:35:27.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:35:27.146: INFO: namespace emptydir-8971 deletion completed in 6.180840034s

• [SLOW TEST:10.448 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:35:27.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-cc1c05e8-0a85-4d08-be66-3c1be1bcec3d
STEP: Creating secret with name s-test-opt-upd-e178e9a2-d795-43a8-b7c9-e9b643cfa43a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cc1c05e8-0a85-4d08-be66-3c1be1bcec3d
STEP: Updating secret s-test-opt-upd-e178e9a2-d795-43a8-b7c9-e9b643cfa43a
STEP: Creating secret with name s-test-opt-create-63db6810-428a-48fc-aa8a-a5a74fa4d0f6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:35:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7287" for this suite.
Sep 25 02:35:47.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:35:47.619: INFO: namespace projected-7287 deletion completed in 12.199530229s

• [SLOW TEST:20.468 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:35:47.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 02:35:47.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823" in namespace "downward-api-7690" to be "success or failure"
Sep 25 02:35:47.743: INFO: Pod "downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823": Phase="Pending", Reason="", readiness=false. Elapsed: 22.00834ms
Sep 25 02:35:49.751: INFO: Pod "downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029533845s
Sep 25 02:35:51.759: INFO: Pod "downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037276006s
STEP: Saw pod success
Sep 25 02:35:51.759: INFO: Pod "downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823" satisfied condition "success or failure"
Sep 25 02:35:51.764: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823 container client-container: 
STEP: delete the pod
Sep 25 02:35:51.787: INFO: Waiting for pod downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823 to disappear
Sep 25 02:35:51.792: INFO: Pod downwardapi-volume-a271dbb0-bd6c-4bb9-b2ad-c9a49e9cb823 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:35:51.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7690" for this suite.
Sep 25 02:35:57.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:35:58.008: INFO: namespace downward-api-7690 deletion completed in 6.18449748s

• [SLOW TEST:10.388 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:35:58.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9207
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9207 to expose endpoints map[]
Sep 25 02:35:58.214: INFO: successfully validated that service multi-endpoint-test in namespace services-9207 exposes endpoints map[] (39.358484ms elapsed)
STEP: Creating pod pod1 in namespace services-9207
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9207 to expose endpoints map[pod1:[100]]
Sep 25 02:36:01.323: INFO: successfully validated that service multi-endpoint-test in namespace services-9207 exposes endpoints map[pod1:[100]] (3.064085745s elapsed)
STEP: Creating pod pod2 in namespace services-9207
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9207 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 25 02:36:05.419: INFO: successfully validated that service multi-endpoint-test in namespace services-9207 exposes endpoints map[pod1:[100] pod2:[101]] (4.087998984s elapsed)
STEP: Deleting pod pod1 in namespace services-9207
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9207 to expose endpoints map[pod2:[101]]
Sep 25 02:36:05.456: INFO: successfully validated that service multi-endpoint-test in namespace services-9207 exposes endpoints map[pod2:[101]] (28.136179ms elapsed)
STEP: Deleting pod pod2 in namespace services-9207
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9207 to expose endpoints map[]
Sep 25 02:36:05.467: INFO: successfully validated that service multi-endpoint-test in namespace services-9207 exposes endpoints map[] (4.685842ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:36:05.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9207" for this suite.
Sep 25 02:36:27.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:36:27.911: INFO: namespace services-9207 deletion completed in 22.338483741s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:29.902 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:36:27.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:36:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-16" for this suite.
Sep 25 02:36:34.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:36:34.205: INFO: namespace kubelet-test-16 deletion completed in 6.172051028s

• [SLOW TEST:6.290 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:36:34.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 25 02:36:38.813: INFO: Successfully updated pod "labelsupdate16ea1693-e189-46d1-9b4a-9609c6d8fb58"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:36:40.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-629" for this suite.
Sep 25 02:37:02.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:37:03.132: INFO: namespace projected-629 deletion completed in 22.242233577s

• [SLOW TEST:28.923 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:37:03.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4001
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 25 02:37:03.213: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 25 02:37:25.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.151:8080/dial?request=hostName&protocol=udp&host=10.244.2.150&port=8081&tries=1'] Namespace:pod-network-test-4001 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:37:25.434: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:37:25.536712       7 log.go:172] (0x7749880) (0x7749a40) Create stream
I0925 02:37:25.536940       7 log.go:172] (0x7749880) (0x7749a40) Stream added, broadcasting: 1
I0925 02:37:25.541741       7 log.go:172] (0x7749880) Reply frame received for 1
I0925 02:37:25.542070       7 log.go:172] (0x7749880) (0x9a5a230) Create stream
I0925 02:37:25.542205       7 log.go:172] (0x7749880) (0x9a5a230) Stream added, broadcasting: 3
I0925 02:37:25.544403       7 log.go:172] (0x7749880) Reply frame received for 3
I0925 02:37:25.544567       7 log.go:172] (0x7749880) (0x7749c00) Create stream
I0925 02:37:25.544675       7 log.go:172] (0x7749880) (0x7749c00) Stream added, broadcasting: 5
I0925 02:37:25.546144       7 log.go:172] (0x7749880) Reply frame received for 5
I0925 02:37:25.621708       7 log.go:172] (0x7749880) Data frame received for 3
I0925 02:37:25.621887       7 log.go:172] (0x9a5a230) (3) Data frame handling
I0925 02:37:25.622025       7 log.go:172] (0x9a5a230) (3) Data frame sent
I0925 02:37:25.623022       7 log.go:172] (0x7749880) Data frame received for 5
I0925 02:37:25.623219       7 log.go:172] (0x7749c00) (5) Data frame handling
I0925 02:37:25.623356       7 log.go:172] (0x7749880) Data frame received for 3
I0925 02:37:25.623510       7 log.go:172] (0x9a5a230) (3) Data frame handling
I0925 02:37:25.624403       7 log.go:172] (0x7749880) Data frame received for 1
I0925 02:37:25.624509       7 log.go:172] (0x7749a40) (1) Data frame handling
I0925 02:37:25.624615       7 log.go:172] (0x7749a40) (1) Data frame sent
I0925 02:37:25.624750       7 log.go:172] (0x7749880) (0x7749a40) Stream removed, broadcasting: 1
I0925 02:37:25.625145       7 log.go:172] (0x7749880) Go away received
I0925 02:37:25.625328       7 log.go:172] (0x7749880) (0x7749a40) Stream removed, broadcasting: 1
I0925 02:37:25.625474       7 log.go:172] (0x7749880) (0x9a5a230) Stream removed, broadcasting: 3
I0925 02:37:25.625718       7 log.go:172] (0x7749880) (0x7749c00) Stream removed, broadcasting: 5
Sep 25 02:37:25.625: INFO: Waiting for endpoints: map[]
Sep 25 02:37:25.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.151:8080/dial?request=hostName&protocol=udp&host=10.244.1.180&port=8081&tries=1'] Namespace:pod-network-test-4001 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 02:37:25.631: INFO: >>> kubeConfig: /root/.kube/config
I0925 02:37:25.755430       7 log.go:172] (0x9a5b570) (0x9a5b650) Create stream
I0925 02:37:25.755635       7 log.go:172] (0x9a5b570) (0x9a5b650) Stream added, broadcasting: 1
I0925 02:37:25.761454       7 log.go:172] (0x9a5b570) Reply frame received for 1
I0925 02:37:25.761695       7 log.go:172] (0x9a5b570) (0x9a5b730) Create stream
I0925 02:37:25.761789       7 log.go:172] (0x9a5b570) (0x9a5b730) Stream added, broadcasting: 3
I0925 02:37:25.763409       7 log.go:172] (0x9a5b570) Reply frame received for 3
I0925 02:37:25.763554       7 log.go:172] (0x9a5b570) (0x6bd6a80) Create stream
I0925 02:37:25.763642       7 log.go:172] (0x9a5b570) (0x6bd6a80) Stream added, broadcasting: 5
I0925 02:37:25.765156       7 log.go:172] (0x9a5b570) Reply frame received for 5
I0925 02:37:25.826601       7 log.go:172] (0x9a5b570) Data frame received for 3
I0925 02:37:25.826794       7 log.go:172] (0x9a5b730) (3) Data frame handling
I0925 02:37:25.826873       7 log.go:172] (0x9a5b570) Data frame received for 5
I0925 02:37:25.826983       7 log.go:172] (0x6bd6a80) (5) Data frame handling
I0925 02:37:25.827283       7 log.go:172] (0x9a5b730) (3) Data frame sent
I0925 02:37:25.827700       7 log.go:172] (0x9a5b570) Data frame received for 3
I0925 02:37:25.827897       7 log.go:172] (0x9a5b730) (3) Data frame handling
I0925 02:37:25.828623       7 log.go:172] (0x9a5b570) Data frame received for 1
I0925 02:37:25.828768       7 log.go:172] (0x9a5b650) (1) Data frame handling
I0925 02:37:25.829032       7 log.go:172] (0x9a5b650) (1) Data frame sent
I0925 02:37:25.829190       7 log.go:172] (0x9a5b570) (0x9a5b650) Stream removed, broadcasting: 1
I0925 02:37:25.829379       7 log.go:172] (0x9a5b570) Go away received
I0925 02:37:25.829588       7 log.go:172] (0x9a5b570) (0x9a5b650) Stream removed, broadcasting: 1
I0925 02:37:25.829676       7 log.go:172] (0x9a5b570) (0x9a5b730) Stream removed, broadcasting: 3
I0925 02:37:25.829745       7 log.go:172] (0x9a5b570) (0x6bd6a80) Stream removed, broadcasting: 5
Sep 25 02:37:25.829: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:37:25.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4001" for this suite.
Sep 25 02:37:47.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:37:48.000: INFO: namespace pod-network-test-4001 deletion completed in 22.161424429s

• [SLOW TEST:44.864 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:37:48.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Sep 25 02:37:48.122: INFO: observed the pod list
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:38:05.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9676" for this suite.
Sep 25 02:38:11.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:38:11.563: INFO: namespace pods-9676 deletion completed in 6.153324869s

• [SLOW TEST:23.562 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:38:11.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 02:38:11.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7350'
Sep 25 02:38:15.267: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 25 02:38:15.268: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Sep 25 02:38:15.289: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-nk2zh]
Sep 25 02:38:15.289: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-nk2zh" in namespace "kubectl-7350" to be "running and ready"
Sep 25 02:38:15.294: INFO: Pod "e2e-test-nginx-rc-nk2zh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835281ms
Sep 25 02:38:17.333: INFO: Pod "e2e-test-nginx-rc-nk2zh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043251794s
Sep 25 02:38:19.339: INFO: Pod "e2e-test-nginx-rc-nk2zh": Phase="Running", Reason="", readiness=true. Elapsed: 4.049525722s
Sep 25 02:38:19.339: INFO: Pod "e2e-test-nginx-rc-nk2zh" satisfied condition "running and ready"
Sep 25 02:38:19.340: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-nk2zh]
Sep 25 02:38:19.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7350'
Sep 25 02:38:20.542: INFO: stderr: ""
Sep 25 02:38:20.542: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Sep 25 02:38:20.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7350'
Sep 25 02:38:21.656: INFO: stderr: ""
Sep 25 02:38:21.656: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:38:21.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7350" for this suite.
Sep 25 02:38:27.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:38:27.841: INFO: namespace kubectl-7350 deletion completed in 6.175358799s

• [SLOW TEST:16.276 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:38:27.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:38:27.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:38:32.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7062" for this suite.
Sep 25 02:39:18.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:39:18.259: INFO: namespace pods-7062 deletion completed in 46.174528918s

• [SLOW TEST:50.418 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:39:18.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 25 02:39:22.895: INFO: Successfully updated pod "pod-update-activedeadlineseconds-925a2967-8063-4931-a194-9f53101bb037"
Sep 25 02:39:22.896: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-925a2967-8063-4931-a194-9f53101bb037" in namespace "pods-2949" to be "terminated due to deadline exceeded"
Sep 25 02:39:22.900: INFO: Pod "pod-update-activedeadlineseconds-925a2967-8063-4931-a194-9f53101bb037": Phase="Running", Reason="", readiness=true. Elapsed: 3.63989ms
Sep 25 02:39:24.907: INFO: Pod "pod-update-activedeadlineseconds-925a2967-8063-4931-a194-9f53101bb037": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011277823s
Sep 25 02:39:24.908: INFO: Pod "pod-update-activedeadlineseconds-925a2967-8063-4931-a194-9f53101bb037" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:39:24.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2949" for this suite.
Sep 25 02:39:30.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:39:31.070: INFO: namespace pods-2949 deletion completed in 6.15118336s

• [SLOW TEST:12.810 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
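
The test above patches `activeDeadlineSeconds` onto an already-running pod and then waits for the kubelet to fail it with reason `DeadlineExceeded`, which is exactly what the two status polls show (`Phase="Running"` followed by `Phase="Failed", Reason="DeadlineExceeded"`). A minimal manifest illustrating the field (a hypothetical sketch — name, image, and deadline value are illustrative, not taken from the test) might look like:

```yaml
# Hypothetical pod: once activeDeadlineSeconds elapses relative to
# StartTime, the kubelet kills the pod and sets Phase=Failed with
# Reason=DeadlineExceeded. The e2e test sets this field via an update
# on a running pod rather than at creation time.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example
spec:
  activeDeadlineSeconds: 5   # pod may be active for at most 5 seconds
  restartPolicy: Never
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```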
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:39:31.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:39:31.189: INFO: Pod name rollover-pod: Found 0 pods out of 1
Sep 25 02:39:36.203: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 25 02:39:36.204: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Sep 25 02:39:38.211: INFO: Creating deployment "test-rollover-deployment"
Sep 25 02:39:38.235: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Sep 25 02:39:40.292: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Sep 25 02:39:40.303: INFO: Ensure that both replica sets have 1 created replica
Sep 25 02:39:40.312: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Sep 25 02:39:40.323: INFO: Updating deployment test-rollover-deployment
Sep 25 02:39:40.323: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Sep 25 02:39:42.394: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Sep 25 02:39:42.409: INFO: Make sure deployment "test-rollover-deployment" is complete
Sep 25 02:39:42.420: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:42.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598380, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:44.438: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:44.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598383, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:46.439: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:46.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598383, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:48.439: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:48.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598383, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:50.439: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:50.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598383, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:52.438: INFO: all replica sets need to contain the pod-template-hash label
Sep 25 02:39:52.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598383, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736598378, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 02:39:54.438: INFO: 
Sep 25 02:39:54.438: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 25 02:39:54.453: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6486,SelfLink:/apis/apps/v1/namespaces/deployment-6486/deployments/test-rollover-deployment,UID:f5bfa48e-7c45-4454-946a-1a671546bb01,ResourceVersion:320459,Generation:2,CreationTimestamp:2020-09-25 02:39:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-25 02:39:38 +0000 UTC 2020-09-25 02:39:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-25 02:39:53 +0000 UTC 2020-09-25 02:39:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 25 02:39:54.460: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6486,SelfLink:/apis/apps/v1/namespaces/deployment-6486/replicasets/test-rollover-deployment-854595fc44,UID:39e94e13-9db2-4a82-ad23-19ab95d8b5b2,ResourceVersion:320448,Generation:2,CreationTimestamp:2020-09-25 02:39:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f5bfa48e-7c45-4454-946a-1a671546bb01 0x903e3c7 0x903e3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 25 02:39:54.460: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Sep 25 02:39:54.461: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6486,SelfLink:/apis/apps/v1/namespaces/deployment-6486/replicasets/test-rollover-controller,UID:0ffdec38-a522-4691-98d7-f3a20f4772fa,ResourceVersion:320458,Generation:2,CreationTimestamp:2020-09-25 02:39:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f5bfa48e-7c45-4454-946a-1a671546bb01 0x903e2d7 0x903e2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 25 02:39:54.463: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6486,SelfLink:/apis/apps/v1/namespaces/deployment-6486/replicasets/test-rollover-deployment-9b8b997cf,UID:70068315-3340-4ed9-9ef3-ae1aa1b963da,ResourceVersion:320407,Generation:2,CreationTimestamp:2020-09-25 02:39:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f5bfa48e-7c45-4454-946a-1a671546bb01 0x903e490 0x903e491}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 25 02:39:54.470: INFO: Pod "test-rollover-deployment-854595fc44-57wdk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-57wdk,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6486,SelfLink:/api/v1/namespaces/deployment-6486/pods/test-rollover-deployment-854595fc44-57wdk,UID:498322b4-823c-44f2-b5bc-2a78a4367ae1,ResourceVersion:320425,Generation:0,CreationTimestamp:2020-09-25 02:39:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 39e94e13-9db2-4a82-ad23-19ab95d8b5b2 0x903f1d7 0x903f1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qzksb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qzksb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qzksb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x903f250} {node.kubernetes.io/unreachable Exists  NoExecute 0x903f270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:39:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:39:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:39:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:39:40 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.157,StartTime:2020-09-25 02:39:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-25 02:39:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://df306e81401a7762e788dad7db6dd24267df9b1eec4716b0ddc8503f65d20358}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:39:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6486" for this suite.
Sep 25 02:40:00.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:40:00.689: INFO: namespace deployment-6486 deletion completed in 6.210080936s

• [SLOW TEST:29.617 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
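
The deployment dump above shows the spec the rollover test relies on: `RollingUpdate` with `MaxUnavailable:0`/`MaxSurge:1` and `MinReadySeconds:10`. A hedged reconstruction of the equivalent manifest (field values taken from the logged spec; layout is mine):

```yaml
# maxUnavailable: 0 with maxSurge: 1 means an old pod is only scaled
# down after a new, ready replacement exists, and minReadySeconds: 10
# delays a new pod counting as available for 10s -- which is why the
# log polls AvailableReplicas:1/UnavailableReplicas:1 for ~15 seconds
# before the rollover completes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```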
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:40:00.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:40:00.867: INFO: Creating ReplicaSet my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05
Sep 25 02:40:00.884: INFO: Pod name my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05: Found 0 pods out of 1
Sep 25 02:40:05.892: INFO: Pod name my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05: Found 1 pods out of 1
Sep 25 02:40:05.892: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05" is running
Sep 25 02:40:05.898: INFO: Pod "my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05-9wrjx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 02:40:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 02:40:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 02:40:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 02:40:00 +0000 UTC Reason: Message:}])
Sep 25 02:40:05.899: INFO: Trying to dial the pod
Sep 25 02:40:10.917: INFO: Controller my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05: Got expected result from replica 1 [my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05-9wrjx]: "my-hostname-basic-acf74599-ffb4-4e3c-8205-92ad4d618a05-9wrjx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:40:10.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7725" for this suite.
Sep 25 02:40:16.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:40:17.086: INFO: namespace replicaset-7725 deletion completed in 6.160212685s

• [SLOW TEST:16.393 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
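
The ReplicaSet test above dials each replica and checks that it answers with its own pod name. A hypothetical manifest for the kind of ReplicaSet it creates (the generated name is shortened and the serve-hostname image tag is an assumption, not read from this log):

```yaml
# Each replica runs a public "serve hostname"-style image that replies
# with the pod's hostname over HTTP; the test then dials every replica
# and compares the response to the pod name, as in the log line
# "Got expected result from replica 1".
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
```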
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:40:17.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6b1e482b-434f-4153-8635-c50b70d372b1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:40:23.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5646" for this suite.
Sep 25 02:40:45.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:40:45.492: INFO: namespace configmap-5646 deletion completed in 22.188801125s

• [SLOW TEST:28.405 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
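
The "binary data should be reflected in volume" test mounts a ConfigMap carrying both `data` and `binaryData` keys and checks that the binary bytes survive the round trip. A minimal illustrative ConfigMap (key names and bytes are hypothetical):

```yaml
# binaryData values are base64-encoded in the manifest but are
# projected into the volume as raw bytes, which is the behavior the
# test verifies alongside the ordinary text key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example
data:
  data-1: value-1
binaryData:
  dump.bin: AQIDBA==   # base64 for the bytes 0x01 0x02 0x03 0x04
```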
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:40:45.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8e10678a-61ab-4388-b7f3-619a85412176
STEP: Creating a pod to test consume configMaps
Sep 25 02:40:45.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b" in namespace "configmap-4612" to be "success or failure"
Sep 25 02:40:45.644: INFO: Pod "pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.837279ms
Sep 25 02:40:47.650: INFO: Pod "pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033280033s
Sep 25 02:40:49.657: INFO: Pod "pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040238385s
STEP: Saw pod success
Sep 25 02:40:49.658: INFO: Pod "pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b" satisfied condition "success or failure"
Sep 25 02:40:49.663: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b container configmap-volume-test: 
STEP: delete the pod
Sep 25 02:40:49.741: INFO: Waiting for pod pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b to disappear
Sep 25 02:40:49.790: INFO: Pod pod-configmaps-b9d462bb-e4ea-4757-ab9f-8b24d371ad9b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:40:49.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4612" for this suite.
Sep 25 02:40:55.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:40:56.081: INFO: namespace configmap-4612 deletion completed in 6.221662573s

• [SLOW TEST:10.586 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:40:56.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 25 02:41:00.233: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:41:00.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2009" for this suite.
Sep 25 02:41:06.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:41:06.485: INFO: namespace container-runtime-2009 deletion completed in 6.181105758s

• [SLOW TEST:10.403 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:41:06.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Sep 25 02:41:10.621: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Sep 25 02:41:16.756: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:41:16.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8469" for this suite.
Sep 25 02:41:22.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:41:22.918: INFO: namespace pods-8469 deletion completed in 6.149299215s

• [SLOW TEST:16.430 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:41:22.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-dda49998-8a09-41b0-9c79-2f046f6ae164
STEP: Creating a pod to test consume secrets
Sep 25 02:41:23.049: INFO: Waiting up to 5m0s for pod "pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59" in namespace "secrets-1756" to be "success or failure"
Sep 25 02:41:23.072: INFO: Pod "pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59": Phase="Pending", Reason="", readiness=false. Elapsed: 22.850926ms
Sep 25 02:41:25.078: INFO: Pod "pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029194156s
Sep 25 02:41:27.085: INFO: Pod "pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036128524s
STEP: Saw pod success
Sep 25 02:41:27.085: INFO: Pod "pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59" satisfied condition "success or failure"
Sep 25 02:41:27.090: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59 container secret-volume-test: 
STEP: delete the pod
Sep 25 02:41:27.133: INFO: Waiting for pod pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59 to disappear
Sep 25 02:41:27.162: INFO: Pod pod-secrets-05d7f93a-0102-4603-8d96-4552d83c2f59 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:41:27.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1756" for this suite.
Sep 25 02:41:33.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:41:33.385: INFO: namespace secrets-1756 deletion completed in 6.200212718s

• [SLOW TEST:10.466 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:41:33.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-bec59362-dbb1-4fdb-a423-097a425db842
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:41:33.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7598" for this suite.
Sep 25 02:41:39.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:41:39.690: INFO: namespace secrets-7598 deletion completed in 6.175743307s

• [SLOW TEST:6.304 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:41:39.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c196468b-d525-47db-b0c5-7e5b3c91c85b
STEP: Creating a pod to test consume configMaps
Sep 25 02:41:39.786: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc" in namespace "projected-3760" to be "success or failure"
Sep 25 02:41:39.826: INFO: Pod "pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.192012ms
Sep 25 02:41:41.834: INFO: Pod "pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047863093s
Sep 25 02:41:43.841: INFO: Pod "pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055540962s
STEP: Saw pod success
Sep 25 02:41:43.842: INFO: Pod "pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc" satisfied condition "success or failure"
Sep 25 02:41:43.847: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc container projected-configmap-volume-test: 
STEP: delete the pod
Sep 25 02:41:43.870: INFO: Waiting for pod pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc to disappear
Sep 25 02:41:43.874: INFO: Pod pod-projected-configmaps-28d4abab-93f7-4665-8fec-c834dd8310cc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:41:43.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3760" for this suite.
Sep 25 02:41:49.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:41:50.092: INFO: namespace projected-3760 deletion completed in 6.210095073s

• [SLOW TEST:10.397 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:41:50.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9555d668-4807-4b04-9ee8-f55cc590a19c
STEP: Creating configMap with name cm-test-opt-upd-180f7131-c767-4654-ac5a-e20a3f5f0e3d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9555d668-4807-4b04-9ee8-f55cc590a19c
STEP: Updating configmap cm-test-opt-upd-180f7131-c767-4654-ac5a-e20a3f5f0e3d
STEP: Creating configMap with name cm-test-opt-create-5b8dff46-86f5-4bdf-80c9-6ca38f901a17
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:43:20.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1649" for this suite.
Sep 25 02:43:42.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:43:43.093: INFO: namespace projected-1649 deletion completed in 22.188423579s

• [SLOW TEST:112.999 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:43:43.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 25 02:43:51.328: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 25 02:43:51.366: INFO: Pod pod-with-poststart-http-hook still exists
Sep 25 02:43:53.366: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 25 02:43:53.372: INFO: Pod pod-with-poststart-http-hook still exists
Sep 25 02:43:55.366: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Sep 25 02:43:55.373: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:43:55.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9118" for this suite.
Sep 25 02:44:17.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:44:17.577: INFO: namespace container-lifecycle-hook-9118 deletion completed in 22.194334969s

• [SLOW TEST:34.483 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:44:17.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 25 02:44:17.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:17.747: INFO: Number of nodes with available pods: 0
Sep 25 02:44:17.747: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:18.760: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:18.767: INFO: Number of nodes with available pods: 0
Sep 25 02:44:18.767: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:19.821: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:19.827: INFO: Number of nodes with available pods: 0
Sep 25 02:44:19.827: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:20.851: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:20.856: INFO: Number of nodes with available pods: 0
Sep 25 02:44:20.856: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:21.759: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:21.780: INFO: Number of nodes with available pods: 2
Sep 25 02:44:21.780: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Sep 25 02:44:21.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:21.851: INFO: Number of nodes with available pods: 1
Sep 25 02:44:21.851: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:22.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:22.868: INFO: Number of nodes with available pods: 1
Sep 25 02:44:22.868: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:23.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:23.869: INFO: Number of nodes with available pods: 1
Sep 25 02:44:23.870: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:24.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:24.866: INFO: Number of nodes with available pods: 1
Sep 25 02:44:24.866: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:25.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:25.868: INFO: Number of nodes with available pods: 1
Sep 25 02:44:25.868: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:26.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:26.873: INFO: Number of nodes with available pods: 1
Sep 25 02:44:26.873: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:27.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:27.870: INFO: Number of nodes with available pods: 1
Sep 25 02:44:27.871: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:44:28.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:44:28.871: INFO: Number of nodes with available pods: 2
Sep 25 02:44:28.871: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4766, will wait for the garbage collector to delete the pods
Sep 25 02:44:28.944: INFO: Deleting DaemonSet.extensions daemon-set took: 10.023349ms
Sep 25 02:44:29.247: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.68256ms
Sep 25 02:44:35.454: INFO: Number of nodes with available pods: 0
Sep 25 02:44:35.454: INFO: Number of running nodes: 0, number of available pods: 0
Sep 25 02:44:35.463: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4766/daemonsets","resourceVersion":"321382"},"items":null}

Sep 25 02:44:35.467: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4766/pods","resourceVersion":"321382"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:44:35.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4766" for this suite.
Sep 25 02:44:41.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:44:41.669: INFO: namespace daemonsets-4766 deletion completed in 6.174715277s

• [SLOW TEST:24.089 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:44:41.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-a0598370-50b6-407e-ba7d-7a3b313c5b41
STEP: Creating a pod to test consume configMaps
Sep 25 02:44:41.763: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7" in namespace "projected-5656" to be "success or failure"
Sep 25 02:44:41.776: INFO: Pod "pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.047907ms
Sep 25 02:44:43.783: INFO: Pod "pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019334783s
Sep 25 02:44:45.791: INFO: Pod "pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027068487s
STEP: Saw pod success
Sep 25 02:44:45.791: INFO: Pod "pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7" satisfied condition "success or failure"
Sep 25 02:44:45.797: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 25 02:44:45.828: INFO: Waiting for pod pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7 to disappear
Sep 25 02:44:45.834: INFO: Pod pod-projected-configmaps-531c531d-f767-4de4-9bdd-6352133443d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:44:45.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5656" for this suite.
Sep 25 02:44:51.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:44:51.989: INFO: namespace projected-5656 deletion completed in 6.148183178s

• [SLOW TEST:10.317 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:44:51.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 25 02:44:52.069: INFO: Waiting up to 5m0s for pod "pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1" in namespace "emptydir-7247" to be "success or failure"
Sep 25 02:44:52.087: INFO: Pod "pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.807846ms
Sep 25 02:44:54.138: INFO: Pod "pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068275949s
Sep 25 02:44:56.145: INFO: Pod "pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075431044s
STEP: Saw pod success
Sep 25 02:44:56.145: INFO: Pod "pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1" satisfied condition "success or failure"
Sep 25 02:44:56.150: INFO: Trying to get logs from node iruya-worker pod pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1 container test-container: 
STEP: delete the pod
Sep 25 02:44:56.182: INFO: Waiting for pod pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1 to disappear
Sep 25 02:44:56.200: INFO: Pod pod-0f18072b-c4bc-4404-9a6d-2e2e876dfbc1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:44:56.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7247" for this suite.
Sep 25 02:45:02.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:45:02.360: INFO: namespace emptydir-7247 deletion completed in 6.153291453s

• [SLOW TEST:10.369 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:45:02.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 25 02:45:02.416: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 25 02:45:02.441: INFO: Waiting for terminating namespaces to be deleted...
Sep 25 02:45:02.448: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Sep 25 02:45:02.463: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 02:45:02.464: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 25 02:45:02.464: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 02:45:02.464: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 02:45:02.464: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Sep 25 02:45:02.478: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 02:45:02.478: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 02:45:02.478: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 02:45:02.478: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1637e6bf74218eef], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:45:03.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3986" for this suite.
Sep 25 02:45:09.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:45:09.697: INFO: namespace sched-pred-3986 deletion completed in 6.15975587s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.331 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:45:09.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 25 02:45:14.351: INFO: Successfully updated pod "annotationupdate3eaa033e-46d1-405d-9d9e-87a354a53509"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:45:16.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7523" for this suite.
Sep 25 02:45:38.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:45:38.598: INFO: namespace downward-api-7523 deletion completed in 22.172798797s

• [SLOW TEST:28.896 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:45:38.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:45:38.698: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Sep 25 02:45:43.705: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 25 02:45:43.706: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 25 02:45:47.756: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4696,SelfLink:/apis/apps/v1/namespaces/deployment-4696/deployments/test-cleanup-deployment,UID:ed2986a9-3d73-46cd-b563-857c0cc390c6,ResourceVersion:321680,Generation:1,CreationTimestamp:2020-09-25 02:45:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-25 02:45:43 +0000 UTC 2020-09-25 02:45:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-25 02:45:46 +0000 UTC 2020-09-25 02:45:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 25 02:45:47.764: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4696,SelfLink:/apis/apps/v1/namespaces/deployment-4696/replicasets/test-cleanup-deployment-55bbcbc84c,UID:bd8bb2c9-7f3e-4847-ac21-773e3ee18d6f,ResourceVersion:321669,Generation:1,CreationTimestamp:2020-09-25 02:45:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ed2986a9-3d73-46cd-b563-857c0cc390c6 0x8c5d5a7 0x8c5d5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 25 02:45:47.771: INFO: Pod "test-cleanup-deployment-55bbcbc84c-hhh65" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-hhh65,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4696,SelfLink:/api/v1/namespaces/deployment-4696/pods/test-cleanup-deployment-55bbcbc84c-hhh65,UID:b2325d71-4870-4c4f-8981-90f83374c910,ResourceVersion:321668,Generation:0,CreationTimestamp:2020-09-25 02:45:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c bd8bb2c9-7f3e-4847-ac21-773e3ee18d6f 0x8c146f7 0x8c146f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gqtqp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gqtqp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gqtqp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8c14770} {node.kubernetes.io/unreachable Exists  NoExecute 0x8c14790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:45:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:45:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:45:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 02:45:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.190,StartTime:2020-09-25 02:45:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-25 02:45:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://5ad1caa0d74a14f81277b59d87ec34befed357b3067ba1bf3a35ffb78c0ae2d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:45:47.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4696" for this suite.
Sep 25 02:45:53.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:45:53.940: INFO: namespace deployment-4696 deletion completed in 6.160109875s

• [SLOW TEST:15.340 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:45:53.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 25 02:46:02.102: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:02.122: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:04.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:04.130: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:06.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:06.130: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:08.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:08.130: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:10.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:10.130: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:12.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:12.130: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:14.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:14.131: INFO: Pod pod-with-prestop-http-hook still exists
Sep 25 02:46:16.122: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Sep 25 02:46:16.129: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:46:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5008" for this suite.
Sep 25 02:46:38.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:46:38.310: INFO: namespace container-lifecycle-hook-5008 deletion completed in 22.161450327s

• [SLOW TEST:44.367 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:46:38.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-0b8cfea9-1fec-47d3-966a-4240dc36501e
STEP: Creating a pod to test consume secrets
Sep 25 02:46:38.427: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04" in namespace "projected-9018" to be "success or failure"
Sep 25 02:46:38.435: INFO: Pod "pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073758ms
Sep 25 02:46:40.474: INFO: Pod "pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046718011s
Sep 25 02:46:42.480: INFO: Pod "pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05233806s
STEP: Saw pod success
Sep 25 02:46:42.480: INFO: Pod "pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04" satisfied condition "success or failure"
Sep 25 02:46:42.484: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04 container projected-secret-volume-test: 
STEP: delete the pod
Sep 25 02:46:42.513: INFO: Waiting for pod pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04 to disappear
Sep 25 02:46:42.525: INFO: Pod pod-projected-secrets-124a00f4-7ec4-4d44-a687-1c6d91930b04 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:46:42.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9018" for this suite.
Sep 25 02:46:48.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:46:48.734: INFO: namespace projected-9018 deletion completed in 6.200492201s

• [SLOW TEST:10.423 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:46:48.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 02:46:48.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184" in namespace "projected-3825" to be "success or failure"
Sep 25 02:46:48.923: INFO: Pod "downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184": Phase="Pending", Reason="", readiness=false. Elapsed: 21.812811ms
Sep 25 02:46:50.930: INFO: Pod "downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029157467s
Sep 25 02:46:52.937: INFO: Pod "downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036708955s
STEP: Saw pod success
Sep 25 02:46:52.938: INFO: Pod "downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184" satisfied condition "success or failure"
Sep 25 02:46:52.944: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184 container client-container: 
STEP: delete the pod
Sep 25 02:46:52.971: INFO: Waiting for pod downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184 to disappear
Sep 25 02:46:52.987: INFO: Pod downwardapi-volume-ddaa8094-bb3d-4d77-849b-c09ac8339184 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:46:52.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3825" for this suite.
Sep 25 02:46:59.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:46:59.181: INFO: namespace projected-3825 deletion completed in 6.164022632s

• [SLOW TEST:10.443 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
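The test above exercises a per-item `mode` on a projected downwardAPI volume. A minimal sketch of the kind of pod spec the framework builds for it (the mode value `0o400`, image, and paths are illustrative assumptions, not taken from the log; only the container name `client-container` appears above):

```python
# Hypothetical sketch of the "should set mode on item file" pod: a projected
# downwardAPI volume whose single item carries an explicit file mode.
def downward_api_mode_pod(pod_name, mode=0o400):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "client-container",  # matches the container name in the log
                "image": "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",  # assumed
                "args": ["--file_mode=/etc/podinfo/podname"],
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {
                    "sources": [{
                        "downwardAPI": {
                            "items": [{
                                "path": "podname",
                                "fieldRef": {"fieldPath": "metadata.name"},
                                "mode": mode,  # the per-item mode under test
                            }],
                        },
                    }],
                },
            }],
        },
    }
```

The pod succeeds when the container observes exactly this mode on the projected file.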
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:46:59.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Sep 25 02:46:59.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Sep 25 02:47:00.389: INFO: stderr: ""
Sep 25 02:47:00.389: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:47:00.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-425" for this suite.
Sep 25 02:47:06.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:47:06.583: INFO: namespace kubectl-425 deletion completed in 6.18380502s

• [SLOW TEST:7.400 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
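The assertion behind this test is simply that the core group version `v1` appears as its own line in `kubectl api-versions` output. A sketch of that check against an abbreviated copy of the stdout captured above:

```python
# Abbreviated copy of the api-versions stdout from the log above; the test
# passes iff the core group version "v1" appears as its own line.
stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "v1\n"
)

def has_core_v1(api_versions_stdout):
    # Exact line match, so "apps/v1" does not count as "v1".
    return "v1" in api_versions_stdout.splitlines()
```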
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:47:06.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-80018fce-e8d7-41a5-ab2e-091b49e114e4
STEP: Creating a pod to test consume configMaps
Sep 25 02:47:06.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa" in namespace "projected-6909" to be "success or failure"
Sep 25 02:47:06.717: INFO: Pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.851919ms
Sep 25 02:47:08.725: INFO: Pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014577795s
Sep 25 02:47:10.733: INFO: Pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa": Phase="Running", Reason="", readiness=true. Elapsed: 4.022559141s
Sep 25 02:47:12.741: INFO: Pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030546633s
STEP: Saw pod success
Sep 25 02:47:12.741: INFO: Pod "pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa" satisfied condition "success or failure"
Sep 25 02:47:12.747: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa container projected-configmap-volume-test: 
STEP: delete the pod
Sep 25 02:47:12.806: INFO: Waiting for pod pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa to disappear
Sep 25 02:47:12.818: INFO: Pod pod-projected-configmaps-817af242-3b12-4ade-8303-7ae367134eaa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:47:12.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6909" for this suite.
Sep 25 02:47:18.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:47:18.995: INFO: namespace projected-6909 deletion completed in 6.17086208s

• [SLOW TEST:12.411 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
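Here "mappings" means the configMap projection carries an explicit `items` list remapping a key to a new path, and "non-root" means the pod runs with a non-zero UID. A hedged sketch of such a spec (UID, key, and path values are illustrative assumptions; the container name matches the log):

```python
# Hypothetical sketch of a projected configMap consumed as non-root with a
# key-to-path mapping. Concrete values are assumptions, not from the log.
def projected_configmap_nonroot_pod(cm_name, run_as_user=1000):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps"},
        "spec": {
            "securityContext": {"runAsUser": run_as_user},  # non-root UID
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",  # assumed
                "args": ["--file_content=/etc/projected-configmap-volume/path/to/data-2"],
                "volumeMounts": [{"name": "projected-configmap-volume",
                                  "mountPath": "/etc/projected-configmap-volume"}],
            }],
            "volumes": [{
                "name": "projected-configmap-volume",
                "projected": {"sources": [{
                    "configMap": {
                        "name": cm_name,
                        # the "mapping": key data-2 surfaces at path/to/data-2
                        "items": [{"key": "data-2", "path": "path/to/data-2"}],
                    },
                }]},
            }],
        },
    }
```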
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:47:19.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-629d0e41-a132-4319-bf5a-62800533d70f
STEP: Creating secret with name s-test-opt-upd-0e60e22b-e7be-4afe-a036-b9427bf81c99
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-629d0e41-a132-4319-bf5a-62800533d70f
STEP: Updating secret s-test-opt-upd-0e60e22b-e7be-4afe-a036-b9427bf81c99
STEP: Creating secret with name s-test-opt-create-27658133-2832-4455-886c-f0bac54820e0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:48:49.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-840" for this suite.
Sep 25 02:49:11.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:49:11.919: INFO: namespace secrets-840 deletion completed in 22.171107401s

• [SLOW TEST:112.919 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
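The three secrets above (`s-test-opt-del-*`, `s-test-opt-upd-*`, `s-test-opt-create-*`) are mounted as *optional* volume sources: the pod must tolerate one being deleted, one being updated, and one not existing yet, with each change eventually reflected in the volume. A sketch of the volume shape involved (names are illustrative):

```python
# Hypothetical sketch: an optional secret volume. optional=True is what lets
# the kubelet mount the volume even while the named secret is absent, so the
# "del" and "create" secrets in this test can come and go without failing the pod.
def optional_secret_volume(volume_name, secret_name):
    return {
        "name": volume_name,
        "secret": {"secretName": secret_name, "optional": True},
    }

def test_volumes(prefix="s-test-opt"):
    return [optional_secret_volume(f"{kind}-volume", f"{prefix}-{kind}")
            for kind in ("del", "upd", "create")]
```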
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:49:11.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:49:40.031: INFO: Container started at 2020-09-25 02:49:14 +0000 UTC, pod became ready at 2020-09-25 02:49:38 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:49:40.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4675" for this suite.
Sep 25 02:50:02.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:50:02.200: INFO: namespace container-probe-4675 deletion completed in 22.161489286s

• [SLOW TEST:50.280 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
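The single INFO line in this spec carries the whole assertion: the container started at 02:49:14 and became ready at 02:49:38, so readiness lagged start by about 24 seconds, which must be at least the probe's `initialDelaySeconds`. A sketch of that timing check, using the timestamps from the log (the delay value of 20 is an illustrative assumption):

```python
from datetime import datetime, timedelta

# Mirrors what the test asserts: the pod may not report Ready before
# initialDelaySeconds have elapsed since container start.
def ready_after_initial_delay(started_at, ready_at, initial_delay_seconds):
    return ready_at - started_at >= timedelta(seconds=initial_delay_seconds)

# Timestamps printed in the log above (UTC).
started = datetime(2020, 9, 25, 2, 49, 14)
ready = datetime(2020, 9, 25, 2, 49, 38)
```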
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:50:02.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-33739a8c-02d5-4a37-9e02-cdda6b0aeee1
STEP: Creating a pod to test consume secrets
Sep 25 02:50:02.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50" in namespace "projected-2929" to be "success or failure"
Sep 25 02:50:02.345: INFO: Pod "pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50": Phase="Pending", Reason="", readiness=false. Elapsed: 27.948979ms
Sep 25 02:50:04.353: INFO: Pod "pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035910624s
Sep 25 02:50:06.361: INFO: Pod "pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043750802s
STEP: Saw pod success
Sep 25 02:50:06.362: INFO: Pod "pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50" satisfied condition "success or failure"
Sep 25 02:50:06.367: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50 container projected-secret-volume-test: 
STEP: delete the pod
Sep 25 02:50:06.405: INFO: Waiting for pod pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50 to disappear
Sep 25 02:50:06.416: INFO: Pod pod-projected-secrets-fd2d0826-3fa6-4874-bc0e-bc95cf65ff50 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:50:06.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2929" for this suite.
Sep 25 02:50:12.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:50:12.577: INFO: namespace projected-2929 deletion completed in 6.154430256s

• [SLOW TEST:10.376 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
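This case combines a volume-wide `defaultMode` with a pod-level `fsGroup`: the projected secret files get group ownership from `fsGroup`, so a non-root container can still read them under a restrictive mode. A hedged sketch of the relevant spec fragment (all concrete values are illustrative assumptions):

```python
# Hypothetical sketch: projected secret consumed as non-root, with
# defaultMode 0o440 and fsGroup set so the group-readable bit is usable.
def projected_secret_pod(secret_name):
    return {
        "spec": {
            "securityContext": {"runAsUser": 1000, "fsGroup": 1001},
            "containers": [{
                "name": "projected-secret-volume-test",  # matches the log
                "volumeMounts": [{"name": "projected-secret-volume",
                                  "mountPath": "/etc/projected-secret-volume"}],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {
                    "defaultMode": 0o440,  # volume-wide mode under test
                    "sources": [{"secret": {"name": secret_name}}],
                },
            }],
        },
    }
```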
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:50:12.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 02:50:12.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265" in namespace "downward-api-6462" to be "success or failure"
Sep 25 02:50:12.680: INFO: Pod "downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098743ms
Sep 25 02:50:14.687: INFO: Pod "downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015544021s
Sep 25 02:50:16.694: INFO: Pod "downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02201829s
STEP: Saw pod success
Sep 25 02:50:16.694: INFO: Pod "downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265" satisfied condition "success or failure"
Sep 25 02:50:16.698: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265 container client-container: 
STEP: delete the pod
Sep 25 02:50:16.721: INFO: Waiting for pod downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265 to disappear
Sep 25 02:50:16.736: INFO: Pod downwardapi-volume-97a6ef7f-e9da-4045-9bfe-d18d5dca9265 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:50:16.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6462" for this suite.
Sep 25 02:50:22.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:50:22.946: INFO: namespace downward-api-6462 deletion completed in 6.181057584s

• [SLOW TEST:10.366 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
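"Provide container's cpu limit" is done through a downwardAPI `resourceFieldRef`, which writes the container's own `limits.cpu` into a file in the volume. A sketch of the spec fragment (the limit value and paths are illustrative assumptions; `client-container` matches the log):

```python
# Hypothetical sketch: expose the container's cpu limit via the downward API
# volume. With the default divisor, a "2" cpu limit surfaces as the file
# content "2".
def cpu_limit_downward_pod():
    return {
        "spec": {
            "containers": [{
                "name": "client-container",
                "resources": {"limits": {"cpu": "2"}},  # assumed value
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {"items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                    },
                }]},
            }],
        },
    }
```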
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:50:22.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 02:50:23.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9532'
Sep 25 02:50:26.580: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 25 02:50:26.580: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Sep 25 02:50:26.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9532'
Sep 25 02:50:27.775: INFO: stderr: ""
Sep 25 02:50:27.776: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:50:27.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9532" for this suite.
Sep 25 02:50:47.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:50:47.973: INFO: namespace kubectl-9532 deletion completed in 20.189019326s

• [SLOW TEST:25.025 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
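The stderr captured above records why this test later disappeared: `kubectl run --generator=deployment/apps.v1` was deprecated in favor of `kubectl create deployment`. A hedged sketch of the apps/v1 Deployment skeleton that generator produced (the `run: <name>` label and single replica reflect the generator's behavior as I understand it, and should be treated as assumptions):

```python
# Hypothetical skeleton of the Deployment created by the deprecated
# `kubectl run --generator=deployment/apps.v1 NAME --image=IMAGE`.
def run_default_deployment(name, image):
    labels = {"run": name}  # assumed label scheme used by `kubectl run`
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

The test only verifies that a pod controlled by this Deployment appears, which is why the selector/template label agreement above is the load-bearing part.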
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:50:47.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 25 02:50:48.091: INFO: Waiting up to 5m0s for pod "pod-849ad7d1-a38f-4b30-bf89-9bef92feba03" in namespace "emptydir-3989" to be "success or failure"
Sep 25 02:50:48.118: INFO: Pod "pod-849ad7d1-a38f-4b30-bf89-9bef92feba03": Phase="Pending", Reason="", readiness=false. Elapsed: 26.956143ms
Sep 25 02:50:50.140: INFO: Pod "pod-849ad7d1-a38f-4b30-bf89-9bef92feba03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048479164s
Sep 25 02:50:52.147: INFO: Pod "pod-849ad7d1-a38f-4b30-bf89-9bef92feba03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055981701s
STEP: Saw pod success
Sep 25 02:50:52.148: INFO: Pod "pod-849ad7d1-a38f-4b30-bf89-9bef92feba03" satisfied condition "success or failure"
Sep 25 02:50:52.154: INFO: Trying to get logs from node iruya-worker pod pod-849ad7d1-a38f-4b30-bf89-9bef92feba03 container test-container: 
STEP: delete the pod
Sep 25 02:50:52.186: INFO: Waiting for pod pod-849ad7d1-a38f-4b30-bf89-9bef92feba03 to disappear
Sep 25 02:50:52.211: INFO: Pod pod-849ad7d1-a38f-4b30-bf89-9bef92feba03 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:50:52.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3989" for this suite.
Sep 25 02:50:58.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:50:58.387: INFO: namespace emptydir-3989 deletion completed in 6.158556445s

• [SLOW TEST:10.412 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
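The (root,0666,default) case writes a file with mode 0666 onto an emptyDir volume backed by the node's default medium (disk) and verifies the permission bits round-trip. The filesystem-level check it performs can be illustrated locally (the file name is an assumption; in the real test the write happens inside the mounttest container as root):

```python
import os
import stat
import tempfile

# Local illustration of the permission check behind "(root,0666,default)":
# create a file with mode 0o666 and confirm the filesystem reports exactly
# those bits back.
def make_file_with_mode(directory, name, mode=0o666):
    path = os.path.join(directory, name)
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    os.close(fd)
    os.chmod(path, mode)  # bypass the process umask, as the test image effectively does
    return stat.S_IMODE(os.stat(path).st_mode)
```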
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:50:58.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 02:50:58.479: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a10027fd-0429-472e-a713-27b551241281
STEP: Creating a pod to test consume configMaps
Sep 25 02:51:04.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81" in namespace "configmap-9032" to be "success or failure"
Sep 25 02:51:04.870: INFO: Pod "pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81": Phase="Pending", Reason="", readiness=false. Elapsed: 27.421223ms
Sep 25 02:51:06.900: INFO: Pod "pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057676748s
Sep 25 02:51:08.907: INFO: Pod "pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064457871s
STEP: Saw pod success
Sep 25 02:51:08.907: INFO: Pod "pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81" satisfied condition "success or failure"
Sep 25 02:51:08.912: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81 container configmap-volume-test: 
STEP: delete the pod
Sep 25 02:51:08.936: INFO: Waiting for pod pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81 to disappear
Sep 25 02:51:08.955: INFO: Pod pod-configmaps-5d68dfcf-3550-4bc5-afe2-ed223ab85b81 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:51:08.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9032" for this suite.
Sep 25 02:51:14.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:51:15.131: INFO: namespace configmap-9032 deletion completed in 6.16467416s

• [SLOW TEST:10.377 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
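Unlike the earlier projected configMap case, "mappings and Item mode set" uses a plain configMap volume whose `items` entry both remaps the key and carries its own per-item `mode`, overriding any volume default. A sketch of that volume fragment (the mode `0o400`, key, and path are illustrative assumptions):

```python
# Hypothetical sketch of a configMap volume with a key-to-path mapping and a
# per-item file mode. Item mode takes precedence over the volume defaultMode.
def configmap_item_mode_volume(cm_name):
    return {
        "name": "configmap-volume",
        "configMap": {
            "name": cm_name,
            "items": [{
                "key": "data-2",
                "path": "path/to/data-2",
                "mode": 0o400,  # the per-item mode under test
            }],
        },
    }
```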
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:51:15.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3248
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Sep 25 02:51:15.306: INFO: Found 0 stateful pods, waiting for 3
Sep 25 02:51:25.318: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 02:51:25.318: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 02:51:25.319: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Sep 25 02:51:35.317: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 02:51:35.317: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 02:51:35.317: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 02:51:35.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3248 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 02:51:36.762: INFO: stderr: "I0925 02:51:36.604039     275 log.go:172] (0x2b871f0) (0x2b87260) Create stream\nI0925 02:51:36.607095     275 log.go:172] (0x2b871f0) (0x2b87260) Stream added, broadcasting: 1\nI0925 02:51:36.617200     275 log.go:172] (0x2b871f0) Reply frame received for 1\nI0925 02:51:36.617668     275 log.go:172] (0x2b871f0) (0x24a23f0) Create stream\nI0925 02:51:36.617737     275 log.go:172] (0x2b871f0) (0x24a23f0) Stream added, broadcasting: 3\nI0925 02:51:36.619230     275 log.go:172] (0x2b871f0) Reply frame received for 3\nI0925 02:51:36.619454     275 log.go:172] (0x2b871f0) (0x2b90150) Create stream\nI0925 02:51:36.619537     275 log.go:172] (0x2b871f0) (0x2b90150) Stream added, broadcasting: 5\nI0925 02:51:36.620757     275 log.go:172] (0x2b871f0) Reply frame received for 5\nI0925 02:51:36.716812     275 log.go:172] (0x2b871f0) Data frame received for 5\nI0925 02:51:36.717256     275 log.go:172] (0x2b90150) (5) Data frame handling\nI0925 02:51:36.717924     275 log.go:172] (0x2b90150) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 02:51:36.743665     275 log.go:172] (0x2b871f0) Data frame received for 3\nI0925 02:51:36.743785     275 log.go:172] (0x24a23f0) (3) Data frame handling\nI0925 02:51:36.743909     275 log.go:172] (0x24a23f0) (3) Data frame sent\nI0925 02:51:36.744008     275 log.go:172] (0x2b871f0) Data frame received for 3\nI0925 02:51:36.744100     275 log.go:172] (0x24a23f0) (3) Data frame handling\nI0925 02:51:36.744508     275 log.go:172] (0x2b871f0) Data frame received for 5\nI0925 02:51:36.744802     275 log.go:172] (0x2b90150) (5) Data frame handling\nI0925 02:51:36.746463     275 log.go:172] (0x2b871f0) Data frame received for 1\nI0925 02:51:36.746611     275 log.go:172] (0x2b87260) (1) Data frame handling\nI0925 02:51:36.746789     275 log.go:172] (0x2b87260) (1) Data frame sent\nI0925 02:51:36.748117     275 log.go:172] (0x2b871f0) (0x2b87260) Stream removed, broadcasting: 1\nI0925 02:51:36.751623     275 log.go:172] (0x2b871f0) Go away received\nI0925 02:51:36.753839     275 log.go:172] (0x2b871f0) (0x2b87260) Stream removed, broadcasting: 1\nI0925 02:51:36.754409     275 log.go:172] (0x2b871f0) (0x24a23f0) Stream removed, broadcasting: 3\nI0925 02:51:36.755053     275 log.go:172] (0x2b871f0) (0x2b90150) Stream removed, broadcasting: 5\n"
Sep 25 02:51:36.762: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 02:51:36.763: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep 25 02:51:36.865: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Sep 25 02:51:47.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3248 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 02:51:48.429: INFO: stderr: "I0925 02:51:48.306686     297 log.go:172] (0x24ac9a0) (0x24ad2d0) Create stream\nI0925 02:51:48.309380     297 log.go:172] (0x24ac9a0) (0x24ad2d0) Stream added, broadcasting: 1\nI0925 02:51:48.321132     297 log.go:172] (0x24ac9a0) Reply frame received for 1\nI0925 02:51:48.322142     297 log.go:172] (0x24ac9a0) (0x28fc070) Create stream\nI0925 02:51:48.322252     297 log.go:172] (0x24ac9a0) (0x28fc070) Stream added, broadcasting: 3\nI0925 02:51:48.323938     297 log.go:172] (0x24ac9a0) Reply frame received for 3\nI0925 02:51:48.324181     297 log.go:172] (0x24ac9a0) (0x2abe000) Create stream\nI0925 02:51:48.324240     297 log.go:172] (0x24ac9a0) (0x2abe000) Stream added, broadcasting: 5\nI0925 02:51:48.325521     297 log.go:172] (0x24ac9a0) Reply frame received for 5\nI0925 02:51:48.412471     297 log.go:172] (0x24ac9a0) Data frame received for 5\nI0925 02:51:48.412812     297 log.go:172] (0x24ac9a0) Data frame received for 1\nI0925 02:51:48.413144     297 log.go:172] (0x24ad2d0) (1) Data frame handling\nI0925 02:51:48.413942     297 log.go:172] (0x24ac9a0) Data frame received for 3\nI0925 02:51:48.414179     297 log.go:172] (0x28fc070) (3) Data frame handling\nI0925 02:51:48.414856     297 log.go:172] (0x2abe000) (5) Data frame handling\nI0925 02:51:48.415250     297 log.go:172] (0x28fc070) (3) Data frame sent\nI0925 02:51:48.415479     297 log.go:172] (0x2abe000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 02:51:48.415752     297 log.go:172] (0x24ad2d0) (1) Data frame sent\nI0925 02:51:48.415988     297 log.go:172] (0x24ac9a0) Data frame received for 5\nI0925 02:51:48.416073     297 log.go:172] (0x2abe000) (5) Data frame handling\nI0925 02:51:48.416176     297 log.go:172] (0x24ac9a0) Data frame received for 3\nI0925 02:51:48.416308     297 log.go:172] (0x28fc070) (3) Data frame handling\nI0925 02:51:48.418711     297 log.go:172] (0x24ac9a0) (0x24ad2d0) Stream removed, broadcasting: 1\nI0925 02:51:48.420087     297 log.go:172] (0x24ac9a0) Go away received\nI0925 02:51:48.422487     297 log.go:172] (0x24ac9a0) (0x24ad2d0) Stream removed, broadcasting: 1\nI0925 02:51:48.422726     297 log.go:172] (0x24ac9a0) (0x28fc070) Stream removed, broadcasting: 3\nI0925 02:51:48.422917     297 log.go:172] (0x24ac9a0) (0x2abe000) Stream removed, broadcasting: 5\n"
Sep 25 02:51:48.430: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 02:51:48.431: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 02:51:58.471: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
Sep 25 02:51:58.472: INFO: Waiting for Pod statefulset-3248/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 25 02:51:58.472: INFO: Waiting for Pod statefulset-3248/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 25 02:52:08.514: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
Sep 25 02:52:08.514: INFO: Waiting for Pod statefulset-3248/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 25 02:52:18.488: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
STEP: Rolling back to a previous revision
Sep 25 02:52:28.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3248 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 02:52:29.866: INFO: stderr: "I0925 02:52:29.716052     320 log.go:172] (0x2a3e1c0) (0x2a3e2a0) Create stream\nI0925 02:52:29.717883     320 log.go:172] (0x2a3e1c0) (0x2a3e2a0) Stream added, broadcasting: 1\nI0925 02:52:29.726133     320 log.go:172] (0x2a3e1c0) Reply frame received for 1\nI0925 02:52:29.726660     320 log.go:172] (0x2a3e1c0) (0x2824af0) Create stream\nI0925 02:52:29.726735     320 log.go:172] (0x2a3e1c0) (0x2824af0) Stream added, broadcasting: 3\nI0925 02:52:29.728108     320 log.go:172] (0x2a3e1c0) Reply frame received for 3\nI0925 02:52:29.728348     320 log.go:172] (0x2a3e1c0) (0x24b2380) Create stream\nI0925 02:52:29.728421     320 log.go:172] (0x2a3e1c0) (0x24b2380) Stream added, broadcasting: 5\nI0925 02:52:29.729854     320 log.go:172] (0x2a3e1c0) Reply frame received for 5\nI0925 02:52:29.823625     320 log.go:172] (0x2a3e1c0) Data frame received for 5\nI0925 02:52:29.823836     320 log.go:172] (0x24b2380) (5) Data frame handling\nI0925 02:52:29.824194     320 log.go:172] (0x24b2380) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 02:52:29.850225     320 log.go:172] (0x2a3e1c0) Data frame received for 3\nI0925 02:52:29.850356     320 log.go:172] (0x2824af0) (3) Data frame handling\nI0925 02:52:29.850432     320 log.go:172] (0x2824af0) (3) Data frame sent\nI0925 02:52:29.850614     320 log.go:172] (0x2a3e1c0) Data frame received for 5\nI0925 02:52:29.850862     320 log.go:172] (0x24b2380) (5) Data frame handling\nI0925 02:52:29.851089     320 log.go:172] (0x2a3e1c0) Data frame received for 3\nI0925 02:52:29.851251     320 log.go:172] (0x2824af0) (3) Data frame handling\nI0925 02:52:29.851549     320 log.go:172] (0x2a3e1c0) Data frame received for 1\nI0925 02:52:29.851664     320 log.go:172] (0x2a3e2a0) (1) Data frame handling\nI0925 02:52:29.851774     320 log.go:172] (0x2a3e2a0) (1) Data frame sent\nI0925 02:52:29.853320     320 log.go:172] (0x2a3e1c0) (0x2a3e2a0) Stream removed, broadcasting: 1\nI0925 02:52:29.855970     320 log.go:172] (0x2a3e1c0) Go away received\nI0925 02:52:29.858217     320 log.go:172] (0x2a3e1c0) (0x2a3e2a0) Stream removed, broadcasting: 1\nI0925 02:52:29.858439     320 log.go:172] (0x2a3e1c0) (0x2824af0) Stream removed, broadcasting: 3\nI0925 02:52:29.858630     320 log.go:172] (0x2a3e1c0) (0x24b2380) Stream removed, broadcasting: 5\n"
Sep 25 02:52:29.867: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 02:52:29.867: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 02:52:39.937: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Sep 25 02:52:49.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3248 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 02:52:51.373: INFO: stderr: "I0925 02:52:51.258611     342 log.go:172] (0x2a92e00) (0x2a92e70) Create stream\nI0925 02:52:51.261459     342 log.go:172] (0x2a92e00) (0x2a92e70) Stream added, broadcasting: 1\nI0925 02:52:51.272478     342 log.go:172] (0x2a92e00) Reply frame received for 1\nI0925 02:52:51.273343     342 log.go:172] (0x2a92e00) (0x24a4770) Create stream\nI0925 02:52:51.273445     342 log.go:172] (0x2a92e00) (0x24a4770) Stream added, broadcasting: 3\nI0925 02:52:51.275479     342 log.go:172] (0x2a92e00) Reply frame received for 3\nI0925 02:52:51.276018     342 log.go:172] (0x2a92e00) (0x2848070) Create stream\nI0925 02:52:51.276171     342 log.go:172] (0x2a92e00) (0x2848070) Stream added, broadcasting: 5\nI0925 02:52:51.277885     342 log.go:172] (0x2a92e00) Reply frame received for 5\nI0925 02:52:51.356358     342 log.go:172] (0x2a92e00) Data frame received for 3\nI0925 02:52:51.356763     342 log.go:172] (0x2a92e00) Data frame received for 5\nI0925 02:52:51.357000     342 log.go:172] (0x2848070) (5) Data frame handling\nI0925 02:52:51.357101     342 log.go:172] (0x2a92e00) Data frame received for 1\nI0925 02:52:51.357202     342 log.go:172] (0x2a92e70) (1) Data frame handling\nI0925 02:52:51.357334     342 log.go:172] (0x24a4770) (3) Data frame handling\nI0925 02:52:51.358076     342 log.go:172] (0x2848070) (5) Data frame sent\nI0925 02:52:51.358333     342 log.go:172] (0x2a92e00) Data frame received for 5\nI0925 02:52:51.358459     342 log.go:172] (0x2848070) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 02:52:51.358717     342 log.go:172] (0x24a4770) (3) Data frame sent\nI0925 02:52:51.358990     342 log.go:172] (0x2a92e00) Data frame received for 3\nI0925 02:52:51.359101     342 log.go:172] (0x2a92e70) (1) Data frame sent\nI0925 02:52:51.359217     342 log.go:172] (0x24a4770) (3) Data frame handling\nI0925 02:52:51.360470     342 log.go:172] (0x2a92e00) (0x2a92e70) Stream removed, broadcasting: 1\nI0925 02:52:51.362923     342 log.go:172] (0x2a92e00) Go away received\nI0925 02:52:51.365161     342 log.go:172] (0x2a92e00) (0x2a92e70) Stream removed, broadcasting: 1\nI0925 02:52:51.365351     342 log.go:172] (0x2a92e00) (0x24a4770) Stream removed, broadcasting: 3\nI0925 02:52:51.365529     342 log.go:172] (0x2a92e00) (0x2848070) Stream removed, broadcasting: 5\n"
Sep 25 02:52:51.374: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 02:52:51.374: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 02:53:01.415: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
Sep 25 02:53:01.416: INFO: Waiting for Pod statefulset-3248/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 25 02:53:01.416: INFO: Waiting for Pod statefulset-3248/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 25 02:53:01.416: INFO: Waiting for Pod statefulset-3248/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 25 02:53:11.431: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
Sep 25 02:53:11.432: INFO: Waiting for Pod statefulset-3248/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 25 02:53:11.432: INFO: Waiting for Pod statefulset-3248/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Sep 25 02:53:21.443: INFO: Waiting for StatefulSet statefulset-3248/ss2 to complete update
Sep 25 02:53:21.444: INFO: Waiting for Pod statefulset-3248/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 25 02:53:31.432: INFO: Deleting all statefulset in ns statefulset-3248
Sep 25 02:53:31.439: INFO: Scaling statefulset ss2 to 0
Sep 25 02:54:01.466: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 02:54:01.472: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:54:01.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3248" for this suite.
Sep 25 02:54:07.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:54:07.660: INFO: namespace statefulset-3248 deletion completed in 6.156767767s

• [SLOW TEST:172.526 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
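The StatefulSet test above updates the pod template (nginx:1.14-alpine to 1.15-alpine) and then rolls it back, with the controller replacing pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0). A simplified sketch of that ordering, using the revision hashes from the log; this models only the update order, not the real controller:

```python
# Sketch of RollingUpdate ordering: pods are moved to the new controller
# revision one at a time, highest ordinal first. Not the real controller,
# just the ordering invariant the test verifies.
def rolling_update(pods, new_revision):
    """Update every pod to new_revision; return the order pods were updated in."""
    order = []
    # sort by ordinal suffix ("ss2-2" -> 2), descending
    for name in sorted(pods, key=lambda p: int(p.rsplit("-", 1)[1]), reverse=True):
        pods[name] = new_revision
        order.append(name)
    return order

pods = {"ss2-0": "ss2-6c5cd755cd", "ss2-1": "ss2-6c5cd755cd", "ss2-2": "ss2-6c5cd755cd"}

# forward update to the new revision proceeds in reverse ordinal order
assert rolling_update(pods, "ss2-7c9b54fd4c") == ["ss2-2", "ss2-1", "ss2-0"]
# a rollback is the same mechanism with the previous revision as the target
assert rolling_update(pods, "ss2-6c5cd755cd") == ["ss2-2", "ss2-1", "ss2-0"]
assert all(rev == "ss2-6c5cd755cd" for rev in pods.values())
```

This matches the log above, where ss2-2 reaches the update revision first while ss2-0 and ss2-1 are still reported as pending.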
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:54:07.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Sep 25 02:54:07.783: INFO: Waiting up to 5m0s for pod "var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f" in namespace "var-expansion-3788" to be "success or failure"
Sep 25 02:54:07.792: INFO: Pod "var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95927ms
Sep 25 02:54:09.800: INFO: Pod "var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016037755s
Sep 25 02:54:11.806: INFO: Pod "var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022918266s
STEP: Saw pod success
Sep 25 02:54:11.807: INFO: Pod "var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f" satisfied condition "success or failure"
Sep 25 02:54:11.812: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f container dapi-container: 
STEP: delete the pod
Sep 25 02:54:11.838: INFO: Waiting for pod var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f to disappear
Sep 25 02:54:11.841: INFO: Pod var-expansion-861d71dd-01e7-4e18-adc9-acff9e8f928f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:54:11.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3788" for this suite.
Sep 25 02:54:17.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:54:18.044: INFO: namespace var-expansion-3788 deletion completed in 6.194922156s

• [SLOW TEST:10.383 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
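The Variable Expansion test above checks that `$(VAR)` references in a container's `env` are expanded against variables defined earlier in the same list. A minimal sketch of that composition rule (the variable names here are placeholders, not the ones the test generates):

```python
import re

def expand(value, env):
    """Expand $(VAR) references against already-defined env vars.
    Unknown references are left untouched, mirroring Kubernetes behavior.
    Simplified sketch: ignores the $$( escape."""
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        value,
    )

env = {}
# entries are processed in order, so later vars may reference earlier ones
for name, raw in [("FOO", "foo-value"), ("BAR", "$(FOO);;$(FOO)")]:
    env[name] = expand(raw, env)

assert env["BAR"] == "foo-value;;foo-value"
assert expand("$(MISSING)", env) == "$(MISSING)"  # undefined refs pass through
```

The pod in the test succeeds because the composed value is visible to the container exactly as expanded here.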
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:54:18.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-1dfc8eed-7e5e-48e0-a3dd-b02494bf5c24
STEP: Creating a pod to test consume secrets
Sep 25 02:54:18.174: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574" in namespace "projected-5880" to be "success or failure"
Sep 25 02:54:18.198: INFO: Pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574": Phase="Pending", Reason="", readiness=false. Elapsed: 24.333485ms
Sep 25 02:54:20.220: INFO: Pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045443152s
Sep 25 02:54:22.465: INFO: Pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290919169s
Sep 25 02:54:24.545: INFO: Pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.370709472s
STEP: Saw pod success
Sep 25 02:54:24.545: INFO: Pod "pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574" satisfied condition "success or failure"
Sep 25 02:54:24.640: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574 container secret-volume-test: 
STEP: delete the pod
Sep 25 02:54:25.162: INFO: Waiting for pod pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574 to disappear
Sep 25 02:54:25.407: INFO: Pod pod-projected-secrets-849789db-ba5c-4ad6-b1ea-529ba0c65574 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:54:25.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5880" for this suite.
Sep 25 02:54:31.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:54:31.705: INFO: namespace projected-5880 deletion completed in 6.226681895s

• [SLOW TEST:13.659 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
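The Projected secret test mounts the same secret into a pod through two separate projected volumes. A hedged sketch of the pod spec shape involved, with placeholder volume and secret names:

```python
# Hypothetical manifest sketch: one secret consumed via two projected
# volumes at two mount paths. Names and paths are placeholders.
secret_name = "projected-secret-test-example"

def projected_secret_volume(vol_name):
    return {
        "name": vol_name,
        "projected": {"sources": [{"secret": {"name": secret_name}}]},
    }

pod_spec = {
    "volumes": [projected_secret_volume("vol-1"), projected_secret_volume("vol-2")],
    "containers": [{
        "name": "secret-volume-test",
        "volumeMounts": [
            {"name": "vol-1", "mountPath": "/etc/projected-secret-volume-1", "readOnly": True},
            {"name": "vol-2", "mountPath": "/etc/projected-secret-volume-2", "readOnly": True},
        ],
    }],
}

mounts = {m["name"] for m in pod_spec["containers"][0]["volumeMounts"]}
assert mounts == {"vol-1", "vol-2"}
```

The test then reads the secret's content from both mount paths inside the container to confirm both projections resolve to the same data.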
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:54:31.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:54:37.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9080" for this suite.
Sep 25 02:54:43.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:54:44.009: INFO: namespace watch-9080 deletion completed in 6.312086566s

• [SLOW TEST:12.302 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
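The Watchers test starts watches from many different resource versions while events are produced concurrently, and asserts that every watcher observes the same total order. A toy model of that guarantee: each watch sees a suffix of one totally ordered event stream, never a reordering.

```python
import threading

# Toy model of the ordering property: events form one totally ordered
# stream keyed by resourceVersion, and a watch from any starting version
# sees exactly the suffix after that version, in the same order.
events = [{"resourceVersion": rv} for rv in range(1, 101)]

def watch_from(start_rv):
    return [e["resourceVersion"] for e in events if e["resourceVersion"] > start_rv]

results = {}

def run(start):
    results[start] = watch_from(start)

threads = [threading.Thread(target=run, args=(s,)) for s in (0, 10, 50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

full = results[0]
for start, seen in results.items():
    assert seen == full[start:]  # every watcher sees a suffix of the same order
```

The real test uses resource versions taken from the produced events themselves, but the invariant checked is the same.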
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:54:44.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 25 02:54:44.144: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:44.166: INFO: Number of nodes with available pods: 0
Sep 25 02:54:44.166: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:54:45.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:45.185: INFO: Number of nodes with available pods: 0
Sep 25 02:54:45.185: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:54:46.226: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:46.275: INFO: Number of nodes with available pods: 0
Sep 25 02:54:46.275: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:54:47.317: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:47.347: INFO: Number of nodes with available pods: 0
Sep 25 02:54:47.347: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:54:48.179: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:48.187: INFO: Number of nodes with available pods: 0
Sep 25 02:54:48.187: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 02:54:49.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:49.186: INFO: Number of nodes with available pods: 2
Sep 25 02:54:49.186: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 25 02:54:49.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:49.383: INFO: Number of nodes with available pods: 1
Sep 25 02:54:49.383: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 25 02:54:50.482: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:50.490: INFO: Number of nodes with available pods: 1
Sep 25 02:54:50.490: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 25 02:54:51.398: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:51.460: INFO: Number of nodes with available pods: 1
Sep 25 02:54:51.460: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 25 02:54:52.402: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:52.408: INFO: Number of nodes with available pods: 1
Sep 25 02:54:52.408: INFO: Node iruya-worker2 is running more than one daemon pod
Sep 25 02:54:53.393: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 02:54:53.398: INFO: Number of nodes with available pods: 2
Sep 25 02:54:53.399: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8051, will wait for the garbage collector to delete the pods
Sep 25 02:54:53.470: INFO: Deleting DaemonSet.extensions daemon-set took: 7.293054ms
Sep 25 02:54:53.771: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.986703ms
Sep 25 02:55:05.677: INFO: Number of nodes with available pods: 0
Sep 25 02:55:05.677: INFO: Number of running nodes: 0, number of available pods: 0
Sep 25 02:55:05.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8051/daemonsets","resourceVersion":"323674"},"items":null}

Sep 25 02:55:05.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8051/pods","resourceVersion":"323674"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:55:05.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8051" for this suite.
Sep 25 02:55:11.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:55:11.933: INFO: namespace daemonsets-8051 deletion completed in 6.216055384s

• [SLOW TEST:27.924 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:55:11.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Sep 25 02:55:12.067: INFO: Waiting up to 5m0s for pod "client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6" in namespace "containers-1007" to be "success or failure"
Sep 25 02:55:12.096: INFO: Pod "client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.07211ms
Sep 25 02:55:14.103: INFO: Pod "client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03490007s
Sep 25 02:55:16.119: INFO: Pod "client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051176737s
STEP: Saw pod success
Sep 25 02:55:16.119: INFO: Pod "client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6" satisfied condition "success or failure"
Sep 25 02:55:16.123: INFO: Trying to get logs from node iruya-worker pod client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6 container test-container: 
STEP: delete the pod
Sep 25 02:55:16.159: INFO: Waiting for pod client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6 to disappear
Sep 25 02:55:16.171: INFO: Pod client-containers-9e58b830-33b3-48c0-a860-b50fd8c971c6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:55:16.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1007" for this suite.
Sep 25 02:55:22.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:55:22.344: INFO: namespace containers-1007 deletion completed in 6.161788748s

• [SLOW TEST:10.409 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:55:22.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-3229
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3229
STEP: Deleting pre-stop pod
Sep 25 02:55:37.544: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:55:37.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3229" for this suite.
Sep 25 02:56:17.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:56:17.774: INFO: namespace prestop-3229 deletion completed in 40.170598389s

• [SLOW TEST:55.429 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:56:17.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-8d57bdc5-07af-4f69-8999-5b6b82eaf38a
STEP: Creating a pod to test consume secrets
Sep 25 02:56:17.912: INFO: Waiting up to 5m0s for pod "pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8" in namespace "secrets-9025" to be "success or failure"
Sep 25 02:56:17.927: INFO: Pod "pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.463638ms
Sep 25 02:56:19.934: INFO: Pod "pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021627326s
Sep 25 02:56:21.941: INFO: Pod "pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028756073s
STEP: Saw pod success
Sep 25 02:56:21.941: INFO: Pod "pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8" satisfied condition "success or failure"
Sep 25 02:56:21.946: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8 container secret-volume-test: 
STEP: delete the pod
Sep 25 02:56:21.981: INFO: Waiting for pod pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8 to disappear
Sep 25 02:56:21.987: INFO: Pod pod-secrets-e3c0a3aa-3240-4e0b-8294-18ab69fad2f8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:56:21.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9025" for this suite.
Sep 25 02:56:28.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:56:28.143: INFO: namespace secrets-9025 deletion completed in 6.146261879s

• [SLOW TEST:10.365 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:56:28.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Sep 25 02:56:28.250: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Sep 25 02:56:28.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:29.772: INFO: stderr: ""
Sep 25 02:56:29.773: INFO: stdout: "service/redis-slave created\n"
Sep 25 02:56:29.774: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Sep 25 02:56:29.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:31.305: INFO: stderr: ""
Sep 25 02:56:31.306: INFO: stdout: "service/redis-master created\n"
Sep 25 02:56:31.307: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Sep 25 02:56:31.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:32.820: INFO: stderr: ""
Sep 25 02:56:32.820: INFO: stdout: "service/frontend created\n"
Sep 25 02:56:32.821: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Sep 25 02:56:32.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:34.359: INFO: stderr: ""
Sep 25 02:56:34.359: INFO: stdout: "deployment.apps/frontend created\n"
Sep 25 02:56:34.361: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Sep 25 02:56:34.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:35.913: INFO: stderr: ""
Sep 25 02:56:35.913: INFO: stdout: "deployment.apps/redis-master created\n"
Sep 25 02:56:35.915: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Sep 25 02:56:35.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5281'
Sep 25 02:56:37.807: INFO: stderr: ""
Sep 25 02:56:37.807: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Sep 25 02:56:37.808: INFO: Waiting for all frontend pods to be Running.
Sep 25 02:56:42.860: INFO: Waiting for frontend to serve content.
Sep 25 02:56:42.885: INFO: Trying to add a new entry to the guestbook.
Sep 25 02:56:42.899: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Sep 25 02:56:42.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:44.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:44.116: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Sep 25 02:56:44.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:45.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:45.241: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep 25 02:56:45.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:46.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:46.353: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep 25 02:56:46.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:47.463: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:47.463: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Sep 25 02:56:47.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:48.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:48.560: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Sep 25 02:56:48.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5281'
Sep 25 02:56:49.680: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 02:56:49.680: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:56:49.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5281" for this suite.
Sep 25 02:57:29.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:57:29.955: INFO: namespace kubectl-5281 deletion completed in 40.247292907s

• [SLOW TEST:61.811 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:57:29.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Sep 25 02:57:30.039: INFO: Waiting up to 5m0s for pod "client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212" in namespace "containers-4926" to be "success or failure"
Sep 25 02:57:30.077: INFO: Pod "client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212": Phase="Pending", Reason="", readiness=false. Elapsed: 38.318212ms
Sep 25 02:57:32.191: INFO: Pod "client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152410585s
Sep 25 02:57:34.199: INFO: Pod "client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159522412s
STEP: Saw pod success
Sep 25 02:57:34.199: INFO: Pod "client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212" satisfied condition "success or failure"
Sep 25 02:57:34.204: INFO: Trying to get logs from node iruya-worker2 pod client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212 container test-container: 
STEP: delete the pod
Sep 25 02:57:34.224: INFO: Waiting for pod client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212 to disappear
Sep 25 02:57:34.235: INFO: Pod client-containers-25bc032f-e0e8-48f0-a88a-586a34a9f212 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:57:34.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4926" for this suite.
Sep 25 02:57:40.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:57:40.418: INFO: namespace containers-4926 deletion completed in 6.17477581s

• [SLOW TEST:10.459 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:57:40.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-88df7bee-d7f8-4926-9d1a-afabda00f9ac
STEP: Creating a pod to test consume configMaps
Sep 25 02:57:40.514: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c" in namespace "projected-3070" to be "success or failure"
Sep 25 02:57:40.524: INFO: Pod "pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.890192ms
Sep 25 02:57:42.532: INFO: Pod "pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017688533s
Sep 25 02:57:44.538: INFO: Pod "pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023617255s
STEP: Saw pod success
Sep 25 02:57:44.538: INFO: Pod "pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c" satisfied condition "success or failure"
Sep 25 02:57:44.542: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c container projected-configmap-volume-test: 
STEP: delete the pod
Sep 25 02:57:44.600: INFO: Waiting for pod pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c to disappear
Sep 25 02:57:44.606: INFO: Pod pod-projected-configmaps-c84daec5-6c93-4171-8be8-ba54d9d66e6c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:57:44.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3070" for this suite.
Sep 25 02:57:50.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:57:50.764: INFO: namespace projected-3070 deletion completed in 6.147524442s

• [SLOW TEST:10.344 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:57:50.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Sep 25 02:57:50.879: INFO: Pod name pod-release: Found 0 pods out of 1
Sep 25 02:57:55.887: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:57:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1583" for this suite.
Sep 25 02:58:02.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:58:02.168: INFO: namespace replication-controller-1583 deletion completed in 6.252028057s

• [SLOW TEST:11.401 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:58:02.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 02:58:02.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2" in namespace "downward-api-9204" to be "success or failure"
Sep 25 02:58:02.294: INFO: Pod "downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.693068ms
Sep 25 02:58:04.377: INFO: Pod "downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110453453s
Sep 25 02:58:06.385: INFO: Pod "downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118200809s
STEP: Saw pod success
Sep 25 02:58:06.385: INFO: Pod "downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2" satisfied condition "success or failure"
Sep 25 02:58:06.393: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2 container client-container: 
STEP: delete the pod
Sep 25 02:58:06.428: INFO: Waiting for pod downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2 to disappear
Sep 25 02:58:06.440: INFO: Pod downwardapi-volume-e165f8b0-3952-422d-ad25-31a2718398d2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:58:06.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9204" for this suite.
Sep 25 02:58:12.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:58:12.676: INFO: namespace downward-api-9204 deletion completed in 6.226463833s

• [SLOW TEST:10.505 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:58:12.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ba0fa1f1-c6b8-48de-96b4-a7accd7fd1ab in namespace container-probe-720
Sep 25 02:58:16.778: INFO: Started pod busybox-ba0fa1f1-c6b8-48de-96b4-a7accd7fd1ab in namespace container-probe-720
STEP: checking the pod's current state and verifying that restartCount is present
Sep 25 02:58:16.783: INFO: Initial restart count of pod busybox-ba0fa1f1-c6b8-48de-96b4-a7accd7fd1ab is 0
Sep 25 02:59:04.969: INFO: Restart count of pod container-probe-720/busybox-ba0fa1f1-c6b8-48de-96b4-a7accd7fd1ab is now 1 (48.185795877s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:59:05.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-720" for this suite.
Sep 25 02:59:11.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:59:11.198: INFO: namespace container-probe-720 deletion completed in 6.16605s

• [SLOW TEST:58.521 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:59:11.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0925 02:59:12.062955       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 02:59:12.063: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:59:12.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7625" for this suite.
Sep 25 02:59:18.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:59:18.225: INFO: namespace gc-7625 deletion completed in 6.153832628s

• [SLOW TEST:7.024 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:59:18.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 25 02:59:18.316: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:59:24.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8584" for this suite.
Sep 25 02:59:30.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:59:30.809: INFO: namespace init-container-8584 deletion completed in 6.259460571s

• [SLOW TEST:12.582 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:59:30.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 02:59:30.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5064'
Sep 25 02:59:32.184: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 25 02:59:32.185: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Sep 25 02:59:32.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5064'
Sep 25 02:59:33.290: INFO: stderr: ""
Sep 25 02:59:33.291: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 02:59:33.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5064" for this suite.
Sep 25 02:59:39.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 02:59:39.475: INFO: namespace kubectl-5064 deletion completed in 6.176266622s

• [SLOW TEST:8.666 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 02:59:39.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-40abb173-73d8-4f63-92d8-d7a427d04c98
STEP: Creating configMap with name cm-test-opt-upd-9152b36f-7d7d-46c2-9ee8-9903193afdd1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-40abb173-73d8-4f63-92d8-d7a427d04c98
STEP: Updating configmap cm-test-opt-upd-9152b36f-7d7d-46c2-9ee8-9903193afdd1
STEP: Creating configMap with name cm-test-opt-create-9bd51e50-0b41-467d-996c-34721401472d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:01:06.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2345" for this suite.
Sep 25 03:01:30.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:01:30.353: INFO: namespace configmap-2345 deletion completed in 24.170989871s

• [SLOW TEST:110.873 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:01:30.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Sep 25 03:01:34.464: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-93db879a-acc6-470e-875c-0a370b312d01,GenerateName:,Namespace:events-815,SelfLink:/api/v1/namespaces/events-815/pods/send-events-93db879a-acc6-470e-875c-0a370b312d01,UID:44254128-a69e-4a21-8d8e-640494df6888,ResourceVersion:325035,Generation:0,CreationTimestamp:2020-09-25 03:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 426113827,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xmsjl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xmsjl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-xmsjl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8ca20b0} {node.kubernetes.io/unreachable Exists  NoExecute 
0x8ca20d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:01:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:01:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:01:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:01:30 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.216,StartTime:2020-09-25 03:01:30 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-09-25 03:01:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://c2a2beb615b0a2dbf3a03e40ef5f19bd0c54d1007205672d485c6ce566baa22e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Sep 25 03:01:36.476: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Sep 25 03:01:38.487: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:01:38.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-815" for this suite.
Sep 25 03:02:16.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:02:16.750: INFO: namespace events-815 deletion completed in 38.203536494s

• [SLOW TEST:46.395 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:02:16.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9688607b-bc04-4aea-833d-7f7fff9e082c
STEP: Creating secret with name secret-projected-all-test-volume-f124b9a5-7ac7-4702-b564-e448d9eb83ac
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 25 03:02:16.854: INFO: Waiting up to 5m0s for pod "projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d" in namespace "projected-8218" to be "success or failure"
Sep 25 03:02:16.885: INFO: Pod "projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.705144ms
Sep 25 03:02:18.893: INFO: Pod "projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038142688s
Sep 25 03:02:20.900: INFO: Pod "projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045600839s
STEP: Saw pod success
Sep 25 03:02:20.901: INFO: Pod "projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d" satisfied condition "success or failure"
Sep 25 03:02:20.906: INFO: Trying to get logs from node iruya-worker pod projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d container projected-all-volume-test: 
STEP: delete the pod
Sep 25 03:02:20.946: INFO: Waiting for pod projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d to disappear
Sep 25 03:02:20.978: INFO: Pod projected-volume-4a794fc6-3e20-4b93-b9fa-0793957b585d no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:02:20.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8218" for this suite.
Sep 25 03:02:27.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:02:27.160: INFO: namespace projected-8218 deletion completed in 6.171051619s

• [SLOW TEST:10.405 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:02:27.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:02:27.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65" in namespace "downward-api-9383" to be "success or failure"
Sep 25 03:02:27.325: INFO: Pod "downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65": Phase="Pending", Reason="", readiness=false. Elapsed: 26.378966ms
Sep 25 03:02:29.331: INFO: Pod "downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032354477s
Sep 25 03:02:31.338: INFO: Pod "downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039468252s
STEP: Saw pod success
Sep 25 03:02:31.339: INFO: Pod "downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65" satisfied condition "success or failure"
Sep 25 03:02:31.343: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65 container client-container: 
STEP: delete the pod
Sep 25 03:02:31.367: INFO: Waiting for pod downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65 to disappear
Sep 25 03:02:31.377: INFO: Pod downwardapi-volume-51326c9d-4924-4f50-8ec1-389fb4325a65 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:02:31.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9383" for this suite.
Sep 25 03:02:37.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:02:37.588: INFO: namespace downward-api-9383 deletion completed in 6.201460032s

• [SLOW TEST:10.424 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:02:37.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Sep 25 03:02:37.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2525'
Sep 25 03:02:41.647: INFO: stderr: ""
Sep 25 03:02:41.647: INFO: stdout: "pod/pause created\n"
Sep 25 03:02:41.648: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Sep 25 03:02:41.648: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2525" to be "running and ready"
Sep 25 03:02:41.693: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 44.658456ms
Sep 25 03:02:43.699: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050887869s
Sep 25 03:02:45.704: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.056215309s
Sep 25 03:02:45.704: INFO: Pod "pause" satisfied condition "running and ready"
Sep 25 03:02:45.704: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Sep 25 03:02:45.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2525'
Sep 25 03:02:46.834: INFO: stderr: ""
Sep 25 03:02:46.834: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Sep 25 03:02:46.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2525'
Sep 25 03:02:47.954: INFO: stderr: ""
Sep 25 03:02:47.955: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Sep 25 03:02:47.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2525'
Sep 25 03:02:49.059: INFO: stderr: ""
Sep 25 03:02:49.059: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Sep 25 03:02:49.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2525'
Sep 25 03:02:50.191: INFO: stderr: ""
Sep 25 03:02:50.192: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Sep 25 03:02:50.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2525'
Sep 25 03:02:51.317: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 03:02:51.318: INFO: stdout: "pod \"pause\" force deleted\n"
Sep 25 03:02:51.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2525'
Sep 25 03:02:52.450: INFO: stderr: "No resources found.\n"
Sep 25 03:02:52.450: INFO: stdout: ""
Sep 25 03:02:52.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2525 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 25 03:02:53.542: INFO: stderr: ""
Sep 25 03:02:53.542: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:02:53.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2525" for this suite.
Sep 25 03:02:59.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:02:59.694: INFO: namespace kubectl-2525 deletion completed in 6.143535773s

• [SLOW TEST:22.089 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:02:59.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:02:59.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Sep 25 03:03:00.828: INFO: stderr: ""
Sep 25 03:03:00.828: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:03:00.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6537" for this suite.
Sep 25 03:03:06.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:03:07.013: INFO: namespace kubectl-6537 deletion completed in 6.173711329s

• [SLOW TEST:7.317 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:03:07.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bf431b07-ccc5-4e3f-8da1-fc0fc3b2740d
STEP: Creating a pod to test consume configMaps
Sep 25 03:03:07.120: INFO: Waiting up to 5m0s for pod "pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986" in namespace "configmap-8612" to be "success or failure"
Sep 25 03:03:07.136: INFO: Pod "pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986": Phase="Pending", Reason="", readiness=false. Elapsed: 16.303671ms
Sep 25 03:03:09.144: INFO: Pod "pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024052854s
Sep 25 03:03:11.151: INFO: Pod "pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030950788s
STEP: Saw pod success
Sep 25 03:03:11.151: INFO: Pod "pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986" satisfied condition "success or failure"
Sep 25 03:03:11.157: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986 container configmap-volume-test: 
STEP: delete the pod
Sep 25 03:03:11.184: INFO: Waiting for pod pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986 to disappear
Sep 25 03:03:11.188: INFO: Pod pod-configmaps-286d9f0c-aeec-4f92-9a6e-26177e497986 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:03:11.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8612" for this suite.
Sep 25 03:03:17.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:03:17.386: INFO: namespace configmap-8612 deletion completed in 6.188371476s

• [SLOW TEST:10.371 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:03:17.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Sep 25 03:03:17.445: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Sep 25 03:03:23.617: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Sep 25 03:03:25.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:03:27.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:03:29.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:03:31.768: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:03:33.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736599803, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:03:36.425: INFO: Waited 631.007396ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:03:36.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7169" for this suite.
Sep 25 03:03:43.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:03:43.183: INFO: namespace aggregator-7169 deletion completed in 6.316269156s

• [SLOW TEST:25.792 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:03:43.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3c9eeffb-61b4-4b85-ace9-1116e3b7af66
STEP: Creating a pod to test consume secrets
Sep 25 03:03:43.255: INFO: Waiting up to 5m0s for pod "pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73" in namespace "secrets-6182" to be "success or failure"
Sep 25 03:03:43.284: INFO: Pod "pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73": Phase="Pending", Reason="", readiness=false. Elapsed: 28.318277ms
Sep 25 03:03:45.292: INFO: Pod "pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03595001s
Sep 25 03:03:47.299: INFO: Pod "pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043523127s
STEP: Saw pod success
Sep 25 03:03:47.300: INFO: Pod "pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73" satisfied condition "success or failure"
Sep 25 03:03:47.305: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73 container secret-volume-test: 
STEP: delete the pod
Sep 25 03:03:47.341: INFO: Waiting for pod pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73 to disappear
Sep 25 03:03:47.382: INFO: Pod pod-secrets-853b9c63-0c0d-44bd-af12-5fe41d392c73 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:03:47.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6182" for this suite.
Sep 25 03:03:53.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:03:53.544: INFO: namespace secrets-6182 deletion completed in 6.151979956s

• [SLOW TEST:10.361 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:03:53.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 25 03:03:53.628: INFO: Waiting up to 5m0s for pod "pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0" in namespace "emptydir-5906" to be "success or failure"
Sep 25 03:03:53.640: INFO: Pod "pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.899567ms
Sep 25 03:03:55.648: INFO: Pod "pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019086516s
Sep 25 03:03:57.655: INFO: Pod "pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026456171s
STEP: Saw pod success
Sep 25 03:03:57.655: INFO: Pod "pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0" satisfied condition "success or failure"
Sep 25 03:03:57.661: INFO: Trying to get logs from node iruya-worker pod pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0 container test-container: 
STEP: delete the pod
Sep 25 03:03:57.704: INFO: Waiting for pod pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0 to disappear
Sep 25 03:03:57.711: INFO: Pod pod-82f8a093-9823-4ada-9fe5-d4aa4670b2e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:03:57.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5906" for this suite.
Sep 25 03:04:03.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:04:03.876: INFO: namespace emptydir-5906 deletion completed in 6.15639131s

• [SLOW TEST:10.331 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:04:03.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 03:04:03.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3665'
Sep 25 03:04:05.139: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 25 03:04:05.139: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Sep 25 03:04:05.174: INFO: scanned /root for discovery docs: 
Sep 25 03:04:05.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3665'
Sep 25 03:04:22.548: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep 25 03:04:22.548: INFO: stdout: "Created e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296\nScaling up e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Sep 25 03:04:22.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3665'
Sep 25 03:04:23.712: INFO: stderr: ""
Sep 25 03:04:23.712: INFO: stdout: "e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296-vrg6h "
Sep 25 03:04:23.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296-vrg6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3665'
Sep 25 03:04:24.816: INFO: stderr: ""
Sep 25 03:04:24.816: INFO: stdout: "true"
Sep 25 03:04:24.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296-vrg6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3665'
Sep 25 03:04:25.925: INFO: stderr: ""
Sep 25 03:04:25.925: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Sep 25 03:04:25.926: INFO: e2e-test-nginx-rc-b6b481dfcf075c9510e6cb9c43262296-vrg6h is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Sep 25 03:04:25.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3665'
Sep 25 03:04:27.058: INFO: stderr: ""
Sep 25 03:04:27.059: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:04:27.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3665" for this suite.
Sep 25 03:04:49.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:04:49.255: INFO: namespace kubectl-3665 deletion completed in 22.169314978s

• [SLOW TEST:45.376 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:04:49.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:04:55.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1336" for this suite.
Sep 25 03:05:01.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:05:01.680: INFO: namespace namespaces-1336 deletion completed in 6.160261078s
STEP: Destroying namespace "nsdeletetest-5510" for this suite.
Sep 25 03:05:01.684: INFO: Namespace nsdeletetest-5510 was already deleted
STEP: Destroying namespace "nsdeletetest-5573" for this suite.
Sep 25 03:05:07.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:05:07.858: INFO: namespace nsdeletetest-5573 deletion completed in 6.174589025s

• [SLOW TEST:18.602 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:05:07.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 25 03:05:12.474: INFO: Successfully updated pod "pod-update-31beeca6-4bf4-4e98-a0a8-77ec8067c376"
STEP: verifying the updated pod is in kubernetes
Sep 25 03:05:12.500: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:05:12.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5793" for this suite.
Sep 25 03:05:34.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:05:34.673: INFO: namespace pods-5793 deletion completed in 22.166898609s

• [SLOW TEST:26.813 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:05:34.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5324
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5324
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5324
Sep 25 03:05:34.810: INFO: Found 0 stateful pods, waiting for 1
Sep 25 03:05:44.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Sep 25 03:05:44.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:05:46.236: INFO: stderr: "I0925 03:05:46.084261    1031 log.go:172] (0x296b6c0) (0x296b730) Create stream\nI0925 03:05:46.086416    1031 log.go:172] (0x296b6c0) (0x296b730) Stream added, broadcasting: 1\nI0925 03:05:46.100329    1031 log.go:172] (0x296b6c0) Reply frame received for 1\nI0925 03:05:46.100892    1031 log.go:172] (0x296b6c0) (0x25a6150) Create stream\nI0925 03:05:46.100967    1031 log.go:172] (0x296b6c0) (0x25a6150) Stream added, broadcasting: 3\nI0925 03:05:46.102224    1031 log.go:172] (0x296b6c0) Reply frame received for 3\nI0925 03:05:46.102473    1031 log.go:172] (0x296b6c0) (0x296a0e0) Create stream\nI0925 03:05:46.102545    1031 log.go:172] (0x296b6c0) (0x296a0e0) Stream added, broadcasting: 5\nI0925 03:05:46.103778    1031 log.go:172] (0x296b6c0) Reply frame received for 5\nI0925 03:05:46.185517    1031 log.go:172] (0x296b6c0) Data frame received for 5\nI0925 03:05:46.185873    1031 log.go:172] (0x296a0e0) (5) Data frame handling\nI0925 03:05:46.186554    1031 log.go:172] (0x296a0e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:05:46.219509    1031 log.go:172] (0x296b6c0) Data frame received for 3\nI0925 03:05:46.219671    1031 log.go:172] (0x25a6150) (3) Data frame handling\nI0925 03:05:46.219901    1031 log.go:172] (0x296b6c0) Data frame received for 5\nI0925 03:05:46.220123    1031 log.go:172] (0x296a0e0) (5) Data frame handling\nI0925 03:05:46.220258    1031 log.go:172] (0x25a6150) (3) Data frame sent\nI0925 03:05:46.220457    1031 log.go:172] (0x296b6c0) Data frame received for 3\nI0925 03:05:46.220601    1031 log.go:172] (0x25a6150) (3) Data frame handling\nI0925 03:05:46.221198    1031 log.go:172] (0x296b6c0) Data frame received for 1\nI0925 03:05:46.221415    1031 log.go:172] (0x296b730) (1) Data frame handling\nI0925 03:05:46.221644    1031 log.go:172] (0x296b730) (1) Data frame sent\nI0925 03:05:46.222404    1031 log.go:172] (0x296b6c0) (0x296b730) Stream removed, broadcasting: 1\nI0925 
03:05:46.225064    1031 log.go:172] (0x296b6c0) Go away received\nI0925 03:05:46.229458    1031 log.go:172] (0x296b6c0) (0x296b730) Stream removed, broadcasting: 1\nI0925 03:05:46.229688    1031 log.go:172] (0x296b6c0) (0x25a6150) Stream removed, broadcasting: 3\nI0925 03:05:46.229878    1031 log.go:172] (0x296b6c0) (0x296a0e0) Stream removed, broadcasting: 5\n"
Sep 25 03:05:46.238: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:05:46.238: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:05:46.246: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 25 03:05:56.255: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:05:56.255: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:05:56.278: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Sep 25 03:05:56.280: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  }]
Sep 25 03:05:56.280: INFO: 
Sep 25 03:05:56.281: INFO: StatefulSet ss has not reached scale 3, at 1
Sep 25 03:05:57.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989910872s
Sep 25 03:05:58.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979968746s
Sep 25 03:05:59.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.822742675s
Sep 25 03:06:00.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.8140004s
Sep 25 03:06:01.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.792342785s
Sep 25 03:06:02.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.781984743s
Sep 25 03:06:03.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.772579216s
Sep 25 03:06:04.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.761744299s
Sep 25 03:06:05.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 739.162505ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5324
Sep 25 03:06:06.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:06:07.910: INFO: stderr: "I0925 03:06:07.800175    1053 log.go:172] (0x281fd50) (0x281fdc0) Create stream\nI0925 03:06:07.805030    1053 log.go:172] (0x281fd50) (0x281fdc0) Stream added, broadcasting: 1\nI0925 03:06:07.822820    1053 log.go:172] (0x281fd50) Reply frame received for 1\nI0925 03:06:07.823475    1053 log.go:172] (0x281fd50) (0x26a4000) Create stream\nI0925 03:06:07.823559    1053 log.go:172] (0x281fd50) (0x26a4000) Stream added, broadcasting: 3\nI0925 03:06:07.825309    1053 log.go:172] (0x281fd50) Reply frame received for 3\nI0925 03:06:07.825735    1053 log.go:172] (0x281fd50) (0x2a36000) Create stream\nI0925 03:06:07.825856    1053 log.go:172] (0x281fd50) (0x2a36000) Stream added, broadcasting: 5\nI0925 03:06:07.827168    1053 log.go:172] (0x281fd50) Reply frame received for 5\nI0925 03:06:07.894962    1053 log.go:172] (0x281fd50) Data frame received for 3\nI0925 03:06:07.895321    1053 log.go:172] (0x281fd50) Data frame received for 1\nI0925 03:06:07.895624    1053 log.go:172] (0x281fd50) Data frame received for 5\nI0925 03:06:07.895750    1053 log.go:172] (0x281fdc0) (1) Data frame handling\nI0925 03:06:07.896015    1053 log.go:172] (0x2a36000) (5) Data frame handling\nI0925 03:06:07.896212    1053 log.go:172] (0x26a4000) (3) Data frame handling\nI0925 03:06:07.896745    1053 log.go:172] (0x26a4000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 03:06:07.897543    1053 log.go:172] (0x281fdc0) (1) Data frame sent\nI0925 03:06:07.897692    1053 log.go:172] (0x281fd50) Data frame received for 3\nI0925 03:06:07.897816    1053 log.go:172] (0x26a4000) (3) Data frame handling\nI0925 03:06:07.897908    1053 log.go:172] (0x2a36000) (5) Data frame sent\nI0925 03:06:07.898004    1053 log.go:172] (0x281fd50) Data frame received for 5\nI0925 03:06:07.898795    1053 log.go:172] (0x281fd50) (0x281fdc0) Stream removed, broadcasting: 1\nI0925 03:06:07.900243    1053 log.go:172] (0x2a36000) (5) Data frame handling\nI0925 
03:06:07.900514    1053 log.go:172] (0x281fd50) Go away received\nI0925 03:06:07.904324    1053 log.go:172] (0x281fd50) (0x281fdc0) Stream removed, broadcasting: 1\nI0925 03:06:07.904636    1053 log.go:172] (0x281fd50) (0x26a4000) Stream removed, broadcasting: 3\nI0925 03:06:07.904948    1053 log.go:172] (0x281fd50) (0x2a36000) Stream removed, broadcasting: 5\n"
Sep 25 03:06:07.911: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:06:07.912: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:06:07.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:06:09.318: INFO: stderr: "I0925 03:06:09.200169    1076 log.go:172] (0x282e7e0) (0x282ebd0) Create stream\nI0925 03:06:09.203439    1076 log.go:172] (0x282e7e0) (0x282ebd0) Stream added, broadcasting: 1\nI0925 03:06:09.218102    1076 log.go:172] (0x282e7e0) Reply frame received for 1\nI0925 03:06:09.219392    1076 log.go:172] (0x282e7e0) (0x27d8000) Create stream\nI0925 03:06:09.219551    1076 log.go:172] (0x282e7e0) (0x27d8000) Stream added, broadcasting: 3\nI0925 03:06:09.222148    1076 log.go:172] (0x282e7e0) Reply frame received for 3\nI0925 03:06:09.222692    1076 log.go:172] (0x282e7e0) (0x282ec40) Create stream\nI0925 03:06:09.222821    1076 log.go:172] (0x282e7e0) (0x282ec40) Stream added, broadcasting: 5\nI0925 03:06:09.224637    1076 log.go:172] (0x282e7e0) Reply frame received for 5\nI0925 03:06:09.298725    1076 log.go:172] (0x282e7e0) Data frame received for 3\nI0925 03:06:09.299099    1076 log.go:172] (0x282e7e0) Data frame received for 5\nI0925 03:06:09.299228    1076 log.go:172] (0x27d8000) (3) Data frame handling\nI0925 03:06:09.299457    1076 log.go:172] (0x282e7e0) Data frame received for 1\nI0925 03:06:09.299598    1076 log.go:172] (0x282ebd0) (1) Data frame handling\nI0925 03:06:09.299798    1076 log.go:172] (0x282ec40) (5) Data frame handling\nI0925 03:06:09.301179    1076 log.go:172] (0x282ec40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0925 03:06:09.301852    1076 log.go:172] (0x282ebd0) (1) Data frame sent\nI0925 03:06:09.301964    1076 log.go:172] (0x27d8000) (3) Data frame sent\nI0925 03:06:09.302222    1076 log.go:172] (0x282e7e0) Data frame received for 3\nI0925 03:06:09.302340    1076 log.go:172] (0x27d8000) (3) Data frame handling\nI0925 03:06:09.302767    1076 log.go:172] (0x282e7e0) Data frame received for 5\nI0925 03:06:09.302935    1076 log.go:172] (0x282ec40) (5) Data frame handling\nI0925 03:06:09.304504    1076 
log.go:172] (0x282e7e0) (0x282ebd0) Stream removed, broadcasting: 1\nI0925 03:06:09.306195    1076 log.go:172] (0x282e7e0) Go away received\nI0925 03:06:09.309759    1076 log.go:172] (0x282e7e0) (0x282ebd0) Stream removed, broadcasting: 1\nI0925 03:06:09.310029    1076 log.go:172] (0x282e7e0) (0x27d8000) Stream removed, broadcasting: 3\nI0925 03:06:09.310378    1076 log.go:172] (0x282e7e0) (0x282ec40) Stream removed, broadcasting: 5\n"
Sep 25 03:06:09.319: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:06:09.319: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:06:09.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:06:10.666: INFO: stderr: "I0925 03:06:10.575075    1099 log.go:172] (0x24ba7e0) (0x24ba930) Create stream\nI0925 03:06:10.576784    1099 log.go:172] (0x24ba7e0) (0x24ba930) Stream added, broadcasting: 1\nI0925 03:06:10.587788    1099 log.go:172] (0x24ba7e0) Reply frame received for 1\nI0925 03:06:10.588529    1099 log.go:172] (0x24ba7e0) (0x24bad90) Create stream\nI0925 03:06:10.588625    1099 log.go:172] (0x24ba7e0) (0x24bad90) Stream added, broadcasting: 3\nI0925 03:06:10.590290    1099 log.go:172] (0x24ba7e0) Reply frame received for 3\nI0925 03:06:10.590647    1099 log.go:172] (0x24ba7e0) (0x28461c0) Create stream\nI0925 03:06:10.590748    1099 log.go:172] (0x24ba7e0) (0x28461c0) Stream added, broadcasting: 5\nI0925 03:06:10.592507    1099 log.go:172] (0x24ba7e0) Reply frame received for 5\nI0925 03:06:10.652127    1099 log.go:172] (0x24ba7e0) Data frame received for 3\nI0925 03:06:10.652399    1099 log.go:172] (0x24ba7e0) Data frame received for 5\nI0925 03:06:10.652655    1099 log.go:172] (0x28461c0) (5) Data frame handling\nI0925 03:06:10.652954    1099 log.go:172] (0x24bad90) (3) Data frame handling\nI0925 03:06:10.653184    1099 log.go:172] (0x24ba7e0) Data frame received for 1\nI0925 03:06:10.653266    1099 log.go:172] (0x24ba930) (1) Data frame handling\nI0925 03:06:10.653436    1099 log.go:172] (0x24ba930) (1) Data frame sent\nI0925 03:06:10.653723    1099 log.go:172] (0x28461c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0925 03:06:10.654079    1099 log.go:172] (0x24bad90) (3) Data frame sent\nI0925 03:06:10.654214    1099 log.go:172] (0x24ba7e0) Data frame received for 3\nI0925 03:06:10.654314    1099 log.go:172] (0x24bad90) (3) Data frame handling\nI0925 03:06:10.654395    1099 log.go:172] (0x24ba7e0) Data frame received for 5\nI0925 03:06:10.654502    1099 log.go:172] (0x28461c0) (5) Data frame handling\nI0925 03:06:10.654609    1099 log.go:172] 
(0x28461c0) (5) Data frame sent\n+ true\nI0925 03:06:10.654720    1099 log.go:172] (0x24ba7e0) Data frame received for 5\nI0925 03:06:10.654813    1099 log.go:172] (0x28461c0) (5) Data frame handling\nI0925 03:06:10.655791    1099 log.go:172] (0x24ba7e0) (0x24ba930) Stream removed, broadcasting: 1\nI0925 03:06:10.658058    1099 log.go:172] (0x24ba7e0) Go away received\nI0925 03:06:10.659677    1099 log.go:172] (0x24ba7e0) (0x24ba930) Stream removed, broadcasting: 1\nI0925 03:06:10.659952    1099 log.go:172] (0x24ba7e0) (0x24bad90) Stream removed, broadcasting: 3\nI0925 03:06:10.660103    1099 log.go:172] (0x24ba7e0) (0x28461c0) Stream removed, broadcasting: 5\n"
Sep 25 03:06:10.667: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:06:10.668: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:06:10.675: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 03:06:10.675: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 03:06:10.676: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Sep 25 03:06:10.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:06:12.079: INFO: stderr: "I0925 03:06:11.950297    1121 log.go:172] (0x2be0070) (0x2be00e0) Create stream\nI0925 03:06:11.953615    1121 log.go:172] (0x2be0070) (0x2be00e0) Stream added, broadcasting: 1\nI0925 03:06:11.965708    1121 log.go:172] (0x2be0070) Reply frame received for 1\nI0925 03:06:11.966543    1121 log.go:172] (0x2be0070) (0x28d0000) Create stream\nI0925 03:06:11.966648    1121 log.go:172] (0x2be0070) (0x28d0000) Stream added, broadcasting: 3\nI0925 03:06:11.968507    1121 log.go:172] (0x2be0070) Reply frame received for 3\nI0925 03:06:11.968768    1121 log.go:172] (0x2be0070) (0x28d02a0) Create stream\nI0925 03:06:11.968887    1121 log.go:172] (0x2be0070) (0x28d02a0) Stream added, broadcasting: 5\nI0925 03:06:11.970429    1121 log.go:172] (0x2be0070) Reply frame received for 5\nI0925 03:06:12.062078    1121 log.go:172] (0x2be0070) Data frame received for 5\nI0925 03:06:12.062524    1121 log.go:172] (0x2be0070) Data frame received for 3\nI0925 03:06:12.062792    1121 log.go:172] (0x2be0070) Data frame received for 1\nI0925 03:06:12.063042    1121 log.go:172] (0x2be00e0) (1) Data frame handling\nI0925 03:06:12.063629    1121 log.go:172] (0x28d0000) (3) Data frame handling\nI0925 03:06:12.063917    1121 log.go:172] (0x28d02a0) (5) Data frame handling\nI0925 03:06:12.065404    1121 log.go:172] (0x28d0000) (3) Data frame sent\nI0925 03:06:12.065817    1121 log.go:172] (0x28d02a0) (5) Data frame sent\nI0925 03:06:12.066014    1121 log.go:172] (0x2be00e0) (1) Data frame sent\nI0925 03:06:12.066395    1121 log.go:172] (0x2be0070) Data frame received for 5\nI0925 03:06:12.066581    1121 log.go:172] (0x28d02a0) (5) Data frame handling\nI0925 03:06:12.066865    1121 log.go:172] (0x2be0070) Data frame received for 3\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:06:12.068029    1121 log.go:172] (0x2be0070) (0x2be00e0) Stream removed, broadcasting: 1\nI0925 03:06:12.069311    1121 log.go:172] (0x28d0000) (3) Data frame handling\nI0925 
03:06:12.070330    1121 log.go:172] (0x2be0070) Go away received\nI0925 03:06:12.073150    1121 log.go:172] (0x2be0070) (0x2be00e0) Stream removed, broadcasting: 1\nI0925 03:06:12.073439    1121 log.go:172] (0x2be0070) (0x28d0000) Stream removed, broadcasting: 3\nI0925 03:06:12.073673    1121 log.go:172] (0x2be0070) (0x28d02a0) Stream removed, broadcasting: 5\n"
Sep 25 03:06:12.082: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:06:12.082: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:06:12.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:06:13.495: INFO: stderr: "I0925 03:06:13.340741    1144 log.go:172] (0x26216c0) (0x2621730) Create stream\nI0925 03:06:13.344505    1144 log.go:172] (0x26216c0) (0x2621730) Stream added, broadcasting: 1\nI0925 03:06:13.363922    1144 log.go:172] (0x26216c0) Reply frame received for 1\nI0925 03:06:13.364626    1144 log.go:172] (0x26216c0) (0x28b6000) Create stream\nI0925 03:06:13.364733    1144 log.go:172] (0x26216c0) (0x28b6000) Stream added, broadcasting: 3\nI0925 03:06:13.366219    1144 log.go:172] (0x26216c0) Reply frame received for 3\nI0925 03:06:13.366490    1144 log.go:172] (0x26216c0) (0x28b6070) Create stream\nI0925 03:06:13.366551    1144 log.go:172] (0x26216c0) (0x28b6070) Stream added, broadcasting: 5\nI0925 03:06:13.367582    1144 log.go:172] (0x26216c0) Reply frame received for 5\nI0925 03:06:13.450400    1144 log.go:172] (0x26216c0) Data frame received for 5\nI0925 03:06:13.450726    1144 log.go:172] (0x28b6070) (5) Data frame handling\nI0925 03:06:13.451521    1144 log.go:172] (0x28b6070) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:06:13.477038    1144 log.go:172] (0x26216c0) Data frame received for 3\nI0925 03:06:13.477231    1144 log.go:172] (0x28b6000) (3) Data frame handling\nI0925 03:06:13.477380    1144 log.go:172] (0x26216c0) Data frame received for 5\nI0925 03:06:13.477609    1144 log.go:172] (0x28b6070) (5) Data frame handling\nI0925 03:06:13.477716    1144 log.go:172] (0x28b6000) (3) Data frame sent\nI0925 03:06:13.477850    1144 log.go:172] (0x26216c0) Data frame received for 3\nI0925 03:06:13.477962    1144 log.go:172] (0x28b6000) (3) Data frame handling\nI0925 03:06:13.478894    1144 log.go:172] (0x26216c0) Data frame received for 1\nI0925 03:06:13.479049    1144 log.go:172] (0x2621730) (1) Data frame handling\nI0925 03:06:13.479258    1144 log.go:172] (0x2621730) (1) Data frame sent\nI0925 03:06:13.481146    1144 log.go:172] (0x26216c0) (0x2621730) Stream removed, broadcasting: 1\nI0925 
03:06:13.483582    1144 log.go:172] (0x26216c0) Go away received\nI0925 03:06:13.487118    1144 log.go:172] (0x26216c0) (0x2621730) Stream removed, broadcasting: 1\nI0925 03:06:13.487475    1144 log.go:172] (0x26216c0) (0x28b6000) Stream removed, broadcasting: 3\nI0925 03:06:13.487754    1144 log.go:172] (0x26216c0) (0x28b6070) Stream removed, broadcasting: 5\n"
Sep 25 03:06:13.496: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:06:13.496: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:06:13.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5324 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:06:14.898: INFO: stderr: "I0925 03:06:14.781649    1167 log.go:172] (0x28ca460) (0x28ca540) Create stream\nI0925 03:06:14.783393    1167 log.go:172] (0x28ca460) (0x28ca540) Stream added, broadcasting: 1\nI0925 03:06:14.791897    1167 log.go:172] (0x28ca460) Reply frame received for 1\nI0925 03:06:14.792705    1167 log.go:172] (0x28ca460) (0x26ac0e0) Create stream\nI0925 03:06:14.792810    1167 log.go:172] (0x28ca460) (0x26ac0e0) Stream added, broadcasting: 3\nI0925 03:06:14.794256    1167 log.go:172] (0x28ca460) Reply frame received for 3\nI0925 03:06:14.794493    1167 log.go:172] (0x28ca460) (0x28ca7e0) Create stream\nI0925 03:06:14.794556    1167 log.go:172] (0x28ca460) (0x28ca7e0) Stream added, broadcasting: 5\nI0925 03:06:14.796010    1167 log.go:172] (0x28ca460) Reply frame received for 5\nI0925 03:06:14.851390    1167 log.go:172] (0x28ca460) Data frame received for 5\nI0925 03:06:14.851578    1167 log.go:172] (0x28ca7e0) (5) Data frame handling\nI0925 03:06:14.851880    1167 log.go:172] (0x28ca7e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:06:14.883536    1167 log.go:172] (0x28ca460) Data frame received for 5\nI0925 03:06:14.883640    1167 log.go:172] (0x28ca7e0) (5) Data frame handling\nI0925 03:06:14.883871    1167 log.go:172] (0x28ca460) Data frame received for 3\nI0925 03:06:14.884135    1167 log.go:172] (0x26ac0e0) (3) Data frame handling\nI0925 03:06:14.884385    1167 log.go:172] (0x26ac0e0) (3) Data frame sent\nI0925 03:06:14.884516    1167 log.go:172] (0x28ca460) Data frame received for 3\nI0925 03:06:14.884657    1167 log.go:172] (0x26ac0e0) (3) Data frame handling\nI0925 03:06:14.885384    1167 log.go:172] (0x28ca460) Data frame received for 1\nI0925 03:06:14.885530    1167 log.go:172] (0x28ca540) (1) Data frame handling\nI0925 03:06:14.885689    1167 log.go:172] (0x28ca540) (1) Data frame sent\nI0925 03:06:14.886353    1167 log.go:172] (0x28ca460) (0x28ca540) Stream removed, broadcasting: 1\nI0925 
03:06:14.889288    1167 log.go:172] (0x28ca460) Go away received\nI0925 03:06:14.892475    1167 log.go:172] (0x28ca460) (0x28ca540) Stream removed, broadcasting: 1\nI0925 03:06:14.892967    1167 log.go:172] (0x28ca460) (0x26ac0e0) Stream removed, broadcasting: 3\nI0925 03:06:14.893181    1167 log.go:172] (0x28ca460) (0x28ca7e0) Stream removed, broadcasting: 5\n"
Sep 25 03:06:14.899: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:06:14.899: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:06:14.900: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:06:14.906: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Sep 25 03:06:24.920: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:06:24.920: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:06:24.920: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:06:24.965: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 25 03:06:24.965: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  }]
Sep 25 03:06:24.965: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:24.966: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:24.967: INFO: 
Sep 25 03:06:24.967: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 25 03:06:26.138: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 25 03:06:26.139: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  }]
Sep 25 03:06:26.139: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:26.140: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:26.141: INFO: 
Sep 25 03:06:26.141: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 25 03:06:27.150: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 25 03:06:27.150: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:34 +0000 UTC  }]
Sep 25 03:06:27.151: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:27.151: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:27.152: INFO: 
Sep 25 03:06:27.152: INFO: StatefulSet ss has not reached scale 0, at 3
Sep 25 03:06:28.160: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Sep 25 03:06:28.160: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:28.160: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:06:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:05:56 +0000 UTC  }]
Sep 25 03:06:28.160: INFO: 
Sep 25 03:06:28.160: INFO: StatefulSet ss has not reached scale 0, at 2
Sep 25 03:06:29.167: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.789545152s
Sep 25 03:06:30.173: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.782902423s
Sep 25 03:06:31.180: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.777027809s
Sep 25 03:06:32.187: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.769626925s
Sep 25 03:06:33.195: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.762378603s
Sep 25 03:06:34.201: INFO: Verifying statefulset ss doesn't scale past 0 for another 754.990524ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5324
Sep 25 03:06:35.208: INFO: Scaling statefulset ss to 0
Sep 25 03:06:35.234: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 25 03:06:35.238: INFO: Deleting all statefulset in ns statefulset-5324
Sep 25 03:06:35.243: INFO: Scaling statefulset ss to 0
Sep 25 03:06:35.256: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:06:35.259: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:06:35.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5324" for this suite.
Sep 25 03:06:41.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:06:41.465: INFO: namespace statefulset-5324 deletion completed in 6.166134366s

• [SLOW TEST:66.791 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:06:41.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 03:06:41.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-6843'
Sep 25 03:06:42.712: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Sep 25 03:06:42.712: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Sep 25 03:06:44.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6843'
Sep 25 03:06:45.913: INFO: stderr: ""
Sep 25 03:06:45.913: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:06:45.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6843" for this suite.
Sep 25 03:07:07.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:07:08.091: INFO: namespace kubectl-6843 deletion completed in 22.166592426s

• [SLOW TEST:26.623 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:07:08.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8228.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8228.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8228.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8228.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8228.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8228.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:07:14.280: INFO: DNS probes using dns-8228/dns-test-24b91d8d-0a35-46b5-ab47-2984421a7497 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:07:14.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8228" for this suite.
Sep 25 03:07:20.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:07:20.539: INFO: namespace dns-8228 deletion completed in 6.179790425s

• [SLOW TEST:12.443 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:07:20.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Sep 25 03:07:25.220: INFO: Successfully updated pod "labelsupdated2797025-e9f2-422f-9470-9452ead5bb11"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:07:27.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4042" for this suite.
Sep 25 03:07:49.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:07:49.463: INFO: namespace downward-api-4042 deletion completed in 22.194360613s

• [SLOW TEST:28.922 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:07:49.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:07:49.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff" in namespace "projected-8079" to be "success or failure"
Sep 25 03:07:49.543: INFO: Pod "downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646477ms
Sep 25 03:07:51.551: INFO: Pod "downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016180205s
Sep 25 03:07:53.558: INFO: Pod "downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02380816s
STEP: Saw pod success
Sep 25 03:07:53.559: INFO: Pod "downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff" satisfied condition "success or failure"
Sep 25 03:07:53.565: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff container client-container: 
STEP: delete the pod
Sep 25 03:07:53.588: INFO: Waiting for pod downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff to disappear
Sep 25 03:07:53.591: INFO: Pod downwardapi-volume-5295b0d8-5945-4666-a43e-c4f4add45dff no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:07:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8079" for this suite.
Sep 25 03:07:59.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:07:59.784: INFO: namespace projected-8079 deletion completed in 6.186693148s

• [SLOW TEST:10.318 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:07:59.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:07:59.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693" in namespace "downward-api-6905" to be "success or failure"
Sep 25 03:07:59.906: INFO: Pod "downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693": Phase="Pending", Reason="", readiness=false. Elapsed: 28.903806ms
Sep 25 03:08:01.913: INFO: Pod "downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035764422s
Sep 25 03:08:03.920: INFO: Pod "downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043640447s
STEP: Saw pod success
Sep 25 03:08:03.921: INFO: Pod "downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693" satisfied condition "success or failure"
Sep 25 03:08:03.926: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693 container client-container: 
STEP: delete the pod
Sep 25 03:08:03.964: INFO: Waiting for pod downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693 to disappear
Sep 25 03:08:03.982: INFO: Pod downwardapi-volume-3ceb3c6b-b0e1-465d-a738-1a082df9d693 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:08:03.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6905" for this suite.
Sep 25 03:08:10.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:08:10.189: INFO: namespace downward-api-6905 deletion completed in 6.196268389s

• [SLOW TEST:10.403 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:08:10.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Sep 25 03:08:11.013: INFO: Pod name wrapped-volume-race-d10408df-377c-4e1e-a133-dce6538c97da: Found 0 pods out of 5
Sep 25 03:08:16.027: INFO: Pod name wrapped-volume-race-d10408df-377c-4e1e-a133-dce6538c97da: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d10408df-377c-4e1e-a133-dce6538c97da in namespace emptydir-wrapper-3439, will wait for the garbage collector to delete the pods
Sep 25 03:08:30.217: INFO: Deleting ReplicationController wrapped-volume-race-d10408df-377c-4e1e-a133-dce6538c97da took: 23.98629ms
Sep 25 03:08:30.518: INFO: Terminating ReplicationController wrapped-volume-race-d10408df-377c-4e1e-a133-dce6538c97da pods took: 300.879846ms
STEP: Creating RC which spawns configmap-volume pods
Sep 25 03:09:07.184: INFO: Pod name wrapped-volume-race-9b0ec0d1-f53d-4ab0-99ae-fdceae8adfb2: Found 0 pods out of 5
Sep 25 03:09:12.206: INFO: Pod name wrapped-volume-race-9b0ec0d1-f53d-4ab0-99ae-fdceae8adfb2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9b0ec0d1-f53d-4ab0-99ae-fdceae8adfb2 in namespace emptydir-wrapper-3439, will wait for the garbage collector to delete the pods
Sep 25 03:09:26.334: INFO: Deleting ReplicationController wrapped-volume-race-9b0ec0d1-f53d-4ab0-99ae-fdceae8adfb2 took: 9.975377ms
Sep 25 03:09:26.635: INFO: Terminating ReplicationController wrapped-volume-race-9b0ec0d1-f53d-4ab0-99ae-fdceae8adfb2 pods took: 300.973707ms
STEP: Creating RC which spawns configmap-volume pods
Sep 25 03:10:05.591: INFO: Pod name wrapped-volume-race-a4410c86-ba9e-4871-aa38-c36be6dbf8db: Found 0 pods out of 5
Sep 25 03:10:10.649: INFO: Pod name wrapped-volume-race-a4410c86-ba9e-4871-aa38-c36be6dbf8db: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a4410c86-ba9e-4871-aa38-c36be6dbf8db in namespace emptydir-wrapper-3439, will wait for the garbage collector to delete the pods
Sep 25 03:10:30.763: INFO: Deleting ReplicationController wrapped-volume-race-a4410c86-ba9e-4871-aa38-c36be6dbf8db took: 7.191767ms
Sep 25 03:10:31.064: INFO: Terminating ReplicationController wrapped-volume-race-a4410c86-ba9e-4871-aa38-c36be6dbf8db pods took: 300.704408ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:11:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3439" for this suite.
Sep 25 03:11:16.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:11:16.853: INFO: namespace emptydir-wrapper-3439 deletion completed in 8.213348446s

• [SLOW TEST:186.661 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:11:16.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7365/configmap-test-ae5c7db3-7d6e-4130-8f6d-96a645f0a773
STEP: Creating a pod to test consume configMaps
Sep 25 03:11:16.949: INFO: Waiting up to 5m0s for pod "pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737" in namespace "configmap-7365" to be "success or failure"
Sep 25 03:11:16.960: INFO: Pod "pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084352ms
Sep 25 03:11:18.967: INFO: Pod "pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017712393s
Sep 25 03:11:20.973: INFO: Pod "pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023470374s
STEP: Saw pod success
Sep 25 03:11:20.973: INFO: Pod "pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737" satisfied condition "success or failure"
Sep 25 03:11:20.978: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737 container env-test: 
STEP: delete the pod
Sep 25 03:11:21.003: INFO: Waiting for pod pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737 to disappear
Sep 25 03:11:21.007: INFO: Pod pod-configmaps-a75455a1-3a95-4544-9976-96b18c2ba737 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:11:21.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7365" for this suite.
Sep 25 03:11:27.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:11:27.192: INFO: namespace configmap-7365 deletion completed in 6.178614915s

• [SLOW TEST:10.328 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:11:27.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 25 03:11:27.292: INFO: Waiting up to 5m0s for pod "downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175" in namespace "downward-api-6711" to be "success or failure"
Sep 25 03:11:27.300: INFO: Pod "downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175": Phase="Pending", Reason="", readiness=false. Elapsed: 8.506543ms
Sep 25 03:11:29.325: INFO: Pod "downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033481466s
Sep 25 03:11:31.334: INFO: Pod "downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041915047s
STEP: Saw pod success
Sep 25 03:11:31.334: INFO: Pod "downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175" satisfied condition "success or failure"
Sep 25 03:11:31.339: INFO: Trying to get logs from node iruya-worker pod downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175 container dapi-container: 
STEP: delete the pod
Sep 25 03:11:31.384: INFO: Waiting for pod downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175 to disappear
Sep 25 03:11:31.419: INFO: Pod downward-api-ac8f68a7-a30d-47b9-a83d-dd85b82c5175 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:11:31.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6711" for this suite.
Sep 25 03:11:37.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:11:37.586: INFO: namespace downward-api-6711 deletion completed in 6.152698405s

• [SLOW TEST:10.393 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:11:37.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Sep 25 03:11:43.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6faee52f-7748-44fa-8895-856067bc96f1 -c busybox-main-container --namespace=emptydir-1295 -- cat /usr/share/volumeshare/shareddata.txt'
Sep 25 03:11:45.260: INFO: stderr: "I0925 03:11:45.145270    1234 log.go:172] (0x2ac37a0) (0x2ac3810) Create stream\nI0925 03:11:45.146785    1234 log.go:172] (0x2ac37a0) (0x2ac3810) Stream added, broadcasting: 1\nI0925 03:11:45.165208    1234 log.go:172] (0x2ac37a0) Reply frame received for 1\nI0925 03:11:45.165778    1234 log.go:172] (0x2ac37a0) (0x24ac7e0) Create stream\nI0925 03:11:45.165854    1234 log.go:172] (0x2ac37a0) (0x24ac7e0) Stream added, broadcasting: 3\nI0925 03:11:45.167446    1234 log.go:172] (0x2ac37a0) Reply frame received for 3\nI0925 03:11:45.167837    1234 log.go:172] (0x2ac37a0) (0x2590000) Create stream\nI0925 03:11:45.167968    1234 log.go:172] (0x2ac37a0) (0x2590000) Stream added, broadcasting: 5\nI0925 03:11:45.169406    1234 log.go:172] (0x2ac37a0) Reply frame received for 5\nI0925 03:11:45.241615    1234 log.go:172] (0x2ac37a0) Data frame received for 3\nI0925 03:11:45.241865    1234 log.go:172] (0x2ac37a0) Data frame received for 1\nI0925 03:11:45.242120    1234 log.go:172] (0x2ac37a0) Data frame received for 5\nI0925 03:11:45.242437    1234 log.go:172] (0x2590000) (5) Data frame handling\nI0925 03:11:45.242598    1234 log.go:172] (0x24ac7e0) (3) Data frame handling\nI0925 03:11:45.242907    1234 log.go:172] (0x2ac3810) (1) Data frame handling\nI0925 03:11:45.246449    1234 log.go:172] (0x24ac7e0) (3) Data frame sent\nI0925 03:11:45.247135    1234 log.go:172] (0x2ac37a0) Data frame received for 3\nI0925 03:11:45.247283    1234 log.go:172] (0x24ac7e0) (3) Data frame handling\nI0925 03:11:45.247417    1234 log.go:172] (0x2ac3810) (1) Data frame sent\nI0925 03:11:45.248291    1234 log.go:172] (0x2ac37a0) (0x2ac3810) Stream removed, broadcasting: 1\nI0925 03:11:45.248588    1234 log.go:172] (0x2ac37a0) Go away received\nI0925 03:11:45.251585    1234 log.go:172] (0x2ac37a0) (0x2ac3810) Stream removed, broadcasting: 1\nI0925 03:11:45.251785    1234 log.go:172] (0x2ac37a0) (0x24ac7e0) Stream removed, broadcasting: 3\nI0925 03:11:45.251954    1234 log.go:172] (0x2ac37a0) (0x2590000) Stream removed, broadcasting: 5\n"
Sep 25 03:11:45.261: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:11:45.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1295" for this suite.
Sep 25 03:11:51.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:11:51.458: INFO: namespace emptydir-1295 deletion completed in 6.187045485s

• [SLOW TEST:13.871 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
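The shared-volume behavior verified above (a file written by one container read back from another via `kubectl exec ... cat`) comes from two containers in one pod mounting the same emptyDir volume. A minimal manifest sketch of that pattern follows; the container names and mount path mirror the log output, but the image and commands are assumptions, not the test's exact pod spec.

```yaml
# Sketch of the emptyDir shared-volume pattern: the sub-container writes a
# file into the shared mount, and the main container can read it back.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo    # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: busybox-main-container
    image: busybox               # image choice is an assumption
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```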
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:11:51.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 25 03:11:51.571: INFO: Waiting up to 5m0s for pod "pod-5fae5b21-c231-415c-8721-1b7b8c664ae1" in namespace "emptydir-9678" to be "success or failure"
Sep 25 03:11:51.582: INFO: Pod "pod-5fae5b21-c231-415c-8721-1b7b8c664ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.598969ms
Sep 25 03:11:53.589: INFO: Pod "pod-5fae5b21-c231-415c-8721-1b7b8c664ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017790265s
Sep 25 03:11:55.597: INFO: Pod "pod-5fae5b21-c231-415c-8721-1b7b8c664ae1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025862535s
STEP: Saw pod success
Sep 25 03:11:55.597: INFO: Pod "pod-5fae5b21-c231-415c-8721-1b7b8c664ae1" satisfied condition "success or failure"
Sep 25 03:11:55.602: INFO: Trying to get logs from node iruya-worker pod pod-5fae5b21-c231-415c-8721-1b7b8c664ae1 container test-container: 
STEP: delete the pod
Sep 25 03:11:55.626: INFO: Waiting for pod pod-5fae5b21-c231-415c-8721-1b7b8c664ae1 to disappear
Sep 25 03:11:55.647: INFO: Pod pod-5fae5b21-c231-415c-8721-1b7b8c664ae1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:11:55.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9678" for this suite.
Sep 25 03:12:01.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:12:01.830: INFO: namespace emptydir-9678 deletion completed in 6.173087682s

• [SLOW TEST:10.372 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
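The (root,0666,tmpfs) case above mounts an emptyDir backed by memory rather than node disk. A minimal sketch of that volume configuration follows; the e2e test's actual mounttest image and permission-check arguments are not reproduced here, and a plain busybox mount check stands in for them as an assumption.

```yaml
# Sketch of a memory-backed (tmpfs) emptyDir, as in the 0666-on-tmpfs case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox               # image choice is an assumption
    command: ["sh", "-c", "mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```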
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:12:01.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Sep 25 03:12:01.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5189'
Sep 25 03:12:03.469: INFO: stderr: ""
Sep 25 03:12:03.469: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Sep 25 03:12:04.480: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:12:04.480: INFO: Found 0 / 1
Sep 25 03:12:05.478: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:12:05.479: INFO: Found 0 / 1
Sep 25 03:12:06.479: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:12:06.479: INFO: Found 0 / 1
Sep 25 03:12:07.479: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:12:07.479: INFO: Found 1 / 1
Sep 25 03:12:07.479: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep 25 03:12:07.486: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:12:07.486: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Sep 25 03:12:07.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189'
Sep 25 03:12:08.658: INFO: stderr: ""
Sep 25 03:12:08.658: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Sep 03:12:05.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Sep 03:12:05.823 # Server started, Redis version 3.2.12\n1:M 25 Sep 03:12:05.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Sep 03:12:05.823 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Sep 25 03:12:08.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189 --tail=1'
Sep 25 03:12:09.822: INFO: stderr: ""
Sep 25 03:12:09.822: INFO: stdout: "1:M 25 Sep 03:12:05.823 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Sep 25 03:12:09.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189 --limit-bytes=1'
Sep 25 03:12:11.008: INFO: stderr: ""
Sep 25 03:12:11.008: INFO: stdout: " "
STEP: exposing timestamps
Sep 25 03:12:11.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189 --tail=1 --timestamps'
Sep 25 03:12:12.174: INFO: stderr: ""
Sep 25 03:12:12.175: INFO: stdout: "2020-09-25T03:12:05.823507364Z 1:M 25 Sep 03:12:05.823 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Sep 25 03:12:14.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189 --since=1s'
Sep 25 03:12:15.811: INFO: stderr: ""
Sep 25 03:12:15.811: INFO: stdout: ""
Sep 25 03:12:15.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ngczb redis-master --namespace=kubectl-5189 --since=24h'
Sep 25 03:12:16.974: INFO: stderr: ""
Sep 25 03:12:16.974: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Sep 03:12:05.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Sep 03:12:05.823 # Server started, Redis version 3.2.12\n1:M 25 Sep 03:12:05.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Sep 03:12:05.823 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Sep 25 03:12:16.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5189'
Sep 25 03:12:18.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 03:12:18.095: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Sep 25 03:12:18.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5189'
Sep 25 03:12:19.269: INFO: stderr: "No resources found.\n"
Sep 25 03:12:19.269: INFO: stdout: ""
Sep 25 03:12:19.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5189 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 25 03:12:20.387: INFO: stderr: ""
Sep 25 03:12:20.388: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:12:20.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5189" for this suite.
Sep 25 03:12:42.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:12:42.559: INFO: namespace kubectl-5189 deletion completed in 22.162388023s

• [SLOW TEST:40.728 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
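The kubectl test above walks through the log-filtering flags `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. kubectl applies these server-side to the container's log stream, but the semantics of the first two resemble the familiar coreutils filters; a local sketch on a throwaway file (the path is illustrative only):

```shell
# Local analogues of two of the kubectl log-filtering flags exercised above.
# This does not touch a cluster; it only illustrates the semantics.
printf 'line1\nline2\nline3\n' > /tmp/app.log
tail -n 1 /tmp/app.log    # like --tail=1: only the last line
head -c 5 /tmp/app.log    # like --limit-bytes=5: only the first 5 bytes
```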
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:12:42.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 25 03:12:42.646: INFO: Waiting up to 5m0s for pod "pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a" in namespace "emptydir-7278" to be "success or failure"
Sep 25 03:12:42.714: INFO: Pod "pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 67.768969ms
Sep 25 03:12:44.721: INFO: Pod "pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075062306s
Sep 25 03:12:46.729: INFO: Pod "pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082837838s
STEP: Saw pod success
Sep 25 03:12:46.729: INFO: Pod "pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a" satisfied condition "success or failure"
Sep 25 03:12:46.735: INFO: Trying to get logs from node iruya-worker2 pod pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a container test-container: 
STEP: delete the pod
Sep 25 03:12:46.758: INFO: Waiting for pod pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a to disappear
Sep 25 03:12:46.769: INFO: Pod pod-ce9823bc-f22b-419f-bf56-07a86e63dd1a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:12:46.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7278" for this suite.
Sep 25 03:12:52.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:12:52.959: INFO: namespace emptydir-7278 deletion completed in 6.182593117s

• [SLOW TEST:10.398 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:12:52.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Sep 25 03:12:53.089: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328071,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 25 03:12:53.091: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328072,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 25 03:12:53.092: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328073,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Sep 25 03:13:03.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328094,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 25 03:13:03.138: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328095,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Sep 25 03:13:03.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5895,SelfLink:/api/v1/namespaces/watch-5895/configmaps/e2e-watch-test-label-changed,UID:28aa8ebf-f0ba-42cf-8b3d-31be049c10cf,ResourceVersion:328096,Generation:0,CreationTimestamp:2020-09-25 03:12:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:13:03.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5895" for this suite.
Sep 25 03:13:09.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:13:09.314: INFO: namespace watch-5895 deletion completed in 6.164906139s

• [SLOW TEST:16.353 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:13:09.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Sep 25 03:13:14.457: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:13:14.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1298" for this suite.
Sep 25 03:13:36.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:13:36.741: INFO: namespace replicaset-1298 deletion completed in 22.233196712s

• [SLOW TEST:27.422 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:13:36.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ed468baa-f7b2-4cb1-a6ca-f0159dfca390
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ed468baa-f7b2-4cb1-a6ca-f0159dfca390
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:13:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5454" for this suite.
Sep 25 03:14:06.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:14:07.095: INFO: namespace projected-5454 deletion completed in 22.170517018s

• [SLOW TEST:30.350 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
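The projected-configMap behavior verified above — configMap keys surfaced as files in the pod, with later configMap updates eventually reflected in the mounted files — uses a `projected` volume with a configMap source. A manifest sketch follows; the configMap name, image, and mount path are illustrative assumptions.

```yaml
# Sketch of a projected configMap volume: each key of the referenced
# configMap appears as a file under the mount path, and updates to the
# configMap are eventually propagated into the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # hypothetical name
spec:
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # hypothetical configMap name
  containers:
  - name: projected-configmap-volume-test
    image: busybox                   # image choice is an assumption
    command: ["sh", "-c", "cat /etc/projected/* && sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
```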
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:14:07.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-1503f504-150e-4d1a-bbde-f45553e20104
STEP: Creating a pod to test consume configMaps
Sep 25 03:14:07.210: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7" in namespace "projected-8277" to be "success or failure"
Sep 25 03:14:07.233: INFO: Pod "pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.474156ms
Sep 25 03:14:09.242: INFO: Pod "pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03188234s
Sep 25 03:14:11.249: INFO: Pod "pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038459432s
STEP: Saw pod success
Sep 25 03:14:11.249: INFO: Pod "pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7" satisfied condition "success or failure"
Sep 25 03:14:11.253: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7 container projected-configmap-volume-test: 
STEP: delete the pod
Sep 25 03:14:11.384: INFO: Waiting for pod pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7 to disappear
Sep 25 03:14:11.423: INFO: Pod pod-projected-configmaps-7926c284-b71d-45a8-be97-ec906a8552a7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:14:11.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8277" for this suite.
Sep 25 03:14:17.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:14:17.593: INFO: namespace projected-8277 deletion completed in 6.161270735s

• [SLOW TEST:10.498 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:14:17.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-309829b6-9d35-4960-86d4-72a102008480
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-309829b6-9d35-4960-86d4-72a102008480
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:15:44.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5866" for this suite.
Sep 25 03:16:06.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:16:06.401: INFO: namespace configmap-5866 deletion completed in 22.178774251s

• [SLOW TEST:108.807 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:16:06.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3689aad4-b0e0-483b-82d6-5aca1f2ecaba
STEP: Creating a pod to test consume secrets
Sep 25 03:16:06.549: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a" in namespace "projected-2650" to be "success or failure"
Sep 25 03:16:06.578: INFO: Pod "pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.482596ms
Sep 25 03:16:08.591: INFO: Pod "pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041802545s
Sep 25 03:16:10.598: INFO: Pod "pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049029301s
STEP: Saw pod success
Sep 25 03:16:10.598: INFO: Pod "pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a" satisfied condition "success or failure"
Sep 25 03:16:10.603: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a container projected-secret-volume-test: 
STEP: delete the pod
Sep 25 03:16:10.629: INFO: Waiting for pod pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a to disappear
Sep 25 03:16:10.633: INFO: Pod pod-projected-secrets-ca5436ae-db3a-463c-9b1f-e5f262a8361a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:16:10.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2650" for this suite.
Sep 25 03:16:16.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:16:16.835: INFO: namespace projected-2650 deletion completed in 6.193140431s

• [SLOW TEST:10.431 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:16:16.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 25 03:16:16.919: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 25 03:16:16.938: INFO: Waiting for terminating namespaces to be deleted...
Sep 25 03:16:16.942: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Sep 25 03:16:16.953: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:16:16.953: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 25 03:16:16.953: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:16:16.954: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 03:16:16.954: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Sep 25 03:16:16.963: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:16:16.963: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 03:16:16.963: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:16:16.963: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9a317183-4b27-4851-9368-5228bb4ae523 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9a317183-4b27-4851-9368-5228bb4ae523 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9a317183-4b27-4851-9368-5228bb4ae523
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:16:25.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5788" for this suite.
Sep 25 03:16:37.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:16:37.342: INFO: namespace sched-pred-5788 deletion completed in 12.158131552s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.504 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:16:37.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0925 03:16:48.331635       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 03:16:48.332: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:16:48.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3130" for this suite.
Sep 25 03:16:54.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:16:54.540: INFO: namespace gc-3130 deletion completed in 6.199089613s

• [SLOW TEST:17.195 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:16:54.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Sep 25 03:16:59.220: INFO: Pod pod-hostip-eaa81bfc-e948-442c-b202-4fcc939ce783 has hostIP: 172.18.0.5
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:16:59.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3825" for this suite.
Sep 25 03:17:21.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:17:21.391: INFO: namespace pods-3825 deletion completed in 22.16149426s

• [SLOW TEST:26.848 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:17:21.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:17:21.480: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Sep 25 03:17:22.591: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:17:22.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3615" for this suite.
Sep 25 03:17:28.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:17:28.793: INFO: namespace replication-controller-3615 deletion completed in 6.146167208s

• [SLOW TEST:7.399 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:17:28.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:17:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3352" for this suite.
Sep 25 03:17:39.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:17:39.369: INFO: namespace emptydir-wrapper-3352 deletion completed in 6.160297226s

• [SLOW TEST:10.572 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:17:39.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-j4mhl in namespace proxy-4642
I0925 03:17:39.495738       7 runners.go:180] Created replication controller with name: proxy-service-j4mhl, namespace: proxy-4642, replica count: 1
I0925 03:17:40.549634       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:17:41.551309       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:17:42.552414       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:17:43.553305       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:17:44.554048       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:17:45.554957       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0925 03:17:46.555615       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0925 03:17:47.556370       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0925 03:17:48.557345       7 runners.go:180] proxy-service-j4mhl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 25 03:17:48.569: INFO: setup took 9.127911862s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Sep 25 03:17:48.579: INFO: (0) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 8.290426ms)
Sep 25 03:17:48.579: INFO: (0) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.227469ms)
Sep 25 03:17:48.580: INFO: (0) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.281224ms)
Sep 25 03:17:48.582: INFO: (0) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 10.749104ms)
Sep 25 03:17:48.582: INFO: (0) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 11.731455ms)
Sep 25 03:17:48.583: INFO: (0) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 11.276326ms)
Sep 25 03:17:48.583: INFO: (0) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 11.878491ms)
Sep 25 03:17:48.584: INFO: (0) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 11.997746ms)
Sep 25 03:17:48.585: INFO: (0) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 13.736064ms)
Sep 25 03:17:48.585: INFO: (0) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 14.421652ms)
Sep 25 03:17:48.585: INFO: (0) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 13.373227ms)
Sep 25 03:17:48.587: INFO: (0) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 15.173333ms)
Sep 25 03:17:48.588: INFO: (0) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 23.62581ms)
Sep 25 03:17:48.597: INFO: (0) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 24.588568ms)
Sep 25 03:17:48.597: INFO: (0) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 24.822555ms)
Sep 25 03:17:48.602: INFO: (1) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 4.998045ms)
Sep 25 03:17:48.604: INFO: (1) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 7.290395ms)
Sep 25 03:17:48.605: INFO: (1) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.379313ms)
Sep 25 03:17:48.605: INFO: (1) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 8.241562ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 8.718184ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 8.67453ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 8.440571ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.639273ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 8.899264ms)
Sep 25 03:17:48.606: INFO: (1) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 8.938713ms)
Sep 25 03:17:48.607: INFO: (1) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 9.01385ms)
Sep 25 03:17:48.607: INFO: (1) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 9.210067ms)
Sep 25 03:17:48.607: INFO: (1) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 9.328121ms)
Sep 25 03:17:48.607: INFO: (1) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 4.318751ms)
Sep 25 03:17:48.636: INFO: (2) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.772575ms)
Sep 25 03:17:48.637: INFO: (2) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 9.614693ms)
Sep 25 03:17:48.637: INFO: (2) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 9.931973ms)
Sep 25 03:17:48.637: INFO: (2) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 9.982417ms)
Sep 25 03:17:48.637: INFO: (2) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 9.97711ms)
Sep 25 03:17:48.638: INFO: (2) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 10.336819ms)
Sep 25 03:17:48.639: INFO: (2) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 11.305946ms)
Sep 25 03:17:48.639: INFO: (2) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 11.914354ms)
Sep 25 03:17:48.639: INFO: (2) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 12.183852ms)
Sep 25 03:17:48.643: INFO: (2) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 15.391897ms)
Sep 25 03:17:48.643: INFO: (2) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 8.730498ms)
Sep 25 03:17:48.653: INFO: (3) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 9.194003ms)
Sep 25 03:17:48.654: INFO: (3) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 8.995828ms)
Sep 25 03:17:48.654: INFO: (3) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 9.576914ms)
Sep 25 03:17:48.654: INFO: (3) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 9.888749ms)
Sep 25 03:17:48.654: INFO: (3) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 9.856045ms)
Sep 25 03:17:48.655: INFO: (3) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 9.958551ms)
Sep 25 03:17:48.655: INFO: (3) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 10.424496ms)
Sep 25 03:17:48.656: INFO: (3) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 11.063295ms)
Sep 25 03:17:48.657: INFO: (3) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 12.000335ms)
Sep 25 03:17:48.657: INFO: (3) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 11.67134ms)
Sep 25 03:17:48.663: INFO: (4) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 6.563771ms)
Sep 25 03:17:48.664: INFO: (4) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.805498ms)
Sep 25 03:17:48.664: INFO: (4) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 7.006194ms)
Sep 25 03:17:48.665: INFO: (4) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.147762ms)
Sep 25 03:17:48.665: INFO: (4) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 7.068772ms)
Sep 25 03:17:48.665: INFO: (4) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 7.652598ms)
Sep 25 03:17:48.665: INFO: (4) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 7.803298ms)
Sep 25 03:17:48.667: INFO: (4) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 9.307122ms)
Sep 25 03:17:48.668: INFO: (4) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 9.995317ms)
Sep 25 03:17:48.669: INFO: (4) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 11.195439ms)
Sep 25 03:17:48.669: INFO: (4) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 11.382928ms)
Sep 25 03:17:48.669: INFO: (4) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 11.695228ms)
Sep 25 03:17:48.674: INFO: (5) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 4.162969ms)
Sep 25 03:17:48.674: INFO: (5) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 4.841176ms)
Sep 25 03:17:48.675: INFO: (5) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 4.67228ms)
Sep 25 03:17:48.676: INFO: (5) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.191173ms)
Sep 25 03:17:48.676: INFO: (5) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.393826ms)
Sep 25 03:17:48.678: INFO: (5) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 8.447668ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 9.067586ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 9.066253ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 9.094814ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 9.516077ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 9.369725ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 9.595403ms)
Sep 25 03:17:48.679: INFO: (5) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 9.879838ms)
Sep 25 03:17:48.680: INFO: (5) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 3.778092ms)
Sep 25 03:17:48.686: INFO: (6) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 5.367738ms)
Sep 25 03:17:48.686: INFO: (6) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 5.248912ms)
Sep 25 03:17:48.686: INFO: (6) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 5.645208ms)
Sep 25 03:17:48.686: INFO: (6) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 5.949088ms)
Sep 25 03:17:48.687: INFO: (6) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 6.057064ms)
Sep 25 03:17:48.687: INFO: (6) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.17963ms)
Sep 25 03:17:48.689: INFO: (6) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 8.065572ms)
Sep 25 03:17:48.690: INFO: (6) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.565389ms)
Sep 25 03:17:48.690: INFO: (6) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 9.049991ms)
Sep 25 03:17:48.690: INFO: (6) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.807452ms)
Sep 25 03:17:48.690: INFO: (6) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 8.778709ms)
Sep 25 03:17:48.693: INFO: (7) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 3.063795ms)
Sep 25 03:17:48.695: INFO: (7) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 5.030412ms)
Sep 25 03:17:48.696: INFO: (7) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 6.331424ms)
Sep 25 03:17:48.697: INFO: (7) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.474126ms)
Sep 25 03:17:48.697: INFO: (7) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 6.818847ms)
Sep 25 03:17:48.698: INFO: (7) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 7.548997ms)
Sep 25 03:17:48.698: INFO: (7) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 7.706864ms)
Sep 25 03:17:48.698: INFO: (7) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 8.065952ms)
Sep 25 03:17:48.698: INFO: (7) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 8.418489ms)
Sep 25 03:17:48.699: INFO: (7) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.335688ms)
Sep 25 03:17:48.699: INFO: (7) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 8.523889ms)
Sep 25 03:17:48.699: INFO: (7) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 8.86922ms)
Sep 25 03:17:48.699: INFO: (7) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 8.722033ms)
Sep 25 03:17:48.699: INFO: (7) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 6.502599ms)
Sep 25 03:17:48.707: INFO: (8) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 6.405721ms)
Sep 25 03:17:48.707: INFO: (8) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 6.930148ms)
Sep 25 03:17:48.707: INFO: (8) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 7.036738ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 7.10044ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.900569ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 7.294233ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.422777ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.611676ms)
Sep 25 03:17:48.708: INFO: (8) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 7.760008ms)
Sep 25 03:17:48.712: INFO: (9) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 3.602785ms)
Sep 25 03:17:48.713: INFO: (9) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 4.641174ms)
Sep 25 03:17:48.714: INFO: (9) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 5.333268ms)
Sep 25 03:17:48.714: INFO: (9) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 5.749979ms)
Sep 25 03:17:48.715: INFO: (9) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 6.179221ms)
Sep 25 03:17:48.716: INFO: (9) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.719666ms)
Sep 25 03:17:48.716: INFO: (9) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.080606ms)
Sep 25 03:17:48.716: INFO: (9) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.98507ms)
Sep 25 03:17:48.716: INFO: (9) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 7.086131ms)
Sep 25 03:17:48.716: INFO: (9) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 7.537549ms)
Sep 25 03:17:48.717: INFO: (9) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 7.998039ms)
Sep 25 03:17:48.717: INFO: (9) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 8.026855ms)
Sep 25 03:17:48.718: INFO: (9) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 9.168121ms)
Sep 25 03:17:48.719: INFO: (9) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 9.649583ms)
Sep 25 03:17:48.720: INFO: (9) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 11.339712ms)
Sep 25 03:17:48.725: INFO: (10) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 4.883591ms)
Sep 25 03:17:48.725: INFO: (10) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 4.972391ms)
Sep 25 03:17:48.726: INFO: (10) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 5.975099ms)
Sep 25 03:17:48.726: INFO: (10) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.101409ms)
Sep 25 03:17:48.726: INFO: (10) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 6.192746ms)
Sep 25 03:17:48.727: INFO: (10) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 6.125857ms)
Sep 25 03:17:48.727: INFO: (10) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.261896ms)
Sep 25 03:17:48.727: INFO: (10) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 6.626364ms)
Sep 25 03:17:48.727: INFO: (10) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 6.914217ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.039063ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 7.05199ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 7.142369ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.258084ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 7.193116ms)
Sep 25 03:17:48.728: INFO: (10) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 4.1184ms)
Sep 25 03:17:48.734: INFO: (11) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 5.28883ms)
Sep 25 03:17:48.734: INFO: (11) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 5.440403ms)
Sep 25 03:17:48.735: INFO: (11) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 6.109652ms)
Sep 25 03:17:48.735: INFO: (11) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 6.451649ms)
Sep 25 03:17:48.735: INFO: (11) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 6.731522ms)
Sep 25 03:17:48.736: INFO: (11) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 7.434473ms)
Sep 25 03:17:48.736: INFO: (11) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 7.353715ms)
Sep 25 03:17:48.736: INFO: (11) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.418794ms)
Sep 25 03:17:48.737: INFO: (11) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 7.814904ms)
Sep 25 03:17:48.737: INFO: (11) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 7.74879ms)
Sep 25 03:17:48.737: INFO: (11) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.054713ms)
Sep 25 03:17:48.737: INFO: (11) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.175214ms)
Sep 25 03:17:48.738: INFO: (11) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 8.723364ms)
Sep 25 03:17:48.738: INFO: (11) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 9.484564ms)
Sep 25 03:17:48.744: INFO: (12) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 5.053531ms)
Sep 25 03:17:48.745: INFO: (12) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 6.653753ms)
Sep 25 03:17:48.747: INFO: (12) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.831525ms)
Sep 25 03:17:48.747: INFO: (12) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 8.022089ms)
Sep 25 03:17:48.747: INFO: (12) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 8.186474ms)
Sep 25 03:17:48.747: INFO: (12) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 8.334846ms)
Sep 25 03:17:48.747: INFO: (12) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.523031ms)
Sep 25 03:17:48.748: INFO: (12) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 9.010336ms)
Sep 25 03:17:48.748: INFO: (12) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 9.105394ms)
Sep 25 03:17:48.748: INFO: (12) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 9.449073ms)
Sep 25 03:17:48.748: INFO: (12) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 9.739001ms)
Sep 25 03:17:48.749: INFO: (12) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 9.731985ms)
Sep 25 03:17:48.749: INFO: (12) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 9.788671ms)
Sep 25 03:17:48.749: INFO: (12) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 10.488231ms)
Sep 25 03:17:48.753: INFO: (13) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 3.233741ms)
Sep 25 03:17:48.755: INFO: (13) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 5.633162ms)
Sep 25 03:17:48.757: INFO: (13) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 7.335821ms)
Sep 25 03:17:48.758: INFO: (13) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 8.185151ms)
Sep 25 03:17:48.757: INFO: (13) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.874359ms)
Sep 25 03:17:48.757: INFO: (13) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 8.466554ms)
Sep 25 03:17:48.758: INFO: (13) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 8.697388ms)
Sep 25 03:17:48.759: INFO: (13) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 8.913663ms)
Sep 25 03:17:48.759: INFO: (13) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 9.58827ms)
Sep 25 03:17:48.760: INFO: (13) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 10.094198ms)
Sep 25 03:17:48.760: INFO: (13) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 10.379762ms)
Sep 25 03:17:48.760: INFO: (13) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 10.540287ms)
Sep 25 03:17:48.760: INFO: (13) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 10.887192ms)
Sep 25 03:17:48.760: INFO: (13) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 10.833684ms)
Sep 25 03:17:48.766: INFO: (14) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 4.801074ms)
Sep 25 03:17:48.767: INFO: (14) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.063146ms)
Sep 25 03:17:48.769: INFO: (14) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.019634ms)
Sep 25 03:17:48.770: INFO: (14) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 8.84233ms)
Sep 25 03:17:48.770: INFO: (14) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 7.306897ms)
Sep 25 03:17:48.773: INFO: (14) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 12.134739ms)
Sep 25 03:17:48.773: INFO: (14) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 10.889819ms)
Sep 25 03:17:48.773: INFO: (14) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 11.194516ms)
Sep 25 03:17:48.774: INFO: (14) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 11.641247ms)
Sep 25 03:17:48.774: INFO: (14) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 13.230988ms)
Sep 25 03:17:48.774: INFO: (14) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 12.30233ms)
Sep 25 03:17:48.774: INFO: (14) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 12.048763ms)
Sep 25 03:17:48.774: INFO: (14) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 12.622509ms)
Sep 25 03:17:48.775: INFO: (14) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 12.918791ms)
Sep 25 03:17:48.775: INFO: (14) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 13.432689ms)
Sep 25 03:17:48.775: INFO: (14) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 5.792926ms)
Sep 25 03:17:48.781: INFO: (15) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.064874ms)
Sep 25 03:17:48.782: INFO: (15) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 7.167062ms)
Sep 25 03:17:48.783: INFO: (15) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.295908ms)
Sep 25 03:17:48.783: INFO: (15) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 7.683446ms)
Sep 25 03:17:48.783: INFO: (15) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.640166ms)
Sep 25 03:17:48.784: INFO: (15) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 8.47118ms)
Sep 25 03:17:48.784: INFO: (15) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 8.797978ms)
Sep 25 03:17:48.784: INFO: (15) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 8.588245ms)
Sep 25 03:17:48.784: INFO: (15) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 9.155132ms)
Sep 25 03:17:48.784: INFO: (15) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 9.033167ms)
Sep 25 03:17:48.785: INFO: (15) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 9.163862ms)
Sep 25 03:17:48.785: INFO: (15) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 3.655338ms)
Sep 25 03:17:48.790: INFO: (16) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test<... (200; 4.692411ms)
Sep 25 03:17:48.790: INFO: (16) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 5.038446ms)
Sep 25 03:17:48.791: INFO: (16) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 6.011486ms)
Sep 25 03:17:48.791: INFO: (16) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 5.964799ms)
Sep 25 03:17:48.792: INFO: (16) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.108506ms)
Sep 25 03:17:48.792: INFO: (16) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 5.952446ms)
Sep 25 03:17:48.792: INFO: (16) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 6.693624ms)
Sep 25 03:17:48.792: INFO: (16) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 6.541804ms)
Sep 25 03:17:48.794: INFO: (16) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 8.664097ms)
Sep 25 03:17:48.794: INFO: (16) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.539949ms)
Sep 25 03:17:48.794: INFO: (16) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 9.208127ms)
Sep 25 03:17:48.795: INFO: (16) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 8.729377ms)
Sep 25 03:17:48.795: INFO: (16) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 9.178042ms)
Sep 25 03:17:48.796: INFO: (16) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 10.562273ms)
Sep 25 03:17:48.800: INFO: (17) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 3.55315ms)
Sep 25 03:17:48.800: INFO: (17) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 3.610788ms)
Sep 25 03:17:48.803: INFO: (17) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 5.818303ms)
Sep 25 03:17:48.803: INFO: (17) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 6.376227ms)
Sep 25 03:17:48.803: INFO: (17) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 6.662283ms)
Sep 25 03:17:48.804: INFO: (17) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 7.344627ms)
Sep 25 03:17:48.804: INFO: (17) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 7.39977ms)
Sep 25 03:17:48.804: INFO: (17) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 7.175821ms)
Sep 25 03:17:48.804: INFO: (17) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 7.732412ms)
Sep 25 03:17:48.804: INFO: (17) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 7.713531ms)
Sep 25 03:17:48.805: INFO: (17) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 8.728598ms)
Sep 25 03:17:48.805: INFO: (17) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:462/proxy/: tls qux (200; 8.327508ms)
Sep 25 03:17:48.805: INFO: (17) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 8.577599ms)
Sep 25 03:17:48.805: INFO: (17) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 8.614393ms)
Sep 25 03:17:48.806: INFO: (17) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 9.48226ms)
Sep 25 03:17:48.813: INFO: (18) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 6.527977ms)
Sep 25 03:17:48.813: INFO: (18) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:1080/proxy/: ... (200; 6.597028ms)
Sep 25 03:17:48.816: INFO: (18) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 8.760742ms)
Sep 25 03:17:48.816: INFO: (18) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 9.741677ms)
Sep 25 03:17:48.816: INFO: (18) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:460/proxy/: tls baz (200; 9.248964ms)
Sep 25 03:17:48.817: INFO: (18) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: test (200; 10.121543ms)
Sep 25 03:17:48.817: INFO: (18) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 10.547872ms)
Sep 25 03:17:48.817: INFO: (18) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 10.864868ms)
Sep 25 03:17:48.823: INFO: (18) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 16.44909ms)
Sep 25 03:17:48.823: INFO: (18) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname2/proxy/: bar (200; 16.270145ms)
Sep 25 03:17:48.823: INFO: (18) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 16.546765ms)
Sep 25 03:17:48.823: INFO: (18) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname2/proxy/: tls qux (200; 16.941249ms)
Sep 25 03:17:48.829: INFO: (19) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:160/proxy/: foo (200; 5.146368ms)
Sep 25 03:17:48.830: INFO: (19) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp:1080/proxy/: test<... (200; 5.927978ms)
Sep 25 03:17:48.830: INFO: (19) /api/v1/namespaces/proxy-4642/pods/https:proxy-service-j4mhl-98lsp:443/proxy/: ... (200; 6.731191ms)
Sep 25 03:17:48.831: INFO: (19) /api/v1/namespaces/proxy-4642/pods/http:proxy-service-j4mhl-98lsp:162/proxy/: bar (200; 6.983274ms)
Sep 25 03:17:48.831: INFO: (19) /api/v1/namespaces/proxy-4642/services/proxy-service-j4mhl:portname1/proxy/: foo (200; 7.086614ms)
Sep 25 03:17:48.832: INFO: (19) /api/v1/namespaces/proxy-4642/services/https:proxy-service-j4mhl:tlsportname1/proxy/: tls baz (200; 7.179101ms)
Sep 25 03:17:48.832: INFO: (19) /api/v1/namespaces/proxy-4642/pods/proxy-service-j4mhl-98lsp/proxy/: test (200; 7.044107ms)
Sep 25 03:17:48.832: INFO: (19) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname1/proxy/: foo (200; 7.534852ms)
Sep 25 03:17:48.832: INFO: (19) /api/v1/namespaces/proxy-4642/services/http:proxy-service-j4mhl:portname2/proxy/: bar (200; 7.682156ms)
STEP: deleting ReplicationController proxy-service-j4mhl in namespace proxy-4642, will wait for the garbage collector to delete the pods
Sep 25 03:17:48.894: INFO: Deleting ReplicationController proxy-service-j4mhl took: 8.224505ms
Sep 25 03:17:49.195: INFO: Terminating ReplicationController proxy-service-j4mhl pods took: 300.90688ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:17:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4642" for this suite.
Sep 25 03:18:01.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:18:01.569: INFO: namespace proxy-4642 deletion completed in 6.16174301s

• [SLOW TEST:22.194 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:18:01.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 25 03:18:01.647: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:18:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1753" for this suite.
Sep 25 03:18:14.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:18:14.734: INFO: namespace init-container-1753 deletion completed in 6.155817909s

• [SLOW TEST:13.164 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
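As an aside on what the init-container test above exercises: init containers run one at a time, in order, and a non-zero exit from any of them fails the pod before later init containers (or the app containers) start. The following is a toy sketch of that ordering rule, not the kubelet's real implementation, and it ignores app containers entirely (`run_pod_with_init` is a hypothetical name):

```python
def run_pod_with_init(init_exit_codes, restart_policy="Never"):
    """Toy model of init-container semantics on a restartPolicy=Never pod.

    Init containers execute sequentially; the first non-zero exit code
    fails the pod and later init containers never run. Returns the final
    phase and the indices of the init containers that actually started.
    (Illustrative only; app containers are not modeled.)
    """
    ran = []
    for index, exit_code in enumerate(init_exit_codes):
        ran.append(index)
        if exit_code != 0:
            # restartPolicy=Never: no retry, the pod goes straight to Failed.
            return "Failed", ran
    # All init containers succeeded; the pod may proceed to its app containers.
    return "Succeeded", ran

# Two successful init containers, then a failing one part-way through:
print(run_pod_with_init([0, 0]))      # ('Succeeded', [0, 1])
print(run_pod_with_init([0, 1, 0]))   # ('Failed', [0, 1])
```

Note that in the failing case index 2 never appears in the run list, which is the behavior the RestartNever conformance test asserts on.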
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:18:14.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-j4qn
STEP: Creating a pod to test atomic-volume-subpath
Sep 25 03:18:14.849: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-j4qn" in namespace "subpath-8429" to be "success or failure"
Sep 25 03:18:14.854: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 5.597049ms
Sep 25 03:18:16.866: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017130049s
Sep 25 03:18:18.873: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 4.024009968s
Sep 25 03:18:20.884: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 6.035367869s
Sep 25 03:18:22.895: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 8.046388502s
Sep 25 03:18:24.902: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 10.052734772s
Sep 25 03:18:26.908: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 12.059327534s
Sep 25 03:18:28.926: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 14.077587695s
Sep 25 03:18:30.933: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 16.084655406s
Sep 25 03:18:32.941: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 18.091798389s
Sep 25 03:18:34.947: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 20.097842574s
Sep 25 03:18:36.954: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Running", Reason="", readiness=true. Elapsed: 22.104757263s
Sep 25 03:18:38.961: INFO: Pod "pod-subpath-test-projected-j4qn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.112014019s
STEP: Saw pod success
Sep 25 03:18:38.961: INFO: Pod "pod-subpath-test-projected-j4qn" satisfied condition "success or failure"
Sep 25 03:18:38.967: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-j4qn container test-container-subpath-projected-j4qn: 
STEP: delete the pod
Sep 25 03:18:39.000: INFO: Waiting for pod pod-subpath-test-projected-j4qn to disappear
Sep 25 03:18:39.040: INFO: Pod pod-subpath-test-projected-j4qn no longer exists
STEP: Deleting pod pod-subpath-test-projected-j4qn
Sep 25 03:18:39.040: INFO: Deleting pod "pod-subpath-test-projected-j4qn" in namespace "subpath-8429"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:18:39.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8429" for this suite.
Sep 25 03:18:45.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:18:45.206: INFO: namespace subpath-8429 deletion completed in 6.149004745s

• [SLOW TEST:30.469 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
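The repeated `Phase="Pending"` / `Phase="Running"` lines above come from the framework's wait loop: it polls the pod's phase every couple of seconds until a terminal phase is seen or the 5m0s budget expires. A minimal Python sketch of that polling pattern, assuming a caller-supplied `get_phase` callable (the real loop lives in the e2e framework's Go code and is more elaborate):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, poll_s=2.0):
    """Poll get_phase() until the pod reaches a terminal phase, mirroring
    the log's 'Waiting up to 5m0s for pod ... to be "success or failure"'
    loop. Returns (phase, elapsed_seconds); raises TimeoutError on expiry.
    (Illustrative sketch, not the framework's actual implementation.)
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        time.sleep(poll_s)

# Simulated pod that is Pending twice, Running once, then Succeeded:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
phase, _ = wait_for_pod_condition(lambda: next(phases), poll_s=0.0)
print(phase)  # Succeeded
```

The fixed poll interval matches the roughly 2-second spacing between the `Elapsed:` log lines in the transcript.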
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:18:45.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-94653127-b753-4abd-999a-45600df92383 in namespace container-probe-9768
Sep 25 03:18:49.296: INFO: Started pod test-webserver-94653127-b753-4abd-999a-45600df92383 in namespace container-probe-9768
STEP: checking the pod's current state and verifying that restartCount is present
Sep 25 03:18:49.300: INFO: Initial restart count of pod test-webserver-94653127-b753-4abd-999a-45600df92383 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:22:50.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9768" for this suite.
Sep 25 03:22:56.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:22:56.486: INFO: namespace container-probe-9768 deletion completed in 6.18104387s

• [SLOW TEST:251.278 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:22:56.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Sep 25 03:22:56.549: INFO: namespace kubectl-420
Sep 25 03:22:56.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-420'
Sep 25 03:23:00.398: INFO: stderr: ""
Sep 25 03:23:00.398: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 25 03:23:01.407: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:23:01.407: INFO: Found 0 / 1
Sep 25 03:23:02.407: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:23:02.407: INFO: Found 0 / 1
Sep 25 03:23:03.407: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:23:03.407: INFO: Found 0 / 1
Sep 25 03:23:04.407: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:23:04.407: INFO: Found 1 / 1
Sep 25 03:23:04.407: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep 25 03:23:04.414: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:23:04.414: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 25 03:23:04.414: INFO: wait on redis-master startup in kubectl-420 
Sep 25 03:23:04.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7vw49 redis-master --namespace=kubectl-420'
Sep 25 03:23:05.603: INFO: stderr: ""
Sep 25 03:23:05.603: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 25 Sep 03:23:03.012 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 25 Sep 03:23:03.012 # Server started, Redis version 3.2.12\n1:M 25 Sep 03:23:03.013 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 25 Sep 03:23:03.013 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Sep 25 03:23:05.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-420'
Sep 25 03:23:06.847: INFO: stderr: ""
Sep 25 03:23:06.847: INFO: stdout: "service/rm2 exposed\n"
Sep 25 03:23:06.892: INFO: Service rm2 in namespace kubectl-420 found.
STEP: exposing service
Sep 25 03:23:08.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-420'
Sep 25 03:23:10.080: INFO: stderr: ""
Sep 25 03:23:10.081: INFO: stdout: "service/rm3 exposed\n"
Sep 25 03:23:10.095: INFO: Service rm3 in namespace kubectl-420 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:23:12.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-420" for this suite.
Sep 25 03:23:34.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:23:34.305: INFO: namespace kubectl-420 deletion completed in 22.190436727s

• [SLOW TEST:37.816 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
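The `Selector matched 1 pods for map[app:redis]` lines in the kubectl test above use equality-based label selection: a pod matches when every key/value pair in the selector is present in its labels. A toy sketch of that rule (the real matcher is in `k8s.io/apimachinery`; `selector_matches` and the sample pod data here are hypothetical):

```python
def selector_matches(selector, labels):
    """Equality-based label selection: every selector key must exist in the
    object's labels with the same value. (Set-based operators like In/NotIn
    are not modeled here.)
    """
    return all(labels.get(key) == value for key, value in selector.items())

# Sample objects loosely patterned on the pods seen in this log:
pods = [
    {"name": "redis-master-7vw49", "labels": {"app": "redis"}},
    {"name": "update-demo-nautilus-24zm8", "labels": {"name": "update-demo"}},
]
matched = [p["name"] for p in pods if selector_matches({"app": "redis"}, p["labels"])]
print(matched)  # ['redis-master-7vw49']
```

`kubectl expose rc redis-master` works the same way: the created service carries the RC's selector, so it forwards to whichever pods those labels match at any moment.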
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:23:34.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep 25 03:23:34.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1573'
Sep 25 03:23:35.877: INFO: stderr: ""
Sep 25 03:23:35.878: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:23:35.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1573'
Sep 25 03:23:36.999: INFO: stderr: ""
Sep 25 03:23:36.999: INFO: stdout: "update-demo-nautilus-24zm8 update-demo-nautilus-72fz7 "
Sep 25 03:23:37.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24zm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1573'
Sep 25 03:23:38.112: INFO: stderr: ""
Sep 25 03:23:38.112: INFO: stdout: ""
Sep 25 03:23:38.112: INFO: update-demo-nautilus-24zm8 is created but not running
Sep 25 03:23:43.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1573'
Sep 25 03:23:44.279: INFO: stderr: ""
Sep 25 03:23:44.280: INFO: stdout: "update-demo-nautilus-24zm8 update-demo-nautilus-72fz7 "
Sep 25 03:23:44.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24zm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1573'
Sep 25 03:23:45.382: INFO: stderr: ""
Sep 25 03:23:45.382: INFO: stdout: "true"
Sep 25 03:23:45.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24zm8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1573'
Sep 25 03:23:46.480: INFO: stderr: ""
Sep 25 03:23:46.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:23:46.480: INFO: validating pod update-demo-nautilus-24zm8
Sep 25 03:23:46.487: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:23:46.487: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:23:46.487: INFO: update-demo-nautilus-24zm8 is verified up and running
Sep 25 03:23:46.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72fz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1573'
Sep 25 03:23:47.586: INFO: stderr: ""
Sep 25 03:23:47.587: INFO: stdout: "true"
Sep 25 03:23:47.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72fz7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1573'
Sep 25 03:23:48.699: INFO: stderr: ""
Sep 25 03:23:48.699: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:23:48.699: INFO: validating pod update-demo-nautilus-72fz7
Sep 25 03:23:48.706: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:23:48.706: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:23:48.706: INFO: update-demo-nautilus-72fz7 is verified up and running
STEP: using delete to clean up resources
Sep 25 03:23:48.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1573'
Sep 25 03:23:49.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 03:23:49.807: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 25 03:23:49.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1573'
Sep 25 03:23:50.951: INFO: stderr: "No resources found.\n"
Sep 25 03:23:50.951: INFO: stdout: ""
Sep 25 03:23:50.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1573 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 25 03:23:52.122: INFO: stderr: ""
Sep 25 03:23:52.122: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:23:52.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1573" for this suite.
Sep 25 03:23:58.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:23:58.293: INFO: namespace kubectl-1573 deletion completed in 6.161449841s

• [SLOW TEST:23.983 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:23:58.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:23:58.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054" in namespace "downward-api-8901" to be "success or failure"
Sep 25 03:23:58.382: INFO: Pod "downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054": Phase="Pending", Reason="", readiness=false. Elapsed: 5.588296ms
Sep 25 03:24:00.388: INFO: Pod "downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011629247s
Sep 25 03:24:02.395: INFO: Pod "downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018249991s
STEP: Saw pod success
Sep 25 03:24:02.395: INFO: Pod "downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054" satisfied condition "success or failure"
Sep 25 03:24:02.400: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054 container client-container: 
STEP: delete the pod
Sep 25 03:24:02.435: INFO: Waiting for pod downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054 to disappear
Sep 25 03:24:02.442: INFO: Pod downwardapi-volume-6bb60973-18b8-4b3a-ad2b-6bdc0e7d6054 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:24:02.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8901" for this suite.
Sep 25 03:24:08.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:24:08.617: INFO: namespace downward-api-8901 deletion completed in 6.167483423s

• [SLOW TEST:10.323 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:24:08.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0925 03:24:38.775938       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 03:24:38.776: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:24:38.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-203" for this suite.
Sep 25 03:24:44.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:24:44.946: INFO: namespace gc-203 deletion completed in 6.158873246s

• [SLOW TEST:36.328 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
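The garbage-collector test above hinges on `deleteOptions.propagationPolicy`: with `Orphan`, deleting an owner removes only the owner, and dependents survive with their ownerReferences cleared; with `Background` or `Foreground`, dependents are collected too. A toy model of that distinction, assuming simplified objects whose `ownerReferences` are plain name strings (`delete_owner` is a hypothetical helper, not the real GC):

```python
def delete_owner(objects, owner_name, propagation_policy):
    """Toy model of owner deletion under a propagationPolicy.

    - The owner itself is always removed.
    - "Orphan": dependents survive but lose their ownerReferences.
    - any other policy: dependents referencing the owner are deleted.
    (Illustrative only; the real GC tracks UIDs, finalizers, and a graph.)
    """
    survivors = []
    for obj in objects:
        if obj["name"] == owner_name:
            continue  # the owner is deleted regardless of policy
        owned = owner_name in obj.get("ownerReferences", [])
        if owned and propagation_policy != "Orphan":
            continue  # cascading deletion collects the dependent
        if owned:
            obj = dict(obj, ownerReferences=[])  # orphaned: reference cleared
        survivors.append(obj)
    return survivors

cluster = [
    {"name": "deploy/nginx", "ownerReferences": []},
    {"name": "rs/nginx-abc123", "ownerReferences": ["deploy/nginx"]},
]
print(delete_owner(cluster, "deploy/nginx", "Orphan"))
# [{'name': 'rs/nginx-abc123', 'ownerReferences': []}]
```

The test's 30-second wait checks exactly the orphan branch: after deleting the Deployment with `Orphan`, the ReplicaSet must still exist.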
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:24:44.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:24:51.083: INFO: DNS probes using dns-test-994a7916-86f8-4bab-85f3-f70cdda337c6 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:24:57.203: INFO: File wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:24:57.207: INFO: File jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:24:57.207: INFO: Lookups using dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 failed for: [wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local]

Sep 25 03:25:02.215: INFO: File wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:02.221: INFO: File jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:02.221: INFO: Lookups using dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 failed for: [wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local]

Sep 25 03:25:07.215: INFO: File wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:07.220: INFO: File jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:07.220: INFO: Lookups using dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 failed for: [wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local]

Sep 25 03:25:12.215: INFO: File wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:12.220: INFO: File jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:12.220: INFO: Lookups using dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 failed for: [wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local]

Sep 25 03:25:17.215: INFO: File wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:17.220: INFO: File jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local from pod  dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 25 03:25:17.220: INFO: Lookups using dns-8688/dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 failed for: [wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local]

Sep 25 03:25:22.219: INFO: DNS probes using dns-test-04fad6d9-ed3d-4b38-8376-e60e496cb084 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8688.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8688.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:25:28.951: INFO: DNS probes using dns-test-6fa1f5b4-02a2-474c-ba20-fa8cf33020d1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:25:29.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8688" for this suite.
Sep 25 03:25:35.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:25:35.183: INFO: namespace dns-8688 deletion completed in 6.162942763s

• [SLOW TEST:50.236 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
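The ExternalName spec above re-runs `dig +short … CNAME` once a second and keeps checking the result file until the answer flips from the stale `foo.example.com.` to the new `bar.example.com.` — the five failed lookups between 03:24:57 and 03:25:17 are DNS propagation delay being absorbed by that retry loop. A minimal sketch of the pattern, assuming POSIX sh and a hypothetical `resolve` function standing in for the in-pod `dig` call:

```shell
# Retry-until-match sketch of the e2e DNS probe loop.
# Assumption: `resolve` is a hypothetical stand-in for
# `dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME`.
wait_for_cname() {
  # wait_for_cname EXPECTED MAX_ATTEMPTS -> prints the attempt that matched
  expected=$1
  max=$2
  i=1
  while [ "$i" -le "$max" ]; do
    got=$(resolve)
    if [ "$got" = "$expected" ]; then
      echo "$i"
      return 0
    fi
    i=$((i + 1))
  done
  echo "no match after $max attempts" >&2
  return 1
}
```

The real test additionally sleeps between attempts (`sleep 1` in the logged loop) and compares file contents written by the probe pod rather than calling the resolver directly.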
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:25:35.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8b492fcf-60a8-44a4-86a8-c2344cebedce
STEP: Creating a pod to test consume secrets
Sep 25 03:25:35.261: INFO: Waiting up to 5m0s for pod "pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895" in namespace "secrets-715" to be "success or failure"
Sep 25 03:25:35.318: INFO: Pod "pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895": Phase="Pending", Reason="", readiness=false. Elapsed: 57.132354ms
Sep 25 03:25:37.325: INFO: Pod "pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06404177s
Sep 25 03:25:39.333: INFO: Pod "pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07157618s
STEP: Saw pod success
Sep 25 03:25:39.333: INFO: Pod "pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895" satisfied condition "success or failure"
Sep 25 03:25:39.339: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895 container secret-volume-test: 
STEP: delete the pod
Sep 25 03:25:39.369: INFO: Waiting for pod pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895 to disappear
Sep 25 03:25:39.373: INFO: Pod pod-secrets-89256b1a-961c-4bcc-a1bc-d94416fb1895 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:25:39.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-715" for this suite.
Sep 25 03:25:45.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:25:45.547: INFO: namespace secrets-715 deletion completed in 6.167150181s

• [SLOW TEST:10.362 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
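The Secrets spec above shows the standard e2e wait: the pod's phase is polled (`Pending`, `Pending`, `Succeeded` at 2-second intervals) until it reaches the "success or failure" terminal condition. A sketch of that wait, assuming POSIX sh and a hypothetical `pod_phase` helper standing in for something like `kubectl get pod <name> -o jsonpath='{.status.phase}'`:

```shell
# "Success or failure" wait sketch.
# Assumption: `pod_phase` is a hypothetical stand-in for a kubectl
# query of .status.phase on the test pod.
wait_for_completion() {
  # wait_for_completion MAX_ATTEMPTS -> prints the terminal phase reached
  max=$1
  i=1
  while [ "$i" -le "$max" ]; do
    phase=$(pod_phase)
    case "$phase" in
      Succeeded|Failed)
        echo "$phase"
        return 0
        ;;
    esac
    i=$((i + 1))
  done
  return 1
}
```

Either terminal phase satisfies the wait; the test then separately asserts that the phase it saw was `Succeeded` ("Saw pod success" in the log).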
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:25:45.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Sep 25 03:25:45.605: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:25:46.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3421" for this suite.
Sep 25 03:25:52.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:25:52.786: INFO: namespace kubectl-3421 deletion completed in 6.157785023s

• [SLOW TEST:7.237 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
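With `--port 0` (the `-p 0` in the logged command) the proxy binds an ephemeral port, so a caller has to discover the port before curling `/api/` through it. A sketch of extracting it, assuming `kubectl proxy` announces itself with a "Starting to serve on HOST:PORT" line:

```shell
# Parse the ephemeral port out of the proxy's startup message.
# Assumption: the startup line has the form
# "Starting to serve on 127.0.0.1:<port>".
parse_proxy_port() {
  sed -n 's/.*Starting to serve on [0-9.]*:\([0-9][0-9]*\).*/\1/p'
}

# Hypothetical usage against a live cluster (not runnable here):
#   kubectl proxy -p 0 > /tmp/proxy.out & sleep 1
#   port=$(head -n1 /tmp/proxy.out | parse_proxy_port)
#   curl "http://127.0.0.1:${port}/api/"
```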
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:25:52.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0925 03:26:02.939613       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 03:26:02.940: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:26:02.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2200" for this suite.
Sep 25 03:26:08.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:26:09.108: INFO: namespace gc-2200 deletion completed in 6.158537383s

• [SLOW TEST:16.321 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:26:09.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Sep 25 03:26:09.230: INFO: Waiting up to 5m0s for pod "var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2" in namespace "var-expansion-3226" to be "success or failure"
Sep 25 03:26:09.235: INFO: Pod "var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.391812ms
Sep 25 03:26:11.242: INFO: Pod "var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012330852s
Sep 25 03:26:13.249: INFO: Pod "var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019223392s
STEP: Saw pod success
Sep 25 03:26:13.249: INFO: Pod "var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2" satisfied condition "success or failure"
Sep 25 03:26:13.271: INFO: Trying to get logs from node iruya-worker pod var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2 container dapi-container: 
STEP: delete the pod
Sep 25 03:26:13.291: INFO: Waiting for pod var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2 to disappear
Sep 25 03:26:13.295: INFO: Pod var-expansion-09dfd460-812d-4607-89b7-f01c2e9829f2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:26:13.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3226" for this suite.
Sep 25 03:26:19.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:26:19.467: INFO: namespace var-expansion-3226 deletion completed in 6.164354975s

• [SLOW TEST:10.357 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:26:19.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Sep 25 03:26:19.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7248'
Sep 25 03:26:21.029: INFO: stderr: ""
Sep 25 03:26:21.029: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:26:21.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:22.172: INFO: stderr: ""
Sep 25 03:26:22.172: INFO: stdout: "update-demo-nautilus-kg5qf update-demo-nautilus-nc2kw "
Sep 25 03:26:22.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kg5qf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:23.378: INFO: stderr: ""
Sep 25 03:26:23.378: INFO: stdout: ""
Sep 25 03:26:23.378: INFO: update-demo-nautilus-kg5qf is created but not running
Sep 25 03:26:28.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:29.529: INFO: stderr: ""
Sep 25 03:26:29.529: INFO: stdout: "update-demo-nautilus-kg5qf update-demo-nautilus-nc2kw "
Sep 25 03:26:29.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kg5qf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:30.664: INFO: stderr: ""
Sep 25 03:26:30.664: INFO: stdout: "true"
Sep 25 03:26:30.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kg5qf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:31.801: INFO: stderr: ""
Sep 25 03:26:31.801: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:26:31.801: INFO: validating pod update-demo-nautilus-kg5qf
Sep 25 03:26:31.808: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:26:31.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:26:31.808: INFO: update-demo-nautilus-kg5qf is verified up and running
Sep 25 03:26:31.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:32.944: INFO: stderr: ""
Sep 25 03:26:32.945: INFO: stdout: "true"
Sep 25 03:26:32.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:34.044: INFO: stderr: ""
Sep 25 03:26:34.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:26:34.044: INFO: validating pod update-demo-nautilus-nc2kw
Sep 25 03:26:34.051: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:26:34.051: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:26:34.051: INFO: update-demo-nautilus-nc2kw is verified up and running
STEP: scaling down the replication controller
Sep 25 03:26:34.063: INFO: scanned /root for discovery docs: 
Sep 25 03:26:34.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7248'
Sep 25 03:26:35.273: INFO: stderr: ""
Sep 25 03:26:35.273: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:26:35.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:36.398: INFO: stderr: ""
Sep 25 03:26:36.398: INFO: stdout: "update-demo-nautilus-kg5qf update-demo-nautilus-nc2kw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep 25 03:26:41.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:42.550: INFO: stderr: ""
Sep 25 03:26:42.550: INFO: stdout: "update-demo-nautilus-kg5qf update-demo-nautilus-nc2kw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Sep 25 03:26:47.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:48.711: INFO: stderr: ""
Sep 25 03:26:48.712: INFO: stdout: "update-demo-nautilus-nc2kw "
Sep 25 03:26:48.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:49.850: INFO: stderr: ""
Sep 25 03:26:49.850: INFO: stdout: "true"
Sep 25 03:26:49.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:50.955: INFO: stderr: ""
Sep 25 03:26:50.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:26:50.956: INFO: validating pod update-demo-nautilus-nc2kw
Sep 25 03:26:50.961: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:26:50.961: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:26:50.961: INFO: update-demo-nautilus-nc2kw is verified up and running
STEP: scaling up the replication controller
Sep 25 03:26:50.968: INFO: scanned /root for discovery docs: 
Sep 25 03:26:50.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7248'
Sep 25 03:26:52.206: INFO: stderr: ""
Sep 25 03:26:52.207: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:26:52.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7248'
Sep 25 03:26:53.356: INFO: stderr: ""
Sep 25 03:26:53.356: INFO: stdout: "update-demo-nautilus-nc2kw update-demo-nautilus-vft76 "
Sep 25 03:26:53.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:54.465: INFO: stderr: ""
Sep 25 03:26:54.465: INFO: stdout: "true"
Sep 25 03:26:54.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:55.561: INFO: stderr: ""
Sep 25 03:26:55.562: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:26:55.562: INFO: validating pod update-demo-nautilus-nc2kw
Sep 25 03:26:55.568: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:26:55.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:26:55.568: INFO: update-demo-nautilus-nc2kw is verified up and running
Sep 25 03:26:55.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vft76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:56.679: INFO: stderr: ""
Sep 25 03:26:56.679: INFO: stdout: "true"
Sep 25 03:26:56.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vft76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7248'
Sep 25 03:26:57.786: INFO: stderr: ""
Sep 25 03:26:57.786: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:26:57.787: INFO: validating pod update-demo-nautilus-vft76
Sep 25 03:26:57.792: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:26:57.792: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:26:57.792: INFO: update-demo-nautilus-vft76 is verified up and running
STEP: using delete to clean up resources
Sep 25 03:26:57.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7248'
Sep 25 03:26:58.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 25 03:26:58.924: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 25 03:26:58.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7248'
Sep 25 03:27:00.086: INFO: stderr: "No resources found.\n"
Sep 25 03:27:00.087: INFO: stdout: ""
Sep 25 03:27:00.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7248 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 25 03:27:01.204: INFO: stderr: ""
Sep 25 03:27:01.204: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:27:01.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7248" for this suite.
Sep 25 03:27:07.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:27:07.368: INFO: namespace kubectl-7248 deletion completed in 6.153497233s

• [SLOW TEST:47.898 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
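The Update Demo spec above scales the RC down and back up, each time polling the pod names for the `name=update-demo` label and comparing the count to the desired replicas — the "Replicas for name=update-demo: expected=1 actual=2" lines are that comparison failing while the old pod terminates. A sketch of the count wait, assuming POSIX sh and a hypothetical `list_pods` standing in for the logged `kubectl get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo` call:

```shell
# Replica-count wait sketch.
# Assumption: `list_pods` is a hypothetical stand-in for the kubectl
# template command that prints space-separated pod names for the label.
wait_for_replicas() {
  # wait_for_replicas EXPECTED MAX_ATTEMPTS
  expected=$1
  max=$2
  i=1
  while [ "$i" -le "$max" ]; do
    actual=$(list_pods | wc -w | tr -d ' ')
    if [ "$actual" -eq "$expected" ]; then
      return 0
    fi
    echo "expected=$expected actual=$actual" >&2
    i=$((i + 1))
  done
  return 1
}
```

The real test waits 5 seconds between attempts and, once the count matches, goes on to verify each surviving pod is running the expected image and serving the expected content.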
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:27:07.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:27:07.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9309" for this suite.
Sep 25 03:27:13.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:27:13.654: INFO: namespace services-9309 deletion completed in 6.175086534s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.285 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:27:13.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 25 03:27:13.755: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:27:21.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6538" for this suite.
Sep 25 03:27:43.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:27:43.662: INFO: namespace init-container-6538 deletion completed in 22.160763621s

• [SLOW TEST:30.004 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:27:43.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-25282ce1-bafb-4c1b-8b50-e3981ec9f206
STEP: Creating a pod to test consume secrets
Sep 25 03:27:43.827: INFO: Waiting up to 5m0s for pod "pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a" in namespace "secrets-3706" to be "success or failure"
Sep 25 03:27:43.846: INFO: Pod "pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.784123ms
Sep 25 03:27:45.901: INFO: Pod "pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074301956s
Sep 25 03:27:47.909: INFO: Pod "pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0821053s
STEP: Saw pod success
Sep 25 03:27:47.909: INFO: Pod "pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a" satisfied condition "success or failure"
Sep 25 03:27:47.915: INFO: Trying to get logs from node iruya-worker pod pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a container secret-volume-test: 
STEP: delete the pod
Sep 25 03:27:47.940: INFO: Waiting for pod pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a to disappear
Sep 25 03:27:47.957: INFO: Pod pod-secrets-264dc5a1-08fc-4f5d-a0ab-15a2a6caf77a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:27:47.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3706" for this suite.
Sep 25 03:27:53.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:27:54.121: INFO: namespace secrets-3706 deletion completed in 6.156856813s
STEP: Destroying namespace "secret-namespace-5690" for this suite.
Sep 25 03:28:00.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:00.297: INFO: namespace secret-namespace-5690 deletion completed in 6.175943466s

• [SLOW TEST:16.634 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:00.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2fd95018-d079-4f6a-9af9-52e6987e2a91
STEP: Creating a pod to test consume configMaps
Sep 25 03:28:00.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab" in namespace "configmap-9868" to be "success or failure"
Sep 25 03:28:00.454: INFO: Pod "pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab": Phase="Pending", Reason="", readiness=false. Elapsed: 28.331583ms
Sep 25 03:28:02.461: INFO: Pod "pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034976651s
Sep 25 03:28:04.468: INFO: Pod "pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042048633s
STEP: Saw pod success
Sep 25 03:28:04.468: INFO: Pod "pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab" satisfied condition "success or failure"
Sep 25 03:28:04.474: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab container configmap-volume-test: 
STEP: delete the pod
Sep 25 03:28:04.512: INFO: Waiting for pod pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab to disappear
Sep 25 03:28:04.525: INFO: Pod pod-configmaps-3323680d-06e5-48ab-88f0-4553185a52ab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:28:04.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9868" for this suite.
Sep 25 03:28:10.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:10.784: INFO: namespace configmap-9868 deletion completed in 6.250040706s

• [SLOW TEST:10.485 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:10.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Sep 25 03:28:10.886: INFO: Waiting up to 5m0s for pod "downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34" in namespace "downward-api-9233" to be "success or failure"
Sep 25 03:28:10.896: INFO: Pod "downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34": Phase="Pending", Reason="", readiness=false. Elapsed: 9.457371ms
Sep 25 03:28:12.906: INFO: Pod "downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01982793s
Sep 25 03:28:14.914: INFO: Pod "downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027002274s
STEP: Saw pod success
Sep 25 03:28:14.914: INFO: Pod "downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34" satisfied condition "success or failure"
Sep 25 03:28:14.919: INFO: Trying to get logs from node iruya-worker2 pod downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34 container dapi-container: 
STEP: delete the pod
Sep 25 03:28:14.946: INFO: Waiting for pod downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34 to disappear
Sep 25 03:28:15.026: INFO: Pod downward-api-eb3b80c9-7b29-42f1-91ec-44bb83df6c34 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:28:15.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9233" for this suite.
Sep 25 03:28:21.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:21.200: INFO: namespace downward-api-9233 deletion completed in 6.163042023s

• [SLOW TEST:10.413 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:21.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:28:21.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a" in namespace "projected-8314" to be "success or failure"
Sep 25 03:28:21.437: INFO: Pod "downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.626725ms
Sep 25 03:28:23.445: INFO: Pod "downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049050516s
Sep 25 03:28:25.453: INFO: Pod "downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057310226s
STEP: Saw pod success
Sep 25 03:28:25.454: INFO: Pod "downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a" satisfied condition "success or failure"
Sep 25 03:28:25.459: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a container client-container: 
STEP: delete the pod
Sep 25 03:28:25.487: INFO: Waiting for pod downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a to disappear
Sep 25 03:28:25.529: INFO: Pod downwardapi-volume-6f03b1bf-9db1-4932-808b-93d59aa65d9a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:28:25.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8314" for this suite.
Sep 25 03:28:31.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:31.708: INFO: namespace projected-8314 deletion completed in 6.170791752s

• [SLOW TEST:10.505 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:31.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81
Sep 25 03:28:31.790: INFO: Pod name my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81: Found 0 pods out of 1
Sep 25 03:28:36.797: INFO: Pod name my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81: Found 1 pods out of 1
Sep 25 03:28:36.797: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81" are running
Sep 25 03:28:36.801: INFO: Pod "my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81-8zfb5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 03:28:31 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 03:28:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 03:28:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-25 03:28:31 +0000 UTC Reason: Message:}])
Sep 25 03:28:36.802: INFO: Trying to dial the pod
Sep 25 03:28:41.819: INFO: Controller my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81: Got expected result from replica 1 [my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81-8zfb5]: "my-hostname-basic-344abf54-5698-49f9-a993-621e41663c81-8zfb5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:28:41.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8390" for this suite.
Sep 25 03:28:47.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:47.986: INFO: namespace replication-controller-8390 deletion completed in 6.159053613s

• [SLOW TEST:16.276 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:47.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 25 03:28:48.096: INFO: Waiting up to 5m0s for pod "pod-a8733e64-94bd-42c3-b4ce-337dffe680db" in namespace "emptydir-7143" to be "success or failure"
Sep 25 03:28:48.128: INFO: Pod "pod-a8733e64-94bd-42c3-b4ce-337dffe680db": Phase="Pending", Reason="", readiness=false. Elapsed: 31.232558ms
Sep 25 03:28:50.219: INFO: Pod "pod-a8733e64-94bd-42c3-b4ce-337dffe680db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122703301s
Sep 25 03:28:52.227: INFO: Pod "pod-a8733e64-94bd-42c3-b4ce-337dffe680db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130702329s
STEP: Saw pod success
Sep 25 03:28:52.228: INFO: Pod "pod-a8733e64-94bd-42c3-b4ce-337dffe680db" satisfied condition "success or failure"
Sep 25 03:28:52.233: INFO: Trying to get logs from node iruya-worker2 pod pod-a8733e64-94bd-42c3-b4ce-337dffe680db container test-container: 
STEP: delete the pod
Sep 25 03:28:52.298: INFO: Waiting for pod pod-a8733e64-94bd-42c3-b4ce-337dffe680db to disappear
Sep 25 03:28:52.310: INFO: Pod pod-a8733e64-94bd-42c3-b4ce-337dffe680db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:28:52.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7143" for this suite.
Sep 25 03:28:58.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:28:58.467: INFO: namespace emptydir-7143 deletion completed in 6.14736798s

• [SLOW TEST:10.479 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:28:58.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:29:27.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3947" for this suite.
Sep 25 03:29:33.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:29:33.484: INFO: namespace container-runtime-3947 deletion completed in 6.191536041s

• [SLOW TEST:35.015 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:29:33.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Sep 25 03:29:33.564: INFO: Waiting up to 5m0s for pod "client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567" in namespace "containers-1050" to be "success or failure"
Sep 25 03:29:33.575: INFO: Pod "client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567": Phase="Pending", Reason="", readiness=false. Elapsed: 10.143169ms
Sep 25 03:29:35.596: INFO: Pod "client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031036605s
Sep 25 03:29:37.603: INFO: Pod "client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038004202s
STEP: Saw pod success
Sep 25 03:29:37.603: INFO: Pod "client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567" satisfied condition "success or failure"
Sep 25 03:29:37.608: INFO: Trying to get logs from node iruya-worker pod client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567 container test-container: 
STEP: delete the pod
Sep 25 03:29:37.657: INFO: Waiting for pod client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567 to disappear
Sep 25 03:29:37.666: INFO: Pod client-containers-fcd9d70f-1aa6-49b5-9b29-29ee18ab8567 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:29:37.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1050" for this suite.
Sep 25 03:29:43.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:29:43.828: INFO: namespace containers-1050 deletion completed in 6.154359381s

• [SLOW TEST:10.343 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:29:43.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:29:43.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7329" for this suite.
Sep 25 03:30:06.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:30:06.151: INFO: namespace pods-7329 deletion completed in 22.19305737s

• [SLOW TEST:22.321 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:30:06.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6526
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6526
STEP: Creating statefulset with conflicting port in namespace statefulset-6526
STEP: Waiting until pod test-pod will start running in namespace statefulset-6526
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6526
Sep 25 03:30:10.333: INFO: Observed stateful pod in namespace: statefulset-6526, name: ss-0, uid: 0b78dcb7-af49-46c7-aea3-7dc67a8b19a9, status phase: Pending. Waiting for statefulset controller to delete.
Sep 25 03:30:10.911: INFO: Observed stateful pod in namespace: statefulset-6526, name: ss-0, uid: 0b78dcb7-af49-46c7-aea3-7dc67a8b19a9, status phase: Failed. Waiting for statefulset controller to delete.
Sep 25 03:30:10.936: INFO: Observed stateful pod in namespace: statefulset-6526, name: ss-0, uid: 0b78dcb7-af49-46c7-aea3-7dc67a8b19a9, status phase: Failed. Waiting for statefulset controller to delete.
Sep 25 03:30:10.944: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6526
STEP: Removing pod with conflicting port in namespace statefulset-6526
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6526 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 25 03:30:15.017: INFO: Deleting all statefulset in ns statefulset-6526
Sep 25 03:30:15.022: INFO: Scaling statefulset ss to 0
Sep 25 03:30:35.044: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:30:35.047: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:30:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6526" for this suite.
Sep 25 03:30:41.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:30:41.669: INFO: namespace statefulset-6526 deletion completed in 6.599488207s

• [SLOW TEST:35.516 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:30:41.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Sep 25 03:30:51.955: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:51.955: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.063782       7 log.go:172] (0x910ca80) (0x910cb60) Create stream
I0925 03:30:52.064067       7 log.go:172] (0x910ca80) (0x910cb60) Stream added, broadcasting: 1
I0925 03:30:52.068090       7 log.go:172] (0x910ca80) Reply frame received for 1
I0925 03:30:52.068277       7 log.go:172] (0x910ca80) (0x91e2700) Create stream
I0925 03:30:52.068369       7 log.go:172] (0x910ca80) (0x91e2700) Stream added, broadcasting: 3
I0925 03:30:52.069896       7 log.go:172] (0x910ca80) Reply frame received for 3
I0925 03:30:52.070083       7 log.go:172] (0x910ca80) (0x910cc40) Create stream
I0925 03:30:52.070187       7 log.go:172] (0x910ca80) (0x910cc40) Stream added, broadcasting: 5
I0925 03:30:52.071797       7 log.go:172] (0x910ca80) Reply frame received for 5
I0925 03:30:52.139011       7 log.go:172] (0x910ca80) Data frame received for 5
I0925 03:30:52.139152       7 log.go:172] (0x910cc40) (5) Data frame handling
I0925 03:30:52.139326       7 log.go:172] (0x910ca80) Data frame received for 3
I0925 03:30:52.139520       7 log.go:172] (0x91e2700) (3) Data frame handling
I0925 03:30:52.139704       7 log.go:172] (0x91e2700) (3) Data frame sent
I0925 03:30:52.139852       7 log.go:172] (0x910ca80) Data frame received for 3
I0925 03:30:52.140025       7 log.go:172] (0x91e2700) (3) Data frame handling
I0925 03:30:52.140491       7 log.go:172] (0x910ca80) Data frame received for 1
I0925 03:30:52.140611       7 log.go:172] (0x910cb60) (1) Data frame handling
I0925 03:30:52.140727       7 log.go:172] (0x910cb60) (1) Data frame sent
I0925 03:30:52.140986       7 log.go:172] (0x910ca80) (0x910cb60) Stream removed, broadcasting: 1
I0925 03:30:52.141181       7 log.go:172] (0x910ca80) Go away received
I0925 03:30:52.141596       7 log.go:172] (0x910ca80) (0x910cb60) Stream removed, broadcasting: 1
I0925 03:30:52.141826       7 log.go:172] (0x910ca80) (0x91e2700) Stream removed, broadcasting: 3
I0925 03:30:52.141981       7 log.go:172] (0x910ca80) (0x910cc40) Stream removed, broadcasting: 5
Sep 25 03:30:52.142: INFO: Exec stderr: ""
Sep 25 03:30:52.142: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:52.142: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.247923       7 log.go:172] (0x7cad260) (0x7cad420) Create stream
I0925 03:30:52.248108       7 log.go:172] (0x7cad260) (0x7cad420) Stream added, broadcasting: 1
I0925 03:30:52.252378       7 log.go:172] (0x7cad260) Reply frame received for 1
I0925 03:30:52.252528       7 log.go:172] (0x7cad260) (0x8814000) Create stream
I0925 03:30:52.252593       7 log.go:172] (0x7cad260) (0x8814000) Stream added, broadcasting: 3
I0925 03:30:52.253883       7 log.go:172] (0x7cad260) Reply frame received for 3
I0925 03:30:52.254018       7 log.go:172] (0x7cad260) (0x7cad570) Create stream
I0925 03:30:52.254106       7 log.go:172] (0x7cad260) (0x7cad570) Stream added, broadcasting: 5
I0925 03:30:52.255625       7 log.go:172] (0x7cad260) Reply frame received for 5
I0925 03:30:52.311423       7 log.go:172] (0x7cad260) Data frame received for 3
I0925 03:30:52.311644       7 log.go:172] (0x8814000) (3) Data frame handling
I0925 03:30:52.311785       7 log.go:172] (0x7cad260) Data frame received for 5
I0925 03:30:52.312011       7 log.go:172] (0x7cad570) (5) Data frame handling
I0925 03:30:52.312180       7 log.go:172] (0x8814000) (3) Data frame sent
I0925 03:30:52.312390       7 log.go:172] (0x7cad260) Data frame received for 3
I0925 03:30:52.312569       7 log.go:172] (0x8814000) (3) Data frame handling
I0925 03:30:52.313050       7 log.go:172] (0x7cad260) Data frame received for 1
I0925 03:30:52.313301       7 log.go:172] (0x7cad420) (1) Data frame handling
I0925 03:30:52.313513       7 log.go:172] (0x7cad420) (1) Data frame sent
I0925 03:30:52.313697       7 log.go:172] (0x7cad260) (0x7cad420) Stream removed, broadcasting: 1
I0925 03:30:52.313941       7 log.go:172] (0x7cad260) Go away received
I0925 03:30:52.315124       7 log.go:172] (0x7cad260) (0x7cad420) Stream removed, broadcasting: 1
I0925 03:30:52.315318       7 log.go:172] (0x7cad260) (0x8814000) Stream removed, broadcasting: 3
I0925 03:30:52.315458       7 log.go:172] (0x7cad260) (0x7cad570) Stream removed, broadcasting: 5
Sep 25 03:30:52.315: INFO: Exec stderr: ""
Sep 25 03:30:52.316: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:52.316: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.417302       7 log.go:172] (0x7cad960) (0x7cad9d0) Create stream
I0925 03:30:52.417449       7 log.go:172] (0x7cad960) (0x7cad9d0) Stream added, broadcasting: 1
I0925 03:30:52.422321       7 log.go:172] (0x7cad960) Reply frame received for 1
I0925 03:30:52.422534       7 log.go:172] (0x7cad960) (0x7cada40) Create stream
I0925 03:30:52.422653       7 log.go:172] (0x7cad960) (0x7cada40) Stream added, broadcasting: 3
I0925 03:30:52.426831       7 log.go:172] (0x7cad960) Reply frame received for 3
I0925 03:30:52.427033       7 log.go:172] (0x7cad960) (0x999f8f0) Create stream
I0925 03:30:52.427133       7 log.go:172] (0x7cad960) (0x999f8f0) Stream added, broadcasting: 5
I0925 03:30:52.428937       7 log.go:172] (0x7cad960) Reply frame received for 5
I0925 03:30:52.496149       7 log.go:172] (0x7cad960) Data frame received for 5
I0925 03:30:52.496391       7 log.go:172] (0x999f8f0) (5) Data frame handling
I0925 03:30:52.496578       7 log.go:172] (0x7cad960) Data frame received for 3
I0925 03:30:52.496815       7 log.go:172] (0x7cada40) (3) Data frame handling
I0925 03:30:52.497076       7 log.go:172] (0x7cada40) (3) Data frame sent
I0925 03:30:52.497201       7 log.go:172] (0x7cad960) Data frame received for 3
I0925 03:30:52.497273       7 log.go:172] (0x7cada40) (3) Data frame handling
I0925 03:30:52.497363       7 log.go:172] (0x7cad960) Data frame received for 1
I0925 03:30:52.497501       7 log.go:172] (0x7cad9d0) (1) Data frame handling
I0925 03:30:52.497631       7 log.go:172] (0x7cad9d0) (1) Data frame sent
I0925 03:30:52.497750       7 log.go:172] (0x7cad960) (0x7cad9d0) Stream removed, broadcasting: 1
I0925 03:30:52.497883       7 log.go:172] (0x7cad960) Go away received
I0925 03:30:52.498451       7 log.go:172] (0x7cad960) (0x7cad9d0) Stream removed, broadcasting: 1
I0925 03:30:52.498627       7 log.go:172] (0x7cad960) (0x7cada40) Stream removed, broadcasting: 3
I0925 03:30:52.498761       7 log.go:172] (0x7cad960) (0x999f8f0) Stream removed, broadcasting: 5
Sep 25 03:30:52.498: INFO: Exec stderr: ""
Sep 25 03:30:52.499: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:52.499: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.594457       7 log.go:172] (0x8dac3f0) (0x8dac460) Create stream
I0925 03:30:52.594579       7 log.go:172] (0x8dac3f0) (0x8dac460) Stream added, broadcasting: 1
I0925 03:30:52.597687       7 log.go:172] (0x8dac3f0) Reply frame received for 1
I0925 03:30:52.597857       7 log.go:172] (0x8dac3f0) (0x910cd20) Create stream
I0925 03:30:52.597932       7 log.go:172] (0x8dac3f0) (0x910cd20) Stream added, broadcasting: 3
I0925 03:30:52.599285       7 log.go:172] (0x8dac3f0) Reply frame received for 3
I0925 03:30:52.599446       7 log.go:172] (0x8dac3f0) (0x8dac4d0) Create stream
I0925 03:30:52.599522       7 log.go:172] (0x8dac3f0) (0x8dac4d0) Stream added, broadcasting: 5
I0925 03:30:52.600704       7 log.go:172] (0x8dac3f0) Reply frame received for 5
I0925 03:30:52.664322       7 log.go:172] (0x8dac3f0) Data frame received for 5
I0925 03:30:52.664465       7 log.go:172] (0x8dac4d0) (5) Data frame handling
I0925 03:30:52.664611       7 log.go:172] (0x8dac3f0) Data frame received for 3
I0925 03:30:52.664794       7 log.go:172] (0x910cd20) (3) Data frame handling
I0925 03:30:52.665002       7 log.go:172] (0x910cd20) (3) Data frame sent
I0925 03:30:52.665118       7 log.go:172] (0x8dac3f0) Data frame received for 3
I0925 03:30:52.665218       7 log.go:172] (0x910cd20) (3) Data frame handling
I0925 03:30:52.665400       7 log.go:172] (0x8dac3f0) Data frame received for 1
I0925 03:30:52.665588       7 log.go:172] (0x8dac460) (1) Data frame handling
I0925 03:30:52.665814       7 log.go:172] (0x8dac460) (1) Data frame sent
I0925 03:30:52.666019       7 log.go:172] (0x8dac3f0) (0x8dac460) Stream removed, broadcasting: 1
I0925 03:30:52.666259       7 log.go:172] (0x8dac3f0) Go away received
I0925 03:30:52.666602       7 log.go:172] (0x8dac3f0) (0x8dac460) Stream removed, broadcasting: 1
I0925 03:30:52.666749       7 log.go:172] (0x8dac3f0) (0x910cd20) Stream removed, broadcasting: 3
I0925 03:30:52.666899       7 log.go:172] (0x8dac3f0) (0x8dac4d0) Stream removed, broadcasting: 5
Sep 25 03:30:52.667: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Sep 25 03:30:52.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:52.667: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.766976       7 log.go:172] (0x8814c40) (0x8814d20) Create stream
I0925 03:30:52.767103       7 log.go:172] (0x8814c40) (0x8814d20) Stream added, broadcasting: 1
I0925 03:30:52.770795       7 log.go:172] (0x8814c40) Reply frame received for 1
I0925 03:30:52.771007       7 log.go:172] (0x8814c40) (0x8814e00) Create stream
I0925 03:30:52.771098       7 log.go:172] (0x8814c40) (0x8814e00) Stream added, broadcasting: 3
I0925 03:30:52.772664       7 log.go:172] (0x8814c40) Reply frame received for 3
I0925 03:30:52.772887       7 log.go:172] (0x8814c40) (0x91e28c0) Create stream
I0925 03:30:52.772979       7 log.go:172] (0x8814c40) (0x91e28c0) Stream added, broadcasting: 5
I0925 03:30:52.774326       7 log.go:172] (0x8814c40) Reply frame received for 5
I0925 03:30:52.834720       7 log.go:172] (0x8814c40) Data frame received for 5
I0925 03:30:52.834894       7 log.go:172] (0x91e28c0) (5) Data frame handling
I0925 03:30:52.835046       7 log.go:172] (0x8814c40) Data frame received for 3
I0925 03:30:52.835205       7 log.go:172] (0x8814e00) (3) Data frame handling
I0925 03:30:52.835323       7 log.go:172] (0x8814e00) (3) Data frame sent
I0925 03:30:52.835430       7 log.go:172] (0x8814c40) Data frame received for 3
I0925 03:30:52.835520       7 log.go:172] (0x8814e00) (3) Data frame handling
I0925 03:30:52.835950       7 log.go:172] (0x8814c40) Data frame received for 1
I0925 03:30:52.836022       7 log.go:172] (0x8814d20) (1) Data frame handling
I0925 03:30:52.836093       7 log.go:172] (0x8814d20) (1) Data frame sent
I0925 03:30:52.836189       7 log.go:172] (0x8814c40) (0x8814d20) Stream removed, broadcasting: 1
I0925 03:30:52.836291       7 log.go:172] (0x8814c40) Go away received
I0925 03:30:52.837007       7 log.go:172] (0x8814c40) (0x8814d20) Stream removed, broadcasting: 1
I0925 03:30:52.837193       7 log.go:172] (0x8814c40) (0x8814e00) Stream removed, broadcasting: 3
I0925 03:30:52.837325       7 log.go:172] (0x8814c40) (0x91e28c0) Stream removed, broadcasting: 5
Sep 25 03:30:52.837: INFO: Exec stderr: ""
Sep 25 03:30:52.837: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:52.837: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:52.937908       7 log.go:172] (0x910d340) (0x910d3b0) Create stream
I0925 03:30:52.938078       7 log.go:172] (0x910d340) (0x910d3b0) Stream added, broadcasting: 1
I0925 03:30:52.942759       7 log.go:172] (0x910d340) Reply frame received for 1
I0925 03:30:52.942943       7 log.go:172] (0x910d340) (0x999f9d0) Create stream
I0925 03:30:52.943048       7 log.go:172] (0x910d340) (0x999f9d0) Stream added, broadcasting: 3
I0925 03:30:52.945074       7 log.go:172] (0x910d340) Reply frame received for 3
I0925 03:30:52.945255       7 log.go:172] (0x910d340) (0x999fab0) Create stream
I0925 03:30:52.945365       7 log.go:172] (0x910d340) (0x999fab0) Stream added, broadcasting: 5
I0925 03:30:52.947045       7 log.go:172] (0x910d340) Reply frame received for 5
I0925 03:30:53.009409       7 log.go:172] (0x910d340) Data frame received for 3
I0925 03:30:53.009642       7 log.go:172] (0x999f9d0) (3) Data frame handling
I0925 03:30:53.009815       7 log.go:172] (0x910d340) Data frame received for 5
I0925 03:30:53.010158       7 log.go:172] (0x999fab0) (5) Data frame handling
I0925 03:30:53.010490       7 log.go:172] (0x999f9d0) (3) Data frame sent
I0925 03:30:53.010758       7 log.go:172] (0x910d340) Data frame received for 3
I0925 03:30:53.011058       7 log.go:172] (0x999f9d0) (3) Data frame handling
I0925 03:30:53.011345       7 log.go:172] (0x910d340) Data frame received for 1
I0925 03:30:53.011524       7 log.go:172] (0x910d3b0) (1) Data frame handling
I0925 03:30:53.011709       7 log.go:172] (0x910d3b0) (1) Data frame sent
I0925 03:30:53.011854       7 log.go:172] (0x910d340) (0x910d3b0) Stream removed, broadcasting: 1
I0925 03:30:53.012024       7 log.go:172] (0x910d340) Go away received
I0925 03:30:53.012453       7 log.go:172] (0x910d340) (0x910d3b0) Stream removed, broadcasting: 1
I0925 03:30:53.012616       7 log.go:172] (0x910d340) (0x999f9d0) Stream removed, broadcasting: 3
I0925 03:30:53.012758       7 log.go:172] (0x910d340) (0x999fab0) Stream removed, broadcasting: 5
Sep 25 03:30:53.012: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Sep 25 03:30:53.013: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:53.013: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:53.115320       7 log.go:172] (0x8815260) (0x88152d0) Create stream
I0925 03:30:53.115463       7 log.go:172] (0x8815260) (0x88152d0) Stream added, broadcasting: 1
I0925 03:30:53.120436       7 log.go:172] (0x8815260) Reply frame received for 1
I0925 03:30:53.120596       7 log.go:172] (0x8815260) (0x88153b0) Create stream
I0925 03:30:53.120684       7 log.go:172] (0x8815260) (0x88153b0) Stream added, broadcasting: 3
I0925 03:30:53.122550       7 log.go:172] (0x8815260) Reply frame received for 3
I0925 03:30:53.122707       7 log.go:172] (0x8815260) (0x910d420) Create stream
I0925 03:30:53.122798       7 log.go:172] (0x8815260) (0x910d420) Stream added, broadcasting: 5
I0925 03:30:53.124376       7 log.go:172] (0x8815260) Reply frame received for 5
I0925 03:30:53.179334       7 log.go:172] (0x8815260) Data frame received for 3
I0925 03:30:53.179517       7 log.go:172] (0x88153b0) (3) Data frame handling
I0925 03:30:53.179664       7 log.go:172] (0x8815260) Data frame received for 5
I0925 03:30:53.179938       7 log.go:172] (0x910d420) (5) Data frame handling
I0925 03:30:53.180220       7 log.go:172] (0x88153b0) (3) Data frame sent
I0925 03:30:53.180527       7 log.go:172] (0x8815260) Data frame received for 3
I0925 03:30:53.180690       7 log.go:172] (0x88153b0) (3) Data frame handling
I0925 03:30:53.180983       7 log.go:172] (0x8815260) Data frame received for 1
I0925 03:30:53.181053       7 log.go:172] (0x88152d0) (1) Data frame handling
I0925 03:30:53.181143       7 log.go:172] (0x88152d0) (1) Data frame sent
I0925 03:30:53.181240       7 log.go:172] (0x8815260) (0x88152d0) Stream removed, broadcasting: 1
I0925 03:30:53.181411       7 log.go:172] (0x8815260) Go away received
I0925 03:30:53.181957       7 log.go:172] (0x8815260) (0x88152d0) Stream removed, broadcasting: 1
I0925 03:30:53.182101       7 log.go:172] (0x8815260) (0x88153b0) Stream removed, broadcasting: 3
I0925 03:30:53.182224       7 log.go:172] (0x8815260) (0x910d420) Stream removed, broadcasting: 5
Sep 25 03:30:53.182: INFO: Exec stderr: ""
Sep 25 03:30:53.182: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:53.182: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:53.286684       7 log.go:172] (0x8dacaf0) (0x8dacbd0) Create stream
I0925 03:30:53.286814       7 log.go:172] (0x8dacaf0) (0x8dacbd0) Stream added, broadcasting: 1
I0925 03:30:53.291634       7 log.go:172] (0x8dacaf0) Reply frame received for 1
I0925 03:30:53.291788       7 log.go:172] (0x8dacaf0) (0x999fb90) Create stream
I0925 03:30:53.291866       7 log.go:172] (0x8dacaf0) (0x999fb90) Stream added, broadcasting: 3
I0925 03:30:53.293301       7 log.go:172] (0x8dacaf0) Reply frame received for 3
I0925 03:30:53.293471       7 log.go:172] (0x8dacaf0) (0x8daccb0) Create stream
I0925 03:30:53.293558       7 log.go:172] (0x8dacaf0) (0x8daccb0) Stream added, broadcasting: 5
I0925 03:30:53.295112       7 log.go:172] (0x8dacaf0) Reply frame received for 5
I0925 03:30:53.348634       7 log.go:172] (0x8dacaf0) Data frame received for 3
I0925 03:30:53.348980       7 log.go:172] (0x999fb90) (3) Data frame handling
I0925 03:30:53.349188       7 log.go:172] (0x8dacaf0) Data frame received for 5
I0925 03:30:53.349448       7 log.go:172] (0x8daccb0) (5) Data frame handling
I0925 03:30:53.349692       7 log.go:172] (0x999fb90) (3) Data frame sent
I0925 03:30:53.349942       7 log.go:172] (0x8dacaf0) Data frame received for 3
I0925 03:30:53.350178       7 log.go:172] (0x999fb90) (3) Data frame handling
I0925 03:30:53.350420       7 log.go:172] (0x8dacaf0) Data frame received for 1
I0925 03:30:53.350570       7 log.go:172] (0x8dacbd0) (1) Data frame handling
I0925 03:30:53.350736       7 log.go:172] (0x8dacbd0) (1) Data frame sent
I0925 03:30:53.350902       7 log.go:172] (0x8dacaf0) (0x8dacbd0) Stream removed, broadcasting: 1
I0925 03:30:53.351153       7 log.go:172] (0x8dacaf0) Go away received
I0925 03:30:53.351731       7 log.go:172] (0x8dacaf0) (0x8dacbd0) Stream removed, broadcasting: 1
I0925 03:30:53.351957       7 log.go:172] (0x8dacaf0) (0x999fb90) Stream removed, broadcasting: 3
I0925 03:30:53.352158       7 log.go:172] (0x8dacaf0) (0x8daccb0) Stream removed, broadcasting: 5
Sep 25 03:30:53.352: INFO: Exec stderr: ""
Sep 25 03:30:53.352: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:53.353: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:53.460273       7 log.go:172] (0x7cadd50) (0x7caddc0) Create stream
I0925 03:30:53.460461       7 log.go:172] (0x7cadd50) (0x7caddc0) Stream added, broadcasting: 1
I0925 03:30:53.466897       7 log.go:172] (0x7cadd50) Reply frame received for 1
I0925 03:30:53.467199       7 log.go:172] (0x7cadd50) (0x7cade30) Create stream
I0925 03:30:53.467378       7 log.go:172] (0x7cadd50) (0x7cade30) Stream added, broadcasting: 3
I0925 03:30:53.471543       7 log.go:172] (0x7cadd50) Reply frame received for 3
I0925 03:30:53.471798       7 log.go:172] (0x7cadd50) (0x910d490) Create stream
I0925 03:30:53.471929       7 log.go:172] (0x7cadd50) (0x910d490) Stream added, broadcasting: 5
I0925 03:30:53.473796       7 log.go:172] (0x7cadd50) Reply frame received for 5
I0925 03:30:53.542110       7 log.go:172] (0x7cadd50) Data frame received for 5
I0925 03:30:53.542324       7 log.go:172] (0x910d490) (5) Data frame handling
I0925 03:30:53.542468       7 log.go:172] (0x7cadd50) Data frame received for 3
I0925 03:30:53.542644       7 log.go:172] (0x7cade30) (3) Data frame handling
I0925 03:30:53.542834       7 log.go:172] (0x7cade30) (3) Data frame sent
I0925 03:30:53.542996       7 log.go:172] (0x7cadd50) Data frame received for 3
I0925 03:30:53.543156       7 log.go:172] (0x7cade30) (3) Data frame handling
I0925 03:30:53.544051       7 log.go:172] (0x7cadd50) Data frame received for 1
I0925 03:30:53.544231       7 log.go:172] (0x7caddc0) (1) Data frame handling
I0925 03:30:53.544402       7 log.go:172] (0x7caddc0) (1) Data frame sent
I0925 03:30:53.544552       7 log.go:172] (0x7cadd50) (0x7caddc0) Stream removed, broadcasting: 1
I0925 03:30:53.544737       7 log.go:172] (0x7cadd50) Go away received
I0925 03:30:53.545202       7 log.go:172] (0x7cadd50) (0x7caddc0) Stream removed, broadcasting: 1
I0925 03:30:53.545383       7 log.go:172] (0x7cadd50) (0x7cade30) Stream removed, broadcasting: 3
I0925 03:30:53.545479       7 log.go:172] (0x7cadd50) (0x910d490) Stream removed, broadcasting: 5
Sep 25 03:30:53.545: INFO: Exec stderr: ""
Sep 25 03:30:53.545: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 03:30:53.545: INFO: >>> kubeConfig: /root/.kube/config
I0925 03:30:53.639068       7 log.go:172] (0x91e3180) (0x91e3340) Create stream
I0925 03:30:53.639201       7 log.go:172] (0x91e3180) (0x91e3340) Stream added, broadcasting: 1
I0925 03:30:53.642143       7 log.go:172] (0x91e3180) Reply frame received for 1
I0925 03:30:53.642263       7 log.go:172] (0x91e3180) (0x8dacd90) Create stream
I0925 03:30:53.642321       7 log.go:172] (0x91e3180) (0x8dacd90) Stream added, broadcasting: 3
I0925 03:30:53.643598       7 log.go:172] (0x91e3180) Reply frame received for 3
I0925 03:30:53.643708       7 log.go:172] (0x91e3180) (0x91e3500) Create stream
I0925 03:30:53.643774       7 log.go:172] (0x91e3180) (0x91e3500) Stream added, broadcasting: 5
I0925 03:30:53.644958       7 log.go:172] (0x91e3180) Reply frame received for 5
I0925 03:30:53.721435       7 log.go:172] (0x91e3180) Data frame received for 3
I0925 03:30:53.721645       7 log.go:172] (0x8dacd90) (3) Data frame handling
I0925 03:30:53.721774       7 log.go:172] (0x91e3180) Data frame received for 5
I0925 03:30:53.721958       7 log.go:172] (0x91e3500) (5) Data frame handling
I0925 03:30:53.722239       7 log.go:172] (0x8dacd90) (3) Data frame sent
I0925 03:30:53.722432       7 log.go:172] (0x91e3180) Data frame received for 3
I0925 03:30:53.722545       7 log.go:172] (0x8dacd90) (3) Data frame handling
I0925 03:30:53.722695       7 log.go:172] (0x91e3180) Data frame received for 1
I0925 03:30:53.722825       7 log.go:172] (0x91e3340) (1) Data frame handling
I0925 03:30:53.722983       7 log.go:172] (0x91e3340) (1) Data frame sent
I0925 03:30:53.723139       7 log.go:172] (0x91e3180) (0x91e3340) Stream removed, broadcasting: 1
I0925 03:30:53.723313       7 log.go:172] (0x91e3180) Go away received
I0925 03:30:53.723759       7 log.go:172] (0x91e3180) (0x91e3340) Stream removed, broadcasting: 1
I0925 03:30:53.723922       7 log.go:172] (0x91e3180) (0x8dacd90) Stream removed, broadcasting: 3
I0925 03:30:53.724048       7 log.go:172] (0x91e3180) (0x91e3500) Stream removed, broadcasting: 5
Sep 25 03:30:53.724: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:30:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1000" for this suite.
Sep 25 03:31:33.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:31:33.910: INFO: namespace e2e-kubelet-etc-hosts-1000 deletion completed in 40.175161785s

• [SLOW TEST:52.238 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:31:33.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Sep 25 03:31:34.015: INFO: PodSpec: initContainers in spec.initContainers
Sep 25 03:32:28.101: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0b7e6706-8d7f-4b95-8c9c-e3feb243a27a", GenerateName:"", Namespace:"init-container-4068", SelfLink:"/api/v1/namespaces/init-container-4068/pods/pod-init-0b7e6706-8d7f-4b95-8c9c-e3feb243a27a", UID:"84aa11b6-1751-42b9-bfa0-e7d0fc349ece", ResourceVersion:"332032", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736601494, loc:(*time.Location)(0x67985e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"14285945"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-thxn7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x81fe6a0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thxn7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thxn7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thxn7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x84ea0e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x6e880f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x84ea170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x84ea190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x84ea198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x84ea19c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601494, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601494, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601494, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601494, loc:(*time.Location)(0x67985e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.222", StartTime:(*v1.Time)(0x81ff080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x81ff140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x79902d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://31c8bb2b763234c8ad8cef7bd0b6f0f11a8161d0a20627df4c84c675a66e48e3"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x77f0040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x77f0030), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:32:28.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4068" for this suite.
Sep 25 03:32:50.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:32:50.337: INFO: namespace init-container-4068 deletion completed in 22.186813429s

• [SLOW TEST:76.426 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
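The PodSpec dumped above ("init container has failed twice") corresponds to a pod roughly like the following manifest. This is a hedged reconstruction from the logged fields only (names, images, commands, and resource values are taken from the dump; everything else is defaulted), not the test's actual source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-0b7e6706-8d7f-4b95-8c9c-e3feb243a27a
  labels:
    name: foo
spec:
  restartPolicy: Always          # RestartAlways: failed init containers are retried forever
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]      # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"       # 50Mi, as logged
      requests:
        cpu: 100m
        memory: "52428800"
```

This matches the logged status: init1 is Running with RestartCount:3 and a Terminated last state, init2 and run1 are stuck Waiting with empty ContainerIDs, and the pod stays Pending with Reason "ContainersNotInitialized" — the behavior the conformance test asserts.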
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:32:50.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7044
I0925 03:32:50.438157       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7044, replica count: 1
I0925 03:32:51.489862       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:32:52.490620       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0925 03:32:53.491327       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 25 03:32:53.626: INFO: Created: latency-svc-gcjlh
Sep 25 03:32:53.634: INFO: Got endpoints: latency-svc-gcjlh [39.768946ms]
Sep 25 03:32:53.682: INFO: Created: latency-svc-4br25
Sep 25 03:32:53.686: INFO: Got endpoints: latency-svc-4br25 [50.840237ms]
Sep 25 03:32:53.734: INFO: Created: latency-svc-r5shc
Sep 25 03:32:53.748: INFO: Got endpoints: latency-svc-r5shc [112.841053ms]
Sep 25 03:32:53.772: INFO: Created: latency-svc-n4dsv
Sep 25 03:32:53.820: INFO: Got endpoints: latency-svc-n4dsv [185.357976ms]
Sep 25 03:32:53.847: INFO: Created: latency-svc-5lmvz
Sep 25 03:32:53.861: INFO: Got endpoints: latency-svc-5lmvz [226.542573ms]
Sep 25 03:32:53.884: INFO: Created: latency-svc-w99cn
Sep 25 03:32:53.909: INFO: Got endpoints: latency-svc-w99cn [274.665217ms]
Sep 25 03:32:53.964: INFO: Created: latency-svc-5g82w
Sep 25 03:32:53.967: INFO: Got endpoints: latency-svc-5g82w [332.387984ms]
Sep 25 03:32:53.998: INFO: Created: latency-svc-cl5cz
Sep 25 03:32:54.021: INFO: Got endpoints: latency-svc-cl5cz [386.727723ms]
Sep 25 03:32:54.052: INFO: Created: latency-svc-d5zwt
Sep 25 03:32:54.063: INFO: Got endpoints: latency-svc-d5zwt [427.451813ms]
Sep 25 03:32:54.107: INFO: Created: latency-svc-gc6wk
Sep 25 03:32:54.114: INFO: Got endpoints: latency-svc-gc6wk [479.451102ms]
Sep 25 03:32:54.135: INFO: Created: latency-svc-42vct
Sep 25 03:32:54.145: INFO: Got endpoints: latency-svc-42vct [509.39814ms]
Sep 25 03:32:54.166: INFO: Created: latency-svc-vbj7h
Sep 25 03:32:54.183: INFO: Got endpoints: latency-svc-vbj7h [547.578946ms]
Sep 25 03:32:54.246: INFO: Created: latency-svc-7vlkq
Sep 25 03:32:54.251: INFO: Got endpoints: latency-svc-7vlkq [615.551631ms]
Sep 25 03:32:54.281: INFO: Created: latency-svc-4f8xh
Sep 25 03:32:54.301: INFO: Got endpoints: latency-svc-4f8xh [665.950805ms]
Sep 25 03:32:54.414: INFO: Created: latency-svc-h2qbg
Sep 25 03:32:54.417: INFO: Got endpoints: latency-svc-h2qbg [782.494153ms]
Sep 25 03:32:54.460: INFO: Created: latency-svc-dgg8q
Sep 25 03:32:54.473: INFO: Got endpoints: latency-svc-dgg8q [837.702167ms]
Sep 25 03:32:54.502: INFO: Created: latency-svc-gnrmc
Sep 25 03:32:54.550: INFO: Got endpoints: latency-svc-gnrmc [863.802323ms]
Sep 25 03:32:54.562: INFO: Created: latency-svc-dqvdt
Sep 25 03:32:54.581: INFO: Got endpoints: latency-svc-dqvdt [833.328135ms]
Sep 25 03:32:54.622: INFO: Created: latency-svc-78qwh
Sep 25 03:32:54.635: INFO: Got endpoints: latency-svc-78qwh [814.566694ms]
Sep 25 03:32:54.701: INFO: Created: latency-svc-ll68f
Sep 25 03:32:54.704: INFO: Got endpoints: latency-svc-ll68f [842.864994ms]
Sep 25 03:32:54.772: INFO: Created: latency-svc-2xrmw
Sep 25 03:32:54.785: INFO: Got endpoints: latency-svc-2xrmw [875.704345ms]
Sep 25 03:32:54.840: INFO: Created: latency-svc-9z7vm
Sep 25 03:32:54.863: INFO: Created: latency-svc-w699p
Sep 25 03:32:54.863: INFO: Got endpoints: latency-svc-9z7vm [896.099866ms]
Sep 25 03:32:54.893: INFO: Got endpoints: latency-svc-w699p [871.3734ms]
Sep 25 03:32:54.921: INFO: Created: latency-svc-4cpsc
Sep 25 03:32:54.933: INFO: Got endpoints: latency-svc-4cpsc [870.285916ms]
Sep 25 03:32:55.000: INFO: Created: latency-svc-8tdmf
Sep 25 03:32:55.003: INFO: Got endpoints: latency-svc-8tdmf [888.477796ms]
Sep 25 03:32:55.054: INFO: Created: latency-svc-vpmdd
Sep 25 03:32:55.065: INFO: Got endpoints: latency-svc-vpmdd [920.282198ms]
Sep 25 03:32:55.146: INFO: Created: latency-svc-2999v
Sep 25 03:32:55.148: INFO: Got endpoints: latency-svc-2999v [964.492788ms]
Sep 25 03:32:55.178: INFO: Created: latency-svc-q9gfq
Sep 25 03:32:55.192: INFO: Got endpoints: latency-svc-q9gfq [940.964428ms]
Sep 25 03:32:55.221: INFO: Created: latency-svc-6kbzr
Sep 25 03:32:55.305: INFO: Got endpoints: latency-svc-6kbzr [1.003370678s]
Sep 25 03:32:55.317: INFO: Created: latency-svc-qz6m9
Sep 25 03:32:55.342: INFO: Got endpoints: latency-svc-qz6m9 [924.692773ms]
Sep 25 03:32:55.360: INFO: Created: latency-svc-6ss4k
Sep 25 03:32:55.369: INFO: Got endpoints: latency-svc-6ss4k [896.573126ms]
Sep 25 03:32:55.449: INFO: Created: latency-svc-zdtdf
Sep 25 03:32:55.453: INFO: Got endpoints: latency-svc-zdtdf [902.305085ms]
Sep 25 03:32:55.485: INFO: Created: latency-svc-kq2kw
Sep 25 03:32:55.515: INFO: Got endpoints: latency-svc-kq2kw [933.687515ms]
Sep 25 03:32:55.546: INFO: Created: latency-svc-z8p9p
Sep 25 03:32:55.610: INFO: Got endpoints: latency-svc-z8p9p [974.947569ms]
Sep 25 03:32:55.616: INFO: Created: latency-svc-jjwn5
Sep 25 03:32:55.622: INFO: Got endpoints: latency-svc-jjwn5 [917.461464ms]
Sep 25 03:32:55.647: INFO: Created: latency-svc-2r99h
Sep 25 03:32:55.659: INFO: Got endpoints: latency-svc-2r99h [873.161465ms]
Sep 25 03:32:55.683: INFO: Created: latency-svc-47vsz
Sep 25 03:32:55.694: INFO: Got endpoints: latency-svc-47vsz [830.758552ms]
Sep 25 03:32:55.748: INFO: Created: latency-svc-g7psn
Sep 25 03:32:55.751: INFO: Got endpoints: latency-svc-g7psn [857.814685ms]
Sep 25 03:32:55.779: INFO: Created: latency-svc-9lrss
Sep 25 03:32:55.794: INFO: Got endpoints: latency-svc-9lrss [860.86764ms]
Sep 25 03:32:55.821: INFO: Created: latency-svc-l4v8k
Sep 25 03:32:55.842: INFO: Got endpoints: latency-svc-l4v8k [839.104465ms]
Sep 25 03:32:55.909: INFO: Created: latency-svc-nlbgq
Sep 25 03:32:55.920: INFO: Got endpoints: latency-svc-nlbgq [854.471902ms]
Sep 25 03:32:55.971: INFO: Created: latency-svc-2fdff
Sep 25 03:32:55.999: INFO: Got endpoints: latency-svc-2fdff [850.497029ms]
Sep 25 03:32:56.054: INFO: Created: latency-svc-4zs88
Sep 25 03:32:56.059: INFO: Got endpoints: latency-svc-4zs88 [866.619161ms]
Sep 25 03:32:56.097: INFO: Created: latency-svc-cplnz
Sep 25 03:32:56.146: INFO: Got endpoints: latency-svc-cplnz [840.483522ms]
Sep 25 03:32:56.203: INFO: Created: latency-svc-p97np
Sep 25 03:32:56.209: INFO: Got endpoints: latency-svc-p97np [866.996009ms]
Sep 25 03:32:56.235: INFO: Created: latency-svc-69dcd
Sep 25 03:32:56.251: INFO: Got endpoints: latency-svc-69dcd [881.694311ms]
Sep 25 03:32:56.271: INFO: Created: latency-svc-5xlgq
Sep 25 03:32:56.288: INFO: Got endpoints: latency-svc-5xlgq [835.47871ms]
Sep 25 03:32:56.342: INFO: Created: latency-svc-xf5mj
Sep 25 03:32:56.345: INFO: Got endpoints: latency-svc-xf5mj [829.911967ms]
Sep 25 03:32:56.379: INFO: Created: latency-svc-c9jrt
Sep 25 03:32:56.407: INFO: Got endpoints: latency-svc-c9jrt [796.060113ms]
Sep 25 03:32:56.439: INFO: Created: latency-svc-4pvf7
Sep 25 03:32:56.502: INFO: Got endpoints: latency-svc-4pvf7 [879.508661ms]
Sep 25 03:32:56.504: INFO: Created: latency-svc-sm2dp
Sep 25 03:32:56.536: INFO: Got endpoints: latency-svc-sm2dp [876.295091ms]
Sep 25 03:32:56.576: INFO: Created: latency-svc-d5h2z
Sep 25 03:32:56.589: INFO: Got endpoints: latency-svc-d5h2z [893.879668ms]
Sep 25 03:32:56.682: INFO: Created: latency-svc-84457
Sep 25 03:32:56.685: INFO: Got endpoints: latency-svc-84457 [933.120647ms]
Sep 25 03:32:56.739: INFO: Created: latency-svc-tkgks
Sep 25 03:32:56.751: INFO: Got endpoints: latency-svc-tkgks [956.521666ms]
Sep 25 03:32:56.769: INFO: Created: latency-svc-lnjsd
Sep 25 03:32:56.781: INFO: Got endpoints: latency-svc-lnjsd [938.827047ms]
Sep 25 03:32:56.846: INFO: Created: latency-svc-c2jd8
Sep 25 03:32:56.847: INFO: Got endpoints: latency-svc-c2jd8 [926.982638ms]
Sep 25 03:32:56.876: INFO: Created: latency-svc-sq8fp
Sep 25 03:32:56.890: INFO: Got endpoints: latency-svc-sq8fp [890.904502ms]
Sep 25 03:32:56.913: INFO: Created: latency-svc-mvgqd
Sep 25 03:32:56.926: INFO: Got endpoints: latency-svc-mvgqd [867.004014ms]
Sep 25 03:32:56.994: INFO: Created: latency-svc-qbn5c
Sep 25 03:32:56.998: INFO: Got endpoints: latency-svc-qbn5c [851.605521ms]
Sep 25 03:32:57.027: INFO: Created: latency-svc-w694g
Sep 25 03:32:57.041: INFO: Got endpoints: latency-svc-w694g [831.352326ms]
Sep 25 03:32:57.074: INFO: Created: latency-svc-h6krg
Sep 25 03:32:57.089: INFO: Got endpoints: latency-svc-h6krg [837.403135ms]
Sep 25 03:32:57.155: INFO: Created: latency-svc-gvqpg
Sep 25 03:32:57.161: INFO: Got endpoints: latency-svc-gvqpg [872.281487ms]
Sep 25 03:32:57.182: INFO: Created: latency-svc-52tmd
Sep 25 03:32:57.197: INFO: Got endpoints: latency-svc-52tmd [851.908191ms]
Sep 25 03:32:57.225: INFO: Created: latency-svc-ntk9z
Sep 25 03:32:57.298: INFO: Got endpoints: latency-svc-ntk9z [891.290273ms]
Sep 25 03:32:57.328: INFO: Created: latency-svc-j2dd9
Sep 25 03:32:57.362: INFO: Got endpoints: latency-svc-j2dd9 [859.907264ms]
Sep 25 03:32:57.443: INFO: Created: latency-svc-qf68v
Sep 25 03:32:57.450: INFO: Got endpoints: latency-svc-qf68v [913.961679ms]
Sep 25 03:32:57.471: INFO: Created: latency-svc-bbvzq
Sep 25 03:32:57.486: INFO: Got endpoints: latency-svc-bbvzq [897.467684ms]
Sep 25 03:32:57.513: INFO: Created: latency-svc-5jbr7
Sep 25 03:32:57.529: INFO: Got endpoints: latency-svc-5jbr7 [844.028418ms]
Sep 25 03:32:57.594: INFO: Created: latency-svc-4qrjb
Sep 25 03:32:57.618: INFO: Got endpoints: latency-svc-4qrjb [866.703349ms]
Sep 25 03:32:57.637: INFO: Created: latency-svc-qc8z9
Sep 25 03:32:57.649: INFO: Got endpoints: latency-svc-qc8z9 [867.426063ms]
Sep 25 03:32:57.674: INFO: Created: latency-svc-9kxzn
Sep 25 03:32:57.692: INFO: Got endpoints: latency-svc-9kxzn [843.767014ms]
Sep 25 03:32:57.747: INFO: Created: latency-svc-gtp6w
Sep 25 03:32:57.751: INFO: Got endpoints: latency-svc-gtp6w [860.509786ms]
Sep 25 03:32:57.783: INFO: Created: latency-svc-vxc4c
Sep 25 03:32:57.800: INFO: Got endpoints: latency-svc-vxc4c [873.545996ms]
Sep 25 03:32:57.842: INFO: Created: latency-svc-7hq8m
Sep 25 03:32:57.910: INFO: Got endpoints: latency-svc-7hq8m [911.617891ms]
Sep 25 03:32:57.927: INFO: Created: latency-svc-6crbs
Sep 25 03:32:57.950: INFO: Got endpoints: latency-svc-6crbs [908.805703ms]
Sep 25 03:32:57.986: INFO: Created: latency-svc-wqzqg
Sep 25 03:32:58.071: INFO: Got endpoints: latency-svc-wqzqg [981.582064ms]
Sep 25 03:32:58.073: INFO: Created: latency-svc-gs6tc
Sep 25 03:32:58.089: INFO: Got endpoints: latency-svc-gs6tc [927.183216ms]
Sep 25 03:32:58.119: INFO: Created: latency-svc-66h4n
Sep 25 03:32:58.131: INFO: Got endpoints: latency-svc-66h4n [932.850832ms]
Sep 25 03:32:58.154: INFO: Created: latency-svc-dhjt7
Sep 25 03:32:58.168: INFO: Got endpoints: latency-svc-dhjt7 [869.61462ms]
Sep 25 03:32:58.210: INFO: Created: latency-svc-9v9hg
Sep 25 03:32:58.212: INFO: Got endpoints: latency-svc-9v9hg [849.698435ms]
Sep 25 03:32:58.239: INFO: Created: latency-svc-2bz48
Sep 25 03:32:58.252: INFO: Got endpoints: latency-svc-2bz48 [801.643709ms]
Sep 25 03:32:58.275: INFO: Created: latency-svc-2bw84
Sep 25 03:32:58.288: INFO: Got endpoints: latency-svc-2bw84 [801.621828ms]
Sep 25 03:32:58.347: INFO: Created: latency-svc-zkl2l
Sep 25 03:32:58.350: INFO: Got endpoints: latency-svc-zkl2l [820.599514ms]
Sep 25 03:32:58.377: INFO: Created: latency-svc-nrjh2
Sep 25 03:32:58.390: INFO: Got endpoints: latency-svc-nrjh2 [772.068245ms]
Sep 25 03:32:58.412: INFO: Created: latency-svc-j5pft
Sep 25 03:32:58.433: INFO: Got endpoints: latency-svc-j5pft [783.879326ms]
Sep 25 03:32:58.515: INFO: Created: latency-svc-9l52h
Sep 25 03:32:58.518: INFO: Got endpoints: latency-svc-9l52h [826.511652ms]
Sep 25 03:32:58.570: INFO: Created: latency-svc-dj5j6
Sep 25 03:32:58.583: INFO: Got endpoints: latency-svc-dj5j6 [832.482039ms]
Sep 25 03:32:58.605: INFO: Created: latency-svc-l4f9j
Sep 25 03:32:58.675: INFO: Got endpoints: latency-svc-l4f9j [875.531941ms]
Sep 25 03:32:58.679: INFO: Created: latency-svc-wwtxm
Sep 25 03:32:58.709: INFO: Got endpoints: latency-svc-wwtxm [799.536302ms]
Sep 25 03:32:58.742: INFO: Created: latency-svc-nb9pq
Sep 25 03:32:58.758: INFO: Got endpoints: latency-svc-nb9pq [807.813168ms]
Sep 25 03:32:58.820: INFO: Created: latency-svc-zbqjs
Sep 25 03:32:58.832: INFO: Got endpoints: latency-svc-zbqjs [760.569152ms]
Sep 25 03:32:58.874: INFO: Created: latency-svc-5gp79
Sep 25 03:32:58.917: INFO: Got endpoints: latency-svc-5gp79 [827.759282ms]
Sep 25 03:32:58.970: INFO: Created: latency-svc-4p47p
Sep 25 03:32:58.981: INFO: Got endpoints: latency-svc-4p47p [850.507653ms]
Sep 25 03:32:59.025: INFO: Created: latency-svc-kgr8w
Sep 25 03:32:59.035: INFO: Got endpoints: latency-svc-kgr8w [866.340439ms]
Sep 25 03:32:59.060: INFO: Created: latency-svc-4jrfp
Sep 25 03:32:59.120: INFO: Got endpoints: latency-svc-4jrfp [907.552116ms]
Sep 25 03:32:59.123: INFO: Created: latency-svc-9sw46
Sep 25 03:32:59.125: INFO: Got endpoints: latency-svc-9sw46 [872.884074ms]
Sep 25 03:32:59.156: INFO: Created: latency-svc-9d58l
Sep 25 03:32:59.168: INFO: Got endpoints: latency-svc-9d58l [878.908807ms]
Sep 25 03:32:59.192: INFO: Created: latency-svc-td94t
Sep 25 03:32:59.204: INFO: Got endpoints: latency-svc-td94t [853.761555ms]
Sep 25 03:32:59.269: INFO: Created: latency-svc-z4fv6
Sep 25 03:32:59.272: INFO: Got endpoints: latency-svc-z4fv6 [881.494163ms]
Sep 25 03:32:59.295: INFO: Created: latency-svc-r57qw
Sep 25 03:32:59.313: INFO: Got endpoints: latency-svc-r57qw [879.112122ms]
Sep 25 03:32:59.349: INFO: Created: latency-svc-mcbmc
Sep 25 03:32:59.424: INFO: Got endpoints: latency-svc-mcbmc [905.592965ms]
Sep 25 03:32:59.428: INFO: Created: latency-svc-b46hz
Sep 25 03:32:59.433: INFO: Got endpoints: latency-svc-b46hz [849.290919ms]
Sep 25 03:32:59.468: INFO: Created: latency-svc-zg9nx
Sep 25 03:32:59.482: INFO: Got endpoints: latency-svc-zg9nx [805.897557ms]
Sep 25 03:32:59.498: INFO: Created: latency-svc-z8g5z
Sep 25 03:32:59.518: INFO: Got endpoints: latency-svc-z8g5z [808.338161ms]
Sep 25 03:32:59.568: INFO: Created: latency-svc-xq25b
Sep 25 03:32:59.571: INFO: Got endpoints: latency-svc-xq25b [812.999742ms]
Sep 25 03:32:59.630: INFO: Created: latency-svc-j6qs9
Sep 25 03:32:59.644: INFO: Got endpoints: latency-svc-j6qs9 [811.911033ms]
Sep 25 03:32:59.666: INFO: Created: latency-svc-scbcw
Sep 25 03:32:59.730: INFO: Got endpoints: latency-svc-scbcw [812.902334ms]
Sep 25 03:32:59.732: INFO: Created: latency-svc-6w29b
Sep 25 03:32:59.756: INFO: Got endpoints: latency-svc-6w29b [774.339904ms]
Sep 25 03:32:59.787: INFO: Created: latency-svc-s55c8
Sep 25 03:32:59.816: INFO: Got endpoints: latency-svc-s55c8 [780.9072ms]
Sep 25 03:32:59.874: INFO: Created: latency-svc-kqpxk
Sep 25 03:32:59.903: INFO: Got endpoints: latency-svc-kqpxk [782.84205ms]
Sep 25 03:32:59.930: INFO: Created: latency-svc-t6c7s
Sep 25 03:32:59.945: INFO: Got endpoints: latency-svc-t6c7s [819.840336ms]
Sep 25 03:32:59.967: INFO: Created: latency-svc-88q9d
Sep 25 03:32:59.999: INFO: Got endpoints: latency-svc-88q9d [831.167599ms]
Sep 25 03:33:00.026: INFO: Created: latency-svc-sq9lc
Sep 25 03:33:00.042: INFO: Got endpoints: latency-svc-sq9lc [837.711331ms]
Sep 25 03:33:00.074: INFO: Created: latency-svc-vhl8h
Sep 25 03:33:00.155: INFO: Got endpoints: latency-svc-vhl8h [882.342462ms]
Sep 25 03:33:00.158: INFO: Created: latency-svc-m55nj
Sep 25 03:33:00.168: INFO: Got endpoints: latency-svc-m55nj [854.788623ms]
Sep 25 03:33:00.188: INFO: Created: latency-svc-rkdmc
Sep 25 03:33:00.218: INFO: Got endpoints: latency-svc-rkdmc [793.165001ms]
Sep 25 03:33:00.248: INFO: Created: latency-svc-9k4px
Sep 25 03:33:00.317: INFO: Got endpoints: latency-svc-9k4px [883.696472ms]
Sep 25 03:33:00.322: INFO: Created: latency-svc-7qh4h
Sep 25 03:33:00.324: INFO: Got endpoints: latency-svc-7qh4h [842.355973ms]
Sep 25 03:33:00.344: INFO: Created: latency-svc-xqrrc
Sep 25 03:33:00.355: INFO: Got endpoints: latency-svc-xqrrc [836.895016ms]
Sep 25 03:33:00.374: INFO: Created: latency-svc-pwrbn
Sep 25 03:33:00.391: INFO: Got endpoints: latency-svc-pwrbn [819.703518ms]
Sep 25 03:33:00.412: INFO: Created: latency-svc-m99zh
Sep 25 03:33:00.472: INFO: Got endpoints: latency-svc-m99zh [828.111864ms]
Sep 25 03:33:00.475: INFO: Created: latency-svc-7s5zh
Sep 25 03:33:00.506: INFO: Got endpoints: latency-svc-7s5zh [776.076505ms]
Sep 25 03:33:00.529: INFO: Created: latency-svc-ptmc9
Sep 25 03:33:00.554: INFO: Got endpoints: latency-svc-ptmc9 [798.139512ms]
Sep 25 03:33:00.636: INFO: Created: latency-svc-wr2dj
Sep 25 03:33:00.657: INFO: Got endpoints: latency-svc-wr2dj [840.371926ms]
Sep 25 03:33:00.675: INFO: Created: latency-svc-k2cd4
Sep 25 03:33:00.686: INFO: Got endpoints: latency-svc-k2cd4 [783.008581ms]
Sep 25 03:33:00.709: INFO: Created: latency-svc-mbht4
Sep 25 03:33:00.723: INFO: Got endpoints: latency-svc-mbht4 [778.141436ms]
Sep 25 03:33:00.772: INFO: Created: latency-svc-ck86m
Sep 25 03:33:00.775: INFO: Got endpoints: latency-svc-ck86m [775.232356ms]
Sep 25 03:33:00.849: INFO: Created: latency-svc-gtcfn
Sep 25 03:33:00.861: INFO: Got endpoints: latency-svc-gtcfn [819.175849ms]
Sep 25 03:33:00.910: INFO: Created: latency-svc-mjgls
Sep 25 03:33:00.913: INFO: Got endpoints: latency-svc-mjgls [757.804126ms]
Sep 25 03:33:00.944: INFO: Created: latency-svc-bkkqd
Sep 25 03:33:00.957: INFO: Got endpoints: latency-svc-bkkqd [789.52138ms]
Sep 25 03:33:01.072: INFO: Created: latency-svc-x656x
Sep 25 03:33:01.101: INFO: Got endpoints: latency-svc-x656x [882.882344ms]
Sep 25 03:33:01.104: INFO: Created: latency-svc-rxwnq
Sep 25 03:33:01.107: INFO: Got endpoints: latency-svc-rxwnq [790.433216ms]
Sep 25 03:33:01.130: INFO: Created: latency-svc-mxqp2
Sep 25 03:33:01.150: INFO: Got endpoints: latency-svc-mxqp2 [825.976444ms]
Sep 25 03:33:01.221: INFO: Created: latency-svc-m7mn2
Sep 25 03:33:01.223: INFO: Got endpoints: latency-svc-m7mn2 [867.570179ms]
Sep 25 03:33:01.316: INFO: Created: latency-svc-wcr2w
Sep 25 03:33:01.382: INFO: Got endpoints: latency-svc-wcr2w [990.952739ms]
Sep 25 03:33:01.406: INFO: Created: latency-svc-cstcl
Sep 25 03:33:01.421: INFO: Got endpoints: latency-svc-cstcl [948.678571ms]
Sep 25 03:33:01.441: INFO: Created: latency-svc-p8c5f
Sep 25 03:33:01.458: INFO: Got endpoints: latency-svc-p8c5f [951.160925ms]
Sep 25 03:33:01.478: INFO: Created: latency-svc-dh9sw
Sep 25 03:33:01.532: INFO: Got endpoints: latency-svc-dh9sw [977.683397ms]
Sep 25 03:33:01.533: INFO: Created: latency-svc-jkr6z
Sep 25 03:33:01.549: INFO: Got endpoints: latency-svc-jkr6z [891.772332ms]
Sep 25 03:33:01.573: INFO: Created: latency-svc-6dj69
Sep 25 03:33:01.591: INFO: Got endpoints: latency-svc-6dj69 [904.240834ms]
Sep 25 03:33:01.622: INFO: Created: latency-svc-9r9tt
Sep 25 03:33:01.682: INFO: Got endpoints: latency-svc-9r9tt [958.421729ms]
Sep 25 03:33:01.683: INFO: Created: latency-svc-rpbcv
Sep 25 03:33:01.711: INFO: Got endpoints: latency-svc-rpbcv [936.065016ms]
Sep 25 03:33:01.754: INFO: Created: latency-svc-nk77t
Sep 25 03:33:01.765: INFO: Got endpoints: latency-svc-nk77t [904.018553ms]
Sep 25 03:33:01.820: INFO: Created: latency-svc-ck4pj
Sep 25 03:33:01.843: INFO: Got endpoints: latency-svc-ck4pj [929.869066ms]
Sep 25 03:33:01.843: INFO: Created: latency-svc-zpc7s
Sep 25 03:33:01.856: INFO: Got endpoints: latency-svc-zpc7s [898.028249ms]
Sep 25 03:33:01.873: INFO: Created: latency-svc-wnvmt
Sep 25 03:33:01.886: INFO: Got endpoints: latency-svc-wnvmt [784.919384ms]
Sep 25 03:33:01.911: INFO: Created: latency-svc-tprs7
Sep 25 03:33:01.981: INFO: Got endpoints: latency-svc-tprs7 [873.250835ms]
Sep 25 03:33:01.983: INFO: Created: latency-svc-99v6f
Sep 25 03:33:01.988: INFO: Got endpoints: latency-svc-99v6f [837.094195ms]
Sep 25 03:33:02.017: INFO: Created: latency-svc-b46mj
Sep 25 03:33:02.031: INFO: Got endpoints: latency-svc-b46mj [807.520139ms]
Sep 25 03:33:02.053: INFO: Created: latency-svc-7zkw6
Sep 25 03:33:02.119: INFO: Got endpoints: latency-svc-7zkw6 [736.375494ms]
Sep 25 03:33:02.132: INFO: Created: latency-svc-rsgrr
Sep 25 03:33:02.145: INFO: Got endpoints: latency-svc-rsgrr [723.487046ms]
Sep 25 03:33:02.168: INFO: Created: latency-svc-6pvcs
Sep 25 03:33:02.182: INFO: Got endpoints: latency-svc-6pvcs [724.241055ms]
Sep 25 03:33:02.209: INFO: Created: latency-svc-5fn9b
Sep 25 03:33:02.268: INFO: Got endpoints: latency-svc-5fn9b [735.250492ms]
Sep 25 03:33:02.271: INFO: Created: latency-svc-75njc
Sep 25 03:33:02.305: INFO: Got endpoints: latency-svc-75njc [756.196564ms]
Sep 25 03:33:02.336: INFO: Created: latency-svc-xfrf5
Sep 25 03:33:02.351: INFO: Got endpoints: latency-svc-xfrf5 [760.039136ms]
Sep 25 03:33:02.406: INFO: Created: latency-svc-tml5m
Sep 25 03:33:02.409: INFO: Got endpoints: latency-svc-tml5m [727.279387ms]
Sep 25 03:33:02.438: INFO: Created: latency-svc-krz6l
Sep 25 03:33:02.447: INFO: Got endpoints: latency-svc-krz6l [736.064743ms]
Sep 25 03:33:02.468: INFO: Created: latency-svc-kxt5g
Sep 25 03:33:02.477: INFO: Got endpoints: latency-svc-kxt5g [712.016223ms]
Sep 25 03:33:02.556: INFO: Created: latency-svc-nzxzw
Sep 25 03:33:02.588: INFO: Got endpoints: latency-svc-nzxzw [744.680494ms]
Sep 25 03:33:02.589: INFO: Created: latency-svc-hlxxz
Sep 25 03:33:02.604: INFO: Got endpoints: latency-svc-hlxxz [748.363049ms]
Sep 25 03:33:02.635: INFO: Created: latency-svc-fn8zb
Sep 25 03:33:02.653: INFO: Got endpoints: latency-svc-fn8zb [766.360442ms]
Sep 25 03:33:02.700: INFO: Created: latency-svc-hmrm2
Sep 25 03:33:02.713: INFO: Got endpoints: latency-svc-hmrm2 [731.342769ms]
Sep 25 03:33:02.750: INFO: Created: latency-svc-swd8v
Sep 25 03:33:02.761: INFO: Got endpoints: latency-svc-swd8v [773.01217ms]
Sep 25 03:33:02.785: INFO: Created: latency-svc-95l5v
Sep 25 03:33:02.798: INFO: Got endpoints: latency-svc-95l5v [766.557627ms]
Sep 25 03:33:02.856: INFO: Created: latency-svc-fq7vd
Sep 25 03:33:02.858: INFO: Got endpoints: latency-svc-fq7vd [738.317633ms]
Sep 25 03:33:02.888: INFO: Created: latency-svc-4zbf6
Sep 25 03:33:02.917: INFO: Got endpoints: latency-svc-4zbf6 [771.796867ms]
Sep 25 03:33:03.023: INFO: Created: latency-svc-clk8r
Sep 25 03:33:03.023: INFO: Got endpoints: latency-svc-clk8r [841.020719ms]
Sep 25 03:33:03.055: INFO: Created: latency-svc-ptrfh
Sep 25 03:33:03.080: INFO: Got endpoints: latency-svc-ptrfh [812.131562ms]
Sep 25 03:33:03.154: INFO: Created: latency-svc-wn7x5
Sep 25 03:33:03.158: INFO: Got endpoints: latency-svc-wn7x5 [852.430364ms]
Sep 25 03:33:03.187: INFO: Created: latency-svc-bwxt7
Sep 25 03:33:03.217: INFO: Got endpoints: latency-svc-bwxt7 [865.646553ms]
Sep 25 03:33:03.247: INFO: Created: latency-svc-6zv7h
Sep 25 03:33:03.328: INFO: Got endpoints: latency-svc-6zv7h [918.688486ms]
Sep 25 03:33:03.330: INFO: Created: latency-svc-445dc
Sep 25 03:33:03.352: INFO: Got endpoints: latency-svc-445dc [904.312126ms]
Sep 25 03:33:03.385: INFO: Created: latency-svc-fxmmw
Sep 25 03:33:03.412: INFO: Got endpoints: latency-svc-fxmmw [934.213088ms]
Sep 25 03:33:03.473: INFO: Created: latency-svc-fft94
Sep 25 03:33:03.486: INFO: Got endpoints: latency-svc-fft94 [898.377217ms]
Sep 25 03:33:03.517: INFO: Created: latency-svc-qjrn7
Sep 25 03:33:03.532: INFO: Got endpoints: latency-svc-qjrn7 [927.348483ms]
Sep 25 03:33:03.559: INFO: Created: latency-svc-q4jx2
Sep 25 03:33:03.610: INFO: Got endpoints: latency-svc-q4jx2 [956.924106ms]
Sep 25 03:33:03.625: INFO: Created: latency-svc-4p854
Sep 25 03:33:03.641: INFO: Got endpoints: latency-svc-4p854 [927.944883ms]
Sep 25 03:33:03.661: INFO: Created: latency-svc-cqfvn
Sep 25 03:33:03.677: INFO: Got endpoints: latency-svc-cqfvn [915.622823ms]
Sep 25 03:33:03.697: INFO: Created: latency-svc-2ghn5
Sep 25 03:33:03.730: INFO: Got endpoints: latency-svc-2ghn5 [931.957871ms]
Sep 25 03:33:03.745: INFO: Created: latency-svc-cv989
Sep 25 03:33:03.762: INFO: Got endpoints: latency-svc-cv989 [903.80785ms]
Sep 25 03:33:03.781: INFO: Created: latency-svc-2q6bd
Sep 25 03:33:03.798: INFO: Got endpoints: latency-svc-2q6bd [880.60189ms]
Sep 25 03:33:03.817: INFO: Created: latency-svc-m2hq7
Sep 25 03:33:03.879: INFO: Got endpoints: latency-svc-m2hq7 [855.832623ms]
Sep 25 03:33:03.894: INFO: Created: latency-svc-cpjtl
Sep 25 03:33:03.906: INFO: Got endpoints: latency-svc-cpjtl [825.476587ms]
Sep 25 03:33:03.955: INFO: Created: latency-svc-hsv6x
Sep 25 03:33:03.967: INFO: Got endpoints: latency-svc-hsv6x [808.574838ms]
Sep 25 03:33:04.005: INFO: Created: latency-svc-bvhnf
Sep 25 03:33:04.008: INFO: Got endpoints: latency-svc-bvhnf [791.423476ms]
Sep 25 03:33:04.033: INFO: Created: latency-svc-nfcb4
Sep 25 03:33:04.046: INFO: Got endpoints: latency-svc-nfcb4 [716.168111ms]
Sep 25 03:33:04.069: INFO: Created: latency-svc-nwcc2
Sep 25 03:33:04.082: INFO: Got endpoints: latency-svc-nwcc2 [729.688247ms]
Sep 25 03:33:04.106: INFO: Created: latency-svc-lffnj
Sep 25 03:33:04.169: INFO: Created: latency-svc-vmhjm
Sep 25 03:33:04.178: INFO: Got endpoints: latency-svc-lffnj [766.249708ms]
Sep 25 03:33:04.178: INFO: Got endpoints: latency-svc-vmhjm [691.89371ms]
Sep 25 03:33:04.237: INFO: Created: latency-svc-vsrj8
Sep 25 03:33:04.244: INFO: Got endpoints: latency-svc-vsrj8 [712.478306ms]
Sep 25 03:33:04.311: INFO: Created: latency-svc-z8k6s
Sep 25 03:33:04.315: INFO: Created: latency-svc-nmzr7
Sep 25 03:33:04.315: INFO: Got endpoints: latency-svc-z8k6s [704.941821ms]
Sep 25 03:33:04.319: INFO: Got endpoints: latency-svc-nmzr7 [677.904773ms]
Sep 25 03:33:04.351: INFO: Created: latency-svc-wndfr
Sep 25 03:33:04.372: INFO: Got endpoints: latency-svc-wndfr [694.558813ms]
Sep 25 03:33:04.393: INFO: Created: latency-svc-dc2z4
Sep 25 03:33:04.404: INFO: Got endpoints: latency-svc-dc2z4 [673.808556ms]
Sep 25 03:33:04.443: INFO: Created: latency-svc-rdrwk
Sep 25 03:33:04.452: INFO: Got endpoints: latency-svc-rdrwk [690.378186ms]
Sep 25 03:33:04.477: INFO: Created: latency-svc-npbwm
Sep 25 03:33:04.489: INFO: Got endpoints: latency-svc-npbwm [691.049985ms]
Sep 25 03:33:04.531: INFO: Created: latency-svc-jjzv4
Sep 25 03:33:04.604: INFO: Got endpoints: latency-svc-jjzv4 [724.134041ms]
Sep 25 03:33:04.621: INFO: Created: latency-svc-2dg6m
Sep 25 03:33:04.650: INFO: Got endpoints: latency-svc-2dg6m [743.80547ms]
Sep 25 03:33:04.698: INFO: Created: latency-svc-lr2ns
Sep 25 03:33:04.754: INFO: Got endpoints: latency-svc-lr2ns [786.828538ms]
Sep 25 03:33:04.795: INFO: Created: latency-svc-p8zvj
Sep 25 03:33:04.814: INFO: Got endpoints: latency-svc-p8zvj [804.862783ms]
Sep 25 03:33:04.831: INFO: Created: latency-svc-6x88x
Sep 25 03:33:04.879: INFO: Got endpoints: latency-svc-6x88x [832.739321ms]
Sep 25 03:33:04.880: INFO: Latencies: [50.840237ms 112.841053ms 185.357976ms 226.542573ms 274.665217ms 332.387984ms 386.727723ms 427.451813ms 479.451102ms 509.39814ms 547.578946ms 615.551631ms 665.950805ms 673.808556ms 677.904773ms 690.378186ms 691.049985ms 691.89371ms 694.558813ms 704.941821ms 712.016223ms 712.478306ms 716.168111ms 723.487046ms 724.134041ms 724.241055ms 727.279387ms 729.688247ms 731.342769ms 735.250492ms 736.064743ms 736.375494ms 738.317633ms 743.80547ms 744.680494ms 748.363049ms 756.196564ms 757.804126ms 760.039136ms 760.569152ms 766.249708ms 766.360442ms 766.557627ms 771.796867ms 772.068245ms 773.01217ms 774.339904ms 775.232356ms 776.076505ms 778.141436ms 780.9072ms 782.494153ms 782.84205ms 783.008581ms 783.879326ms 784.919384ms 786.828538ms 789.52138ms 790.433216ms 791.423476ms 793.165001ms 796.060113ms 798.139512ms 799.536302ms 801.621828ms 801.643709ms 804.862783ms 805.897557ms 807.520139ms 807.813168ms 808.338161ms 808.574838ms 811.911033ms 812.131562ms 812.902334ms 812.999742ms 814.566694ms 819.175849ms 819.703518ms 819.840336ms 820.599514ms 825.476587ms 825.976444ms 826.511652ms 827.759282ms 828.111864ms 829.911967ms 830.758552ms 831.167599ms 831.352326ms 832.482039ms 832.739321ms 833.328135ms 835.47871ms 836.895016ms 837.094195ms 837.403135ms 837.702167ms 837.711331ms 839.104465ms 840.371926ms 840.483522ms 841.020719ms 842.355973ms 842.864994ms 843.767014ms 844.028418ms 849.290919ms 849.698435ms 850.497029ms 850.507653ms 851.605521ms 851.908191ms 852.430364ms 853.761555ms 854.471902ms 854.788623ms 855.832623ms 857.814685ms 859.907264ms 860.509786ms 860.86764ms 863.802323ms 865.646553ms 866.340439ms 866.619161ms 866.703349ms 866.996009ms 867.004014ms 867.426063ms 867.570179ms 869.61462ms 870.285916ms 871.3734ms 872.281487ms 872.884074ms 873.161465ms 873.250835ms 873.545996ms 875.531941ms 875.704345ms 876.295091ms 878.908807ms 879.112122ms 879.508661ms 880.60189ms 881.494163ms 881.694311ms 882.342462ms 882.882344ms 883.696472ms 888.477796ms 
890.904502ms 891.290273ms 891.772332ms 893.879668ms 896.099866ms 896.573126ms 897.467684ms 898.028249ms 898.377217ms 902.305085ms 903.80785ms 904.018553ms 904.240834ms 904.312126ms 905.592965ms 907.552116ms 908.805703ms 911.617891ms 913.961679ms 915.622823ms 917.461464ms 918.688486ms 920.282198ms 924.692773ms 926.982638ms 927.183216ms 927.348483ms 927.944883ms 929.869066ms 931.957871ms 932.850832ms 933.120647ms 933.687515ms 934.213088ms 936.065016ms 938.827047ms 940.964428ms 948.678571ms 951.160925ms 956.521666ms 956.924106ms 958.421729ms 964.492788ms 974.947569ms 977.683397ms 981.582064ms 990.952739ms 1.003370678s]
Sep 25 03:33:04.881: INFO: 50 %ile: 840.371926ms
Sep 25 03:33:04.881: INFO: 90 %ile: 929.869066ms
Sep 25 03:33:04.881: INFO: 99 %ile: 990.952739ms
Sep 25 03:33:04.881: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:33:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7044" for this suite.
Sep 25 03:33:26.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:33:27.058: INFO: namespace svc-latency-7044 deletion completed in 22.169794191s

• [SLOW TEST:36.719 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
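The percentile summary above (50/90/99 %ile over 200 sorted samples) can be reproduced with a nearest-rank computation. This is a minimal sketch under that assumption — the e2e framework's exact indexing may differ — using synthetic samples rather than the real latencies:

```python
import math

def percentile(sorted_latencies, perc):
    """Nearest-rank percentile over an already-sorted list.

    Assumption: nearest-rank definition (1-based rank = ceil(p/100 * n));
    the Kubernetes e2e framework's exact indexing may differ slightly.
    """
    if not sorted_latencies:
        raise ValueError("empty sample set")
    rank = math.ceil(perc / 100 * len(sorted_latencies))  # 1-based rank
    return sorted_latencies[rank - 1]

# Synthetic samples (milliseconds), standing in for the 200 real ones above.
samples = sorted(float(v) for v in range(1, 201))
print(percentile(samples, 50))  # 100.0
print(percentile(samples, 90))  # 180.0
print(percentile(samples, 99))  # 198.0
```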
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:33:27.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Sep 25 03:33:35.269: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:35.287: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:37.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:37.294: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:39.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:39.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:41.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:41.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:43.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:43.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:45.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:45.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:47.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:47.294: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:49.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:49.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:51.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:51.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:53.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:53.295: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:55.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:55.300: INFO: Pod pod-with-poststart-exec-hook still exists
Sep 25 03:33:57.287: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Sep 25 03:33:57.294: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:33:57.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7873" for this suite.
Sep 25 03:34:19.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:34:19.475: INFO: namespace container-lifecycle-hook-7873 deletion completed in 22.169539934s

• [SLOW TEST:52.412 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
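The deletion phase above polls every two seconds until the pod no longer exists. A minimal sketch of that wait-for-disappearance loop — the `pod_exists` callback, `interval`, and `timeout` parameters are illustrative stand-ins, not the framework's real API:

```python
import time

def wait_for_disappearance(pod_exists, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll pod_exists() until it returns False or the timeout elapses.

    Hypothetical helper mirroring the log's "Waiting for pod ... to
    disappear" loop; pod_exists/interval/timeout are assumptions, not
    the e2e framework's actual signature.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pod_exists():
            return True   # pod no longer exists
        sleep(interval)   # pod still exists; wait and re-check
    return False          # timed out while the pod still existed

# Simulate a pod that disappears after three checks.
checks = iter([True, True, True, False])
print(wait_for_disappearance(lambda: next(checks), sleep=lambda _: None))  # True
```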
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:34:19.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:34:25.649: INFO: DNS probes using dns-607/dns-test-da64ff1c-5c95-4814-a633-a4b8d63b586b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:34:25.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-607" for this suite.
Sep 25 03:34:31.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:34:31.884: INFO: namespace dns-607 deletion completed in 6.173259918s

• [SLOW TEST:12.407 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
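The dig commands above derive the pod A record name by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.<cluster-domain>` (the `awk -F.` pipeline). A minimal sketch of that transformation — the namespace `dns-607` comes from the log, while the function name and example IPs are illustrative:

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the pod A record name the dig probes query:
    dots in the IPv4 address become dashes, then the namespace and
    the pod/cluster-domain suffix are appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-607"))
# 10-244-1-5.dns-607.pod.cluster.local
```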
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:34:31.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-5ae8086e-818e-42fb-8f0b-4bd9af9ba75a
STEP: Creating a pod to test consume secrets
Sep 25 03:34:32.007: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142" in namespace "projected-1560" to be "success or failure"
Sep 25 03:34:32.022: INFO: Pod "pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142": Phase="Pending", Reason="", readiness=false. Elapsed: 14.680842ms
Sep 25 03:34:34.029: INFO: Pod "pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021998309s
Sep 25 03:34:36.036: INFO: Pod "pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02875229s
STEP: Saw pod success
Sep 25 03:34:36.036: INFO: Pod "pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142" satisfied condition "success or failure"
Sep 25 03:34:36.041: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142 container projected-secret-volume-test: 
STEP: delete the pod
Sep 25 03:34:36.104: INFO: Waiting for pod pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142 to disappear
Sep 25 03:34:36.108: INFO: Pod pod-projected-secrets-95377335-2147-4a8b-8dec-cc90fe2ad142 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:34:36.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1560" for this suite.
Sep 25 03:34:42.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:34:42.324: INFO: namespace projected-1560 deletion completed in 6.208387783s

• [SLOW TEST:10.438 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:34:42.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-c2a9ac11-8686-4e06-a9a3-e757208f0192
STEP: Creating a pod to test consume secrets
Sep 25 03:34:42.398: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869" in namespace "projected-2473" to be "success or failure"
Sep 25 03:34:42.409: INFO: Pod "pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869": Phase="Pending", Reason="", readiness=false. Elapsed: 10.446611ms
Sep 25 03:34:44.439: INFO: Pod "pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040673681s
Sep 25 03:34:46.446: INFO: Pod "pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047874818s
STEP: Saw pod success
Sep 25 03:34:46.447: INFO: Pod "pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869" satisfied condition "success or failure"
Sep 25 03:34:46.452: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869 container projected-secret-volume-test: 
STEP: delete the pod
Sep 25 03:34:46.501: INFO: Waiting for pod pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869 to disappear
Sep 25 03:34:46.534: INFO: Pod pod-projected-secrets-9f88f8ed-e89d-4d44-8f1e-978069fed869 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:34:46.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2473" for this suite.
Sep 25 03:34:52.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:34:52.744: INFO: namespace projected-2473 deletion completed in 6.199003304s

• [SLOW TEST:10.420 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:34:52.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 25 03:34:52.855: INFO: Waiting up to 5m0s for pod "pod-aee88f28-b31e-495c-a5dd-b0386f8841db" in namespace "emptydir-7215" to be "success or failure"
Sep 25 03:34:52.878: INFO: Pod "pod-aee88f28-b31e-495c-a5dd-b0386f8841db": Phase="Pending", Reason="", readiness=false. Elapsed: 23.335612ms
Sep 25 03:34:54.886: INFO: Pod "pod-aee88f28-b31e-495c-a5dd-b0386f8841db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031258003s
Sep 25 03:34:56.892: INFO: Pod "pod-aee88f28-b31e-495c-a5dd-b0386f8841db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037695586s
STEP: Saw pod success
Sep 25 03:34:56.893: INFO: Pod "pod-aee88f28-b31e-495c-a5dd-b0386f8841db" satisfied condition "success or failure"
Sep 25 03:34:56.897: INFO: Trying to get logs from node iruya-worker2 pod pod-aee88f28-b31e-495c-a5dd-b0386f8841db container test-container: 
STEP: delete the pod
Sep 25 03:34:56.954: INFO: Waiting for pod pod-aee88f28-b31e-495c-a5dd-b0386f8841db to disappear
Sep 25 03:34:56.988: INFO: Pod pod-aee88f28-b31e-495c-a5dd-b0386f8841db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:34:56.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7215" for this suite.
Sep 25 03:35:03.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:35:03.169: INFO: namespace emptydir-7215 deletion completed in 6.169859672s

• [SLOW TEST:10.421 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:35:03.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:35:03.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56" in namespace "projected-4755" to be "success or failure"
Sep 25 03:35:03.298: INFO: Pod "downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07717ms
Sep 25 03:35:05.323: INFO: Pod "downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034327422s
Sep 25 03:35:07.329: INFO: Pod "downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040993866s
STEP: Saw pod success
Sep 25 03:35:07.330: INFO: Pod "downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56" satisfied condition "success or failure"
Sep 25 03:35:07.335: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56 container client-container: 
STEP: delete the pod
Sep 25 03:35:07.365: INFO: Waiting for pod downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56 to disappear
Sep 25 03:35:07.369: INFO: Pod downwardapi-volume-5869e36f-899b-48aa-a517-e2be402dfd56 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:35:07.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4755" for this suite.
Sep 25 03:35:13.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:35:13.533: INFO: namespace projected-4755 deletion completed in 6.154171411s

• [SLOW TEST:10.358 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:35:13.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:35:39.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9565" for this suite.
Sep 25 03:35:45.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:35:45.929: INFO: namespace namespaces-9565 deletion completed in 6.155123939s
STEP: Destroying namespace "nsdeletetest-6869" for this suite.
Sep 25 03:35:45.932: INFO: Namespace nsdeletetest-6869 was already deleted
STEP: Destroying namespace "nsdeletetest-2417" for this suite.
Sep 25 03:35:51.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:35:52.116: INFO: namespace nsdeletetest-2417 deletion completed in 6.183796454s

• [SLOW TEST:38.583 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:35:52.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 03:35:52.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8453'
Sep 25 03:35:55.910: INFO: stderr: ""
Sep 25 03:35:55.910: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Sep 25 03:36:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8453 -o json'
Sep 25 03:36:02.108: INFO: stderr: ""
Sep 25 03:36:02.109: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-09-25T03:35:55Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8453\",\n        \"resourceVersion\": \"334096\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8453/pods/e2e-test-nginx-pod\",\n        \"uid\": \"5a13c25e-d7dd-4fe1-b2b4-b158bfb3286c\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rh8bp\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rh8bp\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rh8bp\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-25T03:35:55Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-25T03:35:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-25T03:35:58Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-09-25T03:35:55Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://91b091db6b44104015ad0c7bc42e1b8e725752530c2a6547d715e90565be3a9e\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2020-09-25T03:35:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.32\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-09-25T03:35:55Z\"\n    }\n}\n"
STEP: replace the image in the pod
Sep 25 03:36:02.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8453'
Sep 25 03:36:03.594: INFO: stderr: ""
Sep 25 03:36:03.594: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Sep 25 03:36:03.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8453'
Sep 25 03:36:15.392: INFO: stderr: ""
Sep 25 03:36:15.392: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:36:15.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8453" for this suite.
Sep 25 03:36:21.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:36:21.607: INFO: namespace kubectl-8453 deletion completed in 6.206257562s

• [SLOW TEST:29.489 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
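For reference, the core of the replace test above is: dump the running pod as JSON, swap the container image, and feed the result back through `kubectl replace -f -`. The image swap itself can be sketched locally without a cluster (a minimal sketch; the file path and the `sed` pattern are illustrative, not the framework's own code — in the test the JSON is piped straight into `kubectl replace`):

```shell
# Save a manifest fragment like the one 'kubectl get pod -o json' printed above
# (illustrative path; only the fields relevant to the image swap are kept)
cat > /tmp/e2e-test-nginx-pod.json <<'EOF'
{"apiVersion":"v1","kind":"Pod",
 "spec":{"containers":[{"name":"e2e-test-nginx-pod",
                        "image":"docker.io/library/nginx:1.14-alpine"}]}}
EOF

# Swap the image the same way the test does before re-submitting the manifest;
# the log then verifies the pod ends up running docker.io/library/busybox:1.29
sed -i 's|nginx:1.14-alpine|busybox:1.29|' /tmp/e2e-test-nginx-pod.json
grep -o 'busybox:1.29' /tmp/e2e-test-nginx-pod.json
```

On a live cluster the edited file would then go through `kubectl replace -f /tmp/e2e-test-nginx-pod.json --namespace=...`, which is the step logged at 03:36:02 above.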
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:36:21.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:36:21.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7415'
Sep 25 03:36:23.173: INFO: stderr: ""
Sep 25 03:36:23.173: INFO: stdout: "replicationcontroller/redis-master created\n"
Sep 25 03:36:23.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7415'
Sep 25 03:36:24.988: INFO: stderr: ""
Sep 25 03:36:24.988: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Sep 25 03:36:25.999: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:36:26.000: INFO: Found 0 / 1
Sep 25 03:36:26.997: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:36:26.997: INFO: Found 1 / 1
Sep 25 03:36:26.997: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Sep 25 03:36:27.004: INFO: Selector matched 1 pods for map[app:redis]
Sep 25 03:36:27.004: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 25 03:36:27.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jq7zh --namespace=kubectl-7415'
Sep 25 03:36:28.239: INFO: stderr: ""
Sep 25 03:36:28.239: INFO: stdout: "Name:           redis-master-jq7zh\nNamespace:      kubectl-7415\nPriority:       0\nNode:           iruya-worker/172.18.0.6\nStart Time:     Fri, 25 Sep 2020 03:36:23 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.33\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://73c2fbded285483a4989dc19299f1ecdde55d372bd16bddc4bc97fb302dab4bc\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 25 Sep 2020 03:36:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sx28r (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-sx28r:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-sx28r\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  5s    default-scheduler      Successfully assigned kubectl-7415/redis-master-jq7zh to iruya-worker\n  Normal  Pulled     4s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started  
  3s    kubelet, iruya-worker  Started container redis-master\n"
Sep 25 03:36:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7415'
Sep 25 03:36:29.480: INFO: stderr: ""
Sep 25 03:36:29.480: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7415\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: redis-master-jq7zh\n"
Sep 25 03:36:29.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7415'
Sep 25 03:36:30.628: INFO: stderr: ""
Sep 25 03:36:30.628: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7415\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.109.125.193\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.33:6379\nSession Affinity:  None\nEvents:            \n"
Sep 25 03:36:30.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Sep 25 03:36:31.912: INFO: stderr: ""
Sep 25 03:36:31.912: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 23 Sep 2020 08:25:31 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 25 Sep 2020 03:35:46 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 25 Sep 2020 03:35:46 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 25 Sep 2020 03:35:46 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 25 Sep 2020 03:35:46 +0000   Wed, 23 Sep 2020 08:26:01 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n 
ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 75bedc8ea3a84920a6257d408ae4fc72\n System UUID:                f7c1d795-23db-4f0f-aa92-a051f5bbc85d\n Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version:             4.15.0-118-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.15.11\n Kube-Proxy Version:         v1.15.11\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-ktm6r                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     43h\n  kube-system                coredns-5d4dd4b4db-m9gbg                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     43h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         43h\n  kube-system                kindnet-rv6n5                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      43h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         43h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         43h\n  kube-system                kube-proxy-zcw5n                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         43h\n  
kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         43h\n  local-path-storage         local-path-provisioner-668779bd7-t77bq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         43h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Sep 25 03:36:31.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7415'
Sep 25 03:36:33.047: INFO: stderr: ""
Sep 25 03:36:33.048: INFO: stdout: "Name:         kubectl-7415\nLabels:       e2e-framework=kubectl\n              e2e-run=32d5fbe4-05b5-4e69-a18c-272909b3d97e\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:36:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7415" for this suite.
Sep 25 03:36:55.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:36:55.212: INFO: namespace kubectl-7415 deletion completed in 22.152235359s

• [SLOW TEST:33.604 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
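The describe test above asserts that specific fields (name, namespace, status, restart count, events) appear in the `kubectl describe` output. That kind of assertion can be reproduced standalone by scraping a saved describe dump (a minimal sketch; the file path is illustrative and the text is a slice of the pod describe output logged above):

```shell
# Illustrative slice of the 'kubectl describe pod redis-master-jq7zh' output above
cat > /tmp/describe.txt <<'EOF'
Name:           redis-master-jq7zh
Namespace:      kubectl-7415
Status:         Running
Restart Count:  0
EOF

# Pull out the Status field the way a checker might; FS is "colon plus spaces"
status=$(awk -F': *' '/^Status:/{print $2}' /tmp/describe.txt)
echo "$status"
```

The real test does the equivalent in Go with substring matches over the full describe output for the pod, the replication controller, the service, the node, and the namespace.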
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:36:55.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:36:55.279: INFO: Waiting up to 5m0s for pod "downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3" in namespace "projected-7901" to be "success or failure"
Sep 25 03:36:55.288: INFO: Pod "downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514978ms
Sep 25 03:36:57.296: INFO: Pod "downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016408677s
Sep 25 03:36:59.303: INFO: Pod "downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024216856s
STEP: Saw pod success
Sep 25 03:36:59.304: INFO: Pod "downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3" satisfied condition "success or failure"
Sep 25 03:36:59.309: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3 container client-container: 
STEP: delete the pod
Sep 25 03:36:59.342: INFO: Waiting for pod downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3 to disappear
Sep 25 03:36:59.366: INFO: Pod downwardapi-volume-470770a9-52f6-46f5-b891-fe22fb2b41b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:36:59.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7901" for this suite.
Sep 25 03:37:05.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:37:05.576: INFO: namespace projected-7901 deletion completed in 6.201571525s

• [SLOW TEST:10.363 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
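The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from a poll loop over the pod's `status.phase` (Pending at 8ms and 2s, Succeeded at 4s). A minimal sketch of that loop, with a file-backed stand-in for the phase so it runs without a cluster (`phase()` and the timings are illustrative, not the framework's code, which polls the API server on a 2s interval with a 5m timeout):

```shell
# Stand-in for reading pod.status.phase; the real framework queries the API server
phase() { cat /tmp/phase; }
echo Pending > /tmp/phase

# Simulate the pod finishing shortly after creation, as in the log above
( sleep 1; echo Succeeded > /tmp/phase ) &

# Poll once per second against a small budget until the terminal phase appears
elapsed=0
while [ "$(phase)" != "Succeeded" ] && [ "$elapsed" -lt 10 ]; do
  sleep 1
  elapsed=$((elapsed + 1))
done
wait
echo "phase=$(phase) elapsed=${elapsed}s"
```

Once the phase is Succeeded the test fetches the container logs (the downward-API volume contents) and checks the pod name was projected into them.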
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:37:05.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:37:09.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8304" for this suite.
Sep 25 03:37:15.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:37:15.880: INFO: namespace kubelet-test-8304 deletion completed in 6.15349861s

• [SLOW TEST:10.300 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:37:15.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4383
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4383 to expose endpoints map[]
Sep 25 03:37:16.018: INFO: successfully validated that service endpoint-test2 in namespace services-4383 exposes endpoints map[] (12.234616ms elapsed)
STEP: Creating pod pod1 in namespace services-4383
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4383 to expose endpoints map[pod1:[80]]
Sep 25 03:37:20.171: INFO: successfully validated that service endpoint-test2 in namespace services-4383 exposes endpoints map[pod1:[80]] (4.115182411s elapsed)
STEP: Creating pod pod2 in namespace services-4383
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4383 to expose endpoints map[pod1:[80] pod2:[80]]
Sep 25 03:37:24.277: INFO: successfully validated that service endpoint-test2 in namespace services-4383 exposes endpoints map[pod1:[80] pod2:[80]] (4.099606225s elapsed)
STEP: Deleting pod pod1 in namespace services-4383
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4383 to expose endpoints map[pod2:[80]]
Sep 25 03:37:24.303: INFO: successfully validated that service endpoint-test2 in namespace services-4383 exposes endpoints map[pod2:[80]] (18.409115ms elapsed)
STEP: Deleting pod pod2 in namespace services-4383
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4383 to expose endpoints map[]
Sep 25 03:37:24.317: INFO: successfully validated that service endpoint-test2 in namespace services-4383 exposes endpoints map[] (8.42503ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:37:24.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4383" for this suite.
Sep 25 03:37:46.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:37:46.674: INFO: namespace services-4383 deletion completed in 22.2815145s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:30.791 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:37:46.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2404.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2404.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2404.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 52.114.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.114.52_udp@PTR;check="$$(dig +tcp +noall +answer +search 52.114.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.114.52_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2404.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2404.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2404.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2404.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2404.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2404.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 52.114.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.114.52_udp@PTR;check="$$(dig +tcp +noall +answer +search 52.114.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.114.52_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 25 03:37:52.893: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.898: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.902: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.906: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.932: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.936: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.940: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.944: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:52.970: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:37:57.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:57.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:57.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.015: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.042: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.047: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.050: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:37:58.082: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:38:02.977: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:02.982: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:02.985: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:02.989: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:03.013: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:03.017: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:03.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:03.025: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:03.058: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:38:08.014: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.019: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.023: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.057: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.061: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.065: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.069: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:08.111: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:38:12.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:12.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:12.989: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:12.993: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:13.025: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:13.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:13.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:13.039: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:13.060: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:38:17.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:17.983: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:17.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:17.993: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:18.045: INFO: Unable to read jessie_udp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:18.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:18.054: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:18.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local from pod dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a: the server could not find the requested resource (get pods dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a)
Sep 25 03:38:18.085: INFO: Lookups using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a failed for: [wheezy_udp@dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@dns-test-service.dns-2404.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_udp@dns-test-service.dns-2404.svc.cluster.local jessie_tcp@dns-test-service.dns-2404.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2404.svc.cluster.local]

Sep 25 03:38:23.069: INFO: DNS probes using dns-2404/dns-test-b6d11feb-8ff6-4e06-98c6-75ad2870101a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:38:23.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2404" for this suite.
Sep 25 03:38:29.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:38:30.060: INFO: namespace dns-2404 deletion completed in 6.305773139s

• [SLOW TEST:43.385 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:38:30.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Sep 25 03:38:30.181: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334618,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 25 03:38:30.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334618,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Sep 25 03:38:40.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334639,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Sep 25 03:38:40.196: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334639,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Sep 25 03:38:50.207: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334660,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 25 03:38:50.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334660,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Sep 25 03:39:00.217: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334680,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 25 03:39:00.218: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-a,UID:65508acc-28c9-4719-ad71-d0f6ee611669,ResourceVersion:334680,Generation:0,CreationTimestamp:2020-09-25 03:38:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Sep 25 03:39:10.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-b,UID:8878edbd-4fc2-4935-a56c-db6173bc7ff8,ResourceVersion:334701,Generation:0,CreationTimestamp:2020-09-25 03:39:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 25 03:39:10.231: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-b,UID:8878edbd-4fc2-4935-a56c-db6173bc7ff8,ResourceVersion:334701,Generation:0,CreationTimestamp:2020-09-25 03:39:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Sep 25 03:39:20.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-b,UID:8878edbd-4fc2-4935-a56c-db6173bc7ff8,ResourceVersion:334721,Generation:0,CreationTimestamp:2020-09-25 03:39:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 25 03:39:20.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1252,SelfLink:/api/v1/namespaces/watch-1252/configmaps/e2e-watch-test-configmap-b,UID:8878edbd-4fc2-4935-a56c-db6173bc7ff8,ResourceVersion:334721,Generation:0,CreationTimestamp:2020-09-25 03:39:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:39:30.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1252" for this suite.
Sep 25 03:39:36.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:39:36.423: INFO: namespace watch-1252 deletion completed in 6.168747996s

• [SLOW TEST:66.362 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
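For reference, the ConfigMap objects this watch test creates and deletes look like the following. This is a hedged reconstruction from the logged ObjectMeta above (name, namespace, and the watch-this-configmap label are taken from the log; everything else is a minimal sketch):

```yaml
# Sketch of the "B" ConfigMap seen in the ADDED/DELETED watch events above.
# Watchers filter on the watch-this-configmap label to decide which events
# they should observe.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  namespace: watch-1252
  labels:
    watch-this-configmap: multiple-watchers-B
data: {}
```

A watcher selecting `watch-this-configmap=multiple-watchers-B` receives the ADDED event on creation and the DELETED event on removal, while watchers selecting the "A" label value see neither, which is what the test asserts.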
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:39:36.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:39:36.489: INFO: Creating deployment "test-recreate-deployment"
Sep 25 03:39:36.496: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Sep 25 03:39:36.550: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Sep 25 03:39:38.565: INFO: Waiting deployment "test-recreate-deployment" to complete
Sep 25 03:39:38.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601976, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601976, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601976, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736601976, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 03:39:40.578: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Sep 25 03:39:40.590: INFO: Updating deployment test-recreate-deployment
Sep 25 03:39:40.591: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 25 03:39:40.826: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5954,SelfLink:/apis/apps/v1/namespaces/deployment-5954/deployments/test-recreate-deployment,UID:e09f4244-1353-49f2-b300-6fe177f970d5,ResourceVersion:334810,Generation:2,CreationTimestamp:2020-09-25 03:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-09-25 03:39:40 +0000 UTC 2020-09-25 03:39:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-09-25 03:39:40 +0000 UTC 2020-09-25 03:39:36 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Sep 25 03:39:40.841: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5954,SelfLink:/apis/apps/v1/namespaces/deployment-5954/replicasets/test-recreate-deployment-5c8c9cc69d,UID:4a9e1c69-2698-4fc0-aa47-1a072f3c7fc0,ResourceVersion:334809,Generation:1,CreationTimestamp:2020-09-25 03:39:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e09f4244-1353-49f2-b300-6fe177f970d5 0x8c731e7 0x8c731e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 25 03:39:40.841: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Sep 25 03:39:40.842: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5954,SelfLink:/apis/apps/v1/namespaces/deployment-5954/replicasets/test-recreate-deployment-6df85df6b9,UID:5f9b66a2-a81c-45da-bafd-b06115b993b1,ResourceVersion:334799,Generation:2,CreationTimestamp:2020-09-25 03:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e09f4244-1353-49f2-b300-6fe177f970d5 0x8c732b7 0x8c732b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 25 03:39:40.851: INFO: Pod "test-recreate-deployment-5c8c9cc69d-j9gff" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-j9gff,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5954,SelfLink:/api/v1/namespaces/deployment-5954/pods/test-recreate-deployment-5c8c9cc69d-j9gff,UID:377cb599-8ffb-4f3e-9f17-94d2bfe2924f,ResourceVersion:334811,Generation:0,CreationTimestamp:2020-09-25 03:39:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 4a9e1c69-2698-4fc0-aa47-1a072f3c7fc0 0x8c73bf7 0x8c73bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lf76p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lf76p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lf76p true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8c73c70} {node.kubernetes.io/unreachable Exists  NoExecute 0x8c73c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:39:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:39:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:39:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 03:39:40 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-09-25 03:39:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:39:40.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5954" for this suite.
Sep 25 03:39:47.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:39:47.475: INFO: namespace deployment-5954 deletion completed in 6.615722814s

• [SLOW TEST:11.051 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
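The Deployment exercised above uses the Recreate strategy, so all old pods are deleted before new ones are created (no rolling overlap). A minimal sketch of the initial object, reconstructed from the Deployment dump in the log (name, labels, replica count, image, and strategy are all taken from the log; the rest is trimmed to essentials):

```yaml
# Sketch of "test-recreate-deployment" as initially created (revision 1,
# redis image). The rollout to nginx:1.14-alpine then triggers a full
# delete-then-create cycle because of strategy type Recreate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  namespace: deployment-5954
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # old ReplicaSet scaled to 0 before new pods start
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

This matches the log: the old ReplicaSet `test-recreate-deployment-6df85df6b9` is scaled to `Replicas:*0` before the new ReplicaSet `test-recreate-deployment-5c8c9cc69d` brings up its nginx pod.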
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:39:47.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:39:51.694: INFO: Waiting up to 5m0s for pod "client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49" in namespace "pods-8103" to be "success or failure"
Sep 25 03:39:51.705: INFO: Pod "client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.561133ms
Sep 25 03:39:53.711: INFO: Pod "client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016789486s
Sep 25 03:39:55.719: INFO: Pod "client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02473263s
STEP: Saw pod success
Sep 25 03:39:55.720: INFO: Pod "client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49" satisfied condition "success or failure"
Sep 25 03:39:55.734: INFO: Trying to get logs from node iruya-worker pod client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49 container env3cont: 
STEP: delete the pod
Sep 25 03:39:55.808: INFO: Waiting for pod client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49 to disappear
Sep 25 03:39:55.830: INFO: Pod client-envvars-cf5ad21a-9e13-4de3-8763-c04c20ab8c49 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:39:55.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8103" for this suite.
Sep 25 03:40:35.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:40:36.015: INFO: namespace pods-8103 deletion completed in 40.175324601s

• [SLOW TEST:48.536 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
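The Pods test above verifies that a client pod sees environment variables injected for Services that existed when the pod started. The log does not show the server Service itself, so the following is a purely illustrative sketch (the Service name `fooservice` and port are hypothetical, not from the log):

```yaml
# Hypothetical Service; any pod created in the same namespace afterwards
# gets env vars derived from the Service name, e.g.:
#   FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT,
#   FOOSERVICE_PORT_8765_TCP_ADDR, ...
apiVersion: v1
kind: Service
metadata:
  name: fooservice
  namespace: pods-8103
spec:
  selector:
    name: foo
  ports:
    - port: 8765
      targetPort: 8080
```

The client pod (`client-envvars-…` in the log) simply runs `env` in its container and succeeds once the expected `*_SERVICE_HOST`/`*_SERVICE_PORT` variables are present in its output.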
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:40:36.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:40:36.167: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 25 03:40:36.220: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:36.244: INFO: Number of nodes with available pods: 0
Sep 25 03:40:36.244: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:40:37.253: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:37.259: INFO: Number of nodes with available pods: 0
Sep 25 03:40:37.259: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:40:38.258: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:38.264: INFO: Number of nodes with available pods: 0
Sep 25 03:40:38.264: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:40:39.256: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:39.263: INFO: Number of nodes with available pods: 0
Sep 25 03:40:39.264: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:40:40.256: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:40.263: INFO: Number of nodes with available pods: 2
Sep 25 03:40:40.263: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 25 03:40:40.390: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:40.391: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:40.400: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:41.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:41.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:41.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:42.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:42.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:42.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:43.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:43.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:43.420: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:44.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:44.409: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:44.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:44.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:45.410: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:45.410: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:45.410: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:45.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:46.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:46.408: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:46.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:46.415: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:47.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:47.409: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:47.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:47.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:48.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:48.408: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:48.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:48.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:49.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:49.410: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:49.410: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:49.420: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:50.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:50.408: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:50.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:50.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:51.410: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:51.410: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:51.410: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:51.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:52.407: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:52.407: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:52.407: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:52.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:53.409: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:53.409: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:53.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:53.418: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:54.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:54.408: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:54.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:54.416: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:55.408: INFO: Wrong image for pod: daemon-set-bpvtb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:55.409: INFO: Pod daemon-set-bpvtb is not available
Sep 25 03:40:55.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:55.418: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:56.407: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:56.407: INFO: Pod daemon-set-snpcw is not available
Sep 25 03:40:56.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:57.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:57.408: INFO: Pod daemon-set-snpcw is not available
Sep 25 03:40:57.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:58.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:58.409: INFO: Pod daemon-set-snpcw is not available
Sep 25 03:40:58.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:40:59.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:40:59.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:00.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:41:00.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:01.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:41:01.408: INFO: Pod daemon-set-qhjkt is not available
Sep 25 03:41:01.417: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:02.408: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:41:02.408: INFO: Pod daemon-set-qhjkt is not available
Sep 25 03:41:02.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:03.409: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:41:03.409: INFO: Pod daemon-set-qhjkt is not available
Sep 25 03:41:03.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:04.411: INFO: Wrong image for pod: daemon-set-qhjkt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Sep 25 03:41:04.411: INFO: Pod daemon-set-qhjkt is not available
Sep 25 03:41:04.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:05.425: INFO: Pod daemon-set-mg4np is not available
Sep 25 03:41:05.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Sep 25 03:41:05.472: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:05.488: INFO: Number of nodes with available pods: 1
Sep 25 03:41:05.488: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:41:06.499: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:06.504: INFO: Number of nodes with available pods: 1
Sep 25 03:41:06.504: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:41:07.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:07.507: INFO: Number of nodes with available pods: 1
Sep 25 03:41:07.507: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:41:08.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:08.504: INFO: Number of nodes with available pods: 1
Sep 25 03:41:08.504: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 03:41:09.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 03:41:09.508: INFO: Number of nodes with available pods: 2
Sep 25 03:41:09.508: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6143, will wait for the garbage collector to delete the pods
Sep 25 03:41:09.597: INFO: Deleting DaemonSet.extensions daemon-set took: 8.36689ms
Sep 25 03:41:09.898: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.950508ms
Sep 25 03:41:15.704: INFO: Number of nodes with available pods: 0
Sep 25 03:41:15.705: INFO: Number of running nodes: 0, number of available pods: 0
Sep 25 03:41:15.709: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6143/daemonsets","resourceVersion":"335145"},"items":null}

Sep 25 03:41:15.714: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6143/pods","resourceVersion":"335145"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:41:15.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6143" for this suite.
Sep 25 03:41:21.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:41:21.911: INFO: namespace daemonsets-6143 deletion completed in 6.171363902s

• [SLOW TEST:45.891 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
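The update the log shows (old `docker.io/library/nginx:1.14-alpine` pods replaced node by node with the redis test image) corresponds to a DaemonSet with a `RollingUpdate` strategy. A minimal sketch, with labels and `maxUnavailable` as assumptions; the name and images are taken from the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label key
  updateStrategy:
    type: RollingUpdate            # pods are replaced one node at a time
    rollingUpdate:
      maxUnavailable: 1            # at most one node without an available pod mid-update
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # updated from nginx:1.14-alpine
```

The repeated "can't tolerate node iruya-control-plane" lines above occur because this spec carries no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so the control-plane node is skipped.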
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:41:21.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Sep 25 03:41:22.015: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-273" to be "success or failure"
Sep 25 03:41:22.024: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182771ms
Sep 25 03:41:24.032: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016869711s
Sep 25 03:41:26.040: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.025251181s
Sep 25 03:41:28.048: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03335062s
STEP: Saw pod success
Sep 25 03:41:28.048: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Sep 25 03:41:28.071: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Sep 25 03:41:28.091: INFO: Waiting for pod pod-host-path-test to disappear
Sep 25 03:41:28.115: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:41:28.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-273" for this suite.
Sep 25 03:41:34.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:41:34.304: INFO: namespace hostpath-273 deletion completed in 6.182005358s

• [SLOW TEST:12.389 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
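The `pod-host-path-test` pod above mounts a hostPath volume and exits once it has checked the mount's mode. A sketch of such a pod, assuming the e2e mounttest image and flags (only the pod and container names come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never               # pod runs to completion, hence Phase="Succeeded"
  containers:
  - name: test-container-1           # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--fs_type=/test-volume"] # assumed flag: report the mount's type/mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp                     # assumed host directory
      type: ""                       # empty type: mount without pre-checks
```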
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:41:34.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0fb058f0-1a26-471e-a563-688ebdddc2e3
STEP: Creating a pod to test consume configMaps
Sep 25 03:41:34.409: INFO: Waiting up to 5m0s for pod "pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec" in namespace "configmap-8773" to be "success or failure"
Sep 25 03:41:34.429: INFO: Pod "pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec": Phase="Pending", Reason="", readiness=false. Elapsed: 19.241279ms
Sep 25 03:41:36.435: INFO: Pod "pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025493466s
Sep 25 03:41:38.450: INFO: Pod "pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040242263s
STEP: Saw pod success
Sep 25 03:41:38.450: INFO: Pod "pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec" satisfied condition "success or failure"
Sep 25 03:41:38.456: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec container configmap-volume-test: 
STEP: delete the pod
Sep 25 03:41:38.479: INFO: Waiting for pod pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec to disappear
Sep 25 03:41:38.490: INFO: Pod pod-configmaps-844be53d-7c8a-4a8c-953a-7234793242ec no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:41:38.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8773" for this suite.
Sep 25 03:41:44.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:41:44.682: INFO: namespace configmap-8773 deletion completed in 6.164604871s

• [SLOW TEST:10.375 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
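The "[LinuxOnly] non-root" variant above mounts a ConfigMap volume into a pod that runs with a non-root UID and verifies the file is still readable. A sketch under assumed image, args, and UID; the ConfigMap name and container name come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                  # assumed non-root UID; the point of this variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0     # assumed
    args: ["--file_content=/etc/configmap-volume/data-1"]      # assumed reader flag and key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-0fb058f0-1a26-471e-a563-688ebdddc2e3
```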
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:41:44.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Sep 25 03:41:51.610: INFO: 9 pods remaining
Sep 25 03:41:51.610: INFO: 8 pods has nil DeletionTimestamp
Sep 25 03:41:51.610: INFO: 
Sep 25 03:41:52.484: INFO: 0 pods remaining
Sep 25 03:41:52.484: INFO: 0 pods has nil DeletionTimestamp
Sep 25 03:41:52.484: INFO: 
Sep 25 03:41:53.065: INFO: 0 pods remaining
Sep 25 03:41:53.065: INFO: 0 pods has nil DeletionTimestamp
Sep 25 03:41:53.065: INFO: 
STEP: Gathering metrics
W0925 03:41:53.806953       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 25 03:41:53.807: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:41:53.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9039" for this suite.
Sep 25 03:42:00.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:42:00.231: INFO: namespace gc-9039 deletion completed in 6.416773591s

• [SLOW TEST:15.548 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
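Keeping the RC around until all its pods are gone is foreground cascading deletion, requested through the delete options on the DELETE call. A sketch of that request body:

```yaml
# Body sent with the DELETE request for the ReplicationController.
# With "Foreground", the RC first gets a deletionTimestamp plus a
# foregroundDeletion finalizer, and is only removed after the garbage
# collector has deleted every dependent pod -- matching the
# "N pods remaining / N pods has nil DeletionTimestamp" polling above.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```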
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:42:00.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 25 03:42:00.302: INFO: Waiting up to 5m0s for pod "pod-022498d5-6944-4343-8f5b-ba45fae57e53" in namespace "emptydir-8700" to be "success or failure"
Sep 25 03:42:00.330: INFO: Pod "pod-022498d5-6944-4343-8f5b-ba45fae57e53": Phase="Pending", Reason="", readiness=false. Elapsed: 27.559092ms
Sep 25 03:42:02.335: INFO: Pod "pod-022498d5-6944-4343-8f5b-ba45fae57e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033196186s
Sep 25 03:42:04.342: INFO: Pod "pod-022498d5-6944-4343-8f5b-ba45fae57e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040045722s
STEP: Saw pod success
Sep 25 03:42:04.343: INFO: Pod "pod-022498d5-6944-4343-8f5b-ba45fae57e53" satisfied condition "success or failure"
Sep 25 03:42:04.347: INFO: Trying to get logs from node iruya-worker2 pod pod-022498d5-6944-4343-8f5b-ba45fae57e53 container test-container: 
STEP: delete the pod
Sep 25 03:42:04.383: INFO: Waiting for pod pod-022498d5-6944-4343-8f5b-ba45fae57e53 to disappear
Sep 25 03:42:04.395: INFO: Pod pod-022498d5-6944-4343-8f5b-ba45fae57e53 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:42:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8700" for this suite.
Sep 25 03:42:10.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:42:10.601: INFO: namespace emptydir-8700 deletion completed in 6.196717029s

• [SLOW TEST:10.369 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
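The "volume on tmpfs" variant above differs from a plain emptyDir only in the storage medium. A sketch, assuming the e2e mounttest image and flags:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--fs_type=/test-volume"]                         # assumed flag
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # backs the volume with tmpfs instead of node disk
```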
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:42:10.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:42:10.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9" in namespace "downward-api-3203" to be "success or failure"
Sep 25 03:42:10.732: INFO: Pod "downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802569ms
Sep 25 03:42:12.738: INFO: Pod "downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012873389s
Sep 25 03:42:14.745: INFO: Pod "downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020696817s
STEP: Saw pod success
Sep 25 03:42:14.746: INFO: Pod "downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9" satisfied condition "success or failure"
Sep 25 03:42:14.751: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9 container client-container: 
STEP: delete the pod
Sep 25 03:42:14.790: INFO: Waiting for pod downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9 to disappear
Sep 25 03:42:14.813: INFO: Pod downwardapi-volume-9b681e50-d1c8-4f3d-b118-1cf527f9fcf9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:42:14.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3203" for this suite.
Sep 25 03:42:20.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:42:20.999: INFO: namespace downward-api-3203 deletion completed in 6.178457482s

• [SLOW TEST:10.397 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
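The downward API volume test above projects the container's own CPU request into a file it then reads back. A sketch with an illustrative request value and assumed image/args; the container name comes from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_content=/etc/podinfo/cpu_request"]        # assumed reader flag
    resources:
      requests:
        cpu: 250m                    # illustrative; the value the pod reads back
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # expose the request in millicores
```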
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:42:21.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-98b571ef-4025-4f56-a18b-f60073eb01f3 in namespace container-probe-4667
Sep 25 03:42:25.110: INFO: Started pod busybox-98b571ef-4025-4f56-a18b-f60073eb01f3 in namespace container-probe-4667
STEP: checking the pod's current state and verifying that restartCount is present
Sep 25 03:42:25.115: INFO: Initial restart count of pod busybox-98b571ef-4025-4f56-a18b-f60073eb01f3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:46:26.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4667" for this suite.
Sep 25 03:46:32.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:46:32.251: INFO: namespace container-probe-4667 deletion completed in 6.188672245s

• [SLOW TEST:251.251 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
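The probe test above runs for the full observation window (note the ~4-minute gap between 03:42:25 and 03:46:26) and passes because the restart count stays 0: the container creates `/tmp/health`, so `cat /tmp/health` keeps succeeding. A sketch of such a pod; the image tag, command, and timings are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29                      # assumed tag
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"] # file exists for the pod's life
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds as long as the file exists
      initialDelaySeconds: 5             # assumed timings
      periodSeconds: 5
```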
------------------------------
SS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:46:32.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6658, will wait for the garbage collector to delete the pods
Sep 25 03:46:38.416: INFO: Deleting Job.batch foo took: 8.292001ms
Sep 25 03:46:38.717: INFO: Terminating Job.batch foo pods took: 300.92341ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:47:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6658" for this suite.
Sep 25 03:47:21.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:47:21.942: INFO: namespace job-6658 deletion completed in 6.211418869s

• [SLOW TEST:49.690 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
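The "active pods == parallelism" check above implies a parallel Job whose pods stay running until the garbage collector removes them. A sketch; the parallelism value, image, and command are assumptions (only the Job name `foo` comes from the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                     # illustrative; active pod count must match this
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker                 # assumed container name
        image: docker.io/library/busybox:1.29   # assumed
        command: ["sleep", "1000000"]           # long-running so pods stay active
```

The deletion itself is cascading ("will wait for the garbage collector to delete the pods"), so "Ensuring job was deleted" only succeeds once both the Job object and its pods are gone.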
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:47:21.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Sep 25 03:47:22.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Sep 25 03:47:25.584: INFO: stderr: ""
Sep 25 03:47:25.584: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:47:25.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3375" for this suite.
Sep 25 03:47:31.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:47:31.778: INFO: namespace kubectl-3375 deletion completed in 6.157118766s

• [SLOW TEST:9.834 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:47:31.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Sep 25 03:47:31.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8114'
Sep 25 03:47:33.023: INFO: stderr: ""
Sep 25 03:47:33.023: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Sep 25 03:47:33.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8114'
Sep 25 03:47:45.663: INFO: stderr: ""
Sep 25 03:47:45.663: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:47:45.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8114" for this suite.
Sep 25 03:47:51.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:47:51.856: INFO: namespace kubectl-8114 deletion completed in 6.15090211s

• [SLOW TEST:20.071 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:47:51.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Sep 25 03:47:51.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-742 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Sep 25 03:47:55.897: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0925 03:47:55.743616    2690 log.go:172] (0x2b24070) (0x2b240e0) Create stream\nI0925 03:47:55.746855    2690 log.go:172] (0x2b24070) (0x2b240e0) Stream added, broadcasting: 1\nI0925 03:47:55.758589    2690 log.go:172] (0x2b24070) Reply frame received for 1\nI0925 03:47:55.759359    2690 log.go:172] (0x2b24070) (0x27f0000) Create stream\nI0925 03:47:55.759500    2690 log.go:172] (0x2b24070) (0x27f0000) Stream added, broadcasting: 3\nI0925 03:47:55.761882    2690 log.go:172] (0x2b24070) Reply frame received for 3\nI0925 03:47:55.762435    2690 log.go:172] (0x2b24070) (0x2b24150) Create stream\nI0925 03:47:55.762567    2690 log.go:172] (0x2b24070) (0x2b24150) Stream added, broadcasting: 5\nI0925 03:47:55.764720    2690 log.go:172] (0x2b24070) Reply frame received for 5\nI0925 03:47:55.765146    2690 log.go:172] (0x2b24070) (0x2b241c0) Create stream\nI0925 03:47:55.765235    2690 log.go:172] (0x2b24070) (0x2b241c0) Stream added, broadcasting: 7\nI0925 03:47:55.766935    2690 log.go:172] (0x2b24070) Reply frame received for 7\nI0925 03:47:55.770406    2690 log.go:172] (0x27f0000) (3) Writing data frame\nI0925 03:47:55.773126    2690 log.go:172] (0x2b24070) Data frame received for 5\nI0925 03:47:55.773322    2690 log.go:172] (0x27f0000) (3) Writing data frame\nI0925 03:47:55.773524    2690 log.go:172] (0x2b24150) (5) Data frame handling\nI0925 03:47:55.773923    2690 log.go:172] (0x2b24150) (5) Data frame sent\nI0925 03:47:55.775064    2690 log.go:172] (0x2b24070) Data frame received for 5\nI0925 03:47:55.775143    2690 log.go:172] (0x2b24150) (5) Data frame handling\nI0925 03:47:55.775217    2690 log.go:172] (0x2b24150) (5) Data frame sent\nI0925 03:47:55.832470    2690 log.go:172] (0x2b24070) Data frame received for 7\nI0925 03:47:55.832776    2690 log.go:172] (0x2b24070) Data frame received for 1\nI0925 03:47:55.834295    2690 log.go:172] (0x2b240e0) (1) Data frame handling\nI0925 03:47:55.834838    2690 log.go:172] (0x2b241c0) (7) Data frame handling\nI0925 03:47:55.835072    2690 log.go:172] (0x2b240e0) (1) Data frame sent\nI0925 03:47:55.835973    2690 log.go:172] (0x2b24070) Data frame received for 5\nI0925 03:47:55.836135    2690 log.go:172] (0x2b24070) (0x27f0000) Stream removed, broadcasting: 3\nI0925 03:47:55.837771    2690 log.go:172] (0x2b24150) (5) Data frame handling\nI0925 03:47:55.838029    2690 log.go:172] (0x2b24070) (0x2b240e0) Stream removed, broadcasting: 1\nI0925 03:47:55.839382    2690 log.go:172] (0x2b24070) Go away received\nI0925 03:47:55.841945    2690 log.go:172] (0x2b24070) (0x2b240e0) Stream removed, broadcasting: 1\nI0925 03:47:55.842305    2690 log.go:172] (0x2b24070) (0x27f0000) Stream removed, broadcasting: 3\nI0925 03:47:55.842447    2690 log.go:172] (0x2b24070) (0x2b24150) Stream removed, broadcasting: 5\nI0925 03:47:55.842767    2690 log.go:172] (0x2b24070) (0x2b241c0) Stream removed, broadcasting: 7\n"
Sep 25 03:47:55.898: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:47:57.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-742" for this suite.
Sep 25 03:48:05.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:48:06.105: INFO: namespace kubectl-742 deletion completed in 8.183644377s

• [SLOW TEST:14.245 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:48:06.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:48:06.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:48:10.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-224" for this suite.
Sep 25 03:48:48.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:48:48.592: INFO: namespace pods-224 deletion completed in 38.182096228s

• [SLOW TEST:42.485 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:48:48.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:48:48.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17" in namespace "projected-2410" to be "success or failure"
Sep 25 03:48:48.713: INFO: Pod "downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17": Phase="Pending", Reason="", readiness=false. Elapsed: 12.724969ms
Sep 25 03:48:50.721: INFO: Pod "downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020717769s
Sep 25 03:48:52.729: INFO: Pod "downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028466746s
STEP: Saw pod success
Sep 25 03:48:52.729: INFO: Pod "downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17" satisfied condition "success or failure"
Sep 25 03:48:52.735: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17 container client-container: 
STEP: delete the pod
Sep 25 03:48:52.771: INFO: Waiting for pod downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17 to disappear
Sep 25 03:48:52.809: INFO: Pod downwardapi-volume-5ff23886-e291-4a88-89c3-f5951c80bc17 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:48:52.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2410" for this suite.
Sep 25 03:48:58.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:48:58.993: INFO: namespace projected-2410 deletion completed in 6.170912021s

• [SLOW TEST:10.399 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:48:58.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Sep 25 03:48:59.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9447'
Sep 25 03:49:00.687: INFO: stderr: ""
Sep 25 03:49:00.688: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:49:00.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9447'
Sep 25 03:49:01.833: INFO: stderr: ""
Sep 25 03:49:01.833: INFO: stdout: "update-demo-nautilus-4dpj4 update-demo-nautilus-w6w6b "
Sep 25 03:49:01.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dpj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:02.935: INFO: stderr: ""
Sep 25 03:49:02.935: INFO: stdout: ""
Sep 25 03:49:02.935: INFO: update-demo-nautilus-4dpj4 is created but not running
Sep 25 03:49:07.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9447'
Sep 25 03:49:09.067: INFO: stderr: ""
Sep 25 03:49:09.067: INFO: stdout: "update-demo-nautilus-4dpj4 update-demo-nautilus-w6w6b "
Sep 25 03:49:09.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dpj4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:10.202: INFO: stderr: ""
Sep 25 03:49:10.202: INFO: stdout: "true"
Sep 25 03:49:10.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dpj4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:11.304: INFO: stderr: ""
Sep 25 03:49:11.304: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:49:11.304: INFO: validating pod update-demo-nautilus-4dpj4
Sep 25 03:49:11.311: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:49:11.312: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:49:11.312: INFO: update-demo-nautilus-4dpj4 is verified up and running
Sep 25 03:49:11.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6w6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:12.428: INFO: stderr: ""
Sep 25 03:49:12.428: INFO: stdout: "true"
Sep 25 03:49:12.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w6w6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:13.548: INFO: stderr: ""
Sep 25 03:49:13.548: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 25 03:49:13.548: INFO: validating pod update-demo-nautilus-w6w6b
Sep 25 03:49:13.554: INFO: got data: {
  "image": "nautilus.jpg"
}

Sep 25 03:49:13.554: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 25 03:49:13.554: INFO: update-demo-nautilus-w6w6b is verified up and running
STEP: rolling-update to new replication controller
Sep 25 03:49:13.561: INFO: scanned /root for discovery docs: 
Sep 25 03:49:13.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9447'
Sep 25 03:49:37.761: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Sep 25 03:49:37.762: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 25 03:49:37.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9447'
Sep 25 03:49:38.894: INFO: stderr: ""
Sep 25 03:49:38.894: INFO: stdout: "update-demo-kitten-88pj9 update-demo-kitten-z885g "
Sep 25 03:49:38.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88pj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:40.010: INFO: stderr: ""
Sep 25 03:49:40.011: INFO: stdout: "true"
Sep 25 03:49:40.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88pj9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:41.151: INFO: stderr: ""
Sep 25 03:49:41.152: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep 25 03:49:41.152: INFO: validating pod update-demo-kitten-88pj9
Sep 25 03:49:41.157: INFO: got data: {
  "image": "kitten.jpg"
}

Sep 25 03:49:41.158: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep 25 03:49:41.158: INFO: update-demo-kitten-88pj9 is verified up and running
Sep 25 03:49:41.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z885g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:42.280: INFO: stderr: ""
Sep 25 03:49:42.280: INFO: stdout: "true"
Sep 25 03:49:42.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z885g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9447'
Sep 25 03:49:43.450: INFO: stderr: ""
Sep 25 03:49:43.450: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Sep 25 03:49:43.450: INFO: validating pod update-demo-kitten-z885g
Sep 25 03:49:43.458: INFO: got data: {
  "image": "kitten.jpg"
}

Sep 25 03:49:43.458: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Sep 25 03:49:43.458: INFO: update-demo-kitten-z885g is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:49:43.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9447" for this suite.
Sep 25 03:50:07.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:50:07.623: INFO: namespace kubectl-9447 deletion completed in 24.16012815s

• [SLOW TEST:68.630 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:50:07.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Sep 25 03:50:07.732: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3580,SelfLink:/api/v1/namespaces/watch-3580/configmaps/e2e-watch-test-watch-closed,UID:cfefface-9afc-4707-b890-f6759289f6dd,ResourceVersion:336787,Generation:0,CreationTimestamp:2020-09-25 03:50:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Sep 25 03:50:07.733: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3580,SelfLink:/api/v1/namespaces/watch-3580/configmaps/e2e-watch-test-watch-closed,UID:cfefface-9afc-4707-b890-f6759289f6dd,ResourceVersion:336789,Generation:0,CreationTimestamp:2020-09-25 03:50:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Sep 25 03:50:07.752: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3580,SelfLink:/api/v1/namespaces/watch-3580/configmaps/e2e-watch-test-watch-closed,UID:cfefface-9afc-4707-b890-f6759289f6dd,ResourceVersion:336790,Generation:0,CreationTimestamp:2020-09-25 03:50:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 25 03:50:07.753: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3580,SelfLink:/api/v1/namespaces/watch-3580/configmaps/e2e-watch-test-watch-closed,UID:cfefface-9afc-4707-b890-f6759289f6dd,ResourceVersion:336791,Generation:0,CreationTimestamp:2020-09-25 03:50:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:50:07.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3580" for this suite.
Sep 25 03:50:13.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:50:13.928: INFO: namespace watch-3580 deletion completed in 6.165397916s

• [SLOW TEST:6.299 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:50:13.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 25 03:50:14.010: INFO: Waiting up to 5m0s for pod "pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f" in namespace "emptydir-8167" to be "success or failure"
Sep 25 03:50:14.014: INFO: Pod "pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.532448ms
Sep 25 03:50:16.021: INFO: Pod "pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011697654s
Sep 25 03:50:18.029: INFO: Pod "pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018924652s
STEP: Saw pod success
Sep 25 03:50:18.029: INFO: Pod "pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f" satisfied condition "success or failure"
Sep 25 03:50:18.034: INFO: Trying to get logs from node iruya-worker pod pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f container test-container: 
STEP: delete the pod
Sep 25 03:50:18.072: INFO: Waiting for pod pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f to disappear
Sep 25 03:50:18.085: INFO: Pod pod-8f2e33ac-7bb0-49bd-9187-a1e3b167926f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:50:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8167" for this suite.
Sep 25 03:50:24.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:50:24.263: INFO: namespace emptydir-8167 deletion completed in 6.167253915s

• [SLOW TEST:10.334 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:50:24.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep 25 03:50:24.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9743,SelfLink:/api/v1/namespaces/watch-9743/configmaps/e2e-watch-test-resource-version,UID:72419bfc-406b-4de6-8dce-f68d476f99f1,ResourceVersion:336857,Generation:0,CreationTimestamp:2020-09-25 03:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Sep 25 03:50:24.433: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9743,SelfLink:/api/v1/namespaces/watch-9743/configmaps/e2e-watch-test-resource-version,UID:72419bfc-406b-4de6-8dce-f68d476f99f1,ResourceVersion:336858,Generation:0,CreationTimestamp:2020-09-25 03:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:50:24.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9743" for this suite.
Sep 25 03:50:30.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:50:30.607: INFO: namespace watch-9743 deletion completed in 6.153429836s

• [SLOW TEST:6.343 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:50:30.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Sep 25 03:50:30.676: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 25 03:50:30.717: INFO: Waiting for terminating namespaces to be deleted...
Sep 25 03:50:30.722: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Sep 25 03:50:30.735: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:50:30.735: INFO: 	Container kube-proxy ready: true, restart count 0
Sep 25 03:50:30.735: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:50:30.735: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 03:50:30.735: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Sep 25 03:50:30.745: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:50:30.746: INFO: 	Container kindnet-cni ready: true, restart count 0
Sep 25 03:50:30.746: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Sep 25 03:50:30.746: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Sep 25 03:50:30.853: INFO: Pod kindnet-7bsvw requesting resource cpu=100m on Node iruya-worker
Sep 25 03:50:30.854: INFO: Pod kindnet-djqgh requesting resource cpu=100m on Node iruya-worker2
Sep 25 03:50:30.854: INFO: Pod kube-proxy-52wt5 requesting resource cpu=0m on Node iruya-worker2
Sep 25 03:50:30.854: INFO: Pod kube-proxy-mtljr requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff.1637ea52196ded47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7701/filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff.1637ea52ae7ddbea], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff.1637ea52e2b7b0ba], Reason = [Created], Message = [Created container filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff.1637ea52efa4738f], Reason = [Started], Message = [Started container filler-pod-3e791f78-951d-40dc-a59b-4d662c5434ff]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5.1637ea5219fc5b73], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7701/filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5.1637ea526571b245], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5.1637ea52c3f799b8], Reason = [Created], Message = [Created container filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5.1637ea52db167da1], Reason = [Started], Message = [Started container filler-pod-f2ab7ba9-68ad-4f07-a281-ffd69a6a35e5]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1637ea530a197a4c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:50:36.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7701" for this suite.
Sep 25 03:50:42.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:50:42.272: INFO: namespace sched-pred-7701 deletion completed in 6.184221675s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:11.662 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:50:42.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:50:46.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1608" for this suite.
Sep 25 03:51:24.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:51:24.592: INFO: namespace kubelet-test-1608 deletion completed in 38.16307408s

• [SLOW TEST:42.314 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:51:24.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 25 03:51:24.647: INFO: Waiting up to 5m0s for pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed" in namespace "emptydir-5960" to be "success or failure"
Sep 25 03:51:24.693: INFO: Pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed": Phase="Pending", Reason="", readiness=false. Elapsed: 45.483996ms
Sep 25 03:51:26.700: INFO: Pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052401536s
Sep 25 03:51:28.706: INFO: Pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058475996s
Sep 25 03:51:30.713: INFO: Pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065534947s
STEP: Saw pod success
Sep 25 03:51:30.713: INFO: Pod "pod-5f786851-f5d2-49ca-bf19-5288b05a40ed" satisfied condition "success or failure"
Sep 25 03:51:30.719: INFO: Trying to get logs from node iruya-worker pod pod-5f786851-f5d2-49ca-bf19-5288b05a40ed container test-container: 
STEP: delete the pod
Sep 25 03:51:30.740: INFO: Waiting for pod pod-5f786851-f5d2-49ca-bf19-5288b05a40ed to disappear
Sep 25 03:51:30.744: INFO: Pod pod-5f786851-f5d2-49ca-bf19-5288b05a40ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:51:30.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5960" for this suite.
Sep 25 03:51:37.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:51:37.175: INFO: namespace emptydir-5960 deletion completed in 6.423760363s

• [SLOW TEST:12.582 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:51:37.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jxlj
STEP: Creating a pod to test atomic-volume-subpath
Sep 25 03:51:37.750: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jxlj" in namespace "subpath-8111" to be "success or failure"
Sep 25 03:51:37.844: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Pending", Reason="", readiness=false. Elapsed: 94.053066ms
Sep 25 03:51:39.850: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100460862s
Sep 25 03:51:41.857: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 4.10743269s
Sep 25 03:51:43.869: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 6.119497961s
Sep 25 03:51:46.233: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 8.482801194s
Sep 25 03:51:48.240: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 10.490339643s
Sep 25 03:51:50.248: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 12.498088231s
Sep 25 03:51:52.255: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 14.505388752s
Sep 25 03:51:54.262: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 16.512780368s
Sep 25 03:51:56.270: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 18.520325946s
Sep 25 03:51:58.278: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 20.527906123s
Sep 25 03:52:00.285: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Running", Reason="", readiness=true. Elapsed: 22.53528178s
Sep 25 03:52:02.359: INFO: Pod "pod-subpath-test-configmap-jxlj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.609101173s
STEP: Saw pod success
Sep 25 03:52:02.359: INFO: Pod "pod-subpath-test-configmap-jxlj" satisfied condition "success or failure"
Sep 25 03:52:02.375: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-jxlj container test-container-subpath-configmap-jxlj: 
STEP: delete the pod
Sep 25 03:52:02.394: INFO: Waiting for pod pod-subpath-test-configmap-jxlj to disappear
Sep 25 03:52:02.414: INFO: Pod pod-subpath-test-configmap-jxlj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jxlj
Sep 25 03:52:02.414: INFO: Deleting pod "pod-subpath-test-configmap-jxlj" in namespace "subpath-8111"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:52:02.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8111" for this suite.
Sep 25 03:52:08.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:52:08.617: INFO: namespace subpath-8111 deletion completed in 6.19204844s

• [SLOW TEST:31.440 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:52:08.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 25 03:52:08.738: INFO: Waiting up to 5m0s for pod "pod-1179551d-a494-440f-bb54-48a13b92c699" in namespace "emptydir-5742" to be "success or failure"
Sep 25 03:52:08.751: INFO: Pod "pod-1179551d-a494-440f-bb54-48a13b92c699": Phase="Pending", Reason="", readiness=false. Elapsed: 12.642175ms
Sep 25 03:52:10.758: INFO: Pod "pod-1179551d-a494-440f-bb54-48a13b92c699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019820951s
Sep 25 03:52:12.765: INFO: Pod "pod-1179551d-a494-440f-bb54-48a13b92c699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026529754s
STEP: Saw pod success
Sep 25 03:52:12.765: INFO: Pod "pod-1179551d-a494-440f-bb54-48a13b92c699" satisfied condition "success or failure"
Sep 25 03:52:12.770: INFO: Trying to get logs from node iruya-worker pod pod-1179551d-a494-440f-bb54-48a13b92c699 container test-container: 
STEP: delete the pod
Sep 25 03:52:12.807: INFO: Waiting for pod pod-1179551d-a494-440f-bb54-48a13b92c699 to disappear
Sep 25 03:52:12.829: INFO: Pod pod-1179551d-a494-440f-bb54-48a13b92c699 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:52:12.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5742" for this suite.
Sep 25 03:52:18.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:52:19.024: INFO: namespace emptydir-5742 deletion completed in 6.187100812s

• [SLOW TEST:10.405 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:52:19.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 25 03:52:22.173: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:52:22.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7545" for this suite.
Sep 25 03:52:28.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:52:28.629: INFO: namespace container-runtime-7545 deletion completed in 6.188430176s

• [SLOW TEST:9.602 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:52:28.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1789
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1789
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1789
Sep 25 03:52:28.750: INFO: Found 0 stateful pods, waiting for 1
Sep 25 03:52:38.758: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Sep 25 03:52:38.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:52:40.188: INFO: stderr: "I0925 03:52:40.053057    3027 log.go:172] (0x26cc000) (0x26cc0e0) Create stream\nI0925 03:52:40.054840    3027 log.go:172] (0x26cc000) (0x26cc0e0) Stream added, broadcasting: 1\nI0925 03:52:40.065748    3027 log.go:172] (0x26cc000) Reply frame received for 1\nI0925 03:52:40.066497    3027 log.go:172] (0x26cc000) (0x269c000) Create stream\nI0925 03:52:40.066593    3027 log.go:172] (0x26cc000) (0x269c000) Stream added, broadcasting: 3\nI0925 03:52:40.068031    3027 log.go:172] (0x26cc000) Reply frame received for 3\nI0925 03:52:40.068276    3027 log.go:172] (0x26cc000) (0x269c230) Create stream\nI0925 03:52:40.068336    3027 log.go:172] (0x26cc000) (0x269c230) Stream added, broadcasting: 5\nI0925 03:52:40.069797    3027 log.go:172] (0x26cc000) Reply frame received for 5\nI0925 03:52:40.128129    3027 log.go:172] (0x26cc000) Data frame received for 5\nI0925 03:52:40.128386    3027 log.go:172] (0x269c230) (5) Data frame handling\nI0925 03:52:40.128900    3027 log.go:172] (0x269c230) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:52:40.171948    3027 log.go:172] (0x26cc000) Data frame received for 3\nI0925 03:52:40.172107    3027 log.go:172] (0x269c000) (3) Data frame handling\nI0925 03:52:40.172264    3027 log.go:172] (0x26cc000) Data frame received for 5\nI0925 03:52:40.172482    3027 log.go:172] (0x269c230) (5) Data frame handling\nI0925 03:52:40.172772    3027 log.go:172] (0x269c000) (3) Data frame sent\nI0925 03:52:40.173124    3027 log.go:172] (0x26cc000) Data frame received for 3\nI0925 03:52:40.173272    3027 log.go:172] (0x269c000) (3) Data frame handling\nI0925 03:52:40.173679    3027 log.go:172] (0x26cc000) Data frame received for 1\nI0925 03:52:40.173807    3027 log.go:172] (0x26cc0e0) (1) Data frame handling\nI0925 03:52:40.173958    3027 log.go:172] (0x26cc0e0) (1) Data frame sent\nI0925 03:52:40.174735    3027 log.go:172] (0x26cc000) (0x26cc0e0) Stream removed, broadcasting: 1\nI0925 03:52:40.177695    3027 log.go:172] (0x26cc000) Go away received\nI0925 03:52:40.180963    3027 log.go:172] (0x26cc000) (0x26cc0e0) Stream removed, broadcasting: 1\nI0925 03:52:40.181382    3027 log.go:172] (0x26cc000) (0x269c000) Stream removed, broadcasting: 3\nI0925 03:52:40.181630    3027 log.go:172] (0x26cc000) (0x269c230) Stream removed, broadcasting: 5\n"
Sep 25 03:52:40.189: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:52:40.190: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:52:40.196: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Sep 25 03:52:50.233: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:52:50.234: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:52:50.274: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999976826s
Sep 25 03:52:51.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.975725728s
Sep 25 03:52:52.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967257472s
Sep 25 03:52:53.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.958303819s
Sep 25 03:52:54.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.949750466s
Sep 25 03:52:55.316: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.94255236s
Sep 25 03:52:56.324: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.933588283s
Sep 25 03:52:57.332: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.925627862s
Sep 25 03:52:58.341: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.917649292s
Sep 25 03:52:59.348: INFO: Verifying statefulset ss doesn't scale past 1 for another 909.076207ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-1789
Sep 25 03:53:00.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:53:01.722: INFO: stderr: "I0925 03:53:01.597008    3049 log.go:172] (0x29d1ce0) (0x29d1d50) Create stream\nI0925 03:53:01.600649    3049 log.go:172] (0x29d1ce0) (0x29d1d50) Stream added, broadcasting: 1\nI0925 03:53:01.614703    3049 log.go:172] (0x29d1ce0) Reply frame received for 1\nI0925 03:53:01.615156    3049 log.go:172] (0x29d1ce0) (0x24ac8c0) Create stream\nI0925 03:53:01.615224    3049 log.go:172] (0x29d1ce0) (0x24ac8c0) Stream added, broadcasting: 3\nI0925 03:53:01.616439    3049 log.go:172] (0x29d1ce0) Reply frame received for 3\nI0925 03:53:01.616773    3049 log.go:172] (0x29d1ce0) (0x281ab60) Create stream\nI0925 03:53:01.616930    3049 log.go:172] (0x29d1ce0) (0x281ab60) Stream added, broadcasting: 5\nI0925 03:53:01.618123    3049 log.go:172] (0x29d1ce0) Reply frame received for 5\nI0925 03:53:01.703901    3049 log.go:172] (0x29d1ce0) Data frame received for 5\nI0925 03:53:01.704105    3049 log.go:172] (0x29d1ce0) Data frame received for 1\nI0925 03:53:01.704319    3049 log.go:172] (0x29d1ce0) Data frame received for 3\nI0925 03:53:01.704532    3049 log.go:172] (0x24ac8c0) (3) Data frame handling\nI0925 03:53:01.704801    3049 log.go:172] (0x29d1d50) (1) Data frame handling\nI0925 03:53:01.705162    3049 log.go:172] (0x281ab60) (5) Data frame handling\nI0925 03:53:01.706519    3049 log.go:172] (0x24ac8c0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 03:53:01.706729    3049 log.go:172] (0x281ab60) (5) Data frame sent\nI0925 03:53:01.706921    3049 log.go:172] (0x29d1d50) (1) Data frame sent\nI0925 03:53:01.707085    3049 log.go:172] (0x29d1ce0) Data frame received for 3\nI0925 03:53:01.707277    3049 log.go:172] (0x24ac8c0) (3) Data frame handling\nI0925 03:53:01.707535    3049 log.go:172] (0x29d1ce0) Data frame received for 5\nI0925 03:53:01.708198    3049 log.go:172] (0x29d1ce0) (0x29d1d50) Stream removed, broadcasting: 1\nI0925 03:53:01.710179    3049 log.go:172] (0x281ab60) (5) Data frame handling\nI0925 
03:53:01.711214    3049 log.go:172] (0x29d1ce0) Go away received\nI0925 03:53:01.714578    3049 log.go:172] (0x29d1ce0) (0x29d1d50) Stream removed, broadcasting: 1\nI0925 03:53:01.714938    3049 log.go:172] (0x29d1ce0) (0x24ac8c0) Stream removed, broadcasting: 3\nI0925 03:53:01.715250    3049 log.go:172] (0x29d1ce0) (0x281ab60) Stream removed, broadcasting: 5\n"
Sep 25 03:53:01.723: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:53:01.723: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:53:01.729: INFO: Found 1 stateful pods, waiting for 3
Sep 25 03:53:11.740: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 03:53:11.740: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 03:53:11.740: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Sep 25 03:53:11.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:53:13.144: INFO: stderr: "I0925 03:53:13.032075    3071 log.go:172] (0x2684000) (0x2684230) Create stream\nI0925 03:53:13.034200    3071 log.go:172] (0x2684000) (0x2684230) Stream added, broadcasting: 1\nI0925 03:53:13.048590    3071 log.go:172] (0x2684000) Reply frame received for 1\nI0925 03:53:13.049620    3071 log.go:172] (0x2684000) (0x24ba7e0) Create stream\nI0925 03:53:13.049752    3071 log.go:172] (0x2684000) (0x24ba7e0) Stream added, broadcasting: 3\nI0925 03:53:13.051814    3071 log.go:172] (0x2684000) Reply frame received for 3\nI0925 03:53:13.052236    3071 log.go:172] (0x2684000) (0x24ba930) Create stream\nI0925 03:53:13.052362    3071 log.go:172] (0x2684000) (0x24ba930) Stream added, broadcasting: 5\nI0925 03:53:13.054311    3071 log.go:172] (0x2684000) Reply frame received for 5\nI0925 03:53:13.123571    3071 log.go:172] (0x2684000) Data frame received for 5\nI0925 03:53:13.123959    3071 log.go:172] (0x2684000) Data frame received for 3\nI0925 03:53:13.124144    3071 log.go:172] (0x24ba7e0) (3) Data frame handling\nI0925 03:53:13.124747    3071 log.go:172] (0x24ba930) (5) Data frame handling\nI0925 03:53:13.125083    3071 log.go:172] (0x2684000) Data frame received for 1\nI0925 03:53:13.125209    3071 log.go:172] (0x2684230) (1) Data frame handling\nI0925 03:53:13.125541    3071 log.go:172] (0x2684230) (1) Data frame sent\nI0925 03:53:13.125788    3071 log.go:172] (0x24ba930) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:53:13.126138    3071 log.go:172] (0x24ba7e0) (3) Data frame sent\nI0925 03:53:13.126488    3071 log.go:172] (0x2684000) Data frame received for 3\nI0925 03:53:13.126641    3071 log.go:172] (0x24ba7e0) (3) Data frame handling\nI0925 03:53:13.126847    3071 log.go:172] (0x2684000) Data frame received for 5\nI0925 03:53:13.127013    3071 log.go:172] (0x24ba930) (5) Data frame handling\nI0925 03:53:13.127772    3071 log.go:172] (0x2684000) (0x2684230) Stream removed, broadcasting: 1\nI0925 
03:53:13.132301    3071 log.go:172] (0x2684000) Go away received\nI0925 03:53:13.133903    3071 log.go:172] (0x2684000) (0x2684230) Stream removed, broadcasting: 1\nI0925 03:53:13.134304    3071 log.go:172] (0x2684000) (0x24ba7e0) Stream removed, broadcasting: 3\nI0925 03:53:13.134864    3071 log.go:172] (0x2684000) (0x24ba930) Stream removed, broadcasting: 5\n"
Sep 25 03:53:13.145: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:53:13.145: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:53:13.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:53:14.521: INFO: stderr: "I0925 03:53:14.385074    3095 log.go:172] (0x2929dc0) (0x2b68000) Create stream\nI0925 03:53:14.388943    3095 log.go:172] (0x2929dc0) (0x2b68000) Stream added, broadcasting: 1\nI0925 03:53:14.407237    3095 log.go:172] (0x2929dc0) Reply frame received for 1\nI0925 03:53:14.407725    3095 log.go:172] (0x2929dc0) (0x2b08070) Create stream\nI0925 03:53:14.407810    3095 log.go:172] (0x2929dc0) (0x2b08070) Stream added, broadcasting: 3\nI0925 03:53:14.409233    3095 log.go:172] (0x2929dc0) Reply frame received for 3\nI0925 03:53:14.409488    3095 log.go:172] (0x2929dc0) (0x26664d0) Create stream\nI0925 03:53:14.409562    3095 log.go:172] (0x2929dc0) (0x26664d0) Stream added, broadcasting: 5\nI0925 03:53:14.410615    3095 log.go:172] (0x2929dc0) Reply frame received for 5\nI0925 03:53:14.473413    3095 log.go:172] (0x2929dc0) Data frame received for 5\nI0925 03:53:14.473779    3095 log.go:172] (0x26664d0) (5) Data frame handling\nI0925 03:53:14.474410    3095 log.go:172] (0x26664d0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:53:14.504503    3095 log.go:172] (0x2929dc0) Data frame received for 3\nI0925 03:53:14.504706    3095 log.go:172] (0x2b08070) (3) Data frame handling\nI0925 03:53:14.504965    3095 log.go:172] (0x2929dc0) Data frame received for 5\nI0925 03:53:14.505175    3095 log.go:172] (0x26664d0) (5) Data frame handling\nI0925 03:53:14.505344    3095 log.go:172] (0x2b08070) (3) Data frame sent\nI0925 03:53:14.505572    3095 log.go:172] (0x2929dc0) Data frame received for 3\nI0925 03:53:14.505747    3095 log.go:172] (0x2b08070) (3) Data frame handling\nI0925 03:53:14.506237    3095 log.go:172] (0x2929dc0) Data frame received for 1\nI0925 03:53:14.506399    3095 log.go:172] (0x2b68000) (1) Data frame handling\nI0925 03:53:14.506567    3095 log.go:172] (0x2b68000) (1) Data frame sent\nI0925 03:53:14.507672    3095 log.go:172] (0x2929dc0) (0x2b68000) Stream removed, broadcasting: 1\nI0925 
03:53:14.510650    3095 log.go:172] (0x2929dc0) Go away received\nI0925 03:53:14.513604    3095 log.go:172] (0x2929dc0) (0x2b68000) Stream removed, broadcasting: 1\nI0925 03:53:14.513981    3095 log.go:172] (0x2929dc0) (0x2b08070) Stream removed, broadcasting: 3\nI0925 03:53:14.514262    3095 log.go:172] (0x2929dc0) (0x26664d0) Stream removed, broadcasting: 5\n"
Sep 25 03:53:14.522: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:53:14.522: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Sep 25 03:53:14.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Sep 25 03:53:15.944: INFO: stderr: "I0925 03:53:15.770716    3118 log.go:172] (0x28301c0) (0x24ae770) Create stream\nI0925 03:53:15.773254    3118 log.go:172] (0x28301c0) (0x24ae770) Stream added, broadcasting: 1\nI0925 03:53:15.787333    3118 log.go:172] (0x28301c0) Reply frame received for 1\nI0925 03:53:15.788012    3118 log.go:172] (0x28301c0) (0x27a2000) Create stream\nI0925 03:53:15.788103    3118 log.go:172] (0x28301c0) (0x27a2000) Stream added, broadcasting: 3\nI0925 03:53:15.789888    3118 log.go:172] (0x28301c0) Reply frame received for 3\nI0925 03:53:15.790302    3118 log.go:172] (0x28301c0) (0x26a01c0) Create stream\nI0925 03:53:15.790399    3118 log.go:172] (0x28301c0) (0x26a01c0) Stream added, broadcasting: 5\nI0925 03:53:15.791966    3118 log.go:172] (0x28301c0) Reply frame received for 5\nI0925 03:53:15.875022    3118 log.go:172] (0x28301c0) Data frame received for 5\nI0925 03:53:15.875349    3118 log.go:172] (0x26a01c0) (5) Data frame handling\nI0925 03:53:15.875995    3118 log.go:172] (0x26a01c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0925 03:53:15.926703    3118 log.go:172] (0x28301c0) Data frame received for 5\nI0925 03:53:15.926916    3118 log.go:172] (0x26a01c0) (5) Data frame handling\nI0925 03:53:15.927062    3118 log.go:172] (0x28301c0) Data frame received for 3\nI0925 03:53:15.927254    3118 log.go:172] (0x27a2000) (3) Data frame handling\nI0925 03:53:15.927499    3118 log.go:172] (0x27a2000) (3) Data frame sent\nI0925 03:53:15.927660    3118 log.go:172] (0x28301c0) Data frame received for 3\nI0925 03:53:15.927930    3118 log.go:172] (0x27a2000) (3) Data frame handling\nI0925 03:53:15.928378    3118 log.go:172] (0x28301c0) Data frame received for 1\nI0925 03:53:15.928491    3118 log.go:172] (0x24ae770) (1) Data frame handling\nI0925 03:53:15.928643    3118 log.go:172] (0x24ae770) (1) Data frame sent\nI0925 03:53:15.929576    3118 log.go:172] (0x28301c0) (0x24ae770) Stream removed, broadcasting: 1\nI0925 
03:53:15.933462    3118 log.go:172] (0x28301c0) Go away received\nI0925 03:53:15.936109    3118 log.go:172] (0x28301c0) (0x24ae770) Stream removed, broadcasting: 1\nI0925 03:53:15.936463    3118 log.go:172] (0x28301c0) (0x27a2000) Stream removed, broadcasting: 3\nI0925 03:53:15.936749    3118 log.go:172] (0x28301c0) (0x26a01c0) Stream removed, broadcasting: 5\n"
Sep 25 03:53:15.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Sep 25 03:53:15.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
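Each of the exec commands above relies on the same trick: moving nginx's index.html out of the web root makes the pod's HTTP readiness probe fail, which marks the pod unready and halts further StatefulSet scaling; moving it back restores readiness. A minimal local sketch of that file shuffle, using temporary directories instead of a real pod (`webroot` and `stash` are made-up names standing in for /usr/share/nginx/html and /tmp):

```shell
# Local stand-in for the probe-breaking trick: hide index.html, then restore it.
# webroot mimics /usr/share/nginx/html; stash mimics /tmp inside the container.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$webroot/index.html"

# Break "readiness": the file the probe would fetch disappears.
mv -v "$webroot/index.html" "$stash/" || true
[ ! -e "$webroot/index.html" ] && echo "readiness probe would now fail"

# Restore "readiness": put the file back, as the scale-up step does.
mv -v "$stash/index.html" "$webroot/" || true
```

The trailing `|| true` mirrors the logged commands, which deliberately ignore the mv exit status so a missing file does not fail the exec itself.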

Sep 25 03:53:15.945: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:53:15.970: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Sep 25 03:53:25.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:53:25.984: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:53:25.984: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Sep 25 03:53:26.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999987335s
Sep 25 03:53:27.017: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98791446s
Sep 25 03:53:28.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976990254s
Sep 25 03:53:29.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962508024s
Sep 25 03:53:30.045: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.955059383s
Sep 25 03:53:31.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94841953s
Sep 25 03:53:32.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940075382s
Sep 25 03:53:33.074: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.930171248s
Sep 25 03:53:34.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.920083625s
Sep 25 03:53:35.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 910.437474ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1789
Sep 25 03:53:36.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:53:37.487: INFO: stderr: "I0925 03:53:37.373745    3141 log.go:172] (0x28487e0) (0x2848bd0) Create stream\nI0925 03:53:37.375457    3141 log.go:172] (0x28487e0) (0x2848bd0) Stream added, broadcasting: 1\nI0925 03:53:37.386782    3141 log.go:172] (0x28487e0) Reply frame received for 1\nI0925 03:53:37.388140    3141 log.go:172] (0x28487e0) (0x2a76000) Create stream\nI0925 03:53:37.388439    3141 log.go:172] (0x28487e0) (0x2a76000) Stream added, broadcasting: 3\nI0925 03:53:37.390826    3141 log.go:172] (0x28487e0) Reply frame received for 3\nI0925 03:53:37.391229    3141 log.go:172] (0x28487e0) (0x2b0c000) Create stream\nI0925 03:53:37.391346    3141 log.go:172] (0x28487e0) (0x2b0c000) Stream added, broadcasting: 5\nI0925 03:53:37.393239    3141 log.go:172] (0x28487e0) Reply frame received for 5\nI0925 03:53:37.471122    3141 log.go:172] (0x28487e0) Data frame received for 5\nI0925 03:53:37.471425    3141 log.go:172] (0x28487e0) Data frame received for 1\nI0925 03:53:37.471612    3141 log.go:172] (0x2b0c000) (5) Data frame handling\nI0925 03:53:37.471720    3141 log.go:172] (0x28487e0) Data frame received for 3\nI0925 03:53:37.471845    3141 log.go:172] (0x2a76000) (3) Data frame handling\nI0925 03:53:37.472113    3141 log.go:172] (0x2848bd0) (1) Data frame handling\nI0925 03:53:37.472413    3141 log.go:172] (0x2b0c000) (5) Data frame sent\nI0925 03:53:37.472608    3141 log.go:172] (0x2848bd0) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 03:53:37.473021    3141 log.go:172] (0x28487e0) Data frame received for 5\nI0925 03:53:37.473099    3141 log.go:172] (0x2b0c000) (5) Data frame handling\nI0925 03:53:37.473266    3141 log.go:172] (0x2a76000) (3) Data frame sent\nI0925 03:53:37.473492    3141 log.go:172] (0x28487e0) Data frame received for 3\nI0925 03:53:37.473667    3141 log.go:172] (0x2a76000) (3) Data frame handling\nI0925 03:53:37.475821    3141 log.go:172] (0x28487e0) (0x2848bd0) Stream removed, broadcasting: 1\nI0925 
03:53:37.476692    3141 log.go:172] (0x28487e0) Go away received\nI0925 03:53:37.480468    3141 log.go:172] (0x28487e0) (0x2848bd0) Stream removed, broadcasting: 1\nI0925 03:53:37.480653    3141 log.go:172] (0x28487e0) (0x2a76000) Stream removed, broadcasting: 3\nI0925 03:53:37.480820    3141 log.go:172] (0x28487e0) (0x2b0c000) Stream removed, broadcasting: 5\n"
Sep 25 03:53:37.488: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:53:37.488: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:53:37.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:53:38.839: INFO: stderr: "I0925 03:53:38.739342    3164 log.go:172] (0x2b38070) (0x2b38150) Create stream\nI0925 03:53:38.741753    3164 log.go:172] (0x2b38070) (0x2b38150) Stream added, broadcasting: 1\nI0925 03:53:38.753339    3164 log.go:172] (0x2b38070) Reply frame received for 1\nI0925 03:53:38.754465    3164 log.go:172] (0x2b38070) (0x2b381c0) Create stream\nI0925 03:53:38.754600    3164 log.go:172] (0x2b38070) (0x2b381c0) Stream added, broadcasting: 3\nI0925 03:53:38.756989    3164 log.go:172] (0x2b38070) Reply frame received for 3\nI0925 03:53:38.757504    3164 log.go:172] (0x2b38070) (0x281e2a0) Create stream\nI0925 03:53:38.757642    3164 log.go:172] (0x2b38070) (0x281e2a0) Stream added, broadcasting: 5\nI0925 03:53:38.759649    3164 log.go:172] (0x2b38070) Reply frame received for 5\nI0925 03:53:38.821859    3164 log.go:172] (0x2b38070) Data frame received for 5\nI0925 03:53:38.822096    3164 log.go:172] (0x2b38070) Data frame received for 3\nI0925 03:53:38.822205    3164 log.go:172] (0x2b381c0) (3) Data frame handling\nI0925 03:53:38.822516    3164 log.go:172] (0x281e2a0) (5) Data frame handling\nI0925 03:53:38.822786    3164 log.go:172] (0x2b38070) Data frame received for 1\nI0925 03:53:38.822968    3164 log.go:172] (0x281e2a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0925 03:53:38.823346    3164 log.go:172] (0x2b38150) (1) Data frame handling\nI0925 03:53:38.823554    3164 log.go:172] (0x2b381c0) (3) Data frame sent\nI0925 03:53:38.823759    3164 log.go:172] (0x2b38070) Data frame received for 3\nI0925 03:53:38.824198    3164 log.go:172] (0x2b381c0) (3) Data frame handling\nI0925 03:53:38.824426    3164 log.go:172] (0x2b38070) Data frame received for 5\nI0925 03:53:38.824616    3164 log.go:172] (0x281e2a0) (5) Data frame handling\nI0925 03:53:38.824757    3164 log.go:172] (0x2b38150) (1) Data frame sent\nI0925 03:53:38.826496    3164 log.go:172] (0x2b38070) (0x2b38150) Stream removed, broadcasting: 1\nI0925 
03:53:38.828787    3164 log.go:172] (0x2b38070) Go away received\nI0925 03:53:38.831126    3164 log.go:172] (0x2b38070) (0x2b38150) Stream removed, broadcasting: 1\nI0925 03:53:38.831437    3164 log.go:172] (0x2b38070) (0x2b381c0) Stream removed, broadcasting: 3\nI0925 03:53:38.831744    3164 log.go:172] (0x2b38070) (0x281e2a0) Stream removed, broadcasting: 5\n"
Sep 25 03:53:38.840: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Sep 25 03:53:38.840: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Sep 25 03:53:38.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:53:40.116: INFO: rc: 1
Sep 25 03:53:40.118: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0x7e31dd0 exit status 1   true [0x8c0de10 0x8c0de30 0x8c0de50] [0x8c0de10 0x8c0de30 0x8c0de50] [0x8c0de28 0x8c0de48] [0x6bbb70 0x6bbb70] 0x99c3540 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
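As the "Waiting 10s to retry failed RunHostCmd" lines show, the framework reacts to a failed exec by pausing and re-running the same command until it succeeds or the surrounding timeout expires. A toy version of that retry loop (no sleep, and `run_host_cmd` is a hypothetical stand-in for the kubectl exec that fails twice before succeeding):

```shell
# Toy retry loop mirroring "Waiting 10s to retry failed RunHostCmd".
# run_host_cmd stands in for the kubectl exec; it fails twice, then succeeds.
n=0
run_host_cmd() { n=$((n+1)); [ "$n" -ge 3 ]; }

tries=0
until run_host_cmd; do
  tries=$((tries+1))
  echo "rc: 1 (attempt $tries), retrying"
done
echo "succeeded on attempt $((tries+1))"
```

In the real framework the command keeps failing first with "container not found" (the container is being torn down) and then with "pods not found" (the pod object is gone), which is exactly the progression a scale-down to 0 should produce.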
Sep 25 03:53:50.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:53:51.204: INFO: rc: 1
Sep 25 03:53:51.205: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x85f0090 exit status 1   true [0x6db03c8 0x6db0430 0x6db0540] [0x6db03c8 0x6db0430 0x6db0540] [0x6db0418 0x6db04e8] [0x6bbb70 0x6bbb70] 0x73e5640 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:56:36.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:56:38.057: INFO: rc: 1
Sep 25 03:56:38.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x94e20f0 exit status 1   true [0x6db36f0 0x6db3a10 0x6db3b78] [0x6db36f0 0x6db3a10 0x6db3b78] [0x6db3970 0x6db3b28] [0x6bbb70 0x6bbb70] 0x73b2c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:56:48.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:56:49.213: INFO: rc: 1
Sep 25 03:56:49.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x94e21e0 exit status 1   true [0x6db0540 0x6db0640 0x6db0740] [0x6db0540 0x6db0640 0x6db0740] [0x6db05e8 0x6db0708] [0x6bbb70 0x6bbb70] 0x73b3600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:56:59.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:00.324: INFO: rc: 1
Sep 25 03:57:00.325: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x85f00f0 exit status 1   true [0x6d2b1f8 0x6d2b308 0x6d2b420] [0x6d2b1f8 0x6d2b308 0x6d2b420] [0x6d2b2f0 0x6d2b418] [0x6bbb70 0x6bbb70] 0x7bca8c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:57:10.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:11.477: INFO: rc: 1
Sep 25 03:57:11.477: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x86a8180 exit status 1   true [0x90860a8 0x90860c8 0x90860e8] [0x90860a8 0x90860c8 0x90860e8] [0x90860c0 0x90860e0] [0x6bbb70 0x6bbb70] 0x8ffe1c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:57:21.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:22.587: INFO: rc: 1
Sep 25 03:57:22.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x92003f0 exit status 1   true [0x9046268 0x9046288 0x90462a8] [0x9046268 0x9046288 0x90462a8] [0x9046280 0x90462a0] [0x6bbb70 0x6bbb70] 0x7ae7040 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:57:32.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:36.246: INFO: rc: 1
Sep 25 03:57:36.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x92004b0 exit status 1   true [0x90462e0 0x9046300 0x9046320] [0x90462e0 0x9046300 0x9046320] [0x90462f8 0x9046318] [0x6bbb70 0x6bbb70] 0x7ff41c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:57:46.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:47.349: INFO: rc: 1
Sep 25 03:57:47.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x86a8270 exit status 1   true [0x9086120 0x9086140 0x9086170] [0x9086120 0x9086140 0x9086170] [0x9086138 0x9086168] [0x6bbb70 0x6bbb70] 0x8ffe500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:57:57.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:57:58.462: INFO: rc: 1
Sep 25 03:57:58.462: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x94e2090 exit status 1   true [0x6db22e0 0x6db2590 0x6db2808] [0x6db22e0 0x6db2590 0x6db2808] [0x6db24f0 0x6db26f0] [0x6bbb70 0x6bbb70] 0x73e5640 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:58:08.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:58:09.580: INFO: rc: 1
Sep 25 03:58:09.580: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x85f00c0 exit status 1   true [0x6d2b1f8 0x6d2b308 0x6d2b420] [0x6d2b1f8 0x6d2b308 0x6d2b420] [0x6d2b2f0 0x6d2b418] [0x6bbb70 0x6bbb70] 0x740a840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:58:19.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:58:20.710: INFO: rc: 1
Sep 25 03:58:20.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x94e2180 exit status 1   true [0x6db2ba0 0x6db30a0 0x6db34e8] [0x6db2ba0 0x6db30a0 0x6db34e8] [0x6db2f58 0x6db34c0] [0x6bbb70 0x6bbb70] 0x6ff5080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:58:30.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:58:31.823: INFO: rc: 1
Sep 25 03:58:31.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x92000c0 exit status 1   true [0x6db03b0 0x6db0418 0x6db04e8] [0x6db03b0 0x6db0418 0x6db04e8] [0x6db0408 0x6db0498] [0x6bbb70 0x6bbb70] 0x7ae7040 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Sep 25 03:58:41.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1789 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Sep 25 03:58:42.963: INFO: rc: 1
Sep 25 03:58:42.964: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
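The retry loop above re-runs the host command every 10 s until it succeeds or the overall timeout expires. A minimal Python sketch of that pattern, assuming a `run_cmd` callable returning `(rc, stdout, stderr)` (the 10 s interval comes from the log; the function name and signature are illustrative, not the e2e framework's actual API):

```python
import time

def retry_host_cmd(run_cmd, timeout=600, interval=10, sleep=time.sleep):
    """Retry run_cmd() until it returns rc == 0 or `timeout` seconds elapse.

    Mirrors the behavior visible in the log: on a non-zero return code,
    wait `interval` seconds and try again; give up at the deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        rc, stdout, stderr = run_cmd()
        if rc == 0:
            return stdout
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command kept failing: rc={rc}, stderr={stderr!r}")
        sleep(interval)
```

Note that the test's shell command ends in `|| true`, so inside the pod the shell always exits 0; the rc 1 seen here comes from kubectl itself because the pod no longer exists.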
Sep 25 03:58:42.964: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 25 03:58:42.989: INFO: Deleting all statefulset in ns statefulset-1789
Sep 25 03:58:42.994: INFO: Scaling statefulset ss to 0
Sep 25 03:58:43.006: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 03:58:43.009: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:58:43.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1789" for this suite.
Sep 25 03:58:49.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:58:49.204: INFO: namespace statefulset-1789 deletion completed in 6.168439111s

• [SLOW TEST:380.575 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
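The "scaled down in reverse order" verification above checks a documented StatefulSet guarantee: when scaling down, pods are removed highest ordinal first. A sketch of the expected deletion order (illustrative helper, not the controller's implementation):

```python
def scale_down_order(name, current_replicas, target_replicas):
    """Return pod names in the order a StatefulSet removes them when
    scaling from current_replicas down to target_replicas: highest
    ordinal first, e.g. ss-2, ss-1, ss-0 for a 3 -> 0 scale-down.
    """
    return [f"{name}-{i}" for i in range(current_replicas - 1, target_replicas - 1, -1)]
```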
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:58:49.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:58:49.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca" in namespace "downward-api-158" to be "success or failure"
Sep 25 03:58:49.285: INFO: Pod "downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495796ms
Sep 25 03:58:51.293: INFO: Pod "downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018140707s
Sep 25 03:58:53.300: INFO: Pod "downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02591675s
STEP: Saw pod success
Sep 25 03:58:53.301: INFO: Pod "downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca" satisfied condition "success or failure"
Sep 25 03:58:53.306: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca container client-container: 
STEP: delete the pod
Sep 25 03:58:53.344: INFO: Waiting for pod downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca to disappear
Sep 25 03:58:53.357: INFO: Pod downwardapi-volume-818359a8-72fb-417e-8eac-16f65525b6ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:58:53.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-158" for this suite.
Sep 25 03:58:59.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:58:59.570: INFO: namespace downward-api-158 deletion completed in 6.20235379s

• [SLOW TEST:10.364 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
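The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines above poll the pod phase every couple of seconds until it reaches a terminal state. A Python sketch of that condition, assuming a `get_phase` callable (the names and polling interval here are illustrative; only the 5 m timeout and the terminal-phase condition come from the log):

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_completion(get_phase, timeout=300, interval=2, sleep=time.sleep):
    """Poll get_phase() until the pod reaches Succeeded or Failed,
    raising TimeoutError if it is still non-terminal at the deadline."""
    deadline = time.monotonic() + timeout
    while True:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```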
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:58:59.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Sep 25 03:58:59.632: INFO: Waiting up to 5m0s for pod "client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6" in namespace "containers-3633" to be "success or failure"
Sep 25 03:58:59.645: INFO: Pod "client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.155114ms
Sep 25 03:59:01.652: INFO: Pod "client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020258619s
Sep 25 03:59:03.659: INFO: Pod "client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027211647s
STEP: Saw pod success
Sep 25 03:59:03.659: INFO: Pod "client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6" satisfied condition "success or failure"
Sep 25 03:59:03.663: INFO: Trying to get logs from node iruya-worker2 pod client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6 container test-container: 
STEP: delete the pod
Sep 25 03:59:03.687: INFO: Waiting for pod client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6 to disappear
Sep 25 03:59:03.692: INFO: Pod client-containers-a4ad603b-cb0a-449a-82ec-c605dbb948c6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:59:03.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3633" for this suite.
Sep 25 03:59:09.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:59:09.885: INFO: namespace containers-3633 deletion completed in 6.185365537s

• [SLOW TEST:10.313 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:59:09.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 03:59:10.023: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0e9f4d3d-4d7d-467d-89cc-467dfc9c92ef", Controller:(*bool)(0x8c30302), BlockOwnerDeletion:(*bool)(0x8c30303)}}
Sep 25 03:59:10.053: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bbff23a7-f8a2-4489-8e3b-82653c532016", Controller:(*bool)(0x8c14d0a), BlockOwnerDeletion:(*bool)(0x8c14d0b)}}
Sep 25 03:59:10.088: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d9026c97-3c48-408a-9b9e-267f6a194a98", Controller:(*bool)(0x8c150aa), BlockOwnerDeletion:(*bool)(0x8c150ab)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:59:15.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4846" for this suite.
Sep 25 03:59:21.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:59:21.309: INFO: namespace gc-4846 deletion completed in 6.161106518s

• [SLOW TEST:11.419 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
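The OwnerReferences logged above form a deliberate circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the test asserts that namespace deletion is not blocked by it. A sketch of detecting such a circle over single-owner references (the mapping below is taken from the log; the function itself is illustrative, not the garbage collector's algorithm):

```python
def find_cycle(owners):
    """owners maps object name -> its owner's name (one owner each,
    as in the test). Return the members of one ownership cycle, or
    None if the ownership graph is acyclic."""
    for start in owners:
        seen = []
        node = start
        while node in owners and node not in seen:
            seen.append(node)
            node = owners[node]
        if node in seen:
            return seen[seen.index(node):]  # the cycle, from its entry point
    return None
```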
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:59:21.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 03:59:21.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade" in namespace "projected-3532" to be "success or failure"
Sep 25 03:59:21.443: INFO: Pod "downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735682ms
Sep 25 03:59:23.450: INFO: Pod "downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016219324s
Sep 25 03:59:25.457: INFO: Pod "downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023418768s
STEP: Saw pod success
Sep 25 03:59:25.458: INFO: Pod "downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade" satisfied condition "success or failure"
Sep 25 03:59:25.463: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade container client-container: 
STEP: delete the pod
Sep 25 03:59:25.497: INFO: Waiting for pod downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade to disappear
Sep 25 03:59:25.508: INFO: Pod downwardapi-volume-47af9122-abba-4d82-bd21-9cdfaf0dfade no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:59:25.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3532" for this suite.
Sep 25 03:59:31.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 03:59:31.697: INFO: namespace projected-3532 deletion completed in 6.17861057s

• [SLOW TEST:10.384 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 03:59:31.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 03:59:35.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2057" for this suite.
Sep 25 04:00:15.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:00:15.989: INFO: namespace kubelet-test-2057 deletion completed in 40.156788363s

• [SLOW TEST:44.291 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:00:15.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Sep 25 04:00:16.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8" in namespace "projected-2340" to be "success or failure"
Sep 25 04:00:16.099: INFO: Pod "downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8": Phase="Pending", Reason="", readiness=false. Elapsed: 27.197014ms
Sep 25 04:00:18.106: INFO: Pod "downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034184192s
Sep 25 04:00:20.114: INFO: Pod "downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042086501s
STEP: Saw pod success
Sep 25 04:00:20.114: INFO: Pod "downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8" satisfied condition "success or failure"
Sep 25 04:00:20.120: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8 container client-container: 
STEP: delete the pod
Sep 25 04:00:20.196: INFO: Waiting for pod downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8 to disappear
Sep 25 04:00:20.200: INFO: Pod downwardapi-volume-735632df-cab6-415b-aacd-16fdaf6d4db8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:00:20.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2340" for this suite.
Sep 25 04:00:26.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:00:26.388: INFO: namespace projected-2340 deletion completed in 6.15743488s

• [SLOW TEST:10.398 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:00:26.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-87b8f73d-14bb-47c1-acb6-4a6da5534984
STEP: Creating a pod to test consume secrets
Sep 25 04:00:26.474: INFO: Waiting up to 5m0s for pod "pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd" in namespace "secrets-8418" to be "success or failure"
Sep 25 04:00:26.524: INFO: Pod "pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 49.758314ms
Sep 25 04:00:28.530: INFO: Pod "pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056255061s
Sep 25 04:00:30.538: INFO: Pod "pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063750894s
STEP: Saw pod success
Sep 25 04:00:30.538: INFO: Pod "pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd" satisfied condition "success or failure"
Sep 25 04:00:30.545: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd container secret-volume-test: 
STEP: delete the pod
Sep 25 04:00:30.563: INFO: Waiting for pod pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd to disappear
Sep 25 04:00:30.567: INFO: Pod pod-secrets-41dd8f21-08e2-4e21-9ede-25e366bbe3cd no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:00:30.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8418" for this suite.
Sep 25 04:00:36.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:00:36.829: INFO: namespace secrets-8418 deletion completed in 6.253391256s

• [SLOW TEST:10.438 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:00:36.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 25 04:00:36.899: INFO: Waiting up to 5m0s for pod "pod-41443425-aa29-47f6-b004-87a4d1568958" in namespace "emptydir-612" to be "success or failure"
Sep 25 04:00:36.910: INFO: Pod "pod-41443425-aa29-47f6-b004-87a4d1568958": Phase="Pending", Reason="", readiness=false. Elapsed: 10.524966ms
Sep 25 04:00:38.917: INFO: Pod "pod-41443425-aa29-47f6-b004-87a4d1568958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017831081s
Sep 25 04:00:40.924: INFO: Pod "pod-41443425-aa29-47f6-b004-87a4d1568958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024897438s
STEP: Saw pod success
Sep 25 04:00:40.925: INFO: Pod "pod-41443425-aa29-47f6-b004-87a4d1568958" satisfied condition "success or failure"
Sep 25 04:00:40.930: INFO: Trying to get logs from node iruya-worker2 pod pod-41443425-aa29-47f6-b004-87a4d1568958 container test-container: 
STEP: delete the pod
Sep 25 04:00:40.986: INFO: Waiting for pod pod-41443425-aa29-47f6-b004-87a4d1568958 to disappear
Sep 25 04:00:40.993: INFO: Pod pod-41443425-aa29-47f6-b004-87a4d1568958 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:00:40.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-612" for this suite.
Sep 25 04:00:47.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:00:47.156: INFO: namespace emptydir-612 deletion completed in 6.153350478s

• [SLOW TEST:10.326 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:00:47.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 04:00:47.274: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Sep 25 04:00:47.288: INFO: Number of nodes with available pods: 0
Sep 25 04:00:47.289: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Sep 25 04:00:47.391: INFO: Number of nodes with available pods: 0
Sep 25 04:00:47.392: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:48.399: INFO: Number of nodes with available pods: 0
Sep 25 04:00:48.399: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:49.466: INFO: Number of nodes with available pods: 0
Sep 25 04:00:49.466: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:50.401: INFO: Number of nodes with available pods: 1
Sep 25 04:00:50.401: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Sep 25 04:00:50.438: INFO: Number of nodes with available pods: 1
Sep 25 04:00:50.438: INFO: Number of running nodes: 0, number of available pods: 1
Sep 25 04:00:51.446: INFO: Number of nodes with available pods: 0
Sep 25 04:00:51.446: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Sep 25 04:00:51.506: INFO: Number of nodes with available pods: 0
Sep 25 04:00:51.507: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:52.514: INFO: Number of nodes with available pods: 0
Sep 25 04:00:52.514: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:53.515: INFO: Number of nodes with available pods: 0
Sep 25 04:00:53.515: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:54.515: INFO: Number of nodes with available pods: 0
Sep 25 04:00:54.515: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:55.525: INFO: Number of nodes with available pods: 0
Sep 25 04:00:55.525: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:56.514: INFO: Number of nodes with available pods: 0
Sep 25 04:00:56.514: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:00:57.515: INFO: Number of nodes with available pods: 1
Sep 25 04:00:57.515: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3048, will wait for the garbage collector to delete the pods
Sep 25 04:00:57.589: INFO: Deleting DaemonSet.extensions daemon-set took: 8.433321ms
Sep 25 04:00:57.890: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.835369ms
Sep 25 04:01:05.406: INFO: Number of nodes with available pods: 0
Sep 25 04:01:05.407: INFO: Number of running nodes: 0, number of available pods: 0
Sep 25 04:01:05.410: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3048/daemonsets","resourceVersion":"338738"},"items":null}

Sep 25 04:01:05.414: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3048/pods","resourceVersion":"338738"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:01:05.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3048" for this suite.
Sep 25 04:01:11.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:01:11.677: INFO: namespace daemonsets-3048 deletion completed in 6.176440861s

• [SLOW TEST:24.520 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:01:11.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Sep 25 04:01:16.286: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1365 pod-service-account-4d2a949f-493f-402d-8db6-e1c883b7fb89 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Sep 25 04:01:17.627: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1365 pod-service-account-4d2a949f-493f-402d-8db6-e1c883b7fb89 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Sep 25 04:01:19.021: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1365 pod-service-account-4d2a949f-493f-402d-8db6-e1c883b7fb89 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:01:20.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1365" for this suite.
Sep 25 04:01:26.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:01:26.595: INFO: namespace svcaccounts-1365 deletion completed in 6.161826898s

• [SLOW TEST:14.917 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:01:26.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-906
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 25 04:01:26.682: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Sep 25 04:01:54.831: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.6 8081 | grep -v '^\s*$'] Namespace:pod-network-test-906 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 04:01:54.832: INFO: >>> kubeConfig: /root/.kube/config
I0925 04:01:54.940318       7 log.go:172] (0x8c617a0) (0x8c61a40) Create stream
I0925 04:01:54.940472       7 log.go:172] (0x8c617a0) (0x8c61a40) Stream added, broadcasting: 1
I0925 04:01:54.945023       7 log.go:172] (0x8c617a0) Reply frame received for 1
I0925 04:01:54.945293       7 log.go:172] (0x8c617a0) (0x8c61c00) Create stream
I0925 04:01:54.945422       7 log.go:172] (0x8c617a0) (0x8c61c00) Stream added, broadcasting: 3
I0925 04:01:54.947483       7 log.go:172] (0x8c617a0) Reply frame received for 3
I0925 04:01:54.947845       7 log.go:172] (0x8c617a0) (0x8c61dc0) Create stream
I0925 04:01:54.948000       7 log.go:172] (0x8c617a0) (0x8c61dc0) Stream added, broadcasting: 5
I0925 04:01:54.949898       7 log.go:172] (0x8c617a0) Reply frame received for 5
I0925 04:01:56.003613       7 log.go:172] (0x8c617a0) Data frame received for 5
I0925 04:01:56.003846       7 log.go:172] (0x8c61dc0) (5) Data frame handling
I0925 04:01:56.004016       7 log.go:172] (0x8c617a0) Data frame received for 3
I0925 04:01:56.004152       7 log.go:172] (0x8c61c00) (3) Data frame handling
I0925 04:01:56.004281       7 log.go:172] (0x8c61c00) (3) Data frame sent
I0925 04:01:56.004373       7 log.go:172] (0x8c617a0) Data frame received for 3
I0925 04:01:56.004457       7 log.go:172] (0x8c61c00) (3) Data frame handling
I0925 04:01:56.006030       7 log.go:172] (0x8c617a0) Data frame received for 1
I0925 04:01:56.006117       7 log.go:172] (0x8c61a40) (1) Data frame handling
I0925 04:01:56.006217       7 log.go:172] (0x8c61a40) (1) Data frame sent
I0925 04:01:56.006334       7 log.go:172] (0x8c617a0) (0x8c61a40) Stream removed, broadcasting: 1
I0925 04:01:56.006444       7 log.go:172] (0x8c617a0) Go away received
I0925 04:01:56.006919       7 log.go:172] (0x8c617a0) (0x8c61a40) Stream removed, broadcasting: 1
I0925 04:01:56.007085       7 log.go:172] (0x8c617a0) (0x8c61c00) Stream removed, broadcasting: 3
I0925 04:01:56.007179       7 log.go:172] (0x8c617a0) (0x8c61dc0) Stream removed, broadcasting: 5
Sep 25 04:01:56.007: INFO: Found all expected endpoints: [netserver-0]
Sep 25 04:01:56.012: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.65 8081 | grep -v '^\s*$'] Namespace:pod-network-test-906 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 25 04:01:56.012: INFO: >>> kubeConfig: /root/.kube/config
I0925 04:01:56.114107       7 log.go:172] (0x75ce0e0) (0x75ce1c0) Create stream
I0925 04:01:56.114247       7 log.go:172] (0x75ce0e0) (0x75ce1c0) Stream added, broadcasting: 1
I0925 04:01:56.118626       7 log.go:172] (0x75ce0e0) Reply frame received for 1
I0925 04:01:56.118962       7 log.go:172] (0x75ce0e0) (0x8f20700) Create stream
I0925 04:01:56.119108       7 log.go:172] (0x75ce0e0) (0x8f20700) Stream added, broadcasting: 3
I0925 04:01:56.121177       7 log.go:172] (0x75ce0e0) Reply frame received for 3
I0925 04:01:56.121404       7 log.go:172] (0x75ce0e0) (0x75ce2a0) Create stream
I0925 04:01:56.121513       7 log.go:172] (0x75ce0e0) (0x75ce2a0) Stream added, broadcasting: 5
I0925 04:01:56.123287       7 log.go:172] (0x75ce0e0) Reply frame received for 5
I0925 04:01:57.233706       7 log.go:172] (0x75ce0e0) Data frame received for 3
I0925 04:01:57.234064       7 log.go:172] (0x75ce0e0) Data frame received for 5
I0925 04:01:57.234328       7 log.go:172] (0x75ce2a0) (5) Data frame handling
I0925 04:01:57.234484       7 log.go:172] (0x8f20700) (3) Data frame handling
I0925 04:01:57.234683       7 log.go:172] (0x8f20700) (3) Data frame sent
I0925 04:01:57.234817       7 log.go:172] (0x75ce0e0) Data frame received for 3
I0925 04:01:57.234940       7 log.go:172] (0x8f20700) (3) Data frame handling
I0925 04:01:57.235398       7 log.go:172] (0x75ce0e0) Data frame received for 1
I0925 04:01:57.235545       7 log.go:172] (0x75ce1c0) (1) Data frame handling
I0925 04:01:57.235702       7 log.go:172] (0x75ce1c0) (1) Data frame sent
I0925 04:01:57.235849       7 log.go:172] (0x75ce0e0) (0x75ce1c0) Stream removed, broadcasting: 1
I0925 04:01:57.236031       7 log.go:172] (0x75ce0e0) Go away received
I0925 04:01:57.236501       7 log.go:172] (0x75ce0e0) (0x75ce1c0) Stream removed, broadcasting: 1
I0925 04:01:57.236703       7 log.go:172] (0x75ce0e0) (0x8f20700) Stream removed, broadcasting: 3
I0925 04:01:57.236946       7 log.go:172] (0x75ce0e0) (0x75ce2a0) Stream removed, broadcasting: 5
Sep 25 04:01:57.237: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:01:57.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-906" for this suite.
Sep 25 04:02:19.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:02:19.411: INFO: namespace pod-network-test-906 deletion completed in 22.164702475s

• [SLOW TEST:52.815 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:02:19.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7950
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Sep 25 04:02:19.596: INFO: Found 0 stateful pods, waiting for 3
Sep 25 04:02:29.605: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:02:29.605: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:02:29.605: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Sep 25 04:02:39.606: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:02:39.606: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:02:39.606: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Sep 25 04:02:39.648: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep 25 04:02:49.700: INFO: Updating stateful set ss2
Sep 25 04:02:49.736: INFO: Waiting for Pod statefulset-7950/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Sep 25 04:02:59.976: INFO: Found 2 stateful pods, waiting for 3
Sep 25 04:03:09.987: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:03:09.987: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 25 04:03:09.987: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep 25 04:03:10.025: INFO: Updating stateful set ss2
Sep 25 04:03:10.059: INFO: Waiting for Pod statefulset-7950/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Sep 25 04:03:20.097: INFO: Updating stateful set ss2
Sep 25 04:03:20.128: INFO: Waiting for StatefulSet statefulset-7950/ss2 to complete update
Sep 25 04:03:20.128: INFO: Waiting for Pod statefulset-7950/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Sep 25 04:03:30.142: INFO: Deleting all statefulset in ns statefulset-7950
Sep 25 04:03:30.147: INFO: Scaling statefulset ss2 to 0
Sep 25 04:03:50.179: INFO: Waiting for statefulset status.replicas updated to 0
Sep 25 04:03:50.183: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:03:50.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7950" for this suite.
Sep 25 04:03:56.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:03:56.402: INFO: namespace statefulset-7950 deletion completed in 6.173986215s

• [SLOW TEST:96.986 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:03:56.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8605/configmap-test-a7d2cec4-2ff6-4bac-b709-6ccb3578a840
STEP: Creating a pod to test consume configMaps
Sep 25 04:03:56.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d" in namespace "configmap-8605" to be "success or failure"
Sep 25 04:03:56.526: INFO: Pod "pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188377ms
Sep 25 04:03:58.533: INFO: Pod "pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016148168s
Sep 25 04:04:00.542: INFO: Pod "pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024325355s
STEP: Saw pod success
Sep 25 04:04:00.542: INFO: Pod "pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d" satisfied condition "success or failure"
Sep 25 04:04:00.547: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d container env-test: 
STEP: delete the pod
Sep 25 04:04:00.576: INFO: Waiting for pod pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d to disappear
Sep 25 04:04:00.655: INFO: Pod pod-configmaps-d6deb29f-60cf-49d0-b24c-d1a3a0b0c37d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:04:00.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8605" for this suite.
Sep 25 04:04:06.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:04:06.870: INFO: namespace configmap-8605 deletion completed in 6.204312534s

• [SLOW TEST:10.465 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:04:06.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6fzc
STEP: Creating a pod to test atomic-volume-subpath
Sep 25 04:04:06.979: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6fzc" in namespace "subpath-2180" to be "success or failure"
Sep 25 04:04:06.988: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582188ms
Sep 25 04:04:08.996: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016143195s
Sep 25 04:04:11.003: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 4.023222479s
Sep 25 04:04:13.010: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 6.030647658s
Sep 25 04:04:15.017: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 8.037151496s
Sep 25 04:04:17.024: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 10.044551566s
Sep 25 04:04:19.030: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 12.050558994s
Sep 25 04:04:21.037: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 14.05749133s
Sep 25 04:04:23.044: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 16.064303422s
Sep 25 04:04:25.050: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 18.070957597s
Sep 25 04:04:27.058: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 20.078483328s
Sep 25 04:04:29.065: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Running", Reason="", readiness=true. Elapsed: 22.085776193s
Sep 25 04:04:31.072: INFO: Pod "pod-subpath-test-configmap-6fzc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.092484172s
STEP: Saw pod success
Sep 25 04:04:31.072: INFO: Pod "pod-subpath-test-configmap-6fzc" satisfied condition "success or failure"
Sep 25 04:04:31.077: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-6fzc container test-container-subpath-configmap-6fzc: 
STEP: delete the pod
Sep 25 04:04:31.121: INFO: Waiting for pod pod-subpath-test-configmap-6fzc to disappear
Sep 25 04:04:31.165: INFO: Pod pod-subpath-test-configmap-6fzc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6fzc
Sep 25 04:04:31.165: INFO: Deleting pod "pod-subpath-test-configmap-6fzc" in namespace "subpath-2180"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:04:31.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2180" for this suite.
Sep 25 04:04:37.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:04:37.346: INFO: namespace subpath-2180 deletion completed in 6.166830422s

• [SLOW TEST:30.467 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
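The subpath spec above mounts a single ConfigMap key into a container via `subPath` and waits for the pod to reach "success or failure". A minimal sketch of that manifest shape, built as a plain dict (all names here — `my-config`, `demo-pod`, `settings.conf` — are hypothetical, not the `pod-subpath-test-*` names the e2e framework generates):

```python
# Sketch of a pod that mounts one ConfigMap key via subPath.
# Hypothetical names; illustrates the pattern, not the framework's exact spec.

def subpath_pod_manifest(configmap_name: str, key: str) -> dict:
    """Build a minimal pod manifest mounting a single ConfigMap key."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "demo-pod"},
        "spec": {
            "restartPolicy": "Never",
            "volumes": [
                {"name": "config", "configMap": {"name": configmap_name}}
            ],
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                # subPath mounts only the named key at mountPath,
                # instead of shadowing the whole directory with the volume.
                "volumeMounts": [{
                    "name": "config",
                    "mountPath": "/etc/app/settings.conf",
                    "subPath": key,
                }],
            }],
        },
    }

manifest = subpath_pod_manifest("my-config", "settings.conf")
print(manifest["spec"]["containers"][0]["volumeMounts"][0]["subPath"])
# -> settings.conf
```

The "atomic writer" part of the test name refers to the kubelet's symlink-swap update mechanism for ConfigMap volumes, which the subPath mount must remain consistent with.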
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:04:37.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 04:04:37.394: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:04:38.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-187" for this suite.
Sep 25 04:04:44.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:04:44.276: INFO: namespace custom-resource-definition-187 deletion completed in 6.180538331s

• [SLOW TEST:6.928 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
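The CRD spec above only creates and deletes a CustomResourceDefinition object. A sketch of the object shape involved, using the `apiextensions.k8s.io/v1beta1` API that matches this v1.15-era run (group and kind names are hypothetical):

```python
# Sketch of a CustomResourceDefinition manifest as used by a
# create/delete round-trip. Hypothetical group/kind names.

def crd_manifest(group: str, plural: str, kind: str) -> dict:
    """Build a minimal namespaced CRD manifest."""
    return {
        "apiVersion": "apiextensions.k8s.io/v1beta1",
        "kind": "CustomResourceDefinition",
        # A CRD's metadata.name must be "<plural>.<group>".
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "version": "v1",
            "scope": "Namespaced",
            "names": {
                "plural": plural,
                "singular": kind.lower(),
                "kind": kind,
            },
        },
    }

crd = crd_manifest("example.com", "widgets", "Widget")
print(crd["metadata"]["name"])  # -> widgets.example.com
```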
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:04:44.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 04:04:44.416: INFO: Create a RollingUpdate DaemonSet
Sep 25 04:04:44.422: INFO: Check that daemon pods launch on every node of the cluster
Sep 25 04:04:44.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:44.446: INFO: Number of nodes with available pods: 0
Sep 25 04:04:44.446: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:04:45.455: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:45.462: INFO: Number of nodes with available pods: 0
Sep 25 04:04:45.462: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:04:46.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:46.631: INFO: Number of nodes with available pods: 0
Sep 25 04:04:46.631: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:04:47.457: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:47.465: INFO: Number of nodes with available pods: 0
Sep 25 04:04:47.465: INFO: Node iruya-worker is running more than one daemon pod
Sep 25 04:04:48.486: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:48.492: INFO: Number of nodes with available pods: 2
Sep 25 04:04:48.493: INFO: Number of running nodes: 2, number of available pods: 2
Sep 25 04:04:48.493: INFO: Update the DaemonSet to trigger a rollout
Sep 25 04:04:48.537: INFO: Updating DaemonSet daemon-set
Sep 25 04:04:56.587: INFO: Roll back the DaemonSet before rollout is complete
Sep 25 04:04:56.597: INFO: Updating DaemonSet daemon-set
Sep 25 04:04:56.598: INFO: Make sure DaemonSet rollback is complete
Sep 25 04:04:56.631: INFO: Wrong image for pod: daemon-set-fqf84. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep 25 04:04:56.631: INFO: Pod daemon-set-fqf84 is not available
Sep 25 04:04:56.654: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:57.663: INFO: Wrong image for pod: daemon-set-fqf84. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Sep 25 04:04:57.663: INFO: Pod daemon-set-fqf84 is not available
Sep 25 04:04:57.672: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 25 04:04:58.663: INFO: Pod daemon-set-dz2tt is not available
Sep 25 04:04:58.674: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8406, will wait for the garbage collector to delete the pods
Sep 25 04:04:58.750: INFO: Deleting DaemonSet.extensions daemon-set took: 8.663557ms
Sep 25 04:04:59.051: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.003874ms
Sep 25 04:05:05.656: INFO: Number of nodes with available pods: 0
Sep 25 04:05:05.656: INFO: Number of running nodes: 0, number of available pods: 0
Sep 25 04:05:05.675: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8406/daemonsets","resourceVersion":"339733"},"items":null}

Sep 25 04:05:05.679: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8406/pods","resourceVersion":"339733"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:05:05.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8406" for this suite.
Sep 25 04:05:11.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:05:11.877: INFO: namespace daemonsets-8406 deletion completed in 6.167261685s

• [SLOW TEST:27.593 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
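The rollback spec above updates the DaemonSet to an unpullable image (`foo:non-existent`), rolls back before the rollout completes, and then verifies that only the pod stuck on the broken image is recreated — pods still running the original image must not restart. A toy model of that decision, with image strings taken from the log but logic that is illustrative only, not the controller's actual code:

```python
# Toy model of "rollback without unnecessary restarts": only pods whose
# image differs from the rollback target need to be recreated.

ORIGINAL = "docker.io/library/nginx:1.14-alpine"
BROKEN = "foo:non-existent"

def pods_to_recreate(pods: dict, target_image: str) -> list:
    """Return the names of pods not already running the target image."""
    return sorted(name for name, image in pods.items()
                  if image != target_image)

# Mid-rollout state: one node's pod was replaced with the broken image,
# the other still runs the original (pod names are hypothetical except
# daemon-set-fqf84, which appears in the log above).
pods = {"daemon-set-aaaaa": ORIGINAL, "daemon-set-fqf84": BROKEN}
print(pods_to_recreate(pods, ORIGINAL))  # -> ['daemon-set-fqf84']
```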
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:05:11.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Sep 25 04:05:11.984: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Sep 25 04:05:11.994: INFO: Pod name sample-pod: Found 0 pods out of 1
Sep 25 04:05:17.003: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Sep 25 04:05:17.004: INFO: Creating deployment "test-rolling-update-deployment"
Sep 25 04:05:17.010: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Sep 25 04:05:17.036: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Sep 25 04:05:19.048: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Sep 25 04:05:19.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736603517, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736603517, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736603517, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736603517, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 25 04:05:21.061: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Sep 25 04:05:21.079: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8802,SelfLink:/apis/apps/v1/namespaces/deployment-8802/deployments/test-rolling-update-deployment,UID:934644d0-e443-4020-b8b7-b8cf46ad24f5,ResourceVersion:339837,Generation:1,CreationTimestamp:2020-09-25 04:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-09-25 04:05:17 +0000 UTC 2020-09-25 04:05:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-09-25 04:05:20 +0000 UTC 2020-09-25 04:05:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Sep 25 04:05:21.101: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8802,SelfLink:/apis/apps/v1/namespaces/deployment-8802/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:ad3bed19-eda4-415d-8326-1d7146045732,ResourceVersion:339826,Generation:1,CreationTimestamp:2020-09-25 04:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 934644d0-e443-4020-b8b7-b8cf46ad24f5 0x898b5a7 0x898b5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Sep 25 04:05:21.102: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Sep 25 04:05:21.103: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8802,SelfLink:/apis/apps/v1/namespaces/deployment-8802/replicasets/test-rolling-update-controller,UID:c1299dd2-7075-492d-a816-cc897f869c8a,ResourceVersion:339835,Generation:2,CreationTimestamp:2020-09-25 04:05:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 934644d0-e443-4020-b8b7-b8cf46ad24f5 0x898b4d7 0x898b4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Sep 25 04:05:21.110: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-x6hqg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-x6hqg,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8802,SelfLink:/api/v1/namespaces/deployment-8802/pods/test-rolling-update-deployment-79f6b9d75c-x6hqg,UID:e25bb24b-1b18-41a5-86be-7032b43edd35,ResourceVersion:339825,Generation:0,CreationTimestamp:2020-09-25 04:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c ad3bed19-eda4-415d-8326-1d7146045732 0x898bef7 0x898bef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dsg9k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dsg9k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dsg9k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x898bf70} {node.kubernetes.io/unreachable Exists  NoExecute 0x898bfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 04:05:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 04:05:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 04:05:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-25 04:05:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.16,StartTime:2020-09-25 04:05:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-09-25 04:05:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://4002d8def2818f4a50e2a4038bb6b2316fad50bac7d41818fdd6e6c268bd56ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:05:21.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8802" for this suite.
Sep 25 04:05:27.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:05:27.446: INFO: namespace deployment-8802 deletion completed in 6.329173754s

• [SLOW TEST:15.569 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
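The Deployment dump above shows the default RollingUpdate strategy (`MaxUnavailable:25%`, `MaxSurge:25%`, rendered with Go's `%!` fmt artifact in the raw log). The controller resolves those percentages against `spec.replicas`: surge rounds up, unavailable rounds down, which is why a 1-replica Deployment like this one briefly runs 2 pods (`Replicas:2, UpdatedReplicas:1`) with none taken unavailable. A sketch of that arithmetic (not the controller's actual code):

```python
import math

# Resolve percentage-based RollingUpdate bounds against the replica count:
# maxSurge rounds up, maxUnavailable rounds down.

def resolve_rolling_update(replicas: int,
                           max_surge_pct: int = 25,
                           max_unavailable_pct: int = 25) -> tuple:
    surge = math.ceil(replicas * max_surge_pct / 100)            # round up
    unavailable = math.floor(replicas * max_unavailable_pct / 100)  # round down
    return surge, unavailable

print(resolve_rolling_update(1))   # -> (1, 0): one extra pod, zero unavailable
print(resolve_rolling_update(10))  # -> (3, 2)
```

With one replica the old pod can only be deleted after the new one is available, matching the log's transition from `UnavailableReplicas:1` to a clean `AvailableReplicas:1`.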
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:05:27.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 25 04:05:35.603: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:35.610: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:37.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:37.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:39.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:39.653: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:41.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:41.641: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:43.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:43.617: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:45.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:45.623: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:47.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:47.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:49.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:49.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:51.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:51.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:53.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:53.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:55.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:55.617: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:57.611: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:57.626: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:05:59.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:05:59.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:06:01.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:06:01.617: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:06:03.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:06:03.618: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:06:05.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:06:05.626: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 25 04:06:07.610: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 25 04:06:07.617: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:06:07.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-919" for this suite.
Sep 25 04:06:31.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:06:31.802: INFO: namespace container-lifecycle-hook-919 deletion completed in 24.168819367s

• [SLOW TEST:64.351 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
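The preStop test above creates a pod whose container has a `lifecycle.preStop` exec hook, deletes it, and then polls roughly every two seconds until the pod object disappears. A minimal sketch of that shape follows; the pod name comes from the log, while the image and hook command are assumptions for illustration only (the e2e suite uses its own test images and handlers).

```python
import time

# Sketch of a pod manifest with a preStop exec hook, similar in shape to what
# the "should execute prestop exec hook properly" test creates. The pod name
# matches the log; image and command are illustrative assumptions.
pod_with_prestop_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [
            {
                "name": "pod-with-prestop-exec-hook",
                "image": "busybox",  # assumed image
                "lifecycle": {
                    "preStop": {
                        # Runs inside the container just before termination;
                        # the kubelet waits for it, bounded by the pod's
                        # terminationGracePeriodSeconds.
                        "exec": {"command": ["sh", "-c", "echo prestop"]}
                    }
                },
            }
        ]
    },
}


def poll_until_gone(pod_exists, interval_s=2.0, timeout_s=300.0):
    """Mirror of the log's wait loop: check until the pod no longer exists.

    Each iteration corresponds to one "Waiting for pod ... to disappear" /
    "Pod ... still exists" pair in the log above.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not pod_exists():
            return True  # "Pod ... no longer exists"
        time.sleep(interval_s)
    return False
```

The 2-second cadence of the timestamps in the log (04:05:35, 04:05:37, ...) is exactly this kind of fixed-interval poll.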
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Sep 25 04:06:31.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-70d67e20-652f-42ee-8867-3cb6e80337a0
STEP: Creating a pod to test consume secrets
Sep 25 04:06:31.903: INFO: Waiting up to 5m0s for pod "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d" in namespace "secrets-2219" to be "success or failure"
Sep 25 04:06:31.925: INFO: Pod "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.604454ms
Sep 25 04:06:33.932: INFO: Pod "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028423339s
Sep 25 04:06:35.939: INFO: Pod "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035618428s
STEP: Saw pod success
Sep 25 04:06:35.939: INFO: Pod "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d" satisfied condition "success or failure"
Sep 25 04:06:35.944: INFO: Trying to get logs from node iruya-worker pod pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d container secret-env-test: 
STEP: delete the pod
Sep 25 04:06:36.013: INFO: Waiting for pod pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d to disappear
Sep 25 04:06:36.017: INFO: Pod pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Sep 25 04:06:36.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2219" for this suite.
Sep 25 04:06:42.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 04:06:42.173: INFO: namespace secrets-2219 deletion completed in 6.147977808s

• [SLOW TEST:10.370 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
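The Secrets test above creates a Secret, then a pod whose container reads one of the Secret's keys through an environment variable and runs to completion ("success or failure" means phase Succeeded here). A minimal sketch of the two objects involved; the Secret, pod, and container names follow the log, while the key, value, image, and command are assumptions for illustration.

```python
import base64

# Sketch of the Secret the test creates. Secret `data` values are
# base64-encoded; the key/value here are illustrative assumptions.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test-70d67e20-652f-42ee-8867-3cb6e80337a0"},
    "data": {"data-1": "dmFsdWUtMQ=="},  # base64 of "value-1"
}

# Sketch of the consuming pod: the container gets the decoded secret value
# injected as an environment variable via secretKeyRef.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-79c2b42b-23ed-4619-9b30-49720e51a15d"},
    "spec": {
        "restartPolicy": "Never",  # lets the pod reach phase Succeeded
        "containers": [
            {
                "name": "secret-env-test",  # container name from the log
                "image": "busybox",         # assumed image
                "command": ["sh", "-c", "env"],
                "env": [
                    {
                        "name": "SECRET_DATA",
                        "valueFrom": {
                            "secretKeyRef": {
                                "name": secret["metadata"]["name"],
                                "key": "data-1",
                            }
                        },
                    }
                ],
            }
        ],
    },
}

# The kubelet decodes the base64 value before injecting it into the
# container's environment.
decoded = base64.b64decode(secret["data"]["data-1"]).decode()
```

The test then waits for the pod to leave Pending, sees it Succeed (as in the Phase transitions logged above), checks the container logs for the expected value, and deletes the pod.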
SSSSSSSSSSSSep 25 04:06:42.175: INFO: Running AfterSuite actions on all nodes
Sep 25 04:06:42.177: INFO: Running AfterSuite actions on node 1
Sep 25 04:06:42.177: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6333.205 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS