[Beowulf] Lustre Upgrades

John Hearns hearnsj at googlemail.com
Tue Jul 24 10:15:34 PDT 2018


Thank you for a comprehensive reply.

On Tue, 24 Jul 2018 at 17:56, Paul Edmon <pedmon at cfa.harvard.edu> wrote:

> This was several years back so the current version of Gluster may be in
> better shape.  We tried to use it for our primary storage but ran into
> scalability problems.  This was especially the case when it came to healing
> bricks and doing replication.  It just didn't scale well.  Eventually we
> abandoned it for NFS and Lustre, NFS for deep storage and Lustre for
> performance.  We tried it for hosting VM images which worked pretty well
> but we've since moved to Ceph for that.
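> (Aside for anyone hitting the same thing: heal progress can at least be
> watched from the CLI; a minimal sketch, with a hypothetical volume name:
>
>     gluster volume heal myvol info        # list entries still pending heal
>     gluster volume heal myvol statistics  # per-brick heal/crawl statistics
>
> On volumes with very large file counts those crawls can themselves take a
> long time, which is part of the healing pain described above.)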
>
> Anyways, I have no idea about current Gluster in terms of scalability, so
> the issues we ran into may not be a problem anymore.  However it has made
> us very gun shy about trying Gluster again.  Instead we've decided to use
> Ceph as we've gained a bunch of experience with Ceph in our OpenNebula
> installation.
> -Paul Edmon-
>
> On 07/24/2018 11:02 AM, John Hearns via Beowulf wrote:
>
> Paul, thanks for the reply.
> I would like to ask, if I may. I rather like Gluster, but have not
> deployed it in HPC. I have heard a few people comment about Gluster not
> working well in HPC. Would you be willing to be more specific?
>
> One research site I talked to did the classic 'converged infrastructure'
> idea of attaching storage drives to their compute nodes and distributing
> Gluster storage. They were not happy with that, I was told, and I can very
> much understand why. But I would be interested to hear about Gluster on
> dedicated servers.
>
>
> On Tue, 24 Jul 2018 at 16:41, Paul Edmon <pedmon at cfa.harvard.edu> wrote:
>
>> While I agree with you in principle, one also has to deal with the
>> reality you find yourself in.  In our case we have more experience with
>> Lustre than Ceph in an HPC setting, and we got burned pretty badly by Gluster.
>> While I like Ceph in principle, I haven't seen it do what Lustre can do in an
>> HPC setting over IB.  Now it may be able to do that, which is great.
>> However then you have to get your system set up to do that and prove that
>> it can.  After all, users have a funny way of breaking things that work
>> amazingly well in controlled test environments, especially when you have no
>> control over how they will actually use the system (as in a research
>> environment).  Certainly we are working on exploring this option too as it
>> would be awesome and save many headaches.
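>> For what it's worth, the piece that usually has to be set up for Lustre
>> over IB is LNet; a minimal sketch, assuming a single o2ib network on
>> interface ib0 (the interface, NID and filesystem names here are made up):
>>
>>     # /etc/modprobe.d/lustre.conf on servers and clients
>>     options lnet networks="o2ib0(ib0)"
>>
>>     # client mount against the MGS NID on the IB fabric
>>     mount -t lustre 10.10.0.1@o2ib0:/testfs /mnt/testfs
>>
>> Ceph, by contrast, talks TCP, so on an IB fabric it would normally ride on
>> IPoIB rather than native verbs, which is part of what would need proving out.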
>>
>> Anyways, no worries about you being a smartarse; it is a valid point.  One
>> just needs to consider the realities on the ground in one's own environment.
>>
>> -Paul Edmon-
>>
>> On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
>>
>> Forgive me for saying this, but the philosophy of software-defined
>> storage such as Ceph and Gluster is that forklift-style upgrades should not
>> be necessary.
>> When a storage server is to be retired, the data is copied onto the new
>> server and then the old one is taken out of service. Well, copied is not the
>> correct word, as there are erasure-coded copies of the data. Rebalanced is
>> probably a better word.
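>> As a concrete (if simplified) illustration of that in Ceph, retiring a disk
>> or server is essentially marking its OSDs out and letting the cluster
>> rebalance; the OSD id below is made up:
>>
>>     ceph osd out osd.12    # stop placing data on it; backfill starts
>>     ceph -s                # watch recovery until the cluster is HEALTH_OK
>>     ceph osd purge osd.12 --yes-i-really-mean-it   # remove it once drained
>>
>> Gluster has the same idea with its remove-brick / replace-brick operations.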
>>
>> Sorry if I am seeming to be a smartarse. I have gone through the pain of
>> forklift-style upgrades in the past when storage arrays reached End of Life.
>> I just really like the Software Defined Storage mantra: no component
>> should be a single point of failure.
>>
>>
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>