[Beowulf] Lustre Upgrades

Paul Edmon pedmon at cfa.harvard.edu
Tue Jul 24 07:40:55 PDT 2018


While I agree with you in principle, one also has to deal with the 
reality one finds oneself in.  In our case we have more experience 
with Lustre than with Ceph in an HPC setting, and we got burned pretty 
badly by Gluster.  While I like Ceph in principle, I haven't seen it do 
what Lustre can do in an HPC setting over IB.  It may well be able to, 
which would be great; however, you then have to get your system set up 
to do that and prove that it can.  After all, users have a funny way of 
breaking things that work amazingly well in controlled test 
environments, especially when you have no control over how they will 
actually use the system (as in a research environment).  Certainly we 
are working on exploring this option too, as it would be awesome and 
would save many headaches; a rough sketch of what the drain-and-rebalance 
flow John describes might look like follows below.
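
Here is a minimal sketch of what retiring a host's OSDs might look like 
on a Ceph cluster, assuming a Luminous-or-later release (where "ceph osd 
safe-to-destroy" exists) and working admin access to the "ceph" CLI; the 
OSD IDs are made up for illustration, and we have not run this exact 
script in production:

    # Hypothetical sketch: drain one OSD at a time off a host being
    # retired, then remove it once Ceph says it is safe to do so.
    import subprocess
    import time

    OSD_IDS = [12, 13, 14]  # made-up OSDs on the host being retired

    def ceph(*args):
        subprocess.run(("ceph",) + args, check=True)

    for osd in OSD_IDS:
        # Weight to zero so CRUSH migrates this OSD's placement groups
        # onto the surviving hosts, then mark it out.
        ceph("osd", "crush", "reweight", "osd.%d" % osd, "0")
        ceph("osd", "out", str(osd))
        # Wait until the replicated/erasure-coded chunks have been
        # rebuilt elsewhere and the OSD holds no needed data.
        while subprocess.run(("ceph", "osd", "safe-to-destroy",
                              "osd.%d" % osd)).returncode != 0:
            time.sleep(60)
        ceph("osd", "purge", str(osd), "--yes-i-really-mean-it")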

Anyway, no worries about you being a smartarse; it is a valid point.  
One just needs to consider the realities on the ground in one's own 
environment.

-Paul Edmon-


On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
> Forgive me for saying this, but the philosophy of software-defined 
> storage such as Ceph and Gluster is that forklift-style upgrades 
> should not be necessary.
> When a storage server is to be retired, the data is copied onto the new 
> server and the old one is taken out of service. Well, copied is not the 
> correct word, as there are erasure-coded copies of the data; 
> rebalanced is probably a better word.
>
> Sorry if I seem to be a smartarse. I have gone through the pain 
> of forklift-style upgrades in the past when storage arrays reached End 
> of Life.
> I just really like the Software Defined Storage mantra: no component 
> should be a single point of failure.
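
As a toy illustration of the erasure-coded copies John mentions: with 
k+m erasure coding, any m lost chunks can be rebuilt from the survivors, 
which is why a retired server's data can be rebalanced rather than copied 
wholesale. The XOR parity below is the m=1 special case; real Ceph pools 
use Reed-Solomon-style codes (the jerasure plugin by default), but the 
principle is the same:

    # Toy m=1 erasure code: one XOR parity chunk over k=3 data chunks.
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    chunks = [b"data-on-A", b"data-on-B", b"data-on-C"]  # k data chunks
    parity = reduce(xor, chunks)                         # m=1 parity

    # Host B is retired: rebuild its chunk from the survivors.
    rebuilt = reduce(xor, [chunks[0], chunks[2], parity])
    assert rebuilt == chunks[1]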


