[Beowulf] Lustre Upgrades

Prentice Bisbal pbisbal at pppl.gov
Wed Jul 25 13:36:23 PDT 2018


Paging Dr. Joe Landman, paging Dr. Landman...

Prentice

On 07/24/2018 10:19 PM, James Burton wrote:
> Does anyone have any experience with how BeeGFS compares to Lustre? 
> We're looking at both of those for our next generation HPC storage 
> system.
>
> Is CephFS a valid option for HPC now? The last time I played with 
> CephFS, it wasn't ready for prime time, but that was a few years ago.
>
> On Tue, Jul 24, 2018 at 10:58 AM, Joe Landman <joe.landman at gmail.com>
> wrote:
>
>
>
>     On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
>
>         Forgive me for saying this, but the philosophy for
>         software-defined storage such as Ceph and Gluster is that
>         forklift-style upgrades should not be necessary.
>         When a storage server is to be retired, the data is copied
>         onto the new server, then the old one is taken out of
>         service. Well, copied is not the correct word, as there are
>         erasure-coded copies of the data. Rebalanced is probably a
>         better word.
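>
>         For the curious, a drain-and-retire sequence with the stock
>         Ceph CLI looks roughly like the sketch below (osd.12 is a
>         made-up id; your ids and timing will differ):
>
>             # Drain the OSD: zeroing its CRUSH weight makes Ceph
>             # rebalance its placement groups onto the other OSDs.
>             ceph osd crush reweight osd.12 0
>             # Watch cluster events until PGs are active+clean again.
>             ceph -w
>             # Confirm no data would be lost by removing it.
>             ceph osd safe-to-destroy osd.12
>             # Stop the ceph-osd daemon on the host, then remove the
>             # OSD from the cluster for good.
>             ceph osd out 12
>             ceph osd purge 12 --yes-i-really-mean-it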
>
>
>     This ^^
>
>     I'd seen, helped build, and benchmarked some very nice/fast
>     CephFS-based storage systems at $dayjob-1.  While it is a neat
>     system, if you are focused on availability, scalability, and
>     performance, it's pretty hard to beat BeeGFS.  We'd ($dayjob-1)
>     deployed several very large/fast file systems with it on our
>     spinning-rust, SSD, and NVMe units.
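>
>     For a sense of day-to-day administration, a few representative
>     commands from the standard BeeGFS tools (the mount path below is
>     hypothetical):
>
>         # Show free capacity per metadata and storage target.
>         beegfs-df
>         # List the registered storage servers.
>         beegfs-ctl --listnodes --nodetype=storage
>         # Stripe new files in this directory across 4 targets
>         # with 1 MiB chunks.
>         beegfs-ctl --setpattern --numtargets=4 --chunksize=1m \
>             /mnt/beegfs/scratch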
>
>
>     -- 
>     Joe Landman
>     e: joe.landman at gmail.com
>     t: @hpcjoe
>     w: https://scalability.org
>     g: https://github.com/joelandman
>     l: https://www.linkedin.com/in/joelandman
>
>
>
>
>
>
> -- 
> James Burton
> OS and Storage Architect
> Advanced Computing Infrastructure
> Clemson University Computing and Information Technology
> 340 Computer Court
> Anderson, SC 29625
> (864) 656-9047
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
