[Beowulf] Lustre Upgrades

John Hearns hearnsj at googlemail.com
Tue Jul 24 08:06:10 PDT 2018


Joe, sorry to split the thread here. I like BeeGFS and have set it up.
I have worked for two companies now that have sites around the world,
those sites being independent research units, while the HPC facilities
are at headquarters.
The sites want to be able to drop files onto local storage and have them
magically appear on HPC storage, and the same with results going back
the other way.

One company did this well with GPFS and AFM volumes.
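Roughly, that looked like the sketch below. I am writing this from
memory, so take the filesystem, fileset, and export names as invented,
and it assumes an NFS-exported target at HQ:

    # cache fileset at the remote site; local writes are pushed back to HQ
    mmcrfileset fs1 incoming --inode-space new \
        -p afmMode=single-writer,afmTarget=nfs://hq-nas/export/hpc/incoming
    mmlinkfileset fs1 incoming -J /gpfs/fs1/incoming

Single-writer mode covers the drop-files-at-the-site direction; results
coming back could be a second fileset in read-only mode, cached from HQ.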
For my current company I looked at Gluster, but Gluster geo-replication
is one-way only: changes flow from the master volume to the slave, never
back.
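To illustrate what I mean by one-way, a session is created from a master
volume to a slave and that is the only direction data flows (volume and
host names invented here):

    # replication runs master -> slave only; nothing comes back
    gluster volume geo-replication sitevol hq-server::hpcvol create push-pem
    gluster volume geo-replication sitevol hq-server::hpcvol start
    gluster volume geo-replication sitevol hq-server::hpcvol status

So results landing on HPC storage would need a second session pointing
the other way, and as far as I can tell that is not a supported
bidirectional setup.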
What do you know of the BeeGFS mirroring? Will it work over long distances?
(Note to me - find out yourself you lazy besom)
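From a first read of the docs, BeeGFS mirroring means buddy mirror
groups, set up something like this (commands lifted from the docs,
untested by me):

    # pair storage and metadata targets into buddy groups
    beegfs-ctl --addmirrorgroup --automatic --nodetype=storage
    beegfs-ctl --addmirrorgroup --automatic --nodetype=meta
    # enable metadata mirroring, then mirror new files in a directory
    beegfs-ctl --mirrormd
    beegfs-ctl --setpattern --buddymirror /mnt/beegfs/mirrored

The buddy mirroring is synchronous, though, so I would expect WAN
latency between sites to hurt every write; it looks designed for
in-cluster redundancy rather than geo-replication.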

On Tue, 24 Jul 2018 at 16:59, Joe Landman <joe.landman at gmail.com> wrote:

>
>
> On 07/24/2018 10:31 AM, John Hearns via Beowulf wrote:
> > Forgive me for saying this, but the philosophy for software-defined
> > storage such as Ceph and Gluster is that forklift-style upgrades
> > should not be necessary.
> > When a storage server is to be retired, the data is copied onto the
> > new server and then the old one is taken out of service. Well, copied
> > is not the correct word, as there are erasure-coded copies of the
> > data. Rebalanced is probably a better word.
>
> This ^^
>
> I'd seen/helped build/benchmarked some very nice/fast CephFS-based
> storage systems in $dayjob-1.  While it is a neat system, if you are
> focused on availability, scalability, and performance, it's pretty hard
> to beat BeeGFS.  We'd ($dayjob-1) deployed several very large/fast file
> systems with it on our spinning rust, SSD, and NVMe units.
>
>
> --
> Joe Landman
> e: joe.landman at gmail.com
> t: @hpcjoe
> w: https://scalability.org
> g: https://github.com/joelandman
> l: https://www.linkedin.com/in/joelandman
>
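
Coming back to Joe's point above about rebalancing rather than copying:
in Ceph, retiring a server is essentially marking its OSDs out and
letting CRUSH move the placement groups, roughly like this (OSD id
invented, Luminous-era commands):

    # stop placing data on the OSD; recovery starts immediately
    ceph osd out 12
    # watch the rebalance progress
    ceph -s
    # once the data has re-replicated elsewhere, remove the OSD for good
    ceph osd safe-to-destroy osd.12
    ceph osd purge 12 --yes-i-really-mean-it

No forklift needed, just background recovery traffic while the cluster
rebalances.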