[Beowulf] Considering BeeGFS for parallel file system

James Burton jburto2 at g.clemson.edu
Tue Mar 19 08:05:05 PDT 2019


We are also switching to BeeGFS from OrangeFS (PVFS2) for our HPC scratch
system and looking to expand its use. We set up an experimental scratch
system on older hardware and have been very pleased with the performance
and ease of use and administration. Metadata performance with an SSD MDT is
particularly good. We also have ZFS-based NFS storage which is generally
used as cold storage. Generally, BeeGFS is a much faster, much better
scaling system.

BeeGFS is designed for high performance from parallel HPC applications.
It's architecturally very similar to Lustre. Think of it as Lustre-lite: it
does basically the same thing, doesn't have quite all the features of
Lustre, but also has less complexity and fewer headaches. It is very easy
to administer compared to Lustre, OrangeFS, and GPFS. It works well in
other applications, but parallel HPC is what it is designed for. If you are
looking for more of a ZFS replacement and are less concerned with parallel
performance, GlusterFS might be a better fit.

BeeGFS is open source, but not free software. You can get the source and
modify it for your own use, but you can't distribute the changes without
permission from ThinkParQ. There are also certain "enterprise" features
that require a support contract with ThinkParQ. Nothing technically
prevents you from using them, but doing so without a contract violates the
license agreement. I make no guarantees, but the trend is that BeeGFS is
becoming more open, not less. A support contract is a good idea anyway and
contributes to the development of the project. Don't be a freeloader.
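To give a concrete sense of the "easy to administer" point: most day-to-day work is a handful of invocations of the stock BeeGFS command-line tools. A rough sketch (the paths are made up, and exact flags vary by version, so check `beegfs-ctl --help` before relying on them):

```shell
# Show free space and capacity per storage/metadata target
beegfs-df

# List the registered storage and metadata nodes
beegfs-ctl --listnodes --nodetype=storage
beegfs-ctl --listnodes --nodetype=meta

# Show how an existing file is striped across storage targets
beegfs-ctl --getentryinfo /scratch/jburto2/somefile

# Set the striping pattern for new files in a directory,
# e.g. 1 MiB chunks spread across 4 storage targets
beegfs-ctl --setpattern --chunksize=1m --numtargets=4 /scratch/jburto2/somedir
```

Compare that with the tuning surface of Lustre or GPFS and you can see why small shops like it.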

Jim Burton


On Tue, Mar 19, 2019 at 12:49 AM Jan Wender <j.wender at web.de> wrote:

> Hi,
>
> I suggest also to read the license, because it is not a standard open
> source one. Depending on your situation this might not be an issue. As far
> as I remember:
> - As a service provider you need a contract with Thinkparq to provide
> BeeGFS to others.
> - Thinkparq reserves for themselves the copyright on changes you perform
> in the source code.
> Just some things to be aware of.
>
> In comparison, GPFS is totally closed source, but Lustre is GPL (or was
> it LGPL?).
>
> Cheerio, Jan
> --
> Jan Wender - j.wender at web.de
>
> > Am 18.03.2019 um 20:32 schrieb Joshua Baker-LePain <
> joshua.bakerlepain at gmail.com>:
> >
> >> On Mon, Mar 18, 2019 at 8:52 AM Will Dennis <wdennis at nec-labs.com>
> wrote:
> >>
> >> I am considering using BeeGFS for a parallel file system for one (and
> if successful, more) of our clusters here. Just wanted to get folks’
> opinions on that, and if there is any “gotchas” or better-fit solutions out
> there... The first cluster I am considering it for has ~50TB storage off a
> single ZFS server serving the data over NFS currently; looking to increase
> not only storage capacity, but also I/O speed. The cluster nodes that are
> consuming the storage have 10GbaseT interconnects, as does the ZFS server.
> As we are a smaller shop, want to keep the solution simple. BeeGFS was
> recommended to me as a good solution off another list, and wanted to get
> people’s opinions off this list.
> >
> > We're in the midst of migrating our cluster storage from a, err,
> > network appliance to BeeGFS.  We currently have 4 storage servers (2
> > HA pairs) and 2 metadata servers (each running 4 metadata threads,
> > mirrored between the servers) serving 1.4PB of available space.  As
> > configured, we've seen the system put out over 600,000 IOPS and
> > aggregate read speeds of over 12,000MB/s.  We're actually going to be
> > adding 6 more storage servers and 2 more metadata servers in the near
> > future.  So, yeah, we're pretty happy with it.  One rather nice
> > feature is the ability to see, at any point, which users and/or hosts
> > are generating the most load.
> >
> > That being said, there are currently a few gotchas/pain points:
> >
> > 1) We're using ZFS under BeeGFS, and the storage servers are rather
> > cycle hungry.  If you go that route, get boxes with lots of fast
> > cores.
> >
> > 2) In previous versions, you could mix and match point releases
> > between servers and clients -- as long as the major version was the
> > same, you were fine.  As of v7, that's no longer the case.  IOW,
> > moving from 7.0 to 7.1 requires unmounting all the clients, shutting
> > down all the daemons, updating all the software, and then restarting
> > everything.  Painful.
> >
> > 3) Also as of v7, the mgmtd service is *critical*.  Any communication
> > interruption to/from the mgmtd results in the clients immediately
> > hanging.  And, unlike storage and metadata, there is currently no
> > mirroring/HA mechanism within BeeGFS for the mgmtd.
> >
> > We do have a support contract and the folks from Thinkparq are
> > responsive.  If you have more questions, please feel free to ask away.
> >
> > --
> > Joshua Baker-LePain
> > QB3 Shared Cluster Sysadmin
> > UCSF
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> > To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>
>


-- 
James Burton
OS and Storage Architect
Advanced Computing Infrastructure
Clemson University Computing and Information Technology
340 Computer Court
Anderson, SC 29625
(864) 656-9047
