[Beowulf] Containers in HPC

Brian Dobbins bdobbins at gmail.com
Wed May 22 06:49:28 PDT 2019


Thanks, Gerald - I'll be reading this shortly.  And to add to the
discussion, here's the Blue Waters container paper that I like to point
people towards - from the same conference, in fact:
https://arxiv.org/pdf/1808.00556.pdf

The key result here is achieving *native* network performance through the
MPICH ABI compatibility layer[1], and that's an enabling technology.  Prior
to this, I was slightly negative about containers, figuring MPI
compatibility and performance would be an issue - now I'm eager to
containerize some of our applications, since it can dramatically simplify
installation and configuration for non-expert users.
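
To make that concrete, here's a rough sketch - my own illustration, not
something from the paper - of why the ABI guarantee matters.  A trivial MPI
program built once against MPICH inside the image can be run against
whatever ABI-compatible MPI the host provides (Cray MPICH, Intel MPI,
MVAPICH2, ...), typically by bind-mounting the host's library into the
container and putting it on LD_LIBRARY_PATH; MPI_Get_library_version()
tells you which implementation actually got loaded:

/* abi_hello.c - build once inside the image:  mpicc -o abi_hello abi_hello.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, len;
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Reports the MPI library the binary is actually running against -
     * with the ABI initiative this can be a different (host) MPICH-derived
     * implementation than the one it was compiled with. */
    MPI_Get_library_version(lib, &len);

    if (rank == 0)
        printf("%d ranks, library: %.60s\n", size, lib);

    MPI_Finalize();
    return 0;
}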

One thing I'm less certain about, and would welcome any information on, is
whether Linux's cross-memory attach (CMA) or XPMEM can work across
containers for MPI messages on the same node.  Since it's the same
host kernel, I'm somewhat inclined to think so, but I haven't yet had the
time to run any tests.  Anyway, given the complexity of a lot of projects
these days, native performance in a containerized environment is pretty
much the best of both worlds.
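
In case anyone wants to poke at this before I do, here's a minimal sketch
(mine, and untested across container boundaries) of the process_vm_readv()
call that CMA-based MPI transports use for same-node copies.  My reading is
that the cross-container question mostly comes down to whether the two
ranks can see each other's PIDs and pass the kernel's ptrace access check:

/* cma_check.c - build with:  gcc -o cma_check cma_check.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

static char buf[64];

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: write into its (copy-on-write) buffer, then linger. */
        snprintf(buf, sizeof(buf), "written by child pid %d", (int)getpid());
        sleep(2);
        _exit(0);
    }

    sleep(1);  /* crude synchronization, just for the example */

    char out[64] = {0};
    struct iovec local  = { .iov_base = out, .iov_len = sizeof(out) };
    struct iovec remote = { .iov_base = buf, .iov_len = sizeof(buf) };

    /* Read straight out of the child's address space - the CMA fast path.
     * Across containers, EPERM/ESRCH here would point at ptrace permission
     * or PID-namespace visibility rather than the mechanism itself. */
    ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
    if (n < 0)
        perror("process_vm_readv");
    else
        printf("read %zd bytes: \"%s\"\n", n, out);

    waitpid(pid, NULL, 0);
    return 0;
}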

[1] MPICH ABI Compatibility Initiative : https://www.mpich.org/abi/

Cheers,
  - Brian


On Wed, May 22, 2019 at 7:10 AM Gerald Henriksen <ghenriks at gmail.com> wrote:

> Paper on arXiv that may be of interest to some as it may be where HPC
> is heading even for private clusters:
>
> Evaluation of Docker Containers for Scientific Workloads in the Cloud
> https://arxiv.org/abs/1905.08415