[Beowulf] Can one Infiniband net support MPI and a parallel file system?
hahn at mcmaster.ca
Wed Aug 6 07:15:22 PDT 2008
> It works(ish) and people are doing it but my research has shown that
> it is not yet stable.
"not stable" sounds like a bit of a smear. file and mpi activity
_do_ coexist on a single network - the only issue is possible contention.
it's not like NFS somehow ionizes the wires so MPI packets short out ;)
> I have been talking to various companies
> offering lustre support. They have all told me that they can do it but
> none have been able to offer a reference site.
my organization has at least 4 production clusters which use the
interconnect for both MPI and file (lustre) traffic. ironically,
our one IB cluster has no local filestore; of the others, two are
quadrics, one is myri 2g and one is plain old gigabit. actually, now that I
think of it, we have ~6 other myri 2g clusters that also share the IC
between MPI and NFS.
> as mentioned by mark, If you try and force lots of stuff down the
> tubes you are going to break something. I guess its a _bit_ like
contention is possible, but mixing NFS+MPI doesn't change anything.
you can still run into fabric contention with pure MPI -
after all, it's not as if _every_ MPI program were equally latency-tolerant
or only used sparse tinygrams.
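to make the point concrete, here's a back-of-envelope sketch (numbers and
names are mine, not measurements): a shared link divides among concurrent
streams regardless of whether they carry MPI or NFS bytes, assuming roughly
fair sharing.

```python
# rough contention arithmetic: a shared link splits among concurrent
# streams, MPI or NFS alike (fair sharing assumed for simplicity)

def seconds_to_move(nbytes, link_bytes_per_s, nstreams):
    """Time for one stream to move nbytes when nstreams share the link."""
    return nbytes * nstreams / link_bytes_per_s

GIGE = 100e6  # ~100 MB/s usable on gigabit ethernet (rough assumption)

# one bulk stream with the wire to itself:
alone = seconds_to_move(1e9, GIGE, 1)    # 10 s for 1 GB
# four bulk streams (e.g. 3 MPI ranks + 1 NFS writer) each take 4x longer:
shared = seconds_to_move(1e9, GIGE, 4)   # 40 s each
print(alone, shared)
```

the same 4x slowdown happens with four MPI ranks and no NFS at all -
which is the point: contention is a property of the traffic mix, not of
which protocol the packets belong to.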
there is NOTHING wrong with using a single network for NFS and MPI -
just consider, preferably measure, your workload's traffic beforehand.
if you can handle NFS purely via gigabit (ie, ~80 MB/s), it's probably very
cheap to add a decent gigabit switch. of course, you can just as easily
see the same contention with the right mix of MPI traffic - no panacea.
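if you want to actually measure before deciding, a minimal sketch on linux
is to sample the byte counters in /proc/net/dev twice and difference them;
the counter layout and interface names here are assumptions about a typical
linux box, not anything specific to your cluster.

```python
# minimal sketch: sample /proc/net/dev twice to estimate per-interface
# throughput on linux; counter layout per proc(5), interface names will vary
import time

def parse_proc_net_dev(text):
    """Return {iface: (rx_bytes, tx_bytes)} from /proc/net/dev contents."""
    stats = {}
    for line in text.splitlines()[2:]:          # skip the two header lines
        iface, data = line.split(":", 1)
        fields = data.split()
        # field 0 is rx bytes; field 8 is tx bytes (8 rx fields precede it)
        stats[iface.strip()] = (int(fields[0]), int(fields[8]))
    return stats

def sample_throughput(interval=1.0, path="/proc/net/dev"):
    """Per-interface (rx, tx) MB/s averaged over `interval` seconds."""
    before = parse_proc_net_dev(open(path).read())
    time.sleep(interval)
    after = parse_proc_net_dev(open(path).read())
    return {i: ((after[i][0] - before[i][0]) / interval / 1e6,
                (after[i][1] - before[i][1]) / interval / 1e6)
            for i in before if i in after}
```

run it while your job's I/O phase is going; if the NFS side stays well
under ~80 MB/s, a cheap gigabit switch really can carry it.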