[Beowulf] Infiniband: MPI and I/O?

Greg Keller greg at keller.net
Thu May 26 12:29:07 PDT 2011


> Date: Thu, 26 May 2011 12:18:18 -0400 (EDT)
> From: Mark Hahn <hahn at mcmaster.ca>
> Subject: Re: [Beowulf] Infiniband: MPI and I/O?
> To: Bill Wichser <bill at Princeton.EDU>
> Cc: Beowulf Mailing List <beowulf at beowulf.org>
>
>> Wondering if anyone out there is doing both I/O to storage and MPI
>> over the same IB fabric.
> I would say that is the norm.  We certainly connect local storage
> (Lustre) to nodes via the same fabric as MPI.  Gigabit Ethernet is
> completely inadequate for modern nodes, so the only alternatives would
> be 10GbE or a secondary IB fabric, both quite expensive propositions, no?
>
> I suppose if your cluster does nothing but IO-light serial/EP jobs,
> you might think differently.
>
Agreed.  I just finished telling another vendor, "It's not high-speed 
storage unless it has an IB/RDMA interface."  They love that.  Except 
for some real edge cases, I can't imagine running I/O over GbE for 
anything more than trivial loads.
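
To make the point concrete, here is a rough sketch (not production code; 
the /lustre/scratch path and buffer sizes are made up) of the pattern we 
mean: the ranks trade a little halo data over MPI, then do a collective 
MPI-IO write to a Lustre mount, so the message traffic and the storage 
traffic ride the same IB fabric.

/* Sketch: MPI exchange plus a collective MPI-IO write to Lustre,
 * both over the same fabric.  Path and sizes are placeholders. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Pretend compute step: each rank fills a local buffer. */
    const int n = 1 << 20;                      /* 1M doubles per rank */
    double *buf = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        buf[i] = rank + i * 1e-6;

    /* MPI traffic: a trivial ring exchange of the buffer edges. */
    double edge[2] = { buf[0], buf[n - 1] }, halo[2];
    int next = (rank + 1) % nprocs, prev = (rank + nprocs - 1) % nprocs;
    MPI_Sendrecv(edge, 2, MPI_DOUBLE, next, 0,
                 halo, 2, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Storage traffic: collective write to a Lustre-mounted file,
     * each rank at its own offset. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset off = (MPI_Offset)rank * n * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}

(ROMIO will also take Lustre striping hints through the MPI_Info argument 
if you want to tune the write, instead of passing MPI_INFO_NULL.)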


I am curious whether anyone is doing I/O over IB to SRP targets or some 
similar "block device" approach.  The integrated filesystem route taken by 
Lustre/GPFS and others may be the best way to go, but we are not 100% 
convinced yet.  Any stories to share?
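
For anyone who hasn't poked at SRP: on the Linux side the initiator login 
is just a parameter string written to the ib_srp "add_target" file in sysfs 
(srp_daemon normally discovers targets and does this for you), after which 
the target shows up as an ordinary SCSI block device you can mkfs, mount, 
or hand to LVM.  A rough sketch of that login step; the HCA name, port, 
GUIDs, and service id below are all placeholders, not from a real box:

/* Sketch: log an SRP initiator into a target by writing the usual
 * comma-separated parameter string to the ib_srp add_target file.
 * Equivalent to "echo '...' > .../add_target"; all values are fake. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/class/infiniband_srp/srp-mlx4_0-1/add_target";
    const char *target =
        "id_ext=0002c90300aabbcc,"
        "ioc_guid=0002c90300aabbcc,"
        "dgid=fe800000000000000002c90300aabbcd,"
        "pkey=ffff,"
        "service_id=0002c90300aabbcc";

    FILE *f = fopen(path, "w");   /* needs root */
    if (!f) {
        perror("open add_target");
        return 1;
    }
    fprintf(f, "%s\n", target);
    fclose(f);
    return 0;
}

Of course a single LUN shared by many nodes still needs a cluster-aware 
filesystem (or a per-node carve-up) on top, which is the usual argument for 
letting Lustre/GPFS own the whole stack instead.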

Cheers!
Greg


