[Beowulf] many cores and ib

Jan Heichler jan.heichler at gmx.net
Mon May 5 06:54:33 PDT 2008


Hello Jaime,

On Monday, May 5, 2008, you wrote:

JP> Hello,

JP> Just a small question: does anybody have experience with many-core
JP> (16) nodes and InfiniBand? We have some users who need shared
JP> memory, but we also want to build a normal cluster for MPI apps,
JP> so we think this could be a solution. Let's say about
JP> 8 machines (96 processors) plus InfiniBand. Does that sound correct?
JP> I'm aware of the bottleneck of having one IB interface for
JP> the MPI cores; is there any possibility of bonding?

Bonding (or multi-rail) does not make sense with "standard IB" on PCIe x8, since the PCIe connection already limits the transfer rate of a single IB link.
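To see why a second rail would not help, here is a rough back-of-the-envelope sketch. The figures are assumptions for illustration: PCIe 1.x at 2.5 GT/s per lane and IB DDR at 20 Gbit/s signalling, both with 8b/10b encoding; real-world effective rates are lower once protocol overhead is included.

```python
# Rough bandwidth arithmetic: why one DDR IB link already saturates PCIe x8.
# Assumed figures: PCIe 1.x = 2.5 GT/s per lane, 8b/10b encoded;
# IB DDR 4x = 20 Gbit/s on the wire, also 8b/10b encoded.

PCIE_GEN1_GTS_PER_LANE = 2.5e9   # transfers/s per lane
ENCODING_EFFICIENCY = 8 / 10     # 8b/10b: 8 data bits per 10 line bits

# PCIe x8 raw data bandwidth in one direction, before TLP/flow-control
# overhead (effective rates end up around 1500-1600 MB/s in practice):
pcie_x8_bytes_per_s = 8 * PCIE_GEN1_GTS_PER_LANE * ENCODING_EFFICIENCY / 8

# IB DDR 4x link: 20 Gbit/s signalling, 16 Gbit/s of data:
ib_ddr_bytes_per_s = 20e9 * ENCODING_EFFICIENCY / 8

print(pcie_x8_bytes_per_s / 1e6)  # 2000.0 MB/s theoretical
print(ib_ddr_bytes_per_s / 1e6)   # 2000.0 MB/s theoretical
```

Since one DDR link can already move data as fast as the x8 slot can feed it, a second rail behind the same slot buys nothing.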

My hint would be to go for InfiniPath from QLogic or the new ConnectX from Mellanox, since message rate is probably your limiting factor, and those technologies have a huge advantage there over standard InfiniBand SDR/DDR.

Both InfiniPath and ConnectX are available as DDR InfiniBand and provide a bandwidth of more than 1800 MB/s.
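Even at that rate, a 16-core node sharing a single HCA gets a thin per-core slice. A quick estimate using the 1800 MB/s figure quoted above, assuming the worst case where all 16 cores drive MPI traffic at once:

```python
# Per-core share of one DDR HCA if all 16 cores communicate simultaneously
# (worst case; real MPI traffic is rarely this perfectly overlapped).
node_bandwidth_mb_s = 1800   # measured DDR rate quoted above
cores_per_node = 16          # the node size under discussion

per_core_mb_s = node_bandwidth_mb_s / cores_per_node
print(per_core_mb_s)  # 112.5 MB/s per core in the worst case
```

That per-core figure is why the per-message overhead (message rate) of the HCA matters at least as much as its peak bandwidth on a fat node.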

Cheers, Jan                            

