[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

richard.walsh at comcast.net
Thu Apr 8 13:29:39 PDT 2010


On Thursday, April 8, 2010 2:14:11 PM Greg Lindahl wrote: 



>> What are the approaches and experiences of people interconnecting 
>> clusters of more than 128 compute nodes with QDR InfiniBand technology? 
>> Are people directly connecting to chassis-sized switches? Using multi-tiered 
>> approaches which combine 36-port leaf switches? 
> 
>I would expect everyone to use a chassis at that size, because it's cheaper 
>than having more cables. That was true on day 1 with IB, the only question is 
>"are the switch vendors charging too high of a price for big switches?" 


Hey Greg, 


I think my target is around 192 compute nodes, with room for a head node (or two) 
and ports to a Lustre file server, so 216 ports (6 x 36) looks like a reasonable number 
to me. The price of an integrated chassis solution should not exceed the price of a 
multi-tiered solution built from 36-port switches (or some other switch smaller than 
216 ports) plus the cabling costs. Reliability and labor would also have to be factored 
in, with the advantage presumably going to the chassis because of the smaller cable 
count. The chassis options look to run between $375 and $400 a port, while the 36-port 
options are running at about $175 to $200 a port (but the multi-tiered design needs 
more switch ports and more cables). 
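
For what it's worth, here is a rough back-of-the-envelope sketch of that comparison, 
assuming the per-port prices quoted above, a full-bisection two-tier fat-tree of 
36-port switches, and a hypothetical cable cost (the $75/cable figure and the 
function names are my own assumptions, not vendor numbers): 

import math

# Rough comparison of a single QDR chassis switch vs. a two-tier fat-tree
# of 36-port edge switches. Per-port prices come from the figures above;
# the cable cost is a placeholder assumption.

END_PORTS = 216            # 192 compute + head node(s) + Lustre ports
RADIX = 36                 # ports per leaf/spine switch
CABLE_COST = 75            # assumed cost per QDR cable (hypothetical)

def chassis(cost_per_port=400):
    """Single chassis: one cable per end port, no inter-switch links."""
    return {"switch_cost": END_PORTS * cost_per_port,
            "cables": END_PORTS}

def fat_tree(cost_per_port=200):
    """Two-tier fat-tree at full bisection: each leaf splits its ports
    18 down / 18 up, and the spine layer absorbs all uplinks."""
    down_per_leaf = RADIX // 2
    leaves = math.ceil(END_PORTS / down_per_leaf)
    uplinks = leaves * (RADIX - down_per_leaf)
    spines = math.ceil(uplinks / RADIX)
    switch_ports = (leaves + spines) * RADIX
    return {"switch_cost": switch_ports * cost_per_port,
            "cables": END_PORTS + uplinks}   # node cables + inter-switch cables

for name, opt in (("chassis", chassis()), ("fat-tree", fat_tree())):
    total = opt["switch_cost"] + opt["cables"] * CABLE_COST
    print(f"{name:9s} switch cost ${opt['switch_cost']:>7,}  "
          f"cables {opt['cables']:>3}  total ${total:>7,}")

With those assumptions the fat-tree needs 12 leaves plus 6 spines (648 switch ports) 
and twice the cables, which is roughly Greg's point about the chassis winning at this 
scale once cabling is counted. 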

>> I am looking for some real world feedback before making a decision on 
>> architecture and vendor. 
> 
>Hopefully you're planning on benchmarking your own app -- both the 
>HCAs and the switch silicon have considerably different application- 
>dependent performance characteristics between QLogic and Mellanox 
>silicon. 


Yes, and I assume people would also recommend matching the HCA and switch 
silicon from the same vendor. 
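
Along those lines, a minimal sketch for taking inventory of the HCA silicon and 
firmware across the nodes before benchmarking, assuming the standard 
/sys/class/infiniband sysfs layout exposed by the kernel IB drivers: 

#!/usr/bin/env python3
# List each IB device's type, board ID, and firmware version, assuming the
# usual /sys/class/infiniband sysfs attributes are present on the node.
from pathlib import Path

for hca in sorted(Path("/sys/class/infiniband").glob("*")):
    def read(attr):
        p = hca / attr
        return p.read_text().strip() if p.exists() else "n/a"
    print(f"{hca.name}: type={read('hca_type')} "
          f"board={read('board_id')} fw={read('fw_ver')}")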


Thanks for your input ... 


rbw 


