[Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

Tom Ammon tom.ammon at utah.edu
Fri Apr 9 11:04:47 PDT 2010


Another thing to remember with chassis switches is that you can also
build them in an oversubscribed configuration by removing spine cards.
Most chassis have at least three spine modules, so you lose some
granularity in the oversubscription ratio, but you can still cut costs.
You don't have to go fully nonblocking in a chassis if you want to save
money.
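
To put a rough number on that, here's a quick back-of-the-envelope
sketch in Python. The three-spine chassis geometry is an assumption for
illustration, not any particular vendor's product:

# Sketch (hypothetical geometry): blocking ratio when depopulating
# spine modules in a chassis switch. Fully populated, the fabric is
# nonblocking (1:1); each removed spine cuts internal bisection
# bandwidth proportionally.

def oversubscription(designed_spines, installed_spines):
    return float(designed_spines) / installed_spines

# A chassis designed for 3 spine modules only offers coarse steps,
# which is the granularity limitation mentioned above.
for installed in (3, 2, 1):
    ratio = oversubscription(3, installed)
    print("%d/3 spines -> %.1f:1" % (installed, ratio))
# 3/3 spines -> 1.0:1  (fully nonblocking)
# 2/3 spines -> 1.5:1
# 1/3 spines -> 3.0:1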

Tom

On 04/09/2010 03:16 AM, Peter Kjellstrom wrote:
> On Thursday 08 April 2010, Greg Lindahl wrote:
>    
>> On Thu, Apr 08, 2010 at 04:13:21PM +0000, richard.walsh at comcast.net wrote:
>>      
>>> What are the approaches and experiences of people interconnecting
>>> clusters of more than 128 compute nodes with QDR InfiniBand technology?
>>> Are people directly connecting to chassis-sized switches? Using
>>> multi-tiered approaches which combine 36-port leaf switches?
>>>        
>> I would expect everyone to use a chassis at that size, because it's cheaper
>> than having more cables. That was true on day 1 with IB; the only question
>> is "are the switch vendors charging too high of a price for big switches?"
>>      
> Recently we (a Swedish academic centre) have received offers from both
> Voltaire and QLogic based on 1U 36-port switches rather than chassis, the
> reason given being lower cost. So from our point of view, yes, "switch
> vendors [are] charging too high of a price for big switches" :-)
>
> One "pro" for many 1U switches compared to a chassi is that it gives you more
> topological flexibility. For example, you can build a 4:1 over subscribed
> fat-tree and that will obviously be cheaper than a chassi (even if they were
> more reasonably priced).
>
> /Peter
>    
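
To put numbers on Peter's 4:1 example: with 36-port switches in a
two-tier fat-tree, a 28-down/7-up port split per leaf gives exactly
4:1. A minimal sketch in Python, where the port splits and the simple
one-spine-per-uplink construction are illustrative assumptions:

# Sizing a two-tier fat-tree built from 36-port switches.
PORTS = 36

def fat_tree(down_ports, up_ports):
    assert down_ports + up_ports <= PORTS
    ratio = float(down_ports) / up_ports
    max_leaves = PORTS     # each spine port feeds one leaf switch
    spines = up_ports      # one spine per leaf uplink at full scale
    return ratio, max_leaves, spines, max_leaves * down_ports

for down, up in ((18, 18), (28, 7)):
    print("%d down / %d up -> %g:1, %d leaves, %d spines, %d hosts"
          % ((down, up) + fat_tree(down, up)))
# 18 down / 18 up -> 1:1, 36 leaves, 18 spines, 648 hosts (nonblocking)
# 28 down / 7 up  -> 4:1, 36 leaves, 7 spines, 1008 hosts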

-- 
--------------------------------------------------------------------
Tom Ammon
Network Engineer
Office: 801.587.0976
Mobile: 801.674.9273

Center for High Performance Computing
University of Utah
http://www.chpc.utah.edu



