[Beowulf] fast interconnects

Bogdan Costescu Bogdan.Costescu at iwr.uni-heidelberg.de
Tue May 23 07:16:26 PDT 2006


On Tue, 23 May 2006, Joachim Worringen wrote:

> The cabling is already quite tricky for 3D setups. I don't think you
> would like to go beyond. There are good reasons why nobody has done
> it yet.

It's not clear whether you refer only to large clusters or also to the 
"experimental" ones, in which case I'd like to point to:

http://www.sicmm.org/vrana.html

where, under the "2000" entry, you can find a mention of a 6D setup.

But I wonder how such a system (with more than 2 HPC NICs per node) 
would work today. Has any interconnect vendor attempted to install and 
successfully use more than 2 NICs per computer? How were they connected 
to the underlying bus(es) (PCI-X, PCI-E, maybe even HyperTransport)?  
And what's the gain with respect to the 1 NIC + switch case, from all 
points of view: price, latency when all NICs in a computer communicate 
simultaneously, CPU usage, etc.?

-- 
Bogdan Costescu

IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: Bogdan.Costescu at IWR.Uni-Heidelberg.De



