[Beowulf] 1.2 us IB latency?

Patrick Geoffray patrick at myri.com
Wed Mar 28 23:00:31 PDT 2007


Hi Peter,

Peter St. John wrote:
> I was wondering if Peter K's remark generalized: if there are multiple 
> ports, the node has a choice, which may be application dependent. One 
> port for MPI and the other to a disk farm seems clear, but it still 
> isn't obvious to me that a star topology with a few long cables to a 
> huge switch is always better than many short cables with more ports 
> per node but no switches. (I myself don't have any feel for how much 
> of a bottleneck a switch is; it just seems scary topologically.)

FNN-like (Flat Neighborhood Network) topologies make sense only when:
1) the price of the host port is low compared to a switch port.
2) the host has enough IO capacity to drive that many ports.
3) the cables are reasonably cheap/small/light.

Today, only Fast and Gigabit Ethernet satisfy all three; the 
back-of-envelope sketch below illustrates why.
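
To make condition 1 concrete, here is a rough per-node cost comparison 
in Python. Every price in it is an invented illustrative number, not a 
vendor quote; the point is only the shape of the trade-off: k host 
ports into small commodity switches versus one host port into a big 
central switch.

    # Per-node cost, star topology: one NIC, one cable, one port on a
    # large central switch.
    def star_cost(nic, cable, big_switch_port):
        return nic + cable + big_switch_port

    # Per-node cost, FNN-like topology: k NICs, k short cables, k
    # ports on small commodity switches.
    def fnn_cost(nic, cable, small_switch_port, k):
        return k * (nic + cable + small_switch_port)

    # Gigabit Ethernet (made-up prices): host ports and small switch
    # ports are cheap next to a big non-blocking switch, so FNN wins.
    print(fnn_cost(nic=20, cable=3, small_switch_port=8, k=3))      # 93
    print(star_cost(nic=20, cable=3, big_switch_port=150))          # 173

    # High-speed interconnect (made-up prices): the NIC already costs
    # as much as a switch port, so every extra host port hurts.
    print(fnn_cost(nic=500, cable=80, small_switch_port=400, k=3))  # 2940
    print(star_cost(nic=500, cable=80, big_switch_port=400))        # 980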

For everything else, the switch port costs the same as or less than 
the NIC (that will change when the NIC comes for "free" on the 
motherboard, but we are not there yet), PCI Express is a bottleneck, 
and the cables are a major pain.
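
The PCI Express point (condition 2) is easy to check with nominal 
numbers: a PCIe 1.x lane runs at 2.5 GT/s with 8b/10b encoding, about 
2 Gbit/s of usable bandwidth per direction. The can_drive helper below 
is my own sketch, purely for illustration:

    # Condition 2 sanity check: can the host's I/O slot actually
    # drive k ports at line rate? Nominal 2007-era PCIe 1.x figure.
    PCIE1_LANE_GBPS = 2.0  # 2.5 GT/s, 8b/10b -> ~2 Gbit/s usable/lane

    def can_drive(ports, port_rate_gbps, pcie_lanes):
        """True if the aggregate NIC line rate fits in the slot."""
        return ports * port_rate_gbps <= pcie_lanes * PCIE1_LANE_GBPS

    print(can_drive(ports=4, port_rate_gbps=1, pcie_lanes=4))   # True:
    # four GigE ports fit easily in a x4 slot.
    print(can_drive(ports=2, port_rate_gbps=10, pcie_lanes=8))  # False:
    # two 10 Gbit/s ports already exceed a x8 slot.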

Bulky cables are actually the best argument for a switch topology: the 
biggest advantage of a switch is that it has no cables inside, just 
traces on a PCB.

Patrick
-- 
Patrick Geoffray
Myricom, Inc.
http://www.myri.com


