High Speed Beowulf Cluster using Gigabit Ethernet

Felix Rauch rauch at inf.ethz.ch
Mon Feb 4 08:54:00 PST 2002


On Mon, 4 Feb 2002, W Bauske wrote:
> Ole W Saastad wrote:
> > The latency with GbE and Fast Ethernet is almost the same,
> > and while you get much higher bandwidth (not 100 MB/s,
> > unfortunately, but substantially more than with 100 Mbit/s Fast
> > Ethernet), the latency is still a showstopper for many applications.

Actually, on our research cluster with dual 1 GHz PIII processors and
Packet Engines Gigabit Ethernet cards, we get 100 MB/s TCP
performance. Unfortunately that pretty much uses all available CPU
cycles.

> Well, I just bought GbE cards for $36 each, and a couple 4 port GbE switches
> for $130 each. Per port cost is $69. Get creative on how to connect things
> to get the bandwidth you want. As long as you stay below 20 nodes or so
> it shouldn't be too bad. Latency is still an issue though. 8 port GbE
> switches go for $600 or so at the moment so less cost effective but less
> switches needed if that will help the layout/performance.

I don't know about your particular switches, but I would be careful
with cheap switches. Don't forget: you get what you pay for. We have
had our share of experiences with mid-range as well as expensive,
large Fast Ethernet switches, and they did not always deliver what
they promised once we loaded all ports with high bandwidth.

So, if you really have an application that needs Gigabit/s speeds
between some nodes, then you should make sure that your switch can
handle that load. You can only verify this by measurement, as the
technical descriptions are not always accurate.
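As a starting point for such measurements, here is a minimal sketch of
a pairwise TCP throughput probe (my own illustration, not a tool from
this thread; the function names and buffer sizes are arbitrary
choices). Run the server side on one node and the client on another,
repeat across several node pairs at once, and compare the aggregate
against what the switch claims:

```python
# Minimal TCP throughput probe (illustrative sketch, not a benchmark
# from this thread). Start serve() on one node, measure() on another,
# and run several pairs concurrently to load the switch.
import socket
import threading
import time

CHUNK = 64 * 1024          # per-send buffer size (arbitrary choice)
TOTAL = 64 * 1024 * 1024   # bytes to transfer per measurement

def serve(sock):
    """Accept one connection and drain everything sent on it."""
    conn, _ = sock.accept()
    with conn:
        while conn.recv(CHUNK):
            pass

def measure(host, port):
    """Send TOTAL bytes to (host, port); return the rate in MB/s."""
    payload = b"\0" * CHUNK
    with socket.create_connection((host, port)) as s:
        start = time.time()
        sent = 0
        while sent < TOTAL:
            s.sendall(payload)
            sent += CHUNK
    return (sent / (1024 * 1024)) / (time.time() - start)

if __name__ == "__main__":
    # Loopback demo only; on a real cluster, run the two halves on
    # different nodes so the traffic actually crosses the switch.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve, args=(srv,))
    t.start()
    rate = measure("127.0.0.1", port)
    t.join()
    print("%.1f MB/s" % rate)
```

Note that a single-stream test like this also exposes the CPU cost of
TCP at these rates, so watch CPU utilization while it runs.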

- Felix
-- 
Felix Rauch                      | Email: rauch at inf.ethz.ch
Institute for Computer Systems   | Homepage: http://www.cs.inf.ethz.ch/~rauch/
ETH Zentrum / RZ H18             | Phone: ++41 1 632 7489
CH - 8092 Zuerich / Switzerland  | Fax:   ++41 1 632 1307
