[Beowulf] new release of GAMMA: x86-64 + PRO/1000

Greg Lindahl lindahl at pathscale.com
Sat Feb 4 16:07:38 PST 2006


On Sat, Feb 04, 2006 at 01:51:29PM -0500, Mark Hahn wrote:

> well, that's the sticking point, isn't it?  is there any way that GAMMA
> could be converted to use the netpoll interface?  for instance, look at 
> drivers/net/netconsole.c which is, admittedly, much less ambitious 
> than supporting MPI.

There is an example of this in the ParTec software. Also, the
commercial Scali MPI stack has a special protocol for Ethernet that
works on arbitrary Ethernet devices.
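
For reference, the netconsole-style netpoll pattern looks roughly
like this (a hypothetical sketch against the 2.6-era netpoll API,
modeled on drivers/net/netconsole.c -- not GAMMA or ParTec code; the
module name, device, ports, and peer address are all made up):

    /* Hypothetical sketch of the netpoll pattern used by
     * drivers/net/netconsole.c (2.6-era API).  All names and
     * addresses here are illustrative, not real GAMMA code. */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/string.h>
    #include <linux/netpoll.h>

    /* same option format netconsole uses:
     * src-port@src-ip/dev,dst-port@dst-ip/dst-mac */
    static char *config = "6665@/eth0,6666@10.0.0.2/";

    static struct netpoll np = {
            .name       = "np_sketch",
            .dev_name   = "eth0",
            .remote_mac = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
    };

    static int __init np_sketch_init(void)
    {
            if (netpoll_parse_options(&np, config))  /* fill IPs/MAC */
                    return -EINVAL;
            if (netpoll_setup(&np))                  /* bind to device */
                    return -EINVAL;
            /* The payoff: this send works even with interrupts off,
             * because netpoll polls the driver directly.  A GAMMA
             * port would put its MPI wire protocol here instead of
             * this one-shot UDP packet. */
            netpoll_send_udp(&np, "hello", 5);
            return 0;
    }

    static void __exit np_sketch_exit(void)
    {
            netpoll_cleanup(&np);
    }

    module_init(np_sketch_init);
    module_exit(np_sketch_exit);
    MODULE_LICENSE("GPL");

The nice property, as netconsole demonstrates, is that this rides on
any driver supporting the netpoll hooks rather than needing a
per-NIC driver the way GAMMA does today.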

> AFAICT, per-port prices for IB (incl cable+switch)
> have not come down anywhere near GB, or even GB prices from ~3 years ago,
> when it was still slightly early-adopter.

Oh, there's no chance of IB getting down to gigabit prices. So there
will always be a market for gigabit clusters running problems that
are embarrassingly parallel enough not to need IB's bandwidth or
latency.

> channels are a "get the kernel out of the way" approach, which I think 
> makes huge amounts of sense.  in a way, InfiniPath (certainly the most 
> interesting thing to happen to clusters in years!) is a related effort,
> since it specifically avoids the baroqueness of IB kernel drivers.

You can think of InfiniPath's "accelerated mode" as channels plus an
extremely lightweight user-level protocol, much simpler than TCP.
Even though we do more of the protocol work on the main CPU than
traditional InfiniBand does, we still put less total load on the main
CPU (!). While this helps latency, it helps message rate a lot more,
which is why we get faster as you add more CPUs to a node, and why we
have a ~10X advantage in message rate for small packets: we send
~11.3 million MPI messages per second on an 8-core machine, while
traditional IB manages only ~1 million.
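
To make that concrete: 11.3M messages/s across 8 cores is about 1.4M
per core, and the aggregate number is what you see from a windowed
streaming test with one rank per core. Here is a minimal sketch of
such a test (my own illustration, not the benchmark behind the
numbers above; it assumes block rank placement, i.e. ranks 0..n/2-1
on one node and the rest on the other):

    /* Minimal aggregate small-message-rate sketch.  Run with one
     * MPI rank per core across two nodes, block placement assumed.
     * Illustrative only -- not the benchmark quoted above. */
    #include <mpi.h>
    #include <stdio.h>

    #define NMSGS  100000   /* messages per sender */
    #define WINDOW 64       /* sends in flight at once */
    #define SIZE   8        /* payload bytes: "small packets" */

    int main(int argc, char **argv)
    {
        int rank, nprocs, i, w;
        char buf[WINDOW][SIZE];
        MPI_Request reqs[WINDOW];
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* pair each rank on node A with one on node B */
        int sender = rank < nprocs / 2;
        int peer = sender ? rank + nprocs / 2 : rank - nprocs / 2;

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < NMSGS; i += WINDOW) {
            for (w = 0; w < WINDOW; w++) {
                if (sender)
                    MPI_Isend(buf[w], SIZE, MPI_CHAR, peer, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                else
                    MPI_Irecv(buf[w], SIZE, MPI_CHAR, peer, 0,
                              MPI_COMM_WORLD, &reqs[w]);
            }
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
        }
        MPI_Barrier(MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("aggregate: %.2f Mmsgs/s\n",
                   (nprocs / 2) * (double)NMSGS / (t1 - t0) / 1e6);
        MPI_Finalize();
        return 0;
    }

The point of the windowed sends is to keep the wire busy, so what you
measure is per-message CPU overhead rather than round-trip latency --
exactly where a lightweight user-level protocol wins.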

-- greg
