[Beowulf] Re: Re: Home beowulf - NIC latencies

Greg Lindahl lindahl at pathscale.com
Fri Feb 11 13:03:35 PST 2005


On Fri, Feb 11, 2005 at 07:49:48PM +0100, Joachim Worringen wrote:

> The latest unsuccessful case of uncoupling computation and MPI 
> communication I read about was BG/L when using the second CPU as a 
> message processor.

Yep, "offload" that improves performance is more complicated than it
seems. The new InfiniPath adapter aims for excellent raw latency and
bandwidth, because those always help. It's also frequently helpful to
be able to send medium-sized packets directly out of cache, instead
of using send DMA, which has to flush the cache to main memory first.
Memory bandwidth isn't free.

Getting more concurrency, by the way, is as much a hardware issue as a
software issue. InfiniPath's hardware is dumb, but highly pipelined.
Most offload engines seem to have less pipelining. And CPU software
overhead generally scales nicely with additional CPUs...

-- greg



