[Beowulf] Correct networking solution for 16-core nodes

Greg Lindahl greg.lindahl at qlogic.com
Fri Aug 4 08:35:25 PDT 2006


On Fri, Aug 04, 2006 at 11:35:56AM +0200, Joachim Worringen wrote:

> I think Vincent meant another latency, not the per-hop latency in the 
> switches: the time to switch between different processes communicating 
> to the NIC. I have never heard of this latency being specified, nor of 
> it being substantial. Can anybody comment?

That would be an odd thing to call 'switch latency', but the
mpi_multibw micro-benchmark would suffer if this were a problem.  I've
never seen any interconnect's aggregate mpi_multibw message rate get
slower as you add more cores. What the mpi_multibw results do show is
that some interconnects get faster for short messages as you add more
cores, and some don't.
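
For anyone unfamiliar with that style of test, here's a minimal sketch
of what a multi-pair message-rate micro-benchmark does (illustrative
only, not the actual mpi_multibw source; the message size, window
depth, and iteration count are made-up values): even ranks push windows
of short messages at their odd partners, and you watch the aggregate
rate as you add more pairs.

#include <mpi.h>
#include <stdio.h>

#define MSG_SIZE 8      /* short messages stress message rate, not bandwidth */
#define WINDOW   64     /* messages in flight per iteration */
#define ITERS    1000

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size % 2 != 0) {                  /* need even/odd rank pairs */
        if (rank == 0) fprintf(stderr, "run with an even number of ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char bufs[WINDOW][MSG_SIZE] = {{0}};
    char ack = 0;
    MPI_Request reqs[WINDOW];
    int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank % 2 == 0) {              /* sender half of each pair */
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(bufs[w], MSG_SIZE, MPI_CHAR, partner, 0,
                          MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Recv(&ack, 1, MPI_CHAR, partner, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);  /* pace the window */
        } else {                          /* receiver half of each pair */
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(bufs[w], MSG_SIZE, MPI_CHAR, partner, 0,
                          MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, partner, 1, MPI_COMM_WORLD);
        }
    }

    double rate = (double)ITERS * WINDOW / (MPI_Wtime() - t0);
    if (rank % 2 == 0)
        printf("rank %d: %.0f msgs/sec to rank %d\n", rank, rate, partner);

    MPI_Finalize();
    return 0;
}

Run it with 2, 4, 8, ... ranks per node and sum the per-pair rates; if
per-core NIC scheduling were a real cost, the aggregate rate would drop
as pairs were added, which is not what we observe.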

Now Joe's LAMMPS slowdown is interesting, since it doesn't happen with
ch_shmem but does show up with various interconnects. That said, with
the LAMMPS dataset that we've tried, we haven't seen that effect.

-- greg
