[Beowulf] Broadcast - not for HPC - or is it?

John Hearns hearnsj at googlemail.com
Tue Oct 5 06:40:55 PDT 2010


On 5 October 2010 14:23, Bogdan Costescu <bcostescu at gmail.com> wrote:
>
> HPC usage is a mixture of point-to-point and collective
> communications; most (all?) MPI libraries use low-level
> point-to-point communications to implement collectives over
> Ethernet. Another important point is that a collective can be
> started by any of the nodes - it's not one particular node which
> generates data and then spreads it to the others; it's also
> relatively common that two or more nodes reach the point of
> collective communication at the same time, leading to higher load
> on the interconnect and possibly congestion.

True indeed.
However, this device might be very interesting if you redefine your
parallel processing paradigm.
How about problems where you send out identical datasets to (say) a
farm of GPUs?
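Something like the following would be the shape of it - one rank owns
the dataset, MPI_Bcast (or, with the right hardware, a genuine
wire-level broadcast) replicates it to every node, and each node then
feeds its local GPU. The buffer size and the GPU hand-off are
placeholders, so read it as a sketch rather than a recipe:

/* Sketch: ship an identical dataset to every node, then hand it
 * to each node's local GPU. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;               /* illustrative dataset size */
    float *data = malloc(n * sizeof *data);

    if (rank == 0) {
        /* Root produces the dataset (read from disk, generate, ...). */
        for (size_t i = 0; i < n; i++)
            data[i] = (float)i;
    }

    /* Every rank ends up with an identical copy of the dataset.
     * Over Ethernet this is still point-to-point underneath; a true
     * broadcast device could in principle do it in one shot. */
    MPI_Bcast(data, (int)n, MPI_FLOAT, 0, MPI_COMM_WORLD);

    /* ...here each rank would copy `data` to its local GPU
     * (e.g. cudaMemcpy) and run its own kernels on it... */

    free(data);
    MPI_Finalize();
    return 0;
}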


