[Beowulf] Shared memory
lindahl at pathscale.com
Sun Jul 3 20:12:10 PDT 2005
On Mon, Jun 27, 2005 at 12:25:13PM +0100, Kozin, I (Igor) wrote:
> I think MPI/OpenMP has its niche.
I think it's a tiny one. Modern interconnects like InfiniPath are
getting to such low latencies that the spinlocks needed for a fully
threaded MPI are very expensive. And a single thread can't necessarily
max out the interconnect performance.
> BTW, "taskset" worked fine with MPI but could not get a grip on OpenMP
> threads on a dual core.
You didn't say which compiler you were using, but in the PathScale
case, our compiler default is to set process affinity for you. Our
manual describes how you can turn this off, but you probably don't
want to.
> Unfortunately I can't recommend a simple established code or benchmark
> which would allow transparent comparison of MPI versus OpenMP/MPI.
MM5 runs both ways... and it's faster as pure MPI. If OpenMP+MPI
doesn't have some special benefit such as accelerating convergence,
it's not going to be a win.