[Beowulf] How to justify the use of MPI codes on multicore systems/PCs?

Rayson Ho raysonlogin at gmail.com
Mon Dec 12 08:00:23 PST 2011


On Sat, Dec 10, 2011 at 3:21 PM, amjad ali <amjad11 at gmail.com> wrote:
> (2) The latest MPI implementations are intelligent enough to use an
> efficient mechanism when executing MPI-based codes on shared-memory
> (multicore) machines. (Please point me to a reference I can cite for this fact.)

Not an academic paper, but from a real MPI library developer/architect:

http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/
http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/
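For what it's worth, you can see the shared-memory path for yourself: Open MPI exposes its transports as MCA parameters, so on a single multicore node you can force the shared-memory BTL and compare it against TCP loopback. A rough sketch (`./my_mpi_app` is a placeholder for your own binary; the BTL name `sm` is from the Open MPI 1.x series, and component names can change across versions):

```shell
# Run 4 ranks on one multicore node, restricting Open MPI to its
# shared-memory BTL ("sm") plus the required "self" loopback component:
mpirun -np 4 --mca btl self,sm ./my_mpi_app

# For comparison, force the same run over the TCP stack instead;
# the latency/bandwidth gap shows what shared memory buys you:
mpirun -np 4 --mca btl self,tcp ./my_mpi_app
```

Point-to-point benchmarks such as the OSU micro-benchmarks or NetPIPE make the difference between the two runs directly measurable, which may be the kind of quotable evidence you are after.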

Open MPI is used by Japan's K computer (currently #1 on the TOP500) and
LANL's Roadrunner (#1 from June 2008 to November 2009); as the slides put
it, "10^16 Flops Can't Be Wrong" and "10^15 Flops Can't Be Wrong":

http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf

Rayson

=================================
Grid Engine / Open Grid Scheduler
http://gridscheduler.sourceforge.net/

Scalable Grid Engine Support Program
http://www.scalablelogic.com/


>
>
> Please help me formally justify this, and comment on/modify the two
> justifications above. Better still, could you suggest a reference from a
> suitable publication that I can quote in this regard?
>
> best regards,
> Amjad Ali
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>





