parallelizing OpenMP apps; pghpf & MPI

Ole W. Saastad ole at
Mon Mar 26 04:34:46 PST 2001

> Greg Lindahl wrote:

> I believe that the Portland Group's HPF compiler does have the ability
> to compile down to message passing of a couple of types. But scaling
> is poor compared to MPI, because the compiler can't combine messages
> as well as a human or SMS can. If you're praying for a 2X speedup, it
> may get you there. If you want 100X...
> Greg Lindahl
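The message-combining point above can be illustrated with a toy latency/bandwidth (alpha-beta) cost model. This is only a sketch: the latency and bandwidth figures are the ScaMPI numbers quoted in the signature below, and the message counts are made up for illustration.

```python
# Toy alpha-beta cost model: sending a message of nbytes takes
# latency + nbytes / bandwidth. Figures taken from the ScaMPI
# signature line below (latency < 4 us, bandwidth > 220 MB/s);
# purely illustrative, not measured.
LATENCY = 4e-6          # seconds per message
BANDWIDTH = 220e6       # bytes per second

def transfer_time(nbytes, nmessages):
    """Total time to move nbytes split evenly across nmessages sends."""
    return nmessages * LATENCY + nbytes / BANDWIDTH

total = 1 << 20                       # 1 MB of data to exchange
many = transfer_time(total, 1024)     # compiler emits many small sends
one = transfer_time(total, 1)         # human (or SMS) combines them
print(f"1024 sends: {many * 1e3:.2f} ms, 1 send: {one * 1e3:.2f} ms")
```

With these numbers the per-message latency of 1024 small sends roughly doubles the transfer time compared with one combined send, which is the kind of overhead an HPF compiler leaves on the table when it cannot aggregate messages.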

Portland HPF does indeed use MPI as the transport layer.

It works well with ScaMPI, which is the implementation I have tested.
I get speedups from 2.33 to 3.04 with 4 CPUs for the NPB benchmarks,
class W, with MG as the exception, where the serial code is faster.

For the pfbench benchmark I get speedups ranging from 1.35 to 3.71,
again with one exception where the serial code runs faster.

Our license is limited to four CPUs, so I have not tested with more.
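The figures above can be turned into parallel efficiencies (speedup divided by CPU count). A minimal sketch, using only the speedup numbers reported in this post:

```python
# Parallel efficiency: fraction of ideal linear scaling achieved.
# speedup S = T_serial / T_parallel, efficiency E = S / ncpus.
def efficiency(speedup, ncpus):
    """Return parallel efficiency for a given speedup on ncpus CPUs."""
    return speedup / ncpus

NCPUS = 4  # license limit mentioned above
# Best/worst speedups reported in this post on 4 CPUs.
reported = [("NPB best", 3.04), ("NPB worst", 2.33),
            ("pfbench best", 3.71), ("pfbench worst", 1.35)]
for name, s in reported:
    print(f"{name}: speedup {s:.2f} -> efficiency {efficiency(s, NCPUS):.0%}")
```

So the best cases run at roughly 75-93% of ideal scaling on 4 CPUs, while the worst pfbench case reaches only about a third.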

More information is available under Support on Scali's web site (see below).


Ole W. Saastad, Dr.Scient.
Scali AS P.O.Box 70 Bogerud 0621 Oslo NORWAY 
Tel:+47 22 62 89 68(dir) mailto:ole at 
ScaMPI: bandwidth .gt. 220 MB/sec. latency .lt. 4us.

More information about the Beowulf mailing list