parallelizing OpenMP apps

Greg Lindahl lindahl at conservativecomputer.com
Fri Mar 23 16:45:57 PST 2001


On Thu, Mar 22, 2001 at 11:11:08AM -0800, Mattson, Timothy G wrote:

> Mapping OpenMP onto an HPF-style programming environment can be as hard
> as or harder than going straight to MPI.

That's true in general. But this tool (SMS) was designed specifically for
weather and climate codes. You actually start by throwing away all the
OpenMP cruft and converting to serial code. Then the tool can insert most
of the data-layout directives for you, provided you have only one data
decomposition, which is fairly common.
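
To make that concrete, here's roughly what the first step looks like.
(Sketch only: C instead of the Fortran these models are really written
in, made-up function names, and I'm not reproducing SMS's actual
directive syntax.)

    /* Step 1: the OpenMP version you start with -- the directive is
     * the "cruft" that gets thrown away. */
    void relax_omp(const double *u, double *unew, const double *f, int n)
    {
        int i;
    #pragma omp parallel for
        for (i = 1; i < n - 1; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]) - f[i];
    }

    /* Step 2: the plain serial version. With a single decomposition
     * over i, a tool like SMS can then insert the data-layout
     * directives for a loop like this on its own. */
    void relax_serial(const double *u, double *unew, const double *f, int n)
    {
        int i;
        for (i = 1; i < n - 1; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]) - f[i];
    }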

> I haven't personally used it, but the OMNI compiler project at RWCP lets
> you run OpenMP programs on a cluster.

As Craig Tierney points out, this isn't true, but if it were, I assure
you that scalability would be poor for programs that aren't
embarrassingly parallel. So it depends on how many jobs, and of what
size, you want to run: if you want to run a single code on a large
number of nodes, OpenMP isn't going to get you there.
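
Here's the kind of loop I mean (same caveats as above: C, made-up
names). Every point reads its four neighbors, so the threads' chunks
are not independent:

    /* A 2-D Jacobi-type sweep: not embarrassingly parallel. */
    void sweep(const double *u, double *unew, int nx, int ny)
    {
        int i, j;
    #pragma omp parallel for private(i)
        for (j = 1; j < ny - 1; j++)
            for (i = 1; i < nx - 1; i++)
                unew[j * nx + i] = 0.25 * (u[j * nx + i - 1]
                                         + u[j * nx + i + 1]
                                         + u[(j - 1) * nx + i]
                                         + u[(j + 1) * nx + i]);
    }

On a real SMP, the rows at each thread's chunk boundary are just cache
traffic. If OpenMP's shared memory is being faked across a network,
every sweep drags those boundary rows over the wire in small pieces,
and that's the end of your scaling.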

I believe that the Portland Group's HPF compiler does have the ability
to compile down to a couple of flavors of message passing. But scaling
is poor compared to hand-written MPI, because the compiler can't combine
messages as well as a human or SMS can. If you're praying for a 2X
speedup, it may get you there. If you want 100X...
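
By "combine messages" I mean something like the following (illustrative
MPI, hypothetical function names):

    #include <mpi.h>

    /* What naive compiler-generated communication can look like:
     * one tiny message per boundary element. */
    void exchange_naive(double *halo, double *edge, int n,
                        int nbr, MPI_Comm comm)
    {
        int i;
        for (i = 0; i < n; i++)
            MPI_Sendrecv(&edge[i], 1, MPI_DOUBLE, nbr, i,
                         &halo[i], 1, MPI_DOUBLE, nbr, i,
                         comm, MPI_STATUS_IGNORE);
    }

    /* What a human (or SMS) writes: the whole edge in one message,
     * so you pay one latency instead of n. */
    void exchange_combined(double *halo, double *edge, int n,
                           int nbr, MPI_Comm comm)
    {
        MPI_Sendrecv(edge, n, MPI_DOUBLE, nbr, 0,
                     halo, n, MPI_DOUBLE, nbr, 0,
                     comm, MPI_STATUS_IGNORE);
    }

Per-message latency is what dominates at scale, so collapsing n little
sends into one is most of the game. The catch is that a compiler has to
prove the combining is safe, while a human or SMS just knows.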

-- g
