[Beowulf] large MPI adopters

Nifty niftyompi Mitch niftyompi at niftyegg.com
Thu Oct 16 16:11:46 PDT 2008


On Wed, Oct 08, 2008 at 07:00:36AM -0500, Gerry Creager wrote:
> 
> Tom,
>
> I looked at that and *THOUGHT* it looked funny, but I was unable to see  
> the typo.  Yeah, OpenMP.  Both have their place, which is sometimes  
> integrated into the same application!

There is also a class of applications that are hybrid, i.e. with coarse-grain
parallel code written in MPI and fine-grain parallel code detected by the compiler.
In some ways this mix of styles has value with modern multi-core CPUs
in smallish systems.  The application's footprint for code and data in
memory need not be duplicated per core, and the programmer can focus on the
obvious big parallel chunks and let the compiler folks attend to the detailed stuff.

I suspect that 90% of MPI clustering is associated with fewer than
fifteen common packages.  A bit of research into who purchases or
downloads these packages would cover most of the MPI cluster computation sites.
In all I suspect two or three weather codes, four or five fluid dynamics
packages, a couple of thermal modeling codes, and a gaggle of chemistry codes
will add up to that 90%.

More guessing: the last 10% contains the next generation of work-in-progress
codes, and the design choices made there will shape clustering
needs in the future.  For some of these the science is the hard part, and
whatever abstraction model the author hooks up to will shape purchase
requirements...

At the recent Stanford HPC conference there were some very interesting
talks on their biochemistry work (Folding at Home) and how they rethought
some of their code designs for orders-of-magnitude speedups.   Some of
their Alzheimer's-related research is astounding.  It might be that the next
step for FaH will be multi-core-aware trickery.

Later,
mitch


-- 
	T o m  M i t c h e l l 
	Found me a new hat, now what?
