Beowulf performance with MPI

Tony Skjellum tony at MPI-Softtech.Com
Sun Jun 25 07:20:17 PDT 2000


Greg, to be perfectly technical, MPI/Pro also achieves about 10% higher
large-message bandwidth in our experiments than other MPIs over TCP over
Ethernet.  So it is not just a matter of rewriting the collectives; the
overall middleware design and implementation matter too.
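
For anyone who wants to measure this on their own cluster, a minimal
ping-pong bandwidth test looks roughly like the sketch below.  The 4 MB
message size and repetition count are arbitrary illustrative choices,
not the settings from our experiments.

/*
 * Minimal large-message ping-pong bandwidth test between ranks 0 and 1.
 * Message size and repetition count are illustrative only.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nbytes = 4 * 1024 * 1024;   /* one "large" message */
    const int reps = 50;
    int rank, size, i;
    char *buf;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "need at least two ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    buf = (char *) malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)            /* each repetition moves the data twice */
        printf("bandwidth: %.1f MB/s\n",
               2.0 * reps * nbytes / (t1 - t0) / 1.0e6);

    free(buf);
    MPI_Finalize();
    return 0;
}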

Since it works, users should take advantage of state-of-the-art MPI
rather than wait around and hope.

As may be pointed out, there is room for improvement in our product too
in some areas, and we're working on that very aggressively.

Tony

Anthony Skjellum, PhD, President (tony at mpi-softtech.com) 
MPI Software Technology, Inc., Ste. 33, 101 S. Lafayette, Starkville, MS 39759
+1-(662)320-4300 x15; FAX: +1-(662)320-4301; http://www.mpi-softtech.com
"Best-of-breed Software for Beowulf and Easy-to-Own Commercial Clusters."

On Sat, 24 Jun 2000, Greg Lindahl wrote:

> > I compared the profiling of the two simulations and it appears that
> > much of the time savings came from a significantly faster MPI_ALLGATHERV,
> 
> ... which is one of the small number of functions in mpich that could use a
> rewrite. For a fairly small amount of effort, collective operations can be
> made much faster.
> 
> -- g
> 
> 
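
As an illustration of the kind of collective rewrite Greg mentions above
for MPI_ALLGATHERV, one common approach is a ring exchange; a sketch built
on MPI_Sendrecv follows.  It shows the general technique only, assumes a
contiguous datatype and sendcount == recvcounts[rank], and is not the
algorithm used by mpich or MPI/Pro.

/*
 * Sketch of a ring-based allgatherv built on MPI_Sendrecv.
 * Illustrative only; assumes a contiguous datatype and
 * sendcount == recvcounts[rank].
 */
#include <mpi.h>
#include <string.h>

int ring_allgatherv(void *sendbuf, int sendcount,
                    void *recvbuf, int *recvcounts, int *displs,
                    MPI_Datatype type, MPI_Comm comm)
{
    int rank, p, step, right, left, sblk, rblk;
    MPI_Aint extent;
    MPI_Status status;
    char *rbuf = (char *) recvbuf;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);
    MPI_Type_extent(type, &extent);

    /* place our own contribution in its slot of the result buffer */
    memcpy(rbuf + displs[rank] * extent, sendbuf, sendcount * extent);

    right = (rank + 1) % p;
    left  = (rank + p - 1) % p;

    /* in p-1 steps, forward the most recently received block to the
       right neighbour while receiving the next block from the left;
       after step s every rank holds blocks rank, rank-1, ..., rank-s */
    for (step = 0; step < p - 1; step++) {
        sblk = (rank - step + p) % p;        /* block we pass on  */
        rblk = (rank - step - 1 + p) % p;    /* block we receive  */
        MPI_Sendrecv(rbuf + displs[sblk] * extent, recvcounts[sblk], type,
                     right, 0,
                     rbuf + displs[rblk] * extent, recvcounts[rblk], type,
                     left, 0, comm, &status);
    }
    return MPI_SUCCESS;
}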




