Dolphin Wulfkit

Øystein Gran Larsen gran at scali.com
Wed May 1 23:12:46 PDT 2002


I have taken the liberty of compiling Joachim's results into graphical form. They
are available from the "Computational Battery" (http://computational-battery.org).
More specifically, the uniprocessor results are presented at:
    http://computational-battery.org/Maskinvare/Jwpmbmyrinetogsci/UP/
while the dual-processor results are available at:
    http://computational-battery.org/Maskinvare/Jwpmbmyrinetogsci/SMP/
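
For those who have not run PMB (the Pallas MPI Benchmark) themselves: its
PingPong test simply bounces messages of varying size between two processes
and reports half the round-trip time as latency, plus the resulting
bandwidth. A minimal sketch in that spirit is given below; it is
illustrative only, assumes any standard MPI installation (ScaMPI, SCI-MPICH
or MPICH-GM would all do), and is of course not the actual PMB source.

/* ping-pong sketch in the spirit of the PMB PingPong test (illustrative
 * only, not the actual PMB code); run with exactly two MPI ranks */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int msglen = 1024;      /* message size in bytes; vary to sweep */
    int rank, size, i;
    char *buf;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(msglen);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {          /* rank 0 sends first, then waits */
            MPI_Send(buf, msglen, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msglen, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else {                  /* rank 1 echoes the message back */
            MPI_Recv(buf, msglen, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, msglen, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double half = (t1 - t0) / iters / 2.0;   /* half round-trip time */
        printf("latency: %.2f us, bandwidth: %.2f MB/s\n",
               half * 1e6, msglen / half / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with the MPI compiler wrapper of your choice (e.g. mpicc) and run
on two nodes, this produces numbers of the same kind as the PingPong curves
in the graphs above.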

Øystein

Joachim Worringen wrote:

> Kevin Van Workum wrote:
> >
> > I know this has been discussed before, but I'd like to know the "current"
> > opinion. What are your experiences with the Dolphin Wulfkit interconnect?
> > Any major issues (compatibility/Linux 7.2/MPI/etc.)? General comments?
>
> Oh oh, this is a sensitive topic. I'll try to give my modest opinion and
> some personal experiences...
>
> Wulfkit is a very solid, high-quality MPI implementation on top of a
> well-performing interconnect - and it's highly optimized for this
> interconnect. It includes more or less full-featured cluster-environment
> software to monitor and manage your cluster. In my personal experience,
> it is very stable and performs well; note, however, that I am not really
> an application developer/user, but an MPI developer.
>
> To compare Wulfkit (SCI + ScaMPI) with "the other" well-known (probably
> better known) cluster interconnect (Myrinet2000 + MPICH-GM), I have put
> results of some PMB runs on a P4-i860 cluster with both SCI and
> Myrinet at http://www.lfbs.rwth-aachen.de/users/joachim/pmb_results .
> Please note that the i860 is quite a bad chipset for both SCI and
> Myrinet, as it has disappointing PCI performance for both DMA and PIO
> (relative to the performance potential of 64-bit/66 MHz PCI). Anyway, look
> at the numbers if you are interested. I really don't want to discuss the
> relevance of such benchmarks and the type of test platform (again);
> anybody is free to interpret them according to their own standards,
> requirements and experience.
>
> Of course, there is also a nice open-source, no-cost MPI on top of the
> same hardware (SCI-MPICH, which I wrote ;-)); for more info see the
> URL in my signature. It also performs quite nicely and has a number of
> unique features that ScaMPI/Wulfkit does not have. But I have to say that
> it surely isn't as well tested as ScaMPI, and it is just an MPI
> implementation, not a cluster environment.
>
>   Joachim
>
> --
> |  _  RWTH|  Joachim Worringen
> |_|_`_    |  Lehrstuhl fuer Betriebssysteme, RWTH Aachen
>   | |_)(_`|  http://www.lfbs.rwth-aachen.de/~joachim
>     |_)._)|  fon: ++49-241-80.27609 fax: ++49-241-80.22339

--
Øystein Gran Larsen, Dr.Scient mailto:gran at scali.no Tel:+47 2262-8982
----------------------------------------------------------------------
        MPI·SCI=HPC -- Scalable Linux Systems -- www.scali.com
Scali Universe = the single system image cluster management THAT WORKS





