[Beowulf] Cluster Diagram of 500 PC

Daniel Navas-Parejo Alonso danapa2000 at gmail.com
Fri Jul 13 09:55:22 PDT 2007


Yes, in fact the SCore libraries gave nice results for MPI over Ethernet in a
couple of clusters I've seen, which, by the way, were implemented by your
company in collaboration with a hardware vendor. As far as I remember, an HPL
efficiency of around 67-70% is achievable with SCore MPI.
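
By efficiency I mean the usual HPL ratio, measured Rmax over theoretical
Rpeak. A quick back-of-the-envelope sketch in C, using made-up node counts,
clock rates and FLOPs/cycle for illustration rather than the figures of any
cluster mentioned in this thread:

/* Illustrative HPL efficiency arithmetic only; all hardware numbers
 * below are hypothetical. */
#include <stdio.h>

int main(void)
{
    double nodes     = 500.0;  /* hypothetical node count            */
    double cores     = 2.0;    /* hypothetical cores per node        */
    double ghz       = 3.0;    /* hypothetical core clock in GHz     */
    double flops_clk = 2.0;    /* hypothetical FLOPs/cycle per core  */
    double rpeak     = nodes * cores * ghz * flops_clk;  /* GFLOPS   */
    double rmax      = 0.68 * rpeak;  /* assuming ~68% HPL efficiency */

    printf("Rpeak = %.0f GFLOPS, Rmax at 68%% efficiency = %.0f GFLOPS\n",
           rpeak, rmax);
    return 0;
}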

The problem was that, as far as I remember, only specific versions of the
x86/x86_64 Linux kernel were supported; I don't know whether that is going to
change in the near future.
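
On the latency figures quoted below (~10 us vs ~40 us): those are usually
half-round-trip numbers from a simple MPI ping-pong. A minimal sketch of such
a measurement, plain MPI and nothing SCore-specific, compiled with mpicc and
run with two ranks:

/* Minimal MPI ping-pong latency sketch: reports the half round-trip
 * time of a 1-byte message between ranks 0 and 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 10000;
    char buf = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

Running that over plain TCP on gigabit versus over a LAN-tuned non-TCP layer
is exactly the comparison Mark is making below.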



2007/7/13, John Hearns <john.hearns at streamline-computing.com>:
>
> Mark Hahn wrote:
> >> Anyway, hop latency in Ethernet is most of the time just peanuts in
> >> terms of latency compared to TCP/IP stack overhead...
> >
> > Unfortunately, I'm still puzzled why we haven't seen any open, widely-used,
> > LAN-tuned non-TCP implementation that reduces the latency.  It should be
> > possible to do ~10 us vs ~40 us for a typical MPI-over-Gb-TCP.
>
> Well, the SCore implementation which we install on all our clusters does
> just this.
> www.pccluster.org
>
> In fact, we have one 500-machine cluster which (at the time of install)
> ranked 167th in the Top 500 and achieved a very high efficiency.
> All connected with Gigabit Ethernet only.
> http://www.streamline-computing.com/index.php?wcId=76&xwcId=72
>
> --
>      John Hearns
>      Senior HPC Engineer
>      Streamline Computing,
>      The Innovation Centre, Warwick Technology Park,
>      Gallows Hill, Warwick CV34 6UW
>      Office: 01926 623130 Mobile: 07841 231235
>