[Beowulf] Performance characterising a HPC application

Kozin, I (Igor) i.kozin at dl.ac.uk
Sat Mar 24 13:04:36 PDT 2007


I can only second that. On our Woodcrest cluster we are seeing MPI
latencies in the region of 12-18 us with Intel NICs and SCore MPI
(the cluster was delivered by Streamline).
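(Latency figures like these are conventionally measured with a ping-pong
microbenchmark: one rank sends a small message, the other echoes it back,
and the one-way latency is taken as half the averaged round-trip time.
Below is a minimal sketch of that same pattern using a local Python
socketpair as a stand-in for MPI, so the numbers it reports reflect
in-process IPC rather than NIC latency; for real MPI measurements use a
tool such as IMB or NetPIPE.)

```python
# Ping-pong latency microbenchmark sketch. NOTE: this uses a local
# socketpair as a stand-in for MPI point-to-point messaging, so it
# measures in-process IPC latency, not network/NIC latency.
import socket
import threading
import time

def echo_server(sock, iters):
    # Echo each 1-byte "ping" straight back as a "pong".
    for _ in range(iters):
        msg = sock.recv(1)
        sock.sendall(msg)

def measure_latency(iters=10000):
    a, b = socket.socketpair()
    t = threading.Thread(target=echo_server, args=(b, iters))
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(b"x")   # ping
        a.recv(1)         # pong
    elapsed = time.perf_counter() - start
    t.join()
    a.close()
    b.close()
    # One-way latency is conventionally half the round-trip time.
    return elapsed / iters / 2

if __name__ == "__main__":
    print(f"half round-trip: {measure_latency() * 1e6:.1f} us")
```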

As for GAMMA, Tony Ladd reported pretty impressive results a while ago
http://ladd.che.ufl.edu/research/beoclus/parallel.htm
He might be able to tell you more, but based on what he told me
I concluded that, unfortunately, GAMMA is not quite ready yet for a
production cluster environment.


I. Kozin  (i.kozin at dl.ac.uk)
CCLRC Daresbury Laboratory, WA4 4AD, UK
skype: in_kozin
tel: +44 (0) 1925 603308
http://www.cse.clrc.ac.uk/disco 



-----Original Message-----
From: beowulf-bounces at beowulf.org on behalf of John Hearns
Sent: Sat 24/03/2007 09:33
To: Eugen Leitl
Cc: beowulf at beowulf.org
Subject: Re: [Beowulf] Performance characterising a HPC application
 
Eugen Leitl wrote:
> 
> Anyone been able to make MPI/Gamma work with the Broadcoms in SunFire
> X2100 and X2100 M2 series?
> 
> I just got a X2100 M2 machine in today. 4 GBit ports, though 2 of them nVidia. 
> 
Can't help with Gamma, but can confirm that SCore works well on Sun 
Galaxies. We have it running on hundreds of 'em.
www.pccluster.org

-- 
      John Hearns
      Senior HPC Engineer
      Streamline Computing,
      The Innovation Centre, Warwick Technology Park,
      Gallows Hill, Warwick CV34 6UW
      Office: 01926 623130 Mobile: 07841 231235
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf




