[Beowulf] evaluating FLOPS capacity of our cluster

richard.walsh at comcast.net richard.walsh at comcast.net
Mon May 11 13:50:20 PDT 2009





>----- Original Message ----- 
>From: "Greg Lindahl" <lindahl at pbm.com> 
> 
>On Mon, May 11, 2009 at 02:30:31PM -0400, Mark Hahn wrote: 
> 
>> 80 is fairly high, and generally requires a high-bw, low-lat net. 
>> gigabit, for instance, is normally noticeably lower, often not much   
>> better than 50%.  but yes, top500 linpack is basically just 
>> interconnect factor * peak, and so unlike real programs... 
> 
>Don't forget that it depends significantly on memory size. 


 ... and interconnect.  Take a look at the Top500 list and note that 
systems with GigE interconnects tend to deliver a lower percentage 
of peak when running Linpack. 


As suggested, to model a Linpack number for your cluster quickly, 
compute its theoretical peak performance, then go to the Top500 
list and find a system with the same processors and interconnect type. 
Note the percentage of peak (Rmax/Rpeak) reported for that system and 
apply it to your own peak to generate an estimated Linpack number 
for your cluster. 
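
For instance, a back-of-the-envelope calculation might look like the 
following (all figures below are made up; plug in your own node count, 
clock, cores per socket, and the efficiency you read off the Top500 
entry): 

# back-of-the-envelope Linpack estimate -- all figures are hypothetical
nodes            = 32     # compute nodes
sockets_per_node = 2      # CPU sockets per node
cores_per_socket = 4      # cores per socket
ghz              = 2.5    # clock rate in GHz
flops_per_cycle  = 4      # double-precision FLOPs per core per clock

# theoretical peak (Rpeak), in GFLOPS
rpeak = nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle

# efficiency (Rmax/Rpeak) read from a Top500 system with the same
# processors and interconnect; GigE systems tend to sit lower than
# InfiniBand or Myrinet systems
efficiency = 0.55

# estimated Linpack number (Rmax) for your cluster
rmax_est = rpeak * efficiency

print("Rpeak          : %8.1f GFLOPS" % rpeak)
print("estimated Rmax : %8.1f GFLOPS" % rmax_est)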


Later, when you have time to install and tune Linpack (HPL) on your 
machine, you can see how close the estimate was.  It should not be 
off by more than 2 to 4%. 
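
If you want to put a number on that once the tuned run is done, the 
check is just the relative error (again with made-up figures): 

rmax_est      = 1408.0    # GFLOPS, estimate from above (hypothetical)
rmax_measured = 1370.0    # GFLOPS, from your tuned HPL run (hypothetical)

error_pct = 100.0 * abs(rmax_measured - rmax_est) / rmax_measured
print("estimate off by %.1f%%" % error_pct)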


Regards, 


rbw 


