[Beowulf] standards for GFLOPS / power consumption measurement?

Douglas Eadline - ClusterWorld Magazine deadline at clusterworld.com
Mon May 9 14:19:40 PDT 2005


On Thu, 5 May 2005, Ted Matsumura wrote:

> I've noted that the orionmulti web site specifies 230 Gflops peak, 110
> sustained, ~48% of peak with Linpack, which works out to ~$909/Gflop?
> The ClusterWorld value box with 8 Sempron 2500s specifies a peak Gflops
> by measuring CPU GHz x 2 (1 FADD, 1 FMUL), and comes out with a rating
> of 52% of peak using HPL @ ~$140/Gflop (sustained?)

It is hard to compare. I don't know what "sustained" or "peak" means in
the context of their tests. There is the actual measured number (which I
assume is sustained) and then the theoretical peak (which I assume is peak).
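
For what it is worth, here is a minimal sketch (Python) of how I read
those numbers. The 2 FLOPs/cycle figure (1 FADD + 1 FMUL) and the 1.75
GHz Sempron clock are assumptions on my part, not something taken from
either spec sheet:

# Back-of-the-envelope check of the numbers quoted above.
# The 2 FLOPs/cycle and 1.75 GHz Sempron clock are assumed values.

def peak_gflops(nodes, ghz, flops_per_cycle=2.0):
    """Theoretical peak: nodes x clock (GHz) x FP results per cycle."""
    return nodes * ghz * flops_per_cycle

def dollars_per_gflop(system_cost, sustained_gflops):
    """Price/performance based on the measured (sustained) HPL number."""
    return system_cost / sustained_gflops

# Orion, as quoted: 230 Gflops peak, 110 sustained with Linpack.
orion_efficiency = 110.0 / 230.0        # ~0.48, i.e. ~48% of peak
orion_implied_price = 110.0 * 909.0     # ~$100K, back-derived from $909/Gflop

# Kronos-style estimate, if the eight Semprons really run at 1.75 GHz.
kronos_peak = peak_gflops(nodes=8, ghz=1.75)   # ~28 Gflops theoretical
kronos_sustained = 0.52 * kronos_peak          # ~14.6 Gflops at 52% of peak

print(orion_efficiency, orion_implied_price, kronos_peak, kronos_sustained)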

And our cost/Gflop does not take the construction cost into
consideration. In my opinion, when reporting these types of numbers there
should be two categories, "DIY/self-assembled" and "turn-key". Clearly
Kronos is a DIY system and will always have an advantage over a
turn-key system.
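
To make the distinction concrete, a toy sketch; the build hours and
labor rate would be whatever an integrator actually charges, so the
inputs here are purely hypothetical:

# Hypothetical illustration of the DIY vs. turn-key pricing categories.

def diy_dollars_per_gflop(parts_cost, sustained_gflops):
    # DIY/self-assembled: your own time is "free", so only parts count.
    return parts_cost / sustained_gflops

def turnkey_dollars_per_gflop(parts_cost, build_hours, hourly_rate,
                              sustained_gflops):
    # Turn-key: integration, burn-in, and support are built into the price.
    return (parts_cost + build_hours * hourly_rate) / sustained_gflops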


>  So what would the orionmulti measure out with HPL? What would the 
> Clusterworld value box measure out with Linpack?

Other benchmarks are here (including some NAS runs):

http://www.clusterworld.com/kronos/bps-logs/

>  Another line item spec I don't get is rocketcalc's
> (http://www.rocketcalc.com/saturn_he.pdf) "Max Average Load"?? What does
> this mean?? How do I replicate "Max Average Load" on other systems??
>  I'm curious whether one couldn't slightly up the budget for the
> ClusterWorld box to use higher-speed procs, or maybe dual procs per node,
> and see some interesting value with regard to low $$/Gflop?? Also, the
> ClusterWorld box doesn't include the cost of the "found" utility rack, but
> does include the cost of the plastic node boxes. What's up with that??

This was explained in the article. We assumed that shelving was optional
because others may wish to just put the cluster on existing shelves or a
table top (or, with enough Velcro strips and wire ties, build a
standalone cube!).

Doug

----------------------------------------------------------------
Editor-in-chief                   ClusterWorld Magazine
Desk: 610.865.6061                            
Cell: 610.390.7765         Redefining High Performance Computing
Fax:  610.865.6618                www.clusterworld.com



