[Beowulf] standards for GFLOPS / power consumption measurement?

Ted Matsumura matsumura at gmail.com
Tue May 10 08:46:14 PDT 2005


Thanks Doug,
 I was just noting that since you went to the detail of including the cost
of CAT 5 cables and the individual node cases, it seemed logical that the 
thing that holds the 8 nodes together should also be an off-the-shelf, 
regularly available item bundled into the total cost. But I understand 
your point that the 8 nodes can stand alone, or possibly sit on existing 
shelving.
 I believe that at some point, whether the driver is density, Gflops, or a 
3-4x larger budget, one might find that moving to rack or blade units is 
cost-effective and offers better manageability than standalone 
single-processor servers.
 Thanks for the benchmark links, I'll check them out.

 On 5/9/05, Douglas Eadline - ClusterWorld Magazine <deadline at clusterworld.com> wrote: 
> 
> On Thu, 5 May 2005, Ted Matsumura wrote:
> 
> > I've noted that the orionmulti web site specifies 230 Gflops peak, 110
> > sustained, ~48% of peak with Linpack, which works out to ~$909/Gflop.
> > The ClusterWorld value box with 8 Sempron 2500s specifies peak Gflops by
> > measuring CPU GHz x 2 (1 FADD, 1 FMUL), and comes out with a rating of
> > 52% of peak using HPL @ ~$140/Gflop (sustained?)
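
For reference, the arithmetic behind those figures is simple enough to
sketch in a few lines of Python. Only the Orion peak/sustained numbers
quoted above are plugged in; I don't want to guess at the exact Kronos
clocks, so the node count and clock speed stay as generic parameters:

    # Theoretical peak: nodes x clock (GHz) x FLOPs retired per cycle.
    # The "x 2" above is one FADD plus one FMUL per clock.
    def theoretical_peak_gflops(nodes, ghz_per_cpu, flops_per_cycle=2):
        return nodes * ghz_per_cpu * flops_per_cycle

    # Sustained efficiency: what HPL/Linpack delivers relative to peak.
    def efficiency(sustained_gflops, peak_gflops):
        return sustained_gflops / peak_gflops

    # Orion numbers quoted above: 230 peak, 110 sustained -> ~0.48 (48%).
    print(efficiency(110.0, 230.0))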
> 
> It is hard to compare. I don't know what sustained or peak means in the
> context of their tests. There is the actual number (which I assume is
> sustained) and then the theoretical peak (which I assume is peak).
> 
> And our cost/Gflop does not take into consideration the construction
> cost. In my opinion, when reporting these types of numbers, there
> should be two categories: "DIY/self-assembled" and "turn-key". Clearly
> Kronos is a DIY system and will always have an advantage over a
> turn-key system.
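
The $/Gflop comparison is the same kind of back-of-the-envelope math. The
system price below is back-calculated from the ~$909/Gflop figure quoted
above, so treat it as an approximation rather than a list price:

    # Cost per sustained Gflop: total system cost / HPL Gflops delivered.
    def dollars_per_gflop(system_cost, sustained_gflops):
        return system_cost / sustained_gflops

    # ~$909/Gflop at 110 sustained Gflops implies roughly a $100,000 system.
    print(dollars_per_gflop(100000, 110.0))   # ~909

    # For a DIY box like Kronos, this figure omits assembly labor, which is
    # exactly the DIY vs. turn-key distinction made above.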
> 
> 
> > So what would the orionmulti measure out with HPL? What would the
> > ClusterWorld value box measure out with Linpack?
> 
> Other benchmarks are here (including some NAS runs):
> 
> http://www.clusterworld.com/kronos/bps-logs/
> 
> 
> > Another line item spec I don't get is rocketcalc's "Max Average Load"
> > (http://www.rocketcalc.com/saturn_he.pdf)?? What does this mean?? How do
> > I replicate "Max Average Load" on other systems??
> > I'm curious if one couldn't slightly up the budget for the ClusterWorld
> > box to use higher-speed procs, or maybe dual procs per node, and see
> > some interesting value with regard to low $$/Gflop?? Also, the
> > ClusterWorld box doesn't include the cost of the "found" utility rack,
> > but does include the cost of the plastic node boxes. What's up with
> > that??
> 
> This was explained in the article. We assumed that shelving was optional
> because others may wish to just put the cluster on existing shelves or a
> table top (or, with enough Velcro strips and wire ties, build a standalone
> cube!).
> 
> Doug
> >
> 
> ----------------------------------------------------------------
> Editor-in-chief, ClusterWorld Magazine
> Desk: 610.865.6061
> Cell: 610.390.7765
> Fax: 610.865.6618
> Redefining High Performance Computing
> www.clusterworld.com
> 
>