[Beowulf] standards for GFLOPS / power consumption measurement?

Timothy Bole tbole1 at umbc.edu
Tue May 10 06:28:40 PDT 2005


this seems to me, at least, to be a bit of an unfair comparison.  if
someone were to just give me a cluster of 80386 processors, then i would
tie for the lead forever, since $0 divided by any nonzero GFLOPS is
$0/GFLOP {not counting if someone were to *pay* me to take said cluster of
80386's}...

having inhabited many an underfunded academic department, i have seen that
there are many places where there simply is no money to throw at research
labs, including computational facilities.  i think that the point of the
article was to demonstrate that one can build a useful beowulf for a
dollar amount that is not unreasonable to find at small companies and
universities.  not everyone can count on the generosity of strangers
handing out network cards and hubs.  so, US$/GFLOP is a decent, but *very*
generic, measure of how far that generic dollar goes.
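
for what it's worth, the arithmetic behind a US$/GFLOP number is simple
enough to sketch in a few lines of python; the CPU count, clock, HPL
efficiency, and price below are made-up illustration values, not the
Kronos or Orion figures:

    # rough sketch of the US$/GFLOP arithmetic; all inputs here are
    # made-up illustration values, not measured or quoted numbers.
    def dollars_per_gflop(n_cpus, clock_ghz, flops_per_cycle,
                          hpl_efficiency, total_cost_usd):
        # peak = CPUs x clock x FLOPs/cycle; sustained = peak x HPL efficiency
        peak_gflops = n_cpus * clock_ghz * flops_per_cycle
        sustained_gflops = peak_gflops * hpl_efficiency
        return peak_gflops, sustained_gflops, total_cost_usd / sustained_gflops

    # e.g. a hypothetical 8-CPU box: 1.75 GHz, 1 FADD + 1 FMUL per cycle,
    # 50% HPL efficiency, US$2500 all-in
    peak, sustained, cost = dollars_per_gflop(8, 1.75, 2, 0.50, 2500.0)
    print("peak %.1f GFLOPS, sustained %.1f GFLOPS, $%.0f/GFLOP"
          % (peak, sustained, cost))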

of course, the bottom line is that a cost-benefit analysis is really
necessary for any cluster, and the typical type of problem to be run on
said cluster should factor into it.  i applaud the work of the KRONOS team
for demonstrating the proof-of-principle that one can design and build a
useful beowulf for US$2500.

cheers,
twb


On Tue, 10 May 2005, Vincent Diepeveen wrote:

> How do you categorize systems bought second-hand?
>
> I bought a third dual K7 mainboard + 2 processors for 325 euro.
>
> The rest I salvaged from old machines that would otherwise get thrown
> away, like an 8GB hard disk. Amazingly, the biggest problem was getting a
> case that keeps the noise down :)
>
> Network cards I got for free, a very nice gesture from someone.
>
> So when speaking of gflops per dollar at Linpack, this will of course
> destroy any current record of $2500, especially for applications needing
> bandwidth to other processors, considering what I paid for this
> self-constructed beowulf.
>
> At 05:19 PM 5/9/2005 -0400, Douglas Eadline - ClusterWorld Magazine wrote:
> >On Thu, 5 May 2005, Ted Matsumura wrote:
> >
> >> I've noted that the orionmulti web site specifies 230 Gflops peak, 110
> >> sustained, ~48% of peak with Linpack, which works out to ~$909/Gflop?
> >>  The Clusterworld value box with 8 Sempron 2500s specifies a peak Gflops
> >> by measuring CPU GHz x 2 (1 FADD, 1 FMUL), and comes out with a rating
> >> of 52% of peak using HPL @ ~$140/Gflop (sustained?)
> >
> >It is hard to compare. I don't know what sustained or peak means in the
> >context of their tests. There is the actual measured number (which I
> >assume is "sustained") and then the theoretical peak (which I assume is
> >"peak").
> >
> >And our cost/Gflop does not take into consideration the construction
> >cost. In my opinion, when reporting these types of numbers, there
> >should be two categories: "DIY/self-assembled" and "turn-key". Clearly
> >Kronos is a DIY system and will always have an advantage over a
> >turnkey system.
> >
> >
> >>  So what would the orionmulti measure out with HPL? What would the
> >> Clusterworld value box measure out with Linpack?
> >
> >Other benchmarks are here (including some NAS runs):
> >
> >http://www.clusterworld.com/kronos/bps-logs/
> >
> >>  Another line item spec I don't get is rocketcalc's
> >> ( http://www.rocketcalc.com/saturn_he.pdf ) "Max Average Load"?? What does
> >> this mean?? How do I replicate "Max Average Load" on other systems??
> >>  I'm curious if one couldn't slightly up the budget for the clusterworld
> >> box to use higher-speed procs, or maybe dual procs per node, and see some
> >> interesting value with regard to low $$/Gflop?? Also, the clusterworld box
> >> doesn't include the cost of the "found" utility rack, but does include the
> >> cost of the plastic node boxes. What's up with that??
> >
> >This was explained in the article. We assumed that shelving was optional
> >because others may wish to just put the cluster on existing shelves or a
> >table top (or, with enough Velcro strips and wire ties, build a standalone
> >cube!)
> >
> >Doug
> >>
> >
> >----------------------------------------------------------------
> >Editor-in-chief                   ClusterWorld Magazine
> >Desk: 610.865.6061
> >Cell: 610.390.7765         Redefining High Performance Computing
> >Fax:  610.865.6618                www.clusterworld.com
> >

=========================================================================
Timothy W. Bole a.k.a. valencequark
Graduate Student
Department of Physics
Theoretical and Computational Condensed Matter
UMBC
4104551924
reply-to: valencequark at umbc.edu

http://www.beowulf.org
=========================================================================


