[Beowulf] 96 Processors Under Your Desktop (fwd from brian-slashdotnews@hyperreal.org)

Robert G. Brown rgb at phy.duke.edu
Tue Aug 31 14:04:03 PDT 2004


On Tue, 31 Aug 2004, Michael Will wrote:

> There already is the Opteron 246HE, specced at 55W, which comes at twice the 
> cost of the standard Opteron 246, specced at 70W. 
> 
> AMD also announced a 246EE specced at 30W.
> 
> What is the range of per-year cost for a dual Opteron 1U for air conditioning
> and power consumption?

This depends on the cost of electricity in your area, clock speed, and
other things.  Assuming a draw of 200 Watts sustained, you burn a
kw-hour every five hours.  That works out to 1752 kw-hr/year.  Power can
cost anything from $0.06 to maybe $0.12/kw-hr.  Assuming 8 cents a
kw-hr, you pay roughly $140 for the power alone, BUT you have to remove
that power as heat if your cluster is in a closed space.  A very crude
estimate of that cost is 1/3 the cost of the heat being removed, or an
extra $47 in round numbers.  Rounding up by another $13 or so, you come
up with roughly $200 a year to power and air condition a single 200 W
dual Opteron (or 200 W of any other node or load) in a closed-room
cluster environment.

That's $1/Watt/Year.  
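
In case it helps, here is the same arithmetic as a throwaway script.
The $0.08/kw-hr figure and the 1/3 cooling overhead (roughly an AC COP
of 3) are just the assumptions above; plug in your own local numbers:

    # Rough yearly cost to power and air condition one node.
    # All inputs are assumptions; substitute your local figures.
    def yearly_node_cost(watts, dollars_per_kwh=0.08, ac_cop=3.0):
        kwh_per_year = watts / 1000.0 * 24 * 365       # 200 W -> 1752 kw-hr/yr
        power_cost   = kwh_per_year * dollars_per_kwh  # ~ $140 at $0.08/kw-hr
        cooling_cost = power_cost / ac_cop             # ~ $47 at COP ~ 3
        return power_cost + cooling_cost               # ~ $187, call it $200

    print(yearly_node_cost(200))   # roughly $1/Watt/year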

Your Mileage May Vary (in fact, it almost certainly will, depending on
the cost of electrical power in your area, the efficiency of your AC,
the heat conductivity of the walls of the server room and the ambient
temperature outside, the day of the week and time of day, and probably a
slew of other things).  The best that can be said of this estimate is
that it is PROBABLY within a factor of two of the true cost and it is
easy to remember.  If you remember that it is referenced to electricity
at $0.08/kw-hr, you can probably refine it pretty easily for the cost of
power in your area, with just a bit of highball conservatism left in the
result.

This, by the way, is the basis of my estimates for cost differentials.
Saving 40W saves you a whopping $40/year, or $120 total over a three
year lifetime and $200 total over five.  If the more expensive but
cooler CPU/system costs more than $200 over the cheap/hot one, you lose
money net, although you may save infrastructure resources whose costs
rise sharply and nonlinearly once you push past a given maximum
capacity, and hence still be able to justify the more expensive but
cooler system.  If electricity is cheaper in your area and/or you have
an average AC COP closer to 5 (possible for high efficiency models and
temperate or cool climates), the marginal savings for the cooler CPU go
DOWN and it is "worth" even less of a marginal cost premium.
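
If you want to plug your own numbers into that break-even argument, it
is a one-liner.  The price premiums below are purely hypothetical; the
$1/Watt/year figure is the rule of thumb derived above:

    # Break-even check for a lower-power part (illustrative numbers only).
    def net_savings(watts_saved, price_premium, years, dollars_per_watt_year=1.0):
        # Positive means the cooler part actually saves money over its lifetime.
        return watts_saved * dollars_per_watt_year * years - price_premium

    print(net_savings(40, 250, years=3))   # 120 - 250 = -130: you lose money
    print(net_savings(40, 150, years=5))   # 200 - 150 =  +50: barely worth it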

Only rarely do I see systems marketed as low power that are "just as
fast" and that actually save money.  Energy, perhaps (although you often
have to look carefully at energy expenditures in manufacturing and
tooling for low-volume units to be able to positively assert even that).
Money, no, unless you have limited space and power and cooling resources
relative to your task set.

To top this off, since VLSI switching rates tend to scale with power
consumed according to a formula I can never remember (but that Jim Lux
publishes from time to time and that is in the archives), low power
chips are nearly always SLOWER than high power chips.  Consequently you
can't just compare the cost and/or power -- you really need independent
benchmarks for the hot and cool versions, in actual systems, in order to
be able to do a PROPER cost-benefit analysis.  In quite a few of the
earlier bladed systems that I actually worked through in this way, the
speed differential was just enough to almost precisely cancel the energy
savings: you had to buy two "cool" cpus to equal the actual work output
of one "hot" cpu that consumes twice the power.
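
For what it's worth, the relation I *think* is being quoted is
(something like) the textbook CMOS dynamic-power formula, P ~ C*V^2*f,
with the added wrinkle that running at a higher clock generally requires
a higher core voltage.  I won't swear that is the exact form in the
archives, but it shows the trend; the numbers below are made up to
illustrate the shape of the tradeoff, not to model any real chip:

    # Illustrative only: dynamic power ~ C * V^2 * f, with the crude
    # assumption that core voltage has to scale roughly with clock.
    def relative_power(clock_ratio, voltage_ratio=None):
        if voltage_ratio is None:
            voltage_ratio = clock_ratio     # crude: V scales with f
        return (voltage_ratio ** 2) * clock_ratio

    print(relative_power(0.7))   # ~0.34: 70% of the clock for ~1/3 the power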

This, of course, sounds just like a proverbial tanstaafl theorem.  Going
"cool" >>can<< actually break even in energy per unit of computing done
(however you measure it) and invariably costs more money for the
hardware.  That leaves only space and maximal packing as a benefit.  In
some cases this was a real benefit -- you could actually get a bit more
compute power per U with the cooler chips (at much greater cost and
about the same net energy).  The advantage was never as much as a factor
of two, though, at least when I last looked.  

This is in raw aggregate cycles, or MIPS or whatever.  Amdahl's law and
its generalizations ensure that this is the BEST you can do even for
embarrassingly parallel applications; real parallel performance would
likely be worse on the slower, cooler bladed systems.
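
To put a rough number on the Amdahl point (the 5% serial fraction here
is purely illustrative):

    # Amdahl's law: speedup on N processors with serial fraction s.
    def amdahl_speedup(n_procs, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    s = 0.05                             # illustrative 5% serial code
    hot  = amdahl_speedup(12, s)         # 12 fast CPUs:              ~7.7x
    cool = 0.5 * amdahl_speedup(24, s)   # 24 half-speed CPUs: 0.5 * ~11.2x = ~5.6x
    print(hot, cool)                     # the "cool" cluster comes out behind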

Perhaps this has changed, but given the 2nd law of thermodynamics, I
doubt it has changed much.  There ain't no such thing as a free lunch,
after all...;-)

HTH.

   rgb

> 
> Michael
> On Monday 30 August 2004 04:30 pm, Glen Gardner wrote:
> > 
> > I have been touting the virtues of low power use clusters for the last 
> > year. I hope to build a second one next year, with twice the 
> > performance of the present machine.
> > My experience with my low power cluster has been that it is not a "big 
> > iron" machine, but is very effective, and very fast for some things. 
> > Also, a low power use cluster is the only way I can have a significant 
> > cluster in my apartment, so it was to be this way, or no way. At 
> > present, the cost of power for my 14 node cluster is running about $20 a 
> > month (14 nodes up 24/7 and in use much of the time).
> > 
> > It is rather difficult to operate a significant opteron cluster in an 
> > office environment (or in an efficiency apartment). The heat alone will 
> > prevent it. If you need lots of nodes and low power use, the "small p 
> > performance" machines are going to be the way to go.  I can think of 
> > many situations where it would be desirable to have a deskside cluster 
> > for computation, development, or testing, and the low power machines 
> > open the door to a lot of users who can't otherwise take advantage of 
> > parallel processing.
> > A 450 watt, 10 GFLOP parallel computing machine for about $10K seems 
> > attractive. It is even more attractive if it does not need any special 
> > power or cooling arrangements.
> > 
> > 
> > Glen
> > 
> > 
> > Mark Hahn wrote:
> > 
> > >>Transmeta 2) This is not shared memory setup, but ethernet connected. So
> > >>    
> > >>
> > >
> > >yeah, just gigabit.  that surprised me a bit, since I'd expect a trendy
> > >product like this to want to be buzzword-compliant with IB.
> > >
> > >  
> > >
> > >>Does anyone have any idea how the Efficeons stack up against Opterons?
> > >>    
> > >>
> > >
> > >the numbers they give are 3Gflops (peak/theoretical) per CPU.
> > >that's versus 4.8 for an opteron x50, or 10 gflops for a ppc970/2.5.
> > >they mention 150 Gflops via linpack, which is about right, given
> > >a 50% linpack "yield" as expected from a gigabit network.
> > >
> > >remember that memory capacity and bandwidth are also low for a typical
> > >HPC cluster.  perhaps cache-friendly things like sequence-oriented bio stuff
> > >would find this attractive, or montecarlo stuff that uses small models.
> > >
> > >  
> > >
> > >>A quad cpu Opteron comes in at a similar price to Orion's 12 cpu unit,
> > >>but the Opteron is a faster chip and has shared mem. The Orion DT-12
> > >>lists a 16 Gflop linpack. Does anyone have quad Opteron linpack results?
> > >>    
> > >>
> > >
> > >for a fast-net cluster, linpack=.65*peak.  for vector machines, it's closer 
> > >to 1.0; for gigabit .5 is not bad.  for a quad, I'd expect a yield better 
> > >than a cluster, but not nearly as good as a vector-super.  guess
> > >.8*2.4*2*4 = 15 Gflops.
> > >
> > >(the transmeta chip apparently does 2 flops/cycle like p4/k8, unlike 
> > >the 4/cycle for ia64 and ppc.)
> > >
> > >I think the main appeal of this machine is tidiness/integration/support.
> > >I don't see any justification for putting one beside your desk - 
> > >are there *any* desktop<=>cluster apps that need more than a single 
> > >gigabit link?
> > >
> > >for comparison, 18 Xserves would deliver the same gflops, dissipate
> > >2-3x as much power, take up about twice the space.
> > >
> > >personally, I think more chicks would dig a stack of Xserves ;)
> > >
> > 
> 
> 

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu





