true power consumption (was: disadvantages of linux cluster)

Robert G. Brown rgb at phy.duke.edu
Sat Nov 9 17:37:31 PST 2002


On Fri, 8 Nov 2002, Dave Lane wrote:

> At 05:53 PM 11/8/2002 +0100, you wrote:
> >First, your facts below are, to put it mildly, incorrect. The max power
> >figures given by AMD and Intel for their CPUs are unfortunately not 'measured'
> >the same way. To get some hard figures I grabbed my clamp meter and went into
> >the computer room. Time for a reality check:
> >
> >dual MP2000+     idle: 0.5-0.6A (kept changing, 0.55)
> >dual Xeon 2.4GHz idle: 0.2A
> >dual MP2000+     load: 0.7A
> >dual Xeon 2.4GHz load: 0.5-0.6A (kept changing, ~0.55)
> >
> >Obviously my clamp meter isn't very accurate but the relative changes
> >should hold. If the numbers seem awfully low keep in mind that we are
> >running on 230V here.
> 
> I have been doing the same thing here over the last two days and you 
> have to be very careful when interpreting these measurements. Most meters 
> assume that the waveform is sinusoidal and use this fact to guess what 
> "RMS" current reading to display on the meter. Switching power supplies 
> have nothing like a sinusoidal current waveform.
> 
> I have been measuring the supply current (120V) for an Athlon XP1700 PC 
> running Linux (MSI KT266A MB, 20G drive, CD, floppy, KB, Mach64 video). For 
> CPU loading I'm using the setiathome command-line version. The measurement 
> technique is inline (I made a custom cable to do this) with both normal 
> (Fluke 73UIII) and true-rms (Fluke 79III) multimeters.
> 
>                 Unloaded (A)   Loaded (A)   Unloaded (W)   Loaded (W)
> Normal          0.65           0.74         78             88.8
> True RMS        1.13           1.3          135            156
> 
> Note the huge difference in readings. This even surprised me, and I should 
> have known better before trying this! I was also surprised by how little 
> difference there was between loaded and unloaded. Note that the same 
> current readings were measured for the unloaded case when the machine was 
> sitting in the BIOS setup screen.
> 
> Note that these readings don't account for the power factor that others 
> have mentioned.
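
A quick aside on why the two Flukes disagree so badly: an average-responding
meter rectifies the current, averages it, and multiplies by 1.11 (the form
factor of a sine wave) to produce its "RMS" display.  The short, peaky pulses
a switching supply actually draws make that guess come out well low.  The
little sketch below is only an illustration of the effect -- the 5 A amplitude
and the pulse shape are made-up assumptions, not anything measured on these
machines, and the wattage it prints is apparent power (volts times RMS amps),
since no power factor was measured:

    # Illustrative only: what a true-RMS meter and an average-responding
    # meter would report for a crude, peaky current waveform of the sort
    # a switching power supply draws.  The 5 A pulse amplitude and the
    # "conducts only near the voltage peaks" shape are assumptions.
    import math

    SAMPLES = 1000
    LINE_V = 120.0          # supply voltage in Dave's measurements

    def current(t):         # t in [0, 1), one line cycle
        phase = math.sin(2 * math.pi * t)
        # current flows only near the voltage peaks (narrow pulses)
        return 5.0 * phase if abs(phase) > 0.9 else 0.0

    i = [current(n / SAMPLES) for n in range(SAMPLES)]

    true_rms = math.sqrt(sum(x * x for x in i) / SAMPLES)
    avg_rect = sum(abs(x) for x in i) / SAMPLES
    avg_responding = 1.11 * avg_rect    # what a non-true-RMS meter shows

    print("true RMS current    : %.2f A" % true_rms)
    print("average-responding  : %.2f A" % avg_responding)
    print("apparent power      : %.0f VA" % (LINE_V * true_rms))
    # Real watts would be V * I_rms * power factor; without a measured
    # power factor the VA figure is only an upper bound on the draw.

With this particular made-up pulse shape the average-responding figure lands
roughly 40% below the true-RMS one, which happens to be about the gap between
the 0.65 A and 1.13 A readings above -- a coincidence of the chosen
parameters, not a model of the actual supply.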

And there is one final measure of power consumption -- the finger.  The
finger, held behind a dual Athlon's case exhaust fan(s), feels exhaust
air that is quite warm.  Air pulled in is maybe 60F; air exhausted is
easily in the 80s (F), maybe the 90s, in a high-volume airflow.  It's an
EZ-bake oven in there.
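
The finger can even be turned into a rough number if you are willing to guess
at the airflow.  Heat carried out of a case is just the air mass flow times
the specific heat of air times the temperature rise, so taking the ~60F
intake and ~90F exhaust above and an assumed 20 CFM of effective through-case
flow (the CFM is a pure guess -- nobody measured it), a sketch of the
arithmetic:

    # Back-of-the-envelope heat estimate from case airflow.
    # Only the ~60F intake and ~90F exhaust come from the observation
    # above; the 20 CFM effective airflow and the air properties are
    # assumed round numbers, so treat the result as order-of-magnitude.
    CFM = 20.0                      # assumed effective through-case flow
    M3_PER_S = CFM * 0.000471947    # CFM -> cubic metres per second
    RHO = 1.2                       # kg/m^3, density of air (approx.)
    CP = 1005.0                     # J/(kg*K), specific heat of air

    t_in_f, t_out_f = 60.0, 90.0
    delta_k = (t_out_f - t_in_f) * 5.0 / 9.0    # F rise -> kelvin

    watts = M3_PER_S * RHO * CP * delta_k
    print("heat carried out of the case: ~%.0f W" % watts)

That comes out to something like 190 W, which at least lands in the same
general neighborhood as the wall-socket readings earlier in the thread;
change the guessed CFM and the number moves proportionally.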

However, the most appropriate observation so far is that, heat production
numbers aside, the Athlons are made very unhappy by heat.  They don't
like to run warm.  They like to lose the heat they produce, however
little or great it might be, and crash like temperamental children if they
ever get a bit too warm.  Our experience mirrors the one reported
earlier -- when these nodes get warm for whatever reason, they are less
stable than Intel nodes in the same room.  Perhaps this leads to the
incorrect perception that they generate more heat, or maybe they just
produce more heat AND are less stable when they get hot...

Not to beat a dead uptime horse, but we've had plenty of opportunity to
observe Hot Athlons in situ.  Our chilled water supply went up from 44F
(normal) to 66F (near-disastrous) AGAIN as facilities struggles with
the concept that even though it is "wintertime" they can't shift the
chiller that supplies our cluster into a warmer mode of operation.  Had
to shut down a whole bank of nodes.  There goes more of our "uptime" --
guess we should have been running Microsoft HPC which would doubtless
have saved us;-)

Ah, for some hardware elves.

Or, of course, a wee chunk of 60 million dollars.  Buy our OWN damn
chiller.  And computer room, naw, building.  And elves, lots of elves.
And have plenty for our "retirement account" in the Caymans and a small
fishing lake with its own head node in a casting gazebo outside the new
computer building, just for me.

Like I said, should have been running Microsoft HPC...

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu





