Dual Athlon MP 1U units

Robert G. Brown rgb at phy.duke.edu
Sat Jan 26 15:20:10 PST 2002


On Sat, 26 Jan 2002, W Bauske wrote:

> Velocet wrote:
> > 
> > Whats the power dissipation of running dual 1.2 GHz Mp's? How about for
> > 1.33Ghz regular athlons in non-SMP configs as comparison? (As well, how much
> > heat comes off typical power supplies to run these systems?)
> > 
> 
> My TigerMP XP1600 duals take about 1.7amps at 125v.
> 
> Forgot the formula to convert to btu's. Vaguely remember a factor
> of around 3.42. Not sure if that was for Watt's or VoltAmps. Assuming
> a VA is approximately a Watt, 212.5 * 3.42 = 727 btu per system. 
> 
> At least with that you can calculate your AC load for a rack. Say 40
> 1U's per rack, 29080 btu's. A ton of AC is 12000 btu's. So, 2.5 ton's
> of AC per rack. Course, you have 40x1.7 amps going into the rack for
> a power load of 68 Amps at 125v.

A ton of AC removes almost exactly 3500 Watts continuously -- that's
the 12000 BTU/hr per ton divided by your factor of about 3.41 BTU/hr
per Watt.  With this number you can work with nice SI Watts and forget
archaic old BTU's, although frankly the "ton" unit is even worse...;-)
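
For the record, here is that arithmetic in a form you can paste into
python -- the constants are just the standard unit definitions, not
properties of any particular hardware:

    # Standard unit definitions, not measurements of any particular box:
    BTU_PER_HR_PER_WATT = 3.412     # 1 W dissipated continuously = 3.412 BTU/hr
    BTU_PER_HR_PER_TON  = 12000.0   # 1 "ton" of AC = 12000 BTU/hr of cooling

    def watts_to_tons(watts):
        """Continuous heat load in watts -> tons of AC needed to remove it."""
        return watts * BTU_PER_HR_PER_WATT / BTU_PER_HR_PER_TON

    print(BTU_PER_HR_PER_TON / BTU_PER_HR_PER_WATT)   # ~3517 W removed per ton
    print(watts_to_tons(3500))                        # ~1.0 ton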

Power has been discussed on the list before a few times.  It depends on
RMS voltage, RMS current, and their relative phase (the power factor).
Wes's 1.7A at 125V are presumably RMS readings, so the apparent power is
about 212 VA per box, and the real power is that times the power factor.
(If those had been peak values instead, both voltage and current would
pick up a factor of 1/\sqrt{2} going to RMS, so the average power would
be half the peak product, or about 106W.)  I believe that somebody
pointed out once that the power factor for most hardware is close to 1,
so phase differences probably don't reduce this a whole lot, but I
haven't measured it myself and don't know -- supplies without power
factor correction can be down around 0.6-0.7, which would put the real
power somewhere between roughly 140 and 210W per box.
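
To make the bookkeeping explicit, here is the textbook relation in
python; the 0.65 power factor for an older, non-PFC supply is my
assumption, not a measurement:

    def real_power(v_rms, i_rms, power_factor=1.0):
        """Average (real) power drawn from the line, in watts."""
        return v_rms * i_rms * power_factor

    print(real_power(125, 1.7, 1.0))    # ~212 W if the power factor really is ~1
    print(real_power(125, 1.7, 0.65))   # ~138 W for an older supply without PFC
    # If 125V and 1.7A had been *peak* values, each picks up a factor of
    # 1/sqrt(2) going to RMS, so average power would be half the peak product:
    print(0.5 * 125 * 1.7)              # ~106 W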

At 40 1U's/rack, this is roughly 5600-8400W/rack, or at >>least<<
1.6-2.4 tons of AC per rack just to remove the heat.  However, the heat
removal capability of AC is itself a bit amorphous.  The efficiency
depends on things like the
AC is itself a bit amorphous.  The efficiency depends on things like the
ambient air temperature that it is trying to cool and the ambient
temperature of the environment where it is (eventually) trying to dump
the heat.  To be safe you need to keep the ambient air entering the rack
quite cool, since your rack is basically a 6-8 KW space heater.  You need
to be especially careful with airflow, since the nodes in the middle
have basically no way of rejecting heat EXCEPT to the airflow.  Then, as
Wes noted, there are the other peripherals that might be in the rack --
switches, surge protectors, UPS, etc. -- which also draw current.  2-2.5
tons of AC is probably better.
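
A quick back-of-the-envelope sketch of the rack totals in python -- the
per-node wattage range and the 10% allowance for peripherals are
guesses, not measurements:

    def rack_cooling(nodes, watts_per_node, overhead_frac=0.10):
        """Total rack heat load in watts and the AC tonnage needed to remove it."""
        watts = nodes * watts_per_node * (1.0 + overhead_frac)
        tons = watts * 3.412 / 12000.0
        return watts, tons

    print(rack_cooling(40, 140))   # ~6160 W, ~1.8 tons
    print(rack_cooling(40, 210))   # ~9240 W, ~2.6 tons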

One useful way to imagine the rack is as a stack of metal boxes
containing two 75W light bulbs each, all turned on inside the boxes,
with the boxes so tightly closed that hardly any light escapes. If
>>anything<< interrupts the cooling air, those boxes will get mighty hot
-- hot enough to short things out and maybe start a fire -- very
quickly.

That's basically why I worry about 1U duals.  In principle they'll work
-- keep the outside air cool, pull as much cold air through the cases as
you can possibly arrange, keep the air clean (so the fans don't clog),
monitor thermal sensors and kill nodes if they start getting too hot.  You can
see, though, that they are a design that taunts Murphy's Law.  Not too
robust.  A little thing like an AC blower motor that blows a circuit
breaker at 3 am can reduce your $65K rack of hardware to a pile of junk
in the thirty minutes it takes you to find out and do something about
it, if you don't have a fully automated (and functioning) shutdown setup.
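
By a fully automated shutdown I mean something as dumb as the following
python sketch, run on each node -- the sensor path and the 70C limit are
placeholders for whatever lm_sensors/ACPI interface and threshold your
hardware actually supports, not anything we run here:

    import time
    import subprocess

    # Hypothetical sensor path and limit -- adjust for whatever temperature
    # interface your kernel actually exposes; these are placeholders.
    SENSOR_FILE = "/sys/class/thermal/thermal_zone0/temp"   # millidegrees C
    LIMIT_C = 70.0
    POLL_SECONDS = 30

    def cpu_temp_c():
        with open(SENSOR_FILE) as f:
            return int(f.read().strip()) / 1000.0

    while True:
        if cpu_temp_c() > LIMIT_C:
            # Better to lose a night of compute than the node itself.
            subprocess.run(["/sbin/shutdown", "-h", "now"])
            break
        time.sleep(POLL_SECONDS)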

Not that a stack of 2U duals is MUCH better.  It's still hot -- we have
1800 XP's and will probably see more like 150-160W/box.  If we only put
12 per rack, though, we can leave gaps between the cases and get some
cooling from the surfaces of the cases and in any event the cases have
much larger air volumes, more room for air to flow through, and more
room for bigger fans.  With luck we'll have SOME time to react (or for
our automated sentries to react) if the room AC fails and the power
doesn't.
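
Plugging the sparser 2U layout into the same back-of-envelope
arithmetic (again, the 10% peripheral allowance is just a guess):

    # 12 x 2U duals at ~155 W each, plus ~10% for switches and other peripherals:
    watts = 12 * 155 * 1.10             # ~2050 W per rack
    tons = watts * 3.412 / 12000.0      # ~0.6 tons of AC
    print(watts, tons)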

But yes, we'll need 3 racks for what you put into one.  There is a
fundamental tradeoff.  Space versus power density.  The smaller the
volume into which you concentrate your systems, the more power per unit
volume you burn (and must get rid of) and the more careful your
engineering must be to do it robustly.  Careful engineering in turn
costs money and carries risk, which is traded off against the nontrivial cost of
space into which to put racks.  Our new space is pretty expensive (we
have 75 KW of power capacity matched to 75 KW of chiller capacity -- the
AC blower/heat exchanger unit is the size of my entire office and eats
1/4 of the room).  At the moment we're not crowded, so we're going for a
relatively low density.  In three years, we may need to start repacking
or replacing with more tightly packed nodes as we grow, but in the
meantime we'll enjoy slightly reduced risk and greater robustness of
design.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
