[Beowulf] Reasonable upper limit in kW per rack for air cooling?

Jim Lux james.p.lux at jpl.nasa.gov
Sun Feb 13 19:33:50 PST 2005


>
> >
> I think you're within a factor of 2 or so of the SANE threshold at 10 kW.
> A rack full of 220 W Opterons is there already (~40 1U enclosures).  I'd
> "believe" that you could double that with a clever rack design, e.g.
> Rackable's, but somewhere in this ballpark...it stops being sane.
>
> > If you were designing a computer room today (which I am) what would
> > you allow for the maximum power dissipation per rack _to_be_handled_
> > by_the_room_A/C.  The assumption being that in 8 years if somebody
> > buys a 40 kW (heaven forbid) rack it will dump its heat through
> > a separate water cooling system.
>
> This is a tough one.  For a standard rack, ballpark of 10 kW is
> accessible today.  For a Rackable rack, I think that they can not quite
> double this (but this is strictly from memory -- something like 4 CPUs
> per U, but they use a custom power distribution which cuts power and a
> specially designed airflow which avoids recycling used cooling air).  I
> don't know what bladed racks achieve in power density -- the earlier
> blades I looked at had throttled back CPUs but I imagine that they've
> cranked them up at this point (and cranked up the heat along with them).
>
> Ya pays your money and ya takes your choice.  An absolute limit of 25
> (or even 30) kW/rack seems more than reasonable to me, but then, I'd
> "just say no" to rack/serverroom designs that pack more power than I
> think can sanely be dissipated in any given volume. Note that I consider
> water cooled systems to be insane a priori for all but a small fraction
> of server room or cluster operations, "space" generally being cheaper
> than the expense associated with achieving the highest possible spatial
> density of heat dissipating CPUs.  I mean, why stop at water?  Liquid
> Nitrogen.  Liquid Helium.  If money is no object, why not?  OTOH, when
> money matters, at some point it (usually) gets to be cheaper to just
> build another cluster/server room, right?
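
Just to put rough numbers on those per-rack figures, a trivial sketch (the
node counts and per-node wattages below are round illustrative guesses, not
vendor specs):

# kW per rack = nodes per rack * watts per node / 1000.
# These configurations are illustrative guesses, not vendor specs.
configs = [
    ("40 x 1U nodes at 220 W", 40, 220),
    ("80 half-depth nodes at 220 W", 80, 220),
    ("84 blades at 250 W", 84, 250),
]
for name, nodes, watts in configs:
    print("%-32s %5.1f kW/rack" % (name, nodes * watts / 1000.0))

That lands right in the 9-20 kW/rack range being discussed above.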

The speed of light starts to set another limit on physical size if you
want real speed.  There's a reason why the old Crays are compact and
liquid cooled: roughly a nanosecond and a half of signal propagation delay
per foot.  Once you get past a certain threshold, in many areas you're
actually better off going to very dense form factors and liquid cooling.
I think most clusters haven't reached the performance point where it's
worth liquid cooling the processors, but they're probably pretty close to
that threshold.  Adding machine room space is expensive for other reasons.
You've already got to have water chillers for any major-sized cluster (to
cool the air), so the incremental step of providing an appropriate coolant
interface at the racks and starting to build racks in liquid-cooled
configurations can't be far away.
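
To put numbers on the latency side, here's a trivial sketch; the cable run
lengths and the 2.4 GHz clock are just illustrative assumptions:

# Rough signal propagation delays, assuming ~2/3 c in cable and a
# 2.4 GHz processor clock (both numbers are illustrative, not measured).
C = 3.0e8                # speed of light in vacuum, m/s
V = (2.0 / 3.0) * C      # typical signal speed in copper, m/s
CLOCK_GHZ = 2.4

for meters in (1, 10, 30):                  # in-rack vs. across the room
    ns = 2.0 * meters / V * 1e9             # round-trip delay, ns
    cycles = ns * CLOCK_GHZ                 # clock cycles spent waiting
    print("%3d m round trip: %6.1f ns, ~%4.0f clock cycles" %
          (meters, ns, cycles))

A round trip across a big machine room costs hundreds of clock cycles
before any switch or NIC latency is even counted, which is the argument
for keeping things physically compact.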

Liquid cooling is MUCH more efficient than air cooling: better heat
transfer, better component life (more even temperatures), less real
estate required, etc.  The hangup now is that nobody makes liquid-cooled
PCs as a commodity, mass-production item.  What you'll find are
liquid-cooling retrofits that don't take advantage of what liquid cooling
can get you.  If you look at high-performance radar or sonar processors
and the like that use liquid cooling, the layout and physical
configuration are MUCH different (partly driven by the fact that the
viscosity of a liquid is higher than that of air).
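
To put a number on "more efficient", here's a back-of-the-envelope sketch
of the coolant flow needed to carry off 10 kW.  The 10 kW load and the
10 degree C temperature rise are assumptions picked purely for
illustration; the fluid properties are textbook values:

# Coolant mass flow to remove heat load Q with temperature rise dT:
#   m_dot = Q / (cp * dT)
# The 10 kW load and 10 K rise are assumed for illustration only.
Q = 10e3        # heat load, W
DT = 10.0       # coolant temperature rise, K

#           name,  specific heat J/(kg K), density kg/m^3  (textbook)
coolants = [("air", 1005.0, 1.2), ("water", 4186.0, 1000.0)]

for name, cp, rho in coolants:
    m_dot = Q / (cp * DT)       # mass flow, kg/s
    v_dot = m_dot / rho         # volume flow, m^3/s
    print("%-6s %5.2f kg/s  %8.5f m^3/s" % (name, m_dot, v_dot))

That works out to roughly 0.8 m^3/s of air (on the order of 1800 CFM)
versus about a quarter of a liter per second of water for the same 10 kW,
a ratio of a few thousand to one in the volume of coolant you have to move.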

Wouldn't YOU like to have, say, 1000 processors in one rack, with a 2-3"
flexible pipe running off to somewhere else?  Especially if it were
perfectly quiet and could sit next to your desk?  (1000 processors at
100 W each is 100 kW.)
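
And the plumbing for that isn't exotic.  A rough sketch, assuming a
10 degree C water temperature rise and a nominal 2.5 inch inside diameter
(both just illustrative assumptions):

import math

# Can a 2-3" water line carry off 100 kW?  Assumptions: 10 K coolant
# temperature rise, 2.5 inch inside diameter, textbook water properties.
Q = 100e3                  # heat load, W
DT = 10.0                  # coolant temperature rise, K
CP, RHO = 4186.0, 1000.0   # water: J/(kg K), kg/m^3

m_dot = Q / (CP * DT)              # mass flow, kg/s
v_dot = m_dot / RHO                # volume flow, m^3/s
d = 2.5 * 0.0254                   # pipe inside diameter, m
area = math.pi * (d / 2.0) ** 2    # pipe cross-section, m^2
velocity = v_dot / area            # flow velocity in the pipe, m/s

print("flow %.1f kg/s (%.1f L/s), velocity %.2f m/s" %
      (m_dot, v_dot * 1e3, velocity))

About 2.4 liters per second moving at well under a meter per second, which
is ordinary plumbing rather than anything exotic.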



