[Beowulf] Reasonable upper limit in kW per rack for air cooling?

Robert G. Brown rgb at phy.duke.edu
Sun Feb 13 15:47:22 PST 2005


On Sun, 13 Feb 2005, David Mathog wrote:

> There are a series of white papers by APC here:
> 
>   http://www.apc.com/tools/mytools/index.cfm?action=wp

That link doesn't work for me (APC's website barfs on it), but I googled
and worked through their gatekeeper to get access.  After "logging in"
(yuck) I'm going to try to download:

  WP-5: Cooling Imperatives for Data Centers and Network Rooms

  Effective next generation data centers and network rooms must address
  the known needs and problems relating to current and past designs.
  This paper presents a categorized and prioritized collection of
  cooling needs and problems as obtained through systematic user
  interviews.

which I'm hoping is the one you are referring to above.

> where they discuss various power and cooling factors.  They note
> a disconnect between the higher densities achieved by blades and
> similar high density racks and the practicality of actually
> cooling these beasts.  Basically it comes down to you save space
> on the rack and then give it all back on the cooling system.  Think
> of it minimally in these terms - to move enough cfm at less than 30
> feet per minute starts to require a duct larger than the rack itself!
> 
> In terms of TCO, at the moment, APC rejects the notion that
> these ultra high density machines are cost effective because they
> are so very difficult to cool.

From what I learned of bladed systems back when I reviewed them for my
own purposes, this isn't terribly surprising, but it is really valuable
to have a well-researched document that explains how and why.  10 kW
(think 100 100 W light bulbs) in, what, 2 m^3 -- that's a lot of heat to
get rid of, and almost by definition you're removing it from components
that are packed as tightly as possible.
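
To put rough numbers on that: the airflow you need scales as Q = mdot *
cp * dT, so carrying 10 kW away on a modest temperature rise takes on
the order of a thousand CFM per rack.  A minimal back-of-the-envelope
sketch (Python; the 15 C rise and standard air properties are my own
assumptions, not figures from the APC paper):

    # Airflow needed to remove p_watts of heat with a temperature rise of
    # dt_kelvin, using Q = mdot * cp * dT for air near sea-level density.
    def cfm_for_load(p_watts, dt_kelvin, rho=1.2, cp=1005.0):
        mdot = p_watts / (cp * dt_kelvin)   # mass flow of air, kg/s
        m3_per_s = mdot / rho               # volumetric flow, m^3/s
        return m3_per_s * 2118.88           # 1 m^3/s ~= 2119 CFM

    cfm = cfm_for_load(10000, 15)           # a 10 kW rack, 15 C rise
    print(round(cfm))                       # ~1170 CFM
    print(round(cfm / 30, 1))               # ~39 ft^2 of duct at 30 ft/min

At 30 feet per minute that really is a duct with more cross-section than
the rack's own footprint, which is exactly the point made above.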

> It seems to me that at a certain power point the racks are going to
> have to resort to water cooling.  Long ago the ECL mainframes were
> cooled this way, but it's been a long time since most of us have
> seen water pipes running into the computers in a machine room. 
> 
> Cooling a 10 kW rack well looks to be extremely tough with air,
> and going much above that would seem to require something approaching
> a dedicated wind tunnel.  Any opinions on how high the power
> dissipation in racks will go  before the manufacturers throw
> in the air cooling towel and start shipping them with water
> connections?

I think you're within a factor of 2 or so of the SANE threshold at 10 kW.
A rack full of 1U Opteron nodes drawing ~220 W apiece is there already
(~40 enclosures).  I'd "believe" that you could double that with a
clever rack design, e.g. Rackable's, but somewhere in this ballpark...
it stops being sane.
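
That ballpark is just multiplication, with the ~220 W per enclosure
figure above as an assumed number rather than a measurement:

    # Per-rack heat load for a conventional rack of 1U nodes.
    nodes_per_rack = 40      # roughly a full rack of 1U enclosures
    watts_per_node = 220     # assumed draw per node, as above
    print(nodes_per_rack * watts_per_node / 1000.0)   # 8.8 kW

Double it for the clever-rack case and you're pushing 18 kW, which is
about where I'd draw the sanity line for air.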

> If you were designing a computer room today (which I am) what would
> you allow for the maximum power dissipation per rack _to_be_handled_
> by_the_room_A/C.  The assumption being that in 8 years if somebody
> buys a 40kW (heaven forbid) rack it will dump its heat through
> a separate water cooling system.

This is a tough one.  For a standard rack, a ballpark of 10 kW is
accessible today.  For a Rackable rack, I think they can not quite
double this (but this is strictly from memory -- something like 4 CPUs
per U, but they use a custom power distribution that cuts power draw
and a specially designed airflow that avoids recycling used cooling
air).  I don't know what bladed racks achieve in power density -- the
earlier blades I looked at had throttled-back CPUs, but I imagine
they've cranked them up by now (and cranked up the heat along with them).
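
If you want to gauge "not quite double" numerically, here's a rough
sketch -- the per-CPU power budget is purely my assumption (a CPU plus
its share of memory, disk and PSU losses), not a Rackable or blade spec:

    # Rough rack power for denser layouts; watts_per_cpu is an assumed
    # per-CPU budget (CPU plus its share of the node), not a vendor number.
    def rack_kw(cpus_per_u, watts_per_cpu, usable_u=40):
        return cpus_per_u * watts_per_cpu * usable_u / 1000.0

    print(rack_kw(2, 110))   # conventional dual-CPU 1U nodes: ~8.8 kW
    print(rack_kw(4, 110))   # "4 CPUs per U" style packing:  ~17.6 kW

So a 4-CPUs-per-U rack lands in the 15-20 kW range under those assumed
numbers -- consistent with "not quite double" a standard rack.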

Ya pays your money and ya takes your choice.  An absolute limit of 25
(or even 30) kW/rack seems more than reasonable to me, but then, I'd
"just say no" to rack/server-room designs that pack in more power than I
think can sanely be dissipated in any given volume.  Note that I
consider water-cooled systems to be insane a priori for all but a small
fraction of server room or cluster operations, "space" generally being
cheaper than the expense of achieving the highest possible spatial
density of heat-dissipating CPUs.  I mean, why stop at water?  Liquid
nitrogen.  Liquid helium.  If money is no object, why not?  OTOH, when
money matters, at some point it (usually) gets cheaper to just build
another cluster/server room, right?

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu