[Beowulf] let's standardize liquid cooling

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Sat Sep 29 06:56:13 PDT 2012



On 9/28/12 6:17 PM, "Mark Hahn" <hahn at mcmaster.ca> wrote:

>> Sounds expensive, complicated, and challenging.
>
>I donno - it seems elegantly modular to me.  vendors are responsible
>for getting the heat to the cold plate (via heatpipes, probably.  these
>days, heatpipes are extremely widespread and well-controlled.  every
>laptop has them, many GPU cards and desktop heatsink/fan units.)
>and the facility is responsible for extracting heat from the cold plates.


Heatpipes aren't cheap in small quantities for a custom design, as far as
I know.  When you're making a million laptops, you can afford the NRE and
tooling.  Although I'm not sure; if someone has better data, e.g. that the
NRE is reasonable at scales of 1000 units, then your idea has a lot more
merit.  $50k spread over 1000 units is $50/node, and you can buy a lot of
sheet metal and fans for $50.
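The amortization argument above is easy to make concrete. A minimal sketch
(the $50k NRE figure is from the post; the other unit counts are purely
illustrative):

```python
# Amortizing a one-time NRE (non-recurring engineering) cost over a
# production run.  $50k is the figure from the post; the 100- and
# 10000-unit rows are illustrative assumptions.
def nre_per_node(nre_dollars, units):
    """Per-node share of a fixed NRE cost."""
    return nre_dollars / units

for units in (100, 1000, 10_000):
    print(f"{units:>6} units: ${nre_per_node(50_000, units):,.0f}/node")
```

At 1000 units the per-node share is $50, which is where the comparison
against sheet metal and fans comes from.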

The other issue is that you still need air flow for the rest of the board.
Unless you have some scheme like a heat pipe to a small localized heat
exchanger.


If your goal is to bring chilled water in and reject the heat to a colder
sink, then putting the heat exchanger in on a per-cabinet basis might be
the best approach.  The problem is, as always, cost.  I'll bet a chiller
deck for 100 kW worth of heat removal (1000 nodes at 100 W each) costs a
lot less than 25 chiller decks for 4 kW each.

Just consider things like the fittings and hoses, not to mention the
energy needed to pump the cold water around.  But it is an interesting
concept.
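The centralized-vs-distributed chiller comparison can be sketched with the
classic "six-tenths rule" for equipment cost scaling (cost roughly
proportional to capacity^0.6). The 0.6 exponent and the reference price
below are assumptions for illustration, not figures from the post:

```python
# Economy-of-scale sketch: one big chiller vs. many small ones.
# Uses the six-tenths rule (cost ~ capacity**0.6); the reference
# point (a 10 kW unit at $10,000) is a made-up assumption.
def chiller_cost(kw, ref_kw=10.0, ref_cost=10_000.0, exponent=0.6):
    """Scale a reference chiller cost to a different capacity."""
    return ref_cost * (kw / ref_kw) ** exponent

one_big = chiller_cost(100.0)        # single 100 kW chiller deck
many_small = 25 * chiller_cost(4.0)  # 25 cabinets at 4 kW each
print(f"1 x 100 kW: ${one_big:,.0f}")
print(f"25 x 4 kW:  ${many_small:,.0f}")
```

With any exponent below 1, the single large unit comes out cheaper than
the sum of the small ones, which is the point being made about per-cabinet
heat exchangers.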









More information about the Beowulf mailing list