[Beowulf] Servers Too Hot? Intel Recommends a Luxurious Oil Bath

Ellis H. Wilson III ellis at cse.psu.edu
Wed Sep 5 06:59:50 PDT 2012


On 09/05/2012 09:14 AM, Robert G. Brown wrote:
> On Tue, 4 Sep 2012, Ellis H. Wilson III wrote:
>
>> Yes, Google does house these containers in a fairly basic building, but
>> there is no reason I can think of why it couldn't put them out in the
>> open and run all wires, etc, into the ground instead. I think they just
>> put them in a building for convenience to the maintainers, rather than
>> for some property of the building itself that would enable the
>> containers to work better.
>
> Google in particular, though, lives and dies by means of instantaneous
> access to parts. A computer is to them as a mere neuron is to us --
> nodes fail in their cluster at the rate of many a day, and are replaced
> almost immediately the way they have things set up. This is multiply

This is what I was getting at in my first response when I said, "One big 
concern from my perspective is replacing equipment in these boxes."  I 
know and agree that Google currently relies on commodity (COTS) equipment 
that can be popped in and out rapidly, and I agree it will in the future 
too (though the rapid swapping may not stay necessary).  But I think 
this has a STRONG correlation with three needs:
1. Since they are using air cooling, they need to keep all those 
containers reasonably close to the chillers.
2. While their CPUs+mobos+etc. are COTS, their switching is expensive 
(not IB, of course, but still expensive relative to the individual 
mobos), so they need to maximize use of those switches.
3. Their buildings are stacked many containers high to keep things close 
to the chillers, so that space is very valuable as well, and the more 
CPUs they can keep running, the better.

In summary, the biggest costs I see for them (from my external 
vantage point) are cooling (i.e., power), switching, and space (close to 
the chillers).  So replacing nodes to keep the switches busy, utilize 
the cooling they are paying so much for, and fully use the space they 
have around said chillers is critical for them.

This is why I was suggesting that, "Maybe the whole thing is just 
built, sealed for good, primed with [hydrogen/oil/he/whatever], started, 
allowed to slowly degrade over time and finally tossed when the 
still-working equipment is sufficiently dated."  They could keep the 
switching outside the containers so they can reuse that expensive 
equipment, but everything else would be a "set it, forget it, and let it 
shit the bed" kind of thing.  Even SIMPLER than what they do now -- not 
more complex, which is why the Google style of "screw fancy" works for 
this type of setup.  Basically they would never open up a container to 
replace a mobo like they do now.
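
To put a rough shape on the "let it degrade" idea: if node failures are 
roughly independent with a constant annual failure rate, the live-node 
count just decays exponentially.  A quick sketch in Python (the 5% AFR 
and 2000-node container are my guesses, not Google's numbers):

    # Fraction of nodes still alive in a sealed, never-serviced container,
    # assuming independent failures at a constant annual failure rate.
    # Both the AFR and the container size are illustrative assumptions.
    import math

    nodes = 2000   # nodes sealed into one container (assumption)
    afr = 0.05     # annual failure rate per node (assumption)

    for year in range(9):
        alive = nodes * math.exp(-afr * year)   # exponential survival
        print("year %d: ~%4.0f nodes (%.0f%%) still working"
              % (year, alive, 100.0 * alive / nodes))

Under those made-up numbers a container still has about two-thirds of 
its nodes running after eight years, by which point the hardware is 
dated anyway.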

They save money because a) they can chill things much more easily 
(fewer chillers), and b) with a dense substance they can pipe the 
coolant chilled much farther (distance to the chiller matters far 
less), which they cannot currently do with air.  So instead of a tall 
building with expensive chillers and a strong need to keep everything 
nearby, think of a giant cornfield someplace cold, with a series of 
containers just sitting on the ground and big power and oil pipes 
running to them, touched only when every node in them dies or they are 
decommissioned.
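
For a feel of why a dense coolant changes the distance math, compare how 
much coolant volume you have to move to carry away a fixed heat load, 
from Q = rho * cp * flow * dT.  The property values below are textbook 
ballpark figures; the 100 kW container load and 10 K temperature rise 
are assumptions of mine:

    # Volumetric flow needed to remove a fixed heat load with different
    # coolants, from Q = rho * cp * flow * dT.  Fluid properties are
    # ballpark figures; the load and temperature rise are assumptions.
    heat_load_w = 100e3   # heat from one container, watts (assumption)
    delta_t = 10.0        # coolant temperature rise, kelvin (assumption)

    coolants = {
        # name: (density kg/m^3, specific heat J/(kg*K))
        "air":         (1.2,    1005.0),
        "mineral oil": (850.0,  1900.0),
        "water":       (1000.0, 4186.0),
    }

    for name, (rho, cp) in coolants.items():
        flow = heat_load_w / (rho * cp * delta_t)   # m^3/s required
        print("%-12s %9.4f m^3/s" % (name, flow))

Air comes out around 8 m^3/s versus a few liters per second for oil -- 
roughly three orders of magnitude less volume to move, which is what 
makes long, skinny pipes from a distant chiller plausible.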

> dump it. They can scale up indefinitely by just adding more trailers.

I don't believe they can currently scale up indefinitely by just adding 
more trailers, because they also need proximity to the chillers.  More 
efficient and/or longer-range cooling would allow them to scale up much 
further.

As I've said before, I don't have the proper background to tell whether 
using one of the aforementioned substances would net enough of a win in 
decreased cooling needs to justify slightly underused network switches 
and the cost of the oil (or whatever), but I do contend this approach 
falls squarely into the Google style of doing things.
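
If someone with the real numbers wanted to check, though, the trade-off 
itself is simple arithmetic.  A hypothetical break-even sketch -- every 
dollar figure below is a placeholder I made up, not data:

    # Hypothetical break-even point for sealed oil-filled containers vs.
    # the current serviced air-cooled approach.  All dollar figures are
    # placeholders; plug in real numbers to make this meaningful.
    cooling_savings_per_year = 200e3   # assumed cooling/power savings, $/yr
    switch_underuse_per_year = 50e3    # assumed cost of idle switch ports, $/yr
    oil_and_sealing_cost     = 300e3   # assumed one-time fill/seal cost, $

    net_per_year = cooling_savings_per_year - switch_underuse_per_year
    if net_per_year > 0:
        print("breaks even after %.1f years"
              % (oil_and_sealing_cost / net_per_year))
    else:
        print("never breaks even under these assumptions")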

Best,

ellis


