Racks vs. pile of PCs

Rocky McGaugh rocky at atipa.com
Tue Aug 13 09:49:48 PDT 2002


On Tue, 13 Aug 2002, David Mathog wrote:

> Racks sure look nice and there is no question that they
> are space efficient, but I'm really starting to wonder if
> they are such a great idea for a smallish cluster (<=20 nodes)
> in those situations where there is enough space for a
> classic pile of PCs.   I mean, what other advantages do they
> have besides those two to offset their many disadvantages?
> 
> Racks better than piles:
> 
> 1.  Space efficiency.
> 2.  Aesthetics (racks look cool)
> 
> Piles better than racks (these are not orthogonal):
> 
> 1.  Internal space constraints
> 2.  CPU/motherboard Cooling.  This follows from [1].
> 3.  Motherboard/CPU options.  This follows from [1]. 
>     With a few exceptions most motherboard/CPU combinations
>     will fit into a standard ATX case -  good luck getting
>     a 2.4 GHz P4 into a 1U.
> 4.  Initial purchase price for equivalent performance.
> 5.  Maintenance costs (rack parts tend to be nonstandard
>     and expensive to replace, for instance, 1U power supplies).
> 
> Other factors?
> 
> I estimate that for a small cluster (<1 rack's worth of equipment) with
> node guts (mobo,CPU,disk,ram) costing <= $1200 the racked version
> will cost at least 20-30% more than the piled version.  So if a piled
> 20 node cluster costs $24000, the equivalent racked version will
> be at least $30000.   $6000 seems a lot to pay for no extra performance.
> If the "guts" were much more expensive the additional rack costs would,
> in theory, be a lower percentage.  In practice, it is my impression that
> the ratio is no lower because the vendors charge even more for the 
> racked versions of high performance nodes.
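The 20-30% premium estimated above is easy to sanity-check; a throwaway sketch (using the midpoint 25%, which is where the quoted $24000 vs. $30000 figures land):

```python
nodes = 20
piled_per_node = 1200          # mobo, CPU, disk, RAM in a commodity tower
piled_total = nodes * piled_per_node
rack_markup = 0.25             # midpoint of the quoted 20-30% premium
racked_total = piled_total * (1 + rack_markup)
print(piled_total)             # 24000
print(racked_total)            # 30000.0
print(racked_total - piled_total)  # the 6000 paid for no extra performance
```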

For clusters of all sizes, I like the mid-tower/breadrack/crashcart setup (or
Emergency Response Terminal (ERT), as I like to call it). Of course,
space can be a consideration. It can also sometimes be easier to force-cool
a smaller physical area, which can make racks more attractive.

I would argue, though, that a well-designed 1U or 2U chassis can improve
cooling over most tower cases.

My home boxen are in decently designed Inwin towers with high-flow front
fans and Aopen 300W server power supplies with high-flow fans. The power
supply air feed is on the bottom, right above the processors. Idle, at an
ambient 23C, my AMD 1700+ runs at 55C and my dual 800 P3's run at 53C.

At an ambient 21C, our dual Xeon 1.8's in a 2U run at 23C idle and 31C
under full load.
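Since the two setups were measured at different ambients, the fair comparison is the rise over ambient; a quick sketch of the deltas from the numbers above:

```python
# (cpu_temp_C, ambient_C) pairs taken from the measurements quoted above
readings = {
    "AMD 1700+ in tower (idle)":   (55, 23),
    "dual P3-800 in tower (idle)": (53, 23),
    "dual Xeon 1.8 in 2U (idle)":  (23, 21),
    "dual Xeon 1.8 in 2U (load)":  (31, 21),
}

# rise over ambient is what the chassis/airflow design actually controls
deltas = {name: cpu - ambient for name, (cpu, ambient) in readings.items()}
for name, rise in deltas.items():
    print(f"{name}: +{rise}C over ambient")
```

The 2U stays within 10C of ambient even under load, versus 30C+ for the towers at idle, which is the point about a well-designed low-profile chassis.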


-- 
Rocky McGaugh
Atipa Technologies
rocky at atipatechnologies.com
rmcgaugh at atipa.com
1-785-841-9513 x3110
http://1087800222/
perl -e 'print unpack(u, ".=W=W+F%T:7\!A+F-O;0H`");'




More information about the Beowulf mailing list