high physical density cluster design - power/heat/rf questions

Schilling, Richard RSchilling at affiliatedhealth.org
Mon Mar 12 09:26:37 PST 2001


I took a look at the website.  These boards are full-size PC boards and
might not perform well in such a compact space, due to the problems
you've outlined.

On the other hand, FreeBSD should work fine on these boards.  I'm using
FreeBSD for clustering right now, and the operating system is pretty stable.

Check out http://www.emjembedded.com/products/products.html for single-board
computers that may give you a much denser setup than these boards.

Richard Schilling
Mount Vernon, WA


> -----Original Message-----
> From: Velocet [mailto:mathboy at velocet.ca]
> Sent: Monday, March 05, 2001 9:36 PM
> To: beowulf at beowulf.org
> Subject: high physical density cluster design - power/heat/rf questions
> 
> 
> I have some questions about a cluster we're designing. We really need
> a relatively high-density configuration here in terms of floor space.
> 
> To do this, I've priced out some Socket A boards with onboard NICs and
> video (we don't need the video, though). We aren't doing anything
> massively parallel right now (just running Gaussian/Jaguar/MPQC
> calculations), so we don't need major bandwidth.* We're booting these
> boards with the root filesystem over NFS. We haven't decided on FreeBSD
> or Linux yet. (This email isn't about software config, but feel free to
> ask questions.)
> 
> (* Even with NFS for disk, we're looking at using MFS on FreeBSD (or
> possibly the new md system), or the new nbd on Linux or equivalent, for
> Gaussian's scratch files - oodles faster than disk, and in our case,
> with no disk, it writes across the network only when required. There
> are various tricks we can do here.)
> 
> The boards we're using are PC Chips M810 boards (www.pcchips.com).
> Linux seems fine with the onboard NIC (a SiS chip of some kind - Ben
> LaHaise of Red Hat is working with me on some of the design and has
> been testing it under Linux; I have yet to play with FreeBSD on it).
> 
> The configuration we're looking at to achieve high physical density is
> something like this:
> 
>                NIC and Video connectors
>               /
>  ------------=--------------	 board upside down
>     | cpu |  =  |   RAM   |
>     |-----|     |_________|
>     |hsink|
>     |     |      --fan--
>     --fan--      |     | 
>    _________     |hsink|
>   |         |    |-----|
>   |  RAM    | =  | cpu |
>  -------------=-------------	board right side up
> 
> As you can see, the boards kind of mesh together to take up less
> space. At micro ATX form factor (9.25" per side, I think), with about
> 2.5 or 3" of height for the CPU + heatsink + fan (the tallest part) and
> 1" or less for the RAM, I can stack two of these into 7" (4U). At 9.25"
> per side, 2 wide inside a cabinet gives me 4 boards per 4U in a
> standard 24" rack footprint. If I go 2 deep as well (i.e. a 2x2
> config), then for every 4U I can get 16 boards in.
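> 
> As a rough sanity check on the density, here's a quick Python sketch;
> the ~3.5" height for a meshed pair of boards is an assumption based on
> the dimensions above, not a measurement:
> 
>     # rough rack-density estimate (all dimensions in inches, assumed)
>     pair_height  = 3.5   # one meshed pair: ~3" CPU + heatsink + fan, plus board
>     bay_height   = 7.0   # 4U of rack height
>     boards_wide  = 2     # columns across a 24" rack footprint
>     boards_deep  = 2     # 2x2 config; set to 1 for the 2x1 layout
> 
>     pairs_per_bay = int(bay_height // pair_height)   # meshed pairs per 4U column
>     boards_per_4u = 2 * pairs_per_bay * boards_wide * boards_deep
>     print("boards per 4U:", boards_per_4u)                    # 16 with these assumptions
>     print("boards per 42U rack:", boards_per_4u * (42 // 4))  # 160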
> 
> The cost for this is amazing: some $405 CDN right now for a Duron 800
> with 128 MB of RAM, without the power supply (see below; a standard ATX
> supply is $30 CDN/machine). For $30,000 you can get a large ass-load of
> machines ;)
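> 
> The budget arithmetic, roughly (prices as quoted above, ignoring
> network gear, cabinet materials and taxes):
> 
>     # rough node count for a fixed budget (CDN dollars, prices as above)
>     board_cpu_ram = 405      # Duron 800 + M810 board + 128 MB RAM
>     atx_supply    = 30       # standard ATX power supply
>     budget        = 30000
> 
>     per_node = board_cpu_ram + atx_supply
>     print("cost per node:", per_node)                   # 435
>     print("nodes for the budget:", budget // per_node)  # 68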
> 
> Obviously this is pretty ambitious. I heard talk on the list of some
> people doing something like this, with the same physical configuration
> and cabinet construction. I'm wondering what your experiences have been.
> 
> 
> Problem 1
> """""""""
> The problem is that in the diagram above, the upside-down board has
> another board 0.5" above it - are these two boards going to leak RF
> like mad and interfere with each other's operation? I assume there's
> not much to do about it but put a layer of metal, grounded to the
> cabinet, in between. This will drive up the cabinet construction costs,
> and I'd rather avoid it if possible.
> 
> Our original construction was going to be copper pipe and plexiglass
> sheeting, but we're not sure that this will be viable for something
> that could be rather tall in future revisions of the design. Then
> again, copper pipe can be bolted to our (cement) ceiling and floor for
> support.
> 
> For a small model that Ben LaHaise built, check the pix at
> http://trooper.velocet.ca/~mathboy/giocomms/images
> 
> It's quite a hack - try not to laugh. But it does embody the 'do it
> damn cheap' mentality we're operating with here.
> 
> The boards are designed to slide out the front once the power 
> and network
> are disconnected.
> 
> An alternative construction we're considering is cut and folded sheet
> metal, but at a much higher cost.
> 
> 
> Problem 2 - Heat Dissipation
> """""""""""""""""""""""""""" 
> The other problem we're going to have is heat. We're going to need to
> build our cabinet so that it's relatively sealed, except at the front,
> so we can get some coherent airflow between the boards. I'm thinking
> we're going to need to mount extra fans on the back (this makes the 2x2
> design a bit more tricky, but at only 64-odd machines we can go with a
> 2x1 config instead - 2 stacks of 32, just 16U high). I don't know what
> you can suggest here; it all depends on the physical configuration. The
> machine is housed in a proper environment (Datavaults.com's facilities,
> where I work :) that's climate controlled, but the inside of the
> cabinet will still need massive airflow, even with the room at 68F.
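> 
> For a rough idea of the airflow involved, here's a back-of-the-envelope
> sketch; the per-node wattage and the allowable temperature rise across
> the cabinet are guesses, not measurements:
> 
>     # very rough airflow estimate for one 2x1 stack of 32 nodes
>     watts_per_node = 90      # assumed: diskless Duron 800 + RAM + NIC
>     nodes          = 32
>     delta_t_f      = 20      # assumed allowable air temperature rise (deg F)
> 
>     total_watts = watts_per_node * nodes
>     # air-cooling rule of thumb: CFM ~= 3.16 * watts / delta_T(F)
>     cfm_needed = 3.16 * total_watts / delta_t_f
>     print("heat load (W):", total_watts)                 # ~2900 W per stack
>     print("airflow needed (CFM):", round(cfm_needed))    # ~455 CFM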
> 
> 
> Problem 3 - Power
> """""""""""""""""
> The power density here is going to be high. I need to mount 64 power
> supplies in close proximity to the boards - another reason I might need
> to stick with the 2x1 instead of the 2x2 design (2x1 allows easier
> access, too).
> 
> We don't really want to pull that many power outlets into the room. I
> don't know what a diskless Duron 800 board with 256 MB or 512 MB of RAM
> will draw, though I guess around 0.75 to 1 A, so I'm going to need 3 or
> 4 full circuits in the room (not too bad, actually). However, that's a
> lot of weight for the cabinet to hold - 60-odd power supplies, not to
> mention the weight of the cables themselves weighing down on it, and a
> huge mess of them to boot.
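> 
> A quick sketch of the circuit math, using the 0.75-1 A guess above; the
> 20 A branch circuits and the 80% continuous-load derating are my
> assumptions:
> 
>     import math
> 
>     # rough AC circuit count for 64 diskless nodes
>     amps_per_node   = 1.0    # upper end of the 0.75-1 A guess at 120 V
>     nodes           = 64
>     circuit_amps    = 20     # assumed 20 A branch circuits
>     usable_fraction = 0.8    # assumed continuous-load derating
> 
>     total_amps = amps_per_node * nodes
>     usable_per_circuit = circuit_amps * usable_fraction
>     print("total draw (A):", total_amps)                                    # 64 A
>     print("circuits needed:", math.ceil(total_amps / usable_per_circuit))  # 4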
> 
> I am wondering if someone has a reliable way of wiring multiple boards
> to one power supply. What's the maximum density per supply? Can we go
> with redundant power supplies, like N+1? We don't need that much
> reliability (jobs are short, run on one machine, and can be restarted
> elsewhere), but I am really looking for something that's going to
> reduce the cabling.
> 
> As well, I am hoping there is some economy in the power conversion
> here - a big supply will hopefully convert power for multiple boards
> more efficiently than a single supply per board. However, as always,
> the main concern is cost.
> 
> Any help or ideas are appreciated.
> 
> /kc
> -- 
> Ken Chase, math at velocet.ca  *  Velocet Communications Inc.  *  Toronto, CANADA
> 
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) 
> visit http://www.beowulf.org/mailman/listinfo/beowulf
> 

