[Beowulf] cluster advice

Vincent Diepeveen diep at xs4all.nl
Mon Sep 19 14:27:53 PDT 2005


Hi Asa,

In your case it might be an idea not to buy a cluster at all if your
software only runs on Windows, but instead a single quad-socket dual-core
Opteron 1.8GHz machine with a RAID array. That just fits within your budget.

That gives you 8 cores in total to calculate with.

It's fast, and presumably your image rendering software runs under Windows
rather than *nix?

In that case you can put Server 2003 on it, and later, once there are 64-bit
builds of your software that take advantage of x64, you can upgrade to
Windows Server x64.

If your software runs on Linux and is embarrassingly parallel, you can
consider building a cluster. In that case you could choose, for example,

4 nodes, each a dual-socket dual-core Opteron 2.2GHz, with a RAID array
attached to one of the nodes.
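
Since the work is embarrassingly parallel, you don't even need MPI to keep
those cores busy; a trivial script pushing jobs to the nodes over ssh is
enough to start with. A rough sketch in Python - the node names and the
'render' command are just placeholders for whatever you actually run, and
4 cores per node matches the dual-socket dual-core boxes:

#!/usr/bin/env python
# Minimal job farmer for an embarrassingly parallel render run.
# Hostnames and the render command line are made up -- substitute your own.
import subprocess

NODES = ["node01", "node02", "node03", "node04"]
CORES_PER_NODE = 4                      # dual socket x dual core
FRAMES = range(1, 101)                  # frames 1..100

running = []
for i, frame in enumerate(FRAMES):
    node = NODES[i % len(NODES)]        # round-robin over the nodes
    cmd = ["ssh", node, "render", "--frame", str(frame)]
    running.append(subprocess.Popen(cmd))
    if len(running) == len(NODES) * CORES_PER_NODE:
        for p in running:               # crude throttle: wait for the batch
            p.wait()
        running = []

for p in running:                       # wait for the stragglers
    p.wait()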

By the way, don't skimp on the RAID card: you want a card with a lot of RAM
on the card itself, not something that works 'in the host's RAM' nor some
Linux kernel emulation. And never install beta drivers or a kernel that's
still in beta and not yet well tested on clusters.

Read speed is probably most important for you?

In that case I'd go for SATA RAID 5 with 8 hard disks.
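
Roughly: with 8 disks in RAID 5 you lose one disk's worth of capacity to
parity, and large sequential reads get striped over all 8 spindles, so read
throughput scales close to the number of disks until the controller or bus
caps it. A back-of-the-envelope sketch, with per-disk numbers that are just
assumptions you should replace with your drives' real specs:

# Back-of-the-envelope numbers for 8 SATA disks in RAID 5.
# The per-disk figures are assumptions, not measurements.
disks = 8
disk_size_gb = 250            # assumed capacity per disk
disk_read_mb_s = 55           # assumed sustained read per disk

usable_gb = (disks - 1) * disk_size_gb    # one disk's worth goes to parity
peak_read_mb_s = disks * disk_read_mb_s   # ideal striped sequential read

print("usable capacity: %d GB" % usable_gb)              # 1750 GB
print("ideal sequential read: %d MB/s" % peak_read_mb_s) # 440 MB/s, before
                                                         # controller limits
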
I see so many people around me having problems with their RAID 0 arrays
(surprisingly, especially SCSI U320 arrays) within one or two years of
installation that I'm starting to doubt how well RAID cards are supported
by the different kernels and how well striped file systems have been
debugged in them.

All the guys who crashed their RAID partitions (without exception they were
running RAID 0) were using Linux software arrays.

If not hardware RAID 5, then JBOD is also worth considering, as it should
run essentially bug-free.
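
And whichever way you go, if any Linux software RAID (md) ends up in the
mix, at least watch the array state so a degraded array doesn't sit
unnoticed until a second disk dies. A small sketch that only applies if you
actually run md (hardware RAID cards come with their own vendor tools):

# Tiny degradation check for Linux software RAID (md) arrays.
# Only meaningful on a box that actually runs md.
import re

def md_arrays_degraded(path="/proc/mdstat"):
    """Return True if any md array reports a failed or missing member."""
    with open(path) as f:
        text = f.read()
    # md prints a status vector like [UU] per array; '_' marks a dead slot,
    # and failed members are tagged (F) next to the device name.
    return bool(re.search(r"\[[U_]*_[U_]*\]", text)) or "(F)" in text

if __name__ == "__main__":
    if md_arrays_degraded():
        print("WARNING: an md array looks degraded; check it with mdadm --detail")
    else:
        print("md arrays look healthy")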

Vincent

At 06:21 PM 9/15/2005 -0700, asa hammond wrote:
>Hello all. I am going to be building a cluster to handle image  
>rendering and processing and I wanted to get some advice.  The  
>cluster will have 2 kinds of problems to crunch on,
>1) heavy cpu style jobs with moderate IO and
>2) heavy IO jobs with up to 200 meg image retrieval off of the file  
>server and moderate cpu requirements.
>
>We have 10-14k to spend on the nodes at the moment.
>
>What are your thoughts on the best node configurations to purchase.
>If we go with dual core athlon64 x2's  we can get about 10-12 nodes  
>with single cpu motherboards.
>Any reason to go with dual opterons over the dual core athlon64s if  
>the cache sizes and clock speeds are the same?
>
>Any alternative case systems you all have experience with?  Do any of  
>you run without cases a la early google? Everything at the moment is  
>rackmount cases but we figure if we are using micro atx boards we  
>could fit two boards per 1U front to back with good cooling on  
>aluminium rackmount trays.  Anyone doing such extreme cost-cutting  
>measures to save on the obscene rackmount case+rail costs?
>
>Any mobo recommendations?  What is your feeling about the micro atx  
>boards?  not longterm reliable?  We don't need any of the fancy  
>extras most gamer and server motherboards have. All we need to have  
>is pxe bootable gigabit (dual on mobo would be great), slots for ram  
>(2 gig) and a cpu.  We can go with a gigabit on the board and add an  
>extra nic to get two total, etc.
>
>We are interested in going the channel bonding route for better than  
>gigabit throughput with 2 ports per node, each feeding into a  
>separate switch connected to separate ports on the server.
>
>Any switch recommendations?  Do I need to go with a layer 3 switch if  
>all I am doing is running my machines this way?  We are thinking  
>several 16 port netgears. We want to go for future-proof  
>expandability as we will be adding nodes as we can afford them.
>
>Heating is becoming an issue as well.  We have an 8x10x6 foot room  
>with no AC. Just forced air cooling.  Any good cooling advice for  
>remote AC as well as any kind of quiet passive cooling would be very  
>useful.  We need about 1.5 ton of cooling as far as I can figure.   
>Ambient air temp is 73 deg and room is running at 84-89.
>
>Pointers to any info would be great.  This has all been discussed  
>before but now those recommendations are a bit outdated as the march  
>of Moore goes on.
>
>Thank you all in advance.  This list has tons of great information.   
>A great resource for all of us.
>
>Asa
>
>_______________________________________________
>Beowulf mailing list, Beowulf at beowulf.org
>To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
>
>


