Typical hardware

Cameron Harr charr at lnxi.com
Mon Mar 5 09:09:33 PST 2001


If you could get two duals per 1U, that'd be great density. I must warn
you, though, about heat. Even if you manage to work out the logistics of
your design, heat will be a huge concern. So if you can get the company
to cool the computer room down to refrigerator temperatures, and have the
cost come out of their budget, you may be OK.
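
To put a rough number on that heat concern, here's a quick back-of-the-envelope
sketch. The 150W per dual-P3 board is an assumed figure of mine, not anything
measured from your design; the board, CPU, and rack-unit counts come straight
from your post.

# Rough heat and density estimate for the proposed 1U design.
# WATTS_PER_BOARD is an assumption, not a measured number.
BOARDS_PER_U = 2          # two dual-P3 motherboards per 1U chassis
CPUS_PER_BOARD = 2        # dual-CPU boards
RACK_UNITS = 60           # 60U rack
WATTS_PER_BOARD = 150.0   # assumed draw for one dual-P3 board plus its share of drives

cpus_per_rack = BOARDS_PER_U * CPUS_PER_BOARD * RACK_UNITS
watts_per_rack = BOARDS_PER_U * WATTS_PER_BOARD * RACK_UNITS
btu_per_hour = watts_per_rack * 3.412   # 1 W of load ~= 3.412 BTU/hr of heat to remove

print("%d cpus per rack" % cpus_per_rack)             # 240, matching the figure below
print("~%.1f kW per rack" % (watts_per_rack / 1000))  # ~18 kW under this assumption
print("~%.0f BTU/hr of cooling" % btu_per_hour)       # ~61,400 BTU/hr

Under that assumption a full rack dumps roughly 18 kW (~61,000 BTU/hr, about
five tons of air conditioning), which is far beyond what a typical office-grade
machine room can shed.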

"Carpenter, Dean" wrote:
> 
> We're just now beginning to mess around with clustering - initial
> proof-of-concept for the local code and so on.  So far so good, using spare
> equipment we have lying around, or on eval.
> 
> Next step is to use some "real" hardware, so we can get a sense of the
> throughput benefit.  For example, right now it's a mishmash of hardware
> running on a 3Com Switch 1000, with 100Mb to the head node and 10Mb to the
> slaves.  The throughput setup will be 100Mb switched all around, possibly
> with a gig uplink to the head node.
> 
> Based on this, we hunt for money for the production cluster(s) ...
> 
> What hardware are people using?  I've done a lot of poking around at the
> various clusters linked to from beowulf.org, and seen mainly two types:
> 
> 1.  Commodity white boxes, perhaps commercial ones - typical desktop-type
> cases.  These take up a chunk of real estate, and give no more than 2 cpus
> per box.  Lots of power supplies, shelf space, noise, etc etc.
> 
> 2.  1U or 2U rackmount boxes.  Better space utilization, still 2 cpus per
> box, but costing a whole lot more $$$.
> 
> We, like most out there I'm sure, are constrained by money and by space.
> We need to get lots of cpus in as small a space as possible.  Lots of 1U
> VA-Linux or SGI boxes would be very cool, but would drain the coffers way
> too quickly.  Generic motherboards in clone cases are cheap, but take up too
> much room.
> 
> So, a colleague and I are working on a cheap and high-density 1U node.  So
> far it looks like we'll be able to get two dual-CPU (P3) motherboards per 1U
> chassis, with associated dual-10/100, floppy, CD and one hard drive.  And
> one PCI slot.  Although it would be nice to have several Ultra160 SCSI
> drives in RAID, a generic cluster node (for our uses) will work fine with a
> single large UDMA-100 IDE drive.
> 
> That's 240 cpus per 60U rack.  We're still working on condensed power for
> the rack, to simplify things.  Note that I said "for our uses" above.  Our
> design goals here are density and $$$.  Hence some of the niceties are being
> forsworn - things like hot-swap U160 SCSI RAID drives, das blinken lights
> up front, etc.
> 
> So, what do you think ?  If there's interest, I'll keep you posted on our
> progress.  If there's LOTS of interest, we may make a larger production run
> to make these available to others.
> 
> --
> Dean Carpenter
> deano at areyes.com
> dean.carpenter at pharma.com
> dean.carpenter at purduepharma.com
> 94TT :)
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

-- 
Cameron Harr
Applications Engineer
Linux NetworX Inc.
http://www.linuxnetworx.com





