Beowulf digest, Vol 1 #304 - 13 msgs

Carpenter, Dean Dean.Carpenter at pharma.com
Mon Mar 12 12:15:58 PST 2001


Hmmm.  Has anyone looked at the teeny tiny RazorBlade systems from Cell
Computing?  If a P3-500 is enough for a node, these things are *small*.

http://www.cellcomputing.com

--
Dean Carpenter
deano at areyes.com
dean.carpenter at pharma.com
dean.carpenter at purduepharma.com
94TT :)


-----Original Message-----
From: pbn2au [mailto:pbn2au at qwest.net]
Sent: Tuesday, March 06, 2001 9:37 PM
To: beowulf at beowulf.org
Subject: Re: Beowulf digest, Vol 1 #304 - 13 msgs


> Dean.Carpenter at pharma.com said:
> > We, like most out there I'm sure, are constrained by money and by
> > space. We need to get lots of cpus in as small a space as possible.
> > Lots of 1U VA-Linux or SGI boxes would be very cool, but would drain
> > the coffers way too quickly.  Generic motherboards in clone cases is
> > cheap, but takes up too much room.
>
> > So, a colleague and I are working on a cheap and high-density 1U node.
> >  So far it looks like we'll be able to get two dual-CPU (P3)
> > motherboards per 1U chassis, with associated dual-10/100, floppy, CD
> > and one hard drive.  And one PCI slot.  Although it would be nice to
> > have several Ultra160 scsi drives in raid, a generic cluster node (for
> > our uses) will work fine with a single large UDMA-100 ide drive.
>
> > That's 240 cpus per 60U rack.  We're still working on condensed power
> > for the rack, to simplify things.  Note that I said "for our uses"
> > above.  Our design goals here are density and $$$.  Hence some of the
> > niceties are being foresworn - things like hot-swap U160 scsi raid
> > drives, das blinken lights up front, etc.
>
> > So, what do you think ?  If there's interest, I'll keep you posted on
> > our progress.  If there's LOTS of interest, we may make a larger
> > production run to make these available to others.
>
> > -- Dean Carpenter deano at areyes.com dean.carpenter at pharma.com
> > dean.carpenter at purduepharma.com 94TT :)
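
A quick back-of-the-envelope check of the density figure quoted above; the
board count per chassis, CPUs per board, and rack height are taken straight
from the quoted message, nothing here is measured:

    # Density check for the quoted 1U node design.
    boards_per_chassis = 2      # two dual-CPU (P3) motherboards per 1U chassis
    cpus_per_board     = 2      # dual-CPU boards
    rack_units         = 60     # "per 60U rack"

    cpus_per_u    = boards_per_chassis * cpus_per_board
    cpus_per_rack = cpus_per_u * rack_units

    print(f"{cpus_per_u} CPUs per 1U, {cpus_per_rack} CPUs per {rack_units}U rack")
    # -> 4 CPUs per 1U, 240 CPUs per 60U rack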

 Dean, get rid of the cases!!!!  You can put the motherboards together using
all-thread.  There are a couple of companies selling 90-degree PCI slot
adapters for the NICs.  By running 2 motherboards off a regular power supply,
using just the NIC, processor and RAM (use boot PROMs on the NICs), you can
get 40 boards in a 5-foot rack mount.  Use a shelf every 4 boards to attach
the power supplies top and bottom.  With a fully enclosed case, 8 100 mm fans
are sufficient to cool the entire setup.  Alternatively, if you use 32 boards
and a 32-port router/switch, you can have nodes on wheels!!
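
The boot-PROM idea above means the boards run diskless and pull a kernel
over the network at power-on.  One common way to serve such nodes is a
BOOTP/DHCP server handing each NIC a fixed address and a boot image over
TFTP.  A minimal sketch that generates ISC dhcpd host stanzas for a rack of
40 nodes follows; the MAC addresses, IP range, and file names are made-up
placeholders, not anything from the original post:

    # Hypothetical helper: emit ISC dhcpd host entries for diskless nodes
    # booting from NIC boot PROMs (BOOTP/DHCP + TFTP).  All MACs, IPs and
    # the kernel image name are placeholders for illustration only.
    macs = ["00:50:56:00:00:%02x" % i for i in range(1, 41)]   # 40 boards

    def host_stanza(index, mac):
        ip = f"192.168.1.{100 + index}"
        return (
            f"host node{index:02d} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  fixed-address {ip};\n"
            f"  next-server 192.168.1.1;\n"            # TFTP server
            f"  filename \"/tftpboot/vmlinuz.node\";\n"
            f"}}\n"
        )

    with open("dhcpd.hosts.conf", "w") as out:
        for i, mac in enumerate(macs, start=1):
            out.write(host_stanza(i, mac))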

 It may sound nuts, but mine is a truncated version of this setup.  Using 4
boards I was able to calculate the power needed for the fans, and by filling
my tower with 36 naked motherboards running full steam, I calculated the air
flow.  Yes, it sounds rinky-dink, but under smoked glass it looks awesome!!
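
For anyone who would rather start from a rule of thumb than measure as
above, heat load and airflow relate through the standard sensible-heat
approximation CFM ~= 3.16 * watts / delta-T(deg F).  The per-board wattage,
allowed temperature rise, and per-fan CFM below are assumptions chosen for
illustration, not figures from this post:

    # Rough airflow estimate for a stack of stripped-down boards.
    boards          = 36        # "36 naked motherboards running full steam"
    watts_per_board = 60        # assumed draw for CPU + RAM + NIC, no drives
    delta_t_f       = 20        # allowed air temperature rise, degrees F

    total_watts = boards * watts_per_board

    # Sensible-heat rule of thumb: CFM ~= 3.16 * watts / delta-T (deg F)
    required_cfm = 3.16 * total_watts / delta_t_f

    cfm_per_fan = 35            # assumed for a typical 100 mm case fan
    fans_needed = required_cfm / cfm_per_fan

    print(f"Heat load: {total_watts} W, airflow needed: {required_cfm:.0f} CFM")
    print(f"At {cfm_per_fan} CFM per fan, that is about {fans_needed:.1f} fans")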
