Please forward to beowulf.org (fwd)

Robert G. Brown rgb at phy.duke.edu
Sat Jan 26 15:21:42 PST 2002


Forwarding this for Lee Rottler; see below.  He's had trouble posting in
his own name, probably hitting an anti-spam block by accident.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu



---------- Forwarded message ----------
Date: Sat, 26 Jan 2002 01:16:10 -0800
From: Lee Rottler <rottler at es.ucsc.edu>
To: rgb at phy.duke.edu
Subject: Please forward to beowulf.org

Hi Robert,

I tried unsubscribing then resubscribing with no luck.  Here is the
message I wanted to post.  Murphy's Law says that as soon as you
post it the other three tries will make it to the list. :-)

------------------------------------------------------------------
We have a 132-node dual-Athlon cluster from RackSaver with a
high-bandwidth SCI interconnect from Dolphin and Wulfkit software
from Scali.  Component-wise it is very similar to Robert's cluster
included below.

Our Configuration
132 nodes
Dual Athlon 1.4 (1500)
1024 MB PC2100 (Corsair)
61.5 GB IDE HD
Tyan dual mobo w/ dual NICs

1 frontend server
1 data I/O node (cluster NFS server)

Both of these are connected to the cluster via an Intel
gigabit card.  Initially we had 3Com cards but had trouble with the
supplied Linux driver.  This may be a red herring, since
yesterday we discovered that all of the memory slots had dual-bank
DDR sticks, leading to all kinds of stability problems
independent of the gigabit NICs.  The Tyan boards want dual-bank
memory sticks only in slots #1 and #2; the #3 and #4 slots must
have single-bank DDR sticks.  Sigh.
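
As a quick sanity check on that rule, here is a minimal Python sketch
(the slot inventory is a made-up example, and the "dual-bank only in
slots #1 and #2" rule is just the Tyan behavior described above, not
anything read out of a vendor tool):

# Sketch only: flag dual-bank (dual-rank) DIMMs sitting in slots that,
# on the Tyan boards described above, should hold single-bank sticks.
# The inventory below is hypothetical; fill it in from your own nodes.

dimm_banks = {1: 2, 2: 2, 3: 2, 4: 1}   # slot number -> banks on the installed DIMM
DUAL_BANK_SLOTS = {1, 2}                # assumption: only these slots take dual-bank sticks

def check_dimm_layout(banks_by_slot):
    """Return warnings for dual-bank DIMMs in the wrong slots."""
    warnings = []
    for slot, banks in sorted(banks_by_slot.items()):
        if banks > 1 and slot not in DUAL_BANK_SLOTS:
            warnings.append(f"slot #{slot}: dual-bank DIMM, move it to slot #1 or #2")
    return warnings

for warning in check_dimm_layout(dimm_banks):
    print("WARNING:", warning)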

Aside from these teething problems I am extremely happy with this
machine.  Linpack clocked in at 301.8 Gflops running on all
264 processors.  One of the user MPI codes was run on 128
processors on this machine and on the same number of processors
on Seaborg, and we were 31.2% faster.  Once we have everything
configured the way I want it I will run the full Pallas benchmark
and report back.
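
For what it's worth, here is a rough back-of-the-envelope efficiency
figure for that Linpack number.  The 2 floating-point ops per clock is
my assumption about the Athlon's peak FP rate, not something measured
on this machine:

# Back-of-envelope HPL efficiency for the 301.8 Gflops quoted above.
# FLOPS_PER_CYCLE = 2 is an assumption (1 add + 1 multiply per clock).

N_CPUS = 264             # 132 nodes x 2 CPUs
CLOCK_GHZ = 1.4          # as listed in the configuration above
FLOPS_PER_CYCLE = 2      # assumed peak FP throughput per clock
MEASURED_GFLOPS = 301.8  # Linpack result quoted above

peak_gflops = N_CPUS * CLOCK_GHZ * FLOPS_PER_CYCLE
print(f"Theoretical peak: {peak_gflops:.1f} Gflops")
print(f"HPL efficiency:   {MEASURED_GFLOPS / peak_gflops:.1%}")

Under those assumptions the peak is roughly 740 Gflops, so 301.8 Gflops
works out to a little over 40% HPL efficiency.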

We are still in the configuration and testing phase, but as far
as heat in our 1Us goes, I have not seen any problems.  We
purchased a 32-node single-CPU Athlon cluster from RackSaver last
March, and although there were problems due to the MSI mobo,
there was not a single heat-related failure, and I do not expect
heat to be a problem with the dual 1Us either.  RackSaver has
taken a lot of care in optimizing the air flow through their
boxes to get optimum cooling.  In my experience heat is a
non-issue with RackSaver 1Us, but this is only my experience
(YMMV).

Cheers,
Lee

R C wrote:
 >
 > On Fri, Jan 25, 2002 at 11:54:37AM -0600, Steven Timm wrote:
 > >
 > > I am just wondering how many people have managed to get a
 > > cluster of dual Athlon-MP nodes up and running.  If so,
 > > which motherboards and chipsets are you using, and has anyone
 > > safely done this in a 1U form factor?
 >
 > We're in the process of testing our 1U dual Athlon nodes from Racksaver.
 >
 > Configuration:
 > 16 nodes
 > Dual Athlon 1.53 (1800+)
 > 512 MB PC2100 Reg/ECC (Crucial / Corsair) (We ordered Crucial ram modules
 > before the price hike)
 > 20 GB IDE HDs (IBM)
 > S2462NG (non-scsi version)
 >
 > The units themselves are solid, and hefty (roughly 30 lbs). They do draw
 > quite a bit of power (we are waiting for a 2nd 30 amp drop).  No
 > problems with them so far (24 hour burnin, room temperature approx 75-80
 > deg F, above recommended temperature).  They are noisy, as one would
 > expect from 1U units with these processors.  Don't put them in an office.
 > CPU temperatures after 24 hour runs were in the 49-55 C range.
 >
 > We haven't gotten our software on all the units yet, but they seem
 > stable.  Once our school actually cut the PO, the order went through
 > quickly.
 >
 > Robert Cicconetti
 >
 > PS. Has anyone gotten Wake-on-Lan working on this motherboard?
 > _______________________________________________
 > Beowulf mailing list, Beowulf at beowulf.org
 > To change your subscription (digest mode or unsubscribe) visit
 > http://www.beowulf.org/mailman/listinfo/beowulf

-- 
* Lee Rottler                                  rottler at es.ucsc.edu     *
* System Administrator/Scientific Programmer   Office:  (831) 459-5059 *
* High Performance Computing                   FAX:     (831) 459-3074 *
* IGPP - Earth Sciences                                                *




