[Beowulf] Re: building a new cluster

Tim Mattox tmattox at gmail.com
Wed Sep 1 19:19:41 PDT 2004


On Wed, 01 Sep 2004 21:46:25 -0400, Joe Landman
<landman at scalableinformatics.com> wrote:
> SC Huang wrote:
> 
> > Thanks, Jeff.
> >
> > 1. It is not frequency domain modeling. The FFT's, together with a
> > multigrid method, are used in solving a Poisson equation (for each
> > time step) in the solution procedure.

You should also take a look at FFTW (http://www.fftw.org/).  Unfortunately,
their version 3.0 doesn't yet do MPI; the older 2.1.5 version does.
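
If you end up using 2.1.5, the distributed transforms are fairly simple to
drive.  Here's a rough, untested sketch of a 2D complex transform with the
2.1.5 MPI interface -- the NX/NY grid sizes and the Poisson-specific steps
are just placeholders for whatever your solver actually needs:

/* Untested sketch of FFTW 2.1.5's MPI interface (fftw_mpi.h).
 * NX/NY are placeholder grid sizes; link with -lfftw_mpi -lfftw (plus MPI). */
#include <stdlib.h>
#include <mpi.h>
#include <fftw_mpi.h>

#define NX 128
#define NY 128

int main(int argc, char **argv)
{
    fftwnd_mpi_plan plan;
    fftw_complex *data, *work;
    int local_nx, local_x_start;
    int local_ny_after_transpose, local_y_start_after_transpose;
    int total_local_size;

    MPI_Init(&argc, &argv);

    /* FFTW hands each rank a slab of rows of the NX x NY grid. */
    plan = fftw2d_mpi_create_plan(MPI_COMM_WORLD, NX, NY,
                                  FFTW_FORWARD, FFTW_ESTIMATE);

    fftwnd_mpi_local_sizes(plan, &local_nx, &local_x_start,
                           &local_ny_after_transpose,
                           &local_y_start_after_transpose,
                           &total_local_size);

    data = (fftw_complex *) malloc(total_local_size * sizeof(fftw_complex));
    work = (fftw_complex *) malloc(total_local_size * sizeof(fftw_complex));

    /* ... fill data[] with this rank's slab of the right-hand side ... */

    /* In-place distributed transform; FFTW_NORMAL_ORDER returns the
       output untransposed. */
    fftwnd_mpi(plan, 1, data, work, FFTW_NORMAL_ORDER);

    /* ... scale by the Poisson symbol, inverse transform, etc. ... */

    fftwnd_mpi_destroy_plan(plan);
    free(data);
    free(work);
    MPI_Finalize();
    return 0;
}

The work array is optional (you can pass NULL), but giving FFTW a scratch
buffer of the same size lets it do the internal transposes faster.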

[snip]
> > 4. Thanks for the suggestions on the diskless or other file systems. I
> > will discuss that with my group members.
> 
> 
> This is an interesting way to go if your IO can be non-local, or if you
> just need local scratch space.  It makes building clusters quite
> fast/easy.

You can use Warewulf to boot the nodes from ramdisks (via PXE or
Etherboot), and then use local disks for swap and scratch storage
(possibly as part of a PVFS/GFS/Lustre/"did I forget one?"/GPFS file system).
See http://warewulf-cluster.org/ for more details.  Disclaimer: Warewulf
was such a great system that I became one of its developers.

[snip]
> > Also, I heard of the name "channel bonding" here and there. Is that
> > some kind of network connection method for cluster (to use standard
> > switches to achieve faster data transfer rate)? Can someone briefly
> > talk about it, or point me to some website that I can read about it? I
> > did some google search about it but the materials are too technical for
> > me. :( Is it useful for a cluster of about 30-40 nodes?
> 
> 
> There are plusses and minuses to channel bonding.  Search back through
> the archives for details.  I am not sure if the issues with latency and
> out of order packet send/receive have been addressed in 2.6.

In short, channel bonding ties two or more network interfaces together
into one logical link, in hopes of getting more bandwidth out of standard
switches than a single NIC would give you.  I'm sure others can comment
from experience, but my impression has been that, at least for the time
being, channel bonding with GigE isn't particularly helpful and can
actually slow things down in some situations.  I'd hoped to have tested
this myself by now, but other research priorities have come first.

Good luck on your cluster design choices.
-- 
Tim Mattox - tmattox at gmail.com - http://homepage.mac.com/tmattox/


