[Beowulf] Some beginner's questions on cluster setup
john.hearns at mclaren.com
Thu Jul 9 08:50:27 PDT 2009
I echo what the other replies have said on this one.
> My next question(s) is regarding network setup.
> Each motherboard has an integrated gigabit nic.
> Q: should we be running 2 gigabit NICs per motherboard instead of one?
> Is there a 'rule-of-thumb' when it comes to sizing the network
> (i.e.,'one NIC per 1-2 processor cores'...)
> Also, we were planning on plugging EVERYTHING into one big (unmanaged)
> gigabit switch.
> However, I read somewhere on the net where another cluster was
> separating NFS & MPI traffic on two separate gigabit switches.
This is quite a common configuration - you run the cluster management
and NFS traffic on one network, and the MPI traffic on another network.
I would personally go for two separate switches. The only slight
complication is that when you run MPI through a batch system, the batch
system assigns you an MPI machines file - you might have to rewrite that
file so that the hostnames are the ones associated with the MPI
interface. To make that clear: let's say the node name is 'node1'; you
might have to change this to 'node1-mpi'.
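Concretely, each node would carry one hostname per network. A sketch of
what /etc/hosts might look like (the addresses, subnets and the '-mpi'
suffix here are illustrative, not a standard):

```
10.0.0.1   node1        # management + NFS network (first NIC)
10.1.0.1   node1-mpi    # MPI network (second NIC)
10.0.0.2   node2
10.1.0.2   node2-mpi
```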
Scripts to do this are very easy to write - don't worry.
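As a minimal sketch of such a script - the file names, the sample
contents and the '-mpi' suffix are assumptions for illustration; in a
real job the input would typically be the file your batch system hands
you (e.g. $PBS_NODEFILE under PBS/Torque):

```shell
#!/bin/sh
# Rewrite each hostname in a batch-system machines file to its
# MPI-network alias, e.g. 'node1' -> 'node1-mpi'.

# Sample machines file, as a batch system might generate it
printf 'node1\nnode2\nnode2\n' > machines

# Append '-mpi' to the first field of each line, leaving any
# per-line options (such as slot counts) untouched
sed 's/^\([^ ]*\)/\1-mpi/' machines > machines.mpi

cat machines.mpi
```

You would then point mpirun at machines.mpi instead of the original
file.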
Also I must ask you - have you considered buying a preconfigured, tested
and supported system from a cluster company?
Most cluster vendors have a 'canned configuration' where they will sell
you a system 'off the shelf' which will fit your requirements.
They will test the hardware before it is brought to you, assemble it on
site, test it again, and help you get applications running.
It really is worth doing this. It is not clear from your email which
country you are in; if you say, we could recommend some companies.