[Beowulf] A cluster for material simulation

Mark Hahn hahn at mcmaster.ca
Mon Dec 3 23:00:19 PST 2007


> be familiar with this kind of system and then extend it to around 20 nodes. 
> Task sizes could vary between, let's say, 1G and 10G.

10G is quite modest, especially for 20 nodes (ram is cheap!).
are you sure you need a cluster?  a single nicely configured 
SMP system will handle 10G jobs quite neatly, and save considerable
effort.  of course, you can't really scale memory bandwidth
without going to a cluster, but I would guess that a 4-socket,
quad-core AMD system with all memory banks active would be tempting.
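
if memory bandwidth is what you're worried about, it's worth measuring it
on whatever box you can borrow before committing.  a rough triad-style probe
in python/numpy, just as a sketch (array size and repeat count are arbitrary;
a proper STREAM run is more trustworthy):

  # rough memory-bandwidth probe, in the spirit of the STREAM triad.
  # counts 3 arrays of traffic per pass; numpy temporaries mean the real
  # traffic is somewhat higher, so treat the result as a lower bound.
  import numpy, time

  n = 10 * 1000 * 1000            # ~80 MB per float64 array
  b = numpy.ones(n)
  c = numpy.ones(n)
  reps = 10

  t0 = time.time()
  for _ in range(reps):
      a = b + 2.0 * c             # triad-like: two reads, one write
  elapsed = time.time() - t0

  bytes_counted = 3 * 8 * n * reps
  print("triad-ish bandwidth: %.0f MB/s" % (bytes_counted / elapsed / 1e6))

run it on the desktop board and on the server board and the FSB difference
should show up directly.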

> I did some research and reading (by the way, Building Clustered Linux 
> Systems by Robert W. Lucke is a bit scary!)

well, it tries to cover a lot of ground.  it's really pretty simple
to get a basic cluster up and running.

> Both systems use Gigabit ethernet, 2GB of memory per CPU, an 80GB SATA hard 
> drive per node.
> Desktop motherboard based system:
> 1 Asus P5E WS Professional motherboard, 1066FSB, DDR2 800 non-ECC unbuffered, 
> 2 GigE ports, 1 Intel Q6600 CPU @2.4GHz, 8MB L2 cache
>
> Server motherboard based system:
> Supermicro SuperServer 6015C-MTB, 1333/1066FSB, DDR2 667 ECC FB-DIMM, 2 GigE 
> ports, 2 Intel Xeon 5410 CPUs @2.3GHz, 12MB L2 cache

the main thing here is that Intel has, for a long time, had a mediocre 
reputation for memory bandwidth.  I probably would not consider buying
anything older than the 45nm penryn-generation chips with 1333 or higher FSB.

> It might seem I'm comparing apples and oranges, but theoretical peak 
> performance is equivalent and in terms of cost/CPU there is not a huge 
> difference (150 to 250 A$); also the server solution uses half as many nodes, 
> which could be interesting in terms of space, cables, switch...

a 20-node cluster is half a rack, and not really complicated in cabling.
how's your cooling?  I'd probably worry about cooling before I worried 
about cabling...

> For recycling, 
> the desktop option seems better, except if we use the servers for some kind 
> of graphics cluster in the future.

perhaps.  my experience is that well-adapted cluster nodes are not 
good for desktops precisely because of those adaptations.

> 	1- If I understood properly, FEM is kind of memory bound, so DDR2 
> 800/1066FSB/8MB L2 cache or DDR2 667/1333FSB/12MB L2 cache -> kind of a 
> newbie to these things!

10G/20 nodes is 512M/node - divided among 4 cores is 128M/core, so I 
suspect the cache size isn't going to make much difference.  the FSB will
matter, though.
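
spelled out (python; the numbers are just the ones from your mail):

  # back-of-envelope memory per node/core for the numbers above
  job_gb = 10.0                   # largest task
  nodes  = 20
  cores_per_node = 4              # quad-core

  per_node_mb = job_gb * 1024 / nodes
  per_core_mb = per_node_mb / cores_per_node
  print("%.0f MB/node, %.0f MB/core" % (per_node_mb, per_core_mb))
  # -> 512 MB/node, 128 MB/core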

> 	2- Which one seems better in terms of performance and reliability?

faster FSB and ram will be noticeably better in performance.  I don't see
why there would be much difference in reliability, though.  the parts that
break are mainly fans.  server parts tend to offer nicer monitoring options
as well as the comfort of ECC (one less place for a heisenbug to live.)
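
for what it's worth, on a kernel with EDAC support, keeping an eye on
corrected-error counts is trivial.  a sketch in python, assuming the usual
EDAC sysfs layout (check what your kernel actually exposes):

  # print corrected/uncorrected ECC counts from the EDAC sysfs interface
  # (path assumes a kernel with the EDAC driver for your memory controller)
  import glob, os

  for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
      ce = open(os.path.join(mc, "ce_count")).read().strip()
      ue = open(os.path.join(mc, "ue_count")).read().strip()
      print("%s: %s corrected, %s uncorrected"
            % (os.path.basename(mc), ce, ue))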

> 	3- Do I need a distinct network for NFS sharing (that's why I wanted

certainly not.  my experience is that a single job doesn't tend to overlap
its MPI and NFS traffic much.  if you share a single node among multiple 
jobs, this could be an issue.

> 2 GigE ports per node) or do I put the shared data on the master node (quote 
> from R. W. Lucke's book: "This is bad, bad, bad")?

well, he's wrong.  sure, it's a hotspot, but it's also convenient, cheap
and effective.  going to a parallel filesystem will be a significant 
increase in complexity, though only you can know how badly you need the 
IO performance.  a shared fileserver can deliver higher bandwidth through
trunking or even a 10Gb link.  configuring a couple fileservers obviously
scales nicely at the expense of having a partitioned namespace.
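
the sanity check is simple arithmetic: aggregate per-node demand versus the
fileserver's uplink.  a sketch (the per-node rate is a made-up placeholder;
measure your own jobs):

  # will one fileserver's uplink cover the cluster's appetite?
  nodes         = 20
  per_node_mb_s = 5.0             # hypothetical average per-node I/O demand
  gige_mb_s     = 110.0           # realistic GigE payload rate
  trunk_links   = 4               # e.g. 4x GigE trunked; 10GbE ~ 1000 MB/s

  demand = nodes * per_node_mb_s
  print("aggregate demand: %.0f MB/s" % demand)
  print("single GigE enough? %s" % (demand <= gige_mb_s))
  print("%dx trunk enough?   %s" % (trunk_links,
                                    demand <= trunk_links * gige_mb_s))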

> 	4- there is also the Supermicro SuperServer 6015TW-TB, with two dual-socket 
> motherboards in a 1U form factor (note: it's just two nodes put in one 
> box, no interconnection whatsoever apart from the PSU), with roughly the 
> same price per CPU compared to the other Supermicro solution; could be 
> interesting for an even more compact system. Do you have any knowledge of 
> this system?

AFAIK, the only downside is a custom form factor (chassis, boards, PSU).
but why is space such an issue for you?  a stack of 20 1U servers is not
all that big.  it's also a newer system design which, given low-volt cpus,
would be nicely heat-efficient.

> 	5- anything I didn't think of and might be worth checking, such as 
> "Oh! you need a fast hard drive as I/O is critical..." ;-)

your IO will be over gigabit, so you don't need a fast HD (current single
disks average about 70 MB/s).

even for a 20-node cluster, I'd seriously consider getting IPMI
or at least controllable power.
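
even a trivial wrapper around ipmitool pays for itself the first time a node
wedges.  a sketch (hostnames and credentials here are invented; adjust to
however your BMCs are set up):

  # power-control a node's BMC with ipmitool; names/credentials are examples
  import subprocess, sys

  IPMI_USER = "admin"             # hypothetical BMC account
  IPMI_PASS = "secret"

  def power(node, action):        # action: "status", "on", "off", "cycle"
      cmd = ["ipmitool", "-H", node + "-ipmi",
             "-U", IPMI_USER, "-P", IPMI_PASS,
             "chassis", "power", action]
      return subprocess.call(cmd)

  if __name__ == "__main__":
      power(sys.argv[1], sys.argv[2])   # e.g. python power.py node07 cycle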


