[Beowulf] cluster on Mellanox Infiniband (fwd)

Franz Marini franz.marini at mi.infn.it
Mon Jun 21 02:29:56 PDT 2004


Hi,

On Fri, 18 Jun 2004, Mikhail Kuzminsky wrote:

> 1) Do we need to buy some additional software from Mellanox ?
> (like THCA-3 or HPC Gold CD Distrib etc)

You shouldn't have to.

> 2) Any information about potential problems in building and using
> this hardware/software.

> To be more exact, we also want to install MVAPICH (for MPI-1) or the
> new VMI 2.0 from NCSA for MPI work.
> For example, I believe VMI 2.0 requires THCA-3 and the HPC Gold CD for
> installation. But I don't know whether we will receive this software
> with the Mellanox cards or whether we have to buy it separately.

Hrm, no, VMI 2.0 requires neither THCA-3 nor the HPC Gold CD (whatever 
that is ;)). 

We have a small testbed cluster (6 dual-Xeon nodes, plus a server) with 
Mellanox Infiniband (switched, obviously). 

So far, it's been really good. We tested the network performance with 
SKaMPI4 ( http://liinwww.ira.uka.de/~skampi/ ); the results should be in 
the online database soon, if you want to check them out.
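
If you just want a quick sanity check of the fabric before digging into 
the full SKaMPI numbers, a tiny MPI ping-pong between two ranks on 
different nodes already tells you a lot. Something along these lines 
(just a rough sketch, not part of SKaMPI; the message size and repetition 
count are arbitrary):

  /* pingpong.c - minimal point-to-point latency/bandwidth probe,
   * roughly the kind of measurement SKaMPI automates in much more
   * detail.  Build with: mpicc -O2 pingpong.c -o pingpong
   * Run with 2 ranks, one per node. */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank, size, i;
      const int reps = 1000;
      const int nbytes = 1 << 20;          /* 1 MB messages */
      char *buf;
      double t0, t1, rtt;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size < 2) {
          if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
          MPI_Finalize();
          return 1;
      }

      buf = malloc(nbytes);

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < reps; i++) {
          if (rank == 0) {
              MPI_Send(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              MPI_Recv(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Send(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      if (rank == 0) {
          rtt = (t1 - t0) / reps;          /* round-trip time per exchange */
          printf("avg round trip: %.1f us, bandwidth ~ %.1f MB/s\n",
                 rtt * 1e6, 2.0 * nbytes / rtt / 1e6);
      }

      free(buf);
      MPI_Finalize();
      return 0;
  }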

Seeing that you are at the Institute of Organic Chemistry, I guess you're 
interested in running programs like Gromacs or CPMD. So far both of them 
have worked great on our cluster, as long as only one CPU per node is 
used: running two separate jobs (Gromacs and/or CPMD), one on each CPU of 
a node, gives good results, but running a single instance of either 
program across both CPUs of each node results in very poor scaling.
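
By the way, when comparing one process per node against two, it's worth 
double-checking where your MPI ranks actually ended up. A throwaway 
program like this (purely illustrative, name included) prints the node 
each rank runs on:

  /* whereami.c - print which node each MPI rank is running on,
   * handy for verifying a one-process-per-node placement.
   * Build with: mpicc whereami.c -o whereami */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Get_processor_name(name, &len);
      printf("rank %d runs on %s\n", rank, name);
      MPI_Finalize();
      return 0;
  }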

Have a good day,

Franz 


---------------------------------------------------------
Franz Marini
Sys Admin and Software Analyst,
Dept. of Physics, University of Milan, Italy.
email : franz.marini at mi.infn.it
--------------------------------------------------------- 



