It's Mosix what I need! (more help)

Andreas Boklund andreas at amy.udd.htu.se
Sat May 20 01:10:02 PDT 2000


I might have missed something, but MOSIX isn't always a solution. You
will have to (or at least should) test-run your programs/applications on
a cluster using MOSIX before you decide to use it in a "production"
situation. MOSIX will cause some programs to hang and others to take
longer than if they were run on a single machine (in my experience, I/O
intensive programs). My cluster is a 20-node PIII 500 that we run "flow"
simulations on, and that I played around with a bit for a thesis. Here
are a few observations.

Compiling one (large) program would not make MOSIX distribute anything
(maybe one or two files out of hundreds), so it hardly gave any advantage
at all. This was mainly because the processor could crunch the code faster
than the disks (and maybe the memory) could handle the I/O, so starting
several processes here did not gain anything. This particular problem can
be solved using pvmmake, though.
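
To give an idea of what I mean, this is roughly what I tried (the -j
value and the monitor invocation are just how I remember doing it on my
setup, adjust for yours):

    # Kick off several compiler processes in parallel and see whether
    # MOSIX actually migrates any of them (for me it mostly did not,
    # because the jobs were I/O bound on the master's disks).
    make -j 4

    # In another shell, the MOSIX load monitor shows the load on each
    # node, so you can tell at a glance whether anything moved.
    mon

If nothing migrates you know the bottleneck is I/O rather than CPU, and
something like pvmmake (or faster disks) is the way around it.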


My point is this: find X machines like the ones you want to buy and set
them up; it will be a good learning experience. Try your code. If it works
like you had hoped, good! If not, rethink it and start over.

I'm saying this because I noticed that I would (on paper) have gained a
30-40% performance increase if I had spent my money in another way than
I did.


I hope that this stuff helped at least a little; these are my personal
views based upon my experience :)

Good luck, and don't hesitate to ask.


//Andreas

*********************************************************
* Administrator of Amy(studentserver) and Sfinx(Iris23) *
*                                                       *
*   Voice: 070-7294401                                  *
*   ICQ: 12030399                                       *
*   Email: andreas at shtu.htu.se, boklund at linux.nu        *
*                                                       *
*   That is how you find me, How do -I- find you ?      *
*********************************************************

> * 1 		SCSI card
> * x 		SCSI HD's
If you are going to run MOSIX I really, really recommend the use of SCSI
drives in the master node, since that is where all writing will take
place.


> Questions:
> ----------
> 
> * Is it recommendable to clone the node system disks and never access it 
> directly but by telnet?
I would configure rsh or ssh instead; they are much neater (in my
experience rsh is faster, but that's just a feeling).
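
For what it's worth, this is roughly what setting up password-less ssh
from the master to a node looks like (the hostnames and paths are just
placeholders, adjust to your own setup):

    # On the master: generate a key pair without a passphrase.
    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

    # Append the public key to the node's authorized_keys so logins
    # stop prompting for a password ("node1" is a made-up hostname).
    cat ~/.ssh/id_rsa.pub | ssh node1 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

    # For rsh it is even simpler: a ~/.rhosts file on each node that
    # lists the master and the user, one "host user" pair per line.
    echo "master andreas" >> ~/.rhosts

Either way the point is that scripts can reach every node without typing
passwords, which telnet won't give you.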

> * Is AMD ok for this kind of system?
Depends on what your code prefers; there are no drawbacks as far as I know.

> * Perhaps it's a stupid question but ... 8-S ... Do all the nodes need 
> video cards for the X applications run ok?
Nope. Since you won't run X on them, the cards won't have an impact on that.

> * Is it a good idea to centralize all the HD's in the master node? Could 
> it produce conflicts in the access and so reduce substantially the 
> access time? Is it better to distribute the HD's in the nodes and give 
> access via NFS?
MOSIX only writes to the master node's filesystem. I can't see any
advantage in writing to any of the nodes' filesystems, but there might be
one. The main drawback of distributing the disks is that you will slow
your applications down (unless you have a LOT of writes and a slow disk,
but in that case I would recommend buying another SCSI disk for the
master instead). NFS would also create a lot of network traffic, which
would slow down process migration times.
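
If you centralize the disks, the nodes still need to mount what they use
from the master, but the export list stays short. Something along these
lines (the paths and hostnames are only examples) is all it takes in
/etc/exports on the master:

    # /etc/exports on the master (example paths and hostnames)
    /home    node1(rw) node2(rw)
    /usr     node1(ro) node2(ro)

Run exportfs -a (or just restart the NFS daemons) after editing it.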

> * If it's better to centralize HD's, is there problems whith diskless 
> nodes. What is better, to boot from floppy or from NFS?
I have no real experience with anything but floppy-booting diskless nodes
and making them mount their filesystems via NFS; it is just a matter of
writing a kernel to a floppy and nothing more.
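
The floppy part really is just this (the kernel image path and device
names are examples from my setup, and the kernel needs NFS-root support
built in):

    # Copy the compiled kernel straight onto the floppy.
    dd if=/usr/src/linux/arch/i386/boot/bzImage of=/dev/fd0 bs=8192

    # If the root device isn't already compiled in, rdev can point the
    # image at the NFS-root pseudo device (major 0, minor 255); you may
    # need to create that device node first with mknod.
    rdev /dev/fd0 /dev/nfs

The node then boots the kernel from the floppy and mounts everything else
over NFS from the master.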

> * Is the relation improvement/cost ok between 100Mb/1Gb Eth. card?
Again, it depends on how much bandwidth the applications you want to run
need, and on how many nodes you have. I had the opportunity of using a
"pretty" low-bandwidth program, which made it possible for me to fit dual
100Mb cards in the master; after that my application didn't fill the
network up anymore.
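
The dual-card setup itself is nothing fancy: the second card just sits on
its own subnet with half of the nodes pointed at it (the addresses below
are made up):

    # Bring up the second 100Mb card on the master on a separate subnet.
    ifconfig eth1 192.168.2.1 netmask 255.255.255.0 up

Half of the nodes then talk to 192.168.2.1 instead of the eth0 address,
so the traffic gets split over the two cards.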







