Best file system strategy

Carey F. Cox cfcox at coes.latech.edu
Fri Nov 10 08:57:40 PST 2000


Hello,

We just received our 8-node cluster and I am trying to set everything up. 

	Each node has:

			2 x 550 MHz Pentium II CPUs
			256 MB RAM
			1 x 18 GB U2 SCSI disk

Node 1 serves as the head node. 

I want to get the group's knowledgeable advice on how to set up the 
user space (/home). There are good-sized drives on each node, but how 
should I access them? I already have a roughly 12 GB /home partition on 
each node. Here are the options that I am looking at...

	1) NFS mount node#/home to each node as /home#. Users 
	   would be assigned different base home directories. 
	   (A rough sketch of what I have in mind follows this list.)
	2) Use PVFS on the /home space. I am not sure how to set 
	   up /home here.
	3) If I can find the $$, purchase a new /home disk for the 
	   head node, and use PVFS on the old /home on each node to 
	   create a large scratch space for running computations.
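
If I understand option 1 correctly, the setup would look roughly like 
this on every node (the hostnames node1..node8 and the mount points are 
just placeholders to illustrate the idea):

	# /etc/exports on node2 (and similarly on each other node)
	/home    node*(rw)

	# /etc/fstab on every node, one line per remote node's /home
	node2:/home   /home2   nfs   rw,hard,intr   0 0
	node3:/home   /home3   nfs   rw,hard,intr   0 0
	# ... and so on through node8

A user whose account lives on node3 would then have /home3/username as 
their home directory everywhere on the cluster.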

Option 1 would entail quite a heavy communication load, I would think.
As to option 2, I am concerned about the lack of redundancy in PVFS. If 
I understand correctly, were the system to lose a node, the file system 
would be lost. I understand that I can reboot the node, but what if I 
lose a drive on a node? Is all lost in that case? 
Option 3 looks the best, but it means purchasing a new disk. 

What would be the best way to set up home directory space as well as 
some kind of scratch space? I should note that there is already a /share 
directory that is NFS-mounted on all of the nodes.

I might add that I did not spec this system out; I just inherited it. 
I appreciate any and all advice that you may provide. 

Thanks,

Carey

-- 
 ======================================================================
<>  Carey F. Cox, PhD          |  PHONE: (318) 257-3770               <>
<>  Assistant Professor        |  FAX:   (318) 257-2306               <>
<>  Dept. of Mech. Eng.        |  EMAIL: cfcox at coes.latech.edu        <>
<>  Louisiana Tech University  |  WEB:   http://www.latech.edu/~cfcox <>
 ======================================================================




