Scratch partition...

Geraldo Pereira de Souza geraldo at cic.unb.br
Mon Dec 10 13:26:37 PST 2001


People,

The starting point is:

I'm new to Beowulf and I've decided to build a Beowulf machine based on Sterling's tutorial (see http://www.cacr.caltech.edu/beowulf ):
The tutorial is divided into:
1) Introduction to Beowulf
2) Planning the system: about the hardware and some information about partition configuration, etc.
3) Software installation:
This is the main point of my doubts.
I've cut and pasted item 3 of the tutorial (see below):

/-------------------------------------------
To make the system as easy as possible to maintain, plan on making all compute nodes identical, i.e. same partition sizes, same software and same configuration files. This makes the software a whole lot easier to install and maintain. For partition sizes, I like to give 128 MB to root, 2X memory for swap partition for less than 128 MB RAM, 1X memory if you have 128 MB or more RAM. Some people would say, "Why have any swap space at all?" For most usage, swapping should not happen on the compute nodes, but accidents happen. For /usr partition it really depends on the number of packages you are planning to install, about 800 MB should go a long way for most systems.

I recommend installing at least the following, if you are installing 

RedHat or Mandrake Linux:

 Basic System Software, including networking software 
 Kernel Sources 
 C, C++, g77 compilers and libraries 
 X11 with development libraries 
 xntp or another time synchronizer 
 autofs 
 rsync 

To install these packages, please follow the standard instructions that come with the Linux distribution you choose to use. You want to try to install all you need, before you start the cloning process. There are a few files that you probably want to edit before cloning the system:


 /etc/hosts 
 /etc/hosts.equiv 
 /etc/shosts.equiv 
 /etc/fstab 
 /etc/pam.d/rlogin 
 /etc/ntp.conf 
 /etc/lilo.conf 
 /etc/exports 
 /etc/auto.master 
 /etc/auto.beowulf 

-------------------------------------------/
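For an 8-node cluster named like mine (node00 ... node07), I understand the /etc/hosts and /etc/hosts.equiv files would look something like this (the 192.168.1.x private addresses are only an illustration, not values from the tutorial):

```
# /etc/hosts -- identical on every node (addresses are illustrative)
127.0.0.1      localhost
192.168.1.100  node00    # front-end
192.168.1.101  node01
192.168.1.102  node02
# ... and so on, up to:
192.168.1.107  node07

# /etc/hosts.equiv (and /etc/shosts.equiv) -- one node name per line
node00
node01
node02
# ... up to node07
```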
I've installed the first machine and labeled it node00. The clones will be named node01, node02, ..., node07 (my Beowulf will have 8 PCs).

Next step:

For the hosts file, you need to list all the names, with aliases, for each node. All nodes need to be listed in hosts.equiv and shosts.equiv to let users use rsh commands without creating their own .rhosts and .shosts files. In your /etc/pam.d/rlogin file, you want to make sure you do not require secure tty when logging into the nodes. To fix this, I just switch the order of the modules pam_securetty.so and pam_rhosts_auth.so, and end up with the following:

auth sufficient /lib/security/pam_rhosts_auth.so 
auth required /lib/security/pam_securetty.so 
auth required /lib/security/pam_pwdb.so shadow nullok 
auth required /lib/security/pam_nologin.so 
account required /lib/security/pam_pwdb.so 
password required /lib/security/pam_cracklib.so 
password required /lib/security/pam_pwdb.so shadow nullok use_authtok 
session required /lib/security/pam_pwdb.so 

When setting up ntp, have the front-end machine (could be node 0) sync up to the outside world and broadcast the time to the private network. The nodes will be configured to sync their time to the front-end machine. If you don't use a private network, you can have all machines sync up to any ntp server.

/-------------------------------------------------
I've done this...
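For the ntp setup quoted above, my understanding is that the two /etc/ntp.conf files would look roughly like this (the server name and broadcast address are placeholders, not values from the tutorial):

```
# /etc/ntp.conf on the front-end (node00)
server ntp.example.org        # placeholder: any reachable public NTP server
broadcast 192.168.1.255       # announce the time on the private network
driftfile /etc/ntp/drift

# /etc/ntp.conf on each compute node
broadcastclient               # take the time from the front-end's broadcasts
driftfile /etc/ntp/drift
```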

Next step:

We decided to export scratch partitions from all nodes and use autofs to mount these from any node upon request. Our exports file looks like:

/scratch n???.cacr.caltech.edu(rw,no_root_squash)

The no_root_squash option lets root on one node operate as root on any of the mounted file systems.

To set up autofs, add a line like:

/data        /etc/auto.beowulf         --timeout 600

to the file /etc/auto.master and create /etc/auto.beowulf with entries like:

n000 -fstype=nfs n000:/scratch
n001 -fstype=nfs n001:/scratch
n002 -fstype=nfs n002:/scratch
n003 -fstype=nfs n003:/scratch

On the nodes, you will also want to automatically mount home directories from the front-end machine, so you'll need to have:

home -fstype=nfs front-end-machine-name:/home

added to the /etc/auto.beowulf file as well.

On the front-end machine I create a directory named "local" on the large partition used for home directories, move everything from /usr/local to this directory and make a symbolic link to this directory from the /usr directory (e.g. ln -s /home/local /usr/local). For software packages like MPICH and PVM, I install them under /usr/local/mpich and /usr/local/pvm on the front-end machine. On the nodes, /usr/local will be a symbolic link to the NFS-mounted file system on the front-end machine. This way I only have to maintain the packages in one place. PVM can be found at http://www.epm.ornl.gov/pvm and MPICH can be found at http://www.mcs.anl.gov/mpi/mpich.

/------------------------
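Adapting the quoted exports and autofs examples to my node names (this is my own reading of the tutorial, so the exact wildcard may need adjusting):

```
# /etc/exports -- identical on every node
/scratch  node0?(rw,no_root_squash)

# /etc/auto.beowulf -- identical on every node; node00 is my front-end
node00  -fstype=nfs  node00:/scratch
node01  -fstype=nfs  node01:/scratch
node02  -fstype=nfs  node02:/scratch
# ... one line per node, up to node07
home    -fstype=nfs  node00:/home
```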
My partition configuration is:
HD total:  6 GB
swap:      400 MB
/ (root):  1.5 GB
/usr:      1 GB
/home:     3 GB

After configuring the files as described above, I installed the first clone and named it node01, as described in item 4 of the tutorial:

Item 4 of the tutorial:

4. The Cloning Process

There are different ways to clone your first system.  One way is to physically connect a drive to an already configured system and then copy the content from one drive to another. Another, maybe more elegant method, is to boot the nodes like diskless clients the first time and let setup scripts partition and copy the system from tar files or another already running system.

The first method is simpler, but it requires a lot of reboots and power shut downs. This is the only way to do it if you don't have a floppy drive in every system. When copying from one drive to another there are two simple ways of doing this:

1. Use dd on the entire drive. If your configured drive is hda and the unconfigured one is hdb, you would do "dd if=/dev/hda of=/dev/hdb". This takes care of the partitions as well as the boot sector and files.

I used the command dd to clone the system.
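The command I ran was essentially the one from the tutorial; a sketch of the whole clone step (assuming the configured disk is hda and the blank target is hdb, with a block size added to speed things up):

```
# copy the entire disk: partition table, boot sector and all files
dd if=/dev/hda of=/dev/hdb bs=1M

# before booting the clone on the network, give it its own identity:
# hostname and IP address (e.g. in /etc/hosts and, on Red Hat,
# /etc/sysconfig/network)
```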

Testing the systems:

After doing everything described, I expected my clone to run well. I tested it by creating a user named teste on node00 and node01. When I logged in to node01, I created a directory named sun (/home/geraldo/sun). As described above, shouldn't sun also have been created on node00? I'm doing something wrong... The exports aren't working...

That is the reason for my question about the scratch partition, etc. So now I'm studying the Linux configuration, exports, autofs, etc.
I think that after I understand NFS, in theory and in practice, I'll be able to complete my Beowulf system...
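While studying NFS and autofs, these are the kinds of checks I'm trying (standard commands from the nfs-utils and autofs packages; paths may differ between distributions):

```
# on node00: is the NFS server actually exporting /scratch?
/usr/sbin/exportfs            # or: showmount -e node00

# on node01: is the automounter running, and does the mount trigger?
/etc/rc.d/init.d/autofs status
ls /data/node00               # should force a mount of node00:/scratch
mount | grep nfs              # show what is really mounted
```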

So, any hint would be great... mainly from someone who has built a Beowulf based on Sterling's tutorial...

I'm using the online Linux documentation (Linux HOWTOs, Deja News, etc.), but some points are difficult to solve on my own...

Note: I'm sorry for my poor English...

Thanks,

Geraldo Pereira de Souza (geraldo at cic.unb.br)
Laboratório de Sistemas Integrados e Concorrentes - LAICO
Universidade de Brasília - UNB
Brazil.


----- Original Message ----- 
From: "Mark Hahn" <hahn at physics.mcmaster.ca>
To: "Carlos O'Donell Jr." <carlos at baldric.uwo.ca>
Cc: <beowulf at beowulf.org>
Sent: Monday, December 10, 2001 6:17 PM
Subject: Re: Scratch partition...


> > mount -t tmpfs /dev/shm /tmp
> > 
> > And in theory, the numbers I've seen tend to show that this is a bit
> > faster than having /tmp in the vfs cache path. Though I'm not an 
> > expert on the topic.
> 
> generally, filesystems try to be synchronous with metadata,
> that is, directory entries, inodes, etc.  so you'd be avoiding
> that traffic by using tmpfs.  file *contents* tend to be lazily
> written out by filesystems anyway, so there's probably 
> no savings there.  (depends on timing)
> 
> > However, I have noticed that on our compile box
> > (since gcc tends to toss its temp .o files in /tmp) it's a bit lighter
> > on the drive.
> 
> gcc -pipe avoids that traffic...
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf