Questions and Sanity Check

Daniel Ridge newt at scyld.com
Tue Feb 27 11:45:30 PST 2001


On Tue, 27 Feb 2001, Keith Underwood wrote:

> I would use larger hard drives.  The incremental cost from 10GB to 30GB
> should be pretty small and you may one day appreciate that space if you
> use something like PVFS.  I would also consider a gigabit uplink to the
> head node if you are going to use Scyld.  It drastically improved our
> cluster booting time to have a faster link to the head.
> 
> 				Keith

For people who are spending a lot of time booting their Scyld slave nodes
-- I would suggest trimming the library list.

This is the list of shared libraries which the nodes cache for improved
runtime migration performance. These libraries are transferred over to
the nodes at node boot time. 

You can see which libraries are currently in the list via:
/usr/sbin/vmadlib -l
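
For instance, assuming vmadlib -l prints one library path per line (an
assumption on my part; check your output), you can total up how much data
the nodes pull at boot with something like:

/usr/sbin/vmadlib -l | xargs du -ch | tail -1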

You may find that this totals 40MB or more. If you wish to trim the
library set that the nodes load, you can do so in /etc/beowulf/config.

The default library line is:
libraries /lib /usr/lib

My VMware slave nodes use:
libraries /lib/libc-2.1.3.so /usr/lib/libmpi.so

And there are a large number of reasonable choices between these two
extremes.
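
For example, a middle-of-the-road line might cache just the C library,
the math library, and MPI (the exact filenames here are illustrative --
match them to what is actually installed on your head node):

libraries /lib/libc-2.1.3.so /lib/libm-2.1.3.so /usr/lib/libmpi.so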

If you are low on memory and you want to cache just the bulkiest libraries
and eject megabytes and megabytes of Gnome, you can also set the
MINLIBSIZE line in /etc/rc.d/init.d/beowulf to something like 500. This
will cause only libraries larger than 500k to be cached.
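
That is, a line along these lines in /etc/rc.d/init.d/beowulf (a sketch;
check how the init script on your system actually defines the variable):

MINLIBSIZE=500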

Regards,
	Dan Ridge
	Scyld Computing Corporation 
