bruce at staff.cs.usyd.edu.au
Tue Dec 5 09:40:24 PST 2000
Thanks for the reply!
Date: Tue, 5 Dec 2000 12:36:55 -0500 (EST)
From: Daniel Ridge <newt at scyld.com>
On Wed, 6 Dec 2000, Bruce Janson wrote:
> > Like you, I get grumpy about installing too, so I try not to do it
> > more than once. Ideally all of our compute servers would share
> > the same (network) file system. There are ways of doing this
> > now (typically via NFS) but they tend to be hand-crafted.
> > In particular, I notice that the recent Scyld distribution
> > assumes that files (libraries, if I remember rightly) will be
> > installed and available on the local computer.
> > Why do people want to install locally? (Scyld people in particular
> > are encouraged to reply.)
> While it is true that our (Scyld's) distribution places some files
> on target nodes, the total volume is pretty tiny (a couple of tens of
> megabytes for now, less in the future). These files, essentially
> all shared libraries, are placed on the nodes just as a cache and
> are not 'available' from most useful perspectives. They are 'available'
> for a remote application to 'dlopen()' or for certain other dynamic
> link operations.
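[A minimal illustration of the kind of call this node-local cache serves, using Python's ctypes as a stand-in for a C program's dlopen()/dlsym(); the library name "libm.so.6" is an illustrative glibc example, not something named in the thread:]

```python
# Sketch: what a remote process does when it dlopen()s a shared library
# that has been staged on the compute node.  ctypes.CDLL wraps dlopen(),
# and attribute access wraps dlsym().
import ctypes

libm = ctypes.CDLL("libm.so.6")        # roughly dlopen("libm.so.6", RTLD_NOW)
libm.cos.restype = ctypes.c_double     # declare the C signature of cos()
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))                   # prints 1.0
```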
Yes, but storing any files locally suggests that you don't trust the
kernel's network file system caching. Is that why? If so, in what
way does such caching fail?
> In addition to shared libraries, we also place a number of entries
> for '/dev' on the nodes.
Well, now that you mention /dev, why don't you use devfs to automatically
populate your nodes' /dev directories?
> setup script to only transfer libraries greater than 500K. I let the
> bproc system migrate other small libraries as I need them.
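[The size cutoff described above can be sketched as a small filter over a library directory; the 500K threshold is from the message, while the function name and the crude ".so" match are mine:]

```python
import os

def libs_to_prestage(directory, min_bytes=500 * 1024):
    """Return shared-library paths under `directory` that are at least
    `min_bytes` long; smaller ones are left for bproc to migrate on demand."""
    hits = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if ".so" in name:  # crude match for libfoo.so, libfoo.so.1, ...
                path = os.path.join(root, name)
                if os.path.getsize(path) >= min_bytes:
                    hits.append(path)
    return sorted(hits)
```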
Sounds like you don't use a network file system at all,
which in itself is an interesting decision.
Care to give some reasons?
More information about the Beowulf mailing list