HD cloning

Bruce Janson bruce at staff.cs.usyd.edu.au
Tue Dec 5 07:22:23 PST 2000


    ..
    Message-ID: <004601c05e3a$71f30540$4102a8c0 at richardr>
    From: "Richard Rogers" <rrogers at aa.net>
    To: <beowulf at beowulf.org>
    References: <Pine.LNX.4.30.0012041401170.10639-100000 at ganesh.phy.duke.edu>
    ..
    Date: Mon, 4 Dec 2000 13:37:39 -0800
    ..

    ----- Original Message -----
    From: "Robert G. Brown" <rgb at phy.duke.edu>
    > for a still regrettably brief overview of the process.  I'd also (as of
    > RH 6.2) recommend using mkkickstart on a single node that you set up by
    > hand to create the original node kickstart file.  Eventually I'm hoping
    
    I haven't been able to get the 6.X kickstarts to work without a video
    card present. I haven't tried with 7.X, but 5.X seemed to work. Probably
    not a big deal if you're only going to install once. I install often,
    and it makes me very grumpy.
    ..

Like you, I find that installing makes me grumpy, so I try not to
do it more than once.  Ideally all of our compute servers would
share the same (network) file system.  There are ways of doing
this now (typically via NFS), but they tend to be hand-crafted
and unpopular.
In particular, I notice that the recent Scyld distribution
assumes that files (libraries if I remember rightly) will be
installed and available on the local computer.
Why do people want to install locally?  (Scyld people in particular
are encouraged to reply.)
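To make the "shared file system" idea concrete, here is roughly what
one of those hand-crafted NFS setups looks like.  (The host name
"master", the wildcard "node*", and the choice of sharing /usr are
illustrative assumptions on my part, not any distribution's
convention.)

```
# On the server (/etc/exports): export /usr read-only to the nodes.
/usr    node*(ro)

# On each compute node (/etc/fstab): mount the server's /usr over NFS,
# so the bulk of the distribution lives in exactly one place.
master:/usr    /usr    nfs    ro,hard,intr    0 0
```

The nodes then need only a small local (or RAM) root holding /etc and
the early-boot pieces; everything under /usr is installed once, on the
server.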

There is also a problem related to installing, which we might call
"updating": for example, I am running RedHat 6.2 now, and I wish
to rip it out and install RedHat 7.0, but I also want to preserve
all(?) of my local changes.
Let's make the simplifying assumption that whenever I install a
RedHat distribution I always install everything (it's less than
2GB, which is hardly worth worrying about, particularly when
shared amongst a bunch of computers, and there's a RedHat
installation menu option that makes selecting "Everything"
straightforward).
Anyone solved this updating problem in the context of a single
file system shared by multiple computers?
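The crudest approach I can imagine is to snapshot the trees that hold
local changes before the wipe, and lay them back down afterwards.
Below is a sketch of that idea, not a tested procedure: the paths
/export/root and /export/backup are assumptions (stand-ins for
wherever the shared root actually lives), and the "merge" step is
hand-waved — in reality you would reconcile config files by hand
rather than blindly overwrite the new distribution's copies.

```shell
#!/bin/sh
# Sketch: preserve local changes across a distribution reinstall
# on a shared root.  Paths are illustrative, not standard.
set -e

ROOT=/export/root          # the shared root exported to the nodes
BACKUP=/export/backup      # somewhere outside $ROOT

# For this demonstration, stand in temporary directories instead:
ROOT=$(mktemp -d); BACKUP=$(mktemp -d)
mkdir -p "$ROOT/etc" "$ROOT/usr/local"
echo "cluster tweak" > "$ROOT/etc/motd"

# 1. Before the reinstall: snapshot the trees holding local changes.
tar -C "$ROOT" -czf "$BACKUP/local.tar.gz" etc usr/local

# 2. ... wipe $ROOT and install the new distribution into it ...
rm -rf "$ROOT"; mkdir -p "$ROOT"

# 3. After the reinstall: restore the snapshot over the new files
#    (merging config files by hand where the formats have changed).
tar -C "$ROOT" -xzf "$BACKUP/local.tar.gz"
cat "$ROOT/etc/motd"      # -> cluster tweak
```

This answers "where did my changes go?" but not the harder question of
merging them with a new release's config formats, which is presumably
why nobody considers the problem solved.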

And to you, Robert Brown: please speak for yourself when you say
(in your message of Sun, 3 Dec 2000 16:03:39 -0500 (EST)):

	A beowulf is a high performance computing
	cluster, not a data or web server cluster.

This kind of supercomputer elitism, this fascination with fine-
grained parallelism and linear speed-up has held back the progress
of single system image multicomputing for long enough.  I disagree
with your claim, so much so that I wouldn't even fight for your
right to make it (well, not with much conviction).

Regards,
bruce.
