[Beowulf] Clusters and Distro Lifespans

Robert G. Brown rgb at phy.duke.edu
Wed Jul 19 09:31:31 PDT 2006


On Wed, 19 Jul 2006, Stu Midgley wrote:

> We also have our install process configured to allow booting different
> distros/images, which is useful to boot diagnostic cd images etc.

Good point, and one I'd forgotten to mention.  It is really lovely to
keep PXE boot images pointed at tools like memtest86, at a freedos
image that can e.g. flash a BIOS or do other things that expect an
environment able to execute an MS .exe, or at a diskless configuration
for repair purposes (or to bring up a node diskless while waiting for a
replacement disk).  EVEN if you DO leave your cluster running a
semi-commercial setup installed by a vendor or consultant, having a
local PXE boot server is a lovely idea, and one that gives you upward
mobility towards the DIY cluster as you master running linux.
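
For concreteness, here is roughly what such a multi-image setup might
look like under a stock syslinux/pxelinux tftp server (the file names,
label names and addresses below are hypothetical, and assume the
relevant images have already been dropped into the tftp root):

    # /tftpboot/pxelinux.cfg/default -- hypothetical example
    DEFAULT local
    PROMPT 1
    TIMEOUT 100

    # normal operation: fall through to the local disk
    LABEL local
        LOCALBOOT 0

    # memory tester (bootable directly as a kernel image)
    LABEL memtest
        KERNEL memtest86

    # freedos floppy image (BIOS flashing and other .exe tools), via memdisk
    LABEL freedos
        KERNEL memdisk
        APPEND initrd=freedos.img

    # diskless repair/standby configuration with an NFS root
    LABEL diskless
        KERNEL vmlinuz-diskless
        APPEND initrd=initrd-diskless.img root=/dev/nfs nfsroot=10.0.0.1:/export/diskless ip=dhcp

Switching a node from one image to another is then just a matter of
editing (or symlinking) its pxelinux.cfg entry and rebooting it.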

Honestly, for MOST work people do with clusters, running pretty much the
(PXE-installable) distro of your choice will almost certainly work.  I
tend to use FC-even or CentOS (a.k.a. FC-even-frozen) on cluster nodes
simply because we have long since gotten to where we can make RH-derived
distributions jump through hoops.  With Seth Vidal in charge of the core
mirrors and repos, Duke is "Repo World" not just to campus but to much
of the world.  Heck, I PXE-boot and kickstart install my systems at
HOME using mirrors of the Duke repos, and if I ever bothered to figure
out Icon's toolset for customizing kickstart boots per system (using
some very clever CGI scripts and a bit of XML) it would make those
installs even easier than they are now.
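
The generic version of that trick (a sketch of the usual approach, not
Icon's actual toolset) is to point the installer's ks= boot argument at
a URL instead of a static file; the URL can just as easily be a CGI
script that keys off the requesting host and hands back a customized
kickstart.  Something like:

    # hypothetical pxelinux entry for a kickstart (re)install
    LABEL install
        KERNEL vmlinuz-install
        APPEND initrd=initrd-install.img ks=http://install.example.org/cgi-bin/ks.cgi ksdevice=eth0

where the kernel and initrd come from the images/pxeboot directory of
whichever FC/CentOS tree you are installing.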

>> > iii) Do people regularly upgrade their clusters in relation to
>> > distros?  I guess this is like asking how long is a piece of string
>> > because everyone's needs are different.
>> 
>> Cluster upgrades are rare unless you are missing functionality or
>> something is broken.  That is of course one opinion, some here do
>> upgrades nightly.  From a purely production oriented viewpoint, where
>> downtime == lost money for our customers, we usually advise against that.
>
> I think rare is a strong word.  Infrequent may be better.  We
> regularly apply patches and upgrades to the front end nodes (globally
> connected) and infrequently (~ every 6 months) upgrade all the cluster
> nodes in the rolling fashion mentioned above.
>
> You can even do kernel upgrades to the file servers/front end nodes
> (which require a reboot) without killing or disrupting jobs.  Having
> complete control has a lot of benefits.

Again I would agree with this.  In fact, I'd go so far as to say that
for the most part one can fairly safely permit a cluster node to just
run the nightly yum update off of the standard updates repo chain, with
only the kernel and maybe particular libraries excepted (and yes, it is
easy to except them with yum).
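
A minimal sketch of that arrangement (kernel* is the real point here;
the library glob and file names are just placeholders):

    # /etc/yum.conf (fragment): hold back the kernel and any libraries
    # you don't want changing out from under running jobs
    [main]
    exclude=kernel* somelib*

plus a trivial nightly cron job, e.g. /etc/cron.daily/yum-update:

    #!/bin/sh
    # pull everything else from the standard updates repo chain each night
    /usr/bin/yum -y update >> /var/log/yum-nightly.log 2>&1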

Alternatively, it is extremely easy to make cluster updates a two-tier
process.  Maintain a small subset of the nodes on the standard update
stream and use them to test updates in situ, in a running fashion.  If
an update doesn't crash anything within a week or so, mirror the
week-old update RPMs (only) into an update repo for the rest of the
cluster, waiting, as described above, until a convenient time to
actually reboot the nodes if a kernel update comes through.  That way
you rarely, if ever, have to roll back an update across the whole
cluster.
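
A sketch of the promotion step, assuming the testing nodes pull from
one local mirror directory and the production nodes from another (both
paths hypothetical):

    #!/bin/sh
    # Promote update RPMs that have sat in the testing repo for at least
    # a week into the repo the production nodes point at, then rebuild
    # the repo metadata so yum on those nodes can see them.
    TESTING=/var/ftp/pub/updates-testing
    STABLE=/var/ftp/pub/updates-stable

    find "$TESTING" -name '*.rpm' -mtime +7 -exec cp -u {} "$STABLE" \;
    createrepo "$STABLE"

Run that from cron once a day and the production repo always trails the
testing repo by a week or so.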

Of course, whether or not this will work depends on your cluster -- if
it runs fine-grained synchronous jobs that use the entire cluster and
that die if any node goes down, you don't gain much by testing updates
on only a couple of nodes, since their failure brings down the whole
computation anyway.  So sure, YMMV; use a bit of sense.

On the whole, though, updates are there for a reason and STABILIZE
systems more often than they DESTABILIZE them.

    rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu




