CLIC = Mandrake Cluster Distribution

Richard C Ferri rcferri at us.ibm.com
Tue Sep 24 05:39:36 PDT 2002



Erwan,
      Not to start a flame war and divert attention from the real purpose
of this forum, which is discussing beowulf problems and not blatant self
promotion, I want to comment on your postings.

      CLIC appears to be YAB1S (Yet Another Beowulf 1 Solution).   The
woods are full of Beowulf 1 solutions that promise turnkey setup and
installation (reference OSCAR, reference NPACI Rocks, and many others).
There are also many cloners that work with the golden-client method
(reference SystemImager, which has evolved into SIS, the System
Installation Suite).  While "several scientists" may have used CLIC and
liked it, OSCAR and Rocks have been around since mid-2000 and have thriving
open source communities.  OSCAR has over 70,000 downloads to date as well.

      The real point that Don Becker was making is that the state of the
art evolved past Beowulf 1 back in mid-2000, to a Beo 2 model that doesn't
rely on a full install and therefore isn't subject to version skew -- nodes
getting out of sync over time.  The Beo 2 model also provides single system
image with regard to process space, and methods to manage remote processes
from a central point of control.
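For readers unfamiliar with the Beo 2 model being referenced, the commands
below are a rough illustration of the central-point-of-control style, based
on my recollection of Scyld's bproc tooling; the exact node ranges and
options are assumptions and may differ by version:

```shell
# Compute nodes carry no full install; remote processes appear in the
# head node's single process space and are managed from there.

bpstat                           # list compute nodes and their status
bpsh 0 uname -r                  # run a command on node 0 from the head node
bpsh 0-15 killall -TERM myjob    # signal a job across nodes 0-15 centrally
```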

      My personal opinion (and not that of my employer) is that it's a
shame Mandrake didn't work more closely with the open source community to
leverage the work that's already been done to provide a turnkey Beowulf 1
solution -- CLIC merely further fragments the Beo 1 space.

These have been the opinions of:

Richard Ferri
IBM Linux Technology Center
rcferri at us.ibm.com

Erwan Velu <erwan at mandrakesoft.com>@beowulf.org on 09/24/2002 06:44:08 AM

Sent by:    beowulf-admin at beowulf.org


To:    Donald Becker <becker at scyld.com>
cc:    Beowulf List <beowulf at beowulf.org>
Subject:    Re: CLIC = Mandrake Cluster Distribution



On Tue 24/09/2002 at 01:07, Donald Becker wrote:
> This cluster distribution still has the problem common to many other
> old-style cluster systems: by cloning full installations it has
> significant long-term maintenance and consistency problems.

This distribution is just at the end of its first development stage, but
it already features urpmi with a "parallel installation" feature.
For those who don't know urpmi, there is a Slashdot poll, "apt-get vs
urpmi":
http://ask.slashdot.org/article.pl?sid=01/06/28/0611240&mode=thread&tid=147
Urpmi sources can be local, NFS, HTTP, FTP, SSH & rsync.

The parallel installation feature of urpmi allows users to update their
nodes from one central server. The server asks each node which packages it
needs, uploads them to the nodes using a parallel copy (ka-tools:
http://ka-tools.sourceforge.net/), and the nodes then update their systems
from the local packages.
Maintenance of the cluster is therefore really easy: just add RPMs on the
server and deploy them to all nodes in one single action.
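As a sketch of that workflow (the medium path and the parallel alias name
"cluster" are my own placeholders, not CLIC's actual configuration; the
alias would be defined in urpmi's parallel configuration):

```shell
# On the central server: register the directory of new RPMs as a medium.
urpmi.addmedia cluster-updates file:///var/install/rpms

# Deploy a package to every node of the "cluster" parallel alias:
# urpmi queries each node for what it needs, pushes the packages with
# ka-tools' parallel copy, and each node then installs them locally.
urpmi --parallel cluster some-package
```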

The cloning process uses the same technology. One node is designated the
"golden node" and runs the duplication server; the other nodes boot (using
PXE) into the client side of the application.
When all nodes are present (booted), the duplication starts using
ka-tools.
We duplicate nodes at 10 MB/sec on a Fast Ethernet switch; anywhere from 1
to 200 nodes are duplicated in less than 2-3 minutes!
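For the PXE step above, a network boot of this kind typically relies on
DHCP entries like the following. This is a minimal ISC dhcpd sketch of my
own, not CLIC's actual configuration; the addresses and bootloader name
are assumptions:

```conf
# Point PXE clients at the golden node's TFTP server.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.10 192.168.1.210;   # addresses for the compute nodes
  next-server 192.168.1.1;            # TFTP server (the golden node)
  filename "pxelinux.0";              # PXE bootloader to fetch
}
```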

> Nor does it have any approach to scalability or system observability.
> [[ Just venting at the advertising nature of this posting: this appears
> to be just a me-too collection of unrelated tools, with no overall
> design or architecture.  Where is the innovation or contribution? ]]
CLIC is really a clustering distribution because the Linux distro and
the clustering tools are merged. One of the main features of this
development is to let users easily install the OS and the clustering
layer in one shot.
Several scientists (cluster testers) have tested it and consider it a
nearly turnkey solution that is very easy to set up, thanks to the
autoconfiguration tools (DNS, DHCP, PXE, MPICH, PVM, LAM, NIS, Ganglia,
remote commands).
The installation process is really fast: we need just one hour (once the
hardware setup is done) to install and deploy the cluster.

linuxely yours,
--
Erwan Velu
Linux Cluster Distribution Project Manager
MandrakeSoft
43 rue d'aboukir 75002 Paris
Phone Number : +33 (0) 1 40 41 17 94
Fax Number   : +33 (0) 1 40 41 92 00
Email        : erwan at mandrakesoft.com
Web site     : http://clic.mandrakesoft.com
OpenPGP key  : http://www.mandrakesecure.net/cks/



