[Beowulf] hpl size problems

Joe Landman landman at scalableinformatics.com
Wed Sep 28 16:18:16 PDT 2005



Donald Becker wrote:
> On Wed, 28 Sep 2005, Robert G. Brown wrote:
>> laytonjb at charter.net writes:
>>>>> At most you waste a few seconds and a few
>>>>> hundred megabytes of disk by leaving them in, 
>>>> Latter yes (more like GB), former no.  Trimming the fat from the SuSE 
>>>> cluster install got it from over an hour down to about 8 minutes with 
>>>> everything, per node.
> 
> This is more typical: a distribution that comes on many CD-Rs isn't going 
> to be easily stripped down to something that can be loaded in a few seconds.
> A stripped-down install will take on the order of 5 minutes.

That's what I have now, and you are quite right: trimming down an
install is really hard.  Someone had a concept for a "modular" Linux at
one point.  I really like the idea of a base core you add to, rather
than a large bloated load you pull from.  Sadly, most distributions
these days ship everything, including the kitchen sink, so it is hard
to "de-bloat" them.

Worse, when closed-source vendors ship a product, they qualify it
against specific OSes and will not officially support others.  Usually
all we get is a "tell us if it works" stance.

Somehow this seems wrong to me.  I would like there to be a standards
body (the LSB, say) that declares "Linux LSB vX.Y is defined thusly,"
and then have vendors qualify against that, regardless of the distro.
That way, if an application is supported on LSB vX.Y and distro Z is
compliant with LSB vX.Y, it ought to be easy to pick the appropriate
distro for the problem at hand.  This is, unfortunately, not likely to
happen.

>>>> I think I hit the point of diminishing returns.  I don't mind waiting up 
>>>> to about 10 minutes for a reload; beyond that, I mind.
> 
> This really isn't scalable: even 5 minutes per machine has a big impact on 
> how you consider operating a cluster of dozens or hundreds of machines.

Ten minutes per machine, at 30 machines per install server, is not so
terrible with a basic server config: the nodes install in parallel, so
the wall-clock time stays near the per-node time as long as the server
can feed them.  If you work a little at it, you can easily support 120
machines per server, or, working a little harder, 240 machines per
server.  This is not an issue at the most common cluster sizes.
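
For what it's worth, those numbers fall out of a simple bandwidth
budget.  A rough sketch (the image size, the usable link speed, and the
window are assumptions I picked for illustration, not measurements from
our installs):

#!/usr/bin/env python
# Back-of-envelope: how many nodes one install server can feed in a
# given wall-clock window, if the server's network pipe is the limit.
# All three inputs are illustrative assumptions, not measurements.

IMAGE_GB       = 2.0     # stripped-down install image per node (assumed)
USABLE_MBIT_S  = 800.0   # usable throughput of the server's GigE link (assumed)
WINDOW_MIN     = 10.0    # wall-clock window we are willing to wait (assumed)

image_mbit    = IMAGE_GB * 8 * 1024            # GB -> megabits
secs_per_node = image_mbit / USABLE_MBIT_S     # wire time per node, serialized
nodes         = (WINDOW_MIN * 60.0) / secs_per_node

print("%.0f s of wire time per node" % secs_per_node)
print("~%d nodes per server in a %.0f minute window" % (nodes, WINDOW_MIN))

Under those assumptions the server's pipe is the limit, not the
per-node time, which is why a little work on the server side (a fatter
pipe, more spindles, or multicast imaging) is what buys the 120- and
240-node numbers.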

The diskless approach with a thin kernel is even better, at zero
minutes per machine, though it brings its own issues to worry about.
Everything is an engineering design tradeoff.

[...]

> There is a good reason for updating vulnerable daemons and services even 
> if they are not currently enabled.  What if they get turned on later -- "gee, 
> I'll just turn on the web server so that this new admin tool works 
> through the firewall".

heh...

>> I agree, guys, I agree.  My point wasn't that trimming cluster
>> configurations relative to workstation or server configurations is a bad
>> thing -- it is not, and indeed one would wish that eventually e.g. FC,
>> RHEL, Centos, Caosity etc will all have a canned "cluster configuration"
>> in their installers to join server and workstation, or that somebody
>> will put up a website with a generic "cluster node" kickstart fragment
>> containing a "reasonable" set of included groups and packages for people
>> to use as a baseline that leaves most of the crap out.

If there is interest, I can host/offer up something like this.  I have
a reasonably good SuSE baseline autoyast file, and I would love help in
trimming it further.  I would like to see similar things for
RHEL/CentOS.
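
As a starting point for that kind of trimming, even a simple diff of
what a node actually has installed against the baseline list helps
spot candidates.  A rough sketch (the baseline file name and its
one-package-name-per-line format are assumptions of mine; it only
shells out to rpm, which SuSE and RHEL/CentOS both have):

#!/usr/bin/env python
# Compare the packages actually installed on a node against a baseline
# list (one package name per line) to spot trimming candidates.
# The file name "baseline.txt" and its format are assumptions.
import os, sys

def installed_packages():
    # rpm is present on SuSE and RHEL/CentOS alike
    pipe = os.popen("rpm -qa --qf '%{NAME}\\n'")
    names = set(line.strip() for line in pipe if line.strip())
    pipe.close()
    return names

def baseline_packages(path):
    return set(line.strip() for line in open(path)
               if line.strip() and not line.startswith("#"))

path = "baseline.txt"
if len(sys.argv) > 1:
    path = sys.argv[1]

installed = installed_packages()
baseline  = baseline_packages(path)

print("Installed but not in the baseline (trim candidates?):")
for name in sorted(installed - baseline):
    print("  " + name)

print("In the baseline but not installed (stale baseline entry?):")
for name in sorted(baseline - installed):
    print("  " + name)

A kickstart or autoyast package list would slot into
baseline_packages() with a little massaging.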

> We went down this path years ago.  It doesn't take long to find the 
> problem with stripping down full installations to make minimal compute node 
> installs: your guess at the minimal package set isn't correct.  You might 
> not think that you need the X Window system on compute nodes.  But your 
> MPI implementation likely requires the X libraries, and perhaps a few 
> interpreters, and the related libraries, and some extra configuration 
> tools for those, and...

Deja vu.  The dependency radius of the packages is huge, which makes
support and footprint reduction hard.  The support part is why yum was
developed.  I would bet that with some effort you could "yum remove"
packages until you found a good minimal config, rather than
re-installing over and over again.  Moreover, now that we have a
working yum for SuSE, we can do the same there.
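
And on Don's point that your guess at the minimal set is never right:
you can ask rpm for the closure of a candidate set up front, instead of
discovering it one failed reinstall at a time.  A rough sketch of the
idea, using nothing but rpm queries against an installed node (the seed
list is a placeholder, and rpmlib() pseudo-dependencies are skipped):

#!/usr/bin/env python
# Walk the dependency closure of a candidate "minimal" package set by
# asking rpm what each package requires and which installed package
# provides each requirement.  The seed list is a placeholder.
import os

SEED = ["bash", "openssh-server", "glibc"]    # candidate minimal set (assumed)

def requires(pkg):
    # capabilities the package requires: sonames, files, other packages
    pipe = os.popen("rpm -qR %s 2>/dev/null" % pkg)
    caps = [line.split()[0] for line in pipe if line.strip()]
    pipe.close()
    return caps

def provider(cap):
    # installed package providing a capability, or None
    if cap.startswith("rpmlib("):
        return None                # internal rpm feature, not a real package
    pipe = os.popen("rpm -q --whatprovides --qf '%%{NAME}\\n' '%s' 2>/dev/null" % cap)
    name = pipe.readline().strip()
    pipe.close()
    if not name or name.startswith("no package"):
        return None
    return name

closure, todo = set(), list(SEED)
while todo:
    pkg = todo.pop()
    if pkg in closure:
        continue
    closure.add(pkg)
    for cap in requires(pkg):
        dep = provider(cap)
        if dep and dep not in closure:
            todo.append(dep)

print("%d seed packages pull in %d packages total:" % (len(SEED), len(closure)))
for name in sorted(closure):
    print("  " + name)

It won't match what the installer's resolver computes exactly, but it
shows you the dependency radius before you burn another install cycle.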

> Yes, there are a number of labor-intensive ways to rebuild and repackage 
> to break these dependencies.  But now you have a unique installation that 
> is a pain to update.  There is no synergy here -- workstation-oriented
> packages don't have the same motivations that compute cluster or server 
> people have.

Yup.  This is painful, and it is annoying.  See the point above about
closed-source vendors qualifying against distributions rather than
standards.  It means that closed-source apps may not be supported on
your favorite non-qualified distribution -- which isn't an issue until
your work depends on running said closed-source applications...

> 
>>   a) In most cases the crap doesn't/won't affect performance of
>> CPU/memory/disk bound HPC tasks.
> 
> Except for additional cruft automatically installed and 
> started.  Your compute nodes might not ever need 'xfs' (the X font server, 
> not the file system), but it will be started anyway.

The cruft can add to the background noise, or open ports, or poll 
(PCMCIA/hotplug), or ...
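
The open-ports part, at least, is easy to audit from the node itself.
A small sketch that just reads /proc/net/tcp (and tcp6 where present)
and prints whatever is sitting in LISTEN; nothing in it is
distro-specific:

#!/usr/bin/env python
# List TCP ports in LISTEN state by reading /proc/net/tcp (and tcp6),
# a quick way to see what the "cruft" has left open on a compute node.
import os, socket

def listening_ports(path):
    ports = set()
    if not os.path.exists(path):
        return ports
    f = open(path)
    f.readline()                      # skip the header line
    for line in f:
        fields = line.split()
        local, state = fields[1], fields[3]
        if state == "0A":             # 0A == TCP_LISTEN
            ports.add(int(local.split(":")[1], 16))
    f.close()
    return ports

ports = listening_ports("/proc/net/tcp") | listening_ports("/proc/net/tcp6")
for port in sorted(ports):
    try:
        name = socket.getservbyport(port, "tcp")
    except socket.error:
        name = "?"
    print("  %5d  %s" % (port, name))

Run it on a freshly installed node and you usually find a few
surprises worth turning off.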


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615


