[Beowulf] Opteron/Athlon Clustering
Robert G. Brown
rgb at phy.duke.edu
Tue Jun 8 15:00:16 PDT 2004
On Tue, 8 Jun 2004, Joe Landman wrote:
> Robert G. Brown wrote:
> >To put it another way, I think that an Opteron equipped to transparently
> >run 32 bit binaries is a superior configuration to one that isn't.
> Agreed, unless you want to jump to pure 64 bit out of the gate. This
> can cause significant pain if you have one of those closed source apps
> that you cannot live without.
> >benefit of the work required to so equip it is nonlinear and highly
> >dependent on local environment and task. The cost of the work required
> >to equip it is CURRENTLY still rather high as everybody is doing
> >one-offs of it.
> I disagree. Not everyone is doing one offs of it. Some distros include
> this, and will support it. You simply make a choice as to whether or
> not you want to pay them money for the work they did to do this.
<rant> Ah, but you see, this is a religious issue for me for reasons of
long-term scalability and maintainability, so I don't even think of this
alternative. Or if you like, I think it costs money in the short run,
and costs even more money in the long run, compared to participating in
Fedora or CaOSity and doing it once THERE, where everybody can share it.
But you knew that...;-)
You see, to me "scalable" means "transparently scalable to the entire
campus". It also means "independently/locally auditable for security"
(according to our campus security officer, who curiously enough is an
ex-cluster jock:-). Maintainable means "installed so that all users
automagically update nightly from a locally maintained campus
repository, built from source RPMs that can be built/rebuilt at a whim
for any of the supported distributions".
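Concretely (the hostname and paths here are invented purely for
illustration), the client side of all this is nothing but a stanza in
/etc/yum.conf pointing at the campus repository:

    # /etc/yum.conf fragment -- hypothetical campus repository
    [campus-base]
    name=Campus Base Repository
    baseurl=http://install.campus.example.edu/repo/$releasever/$basearch/

plus the nightly cron job shipped with yum (e.g. /etc/cron.daily/yum.cron
running "/usr/bin/yum -y update"), and every node tracks the repository
automagically.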
A setup like this means that one person, or a small team, can maintain a
single repository so that every sysadmin, or individual, or research
group, or cluster on campus can PXE/kickstart install specific
preconfigured images, or interactively install from preconfigured
templates plus mods, or install a barebones base and refine with e.g.
yum install by hand or with a %post script. It means that individuals,
or individual groups, or individual departments WITHIN the global
organization can set up local repositories containing their own RPMs
that layer on top of the public repository (and which might well be
proprietary or limited in their distribution scope). It means that
nobody has to do work twice, and that everybody understands exactly how
everything works and hence can make it meet quite exotic needs within
the established framework, simply. It means that if any tool IS a
security problem, dropping an updated rpm in place in the master
repository means that by the following morning every system on campus is
patched and no longer vulnerable to an exploit. It means not having to
call a vendor for support when they DON'T run your local environment and
have no idea how or why your application fails and whose idea of a
solution is "pay Red Hat a small fortune per node so that your
environment and ours are the same, then we'll help you".
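To make the kickstart part of this concrete (the package names here are
invented for the example), the node-specific piece of a kickstart file
can be as small as:

    # ks.cfg fragment -- hypothetical compute node
    %packages
    @ Base
    pvm
    lam
    %post
    # local refinements layered on top of the campus repository
    /usr/bin/yum -y install our-groups-tools

where our-groups-tools is a (made up) metapackage pulled from the
group's own layered repository.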
One of the biggest problems facing the cluster community, in my humble
opinion, is that the very people who should know BEST how important
scalability and transparency are to the users of the software they
develop and maintain (just for example, and not to pick on them as it is
a common problem, say, SGE) nevertheless distribute their software
source in tarball form with arcane build instructions that install into
/usr/local one system at a time!
I'm sorry, but this is just plain insane. The whole reason that RPMs
and DEBs exist is because this is an utterly bankrupt model for scalable
systems management, and it is particularly sad that the tools that NEVER
seem to be properly packaged these days are largely cluster tools, tools
that you NEVER plan to install just once and that are damn difficult
for a non-developer to install. It is as if the developers assume that
everybody that will ever use their tool is "like them" -- a brilliant
and experienced network and systems and software engineer who can read
and follow detailed instructions just to get the damn thing built, let
alone learn to run it.
It is also a bit odd, given that many/most major systems now support rpm
(or can be made to, given that it IS an open source,
architecture-independent toolset). I visit the Globus website, and they
have
everything neatly packaged -- as tarballs. I visit Condor, and after
clicking my way through their really annoying "registration" window I
see -- tarballs, or BINARY rpms (which of course have to match
distribution, which limits the applicability of the rpms considerably).
I visit SGE, I see tarballs (or binary RPMs). We don't WANT binary
rpms, we want source rpms, ones that require one whole line of
installation instructions: rpm --rebuild, followed by rpm -Uvh or
yum-arch and yum install.
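In other words, given a hypothetical foo-1.0-1.src.rpm and a repository
living in (say) /var/www/repo, the entire cycle is:

    rpm --rebuild foo-1.0-1.src.rpm      # build binary rpms from source
    cp /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm /var/www/repo/RPMS/
    yum-arch /var/www/repo               # regenerate repository headers
    yum install foo                      # every node can now pull it

(or rpm -Uvh the freshly built rpm directly on a single host). The
package name and paths are made up; the point is that there is nothing
else to type.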
Does this represent a business opportunity? Sure. One that shouldn't
exist, but sure. It also represents an even bigger opportunity for the
community to come to its senses and adopt a proper packaging standard,
one that
is indeed portable across "all" the rpm-supporting operating systems and
that puts software so built into the right places on the right systems
without anything more than (e.g.)
yum install sge-client
or an equivalent kickstart line on any given node.
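And the packager's side of that bargain is just a spec file. A
skeletal, entirely hypothetical sge-client.spec -- not a real SGE
package, just the shape of one -- looks like:

    Summary: Grid Engine client/execution host components
    Name: sge-client
    Version: 5.3
    Release: 1
    License: SISSL
    Group: Applications/System
    Source0: sge-%{version}.tar.gz
    BuildRoot: %{_tmppath}/%{name}-root

    %description
    Client-side Grid Engine commands and daemons.

    %prep
    %setup -q -n sge-%{version}

    %build
    make %{?_smp_mflags}

    %install
    make install DESTDIR=$RPM_BUILD_ROOT

    %files
    /usr/bin/qsub
    /usr/sbin/sge_execd

Twenty-odd lines, written once, and every rpm-based distro out there
can rebuild and install the result.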
So I appreciate that you're in business doing this work, and adding
additional value, and selling the work back to many people who DON'T
want to build all this themselves given that it IS absurdly difficult to
build and install and configure the high end tools. I still lament the
fact that it is necessary at all, as building, installing, and even part
of configuring should be totally automagical and prepacked ready to go.
Note well that tools that ARE maintained in rpm format, e.g. PVM and
LAM/MPI, tend to be universally available >>because<< they are there ready
to directly install in all the major rpm-based distros. Tools like SGE
that might well BE universally useful on clusters are NOT in anything
like universal use BECAUSE they are a major PITA to build and install
and maintaining them in an installation doesn't scale worth a damn.
This is not the Sufi way.... </rant>
> I am using SuSE 9.0 in this mode without problems. I use nedit
> binaries (too lazy to compile myself), and a number of other tools that
> are 32 bit only on the dual Opteron.
> That said, it makes a great deal of sense to recompile computationally
> intensive apps for the 64 bit mode of the Opteron. Not that it will
> make Jove faster, but it does quite nicely for BLAST, HMMer, and others,
> including my old molecular dynamics stuff. The 64 bit versions are
> faster by a bit.
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb at phy.duke.edu