[Beowulf] hpl size problems
Robert G. Brown
rgb at phy.duke.edu
Tue Oct 4 07:08:23 PDT 2005
Joe Landman writes:
> It's all about setting up a discipline for change, educating users,
> understanding that it is inevitable, and that you need to adapt to it if
> you have more than 1 (group of) user(s). If you need to have 14
> different mpich, then you need to make sure your administrative and
> installation processes can handle it. This turns out to be what
> modules is exceptionally good at helping with. You can also change the
> default install paths (remember the thing you buzzed me for
> previously?), and select paths based upon a well defined algorithm
> rather than a "database" lookup (modules). Lots of folks use this happily.
> In short this is a real problem for large shared computing resource
> facilities with lots of users of varying code requirements, that often
> are beyond the initial scope of deployment and system build. If you
> don't have a good method of handling such cases, you can either deny
> they exist and insist upon LARTs (bad), or come up with a method to
> adapt to the need (can be bad or good, depending upon how hard you work
> at setting up a sensible system). I know it doesn't jibe with "The One
> True Way"(tm).
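The modules approach Joe describes is easy to picture with a concrete (entirely hypothetical) example. Environment modules are driven by small Tcl "modulefiles", one per build; the paths and version below are invented for illustration:

```
#%Module1.0
## Hypothetical modulefile for one of the fourteen mpich builds.
proc ModulesHelp { } {
    puts stderr "MPICH 1.2.7 built with gcc 3.3"
}
conflict        mpich
prepend-path    PATH            /opt/mpich/1.2.7-gcc33/bin
prepend-path    LD_LIBRARY_PATH /opt/mpich/1.2.7-gcc33/lib
prepend-path    MANPATH         /opt/mpich/1.2.7-gcc33/man
```

A user selects a build with "module load mpich/1.2.7-gcc33", and the variants coexist without any one of them owning the default paths.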
It's all about CBA (cost-benefit analysis). Everything is all about CBA. If you
permit/encourage/acquiesce in an explosion of minor variants of any
commonly used package, especially those that depend on a legacy code
base and legacy libraries or specific compilers or... you incur one set
of costs. If your institution (which has to PAY those not at all
insubstantial maintenance costs) forces users to FIX THEIR CODE so that
it does NOT require any specific package/library/compiler or at least so
that it will easily rebuild with only minimal tweaks as any of the above
evolve in time (as they will, of course) you incur another set of costs.
What nobody EVER does, it seems like, is sit down and actually compare
those sets of costs.
The users view maintenance as "free", and will cheerfully say things
like "why can't we just leave all those systems running Red Hat 7.1 ---
everything works, after all" not realizing that doing so is NOT free and
in fact is rather expensive, diverting all sorts of time to patching
and/or protecting the network, to having user X want package Y that's on
his desktop (after all) to work on it when there is NO WAY to build Y
for RH 7.1, and that eventually they're going to want to replace those
systems with Opterons with gigabit ethernet adapters that would laugh
hysterically at RH 7.1 were you to try to install it. The department or
university that actually PAYS for the administration of the cluster/LAN
views their systems people as an infinite resource. Nobody looks at
scaling and long term HUMAN costs.
So I meant what I said in my little admin diatribe the other day. It
really, really is all right to Just Say No. To tell your user "Look, we
have one sysadmin taking care of 500 systems here, with two different
hardware architectures. We cannot afford both to keep the application
environment consistent and up-to-date -- in terms of the hardware it
runs on and the libraries you might wish to link against -- AND to
spend the time required to build 15 different versions of this package
just so you can avoid fixing up your code so that it runs correctly on
the ONE version of the package that comes preinstalled with the actual
distribution and site-specific support packages (which we KNOW we have
to maintain anyway). A second administrator would cost almost $100K in
salary and benefits, which means that we cannot afford to buy next
year's round of new nodes if we hire one. It is much cheaper for
you to spend the week it might take to fix your code."
In order to make this work, of course, you have to get there gently.
Get the top-level directors of the cluster(s) to buy into the CBA you
conduct that illustrates the actual costs of doing things the wrong way.
Get them to buy into a policy MIGRATION that says something like: "In
one year all nodes and LAN workstations will have a consistent
installation environment based on the XXX distribution plus the
following site license and local packages. Here are a set of prototypes
of this environment you may use for software building and porting. We
have limited time available from the following programmers to help you
port your code. Start now so that we can make this transition to a
cleaner, more scalable administrative and operational state as seamless
and painless as possible."
It's really all about the money (where time is money). Somebody doesn't
want to spend six FTE-months porting a big program, so they condemn
fifty institutions that use the code to spend several FTE-months of
effort EACH to support all the legacy crap required to run it unchanged.
EACH institution only sees a small enough bite that their users can say
"you have to do it we need to run this program", but overall it is loss,
loss, loss. This is a pervasive and ubiquitous problem, from the sound
of it, in institutional research packages -- really "big" programs --
often written (like the example in my previous note) by a researcher who
knew diddly about real programming technique on a totally unsuitable
platform (WinXX or a Mac, for example) and then minimally (badly) ported
it to run on some flavor of Unix. Not even the person who WROTE the
application can fix it -- they weren't that good a programmer and their
code isn't properly structured or commented or maintainable and uses
some feature they didn't realize was nonstandard or even proprietary on
their original development platform.
The only solution to this (and I've seen it implemented more than once,
including BTW for the aforementioned web application) is to BITE THE
BULLET and rewrite the (hopefully open source) application. Even if the
application is NOT open source, it is probably not that hard to
re-engineer based on the operational specifications of the tool you're
replacing, and a good rewrite will damn near start over from these specs
anyway (and MAKE them open source for the future).
I just don't get it. I'm a researcher and I write a ton of code. I've
had to rewrite my OWN code multiple times over the years. I've never
regretted it. A nearly complete rewrite forces you to reconsider your
data structures and processing flow so that the result is faster, more
efficient, more compact, more natural, AND as you track the changes in
the primary libraries and compilers available, more standards compliant.
My C code used to be K&R. Then it was ANSI. Now I'm working it
gradually towards C99 and so on, as there are BENEFITS that offset many
of the costs of the port, and in a lot of cases the "port" involves
changing a single compile-time flag in the Makefile. Ditto working my
system calls from early free-form Linux or SunOS variants into rough
SUS/POSIX compliance, either voluntarily or when a call I've used that
was noncompliant and deprecated but "supported" finally disappears.
The kernel and library developers are participating in this SAME process
-- they announce a significant upcoming change, support twinned calls
(via e.g. macro or include #ifdefs) for legacy compatibility during a
"deprecated" period, then poof, the old call goes away because it is too
damn "expensive" to support forever.
It is a Damn Shame(tm) that the government agencies that support all the
research that is done all over the world with these packages don't
recognize the long term expense of failing to properly support the base
packages themselves, including making them build and install according
to some minimal STANDARDS of portability and functionality. From what
I've seen, if the DOE announced tomorrow that it would not spend money
or resources on a non-compliant package (commercial or otherwise) used
on ANY of its supported sites, then in six weeks every commercial package and
most non-commercial packages would "miraculously" become compliant.
Money talks, but people are lazy and will cheerfully spend any amount of
OTHER people's money -- usually the taxpayer's -- rather than any of
THEIR OWN time, even if the latter is in fact paid for by those same taxpayers.