[Beowulf] Re: vectors vs. loops

Jim Lux james.p.lux at jpl.nasa.gov
Wed May 4 08:05:53 PDT 2005


----- Original Message -----
From: "Douglas Eadline - ClusterWorld Magazine" <deadline at clusterworld.com>
To: "Philippe Blaise" <philippe.blaise at cea.fr>
Cc: "Joachim Worringen" <joachim at ccrl-nece.de>; "beowulf"
<beowulf at beowulf.org>; "Robert G. Brown" <rgb at phy.duke.edu>
Sent: Wednesday, May 04, 2005 6:40 AM
Subject: Re: [Beowulf] Re: vectors vs. loops


> On Tue, 3 May 2005, Philippe Blaise wrote:
>
> > Robert G. Brown wrote:
> >
> > >....
> > >
> > >Still, the marketplace speaks for itself.  It doesn't argue, and isn't
> > >legendary, it just is.
> > >....
> >
> > But does the HPC marketplace have a direction?
>
> Just like any marketplace, it is ruled by price/performance. One could
> argue that during the "vector machine" epoch there was an artificial
> market created by the Cold War. Performance at any price was an
> important strategic issue.
<giant snip>
> > Of course, it would be nice to have a true vector unit on a P4 or
> > Opteron. But the problem will be the memory access again.
>
> If there were a need for such a device in the big markets, then there
> would be such a device in processors.
>
> We must play with the "Lego" that we can find in the commodity markets.
> Fortunately, there are many companies that make "specialty pieces"
> (interconnects, compilers, packaging, etc.) for the cluster market. But
> like all companies they need to earn money - something the vector
> supercomputer vendors seemed to have trouble accomplishing in the
> absence of a government-supported market.
>
>
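
On that last quoted point, before the market question: commodity chips
do carry short-vector SIMD units these days (SSE2 on the P4 and the
Opteron), but Philippe is right that memory access is the wall.  Here
is a minimal sketch of the kind of loop in question, a STREAM-style
triad in plain C; the array size, repetition count, and timing are
illustrative assumptions, not a calibrated benchmark (McCalpin's real
STREAM benchmark is the right tool for actual numbers):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (1L << 22)   /* 4M doubles per array: well past any cache */
#define REPS 50           /* repeat so the timing is measurable */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    /* STREAM-style triad: one multiply-add per element, but three
       memory streams.  A wider vector unit makes the arithmetic
       cheaper; it does nothing for the loads and stores, so the loop
       runs at whatever rate the memory system can feed it. */
    clock_t t0 = clock();
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
    double secs = (clock() - t0) / (double)CLOCKS_PER_SEC;

    printf("triad: %.2f s, ~%.0f MB/s (a[1]=%g)\n",
           secs, REPS * 3.0 * N * sizeof(double) / (secs * 1e6), a[1]);
    free(a); free(b); free(c);
    return 0;
}

No matter how wide the vector unit gets, that loop moves three streams
through memory per iteration, so its speed is set by the memory system.
That bandwidth (banked memory feeding the pipes) was a big part of what
a "true" vector machine actually bought you.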

Sounds like the old "killer app" question, doesn't it?  If you've got a
problem that has to be solved, and a vector machine solves it while
nothing else does, then you can fund the vector machine. I suspect that
a LOT of problems were run on classical supercomputers and their
forebears (the CDC 7600, e.g.) just because that's what was available.
When FPS came out with attached vector processors, it was to meet
specific needs (medical imaging/tomography being but one).  As everyone
on this list knows, writing software is expensive and tedious, so once
you've got that monster code written and running, there's substantial
inertia against moving it to another architecture.

However, many, many of the problems run on supers (and vectors)
probably could run just as well on a cluster or even a NoW.  Things
like rgb's Monte Carlo stuff, for instance, which is embarrassingly
parallel: raw compute cycles are what you need.  Once the cost of the
super became significant (compared to the hassle of rewriting your code
and managing that CoW/NoW/Beowulf), the die was cast, and the demand
for classical supers decreased.  And, as Doug points out, the market
will quickly determine where development and production resources go.
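
To make "embarrassingly parallel" concrete: the pattern is a few lines
of MPI -- every node grinds away on its own samples, and there is
exactly one communication at the end, so even a slow interconnect
hardly matters.  A minimal sketch (pi by dart-throwing; the sample
count and the crude per-rank seeding are illustrative assumptions):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Embarrassingly parallel Monte Carlo: estimate pi by dart-throwing.
   Every rank computes independently; the only communication is one
   reduction at the end. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 10 * 1000 * 1000;   /* samples per rank (arbitrary) */
    srand(12345u + rank);              /* crude per-rank seeding */
    long hits = 0;
    for (long i = 0; i < n; i++) {
        double x = rand() / (RAND_MAX + 1.0);
        double y = rand() / (RAND_MAX + 1.0);
        if (x * x + y * y < 1.0) hits++;
    }

    long total = 0;
    MPI_Reduce(&hits, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi ~= %.6f (%ld ranks, %ld samples each)\n",
               4.0 * total / ((double)n * size), (long)size, n);

    MPI_Finalize();
    return 0;
}

Build with mpicc and run with mpirun -np <nodes>; doubling the node
count roughly halves the wall time, which is the whole appeal of a
CoW/NoW/Beowulf for this class of problem.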

A similar phenomenon occurs at a lower level, for ICs.  In the 70s, the
government bought the supercomputers and the ICs, so that's what the
mfrs produced.  Now, consumers are a much larger fraction of the market
(90%, perhaps?), so the mfrs produce high-volume, high-integration
computers (Wintel boxes) and chips aimed at that market.

It would be interesting (in a casual-inquiry way... someone out there
needs a senior history project?) to look at the mix of application
types run on computers in each decade, in particular the dollars spent
on each type of processing, and to relate that to the architectures
available.  Not many mainframe-based word processors running these
days, are there?  Or even "client/server" word processors (such as the
Wang series).



