[Beowulf] A start in Parallel Programming?

Robert G. Brown rgb at phy.duke.edu
Wed Mar 14 09:20:12 PDT 2007

On Wed, 14 Mar 2007, Peter St. John wrote:

> Joe Landman wrote:
> "...if you buy into the notion that there may in fact be some fundamental
> physical limits on Moore's law due to the nasty combination of
> thermodynamics (stability of small structures with respect to temperature,
> and resistance of same to defect formation), and  quantum mechanics ..."
> Yeah I buy into that. I figure that there's a ceiling of so many Deliverable
> FLOPS per Cubic Angstrom? And I wonder if something like Nonvolatile Memory
> vs Processing Bits per Second are sortof a Heisenberg Dual; if you want a
> bit carved in stone, so it will still be there when you look later, you
> won't be able to process it as fast as something transient. So you could
> maximize the amount of FLOPS in a milliliter, or the amount of RAM, but not
> both.
> So maybe the ceiling will be (Deliverable FLOPS times Recoverable Bytes)/ml
> I think achieving zero degrees kelvin, watching time zero in a telescope,
> and the ceiling for DP are all kinda the same thing, and we'll entertain
> ourselves reducing temp below 0.001K (whatever) and watching t < 0.00001 sec
> and so on, for decades. Maybe we got within a certain kind of ballpark of 0K
> first, now we're getting to the ballpark of time 0 cosmologically, and maybe
> in a few decades we'll get in the ballpark of Max DFLOPSRB/ML (tm).
> Robert?

Y'all are so cynical, really.  Historically, the demise of Moore's Law
at the hands of rapidly approaching physical barriers has been predicted
with depressing regularity, but in each and every instance reports of
its demise have proven premature, or as Moore would doubtless like to
say, "I'm not dead yet.  I'm really feeling much better..."

I will make no such mistake.  Sure, there are probably physical limits
on switching associated with Heisenberg and Maxwell.  As Jim noted (or
was it Joe?) light (or if you prefer, information) ain't a gonna move no
faster than c, and indeed will almost invariably move strictly slower
than c in media.  The quantum switching stability argument is not so
clear -- there it is difficult to make concrete statements without a
specific model.  Yes, the engineering hovers around a variety of
barriers in this arena, but the barriers have proven fairly mobile as
people think up clever new ways of making things work in spite of the
fact that they "can't".  The barriers for silicon aren't the barriers
for gallium arsenide aren't the barriers for Compound X that hasn't been
invented yet aren't the barriers for the high-T_c super-semiconductor that
will win the Nobel Prize in five years aren't the barriers for optical
switching at all (which awaits certain key developments that I can
fairly well "promise" will arrive in the next 3-5 years:-) -- and then
there is real quantum computing and computing based on a complete
logical rearrangement of the way we define "a computer" and...

Don't think of humans as designing computers (or any technology).  We
are co-evolving them in the midst of the most intense genetic
optimization algorithm in recorded history.  And one of the lovely
things about GAs is that they can and do make "far jumps" quite
frequently, they don't just toddle along up the nearest hill only to
discover at the top that it is an anthill and they are missing nearby
Mount Everest.  So no sooner will an "insuperable barrier" appear -- the
way uphill blocked by a deep valley, a veritable chasm even -- than a new
dimension will emerge and folks will take a single short step >>around<<
the barrier and emerge on the far side with easy climbing as far as the
eye can see again.
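The far-jump behavior is easy to see in a toy example.  The sketch below
(everything in it -- the landscape, the step sizes, the jump rate -- is
invented purely for illustration, not any particular GA implementation)
compares a greedy hill climber, which stalls on the nearest "anthill",
with the same search plus an occasional large random mutation, which
finds the distant "Everest":

```python
import math
import random

def fitness(x):
    # Toy landscape: a small local peak (the "anthill") near x = 2
    # and a much taller, broader global peak ("Everest") near x = 10.
    return math.exp(-(x - 2) ** 2) + 5.0 * math.exp(-0.1 * (x - 10) ** 2)

def hill_climb(x, steps=1000, step=0.05):
    # Greedy local search: accept only small uphill moves.
    for _ in range(steps):
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def far_jump_search(x, steps=1000, step=0.05, jump_rate=0.1, rng=None):
    # Same greedy search, but with occasional "far jump" mutations
    # anywhere in [0, 15]; a jump is kept only if it lands uphill.
    rng = rng or random.Random(42)  # seeded for reproducibility
    for _ in range(steps):
        if rng.random() < jump_rate:
            cand = rng.uniform(0.0, 15.0)
            if fitness(cand) > fitness(x):
                x = cand
        x = hill_climb(x, steps=1, step=step)
    return x

if __name__ == "__main__":
    print(round(hill_climb(0.0), 1))       # stalls on the anthill near x = 2
    print(round(far_jump_search(0.0), 1))  # far jumps reach Everest near x = 10
```

The pure hill climber converges to the local peak near x = 2 and stays
there forever; the far-jump variant escapes that basin almost
immediately, which is the whole point of the metaphor.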

We are part of this.  Parallelism and COTS simply blew Moore's Law for
supercomputing away right when it looked like supercomputers couldn't
get any faster except at immense expense.  Now single chassis computers
available off the shelf ARE COTS clusters, with four CPU cores on a
network with memory and clever little clusterware keeping them all
happily coexisting, one hopes.  There is more parallelism within the
CPUs.  There are unexploited opportunities for truly massive parallelism
in IBM Cell processors and the like.  Neural computing and so on are
still primitive but full of potential (potential that may be waiting for
something like the Cell softwired into a network).

So I'm not holding my breath on Moore's Law running out this week or
next.  I'm more interested in speculating on when the next massive
super-Moore's-Law jump on TOP of it will occur, when the next
phase/paradigm shift is due that will change the way we all think about
computing to where we look back and remember how quaint it was to even
doubt that it was up, up, up as far as the eye can see and the mind can
imagine...

OK, enough poetry now.


Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu