[Beowulf] IBM's Watson on Jeopardy tonight

Robert G. Brown rgb at phy.duke.edu
Wed Feb 16 07:21:37 PST 2011


On Wed, 16 Feb 2011, "C. Bergström" wrote:

> Lux, Jim (337C) wrote:
>> I think it will be a while before a machine has the wide span of capabilities of a human (particularly in terms of the ability to manipulate the surroundings), and, as someone pointed out the energy consumption is quite different (as is the underlying computational rate... lots of fairly slow neurons with lots of parallelism vs relatively few really fast transistors)
>>
> Doesn't this then raise the question of why we aren't modeling computers
> and programming models after the brain? ;)

We are, but that problem is, well, "hard".  As in grand challenge hard.
There are other problems -- brains are highly non-deterministic (in that
they often selectively amplify tiny nearly random signals, as in "having
an idea" or "reacting completely differently depending on our hormonal
state, our immediate past history, and the phase of the moon").  Brains
are extremely non-Markovian with all sorts of multiple-time-scale memory
and with plenty of functional structures we honestly don't understand
much at all.  We don't even know how brains ENCODE memory -- when I've
got Sheryl Crow running through my head at what SEEMS to be remarkably
good fidelity, there is absolutely no way to determine where that
detailed, replayable music is stored, how it is encoded, how it is
being reproduced, or how the reproduction is being conceived in the
background of my mind right now while I'm "doing" something else with my
fingers and my attention is only partly on these words, which are
appearing out of -- exactly where?  They're being synthesized, but how?

Our computers, on the other hand, tend to be mostly deterministic and
usually serial unless (as everybody on this list understands very well)
we work HARD to parallelize their function.  Our programs tend to be
insensitive to noise and not to use random numbers (unless we
DELIBERATELY build random numbers into them), and if we do the
latter, we are ultimately sampling an ensemble of possible dynamical
outcomes with little theoretical reason to believe that our sample will
be a "good" one.  Not that this isn't true of humans as well, but for
them it is sometimes a strength and frequently a weakness.
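
To make that concrete, here is a minimal Python sketch (my own
illustration, nothing Watson-specific): a seeded pseudo-random walk is
completely deterministic -- run it twice with the same seed and you get
the identical trajectory -- and any single run is only one member of an
enormous ensemble of possible trajectories.

    import random

    def random_walk(steps, seed):
        """One pseudo-random walk: deterministic given the seed."""
        rng = random.Random(seed)          # private generator, fixed seed
        position = 0
        for _ in range(steps):
            position += rng.choice((-1, +1))
        return position

    # Same seed -> bit-for-bit identical outcome, every time.
    print(random_walk(1000, seed=42) == random_walk(1000, seed=42))  # True

    # Different seeds sample different members of the ensemble of
    # 2**1000 possible trajectories; a handful of runs is a tiny sample.
    print([random_walk(1000, seed=s) for s in range(5)])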

Humans can "almost" remember somebody's name one day, for example, then
another day know it immediately, and another day still not even
recognize that they once knew it.  Computers are more systematic: as
they are currently architected and built, they either remember it or
they don't, based on fairly deterministic criteria and the ability to do
a systematic search instead of a vague, parallel "latching" of
holographically encoded memory into a holographically encoded attention
center -- for a being that IS little more than that attention center
plus associated memory and hardware plus various bidirectional
interfaces.  Computers may fail, but it isn't the same KIND of failure
that humans have.

To be honest, I think the Jeopardy example is a silly one.  It's a
canned problem.  Why bother?  We do the test every day.  Let's race --
everybody here has probably read, I dunno, Moby Dick at one time or
another.

So now, off to the races.  Open up a Google window.  Type in:

    What is the name of the third mate in Moby Dick?

Now press return.  I don't really care how fast you type.  Look!  Google
beat the shit out of Watson and all of the Jeopardy contestants (it
didn't even take a full second; human reactions couldn't possibly have
matched it, since EVEN IF YOU KNEW THE ANSWER -- which once upon a time
you HAVE seen -- your mouth would still be forming the word "Flask" long
after Google has the answer on the screen, four or five times over).
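
If you want to put a number on that race, here is a minimal Python
sketch (my own illustration; it assumes Google will answer a scripted
request with an ordinary HTML results page, which its bot detection may
well refuse):

    import time
    import urllib.parse
    import urllib.request

    query = "What is the name of the third mate in Moby Dick?"
    url = ("https://www.google.com/search?q="
           + urllib.parse.quote_plus(query))
    # A browser-ish User-Agent; without one Google usually rejects scripts.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

    start = time.perf_counter()
    html = urllib.request.urlopen(req).read().decode("utf-8", "replace")
    elapsed = time.perf_counter() - start

    print(f"Round trip: {elapsed:.2f} s")
    print("'Flask' in the results page:", "Flask" in html)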

So this is a dumb test.  Let >>me<< make up the questions, and >>don't<<
let the participants see them ahead of time, and Google will beat Watson
and all of the humans put together hands down -- generally near-zero
time versus effectively infinite time (they just won't know).  Everybody
knows computers win at this sort of thing, and have since AltaVista days.

Now, let's try to subject "Watson" to a real Turing test.  What's that
Watson?  How do you >>feel<< about being tested to see if you're human?

Not much, I'd wager.

    rgb


Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu


