Subject: Re: Design
Newsgroups: lugnet.robotics
Date: Mon, 5 Dec 2005 01:04:20 GMT
Original-From: steve <sjbaker1@airmailANTISPAM.net>
Viewed: 1418 times
dan miller wrote:
> I'm quite certain that we will one day have walking, talking, human-level
> AI, and most folks will still be quite sure that it's just a mindless
> machine, a trick of programming. Newsflash: our very souls are a trick of
> programming, albeit a fantastically complex one.
I firmly believe that.
There is a famous 'proof' that AI is impossible: the "Chinese Room"
argument by John Searle.
The idea is that someone who speaks only English is imprisoned in a
room. He is given long and complex instructions for manipulating
strings of symbols (which, unbeknownst to the prisoner, are really
Chinese characters) and can only earn his freedom by following them
to the letter. Strings of Chinese characters are passed under the
door - he works on them and carefully writes out the strings of symbols
that result from the long, tedious instructions. As it turns out,
these instructions are cunningly designed such that outside observers
can talk to the man in Chinese and get intelligible results.
(Think of this as a Turing test conducted in Chinese.)
The idea here is that whilst the man in the room APPEARS to be
able to speak Chinese, in reality he does not. No matter how
clever you are about telling him how to manipulate those symbols,
the man never understands Chinese - or even realises that he IS
"speaking" Chinese.
This argument is supposed to show that no matter how clever
you are about programming a computer to APPEAR intelligent, it
never actually becomes so - any more than the man in the room ever
learns to speak Chinese. It follows, therefore, that the computer
can only ever be a simulation of something intelligent and can
never be intelligent in its own right.
This is a particularly annoying argument!
I would rather conclude that whilst the man doesn't understand
Chinese, the combined system of the man PLUS the instructions
he's been given does.
By which we understand that it's not the computer hardware that
becomes intelligent - but rather the hardware PLUS the software
that resides in it. That's a no-brainer for most of us. We don't
expect computer hardware alone to become intelligent - but
we do (perhaps) believe that hardware PLUS detailed processing
instructions might become so. This means that the Chinese
Room argument is no longer a proof of anything.
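The "system, not the parts" point can be made concrete with a toy sketch (my own illustration, not anything from Searle - the rule table below is a made-up stand-in for his book of instructions). The interpreter function mechanically looks symbols up with no notion of what they mean, yet the interpreter plus its rule table can carry on a (very limited) exchange:

```python
# Toy "Chinese Room": the code that runs knows nothing about
# Chinese; all the apparent competence lives in the rule table,
# and the "conversation" is a property of the two combined.

RULES = {
    "你好": "你好",            # a greeting answered with a greeting
    "你会说中文吗": "会",       # "do you speak Chinese?" -> "yes"
}

def follow_instructions(symbols: str, rules: dict) -> str:
    """Mechanically copy out whatever the rules dictate.

    Nothing in this function 'understands' the symbols - it is
    pure lookup, like the prisoner following his instructions.
    """
    return rules.get(symbols, "不明白")  # fallback: "I don't understand"

reply = follow_instructions("你好", RULES)
```

A two-entry table is obviously nothing like intelligence; the point is only structural - ask "which line of this program understands Chinese?" and, as with the man in the room, no single part does.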
The trouble is that when you look at the parts of the machine,
none of them seem to hold the intelligence - just as in the
room with the man and the instructions, there is no part that
knows how to speak Chinese.
Intelligence is a systemic thing. Each individual neuron in
our brains is a very simple thing. We can easily make them
out of silicon or simulate them in software - there is no
intelligence in a neuron. It is only a system of billions
of them with insanely complex interconnections and a particular
pattern of growth, training and experience that does anything
remotely intelligent.
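To underline how simple a single simulated neuron really is, here is a minimal sketch (my own illustration): a classic McCulloch-Pitts style unit - a weighted sum pushed through a threshold. There is plainly no "intelligence" in these few lines, yet even three of them wired together compute XOR, the textbook example of behaviour that only appears at the system level:

```python
# One simulated neuron: fire (output 1) iff the weighted sum of
# inputs plus a bias exceeds zero.  No intelligence lives here.

def neuron(inputs, weights, bias=0.0):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Three such units form a tiny network that computes XOR -
# something no single threshold unit can do on its own.
def xor(a, b):
    hidden_or  = neuron([a, b], [1, 1], bias=-0.5)   # OR-like unit
    hidden_and = neuron([a, b], [1, 1], bias=-1.5)   # AND-like unit
    return neuron([hidden_or, hidden_and], [1, -2], bias=-0.5)
```

The XOR behaviour belongs to the wiring, not to any one unit - a miniature version of the claim that intelligence is a property of the system of billions of neurons, not of any neuron in it.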
Once we've understood this system completely, I suspect we'll
have pulled it all apart and decided that what we think of
as intelligence has been so fully dissected that there is
nothing left for us to describe.
Just as you can look at the individual lines of C++ code in
a video game and find no single place where the programmers
wrote "fun" into the game.
Message has 1 Reply:
  Re: Design - Quoting steve <sjbaker1@airmail.net>: (...) Are you sure? This Chinese Room arguably disproves the validity of the so-called Turing Test. The Turing Test deals only with one small aspect of AI, the question of whether a machine is judged (...) (19 years ago, 5-Dec-05, to lugnet.robotics)

Message is in Reply To:
  Re: Design - (...) Agreed. ... (...) One thing that game worlds give you that is very tricky in real-world situations is, the breakdown of the environment into separable objects. Game code starts with a bunch of objects (floor, walls, tables, (...) (19 years ago, 4-Dec-05, to lugnet.robotics)