Subject: Re: Design
Newsgroups: lugnet.robotics
Date: Mon, 5 Dec 2005 13:23:58 GMT
Original-From: Mr S <szinn_the1@yahoo.com[StopSpammers]>
Viewed: 1169 times

Dan,
I wouldn't have argued it quite that way, but then I
wouldn't have been so eloquent. On the human mind and
simulation (mimicry) of it, I have three observations:

Intelligent behavior has a goal (easy to mimic)
Intelligence has an agenda (not easy to mimic)
Intelligence has an attention span (I have never even
heard of anyone trying to mimic this)
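For what it's worth, the first point really is easy
to mimic, and even the third can be crudely faked.
Here is a minimal Python sketch (the class and field
names are my own invention, not any standard API) of
an agent with a fixed goal, an agenda of subtasks,
and an attention span that decays each tick:

```python
import random

class ToyAgent:
    """A crude agent with a goal, an 'agenda' of subtasks,
    and an attention span that decays each tick."""

    def __init__(self, goal, agenda, attention_span=5):
        self.goal = goal                  # what it is trying to achieve
        self.agenda = list(agenda)        # subtasks it pursues in order
        self.attention = attention_span   # ticks before it loses interest

    def step(self):
        if self.attention <= 0 or not self.agenda:
            return "wanders off"          # attention exhausted, or nothing to do
        task = self.agenda[0]
        self.attention -= 1
        if random.random() < 0.5:         # pretend the task sometimes finishes
            self.agenda.pop(0)
            self.attention += 2           # success renews interest
        return f"working on {task} (goal: {self.goal})"

agent = ToyAgent("cross the room", ["find door", "open door", "walk through"])
for _ in range(10):
    print(agent.step())
```

Of course, this only makes the point above: the goal
is trivial to mimic, while the agenda and the decay
constant were both handed to the agent by me.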

Of all the tests of (or for) intelligence that I
have heard of, none address the last two points.
Yet somehow even a 4-month-old baby has both an
agenda and an attention span, even though it has
not yet learned anything its parents would call
useful and, in fact, could not pass the Turing
test...

By what I call failure-mode analysis: a human not
capable of the complex character manipulations and
calculations required inside the Chinese room would
also fail, no matter how motivated he was to pass.
So this example is not a good one, other than as an
arm-chair example of fixed-domain problem analysis,
where intelligence is not required.

Even in the rigid domain paradigm of computer games,
it might be possible to imitate intelligence having
an agenda, though the game programmer has to guide
that agenda, if not create it. Try writing a program
that decides what it wants to do, when it wants to,
and how it wants to. Yes, I'm saying that
intelligent entities are born with their intelligence
intact. It is not learned, only better defined with
learning. This has led me to the conclusion that
intelligence is not a learning process, but the
definition of the machine that is doing the
learning... Learning is a by-product of intelligence,
not the creator of it.
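The closest a program comes today is sampling its
own goals from built-in drives, and that arguably
proves the point: the "wanting" is still supplied by
the programmer. A hypothetical sketch (the drive
table and function name are mine):

```python
import random

# The programmer supplies the drives; the program only picks among them.
# The 'wanting' itself is built in -- it never originates in the machine.
DRIVES = {"explore": 0.5, "recharge": 0.3, "idle": 0.2}

def choose_goal(rng):
    """Pick a goal weighted by the innate drive strengths."""
    goals, weights = zip(*DRIVES.items())
    return rng.choices(goals, weights=weights, k=1)[0]

rng = random.Random(42)
print([choose_goal(rng) for _ in range(5)])
```

However you seed it, the machine only ever "decides"
among options it was born with, which is exactly the
distinction being drawn above.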

As for the real world and game worlds differing
because of the predefined definitions of objects,
walls, etc., I have to say that the real world is
predefined too, just with a much larger, more
complex set of definitions. For instance: a privacy
fence is a geometric object with associated
properties. It is also a wooden object. It is an
object that has relative motion. It is also often an
object that has the properties of many smaller
wooden and/or metal geometric objects. A fence in
general, on the other hand, can have many more
differing properties: a chain-link fence, a white
picket fence, an electric fence, a barb-wire fence,
a chicken-wire fence, etc., each with its own
associated properties. It would overwhelm the game
programmer to include so many possible properties
for something that is represented usefully by a
single set of geometric properties.
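The contrast can be made concrete. In a game, one
fixed geometric record is enough; a real-world model
needs an open-ended property set per fence type. A
toy Python sketch (the field names are mine):

```python
# A game object: one fixed geometric record is enough.
game_fence = {"shape": "box", "w": 10.0, "h": 2.0, "d": 0.1}

# A real-world model: each fence kind carries its own
# open-ended set of properties, and the list never ends.
real_fences = {
    "privacy":    {"material": "wood",  "opaque": True,  "parts": "planks"},
    "chain_link": {"material": "steel", "opaque": False, "parts": "mesh"},
    "picket":     {"material": "wood",  "opaque": False, "color": "white"},
    "electric":   {"material": "wire",  "opaque": False, "hazard": "shock"},
}

# The game needs a handful of facts; the 'real' model grows without bound.
print(len(game_fence), sum(len(v) for v in real_fences.values()))
```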

The real world can indeed be broken down into basic
geometric shapes and properties... it just takes a
great deal of memory and recognition skills to do so
successfully. Not even all humans are able to do this
successfully.

It will take a great deal of work: defining a
computing system (not a device) that is able to
function as an intelligence is still a hugely
complex goal, one that remains out of sight.



--- dan miller <danbmil99@yahoo.com> wrote:

--- steve <sjbaker1@airmail.net> wrote:
...
Searle's own counter to the argument that the
man + the rulebook is an intelligent system is that
the man could memorize the rulebook and step out
of the room able to speak fluent Chinese without
being able to understand a word of what he was saying
in response to Chinese questioning.

This would leave us with a strange situation.

On leaving the room, the man would be able to
behave as a Chinese speaker - but his conscious
'English' self would have no idea what he was
saying.

It would be like having two completely separate
individuals within one skull.  Neither would be
able to access knowledge that the other had.
Neither would be able to speak the other's language.

The situation would appear to have complete symmetry.

Except for the fact that the Chinese guy could only
understand one word
every million years or so.

What Searle (annoyingly, yes!) fails to comprehend is
that a human being who
miraculously had the ability to emulate a Turing
machine, which in turn is
running a program able to simulate a human mind, is
not the human mind he is
simulating.

Humans who have had their corpus callosum severed
(for severe epilepsy) are
effectively two centers of consciousness in one
body.  Tests have been done
where they let the left hand touch a toothbrush, and
later identify it from
a bunch of objects, but the right hand (which is
associated with the left
side of the brain, and speech), cannot access the
information.  This is
exactly the state of affairs in Searle's absurd
thought experiment.  (absurd
because of the performance problem of a human
memorizing enough rules to run
a complex program -- but I'll ignore that for the
sake of argument).  If I
ask something in Chinese of the "wrong" person -- in
this case, the
English-speaking host -- he will know nothing about
it, because he has not
learned Chinese.  But the Chinese person, whom the
English person is somehow
emulating, can speak Chinese just fine.

Here's a concrete example of a similar state of
affairs that can exist
today:  A Macintosh has a PC emulator running in a
window.  On the (virtual)
PC, you open a word processor, edit a document, and
save it to disk.  Then
you open a word processor on the Mac, and attempt to
open the file.  It
won't open. (maybe you can't even find it, or if you
can, it's in the wrong
format).  Hmm, does that say anything about the
relative merits of PCs
versus Macs?  Does it mean anything at all about
their qualitative
capabilities?  No, it means essentially nothing.
The PC could emulate the
Mac, and the result would be the same.  That's the
nature of the beast when
it comes to Universal Turing Machines.  If Searle
(and the legions of
philosophy students who actually believe this crap
-- I've met a couple)
spent some time understanding the Church/Turing
thesis, they wouldn't be so
confused by a simple sleight-of-hand parlor trick,
and the world would be a
better place.
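Dan's Universal Turing Machine point can be made
concrete with a toy interpreter (a textbook
construction, with my own naming): the interpreter
reproduces the machine's behavior while
"understanding" nothing about it, just as the man in
the room shuffles symbols he cannot read.

```python
def run_tm(rules, tape, state="A", max_steps=1000):
    """Interpret a Turing machine. rules maps (state, symbol) ->
    (write, move, next_state). The interpreter just shuffles
    symbols -- it knows nothing of what the machine 'means'."""
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        sym = tape.get(pos, "_")          # '_' is the blank symbol
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A machine that flips every bit, then halts on blank.
flipper = {
    ("A", "0"): ("1", +1, "A"),
    ("A", "1"): ("0", +1, "A"),
    ("A", "_"): ("_",  0, "HALT"),
}
print(run_tm(flipper, "0110"))  # -> 1001
```

Swap in a rule table for any other machine and the
same dumb loop runs it; that interchangeability is
the whole content of the Church/Turing point here.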

It's all so silly, because it's just going to
happen, and then we can argue
about it forever.  Or until they get annoyed.


