Subject: AI, AI & AC
Newsgroups: lugnet.robotics
Date: Mon, 16 Oct 2000 17:37:09 GMT
Original-From: Jon Kongsvold <jon@[IHateSpam]kongsvold.com>
Everybody today is talking about Artificial Intelligence.  I think this is
only one of (at least!!!) three different elements our mind is using:

AI:  Artificial Intelligence.  This is, in my opinion, just the ability to
make logical choices.  So any expert system, my pocket calculator, the
thermostat regulating the heat in my oven, etc., all have some (little!) AI.
AI also has the feature that you can backtrack the use of rules and make the
expert system, or whatever, tell you how it arrived at its conclusion.
It might get stuck, because any expert system beyond a certain complexity
will be inconsistent and will give a wrong answer when it finds a paradox in
its rule database.
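To make the "backtrack the rules" idea concrete, here is a toy sketch of a forward-chaining rule system that records which rule produced which fact, so it can explain its conclusion afterwards.  The rules and facts are invented purely for illustration:

```python
# A toy forward-chaining "expert system" that can explain itself.
# Rules and facts are made up for illustration only.

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    trace = []                      # (conditions, conclusion) per rule fired
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((conditions, conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"has_feathers", "lays_eggs", "cannot_fly"})
for conditions, conclusion in trace:    # the "explanation"
    print(f"{sorted(conditions)} => {conclusion}")
```

The `trace` list is exactly the backtracking I mean: the system can replay the chain of rules that led to its answer, something a trained network cannot do.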

AI:  Artificial Intuition.  Pity it's not easy to make a good abbreviation
for this one; AI is taken, and "AIntu." isn't looking good (so I will stop
using it).  Well, this is the stuff neural networks and fuzzy logic are all
about.  These systems react to input by recognizing some pattern in the
input and from that giving some output.  They aren't coded with strict
rules, but are trained from total moron level up to a level where they can
compete with e.g. human stock brokers (whatever the difference ;) ).  As
they are trained and contain no rules, only a vast number of numbers
defining the "artificial brain's" neural structure, they have no idea how
they arrived at their conclusion.  They just get a "hunch" or just "know
it", like when you recognize somebody's face: you don't have to logically
think about "is this my neighbor or not?", you just recognize him/her and
can't explain how you did it.  A neural/fuzzy network will sometimes not get
totally confused by new data (like the expert systems do), but might
recognize it as "similar" to some pattern it already knows and give the
correct output.  This makes Artificial Intuition more robust when the input
data might be noisy.
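A minimal sketch of that noise tolerance, using plain nearest-pattern matching instead of a real trained network (the stored 3x3 bitmaps are invented): the system gives an answer but has no rule trace to explain it.

```python
# "Intuition" as nearest-pattern matching by Hamming distance.
# The stored patterns are invented 3x3 bitmaps, flattened row by row.

patterns = {
    "T": "111010010",
    "L": "100100111",
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def recognize(bitmap):
    # Pick the stored pattern closest to the input; no rules fire and
    # no explanation is available -- it just "looks most like a T".
    return min(patterns, key=lambda name: hamming(patterns[name], bitmap))

print(recognize("111010010"))  # an exact T
print(recognize("111010011"))  # a T with one noisy bit: still recognized
```

One flipped bit still lands closer to the T pattern than to the L pattern, which is the robustness to noisy input described above.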

AC:  Artificial Consciousness.  This is the real issue, the thing almost
everybody talking about AI is really talking about.  If you say AI, people
think more of HAL in 2001 than of the pocket calculator.  But the word for
being like HAL isn't intelligent (my chess computer is more intelligent than
I am); the word is conscious.  Self-conscious, conscious of one's
surroundings, of one's memories, etc.  Some AI labs have made robots or
programs that perhaps have the consciousness level of a badly brain-damaged
monkey; this is how far we have come in our search for AI, or rather AC.

So how do we achieve AC?  Do we make neural nets that feed into other
neural nets, going around in a loop, with some additional input from
sensory neural nets added each time?  That is roughly how our brain works
(with some millions of other factors, but never mind them :) ), but is that
the right way to program a silicon chip to do the same job?  Or some expert
system/neural network combo?  These have had success in copying the
decisions of experts, even better than the experts themselves, but no
consciousness though...
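The "nets feeding back into nets" loop can be sketched as a tiny recurrent layer whose state is mixed with fresh sensory input on every tick.  The weights here are arbitrary constants chosen only for illustration; nothing is trained:

```python
import math

# A two-unit recurrent layer: each tick, the new state depends on the
# old state (the loop) plus one fresh sensory value (the added input).
W_REC = [[0.5, -0.3], [0.2, 0.4]]   # state -> state weights (arbitrary)
W_IN = [0.8, -0.6]                  # sensor -> state weights (arbitrary)

def tick(state, sensor):
    new_state = []
    for i in range(len(state)):
        total = sum(W_REC[i][j] * state[j] for j in range(len(state)))
        total += W_IN[i] * sensor
        new_state.append(math.tanh(total))
    return new_state

state = [0.0, 0.0]
for sensor in [1.0, 0.5, 0.0, 0.0]:   # a short sensory stream, then silence
    state = tick(state, sensor)
    print(state)
```

Note that the state keeps evolving even after the sensory input goes quiet, which is the "going around in a loop" part; whether scaling this up gets you anywhere near consciousness is exactly the open question.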

Any ideas, criticism, additions, etc. are welcome.  I need to figure out
this consciousness thing; without it, my robots/programs will be no more
than advanced thermostats.

Jon

----------------------------------------------------
Jon Kongsvold
Lars Onsagersv. 11, N-7030 Trondheim, Norway
(+47) 73.88.90.73
jon@kongsvold.com

If you are in a hurry, page me online on ICQ# 381458
(use http://wwp.icq.com/381458 if you don't have the ICQ
client available.)


