Subject: Re: Design
Newsgroups: lugnet.robotics
Date: Sun, 4 Dec 2005 22:52:29 GMT
Original-From: dan miller <danbmil99@yahoo.!antispam!com>
Viewed: 1410 times
--- steve <sjbaker1@airmail.net> wrote:
>
> I think the way that robotics development and video game AI are
> proceeding is entirely complementary.
>
> In the video game world, we are relieved of the problems of sensors,
> mechanics, battery and such - so we can concentrate on solving the
> tricky issues of path finding, goal seeking and problem solving.
Agreed.
...
> Saying that video game developments are irrelevant because they aren't
> dealing with the real world is ridiculous. Development can be started
> from both ends of the problem spectrum and meet somewhere in the middle.
One thing that game worlds give you that is very tricky in real-world
situations is the breakdown of the environment into separable objects.
Game code starts with a bunch of objects (floor, walls, tables,
things-that-are-on-tables, etc), rendering them into a composite image. The
game AI can take advantage of this logical representation; in the real
world, we need to segment our sensations into objects that can be
manipulated, fixed objects we must avoid, moving objects (like humans) that
we may interact with, and so on.
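To make that contrast concrete, here's a toy sketch (all names and data made up): the game AI queries a ready-made object list, while the robot has to recover "objects" from an undifferentiated sensor grid, here with a naive flood-fill clustering standing in for a real perception pipeline.

```python
# Game side: the world is already a list of separable, labeled objects.
game_world = [
    {"kind": "wall",  "pos": (0, 5), "static": True},
    {"kind": "table", "pos": (3, 2), "static": True},
    {"kind": "human", "pos": (4, 4), "static": False},  # may move; interact with care
]

def game_obstacles(world):
    """The game AI just filters the logical representation it was handed."""
    return [o for o in world if o["static"]]

# Robot side: all we have is a grid of occupancy readings; "objects" must
# be inferred, e.g. by clustering adjacent occupied cells (crude flood fill).
def segment(grid):
    """Group 4-adjacent occupied cells into blobs -- a stand-in for the
    much harder segmentation problem real robots face."""
    seen, blobs = set(), []
    for start in ((r, c) for r, row in enumerate(grid)
                  for c, v in enumerate(row) if v and (r, c) not in seen):
        blob, stack = [], [start]
        while stack:
            r, c = stack.pop()
            if ((r, c) in seen
                    or not (0 <= r < len(grid) and 0 <= c < len(grid[0]))
                    or not grid[r][c]):
                continue
            seen.add((r, c))
            blob.append((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        blobs.append(blob)
    return blobs
```

The game function is one line; the robot's version is already the hard part, and real sensor data is far messier than a clean occupancy grid.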
When I have the time and resources, I hope to build a simulation environment
where virtual 'bots have to learn everything from virtual raw sensor data.
So instead of just knowing that the thing in front of them is another robot,
they will have to infer it from the composite representation they get from
their virtual cameras and other sensors. Only in the last few years have
rendering techniques become good enough to make this feasible in a way that
could translate to real-world applications. I have yet to see any research
in this area.
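The skeleton of what I have in mind looks something like this (every name here is hypothetical; the real renderer would be a proper 3D engine, and the agent a learning system rather than a toy policy). The key constraint is that the agent's interface accepts only pixels, never object identities.

```python
def render(world, width=8):
    """Stand-in renderer: project world objects into a flat 'image' of
    brightness values. The agent sees only this array."""
    image = [0.0] * width
    for obj in world:
        image[obj["x"] % width] = obj["brightness"]
    return image

class PixelAgent:
    """Gets raw pixels only -- it is never told what the objects are,
    so anything it 'knows' about them must be inferred."""
    def act(self, image):
        # Toy policy: turn toward the brightest region of the image.
        target = max(range(len(image)), key=lambda i: image[i])
        return -1 if target < len(image) // 2 else 1

# One step of the loop: render the world, let the agent decide from pixels.
world = [{"x": 6, "brightness": 0.9}, {"x": 1, "brightness": 0.4}]
agent = PixelAgent()
action = agent.act(render(world))
```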
> It cuts both ways though. My own OpenSource 'tuxkart' game uses
> the important bits of a line following algorithm that I wrote for a
> Lego Robot.
That's pretty cool!
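I haven't seen the tuxkart or Lego code, but the generic shape of a proportional line follower makes the point about portability: nothing in the control step cares whether the sensor and motors are real or simulated. (All function and parameter names below are my own invention.)

```python
def follow_line_step(read_light, set_motors,
                     target=50.0, gain=0.8, base_speed=40.0):
    """One control step: steer proportionally to how far the light-sensor
    reading is from the line-edge value. Works identically whether
    read_light/set_motors talk to a Lego brick or to a game kart."""
    error = read_light() - target
    turn = gain * error
    set_motors(base_speed - turn, base_speed + turn)
    return error

# Usage against a fake sensor: a reading of 60 means we've drifted onto
# the bright floor, so the controller slows the left wheel to steer back.
commands = []
follow_line_step(lambda: 60.0, lambda left, right: commands.append((left, right)))
```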
...
>
> I strongly disagree that the intellectual attack is getting us nowhere.
>
> We have real-world (and somewhat useful) robots such as roomba and aibo
> and we didn't have them 10 years ago. That's a success. Fields like
> speech recognition are now so routinely used that you can phone an
> airline booking service and TALK TO A COMPUTER. We forget that
> speaker-independent single word recognition was impossible 15 years
> ago.
Damn straight. As you say below, people keep moving the marker when
machines approach their sacred *human* abilities.
>
> The intellectual attack has been disappointing for several reasons:
>
> * Firstly because the problem initially looked a lot simpler than it
> finally turned out to be. The progress so far has been in
> discovering just how hard the problem truly is. That's not a
> lack of progress - that's a necessary first step.
>
> * As soon as a computer can do something (like voice recognition
> or playing chess at grand master level), the general public
> mentally remove that from the field of AI. People naturally
> define the word 'Intelligence' as the thing that humans can do
> that computers and animals can't do. Look in an AI
> textbook from the 1960's and you'll see playing chess at
> grand master levels was one of the goals of AI - it was right
> up there with the Turing test. Look in a biology textbook from
> the same era and you'll see that 'tool using' is unique to human
> intelligence - when we discovered that blackbirds use tools, the
> definition changed to 'making and using tools' - then we found
> monkeys that make tools - so then they switched to 'passing on
> knowledge of tools to offspring' - and now we know that chimps
> do that.
>
> * We tend to trivialise some applications of AI. Because the only
> use we have found for pathfinding in AI has been in video games
> and robot vacuum cleaners - we don't see that as a significant
> advance - but there is some sophisticated technology going on
> 'under the hood' of many video-games. Just try writing one and
> you'll soon find out how hard it is!
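Just to put some meat on that: A* search on a grid is the textbook version of the pathfinding running "under the hood" of many games and robot vacuums (this is an illustrative sketch, not anyone's shipping code).

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (blocked).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (est. total, cost so far, cell, path)
    best = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost + 1 < best.get(nxt, float("inf"))):
                best[nxt] = cost + 1
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None
```

Thirty lines for the easy, fully-observable version -- and a real game layers steering, dynamic obstacles, and replanning on top of it.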
I reiterate what I said earlier: we are enamored of this idea that things
like perception, intuition, inspiration, intention, and so on have 'magic'
properties (look at all the guff modern-day philosophers spout about
'qualia' -- puhleeze!). The fact is that, just as the deep secrets of biology
turned out to be completely concrete (though fantastically complex)
molecular interactions, every skill we cherish must at some level be
determined by an informational process that is exactly equivalent in
function to some C++ source code. So once we create some code that performs
function X, everyone looks at it and says, "well that's not so special. I
guess I really meant skill Y -- try to do that with your silly computers!"
I'm quite certain that we will one day have walking, talking, human-level
AI, and most folks will still be quite sure that it's just a mindless
machine, a trick of programming. Newsflash: our very souls are a trick of
programming, albeit a fantastically complex one.
-dbm