Subject: Re: Positioning
Newsgroups: lugnet.robotics
Date: Wed, 26 Jan 2000 23:16:01 GMT
Viewed: 872 times

Hi Pete,

The direct answer to what I've used for landmarks is colored marks, like those
found on the RCX test pad. If the robot is a few inches to the left or right
of the exact path, it'll still find the mark, assuming it's a few inches wide.
But if it misses the landmark, it keeps going, so a landmark map can actually
produce a much larger error than other types of maps, in that event. The range
for seeing the landmark is not far at all. The robot has to be right on top of
it.
it.

Later this summer, I'm planning to do some work with vision systems and/or
sonar, with the hopes that this will increase the range of seeing a landmark.

The real answer to what I plan on using for landmarks in the future
is "patterns". Not patterns in the software development sense, but more in the
artistic or mathematical sense. I'd like to be able to collect a set of
important patterns for a given problem, such as examining the environment, and
build maps that can store those patterns. Each unique pattern would be
assigned a number, and the numbers could be stored in a map, just as rotation
and timer numbers are. This is what the mapping code I've written can store
and replay. Not just numbers to move through physical space, but numbers to
move through any given problem space. This is also why I didn't directly tie
the mapping code to a rotation sensor or timer. I want it to be usable for any
problem domain.
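The idea of a map as a bare sequence of numbers, replayed through whatever interpreter a problem domain supplies, can be sketched roughly like this. The names here are my own invention for illustration, not SoftBricks' actual API:

```python
# Hypothetical sketch of a domain-agnostic map: an ordered list of
# numbers that can be recorded and later replayed through any
# interpreter the problem domain supplies. Names are illustrative,
# not SoftBricks' actual API.

class NumberMap:
    def __init__(self):
        self.entries = []

    def record(self, number):
        """Append one number: a rotation count, a timer tick, or a pattern id."""
        self.entries.append(number)

    def replay(self, interpret):
        """Feed each stored number, in order, to a domain-specific interpreter."""
        return [interpret(n) for n in self.entries]

# The same map class works for physical motion or abstract patterns:
drive_map = NumberMap()
for ticks in (120, 45, 120):          # e.g. rotation-sensor counts
    drive_map.record(ticks)

# One interpreter might drive motors; another might match visual patterns.
log = drive_map.replay(lambda n: f"drive {n} ticks")
```

Because the map never interprets its own numbers, swapping the interpreter is all it takes to move the same storage from physical space to any other problem space.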

Once a set of patterns for a problem domain are identified and numbered, the
question becomes "How do we translate the robot's environment into those
numbers?" The answer SoftBricks provides for that question is Brains. Brains
translate data, whether it be from the environment or the internal state of
the RCX, into a new form. They're based on Rodney Brooks's Subsumption
Architecture, an approach proven capable of producing programs in which such
translations correctly drive a robot. So maps give me a way to store
instructions and brains give me a way of examining things to see if those
instructions apply.
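A brain in this sense might look something like the following: a priority-ordered stack of behaviors, each of which may claim the current sensor reading and translate it into a pattern number, with higher layers subsuming lower ones. This is my own simplified sketch of the subsumption idea, not SoftBricks' actual Brain implementation:

```python
# Illustrative subsumption-style "brain": a priority-ordered stack of
# behaviors. The highest-priority behavior whose predicate matches the
# reading subsumes the rest and emits its pattern number. This is a
# sketch of the idea, not SoftBricks' actual Brain implementation.

def make_brain(layers):
    """layers: list of (predicate, pattern_number), highest priority first."""
    def translate(reading):
        for predicate, pattern in layers:
            if predicate(reading):
                return pattern          # this layer subsumes the ones below
        return 0                        # default pattern: nothing recognized
    return translate

# Example: translate a raw light-sensor reading into pattern numbers.
brain = make_brain([
    (lambda r: r < 30, 1),   # dark mark on the pad  -> pattern 1
    (lambda r: r > 70, 2),   # bright landmark       -> pattern 2
])

patterns = [brain(r) for r in (80, 50, 20)]   # -> [2, 0, 1]
```

The pattern numbers a brain emits are exactly the kind of numbers a map can store, which is what ties the two pieces together.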

In addition to this, a program could easily be written that has the robot
trying different actions and seeing which one produces the desired result. As
desired results are discovered, they are stored in a map. When the robot
encounters the same problem, it can just replay the map to determine what
actions to take. In other words, a robot that has a map is a robot that can
learn.
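That learning loop could be sketched like this, with all names invented for illustration: search the available actions once, store the winner in a map keyed by problem, and replay it on every later encounter.

```python
# Hypothetical trial-and-error learner: try each available action,
# keep the first one that produces the desired result, and store it
# in a map keyed by problem. Replaying the map later skips the search.

def learn(problem, actions, succeeded, learned):
    """Try actions until one succeeds; remember it in the `learned` map."""
    if problem in learned:                 # already solved: just replay
        return learned[problem]
    for action in actions:
        if succeeded(problem, action):
            learned[problem] = action      # store the discovery in the map
            return action
    return None                            # no action worked

# Toy example: the "right" action for a problem happens to be problem % 3.
learned = {}
ok = lambda p, a: a == p % 3
first = learn(7, [0, 1, 2], ok, learned)   # discovered by search
again = learn(7, [0, 1, 2], ok, learned)   # replayed from the map
```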

Also, because maps are just variables on the RCX, they can be sent to other
robots using the SoftBricks RoboCall Protocol. The second robot can then use that
map to perform tasks. In other words, SoftBricks can be used to build robots
that can teach other robots new things.
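Since a map is just a list of small numbers, teaching another robot reduces to serializing those numbers and replaying them on the receiving side. The encoding below is invented purely for illustration; the RoboCall Protocol's real wire format may differ.

```python
# Illustrative map transfer: pack a map as length-prefixed bytes so a
# second robot can rebuild and replay it. This encoding is made up for
# the sketch; the actual RoboCall Protocol may work differently.

def encode_map(entries):
    """Pack a map as length-prefixed bytes (one byte per entry, 0-255)."""
    return bytes([len(entries)] + list(entries))

def decode_map(payload):
    """Recover the map the sender transmitted."""
    count = payload[0]
    return list(payload[1:1 + count])

sent = [120, 45, 120]
wire = encode_map(sent)          # what robot A transmits (e.g. over IR)
received = decode_map(wire)      # robot B rebuilds the identical map
```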

Now you know where I'm going with all this and why I spent so much time
polishing up SoftBricks.

And I agree that drawing lines on the floor isn't really cheating. I only
meant that it felt like cheating to me because I'd really rather produce a
different way to examine the environment and make decisions. I've got the
basics for that different way written now: SoftBricks. After I finish the
prototype of my android, I'll return to building on that foundation.


In lugnet.robotics, Pete Hardie <pete.hardie@dvsg.sciatl.com> writes:
> David Leeper wrote:
>> Hi Pete,
>>
>> Those are good points. But I believe they apply to other types of mapping as
>> well. For example, when testing SoftBricks I built a robot that travels a map
>> and if it gets to the desired destination, sends the map to another robot with
>> the intent of that robot following the map to join the first robot. It worked
>> great, except no two robots drive exactly alike and sometimes the second robot
>> got "lost". And this was with a simple timing map. Although I didn't test it,
>> my feeling is that a landmark map would work better in that case because they
>> don't depend on the precision of rotation sensors, or get thrown off by
>> slippage or different drive trains. Landmark maps are a bit more resistant to
>> the differences in how robots move. Just a bit.
>
> Some questions I have - what are you using for a 'landmark'? How far away can
> the legobot detect a landmark? When giving directions including a landmark,
> what are the steps leading up to "...when you get to the X..."? And how does
> a bot determine that it has not found a landmark and needs to ask for more
> directions?
>
>> The real problem seems to me to lie in the "GO" part of following the
>> map. "GO" isn't a precise thing. Perhaps different types of algorithms could
>> solve this problem? And there's always the trick of drawing a black line on
>> the floor, but that always seemed like "cheating" to me.
>
> Again, it's only cheating if you are trying to solve a problem more like an
> orienteering challenge than a roadmap challenge. If you have an existing
> 'network' of edges between nodes, and travel along the edges to nodes, then
> the problem is different (simpler?) than if all you have is nodes, and no
> 'directional' reference.
>
> --
> Pete Hardie                   |   Goalie, DVSG Dart Team
> Scientific Atlanta            |
> Digital Video Services Group  |


