In lugnet.org.us.laflrc, Steve Hassenplug wrote:
> Brian, don't read this. (he's trying to work it
> out for himself...) :)
I didn't read it. But if I had...
> While this is a good "brute force" method of mapping
> a maze, you'll be saving, and trying to use quite a
> bit of information that you don't need.
Agreed, although if we start running mazes with loops (like Alegomazer did) I
think physically mapping the maze will likely be needed - if for no other
reason than to know when you've closed a loop. But that's for next time.
> Assuming you can identify an intersection, all the robot
> really needs to know is where to go when it gets to an
> intersection... doesn't need to track intersections [along
> dead-ends]... doesn't need to know the distance[s]...
> doesn't need to identify ["pure"] turns...
Agreed. What it does right now is look for turns or intersections, making
choices only there. If it's just a turn (a "pure" turn), it makes the turn and
doesn't bother updating the memory of the path. Ideally (in other words, in the
untested portion) when it hits a dead end, it backtracks to the last node in the
tree, pruning off that "dead branch" and choosing again; if there are no valid
paths left to explore off that node, it returns to the parent node in the tree,
pruning the newly-dead node as well. Essentially, it's a depth-first search of a
graph or tree representation of the maze, built dynamically as the robot
traverses branches.
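Roughly, in plain C rather than NQC (with made-up helper names like
pick_heading() and drive(), plus one simplification: the branch being explored
is cleared from a node's mask when leaving it, which plays the role of the
stored "last direction left"), the explore/backtrack idea looks something like
this:

extern int  pick_heading(int openMask);   /* choose one of the set headings */
extern void drive(int heading);           /* drive off one cell that way    */

#define MAX_NODES 24

int openPaths[MAX_NODES];   /* headings still to explore (4-bit mask) */
int parentDir[MAX_NODES];   /* heading back toward this node's parent */
int top = -1;               /* top of the node stack */

/* Pick an unexplored heading at the newest node and drive off that way;
 * if nothing is left there, back up to its parent and prune it. */
void leave_current_node(void)
{
    while (top >= 0 && openPaths[top] == 0) {
        drive(parentDir[top]);      /* return toward the parent node... */
        top--;                      /* ...and prune the dead node       */
    }
    if (top >= 0) {
        int dir = pick_heading(openPaths[top]);
        openPaths[top] &= ~(1 << dir);   /* this branch is now taken */
        drive(dir);
    }
}

/* Called at each newly discovered junction; 'open' is the mask of
 * unexplored headings, 'back' the heading leading back the way we came. */
void new_junction(int open, int back)
{
    top++;
    openPaths[top] = open;
    parentDir[top] = back;
    leave_current_node();
}

A dead end then just means retracing to the node on top of the stack (the
branch that led there has already been cleared from its mask) and calling
leave_current_node() again.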
It also means that, currently, there are three things each node needs to
record: the "structure" (compass headings of valid paths to explore from the
node - four bits), the last direction the node was left from (2 bits; this way
we know which direction to "prune" from the node if we are sent back here by a
dead end), and the heading back to the parent of this node (again, 2 bits). That
works out nicely to 8 bits per node, meaning I can pack two nodes of info into a
single NQC variable.
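In plain C (NQC's &, |, << and >> behave the same way on its 16-bit variables;
the layout and names here are just one way to do it), the packing might look
like:

/* Per-node layout: pppp ll hh
 *   pppp = 4-bit mask of headings still to explore
 *   ll   = 2-bit heading the node was last left by
 *   hh   = 2-bit heading back to the parent            */
int pack_node(int paths, int lastDir, int parentDir)
{
    return (paths << 4) | (lastDir << 2) | parentDir;   /* fits in 8 bits */
}

/* Two packed nodes share one 16-bit word: node 2i in the low byte,
 * node 2i+1 in the high byte. */
int get_node(int *mem, int i)
{
    int word = mem[i / 2];
    return (i & 1) ? ((word >> 8) & 0xFF) : (word & 0xFF);
}

void set_node(int *mem, int i, int packed)
{
    if (i & 1)
        mem[i / 2] = (mem[i / 2] & 0x00FF) | (packed << 8);
    else
        mem[i / 2] = (mem[i / 2] & 0xFF00) | (packed & 0xFF);
}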
> From the robot's point of view, it has three options
> at an intersection, go right, straight, or left.
Yep, I took Bryan's suggestion and lobotomized the code to do a simple
"left-hand" version that now solves the maze quite well (at least it works on my
simple black-on-white 4x5 maze). That was easy, as all the junction types are
selected via a switch statement.
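In spirit (with invented junction names and helper calls - this isn't the real
Trellis code), the left-hand version of that switch is just "take the leftmost
exit available":

extern void turn_left(void), go_straight(void), turn_around(void);

/* Junction types, as the sensors might classify them; pure turns are
 * handled elsewhere and never reach this switch. */
enum { LEFT_STRAIGHT, LEFT_RIGHT, STRAIGHT_RIGHT, CROSS, DEAD_END };

void choose_exit(int junction)
{
    switch (junction) {
    case LEFT_STRAIGHT:
    case LEFT_RIGHT:
    case CROSS:
        turn_left();        /* a left exit always wins */
        break;
    case STRAIGHT_RIGHT:
        go_straight();      /* no left exit, so straight beats right */
        break;
    case DEAD_END:
        turn_around();
        break;
    }
}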
> I think I have my program set to track 36 intersections
> (not sure where I got that number from) In reality, I
> suspect you'll need to remember less than 10.
I figured the worst case was a single path to the finish where a single-unit
dead end splits off from every node. That way roughly half the possible "cells"
in the map would be dead ends and half junctions, with the single path to the
finish consisting of all the junctions. So the number of junctions must be less
than half the number of cells in the maze (the ChiBots maze is 7x7 = 49 cells,
so worst-case I'd need to store 24 nodes).
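As constants (worth writing out only to note that, with two packed nodes per
variable, this worst case also bounds the storage at a dozen NQC ints):

#define MAZE_CELLS  (7 * 7)                  /* ChiBots maze: 7x7 grid          */
#define MAX_NODES   (MAZE_CELLS / 2)         /* = 24 junction nodes, worst case */
#define NODE_WORDS  ((MAX_NODES + 1) / 2)    /* = 12 words, two nodes per word  */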
> > I want to apply at least one other optimization. At a
> > junction, the first choice of direction to explore
> > should be either straight (turns take time) or towards
> > the exit (this maze starts at grid coordinate (0,0)
> > and finishes at (6,6) - I might end up taking advantage
> > of that, if I (the programmer) get time).
>
> I don't think this is a good idea. It could easily cost
> you much more time than you save.
Why? Compared to a left- or right-hand following system (I'm Ass-U-Meing that I
can NOT choose which of those options to use after seeing the maze - in other
words, a "micromouse" philosophy of absolutely no "signalling" of information to
the robot after the maze is revealed), how would choosing any other depth-first
search order take, on average, longer? I'm not talking about choosing "straight
ahead" at all junctions all the time - just that when the robot is confronted by
a choice of equally-valid branches to search in depth, it chooses the one that
is (as far as is known) the quickest, i.e., the one the robot can enter without
turning.
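One possible version of that "prefer not turning" choice (the function name,
the 0-3 compass encoding, and the try-order are all mine - the real decision
would live inside whatever pick_heading() ends up being):

/* Among unexplored headings, prefer the one that needs no turn.
 * Headings are 0-3 compass directions; relative to the current heading,
 * +1 is a right turn, +3 a left turn, +2 an about-face. */
int pick_straight_first(int openMask, int heading)
{
    static const int tryOrder[4] = { 0, 1, 3, 2 };  /* straight, right, left, back */
    int i;
    for (i = 0; i < 4; i++) {
        int dir = (heading + tryOrder[i]) & 3;
        if (openMask & (1 << dir))
            return dir;
    }
    return -1;   /* nothing left to explore from this node */
}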
The same goes for the "trying to use the location" idea, although that's much
more complicated (because at this point I'm not locating the cells in space,
just by their connections). But I don't see why it should be slower.
> Of course during your "mapping" run, it really
> doesn't matter how long it takes.
Agreed. But I still think it would be "nice" if I could find ways to minimize
that as well. As you noted, this is more a matter of folks getting together to
share neat solutions, and I'd like to contribute (and learn) as much as
possible. I will not be able to add to a ChiBots discussion of which
microprocessor to use, or the relative merits of LED color in line-following
sensors, but I can demonstrate some code ideas and how to work with a limited
"palette" of materials (sensors, motors, structures).
Thank goodness I didn't read Steve's post. Now, excuse me, I have some
critical programming to do on Trellis... like what song & dance to do when
(if) it finds the exit. Suggestions? I can't find the note sequence for "Axle F"
as yet ;-).
--
Brian Davis