In lugnet.org.us.laflrc, Thomas Chesney wrote:
> ...one way to do it is to use the 'always turn left
> (or right)' technique... recording the (x,y) position
> of each turn... Getting accurate (x,y) data is a pain
> though.
Agreed. Using two rotation sensors could certainly work, but that takes two
RCX inputs, leaving me only one light sensor for line following (OK most of the
time) and detecting & describing junctions (rather more difficult with a single
light sensor). Timing the distance might work in some cases, but here it would
be tough - the time between two nodes in this maze isn't a nice, constant
function, and a long straightaway could really mess up this method. Still, it
might be a possibility (& it might help with another problem I've had; see
below).
> One variable holds the direction N,S,E,W to
> calculate if the x or y should be going up
> or down.
Yep, one variable is the heading, although currently it is used to help
identify which branch out of a junction I've taken (and how much I need to turn
to get there).
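Something like this is what I have in mind (a rough sketch, not my actual
code - the StepForward / TurnSteps names and the 0=N, 1=E, 2=S, 3=W encoding
are just for illustration):

// heading: 0 = N, 1 = E, 2 = S, 3 = W
int heading;    // current compass direction
int x, y;       // grid cell the robot occupies
int turn;       // result of TurnSteps(): 0=straight, 1=right, 2=U-turn, 3=left

// Advance the recorded position one cell along the current heading.
void StepForward()
{
    if (heading == 0) y += 1;        // north
    else if (heading == 1) x += 1;   // east
    else if (heading == 2) y -= 1;   // south
    else x -= 1;                     // west
}

// How many 90-degree clockwise steps to face the branch whose absolute
// heading is 'want' (NQC functions are void, so the answer lands in the
// global 'turn').
void TurnSteps(int want)
{
    turn = want - heading;
    if (turn < 0) turn += 4;
}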
> How does this sound?
Good, although I want to apply at least one other optimization. At a
junction, the first choice of direction to explore should be either straight
(turns take time) or towards the exit (this maze starts at grid coordinate (0,0)
and finishes at (6,6) - I might end up taking advantage of that, if I (the
programmer) get time).
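As a sketch of that ordering idea (hypothetical - it reuses the heading/x/y
globals from the sketch above, and ScoreBranch is a made-up name), the solver
could score each open branch and explore the highest score first:

#define EXIT_X 6
#define EXIT_Y 6

int score;    // result of ScoreBranch()

// Score a candidate branch by its absolute heading: straight ahead is
// cheapest (no turn), and a branch pointing toward the (6,6) exit beats
// one pointing away from it.
void ScoreBranch(int want)
{
    score = 0;
    if (want == heading) score += 2;           // no turn needed
    if (want == 1 && x < EXIT_X) score += 1;   // east, exit still to the east
    if (want == 0 && y < EXIT_Y) score += 1;   // north, exit still to the north
}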
The other problem I've had (referenced above) is detecting dead-ends in a
timely fashion. The current design runs about 17 inches/sec on a straightaway,
and recognizes a dead-end by the fact that it hasn't seen the line in the last
0.2 seconds (so dead-end overshoots are at least 4"... in practice, more like
6"). Worse, they are not straight-line overshoots (the line-following code is
trying to "correct" back to a line that's not there) but curve right. I *think*
I can reverse the curve it follows during the overshoot (at least roughly), but
a 6" overshoot is far more than I'd like. Reducing the time before the routine
flags "line lost, must be a dead-end" would be ideal, but my options are
limited. For one thing,
I'm monitoring the timer with an event, so the lowest resolution is 0.1 seconds
(NQC can't use FastTimer() as a source for an event). Secondly, there are times
it takes more than 0.1 seconds to re-find the line during normal line-following.
Right now I'm seeing if I can have yet another task running to watch for some
sort of clearer "end of line" condition - like no significant variation in the
sensor reading during a short (9 msec? 12 msec?) time. Even that looks like it
will need to be done by timing loops, certainly not Wait().
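Roughly, the watcher task I'm imagining would look something like this (very
much a sketch - WatchLineEnd, lineGone, the sensor port, the band width, and
the sample count are all guesses that would need tuning on the real robot):

#define SAMPLES   10    // tune so the loop spans roughly 9-12 msec
#define FLAT_BAND  2    // spread in the reading we still call "no variation"

int lineGone;           // set to 1 when the line looks truly gone

task WatchLineEnd()
{
    int i, v, lo, hi;
    while (true)
    {
        lo = 100;
        hi = 0;
        for (i = 0; i < SAMPLES; i++)   // tight timing loop, no Wait()
        {
            v = SENSOR_2;               // light sensor, SENSOR_LIGHT mode
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        if (hi - lo <= FLAT_BAND) lineGone = 1;
        else lineGone = 0;
    }
}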
--
Brian Davis