Subject: RE: 'Bot Camera
Newsgroups: lugnet.robotics
Date: Tue, 22 Feb 2000 16:48:56 GMT
Original-From: Marco C. <marco@soporcelSPAMLESS.pt>
At 09:05 22-02-2000 -0700, Morgan, David wrote:
> Let's say that I am at work, and I leave the robot on all day. Now when I
> come home do I want to watch 12 hours of recorded video? I think not... Your
It all depends on the intended function of the bot ;) but yes, one would
want to see only the bits with movement.
If it is a "record on movement" kind of bot, all of it can be done by the PC
vision system: the video analysis, the movement detection, the decision, and
the video capture. I see the bot as an "extension" of the PC into the real
world... like... its hand... well... a mix of hand & semi-brainless head
(eyes and ears) :>
Mind you, I'm talking about a bot built with a LEGO Technic CyberMaster,
with RF communication to the PC, and with a wireless XCam color video (and
audio) camera.
> "eyes" wouldn't be closed, you just wouldn't be remembering what you saw,
> unless it met certain criteria.
Well, when I say "eyes closed" I'm talking about the way you use other, less
evolved sensors (Light and Touch sensors) to help in the decision, when you
already have one of the best sensors for detecting movement: vision.
(Well, at least in this context, assuming you're talking about LEGO's Light
sensor, and using some kind of auxiliary light or working in daylight.)
> You could do this on the RCX side, the computer side, or both. For example
> from the robot side, lets say the robot is sitting stationary and detects
> movement using the Infrared transmitter as a motion detector. This would be
> grounds to tell the computer to record what it is seeing. On the computer
> side, the bot could move to one of several security look-out points and
> signal the computer that it is in position. You could then take a reference
> frame, and use that to determine if something in the scene has changed and,
> if it has, start recording....
Yes, that can be done with so-called MovingBlob detection, a useful
robotic vision "function".
From what I could learn about the new LEGO Mindstorms Vision Command
System, that's what they are using: the camera detects movement and the
bot then acts accordingly.
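To make the idea concrete, here is a minimal sketch of PC-side motion detection by frame differencing (the simplest cousin of "MovingBlob" detection). This is not LEGO's actual implementation; the frames are toy 8x8 grayscale grids and the two thresholds are made-up values for illustration:

```python
# Minimal frame-differencing sketch of camera-based motion detection.
# Frames are grayscale pixel grids (lists of lists of 0-255 ints).
# pixel_thresh and count_thresh are illustrative, not LEGO's values.

def motion_detected(prev, curr, pixel_thresh=30, count_thresh=4):
    """Return True when enough pixels changed between two frames."""
    changed = 0
    for row_prev, row_curr in zip(prev, curr):
        for a, b in zip(row_prev, row_curr):
            if abs(a - b) > pixel_thresh:
                changed += 1
    return changed >= count_thresh

frame1 = [[10] * 8 for _ in range(8)]   # static background
frame2 = [row[:] for row in frame1]
for y in range(2, 5):                   # a "blob" moves into view
    for x in range(2, 5):
        frame2[y][x] = 200

print(motion_detected(frame1, frame1))  # False: nothing changed
print(motion_detected(frame1, frame2))  # True: 9 pixels changed
```

A real blob detector would also group the changed pixels into connected regions (to get a position and size for the moving object), but the decision "something moved, start recording" only needs this much.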
> Do you see where I am going with this? I would like to just see the
> interesting parts of a robots day. I am assuming that the "interesting
> parts" would contain movement of some type.
Yes, I understand.
What I didn't understand was: why must the pBrick (RCX, CyberMaster,
whatever) *ask/command/warn* the PC to initiate the video capture,
when that decision can be made by the PC itself, since the PC *is* the
"vision" part of the setup?
> I'm not sure what kind of lag you would encounter while all of these
> decisions are being made. If it turns out interesting stuff is getting cut
> off at the beginning, a buffer could be used to compensate.
One solution would be for the PC to take all the decisions: the detection
of movement through video (using robotic vision software, like the
Vision Command System), and the decision to start/stop capturing.
____________________
Marco C. aka McViper
|
|
1 Message in This Thread:
- Entire Thread on One Page:
- Nested:
All | Brief | Compact | Dots
Linear:
All | Brief | Compact
|
|
|
Active threads in Robotics
|
|
|
|