Subject: Re: Robotic simulators
Newsgroups: lugnet.robotics
Date: Tue, 26 Apr 2005 13:23:51 GMT
Original-From: PeterBalch <(peterbalch@compuserve)IHateSpam(.com)>
  
Steve

> the robots themselves would (presumably) have
> been designed by you with only a handful of design variations.

No. Even in the 2D version, you can have different weight distributions,
different motors and gears, and any number of different wheels arranged
where you like with Ackermann steering, skid steering, idler wheels, casters,
etc. Then the wheels get damaged during fights ... Getting the AI robots to
drive using whatever wheel arrangement you give them was fun.
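
For illustration, here is a rough Python sketch of how a single "drive
forward at v, turn at omega" command might be mapped onto two different
wheel arrangements. The function names and geometry are hypothetical, not
taken from Peter's simulator.

import math

def skid_steer(v, omega, track_width):
    """Return (left_speed, right_speed) for a skid/differential-drive robot."""
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    return left, right

def ackermann(v, omega, wheelbase):
    """Return (speed, steering_angle) for an Ackermann-steered robot."""
    if abs(omega) < 1e-9:
        return v, 0.0                      # driving straight
    radius = v / omega                     # turning radius of the rear axle
    steer = math.atan(wheelbase / radius)  # front-wheel steering angle
    return v, steer

# The same high-level command works for either drivetrain:
print(skid_steer(0.3, 1.0, track_width=0.15))   # m/s per side
print(ackermann(0.3, 1.0, wheelbase=0.20))      # m/s, radians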

Imagine two (simulated) robots fighting. One gets tipped over but responds
by operating its CO2-powered self-righting mechanisms. I felt that if I
couldn't simulate that, then I'd failed.

>> It's not necessary to model the physics exactly. Just so long as the
>> result is within 10%. After
> That's true if it's for a game - but if you wanted to debug an RCX program,
> I don't think that would be good enough.

I disagree.

You can measure the length and mass of your Lego robot's components
accurately, but I challenge you to measure anything else to better than 10%.
If your robot stops working when one of the pieces is 10% heavier, then it's
badly designed.

Friction is very hard to measure in the robot's operating conditions. Do
you know the torque of your gearbox-motors at different speeds and battery
voltages? How do they behave when back-driven? What is the Young's modulus
of the plastic? Over what range is it perfectly elastic? How does plastic
deformation work? How does that alter as the brick ages?

I think 10% is perfectly good enough.

I've worked with physicists and engineers who demanded ludicrous levels of
accuracy in the simulation when it was clear that they were only making a
very rough guess at most of the parameters. And I've simulated (biological)
neural nets where I was lucky to know a parameter to +/-50% (fortunately
biological neural nets are designed to work properly with huge tolerances
in their components). I'm a great believer in Monte Carlo methods.
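
As a minimal sketch of what such a Monte Carlo robustness check can look
like: every uncertain parameter is perturbed within its tolerance and the
simulation rerun many times. The simulate() function and the parameter names
below are hypothetical placeholders, not from any real simulator.

import random

def simulate(motor_torque, friction, mass):
    """Stand-in for one run of the robot simulation; returns True on success."""
    # e.g. does the robot still climb the ramp with these values?
    return motor_torque / (friction * mass) > 2.0

nominal = {"motor_torque": 0.12, "friction": 0.02, "mass": 1.5}
tolerance = 0.10                      # +/-10% on every parameter

successes = 0
runs = 1000
for _ in range(runs):
    perturbed = {k: v * random.uniform(1 - tolerance, 1 + tolerance)
                 for k, v in nominal.items()}
    if simulate(**perturbed):
        successes += 1

print(f"design works in {100 * successes / runs:.1f}% of sampled worlds")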

> Also, errors tend to multiply in physics simulations. Something where just
> two objects collide against a flat surface isn't likely to suffer too badly
> but a robot with perhaps a few dozen moving parts will rapidly develop cases
> where 10% errors add up a dozen times and turn into a 120% error.

Yes, many physics simulations are chaotic. I could simulate snooker balls
to ridiculous accuracy and still be 120% out after three collisions.

Most of the things that Lego robots get up to are chaotic which is why 10%
is good enough.
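
A back-of-the-envelope sketch of why: each ball-to-ball collision amplifies
a small directional error by roughly (distance travelled / ball radius), so
even an error far better than 10% swamps the prediction within a few
impacts. The numbers below are illustrative.

radius = 0.026          # snooker ball radius, metres
spacing = 0.30          # typical distance travelled between collisions
amplification = spacing / radius   # roughly 11.5x per collision

error = 0.001           # initial direction error: 0.1%, far better than 10%
for collision in range(1, 4):
    error *= amplification
    print(f"after collision {collision}: direction error ~{error * 100:.0f}%")
# After three collisions even a 0.1% error has grown past 100%, which is
# why chasing sub-10% accuracy buys very little here.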

> One approach is to backtrack time.

I tried backtracking and it was too slow. I wanted the simulation to run in
real time, so it was important that every millisecond of simulated time took
0.1 ms of real time.
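
A minimal sketch of that timing budget, assuming a fixed 1 ms physics step
and a hypothetical step_world() update:

import time

DT = 0.001            # 1 ms of simulated time per step
BUDGET = 0.0001       # 0.1 ms of real time allowed per step

def step_world(dt):
    pass              # placeholder for integrating the physics by dt seconds

sim_time = 0.0
overruns = 0
for _ in range(10_000):                  # simulate 10 s of robot time
    start = time.perf_counter()
    step_world(DT)
    sim_time += DT
    if time.perf_counter() - start > BUDGET:
        overruns += 1                    # this step blew the real-time budget

print(f"simulated {sim_time:.1f} s, {overruns} steps over budget")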

> Did your simulation cope with the situation where one cube came completely
> to rest sitting on one face of the other cube? Such 'resting collisions'
> can be really tricky to deal with because the simulation applies gravity
> to the top cube - which causes it to start accelerating downwards. Now
> you detect a collision and apply a restoring force - which causes the top
> cube to accelerate upwards.

I've met that problem before but, in this case, collisions were fairly
lossy so they settled down almost instantly.
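
A toy 1D sketch of the resting-contact problem and the "lossy" fix: a block
dropped onto a surface, handled with a penalty spring plus heavy damping so
the contact settles instead of buzzing. The constants are illustrative, not
from Peter's code.

GRAVITY = -9.81
STIFFNESS = 5000.0   # penalty spring pushing the block out of the surface
DAMPING = 60.0       # energy loss that lets the contact settle, not oscillate
MASS = 0.1
DT = 0.001

y, vy = 0.05, 0.0    # block starts 5 cm above the surface at y = 0
for step in range(2000):                  # 2 s of simulated time
    force = MASS * GRAVITY
    if y < 0.0:                           # penetrating the surface
        force += -STIFFNESS * y - DAMPING * vy
    vy += force / MASS * DT
    y += vy * DT

print(f"final height {y*1000:.2f} mm, final speed {vy*1000:.2f} mm/s")
# With enough damping the block comes to rest a hair below the surface
# instead of oscillating - the jitter described above never builds up.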

Peter


