Augmented Reality for Robot Development and Experimentation

Jorge Dávila Chacón
Authors: Mike Stilman, Philipp Michel, Joel Chestnutt, Koichi Nishiwaki, Satoshi Kagami, James J. Kuffner
Introduction & Related Work
Overview
Ground Truth Modelling
Evaluation of Sensing
Evaluation of Planning
Evaluation of Control
Discussion
Introduction
Virtual simulation: find critical system flaws or software errors.
As the interconnected components for perception, planning, and control multiply, testing them together becomes increasingly difficult:
 Vision system: is its model of the environment accurate?
 Navigation planner: has it produced an erroneous path?
 Controller: is it properly following the desired trajectory?
Objective: To present a ground truth model
of the world and to introduce virtual
objects into real world experiments.
Establish a correspondence between virtual components (environment models, plans, intended robot actions) and the real world.
Visualize and identify system errors prior
to their occurrence.
Related Work
For humanoid robots: simulation engines model dynamics and are used to test controllers, kinematics, geometry, higher-level planning and vision components.
Khatib: haptic interaction with the virtual environment.
Purely virtual simulations are limited to
approximating the real world (Rigid body
dynamics and perfect vision).
Hardware-in-the-loop simulation (Aeronautics and space robotics).
Virtual overlays for robot teleoperation:
Design and evaluate robot plans.
Tracking speed, robustness and accuracy enhanced by binocular cameras.
Hybrid tracking using markers (Retroreflective markers, LEDs and/or magnetic trackers).
Overview
Lab space setting:
“Eagle-4” Motion analysis system,
cameras and furniture objects.
Experiments focus on high-level autonomous tasks for the humanoid robot “HRP-2”:
choosing foot locations to avoid obstacles and manipulating obstacles to free its path.
Technical details
“Eagle-4” system:
Eight cameras covering a 5 × 5 × 2 m volume.
Distances calculated to 0.3% accuracy.
Dual Xeon 3.6 GHz processor.
“EVa Real-Time” (EVaRT) software: locates 3D markers at a maximum rate of 480 Hz and 1280 × 1024 resolution (at least 60 markers at 60 Hz).
A virtual chair is overlaid in real time; both the chair and the camera are in motion.
Ground Truth Modeling
 Reconstructing Position and Orientation
 Individually identified markers attached to an object can be expressed as a set of points {a1, ..., an} in the object’s coordinate frame “F” (the object template).
 The displaced marker location “bi” is found from the rotation “R” and translation vector “t”, computed via the centroids of the markers:

 [bi; 1] = [R t; 0 1] [ai; 1],  i.e.  bi = R ai + t
Markers occluded from motion capture: the algorithm is performed only on the visible markers, so their corresponding rows in the matrix must be removed.
The new centroids are those of the visible markers and of their associated template markers.
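As a concrete illustration (not code from the paper), the rigid-body fit of visible markers to the object template can be computed with the standard Kabsch/SVD least-squares method; all function and variable names below are illustrative:

```python
import numpy as np

def fit_rigid_transform(template, observed, visible):
    """Least-squares rigid-body fit (Kabsch/SVD) of template markers a_i
    to observed markers b_i, using only the currently visible markers."""
    A = template[visible]                       # template points a_i (object frame)
    B = observed[visible]                       # measured points b_i (world frame)
    ca, cb = A.mean(axis=0), B.mean(axis=0)     # centroids of visible markers
    H = (A - ca).T @ (B - cb)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t                                 # b_i ≈ R a_i + t
```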
 Reconstructing Geometry and Virtual Cameras
 3D triangular surface meshes form environment
objects (Manually edited for holes and
automatically simplified to reduce the number of
vertices).
 The position of the robot’s camera, obtained from the ground-truth positioning information together with the computation of its optical axis, provides the “Virtual view”.
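A minimal sketch of how such a “virtual view” can be formed, assuming a calibrated intrinsic matrix K and the camera pose reported by motion capture; the conventions and names are assumptions, not taken from the paper:

```python
import numpy as np

def virtual_view_matrix(K, R_cam_in_world, p_cam_in_world):
    """Build a 3x4 projection matrix for the 'virtual view' from the
    camera pose reported by motion capture (camera-to-world rotation
    and camera position in world coordinates)."""
    R = R_cam_in_world.T                      # world -> camera rotation
    t = -R @ p_cam_in_world                   # world -> camera translation
    return K @ np.hstack([R, t.reshape(3, 1)])

def project_points(P, pts_world):
    """Project Nx3 world points (e.g. mesh vertices) to pixel coordinates."""
    pts_h = np.hstack([pts_world, np.ones((len(pts_world), 1))])
    uvw = (P @ pts_h.T).T
    return uvw[:, :2] / uvw[:, 2:3]
```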
Evaluation of Sensing
Ground-truth positioning information localizes sensors: cameras, range finders.
This makes it possible to build reliable global environment representations (Occupancy grids or height maps) for robot navigation planning.
Overlaying them onto projections of the real world lets us evaluate the sensing algorithms used to construct world models.
 Reconstruction by Image Warping
 Tracking the camera’s position, using motion capture to recover its projection matrix, enables a 2D homography between the floor and the image plane.
 To build a 2D occupancy grid of the environment
for biped navigation, we assume that all scene
points of interest lie in the z = 0 plane.
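Under the z = 0 assumption, the floor-to-image homography is simply the projection matrix with its third column dropped. A hedged sketch using NumPy and OpenCV (grid parameters and names are illustrative):

```python
import numpy as np
import cv2  # OpenCV, assumed available

def floor_homography(P):
    """For scene points on the z = 0 plane, x ~ P [X, Y, 0, 1]^T,
    so the floor -> image homography is P with its third column removed."""
    return P[:, [0, 1, 3]]

def warp_to_floor(image, P, grid_size_px, metres_per_px):
    """Warp the camera image onto a top-down floor grid, suitable as input
    to a 2D occupancy grid.  Grid origin at world (0, 0) is an assumption."""
    H_floor_to_img = floor_homography(P)
    S = np.diag([metres_per_px, metres_per_px, 1.0])   # grid pixel -> metres
    H_grid_to_img = H_floor_to_img @ S
    # warpPerspective wants the image -> grid mapping, hence the inverse
    return cv2.warpPerspective(image, np.linalg.inv(H_grid_to_img), grid_size_px)
```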
 Reconstruction from Range Data
 The “CSEM Swiss Ranger SR-2” time-of-flight (TOF) range sensor is used to build 2.5D height maps of the environment objects.
 Motion-capture based localization lets us
convert range measurements into clouds of 3D
points in world coordinates in real-time.
 Environment height maps can be cumulatively
constructed.
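A rough sketch of how such a height map could be accumulated from motion-capture-localized range data; the grid layout and the max-height update rule are assumptions for illustration, not details from the paper:

```python
import numpy as np

def accumulate_height_map(points_sensor, R_ws, t_ws, height_map, cell_size, origin):
    """Transform a range-sensor point cloud into world coordinates using the
    motion-capture pose (R_ws, t_ws) of the sensor, then fold it into a 2.5D
    height map that keeps the highest observed point per grid cell."""
    pts_world = points_sensor @ R_ws.T + t_ws            # Nx3 points, world frame
    cols = ((pts_world[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((pts_world[:, 1] - origin[1]) / cell_size).astype(int)
    h, w = height_map.shape
    ok = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    np.maximum.at(height_map, (rows[ok], cols[ok]), pts_world[ok, 2])
    return height_map
```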
Figures: example “box” scene; raw sensor measurement; point cloud views of the reconstructed box.
 Registration with Ground Truth
 Environments reconstructed by image warping or from range data allow us to visually evaluate the accuracy of our perception algorithms.
 Parameter adjustments can be made on the fly by overlaying the generated environment maps back onto a camera view of the scene.
Evaluation of Planning
 Video overlay displays diagnostic information
about the planning and control process in
physically relevant locations.
 The robot plans a safe sequence of actions to convey itself from its current configuration to a goal location.
The goal location and obstacles were moved while the robot was walking, requiring constant updates of the plan.
Example camera image. Synthesized ground plane view. Corresponding environment map.
 The planning algorithm evaluates candidate footstep locations through a cluttered environment under three sensing configurations:
 • Motion capture obstacle recognition.
 • Localized on-board sensors.
 • Self-contained vision: motion capture data is removed completely and the robot uses its own odometry to build maps of the environment.
 Visual Projection: Footstep Plans
 For each step, the planner computes the 3D position and orientation of the foot.
 The planned footsteps are overlaid in real time onto the augmented-reality view of the environment (Continuously updated while walking).
 This display exposes the planning process to
identify errors and gain insight into the
performance of the algorithm.
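One way such an overlay could be drawn, assuming the 3x4 projection matrix of the ground-truth-localized camera and a rectangular footprint model; the footprint size is illustrative, not an HRP-2 specification:

```python
import numpy as np
import cv2  # OpenCV, assumed available

# Footprint corners in the foot's local frame (illustrative size, metres)
FOOT_CORNERS = np.array([[ 0.12,  0.065, 0.0],
                         [ 0.12, -0.065, 0.0],
                         [-0.12, -0.065, 0.0],
                         [-0.12,  0.065, 0.0]])

def draw_footstep_plan(image, P, footsteps):
    """Overlay planned footsteps onto the camera image.
    'footsteps' is a list of (R, t) world poses for each foot placement;
    P is the camera projection matrix built from the ground-truth pose."""
    for R, t in footsteps:
        corners_world = FOOT_CORNERS @ R.T + t                   # 4x3 in world frame
        corners_h = np.hstack([corners_world, np.ones((4, 1))])  # homogeneous coords
        uvw = (P @ corners_h.T).T
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(np.int32)
        cv2.polylines(image, [uv.reshape(-1, 1, 2)], True, (0, 255, 0), 2)
    return image
```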
Occupancy grid generated from the robot’s camera.
Planar projection of an obstacle recovered from range data.
Temporal Projection: Virtual Robot
For experimentation, the real world is preferred over a completely simulated environment: the “avatar” proposal, a simulated robot overlaid amongst real obstacles.
Instead of replacing all sensing with perfect ground-truth data, we can simulate varying degrees of sensor realism.
Objects And The Robot’s Perception
Slowly increase the realism of the data which the system must handle.
By knowing the poses of all objects as well as of the robot’s sensors, we can determine which objects are detectable by the robot at any given point in time.
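A coarse detectability test of this kind might look as follows; the field-of-view and range values are placeholders, not parameters from the paper:

```python
import numpy as np

def object_detectable(obj_pos_world, R_wc, t_wc, hfov_deg=60.0, max_range=4.0):
    """Rough visibility test: is an object's centre inside the camera's
    field-of-view cone (world-to-camera transform R_wc, t_wc) and within
    sensing range?  FOV and range here are illustrative defaults."""
    p_cam = R_wc @ obj_pos_world + t_wc           # object centre in camera frame
    if p_cam[2] <= 0 or np.linalg.norm(p_cam) > max_range:
        return False                              # behind the camera or too far away
    angle = np.degrees(np.arctan2(np.hypot(p_cam[0], p_cam[1]), p_cam[2]))
    return angle <= hfov_deg / 2.0
```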
Footstep plan displayed onto
the world.
Augmented reality with a simulated robot
amongst real obstacles.
Evaluation of Control
Objective: To maximize the safety of the
robot and the environment.
To accomplish this, we perform hardware-in-the-loop simulations while gradually introducing real components.
“Complexity of the Plant”
 Virtual Objects
 Simulation: analyze the interaction of the robot with a virtual object via a geometric and dynamic model of the object.
 In case of a failure, we observe and detect virtual collisions without affecting the robot hardware.
 Similarly, these concepts can be applied towards
grasping and manipulation.
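For illustration only, a virtual collision between a robot link and a virtual object could be detected with a coarse bounding-box test; the real system would use the full geometric and dynamic model of the object:

```python
import numpy as np

def virtual_collision(link_aabb_min, link_aabb_max, obj_aabb_min, obj_aabb_max):
    """Detect a 'virtual collision' between a robot link and a virtual object,
    both approximated by axis-aligned bounding boxes in world coordinates."""
    return bool(np.all(link_aabb_max >= obj_aabb_min) and
                np.all(obj_aabb_max >= link_aabb_min))
```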
Precise Localization
Precise localization is needed to perform force control on an object during physical interaction.
The alternatives are fixing the initial conditions of the robot and environment, or asking the robot to sense and acquire a world model prior to every experiment.
The hybrid experimental model avoids the rigidity of the former approach and the overhead time required by the latter.
Virtual optical sensor: efforts can be focused on algorithms for making contact with the object and on evaluating the higher-frequency feedback required for force control.
 Gantry Control
 The physical presence of the gantry and its operator prevents testing fine manipulation and navigation in cluttered environments that require close proximity to objects.
 To bypass this problem, a ceiling-suspended gantry was implemented that can follow the robot throughout the experimental space.
Discussion
The paradigm leverages advances in optical motion capture speed and accuracy to enable simultaneous online testing of complex robotic system components.
It promotes rapid development and validation testing of each of the perception, planning and control components.
 Future Work
 Automated methods for environment modeling (An object with markers could be inserted into the environment and immediately modeled for use).
 Automatic sensor calibration in the context of a
ground truth world model.
 Enhanced visualizations by fusing local sensing (Gyroscope and force sensors) into the virtual environment.