
Vision-Based Control for Omni-directional Camera Equipped Mobile Robots
Computational Interaction and Robotics Laboratory
Johns Hopkins University
Baltimore, MD 21218
August 2002
0 Abstract
1 Introduction
A common problem in most applications of mobile systems is to keep the vehicle on a pre-defined multi-segmented path. This path may be a corridor through a factory to transport parts from one machine to another, a route through a building to give a pre-specified tour [B3], or a known path between offices in the case of a courier robot [B5]. Several systems have been proposed to solve this problem, most of which operate based on maps [B4, B10, B11, B14, B13] or on localization from artificial landmarks in the environment [B8, B5]. Another system uses dynamic sets of natural landmarks [Burschka].
A major limitation of all these approaches is the limited field of view of conventional front-mounted vehicle cameras. A typical front-mounted camera with a focal length of 8 mm covers only a narrow field of approximately 50º in both the horizontal and vertical directions. All useful landmarks must therefore lie within at most 50º of each other, which rarely occurs in a practical natural setting. Furthermore, when all landmarks are concentrated within this small range, a sudden obstruction can easily stall or confuse the localization. Introducing an omni-directional camera [Nayar1, Nayar2] into such a landmark-based system eliminates these problems, since such a camera provides a 360º field of view.
One omni-directional vision system uses landmarks to estimate relative position [Das] and another uses
color histograms to distinguish environments [Ulrich]. However, neither can estimate vehicle pose within
an environment for accurate vision-based control along a defined path.
Our approach of using an omni-directional camera for vision-based control eliminates these limitations of both conventional and existing omni-directional systems. In addition, we derive the best- and worst-case criteria for natural landmark selection and the vehicle pose error that results.
The remainder of this article is structured as follows. The next section describes the geometry of the vision-based control problem for an omni-directional camera. Section 3 describes the system model of the non-holonomic platform of the mobile system and the path recording and control methods we use.
Section 4 presents experimental results from the system. We close with a discussion of future work.
2 Navigation in a Local Segment
2.1 System Model
We assume throughout the article a non-holonomic mobile system with kinematics similar to a unicycle.
The system operates in the x-z plane and rotates around the y-axis (fig. 1). Let the Cartesian coordinates
$(x_g, z_g)$ specify the robot position in space and $\theta_g$ describe its orientation. The motion of the robot can be described by its forward velocity $v$ and the superimposed rotational velocity $\omega$ as $u = (v \;\; \omega)^T$. Given these definitions, the kinematics of the system can be described in the Cartesian coordinate system as

$$\dot{x}_g = v \cdot \cos\theta_g, \qquad \dot{z}_g = v \cdot \sin\theta_g, \qquad \dot{\theta}_g = \omega$$
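As a minimal illustration, the following Python sketch integrates this unicycle model with a simple Euler step; the names theta_g, v, and omega mirror the symbols above, and the velocities and time step in the example are arbitrary.

import math

def step_unicycle(x_g, z_g, theta_g, v, omega, dt):
    """Advance the robot pose by one Euler step of the unicycle kinematics."""
    x_g += v * math.cos(theta_g) * dt
    z_g += v * math.sin(theta_g) * dt
    theta_g += omega * dt
    return x_g, z_g, theta_g

# Example: drive a quarter circle of radius 1 m (v = 0.1 m/s, omega = 0.1 rad/s).
pose = (0.0, 0.0, 0.0)
for _ in range(1571):                      # (pi/2) / omega / dt steps
    pose = step_unicycle(*pose, v=0.1, omega=0.1, dt=0.01)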
2.2 Spherical Image Projection
We employ cylindrical coordinates for points observed moving in the plane of the floor. Let us assume the
origin of the camera is at the center of rotation of the mobile system, the optical z axis points in the
“forward” direction of motion of the robot, and the x axis points to the “right” of the direction of forward
travel (fig. 1).
A point in space relative to the robot can then be described by the triple $(r_i, \alpha_i, y_i)$. The polar coordinates $r_i$ and $\alpha_i$ describe the position of the projection of the point onto the horizontal plane containing the camera optical axis. The vertical distance of the point from this plane is given by $y_i$.
We can also define the elevation angle $\phi_i$ above that plane as

$$\phi_i = \arctan\frac{y_i}{r_i}$$

The pair $(\phi_i, \alpha_i)$ are the spherical coordinates of the projection of the point located at $(r_i, \alpha_i, y_i)$ into the (spherical) camera image. Note that $\alpha_i$ increases clockwise and $\phi_i$ increases in the downward direction.
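A minimal Python sketch of this projection follows, assuming the frame defined above (z forward, x to the right, $y_i$ the vertical offset from the plane of the optical axis); the exact clockwise/downward sign conventions of the real system may differ.

import math

def to_image_angles(x_i, z_i, y_i):
    """Map a robot-relative Cartesian point to the omni image angles (phi_i, alpha_i)."""
    r_i = math.hypot(x_i, z_i)        # range of the projection onto the camera plane
    alpha_i = math.atan2(x_i, z_i)    # azimuth, 0 = straight ahead, positive toward +x
    phi_i = math.atan2(y_i, r_i)      # elevation angle above/below the camera plane
    return phi_i, alpha_i

# Example: a point 2 m ahead, 1 m to the right, 0.5 m below the camera plane.
phi, alpha = to_image_angles(1.0, 2.0, -0.5)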
3 Vision Based Control
3.1 The Image Jacobian
Now, assuming holonomic motion in the plane, we can compute the following image Jacobian relating the change of angles in the image, $(\dot{\phi}_i, \dot{\alpha}_i)$, to changes in motion in the plane, $(\dot{x}_i, \dot{z}_i)$, from eq. 3:
The dependency on the Cartesian coordinates can be avoided by considering the geometry of the system to obtain:
Note in particular that the image Jacobian is a function of only one unobserved parameter, $y_i$, the height of the observed point. Furthermore, this value is constant for motion in the plane. Thus, instead of estimating a time-changing quantity, as is the case in most vision-based control, we only need to solve a simpler static estimation problem.
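Since the matrix itself does not appear in this draft, the following Python sketch gives one form that is consistent with the projection defined in Section 2.2, assuming $\alpha_i = \operatorname{atan2}(x_i, z_i)$ and $\phi_i = \arctan(y_i/r_i)$ (the paper's own derivation may differ in sign conventions). It illustrates the key point above: the Jacobian depends only on the observed image angles and the single unobserved height $y_i$.

import math

def image_jacobian(phi_i, alpha_i, y_i):
    """Jacobian of (phi_i, alpha_i) with respect to point motion (x_i, z_i).

    Derived under the assumed conventions alpha_i = atan2(x_i, z_i) and
    phi_i = arctan(y_i / r_i) with r_i = sqrt(x_i^2 + z_i^2).  The only
    unobserved quantity is the constant height y_i.
    """
    s, c = math.sin(alpha_i), math.cos(alpha_i)
    sp = math.sin(phi_i)
    tp = math.tan(phi_i)
    return [[-s * sp * sp / y_i, -c * sp * sp / y_i],   # d(phi_i)/d(x_i, z_i)
            [ c * tp / y_i,      -s * tp / y_i     ]]   # d(alpha_i)/d(x_i, z_i)

# Finite-difference check of the analytic form at an arbitrary point.
def angles(x, z, y):
    r = math.hypot(x, z)
    return math.atan2(y, r), math.atan2(x, z)

x, z, y, eps = 1.0, 2.0, 0.8, 1e-6
phi0, alpha0 = angles(x, z, y)
J = image_jacobian(phi0, alpha0, y)
num_dphi_dx = (angles(x + eps, z, y)[0] - phi0) / eps   # should match J[0][0]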
3.2 Estimating Parameters for the Jacobian Matrix
We can estimate the value $y_i$ for each tracked object $O_i$ from the information collected during the teaching phase of the system. Consider two different robot positions (labeled $p_1$ and $p_2$ in fig. 2) in space with coordinates $(x_1, z_1, \theta_1)$ and $(x_2, z_2, \theta_2)$ at a distance $d_0$ from each other. For simplicity, we take the first location to be the origin. Then we can formulate the following equations based on fig. 2 and eq. 4:
…
The resulting value is used in replay mode to calculate the inverse of the Jacobian matrix (eq. 5). Although formulated here for a single motion, better results are obtained by formulating this system for every motion of the vehicle and solving a least-squares problem.
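As the equations themselves are omitted above, the following Python sketch shows one possible two-step least-squares formulation for the teaching phase; the function name and the decomposition into triangulation plus height fit are our assumptions, not necessarily the system of eq. 4. The landmark's ground-plane position is first triangulated from the bearings measured at the known teaching poses, and $y_i$ is then fit from $y_i \approx r_k \tan\phi_k$ over all observations.

import math

def estimate_height(poses, bearings, elevations):
    """Least-squares estimate of the landmark height y_i.

    poses      : known robot ground positions (x_k, z_k) from the teaching phase
    bearings   : absolute (world-frame) bearing angles beta_k to the landmark,
                 assumed to be robot heading plus the measured alpha_k
    elevations : measured elevation angles phi_k
    At least two distinct bearings are needed for the triangulation step.
    """
    # Step 1: triangulate the landmark ground position q.  Each bearing ray from
    # p_k with direction (sin beta, cos beta) gives the constraint n_k.(q - p_k) = 0
    # with n_k perpendicular to the ray; solve the 2x2 normal equations.
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (px, pz), beta in zip(poses, bearings):
        dx, dz = math.sin(beta), math.cos(beta)   # ray direction
        nx, nz = dz, -dx                          # perpendicular to the ray
        A[0][0] += nx * nx; A[0][1] += nx * nz
        A[1][0] += nz * nx; A[1][1] += nz * nz
        b[0] += nx * (nx * px + nz * pz)
        b[1] += nz * (nx * px + nz * pz)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    qx = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    qz = (A[0][0] * b[1] - A[1][0] * b[0]) / det

    # Step 2: y_i is the scalar least-squares solution of y ~= r_k * tan(phi_k),
    # which for a single unknown reduces to the mean over all observations.
    estimates = [math.hypot(qx - px, qz - pz) * math.tan(phi)
                 for (px, pz), phi in zip(poses, elevations)]
    return sum(estimates) / len(estimates)

# Synthetic, noise-free example: landmark at (2, 3) with height 0.6 m,
# observed from three teaching poses along the x axis.
poses = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
bearings = [math.atan2(2.0 - px, 3.0 - pz) for px, pz in poses]
elevations = [math.atan2(0.6, math.hypot(2.0 - px, 3.0 - pz)) for px, pz in poses]
y_i = estimate_height(poses, bearings, elevations)   # ~= 0.6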
3.3 Processing Sequence
- Teaching phase
- Replay phase
4 Results
4.1 Quality of Tracking
- Graph expected results for pixel to real-world distance accuracy (i.e. arrows in real
world)
- Chart condition numbers vs. field of view and position of tracked objects
4.2 Quality of the pose-Estimation
- Chart y-Estimator value for features of varying size as alpha (omni angle) and x
(distance from feature) change.
- Compare above experimental results to theoretical results from Matlab. Include
simulated noise results.
4.3 Path Following Results
- Chart results of path following on local segments of curves, from radius 0 (rotation about a single point) to radius infinity (path following along a line).
- Compare the above experimental results to theoretical results from Matlab. Include simulated noise results.
5 Stability of the System (small section)
5.1 The Mobile Robot Platform
5.2 Camera Sensor
- Standard front mounted with pan-tilt unit
- Omnicam
5.3 Vision Algorithms
- SSD Feature Extraction
- Color Feature Extraction
6 Conclusion and Future Work
- What worked and what didn’t work. What can be improved and roughly how.
- Suggest useful extension(s) to approach
7 Acknowledgements
8 References (grouped by relevance)
[Burschka] D. Burschka, G. Hager, "Vision-Based Control of Mobile Robots," Proceedings of ICRA 2001, Seoul, May 2001, pp. 1707-1713.
[B6] G. Hager, D. Kriegman, A. Georghiades, O. Ben-Shahar, "Toward Domain-Independent Navigation: Dynamic Vision and Control," CDC, 1998.
[B12] L. Ojeda, H. Chung, J. Borenstein, "Precision-calibration of Fiber-optics Gyroscopes for Mobile Robot Navigation," Proceedings of ICRA, 2000, pp. 2064-2069.
[Ulrich] I. Ulrich, I. Nourbakhsh, "Appearance-Based Place Recognition for Topological Localization," IEEE International Conference on Robotics and Automation (ICRA), April 2000, pp. 1023-1029.
[U8] C. Jennings, D. Murray, J.J. Little, "Cooperative Robot Localization with Vision-based Mapping," IEEE International Conference on Robotics and Automation, May 1999, pp. 2659-2665.
[U1] Y. Abe, M. Shikano, T. Fukuda, F. Arai, Y. Tanaka, "Vision Based Navigation System for Autonomous Mobile Robot with Global Matching," IEEE International Conference on Robotics and Automation, May 1999, pp. 1299-1304.
[U3] J.R. Asensio, J.M.M. Montiel, L. Montano, "Goal Directed Reactive Robot Navigation with Relocation Using Laser and Vision," IEEE International Conference on Robotics and Automation, May 1999, pp. 2905-2910.
[U20] Y. Takeuchi, M. Hebert, "Evaluation of Image-Based Landmark Recognition Techniques," Technical Report CMU-RI-TR-98-20, Carnegie Mellon University, 1998.
[U22] S. Thrun, "Finding Landmarks for Mobile Robot Navigation," IEEE International Conference on Robotics and Automation, May 1998, pp. 958-963.
[Das] A.K. Das, R. Fierro, V. Kumar, B. Southall, J. Spletzer, C.J. Taylor, "Real-Time Vision-Based Control of a Nonholonomic Mobile Robot."
[Ma] Y. Ma, J. Kosecka, S. Sastry, "Vision Guided Navigation for a Nonholonomic Mobile Robot," IEEE Conference on Decision and Control, 1997.
[Nayar1] R. Swaminathan, M.D. Grossberg, S.K. Nayar, "Caustics of Catadioptric Cameras," Proceedings of the IEEE International Conference on Computer Vision, Vancouver, Canada, July 2001.
[Nayar2] S.K. Nayar, "Catadioptric Omnidirectional Camera," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1997, pp. 482-488.
NOTES
1 Introduction
State problem
Motivation
Which part trying to solve
Show what’s been done
Small evaluations pros/cons
Approach
1.1 Past work
- Conventional Camera with Pan-tilt unit and Jacobian y-Estimation [Burschka]
- Appearance-Based Place Recognition for Topological Localization [Ulrich]
- Types of Landmark Detections Used. Corners [U8], Doors [U3,U22], Overhead Lights [U22], Air
Diffusers in ceilings [U1] , and distinctive buildings [U20]
- Wall Follower [Das]
- Velocity Estimator [Das]
- Leader Follower [Das]
- Localization [Das]
- Go To Goal Controller [Das]
- Tracking Piece-wise Ground Curves [Ma]
- Caustics of omni-directional cameras [Nayar1]
- Catadioptric cameras [Nayar2]
- Dead-Reckoning faults [B12]
1.2 Goal
- Not to use map-based system to control paths of robots, but rather simple teaching phase
- Path learning
1.3 Pitfalls of Past Systems
- Map-based systems
- [B6]
1.4 Our Approach