Global Vision Based Tracking of Multiple Mobile Robots

Brezak, M.; Petrovic, I.; Rozman, D.
Faculty of Electrical Engineering and Computing, University of Zagreb
2006 IEEE International Symposium on Industrial Electronics
Issue Date: 9-13 July 2006
Volume: 1, On page(s): 649
Digital Object Identifier: 10.1109/ISIE.2006.295537
Advisor: 謝銘原
Student: 柯俊毅
Student ID: M9820212
Slide preparation: (100%)
OUTLINE

Abstract

Introduction

System description and design

Vision algorithm for tracking

Experimental results

Conclusion

References
ABSTRACT

(i) the system must have the ability to track an
unlimited number of robots using images from
one or more cameras.

(ii) the system must have an acceptable price.

(iii) the system must operate in real time, so
algorithms with lower computational cost
are preferred.
INTRODUCTION-1

The goal of this paper is to develop a global
vision system for tracking of multiple mobile
robots as part of an intelligent space. Some works
related to this problem are [3], [4] and [5].

In [3] an example of a global vision system
designed for soccer robots is described, but it is
limited in the number of robots the algorithm
can track and has considerable computational
complexity, since the algorithm always performs a
global search of the image.
INTRODUCTION-2

In [4] local image search is used, but the measurement
is based only on color marks, which cannot guarantee
high measurement accuracy without using expensive
cameras.

In [5] an approach that uses a network of
omnidirectional cameras is presented, but it has
many restrictions for practical use.
INTRODUCTION-3

In our work, we place special emphasis on the
following requirements:

(i) the system must have the ability to track an
unlimited number of robots using images from
one or more cameras.

(ii) the system must have an acceptable price.

(iii) it must operate in real time, so algorithms
with lower computational cost are preferred.
SYSTEM DESCRIPTION AND DESIGN-1

In the general case, the system is required to track
the pose of one or more mobile robots in an area
supervised by one or more cameras placed above
the robots (Fig. 1).
Fig. 1. System overview
SYSTEM DESCRIPTION AND DESIGN-2
Once the problem of robot identification in the
image is solved, it is necessary to accurately measure
the position and orientation of the robots.

Because of the low system price and high-speed
requirements, we consider using cameras with
Bayer color filter arrays (Fig. 2).
Fig. 2. Bayer pattern
SYSTEM DESCRIPTION AND DESIGN-3

The measuring mark is white and is surrounded by
a black border, so there is a clear black-white edge
of maximum contrast and length.

The key point here is that when the black-white
edge is used for measurement, color information is
not important, and the measurement can be performed
directly in the Bayer image without any color
interpolation.

The final mark that was used in our experiments
is shown in Fig. 3.
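As an illustration of this point, here is a minimal sketch (assuming an 8-bit raw Bayer frame stored as a NumPy array, with a hypothetical threshold value): because the mark is purely black and white, every filter site (R, G or B) sees roughly the same intensity, so the raw mosaic can be binarized directly, with no demosaicking.

import numpy as np

def segment_mark(bayer_raw: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize a raw 8-bit Bayer frame without color interpolation.

    Since the measuring mark is black and white, R, G and B filter
    sites all see roughly the same intensity, so the raw mosaic can
    be thresholded directly as if it were a grayscale image.
    """
    return (bayer_raw >= threshold).astype(np.uint8)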
SYSTEM DESCRIPTION AND DESIGN-4

The color mark is not placed in the middle of the
white mark, but is shifted to the side. This fact is
used for exact determination of the robot's orientation.

For our experiments a color mark with only one
color was sufficient, but in real-world conditions
two or more colors can be used so that the mark is
distinguished from the environment more reliably.
Fig. 3. Identification mark
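One possible way to use the shifted color mark, not spelled out in the slides, is to take the white mark centroid as the position and the direction toward the color patch centroid as the heading. A minimal sketch, with hypothetical centroid inputs (any fixed offset angle of the patch on the physical mark would still have to be subtracted):

import math

def robot_pose(white_centroid, color_centroid):
    """Estimate robot pose from the white mark centroid and the
    off-center color patch centroid (both in image coordinates).

    The position is taken as the white mark centroid; the heading is
    the direction from the white centroid toward the shifted color
    patch.
    """
    x, y = white_centroid
    cx, cy = color_centroid
    theta = math.atan2(cy - y, cx - x)   # orientation in radians
    return x, y, theta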
VISION ALGORITHM FOR TRACKING-1

The task of the vision algorithm is to track the position
and orientation of the robots based solely on the images
obtained from the camera.

The algorithm consists of the following stages:
A. robot identification procedure,
B. measuring procedure, and
C. image distortion correction.
VISION ALGORITHM FOR TRACKING-2

To minimize the probability that some other object of
similar color, or another robot, is found instead of the
robot being searched for, the search begins right at
the middle of the region of interest, i.e. at the
predicted robot position.

The search then continues along a rectangular spiral
toward the edge of the region of interest (Fig. 4).
Fig. 4. Spiral search
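A minimal sketch of such a rectangular spiral scan follows; the exact traversal used by the authors is not given here, and the ROI bounds and pixel test are hypothetical parameters.

def spiral_search(center, roi, is_mark_pixel):
    """Search for a mark pixel starting at the predicted robot position
    and spiraling outward in a rectangular pattern over the region of
    interest (ROI).

    center        -- (x, y) predicted robot position
    roi           -- (x_min, y_min, x_max, y_max) inclusive bounds
    is_mark_pixel -- callback (x, y) -> bool testing a single pixel
    Returns the first matching pixel, or None if the ROI is exhausted.
    """
    x, y = center
    x_min, y_min, x_max, y_max = roi
    dx, dy = 1, 0                      # start by moving right
    step, legs = 1, 0                  # current leg length, legs walked
    visited = 0
    total = (x_max - x_min + 1) * (y_max - y_min + 1)
    while visited < total:
        for _ in range(step):
            if x_min <= x <= x_max and y_min <= y <= y_max:
                visited += 1
                if is_mark_pixel(x, y):
                    return x, y
            x, y = x + dx, y + dy
        dx, dy = -dy, dx               # turn 90 degrees
        legs += 1
        if legs % 2 == 0:              # the leg length grows every two turns
            step += 1
    return None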
VISION ALGORITHM FOR TRACKING-3

Which has the advantage of low computational cost, as
not every pixel of the region is checked and labeled,
but only pixels around the edge of the region. The
algorithm is illustrated with Fig. 5(a).

The outputs of the algorithm are the coordinates of the
edge contour pixels and an accompanying array of movement
directions along the contour, i.e. chain codes (Fig. 5(b)).
Fig. 5. Contour following: (a) Edge following procedure (b) Chain codes
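For illustration, a sketch of a standard 8-connected boundary-following routine that outputs contour pixels and Freeman chain codes. This is the textbook inner-boundary tracing variant, not necessarily the authors' exact implementation; it assumes the start pixel is the top-left-most pixel of the region (e.g. found by a raster scan or by walking up and left from the spiral-search hit).

# 8-neighbour offsets indexed by Freeman chain code (0 = +x; with the
# image y axis pointing down, increasing codes turn counter-clockwise)
OFFSETS = [(1, 0), (1, -1), (0, -1), (-1, -1),
           (-1, 0), (-1, 1), (0, 1), (1, 1)]

def follow_contour(binary, start):
    """Trace the boundary of the white region containing `start` and
    return the contour pixels plus the Freeman chain codes between them.

    binary -- 2-D array, nonzero for mark pixels
    start  -- (x, y) of the top-left-most pixel of the region
    """
    h, w = binary.shape

    def fg(x, y):
        return 0 <= x < w and 0 <= y < h and binary[y, x] != 0

    contour, codes = [start], []
    cur, direction = start, 7
    while True:
        # begin the neighbourhood scan just past the previous move
        first = (direction + 7) % 8 if direction % 2 == 0 else (direction + 6) % 8
        for i in range(8):
            d = (first + i) % 8
            nx, ny = cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1]
            if fg(nx, ny):
                direction = d
                cur = (nx, ny)
                break
        else:                          # isolated single-pixel region
            return contour, codes
        # stop when the trace re-enters its first move
        if len(contour) > 1 and cur == contour[1] and contour[-1] == contour[0]:
            return contour[:-1], codes
        contour.append(cur)
        codes.append(direction)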
VISION ALGORITHM FOR TRACKING-4

An example of both cases is shown in Fig. 6, where
the initial starting point was on the edge of a hole
inside the region.
Fig. 6. Contour following

The described procedure does not have a high
computational cost because the holes inside the
region, if any, are usually small, so a relatively low
number of pixels is examined.
VISION ALGORITHM FOR TRACKING-5

The easiest way to calibrate the system for this
method is for the user to mark nine points in the image,
which define a rectangle with known coordinates in
some global coordinate system (Fig. 7).
Fig. 7. Distortion correction
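The slides do not give the exact correction model. As one hedged possibility, the nine marked point pairs could be used to fit a simple quadratic image-to-world mapping by least squares; the sketch below is illustrative only and is not claimed to be the paper's method.

import numpy as np

def fit_quadratic_map(img_pts, world_pts):
    """Fit a 2-D quadratic mapping image -> world from nine marked
    calibration point pairs (a hypothetical stand-in for the paper's
    distortion correction).

    img_pts, world_pts -- arrays of shape (9, 2)
    Returns a (6, 2) coefficient matrix so that
        world = [1, x, y, x*y, x*x, y*y] @ coeffs
    """
    x, y = img_pts[:, 0], img_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    coeffs, *_ = np.linalg.lstsq(A, world_pts, rcond=None)
    return coeffs

def apply_map(coeffs, pts):
    """Map image points to global (world) coordinates."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    return A @ coeffs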
VISION ALGORITHM FOR TRACKING-6

In Fig. 8, h_c is the height of the camera above the
ground, h_r is the height of the robot mark above the
ground, d_m is the distance of the uncorrected robot
position from the ground projection point of the camera
axis, and d_r is the unknown corrected distance from the
real robot position to the camera axis projection point.
Fig. 8. Parallax correction
VISION ALGORITHM FOR TRACKING-7

Then, assuming that the camera is mounted vertically above
the ground, the parallax correcting equation follows from
similar triangles:
d_r = d_m * (h_c - h_r) / h_c

To apply this correction, the global coordinates of the
camera axis projection point on the ground must be
known.

If the camera is mounted vertically, then this point
can be obtained by converting the middle pixel of
the image to the global coordinate system.
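A small sketch applying this correction to a measured position; the symbols follow the definitions above, while the function name and argument layout are illustrative.

def correct_parallax(x, y, axis_xy, h_c, h_r):
    """Shift a measured robot position toward the camera axis
    projection point to compensate for the mark height, using the
    similar-triangles relation d_r = d_m * (h_c - h_r) / h_c.

    x, y     -- uncorrected robot mark position in global coordinates
    axis_xy  -- ground projection point of the (vertical) camera axis
    h_c, h_r -- camera height and mark height above the ground
    """
    ax, ay = axis_xy
    scale = (h_c - h_r) / h_c
    return ax + (x - ax) * scale, ay + (y - ay) * scale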
EXPERIMENTAL RESULTS-1

Figure 9(a) shows the mean of the measured x coordinate
as a function of the shift of the robot mark in mm. The
jumps in position are most likely caused by the binary
image segmentation: as the mark moves, pixels switch
abruptly from the background color to the white mark
color, changing the location of the centroid by a
fixed amount.
Fig. 9(a). Position x coordinate mean and standard deviation as a function of the robot shift in mm
EXPERIMENTAL RESULTS-2

The precision of the position, i.e. its standard
deviation, is shown in Fig. 9(b), from which it
can be seen that the maximum standard deviation
is approximately 0.1 pixel, i.e. roughly one
tenth of a pixel.
Fig. 9(b). Position x coordinate mean and standard deviation as a function of the robot shift in mm
EXPERIMENTAL RESULTS-3

The results are shown in Fig. 10. As can be seen,
the absolute position error is almost everywhere
less than 0.15 pixels, i.e. 0.5371 mm.
Fig. 10. Absolute error of the robot x coordinate
EXPERIMENTAL RESULTS-4

The achieved execution time of the algorithm on a 3
GHz Athlon 64 processor, when tracking all five
robots, was 1 ms in local tracking mode and 7 ms
in global image search mode, so it was able to keep
up with the camera's full 80 fps frame rate in
real-time conditions.

Tracking and measuring the robot positions was
reliable even in very dynamic conditions such as
robot soccer, so smooth and fast robot movement
was achieved.
CONCLUSION-1

The system can be easily extended to use multiple
networked cameras. The described algorithm can then
be applied to every camera, or applied selectively,
only to the images in which robots are known to
currently reside.

To achieve reliable robot detection, the images from
multiple cameras should overlap, so a single robot
may show up in more than one image.
CONCLUSION-2

The algorithm must then check whether measurements
from multiple images originate from a single robot
and, if so, combine them into one measurement result.

Apart from the application of multiple networked
cameras, future work will include improving the
measurement accuracy by refining the extracted
white-area edge contours to subpixel precision.
REFERENCES-1

[1] H. Hashimoto, "Intelligent interactive space based for robots," in Proceedings of the
2000 International Symposium on Mechatronics and Intelligent Mechanical System for 21
Century, 2000, p. 26.

[2] J. Lee, N. Ando, and H. Hashimoto, "Mobile robot architecture in intelligent space,"
JSME Journal of Robotics and Mechatronics, vol. 11, no. 2, pp. 165-170, 1999.

[3] G. Klancar, M. Lepetic, and D. Matko, "Vision system design for mobile robots tracking,"
in Proceedings of the 2004 FIRA Robot World Congress, 2004.

[4] M. Simon, S. Behnke, and R. Rojas, "Robust real time color tracking," in Proceedings of
the Fourth International Workshop on RoboCup, Melbourne, Australia, 2000, pp. 62-71.

[5] E. Menegatti, G. Gatto, E. Pagello, T. Minato, and H. Ishiguro, "Distributed vision
system for robot localisation in indoor environment," in Proceedings of the 2nd European
Conference on Mobile Robots, 2005.

[6] B. E. Bayer, "Color imaging array," U.S. Patent No. 3,971,065, 1976.

[7] R. Ramanath, W. E. Snyder, G. L. Bilbro, and W. A. S. III, "Demosaicking methods for
Bayer color arrays," Journal of Electronic Imaging, vol. 11, pp. 306-315, 2002.
REFERENCES-2

[8] R. Kimmel, "Demosaicking: Image reconstruction from color CCD samples," IEEE
Transactions on Image Processing, vol. 7, no. 3, pp. 1221-1228, 1999.

[9] S. H. Kim, J. S. Choi, J. K. Kim, and B. K. Kim, "A cooperative micro robot system
playing soccer: Design and implementation," Robotics and Autonomous Systems, vol. 21, no.
2, pp. 177-189, 1997.

[10] W. K. Pratt, Digital Image Processing, 3rd ed. John Wiley, 2001.

[11] R. Swaminathan and S. Nayar, "Polycameras: Camera clusters for wide angle imaging,"
Columbia University, Computer Science, Tech. Rep., 1999.

[12] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, and E. Osawa, "RoboCup: The robot world
cup initiative," in Proceedings of the First International Conference on Autonomous Agents
(Agents'97). New York: ACM Press, 1997, pp. 340-347.

[13] C. Steger, "Evaluation of subpixel line and edge detection precision and accuracy," in
International Archives of Photogrammetry and Remote Sensing, vol. XXXII, Part 3/1, 1998,
pp. 256-264.