Advanced Materials Research Vols. 605-607 (2013) pp 1630-1635
© (2013) Trans Tech Publications, Switzerland
doi:10.4028/www.scientific.net/AMR.605-607.1630
A Robot Vision System in ABU Robocon Contest: Visual Servoing
Navigation and Object Tracking
Xihua Li1,a, Ping Zhang2,b and Sina Lin3,c
1 Science and Technology on Aircraft Control Laboratory, Beihang University, Beijing, 100191, China
2 Science and Technology on Aircraft Control Laboratory, Beihang University, Beijing, 100191, China
3 Intelligent Computing and Machine Learning Lab, Beihang University, Beijing, 100191, China
a lixihua9@126.com, b zhangpingbuaa@126.com, c olivialsn@gmail.com
Keywords: Robot Vision, Visual Servoing, Navigation, Object Tracking.
Abstract. This article takes the 2011 ABU Robocon Contest as its background and introduces the design and implementation of a vision system for visual servoing navigation and object tracking in this contest. For the visual servoing navigation part, a series of simple image processing steps was applied and the result was added to a previous control law; tests showed only 3 cm of distance error and 1° of angle error when the robot followed a 6 m line. For the object tracking part, the article introduces a combined algorithm for detecting a black-and-white semi-circular object that moves quickly and unpredictably against a colorful background under shining lights; tests showed that the proposed algorithm works better than other existing algorithms in this complex environment.
Background Introduction
ABU is the abbreviation of the Asia-Pacific Broadcasting Union, and the ABU Robocon is known as the ABU Asia-Pacific Robot Contest. It was first launched in Tokyo, Japan in 2002, after which member countries took turns hosting the event: Thailand (2003), Korea (2004), China (2005), Malaysia (2006), Vietnam (2007), India (2008), Japan (2009) and Egypt (2010). The main objective of the contest is to develop knowledge, creativity and innovation among young people in the region. The year 2011 marked the tenth anniversary of ABU Robocon, and Thailand was honored to host the event again [1].
Rules of ABU Robocon 2011 [1]. Here the article gives only a general description of the rules. The ground information of the contest is shown in figure 1. First, the robot needs to start from one of the four corners of the square field (12 m × 12 m) and perform a series of tasks in order, after which a Completed Krathong (the blue one on the left of figure 2) is assembled. Second, the robot needs to throw the Completed Krathong onto the River Surface in the middle of the square, which causes quick and uncertain motion of the River Surface. Finally, the robot needs to do some tasks near the River Surface and cast a so-called Candle Light Flame (shown in figure 2) onto the top of the Completed Krathong (also shown in figure 2) in order to win the game; but as the River Surface keeps rocking, this last step is difficult.
Figure 1. Game field information.
Figure 2. Completed Krathong and candle light flame.
Among all the rules, those most closely related to this article are described below. The white lines shown in figure 1 can be used for servo control and navigation by detecting them with a camera. The top of the Completed Krathong is a black-and-white semi-circle, so it can be tracked by the camera; the colorful background and shining lights in the studio are a big problem for normal algorithms, and this is the main point this article deals with.
Introduction to Visual Servoing Navigation and Object Tracking. Visual servoing, also known as vision-based robot control and abbreviated VS, is a technique that uses feedback information extracted from a vision sensor to control the motion of a robot [2]. In image-based visual servo (IBVS) control, an error signal is measured in the image and mapped directly to motion commands [3]. By contrast, position-based visual servo (PBVS) control extracts features to compute a 3-D reconstruction of the environment or of the motion of a target object in the scene [4]. Although image-based visual servoing is known to be generally satisfactory, there are some stability and convergence problems: the system may reach or approach a task singularity if image points are chosen as visual features, or reach local minima for certain camera or robot poses [5]. In this article, in order to avoid these problems, a very simple image processing algorithm was used and combined sensors were also added.
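For orientation only (this is the generic formulation from the visual servoing literature, not a restatement of the controller used in this article), IBVS is usually written as

    e(t) = s(t) - s*,      v_c = -λ L_s^+ e,

where s is the vector of measured image features, s* the desired feature values, L_s^+ the pseudo-inverse of the interaction matrix (the image Jacobian), λ a positive gain and v_c the commanded camera velocity.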
Self-localization and navigation (planning a path towards a goal location) capability is essential for a robot to perform its required tasks. A variety of sensors, including photoelectric gyroscopes, code discs, sonar, infrared sensors, lasers and cameras, can be used separately or fused for this purpose. Navigation based on a photoelectric gyroscope is expensive and its error accumulates over long runs. Navigation based on code discs is easy to implement but often provides imprecise coordinate data. Landmark-based navigation techniques are common in robotics [6] but need extra landmarks. Vision-based navigation is now popular and seems promising, but the choice and extraction of visual features for positioning is far from straightforward and always depends on the task.
Object recognition and tracking has been an active area in computer vision for a long time. Appearance-based methods such as edge, greyscale and gradient matching, histograms of receptive field responses and large model bases, feature-based methods such as interpretation trees, SIFT, SURF and geometric hashing, and various other approaches have been proposed. In this article, in order to complete the contest tasks, a task-based algorithm that combines some of the above methods is proposed, which effectively solves the colorful background and shining lights problem.
This article is structured as follows. The next section gives a detailed discussion of the visual servoing navigation algorithm and the basic image processing used in the contest. Section 3 proposes a combined algorithm that can robustly track the black-and-white semi-circular object in a complex environment with shining lights. Section 4 describes porting both the visual servoing navigation and object tracking parts from a PC to an ARM platform. Section 5 concludes.
Robot Visual Servoing Navigation Implementation
During the contest, TRT (turn-run-turn) moves and double-circle moves were commonly used. This year, a base capable of free movement in four directions was developed, which lets the robot travel along any desired path in any direction; however, with a PID controller alone it did not coincide precisely with the desired path when moving long distances, so a correction mechanism had to be introduced. In this article, a way of detecting the intersection point of the double parallel lines (shown in figure 4) is proposed, and then a control flowchart combining it with visual servoing navigation is presented.
Edge Detection. Edge detection [7] is a fundamental tool in image processing; it aims to detect sharp changes in image brightness in order to capture important events and changes in the properties of the world. In 1986, John F. Canny developed the Canny edge detector [8] and also produced a computational theory of edge detection explaining why the technique works. In this article, some basic and simple image processing steps are shown in figure 3. We compared four widely used edge detection operators (Roberts, Prewitt, Sobel and Canny). As shown in figure 4, the four operators performed almost the same, but considering time consumption and the hardware, the Canny operator worked best. Then we added the acquired coordinate of the central point to the previous control law.
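As an illustration of this step, the following minimal sketch (Python with OpenCV; not the authors' original code, and the thresholds and the Hough-transform post-processing are assumptions) detects edges with the Canny operator and estimates the crossing point of the two strongest detected lines:

    import cv2
    import numpy as np

    def detect_line_crossing(frame):
        """Return (x, y) of the estimated crossing point of the two strongest lines, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)                  # illustrative thresholds
        lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)   # (rho, theta) pairs
        if lines is None or len(lines) < 2:
            return None
        # Each line satisfies x*cos(theta) + y*sin(theta) = rho; intersect the two strongest.
        (r1, t1), (r2, t2) = lines[0][0], lines[1][0]
        A = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(A)) < 1e-6:                     # nearly parallel, no crossing
            return None
        x, y = np.linalg.solve(A, np.array([r1, r2]))
        return float(x), float(y)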
Figure 3. Simple image processing.
Figure 4. Comparison of four operators (Roberts, Prewitt, Sobel and Canny). Top left: the double parallel lines to be detected. Panels 2 to 5: the edge detection results using the Roberts, Prewitt, Sobel and Canny operators respectively.
Added Control Law. From the edge detection step above we obtained the coordinate of the central point of the intersection of the lines on the game field. As the robot moved, it acquired a series of such coordinates, which formed the path. Here the camera worked purely as an error-detecting sensor. The added control flowchart, involving the error quantities ∆ϕ, ∆θ and ∆d, is shown in figure 5.
Figure 5. Added control flowchart.
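A hedged sketch of how the detected central point could be turned into correction terms for the base controller follows (Python; the gains, image size and the mapping itself are illustrative assumptions, not the authors' actual control law):

    def vision_correction(cx, cy, frame_width=640, frame_height=480,
                          k_theta=0.002, k_d=0.001):
        """Map pixel offsets of the detected central point to a heading correction
        (d_theta) and a lateral-distance correction (d_d) for the path controller."""
        dx = cx - frame_width / 2.0      # horizontal pixel offset from image centre
        dy = cy - frame_height / 2.0     # vertical pixel offset along the path
        d_theta = -k_theta * dx          # heading correction (illustrative gain)
        d_d = -k_d * dy                  # lateral correction (illustrative gain)
        return d_theta, d_d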
In this way, the robot could move precisely along the previously designed path. The results are shown in figure 6, with data obtained from a practical test. After a series of tasks for building up the Krathong, the Completed Krathong was thrown onto the River Surface and the object tracking part began to work.
Robot Object Tracking Implementation
One of the biggest challenges for a vision system on an autonomous robot is rapid change in the visual circumstances. Colour is a basic feature of a surface; color information is not only abundant and intuitive but also easy to use. The RGB (red, green, blue) model is the most common, hardware-oriented color model. The images acquired by the camera use the RGB color space, but it does not conform to the human understanding of color. By contrast, the HSV color space is based on the way humans perceive color information and is represented by Hue (the type of color), Saturation (the purity of color) and Value (the stimulus intensity of the light). Value has nothing to do with the color of the light, so if the range of hue and saturation is limited, segmentation can be performed [9]. In this article, the first measure for dealing with the colourful background and shining lights was to conduct the subsequent processing in the HSV color space. The overall procedure is surveyed in detail in the flowchart in figure 7.
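As an illustration, a minimal sketch of such HSV segmentation follows (Python with OpenCV; the threshold ranges are assumptions chosen for a black-and-white target, not the values used in the contest):

    import cv2
    import numpy as np

    def segment_hsv(frame, h_range=(0, 179), s_range=(0, 60), v_range=(0, 255)):
        """Return a binary mask; limiting saturation keeps black/white regions
        while discarding strongly coloured background, largely independent of Value."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([h_range[0], s_range[0], v_range[0]], dtype=np.uint8)
        upper = np.array([h_range[1], s_range[1], v_range[1]], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # opening removes speckles
        mask = cv2.erode(mask, kernel)                          # erosion, as in figure 7
        return mask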
Hu Invariant Moments. Image moments are useful for describing objects and contours after segmentation. Simple properties of the image found via image moments include its area (or total intensity), its centroid and information about its orientation. In our application a rotation-invariant moment must be used, and the most frequently used ones are the Hu invariant moments. The Hu invariant moments [11] are combinations of a series of normalized central moments and are invariant to transformations such as scaling, rotation and mirroring. The Hu moments were used in the contour matching algorithm.
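For illustration, the Hu moments of a contour can be computed in one step with OpenCV (a minimal sketch, not the authors' code):

    import cv2

    def hu_moments(contour):
        """Compute the seven Hu moments of a contour; they are invariant to
        translation, scaling and rotation (the seventh only changes sign under mirroring)."""
        m = cv2.moments(contour)            # raw, central and normalized moments
        return cv2.HuMoments(m).flatten()   # the seven Hu invariants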
[Figure 7 flowchart, summarized: the template and the live camera images are each smoothed, processed with morphological opening and erosion, and binarized; contours are extracted from both; contours whose area and length fall outside the expected limits are discarded; the remaining contours are matched against the template contours; if the match degree passes its limit, the central coordinate of the circle is output and the signal to cast the Candle Light Flame is given; otherwise the process repeats.]
Figure 6. Results of visual servoing navigation: data obtained from a practical test in which the robot followed a 6 m line; errors of 3 cm in distance and 1° in angle.
Figure 7. Object tracking flowchart.
Contours and Contour Matching. As described in the edge detection subsection, the Canny operator can detect edges where the gradient is large, but it does not treat contours as a whole. Contour matching is important in computer vision, with a variety of applications including model-based recognition, depth from stereo and tracking. In contour matching applications, contours are treated as a whole and can be used as a feature describing an object. For example, a typical application of curve matching to model-based recognition is to decide whether a model curve and an image curve are the same, up to some scaling or 2-D rigid transformation and some permitted level of noise.
The templates used in this article were contours of the semi-circle seen from different viewing angles, which makes it appear as an ellipse of varying ovality. When we directly matched the template contours against the contours obtained from the real-time images, over-matching and bad results occurred in some environments, as shown in figure 8. We therefore optimized the normal algorithm.
To mitigate these problems, a pre-comparison of the contours' area and length was added, which made the subsequent matching easier. This was the second measure used to deal with the complex environment, because slight interference could then be easily rejected. Since the Hu moments are invariant to scaling, rotation and mirroring, they were used to cope with the quick and uncertain motion of the object to be tracked. In this way a distinguishable match degree in (0, 1) was obtained, which made the decision easy. We ran practical tests of our algorithm outdoors and in the studio, and from different viewing angles; the results are shown in figure 8. They showed that the use of the Hu moments was the third measure for dealing with the complex environment.
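A hedged sketch of this optimized matching step is given below (Python with OpenCV 4.x; all thresholds, the template list and the use of cv2.matchShapes, which returns a Hu-moment-based dissimilarity rather than the paper's match degree in (0, 1), are stand-ins for illustration):

    import cv2

    def find_target(mask, template_contours,
                    area_range=(500, 50000), length_range=(100, 2000),
                    max_dissimilarity=0.2):
        """Find the contour that best matches any semi-circle/ellipse template
        and return its centroid, or None if nothing passes the checks."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        best = None
        for c in contours:
            area = cv2.contourArea(c)
            length = cv2.arcLength(c, True)
            # Pre-comparison of area and length rejects slight interference early.
            if not (area_range[0] <= area <= area_range[1]
                    and length_range[0] <= length <= length_range[1]):
                continue
            score = min(cv2.matchShapes(c, t, cv2.CONTOURS_MATCH_I1, 0.0)
                        for t in template_contours)
            if score <= max_dissimilarity and (best is None or score < best[0]):
                best = (score, c)
        if best is None:
            return None
        m = cv2.moments(best[1])            # centroid of the best-matching contour
        return m["m10"] / m["m00"], m["m01"] / m["m00"]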
At this point, good results had been obtained in both the visual servoing navigation and the object tracking parts. In order to use them in the ABU Robocon Contest, we ported both parts onto an ARM platform.
Figure 8. Results of the normal object tracking algorithm and of the optimized algorithm: the optimized algorithm works well despite the colourful environment and shining lights.
Figure 9. Flowchart of transplanting both algorithms
from PC to ARM platform.
Transplant to an ARM platform
With the development of embedded computers, computer vision algorithms have been successfully transplanted onto platforms such as DSPs, ARM processors and SBCs (single board computers). Because of the weight limitation in the contest, we transplanted both parts from the PC to an ARM platform; here the FriendlyARM mini2440 was used. The transplant was complicated but is not the focus of this article, so it is shown only as a flowchart in figure 9 in order to keep the description of our work complete.
Conclusion
This article described a robot vision system comprising visual servoing navigation and object tracking parts, which was put into practical use in the ABU Robocon Contest. In the visual servoing navigation part, the camera worked simply as an extra sensor helping to correct the error between the desired and the actual path; only 3 cm of distance error and 1° of angle error arose when running along a 6 m line. In the object tracking part, a combination of several existing algorithms and some extra features were built into the optimized algorithm, which showed better results in a colorful background and shining lights environment than some existing algorithms. Both parts were based on a careful survey of the tasks; in other words, the algorithms above were task-based.
In the area of robot vision and computer vision, general and universal algorithms are widely accepted, but a task-based algorithm can work better in some special environments. The major contribution of our approach is the additional use of the object's shape and its contour, and of some extended shape and contour information, in a task-based object tracking algorithm, which proved to work better in a colorful background and shining lights environment. The transplant from PC to ARM platform was also a valuable experience for subsequent work. In future research, we hope to propose a universal algorithm that works well in complex backgrounds and shining environments, and more tasks may be added to this system.
References
[1] http://www.aburobocon2011.com/
[2] G. J. Agin, "Real time control of a robot with a mobile camera", Technical Note 179, SRI International, Feb. 1979.
[3] L. Weiss, "Dynamic sensor-based control of robots with visual feedback", IEEE Journal of Robotics and Automation, Oct. 1987.
[4] P. Corke, "A new partitioned approach to image-based visual servo control", IEEE Transactions on Robotics and Automation, Aug. 2001.
[5] F. Chaumette, "Potential problems of stability and convergence in image-based and position-based visual servoing", Lecture Notes in Control and Information Sciences, vol. 237, pp. 66-78, 1998, DOI: 10.1007/BFb0109663.
[6] S. Greene, "Vision-based mobile robot learning and navigation", Robot and Human Interactive Communication, Aug. 2005.
[7] T. Lindeberg, "Edge detection", in M. Hazewinkel (ed.), Encyclopedia of Mathematics, Springer, 2001, ISBN: 978-1556080104.
[8] J. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.
[9] Zhong Qiubo, "Object recognition algorithm research based on variable illumination", Automation and Logistics (ICAL), Aug. 2009.
[10] Li He, "Object detection by parts using appearance, structural and shape features", ISBN: 9781424481132, DOI: 10.1109/ICMA.2011.5985611.
[11] M. K. Hu, "Visual pattern recognition by moment invariants", IRE Transactions on Information Theory, vol. IT-8, pp. 179-187, 1962.