ViperRoos: Developing a Low Cost Local Vision Team
for the Small Size League
Mark Chang¹, Brett Browning², and Gordon Wyeth¹

¹ Department of Computer Science and Electrical Engineering,
University of Queensland, St Lucia, QLD 4072, Australia.
{chang, wyeth}@csee.uq.edu.au
² The Robotics Institute, Carnegie Mellon University,
5000 Forbes Avenue, Pittsburgh, PA 15213, USA.
brettb@cs.cmu.edu
Abstract. The development of cheap robot platforms with on-board vision
remains one of the key challenges that robot builders have yet to surmount. This
paper presents the prototype development of a low cost (<US$1000), on-board
vision based robot platform that participated in the small-size league at
RoboCup'2000. The robot platform, named the Viper robot, integrates state of
the art embedded technology to produce a robot with reasonable computational
abilities that does not compromise on size, cost or versatility. In this paper, we
describe the hardware architecture of the system, from its motivation through to
its implementation. In addition, we reflect upon the robots' performance at the
competition as a measure of the robustness and versatility of the platform.
Based on these reflections, we describe what remaining developments are
required to evolve the system into a truly capable and versatile research
platform.
1 Introduction
Robot soccer is an exciting new research domain aimed at driving forward robotics
and multi-agent research through friendly competition. In this paper, we describe the
development of the Viper robot (see figure 1), a prototype robot platform with
on-board vision developed for the RoboCup'2000 competition. Although primarily
intended for robot soccer competition, the Viper robot platform provides a versatile
research base for investigating autonomous robot intelligence issues at low cost. We
describe the robot architecture and reflect upon its performance at RoboCup'2000,
and as a general research vehicle, as a means of determining what remaining work is
required to develop a truly useful platform.
Building a small, versatile mobile robot with on-board vision capabilities at low
cost is a challenging problem, and one that has yet to be reliably overcome in the field
of robotics. Building a robot for RoboCup competition is even more difficult, as the
system must be robust to the strenuous RoboCup environment and provide
high-performance characteristics while fitting inside a small volume. Although there
have been numerous attempts at building cheap robots with on-board vision
capabilities, few approaches are suitable for small-size RoboCup competition because
of size, cost or other constraints.
We believe that a fully custom design, utilizing the wide range of embedded
processors that are currently available, provides the best approach to satisfying the
many constraints required for a successful RoboCup robot platform. The Viper robot
platform takes a novel approach to developing a low cost robot system with reliable
hardware and on-board vision capabilities. In contrast to most other robot hardware
(e.g. [2, 7]), the Viper robot divides the problem by using different embedded
processors to handle the contrasting needs of vision and motion control processing.
As most high-end embedded processors are not suited to motion control, and vice
versa, this approach enables one to choose from a much wider range of processors,
producing a better overall product.
Fig. 1. The Viper robot with a golf ball.
In this paper we describe the Viper robot platform and the specifics of the
prototype Viper system developed for RoboCup'2000. Section 2 describes the Viper
system itself. Section 3 describes the performance of the system, from a user's
perspective, at the RoboCup competition and also as a general research vehicle.
Section 4 discusses the performance of the robot and determines what remaining
work is required. Finally, section 5 concludes the paper and lists the future direction
of the Viper project.
2 The VIPER System
The Viper system consists of the mechanical chassis and motors, the electronics and
associated batteries, the software environment on the robot and the development
support facilities that run on an off-board PC. The robot is fully autonomous, in the
sense that all sensing, computation and actuation are performed on-board. In this
section, we describe the physical hardware of the robot, its computational
arrangement and capabilities, and the software environment that was developed for
the RoboCup competition.
2.1 Mechanical Structure
Mechanically, we desired a robot that would be robust to the RoboCup conditions
while offering high-speed maneuverability. As a nominal figure based on our prior
RoboCup experiences, we desired a robot that could reach velocities in excess of
2 m/s with an acceleration of greater than 2 m/s². To fulfil these requirements the
Viper robot was built upon the same mechanical base as the RoboRoos robots, a
global vision robot soccer team used successfully in previous RoboCup competitions.
To conserve space we will not discuss the mechanical details here, but instead refer
the reader to [12]. Figure 2 shows the Viper robot with its primary mechanical
components labeled. The Viper robot is 180 mm wide by 100 mm long and stands
around 120 mm high (not including the antenna). The camera is mounted on a
separate Printed Circuit Board (PCB) that can be tilted about the horizontal axis. The
prototype robot uses a 2.8 mm, manual iris, CS-mount lens providing a horizontal
field of view of just under 100° at full image resolution. Fully assembled, the robot
weighs about 850 g and exceeds the speed and acceleration requirements specified
above.
Fig. 2. The major components of the Viper robot: RF board and antenna, EPP cable,
camera, processor boards, IR sensors, batteries, DC motors, and the two wheel and
encoder assemblies.
2.2 Computational Structure
Vision and motor control have very different requirements in terms of electronic
hardware. Vision, a processor-intensive task, requires a fast processor with hardware
support for interfacing to the digital output of the CMOS image sensor. Motor control
requires a processor with a peripheral set capable of performing timer-related
operations without CPU intervention.
Due to these contrasting hardware requirements, the Viper's computational
architecture is divided into two parts. A vision board, with an embedded RISC
microprocessor, performs all operations related to vision and to communication with
other robots. A second processor board performs all motor control operations and all
remaining computation. The two distinctly different processor systems interact over a
60 kbps asynchronous serial link. Figure 3 shows the conceptual arrangement of the
two processor boards. The following sections describe each component of the system.
Fig. 3. The processor architecture of the Viper robot. The SH7709 vision board
connects to the PB-300 colour CMOS camera, a half-duplex radio link to the PC and
other robots, and an EPP cable link to the PC; it communicates with the 68332 motor
board over an asynchronous serial link. The motor board drives the motors through
the motor drivers and reads the encoders; the system is powered by six 1.2 V,
700 mAh NiMH cells.
2.2.1 Camera Selection
The camera chosen for the Viper robot was a Photobit PB-300 CMOS image sensor.
A CMOS image sensor was chosen over a CCD sensor for its lower power
consumption and fully digital interface. The camera chip sits on a custom-designed
PCB with an aluminum mounting for a CS-mount lens.
The camera provides color images of 640x480 pixels arranged in a Bayer 2G
format through a digital interface utilizing 8 data lines and three hardware
handshaking signals. Due to the SH-3's limited bus bandwidth, the full 30 fps data
rate of the PB-300 was limited to around 10 fps on the Viper implementation. The
camera supports electronic panning and windowing, but not zooming. Additionally,
the chip allows full control over exposure time, color gains, color offsets, and
horizontal and vertical blanking. All these options are controlled through parameters
stored in registers that can be accessed through a standard synchronous serial (I2C)
interface. Although the camera has on-board auto-exposure routines, these were
typically disabled and the camera parameters fixed by the programmer at the
competition.
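To illustrate the kind of register-level configuration described above, the sketch below packs I2C register writes of the sort used to fix the camera parameters. It is written in Python for clarity (the robot software itself is in C), and the device address, register numbers and values are hypothetical stand-ins, not taken from the PB-300 datasheet.

```python
# Hedged sketch: fixing camera parameters over an I2C-style serial
# interface. Register addresses and values below are illustrative only.

def i2c_write_frame(dev_addr, reg, value):
    """Pack a register write as it would appear on the wire:
    device address (write bit clear), register index, 16-bit value MSB first."""
    return bytes([(dev_addr << 1) & 0xFF, reg, (value >> 8) & 0xFF, value & 0xFF])

# Example: disable auto-exposure and fix exposure and a colour gain
# before a game (all register numbers here are made up for illustration).
FIXED_SETUP = {
    0x07: 0x0000,   # hypothetical auto-exposure enable -> off
    0x09: 0x0220,   # hypothetical exposure time
    0x2B: 0x0040,   # hypothetical green gain
}

frames = [i2c_write_frame(0x5D, reg, val) for reg, val in sorted(FIXED_SETUP.items())]
for f in frames:
    print(f.hex())
```

In practice such writes would be issued once at start-up, after which the sensor free-runs with fixed parameters, which is what made per-venue calibration before each game necessary.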
2.2.2 Vision Board
Performing the necessary vision capture and processing requires an embedded
processor with both a capable CPU core and a wide-ranging peripheral suite.
Specifically, the peripherals must support glueless interfacing to high-density
memories, Direct Memory Access (DMA) for image capture and other large-scale
memory transfers, and serial communications.
There are a number of embedded processors currently on the market that fulfil
these requirements at low cost. Prime examples include the Texas Instruments
TMS320C6xx series, the Hitachi SH-4 and the upcoming SH-5 series processors.
Due to processor availability in Australia during the early design phase (early 1999),
we chose the Hitachi SH-3 processor as a stepping stone to the SH-4 and SH-5
processors. The SH-3 is fully code compatible with the SH-4/5 and has the same
peripheral suite.
The Hitachi SH-7709, the most versatile of the SH-3 processors, is a 32-bit RISC
processor that can perform at up to 80 MIPS (40 MHz bus clock). The processor has a
peripheral suite that facilitates glueless connection of numerous memory types, three
serial connections, and a four-channel DMA controller. The resulting vision board has
16 MB of 32-bit wide, 100 MHz SDRAM for main program memory and 512 KB of
8-bit wide, 15 ns SRAM for image storage. Boot code is stored in 512 KB of 8-bit
wide, 5 V Flash or EPROM. All memory interfaces are glueless, meaning there is no
extraneous interface logic, through the SH-3 bus controller peripheral. Color image
data from the PB-300 CMOS image sensor is transferred, one byte at a time, to the
SRAM via DMA. SRAM is used for image transfers rather than SDRAM as it has no
overheads for single-byte transfers.
There are three main communication channels on the vision board. An
asynchronous serial link running at 60 kbps, using one channel of the Serial
Communications Interface (SCI) peripheral, connects the vision and motor boards. To
operate as efficiently as possible, the serial link uses DMA to transfer data to and
from the motor board, leaving the CPU free to perform other processing.
Communication to the PC and other robots in the robot soccer environment occurs
through a wireless, half-duplex FM asynchronous serial link operating at a nominal
19.2 kbps. Finally, an EPP (Extended Parallel Port) interface to the PC enables
real-time video debugging with the use of a standard 25-way parallel cable and
custom software. The half-duplex, 8-bit EPP channel operates using DMA and
hardware interrupts.
2.2.3 Motor Board
There are a number of embedded micro-controllers designed specifically for motor
control, primarily for use in the automotive industry. We chose the Motorola
MC68332 for the motor controller board given its wide range of peripherals, reliable
development system and low cost.
The MC68332, hereafter the 68332, is a 32-bit CISC processor that operates at
20 MHz, giving around 5 MIPS. The peripheral set supports glueless connection to
SRAM and Flash memories, synchronous serial (QSPI) communications,
asynchronous serial (SCI) communications, timer utilities (TPU), and Background
Debugging (BDM). The motor board has a modest 256KBx16 SRAM and 512KBx16
Flash, where the Flash is in-system programmable through the Background
Debugging Mode (BDM) connection. BDM enables access to the 68332 registers and
memory space without CPU control, through a dongle connected to the PC parallel
port. The 68332 controls the motors through its Timer Processor Unit (TPU)
peripheral, a powerful timer utility with 16 independent output channels that requires
no CPU intervention to perform a wide variety of timer functions. The TPU drives
the two motors through MOSFET H-bridges with Pulse-Width-Modulation (PWM)
signals and keeps track of the encoders with Quadrature Decode (QDEC) operations,
using two channels per motor and two per encoder. Each encoder provides 500
'clicks' per revolution, which with QDEC becomes 2000 'clicks' per revolution,
giving a fine-grained position resolution of 47 µm and a velocity resolution of
47 mm/s when sampled at 1 kHz.
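The resolution figures above follow directly from the encoder counts, as the short sketch below reproduces (in Python for clarity). The implied wheel circumference of about 94 mm is a derived figure, not one stated in the text.

```python
# Reproducing the encoder resolution arithmetic quoted above.
clicks_per_rev = 500 * 4          # quadrature decode yields 4 counts per encoder line
resolution_m = 47e-6              # 47 micrometres of travel per count (from the text)

circumference_mm = clicks_per_rev * resolution_m * 1000   # implied wheel circumference
sample_rate_hz = 1000
# One count per 1 ms sample corresponds to the minimum measurable velocity.
velocity_resolution_mm_s = resolution_m * 1000 * sample_rate_hz

print(clicks_per_rev)                        # 2000 counts per revolution
print(round(circumference_mm))               # ~94 mm circumference (~30 mm diameter)
print(round(velocity_resolution_mm_s, 1))    # 47.0 mm/s at 1 kHz sampling
```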
The 68332 has two serial peripherals: a Queued Serial Peripheral Interface (QSPI)
for synchronous serial operations and a Serial Communications Interface (SCI) for
asynchronous serial operations. The QSPI operates a set of eight IR proximity sensors
used to detect obstacles in front of the robot that are too close for vision to sense (the
trigger distance is typically set to 70-100 mm). The SCI communicates, with
hardware interrupts, with the vision board at 60 kbps. The bandwidth limitation on
this communications channel stems from the interrupt latency and processing time on
the 68332 rather than from hardware limitations.
2.3 Software Development Environment
This section will briefly describe the low-level software and the development
environment created for the Viper robots. For a more detailed description of the Viper
software used at RoboCup'2000 please refer to [4].
All robot software is written in C, with some of the boot and time-critical code
written in assembler. Currently, the robot is programmed in a two-stage process
corresponding to the secondary memories on each processor board. Typically,
program development focuses on either vision or motor control, so this does not
prove a hindrance; however, future work aims to migrate to a system that is
programmed in a single step.
Both processors have a library of device drivers that provide a single layer of
hardware abstraction without significantly hindering computational efficiency. These
device drivers hide most of the hardware details, but do provide facilities to configure
many of the peripheral features during initialization. All high-usage paths, for
example resetting DMA transfers, are structured to enable maximum throughput with
minimum overhead. In short, a simple but fast approach has been utilized to obtain
maximum efficiency. In addition to the device driver library, the vision processor has
a library of vision-related functions optimized for the SH-3 processor. Similarly, the
motor board software includes a PID servo-loop routine that operates at 1 kHz for
velocity control of the robot. The motor board also includes a developmental
multi-threaded, pre-emptive OS, although this was not used at the competition. As an
aid for debugging, both at the competition and during development, we have written
custom PC software operating under the Microsoft Windows 9x environment. The
PCDebug program allows real-time video debugging via the EPP connection, and
interaction with the robots via the RF link, where it forms part of the communication
network with the robots.
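The 1 kHz PID velocity servo mentioned above can be sketched as follows. This is a minimal illustration in Python (the actual loop is written in C on the 68332); the gains, output limit and the toy first-order motor model are all assumptions for the sake of the example, not the robot's real parameters.

```python
# A minimal 1 kHz PID velocity servo sketch, with the output clamped
# to a normalized PWM duty-cycle range. Gains and plant are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.out_limit = out_limit

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))  # clamp to PWM range

# Simulate 3 s of 1 kHz control ticks driving a toy motor toward 2 m/s.
servo = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.001, out_limit=1.0)
v = 0.0
for _ in range(3000):
    duty = servo.step(2.0, v)
    v += (duty * 3.0 - v) * 0.01   # toy motor: 3 m/s at full duty, slow pole
print(round(v, 2))                 # settles near the 2 m/s target
```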
Overall, the robot costs less than US$1,000 to build on an individual basis, making
it a cheap autonomous robot platform for its capabilities. The price would decrease
significantly for larger production runs. Using cheaper components for the motors
and chassis could produce a robot for under US$500, but would of course detract
from its mechanical performance.
3 System Performance
In general, the robot hardware performs at or beyond our initial expectations. Once
built, the hardware proved robust and reliable. In over 18 months of operation as a
research test bed and competition vehicle, we have had to fix only two problems: a
broken battery connector and a broken encoder. Given the robots' heavy use, both
problems can be attributed to wear and tear and are to be expected. In all other cases,
the robots have remained stable and operational with no real maintenance required.
In terms of general performance, the robot is able to travel at the speeds and
accelerations specified above. Actually navigating at such speeds, however, is limited
by the lack of satisfactory control from the navigation system, a common situation for
local vision teams in robot soccer.
As a demonstration of the Viper system's capabilities, and as its primary design
aim, a team of Viper robots was entered in the RoboCup'2000 competition under the
team name ViperRoos. The ViperRoos RoboCup team consists of three Viper robots:
two field robots and a goalkeeper. By any performance metric the ViperRoos were
clearly amongst the best of the small-size local vision teams at RoboCup'2000, and
the robots even proved somewhat competitive against the weaker global vision teams.
The robot software consisted of three main components: vision processing,
localization and the soccer behaviors (see [4] for details). For the sake of brevity, this
discussion will focus on the ViperRoos as a demonstration of what the Viper robots
are capable of, and of where the limitations in the system lie.
The allocation of processing resources on the Viper robots was quite
straightforward. The vision board performed all vision processing and communication
with other robots, as well as the majority of the localization routines required to keep
the robot's estimate of its position in the world accurate. The motor board performed
all motion control routines, including a PID servo loop controlling the robot's forward
and rotational velocity components, and ran the behavior architecture that produces
the actual soccer game play of the robot. For vision processing, the ViperRoos
utilized a series of standard vision algorithms common to robot soccer (e.g. [10]) to
recognize objects in the image based on their color in YUV space, at a frame rate of
10 Hz. Briefly, the vision system sub-samples and converts the raw 512x128 Bayer
2G input to a 128x32 YUV image and segments it based on a pre-generated YUV
lookup table. Blobs are found in the segmented image using a standard region
growing algorithm and categorized into objects based on their dominant color. Valid
objects include the walls, goals, robots, and the ball. No distinction was made
between teammates and opponents. The resulting visual information is fed to the
localization system and the behavior system. The localization system used for
RoboCup'2000 was quite simple, and much work remains to improve its
performance. In short, the system relied predominantly on dead-reckoning derived
from the 1 kHz PID servo routines running on the motor board. Resets to the
dead-reckoned localization were performed when landmarks (walls and goals) were
reliably observed. While this system has some obvious deficiencies, it worked
reasonably well considering its minimal development time.
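The lookup-table segmentation and region-growing steps described above can be sketched as follows. This is an illustrative Python miniature (the robot's implementation is in C against the real pre-generated YUV table): the `classify` thresholds and the class set here are invented stand-ins, and the "frame" is a tiny synthetic image rather than real camera data.

```python
# Hedged sketch of the vision pipeline: classify each YUV pixel through
# a colour table, then grow 4-connected regions into labeled blobs.
from collections import deque

BALL, FIELD, UNKNOWN = "ball", "field", "unknown"

def classify(y, u, v):
    # Stand-in for the pre-generated YUV lookup table: a golf ball is
    # taken as high V, the field as mid U / low V (rough heuristic only).
    if v > 160:
        return BALL
    if 100 < u < 150 and v < 140:
        return FIELD
    return UNKNOWN

def find_blobs(image):
    """image: 2D list of (y, u, v) tuples. Returns [(label, pixel_count), ...]."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            label = classify(*image[r][c])
            seen[r][c] = True
            if label == UNKNOWN:
                continue
            count, queue = 1, deque([(r, c)])
            while queue:                       # region growing via BFS
                cr, cc = queue.popleft()
                for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                    if 0 <= nr < h and 0 <= nc < w and not seen[nr][nc] \
                            and classify(*image[nr][nc]) == label:
                        seen[nr][nc] = True
                        count += 1
                        queue.append((nr, nc))
            blobs.append((label, count))
    return blobs

# Tiny synthetic frame: a 2x2 "ball" patch on a "field" background.
field_px, ball_px = (80, 120, 120), (120, 128, 200)
frame = [[field_px] * 4 for _ in range(4)]
for r, c in ((1, 1), (1, 2), (2, 1), (2, 2)):
    frame[r][c] = ball_px
print(sorted(find_blobs(frame)))
```

The real system would then take each blob's dominant colour and bounding box to decide whether it is a wall, goal, robot or ball before passing it on to localization and the behaviors.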
To actually play soccer, the ViperRoos used a behavior-based approach modified
from the behavior architecture used on the RoboRoos robots, the precursors to the
ViperRoos (see [12] for more details). On the Viper implementation, the motor board
performed the computations required for navigating and kicking the ball. The
navigation system, which was essentially reactive, operated at a nominal rate of
30 Hz, using encoder feedback to exceed the raw sensing rate. The current ViperRoos
do not use the on-board RF transceivers for inter-robot communication; instead, a
simple broadcast system was used to send the referee's directives to the robots and to
monitor their operating status. The RF transceivers are capable of half-duplex
communication at a modest 19.2 kbps.
In terms of raw competition performance, the ViperRoos performed admirably for
their RoboCup debut. The team finished the round robin stage with one win and two
losses against three global vision teams. In a specialized local-vision-only
competition the robots performed well, although no goals were scored either for or
against. However, the ViperRoos demonstrated their speed and accuracy over other
local vision teams during the penalty shoot-out, missing only one goal in ten attempts
and blocking all shots on goal from the opponents.
From the programmer's perspective, the robots performed reliably and robustly.
The programming interface worked satisfactorily, allowing fast development and,
with the EPP and RF links, relatively easy debugging. The ability to retrieve live
video feeds from the robots, albeit through an EPP cable, enabled fast calibration of
the system parameters before each game, while the RF link enabled run-time
debugging of robot software. More work is required, however, to improve the GUI
features of the debugging interface to make it truly user friendly. In terms of
producing fast, efficient code, no extensive effort was required. Careful attention was
needed in the development of the vision library, but typically is not required
elsewhere. Indeed, the limitation on frame rate was due to the limited bus bandwidth
of the SH-3 rather than its processing capabilities per se.
4 Discussion
4.1 Comparisons to Other Systems
There are a number of alternative approaches to building small, local vision robots.
As with the Viper, most approaches rely on building custom hardware to achieve the
desired performance characteristics within the compact size requirements. Probably
the best amongst these are the Eyebots [2]. The Eyebot controller board is designed
around a Motorola 68332 processor and uses a commercially available CCD camera
with a parallel output. As the entire robot system is built around the same processor
used only for motor control on the Viper robot, the Eyebot is naturally more limited
in its processing capabilities, meaning it must use significantly coarser images to
obtain the same processing speeds.
Outside of the small-size scale, there are a number of local vision robots that are
either low cost or commercially available. The most notable of these is the Sony Aibo
robot dog, which provides a common robot platform for RoboCup's Sony legged
league (e.g. [11]). These robots offer relatively cheap robot vision with the addition of
legged motion. The mid-size league consists entirely of on-board vision robots, but
typically these systems are neither cheap nor small. In many cases, due to the larger
size specifications, these systems use laptops or single board computers to process
vision, with simpler local processors to perform motor control (e.g. [5]). This is
clearly a strategy similar to the Viper robot's, taken to a larger scale.
4.2 Limitations and Future Work
The heavy use of the Viper system for the RoboCup competition clearly
demonstrated the versatility of the system. The approach taken to build the Viper
robot demonstrated a number of points that, in our opinion, worked effectively. In
particular, the choice of the dual processor architecture enabled a wider selection of
processors, each more suited to its differing task of vision or motor control. In
addition, this approach, which divides a system into loosely coupled modules, allows
each processing board to be upgraded without re-implementing the software or
hardware of the other modules. The compact mechanical design and careful selection
of battery and motor technology produced a small, fast robotic platform that is robust
to the competitive conditions. Finally, the choice of CMOS imaging technology, as
opposed to a CCD, reduced the complexity of the electrical interface and the overall
power consumption of the camera.
Of course, as it is a prototype, there are always bugs and deficiencies to address.
There are a number of problems with the Viper platform that require future work to
improve the system overall. Firstly, achieving the full possible 30 Hz frame rate from
the vision system requires an upgrade of the vision processor. At the time of
designing the Viper prototype, higher-end processors were not readily available. This
is no longer the case; replacing the vision processor is now possible and is currently
being carried out. In addition to improving the general frame rate, a higher-bandwidth
processor should enable pixels to be processed as they arrive, rather than buffering an
entire image before processing it. This would significantly reduce the system latency,
which will in turn improve overall robot performance regardless of the control
architecture being used.
With a packet structure including start and stop bytes, and with Manchester
encoding, the robots can reliably communicate with one another at 19.2 kbps. The
weakness of the approach, other than the low bandwidth, was the significant delay
required when switching from transmission to reception for the token ring network
(typically around 250 ms). Newer RF transceiver technologies, such as Bluetooth™,
the single-chip 900 MHz transceivers used in portable phones, or 2.4 GHz spread
spectrum transceivers, offer an opportunity to greatly increase the transmission speed
and throughput of the RF link.
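The Manchester encoding that the packet structure above relies on can be sketched as follows: each data bit is sent as a pair of opposite half-bits, so the stream is DC-balanced and the receiver can recover the bit clock. This Python sketch is illustrative only; the start/stop byte values and the bit-ordering convention are assumptions, not the ViperRoos' actual protocol.

```python
# Hedged sketch of Manchester encoding/decoding for a framed packet.
START, STOP = 0x02, 0x03   # illustrative framing bytes, not the real protocol's

def manchester_encode(data):
    """Encode bytes as half-bit symbols: 1 -> (1,0), 0 -> (0,1), MSB first."""
    out = []
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            out.extend((1, 0) if bit else (0, 1))
    return out

def manchester_decode(symbols):
    """Invert the encoding: read symbol pairs back into bits, then bytes."""
    bits = []
    for i in range(0, len(symbols), 2):
        bits.append(1 if (symbols[i], symbols[i + 1]) == (1, 0) else 0)
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return bytes(data)

packet = bytes([START]) + b"ref:kickoff" + bytes([STOP])
wire = manchester_encode(packet)
assert manchester_decode(wire) == packet
print(len(wire))   # two symbols per bit, i.e. 16 symbols per payload byte
```

The cost of the scheme is visible in the last line: the symbol rate is twice the data rate, which is part of why the link tops out at a modest 19.2 kbps.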
5 Conclusions
This paper has described the Viper robot system and its performance as a robot
platform in the robot soccer competition and as a general research vehicle. We believe
that the robust performance of the vehicle demonstrates both the virtue of
investigating cheap (<US$1000) autonomous robots with vision sensing, and our
approach to doing so. Specifically, the dual processor architecture presented in this
paper offers a way to make the most of processor technology that is currently
available without breaking the budget.
There is considerable remaining work to develop the Viper platform into a truly
capable and versatile robot base for research and robot soccer purposes. Most of this
work focuses on a redesign of the system to incorporate the latest technology that was
not available during the initial design phase. Additionally, improvements to the
software and development environment are required to make the developer's life
easier and to aid in general productivity.
References

1. Behnke, S., Frötschl, B., Rojas, R., et al., "Using hierarchical dynamical systems to
control reactive behavior," Proceedings of IJCAI-99, pp. 28-33, 1999.
2. Bräunl, T., Reinholdsen, P., and Humble, S., "CIIPS Glory – Small Soccer Robots with
Local Image Processing," submitted for RoboCup-2000: Robot Soccer World Cup IV,
Springer Verlag, 2001.
3. Browning, B., A Biologically Plausible Robot Navigation System, PhD dissertation,
Computer Science & Electrical Engineering Department, University of Queensland,
2000.
4. Chang, M., Browning, B., and Wyeth, G., "ViperRoos 2000," submitted for
RoboCup-2000: Robot Soccer World Cup IV, Springer Verlag, 2001.
5. Emery, R., Balch, T., Bruce, J., Lenser, S., et al., "CMU Hammerheads Team
Description," submitted for RoboCup-2000: Robot Soccer World Cup IV, Springer
Verlag, 2001.
6. Kitano, H., Asada, M., Kuniyoshi, Y., Noda, I., Osawa, E., and Matsubara, H.,
"RoboCup: A Challenge Problem for AI and Robotics," RoboCup-97: Robot Soccer
World Cup I, Nagoya, Lecture Notes in Artificial Intelligence, Springer Verlag, 1998,
pp. 1-19.
7. Marchese, F., and Sorrenti, D., "Omni-directional Vision with a Multi-part Mirror,"
Proceedings of the Fourth International Workshop on RoboCup, 2000, pp. 289-298.
8. Photobit web site, http://www.photbit.com
9. RoboCup official web site, http://www.robocup.org/
10. Veloso, M., Bowling, M., Achim, S., Han, K., and Stone, P., "The CMUnited-98
champion small robot team," in M. Asada and H. Kitano (eds.), RoboCup-98: Robot
Soccer World Cup II, pp. 77-92, Springer Verlag, Berlin, 1999.
11. Veloso, M., Winner, E., Lenser, S., Bruce, J., and Balch, T., "Vision-servoed
localization and behavior-based planning for an autonomous quadruped legged robot,"
Proceedings of AIPS-2000, Breckenridge, 2000.
12. Wyeth, G., Tews, A., and Browning, B., "RoboRoos: Kicking into 2000," submitted for
RoboCup-2000: Robot Soccer World Cup IV, Springer Verlag, 2001.