PROJECT DELIVERABLE D8
TOURBOT NAVIGATION

Shared-cost RTD
Project acronym: TOURBOT
Project full title: Interactive Museum Tele-presence Through Robotic Avatars
Contract Number: IST-1999-12643
Key Action: 3
Action Line: 3-2-3

TOURBOT: Interactive Museum Tele-presence Through Robotic Avatars
Project Deliverable D8: TOURBOT Navigation
Date Produced: March 21, 2001
Author: Dimitris Tsakiris
Contents

1 Introduction
2 Avatar Hardware
3 Avatar Navigation Software
4 Localization Module
5 Reactive Navigation Modules
5.1 Collision Avoidance
5.2 User’s Tele-control of the Robot
5.3 Tour (move around) an Exhibit
5.4 Move Towards an Area in the FOV of the Camera
6 Targeted Navigation Modules
6.1 Path Planning
6.2 Map Updating
7 Navigation Software Integration and Testing
1 Introduction
TOURBOT is based on a complex robotic system, which, in order to effectively fulfill its
goals, should simultaneously navigate reliably through the museum exhibition and
interact with visitors and users connected to the robot via the Web. This requires a
flexible mobile system that is also backed by sufficient computing resources at the base
station. Figure 1 presents the overall system setup. It consists of a mobile robotic
platform connected via a wireless link to a base station, which is directly connected to the
Internet. According to the scenario of the application, the system software has to control
the robot’s motors and monitor its environment, in order to safely navigate the robot and
to appropriately interact with Web and on-site users. The current document details the results of Workpackage 6 (WP6), which aimed at the design of the TOURBOT navigation software. The consortium partners involved in WP6 were FORTH, UNIFR and UNIBONN, and the Workpackage ran from Month 5 to Month 15 of the Project.
Effective, safe and reliable navigation of the TOURBOT system is one of the most
important requirements of the project. The navigation is based on information provided
by the sensors of the system and the environment map, and involves a wide range of
capabilities, both related to the immediate surroundings of the robot and to the site where
it is to operate. The aim of WP6 was to develop the navigation modules needed to allow
the system to quickly and safely arrive at its target positions, to avoid obstacles, to move
around (tour) exhibits, and to localize itself in the environment. Two major tasks were
tackled during this WP: reactive navigation, which refers to the capabilities of the robot
to cope with changes in its surroundings, and targeted navigation, which drives the
system to a certain target.
Reactive navigation endows the robotic avatar with primitive competences, such as
obstacle detection and avoidance, the ability for the user to tele-control the robot, to tour
(move around) an exhibit and to move towards an area in the field of view of the robot’s
cameras.
Targeted navigation involves path planning and the detection of the object to which we refer as the target (for example, a certain exhibit). In order to operate efficiently, the TOURBOT system should be capable of planning its trajectories from its current location to a target position. The planning module of the TOURBOT system is, in addition, able to integrate detected dynamic changes into the representation of the environment. This allows the system to quickly react to situations in which entire passages are blocked and from which it would otherwise not be able to escape.
Section 2 briefly describes the robot’s hardware, especially issues relevant to navigation.
Sections 3, 4, 5 and 6 describe the avatar navigation methods and software. Section 7
elaborates on the integration and testing of the navigation software.
2 Avatar Hardware
The TOURBOT mobile avatar is composed of the robot base, which includes the wheels and their drives; the enclosure, which contains the on-board computers, device interfaces and networking components; and the upper part (top), which contains all devices necessary for interaction with people in the museum and for the control of the robot or the observation of the museum environment over the Web. To acquire information about the environment, a variety of sensors, installed at different parts of the platform, are used.
Figure 1. Hardware components of the TOURBOT system: the mobile robotic platform is connected over a wireless link to an on-site PC, which has a high-speed connection to the Internet.
In order to navigate efficiently in a populated environment and provide meaningful museum tours, and to reduce the complexity of the navigation tasks, the system is designed to have a circular shape. The wheel drive is a 4-wheel synchro-drive, which allows the robot to turn in place. Both features facilitate navigation by allowing the use of path-planning techniques in which the robot is regarded as a point.
The avatar requires on-board computing power for various reactive processes designed to
control the robot over its device drivers and to quickly process and react to sensory input.
On-board computing is based on two standard workstations using the Intel Pentium III
microprocessor running at 800 MHz. These computers have various interfaces connecting
them to the robot hardware.
The devices installed on top of the enclosure are used for interaction with the system
users. In order to provide information regarding its activities, the system is designed to
carry an LCD-display, which is connected to the computer system(s) in the enclosure.
The user interface running on this display provides buttons allowing the on-site users to
specify their requests. An optional feature is to install a motorized face on the robot, able
to express different moods and to support interaction with the on-site users. A pan-tilt
device carrying a camera system is also included in the design. These cameras are
directed towards the exhibits that the robot is approaching, in order to deliver video and
images over the Web.
The robot carries a sensory apparatus that allows it to update its knowledge about the environment and to adapt its behavior. More specifically, we employ ultrasound sensors,
laser-range finders and bump sensors. The most important sensor for navigation purposes
is the SICK laser-range finder system, which covers 180 degrees of the robot’s
surroundings at an angular resolution of 0.5 degrees and at a distance resolution of 2.5 cm
to 5 cm. This sensor serves many different navigation purposes, such as localization,
collision avoidance and map updating.
Since a laser-range finder is a light-based sensor, the system additionally uses ultrasound
sensors to deal with transparent materials, such as glass panels or reflecting surfaces. The
ultrasound sensors are mounted on a ring, so that they cover the complete surroundings of
the robot. The information provided by these sensors is integrated with the data coming
from the laser-range finder, in order to support localization, collision avoidance and map
building.
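As an illustration of how such range readings can be turned into geometric information for the navigation modules, the following minimal Python sketch converts a 180-degree laser scan (0.5-degree angular resolution, as above) into Cartesian points in the robot frame. The function name, the field-of-view parameter and the maximum-range cutoff are illustrative assumptions, not part of the TOURBOT software:

import math

def scan_to_points(ranges, fov_deg=180.0, max_range=8.0):
    """Convert a laser scan (list of range readings covering `fov_deg`
    degrees in front of the robot) into (x, y) points in the robot frame.
    Readings at or beyond `max_range` are treated as 'no hit' and skipped."""
    n = len(ranges)
    points = []
    for i, r in enumerate(ranges):
        if r <= 0.0 or r >= max_range:
            continue
        # Beam angle: -fov/2 ... +fov/2, with 0 degrees pointing straight ahead.
        angle = math.radians(-fov_deg / 2.0 + i * fov_deg / (n - 1))
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Example: a 361-beam scan (0.5-degree resolution over 180 degrees).
scan = [5.0] * 361
obstacle_points = scan_to_points(scan)

The resulting point set is the kind of input that localization, collision avoidance and map updating can consume directly.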
On top of the robot resides the camera system, installed on a pan/tilt unit. The cameras are used to grab images and video that are broadcast over the Web. The video streams of the viewed scene in the museum also serve the purpose of assisting the remote control of the robot by the Web users. Finally, the system has a number of bump sensors, which are used to detect imminent collisions and immobilize the robot, should such an event occur.
3 Avatar Navigation Software
The overall software architecture consists of a variety of modules (processes), which are
executed in parallel on one of the robot’s on-board computers as well as on off-board
servers. All processes are connected via Ethernet and communicate using TCX, a
decentralized communication protocol for point-to-point socket communication. Figure 2
shows the major software modules of the overall software architecture along with the
flow of information between different modules. The TOURBOT system control
architecture is organized in a hierarchical manner, with the device drivers at the lowest
level and the user interfaces at the highest. The hierarchy, however, is not strict: modules are not restricted to exchanging information only within their own layer or with adjacent layers.
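The TCX protocol itself is not detailed here; purely as an illustration of the point-to-point socket communication pattern described above, the following Python sketch exchanges length-prefixed JSON messages over TCP. All names are hypothetical and do not correspond to the TCX API:

import json
import socket

def send_message(host, port, msg_type, payload):
    """Send one length-prefixed JSON message to a peer module
    (a generic illustration of point-to-point socket communication)."""
    data = json.dumps({"type": msg_type, "payload": payload}).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(data).to_bytes(4, "big") + data)

def recv_message(conn):
    """Read one length-prefixed JSON message from an accepted connection."""
    length = int.from_bytes(conn.recv(4), "big")
    body = b""
    while len(body) < length:
        body += conn.recv(length - len(body))
    return json.loads(body)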
With respect to environment sensing and related navigational competences, the system is
endowed with appropriate techniques to overcome the various problems that a mobile
robot operating in a populated environment is faced with. First, our design employs
probabilistic representations and reasoning. Since the sensors used by mobile robots are
inherently inaccurate and noisy, robots generally cannot uniquely determine the state of
the world. Probabilistic approaches, in contrast, are able to integrate sensory information over time and, in this way, allow a mobile robot to deal robustly with the different sources of uncertainty. Instead of extracting just a single interpretation from the sensors, the
TOURBOT system considers multiple possible interpretations weighted by a numeric
plausibility factor that is expressed as a conditional probability. By considering multiple
hypotheses, the robot can deal in a mathematically rigorous way with ambiguities and
uncertainty. This in turn facilitates recovery from false beliefs, a prerequisite for the robot
to exhibit robust behavior. In addition, the probabilistic representation allows the robotic
avatar to make optimal decisions under uncertainty.
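The weighting of multiple hypotheses by conditional probabilities can be illustrated by a minimal discrete Bayesian update, sketched below in Python; the hypothesis names and likelihood values are purely illustrative:

def bayes_update(belief, likelihoods):
    """Discrete Bayesian update of a belief over candidate world states.

    `belief` maps each hypothesis to its prior probability and
    `likelihoods` maps each hypothesis to P(observation | hypothesis).
    The posterior is proportional to prior * likelihood, renormalized."""
    posterior = {h: belief[h] * likelihoods.get(h, 0.0) for h in belief}
    total = sum(posterior.values())
    if total == 0.0:
        return belief  # observation inconsistent with all hypotheses; keep prior
    return {h: p / total for h, p in posterior.items()}

# Example: two candidate robot poses; the observation favors pose "A".
prior = {"A": 0.5, "B": 0.5}
posterior = bayes_update(prior, {"A": 0.8, "B": 0.2})  # -> {"A": 0.8, "B": 0.2}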
Figure 2. Software modules of the TOURBOT system: user interface, navigation system (map updating, path planning and navigation behaviors, localization, collision avoidance).
To cope with the limited resources of the system, the computationally intensive
algorithms used have on-line characteristics and exhibit resource flexibility. Modules like
the path planning component or the localization system are adapted to the available
computational resources. Regardless of the available computation time, these modules are
configured as “any-time” algorithms, producing at any time the necessary output for the
control of the platform. The more processing cycles are available, the more accurate or optimal the result becomes.
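The structure of such an any-time computation can be sketched as follows; this is a generic skeleton, not the TOURBOT code, and `initial_plan` and `refine_step` are hypothetical hooks standing in for the actual planning routines:

import time

def anytime_plan(start, goal, initial_plan, refine_step, budget_s):
    """Any-time planning skeleton: always holds a usable plan and keeps
    refining it while computation time remains."""
    best = initial_plan(start, goal)      # coarse but immediately usable
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = refine_step(best)     # assumed to improve the plan a little
        if candidate is not None:
            best = candidate
    return best                           # best plan found within the budget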
Finally, the processes on the TOURBOT system are configured to run in a modularized,
distributed and asynchronous fashion and apply decentralized decision making. Thus, the
TOURBOT software does not require a centralized clock or a centralized communication
module. Synchronization of the different modules is strictly decentralized. Time-critical software (e.g. device drivers), and software important for the safety of the robot, such as the collision avoidance, run on the robot's on-board computer. Other higher-level software, such as the task control module or the localization system, may run on the on-board or off-board computer, depending on the system’s load. Possible communication
breakdowns can be handled by the on-board control systems. The modular, decentralized
software organization eases the task of software configuration and extension. Each
module provides certain functionality, but not all modules are required to run the robot.
For example, certain device drivers can be omitted, if the corresponding device is not
needed.
4 Localization Module
Localization is the task of estimating the position of the robot in its environment.
Without knowing its position, the TOURBOT system cannot efficiently perform typical
tasks such as navigating to a certain exhibit and taking high-resolution images from user-selected viewpoints. The TOURBOT system employs a recently developed technique, denoted Markov localization, to estimate the position of the robot. This technique
offers the advantage of being able to globally estimate the position of the robot and to
detect and recover from localization failures. To actually estimate the position of the
robot in the workspace, a previously acquired map of the environment is used (see
Figures 3 and 7.a).
Figure 3. Navigation map of the site BYSMUS
The input of the robot’s sensors is then compared with the model of the environment (see
Figure 4). To cope with uncertainties, the system maintains a probability distribution
about the possible positions of the robot. To determine the actions to be taken, it averages
over the current belief, i.e. it considers, if useful, all possible states of the robot to
determine its next action.
In order to achieve the desired efficiency, the TOURBOT system uses a sample-based
representation of the underlying position probability density. The advantage of this
approach is that the system can efficiently update the probability density based on
sensory input.
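The sample-based representation corresponds to a particle-filter style update, sketched below in Python; `motion_model` and `measurement_likelihood` are hypothetical hooks standing in for the robot's odometry and sensor models, and the sketch is an illustration rather than the TOURBOT implementation:

import random

def particle_filter_step(particles, control, observation,
                         motion_model, measurement_likelihood):
    """One update of a sample-based (particle) representation of the
    robot's position belief."""
    # 1. Prediction: move every particle according to the odometry reading.
    moved = [motion_model(p, control) for p in particles]
    # 2. Correction: weight each particle by how well it explains the scan.
    weights = [measurement_likelihood(observation, p) for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved  # degenerate case: keep the predicted set
    weights = [w / total for w in weights]
    # 3. Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

Resampling concentrates the samples in regions of high posterior probability, which is what makes the density update efficient.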
Figure 4. The localization module uses a map of the environment and sensory input to estimate the position of the robot.
5 Reactive Navigation Modules
The reactive navigation modules endow the robotic avatar with primitive competences,
such as obstacle detection and avoidance, user’s tele-control of the robot, as well as the
capabilities to move around an exhibit and to move towards an area in the field of view of
the robot’s cameras.
5.1 Collision Avoidance
The collision avoidance system (Figure 5) uses a recently developed technique known as the "dynamic window" approach. Its advantage over other collision-avoidance approaches developed so far is that it incorporates the dynamics of the robot in order to react quickly to unforeseen obstacles. The key
idea of the dynamic window approach is to consider all admissible velocities that can be
reached within a specific small time interval. Among these velocities, the collision
avoidance module chooses the one maximizing the expected future reward of the system,
as this is specified by an evaluation function. This evaluation function simultaneously
maximizes the progress of the system measured by the distance to the (intermediate)
target location, the heading to the target point, as well as the velocity of the system.
Moreover, it considers additional constraints, such as the minimum distance to obstacles
and to people in the vicinity of the robot. This is achieved by introducing a safety margin,
effectively keeping the robot away from objects.
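A minimal sketch of such a decision step is given below in Python; the velocity and acceleration limits, the weighting factors and the `clearance_fn` hook are illustrative assumptions, not the TOURBOT tuning:

def dynamic_window_step(v, w, dt, goal_heading, clearance_fn,
                        v_max=0.7, w_max=1.0, a_v=0.5, a_w=1.5,
                        alpha=0.8, beta=0.2, gamma=0.1, safety=0.3):
    """One decision step in the spirit of the dynamic window approach.

    Searches the velocities reachable from (v, w) within one control
    interval `dt`, discards candidates whose clearance violates the
    safety margin, and picks the pair maximizing a weighted sum of
    heading alignment, clearance and speed. `goal_heading` is the
    bearing of the (intermediate) target relative to the current
    heading, in radians."""
    best, best_score = (0.0, 0.0), float("-inf")
    for i in range(11):
        for j in range(11):
            # Candidate velocities inside the dynamic window.
            cv = max(0.0, min(v_max, v - a_v * dt + i * (2 * a_v * dt) / 10))
            cw = max(-w_max, min(w_max, w - a_w * dt + j * (2 * a_w * dt) / 10))
            clearance = clearance_fn(cv, cw)   # distance to nearest obstacle
            if clearance < safety:
                continue                       # violates the safety margin
            heading = 1.0 - abs(goal_heading - cw * dt) / 3.1416
            score = alpha * heading + beta * clearance + gamma * cv
            if score > best_score:
                best, best_score = (cv, cw), score
    return best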
Figure 5. The collision avoidance system uses the information provided by the odometry, the sensory input, and the current navigation plan to compute the next steering command issued to the platform.
5.2 User’s Tele-control of the Robot
The present module includes facilities allowing the user to stop or restart the robot's motion and to "look" around its current position. While the robot is stationary, the enclosure (the upper part of the robotic platform) and the panning platform carrying the camera can be commanded to rotate, in order to bring the cameras to a desired viewing direction. From there, the user can examine a scene of interest or take pictures of exhibits.
Following that, the user can command the robot to resume its previous motion or to end
the current task and start a new one.
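The stop / look-around / resume cycle described above can be sketched as follows; `base` and `pan_tilt` are hypothetical driver interfaces, not the actual TOURBOT device drivers:

class TeleControl:
    """Minimal sketch of the user's tele-control cycle."""

    def __init__(self, base, pan_tilt):
        self.base = base
        self.pan_tilt = pan_tilt
        self.paused_task = None

    def stop(self, current_task):
        self.paused_task = current_task
        self.base.halt()                      # the robot becomes stationary

    def look(self, pan_deg, tilt_deg):
        # Only the enclosure / pan-tilt unit moves; the base stays put.
        self.pan_tilt.move_to(pan_deg, tilt_deg)

    def resume(self):
        if self.paused_task is not None:
            self.base.execute(self.paused_task)
            self.paused_task = None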
5.3 Tour (move around) an Exhibit
The user can command the robot to tour an exhibit that he/she wishes to view in some detail. First, the robot moves from its current position to a predefined position in the vicinity of that exhibit. For each exhibit, there exists a
predefined trajectory around it (defined by the end-users), which is specified in terms of
appropriately chosen via-points. These via-points constitute subgoals of the robot motion,
which can be executed using the path planning and the collision avoidance modules.
During this movement, the cameras are appropriately directed towards the exhibit.
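This behavior amounts to iterating over the predefined via-points as successive subgoals, as in the following Python sketch; `exhibit`, `path_planner`, `collision_avoidance` and `camera` are hypothetical interfaces, not the actual TOURBOT module APIs:

def tour_exhibit(exhibit, path_planner, collision_avoidance, camera):
    """Sketch of the 'tour an exhibit' behavior: the predefined via-points
    around the exhibit are treated as successive subgoals."""
    for via_point in exhibit.via_points:          # defined by the end-users
        plan = path_planner.plan_to(via_point)    # targeted navigation
        collision_avoidance.follow(plan)          # reactive execution
        camera.point_at(exhibit.position)         # keep the exhibit in view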
5.4 Move Towards an Area in the FOV of the Camera
The user can command the robot to move in a particular direction that he/she finds interesting, by specifying this direction on the Web interface, based on the video transmitted to him/her. In this way, the robot will move towards a particular area in the field of view (FOV) of its camera. Using encoder and localization information, the robotic system maps this direction in the FOV to a desired direction of motion. Then, it moves in this direction until an obstacle is encountered or
until commanded to stop. In order for this behavior to be initiated, the robot needs to be
stationary.
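Under a simple pinhole-style assumption, the mapping from a selected image location to a motion heading can be sketched as follows; the function and its parameters are illustrative, not the TOURBOT implementation:

def fov_click_to_heading(pixel_x, image_width, hfov_deg,
                         pan_deg, robot_heading_deg):
    """Map a direction selected in the camera image to a motion heading
    in the world frame.

    The horizontal offset of the selected pixel from the image centre is
    converted to an angle within the camera's horizontal field of view,
    then combined with the pan angle of the camera and the robot's
    current heading obtained from localization / encoders."""
    offset = (pixel_x - image_width / 2.0) / (image_width / 2.0)  # -1 .. 1
    angle_in_fov = offset * (hfov_deg / 2.0)
    return (robot_heading_deg + pan_deg + angle_in_fov) % 360.0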
6 Targeted Navigation Modules
The targeted navigation modules involve path planning and the detection of the object to which we refer as the target (for example, a certain exhibit). In order to operate efficiently, the TOURBOT system is capable of planning its trajectories from its current location to a target position. The planning module of the TOURBOT system is, in addition, able to integrate detected dynamic changes into the representation of the environment. This allows the system to quickly react to situations in which entire passages are blocked and from which it would otherwise not be able to escape.
6.1 Path Planning
In order to efficiently plan the trajectories of the robot from its current position to the
next object, our design employs a variant of dynamic programming. The given map of the environment is transformed into a grid-based approximation, denoted an occupancy grid map. Based on this representation, the system applies value iteration to determine the
cost-optimal path from the current position of the robot to the target location (see Figure
6). This approach operates in an any-time fashion, i.e., whenever a new target point is
given, the system can at any time compute an intermediate target point, which then is
transferred to the collision avoidance module, which itself controls the platform in order
to reach this target point. This module permanently monitors the current position of the
robot and computes the cost-optimal path in reverse order, i.e. from the target point to all
possible states of the robot. This way the system can immediately react to changes of the
robot’s states caused by the reactive behaviors in the collision avoidance module.
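For unit step costs, the reverse-order cost computation can be illustrated by a uniform-cost propagation from the goal outwards (equivalent to value iteration with unit costs), as in the following Python sketch; the grid representation and the function name are illustrative assumptions, not the TOURBOT implementation:

import heapq

def plan_costs(grid, goal):
    """Compute, for every free cell of an occupancy grid, the cost of the
    cheapest path to `goal` (uniform step cost), propagating values from
    the goal outwards. `grid[y][x]` is True for occupied cells."""
    h, w = len(grid), len(grid[0])
    cost = [[float("inf")] * w for _ in range(h)]
    gx, gy = goal
    cost[gy][gx] = 0.0
    frontier = [(0.0, gx, gy)]
    while frontier:
        c, x, y = heapq.heappop(frontier)
        if c > cost[y][x]:
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx]:
                if c + 1.0 < cost[ny][nx]:
                    cost[ny][nx] = c + 1.0
                    heapq.heappush(frontier, (c + 1.0, nx, ny))
    # Following the steepest descent of `cost` from any free cell yields
    # the cost-optimal path to the goal, so an intermediate target point
    # can be extracted at any time.
    return cost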
Figure 7 shows an example of the avatar’s path planning capability: in the environment
whose map is shown in Figure 7.a, the avatar starts at the rightmost point shown and is directed to move to the leftmost one. Its trajectory is plotted on the map and snapshots
taken during its motion (when the avatar is at the locations marked by dots on the
trajectory) are shown in Figure 7.b. The goal of its motion is the black object in front of
the white column seen in the last picture.
Figure 6. The path-planning module uses a map of the environment and a target location supplied by a user to compute a navigation plan.
Figure 7. The avatar performs a targeted-navigation behavior: it moves from a starting point to a goal supplied by the user. (a) Map of the environment and trajectory of the avatar; the dots on the trajectory mark the locations from which the pictures were taken. (b) Pictures of the avatar along its trajectory.
6.2 Map Updating
Map Updating
The map-updating component allows the robot to maintain its knowledge about the state
of the environment. Using this component, the robot can quickly learn a map of the environment from scratch (see Figure 8). Thus, whenever the robot is deployed in a new
environment or whenever the environment changes, the robot uses this component to
update the world model based on information from its sensors.
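A common way to realize such an update is a log-odds occupancy-grid rule, sketched below in Python as an illustration rather than the exact TOURBOT map-updating scheme:

import math

def update_cell(log_odds, p_hit):
    """Update the log-odds occupancy value of one grid cell, given the
    probability `p_hit` that the current sensor reading indicates the
    cell is occupied."""
    return log_odds + math.log(p_hit / (1.0 - p_hit))

def occupancy_probability(log_odds):
    """Recover the occupancy probability from the accumulated log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Example: a cell repeatedly observed as occupied becomes confident.
l = 0.0
for _ in range(3):
    l = update_cell(l, 0.7)      # each scan says 'occupied' with p = 0.7
print(occupancy_probability(l))  # roughly 0.93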
Figure 8. The map-updating system computes a map of the environment based on the current position of the robot and the sensory input.
7 Navigation Software Integration and Testing
The navigation software was integrated and validated in a series of steps of increasing
complexity. In all steps, the new software modules were evaluated under various
operating conditions, in order to achieve a high level of robustness.
1. Device drivers. First, the device drivers, which connect the software to the
hardware of the TOURBOT system, were evaluated, with respect to the
availability of necessary commands and their correct execution.
2. Collision avoidance and map updating. The next step was the integration of the
collision avoidance module into the TOURBOT system. This included the
evaluation of the interfaces to the device drivers of the motors and the odometry.
Additionally, this step validated the system’s safety margins to the objects in the
exhibition.
3. Localization. In the third step, the localization module was connected to the
navigation system. Based on the odometry information provided by the robot’s
drive and the sensory input, this module was evaluated with respect to the
accuracy and robustness of the position estimates it provides.
4. Path planning and navigation behaviors. The first target of this integration step
was the trajectory planning system. This module communicates with the
localization component and the collision avoidance module, in order to perform
complex navigation actions. It was ensured that the corresponding actions are
carried out correctly by the mobile platform.
In order to validate the navigation software under various conditions, identical versions of it were installed at FORTH, UNIFR and UNIBONN, and extensive testing was carried out. The three technical partners performed the final part of these navigation software integration activities, as well as further testing, during a special meeting at the premises of FORTH.