RFID in Robot-Assisted Indoor Navigation for the Visually Impaired

Vladimir Kulyukin
Department of Computer Science
Utah State University
Logan, UT
vkulyukin@cs.usu.edu
Chaitanya Gharpure
John Nicholson
Department of Computer Science
Utah State University
Logan, Utah
cpg@cc.usu.edu, jnicholson@cc.usu.edu
Sachin Pavithran
Center for Persons with Disabilities
Utah State University
Logan, UT
sachin@cpd2.usu.edu
Abstract— We describe how Radio Frequency Identification
(RFID) can be used in robot-assisted indoor navigation for
the visually impaired. We present a robotic guide for the
visually impaired that was deployed and tested both with
and without visually impaired participants in two indoor
environments. We describe how we modified the standard
potential fields algorithms to achieve navigation at moderate
walking speeds and to avoid oscillation in narrow spaces.
The experiments illustrate that passive RFID tags deployed
in the environment can act as reliable stimuli that trigger local
navigation behaviors to achieve global navigation objectives.
I. INTRODUCTION
For most visually impaired people the main barrier to
improving their quality of life is the inability to navigate.
This inability denies the visually impaired equal access
to buildings, limits their use of public transportation, and
makes them, in the United States, a group with one of the
highest unemployment rates (74%)[1].
Robot-assisted navigation can help the visually impaired
overcome the navigation barrier for several reasons. First,
the amount of body gear required by wearable navigation
technologies, e.g., [2], [3], is minimized, because most of
the equipment is mounted on the robot and powered from
on-board batteries. Consequently, the navigation-related
physical load on the user is significantly reduced. Second, the user can
interact with the robot in ways unimaginable with guide
dogs and white canes, e.g., speech, wearable keyboards,
and audio. These interaction modes make the user feel
more at ease and reduce her navigation-related cognitive
load. Third, the robot can interact with other people in the
environment, e.g., ask them to yield. Fourth, robotic guides
can carry useful payloads, e.g., suitcases and grocery bags.
Finally, the user can use robotic guides in conjunction
with her conventional navigation aids, e.g., white canes
and guide dogs.
What environments are suitable for robotic guides?
There is little need for such guides in familiar environments where conventional navigation aids are adequate.
For example, a guide dog typically picks a route after
three to five trials. While there is a great need for assisted
navigation outdoors, the robotic solutions, due to severe
sensor challenges, have so far been inadequate for the
job and have not compared favorably to guide dogs[4].
Therefore, we believe that unfamiliar indoor environments
that are dynamic and complex, e.g., airports and conference
centers, are a perfect niche for robotic guides. Guide dogs
and white canes are of limited use in such environments,
because they do not have any environment-specific topological knowledge and, consequently, cannot help their
users find paths to useful destinations.
II. RELATED WORK
The idea of robotic guides is not new. Horswill[5] used
the situated activity theory to build Polly, a robotic guide
for the MIT AI Lab. Polly used lightweight vision routines
that depended on textures specific to the lab. Thrun et al.[6]
built Minerva, a completely autonomous tour-guide robot
that was deployed in the National Museum of American
History in Washington, D.C. Burgard et al.[7] developed
RHINO, a close sibling of Minerva’s, which was deployed
as an interactive tour guide in the Deutsches Museum in
Bonn, Germany.
Unfortunately, these robots do not address the needs of
the visually impaired. First, the robots depend on the users’
ability to maintain visual contact with them, which cannot
be assumed for the visually impaired. The only way users
could interact with Polly[5] was by tapping their feet. To
request a museum tour from RHINO[7], the user had to
identify and press a button of a specific color on the robot’s
panel. Second, these solutions require substantial investments in customized engineering to become operational,
which makes it difficult to use them as models of replicable
solutions that work out of the box in a variety of environments. The approach on which Polly is based requires that
a robot be evolved by its designer to fit its environment not
only in terms of software but also in terms of hardware.
The probabilistic localization algorithms used in RHINO
and MINERVA require a great deal of processing power.
For example, to remain operational, RHINO had to run 20
parallel processes on 3 on-board PCs and 2 off-board SUN
workstations connected via a customized Ethernet-based
point-to-point socket communication protocol. Even with
these high software and hardware commitments, RHINO
reportedly experienced six collisions over a period of forty-seven hours, although each tour was less than ten minutes
long[7].
Mori and Kotani[4] developed HARUNOBU-6, a robotic
travel aid to guide the visually impaired on the Yamanashi
University campus.
Fig. 1. Robot-Assisted Navigation: (a) RG; (b) an RFID tag; (c) navigation.
HARUNOBU-6 is a motorized
wheel chair equipped with a vision system, sonars, a
differential GPS, and a portable GIS. While the wheel
chair is superior to the guide dog in its knowledge of the
environment, the experiments run by the HARUNOBU-6
research team demonstrated that the wheel chair is inferior
to the guide dog in mobility and obstacle avoidance. The
major source of problems was vision-based navigation
because the recognition of patterns and landmarks was
greatly influenced by the time of day, weather, and season.
Additionally, HARUNOBU-6 is a highly customized piece
of equipment, which negatively affects its portability across
a broad spectrum of environments.
Several research efforts in mobile robotics are similar to
the research described in this paper in that they also use
RFID technology for robot navigation. Kantor and Singh
used RFID tags for robot localization and mapping[8].
Once the positions of the RFID tags are known, their
system uses time-of-arrival information to estimate
the distance from detected tags. Tsukiyama[9] developed a
navigation system for mobile robots using RFID tags. The
system assumes perfect signal reception and measurement
and does not deal with uncertainty. Hähnel et al.[10]
developed a robotic mapping and localization system to
analyze whether RFID can be used to improve the localization of mobile robots in office environments. They
proposed a probabilistic measurement model for RFID
readers that accurately localizes RFID tags in a simple
office environment.
III. A ROBOTIC GUIDE FOR THE VISUALLY IMPAIRED
In May 2003, the Department of Computer Science of
Utah State University (USU) and the USU Center for
Persons with Disabilities launched a collaborative project
whose objective is to build an indoor robotic guide for the
visually impaired. In this paper, we describe a prototype
we have built and deployed in two indoor environments.
Its name is RG, which stands for “robotic guide.” We
refer to the approach behind RG as non-intrusive instrumentation of environments. Our current research objective
is to alleviate localization and navigation problems of
purely autonomous approaches by instrumenting environments with inexpensive and reliable sensors that can be
placed in and removed from environments without disrupting any
indigenous activities. Effectively, the environment becomes
a distributed tracking and guidance system[11]. Additional
requirements are: 1) that the instrumentation be fast, e.g.,
two to three hours, and require only commercial off-the-shelf (COTS) hardware components; 2) that sensors be
inexpensive, reliable, easy to maintain (no external power
supply), and provide accurate localization; 3) that all computation run onboard the robot; 4) that robot navigation be
smooth (few sideways jerks and abrupt velocity changes)
and keep pace with a moderate walking speed; and 5) that
human-robot interaction be both reliable and intuitive from
the perspective of the visually impaired users.
The first two requirements make the systems that satisfy them replicable, maintainable, and robust. The third
requirement eliminates the necessity of running substantial
off-board computation to keep the robot operational. In
emergency situations, e.g., computer security breaches,
power failures, and fires, off-board computers are likely to
become dysfunctional and paralyze the robot if it depends
on them. The last two requirements explicitly consider
the needs of the target population and make our project
different from the RFID-based robot navigation systems
mentioned above.
A. Hardware
RG is built on top of the Pioneer 2DX commercial
robotic platform [12] (See Figure 1(a)). The platform has
three wheels, 16 ultrasonic sonars, 8 in front and 8 in the
back, and is equipped with three rechargeable Power Sonic
PS-1270 onboard batteries that can operate for up to five
hours at a time.
What turns the platform into a robotic guide is a
Wayfinding Toolkit (WT) mounted on top of the platform
and powered from the on-board batteries. As can be seen
in Figure 1(a), the WT currently resides in a PVC pipe
structure attached to the top of the platform. The WT’s
core component is a Dell laptop connected to the platform’s
microcontroller. The laptop has a Pentium 4 mobile 1.6
GHz processor with 512 MB of RAM. Communication
between the laptop and the microcontroller is done through
a USB-to-serial cable. The laptop interfaces to a radio-frequency identification (RFID) reader through another
USB-to-serial cable. The TI Series 2000 RFID reader is
connected to a square 200mm × 200mm antenna. The
arrow in Figure 1(b) points to a TI RFID Slim Disk tag
attached to a wall. Only these RFID tags are currently
used by the system. These tags can be attached to any
objects in the environment or worn on clothing. They do
not require any external power source or direct line of sight
to be detected by the RFID reader.
Fig. 2. Potential Fields and Empty Spaces: (a) empty spaces; (b) RG's grid.
They are activated by
the spherical electromagnetic field generated by the RFID
antenna with a radius of approximately 1.5 meters. Each
tag is programmatically assigned a unique ID.
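For concreteness, the sketch below shows one way the WT laptop could poll such a reader over its USB-to-serial link. The port name, baud rate, and the line-oriented "TAG:<id>" frame are illustrative assumptions and do not reproduce the TI Series 2000 protocol; only the general idea of reading tag IDs over a serial connection comes from the text.

```python
import serial  # pyserial

# Illustrative polling loop for an RFID reader on a USB-to-serial link.
# The port name, baud rate, and the "TAG:<id>" frame format are assumptions;
# the actual TI Series 2000 protocol is not shown here.

def poll_tags(port="/dev/ttyUSB0", baudrate=9600):
    with serial.Serial(port, baudrate=baudrate, timeout=0.1) as reader:
        while True:
            line = reader.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("TAG:"):      # hypothetical frame format
                yield line[len("TAG:"):]     # the tag's unique ID

# for tag_id in poll_tags():
#     map_server.on_tag_detected(tag_id)     # hypothetical downstream handler
```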
A dog leash is attached to the battery bay handle on the
back of the platform. The upper end of the leash is hung
on a PVC pole next to the RFID antenna's pole. As shown
in Figure 1(c), visually impaired individuals follow RG by
holding onto that leash.
B. Navigation
Since RG’s objective is to assist the visually impaired
in navigating unknown environments, we had to pay close
attention to three navigational features. First, RG should
move at moderate walking speeds. For example, the robot
developed by Hähnel et al.[10] travels at an average speed
of 0.223 m/s, which is too slow for our purposes because
it is slower than a moderate walking speed (0.7 m/s) by
almost half a meter per second. Second, the motion must be smooth,
without sideways jerks or abrupt speed changes. Third, RG
should be able to avoid obstacles.
RG navigates in indoor environments using potential
fields (PF) and by finding empty spaces around itself.
PFs have been widely used in navigation and obstacle
avoidance[13]. The basic concept behind the PF approach
is to populate the robot's sensing grid with a vector field
in which the robot is repulsed away from obstacles and
attracted towards the target. Thus, the walls and obstacles
around RG generate a PF in which RG acts like a moving
particle[14]. The desired direction of travel is the direction
of the maximum empty space around RG, which, when
found, becomes the target direction to guide RG through
the PF. This simple strategy takes explicit advantage of
the way human indoor environments are organized. For
example, if the maximum empty space is in front, the
navigator can keep moving forward; if the maximum empty
space is on the left, a left turn can be made, etc. This
strategy allows RG to follow hallways, avoid obstacles, and
turn without using any orientation sensors, such as digital
compasses or inertia cubes.
To find the maximum empty space, RG uses a total of 90
laser range finder readings, taken at 2-degree intervals. The
readings are taken every millisecond. An initial threshold of
3000 mm is used. If no empty space is found, this threshold
is iteratively decreased by 100 mm. In Figure 2(a), laser
readings R1 and R2 are the boundary readings of the
maximum empty space. All readings between R1 and R2
are greater than the threshold. The next step is to find the
target direction. For that, we find the midway point between
R_1 and R_2; the direction to that point is the target
direction α_t.
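For concreteness, here is a minimal sketch of this empty-space scan. The numbers (90 readings at 2-degree intervals, a 3000 mm threshold lowered in 100 mm steps, the midway point between R_1 and R_2) come from the description above; the angle convention (0 degrees at the far right, 90 degrees straight ahead) and the function name are assumptions.

```python
def find_target_direction(readings, start_threshold=3000.0, step=100.0):
    """readings: 90 range values in mm, one per 2 degrees, index 0 = far right.
    Returns the target direction alpha_t in degrees (90 = straight ahead)."""
    threshold = start_threshold
    while threshold > 0:
        best_start, best_len = None, 0
        run_start, run_len = None, 0
        for i, r in enumerate(readings + [0.0]):      # sentinel closes the final run
            if r > threshold:
                if run_start is None:
                    run_start = i
                run_len += 1
            else:
                if run_start is not None and run_len > best_len:
                    best_start, best_len = run_start, run_len
                run_start, run_len = None, 0
        if best_start is not None:                    # widest run of readings above threshold
            r1, r2 = best_start, best_start + best_len - 1
            return ((r1 + r2) / 2.0) * 2.0            # midway between R1 and R2, in degrees
        threshold -= step                             # no empty space found: relax the threshold
    return 90.0                                       # degenerate case: head straight
```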
RG’s PF is a 10 × 30 egocentric grid. Each cell in the
grid is 200mm × 200mm. The grid covers an area of 12
square meters (2 meters in front and 3 meters on each side)
in front of RG. Each cell C_ij holds a vector that contributes
to the calculation of the resultant PF vector. The direction to the
cell from RG's center is α_ij. There are three types of
cells: 1) occupied cells, which hold the repulsive vectors
generated by walls and obstacles; 2) free cells, which
hold the vector pointing in the target direction obtained
by finding the maximum empty space; and 3) unknown
cells, the contents of which are unknown, since they lie
beyond detected obstacles. Unknown cells do not hold any
vectors. In Figure 2(b), dark gray cells are occupied, white
cells are free, and light gray cells are unknown. If d(C_ij)
is the distance from the robot's center to the cell C_ij in
the grid, L(α_ij) is the laser reading in the cell's direction,
and T is a tolerance constant, then the occupation of C_ij
is computed by the function ζ(i, j):
\zeta(i,j) = \begin{cases} 1 & \text{if } |d(C_{ij}) - L(\alpha_{ij})| < T \\ 0 & \text{if } L(\alpha_{ij}) - d(C_{ij}) > T \\ -1 & \text{if } d(C_{ij}) - L(\alpha_{ij}) > T \end{cases} \qquad (1)
In Equation 1, the constants 1, 0, -1 denote occupied,
free, and unknown, respectively. By default, all vectors in
occupied and free cells are unit vectors. However, since
closer obstacles have more effect on the robot, the vector
magnitude increases with the proximity of the cell to the
robot. Therefore, the vector magnitude in the cell is a
function of the cell’s row and column.
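A small sketch of Equation 1 in code follows. The grid geometry helper (robot at the middle of the grid's near edge, columns indexed left to right) and the way a laser reading is obtained for a cell's direction are assumptions made for illustration.

```python
import math

# Sketch of Equation 1: classify grid cells as occupied (1), free (0), or
# unknown (-1). Grid geometry and the laser lookup are illustrative assumptions.

CELL = 200.0          # mm per cell side
ROWS, COLS = 10, 30   # 2 m ahead, 3 m to each side

def cell_geometry(i, j):
    """Distance (mm) and direction (degrees, 0 = right, 90 = ahead) to cell C_ij."""
    x = (j - COLS / 2 + 0.5) * CELL   # lateral offset, positive to the right
    y = (i + 0.5) * CELL              # forward offset
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

def zeta(i, j, laser, tolerance=100.0):
    """laser(alpha) returns the range reading (mm) in direction alpha (degrees)."""
    d, alpha = cell_geometry(i, j)
    L = laser(alpha)
    if abs(d - L) < tolerance:
        return 1     # occupied: an obstacle lies in this cell
    if L - d > tolerance:
        return 0     # free: the laser beam passes beyond this cell
    return -1        # unknown: the cell lies behind a detected obstacle
```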
A repulsive vector in C_ij is denoted as R_ij(m_ij, −α_ij),
where m_ij is the vector's magnitude and −α_ij is its
direction. The magnitude is inversely proportional to the
distance of the occupied cell from the robot. For the
left-side vectors, m_ij = Magn(i, j) ∗ P_1; for the right-side
vectors, m_ij = Magn(i, j) ∗ P_2, where Magn(i, j) is
the magnitude of the corresponding vector and P_1 and P_2
are constants that vary the repulsion vectors on the robot's
left and right sides, respectively. Thus, one can adjust the
distance maintained by RG from the left or right wall.
Since RG's localization relies on RFID tags
placed in hallways, it has to navigate closer to the right
wall, which is achieved by increasing the repulsive force
of the left-side vectors. The repulsive vectors of the occupied
cells are summed to obtain the resultant repulsive vector
R_r = Σ_ij R_ij.
The target vector in a free cell C_ij is denoted as
T_ij(M, α_t), where M is the vector's magnitude. In our
implementation, all vectors in the unoccupied cells are unit
vectors. The resultant target vector is T_r = Σ_ij T_ij =
T_r(M ∗ N, α_t), where N is the number of unoccupied cells.
The resultant vector, RES, is the sum of the repulsive and
target vectors: RES(m_r, α_r) = R_r + T_r, where m_r is the
magnitude and α_r is the direction.
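Putting the pieces together, the resultant vector could be assembled as in the sketch below, which reuses the grid helpers from the previous sketch. The values of P_1, P_2, and the exact form of Magn(i, j) are assumptions; only their roles come from the description above.

```python
import math

# Sketch of the resultant potential-field vector, reusing cell_geometry, CELL,
# ROWS, and COLS from the previous sketch. P1, P2, and magn() are assumed forms.

P1, P2 = 1.5, 1.0   # left/right repulsion weights (RG hugs the right wall)

def magn(i, j):
    """Repulsion grows as the occupied cell gets closer to the robot (assumed form)."""
    d, _ = cell_geometry(i, j)
    return CELL / max(d, CELL)

def resultant_vector(grid, target_dir):
    """grid[i][j] in {1, 0, -1}; target_dir in degrees. Returns (magnitude, direction)."""
    rx = ry = 0.0
    free_cells = 0
    for i in range(ROWS):
        for j in range(COLS):
            occ = grid[i][j]
            if occ == 1:                                  # repulsive vector, pointing away from the cell
                d, alpha = cell_geometry(i, j)
                side = P1 if alpha > 90.0 else P2         # left-side vs right-side cells
                m = magn(i, j) * side
                rx -= m * math.cos(math.radians(alpha))
                ry -= m * math.sin(math.radians(alpha))
            elif occ == 0:                                # free cell: unit vector toward the target
                free_cells += 1
    tx = free_cells * math.cos(math.radians(target_dir))  # T_r: N unit vectors along alpha_t
    ty = free_cells * math.sin(math.radians(target_dir))
    resx, resy = rx + tx, ry + ty
    return math.hypot(resx, resy), math.degrees(math.atan2(resy, resx))
```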
To ensure smooth turns and avoid abrupt speed changes,
RG never stops to turn in place. Instead, RG sets the
left (V_1) and right (V_2) wheel velocities to produce a
smooth turn. V_1 and V_2 are functions of m_r and α_r:
V_1,2 = v ± (α_r ∗ S)/m_r, where v is the robot's velocity
and S is a constant that determines the sharpness of turns;
α_r is positive for left turns and negative for right turns. The
robot's velocity v is a function of the free distance in front of the robot. Thus,
if m_r is large, the turns are less sharp. This is precisely
why RG follows a smooth, straight path even in narrow
hallways without oscillating, which has been a problem
for some PF algorithms[15]. Given this implementation,
RG maintains, at most times, a moderate walking speed of
0.7 m/s without losing smoothness or robustness.
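A minimal sketch of this wheel-speed rule is given below. The constant S, the speed cap, the form of v as a function of the front distance, and the sign convention that maps the correction to the left and right wheels are all assumptions.

```python
# Sketch of the differential wheel-speed rule described above. Only the
# structure (common forward speed plus a correction proportional to
# alpha_r * S / m_r) comes from the text; the constants are assumptions.

S = 40.0          # turn-sharpness constant (assumed value)
V_MAX = 700.0     # moderate walking speed, mm/s

def wheel_velocities(m_r, alpha_r, front_distance):
    """alpha_r in degrees (positive = turn left). Returns (V_left, V_right) in mm/s."""
    v = min(V_MAX, 0.25 * front_distance)     # slow down as the space in front shrinks (assumed form)
    delta = (alpha_r * S) / max(m_r, 1e-6)    # large m_r -> gentler correction, as in the text
    return v - delta, v + delta               # right wheel speeds up to turn left (assumed sign convention)
```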
C. Ethology and Spatial Semantic Hierarchy
As a software system, RG is based on Kuipers'
Spatial Semantic Hierarchy (SSH)[16] and Tinbergen’s
ethology[17]. The SSH is a framework for representing
spatial knowledge. It divides spatial knowledge of autonomous agents into four levels: control, causal, topological, and metric. The control level consists of low level
mobility laws, e.g., trajectory following and aligning with
a surface. The causal level represents the world in terms
of views and actions. A view is a collection of data items
that an agent gathers from its sensors. Actions move agents
from view to view. The topological level represents the
world’s connectivity, i.e., how different locations are connected. The metric level adds distances between locations.
In RG, the control level is implemented with the PF
methods described above and includes the following behaviors: follow-wall, turn-left, turn-right, avoid-obstacles,
go-thru-doorway, pass-doorway, and make-u-turn. These
behaviors are coordinated and controlled through Tinbergen’s release mechanisms[17]. RFID tags are viewed as
stimuli that trigger or disable specific behaviors. To ensure
portability, all these behaviors are written in the behavior
programming language of the ActivMedia Robotics Interface for Applications (ARIA) system from ActivMedia
Robotics, Inc. The routines run on the WT laptop. In
addition, the WT laptop runs three other software compo-
nents: 1) a map server, 2) a path planner, and 3) a speech
recognition and synthesis engine.
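The sketch below illustrates the tag-as-stimulus idea described above: each detected tag ID looks up a small script that triggers or disables control-level behaviors. The behavior names come from the list above; the tag IDs, the script format, and the dispatcher are illustrative assumptions, not the ARIA implementation.

```python
# Sketch of RFID tags acting as stimuli that trigger or disable behaviors.
# Tag IDs and script contents are hypothetical examples.

BEHAVIOR_SCRIPTS = {
    # tag ID: (behaviors to trigger, behaviors to disable)
    17: (["turn-right"], ["follow-wall"]),      # e.g., a tag placed just before a corner
    42: (["pass-doorway"], []),                 # e.g., a tag next to an office door
}

class BehaviorController:
    def __init__(self):
        self.active = {"follow-wall", "avoid-obstacles"}   # default control-level behaviors

    def on_tag_detected(self, tag_id):
        trigger, disable = BEHAVIOR_SCRIPTS.get(tag_id, ([], []))
        self.active -= set(disable)
        self.active |= set(trigger)
        return self.active
```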
The Map Server realizes the causal and topological
levels of the SSH. The server’s knowledge base represents a connectivity graph of the environment in which
RG operates. No global map is assumed. In addition,
the knowledge base contains tag to destination mappings
and simple behavior trigger/disable scripts associated with
specific tags. The Map Server continuously registers the
latest location of RG on the connectivity graph. The
location is updated as soon as RG detects an RFID tag.
Given the connectivity graph, the Path Planner uses the
standard breadth-first search algorithm to find a path from
one location to another. A path plan is a sequence
of tag numbers and behavior scripts at each tag. Thus,
RG’s trips are sequences of locally triggered behaviors that
achieve global navigation objectives. The SSH metric level
is not implemented, because, as studies in mobile robotics
show[16], [14], odometry, from which metric information
is typically obtained, is not reliable in robotic navigation.
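A sketch of such a planner is shown below: breadth-first search over the tag connectivity graph, returning a path plan as a sequence of (tag, behavior script) pairs. The graph and script representations are assumptions for illustration.

```python
from collections import deque

# Sketch of the Path Planner: BFS over the tag connectivity graph. The
# dictionary-based graph and script representations are assumptions.

def plan_path(graph, scripts, start_tag, goal_tag):
    """graph: dict tag -> list of adjacent tags; scripts: dict tag -> behavior script.
    Returns a list of (tag, script) pairs from start to goal, or None if unreachable."""
    parent = {start_tag: None}
    queue = deque([start_tag])
    while queue:
        tag = queue.popleft()
        if tag == goal_tag:
            path = []
            while tag is not None:               # walk back through the BFS parents
                path.append((tag, scripts.get(tag, [])))
                tag = parent[tag]
            return list(reversed(path))
        for neighbor in graph.get(tag, []):
            if neighbor not in parent:
                parent[neighbor] = tag
                queue.append(neighbor)
    return None   # no route between the two tags in the connectivity graph
```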
D. Human-Robot Interaction
Human-robot interaction in RG is described in detail
elsewhere[18], [19]. Here we give a brief summary only
for the sake of completeness. Visually impaired users can
interact with RG through speech and wearable keyboards.
Speech is received by RG through a wireless microphone
placed on the user’s clothing. Speech is recognized and
synthesized with Microsoft Speech API (SAPI) 5.1. RG
interacts with its users and people in the environment
through speech and audio icons, i.e., non-verbal sounds that
are readily associated with specific objects, e.g., the sound
of water bubbles associated with a water cooler. When RG
is passing a water cooler, it can either say “water cooler”
or play an audio file with sounds of water bubbles. We
added audio icons to the system because, as recent research
findings indicate [20], speech messages can be slow to perceive and
can mask ambient sounds from the environment. To
other people in the environment, RG is personified as
Merlin, a Microsoft software character, always present on
the WT laptop’s screen.
IV. EXPERIMENTS
We deployed our system for a total of approximately
seventy hours in two indoor environments: the Assistive
Technology Laboratory (ATL) of the USU Center for
Persons with Disabilities and the USU CS Department. The
ATL occupies part of a floor in a building on the USU
North Campus. The floor has an area of approximately
4,270 square meters. The floor contains 6 laboratories, two
bathrooms, two staircases, and an elevator. The CS Department occupies an entire floor in a multi-floor building. The
floor’s area is 6,590 square meters. The floor contains 23
offices, 7 laboratories, a conference room, a student lounge,
a tutor room, two elevators, several bathrooms, and two
staircases.
Forty RFID tags were deployed at the ATL and one hundred tags were deployed at the CS Department.
Fig. 3. Path Deviations in Hallways: (a) narrow (1 m wide) hallway runs; (b) medium (1.5 m wide) hallway runs; (c) wide (2.5 m wide) hallway runs.
Fig. 4. Velocity Changes in Hallways: (a) narrow (1 m wide) hallway runs; (b) medium (1.5 m wide) hallway runs; (c) wide (2.5 m wide) hallway runs.
It took one person 20 minutes to deploy the tags and about 10 minutes
to remove them at the ATL. The same measurements at the
CS Department were 30 and 20 minutes, respectively. As
Figure 1(b) indicates, the tags were placed on small pieces
of cardboard to insulate them from the walls and were
attached to the walls with regular scotch tape. The creation
of the connectivity graphs took one hour at the ATL and
about 2 hours at the CS Department. One administrator
first walked around the areas with a laptop and recorded
tag-destination associations and then associated behavior
scripts with tags.
RG was first repeatedly tested in the ATL, the smaller
of the two environments, and then deployed for pilot
experiments at the USU CS Department. We ran two sets
of pilot experiments. The first set did not involve visually
impaired participants. The second set did. In the first set of
experiments, we had RG navigate three types of hallways
of the CS Department: narrow (1 m), medium (1.5 m) and
wide (2.5 m) and estimated its navigation in terms of two
variables: path deviations and abrupt speed changes. We
also wanted to test how well RG’s RFID reader detected
the tags.
To estimate path deviations, in each experiment we first
computed the ideal distance that the robot has to maintain
from the right wall in a certain type of hallway (narrow,
medium, and wide). The ideal distance was computed by
running the robot in a hallway of that type with all doors
closed and no obstacles en route. During the run, the
distance read by the laser range finder between the robot
and the right wall was recorded every 50 milliseconds. In
recording the distance, the robot orientation was taken into
account from two consecutive readings. The ideal distance
was computed as the average of the distances taken during
the run. Once the ideal distances were known, we ran the
robot three times in each type of hallway. The hallways
in which the robot ran were different from the hallways
in which the ideal distances were computed. Obstacles,
e.g., humans walking by and open doors, were allowed
during the test runs. Figure 3 gives the distance graphs
of the three runs compared in each hallway type. The
vertical bars in each graph represent the robot’s width.
As can be seen from Figure 3(a), there is almost no
deviation from the ideal distance in narrow hallways. Nor
is there any oscillation. Figure 3(b) and Figure 3(c) show
some insignificant deviations from the ideal distance. The
deviations were caused by people walking by and by
open doors. However, there is no oscillation, i.e., sharp
movements in different directions. In both environments,
we observed several tag detection failures, particularly in
metallic door frames. However, after we insulated the tags
with small pieces of cardboard (see Figure 1(b)), the tag
detection failures stopped.
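The sketch below summarizes how the path-deviation measure described above could be computed: the ideal distance is the mean of the right-wall readings from a calibration run (sampled every 50 milliseconds), and each test run is then scored by its per-sample deviations from that value. Function names and the deviation statistic are assumptions.

```python
# Sketch of the path-deviation measurement described in the text.

def ideal_distance(calibration_readings):
    """Mean right-wall distance (mm) over a run with closed doors and no obstacles."""
    return sum(calibration_readings) / len(calibration_readings)

def path_deviations(test_readings, ideal):
    """Per-sample deviation (mm) of a test run from the ideal distance."""
    return [r - ideal for r in test_readings]
```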
Figure 4 gives the velocity graphs for each hallway type
(x-axis is time in seconds, y-axis is velocity in mm/sec).
The graphs show that the narrow hallways cause short
abrupt changes in velocity. This is because in narrow
hallways even a slight disorientation of the robot, e.g., 3 degrees,
reduces the amount of free space detected in the grid and
thus changes the velocity. In medium and wide hallways, the
velocity remains mostly smooth. Several speed changes
occur when the robot passes or navigates through doorways
or avoids obstacles.
The second set of pilot experiments involved five visually impaired participants, one participant at a time,
over a period of two months. Three participants were
completely blind and two participants could perceive only
light. The participants had no speech impediments, hearing
problems, or cognitive disabilities. Two participants were
dog users and the other three used white canes. The
participants were asked to use RG to navigate to three
distinct locations (an office, a lounge, and a bathroom)
at the USU CS Department. All participants were new
to the environment and had to navigate approximately 40
meters to get to all destinations. Thus, in the experiments
with visually impaired participants, the robot navigated
approximately 200 meters. All participants reached their
destinations without a problem. In their exit interviews,
the participants complained mostly about the human-robot
interaction aspects of the system. For example, all of them
had problems with the speech recognition system[21], [19].
The participants especially liked the fact that they did not
have to give up their white canes and guide dogs to use
RG.
V. LIMITATIONS
In addition to velocity changes in narrow hallways, RG
has three other limitations. First, the robot cannot create
a connectivity graph for a given environment once the
RFID tags are deployed. We are currently working on
creating connectivity graphs and behavior scripts in a semi-automatic fashion. Second, the robot cannot detect route
blockages. If the route is blocked, the robot first slows
down to a stop and then starts turning in order to find
some free space. In this fashion, RG makes a gradual u-turn by looking for the maximum free space around itself.
Since RG has no orientation sensor, currently the only way
it can detect a detour is by detecting an RFID tag that is
not on the path to the current destination. Finally, while
several visually impaired participants told us that it would
be helpful if RG could guide them in and out of elevators,
RG cannot negotiate elevators yet.
VI. CONCLUSION
In this paper, we showed how Radio Frequency Identification (RFID) can be used in robot-assisted indoor
navigation for the visually impaired. We presented a robotic
guide for the visually impaired that was deployed and
tested both with and without visually impaired participants
in two indoor environments. The experiments illustrate that
passive RFID tags can act as reliable stimuli that trigger
local navigation behaviors to achieve global navigation
objectives.
ACKNOWLEDGMENT
The authors would like to thank the visually impaired
participants for generously volunteering their time for the
pilot experiments. The authors would like to thank Marty
Blair, Director of the Utah Assistive Technology Program,
for his administrative assistance and support. The first
author would like to acknowledge that this research has
been supported, in part, through the NSF Universal Access Career Grant (IIS-0346880), a Community University
Research Initiative (CURI) grant from the State of Utah,
and through a New Faculty Research grant from Utah State
University.
REFERENCES
[1] M. P. LaPlante and D. Carlson, Disability in the United States:
Prevalence and Causes. Washington, DC: U.S. Department of Education, National Institute of Disability and Rehabilitation Research,
2000.
[2] S. Shoval, J. Borenstein, and Y. Koren, “Mobile Robot Obstacle
Avoidance in a Computerized Travel Aid for the Blind,” in IEEE
International Conference on Robotics and Automation, San Diego,
CA, 1994.
[3] D. Ross and B. Blasch, “Development of a Wearable Computer
Orientation System,” IEEE Personal and Ubiquitous Computing,
vol. 6, pp. 49–63, 2002.
[4] H. Mori and S. Kotani, “Robotic Travel Aid for the Blind:
HARUNOBU-6,” in Second European Conference on Disability,
Virtual Reality, and Associated Technologies, Skövde, Sweden, 1998.
[5] I. Horswill, “Polly: A Vision-Based Artificial Agent,” in Proceedings of the 11th Conference of the American Association for
Artificial Intelligence (AAAI-93), Washington, DC, July 1993.
[6] S. Thrun, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert,
D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and
D. Schulz, “Minerva: A Second Generation Mobile Tour-Guide
Robot,” in Proceedings of the IEEE International Conference on
Robotics and Automation (ICRA-99), Detroit, MI, May 1999.
[7] W. Burgard, A. Cremers, D. Fox, D. Hähnel, G. Lakemeyer,
D. Schulz, W. Steiner, and S. Thrun, “Experiences with an Interactive Museum Tour-Guide Robot,” Artificial Intelligence, no. 114,
pp. 3–55, 1999.
[8] G. Kantor and S. Singh, “Preliminary Results in Range-Only Localization and Mapping,” in Proceedings of the IEEE Conference
on Robotics and Automation, Washington, DC, May 2002.
[9] T. Tsukiyama, “Navigation System for the Mobile Robots using
RFID Tags,” in Proceedings of the IEEE Conference on Advanced
Robotics, Coimbra, Portugal, June-July 2003.
[10] D. Hähnel, W. Burgard, D. Fox, K. Fishkin, and M. Philipose,
“Mapping and Localization with RFID Technology,” Intel Research
Institute, Seattle, WA, Tech. Rep. IRS-TR-03-014, December 2003.
[11] V. Kulyukin and M. Blair, “Distributed Tracking and Guidance in
Indoor Environments,” in Conference of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA-2003), Atlanta, GA, June 2003.
[12] ActivMedia Robotics, Inc., ActivMedia Robotic Platforms, http://www.activmedia.com.
[13] J. H. Chuang and N. Ahuja, “An Analytically Tractable Potential
Field Model of Free Space and its Application in Obstacle Avoidance,” IEEE Trans. Sys. Man, Cyb., vol. 28, no. 5, pp. 729–736,
1998.
[14] R. Murphy, Introduction to AI Robotics. Cambridge, MA: The MIT
Press, 2000.
[15] Y. Koren and J. Borenstein, “Potential Field Methods and their
Inherent Limitations for Mobile Robot Navigation,” in Proceedings
of the IEEE Conference on Robotics and Automation, Sacramento,
CA, April 1991.
[16] B. Kuipers, “The Spatial Semantic Hierarchy,” Artificial Intelligence,
no. 119, pp. 191–233, 2000.
[17] N. Tinbergen, Animal in its World: Laboratory Experiments and
General Papers. Cambridge, MA: Harvard University Press, 1976.
[18] V. Kulyukin, “Towards Hands-Free Human-Robot Interaction
through Spoken Dialog,” in AAAI Spring Symposium on Human
Interaction with Autonomous Systems in Complex Environments,
Palo Alto, CA, March 2003.
[19] ——, “Human-Robot Interaction through Gesture-Free Spoken Dialogue,” Autonomous Robots, 2004, to appear.
[20] T. V. Tran, T. Letowski, and K. S. Abouchacra, “Evaluation of
Acoustic Beacon Characteristics for Navigation Tasks,” Ergonomics,
vol. 43, no. 6, pp. 807–827, 2000.
[21] V. Kulyukin, C. Gharpure, and N. De Graw, “Human-Robot Interaction in a Robotic Guide for the Visually Impaired,” in AAAI
Spring Symposium on Interaction between Humans and Autonomous
Systems over Extended Operation, Palo Alto, CA, March 2004, to
appear.