Exploring Unknown Structured Environments

Jonathan F. Diaz and Alexander Stoytchev and Ronald C. Arkin
Mobile Robot Laboratory
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280, U.S.A.
Abstract
This paper presents perceptual algorithms for corridor navigation, door detection, and door entry that can be used for exploration of an unknown floor of a building. The algorithms are fairly robust to sensor noise, do not use odometric data, and can be implemented on a low-end PC. The primary sensor used is a laser range scanner.
Introduction
Many practical robot applications require navigation in structured but unknown environments. Search and rescue missions, surveillance and monitoring tasks, and urban warfare scenarios are all examples of domains where autonomous robot applications would be highly desirable. Deploying robots in such scenarios is expected to reduce the risk of losing human lives (Blitch 1999).
One scenario, for example, would be a biohazard detection mission, where a robot or a team of robots would be deployed to enter and navigate an unknown building to search for and report the existence of any hazardous materials. Although the building layout may be unknown, it is safe to assume that the building will consist of several floors, where each floor will have one or more corridors, and each corridor will have several rooms whose entrance doors are located on the corridor.
This paper presents several perceptual algorithms that exploit available structure to allow mobile robots to navigate through corridors, detect doors, and enter rooms. In combination, the algorithms can be used for exploration of an unknown floor of a building in the absence of any map. The algorithms are fairly robust to sensor noise, do not rely on odometry, and are computationally efficient. The primary sensor used is a laser range scanner. The algorithms presented here have been demonstrated successfully at the DARPA Tactical Mobile Robotics (TMR) Demonstration in Rockville, Maryland in September 2000.
Related Work
Lu and Milios (Lu & Milios 1997a; 1997b) have previously described algorithms for robot pose estimation with laser data. The approach that they take is registration and matching. Their technique relies on constraint-based search to align a current laser frame with a map built from previous laser scans. The alignment of range scans is computationally expensive and can be unnecessary when navigating in a structured environment.
Other researchers (Fox et al. 1999) have used laser range scanners for map building and robot localization. Their approach relies heavily on the assumption that the features of the environment used for localization are not changing over time. Therefore, it is not robust to changes in the environment (e.g., opening and closing doors), which may disorient the robot, causing it to decrease its localization certainty. Consecutive localizations can take time, which may not always be available in a warfare scenario.
The office delivery mobile robot Xavier (Simmons et al. 1997) was capable of entering rooms in order to deliver or pick up printouts that users requested over a web-based interface. Xavier used a neural network and camera images to position itself in front of a door in order to enter a room. The whole process was discrete rather than continuous.
The approach presented in this paper for door entry uses a behavior similar to the docking motor schema described in (Arkin & Murphy 1990). This schema allows the robot to enter the room rapidly while traveling on a smooth semi-arc trajectory (see Figure 2). The robot never stops to make sure it is on the right track.
Mission Specification
The MissionLab mission specification system (Endo et al. 2000) was used as a test-bed for the biohazard survey experiments (see the Experiments section for details). MissionLab allows a robot mission to be specified graphically in terms of a finite state acceptor (FSA). Figure 1 shows the FSA used for the biohazard detection mission. We present perceptual algorithms that support the ProceedAlongHallway and GoThroughDoor behavioral assemblages. An assemblage is a collection of low-level behaviors aggregated in support of a task.
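As a rough illustration of how such an FSA sequences assemblages (the state and trigger names below are hypothetical and not MissionLab's actual configuration), a minimal sketch in Python might look like this:

# Minimal finite-state-acceptor sketch for sequencing behavioral assemblages.
# State and trigger names are illustrative only, not MissionLab's actual API.

TRANSITIONS = {
    ("ProceedAlongHallway", "door_detected"): "GoThroughDoor",
    ("GoThroughDoor", "room_entered"): "LookForBiohazard",
    ("LookForBiohazard", "biohazard_found"): "ReportToOperator",
}

def step(state, trigger):
    """Return the next assemblage, or stay in the current one if no transition fires."""
    return TRANSITIONS.get((state, trigger), state)

state = "ProceedAlongHallway"
state = step(state, "door_detected")    # -> "GoThroughDoor"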
Figure 1: Finite state acceptor describing a biohazard detection mission at a high level.

The behavioral assemblage ProceedAlongHallway is composed of the motor schemas StayInHallway, MoveDownHallway, and AvoidObstacles. StayInHallway directs the robot in a direction orthogonal to the centerline of the hallway. It requires as input the width of the hallway and the angle between the centerline of the hallway and the current robot heading. MoveDownHallway moves the robot parallel to the centerline of the hallway. The angle to the centerline alone is sufficient input for this motor schema. AvoidObstacles repulses the robot from obstacles. Each of the three schemas produces a vector as its output. The vectors are adjusted according to gain values supplied by the user.
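As an illustration only (the gain values and the schema internals below are assumptions, not taken from the paper), the assemblage output can be sketched as a gain-weighted vector sum of the individual schema outputs:

import math

# Gain-weighted combination of motor schema outputs. Each schema returns a 2D
# vector; the gains and the schema internals are illustrative assumptions.

def stay_in_hallway(hallway_width, offset_from_centerline):
    # Push back toward the centerline (hallway assumed along the robot's x-axis
    # here for brevity); strength grows with the offset.
    strength = min(1.0, abs(offset_from_centerline) / (hallway_width / 2.0))
    return (0.0, -math.copysign(strength, offset_from_centerline))

def move_down_hallway(angle_to_centerline_deg):
    # Unit vector parallel to the hallway centerline.
    a = math.radians(angle_to_centerline_deg)
    return (math.cos(a), math.sin(a))

def combine(vectors_and_gains):
    x = sum(g * vx for (vx, vy), g in vectors_and_gains)
    y = sum(g * vy for (vx, vy), g in vectors_and_gains)
    return (x, y)

move = combine([
    (stay_in_hallway(2.0, 0.3), 0.5),   # gain 0.5 (assumed)
    (move_down_hallway(5.0), 1.0),      # gain 1.0 (assumed)
])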
The behavioral assemblage GoThroughDoor consists of the EnterDoorway and AvoidObstacles schemas. EnterDoorway uses a docking schema (Arkin & Murphy 1990) to compute the instantaneous force on the robot relative to the door. The schema requires three points to compute the vector response: two end points of the door to enter and a point on the far side of the door that the robot should move towards (Figure 2).

Figure 2: The docking motor schema used to enter rooms.
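For illustration, a much-simplified stand-in for EnterDoorway is sketched below. It is not the Arkin and Murphy docking schema itself, only an example of how the three input points might drive a steering vector.

import math

# Much-simplified stand-in for EnterDoorway: NOT the Arkin & Murphy (1990)
# docking schema itself, only an illustration of how the three input points
# (two door endpoints and a far-side goal) might drive a steering vector.

def enter_doorway_vector(robot, left_post, right_post, far_goal, blend_dist=1.0):
    """Return a unit steering vector through the doorway toward the far-side goal."""
    mid = ((left_post[0] + right_post[0]) / 2.0,
           (left_post[1] + right_post[1]) / 2.0)
    # Aim at the doorway midpoint while far away; switch to the goal when close.
    # The 1 m blend distance is an arbitrary assumption.
    far_from_door = math.hypot(mid[0] - robot[0], mid[1] - robot[1]) > blend_dist
    target = mid if far_from_door else far_goal
    dx, dy = target[0] - robot[0], target[1] - robot[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)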
Perceptual Algorithms

We are motivated by the approach of (Gibson 1979) and use affordance-based percepts to interpret sensor data. For example, a door for the robot is defined as an affordance for passage; a corridor is an affordance for navigation.

Corridor Detection
We begin with the assumption that hallway parameters (width and angle with respect to the robot) can in fact be determined from a single laser reading. This is a fairly strong but generally true assumption. Two factors contribute to this result. First, hallways have distinct features: long, straight, parallel walls. Second, the laser range scanner is extremely accurate¹, allowing proper perceptual algorithms to detect these features.

The SICK LMS 200² used in our experiments has a field of view of 180 degrees and returns 361 distance readings (two per degree). These distances are converted in software to Cartesian coordinates to form a laser frame F = <p_1, p_2, ..., p_361>, where each p_i is a point in 2D. A local coordinate system is imposed on the laser frame such that p_181 is always on the x-axis, p_1 on the -y axis, and p_361 on the +y axis. All calculations that follow are relative to the origin of this coordinate system. Angles calculated relative to the sensor are calculated with respect to the x-axis.

Since corridors consist of long, straight walls, it is expected that many of the laser rays will intersect these walls, yielding a frame with a large number of relevant data points. It only needs to be determined which points fall along these walls. Figure 3 shows a laser frame taken in a corridor with two open doors.

¹ Maximum error of +/- 3 cm per 80 meters.
² SICK is an industrial sensor engineering company. The Laser Measurement System (LMS) calculates the distance to an object using the time of flight of pulsed light. A rotating mirror deflects the pulsed light beam to many points in a semi-circle. The LMS 200 has a serial port interface, a peak power demand of 35 Watts, and costs approximately five thousand dollars.
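A small sketch of this conversion is shown below; the exact angular convention of the SICK driver is an assumption, chosen so that p_1 falls on the -y axis, p_181 on the +x axis, and p_361 on the +y axis.

import math

# Sketch: convert the 361 SICK LMS 200 range readings (half-degree spacing over
# 180 degrees) into the 2D laser frame described above. The angular convention
# is an assumption, not the actual driver interface.

def to_laser_frame(ranges_m):
    assert len(ranges_m) == 361
    frame = []
    for i, r in enumerate(ranges_m):
        theta = math.radians(-90.0 + 0.5 * i)   # -90 deg ... +90 deg
        frame.append((r * math.cos(theta), r * math.sin(theta)))
    return frame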
Figure 3: A single frame of laser scanner data, positioned at the origin.
To determine which points fall along the two parallel walls, a voting scheme is adopted. Each point p_i is allowed to vote for the dominant orientation of the corridor. The points cast their votes by observing the local orientation, determined by a small neighborhood of points around them, and declaring it the dominant orientation. The orientation with the most votes is declared the orientation of the hallway. This is done by fitting a line L to the local neighborhood of points about p_i.
A line L in 2D can be described by two parameters ρ and φ, where ρ is the perpendicular distance to the line L and φ is the angle from the positive x-axis to that perpendicular of L (Figure 4).
Figure 4: Best fit line for a set of points.

A least-squares approach is used to fit a line to the neighborhood of points. A closed-form solution exists and can be calculated efficiently (Lu & Milios 1997a). The size of the local neighborhood was empirically determined to be 7. Smaller values for the neighborhood can cause the lines to be overly sensitive to small changes in contour and noise, while larger values incur more computation with little gain in performance.

Step 1. A line is fit through each data point. To be more precise, for each data point p_i in a laser frame F, a best fit line L_i is found for the set of data points {p_j ∈ F : i - 3 <= j <= i + 3}.

Step 2. A histogram is calculated over all φ_i. After all (ρ_i, φ_i) pairs have been calculated, a histogram over φ_i is computed. For a typical hallway and a laser scanner position within the hallway, the histogram will have two large humps corresponding to the two opposing walls. Exploiting the fact that the walls are parallel, or nearly parallel, the two humps can be merged if all φ_i outside the range [0, 180] are rotated by 180 degrees. In this way, angles corresponding to parallel lines will now be overlapping. Figure 5 shows the histogram for the laser frame displayed in Figure 3.

Figure 5: Histogram for the dominant hallway angle for Figure 3. Each bin corresponds to a two-degree interval.

Step 3. The hallway angle is selected. A triangle filter is used to smooth the histogram and the dominant angle in the histogram is selected. This is the angle between the perpendicular to the left-hand wall and the laser scanner. The angle between the perpendicular to the right-hand wall and the sensor is 180 degrees larger. This latter angle, labeled H_φ, is the one used in subsequent calculations. The bin size of the histogram was selected to be two degrees and the triangle filter used was [0.1, 0.2, 0.4, 0.2, 0.1].

Step 4. The winning points are divided into two sets. The width of the hallway will be estimated from those points that voted for the winning hallway angle. These points can be divided into two sets corresponding to the left and right walls: L and R. The set L consists of points p_i for which φ_i ∈ [0, 180]. The set R consists of points p_i for which φ_i ∈ [-180, 0].

Step 5. The distance to each wall is calculated using a histogram method. A histogram is calculated over ρ_i for each of the two sets. Triangle filtering is again used. The most common value is selected from each histogram. Let d_l be the distance to the left-hand wall calculated from the set L, and let d_r be the distance to the right-hand wall calculated from the set R.

Step 6. The width of the hallway is calculated. The width of the hallway, H_w, is the sum of the two distances: H_w = d_l + d_r.

Step 7. The location of the robot within the hallway is calculated. The distance between the laser scanner and the hallway centerline can also be calculated: S_x = d_l - H_w/2. The sign of this variable determines whether the robot is to the right (positive) or to the left (negative) of the centerline. There is now sufficient information to navigate along the hallway. Let the laser scanner be positioned at the front center of the robot and oriented in the same direction. Then the angle of the hallway local to the robot is -H_φ - 90. The width of the hallway is H_w, and the robot is located S_x units away from the hallway centerline.

Figure 6: Hallway transformation parameters.
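The following compact sketch pulls Steps 1 through 7 together. The 7-point neighborhood, the two-degree bins, and the triangle filter coefficients come from the text above; the least-squares line fit and the histogram bookkeeping are simplified stand-ins rather than the authors' implementation.

import math
from collections import Counter

# Compact sketch of corridor-detection Steps 1-7. Parameter values come from
# the text; everything else is a simplified stand-in.

TRIANGLE = [0.1, 0.2, 0.4, 0.2, 0.1]

def fit_line(points):
    """Least-squares (rho, phi) fit: rho = distance to the line, phi = normal angle (deg)."""
    n = float(len(points))
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    phi = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    rho = mx * math.cos(phi) + my * math.sin(phi)
    if rho < 0.0:                                   # keep rho non-negative
        rho, phi = -rho, phi + math.pi
    return rho, (math.degrees(phi) + 180.0) % 360.0 - 180.0   # phi in (-180, 180]

def corridor_parameters(frame, bin_deg=2, dist_bin=0.05):
    # Step 1: fit a line to the 7-point neighborhood around each point.
    fits = [fit_line(frame[i - 3:i + 4]) for i in range(3, len(frame) - 3)]
    # Step 2: histogram over phi, folding angles outside [0, 180) back by 180 degrees.
    n_bins = 180 // bin_deg
    hist = Counter(int((phi % 180.0) // bin_deg) for _, phi in fits)
    # Step 3: triangle-smooth the histogram and pick the dominant bin (left-wall normal).
    smoothed = [sum(w * hist.get((b + k - 2) % n_bins, 0)
                    for k, w in enumerate(TRIANGLE)) for b in range(n_bins)]
    win = max(range(n_bins), key=smoothed.__getitem__)
    h_phi = win * bin_deg - 180.0                   # angle of the right-wall normal
    # Step 4: split the winning points into left (phi >= 0) and right (phi < 0) walls.
    winners = [(r, phi) for r, phi in fits if int((phi % 180.0) // bin_deg) == win]
    left = [r for r, phi in winners if phi >= 0.0]
    right = [r for r, phi in winners if phi < 0.0]
    # Step 5: most common (binned) distance to each wall.
    def mode(ds):
        if not ds:
            return 0.0
        top = Counter(int(r // dist_bin) for r in ds).most_common(1)[0][0]
        return (top + 0.5) * dist_bin
    d_l, d_r = mode(left), mode(right)
    # Steps 6-7: hallway width and signed offset from the centerline.
    h_w = d_l + d_r
    s_x = d_l - h_w / 2.0
    return h_phi, h_w, s_x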
Figure 7: Aligned laser scan data.
Door Detection
The technique for doorway detection relies on the definition of a door as an affordance for passage. It looks for openings in the hallway that are within some acceptable width. In the experiments, fifty centimeters was used for the minimum doorway width and two meters for the maximum. These numbers were used to accommodate our RWI Urban robot (Figure 8). A larger or smaller robot may have very different acceptable ranges of doorway widths.
Step 1. All points in the frame are transformed so that the hallway centerline coincides with the x-axis. This step allows for efficient detection of openings in the two walls, since the walls will now be horizontal. This transformation requires a translation of (cos(H_φ) · S_x, sin(H_φ) · S_x) followed by a rotation of -H_φ - 90, where again H_φ is the angle between the laser scanner and the perpendicular to the right-hand wall. The wall positions are now (0, -H_w/2) and (0, H_w/2) and the new laser scanner position is (0, -S_x). Figure 7 displays the laser frame after the translation and rotation have been performed.
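As a quick illustration (sign conventions taken from the description above, unverified), the transformation might look like this:

import math

# Sketch of the Step 1 transformation into hallway-aligned coordinates:
# translate by (cos(H_phi) * S_x, sin(H_phi) * S_x), then rotate by
# -H_phi - 90 degrees. Sign conventions follow the text and are assumptions.

def to_hallway_frame(frame, h_phi_deg, s_x):
    tx = math.cos(math.radians(h_phi_deg)) * s_x
    ty = math.sin(math.radians(h_phi_deg)) * s_x
    a = math.radians(-h_phi_deg - 90.0)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for x, y in frame:
        x, y = x + tx, y + ty                        # translate
        out.append((x * cos_a - y * sin_a,           # rotate
                    x * sin_a + y * cos_a))
    return out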
Step 2. Openings in the wall are detected using a box filter. To detect openings, a box filter of size 50 cm x 50 cm is passed along each wall at 5 cm offsets. For each filter position the number of points that fall within the box is counted. Consecutive regions with zero points are merged to form a potential door opening. The width of that opening is checked against the acceptable door widths specified earlier.
Step 3. False positives are eliminated. This method can generate false positives, however, due to obstacles located between the laser and the wall. To eliminate this problem the laser frame is checked for points that are closer to the laser scanner than the expected position of the wall and fall in the same viewing angle as the opening. If such points are found, the opening is rejected. The coordinates of the two endpoints of the opening (which can be represented as a line segment) are tracked continuously, transformed into coordinates local to the robot, and fed to the EnterDoorway motor schema during doorway entry.
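A sketch of Steps 2 and 3 in hallway-aligned coordinates follows; the 50 cm box, 5 cm step, and width limits come from the text, while the occlusion test is a simplified stand-in for the viewing-angle check described above.

# Sketch of the box-filter door search in hallway-aligned coordinates
# (walls near y = +/- H_w/2, scanner at (0, -S_x)). Box size, step, and width
# limits come from the text; the occlusion test is a simplified stand-in.

def find_openings(frame, h_w, s_x, min_w=0.5, max_w=2.0, box=0.5, step=0.05):
    """Return (start_x, end_x, wall_y) candidates for door openings along both walls."""
    openings = []
    for wall_y in (h_w / 2.0, -h_w / 2.0):
        xs = [x for x, y in frame if abs(y - wall_y) < box / 2.0]
        if not xs:
            continue
        x_min, x_max = min(xs), max(xs)
        empty = []                                   # consecutive empty box positions
        pos = x_min
        while pos < x_max:
            hit = any(pos <= x < pos + box and abs(y - wall_y) < box / 2.0
                      for x, y in frame)
            if not hit:
                empty.append(pos)
            elif empty:
                start, end = empty[0], empty[-1] + box
                if min_w <= end - start <= max_w and not occluded(frame, start, end, wall_y, s_x):
                    openings.append((start, end, wall_y))
                empty = []
            pos += step
    return openings

def occluded(frame, start, end, wall_y, s_x):
    """True if some point lies between the scanner and the candidate opening."""
    sx0, sy0 = 0.0, -s_x                             # scanner position in this frame
    wy = wall_y - sy0
    for x, y in frame:
        dy = y - sy0
        if dy == 0.0 or (dy > 0) != (wy > 0) or abs(dy) >= abs(wy):
            continue                                 # behind, beside, or on/past the wall
        t = wy / dy                                  # extend the viewing ray to the wall line
        if start <= sx0 + (x - sx0) * t <= end:
            return True
    return False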
Figure 8: The RWI Urban Robot equipped with a SICK laser scanner and a Sony pan/tilt/zoom camera.

Experiments

The experiments described in this section were conducted during the DARPA TMR Demo in Rockville, Maryland in September 2000. The site for our demo portion was inside a firefighters' training facility, also known as the Burn Building (Figure 9). The conditions inside the building were quite different from the pristine environments found in many robotics labs. Most walls (both corridor and room walls) were not precisely straight due to the routine incinerate/extinguish exercises that have been going on for more than a decade in that building. Light was very limited and the air was quite dusty.

Nevertheless, the RWI Urban robot (Figure 8) was up to the test. Its mission was to start from one end of the building, navigate down the corridor, find a room that is labeled as one containing biohazard materials, enter that room, find the hazardous material, and navigate towards it. If a biohazard is found, the robot sends a picture of the material back to the human operator. The room in question was labeled with a biohazard sign (Figure 8). A bucket labeled 'biohazard' was used for the biohazard material. To detect the sign and the bucket the robot used simple color tracking. A Newton Labs Cognachrome board performing real-time color blobbing was used to do the tracking. The size and position of the blob were used for detection. This simple technique serves as a placeholder for a more sophisticated sensor that would be able to detect a biohazard directly.
Total Missions   Successes   Failures   Mean Run Time    STD
32               29          3          62.3 seconds     5.6 seconds

Table 1: Summary of the experimental results.
Thirty-two missions were attempted during a two-day period before the demo day. Of those, twenty-nine were successful, a 91% success rate. Two mission failures were caused by unreliable detection of the biohazard due to the limited light in the building, which caused the computer vision code to fail. One failure resulted from an unsuccessful attempt to enter the door. Table 1 summarizes the results of the experiments.

Figure 9: The Burn Building in Rockville, MD: site of the DARPA TMR Demo.

The corridor and door detection algorithms performed exceptionally well since the laser range scanner is not affected by the amount of light present in the environment. In fact, the same mission (without the biohazard sign detection) was run successfully in complete darkness.
Summary
This paper described perceptual algorithms for corridor navigation, door detection, and door entry that can be used for exploration of an unknown floor of a building. The algorithms use the inherent structure in building design to extract a few parameters needed for correct navigation. To navigate down a corridor, for example, the robot needs to know only the width of the corridor and the angle between the centerline of the corridor and its current heading. To enter a door, the robot needs to know only the two endpoints of the doorway. The algorithms presented in this paper use a laser range scanner to detect these parameters. Experimental results presented above show that this technique is quite reliable. In fact, the same motor schemas and perceptual algorithms have been used without modification to control a Nomad 200 robot that has a completely different drive mechanism than the Urban robot.
Acknowledgements
This research has been supported by DARPA under the Tactical Mobile Robotics (TMR) program. Additional funding for the Georgia Tech Mobile Robot Laboratory is provided by C.S. Draper Laboratories and Honda R&D, Co. The authors would like to thank William Halliburton, Yoichiro Endo, Michael Cramer, and Tom Collins for their help with the implementation of the system and for their hard work at the DARPA TMR Demo in Rockville, MD.
References

Arkin, R., and Murphy, R. 1990. Autonomous navigation in a manufacturing environment. IEEE Transactions on Robotics and Automation 6(4):445-454.

Blitch, J. 1999. Tactical mobile robots for complex urban environments. In Mobile Robots XIV, 116-128.

Endo, Y.; MacKenzie, D.; Stoytchev, A.; Halliburton, W.; Ali, K.; Balch, T.; Cameron, J.; and Chen, Z. 2000. MissionLab: User Manual for MissionLab. Georgia Institute of Technology, 4.0 edition.

Fox, D.; Burgard, W.; Dellaert, F.; and Thrun, S. 1999. Monte Carlo localization: Efficient position estimation for mobile robots. In AAAI, 343-349.

Gibson, J. 1979. The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin.

Lu, F., and Milios, E. 1997a. Globally consistent range scan alignment for environment mapping. Autonomous Robots 4:333-349.

Lu, F., and Milios, E. 1997b. Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems 18:249-275.

Simmons, R.; Goodwin, R.; Haigh, K.; Koenig, S.; and O'Sullivan, J. 1997. A layered architecture for office delivery robots. In International Conference on Autonomous Agents, 245-252.