Using Negative Information in Simultaneous
Localization and Mapping

by

Hunter McClelland

B.S. Mechanical Engineering
Tennessee Tech University, 2009

Submitted to the Department of Mechanical Engineering
in partial fulfillment of the requirements for the degree of
Master of Science in Mechanical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2011

© Massachusetts Institute of Technology 2011. All rights reserved.

Author: Department of Mechanical Engineering, May 9, 2011

Certified by: John J. Leonard, Professor of Mechanical Engineering, Thesis Supervisor

Accepted by: David E. Hardt, Chairman, Department Committee on Graduate Students, Mechanical Engineering Department
Using Negative Information in Simultaneous Localization
and Mapping
by
Hunter McClelland
Submitted to the Department of Mechanical Engineering
on May 9, 2011, in partial fulfillment of the
requirements for the degree of
Master of Science in Mechanical Engineering
Abstract
The problem of autonomous navigation is one of efficiently utilizing available information from sensors and intelligently processing that information to determine the state
of the robot and its environment. This thesis explores a topic often ignored in the
Simultaneous Localization And Mapping (SLAM) literature: the utility of including
Negative Information as a means of aiding state-estimation decisions and successfully
re-localizing the autonomous agent. The work is motivated by a low-cost underwater
mine neutralization project, which requires that an Autonomous Underwater Vehicle (AUV) successfully localize itself in a difficult SLAM environment. This thesis
presents a new Negative And Positive Scoring (NAPS) algorithm for comparing multiple localization hypotheses and then uses a large number of simulations to quantify
the effect of including the often-ignored Negative Information (NI). The ultimate
conclusion of this thesis, that careful inclusion of Negative Information increases the
chances of successful localization across a wide variety of difficult SLAM situations,
extends beyond the intended target reacquisition application and is generally applicable to robotic navigation problems.
Thesis Supervisor: John J. Leonard
Title: Professor of Mechanical Engineering
Acknowledgments
This research was partially supported by the Office of Naval Research under research
grants N00014-05-10244, N00014-07-11102, and N00014-11-10119.
To my advisor: I am deeply honored to have received your guidance and direction during our time together. Thank you for your wisdom, for your patience, and
for the example you have set; I will remember these as long as I live.
To my labmates, collaborators, and colleagues: Thank you for so many
creative discussions, critiques, and conversations. Your contribution to this thesis is
immeasurable in quantity and unparalleled in quality.
To my dear friends: I could not possibly have gotten this far without you. Each
of you holds a special place in my heart for your loyalty, your help, your encouragement, and your support. You are the most fascinating people I know.
To my family: I love you more than I can express. Because I know that you'll
always be there for me, no matter the circumstances, to you I dedicate this thesis.
Contents

1 Introduction
  1.1 Low-Cost Mine Neutralization
  1.2 Feature-Based Simultaneous Localization and Mapping (SLAM)
  1.3 Negative Information (NI)
  1.4 Thesis Outline

2 Literature Review
  2.1 General SLAM Concepts
    2.1.1 Probabilistic Estimation
    2.1.2 Data Association
    2.1.3 Multiple Hypotheses
  2.2 Kalman Filter SLAM
    2.2.1 Complexity
  2.3 Particle Filter SLAM
    2.3.1 Particle Depletion
    2.3.2 Forced Randomization
  2.4 Optimization Based SLAM
    2.4.1 Comparison to SLAM Filtering Methods
    2.4.2 Reducing Complexity
  2.5 Concluding Remarks

3 Negative And Positive Scoring (NAPS) Algorithm
  3.1 Negative Information (NI) Definition
  3.2 NAPS Probabilistic Definition
  3.3 Heuristics for NI Contribution to NAPS
    3.3.1 Theoretical Calculation
    3.3.2 Monte Carlo Heuristic
    3.3.3 Grid-based Heuristic
  3.4 Conclusion

4 Quantifying the Impact of Negative Information (NI)
  4.1 Simulation Design
    4.1.1 Mission Setup Modifications
    4.1.2 Multi-Hypothesis Estimation & Pruning Methods
  4.2 Discussion of Simulation Parameters
    4.2.1 Difficult Environment Parameters
  4.3 Simulation Results
    4.3.1 Feature Visibility
    4.3.2 Environment Regularity
    4.3.3 Capture Strategy
    4.3.4 Initial Position
    4.3.5 A Priori Map Probability
    4.3.6 Feature Position Uncertainty
  4.4 Relevant Experimental Results
  4.5 Conclusion

5 Thesis Conclusions
  5.1 Summary of Contributions
  5.2 Future Work
    5.2.1 Algorithms that Naturally Utilize NI
    5.2.2 Mission Path Planning
  5.3 Closing Remarks
List of Figures

1-1 Picture of iRobot Ranger
1-2 Picture of iRobot Transphibian
1-3 Side-by-side picture of iRobot Rangers and Transphibian
1-4 Picture of Hydroid REMUS 100
1-5 iRobot Ranger after successfully capturing the target mooring line
1-6 Forward-looking sonar image showing distinct features
3-1 Diagram explaining the concept of Negative Information
4-1 Diagram of typical simulation setup
4-2 Success of NAPS vs POS for varying feature visibility
4-3 Four simulation maps, with different offset coefficients
4-4 Success of NAPS vs POS for varying offset coefficients
4-5 Success of NAPS vs POS for 3 capture plans
4-6 Success of NAPS vs POS over varying initial position uncertainty
4-7 Success of NAPS vs POS for varying match weight coefficient
4-8 Success of NAPS vs POS for varying feature location uncertainty
List of Tables

4.1 Baseline simulation parameters
4.2 Folkesson experimental UMS vs POS results
Chapter 1
Introduction
1.1 Low-Cost Mine Neutralization
The motivating project for this thesis is to develop a cost-effective means of neutralizing underwater mines in a littoral zone. The sea floor and minefield, together called
the feature field, are mapped a priori before a low-cost Autonomous Underwater Vehicle (AUV) is deployed. The AUV uses an inexpensive blazed-array forward-looking
sonar as its sole exteroceptive sensor for locating objects underwater. Its mission can
be summarized as autonomously interpreting the sonar data, fusing the results with
the a priori map to localize itself in the field, identifying the correct target feature,
and physically capturing the target. Figures 1-1, 1-2, 1-3, and 1-4 show several
different AUVs used over the course of the project, Figure 1-5 shows an example of
a successful capture, and Figure 1-6 shows a forward-looking sonar image containing
several clear features. Using SLAM to reacquire underwater features exercises the core
autonomous technologies of path planning, mapping, and sensor processing in real-time,
all of which extend to other underwater applications as well as surface, land, and
aerial environments.
Figure 1-1: The iRobot Ranger was the most used AUV (shown here in Boston, MA)
Figure 1-2: The iRobot Transphibian was tested as a second AUV
Figure 1-3: Two iRobot Rangers and one Transphibian in Panama City, FL
Figure 1-4: Hydroid REMUS AUV during testing in Monterey, CA
Figure 1-5: iRobot Ranger after successfully capturing the target mooring line
Figure 1-6: Forward-looking sonar image showing 4 distinct features. (Note that
this image is not representative of normal operation: rarely are multiple features
simultaneously in view, and the rocks shown are much easier to distinguish than
other types of objects.)
1.2 Feature-Based Simultaneous Localization and Mapping (SLAM)
Successful mine neutralization rests on solving the Simultaneous Localization and
Mapping (SLAM) problem in real-time on the low-cost AUV. Abstracting away the
autonomous sonar processing algorithm (which is a nontrivial part of the neutralization solution), the problem is formulated as a state estimation task in which the state
consists of the location of the robot as well as the locations of the underwater features.
Because of the difference between the a priori mapping sensor (usually an expensive
sidescan sonar) and the low-cost forward-looking sonar, there often exist features
which are found in operation but were absent from the a priori map, as well as mapped
features which are never observed, either because they are invisible to the forward-looking
sonar or because they were imperfections in the a priori mapping sensor data. In either
case, the problem cannot be reduced to mere localization, due to the potential mismatch
between sensor modalities.
An approach which does not use SLAM could be to have the expensive mapping
vehicle deploy a marker beacon near the desired target, and then to have the low-cost
AUV perform terminal homing to the marker [60]. In fact, a group from the Woods
In fact, a group from the Woods
Hole Oceanographic Institution (WHOI) demonstrated this approach in June 2010
with great reliability. However, a SLAM solution has a variety of advantages over
target-marking, all stemming from the fact that it directly utilizes the target field
itself, rather than blindly relying on the information from a previously-situated beacon. If the beacon is unknowingly relocated, either intentionally or through natural
causes, the blind AUV cannot hope to successfully capture the target feature, and
cannot know that it has been unsuccessful. Also, the beacon-based solution has quite
limited further application compared to performing successful real-time feature-based
SLAM under high uncertainty.
1.3 Negative Information (NI)
The underwater Feature-Based Navigation (FBN) mission which motivates this thesis
has some unique environmental factors which make SLAM particularly difficult.
First, minefields are often laid in regular patterns, which are naturally hard to
disambiguate and thus make localization much more difficult. Secondly, the feature
fields are quite sparse, containing around 10 features on average; the features are
spaced quite distantly compared to the effective range of the sonar sensor (typically
only a single feature is visible at a time), with the rest of the environment being quite
bare. Finally, there are unmodeled environmental disturbances, most notably marine
life and underwater currents, which make the operating conditions quite harsh. The
key idea for dealing with these difficulties, proposed originally by Folkesson et
al. [18, 19], is to make use of Negative Information (NI) by directly using the act
of not seeing a feature as an aid to correct localization.
1.4 Thesis Outline
The goal of this thesis is to explore the role of NI in improving SLAM performance,
especially under the constraints motivated by underwater mine reacquisition. The
goal of this chapter has been to set the stage by summarizing the problem's context,
describing the FBN mission characteristics, and justifying the utility of a SLAM-based
approach. The rest of the thesis structure is as follows:
Chapter 2 reviews the relevant literature on the SLAM problem, addressing filtering
approaches, data association, and pose graph optimization.
Chapter 3 presents the new algorithm of this thesis: an extension of the Unseen
Match Scale (UMS) proposed by Folkesson et al. A number of improvements
are made over the previous work, including the use of a Monte Carlo integration
heuristic in combination with a Multi-Hypothesis Tracking filter, and a complete
probabilistic derivation of the scoring algorithm, named NAPS.
Chapter 4 describes the design and results of a SLAM simulator used to quantify the
effect of including NI over a variety of mission and environment parameters. A
comparison is also provided with earlier in-water experimental results published
by Folkesson et al.
Chapter 5 concludes with a summary of contributions and suggestions for future
research.
Chapter 2
Literature Review
Many aspects of SLAM are quite well-understood, but the AUV target reacquisition
mission that motivates this thesis presents several challenges, including the need to
run in real-time on a small low-cost AUV and the requirement to achieve robust relocation based on accurate data association in a field of sparse and regularly-arranged
features. In light of these difficulties, this chapter reviews the basic classes of SLAM
estimation and data association approaches which have been largely successful in real-world SLAM applications. The approach detailed by Folkesson et al. [18] and used in
the FBN project is a combination of unique innovations and fundamental methods
described in this chapter.
2.1 General SLAM Concepts
A number of key works comprehensively introduce the SLAM problem formulation
[13, 1]. (SLAM was called Concurrent Mapping and Localization (CML) in early
literature, among other names.) The leading textbook on the subject was published
by Thrun, Burgard, and Fox [55], and this thesis adopts much of their vocabulary.
As the name suggests, the simultaneous nature of the solution makes the problem
much more difficult than either independent mapping [43] or independent localization
[5]. Due to its unique difficulty and wide applicability, the SLAM problem remains
highly studied and continues to advance. Interestingly, in 2001 one work theoretically
proved the convergence of SLAM [10] while another in the same year theoretically
proved its divergence [27].
The focus of this chapter is to give the reader a basic understanding of the methods
successfully employed to solve the SLAM problem [49, 16]. Of course, there exist
several successful marine applications of SLAM on a variety of different vehicle and
sensor types [35, 61].
2.1.1 Probabilistic Estimation
Perhaps the most important key to the success of SLAM solutions lies in their probabilistic definition of the state. The most cited seminal work proposing a stochastic
map was produced by Smith et al. in 1990 [52]. Probability theory provides a mathematically defined way to reason about uncertain information, regardless of the origin
of the uncertainty. To begin, the state (call it x) is defined as a Probability Density
Function (PDF) over all the included state variables. This leads to a definition of
"solving" a SLAM problem as obtaining a state estimate which is a PDF p(x) over
all state variables x_1, x_2, ..., x_n, roughly centered about the correct value for
each variable with low uncertainty.
Of course, obtaining this solution implies that the sensors themselves have probabilistic models too, so that although a measurement z is obtained, the calculation is
concerned with the probability of that measurement given a particular state x, represented by the PDF p(z|x). The task is to find a suitable inversion of the measurement
model, in order to determine the state x upon receiving a noisy measurement z; this
amounts to calculating p(x|z). A similar model applies for actuator input u, but since
the actuator changes the state, SLAM instead deals directly with the probabilistic
inversion p(x|u).
With all these terms well defined, a formal mathematical definition of SLAM
can now be understood. The weaker form of SLAM is only concerned with correctly
determining the state at the current time and doesn't correct past mistakes. This
type of SLAM is termed Online SLAM and is a simpler situation than its counterpart
Full SLAM, which continually estimates the entire time-history of the state as
the robot senses and actuates. Online SLAM is concerned with the calculation of
p(x_now | x_earlier, u_earlier, z_now), whereas Full SLAM is the calculation of
p(x_all | u_all, z_all). Full SLAM is almost a perfect analog of the Structure from
Motion and Bundle Adjustment problems in computer vision [48, 59].
There is another interesting aspect to SLAM which should be mentioned here.
Probability theory gives a complete, well-defined, and rigorous method to solve either
the Full SLAM or the Online SLAM problem, using the key equation known as Bayes'
Theorem:

    p(x|y) = p(y|x) p(x) / p(y)                                    (2.1)

Bayes' Theorem performs the necessary inversion of the measurement model and
actuation model into the useful state estimate given a set of measurements or actuations; thus the complete theoretical solution to SLAM is known as a Bayes Filter.
Unfortunately for all useful applications, the Bayes filter is intractable, requiring unsolvable integrals and an explosion of complexity in state length.
2.1.2 Data Association
In the known data association case, the SLAM problem is still to estimate the state
of the environment, which amounts to dealing with all uncertainty inherent in measurements and actuation [51, 44]. However, many relevant SLAM environments deal
with unknown data association, that is, when a measurement carries no direct feature
identification information with it, thus the association must be intelligently inferred
from other factors [3, 7]. The data association problem has been studied even before
SLAM itself, with roots in basic sensing, target tracking, and system identification.
The simplest algorithm for a decision is to select the Most Likely (ML) choice from the
available options, calculated by simply selecting the highest probability of a measurement z originating from a feature f [56, 9]. This does not deal naturally with the case
that a measurement z does not come from any existing feature, but instead indicates
either an erroneous measurement or a new feature which should be added to
the state. Different SLAM methods offer different approaches to the unknown data
association problem [2, 23, 53], and in practice there often exists some partial
information which influences the decision but does not completely identify the
feature.
2.1.3 Multiple Hypotheses
One method to address the data association problem is to permit multiple hypotheses for the data association choice, thus allowing the robot to delay the decision until
further information is acquired. Multiple hypotheses often take the form of parallel
SLAM estimators, each processing the same sensory and actuator inputs, but varying
in other choices (most commonly data association) such that each parallel solution is
a different possibility for the true solution. When viewed from the vantage point of a
complete solution, it becomes apparent that multiple hypothesis estimators provide
a way to circumvent the unimodal assumption inherent in working with Gaussians.
This powerful extension provides a wealth of opportunity, but at a considerable increase to computational cost, namely that each hypothesis is almost a separate SLAM
system. Even assuming that the only difference between hypotheses is the choice of data
association, the number of hypotheses still explodes combinatorially in m, the number
of measurements, and clever hypothesis-tree pruning techniques are required to keep
the system manageable.
2.2 Kalman Filter SLAM
The Kalman (or Kalman-Bucy) Filter (KF) was developed by Rudolf Kalman (with
help from others) around 1960 [31, 32] and was shown to be the optimal filter under
a reasonable set of assumptions: a linear system and Gaussian distributions. The
details of the KF are not presented here; the interested reader is directed to the many
good references on the subject [62, 38]. The first seminal paper on SLAM used the
KF approach [52].
Robotic navigation is both nonlinear and non-Gaussian. Fortunately, in many
cases the uncertainty in SLAM can be modeled by a Gaussian with reasonably
small error, definitely small enough to justify the remarkable computational efficiency
of the KF. Likewise, the nonlinearity can be handled by linearizing the nonlinear
model, via a first-order Taylor expansion, about a small enough region to sufficiently
approximate the solution [44]. This is the key insight of the Extended Kalman Filter
(EKF), which lies at the base of the simulator designed for this thesis and described in
chapter 4. An alternative approach to addressing the nonlinearity is to select a small
number of points, pass them through the full nonlinear measurement model,
and re-approximate the resulting distribution as a Gaussian. This technique is known
as the Unscented Kalman Filter (UKF) and has been shown to be more accurate than
the EKF, but also more computationally intensive [26].
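To make the EKF cycle concrete, here is a minimal Python sketch of the two steps. The model functions f and h and their Jacobians F and H are placeholders to be supplied by a motion and measurement model; this is an illustrative sketch, not the thesis simulator's actual code.

    import numpy as np

    def ekf_predict(x, P, f, F, Q):
        # Propagate the mean through the nonlinear motion model f;
        # propagate the covariance through its Jacobian F (first-order Taylor).
        return f(x), F @ P @ F.T + Q

    def ekf_update(x, P, z, h, H, R):
        # Correct the estimate with measurement z using the linearized model H.
        y = z - h(x)                          # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new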
Another key caveat in applying the KF to SLAM is to recognize that a Gaussian
is uni-modal by definition, whereas sometimes in SLAM, a multi-modal distribution
is necessary. This is an inherent limitation in any implementation of a Kalman Filter,
and it is addressed in section 2.1.3 as a multi-hypothesis extension of the EKF, and
also in section 2.3 via a non-KF-based particle filter.
2.2.1 Complexity
A key drawback of KF-based approaches to the SLAM problem is their complexity
dependence on n, the number of landmarks in the state: O(n^2) for updating the
state x and covariance Σ, and O(n) for adding newly discovered features to the system.
One clever realization is that the canonical representation of a Gaussian covariance
matrix is nearly sparse, allowing for much more efficient matrix operations. This
representation is named the Information Filter (IF) and has been shown to be superior
to the KF in many situations [58]. The inherent trade-off of the IF representation
is that recovering the actual state x and covariance Σ at any point requires a matrix
inversion, a computationally expensive task.
A different approach to addressing the computational complexity is to divide the
entire SLAM problem with n state-elements into a number of local submaps and
therefore greatly reduce the computation required for any given update [36, 47, 6].
This utilizes the correct assumption that most robotic navigation problems have a
much smaller number of landmarks in view of the sensors at any one time compared
to the total number of landmarks in the map.
Many KF-based methods for SLAM have been explored with advances in computational complexity over the basic implementation, but the general story remains
the same: each method takes advantage of some general assumption which applies
to robotic navigation, and exploits it to demonstrate an overall increase in accuracy
and/or a reduction in complexity [21, 34].
2.3 Particle Filter SLAM
Using a Rao-Blackwellized particle filter [11] (or a closely-related Monte Carlo method)
avoids many of the limiting issues of working with KFs. This filter is based on the
Monte Carlo assumption that any probability distribution can be well approximated
by sampling it many times and viewing the independent samples in aggregate. The
key insight of this method is that each individual sample is deterministic and can
therefore wholly ignore the complexities of propagating difficult models' effects
on probability distributions. Instead, the distribution is simply re-approximated when
desired from the collection of individual samples, called particles. Many examples of
successful PF-based SLAM algorithms have been published [54, 41, 42].
The Particle Filter approach to SLAM suffers from an important drawback: It is
entirely based on the assumption that a distribution will remain well-modeled by a
finite number of samples from that distribution. This is a basic concern of probability
theory, but in the most general sense it is no more troubling than the corresponding
assumptions made about the SLAM problem by other methods.
2.3.1 Particle Depletion
A unique problem of PF-based methods is that the sampling from a distribution
naturally focuses around high-probability solutions and tends to give less focus to
low-probability ones. This is a primary contributor to the efficiency, but also probabilistically ensures that all but one solution will die out over time. This issue is
referred to as particle depletion and requires additional complexity to combat effectively. Particle depletion can prevent the closure of large loops, thus ensuring the
solution diverges away from the truth. The most effective method of preventing this
is to correctly weight the resulting particles such that the aggregate combination
models the underlying distribution accurately, but this weighting is a non-trivial process and the focus of much literature on PF methods. Several other methods involve
bringing in hybridizations of the PF with scan-matching or KF elements, increasing
algorithm complexity as a trade-off for robustness to bad luck.
2.3.2 Forced Randomization
Another common method employed in PF-based SLAM methods is to include a small
percentage of semi-randomly generated estimates. This seeding process gives a chance
that a PF can recover from destroying the correct estimate, and is an effective method
of dealing with highly disruptive unmodeled errors. Of special mention is the "kidnapped robot problem" which is the case that the robot is unknowingly moved to
a different position by some outside agent.
By creating a small number of non-
mainstream solutions, the robot has a chance of discovering the correct location and
having particles develop there. For many applications, the cost of a small percentage
of random guesses is clearly outweighed by the robustness it creates to unexpected
disturbances or unmodeled factors.
The true test of an algorithm is in its ability to function robustly in a variety of
SLAM situations, and Particle Filters have definitely proven themselves as popular
and successful methods for robot navigation [14, 15]. The FBN project discussed in
this thesis, however, does not use particle filtering techniques for its navigation.
2.4 Optimization Based SLAM
A final class of SLAM approaches, not discussed thus far, falls under the category of
Optimization-based SLAM. These methods, in contrast to the previous two filtering-type
solutions, solve the full SLAM problem: the calculation of the entire
trajectory of the robot rather than just the latest location. All information contained
in uncertain measurements and actuations can be considered to constrain
the pose of the robot at the respective points in time. Measurements naturally
constrain the pose of the robot with respect to environment features, and actuator
commands constrain the pose of the robot with respect to other poses. Once the
full system of constraint equations has been developed, it can be solved with Least
Squares Optimization or a similar method of estimation [37]. The cost of using this
method is that full-trajectory optimization is computationally more expensive than
the prior filtering approaches, but clever methods of decreasing the necessary
resources, combined with advances in computer speeds, have vaulted this class of
algorithms to the forefront of SLAM development [17].
2.4.1 Comparison to SLAM Filtering Methods
Filtering-based methods contain a weakness which has not yet been explicitly stated:
with each new filtering step they cease to consider the decisions which contributed to
the previous estimates. That is, if an error is made early in the navigation process,
it cannot be reconsidered later; the filter must simply be robust enough to overcome
its negative effects on the solution. This is not true for optimization methods, however, as every update can reconsider all previous information in calculating the total
solution. This capability allows extension to a variety of multi-robot cases, where
the sets of measurements are completely uncorrelated until the robots encounter each
other [33, 64]. A variety of methods have been investigated for identifying and removing erroneous previous information, but these must be carefully handled in all
cases.
Similar to filtering, optimization methods can gain significant advantages from
dividing the navigation problem into local submap chunks, exploiting the structure
of a realistic SLAM environment. Optimization gains a significant advantage over
filtering, however, because the local submaps can be recalculated if necessary when
the robot closes a large loop and gains additional information pertinent to the previous
decisions. If the robot runs for a long time, simply refusing to reoptimize an old area
is no worse than the filtering solution to the same issue.
Finally, since most robotic models are linearized approximations of the nonlinear
reality, all methods of SLAM are vulnerable to linearization error growth, that is,
that the small errors induced by linearization cause larger errors in the solution,
and linearizing about that solution in turn causes even larger errors. The errors
are irreversible using a filtering approach, whereas the optimization techniques often
relinearize the relevant models about the newest and most accurate estimations of
the state.
2.4.2 Reducing Complexity
At its basic level, optimization solves a more difficult problem than filtering, so close
care must be taken to ensure that the computational complexity is still manageable. A
key insight which proved the feasibility of optimization methods was that the matrix
representation of underlying constraints is usually sparse, since the environment is
much larger than the sensor's visibility. For example, the features seen early in the
trajectory are physically distant (and thus relatively disconnected) from those seen
in later portions, unless the robot executes a large loop and revisits an old area.
Sparse calculations can be performed much faster than their full counterparts for a
large portion of operations, thus the necessary computational resources required drop
significantly [8, 20].
Another common technique used with optimization is to only perform a limited
but important set of optimizations. This is an extension of the local submapping
idea, essentially only revisiting the most important of previous information decisions,
and only doing so infrequently [50]. Similar to running the particle filter for as long as
possible, the optimizations can be delayed during periods of high sensory input, and
then computed in batch fashion at a later point. The most extreme example of this is
seen in applications where the SLAM engine is totally disconnected from the actuation
choices, and thus a lazy algorithm can simply accumulate measurements then run a
single expensive optimization to reconstruct the full map of the environment at the
end of the trajectory. It is important, however, to note that some recent research has
developed a computationally efficient incremental approach to optimization [30, 28,
29].
2.5 Concluding Remarks
This chapter is organized around three main classifications of SLAM solutions: Kalman
Filter based, Particle Filter based, and Optimization based. These discussed solution
classes are certainly not exhaustive, nor really that independent. Plenty of hybrid
methods operate in the space between and external to these general categories, each
with its own unique characteristics [57, 22, 12]. Research into the SLAM problem continues to unveil new approaches with unique benefits, some current work is addressing
the issue from a purely biological approach, attempting to mimic the solution(s) developed by evolution [39, 40, 63]. The SLAM system designed by Folkesson for the
FBN project [18] was itself a unique combination of both known SLAM techniques
and new modifications suited especially to the littoral environment.
Chapter 3
Negative And Positive Scoring
(NAPS) Algorithm
In this chapter, a novel hypothesis scoring algorithm is proposed which combines
both negative and positive information; the algorithm is called NAPS. This chapter
begins with a definition and discussion of NI in a general sense, and then proceeds
with a detailed probabilistic definition of the NAPS algorithm in section 3.2. Since
the key (NI) part of NAPS is computationally intractable, two separate heuristics
are proposed in sections 3.3.2 and 3.3.3 to enable efficient real-time evaluation. The
chapter closes with some final thoughts, and the following chapter provides a quantitative study of the effect of using NAPS compared to a similar Positive Only
Scoring (POS) algorithm, which is representative of most published SLAM systems.
3.1 Negative Information (NI) Definition
At this point, a formal definition of NI is required: negative information is the
situation in which the robot expects to observe a feature but does not actually
observe it. All SLAM systems provide the ability to predict measurements based on
knowledge of the environment and an accurate sensor model. The key to utilizing
negative information is to clearly analyze the possible causes and to develop a suitable
heuristic for incorporating the negative information into the total navigation solution.
Negative information can come from a variety of sources:

1. The robot's pose has an error
   - The robot is currently wrong about its position
   - The feature was initialized from a wrong pose

2. The sensor does not detect the feature
   - The feature is occluded
   - The feature is hard to spot
   - The sensor is malfunctioning

3. The feature is not where it was predicted
   - The feature never existed; a spurious measurement caused its initialization
   - The feature never existed; the a priori map was incorrect
   - The feature is not static, and has been relocated or removed
The usefulness of incorporating negative information relies on successfully attributing it to the correct cause from the list above. In work closely related to this
thesis, negative information is used to aid in localizing a Sony AIBO robot on a soccer
field [24], assuming known data association and certainty of feature existence. Other
work including negative information utilizes a method of data association called "scan
matching," which accounts for negative information inherently [23]. When the robot
is not in these ideal conditions (as is the case in this thesis), a probabilistic approach
is needed to utilize the lack of measurement.
3.2 NAPS Probabilistic Definition
In SLAM, multi-hypothesis comparison can be distilled to a scoring algorithm which
evaluates the relative probability between the candidate solutions. This thesis describes a new algorithm which combines both positive and negative information into a
single metric named NAPS. The algorithm is recalculated at each distinct timestep
t = 1 ... t_current, simply by adding the latest score NAPS_t to the score computed
previously:

    NAPS = Σ_{t=1}^{t_current} NAPS_t                              (3.1)

Development of the current-timestep NAPS_t begins with an expression for the score
of hypothesis h_i as the log of its probability, conditioned on the latest measurement
z_t, when compared to the null hypothesis:

    NAPS_t(h_i) = ln( p(h_i | z_t) / p(h_null | z_t) )             (3.2)
and a clear definition of terms is necessary. The subscript t is included to indicate
that the equation is evaluated at a certain timestep t = t_current with corresponding
measurement z_t. Hypothesis h is defined as a computed SLAM estimation of state
x_h assuming a set of data association matches Q between positive measurements
z_{1:t,pos} and map features (both a priori and run-time-instantiated features). Each
hypothesis h has an associated Probability Density Function (PDF) p(x) over the
state vector x with n features:

    x = [ x_robot_pose, x_f1, x_f2, ..., x_fn ]^T                  (3.3)

The null hypothesis h_null is defined as the hypothesis choosing zero data associations,
thus always creating a new feature for every z_t,pos. Using Bayes' Rule for conditional
probabilities, equation 3.2 becomes:

    NAPS_t(h_i) = ln( p(z_t | h_i) p(h_i) / ( p(z_t | h_null) p(h_null) ) )    (3.4)
As foreshadowed, the term p(z_t | h) is split into two distinct contributing factors:

    p(z_t | h) = η_z p(z_t,pos | h) p(z_t,neg | h)                 (3.5)

and a formal definition of a "negative measurement" z_t,neg is necessary. The concept
is actually quite familiar to SLAM: a negative measurement occurs when a predicted
positive measurement, which should have been observed given x_h and the sensor's
geometry, is not observed. In contrast, a positive measurement is a positive return
from the sensor, which will either be associated to an existing feature or cause a new
feature to be created in the state x. All of SLAM is based on positive measurements.
Figure 3-1 gives a clear picture of what it means for a measurement to be positive
vs. negative for each hypothesis choice: three hypotheses are shown for matching a
series of sensor sweeps (considered to be one measurement) with one detection (solid
dot) and two a priori features (X's). The null hypothesis has no data association,
and therefore creates one new feature and has 2 NI features. Hypothesis h1 has
one matched feature but also one negative measurement, while h2 has one matched
feature and no negative measurements.

Figure 3-1: Three hypotheses which depict the concept of Negative Information: h_null
has 2 NI measurements and 1 new feature, h1 has 1 positive measurement and 1
negative, and h2 has 1 positive and no negative measurements
This division, based on the conditional independence of positive and negative
measurements, allows recognition of p(z_t,pos | h) as exponential in the Mahalanobis
distance D_h, with η_z,pos a normalization constant:

    p(z_t,pos | h) = η_z,pos exp( -(1/2) (ẑ_h - z_t)^T Σ^{-1} (ẑ_h - z_t) )    (3.6)

This representation assumes a Gaussian distribution over x_h, which is a common
representation in many SLAM estimators and is appropriate within each hypothesis.
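For reference, the Mahalanobis term D_h of equation 3.6 is a few lines of Python; the example covariance below simply reuses the baseline sensor noise values from Table 4.1 and is illustrative only:

    import numpy as np

    def mahalanobis_sq(z, z_hat, S):
        # Squared Mahalanobis distance between measurement z and prediction z_hat
        # under innovation covariance S (solves S v = innov rather than inverting S).
        innov = z - z_hat
        return float(innov @ np.linalg.solve(S, innov))

    # Illustrative range/bearing innovation covariance (Table 4.1 sensor sigmas)
    S = np.diag([0.03**2, np.deg2rad(0.125)**2])
    D_h = mahalanobis_sq(np.array([10.2, 0.05]), np.array([10.0, 0.04]), S)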
From equation 3.4, the term p(h) represents the a priori probability of a hypothesis
being found by the robot. At this point, the first key assumption is made: that maps
which appear to match up well to the a priori map are more likely correct than those
which ignore it. Therefore, as a measure of map probability, an exponential PMF is
chosen based on the number of features N_f matched to the a priori map by h:

    p(h) = η_p e^{λ N_f}                                           (3.7)

where η_p is a normalization constant, N_f is discrete and varies from 0 to the number
of features in the current map m, and λ is a free-choice parameter which controls
the shape of the exponential distribution. Combining equations 3.4, 3.5, 3.6, and 3.7,
evaluating the log, and canceling terms permits a clean expression for NAPS_t in
comparison to Positive Only Scoring POS_t:

    NAPS_t(h) = -D_h + λ N_f + C_h,neg                             (3.8)

    POS_t(h)  = -D_h + λ N_f                                       (3.9)

where the C_h,neg term, called the "Negative Information Contribution," is a new
construction of this thesis, and is defined as:

    C_h,neg = ln( p(z_t,neg | h) / p(z_t,neg | h_null) )           (3.10)
In words, C_h,neg is the log of the probability of all negative information, normalized
relative to the negative information inherent in the null hypothesis. As mentioned in
the introduction, this information has been used for a variety of tasks, but has never
been probabilistically defined. Equations 3.8 and 3.9 clearly illustrate the difference
between typical SLAM algorithms [46], which are collectively labeled POS because of
their sole inclusion of positive information, and the proposed NAPS, which accounts
for negative information: POS algorithms always assume C_h,neg = 0 because they
do not account for it in scoring the hypotheses. Most assume a very high λ, essentially
selecting the hypotheses which match the most total features as most likely, and then
ordering those by Mahalanobis distance. The interested reader should be aware of a
growing body of methods which are ever improving upon positive-information-only
approaches [25, 47].
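Equations 3.8 and 3.9 translate directly into code. The sketch below is a literal transcription in Python; the default λ = 12 echoes the baseline value in Table 4.1, and the function names are assumptions of this sketch:

    def naps_score(D_h, N_f, C_neg, lam=12.0):
        # Equation (3.8): Mahalanobis penalty, a priori match reward, and the
        # Negative Information Contribution.
        return -D_h + lam * N_f + C_neg

    def pos_score(D_h, N_f, lam=12.0):
        # Equation (3.9): identical to NAPS except C_h,neg is implicitly zero.
        return -D_h + lam * N_f

The only difference between the two metrics is the C_h,neg term, which is exactly what the heuristics of the next section estimate.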
3.3 Heuristics for NI Contribution to NAPS
Calculating C_h,neg in equation 3.10 is neither obvious nor trivial, and it has direct
influence on the overall success of the robot's navigation. Basic logarithm properties
allow separation of the score contribution of the null hypothesis p(z_t,neg | h_null)
from the contribution of the negative measurement p(z_t,neg | h). Then, since NAPS_t
is always compared against another NAPS_t, the common p(z_t,neg | h_null) term can
be ignored. Thus the issue of calculating or estimating the Negative Information
Contribution C_h,neg of a hypothesis is equivalent to estimating p(z_t,neg | h). However,
as listed at the beginning of this chapter, there is a wide variety of underlying causes
for a robot to obtain a negative measurement. The use of negative information is
directly designed to address robot position errors or feature position errors. Yet, of
course, sensor malfunctions or unrecognized feature occlusion cause "unwanted but
unavoidable" NI contributions, and thus care must be taken in the heuristic approximation of p(z_t,neg | h) that robustness to these unwanted types of negative
measurements is maintained.
3.3.1 Theoretical Calculation
It is instructive to consider the theoretical total calculation of p(z_t,neg | h) according
to a set of assumptions:

- The sensor occlusion model is well-defined and accurate
- The sensor is not malfunctioning
- All features are static
- Feature detections are independent of each other
- Feature visibility can be accurately modeled by a single probability

The last item in the list is by far the most questionable; it asserts that a feature
with, say, visibility v_f = 0.4 has a 40% chance of being seen regardless of any other
factors. This is clearly unrealistic, as the visibility of a wide variety of features
will depend on other factors: perhaps view angle, the distance to the sensing
element, the orientation of the sensor, even environmental effects such as temperature
or ambient light. However, more complicated feature visibility modeling does not help
to communicate the theoretical concepts, and thus is abstracted away under this broad
visibility factor assumption.
Operating under these stated assumptions, and denoting the negative-measurement
features as f_i for i = 1 ... n, with corresponding feature visibility coefficients v_f_i
and location PDFs p(f_i):

    p(z_t,neg | h) = p(z_t,neg,f_1 ∩ ... ∩ z_t,neg,f_n | h)        (3.11)

    p(z_t,neg | h) = ∏_{f ∈ N_t} p(z_t,neg,f | h)                  (3.12)

    p(z_t,neg,f | h) = 1 - p(z_t,pos,f | h)                        (3.13)

where s_t is the whole area sensed during measurement z_t, thus:

    p(z_t,pos,f | h) = v_f ∫ p(f) p(s_t) dA                        (3.14)

In words, the probability of not detecting each conditionally-independent feature is
the product of one minus the probability of detecting each feature, integrated across
the intersection of the PDF of each feature and the PDF of the scanned sensor area.
This calculation, often intractable due to complicated integration limits, theoretically
defines the probability of a set of negative measurements z_t,neg given sensed area s_t.
3.3.2 Monte Carlo Heuristic
Observing that the previous theoretical calculation is both based on some shaky
assumptions and computationally infeasible, a much simplified approximation must
be found which can handle not only the probability estimation, but also an important
implementation challenge: how to approximate the feature visibility online.

This subsection begins with the easier of the two challenges: approximating the
complex integration. The simplest and quickest method is to threshold the calculation
to occur only if the swept area contains a non-negligible portion of the feature-location
PDF. A simple threshold is to neglect all features for which the swept area
does not intersect the standard 95% ellipse of the Gaussian, calculated from the χ²
distribution. Realistic sensor models predict a worst-case neglect at a maximum of
1% without accounting for visibility, with the majority of threshold-eliminated cases
much lower. When the sensible area does overlap the 95% region, Monte Carlo
integration techniques provide an adequate approximation of the complex integral. If
real-time implementation demands faster approximation, a weighted discretization
of points across the ellipse could be precomputed for a variety of ellipses, and a fast
(but slightly rougher) approximation will suffice.
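A minimal sketch of this Monte Carlo approximation of equation 3.14, assuming a Gaussian feature-location PDF and a caller-supplied test for membership in the sensed area (the circular sensed region and all numeric values below are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    def p_detect_mc(mean, cov, v_f, inside_sensed_area, n_samples=1000):
        # Estimate v_f times the probability mass of the feature-location
        # Gaussian lying inside the swept area, by sampling and counting.
        pts = rng.multivariate_normal(mean, cov, size=n_samples)
        frac_inside = np.mean([inside_sensed_area(p) for p in pts])
        return v_f * frac_inside      # p(z_t,neg,f | h) is then 1 - this value

    # Hypothetical circular sensed region of radius 20 m centered at the origin
    inside = lambda p: np.hypot(p[0], p[1]) <= 20.0
    p_pos = p_detect_mc(np.array([18.0, 3.0]), np.eye(2), 0.9, inside)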
At this level, the inaccuracies due to imperfect estimation techniques cease to
be the most important factors in the calculation, and users requiring more accurate
calculation should consider a more complicated sensor model which estimates sensor
failures, deals with the complex issue of feature visibility and its associated factors,
and perhaps even considers feature movement and removal models to account for
physically missing features.
The more difficult challenge of this Monte Carlo based approximation is estimating
the feature's visibility. The problem is theoretically eliminated if the "correct"
feature visibility is known a priori; however, this is almost a meaningless statement,
since feature visibility is already an abstraction over the physical underlying causes
of missing measurements. Instead, using this Monte Carlo method, adequate results
are obtained in simulation using a digital first-order infinite impulse response (IIR)
filter:

    x(t) = α y(t) + (1 - α) x(t-1)                                 (3.15)

where x(t) is the filter output at time t and y(t) is the measurement at t.
Applied to visibility, at the first calculation for feature f, x(t-1) is set to the
a priori guess for visibility v_f, and α ∈ [0, 1] controls the trade-off between sticking
soundly to the a priori guess and deviating quickly based on real-time estimation.
In simulation, α = 0.1 yielded adequate results, and it is expected that the optimal
value of α is much less important than the single-visibility-coefficient assumption
under which it operates.
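Interpreting y(t) in equation 3.15 as a 0/1 indicator of whether the feature was detected on a given scan opportunity (an assumption of this sketch), the filter is a one-liner:

    def update_visibility(v_prev, detected, alpha=0.1):
        # Equation (3.15) applied to visibility: blend the new 0/1 detection
        # evidence with the running estimate; alpha = 0.1 as used in simulation.
        y = 1.0 if detected else 0.0
        return alpha * y + (1.0 - alpha) * v_prev

    v_f = 0.9                                    # a priori visibility guess, x(t-1)
    v_f = update_visibility(v_f, detected=False) # a miss nudges v_f downward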
Summarizing, one efficient and adequate way to estimate the negative-measurement
probability term in NAPS_t is to first threshold the estimation to include only
non-negligible feature contributions, then approximate each contribution by Monte
Carlo integration over the region of overlap, weighted by the feature visibility, and
finally update the feature visibility coefficient incrementally with a simple first-order
filter to improve robustness to missing or invisible features.
3.3.3 Grid-based Heuristic
One of the biggest problems with SLAM was its quadratic computational complexity
in n, the number of features in the map. It was thought that this would limit the
usefulness of SLAM until researchers discovered that by breaking the global mapping
problem into pieces, called local submaps, and then running SLAM at both the local
and the global level, substantial computational gains were to be found. It also turns
out that local submaps are often quite consistent when analyzed locally.

The local submap approach is used in the Feature Based Navigation SLAM engine,
as it is quite natural to combine multiple sonar scans into a single sweep of a robot,
much as multiple passes of a lawnmower cut a row of grass. The sweep covers a
small area compared to the total mapped region, but it is nonetheless a computationally
useful division. This subsection describes a heuristic for estimating the negative
information contribution to the NAPS_t calculation under the assumption of local
submaps and lower-level sensor filtering.
The local submapped area is discretized into a regular grid, with the grid-cell size
chosen so as to neither overload the processor nor distort the positions of the features.
This discretized approach allows estimation of the total number of scans of a
particular cell, denoted s_c. Likewise, in this approach it is assumed that, to prevent
spurious measurements from becoming incorrect features, there exists some number
of positive detections n_d which must be associated to the same particular position
before it is considered a feature observation. The concept of a feature visibility
coefficient v_f is employed here, and this allows computation (for each feature f) of
the p(z_t,neg,f | h) term via the following binomial distribution:

    p(z_t,neg,f | h) = Σ_{i=0}^{n_d - 1} (s_c choose i) v_f^i (1 - v_f)^{s_c - i}    (3.16)

In words, the sum is over the cases in which feature f in cell c was scanned s_c times
but accumulated fewer than n_d detections, and thus would not have been initialized,
due to low visibility v_f. Naturally, these contributions are combined across all
features predicted to lie in the mapped region via equation 3.12 to give the total
estimate for p(z_t,neg | h).
Special care must be taken to deal with some minor complexity issues under the
grid-based heuristic. The first is that the location of a feature is not exactly known,
but rather is an assumed Gaussian distribution over a known area. This lends itself
nicely to a Gaussian weighting w_{c,f} of a certain feature f in cell c, and the weighting
can be thresholded to zero outside the typical 95% confidence ellipse with minor
impact on results. Mathematically, a Gaussian-weighted feature scan-count ŝ_f is
introduced and is calculated naturally via:

    ŝ_f = Σ_{c ∈ ellipse} w_{c,f} s_c                              (3.17)

Now this Gaussian-adjusted estimate of feature scans can be used to calculate both
the Negative Information Contribution and the online estimate of feature visibility,
via:

    v_f = d_f / ŝ_f                                                (3.18)

where d_f is simply the number of times feature f is detected, according to the data
association in the current hypothesis.
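Pulling equations 3.16 through 3.18 together, a sketch of the grid-based heuristic in Python; the representation of cells as (weight, scan-count) pairs is an assumed layout for illustration:

    import math

    def weighted_scan_count(cells):
        # Equation (3.17): sum of w_{c,f} * s_c over cells inside the 95% ellipse.
        # `cells` is a list of (w_cf, s_c) pairs for one feature f.
        return sum(w * s for w, s in cells)

    def p_neg_grid(s_hat, n_d, v_f):
        # Equation (3.16): probability of fewer than n_d detections in s_hat
        # scan opportunities, given per-scan visibility v_f.
        s = int(round(s_hat))
        return sum(math.comb(s, i) * v_f**i * (1.0 - v_f)**(s - i)
                   for i in range(min(n_d, s + 1)))

    def online_visibility(d_f, s_hat):
        # Equation (3.18): detections over Gaussian-weighted scan opportunities.
        return d_f / s_hat if s_hat > 0 else 0.0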
3.4 Conclusion
This chapter presented the key theoretical contribution of this thesis: a well-defined
probabilistic method to include NI in hypothesis comparison. The assumptions on
which the NAPS algorithm is based are put forth clearly, and two heuristics are
developed which enable real-time approximation of the computationally intractable
NI contribution. Typical SLAM algorithms all ignore NI in their hypothesis ranking,
and for comparison they are grouped into a Positive Only Scoring (POS) metric.
The next contribution of this thesis (in the following chapter) is to quantitatively
demonstrate the advantages of using negative information, by comparing NAPS and
POS across a variety of situations.
Chapter 4
Quantifying the Impact of
Negative Information (NI)
The impact of this thesis's contribution depends on the utility gained from incorporating NI into SLAM state estimation. Toward the goal of demonstrating the effect
of incorporating (or ignoring) negative information, a SLAM simulator was designed.
The simulator is run in Monte Carlo fashion to quantitatively describe the effects
of NI across a range of system and environmental parameters. This chapter begins
with a description of the design of the simulation; then the Monte Carlo results are
presented and discussed; and finally the results are compared to experimental results
published previously by Folkesson et al. [19].
4.1 Simulation Design
The simulator is based on an existing Kalman-Filter-based SLAM simulator created
by Jose Neira and Juan Tardos [45]. Their simulator has a preprogrammed zero-noise
map and a preprogrammed desired trajectory for the robot through the map, in the
absence of movement error. A user can modify the relatively landmark-rich map,
but the initial configuration is tuned to show a reasonably substantial loop closing
around the large square map. Several sensible baseline assumptions about their
system are in place, including 100% visibility, reasonable sensor and motion
uncertainty, and realistic path complexity. An interesting available feature (which
was disabled when their simulation was modified for this thesis) is the ability
to generate moving landmarks for which the robot was relatively unprepared. These
moving features greatly complicate the data association problem, and force the robot
to use jointly compatible data association decisions (on each scan) to prevent gross
location estimation error.
4.1.1 Mission Setup Modifications
The first important modification made to develop this simulator for negative information testing was to add an a priori map (with appropriate uncertainty) and to
instruct the robot to locate and navigate to a desired target feature. All features,
including the target, are actually disturbed from their a priori locations based on
sampling from the a priori Gaussian distribution on location. Thus the robot's task
is to successfully localize itself with respect to its a priori map, which is a noisy
approximation of the real map, and then seek to the desired target. For the new simulations the target was randomly selected for each run as one of the a priori features.
Figure 4-1 depicts a typical a priori configuration.
A second modification was to enforce a more realistic navigation scenario than
simply executing low-level motion instructions. The base case was changed to an
a priori specified trajectory in map coordinates. However, instead of simply
executing the point-to-point low-level movement commands, the robot compares its
current Maximum Likelihood (ML) location to where it should be according to the
trajectory, and attempts to reach the desired point. Of course, the maximum step
the robot can take between sensing sweeps is limited, but since the trajectory step is
smaller than the robot's maximum step, the robot tends to seek back onto the path.
Of course, if it makes a mistake in localizing itself in the ML hypothesis, it seeks
off of the correct path, as it would not know in practice that it is incorrect. Then,
after completing the field-survey portion, the robot attempts to reactively seek to the
desired target. It has a reduced homing speed during this portion, but also a limited
number of steps to reach the target. Upon termination of the mission, the overall
mission outcome is:

- Successful: the robot is within a small distance d_s of the correct target feature
- Unsuccessful: the robot is farther than d_s from the correct target feature
Also, it was hypothesized during actual underwater testing early in the FBN
project that having the robot actively seek to the target feature as soon as it localized
with a certain percentage belief would achieve the best results. For example, if the
robot was 90% sure it had the correct feature-to-measurement data association, it
would stop its a priori field-survey behavior and attempt to seek the final target. This
design decision was motivated by some unmodeled factors such as water current and
a slightly different hypothesis evaluation model at the time, but two roughly equivalent
greedy algorithms were programmed into this simulation and are discussed in
section 4.3.3.
4.1.2 Multi-Hypothesis Estimation & Pruning Methods
Another important modification was to code a multi-hypothesis data association paradigm. Recall that the NAPS algorithm is a relative scoring algorithm and therefore requires multiple hypotheses to make a comparison. In its simplest form, the separate hypotheses are the leaves of the hypothesis tree. The hypothesis tree begins with n + 1 branches at the first measurement, where n is the number of current features in the map. The additional branch is often called the "star node" and represents choosing that the measurement corresponds to a new feature in the map, rather than one of the n existing ones. Without any pruning, the tree grows exponentially in the number of measurements m and quickly becomes too large to handle. The total number of leaves N_h (separate hypotheses), starting with n_a a priori features and after m measurements, can be calculated via:

    N_h = \prod_{i=1}^{m} (n_a + i)    (4.1)
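A quick computation of equation (4.1) illustrates how fast the unpruned tree grows; for the baseline field of 16 a priori features, ten measurements already yield on the order of 10^13 leaves:

    from math import prod

    def num_hypotheses(n_a: int, m: int) -> int:
        """Leaves of the unpruned hypothesis tree after m measurements,
        starting from n_a a priori features (equation 4.1)."""
        return prod(n_a + i for i in range(1, m + 1))

    print(num_hypotheses(16, 10))  # 19275223968000, roughly 1.9e13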
As is common in the hypothesis-tree literature, a series of pruning methods is employed to keep the tree computationally manageable.
Gated data association
The first is a standard 95% probability gate enforced via the χ² test, which immediately eliminates most of the unreasonable data association hypotheses. After completing this first prune, which removes hypotheses with less than a 5% chance of being feasible, all remaining hypotheses are evaluated by the NAPS algorithm, and a minimum score is required to avoid pruning. In practice this minimum score is very low, and all hypotheses which pass the probability gate also pass the minimum score.
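A sketch of this gate, assuming the standard squared-Mahalanobis-distance test against a chi-squared quantile (the innovation and its covariance S would come from the EKF measurement update; names are illustrative):

    import numpy as np
    from scipy.stats import chi2

    def passes_gate(innovation, S, confidence=0.95):
        """Accept a measurement-to-feature pairing only if its squared
        Mahalanobis distance lies inside the chi-squared gate."""
        innovation = np.asarray(innovation, float)
        d2 = innovation @ np.linalg.solve(S, innovation)
        return d2 <= chi2.ppf(confidence, df=innovation.size)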
Bhattacharyya coefficient threshold
The next pruning step, after evaluating and ranking all hypotheses via the NAPS score, is to remove hypotheses with differing data association choices Q but similar robot location outcomes. Especially after several positive measurements in a row, the hypotheses tend to group around several possible robot positions in the feature field. These close hypotheses essentially repeat the same computations in the future and lead to the same localization results. So, for each non-ML hypothesis, the Bhattacharyya Coefficient (BC) [4] is calculated between it and all hypotheses ranked above it, over the robot's pose (x_robot = {x, y, θ}). If BC > χ, the pose distributions are more than χ·100% identical, and the less likely of the two similar hypotheses is eliminated. This is perhaps the most significant pruning step.
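For two Gaussian pose estimates the BC has a closed form, BC = exp(-D_B), with D_B the Bhattacharyya distance; a sketch of the pruning test under that assumption (variable names are illustrative):

    import numpy as np

    def bhattacharyya_coefficient(mu1, P1, mu2, P2):
        """BC between two Gaussian estimates of (x, y, theta); values
        near 1 indicate nearly identical pose distributions."""
        P = 0.5 * (np.asarray(P1) + np.asarray(P2))
        diff = np.asarray(mu1, float) - np.asarray(mu2, float)
        d_b = 0.125 * diff @ np.linalg.solve(P, diff) + 0.5 * np.log(
            np.linalg.det(P) / np.sqrt(np.linalg.det(P1) * np.linalg.det(P2)))
        return np.exp(-d_b)

    # Prune when BC exceeds the threshold (baseline chi = 0.9**3, Table 4.1):
    # if bhattacharyya_coefficient(mu_a, P_a, mu_b, P_b) > 0.9**3, eliminate
    # the less likely of the two hypotheses.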
Total hypotheses threshold
Finally, the number of hypotheses tracked by the system is capped at a reasonable value, and those which score below the top H are removed from the tree. This last step is required to maintain real-time feasibility, but it is also the pruning step most likely to (incorrectly) eliminate an accurate solution.
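In code form this cap is simply a truncation of the ranked hypothesis list (a sketch; `naps_score` is an illustrative attribute name):

    H_MAX = 30  # baseline maximum from Table 4.1

    hypotheses.sort(key=lambda h: h.naps_score, reverse=True)
    hypotheses = hypotheses[:H_MAX]  # drop everything below the top H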
4.2 Discussion of Simulation Parameters
As every roboticist suspects, there were many configuration parameters, both those already mentioned and those not, which were set to realistic values considered characteristic of a difficult SLAM situation. The values of this baseline configuration and its success percentages are specified in Table 4.1. The baseline parameter values were chosen by roughly adapting a typical FBN mission to the simulation environment.
Parameter                        Value
Distance between features        10 [m]
Feature offset coeff. D_offset   0
Feature visibility               0.9
Feature N-S σ                    1 [m]
Feature E-W σ                    1 [m]
Maximum hypotheses               30
Bhatt. coeff. χ                  (0.9)^3
Match weight coeff. λ            12
Maximum robot step               4 [m]
Maximum robot turn               90°
Success/fail distance            1.5 [m]
Sensor range                     20 [m]
Sensor angle                     π/8 [rad]
Sensor range σ                   0.03 [m]
Sensor angle σ                   0.125°
Initial pos N-S std. dev.        8 [m]
Initial pos E-W std. dev.        4 [m]
Initial rotation std. dev.       2°

Table 4.1: Baseline simulation parameters
4.2.1 Difficult Environment Parameters
Several environmental setup choices make the base case particularly challenging for a SLAM navigation paradigm; nonetheless, each of these parameters is grounded in realistic aspects of the main FBN application: underwater artificial feature reacquisition.
Map Regularity    One particularly prevalent issue is the regularity of the a priori map (a 4x4 square grid; see Figure 4-1). This regularity clearly increases the difficulty of navigation: from within the field, the robot could believe it is translated one row, or one column, off of its actual position, and it has no way to gather evidence against this incorrect hypothesis until it drives outside the field and realizes its mistake. If the robot does not drive to the edge of the field, which is common during the homing phase of the mission, it cannot correct the mistake and therefore fails the mission.
Uncertain Initial Position    Another configuration choice is the high uncertainty in robot starting position, which compounds the regular-field problem: the robot has a reasonably high probability of initially encountering any of the 4 horizontal rows, and either the first or the second vertical column of targets. This uncertainty spawns a large number of initial measurement hypotheses, which makes the hypothesis tree grow quickly. A growing hypothesis tree naturally slows computation, but more threatening is the increased probability that the correct hypothesis is choked out early by many feasible but incorrect ones. However, the large arrival uncertainty is realistic for what an inexpensive underwater robot would experience simply dead-reckoning to the target field.
Planned Trajectory    Finally, the design of the fly-over trajectory is important to the robot's ability to localize correctly. The more a trajectory crosses unique portions of the feature field, the higher the chance the robot will recognize them as unique and identify the correct hypothesis. Edges of the field are less unique than corners, but still quite helpful for localization. The worst-case trajectory would start somewhere inside the field and never leave it, as the robot would have little or no chance to disambiguate between shifted hypotheses. The chosen trajectory (no corners, 3 to 5 edge crossings) was selected as a realistic balance of difficulty.
4.3 Simulation Results
The simulator described in the previous sections was run Monte Carlo style across a variety of parameters, all listed in Table 4.1. The results are presented with standard error bars (±1σ or ±2σ; see individual captions) on all data points to clearly depict the statistical significance of this thesis's findings.
4.3.1 Feature Visibility
A difficult issue, especially present when using sonars or cameras, is failing to identify features even when they are within the sensor's scanned area. This complex feature recognition issue is represented simply as a visibility coefficient between 0 and 1, where 0.8 corresponds to an 80% chance of seeing a given feature on a given scan. Figure 4-2 depicts findings for a linear sweep of feature visibility coefficients from 0 to 1. The figure shows two different cases of applying the NAPS algorithm. In the first case the a priori visibility estimate coefficient is set equal to the actual visibility coefficient; the robot thus has correct a priori knowledge of the expected visibility of the features, and uses that knowledge (filtered over time through the IIR filter described in Sec 3.3.2) when approximating the integral in equation 3.14. In the second NAPS case, the a priori coefficient is ignorantly left at the baseline value of 0.9, so unseen features weigh "unfairly" heavily on the Negative Contribution term of equation 3.8, causing it to perform quite poorly at low visibility. In aggregate, these results show that careful consideration of a priori feature visibility is important, and that NAPS outperforms POS unless it is poorly tuned to the reality of the actual scenario.
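The visibility model itself is simple to state: on each scan, every in-range feature is detected independently with probability equal to the visibility coefficient. A sketch of that Bernoulli draw (illustrative names):

    import numpy as np

    rng = np.random.default_rng()

    def detected_features(in_range_ids, visibility=0.9):
        """Each feature inside the sensor footprint is seen on this scan
        with probability `visibility`, independently of the others."""
        return [f for f in in_range_ids if rng.random() < visibility]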
4.3.2 Environment Regularity
One of the most important parameters to test was the impact of the regularity of the feature field. To vary this, each feature was displaced by a set distance D_offset in a random direction. Figure 4-3 shows some typical maps with different D_offset values. The results of Monte Carlo simulations are shown in Figure 4-4. As intuition suggests, the results show that NAPS affords a significant advantage over POS in maps with high regularity (low D_offset), but that advantage is lost as higher irregularity causes each section of the map to become more locally identifiable.
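A sketch of this irregularization step, under the assumption (suggested by the normalized axes of Figure 4-3) that each feature moves by the fraction D_offset of the 10 m feature spacing in a uniformly random direction:

    import numpy as np

    rng = np.random.default_rng()

    def irregularize(map_xy, d_offset, spacing=10.0):
        """Displace every feature by d_offset * spacing in a random
        direction, breaking the regularity of the grid."""
        theta = rng.uniform(0.0, 2.0 * np.pi, size=len(map_xy))
        step = d_offset * spacing
        return map_xy + step * np.column_stack([np.cos(theta), np.sin(theta)])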
4.3.3 Capture Strategy
During FBN project design, several plans to maximize the chance of successful capture were discussed. Some plans favored more exploration to improve localization, while others favored less localization and "going straight for the target." To study the effect of the different capture plan types on mission success, the following three plans were attempted on the baseline simulation:
1. standard - Cross the field twice, then turn and seek to target. Figure 4-1 shows
an example of this case.
2. greedy-if-see-target - Perform the standard mission, but if the robot ever thinks it sees the target feature in its sensors while crossing the field, it switches immediately to seek mode and goes for capture.
3. direct-greedy - The robot does no scripted exploration, and instead seeks to
capture the target directly from launch. Of course if it changes its ML estimate
of position, it adjusts accordingly.
Figure 4-5 shows the success percentage results for the tested strategies. In all three cases NAPS outperforms the POS algorithm by a statistically significant margin, and, surprisingly, the greedy-if-see-target approach proves superior to finishing localization before homing. It is hypothesized that the error introduced by forcing the robot to leave the target field after each fly-over pass causes the robot to become slightly disoriented in a small share (2% or 3%) of the simulations. It is also important to note that in the real FBN project, underwater current had its maximum effect when the robot was not tracking features in its sonar, whereas water current was not modeled in simulation. Under increasing unmodeled water currents, leaving the field for a turn-around becomes an even more dangerous proposition, and it is expected that the greedy approaches become still more attractive.
4.3.4 Initial Position
As discussed previously, the robot's arrival at the target field with high initial position uncertainty was suspected to be a significant factor in correct localization and target capture. To experiment with this, the a priori initial position uncertainty was varied across a linear sweep. Recall from Table 4.1 that the standard deviation in initial position allows for a large variety of starting positions; in fact, just as observed in real FBN experiments, there is a non-negligible chance that the robot misses the field entirely and never sees any features at all. Also recall that the actual robot began at a single location sampled from the initial a priori uncertainty estimate. Figure 4-6 depicts the simulation results for a linear sweep on the standard deviation (σ) of all 3 pose components simultaneously (x, y, θ). As the initial position uncertainty increases, NAPS demonstrates clear superiority over POS. This was an expected result, as NAPS can much more easily differentiate between incorrectly shifted hypotheses, for example when the robot believes it is on the 3rd feature row while it is actually on the 2nd. The Negative Information of expecting to see other rows and not finding them causes the shifted hypothesis to accumulate C_h,neg, and the correct hypothesis becomes more likely in comparison. In fact, the expectation of this result could be considered the primary motivation for incorporating NI and creating NAPS (or its predecessor UMS) in the first place.
4.3.5 A Priori Map Probability
In the development of the NAPS algorithm, one key assumption was that the a priori probability of any particular hypothesis p(h) is exponentially related to the number of a priori features matched to the positive measurements z_t. Equation 3.7 proposed this relationship based on an exponential parameter λ, interpreted as a "match weight coefficient." Equation 3.8 shows most clearly that λ rewards the NAPS and POS scores linearly in the number of matched features N_f. Note that since D_h is necessarily calculated over only positive measurement observations with non-null data association, it is minimized by simply always hypothesizing that a new feature is created. This undermines the whole goal of SLAM, and is avoided by rewarding feature matching for reasonable data association decisions; in words, the size of the reward is λ. Figure 4-7 characterizes the effect of λ on overall success, both for NAPS and POS. It is important to note that in actual implementations of NAPS SLAM, λ is the only freely chosen parameter, so operators would be well advised to simulate their other parameter values as closely as possible to gain intuition for the effect of λ. In the baseline simulations, λ was chosen as a reasonable guess before generating the full simulation data; the relative insensitivity of success above approximately λ = 8 justifies the guessed baseline value.
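For reference, a plausible reading of this match-weight prior, assuming the exponential form described above (the exact expression is Equation 3.7), is

    p(h) \propto e^{\lambda N_f}
    \quad \Longrightarrow \quad
    \log p(h) = \lambda N_f + \mathrm{const},

so each matched feature credits the hypothesis's log-score by exactly λ, while a hypothesis that explains every measurement with a new feature forfeits that reward.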
4.3.6 Feature Position Uncertainty
In the simulations the feature locations were sampled from the feature placement distribution, a 2-D Gaussian centered on each a priori location. Reducing the uncertainty on feature position allows the robot to data associate more accurately, since the features lie closer to their a priori locations, and also lets it reject incorrect hypotheses early through the Mahalanobis threshold pruning technique. Figure 4-8 displays the success percentage results as the feature uncertainty is increased. The POS algorithm stays at a relatively constant 50% success despite the increasing feature uncertainty; a decrease in success percentage was expected, but is not observed in the data. NAPS outperformed POS for low feature uncertainty, but when the uncertainty grew too high, C_h,neg grew quickly and "pushed out" the correct hypotheses in favor of ones which placed the robot outside the field. This suggests a direction for future investigation: perhaps the λ parameter, which rewards successful feature matches, must be higher for maps with higher uncertainty to counteract this effect. Recall that λ = 12 for all simulations shown in Figure 4-8, per the baseline parameter values.
4.4 Relevant Experimental Results
In 2007, as a part of developing the FBN project, Folkesson et al. designed and performed experiments on real AUVs in the Gulf of Mexico off the coast of Florida as a way of comparing POS- and NAPS-style algorithms [19]. Those results are reprinted in this section with a brief summary of the experiment design, and a comparison is then drawn between those experimental findings and the simulation results.
Environment Description    A field of submerged radar reflectors was laid out in four rows, each reflector floating 1 m off the seabed. The AUV was equipped with an off-the-shelf forward-looking blazed array sonar. The robot was provided with an a priori map generated by an expensive, accurate mapping AUV, but the map uncertainty was artificially raised to 5 m in each direction to permit match ambiguity. The robot was also equipped with an inexpensive sensor suite consisting of a depth sensor, magnetometer, and off-the-shelf Inertial Measurement Unit (IMU). The initial ingress heading to the field was varied around all eight cardinal and intercardinal compass points. The designated target and approach direction were also randomly assigned just before run-time.
Experiment Design & Results    The robot performed 18 separate missions: 9 using a standard POS algorithm, Joint Compatibility Branch and Bound (JCBB) [46], and 9 using a predecessor to NAPS called the Unseen Match Scale (UMS). The success percentages are shown in Table 4.2. Even with this limited sample size, a statistically significant increase in success percentage is observed from using UMS compared to POS.
Match Algorithm    Runs (n)    Successes    Success Percent    Std Error (σ/√n)
POS (JCBB)         9           3            33%                ±16.7%
UMS                9           6            66%                ±16.7%

Table 4.2: Folkesson experimental UMS vs. POS results
Comparison to Simulation    The most significant difference between NAPS as proposed in this thesis and its predecessor UMS used by Folkesson is that UMS was not applied across the entire history of the hypothesis, but instead only used at each step to select the current data association decision Q_t. UMS also used the grid-based heuristic approximation for the NI term (as presented in Section 3.3.3). Finally, although the experimental and simulation setups were similar in many aspects, they differ in several details, including the vehicle models, the presence of water currents, the capture strategy, and the values of the algorithm parameters. Still, fully acknowledging these differences, the results of both the simulations and the experiments support the same conclusion: a statistically significant increase in success percentage can be achieved by incorporating NI into SLAM hypothesis scoring.
4.5 Conclusion
This chapter details the design and creation of a SLAM simulator to compare incorporating NI, as NAPS does, with ignoring it, as the majority of SLAM algorithms (represented here by the POS algorithm) do. The simulation results support this thesis's hypothesis that including NI generally improves the success of the SLAM system, at the cost of extra computational complexity. The reprinted experimental results due to Folkesson also support this hypothesis. This chapter concludes that NAPS outperforms POS for the majority of tested cases, especially those resembling the difficult environment of the FBN project.
[Figure omitted: ground-truth map with planned path, 16 features]
Figure 4-1: Typical a priori simulation setup: red dots represent actual feature locations, blue dots are a priori positions with associated 95% uncertainty regions, the green path is the scripted robot trajectory, and the target feature is marked with a circled cross
[Figure omitted: POS vs. NAPS success percentage vs. feature visibility coefficient, n = 5000]
Figure 4-2: NAPS shows general improvement over POS when tuned correctly (ap = vc) and moderate improvement at visibility above 0.5 even when tuned poorly (error bars ±2σ)
[Figure omitted: four a priori maps with D_offset = 0, 0.2, 0.4, and 0.6]
Figure 4-3: A comparison of 4 maps, each with a different feature offset coefficient D_offset, depicting the increasing irregularity of the feature field
[Figure omitted: POS vs. NAPS success percentage vs. feature offset coefficient D_offset, n = 5000]
Figure 4-4: NAPS demonstrates better performance for highly regular feature fields (low D_offset) but decreasing effectiveness for irregular cases (error bars ±2σ)
[Figure omitted: POS vs. NAPS success percentage by capture plan (standard, greedy-if-see-target, direct-greedy), n = 1000]
Figure 4-5: NAPS demonstrates better performance across all capture plans (error bars ±1σ)
[Figure omitted: POS vs. NAPS success percentage vs. initial position σ coefficient, n = 1000]
Figure 4-6: Higher initial uncertainty causes lower success, and NAPS mostly outperforms POS due to its ability to discern between shifted hypotheses (error bars ±1σ)
[Figure omitted: POS vs. NAPS success percentage vs. match weight coefficient λ, n = 5000]
Figure 4-7: As λ increases, total success percentage increases as expected. The effect levels off at higher λ (greater than 15) to a fairly constant value, with NAPS overall outperforming POS (error bars ±2σ)
[Figure omitted: POS vs. NAPS success percentage vs. feature σ, n = 1000]
Figure 4-8: At low feature uncertainty, NAPS outperforms POS as usual. However, at high uncertainty, NAPS is thrown off in favor of hypotheses which predict the robot not being in the target field (error bars ±1σ)
Chapter 5
Thesis Conclusions
The goal of this thesis is to explore the role of including Negative Information in
the evaluation metric which governs the data association decisions in any SLAM
system. A new algorithm is developed for hypothesis comparison, and its effectiveness
is compared side-by-side with the leading class of (positive information only) SLAM
algorithms.
5.1 Summary of Contributions
This thesis contains several unique contributions to the field of SLAM, specifically to
the evaluation/scoring of multiple hypothesized solutions:
1. A new algorithm, the Negative And Positive Score (NAPS), is probabilistically derived as an extension of the Unseen Match Scale (UMS) developed by Folkesson. NAPS improves on UMS by calculating the hypothesis score over its whole history (rather than just at the current timestep) and by using a new Monte Carlo integration technique to approximate the computationally intractable NI contribution.
2. A simulator is designed, based on existing code from Neira and Tardós, which runs a robot using a Multi-Hypothesis Extended Kalman Filter in a feature-based environment similar to the conditions an AUV faces when reacquiring mines in a littoral zone. The simulation parameters are varied across reasonable values, and the Monte Carlo success percentages of NAPS vs. POS are presented. Because the two algorithms differ essentially in their treatment of NI, this thesis concludes that, except in a few unique cases, the inclusion of Negative Information preserves or improves success percentage in a statistically significant manner.
5.2 Future Work
During the course of this work, many other opportunities for improvement were noticed and considered but not yet pursued. A discussion of some future developments of the NI contribution is therefore included to aid further exploration.
5.2.1 Algorithms that Naturally Utilize NI
Omitted from Chapter 2 is a common alternative formulation to feature-based SLAM called scan matching. Rather than extracting explicit features based on some understanding of the environment structure, scan matching directly compares the set of measurements from one pose with the set from a nearby pose and attempts to correlate the two in order to gain information about the difference between the poses. Since this is a fundamentally different operation than feature extraction followed by data association, it inherently utilizes NI in its estimation. Scan matching requires a non-sparse operating field and is therefore not suitable for the FBN target relocation constraints; however, experimental and simulated comparison between NAPS and scan matching may reveal more about the value of NI in SLAM. Similarly, because a single particle in a particle filter tends to die out when competing particles confirm their predictions and grow more confident, a particle-filter SLAM implementation could also be said to inherently account for NI. Intuition suggests that much insight into the role of negative information could come from a direct comparison between explicitly including NI in feature-based methods (NAPS) and estimation methods (particle filters, scan matching) which inherently utilize it in some sense.
5.2.2 Mission Path Planning
As mentioned briefly in Chapter 4, of key importance to successful hypothesis selection is ensuring that the robot observes enough unique aspects of the field to disambiguate between shifted hypotheses in a regular feature field. However, in this thesis the robot always executed a pre-scripted mission plan, designed to be neither information-rich nor information-poor. More concretely, a mission which passes over regions devoid of features, or over regions hopelessly cluttered with extraneous ones, may not give the system adequate information to localize successfully, regardless of the algorithm. In contrast, a mission which puts plenty of strong and unique objects into view of the sensors maximizes the chances of successful localization and target capture. This dependency on mission design inspires another avenue for system improvement: creating algorithms which make mission choices on the fly to maximize information gain and the chance of localization, rather than relying on a pre-scripted mission design. This idea is commonly referred to as "Active Localization" and is the subject of much research in the SLAM literature, particularly when framed as a Partially Observable Markov Decision Process (POMDP). As with the Bayesian Filter in SLAM, well-defined but highly intractable complete solutions have been imagined, but the only feasible solutions are estimates derived from clever approximation algorithms. When considering active localization decision algorithms, the inclusion of NI offers clear benefits in generating maximum-information-gain path plans.
5.3 Closing Remarks
In closing, the central claim of this thesis is that the inclusion of Negative Information is a promising extension to the majority of SLAM systems. This claim is demonstrated through increased success of autonomous decision-making for the difficult task of underwater target reacquisition with a small, low-cost AUV. In general, the use of Negative Information has the potential to improve the robustness of SLAM and data association algorithms in a variety of operating environments.
Bibliography
[1] T. Bailey and H.F. Durrant-Whyte. Simultaneous localisation and mapping (SLAM): Part II. Robotics & Automation Magazine, 13(3):108-117, Sep 2006.
[2] T. Bailey, E. Nebot, J. Rosenblatt, and H. Durrant-Whyte. Data association for
multiple robot navigation: a graph theoretic approach. In IEEE Intl. Conf. on
Robotics and Automation (ICRA), volume 3, pages 2512 - 2517, April 2000.
[3] Y. Bar-Shalom and T. Fortmann. Tracking and Data Association. Mathematics
in Science and Engineering. Academic Press Professional, Inc., San Diego, CA,
USA, 1988.
[4] A. Bhattacharyya. On a measure of divergence between two statistical populations defined by their probability distributions. Bulletin of the Calcutta Mathematical Society, pages 99-109, 1943.
[5] J. Borenstein, H. Everett, and L. Feng. Navigating Mobile Robots: Systems and
Techniques. A. K. Peters, 1996.
[6] M. Bosse, P. Newman, J. Leonard, and S. Teller. An atlas framework for scalable mapping. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 1899-1906, 2003.
[7] I. Cox. A review of statistical data association techniques for motion correspondence. International Journal of Computer Vision, 10:53-66, 1993.
[8] F. Dellaert and M. Kaess. Square Root SAM: Simultaneous localization and
mapping via square root information smoothing. Intl. J. of Robotics Research,
25(12):1181-1203, Dec 2006.
[9] G. Dissanayake, H. Durrant-Whyte, and T. Bailey. A computationally efficient solution to the simultaneous localization and map building (SLAM) problem. In IEEE Intl. Conf. on Robotics and Automation (ICRA), volume 2, pages 1009-1014, 2000.
[10] M. G. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba.
A solution to the simultaneous localization and map building (SLAM) problem.
IEEE Trans. Robotics, 17(3):229-241, Jul 2001.
[11] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2000.
[12] T. Duckett and U. Nehmzow. Experiments in evidence based localisation for a
mobile robot. Manchester, UK, 1997.
[13] H.F. Durrant-Whyte and T. Bailey. Simultaneous localisation and mapping (SLAM): Part I. Robotics & Automation Magazine, 13(2):99-110, Jun 2006.
[14] A. Eliazar and R. Parr. DP-SLAM: Fast, robust simultaneous localization and
mapping without predetermined landmarks. In Intl. Joint Conf. on Artificial
Intelligence, 2003.
[15] A. Eliazar and R. Parr. DP-SLAM 2.0. In IEEE Intl. Conf. on Robotics and
Automation (ICRA), volume 2, pages 1314-1320, April 2004.
[16] H. Feder, J. Leonard, and C. Smith. Adaptive mobile robot navigation and mapping. Intl. J. of Robotics Research, 18(7):650-668, July 1999.
[17] J. Folkesson and H. Christensen. Closing the loop with graphical SLAM. IEEE
Trans. Robotics, 23(4):731-741, Aug 2007.
[18] J. Folkesson, J. Leederkerken, R. Williams, A. Patrikalakis, and J. Leonard. A
feature based navigation system for an autonomous underwater robot. In Field
and Service Robotics (FSR), volume 42, pages 105-114, 2008.
[19] J. Folkesson and J. Leonard. Autonomy through SLAM for an underwater robot.
In Proc. of the Intl. Symp. of Robotics Research (ISRR), 2009.
[20] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University
Press, Baltimore, MD, 3rd edition, 1996.
[21] J. Guivant and E. Nebot. Optimization of the simultaneous localization and
map-building algorithm for real-time implementation. IEEE Trans. Robotics,
17(3):242-257, 2001.
[22] J. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), pages 318-325, 1999.
[23] D. Hähnel, S. Thrun, B. Wegbreit, and W. Burgard. Towards lazy data association in SLAM. In Proc. of the Intl. Symp. of Robotics Research (ISRR), pages 421-431, 2003.
[24] J. Hoffman, M. Spranger, D. Gohring, and M. Jungel. Making use of what you
don't see: Negative information in markov localization. In IEEE/RSJ Intl. Conf.
on Intelligent Robots and Systems (IROS), pages 2947-2952, August 2005.
[25] J. Neira, J.D. Tardós, and J.A. Castellanos. Linear time vehicle relocation in SLAM. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 427-433, September 2003.
[26] S. Julier and J. Uhlmann. A new extension of the Kalman filter to nonlinear
systems. In Int. Symp. Aerospace/Defense Sensing, Simulation, and Controls,
pages 182-193, 1997.
[27] S.J. Julier and J.K. Uhlmann. A counter example to the theory of simultaneous localization and map building. In IEEE Intl. Conf. on Robotics and Automation (ICRA), volume 4, pages 4238-4243, 2001.
[28] M. Kaess and F. Dellaert. Covariance recovery from a square root information matrix for data association. Journal of Robotics and Autonomous Systems,
57(12):1198-1210, Dec 2009.
[29] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J.J. Leonard, and F. Dellaert.
iSAM2: Incremental smoothing and mapping with fluid relinearization and incremental variable reordering. In IEEE Intl. Conf. on Robotics and Automation
(ICRA), Shanghai, China, May 2011. To appear.
[30] M. Kaess, A. Ranganathan, and F. Dellaert. iSAM: Incremental smoothing and
mapping. IEEE Trans. Robotics, 24(6):1365-1378, Dec 2008.
[31] R. Kalman. A new approach to linear filtering and prediction problems. ASME
Journal of Basic Engineering, 1960.
[32] R. Kalman and R. Bucy. New results in linear filtering and prediction theory.
ASME Journal of Basic Engineering, 83:95-107, 1961.
[33] B. Kim, M. Kaess, L. Fletcher, J.J. Leonard, A. Bachrach, N. Roy, and
S. Teller. Multiple relative pose graphs for robust cooperative mapping. In IEEE
Intl. Conf. on Robotics and Automation (ICRA), pages 3185-3192, Anchorage,
Alaska, May 2010.
[34] J. Knight, A. Davison, and I. Reid. Towards constant time SLAM using postponement. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS),
volume 1, pages 406-412, Oct 2001.
[35] J. Leonard, A Bennett, C. Smith, and H. Feder. Autonomous underwater vehicle
navigation. Technical report, Massachusetts Institute of Technology, 1998.
[36] J. Leonard and H. Feder. A computationally efficient method for large-scale
concurrent mapping and localization. In Proc. of the Intl. Symp. of Robotics
Research (ISRR), pages 169-176, 1999.
[37] F. Lu and E. Milios. Globally consistent range scan alignment for environmental
mapping. Autonomous Robots, 4:333-349, April 1997.
[38] R. Meinhold and N. Singpurwalla. Understanding the Kalman filter. The American Statistician, 37(2):123-127, May 1983.
[39] M. Milford, G. Wyeth, and D. Prasser. RatSLAM: a hippocampal model for
simultaneous localization and mapping. In IEEE Intl. Conf. on Robotics and
Automation (ICRA), volume 1, pages 403-408, April 2004.
[40] M.J. Milford and G.F. Wyeth. Mapping a suburb with a single camera using
a biologically inspired SLAM system. IEEE Trans. Robotics, 24(5):1038-1053,
Oct 2008.
[41] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM: A factored
solution to the simultaneous localization and mapping problem. In National
Conf. on Artificial Intelligence, Edmonton, Canada, 2002. AAAI.
[42] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit. FastSLAM 2.0: an improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In Intl. Joint Conf. on Artificial Intelligence, pages 1151-1156. Morgan Kaufmann Publishers Inc., 2003.
[43] H. Moravec and A. Elfes. High resolution maps from wide angle sonar. In IEEE,
editor, IEEE Intl. Conf. on Robotics and Automation (ICRA), pages 116-121,
March 1985.
[44] P. Moutarlier and R. Chatila. An experimental system for incremental environment modeling by an autonomous mobile robot. In Intl. Sym. on Experimental
Robotics (ISER), 1990.
[45] J. Neira and J. Tardós. SLAM simulator for Matlab. http://webdiis.unizar.es/~neira/software/slam/slamsim.htm, 2002-2004.
[46] Jose Neira and Juan D. Tardos. Data association in stochastic mapping using
the joint compatibility test. IEEE Trans. Robotics, 17(6):890-897, Dec 2001.
[47] L. Paz, J. Guivant, J.D. Tardós, and J. Neira. Data association in O(n) for divide and conquer SLAM. In Robotics: Science and Systems (RSS), Atlanta, GA, USA, June 2007.
[48] C. Poelman and T. Kanade. A paraperspective factorization method for shape
and motion recovery. IEEE Trans. Pattern Analysis and Machine Intelligence,
19:206-218, March 1997.
[49] W. Rencken. Concurrent localisation and map building for mobile robots using
ultrasonic sensors. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems
(IROS), volume 3, pages 2192-2197, July 1993.
[50] S. Simhon and G. Dudek. A global topological map formed by local metric maps.
In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), volume 3,
pages 1708-1714, October 1998.
[51] R. Smith and P. Cheeseman. On the representation and estimation of spatial
uncertainty. Intl. J. of Robotics Research, 5:56-68, December 1986.
[52] R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships
in robotics. In Autonomous Robot Vehicles, pages 167-193. Springer Verlag,
1990.
[53] S. Thrun. Learning metric-topological maps for indoor mobile robot navigation. Artificial Intelligence, 99(1):21-71, 1998.
[54] S. Thrun, W. Burgard, and D. Fox. A real-time algorithm for mobile robot
mapping with applications to multi-robot and 3D mapping. In IEEE Intl. Conf.
on Robotics and Automation (ICRA), pages 321-328, San Francisco, CA, 2000.
IEEE.
[55] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, Cambridge, MA, 2005.
[56] S. Thrun, D. Fox, and W. Burgard. A probabilistic approach to concurrent
mapping and localization for mobile robots. Machine Learning, 31:29-53, 1998.
also appeared in Autonomous Robots 5, 253-271 (joint issue).
[57] S. Thrun, J. Gutmann, D. Fox, W. Burgard, and B. Kuipers. Integrating topological and metric maps for mobile robot navigation: A statistical approach. In
AAAI Conf. on Artificial Intelligence, 1998.
[58] S. Thrun, Y. Liu, D. Koller, A. Ng, Z. Ghahramani, and H. Durrant-Whyte. Simultaneous localization and mapping with sparse extended information filters. Intl. J. of Robotics Research, 2004.
[59] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon. Bundle adjustment: a modern synthesis. In Bill Triggs, Andrew Zisserman, and Richard Szeliski, editors, Vision Algorithms: Theory and Practice, volume 1883 of Lecture Notes in Computer Science, pages 153-177. Springer Berlin / Heidelberg, 2000.
[60] J. Vaganay, P. Baccou, and B. Jouvencel. Homing by acoustic ranging to a single
beacon. In IEEE Oceans, pages 1457-1462, 2000.
[61] M. Walter, F. Hover, and J. Leonard. SLAM for ship hull inspection using
exactly sparse extended information filters. In IEEE Intl. Conf. on Robotics and
Automation (ICRA), pages 1463-1470, May 2008.
[62] G. Welch and G. Bishop. An introduction to the Kalman filter, 1995.
[63] G. Wyeth and M. Milford. Spatial cognition for robots. IEEE Robotics & Automation Magazine, 16(3):24-32, September 2009.
[64] X.S. Zhou and S.I. Roumeliotis. Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case. In IEEE/RSJ Intl. Conf. on Intelligent
Robots and Systems (IROS), pages 1785-1792, Oct 2006.