7 LOCALIZATION AND MAPPING
7.1 Introduction to Localization and Mapping
Where Am I?
Frequently, robots find themselves asking this question.
Knowing your location, and being able to navigate to other
locations, is extremely important for autonomous robots. The
act of finding one's location against a map is known as
localization. So how can a robot localize itself? The most
common form of intelligent localization and navigation is to
use a map, combined with sensor readings and some form of
closed-loop motion feedback. But a robot needs to do much more than just localize itself.
Often, we'd like our robots to be able to build their own
maps, since map building by hand is tedious, boring, and
error-prone. The field of robotics that studies localization and
mapping is typically called SLAM (simultaneous localization
and mapping).
Mapping and localization are almost always a precursor to navigation. We can now break down the several types of problems that exist in SLAM, from easiest to hardest:
1) Known map, known localization - this is the easiest type of
problem. If you know the map, and your current location, you
can easily navigate to a new location.
2) Known map, unknown localization - you know what the
world looks like, but you don't know where you are. Once
you find out where you are, you really are just doing case #1
above.
3) Unknown map, unknown localization - the hardest of all problems. This is known as simultaneous localization and mapping (SLAM).
We will now briefly introduce the main concepts behind mapping and localization.
7.2 Probability Theory and Bayes' Rule
Probability Theory
Probability theory is the branch of mathematics concerned with
probability and the analysis of random phenomena. The central
objects of probability theory are random variables, stochastic
processes, and events: mathematical abstractions of nondeterministic events or measured quantities that may either be
single occurrences or evolve over time in an apparently random
fashion. If an individual coin toss or the roll of dice is considered to
be a random event, then if repeated many times the sequence of
random events will exhibit certain patterns, which can be studied
and predicted.
As a mathematical foundation for statistics, probability theory is
essential to many human activities that involve quantitative
analysis of large sets of data. Methods of probability theory also
apply to descriptions of complex systems given only partial
knowledge of their state.
Most introductions to probability theory treat discrete probability
distributions and continuous probability distributions separately.
The more mathematically advanced, measure-theoretic treatment of probability covers the discrete case, the continuous case, any mix of the two, and more.
- Discrete probability theory deals with events that occur in
countable sample spaces. (Examples: Throwing dice, experiments
with decks of cards)
We have a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω an intrinsic "probability" value f(x) is attached, which satisfies the following properties:
1. f(x) ∈ [0, 1] for all x ∈ Ω;
2. Σx∈Ω f(x) = 1.
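For instance, for a fair six-sided die the sample space is Ω = {1, 2, 3, 4, 5, 6}, each outcome has f(x) = 1/6, and the probabilities sum to 6 × (1/6) = 1, as the properties require.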
- Continuous probability theory deals with events that occur in a
continuous sample space.
If the outcome space of a random variable X is the set of real numbers (ℝ) or a subset thereof, then a function called the cumulative distribution function (cdf) F exists, defined by F(x) = P(X ≤ x). That is, F(x) returns the probability that X will be less than or equal to x.
The cdf necessarily satisfies the following properties:
1. F is a monotonically non-decreasing, right-continuous function;
2. lim x→−∞ F(x) = 0;
3. lim x→+∞ F(x) = 1.
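For instance, for a random variable X distributed uniformly on [0, 1], F(x) = 0 for x < 0, F(x) = x for 0 ≤ x ≤ 1, and F(x) = 1 for x > 1; this F satisfies all three properties.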
Bayes' Rule
In probability theory and statistics, Bayes' theorem (alternatively
Bayes' law or Bayes' rule) is a result that is of importance in the
mathematical manipulation of conditional probabilities.
Mathematically, Bayes' theorem gives the relationship between the probabilities of A and B, P(A) and P(B), and the conditional probabilities of A given B and B given A, P(A|B) and P(B|A). In its most common form, it is:
P(A|B) = P(B|A) P(A) / P(B)
The meaning of this statement depends on the interpretation of
probability ascribed to the terms:
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability
measures a degree of belief. Bayes' theorem then links the degree
of belief in a proposition before and after accounting for evidence.
For example, suppose somebody proposes that a biased coin is twice as likely to land heads as tails. Degree of belief in this
might initially be 50%. The coin is then flipped a number of times to
collect evidence. Belief may rise to 70% if the evidence supports
the proposition.
For proposition A and evidence B,
 P(A), the prior, is the initial degree of belief in A.
 P(A|B), the posterior, is the degree of belief having accounted for B.
 The quotient P(B|A)/P(B) represents the support B provides for A.
Frequentist interpretation
In the frequentist interpretation, probability measures a proportion
of outcomes. For example, suppose an experiment is performed
many times. P(A) is the proportion of outcomes with property A, and P(B) the proportion with property B. P(B|A) is the proportion of outcomes
with property B out of outcomes with property A, and P(A|B) the
proportion of those with A out of those with B.
The role of Bayes' theorem is best visualized with tree diagrams. Two diagrams partition the same outcomes by A and by B in opposite orders, to obtain the inverse probabilities; Bayes' theorem serves as the link between these different partitionings.
EVENT FORMS
Simple Form
In many applications, for instance in Bayesian inference, the event
B is fixed in the discussion, and we wish to consider the impact of
its having been observed on our belief in various possible events
A. In such a situation the denominator of the last expression, the
probability of the given evidence B, is fixed; what we want to vary
is A. Bayes' theorem then shows that the posterior probabilities are proportional to the numerator:
P(A|B) ∝ P(B|A) P(A)   (proportionality over A for given B)   … EQ (7.1)
In words: the posterior is proportional to the prior times the likelihood.
If events A1, A2, …, are mutually exclusive and exhaustive, i.e.,
one of them is certain to occur but no two can occur together, and
we know their probabilities up to proportionality, then we can
determine the proportionality constant by using the fact that their
probabilities must add up to one. For instance, for a given event A,
the event A itself and its complement ¬A are exclusive and
exhaustive. Denoting the constant of proportionality by c we have
P(A|B) = c · P(B|A) P(A)   … EQ (7.2)
and
P(¬A|B) = c · P(B|¬A) P(¬A)   … EQ (7.3)
Adding these two formulas we deduce that
c = 1 / ( P(B|A) P(A) + P(B|¬A) P(¬A) )   … EQ (7.4)
Extended form
Often, for some partition {Aj} of the event space, the event space is
given or conceptualized in terms of P(Aj) and P(B|Aj). It is then
useful to compute P(B) using the law of total probability:
P(B) = Σj P(B|Aj) P(Aj)   … EQ (7.5)
P(Ai|B) = P(B|Ai) P(Ai) / Σj P(B|Aj) P(Aj)   … EQ (7.6)
In the special case where A is a binary variable:
P(A|B) = P(B|A) P(A) / ( P(B|A) P(A) + P(B|¬A) P(¬A) )   … EQ (7.7)
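As a quick numerical illustration of Eq. (7.7), the short C++ fragment below evaluates the binary form of Bayes' theorem. The prior and likelihood values are invented purely for illustration and are not taken from the project.

    #include <iostream>

    int main() {
        // Hypothetical numbers: prior belief in proposition A and the
        // likelihoods of the observed evidence B under A and under not-A.
        double pA      = 0.5;   // P(A): prior degree of belief
        double pB_A    = 0.6;   // P(B|A)
        double pB_notA = 0.3;   // P(B|not A)

        // Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
        double pB = pB_A * pA + pB_notA * (1.0 - pA);

        // Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
        double pA_B = pB_A * pA / pB;

        std::cout << "posterior P(A|B) = " << pA_B << std::endl;   // prints 0.666...
        return 0;
    }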
Bayes' Rule
Bayes' rule is Bayes' theorem in odds form (the odds in favor of an event are defined as the ratio of the probability that the event will happen to the probability that it will not happen).
O(A1 : A2 | B) = O(A1 : A2) · Λ(A1 : A2 | B)   … EQ (7.8)
where
Λ(A1 : A2 | B) = P(B|A1) / P(B|A2)   … EQ (7.9)
is called the Bayes factor or likelihood ratio, and the odds between two events is simply the ratio of the probabilities of the two events. Thus
O(A1 : A2) = P(A1) / P(A2)   … EQ (7.10)
O(A1 : A2 | B) = P(A1|B) / P(A2|B)   … EQ (7.11)
So the rule says that the posterior odds are the prior odds times
the Bayes factor, or in other words, posterior is proportional to prior
times likelihood.
Derivation for events
Bayes' theorem may be derived from the definition of conditional probability:
P(A|B) = P(A ∩ B) / P(B),  if P(B) ≠ 0   … EQ (7.12)
P(B|A) = P(A ∩ B) / P(A),  if P(A) ≠ 0   … EQ (7.13)
P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A)   … EQ (7.14)
P(A|B) = P(B|A) P(A) / P(B),  if P(B) ≠ 0   … EQ (7.15)
For random variables
For two continuous random variables X and Y, Bayes' theorem
may be analogously derived from the definition of conditional
density:
fX|Y=y(x) = fX,Y(x, y) / fY(y)   … EQ (7.16)
fY|X=x(y) = fX,Y(x, y) / fX(x)   … EQ (7.17)
fX|Y=y(x) = fY|X=x(y) fX(x) / fY(y)   … EQ (7.18)
7.3 SLAM (Simultaneous Localization and Mapping)
Abstract
SLAM is a technique used by robots and autonomous vehicles to build up
a map within an unknown environment (without a priori
knowledge), or to update a map within a known environment (with
a priori knowledge from a given map), while at the same time
keeping track of their current location.
Mapping is the problem of integrating the information gathered by
a set of sensors into a consistent model and depicting that
information as a given representation. It can be described by the
first characteristic question, “What does the world look like? “.
In contrast to this, localization is the problem of estimating the
place of the robot relative to a map; in other words, the robot has
to answer the second characteristic question, “Where am I? “.
SLAM is therefore defined as the problem of building a model
leading to a new map, or repetitively improving an existing map,
while at the same time localizing the robot within that map. In
practice, the answers to the two characteristic questions cannot be
delivered independently of each other.
Mapping
SLAM in the mobile robotics community generally refers to the
process of creating geometrically consistent maps of the
environment. Topological maps are a method of environment
representation which capture the connectivity of the environment
rather than creating a geometrically accurate map. As a result,
algorithms that create topological maps are not referred to as SLAM.
Sensing
SLAM will always use several different types of sensors to acquire
data with statistically independent errors.
Locating
The results from sensing will feed the algorithms for locating.
The SLAM Process
The SLAM process consists of a number of steps. The goal of the
process is to use the environment to update the position of the
robot. The odometry of the robot (which provides an approximate position of the robot, as measured by the movement of its wheels) serves as the initial guess of where the robot might be in the PF, but we cannot rely on the odometry directly. We can use laser scans of the environment to correct the position of the robot.
This is accomplished by extracting features from the environment and re-observing them as the robot moves around. A PF (particle filter) is the heart of the SLAM process. It is responsible for updating where the robot thinks it is based on these features. These features are commonly called landmarks (landmarks are features which can easily be re-observed and distinguished from the environment; they are used by the robot to find out where it is, i.e. to localize itself).
Figure 7.1 Overview of the SLAM process.
When the odometry changes because the robot moves, the uncertainty pertaining to the robot's new position is updated in the filter using the odometry update. Landmarks are then extracted from the environment at the robot's new position. The robot then attempts to associate these landmarks with observations of landmarks it has previously seen. Re-observed landmarks are then used to update the robot's position in the filter. Landmarks which have not previously been seen are added to the filter as new observations so they can be re-observed later.
Figure 7.2 The robot is represented by the triangle; the stars represent landmarks. The robot initially measures the location of the landmarks using its sensors (sensor measurements illustrated with lightning).
Figure 7.3 The robot moves so it now thinks it is here. The distance moved is given
by the robot's odometry.
Figure 7.4 The robot once again measures the location of the landmarks using its
sensors but finds out they don't match with where the robot thinks they should be
(given the robot location). Thus the robot is not where it thinks it is.
Figure 7.5 As the robot trusts its sensors more than its odometry, it uses the information gained about where the landmarks actually are to determine where it is (the location the robot originally thought it was at is illustrated by the dashed triangle).
Figure 7.6 In actual fact the robot is here. The sensors are not perfect, so the robot will not know precisely where it is; however, this estimate is better than relying on odometry alone. The dotted triangle represents where it thinks it is, the dashed triangle is where the odometry told it it was, and the last triangle is where it actually is.
PF (Particle Filter)
This filter is used to localize the robot on the last updated map; how it works is discussed in Section 7.4.
Overview of the process
As soon as the landmark extraction and the data association are in place, the SLAM process can be considered as three steps:
1. Update the current state estimate using the odometry data
2. Update the estimated state from re-observing landmarks.
3. Add new landmarks to the current state.
 The first step is very easy. It is just an addition of the controls of the robot to the old state estimate. E.g. the robot is at point (x, y) with rotation theta, the controls are (dx, dy), and the change in rotation is dtheta. The result of the first step is the new state of the robot (x+dx, y+dy) with rotation theta+dtheta (a small code sketch of steps 1 and 2 follows this list).
 In the second step the re-observed landmarks are considered. Using the estimate of the current position, it is possible to estimate where each landmark should be. There is usually some difference between this prediction and the actual observation; this difference is called the innovation. The innovation thus reflects the discrepancy between the estimated robot position and the actual robot position, based on what the robot is able to see.
 In the second step the uncertainty of each observed landmark is also updated to reflect recent changes. For example, re-observing a landmark from a position with low uncertainty will increase the landmark certainty, i.e. decrease the variance of the landmark with respect to the current position of the robot.
 In the third step new landmarks are added to the state, the robot's map of the world. This is done using information about the current position and adding information about the relation between the new landmark and the old landmarks.
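The following is a minimal C++ sketch of steps 1 and 2 above. The Pose and Landmark types, the range/bearing form of the measurement, and all names are illustrative assumptions rather than the project's actual code.

    #include <cmath>

    struct Pose     { double x, y, theta; };   // robot state: position and heading
    struct Landmark { double x, y; };          // landmark position stored in the state

    // Step 1: add the controls (dx, dy, dtheta) to the old state estimate.
    Pose odometryUpdate(const Pose& p, double dx, double dy, double dtheta) {
        return { p.x + dx, p.y + dy, p.theta + dtheta };
    }

    // Step 2: the innovation is the difference between the measurement we expect
    // for a re-observed landmark (given the predicted pose) and the measurement
    // we actually obtain; here the measurement is a range/bearing pair.
    struct Innovation { double range, bearing; };

    Innovation computeInnovation(const Pose& predicted, const Landmark& lm,
                                 double measuredRange, double measuredBearing) {
        double expectedRange   = std::hypot(lm.x - predicted.x, lm.y - predicted.y);
        double expectedBearing = std::atan2(lm.y - predicted.y, lm.x - predicted.x)
                                 - predicted.theta;
        return { measuredRange - expectedRange, measuredBearing - expectedBearing };
    }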
7.4 Particle Filters
Particle filters have many names: (Sequential) Monte Carlo
filters, Bootstrap filters, Condensation, Interacting Particle
Approximations, Survival of the fittest, … etc.
In statistics, a particle filter, also known as a sequential Monte Carlo (SMC) method, aims to estimate the sequence of hidden parameters xk for k = 0, 1, 2, 3, …, based only on the observed data yk for k = 0, 1, 2, 3, …. All Bayesian estimates of xk follow from the posterior distribution p(xk | y0, y1, …, yk).
Our problem is tracking the state of a system as it evolves
over time (system here is the robot and its state is its
location). We have sequentially arriving (noisy or ambiguous) observations from our Kinect device and ultrasonic sensor, and we need the best possible estimate of the hidden variables.
Bayesian Filtering / Tracking Problem
 Unknown state vector x0:t = (x0, …, xt)
 Observation vector z1:t
 Find the PDF (probability density function) p(x0:t | z1:t) … the posterior distribution, or p(xt | z1:t) … the filtering distribution
Prior information given:
p(x0) … prior on the state distribution
p(zt | xt) … sensor model
p(xt | xt−1) … Markovian state-space (motion) model
Sequential Update
Recursive filtering
 Predict next state pdf from current estimate.
 Update the prediction using sequentially arriving new
measurements.
 Optimal Bayesian solution is recursively calculating exact
posterior density.
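Written out in the notation above (this is the standard Bayes filter recursion, not something specific to this project), one prediction/update cycle is:
Prediction:  p(xt | z1:t−1) = ∫ p(xt | xt−1) p(xt−1 | z1:t−1) dxt−1
Update:      p(xt | z1:t) ∝ p(zt | xt) p(xt | z1:t−1)
In general these integrals cannot be evaluated in closed form, which is why the particle filter approximates the posterior with a finite set of weighted samples.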
Particle Filters Algorithm
Here is a floor plan of an environment where a robot is
located and the robot has to perform global localization.
Global localization is when an object has no idea where it is in
space and has to find out where it is just based on sensory
measurements.
Figure 7.7 Map of the environment with particles.
The robot, which is located in the upper right hand corner of
the environment, has range sensors that are represented by
the blue stripes.
The sensors are used to measure the distance to nearby obstacles, and they help the robot determine a good posterior distribution over where it is.
In this environment the red dots are particles. They are discrete guesses as to where the robot might be. Each particle has the structure (X, Y, heading), where the heading is the angle the robot makes with respect to the X axis -- three values that comprise a single guess. However, a single guess is not a filter; rather, a filter is a set of several thousands of such guesses that together form an approximate representation of the posterior of the robot.
The essence of particle filters is to have the particles guess where the robot might be moving, but also to have them survive, in a kind of "survival of the fittest", so that particles that are more consistent with the measurements are more likely to survive. As a result, places of high probability will collect more particles, and therefore be more representative of the robot's posterior belief. The particles together make up the approximate belief of the robot as it localizes itself.
The robot senses the distance to the nearby landmarks (For
example, we have four landmarks here in our environment).
The distances from the landmarks to the robot make up the
measurement vector of the robot.
Figure 7.8 Distances measured from the robot to the landmarks.
We have a measurement vector that consists of the four distances from L1 to L4. If a particle hypothesizes that its coordinates are somewhere other than where the robot actually is (the red robot indicates the particle hypothesis), we have the situation shown in Figure 7.9:
Figure 7.9 Distances as seen from the particle's hypothesized pose.
The particle also hypothesizes a different heading direction.
We can then take the same measurement vector from our
robot and apply it to the particle.
Figure 7.10 Predicted measurement vector (green) compared with the robot's actual measurements.
The green indicates the measurement vector we would have
predicted if the red particle actually were a good match for the
robot's actual location.
Figure 7.11 subtract old from new position
The closer the assumed particle is to the correct position, the more likely the set of measurements given that position will be. The idea of particle filters is that the mismatch between the actual measurement and the predicted measurement leads to an importance weight that indicates the importance of that specific particle. The larger the weight, the more important the particle.
Figure 7.12 weight calculations
When we have a bunch of particles, each has its own weight;
some are very plausible, while others look very implausible as
indicated by the size of the particle.
Figure 7.13 weight
Next we allow the particles to survive at random, but the
probability of survival will be proportional to the weights. That
is, a particle with a larger weight will survive at a higher
proportion than a particle with a small weight. This means that
after resampling, which is randomly drawing particles from
the old ones with replacement in proportion to the importance
weight, the particles with a higher importance weight will live
on, while the smaller ones will die out. The "with replacement"
aspect of this selection method is important because it allows
us to choose the high probability particles multiple times. This
causes the particles to cluster around regions with high
posterior probability.
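A compact C++ sketch of one predict-weight-resample iteration is given below. The single-landmark range likelihood, the noise values and all identifiers are simplifying assumptions made for illustration; a real implementation would weight each particle against the full measurement vector described above.

    #include <cmath>
    #include <random>
    #include <utility>
    #include <vector>

    struct Particle { double x, y, theta, weight; };

    // Gaussian likelihood of one range measurement to a known landmark, given the
    // particle's hypothesized pose (sigma is an assumed sensor noise level).
    double measurementLikelihood(const Particle& p, double lmX, double lmY,
                                 double measuredRange) {
        const double sigma = 0.2;
        double err = measuredRange - std::hypot(lmX - p.x, lmY - p.y);
        return std::exp(-err * err / (2.0 * sigma * sigma));
    }

    // One predict-weight-resample step of the particle filter.
    void particleFilterStep(std::vector<Particle>& particles,
                            double dx, double dy, double dtheta,
                            double lmX, double lmY, double measuredRange,
                            std::mt19937& rng) {
        std::normal_distribution<double> noise(0.0, 0.05);   // assumed motion noise
        std::vector<double> weights;
        for (Particle& p : particles) {
            // Predict: move the particle by the control plus random noise.
            p.x += dx + noise(rng);
            p.y += dy + noise(rng);
            p.theta += dtheta + noise(rng);
            // Weight: how well does this guess explain the measurement?
            p.weight = measurementLikelihood(p, lmX, lmY, measuredRange);
            weights.push_back(p.weight);
        }
        // Resample with replacement, in proportion to the importance weights:
        // plausible particles survive and multiply, implausible ones die out.
        std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
        std::vector<Particle> next;
        next.reserve(particles.size());
        for (std::size_t i = 0; i < particles.size(); ++i)
            next.push_back(particles[pick(rng)]);
        particles = std::move(next);
    }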
7.5 Mapping (Occupancy Grid Mapping)
Occupancy Grid Mapping refers to a family of computer
algorithms in probabilistic robotics for mobile robots which
address the problem of generating maps from noisy and
uncertain sensor measurement data, with the assumption
that the robot pose is known.
The basic idea of the occupancy grid is to represent a map of
the environment as an evenly spaced field of binary random
variables each representing the presence of an obstacle at
that location in the environment. Occupancy grid algorithms
compute approximate posterior estimates for these random
variables.
We can never be entirely certain if a cell in our 2D
occupancy grid map is occupied or empty, thus we will
typically give each cell a probability of being full or empty.
We have to initialize the values of all cells to some value representing "unknown"; a common choice is 0.5, denoting that we believe there is a 50% chance the cell is occupied. If you are navigating in mostly open space, you may lower this value, or vice versa.
For now, we will assume we know our position absolutely.
Our position is typically called a "pose", and in our 2D map, it
would entail X and Y coordinates as well as a rotation angle.
The goal of an occupancy grid mapping algorithm is to estimate the posterior probability over maps given the data: p(m | z1:t, x1:t), where m is the map, z1:t is the set of measurements from time 1 to t, and x1:t is the set of robot poses from time 1 to t. The controls and odometry data play no part in the occupancy grid mapping algorithm, since the path is assumed known.
Occupancy grid algorithms represent the map m as a fine-grained grid over the continuous space of locations in the environment. The most common type of occupancy grid maps are 2D maps that describe a slice of the 3D world.
Occupancy Grid Mapping Algorithm
Figure 7.14 Occupancy grid mapping algorithm
 First, for every grid cell that falls within the range of the measurement Zt, update the occupancy value (stored as log odds) using the function inverse_sensor_model (see the sketch below).
 inverse_sensor_model is a function that assigns an "occupied" value to the cells within the sensor cone whose range is close to the measured range, and a "free" value to the cells that lie between the robot and the measured range.
 Otherwise, if a cell is not in the range of Zt, its value is kept as it is, i.e. it is not updated.
 L0 is the prior log odds of occupancy.
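A minimal C++ sketch of the per-cell log-odds update follows. The grid layout, the L_OCC/L_FREE constants and the helper names are assumptions chosen for illustration; finding which cells fall inside the sensor cone (e.g. by ray casting) is not shown.

    #include <cmath>
    #include <vector>

    // Occupancy grid stored as log odds; 0.0 corresponds to p = 0.5 ("unknown").
    struct Grid {
        int width, height;
        std::vector<double> logOdds;                     // row-major, width*height cells
        Grid(int w, int h) : width(w), height(h), logOdds(w * h, 0.0) {}
        double& at(int x, int y) { return logOdds[y * width + x]; }
    };

    const double L_OCC  =  0.85;   // assumed log odds returned for "occupied" cells
    const double L_FREE = -0.40;   // assumed log odds returned for "free" cells
    const double L0     =  0.0;    // prior log odds of occupancy

    // Update one cell that falls inside the measurement cone of Zt. Cells outside
    // the cone are simply not touched, which implements the "keep as is" rule.
    void updateCell(Grid& g, int x, int y, bool closeToMeasuredRange) {
        double inverseSensorModel = closeToMeasuredRange ? L_OCC : L_FREE;
        g.at(x, y) += inverseSensorModel - L0;
    }

    // Recover the occupancy probability of a cell from its log-odds value.
    double occupancyProbability(double l) { return 1.0 - 1.0 / (1.0 + std::exp(l)); }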
Figure 7.15 Plan of an office environment
Figure 7.16 Occupancy grid map of the same office environment
7.6 Rao-Blackwellised Particle Filtering (RBPF)
Introduction
Learning maps is a fundamental task of mobile robots and a
lot of researchers focused on this problem. In the literature,
the mobile robot mapping problem is often referred to as the
simultaneous localization and mapping (SLAM) problem. In
general, SLAM is a complex problem because for learning a
map the robot requires a good pose estimate while at the
same time a consistent map is needed to localize the robot.
This dependency between the pose and the map estimate makes the SLAM problem hard and requires searching for a solution in a high-dimensional space. Rao–Blackwellized particle filters (RBPFs) have been introduced as an effective means to solve the SLAM problem. The main problem of Rao–
Blackwellized particle filters lies in their complexity,
measured in terms of the number of particles required to
learn an accurate map. Either reducing this quantity or
improving the algorithm so that it is able to handle larger
sample sets is one of the major challenges for this family of
algorithms.
Learning Maps with Rao–Blackwellized Particle
Filters
The key idea of the Rao–Blackwellized Particle Filter for
SLAM is to estimate the joint posterior p(x1:t ,m | z1:t, u1:t−1)
about the trajectory x1:t= x1, . . . , xt of the robot and the map
m of the environment given the observations z1:t = z1, . . . , zt
and odometry measurements u1:t−1 = u1, . . . , ut−1. It does so
by using the following factorization:
p(x1:t, m | z1:t, u1:t−1) = p(m | x1:t, z1:t) · p(x1:t | z1:t, u1:t−1)   … EQ (7.19)
In this equation, the posterior p(x1:t | z1:t, u1:t−1) is similar to the
localization problem, since only the trajectory of the vehicle
needs to be estimated. This estimation is performed using a
particle filter which incrementally processes the observations
and the odometry readings as they are available. The
second term p(m | x1:t, z1:t) can be computed efficiently since
the poses x1:t of the robot are known when estimating the
map m. As a result of the factorization, a Rao–Blackwellized
particle filter for SLAM maintains an individual map for each
sample and updates this map based on the trajectory
estimate of the sample upon “mapping with known poses”.
A mapping system that applies a RBPF requires a proposal
distribution to draw the next generation of samples. The
general framework leaves open which proposal should be
used and how it should be computed. A proposal distribution
that is typically used in the context of Monte-Carlo
localization is the motion model p(x(t) | x(t−1), u(t−1)). This
proposal, however, is suboptimal since it does not consider
the observations of the robot to predict its motion.
The optimal proposal distribution is
p(xt | m(i)t−1, x(i)t−1, zt, ut−1) = p(zt | m(i)t−1, xt) · p(xt | x(i)t−1, ut−1) / p(zt | m(i)t−1, x(i)t−1, ut−1)   … EQ (7.20)
Whenever a laser range finder is used, one can observe that
the observation likelihood p(z(t) | m(t−1), x(t) ) is much more
peaked than the motion model p(x(t) | x(t−1), u(t−1)). The
observation likelihood dominates the product in Eq. (7.20) in the meaningful area of the distribution. Therefore, we approximate the motion model p(x(t) | x(t−1), u(t−1)) by a constant k within this meaningful area L(i). Under this approximation, the proposal turns into
p(xt | m(i)t−1, x(i)t−1, zt, ut−1) ≈ p(zt | m(i)t−1, xt) / ∫xt∈L(i) p(zt | m(i)t−1, xt) dxt   … EQ (7.21)
Eq. (7.21) can be computed by evaluating the observation likelihood p(zt | m(i)t−1, xt) on a grid which is bounded by the maximum odometry error. Alternatively, one can use a set of sampled points {xj} and evaluate the observation likelihood point-wise. In order to efficiently sample the next generation of particles, one can approximate this distribution by a Gaussian. For each particle i, the parameters μ(i)t and Σ(i)t of the Gaussian are computed as
μ(i)t = (1/η(i)) Σj xj · p(zt | m(i)t−1, xj)
Σ(i)t = (1/η(i)) Σj p(zt | m(i)t−1, xj) · (xj − μ(i)t)(xj − μ(i)t)^T
with η(i) = Σj p(zt | m(i)t−1, xj)   … EQ (7.22)
which is computationally expensive but leads to an informed
proposal distribution. This allows us to draw particles in a
more accurate manner which seriously reduces the number
of required samples.
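The following C++ sketch illustrates the sampled-points variant for a single particle: candidate poses are drawn around the odometry prediction, weighted by the observation likelihood, a Gaussian is fitted to them and the new pose is drawn from that Gaussian. The likelihood function is only a stub and all constants and names are illustrative assumptions; a diagonal covariance is used for brevity, whereas Eq. (7.22) uses the full covariance matrix.

    #include <cmath>
    #include <random>
    #include <vector>

    struct Pose { double x, y, theta; };

    // Stand-in for the scan-matching likelihood p(zt | m, x): how well the current
    // scan fits the particle's map at the candidate pose. A real implementation
    // would ray-cast into the grid map; this stub only makes the sketch compile.
    double observationLikelihood(const Pose& candidate) { (void)candidate; return 1.0; }

    Pose sampleInformedProposal(const Pose& odomPrediction, std::mt19937& rng) {
        const int K = 50;                                      // assumed number of sampled points
        std::normal_distribution<double> jitterXY(0.0, 0.10);  // assumed odometry error bound
        std::normal_distribution<double> jitterTh(0.0, 0.05);

        // Draw candidate poses around the odometry prediction and evaluate the
        // observation likelihood point-wise at each of them.
        std::vector<Pose> c;
        std::vector<double> w;
        double sumW = 0.0;
        for (int k = 0; k < K; ++k) {
            Pose p{ odomPrediction.x + jitterXY(rng),
                    odomPrediction.y + jitterXY(rng),
                    odomPrediction.theta + jitterTh(rng) };
            double lk = observationLikelihood(p);
            c.push_back(p);  w.push_back(lk);  sumW += lk;
        }

        // Likelihood-weighted mean of the candidates (the mu of the Gaussian).
        Pose mu{0.0, 0.0, 0.0};
        for (int k = 0; k < K; ++k) {
            mu.x += w[k] / sumW * c[k].x;
            mu.y += w[k] / sumW * c[k].y;
            mu.theta += w[k] / sumW * c[k].theta;
        }

        // Likelihood-weighted variances (diagonal covariance only).
        double vx = 0.0, vy = 0.0, vt = 0.0;
        for (int k = 0; k < K; ++k) {
            vx += w[k] / sumW * (c[k].x - mu.x) * (c[k].x - mu.x);
            vy += w[k] / sumW * (c[k].y - mu.y) * (c[k].y - mu.y);
            vt += w[k] / sumW * (c[k].theta - mu.theta) * (c[k].theta - mu.theta);
        }

        // Draw the particle's new pose from the fitted Gaussian.
        std::normal_distribution<double> dx(mu.x, std::sqrt(vx));
        std::normal_distribution<double> dy(mu.y, std::sqrt(vy));
        std::normal_distribution<double> dt(mu.theta, std::sqrt(vt));
        return { dx(rng), dy(rng), dt(rng) };
    }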
Speeding up the computation of the proposal
The problem of the method presented above is the
computational complexity of the informed proposal
distribution. The computations need to be carried out for
each sample individually. As a result, such a mapping
system runs online only for small particle sets. Furthermore,
each particle maintains a full grid map, which requires storing large grid structures in memory. To overcome this
limitation, we present a way to utilize intermediate results in
order to efficiently determine the proposal for the individual
samples.
The proposal distribution is needed to model the relative
movement of the vehicle under uncertainty. In most
situations, this uncertainty is similar for all samples within
one movement. It therefore makes sense to use the same
uncertainty to propagate the particles. We derive a way to
sample multiple particles from the same proposal. As a
result, the time consuming computation of the proposal
distribution can be carried out for a few particles that are
representatives for groups of similar samples. Furthermore,
we observed that local maps which are represented in a
particle-centered coordinate frame look similar for many
samples. We therefore present a compact map model in
which multiple particles can share their local maps. Instead
of storing a full grid map, each sample maintains only a set
of reference frames for the different local maps. This
substantially reduces the memory requirements of the
mapping algorithm.
Different situations during mapping
Before we derive our new proposal distribution, we start with
a brief analysis of the behavior of a RBPF. One can
distinguish three different types of situations during mapping:
• the robot is moving through unknown areas,
• it is moving through known areas, or
• it is closing a loop. Here, closing a loop means that the robot first moves through unknown areas and then re-enters known terrain. It can be seen as moving along a so-far non-traversed shortcut from the current pose of the robot to an already known area.
For each of these three situations, we will present a proposal
distribution that needs to be computed only for a small set of
representatives rather than for all particles. We make the following three assumptions:
Assumption 1. The current situation is known, which means
that the robot can determine whether it is moving through
unknown terrain, within a known area, or is closing a loop.
Assumption 2. The corresponding local maps of two samples
are similar if considered in a particle-centered reference
frame. In the following, we refer to this property as local
similarity of the maps.
Assumption 3. An accurate algorithm for pose tracking is
used and the observations are affected by a limited sensor
noise.
Computing the proposal for unknown terrain
For proximity sensors like laser range finders, the
observations of the robot cover only a local area around the
robot. As a result, we only need to consider the surroundings
of the robot when computing the proposal distribution. Let
m̂(i)t−1 refer to the local map of particle i around its previous pose x(i)t−1. In the surroundings of the robot, we can approximate
p(zt | m(i)t−1, xt) ≈ p(zt | m̂(i)t−1, xt)   … EQ (7.23)
m̂(i)t−1 ≈ m̂(j)t−1, when each local map is expressed in the reference frame of its own particle (Assumption 2)   … EQ (7.24)
As a result, we can determine the proposal for a particle j by
computing the proposal in the reference frame of particle i
and then translating it to the reference frame of particle j
p(xt | m(j)t−1, x(j)t−1, zt, ut−1) ≈ p(xt′ | m(i)t−1, x(i)t−1, zt, ut−1),  where xt′ denotes xt transformed from the reference frame of particle j into the reference frame of particle i   … EQ (7.25)
Computing the proposal for already visited areas
Whenever the robot moves through known areas, each
particle stays localized in its own map according to
Assumption 3. To update the pose of each particle while the
robot moves, we choose the pose xt that maximizes the
likelihood of the observation around the pose predicted by
odometry
x(i)t = argmaxx p(zt | m(i)t−1, x),  where the search is restricted to the area around the pose predicted by the odometry   … EQ (7.26)
Under Assumptions 2 and 3, we can estimate the poses of all samples according to Eq. (7.26) while moving through known areas. In this way, the complex computation of an informed proposal needs to be done only once.
Computing the proposal when closing a loop
In contrast to the two situations described earlier, the
computation of the proposal is more complex in case of a
loop closure. This is due to the fact that Assumption 2 (local
similarity) is typically violated even for subsets of particles.
To illustrate this, assume that the particle cloud is widely
spread when the loop is closed. Typically, the individual
samples reenter the previously mapped terrain at different
locations. This results in different hypotheses about the
topology of the environment and definitively violates
Assumption 2.
Dealing with such a situation requires additional effort in the estimation process. Whenever a particle i closes a loop, we consider that the map m(i)t−1 of its surroundings consists of two components. Let m(i)loop refer to the map of the area the robot seeks to re-enter. Then, m(i)local is the map constructed from the most recent measurements, without the part of the map that overlaps with m(i)loop. Since those two maps are disjoint, and under the assumption that the individual grid cells are independent, we can use a factorized form for our likelihood function:
p(zt | m(i)t−1, xt) = p(zt | m(i)loop, xt) · p(zt | m(i)local, xt)   … EQ (7.27)
For efficiency reasons, we use only the local map m(i)local to
compute the proposal and do not consider m(i)loop. This
procedure is valid but requires adapting the weight computation. According to the importance sampling principle, this leads to
w(i)t = w(i)t−1 · (η1 / η2) · p(zt | m(i)loop, x(i)t)   … EQ (7.28)
where η1 and η2 are normalization factors resulting from Bayes' rule.
Approximative importance weight computation
We observed in practical experiments that the normalizing factors η1 and η2 in Eq. (7.28) have only a minor influence on the overall weight. In order to speed up the computation of the importance weights, we approximate Eq. (7.28) by
w(i)t ≈ w(i)t−1 · p(zt | m(i)loop, x(i)t)   … EQ (7.29)
Figure 7.17 SLAM mapping using Rao-Blackwellised particle filters
7.7 SLAM Pseudocode
 Include a header file that includes all the header files and
namespaces needed from the MRPT library.
 Class SLAM
// define a class called SLAM whose input is a pointer to the critical section
 MAKESLAM (shared resource)
// member function of the SLAM class that takes the shared resource as an input
 Define map parameters like likelihood options and particle
filter options.
 Initialize values of map parameters.
 Assign these values to the TConstructionOptions structure, which holds the main input options of our RBPF map.
 Define the grid map.
 Define the action and observation variables and the Kinect.
 While (true)
{
Enter the critical section;
If (exit signal is true)
{
Save the last map update to file and save it as an
image in the shared resource
}
Else if (there is a new odometry and observation input and
the map isn’t yet updated)
{
Get the last motion and observation update from the shared
resource;
Update the shared resource position variables and MAP
variable and mapimage ;
}
Leave the critical section;
Update our map using the action and observation;
Update the location of the robot on the updated map ;
Update the absolute position with the x,y,yaw of the mean
value of the position ;
Clear action ;
Clear observation;
Sleep 10 ms to give another thread the chance to execute;
}
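The pseudocode above follows a simple guarded loop around the shared resource. A stripped-down C++ sketch of that structure is shown below; SharedResource and its fields are illustrative stand-ins for the project's actual classes (the fields really used are listed in Section 7.8), and the map-update calls are replaced by comments.

    #include <chrono>
    #include <mutex>
    #include <thread>

    // Illustrative stand-in for the project's shared-resource object.
    struct SharedResource {
        std::mutex cs;            // the "critical section"
        bool exitSignal = false;  // set when the program should shut down
        bool newData = false;     // set by the odometry and Kinect grabbers
        // ... incremental motion, observation, map, absolute position, map image
    };

    void slamThread(SharedResource& shared) {
        bool haveNewInput = false;
        while (true) {
            {   // --- enter the critical section ---
                std::lock_guard<std::mutex> lock(shared.cs);
                if (shared.exitSignal) {
                    // save the final map to file and as an image, then stop
                    break;
                }
                if (shared.newData) {
                    // copy the latest action and observation out of the shared
                    // resource so the expensive map update can run outside the lock
                    haveNewInput = true;
                    shared.newData = false;
                }
            }   // --- leave the critical section ---

            if (haveNewInput) {
                // update the map with the action/observation, re-localize the robot
                // on the updated map, then write back the pose, map and map image
                haveNewInput = false;
            }
            // yield so the grabber threads get a chance to execute
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }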
7.8 Passing SLAM to Other Modules
The SLAM section is highly related to many
modules in the project which are:
1- Path Planning Module:
The output of the RBPF SLAM is a grid map that is passed to the path planning module and used to determine which path to take from the initial state to the goal.
2- Kinect Grabber:
Extracts the Kinect 3D observation and puts it in the shared resource to be used by the SLAM.
3- Odometry Grabber:
Extracts the incremental motion from the encoders and puts it in the shared resource to be used by the SLAM.
4- Shared Resource File:
Uses the following shared resource elements (a sketch of this structure is given after the list):
 MAP: the map, updated each iteration.
 Absolute position: updated each iteration.
 MAPimage: the latest map update saved as an image.
 Incremental motion: motion grabbed by the odometry grabber.
 Signals_to_exit: tells the thread that it must stop working and exit the critical section.
 Kinect_observation: grabbed by the Kinect grabber.
 Persistent_synchronization_mechanism: used to make sure that each time we use SLAM to update the map and position there is a new action and observation.
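As a rough illustration of how these elements could be grouped, the following C++ sketch collects them into a single structure. The types are placeholders (the real project stores MRPT map and observation objects here), and all names are assumptions made for illustration.

    #include <mutex>
    #include <string>
    #include <vector>

    struct Pose2D { double x = 0, y = 0, yaw = 0; };

    struct SharedResource {
        std::mutex criticalSection;              // persistent synchronization mechanism
        std::vector<float> map;                  // MAP: updated each iteration
        Pose2D absolutePosition;                 // absolute position: updated each iteration
        std::string mapImagePath;                // MAPimage: map update saved as an image
        Pose2D incrementalMotion;                // grabbed by the odometry grabber
        std::vector<float> kinectObservation;    // grabbed by the Kinect grabber
        bool signalsToExit = false;              // the thread must stop and exit
        bool newActionObservation = false;       // SLAM only runs when fresh data arrived
    };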