CMPUT 412 Experimental Robotics

CMPUT 412
Localization
Csaba Szepesvári
University of Alberta
1
Contents
• To localize or not to localize
• Noise and aliasing; odometric position estimation
• Map representation
• Belief representation
• Probabilistic map-based localization
• Other examples of localization systems
• Autonomous map building
Localization
[Diagram: general control scheme for mobile robot navigation — perception and localization / map building (local map, environment model, global map, "position") feed cognition and path planning, which drives motion control in the real-world environment.]
2
Localization, Where am I?
[Diagram: general schematic for mobile robot localization — the encoder drives a prediction of position (e.g. odometry); on-board sensors deliver raw sensor data or extracted features (observation); the predicted position and the observations are matched against the map database; the matched observations feed the position update (estimation).]
• Odometry, dead reckoning
• Localization based on external sensors, beacons or landmarks
• Probabilistic map-based localization
3
Challenges of Localization
• Knowing the absolute position (e.g. GPS) is not sufficient
• Indoor environments
• Relation to other (possibly moving) objects
• Planning needs more than location
• Factors influencing the quality of location estimates
  – Sensor noise
  – Sensor aliasing
  – Effector noise
  – Odometric position estimation
4
To Localize or Not?
5
To Localize or Not?
• Constraints on navigation:
  – navigate without hitting obstacles
  – detect the goal location
• Possible solution:
  – always follow the left wall
  – detect that the goal is reached
6
Behavior Based Navigation
7
Model Based Navigation
8
Noise and Aliasing
9
Sensor Noise
• Where does it come from?
  – Environment: e.g. surface, illumination, …
  – Measurement principle: e.g. interference between ultrasonic sensors
• What does it cause?
  – Poor information
• What to do?
  – Integrate
    – over sensors
    – over time
10
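To make "integrate over time" concrete: a simple recursive filter such as an exponential moving average already suppresses much of the sensor noise. The sketch below is illustrative only (the readings and the smoothing factor alpha are made-up values, not from the slides):

```python
def exponential_average(readings, alpha=0.2):
    """Integrate noisy readings over time with an exponential moving average.
    alpha (illustrative value) trades responsiveness against noise suppression."""
    estimate = None
    for z in readings:
        # The first reading initializes the estimate; afterwards blend old and new.
        estimate = z if estimate is None else (1 - alpha) * estimate + alpha * z
        yield estimate

# Example: smooth a noisy range signal containing one outlier.
noisy = [1.02, 0.97, 1.05, 0.99, 1.51, 1.00, 0.98]
print(list(exponential_average(noisy)))
```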
Sensor Aliasing
• What is it?
  – The same reading can arise at multiple possible locations (states)
• How generic is it?
  – Very.
• What is the result?
  – Insufficient information
• What to do?
  – Integrate
    – over multiple sensors
    – over time
11
Effector Noise,
Odometry, Dead Reckoning
• Wheel encoders are precise → use them to keep track of the robot's location
• Does this work?
• What are "odometry" and "dead reckoning"?
  – The position update is based on proprioceptive sensors
  – Odometry: wheel sensors only
  – Dead reckoning: also heading sensors
• How do we do this?
  – The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated into the position.
  – Pros: straightforward, easy
  – Cons: errors are integrated → unbounded
  – Improvement: additional heading sensors (e.g. a gyroscope)
12
Odometry
13
Errors in Odometry
• Error types
  – Deterministic (systematic)
  – Non-deterministic (non-systematic)
• What to do with them?
  – Calibration can eliminate deterministic errors
  – Non-deterministic errors: live with them!
• Major error sources:
  – Limited resolution during integration (time increments, measurement resolution, …)
  – Misalignment of the wheels (deterministic)
  – Unequal wheel diameters (deterministic)
  – Variation in the contact point of the wheel
  – Unequal floor contact (slipping, non-planar surface, …)
  – …
14
Classification of Integration Errors
• Range error: integrated path length (distance) of the robot's movement
  – sum of the wheel movements
• Turn error: similar to the range error, but for turns
  – difference of the wheel motions
• Drift error: a difference in the errors of the two wheels leads to an error in the robot's angular orientation

Over long periods of time, turn and drift errors far outweigh range errors!
• Consider moving forward along a straight line on the x axis. The error in the y-position introduced by a move of d meters has a component d·sin(Δθ), which can become quite large as the angular error Δθ grows.
15
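A quick illustrative calculation (not from the slides): after a straight move of d = 10 m with an accumulated angular error of Δθ = 5° ≈ 0.087 rad, the lateral error is d·sin(Δθ) ≈ 0.87 m, while the error along the direction of travel stays small (10·cos(5°) ≈ 9.96 m) — which is why turn and drift errors dominate.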
The Differential Drive Robot
• Pose: p = [x, y, θ]ᵀ
• Odometric update: p' = p + [Δx, Δy, Δθ]ᵀ
16
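The update above can be written directly as code. Below is a minimal sketch of the odometric update for a differential-drive robot, assuming encoder-derived wheel displacements ds_left and ds_right and a known wheelbase; the names and numbers are illustrative, not from the course material:

```python
import math

def odometry_update(x, y, theta, ds_left, ds_right, wheelbase):
    """One odometric update step for a differential-drive robot.

    ds_left, ds_right: wheel travel since the last update (from the encoders), in meters.
    wheelbase: distance between the two wheels, in meters.
    """
    ds = (ds_right + ds_left) / 2.0            # distance traveled by the robot center
    dtheta = (ds_right - ds_left) / wheelbase  # change of heading
    # Integrate the increment into the pose (midpoint approximation for the heading).
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Example: a small, systematic right-wheel surplus integrates into a visible drift.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = odometry_update(*pose, ds_left=0.010, ds_right=0.011, wheelbase=0.3)
print(pose)
```

Note how the unbounded error growth shows up: the tiny per-step difference never averages out; it accumulates in the heading and hence in the position.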
Growth of Pose Uncertainty for Straight-Line Movement
• Note: errors perpendicular to the direction of movement grow much faster!
17
Growth of Pose Uncertainty for Movement on a Circle
• Note: the error ellipse does not remain perpendicular to the direction of movement!
18
Map Representations
19
Map Representation
• Issues
  1. Map precision vs. application
  2. Feature precision vs. map precision
  3. Precision vs. computational complexity
• Continuous representation
• Decomposition (discretization)
20
Representation of the Environment
• Environment representation
  – Continuous metric: x, y, θ
  – Discrete metric: metric grid
  – Discrete topological: topological grid
• Environment modeling
  – Raw sensor data, e.g. laser range data, grayscale images
    – large volume of data, low distinctiveness at the level of individual values
    – makes use of all acquired information
  – Low-level features, e.g. lines or other geometric features
    – medium volume of data, average distinctiveness
    – filters out the useful information, but ambiguities remain
  – High-level features, e.g. doors, a car, the Eiffel Tower
    – low volume of data, high distinctiveness
    – filters out the useful information; few/no ambiguities, but possibly not enough information
21
Continuous Line-Based Representation
a) Architecture map
b) Representation with a set of infinite lines
22
Exact Cell Decomposition
23
Fixed Cell Decomposition
24
Adaptive Cell Decomposition
25
Topological Decomposition
[Figure: topological decomposition of an environment at successively finer scales, from ~1 km down to ~10 m.]
26
Topological Decomposition
[Figure: topological graph — nodes connected by arcs (connectivity).]
27
Challenges
• The real world is dynamic
• Perception is still a major challenge
  – Error prone
  – Extraction of useful information is difficult
• Traversal of open space
• How to build up the topology (boundaries of nodes)
• Sensor fusion
• …
28
Belief Representations
29
Belief Representation
• Continuous map with single hypothesis
• Continuous map with multiple hypotheses
• Discretized map with discrete pdf
• Discretized topological map with discrete pdf
30
Belief Representation: Characteristics
• Continuous
  – Precision bounded by sensor data
  – Typically a single-hypothesis pose estimate
  – Lost when diverging (for a single hypothesis)
  – Compact representation and typically reasonable processing power
• Discrete
  – Precision bounded by the resolution of the discretization
  – Typically a multiple-hypothesis pose estimate
  – Never lost (when it diverges, it converges to another cell)
  – Significant memory and processing power needed (not the case for topological maps)
31
A Taxonomy of Probabilistic Models
[Diagram, courtesy of Julien Diard: taxonomy of probabilistic models ordered from more general to more specific — Bayesian Programs, Bayesian Networks, DBNs, Bayesian Filters, Markov Chains, then discrete / semi-continuous / continuous HMMs, Kalman Filters, Particle Filters, Markov Localization, MCML, MDPs and POMDPs. S: state, O: observation, A: action.]
32
Single-hypothesis Belief –
Continuous Line-Map
33
Single-hypothesis Belief – Grid
and Topological Map
34
Grid-based Representation – Multi-Hypothesis
• Cell size around 20 cm².
Courtesy of W. Burgard
35
Probabilistic, Map-based
Localization
36
The Problem
• Problem:
  – With odometry alone, uncertainty grows without bound!
• Why do we not get lost?
  – We "look around"
  – Incorporate other sensory information!
• Key problem:
  – How to fuse the observations?
37
Belief Update
• Action update
  – action model ACT(o_t, s_{t-1})
    o_t: encoder measurement, s_{t-1}: prior belief state
  – increases uncertainty
• Perception update
  – perception model SEE(i_t, s'_t)
    i_t: exteroceptive sensor inputs, s'_t: updated (predicted) belief state
  – decreases uncertainty
38
Why Does This Work??
39
The Five Steps for Map-Based Localization
1. Prediction based on the previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on the prediction and the map
4. Matching of observation and map
5. Estimation → position update (posterior position)
[Diagram: the encoder drives the position prediction (odometry); on-board sensors deliver raw sensor data or extracted features (observation); the map database yields predicted feature observations; matching pairs predictions with observations; estimation (fusion) of the matched predictions and observations produces the position estimate.]
40
Methods
• Aim is to calculate
  – P(X_t | Y_1, A_1, …, A_{t-1}, Y_t)
  – the "posterior of the state given the past observations and actions"
  – == belief state
  – "filtering"
• Methods – how is the belief state represented?
  – Discretization → Markov localization
  – Continuous → Kalman filter and its variants
  – Particle cloud → particle filters
41
Markov vs. Kalman Filter
• Markov localization
  – Robust
    – localization starting from any unknown position
    – recovers from ambiguous situations
  – Costly
• Kalman filter localization
  – Moderately robust
  – Inexpensive
42
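For intuition about why the Kalman filter is inexpensive: it summarizes the whole belief with just a mean and a variance instead of sweeping a grid. A minimal one-dimensional sketch under Gaussian assumptions (an illustration, not the full robot localization filter from the course):

```python
def kalman_1d_step(mu, var, u, z, motion_var, meas_var):
    """One predict/update cycle of a 1-D Kalman filter.

    u: commanded motion (action), z: position measurement,
    motion_var / meas_var: variances of the motion and measurement noise."""
    # Prediction (action update): uncertainty grows.
    mu_pred = mu + u
    var_pred = var + motion_var
    # Correction (perception update): uncertainty shrinks.
    K = var_pred / (var_pred + meas_var)   # Kalman gain
    mu_new = mu_pred + K * (z - mu_pred)
    var_new = (1.0 - K) * var_pred
    return mu_new, var_new

# Example: move 1 m, then measure the position as 1.1 m.
print(kalman_1d_step(mu=0.0, var=1.0, u=1.0, z=1.1, motion_var=0.04, meas_var=0.25))
```

The price of this compactness is the single (unimodal) hypothesis: if the estimate diverges, the filter cannot recover, which is the "moderately robust" entry above.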
Markov Localization
• Explicit, discrete representation for the probability of all positions in the state space.
• Representation
  – grid
  – topological graph
• Update:
  – sweeps through all possible states
43
Probability Theory Basics
• P(A): probability that A is true.
  – e.g. p(r_t = l): probability that the robot r is at position l at time t
• We wish to compute the probability of each individual robot position given the actions and sensor measurements.
• P(A|B): conditional probability of A given that we know B.
  – e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t
• Product rule: P(A ∧ B) = P(A|B) · P(B) = P(B|A) · P(A)
• Bayes rule: P(A|B) = P(B|A) · P(A) / P(B)
44
Fusing in Sensory Input
• Bayes rule maps a belief state and a sensor input to a refined belief state (SEE):
  p(l | i) = p(i | l) · p(l) / p(i)
  – p(l): belief state before the perceptual update
  – p(i | l): probability of getting measurement i when at position l
    – consult the robot's map and identify the probability of a certain sensor reading for each possible position in the map
  – p(i): normalization factor so that the sum over all l in L equals 1
45
Action Update
• The theorem of total probability maps a belief state and an action o to a new belief state (ACT):
  p(l | o) = Σ_{l'} p(l | l', o) · p(l')
  – summing over all possible ways in which the robot may have reached l
• Markov assumption: the update only depends on the previous state and the most recent actions and perception.
46
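Putting the SEE and ACT maps together gives the Markov localization loop over a discrete belief. The following is a minimal one-dimensional sketch on a 10-cell corridor; the landmark positions, motion model and likelihood values are made-up, illustrative numbers:

```python
def act_update(belief, motion_model):
    """ACT: p(l) = sum over l' of p(l | l', o) * p(l'). Spreads probability mass."""
    new_belief = [0.0] * len(belief)
    for l_prev, p_prev in enumerate(belief):
        for l, p_move in motion_model(l_prev):
            new_belief[l] += p_move * p_prev
    return new_belief

def see_update(belief, likelihood):
    """SEE: p(l | i) proportional to p(i | l) * p(l). Reweights and normalizes."""
    unnormalized = [p_l * p_i for p_l, p_i in zip(belief, likelihood)]
    total = sum(unnormalized) or 1e-12
    return [p / total for p in unnormalized]

def motion_model(l_prev, n=10):
    """Move one cell to the right, with a little slip (illustrative probabilities)."""
    return [((l_prev + 1) % n, 0.8), (l_prev % n, 0.1), ((l_prev + 2) % n, 0.1)]

belief = [0.1] * 10                               # no knowledge: uniform belief
likelihood = [0.9 if c in (2, 3, 7) else 0.1      # p(i | l) for "landmark seen": landmarks at 2, 3, 7
              for c in range(10)]

belief = see_update(belief, likelihood)           # perceive a landmark
belief = act_update(belief, motion_model)         # move one cell to the right
belief = see_update(belief, likelihood)           # perceive a landmark again
print(belief)                                     # cell 3 now carries most of the probability mass
```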
Case Study: Topological Map
• The Dervish Robot
• Topological Localization with Sonar
• 1994 AAAI
47
Dervish
• Topological map of office-type environment
48
Dervish – Belief Update
• Update of the belief state for position n given the percept-pair i
  – p(n|i): new likelihood of being in position n
  – p(n): current belief state
  – p(i|n): probability of seeing i in n (see table)
49
Dervish Move Update
50
Grid Map
• Fine fixed-decomposition grid over (x, y, θ); cell size: 15 cm × 15 cm × 1°
• Action and perception update
  – Action update: convolve the grid belief with the motion model (uncertainty grows)
  – Perception update: multiply each cell by p(i | l) and normalize (uncertainty shrinks)
Courtesy of W. Burgard
51
Grid Map – Challenge
• Challenge: calculation of p(i | l)
  – Combinatorial explosion!
  – Use a model (sensor, map)
  – Assumptions:
    – Measurement error: zero-mean Gaussian
    – Non-zero chance for any measurement
    – Failure mode
Courtesy of W. Burgard
52
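One common way to realize these assumptions is a mixture sensor model: a zero-mean Gaussian around the range expected from the map, a uniform component giving every reading non-zero probability, and a failure component at maximum range. The sketch below follows that standard recipe and is not necessarily the exact model behind the slide; the weights and sigma are illustrative:

```python
import math

def measurement_likelihood(z, z_expected, z_max,
                           sigma=0.05, w_hit=0.8, w_rand=0.15, w_fail=0.05):
    """p(i | l) for one range reading z, given the range z_expected predicted from the map at l."""
    # Zero-mean Gaussian measurement error around the expected range.
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # Non-zero chance for any measurement in [0, z_max].
    p_rand = 1.0 / z_max
    # Failure mode: the sensor reports its maximum range.
    p_fail = 1.0 if z >= z_max else 0.0
    return w_hit * p_hit + w_rand * p_rand + w_fail * p_fail
```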
Grid Map
1. Start
   – No knowledge at the start, thus we have a uniform probability distribution.
2. Robot perceives the first pillar
   – Seeing only one pillar, the probabilities of being at pillar 1, 2 or 3 are equal.
3. Robot moves
   – The action model lets us estimate the new probability distribution based on the previous one and the motion.
4. Robot perceives the second pillar
   – Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
53
Grid Map
• Example 1: Office Building
[Figure: belief over the office building map at positions 3, 4 and 5.]
Courtesy of W. Burgard
54
Grid Map Challenges
• Fine fixed-decomposition grids result in a huge table
  – Huge processing power needed
  – Large memory
• Reducing complexity: randomized sampling / particle filter (see the sketch below)
  – Approximate the belief state by representing only a 'representative' subset of all states (possible locations)
  – E.g. update only 10% of all possible locations
  – The sampling process is typically weighted, e.g. put more samples around the local peaks of the probability density function
  – However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost
55
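A minimal sketch of the weighted sampling idea described above (a generic particle filter step; the function names and interfaces are illustrative, not an API from the course):

```python
import random

def particle_filter_step(particles, move, measurement, likelihood):
    """One action + perception update with a particle set approximating the belief.

    particles: list of states (e.g. (x, y, theta) tuples).
    move: function state -> new state, sampling from the noisy motion model.
    likelihood: function (measurement, state) -> p(measurement | state).
    """
    # Action update: propagate every particle through the (noisy) motion model.
    moved = [move(p) for p in particles]
    # Perception update: weight the particles by the measurement likelihood.
    weights = [likelihood(measurement, p) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Weighted resampling: samples concentrate around the peaks of the belief,
    # while low-weight particles still have a non-zero chance of surviving.
    return random.choices(moved, weights=weights, k=len(moved))
```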
Summary
• To localize or not?
• Noise, noise, noise
• Odometry
• Map representations
• Belief representations
• Filtering – Markov localization
56