Introduction to Mobile Robotics

Probabilistic Robotics
Bayes Filter Implementations
Gaussian filters
Bayes Filter Reminder
• Prediction (Action):

$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$

• Correction (Measurement):

$$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$$
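These two equations translate directly into code. Below is a minimal sketch of one cycle of a discrete Bayes filter over n states; the transition and measurement tables are assumed to be already conditioned on the current $u_t$ and $z_t$:

```python
import numpy as np

def bayes_filter_step(belief, p_trans, p_meas):
    """One predict/correct cycle of a discrete Bayes filter over n states.

    belief  : (n,) belief bel(x_{t-1})
    p_trans : (n, n) table, p_trans[i, j] = p(x_t = i | u_t, x_{t-1} = j)
    p_meas  : (n,)  table, p_meas[i]     = p(z_t | x_t = i)
    """
    belief_bar = p_trans @ belief          # prediction: sum over x_{t-1}
    belief_new = p_meas * belief_bar       # correction: weight by likelihood
    return belief_new / belief_new.sum()   # eta normalizes the posterior
```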
Gaussians
Univariate:

$$p(x) \sim N(\mu, \sigma^2): \quad p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

Multivariate:

$$p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma): \quad p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})}$$
Properties of Gaussians
$$X \sim N(\mu, \sigma^2),\quad Y = aX + b \quad\Longrightarrow\quad Y \sim N(a\mu + b,\; a^2 \sigma^2)$$

$$\left.\begin{aligned} X_1 &\sim N(\mu_1, \sigma_1^2) \\ X_2 &\sim N(\mu_2, \sigma_2^2) \end{aligned}\right\} \;\Longrightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,\mu_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,\mu_2,\; \frac{1}{\sigma_1^{-2} + \sigma_2^{-2}}\right)$$
Multivariate Gaussians
$$X \sim N(\mu, \Sigma),\quad Y = AX + B \quad\Longrightarrow\quad Y \sim N(A\mu + B,\; A \Sigma A^T)$$

$$\left.\begin{aligned} X_1 &\sim N(\mu_1, \Sigma_1) \\ X_2 &\sim N(\mu_2, \Sigma_2) \end{aligned}\right\} \;\Longrightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\frac{\Sigma_2}{\Sigma_1 + \Sigma_2}\,\mu_1 + \frac{\Sigma_1}{\Sigma_1 + \Sigma_2}\,\mu_2,\; \frac{1}{\Sigma_1^{-1} + \Sigma_2^{-1}}\right)$$
• We stay in the “Gaussian world” as long as we
start with Gaussians and perform only linear
transformations.




Discrete Kalman Filter
Estimates the state x of a discrete-time
controlled process that is governed by the
linear stochastic difference equation
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$

with a measurement

$$z_t = C_t x_t + \delta_t$$
Components of a Kalman Filter
$A_t$: Matrix $(n \times n)$ that describes how the state evolves from $t-1$ to $t$ without controls or noise.

$B_t$: Matrix $(n \times l)$ that describes how the control $u_t$ changes the state from $t-1$ to $t$.

$C_t$: Matrix $(k \times n)$ that describes how to map the state $x_t$ to an observation $z_t$.

$\varepsilon_t$, $\delta_t$: Random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance $R_t$ and $Q_t$, respectively.
Kalman Filter Updates in 1D
[Figure: an initial belief, a measurement with associated uncertainty, and the belief after integrating the measurement via the KF.]
Kalman Filter Updates in 1D

$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t (z_t - \bar\mu_t) \\ \sigma_t^2 = (1 - K_t)\,\bar\sigma_t^2 \end{cases} \quad\text{with } K_t = \frac{\bar\sigma_t^2}{\bar\sigma_t^2 + \sigma_{obs,t}^2}$$

$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases} \quad\text{with } K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1} \quad\text{(Kalman gain)}$$
Kalman Filter Updates in 1D

$$\overline{bel}(x_t) = \begin{cases} \bar\mu_t = a_t \mu_{t-1} + b_t u_t \\ \bar\sigma_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases} \qquad \overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$$

[Figure: motion to the right shifts the belief and introduces uncertainty.]
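The scalar update rules fit in a few lines of Python; the model constants below (a, b and the noise variances) are illustrative assumptions:

```python
def kf_predict_1d(mu, var, u, a=1.0, b=1.0, var_act=0.1):
    """1D prediction: mu_bar = a*mu + b*u, var_bar = a^2*var + var_act."""
    return a * mu + b * u, a * a * var + var_act

def kf_correct_1d(mu_bar, var_bar, z, var_obs=0.2):
    """1D correction with gain K = var_bar / (var_bar + var_obs)."""
    K = var_bar / (var_bar + var_obs)
    return mu_bar + K * (z - mu_bar), (1.0 - K) * var_bar

mu, var = 0.0, 1.0
mu, var = kf_predict_1d(mu, var, u=1.0)  # motion: mean shifts, variance grows
mu, var = kf_correct_1d(mu, var, z=1.2)  # measurement: variance shrinks
```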
Kalman Filter Updates
[Figure: belief after motion to the right, a new measurement, and the resulting belief.]
Linear Gaussian Systems: Initialization
• Initial belief is normally distributed:
$$bel(x_0) = N\!\left(x_0;\, \mu_0, \Sigma_0\right)$$
Linear Gaussian Systems: Dynamics
• Dynamics are a linear function of the state and control plus additive noise:

$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$

State transition probability:

$$p(x_t \mid u_t, x_{t-1}) = N\!\left(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t\right)$$

$$\overline{bel}(x_t) = \int \underbrace{p(x_t \mid u_t, x_{t-1})}_{\sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)} \; \underbrace{bel(x_{t-1})}_{\sim N(x_{t-1};\, \mu_{t-1},\, \Sigma_{t-1})} \, dx_{t-1}$$
Linear Gaussian Systems: Dynamics
$$\overline{bel}(x_t) = \int \underbrace{p(x_t \mid u_t, x_{t-1})}_{\sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)} \; \underbrace{bel(x_{t-1})}_{\sim N(x_{t-1};\, \mu_{t-1},\, \Sigma_{t-1})} \, dx_{t-1}$$

$$\overline{bel}(x_t) = \eta \int \exp\!\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$$

$$\overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$$
Linear Gaussian Systems: Observations
• Observations are a linear function of the state plus additive noise:

$$z_t = C_t x_t + \delta_t$$

Measurement probability:

$$p(z_t \mid x_t) = N\!\left(z_t;\, C_t x_t,\, Q_t\right)$$

$$bel(x_t) = \eta\; \underbrace{p(z_t \mid x_t)}_{\sim N(z_t;\, C_t x_t,\, Q_t)} \; \underbrace{\overline{bel}(x_t)}_{\sim N(x_t;\, \bar\mu_t,\, \bar\Sigma_t)}$$
Linear Gaussian Systems: Observations
$$bel(x_t) = \eta\; \underbrace{p(z_t \mid x_t)}_{\sim N(z_t;\, C_t x_t,\, Q_t)} \; \underbrace{\overline{bel}(x_t)}_{\sim N(x_t;\, \bar\mu_t,\, \bar\Sigma_t)}$$

$$bel(x_t) = \eta\, \exp\!\left\{-\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_t - \bar\mu_t)^T \bar\Sigma_t^{-1} (x_t - \bar\mu_t)\right\}$$

$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases} \quad\text{with } K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1}$$
Kalman Filter Algorithm
1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Correction:
6. $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1}$
7. $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$
8. $\Sigma_t = (I - K_t C_t)\, \bar\Sigma_t$
9. Return $\mu_t$, $\Sigma_t$
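Transcribed into numpy, the algorithm is a few matrix operations; this is a minimal sketch that assumes the caller supplies the model matrices A, B, C, R, Q:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One full KF cycle; symbols match steps 3-8 of the algorithm above."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```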
The Prediction-Correction-Cycle
Prediction:

$$\overline{bel}(x_t) = \begin{cases} \bar\mu_t = a_t \mu_{t-1} + b_t u_t \\ \bar\sigma_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2 \end{cases} \qquad \overline{bel}(x_t) = \begin{cases} \bar\mu_t = A_t \mu_{t-1} + B_t u_t \\ \bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t \end{cases}$$
The Prediction-Correction-Cycle

Correction:

$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t (z_t - \bar\mu_t) \\ \sigma_t^2 = (1 - K_t)\,\bar\sigma_t^2 \end{cases}, \quad K_t = \frac{\bar\sigma_t^2}{\bar\sigma_t^2 + \sigma_{obs,t}^2}$$

$$bel(x_t) = \begin{cases} \mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t) \\ \Sigma_t = (I - K_t C_t)\,\bar\Sigma_t \end{cases}, \quad K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1}$$
The Prediction-Correction-Cycle
[Combined cycle diagram: the prediction equations and the correction equations from the two preceding slides, shown together as one alternating cycle.]
Kalman Filter Summary
• Highly efficient: Polynomial in
measurement dimensionality k and
state dimensionality n:
$O(k^{2.376} + n^2)$
• Optimal for linear Gaussian systems!
• Most robotics systems are nonlinear!
Nonlinear Dynamic Systems
• Most realistic robotic problems involve
nonlinear functions
$$x_t = g(u_t, x_{t-1})$$
$$z_t = h(x_t)$$
Linearity Assumption Revisited
Non-linear Function
EKF Linearization (1)
EKF Linearization (2)
EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion

• Prediction:

$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\, (x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\, (x_{t-1} - \mu_{t-1})$$

• Correction:

$$h(x_t) \approx h(\bar\mu_t) + \frac{\partial h(\bar\mu_t)}{\partial x_t}\, (x_t - \bar\mu_t) = h(\bar\mu_t) + H_t\, (x_t - \bar\mu_t)$$
EKF Algorithm
1. Extended_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar\mu_t = g(u_t, \mu_{t-1})$   (KF: $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$)
4. $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + R_t$   (KF: $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + R_t$)
5. Correction:
6. $K_t = \bar\Sigma_t H_t^T (H_t \bar\Sigma_t H_t^T + Q_t)^{-1}$   (KF: $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + Q_t)^{-1}$)
7. $\mu_t = \bar\mu_t + K_t (z_t - h(\bar\mu_t))$   (KF: $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$)
8. $\Sigma_t = (I - K_t H_t)\, \bar\Sigma_t$   (KF: $\Sigma_t = (I - K_t C_t)\, \bar\Sigma_t$)
9. Return $\mu_t$, $\Sigma_t$

with $H_t = \dfrac{\partial h(\bar\mu_t)}{\partial x_t}$ and $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$
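A numpy sketch of the EKF cycle, assuming the caller supplies the nonlinear models g and h together with functions that evaluate their Jacobians $G_t$ and $H_t$ at the current mean:

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, u, z, g, h, G_fn, H_fn, R, Q):
    """One EKF cycle; symbols match steps 3-8 of the algorithm above."""
    # Prediction: propagate the mean through g, linearize for the covariance
    mu_bar = g(u, mu)
    G = G_fn(u, mu)
    Sigma_bar = G @ Sigma @ G.T + R
    # Correction: linearize h around the predicted mean
    H = H_fn(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```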
EKF Summary
• Highly efficient: Polynomial in
measurement dimensionality k and
state dimensionality n:
$O(k^{2.376} + n^2)$
• Not optimal!
• Can diverge if nonlinearities are large!
• Works surprisingly well even when all
assumptions are violated!
Linearization via Unscented Transform

[Figure: EKF vs. UKF linearization of a nonlinear transformation.]
UKF Sigma-Point Estimate (2)
[Figure: EKF vs. UKF sigma-point estimate.]
UKF Sigma-Point Estimate (3)
[Figure: EKF vs. UKF sigma-point estimate.]
Unscented Transform
Sigma points:

$$\mathcal{X}^0 = \mu$$
$$\mathcal{X}^i = \mu \pm \left(\sqrt{(n+\lambda)\,\Sigma}\right)_i \quad \text{for } i = 1, \ldots, 2n$$

Weights:

$$w_m^0 = \frac{\lambda}{n+\lambda}, \qquad w_c^0 = \frac{\lambda}{n+\lambda} + (1 - \alpha^2 + \beta)$$
$$w_m^i = w_c^i = \frac{1}{2(n+\lambda)} \quad \text{for } i = 1, \ldots, 2n$$

For an $n$-dimensional Gaussian, $\lambda$ is a scaling parameter that determines how far the sigma points are spread from the mean. If the distribution is an exact Gaussian, $\beta = 2$ is the optimal choice.

Pass the sigma points through the nonlinear function:

$$\mathcal{Y}^i = g(\mathcal{X}^i)$$

Recover the mean and covariance:

$$\mu' = \sum_{i=0}^{2n} w_m^i\, \mathcal{Y}^i, \qquad \Sigma' = \sum_{i=0}^{2n} w_c^i\, (\mathcal{Y}^i - \mu')(\mathcal{Y}^i - \mu')^T$$
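A sketch of the unscented transform in numpy; the Cholesky factor stands in for the matrix square root, and the spread parameters (alpha, beta, kappa) below are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def unscented_transform(mu, Sigma, g, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate N(mu, Sigma) through a nonlinear function g."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus the columns of a matrix square root
    S = np.linalg.cholesky((n + lam) * Sigma)
    X = np.vstack([mu, mu + S.T, mu - S.T])          # (2n+1, n)
    # Weights
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    # Pass the points through g, then recover mean and covariance
    Y = np.array([g(x) for x in X])
    mu_p = wm @ Y
    diff = Y - mu_p
    Sigma_p = (wc[:, None] * diff).T @ diff
    return mu_p, Sigma_p

# Example: push N(mu, Sigma) through a polar-to-Cartesian conversion
# mu_p, Sigma_p = unscented_transform(
#     np.array([1.0, 0.5]), np.diag([0.01, 0.01]),
#     lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])]))
```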
UKF Summary
• Highly efficient: Same asymptotic complexity as
the EKF, though a constant factor slower in
typical practical applications
• Better linearization than EKF:
Accurate in first two terms of Taylor
expansion (EKF only first term)
• Derivative-free: No Jacobians needed
• Still not optimal!
Thank You
Localization
“Using sensory information to locate the robot
in its environment is the most fundamental
problem to providing a mobile robot with
autonomous capabilities.”
[Cox ’91]
• Given
• Map of the environment.
• Sequence of sensor measurements.
• Wanted
• Estimate of the robot’s position.
• Problem classes
• Position tracking (initial robot pose is known)
• Global localization (initial robot pose is unknown)
• Kidnapped robot problem (recovery)
Markov Localization
Markov Localization: The straightforward application of Bayes filters to the
localization problem
1. EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$):
Prediction:

$$G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{pmatrix} \frac{\partial x'}{\partial \mu_{t-1,x}} & \frac{\partial x'}{\partial \mu_{t-1,y}} & \frac{\partial x'}{\partial \mu_{t-1,\theta}} \\[2pt] \frac{\partial y'}{\partial \mu_{t-1,x}} & \frac{\partial y'}{\partial \mu_{t-1,y}} & \frac{\partial y'}{\partial \mu_{t-1,\theta}} \\[2pt] \frac{\partial \theta'}{\partial \mu_{t-1,x}} & \frac{\partial \theta'}{\partial \mu_{t-1,y}} & \frac{\partial \theta'}{\partial \mu_{t-1,\theta}} \end{pmatrix} \quad\text{Jacobian of } g \text{ w.r.t. location}$$

$$V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{pmatrix} \frac{\partial x'}{\partial v_t} & \frac{\partial x'}{\partial \omega_t} \\[2pt] \frac{\partial y'}{\partial v_t} & \frac{\partial y'}{\partial \omega_t} \\[2pt] \frac{\partial \theta'}{\partial v_t} & \frac{\partial \theta'}{\partial \omega_t} \end{pmatrix} \quad\text{Jacobian of } g \text{ w.r.t. control}$$

$$M_t = \begin{pmatrix} (\alpha_1 |v_t| + \alpha_2 |\omega_t|)^2 & 0 \\ 0 & (\alpha_3 |v_t| + \alpha_4 |\omega_t|)^2 \end{pmatrix} \quad\text{Motion noise covariance from the control}$$

$$\bar\mu_t = g(u_t, \mu_{t-1}) \quad\text{Predicted mean}$$

$$\bar\Sigma_t = G_t\, \Sigma_{t-1}\, G_t^T + V_t\, M_t\, V_t^T \quad\text{Predicted covariance}$$
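The slides leave the motion model $g$ abstract. Assuming the standard velocity motion model, here is a compact sketch of this prediction step that forms the Jacobians $G_t$ and $V_t$ numerically instead of by hand; the $\alpha$ values are illustrative:

```python
import numpy as np

def g(u, x, dt=1.0):
    """Velocity motion model (an assumption; the slide leaves g abstract)."""
    v, w = u
    x_, y_, th = x
    if abs(w) > 1e-9:
        x_ += -v / w * np.sin(th) + v / w * np.sin(th + w * dt)
        y_ += v / w * np.cos(th) - v / w * np.cos(th + w * dt)
    else:  # straight-line limit for w -> 0
        x_ += v * dt * np.cos(th)
        y_ += v * dt * np.sin(th)
    return np.array([x_, y_, th + w * dt])

def jacobian(f, x0, eps=1e-6):
    """Numerical Jacobian of f at x0 (stand-in for the analytic G_t, V_t)."""
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    J = np.zeros((len(f0), len(x0)))
    for j in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f0) / eps
    return J

def ekf_loc_predict(mu, Sigma, u, a=(0.1, 0.01, 0.01, 0.1)):
    """Prediction step: mu_bar = g(u, mu), Sigma_bar = G S G^T + V M V^T."""
    G = jacobian(lambda x: g(u, x), mu)           # d g / d state
    V = jacobian(lambda uu: g(uu, mu), u)         # d g / d control
    v, w = u
    M = np.diag([(a[0] * abs(v) + a[1] * abs(w)) ** 2,
                 (a[2] * abs(v) + a[3] * abs(w)) ** 2])
    return g(u, mu), G @ Sigma @ G.T + V @ M @ V.T
```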
1. EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$):
Correction:

$$\hat z_t = \begin{pmatrix} \sqrt{(m_x - \bar\mu_{t,x})^2 + (m_y - \bar\mu_{t,y})^2} \\ \operatorname{atan2}\!\left(m_y - \bar\mu_{t,y},\, m_x - \bar\mu_{t,x}\right) - \bar\mu_{t,\theta} \end{pmatrix} \quad\text{Predicted measurement mean}$$

$$H_t = \frac{\partial h(\bar\mu_t, m)}{\partial x_t} = \begin{pmatrix} \frac{\partial r_t}{\partial \bar\mu_{t,x}} & \frac{\partial r_t}{\partial \bar\mu_{t,y}} & \frac{\partial r_t}{\partial \bar\mu_{t,\theta}} \\[2pt] \frac{\partial \phi_t}{\partial \bar\mu_{t,x}} & \frac{\partial \phi_t}{\partial \bar\mu_{t,y}} & \frac{\partial \phi_t}{\partial \bar\mu_{t,\theta}} \end{pmatrix} \quad\text{Jacobian of } h \text{ w.r.t. location}$$

$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$$

$$S_t = H_t\, \bar\Sigma_t\, H_t^T + Q_t \quad\text{Pred. measurement covariance}$$

$$K_t = \bar\Sigma_t\, H_t^T\, S_t^{-1} \quad\text{Kalman gain}$$

$$\mu_t = \bar\mu_t + K_t (z_t - \hat z_t) \quad\text{Updated mean}$$

$$\Sigma_t = (I - K_t H_t)\, \bar\Sigma_t \quad\text{Updated covariance}$$
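Written out for a single known landmark, the range-bearing model $h$ and its analytic Jacobian $H_t$ look as follows (a sketch matching the formulas above):

```python
import numpy as np

def h(mu_bar, m):
    """Range-bearing measurement to a known landmark m = (m_x, m_y)."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - mu_bar[2]
    return np.array([r, phi])

def H_jacobian(mu_bar, m):
    """Analytic Jacobian of h w.r.t. the pose (x, y, theta)."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])
```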
EKF Prediction Step
EKF Observation Prediction Step
EKF Correction Step
Estimation Sequence (1)
Estimation Sequence (2)
Comparison to Ground Truth
UKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$):
Prediction:

$$M_t = \begin{pmatrix} (\alpha_1 |v_t| + \alpha_2 |\omega_t|)^2 & 0 \\ 0 & (\alpha_3 |v_t| + \alpha_4 |\omega_t|)^2 \end{pmatrix} \quad\text{Motion noise}$$

$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix} \quad\text{Measurement noise}$$

$$\mu_{t-1}^a = \begin{pmatrix} \mu_{t-1}^T & (0\ 0) & (0\ 0) \end{pmatrix}^T \quad\text{Augmented state mean}$$

$$\Sigma_{t-1}^a = \begin{pmatrix} \Sigma_{t-1} & 0 & 0 \\ 0 & M_t & 0 \\ 0 & 0 & Q_t \end{pmatrix} \quad\text{Augmented covariance}$$

$$\mathcal{X}_{t-1}^a = \begin{pmatrix} \mu_{t-1}^a & \;\mu_{t-1}^a + \gamma\sqrt{\Sigma_{t-1}^a} & \;\mu_{t-1}^a - \gamma\sqrt{\Sigma_{t-1}^a} \end{pmatrix} \quad\text{Sigma points}$$

$$\bar{\mathcal{X}}_t^x = g\!\left(u_t + \mathcal{X}_t^u,\; \mathcal{X}_{t-1}^x\right) \quad\text{Prediction of sigma points}$$

$$\bar\mu_t = \sum_{i=0}^{2L} w_m^i\, \bar{\mathcal{X}}_{i,t}^x \quad\text{Predicted mean}$$

$$\bar\Sigma_t = \sum_{i=0}^{2L} w_c^i\, \left(\bar{\mathcal{X}}_{i,t}^x - \bar\mu_t\right)\left(\bar{\mathcal{X}}_{i,t}^x - \bar\mu_t\right)^T \quad\text{Predicted covariance}$$
UKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$):
Correction:

$$\bar{\mathcal{Z}}_t = h\!\left(\bar{\mathcal{X}}_t^x\right) + \mathcal{X}_t^z \quad\text{Measurement sigma points}$$

$$\hat z_t = \sum_{i=0}^{2L} w_m^i\, \bar{\mathcal{Z}}_{i,t} \quad\text{Predicted measurement mean}$$

$$S_t = \sum_{i=0}^{2L} w_c^i\, \left(\bar{\mathcal{Z}}_{i,t} - \hat z_t\right)\left(\bar{\mathcal{Z}}_{i,t} - \hat z_t\right)^T \quad\text{Pred. measurement covariance}$$

$$\bar\Sigma_t^{x,z} = \sum_{i=0}^{2L} w_c^i\, \left(\bar{\mathcal{X}}_{i,t}^x - \bar\mu_t\right)\left(\bar{\mathcal{Z}}_{i,t} - \hat z_t\right)^T \quad\text{Cross-covariance}$$

$$K_t = \bar\Sigma_t^{x,z}\, S_t^{-1} \quad\text{Kalman gain}$$

$$\mu_t = \bar\mu_t + K_t (z_t - \hat z_t) \quad\text{Updated mean}$$

$$\Sigma_t = \bar\Sigma_t - K_t\, S_t\, K_t^T \quad\text{Updated covariance}$$
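A sketch of this correction step in numpy, starting from already-propagated state and measurement sigma points; handling of the motion and measurement noise via the augmented state is assumed to have happened upstream:

```python
import numpy as np

def ukf_correct(Xbar, Zbar, wm, wc, mu_bar, Sigma_bar, z):
    """UKF correction from propagated sigma points.

    Xbar : (2L+1, n) state sigma points, Zbar : (2L+1, k) measurement
    sigma points; wm, wc are the unscented-transform weights."""
    z_hat = wm @ Zbar                        # predicted measurement mean
    dZ = Zbar - z_hat
    dX = Xbar - mu_bar
    S = (wc[:, None] * dZ).T @ dZ            # pred. measurement covariance
    Sigma_xz = (wc[:, None] * dX).T @ dZ     # cross-covariance
    K = Sigma_xz @ np.linalg.inv(S)          # Kalman gain
    return mu_bar + K @ (z - z_hat), Sigma_bar - K @ S @ K.T
```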
1. EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$): Correction (identical to the EKF correction step shown earlier, repeated for comparison with the UKF)
UKF Prediction Step
UKF Observation Prediction Step
UKF Correction Step
EKF Correction Step
Estimation Sequence
[Figure: estimation sequences of EKF, PF, and UKF.]
Estimation Sequence
[Figure: estimation sequences of EKF and UKF.]
Prediction Quality
[Figure: prediction quality of EKF vs. UKF.]
Kalman Filter-based System
• [Arras et al. 98]:
• Laser range-finder and vision
• High precision (<1cm accuracy)
[Courtesy of Kai Arras]
Multihypothesis Tracking
Localization With MHT
• Belief is represented by multiple hypotheses
• Each hypothesis is tracked by a Kalman filter
• Additional problems:
• Data association: Which observation
corresponds to which hypothesis?
• Hypothesis management: When to add / delete
hypotheses?
• Huge body of literature on target tracking, motion
correspondence etc.
MHT: Implemented System (1)
• Hypotheses are extracted from LRF scans
• Each hypothesis has a probability of being the correct one:

$$H_i = \{\hat{x}_i, \Sigma_i, P(H_i)\}$$

• The hypothesis probability is computed using Bayes' rule:

$$P(H_i \mid s) = \frac{P(s \mid H_i)\, P(H_i)}{P(s)}$$

• Hypotheses with low probability are deleted.
• New candidates are extracted from LRF scans:

$$C_j = \{z_j, R_j\}$$

[Jensfelt et al. '00]
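A minimal sketch of this Bayes-rule reweighting with pruning; the deletion threshold below is an assumed illustrative value, not one given on the slides:

```python
import numpy as np

def update_hypothesis_probs(priors, likelihoods, delete_below=1e-3):
    """Bayes-rule reweighting of MHT hypotheses; P(s) is the normalizer.

    priors[i] = P(H_i), likelihoods[i] = P(s | H_i). Hypotheses whose
    posterior falls below `delete_below` (an assumed threshold) are pruned."""
    post = np.asarray(priors) * np.asarray(likelihoods)
    post /= post.sum()                      # divide by P(s)
    keep = post >= delete_below
    return post[keep] / post[keep].sum(), keep
```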
MHT: Implemented System (2)
Courtesy of P. Jensfelt and S. Kristensen
MHT: Implemented System (3)
[Figure: example run showing the map and trajectory, the number of hypotheses over time, and P(H_best).]
Courtesy of P. Jensfelt and S. Kristensen