Lecture 10:
Observers and Kalman Filters
CS 344R: Robotics
Benjamin Kuipers
Stochastic Models of an
Uncertain World
$$\dot{x} = F(x, u) \qquad\longrightarrow\qquad \dot{x} = F(x, u, \eta_1)$$
$$y = G(x) \qquad\longrightarrow\qquad y = G(x, \eta_2)$$
• Actions are uncertain.
• Observations are uncertain.
• $\eta_i \sim N(0, \sigma_i)$ are random variables.
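As a minimal sketch of this kind of model (the function names, step size, and noise levels below are illustrative assumptions, not part of the lecture), one Euler step of $\dot{x} = F(x, u, \eta_1)$ and one noisy observation $y = G(x, \eta_2)$ for the simple case $F = u + \eta_1$, $G = x + \eta_2$:

```python
import random

def step_state(x, u, dt, sigma1):
    # Euler step of x' = F(x, u, eta1) for the simple case F = u + eta1,
    # with process noise eta1 ~ N(0, sigma1).
    eta1 = random.gauss(0.0, sigma1)
    return x + (u + eta1) * dt

def observe(x, sigma2):
    # Noisy observation y = G(x, eta2) for the simple case G = x + eta2,
    # with sensor noise eta2 ~ N(0, sigma2).
    eta2 = random.gauss(0.0, sigma2)
    return x + eta2

x = 0.0
for _ in range(10):
    x = step_state(x, u=1.0, dt=0.1, sigma1=0.05)
    y = observe(x, sigma2=0.2)
```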
Observers
$$\dot{x} = F(x, u, \eta_1)$$
$$y = G(x, \eta_2)$$
• The state x is unobservable.
• The sense vector y provides noisy
information about x.
• An observer $\hat{x} = \mathrm{Obs}(y)$ is a process that uses sensory history to estimate x.
• Then a control law can be written
$u = H_i(\hat{x})$
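In code, the observer sits between sensing and acting. A hypothetical sketch of that loop (none of these names come from the course; the sensor, observer, and control-law functions are placeholders supplied by the caller):

```python
def control_loop(read_sensor, apply_control, observer_update, control_law, x_hat0, steps=100):
    # Generic observe-then-act loop:
    #   y      -- noisy sense vector
    #   x_hat  -- observer's estimate of the unobservable state x
    #   u      -- control computed from the estimate, u = H(x_hat)
    x_hat = x_hat0
    for _ in range(steps):
        y = read_sensor()                  # y = G(x, eta2)
        x_hat = observer_update(x_hat, y)  # x_hat = Obs(y), using sensory history
        u = control_law(x_hat)             # u = H(x_hat)
        apply_control(u)
```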
Kalman Filter: Optimal Observer
1
u
x
y
2
xˆ
Estimates and Uncertainty
• Conditional probability density function
Gaussian (Normal) Distribution
• Completely described by $N(\mu, \sigma)$
– Mean $\mu$
– Standard deviation $\sigma$, variance $\sigma^2$
$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\; e^{-(x-\mu)^2 / 2\sigma^2}$$
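The density translates directly into a small helper (a sketch; the function name is ours):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # N(mu, sigma) density: (1 / (sigma * sqrt(2*pi))) * exp(-(x - mu)^2 / (2 * sigma^2))
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

print(gaussian_pdf(0.0, 0.0, 1.0))  # ~0.3989, the peak of the standard normal
```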
The Central Limit Theorem
• The sum of many random variables
– with the same mean, but
– with arbitrary conditional density functions,
converges to a Gaussian density function.
• If a model omits many small unmodeled
effects, then the resulting error should
converge to a Gaussian density function.
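A quick numerical check of this claim (a sketch with arbitrary choices of summand distribution and sample counts): sums of many uniform random variables, each far from Gaussian on its own, have an approximately Gaussian distribution.

```python
import random

# Each sample is the sum of 50 uniform random variables (decidedly non-Gaussian
# individually); by the central limit theorem the sums look approximately Gaussian.
sums = [sum(random.uniform(-1.0, 1.0) for _ in range(50)) for _ in range(10000)]

mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
print(mean, var)  # mean near 0; variance near 50 * (1/3), the sum of the individual variances
```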
Estimating a Value
• Suppose there is a constant value x.
– Distance to wall; angle to wall; etc.
• At time $t_1$, observe value $z_1$ with variance $\sigma_1^2$.
• The optimal estimate is $\hat{x}(t_1) = z_1$ with variance $\sigma_1^2$.
A Second Observation
• At time $t_2$, observe value $z_2$ with variance $\sigma_2^2$.
Merged Evidence
Update Mean and Variance
• Weighted average of estimates.
$$\hat{x}(t_2) = A z_1 + B z_2 \qquad A + B = 1$$
• The weights come from the variances.
– Smaller variance = more certainty
$$\hat{x}(t_2) = \left(\frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right) z_1 + \left(\frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\right) z_2$$
$$\frac{1}{\sigma^2(t_2)} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}$$
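A worked version of this variance-weighted merge, with made-up numbers (the function name and values are illustrative):

```python
def merge(z1, var1, z2, var2):
    # Variance-weighted average of two independent estimates of the same quantity.
    x_hat = (var2 / (var1 + var2)) * z1 + (var1 / (var1 + var2)) * z2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)   # 1/var = 1/var1 + 1/var2
    return x_hat, var

# Example: the more precise reading (variance 1) pulls the merged estimate toward itself.
print(merge(10.0, 4.0, 12.0, 1.0))  # -> (11.6, 0.8)
```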
From Weighted Average
to Predictor-Corrector
• Weighted average:
$$\hat{x}(t_2) = A z_1 + B z_2 = (1 - K) z_1 + K z_2$$
• Predictor-corrector:
$$\hat{x}(t_2) = z_1 + K (z_2 - z_1) = \hat{x}(t_1) + K (z_2 - \hat{x}(t_1))$$
• This version can be applied “recursively”.
Predictor-Corrector
• Update best estimate given new data
$$\hat{x}(t_2) = \hat{x}(t_1) + K(t_2)\,(z_2 - \hat{x}(t_1))$$
• Update variance:
 12
K(t 2 )  2
2
1   2
 (t 2 )   (t1 )  K(t2 ) (t1 )
2
2
2
 (1 K(t2 ))  (t1 )
2
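The same merge written in this recursive predictor-corrector form (a sketch; names and numbers are illustrative, and the result matches the weighted-average example above):

```python
def update(x_hat, var, z, var_z):
    # Fold one new measurement z (variance var_z) into the current estimate.
    K = var / (var + var_z)           # Kalman gain
    x_hat = x_hat + K * (z - x_hat)   # corrected estimate
    var = (1.0 - K) * var             # reduced uncertainty
    return x_hat, var

# Applied recursively: start from the first observation, then fold in the second.
x_hat, var = 10.0, 4.0            # x_hat(t1) = z1, variance sigma_1^2
x_hat, var = update(x_hat, var, z=12.0, var_z=1.0)
print(x_hat, var)                 # same result as the weighted average: 11.6, 0.8
```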
Static to Dynamic
• Now suppose x changes according to
$$\dot{x} = F(x, u, \eta) = u + \eta \qquad (\eta \sim N(0, \sigma_\eta))$$
Dynamic Prediction
• At $t_2$ we know $\hat{x}(t_2)$ and $\sigma^2(t_2)$.
• At $t_3^-$, after the change but before an observation:
$$\hat{x}(t_3^-) = \hat{x}(t_2) + u\,[t_3 - t_2]$$
$$\sigma^2(t_3^-) = \sigma^2(t_2) + \sigma_\eta^2\,[t_3 - t_2]$$
• Next, we correct this prediction with the
observation at time t3.
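A sketch of the prediction step for this simple dynamics $\dot{x} = u + \eta$ (the names are illustrative):

```python
def predict(x_hat, var, u, dt, var_eta):
    # Propagate the estimate through the dynamics x' = u + eta over an interval dt.
    x_hat = x_hat + u * dt        # x_hat(t3-) = x_hat(t2) + u * [t3 - t2]
    var = var + var_eta * dt      # sigma^2(t3-) = sigma^2(t2) + sigma_eta^2 * [t3 - t2]
    return x_hat, var
```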
Dynamic Correction
• At time $t_3$ we observe $z_3$ with variance $\sigma_3^2$.
• Combine prediction with observation.
$$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$$
$$\sigma^2(t_3) = (1 - K(t_3))\,\sigma^2(t_3^-)$$
$$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$$
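One full predict-then-correct cycle with made-up numbers (a self-contained sketch of the two steps above; none of the values come from the lecture):

```python
# One predict-then-correct cycle (sigma_eta^2 = 0.5, sigma_3^2 = 1.0, u = 1.0, dt = 1.0).
x_hat, var = 11.6, 0.8                      # estimate and variance at t2

# Prediction to t3 (before the observation):
x_hat_minus = x_hat + 1.0 * 1.0             # + u * [t3 - t2]
var_minus = var + 0.5 * 1.0                 # + sigma_eta^2 * [t3 - t2]

# Correction with the observation z3 = 13.0:
K = var_minus / (var_minus + 1.0)           # Kalman gain K(t3)
x_hat = x_hat_minus + K * (13.0 - x_hat_minus)
var = (1.0 - K) * var_minus
print(x_hat, var)                           # estimate pulled partway toward z3
```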
Qualitative Properties
$$\hat{x}(t_3) = \hat{x}(t_3^-) + K(t_3)\,(z_3 - \hat{x}(t_3^-))$$
$$K(t_3) = \frac{\sigma^2(t_3^-)}{\sigma^2(t_3^-) + \sigma_3^2}$$
• Suppose measurement noise $\sigma_3^2$ is large.
– Then $K(t_3)$ approaches 0, and the measurement will be mostly ignored.
• Suppose prediction noise $\sigma^2(t_3^-)$ is large.
– Then $K(t_3)$ approaches 1, and the measurement will dominate the estimate.
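A two-line numerical check of these limits (illustrative values):

```python
# K(t3) = var_pred / (var_pred + var_meas)
print(1.0 / (1.0 + 100.0))     # large measurement noise: K ~ 0.01, observation mostly ignored
print(100.0 / (100.0 + 1.0))   # large prediction noise:  K ~ 0.99, observation dominates
```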
Kalman Filter
• Takes a stream of observations, and a
dynamical model.
• At each step, a weighted average between
– prediction from the dynamical model
– correction from the observation.
• The Kalman gain $K(t)$ is the weighting,
– based on the variances $\sigma^2(t)$ and $\sigma_z^2$.
• With time, $K(t)$ and $\sigma^2(t)$ tend to stabilize.
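Putting prediction and correction together, a minimal one-dimensional Kalman filter over a stream of observations might look like the sketch below (all names and noise values are illustrative assumptions, not code from the course):

```python
def kalman_1d(observations, u, dt, var_eta, var_z, x_hat0, var0):
    # One-dimensional Kalman filter for x' = u + eta, z = x + eta2.
    x_hat, var = x_hat0, var0
    estimates = []
    for z in observations:
        # Predict: propagate the estimate and grow the uncertainty.
        x_hat = x_hat + u * dt
        var = var + var_eta * dt
        # Correct: weighted average of prediction and observation.
        K = var / (var + var_z)            # Kalman gain
        x_hat = x_hat + K * (z - x_hat)
        var = (1.0 - K) * var
        estimates.append((x_hat, var))
    return estimates

# Example: track a value drifting at rate u = 1.0, observed with variance 1.0.
zs = [1.2, 2.1, 2.9, 4.2, 5.0]
for x_hat, var in kalman_1d(zs, u=1.0, dt=1.0, var_eta=0.1, var_z=1.0, x_hat0=0.0, var0=10.0):
    print(round(x_hat, 3), round(var, 3))
```

With constant noise settings like these, the printed variance settles to a fixed value after a few steps, matching the last bullet above.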
Simplifications
• We have only discussed a one-dimensional
system.
– Most applications are higher dimensional.
• We have assumed the state variable is
observable.
– In general, sense data give indirect evidence.
$$\dot{x} = F(x, u, \eta_1) = u + \eta_1$$
$$z = G(x, \eta_2) = x + \eta_2$$
• We will discuss the more complex case next.