Particle Filters
Pieter Abbeel
UC Berkeley EECS
Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics
Motivation
§ For continuous spaces: often no analytical formulas for Bayes filter updates
§ Solution 1: Histogram filters (not studied in this lecture)
  § Partition the state space
  § Keep track of the probability of each partition
  § Challenges:
    § What are the dynamics for the partitioned model?
    § What is the measurement model?
    § Often very fine resolution required to get reasonable results
§ Solution 2: Particle filters
  § Represent the belief by random samples
  § Can use the actual dynamics and measurement models
  § Naturally allocate computational resources where required (~ adaptive resolution)
  § Aka Monte Carlo filter, survival of the fittest, condensation, bootstrap filter
Sample-based Localization (sonar)
Problem to be Solved
§ Given a sample-based representation
    S_t = {x_t^1, x_t^2, …, x_t^N}
  of Bel(x_t) = P(x_t | z_1, …, z_t, u_1, …, u_t)
§ Find a sample-based representation
    S_{t+1} = {x_{t+1}^1, x_{t+1}^2, …, x_{t+1}^N}
  of Bel(x_{t+1}) = P(x_{t+1} | z_1, …, z_t, z_{t+1}, u_1, …, u_{t+1})
Dynamics Update
§ Given a sample-based representation
    S_t = {x_t^1, x_t^2, …, x_t^N}
  of Bel(x_t) = P(x_t | z_1, …, z_t, u_1, …, u_t),
  find a sample-based representation of
    P(x_{t+1} | z_1, …, z_t, u_1, …, u_{t+1})
§ Solution:
  § For i = 1, 2, …, N
    § Sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i)
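As a minimal sketch of this step, the dynamics update simply pushes each sample through the motion model. The 1-D random-walk model with Gaussian noise below is an illustrative assumption, not part of the slides:

```python
import random

def dynamics_update(samples, motion_noise=0.5):
    """Propagate each sample x_t^i through an assumed 1-D
    random-walk motion model: x_{t+1} = x_t + N(0, motion_noise^2)."""
    return [x + random.gauss(0.0, motion_noise) for x in samples]

# N = 100 particles, all starting at the origin
samples = dynamics_update([0.0] * 100)
```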
Sampling Intermezzo
Observation Update
§ Given a sample-based representation
    {x_{t+1}^1, x_{t+1}^2, …, x_{t+1}^N}
  of P(x_{t+1} | z_1, …, z_t),
  find a sample-based representation of
    P(x_{t+1} | z_1, …, z_t, z_{t+1}) = C · P(x_{t+1} | z_1, …, z_t) · P(z_{t+1} | x_{t+1})
§ Solution:
  § For i = 1, 2, …, N
    § w_{t+1}^i = w_t^i · P(z_{t+1} | X_{t+1} = x_{t+1}^i)
  § The distribution is represented by the weighted set of samples
    {<x_{t+1}^1, w_{t+1}^1>, <x_{t+1}^2, w_{t+1}^2>, …, <x_{t+1}^N, w_{t+1}^N>}
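A sketch of the observation update: each particle's weight is multiplied by the measurement likelihood. The Gaussian sensor model used for P(z | x) here is an illustrative assumption:

```python
import math

def observation_update(samples, weights, z, obs_noise=1.0):
    """Reweight each sample by the measurement likelihood
    P(z_{t+1} | x_{t+1}); the Gaussian sensor model is an
    illustrative assumption."""
    def likelihood(x):
        return math.exp(-0.5 * ((z - x) / obs_noise) ** 2)
    return [w * likelihood(x) for x, w in zip(samples, weights)]

# Particles near the measurement z = 1.0 keep more weight
weights = observation_update([0.0, 1.0, 5.0], [1.0, 1.0, 1.0], z=1.0)
```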
Sequential Importance Sampling (SIS) Particle Filter
§ Sample x_1^1, x_1^2, …, x_1^N from P(X_1)
§ Set w_1^i = 1 for all i = 1, …, N
§ For t = 1, 2, …
  § Dynamics update:
    § For i = 1, 2, …, N
      § Sample x_{t+1}^i from P(X_{t+1} | X_t = x_t^i)
  § Observation update:
    § For i = 1, 2, …, N
      § w_{t+1}^i = w_t^i · P(z_{t+1} | X_{t+1} = x_{t+1}^i)
§ At any time t, the distribution is represented by the weighted set of samples
    { <x_t^i, w_t^i> ; i = 1, …, N }
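The full SIS loop can be sketched as follows, assuming a 1-D state with Gaussian motion and sensor models (both illustrative assumptions); note that samples are never resampled, only reweighted:

```python
import math
import random

def sis_filter(observations, N=100, motion_noise=0.5, obs_noise=1.0):
    """Sequential Importance Sampling for a 1-D state: weights are
    updated multiplicatively, samples are never resampled.
    The Gaussian motion and sensor models are illustrative assumptions."""
    samples = [random.gauss(0.0, 1.0) for _ in range(N)]  # draw from P(X_1)
    weights = [1.0] * N                                   # w_1^i = 1
    for z in observations:
        # Dynamics update: push each sample through the motion model
        samples = [x + random.gauss(0.0, motion_noise) for x in samples]
        # Observation update: reweight by the measurement likelihood
        weights = [w * math.exp(-0.5 * ((z - x) / obs_noise) ** 2)
                   for x, w in zip(samples, weights)]
    return samples, weights

samples, weights = sis_filter([0.1, -0.2, 0.05])
```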
SIS Particle Filter: Major Issue
§ The resulting samples are only weighted by the evidence.
§ The samples themselves are never affected by the evidence.
→ Fails to concentrate particles/computation in the high-probability areas of the distribution P(x_t | z_1, …, z_t)
Sequential Importance Resampling (SIR)
§ At any time t, the distribution is represented by the weighted set of samples
    { <x_t^i, w_t^i> ; i = 1, …, N }
→ Sample N times from the set of particles
→ The probability of drawing each particle is given by its importance weight
→ More particles/computation focused on the parts of the state space with high probability mass
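The resampling step above can be sketched in a few lines; drawing with replacement in proportion to the weights is the core idea, and the uniform post-resampling weights are standard:

```python
import random

def resample(samples, weights):
    """Draw N particles with replacement, each with probability
    proportional to its importance weight, then reset all weights."""
    N = len(samples)
    new_samples = random.choices(samples, weights=weights, k=N)
    return new_samples, [1.0] * N

# A particle with 100x the weight dominates the resampled set
new_samples, new_weights = resample([0.0, 5.0], [0.01, 1.0])
```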
1.  Algorithm particle_filter(S_{t-1}, u_t, z_t):
2.    S_t = ∅, η = 0
3.    For i = 1 … n                                            Generate new samples
4.      Sample index j(i) from the discrete distribution given by w_{t-1}
5.      Sample x_t^i from p(x_t | x_{t-1}, u_t) using x_{t-1}^{j(i)} and u_t
6.      w_t^i = p(z_t | x_t^i)                                 Compute importance weight
7.      η = η + w_t^i                                          Update normalization factor
8.      S_t = S_t ∪ { <x_t^i, w_t^i> }                         Insert
9.    For i = 1 … n
10.     w_t^i = w_t^i / η                                      Normalize weights
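One step of this algorithm can be sketched for a 1-D state as below. The additive-control motion model and Gaussian sensor model are illustrative assumptions; the control flow follows the numbered lines:

```python
import math
import random

def particle_filter(S_prev, u, z, motion_noise=0.5, obs_noise=1.0):
    """One step of the particle filter for a 1-D state.
    S_prev is a list of (x, w) pairs; the additive-control motion
    model and Gaussian sensor model are illustrative assumptions."""
    xs = [x for x, _ in S_prev]
    ws = [w for _, w in S_prev]
    S, eta = [], 0.0
    for _ in range(len(S_prev)):
        # Line 4: sample index j(i) from the discrete distribution w_{t-1}
        x_prev = random.choices(xs, weights=ws, k=1)[0]
        # Line 5: sample x_t^i from p(x_t | x_{t-1}, u_t)
        x = x_prev + u + random.gauss(0.0, motion_noise)
        # Line 6: compute importance weight w_t^i = p(z_t | x_t^i)
        w = math.exp(-0.5 * ((z - x) / obs_noise) ** 2)
        # Line 7: update normalization factor
        eta += w
        # Line 8: insert
        S.append((x, w))
    # Lines 9-10: normalize weights
    return [(x, w / eta) for x, w in S]

S = particle_filter([(0.0, 1.0)] * 50, u=1.0, z=1.0)
```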
Sensor Information: Importance Sampling
    Bel(x) ← α p(z | x) Bel⁻(x)
    w ← α p(z | x) Bel⁻(x) / Bel⁻(x) = α p(z | x)

Robot Motion
    Bel⁻(x) ← ∫ p(x | u, x′) Bel(x′) dx′
Summary – Particle Filters
§ Particle filters are an implementation of recursive Bayesian filtering
§ They represent the posterior by a set of weighted samples
§ They can model non-Gaussian distributions
§ Proposal distribution is used to draw new samples
§ Weights account for the differences between the proposal and the target
Summary – PF Localization
§ In the context of localization, the particles are propagated according to the motion model.
§ They are then weighted according to the likelihood of the observations.
§ In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.