Particle Filter Write-Up

Overview
When implementing a particle filter for localization, the two most important effects to
model are position uncertainty and sensor likelihood. In the following sections I provide
details on how both my motion model and likelihood model were constructed. I will also
present several implementation details that were required to achieve consistent
convergence. Finally, I will present my algorithm in pseudocode and information on the
results.
For this particular implementation I used Matlab on the Stanford Player/Stage data. As
the accompanying videos demonstrate, my implementation worked successfully on all of
the provided data sets. Also included is the Matlab code for running my filter; to
execute the filter, run the particleFilter.m file and pass in the map file name, the data log
name, and the desired number of particles to use.
Motion Estimation
Since the log contained no information from which to construct a more sophisticated
model, I decided to use Euler integration on the provided velocity terms to model the
robot’s motion. Because there was no existing documentation to reference, I performed a
handful of experiments to determine how to Euler integrate the velocities to arrive at
poses similar to those recorded in the logs. After these experiments, I concluded that the
correct integration is as follows:
θ_{t+1} = θ_t − ω·dt
x_{t+1} = x_t + v·dt·cos((θ_{t+1} + θ_t) / 2)
y_{t+1} = y_t + v·dt·sin((θ_{t+1} + θ_t) / 2)
As the following figures illustrate, the Euler integration yields approximately correct
poses:

[Figures: Euler-integrated trajectories overlaid on the logged poses]
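For concreteness, here is a minimal Matlab sketch of this integration step. The function
name and pose layout are my own choices for illustration; the original particleFilter.m
may organize this differently.

    function pose = integratePose(pose, v, w, dt)
        % pose = [x; y; theta]; v and w are the logged linear and angular velocities.
        thetaNew = pose(3) - w*dt;             % note the minus sign found experimentally
        thetaMid = (pose(3) + thetaNew) / 2;   % average heading over the time step
        pose(1)  = pose(1) + v*dt*cos(thetaMid);
        pose(2)  = pose(2) + v*dt*sin(thetaMid);
        pose(3)  = thetaNew;
    end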
To estimate the normal error distribution on the velocities, I treated the logged position
data as “ground truth” and measured the mean and standard deviations of the rate of
change of position and heading. From the logged pose data, I numerically differentiated
the angular and linear displacement to arrive at corresponding velocities. I then let the
error be the difference between the logged velocities and the differentiated displacements.
From this error I computed a mean and a variance. The mean for both the angular and
linear error distributions was very close to zero, indicating no significant bias in the
velocities. This led me to use the following distributions for the motion error:
ε_angular ~ N(0, 1.8975)
ε_linear ~ N(0, 0.0722)
When the motion model for each particle is evaluated, the logged velocities are corrupted
with errors chosen from the above distributions in the following manner:
v′ = v + ε_linear
ω′ = ω + ε_angular
These corrupted velocities are then Euler integrated by the above equations to arrive at a
new position sample.
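A vectorized sketch of this corruption-and-integration step follows, assuming an N-by-3
particle array P = [x y θ] and treating the second parameter of N(·,·) above as a variance
(hence the square roots); these conventions are assumptions, not taken from the original
code.

    function P = motionUpdate(P, v, w, dt)
        N  = size(P, 1);
        vN = v + sqrt(0.0722) * randn(N, 1);   % corrupted linear velocities
        wN = w + sqrt(1.8975) * randn(N, 1);   % corrupted angular velocities
        thetaNew = P(:,3) - wN*dt;
        thetaMid = (P(:,3) + thetaNew) / 2;
        P(:,1) = P(:,1) + vN*dt .* cos(thetaMid);
        P(:,2) = P(:,2) + vN*dt .* sin(thetaMid);
        P(:,3) = thetaNew;
    end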
Sensor Model
After considering several options for modeling the likelihood of sensor data, I eventually
decided to use a likelihood field. Although this method does not exploit the information
contained in the negative space that the beam passes through, it does avoid the expensive
computation involved in ray tracing. Additionally, much of the work of the likelihood
field model can be performed as an initial, one-time operation, so the online code
can be very fast; my highly vectorized Matlab version ran only marginally slower
than real time.
The initial step in creating the likelihood field was to precompute the nearest neighboring
obstacle for every cell in a grid. Since this nearest obstacle map would be used to
estimate the likelihood of each laser point, I generated this grid at a much finer resolution
than the map data (4x). Doing so allowed for a more accurate estimation of the
likelihood of the laser points. The nearest obstacle map for Gates Hall is shown here:

[Figure: nearest-obstacle map for Gates Hall]
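One way to build such a grid in Matlab, assuming the map is available as a logical
occupancy grid occ (true = obstacle) with resolution cellSize meters per cell; imresize
and bwdist are Image Processing Toolbox functions, and the variable names are
hypothetical.

    fine  = imresize(occ, 4, 'nearest');     % 4x finer grid, as described above
    distM = bwdist(fine) * (cellSize / 4);   % distance to nearest obstacle, in meters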
When an observation is made, the laser data is projected into the world at each particle’s
location. The likelihood of each point is computed as a mixture of a normal and a
uniform distribution:
L_point = z_hit · prob(dist, σ_hit) + z_uniform
The z terms above are the weights used in mixing the distributions, and prob(dist, σ_hit)
computes the probability of the distance to the nearest obstacle under a zero-mean
Gaussian distribution with standard deviation σ_hit. The z_uniform term allows for the
rejection of noise at the cost of slowing down convergence. For my implementation I
found that a z_hit of 0.3 and a z_uniform of 0.7 resulted in tolerance to the noise in the
data as well as quick convergence.
The likelihood field model assumes independence between all laser points so the
likelihood of a scan is taken to be the product of the likelihoods of each of the individual
points.
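Putting the pieces together, a hedged sketch of scoring one particle’s scan against the
precomputed grid; the grid indexing convention, bounds handling, and the value of σ_hit
(which is not stated above) are all assumptions.

    function L = scanLikelihood(xs, ys, distGrid, res, sigmaHit)
        % xs, ys: world coordinates of the filtered laser endpoints for this particle.
        zHit = 0.3;  zUniform = 0.7;                  % mixing weights from the text
        r = round(ys / res) + 1;                      % assumes the world origin sits
        c = round(xs / res) + 1;                      % at the grid's lower-left corner
        d = distGrid(sub2ind(size(distGrid), r, c));  % nearest-obstacle distances
        pHit = exp(-d.^2 / (2*sigmaHit^2)) / (sqrt(2*pi)*sigmaHit);
        L = prod(zHit * pHit + zUniform);             % independence => product
    end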
Other Implementation Details
Good motion and likelihood models were important in getting the filter to work;
however, to make it consistently converge, there were several other details that were
important to implement carefully.
One of the most important steps was filtering the laser data. In addition to removing the
max-range scans, further filtering was required. The sensor’s sampling of the
environment was highly non-uniform: there were a large number of points in close
proximity to the robot and far fewer points far from it. If the likelihood were calculated
over all of these data points, the information carried by the distant points would be
trumped by the sheer quantity of near points. To remedy this problem, the laser points
were filtered so that there was at least half a meter of distance between consecutive
points.
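A simple sketch of this spacing filter, assuming max-range returns have already been
removed and pts is an M-by-2 array of scan endpoints in scan order (names and layout
are my own):

    function keep = filterScan(pts)
        keep    = false(size(pts, 1), 1);
        keep(1) = true;
        last    = pts(1, :);
        for i = 2:size(pts, 1)
            if norm(pts(i,:) - last) >= 0.5   % keep only points at least 0.5 m apart
                keep(i) = true;
                last    = pts(i, :);
            end
        end
    end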
For the data sets provided, it was safe to assume that the robot was always inside the
building. For this reason, the initial samples were generated from a uniform distribution
over only the cells inside the building. It was also beneficial to throw out (i.e., assign a
zero weight to) any particle that moved outside the building during the motion update.
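Both ideas are straightforward to express in Matlab; here inside is an assumed logical
grid marking cells within the building, and bounds clamping is omitted for brevity.

    % Draw N initial particles uniformly over the interior cells, random headings.
    [r, c] = find(inside);
    pick   = randi(numel(r), N, 1);
    P      = [(c(pick) - 0.5)*cellSize, (r(pick) - 0.5)*cellSize, 2*pi*rand(N, 1)];

    % After a motion update, zero the weight of any particle that left the building.
    rows = round(P(:,2) / cellSize) + 1;
    cols = round(P(:,1) / cellSize) + 1;
    w(~inside(sub2ind(size(inside), rows, cols))) = 0;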
A final step taken to aid convergence was to use the most recent velocity estimates to
forward simulate the vehicle to the point in time at which the laser scan was taken.
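With the motion update written as above, this forward simulation is a one-liner (the
timestamp variable names are hypothetical):

    % Advance every particle from the last odometry time to the laser timestamp.
    P = motionUpdate(P, vLast, wLast, tScan - tLastOdom);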
Putting it all together
An outline of my implementation of the particle filter is below:
Initialization
1. Generate samples using a uniform distribution over the valid positions in the map
and all possible headings.
2. Precompute the nearest neighboring obstacle grid at a fine resolution that gives
the distance to the closest obstacle for each grid cell.
For all data in the log:
If next reading is position
1. Use the motion model to predict where the particles move to.
2. Throw out any particle that goes off the map.
If next reading is laser data
1. Use the motion model to forward simulate where each particle would be at the
time of the scan.
2. Filter the laser data as described above.
3. Project the sensor data into the world using the particle’s position and heading
and use the likelihood field to compute a weight for each particle.
4. Sample a new batch of particles from the old set, drawing each with probability
proportional to its weight (see the sketch below).
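Step 4 can be written with randsample from the Statistics Toolbox, as sketched below; a
low-variance resampler is a common alternative.

    % Draw particle indices with probability proportional to the weights w.
    idx = randsample(size(P, 1), size(P, 1), true, w);
    P   = P(idx, :);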
Results
The attached movies show the particle filter algorithm converging to the correct global
position after a few iterations. For all three of the movies, 3000 particles were used. The
particles are represented by blue dots with a yellow line indicating their heading.
(Although the yellow line is barely visible from the zoom level at which the movie was
captured, it was tremendously useful in debugging). For the particle with the greatest
likelihood, the filtered laser data is transformed into world coordinates using the
particle’s pose. This laser data is displayed as white plus signs; when the filter is
working properly, these plus signs should align with map obstacles.
The log files contain several interesting events that the viewer should note:
Log 1 starts off with the robot traveling down a feature-sparse corridor. During this
traversal there are several equally good estimates of the robot’s position: some of the
particles are headed down the corridor while others are traveling up it. Only once the end
of the corridor is reached can the robot successfully localize itself in the map.
Log 2 has laser returns off of several obstacles that were not in the map. This log shows
that the particle filter as implemented is robust to a moderate amount of error in the map.
Additionally, at the end of the log the robot executes some high-velocity, jerky
maneuvers. During this motion the particles spread out in a large cloud but quickly
converge back to the correct location.
Log 3 has a few maneuvers with very high angular velocity. During these maneuvers the
particles spread out in a large fan due to the increased position uncertainty, but they
quickly converge back to the correct location.