ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING
KALMAN FILTER
Adya Shrotriya
B.E., University of Pune, India, 2006
PROJECT
Submitted in partial satisfaction of the requirements for the degree in
MASTER OF SCIENCE in
ELECTRICAL AND ELECTRONIC ENGINEERING at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
SPRING
2010
ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING
KALMAN FILTER
A Project by
Adya Shrotriya
Approved by:
__________________________________, Committee Chair
Dr. Fethi Belkhouche
__________________________________, Second Reader
Dr. Preetham B. Kumar
____________________________
Date
Student: Adya Shrotriya
I certify that this student has met the requirements for format contained in the university format manual, and that this project is suitable for shelving in the library and that credit is to be awarded for the project.
__________________________, Graduate Coordinator ________________
Dr. Preetham B. Kumar Date
Department of Electrical and Electronic Engineering
Abstract of
ROBOT PATH PLANNING AND TRACKING OF A MOVING TARGET USING KALMAN
FILTER by
Adya Shrotriya
This project develops a method for path planning and tracking of a moving target by a wheeled robot. The main kinematic variables updated at each step are the orientation and relative position of the robot and the target. A discrete extended Kalman filter is used to predict and update the states of the robot and the target and their uncertainties. The robot uses a linear proportional navigation law to track the target. Simulation of the motion kinematics of the robot and the target is performed using MATLAB. Multiple simulation scenarios show that the robot is able to track and reach the moving goal successfully.
_______________________, Committee Chair
Dr. Fethi Belkhouche
_______________________
Date
ACKNOWLEDGEMENT
I would like to take this opportunity to thank the people behind the successful completion of this project. First and foremost, I would like to thank my professor and mentor for this project, Dr. Fethi Belkhouche, for his tremendous support and guidance throughout the project. I would like to thank him for always being there to help me develop a better understanding of the topic and for encouraging me to give my best. Also, I am extremely grateful to my Graduate Coordinator, Dr. Preetham Kumar, for his consistent support during my academic years at the school, guiding me through the entire process, from enrolling in a class to submitting my master's project. I would like to thank my professors throughout my Master's, my department, and my school for giving me this opportunity to learn and grow towards the attainment of this degree.
Lastly, I would like to express my heartfelt gratitude to my family in India, who have constantly supported me and blessed me. I am extremely thankful to my husband, Puru, who helped me make my dreams come true.
TABLE OF CONTENTS
Page
Acknowledgement ........................................................................................ v
List of Figures ........................................................................................... viii
Chapter
1. INTRODUCTION ..................................................................................... 1
2. ROBOT NAVIGATION FUNDAMENTALS ............................................. 3
3. KALMAN FILTER .................................................................................... 9
4. KALMAN FILTER ALGORITHM ........................................................... 21
5. PREVIOUS RESEARCH IN IMPLEMENTATION OF KALMAN FILTERING ............... 27
Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot [1] ... 27
Kalman Filtering for Positioning and Heading Control of Ships and Offshore Rigs [5] .. 28
Accurate Odometry and Error Modeling for a Mobile Robot [10] ................................... 29
6. TRACKING–NAVIGATION OF A MOVING GOAL ............................................................ 30
7. KALMAN FILTERING IMPLEMENTATION IN NAVIGATION/TRACKING ................ 37
8. SIMULATION RESULTS ........................................................................ 46
LIST OF FIGURES
Page
1. Growing error ellipses indicate growing position uncertainty with odometry ............... 8
2. Kalman Filter diagram ............................................................................. 10
3. A framework of dynamic world modeling ................................................. 19
4. State space diagram for an odometric system .......................................... 30
5. Geometrical representation of robot/target positions ................................ 32
6. Probability density function for noises v1 and v2 ...................................... 34
7. Probability density function for rGR and ΨGR .......................................... 34
8. Probability density function for positions xGR and yGR ............................ 35
9. Linear Navigation law simulation for constant c and variable N ................ 47
10. Linear Navigation law simulation for constant N and variable c .............. 47
11. Simulation results depicting uncertainties through error ellipses ............. 48
12. Robot Target path ................................................................................. 49
13. Covariance matrix distribution ............................................................... 50
14. Error Ellipse for the first and the 15th step robot/target position ............. 51
Chapter 1
INTRODUCTION
This project is concerned with the application of one of the most important techniques from estimation theory to the problem of navigation and tracking for a mobile robot. In this project, probabilistic estimation is used to predict the next step of a robot that follows a moving target under uncertainty. The translation and orientation of the moving target with respect to the global axes serve as the reference for estimating the robot's position. Estimation of the position of the vehicle with respect to the external world is fundamental to navigation. Modeling the contents of the immediate environment is equally fundamental. Estimation theory provides a basic set of tools for position estimation and environmental modeling. These tools provide an elegant and formally sound method for combining internal and external sensor information from different sources, operating at different rates. The foundations of estimation theory are reviewed, and a virtual system is assumed for the mathematical tools derived for combining sensory information. In particular, a predict-match-update cycle is derived as a framework for perception. The Kalman filter is used to provide the mathematical basis for this process [1].
The use of a Kalman filter as the basis for a virtual vehicle controller makes it possible to correct errors in odometric position using external perception. Example cases are derived
for correcting a position estimate from different forms of perception. In particular, a technique is presented for correcting the estimated position using the angle and distance to the moving target.
This project is organized as follows. Chapter 2 discusses the fundamentals of odometry and robot navigation. Probabilistic estimation of the robot and target states and the resulting uncertainties is examined. In Chapters 3 and 4, the implementation of the Kalman filter, its operation, and its significance in path planning are elaborated with simulation examples and mathematical models. The Kalman filter algorithm and the extended Kalman filter are explained. In Chapter 5, several case studies that have used the Kalman filter and odometry to perform navigation for wheeled robots, ships, and other such real-time applications are studied and discussed. Chapter 6 introduces the linear navigation laws and several simulation examples that form the basis of our robot and target path planning operation. In Chapter 7, the successful implementation of the Kalman filter to perform odometric position estimation for the robot and the target is shown in simulation examples. Finally, the appendix lists the MATLAB code written in this project to implement the Kalman filter and achieve successful robot path tracking to reach the target.
Chapter 2
ROBOT NAVIGATION FUNDAMENTALS
Robot navigation means a robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate in its environment, the robot or any other mobility device requires a representation, i.e., a map of the environment, and the ability to interpret that representation. Path planning is a technique of robot navigation that extends localization: it requires the determination of the robot's current position and of the goal location, both within the same frame of reference or coordinate system.
In this project, the goal location is also moving and is characterized by uncertainty in motion. The position of the target is estimated and updated with respect to its observed values using the discrete Kalman filter. The Kalman filter provides tools to estimate and update the position and orientation of the robot, considering its uncertainty as characterized by mean values and a covariance matrix. The path planning algorithm uses motion models for probabilistic estimation of the robot position. The Kalman filter is also used to track the position and orientation of the moving goal.
Odometry-based models are used when the system is equipped with wheel encoders, whereas velocity-based models are applied when no wheel encoders are present to provide observation values from motion sensors built into the robot [9].
Odometry
Odometry is the process of estimating one's current position based upon a previously determined position, or fix, and advancing that position based upon known or estimated speeds over elapsed time and course. In mobile robotics, odometry typically refers to a position estimate obtained by integrating incremental motion measurements. The current pose of the vehicle is estimated based on its velocities and the time elapsed. In this project, a velocity-based model is applied since no wheel encoders are present [9].
Odometry is the simplest and the most widely used navigation method for mobile robot positioning. The fundamental idea is the integration of incremental motion information over time, which leads inevitably to the accumulation of errors. Particularly, the accumulation of orientation errors will cause large position errors which increase proportionally with the distance traveled by the robot. Despite these limitations, most researchers agree that odometry is an important part of a robot navigation system and that navigation tasks will be simplified if odometric accuracy can be improved [7].
Odometry is used in almost all mobile robots, for various reasons:
1. Odometry data can be fused with absolute position measurements to provide better and more reliable position estimation [1].
2. Odometry can be used in between absolute position updates with landmarks. Given a required positioning accuracy, increased accuracy in odometry allows for less frequent absolute position updates. As a result, fewer landmarks are needed for a given travel distance [1].
3. Many mapping and landmark matching algorithms assume that the robot can maintain its position well enough to allow the robot to look for landmarks in a limited area and to match features in that limited area to achieve short processing time and to improve matching correctness [1].
4. In some cases, odometry is the only navigation information available; for example, when no external reference is available, when circumstances preclude the placing or selection of landmarks in the environment, or when another sensor subsystem fails to provide usable data [4].
In a typical indoor environment with a flat floor, localization becomes a matter of determining the Cartesian coordinates (x,y) and the orientation θ of the robot on a two dimensional floor plan. For a wheeled robot, odometry is one of the most important means of achieving this task. In practice, optical encoders that are mounted on both drive wheels feed discretized wheel increment information to a processor, which correspondingly updates the robot’s state using geometric equations. However, with time,
odometric localization accumulates errors in an unbounded fashion due to wheel slippage, floor roughness, and discretized sampling of wheel increments.
To begin, consider how x and y coordinates (and orientation) change with respect to time.
At any instant, the x and y coordinates of the robot's center point are changing based on its speed and orientation. We treat orientation as an angle measured in radians, counterclockwise from the x-axis. The vector giving the direction of forward motion for the robot is simply (cos θ, sin θ). The x and y coordinates of the robot's center point change depending on the speed of its motion along this vector.
These observations suggest that, taking \(v(t)\) and \(\theta(t)\) as time-dependent functions for the robot's speed and orientation, the solution is going to be of the form [8]:

\[ \frac{dx}{dt} = v\cos\theta, \qquad \frac{dy}{dt} = v\sin\theta \]

or

\[ \frac{dx}{dt} = v(t)\cos\theta(t), \qquad \frac{dy}{dt} = v(t)\sin\theta(t) \]
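To make the integration concrete, the following MATLAB sketch performs simple dead-reckoning by integrating these equations with Euler steps. It is a minimal illustration, not the appendix code; the speed, turn rate, and step count are assumed values.

% Dead-reckoning sketch: integrate speed and heading over time.
% All numeric values are assumed for illustration.
T = 0.1; N = 100;               % sampling time [s], number of steps
v = 1.0; omega = 0.2;           % speed [m/s], turn rate [rad/s] (assumed)
pose = [0; 0; 0];               % [x; y; theta]
path = zeros(2, N);
for k = 1:N
    pose(1) = pose(1) + v*cos(pose(3))*T;   % x update
    pose(2) = pose(2) + v*sin(pose(3))*T;   % y update
    pose(3) = pose(3) + omega*T;            % heading update
    path(:, k) = pose(1:2);
end
plot(path(1,:), path(2,:));     % traced path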
Theoretical research normally involves error modeling, so that a mathematical treatment is possible. For example, different versions of the Kalman filter require that the odometry
errors be expressed in the form of an error covariance matrix. Odometry errors can be classified as being systematic or non-systematic [3].
It is noteworthy that many researchers develop algorithms that estimate the position uncertainty or odometry errors of a robot. With this approach, each computed robot position is surrounded by a characteristic "error ellipse", which indicates a region of uncertainty for the robot's actual position. Typically, these ellipses grow with travel distance, until an absolute position measurement reduces the growing uncertainty and thereby "resets" the size of the error ellipse. These error estimation techniques must rely on error estimation parameters derived from observations of the vehicle's odometric performance. These parameters can take into account only systematic errors, because the magnitude of non-systematic errors is unpredictable [4].
Probabilistic Estimation
Probabilistic localization is an algorithm in which, instead of maintaining a single hypothesis as to where in the world a robot might be, a probability distribution over the space of all such hypotheses is maintained. The probabilistic representation also allows the uncertainties that arise from uncertain motion models and noisy sensor readings to be accounted for. The challenge is then to maintain a position probability density over all possible robot poses. Such a density can have arbitrary forms representing various kinds of information about the
robot's position. For example, the robot can start with a uniform distribution representing that it is completely uncertain about its position, i.e., that the robot could be in any location with equal probability. The density can furthermore represent ambiguous situations. In the usual case, in which the robot is highly certain about its position, it consists of a unimodal distribution centered on the true position of the robot. Kalman filtering is a form of probabilistic estimation where the estimate is assumed to be a Gaussian (unimodal) probability density function (PDF) [2].
Figure 1 illustrates the error ellipses and the increasing uncertainties with odometry.
Figure 1. Growing error ellipses indicate growing position uncertainty with odometry [10].
Chapter 3
KALMAN FILTER
The Kalman filter is an on-line, recursive algorithm designed to estimate the true state of a system where only noisy observations or measurements are available. It contains a linear model for the process, as well as a linear model for the measurement. The former describes how the current state of the tracker changes, given the previous instance. To improve the estimated state, the Kalman filter uses measurements that are related to the state but are themselves corrupted by noise.
Thus the Kalman filter consists of two steps:
1) Prediction
2) Correction (or update)
In the first step, the state is predicted with the dynamic model. In the second step it is corrected with the observation model, so that the error covariance of the estimator is minimized. In this sense it is an optimal estimator [3].
The application areas of the Kalman filter span aerospace, marine navigation, demographic modeling, weather science, manufacturing, and many other areas. Hence, due to its wide utility, the Kalman filter has been the subject of an extensive amount of research [2].
Figure 2. Kalman Filter diagram [3].
The procedure is repeated for each time step with the state of the previous time step as initial value. Therefore, the Kalman Filter is called a recursive filter.
The basic components of the Kalman filter are the state vector, the dynamic model and the observation model [3].
State Vector
The state vector contains the variables of interest. In general it describes the state of the dynamic system. The variables in the state vector cannot be measured directly but can be inferred from values that are measurable.
Elements of the state vector can be position, velocity, orientation angles, etc. For example, \(x_R(t) = [x_R, y_R, \theta_R]^T\) is a state vector with \(x_R\) and \(y_R\) as variables of translation and \(\theta_R\) as the variable of orientation. The state vector has two values at the same time: the a priori value, which is the predicted value before the update, and the a posteriori value, which is the corrected value after the update [3].
Dynamic Model
The dynamic model describes the transformation of the state vector over time. It can usually be represented by a system of difference or differential equations. The discrete model can be written as follows:

\[ x(k+1) = F(k)\,x(k) + G(k)\,u(k) + v(k) \]

The vector \(x(k) \in \mathbb{R}^n\) is the state vector, and \(u(k) \in \mathbb{R}^m\) is the input vector, used to represent system inputs such as velocity commands, torques, or forces intentionally applied to the robot. The matrix \(F(k) \in \mathbb{R}^{n \times n}\) encodes the dynamics of the system, and \(G(k) \in \mathbb{R}^{n \times m}\) describes how the inputs drive the dynamics. The vector \(v(k) \in \mathbb{R}^n\) is called the process noise and is assumed to be white Gaussian noise with zero mean and covariance matrix \(V(k)\). This process noise is used to account for unmodeled disturbances (such as slipping wheels) that affect the system dynamics [3].
Observation Model
The observation model represents the relationship between the state and the measurements. In the linear case, the measurements can be described by a system of linear equations, which depend on the state variables. Usually the observations are made at discrete time steps. The discrete measurement model can be written as follows:
\[ z(k) = H(k)\,x(k) + w(k) \]

The matrix \(H(k) \in \mathbb{R}^{p \times n}\) describes how state vectors are mapped into outputs. The measurement noise vector \(w(k) \in \mathbb{R}^p\) is assumed to be white Gaussian noise with zero mean and covariance matrix \(W(k)\) [3].
When the commands are executed perfectly and the robot's starting position is perfectly known, the odometry method gives a perfect estimate of position. However, perfect performance and knowledge are impossible to achieve in the real world. Errors between the velocity commands and the actual robot velocities accumulate over time. In other words, as the robot moves, it is continuously losing information about its location. Eventually, the robot will lose so much information that the command integration estimate becomes meaningless for any practical purpose. A similar approach is to integrate robot velocity measurements reported by onboard odometry sensors.
With just a few extensions, command integration can be thought of as a probabilistic estimation method. First, consider the robot's starting location. Since, in the real world, the starting position cannot be perfectly known, it makes sense to represent the starting location as a PDF over the state space, i.e., the space of possible robot positions. If a good estimate of the starting position is available, the PDF will have a peak at that location, and the "sharpness" of the peak will represent the certainty of the initial estimate: the better the estimate, the higher and narrower the peak. The location of the peak is propagated by integrating the velocity commands. However, since the commands are not executed perfectly, the estimate becomes more uncertain as time progresses, and the peak of the PDF will become smaller and more spread out. The rate at which the peak spreads is determined by the amount of error (noise) in the velocity command; the greater the noise, the faster the peak spreads. Eventually, the peak will become so flat that the estimate provided by the PDF is meaningless [2].
The goal of probabilistic estimation (or any estimation method) is to provide a meaningful estimate of the system state, which in this case is the robot pose. The problem with the command integration method is that information is continually lost and no new information is ever gained. The solution is to inject new information into the system through the use of sensors that gather information about the environment. Consider, for example, a robot equipped with a sensor capable of measuring the range and bearing to a landmark with a known location. Such a measurement adds new information to the
system and can be used (at least partially) to compensate for the information that was lost by integrating. The new information can be represented as a PDF in sensor space, usually with a peak at the value of the sensor reading. Knowledge about the robot location prior to the sensor measurement is described by a PDF in the state space. The crux of the probabilistic estimation problem is to merge these two distributions in a meaningful way.
Generally, any probabilistic estimation method can be thought of as a two-step process of prediction and update. Given an estimate of the system state in the form of a PDF, the prediction propagates the PDF according to the robot commands together with a motion model for the robot. The update step then corrects the prediction by merging the predicted PDF with information collected by the sensors. The "new" estimate is given by the updated PDF, and the process is iterated. Note that in each iteration, the prediction step accounts for the information lost due to errors in the motion model, while the update step incorporates information gained by the sensors.
The Kalman filter is a well-known probabilistic estimation technique. In Kalman filtering, the motion model is assumed to be a linear function of the state variables and the inputs (commands). The quantities measured by the sensors are assumed to be linear functions of the state variables. Errors in both the motion model and the sensor model are assumed to be zero-mean white Gaussian noise. Because of this simple form, it is possible to
derive closed-form equations to perform the prediction and update steps, making Kalman filter implementation a straightforward process [2].
Kalman Filtering
In order to apply Kalman filtering to the problem of robot localization, it is necessary to define equations that can be used to model the dynamics and sensors of the robot system.
The vector x is used to denote the system (robot) state as it evolves through time. This project uses discrete-time models, meaning that the continuously varying robot state is sampled at discrete, regularly spaced intervals of time to create the sequence \(x(k)\), where \(k = 0, 1, 2, \ldots\) is the discrete time index. Specifically, if \(x(0)\) is the value of the state at time \(t = t_0\), then \(x(k)\) is the value of the state at time \(t_0 + Tk\), where \(T\) is defined to be the sampling time.
The evolution of the robot state and the values measured by the robot sensors can be modeled as a linear dynamical discrete-time system as follows [2].
\[ x(k+1) = F(k)\,x(k) + G(k)\,u(k) + v(k) \]
\[ z(k) = H(k)\,x(k) + w(k) \]
Experience from building systems using the linear dynamic model framework has led to a set of principles for
integrating perceptual information, listed below. These principles follow directly from the nature of the cyclic process of dynamic world modeling, and have been elaborated in the paper "Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot" by James L. Crowley [1].
1) Primitives in the world model should be expressed as a set of properties [1]
A model primitive expresses an association of properties which describe the state of some part of the world. This association is typically based on spatial position: for example, the co-occurrence of a surface with a certain normal vector, a yellow color, and a certain temperature. For numerical quantities, each property can be listed as an estimate accompanied by a precision. For symbolic entities, the property slot can be filled with a list of possible values from a finite vocabulary. This association of properties is known as a "state vector" in estimation theory.
2) Observation and Model should be expressed in the same coordinate system [1]
The common coordinate system may be scene based or observer based. The choice of reference frame should be determined by considering the total cost of the transformations involved in each predict-match-update cycle. For example, in the case of a single stationary observer, a sensor based coordinate system may minimize the transformation cost. For a moving observer with a model which is small relative to the size of the observations, it may be cheaper to transform the model to the current sensor coordinates during each cycle of modeling. On the other hand, when the model is large compared to
the number of observations, using an external scene-based system may yield fewer computations.
Transformations between coordinate frames generally require a precise model of the entire sensing process. This description, often called a “sensor model”, is essential to transform a prediction into the observation coordinates, or to transform an observation into a model based coordinate system.
3) Observation and Model should be expressed in a common vocabulary [1]
In order to match or to add information to a model, an observation must be transformed into the terms of the database in order to serve as a key. It is possible to calculate such information as needed. However, since the information is used both in matching and updating, it makes more sense to save it between phases. Hence, the observation should be expressed in a subset of the properties used in the model.
An efficient way to integrate information from different sensors is to define a standard
“primitive” element which is composed of the different properties which may be observed or inferred from different sensors. Any one sensor might supply observations for only a subset of these properties. Transforming the observation into the common vocabulary allows the fusion process to proceed independent of the source of observations.
4) Properties should include an explicit representation of uncertainty [1]
Dynamic world modeling involves two kinds of uncertainty: precision and confidence.
Precision can be thought of as a form of spatial uncertainty. By explicitly listing the precision of an observed property, the system can determine the extent to which an observation is providing new information to a model. Unobserved properties can be treated as observations which are very imprecise. Having a model of the sensing process permits an estimate of the uncertainties to be calculated directly from the geometric situation.
5) Primitives should be accompanied by a confidence factor [1].
A confidence factor provides the world modeling system with a simple mechanism for non-monotonic reasoning. Observations which do not correspond to expectations may initially be considered uncertain. If confirmation is received from further observations, their confidence is increased. If no further confirmation is received, they can be eliminated from the model.
The application of these principles leads to a set of techniques for the processes of dynamic world modeling.
Predict-Match-Update Process
In a dynamic world modeling framework, independent observations are “transformed” into a common coordinate space and vocabulary. These observations are then integrated
(fused) into a model (or internal perception) by a cyclic process composed of three phases: Predict, Match and Update [1].
Figure 3. A framework of dynamic world modeling [1].
Predict: In the prediction phase, the current state of the model is used to predict the state of the external world at the time that the next observation is taken.
Match: In the match phase, the transformed observation is brought into correspondence with the predictions. Such matching requires that the observation and the prediction express information which is qualitatively similar. Matching requires that the predictions and observations be transformed to the same coordinate space and to a common vocabulary.
Update: The update phase integrates the observed information with the predicted state of the model to create an updated description of the environment composed of hypotheses.
Chapter 4
KALMAN FILTER ALGORITHM
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation \(\hat{x}_{n|m}\) represents the estimate of \(x\) at time \(n\) given observations up to, and including, time \(m\).

The state of the filter is represented by two variables:

\(\hat{x}_{k|k}\), the a posteriori state estimate at time \(k\) given observations up to and including time \(k\);

\(C_{k|k}\), the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate).

The Kalman filter has two distinct phases: Predict and Update. As mentioned previously, the predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current time step, it does not include observation information from the current time step. In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate.
Usually, the update phase follows the predict phase, incorporating the observation at each step. This observation value helps provide an updated state of the system which is nearer to the true value. However, in situations such as the absence of an observation value or the presence of multiple observation values, the update phase may change: either the update step is skipped and multiple prediction steps are performed, or the observation values are user-defined in order to provide a beacon point for the predictions. Similarly, multiple update values are evaluated depending on the available number of observations (typically with different observation matrices \(H_k\)). In order to perform the predict and update steps as explained above, the Kalman filter algorithm is used, which is summarized as follows [12].
Predict

Predicted (a priori) state:

\[ \hat{x}_{k|k-1} = F_k\,\hat{x}_{k-1|k-1} + G_k\,u_k \]

Predicted (a priori) estimate covariance:

\[ C_{k|k-1} = F_k\,C_{k-1|k-1}\,F_k^T + W_k \]
Update

Innovation or measurement residual:

\[ y_k = z_k - H_k\,\hat{x}_{k|k-1} \]

Innovation (or residual) covariance:

\[ S_k = H_k\,C_{k|k-1}\,H_k^T + V_k \]

Optimal Kalman gain:

\[ K_k = C_{k|k-1}\,H_k^T\,S_k^{-1} \]

Updated (a posteriori) state estimate:

\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,y_k \]

Updated (a posteriori) estimate covariance:

\[ C_{k|k} = (I - K_k H_k)\,C_{k|k-1} \]
The formula for the updated estimate covariance above is only valid for the optimal Kalman gain; usage of other gain values requires a more complex formula.
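For illustration, the equations above can be written as a compact MATLAB function implementing one predict-update cycle. This is a minimal sketch following the notation of this chapter (\(W_k\): process noise covariance, \(V_k\): measurement noise covariance); it is not the appendix code.

function [x, C] = kf_step(x, C, u, z, F, G, H, W, V)
% One predict-update cycle of the discrete Kalman filter.
    % Predict
    x = F*x + G*u;                  % predicted (a priori) state
    C = F*C*F' + W;                 % predicted (a priori) covariance
    % Update
    y = z - H*x;                    % innovation (measurement residual)
    S = H*C*H' + V;                 % innovation covariance
    K = (C*H')/S;                   % optimal Kalman gain
    x = x + K*y;                    % updated (a posteriori) state
    C = (eye(size(C,1)) - K*H)*C;   % updated (a posteriori) covariance
end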
Extended Kalman Filter
The robot and goal motion models used in this project are highly non-linear. Thus, the linear Kalman filter cannot be used. To solve this problem, we use the extended Kalman filter (EKF). The extended Kalman filter is a filter in which the state transition and observation models are non-linear functions of the state and input. The dynamic model is given by:

\[ x_k = f(x_{k-1}, u_k) + w_k \]
\[ z_k = h(x_k) + v_k \]
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.
The Jacobian of a function describes the orientation of a tangent plane to the function at a given point. For example, in a two-dimensional plane where \(x_{n+1} = f(x_n)\) with a fixed point \(x^*\), the Jacobian matrix is simply the first-order derivative of the function \(f(x_n)\) at that fixed point; for the Jacobian to exist, only the partial derivatives of the function need to exist. Hence, the Jacobian gives the best possible linear approximation of a differentiable function near a given point. Since the Jacobian is the derivative of a multivariate function, for \(n\) variables, \(n > 1\), the derivative is matrix-valued, composed of partial derivatives.
At each time step the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate [12].
Predict

Predicted state:

\[ \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1},\, u_{k-1}) \]

Predicted estimate covariance:

\[ C_{k|k-1} = F_{k-1}\,C_{k-1|k-1}\,F_{k-1}^T + W_{k-1} \]

Update

Innovation or measurement residual:

\[ y_k = z_k - h(\hat{x}_{k|k-1}) \]

Innovation (or residual) covariance:

\[ S_k = H_k\,C_{k|k-1}\,H_k^T + V_k \]

Optimal Kalman gain:

\[ K_k = C_{k|k-1}\,H_k^T\,S_k^{-1} \]
Updated state estimate:

\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,y_k \]

Updated estimate covariance:

\[ C_{k|k} = (I - K_k H_k)\,C_{k|k-1} \]

where the state transition and observation matrices are defined to be the following Jacobians:

\[ F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1},\,u_{k-1}} \]
\[ H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}} \]
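As an illustration of these equations, the following MATLAB sketch runs one EKF predict-update cycle for a unicycle motion model with a range/bearing observation of a fixed target. The noise levels, target position, and initial estimate are assumed values, not the project's parameters.

% EKF sketch: unicycle motion model, range/bearing measurement.
T = 0.1; v = 1.0;                      % sampling time, speed (assumed)
xh = [0; 0; pi/4];                     % state estimate [x; y; theta]
C  = 0.1*eye(3);                       % state covariance
W  = diag([0.01 0.01 0.005]);          % process noise covariance (assumed)
V  = diag([0.5^2 0.4^2]);              % measurement noise (range, bearing)
xg = [10; 10];                         % target position (assumed known)
% Predict with f and its Jacobian F evaluated at the estimate
Fj = [1 0 -v*sin(xh(3))*T;
      0 1  v*cos(xh(3))*T;
      0 0  1];
xh = xh + [v*cos(xh(3))*T; v*sin(xh(3))*T; 0];
C  = Fj*C*Fj' + W;
% Update with a (simulated) noisy observation z = [range; bearing]
dx = xg(1) - xh(1); dy = xg(2) - xh(2); r = hypot(dx, dy);
Hj = [-dx/r    -dy/r    0;             % Jacobian of h at the prediction
       dy/r^2  -dx/r^2  0];
z  = [r; atan2(dy, dx)] + chol(V)'*randn(2, 1);
y  = z - [r; atan2(dy, dx)];           % innovation
S  = Hj*C*Hj' + V;                     % innovation covariance
K  = C*Hj'/S;                          % Kalman gain
xh = xh + K*y;                         % a posteriori state
C  = (eye(3) - K*Hj)*C;                % a posteriori covariance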
Chapter 5
PREVIOUS RESEARCH IN IMPLEMENTATION OF KALMAN FILTERING
In this section, we present case studies where odometry and the Kalman filter are used to predict and update the system states and eventually achieve the desired goal of the project.
Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot [1]
This work concerns the application of estimation theory to the problem of navigation and perception of a mobile robot. A hierarchical structure is presented for the design of a mobile robot navigation system. The control system of a mobile robot is found to decompose naturally into a set of layered control loops, where the layers are defined by the level of abstraction of data, and the cycle time of the feedback control. The levels that occur naturally are identified as the level of signal, device, behavior and task.
The vehicle controller provides asynchronous independent control of forward translation and rotation. The controller accepts both velocity and displacement commands, and can support a diverse variety of navigation techniques. In addition, the controller operates in terms of a standardized "virtual vehicle", which may be easily adapted to a large variety of vehicle geometries. All vehicle-specific information is contained in the low-level interface, which translates the virtual vehicle into a specific vehicle geometry.
The use of a Kalman filter as the basis for a vehicle controller makes it possible to correct errors in odometric position estimation using external perception. The concepts of this project are adapted largely from this paper.
Kalman Filtering for Positioning and Heading Control of Ships and Offshore Rigs [5]
This application discusses the role of the Kalman filter in dealing with uncertainties in ship motion control caused by environmental conditions like waves, wind, and current.
Modern marine vessels are equipped with sophisticated motion control systems that have different objectives depending on the particular operations being performed. Some of these control objectives include position and heading regulation, path following, trajectory tracking and wave induced motion reduction [5].
The stochastic nature of environmentally induced forces has led to extensive use of the Kalman filter for estimating ship-motion-related quantities and for filtering disturbances, both of which are of paramount importance for implementing marine motion-control systems. In this application, elements of a ship motion-control system are introduced, describing the models used for position regulation and course-keeping control and how the Kalman filter is used for wave filtering, that is, removing oscillatory wave-induced motion from velocity and positioning measurements [5].
Accurate Odometry and Error modeling for a Mobile Robot [10]
This case study examines the important steps involved in the design, calibration, and error modeling of a low-cost odometry system capable of achieving high accuracy path planning. A consistent error model is developed for updating every step of position and orientation of the odometric system. The approach taken in this case takes into account the noise over the entire path track to produce simple closed-form expressions, allowing efficient correction or update of the covariance matrix after the completion of each path segment [10].
An accurate odometry system is presented which is comparable to the best reported systems but can be fabricated at low cost. A new first-order odometry error model has been derived to propagate the state error covariance following a motion. The Kalman filter can implement this case study procedure easily, since a covariance matrix is developed, which is a common approach to considering uncertainties or noise in many statistical and filtering theories. However, this model fails to consider unexpected errors such as hitting a bump on the floor or any other noise that violates the flat floor assumption. External tools or sensors may be required to detect errors for certain applications like mapping.
Chapter 6
TRACKING–NAVIGATION OF A MOVING GOAL
The robot and the moving goal move in the horizontal plane. The goal maneuvers are not a priori known to the robot. The aim is to design a closed-loop control law for the robot steering angle, which ensures reaching the moving goal. We assume that the following conditions are satisfied.
1. The robot is faster than the moving goal, and the goal moves in a smooth path.
2. The minimum turning radius of the robot is smaller than the minimum turning radius of the moving goal.
3. The robot has a sensory system, which provides the control system with the necessary information about the target and the environment. The target's speed, orientation, and position are not exactly known, but can be measured and estimated using a Kalman filter.
Figure 4. State space diagram for an odometric system [9].
Point O(0,0) is the origin of an inertial reference frame of coordinates. The robot is a wheeled mobile robot of the unicycle type with the following kinematics:

\[ \dot{x}_R = v_R\cos\theta_R, \qquad \dot{y}_R = v_R\sin\theta_R, \qquad \dot{\theta}_R = \omega_R \]

where \((x_R, y_R)\) represents the position of the robot's reference point in the inertial frame of reference and \(\theta_R\) is the robot's orientation angle. \(v_R\) and \(\omega_R\) represent the linear and angular velocities of the robot, respectively. The state of the robot is characterized by \(s_R = [x_R, y_R, \theta_R]^T\). The state has a Gaussian distribution with mean value \(\hat{s}_R = [\hat{x}_R, \hat{y}_R, \hat{\theta}_R]^T\) and covariance matrix \(C_R\). The goal moves according to the following kinematics equations:

\[ \dot{x}_G = v_G\cos\theta_G, \qquad \dot{y}_G = v_G\sin\theta_G \]

where \((x_G, y_G)\) represents the position of the goal in the inertial frame of reference and \(\theta_G\) is the goal's orientation angle. \(v_G\) is the goal's linear velocity. The state of the goal is characterized by \(s_G = [x_G, y_G, \theta_G]^T\). A Gaussian distribution is assumed for the state, with mean value \(\hat{s}_G = [\hat{x}_G, \hat{y}_G, \hat{\theta}_G]^T\) and covariance matrix \(C_G\).
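For illustration, the following MATLAB sketch integrates both sets of kinematics with Euler steps; the speeds, angular velocities, and initial states are assumed example values consistent with assumption 1 (the robot is faster than the goal).

% Integrate the robot and goal unicycle kinematics (assumed values).
T = 0.05; N = 400;
sR = [0; 0; pi/2];          % robot state  [xR; yR; thetaR]
sG = [10; 10; 0];           % goal state   [xG; yG; thetaG]
vR = 1.5; vG = 1.0;         % robot faster than the goal
wR = 0.0; wG = 0.1;         % angular velocities [rad/s]
trajR = zeros(2, N); trajG = zeros(2, N);
for k = 1:N
    sR = sR + [vR*cos(sR(3)); vR*sin(sR(3)); wR]*T;
    sG = sG + [vG*cos(sG(3)); vG*sin(sG(3)); wG]*T;
    trajR(:, k) = sR(1:2); trajG(:, k) = sG(1:2);
end
plot(trajR(1,:), trajR(2,:), 'r', trajG(1,:), trajG(2,:), 'b');
legend('robot', 'goal');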
Figure 5. Geometrical representation of robot/target positions: the robot at \((x_R, y_R)\) and the target at \((x_G, y_G)\), separated by range \(r_{GR}\) along the line of sight at angle \(\Psi_{GR}\) from the X axis.
The geometrical representation of the target and robot coordinates is shown above. The angle of the line of sight from the robot to the target is represented by \(\Psi_{GR}\).
The following equations implement the linear navigation law that describes the motion of the robot with respect to a moving target. The relative velocity between the target and the robot is given by:

\[ \dot{x}_{GR} = v_G\cos\theta_G - v_R\cos\theta_R \]
\[ \dot{y}_{GR} = v_G\sin\theta_G - v_R\sin\theta_R \]

The orientation angle of the line of sight between the robot and the target is given by:

\[ \tan\Psi_{GR} = \frac{y_G - y_R}{x_G - x_R} \]
The distance between the robot and the target is given by:

\[ r_{GR} = \sqrt{(y_G - y_R)^2 + (x_G - x_R)^2} \]

The onboard sensory system returns values for \(\Psi_{GR}\) and \(r_{GR}\). These observation or measurement values are corrupted by noise, which is Gaussian with zero mean. Thus, the equations for \(\Psi_{GR}\) and \(r_{GR}\) can be written as follows:

\[ \Psi_{GR} = \operatorname{atan2}\!\left(y_G - y_R,\; x_G - x_R\right) + v_1 \]
\[ r_{GR} = \sqrt{(y_G - y_R)^2 + (x_G - x_R)^2} + v_2 \]

where \(v_1\) and \(v_2\) are both measurement noise components.
Figure 6 shows the probability density function (PDF) for the noise. The standard deviation for \(v_1\) is 0.4 with mean 0, whereas the standard deviation of \(v_2\) (distance) is 0.5 with mean 0. Figure 7 shows the uncertainty propagation for \(\Psi_{GR}\) and \(r_{GR}\). Figure 8 shows the PDFs for the relative positions, where the PDFs are obtained using Monte Carlo simulation for \(r_{GR} = 10\) and \(\Psi_{GR} = 45^\circ\).
Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs. To perform the Monte Carlo simulation, the following equations are evaluated:

\[ x_{GR} = r_{GR}\cos\Psi_{GR}, \qquad y_{GR} = r_{GR}\sin\Psi_{GR} \]
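A minimal MATLAB version of this Monte Carlo experiment is sketched below: noisy \((r_{GR}, \Psi_{GR})\) pairs are sampled and mapped through the equations above, and the empirical PDFs are obtained from histograms. The sample count is arbitrary.

% Monte Carlo sketch of the uncertainty propagation in Figures 6-8.
M     = 1e5;                           % number of samples
r0    = 10;  psi0 = deg2rad(45);       % mean range and bearing
r_s   = r0   + 0.5*randn(M, 1);        % range samples,   sigma = 0.5
psi_s = psi0 + 0.4*randn(M, 1);        % bearing samples, sigma = 0.4
x_GR  = r_s .* cos(psi_s);             % relative x positions
y_GR  = r_s .* sin(psi_s);             % relative y positions
histogram(x_GR, 'Normalization', 'pdf');   % empirical PDF of x_GR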
The following graphs illustrate the PDFs for \(v_1\), \(v_2\), \(r_{GR}\), \(\Psi_{GR}\), \(x_{GR}\), and \(y_{GR}\).
Figure 6. Probability density function for noises \(v_1\) (\(\sigma = 0.4\)) and \(v_2\) (\(\sigma = 0.5\)).
Figure 7. Probability density function for \(r_{GR}\) and \(\Psi_{GR}\), for mean values \(r_{GR} = 10\) and \(\Psi_{GR} = 45^\circ\).
Figure 8. Probability density function for positions \(x_{GR}\) and \(y_{GR}\).
Linear Navigation Law
The other contribution of this project consists of suggesting a solution to the navigation-tracking problem using the linear navigation law. In our formulation, the navigation law is defined in terms of the orientation angle or the angular velocity, which is more suitable for robotics applications. This formulation allows an easy integration between the navigation-tracking and obstacle avoidance modes. The integration is accomplished by tuning the control variables of the proportional navigation law. The aim of the navigation law is to make the robot's angular velocity proportional to the rate of turn of the angle of the line of sight robot-goal, which means that the robot's steering angle is given by:

\[ \theta_R(t) = N\,\Psi_{GR}(t) + c \]
where \(\Psi_{GR}(t)\) is the robot-goal line-of-sight angle, as shown in Figure 5. \(N\) is a positive real number called the proportionality factor or navigation constant, with \(N \geq 1\); note that \(N = 1\) corresponds to pure pursuit. \(c\) is a real constant called the deviation constant. The aim of the controller is to drive the line-of-sight angle rate to zero, so that the line of sight robot-goal reaches an asymptotically stable equilibrium solution that results in a decreasing range. In some situations, this equilibrium solution approximates the goal's orientation angle. The navigation constant \(N\) plays an important role: different values of \(N\) result in different paths. For small values of \(N\), small path corrections are required at the beginning of the navigation-tracking process; however, larger corrections are required near the interception. A rendezvous behavior is obtained for larger values of \(N\), where the robot's path tends to be more curved at the beginning, and large path corrections are needed; these corrections decrease with time. The robot navigating under the proportional navigation law reaches the goal from almost all initial states. Several examples illustrating the method are shown in our simulation. The MATLAB code implementing the navigation law is included in the appendix, and its corresponding simulation results are shown in Figures 9 and 10.
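The following MATLAB fragment is a minimal sketch of tracking under this law; it is not the appendix code, and the speeds, gains, and initial positions are assumed values. At each step the line-of-sight angle is computed, the steering angle is set to \(N\,\Psi_{GR} + c\), and both vehicles advance one Euler step.

% Tracking sketch under the linear navigation law theta_R = N*Psi + c.
T = 0.05; N = 3; c = 0;            % sampling time and law constants
pR = [0; 0];  pG = [10; 5];        % robot and goal positions (assumed)
vR = 1.5; vG = 1.0; thG = 0.3;     % speeds and goal heading (assumed)
for k = 1:500
    Psi = atan2(pG(2)-pR(2), pG(1)-pR(1));   % line-of-sight angle
    thR = N*Psi + c;                         % navigation law
    pR  = pR + vR*[cos(thR); sin(thR)]*T;    % robot step
    pG  = pG + vG*[cos(thG); sin(thG)]*T;    % goal step (smooth path)
    if hypot(pG(1)-pR(1), pG(2)-pR(2)) < 0.2
        break;                               % goal reached
    end
end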
Chapter 7
KALMAN FILTERING IMPLEMENTATION IN NAVIGATION/TRACKING
Estimation of the position of the vehicle with respect to the external world is fundamental to navigation. Estimation theory provides the basic set of tools for position estimation and environmental modeling. These tools provide an elegant and formally sound method for combining internal and external sensor information from different sources, operating at different rates. With the Kalman filter as the basis for these processes, a predict-match-update cycle is derived as a framework for perception.
Position estimation also includes a model of errors of the position estimation process.
Thus the estimated position is accompanied by an estimate of uncertainty, in the form of a covariance matrix. The covariance matrix makes it possible to correct the estimated position using a Kalman filter. The correction is provided as a side effect in the process of dynamic modeling of the local environment.
The experiment performed in this project is path planning for a robot that follows a target which is moving under uncertainty. The probabilistic estimation is performed by recurring stages of prediction and update, with implementation of odometry and the discrete Kalman filter.
In general, the configuration of a robot can be described by six parameters: three-dimensional Cartesian coordinates plus the three Euler angles (roll, pitch, and yaw). Throughout this experiment, we consider robots operating on a planar surface. The state space of such systems is three-dimensional: \((x_R, y_R, \theta_R)\). The angle \(\theta_R\) is the orientation of the vehicle heading with respect to the X axis [9].
This is a simulation of a virtual robot following a target. The main focus of this project is on the implementation of the Kalman filter for successfully estimating the next step and reaching the moving destination.
The target moves in an uncertain manner, based on predicted values obtained from the Kalman filter equations as well. The Kalman filter equations predict the next-step values for the moving target. The observed values are obtained from one of the assumptions made in the experiment.
The predicted state vector values for the robot depend heavily on the target's predicted next state. Here, partial prediction is performed on account of the target's movement: since orientation is the point of focus in moving the robot, the next state values of the robot are predicted by following the orientation of the target. The translation of the robot is then updated depending on the orientation of the target and the equations of motion.
The uncertainty estimate is based on a model of the errors which corrupt the prediction and observation processes. Estimating these errors is both difficult and essential to the function of such a system.
The uncertainty estimate provides two crucial roles:
1) the tolerance bounds for matching observations to predictions, and
2) the relative strength of prediction and observation when calculating a new estimate.
Because covariance determines the tolerance for matching, system performance will degrade rapidly if we under-estimate the covariance. On the other hand, over-estimating the covariance may increase the computing time for finding a match. Here, the covariance values are stored in the array "arr", which helps to predict the next-step covariance values. Later, the covariance values are updated based on the discrete Kalman filter update equations [1].
The state transition matrices for the robot and the destination are kept different in this experiment. This determines the interval after which the next values of the robot and target movement are predicted. Also, the measurement noise and the process noise are user-defined constants.
A Kalman Filter Model for Odometric Position Estimation [1]
The robot estimates and commands a state vector composed of the incremental translation and rotation \((x_R, y_R, \theta_R)\) and their derivatives.
The robot also maintains estimates of the absolute position and orientation (the pose) and its uncertainty, represented by a covariance. The "pose" of a vehicle is defined as the estimated values for position and orientation:

\[ X = \begin{bmatrix} x_R \\ y_R \\ \theta_R \end{bmatrix} \]

The uncertainty in pose is represented by a covariance matrix:

\[ C_x = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} & \sigma_{x\theta} \\ \sigma_{yx} & \sigma_y^2 & \sigma_{y\theta} \\ \sigma_{\theta x} & \sigma_{\theta y} & \sigma_\theta^2 \end{bmatrix} \]
The vehicle controller responds to commands. The command to correct the estimated position applies a Kalman filter to the pose using the "innovation" \(\Delta Y\):

\[ X \leftarrow X - K\,\Delta Y, \qquad C_x \leftarrow C_x - K\,C_x \]
The Kalman gain matrix is a 3x3 matrix which tells how much of the change in each parameter should be taken into account in correcting the other parameters. The process for estimating and correcting the vehicle's pose and uncertainty from odometry is also described.
Estimating Position and Orientation and their corresponding Uncertainties
The odometric position estimation process is activated at a regular interval. The process requests the translator to provide the change in translation, \(\Delta S = [\Delta x_R, \Delta y_R]\), and the change in rotation, \(\Delta\theta_R\). The process also reads the system real-time clock to determine the change in time, \(T_k\). The accumulated translation and rotation are calculated as:

\[ S \leftarrow S + \Delta S \]

The vehicle pose is then updated by means of algebraic manipulation to calculate the average orientation during a cycle, given by the previous estimate plus half the incremental change. The overall state vector is updated as:

\[ X \leftarrow X + \Delta X \]
The estimated position is accompanied by an estimate of its uncertainty, represented by a covariance matrix \(C_X\). Updating the position uncertainty requires expressing a linear transformation which estimates the update in position:

\[ Y = \begin{bmatrix} \Delta S \\ \Delta\theta \end{bmatrix} \]

The contribution of \(Y\) to \(X\) can be expressed in linear form using a matrix which expresses the sensitivity of each parameter of \(X\) to the observed values, by use of the Jacobian of \(X\) with respect to \(Y\). The uncertainty in orientation also contributes to an uncertainty in position; this is also captured by a Jacobian \(J\). This linear expression permits the covariance of the estimated position to be updated by:

\[ C_X \leftarrow \nabla X\; C_X\; \nabla X^T + J \begin{bmatrix} \sigma_s^2 & 0 \\ 0 & \sigma_a^2 \end{bmatrix} J^T + C_w \]
These matrices constitute a discrete model of the odometric process. They express the sensitivity of the covariance matrix to uncertainty in the terms \(\Delta S\) and \(\Delta\theta\). For pose estimation, it is generally the case that an uncertainty in orientation strongly contributes to an uncertainty in Cartesian position. In particular, the uncertainty in orientation dominates the odometric error model [1].
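To make the propagation concrete, the following MATLAB sketch performs one cycle of this covariance update for the pose \(X = [x, y, \theta]^T\) with increments \(\Delta S\) and \(\Delta\theta\). The Jacobians follow the update \(x \leftarrow x + \Delta S\cos\theta\), \(y \leftarrow y + \Delta S\sin\theta\), \(\theta \leftarrow \theta + \Delta\theta\); the noise magnitudes and the omission of the additive \(C_w\) term are simplifying assumptions.

% One cycle of odometric covariance propagation (assumed values).
th  = pi/6;  dS = 0.05;  dth = 0.01;     % pose angle and increments
Cx  = 0.01*eye(3);                       % current pose covariance
Jx  = [1 0 -dS*sin(th);                  % Jacobian of the update w.r.t. X
       0 1  dS*cos(th);
       0 0  1];
Jy  = [cos(th) 0;                        % Jacobian w.r.t. Y = [dS; dth]
       sin(th) 0;
       0       1];
sig_s = 0.01; sig_a = 0.005;             % odometric noise std. devs (assumed)
Cy  = diag([sig_s^2, sig_a^2]);          % covariance of the increments
Cx  = Jx*Cx*Jx' + Jy*Cy*Jy';             % propagated pose covariance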
Correcting the Estimated Position from Observations of the Environment using Kalman gain
Different sensing modalities provide different kinds of information about position. Each of the different kinds of observations can be used to correct an estimate of the position and orientation of a vehicle. The choice of perceptual modality depends on the environment, the task, and the available computing power [1].
The new information that can be introduced into the estimated position is the difference between the observed coordinates and the predicted coordinates. This information, called the innovation, is denoted by \(\Delta Y\).
This innovation is a composition of three sources of errors:
1) error in the perception of the target, \(C_o\);
2) error in the knowledge of the target's position, \(C_b\);
3) error in the estimated position of the vehicle, \(C_x\).

A direct computation of the new uncertainty in the vehicle position \(C_x\) is given by:

\[ C_x = \left( C_b^{-1} + C_o^{-1} \right)^{-1} \]
By means of some algebraic manipulation, it is possible to transform this update into a recursive form expressed by the Kalman Filter. In the recursive form, a Kalman Gain is computed as:
\[ K = \hat{C}_x\,\left( C_b + C_o \right)^{-1} \]

and the position and its uncertainty are given by the familiar forms [1]:

\[ X = \hat{X} - K\,\Delta Y, \qquad C_x = \hat{C}_x - K\,\hat{C}_x \]

Some of the assumptions made in this project are:

1) The velocity is kept constant.
The equation of motion used in this experiment is:

\[ x_{k+1} = x_k + v\cos(\theta_k)\,T, \qquad y_{k+1} = y_k + v\sin(\theta_k)\,T \]

where \(T\) is the sampling time, \(v\) is the constant velocity of the robot/target, \(k\) is the a priori state index in the Kalman filter, and \(\theta_k\) is the orientation of the robot at time \(k\). Here we assume the velocity of the robot and the target to be constant. Hence, the translational movement of the robot and the target remains considerably the same in each recurring step.
2) The first two steps are user-defined in order to be able to find the covariance.

In this experiment, there are three variables of motion: x and y for translation and θ for orientation. Hence the covariance matrix is a 3x3 diagonal matrix, which needs three different values in order to compute the covariance. Hence, the first two
values of the robot/target movement are user-defined, whereas the third step onwards is predicted and updated by the rules of the Kalman filter, where the covariance matrix is calculated at each step based on the previous, present, and next-state predicted values.
3) Since this is a simulation of the actual vehicle controller, the observation values provided by the sensors are modeled here by adding a noise value to the predicted value. This observed value is used to obtain the updated value.
Chapter 8
SIMULATION RESULTS
Implementation of Navigation Law Equations
This section shows the various paths covered by the robot to eventually reach its target by use of the linear navigation law. In the following simulation graphs, the red color depicts the robot path and the blue color depicts the target path. Here, N corresponds to the proportionality factor, whereas c is the deviation constant. The larger the value of N, the more pronounced the initial curve in the graph, which depicts the need for large path corrections.
For variable N and constant c=0
Simulation 1: N=1, c=0    Simulation 2: N=3, c=0
Simulation 3: N=10, c=0

Figure 9. Linear Navigation law simulation for constant c and variable N.
For constant N=3 and variable c
Simulation 1: N=3, c=10    Simulation 2: N=3, c=-25

Figure 10. Linear Navigation law simulation for constant N and variable c.
Hence, the simulation graphs above show how changing the values of N, the proportionality factor, and c, the deviation constant, affects the path tracked by the robot to reach its target.
Representation of Error Ellipses
The following figure shows the results obtained for error ellipses from the implementation of the Kalman filter for this project. They depict the effect of uncertainties in odometry.
Figure 11. Simulation results depicting uncertainties through error ellipses.
The above figure includes a zoomed view of the error ellipses, showing 5 recurring steps of the Kalman filter implementation. The red dot depicts the true/observed value, whereas the inverted triangle is the value predicted by use of the Kalman filter.
Robot Target Path
The starting point of the robot and the target are encircled in red.
[Figure]
12. Robot target path
This result shows the paths of the robot and the target, where the robot path is shown in green and the target path in red. The "+" and "*" signs show the locations of the target and the robot, respectively, at discrete time intervals. The robot follows the target path, as shown by its orientation angle, and the translational movements of the robot and the target stay essentially the same. Successful implementation of the Kalman filter is also shown in predicting and updating the next step of the target as well as the robot.
Covariance Distribution
[Figure]
13. Covariance matrix distribution
This simulation result shows the covariance matrix distribution. Statistical analyses of multivariate data often involve exploratory studies of the way in which the variables change in relation to one another, and this may be followed up by explicit statistical models involving the covariance matrix of the variables. Thus the estimation of covariance matrices directly from observational data plays two roles: estimates of covariance matrices are required at the initial stages of principal component analysis and factor analysis, and are also involved in versions of regression analysis that treat the dependent variables in a data set, jointly with the independent variable, as the outcome of a random sample.
The covariance matrix here reduces to a variance matrix, since only the diagonal terms are retained, under the assumption that the off-diagonal terms of the covariance matrix are uncorrelated.
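A short MATLAB sketch of this diagonal reduction, as done with X_C(:,:,arr).*eye(3) in the Appendix code; the example matrix C is an illustrative assumption:

C = [0.9 0.1 0.0; 0.1 0.8 0.1; 0.0 0.1 0.7]; % example covariance matrix (assumed)
Cdiag = C .* eye(3); % keep only the diagonal variance terms, zeroing off-diagonals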
Error Ellipses
[Figure]
14. Error ellipse for the first and the 15th step robot/target position
This is the error ellipse for the first step of the covariance matrix.
Error ellipses are derived from the covariance matrix, and provide a graphical means of viewing the results of a network adjustment. Error ellipses can show [6]:
Orientation weakness (minor axes pointing to the datum)
Scale weakness (major axes pointing to the datum)
etc.
Generally the standard deviation in x, y, and z is used as a guide to the precision of a point; the error ellipse gives more detailed information: the maximum and minimum standard deviations and their associated directions. The orientation of the ellipse is essentially given by the correlation term.
The error ellipse is an approximation of the pedal curve, which is the true shape of the standard error in all directions.
A standard error ellipse shows the region, if centered at the true point position, in which the least squares estimate falls with a confidence of 39.4%. Since the truth is rarely known, the error ellipse is taken to represent the 39.4% confidence region and, when drawn, is centered at the least squares estimate of the position (rather than the true position). To obtain different confidence regions, the lengths of the ellipse axes are simply multiplied by an appropriate factor:

Confidence region:  39.4%   86.5%   95.0%   98.9%
Factor:             1.000   2.000   2.447   3.000

These multiplicative factors are determined from the χ² distribution. Generally the 95% confidence region ellipses are plotted.
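These factors can be checked with a short MATLAB sketch, using the relation P = 1 - exp(-k²/2) for the χ² distribution with 2 degrees of freedom:

k = [1.000 2.000 2.447 3.000];  % axis scaling factors from the table above
P = 1 - exp(-k.^2/2)            % returns approximately 0.394 0.865 0.950 0.989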
Absolute error ellipses show the effects of the datum: for example, points further away from the datum point in a survey network generally have larger error ellipses than points close to the datum point.
Relative error ellipses are not influenced by the choice of datum. They are still derived from the covariance matrix, only they involve two points and an exercise in propagation of variance [6].
The second figure above shows the error ellipse for the step-15 covariance matrix. The equations used for calculating the error ellipses are given below. It can be shown that the square roots of the eigenvalues of the covariance matrix correspond to the error ellipse axis lengths, and the eigenvectors corresponding to those eigenvalues define the error ellipse axis directions.
For example, for the covariance matrix

C = | σn²   σne |
    | σne   σe² |

the eigenvalues λ satisfy

λ² - (σn² + σe²)λ + (σn²σe² - σne²) = 0 [11]

which gives

λ1,2 = ½[(σn² + σe²) ± √((σn² - σe²)² + 4σne²)]

The eigenvectors corresponding to λ1 and λ2 define the directions of the error ellipse axes, and the angle φ the ellipse makes to the N axis satisfies

tan 2φ = 2σne / (σn² - σe²) [11]
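A short MATLAB sketch of these relations is given below; the covariance values are illustrative assumptions:

Cne = [0.9 0.3; 0.3 0.5];  % [sigma_n^2 sigma_ne; sigma_ne sigma_e^2] (assumed)
[V, L] = eig(Cne);         % eigenvectors give the ellipse axis directions
axlen = sqrt(diag(L));     % semi-axis lengths are the square roots of the eigenvalues
phi = 0.5*atan2(2*Cne(1,2), Cne(1,1) - Cne(2,2)); % angle the ellipse makes to the N axis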
Chapter 9
CONCLUSION
This project presents a successful implementation of the principles of odometry and
Kalman filtering on a robot to track and reach a moving target. The robot uses a linear navigation law to navigate in the workspace. Several simulation examples using MATLAB are shown to illustrate the approach. The effects of uncertainties have been taken into account and represented by error ellipses. Factors affecting the robot path and the effects of increasing uncertainties are also considered and reported. Finally, this project elaborates on the importance of Kalman filtering in robot path planning and explains the prediction and correction (update) steps performed under the effects of uncertainty or noise to eventually track the moving target.
APPENDIX
MATLAB Code Implementing Kalman Filter to show Error Ellipses
T = 0.5;                  % sampling time
A = [1 T; 0 1];           % state transition matrix
B = [0; T];               % input matrix
C = [1 0; 0 1];           % observation matrix
D = 0;
Q = [0.2 0.05; 0.05 0.1]; % process noise covariance (Q)
R = [0.5 0; 0 0.5];       % sensor noise covariance (R)
u = 1.5;                  % constant input
randn('seed',0);
P = [1 0; 0 2];           % initial error covariance
xhat = [2; 4];            % initial condition on the state estimate
x0 = [2.9; 3.5];          % initial true state
for i = 1:0.1:10
    x1 = A*x0 + B*u;      % propagate the true state
    % Prediction
    xhat1 = A*xhat + B*u; % x[n+1|n]
    P1 = A*P*A' + Q;      % P[n+1|n]
    % Update
    y1 = C*x1;            % measurement
    M = y1 - C*xhat1;     % innovation
    S = C*P1*C' + R;      % innovation covariance
    Z = P1*C'*inv(S);     % Kalman gain
    xhat11 = xhat1 + Z*M; % x[n|n]
    P11 = P1 - Z*C*P1;    % P[n|n]
    pause
    xhat = xhat11;
    plot(x1(1), x1(2), 'r.')         % true value (red dot)
    hold on
    plot(xhat11(1), xhat11(2), 'kv') % updated estimate (inverted triangle)
    x0 = x1;
    P = P11;
    % error ellipse from the updated covariance
    n = P11;
    lm = eig(n);                     % eigenvalues of the covariance
    majmin = sqrt(lm);               % semi-axis lengths
    phi = 0.5*atan2(2*n(1,2), n(1,1)-n(2,2)); % ellipse orientation (diagonal terms are already variances)
    a = majmin(1);
    b = majmin(2);
    timpo = 0:0.05:30;
    x = xhat1(1) + a*cos(timpo)*cos(phi) - b*sin(timpo)*sin(phi);
    y = xhat1(2) + a*cos(timpo)*sin(phi) + b*sin(timpo)*cos(phi);
    plot(x, y, 'k')
    hold on
end
MATLAB Code Implementing the Navigation Law

vG = 2;               % velocity of the goal/target
vR = 4;               % velocity of the robot
thetaG = pi/2;        % orientation angle of the goal
xG0 = 20; yG0 = 20;   % initial position of goal (x,y)
xR0 = 0; yR0 = 0;     % initial position of robot (x,y)
xG = xG0; yG = yG0;
xR = xR0; yR = yR0;
h = 0.25;             % discrete time interval when measurements taken
tf = 100;             % total number of steps
N = 3;
c = -25;
for time = 0:h:tf
    pause
    xG = vG*cos(thetaG)*h + xG0;  % new values of goal position obtained
    yG = vG*sin(thetaG)*h + yG0;
    psi = atan2(yG-yR, xG-xR);    % line-of-sight angle to the goal
    thetaR = N*psi + c;           % linear navigation law
    xR = vR*cos(thetaR)*h + xR0;  % new values of robot position obtained
    yR = vR*sin(thetaR)*h + yR0;
    plot(xG, yG, '.')
    hold on
    plot(xR, yR, 'r.')
    xG0 = xG; % new values of robot and goal position are now old values for next step
    yG0 = yG;
    xR0 = xR;
    yR0 = yR;
end
MATLAB Code Implementing Kalman Filter for Path Planning

vr = 1;                % velocity of robot
vd = 0.9;              % velocity of destination
tr(1) = pi;            % initial orientation of robot
xr(1) = vr*cos(tr(1)); % initial X position of robot
yr(1) = vr*sin(tr(1)); % initial Y position of robot
wr(1) = vr*tr(1);
td(1) = pi;            % initial orientation of destination
xd(1) = vd*cos(td(1)); % initial X position of destination
yd(1) = vd*sin(td(1)); % initial Y position of destination
wd(1) = vd*td(1);
tr(2) = pi;
xr(2) = vr*cos(tr(2));
yr(2) = vr*sin(tr(2));
wr(2) = vr*tr(2);
td(2) = pi;
xd(2) = vd*cos(td(2));
yd(2) = vd*sin(td(2));
wd(2) = vd*td(2);
X_R(:,1) = [xr(1); yr(1); wr(1)];
Y_R(:,1) = [xr(1); yr(1); wr(1)];
X_D(:,1) = [xd(1); yd(1); wd(1)];
Y_D(:,1) = [xd(1); yd(1); wd(1)];
X_R(:,2) = [xr(2); yr(2); wr(2)];
Y_R(:,2) = [xr(2); yr(2); wr(2)];
X_D(:,2) = [xd(2); yd(2); wd(2)];
Y_D(:,2) = [xd(2); yd(2); wd(2)];
X_C(:,:,1) = 0.84*eye(3);  % initial covariance constant of the robot
X_CD(:,:,1) = 0.74*eye(3); % initial covariance constant of the destination
X_C(:,:,2) = 0.93*eye(3);  % initial covariance constant of the robot
X_CD(:,:,2) = 0.93*eye(3); % initial covariance constant of the destination
X_W = 0.3*eye(3);          % measurement noise defined
H = 0.492*eye(3);          % state transition matrix of robot
HD = 0.5*eye(3);           % state transition matrix of destination
arr = 1; m = 1; n = 1;
% Kalman filter implementation
for i = 2:1:50
    % predict the moving destination
    wd(i+1) = HD(3,3)*vd*td(i) + wd(i);
    td(i+1) = wd(i+1)/2;
    xd(i+1) = HD(1,1)*vd*cos(td(i+1)) + xd(i);
    yd(i+1) = HD(2,2)*vd*sin(td(i+1)) + yd(i);
    X_D(:,i+1) = [xd(i+1); yd(i+1); wd(i+1)];
    Y_D(:,i+1) = [xd(i+1); yd(i+1); wd(i+1)] + [0.2; 0.1; 0.1]; % observed/measured value for moving destination
    C_D = [X_D(:,i-1) X_D(:,i) X_D(:,i+1)];
    CD(:,:,m) = cov(C_D);
    X_CD(:,:,arr) = cov(C_D); % calculated covariance for moving destination
    X_CD(:,:,arr) = (HD*X_CD(:,:,arr)*HD') + X_CD(:,:,arr); % new predicted value for covariance of moving destination
    % predict the robot
    tr(i+1) = td(i+1);
    wr(i+1) = 2*tr(i+1);
    xr(i+1) = H(1,1)*vr*cos(tr(i+1)) + xr(i);
    yr(i+1) = H(2,2)*vr*sin(tr(i+1)) + yr(i);
    X_R(:,i+1) = [xr(i+1); yr(i+1); wr(i+1)];
    Y_R(:,i+1) = [xr(i+1); yr(i+1); wr(i+1)] + [0.2; 0.1; 0.1];
    C_R = [X_R(:,i-1) X_R(:,i) X_R(:,i+1)];
    CR(:,:,n) = cov(C_R);
    X_C(:,:,arr) = cov(C_R); % calculated covariance
    X_C(:,:,arr) = (H*X_C(:,:,arr)*H') + X_C(:,:,arr);
    % update target
    X_D(:,i+1) = X_D(:,i+1) + ((X_CD(:,:,arr).*eye(3))*HD' / ...
        (HD*(X_CD(:,:,arr).*eye(3))*HD' + X_W) * (Y_D(:,i+1) - HD*X_D(:,i+1))); % updated value of X_D: destination
    % covariance measurement update of moving destination, P = P - K*H*P
    X_CD(:,:,arr) = (X_CD(:,:,arr).*eye(3)) - ((X_CD(:,:,arr).*eye(3))*HD' / ...
        ((HD*(X_CD(:,:,arr).*eye(3))*HD') + X_W) * HD * (X_CD(:,:,arr).*eye(3))); % updated value of X_CD
    % update robot
    X_R(:,i+1) = X_R(:,i+1) + ((X_C(:,:,arr).*eye(3))*H' / ...
        (H*(X_C(:,:,arr).*eye(3))*H' + X_W) * (Y_R(:,i+1) - H*X_R(:,i+1))); % updated value of X_R
    % covariance measurement update of the robot, P = P - K*H*P
    X_C(:,:,arr) = (X_C(:,:,arr).*eye(3)) - (((X_C(:,:,arr).*eye(3))*H') / ...
        ((H*(X_C(:,:,arr).*eye(3))*H') + X_W) * H * (X_C(:,:,arr).*eye(3))); % updated value of X_C
    m = m + 1;
    n = n + 1;
    arr = arr + 1;
end
subplot(2,1,1)
for p = 1:1:arr-1 % arr was incremented once past the last stored covariance
    surfc(X_CD(:,:,p));
    hold on
end
hold off
subplot(2,1,2)
plot(X_R(1,:), X_R(2,:), 'r'); % robot path
hold on
plot(X_R(1,:), X_R(2,:), '+'); % robot positions at discrete time intervals
plot(X_D(1,:), X_D(2,:), 'g'); % destination path
plot(X_D(1,:), X_D(2,:), '*'); % destination positions at discrete time intervals
plot(X_D(1,1), X_D(2,1), 'O'); % starting points encircled
plot(X_R(1,1), X_R(2,1), 'O');
hold off
REFERENCES
[1] Crowley James L., "Mathematical Foundations of Navigation and Perception for an Autonomous Mobile Robot". International Workshop on Reasoning with Uncertainty in Robotics, University of Amsterdam, The Netherlands, December 4-6, 1995.
[2] Choset Howie, Lynch Kevin, Hutchinson Seth, Kantor George, Burgard Wolfram, Kavraki Lydia, Thrun Sebastian, "Principles of Robot Motion: Theory, Algorithms and Implementation, Chapter 8 Kalman Filtering", The MIT Press, Cambridge, Massachusetts, London, England, 2004.
[3] Kleinbauer Rachel, "Kalman Filtering Implementation in Matlab". Study report in the field of Geodesy and Geoinformatics at Universität Stuttgart, Helsinki, November 2004.
[4] Borenstein J., Everett H. R., Feng L., Navigating Mobile Robots: Systems and Techniques, Chapter 5 Odometry and Other Dead Reckoning Methods, A K Peters, Wellesley, MA., 1996.
[5] Fossen Thor I., Perez Tristan, “Kalman filtering for positioning and heading control of Ships and offshore rigs.” IEEE Control Systems Magazine, December 2009: 33, 40-
42.
[6] Jones Nicole, “Error Ellipses”, Department of Geomatics, The University of
Melbourne Australia, 25 May 1999. February 2010 http://www.geom.unimelb.edu.au/nicole/surveynetworks/02a/notes09_01.html
[7] Nathaniel Bowditch, LL.D., "Dead Reckoning", The American Practical Navigator: An Epitome of Navigation, National Imagery and Mapping Agency, Bethesda, Maryland, 1995.
[8] G. W. Lucas, "A Tutorial and Elementary Trajectory Model for the Differential Steering System of Robot Wheel Actuators," SourceForge, 2001, http://rossum.sourceforge.net/papers/DiffSteer/DiffSteer.html.
[9] Thrun Sebastian, Burgard Wolfram, Fox Dieter, Probabilistic Robotics: Chapter 5 Robot Motion, The MIT Press, Cambridge, Massachusetts, London, England, 2006.
[10] K S Chong and L Kleeman, "Accurate odometry and error modelling for a mobile robot", IEEE International Conference on Robotics and Automation, Albuquerque USA,
April 1997, pp. 2783-2788.
[11] Herring Thomas, “12.215 Modern Navigation L14” Massachusetts Institute of
Technology, MA, 2009. March 2010. http://geoweb.mit.edu/~tah/12.215/12.215.Lec14.html
[12] F. Orderud, "Comparison of Kalman Filter Estimation Approaches for State Space Models with Nonlinear Measurements", Norwegian University of Science and Technology, Proceedings of Scandinavian Conference on Simulation and Modeling - SIMS 2005.