Kalman Filters


6.0 EXTENDED KALMAN FILTER

6.1 Introduction

An underlying assumption of the Kalman filter is that it estimates the states of a linear system from measurements that are linear functions of the states. Unfortunately, in many of the situations where we would like to use a Kalman filter, we have a non-linear system model and/or a non-linear measurement equation. Specifically, the system model is a non-linear function of the states and/or the measurements are non-linear functions of the states. Usually, the non-linearities do not extend to the system disturbances and the measurement noise.

Because of the attractiveness of the Kalman filter, designers have developed a set of mathematics to extend Kalman filter theory to situations where the system model and/or measurement model are non-linear functions of the state. The resultant Kalman filter is referred to as the extended Kalman filter. As we will see, the extended Kalman filter uses the non-linear system model to compute the predicted state estimate, $\hat{x}(k+1|k)$, and the non-linear measurement model to form the predicted measurement, $\hat{y}(k+1|k)$. The smoothed state then takes on the standard Kalman filter form similar to Equation 5-78.

6.2 Extended Kalman Filter Development

6.2.1 Problem Definition

We are interested in designing a Kalman filter for a system defined by the state equation

$$x(k+1) = f\big(x(k)\big) + G(k)\,w(k) \tag{6-1}$$

and the measurement equation

$$y(k+1) = h\big(x(k+1)\big) + v(k+1). \tag{6-2}$$

In Equations 6-1 and 6-2, $w(k)$ and $v(k+1)$ are uncorrelated, zero-mean, white random processes. They are also uncorrelated with the initial state, $x(0)$. $G(k)$ is a known matrix, and $f\big(x(k)\big)$ and $h\big(x(k+1)\big)$ are known, vector, non-linear functions of the state. They are of the forms

$$f\big(x(k)\big) = \begin{bmatrix} f_1\big(x(k)\big) \\ f_2\big(x(k)\big) \\ \vdots \\ f_n\big(x(k)\big) \end{bmatrix}
= \begin{bmatrix} f_1\big(x_1(k), x_2(k), \ldots, x_n(k)\big) \\ f_2\big(x_1(k), x_2(k), \ldots, x_n(k)\big) \\ \vdots \\ f_n\big(x_1(k), x_2(k), \ldots, x_n(k)\big) \end{bmatrix} \tag{6-3}$$

and

$$h\big(x(k+1)\big) = \begin{bmatrix} h_1\big(x(k+1)\big) \\ h_2\big(x(k+1)\big) \\ \vdots \\ h_m\big(x(k+1)\big) \end{bmatrix}
= \begin{bmatrix} h_1\big(x_1(k+1), x_2(k+1), \ldots, x_n(k+1)\big) \\ h_2\big(x_1(k+1), x_2(k+1), \ldots, x_n(k+1)\big) \\ \vdots \\ h_m\big(x_1(k+1), x_2(k+1), \ldots, x_n(k+1)\big) \end{bmatrix}. \tag{6-4}$$

6.2.2 Filter Development

We begin by separating $x(k)$ and $y(k+1)$ into two parts as

$$x(k) = x_0(k) + \delta x(k) \tag{6-5}$$

and

$$y(k+1) = y_0(k+1) + \delta y(k+1). \tag{6-6}$$

$x_0(k)$ and $y_0(k+1)$ are termed the nominal values of $x(k)$ and $y(k+1)$, respectively, while $\delta x(k)$ and $\delta y(k+1)$ are termed perturbations about $x_0(k)$ and $y_0(k+1)$.

Next we expand $f\big(x(k)\big)$ and $h\big(x(k+1)\big)$ into Taylor series about $x_0(k)$ and $x_0(k+1)$ as

$$f\big(x(k)\big) = f\big(x_0(k) + \delta x(k)\big) = f\big(x_0(k)\big) + \left.\frac{\partial f(x)}{\partial x}\right|_{x = x_0(k)} \delta x(k) + \text{H.O.T.} \tag{6-7}$$

and

$$h\big(x(k+1)\big) = h\big(x_0(k+1) + \delta x(k+1)\big) = h\big(x_0(k+1)\big) + \left.\frac{\partial h(x)}{\partial x}\right|_{x = x_0(k+1)} \delta x(k+1) + \text{H.O.T.} \tag{6-8}$$

We next drop the high order terms (H.O.T.), recognize that $x(k) = x_0(k) + \delta x(k)$, and write

$$f\big(x(k)\big) \approx f\big(x_0(k)\big) + F(k)\,\delta x(k) \tag{6-9}$$

and

$$h\big(x(k+1)\big) \approx h\big(x_0(k+1)\big) + H(k+1)\,\delta x(k+1). \tag{6-10}$$

In Equations 6-9 and 6-10, $F(k)$ and $H(k+1)$ are defined by

$$F(k) = \left.\frac{\partial f(x)}{\partial x}\right|_{x = x_0(k)}
= \left.\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}\right|_{x = x_0(k)} \tag{6-11}$$

and

$$H(k+1) = \left.\frac{\partial h(x)}{\partial x}\right|_{x = x_0(k+1)}
= \left.\begin{bmatrix} \frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} & \cdots & \frac{\partial h_1}{\partial x_n} \\ \frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} & \cdots & \frac{\partial h_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial h_m}{\partial x_1} & \frac{\partial h_m}{\partial x_2} & \cdots & \frac{\partial h_m}{\partial x_n} \end{bmatrix}\right|_{x = x_0(k+1)} \tag{6-12}$$

where the notation $\partial g_i/\partial x_j$ is interpreted as

$$\frac{\partial g_i}{\partial x_j} = \left.\frac{\partial g_i\big(x_1, x_2, \ldots, x_n\big)}{\partial x_j}\right|_{x_1 = x_{10},\ \ldots,\ x_n = x_{n0}}. \tag{6-13}$$
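As a simple illustration of Equations 6-11 and 6-13, consider a hypothetical two-state model (not one used elsewhere in this chapter) with $f\big(x(k)\big) = \big[x_1(k) + T x_2(k),\ x_2(k) - T\sin x_1(k)\big]^T$. Differentiating each element with respect to each state and evaluating at the nominal state gives the Jacobian; the MATLAB sketch below, with assumed values for $T$ and $x_0(k)$, also checks that the first-order approximation of Equation 6-9 is close to the exact function for a small perturbation.

```matlab
% Minimal sketch: Jacobian of a hypothetical two-state non-linear model
% f(x) = [x1 + T*x2; x2 - T*sin(x1)], per Equations 6-11 and 6-13.
T  = 0.2;                               % sample period (s), assumed value
f  = @(x) [x(1) + T*x(2);               % f1(x1,x2)
           x(2) - T*sin(x(1))];         % f2(x1,x2)
Fj = @(x) [1,            T;             % [df1/dx1  df1/dx2]
           -T*cos(x(1)), 1];            % [df2/dx1  df2/dx2]

x0 = [0.5; 1.0];                        % nominal state x0(k) (assumed)
F  = Fj(x0);                            % F(k) of Equation 6-11 at x = x0(k)

% First-order (Equation 6-9) check against the exact function:
dx     = [0.01; -0.02];                 % a small perturbation delta-x(k)
approx = f(x0) + F*dx;                  % f(x0(k)) + F(k)*delta-x(k)
exact  = f(x0 + dx);
disp(norm(exact - approx));             % small residual: H.O.T. are negligible
```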

We want to digress to discuss the notation we have used. One of the misconceptions associated with Equations 6-5 and 6-6 is that we have some knowledge of the nominal state and measurement, $x_0(k)$ and $y_0(k+1)$, and can thus ensure that the perturbations, $\delta x(k)$ and $\delta y(k+1)$, are small. This, in turn, allows us to claim that the H.O.T. of the Taylor series expansions of $f\big(x(k)\big)$ and $h\big(x(k+1)\big)$ are small and can thus be ignored. In fact, we have no knowledge of the nominal states or measurements (if we did, we would not need to build a Kalman filter!). We introduce the notation as a mathematical tool that facilitates our derivation. We temporarily assume we know the nominal state and measurement and that they are close to the actual state and measurement. This allows us to claim that the perturbations are small and thus that the first-order approximations to the Taylor series expansions are valid. We cannot determine the validity of this assumption and its ramifications beforehand. Because of this, we cannot always be sure that an extended Kalman filter that we design will work well. While this may be discouraging, we should take solace in the fact that designers have been designing extended Kalman filters for the past forty or so years and that most of these filters work very well.

Returning to our development, we use Equations 6-5, 6-6, 6-9 and 6-10 to rewrite the system and measurement equations as

$$x_0(k+1) + \delta x(k+1) = f\big(x_0(k)\big) + F(k)\,\delta x(k) + G(k)\,w(k) \tag{6-14}$$

and

$$y_0(k+1) + \delta y(k+1) = h\big(x_0(k+1)\big) + H(k+1)\,\delta x(k+1) + v(k+1). \tag{6-15}$$

We next separate Equations 6-14 and 6-15 into two sets of equations. One of these we term the nominal equations and the other we call the perturbation equations. The nominal equations are

$$x_0(k+1) = f\big(x_0(k)\big) \tag{6-16}$$

and

$$y_0(k+1) = h\big(x_0(k+1)\big), \tag{6-17}$$

and the perturbation equations are

$$\delta x(k+1) = F(k)\,\delta x(k) + G(k)\,w(k) \tag{6-18}$$

and

$$\delta y(k+1) = H(k+1)\,\delta x(k+1) + v(k+1). \tag{6-19}$$

For now we set the nominal equations aside and build a Kalman filter for the perturbation model. This results in the equations

$$\delta\hat{x}(k+1|k) = F(k)\,\delta\hat{x}(k|k), \tag{6-20}$$

$$\delta\hat{y}(k+1|k) = H(k+1)\,\delta\hat{x}(k+1|k), \tag{6-21}$$

$$\delta\hat{x}(k+1|k+1) = \delta\hat{x}(k+1|k) + K(k+1)\big[\delta y(k+1) - \delta\hat{y}(k+1|k)\big], \tag{6-22}$$

$$K(k+1) = P(k+1|k)\,H^T(k+1)\big[H(k+1)\,P(k+1|k)\,H^T(k+1) + R(k+1)\big]^{-1}, \tag{6-23}$$

$$P(k+1|k) = F(k)\,P(k|k)\,F^T(k) + G(k)\,Q(k)\,G^T(k) \tag{6-24}$$

and

$$P(k+1|k+1) = \big[I - K(k+1)\,H(k+1)\big]\,P(k+1|k). \tag{6-25}$$

With Equations 6-20 through 6-25 we have a means of estimating the perturbation part of the state. While this is an interesting result, it is not what we want. We want $\hat{x}(k|k)$, the estimate of $x(k)$, not $\delta\hat{x}(k|k)$, the estimate of $\delta x(k)$.

Let us consider $\hat{x}(k|k)$. Based on our earlier results, we can write $\hat{x}(k|k)$ as

$$\hat{x}(k|k) = x_0(k) + \delta\hat{x}(k|k). \tag{6-26}$$

In this equation $x_0(k)$ is the nominal state discussed earlier and $\delta\hat{x}(k|k)$ is the estimate of $\delta x(k)$ that we just derived. Equation 6-26 tells us that to form an estimate of the state we add an estimate of the perturbed state to the nominal state (we conveniently ignore the fact that we don't really know the nominal state).

If we consider the mean-squared error between $x(k)$ and $\hat{x}(k|k)$ we get

$$e_x^2 = E\Big[\big(x(k) - \hat{x}(k|k)\big)^T\big(x(k) - \hat{x}(k|k)\big)\Big] = E\Big[\big(\delta x(k) - \delta\hat{x}(k|k)\big)^T\big(\delta x(k) - \delta\hat{x}(k|k)\big)\Big] = e_{\delta x}^2. \tag{6-27}$$

Recall that the Kalman filter results in a $\delta\hat{x}(k|k)$ that minimizes $e_{\delta x}^2$. Since $e_x^2 = e_{\delta x}^2$, we tend to jump to the conclusion that we have also minimized $e_x^2$. This is not a totally correct conclusion. We can say that we have minimized $e_x^2$ under the constraint that $\hat{x}(k|k)$ is given by Equation 6-26 and $\delta\hat{x}(k|k)$ is defined by Equations 6-20 through 6-25. If we were to remove these constraints, we could conceivably find an $\hat{x}(k|k)$ such that $e_{x,\min}^2 < e_{\delta x,\min}^2$. However, we don't have a general means of finding the $\hat{x}(k|k)$ that truly minimizes $e_x^2$. Therefore, for want of anything better, we will adopt Equation 6-26. As indicated earlier, the quality of this decision can only be tested by constructing the extended Kalman filter and testing it.

If we combine Equation 6-26 with our previous results we get

$$\hat{x}(k+1|k+1) = x_0(k+1) + \delta\hat{x}(k+1|k+1) = f\big(x_0(k)\big) + F(k)\,\delta\hat{x}(k|k) + K(k+1)\big[\delta y(k+1) - \delta\hat{y}(k+1|k)\big], \tag{6-28}$$

and if we expand $f\big(\hat{x}(k|k)\big)$ into a Taylor series about $x_0(k)$ we get

$$f\big(\hat{x}(k|k)\big) = f\big(x_0(k)\big) + F(k)\big[\hat{x}(k|k) - x_0(k)\big] + \text{H.O.T.} \tag{6-29}$$

Dropping the higher order terms (H.O.T.) and recognizing $\hat{x}(k|k) - x_0(k)$ as $\delta\hat{x}(k|k)$, we get

$$f\big(\hat{x}(k|k)\big) \approx f\big(x_0(k)\big) + F(k)\,\delta\hat{x}(k|k). \tag{6-30}$$

Using this we can replace the first two terms in the last part of Equation 6-28 to yield

$$\hat{x}(k+1|k+1) = f\big(\hat{x}(k|k)\big) + K(k+1)\big[\delta y(k+1) - \delta\hat{y}(k+1|k)\big]. \tag{6-31}$$

We next turn our attention to the last term in Equation 6-31. If we use Equation 6-6 and write the predicted measurement as

$$\hat{y}(k+1|k) = y_0(k+1) + \delta\hat{y}(k+1|k), \tag{6-32}$$

we can rewrite the last term as

$$\delta y(k+1) - \delta\hat{y}(k+1|k) = y(k+1) - \hat{y}(k+1|k) \tag{6-33}$$

and the state estimation equation as

$$\hat{x}(k+1|k+1) = f\big(\hat{x}(k|k)\big) + K(k+1)\big[y(k+1) - \hat{y}(k+1|k)\big]. \tag{6-34}$$

We note that we have manipulated the state estimation equation into the form that we want. That is, we have an equation for the state estimate at stage $k+1$ in terms of the state estimate at stage $k$, the measurement at stage $k+1$, and the predicted measurement at stage $k+1$. We now need to perform some more mathematical manipulations to complete our formulation of the extended Kalman filter.

We start by examining the predicted state. If we wanted to predict the value of $x(k+1)$ given only $\hat{x}(k|k)$, we would use the original state equation of Equation 6-1. However, this equation contains the system disturbance $w(k)$, which we don't know. Using the logic from Chapter 5, we can say that since we don't know $w(k)$, our best choice for its replacement would be $E\big[w(k)\big]$, which is zero. Thus we are left with the equation

$$\hat{x}(k+1|k) = f\big(\hat{x}(k|k)\big) \tag{6-35}$$

for the predicted state at stage $k+1$ given the state estimate at stage $k$.

As we did in Chapter 5, we can characterize the error between the predicted and actual state as

$$\tilde{x}(k+1|k) = x(k+1) - \hat{x}(k+1|k) = F(k)\big[x(k) - \hat{x}(k|k)\big] + G(k)\,w(k). \tag{6-36}$$

With this we can use the results of Chapter 4 to write the covariance of $\tilde{x}(k+1|k)$ as

$$P(k+1|k) = F(k)\,P(k|k)\,F^T(k) + G(k)\,Q(k)\,G^T(k). \tag{6-37}$$

We next consider the predicted measurement, $\hat{y}(k+1|k)$. Following the logic from Chapter 5, we can relate the predicted measurement to the predicted state through the measurement model of Equation 6-2. However, we don't know $v(k+1)$. As in Chapter 5, for want of anything better we will use $E\big[v(k+1)\big] = 0$ instead. Thus we get

$$\hat{y}(k+1|k) = h\big(\hat{x}(k+1|k)\big). \tag{6-38}$$

The final topic we need to consider concerns the matrices $F(k)$ and $H(k+1)$. Recall that these matrices are given by

$$F(k) = \left.\frac{\partial f(x)}{\partial x}\right|_{x = x_0(k)} \tag{6-39}$$

and

$$H(k+1) = \left.\frac{\partial h(x)}{\partial x}\right|_{x = x_0(k+1)}. \tag{6-40}$$

To compute $F(k)$ and $H(k+1)$ we need $x_0(k)$ and $x_0(k+1)$, which we don't know. In lieu of these, we use $\hat{x}(k|k)$ in place of $x_0(k)$ and $\hat{x}(k+1|k)$ in place of $x_0(k+1)$. We can offer no rationale for these substitutions except to state that they are the only ones we have.

6.2.3 Summary

We now summarize the results of this section into the following presentation of the extended Kalman filter. Given the state and measurement models of Equations 6-1 and 6-2, along with the previously indicated properties of $w(k)$, $v(k+1)$ and $x(0)$, an extended Kalman filter that can be used to estimate the state of the system is given by the equations

$$\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + K(k+1)\big[y(k+1) - \hat{y}(k+1|k)\big], \tag{6-41}$$

$$\hat{x}(k+1|k) = f\big(\hat{x}(k|k)\big), \tag{6-42}$$

$$\hat{y}(k+1|k) = h\big(\hat{x}(k+1|k)\big), \tag{6-43}$$

$$K(k+1) = P(k+1|k)\,H^T(k+1)\big[H(k+1)\,P(k+1|k)\,H^T(k+1) + R(k+1)\big]^{-1}, \tag{6-44}$$

$$P(k+1|k) = F(k)\,P(k|k)\,F^T(k) + G(k)\,Q(k)\,G^T(k), \tag{6-45}$$

$$P(k+1|k+1) = \big[I - K(k+1)\,H(k+1)\big]\,P(k+1|k), \tag{6-46}$$

$$F(k) = \left.\frac{\partial f(x)}{\partial x}\right|_{x = \hat{x}(k|k)} \tag{6-47}$$

and

$$H(k+1) = \left.\frac{\partial h(x)}{\partial x}\right|_{x = \hat{x}(k+1|k)}. \tag{6-49}$$

Equations 6-41 through 6-49 constitute the tracking form of the extended Kalman filter. The control theoretic form of the extended Kalman filter is defined by the equations


$$\hat{x}(k+1) = f\big(\hat{x}(k)\big) + K(k)\big[y(k) - h\big(\hat{x}(k)\big)\big], \tag{6-50}$$

$$K(k) = F(k)\,P(k)\,H^T(k)\big[H(k)\,P(k)\,H^T(k) + R(k)\big]^{-1}, \tag{6-51}$$

$$P(k+1) = \big[F(k) - K(k)\,H(k)\big]\,P(k)\,F^T(k) + G(k)\,Q(k)\,G^T(k), \tag{6-52}$$

$$F(k) = \left.\frac{\partial f(x)}{\partial x}\right|_{x = \hat{x}(k)} \tag{6-53}$$

and

$$H(k) = \left.\frac{\partial h(x)}{\partial x}\right|_{x = \hat{x}(k)}. \tag{6-54}$$

The derivation of the control theoretic form of the extended Kalman filter is left to the reader. Block diagrams of the tracking and control theoretic forms of the extended Kalman filter are contained in Figures 6-1 and 6-2. It will be noted that their structure is very similar to the normal Kalman filter block diagrams of Figures 5-7 and 5-8. The difference is that the matrices $F(k)$ and $H(k)$ are replaced by their non-linear equivalents, $f\big(x(k)\big)$ and $h\big(x(k)\big)$.


Figure 6-1. Tracking Form of the Extended Kalman Filter


Figure 6-2. Control Theoretic Form of the Extended Kalman Filter
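Before turning to the properties of the filter, the following MATLAB-style sketch shows one cycle of the tracking form, Equations 6-41 through 6-49. It is a minimal outline under stated assumptions, not a reproduction of the scripts referenced later in this chapter; the function handles f, h, Fjac and Hjac (the two Jacobians) are placeholders that the designer must supply for a particular problem.

```matlab
function [xest, P] = ekf_cycle(xest, P, y, f, h, Fjac, Hjac, Q, R, G)
% One cycle of the tracking form of the extended Kalman filter
% (Equations 6-41 through 6-49). On entry xest = x_hat(k|k), P = P(k|k);
% on exit xest = x_hat(k+1|k+1), P = P(k+1|k+1). y is the measurement y(k+1).

F     = Fjac(xest);                  % F(k) evaluated at x_hat(k|k)      (6-47)
xpred = f(xest);                     % x_hat(k+1|k)                      (6-42)
Ppred = F*P*F' + G*Q*G';             % P(k+1|k)                          (6-45)

H     = Hjac(xpred);                 % H(k+1) evaluated at x_hat(k+1|k)  (6-49)
ypred = h(xpred);                    % y_hat(k+1|k)                      (6-43)

K     = Ppred*H' / (H*Ppred*H' + R); % Kalman gain K(k+1)                (6-44)
xest  = xpred + K*(y - ypred);       % x_hat(k+1|k+1)                    (6-41)
P     = (eye(length(xest)) - K*H)*Ppred;   % P(k+1|k+1)                  (6-46)
end
```

For the target tracking examples that follow, f, h and the two Jacobians come from the system and measurement models developed in Section 6.3.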

6.2.4 Properties of the Extended Kalman Filter

We now take some time to discuss the properties of the extended Kalman filter. We start by noting that the extended Kalman filter is intuitively appealing. It has the basic structure of the normal Kalman filter but also incorporates the non-linear functions of the system and measurement models. Since it incorporates these non-linear functions, the extended Kalman filter is a non-linear filter. That is, the state prediction and predicted measurement equations are non-linear. Because of this, the state estimate equation is also a non-linear function of the previous state estimate.

Although not as obvious, the elements of $F(k)$ and $H(k+1)$ are also non-linear functions of the state estimates. This means that the Kalman gain and covariance matrices are also non-linear functions of the state estimate.

It should be obvious to the reader that the development of the extended Kalman filter is very heuristic. That is, while the various steps are intuitively appealing, as are the resultant equations, they are not always mathematically rigorous. For example, we have no rigorous mathematical (or physical) justification for ignoring the higher order terms in the various Taylor series expansions. Nor can we mathematically or physically justify replacing $x_0(k)$ by $\hat{x}(k|k)$ and $x_0(k+1)$ by $\hat{x}(k+1|k)$ when evaluating $F(k)$ and $H(k+1)$.

Because of this we must take care in implementing extended Kalman filters. It is quite possible that we could design an unstable extended Kalman filter. In other instances, the extended Kalman filter may be stable but do a poor job of estimating the system states. Having made these statements, we also note that designers have been very successful at constructing extended Kalman filters that work very well. They are stable, they produce accurate state estimates, their covariance matrices are representative of the actual errors between the estimated and actual states, and they have good transient response. However, this is not accomplished by blind application of the extended Kalman filter equations. It results from very careful selection of the system and measurement models and tuning of the filter via the $Q(k)$ matrix.

6.3 Applications

As the reader has probably deduced by now, the main effort in designing Kalman filters is devoted to deriving system and measurement models, and to determining $R(k+1)$, $Q(k)$, $\hat{x}(0|0)$ and $P(0|0)$. Once these are found, the Kalman filter will take on the established forms of Equations 6-41 through 6-49 or Equations 6-50 through 6-54, depending upon whether we are building a tracking or control theoretic type of Kalman filter. Because of this, we will devote the rest of this chapter to developing system and measurement models for some examples. As mentioned before, the system model is usually derived from an understanding of the “physics” of the process or system, and the measurement model is based on our understanding of how the measurements take place.

6.3.1 Example 1: General Target Tracker in Cartesian Coordinates

For our first example we consider the system and measurement models that might be used to design a Kalman filter for a broad range of target tracking problems. We cast the problem in a Cartesian coordinate system (xyz). We will limit ourselves to two dimensions (x and z) for ease of notation. The reader can extend the results to three dimensions.

Since we are considering a general target tracker, we have no detailed knowledge of the system: we don't know the target type or its motion characteristics. The target could be an aircraft, a ship, a ballistic missile, an artillery round or a bicycle. In this case, our best choice for a system model would be the Taylor series expansion discussed in Chapter 2.

Specifically, if we use position and velocity as our states we would have

$$\begin{aligned}
x(k+1) &= x(k) + \dot{x}(k)T + \ddot{x}(k)\tfrac{T^2}{2} + \cdots \\
\dot{x}(k+1) &= \dot{x}(k) + \ddot{x}(k)T + \cdots \\
z(k+1) &= z(k) + \dot{z}(k)T + \ddot{z}(k)\tfrac{T^2}{2} + \cdots \\
\dot{z}(k+1) &= \dot{z}(k) + \ddot{z}(k)T + \cdots
\end{aligned} \tag{6-55}$$

Recall that, since we don't know the second and higher order derivatives, we lump them into the model disturbance terms, $w(k)$. With this we have

$$\begin{aligned}
x(k+1) &= x(k) + \dot{x}(k)T + w_x(k) \\
\dot{x}(k+1) &= \dot{x}(k) + w_{\dot{x}}(k) \\
z(k+1) &= z(k) + \dot{z}(k)T + w_z(k) \\
\dot{z}(k+1) &= \dot{z}(k) + w_{\dot{z}}(k).
\end{aligned} \tag{6-56}$$

If we write this in matrix form we get

$$x(k+1) = F(k)\,x(k) + G(k)\,w(k) \tag{6-57}$$

where

$$F(k) = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad G(k) = I, \quad w(k) = \begin{bmatrix} w_1(k) \\ w_2(k) \\ w_3(k) \\ w_4(k) \end{bmatrix} \quad \text{and} \quad x(k) = \begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \\ x_4(k) \end{bmatrix} = \begin{bmatrix} x(k) \\ \dot{x}(k) \\ z(k) \\ \dot{z}(k) \end{bmatrix}.$$

We have transitioned to representing the states and disturbances as random processes in acknowledgment of the fact that we are now treating the higher order terms of the Taylor series expansions as random processes, consistent with the requirements of the Kalman filter formulation.

The model described by the above equations is known as a constant velocity model because, in the absence of the disturbances, $\dot{x}(k+1) = \dot{x}(k)$ and $\dot{z}(k+1) = \dot{z}(k)$. In some instances it may be more appropriate to use a constant acceleration model. A constant acceleration model would be described by Equation 6-57 with the various matrices replaced by the following.

$$F(k) = \begin{bmatrix} 1 & T & \tfrac{T^2}{2} & 0 & 0 & 0 \\ 0 & 1 & T & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & T & \tfrac{T^2}{2} \\ 0 & 0 & 0 & 0 & 1 & T \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad G(k) = I, \quad w(k) = \begin{bmatrix} w_1(k) \\ w_2(k) \\ w_3(k) \\ w_4(k) \\ w_5(k) \\ w_6(k) \end{bmatrix} \quad \text{and} \quad x(k) = \begin{bmatrix} x_1(k) \\ x_2(k) \\ x_3(k) \\ x_4(k) \\ x_5(k) \\ x_6(k) \end{bmatrix} = \begin{bmatrix} x(k) \\ \dot{x}(k) \\ \ddot{x}(k) \\ z(k) \\ \dot{z}(k) \\ \ddot{z}(k) \end{bmatrix}.$$

The derivation of the above matrices is a simple exercise that is left to the reader.
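As a quick illustration, a hypothetical helper that builds the constant velocity and constant acceleration $F(k)$ matrices above for a given update period $T$ might look like the following sketch (the function name is ours).

```matlab
function F = cartesian_F(T, model)
% Sketch of the state transition matrices above. 'cv' returns the
% four-state constant velocity F(k); 'ca' returns the six-state
% constant acceleration F(k). In both cases G(k) = I.
Fcv1 = [1 T; 0 1];                  % one-coordinate constant velocity block
Fca1 = [1 T T^2/2; 0 1 T; 0 0 1];   % one-coordinate constant acceleration block
switch model
    case 'cv'
        F = blkdiag(Fcv1, Fcv1);    % states [x xdot z zdot]
    case 'ca'
        F = blkdiag(Fca1, Fca1);    % states [x xdot xddot z zdot zddot]
    otherwise
        error('model must be ''cv'' or ''ca''');
end
end
```

For example, cartesian_F(0.2, 'cv') returns the $F(k)$ of Equation 6-57 with $T = 0.2$ s.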

If we consider a sensor that is remote from the target, the measurements made by this sensor will not be in the Cartesian coordinate system. Instead, they are most often in a polar coordinate system. These types of sensors would include radars, lasers, sonar and the like. The measurements and their relation to the Cartesian coordinate system are illustrated in Figure 6-3. In this figure, we assume that the Cartesian coordinate system is centered on the sensor. This is a standard convention. As illustrated, the standard measurements that a sensor makes are range and angle. In some cases the sensor can also measure range-rate via Doppler frequency.

Figure 6-3. Coordinate System for Target Tracking Model


Since our system model is in the Cartesian coordinate system, we need to cast our measurement model in this coordinate system. From the geometry of Figure 6-3 the appropriate equations are

$$r(k+1) = \sqrt{x^2(k+1) + z^2(k+1)} + v_r(k+1) \tag{6-58}$$

and

$$\theta(k+1) = \tan^{-1}\!\left(\frac{z(k+1)}{x(k+1)}\right) + v_\theta(k+1). \tag{6-59}$$

If we include range rate as a measurement, the appropriate equation would be

$$\dot{r}(k+1) = \frac{x(k+1)\,\dot{x}(k+1) + z(k+1)\,\dot{z}(k+1)}{\sqrt{x^2(k+1) + z^2(k+1)}} + v_{\dot{r}}(k+1). \tag{6-60}$$

The reader can substitute the elements of the appropriate $x(k+1)$ vector (for the constant velocity or constant acceleration system model) into the above equations to express them in terms of their elements. As an example of one of the vector forms, the measurement vector for the case of range and angle measurements with the constant velocity system model is

$$y(k+1) = h\big(x(k+1)\big) + v(k+1) = \begin{bmatrix} \sqrt{x_1^2(k+1) + x_3^2(k+1)} \\[4pt] \tan^{-1}\!\left(\dfrac{x_3(k+1)}{x_1(k+1)}\right) \end{bmatrix} + \begin{bmatrix} v_1(k+1) \\ v_2(k+1) \end{bmatrix} \tag{6-61}$$

where $v_1(k+1) = v_r(k+1)$ and $v_2(k+1) = v_\theta(k+1)$.

Recall that, for purposes of building the Kalman filter, we need to generate the $H(k+1)$ matrix. For the tracking filter formulation this matrix is as defined in Equation 6-49. For the specific case associated with Equation 6-61, $H(k+1)$ becomes

$$H(k+1) = \begin{bmatrix} \dfrac{\partial h_1}{\partial x_1} & \dfrac{\partial h_1}{\partial x_2} & \dfrac{\partial h_1}{\partial x_3} & \dfrac{\partial h_1}{\partial x_4} \\[6pt] \dfrac{\partial h_2}{\partial x_1} & \dfrac{\partial h_2}{\partial x_2} & \dfrac{\partial h_2}{\partial x_3} & \dfrac{\partial h_2}{\partial x_4} \end{bmatrix} \tag{6-62}$$

where the elements of the matrix are defined by

$$\frac{\partial h_1}{\partial x_1} = \left.\frac{\partial}{\partial x_1}\sqrt{x_1^2(k+1) + x_3^2(k+1)}\,\right|_{x(k+1)=\hat{x}(k+1|k)} = \frac{\hat{x}_1(k+1|k)}{\sqrt{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}}, \tag{6-63}$$

$$\frac{\partial h_1}{\partial x_3} = \left.\frac{\partial}{\partial x_3}\sqrt{x_1^2(k+1) + x_3^2(k+1)}\,\right|_{x(k+1)=\hat{x}(k+1|k)} = \frac{\hat{x}_3(k+1|k)}{\sqrt{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}}, \tag{6-64}$$

$$\frac{\partial h_2}{\partial x_1} = \left.\frac{\partial}{\partial x_1}\tan^{-1}\!\left(\frac{x_3(k+1)}{x_1(k+1)}\right)\right|_{x(k+1)=\hat{x}(k+1|k)} = \frac{-\hat{x}_3(k+1|k)}{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}, \tag{6-65}$$

$$\frac{\partial h_2}{\partial x_3} = \left.\frac{\partial}{\partial x_3}\tan^{-1}\!\left(\frac{x_3(k+1)}{x_1(k+1)}\right)\right|_{x(k+1)=\hat{x}(k+1|k)} = \frac{\hat{x}_1(k+1|k)}{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}, \tag{6-66}$$

and the rest of the terms are zero. With the above, we can rewrite $H(k+1)$ as

$$H(k+1) = \begin{bmatrix} \dfrac{\hat{x}_1(k+1|k)}{\sqrt{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}} & 0 & \dfrac{\hat{x}_3(k+1|k)}{\sqrt{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)}} & 0 \\[10pt] \dfrac{-\hat{x}_3(k+1|k)}{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)} & 0 & \dfrac{\hat{x}_1(k+1|k)}{\hat{x}_1^2(k+1|k) + \hat{x}_3^2(k+1|k)} & 0 \end{bmatrix}. \tag{6-67}$$

The derivation of the other forms of $h\big(x(k+1)\big)$ and $H(k+1)$ is left as an exercise for the reader.
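A compact MATLAB sketch of Equations 6-61 and 6-67, computing the predicted measurement and the measurement Jacobian from the predicted state of the constant velocity model (states $[x,\ \dot{x},\ z,\ \dot{z}]$), is given below. The function name is ours and is not part of the scripts referenced later; note that atan2 is used in place of the tan⁻¹ of Equation 6-59 to avoid the division by zero when $x_1 = 0$.

```matlab
function [h, H] = range_angle_meas(xpred)
% xpred = x_hat(k+1|k) = [x; xdot; z; zdot] (constant velocity model).
% h implements the noise-free part of Equation 6-61; H implements Equation 6-67.
x = xpred(1);  z = xpred(3);
r = sqrt(x^2 + z^2);                 % predicted range
h = [r;                              % h1: range
     atan2(z, x)];                   % h2: angle (atan2 avoids the x = 0 problem)
H = [ x/r,    0,  z/r,    0;         % dh1/dx1 .. dh1/dx4   (6-63, 6-64)
     -z/r^2,  0,  x/r^2,  0];        % dh2/dx1 .. dh2/dx4   (6-65, 6-66)
end
```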

With the above we are in a position to proceed to the next phase of designing the Kalman filter. We will consider this through a specific example.

6.3.2 Example 2: Application to a Ballistic Missile Tracking Problem – Purely Ballistic Trajectory

We now want to apply some of the developments of the previous subsection to the development of a Kalman filter that can be used to estimate the states of a ballistic missile based on measurements of range and angle. We will start with the unrealistic, but instructive, case where the missile is flying a purely ballistic trajectory across a non-rotating, flat earth. Consistent with the developments in subsection 6.3.1, we will assume that the trajectory lies in the x-z plane to simplify the system model and measurement equations. In this coordinate system, the origin is located at the radar, which generates the range and angle measurements, and includes the Kalman filter.

The continuous-time equations of motion for this ideal trajectory are

$$\ddot{x}(t) = 0, \quad x(0) = x_0, \quad \dot{x}(0) = \dot{x}_0, \qquad \ddot{z}(t) = -g, \quad z(0) = 0, \quad \dot{z}(0) = \dot{z}_0 \tag{6-68}$$

where $g$ is the acceleration of gravity (9.8 m/s²), $(x_0, 0)$ is the launch position of the missile and $(\dot{x}_0, \dot{z}_0)$ is the launch velocity.

Figure 6-4 contains a plot of the missile trajectory for the case where $x_0 = 100$ km. In the figure, the solid red curve denotes the portion of the trajectory over which the radar (and Kalman filter) tracks the target. Track is initiated when the missile is at a range of about 43 km and an angle of about 27°.
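For reference, a trajectory of this kind can be generated directly from Equation 6-68, as in the sketch below. The launch velocity values here are illustrative guesses, not the values used to create Figure 6-4.

```matlab
% Sketch: generate a purely ballistic trajectory from Equation 6-68.
% Launch position is (x0, 0); the launch velocity components are assumed.
g     = 9.8;                  % m/s^2
x0    = 100e3;                % m, launch x position
xdot0 = -1000;                % m/s, assumed launch velocity components
zdot0 = 1000;                 % m/s
tf    = 2*zdot0/g;            % time of flight (z returns to zero)
t     = 0:0.1:tf;             % 0.1 s samples
x     = x0 + xdot0*t;         % x(t) = x0 + xdot0*t
z     = zdot0*t - g*t.^2/2;   % z(t) = zdot0*t - g*t^2/2
plot(x/1e3, z/1e3); xlabel('x (km)'); ylabel('z (km)');
```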

We will start with the system model of Equation 6-57. We deduce that this model is representative in the x coordinate because we know that our “real” system has a constant velocity in this coordinate (see Equation 6-68). However, the model is not good in the z coordinate because we know that the “real” system has a constant acceleration, and not a constant velocity, in this coordinate. We will try to account for the inaccuracy of the model through the $Q(k)$ matrix.

Figure 6-4 – Ballistic Target Trajectory

Since we are measuring range and angle, the appropriate measurement model is specified in Equations 6-58 and 6-59, and in Equation 6-61 if we use the state assignment of Equation 6-57. The appropriate $H(k+1)$ matrix is given by Equation 6-67.

In general, the range and angle measurement errors, $v_r(k+1)$ and $v_\theta(k+1)$, will depend on signal-to-noise ratio (SNR), the radar compressed pulse width, $\tau_p$, and the radar antenna beam width, $\theta_B$. There will also be a lower limit on these errors that depends upon various radar design parameters. The measurement error model we will assume is

$$\sigma_r^2 = \left[\frac{1}{\text{SNR}} + \left(\frac{1}{20}\right)^2\right]\tau_p^2, \qquad \sigma_\theta^2 = \left[\frac{1}{\text{SNR}} + \left(\frac{1}{20}\right)^2\right]\theta_B^2. \tag{6-69}$$

The $\tau_p^2/\text{SNR}$ and $\theta_B^2/\text{SNR}$ terms were obtained from Skolnik's radar text (see the Skolnik reference). The $(1/20)^2$ factor limits the minimum errors to 1/20th of the pulse width and beam width. We will assume a pulse width of $\tau_p = 1\ \mu$s (150 m) and a beam width of $\theta_B = 2°$ (0.0349 rad).

The SNR would be obtained from the radar range equation (see the Skolnik reference) and is inversely proportional to $R^4$, where $R$ is the range from the radar to the target. For this example we will assume that the radar range equation results in an SNR given by

$$\text{SNR} = \frac{10^{20}}{R^4}. \tag{6-70}$$

This provides an SNR of about 15 dB at the start of tracking.

To implement the Kalman filter we need to specify $\hat{x}(0|0)$, $P(0|0)$, $Q(k)$ and $R(k+1)$. The system model matrices, $F(k)$ and $G(k)$, have been specified. We will also need to discuss the computation of $H(k+1)$.

We know that tracking starts at a missile range of approximately 43 km and an angle of about 27°. From this we can compute the initial position estimates as

$$\begin{bmatrix} \hat{x}_1(0|0) \\ \hat{x}_3(0|0) \end{bmatrix} = \begin{bmatrix} 43000\cos 27° \\ 43000\sin 27° \end{bmatrix} \text{ m}. \tag{6-71}$$

We assume that we don’t know the initial velocity but we assume it is in the order of 1000 m/s. With this we let

   x

2

   

1000 y

ˆ    x

4

   

1000

. (6-72)

We reflect our uncertainty in these initial estimates by using

$$P(0|0) = \begin{bmatrix} 200^2 & 0 & 0 & 0 \\ 0 & 500^2 & 0 & 0 \\ 0 & 0 & 200^2 & 0 \\ 0 & 0 & 0 & 500^2 \end{bmatrix}. \tag{6-73}$$

This selection of the $P(0|0)$ matrix says that we know the position to within about 200 m and the velocity to within about 500 m/s. These are guesses.

Although we know we shouldn't, we will start with a $Q(k)$ of all zeros. This says that we believe that our system model is perfect, which we know it is not.
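Gathering Equations 6-71 through 6-73 and the zero $Q(k)$ choice, the initialization portion of the filter might be sketched as follows (the variable names are ours).

```matlab
% Sketch of filter initialization (Equations 6-71 through 6-73).
r0   = 43e3;                  % m,  approximate range at start of track
th0  = 27*pi/180;             % rad, approximate angle at start of track
xhat = [r0*cos(th0);          % x position estimate              (6-71)
        1000;                 % x velocity guess, m/s            (6-72)
        r0*sin(th0);          % z position estimate              (6-71)
        1000];                % z velocity guess, m/s            (6-72)
P    = diag([200^2 500^2 200^2 500^2]);   % initial covariance   (6-73)
Q    = zeros(4);              % "perfect model" assumption (to be revisited)
```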

We will assume that the range and angle measurements are independent, which is a reasonable assumption. With this we get

$$R(k+1) = \begin{bmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\theta^2 \end{bmatrix} \tag{6-74}$$

where the range and angle variances are as defined in Equation 6-69. We note that the measurement covariance matrix is not constant, since the variances depend upon target range, which varies with time. This means that $R(k+1)$ must be computed inside of the Kalman filter loop. Furthermore, to compute $R(k+1)$ we will need to use a prediction of target range based on the predicted state estimate.
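In code form, the range-dependent measurement covariance of Equations 6-69, 6-70 and 6-74 might be computed from the predicted state each cycle as in the sketch below (the function name is ours).

```matlab
function R = meas_covariance(xpred)
% Sketch of Equations 6-69, 6-70 and 6-74 using the predicted state
% x_hat(k+1|k) = [x; xdot; z; zdot] to form the predicted range.
dp   = 150;                             % m, range extent of the 1 us pulse
thB  = 2*pi/180;                        % rad, antenna beamwidth (0.0349 rad)
Rng  = sqrt(xpred(1)^2 + xpred(3)^2);   % predicted range
SNR  = 1e20/Rng^4;                      % Equation 6-70
sig_r2  = (1/SNR + (1/20)^2)*dp^2;      % range variance        (6-69)
sig_th2 = (1/SNR + (1/20)^2)*thB^2;     % angle variance        (6-69)
R = diag([sig_r2, sig_th2]);            % Equation 6-74
end
```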

Since $H(k+1)$ depends upon the predicted states, it must also be computed inside of the Kalman filter loop. It should be initialized to

$$H = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 \end{bmatrix} \tag{6-75}$$

or a 2×4 matrix of zeros outside of the Kalman filter loop.

The MATLAB script entitled MissileKfilt1.m contained in the Program Files folder implements the Kalman filter. The script is well commented and should be easy to follow. The script reads a file entitled Missile1.txt¹ that contains the true state. The data in the file is stored at intervals of 0.1 seconds. We must down-sample the data because we want the update period of the Kalman filter to be $T = 0.2$ s.

¹ The file Missile1.txt is a MATLAB .mat file. The .txt extension is used because many e-mail programs will not allow download of files with a .mat extension, as such files can contain harmful viruses.
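The overall structure of such a script, down-sampling the truth data and running the tracking-form recursion with $R(k+1)$ and $H(k+1)$ recomputed each cycle, might look like the following sketch. It is not a reproduction of MissileKfilt1.m: the measurement-generation line, the variable names and the assumption that the truth data is already loaded into xtrue are ours, and it uses the hypothetical helper functions sketched earlier in this section.

```matlab
% Sketch of the simulation loop (not a copy of MissileKfilt1.m).
% xtrue is assumed to hold the true state at 0.1 s intervals; xhat, P and Q
% are as in the initialization sketch above.
T     = 0.2;
xtrue = xtrue(:, 1:2:end);                  % keep every other sample -> T = 0.2 s
F     = [1 T 0 0; 0 1 0 0; 0 0 1 T; 0 0 0 1];
G     = eye(4);
N     = size(xtrue, 2);
for k = 1:N-1
    % simulate a noisy range/angle measurement from the true state
    Rtrue = meas_covariance(xtrue(:, k+1));
    y     = range_angle_meas(xtrue(:, k+1)) + sqrtm(Rtrue)*randn(2, 1);

    % prediction (Equations 6-42, 6-45); F is constant for this model
    xpred = F*xhat;
    Ppred = F*P*F' + G*Q*G';

    % measurement update (Equations 6-41, 6-43, 6-44, 6-46, 6-49)
    [ypred, H] = range_angle_meas(xpred);
    R    = meas_covariance(xpred);          % R(k+1) from the predicted range
    K    = Ppred*H' / (H*Ppred*H' + R);
    xhat = xpred + K*(y - ypred);
    P    = (eye(4) - K*H)*Ppred;
end
```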

The results of a simulation run are shown in Figures 6-5 through 6-9.

Figure 6-5 contains a plot of actual and estimated y position vs. x position.

This plot seems to indicate that the filter is working well. However, the scale makes the errors difficult to see. More specific information on the errors is provided in Figures 6-6 and 6-7. Figure 6-6 contains plots of the x and y position errors between the estimated and actual values (top plot) and between the measured and actual values (bottom plot). The measured x and y positions were obtained from

$$x_{meas} = r_{meas}\cos\theta_{meas}, \qquad y_{meas} = r_{meas}\sin\theta_{meas}. \tag{6-76}$$

It will be noted that the errors between the estimated and actual positions are large, and much larger than the errors between the measured and actual positions. This tells us that the filter is not working very well. In fact, direct use of the measurements provides better position estimates!

Figure 6-7 shows that the errors between the estimated and actual velocities are also large. There is no curve of errors between measured and estimated velocities because the measured velocities were not computed.

Figure 6-8 contains plots of what the Kalman filter thinks the rms position and velocity errors are. These curves were obtained by plotting the square roots of the diagonal terms of the covariance matrix, $P(k|k)$. As we have previously discussed, the rms errors indicated in these curves should be in agreement with the actual errors of Figures 6-6 and 6-7. As can be seen, they are not. The Kalman filter thinks it is doing a good job of estimating the positions and velocities and is thus assigning small rms errors to them. This is a clear indication of the fact that we have not provided the Kalman filter with an adequate system model, and that we have “lied” to the filter by telling it that the model is good by setting the $Q(k)$ matrix to zero.

Figure 6-5 – Plot of Estimated and Actual Positions

Figure 6-6 – Plot of Position Errors


Figure 6-7 – Plot of Velocity Errors

Figure 6-8 – Plot of RMS Position and Velocity Error from Covariance


The curves of Figure 6-9 contain plots of the eight Kalman gains. The top two curves are the gains that are applied to the range measurement error (the difference between the actual range measurement and the estimated range measurement) to update the position estimates. The second two curves are the gains that are applied to the angle measurement error to update the position estimates. The third two curves are the gains that are applied to the range measurement error to update the velocity estimates, and the fourth two curves are the gains that are applied to the angle measurement error to update the velocity estimates. It will be noted that all of the gains tend toward zero. This is expected and means that the Kalman filter is de-emphasizing the measurements and emphasizing the previous state estimates. This is the reason the position and velocity errors increase with time: the estimates are poor, but the filter is not making use of the measurements to try to improve the estimates. The reason we expect the gains to go to zero is that we have told the Kalman filter that the system model is good and that it can rely on the system model more than on the data, once the filter has been initialized by the first few measurements.

(difference between the actual range measurement and the estimated range measurement) to update the position estimates. The second two curves are the gains that are applied to the angle measurement error to update the position estimates. The third two curves are the gains that are applied to the range measurement error to update the position estimates and the fourth two curves are the gains that are applied to the angle measurement error to update the velocity estimates. It will be noted that all of the gains tend toward zero. This is expected and means that the Kalman filter is de-emphasizing the measurements and emphasizing the previous state estimates. This is the reason the position and velocity errors increase with time: the estimates are poor but the filter is not making use of the measurements to try and improve the estimates. The reason we expect the gains to go to zero is that we have told the Kalman filter that the system model is good and that it can rely on the system model more than on the data, once the filter has been initialized by the first few measurements.

Figure 6-9 – Plot of Kalman Gains


As we learned with the Humvee tracking problem, we can probably improve the performance of the missile tracking Kalman filter by choosing a non-zero $Q(k)$ matrix. This is left as a homework exercise.

Another alternative is to include acceleration as a state and attempt to estimate it. This will also be considered as a homework problem. We will note that this approach will be of considerable help when the trajectory is not purely ballistic but includes atmospheric drag.

Still another way of improving the performance of the Kalman filter for the ballistic missile case is to recognize that the trajectory is influenced by gravity and include gravity in the system model by considering it as a known input. This is discussed in Chapter 8.

It should be noted that there are many other ways to improve the performance of the Kalman filter. As examples:

• For the trajectory with atmospheric drag, we could include the drag term in the system model.

• For both trajectories we may be able to improve performance by including the measurement of range-rate.

• We could use the combined continuous-time, discrete-time Kalman filter discussed in Chapter 8.

In all cases, a critical factor affecting the performance of the filter is a proper selection of the $Q(k)$ matrix.


PROBLEMS

1. Derive the control theoretic form of the extended Kalman filter presented in Equations 6-50 through 6-54.

2. Derive the Cartesian coordinate, constant acceleration system model defined by the matrices and vectors above Figure 6-3.

3. Show that $\delta x(k)$ is uncorrelated with $w(k)$ and $v(k)$.

4. Derive Equation 6-60.

5. Derive the $H(k+1)$ matrix given by Equations 6-62 through 6-67.

6. Derive the $H(k+1)$ matrix for the six-state, Cartesian-coordinate system model.

7. Extend the measurement model and $H(k+1)$ matrix of Section 6.3.1 to include Doppler.

