Inertial Measurement Unit Calibration using Full
Information Maximum Likelihood Optimal Filtering
by
Gordon A. Thompson
B.S. Mechanical Engineering
Rose-Hulman Institute of Technology, 2003
SUBMITTED TO THE DEPARTMENT OF AERONAUTICS AND ASTRONAUTICS
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE IN AERONAUTICS AND ASTRONAUTICS
AT THE
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
SEPTEMBER 2005
© Gordon A. Thompson, 2005. All rights reserved.
The author hereby grants to MIT permission to reproduce and distribute publicly paper
and electronic copies of this thesis document in whole or in part.
Author:
Department of Aeronautics and Astronautics
July 13, 2005

Certified by:
Steven R. Hall
Professor of Aeronautics and Astronautics
Thesis Supervisor

Certified by:
J. Arnold Soltz
Principal Member of the Technical Staff, C. S. Draper Laboratory
Thesis Supervisor

Accepted by:
Jaime Peraire
Professor of Aeronautics and Astronautics
Chair, Committee on Graduate Students
Inertial Measurement Unit Calibration using Full Information
Maximum Likelihood Optimal Filtering
by
Gordon A. Thompson
Submitted to the Department of Aeronautics and Astronautics
on July 13, 2005, in partial fulfillment of the
requirements for the degree of
Master of Science in Aeronautics and Astronautics
Abstract
The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF)
for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method to identify Kalman Filter parameters such as process and measurement noise intensities. Normally, IMU process noise
intensities and measurement standard deviations are determined by laboratory testing in
a 1-g field. In this thesis, they are identified along with the calibration of the IMU during
centrifuge testing. The partial derivatives of the Kalman Filter equations necessary to
identify these parameters are developed. Using synthetic measurements, the sensitivity
of FIMLOF to initial parameter estimates and filter suboptimality is investigated. The
filter residuals, the FIMLOF parameters, and their associated statistics are examined.
The results show that FIMLOF can be very successful at tuning suboptimal filter models.
For systems with significant mismodeling, FIMLOF can substantially improve the IMU
calibration and subsequent navigation performance. In addition, FIMLOF can be used
to detect mismodeling in a system, through disparities between the laboratory-derived
parameter estimates and the FIMLOF parameter estimates.
Thesis Supervisor: Steven R. Hall
Title: Professor of Aeronautics and Astronautics
Thesis Supervisor: J. Arnold Soltz
Title: Principal Member of the Technical Staff, C. S. Draper Laboratory
Contents

1 Introduction
  1.1 Motivation
  1.2 Background
  1.3 Thesis Overview

2 Inertial Measurement Units
  2.1 Gyroscopes
    2.1.1 Single-Degree-of-Freedom Gyroscopes
    2.1.2 Two-Degree-of-Freedom Gyroscopes
    2.1.3 Gyroscope Testing and Calibration
  2.2 Accelerometers
    2.2.1 Pendulous Integrating Gyroscopic Accelerometers
    2.2.2 Accelerometer Error Model
    2.2.3 Accelerometer Testing and Calibration

3 System Identification
  3.1 Maximum Likelihood Estimation
    3.1.1 Parameter Estimation
    3.1.2 Likelihood Functions
    3.1.3 Cramer-Rao Lower Bound
    3.1.4 Fisher Information Matrix
    3.1.5 Properties of Maximum Likelihood Estimation
    3.1.6 Solution of Maximum Likelihood Estimators
  3.2 Kalman Filters
    3.2.1 System Model
    3.2.2 Kalman Filter Equations
    3.2.3 Mismodeling in Kalman Filters
  3.3 Full Information Maximum Likelihood Optimal Filtering
    3.3.1 The Likelihood Function
    3.3.2 Negative Log-Likelihood Function Minimization
    3.3.3 Process Noise Equations
    3.3.4 Measurement Noise Equations
    3.3.5 Convergence Criteria

4 Robustness Analysis
  4.1 System Models
    4.1.1 Model 2 - No PIGA Harmonics Model
    4.1.2 Model 3 - Small Deterministic Error Model
    4.1.3 Model 4 - Minimum State with Centrifuge Model
    4.1.4 Model 5 - Minimum State Model
    4.1.5 Synthetic Measurements
  4.2 Robustness Tests
    4.2.1 Determining Whiteness
    4.2.2 Miss Distance

5 Results
  5.1 Full State Model
    5.1.1 FIMLOF Parameter Estimates
    5.1.2 Residual Magnitude and Whiteness
    5.1.3 Miss Distances
  5.2 No PIGA Harmonics Model
    5.2.1 Parameter Estimates
    5.2.2 Residual Magnitude and Whiteness
    5.2.3 Miss Distances
  5.3 Small Sinusoidal Error Model
    5.3.1 Parameter Estimates
    5.3.2 Residual Magnitude and Whiteness
    5.3.3 Miss Distances
  5.4 Minimum State with Centrifuge Model
    5.4.1 Parameter Estimates
    5.4.2 Residual Magnitude and Whiteness
    5.4.3 Miss Distances
  5.5 Minimum State Model
    5.5.1 Parameter Estimates
    5.5.2 Residual Magnitude and Whiteness
    5.5.3 Miss Distances

6 Conclusion
  6.1 Summary of Results
  6.2 Future Work

A Derivation of Selected Partial Derivatives
  A.1 Derivative of a Matrix Inverse
  A.2 Derivatives of Noise Parameters

B Inertial Measurement Unit System Model
  B.1 Gyroscope Error Model
  B.2 Accelerometer Error Model
  B.3 PIGA Error Model
  B.4 Misalignment Error Model
  B.5 Centrifuge Error Model
    B.5.1 Lever Arm Errors
    B.5.2 Centrifuge Target Bias Errors

C Removed Model State Listings
  C.1 Model 2
  C.2 Model 3
  C.3 Model 4
  C.4 Model 5
Chapter 1
Introduction
The fundamental purpose of this research is to investigate the robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) when estimating the process
noise strengths and measurement standard deviations of an inertial measurement unit
(IMU) via centrifuge testing.
Full Information Maximum Likelihood Optimal Filtering [27, 32, 33] uses maximum
likelihood to estimate the parameter values of an error model via Newton-Raphson iteration, and is typically used for system identification. Full Information means that all of
the parameters are estimated at once. Maximum Likelihood indicates that the estimated
parameters are those most likely (for a given statistical assumption) to have generated
the observations collected.
Optimal Filtering using a Kalman Filter provides the best
estimates of the observations possible with the given model.
1.1 Motivation
Inertial navigation systems use a guidance computer and an IMU containing gyroscopes
and accelerometers to determine the position, velocity, and attitude of a vehicle. The gyros
and accelerometers have calibrated error models implemented in the guidance computer
to improve the accuracy of the navigation. The states of the error models are estimated
during the calibration of the IMU.
For the system under consideration in this thesis, calibration is performed by testing the IMU on a centrifuge, which allows error states that are acceleration- and acceleration-squared-sensitive to be estimated. The measurements from the centrifuge test are used by
a Kalman Filter to estimate the error states of the IMU. In addition to the measurements,
the Kalman Filter needs a priori values of many model parameters, including the process
noise strengths and measurement standard deviations.
Classically, noise parameters have been determined in the laboratory. For example,
the random walk in angle for gyroscopes is typically calculated through a "tombstone
test." This test involves running a gyro for a long time on an inertially stable table. The
power spectral density of the measurements from the gyro, when expressed in the proper
units, forms the noise strength of the random walk in angle.
FIMLOF can estimate the model parameter values from the centrifuge measurements
using maximum likelihood estimation. The likelihood function is formed from Kalman
Filter residuals and their covariances.
The measurements contain the same statistical
information as the filter residuals [16] -
the residuals are used to make the maximum
likelihood estimation easier. Updated model parameter estimates are used to rerun the
Kalman Filter. FIMLOF can potentially result in significant time savings, because it uses
the data already required for a centrifuge calibration of the IMU to estimate the model
parameters, instead of requiring additional lab tests. Alternatively, it can be used to
confirm the results of the laboratory tests.
1.2 Background
System identification involves the development or improvement of a model of a physical
system using experimental data [15, 20]. Many types of identification exist, including
modal parameter identification [3, 24, 36] (determining the mode shapes and frequencies
of a structure) and control-model identification [14] (identifying a model used to control
the system). They have different goals and development histories, but share the same
mathematical principles.
Control-model identification can occur in either the frequency domain or the time
domain. Frequency domain identification uses non-parametric methods such as frequency
response estimation. It is no longer prevalent, due to the rise of parametric system models
[15]. In the time domain, there are many methods of identifying state-space models, such
as minimum realization methods [37, 34, 28, 29]. In inertial navigation, these models are
then used in a Kalman Filter.
Once a system model has been identified for use in a Kalman Filter, the system noise
parameters must be identified. The value of these parameters may be estimated using a
variety of methods [9, 22, 23]. For example, Bayesian estimation [12], maximum likelihood
estimation [17], and correlation methods [22] may be used. FIMLOF is an MLE method.
The method of FIMLOF was developed by Schweppe [32, 33] and Peterson [27].
Sandell and Yared [31] developed a special form of it for linear Gaussian models. Their
approach is followed in this thesis. They used FIMLOF to estimate initial state covariances, process noise strengths, and measurement standard deviations. Skeen extended
FIMLOF to estimate fractional Brownian motion and Markov processes [35]. This thesis
expands on Sandell and Yared's work by investigating FIMLOF's sensitivity to the initial
parameter estimate and to filter suboptimality.
1.3 Thesis Overview
In a well-modeled system, FIMLOF should provide parameter estimates that are very
similar to laboratory-derived noise values. FIMLOF parameter estimates that are significantly different from the laboratory values are often evidence of serious mismodeling.
This thesis shows that using FIMLOF to estimate the process noise strengths and measurement noise standard deviations of a mismodeled IMU can significantly improve its
navigational performance. For the most severely mismodeled case considered, FIMLOF
improved the navigational accuracy of the system by an order of magnitude.
Chapter 2 provides a review of inertial instruments, including various types of gyroscopes and accelerometers and their standard error terms. Inertial instrument technology
is a mature field, so the chapter provides only a brief overview.
Chapter 3 covers system identification. FIMLOF relies heavily on maximum likelihood
estimation, so it is discussed at some length. The Kalman Filter equations are described,
but no derivations are given. The FIMLOF algorithm is derived in detail, and the partial
derivatives for estimating process noise strengths and measurement standard deviations
are presented.
The primary results of the thesis stretch over three chapters. Chapter 4 covers the system models used in evaluating FIMLOF. It also sets out the metrics by which the robustness of FIMLOF was evaluated. The actual results are presented in Chapter 5. FIMLOF
is investigated for its sensitivity to initial model parameter estimates and reduced-order
error models. Chapter 6 gives a summary of the findings, and proposes future work on
the topic.
Chapter 2
Inertial Measurement Units
Inertial Measurement Units (IMUs) provide the data required for the navigation and guidance of a vehicle. An IMU consists of gyroscopes and accelerometers and their associated
electronics, while a guidance system combines the instruments with a computer. At its
most basic, a guidance system provides automatic control to the vehicle without relying on
external measurements. IMU instrument technology is a mature field, so only the briefest
overview is given here. For more information, the reader is directed to the literature.
An IMU can be attached to a vehicle in several ways. It may be a strapdown system,
in which the instruments are fixed to the vehicle in a fixed orientation. Alternatively, it
may be mounted on an inertially stable platform that uses the gyros' readings to maintain
a fixed inertial attitude. The IMU used in this thesis is one of the latter, so the error
equations that follow are those of platform-mounted instruments.
2.1 Gyroscopes
The mechanical gyroscope uses a rotating mass to detect changes in angle. Consider a basic gyroscope like the one shown in Figure 2-1. The gyro has a polar moment of inertia J about its spin axis (SA), and is spinning with angular velocity ω. From Newton's Second Law, we know that the angular momentum of the gyro, H, will not change unless
Figure 2-1: Law of Gyroscopics.
acted upon by a torque T, in which case
T = dH/dt.
(2.1)
Consider a torque, T_1, acting about the SA. T_1 increases the angular velocity of the gyro, so that

T_1 = J dω/dt = Jα,   (2.2)

where α is the angular acceleration. Because T_1 acts about the SA, only the magnitude of the angular velocity changes, not its direction. However, if the torque is orthogonal to the SA, only the direction of the angular velocity will change, not its magnitude. In Figure 2-1, a positive torque about the Y axis, T_2, will cause a small angular momentum change dH in the direction of the positive Y axis. This angular momentum change has a magnitude given by

dH = H dθ,   (2.3)
Figure 2-2: Single-degree-of-freedom rate gyro.
Figure 2-3: Single-degree-of-freedom rate-integrating gyro.
where dθ is the small change in angle. When combined with Equation (2.1), this equation leads to the Law of Gyroscopics [19], written as

T = dH/dt = H dθ/dt = HΩ,   (2.4)

where Ω is the precession rate, the angular velocity of the gyro about the Z axis. A precession results from a deliberately applied torque, whereas drift results from accidental or unwanted torques [18]. In essence, a gyroscope attempts to align the SA and the input torque.
2.1.1 Single-Degree-of-Freedom Gyroscopes
A single-degree-of-freedom gyroscope is restricted to one free axis. Rate gyros measure the
angular velocity of the instrument. As can be seen in Figure 2-2, a gimbal is connected
to the gyro case by torsion bars on the output axis (OA). The gyro turns about the
OA in response to an angular rate around the input axis (IA). A pickoff measures the
gimbal angle, which is proportional to the angular velocity seen by the IA. Rate gyros are
inexpensive, but not very accurate.
The other type of single-degree-of-freedom gyroscope is the rate-integrating gyro, which
provides higher accuracy. The gimbal is connected to the gyro case by bearings instead
of torsion bars. As shown in Figure 2-3, the pickoff from the gimbal provides a signal to a
torque motor that drives the gimbal back to level. The output of a rate-integrating gyro
is the torquer current rather than the gimbal angle, which remains at zero.
A rate-integrating gyro gimbal must respond to very small torques in order to have
a low input threshold. Therefore, the bearings on the OA gimbal must have very low
friction. The friction torque of the bearings is a function of the forces placed upon them.
In a dry gyro, these forces include both the gyroscopic loads and the gimbal weight.
However, if the gimbal is sealed against liquids, it may be floated at neutral buoyancy
inside the gyro case. Liquid fluorocarbons are often used for this purpose [19]. A floating
gimbal is usually referred to as a float. Because the float is neutrally buoyant, the bearing
forces are reduced to only the gyroscopic loads.
Single-Degree-of-Freedom Gyro Error Model
Single-degree-of-freedom gyros suffer from a variety of drift rate errors.
Acceleration-
insensitive drift rates, called biases, are caused, for example, by magnetic field effects.
Biases are not necessarily constant -
they can vary over the life of the instrument or
even between one startup and the next. Acceleration-sensitive drift rates vary linearly
with acceleration and typically result from a mass imbalance in the wheel. Acceleration-squared-sensitive drift rates vary quadratically with acceleration. These drifts are caused by anisoelasticity, mismatches in the stiffness of supports. In single-degree-of-freedom
gyros, the rotor bearings must be as stiff radially in the IA as they are axially in the SA
[19]. The IEEE steady-state error model for a single-degree-of-freedom rate-integrating
gyro mounted on a platform is [1]
Drift rate = D_F + D_I a_I + D_O a_O + D_S a_S
           + D_II a_I^2 + D_SS a_S^2 + D_IS a_I a_S + D_IO a_I a_O + D_OS a_O a_S.   (2.5)

The terms of the model are:
D_F : Bias, acceleration-insensitive drift
a_I : IA acceleration
a_O : OA acceleration
a_S : SA acceleration
D_I : IA mass imbalance, acceleration-sensitive drift coefficient
D_O : OA acceleration-sensitive drift coefficient
D_S : SA mass imbalance, acceleration-sensitive drift coefficient
D_II : acceleration-squared-sensitive drift coefficient along IA
D_SS : acceleration-squared-sensitive drift coefficient along SA
D_IS : acceleration-squared-sensitive drift coefficient along IA and SA
D_IO : acceleration-squared-sensitive drift coefficient along IA and OA
D_OS : acceleration-squared-sensitive drift coefficient along OA and SA
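As a concrete illustration of how Equation (2.5) is used, the sketch below evaluates the drift-rate polynomial for a set of hypothetical coefficient values. The numbers are placeholders chosen only to show the structure of the model, not data from any real instrument.

```python
def sdf_gyro_drift_rate(coeffs, a_i, a_o, a_s):
    """Evaluate the single-degree-of-freedom gyro drift-rate model of Eq. (2.5).

    coeffs: dict of drift coefficients D_F, D_I, D_O, D_S, D_II, D_SS, D_IS, D_IO, D_OS.
    a_i, a_o, a_s: accelerations along the input, output, and spin axes.
    """
    c = coeffs
    return (c["D_F"]
            + c["D_I"] * a_i + c["D_O"] * a_o + c["D_S"] * a_s
            + c["D_II"] * a_i**2 + c["D_SS"] * a_s**2
            + c["D_IS"] * a_i * a_s + c["D_IO"] * a_i * a_o + c["D_OS"] * a_o * a_s)

# Placeholder coefficients; the units and magnitudes are purely illustrative.
coeffs = {"D_F": 0.05, "D_I": 0.02, "D_O": 0.01, "D_S": 0.03,
          "D_II": 0.004, "D_SS": 0.002, "D_IS": 0.001, "D_IO": 0.001, "D_OS": 0.0005}

# Drift rate with 1 g along the input axis and the other axes unloaded.
print(sdf_gyro_drift_rate(coeffs, a_i=1.0, a_o=0.0, a_s=0.0))
```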
2.1.2 Two-Degree-of-Freedom Gyroscopes
Two-degree-of-freedom gyros are also called free gyros, because the spin axis of the rotor
can point in any direction without restraint. Dynamically tuned gyros (Figure 2-4) are the
most common form of free gyros. The gimbal of a dynamically tuned gyro is connected to
the rotor and shaft by flexures. The moments of inertia of the rotor are much larger than
those of the gimbal; therefore, the rotor will remain at a fixed orientation while the gimbal
is forced to flutter as the shaft spins. As the gimbal flutters, the flexures cause positive
spring torques on it. Gimbal angular velocity from the flutter also generates negative
precession torques on the gimbal due to its angular momentum.

Figure 2-4: Two-degree-of-freedom dynamically tuned gyro.
The gyro operates at
its tuned speed when the two torques on the gimbal cancel each other and the rotor is
unrestrained by any gimbal torques [6, 7].
Two-Degree-of-Freedom Gyro Error Model
The two-degree-of-freedom dynamically tuned gyro has many of the same sources of error
as the single-degree-of-freedom gyro described in Section 2.1.1.
Errors not observed in
a single-degree-of-freedom gyro are the quadrature mass unbalances, which occur when
the flexures generate a torque while loaded axially. This torque creates an acceleration-sensitive drift about the axis opposite to the one the acceleration acts along. Anisoelasticity in dynamically tuned gyros is caused by the flexures, not the supports as in
single-degree-of-freedom gyros. The compliance of each flexure must be equal in both the
radial and the axial direction to prevent this error. The error model for the x-axis of a
dynamically tuned gyro is written as [19]
Drift rate = B_x   (Fixed bias)
           + D_xx a_x   (Normal acceleration-sensitive drift)
           + D_xy a_y   (Quadrature acceleration-sensitive drift)
           + D_xz a_z   ('Dump' acceleration-sensitive drift)
           + D_xxz a_z a_x.   (Anisoelasticity)   (2.6)

The term D_xz is caused by the rectification of internal vibration.
For example, this
vibration may come from shaft bearings and vary based on their load. The y-axis of the
gyro has a similar error model.
2.1.3 Gyroscope Testing and Calibration
Testing and calibration of gyroscopes is a well-known process. Complete test procedures
for the rate-integrating single-degree-of-freedom gyro are described by IEEE Standard
517-1974 [1]. Dynamically tuned gyro testing proceeds in a similar fashion.
The gyro is mounted on an inertially stable platform and allowed to run for a long
period to determine the random drift [19]. It is calculated by taking the power spectral
density of the output. For a dynamically tuned gyro, the random drift is usually called
the random walk in angle, when expressed in the correct units. The random walk in angle
forms one of the primary noise sources in the gyro.
Bias, scale factor, and mass imbalances for a rate integrating gyro may be determined
by performing a six-position test. The gyro axes are aligned with north, east, and up.
The gyro is tested with each of the axes vertically positive and then vertically negative.
A dynamically tuned gyro is tested in an eight position test. For four positions, the SA
is vertical with the x-axis aligned with north, west, south, and east, respectively. For
the other four positions, the SA is horizontal and aligned with north, while the x-axis is
aligned vertically positive, west, vertically negative, and east in sequence. Acceleration-sensitive terms are determined by the local gravity, while the scale factor is determined
by the horizontal earth rate [19].
2.2 Accelerometers
Accelerometers measure the specific force generated when a mass accelerates. They contain a "proof mass," a suspension to locate the mass, and a pickoff that outputs a signal
related to the acceleration [19]. The implementations of this concept range from spring-mass systems to fiber-optics. Accelerometers can even be made out of gyroscopes, such
as the pendulous integrating gyroscopic accelerometer described in Section 2.2.1.
The basic spring-mass accelerometer is a single-degree-of-freedom instrument. It consists of a mass, m, constrained to move in one direction and connected to the frame by
a spring with constant K, and a damper with coefficient c. Summing the forces on the
mass produces the system response equation, given by [19]
F_x = m (d^2x/dt^2) + c (dx/dt) + Kx,   (2.7)

where F_x is the input force and x is the displacement of the mass from its rest position relative to the accelerometer frame. At steady state, the acceleration is given by

d^2x/dt^2 = −(K/m) x,   (2.8)

where K/m is the accelerometer scale factor.
Accelerometers measure the specific force on the instrument rather than the total
acceleration seen by the system. Unless the accelerometer is used in an inertial reference
frame, the acceleration must be expressed in the rotating frame, e.g., an earth-centered
earth-fixed frame. Figure 2-5 shows the position vectors of a particle relative to an inertial
frame and a rotating frame. In the figure, the XYZ system is inertial, while the e1e2e3 system translates and rotates relative to it. The acceleration of the particle, as seen by an observer at O′ in the e1e2e3 frame, can be expressed as [11]
d^2ρ^P/dt^2 = C r̈^P − 2ω × (dρ^P/dt) − (dω/dt) × ρ^P − ω × (ω × ρ^P),   (2.9)
Figure 2-5: Position vectors of a point P relative to a fixed system and a rotating system.
with the terms defined as follows:
ρ^P : position vector of P relative to O′ as expressed in the e1e2e3 frame
C : transformation from the XYZ frame to the e1e2e3 frame
r̈^P : acceleration vector of P relative to O as expressed in the XYZ frame
ω : absolute angular velocity of the e1e2e3 frame.
In an earth-fixed coordinate system, ω is the constant rotation rate of the earth. In this case, Equation (2.9) simplifies to

d^2ρ^P/dt^2 = G^P + SF^P − 2ω × (dρ^P/dt) − ω × (ω × ρ^P),   (2.10)

where G^P is the gravitational attraction vector and SF^P is the specific force, both expressed in the e1e2e3 earth-fixed frame. The specific force observed by the instrument may then be written as [18]

SF^P = a^P − G^P,   (2.11)

where a^P is the combined acceleration vector of the accelerometer expressed in the earth-fixed frame.
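The sketch below evaluates the right-hand side of Equation (2.10) numerically for illustrative vectors; the gravitation, specific force, angular rate, position, and velocity values are invented purely to show how the Coriolis and centripetal terms enter.

```python
import numpy as np

def rotating_frame_accel(G, SF, omega, rho, rho_dot):
    """Acceleration of P in the earth-fixed frame, Eq. (2.10):
    rho_ddot = G + SF - 2*omega x rho_dot - omega x (omega x rho)."""
    G, SF, omega, rho, rho_dot = map(np.asarray, (G, SF, omega, rho, rho_dot))
    coriolis = 2.0 * np.cross(omega, rho_dot)
    centripetal = np.cross(omega, np.cross(omega, rho))
    return G + SF - coriolis - centripetal

# Illustrative numbers only: earth rate about the z-axis, a point moving east near the surface.
omega = np.array([0.0, 0.0, 7.292e-5])     # rad/s
rho = np.array([6.378e6, 0.0, 0.0])        # m
rho_dot = np.array([0.0, 100.0, 0.0])      # m/s
G = np.array([-9.81, 0.0, 0.0])            # m/s^2, gravitational attraction toward the origin
SF = np.array([9.81, 0.0, 0.0])            # m/s^2, specific force supporting the point

print(rotating_frame_accel(G, SF, omega, rho, rho_dot))
```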
Figure 2-6: Pendulous integrating gyroscopic accelerometer.
2.2.1 Pendulous Integrating Gyroscopic Accelerometers
A modified rate-integrating gyro can be used as a very precise accelerometer, as shown
in Figure 2-6. The pendulous integrating gyroscopic accelerometer (PIGA) converts an
acceleration-induced inertia force into a torque.
This torque precesses a gyro -
the
precession rate is proportional to the acceleration. The time integral of the acceleration
is the angle through which the gyro turns [19].
A PIGA operates in a slightly different manner than the rate-integrating gyro described
in Section 2.1.1. In a rate-integrating gyro, the float is balanced so that its center of mass
lies at the intersection of the SA and the OA. In a PIGA, however, the center of mass is
offset along the SA. Accelerations along the IA cause the float to act like a pendulum,
generating a torque about the OA. A pickoff senses the resulting rotation of the float
about the OA and drives a torquer to rotate the PIGA housing about the IA. This
rotation creates a gyroscopic torque about the OA that cancels the acceleration torque,
so the float remains level. The rotation rate of the housing about the IA measures the
acceleration of the PIGA along the IA.
2.2.2 Accelerometer Error Model
Accelerometers suffer from several sources of error. The instrument bias is an acceleration-insensitive error. The bias usually varies from one instrument startup to the next, and must be compensated by the navigation computer. Scale factor errors are acceleration-sensitive. If the mass is not perfectly constrained to move along the sensing axis, cross-coupling errors also occur. These errors are caused by accelerations along non-sensing axes being observed by the instrument. The steady-state instrument output u of an accelerometer can be expressed as [18]

u = k_0 + k_1 a_x + k_2 a_x^2 + k_xy a_y + k_xz a_z.   (2.12)

The terms of the model are as follows:
a_x : Component of specific force along the sensing axis
a_y : Component of specific force along a nonsensing axis
a_z : Component of specific force along a nonsensing axis
k_0 : Accelerometer bias
k_1 : Scale factor
k_2 : Nonlinear calibration coefficient
k_xy : Cross-axis sensitivity coefficient
k_xz : Cross-axis sensitivity coefficient
2.2.3 Accelerometer Testing and Calibration
Accelerometers are classically calibrated by testing them statically in a 1-g field. To calibrate the accelerometers, they are mounted on a test bench that rotates their IA in a vertical plane around a horizontal axis. This allows the bias and scale factor to be determined. If measurements E_+1 and E_-1 are taken at +1 g and −1 g respectively, the scale factor is calculated from Equation (2.12) by [19]

k_1 = (1/2)(E_+1 − E_-1).   (2.13)

The bias can be calculated by

k_0 = (1/2)(E_-1 + E_+1) / k_1.   (2.14)
Cross-axis sensitivities can be determined by making measurements with the accelerometer in other orientations.
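A minimal sketch of the two-position calibration of Equations (2.13) and (2.14), assuming readings E_plus and E_minus are the instrument outputs at +1 g and −1 g; the numeric values are invented for illustration only.

```python
def two_position_calibration(E_plus, E_minus):
    """Scale factor and bias from +/-1 g readings, per Eqs. (2.13) and (2.14)."""
    k1 = 0.5 * (E_plus - E_minus)          # scale factor, Eq. (2.13)
    k0 = 0.5 * (E_minus + E_plus) / k1     # bias, Eq. (2.14)
    return k1, k0

# Hypothetical instrument outputs (arbitrary counts) at +1 g and -1 g.
k1, k0 = two_position_calibration(E_plus=1002.5, E_minus=-997.5)
print(k1, k0)   # 1000.0 counts/g and 0.0025, per the forms above
```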
The in-run variation of the bias may be tested by placing the accelerometer on an
inertially stable platform and allowing it to run for a long time. The standard deviation
of the outputs of the accelerometer is its random drift. When expressed in the correct
units, the random drift for a PIGA is the random walk in velocity, which is a primary
noise source for the instrument.
Higher-order error terms cannot be separated from each other when the accelerometer
is tested at only one acceleration level. These higher order terms can be determined by
placing the accelerometer on a centrifuge. Testing the accelerometer at various centrifuge
speeds and orientations allows acceleration-squared-sensitive and higher error terms to be
calculated.
Chapter 3
System Identification
Parameter identification is the determination or estimation of system parameters such as
initial covariances, initial state estimates, dynamics, or noise values. This thesis focuses
on two identification methods: Bayesian estimation and maximum likelihood estimation.
Bayesian estimation is used for Kalman filters, discussed briefly in Section 3.2.2, while
maximum likelihood estimation provides the basis for Full Information Maximum Likelihood Optimal Filtering (FIMLOF) [31, 35]. Maximum likelihood estimation is discussed
in general in Section 3.1 and the implementation in FIMLOF is covered in Section 3.3.
In this thesis, the term parameter estimate refers to the value of a model parameter
identified by FIMLOF such as a process noise strength. The term state estimate refers to
the Kalman Filter estimate of the value of a state in the model.
3.1 Maximum Likelihood Estimation
Maximum likelihood estimation is a commonly used form of parameter identification.
It maximizes a likelihood function that is dependent on the conditional probability of
the observations.
This maximization is frequently performed using Newton's Method,
although an approximate Newton's Method is used for FIMLOF.
3.1.1 Parameter Estimation
Consider a system with states x ∈ R^n, past observations z ∈ R^{m×N}, and unknown parameters α ∈ R^{q×1}. Bayesian estimation is more effective than maximum likelihood estimation in general, but requires prior knowledge of p(α), the probability density function of the parameters [30]. In practice, there is often no way of knowing p(α). Maximum likelihood estimation requires only a knowledge of p(z|α), the probability density function of the observations conditioned on the parameters, rather than p(α). Therefore, maximum likelihood estimation is used in FIMLOF.
3.1.2 Likelihood Functions
Maximum likelihood estimation determines the parameter estimates that maximize the
conditional probability of the observations. In other words, it finds the parameter estimates for which the most likely observations are those that have actually occurred. Let
α = [α_1, ..., α_q] be a set of q parameters that affect a system. Let z = [z_1, ..., z_N] be a set of N observations of the system for times t_1, ..., t_N. The likelihood function is defined as the joint probability density of the observations given the parameter values,

l(α) = p(z; α) = p(z_1, ..., z_N; α_1, ..., α_q).   (3.1)

The likelihood function is calculated for a given set of observations. The maximum of l(α) is defined as l(α̂), where α̂ are the maximum likelihood estimates of α. They are the parameter estimates for which the observations are easiest to produce.

Equation (3.1) is a probability density function, so it follows that

∫ p(z; α) dz = 1.   (3.2)

Taking the partial derivatives of both sides of Equation (3.2) results in [35]

∫ ∂p(z; α)/∂α_i dz = 0,   ∫ ∂^2 p(z; α)/(∂α_i ∂α_j) dz = 0.   (3.3)

When l(α) is an exponential function of α (e.g., a Gaussian distribution), Equation (3.1) may be converted to the negative natural logarithm of the likelihood function, ζ(α), so that

ζ(α) = − ln p(z; α).   (3.4)
Equation (3.4) takes advantage of the monotonicity of the natural logarithm, and converts
the problem from a maximization to a minimization in keeping with standard optimization
notation. If the probability density functions are independent, the likelihood function may
be formed by taking the product of the individual density functions. In this case, Equation
(3.4) has the additional benefit of converting the products into sums.
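To make the negative log-likelihood concrete, the short sketch below writes ζ for N independent Gaussian observations with unknown mean and known variance, and confirms numerically that its minimizer is the sample mean. The data are synthetic and the example only illustrates the sum-of-terms form implied by Equation (3.4).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, mu_true = 2.0, 1.5
z = rng.normal(mu_true, sigma, size=500)          # synthetic observations

def neg_log_likelihood(mu, z, sigma):
    """zeta(mu) = -ln p(z; mu) for independent Gaussian observations (constants retained)."""
    return 0.5 * np.sum((z - mu) ** 2) / sigma**2 + 0.5 * len(z) * np.log(2 * np.pi * sigma**2)

# The product of independent densities becomes a sum after the negative logarithm,
# and the minimizer coincides with the sample mean.
grid = np.linspace(0.0, 3.0, 3001)
zeta = np.array([neg_log_likelihood(m, z, sigma) for m in grid])
print(grid[np.argmin(zeta)], z.mean())
```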
3.1.3 Cramer-Rao Lower Bound
Given a set of observations, z, consider α̂(z), an arbitrary estimate of α. The bias b(α) of the estimator α̂(z) can be expressed as [31]

b(α) = α − E{α̂(z) | α} = α − ∫ α̂(z) p(z; α) dz,   (3.5)

where E{·} denotes the expectation operator. The error covariance matrix can be expressed as

Σ(α) = E{(α − α̂(z) − b(α))(α − α̂(z) − b(α))^T | α}
     = ∫ (α − α̂(z) − b(α))(α − α̂(z) − b(α))^T p(z; α) dz.   (3.6)

A desirable estimator is unbiased, so that E{α̂} = E{α}. It also has a minimum covariance, so that the diagonal elements of Σ(α) are smaller than those of any other estimator [30]. The variance has a lower bound that can be derived from the bias and the statistical properties of the observations. This lower bound, which is independent of the estimator, is known as the Cramer-Rao lower bound. In the scalar case, the general form of the Cramer-Rao lower bound is [26]

E{(α̂ − α)^2 | α} ≥ [∂E{α̂(z) | α}/∂α]^2 / E{[∂ ln p(z|α)/∂α]^2}.   (3.7)

For an unbiased estimator,

∂E{α̂(z) | α}/∂α = 1,   (3.8)

and Equation (3.7) becomes

E{(α̂ − α)^2 | α} ≥ 1/F,   (3.9)

where

F = E{[∂ ln p(z|α)/∂α]^2} = −E{∂^2 ln p(z|α)/∂α^2}   (3.10)

is the Fisher information matrix and will be discussed further in Section 3.1.4. In the more general case of an unbiased estimator with q parameters, it can be shown [38] that Equation (3.9) becomes

E{(α̂ − α)(α̂ − α)^T} ≥ F^{-1}.   (3.11)

The Cramer-Rao lower bound for the variance of the ith parameter, α_i, is equal to [F^{-1}]_{ii}.
3.1.4 Fisher Information Matrix
The Fisher information matrix measures the information contained in a parameter estimate. It is the expected value of the Hessian, the matrix of second derivatives, of the
negative log-likelihood function. Combining Equation (3.4) and Equation (3.11) yields
F_ij = E{ ∂^2 ζ(z; α) / (∂α_i ∂α_j) }.   (3.12)
This equation will become important in the sequel.
It will also be useful to have an expression of the Fisher information matrix in terms
of first partial derivatives [35].
Applying the definition of the expectation operator to
Equation (3.12), the equation may be rewritten as
F_ij = − ∫ [∂^2 ln p(z; α) / (∂α_i ∂α_j)] p(z; α) dz.   (3.13)

Recalling that the derivative of ln(x) is 1/x, Equation (3.13) becomes

F_ij = − ∫ ∂/∂α_j [ (1/p(z; α)) ∂p(z; α)/∂α_i ] p(z; α) dz.   (3.14)

Applying the chain rule of differentiation to Equation (3.14) yields

F_ij = − ∫ ∂^2 p(z; α)/(∂α_i ∂α_j) dz + ∫ (1/p(z; α)) [∂p(z; α)/∂α_i] [∂p(z; α)/∂α_j] dz.   (3.15)

From Equation (3.3) [8], it follows that Equation (3.15) simplifies to

F_ij = E{ ∂ ln[p(z; α)]/∂α_i · ∂ ln[p(z; α)]/∂α_j }.   (3.16)

Substituting Equation (3.4) into Equation (3.16) results in the Fisher information matrix in terms of first partial derivatives, so that

F_ij = E{ ∂ζ(z; α)/∂α_i · ∂ζ(z; α)/∂α_j }.   (3.17)
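As a numerical sanity check on Equations (3.11) and (3.17), the sketch below uses the textbook case of the mean of N Gaussian observations, where the Fisher information is N/σ², and compares the Cramer-Rao bound σ²/N with the Monte Carlo variance of the maximum likelihood estimate (the sample mean). All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, mu_true, trials = 50, 2.0, 0.7, 20000

# Fisher information and Cramer-Rao lower bound for the mean of N Gaussian samples.
fisher = N / sigma**2
crlb = 1.0 / fisher

# Monte Carlo variance of the maximum likelihood estimator (the sample mean).
estimates = rng.normal(mu_true, sigma, size=(trials, N)).mean(axis=1)
print(crlb, estimates.var())   # the two numbers should agree closely
```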
3.1.5 Properties of Maximum Likelihood Estimation
A standard assumption for maximum likelihood estimation, made in this thesis, is that the measurements are independent, which implies that

p(z; α) = ∏_{k=0}^{N} p(z_k; α).   (3.18)

The observations are assumed to be a sequence of independent experiments. An additional assumption is the identifiability condition [31], defined as

p(z; α_1) ≠ p(z; α_2)   for all α_1 ≠ α_2.   (3.19)

This assumption means that different parameter values lead to observations with different probabilistic behavior. Simply put, a parameter α_1 cannot cause measurements with an identical probability density function as measurements from another parameter α_2. The identifiability condition determines a unique value for α. Without it, it would be impossible to distinguish between two parameter values α_1 and α_2, regardless of the number of observations made.

The identifiability condition can sometimes be relaxed to form a local identifiability condition. Under the relaxation, Equation (3.19) becomes

p(z_t; α_1) ≠ p(z_t; α_2)   for all |α_1 − α_2| < M, α_1 ≠ α_2,   (3.20)

where M determines the local region. Inside this region, α is unique.
Using the conditions of independence and identifiability, and several technical assumptions [38], maximum likelihood estimation has many asymptotic properties. An unbiased estimator α̂, as described in Section 3.1.3, is one in which E{α̂} = E{α} [30]. In an asymptotically unbiased estimator, E{α̂_t | α} → α as t → ∞. An estimator α̂ is efficient if, given any other estimator ᾱ [30],

E{(α̂ − α)(α̂ − α)^T} ≤ E{(ᾱ − α)(ᾱ − α)^T}.   (3.21)

An unbiased, efficient estimator fulfills the Cramer-Rao lower bound as an equality, and Equation (3.11) becomes

E{(α̂ − α)(α̂ − α)^T} = F^{-1}.   (3.22)

Such an estimator is not guaranteed to exist, but if it does, it is a maximum likelihood estimator [38]. An asymptotically efficient estimator is one in which Equation (3.22) is fulfilled as the number of observations goes to infinity. A consistent estimator gets better as the number of observations increases [30]. More specifically, the estimate α̂ converges to the true value α as the number of observations goes to infinity. An estimator is asymptotically normal if it becomes Gaussian as the number of observations goes to infinity. The asymptotic normality of maximum likelihood estimators fulfills the requirements of the negative natural logarithm form of the likelihood function found in Equation (3.4).
3.1.6 Solution of Maximum Likelihood Estimators
Due to the complex nature of the likelihood function, its derivatives usually cannot be
solved for analytically.
Therefore, an iterative optimization method must be used to
find the solution. Several methods exist for determining the maximum of the likelihood
function or, alternatively, the minimum of the negative log-likelihood function [4]. The
method used in this thesis is an approximate Newton-Raphson method.
Let ᾱ be an estimate of the optimal parameter values α*. For parameter values |α − ᾱ| < c, with c > 0, the likelihood function ζ(α) can be approximated by its Taylor series expansion

h(α) = ζ(ᾱ) + ∇ζ(ᾱ)^T (α − ᾱ) + (1/2)(α − ᾱ)^T H(ᾱ)(α − ᾱ).   (3.23)

The gradient of ζ(α), ∇ζ(ᾱ), is evaluated at α = ᾱ, so that

[∇ζ(ᾱ)]_i = ∂ζ(α)/∂α_i |_{α=ᾱ}.   (3.24)

H(ᾱ) is the Hessian, the matrix of second partial derivatives, of ζ(α) evaluated at α = ᾱ, so that

[H(ᾱ)]_{ij} = ∂^2 ζ(α)/(∂α_i ∂α_j) |_{α=ᾱ}.   (3.25)

Note that h(α) is a quadratic function, which can be minimized by solving

∇h(α) = ∇ζ(ᾱ) + H(ᾱ)(α − ᾱ) = 0.   (3.26)

Rearranging Equation (3.26) yields the Newton step

α − ᾱ = −H(ᾱ)^{-1} ∇ζ(ᾱ).   (3.27)

Solving Equation (3.27) iteratively, the Newton-Raphson algorithm is given by

ᾱ_{k+1} = ᾱ_k − γ_k H(ᾱ_k)^{-1} ∇ζ(ᾱ_k),   (3.28)

where γ_k is the step size. The basic Newton-Raphson algorithm specifies γ_k = 1. Allowing it to vary based on a line search adds robustness to the algorithm [4].
The convergence rate of a minimization algorithm determines how many iterations it takes to arrive at a solution. A sequence of numbers {x_i} displays linear convergence if lim_{i→∞} x_i = x* and there exist some n < ∞ and δ < 1 such that

|x_{i+1} − x*| / |x_i − x*| ≤ δ,   for all i > n.   (3.29)

Linear convergence adds a constant number of accurate digits to the estimate at each iteration. The sequence offers superlinear convergence if lim_{i→∞} x_i = x* and

lim_{i→∞} |x_{i+1} − x*| / |x_i − x*| = 0.   (3.30)

Approximate Newton's Methods exhibit superlinear convergence [4] and add an increasing number of accurate digits to the estimate each iteration. The sequence displays quadratic convergence if lim_{i→∞} x_i = x* and there exist some n < ∞ and δ < ∞ such that

|x_{i+1} − x*| / |x_i − x*|^2 ≤ δ,   for all i > n.   (3.31)
Quadratic convergence doubles the number of accurate digits in the estimate every iteration.
The Newton-Raphson method demonstrates quadratic convergence, but is computationally expensive. For a system with q parameters, the Hessian requires q(q+ 1)/2 second
partial derivatives to be calculated. In practice, this can prove to be prohibitively expensive for large systems. Instead, the Hessian can be approximated by its expected value
F(ᾱ_k), using the form of F given in Equation (3.17). This form requires only the first partial derivatives, which are already calculated for the gradient. Although this approximate Newton-Raphson method does not exhibit quadratic convergence, it can prove to
be much less computationally expensive for problems with many parameters.
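The sketch below illustrates the approximate Newton iteration of Equation (3.28) on a simple problem where the expected Hessian is available in closed form: estimating the variance of zero-mean Gaussian data. It replaces the true Hessian with its expected value, as described above; the data, initial guess, and unit step size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.5, size=1000)       # zero-mean Gaussian data, unknown variance v
S = np.sum(z**2)
N = len(z)

def grad(v):
    """Gradient of the negative log-likelihood zeta(v) = (N/2) ln v + S/(2v)."""
    return N / (2.0 * v) - S / (2.0 * v**2)

def expected_hessian(v):
    """Expected Hessian (Fisher information) for the variance parameter."""
    return N / (2.0 * v**2)

v, gamma = 0.5, 1.0                        # poor initial guess, unit step size
for it in range(10):
    v_new = v - gamma * grad(v) / expected_hessian(v)   # approximate Newton step, cf. Eq. (3.28)
    print(it, v_new)
    if abs(v_new - v) < 1e-9:
        break
    v = v_new
print("closed-form MLE:", S / N)
```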
3.2 Kalman Filters
Kalman Filters use Bayesian estimation to develop an optimal filter. FIMLOF uses the
outputs of a Kalman Filter in concert with maximum likelihood estimation to estimate the
model parameters. Kalman Filters require accurate system models to perform optimally.
The suboptimality of a mismodeled system can be determined by a sensitivity analysis.
3.2.1 System Model
The guidance system can be modeled using a linear, time-invariant, discrete-time state-space model driven by white noise, w, with white measurement noise, v, so that

x_k = Φ_{k-1} x_{k-1} + G_{k-1} w_{k-1},   (3.32)
z_k = H_k x_k + v_k.   (3.33)

The terms of the model are:
x_k ∈ R^n : state vector at time t_k
w_k ∈ R^p : white plant noise vector at time t_k
z_k ∈ R^m : measurement vector at time t_k
v_k ∈ R^m : white measurement noise vector at time t_k
k : time index (k = 0, 1, 2, ...)
Φ_k ∈ R^{n×n} : state transition matrix from time t_k to t_{k+1}
H_k ∈ R^{m×n} : system observation matrix at time t_k
G_k ∈ R^{n×p} : system plant noise input matrix at time t_k

The initial state vector x_0 has covariance P_0 = E{[x_0 − x̂_0][x_0 − x̂_0]^T}. The plant noise covariance is given by

Q_k = E{w_k w_k^T} = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) G(τ) N G^T(τ) Φ^T(t_{k+1}, τ) dτ,   (3.34)

where N is the strength of the plant noise. The measurement noise has a covariance R_k = E{v_k v_k^T}.
3.2.2 Kalman Filter Equations
Given the linear state-space model described by Equations (3.32) and (3.33), the objective is to determine x̂_k, the best estimate of x_k for the given measurements.
An optimal
estimator is defined as one that minimizes the mean square estimation error. Under certain
assumptions, an optimal estimator produces the same results as an efficient maximum
likelihood estimator [9], given by Equation (3.21). This estimate is readily provided by a
Kalman Filter. The Kalman Filter equations are well known [9, 13] and are summarized
here.
The state estimate propagation,

x̂_k^- = Φ_{k-1} x̂_{k-1}^+,   (3.35)

and the error covariance propagation,

P_k^- = Φ_{k-1} P_{k-1}^+ Φ_{k-1}^T + Q_{k-1},   (3.36)

propagate the filter from one measurement to the next. The residue is the difference between the new measurement and the estimated value of the new measurement, given by

r_k = z_k − H_k x̂_k^-.   (3.37)

The covariance of the residue can be expressed as

S_k = H_k P_k^- H_k^T + R_k.   (3.38)

The Kalman gain matrix provides the filter gains, and may be written as

K_k = P_k^- H_k^T S_k^{-1}.   (3.39)

The state estimate update,

x̂_k^+ = x̂_k^- + K_k r_k,   (3.40)

and the error covariance update,

P_k^+ = [I − K_k H_k] P_k^- [I − K_k H_k]^T + K_k R_k K_k^T,   (3.41)

update the filter using the new measurements. Equation (3.41) can be rewritten as

P_k^+ = [I − K_k H_k] P_k^-.   (3.42)

Equation (3.41) is known as the Joseph form of the update equation [9]. It provides better numerical stability than Equation (3.42) at the expense of calculation time.
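A minimal sketch of one propagation and update cycle, Equations (3.35)-(3.41), for a toy scalar random-walk state observed directly; the matrices are illustrative values and the Joseph-form covariance update of Equation (3.41) is used.

```python
import numpy as np

def kalman_step(x, P, z, Phi, Q, H, R):
    """One Kalman Filter cycle: Eqs. (3.35)-(3.41), Joseph-form covariance update."""
    # Propagation, Eqs. (3.35)-(3.36)
    x_minus = Phi @ x
    P_minus = Phi @ P @ Phi.T + Q
    # Residual and its covariance, Eqs. (3.37)-(3.38)
    r = z - H @ x_minus
    S = H @ P_minus @ H.T + R
    # Gain and update, Eqs. (3.39)-(3.41)
    K = P_minus @ H.T @ np.linalg.inv(S)
    x_plus = x_minus + K @ r
    I_KH = np.eye(len(x)) - K @ H
    P_plus = I_KH @ P_minus @ I_KH.T + K @ R @ K.T
    return x_plus, P_plus, r, S

# Toy one-state example: a random walk observed directly.
Phi = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.1]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in [0.9, 1.1, 1.0, 1.2]:
    x, P, r, S = kalman_step(x, P, np.array([z]), Phi, Q, H, R)
print(x, P)
```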
3.2.3 Mismodeling in Kalman Filters
Much has been written about suboptimal filtering [9, 10, 13]. In Chapter 5, the performance of FIMLOF on systems with a variety of mismodelings is evaluated. Each of the
mismodeled systems is a reduced-order model of the true system. In other words, the
suboptimal models used in this thesis contain only states found in the truth model. Sen-
sitivity analysis, the investigation of the effects of the mismodeling, is greatly simplified
in this case.
The state estimates from a suboptimal Kalman Filter reflect only how well the filter believes it has performed. Instead, it is of interest to determine the true performance
of the filter. Suppose that there are two models, the truth model M ∈ R^{n×n} and the reduced-order model M̃ ∈ R^{s×s}, with s < n. Model M has states x ∈ R^{n×1} and model M̃ has states x̃ ∈ R^{s×1}, with x̃ consisting of a proper subset of x. In general, to perform a sensitivity analysis upon a mismodeled system, both models must be combined into a macromodel of dimension n + s. However, in the special case where M̃ contains only a subset of the states in M, the models may be evaluated separately, at the expense of some additional bookkeeping. A Kalman Filter is run on M̃, using noise covariances Q̃ and R̃, so that the Kalman Filter equations become

x̃_k^- = Φ̃_{k-1} x̃_{k-1}^+
P̃_k^- = Φ̃_{k-1} P̃_{k-1}^+ Φ̃_{k-1}^T + Q̃_{k-1}
K̃_k = P̃_k^- H̃_k^T (H̃_k P̃_k^- H̃_k^T + R̃_k)^{-1}
x̃_k^+ = x̃_k^- + K̃_k (z_k − H̃_k x̃_k^-)
P̃_k^+ = [I − K̃_k H̃_k] P̃_k^- [I − K̃_k H̃_k]^T + K̃_k R̃_k K̃_k^T.   (3.43)
The Joseph form of the state covariance update equation must be used. The gains, K̃, from Equation (3.43) may be restated in the dimension of model M. For example, suppose that model M has five states, called x_1, x_2, x_3, x_4, and x_5. Model M̃ contains states x_1, x_2, and x_5. K̃ then has elements

K̃ = [ K̃_{x1} ; K̃_{x2} ; K̃_{x5} ].   (3.44)

K̃ may be restated in the dimension of M, so that

K̃_M = [ K̃_{x1} ; K̃_{x2} ; 0 ; 0 ; K̃_{x5} ].   (3.45)
The gains K̃_M and noise covariances Q and R are then used in a Kalman Filter on M, given by

x̂_k^- = Φ_{k-1} x̂_{k-1}^+
P_k^- = Φ_{k-1} P_{k-1}^+ Φ_{k-1}^T + Q_{k-1}
x̂_k^+ = x̂_k^- + K̃_{M,k} (z_k − H_k x̂_k^-)
P_k^+ = [I − K̃_{M,k} H_k] P_k^- [I − K̃_{M,k} H_k]^T + K̃_{M,k} R_k K̃_{M,k}^T.   (3.46)

The state estimates from this filter are the true performance of a filter using model M̃.
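The sketch below illustrates the bookkeeping of Equations (3.44)-(3.45): gains computed for a reduced-order model whose states are a subset of the truth model are zero-padded into the truth-model dimension before being applied as fixed gains. The five-state example and the gain values are hypothetical.

```python
import numpy as np

def embed_gains(K_reduced, kept_states, n_full):
    """Restate reduced-model gains in the dimension of the full model, cf. Eqs. (3.44)-(3.45).

    K_reduced: (s x m) gain matrix from the reduced-order filter.
    kept_states: indices of the full-model states retained in the reduced model.
    n_full: number of states in the truth model.
    """
    s, m = K_reduced.shape
    K_full = np.zeros((n_full, m))
    K_full[kept_states, :] = K_reduced     # rows for removed states stay zero
    return K_full

# Hypothetical example matching the text: full model has states x1..x5,
# reduced model keeps x1, x2, and x5 (indices 0, 1, 4).
K_tilde = np.array([[0.6], [0.3], [0.1]])   # illustrative gains for one measurement
K_M = embed_gains(K_tilde, kept_states=[0, 1, 4], n_full=5)
print(K_M)
```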
3.3 Full Information Maximum Likelihood Optimal Filtering
Section 3.1 describes the basic method of maximum likelihood estimation. FIMLOF uses maximum likelihood estimation combined with a Kalman Filter to identify model parameters.
The estimated parameters can be initial conditions, noise strengths [31],
or even Markov processes [35]. This thesis focuses on the identification of measurement
standard deviations and process noise strengths.
The basic FIMLOF procedure involves iterating between a Kalman Filter and maximum likelihood estimation. The first guesses for the parameters of interest are chosen,
and the Kalman Filter is run using these guesses. State information from the Kalman
Filter is used to form a likelihood function, which is then maximized with respect to the
unknown parameters, and the process is repeated with the resulting parameter estimates
until convergence is reached.
3.3.1 The Likelihood Function
The purpose of FIMLOF is to determine the model parameter estimates that have the highest probability of producing the observed measurements. To that end, it is necessary to know the conditional probability density of the measurement z_k at time t_k given all measurements z^{k-1} = [z_{k-1}, z_{k-2}, ..., z_0] prior to t_k [35]. z_0 contains the initial conditions. It is assumed, in common with the system model given in Equations (3.32) and (3.33), that the noise sources are Gaussian. Then the probability density function is

p(z_k | z^{k-1}) = (2π)^{-m/2} det[S_k]^{-1/2} exp(−r_k^T S_k^{-1} r_k / 2).   (3.47)

For the purposes of this thesis, the probability density function p(z_k | z^{k-1}) should be written as p(z_k | z^{k-1}; α), where α is the vector of unknown parameters, because the family of densities indexed by the parameter values is of interest [31]. The individual conditional probability density functions can be combined into a joint probability density function, so that

p(z^N; α) = p(z_N | z^{N-1}; α) ⋯ p(z_1 | z^0; α) p(z_0; α).   (3.48)

Equation (3.48), once expanded via Equation (3.47), is a function of only the residuals r_k and the residual covariances S_k. Therefore, it can be calculated with the output of a Kalman Filter.
As mentioned in Section 3.1.2, Equation (3.48) is converted to a sum rather than a product using the negative natural logarithm. The equation can be written as

− ln[p(z^N; α)] = − Σ_{k=0}^{N} ln[p(z_k | z^{k-1}; α)],   (3.49)

where p(z_0 | z^{-1}; α) ≡ p(z_0; α). Equation (3.47) becomes

− ln[p(z_k | z^{k-1}; α)] = (m/2) ln(2π) + (1/2) ln det[S_k(α)] + (1/2) r_k(α)^T S_k^{-1}(α) r_k(α).   (3.50)

The term (m/2) ln(2π) from Equation (3.50) is dropped, because it is a constant and thus will have no effect on the derivatives or the minimization, yielding

ζ(z_k | z^{k-1}; α) = (1/2) ln det[S_k(α)] + (1/2) r_k(α)^T S_k^{-1}(α) r_k(α).   (3.51)

Equation (3.49) becomes the negative log-likelihood function, so that

ζ(z^N; α) = Σ_{k=0}^{N} ζ(z_k | z^{k-1}; α).   (3.52)

Equation (3.52) is assembled recursively as the Kalman Filter proceeds or, by saving r_k and S_k, once the filter is complete. Minimizing Equation (3.52) yields α̂, the maximum likelihood estimate of α.
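A minimal sketch of how Equation (3.52) can be accumulated from stored Kalman Filter residuals r_k and residual covariances S_k; the residual sequence below is synthetic and merely stands in for actual filter output.

```python
import numpy as np

def negative_log_likelihood(residuals, covariances):
    """Accumulate zeta(z^N; alpha) = sum_k [0.5 ln det S_k + 0.5 r_k^T S_k^{-1} r_k], Eq. (3.52)."""
    total = 0.0
    for r, S in zip(residuals, covariances):
        total += 0.5 * np.log(np.linalg.det(S)) + 0.5 * r @ np.linalg.solve(S, r)
    return total

# Synthetic stand-ins for filter output: 2-d residuals with a constant residual covariance.
rng = np.random.default_rng(3)
S = np.array([[0.5, 0.1], [0.1, 0.4]])
residuals = rng.multivariate_normal(np.zeros(2), S, size=100)
covariances = [S] * len(residuals)
print(negative_log_likelihood(residuals, covariances))
```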
3.3.2 Negative Log-Likelihood Function Minimization
Section 3.1.6 details the maximum likelihood estimation solution method for an arbitrary likelihood function. Let α̂ = [α̂_1, ..., α̂_q]^T be the maximum likelihood estimate of the parameters. For FIMLOF iteration g, the parameter estimate is α̂_g. Equation (3.28) is rewritten as

α̂_g = α̂_{g-1} + γ_g A^{-1}(α̂_{g-1}) B(α̂_{g-1}).   (3.53)

A(α̂_g) is the Hessian of the negative log-likelihood function, so that

[A(α̂_g)]_{ij} = Σ_{k=0}^{N} ∂^2 ζ(z_k | z^{k-1}; α)/(∂α_i ∂α_j) |_{α=α̂_g}.   (3.54)

B(α̂_g) is the negative of the gradient of the negative log-likelihood function and can be expressed as

[B(α̂_g)]_i = − Σ_{k=0}^{N} ∂ζ(z_k | z^{k-1}; α)/∂α_i |_{α=α̂_g}.   (3.55)

Convergence of the FIMLOF algorithm is reached when

|α̂_g − α̂_{g-1}| < ε,   (3.56)

for a given ε > 0. Formulas for ε are discussed in Section 3.3.5.
Recall from Section 3.1.4 that the Hessian may be approximated by its expected value

A_{ij} ≈ Σ_{k=0}^{N} E{ ∂^2 ζ(z_k | z^{k-1}; α)/(∂α_i ∂α_j) }.   (3.57)

This approximation is valid for a stationary system when the observation interval [0, t_N] is much longer than the correlation times of ∂r_k/∂α_i and ∂r_k/∂α_j [31]. Equation (3.57) may be manipulated into the stochastic approximation to the Fisher Information matrix, so that

A_{ij} ≈ Σ_{k=0}^{N} [ (∂r_k(α)/∂α_i)^T S_k^{-1}(α) (∂r_k(α)/∂α_j) + (1/2) tr( ∂S_k(α)/∂α_i S_k^{-1}(α) ∂S_k(α)/∂α_j S_k^{-1}(α) ) ].   (3.58)

See Reference [35] for a complete derivation of Equation (3.58).
Equation (3.53) may be solved using the first partial derivatives of the likelihood function, which can be calculated analytically. From Equations (3.51) and (3.52), it follows that [35]

∂ζ(z^N; α)/∂α_i = Σ_{k=0}^{N} ∂ζ(z_k | z^{k-1}; α)/∂α_i,   i = 1, ..., q,   (3.59)

where

∂ζ(z_k | z^{k-1}; α)/∂α_i = r_k(α)^T S_k^{-1}(α) ∂r_k(α)/∂α_i − (1/2) r_k(α)^T S_k^{-1}(α) ∂S_k(α)/∂α_i S_k^{-1}(α) r_k(α) + (1/2) tr( ∂S_k(α)/∂α_i S_k^{-1}(α) ).   (3.60)
The calculation of the partial derivatives is computationally intensive, because the partial
derivatives for rk and Sk are made up of the partial derivatives of the Kalman Filter
equations. If the algorithm is solving for q parameters, it must perform the computational
equivalent of q + 1 Kalman Filters every FIMLOF iteration.
The partial derivatives
for rk and Sk vary depending on the type of the unknown parameters. The analytical
partial derivatives necessary for unknown process noise strengths are presented in Section
3.3.3. The analytical partial derivatives for unknown measurement standard deviations
are presented in Section 3.3.4.
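The sketch below shows how the gradient terms of Equation (3.60) and the approximate Fisher information of Equation (3.58) could be accumulated from the filter quantities r_k, S_k and their partial derivatives; all inputs are placeholders standing in for the outputs of the sensitivity recursions developed in the next two sections.

```python
import numpy as np

def accumulate_score_and_fisher(r_list, S_list, dr_list, dS_list):
    """Accumulate the negative-log-likelihood gradient (Eqs. 3.59-3.60) and the
    stochastic Fisher approximation to the Hessian (Eq. 3.58).

    r_list[k]  : residual r_k, shape (m,)
    S_list[k]  : residual covariance S_k, shape (m, m)
    dr_list[k] : partials of r_k with respect to each parameter, shape (q, m)
    dS_list[k] : partials of S_k with respect to each parameter, shape (q, m, m)
    """
    q = dr_list[0].shape[0]
    grad = np.zeros(q)
    fisher = np.zeros((q, q))
    for r, S, dr, dS in zip(r_list, S_list, dr_list, dS_list):
        S_inv = np.linalg.inv(S)
        Sr = S_inv @ r
        for i in range(q):
            grad[i] += (r @ S_inv @ dr[i]
                        - 0.5 * r @ S_inv @ dS[i] @ Sr
                        + 0.5 * np.trace(dS[i] @ S_inv))
            for j in range(q):
                fisher[i, j] += (dr[i] @ S_inv @ dr[j]
                                 + 0.5 * np.trace(S_inv @ dS[i] @ S_inv @ dS[j]))
    return grad, fisher

# Placeholder data: one parameter, two measurements per epoch, three epochs.
r_list = [np.array([0.1, -0.2])] * 3
S_list = [np.eye(2) * 0.5] * 3
dr_list = [np.array([[0.01, 0.02]])] * 3
dS_list = [np.array([[[0.1, 0.0], [0.0, 0.1]]])] * 3
grad, fisher = accumulate_score_and_fisher(r_list, S_list, dr_list, dS_list)
print(grad, fisher)
```

The resulting quantities would feed the update of Equation (3.53); note that B in the text is the negative of the gradient accumulated here, so the step is α̂_g = α̂_{g-1} − γ_g · fisher^{-1} · grad.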
3.3.3 Process Noise Equations
FIMLOF can be used to estimate the value of a process noise strength in the system.
Suppose that there is a white process noise with an unknown strength N. The process
noise enters the Kalman Filter through the Q matrix. It may be possible to calculate the partial derivative of this matrix analytically; however, in the system used for this thesis, the method of calculating Q is essentially a black box. As such, an analytic partial derivative for Q is impossible. A numerical finite difference partial derivative can be calculated instead using Lagrange's Equation [2], so that

∂f(x)/∂x ≈ (4/3) [f(x + μ) − f(x − μ)] / (2μ) − (1/3) [f(x + 2μ) − f(x − 2μ)] / (4μ).   (3.61)

Equation (3.61) agrees with the Taylor Series expansion of the partial derivative to 3rd order. The partial derivatives of the Kalman Filter equations are given by
∂x̂_k^-/∂α_i = Φ_{k-1} ∂x̂_{k-1}^+/∂α_i   (3.62)
∂P_k^-/∂α_i = Φ_{k-1} ∂P_{k-1}^+/∂α_i Φ_{k-1}^T + ∂Q_{k-1}/∂α_i   (3.63)
∂r_k/∂α_i = −H_k ∂x̂_k^-/∂α_i   (3.64)
∂S_k/∂α_i = H_k ∂P_k^-/∂α_i H_k^T   (3.65)
∂x̂_k^+/∂α_i = (I − K_k H_k) ∂x̂_k^-/∂α_i + (I − K_k H_k) ∂P_k^-/∂α_i H_k^T S_k^{-1} r_k   (3.66)
∂P_k^+/∂α_i = (I − K_k H_k) ∂P_k^-/∂α_i (I − K_k H_k)^T   (3.67)
Complete derivations of Equations (3.66) and (3.67) can be found in Appendix A.2.
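A minimal sketch of the five-point central difference of Equation (3.61), applied here to a scalar function standing in for the black-box mapping from a noise strength to a Q entry; the step size μ and the test function are illustrative.

```python
import numpy as np

def central_difference(f, x, mu):
    """Five-point central difference of Eq. (3.61)."""
    return (4.0 / 3.0) * (f(x + mu) - f(x - mu)) / (2.0 * mu) \
         - (1.0 / 3.0) * (f(x + 2.0 * mu) - f(x - 2.0 * mu)) / (4.0 * mu)

# Stand-in for the black-box mapping from a noise strength to a Q element.
f = lambda n: np.sin(n) + 0.5 * n**2
x0 = 0.8
print(central_difference(f, x0, mu=1e-3), np.cos(x0) + x0)   # numerical vs. analytic derivative
```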
3.3.4 Measurement Noise Equations
Unknown values of the measurement standard deviation for the system can be estimated by FIMLOF. These parameters enter the Kalman Filter through the R matrix, the covariance of the measurements. Let v_i be the measurement noise standard deviation of interest. The measurement noise vector is then v = [v_1, ..., v_i, ..., v_r]^T and R = E{v v^T}. If it is assumed that the measurements are not correlated, then

R = diag(v_1^2, ..., v_i^2, ..., v_r^2)   and   ∂R/∂α_i = diag(0, ..., 2v_i, ..., 0).   (3.68)
The partial derivatives of the Kalman Filter equations are given by
∂x̂_k^-/∂α_i = Φ_{k-1} ∂x̂_{k-1}^+/∂α_i   (3.69)
∂P_k^-/∂α_i = Φ_{k-1} ∂P_{k-1}^+/∂α_i Φ_{k-1}^T   (3.70)
∂r_k/∂α_i = −H_k ∂x̂_k^-/∂α_i   (3.71)
∂S_k/∂α_i = H_k ∂P_k^-/∂α_i H_k^T + ∂R_k/∂α_i   (3.72)
∂x̂_k^+/∂α_i = (I − K_k H_k) ∂x̂_k^-/∂α_i + (I − K_k H_k) ∂P_k^-/∂α_i H_k^T S_k^{-1} r_k − K_k ∂R_k/∂α_i S_k^{-1} r_k   (3.73)
∂P_k^+/∂α_i = (I − K_k H_k) ∂P_k^-/∂α_i (I − K_k H_k)^T + K_k ∂R_k/∂α_i K_k^T   (3.74)
Complete derivations of Equations (3.73) and (3.74) can be found in Appendix A.2.
3.3.5 Convergence Criteria
The Cramer-Rao lower bound asserts that the variance of a parameter i is greater than or
equal to the corresponding diagonal element of the inverse of the Fisher information matrix, [F^{-1}]_{ii}. This implies that the standard deviation of an estimate is equal to the square root of this term.

Figure 3-1: Convergence of a parameter estimate. The estimate changes by more than two standard deviations between iterations 7 and 13.

The Cramer-Rao lower bound ensures that the estimate can never be
exact -
there is always some uncertainty as to the actual value. Consequently, run-
ning FIMLOF until the likelihood function reaches its exact minimum serves no purpose,
because the estimate is still not precise.
Equation (3.56) gives the convergence criterion used in FIMLOF. The convergence
bound ε_i is defined to be

ε_i = ρ √([F^{-1}]_{ii}).   (3.75)

For this thesis, ρ = 0.01; therefore, when the change in each of the parameters is less than 1% of the standard deviations of their estimates, the algorithm is declared to be converged. The convergence bound must be chosen with some care. Setting ρ too small can lead to nonconvergence of the algorithm. Section 5.4 provides an example of nonconvergence. However, setting ρ too large leads to false convergence. Figure 3-1 shows an example where
the parameter estimate walked more than a standard deviation over several iterations. If
FIMLOF had been stopped at iteration 8, the estimate would have been almost 2 standard
deviations from the final estimate.
Chapter 4
Robustness Analysis
The robustness of FIMLOF is of critical importance. FIMLOF should be able to successfully identify parameter values that improve the calibration of the system. Sensitivity to
first guess, false convergence, and failure to converge are all potential problems. In this
chapter, the robustness of FIMLOF is investigated using synthetic measurements and a
variety of reduced-order system models.
Testing the robustness of FIMLOF using synthetic measurements instead of real ones
has several benefits. First, the true system model is precisely known, so that FIMLOF's
performance on complex systems may be evaluated without worrying about unknown
modeling errors. Second, it is possible to intentionally introduce known modeling errors to
investigate FIMLOF's performance. In addition, a sensitivity analysis on the suboptimal
models can be performed because the true model is known.
4.1
System Models
The system used in this thesis is an inertial measurement unit mounted on a centrifuge
for calibration. The IMU contains four gimbals that support an inertial platform for the
instruments. The platform is stabilized by 2 two-degree-of-freedom, dynamically tuned
gyroscopes. They are nominally aligned in an orthogonal, right-hand frame with axes U,
V, and W. This coordinate system is designated as the guidance computation frame. The
gyros are aligned so that each one has an input axis along the W axis. The redundant
Figure 4-1: Gyroscope and accelerometer coordinate systems.
W axis is not used in computation.
The velocity is measured by three PIGAs. The accelerometers are nominally aligned in an orthogonal, right-hand frame with axes X, Y, and Z. During testing, the position of the IMU is determined by integrating the signals from the accelerometers and gyros. Error models and state descriptions for each of these components may be found in Appendix B.

The guidance computation frame may nominally be transformed into the accelerometer frame by two rotations. The first rotation requires a +1/8 radian rotation about the positive V axis. The second rotates +π/4 radians about the positive, displaced X axis.
The relative orientation of the two coordinate frames is shown in Figure 4-1. Throughout
the thesis, the system is run in a "W-up" configuration with the W axis nearly vertical.
The IMU is mounted near the end of a 10-meter centrifuge arm. The centrifuge arm
position is measured by six targets unequally spaced around the circumference of the test chamber. These targets record the position of the tip of the centrifuge arm as it passes. These position fixes must be corrected to determine the position of the IMU
rather than the tip of the arm. Appendix B.5 describes the proper transformation of the
measurements.
Figure 4-2: Kalman Filter residuals using Model 1 and synthetic data.
A Kalman Filter that estimates the error states of the system already exists. The measurements that are fed into the Kalman Filter, $z_{KF}$, are formed from the difference between the IMU's integrated position, $z_{IMU}$, and its position as measured by the centrifuge position sensors, $z_{cent}$. The measurements are formed by

$$z_{KF} = z_{IMU} - z_{cent}. \qquad (4.1)$$
Figure 4-2 shows an example of the residuals from this Kalman Filter using synthetic
data.
FIMLOF is used to identify three noise parameters in the system: two process noise
strengths and a measurement standard deviation. The process noise strengths consist of
a random walk in gyro angle (GRWA) and a random walk in velocity (RWVL). Both are
stochastic instrument errors and can be readily evaluated on a test bench, as described
in Chapter 2. The measurement standard deviation (MEAS) results from the centrifuge
position sensors. All of the noise parameters are considered to be independent of IMU
orientation, so a single value is used for all three system axes. The filter contains other
noise sources, such as PIGA quantization, that have very well known and unchanging
values. These noise sources are not estimated.
The initial conditions and covariances have been altered from the values used in the
physical system. In addition, the noise strengths do not reflect the performance of the
physical instrument.
Consequently, although the numbers reported within this thesis
are internally consistent, they bear little resemblance to the numbers from the physical
system. Likewise, although the residuals, state estimates, and miss distances are qualitatively similar to those from the physical system (e.g., the miss distances from model A
Figure 4-3: Kalman Filter residuals using Model 2.
are twice the size of those from model B), they are quite different in magnitude. In other
words, although reliable conclusions can be drawn from the results presented here, the
numbers should not be applied to the physical system.
FIMLOF is investigated first for the accurate model, denoted as Model 1. This provides a baseline and proves that the method will work under ideal conditions. FIMLOF
is then tested using several variations of the correct model. These range from slight mismodelings, ignoring a small deterministic signal, to large mismodelings, dropping all but
the largest error terms. All of these models are reduced-order versions of Model 1. Every
model uses the same initial conditions and covariances.
4.1.1 Model 2 - No PIGA Harmonics Model
Model 2 removes the PIGA gyro spin-rate harmonic error states from Model 1. They
have a very small effect on the performance of the Kalman Filter, as can be seen from
Figure 4-3. The residuals are nearly identical to those from Model 1, and the parameter
estimates are nearly unchanged. This model represents a very well modeled system. A
complete listing of the removed states may be found in Appendix C.1.
4.1.2 Model 3 - Small Deterministic Error Model
In the W-up test configuration, the X-axis PIGA is nearly horizontal. Model 3 removes
9 of the PIGA states from Model 1, although it leaves the PIGA harmonics. The missing
states are three separate error terms applied to each axis that create a small sinusoidal
signal from the horizontal X-axis PIGA. A list of the missing states may be found in
Appendix C.2. Figure 4-4 shows an example of the residuals from this model. This model
Figure 4-4: Kalman Filter residuals using Model 3.
Figure 4-5: Kalman Filter residuals using Model 4.
tests FIMLOF on a system that has a small deterministic error but is otherwise well
modeled.
4.1.3 Model 4 - Minimum State with Centrifuge Model
Model 4 removes a large number of terms from Model 1. Only those error sources that are
most important, such as biases and scale factors, remain. The centrifuge target position
biases are included, although the other centrifuge errors are removed. The complete list
of removed states can be found in Appendix C.3. Model 4 does a poor job of modeling the
system compared to any of the preceding models, as can be seen from the filter residuals
in Figure 4-5. It provides a chance to test FIMLOF against a poorly modeled system.
Such a model could be used as a first approximation of the system or for applications
where calculation speed is more important than high accuracy. In such cases, FIMLOF
would be very useful to save calibration time.
4.1.4 Model 5 - Minimum State Model
Model 5 removes the centrifuge target position bias states from Model 4. The list of
removed states may be found in Appendix C.4. Each of the centrifuge targets has
Figure 4-6: Kalman Filter residuals using Model 5.
a position bias an order of magnitude larger than the filter residuals.
Consequently,
removing these states makes a large difference to the filter, as seen in Figure 4-6. The
position biases are easily visible as striations in the filter residuals and have the same
effect as a very large centrifuge speed-dependent sinusoidal error in the system model.
This model would be unacceptable in practice, but it provides an example of FIMLOF's
performance on extremely poorly modeled systems.
4.1.5
Synthetic Measurements
Synthetic measurements have several advantages. First, the exact system model is known.
Almost every physical system, on the other hand, requires some level of abstraction to
model. The perfectly modeled synthetic data allows a definitive comparison of the performance of a suboptimal filter to the performance of an optimal filter. Second, the noise
parameters of the system are known exactly. The performance of FIMLOF is under investigation, so knowing what values it should estimate is very important. Using Model 1,
synthetic measurements to test FIMLOF are generated.
A set of synthetic measurements are generated through a Monte Carlo process for a
given system model and set of initial conditions and covariances. The initial conditions
are propagated using Equation (3.32) with

$$w_k = V_Q D_Q^{1/2}\,\mu_k, \qquad (4.2)$$

where $V_Q$ is the matrix of eigenvectors and $D_Q$ is the matrix of eigenvalues for $Q_k$, the covariance of the process noise. $\mu_k \in \mathbb{R}^n$ is a normally distributed random noise with
Table 4.1: Standard deviations of noises used in synthetic measurements.

    Noise Parameter    Value         Units
    GRWA               1.5 × 10⁻²    arcsec/s
    RWVL               3.0 × 10⁻⁶    g/√Hz
    MEAS               3.5 × 10⁻³    ft
a mean of 0 and a standard deviation of 1. Similarly, the synthetic measurements are created from Equation (3.33) with

$$v_k = V_R D_R^{1/2}\,\nu_k, \qquad (4.3)$$

where $V_R$ is the matrix of eigenvectors and $D_R$ is the matrix of eigenvalues for $R_k$, the measurement noise covariance, and $\nu_k \in \mathbb{R}^m$ is a normally distributed random noise with a mean of 0 and a standard deviation of 1.
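The sampling step in Equations (4.2) and (4.3) amounts to scaling independent unit normals by the eigendecomposition of the covariance. The sketch below illustrates the idea for a generic covariance matrix; the helper name and the toy covariance values are assumptions, not quantities from the thesis.

    import numpy as np

    def sample_noise(cov, rng):
        """Draw one zero-mean sample with covariance `cov` by scaling unit
        normals with the eigendecomposition, w = V D^(1/2) mu, as in
        Equations (4.2) and (4.3)."""
        eigvals, eigvecs = np.linalg.eigh(cov)
        mu = rng.standard_normal(cov.shape[0])
        return eigvecs @ (np.sqrt(np.clip(eigvals, 0.0, None)) * mu)

    # Toy example: 20 Monte Carlo draws from an arbitrary 3x3 covariance.
    rng = np.random.default_rng(0)
    cov = np.array([[2.0, 0.5, 0.0],
                    [0.5, 1.0, 0.1],
                    [0.0, 0.1, 0.5]])
    draws = [sample_noise(cov, rng) for _ in range(20)]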
Physical tests to calibrate an IMU can be expensive and time consuming -
it is desir-
able for FIMLOF to produce consistent parameter estimates for every set of measurements
with the same underlying parameter values. However, FIMLOF identifies the parameter
values most likely to have occurred based on the recorded measurements, and different
sets of measurements may lead to different parameter estimates. Therefore, 20 sets of
measurements are generated using the same parameter values. These noises are shown in
Table 4.1.
4.2
Robustness Tests
The robustness of FIMLOF can be evaluated in several ways. For a well modeled system,
the FIMLOF parameter estimates should be close to the true parameter values. For more
severe mismodelings, the parameter estimates should be larger than the true values in
order to prevent the unmodeled errors from affecting the modeled states.
It is well known that the residuals of a Kalman Filter can be reduced by raising the
values of the filter process and measurement noise parameters [9]. This reduction comes at
the expense of knowledge of the state estimates, however. Higher noise parameter values
keep the covariance higher, allowing the states to vary more. Increasing the process noise
parameter values reduces the reliance of the filter upon previous events and causes it to
give the current measurements more weight. Likewise, increasing the measurement noise
parameter values increases the reliance of the filter on its model and decreases the weight
it gives the measurements. Therefore, FIMLOF should decrease the filter residuals for
mismodeled systems.
4.2.1
Determining Whiteness
A perfectly modeled and tuned system should produce white residuals from its Kalman
Filter [13].
however.
A poorly modeled system will often have some structure to the residuals,
In this case, the measurements will be correlated with the residuals.
The
correlation occurs because the system contains information that cannot be captured by
any of the modeled states. Raising the plant noise parameter values whitens the residuals
by reducing the filter's reliance on past events and giving more weight to the current
measurements. Changing the model can also whiten the residuals, because more of the
information can be explained by the states. For a perfectly modeled system, FIMLOF
will tune the filter so that the residuals are white. In a suboptimal filter, the degree of
whiteness of the residuals provides a measure of the degree of mismodeling.
The autocorrelation of a signal is used to detect repeating patterns in a signal. For
an infinite, zero-mean, discrete-time signal, y, the autocorrelation is given by
$$r_j = \sum_{i=-\infty}^{+\infty} y_i\, y_{i+j}. \qquad (4.4)$$

If y is white, the autocorrelation function consists of a spike at $r_0$ and zero everywhere else. For a zero-mean signal of finite length, $u \in \mathbb{R}^{N \times 1}$, the unbiased autocorrelation is given by

$$r_j = \frac{1}{N - |j|} \sum_{i=1}^{N - |j|} u_i\, u_{i+j}. \qquad (4.5)$$
In this case, the autocorrelation will contain a spike at ro and small residuals everywhere
else.
The whiteness of two signals of equal length can be qualitatively compared using
the magnitudes of the residuals of their autocorrelation functions. The Lozow whiteness
metric W [21] is defined as
$$W = 1 - A, \qquad (4.6)$$

where

$$A = \frac{1}{N} \sum_{j=1}^{N-1} \frac{r_j^2}{r_0^2}.$$
For a perfectly white signal of infinite length, W = 1, because the autocorrelation function
will be zero everywhere except at ro.
Using W, we can qualitatively compare the whiteness of residuals from Kalman Filters.
Because the signals are finite length, none will be perfectly white. However, residuals from
optimal Kalman Filters will be whiter than those from suboptimal Kalman Filters.
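A short sketch of the unbiased autocorrelation of Equation (4.5) and the whiteness metric of Equation (4.6) is given below; the function names are illustrative, and the normalization of A follows the definition given above.

    import numpy as np

    def unbiased_autocorrelation(u):
        """Unbiased autocorrelation of a zero-mean, finite-length signal,
        Equation (4.5)."""
        N = len(u)
        return np.array([np.dot(u[:N - j], u[j:]) / (N - j) for j in range(N)])

    def lozow_whiteness(u):
        """Lozow whiteness metric W = 1 - A, Equation (4.6); the 1/N
        normalization of A follows the definition given above."""
        r = unbiased_autocorrelation(u)
        A = np.sum(r[1:] ** 2 / r[0] ** 2) / len(u)
        return 1.0 - A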
4.2.2
Miss Distance
FIMLOF is performed on a centrifuge calibration of the IMU. The final error state estimates and covariances from the Kalman Filter are used after the calibration to navigate
the IMU from missile launch to impact. For FIMLOF to be useful for calibration, the
miss distance at impact should improve when using the state estimates that result from a
filter using the FIMLOF parameter estimates. If FIMLOF is truly robust, it will improve
the miss distance for any suboptimal filter.
Like the sensitivity analysis described in Section 3.2.3, comparisons between the miss
distances of different models cannot be made directly. Instead, the Kalman gains from a
reduced state model must be used in the full state model to create state estimates that
can then be used to calculate the miss distance.
Figure 4-7 presents a diagram of the
procedure for a reduced-order filter. Figure 4-8 diagrams the procedure for a reduced
state filter tuned by FIMLOF.
At the end of a calibration, the Kalman Filter produces final estimates of the system
states and their covariances. The IMU state estimates and covariances may then be used
to navigate a missile from launch to impact. This navigation is simulated by propagating
Figure 4-7: Calculation of the miss distance for a reduced state filter.
Figure 4-8: Calculation of the miss distance for a reduced state filter tuned with FIMLOF.
The reduced state filter is started with the true parameter values for the first FIMLOF
iteration, but uses the FIMLOF parameter estimates for the subsequent iterations.
the states and covariances via a ballistic state transition matrix $\Phi$. The equations are given by

$$\hat{x}_{impact} = \Phi\,\hat{x}_{launch}, \qquad (4.7)$$

$$P_{impact} = \Phi\,P_{launch}\,\Phi^T. \qquad (4.8)$$

$\hat{x}_{impact} \in \mathbb{R}^{2 \times 1}$ is the along-track and cross-track estimate of the impact error for the flight. $P_{impact} \in \mathbb{R}^{2 \times 2}$ is the covariance of the estimate. One measure of how well the IMU has performed is the miss distance, $\gamma$ [5]. Using the singular value decomposition of $P_{impact}$, given by

$$P_{impact} = U_P D_P V_P^T, \qquad (4.9)$$

the impact error is scaled so that

$$\hat{\lambda} = U_P^{-1}\,\hat{x}_{impact}. \qquad (4.10)$$
Using Equations (4.9) and (4.10), the miss distance is defined by

$$\gamma = 0.29435\left(\hat{\lambda}_1^2\,[D_P]_{11} + \hat{\lambda}_2^2\,[D_P]_{22}\right) + 0.562\,[D_P]_{11} + 0.615\,[D_P]_{22}. \qquad (4.11)$$

In this thesis, the miss distances are normalized to be around 1.
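A sketch of this computation is given below, assuming the propagation, scaling, and coefficients follow Equations (4.7)-(4.11) as written above; the function name, input shapes, and the ballistic transition matrix passed in are illustrative assumptions rather than the thesis implementation.

    import numpy as np

    def miss_distance(Phi, x_launch, P_launch):
        """Propagate the launch-time error estimate to impact and form the
        scalar miss distance following Equations (4.7)-(4.11); the numeric
        coefficients are taken from Equation (4.11) as given above."""
        x_impact = Phi @ x_launch                     # (4.7)
        P_impact = Phi @ P_launch @ Phi.T             # (4.8)
        U, d, _ = np.linalg.svd(P_impact)             # (4.9): P = U D V^T
        lam = np.linalg.solve(U, x_impact)            # (4.10): U^{-1} x_impact
        return (0.29435 * (lam[0] ** 2 * d[0] + lam[1] ** 2 * d[1])
                + 0.562 * d[0] + 0.615 * d[1])        # (4.11)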
Chapter 5
Results
Chapter 4 describes various tests to determine the robustness of FIMLOF. In this chapter,
the results of these tests are described for five model configurations. In addition to a full
state model, where the filter model is the same as the truth model, four reduced-order
filter models are considered: a model without the PIGA harmonics, a model with a small
sinusoidal error, a minimum state model including the centrifuge target bias states, and
a minimum state model without the centrifuge target bias states.
Model 1 is used to test FIMLOF's sensitivity to initial parameter estimates. FIMLOF
is started with parameter values an order of magnitude too high and an order of magnitude
too low. Models 2 through 5 are used to test its sensitivity to suboptimality. For each
of these models, FIMLOF is started at the true parameter values and allowed to run.
The resulting parameter estimates are used to generate residuals and miss distances for
comparison.
5.1
Full State Model
The full state model, also denoted as Model 1, generates the synthetic measurements. It is
described in Section 4.1, and the noise values used to generate the synthetic measurements
may be found in Table 4.1. Model 1 should need no adjustment when it is started from
these noise values. Ideally, FIMLOF should accurately identify these values regardless
of its starting point. The model is included both to prove that the implementation of
FIMLOF presented in this thesis works and to test its sensitivity to initial parameter
estimates.
The results in this section show that FIMLOF can accurately identify the true parameter values, subject to reasonable limitations. The starting estimates of the parameters
are important. Starting FIMLOF with parameter estimates an order of magnitude too
high has no effect on the parameter estimates it produces. Starting an order of magnitude too low, however, can lead to incorrect parameter estimates. When starting from the
true parameter values, the residuals calculated from a filter using the FIMLOF parameter
estimates are virtually identical to those calculated from a filter using the true values.
Likewise, the whiteness of the residuals is unchanged. The miss distance is also unchanged,
indicating that FIMLOF does not have an adverse effect on the final state covariances.
These results demonstrate that FIMLOF is self-consistent except for a sensitivity to the
initial parameter estimates.
5.1.1
FIMLOF Parameter Estimates
The results for a filter using Model 1 show that, when started with parameter estimates
that are equal to the true values or too high, FIMLOF returns estimates close to the true
values. All parameter estimates are within two standard deviations of the true values.
Most are within one, making them statistically indistinguishable from the true values.
Starting from parameter estimates an order of magnitude lower than the true value leads
to FIMLOF parameter estimates that are either much too low or have such large standard
deviations that they are useless in practice.
Starting from True Noise Values
As expected, when started with parameter estimates equal to the true parameter values,
FIMLOF identifies the parameters very accurately. Figure 5-1 shows the parameter estimates and standard deviations for each of the 20 sets of synthetic measurements. The
FIMLOF parameter estimates vary slightly from the true values. Several, such as those
from measurement set 20, are almost two standard deviations away from the true values
for the random walk in gyro angle (GRWA) and the random walk in velocity (RWVL). However, the standard deviations of these estimates are large enough that the null hypothesis that FIMLOF estimates the true values cannot be rejected with 95% confidence intervals.

Figure 5-1: FIMLOF parameter estimates for Model 1, starting from parameter estimates equal to the true noise values.

Starting from High Noise Values

For this test, FIMLOF was started from parameter estimates an order of magnitude higher than the true values. Figure 5-2 shows the results. The parameter estimates from this test are nearly identical to those for Model 1 starting from the true parameter values; FIMLOF has no trouble identifying the parameters despite the high starting estimates. The parameter estimates are all within two standard deviations of the true values, so the null hypothesis that FIMLOF estimates the true values cannot be rejected with 95% confidence intervals.
Figure 5-2: FIMLOF parameter estimates for Model 1, starting from parameter estimates
an order of magnitude higher than the true values.
Starting from Low Noise Values
In this test, the filter was started from parameter estimates an order of magnitude smaller
than the true values. The results are shown in Figure 5-4. For these initial conditions,
FIMLOF does a poor job of identifying the true values. One possible explanation is that
the covariance envelope of the residuals is much smaller than the residuals, as seen in
Figure 5-3. The assumptions of the Kalman Filter fail and the state estimates are inaccurate.
Figure 5-3: Example of Kalman Filter residuals using Model 1 and noise parameters an
order of magnitude lower than the true values.
Figure 5-4: FIMLOF parameter estimates for Model 1, starting from parameter estimates
an order of magnitude lower than the true values. V indicates a parameter estimate
standard deviation that is larger than 0.03 arcsec/s.
Therefore, FIMLOF cannot correctly calculate the partial derivatives of the filter to
determine the parameter estimates for the next iteration. Another possible explanation
could be that the expected value of the Hessian does not accurately reflect the true value
of the Hessian for these residuals. More work is needed to determine the true cause of
the poor performance.
FIMLOF occasionally produces very small parameter estimates with very large standard deviations, such as some of the GRWA parameter estimates. Using measurement set
10, for example, the estimate of GRWA is 1.5 x 10-6 arcsec/s with a standard deviation
of 3.5 arcsec/s. The true value of 1.5 x 10-2 arcsec/s is technically within those bounds,
but the FIMLOF parameter estimate is worthless in practice. The parameter estimate
has such a large standard deviation that it contains no worthwhile information about the
actual parameter value.
FIMLOF also produces parameter estimates that have very small standard deviations,
but are very inaccurate. This can be seen in the RWVL and measurement standard deviation (MEAS) estimates. For example, the estimate of MEAS in measurement set 10 is
1.4 × 10⁻³ with a standard deviation of 1.2 × 10⁻⁵. This estimate is over 250 standard deviations away from the true parameter value of 4.5 × 10⁻³.
These low parameter estimates
correspond to the estimates of GRWA that are very close to zero. From these results, it
would seem that false convergence of one parameter estimate, such as the MEAS estimate,
is indicated by very large standard deviations on another parameter estimate, such as the
estimate for GRWA. Indeed, the measurement sets for which the GRWA estimates have
small standard deviations also have proper convergence for the estimates of RWVL and
MEAS. Unfortunately, these observations do not hold for filters with suboptimal models.
The results from Models 3, 4, and 5 all include cases that have one parameter estimate
with a standard deviation orders of magnitude larger than the estimate. In every case,
however, none of the other parameter estimates show evidence of false convergence.
5.1.2
Residual Magnitude and Whiteness
The residuals generated by running a filter using the true noise values (Figure 5-5) are
very similar to the residuals generated by running a filter using the FIMLOF parameter
estimates (Figure 5-6).
This is to be expected, because FIMLOF has not changed the
parameter estimates appreciably.
Table 5.1 shows the means and standard deviations of the whiteness values of the filter
residuals using the various noise values considered in Section 5.1.1. The whiteness values
were calculated using the Lozow whiteness metric, defined in Section 4.2.1. According
to the metric, a perfectly white, infinite length signal will have a whiteness value of 1.
The residuals presented in this thesis are all finite length, so none have a whiteness value
above 0.94. Using a pooled two-sample t-test [25], the null hypothesis that the FIMLOF
parameter estimates do not cause a change in the mean whiteness of the residuals cannot
be rejected at a 95% confidence interval. All of the sets of residuals are very white, as
they should be -
they come from the full state model driven with only white noise.
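For reference, the pooled two-sample t-test used throughout this chapter can be reproduced with SciPy by requesting the equal-variance form; the whiteness arrays below are placeholders, not values from the thesis.

    import numpy as np
    from scipy import stats

    # Placeholder whiteness values for the 20 measurement sets; in practice
    # these come from the Lozow metric applied to the filter residuals.
    rng = np.random.default_rng(0)
    w_true = 0.935 + 0.015 * rng.standard_normal(20)
    w_fimlof = 0.935 + 0.015 * rng.standard_normal(20)

    # Pooled (equal-variance) two-sample t-test at the 95% confidence level.
    t_stat, p_value = stats.ttest_ind(w_true, w_fimlof, equal_var=True)
    significant_change = p_value < 0.05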
Figure 5-5: Kalman Filter residuals using Model 1 and true noise values.
Figure 5-6: Kalman Filter residuals using Model 1 and FIMLOF estimates for noise values.
Table 5.1: The means and standard deviations of the whiteness values of the filter residuals using Model 1 and the 20 measurement sets.

    Parameter Values Used                            U Axis                V Axis                W Axis
                                                 Average   Std. Dev.   Average   Std. Dev.   Average   Std. Dev.
    True values                                  0.93512   0.014671    0.93313   0.014633    0.93626   0.018569
    FIMLOF estimates when started from
      true values                                0.93535   0.014451    0.93317   0.014461    0.93632   0.018538
    High values                                  0.92760   0.018689    0.91013   0.051626    0.92348   0.027507
    FIMLOF estimates when started from
      high values                                0.93535   0.014451    0.93317   0.014461    0.93632   0.018538
    Low values                                   0.93048   0.019919    0.93053   0.015344    0.93283   0.022256
    FIMLOF estimates when started from
      low values                                 0.93214   0.017901    0.93239   0.014011    0.93515   0.019197
5.1.3
Miss Distances
Figure 5-7 shows the miss distances after propagation of the errors to impact for the filter
using Model 1. The miss distances calculated using the true noise values are very similar
to the miss distances calculated using the FIMLOF noise estimates. Using a pooled two-sample t-test with a 95% confidence interval, the null hypothesis that the two sets of miss
distances are drawn from the same population cannot be rejected. The variations in miss
distance between the two sets can be explained by the differences between the FIMLOF
parameter estimates and the true values.
5.2
No PIGA Harmonics Model
Model 2 does not include the PIGA harmonic states. As discussed in Section 4.1.1, the
PIGA harmonics have only a small effect on the filter. More importantly, they have only
a very small effect on the navigation or miss distances. The model demonstrates the
performance of FIMLOF for a well-modeled system.
FIMLOF is started from parameter estimates equal to the true noise values. The pa-
Figure 5-7: Normalized miss distances for Model 1.
rameter estimates from FIMLOF are close to the true values, and the residuals are largely
unchanged. Consequently, the residual whiteness is unaffected, and the miss distances are
similar.
5.2.1
Parameter Estimates
Figure 5-8 shows values for the FIMLOF parameter estimates for Model 2. In almost
every case, the estimate of RWVL is slightly larger than the true value. This makes sense
-
the mismodeling occurs in the accelerometer model and RWVL is an accelerometer
noise. Furthermore, the PIGA harmonics are errors in the velocity measurement. The
estimates of GRWA and MEAS are very close to their estimates using Model 1.
5.2.2
Residual Magnitude and Whiteness
Figure 5-9 shows the residuals from measurement set 1 using the true noise values. Figure
5-10 shows the residuals from the filter using the FIMLOF estimates.
The residuals
Figure 5-8: FIMLOF estimated noise values for Model 2, starting from parameter estimates equal to the true noise values.
generated by the filter using Model 2 with the FIMLOF parameter estimates are very
similar to the residuals from a filter using the true noise values. These results occur
because the mismodeling is small, as are the changes to the parameter estimates.
The whiteness of the residuals was calculated using the Lozow metric and is displayed
in Table 5.2. Both sets of residuals are nearly as white as the residuals from the full
state filter. This indicates both that PIGA harmonic states have little effect and that the
adjustments that FIMLOF made are minor.
5.2.3
Miss Distances
Figure 5-11 shows the miss distances after propagation of the errors to impact for a
filter using Model 2. Three miss distances are displayed. Those labeled "Optimal Miss
Distance" are the miss distances from a filter using Model 1 and the true noise values.
They represent the optimal performance of the system. The miss distances labeled "Miss
Figure 5-9: Kalman Filter residuals using Model 2 and true noise values.
Figure 5-10: Kalman Filter residuals using Model 2 and FIMLOF parameter estimates.
Table 5.2: The means and standard deviations of the whiteness values of the filter residuals using Model 2 and the 20 measurement sets. The filter was started each time using the true noise values.

    Parameter Values Used                            U Axis                V Axis                W Axis
                                                 Average   Std. Dev.   Average   Std. Dev.   Average   Std. Dev.
    True values                                  0.93211   0.016037    0.93220   0.015577    0.93583   0.019479
    FIMLOF estimates when started from
      true values                                0.93281   0.015447    0.93223   0.015417    0.93592   0.019498
Figure 5-11: Normalized miss distances for Model 2.
Distance using Model 2 and True Noise Values" are calculated using the method shown
in Figure 4-7. The Kalman gains from a filter using Model 2 and the true noise values are
used in the full-state filter with the true noise values. They represent how well the system
will perform when calibrated with the suboptimal filter model and the true noise values.
The miss distances labeled "Miss Distance using Model 2 and FIMLOF Noise Estimates"
are calculated using the method shown in Figure 4-8. FIMLOF is performed on a filter
using Model 2 and started from parameter estimates equal to the true noise values. After
FIMLOF converges, the reduced-order filter is run again using the FIMLOF parameter
estimates. The resulting Kalman gains are used in the full-state filter with the true noise
values. These miss distances represent how well the system will perform when calibrated
using the suboptimal model and the FIMLOF parameter estimates.
The missing PIGA harmonic states have very little effect on the miss distance. They
have a small enough effect on the system that the errors they model do not change the
calibration of the remaining states. Using the FIMLOF-estimated parameters in a filter
with Model 2 does not produce a statistically significant change in the miss distances with a pooled two-sample t-test and a 95% confidence interval, the null hypothesis that
the miss distances are drawn from the same population cannot be rejected. The FIMLOF
parameter estimates are very similar to the true values, so the miss distances are likewise
unchanged.
5.3
Small Sinusoidal Error Model
The small sinusoidal error model, Model 3, removes several of the PIGA states. The
missing states cause the residuals to contain a sinusoidal signal when a PIGA is horizontal.
An example can be seen in Figure 4-4. Once again, FIMLOF is started from parameter
estimates equal to the true noise values. The results using this model demonstrate the
performance of FIMLOF for a well modeled system with a small deterministic error.
Figure 5-12: FIMLOF estimated noise values for Model 3, starting from parameter estimates equal to the true noise values. V indicates a parameter estimate standard deviation
that is larger than 0.03 arcsec/s.
5.3.1
Parameter Estimates
The FIMLOF parameter estimates for a filter using Model 3 are shown in Figure 5-12. The estimates of GRWA are all within or close to one standard deviation from the
truth, but the standard deviations of the parameter estimates are very large for many
of the measurement sets. For example, the estimate of GRWA for measurement set 6
has a standard deviation of 2.2 arcsec/s. As discussed in Section 5.1.1, this estimate is
worthless in practice - it is far too uncertain. The estimates of RWVL are higher than
the true values, although this can be explained by the suboptimality of the accelerometer
model. The estimates of MEAS are slightly higher as well.
Figure 5-13: Kalman Filter residuals using Model 3 and true noise values.
Figure 5-14: Kalman Filter residuals using Model 3 and FIMLOF parameter estimates.
Table 5.3: The means and standard deviations of the whiteness values of the filter residuals using Model 3 and the 20 measurement sets. The filter was started each time using the true noise values.

    Parameter Values Used                            U Axis                V Axis                W Axis
                                                 Average   Std. Dev.   Average   Std. Dev.   Average   Std. Dev.
    True values                                  0.93087   0.024554    0.93183   0.014863    0.93297   0.020000
    FIMLOF estimates when started from
      true values                                0.93334   0.018636    0.93247   0.014319    0.93368   0.019804

5.3.2 Residual Magnitude and Whiteness
The residuals generated by a filter using Model 3 and the true noise values (Figure 5-13) are no worse than the residuals from a filter using the FIMLOF estimates (Figure 5-14). Although the sinusoidal signal shown in the U-axis of Figure 5-13 appears to be reduced in Figure 5-14, performing a power spectral density analysis on the residuals reveals that the energy at that frequency remains unchanged.
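A power spectral density comparison of this kind can be sketched with Welch's method; the residual arrays and sampling rate below are placeholders rather than thesis data.

    import numpy as np
    from scipy import signal

    # Placeholder residual series sampled at fs Hz; in practice these are the
    # U-axis residuals produced with the true and the FIMLOF noise values.
    fs = 1.0
    rng = np.random.default_rng(0)
    res_true = rng.standard_normal(1200)
    res_fimlof = rng.standard_normal(1200)

    f_true, psd_true = signal.welch(res_true, fs=fs, nperseg=256)
    f_fim, psd_fim = signal.welch(res_fimlof, fs=fs, nperseg=256)
    # A peak that persists at the same frequency in both spectra indicates a
    # deterministic error that raising the noise strengths cannot absorb.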
The means and standard deviations of the whiteness values of the filter residuals are
given in Table 5.3. Using a pooled two-sample t-test at a 95% confidence interval, the
null hypothesis that the whiteness is unchanged cannot be rejected. The sinusoidal signal
in the residuals, although caused by the PIGAs, cannot be removed by the parameters
that FIMLOF identifies. A random walk such as GRWA or RWVL does not approximate
a sinusoid well, so FIMLOF is unable to reduce the sinusoid in the residuals by raising
the noise.
5.3.3
Miss Distances
Figure 5-15 shows the miss distances for the errors propagated to impact from a filter
using Model 3 for each set of synthetic measurements. The miss distances for Model 3
are very similar to the miss distances for the full state model. The PIGA states missing
from Model 3 are a part of the miss distance calculation, but they only play a minor role.
The remaining states do not change dramatically as a result of the missing states, so the
miss distance remains largely unchanged. We cannot reject the null hypothesis at a 95%
Figure 5-15: Normalized miss distances for Model 3.
confidence interval that the miss distances are drawn from the same population.
5.4
Minimum State with Centrifuge Model
Model 4 contains only the states necessary for a basic model of the IMU and centrifuge.
All of the less important error states have been removed. Because of these missing states,
the filter residuals are quite large. The calibration of the filter using this model also
suffers. Consequently, the miss distances for a filter using the true noise values increase
substantially. The results from Model 4 demonstrate the performance of FIMLOF on a
poorly modeled system.
5.4.1
Parameter Estimates
Figure 5-16 shows the parameter estimates. FIMLOF does a poor job estimating the true
values. With the exception of the estimates for measurement set 2, all of the estimates of
Figure 5-16: FIMLOF estimated noise values for Model 4, starting from parameter estimates equal to the true noise values. V indicates a parameter estimate standard deviation
that is larger than 6 × 10⁻⁶ g/√Hz. Measurement sets 3, 4, 5, 8, 9, 17, and 19 fail to
converge.
GRWA are at least an order of magnitude higher than the true values. The estimate for
measurement set 10 is nearly two orders of magnitude higher. Estimates of RWVL are
widely scattered, ranging from 7.1 standard deviations too high (measurement set 2) to
2.8 standard deviations too low (measurement set 8).
For seven of the 20 measurement sets (3, 4, 5, 8, 9, 17, and 19), FIMLOF does not
converge. An example of the evolution of the parameter estimates for measurement set
3 is shown in Figure 5-17. For these measurements, FIMLOF does not converge after
30 iterations, so the algorithm terminates unsuccessfully. This behavior is caused by the
Newton-Raphson algorithm used by FIMLOF. The parameter estimates are oscillating
around a minimum, but failing to reach it. Logic to recognize this phenomenon and
adjust the step size might solve the problem, but is not used in this thesis. This behavior
only seems to appear for poorly modeled systems, and so might be an indication of serious
mismodeling.
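One possible form of such step-size logic, not used in this thesis, is sketched below: when successive Newton-Raphson steps reverse sign, the step is damped before being applied. The function and parameter names are illustrative.

    import numpy as np

    def damped_newton_step(alpha, step, prev_step, damping=0.5):
        """Apply a Newton-Raphson parameter update, shrinking the step when
        successive steps reverse sign (i.e., the estimates oscillate about a
        minimum). This safeguard is only a sketch, not the thesis algorithm."""
        if prev_step is not None and np.all(np.sign(step) == -np.sign(prev_step)):
            step = damping * step
        return alpha + step, step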
Figure 5-17: FIMLOF parameter evolution for Model 4, with no convergence after 30
iterations.
5.4.2
Residual Magnitude and Whiteness
Figure 5-18 shows the residuals for a filter using Model 4 with the true noise values
and measurement set 1. Figure 5-19 shows the residuals for a filter using the FIMLOF
parameter estimates, which are smaller and have less structure.
Table 5.4 shows the mean and standard deviations of the whiteness of the filter residuals. Despite the improvements to the residuals shown in Figure 5-19, we cannot reject
the null hypothesis at a 95% confidence interval that the whiteness is unchanged. Neither
set of residuals is as white as those from a filter using Model 1. The noise parameters that
FIMLOF identifies cannot adequately explain the effects of the removed states in Model
4.
Figure 5-18: Kalman Filter residuals using Model 4 and true noise values.
Figure 5-19: Kalman Filter residuals using Model 4 and FIMLOF parameter estimates.
Table 5.4: The means and standard deviations of the whiteness values of the filter residuals using Model 4 and the 20 measurement sets. The filter was started each time using the true noise values.

    Parameter Values Used                            U Axis                V Axis                W Axis
                                                 Average   Std. Dev.   Average   Std. Dev.   Average   Std. Dev.
    True values                                  0.87060   0.037047    0.86804   0.063850    0.89237   0.053800
    FIMLOF estimates when started from
      true values                                0.90148   0.036245    0.90502   0.033785    0.89639   0.049465

5.4.3 Miss Distances
Figure 5-20 shows the miss distances for error states propagated to impact from filters
using each synthetic measurement set and Model 4. The miss distances calculated using
the FIMLOF parameter estimates in the reduced state filter are on average 3.7 times
larger than the optimal miss distances, while the miss distances calculated using the true
values in the reduced state filter are on average 14.2 times larger than the optimal miss
distances.
The miss distances calculated from the filter using the FIMLOF parameter estimates
are smaller than the miss distances from the filter using the true noise values on Model 4
in every case except measurement set 2. As seen in Section 5.4.1, the FIMLOF estimate of GRWA for this set of measurements was 1.9 × 10⁻² arcsec/s, less than one standard deviation from the true value of 1.5 × 10⁻² arcsec/s. The FIMLOF parameter estimate
for every other measurement set was at least an order of magnitude higher than the true
value. Therefore, it appears that FIMLOF displays false convergence for measurement
set 2.
The improvement in the miss distance seen by using the FIMLOF parameter estimates
instead of the true parameter values to calibrate the filter for Model 4 is substantial. Using
a pooled two-sample t-test and a 95% confidence interval, the null hypothesis that the miss
distances are drawn from the same population can be rejected. Using the true parameter
values forces the errors from the missing states to affect the estimates of the remaining
states in the reduced state filter. FIMLOF pulls the errors into the noise parameter
Figure 5-20: Normalized miss distances for Model 4.
estimates instead. As a result, the parameter estimates are much larger than the true
values, but the state covariance estimates are much closer to the optimal ones. The
system therefore does a much better job navigating when calibrated using the FIMLOF
parameter estimates.
5.5
Minimum State Model
Model 5 is exceedingly poor. It removes the centrifuge position bias states from Model 4.
These bias states are much larger than the magnitude of the residuals from a filter using
Model 4, so the residuals from a filter using Model 5 have large striations in them. As a
result, the FIMLOF parameter estimates are much larger than the true values of the noise
strengths; however, the miss distances calculated using these estimates are surprisingly
close to the optimal miss distances.
Figure 5-21: FIMLOF estimated noise values for Model 5, starting from parameter estimates equal to the true noise values. V indicates a parameter estimate standard deviation
that is larger than 7 × 10⁻⁵ g/√Hz. Measurement sets 5, 6, and 16 fail to converge.
5.5.1
Parameter Estimates
Figure 5-21 shows the parameter estimates for a filter using Model 5. 3 of the 20 measurement sets (5, 6, and 16) failed to converge. Interestingly, fewer cases failed to converge
for a filter using Model 5 than for a filter using Model 4, despite the more serious mismodeling. The estimates of GRWA are very large and widely scattered. They range from
within one standard deviation of the true value (measurement set 8) to over 17 standard
deviations away from the true value (measurement set 3). The estimates of RWVL range
from nearly true (measurement set 8) to over an order of magnitude too large (measurement set 13). The estimates of MEAS are all an order of magnitude too high. The large
FIMLOF parameter estimates are evidence of very serious mismodeling.
Figure 5-22: Kalman Filter residuals using Model 5 and true noise values.
Figure 5-23: Kalman Filter residuals using Model 5 and FIMLOF parameter estimates.
Table 5.5: The means and standard deviations of the whiteness values of the filter residuals using Model 5 and the 20 measurement sets. The filter was started each time using the true noise values.

    Parameter Values Used                            U Axis                V Axis                W Axis
                                                 Average   Std. Dev.   Average   Std. Dev.   Average   Std. Dev.
    True values                                  0.55013   0.059980    0.52781   0.090789    0.56066   0.083877
    FIMLOF estimates when started from
      true values                                0.54923   0.063786    0.53886   0.091441    0.56383   0.072848

5.5.2 Residual Magnitude and Whiteness
Figure 5-22 shows the filter residuals from measurement set 1 using Model 5 and the
true noise values. Figure 5-23 shows the residuals from a filter using the FIMLOF estimates. The residuals from the filter using the FIMLOF parameter estimates are not
much changed from the filter residuals using the starting noise values.
FIMLOF has
been unable to remove the visible striations caused by the centrifuge target biases in the
measurements. Despite the order of magnitude changes to the noise estimates, the filter
residual magnitudes are largely unchanged.
The covariance envelope has expanded to a much more reasonable estimate; however,
it does not perfectly bound the residuals. The covariance envelopes for the U-axis and
V-axis residuals are still too small, while the envelope for the W-axis residuals is now
too big. This result occurs because the measurement noise is modeled as being equal on
all axes. For the full state model, MEAS is indeed equal on all axes. Model 5 no longer
meets this assumption, however, because the centrifuge target position biases have been
removed. The biases do not affect each axis equally, so the U-axis and V-axis require
more measurement noise than the W-axis.
Table 5.5 shows the mean and standard deviations of the whiteness values from the
Lozow metric for the filter residuals.
Using the FIMLOF parameter estimates in the
filter causes some changes to the whiteness values, but the changes are not large. The
null hypothesis that the whiteness values are drawn from the same population cannot be
rejected at a 95% confidence interval.
Figure 5-24: Normalized miss distances for Model 5.
5.5.3
Miss Distances
Figure 5-24 displays the miss distances calculated for each synthetic measurement set.
The miss distances calculated using the true parameter values and Model 5 are on average
19.1 times larger than the optimal miss distances. The miss distances calculated using
the FIMLOF parameter estimates are quite close to the optimal miss distances, averaging
only 1.3 times larger. Interestingly, the miss distances calculated using the FIMLOF
estimates for the filter using Model 5 are smaller than the miss distances calculated using
FIMLOF estimates for the filter using Model 4. FIMLOF tunes the suboptimal filter
using Model 5 very well. Using a pooled two-sample t-test and a 95% confidence interval,
the null hypothesis that the miss distances calculated using the reduced-order filter and
the true values and the miss distances calculated using the reduced-order filter and the
FIMLOF estimates are drawn from the same population can be rejected.
Chapter 6
Conclusion
Full Information Maximum Likelihood Optimal Filtering (FIMLOF) is a specialized form
of system identification, useful for identifying initial state covariances and system noise
parameters. In this thesis, it was used to identify the noise parameters of an Inertial
Measurement Unit (IMU). Specifically, the robustness of FIMLOF was evaluated using
synthetic measurements from the IMU. The sensitivity of FIMLOF to initial parameter
estimates and reduced-order models was investigated using Kalman Filter residuals, the
FIMLOF parameter estimates, and their associated statistics. The results show that
FIMLOF can be very successful at tuning suboptimal filter models.
6.1
Summary of Results
For well-modeled systems, the FIMLOF parameter estimates are very close to the true
parameter values, indicating the validity of the method. FIMLOF does have some sensitivity to initial parameter values; however, the estimates are accurate unless the initial
parameter values are much too small.
Tuning of Suboptimal Filters. Suboptimal system models receive significant benefit from FIMLOF. The miss distance (the distance between the target and the actual
impact point) of such systems improves when using calibrations with FIMLOF-estimated
noise values. The miss distance is calculated from the state estimates and their covariances. Therefore, its improvement is a good indication that the calibration of the system
has been substantially improved by tuning the model with FIMLOF.
Detection of Mismodeling. FIMLOF sometimes fails to converge for suboptimal
system models. The parameter estimates vary around a point instead of converging to it. The variations can be large, sometimes multiple standard deviations of the
parameter estimates between iterations. Failure to converge can be a sign of significant
mismodeling in the system.
Initial Parameter Estimate Sensitivity. FIMLOF proves to be largely insensitive
to the initial parameter estimates. Initial parameter estimates that are much too low
can lead to the algorithm converging to an incorrect value. However, FIMLOF does not
appear to display this behavior when started with very large initial parameter estimates.
Consequently, the initial parameter estimates should be larger than the expected system
noise values.
6.2
Future Work
Determine Cause of Initial Parameter Estimate Sensitivity. As previously mentioned, FIMLOF is susceptible to very low initial parameter estimates. Several reasons
have been hypothesized in this thesis. One possibility is that the low initial parameter
estimates cause the covariance envelope of the residuals to be much smaller than the
residuals themselves. When this occurs, the assumptions of the Kalman Filter are violated, and the filter does not accurately estimate the model states. In this case, FIMLOF
would calculate inaccurate partial derivatives of the filter. Another possibility is that the
expected value of the Hessian used by FIMLOF could be quite different from the true
value. In this case, the Newton-Raphson method would fail to converge to the correct
value, because the incorrect approximation to the Hessian would lead it in the wrong
direction. More work is needed to determine the true cause of the problem.
Compare FIMLOF to Other Algorithms. No attempt was made in this thesis to
compare the performance of the FIMLOF algorithm to that of other search algorithms.
It is possible that another search method (for example, a non-gradient algorithm) would
require fewer computations or exhibit less sensitivity to initial conditions.
Although
FIMLOF has been shown to be quite robust and successful in tuning suboptimal filters,
another method may be even more successful.
More work is needed to evaluate the
performance of other algorithms.
Separate Parameters for Individual Axes. FIMLOF currently treats the model
parameters as equal for all instruments. For example, the same random walk in angle
estimate is used for each of the gyroscopes in the IMU. Several benefits could be realized
by estimating a separate parameter for each instrument. First, the impact of instrument
observability issues would be limited. Second, an out-of-spec instrument could be detected
from its parameter estimate.
In this case, a bad instrument would have a parameter
estimate quite different from the other two. More work is needed to determine the viability
of estimating individual parameters and to implement a damage detection scheme for the
instruments.
Appendix A
Derivation of Selected Partial
Derivatives
In this Appendix, several of the more complex partial derivatives of the Kalman Filter
equations are derived.
A.1
Derivative of a Matrix Inverse
Consider an invertible matrix A(t), the elements of which are a function of t. The objective
is to find the derivative of A- 1 (t). Let B(t) = A- 1 (t). It follows that
A(t)B(t) = I.
(A.1)
Differentiating both sides of Equation (A.1) via the product rule results in
\frac{dA(t)}{dt}\,B(t) + A(t)\,\frac{dB(t)}{dt} = 0.   (A.2)
Solving Equation A.2 for dB(t)/dt yields
\frac{dB(t)}{dt} = \frac{dA^{-1}(t)}{dt} = -A^{-1}(t)\,\frac{dA(t)}{dt}\,A^{-1}(t).   (A.3)
A.2
Derivatives of Noise Parameters
Several of the Kalman Filter equations have difficult partial derivatives with respect to
noise parameters. The partial derivatives of the state estimate and error covariance update
equations are derived. The derivatives are presented for generic noise parameters and then
specialized to process and measurement noise parameters.
Substituting the Kalman Gain equation, Equation (3.39), into the error covariance
update equation, Equation (3.42), yields the expanded form of the equation, so that
P_k^+ = P_k^- - P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-.   (A.4)
Differentiating both sides of Equation (A.4) with respect to a noise parameter \alpha yields

\frac{\partial P_k^+}{\partial \alpha} = \frac{\partial P_k^-}{\partial \alpha}
- \frac{\partial P_k^-}{\partial \alpha} H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-
+ P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \left(H_k \frac{\partial P_k^-}{\partial \alpha} H_k^T + \frac{\partial R_k}{\partial \alpha}\right) \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-
- P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k \frac{\partial P_k^-}{\partial \alpha}.   (A.5)
Using the definition of the Kalman Gain, Equation (3.39), Equation (A.5) simplifies to
\frac{\partial P_k^+}{\partial \alpha} = \frac{\partial P_k^-}{\partial \alpha}
- \frac{\partial P_k^-}{\partial \alpha} H_k^T K_k^T
+ K_k \left(H_k \frac{\partial P_k^-}{\partial \alpha} H_k^T + \frac{\partial R_k}{\partial \alpha}\right) K_k^T
- K_k H_k \frac{\partial P_k^-}{\partial \alpha}.   (A.6)
For a process noise parameter, the partial derivative of the measurement noise covariance
is zero, and Equation (A.6) simplifies again to
\frac{\partial P_k^+}{\partial \alpha_i} = \left(I - K_k H_k\right) \frac{\partial P_k^-}{\partial \alpha_i} \left(I - K_k H_k\right)^T.   (A.7)
For a measurement noise parameter, Equation (A.6) simplifies to
\frac{\partial P_k^+}{\partial \alpha_j} = \left(I - K_k H_k\right) \frac{\partial P_k^-}{\partial \alpha_j} \left(I - K_k H_k\right)^T + K_k \frac{\partial R_k}{\partial \alpha_j} K_k^T.   (A.8)
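Equations (A.7) and (A.8) can be checked numerically. The sketch below is illustrative only, not the thesis software; the matrices and the assumed derivative of P_k^- are arbitrary examples. It perturbs the prior covariance along the assumed derivative and compares a central finite difference of the updated covariance with Equation (A.7).

    # Numerical check of Equation (A.7) for a process noise parameter; all
    # matrices here are arbitrary illustrative examples.
    import numpy as np

    H = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    R = np.diag([0.04, 0.09, 0.01])
    P0 = np.array([[2.0, 0.3],
                   [0.3, 1.0]])
    dP_dalpha = np.array([[1.0, 0.2],
                          [0.2, 0.5]])        # assumed dP_k^-/d(alpha)

    def update(P_minus):
        K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
        return P_minus - K @ H @ P_minus, K

    alpha, h = 0.5, 1.0e-6
    finite_diff = (update(P0 + (alpha + h) * dP_dalpha)[0]
                   - update(P0 + (alpha - h) * dP_dalpha)[0]) / (2.0 * h)
    _, K = update(P0 + alpha * dP_dalpha)
    I_KH = np.eye(2) - K @ H
    analytic = I_KH @ dP_dalpha @ I_KH.T      # right-hand side of Equation (A.7)
    print(np.max(np.abs(finite_diff - analytic)))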
The expanded form of the state estimate update equation, Equation (3.40), is
\hat{x}_k^+ = \hat{x}_k^- + P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} z_k - P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k \hat{x}_k^-.   (A.9)
Differentiating both sides of Equation (A.9) with respect to a noise parameter \alpha yields
\frac{\partial \hat{x}_k^+}{\partial \alpha} = \frac{\partial \hat{x}_k^-}{\partial \alpha}
+ \frac{\partial P_k^-}{\partial \alpha} H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \left(z_k - H_k \hat{x}_k^-\right)
- P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \left(H_k \frac{\partial P_k^-}{\partial \alpha} H_k^T + \frac{\partial R_k}{\partial \alpha}\right) \left(H_k P_k^- H_k^T + R_k\right)^{-1} \left(z_k - H_k \hat{x}_k^-\right)
- P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k \frac{\partial \hat{x}_k^-}{\partial \alpha}.   (A.10)
Using the definition of the Kalman Gain, Equation (A.10) simplifies to
\frac{\partial \hat{x}_k^+}{\partial \alpha} = \frac{\partial \hat{x}_k^-}{\partial \alpha}
+ \frac{\partial P_k^-}{\partial \alpha} H_k^T S_k^{-1} \left(z_k - H_k \hat{x}_k^-\right)
- K_k \left(H_k \frac{\partial P_k^-}{\partial \alpha} H_k^T + \frac{\partial R_k}{\partial \alpha}\right) S_k^{-1} \left(z_k - H_k \hat{x}_k^-\right)
- K_k H_k \frac{\partial \hat{x}_k^-}{\partial \alpha},   (A.11)

where S_k = H_k P_k^- H_k^T + R_k.
For a process noise parameter, the partial derivative of the measurement noise covariance
is zero. Equation (A.11) simplifies to
\frac{\partial \hat{x}_k^+}{\partial \alpha_i} = \left(I - K_k H_k\right) \left(\frac{\partial \hat{x}_k^-}{\partial \alpha_i} + \frac{\partial P_k^-}{\partial \alpha_i} H_k^T S_k^{-1} \left(z_k - H_k \hat{x}_k^-\right)\right).   (A.12)
For a measurement noise parameter, Equation (A.11) simplifies to
\frac{\partial \hat{x}_k^+}{\partial \alpha_j} = \left(I - K_k H_k\right) \left(\frac{\partial \hat{x}_k^-}{\partial \alpha_j} + \frac{\partial P_k^-}{\partial \alpha_j} H_k^T S_k^{-1} \left(z_k - H_k \hat{x}_k^-\right)\right) - K_k \frac{\partial R_k}{\partial \alpha_j} S_k^{-1} \left(z_k - H_k \hat{x}_k^-\right).   (A.13)
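The state-estimate derivatives can be verified the same way. The sketch below is illustrative only, not the thesis software; the measurement, covariances, and assumed derivatives are arbitrary examples. It compares a central finite difference of the updated state with the right-hand side of Equation (A.12) for a process noise parameter.

    # Numerical check of Equation (A.12); all quantities here are arbitrary
    # illustrative examples with assumed derivatives of xhat^- and P^-.
    import numpy as np

    H = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
    R = np.diag([0.04, 0.09])
    z = np.array([1.2, 0.8])
    x0 = np.array([1.0, -0.5])
    dx_dalpha = np.array([0.3, 0.1])          # assumed d(xhat_k^-)/d(alpha)
    P0 = np.array([[2.0, 0.3],
                   [0.3, 1.0]])
    dP_dalpha = np.array([[1.0, 0.2],
                          [0.2, 0.5]])        # assumed dP_k^-/d(alpha)

    def update(alpha):
        x_minus = x0 + alpha * dx_dalpha
        P_minus = P0 + alpha * dP_dalpha
        S = H @ P_minus @ H.T + R
        K = P_minus @ H.T @ np.linalg.inv(S)
        return x_minus + K @ (z - H @ x_minus), K, S, x_minus

    alpha, h = 0.5, 1.0e-6
    finite_diff = (update(alpha + h)[0] - update(alpha - h)[0]) / (2.0 * h)
    _, K, S, x_minus = update(alpha)
    I_KH = np.eye(2) - K @ H
    analytic = I_KH @ (dx_dalpha
                       + dP_dalpha @ H.T @ np.linalg.solve(S, z - H @ x_minus))
    print(np.max(np.abs(finite_diff - analytic)))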
Appendix B
Inertial Measurement Unit System
Model
The inertial measurement unit consists of four gimbals supporting a gyro-stabilized platform. The system uses two two-degree-of-freedom gyros for stabilization. Velocity is sensed by three pendulous integrating gyroscopic accelerometers. The inertial measurement unit system model consists of four main error models. The gyro error model is presented in Section B.1. The basic accelerometer error model appears in Section B.2. A PIGA error model, containing error states specific to the PIGAs, can be found in Section B.3.
The misalignment error model of the accelerometers is located in Section B.4. In addition,
the centrifuge error model used in centrifuge testing is given in Section B.5.
The system state error dynamics can be written as a system of first order differential
equations, so that
\begin{bmatrix} \dot{\varepsilon}_S \\ \dot{x} \end{bmatrix} =
\begin{bmatrix} F & B \\ 0 & T \end{bmatrix}
\begin{bmatrix} \varepsilon_S \\ x \end{bmatrix} +
\begin{bmatrix} q_0 \\ q_1 \end{bmatrix}.   (B.1)
ε_S is the system state error vector and x is the error state vector. The system state error vector is made up of the position error vector εR (ft), the velocity error vector εV (ft/s), and the platform attitude correction vector δθ (arcsec). The system state error vector is, therefore,

\varepsilon_S = \begin{bmatrix} \varepsilon V \\ \varepsilon R \\ \delta\theta \end{bmatrix}.   (B.2)
The error state vector, x, defines the individual error states.
For centrifuge testing, the system navigation error dynamics matrix, F is given by
\dot{\varepsilon}_S = F \varepsilon_S + B x + q_0,   (B.3)
where
F = \begin{bmatrix} 0 & 0 & -(a \times) \\ I & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.   (B.4)
The error states couple into the system dynamics through the B matrix. The configuration
of this matrix depends on the individual error states and will be defined further in the
sequel.
The error state dynamics matrix, T is used to define the error state differential equation, expressed as
\dot{x} = T x + q_1.   (B.5)
q_1 is an error state driving vector. Both q_1 and T depend on the individual states.
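For propagation in a discrete filter, the augmented matrix of Equation (B.1) can be assembled from F, B, and T and converted to a transition matrix. The sketch below is illustrative only, not the thesis software; the block sizes, the B matrix, and the specific force are placeholders, and unit conversions such as arcseconds to radians are omitted.

    # Illustrative assembly of the augmented error dynamics of Equation (B.1);
    # dimensions, the B matrix, and the specific force are placeholders.
    import numpy as np
    from scipy.linalg import expm

    def skew(v):
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    n_sys, n_err = 9, 4                       # assumed block sizes
    a = np.array([0.0, 0.0, 32.2])            # assumed specific force, ft/s^2

    F = np.zeros((n_sys, n_sys))
    F[0:3, 6:9] = -skew(a)                    # attitude error drives velocity error (B.4)
    F[3:6, 0:3] = np.eye(3)                   # velocity error integrates into position error
    B = 1.0e-3 * np.ones((n_sys, n_err))      # placeholder error-state coupling
    T = np.zeros((n_err, n_err))              # static error states

    A_aug = np.block([[F, B],
                      [np.zeros((n_err, n_sys)), T]])
    Phi = expm(A_aug * 0.01)                  # transition matrix for a 0.01 s step
    print(Phi.shape)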
B.1
Gyroscope Error Model
The gyro error model consists of three basic error types. These errors are defined in terms
of their effect on the gyro drift rate, ω_U, ω_V, and ω_W (arcsec/s), along the gyro U, V, and
W axes. Figure 4-1 shows these axes in relation to the accelerometer axes. Acceleration-insensitive drift errors, denoted by BD_, are called bias errors. Throughout this appendix,
an underscore in a state name indicates that the state occurs for all of the instrument
axes. In the case of the bias errors, the states are BDU, BDV, and BDW. Mass unbalance
and spring restraint errors for loading along the spin axis form the acceleration-sensitive
errors, AD_. Compliance errors, AAD_, are acceleration-squared-sensitive errors. The
complete gyro error model is, therefore,
\begin{bmatrix} \omega_U \\ \omega_V \\ \omega_W \end{bmatrix} =
k_1 \begin{bmatrix} BDU \\ BDV \\ BDW \end{bmatrix}
+ k_1 k_2 \begin{bmatrix} ADUS & ADUI & ADUQ \\ ADVS & ADVWI & ADVWQ \\ ADWS & ADVWI & -ADVWQ \end{bmatrix}
\begin{bmatrix} a_U \\ a_V \\ a_W \end{bmatrix}
+ k_1 k_2^2 \begin{bmatrix} AADUSS & AADUII & AADUQQ \\ AADVSS & AADVWII & AADVQQ \\ AADWSS & AADVWII & AADWQQ \end{bmatrix}
\begin{bmatrix} a_U^2 \\ a_V^2 \\ a_W^2 \end{bmatrix}
+ k_1 k_2^2 \begin{bmatrix} AADUSI & AADUIQ & AADUSQ \\ AADVWSI & AADVWIQ & AADVWSQ \\ AADVWSI & -AADVWIQ & -AADVWSQ \end{bmatrix}
\begin{bmatrix} a_U a_V \\ a_V a_W \\ a_U a_W \end{bmatrix}   (B.6)
a_U, a_V, and a_W (ft/s²) are the nongravitational specific forces applied along the gyro U, V, and W axes. k_1 (arcsec/s) and k_2 (-) are conversion factors. Note that several terms in Equation B.6 occur twice due to perfect correlations, so that
ADVI = ADWI = ADVWI
ADVQ = -ADWQ = ADVWQ
ADVII = ADWII = ADVWII
ADVSI = ADWSI = ADVWSI
ADVSQ = -ADWSQ = ADVWSQ
ADVIQ = -ADWIQ = ADVWIQ.
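In software, the perfect correlations above can be enforced by carrying only the ADVW* coefficients and expanding them, signs included, wherever the full ADV*/ADW* set is needed. The sketch below is illustrative only and not the thesis code; the numeric values are placeholders.

    # Illustrative expansion of the perfectly correlated drift coefficients.
    def expand_correlated(advw):
        """Map the reduced ADVW* coefficients to the full ADV*/ADW* set."""
        return {
            'ADVI':  advw['ADVWI'],  'ADWI':  advw['ADVWI'],
            'ADVQ':  advw['ADVWQ'],  'ADWQ': -advw['ADVWQ'],
            'ADVII': advw['ADVWII'], 'ADWII': advw['ADVWII'],
            'ADVSI': advw['ADVWSI'], 'ADWSI': advw['ADVWSI'],
            'ADVSQ': advw['ADVWSQ'], 'ADWSQ': -advw['ADVWSQ'],
            'ADVIQ': advw['ADVWIQ'], 'ADWIQ': -advw['ADVWIQ'],
        }

    print(expand_correlated({'ADVWI': 1.0, 'ADVWQ': 2.0, 'ADVWII': 3.0,
                             'ADVWSI': 4.0, 'ADVWSQ': 5.0, 'ADVWIQ': 6.0}))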
B.2
Accelerometer Error Model
The accelerometers used in the IMU are pendulous integrating gyroscopic accelerometers (PIGAs). The accelerometer error model consists of five basic error types. The errors are defined in terms of the residual accelerometer-related errors for the accelerometer axes, δa_X, δa_Y, and δa_Z (ft/s²). The errors are expressed in the X, Y, and Z accelerometer axes. These axes are shown in Figure 4-1. Non-excitation-sensitive torques, e.g., flex-lead torques, and excitation-sensitive torques, e.g., magnetic field leakage torques, form the bias errors, AB_. Acceleration-sensitive scale-factor errors, SFE_, are caused by variations in PIG angular momentum and pendulosity between instruments. Acceleration-squared-sensitive terms are made up of PIGA anisoinertia, FII_, PIGA T-shaft compliance, FX1_, and float motion error, FIX_, caused by finite suspension stiffness of the PIGA float. The accelerometer model is
\begin{bmatrix} \delta a_X \\ \delta a_Y \\ \delta a_Z \end{bmatrix} =
k_3 \begin{bmatrix} ABX \\ ABY \\ ABZ \end{bmatrix}
+ k_4 \begin{bmatrix} SFEX & 0 & 0 \\ 0 & SFEY & 0 \\ 0 & 0 & SFEZ \end{bmatrix}
\begin{bmatrix} a_X \\ a_Y \\ a_Z \end{bmatrix}
+ k_2 k_4 \begin{bmatrix} FIIX & FX1X & FX1X \\ FX1Y & FIIY & FX1Y \\ FX1Z & FX1Z & FIIZ \end{bmatrix}
\begin{bmatrix} a_X^2 \\ a_Y^2 \\ a_Z^2 \end{bmatrix}
+ \begin{bmatrix} FIXX & 0 & 0 \\ 0 & FIXY & 0 \\ 0 & 0 & FIXZ \end{bmatrix}
\begin{bmatrix} a_X (a_Y^2 + a_Z^2) \\ a_Y (a_X^2 + a_Z^2) \\ a_Z (a_X^2 + a_Y^2) \end{bmatrix}   (B.7)
a_X, a_Y, and a_Z (ft/s²) are the nongravitational specific forces applied along the accelerometer X, Y, and Z axes. A and B are PIGA FIX constants. k_3 and k_4 are conversion factors; k_4 is given in 1/ppm.
B.3
PIGA Error Model
The error terms presented in Section B.2 are generic accelerometer errors. The system
model also contains terms specific to the PIGAs.
AMSI - Coning angle sensitivity about the spin axis
The AMSI states are static PIGA offset coning angles about the spin axes. The errors in
indicated acceleration are proportional to the product of the coning angle and the sensed
96
acceleration along the output axis of the PIGA.
AMOI - Coning angle sensitivity about the output axis
The AMOI states are static PIGA offset coning angles about the output axes.
The
errors in indicated acceleration are proportional to the product of the coning angle and
the sensed acceleration along the spin axis of the PIGA.
FPO - Pendulous and output axis non-orthogonality
The misalignment of the pendulous axis of the PIGA relative to the case may be separated
into two components. The FPO_ states are the sensitivity of the output axis components
to a misalignment of the pendulous axis.
DIS and DOS - Input and output axis compliance coefficients
The DIS and DOS terms model the acceleration-squared drift sensitivity of the PIGA
gyroscope. They are very similar to the second order drifts AADSI and AADSQ in the
accelerometer model.
BRSI and BROI - Input and output axis bearing runout sensitivity
The PIG input axes rotate and precess around the PIGA input axes due to misalignments
between them. The BRSI and BROI terms model the effect of the bearing runout on the
sensed acceleration error.
CHI - Viscous Torque about output axis
The CHI terms model the viscous torque about the output axis of the PIGA. They are
a combination of the torque resulting from angular velocity of the float relative to the
PIGA case and the float cocking angular velocity.
Resolver Harmonics
The PIGA measures the velocity through the Servo Driven Member (SDM) angle. The
SDM angle is read by a one-speed resolver and an eight-speed resolver. Any error in either
of the resolvers causes a harmonic error. This error is periodic with modulo 2π. The
model contains error states for the 1, 2, 7, 8, 9, 15, 16, and 32 speed harmonics. These
states contain both the SIN and COS terms of the harmonics.
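A minimal sketch of the resolver harmonic error described above follows; it is illustrative only and not the thesis code, and the coefficient values are placeholders. The error is a sum of sine and cosine terms at the listed harmonic speeds of the SDM angle.

    # Illustrative resolver harmonic error; amplitudes are placeholders.
    import numpy as np

    HARMONIC_SPEEDS = (1, 2, 7, 8, 9, 15, 16, 32)

    def resolver_error(theta, sin_coeffs, cos_coeffs):
        """Periodic error as SIN/COS harmonics of the SDM angle theta (rad)."""
        return sum(sin_coeffs[n] * np.sin(n * theta)
                   + cos_coeffs[n] * np.cos(n * theta)
                   for n in HARMONIC_SPEEDS)

    coeffs = {n: 1.0e-6 for n in HARMONIC_SPEEDS}
    print(resolver_error(0.3, coeffs, coeffs))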
B.4
Misalignment Error Model
The acceleration-sensitive nonorthogonality error is the misalignment of the PIGA Input
Axes (IAs) relative to the X, Y, and Z axes. The X, Y, and Z axes form an orthogonal
coordinate system established at the time of calibration. This error is not a calibration
error; rather, it is a combination of mechanical and electronic misalignments of the IAs
since the time of calibration. The error is modeled as three independent Gaussian random
variables.
\begin{bmatrix} \delta a_X \\ \delta a_Y \\ \delta a_Z \end{bmatrix} =
k_5 \begin{bmatrix} 0 & 0 & 0 \\ -MYXN & 0 & 0 \\ -MZXN & -MZYN & 0 \end{bmatrix}
\begin{bmatrix} a_X \\ a_Y \\ a_Z \end{bmatrix}   (B.8)
a_X, a_Y, and a_Z (ft/s²) are the nongravitational specific forces applied along the accelerometer X, Y, and Z axes. k_5 (rad/arcsec) is a conversion factor. δa_X, δa_Y, and δa_Z (ft/s²) are the residual accelerometer-related errors for the accelerometer axes.
The acceleration-sensitive platform compliance terms, D*, are also modeled. These
are the result of the deformation of the IMU base due to acceleration.
B.5
Centrifuge Error Model
The centrifuge arm is shown in Figure B-1. The error model of the centrifuge consists of
two basic error types. Lever arm errors result from errors in the location of the IMU on
the centrifuge arm. Target bias errors are errors in the location of the target.
Figure B-1: Location of IMU on centrifuge arm
Figure B-2: Centrifuge Centered Earth Fixed and Centrifuge Centered Arm Fixed coordinate frames
B.5.1
Lever Arm Errors
The centrifuge arm is assumed to be a rigid body during a centrifuge test. Static lever
arm errors, denoted by CSLVARM_, are the result of errors in the placement of the IMU
on the centrifuge arm. More specifically, they are the result of errors in the displacement
between the center of navigation of the IMU and the proximity sensor on the tip of the
arm. This displacement is defined as the vector r 3 in the centrifuge centered arm fixed
(CCAF) coordinate frame, and is time invariant. The CCAF frame is shown in Figure B-2.
Figure B-3: Geometry of IMU on centrifuge arm
r_3 is shown in Figure B-3. The position error in the gyro frame is given by
\varepsilon R(t) = T^G_{CCAF}(t)\, r_3.   (B.9)
The CCAF to G transformation is a time dependent function of the centrifuge arm position, so that
\varepsilon V(t) = \dot{\varepsilon} R(t) = \frac{dT^G_{CCAF}(t)}{dt}\, r_3.   (B.10)
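For a centrifuge arm spinning at a constant rate about the vertical axis, Equations (B.9) and (B.10) can be evaluated directly. The sketch below is illustrative only and not the thesis software; the arm rate, the lever arm vector, and the planar form of the CCAF-to-G transformation are assumptions.

    # Illustrative evaluation of Equations (B.9)-(B.10); the rotation, rate,
    # and lever arm values are assumptions, not thesis data.
    import numpy as np

    omega = 2.0 * np.pi * 0.5                 # assumed arm rate, rad/s
    r3 = np.array([0.1, 0.02, -0.05])         # assumed lever arm vector, ft

    def T_ccaf_to_g(t):
        c, s = np.cos(omega * t), np.sin(omega * t)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    def dT_dt(t):
        c, s = np.cos(omega * t), np.sin(omega * t)
        return omega * np.array([[-s, -c, 0.0],
                                 [ c, -s, 0.0],
                                 [0.0, 0.0, 0.0]])

    t = 1.3
    eps_R = T_ccaf_to_g(t) @ r3               # Equation (B.9)
    eps_V = dT_dt(t) @ r3                     # Equation (B.10)
    print(eps_R, eps_V)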
The static lever arms are time invariant, so it follows that
\dot{x} = 0.   (B.11)
The B matrix for CSLVARM is given by
[
TCGC AF
0
1
(B.12)
0J
For this error model, q_0, q_1, and T are given by

q_0 = q_1 = 0,   (B.13)

T = [0].   (B.14)
The IMU is mounted to the centrifuge via a set of shock mounts. These shock mounts
deform under the centrifugal acceleration of the centrifuge. The deformation is modeled
as occurring solely in the X axis of the CCAF frame. The SHOCKMT state models the
error in the displacement of the shock mounts.
B.5.2
Centrifuge Target Bias Errors
The centrifuge target bias states account for errors in the position of the targets. They
are measured in inches in the CCEF frame. The bias state for target i is
\varepsilon R_{b,i} = \begin{bmatrix} \varepsilon R_{bx,i} \\ \varepsilon R_{by,i} \\ \varepsilon R_{bz,i} \end{bmatrix}.   (B.15)
The target biases create a measurement error, given by
\delta z_i = H(t)\, \varepsilon R_{b,i},   (B.16)

where H(t) is formed from the time-varying transformation T^U_{CCEF}(t) from the CCEF frame.
Appendix C
Removed Model State Listings
C.1
Model 2
The following states are missing from Model 2:
SIN01HX SIN01HY SIN01HZ
COS01HX COS01HY COS01HZ
SIN02HX SIN02HY SIN02HZ
COS02HX COS02HY COS02HZ
SIN07HX SIN07HY SIN07HZ
COS07HX COS07HY COS07HZ
SIN08HX SIN08HY SIN08HZ
COS08HX COS08HY COS08HZ
SIN09HX SIN09HY SIN09HZ
COS09HX COS09HY COS09HZ
SIN15HX SIN15HY SIN15HZ
COS15HX COS15HY COS15HZ
SIN16HX SIN16HY SIN16HZ
COS16HX COS16HY COS16HZ
SIN32HX SIN32HY SIN32HZ
COS32HX COS32HY COS32HZ
C.2
Model 3
The following states are missing from Model 3:
AMOIX AMOIY AMOIZ
CHIX CHIY CHIZ
FPOX FPOY FPOZ
C.3
Model 4
In addition to the states removed in Models 2 and 3, the following states are removed
from Model 4:
AMSIX AMSIY AMSIZ
FPOX FPOY FPOZ
DISX DISY DISZ
DOSX DOSY DOSZ
BROIX BROIY BROIZ
BRSIX BRSIY BRSIZ
CSLVARMX CSLVARMY
SHOCKMT
C.4
Model 5
In addition to the states removed in Models 2, 3, and 4, the following states are removed
from Model 5:
CT0POSBX CT0POSBY CT0POSBZ
CT1POSBX CT1POSBY CT1POSBZ
CT2POSBX CT2POSBY CT2POSBZ
CT3POSBX CT3POSBY CT3POSBZ
CT4POSBX CT4POSBY CT4POSBZ
CT5POSBX CT5POSBY CT5POSBZ
CT6POSBX CT6POSBY CT6POSBZ