MIT OpenCourseWare
http://ocw.mit.edu
16.346 Astrodynamics
Fall 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 22
The Covariance, Information & Estimator Matrices
The Covariance Matrix
#13.6
• Random Vector of Measurement Errors:  α  where  αᵀ = [ α₁ α₂ … αₙ ]
  Assume the measurement errors are independent.
• First and Second Moments:  E(α) = 0  and  E(αᵢαⱼ) = 0  (i ≠ j)
• Variances:  E(αᵢ²) = σᵢ²
• Variance Matrix:

  E(ααᵀ) = A = ⎡ σ₁²  0   ⋯  0  ⎤
               ⎢ 0   σ₂²  ⋯  0  ⎥
               ⎢ ⋮    ⋮   ⋱  ⋮  ⎥
               ⎣ 0    0   ⋯  σₙ² ⎦
• Estimation Error Vector:  ε = PHA⁻¹α
• Covariance Matrix of Estimation Errors:

  E(εεᵀ) = PHA⁻¹ E(ααᵀ) A⁻¹HᵀP = PHA⁻¹AA⁻¹HᵀP
         = PHA⁻¹HᵀP = PP⁻¹P = P = (P⁻¹)⁻¹

  Covariance Matrix = (Information Matrix)⁻¹
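The identity above can be checked numerically. The sketch below (NumPy; the measurement geometry H and the variances are made-up illustration values, with δq = Hᵀδr as in the notes) builds the information matrix P⁻¹ = HA⁻¹Hᵀ and confirms that the covariance of the estimation error ε = PHA⁻¹α is exactly P:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geometry: a 3-dimensional state observed by m = 5 scalar
# measurements; column i of H is the h-vector of measurement i.
H = rng.standard_normal((3, 5))
sigma2 = np.array([0.1, 0.2, 0.15, 0.3, 0.25]) ** 2
A = np.diag(sigma2)                  # variance matrix of measurement errors

# Information matrix P^{-1} = H A^{-1} H^T and covariance P
info = H @ np.linalg.inv(A) @ H.T
P = np.linalg.inv(info)

# Estimation error eps = P H A^{-1} alpha, so its covariance is
# P H A^{-1} E(alpha alpha^T) A^{-1} H^T P = P H A^{-1} H^T P = P
G = P @ H @ np.linalg.inv(A)
print(np.allclose(G @ A @ G.T, P))   # True
```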
A Matrix Identity (The Magic Lemma)
Let Xmn and Ynm be rectangular compatible matrices such that Xmn Ynm and Ynm Xmn
are both meaningful.
However, Rmm = Xmn Ynm is an m × m matrix while Snn = Ynm Xmn is an n × n
matrix. With this understanding, the following sequence of matrix operations leads to a
remarkable and very useful identity:
(Imm + Xmn Ynm )(Imm + Xmn Ynm )−1 = Imm
Ynm (Imm + Xmn Ynm )(Imm + Xmn Ynm )−1 = Ynm
(Inn + Ynm Xmn )Ynm (Imm + Xmn Ynm )−1 = Ynm
(Inn + Ynm Xmn )Ynm (Imm + Xmn Ynm )−1 Xmn = Ynm Xmn
Inn + (Inn + Ynm Xmn )Ynm (Imm + Xmn Ynm )−1 Xmn = Inn + Ynm Xmn
(I + YX)−1 [I + (I + YX)Y(I + XY)−1 X] = (I + YX)−1 (I + YX)
(I + YX)−1 + Y(I + XY)−1 X = I
Hence:
(Inn + Ynm Xmn )−1 = Inn − Ynm (Imm + Xmn Ynm )−1 Xmn
To generalize, let Y = AB and X = C⁻¹Bᵀ (with Aₙₙ, Bₙₘ, Cₘₘ). Then

  (A⁻¹ + BC⁻¹Bᵀ)⁻¹ = A − AB(C + BᵀAB)⁻¹BᵀA
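Both forms of the lemma can be verified numerically. The sketch below (NumPy; the dimensions and the random matrices are arbitrary illustration choices) checks the basic identity and its generalization:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 6
X = rng.standard_normal((m, n))
Y = rng.standard_normal((n, m))
I_m, I_n = np.eye(m), np.eye(n)

# Magic lemma: (I + YX)^{-1} = I - Y (I + XY)^{-1} X
lhs = np.linalg.inv(I_n + Y @ X)
rhs = I_n - Y @ np.linalg.inv(I_m + X @ Y) @ X
print(np.allclose(lhs, rhs))         # True

# Generalization with Y = A B and X = C^{-1} B^T:
# (A^{-1} + B C^{-1} B^T)^{-1} = A - A B (C + B^T A B)^{-1} B^T A
A = np.diag(rng.uniform(1.0, 2.0, n))   # any invertible A; diagonal here
B = rng.standard_normal((n, m))
C = np.diag(rng.uniform(1.0, 2.0, m))
lhs2 = np.linalg.inv(np.linalg.inv(A) + B @ np.linalg.inv(C) @ B.T)
rhs2 = A - A @ B @ np.linalg.inv(C + B.T @ A @ B) @ B.T @ A
print(np.allclose(lhs2, rhs2))       # True
```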
Inverting the Information Matrix Using the Magic Lemma
• Recursive formulation:

  P′⁻¹ = P⁻¹ + h(σ²)⁻¹hᵀ   [ = A⁻¹ + BC⁻¹Bᵀ ]

• Using the magic lemma:

  (A⁻¹ + BC⁻¹Bᵀ)⁻¹ = A − AB(C + BᵀAB)⁻¹BᵀA
  P′ = P − Ph(σ² + hᵀPh)⁻¹hᵀP

• Define:  a = σ² + hᵀPh  and  w = (1/a)Ph  so that

  P′ = (I − whᵀ)P
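The recursive update avoids re-inverting the full information matrix: only the scalar a is inverted. A minimal check (NumPy; the prior covariance, the measurement vector h, and σ² are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
# An arbitrary symmetric positive definite prior covariance P, plus one new
# scalar measurement with sensitivity vector h and error variance sigma2.
L = rng.standard_normal((n, n))
P = L @ L.T + n * np.eye(n)
h = rng.standard_normal(n)
sigma2 = 0.25

# Direct route: invert the updated information matrix
P_new_direct = np.linalg.inv(np.linalg.inv(P) + np.outer(h, h) / sigma2)

# Magic-lemma route: no n x n inversion, only the scalar a
a = sigma2 + h @ P @ h
w = P @ h / a
P_new = (np.eye(n) - np.outer(w, h)) @ P

print(np.allclose(P_new, P_new_direct))   # True
```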
The Square Root of the P Matrix
#13.7
The matrix W is the Square Root of a Positive Definite Matrix P if P = WWᵀ. To propagate the square root directly, seek W′ with P′ = W′W′ᵀ. From

  P′ = P − (1/a)PhhᵀP

we have, with z = Wᵀh,

  W′W′ᵀ = W[ I − (1/a)WᵀhhᵀW ]Wᵀ = W[ I − (1/a)zzᵀ ]Wᵀ

Try W′ = W(I − βzzᵀ):

  W′W′ᵀ = W(I − βzzᵀ)(I − βzzᵀ)Wᵀ
        = W(I − 2βzzᵀ + β²zzᵀzzᵀ)Wᵀ
        = W[ I − (2β − β²z²)zzᵀ ]Wᵀ

But  z² = zᵀz = hᵀWWᵀh = hᵀPh = a − σ²

Hence:  2β − β²z² = 1/a  ⟹  β = 1/(a + √(aσ²))

so that

  W′ = W( I − zzᵀ/(a + √(aσ²)) )
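The square-root update can be confirmed against the covariance update it must reproduce. A sketch (NumPy; the initial square root W, the vector h, and σ² are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
W = np.tril(rng.standard_normal((n, n))) + 2 * np.eye(n)   # any square root of P
P = W @ W.T
h = rng.standard_normal(n)
sigma2 = 0.25

a = sigma2 + h @ P @ h
z = W.T @ h
beta = 1.0 / (a + np.sqrt(a * sigma2))

# Updated square root: W' = W (I - beta z z^T)
W_new = W @ (np.eye(n) - beta * np.outer(z, z))

# It must reproduce the covariance update P' = P - (1/a) P h h^T P
P_new = P - np.outer(P @ h, P @ h) / a
print(np.allclose(W_new @ W_new.T, P_new))   # True
```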
Properties of the Estimator
• Linear
• Unbiased: If the measurements are exact (α = 0), then δq̃ = δq = Hᵀδr, so that

  δr̂ = PHA⁻¹Hᵀδr = PP⁻¹δr = δr

• Reduces to the deterministic case (δr̂ = H⁻ᵀδq̃) if there are no redundant measurements:
  If H is square and non-singular, then P = H⁻ᵀAH⁻¹ and

  δr̂ = H⁻ᵀAH⁻¹HA⁻¹δq̃ = H⁻ᵀδq̃
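The deterministic limit is easy to check numerically. The sketch below (NumPy; H and A are random illustration values, with H shifted to be safely non-singular) shows that with as many measurements as unknowns the statistical estimator collapses to H⁻ᵀδq:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
# Square, non-singular H: as many measurements as unknowns (no redundancy).
H = rng.standard_normal((n, n)) + 2 * np.eye(n)
A = np.diag(rng.uniform(0.05, 0.3, n))

P = np.linalg.inv(H @ np.linalg.inv(A) @ H.T)
dq = rng.standard_normal(n)               # a hypothetical measurement vector

dr_est = P @ H @ np.linalg.inv(A) @ dq    # statistical estimator P H A^{-1} dq
dr_det = np.linalg.solve(H.T, dq)         # deterministic solution H^{-T} dq
print(np.allclose(dr_est, dr_det))        # True
```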
Recursive Form of the Estimator
#13.6
Append a new scalar measurement δq̃ with geometry vector h and error variance σ², and define the augmented quantities

  H′ = [ H  h ]      A′ = [ A  0 ; 0ᵀ  σ² ]      augmented measurement vector [ δq ; δq̃ ]

Then, with F = PHA⁻¹ so that δr̂ = Fδq, and P′ = (I − whᵀ)P:

  F′ = P′H′A′⁻¹ = (I − whᵀ)P [ H  h ] [ A⁻¹  0 ; 0ᵀ  σ⁻² ]
     = [ (I − whᵀ)F    (I − whᵀ)Ph/σ² ]

Since Ph = aw and hᵀPh = a − σ²,

  (I − whᵀ)Ph/σ² = [ aw − w(a − σ²) ]/σ² = w

so that F′ = [ (I − whᵀ)F   w ] and the updated estimate is

  δr̂′ = F′[ δq ; δq̃ ] = (I − whᵀ)Fδq + wδq̃ = (I − whᵀ)δr̂ + wδq̃
      = δr̂ + w(δq̃ − hᵀδr̂) = δr̂ + w(δq̃ − δq̂)

Since δq = hᵀδr, then δq̂ = hᵀδr̂ is the best estimate of the new measurement.

Summary of the recursive estimator:

  δr̂′ = δr̂ + w(δq̃ − δq̂)        w = 1/(σ² + hᵀPh) · Ph

or, in square-root form with z = Wᵀh:

  δr̂′ = δr̂ + w(δq̃ − δq̂)        w = 1/(σ² + z²) · Wz

  W′ = W( I − zzᵀ/(a + √(aσ²)) )    or    P′ = (I − whᵀ)P
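Processing measurements one at a time with this recursion must reproduce the batch estimate. A sketch (NumPy; the geometry, variances, and true state are made-up illustration values) starts from a batch solution of the first four measurements and folds in the fifth recursively:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 3, 5
H = rng.standard_normal((n, m))
sigma2 = rng.uniform(0.05, 0.3, m)
A = np.diag(sigma2)
dr_true = rng.standard_normal(n)
dq = H.T @ dr_true + rng.standard_normal(m) * np.sqrt(sigma2)

# Batch estimate from the first m-1 measurements
Hb, Ab, dqb = H[:, :-1], A[:-1, :-1], dq[:-1]
P = np.linalg.inv(Hb @ np.linalg.inv(Ab) @ Hb.T)
dr_hat = P @ Hb @ np.linalg.inv(Ab) @ dqb

# Incorporate the last measurement recursively
h, s2, dq_new = H[:, -1], sigma2[-1], dq[-1]
w = P @ h / (s2 + h @ P @ h)                  # w = Ph / (sigma^2 + h^T P h)
dr_hat = dr_hat + w * (dq_new - h @ dr_hat)   # dr' = dr + w (dq_tilde - dq_hat)
P = (np.eye(n) - np.outer(w, h)) @ P          # P' = (I - w h^T) P

# Must match the batch estimate using all m measurements
P_all = np.linalg.inv(H @ np.linalg.inv(A) @ H.T)
dr_all = P_all @ H @ np.linalg.inv(A) @ dq
print(np.allclose(dr_hat, dr_all), np.allclose(P, P_all))   # True True
```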
Triangular Square Root
  WWᵀ = ⎡ w₁  0   0  ⎤ ⎡ w₁  w₃  w₆ ⎤   ⎡ m₁  m₂  m₃ ⎤
        ⎢ w₃  w₂  0  ⎥ ⎢ 0   w₂  w₅ ⎥ = ⎢ m₂  m₄  m₅ ⎥
        ⎣ w₆  w₅  w₄ ⎦ ⎣ 0   0   w₄ ⎦   ⎣ m₃  m₅  m₆ ⎦

where

  w₁² = m₁        w₃ = m₂/w₁        w₆ = m₃/w₁

  w₂² = (m₁m₄ − m₂²)/m₁        w₅ = (m₅ − w₃w₆)/w₂        w₄² = det M / (m₁w₂²)
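These element formulas are the 3 × 3 Cholesky factorization written out explicitly. A sketch (NumPy; m₁ … m₆ are made-up numbers chosen so that M is positive definite) confirms they reproduce M:

```python
import numpy as np

# Hypothetical symmetric positive definite M with the element labels above
m1, m2, m3, m4, m5, m6 = 4.0, 1.0, 0.5, 3.0, 0.8, 2.0
M = np.array([[m1, m2, m3],
              [m2, m4, m5],
              [m3, m5, m6]])

# Explicit triangular square-root elements
w1 = np.sqrt(m1)
w3 = m2 / w1
w6 = m3 / w1
w2 = np.sqrt((m1 * m4 - m2**2) / m1)
w5 = (m5 - w3 * w6) / w2
w4 = np.sqrt(np.linalg.det(M) / (m1 * w2**2))

W = np.array([[w1, 0.0, 0.0],
              [w3, w2, 0.0],
              [w6, w5, w4]])
print(np.allclose(W @ W.T, M))            # True
```

Since the diagonal entries are taken positive, W coincides with the standard lower-triangular Cholesky factor of M.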