16. Mean Square Estimation

Given some information that is related to an unknown quantity of
interest, the problem is to obtain a good estimate for the unknown in
terms of the observed data.
Suppose $X_1, X_2, \ldots, X_n$ represent a sequence of random variables for which a set of observations is available, and $Y$ represents an unknown random variable. The problem is to obtain a good estimate for $Y$ in terms of the observations $X_1, X_2, \ldots, X_n$.
Let
$$\hat Y = \varphi(X_1, X_2, \ldots, X_n) = \varphi(X) \qquad \text{(16-1)}$$
represent such an estimate for $Y$.
Note that $\varphi(\cdot)$ can be a linear or a nonlinear function of the observations $X_1, X_2, \ldots, X_n$. Clearly
$$\varepsilon(X) = Y - \hat Y = Y - \varphi(X) \qquad \text{(16-2)}$$
represents the error in the above estimate, and $|\varepsilon|^2$ the square of the error. Since $\varepsilon$ is a random variable, $E\{|\varepsilon|^2\}$ represents the mean square error. One strategy to obtain a good estimator would be to
minimize the mean square error by varying over all possible forms of $\varphi(\cdot)$, and this procedure gives rise to the Minimization of the Mean Square Error (MMSE) criterion for estimation. Thus under the MMSE criterion, the estimator $\varphi(\cdot)$ is chosen such that the mean square error $E\{|\varepsilon|^2\}$ is at its minimum.
Next we show that the conditional mean of Y given X is the
best estimator in the above sense.
Theorem 1: Under the MMSE criterion, the best estimator for the unknown $Y$ in terms of $X_1, X_2, \ldots, X_n$ is given by the conditional mean of $Y$ given $X$. Thus
$$\hat Y = \varphi(X) = E\{Y \mid X\}. \qquad \text{(16-3)}$$
Proof: Let $\hat Y = \varphi(X)$ represent an estimate of $Y$ in terms of $X = (X_1, X_2, \ldots, X_n)$. Then the error is $\varepsilon = Y - \hat Y$, and the mean square error is given by
$$\sigma_\varepsilon^2 = E\{|\varepsilon|^2\} = E\{|Y - \hat Y|^2\} = E\{|Y - \varphi(X)|^2\}. \qquad \text{(16-4)}$$
Since
$$E\{z\} = E_X\big[E_z\{z \mid X\}\big],$$
we can rewrite (16-4) as
$$\sigma_\varepsilon^2 = E\{|Y - \varphi(X)|^2\} = E_X\big[E_Y\{|Y - \varphi(X)|^2 \mid X\}\big], \qquad \text{(16-5)}$$
where the inner expectation is with respect to $Y$, and the outer one is with respect to $X$.
Thus
$$\sigma_\varepsilon^2 = E\big[E\{|Y - \varphi(X)|^2 \mid X\}\big] = \int_{-\infty}^{+\infty} E\{|Y - \varphi(X)|^2 \mid X\}\, f_X(x)\, dx. \qquad \text{(16-6)}$$
To obtain the best estimator $\varphi$, we need to minimize $\sigma_\varepsilon^2$ in (16-6) with respect to $\varphi$. In (16-6), since $f_X(x) \ge 0$ and $E\{|Y - \varphi(X)|^2 \mid X\} \ge 0$, and the variable $\varphi$ appears only in the integrand term, minimization of the mean square error $\sigma_\varepsilon^2$ in (16-6) with respect to $\varphi$ is equivalent to minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ with respect to $\varphi$.
Since $X$ is fixed at some value, $\varphi(X)$ is no longer random, and hence minimization of $E\{|Y - \varphi(X)|^2 \mid X\}$ is equivalent to
$$\frac{\partial}{\partial \varphi} E\{|Y - \varphi(X)|^2 \mid X\} = 0. \qquad \text{(16-7)}$$
This gives
$$E\{Y - \varphi(X) \mid X\} = 0$$
or
$$E\{Y \mid X\} - E\{\varphi(X) \mid X\} = 0. \qquad \text{(16-8)}$$
But
$$E\{\varphi(X) \mid X\} = \varphi(X), \qquad \text{(16-9)}$$
since when $X = x$, $\varphi(X)$ is a fixed number, $\varphi(x)$. Using (16-9)
in (16-8) we get the desired estimator to be
$$\hat Y = \varphi(X) = E\{Y \mid X\} = E\{Y \mid X_1, X_2, \ldots, X_n\}. \qquad \text{(16-10)}$$
Thus the conditional mean of $Y$ given $X_1, X_2, \ldots, X_n$ represents the best estimator for $Y$ that minimizes the mean square error.
The minimum value of the mean square error is given by
$$\sigma_{\min}^2 = E\{|Y - E(Y \mid X)|^2\} = E\big[\underbrace{E\{|Y - E(Y \mid X)|^2 \mid X\}}_{\mathrm{var}(Y \mid X)}\big] = E\{\mathrm{var}(Y \mid X)\} \ge 0. \qquad \text{(16-11)}$$
As an example, suppose $Y = X^3$ is the unknown. Then the best MMSE estimator is given by
$$\hat Y = E\{Y \mid X\} = E\{X^3 \mid X\} = X^3. \qquad \text{(16-12)}$$
Clearly if $Y = X^3$, then indeed $\hat Y = X^3$ is the best estimator for $Y$ in terms of $X$. Thus the best estimator can be nonlinear.
Next, we will consider a less trivial example.
Example: Let
$$f_{X,Y}(x, y) = \begin{cases} kxy, & 0 < x < y < 1 \\ 0, & \text{otherwise,} \end{cases}$$
where $k > 0$ is a suitable normalization constant. To determine the best estimate for $Y$ in terms of $X$, we need $f_{Y|X}(y \mid x)$:
$$f_X(x) = \int_x^1 f_{X,Y}(x, y)\,dy = \int_x^1 kxy\,dy = \frac{kxy^2}{2}\bigg|_x^1 = \frac{kx(1 - x^2)}{2}, \qquad 0 < x < 1.$$
Thus
$$f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{kxy}{kx(1 - x^2)/2} = \frac{2y}{1 - x^2}; \qquad 0 < x < y < 1. \qquad \text{(16-13)}$$
Hence the best MMSE estimator is given by
$$\hat Y = \varphi(X) = E\{Y \mid X\} = \int_x^1 y\, f_{Y|X}(y \mid x)\,dy = \int_x^1 y\,\frac{2y}{1 - x^2}\,dy = \frac{2}{1 - x^2}\int_x^1 y^2\,dy = \frac{2}{3}\,\frac{1 - x^3}{1 - x^2} = \frac{2}{3}\,\frac{1 + x + x^2}{1 + x}. \qquad \text{(16-14)}$$
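As a quick sanity check on (16-14), the following minimal sketch (assuming NumPy is available; the normalization constant works out to $k = 8$ for this density) compares a Monte Carlo estimate of $E\{Y \mid X \approx x_0\}$, obtained by rejection sampling, against the closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_xy(n, rng):
    """Rejection-sample (X, Y) from f(x, y) = 8xy on 0 < x < y < 1."""
    xs, ys = [], []
    while sum(map(len, xs)) < n:
        x, y, u = rng.random((3, 100_000))
        keep = (x < y) & (u < x * y)   # accept with probability proportional to xy (max value 1)
        xs.append(x[keep])
        ys.append(y[keep])
    return np.concatenate(xs)[:n], np.concatenate(ys)[:n]

x, y = sample_xy(200_000, rng)

# Empirical conditional mean E{Y | X ~ x0} versus the formula (16-14).
for x0 in (0.2, 0.5, 0.8):
    band = np.abs(x - x0) < 0.02
    formula = 2 * (1 + x0 + x0**2) / (3 * (1 + x0))
    print(f"x0 = {x0}: empirical = {y[band].mean():.3f}, (16-14) = {formula:.3f}")
```

Both columns agree to within Monte Carlo error.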
Once again the best estimator is nonlinear. In general the best estimator $E\{Y \mid X\}$ is difficult to evaluate, and hence next we will examine the special subclass of best linear estimators.
Best Linear Estimator
In this case the estimator $\hat Y$ is a linear function of the observations $X_1, X_2, \ldots, X_n$. Thus
$$\hat Y_l = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n = \sum_{i=1}^{n} a_i X_i, \qquad \text{(16-15)}$$
where $a_1, a_2, \ldots, a_n$ are unknown quantities to be determined. The mean square error is given by ($\varepsilon = Y - \hat Y_l$)
$$E\{|\varepsilon|^2\} = E\{|Y - \hat Y_l|^2\} = E\Big\{\Big|Y - \sum_{i=1}^{n} a_i X_i\Big|^2\Big\}, \qquad \text{(16-16)}$$
and under the MMSE criterion $a_1, a_2, \ldots, a_n$ should be chosen so that the mean square error $E\{|\varepsilon|^2\}$ is at its minimum possible value. Let $\sigma_n^2$ represent that minimum possible value. Then
$$\sigma_n^2 = \min_{a_1, a_2, \ldots, a_n} E\Big\{\Big|Y - \sum_{i=1}^{n} a_i X_i\Big|^2\Big\}. \qquad \text{(16-17)}$$
To minimize (16-16), we can equate
$$\frac{\partial}{\partial a_k} E\{|\varepsilon|^2\} = 0, \qquad k = 1, 2, \ldots, n.$$
This gives
$$\frac{\partial}{\partial a_k} E\{|\varepsilon|^2\} = E\left\{\frac{\partial |\varepsilon|^2}{\partial a_k}\right\} = 2E\left\{\varepsilon\left(\frac{\partial \varepsilon}{\partial a_k}\right)^{\!*}\right\} = 0. \qquad \text{(16-18)}$$
But
$$\frac{\partial \varepsilon}{\partial a_k} = \frac{\partial\big(Y - \sum_{i=1}^{n} a_i X_i\big)}{\partial a_k} = \frac{\partial Y}{\partial a_k} - \frac{\partial\big(\sum_{i=1}^{n} a_i X_i\big)}{\partial a_k} = -X_k. \qquad \text{(16-19)}$$
Substituting (16-19) into (16-18), we get
$$\frac{\partial E\{|\varepsilon|^2\}}{\partial a_k} = -2E\{\varepsilon X_k^*\} = 0, \qquad \text{(16-20)}$$
or the best linear estimator must satisfy
$$E\{\varepsilon X_k^*\} = 0, \qquad k = 1, 2, \ldots, n. \qquad \text{(16-21)}$$
Notice that in (16-21), $\varepsilon$ represents the estimation error $\big(Y - \sum_{i=1}^{n} a_i X_i\big)$, and $X_k$, $k = 1 \to n$, represents the data. Thus from (16-21), the error $\varepsilon$ is orthogonal to the data $X_k$, $k = 1 \to n$, for the best linear estimator. This is the orthogonality principle.
In other words, in the linear estimator (16-15), the unknown constants $a_1, a_2, \ldots, a_n$ must be selected such that the error $\varepsilon = Y - \sum_{i=1}^{n} a_i X_i$ is orthogonal to every data sample $X_1, X_2, \ldots, X_n$ for the best linear estimator that minimizes the mean square error.
Interestingly, a general form of the orthogonality principle also holds in the case of nonlinear estimators.
Nonlinear Orthogonality Rule: Let $h(X)$ represent any functional form of the data and $E\{Y \mid X\}$ the best estimator for $Y$ given $X$. With $e = Y - E\{Y \mid X\}$, we shall show that
$$E\{e\,h(X)\} = 0, \qquad \text{(16-22)}$$
implying that
$$e = Y - E\{Y \mid X\} \;\perp\; h(X).$$
This follows since
$$E\{e\,h(X)\} = E\{(Y - E[Y \mid X])\,h(X)\} = E\{Y h(X)\} - E\{E[Y \mid X]\,h(X)\} = E\{Y h(X)\} - E\{E[Y h(X) \mid X]\} = E\{Y h(X)\} - E\{Y h(X)\} = 0,$$
where the third equality uses the fact that $h(X)$ is fixed given $X$ and can therefore be moved inside the conditional expectation.
Thus in the nonlinear version of the orthogonality rule the error is
orthogonal to any functional form of the data.
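A small numerical illustration of (16-22) (a minimal sketch, assuming NumPy; the model $Y = X^2 + \text{noise}$ is a hypothetical choice for which $E\{Y \mid X\} = X^2$ exactly): the error $e$ averages to zero against several arbitrary functions of the data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500_000)
y = x**2 + rng.normal(0.0, 0.3, x.size)    # hypothetical model: E{Y | X} = X^2
e = y - x**2                               # e = Y - E{Y | X}

# E{e h(X)} ~ 0 for any function h of the data, as claimed in (16-22).
for name, h in [("X", x), ("X**3", x**3), ("sin(X)", np.sin(x)), ("exp(X)", np.exp(x))]:
    print(f"E{{e * {name}}} = {np.mean(e * h):+.5f}")
```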
The orthogonality principle in (16-20) can be used to obtain the unknowns $a_1, a_2, \ldots, a_n$ in the linear case.
For example, suppose $n = 2$, and we need to estimate $Y$ in terms of $X_1$ and $X_2$ linearly. Thus
$$\hat Y_l = a_1 X_1 + a_2 X_2.$$
From (16-20), the orthogonality rule gives
$$E\{\varepsilon X_1^*\} = E\{(Y - a_1 X_1 - a_2 X_2) X_1^*\} = 0,$$
$$E\{\varepsilon X_2^*\} = E\{(Y - a_1 X_1 - a_2 X_2) X_2^*\} = 0.$$
Thus
$$E\{|X_1|^2\}\, a_1 + E\{X_2 X_1^*\}\, a_2 = E\{Y X_1^*\},$$
$$E\{X_1 X_2^*\}\, a_1 + E\{|X_2|^2\}\, a_2 = E\{Y X_2^*\},$$
or
$$\begin{pmatrix} E\{|X_1|^2\} & E\{X_2 X_1^*\} \\ E\{X_1 X_2^*\} & E\{|X_2|^2\} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} E\{Y X_1^*\} \\ E\{Y X_2^*\} \end{pmatrix}. \qquad \text{(16-23)}$$
(16-23) can be solved to obtain $a_1$ and $a_2$ in terms of the cross-correlations.
The minimum value of the mean square error $\sigma_n^2$ in (16-17) is given by
$$\sigma_n^2 = \min_{a_1, a_2, \ldots, a_n} E\{|\varepsilon|^2\} = \min_{a_1, a_2, \ldots, a_n} E\{\varepsilon \varepsilon^*\} = \min_{a_1, a_2, \ldots, a_n} E\Big\{\varepsilon\Big(Y - \sum_{i=1}^{n} a_i X_i\Big)^{\!*}\Big\} = \min_{a_1, a_2, \ldots, a_n} E\{\varepsilon Y^*\} - \min_{a_1, a_2, \ldots, a_n} E\Big\{\varepsilon \sum_{i=1}^{n} a_i^* X_i^*\Big\}. \qquad \text{(16-24)}$$
But using (16-21), the second term in (16-24) is zero, since the error is orthogonal to the data $X_i$ when $a_1, a_2, \ldots, a_n$ are chosen to be optimum. Thus the minimum value of the mean square error is given by
$$\sigma_n^2 = E\{\varepsilon Y^*\} = E\Big\{\Big(Y - \sum_{i=1}^{n} a_i X_i\Big) Y^*\Big\} = E\{|Y|^2\} - \sum_{i=1}^{n} a_i E\{X_i Y^*\}, \qquad \text{(16-25)}$$
where $a_1, a_2, \ldots, a_n$ are the optimum values from (16-21).
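For concreteness, here is a minimal numerical sketch of the two-variable case (assuming NumPy; the data model $Y = 2X_1 - X_2 + \text{noise}$ and the sample size are hypothetical choices): it forms (16-23) from sample averages, solves for $a_1, a_2$, verifies the orthogonality condition (16-21), and checks the minimum mean square error against (16-25).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 200_000

# Hypothetical zero-mean data: X1, X2 correlated, Y = 2*X1 - X2 + independent noise.
x1 = rng.normal(size=m)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=m)
y = 2.0 * x1 - 1.0 * x2 + 0.5 * rng.normal(size=m)

# Sample moments for (16-23).
R = np.array([[np.mean(x1 * x1), np.mean(x2 * x1)],
              [np.mean(x1 * x2), np.mean(x2 * x2)]])
c = np.array([np.mean(y * x1), np.mean(y * x2)])

a = np.linalg.solve(R, c)                 # best linear coefficients a1, a2
eps = y - (a[0] * x1 + a[1] * x2)

print("a1, a2              :", a)                                     # close to (2, -1)
print("E{eps X1}, E{eps X2}:", np.mean(eps * x1), np.mean(eps * x2))  # ~0, per (16-21)
print("E{|eps|^2}          :", np.mean(eps**2))
print("(16-25)             :", np.mean(y * y) - a @ c)                # E{|Y|^2} - sum a_i E{X_i Y}
```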
Since the linear estimate in (16-15) is only a special case of the general estimator $\varphi(X)$ in (16-1), the best linear estimator that satisfies (16-20) cannot be superior to the best nonlinear estimator $E\{Y \mid X\}$. Often the best linear estimator will be inferior to the best estimator in (16-3).
This raises the following question. Are there situations in which the best estimator in (16-3) also turns out to be linear? In those situations it is enough to use (16-21) and obtain the best linear estimators, since they also represent the best global estimators. Such is the case if $Y$ and $X_1, X_2, \ldots, X_n$ are distributed as jointly Gaussian random variables. We summarize this in the next theorem and prove that result.
Theorem 2: If $X_1, X_2, \ldots, X_n$ and $Y$ are jointly Gaussian zero-mean random variables, then the best estimate for $Y$ in terms of $X_1, X_2, \ldots, X_n$ is always linear.
Proof: Let
$$\hat Y = \varphi(X_1, X_2, \ldots, X_n) = E\{Y \mid X\} \qquad \text{(16-26)}$$
represent the best (possibly nonlinear) estimate of $Y$, and
$$\hat Y_l = \sum_{i=1}^{n} a_i X_i \qquad \text{(16-27)}$$
the best linear estimate of $Y$. Then from (16-21),
$$\varepsilon \triangleq Y - \hat Y_l = Y - \sum_{i=1}^{n} a_i X_i \qquad \text{(16-28)}$$
is orthogonal to the data $X_k$, $k = 1 \to n$. Thus
$$E\{\varepsilon X_k^*\} = 0, \qquad k = 1 \to n. \qquad \text{(16-29)}$$
Also from (16-28),
$$E\{\varepsilon\} = E\{Y\} - \sum_{i=1}^{n} a_i E\{X_i\} = 0. \qquad \text{(16-30)}$$
Using (16-29)-(16-30), we get
$$E\{\varepsilon X_k^*\} = E\{\varepsilon\} E\{X_k^*\} = 0, \qquad k = 1 \to n. \qquad \text{(16-31)}$$
From (16-31), we obtain that $\varepsilon$ and $X_k$ are zero-mean uncorrelated random variables for $k = 1 \to n$. But $\varepsilon$ itself represents a Gaussian random variable, since from (16-28) it represents a linear combination of a set of jointly Gaussian random variables. Thus $\varepsilon$ and $X$ are jointly Gaussian and uncorrelated random variables. As a result, $\varepsilon$ and $X$ are independent random variables. Thus from their independence,
$$E\{\varepsilon \mid X\} = E\{\varepsilon\}. \qquad \text{(16-32)}$$
But from (16-30), $E\{\varepsilon\} = 0$, and hence from (16-32),
$$E\{\varepsilon \mid X\} = 0. \qquad \text{(16-33)}$$
Substituting (16-28) into (16-33), we get
$$E\{\varepsilon \mid X\} = E\Big\{Y - \sum_{i=1}^{n} a_i X_i \,\Big|\, X\Big\} = 0$$
or
$$E\{Y \mid X\} = E\Big\{\sum_{i=1}^{n} a_i X_i \,\Big|\, X\Big\} = \sum_{i=1}^{n} a_i X_i = \hat Y_l. \qquad \text{(16-34)}$$
From (16-26), $E\{Y \mid X\} = \varphi(X)$ represents the best possible estimator, and from (16-28), $\sum_{i=1}^{n} a_i X_i$ represents the best linear estimator. Thus the best linear estimator is also the best possible overall estimator in the Gaussian case.
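A minimal numerical illustration of Theorem 2 (assuming NumPy; the covariance values are hypothetical): for zero-mean jointly Gaussian samples, the binned conditional mean of $Y$ given $X$ lies on the best linear estimator obtained from the orthogonality condition (16-21).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical zero-mean jointly Gaussian pair with E{X^2} = E{Y^2} = 1, E{XY} = 0.7.
cov = [[1.0, 0.7],
       [0.7, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T

a = np.mean(x * y) / np.mean(x * x)   # single-coefficient solution of (16-21): E{(Y - aX) X} = 0

# Binned conditional mean E{Y | X ~ x0} versus the linear estimate a * x0.
for x0 in (-1.5, 0.0, 1.5):
    band = np.abs(x - x0) < 0.05
    print(f"x0 = {x0:+.1f}: binned E{{Y|X}} = {y[band].mean():+.3f}, a*x0 = {a * x0:+.3f}")
```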
Next we turn our attention to prediction problems using linear
estimators.
Linear Prediction
Suppose $X_1, X_2, \ldots, X_n$ are known and $X_{n+1}$ is unknown. Thus $Y = X_{n+1}$, and this represents a one-step prediction problem. If the unknown is $X_{n+k}$, then it represents a $k$-step ahead prediction problem. Returning to the one-step predictor, let $\hat X_{n+1}$ represent the best linear predictor. Then
$$\hat X_{n+1} \triangleq -\sum_{i=1}^{n} a_i X_i, \qquad \text{(16-35)}$$
where the error
$$\varepsilon_n = X_{n+1} - \hat X_{n+1} = X_{n+1} + \sum_{i=1}^{n} a_i X_i = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n + X_{n+1} = \sum_{i=1}^{n+1} a_i X_i, \qquad a_{n+1} = 1, \qquad \text{(16-36)}$$
is orthogonal to the data, i.e.,
$$E\{\varepsilon_n X_k^*\} = 0, \qquad k = 1 \to n. \qquad \text{(16-37)}$$
Using (16-36) in (16-37), we get
$$E\{\varepsilon_n X_k^*\} = \sum_{i=1}^{n+1} a_i E\{X_i X_k^*\} = 0, \qquad k = 1 \to n. \qquad \text{(16-38)}$$
Suppose $X_i$ represents samples of a wide-sense stationary stochastic process $X(t)$, so that
$$E\{X_i X_k^*\} = R(i - k) = r_{i-k} = r_{k-i}^{*}. \qquad \text{(16-39)}$$
Thus (16-38) becomes
$$E\{\varepsilon_n X_k^*\} = \sum_{i=1}^{n+1} a_i r_{i-k} = 0, \qquad a_{n+1} = 1, \quad k = 1 \to n. \qquad \text{(16-40)}$$
Expanding (16-40) for $k = 1, 2, \ldots, n$, we get the following set of linear equations:
$$\begin{aligned}
a_1 r_0 + a_2 r_1 + a_3 r_2 + \cdots + a_n r_{n-1} + r_n &= 0 \quad &&\leftarrow k = 1\\
a_1 r_1^* + a_2 r_0 + a_3 r_1 + \cdots + a_n r_{n-2} + r_{n-1} &= 0 \quad &&\leftarrow k = 2\\
&\;\;\vdots &&\\
a_1 r_{n-1}^* + a_2 r_{n-2}^* + a_3 r_{n-3}^* + \cdots + a_n r_0 + r_1 &= 0 \quad &&\leftarrow k = n.
\end{aligned} \qquad \text{(16-41)}$$
Similarly, using (16-25), the minimum mean square error is given by
$$\sigma_n^2 = E\{|\varepsilon_n|^2\} = E\{\varepsilon_n Y^*\} = E\{\varepsilon_n X_{n+1}^*\} = E\Big\{\Big(\sum_{i=1}^{n+1} a_i X_i\Big) X_{n+1}^*\Big\} = \sum_{i=1}^{n+1} a_i r_{n+1-i}^{*} = a_1 r_n^* + a_2 r_{n-1}^* + a_3 r_{n-2}^* + \cdots + a_n r_1^* + r_0. \qquad \text{(16-42)}$$
The $n$ equations in (16-41) together with (16-42) can be represented as
$$\begin{pmatrix}
r_0 & r_1 & r_2 & \cdots & r_n \\
r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\
r_2^* & r_1^* & r_0 & \cdots & r_{n-2} \\
\vdots & & & \ddots & \vdots \\
r_{n-1}^* & r_{n-2}^* & \cdots & r_0 & r_1 \\
r_n^* & r_{n-1}^* & \cdots & r_1^* & r_0
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \\ 1 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{pmatrix}. \qquad \text{(16-43)}$$
Let
$$T_n = \begin{pmatrix}
r_0 & r_1 & r_2 & \cdots & r_n \\
r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\
\vdots & & & \ddots & \vdots \\
r_n^* & r_{n-1}^* & \cdots & r_1^* & r_0
\end{pmatrix}. \qquad \text{(16-44)}$$
Notice that $T_n$ is Hermitian Toeplitz and positive definite. Using (16-44), the unknowns in (16-43) can be represented as
$$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \\ 1 \end{pmatrix} = T_n^{-1} \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{pmatrix} = \sigma_n^2 \begin{pmatrix} \text{Last} \\ \text{column} \\ \text{of} \\ T_n^{-1} \end{pmatrix}. \qquad \text{(16-45)}$$
Let
$$T_n^{-1} = \begin{pmatrix}
T_n^{11} & T_n^{12} & \cdots & T_n^{1,n+1} \\
T_n^{21} & T_n^{22} & \cdots & T_n^{2,n+1} \\
\vdots & & & \vdots \\
T_n^{n+1,1} & T_n^{n+1,2} & \cdots & T_n^{n+1,n+1}
\end{pmatrix}. \qquad \text{(16-46)}$$
Then from (16-45),
$$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ 1 \end{pmatrix} = \sigma_n^2 \begin{pmatrix} T_n^{1,n+1} \\ T_n^{2,n+1} \\ \vdots \\ T_n^{n,n+1} \\ T_n^{n+1,n+1} \end{pmatrix}. \qquad \text{(16-47)}$$
Thus
$$\sigma_n^2 = \frac{1}{T_n^{n+1,n+1}} > 0, \qquad \text{(16-48)}$$
and
$$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \frac{1}{T_n^{n+1,n+1}} \begin{pmatrix} T_n^{1,n+1} \\ T_n^{2,n+1} \\ \vdots \\ T_n^{n,n+1} \end{pmatrix}. \qquad \text{(16-49)}$$
Eq. (16-49) represents the best linear predictor coefficients, and they can be evaluated from the last column of $T_n^{-1}$ in (16-45). Using these, the best one-step ahead predictor in (16-35) takes the form
$$\hat X_{n+1} = -\frac{1}{T_n^{n+1,n+1}} \sum_{i=1}^{n} T_n^{i,n+1} X_i, \qquad \text{(16-50)}$$
and from (16-48), the minimum mean square error is given by the reciprocal of the $(n+1, n+1)$ entry of $T_n^{-1}$.
From (16-36), since the one-step linear prediction error is
$$\varepsilon_n = X_{n+1} + a_n X_n + a_{n-1} X_{n-1} + \cdots + a_1 X_1, \qquad \text{(16-51)}$$
we can represent (16-51) formally as follows:
$$X_{n+1} \;\longrightarrow\; \big[\,1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}\,\big] \;\longrightarrow\; \varepsilon_n$$
Thus, let
$$A_n(z) = 1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}; \qquad \text{(16-52)}$$
then from the above representation, we also have
$$\varepsilon_n \;\longrightarrow\; \Big[\,\frac{1}{A_n(z)}\,\Big] \;\longrightarrow\; X_{n+1}.$$
The filter
$$H(z) = \frac{1}{A_n(z)} = \frac{1}{1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}} \qquad \text{(16-53)}$$
represents an AR($n$) filter, and this shows that linear prediction leads to an autoregressive (AR) model.
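The prediction machinery in (16-40)-(16-50) is straightforward to exercise numerically. The following minimal sketch (assuming NumPy and SciPy; the AR(2) model used to generate data is a hypothetical choice) estimates the autocorrelation lags, forms $T_n$ as in (16-44), reads the predictor coefficients and $\sigma_n^2$ from $T_n^{-1}$ as in (16-47)-(16-48), and checks the determinant ratio (16-59) together with the empirical one-step prediction error.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(4)

# Hypothetical AR(2) process: X[t] = 0.75 X[t-1] - 0.5 X[t-2] + W[t], W ~ N(0, 1).
N = 200_000
w = rng.normal(size=N)
x = lfilter([1.0], [1.0, -0.75, 0.5], w)

n = 4                                                              # predictor order
r = np.array([np.mean(x[:N - k] * x[k:]) for k in range(n + 1)])   # r_0 ... r_n

Tn = toeplitz(r)                              # (n+1) x (n+1) Toeplitz matrix, as in (16-44)
inv = np.linalg.inv(Tn)
sigma2 = 1.0 / inv[-1, -1]                    # (16-48)
a = sigma2 * inv[:-1, -1]                     # (16-47): coefficients a_1 ... a_n

print("a_1 ... a_n      :", a)                # ~ (0, 0, 0.5, -0.75) for this AR(2)
print("sigma_n^2 (16-48):", sigma2)           # ~ 1, the innovation variance
print("det ratio (16-59):", np.linalg.det(Tn) / np.linalg.det(toeplitz(r[:-1])))

# One-step prediction X_hat_{n+1} = -(a_1 X_1 + ... + a_n X_n), per (16-35).
win = np.lib.stride_tricks.sliding_window_view(x, n + 1)
pred = -(win[:, :n] @ a)
print("empirical MSE    :", np.mean((win[:, n] - pred) ** 2))
```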
The polynomial $A_n(z)$ in (16-52)-(16-53) can be simplified using (16-43)-(16-44). To see this, we rewrite $A_n(z)$ as
$$A_n(z) = a_1 z^{-n} + a_2 z^{-(n-1)} + \cdots + a_{n-1} z^{-2} + a_n z^{-1} + 1
= [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1] \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ 1 \end{pmatrix}
= [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1]\; T_n^{-1} \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{pmatrix}. \qquad \text{(16-54)}$$
To simplify (16-54), we can make use of the following matrix identity:
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} I & -A^{-1}B \\ 0 & I \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & D - CA^{-1}B \end{pmatrix}. \qquad \text{(16-55)}$$
Taking determinants, we get
$$\begin{vmatrix} A & B \\ C & D \end{vmatrix} = |A|\,\big|D - CA^{-1}B\big|. \qquad \text{(16-56)}$$
In particular, if $D \equiv 0$, we get
$$CA^{-1}B = \frac{-1}{|A|}\begin{vmatrix} A & B \\ C & 0 \end{vmatrix}. \qquad \text{(16-57)}$$
Using (16-57) in (16-54), with
$$C = [z^{-n}, z^{-(n-1)}, \ldots, z^{-1}, 1], \qquad A = T_n, \qquad B = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{pmatrix},$$
we get
$$A_n(z) = \frac{-1}{|T_n|}
\begin{vmatrix}
T_n & \begin{matrix} 0 \\ \vdots \\ 0 \\ \sigma_n^2 \end{matrix} \\
\begin{matrix} z^{-n} & z^{-(n-1)} & \cdots & z^{-1} & 1 \end{matrix} & 0
\end{vmatrix}
= \frac{\sigma_n^2}{|T_n|}
\begin{vmatrix}
r_0 & r_1 & r_2 & \cdots & r_n \\
r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\
\vdots & & & & \vdots \\
r_{n-1}^* & r_{n-2}^* & \cdots & r_0 & r_1 \\
z^{-n} & z^{-(n-1)} & \cdots & z^{-1} & 1
\end{vmatrix}. \qquad \text{(16-58)}$$
Referring back to (16-43), using Cramer's rule to solve for $a_{n+1}\,(=1)$, we get
$$a_{n+1} = \sigma_n^2\, \frac{\begin{vmatrix} r_0 & \cdots & r_{n-1} \\ \vdots & & \vdots \\ r_{n-1}^* & \cdots & r_0 \end{vmatrix}}{|T_n|} = \sigma_n^2\, \frac{|T_{n-1}|}{|T_n|} = 1,$$
or
$$\sigma_n^2 = \frac{|T_n|}{|T_{n-1}|} > 0. \qquad \text{(16-59)}$$
Thus the polynomial in (16-58) reduces to
$$A_n(z) = \frac{1}{|T_{n-1}|}
\begin{vmatrix}
r_0 & r_1 & r_2 & \cdots & r_n \\
r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\
\vdots & & & & \vdots \\
r_{n-1}^* & r_{n-2}^* & \cdots & r_0 & r_1 \\
z^{-n} & z^{-(n-1)} & \cdots & z^{-1} & 1
\end{vmatrix}
= 1 + a_n z^{-1} + a_{n-1} z^{-2} + \cdots + a_1 z^{-n}. \qquad \text{(16-60)}$$
The polynomial $A_n(z)$ in (16-53) can thus be alternatively represented as in (16-60), and $H(z) = \dfrac{1}{A_n(z)} \sim \mathrm{AR}(n)$ in fact represents a stable AR filter of order $n$, whose input error signal $\varepsilon_n$ is white noise of constant spectral height equal to $|T_n|/|T_{n-1}|$ and whose output is $X_{n+1}$. It can be shown that $A_n(z)$ has all its zeros in $|z| > 1$ provided $|T_n| > 0$, thus establishing stability.
Linear Prediction Error
From (16-59), the mean square error using $n$ samples is given by
$$\sigma_n^2 = \frac{|T_n|}{|T_{n-1}|} > 0. \qquad \text{(16-61)}$$
Suppose one more sample from the past is available to evaluate $X_{n+1}$ (i.e., $X_n, X_{n-1}, \ldots, X_1, X_0$ are available). Proceeding as above, the new coefficients and the mean square error $\sigma_{n+1}^2$ can be determined. From (16-59)-(16-61),
$$\sigma_{n+1}^2 = \frac{|T_{n+1}|}{|T_n|}. \qquad \text{(16-62)}$$
Using another matrix identity, it is easy to show that
$$|T_{n+1}| = \frac{|T_n|^2}{|T_{n-1}|}\,\big(1 - |s_{n+1}|^2\big). \qquad \text{(16-63)}$$
Since $|T_k| > 0$, we must have $(1 - |s_{n+1}|^2) > 0$, or $|s_{n+1}| < 1$ for every $n$. From (16-63), we have
$$\underbrace{\frac{|T_{n+1}|}{|T_n|}}_{\sigma_{n+1}^2} = \underbrace{\frac{|T_n|}{|T_{n-1}|}}_{\sigma_n^2}\,\big(1 - |s_{n+1}|^2\big)$$
or
$$\sigma_{n+1}^2 = \sigma_n^2\,\big(1 - |s_{n+1}|^2\big) < \sigma_n^2, \qquad \text{(16-64)}$$
since $(1 - |s_{n+1}|^2) < 1$. Thus the mean square error decreases as more and more samples are used from the past in the linear predictor. In general, from (16-64), the mean square errors for the one-step predictor form a monotonic nonincreasing sequence
$$\sigma_n^2 \ge \sigma_{n+1}^2 \ge \cdots \ge \sigma_k^2 \ge \cdots \to \sigma_\infty^2 \qquad \text{(16-65)}$$
whose limiting value $\sigma_\infty^2 \ge 0$.
Clearly, $\sigma_\infty^2 \ge 0$ corresponds to the irreducible error in linear prediction using the entire set of past samples, and it is related to the power spectrum of the underlying process $X(nT)$ through the relation
$$\sigma_\infty^2 = \exp\left\{\frac{1}{2\pi}\int_{-\pi}^{+\pi} \ln S_{XX}(\omega)\, d\omega\right\} \ge 0, \qquad \text{(16-66)}$$
where $S_{XX}(\omega) \ge 0$ represents the power spectrum of $X(nT)$.
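As a numerical check of (16-61), (16-65), and (16-66) (a minimal sketch, assuming NumPy and SciPy; the MA(1) model is a hypothetical choice whose autocorrelations and spectrum are known in closed form), the determinant ratios form a nonincreasing sequence that approaches the value given by the spectral formula:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical MA(1) process X[t] = W[t] + 0.8 W[t-1] with E{W^2} = 1:
# r_0 = 1 + 0.8^2, r_1 = 0.8, r_k = 0 for k >= 2, and S_XX(w) = |1 + 0.8 e^{-jw}|^2.
b = 0.8
r = np.zeros(21)
r[0], r[1] = 1.0 + b**2, b

# sigma_n^2 = |T_n| / |T_{n-1}|, a nonincreasing sequence as in (16-65).
for n in (1, 2, 5, 10, 20):
    s2 = np.linalg.det(toeplitz(r[:n + 1])) / np.linalg.det(toeplitz(r[:n]))
    print(f"sigma_{n}^2 = {s2:.4f}")

# Irreducible error (16-66): exp{(1/2 pi) * integral of ln S_XX(w)} equals 1 here,
# the innovation variance of the MA(1) model.
w = np.linspace(-np.pi, np.pi, 200_001)
S = np.abs(1.0 + b * np.exp(-1j * w)) ** 2
print("sigma_inf^2 =", np.exp(np.mean(np.log(S))))
```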
For any finite power process, we have
$$\int_{-\pi}^{+\pi} S_{XX}(\omega)\, d\omega < \infty,$$
and since $S_{XX}(\omega) \ge 0$, we also have $\ln S_{XX}(\omega) \le S_{XX}(\omega)$. Thus
$$\int_{-\pi}^{+\pi} \ln S_{XX}(\omega)\, d\omega \le \int_{-\pi}^{+\pi} S_{XX}(\omega)\, d\omega < \infty. \qquad \text{(16-67)}$$
Moreover, if the power spectrum is strictly positive at every frequency, i.e.,
$$S_{XX}(\omega) > 0 \qquad \text{in } -\pi < \omega < \pi, \qquad \text{(16-68)}$$
then from (16-66),
$$\int_{-\pi}^{+\pi} \ln S_{XX}(\omega)\, d\omega > -\infty, \qquad \text{(16-69)}$$
and hence
$$\sigma_\infty^2 = \exp\left\{\frac{1}{2\pi}\int_{-\pi}^{+\pi} \ln S_{XX}(\omega)\, d\omega\right\} > e^{-\infty} = 0. \qquad \text{(16-70)}$$
That is, for processes that satisfy the strict positivity condition in (16-68) almost everywhere in the interval $(-\pi, \pi)$, the final minimum mean square error is strictly positive (see (16-70)). In other words, such processes are not completely predictable even using their entire set of past samples; they are inherently stochastic, since the next output contains information that is not contained in the past samples. Such processes are known as regular stochastic processes, and their power spectrum is strictly positive.
[Figure: Power spectrum $S_{XX}(\omega)$ of a regular stochastic process, strictly positive over $-\pi < \omega < \pi$.]
Conversely, if a process has a power spectrum such that $S_{XX}(\omega) = 0$ in $\omega_1 < \omega < \omega_2$ (as sketched below), then from (16-70), $\sigma_\infty^2 = 0$.
[Figure: Power spectrum $S_{XX}(\omega)$ that vanishes on a band $\omega_1 < \omega < \omega_2$ inside $(-\pi, \pi)$.]
Such processes are completely predictable from their past data samples. In particular,
$$X(nT) = \sum_k a_k \cos(\omega_k\, nT + \phi_k) \qquad \text{(16-71)}$$
is completely predictable from its past samples, since $S_{XX}(\omega)$ consists of a line spectrum.
[Figure: Line power spectrum $S_{XX}(\omega)$ with spectral lines at $\omega_1, \ldots, \omega_k$.]
$X(nT)$ in (16-71) is in that sense a deterministic stochastic process.