TIME SERIES ANALYSIS

This material is protected by copyright and is intended for students' use only. Sale and distribution are strictly forbidden. The final exam requires integrating this material with teacher explanations and textbooks.
LINEAR OPTIMAL PREDICTION
PREDICTION OF AN ARMA PROCESS

Let us consider a zero-mean ARMA process:

    y(t) = W(z) e(t),   e(t) ~ WN(0, λ²)

where W(z) = C(z)/A(z) is an asymptotically stable rational transfer function. (We will consider zero-mean processes first; then we will extend the theory to the general case.)

Assumptions

I.  y(t) = W(z) e(t), e(t) ~ WN(0, λ²), is a canonical representation of the ARMA process y(t).

II. W(z) = C(z)/A(z) has no zeros on the unit circle boundary (i.e. no zeros with absolute value equal to 1).

Observation: Assumption I is not much of a hurdle, since every ARMA process admits a canonical representation, and if y(t) is not given in its canonical representation one can easily reconstruct it. Why Assumption I is required will become clear later.
Observation: Assumption II is intrinsically needed. Without this assumption, prediction theory cannot be developed along the lines traced below.

Problem (k-step prediction)

Given the observations of the process y(t) up to time t,

    …, y(t−101), y(t−100), …, y(t−2), y(t−1), y(t),

predict the future value of the process at time t+k.

[Timeline: data observed at …, t−5, t−4, t−3, t−2, t−1, t; target to be predicted at t+k.]

Prediction = a guess on the future values of the process.
A predictor is any function of the available information which is used to guess the future value of the process. We will denote the predictor by ŷ(t+k|t):

• t denotes the time instant up to which data are collected;
• t+k is the time instant for which we make the prediction.

ŷ(t+k|t) = predictor of y(t+k) given the observations up to time t.

In general:

    ŷ(t+k|t) = f(y(t), y(t−1), y(t−2), …)

i.e. the predictor is any function of the available observations of the process y(t).

Examples:

    ŷ(t+1|t) = [y(t) + y(t−1) + y(t−2)] / 3
    ŷ(t+1|t) = [2y(t) + ½y(t−1) + ½y(t−2)] / 3
    ŷ(t+1|t) = 3y(t) − y(t−1) − y(t−2)
    …

Which one gives the best prediction performance? (A numerical comparison is sketched below.)
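As a quick illustration (not in the original slides), the following Python sketch compares the empirical mean square prediction error of the three example predictors above on data simulated from a hypothetical AR(1) process y(t) = 0.8 y(t−1) + e(t); the model, seed and sample size are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a hypothetical AR(1) process y(t) = 0.8*y(t-1) + e(t), e ~ WN(0, 1)
N = 10000
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + e[t]

# Evaluate the three candidate 1-step predictors at every t >= 2
y0, y1, y2 = y[2:-1], y[1:-2], y[:-3]   # y(t), y(t-1), y(t-2)
target = y[3:]                          # y(t+1)
predictors = {
    "average of last three": (y0 + y1 + y2) / 3,
    "weighted average":      (2 * y0 + 0.5 * y1 + 0.5 * y2) / 3,
    "extrapolating":         3 * y0 - y1 - y2,
}
for name, yhat in predictors.items():
    print(name, np.mean((target - yhat) ** 2))  # empirical MSE
```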
Prediction objective: ŷ(t+k|t) ≈ y(t+k),

i.e. ŷ(t+k|t) must be as close as possible to y(t+k). This intuitive closeness concept must be rewritten as some rigorous mathematical notion of "distance"…

Observation: the process y(t) is stochastic, i.e. y(t) = y(t, s) for all t. Hence y(t+k) = y(t+k, s) and, moreover,

    ŷ(t+k|t) = f(y(t, s), y(t−1, s), y(t−2, s), …) = ŷ(t+k|t, s)

(the predictor is itself stochastic, in that it depends on the measured realization of the stochastic process y(t)).

The notion of distance must therefore be valid for random variables.

Mean square prediction error:   E[(y(t+k) − ŷ(t+k|t))²]

ε(t+k|t) := y(t+k) − ŷ(t+k|t) is called the prediction error.

Thanks to the expectation, the mean square error returns a global quantification of the predictor's ability.
Observation (Linear Predictors): the present setup is too general, and an optimal predictor cannot be found with the currently available information. We shall limit ourselves to the class of linear predictors, that is, predictors of the following type:

    ŷ(t+k|t) = α₀y(t) + α₁y(t−1) + α₂y(t−2) + ⋯ = Σ_{i=0}^∞ αᵢ y(t−i)

where the αᵢ are real coefficients such that

    Σ_{i=0}^∞ αᵢ² < ∞

(this condition guarantees that ŷ(t+k|t) is well defined).

Note that linear predictors are important from a computational perspective:

    ŷ(t+k|t) = (α₀ + α₁z⁻¹ + α₂z⁻² + ⋯) y(t) = F(z) y(t)

ŷ(t+k|t) is the (steady-state) output of a linear filter having transfer function F(z) and fed by y(t).
Optimal Linear Prediction

Find the optimal coefficients α₀°, α₁°, α₂°, … of the linear predictor ŷ(t+k|t) = Σ_{i=0}^∞ αᵢ y(t−i) so that the mean square prediction error is minimized, i.e.

    min_{α₀,α₁,α₂,…} E[(y(t+k) − ŷ(t+k|t))²]

[Block diagram of the linear optimal prediction scheme: y(t) feeds both a k-step anticipation block z^k, producing y(t+k), and the predictor filter F(z), producing ŷ(t+k|t); the difference between the two outputs is minimized.]
In order to solve the optimal linear prediction problem, let us start from a simpler problem.

y(t) is an ARMA process: y(t) = W(z) e(t), e(t) ~ WN(0, λ²). Consider its MA(∞) representation:

    y(t) = w₀e(t) + w₁e(t−1) + w₂e(t−2) + ⋯ = Σ_{i=0}^∞ wᵢ e(t−i)

where

    W(z) = C(z)/A(z) = w₀ + w₁z⁻¹ + w₂z⁻² + ⋯

Clearly, y(t−1) = Σ_{i=0}^∞ wᵢ e(t−1−i), y(t−2) = Σ_{i=0}^∞ wᵢ e(t−2−i), …

Hence,

    ŷ(t+k|t) = α₀y(t) + α₁y(t−1) + ⋯
             = α₀ Σ_{i=0}^∞ wᵢ e(t−i) + α₁ Σ_{i=0}^∞ wᵢ e(t−1−i) + ⋯
             = β₀e(t) + β₁e(t−1) + β₂e(t−2) + ⋯

Any linear predictor based on past output observations can thus be rewritten as a (linear) predictor based on noise measurements up to time t. (A sketch of how the coefficients wᵢ can be computed numerically follows.)
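As an aside (a sketch, not part of the slides): the MA(∞) coefficients wᵢ are simply the impulse response of W(z) = C(z)/A(z), so they can be obtained by feeding a unit impulse through the filter; the model below is a hypothetical example.

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical canonical ARMA model: W(z) = C(z)/A(z)
C = [1.0, 0.5]    # C(z) = 1 + 0.5 z^-1   (coefficients of z^0, z^-1, ...)
A = [1.0, -0.9]   # A(z) = 1 - 0.9 z^-1

# The MA(infinity) coefficients w_i are the impulse response of W(z)
n = 10
impulse = np.zeros(n)
impulse[0] = 1.0
w = lfilter(C, A, impulse)
print(w)  # w_0, w_1, w_2, ... = 1.0, 1.4, 1.26, 1.134, ...
```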
Idea: instead of first computing the optimal coefficients α₀°, α₁°, α₂°, …, let us work directly with the optimal linear predictor based on noise measurements up to time t:

    ŷ(t+k|t) = β₀°e(t) + β₁°e(t−1) + β₂°e(t−2) + ⋯ = Σ_{i=0}^∞ βᵢ° e(t−i)

Again, the optimal coefficients β₀°, β₁°, β₂°, … must be found so as to minimize the mean square prediction error, i.e.

    min_{β₀,β₁,β₂,…} E[(y(t+k) − ŷ(t+k|t))²]

Observation: y(t+k) admits an MA(∞) representation too:

    y(t+k) = Σ_{i=0}^∞ wᵢ e(t+k−i)
           = w₀e(t+k) + w₁e(t+k−1) + ⋯ + w_{k−1}e(t+1) + w_k e(t) + w_{k+1}e(t−1) + w_{k+2}e(t−2) + ⋯

i.e.

    y(t+k) = Σ_{j=0}^{k−1} wⱼ e(t+k−j) + Σ_{j=k}^∞ wⱼ e(t+k−j)
           = Σ_{j=0}^{k−1} wⱼ e(t+k−j) + Σ_{i=0}^∞ w_{k+i} e(t−i)

(simply set i = j − k).
E[ y(t  k )  yˆ (t  k | t ) ] 
2
2


 k 1

 E[  w j e(t  k  j )   wk i e(t  i )    i e(t  i )  ] 
i 0
i 0
 j 0

2

 k 1
 
 E[  w j e(t  k  j )     wk i e(t  i )    i e(t  i )  

i 0
 j 0
  i 0
2

 k 1
 

 2  w j e(t  k  j )   wk i e(t  i )    i e(t  i ) ] 

i 0
 j 0
 i 0
(thanks to linearity)
2


 k 1


 E[  w j e(t  k  j )  ]  E[  wk i e(t  i )    i e(t  i )  ] 
 i 0

i 0
 j 0

2

 k 1
 

 2  E[  w j e(t  k  j )   wk i e(t  i )    i e(t  i ) ]

i 0
 j 0
 i 0
However,

 k 1
 

E[  w j e(t  k  j )   wk i e(t  i )    i e(t  i ) ]  0

i 0
 j 0
 i 0
depends on
e(t  k ), e(t  k  1),, e(t  1)
depends on
e(t ), e(t  1), e(t  2), 
All the resulting products are between uncorrelated terms
E[e(t  k  j )e(t  i)]  0 (recall that we assumed e(t ) ~ WN ( 0 , 2 ) , i.e.
noise was zero mean)
Hence,

E[(y(t+k) − ŷ(t+k|t))²] = E[( Σ_{j=0}^{k−1} wⱼ e(t+k−j) )²] + E[( Σ_{i=0}^∞ w_{k+i} e(t−i) − Σ_{i=0}^∞ βᵢ e(t−i) )²]

where the first term is always ≥ 0 and does not depend on β₀, β₁, β₂, …, while the second term is always ≥ 0 and does depend on β₀, β₁, β₂, …

When we minimize with respect to β₀, β₁, β₂, …, the first term cannot be modified, while at best the second term can be made equal to 0. Hence the optimal solution is obtained by setting

    Σ_{i=0}^∞ βᵢ e(t−i) = Σ_{i=0}^∞ w_{k+i} e(t−i)

which gives as optimal solution

    β₀° = w_k,  β₁° = w_{k+1},  β₂° = w_{k+2}, …   that is,  βᵢ° = w_{k+i},  i = 0, 1, 2, …

The optimal linear predictor based on noise measurements is:

    ŷ(t+k|t) = Σ_{i=0}^∞ w_{k+i} e(t−i)
Operatorial representation of the optimal predictor

    W(z) = C(z)/A(z) = w₀ + w₁z⁻¹ + ⋯ + w_{k−1}z^{−(k−1)} + w_k z^{−k} + w_{k+1}z^{−(k+1)} + w_{k+2}z^{−(k+2)} + ⋯

How can the coefficients wᵢ be computed? By the k-step division between numerator and denominator:

    C(z)/A(z) = E(z) + z^{−k} F(z)/A(z)

where

    E(z) = w₀ + w₁z⁻¹ + ⋯ + w_{k−1}z^{−(k−1)}
    z^{−k} F(z)/A(z) = w_k z^{−k} + w_{k+1}z^{−(k+1)} + w_{k+2}z^{−(k+2)} + ⋯

Hence,

    y(t+k) = C(z)/A(z) e(t+k) = [E(z) + z^{−k} F(z)/A(z)] e(t+k)

(A numerical sketch of the k-step division is given below.)
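A minimal sketch of the k-step division (an illustration with assumed polynomials, not from the slides): E(z) collects the first k impulse-response coefficients of C(z)/A(z), and F(z) = z^k [C(z) − E(z)A(z)].

```python
import numpy as np
from scipy.signal import lfilter

def k_step_division(C, A, k):
    """k-step division C(z)/A(z) = E(z) + z^{-k} F(z)/A(z).
    C, A: coefficient lists in powers of z^{-1}. Returns (E, F)."""
    # E(z) = first k impulse-response coefficients w_0, ..., w_{k-1} of C/A
    impulse = np.zeros(k)
    impulse[0] = 1.0
    E = lfilter(C, A, impulse)
    # z^{-k} F(z) = C(z) - E(z) A(z): the first k coefficients cancel out
    EA = np.convolve(E, A)
    r = np.zeros(max(len(C), len(EA)))
    r[:len(C)] += C
    r[:len(EA)] -= EA
    return E, r[k:]   # strip the k leading zeros to obtain F(z)

# Hypothetical example: C(z) = 1 + 0.5 z^-1, A(z) = 1 - 0.9 z^-1, k = 2
E, F = k_step_division([1.0, 0.5], [1.0, -0.9], k=2)
print(E)  # [1.  1.4]  -> E(z) = 1 + 1.4 z^-1
print(F)  # [1.26]     -> F(z) = 1.26
```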
    y(t+k) = E(z) e(t+k) + F(z)/A(z) e(t) = Σ_{j=0}^{k−1} wⱼ e(t+k−j) + Σ_{i=0}^∞ w_{k+i} e(t−i)

The first term is uncorrelated with the past values and unpredictable with the information at time t; the second term is predictable with the information at time t. The process can thus be decomposed into two parts:

• an unpredictable part, depending on the future values of the noise e(t+k), …, e(t+1), which, by definition of white noise, are uncorrelated with whatever happened up to time t;

• a predictable part, depending on the past values of the noise e(t), e(t−1), …

The optimal predictor is given by the predictable part of y(t+k):

    ŷ(t+k|t) = F(z)/A(z) e(t)

The predicted values can be obtained as the output of a suitable linear filter fed by e(t).
Observation:

The optimal predictor expression

    ŷ(t+k|t) = F(z)/A(z) e(t)

is very important from a theoretical point of view, but unfortunately it is completely useless in practice!!!

In order to actually compute the predicted value of the output variable based on the above expression for ŷ(t+k|t), the past values of the noise process e(t), e(t−1), e(t−2), … should be accessible. This is not so in practice.

[Diagram: e(t) → W(z) → y(t). The noise e(t) is our model of reality, a construction of ours, and cannot be accessed; the output y(t) is the only information on reality to which we have access.]
[Figure: prediction from output data — the record of y up to time t is available and y(t+k) = ? must be guessed; prediction from noise — the record of e up to time t is NOT ACCESSIBLE.]

In order to use the predictor in practice, we need to express the predicted value as a function of the past values of the output variable y(t), y(t−1), y(t−2), …

Idea: is it possible to reconstruct the noise e(t) from the output y(t)?
Our computation of the predictor based on noise measurements started from the observation that the process y(t) is made up of the noise e(t):

    y(t) = W(z) e(t) = C(z)/A(z) e(t)

This is always true, provided that W(z) is asymptotically stable.

Is the reverse true, i.e. e(t) = W(z)⁻¹ y(t) = A(z)/C(z) y(t)?

In other words, is e(t) the steady-state output of a digital filter having transfer function W(z)⁻¹ = A(z)/C(z)?

Yes, thanks to Assumptions I and II. We assumed that y(t) = W(z) e(t), e(t) ~ WN(0, λ²), is canonical (hence C(z) has no roots outside the unit circle) and that C(z) has no roots on the boundary of the unit circle. Hence the inverse transfer function W(z)⁻¹ = A(z)/C(z) is asymptotically stable too, and the noise e(t) can be reconstructed from the past values of the output:

    e(t) = W(z)⁻¹ y(t) = A(z)/C(z) y(t) = w̃₀ y(t) + w̃₁ y(t−1) + w̃₂ y(t−2) + ⋯

(A numerical sketch of this "whitening" operation follows.)
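A minimal numerical sketch (hypothetical model, not from the slides): simulate y(t) = C(z)/A(z) e(t) and check that filtering y through A(z)/C(z) recovers the noise.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Hypothetical canonical model: C(z) = 1 + 0.5 z^-1, A(z) = 1 - 0.9 z^-1
C = [1.0, 0.5]
A = [1.0, -0.9]

N = 500
e = rng.standard_normal(N)   # e(t) ~ WN(0, 1)
y = lfilter(C, A, e)         # y(t) = C(z)/A(z) e(t)

# Whitening filter: e(t) = A(z)/C(z) y(t); it is asymptotically stable
# because the root of C(z) is at z = -0.5, inside the unit circle.
e_hat = lfilter(A, C, y)

print(np.max(np.abs(e - e_hat)))  # ~1e-15: e(t) recovered up to rounding
```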
Going back to the problem of prediction:

    y(t+k) = E(z) e(t+k) + F(z)/A(z) e(t) = Σ_{j=0}^{k−1} wⱼ e(t+k−j) + Σ_{i=0}^∞ w_{k+i} e(t−i)

where the first term is unpredictable at time t and the second is predictable at time t, so that

    ŷ(t+k|t) = F(z)/A(z) e(t)    or, equivalently,    ŷ(t+k|t) = Σ_{i=0}^∞ w_{k+i} e(t−i)

with

    e(t) = W(z)⁻¹ y(t) = A(z)/C(z) y(t) = w̃₀ y(t) + w̃₁ y(t−1) + w̃₂ y(t−2) + ⋯

OPTIMAL LINEAR PREDICTOR FROM OUTPUT DATA

    ŷ(t+k|t) = Σ_{i=0}^∞ w_{k+i} [ w̃₀ y(t−i) + w̃₁ y(t−i−1) + ⋯ ]
             = F(z)/A(z) · W(z)⁻¹ y(t) = F(z)/A(z) · A(z)/C(z) y(t)

Hence:

    ŷ(t+k|t) = F(z)/C(z) y(t)

The prediction can be obtained as the output of the filter F(z)/C(z) fed by y(t). (A complete numerical sketch follows.)
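Putting the pieces together, here is a sketch of the full k-step predictor from output data, under the same hypothetical model used in the earlier sketches; the division step is repeated inline so the snippet is self-contained.

```python
import numpy as np
from scipy.signal import lfilter

def k_step_predictor(C, A, k, y):
    """Optimal k-step predictor yhat(t+k|t) = F(z)/C(z) y(t) for a
    canonical ARMA model y(t) = C(z)/A(z) e(t) (coefficients in z^-1)."""
    # k-step division (as sketched earlier): E(z), then F(z) = z^k (C - E*A)
    imp = np.zeros(k)
    imp[0] = 1.0
    E = lfilter(C, A, imp)
    EA = np.convolve(E, A)
    r = np.zeros(max(len(C), len(EA)))
    r[:len(C)] += C
    r[:len(EA)] -= EA
    F = r[k:]
    return lfilter(F, C, y)   # output of the filter F(z)/C(z) fed by y(t)

# Hypothetical model: C(z) = 1 + 0.5 z^-1, A(z) = 1 - 0.9 z^-1, k = 2
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
y = lfilter([1.0, 0.5], [1.0, -0.9], e)
yhat = k_step_predictor([1.0, 0.5], [1.0, -0.9], 2, y)

# yhat[t] predicts y[t+2]; empirical MSE ~ (w0^2 + w1^2) = 1 + 1.4^2 = 2.96
print(np.mean((y[2:] - yhat[:-2]) ** 2))
```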
OBSERVATION (IMPORTANT)

The correct expression of the linear predictor can be obtained by using the canonical representation only!

Example (non-canonical ARMA)

Let y(t) = W(z) e(t) = C(z)/A(z) e(t), e(t) ~ WN(0, λ²), be non-canonical, with at least one zero outside the unit circle, and compute its canonical representation:

    y(t) = W̃(z) η(t) = C̃(z)/A(z) η(t),   η(t) ~ WN(0, λ̃²)

We can compute the predictor both from e(t) and from η(t), via the two k-step divisions:

    C(z)/A(z) = E(z) + z^{−k} F(z)/A(z)
    C̃(z)/A(z) = Ẽ(z) + z^{−k} F̃(z)/A(z)
We have

    y(t+k) = E(z) e(t+k) + F(z)/A(z) e(t)
    y(t+k) = Ẽ(z) η(t+k) + F̃(z)/A(z) η(t)

so that

    ŷ(t+k|t) = F(z)/A(z) e(t)    and    ŷ(t+k|t) = F̃(z)/A(z) η(t)

However, there is just one expression for the predictor based on output data, and it is the one obtained from ŷ(t+k|t) = F̃(z)/A(z) η(t).

Indeed, writing

    e(t) = A(z)/C(z) y(t)   and hence   ŷ(t+k|t) = F(z)/A(z) · A(z)/C(z) y(t) = F(z)/C(z) y(t)

is a fatal mistake!!! As a matter of fact, A(z)/C(z) is not asymptotically stable (the zero outside the unit circle becomes an unstable pole), so that A(z)/C(z) y(t) is not even well defined!!
Hence, e(t) ≠ A(z)/C(z) y(t): the white noise e(t) cannot be reconstructed from the output y(t).

Starting from the canonical representation, instead:

    ŷ(t+k|t) = F̃(z)/A(z) · A(z)/C̃(z) y(t) = F̃(z)/C̃(z) y(t)

since

    η(t) = A(z)/C̃(z) y(t)

The expression

    ŷ(t+k|t) = F̃(z)/C̃(z) y(t)

is the sole possible one. Among the white noises appearing in the various representations of an ARMA process, only the white noise η(t) entering the canonical representation can surely be reconstructed from past output data:

    η(t) = w̃₀ y(t) + w̃₁ y(t−1) + w̃₂ y(t−2) + ⋯,

provided that there are no zeros on the unit circle (Assumption II). (A numerical illustration of why the non-canonical inversion fails is sketched below.)
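A small numerical illustration (all numbers are assumptions): take a hypothetical non-canonical MA(1) process y(t) = (1 + 2z⁻¹) e(t), whose zero z = −2 lies outside the unit circle; its canonical representation is y(t) = (1 + 0.5z⁻¹) η(t) with λ̃² = 4λ². Inverting the non-canonical C(z) diverges, while inverting the canonical C̃(z) stays bounded.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Hypothetical non-canonical MA(1): y(t) = (1 + 2 z^-1) e(t), zero at z = -2
e = rng.standard_normal(200)
y = lfilter([1.0, 2.0], [1.0], e)

# Inverting the NON-canonical representation: 1/(1 + 2 z^-1) has an
# unstable pole at z = -2, so the "reconstructed noise" blows up:
e_bad = lfilter([1.0], [1.0, 2.0], y)
print(abs(e_bad[-1]))        # astronomically large: not well defined

# Inverting the canonical C~(z) = 1 + 0.5 z^-1 (pole at z = -0.5) is stable
# and reconstructs the canonical white noise eta(t), with variance 4:
eta_hat = lfilter([1.0], [1.0, 0.5], y)
print(np.std(eta_hat))       # ~2 = sqrt(4 * var(e)), as expected
```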
Observation

Assumption II is strictly required. If y(t) = W(z) e(t) = C(z)/A(z) e(t), e(t) ~ WN(0, λ²), and C(z)/A(z) has zeros on the unit circle, then an optimal linear predictor of the type

    ŷ(t+k|t) = α₀ y(t) + α₁ y(t−1) + α₂ y(t−2) + ⋯ = Σ_{i=0}^∞ αᵢ y(t−i)

does not exist!!!
Observation

Since F(z)/C(z) is asymptotically stable and y(t) is a stationary stochastic process (S.S.P.), ŷ(t+k|t) = F(z)/C(z) y(t) is an S.S.P. too.

Thanks to stationarity, the probabilistic properties of the optimal predictor do not depend on the time t at which the prediction is made (clearly, they depend instead on the prediction horizon k).

Equivalent representations:

• ŷ(t+k|t) = F(z)/C(z) y(t)
• ŷ(t|t−k) = F(z)/C(z) y(t−k)
OPTIMAL PREDICTION ERROR

Let y(t) = W(z) e(t), e(t) ~ WN(0, λ²), be a canonical representation of the ARMA process y(t), and consider the k-step division

    C(z)/A(z) = E(z) + z^{−k} F(z)/A(z)

with

    E(z) = w₀ + w₁z⁻¹ + ⋯ + w_{k−1}z^{−(k−1)},   z^{−k} F(z)/A(z) = w_k z^{−k} + w_{k+1}z^{−(k+1)} + ⋯

so that

    y(t+k) = E(z) e(t+k) + F(z)/A(z) e(t),   ŷ(t+k|t) = F(z)/A(z) e(t)

Then,

    ε(t+k|t) = y(t+k) − ŷ(t+k|t) = E(z) e(t+k) + F(z)/A(z) e(t) − F(z)/A(z) e(t) = E(z) e(t+k)

Hence,

    ε(t+k|t) = E(z) e(t+k) = [w₀ + w₁z⁻¹ + ⋯ + w_{k−1}z^{−(k−1)}] e(t+k)
             = w₀ e(t+k) + w₁ e(t+k−1) + ⋯ + w_{k−1} e(t+1)
The k-step prediction error is always an MA(k−1) process!!

Let us quantify its size by computing its variance:

    var[ε(t+k|t)] = (w₀² + w₁² + ⋯ + w²_{k−1}) λ²

    var[ε(t+1|t)] = w₀² λ²
    var[ε(t+2|t)] = (w₀² + w₁²) λ²
    var[ε(t+3|t)] = (w₀² + w₁² + w₂²) λ²
    …

As expected, thanks to the stationarity of the process y(t) and of the predictor ŷ(t+k|t), the prediction error variance does not depend on the time t at which the prediction is made, but only on the prediction horizon k.

Observation: the prediction error variance becomes larger and larger as the prediction horizon k increases.

[Figure: var[ε(t+k|t)] plotted as a nondecreasing function of k = 1, 2, 3, 4, …]

That's not surprising: as k increases, the information on y(t+k) carried by y(t), y(t−1), … becomes smaller (note that γ_y(τ) → 0 as τ → ∞, i.e. the correlation between signals tends to 0 as the time gap increases).
Moreover,

    var[ε(t+k|t)] → (w₀² + w₁² + w₂² + ⋯) λ²   as k → ∞

Recall that the wᵢ are the coefficients of the MA(∞) representation of y(t+k). Hence,

    var[ε(t+k|t)] → var[y(t+k)]   as k → ∞

As k → ∞, the information on y(t+k) carried by y(t), y(t−1), … vanishes, so that the sole possible prediction for y(t+k) is given by its mean value:

    ŷ(t+∞|t) = E[y(t+∞)] = 0

Hence,

    var[ε(t+∞|t)] = E[(y(t+∞) − 0)²] = var[y(t+∞)] = var[y(t)]

(A numerical sketch of this behaviour is given below.)
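A quick numerical check (same hypothetical model as in the earlier sketches): var[ε(t+k|t)] = (w₀² + ⋯ + w²_{k−1})λ² grows with k and tends to var[y(t)].

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical canonical model: C(z) = 1 + 0.5 z^-1, A(z) = 1 - 0.9 z^-1, lambda^2 = 1
C, A, lam2 = [1.0, 0.5], [1.0, -0.9], 1.0

# Impulse response w_0, w_1, ... of W(z) = C(z)/A(z)
imp = np.zeros(200)
imp[0] = 1.0
w = lfilter(C, A, imp)

for k in (1, 2, 3, 5, 10, 50):
    print(k, np.sum(w[:k] ** 2) * lam2)    # variance of the k-step error

print("var y(t):", np.sum(w ** 2) * lam2)  # limit value as k -> infinity
```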
Observation

While developing the optimal prediction theory, we assumed that all the (infinitely many) past observations from time t backward,

    …, y(t−101), y(t−100), …, y(t−2), y(t−1), y(t),

were available to perform the prediction. In practice, however, the process y(t) can be measured over a finite-length window only:

    y(1), y(2), …, y(t−2), y(t−1), y(t)

Heuristic way of proceeding (always adopted in practice)

Use the optimal predictor, but instead of considering the steady-state solution, use a conventional initialization:

    ŷ(t+k|t) = F(z)/C(z) y(t),

initialized with

    ŷ(k|0) = 0, ŷ(k+1|1) = 0, ŷ(k+2|2) = 0, …,   y(0) = 0, y(−1) = 0, y(−2) = 0, …

Thanks to the asymptotic stability of F(z)/C(z), the effect of the conventional initialization rapidly vanishes, and it is negligible provided that t is large enough.
Example

    y(t) = (1 + z⁻¹ + 0.5z⁻²)/(1 + 0.9z⁻¹) e(t),   e(t) ~ WN(0, 1)

    ŷ(t+1|t) = (0.1 + 0.5z⁻¹)/(1 + z⁻¹ + 0.5z⁻²) y(t)

[Figure: simulation over 40 time steps (values roughly between −6 and 6);
dotted line = y(t);
continuous line = ŷ(t+1|t) (optimal predictor);
dashed line = ŷ(t+1|t) (predictor with conventional initialization).
The example is reproduced numerically below.]
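The example can be reproduced with a few lines (a sketch; seed, record length and window are arbitrary choices). The optimal predictor is approximated by letting the filter run over a long past before the plotted window, while the conventionally initialized predictor starts from zero initial conditions at the window's first sample.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)

C = [1.0, 1.0, 0.5]   # C(z) = 1 + z^-1 + 0.5 z^-2
A = [1.0, 0.9]        # A(z) = 1 + 0.9 z^-1
F = [0.1, 0.5]        # F(z) from the 1-step division (k = 1)

# Long record; the last 40 samples play the role of the plotted window
e = rng.standard_normal(1000)
y = lfilter(C, A, e)

yhat_opt = lfilter(F, C, y)[-40:]    # long past -> ~steady state (optimal)
yhat_init = lfilter(F, C, y[-40:])   # zero initial conditions at window start

# The difference decays after a few steps (poles of F/C have modulus ~0.71)
print(np.round(yhat_opt - yhat_init, 3))
```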
PREDICTION OF A NONZERO-MEAN ARMA PROCESS

    y(t) = C(z)/A(z) e(t),   e(t) ~ WN(μ, λ²),   E[y(t)] = W(1) · μ = ȳ

Assume that this is the canonical representation; otherwise, compute it!

Note that the theory we have developed so far does not apply directly, since it relies on the fact that the process is zero mean.

Unbiased (de-meaned) processes:

    ỹ(t) = y(t) − ȳ,   ẽ(t) = e(t) − μ,   ỹ(t) = C(z)/A(z) ẽ(t)

ỹ(t) is a zero-mean process, so the prediction theory developed above applies to it. Let us then start by computing the predictor for ỹ(t).
With the k-step division between C(z) and A(z),

    C(z)/A(z) = E(z) + z^{−k} F(z)/A(z),

the predictor for ỹ(t+k) is

    ỹ̂(t+k|t) = F(z)/C(z) ỹ(t)

Note that it is obtained by filtering the past values of the process ỹ(t) = y(t) − ȳ, which are available since:

• the past values of y(t) are available;
• ȳ is known.

Predictor for y(t+k): since y(t+k) = ỹ(t+k) + ȳ, and the mean value of the process is completely known (so that this part of y(t+k) can be trivially predicted by taking that same value),

    ŷ(t+k|t) = ỹ̂(t+k|t) + ȳ = F(z)/C(z) ỹ(t) + ȳ = F(z)/C(z) (y(t) − ȳ) + ȳ

i.e.

    ŷ(t+k|t) = F(z)/C(z) y(t) + [1 − F(1)/C(1)] ȳ

(A numerical sketch follows.)
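A minimal sketch with hypothetical numbers: for y(t) = 1/(1 − 0.5z⁻¹) e(t) with e(t) ~ WN(1, 1), we get ȳ = W(1)·μ = 2, E(z) = 1 and F(z) = 0.5 from the 1-step division, and the predictor is built on the de-meaned data.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Hypothetical model: y(t) = 1/(1 - 0.5 z^-1) e(t), e(t) ~ WN(mu = 1, 1)
C, A, mu = [1.0], [1.0, -0.5], 1.0
ybar = (sum(C) / sum(A)) * mu   # E[y(t)] = W(1) * mu = (1/0.5) * 1 = 2

e = mu + rng.standard_normal(5000)
y = lfilter(C, A, e)

# 1-step division gives E(z) = 1, F(z) = 0.5; predictor on de-meaned data:
F = [0.5]
yhat = lfilter(F, C, y - ybar) + ybar   # yhat(t+1|t) = F/C (y - ybar) + ybar

print(np.mean((y[1:] - yhat[:-1]) ** 2))  # ~ w0^2 * lambda^2 = 1
```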
PREDICTION OF AN ARMAX PROCESS

    y(t) = B(z)/A(z) u(t−d) + C(z)/A(z) e(t),   e(t) ~ WN(0, λ²)

where B(z)/A(z) u(t−d) is the deterministic part of the process and C(z)/A(z) e(t) is the stochastic part; u(t−d) is a measurable signal and d is a time delay.

We assume:

• C(z)/A(z) e(t) is a canonical representation (otherwise, compute it);
• u(t) is a signal completely known from t = −∞ to t = +∞.

Let z(t) = y(t) − B(z)/A(z) u(t−d). Then z(t) = C(z)/A(z) e(t), i.e. it is an ARMA process. With the k-step division between C(z) and A(z),

    C(z)/A(z) = E(z) + z^{−k} F(z)/A(z),

its optimal predictor is

    ẑ(t+k|t) = F(z)/C(z) z(t)
Since

    y(t+k) = B(z)/A(z) u(t+k−d) + z(t+k)

and the deterministic part of the process is completely known (hence it can be trivially predicted),

    ŷ(t+k|t) = B(z)/A(z) u(t+k−d) + ẑ(t+k|t)
             = B(z)/A(z) u(t+k−d) + F(z)/C(z) z(t)
             = B(z)/A(z) u(t+k−d) + F(z)/C(z) [y(t) − B(z)/A(z) u(t−d)]
             = B(z)/C(z) · [C(z) − z^{−k}F(z)]/A(z) u(t+k−d) + F(z)/C(z) y(t)

and finally

    ŷ(t+k|t) = B(z)E(z)/C(z) u(t+k−d) + F(z)/C(z) y(t)

(remember that C(z)/A(z) = E(z) + z^{−k} F(z)/A(z), so that C(z) − z^{−k}F(z) = E(z)A(z); note also that u(t−d) = z^{−k} u(t+k−d), and that for k > d the predictor uses future values of the input, which is why u is assumed to be known for all time). (A numerical sketch of the ARMAX predictor follows.)
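A final sketch (hypothetical polynomials, input and horizon): implement ŷ(t+k|t) = B(z)E(z)/C(z) u(t+k−d) + F(z)/C(z) y(t) and check the empirical k-step error variance against (w₀² + ⋯ + w²_{k−1})λ².

```python
import numpy as np
from scipy.signal import lfilter

def shift(x, s):
    """Return the sequence x(t - s) (s > 0: delay; s < 0: advance)."""
    out = np.zeros_like(x)
    if s >= 0:
        out[s:] = x[:len(x) - s] if s > 0 else x
    else:
        out[:s] = x[-s:]
    return out

rng = np.random.default_rng(0)

# Hypothetical ARMAX model (coefficients in powers of z^-1)
A, B, C = [1.0, -0.9], [1.0], [1.0, 0.5]
d, k, N = 1, 2, 5000

# k-step division C/A = E + z^-k F/A (same construction as before)
imp = np.zeros(k)
imp[0] = 1.0
E = lfilter(C, A, imp)
EA = np.convolve(E, A)
r = np.zeros(max(len(C), len(EA)))
r[:len(C)] += C
r[:len(EA)] -= EA
F = r[k:]

# Simulate y(t) = B/A u(t-d) + C/A e(t), with u completely known
u = np.sin(0.05 * np.arange(N))
e = rng.standard_normal(N)
y = lfilter(B, A, shift(u, d)) + lfilter(C, A, e)

# Predictor: yhat(t+k|t) = B(z)E(z)/C(z) u(t+k-d) + F(z)/C(z) y(t)
BE = np.convolve(B, E)
yhat = lfilter(BE, C, shift(u, d - k)) + lfilter(F, C, y)

# Empirical k-step error variance vs theoretical (w0^2 + w1^2) * lambda^2
print(np.mean((y[k:] - yhat[:-k]) ** 2), "vs", np.sum(E ** 2))
```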