Generalizations of the
Normalized Prediction Error
Asker M. Bazen and Cornelis H. Slump
University of Twente, Department of Electrical Engineering,
Laboratory of Signals and Systems,
P.O. box 217 - 7500 AE Enschede - The Netherlands
Phone: +31 53 489 2673
Fax: +31 53 489 1060
E-mail: a.m.bazen@el.utwente.nl
Abstract — The Normalized Prediction Error, or
NPE, can be used for the evaluation of the fit of
AR models that are estimated from signals generated
by an AR process. The NPE provides a measure not
only of the time-domain fit, but also of the frequency-domain
fit, of an estimated model. Therefore, it is a
measure that is very well suited to the comparison of
different estimates.
In this paper, generalizations of the NPE are
proposed that enable the application of the NPE
to complex valued ARMA models and processes,
to sinusoids in correlated ARMA noise and to the
periodogram. The last extension is achieved by a
method that rewrites the periodogram as an exactly equivalent MA(n − 1) model.
Keywords— signal processing, spectral estimation,
time series analysis.
I. Introduction

Many methods exist for spectral analysis of time series that originate from stochastic processes. To compare these methods, a measure is needed that evaluates the fit of an estimated model to the process that generated the data.

The Normalized Prediction Error, or NPE, provides such a measure. It can be used for comparison of different spectral estimates or different estimation methods. Furthermore, it can be used for evaluation and tuning of a certain method of interest.

Section II describes the basics of the NPE. The main part of this section is already known from the literature [7] [5] [4]. In this section, a derivation of the NPE is given and it will be shown that the NPE provides a time- and frequency-domain measure of the fit of an estimated AR model to the AR process that generated the data.

In the remaining part of this paper, three generalizations of the NPE are proposed. Section III describes an extension to the NPE that enables the application to complex valued ARMA processes and models. Section IV gives a further generalization to processes that contain complex sinusoids in correlated ARMA noise.

To compare the periodogram to ARMA estimates, a method is proposed in section V that provides the MA(n − 1) model that is exactly equivalent to the periodogram. Unlike other methods, this method is exact, does not make use of iterative matrix inversions, and can be applied to complex valued signals as well.

In section VI, the results of some simulation experiments are discussed, and in section VII, the conclusions of this paper are given. In appendix A, the Matlab software that is available with this paper is discussed.

ISBN: 90-73461-18-9

II. Normalized Prediction Error

In this section, the basics of the Normalized Prediction Error, or NPE, are discussed. The NPE is a measure that evaluates estimated AR models, both in the time domain and in the frequency domain. The basis of this method is linear prediction.

This section is organized as follows. First, in section II-A, the AR process and model are discussed. Then, in section II-B, a derivation of the Prediction Error, or PE, is given, using the theory of linear prediction. In section II-C, it will be shown that the time-domain NPE represents a frequency-domain measure as well. Finally, in section II-D, the relations that are given in the previous sections are illustrated by a numerical example.
A. Autoregressive Process and Model
An autoregressive process generates the time series
x1 , . . . , xn as the response of an all-pole filter to the
white noise series εi . The AR(p) process, which is an
autoregressive process of order p, is given by:
© STW, 1999
xi + a1 xi−1 + · · · + ap xi−p = εi
(1)
In this expression, ai are the autoregressive parameters.
It is always possible to approximate the structure
of the series by the estimation of an AR(p) model,
whether stochastic time series are generated by an
AR process or not. The AR model describes the observations xi as the response of an estimated all-pole
filter to estimated white noise residual series ε̂i . The
estimated AR(p) model is given by:
xi + â1 xi−1 + · · · + âp xi−p = ε̂i
(2)
In this model, âi are the estimated autoregressive parameters.
The AR model may for instance be estimated by the
Burg method or the Yule-Walker method, both providing an AR model that has a guaranteed positive
semi-definite autocovariance function. As an important consequence, this provides a valid power spectral
density function that contains no negative power. The
conditions that all poles λi of the transfer function are inside the unit circle, or that all reflection coefficients or partial correlations ki are inside the unit circle, provide this type of autocovariance function as well.
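To make the notation of expressions (1) and (2) concrete, a short sketch can generate data from a known AR(2) process and re-estimate its parameters. Plain least squares is used here only as a simple stand-in for the Burg and Yule-Walker estimators named above; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# True AR(2) process, eq (1): x_i + a1*x_{i-1} + a2*x_{i-2} = eps_i
a1, a2 = 0.5, -0.3                  # poles inside the unit circle (stationary)
n = 20000
eps = rng.standard_normal(n)

x = np.zeros(n)
for i in range(2, n):
    x[i] = -a1 * x[i - 1] - a2 * x[i - 2] + eps[i]

# Estimated AR(2) model, eq (2): least squares stand-in for Burg/Yule-Walker
X = np.column_stack([x[1:-1], x[:-2]])
a_hat = np.linalg.lstsq(X, -x[2:], rcond=None)[0]
print(a_hat)                         # close to [0.5, -0.3]
```

With 20,000 samples the estimate lands close to the true parameter vector, which is all that is needed for the experiments later in the paper.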
B. Linear Prediction
The estimated AR model can be used for linear prediction. Linear prediction means that the AR model is
used to predict xi as a linear combination of the previous samples xi−1 , . . . , xi−p . The calculation of x̃i ,
which is the prediction of xi , is given by the following
expression:
x̃i = −â1 xi−1 − · · · − âp xi−p
(3)
which is easily constructed using expression (2).
A measure for the time domain fit of the estimated
AR model to the process that generated the data follows naturally from linear prediction. The one step
ahead error of prediction, ηi , is given by:
ηi = x̃i − xi
(4)
and the Prediction Error, or PE, is defined as the
variance of this error:

PE = E[|ηi|²] = E[|x̃i − xi|²]   (5)
It is important to keep in mind that the PE measures the fit of an estimated AR model to new and
Fig. 1. Block-diagram that illustrates the calculation of
the Prediction Error, or PE. In this diagram, series xi
is generated by an AR process 1/A(z) from white noise
εi . The AR model 1/Â(z) is estimated from the series
and the inverse of this model, which is Â(z), is used to
find the one step ahead errors of prediction ηi .
independent data from the same process. Stating it
in other words: an AR model that provides a good description of the process that generated the data will
have a low value of PE, while a model that only provides a good description of the series x from which
the model is estimated still has a high PE.
Once an AR model is estimated and the generating AR process is known, it is possible to find an expression for the PE without applying the procedure of generating more data from the same process, predicting the next samples, calculating the one step ahead error of prediction, and determining the variance of this error, as described by expressions (1) and (3) to (5).
Instead, an expression for PE is found that is only
a function of the AR process that generated the data
and the AR model that is estimated from the data.
This method makes use of the Prediction Error Filter,
or PEF, which filters the series in order to obtain the
one step ahead error of prediction ηi .
The expression for the PEF is found by substituting
expression (3) into expression (4):
ηi = −(xi + â1 xi−1 + · · · + âp xi−p )
(6)
From this expression, it can be seen that the PEF
is a FIR filter, that is built from the parameters of the
estimated AR model. Therefore, the PE can be found
by filtering the series with the inverse of the estimated
model. This is illustrated by the block-diagram in
figure 1.
In this diagram, the AR process is denoted by A(z),
A(z) = 1 + a1 z −1 + · · · + ap z −p
(7)
where z −1 is used as a back-shift operator.
Using the notations in the diagram, PE can be calculated as the variance of the output ηi of the PEF:
PE = ση2
(8)
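As an illustrative check of expressions (6) and (8), the PE obtained by actually filtering a long series with the PEF can be compared with its theoretical value. The process and model parameters below are hypothetical, and unit input variance is assumed; the overall sign of expression (6) does not affect the variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) process 1/A(z) with a = 0.7 generates x from white noise (figure 1)
a, a_hat = 0.7, 0.5                  # a_hat: a deliberately imperfect model
n = 200000
eps = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = -a * x[i - 1] + eps[i]

# Prediction Error Filter, eq (6): the FIR filter A_hat(z) applied to x
eta = x[1:] + a_hat * x[:-1]

# PE = variance of eta, eq (8); theory uses Rxx(k) = (-a)^|k| / (1 - a^2)
r0 = 1.0 / (1.0 - a**2)
pe_theory = r0 * (1 + a_hat**2) - 2 * a_hat * a * r0
print(eta.var(), pe_theory)          # both near 1.078
```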
STW/SAFE99
which is the output variance of an ARMA process.
As will be shown in section III-A, PE can now be calculated as:

PE = âᴴ · R · â   (9)

where the estimated parameter vector â is defined as:

â = [1, â1, . . . , âp]ᵀ   (10)

âᴴ denotes the hermitian transpose of â:

âᴴ = [1, â1*, . . . , âp*]   (11)

Furthermore, the autocovariance matrix R is given by:

R = [ Rxx(0)   Rxx(−1)  · · ·  Rxx(−p) ]
    [ Rxx(1)   Rxx(0)   · · ·  ...     ]
    [ ...      ...      · · ·  ...     ]
    [ Rxx(p)   · · ·    · · ·  Rxx(0)  ]   (12)

where the autocovariance function Rxx(k) is defined as:

Rxx(k) = E[(xi − µx)* · (xi+k − µx)]   (13)

For an AR process, the autocovariance function can be calculated by expression (25) in section III-A.

In order to eliminate the dependence of this quality measure on the variance σx², PE is normalized with respect to the input variance σε² of the process. This results in the Normalized Prediction Error, or NPE, which is defined as:

NPE = PE / σε²   (14)

For an AR estimate â, NPE can be expressed as:

NPE = âᴴ · (R / σε²) · â   (15)

The NPE provides a variance independent time-domain measure of the fit of an estimated AR model to the process that generated the data, which has to be known. If the estimated model is perfect, which means that its parameters exactly equal those of the process, it will have NPE = 1. This means that this model will predict as well as is possible. Other models will result in higher NPE values, since their predictions are not the best ones that are possible.

C. Frequency-domain representation of NPE

In the previous section, the time domain fit of a model to a process, provided by the NPE, was discussed. In this section, it will be shown that the NPE provides a measure of the fit of the estimated model to the process that generated the data in the frequency domain as well.

The power spectral density of an AR(p) process is given by:

Sxx(ω) = (σε² / 2π) · 1 / |1 + a1 e^{−jω} + · · · + ap e^{−jpω}|²   (16)

In [6], a spectral distance measure JA is presented that provides a frequency domain measure of the fit of an estimated AR model to the generating AR process:

JA = (1 / 2π) ∫_{−π}^{π} |(Â(e^{jω}) − A(e^{jω})) / A(e^{jω})|² dω   (17)

which can be interpreted as the integrated relative inverse spectral error.

Using JA and the diagram of figure 1, NPE can be written as:

NPE = ‖Â(z) / A(z)‖²
    = (1 / 2π) ∫_{−π}^{π} |Â(e^{jω}) / A(e^{jω})|² dω
    = (1 / 2π) ∫_{−π}^{π} |(Â(e^{jω}) − A(e^{jω})) / A(e^{jω})|² dω + 1   (18)

and the relation between the time-domain NPE and the frequency-domain JA is given by:

JA = NPE − 1   (19)

Therefore, a low value of NPE indicates a good fit in the time domain as well as a good fit in the frequency domain.
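Relation (19) can be checked numerically for a hypothetical real AR(1) pair (process a = 0.5, model â = 0.6, unit input variance), computing the NPE from expression (15) and JA from expression (17) by direct integration:

```python
import numpy as np

a, a_hat = 0.5, 0.6

# Time-domain NPE via eq (15), using Rxx(k) = (-a)^|k| / (1 - a^2)
r0 = 1.0 / (1.0 - a**2)
R = np.array([[r0, -a * r0],
              [-a * r0, r0]])        # eq (12) with p = 1
v = np.array([1.0, a_hat])           # eq (10)
npe = v @ R @ v

# Frequency-domain JA via eq (17), by numerical integration over [-pi, pi)
w = np.linspace(-np.pi, np.pi, 400000, endpoint=False)
A = 1 + a * np.exp(-1j * w)
A_hat = 1 + a_hat * np.exp(-1j * w)
ja = np.mean(np.abs((A_hat - A) / A) ** 2)

print(npe - 1, ja)                   # eq (19): JA = NPE - 1
```

Both numbers agree to high accuracy, illustrating that the time-domain quantity is at the same time a frequency-domain distance.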
D. Example
In this section, the results of the previous sections
are illustrated by an example. The relations between
the spectral errors and their corresponding NPE values are shown for the simple AR(1) case. The process
that is used in this example is given by:
xi + a xi−1 = εi   (20)

with:

a = −|a| · e^{j2πf} = −0.9 · e^{j2π·0.3}   (21)

where f is the peak frequency, normalized to the sample frequency.

In this example, AR(1) models are chosen with a small error with respect to the process. This error might be either in |a| or in f. In figure 2, the power spectral density functions of these two models are depicted, together with the spectrum of the process. Table I shows the exact parameters of the models and the corresponding values of NPE.

Fig. 2. Example of AR(1) power spectral densities. The solid line depicts the process, the dotted line a model with an error in |a| and the dash-dot line depicts a model with an error in f. The corresponding values of NPE are given in table I.

TABLE I
Spectra and NPE of AR(1)

Sxx(ω)     Linestyle   |a|   f     NPE
Process    solid       0.9   0.3   1
Model 1    dotted      0.8   0.3   1.05
Model 2    dash-dot    0.9   0.2   2.63

From the figure, it can be seen that variations in |a| do not have much influence on the power spectral density function. The value of NPE is, for this case, not much greater than 1, as shown in the table. This agrees with the observation that was made in the spectrum. However, it can also be seen that a change in the relative peak frequency of the model has great influence, both on the spectrum and on the NPE.

III. NPE for Complex Valued ARMA Models

Fig. 3. Block-diagram of the ARMA process B(z)/A(z).

The series x1, . . . , xn might be generated by a complex valued autoregressive-moving average, or ARMA(p, q), process, given by

xi + a1 xi−1 + · · · + ap xi−p =
εi + b1 εi−1 + · · · + bq εi−q
(22)
which describes a series as the response of a filter that
contains p poles and q zeros to white noise εi .
In case an ARMA model is estimated from the data,
the definition of the NPE is not as obvious as it was in
the purely autoregressive case, since linear prediction
is only possible using a finite order AR model. However, the PEF approach will lead to a solution for the
ARMA case.
First, in section III-A, the calculation of the autocovariance function of a complex valued ARMA process
is discussed. This is used as one of the basics in the
derivation of the NPE for ARMA, as discussed in section III-B.
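Since the AR(1) example of section II-D is itself complex valued, it provides a quick check that expression (15) carries over unchanged to complex parameters. The sketch below (unit input variance assumed) reproduces the NPE values of table I:

```python
import numpy as np

def npe_ar1(a, a_hat):
    """NPE of an AR(1) model a_hat for a complex AR(1) process a, via eq (15)."""
    r0 = 1.0 / (1.0 - abs(a) ** 2)        # Rxx(0), with sigma_eps^2 = 1
    r1 = -a * r0                           # Rxx(1) = E[x_i^* x_{i+1}]
    R = np.array([[r0, np.conj(r1)],       # eq (12): Rxx(-1) = Rxx(1)^*
                  [r1, r0]])
    v = np.array([1.0, a_hat])             # eq (10)
    return (np.conj(v) @ R @ v).real       # hermitian form, real valued

a = -0.9 * np.exp(2j * np.pi * 0.3)                            # eq (21)
print(round(npe_ar1(a, a), 2))                                 # process: 1.0
print(round(npe_ar1(a, -0.8 * np.exp(2j * np.pi * 0.3)), 2))   # |a| error: 1.05
print(round(npe_ar1(a, -0.9 * np.exp(2j * np.pi * 0.2)), 2))   # f error: 2.63
```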
A. Autocovariance Function of ARMA Process
The autocovariance function Rxx (k) of an ARMA
process, and especially the variance gain of an ARMA
process, is an important feature. Unlike other derivations of this function [3] [1], the solution that is presented in this paper is simple, elegant and correct.
Furthermore, it is a solution that also applies to complex valued processes.
The calculation of the autocovariance function of
an ARMA process is illustrated by the block diagram
in figure 3. In this figure, intermediate series vi are
used to describe the ARMA process:
vi + a1 vi−1 + · · · + ap vi−p = εi
(23)
xi = vi + b1 vi−1 + · · · + bq vi−q
(24)
The variance gain of the ARMA process is given by σx²/σε².

The autocovariance function Rvv(k) of vi is calculated as the autocovariance function of an AR process, which can be derived from the matrix normal equations [5]:

Rvv(k) = { σε² · ∏_{i=1..p} 1 / (1 − |ki|²)     k = 0
         { −∑_{l=1..k} ak,l · Rvv(k − l)        k ≥ 1      (25)

where ki are the corresponding reflection coefficients of the AR part (23) and ak,l is the l-th parameter of the k-th order AR model. Both AR representations are related by the Levinson-Durbin recursion.

The autocovariance function Rxx(k) of xi can now be found by making use of the fact that xi is a linear combination of vi, . . . , vi−q. Therefore, pre-multiplying expression (24) by x*_{i−k} leads to:

x*_{i−k} xi = x*_{i−k} vi + b1 x*_{i−k} vi−1 + · · · + bq x*_{i−k} vi−q   (26)

in which

x*_{i−k} = v*_{i−k} + b1* v*_{i−k−1} + · · · + bq* v*_{i−k−q}   (27)

is substituted.

Taking the expectation of both sides of the result, collecting the terms that are autocovariance functions Rxx(k) and Rvv(k), and the crosscovariance function Rxv(k), which is defined as:

Rxv(k) = E[(vi − µv)* · (xi+k − µx)]   (28)

and using that

Rxv(k) = 0 for k < 0   (29)

since xi does not depend on future values of vi, will finally result in the autocovariance function Rxx(k) of an ARMA process:

Rxx(k) = bᴴ · Rvv(k) · b   (30)

In this expression, Rvv(k) is defined as:

Rvv(k) = [ Rvv(k)     Rvv(k−1)  · · ·  Rvv(k−q) ]
         [ Rvv(k+1)   Rvv(k)    · · ·  ...      ]
         [ ...        ...       · · ·  ...      ]
         [ Rvv(k+q)   · · ·     · · ·  Rvv(k)   ]   (31)

and

b = [1, b1, . . . , bq]ᵀ   (32)

From expression (30), the variance σx² of series xi is found to be:

σx² = Rxx(0) = bᴴ · Rvv(0) · b   (33)

This result will be used to find an expression for the NPE with ARMA processes and models in the next section.

B. Normalized Prediction Error

Although an estimated ARMA model cannot directly be used for linear prediction, it is possible to define a Prediction Error Filter for this case. A block diagram of this situation is depicted in figure 4.

The series xi is assumed to be generated by an ARMA process B(z)/A(z). Now, the Prediction Error Filter is given by the ARMA filter Â(z)/B̂(z), applied to xi. Both ARMA filters can be merged into one filter D(z)/C(z), with

D(z) = Â(z)B(z)   (34)

C(z) = A(z)B̂(z)   (35)

Now, the NPE can be found as the variance gain of this new ARMA model D(z)/C(z), using expression (33):

NPE = ση²/σε² = dᴴ · (Rvv/σε²) · d   (36)

Since each ARMA model can be written as an infinite order AR model [7], the spectral distance measure JA, as defined in expression (17), can be used to show the frequency domain equivalence of the ARMA version of the NPE as well.
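The recipe of expressions (23) to (33) can be illustrated for a hypothetical ARMA(1,1) process with unit input variance; the variance gain obtained from expression (33) is checked against the well-known closed form for the ARMA(1,1) variance:

```python
import numpy as np

# ARMA(1,1) process, eq (22): x_i + a*x_{i-1} = eps_i + b*eps_{i-1}
a, b = 0.5, 0.3

# Step 1, eq (23): AR part v_i + a*v_{i-1} = eps_i, so Rvv(k) = (-a)^|k|/(1 - a^2)
def rvv(k):
    return (-a) ** abs(k) / (1.0 - a ** 2)

# Step 2, eqs (30)-(32): Rxx(k) = b^H Rvv(k) b, with the 2x2 matrix of eq (31)
bvec = np.array([1.0, b])
def rxx(k):
    Rv = np.array([[rvv(k), rvv(k - 1)],
                   [rvv(k + 1), rvv(k)]])
    return bvec @ Rv @ bvec

# Variance gain, eq (33), versus the textbook ARMA(1,1) closed form
var_x = rxx(0)
closed = (1 + b**2 - 2 * a * b) / (1 - a**2)
print(var_x, closed)                 # identical
```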
Fig. 4. Block-diagram that illustrates the Prediction Error Filter for the case of an ARMA model and an ARMA process. The upper part of the figure depicts the ARMA process B(z)/A(z) and the Prediction Error Filter Â(z)/B̂(z). Both blocks are merged in the middle part of the figure, and split again in the lower part. Now, NPE equals the variance gain of the ARMA process D(z)/C(z).

IV. NPE for Sinusoids in Noise

In this section, a method is presented that extends the Normalized Prediction Error to processes that consist of sinusoids in additive correlated Gaussian noise. This enables the comparison of the performance of various spectral estimates for this type of processes as well.

If a signal contains sinusoids in correlated noise, it is often sufficient to estimate the frequencies and power of the sinusoidal components. Therefore, most measures only use these characteristics of the process to evaluate the spectral estimate [4].

The NPE provides a measure of fit for the entire power spectral density function, as was shown in section II-C. Therefore, not only the correct modeling of frequency peaks, but also the description of the colored noise is taken into account in this measure.

Since a pure sinusoid is a deterministic series, it has no input noise εi, or:

σε² = 0 for x deterministic   (37)

In other words: when the process is known exactly, it can be predicted perfectly. However, the application of an estimated ARMA model as PEF will always lead to a one step ahead error of prediction ηi with variance:

ση² > 0   (38)

Therefore, normalizing the PE of an estimated model to the input variance σε² will lead to:

NPE = ∞   (39)

for all non-perfectly estimated models. Of course this is not a measure that provides useful results. However, the addition of correlated noise makes the process stochastic with:

σε² ≠ 0 for x stochastic   (40)

enabling a useful definition of the NPE.

In section IV-A, the process that contains sinusoids in correlated noise is analyzed. This will lead to a derivation of the Normalized Prediction Error for this class of processes in section IV-B.

A. Analysis of the Process

The process generates the series xi as the sum of a sinusoid si and noise ni:

xi = si + ni   (41)

The Signal to Noise Ratio, or SNR, is defined as the ratio between the powers of signal and noise:

SNR = σs²/σn²   (42)

The noise ni is assumed to be the response of an ARMA process to white noise εi. The sinusoid si, which is a deterministic series, is given by:

si = σs e^{j(2πf i + φ)}   (43)

where f is the relative frequency and φ is a random phase offset in the interval [0, 2π]. The sinusoid has an autocovariance function Rss(k) that is given by:

Rss(k) = σs² e^{j2πf k}   (44)

A complex valued sinusoid can be represented by an AR(1) process As(z):

As(z) = 1 + as z^{−1}   (45)

that has a pole exactly on the unit circle:
|as| = 1   (46)

∠(−as) = 2πf   (47)

This AR process has an input δi, which is a delta pulse. The series si is generated by this AR(1) process according to:

si + as si−1 = δi   (48)

The situation is illustrated by the diagram of figure 5. In this figure, the lower left block generates the ARMA noise ni and the upper left part is the AR process that generates the sinusoid si. It can be seen that the input to this process is a delta pulse δi instead of white noise εi. The signal xi is the sum of the sinusoid si and the noise ni.

Fig. 5. Diagram of sinusoid si, correlated noise ni, sum of both xi, and estimated ARMA model used as Prediction Error Filter.

B. Derivation of the NPE

According to the definition of expression (14), the Normalized Prediction Error can be written as:

NPE = ση²/σε²   (49)

Using figure 5, the prediction error series ηi can be obtained by filtering the signal with the Prediction Error Filter, which equals the inverted estimated ARMA model. This is depicted in the rightmost part of the figure.

The diagram can be rearranged in such a way that the estimate is purely autoregressive. This causes the MA part of the estimate to be included in the process, as shown in figure 6.

Fig. 6. Rearranged diagram that makes the estimated model purely autoregressive.

As is known from the calculation of the Normalized Prediction Error in the purely stochastic case, in this situation the NPE can be calculated by:

NPE = âᴴ · (Rvv/σε²) · â   (50)

The problem of calculating NPE by this expression is easiest solved in 2 steps:
• first, all signals are normalized with respect to σε²;
• then, Rvv is calculated.

Using the signals as defined in figure 6, the autocovariance function Rww(k) of the stochastic part wi is easily calculated as the autocovariance function of an ARMA process, as given in expression (30).

The autocovariance function Ruu(k) of ui is found by taking a closer look at the AR process σs/(As(z)B̂(z)), with:

As(z)B̂(z) = (1 + as z^{−1}) · (1 + b̂1 z^{−1} + · · · + b̂q z^{−q})
           = 1 + (as + b̂1) z^{−1} + (as b̂1 + b̂2) z^{−2} + · · · + as b̂q z^{−(q+1)}   (51)

that describes the sinusoid after moving the MA part of the ARMA estimate into the process. It will be shown that this process can be replaced by an AR(1) process that is exactly equivalent for an impulse input.

Applying partial fraction expansion to this AR(q + 1) process results in a sum of (q + 1) AR(1) processes, where q is the MA order of the estimated ARMA model. Only one of the resulting AR(1) processes has its pole exactly at the unit circle. This pole turns out to have exactly the same location as the pole of the original AR process As(z).

By definition, we are only interested in the stationary solution. Since the inputs of the (q + 1) AR(1) processes are delta pulses, only the non-decaying process, which is the one that has its pole at the unit circle, will provide a non-zero stationary solution. Therefore,
figure 6 can be simplified to the diagram of figure 7, by only keeping this one AR(1) process.

Fig. 7. Simplified diagram for calculation of NPE.

As a result of the partial fraction expansion, a factor c/σs, given by:

c/σs = ∏_{i=1..q} as / (as + λi)   (52)

has to be introduced in the block diagram. In this factor, λi are the roots of the estimated MA polynomial B̂(z).

In other words, the only effect of the introduction of the MA filter into the sinusoidal process is a multiplication factor. Another way of looking at this is that this factor originates from the frequency response of the MA filter at the sinusoid frequency.

Using this result, it is possible to complete the normalization, which was the first step in calculating the NPE. Normalizing ui with respect to σε² now gives for σu²:

σu² = SNR · (|c|²/σs²) · (σw²/σε²)   (53)

and the autocovariance function Ruu(k) of ui is found to be, using expression (43):

Ruu(k) = σu² e^{j2πkf}   (54)

Finally, summing both autocovariance functions:

Rvv(k) = Ruu(k) + Rww(k)   (55)

leads to an expression for Rvv. This, and the fact that everything is normalized to:

σε² = 1   (56)

enables the use of expression (50) to calculate the NPE:

NPE = âᴴ · Rvv · â   (57)

V. Periodogram as MA(n − 1)

The periodogram is the most widely used numerical spectral estimator, since it uses the computationally efficient Fast Fourier Transform.

For the comparison of the periodogram to parametric methods by means of a quantitative parametric measure like the NPE, a method is needed to represent the periodogram by an equivalent parametric model. Since the autocovariance function of the periodogram equals zero beyond lag (n − 1), where n is the number of observations, it can be represented by a moving average (MA) model of order (n − 1).

For the calculation of the equivalent MA representation of a periodogram, only methods that yield an approximation are known from the literature. The method proposed in [9] includes iterative matrix inversions and cannot be applied to complex valued models. The method of [8] can be used with complex valued data, but it only provides an approximation. In this paper, a new method is presented that provides the exactly equivalent MA(n − 1) representation of the periodogram. This method is exact, not iterative, and can be applied to complex valued signals as well.

In section V-A, an analysis of the relevant characteristics of the periodogram will be made. It will be shown why it is possible to represent the periodogram by an exactly equivalent MA(n − 1) model. In section V-B, the method to find this representation will be presented. This method will be used in the simulation experiments that are presented in section VI.

A. Analysis of the Periodogram

For a series x = x1, . . . , xn, which is tapered by a window w, the periodogram P̂xx is calculated using the Fast Fourier Transform by:

P̂xx = (1/n) · |FFT(w · x)|²   (58)

In the rest of the text, the window w is assumed to be already included in the series x. Now, the autocovariance function that represents the periodogram is the biased estimate:

R̂xx(k) = { (1/n) ∑_{l=1..n−k} x*_l · x_{l+k}     k ≤ n − 1
          { 0                                      k ≥ n        (59)

Since this autocovariance function equals zero beyond lag n − 1, it is possible to represent the periodogram by an equivalent moving average model of
order n − 1, as will be shown below.

An MA(q) model describes time series x as the response of a FIR filter of order q to white noise ε̂:

xi = ε̂i + b̂1 ε̂i−1 + · · · + b̂q ε̂i−q   (60)

Therefore, the impulse response ĥ of the MA(q) model equals the parameter vector b̂:

ĥ = b̂   (61)

in which these vectors are defined as:

ĥ = (ĥ0, ĥ1, . . . , ĥq)   (62)

b̂ = (1, b̂1, . . . , b̂q)   (63)

The autocovariance function R̂xx(k) of the MA(q) model is given by:

R̂xx(k) = { σ̂ε² ∑_{l=k..q} b̂*_{l−k} b̂_l     k ≤ q
          { 0                                  k > q      (64)

with b̂0 = 1. The autocovariance function of an MA(q) model equals zero for lags beyond q.

The power spectral density function Ŝxx(ω) of the MA(q) model is given by:

Ŝxx(ω) = (σ̂ε² / 2π) · |1 + b̂1 e^{−jω} + · · · + b̂q e^{−jqω}|²   (65)

Because of the similar structure of the periodogram and the MA(n − 1) model, both having a finite length autocovariance function, the MA model can be used as a parametric representation of the periodogram.

B. Description of the Method

The problem in representing the periodogram by an MA(n − 1) model is to find the impulse response that exactly fits the given autocovariance function of the periodogram. Furthermore, usage of the NPE requires that the impulse response is minimum phase, which means that all zeros of the transfer function have to be inside the unit circle. If not, the NPE will not provide correct results. In this section, a method is presented to calculate the MA(n − 1) model that matches the periodogram exactly.

When comparing the periodogram to MA spectral estimates and noticing that hi = bi, the equivalence can be seen:

Pxx ∼ |F(x)|²   (66)

Sxx ∼ |F(h)|²   (67)

where F denotes the Fourier transform. Therefore, by taking:

hi = xi   (68)

an impulse response hi that matches the periodogram is found. However, in general, this impulse response will not be minimum phase.

In z-notation, the autocovariance function Rxx(z) of a transfer function model is given by:

Rxx(z) = h(z) h̃*(z)   (69)

where

Rxx(z) = Rxx(−n + 1)z^{n−1} + · · · + Rxx(−1)z + Rxx(0) + Rxx(1)z^{−1} + · · · + Rxx(n − 1)z^{−(n−1)}   (70)

and

h(z) = h0 + h1 z^{−1} + · · · + hn−1 z^{−(n−1)}   (71)

Furthermore, h̃*(z) denotes the z-transform of the complex conjugated reversed sequence h:

h̃*(z) = h0* + h1* z + · · · + h*_{n−1} z^{n−1}   (72)

The zeros of Rxx(z) are determined by the zeros of h(z) and their counterparts of h̃*(z). It can easily be seen that the zeros of h̃*(z) are the same as those of h(z), but mirrored in the unit circle.

In order to find the minimum phase solution, the zeros of Rxx(z) are redistributed over h(z) and h̃*(z), in such a way that h(z) is the polynomial that is built up from the zeros of the autocovariance function that are inside the unit circle. Now, h̃*(z) contains all zeros that are outside the unit circle. This way, the impulse response h is the minimum phase solution we are looking for.
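The zero-splitting step above can be sketched in a few lines. The following round trip is an illustration (not the Matlab routine of appendix A): it builds the autocovariance polynomial of expression (70) for a known minimum phase response and recovers that response from the zeros inside the unit circle.

```python
import numpy as np

def minphase_from_acf(r):
    """Minimum phase h from ACF coefficients r = [Rxx(-(n-1)), ..., Rxx(n-1)],
    read as the polynomial of eq (70); zeros inside the unit circle go to h(z)."""
    zeros = np.roots(r)
    h = np.poly(zeros[np.abs(zeros) < 1])      # monic polynomial, cf. eq (71)
    mid = (len(r) - 1) // 2                    # index of Rxx(0)
    return h * np.sqrt(r[mid].real / np.sum(np.abs(h) ** 2))  # match Rxx(0)

# Round trip: known minimum phase h (zero at -0.5, inside the unit circle)
h_true = np.array([1.0, 0.5])
acf = np.convolve(h_true, np.conj(h_true[::-1]))   # h(z) * h~*(z), eq (69)
print(minphase_from_acf(acf))                       # recovers [1.0, 0.5]
```

The gain is fixed by matching Rxx(0) to the squared norm of the impulse response; a remaining scalar phase ambiguity is resolved here by taking the monic convention.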
•
Two additional remarks:
The minimum phase solution by is determined by
choosing all zeros of the autocovariance function
640
that are inside the unit circle and should therefore
belong to the minimum phase solution. In effect,
this is the same as mirroring all zeros of the impulse
response that are outside the unit circle into it.
When the input of the algorithm is a finite length
autocovariance function Rxx (k) instead of the series
x, it is, in some cases, possible to represent this
autocovariance function by an MA(q) model. This
is carried out by the following procedure. First, the
zeros of Rxx (z) are determined. This may have two
different resulting sets of zeros:
– If the zeros appear in pairs that are mirrored in
the unit circlewith respect to each other, the rest
of the procedure that is described above can be followed. This will lead to the desired MA(q) model.
– However, if the zeros do not appear in mirrored
pairs, it is not possible to represent the given autocovariance function by an MA model. For example,
if:
Rxx (z) = 0.8z 1 + 1 + 0.8z −1
70
60
50
40
Power (dB)
•
Asker M. Bazen, Cornelis H. Slump
30
20
10
0
−10
−20
−0.5
0
Normalized Frequency
0.5
Fig. 8. Power spectral densities of process, periodogram
and AR estimate. See table II for details.
TABLE II
Estimated spectra and Normalized Prediction
Errors
(73)
Sxx (ω)
Process
Periodogram
AR(p)
Rxx (z) will have two complex conjugate zeros that
are both outside the unit circle. Therefore, this autocovariance function cannot be represented by an
MA(1) model.
The proposed method provides the exact MA(n−1)
representation of the periodogram. It can deal with
complex valued data and will always provide an invertible model. This representation can be used to
make an exact comparison of the periodogram to parametric spectral estimates.
VI. Simulation Results
To illustrate the methods that are presented in this
paper, a simulation experiment is carried out. In order to obtain statistical reliable results, the next procedure is repeated for 10,000 times.
• A signal x of 64 observations has been generated
by a process that consists of 2 complex sinusoids in
complex valued AR(1) noise.
• From each signal, an AR(p) model is estimated using the Burg method. The order is selected by
means of finite sample order selection criteria as described in [2].
• From each signal, a periodogram are estimated, using a Chebyshev window with sidelobes of −30 dB.
The effect of this window can be seen clearly in figure 8.
• The Normalized Prediction Error is calculated for
•
Linestyle
solid
dotted
dash-dot
NPE
1
265
3.3
both the AR(p) estimate and the periodogram in
order to enable a quantitative comparison.
The average of NPE is calculated over all 10,000
repetitions.
An example of the power spectral densities of an
estimated AR(p) model and a periodogram are given
in figure 8, together with the spectrum of the process that generated the data. The solid line is the
process, the dotted line is the periodogram and the
dash-dotted line the AR estimate.
In table II, the average Normalized Prediction Errors are given for the process and the estimates. By definition, the process has NPE = 1; the AR estimate gave an average NPE = 3.3, while the windowed periodogram resulted in NPE = 265.
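The paper's full NPE covers sinusoids in ARMA noise; to make the statement NPE = 1 for the true process concrete, here is a minimal sketch for a plain real-valued AR(1) process, taking the model's one-step prediction error on the true process normalized by the innovation variance (a common form in this literature, e.g. [1]; the helper function is hypothetical, not one of the paper's Matlab files):

```python
import numpy as np

# True AR(1) process: x[t] + a*x[t-1] = eps[t], with Var(eps) = s2
a, s2 = -0.6, 1.0
r0 = s2 / (1 - a ** 2)                 # R(0) of the true process
R = lambda k: r0 * (-a) ** abs(k)      # autocovariance of the true process

def npe_ar1(a_hat):
    """One-step prediction error of the model x[t] + a_hat*x[t-1],
    evaluated on the true process and normalized by Var(eps)."""
    pe = R(0) * (1 + a_hat ** 2) + 2 * a_hat * R(1)
    return pe / s2

npe_ar1(a)      # true parameter: equals 1 up to rounding
npe_ar1(-0.3)   # any mismatched model gives NPE > 1
```

Substituting the true parameter gives pe = R(0)(1 − a²) = s2, hence NPE = 1, matching the table entry for the process.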
It is known from the literature [1], [2] that AR(p) estimates in general provide a better fit to real-valued ARMA-class processes than the periodogram does. This can be explained by the fact that the
periodogram is a non-selective transformation of all
data, while the parametric methods discriminate between statistically significant information and noise
by means of order selection.
In this simulation experiment, it has been shown
that this is also true for processes that consist of sinusoids in complex ARMA noise. For this case too,
selection of the right model order is essential for an
accurate estimate. It will lead to a considerably lower
prediction error and so to a more accurate spectral
estimate than when the periodogram is used.
VII. Conclusions
A derivation is presented that extends the use of the Normalized Prediction Error. This provides a time- and frequency-domain measure for the fit of estimated AR models to the AR process that generated the data. In this paper, the NPE is generalized to cases where the process consists of sinusoids in additive complex-valued ARMA noise and the model is an ARMA model or a periodogram.
When comparing periodogram estimates of processes containing sinusoids in complex-valued ARMA noise to AR(p) estimates, the latter turn out to be superior. This can be explained by the application of finite-sample order selection.
IEEE/ProRISC99
References
[1] P.M.T. Broersen. The quality of models for arma processes. IEEE Trans. on Signal Process., 46(6):1749–1752,
June 1998.
[2] P.M.T. Broersen. Robust algorithms for time series models. In Proceedings of ProRISC/IEEE CSSP98, pages 75–82,
Nov. 1998.
[3] S.M. Kay. Generation of the autocorrelation function of an
arma process. IEEE Trans. Acoust., Speech, Signal Proc.,
ASSP-33(3):733–734, June 1985.
[4] S.M. Kay. Modern Spectral Estimation: Theory and Application. Prentice-Hall signal processing series. Prentice-Hall,
Englewood Cliffs, NJ, 1988.
[5] S.M. Kay and S.L. Marple. Spectrum analysis - a modern perspective. Proceedings of the IEEE, 69(11):1380–1419,
Nov. 1981.
[6] E. Parzen. Some recent advances in time series modelling.
IEEE Transactions on Automatic Control, AC-19:723–730,
1977.
[7] M.B. Priestly. Spectral Analysis and Time Series. Probability and Mathematical Statistics. Academic Press, 1981.
[8] H.E. Wensink and A.M. Bazen. On stochastic parametric
modelling of radar sea clutter for identification purposes. In
Proceedings of PSIP ’99, pages 80–85, Jan. 1999.
[9] G. Wilson. Factorization of the covariance generating function of a pure moving average process. SIAM Journal Numer. Anal, 6(1):1–7, March 1969.
Appendix
I. Matlab Files
The following Matlab files are included: NormPredErr.m, Par2cov.m, ARMApar2cov.m and Par2psd.m. These functions provide the tools to evaluate and display the spectral estimates of ARMA models and the periodogram.

A. Description NormPredErr.m

The Matlab file NormPredErr.m calculates the Normalized Prediction Error for ARMA estimates of processes that contain m complex sinusoids in ARMA noise. The function is called by:

[NPE] = NormPredErr(ARpar,MApar,ARparEst,MAparEst,Freqs,SNR)

where the following definitions of variables are used:

NPE = NPE
ARpar = [1 a1 · · · ap]^T
MApar = [1 b1 · · · bq]^T
ARparEst = [1 â1 · · · âp]^T
MAparEst = [1 b̂1 · · · b̂q]^T
Freqs = [f1 · · · fm]^T
SNR = [SNR 1 · · · SNR m]^T

B. Description ARMApar2cov.m

The file NormPredErr.m makes use of ARMApar2cov.m. This function calculates the autocovariance function Rxx (k) of an ARMA process. The function is called by:

[Cov] = ARMApar2cov(ARpar,MApar,Ncov,R0)

where the following new definitions of variables are used:

Cov = [Rxx (0) · · · Rxx (Ncov)]^T
Ncov = number of lags wanted
R0 = Rxx (0)

C. Description Par2cov.m

The Matlab file Par2cov.m calculates the autocovariance function Rxx (k) of an AR process. The function is called by:

[Cov] = Par2cov(ARpar,Ncov,R0)

where the same definitions of variables are used as above.

D. Description Par2psd.m

The Matlab file Par2psd.m calculates the power spectral density function Sxx (ω) of an ARMA process. The function is called by:

[Psd,Freqas] = Par2psd(ARpar,MApar,R0,Nfreq)

where the following new definitions of variables are used:

Psd = [Sxx (−π) · · · Sxx (π)]^T
Freqas = [−π · · · π]^T
Nfreq = number of evaluation frequencies
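The internals of ARMApar2cov.m are not listed in the paper (Kay [3] gives an exact recursion). As a rough cross-check only, the autocovariance of an ARMA process can be approximated in Python by truncating the impulse response of B(z)/A(z); the function name below mirrors the Matlab file but the implementation is an assumption, not the paper's:

```python
import numpy as np
from scipy.signal import lfilter

def armapar2cov(ar, ma, ncov, s2=1.0, nimp=2000):
    """Autocovariance R(0..ncov) of the ARMA process A(q)x = B(q)eps,
    approximated by truncating the impulse response h of B(z)/A(z):
    R(k) = s2 * sum_t h[t] * conj(h[t+k])."""
    imp = np.zeros(nimp)
    imp[0] = 1.0
    h = lfilter(ma, ar, imp)  # impulse response of B(z)/A(z)
    return np.array([s2 * np.sum(np.conj(h[: nimp - k]) * h[k:])
                     for k in range(ncov + 1)])

# Cross-check against the closed form for AR(1): R(k) = s2*(-a)^|k| / (1 - a^2)
a = -0.6
cov = armapar2cov([1.0, a], [1.0], 4)
exact = np.array([(-a) ** k / (1 - a ** 2) for k in range(5)])
```

The truncation error decays geometrically with nimp for a stable AR polynomial, so the approximation matches the closed form to machine precision here.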