The Spectral Density

The periodogram shows which frequencies are strong in a finite sample $x_1, x_2, \dots, x_n$.
What about a population model for $x_t$, such as a stationary time series?
The spectral density plays the corresponding role.
The Discrete Fourier Transform
Given data $x_1, x_2, \dots, x_n$, the discrete Fourier transform is
$$d(\omega_j) = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} x_t e^{-2\pi i \omega_j t}, \qquad j = 0, 1, \dots, n-1.$$
The frequencies $\omega_j = j/n$ are the Fourier or fundamental frequencies.
Like any other Fourier transform, it has an inverse transform:
$$x_t = \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} d(\omega_j)\, e^{2\pi i \omega_j t}, \qquad t = 1, 2, \dots, n.$$
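As a minimal sketch (my own, not from the slides), the same quantities can be computed with R's fft(), which sums over t = 0, ..., n-1; that differs from the t = 1, ..., n convention above only by a unit-modulus phase factor, so $|d(\omega_j)|$ is unchanged and the inverse transform still recovers the data exactly:

set.seed(1)
x = rnorm(128)                                  # any data series will do
n = length(x)
d = fft(x) / sqrt(n)                            # d(omega_j), j = 0, 1, ..., n-1 (up to a phase factor)
omega = (0:(n - 1)) / n                         # Fourier frequencies omega_j = j/n
x.back = Re(fft(d, inverse = TRUE) / sqrt(n))   # inverse transform
all.equal(x.back, x)                            # TRUE: x is recovered exactly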
The Periodogram
The periodogram is $I(\omega_j) = |d(\omega_j)|^2$, $j = 0, 1, \dots, n-1$.
The scaled periodogram we used earlier is $P(\omega_j) = (4/n)\, I(\omega_j)$.
In terms of sample autocovariances: $I(0) = n\bar{x}^2$, and for $j \ne 0$,
$$I(\omega_j) = \sum_{h=-(n-1)}^{n-1} \hat\gamma(h)\, e^{-2\pi i \omega_j h} = \hat\gamma(0) + 2 \sum_{h=1}^{n-1} \hat\gamma(h) \cos(2\pi \omega_j h).$$
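A quick numerical check (my own sketch, not from the slides) that the two forms agree at a nonzero Fourier frequency; R's acf() with type = "covariance" computes $\hat\gamma(h)$ with divisor n and the sample mean removed:

set.seed(2)
x = rnorm(64)
n = length(x)
I = Mod(fft(x) / sqrt(n))^2                          # periodogram I(omega_j)
omega = (0:(n - 1)) / n
gam = acf(x, lag.max = n - 1, type = "covariance", plot = FALSE)$acf[, 1, 1]
j = 5                                                # any j != 0 will do
I.acf = gam[1] + 2 * sum(gam[-1] * cos(2 * pi * omega[j + 1] * (1:(n - 1))))
c(I[j + 1], I.acf)                                   # the two values agree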
We know that if $n$ is large, $\hat\gamma(h)$ is approximately unbiased for $\gamma(h)$: $E[\hat\gamma(h)] \approx \gamma(h)$.
So for the periodogram we should have
$$E\big[I(\omega_j)\big] \approx \sum_{h=-(n-1)}^{n-1} \gamma(h)\, e^{-2\pi i \omega_j h} \approx \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i \omega_j h} = \gamma(0) + 2 \sum_{h=1}^{\infty} \gamma(h) \cos(2\pi \omega_j h) = f(\omega_j),$$
the spectral density function.
So heuristically, the spectral density is the approximate expected value of the periodogram.
That is, a graph of the spectral density shows what we should
expect the periodogram to look like.
Conversely, the periodogram can be used as an estimator of
the spectral density.
But the periodogram values have only two degrees of freedom each, which makes the periodogram a poor (high-variance) estimator.
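To illustrate (a simulation of my own, using white noise with $\sigma_w^2 = 1$ so that the spectral density is the constant 1): the periodogram values scatter widely around the flat spectral density, and the scatter does not shrink as n grows:

set.seed(3)
n = 512
w = rnorm(n)                                   # white noise, sigma_w^2 = 1
I = Mod(fft(w) / sqrt(n))^2
omega = (0:(n - 1)) / n
keep = omega > 0 & omega < 1/2                 # Fourier frequencies in (0, 1/2)
plot(omega[keep], I[keep], type = "h", xlab = "frequency", ylab = "periodogram")
abline(h = 1, col = "blue")                    # the spectral density f(omega) = 1
c(mean(I[keep]), var(I[keep]))                 # mean near 1, but variance also near 1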
The Mathematics of the Spectrum
Every weakly stationary time series $x_t$ with autocovariances $\gamma(h)$ has a non-decreasing spectrum or spectral distribution function $F(\omega)$ for which
$$\gamma(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h}\, dF(\omega) = 2 \int_{0}^{1/2} \cos(2\pi\omega h)\, dF(\omega).$$
If $F(\omega)$ is absolutely continuous, it has a spectral density function $f(\omega) = F'(\omega)$, and
$$\gamma(h) = \int_{-1/2}^{1/2} e^{2\pi i \omega h} f(\omega)\, d\omega = 2 \int_{0}^{1/2} \cos(2\pi\omega h) f(\omega)\, d\omega.$$
Under various conditions on $\gamma(h)$, such as
$$\sum_{h=-\infty}^{\infty} |\gamma(h)| < \infty,$$
$f(\omega)$ can be written as the sum
$$f(\omega) = \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i \omega h} = \gamma(0) + 2 \sum_{h=1}^{\infty} \gamma(h) \cos(2\pi\omega h).$$
Properties of the spectral density:
$$f(\omega) \ge 0; \qquad f(-\omega) = f(\omega); \qquad \int_{-1/2}^{1/2} f(\omega)\, d\omega = \gamma(0) < \infty.$$
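As a sketch (the AR(1) example with $\phi = 0.5$ and $\sigma_w^2 = 1$ is my own choice, not from the slides), the sum formula can be checked against the closed form $f(\omega) = \sigma_w^2 / |1 - \phi e^{-2\pi i \omega}|^2$, and the integral property recovered numerically:

phi = 0.5
gamma0 = 1 / (1 - phi^2)                          # gamma(0) for AR(1) with sigma_w^2 = 1
h = 1:200                                         # truncate the infinite sum
gam = gamma0 * phi^h                              # gamma(h) = gamma(0) * phi^h
omega = seq(from = 0, to = 1/2, length = 201)
f.sum = gamma0 + 2 * sapply(omega, function(w) sum(gam * cos(2 * pi * w * h)))
f.exact = 1 / Mod(1 - phi * exp(-2i * pi * omega))^2
max(abs(f.sum - f.exact))                         # essentially zero
mean(f.exact)                                     # grid average = integral over (-1/2, 1/2) by symmetry; about gamma(0) = 4/3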
Examples
White noise: $f_w(\omega) = \sigma_w^2$, a constant.
- The analogy with the spectrum of white light explains (finally!) the name "white" noise.
Moving Average: if $v_t = (w_{t-1} + w_t + w_{t+1})/3$, then
$$\gamma_v(h) = \begin{cases} \dfrac{\sigma_w^2}{9}\,(3 - |h|) & |h| \le 2 \\ 0 & |h| > 2, \end{cases}$$
so
$$f_v(\omega) = \frac{\sigma_w^2}{9}\big[3 + 4\cos(2\pi\omega) + 2\cos(4\pi\omega)\big].$$
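A small check (the simulation and the choice $\sigma_w^2 = 1$ are my own, not from the slides): evaluating $f_v(\omega)$ from this formula and overlaying it on the raw periodogram of a simulated series:

set.seed(4)
n = 512
w = rnorm(n + 2)
v = filter(w, rep(1/3, 3), sides = 2)[2:(n + 1)]       # v_t = (w_{t-1} + w_t + w_{t+1}) / 3
I = Mod(fft(v - mean(v)) / sqrt(n))^2                  # periodogram of the simulated series
omega.j = (1:(n/2 - 1)) / n                            # Fourier frequencies in (0, 1/2)
omega = seq(from = 0, to = 1/2, length = 201)
fv = (1/9) * (3 + 4 * cos(2 * pi * omega) + 2 * cos(4 * pi * omega))
plot(omega.j, I[2:(n/2)], type = "h", xlab = "frequency", ylab = "")
lines(omega, fv, col = "blue", lwd = 2)                # the spectral density f_v(omega)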
General ARMA: using results about linear filtering, we shall show that the spectral density of the ARMA$(p, q)$ process
$$\phi(B)\, x_t = \theta(B)\, w_t$$
is
$$f_{\mathrm{ARMA}}(\omega) = \sigma_w^2\, \frac{\big|\theta(e^{-2\pi i \omega})\big|^2}{\big|\phi(e^{-2\pi i \omega})\big|^2}.$$
Note that this gives the polynomials $\phi(\cdot)$ and $\theta(\cdot)$ a concrete meaning: they determine how strongly the series tends to fluctuate at each frequency.
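As a sketch, a small helper that evaluates this formula directly (the name f.arma and its interface are mine, not a library function); applied to the AR(2) model used later, it matches the spectral density computed on the following slides:

f.arma = function(omega, ar = numeric(0), ma = numeric(0), sigma2 = 1) {
  # evaluate 1 + sum(coef_k z^k) at z = e^{-2 pi i omega}
  evalpoly = function(coef, w) 1 + sum(coef * exp(-2i * pi * w * seq_along(coef)))
  phi = sapply(omega, function(w) evalpoly(-ar, w))    # phi(z) = 1 - ar_1 z - ... - ar_p z^p
  theta = sapply(omega, function(w) evalpoly(ma, w))   # theta(z) = 1 + ma_1 z + ... + ma_q z^q
  sigma2 * Mod(theta)^2 / Mod(phi)^2
}
omega = seq(from = 0, to = 1/2, length = 201)
plot(omega, f.arma(omega, ar = c(1.5, -0.95)), type = "l", log = "y")   # the AR(2) used below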
Recall the inverse (or dual) process, defined by reversing the roles of $\phi(\cdot)$ and $\theta(\cdot)$:
$$\theta(B)\, x_t^{\text{(inverse)}} = \phi(B)\, w_t.$$
Its spectral density is
$$f_{\mathrm{ARMA}}^{\text{(inverse)}}(\omega) = \sigma_w^2\, \frac{\big|\phi(e^{-2\pi i \omega})\big|^2}{\big|\theta(e^{-2\pi i \omega})\big|^2} = \frac{\sigma_w^4}{f_{\mathrm{ARMA}}(\omega)},$$
which explains (finally!) the name "inverse" process.
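A quick numerical check (the ARMA(1,1) with $\phi = 0.5$, $\theta = 0.4$, $\sigma_w^2 = 1$ is my own example): the product of the two spectral densities is the constant $\sigma_w^4$:

phi = 0.5; theta = 0.4; sigma2 = 1
omega = seq(from = 0.01, to = 0.49, length = 99)
z = exp(-2i * pi * omega)
f.fwd = sigma2 * Mod(1 + theta * z)^2 / Mod(1 - phi * z)^2     # f_ARMA(omega)
f.inv = sigma2 * Mod(1 - phi * z)^2 / Mod(1 + theta * z)^2     # spectral density of the inverse process
max(abs(f.fwd * f.inv - sigma2^2))                             # zero: f.inv = sigma_w^4 / f.fwd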
Recall the AR(2) model:
par(mfrow = c(2, 1))
plot(ts(arima.sim(list(ar = c(1.5,-.95)), n = 144)))
Here
$$x_t = 1.5 x_{t-1} - 0.95 x_{t-2} + w_t,$$
so
$$\phi(B) = 1 - 1.5B + 0.95B^2,$$
and (taking $\sigma_w^2 = 1$)
$$f_x(\omega) = \frac{1}{\big|1 - 1.5 e^{-2\pi i \omega} + 0.95 e^{-4\pi i \omega}\big|^2}.$$
We can compute this explicitly:
omega = seq(from = 0, to = 1/2, length = 101)          # grid of frequencies in [0, 1/2]
a = -2 * pi * 1i * omega                               # exponent -2*pi*i*omega
fx = 1 / Mod(1 - 1.5 * exp(a) + 0.95 * exp(2 * a))^2   # f_x(omega) = 1 / |phi(e^{-2 pi i omega})|^2
plot(omega, fx, type = "l", log = "y")
omega[which.max(fx)]                                   # frequency at which f_x peaks
abline(v = omega[which.max(fx)], col = "blue")
Why is $f_x(\omega)$ large for $\omega \approx 0.11$?
- Find the zeros of $\phi(z)$: p = polyroot(c(1, -1.5, .95)) shows the zeros to be $0.7894737 \pm 0.6552579i$;
- These are only just outside the unit circle: Mod(p) shows that they are both at a radius of 1.025978;
- The closest points on the unit circle are found by scaling: z = p / Mod(p) shows that they are $0.7694838 \pm 0.6386664i$;
- We expect $\phi(z)$ to be small there, and hence $1/|\phi(z)|^2$ to be large.
- But $z = e^{-2\pi i \omega}$, where $\omega = \log(z)/(-2\pi i)$: log(z) / (-2 * pi * 1i) shows $\omega = 0.1102568$ (up to sign).
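The individual commands above, collected into one runnable chunk:

p = polyroot(c(1, -1.5, .95))      # zeros of phi(z) = 1 - 1.5 z + 0.95 z^2
p                                  # 0.7894737 +/- 0.6552579i
Mod(p)                             # both 1.025978: just outside the unit circle
z = p / Mod(p)                     # closest points on the unit circle
z                                  # 0.7694838 +/- 0.6386664i
log(z) / (-2 * pi * 1i)            # omega = 0.1102568 (up to sign)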