Stat 882 (Winter 2012) – Peter F. Craigmile
Stationary autoregressive moving average processes
Part 1
Reading: Brockwell and Davis [1991, Chapter 3]
• LTI filtering of random variables
• LTI filtering of stationary processes
• Defining the autoregressive moving average (ARMA) process
– The moving average (MA) and autoregressive (AR) processes.
• Simulating ARMA Processes in R
• Example: The AR(1) process
• An aside: a quick review of complex numbers
• ARMA processes and the roots of their polynomials
• Checking stationarity
• Causal ARMA processes
• Converting between ARMA and MA representations
• Invertible ARMA processes
1
Motivation
• The class of autoregressive moving average (ARMA)
processes is the most famous class of time series models.
• Popularly used since the 1960s.
• This very general class of processes includes the moving
average (MA) and autoregressive (AR) processes.
• Why do we use these models?
– Much is known about the statistical properties of these
models.
– Under suitable assumptions, we can exactly represent the
ACVF at a finite collection of lags using an ARMA model
of a large enough model order.
• Before defining ARMA models, we discuss what it means to
LTI filter a collection of random variables.
2
LTI filtering of random variables
Brockwell and Davis [1991, Prop 3.1.1]
• For LTI filter coefficients {ψj} that are absolutely summable;
i.e., Σ_{j∈Z} |ψj| < ∞, let
      ψ(B) = Σ_{j∈Z} ψj B^j
(where B is the usual backshift operator).
• Now let {Xt} be any collection of random variables such that
supt E|Xt| < ∞. Then the filtered sequence
      Yt = ψ(B)Xt = Σ_{j∈Z} ψj Xt−j
converges absolutely with probability one.
(Proof via the Monotone Convergence Theorem).
• Now let {Xt} be any collection of random variables such that
supt E|Xt| < ∞ and supt E|Xt|² < ∞. Then Yt = ψ(B)Xt
converges in mean square to the same limit.
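As a numerical aside (not in Brockwell and Davis), we can see absolute summability at work by truncating a two-sided filter and watching the partial sums settle down; the coefficients ψj = 0.5^|j| and the simulated series are illustrative choices:

```r
## Sketch: truncate the two-sided filter Y_t = sum_j psi_j X_{t-j} with
## psi_j = 0.5^|j| (an illustrative, absolutely summable choice) and
## watch the partial sums converge as the truncation point m grows.
set.seed(1)
x <- rnorm(501)                 # a fixed realisation of X_1, ..., X_501
t0 <- 251                       # filter at the centre point
partial.sum <- function(m) {
  j <- -m:m
  sum(0.5^abs(j) * x[t0 - j])
}
sums <- sapply(c(5, 10, 20, 40), partial.sum)
abs(diff(sums))                 # successive changes shrink geometrically
```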
3
LTI filtering of stationary processes
(filtering preserves stationarity)
Brockwell and Davis [1991, Prop 3.1.2]
• Suppose {Xt} is a stationary process with mean µX and
ACVF γX (·).
• Let {ψj } be a set of absolutely summable coefficients.
• Then Yt = ψ(B)Xt converges absolutely with probability
one and in mean square to the same limit (follows from the
previous page).
• Show that {Yt} is a stationary process:
4
LTI filtering of stationary processes, continued
5
Remarks
• Commonly the process that we are filtering is a white noise
or IID process.
– Such a filtering defines what is called a linear process.
• Thinking of ψ(·) as a polynomial in the backshift operator
it helps to think about power series when filtering {Xt}
using ψ(·) to yield {Yt}.
(We will demonstrate with examples in these notes.)
• Filter cascades will also be important.
6
Defining the autoregressive moving average (ARMA)
process
• The process {Xt} is an ARMA(p, q) process if
1. {Xt} is stationary, and
2. for every t we can write
φ(B)Xt = θ(B)Zt,
for some WN(0, σ 2) process {Zt}, where
      φ(B) = 1 − Σ_{j=1}^p φj B^j
is the autoregressive (AR) polynomial of order p, and
      θ(B) = 1 + Σ_{j=1}^q θj B^j
is the moving average (MA) polynomial of order q.
7
ARMA processes: remarks and subclasses
• We will show later that {Xt} is a mean zero process. We
obtain a mean µ, ARMA(p, q) process {Yt} by taking an
ARMA process {Xt} and letting
Yt = µ + Xt,
for each t.
• With φ(B) = θ(B) = 1, Xt = Zt for all t.
Thus an ARMA(0, 0) process is a white noise process.
8
Subclasses of ARMA model
• When φ(B) = 1, we obtain the moving average, MA(q),
process of order q:
      Xt = Zt + Σ_{j=1}^q θj Zt−j .
• When θ(B) = 1, we obtain the autoregressive, AR(p),
process of order p.
      Xt − Σ_{j=1}^p φj Xt−j = Zt .
The AR process is attributed to George Udny Yule (1871–1951).
(See the biography by O’Connor and Robertson, and look at
Yule [1921] and Yule [1927]. The AR(1) process has also been
called the Markov process.)
• If θ(B) contains absolutely summable coefficients {θj}, then
it makes sense to define an MA(∞) process:
      Xt = Σ_{j=0}^∞ θj Zt−j .
(Similarly we can also define an AR(∞) process).
9
Simulating ARMA Processes in R
• To simulate a series, called x, of length n from a time series
model we use:
x <- arima.sim(n=, model)
• Here, model is a list of two possible items:
1. ar: a vector of AR coefficients {φj }.
2. ma: a vector of MA coefficients {θj }.
• Some examples:
## simulate an AR(1) process with phi_1=0.7
x <- arima.sim(n=100, model=list(ar=0.7))
## simulate an ARMA(1,1) process with phi_1=0.4 and theta_1=0.5
y <- arima.sim(n=100, model=list(ar=0.4, ma=0.5))
• To change the stdev. of {Zt} use the sd argument:
## simulate an AR(1) process with phi_1=0.7 and sigma^2=2
x <- arima.sim(n=100, model=list(ar=0.7), sd=sqrt(2))
10
Example: The AR(1) process
• Consider the AR(1) process {Xt} defined by
Xt = φXt−1 + Zt,
where {Zt} is a WN(0, σ 2) process.
• When φ = 1 or φ = −1 we obtain a form of random walk.
This process is not stationary.
• What about other values of φ? Is the process stationary in
these other cases?
• With φ(B) = (1 − φB) we have
φ(B)Xt = Zt.
11
An aside: a quick review of complex numbers
• Let i be the square root of negative one,
      i = √−1 .
• Then any complex number ζ can be written as
ζ = a + b i,
for real numbers a and b.
• The polar representation of ζ is
ζ = |ζ|eiArg(ζ).
• Here, the modulus of a complex number ζ, |ζ|, is
      |ζ| = √(a² + b²) ,
and the argument of ζ is
      Arg(ζ) = “arctan(b/a)”
(the quotes remind us to choose the correct quadrant).
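R handles complex arithmetic natively, so we can check these definitions directly (the number 3 + 4i is just an example):

```r
## Modulus, argument, and the polar representation in R.
zeta <- 3 + 4i
Mod(zeta)                          # sqrt(3^2 + 4^2) = 5
Arg(zeta)                          # the angle atan2(4, 3)
Mod(zeta) * exp(1i * Arg(zeta))    # recovers zeta = |zeta| e^{i Arg(zeta)}
```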
12
When is the AR(1) process stationary?
• The key idea is to see whether there exists an LTI filter ψ(B)
such that
ψ(B)φ(B)Xt = ψ(B)Zt,
with ψ(B)φ(B) = 1.
• Then we have written Xt = ψ(B)Zt, and by the LTI filtering
preserves stationarity result, {Xt} is a mean zero stationary
process.
• Forget about the backward shift operator for the moment.
• For the polynomial φ(z) = 1 − φz, with z ∈ C (a complex-valued
number), it follows that when |φ| < 1,
      1/φ(z) = 1/(1 − φz) = Σ_{j=0}^∞ φ^j z^j .
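A quick numerical check of this power series identity (the values of φ and z are illustrative; 50 terms is plenty here since |φz| < 1):

```r
## Verify 1/(1 - phi z) = sum_{j=0}^infty phi^j z^j by truncating at j = 50.
phi <- 0.7
z <- 0.5 + 0.3i                        # |phi * z| is about 0.41 < 1
series <- sum(phi^(0:50) * z^(0:50))
exact <- 1 / (1 - phi * z)
abs(series - exact)                    # truncation error is negligible
```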
13
The |φ| < 1 case
14
The |φ| < 1 case, continued
15
The |φ| > 1 case
• When |φ| > 1 we have
Xt = φXt−1 + Zt.
Dividing by φ we get
      φ⁻¹ Xt = Xt−1 + φ⁻¹ Zt ,
i.e.,
      Xt−1 = φ⁻¹ Xt − φ⁻¹ Zt .
• Iterating, we can write Xt as a linear combination of future
Zt’s. As this is also a filtering of a stationary process, we
have a stationary solution.
– BUT, Xt depends on future values of the {Zt} – not
very practical!
• If we assume that Xs and Zt are uncorrelated for each
t > s, then |φ| < 1 gives the only stationary solution to the
AR equation.
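Consistent with this, R's arima.sim refuses AR coefficients whose roots lie inside the unit circle; the following sketch relies on the documented stationarity check inside stats::arima.sim:

```r
## arima.sim rejects phi = 1.01: the root of 1 - 1.01 z lies inside the
## unit circle, so there is no causal stationary solution.
res <- try(arima.sim(n = 100, model = list(ar = 1.01)), silent = TRUE)
inherits(res, "try-error")     # TRUE: the coefficient is rejected
```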
16
ARMA processes and the roots of their polynomials
• In the next few slides we discuss a number of properties of
ARMA processes:
– Stationarity
– Causality
– Invertibility
Each property involves rewriting the ARMA equations in a
different form, and can be tested by examining the roots of
the AR or the MA polynomial.
e.g., examining the values ζ ∈ C such that φ(ζ) = 0.
• From now on we will assume that φ(·) and θ(·) do not have
common roots.
– If there are common roots, either the ARMA process can
be simplified, or there may be more than one solution
to the ARMA equations [Brockwell and Davis,
1991, p.87, Remark 1].
17
Decomposing roots
• The Fundamental Theorem of Algebra tells us that any
polynomial f (·) of degree n can always be rewritten as
      f(z) = K Π_{j=1}^n (z − ζj) ,
where ζ1, . . . , ζn are the roots of the polynomial f (·).
(The roots are real- or complex-valued.)
(The roots are real- or complex-valued).
• When we think about f (·) being, for example, the AR
polynomial φ(·), it is more convenient to decompose the
polynomial as
      φ(z) = K1 Π_{j=1}^n (1 − ζj z) .
(Since φ(0) = 1 we have K1 = 1, and the ζj here are the
reciprocals of the roots of φ(·).)
18
Checking stationarity in general
• For any ARMA(p, q) process, a stationary and unique
solution exists if and only if
      φ(z) = 1 − φ1z − · · · − φpz^p ≠ 0
for all |z| = 1.
• General strategy:
1. Find the roots of φ(·),
i.e., the values of ζ such that φ(ζ) = 0.
2. Show that the roots do not have modulus 1;
i.e., show |ζ| ≠ 1 for each root ζ.
• We can use the polyroot function in R to find the roots of
a polynomial, and Mod to calculate the modulus.
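For example, for the AR(2) polynomial φ(z) = 1 − 0.75z + 0.5z² (an illustrative choice); note that polyroot takes coefficients in increasing powers of z:

```r
## Roots of phi(z) = 1 - 0.75 z + 0.5 z^2, coefficients in increasing order.
roots <- polyroot(c(1, -0.75, 0.5))
Mod(roots)    # both moduli equal sqrt(2), away from 1, so stationary
```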
19
Causal ARMA processes
• An ARMA process is causal if there exist constants {ψj}
with Σ_{j=0}^∞ |ψj| < ∞ and
      Xt = Σ_{j=0}^∞ ψj Zt−j ;
that is, we can write {Xt} as an MA(∞) process depending
only on the current and past values of {Zt}.
• Equivalently, an ARMA process is causal if and only if
      φ(z) = 1 − φ1z − · · · − φpz^p ≠ 0,
for all |z| ≤ 1.
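Revisiting the AR(1) example with polyroot (the values of φ are illustrative): φ = 2 puts the root of φ(z) inside the unit circle, giving a stationary but non-causal process, while φ = 0.5 gives a causal one:

```r
## phi = 2: root of 1 - 2z is z = 0.5, inside the unit circle -- not causal.
Mod(polyroot(c(1, -2)))      # 0.5
## phi = 0.5: root is z = 2, outside the unit circle -- causal.
Mod(polyroot(c(1, -0.5)))    # 2
```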
20
Converting between ARMA and MA representations
• For a causal AR polynomial φ(·) and an MA polynomial
θ(·) suppose we observe the ARMA process {Xt}
φ(B)Xt = θ(B)Zt.
• The MA(∞) representation of this process is written as
Xt = ψ(B)Zt.
• We want to identify the polynomial ψ(·), which satisfies
φ(B)ψ(B) = θ(B).
• Next we change the B to some z ∈ C:
      (1 − φ1z − · · · − φpz^p)(ψ0 + ψ1z + ψ2z² + · · ·)
      = 1 + θ1z + · · · + θqz^q .
21
Converting, continued
• We match the constant terms, z terms, z 2 terms, etc. of both
sides of the equation, and obtain
1 = ψ0
θ1 = ψ1 − ψ0φ1
θ2 = ψ2 − ψ1φ1 − ψ0φ2
...
• More generally, for j = 1, 2, . . .,
      θj = ψj − Σ_{k=1}^{min{j,p}} φk ψj−k .
• Now we solve the linear system of equations for {ψj }.
• We can solve these equations recursively.
• In R we can convert from an ARMA model to the MA representation using the function
ARMAtoMA(ar, ma, lag.max).
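A sketch comparing ARMAtoMA with the recursion above, for an ARMA(1,1) with φ1 = 0.5 and θ1 = 0.4 (illustrative values):

```r
## For an ARMA(1,1) the recursion gives psi_1 = theta_1 + phi_1 and
## psi_j = phi_1 psi_{j-1} for j >= 2; compare with ARMAtoMA.
phi <- 0.5; theta <- 0.4
psi.R <- ARMAtoMA(ar = phi, ma = theta, lag.max = 5)
psi <- numeric(5)
psi[1] <- theta + phi
for (j in 2:5) psi[j] <- phi * psi[j - 1]
all.equal(psi.R, psi)    # TRUE: the two agree
```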
22
Invertible ARMA processes
• An ARMA process is invertible if there exist constants {πj}
with Σ_{j=0}^∞ |πj| < ∞ and
      Zt = Σ_{j=0}^∞ πj Xt−j ;
i.e., we can write {Zt} in terms of only the current and past
values of {Xt} (an AR(∞) representation).
• The process is invertible if and only if
      θ(z) = 1 + θ1z + · · · + θqz^q ≠ 0,
for all |z| ≤ 1.
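The same polyroot strategy checks invertibility, now applied to the MA polynomial (the θ1 values are illustrative):

```r
## theta_1 = 0.5: root of 1 + 0.5 z is z = -2, modulus 2 > 1 -- invertible.
Mod(polyroot(c(1, 0.5)))    # 2
## theta_1 = 2: root is z = -0.5, modulus <= 1 -- not invertible.
Mod(polyroot(c(1, 2)))      # 0.5
```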
23
References
P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods (Second Edition).
Springer-Verlag, New York, NY, 1991.
J. J. O’Connor and E. F. Robertson. A biography of George Udny Yule. The MacTutor
History of Mathematics archive. URL http://www-gap.dcs.st-and.ac.uk/
~history/Biographies/Yule.html.
G. U. Yule. On the time-correlation problem, with special reference to the variate-difference
correlation method. Journal of the Royal Statistical Society, 84:497–537, 1921.
URL http://www.jstor.org/stable/2341101.
G. U. Yule. On a method of investigating periodicities in disturbed series, with special
reference to Wolfer’s sunspot numbers. Philosophical Transactions of the Royal Society
of London. Series A, 226:267–298, 1927. URL http://www.jstor.org/stable/91170.
24