
ITS8

Time series, spring 2014
Slides for ITS, sections 8.1, 8.4, 8.5 (see also SSPSE 8.6.1)
Exercises: ITS 8.1, 8.7, 8.9, 8.12
State space models
Let $\mathbf{Y}_t = (Y_{t,1}, \dots, Y_{t,w})'$ and $\mathbf{X}_t = (X_{t,1}, \dots, X_{t,v})'$ be random
vectors, let $\mathbf{W}_t \sim \mathrm{WN}(\mathbf{0}, R)$ and $\mathbf{V}_t \sim \mathrm{WN}(\mathbf{0}, Q)$, and let $G$ be a
$w \times v$ matrix and $F$ be a $v \times v$ matrix. A state space model
satisfies the equations
$$\mathbf{Y}_t = G\mathbf{X}_t + \mathbf{W}_t \qquad \text{the observation equation}$$
$$\mathbf{X}_{t+1} = F\mathbf{X}_t + \mathbf{V}_t \qquad \text{the state equation,}$$
for $t = 0, \pm 1, \pm 2, \dots$ The state equation is stable if $F$ has all its
eigenvalues inside the unit circle. Then
$$\mathbf{X}_t = \sum_{j=0}^{\infty} F^j \mathbf{V}_{t-j-1}$$
and $\mathbf{Y}_t = \mathbf{W}_t + \sum_{j=0}^{\infty} G F^j \mathbf{V}_{t-j-1}$.
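For concreteness, here is a minimal NumPy sketch that simulates such a model; the matrices $F$, $G$, $Q$, $R$ are made-up example values (not from the slides), and the assertion checks the eigenvalue stability condition above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example system: state dimension v = 2, observation dimension w = 1
F = np.array([[0.8, 0.1],
              [0.0, 0.5]])     # eigenvalues 0.8 and 0.5, both inside the unit circle
G = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)            # Var(V_t)
R = np.array([[0.5]])          # Var(W_t)

# Stability condition from the slides
assert np.all(np.abs(np.linalg.eigvals(F)) < 1)

def simulate(n, F, G, Q, R, rng):
    """Simulate Y_t = G X_t + W_t, X_{t+1} = F X_t + V_t for t = 0, ..., n-1."""
    v, w = F.shape[0], G.shape[0]
    X = np.zeros((n, v))
    Y = np.zeros((n, w))
    x = np.zeros(v)
    for t in range(n):
        X[t] = x
        Y[t] = G @ x + rng.multivariate_normal(np.zeros(w), R)
        x = F @ x + rng.multivariate_normal(np.zeros(v), Q)
    return X, Y

X, Y = simulate(200, F, G, Q, R, rng)
```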
Kalman recursion
Estimation of $\mathbf{X}_t$ from
• $\mathbf{Y}_0, \dots, \mathbf{Y}_{t-1}$ is the prediction problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_t$ is the filtering problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_n$ for $n > t$ is the smoothing problem
The best linear predictor $\hat{\mathbf{X}}_t = P_{t-1}\mathbf{X}_t$ of $\mathbf{X}_t$ is given by the
recursion
$$\hat{\mathbf{X}}_{t+1} = F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}(\mathbf{Y}_t - G\hat{\mathbf{X}}_t)$$
where
• $\Omega_t = E[(\mathbf{X}_t - \hat{\mathbf{X}}_t)(\mathbf{X}_t - \hat{\mathbf{X}}_t)']$
• $\Delta_t = G\Omega_t G' + R$ and $\Delta_t^{-1}$ is a (generalized) inverse of $\Delta_t$
• $\Theta_t = F\Omega_t G'$
• The initial values $\hat{\mathbf{X}}_1$ and $\Omega_1$ are obtained by direct
computation (or by cheating and setting them equal to $\mathbf{0}$ and $I$,
respectively)
• $\Omega_t$, $\Delta_t$, and $\Theta_t$ converge to limiting values, which may be used
to simplify the recursions for large $t$
The best linear predictor $\hat{\mathbf{X}}_t = P_{t-1}\mathbf{X}_t$ of $\mathbf{X}_t$ for $t > 1$ is given
by the recursions
$$\hat{\mathbf{X}}_{t+1} = F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}(\mathbf{Y}_t - G\hat{\mathbf{X}}_t)$$
$$\Omega_{t+1} = F\Omega_t F' + Q - \Theta_t \Delta_t^{-1}\Theta_t'$$
Pf: Innovations are defined recursively by $\mathbf{I}_0 = \mathbf{Y}_0$ and
$$\mathbf{I}_t = \mathbf{Y}_t - P_{t-1}\mathbf{Y}_t = G(\mathbf{X}_t - \hat{\mathbf{X}}_t) + \mathbf{W}_t$$
Since
$$P_t(\cdot) = P_{t-1}(\cdot) + P(\cdot \mid \mathbf{I}_t)$$
it follows that
$$\hat{\mathbf{X}}_{t+1} = P_{t-1}\mathbf{X}_{t+1} + P(\mathbf{X}_{t+1} \mid \mathbf{I}_t)$$
$$= P_{t-1}(F\mathbf{X}_t + \mathbf{V}_t) + E[\mathbf{X}_{t+1}\mathbf{I}_t']\, E[\mathbf{I}_t\mathbf{I}_t']^{-1}\mathbf{I}_t$$
$$= F\hat{\mathbf{X}}_t + \Theta_t\Delta_t^{-1}\mathbf{I}_t = F\hat{\mathbf{X}}_t + \Theta_t\Delta_t^{-1}(\mathbf{Y}_t - G\hat{\mathbf{X}}_t)$$
Further
$$\Omega_{t+1} = E[(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})']$$
$$= E[\mathbf{X}_{t+1}\mathbf{X}_{t+1}'] - E[\hat{\mathbf{X}}_{t+1}\hat{\mathbf{X}}_{t+1}']$$
$$= F E[\mathbf{X}_t\mathbf{X}_t']F' + Q - F E[\hat{\mathbf{X}}_t\hat{\mathbf{X}}_t']F' - \Theta_t\Delta_t^{-1}\Theta_t'$$
$$= F\Omega_t F' + Q - \Theta_t\Delta_t^{-1}\Theta_t'$$
• The $h$-step predictor is $\hat{\mathbf{X}}_{t+h} = F^{h-1}\hat{\mathbf{X}}_{t+1}$ (see the sketch below)
• Similar computations solve the filtering and smoothing problems
• Similar computations apply if $G$ and $F$ depend on $t$
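A minimal NumPy sketch of the prediction recursion above, assuming the made-up $F$, $G$, $Q$, $R$ and simulated $Y$ from the earlier sketch; it uses the "cheating" initial values $\hat{\mathbf{X}}_1 = \mathbf{0}$, $\Omega_1 = I$ and a pseudo-inverse for $\Delta_t^{-1}$.

```python
import numpy as np

def kalman_predict(Y, F, G, Q, R):
    """One-step prediction recursion: Xhat[k] predicts the state at step k from
    observations Y[0..k-1], and Omega[k] is the corresponding error covariance
    (0-based indexing; the slides' Xhat_1, Omega_1 correspond to Xhat[0], Omega[0])."""
    n, w = Y.shape
    v = F.shape[0]
    Xhat = np.zeros((n + 1, v))
    Omega = np.zeros((n + 1, v, v))
    Omega[0] = np.eye(v)                         # "cheating" initial values: Xhat_1 = 0, Omega_1 = I
    for t in range(n):
        Delta = G @ Omega[t] @ G.T + R           # Delta_t = G Omega_t G' + R
        Theta = F @ Omega[t] @ G.T               # Theta_t = F Omega_t G'
        gain = Theta @ np.linalg.pinv(Delta)     # Theta_t Delta_t^{-1} (generalized inverse)
        innov = Y[t] - G @ Xhat[t]               # I_t = Y_t - G Xhat_t
        Xhat[t + 1] = F @ Xhat[t] + gain @ innov
        Omega[t + 1] = F @ Omega[t] @ F.T + Q - gain @ Theta.T
    return Xhat, Omega

def h_step_predictor(xhat_next, F, h):
    """h-step predictor Xhat_{t+h} = F^(h-1) Xhat_{t+1}."""
    return np.linalg.matrix_power(F, h - 1) @ xhat_next

# Example (using F, G, Q, R, Y from the simulation sketch above):
# Xhat, Omega = kalman_predict(Y, F, G, Q, R)
# x_3_ahead = h_step_predictor(Xhat[-1], F, h=3)
```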
Estimation
Let πœƒπœƒ be a vector which contains all the parameters of the state
space model (𝐺𝐺, 𝐹𝐹, parameters of π‘Šπ‘Š and 𝑉𝑉, … ). The conditional
likelihood given 𝒀𝒀0 is
𝑛𝑛
𝐿𝐿 πœƒπœƒ; 𝒀𝒀1 , … , 𝒀𝒀𝑛𝑛 = οΏ½ 𝑓𝑓( 𝒀𝒀𝑑𝑑 |𝒀𝒀1 , … , 𝒀𝒀𝒕𝒕−𝟏𝟏 )
𝑑𝑑=1
and if all variables are jointly normal, then
𝒇𝒇(𝒀𝒀𝑑𝑑 𝒀𝒀1 , … , 𝒀𝒀𝒕𝒕−1 = 2πœ‹πœ‹
𝑀𝑀
−2
det Δ𝑑𝑑
1
−2
1
2
exp(− 𝑰𝑰′𝑑𝑑 Δ−1 𝑰𝑰𝑑𝑑 ),
and estimates may be found by numerical maximization of the
log conditional likelihood function.
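As an illustration, a sketch of the Gaussian log conditional likelihood computed from the innovations $\mathbf{I}_t$ and covariances $\Delta_t$ of the Kalman recursion, with numerical maximization via SciPy; the parameterization $\theta = (\log q, \log r)$ for isotropic $Q$ and $R$ is a made-up example, not part of the slides.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_cond_loglik(Y, F, G, Q, R):
    """Sum over t of log f(Y_t | past) = -w/2 log(2 pi) - 1/2 log det Delta_t
    - 1/2 I_t' Delta_t^{-1} I_t, with I_t and Delta_t from the Kalman recursion."""
    n, w = Y.shape
    v = F.shape[0]
    xhat, omega = np.zeros(v), np.eye(v)   # "cheating" initial values again
    ll = 0.0
    for t in range(n):
        Delta = G @ omega @ G.T + R
        Theta = F @ omega @ G.T
        innov = Y[t] - G @ xhat
        ll -= 0.5 * (w * np.log(2 * np.pi)
                     + np.log(np.linalg.det(Delta))
                     + innov @ np.linalg.solve(Delta, innov))
        gain = Theta @ np.linalg.pinv(Delta)
        xhat = F @ xhat + gain @ innov
        omega = F @ omega @ F.T + Q - gain @ Theta.T
    return ll

def neg_loglik(theta, Y, F, G):
    # Hypothetical parameterization: theta = (log q, log r), Q = q I, R = r I
    q, r = np.exp(theta)
    return -gaussian_cond_loglik(Y, F, G, q * np.eye(F.shape[0]), r * np.eye(G.shape[0]))

# res = minimize(neg_loglik, x0=np.zeros(2), args=(Y, F, G), method="Nelder-Mead")
# q_hat, r_hat = np.exp(res.x)
```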
• Sometimes it is useful to continuously downweight distant
observations to adapt to changes in the situation, i.e. when the
model changes with time. This can be done recursively (see the sketch below).
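The slides do not spell out the recursion, but one common choice (shown here purely as an illustration, not as the course's specific method) is exponential forgetting with a factor $0 < \lambda < 1$, so the weight of an observation decays geometrically with its age.

```python
def forgetting_update(estimate, new_value, lam=0.95):
    """Recursive update with forgetting factor lam: an observation that is
    k steps old effectively carries weight proportional to lam**k, so the
    estimate adapts when the model changes over time."""
    return lam * estimate + (1.0 - lam) * new_value
```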