
Time Series Analysis: State Space Models & Kalman Recursion

Time series, spring 2014
Slides for ITS, sections 8.1, 8.4, 8.5 (see also SSPSE 8.6.1)
Exercises: ITS 8.1, 8.7, 8.9, 8.12
State space models
Let $\mathbf{Y}_t = (Y_{t,1}, \dots, Y_{t,w})'$ and $\mathbf{X}_t = (X_{t,1}, \dots, X_{t,v})'$ be random vectors, let $\mathbf{W}_t \sim \mathrm{WN}(\mathbf{0}, R)$ and $\mathbf{V}_t \sim \mathrm{WN}(\mathbf{0}, Q)$, and let $G$ be a $w \times v$ matrix and $F$ a $v \times v$ matrix. A state space model satisfies the equations

$$\mathbf{Y}_t = G \mathbf{X}_t + \mathbf{W}_t \qquad \text{(the observation equation)}$$
$$\mathbf{X}_{t+1} = F \mathbf{X}_t + \mathbf{V}_t \qquad \text{(the state equation)}$$

for $t = 0, \pm 1, \pm 2, \dots$. The model is stable if $F$ has all its eigenvalues inside the unit circle. Then

$$\mathbf{X}_t = \sum_{j=0}^{\infty} F^j \mathbf{V}_{t-j-1} \qquad \text{and} \qquad \mathbf{Y}_t = \mathbf{W}_t + \sum_{j=0}^{\infty} G F^j \mathbf{V}_{t-j-1}.$$
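As a concrete illustration, the two equations can be simulated directly. This is a minimal sketch for a scalar model; the parameter values ($F = 0.8$, $G = 1$, $Q = 0.5$, $R = 0.2$) are hypothetical, chosen so that the stability condition holds.

```python
# Sketch: simulate a scalar state space model (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)
F, G = 0.8, 1.0   # state transition and observation "matrices" (scalars here)
Q, R = 0.5, 0.2   # variances of the state noise V_t and observation noise W_t
n = 200

X = np.zeros(n)
Y = np.zeros(n)
for t in range(n - 1):
    Y[t] = G * X[t] + rng.normal(scale=np.sqrt(R))      # observation equation
    X[t + 1] = F * X[t] + rng.normal(scale=np.sqrt(Q))  # state equation
Y[n - 1] = G * X[n - 1] + rng.normal(scale=np.sqrt(R))

# Stability: the single "eigenvalue" of F lies inside the unit circle,
# so X_t has the causal representation sum_j F^j V_{t-j-1}.
print(abs(F) < 1)  # True
```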
Kalman recursion
Estimation of $\mathbf{X}_t$ from
• $\mathbf{Y}_0, \dots, \mathbf{Y}_{t-1}$ is the prediction problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_t$ is the filtering problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_n$ for $n > t$ is the smoothing problem
The best linear predictor $\hat{\mathbf{X}}_t = P_{t-1} \mathbf{X}_t$ of $\mathbf{X}_t$ is given by the recursion

$$\hat{\mathbf{X}}_{t+1} = F \hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1} (\mathbf{Y}_t - G \hat{\mathbf{X}}_t)$$

where
• $\Omega_t = E[(\mathbf{X}_t - \hat{\mathbf{X}}_t)(\mathbf{X}_t - \hat{\mathbf{X}}_t)']$
• $\Delta_t = G \Omega_t G' + R$ and $\Delta_t^{-1}$ is a (generalized) inverse of $\Delta_t$
• $\Theta_t = F \Omega_t G'$
• The initial values $\hat{\mathbf{X}}_1$ and $\Omega_1$ are obtained by direct computation (or by cheating and setting them equal to $\mathbf{0}$ and $I$, respectively)
• $\Omega_t$, $\Delta_t$, and $\Theta_t$ converge to limiting values, which may be used to simplify the recursions for large $t$
The best linear predictor $\hat{\mathbf{X}}_t = P_{t-1} \mathbf{X}_t$ of $\mathbf{X}_t$ for $t > 1$ is given by the recursions

$$\hat{\mathbf{X}}_{t+1} = F \hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1} (\mathbf{Y}_t - G \hat{\mathbf{X}}_t),$$
$$\Omega_{t+1} = F \Omega_t F' + Q - \Theta_t \Delta_t^{-1} \Theta_t'.$$
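A minimal sketch of the Kalman prediction recursion for a scalar model. The parameter values are hypothetical, and the initial values use the "cheating" choice $\hat{\mathbf{X}}_1 = 0$, $\Omega_1 = 1$; the last lines illustrate that $\Omega_t$ settles to a limiting value.

```python
# Sketch: Kalman prediction recursion, scalar case (hypothetical parameters).
import numpy as np

def kalman_predict(Y, F, G, Q, R, xhat1=0.0, omega1=1.0):
    """One-step predictors Xhat_t and their mean squared errors Omega_t."""
    n = len(Y)
    xhat = np.zeros(n + 1); xhat[0] = xhat1
    omega = np.zeros(n + 1); omega[0] = omega1
    for t in range(n):
        delta = G * omega[t] * G + R    # Delta_t = G Omega_t G' + R
        theta = F * omega[t] * G        # Theta_t = F Omega_t G'
        xhat[t + 1] = F * xhat[t] + theta / delta * (Y[t] - G * xhat[t])
        omega[t + 1] = F * omega[t] * F + Q - theta**2 / delta
    return xhat, omega

# Simulate data from the model and run the recursion:
rng = np.random.default_rng(1)
F, G, Q, R = 0.8, 1.0, 0.5, 0.2
n = 300
X = np.zeros(n); Y = np.zeros(n)
for t in range(n - 1):
    Y[t] = G * X[t] + rng.normal(scale=np.sqrt(R))
    X[t + 1] = F * X[t] + rng.normal(scale=np.sqrt(Q))
Y[n - 1] = G * X[n - 1] + rng.normal(scale=np.sqrt(R))

xhat, omega = kalman_predict(Y, F, G, Q, R)
# Omega_t converges to its limiting value (fixed point of the Riccati map):
print(omega[-2], omega[-1])  # nearly equal for large t
```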
Pf: Innovations are defined recursively by $\mathbf{I}_0 = \mathbf{Y}_0$ and

$$\mathbf{I}_t = \mathbf{Y}_t - P_{t-1}\mathbf{Y}_t = G(\mathbf{X}_t - P_{t-1}\mathbf{X}_t) + \mathbf{W}_t = G(\mathbf{X}_t - \hat{\mathbf{X}}_t) + \mathbf{W}_t.$$

Since

$$P_t(\cdot) = P_{t-1}(\cdot) + P(\cdot \mid \mathbf{I}_t),$$

it follows that

$$\begin{aligned}
\hat{\mathbf{X}}_{t+1} &= P_{t-1}\mathbf{X}_{t+1} + P(\mathbf{X}_{t+1} \mid \mathbf{I}_t) \\
&= P_{t-1}(F\mathbf{X}_t + \mathbf{V}_t) + E[\mathbf{X}_{t+1}\mathbf{I}_t'] \, (E[\mathbf{I}_t\mathbf{I}_t'])^{-1} \mathbf{I}_t \\
&= F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1} \mathbf{I}_t = F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}(\mathbf{Y}_t - G\hat{\mathbf{X}}_t).
\end{aligned}$$

Further,

$$\begin{aligned}
\Omega_{t+1} &= E[(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})'] \\
&= E[\mathbf{X}_{t+1}\mathbf{X}_{t+1}'] - E[\hat{\mathbf{X}}_{t+1}\hat{\mathbf{X}}_{t+1}'] \\
&= F E[\mathbf{X}_t\mathbf{X}_t']\,F' + Q - F E[\hat{\mathbf{X}}_t\hat{\mathbf{X}}_t']\,F' - \Theta_t \Delta_t^{-1}\Theta_t' \\
&= F\Omega_t F' + Q - \Theta_t \Delta_t^{-1}\Theta_t'.
\end{aligned}$$
• The $h$-step predictor is $P_t \mathbf{X}_{t+h} = F^{h-1} \hat{\mathbf{X}}_{t+1}$
• Similar computations solve the filtering and smoothing problems
• Similar computations apply if $G$ and $F$ depend on $t$
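The $h$-step formula is a one-liner on top of the one-step predictor. A sketch (the value of $\hat{\mathbf{X}}_{t+1}$ below is hypothetical, standing in for the output of a Kalman run):

```python
# Sketch: h-step prediction P_t X_{t+h} = F^(h-1) Xhat_{t+1}.
import numpy as np

def h_step_predict(xhat_tp1, F, h):
    """P_t X_{t+h} = F^(h-1) Xhat_{t+1} for h >= 1 (works for scalar or matrix F)."""
    Fm = np.atleast_2d(F)
    return np.linalg.matrix_power(Fm, h - 1) @ np.atleast_1d(xhat_tp1)

F = 0.8
xhat_tp1 = 2.0                          # hypothetical one-step predictor Xhat_{t+1}
print(h_step_predict(xhat_tp1, F, 1))   # h = 1 recovers Xhat_{t+1} itself
print(h_step_predict(xhat_tp1, F, 3))   # F^2 * Xhat_{t+1}
```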
Estimation
Let πœƒπœƒ be a vector which contains all the parameters of the state
space model (𝐺𝐺, 𝐹𝐹, parameters of π‘Šπ‘Š and 𝑉𝑉, … ). The conditional
likelihood given 𝒀𝒀0 is
𝑛𝑛
𝐿𝐿 πœƒπœƒ; 𝒀𝒀1 , … , 𝒀𝒀𝑛𝑛 = οΏ½ 𝑓𝑓( 𝒀𝒀𝑑𝑑 |𝒀𝒀1 , … , π’€π’€π’•π’•βˆ’πŸπŸ )
𝑑𝑑=1
and if all variables are jointly normal, then
𝒇𝒇(𝒀𝒀𝑑𝑑 𝒀𝒀1 , … , π’€π’€π’•π’•βˆ’1 = 2πœ‹πœ‹
𝑀𝑀
βˆ’2
det Δ𝑑𝑑
1
βˆ’2
1
2
exp(βˆ’ 𝑰𝑰′𝑑𝑑 Ξ”βˆ’1 𝑰𝑰𝑑𝑑 ),
and estimates may be found by numerical maximization of the
log conditional likelihood function.
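A sketch of how the Gaussian log conditional likelihood can be evaluated from the Kalman quantities $\mathbf{I}_t$ and $\Delta_t$ for a scalar model; the simulated data and parameter values are illustrative, and the log-parametrization of the variances is one convenient (assumed, not prescribed) way to keep them positive during optimization.

```python
# Sketch: negative Gaussian log conditional likelihood via innovations
# (scalar model; hypothetical parameter values).
import numpy as np

def neg_loglik(params, Y):
    """Sum of -log f(Y_t | past) computed from innovations I_t and Delta_t."""
    F, G, logQ, logR = params
    Q, R = np.exp(logQ), np.exp(logR)   # log-parametrization keeps Q, R > 0
    xhat, omega = 0.0, 1.0              # "cheating" initial values
    nll = 0.0
    for y in Y:
        delta = G * omega * G + R       # Delta_t
        innov = y - G * xhat            # innovation I_t
        nll += 0.5 * (np.log(2 * np.pi) + np.log(delta) + innov**2 / delta)
        theta = F * omega * G           # Theta_t
        xhat = F * xhat + theta / delta * innov
        omega = F * omega * F + Q - theta**2 / delta
    return nll

# Simulated data from the model with F=0.8, G=1, Q=0.5, R=0.2:
rng = np.random.default_rng(2)
X = np.zeros(500); Y = np.zeros(500)
for t in range(499):
    Y[t] = X[t] + rng.normal(scale=np.sqrt(0.2))
    X[t + 1] = 0.8 * X[t] + rng.normal(scale=np.sqrt(0.5))
Y[499] = X[499] + rng.normal(scale=np.sqrt(0.2))

# Numerical maximization of the log likelihood = minimization of nll,
# e.g. by passing neg_loglik to a generic optimizer such as
# scipy.optimize.minimize; here we only evaluate it at the true parameters.
print(neg_loglik([0.8, 1.0, np.log(0.5), np.log(0.2)], Y))
```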
• Sometimes it is useful to continuously downweight distant observations to adapt to changes in the situation, that is, when the model changes with time. This can be done recursively.