Time series, spring 2014

Slides for ITS, sections 8.1, 8.4, 8.5 (see also SSPSE 8.6.1)
Exercises: ITS 8.1, 8.7, 8.9, 8.12

State space models

Let $Y_t = (Y_{t,1}, \dots, Y_{t,w})'$ and $X_t = (X_{t,1}, \dots, X_{t,v})'$ be random vectors, let $W_t \sim \mathrm{WN}(0, R)$ and $V_t \sim \mathrm{WN}(0, Q)$, and let $G$ be a $w \times v$ matrix and $F$ a $v \times v$ matrix. A state space model satisfies the equations

$Y_t = G X_t + W_t$        (the observation equation)
$X_{t+1} = F X_t + V_t$    (the state equation)

for $t = 0, \pm 1, \pm 2, \dots$

The model is stable if $F$ has all its eigenvalues inside the unit circle. Then

$X_t = \sum_{j=0}^{\infty} F^j V_{t-j-1}$  and  $Y_t = W_t + \sum_{j=0}^{\infty} G F^j V_{t-j-1}$.

Kalman recursion

Estimation of $X_t$ from
• $Y_0, \dots, Y_{t-1}$ is the prediction problem
• $Y_0, \dots, Y_t$ is the filtering problem
• $Y_0, \dots, Y_n$ for $n > t$ is the smoothing problem

The best linear predictor $\hat{X}_t = P_{t-1} X_t$ of $X_t$ is given by the recursion

$\hat{X}_{t+1} = F \hat{X}_t + \Theta_t \Delta_t^{-1} (Y_t - G \hat{X}_t)$

where
• $\Omega_t = E[(X_t - \hat{X}_t)(X_t - \hat{X}_t)']$
• $\Delta_t = G \Omega_t G' + R$, and $\Delta_t^{-1}$ is a (generalized) inverse of $\Delta_t$
• $\Theta_t = F \Omega_t G'$
• the initial values $\hat{X}_1$ and $\Omega_1$ are obtained by direct computation (or by cheating and setting them equal to $0$ and $I$, respectively)
• $\Omega_t$, $\Delta_t$, and $\Theta_t$ converge to limiting values, which may be used to simplify the recursions for large $t$

Thus the best linear predictor $\hat{X}_t = P_{t-1} X_t$ of $X_t$ for $t > 1$ is given by the recursions

$\hat{X}_{t+1} = F \hat{X}_t + \Theta_t \Delta_t^{-1} (Y_t - G \hat{X}_t)$
$\Omega_{t+1} = F \Omega_t F' + Q - \Theta_t \Delta_t^{-1} \Theta_t'$

Pf: The innovations are defined recursively by $I_0 = Y_0$ and

$I_t = Y_t - P_{t-1} Y_t = G (X_t - \hat{X}_t) + W_t$.

Since $P_t(\cdot) = P_{t-1}(\cdot) + E[(\cdot)\, I_t']\, E[I_t I_t']^{-1} I_t$, it follows that

$\hat{X}_{t+1} = P_{t-1} X_{t+1} + E[X_{t+1} I_t']\, E[I_t I_t']^{-1} I_t = P_{t-1}(F X_t + V_t) + \Theta_t \Delta_t^{-1} I_t = F \hat{X}_t + \Theta_t \Delta_t^{-1} (Y_t - G \hat{X}_t)$.

Further,

$\Omega_{t+1} = E[(X_{t+1} - \hat{X}_{t+1})(X_{t+1} - \hat{X}_{t+1})'] = E[X_{t+1} X_{t+1}'] - E[\hat{X}_{t+1} \hat{X}_{t+1}'] = F E[X_t X_t'] F' + Q - F E[\hat{X}_t \hat{X}_t'] F' - \Theta_t \Delta_t^{-1} \Theta_t' = F \Omega_t F' + Q - \Theta_t \Delta_t^{-1} \Theta_t'$.

• The $h$-step predictor is $P_t X_{t+h} = F^{h-1} \hat{X}_{t+1}$
• Similar computations solve the filtering and smoothing problems
• Similar computations apply if $G$ and $F$ depend on $t$

Estimation

Let $\theta$ be a vector which contains all the parameters of the state space model ($G$, $F$, the parameters of $Q$ and $R$, ...). The conditional likelihood given $Y_0$ is

$L(\theta; Y_1, \dots, Y_n) = \prod_{t=1}^{n} f(Y_t \mid Y_1, \dots, Y_{t-1})$,

and if all variables are jointly normal, then

$f(Y_t \mid Y_1, \dots, Y_{t-1}) = (2\pi)^{-w/2} (\det \Delta_t)^{-1/2} \exp\!\left(-\tfrac{1}{2} I_t' \Delta_t^{-1} I_t\right)$,

and estimates may be found by numerical maximization of the log conditional likelihood function.

• Sometimes it is useful to continuously downweight distant observations in order to adapt to changes in the situation, i.e. to a model that changes with time. This can be done recursively.
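To make the model equations concrete, here is a minimal NumPy sketch that simulates from the observation and state equations above. It is a sketch under stated assumptions, not part of ITS: the noise is taken Gaussian (the slides only require white noise), the function name simulate_state_space is invented for illustration, and the start $X_0 = 0$ is a crude stand-in for the stationary initial distribution of a stable model.

```python
import numpy as np

def simulate_state_space(n, F, G, Q, R, rng=None):
    """Simulate Y_1, ..., Y_n from
        Y_t     = G X_t + W_t,   W_t ~ WN(0, R)   (observation equation)
        X_{t+1} = F X_t + V_t,   V_t ~ WN(0, Q)   (state equation)
    using Gaussian white noise (an assumption; any white noise works)."""
    rng = np.random.default_rng() if rng is None else rng
    v, w = F.shape[0], G.shape[0]
    # Stability: all eigenvalues of F inside the unit circle.
    assert np.max(np.abs(np.linalg.eigvals(F))) < 1, "model is not stable"
    X = np.zeros(v)                  # crude stand-in for a stationary start
    Y = np.empty((n, w))
    for t in range(n):
        Y[t] = G @ X + rng.multivariate_normal(np.zeros(w), R)
        X = F @ X + rng.multivariate_normal(np.zeros(v), Q)
    return Y
```

For example, an AR(1) state observed with noise is obtained with F = [[0.7]], G = [[1.0]], Q = [[1.0]], R = [[0.5]].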
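The prediction recursion translates almost line by line into code. A minimal sketch, assuming the "cheating" initial values $\hat{X}_1 = 0$, $\Omega_1 = I$ and using a pseudo-inverse for the (generalized) inverse $\Delta_t^{-1}$; the function name kalman_predict is illustrative. Python index $t$ holds the time-$(t+1)$ quantity.

```python
def kalman_predict(Y, F, G, Q, R):
    """One-step Kalman prediction.  Xhat[k] and Omega[k] hold the predictor
    Xhat_{k+1} = P_k X_{k+1} and its error covariance Omega_{k+1}, so
    Xhat[0] = Xhat_1 and Omega[0] = Omega_1 are the initial values."""
    n, v = len(Y), F.shape[0]
    Xhat = np.zeros((n + 1, v))            # "cheating" start: Xhat_1 = 0
    Omega = np.empty((n + 1, v, v))
    Omega[0] = np.eye(v)                   # "cheating" start: Omega_1 = I
    innov = np.empty_like(Y, dtype=float)  # innovations I_t
    Delta = np.empty((n, G.shape[0], G.shape[0]))
    for t in range(n):
        Delta[t] = G @ Omega[t] @ G.T + R        # Delta_t = G Omega_t G' + R
        Theta = F @ Omega[t] @ G.T               # Theta_t = F Omega_t G'
        Dinv = np.linalg.pinv(Delta[t])          # (generalized) inverse
        innov[t] = Y[t] - G @ Xhat[t]            # I_t = Y_t - G Xhat_t
        Xhat[t + 1] = F @ Xhat[t] + Theta @ Dinv @ innov[t]
        Omega[t + 1] = F @ Omega[t] @ F.T + Q - Theta @ Dinv @ Theta.T
    return Xhat, Omega, innov, Delta
```

The $h$-step predictor $P_t X_{t+h} = F^{h-1} \hat{X}_{t+1}$ is then np.linalg.matrix_power(F, h - 1) @ Xhat[t].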
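Under joint normality the factors of the conditional likelihood are exactly the Gaussian densities above, so the log conditional likelihood is a by-product of the same recursion. A sketch reusing kalman_predict from the previous block; the parametrization map build_model (from $\theta$ to $F, G, Q, R$) and the function name neg_loglik are hypothetical, and scipy.optimize.minimize applied to the negative log likelihood performs the numerical maximization.

```python
from scipy.optimize import minimize

def neg_loglik(theta, Y, build_model):
    """-log L(theta; Y_1,...,Y_n), each factor being
    (2 pi)^(-w/2) det(Delta_t)^(-1/2) exp(-I_t' Delta_t^{-1} I_t / 2)."""
    F, G, Q, R = build_model(theta)    # hypothetical map theta -> (F, G, Q, R)
    _, _, innov, Delta = kalman_predict(Y, F, G, Q, R)
    w = Y.shape[1]
    ll = 0.0
    for I_t, D_t in zip(innov, Delta):
        _sign, logdet = np.linalg.slogdet(D_t)   # stable log det(Delta_t)
        ll -= 0.5 * (w * np.log(2 * np.pi) + logdet
                     + I_t @ np.linalg.pinv(D_t) @ I_t)
    return -ll

# Numerical maximization of the log conditional likelihood, e.g.:
# result = minimize(neg_loglik, theta0, args=(Y, build_model),
#                   method="Nelder-Mead")
```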