Time series, spring 2014

Slides for ITS, sections 8.1, 8.4, 8.5 (see also SSPSE 8.6.1)
Exercises: ITS 8.1, 8.7, 8.9, 8.12

State space models

Let $\mathbf{Y}_t = (Y_{t,1}, \dots, Y_{t,w})'$ and $\mathbf{X}_t = (X_{t,1}, \dots, X_{t,v})'$ be random vectors, let $\mathbf{W}_t \sim \mathrm{WN}(\mathbf{0}, R)$ and $\mathbf{V}_t \sim \mathrm{WN}(\mathbf{0}, Q)$, and let $G$ be a $w \times v$ matrix and $F$ be a $v \times v$ matrix. A state space model satisfies the equations

$$\mathbf{Y}_t = G\mathbf{X}_t + \mathbf{W}_t \qquad \text{(the observation equation)},$$
$$\mathbf{X}_{t+1} = F\mathbf{X}_t + \mathbf{V}_t \qquad \text{(the state equation)},$$

for $t = 0, \pm 1, \pm 2, \dots$

The state equation is stable if $F$ has all its eigenvalues inside the unit circle. Then

$$\mathbf{X}_t = \sum_{j=0}^{\infty} F^j \mathbf{V}_{t-j-1} \qquad \text{and} \qquad \mathbf{Y}_t = \mathbf{W}_t + \sum_{j=0}^{\infty} G F^j \mathbf{V}_{t-j-1}$$

(a simulation sketch of a stable model is given at the end of these notes).

Kalman recursion

Estimation of $\mathbf{X}_t$ from
• $\mathbf{Y}_0, \dots, \mathbf{Y}_{t-1}$ is the prediction problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_t$ is the filtering problem
• $\mathbf{Y}_0, \dots, \mathbf{Y}_n$ for $n > t$ is the smoothing problem

The best linear predictor $\hat{\mathbf{X}}_t = P_{t-1}\mathbf{X}_t$ of $\mathbf{X}_t$ is given by the recursion

$$\hat{\mathbf{X}}_{t+1} = F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}\bigl(\mathbf{Y}_t - G\hat{\mathbf{X}}_t\bigr),$$

where
• $\Omega_t = E\bigl[(\mathbf{X}_t - \hat{\mathbf{X}}_t)(\mathbf{X}_t - \hat{\mathbf{X}}_t)'\bigr]$
• $\Delta_t = G\Omega_t G' + R$, and $\Delta_t^{-1}$ is a (generalized) inverse of $\Delta_t$
• $\Theta_t = F\Omega_t G'$
• The initial values $\hat{\mathbf{X}}_1$ and $\Omega_1$ are obtained by direct computation (or by cheating and setting them equal to $\mathbf{0}$ and $I$, respectively)
• $\Omega_t$, $\Delta_t$, and $\Theta_t$ converge to limiting values, which may be used to simplify the recursions for large $t$

In full, the best linear predictor $\hat{\mathbf{X}}_t = P_{t-1}\mathbf{X}_t$ of $\mathbf{X}_t$ for $t > 1$ is given by the recursions

$$\hat{\mathbf{X}}_{t+1} = F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}\bigl(\mathbf{Y}_t - G\hat{\mathbf{X}}_t\bigr),$$
$$\Omega_{t+1} = F\Omega_t F' + Q - \Theta_t \Delta_t^{-1}\Theta_t'$$

(a NumPy sketch of this recursion follows at the end of these notes).

Pf: The innovations are defined recursively by $\mathbf{I}_0 = \mathbf{Y}_0$ and

$$\mathbf{I}_t = \mathbf{Y}_t - P_{t-1}\mathbf{Y}_t = G\bigl(\mathbf{X}_t - \hat{\mathbf{X}}_t\bigr) + \mathbf{W}_t.$$

Since $P_t(\cdot) = P_{t-1}(\cdot) + P(\cdot \mid \mathbf{I}_t)$, it follows that

$$\hat{\mathbf{X}}_{t+1} = P_{t-1}\mathbf{X}_{t+1} + P(\mathbf{X}_{t+1} \mid \mathbf{I}_t)
= P_{t-1}\bigl(F\mathbf{X}_t + \mathbf{V}_t\bigr) + E[\mathbf{X}_{t+1}\mathbf{I}_t'] \bigl(E[\mathbf{I}_t\mathbf{I}_t']\bigr)^{-1}\mathbf{I}_t
= F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}\mathbf{I}_t
= F\hat{\mathbf{X}}_t + \Theta_t \Delta_t^{-1}\bigl(\mathbf{Y}_t - G\hat{\mathbf{X}}_t\bigr).$$

Further, using that $\mathbf{I}_t$ is orthogonal to $\hat{\mathbf{X}}_t$ and that $E[\mathbf{I}_t\mathbf{I}_t'] = \Delta_t$,

$$\Omega_{t+1} = E\bigl[(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})(\mathbf{X}_{t+1} - \hat{\mathbf{X}}_{t+1})'\bigr]
= E[\mathbf{X}_{t+1}\mathbf{X}_{t+1}'] - E[\hat{\mathbf{X}}_{t+1}\hat{\mathbf{X}}_{t+1}']
= F E[\mathbf{X}_t\mathbf{X}_t'] F' + Q - F E[\hat{\mathbf{X}}_t\hat{\mathbf{X}}_t'] F' - \Theta_t \Delta_t^{-1}\Theta_t'
= F\Omega_t F' + Q - \Theta_t \Delta_t^{-1}\Theta_t'.$$

• The $h$-step predictor is $P_t\mathbf{X}_{t+h} = F^{h-1}\hat{\mathbf{X}}_{t+1}$
• Similar computations solve the filtering and smoothing problems
• Similar computations apply if $G$ and $F$ depend on $t$

Estimation

Let $\theta$ be a vector which contains all the parameters of the state space model ($G$, $F$, parameters of $Q$ and $R$, ...). The conditional likelihood given $\mathbf{Y}_0$ is

$$L(\theta; \mathbf{Y}_1, \dots, \mathbf{Y}_n) = \prod_{t=1}^{n} f(\mathbf{Y}_t \mid \mathbf{Y}_1, \dots, \mathbf{Y}_{t-1}),$$

and if all variables are jointly normal, then

$$f(\mathbf{Y}_t \mid \mathbf{Y}_1, \dots, \mathbf{Y}_{t-1}) = (2\pi)^{-w/2} \bigl(\det \Delta_t\bigr)^{-1/2} \exp\Bigl(-\tfrac{1}{2}\,\mathbf{I}_t' \Delta_t^{-1}\mathbf{I}_t\Bigr),$$

and estimates may be found by numerical maximization of the log conditional likelihood function (a sketch follows below).

• Sometimes it is useful to continuously downweight distant observations, in order to adapt to changes in the situation, that is, to a model which changes with time. This can be done recursively (one such scheme is sketched at the very end)
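To make the stability condition concrete, here is a minimal NumPy sketch that simulates observations from a stable state space model with Gaussian white noise. The function name `simulate_state_space` and the burn-in approximation of the stationary solution are my own choices, not part of the slides.

```python
import numpy as np

def simulate_state_space(F, G, Q, R, n, seed=None):
    """Simulate Y_1, ..., Y_n from the model
         Y_t = G X_t + W_t,       W_t ~ N(0, R),
         X_{t+1} = F X_t + V_t,   V_t ~ N(0, Q).
    If all eigenvalues of F lie inside the unit circle, the stationary
    solution is X_t = sum_j F^j V_{t-j-1}; a burn-in from X_0 = 0
    approximates it."""
    rng = np.random.default_rng(seed)
    v, w = F.shape[0], G.shape[0]
    if not np.all(np.abs(np.linalg.eigvals(F)) < 1):
        raise ValueError("F must have all eigenvalues inside the unit circle")
    x = np.zeros(v)
    burn = 500  # discard the transient so X_t is close to stationary
    ys = []
    for t in range(burn + n):
        if t >= burn:
            ys.append(G @ x + rng.multivariate_normal(np.zeros(w), R))
        x = F @ x + rng.multivariate_normal(np.zeros(v), Q)
    return np.array(ys)
```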
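The Kalman prediction recursion translates directly into code. This is a minimal sketch, assuming constant $F$, $G$, $Q$, $R$ and the "cheating" initial values $\hat{\mathbf{X}}_1 = \mathbf{0}$, $\Omega_1 = I$; the helper name `kalman_predict` is hypothetical.

```python
import numpy as np

def kalman_predict(ys, F, G, Q, R):
    """One-step Kalman prediction: returns the predictors
    Xhat_t = P_{t-1} X_t and the error covariances Omega_t
    for t = 1, ..., n+1."""
    v = F.shape[0]
    xhat, omega = np.zeros(v), np.eye(v)   # "cheating" initial values
    xhats, omegas = [xhat], [omega]
    for y in ys:
        delta = G @ omega @ G.T + R            # Delta_t = G Omega_t G' + R
        theta = F @ omega @ G.T                # Theta_t = F Omega_t G'
        gain = theta @ np.linalg.pinv(delta)   # Theta_t Delta_t^{-1} (generalized inverse)
        xhat = F @ xhat + gain @ (y - G @ xhat)        # Xhat_{t+1}
        omega = F @ omega @ F.T + Q - gain @ theta.T   # Omega_{t+1}
        xhats.append(xhat)
        omegas.append(omega)
    return xhats, omegas
```

The $h$-step predictor from the slides is then obtained as `np.linalg.matrix_power(F, h - 1) @ xhats[-1]`.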
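Under joint normality the log conditional likelihood is a by-product of the same recursion, since the innovations $\mathbf{I}_t$ and their covariances $\Delta_t$ appear at every step. A sketch under the same assumptions as `kalman_predict` above; in practice one would hand the negative of this function, as a function of the parameter vector $\theta$, to a numerical optimizer such as `scipy.optimize.minimize`.

```python
import numpy as np

def log_conditional_likelihood(ys, F, G, Q, R):
    """Gaussian log conditional likelihood
       sum_t log f(Y_t | Y_1, ..., Y_{t-1})
    computed from the innovations I_t = Y_t - G Xhat_t and their
    covariances Delta_t along the Kalman prediction recursion."""
    v, w = F.shape[0], G.shape[0]
    xhat, omega = np.zeros(v), np.eye(v)
    ll = 0.0
    for y in ys:
        delta = G @ omega @ G.T + R
        innov = y - G @ xhat
        _, logdet = np.linalg.slogdet(delta)
        # log of (2 pi)^{-w/2} det(Delta_t)^{-1/2} exp(-I_t' Delta_t^{-1} I_t / 2)
        ll -= 0.5 * (w * np.log(2 * np.pi) + logdet
                     + innov @ np.linalg.solve(delta, innov))
        theta = F @ omega @ G.T
        gain = theta @ np.linalg.pinv(delta)
        xhat = F @ xhat + gain @ innov
        omega = F @ omega @ F.T + Q - gain @ theta.T
    return ll
```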
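The slides do not spell out the recursive downweighting scheme. One common choice, an assumption on my part rather than the method of the course, is exponential forgetting: the predicted error covariance is inflated by a factor $1/\lambda$ with $\lambda$ slightly below 1, so an observation $h$ steps back is effectively downweighted by $\lambda^h$.

```python
import numpy as np

def kalman_predict_forgetting(ys, F, G, Q, R, lam=0.98):
    """Kalman prediction with exponential forgetting (assumed scheme):
    dividing Omega_{t+1} by lam < 1 keeps the filter responsive to a
    model that changes with time by downweighting distant observations."""
    v = F.shape[0]
    xhat, omega = np.zeros(v), np.eye(v)
    for y in ys:
        delta = G @ omega @ G.T + R
        theta = F @ omega @ G.T
        gain = theta @ np.linalg.pinv(delta)
        xhat = F @ xhat + gain @ (y - G @ xhat)
        omega = (F @ omega @ F.T + Q - gain @ theta.T) / lam  # forgetting step
    return xhat, omega
```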