Stat 551 Exam 2
December 12, 2012

I have neither given nor received unauthorized assistance on this examination.

____________________________________
Signature

____________________________________
Name Printed

There are 12 parts on this exam. I will score every part out of 10 points and take your best 10 of 12 scores. (Budget your time accordingly and find things you can do.)

10 pts  1. A mean-0 bivariate stationary time series {(y_{1t}, y_{2t})}, t = −∞, ..., ∞, has cross-covariance function

   Γ(0) = [ 1    γc ]
          [ γc   c² ],   Γ(1) = θ Γ(0),   and   Γ(h) = 0 for h > 1

for some c > 0, −1 < γ < 1, and −1 < θ < 1.

a) Write in explicit matrix form (you need not simplify, but you must plug in everything you can) the variance of the variable y_{11} − 2 y_{21} + y_{12}.

10 pts  b) Suppose that (y_{11}, y_{21}) = (3, 7) and y_{12} = 4. Write out in explicit matrix form (again you must plug in everything you can but need not simplify) the best linear prediction of y_{23}.

10 pts  c) Suppose that, in fact, the process is Gaussian and γ = 0. What is the large-n approximate distribution of (√n times) the bivariate variable giving empirical cross-correlations at lags h and k, √n (ρ̂_{1,2}(h), ρ̂_{1,2}(k))? (Be as explicit as possible.)

10 pts  d) Identify a time-invariant linear filter L that, when (at least in theory) applied to the {y_{1t}} sequence (say, Y1), will produce a white noise sequence, "whitening" the 1st coordinate series. For Y2 the {y_{2t}} sequence, what is the large-n approximate distribution of the empirical cross-correlation between L Y1 and Y2 at lag h = 3?

10 pts  2. Suppose that I decide that the sample autocorrelation function for the first n values of a mean-0 stationary series {y_t} can be described roughly as "oscillating at some not completely obvious period inside a roughly exponentially declining 'wrapper' whose rate of decline is not exactly clear."
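A shape like the one just described (an oscillation inside an exponentially decaying envelope) is what a damped-cosine autocorrelation produces, e.g. ρ(h) = r^|h| cos(2πh/p) with a decay parameter r and a period parameter p. A minimal numerical sketch; the function name and the values r = .8, p = 6 are purely illustrative choices of mine, not values from the exam:

```python
import numpy as np

def damped_cosine_acf(h, r, p):
    """Illustrative ACF: exponential 'wrapper' r**|h| times an oscillation of period p."""
    h = np.asarray(h, dtype=float)
    return r ** np.abs(h) * np.cos(2 * np.pi * h / p)

# example: decay rate 0.8, period 6, evaluated at lags 0..12
rho = damped_cosine_acf(np.arange(0, 13), 0.8, 6.0)
```

Here r governs how fast the envelope r^|h| shrinks and p governs the period of the oscillation, the two features the problem asks a proposed parametric form to carry.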
A cartoon might look like:

[cartoon: a sample ACF oscillating inside an exponentially declining envelope]

Propose a sensible parametric form for an autocorrelation function that might be fit to the observed series (provide for both a parameter governing the period of oscillation and a parameter governing the rate of decay) and write the matrix form for a function you would propose to optimize in order to estimate the correlation parameters and the process variance.

Proposed autocorrelation function:

Function to be optimized:

10 pts  3. Below are some values of a time series {y_t} through time n = 12. Compute Holt-Winters seasonal (with d = 4) forecasts for times t = 13, 14, 15, 16 using smoothing constants α = β = γ = 1/2. (In doing these computations, for purposes of this exam, you may round even intermediate calculations to one decimal place.)

    t  |  1   2   3   4   5   6   7   8
   y_t |  2   3   1   2   4   5   4   5

    t  |  9  10  11  12  13  14  15  16
   y_t |  7   6   6   7  --  --  --  --

(Worksheet columns for â_t, b̂_t, ĉ_t, and ŷ_t to be filled in.)

10 pts  4. Argue carefully that an ARMA(2,2) process {y_t} satisfying

   y_t = .2 y_{t−1} − .2 y_{t−2} + ε_t + .2 ε_{t−1} + .2 ε_{t−2}

for {ε_t} mean-0 white noise with variance σ² is both causal and invertible.

10 pts  5. Consider an ARMA(1,1) process {y_t} satisfying

   y_t = −.5 y_{t−1} + ε_t + .2 ε_{t−1}

for {ε_t} mean-0 white noise with variance σ², and an ARIMA(1,1,1) process {u_t} satisfying

   (u_t − u_{t−1}) = −.5 (u_{t−1} − u_{t−2}) + ε_t + .2 ε_{t−1}

for {ε_t} mean-0 white noise with variance σ². Find spectral densities for {y_t} and {u_t}.

10 pts  6. Below are plots of power transfer functions for the differencing operators D, D_4, and D* = D D_4. There are time series plots and spectral densities below for realizations of 5 different moving average models and a differenced version of each. For each set of plots, identify the correct MA model and the differencing operator that was used to make the second pair of plots for that model.
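For reference, the power transfer function of a lag-k differencing operator has the simple closed form |1 − e^{−ikν}|² = 2 − 2 cos(kν), so the three curves plotted for D, D_4, and D* = D D_4 can be recomputed directly. A short sketch (the function name is mine):

```python
import numpy as np

def power_transfer_D(nu, lag=1):
    """|1 - exp(-i*lag*nu)|^2 = 2 - 2*cos(lag*nu), the power transfer
    function of the lag-`lag` differencing operator."""
    return np.abs(1 - np.exp(-1j * lag * nu)) ** 2

nu = np.linspace(0, np.pi, 101)
pt_D = power_transfer_D(nu)            # D: zero at nu = 0
pt_D4 = power_transfer_D(nu, lag=4)    # D4: zeros at nu = 0, pi/2, pi
pt_Dstar = pt_D * pt_D4                # D* = D D4: product of the two
```

The product form for D* holds because transfer functions of composed linear filters multiply.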
The models were:

   Model A:  y_t = ε_t + .8 ε_{t−1}
   Model B:  y_t = ε_t + .2 ε_{t−1}
   Model C:  y_t = ε_t − .8 ε_{t−1}
   Model D:  y_t = ε_t + .8 ε_{t−1} + .2 ε_{t−4}
   Model E:  y_t = ε_t + .2 ε_{t−1} + .8 ε_{t−4}

Case 1: [series plot and spectral density; differenced series plot and spectral density]
   Model is ________________   Differencing Operator is ____________________

Case 2: [series plot and spectral density; differenced series plot and spectral density]
   Model is ________________   Differencing Operator is ____________________

Case 3: [series plot and spectral density; differenced series plot and spectral density]
   Model is ________________   Differencing Operator is ____________________

Case 4: [series plot and spectral density; differenced series plot and spectral density]
   Model is ________________   Differencing Operator is ____________________

Case 5: [series plot and spectral density; differenced series plot and spectral density]
   Model is ________________   Differencing Operator is ____________________

7. BDM considers a data set consisting of the numbers of goals scored by England against Scotland in a series of n = 52 annual soccer matches. We'll here consider a generalized state space analysis of those data. We'll suppose that conditional on Poisson means λ_t for t = 1, 2, ..., 53, the numbers of goals scored (y_1, y_2, ..., y_53) are independent and y_t ~ Poisson(λ_t).

10 pts  a) Give a joint probability mass function for (y_1, ..., y_53) conditioned on the means. That is, specify f(y_1, ..., y_53 | λ_1, ..., λ_53). Then provide a joint pdf for the logarithms of the means l_t = ln(λ_t), supposing that these follow a mean-0 normal random walk model with variance σ² under the assumption that l_1 ~ N(0, 100) (100 is the variance). (You don't need to rewrite anything, but note then that λ_t = exp(l_t) and we thus have a fully specified model for all of the y_t's and l_t's.)

10 pts  b) On the next page is some BUGS code and output for analysis of the goal data. Two analyses are included. The first uses an assumption that σ² = 1 and the second employs a U(0, 10) prior distribution on σ.
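The fully specified model of part a) can be checked by forward simulation: draw the random walk on the log scale, exponentiate, then draw Poisson counts. A sketch in which σ² = 1 and the seed are arbitrary choices of mine (σ² = 1 matches Analysis 1 only by way of example):

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma2 = 53, 1.0

# l_1 ~ N(0, 100); thereafter a mean-0 normal random walk with step variance sigma^2
l = np.empty(T)
l[0] = rng.normal(0.0, np.sqrt(100.0))
for t in range(1, T):
    l[t] = l[t - 1] + rng.normal(0.0, np.sqrt(sigma2))

lam = np.exp(l)        # lambda_t = exp(l_t)
y = rng.poisson(lam)   # y_t | lambda_t ~ Poisson(lambda_t), independent across t
```

Note that BUGS parameterizes dnorm by precision, so tau = 1/σ² in the code below corresponds to the step variance σ² here, and dnorm(0, .01) is the N(0, 100) initial condition.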
For both analyses say when (at which t) the English goal-scoring capacity seemed at its lowest and what is a sensible prediction interval for y_53.

   Analysis 1:

   Analysis 2:

10 pts  c) What, overall, seem to be the effects of computing under the assumption that σ = 1 rather than giving σ a prior distribution? Argue that the differences you see "make sense."

Analysis 1

model {
  l[1]~dnorm(0,.01)
  for (i in 2:53) {
    epsilon[i]~dnorm(0,tau)
    l[i]<-l[i-1]+epsilon[i]
  }
  for (i in 1:53) {
    lambda[i]<-exp(l[i])
    y[i]~dpois(lambda[i])
  }
}

list(y=c(0,1,0,2,4,1,0,1,5,1,4,2,1,3,1,1,1,1,0,1,1,0,2,0,2,0,0,1,0,1,2,2,1,2,4,1,4,1,0,0,4,1,0,1,0,1,1,2,1,1,0,0,NA),tau=1)

            mean     sd      MC_error  val2.5pc  median  val97.5pc  start  sample
lambda[1]   0.5112   0.5018  0.03194   0.05587   0.3448  1.917      10000  52001
lambda[2]   0.6439   0.5569  0.03305   0.08563   0.478   2.174      10000  52001
lambda[3]   0.7125   0.5845  0.03291   0.1138    0.5577  2.375      10000  52001
lambda[4]   1.633    1.009   0.05155   0.3376    1.404   4.131      10000  52001
lambda[5]   2.721    1.332   0.05704   0.7822    2.494   5.912      10000  52001
lambda[6]   1.376    0.831   0.04009   0.2858    1.205   3.505      10000  52001
lambda[7]   0.8744   0.5413  0.02565   0.1622    0.761   2.269      10000  52001
lambda[8]   1.497    0.9134  0.04273   0.3994    1.283   3.816      10000  52001
lambda[9]   3.465    1.602   0.06386   1.184     3.201   7.347      10000  52001
lambda[10]  2.036    1.033   0.04197   0.5778    1.847   4.554      10000  52001
lambda[11]  3.199    1.438   0.05409   1.125     2.954   6.63       10000  52001
lambda[12]  2.136    1.076   0.04151   0.629     1.953   4.7        10000  52001
lambda[13]  1.607    0.8887  0.0368    0.4714    1.408   3.849      10000  52001
lambda[14]  2.112    1.157   0.0447    0.5932    1.866   4.973      10000  52001
lambda[15]  1.302    0.7678  0.0307    0.2824    1.142   3.221      10000  52001
lambda[16]  1.088    0.6978  0.02867   0.2235    0.9301  2.859      10000  52001
lambda[17]  0.986    0.6855  0.02793   0.1714    0.8198  2.807      10000  52001
lambda[18]  0.8642   0.6441  0.02725   0.1273    0.6978  2.567      10000  52001
lambda[19]  0.5921   0.4613  0.01895   0.08747   0.4719  1.797      10000  52001
lambda[20]  0.7459   0.5601  0.0231    0.1202    0.5956  2.214      10000  52001
lambda[21]  0.8581   0.6542  0.028     0.1405    0.6808  2.608      10000  52001
lambda[22]  0.6745   0.5285  0.02218   0.1034    0.5321  2.068      10000  52001
lambda[23]  1.061    0.7443  0.02826   0.1916    0.8861  2.999      10000  52001
lambda[24]  0.7148   0.5511  0.0229    0.1085    0.5674  2.192      10000  52001
lambda[25]  0.9747   0.6815  0.0252    0.1766    0.8024  2.755      10000  52001
lambda[26]  0.4887   0.4033  0.01602   0.07101   0.3765  1.565      10000  52001
lambda[27]  0.445    0.3839  0.01521   0.05724   0.3356  1.443      10000  52001
lambda[28]  0.6485   0.5254  0.02076   0.08672   0.509   2.067      10000  52001
lambda[29]  0.5889   0.439   0.016     0.09961   0.478   1.739      10000  52001
lambda[30]  1.022    0.7082  0.02728   0.1954    0.8485  2.815      10000  52001
lambda[31]  1.567    0.9388  0.032     0.3703    1.361   3.936      10000  52001
lambda[32]  1.773    0.9956  0.0324    0.4402    1.568   4.23       10000  52001
lambda[33]  1.57     0.9464  0.03213   0.3763    1.355   3.964      10000  52001
lambda[34]  2.048    1.139   0.03595   0.5307    1.81    4.913      10000  52001
lambda[35]  3.009    1.434   0.03702   0.9561    2.769   6.442      10000  52001
lambda[36]  1.909    1.054   0.02907   0.5296    1.688   4.569      10000  52001
lambda[37]  2.716    1.344   0.02851   0.8177    2.47    5.987      10000  52001
lambda[38]  1.275    0.7959  0.01928   0.2831    1.092   3.305      10000  52001
lambda[39]  0.7521   0.551   0.01522   0.1296    0.6103  2.191      10000  52001
lambda[40]  0.8749   0.6033  0.01461   0.1726    0.7234  2.476      10000  52001
lambda[41]  2.226    1.203   0.02018   0.596     1.993   5.222      10000  52001
lambda[42]  1.151    0.7495  0.01344   0.2391    0.9778  3.059      10000  52001
lambda[43]  0.6579   0.495   0.009646  0.1081    0.5259  1.974      10000  52001
lambda[44]  0.7251   0.5536  0.009893  0.1139    0.5778  2.188      10000  52001
lambda[45]  0.5999   0.4774  0.008122  0.08884   0.4684  1.871      10000  52001
lambda[46]  0.8449   0.6238  0.009785  0.1406    0.6819  2.471      10000  52001
lambda[47]  1.038    0.7096  0.008914  0.1943    0.8605  2.888      10000  52001
lambda[48]  1.356    0.8668  0.007996  0.2761    1.157   3.552      10000  52001
lambda[49]  0.9557   0.6773  0.006009  0.1621    0.7851  2.71       10000  52001
lambda[50]  0.6917   0.5513  0.004563  0.09311   0.5407  2.14       10000  52001
lambda[51]  0.3842   0.3763  0.00281   0.03201   0.2688  1.395      10000  52001
lambda[52]  0.3274   0.3835  0.002337  0.01447   0.1994  1.385      10000  52001
lambda[53]  0.5541   1.372   0.006582  0.007843  0.192   3.306      10000  52001
y[53]       0.5533   1.563   0.007518  0.0       0.0     4.0        10000  52001

Analysis 2

model {
  sigma~dunif(0,10)
  tau<-1/(sigma*sigma)
  l[1]~dnorm(0,.01)
  for (i in 2:53) {
    epsilon[i]~dnorm(0,tau)
    l[i]<-l[i-1]+epsilon[i]
  }
  for (i in 1:53) {
    lambda[i]<-exp(l[i])
    y[i]~dpois(lambda[i])
  }
}

list(y=c(0,1,0,2,4,1,0,1,5,1,4,2,1,3,1,1,1,1,0,1,1,0,2,0,2,0,0,1,0,1,2,2,1,2,4,1,4,1,0,0,4,1,0,1,0,1,1,2,1,1,0,0,NA))
list(sigma=1)

            mean     sd      MC_error  val2.5pc  median   val97.5pc  start  sample
lambda[1]   1.218    0.3318  0.01347   0.5093    1.231    1.903      10001  50000
lambda[2]   1.238    0.3042  0.01069   0.588     1.243    1.882      10001  50000
lambda[3]   1.273    0.2903  0.008317  0.6998    1.259    1.934      10001  50000
lambda[4]   1.34     0.326   0.008372  0.8391    1.289    2.189      10001  50000
lambda[5]   1.401    0.3848  0.01285   0.9185    1.314    2.471      10001  50000
lambda[6]   1.382    0.3511  0.01076   0.9067    1.309    2.344      10001  50000
lambda[7]   1.378    0.3398  0.01018   0.9118    1.309    2.286      10001  50000
lambda[8]   1.437    0.4094  0.01536   0.9566    1.332    2.58       10001  50000
lambda[9]   1.537    0.5415  0.0247    0.9917    1.359    3.066      10001  50000
lambda[10]  1.511    0.4988  0.02194   0.9861    1.353    2.933      10001  50000
lambda[11]  1.533    0.5419  0.02455   0.9898    1.357    3.058      10001  50000
lambda[12]  1.477    0.464   0.01908   0.9754    1.343    2.81       10001  50000
lambda[13]  1.415    0.388   0.01342   0.948     1.321    2.485      10001  50000
lambda[14]  1.389    0.3678  0.0114    0.9278    1.309    2.407      10001  50000
lambda[15]  1.323    0.3055  0.005829  0.8321    1.281    2.1        10001  50000
lambda[16]  1.276    0.2828  0.003788  0.7381    1.258    1.932      10001  50000
lambda[17]  1.235    0.2745  0.005301  0.6447    1.239    1.793      10001  50000
lambda[18]  1.207    0.2737  0.007195  0.5871    1.224    1.71       10001  50000
lambda[19]  1.182    0.2766  0.008829  0.5334    1.212    1.666      10001  50000
lambda[20]  1.174    0.277   0.009265  0.5287    1.206    1.653      10001  50000
lambda[21]  1.169    0.2802  0.00968   0.5181    1.203    1.653      10001  50000
lambda[22]  1.157    0.2835  0.01073   0.4935    1.198    1.633      10001  50000
lambda[23]  1.168    0.2802  0.009498  0.527     1.201    1.649      10001  50000
lambda[24]  1.153    0.2809  0.01071   0.5017    1.194    1.623      10001  50000
lambda[25]  1.16     0.2816  0.01011   0.5       1.198    1.632      10001  50000
lambda[26]  1.145    0.2874  0.01143   0.4696    1.19     1.616      10001  50000
lambda[27]  1.144    0.2876  0.01156   0.4668    1.191    1.613      10001  50000
lambda[28]  1.162    0.2822  0.009881  0.5019    1.199    1.64       10001  50000
lambda[29]  1.182    0.2699  0.008226  0.5535    1.208    1.66       10001  50000
lambda[30]  1.228    0.2716  0.004637  0.6653    1.233    1.769      10001  50000
lambda[31]  1.282    0.2913  0.003429  0.7754    1.256    1.983      10001  50000
lambda[32]  1.319    0.3226  0.006162  0.8344    1.271    2.162      10001  50000
lambda[33]  1.348    0.3465  0.008884  0.8809    1.284    2.294      10001  50000
lambda[34]  1.404    0.415   0.01422   0.9311    1.302    2.593      10001  50000
lambda[35]  1.453    0.481   0.01923   0.952     1.319    2.824      10001  50000
lambda[36]  1.41     0.4129  0.01476   0.9388    1.306    2.593      10001  50000
lambda[37]  1.405    0.4174  0.01461   0.9335    1.303    2.594      10001  50000
lambda[38]  1.32     0.3172  0.006537  0.8452    1.27     2.168      10001  50000
lambda[39]  1.268    0.2825  0.002939  0.7511    1.249    1.936      10001  50000
lambda[40]  1.255    0.2778  0.002966  0.7233    1.242    1.891      10001  50000
lambda[41]  1.278    0.3019  0.004068  0.765     1.25     2.005      10001  50000
lambda[42]  1.222    0.2738  0.004366  0.6603    1.225    1.781      10001  50000
lambda[43]  1.183    0.2708  0.007353  0.5824    1.205    1.676      10001  50000
lambda[44]  1.166    0.2782  0.008718  0.538     1.195    1.652      10001  50000
lambda[45]  1.149    0.2824  0.01019   0.5162    1.187    1.632      10001  50000
lambda[46]  1.152    0.2833  0.009916  0.5165    1.188    1.636      10001  50000
lambda[47]  1.151    0.2867  0.009806  0.5111    1.189    1.637      10001  50000
lambda[48]  1.151    0.2929  0.009795  0.5007    1.19     1.649      10001  50000
lambda[49]  1.13     0.3026  0.01165   0.4487    1.18     1.629      10001  50000
lambda[50]  1.109    0.3178  0.01362   0.3904    1.174    1.609      10001  50000
lambda[51]  1.088    0.3377  0.01552   0.3325    1.166    1.602      10001  50000
lambda[52]  1.08     0.3549  0.01626   0.2841    1.165    1.613      10001  50000
lambda[53]  1.087    0.376   0.01559   0.2616    1.171    1.652      10001  50000
sigma       0.09086  0.1244  0.00783   0.001356  0.01111  0.392      10001  50000
y[53]       1.096    1.118   0.01646   0.0       1.0      4.0        10001  50000
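For part b), a sensible prediction interval for y_53 can be read from the val2.5pc and val97.5pc columns of the y[53] row, or constructed from MCMC output by pushing posterior draws of λ_53 through the Poisson distribution. The sketch below illustrates the latter with simulated stand-in draws; the lognormal λ values are hypothetical placeholders, not the actual BUGS output:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical posterior draws of lambda_53 (stand-ins for real MCMC output)
lam_draws = rng.lognormal(mean=0.0, sigma=0.5, size=50000)

# posterior predictive draws of y_53 and a central 95% prediction interval
y_draws = rng.poisson(lam_draws)
lo, hi = np.quantile(y_draws, [0.025, 0.975])
```

Because y_53 is a count, the resulting interval endpoints are integers, which is why the tabulated 2.5% and 97.5% points for y[53] above are whole numbers.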