Consider an MA(3) model \[ Z_t = (1-\theta_1 B - \theta_2 B^2 - \theta_3 B^3) a_t \] where \(\theta_1 = 7\), \(\theta_2 = -16\), \(\theta_3 = 12\), and the white noise variance is \(\sigma_a^2=1\).

  1. Find the autocorrelation function \(\rho_k\) of the model. Also write a function that computes the theoretical partial autocorrelation function from \(\rho_k\) using the Durbin-Levinson algorithm and use this to compute \(\phi_{kk}\) up to lag 19.
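A minimal sketch of this computation in R (the MA(3) autocovariances follow from the standard formula \(\gamma_k = \sigma_a^2 \sum_j \psi_j \psi_{j+k}\) with \(\psi = (1, -\theta_1, -\theta_2, -\theta_3)\); the helper name `dl_pacf` is ours):

```r
# Theoretical autocovariances of the MA(3)
theta <- c(7, -16, 12)
psi <- c(1, -theta)                       # psi-weights (1, -theta1, -theta2, -theta3)
gamma <- sapply(0:3, function(k) sum(psi[1:(4 - k)] * psi[(1 + k):4]))
rho <- c(gamma / gamma[1], rep(0, 16))    # rho_0, ..., rho_19 (zero beyond lag 3)

# Durbin-Levinson recursion: phi_kk for k = 1..K from rho_0..rho_K
dl_pacf <- function(rho, K) {
  pacf <- numeric(K)
  phi <- numeric(0)                       # current phi_{k-1, 1..k-1}
  for (k in 1:K) {
    pk <- if (k == 1) rho[2] else
      (rho[k + 1] - sum(phi * rho[k:2])) / (1 - sum(phi * rho[2:k]))
    if (k > 1) phi <- phi - pk * rev(phi) # phi_{k,j} = phi_{k-1,j} - phi_kk * phi_{k-1,k-j}
    phi <- c(phi, pk)
    pacf[k] <- pk
  }
  pacf
}
pacf19 <- dl_pacf(rho, 19)
```

With these parameter values, \(\gamma_0 = 450\), \(\gamma_1 = -311\), \(\gamma_2 = 100\) and \(\gamma_3 = -12\), so for instance \(\phi_{11} = \rho_1 = -311/450\).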

  2. Write your own function that simulates a realization from the above model (you’re allowed to use rnorm in R but not arima.sim). Use this to study the bias of the sample autocorrelation and partial autocorrelation functions (here you may use the functions acf and pacf in R) by simulating 1000 realizations of length \(n=20\) from the model and comparing the means of the sample autocorrelation and partial autocorrelation functions to their true theoretical values (note that a time series of length \(n=20\) only gives estimates of the acf and pacf up to lag 19).
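One possible shape for the simulator and the acf part of the bias study (the name `sim_ma3` is ours; generating 3 pre-sample noise values makes the series stationary from \(t=1\)):

```r
# Simulate n values from Z_t = a_t - theta1 a_{t-1} - theta2 a_{t-2} - theta3 a_{t-3}
sim_ma3 <- function(n, theta = c(7, -16, 12), sigma = 1) {
  a <- rnorm(n + 3, sd = sigma)           # white noise, incl. 3 pre-sample values
  a[4:(n + 3)] - theta[1] * a[3:(n + 2)] -
    theta[2] * a[2:(n + 1)] - theta[3] * a[1:n]
}

# Mean sample ACF over 1000 realizations of length 20 (lag 0 dropped)
set.seed(1)
acfs <- replicate(1000, acf(sim_ma3(20), lag.max = 19, plot = FALSE)$acf[-1])
rowMeans(acfs)                            # compare against the theoretical rho_k
```

The pacf part is analogous with `pacf(..., lag.max = 19, plot = FALSE)$acf`.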

  3. Why is the above model not invertible? Find a third order invertible moving average model representing the same process. Hint: The R function polyroot is useful for finding roots of polynomials.
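A sketch of the root-flipping computation (the helper `polymul` is ours; a root \(x_i\) of \(\theta(x)\) inside the unit circle is replaced by \(1/x_i\), i.e. the factor \((1 - x/x_i^{-1}) = (1 - x_i x)\), and the white noise variance is rescaled to keep the same autocovariances):

```r
# Roots of theta(x) = 1 - 7x + 16x^2 - 12x^3
x <- polyroot(c(1, -7, 16, -12))
Mod(x)                              # all moduli < 1, hence not invertible

# Multiply polynomials via their coefficient vectors
polymul <- function(a, b) {
  out <- numeric(length(a) + length(b) - 1)
  for (i in seq_along(a)) for (j in seq_along(b))
    out[i + j - 1] <- out[i + j - 1] + a[i] * b[j]
  out
}

# Invertible MA polynomial: product of (1 - x_i B) over the flipped roots
p <- 1
for (xi in x) p <- polymul(p, c(1, -xi))
p <- Re(p)                          # coefficients 1, -theta1', -theta2', -theta3'
sigma2_new <- 1 / prod(Mod(x))^2    # rescaled white-noise variance
```

Here the roots work out to \(1/2\), \(1/2\) and \(1/3\), giving the invertible coefficients \(\theta_1' = 4/3\), \(\theta_2' = -7/12\), \(\theta_3' = 1/12\) and \(\sigma_a'^2 = 144\).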

  4. Write code that computes the coefficients of the pure autoregressive AR(\(\infty\)) representation of the model in point 3 and use this to compute the 1-step ahead infinite history forecast of \(Z_{11}\) based on the following observed values of \(Z_1,Z_2,\dots,Z_{10}\):

```r
c(17.9, -26.41, 32.43, -7.64, -19.76, 13.05, 1.63, 4, -21.99, 24.5)
```
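A sketch, assuming the invertible coefficients from point 3 (which work out to \(4/3, -7/12, 1/12\) here). Writing \(\theta(B)^{-1} = \sum_{j \ge 0} c_j B^j\) with \(c_0 = 1\), the identity \(\theta(B) \sum_j c_j B^j = 1\) gives the recursion \(c_j = \theta_1 c_{j-1} + \theta_2 c_{j-2} + \theta_3 c_{j-3}\), and \(Z_t = -\sum_{j\ge 1} c_j Z_{t-j} + a_t\):

```r
theta <- c(4/3, -7/12, 1/12)    # invertible coefficients from point 3
J <- 10
cc <- numeric(J + 1); cc[1] <- 1          # cc[j + 1] holds c_j, with c_0 = 1
for (j in 1:J) {
  m <- min(j, 3)
  cc[j + 1] <- sum(theta[1:m] * cc[j - (1:m) + 1])
}

# Truncated infinite-history 1-step forecast: Zhat_11 = -sum_{j=1}^{10} c_j Z_{11-j}
z <- c(17.9, -26.41, 32.43, -7.64, -19.76, 13.05, 1.63, 4, -21.99, 24.5)
zhat11 <- -sum(cc[2:(J + 1)] * rev(z))    # c_1 pairs with Z_10, ..., c_10 with Z_1
zhat11
```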
  5. A finite history 1-step ahead forecast of \(Z_{11}\) can alternatively be derived from the regression \[ Z_{11} = \phi_{10,1} Z_{10} + \dots + \phi_{10,10} Z_1 + e_{10}, \] where the \(\phi_{i,j}\)’s and \(\text{Var}(e_{10})\) can be computed via the Durbin-Levinson algorithm. Modify your implementation of the Durbin-Levinson algorithm from point 1 and use it to compute the 1-step ahead forecast by this method. Discuss any differences you see. How does the variance of the forecast error for the finite history forecast compare to that of the infinite history forecast? How do you, in general, expect the difference to depend on the roots of the moving average polynomial?
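One way to sketch the modification (the name `dl_full` is ours): keep the whole coefficient vector \(\phi_{k,1},\dots,\phi_{k,k}\) at each step and track the innovation variance via the standard update \(v_k = v_{k-1}(1 - \phi_{kk}^2)\), starting from \(v_0 = \gamma_0\):

```r
# Durbin-Levinson returning phi_{K,1..K} and v_K = Var(e_K),
# given the autocovariances gamma_0, ..., gamma_K
dl_full <- function(gamma) {
  K <- length(gamma) - 1
  rho <- gamma / gamma[1]
  phi <- numeric(0); v <- gamma[1]
  for (k in 1:K) {
    pk <- if (k == 1) rho[2] else
      (rho[k + 1] - sum(phi * rho[k:2])) / (1 - sum(phi * rho[2:k]))
    if (k > 1) phi <- phi - pk * rev(phi)
    phi <- c(phi, pk)
    v <- v * (1 - pk^2)
  }
  list(phi = phi, v = v)
}

gam <- c(450, -311, 100, -12, rep(0, 7))  # gamma_0..gamma_10 of the MA(3)
fit <- dl_full(gam)
z <- c(17.9, -26.41, 32.43, -7.64, -19.76, 13.05, 1.63, 4, -21.99, 24.5)
zhat11_fin <- sum(fit$phi * rev(z))       # phi_{10,1} Z_10 + ... + phi_{10,10} Z_1
fit$v                                     # Var(e_10), to compare with sigma_a'^2
```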

Consider a second order random walk \[ (1-B)^2 Z_t = a_t \] where \(a_t\) is Gaussian white noise with variance \(\sigma_a^2=1\). Suppose that we have the following observations up to time \(t=20\):

```r
c(0.79, 2.11, 5.17, 6.96, 10.95, 15.37, 18.22, 20.13, 22.11,
  24.09, 23.79, 24.25, 24.16, 24.24, 24.88, 27.04, 29.85, 33.79,
  36.94, 39.67)
```
  1. Write code that computes the forecast of \(Z_{20+l}\) for \(l=1,2,\dots,20\) and 95% forecast limits for lead times \(l=1,2,\dots,10\) and make a plot of the observed series, the forecast and the forecast limits. Hint: To derive the variance of the forecast error you may need to use the formula for square pyramidal numbers, see https://en.wikipedia.org/wiki/Square_pyramidal_number