ARMA Series

If $Y_t = \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \epsilon_t$ where $\epsilon_t$ is white noise, then $Y_t$ is called an autoregressive series of order $p$, denoted AR(p).

Important because AR models are easy to fit (the coefficients can be estimated by least squares) and many stationary series are well approximated by low-order autoregressions.

AR (1) series

AR (1) series defined by $Y_t = \phi Y_{t-1} + \epsilon_t$, or $(1 - \phi L)Y_t = \epsilon_t$. $Y_{t-1}$ and $\epsilon_t$ are uncorrelated, so $var(Y_t) = \phi^2 var(Y_{t-1}) + \sigma^2_\epsilon$, and under stationarity $\sigma_Y^2 = \phi^2 \sigma^2_Y + \sigma^2_\epsilon$, which implies $\sigma_Y^2 = \sigma^2_\epsilon / (1 - \phi^2)$; stationarity therefore requires $\phi^2 < 1$.
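A minimal simulation check of the stationary variance (seed, sample size, and parameter values are my own choices):

```python
import numpy as np

# Simulate Y_t = phi * Y_{t-1} + eps_t and compare the sample variance
# with the stationary value sigma_eps^2 / (1 - phi^2).
rng = np.random.default_rng(0)
phi, sigma_eps, n = 0.7, 1.0, 200_000

eps = rng.normal(0.0, sigma_eps, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

sample_var = y[1000:].var()                 # drop burn-in so the path is ~stationary
theory_var = sigma_eps**2 / (1 - phi**2)    # stationary variance
print(sample_var, theory_var)
```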

Similarly, by inverting the autoregressive operator and applying it to $\{\epsilon_t\}$ we find that $Y_t = \sum_{u=0}^\infty \phi^u \epsilon_{t-u}$, which converges in mean square because $\sum |\phi|^{2u} < \infty$. An equivalent condition is that the root of $(1 - \phi L)$ lies outside the unit circle.
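A quick check that the truncated moving-average sum reproduces the recursion (the truncation point $K$ and seed are my own choices):

```python
import numpy as np

# Build an AR(1) series by the recursion, then reconstruct one value from
# the truncated MA(infinity) sum  sum_{u=0}^{K} phi^u * eps_{t-u}.
rng = np.random.default_rng(1)
phi, n, K = 0.6, 500, 60          # |phi| < 1, so phi^K is negligible

eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

t = n - 1
ma_approx = sum(phi**u * eps[t - u] for u in range(K + 1))
print(y[t], ma_approx)            # the two values agree to high precision
```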

If we multiply both sides of the equation by $Y_{t-u}$ (for $u \ge 1$) and take expectations: \[ E[Y_t Y_{t-u}] = \phi E[Y_{t-1} Y_{t-u}] + E[\epsilon_t Y_{t-u}] \] The last term vanishes, so $\gamma(u) = \phi \gamma(u-1)$, and hence $\gamma(u) = \phi^{|u|} \sigma_Y^2$.
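Solving the recurrence gives autocorrelations $\rho(u) = \phi^u$; a minimal numerical check (seed, sample size, and function names are my own):

```python
import numpy as np

# Simulate an AR(1) series and compare the sample autocorrelation at a few
# lags with the theoretical value phi^u.
rng = np.random.default_rng(2)
phi, n = 0.8, 200_000

eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]
y = y[1000:]                      # discard burn-in

def sample_acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for u in (1, 2, 3):
    print(u, sample_acf(y, u), phi**u)
```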

MA (1) series

MA (1) series defined by $Y_t = \epsilon_t + \theta \epsilon_{t-1}$, where $\epsilon_t$ is white noise. Its autocovariance function is

\[ \gamma(u) = \left\{ \begin{array}{ll} (1 + \theta^2) \sigma^2_\epsilon & u = 0 \\ \theta \sigma^2_\epsilon & u = \pm 1 \\ 0 & \textrm{otherwise} \end{array} \right. \]

(Can see the invertibility problem here: if we replace $\theta$ by $1 / \theta$ and $\sigma^2_\epsilon$ by $\theta^2 \sigma^2_\epsilon$ the function is unchanged).
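This can be checked directly from the autocovariance function; a small sketch (the function name is my own):

```python
import numpy as np

# Theoretical MA(1) autocovariances (gamma(0), gamma(1), gamma(u >= 2)) for
# (theta, sigma2) and for the "inverted" pair (1/theta, theta^2 * sigma2).
def ma1_gamma(theta, sigma2):
    return np.array([(1 + theta**2) * sigma2, theta * sigma2, 0.0])

theta, sigma2 = 0.5, 1.0
g1 = ma1_gamma(theta, sigma2)
g2 = ma1_gamma(1 / theta, theta**2 * sigma2)
print(g1, g2)     # identical: the two parameterisations are indistinguishable
```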

ARMA

A time series is ARMA if it can be written $\phi(L) Y_t = \theta(L) \epsilon_t$. It is stationary if the roots of $\phi(z)$ lie outside the unit circle, and invertible if the roots of $\theta(z)$ do.
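The root condition is easy to check numerically; a sketch using `np.roots` (the polynomial is my own example):

```python
import numpy as np

# phi(z) = 1 - 1.5 z + 0.56 z^2 = (1 - 0.7 z)(1 - 0.8 z); its roots are
# 1/0.7 and 1/0.8, both outside the unit circle, so the model is stationary.
phi_coeffs = [0.56, -1.5, 1.0]    # np.roots wants the highest power first
roots = np.roots(phi_coeffs)
print(roots, np.all(np.abs(roots) > 1))
```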

ARMA (1,1) series

Model: $Y_t = \phi Y_{t-1} + \epsilon_t + \theta \epsilon_{t-1}$

To derive the ACF note that $E(\epsilon_t Y_t) = \sigma^2_\epsilon$ and $E(\epsilon_{t-1} Y_t) = (\phi + \theta) \sigma^2_\epsilon$. As above, multiply the model by $Y_{t-u}$, take expectations, solve the recurrence, and we get:

\[ \gamma(u) = \left\{ \begin{array}{ll} \frac{1 + 2\phi\theta + \theta^2}{1 - \phi^2} \sigma^2_\epsilon & u = 0 \\ \frac{(1 + \phi\theta)(\phi + \theta)}{1 - \phi^2} \sigma^2_\epsilon & u = 1 \\ \phi \gamma(u - 1) & u \ge 2 \end{array} \right. \]

Similar to AR (1), except that the geometric decay starts from $\gamma(1)$ rather than $\gamma(0)$.
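A quick simulation check of the ARMA (1,1) values of $\gamma(0)$ and $\gamma(1)$ (parameter values and seed are my own choices, with $\sigma^2_\epsilon = 1$):

```python
import numpy as np

# Simulate Y_t = phi Y_{t-1} + eps_t + theta eps_{t-1} and compare the
# sample variance and lag-1 autocovariance with the theoretical values.
rng = np.random.default_rng(3)
phi, theta, n = 0.6, 0.4, 200_000

eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]
y = y[1000:]                      # discard burn-in

gamma0 = (1 + 2 * phi * theta + theta**2) / (1 - phi**2)
gamma1 = (1 + phi * theta) * (phi + theta) / (1 - phi**2)

yc = y - y.mean()
print(yc.var(), gamma0)
print(np.dot(yc[:-1], yc[1:]) / len(yc), gamma1)
```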

ARMA (p,q) series

Model: $\phi(L) Y_t = \theta(L) \epsilon_t$, or $Y_t = \psi(L) \epsilon_t$, where $\psi(L) = \theta(L) / \phi(L)$.

At lags greater than $q$ the autocovariances behave like those of an AR (p) series, but the first $q$ terms exhibit additional structure.
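The $\psi$ weights of the moving-average representation can be computed by the standard recursion $\psi_j = \theta_j + \sum_i \phi_i \psi_{j-i}$ with $\psi_0 = 1$; a sketch (the coefficients are an illustrative ARMA (1,1) example of my own):

```python
# Compute the psi weights of theta(L) / phi(L) by the recursion
# psi_j = theta_j + phi_1 psi_{j-1} + ... + phi_p psi_{j-p}, psi_0 = 1.
def psi_weights(phi, theta, n):
    psi = [1.0]
    for j in range(1, n):
        th = theta[j - 1] if j <= len(theta) else 0.0
        ar = sum(phi[i] * psi[j - 1 - i] for i in range(min(j, len(phi))))
        psi.append(th + ar)
    return psi

# ARMA(1,1) with phi = 0.6, theta = 0.4: psi_1 = phi + theta, and then
# psi_j = phi * psi_{j-1} for j >= 2.
print(psi_weights([0.6], [0.4], 5))
```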

Generalisations

ARIMA models

ARMA models are useful for stationary time series, but many real-life series are not stationary, so we need a way to generalise ARMA models. Difficult to do with complete generality, but possible for certain types of non-stationarity (e.g. polynomial trends).

Can define a trend model explicitly, e.g. $Y_t = \beta_0 + \beta_1 t + a_t$, or implicitly by taking differences, e.g. $Y_t - Y_{t-1} = \beta_1 + a_t$. The explicit model gives less variable estimates, but is not flexible enough in practice.

Can write $\bigtriangledown Y_t = \beta_1 + a_t$, where $\bigtriangledown$ is the differencing operator, $1 - L$. This suggests the generalisation $\bigtriangledown^d Y_t = \beta_1 + a_t$, which will remove a polynomial trend of degree $d$.
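A small sketch of differencing removing a polynomial trend (the series is my own toy example):

```python
import numpy as np

# A quadratic trend plus white noise: the second difference removes the
# trend, leaving the constant 2 * 0.01 plus stationary noise.
rng = np.random.default_rng(4)
t = np.arange(1000.0)
y = 2.0 + 0.5 * t + 0.01 * t**2 + rng.normal(size=t.size)

d2 = np.diff(y, n=2)              # second difference of Y_t
print(d2.mean())                  # approximately 0.02
```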

Written: ARIMA (p, d, q)
Model: $\phi(L) \bigtriangledown^d Y_t = \theta(L) \epsilon_t$

SARIMA models

Many real-life time series show seasonal trends with periodicity $s$. To keep the number of parameters down we usually assume the seasonal effects are independent of the non-seasonal ones, and so use a multiplicative model.

Written: SARIMA (p, d, q) $\times$ (P, D, Q)_s
Model: $\bigtriangledown^d \bigtriangledown_s^D \phi(L) \Phi(L^s) Y_t = \theta(L) \Theta(L^s) \epsilon_t$
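A small sketch of the seasonal differencing operator $\bigtriangledown_s$ (the toy series and seed are my own):

```python
import numpy as np

# A series with period-12 seasonality plus noise: the seasonal difference
# Y_t - Y_{t-12} removes the seasonal component exactly, leaving the
# difference of two noise terms (std ~ sqrt(2)).
rng = np.random.default_rng(5)
s, n = 12, 1200
t = np.arange(n)
y = 3.0 * np.sin(2 * np.pi * t / s) + rng.normal(size=n)

dy = y[s:] - y[:-s]               # seasonal difference
print(dy.mean(), dy.std())
```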