
3.4. Autoregressive Processes

The First-Order Autoregressive Process


A first-order autoregression, denoted AR(1), satisfies the following difference
equation:

Y_t = c + φY_{t−1} + ε_t.    [3.4.1]

Again, {ε_t} is a white noise sequence satisfying [3.2.1] through [3.2.3]. Notice that
[3.4.1] takes the form of the first-order difference equation [1.1.1] or [2.2.1] in
which the input variable w_t is given by w_t = c + ε_t. We know from the analysis of
first-order difference equations that if |φ| ≥ 1, the consequences of the ε's for Y
accumulate rather than die out over time. It is thus perhaps not surprising that when
|φ| ≥ 1, there does not exist a covariance-stationary process for Y_t with finite
variance that satisfies [3.4.1]. In the case when |φ| < 1, there is a covariance-stationary
process for Y_t satisfying [3.4.1]. It is given by the stable solution to [3.4.1]
characterized in [2.2.9]:

Y_t = (c + ε_t) + φ(c + ε_{t−1}) + φ²(c + ε_{t−2}) + φ³(c + ε_{t−3}) + ⋯
    = [c/(1 − φ)] + ε_t + φε_{t−1} + φ²ε_{t−2} + φ³ε_{t−3} + ⋯.    [3.4.2]


This can be viewed as an MA(∞) process as in [3.3.13] with ψ_j given by φ^j. When
|φ| < 1, condition [3.3.15] is satisfied:

Σ_{j=0}^∞ |ψ_j| = Σ_{j=0}^∞ |φ|^j,

which equals 1/(1 − |φ|) provided that |φ| < 1. The remainder of this discussion of
first-order autoregressive processes assumes that |φ| < 1. This ensures that the
MA(∞) representation exists and can be manipulated in the obvious way, and that the
AR(1) process is ergodic for the mean.
Taking expectations of [3.4.2], we see that

E(Y_t) = [c/(1 − φ)] + 0 + 0 + ⋯,

so that the mean of a stationary AR(1) process is

μ = c/(1 − φ).    [3.4.3]
The variance is

E(Y_t − μ)² = (1 + φ² + φ⁴ + ⋯)·σ²
            = σ²/(1 − φ²),    [3.4.4]

while the jth autocovariance is

E(Y_t − μ)(Y_{t−j} − μ)
  = E[(ε_t + φε_{t−1} + ⋯ + φ^j ε_{t−j} + φ^{j+1}ε_{t−j−1} + ⋯)
        × (ε_{t−j} + φε_{t−j−1} + φ²ε_{t−j−2} + ⋯)]
  = [φ^j + φ^{j+2} + φ^{j+4} + ⋯]·σ²
  = φ^j[1 + φ² + φ⁴ + ⋯]·σ²
  = [φ^j/(1 − φ²)]·σ².    [3.4.5]
It follows from [3.4.4] and [3.4.5] that the autocorrelation function,
ρ_j = γ_j/γ_0 = φ^j,    [3.4.6]

follows a pattern of geometric decay as in panel (d) of Figure 3.1. Indeed, the
autocorrelation function [3.4.6] for a stationary AR(1) process is identical to the
dynamic multiplier or impulse-response function [1.1.10]; the effect of a one-unit
increase in ε_t on Y_{t+j} is equal to the correlation between Y_t and Y_{t+j}. A positive
value of φ, like a positive value of θ for an MA(1) process, implies positive
correlation between Y_t and Y_{t+1}. A negative value of φ implies negative first-order
but positive second-order autocorrelation, as in panel (e) of Figure 3.1.
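These formulas are easy to check numerically. The following Python sketch (the values c = 2, φ = 0.8, σ² = 1 and the truncation point are arbitrary illustrations, not from the text) builds the autocovariances from a truncated version of the MA(∞) representation [3.4.2] and compares them with the closed forms [3.4.3] through [3.4.6]:

```python
import numpy as np

c, phi, sigma2 = 2.0, 0.8, 1.0    # illustrative values; any |phi| < 1 works
K = 1000                          # truncation point for the MA(infinity) sums

psi = phi ** np.arange(K)         # psi_j = phi^j, the weights in [3.4.2]
mu = c / (1 - phi)                # [3.4.3]
gamma0 = sigma2 / (1 - phi**2)    # [3.4.4]

print("mean:", mu)
for j in range(4):
    # gamma_j = sigma^2 * sum_k psi_k * psi_{k+j}, truncated at K terms
    gamma_j = sigma2 * np.sum(psi[: K - j] * psi[j:])
    print(j, gamma_j, (phi**j) * gamma0, phi**j)   # matches [3.4.5]; last column is rho_j = phi^j
```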
Figure 3.3 shows the effect on the appearance of the time series {y_t} of varying
the parameter φ. The panels show realizations of the process in [3.4.1] with c = 0 and
ε_t ~ N(0, 1) for different values of the autoregressive parameter φ. Panel (a) displays
white noise (φ = 0). A series with no autocorrelation looks choppy and patternless
to the eye; the value of one observation gives no information about the value of the
next observation. For φ = 0.5 (panel (b)), the series seems smoother, with
observations above or below the mean often appearing in clusters of modest
duration. For φ = 0.9 (panel (c)), departures from the mean can be quite prolonged;
strong shocks take considerable time to die out.
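These qualitative differences can be reproduced with a short simulation; the sample size, random seed, and use of NumPy below are incidental choices rather than part of the original figure:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000   # long sample so that sample moments are close to their population values

for phi in (0.0, 0.5, 0.9):
    eps = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]          # [3.4.1] with c = 0
    dev = y - y.mean()
    rho1 = np.sum(dev[1:] * dev[:-1]) / np.sum(dev**2)   # sample first autocorrelation
    print(f"phi = {phi}: sample variance {y.var():.3f} (theory {1/(1 - phi**2):.3f}), "
          f"sample rho_1 {rho1:.3f} (theory {phi})")
```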
The moments for a stationary AR(1) were derived above by viewing it as an
MA(∞) process. A second way to arrive at the same results is to assume that the
process is covariance-stationary and calculate the moments directly from the
difference equation [3.4.1]. Taking expectations of both sides of [3.4.1],

E(Y_t) = c + φ·E(Y_{t−1}) + E(ε_t).    [3.4.7]

Assuming that the process is covariance-stationary,

E(Y_t) = E(Y_{t−1}) = μ.    [3.4.8]

Substituting [3.4.8] into [3.4.7],


μ = c + φμ + 0

or

μ = c/(1 − φ),    [3.4.9]

reproducing the earlier result [3.4.3].
Notice that formula [3.4.9] is clearly not generating a sensible statement if |φ| ≥ 1.
For example, if c > 0 and φ > 1, then Y_t in [3.4.1] is equal to a positive constant
plus a positive number times its lagged value plus a mean-zero random variable.
Yet [3.4.9] seems to assert that Y_t would be negative on average for such a process!
The reason that formula [3.4.9] is not valid when |φ| ≥ 1 is that we assumed in
[3.4.8] that Y_t is covariance-stationary, an assumption which is not correct when
|φ| ≥ 1.
To find the second moments of Y_t in an analogous manner, use [3.4.3] to
rewrite [3.4.1] as

Y_t = (1 − φ)μ + φY_{t−1} + ε_t

or

(Y_t − μ) = φ(Y_{t−1} − μ) + ε_t.    [3.4.10]


Now square both sides of [3.4.10] and take expectations:

E(Y_t − μ)² = φ²·E(Y_{t−1} − μ)² + 2φ·E[(Y_{t−1} − μ)ε_t] + E(ε_t²).    [3.4.11]



[Figure 3.3 appears here in the original.]

FIGURE 3.3 Realizations of an AR(1) process, Y_t = φY_{t−1} + ε_t, for alternative
values of φ: (a) φ = 0 (white noise); (b) φ = 0.5; (c) φ = 0.9.


Recall from [3.4.2] that (Y_{t−1} − μ) is a linear function of ε_{t−1}, ε_{t−2}, . . . :

(Y_{t−1} − μ) = ε_{t−1} + φε_{t−2} + φ²ε_{t−3} + ⋯.

But ε_t is uncorrelated with ε_{t−1}, ε_{t−2}, . . . , so ε_t must be uncorrelated with
(Y_{t−1} − μ). Thus the middle term on the right side of [3.4.11] is zero:

E[(Y_{t−1} − μ)ε_t] = 0.    [3.4.12]

Again, assuming covariance-stationarity, we have

E(Y_t − μ)² = E(Y_{t−1} − μ)² = γ_0.    [3.4.13]


Substituting [3.4.13] and [3.4.12] into [3.4.11],

γ_0 = φ²γ_0 + 0 + σ²

or

γ_0 = σ²/(1 − φ²),

reproducing [3.4.4].
Similarly, we could multiply [3.4.10] by (Y_{t−j} − μ) and take expectations:

E[(Y_t − μ)(Y_{t−j} − μ)]
  = φ·E[(Y_{t−1} − μ)(Y_{t−j} − μ)] + E[ε_t(Y_{t−j} − μ)].    [3.4.14]

But the term (Y_{t−j} − μ) will be a linear function of ε_{t−j}, ε_{t−j−1}, ε_{t−j−2}, . . . ,
which, for j > 0, will be uncorrelated with ε_t. Thus, for j > 0, the last term on the
right side in [3.4.14] is zero. Notice, moreover, that the expression appearing in the
first term on the right side of [3.4.14],

E[(Y_{t−1} − μ)(Y_{t−j} − μ)],

is the autocovariance of observations on Y separated by j − 1 periods:

E[(Y_{t−1} − μ)(Y_{t−1−(j−1)} − μ)] = γ_{j−1}.

Thus, for j > 0, [3.4.14] becomes

γ_j = φγ_{j−1}.    [3.4.15]
Equation [3.4.15] takes the form of a first-order difference equation,

y_t = φy_{t−1} + w_t,

in which the autocovariance γ takes the place of the variable y and in which the
subscript j (which indexes the order of the autocovariance) replaces t (which indexes
time). The input w_j in [3.4.15] is identically equal to zero. It is easy to see that the
difference equation [3.4.15] has the solution

γ_j = φ^j γ_0,

which reproduces [3.4.6]. We now see why the impulse-response function and
autocorrelation function for an AR(1) process coincide: they both represent the
solution to a first-order difference equation with autoregressive parameter φ, an
initial value of unity, and no subsequent shocks.
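A two-line computation makes the point concrete; φ = 0.8 below is an arbitrary illustration:

```python
phi = 0.8
rho = [1.0]                       # initial value of unity
for j in range(1, 6):
    rho.append(phi * rho[-1])     # the recursion [3.4.15], scaled by gamma_0
print(rho)                        # 1, 0.8, 0.64, 0.512, ... = phi^j, the impulse-response sequence
```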

The Second-Order Autoregressive Process


A second-order autoregression, denoted AR(2), satisfies

Y_t = c + φ_1Y_{t−1} + φ_2Y_{t−2} + ε_t,    [3.4.16]



or, in lag operator notation,

(1 − φ_1L − φ_2L²)Y_t = c + ε_t.    [3.4.17]


The difference equation [3.4.16] is stable provided that the roots of

1 − φ_1z − φ_2z² = 0    [3.4.18]

lie outside the unit circle. When this condition is satisfied, the AR(2) process turns
out to be covariance-stationary, and the inverse of the autoregressive operator in
[3.4.17] is given by

ψ(L) = (1 − φ_1L − φ_2L²)^{−1} = 1 + ψ_1L + ψ_2L² + ψ_3L³ + ⋯.    [3.4.19]

Recalling [1.2.44], the value of ψ_j can be found from the (1, 1) element of the
matrix F raised to the jth power, as in expression [1.2.28]. Where the roots of
[3.4.18] are distinct, a closed-form expression for ψ_j is given by [1.2.29] and [1.2.25].
Exercise 3.3 at the end of this chapter discusses alternative algorithms for calculating
ψ_j.
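A small sketch of this calculation, for a hypothetical AR(2) with φ_1 = 0.6 and φ_2 = 0.2 (values chosen only for illustration):

```python
import numpy as np

phi1, phi2 = 0.6, 0.2
F = np.array([[phi1, phi2],
              [1.0,  0.0]])       # companion matrix of [1.2.3]

# stationarity: roots of 1 - phi1*z - phi2*z^2 = 0 should lie outside the unit circle
roots = np.roots([-phi2, -phi1, 1.0])
print("roots of [3.4.18]:", roots, "stable:", np.all(np.abs(roots) > 1))

# psi_j as the (1, 1) element of F^j, checked against psi_j = phi1*psi_{j-1} + phi2*psi_{j-2}
psi = [1.0, phi1]
for j in range(2, 7):
    psi.append(phi1 * psi[-1] + phi2 * psi[-2])
Fj = np.eye(2)
for j in range(7):
    print(j, Fj[0, 0], psi[j])    # the two columns agree
    Fj = Fj @ F
```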
Multiplying both sides of [3.4.17] by ψ(L) gives

Y_t = ψ(L)c + ψ(L)ε_t.    [3.4.20]

It is straightforward to show that

ψ(L)c = c/(1 − φ_1 − φ_2)    [3.4.21]

and

Σ_{j=0}^∞ |ψ_j| < ∞;    [3.4.22]

the reader is invited to prove these claims in Exercises 3.4 and 3.5. Since [3.4.20] is
an absolutely summable MA(∞) process, its mean is given by the constant term:

μ = c/(1 − φ_1 − φ_2).    [3.4.23]
An alternative method for calculating the mean is to assume that the process is
covariance-stationary and take expectations of [3.4.16] directly:
E(Y_t) = c + φ_1E(Y_{t−1}) + φ_2E(Y_{t−2}) + E(ε_t),

implying

μ = c + φ_1μ + φ_2μ + 0,

reproducing [3.4.23].
To find second moments, write [3.4.16] as

Y_t = μ(1 − φ_1 − φ_2) + φ_1Y_{t−1} + φ_2Y_{t−2} + ε_t

or

(Y_t − μ) = φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ε_t.    [3.4.24]


Multiplying both sides of [3.4.24] by (Y_{t−j} − μ) and taking expectations produces

γ_j = φ_1γ_{j−1} + φ_2γ_{j−2}    for j = 1, 2, . . . .    [3.4.25]

Thus, the autocovariances follow the same second-order difference equation as
does the process for Y_t, with the difference equation for γ_j indexed by the lag j.
The autocovariances therefore behave just as the solutions to the second-order
difference equation analyzed in Section 1.2. An AR(2) process is covariance-stationary
provided that φ_1 and φ_2 lie within the triangular region of Figure 1.5.
When φ_1 and φ_2 lie within the triangular region but above the parabola in that
figure, the autocovariance function γ_j is the sum of two decaying exponential
functions of j. When φ_1 and φ_2 fall within the triangular region but below the
parabola, γ_j is a damped sinusoidal function.
The autocorrelations are found by dividing both sides of [3.4.25] by γ_0:

ρ_j = φ_1ρ_{j−1} + φ_2ρ_{j−2}    for j = 1, 2, . . . .    [3.4.26]

In particular, setting j = 1 produces

ρ_1 = φ_1 + φ_2ρ_1

or

ρ_1 = φ_1/(1 − φ_2).    [3.4.27]

For j = 2,

ρ_2 = φ_1ρ_1 + φ_2.    [3.4.28]


The variance of a covariance-stationary second-order autoregression can be
found by multiplying both sides of [3.4.24] by (Y_t − μ) and taking expectations:

E(Y_t − μ)² = φ_1·E(Y_{t−1} − μ)(Y_t − μ) + φ_2·E(Y_{t−2} − μ)(Y_t − μ)
              + E[ε_t(Y_t − μ)],

or

γ_0 = φ_1γ_1 + φ_2γ_2 + σ².    [3.4.29]

The last term (σ²) in [3.4.29] comes from noticing that

E[ε_t(Y_t − μ)] = E{ε_t[φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ε_t]}
               = φ_1·0 + φ_2·0 + σ².

Equation [3.4.29] can be written

γ_0 = φ_1ρ_1γ_0 + φ_2ρ_2γ_0 + σ².    [3.4.30]

Substituting [3.4.27] and [3.4.28] into [3.4.30] gives

γ_0 = [φ_1²/(1 − φ_2) + φ_2φ_1²/(1 − φ_2) + φ_2²]·γ_0 + σ²

or

γ_0 = (1 − φ_2)σ² / {(1 + φ_2)[(1 − φ_2)² − φ_1²]}.
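A numerical illustration of [3.4.25] through [3.4.28] and the variance formula just derived, again using the hypothetical values φ_1 = 0.6, φ_2 = 0.2, σ² = 1:

```python
phi1, phi2, sigma2 = 0.6, 0.2, 1.0

rho1 = phi1 / (1 - phi2)                                    # [3.4.27]
rho2 = phi1 * rho1 + phi2                                   # [3.4.28]
gamma0 = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2)**2 - phi1**2))
gamma1, gamma2 = rho1 * gamma0, rho2 * gamma0

# check [3.4.29]: gamma_0 = phi1*gamma_1 + phi2*gamma_2 + sigma^2
print(gamma0, phi1 * gamma1 + phi2 * gamma2 + sigma2)       # the two numbers coincide

# extend the autocorrelations with the recursion [3.4.26]
rho = [1.0, rho1, rho2]
for j in range(3, 8):
    rho.append(phi1 * rho[-1] + phi2 * rho[-2])
print(rho)                                                  # decaying pattern for these values
```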

The pth-Order Autoregressive Process


A pth-order autoregression, denoted AR(p), satisfies

Y_t = c + φ_1Y_{t−1} + φ_2Y_{t−2} + ⋯ + φ_pY_{t−p} + ε_t.    [3.4.31]


Provided that the roots of

1 − φ_1z − φ_2z² − ⋯ − φ_pz^p = 0    [3.4.32]


all lie outside the unit circle, it is straightforward to verify that a covariance-
stationary representation of the form
Y_t = μ + ψ(L)ε_t    [3.4.33]



exists, where

ψ(L) = (1 − φ_1L − φ_2L² − ⋯ − φ_pL^p)^{−1}

and Σ_{j=0}^∞ |ψ_j| < ∞. Assuming that the stationarity condition is satisfied, one way
to find the mean is to take expectations of [3.4.31]:

μ = c + φ_1μ + φ_2μ + ⋯ + φ_pμ,

or

μ = c/(1 − φ_1 − φ_2 − ⋯ − φ_p).    [3.4.34]
Using [3.4.34], equation [3.4.31] can be written

Y_t − μ = φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ⋯ + φ_p(Y_{t−p} − μ) + ε_t.    [3.4.35]

Autocovariances are found by multiplying both sides of [3.4.35] by (Y_{t−j} − μ) and
taking expectations:

γ_j = φ_1γ_{j−1} + φ_2γ_{j−2} + ⋯ + φ_pγ_{j−p}    for j = 1, 2, . . .
                                                              [3.4.36]
γ_0 = φ_1γ_1 + φ_2γ_2 + ⋯ + φ_pγ_p + σ²    for j = 0.
Using the fact that γ_{−j} = γ_j, the system of equations in [3.4.36] for j = 0, 1,
. . . , p can be solved for γ_0, γ_1, . . . , γ_p as functions of σ², φ_1, φ_2, . . . , φ_p. It
can be shown (the reader will be invited to prove this in Exercise 10.1 in Chapter 10)
that the (p × 1) vector (γ_0, γ_1, . . . , γ_{p−1})′ is given by the first p elements of the
first column of the (p² × p²) matrix σ²·[I_{p²} − (F ⊗ F)]^{−1}, where F is the
(p × p) matrix defined in equation [1.2.3] and ⊗ indicates the Kronecker product.
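A sketch of this claim for the same hypothetical AR(2) (φ_1 = 0.6, φ_2 = 0.2, σ² = 1), comparing the Kronecker-product formula with the closed-form expressions derived earlier:

```python
import numpy as np

phi = np.array([0.6, 0.2])              # (phi_1, ..., phi_p); illustrative values
sigma2 = 1.0
p = len(phi)

F = np.zeros((p, p))                    # companion matrix of [1.2.3]
F[0, :] = phi
F[1:, :-1] = np.eye(p - 1)

A = np.eye(p * p) - np.kron(F, F)
gamma = sigma2 * np.linalg.inv(A)[:p, 0]    # first p elements of the first column
print(gamma)                                # (gamma_0, gamma_1)

phi1, phi2 = phi
gamma0 = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2)**2 - phi1**2))
print(gamma0, gamma0 * phi1 / (1 - phi2))   # agrees with the closed forms for gamma_0 and gamma_1
```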
Dividing [3.4.36] by γ_0 produces the Yule-Walker equations:

ρ_j = φ_1ρ_{j−1} + φ_2ρ_{j−2} + ⋯ + φ_pρ_{j−p}    for j = 1, 2, . . . .    [3.4.37]

Thus, the autocovariances and autocorrelations follow the same pth-order
difference equation as does the process itself [3.4.31]. For distinct roots, their
solutions take the form

γ_j = g_1λ_1^j + g_2λ_2^j + ⋯ + g_pλ_p^j,    [3.4.38]

where the eigenvalues (λ_1, . . . , λ_p) are the solutions to

λ^p − φ_1λ^{p−1} − φ_2λ^{p−2} − ⋯ − φ_p = 0.
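The form [3.4.38] can also be checked directly. The sketch below (same illustrative AR(2), with g_1 and g_2 pinned down by γ_0 and γ_1) compares the eigenvalue formula with the recursion in [3.4.36]:

```python
import numpy as np

phi1, phi2, sigma2 = 0.6, 0.2, 1.0
gamma0 = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2)**2 - phi1**2))
gamma1 = gamma0 * phi1 / (1 - phi2)

lam = np.roots([1.0, -phi1, -phi2])          # solutions of lambda^2 - phi1*lambda - phi2 = 0
g = np.linalg.solve(np.vstack([lam**0, lam**1]), np.array([gamma0, gamma1]))

gam = [gamma0, gamma1]
for j in range(2, 8):
    gam.append(phi1 * gam[-1] + phi2 * gam[-2])      # the recursion in [3.4.36]
    print(j, gam[-1], (g * lam**j).sum())            # matches g_1*lam_1^j + g_2*lam_2^j
```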

3.5. Mixed Autoregressive Moving Average Processes


An ARMA(p, q) process includes both autoregressive and moving average terms:

Y_t = c + φ_1Y_{t−1} + φ_2Y_{t−2} + ⋯ + φ_pY_{t−p}
        + ε_t + θ_1ε_{t−1} + θ_2ε_{t−2} + ⋯ + θ_qε_{t−q},    [3.5.1]

or, in lag operator form,

(1 − φ_1L − φ_2L² − ⋯ − φ_pL^p)Y_t
    = c + (1 + θ_1L + θ_2L² + ⋯ + θ_qL^q)ε_t.    [3.5.2]
Provided that the roots of

1 − φ_1z − φ_2z² − ⋯ − φ_pz^p = 0    [3.5.3]

lie outside the unit circle, both sides of [3.5.2] can be divided by
(1 − φ_1L − φ_2L² − ⋯ − φ_pL^p) to obtain

Y_t = μ + ψ(L)ε_t,

where

ψ(L) = (1 + θ_1L + θ_2L² + ⋯ + θ_qL^q)/(1 − φ_1L − φ_2L² − ⋯ − φ_pL^p)

μ = c/(1 − φ_1 − φ_2 − ⋯ − φ_p).

Thus, stationarity of an ARMA process depends entirely on the autoregressive
parameters (φ_1, φ_2, . . . , φ_p) and not on the moving average parameters
(θ_1, θ_2, . . . , θ_q).
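As a sketch of these two points, the snippet below takes a hypothetical ARMA(1,1) with φ_1 = 0.7 and θ_1 = 0.4, checks stationarity from the autoregressive root alone, and recovers the weights of ψ(L) from the recursion ψ_j = θ_j + φ_1ψ_{j−1} + ⋯ + φ_pψ_{j−p} implied by ψ(L)·(1 − φ_1L − ⋯ − φ_pL^p) = (1 + θ_1L + ⋯ + θ_qL^q):

```python
import numpy as np

phi = [0.7]       # AR coefficients phi_1, ..., phi_p (illustrative)
theta = [0.4]     # MA coefficients theta_1, ..., theta_q (illustrative)

# the root of 1 - 0.7z = 0 is z = 1/0.7, outside the unit circle, so the process is stationary
print("AR roots:", np.roots([-c for c in reversed(phi)] + [1.0]))

psi = [1.0]
for j in range(1, 10):
    th = theta[j - 1] if j <= len(theta) else 0.0
    psi.append(th + sum(phi[k] * psi[j - 1 - k] for k in range(min(j, len(phi)))))
print(psi)        # 1, phi+theta, phi*(phi+theta), phi^2*(phi+theta), ...
```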
It is often convenient to write the ARMA process [3.5.1] in terms of deviations
from the mean:

Y_t − μ = φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ⋯ + φ_p(Y_{t−p} − μ)
            + ε_t + θ_1ε_{t−1} + θ_2ε_{t−2} + ⋯ + θ_qε_{t−q}.    [3.5.4]

Autocovariances are found by multiplying both sides of [3.5.4] by (Y_{t−j} − μ) and
taking expectations. For j > q, the resulting equations take the form

γ_j = φ_1γ_{j−1} + φ_2γ_{j−2} + ⋯ + φ_pγ_{j−p}    for j = q + 1, q + 2, . . . .    [3.5.5]

Thus, after q lags the autocovariance function γ_j (and the autocorrelation function
ρ_j) follow the pth-order difference equation governed by the autoregressive
parameters.

Note that [3.5.5] does not hold for j ≤ q, owing to correlation between θ_jε_{t−j}
and Y_{t−j}. Hence, an ARMA(p, q) process will have more complicated autocovariances
for lags 1 through q than would the corresponding AR(p) process. For j > q
with distinct autoregressive roots, the autocovariances will be given by

γ_j = h_1λ_1^j + h_2λ_2^j + ⋯ + h_pλ_p^j.    [3.5.6]

This takes the same form as the autocovariances for an AR(p) process [3.4.38],
though because the initial conditions (γ_0, γ_1, . . . , γ_q) differ for the ARMA and AR
processes, the parameters h_k in [3.5.6] will not be the same as the parameters g_k in
[3.4.38].
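A numerical illustration (the same hypothetical ARMA(1,1): φ = 0.7, θ = 0.4, σ² = 1): the autocovariances are built from a truncated MA(∞) representation as γ_j = σ² Σ_k ψ_kψ_{k+j}, and the recursion [3.5.5] holds from lag q + 1 = 2 onward but fails at lag 1:

```python
import numpy as np

phi, theta, sigma2 = 0.7, 0.4, 1.0
K = 2000
psi = np.empty(K)
psi[0], psi[1] = 1.0, phi + theta
for j in range(2, K):
    psi[j] = phi * psi[j - 1]                 # psi_j = phi^(j-1)*(phi + theta) for j >= 1

def gamma(j):
    return sigma2 * np.sum(psi[: K - j] * psi[j:])

for j in range(1, 5):
    print(j, gamma(j), phi * gamma(j - 1))    # equal for j = 2, 3, 4 but not for j = 1
```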
There is a potential for redundant parameterization with ARMA processes.
Consider, for example, a simple white noise process,

Y_t = ε_t.    [3.5.7]

Suppose both sides of [3.5.7] are multiplied by (1 − ρL):

(1 − ρL)Y_t = (1 − ρL)ε_t.    [3.5.8]

Clearly, if [3.5.7] is a valid representation, then so is [3.5.8] for any value of ρ.
Thus, [3.5.8] might be described as an ARMA(1, 1) process, with φ_1 = ρ and
θ_1 = −ρ. It is important to avoid such a parameterization. Since any value of ρ in
[3.5.8] describes the data equally well, we will obviously get into trouble trying to
estimate the parameter ρ in [3.5.8] by maximum likelihood. Moreover, theoretical
manipulations based on a representation such as [3.5.8] may overlook key
cancellations. If we are using an ARMA(1, 1) model in which θ_1 is close to −φ_1, then
the data might better be modeled as simple white noise.


A related overparameterization can arise with an ARMA(p, q) model. Consider
factoring the lag polynomial operators in [3.5.2] as in [2.4.3]:

(1 − λ_1L)(1 − λ_2L) ⋯ (1 − λ_pL)(Y_t − μ)
    = (1 − η_1L)(1 − η_2L) ⋯ (1 − η_qL)ε_t.    [3.5.9]

We assume that |λ_i| < 1 for all i, so that the process is covariance-stationary. If the
autoregressive operator (1 − φ_1L − φ_2L² − ⋯ − φ_pL^p) and the moving
average operator (1 + θ_1L + θ_2L² + ⋯ + θ_qL^q) have any roots in common,
say, λ_i = η_j for some i and j, then both sides of [3.5.9] can be divided by
(1 − λ_iL):

∏_{k≠i} (1 − λ_kL)(Y_t − μ) = ∏_{l≠j} (1 − η_lL)ε_t,

or

(1 − φ_1*L − φ_2*L² − ⋯ − φ_{p−1}*L^{p−1})(Y_t − μ)
    = (1 + θ_1*L + θ_2*L² + ⋯ + θ_{q−1}*L^{q−1})ε_t,    [3.5.10]

where

(1 − φ_1*L − φ_2*L² − ⋯ − φ_{p−1}*L^{p−1})
    = (1 − λ_1L)(1 − λ_2L) ⋯ (1 − λ_{i−1}L)(1 − λ_{i+1}L) ⋯ (1 − λ_pL)

(1 + θ_1*L + θ_2*L² + ⋯ + θ_{q−1}*L^{q−1})
    = (1 − η_1L)(1 − η_2L) ⋯ (1 − η_{j−1}L)(1 − η_{j+1}L) ⋯ (1 − η_qL).

The stationary ARMA(p, q) process satisfying [3.5.2] is clearly identical to the
stationary ARMA(p − 1, q − 1) process satisfying [3.5.10].
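The cancellation can be mimicked with ordinary polynomial arithmetic. In the sketch below the operators are hypothetical, built so that they share the factor corresponding to λ = η = 0.5; dividing that common factor out of both sides reduces an apparent ARMA(2, 1) to an AR(1):

```python
import numpy as np

ar = np.convolve([1.0, -0.9], [1.0, -0.5])   # (1 - 0.9L)(1 - 0.5L) = 1 - 1.4L + 0.45L^2
ma = np.array([1.0, -0.5])                   # 1 - 0.5L

# the lambda's and eta's of [3.5.9] are the reciprocals of the roots of the lag polynomials
lambdas = 1.0 / np.roots(ar[::-1])           # np.roots expects coefficients in descending powers
etas = 1.0 / np.roots(ma[::-1])
print(np.sort(lambdas), etas)                # {0.5, 0.9} and {0.5}: one common factor (1 - 0.5L)

ar_reduced, _ = np.polydiv(ar, [1.0, -0.5])  # exact division by the common factor
ma_reduced, _ = np.polydiv(ma, [1.0, -0.5])
print(ar_reduced, ma_reduced)                # [1. -0.9] and [1.]: simply (1 - 0.9L)(Y_t - mu) = eps_t
```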

3.6. The Autocovariance-Generating Function


For each of the covariance-stationary processes for Y_t considered so far, we
calculated the sequence of autocovariances {γ_j}_{j=−∞}^{∞}. If this sequence is absolutely
summable, then one way of summarizing the autocovariances is through a scalar-valued
function called the autocovariance-generating function:

g_Y(z) = Σ_{j=−∞}^{∞} γ_j z^j.    [3.6.1]

This function is constructed by taking the jth autocovariance and multiplying it by
some number z raised to the jth power, and then summing over all the possible
values of j. The argument of this function (z) is taken to be a complex scalar.
Of particular interest as an argument for the autocovariance-generating function
is any value of z that lies on the complex unit circle,

z = cos(ω) − i sin(ω) = e^{−iω},

where i = √−1 and ω is the radian angle that z makes with the real axis. If the
autocovariance-generating function is evaluated at z = e^{−iω} and divided by 2π,
the resulting function of ω,

s_Y(ω) = (1/2π)·g_Y(e^{−iω}) = (1/2π) Σ_{j=−∞}^{∞} γ_j e^{−iωj},

is called the population spectrum of Y. The population spectrum will be discussed
in detail in Chapter 6. There it will be shown that for a process with absolutely
summable autocovariances, the function s_Y(ω) exists and can be used to calculate all
of the autocovariances. This means that if two different processes share the same
autocovariance-generating function, then the two processes exhibit the identical
sequence of autocovariances.
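For a concrete case, the population spectrum of an MA(1) process can be evaluated directly from its three nonzero autocovariances ([3.3.3] through [3.3.5]); θ = 0.5 and σ² = 1 below are illustrative values:

```python
import numpy as np

theta, sigma2 = 0.5, 1.0
gammas = {-1: theta * sigma2, 0: (1 + theta**2) * sigma2, 1: theta * sigma2}

for omega in (0.0, np.pi / 2, np.pi):
    s = sum(g * np.exp(-1j * omega * j) for j, g in gammas.items()) / (2 * np.pi)
    # the imaginary part is zero; the real part equals (sigma^2/(2*pi))*(1 + theta^2 + 2*theta*cos(omega))
    print(omega, s.real)
```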
As an example of calculating an autocovariance-generating function, consider
the MA(1) process. From equations [3.3.3] to [3.3.5], its autocovariance-generating
function is
g_Y(z) = [θσ²]z^{−1} + [(1 + θ²)σ²]z^0 + [θσ²]z^1 = σ²·[θz^{−1} + (1 + θ²) + θz].

Notice that this expression could alternatively be written

g_Y(z) = σ²(1 + θz)(1 + θz^{−1}).    [3.6.2]

The form of expression [3.6.2] suggests that for the MA(q) process,
Y_t = μ + (1 + θ_1L + θ_2L² + ⋯ + θ_qL^q)ε_t,

the autocovariance-generating function might be calculated as

g_Y(z) = σ²(1 + θ_1z + θ_2z² + ⋯ + θ_qz^q)
            × (1 + θ_1z^{−1} + θ_2z^{−2} + ⋯ + θ_qz^{−q}).    [3.6.3]

This conjecture can be verified by carrying out the multiplication in [3.6.3] and
collecting terms by powers of z:
(1 + θ_1z + θ_2z² + ⋯ + θ_qz^q) × (1 + θ_1z^{−1} + θ_2z^{−2} + ⋯ + θ_qz^{−q})
  = (θ_q)z^q + (θ_{q−1} + θ_qθ_1)z^{q−1} + (θ_{q−2} + θ_{q−1}θ_1 + θ_qθ_2)z^{q−2}
    + ⋯ + (θ_1 + θ_2θ_1 + θ_3θ_2 + ⋯ + θ_qθ_{q−1})z^1    [3.6.4]
    + (1 + θ_1² + θ_2² + ⋯ + θ_q²)z^0
    + (θ_1 + θ_2θ_1 + θ_3θ_2 + ⋯ + θ_qθ_{q−1})z^{−1} + ⋯ + (θ_q)z^{−q}.

Comparison of [3.6.4] with [3.3.10] or [3.3.12] confirms that the coefficient on z^j in
[3.6.3] is indeed the jth autocovariance.
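The multiplication in [3.6.3] is just a convolution of coefficient sequences, so the claim is easy to verify numerically; the MA(2) coefficients θ_1 = 0.5, θ_2 = −0.3 below are arbitrary illustrations:

```python
import numpy as np

theta = np.array([1.0, 0.5, -0.3])      # (1, theta_1, theta_2): coefficients of increasing powers of z
sigma2 = 1.0
q = len(theta) - 1

coeffs = sigma2 * np.convolve(theta, theta[::-1])   # coefficients of z^{-q}, ..., z^0, ..., z^{q}
for j in range(-q, q + 1):
    print(j, coeffs[j + q])             # coefficient on z^j

# direct formula for the MA(q) autocovariances: gamma_j = sigma^2 * sum_k theta_k * theta_{k+j}
for j in range(q + 1):
    print(j, sigma2 * np.sum(theta[: q + 1 - j] * theta[j:]))   # same numbers as above
```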
This method for finding g_Y(z) extends to the MA(∞) case. If

Y_t = μ + ψ(L)ε_t    [3.6.5]

with

ψ(L) = ψ_0 + ψ_1L + ψ_2L² + ⋯    [3.6.6]

and

Σ_{j=0}^∞ |ψ_j| < ∞,    [3.6.7]

then

g_Y(z) = σ²·ψ(z)ψ(z^{−1}).    [3.6.8]
For example, the stationary AR(1) process can be written as

Y_t − μ = (1 − φL)^{−1}ε_t,

which is in the form of [3.6.5] with ψ(L) = 1/(1 − φL). The autocovariance-generating
function for an AR(1) process could therefore be calculated from

g_Y(z) = σ² / [(1 − φz)(1 − φz^{−1})].    [3.6.9]
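A final numerical check of [3.6.8] and [3.6.9], with illustrative values φ = 0.8 and σ² = 1: evaluating σ²ψ(z)ψ(z^{−1}) with a truncated power series for ψ(z) at points on the unit circle gives the same value as the closed form [3.6.9]:

```python
import numpy as np

phi, sigma2 = 0.8, 1.0
K = 400                                   # truncation of the power series psi(z) = sum_j phi^j z^j
j = np.arange(K)

for omega in (0.5, 1.0, 2.0):
    z = np.exp(-1j * omega)               # a point on the unit circle
    psi_z = np.sum(phi**j * z**j)
    psi_zinv = np.sum(phi**j * z**(-j))
    g_series = sigma2 * psi_z * psi_zinv                     # [3.6.8], truncated
    g_closed = sigma2 / ((1 - phi * z) * (1 - phi / z))      # [3.6.9]
    print(omega, g_series.real, g_closed.real)               # the two agree; imaginary parts are ~0
```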
