3.4. Autoregressive Processes

The First-Order Autoregressive Process
A first-order autoregression, denoted AR(1), satisfies the following difference equation:

Y_t = c + φY_{t−1} + ε_t.   [3.4.1]
Again, {ε_t} is a white noise sequence satisfying [3.2.1] through [3.2.3]. Notice that [3.4.1] takes the form of the first-order difference equation [1.1.1] or [2.2.1] in which the input variable w_t is given by w_t = c + ε_t. We know from the analysis of first-order difference equations that if |φ| ≥ 1, the consequences of the ε's for Y accumulate rather than die out over time. It is thus perhaps not surprising that when |φ| ≥ 1, there does not exist a covariance-stationary process for Y_t with finite variance that satisfies [3.4.1]. In the case when |φ| < 1, there is a covariance-stationary process for Y_t satisfying [3.4.1]. It is given by the stable solution to [3.4.1] characterized in [2.2.9]:

Y_t = (c + ε_t) + φ(c + ε_{t−1}) + φ²(c + ε_{t−2}) + φ³(c + ε_{t−3}) + ···
    = c/(1 − φ) + ε_t + φε_{t−1} + φ²ε_{t−2} + φ³ε_{t−3} + ···.   [3.4.2]

This can be viewed as an MA(∞) process with ψ_j = φ^j, and the absolute-summability condition for such a representation is satisfied here:

Σ_{j=0}^∞ |ψ_j| = Σ_{j=0}^∞ |φ|^j,

which equals 1/(1 − |φ|) provided that |φ| < 1. The remainder of this discussion of first-order autoregressive processes assumes that |φ| < 1. This ensures that the MA(∞) representation exists and can be manipulated in the obvious way, and that the AR(1) process is ergodic for the mean.
Taking expectations of [3.4.2], we see that the mean is

μ = E(Y_t) = c/(1 − φ),   [3.4.3]

and the variance is

γ_0 = E(Y_t − μ)² = E(ε_t + φε_{t−1} + φ²ε_{t−2} + ···)²
    = (1 + φ² + φ⁴ + φ⁶ + ···)σ²   [3.4.4]
    = σ²/(1 − φ²).
The jth autocovariance is

γ_j = E(Y_t − μ)(Y_{t−j} − μ)
    = E[(ε_t + φε_{t−1} + ··· + φ^j ε_{t−j} + φ^{j+1} ε_{t−j−1} + φ^{j+2} ε_{t−j−2} + ···)
       × (ε_{t−j} + φε_{t−j−1} + φ²ε_{t−j−2} + ···)]
    = [φ^j + φ^{j+2} + φ^{j+4} + ···]σ²
    = φ^j[1 + φ² + φ⁴ + ···]σ²   [3.4.5]
    = [φ^j/(1 − φ²)]σ².
It follows from [3.4.4] and [3.4.5] that the autocorrelation function,

ρ_j = γ_j/γ_0 = φ^j,   [3.4.6]

follows a pattern of geometric decay as in panel (d) of Figure 3.1. Indeed, the autocorrelation function [3.4.6] for a stationary AR(1) process is identical to the dynamic multiplier or impulse-response function [1.1.10]; the effect of a one-unit increase in ε_t on Y_{t+j} is equal to the correlation between Y_t and Y_{t+j}. A positive value of φ, like a positive value of θ for an MA(1) process, implies positive correlation between Y_t and Y_{t+1}. A negative value of φ implies negative first-order but positive second-order autocorrelation, as in panel (e) of Figure 3.1.
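As a quick numerical illustration (a minimal sketch, not from the text; the parameter values and seed are arbitrary assumptions), the following simulates a long AR(1) realization and compares sample moments with the theoretical values in [3.4.3], [3.4.4], and [3.4.6]:

```python
import numpy as np

# Simulate Y_t = c + phi*Y_{t-1} + eps_t and check [3.4.3], [3.4.4], [3.4.6].
rng = np.random.default_rng(0)
c, phi, sigma, T = 1.0, 0.8, 1.0, 200_000

eps = rng.normal(0.0, sigma, T)
y = np.empty(T)
y[0] = c / (1 - phi)                      # start at the mean [3.4.3]
for t in range(1, T):
    y[t] = c + phi * y[t - 1] + eps[t]    # the AR(1) recursion [3.4.1]

mu = c / (1 - phi)                        # theoretical mean [3.4.3]
gamma0 = sigma**2 / (1 - phi**2)          # theoretical variance [3.4.4]
print(y.mean(), mu)                       # sample vs. theoretical mean
print(y.var(), gamma0)                    # sample vs. theoretical variance

d = y - y.mean()
for j in range(1, 5):
    rho_j = (d[j:] * d[:-j]).mean() / d.var()
    print(j, rho_j, phi**j)               # sample vs. rho_j = phi^j  [3.4.6]
```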
Figure 3.3 shows the effect on the appearance of the time series {y_t} of varying the parameter φ. The panels show realizations of the process in [3.4.1] with c = 0 and ε_t ~ N(0, 1) for different values of the autoregressive parameter φ. Panel (a) displays white noise (φ = 0). A series with no autocorrelation looks choppy and patternless to the eye; the value of one observation gives no information about the value of the next observation. For φ = 0.5 (panel (b)), the series seems smoother, with observations above or below the mean often appearing in clusters of modest duration. For φ = 0.9 (panel (c)), departures from the mean can be quite prolonged; strong shocks take considerable time to die out.
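A minimal sketch in the spirit of Figure 3.3 (the layout and seed are my own assumptions, not the book's): it generates and plots realizations of Y_t = φY_{t−1} + ε_t for φ = 0, 0.5, and 0.9.

```python
import numpy as np
import matplotlib.pyplot as plt

# Realizations of Y_t = phi*Y_{t-1} + eps_t with c = 0 and eps_t ~ N(0, 1).
rng = np.random.default_rng(1)
T = 90
fig, axes = plt.subplots(3, 1, figsize=(8, 8), sharex=True)
for ax, phi in zip(axes, (0.0, 0.5, 0.9)):
    eps = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]
    ax.plot(y)
    ax.set_title(f"phi = {phi}")
plt.tight_layout()
plt.show()
```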
The moments for a stationary AR(1) were derived above by viewing it as an MA(∞) process. A second way to arrive at the same results is to assume that the process is covariance-stationary and calculate the moments directly from the difference equation [3.4.1]. Taking expectations of both sides of [3.4.1],

E(Y_t) = c + φE(Y_{t−1}) + E(ε_t).   [3.4.7]

Assuming that the process is covariance-stationary,

E(Y_t) = E(Y_{t−1}) = μ,   [3.4.8]

so μ = c + φμ + 0, or

μ = c/(1 − φ),   [3.4.9]

reproducing the earlier result [3.4.3].
Notice that formula [3.4.9] is clearly not generating a sensible statement if |φ| ≥ 1. For example, if c > 0 and φ > 1, then Y_t in [3.4.1] is equal to a positive constant plus a positive number times its lagged value plus a mean-zero random variable. Yet [3.4.9] seems to assert that Y_t would be negative on average for such a process! The reason that formula [3.4.9] is not valid when |φ| ≥ 1 is that we assumed in [3.4.8] that Y_t is covariance-stationary, an assumption which is not correct when |φ| ≥ 1.
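A tiny check (my own illustration, with arbitrary c and φ) makes the failure concrete: iterating E(Y_t) = c + φE(Y_{t−1}) with c > 0 and φ > 1 sends the mean off toward +∞, while [3.4.9] returns a negative number.

```python
# With c > 0 and phi > 1, the iterated mean diverges, so the fixed point
# c/(1 - phi) from [3.4.9] cannot be the mean of any stationary process.
c, phi = 1.0, 1.1
m = 0.0
for t in range(50):
    m = c + phi * m          # E(Y_t) grows without bound
print(m)                     # a large positive number
print(c / (1 - phi))         # [3.4.9] gives -10.0, clearly not the mean
```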
To find the second moments of Y_t in an analogous manner, use [3.4.3] to rewrite [3.4.1] as

Y_t = μ(1 − φ) + φY_{t−1} + ε_t

or

Y_t − μ = φ(Y_{t−1} − μ) + ε_t.   [3.4.10]
[FIGURE 3.3 Realizations of an AR(1) process, Y_t = φY_{t−1} + ε_t, for alternative values of φ: (a) φ = 0; (b) φ = 0.5; (c) φ = 0.9. Horizontal axes run from 0 to 90.]
Squaring both sides of [3.4.10] and taking expectations gives

γ_0 = φ²γ_0 + 0 + σ²,

where the cross term vanishes because Y_{t−1} − μ is a linear function of ε_{t−1}, ε_{t−2}, ..., each of which is uncorrelated with ε_t. Solving,

γ_0 = σ²/(1 − φ²),

reproducing [3.4.4].
Similarly, we could multiply [3.4.10] by (Y_{t−j} − μ) and take expectations:

E[(Y_t − μ)(Y_{t−j} − μ)] = φE[(Y_{t−1} − μ)(Y_{t−j} − μ)] + E[ε_t(Y_{t−j} − μ)].

The last term is zero for j > 0, and the first term on the right involves the autocovariance of observations on Y separated by j − 1 periods, so for j > 0 the autocovariances satisfy the first-order difference equation

γ_j = φγ_{j−1},   [3.4.15]

in which the autocovariance γ takes the place of the variable y and in which the subscript j (which indexes the order of the autocovariance) replaces t (which indexes time). The input w_j in [3.4.15] is identically equal to zero. It is easy to see that the difference equation [3.4.15] has the solution

γ_j = φ^j γ_0,

which reproduces [3.4.6]. We now see why the impulse-response function and autocorrelation function for an AR(1) process coincide: they both represent the solution to a first-order difference equation with autoregressive parameter φ, an initial value of unity, and no subsequent shocks.
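The coincidence is easy to verify numerically; this short sketch (illustrative only, with an arbitrary φ) computes the impulse-response sequence by recursion and checks it against φ^j:

```python
import numpy as np

# For an AR(1), the impulse-response function and the autocorrelation
# function both solve x_j = phi*x_{j-1} with x_0 = 1 and no later shocks.
phi, J = 0.7, 10

irf = np.zeros(J)
irf[0] = 1.0                  # one-unit shock at j = 0
for j in range(1, J):
    irf[j] = phi * irf[j - 1] # propagate through y_t = phi*y_{t-1}

rho = phi ** np.arange(J)     # autocorrelations from [3.4.6]
print(np.allclose(irf, rho))  # True
```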
The Second-Order Autoregressive Process

The same two approaches apply to a second-order autoregression,

Y_t = c + φ_1 Y_{t−1} + φ_2 Y_{t−2} + ε_t.   [3.4.16]

Assuming covariance stationarity and taking expectations of both sides gives E(Y_t) = c + φ_1 E(Y_{t−1}) + φ_2 E(Y_{t−2}) + E(ε_t), implying

μ = c + φ_1 μ + φ_2 μ + 0,

or μ = c/(1 − φ_1 − φ_2), reproducing [3.4.23].
To find second moments, write [3.4.16] as

Y_t − μ = φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ε_t.

Multiplying both sides by (Y_{t−j} − μ) and taking expectations shows that the autocovariances satisfy γ_j = φ_1 γ_{j−1} + φ_2 γ_{j−2} for j = 1, 2, ...; dividing by γ_0, the autocorrelations satisfy ρ_j = φ_1 ρ_{j−1} + φ_2 ρ_{j−2}. For j = 1 (recalling that ρ_0 = 1 and ρ_{−1} = ρ_1), this becomes

ρ_1 = φ_1 + φ_2 ρ_1

or ρ_1 = φ_1/(1 − φ_2). For j = 2,

ρ_2 = φ_1 ρ_1 + φ_2.

The variance follows from γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ² = [φ_1 ρ_1 + φ_2 ρ_2]γ_0 + σ²; substituting the expressions for ρ_1 and ρ_2 and solving for γ_0 gives

γ_0 = (1 − φ_2)σ² / {(1 + φ_2)[(1 − φ_2)² − φ_1²]}.
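These AR(2) formulas can be checked against a long simulation; the following is an illustrative sketch with arbitrary (stationary) parameter values, not a prescription from the text:

```python
import numpy as np

# Check the AR(2) moment formulas against a simulated path.
rng = np.random.default_rng(2)
phi1, phi2, sigma, T = 0.5, 0.3, 1.0, 500_000

rho1 = phi1 / (1 - phi2)                  # first autocorrelation
rho2 = phi1 * rho1 + phi2                 # second autocorrelation
gamma0 = ((1 - phi2) * sigma**2 /
          ((1 + phi2) * ((1 - phi2)**2 - phi1**2)))   # variance

eps = rng.normal(0.0, sigma, T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

d = y - y.mean()
print(d.var(), gamma0)                    # sample vs. theoretical variance
print((d[1:] * d[:-1]).mean() / d.var(), rho1)
print((d[2:] * d[:-2]).mean() / d.var(), rho2)
```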
Mixed Autoregressive Moving Average Processes

An ARMA(p, q) process includes both autoregressive and moving average terms. Provided that the roots of

λ^p − φ_1 λ^{p−1} − φ_2 λ^{p−2} − ··· − φ_p = 0

lie inside the unit circle, the process is covariance-stationary and has an MA(∞) representation

Y_t = μ + ψ(L)ε_t,

where ψ(L) = (1 + θ_1 L + θ_2 L² + ··· + θ_q L^q)/(1 − φ_1 L − φ_2 L² − ··· − φ_p L^p). It is often convenient to write the process in terms of deviations from the mean:

Y_t − μ = φ_1(Y_{t−1} − μ) + φ_2(Y_{t−2} − μ) + ··· + φ_p(Y_{t−p} − μ)
          + ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2} + ··· + θ_q ε_{t−q}.   [3.5.4]
Autocovariances are found by multiplying both sides of [3.5.4] by (Y_{t−j} − μ) and taking expectations. For j > q, the resulting equations take the form

γ_j = φ_1 γ_{j−1} + φ_2 γ_{j−2} + ··· + φ_p γ_{j−p};

that is, after q lags the autocovariances follow the same pth-order difference equation as those of a pure AR(p) process.
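A quick numerical check of this recursion (an illustrative sketch with arbitrary ARMA(1, 1) parameters and a truncation point of my choosing): compute the autocovariances from the MA(∞) weights and confirm that γ_j = φ_1 γ_{j−1} holds for j > q = 1.

```python
import numpy as np

# Autocovariances from psi-weights: gamma_j = sigma^2 * sum_k psi_k psi_{k+j}.
phi1, theta1, sigma = 0.6, 0.4, 1.0
K = 500                                   # truncation point for psi weights

psi = np.zeros(K)
psi[0] = 1.0
psi[1] = phi1 * psi[0] + theta1           # psi_1 = phi1 + theta1
for j in range(2, K):
    psi[j] = phi1 * psi[j - 1]            # theta terms vanish for j > q

gamma = [sigma**2 * np.sum(psi[:K - j] * psi[j:]) for j in range(5)]
for j in range(2, 5):                     # the recursion holds only for j > q
    print(gamma[j], phi1 * gamma[j - 1])  # the pairs agree
```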
There is a potential for redundant parameterization with ARMA processes. Consider, for example, a simple white noise process,

Y_t = ε_t.   [3.5.7]

Suppose both sides of [3.5.7] are multiplied by (1 − ρL):

(1 − ρL)Y_t = (1 − ρL)ε_t.   [3.5.8]

Clearly, if [3.5.7] is a valid representation, then so is [3.5.8] for any value of ρ. Thus, [3.5.8] might be described as an ARMA(1, 1) process, with φ_1 = ρ and θ_1 = −ρ. It is important to avoid such a parameterization. Since any value of ρ in [3.5.8] describes the data equally well, we will obviously get into trouble trying to estimate the parameter ρ in [3.5.8] by maximum likelihood. Moreover, theoretical manipulations based on a representation such as [3.5.8] may overlook key cancellations. If we are using an ARMA(1, 1) model in which θ_1 is close to −φ_1, then the data might better be modeled as simple white noise.
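The cancellation is visible in the MA(∞) weights; this illustrative sketch (the ρ values are arbitrary) shows that the ARMA(1, 1) in [3.5.8] with φ_1 = ρ and θ_1 = −ρ collapses to white noise for every ρ:

```python
import numpy as np

# psi weights of an ARMA(1,1): psi_0 = 1, psi_1 = phi1 + theta1,
# psi_j = phi1 * psi_{j-1} for j >= 2. With theta1 = -phi1 they all vanish.
for rho in (0.3, 0.7, 0.95):
    psi = np.zeros(10)
    psi[0] = 1.0
    psi[1] = rho + (-rho)                 # psi_1 = phi1 + theta1 = 0
    for j in range(2, 10):
        psi[j] = rho * psi[j - 1]         # stays at zero
    print(rho, psi[:4])                   # [1., 0., 0., 0.] for every rho
```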
The Autocovariance-Generating Function

For a covariance-stationary process with autocovariance sequence {γ_j}, the autocovariance-generating function is defined as

g_Y(z) = Σ_{j=−∞}^{∞} γ_j z^j.   [3.6.1]

For an MA(1) process this works out to

g_Y(z) = σ²(1 + θz)(1 + θz^{−1}).   [3.6.2]

The form of expression [3.6.2] suggests that for the MA(q) process,

Y_t = μ + (1 + θ_1 L + θ_2 L² + ··· + θ_q L^q)ε_t,

the autocovariance-generating function might be calculated as

g_Y(z) = σ²(1 + θ_1 z + θ_2 z² + ··· + θ_q z^q)(1 + θ_1 z^{−1} + θ_2 z^{−2} + ··· + θ_q z^{−q}).   [3.6.3]
This conjecture can be verified by carrying out the multiplication in [3.6.3] and collecting terms by powers of z:

(1 + θ_1 z + θ_2 z² + ··· + θ_q z^q)(1 + θ_1 z^{−1} + θ_2 z^{−2} + ··· + θ_q z^{−q})
  = (θ_q)z^q + (θ_{q−1} + θ_q θ_1)z^{q−1} + (θ_{q−2} + θ_{q−1}θ_1 + θ_q θ_2)z^{q−2}
    + ··· + (θ_1 + θ_2 θ_1 + θ_3 θ_2 + ··· + θ_q θ_{q−1})z^1
    + (1 + θ_1² + θ_2² + ··· + θ_q²)z^0   [3.6.4]
    + (θ_1 + θ_2 θ_1 + θ_3 θ_2 + ··· + θ_q θ_{q−1})z^{−1} + ··· + (θ_q)z^{−q}.

Comparison of [3.6.4] with [3.3.10] or [3.3.12] confirms that the coefficient on z^j in [3.6.3] is indeed the jth autocovariance.
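The multiplication in [3.6.3] is just a polynomial convolution, so the check is easy to automate; this illustrative sketch (with arbitrary θ values) compares the convolution coefficients with the directly computed MA(q) autocovariances:

```python
import numpy as np

# Coefficients of sigma^2*(1 + theta_1 z + ...)(1 + theta_1 z^-1 + ...),
# ordered from z^-q up to z^q, via convolution with the reversed polynomial.
theta = np.array([1.0, 0.5, -0.3, 0.2])   # (1, theta_1, theta_2, theta_3), q = 3
sigma = 1.0
g = sigma**2 * np.convolve(theta, theta[::-1])

# Direct computation: gamma_j = sigma^2 * sum_k theta_k theta_{k+j}, theta_0 = 1.
q = len(theta) - 1
gamma = [sigma**2 * np.sum(theta[: q + 1 - j] * theta[j:]) for j in range(q + 1)]
print(g[q:])       # coefficients on z^0, z^1, ..., z^q
print(gamma)       # matches; g is symmetric in z and z^-1
```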
This method for finding g_Y(z) extends to the MA(∞) case. If

Y_t = μ + ψ(L)ε_t   [3.6.5]

with

ψ(L) = Σ_{j=0}^∞ ψ_j L^j   [3.6.6]

and

Σ_{j=0}^∞ |ψ_j| < ∞,   [3.6.7]

then

g_Y(z) = σ²ψ(z)ψ(z^{−1}).   [3.6.8]
For example, the stationary AR(1) process can be written as

Y_t − μ = (1 − φL)^{−1} ε_t,

which is in the form of [3.6.5] with ψ(L) = 1/(1 − φL). The autocovariance-generating function for an AR(1) process could therefore be calculated from

g_Y(z) = σ² / [(1 − φz)(1 − φz^{−1})].   [3.6.9]
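The coefficients of [3.6.9] should reproduce the AR(1) autocovariances in [3.4.5]. One way to confirm this numerically (an illustrative sketch; the grid size and φ are my own choices) is to evaluate g_Y(z) on the unit circle and recover its coefficients with an inverse discrete Fourier transform:

```python
import numpy as np

# Recover the coefficients of g_Y(z) = sigma^2/((1 - phi z)(1 - phi z^-1))
# from its values on the unit circle; they should equal
# gamma_j = phi^j * sigma^2 / (1 - phi^2) from [3.4.5].
phi, sigma, N = 0.8, 1.0, 1024

z = np.exp(2j * np.pi * np.arange(N) / N)        # points on the unit circle
g = sigma**2 / ((1 - phi * z) * (1 - phi / z))   # [3.6.9]
gamma = np.fft.ifft(g).real                      # coefficient on z^j

for j in range(4):
    print(gamma[j], phi**j * sigma**2 / (1 - phi**2))
```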