Econometrics EViews 8
TUTORIAL V
The 5th tutorial deals with the autocorrelation and partial autocorrelation functions (ACF and PACF) and their usefulness in detecting the (stationary) stochastic processes generating the data. We will also deal with model selection and forecasting issues.
Definition of the partial autocorrelation function ρPk: the correlation between yt and yt−k after having eliminated the effect of all intermediate lags yt−1, yt−2, ..., yt−k+1.
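At lag 2 this definition reduces to the closed form ρP2 = (ρ2 − ρ1²)/(1 − ρ1²). A minimal numerical check, sketched in Python rather than EViews (the function name is ours):

```python
def pacf_lag2(rho1, rho2):
    """Closed-form partial autocorrelation at lag 2,
    given the ACF values rho1 and rho2."""
    return (rho2 - rho1 ** 2) / (1 - rho1 ** 2)

# For the AR(2) of Example 3 below (theta1 = 0.7, theta2 = 0.2) the ACF is
# rho1 = 0.875 and rho2 = 0.8125; the lag-2 PACF recovers theta2 = 0.2.
print(round(pacf_lag2(0.875, 0.8125), 4))  # 0.2
```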
View...
Correlogram
from the series menu, specifying in the dialog window whether one wants
to analyse the original series or its difference and the maximum number
of lags for ACF, PACF, and Ljung-Box test computation. Notice also
that the dashed bands correspond to
± 2/√T = ± 2 × s.e.(ρ̂k)    (2)
Therefore, they provide a test at the α ≈ 5% level of whether each autocorrelation parameter ρk equals 0, under the assumption that all the autocorrelation parameters up to lag k − 1 are equal to 0.
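With the tutorial's T = 2,000,000 simulated observations the bands in (2) are extremely tight; a quick Python sketch (the T = 400 case is just an illustrative value of ours):

```python
import math

def acf_confidence_band(T):
    """Half-width of the approximate 95% band for a sample
    autocorrelation under H0: rho_k = 0, i.e. 2/sqrt(T)."""
    return 2.0 / math.sqrt(T)

print(round(acf_confidence_band(400), 3))        # 0.1
print(round(acf_confidence_band(2_000_000), 4))  # 0.0014
```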
Of course, the shape of the ACF and PACF depends on the specific stochastic process, for instance a p-th order autoregressive process (AR(p)) or a q-th order moving average process (MA(q)).
The pattern of ACF and PACF can give some information on the
type of process:
i. For AR(p) processes: the ACF decays, while the PACF is zero for lags k > p.
ii. For MA(q) processes: the ACF is zero for lags k > q, while the PACF decays.
iii. For ARMA(p, q): the ACF decays from lag q, the PACF decays from lag p.
iv. For White Noise: ACF and PACF are all zero.
Then:

                          ACF                                       PACF
White noise               all ρk = 0                                all ρPk = 0
AR(1), θ1 > 0             direct exponential decay: ρk = θ1^k       ρP1 = θ1, ρPk = 0 ∀ k > 1
AR(1), θ1 < 0             oscillating decay: ρk = θ1^k              ρP1 = θ1, ρPk = 0 ∀ k > 1
AR(2) (real roots)        direct decay                              ρPk ≠ 0 if k ≤ 2, ρPk = 0 if k > 2
AR(2) (complex roots)     wavelike decay                            ρPk ≠ 0 if k ≤ 2, ρPk = 0 if k > 2
AR(p)                     decay towards 0                           ρPk ≠ 0 if k ≤ p, ρPk = 0 if k > p
Examples
For AR processes it is important to verify that they are stationary, i.e. that they have a constant mean and variance. One way to verify stationarity is to find the solutions of the characteristic equation a(z) = 0.
Example 1. AR(1) process with autoregressive coefficient θ1 = 0.7:
yt = 0.7yt−1 + εt
a(L)yt = yt − 0.7yt−1
a(L) = 1 − 0.7L
The solution to the equation a(z) = 0 is:
a(z) = 1 − 0.7z = 0 ⇔ z = 1/0.7 ≈ 1.43 > 1
The root lies outside the unit circle, so the process is stationary.
The ACF will then be equal to ρk = 0.7^k, with direct decay, and the PACF will be equal to ρP1 = 0.7, ρPk = 0 ∀ k > 1.
Example 2. AR(1) process with autoregressive coefficient θ1 = −0.7:
yt = −0.7yt−1 + εt
a(L) = 1 + 0.7L
The solution to the equation a(z) = 0 is:
a(z) = 1 + 0.7z = 0 ⇔ z = −1/0.7, |z| ≈ 1.43 > 1
Alternative verification (polynomial in m):
m + 0.7 = 0 ⇔ m = −0.7, |m| = 0.7 < 1
The ACF will then be equal to ρk = (−0.7)^k, with oscillating decay, and the PACF will be equal to ρP1 = −0.7, ρPk = 0 ∀ k > 1.
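Both AR(1) examples can be verified numerically; a Python sketch of the root check and the theoretical ACF (function names are ours):

```python
def ar1_char_root(theta):
    """Root of the AR(1) characteristic polynomial a(z) = 1 - theta*z."""
    return 1.0 / theta

def ar1_acf(theta, k):
    """Theoretical ACF of a stationary AR(1): rho_k = theta**k."""
    return theta ** k

# Example 1: theta = 0.7 -> root outside the unit circle, direct decay.
print(abs(ar1_char_root(0.7)) > 1)                       # True
print([round(ar1_acf(0.7, k), 3) for k in (1, 2, 3)])    # [0.7, 0.49, 0.343]
# Example 2: theta = -0.7 -> same root modulus, oscillating decay.
print([round(ar1_acf(-0.7, k), 3) for k in (1, 2, 3)])   # [-0.7, 0.49, -0.343]
```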
Example 3. AR(2) process with first and second order autoregressive coefficients θ1 = 0.7 and θ2 = 0.2:
yt = 0.7yt−1 + 0.2yt−2 + εt
Is it stationary?
a(L)yt = yt − 0.7yt−1 − 0.2yt−2
a(L) = 1 − 0.7L − 0.2L2
The solutions to the equation a(z) = 0 are:
a(z) = 1 − 0.7z − 0.2z^2 = 0 ⇔ z1,2 = [−(−0.7) ± √((−0.7)^2 − 4(−0.2)(1))] / (2(−0.2)) = (0.7 ± √1.29)/(−0.4) = (0.7 ± 1.136)/(−0.4)
⇒ |z1| = |−4.59| > 1, |z2| = |1.09| > 1
Alternative verification (polynomial in m):
m^2 − 0.7m − 0.2 = 0 ⇔ m1,2 = [−(−0.7) ± √((−0.7)^2 − 4(−0.2)(1))] / 2 = (0.7 ± 1.136)/2
⇒ |m1| = |−0.22| < 1, |m2| = |0.92| < 1
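The two stationarity checks for Example 3 can be reproduced with the quadratic formula; a Python sketch (function names are ours):

```python
import cmath

def char_roots(theta1, theta2):
    """Roots of the AR(2) characteristic polynomial
    a(z) = 1 - theta1*z - theta2*z**2."""
    a, b, c = -theta2, -theta1, 1.0          # a*z**2 + b*z + c = 0
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

def is_stationary_ar2(theta1, theta2):
    """Stationary iff both characteristic roots lie outside the unit circle."""
    return all(abs(z) > 1 for z in char_roots(theta1, theta2))

# Example 3: theta1 = 0.7, theta2 = 0.2
z1, z2 = char_roots(0.7, 0.2)
print(round(abs(z1), 2), round(abs(z2), 2))   # moduli roughly 4.59 and 1.09
print(is_stationary_ar2(0.7, 0.2))            # True
```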
The ACF can be computed through the Yule-Walker equations for an
AR(2) process:
γ0 = θ1 γ1 + θ2 γ2 + σε2 (5)
γ1 = θ1 γ0 + θ2 γ1 (6)
γ2 = θ1 γ1 + θ2 γ0 (7)
γ3 = θ1 γ2 + θ2 γ1 (8)
γ4 = θ1 γ3 + θ2 γ2 (9)
...
From (6):
γ1(1 − θ2) = θ1γ0
⇒ ρ1 = γ1/γ0 = θ1/(1 − θ2) = 0.7/(1 − 0.2) = 0.875
From (7):
γ2 = θ1γ1 + θ2γ0 = [θ1^2/(1 − θ2) + θ2]γ0 = [(θ1^2 + θ2 − θ2^2)/(1 − θ2)]γ0
⇒ ρ2 = γ2/γ0 = (θ1^2 + θ2 − θ2^2)/(1 − θ2) = (0.7^2 + 0.2 − 0.2^2)/(1 − 0.2) = 0.8125
From (8):
γ3 = θ1γ2 + θ2γ1
⇒ γ3/γ0 = θ1(γ2/γ0) + θ2(γ1/γ0)
⇒ ρ3 = θ1ρ2 + θ2ρ1 = 0.7 × 0.8125 + 0.2 × 0.875 = 0.74375
From (9):
γ4 = θ1γ3 + θ2γ2
⇒ ρ4 = θ1ρ3 + θ2ρ2 = 0.7 × 0.74375 + 0.2 × 0.8125 = 0.683125
...
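The recursion above can be coded directly; a Python sketch reproducing the ACF values just derived (the function name is ours):

```python
def ar2_acf(theta1, theta2, kmax):
    """Theoretical ACF of a stationary AR(2) via the Yule-Walker recursion:
    rho_0 = 1, rho_1 = theta1/(1 - theta2),
    rho_k = theta1*rho_{k-1} + theta2*rho_{k-2} for k >= 2."""
    rho = [1.0, theta1 / (1 - theta2)]
    for k in range(2, kmax + 1):
        rho.append(theta1 * rho[k - 1] + theta2 * rho[k - 2])
    return rho

# Example 3 (theta1 = 0.7, theta2 = 0.2): reproduces the values derived above.
print([round(r, 6) for r in ar2_acf(0.7, 0.2, 4)])
# [1.0, 0.875, 0.8125, 0.74375, 0.683125]
```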
Example 4. AR(2) process with first and second order autoregressive coefficients θ1 = −0.7 and θ2 = −0.2:
yt = −0.7yt−1 − 0.2yt−2 + εt
Is it stationary?
a(L)yt = yt + 0.7yt−1 + 0.2yt−2
a(L) = 1 + 0.7L + 0.2L2
The solutions to the equation a(z) = 0 are:
a(z) = 1 + 0.7z + 0.2z^2 = 0 ⇔ z1,2 = [−0.7 ± √(0.7^2 − 4(0.2)(1))] / (2(0.2)) = (−0.7 ± √−0.31)/0.4 = (−0.7 ± i√0.31)/0.4
⇒ z1 = −1.75 + 1.39i, z2 = −1.75 − 1.39i
Are these roots inside or outside the unit circle? The modulus of a (complex or real) number is ‖x + iy‖ ≡ √(x^2 + y^2). The stability condition is that the modulus of each root lies outside the unit circle. In our case:
‖z1‖ = ‖z2‖ = √((−1.75)^2 + 1.39^2) = 2.24 > 1
Alternative verification (polynomial in m):
m^2 + 0.7m + 0.2 = 0 ⇔ m1,2 = [−0.7 ± √(0.7^2 − 4(0.2)(1))] / 2 = (−0.7 ± i√0.31)/2
⇒ m1 = −0.35 + 0.28i, m2 = −0.35 − 0.28i
Are these roots inside or outside the unit circle? The stability condition is now that the modulus of each root is inside the unit circle. In our case:
‖m1‖ = ‖m2‖ = √((−0.35)^2 + 0.28^2) ≈ 0.45 < 1
The ACF will be
ρ1 = θ1/(1 − θ2) = −0.7/(1 + 0.2) = −0.583
ρ2 = (θ1^2 + θ2 − θ2^2)/(1 − θ2) = ((−0.7)^2 − 0.2 − (−0.2)^2)/(1 + 0.2) = 0.208
As for the PACF, ρP1 = ρ1 = −0.583, ρP2 = θ2 = −0.2, ρPk = 0 ∀ k > 2.
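The complex-root computations of Example 4 can be verified in Python; note that the roots of the m-polynomial are the reciprocals of the z-roots, so their common modulus is 1/‖z‖ = √0.2 ≈ 0.45:

```python
import cmath

# Example 4: a(z) = 1 + 0.7z + 0.2z**2 has complex-conjugate roots.
a, b, c = 0.2, 0.7, 1.0                     # a*z**2 + b*z + c = 0
disc = cmath.sqrt(b * b - 4 * a * c)        # square root of a negative number
z1 = (-b + disc) / (2 * a)
z2 = (-b - disc) / (2 * a)
print(round(z1.real, 2), round(abs(z1.imag), 2))   # -1.75 1.39
print(round(abs(z1), 2), round(abs(z2), 2))        # 2.24 2.24

# The m-roots are the reciprocals, so their modulus is 1/|z|:
print(round(1 / abs(z1), 2))                       # 0.45
```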
2 Simulation Program for some (stationary and non-stationary) stochastic processes
Commands to simulate some basic stochastic processes. The coefficient scalars rho1–rho7 used below are assumed to be defined first, with the values implied by the series labels:
scalar rho1 = 0.7
scalar rho2 = 0.2
scalar rho3 = 0.6
scalar rho4 = 0.5
scalar rho5 = 0.1
scalar rho6 = 0.05
scalar rho7 = 0.99
' white noise
series epsilon=@rnorm/20
epsilon.label white noise
' two AR(1), one with coefficient 0.7 (called y5) and the other -0.7 (called y6)
smpl 1 1
series y5=0
series y6=0
smpl 2 @last
series y5=rho1*y5(-1) + epsilon
series y6=-rho1*y6(-1) + epsilon
y5.label ar1 with coeff 0.7
y6.label ar1 with coeff -0.7
' two AR(2) with coefficients a) 0.7 and 0.2 (called y7); b) -0.7 and -0.2 (called y8)
smpl 1 2
series y7=0
series y8=0
smpl 3 @last
series y7 = rho1*y7(-1) + rho2*y7(-2) + epsilon
series y8 = -rho1*y8(-1) - rho2*y8(-2) + epsilon
y7.label ar2 with coeff 0.7 and 0.2
y8.label ar2 with coeff -0.7 and -0.2
' two MA(1), one with coefficient 0.7 (called y9) and the other -0.7 (called y10)
smpl 1 1
series y9=0
series y10=0
smpl 2 @last
series y9 = rho1*epsilon(-1)+ epsilon
series y10 = -rho1*epsilon(-1)+ epsilon
y9.label ma1 with coeff 0.7
y10.label ma1 with coeff -0.7
' ARMA(3,3) with coefficients 0.7 -0.6 0.5 and 0.2 0.1 0.05 (called y12)
smpl 1 3
series y12=0
smpl 4 @last
series y12 = rho1*y12(-1) - rho3*y12(-2) + rho4*y12(-3) + epsilon + rho2*epsilon(-1) + rho5*epsilon(-2) + rho6*epsilon(-3)
y12.label arma33 with coeff 0.7 -0.6 0.5 and 0.2 0.1 0.05
' AR(1) with coefficient 0.99 (called y4), initialised like the others
smpl 1 1
series y4=0
smpl 2 @last
series y4=rho7*y4(-1) + epsilon
y4.label ar1 with coeff 0.99
smpl @all
Analyse the properties of the series (levels, ACF and PACF), first by selecting the first 100 observations and then by selecting the whole sample. Since all the processes are ergodic and stationary (except the random walk), the empirical moments converge to the theoretical ones.
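The convergence claim can be cross-checked outside EViews by mirroring the y5 recursion; a Python sketch (the sample size, seed and generator are our choices, with εt ~ N(0, 0.05²) as implied by @rnorm/20):

```python
import random

random.seed(12345)
T = 10000
eps = [random.gauss(0.0, 1.0) / 20 for _ in range(T)]  # like @rnorm/20

# AR(1) with coefficient 0.7 (the EViews series y5), started at 0:
y5 = [0.0]
for t in range(1, T):
    y5.append(0.7 * y5[-1] + eps[t])

mean = sum(y5) / T
var = sum((x - mean) ** 2 for x in y5) / T
# Sample moments should be close to E[y] = 0 and sigma^2/(1 - theta^2):
print(abs(mean) < 0.02)                              # True
print(abs(var - 0.05 ** 2 / (1 - 0.7 ** 2)) < 0.001) # True
```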
[Figure: simulated series Y5 and Y6, first 100 observations]
[Figure: simulated series Y7 and Y8, first 100 observations]
[Figure: simulated series Y9 and Y10, first 100 observations]
[Figure: simulated series Y11 and Y12, first 100 observations]
[Figure: Correlogram of Y5]
[Figure: Correlogram of Y6]
[Figure: Correlogram of Y7]
[Figure: Correlogram of Y8]
[Figure: Correlogram of Y9]
[Figure: Correlogram of Y10]
[Figure: Correlogram of Y11]
[Figure: Correlogram of Y12]
3 Estimation
Notice how the output reports the values of the roots of the polynomial and how they can be plotted through the command
View
ARMA Structure
Dependent Variable: Y7
Method: Least Squares
Date: 04/13/11 Time: 21:06
Sample (adjusted): 3 2000000
Included observations: 1999998 after adjustments
Convergence achieved after 3 iterations
4 Model selection
For model selection, there are three main criteria:
(1) Ljung-Box test: the correlogram of the residuals should be that of white noise;
(2) Akaike (AIC) and Schwarz (BIC) information criteria: the best model is the one with the lowest AIC and BIC;
(3) the additional lags should be statistically significant.
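Criterion (2) can be illustrated with the least-squares forms AIC = log(RSS/T) + 2k/T and BIC = log(RSS/T) + k·log(T)/T; this is one common convention (EViews reports log-likelihood-based versions), and the RSS values below are hypothetical:

```python
import math

def aic(rss, T, k):
    """Akaike criterion, least-squares convention (one common form;
    EViews' version is log-likelihood based but penalises lags similarly)."""
    return math.log(rss / T) + 2 * k / T

def bic(rss, T, k):
    """Schwarz/Bayesian criterion, same convention."""
    return math.log(rss / T) + k * math.log(T) / T

# Hypothetical fits of an AR(1) vs an AR(2) to the same T = 1000 sample:
# the extra lag must reduce the RSS enough to offset the penalty.
print(aic(rss=2.50, T=1000, k=1) > aic(rss=2.40, T=1000, k=2))  # True: AR(2) preferred
print(bic(rss=2.50, T=1000, k=1) > bic(rss=2.40, T=1000, k=2))  # True
```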
View...
Residual tests
Correlogram - Q statistics
[Figure: Correlogram of Residuals]
Notice how in the first case the residuals are not white noise and how in both cases the Akaike and Schwarz information criteria lead one to (correctly) prefer the AR(2) process. The corresponding outputs are the following:
Dependent Variable: Y7
Method: Least Squares
Date: 04/13/11 Time: 21:08
Sample (adjusted): 2 2000000
Included observations: 1999999 after adjustments
Convergence achieved after 3 iterations
[Figure: Correlogram of Residuals]
Dependent Variable: Y7
Method: Least Squares
Date: 04/13/11 Time: 21:18
Sample (adjusted): 4 2000000
Included observations: 1999997 after adjustments
Convergence achieved after 3 iterations
5 Forecasting
Expand the range of the workfile from 2,000,000 to 2,000,100 observations. Perform a forecast for these 100 observations by using the (correct) estimated AR(2) equation. The command to use is:
Procs...
Forecast
from the equation menu. Notice how the forecast tends towards the unconditional mean of the process (equal to 0), as the process is stationary.
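This convergence can be seen by iterating the AR(2) recursion with future shocks set to their zero mean; a Python sketch with hypothetical end-of-sample values:

```python
# Multi-step forecasts of the AR(2) y_t = 0.7*y_{t-1} + 0.2*y_{t-2} + eps_t,
# iterated from two hypothetical end-of-sample values:
y_prev2, y_prev1 = 0.15, 0.20
forecasts = []
for h in range(100):
    y_hat = 0.7 * y_prev1 + 0.2 * y_prev2   # E[eps] = 0 for future shocks
    forecasts.append(y_hat)
    y_prev2, y_prev1 = y_prev1, y_hat

print(round(forecasts[0], 3))        # 0.17
print(abs(forecasts[99]) < 1e-3)     # True: essentially the unconditional mean 0
```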
[Figure: forecast Y7F with ± 2 S.E. bands over the 100 forecast observations]