Time Series Analysis: Applied Econometrics Prof. Dr. Simone Maxand
Applied Econometrics
WS 2020/21
Contents
6.1 Introduction
- Time Series (TS): sequence (set) of observations yt of a variable over time.
  - Observation at time t ∈ T: yt.
- Notation: write {yt}t∈T.
  - Sometimes shortly {yt} or yt (if it is obvious that the TS and not the observation at time t is meant).
  - {yt}, t = 1, ..., T, is considered as a finite segment of that infinite series.
- Examples:
  - Independence of yt, t = 1, ..., T.
  - Dependence under stationarity: yt = φyt−1 + εt with |φ| < 1.
  - Integrated process (stochastic trend): yt = yt−1 + εt.
  - Deterministic (linear) trend: yt = β · t + εt.
- Forecasts
  - Predict yt based on its own past yt−1, yt−2, ...
  - Predict yt based on other variables xt−1, xt−2, ...
  - Example: forecast of the inflation rate y by means of its own past, or an ADL model in which the inflation rate y is additionally influenced by the unemployment rate and its lagged values.
- Typical features of economic TS:
  - trends,
  - seasonal patterns,
  - structural breaks,
  - conditional heteroscedasticity,
  - outliers, etc.
[Figure: example time series plots (1983 to 2013 and 2000 to 2009); axis residue removed.]
- Lag operator L: Lyt = yt−1.
  - Convention: L^0 yt = yt, L^j yt = L(L^{j−1} yt) (j ≥ 2)
  ⇒ L^j yt = yt−j, j ≥ 0 (j-th lag of yt).
- Obviously, for some constant c and integers j, k:
  L^j c = c and L^j L^k yt = L^{j+k} yt.
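The lag and difference operators can be illustrated numerically; a minimal sketch in Python (the helper names `lag` and `diff1` are ad hoc, not from the slides):

```python
import numpy as np

# Minimal sketch of the lag operator L^j y_t = y_{t-j} on a finite sample,
# padding the first j entries with NaN.
def lag(y, j=1):
    y = np.asarray(y, dtype=float)
    if j == 0:
        return y.copy()          # convention: L^0 y_t = y_t
    out = np.full_like(y, np.nan)
    out[j:] = y[:-j]
    return out

def diff1(y):
    # Difference filter: Delta y_t = y_t - y_{t-1} = (1 - L) y_t
    y = np.asarray(y, dtype=float)
    return y - lag(y, 1)

y = np.array([2.0, 5.0, 4.0, 7.0])
print(lag(y, 1))    # [nan  2.  5.  4.]
print(diff1(y))     # [nan  3. -1.  3.]
```

Note that applying `lag` twice reproduces `L^j L^k = L^{j+k}` on the jointly defined entries.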
Applied Econometrics Chapter 6: Time series analysis
6.1 Introduction 14 | 124
Lag polynomials
- The lag operator is a linear operator.

Difference operator
- The difference operator ∆ of first order is defined by
  ∆yt = yt − yt−1 = (1 − L)yt.
  ⇒ y*t = ∆yt = yt − yt−1 is a linear filter (difference filter).
Examples
- Difference operator of order 2:
  ∆²yt = ∆(∆yt) = yt − 2yt−1 + yt−2.
Classical decomposition
- Many economic TS exhibit trends and seasonal patterns that suggest the decomposition
  yt = tt + st + rt
  into a trend component tt, a seasonal component st and a remainder rt.
  ⇒ Detrended and deseasonalized time series: rt = yt − tt − st.
- Autocorrelation: the correlation between yt and its lagged values yt−h.
Sample moments
- Besides plots, sample moments may serve as exploratory tools.
- Sample mean:
  ȳ = (1/T) Σ_{t=1}^{T} yt.
- Sample variance:
  σ̂² = (1/T) Σ_{t=1}^{T} (yt − ȳ)².
- Sample standard deviation: σ̂.
- Sample autocovariance at lag h:
  γ̂h = (1/(T − h)) Σ_{t=h+1}^{T} (yt − ȳ)(yt−h − ȳ).
- Sample autocorrelation (for each displacement h in time):
  ρ̂h = γ̂h/γ̂0  (|ρ̂h| ≤ 1).
- (Auto-)correlogram: plot ρ̂h against h.
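As a sketch (function names are mine, not from the slides), the sample moments above can be computed directly:

```python
import numpy as np

# Sample autocovariance with the 1/(T-h) normalization used above,
# and the resulting sample ACF rho_hat_h = gamma_hat_h / gamma_hat_0.
def sample_acov(y, h):
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    return np.sum((y[h:] - ybar) * (y[: T - h] - ybar)) / (T - h)

def sample_acf(y, h):
    return sample_acov(y, h) / sample_acov(y, 0)

rng = np.random.default_rng(0)
y = rng.normal(size=500)        # white noise: population autocorrelations vanish
print(sample_acov(y, 0))        # equals the sample variance
print(sample_acf(y, 1))         # should be close to zero for white noise
```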
6.2 Stochastic processes

- The observed TS is (part of) a realization of a stochastic process {Yt}t∈T0.
Stochastic process
- Definition. A stochastic process (SP) is a family of RVs {Yt}t∈T0 with values in a state space E = R.
  - Or: E = R^p (multivariate TS).
SP, cont.
- The functions {Y·(ω)}ω∈Ω on T are called realizations (or trajectories) of the SP; E is called the state (or phase) space.
- Frequently:
  - The term TS is used for both the data and the SP.
Example 1
- Let X ∼ N(0, 1) (defined on some space Ω) and define a SP {Yt}t∈N by
  Yt = (−1)^t X
  (i.e., more explicitly, Yt(ω) = (−1)^t X(ω), ω ∈ Ω, t ∈ N).
- Realizations of this SP: functions of t obtained by fixing ω.

Example 2
- Consider a SP {Yt}t∈N with
  P(Yt = 1) = P(Yt = −1) = 1/2 for all t.
⇒ It is not as obvious as in Example 1 that there exists a probability space carrying such a process.
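A quick simulation sketch of Example 1 (variable names are illustrative):

```python
import numpy as np

# Fixing omega amounts to drawing X once; the realization t -> Y_t(omega)
# then simply alternates between -X and +X.
rng = np.random.default_rng(1)
X = rng.standard_normal()            # X ~ N(0, 1)
t = np.arange(1, 11)
Y = (-1.0) ** t * X                  # Y_t = (-1)^t X
print(Y)                             # alternating sequence -X, +X, -X, ...
```

Each Yt is marginally N(0, 1), yet the whole path is determined by a single draw.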
Finite-dimensional distributions
- An important characteristic of a (real-valued) SP is the family of its finite-dimensional distributions, rather than the individual values Yt(ω) for each t and ω.
- In contrast, investigations frequently start by specifying the finite-dimensional distributions.
Solution
- A profound inference requires stationarity. {Yt} is (weakly) stationary if
  - E|Yt|² < ∞ ∀ t ∈ Z,
  - E[Yt] = µ ∀ t ∈ Z, and
  - γ(s, t) = γ(s + r, t + r) ∀ s, t, r ∈ Z.
Strict stationarity
Examples
- Let εt ∼ N(0, σ²) i.i.d. Then:

Random walk: Yt = Σ_{j=1}^{t} εj, with
  E[Yt] ≡ 0,
  γ(s, t) = σ² min(s, t)
⇒ not stationary, since γ(s, t) depends on more than s − t.
Ergodicity
- We have only one observation of the SP and thus only the time average Ȳ = (1/T) Σ_{t=1}^{T} Yt.
- Mean ergodicity:
  Ȳ = (1/T) Σ_{t=1}^{T} Yt →p µ.
  - Requires that γ(h) goes to 0 as h → ∞.
  - Sufficient condition:
    Σ_{h=0}^{∞} |γ(h)| < ∞.   (1)
- Analogously for the autocovariances:
  (1/(T − h)) Σ_{t=h+1}^{T} (Yt − µ)(Yt−h − µ) →p γ(h) ∀ h.
Example
- Assume
  Yit = µ + λi + νit,  i = 1, ..., I; t = 1, ..., T,
  with individual effects λi. For fixed i the time average converges to µ + λi, so the process is not mean ergodic.

White noise processes
1. {εt} is a white noise process, written {εt} ∼ WN(0, σ²), if E[εt] = 0 and
   E[εt εs] = σ² if t = s, and 0 otherwise.   (3)
2. If the εt are additionally independent, then {εt} is an independent white noise process, written
   {εt} ∼ IWN(0, σ²).
3. If εt ∼ N (0, σ 2 ) i.i.d., then {εt } is a Gaussian white noise
process, written
{εt } ∼ GWN(0, σ 2 ) .
- Clearly, GWN ⇒ IWN ⇒ WN.

[Figure: simulated white noise observations over time.]
Linear processes
- Let {εt} ∼ WN(0, σ²).
- Let {cj}j∈Z be a sequence of real-valued, absolutely summable coefficients, i.e.
  Σ_{j=−∞}^{∞} |cj| := Σ_{j=0}^{∞} |cj| + Σ_{j=1}^{∞} |c−j| < ∞.   (4)
- With the lag polynomial
  c(L) = Σ_{j=−∞}^{∞} cj L^j,
  define the linear process
  Yt = (µ +) c(L)εt = (µ +) Σ_{j=−∞}^{∞} cj εt−j
     = (µ +) [ Σ_{j=0}^{∞} cj εt−j + Σ_{j=1}^{∞} c−j εt+j ].
- Then
  Yt = Σ_{j=−∞}^{∞} cj εt−j = c(L)εt
  is weakly stationary with
  µY = E[Yt] = Σ_j cj µε = c(1)µε,
  γY(h) = Σ_j Σ_i ci cj γε(h + i − j).
- Causal (one-sided) case: cj = 0 ∀ j < 0. Then
  µY = E[Yt] ≡ 0, and
  γY(h) = E[Yt Yt−h] = σ² Σ_{j=0}^{∞} cj cj+h = γ(−h)  (h ∈ N).
- Special case: {εt} ∼ GWN(0, σ²) (Gaussian linear process).
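The causal-case formula γY(h) = σ² Σj cj cj+h can be checked by simulation; a sketch with cj = φ^j (the truncation length and all names are my choices):

```python
import numpy as np

# Check gamma_Y(h) = sigma^2 * sum_j c_j c_{j+h} against a long simulation,
# for the AR(1)-type coefficient sequence c_j = phi^j (truncated at J terms).
phi, sigma2, J = 0.6, 1.0, 60
c = phi ** np.arange(J)

def gamma_formula(h):
    return sigma2 * np.sum(c[: J - h] * c[h:])

rng = np.random.default_rng(2)
eps = rng.normal(scale=np.sqrt(sigma2), size=200_000)
# Y_t = sum_j c_j eps_{t-j}, computed by convolution (warm-up discarded)
Y = np.convolve(eps, c)[J: len(eps)]

ybar = Y.mean()
emp = np.mean((Y[1:] - ybar) * (Y[:-1] - ybar))   # empirical gamma_Y(1)
print(gamma_formula(1), emp)                      # both near phi/(1 - phi^2)
```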
6.3 ARMA models

Autoregressive processes
- AR(p) process:
  Φp(L)Yt = c + εt, where
  Φp(L) = 1 − φ1 L − ... − φp L^p
  is the autoregressive (AR) polynomial.
- MA(q) process:
  Yt = µ + εt + θ1 εt−1 + ... + θq εt−q,  {εt} ∼ WN(0, σ²), i.e.
  Yt = µ + Θq(L)εt, where
  Θq(L) = 1 + θ1 L + ... + θq L^q.
- ARMA(p, q) process: Φp(L)Yt = c + Θq(L)εt with
  Φp(L) = 1 − φ1 L − ... − φp L^p,
  Θq(L) = 1 + θ1 L + ... + θq L^q.
- Again, only a stationary solution to the ARMA(p, q) equations is usually called an ARMA(p, q) process.
- Which processes {Yt} solve the AR(1) equations
  Yt = c + φYt−1 + εt,  {εt} ∼ WN(0, σ²)?   (5)
- For |φ| < 1, the stationary solution is
  Yt = µ + Σ_{j=0}^{∞} φ^j εt−j,  µ = c/(1 − φ),
  where the infinite sum converges in the mean-square sense.
- For |φ| > 1 there also exists a stationary solution,
  Yt = µ − Σ_{j=1}^{∞} φ^{−j} εt+j,
  but it depends on future shocks (it is non-causal).
- For |φ| < 1:
  γ(h) = σ² φ^{|h|}/(1 − φ²),
  ρ(h) = γ(h)/γ(0) = φ^{|h|}.
- Note: the ACF satisfies the difference equation (for h > 0):
  ρ(h) = φρ(h − 1).
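A simulation sketch of the AR(1) ACF (the setup and helper names are mine):

```python
import numpy as np

# Simulate a stationary AR(1) and compare the sample ACF with rho(h) = phi^h.
phi, T = 0.8, 100_000
rng = np.random.default_rng(3)
eps = rng.standard_normal(T)
Y = np.empty(T)
Y[0] = eps[0] / np.sqrt(1 - phi ** 2)   # draw Y_0 from the stationary distribution
for t in range(1, T):
    Y[t] = phi * Y[t - 1] + eps[t]

def acf(y, h):
    y = y - y.mean()
    return np.sum(y[h:] * y[: len(y) - h]) / np.sum(y * y)

for h in (1, 2, 3):
    print(h, round(acf(Y, h), 3), round(phi ** h, 3))   # sample vs. phi^h
```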
- Partial autocorrelation (PACF) at lag h: the correlation between Yt and Yt−h after removing the linear influence of the intermediate values Z, where Ê(Yt|Z) is the best linear prediction of Yt based on Z,
  - or, equivalently, the last coefficient in a linear projection of Yt on its h most recent values.
[Figure: simulated AR(1) path Y with sample ACF and PACF; lag 1: ACF 0.510, PACF 0.510.]
[Figure: a more persistent simulated series Y with sample ACF and PACF; lag 1: 0.962/0.962, lag 2: 0.927/0.018.]
[Figure: another simulated series Y with its sample ACF and PACF.]
AR(p) process
- Yt = c + φ1 Yt−1 + ... + φp Yt−p + εt, i.e.
  Φp(L)Yt = c + εt, with Φp(L) = 1 − Σ_{j=1}^{p} φj L^j.
- A stationary and causal solution exists iff
  Φp(z) ≠ 0 ∀ |z| ≤ 1 (all roots outside the unit circle).
Example 1:
  Φ2(z) = 1 − (5/6)z + (1/6)z² = 0 ⇔ z² − 5z + 6 = 0
  ⇔ z1,2 = 5/2 ± √(25/4 − 6) = 5/2 ± 1/2 ⇔ z1 = 3, z2 = 2.

Example 2:
  Φ2(z) = 1 − (9/4)z + (1/2)z² = 0 ⇔ z² − (9/2)z + 2 = 0
  ⇔ z1,2 = 9/4 ± √(81/16 − 2) = 9/4 ± 7/4 ⇔ z1 = 4, z2 = 1/2.
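These root calculations can be verified numerically; a sketch (note numpy's coefficient ordering):

```python
import numpy as np

# np.roots expects coefficients from the highest power down, so
# Phi(z) = 1 - (5/6) z + (1/6) z^2 is passed as [1/6, -5/6, 1].
roots1 = np.roots([1 / 6, -5 / 6, 1])      # roots of the first polynomial
roots2 = np.roots([1 / 2, -9 / 4, 1])      # roots of the second polynomial
print(np.sort(roots1))                     # [2. 3.]
print(np.sort(roots2))                     # [0.5 4. ]
print(all(abs(z) > 1 for z in roots1))     # True  -> stationary, causal
print(all(abs(z) > 1 for z in roots2))     # False -> root 1/2 inside unit circle
```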
MA process
- MA(q) process: Yt = µ + Σ_{j=1}^{q} θj εt−j + εt = µ + Θq(L)εt.
- Moments: E[Yt] = µ and (with θ0 := 1)
  γ(h) = σ² Σ_{k=0}^{q−|h|} θk θk+|h|  if |h| ≤ q,
  γ(h) = 0  if |h| > q.
- Without any condition on the parameters, the process exists and is stationary.
- Special case q = 1:
  γ(0) = V[Yt] = (1 + θ²)σ²,
  γ(1) = θσ²,  γ(h) = 0 for h > 1,
  ρ(1) = θ/(1 + θ²),  −1/2 ≤ ρ(1) ≤ 1/2.
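The MA(1) moments can be verified by simulation; a minimal sketch (parameter values are mine):

```python
import numpy as np

# Verify gamma(0) = (1 + theta^2) sigma^2, gamma(1) = theta sigma^2,
# gamma(h) = 0 for h > 1, for an MA(1) with mu = 0 and sigma^2 = 1.
theta, T = 0.5, 500_000
rng = np.random.default_rng(4)
eps = rng.standard_normal(T + 1)
Y = eps[1:] + theta * eps[:-1]           # Y_t = eps_t + theta * eps_{t-1}

def acov(y, h):
    y = y - y.mean()
    return np.mean(y[h:] * y[: len(y) - h])

print(acov(Y, 0), 1 + theta ** 2)        # both near 1.25
print(acov(Y, 1), theta)                 # both near 0.5
print(acov(Y, 2))                        # near 0
```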
[Figure: simulated MA(1) path Y with sample ACF and PACF; lag 1: 0.479/0.479, lag 2: −0.042/−0.352.]
- Invertibility: if all roots of
  Θq(z) = 1 + θ1 z + ... + θq z^q
  are outside the unit circle, then the ARMA(p, q) process is invertible.
- Stationarity requires that all roots of
  Φp(z) = 1 − φ1 z − ... − φp z^p
  lie outside the unit circle (stability condition).
OLS estimation of AR(p)
- With regressor vector xt = (Yt−1, ..., Yt−p)':
  X'X/T = (1/T) Σ_{t=1}^{T} xt xt' →p E[xt xt'] = (γ(h − k))_{h,k=1}^{p} =: Γp
  and
  X'ε/T = (1/T) Σ_{t=1}^{T} xt εt →p E[xt εt] = 0
  ⇒ φ̂ = (X'X)^{−1} X'y = φ + (X'X/T)^{−1} (X'ε/T) →p φ.
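The OLS argument above can be sketched in code; the helper name `ols_ar` and the simulated DGP are my choices:

```python
import numpy as np

# Fit an AR(p) with intercept by OLS: regress Y_t on (1, Y_{t-1}, ..., Y_{t-p}).
def ols_ar(y, p):
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack(
        [np.ones(T - p)] + [y[p - j: T - j] for j in range(1, p + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta                      # (c_hat, phi_1_hat, ..., phi_p_hat)

# Simulated stationary AR(2) DGP: Y_t = 1 + 0.5 Y_{t-1} + 0.3 Y_{t-2} + eps_t
rng = np.random.default_rng(5)
T = 50_000
y = np.zeros(T)
for t in range(2, T):
    y[t] = 1.0 + 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()
print(ols_ar(y, 2))                  # close to [1.0, 0.5, 0.3]
```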
⇒ OLS residuals e1, ..., eT.
- Step 2: Use OLS to estimate the model, replacing the unobserved lagged errors by the residuals from Step 1.
Forecasting
- Mean squared error of prediction:
  MSEP(Ŷ) = E(Y − Ŷ)².
- The MSEP is minimized by the conditional expectation function E[Y|X], which satisfies
  E(Y − E[Y|X]) = 0.
- Note that the best linear prediction Ê[Y|X] is also an unbiased prediction.
- Iterating one-step predictions yields the h-step-ahead forecast.
- Make an initial guess of small values for the lag orders p and q, estimate the model, and compute the residuals
  et := (Φ̂p(L)/Θ̂q(L)) Yt − ĉ/Θ̂q(1).
- Box-Pierce statistic:
  QBP(h) = T Σ_{j=1}^{h} ρ̂ε(j)².
- If the εt are i.i.d., then QBP(h) is asymptotically χ² distributed with h − m degrees of freedom, where m is the number of parameters estimated.
- Ljung-Box statistic:
  QLB(h) = T(T + 2) Σ_{j=1}^{h} ρ̂ε(j)²/(T − j) ∼a χ²(h−m) under H0.
- Note that ρ̂ε(j) estimates the true correlations between the errors εt.
- Breusch-Godfrey LM test:
  LM = T · R² ∼a χ²r under H0,
  where R² is taken from an auxiliary regression of the residuals on r of their own lags.
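A sketch of the Ljung-Box statistic (the `ljung_box` helper is mine, not a library call):

```python
import numpy as np

# Q_LB(h) = T (T + 2) * sum_{j=1}^h rho_hat(j)^2 / (T - j), applied here to
# white noise, where it should behave roughly like a chi^2(h) draw.
def ljung_box(e, h):
    e = np.asarray(e, dtype=float) - np.mean(e)
    T = len(e)
    denom = np.sum(e * e)
    rho = np.array([np.sum(e[j:] * e[: T - j]) / denom for j in range(1, h + 1)])
    return T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, h + 1)))

rng = np.random.default_rng(6)
e = rng.standard_normal(1000)
print(ljung_box(e, 10))   # for i.i.d. input, typically near E[chi^2_10] = 10
```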
Model comparison
- Akaike Information Criterion (AIC):
  AIC := −2 ln[L(θ̂)] + 2m
- Bayesian Information Criterion (BIC):
  BIC := −2 ln[L(θ̂)] + m ln(T),
  where L(θ̂) is the likelihood function at the point θ̂ (MLE), and m denotes the number of model parameters.
- Pseudo-out-of-sample evaluation: e.g. compare forecasts over the hold-out periods
  T* = {T − h, ..., T − h − T* + 1}.
6.4 Nonstationary processes
Trend-stationary processes
- A trend-stationary process is given by
  Yt = δ0 + δt + Xt,  {Xt} stationary
  ⇒ E[Yt] = δ0 + δt.
- For Xt = φXt−1 + εt:
  Yt = δ0 + δt + φXt−1 + εt   (with Xt−1 = Yt−1 − δ0 − δ(t − 1))
     = [δ0(1 − φ) + δφ] + δ(1 − φ)t + φYt−1 + εt.
- Last representation: AR(1) process around a linear trend.
[Figure: simulated trend-stationary series Y with sample ACF and PACF; lag 1: 0.990/0.990, lag 2: 0.982/0.104.]
Integrated processes
- Definition: For d ∈ N0, a time series {Yt}, t = −∞, ..., ∞, is called integrated of order d, denoted {Yt} ∼ I(d), if {∆^d Yt} is a stationary process.
- Representation: ∆^d Yt = c + Ψ(L)εt, with
  Σ_{j=0}^{∞} |ψj| < ∞,  ψ0 = 1,  Ψ(1) ≠ 0.
[Figure: simulated I(1) series Y with sample ACF and PACF; lag 1: 0.995/0.995; the ACF decays very slowly.]
Random walk with drift: Yt = c + Yt−1 + εt, with
  E[Yt] = t · c + Y0,
  V[Yt] = t · σ²,
  ρ(t, t − h) = √(1 − h/t)  (for h ≥ 0).
- The drift c generates a linear trend!
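The moment formulas can be checked across simulated replications; a sketch (all parameter choices are mine):

```python
import numpy as np

# Simulate many random walks with drift Y_t = c + Y_{t-1} + eps_t, Y_0 = 0,
# and check E[Y_t] = t*c and V[Y_t] = t*sigma^2 across replications.
c, sigma, T, R = 0.2, 1.0, 200, 20_000
rng = np.random.default_rng(7)
eps = rng.normal(scale=sigma, size=(R, T))
Y = np.cumsum(c + eps, axis=1)            # Y_t for t = 1..T in each replication

print(Y[:, -1].mean(), c * T)             # both near 40
print(Y[:, -1].var(), sigma ** 2 * T)     # both near 200
```

The growing variance is exactly why the sample ACF of such a series dies out so slowly.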
[Figure: simulated random walk without drift, with sample ACF and PACF; lag 1: 0.987/0.987, lag 2: 0.972/−0.054.]
[Figure: simulated random walk with drift (trending upward), with sample ACF and PACF; lag 1: 0.996/0.996, lag 2: 0.991/−0.005.]
I(0): stationary process. I(1): e.g. random walk.
- Under I(0), standard test statistics are asymptotically normal; under I(1) they are not.
Dickey-Fuller test statistic:
  DF = (φ̂1 − 1)/√(V̂ar(φ̂1)) = α̂/√(V̂ar(α̂)) →d [(1/2)(W(1)² − 1)] / √(∫₀¹ W(r)² dr),
where W denotes a standard Brownian motion.
- Write the AR(p) process in terms of its lag polynomial:
  (1 − φ1 L − ... − φp L^p)Yt = εt.   (7)
- Alternative formulation: the AR polynomial
  Φp(z) = (1 − φ1 z − ... − φp z^p) = 0
  may have a root that is equal to one (and {Yt} is then called a unit root process), i.e.
  Φp(1) = 1 − φ1 − ... − φp = 0 ⇔ ρ = 1.
- In this case (7) is not stationary; if however there is one unit root, {∆Yt} is stationary.
- The test of
  H0: α = 0 vs. H1: α < 0
  is performed by means of a simple t-test statistic for α.
  - If {Yt} is stationary and causal, then α < 0.
  - Under H1, {Yt} might be nonstationary or stationary and non-causal; but these cases are of little practical relevance.
An empirical application with R
- It turns out, e.g. by unit root testing, that {xt} ∼ I(1) (unit root process).
- Model adequacy is then checked by inspection of residuals.
[Figure: U.S. Fixed Investment in levels (roughly 80 to 160 over the sample), with sample ACF and PACF of series x.]
Title: Augmented Dickey-Fuller Test
Test Results:
  PARAMETER: Lag Order: 1
  STATISTIC: Dickey-Fuller: -2.4072
  P VALUE: 0.4079
⇒ The unit root hypothesis cannot be rejected for the level series.
[Figure: Quarterly Changes in U.S. Fixed Investment (roughly −4 to 8), with sample ACF and PACF of series y.]
Title: Augmented Dickey-Fuller Test
Test Results:
  PARAMETER: Lag Order: 1
  STATISTIC: Dickey-Fuller: -5.3516
  P VALUE: 0.01
⇒ The unit root hypothesis is rejected for the differenced series, consistent with {xt} ∼ I(1).
Call:
arima(x = y, order = c(1, 0, 0))
Coefficients:
ar1 intercept
0.5019 1.0885
s.e. 0.0899 0.4994
> ar4<-arima(y,order=c(4,0,0))
> ar4
Call:
arima(x = y, order = c(4, 0, 0))
Coefficients:
ar1 ar2 ar3 ar4 intercept
0.5264 -0.0968 -0.0155 -0.2043 1.0125
s.e. 0.1015 0.1146 0.1149 0.1023 0.3085
> ar4r<-arima(y,order=c(4,0,0),fixed=c(NA,0,0,NA,NA))
> ar4r
Call:
arima(x = y, order = c(4, 0, 0), fixed = c(NA, 0, 0, NA, NA))
Coefficients:
ar1 ar2 ar3 ar4 intercept
0.4750 0 0 -0.2292 1.0150
s.e. 0.0879 0 0 0.0889 0.3247
[Figure: residuals of the fitted AR model (1950 to 1970) with the ACF of the residuals.]
Jarque-Bera test on the residuals:
  X-squared 2.8565, df 2, p-value 0.2397
⇒ Normality of the residuals is not rejected.
[Figure: histogram of the residuals.]
Coefficients:
         ar1  ar2  ar3      ar4    time
      0.4750    0    0  -0.2292  1.0150
s.e.  0.0884    0    0   0.0894  0.3263