Yule-Walker Equations in Matrix Form
The AR(p) model is
X_t = φ_0 + φ_1 X_{t−1} + φ_2 X_{t−2} + ⋯ + φ_p X_{t−p} + Z_t
where
Z_t ~ Normal(0, σ_Z²)
Note that, if we take expectations on both sides of the model,
μ = φ_0 + φ_1 μ + φ_2 μ + ⋯ + φ_p μ
If X̃_t = X_t − μ, then E[X̃_t] = 0, and
X̃_t = φ_1 X̃_{t−1} + φ_2 X̃_{t−2} + ⋯ + φ_p X̃_{t−p} + Z_t
where
Z_t ~ Normal(0, σ_Z²)
Yule-Walker equations
ρ(k) = φ_1 ρ(k−1) + φ_2 ρ(k−2) + ⋯ + φ_p ρ(k−p)
For k = 1, 2, …, p:
ρ(1) = φ_1 ρ(0) + φ_2 ρ(−1) + φ_3 ρ(−2) + ⋯ + φ_p ρ(1−p)
ρ(2) = φ_1 ρ(1) + φ_2 ρ(0) + φ_3 ρ(−1) + ⋯ + φ_p ρ(2−p)
ρ(3) = φ_1 ρ(2) + φ_2 ρ(1) + φ_3 ρ(0) + ⋯ + φ_p ρ(3−p)
⋮
ρ(p−1) = φ_1 ρ(p−2) + φ_2 ρ(p−3) + φ_3 ρ(p−4) + ⋯ + φ_p ρ(1)
ρ(p) = φ_1 ρ(p−1) + φ_2 ρ(p−2) + φ_3 ρ(p−3) + ⋯ + φ_p ρ(0)
Recall that ρ(−k) = ρ(k); i.e., ρ(−1) = ρ(1), ρ(−2) = ρ(2), …, ρ(2−p) = ρ(p−2), ρ(1−p) = ρ(p−1).
Substituting, the system becomes:
ρ(1) = φ_1 ρ(0) + φ_2 ρ(1) + φ_3 ρ(2) + ⋯ + φ_p ρ(p−1)
ρ(2) = φ_1 ρ(1) + φ_2 ρ(0) + φ_3 ρ(1) + ⋯ + φ_p ρ(p−2)
ρ(3) = φ_1 ρ(2) + φ_2 ρ(1) + φ_3 ρ(0) + ⋯ + φ_p ρ(p−3)
⋮
ρ(p−1) = φ_1 ρ(p−2) + φ_2 ρ(p−3) + φ_3 ρ(p−4) + ⋯ + φ_p ρ(1)
ρ(p) = φ_1 ρ(p−1) + φ_2 ρ(p−2) + φ_3 ρ(p−3) + ⋯ + φ_p ρ(0)
Since ρ(0) = 1:
ρ(1) = φ_1 + φ_2 ρ(1) + φ_3 ρ(2) + ⋯ + φ_p ρ(p−1)
ρ(2) = φ_1 ρ(1) + φ_2 + φ_3 ρ(1) + ⋯ + φ_p ρ(p−2)
ρ(3) = φ_1 ρ(2) + φ_2 ρ(1) + φ_3 + ⋯ + φ_p ρ(p−3)
⋮
ρ(p−1) = φ_1 ρ(p−2) + φ_2 ρ(p−3) + φ_3 ρ(p−4) + ⋯ + φ_p ρ(1)
ρ(p) = φ_1 ρ(p−1) + φ_2 ρ(p−2) + φ_3 ρ(p−3) + ⋯ + φ_p
Matrix form: Yule-Walker equations
| ρ(1)   |   | 1        ρ(1)     ρ(2)     ⋯  ρ(p−1) | | φ_1     |
| ρ(2)   |   | ρ(1)     1        ρ(1)     ⋯  ρ(p−2) | | φ_2     |
| ρ(3)   | = | ρ(2)     ρ(1)     1        ⋯  ρ(p−3) | | φ_3     |
| ⋮      |   | ⋮        ⋮        ⋮        ⋱  ⋮      | | ⋮       |
| ρ(p−1) |   | ρ(p−2)   ρ(p−3)   ρ(p−4)   ⋯  ρ(1)   | | φ_{p−1} |
| ρ(p)   |   | ρ(p−1)   ρ(p−2)   ρ(p−3)   ⋯  1      | | φ_p     |

    b                        R                           φ

b = R φ
φ = R⁻¹ b
Replacing ρ(k) with the sample ACF, r_k ≈ ρ(k), gives the sample version of the system:
| r_1     |   | 1        r_1      r_2      ⋯  r_{p−1} | | φ̂_1     |
| r_2     |   | r_1      1        r_1      ⋯  r_{p−2} | | φ̂_2     |
| r_3     | = | r_2      r_1      1        ⋯  r_{p−3} | | φ̂_3     |
| ⋮       |   | ⋮        ⋮        ⋮        ⋱  ⋮       | | ⋮        |
| r_{p−1} |   | r_{p−2}  r_{p−3}  r_{p−4}  ⋯  r_1     | | φ̂_{p−1} |
| r_p     |   | r_{p−1}  r_{p−2}  r_{p−3}  ⋯  1       | | φ̂_p     |

    b̂                        R̂                           φ̂

b̂ = R̂ φ̂
φ̂ = R̂⁻¹ b̂
Matrices R and R̂
These matrices are symmetric.
They are positive semidefinite, so all eigenvalues are nonnegative.
In practice they are positive definite, so the inverses R⁻¹ and R̂⁻¹ exist.
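These properties are easy to check numerically. A minimal R sketch (the value of r1 below is illustrative, not from the slides):

```r
# Check symmetry, eigenvalues, and invertibility of a 2x2 Yule-Walker matrix.
r1 <- 0.68                      # illustrative lag-1 sample autocorrelation
R <- matrix(c(1,  r1,
              r1, 1), 2, 2)
stopifnot(identical(R, t(R)))   # symmetric
ev <- eigen(R)$values           # eigenvalues are 1 + r1 and 1 - r1
stopifnot(all(ev >= 0))         # nonnegative: positive semidefinite
R.inv <- solve(R)               # inverse exists since |r1| < 1
```

For the 2×2 case the eigenvalues are exactly 1 ± r1, both nonnegative whenever |r1| ≤ 1.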
Example – AR(2)
We will estimate the coefficients of the model
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t

| r_1 |   | 1    r_1 | | φ̂_1 |
| r_2 | = | r_1  1   | | φ̂_2 |
Example – AR(3)
We will estimate the coefficients of the model
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + φ_3 X_{t−3} + Z_t

| r_1 |   | 1    r_1  r_2 | | φ̂_1 |
| r_2 | = | r_1  1    r_1 | | φ̂_2 |
| r_3 |   | r_2  r_1  1   | | φ̂_3 |
Estimating model parameters – AR(2)
Simulation
We simulate the AR(2) process
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t
where
φ_1 = 1/3, φ_2 = 1/2, σ_Z = 4
Yule-Walker equations
We estimate the coefficients of the model
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t

| r_1 |   | 1    r_1 | | φ̂_1 |
| r_2 | = | r_1  1   | | φ̂_2 |
𝜎𝑍 Estimation
Since
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + Z_t,
we have
Var(X_t) = φ_1² Var(X_{t−1}) + φ_2² Var(X_{t−2}) + 2 φ_1 φ_2 Cov(X_{t−1}, X_{t−2}) + σ_Z²
Thus
σ_Z² = γ(0) [1 − φ_1² − φ_2² − 2 φ_1 φ_2 γ(1)/γ(0)] = γ(0) [1 − φ_1² − φ_2² − 2 φ_1 φ_2 ρ_1]
Since
| ρ_1 |   | 1    ρ_1 | | φ_1 |
| ρ_2 | = | ρ_1  1   | | φ_2 |
we have
ρ_1 = φ_1 + ρ_1 φ_2
ρ_2 = φ_1 ρ_1 + φ_2
Therefore
1 − φ_1² − φ_2² − 2 φ_1 φ_2 ρ_1
= 1 − φ_1² − φ_1 φ_2 ρ_1 − φ_2² − φ_1 φ_2 ρ_1
= 1 − φ_1 (φ_1 + ρ_1 φ_2) − φ_2 (φ_1 ρ_1 + φ_2)
= 1 − φ_1 ρ_1 − φ_2 ρ_2
𝜎𝑍 Estimation cont.
Thus,
σ_Z² = γ(0) [1 − φ_1 ρ_1 − φ_2 ρ_2]
Replacing γ(0) by the sample autocovariance c_0, ρ_k by r_k, and φ_k by the estimates φ̂_k gives the sample estimate
σ̂_Z² = c_0 [1 − φ̂_1 r_1 − φ̂_2 r_2]
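As a sanity check of this formula in R, take the true AR(2) parameters from the simulation slides (φ_1 = 1/3, φ_2 = 1/2, σ_Z = 4) and solve the Yule-Walker equations for ρ_1 and ρ_2 exactly:

```r
# Exact rho(1), rho(2), gamma(0) for the AR(2) with phi1 = 1/3, phi2 = 1/2, sigma_Z = 4.
phi1 <- 1/3; phi2 <- 1/2; sigma.Z <- 4
rho1 <- phi1 / (1 - phi2)                 # from rho1 = phi1 + phi2 * rho1
rho2 <- phi1 * rho1 + phi2                # from rho2 = phi1 * rho1 + phi2
gamma0 <- sigma.Z^2 / (1 - phi1 * rho1 - phi2 * rho2)  # rearranging the formula above
c(rho1, rho2, gamma0)                     # 2/3, 13/18, 38.4
```

Note that the exact ρ_1 = 2/3 is close to the sample value r[1] ≈ 0.681 reported in the code details.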
Simulation
Number of data points, 𝑛 = 10000
arima.sim() # simulating
plot() # plotting the series
acf() # autocorrelation function
matrix(NA, m, n) # m-by-n matrix (entries initialized to NA)
solve(R,b) # finds the solution to Rx=b
Code details
sigma = 4
phi = c(1/3, 1/2)
n = 10000
set.seed(2017)
ar.process = arima.sim(n = n, model = list(ar = phi), sd = sigma)
ar.process[1:5]
𝑟[1] = 0.6814103
𝑟[2] = 0.7255825
R = | 1     r[1] |
    | r[1]  1    |
Code details cont.
b = | r[1] | = | 0.6814103 |
    | r[2] |   | 0.7255825 |
Code details cont.
solve(R, b)
0.3490720
0.4877212

phi.hat = matrix(solve(R, b), 2, 1)  # both estimates from a single solve
φ̂_1 = 0.3490720
φ̂_2 = 0.4877212
Code details cont.
Actual model: X_t = (1/3) X_{t−1} + (1/2) X_{t−2} + Z_t,  Z_t ~ N(0, 16)
Fitted model: X_t = 0.349 X_{t−1} + 0.488 X_{t−2} + Z_t
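The code fragments above can be collected into one self-contained R script. A sketch that reproduces the steps in these slides end to end, including the σ_Z estimation step:

```r
# Yule-Walker estimation for a simulated AR(2), end to end.
set.seed(2017)
sigma <- 4
phi <- c(1/3, 1/2)
n <- 10000

ar.process <- arima.sim(n = n, model = list(ar = phi), sd = sigma)

# Sample autocorrelations at lags 1 and 2 ($acf[1] is lag 0)
r <- acf(ar.process, plot = FALSE)$acf[2:3]

# Yule-Walker system b = R phi
R <- matrix(c(1,    r[1],
              r[1], 1), 2, 2)
b <- matrix(r, 2, 1)
phi.hat <- solve(R, b)

# Noise variance: sigma2.hat = c0 * (1 - phi1.hat*r1 - phi2.hat*r2)
c0 <- acf(ar.process, type = "covariance", plot = FALSE)$acf[1]
sigma2.hat <- c0 * (1 - sum(phi.hat * r))

phi.hat      # close to the true (1/3, 1/2)
sigma2.hat   # close to the true sigma_Z^2 = 16
```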
What We've Learned
Estimating model parameters of a simulated AR(2) process using Yule-Walker equations in matrix form.
Estimating model parameters – AR(3)
Simulation
We simulate the AR(3) process
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + φ_3 X_{t−3} + Z_t
where
φ_1 = 1/3, φ_2 = 1/2, φ_3 = 7/100, σ_Z = 4
Yule-Walker equations
We will estimate the coefficients of the model
X_t = φ_1 X_{t−1} + φ_2 X_{t−2} + φ_3 X_{t−3} + Z_t

| r_1 |   | 1    r_1  r_2 | | φ̂_1 |
| r_2 | = | r_1  1    r_1 | | φ̂_2 |
| r_3 |   | r_2  r_1  1   | | φ̂_3 |
σ_Z Estimation
σ̂_Z² = c_0 (1 − Σ_{i=1}^{p} φ̂_i r_i)
Results (set.seed(2017), n = 100000)
φ̂_1 ≈ 0.3381245
φ̂_2 ≈ 0.4984999
φ̂_3 ≈ 0.06849712
σ̂_Z² ≈ 15.979
Actual model: X_t = (1/3) X_{t−1} + (1/2) X_{t−2} + (7/100) X_{t−3} + Z_t,  Z_t ~ N(0, 16)
Fitted model: X_t = 0.338 X_{t−1} + 0.498 X_{t−2} + 0.068 X_{t−3} + Z_t,  Z_t ~ N(0, 15.979)
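The same recipe written out for the AR(3) case, as a runnable sketch (parameter values from the simulation slide; the double loop builds R entry by entry, anticipating the general-p code used for the Recruitment data):

```r
# Yule-Walker estimation for a simulated AR(3).
set.seed(2017)
n <- 100000
phi <- c(1/3, 1/2, 7/100)
sigma <- 4
p <- 3

x <- arima.sim(n = n, model = list(ar = phi), sd = sigma)

r <- acf(x, plot = FALSE)$acf[2:(p + 1)]      # sample ACF at lags 1..p

R <- matrix(1, p, p)                          # R[i,j] = r[|i-j|], diagonal 1
for (i in 1:p) for (j in 1:p) if (i != j) R[i, j] <- r[abs(i - j)]

phi.hat <- solve(R, r)
c0 <- acf(x, type = "covariance", plot = FALSE)$acf[1]
sigma2.hat <- c0 * (1 - sum(phi.hat * r))

phi.hat      # close to the true (1/3, 1/2, 7/100)
sigma2.hat   # close to the true sigma_Z^2 = 16
```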
What We've Learned
Estimating model parameters of a simulated AR(3) process using Yule-Walker equations in matrix form.
Parameter estimation: Recruitment
We fit an AR(p = 2) model to the mean-centered series X_t − μ.
Yule-Walker equations: 𝑅 𝜙 = 𝑏
Sample autocorrelation coefficients, vector r:
r <- numeric(p)                    # will hold r[1], ..., r[p]
for(i in 1:p){
  r[i] <- acf(ar.process, plot=F)$acf[i+1]   # $acf[1] is lag 0
}
Matrix R:
| 1        r_1      r_2      ⋯  r_{p−1} |
| r_1      1        r_1      ⋯  r_{p−2} |
| r_2      r_1      1        ⋯  r_{p−3} |
| ⋮        ⋮        ⋮        ⋱  ⋮       |
| r_{p−1}  r_{p−2}  r_{p−3}  ⋯  1       |
Realize that R[i, j] = r_{|i−j|} for i ≠ j, and R[i, i] = 1.
R=matrix(1,p,p) # matrix of dimension p by p, with entries all 1's.
for(i in 1:p){
for(j in 1:p){
if(i!=j)
R[i,j]=r[abs(i-j)]
}
}
# b - column vector on the right-hand side
b <- matrix(r, p, 1)
phi.hat <- numeric(p)
for(i in 1:p){
  phi.hat[i] <- solve(R, b)[i, 1]
}
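For reuse, the loops above can be packaged into one small helper function. A sketch; yw.fit is a name made up here, not part of the slides:

```r
# Hypothetical helper (not from the slides): Yule-Walker fit of an AR(p).
yw.fit <- function(x, p) {
  r <- acf(x, lag.max = p, plot = FALSE)$acf[2:(p + 1)]   # lags 1..p
  R <- matrix(1, p, p)                                    # R[i,j] = r[|i-j|]
  for (i in 1:p) for (j in 1:p) if (i != j) R[i, j] <- r[abs(i - j)]
  phi.hat <- solve(R, r)
  c0 <- acf(x, lag.max = p, type = "covariance", plot = FALSE)$acf[1]
  list(phi = phi.hat,
       sigma2 = c0 * (1 - sum(phi.hat * r)),
       intercept = mean(x) * (1 - sum(phi.hat)))  # phi_0 for a non-centered series
}
```

For example, yw.fit(rec, 2) (with rec from the astsa package) should reproduce the Recruitment fit, and yw.fit(ar.process, 2) the simulated AR(2) fit.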
Model
Undoing the mean-centering, the fitted model is
X_t = φ_0 + φ̂_1 X_{t−1} + ⋯ + φ̂_p X_{t−p} + Z_t
where
φ_0 = x̄ (1 − Σ_{i=1}^{p} φ̂_i)
With p = 2, the fitted model is
X_t = φ_0 + φ̂_1 X_{t−1} + φ̂_2 X_{t−2} + Z_t,  Z_t ~ Normal(0, 94.17131)
What We've Learned
Estimating model parameters of the Recruitment data with an AR(2) model using Yule-Walker equations in matrix form.
The log-return r_t is defined as
r_t = log(X_t / X_{t−1}) = log(X_t) − log(X_{t−1})
In R: diff(log(x)), where x holds the series.
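A quick check that diff(log(x)) computes exactly these log-returns (x here is a small made-up positive series):

```r
x <- c(100, 110, 105, 120)             # illustrative positive series
r.t <- diff(log(x))                    # log(x[t]) - log(x[t-1])
manual <- log(x[-1] / x[-length(x)])   # the same returns, as logs of ratios
stopifnot(isTRUE(all.equal(r.t, manual)))
r.t                                    # first value is log(110/100)
```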
The parsimony principle: prefer the simplest model that adequately describes the data.
We model the log-return
r_t = log(X_t / X_{t−1})
What We've Learned
Fitting an AR(p = 4) model to the log-return of Johnson & Johnson quarterly earnings (from the 'astsa' package) using Yule-Walker equations in matrix form.