ARMA Models

Gloria González-Rivera
University of California, Riverside
and
Jesús Gonzalo, U. Carlos III de Madrid
White Noise

A sequence of uncorrelated random variables is called a white noise process:

$$E(a_t) = \mu \ (\text{normally } \mu = 0), \qquad Var(a_t) = \sigma_a^2, \qquad Cov(a_t, a_{t-k}) = 0 \ \text{for } k \neq 0.$$

Autocovariance and autocorrelation:

$$\gamma_k = \begin{cases} \sigma_a^2 & k = 0 \\ 0 & k \neq 0 \end{cases}, \qquad \rho_k = \begin{cases} 1 & k = 0 \\ 0 & k \neq 0 \end{cases}, \qquad \phi_{kk} = \begin{cases} 1 & k = 0 \\ 0 & k \neq 0 \end{cases}$$

[Figure: ACF of white noise, equal to zero at every lag k = 1, 2, 3, 4, ...]
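(Illustrative code, not in the original slides.) A minimal Python sketch that simulates Gaussian white noise and checks the ACF above: the sample autocorrelations at lags $k \geq 1$ should all be near zero. The helper name `sample_acf` is our own.

```python
import numpy as np

def sample_acf(z, max_lag):
    """Sample autocorrelations rho_1 .. rho_max_lag of a series z."""
    z = z - z.mean()
    gamma0 = np.mean(z * z)                       # sample gamma_0
    return np.array([np.mean(z[k:] * z[:-k]) / gamma0
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=10_000)             # white noise with sigma_a = 1
print(np.round(sample_acf(a, 5), 3))              # all approximately 0 for k >= 1
```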
The Wold Decomposition

If $\{Z_t\}$ is a nondeterministic stationary time series, then

$$Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j} + V_t = \Psi(L)\, a_t + V_t,$$

where

1. $\psi_0 = 1$ and $\sum_{j=0}^{\infty} \psi_j^2 < \infty$,
2. $\{a_t\}$ is WN$(0, \sigma^2)$ with $\sigma^2 > 0$,
3. $Cov(a_s, V_t) = 0$ for all $s$ and $t$,
4. $a_t$ is the limit of linear combinations of $Z_s$, $s \leq t$, and
5. $\{V_t\}$ is deterministic.
Some Remarks on the Wold Decomposition

(a) $a_t = Z_t - P[Z_t \mid Z_{t-1}, Z_{t-2}, \ldots]$, i.e. the shocks are the one-step linear forecast errors.

(b) What do we mean by $Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j}$? Convergence in mean square:

$$E\Big[ Z_t - \sum_{j=0}^{n} \psi_j a_{t-j} \Big]^2 \to 0 \quad \text{as } n \to \infty.$$
What the Wold theorem does not say

The $a_t$ need not be normally distributed, and hence need not be iid.
Though $P[a_t \mid Z_{t-j}] = 0$, it need not be true that $E[a_t \mid Z_{t-j}] = 0$ (think about the possible consequences).
The shocks $a_t$ need not be the true shocks to the system. When will this happen?
The uniqueness result only states that the Wold representation is the unique linear representation where the shocks are linear forecast errors. Non-linear representations, or representations in terms of non-forecast-error shocks, are perfectly possible.
Birth of the ARMA models

Under general conditions the infinite lag polynomial of the Wold Decomposition can be approximated by the ratio of two finite lag polynomials:

$$\Psi(L) \approx \frac{\Theta_q(L)}{\Phi_p(L)}.$$

Therefore

$$Z_t = \Psi(L)\, a_t \approx \frac{\Theta_q(L)}{\Phi_p(L)}\, a_t \quad\Longrightarrow\quad \Phi_p(L)\, Z_t = \Theta_q(L)\, a_t,$$

$$(1 - \phi_1 L - \cdots - \phi_p L^p)\, Z_t = (1 + \theta_1 L + \cdots + \theta_q L^q)\, a_t,$$

$$Z_t - \phi_1 Z_{t-1} - \cdots - \phi_p Z_{t-p} = a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}.$$

The left-hand side is the AR(p) part; the right-hand side is the MA(q) part.
MA(1) processes

Let $\{a_t\}$ be a zero-mean white noise process, $a_t \sim$ WN$(0, \sigma_a^2)$:

$$Z_t = \mu + a_t + \theta a_{t-1} \qquad \text{MA(1)}$$
Expectation:

$$E(Z_t) = \mu + E(a_t) + \theta E(a_{t-1}) = \mu.$$

Variance:

$$Var(Z_t) = E(Z_t - \mu)^2 = E(a_t + \theta a_{t-1})^2 = E(a_t^2 + \theta^2 a_{t-1}^2 + 2\theta a_t a_{t-1}) = (1 + \theta^2)\, \sigma_a^2.$$

First-order autocovariance:

$$\gamma_1 = E(Z_t - \mu)(Z_{t-1} - \mu) = E(a_t + \theta a_{t-1})(a_{t-1} + \theta a_{t-2}) = \theta\, \sigma_a^2.$$
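(Illustrative code, not in the original slides.) A quick simulation check of these three moments; the values $\theta = 0.6$ and $\sigma_a = 1$ are arbitrary choices of ours.

```python
import numpy as np

theta, sigma_a, n = 0.6, 1.0, 200_000
rng = np.random.default_rng(1)
a = rng.normal(0.0, sigma_a, size=n + 1)
z = a[1:] + theta * a[:-1]                          # MA(1) with mu = 0

print(z.var(), (1 + theta**2) * sigma_a**2)         # Var(Z_t) = (1 + theta^2) sigma_a^2
zc = z - z.mean()
print(np.mean(zc[1:] * zc[:-1]), theta * sigma_a**2)  # gamma_1 = theta sigma_a^2
```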
MA(1) processes (cont)

Autocovariance of higher order: $\gamma_j = 0$ for $j > 1$.

Autocorrelation:

$$\rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\theta}{1 + \theta^2}, \qquad \rho_j = 0 \ \text{for } j > 1.$$

The MA(1) process is covariance-stationary because $E(Z_t) = \mu$ and $Var(Z_t) = (1 + \theta^2)\,\sigma_a^2$ do not depend on $t$.

The MA(1) process is ergodic because

$$\sum_{j=0}^{\infty} |\gamma_j| = (1 + \theta^2)\,\sigma_a^2 + |\theta|\,\sigma_a^2 < \infty.$$

If $a_t$ were Gaussian, then $Z_t$ would be ergodic for all moments.
Plot the function

$$\rho_1 = \frac{\theta}{1 + \theta^2}.$$

[Figure: $\rho_1$ as a function of $\theta$; the curve lies between $-0.5$ and $0.5$, attained at $\theta = -1$ and $\theta = 1$.]

$$\max(\rho_1) = 0.5 \ \text{for } \theta = 1, \qquad \rho_1 = 0.4 \ \text{for } \theta = 0.5, \qquad \rho_1 = 0.4 \ \text{for } \theta = 2.$$
If in $\rho_1 = \dfrac{\theta}{1 + \theta^2}$ we substitute $1/\theta$ for $\theta$:

$$\rho_1 = \frac{1/\theta}{1 + (1/\theta)^2} = \frac{\theta}{\theta^2 + 1},$$

the autocorrelation is unchanged. The two processes

$$Z_t = a_t + \theta a_{t-1} \qquad \text{and} \qquad Z_t = a_t + \frac{1}{\theta}\, a_{t-1}$$

share the same autocorrelation function. The MA(1) is not uniquely identifiable, except for $|\theta| = 1$.
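(Illustrative code, not in the original slides.) A one-line numerical check of this identification problem: $\theta = 0.5$ and $\theta = 2 = 1/0.5$ give the same lag-1 autocorrelation.

```python
def ma1_rho1(theta):
    """Theoretical MA(1) lag-1 autocorrelation: theta / (1 + theta^2)."""
    return theta / (1 + theta**2)

print(ma1_rho1(0.5), ma1_rho1(2.0))   # both 0.4: theta and 1/theta are observationally equivalent
```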
Invertibility

Definition: An MA(q) process defined by the equation $Z_t = \Theta_q(L)\, a_t$ is said to be invertible if there exists a sequence of constants $\{\pi_j\}$ such that $\sum_{j=0}^{\infty} |\pi_j| < \infty$ and

$$a_t = \sum_{j=0}^{\infty} \pi_j Z_{t-j}, \qquad t = 0, \pm 1, \ldots$$

Theorem: Let $\{Z_t\}$ be an MA(q). Then $\{Z_t\}$ is invertible if and only if $\Theta(x) \neq 0$ for all $x \in \mathbb{C}$ such that $|x| \leq 1$. The coefficients $\{\pi_j\}$ are determined by the relation

$$\pi(x) = \sum_{j=0}^{\infty} \pi_j x^j = \frac{1}{\Theta(x)}, \qquad |x| \leq 1.$$
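(Illustrative code, not in the original slides.) For an invertible MA(1), $\Theta(x) = 1 + \theta x$, so $\pi(x) = 1/(1 + \theta x)$ expands as $\pi_j = (-\theta)^j$. A sketch under that assumption; the helper name `ma1_pi_weights` is our own.

```python
import numpy as np

def ma1_pi_weights(theta, n):
    """pi(x) = 1 / (1 + theta x) = sum_j (-theta)^j x^j, valid for |theta| < 1."""
    return np.array([(-theta) ** j for j in range(n + 1)])

print(ma1_pi_weights(0.6, 5))   # a_t = sum_j pi_j Z_{t-j}; weights decay geometrically
```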
Identification of the MA(1)

If we identify the MA(1) through the autocorrelation structure, we need to decide which value of $\theta$ to choose: the one greater than one or the one less than one. Requiring the condition of invertibility (think why!) we will choose the value $|\theta_1| < 1$.

Another reason to choose the value less than one can be found by paying attention to the error variances of the two equivalent representations:

$$Z_t = (1 + \theta_1 L)\, a_t, \qquad V(a_t) = \frac{\gamma_0}{1 + \theta_1^2}, \qquad \text{invertible},$$

$$Z_t = \Big(1 + \frac{1}{\theta_1} L\Big)\, a_t^*, \qquad V(a_t^*) = \frac{\gamma_0}{1 + 1/\theta_1^2}, \qquad \text{non-invertible},$$

so that $V(a_t^*) = \theta_1^2\, V(a_t) < V(a_t)$ when $\theta_1^2 < 1$.
MA(q)

$$Z_t = \mu + a_t + \theta_1 a_{t-1} + \theta_2 a_{t-2} + \cdots + \theta_q a_{t-q}$$

Moments:

$$E(Z_t) = \mu, \qquad \gamma_0 = Var(Z_t) = (1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2)\, \sigma_a^2,$$

$$\gamma_j = E[(a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q})(a_{t-j} + \theta_1 a_{t-j-1} + \cdots + \theta_q a_{t-j-q})] = \begin{cases} (\theta_j + \theta_{j+1}\theta_1 + \theta_{j+2}\theta_2 + \cdots + \theta_q \theta_{q-j})\, \sigma_a^2 & j = 1, \ldots, q \\ 0 & j > q \end{cases}$$

$$\rho_j = \frac{\theta_j + \theta_{j+1}\theta_1 + \cdots + \theta_q \theta_{q-j}}{1 + \theta_1^2 + \cdots + \theta_q^2} \ \ (j \leq q), \qquad \rho_j = 0 \ \ (j > q).$$

MA(2) example:

$$\rho_1 = \frac{\theta_1 + \theta_1 \theta_2}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_2 = \frac{\theta_2}{1 + \theta_1^2 + \theta_2^2}, \qquad \rho_3 = \rho_4 = \cdots = 0.$$

The MA(q) is covariance-stationary and ergodic for the same reasons as in a MA(1).
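(Illustrative code, not in the original slides.) A sketch of the theoretical MA(q) ACF using the formula above; `maq_acf` is our own name, and the MA(2) parameter values are arbitrary.

```python
import numpy as np

def maq_acf(thetas, max_lag):
    """Theoretical ACF of Z_t = a_t + theta_1 a_{t-1} + ... + theta_q a_{t-q};
    sigma_a^2 cancels in rho_j = gamma_j / gamma_0."""
    psi = np.r_[1.0, np.asarray(thetas, dtype=float)]    # (1, theta_1, ..., theta_q)
    gamma = np.zeros(max_lag + 1)
    for j in range(min(max_lag, len(psi) - 1) + 1):
        gamma[j] = np.sum(psi[:len(psi) - j] * psi[j:])  # sum_i psi_i psi_{i+j}
    return gamma / gamma[0]

# MA(2): rho_1 = (theta_1 + theta_1 theta_2)/(1 + theta_1^2 + theta_2^2), rho_3 = rho_4 = 0
print(np.round(maq_acf([0.5, 0.3], 4), 3))
```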
MA(∞)

$$Z_t = \mu + \sum_{j=0}^{\infty} \psi_j a_{t-j}, \qquad \psi_0 = 1.$$

Is it covariance-stationary?

$$E(Z_t) = \mu, \qquad Var(Z_t) = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i^2,$$

$$\gamma_j = E[(Z_t - \mu)(Z_{t-j} - \mu)] = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+j}.$$

The process is covariance-stationary provided that $\sum_{i=0}^{\infty} \psi_i^2 < \infty$ (square-summable sequence).
Some interesting results

Proposition 1. $\sum_{i=0}^{\infty} |\psi_i| < \infty$ (absolutely summable) $\ \Longrightarrow\ \sum_{i=0}^{\infty} \psi_i^2 < \infty$ (square summable).

Proposition 2. $\sum_{i=0}^{\infty} |\psi_i| < \infty \ \Longrightarrow\ \sum_{j=0}^{\infty} |\gamma_j| < \infty$ (ergodic for the mean).
Proof 1. If $\sum_{i=0}^{\infty} |\psi_i| < \infty$, then $|\psi_i| \to 0$, so there exists $N$ such that $|\psi_i| < 1$ for all $i \geq N$, and hence $\psi_i^2 < |\psi_i|$ for $i \geq N$. Now,

$$\sum_{i=0}^{\infty} \psi_i^2 = \underbrace{\sum_{i=0}^{N-1} \psi_i^2}_{(1)} + \underbrace{\sum_{i=N}^{\infty} \psi_i^2}_{(2)} \leq \sum_{i=0}^{N-1} \psi_i^2 + \sum_{i=N}^{\infty} |\psi_i| < \infty.$$

(1) is finite because $N$ is finite; (2) is finite because $\{\psi_i\}$ is absolutely summable. Then $\sum_{i=0}^{\infty} \psi_i^2 < \infty$.
Proof 2.

$$\sum_{j=0}^{\infty} |\gamma_j| = \sum_{j=0}^{\infty} \Big| \sigma_a^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+j} \Big| \leq \sigma_a^2 \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} |\psi_i|\, |\psi_{i+j}| \leq \sigma_a^2 \Big( \sum_{i=0}^{\infty} |\psi_i| \Big)^2 < \infty,$$

because $\sum_{i=0}^{\infty} |\psi_i| < \infty$ by assumption.
AR(1)

$$Z_t = c + \phi Z_{t-1} + a_t$$

Using backward substitution:

$$Z_t = c\, (1 + \phi + \phi^2 + \cdots) + a_t + \phi a_{t-1} + \phi^2 a_{t-2} + \cdots$$

(a geometric progression). If $|\phi| < 1$:

(1) $1 + \phi + \phi^2 + \cdots = \dfrac{1}{1 - \phi}$, a bounded sequence;

(2) $\sum_{j=0}^{\infty} |\psi_j| = \sum_{j=0}^{\infty} |\phi|^j = \dfrac{1}{1 - |\phi|} < \infty \ \Longrightarrow\ $ MA($\infty$).

Remember: $\sum_{j=0}^{\infty} |\psi_j| < \infty$ is the condition for stationarity and ergodicity.
AR(1) (cont)

Hence, this AR(1) process has a stationary solution if $|\phi| < 1$.

Alternatively, consider the root of the characteristic equation:

$$1 - \phi x = 0 \ \Longrightarrow\ x = \frac{1}{\phi}, \qquad |x| > 1 \iff |\phi| < 1,$$

i.e. the root of the characteristic equation lies outside the unit circle.

Mean of a stationary AR(1):

$$Z_t = \frac{c}{1 - \phi} + a_t + \phi a_{t-1} + \phi^2 a_{t-2} + \cdots \ \Longrightarrow\ E(Z_t) = \mu = \frac{c}{1 - \phi}.$$
Variance of a stationary AR(1):

$$\gamma_0 = (1 + \phi^2 + \phi^4 + \cdots)\, \sigma_a^2 = \frac{\sigma_a^2}{1 - \phi^2}.$$
Autocovariance of a stationary AR(1): rewrite the process as

$$(Z_t - \mu) = \phi\, (Z_{t-1} - \mu) + a_t.$$

Then

$$\gamma_j = E[(Z_t - \mu)(Z_{t-j} - \mu)] = E[(\phi (Z_{t-1} - \mu) + a_t)(Z_{t-j} - \mu)] = \phi\, \gamma_{j-1}, \qquad j \geq 1.$$
Autocorrelation of a stationary AR(1):

$$\rho_j = \frac{\gamma_j}{\gamma_0} = \phi\, \frac{\gamma_{j-1}}{\gamma_0} = \phi\, \rho_{j-1}, \qquad j \geq 1,$$

so the ACF is

$$\rho_1 = \phi, \quad \rho_2 = \phi^2, \quad \rho_3 = \phi^3, \quad \ldots, \quad \rho_j = \phi^j.$$
PACF, from the Yule-Walker equations:

$$\phi_{11} = \rho_1 = \phi, \qquad \phi_{22} = \frac{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & \rho_2 \end{vmatrix}}{\begin{vmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{vmatrix}} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \frac{\phi^2 - \phi^2}{1 - \phi^2} = 0,$$

$$\phi_{kk} = 0 \quad \text{for } k \geq 2.$$
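(Illustrative code, not in the original slides.) A simulation sketch for a stationary AR(1), checking $\rho_j = \phi^j$ and $\gamma_0 = \sigma_a^2/(1 - \phi^2)$; the value $\phi = 0.8$ is arbitrary.

```python
import numpy as np

phi, sigma_a, n = 0.8, 1.0, 100_000
rng = np.random.default_rng(2)
a = rng.normal(0.0, sigma_a, size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]                  # AR(1) with c = 0

zc = z - z.mean()
gamma0 = np.mean(zc * zc)
acf = [np.mean(zc[k:] * zc[:-k]) / gamma0 for k in range(1, 5)]
print(np.round(acf, 3))                           # approximately phi, phi^2, phi^3, phi^4
print(gamma0, sigma_a**2 / (1 - phi**2))          # gamma_0 = sigma_a^2 / (1 - phi^2)
```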
Causality and Stationarity

Definition: An AR(p) process defined by the equation $\Phi_p(L)\, Z_t = a_t$ is said to be causal, or a causal function of $\{a_t\}$, if there exists a sequence of constants $\{\psi_j\}$ such that $\sum_{j=0}^{\infty} |\psi_j| < \infty$ and

$$Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j}, \qquad t = 0, \pm 1, \ldots$$

Causality is equivalent to the condition

$$\Phi(x) \neq 0 \ \text{for all } x \in \mathbb{C} \ \text{such that } |x| \leq 1.$$

Definition: A stationary solution $\{Z_t\}$ of the equation $\Phi_p(L)\, Z_t = a_t$ exists (and is also the unique stationary solution) if and only if

$$\Phi(x) \neq 0 \ \text{for all } x \in \mathbb{C} \ \text{such that } |x| = 1.$$

From now on we will be dealing only with causal AR models.
AR(2)

Stationarity: study the roots of the characteristic equation

$$1 - \phi_1 x - \phi_2 x^2 = 0.$$

(a) Multiply by $-1$ and (b) divide by $x^2$; in terms of the inverse roots $\lambda = 1/x$,

$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0 \quad\Longrightarrow\quad \lambda_{1,2} = \frac{1}{x_{1,2}} = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}.$$

For a stationary causal solution it is required that $|x_i| > 1$, i.e. $|\lambda_i| = |1/x_i| < 1$ for $i = 1, 2$.

Necessary conditions for a stationary causal solution:

$$\phi_1 + \phi_2 < 1, \qquad \phi_2 - \phi_1 < 1, \qquad |\phi_2| < 1.$$

Roots can be real or complex.

(1) Real roots: $\phi_1^2 + 4\phi_2 \geq 0$. From $\Big| \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2} \Big| < 1$ one obtains $\phi_2 + \phi_1 < 1$ (from the $+$ root) and $\phi_2 - \phi_1 < 1$ (from the $-$ root).

(2) Complex roots: $\phi_1^2 + 4\phi_2 < 0$.

[Figure: the stationarity region in the $(\phi_1, \phi_2)$ plane is the triangle bounded by $\phi_2 = 1 + \phi_1$, $\phi_2 = 1 - \phi_1$ and $\phi_2 = -1$; below the parabola $\phi_2 = -\phi_1^2/4$ the roots are complex, above it they are real.]
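(Illustrative code, not in the original slides.) A sketch that checks causality/stationarity of an AR(2) directly from the roots of $1 - \phi_1 x - \phi_2 x^2$; the helper name `ar2_is_causal` is our own.

```python
import numpy as np

def ar2_is_causal(phi1, phi2):
    """True if both roots of 1 - phi1 x - phi2 x^2 = 0 lie outside the unit circle.
    np.roots expects coefficients from the highest power down."""
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))

print(ar2_is_causal(0.5, 0.3))    # True: inside the stationarity triangle
print(ar2_is_causal(0.5, 0.6))    # False: phi1 + phi2 > 1
print(ar2_is_causal(1.0, -0.5))   # True, complex roots since phi1^2 + 4 phi2 < 0
```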
Mean of AR(2):

$$\mu = \frac{c}{1 - \phi_1 - \phi_2}.$$
Variance and Autocorrelations of AR(2)

$$\gamma_0 = E[(Z_t - \mu)^2] = \phi_1 E[(Z_{t-1} - \mu)(Z_t - \mu)] + \phi_2 E[(Z_{t-2} - \mu)(Z_t - \mu)] + E[a_t (Z_t - \mu)]$$
$$= \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma_a^2 = \phi_1 \rho_1 \gamma_0 + \phi_2 \rho_2 \gamma_0 + \sigma_a^2 \quad\Longrightarrow\quad \gamma_0 = \frac{\sigma_a^2}{1 - \phi_1 \rho_1 - \phi_2 \rho_2}.$$

$$\gamma_j = E[(Z_t - \mu)(Z_{t-j} - \mu)] = \phi_1 \gamma_{j-1} + \phi_2 \gamma_{j-2}, \qquad j \geq 1,$$

$$\rho_j = \phi_1 \rho_{j-1} + \phi_2 \rho_{j-2}, \qquad j \geq 1$$

— a difference equation, with different shapes according to the roots, real or complex. In particular,

$$\rho_1 = \phi_1 \rho_0 + \phi_2 \rho_1 \Longrightarrow \rho_1 = \frac{\phi_1}{1 - \phi_2}, \qquad \rho_2 = \phi_1 \rho_1 + \phi_2 \rho_0 = \frac{\phi_1^2}{1 - \phi_2} + \phi_2, \qquad \rho_3 = \phi_1 \rho_2 + \phi_2 \rho_1, \ \ldots$$
Partial autocorrelations, from the Yule-Walker equations:

$$\phi_{11} = \rho_1 = \frac{\phi_1}{1 - \phi_2}; \qquad \phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = \phi_2; \qquad \phi_{33} = 0.$$
AR(p)

$$Z_t = c + \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + \cdots + \phi_p Z_{t-p} + a_t$$

Stationarity: all p roots of the characteristic equation lie outside the unit circle.

ACF:

$$\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2} + \cdots + \phi_p \rho_{k-p}, \qquad k \geq 1.$$

Writing this for $k = 1, \ldots, p$ gives a system to solve for the first p autocorrelations: p unknowns and p equations (see the sketch below). The ACF decays as a mixture of exponentials and/or damped sine waves, depending on whether the roots are real or complex.

PACF:

$$\phi_{kk} = 0 \quad \text{for } k > p.$$
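(Illustrative code, not in the original slides.) A sketch that builds and solves the Yule-Walker system above for the first p autocorrelations; `ar_first_acf` is our own name, and the AR(2) closed forms from the previous slide serve as a check.

```python
import numpy as np

def ar_first_acf(phis):
    """Solve the Yule-Walker system rho_k = sum_i phi_i rho_{|k-i|}, k = 1..p,
    for rho_1..rho_p (using rho_0 = 1 and rho_{-j} = rho_j)."""
    p = len(phis)
    A, b = np.zeros((p, p)), np.zeros(p)
    for k in range(1, p + 1):
        A[k - 1, k - 1] += 1.0                     # rho_k on the left-hand side
        for i in range(1, p + 1):
            lag = abs(k - i)
            if lag == 0:
                b[k - 1] += phis[i - 1]            # phi_i rho_0 = phi_i
            else:
                A[k - 1, lag - 1] -= phis[i - 1]   # move phi_i rho_{|k-i|} to the left
    return np.linalg.solve(A, b)

phi1, phi2 = 0.5, 0.3                              # check against the AR(2) closed forms
print(ar_first_acf([phi1, phi2]))
print(phi1 / (1 - phi2), phi2 + phi1**2 / (1 - phi2))
```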
Relationship between AR(p) and MA(q)

Stationary AR(p):

$$\Phi_p(L)\, Z_t = a_t, \qquad \Phi_p(L) = (1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p),$$

$$Z_t = \frac{1}{\Phi_p(L)}\, a_t = \Psi(L)\, a_t = (1 + \psi_1 L + \psi_2 L^2 + \cdots)\, a_t, \qquad \Psi(L) = \frac{1}{\Phi_p(L)}.$$

How to obtain $\Psi(L)$ from $\Phi_p(L)$? Use $\Phi_p(L)\, \Psi(L) = 1$.

Example, AR(2):

$$(1 - \phi_1 L - \phi_2 L^2)(1 + \psi_1 L + \psi_2 L^2 + \psi_3 L^3 + \cdots) = 1.$$

Equating coefficients from both polynomials:

$$L^1: \ \psi_1 - \phi_1 = 0 \ \Longrightarrow\ \psi_1 = \phi_1,$$
$$L^2: \ \psi_2 - \phi_1 \psi_1 - \phi_2 = 0 \ \Longrightarrow\ \psi_2 = \phi_1^2 + \phi_2,$$
$$L^3: \ \psi_3 - \phi_1 \psi_2 - \phi_2 \psi_1 = 0 \ \Longrightarrow\ \psi_3 = \phi_1^3 + 2\phi_1\phi_2,$$
$$\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}, \qquad j \geq 2.$$
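(Illustrative code, not in the original slides.) The recursion $\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}$ in code; the helper name `ar2_psi_weights` is our own.

```python
import numpy as np

def ar2_psi_weights(phi1, phi2, n):
    """psi_0 = 1, psi_1 = phi1, psi_j = phi1 psi_{j-1} + phi2 psi_{j-2} for j >= 2."""
    psi = np.zeros(n + 1)
    psi[0] = 1.0
    for j in range(1, n + 1):
        psi[j] = phi1 * psi[j - 1] + (phi2 * psi[j - 2] if j >= 2 else 0.0)
    return psi

print(ar2_psi_weights(0.5, 0.3, 4))
# [1, 0.5, 0.55, 0.425, ...]: psi_1 = phi1, psi_2 = phi1^2 + phi2, psi_3 = phi1^3 + 2 phi1 phi2
```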
Invertible MA(q):

$$Z_t = \Theta_q(L)\, a_t, \qquad \Theta_q(L) = (1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q),$$

$$\Pi(L)\, Z_t = a_t, \qquad \Pi(L) = \frac{1}{\Theta_q(L)} = (1 + \pi_1 L + \pi_2 L^2 + \cdots), \qquad \Theta_q(L)\, \Pi(L) = 1.$$

How to obtain $\Pi(L)$ from $\Theta_q(L)$? Write an example, e.g. an MA(2), and proceed as in the previous example.
ARMA(p,q)

$$\Phi_p(L)\, Z_t = \Theta_q(L)\, a_t$$

Invertibility: the roots of $\Theta_q(x) = 0$ satisfy $|x| > 1$.
Causality: the roots of $\Phi_p(x) = 0$ satisfy $|x| > 1$.

Pure AR representation:

$$\Pi(L)\, Z_t = a_t, \qquad \Pi(L) = \frac{\Phi_p(L)}{\Theta_q(L)}.$$

Pure MA representation:

$$Z_t = \Psi(L)\, a_t, \qquad \Psi(L) = \frac{\Theta_q(L)}{\Phi_p(L)}.$$
Autocorrelations of ARMA(p,q)

Without loss of generality, assume the mean is equal to zero:

$$Z_t = \phi_1 Z_{t-1} + \cdots + \phi_p Z_{t-p} + a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}.$$

Multiply both sides by $Z_{t-k}$:

$$Z_t Z_{t-k} = \phi_1 Z_{t-1} Z_{t-k} + \cdots + \phi_p Z_{t-p} Z_{t-k} + a_t Z_{t-k} + \theta_1 a_{t-1} Z_{t-k} + \cdots + \theta_q a_{t-q} Z_{t-k},$$

and take expectations, noting that $E(Z_{t-k}\, a_{t-i}) = 0$ for $k > i$:

$$\gamma_k = \phi_1 \gamma_{k-1} + \phi_2 \gamma_{k-2} + \cdots + \phi_p \gamma_{k-p} \quad \text{for } k > q;$$

for $k \leq q$, $\gamma_k$ will depend on the MA parameters $\theta_i$ as well.

PACF: behaves as for an MA process (it decays), since the ARMA contains an MA part.
ARMA(1,1)

$$(1 - \phi L)\, Z_t = (1 + \theta L)\, a_t$$

Causal: $|\phi| < 1$. Invertible: $|\theta| < 1$.

Pure AR form:

$$\Pi(L)\, Z_t = a_t, \qquad \pi_j = (-1)^j\, \theta^{j-1} (\phi + \theta), \quad j \geq 1.$$

Pure MA form:

$$Z_t = \Psi(L)\, a_t, \qquad \psi_j = \phi^{j-1} (\phi + \theta), \quad j \geq 1.$$
ACF of ARMA(1,1): multiply $Z_t = \phi Z_{t-1} + a_t + \theta a_{t-1}$ by $Z_{t-k}$ and take expectations:

$$\gamma_k = \phi\, \gamma_{k-1} + E(a_t Z_{t-k}) + \theta\, E(a_{t-1} Z_{t-k}).$$

For $k = 0$: $\gamma_0 = \phi \gamma_1 + \sigma_a^2 + \theta (\phi + \theta) \sigma_a^2$, since $E(a_t Z_t) = \sigma_a^2$ and $E(a_{t-1} Z_t) = (\phi + \theta)\sigma_a^2$.
For $k = 1$: $\gamma_1 = \phi \gamma_0 + \theta \sigma_a^2$.
For $k \geq 2$: $\gamma_k = \phi \gamma_{k-1}$.

The first two equations form a system of 2 equations and 2 unknowns; solve for $\gamma_0$ and $\gamma_1$:

$$\rho_1 = \frac{(1 + \phi\theta)(\phi + \theta)}{1 + \theta^2 + 2\phi\theta}, \qquad \rho_k = \phi\, \rho_{k-1}, \quad k \geq 2.$$

ACF: behaves like an MA(1) at lag 1, then decays exponentially (MA(1) → ARMA(1,1)).
PACF: decays, as for an MA process.
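(Illustrative code, not in the original slides.) The theoretical ARMA(1,1) ACF implied by the formulas above; `arma11_acf` is our own name and the parameter values are arbitrary.

```python
import numpy as np

def arma11_acf(phi, theta, max_lag):
    """rho_1 = (1 + phi theta)(phi + theta) / (1 + theta^2 + 2 phi theta);
    rho_k = phi rho_{k-1} for k >= 2."""
    rho = np.ones(max_lag + 1)
    rho[1] = (1 + phi * theta) * (phi + theta) / (1 + theta**2 + 2 * phi * theta)
    for k in range(2, max_lag + 1):
        rho[k] = phi * rho[k - 1]
    return rho

print(np.round(arma11_acf(0.7, 0.4, 5), 3))   # free at lag 1, then geometric decay at rate phi
```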
[Figure: ACF and PACF of an ARMA(1,1).]
[Figure: ACF and PACF of an MA(2).]
[Figure: ACF and PACF of an AR(2).]
Problems

P1: Determine which of the following ARMA processes are causal and which of them are invertible (in each case $a_t$ denotes a white noise):

a. $Z_t + 0.2\, Z_{t-1} - 0.48\, Z_{t-2} = a_t$
b. $Z_t + 1.9\, Z_{t-1} + 0.88\, Z_{t-2} = a_t + 0.2\, a_{t-1} + 0.7\, a_{t-2}$
c. $Z_t + 0.6\, Z_{t-1} = a_t + 1.2\, a_{t-1}$
d. $Z_t + 1.8\, Z_{t-1} + 0.81\, Z_{t-2} = a_t$

P2: Show that the two MA(1) processes

$$Z_t = a_t + \theta\, a_{t-1}, \qquad \{a_t\} \ \text{is WN}(0, \sigma^2),$$
$$Z_t = a_t^* + \frac{1}{\theta}\, a_{t-1}^*, \qquad \{a_t^*\} \ \text{is WN}(0, \sigma^2 \theta^2),$$

where $0 < |\theta| < 1$, have the same autocovariance functions.
Problems (cont)

P3: Let $\{Z_t\}$ denote the unique stationary solution of the autoregressive equations

$$Z_t = \phi Z_{t-1} + a_t, \qquad t = 0, \pm 1, \ldots,$$

where $\{a_t\}$ is WN$(0, \sigma^2)$ and $|\phi| > 1$. Then $Z_t$ is given by the expression

$$Z_t = -\sum_{j=1}^{\infty} \phi^{-j}\, a_{t+j}.$$

Define the new sequence

$$W_t = Z_t - \frac{1}{\phi}\, Z_{t-1};$$

show that $\{W_t\}$ is WN$(0, \sigma_W^2)$ and express $\sigma_W^2$ in terms of $\sigma^2$ and $\phi$. These calculations show that $\{Z_t\}$ is the (unique stationary) solution of the causal AR equations

$$Z_t = \frac{1}{\phi}\, Z_{t-1} + W_t, \qquad t = 0, \pm 1, \ldots$$
Problems (cont)

P4: Let $Y_t$ be the AR(1)-plus-noise time series defined by $Y_t = Z_t + W_t$, where $\{W_t\}$ is WN$(0, \sigma_W^2)$, $Z_t = \phi Z_{t-1} + a_t$ with $\{a_t\}$ WN$(0, \sigma_a^2)$ and $|\phi| < 1$, and $E(W_s a_t) = 0$ for all $s$ and $t$.

Show that $\{Y_t\}$ is stationary and find its autocovariance function.
Show that the time series $U_t = Y_t - \phi Y_{t-1}$ is an MA(1).
Conclude from the previous point that $\{Y_t\}$ is an ARMA(1,1) and express the three parameters of this model in terms of $\phi$, $\sigma_a^2$ and $\sigma_W^2$.
Appendix: Lag Operator L

Definition:

$$L Z_t = Z_{t-1}$$

Properties:

1. $L^k Z_t = Z_{t-k}$
2. $L(\phi Z_t) = \phi\, L Z_t = \phi\, Z_{t-1}$
3. $L(Z_t + Y_t) = L Z_t + L Y_t = Z_{t-1} + Y_{t-1}$

Examples:

1. $Z_t - \phi_1 Z_{t-1} - \phi_2 Z_{t-2} = a_t \iff (1 - \phi_1 L - \phi_2 L^2)\, Z_t = a_t$
2. $(1 - \lambda_1 L)(1 - \lambda_2 L)\, Z_t = (1 - (\lambda_1 + \lambda_2) L + \lambda_1 \lambda_2 L^2)\, Z_t$
3. $Z_t = \mu + a_t + \theta a_{t-1} = \mu + (1 + \theta L)\, a_t$
4. $(1 - \phi L)\, Z_t = a_t$
Appendix: Inverse Operator

Definition:

$$(1 - \phi L)^{-1} = \lim_{j \to \infty} (1 + \phi L + \phi^2 L^2 + \phi^3 L^3 + \cdots + \phi^j L^j), \qquad |\phi| < 1,$$

such that $(1 - \phi L)^{-1} (1 - \phi L) = 1$ (identity operator).

Note that if $|\phi| > 1$ this definition does not hold, because the limit does not exist.

Example, AR(1):

$$(1 - \phi L)\, Z_t = a_t,$$
$$Z_t = (1 - \phi L)^{-1}\, a_t = a_t + \phi a_{t-1} + \phi^2 a_{t-2} + \cdots$$
Appendix: Inverse Operator (cont)

Suppose you have the ARMA model $\Phi(L)\, Z_t = \Theta(L)\, a_t$ and want to find the MA representation $Z_t = \Psi(L)\, a_t$. You could try to crank out $\Phi(L)^{-1} \Theta(L)$ directly, but that's not much fun. Instead you could find $\Psi(L)$ from

$$\Theta(L) = \Phi(L)\, \Psi(L),$$

matching terms in $L^j$ to make sure this works.

Example: Suppose $\Phi(L) = (\phi_0 - \phi_1 L)$ and $\Theta(L) = (\theta_0 + \theta_1 L + \theta_2 L^2)$. Multiplying both polynomials and matching powers of $L$:

$$\theta_0 = \phi_0 \psi_0,$$
$$\theta_1 = \phi_0 \psi_1 - \phi_1 \psi_0,$$
$$\theta_2 = \phi_0 \psi_2 - \phi_1 \psi_1,$$
$$0 = \phi_0 \psi_j - \phi_1 \psi_{j-1}, \qquad j \geq 3,$$

which you can easily solve recursively for the $\psi_j$. TRY IT!!!
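(Illustrative code, not in the original slides, taking up the TRY IT.) A sketch of the resulting recursion $\psi_j = (\theta_j + \phi_1 \psi_{j-1}) / \phi_0$, with $\theta_j = 0$ beyond the MA order; the helper name `psi_from_matching` and the parameter values are our own.

```python
import numpy as np

def psi_from_matching(phi0, phi1, thetas, n):
    """Solve Theta(L) = Phi(L) Psi(L) with Phi(L) = phi0 - phi1 L:
    matching L^j gives theta_j = phi0 psi_j - phi1 psi_{j-1},
    so psi_j = (theta_j + phi1 psi_{j-1}) / phi0 (theta_j = 0 past the MA order)."""
    psi = np.zeros(n + 1)
    for j in range(n + 1):
        theta_j = thetas[j] if j < len(thetas) else 0.0
        prev = psi[j - 1] if j >= 1 else 0.0
        psi[j] = (theta_j + phi1 * prev) / phi0
    return psi

print(psi_from_matching(1.0, 0.8, [1.0, 0.3, 0.2], 6))
```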
Appendix: Factoring Lag Polynomials

Suppose we need to invert the polynomial

$$\Phi(L) = (1 - \phi_1 L - \phi_2 L^2).$$

We can do that by factoring it:

$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L), \qquad \lambda_1 \lambda_2 = -\phi_2, \quad \lambda_1 + \lambda_2 = \phi_1.$$

Now we need to invert each factor and multiply:

$$(1 - \lambda_1 L)^{-1} (1 - \lambda_2 L)^{-1} = \Big( \sum_{j=0}^{\infty} \lambda_1^j L^j \Big) \Big( \sum_{j=0}^{\infty} \lambda_2^j L^j \Big) = \sum_{j=0}^{\infty} \Big( \sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k} \Big) L^j = 1 + (\lambda_1 + \lambda_2) L + \cdots$$

Check the last expression!!!!
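(Illustrative code, not in the original slides, taking up the check above.) The coefficients of $(1 - \phi_1 L - \phi_2 L^2)^{-1}$ obtained from the AR(2) $\psi$-recursion should match the convolution $\sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k}$; the parameter values are arbitrary.

```python
import numpy as np

phi1, phi2 = 0.5, 0.3
lam1, lam2 = np.roots([1.0, -phi1, -phi2])     # lambda^2 - phi1 lambda - phi2 = 0, so that
                                               # lambda1 + lambda2 = phi1, lambda1 lambda2 = -phi2
n = 6
psi = [1.0, phi1]                              # long-division coefficients of 1/(1 - phi1 L - phi2 L^2)
for j in range(2, n + 1):
    psi.append(phi1 * psi[j - 1] + phi2 * psi[j - 2])

conv = [np.real(sum(lam1**k * lam2**(j - k) for k in range(j + 1))) for j in range(n + 1)]
print(np.round(psi, 4))
print(np.round(conv, 4))                       # the two coefficient lists agree
```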
Appendix: Partial Fraction Tricks

There is a prettier way to express the last inversion, by using the partial fraction tricks. Find the constants a and b such that

$$\frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{a}{(1 - \lambda_1 L)} + \frac{b}{(1 - \lambda_2 L)} = \frac{a(1 - \lambda_2 L) + b(1 - \lambda_1 L)}{(1 - \lambda_1 L)(1 - \lambda_2 L)}.$$

The numerator on the right-hand side must be 1, so

$$a + b = 1, \qquad a\lambda_2 + b\lambda_1 = 0.$$

Solving,

$$a = \frac{\lambda_1}{\lambda_1 - \lambda_2}, \qquad b = \frac{\lambda_2}{\lambda_2 - \lambda_1},$$

so

$$\frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{\lambda_1}{\lambda_1 - \lambda_2}\, \frac{1}{(1 - \lambda_1 L)} + \frac{\lambda_2}{\lambda_2 - \lambda_1}\, \frac{1}{(1 - \lambda_2 L)} = \sum_{j=0}^{\infty} \Big( \frac{\lambda_1}{\lambda_1 - \lambda_2}\, \lambda_1^j + \frac{\lambda_2}{\lambda_2 - \lambda_1}\, \lambda_2^j \Big) L^j.$$
Appendix: More on Invertibility

Consider an MA(1): $Z_t = (1 + \theta L)\, a_t$.

If $|\theta| < 1$, $(1 + \theta L)^{-1}$ is defined:

$$(1 + \theta L)^{-1} = (1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \cdots),$$

$$(1 + \theta L)^{-1} Z_t = a_t \quad\Longrightarrow\quad \text{AR}(\infty).$$

Definition: An MA process is said to be invertible if it can be written as an AR($\infty$).

For an MA(1) to be invertible we require $|\theta| < 1$ [the root of $1 + \theta x = 0$ is $x = -1/\theta$, and $|x| > 1 \iff |\theta| < 1$].

For an MA(q) to be invertible, all roots of the characteristic equation should lie outside the unit circle.

MA processes have an invertible and a non-invertible representation.
Invertible representation: the optimal forecast depends on past information.
Non-invertible representation: the forecast depends on the future!!!