14. Stochastic Processes

Introduction

Let ξ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ξ) is assigned. The collection of such waveforms forms a stochastic process. The set of outcomes {ξ_k} and the time index t can be continuous or discrete (countably infinite or finite) as well. For fixed ξ_i ∈ S (the set of all experimental outcomes), X(t, ξ_i) is a specific time function. For fixed t,

  X₁ = X(t₁, ξ)

is a random variable. The ensemble of all such realizations X(t, ξ) over time represents the stochastic process X(t). (Fig. 14.1 sketches several waveforms X(t, ξ_k) against t, with the random variables X(t₁, ξ) and X(t₂, ξ) read off as vertical cross-sections at t₁ and t₂.) For example,

  X(t) = a cos(ω₀t + ϕ),

where ϕ is a uniformly distributed random variable in (0, 2π), represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena.

If X(t) is a stochastic process, then for fixed t, X(t) represents a random variable. Its distribution function is given by

  F_X(x, t) = P{X(t) ≤ x}    (14-1)

Notice that F_X(x, t) depends on t, since for a different t we obtain a different random variable. Further,

  f_X(x, t) ≜ dF_X(x, t)/dx    (14-2)

represents the first-order probability density function of the process X(t).
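As a quick illustration (an addition, not part of the original notes), here is a minimal Python sketch of this dual view: fixing the outcome ξ gives one waveform, while fixing t gives a random variable whose distribution (14-1) can be estimated empirically. The amplitude, frequency, and sample counts are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
a, w0 = 1.0, 2.0 * np.pi            # assumed amplitude and angular frequency
t = np.linspace(0.0, 2.0, 401)

# Fixed outcome xi -> one specific time function X(t, xi):
phi = rng.uniform(0.0, 2.0 * np.pi)
waveform = a * np.cos(w0 * t + phi)

# Fixed time t1 -> X(t1) is a random variable over the ensemble of outcomes:
t1 = 0.25
samples = a * np.cos(w0 * t1 + rng.uniform(0.0, 2.0 * np.pi, size=100_000))

print(waveform[:5])             # start of one realization
print(np.mean(samples <= 0.5))  # empirical F_X(0.5, t1), cf. (14-1)
```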

For t = t₁ and t = t₂, X(t) represents two different random variables X₁ = X(t₁) and X₂ = X(t₂) respectively. Their joint distribution is given by

  F_X(x₁, x₂, t₁, t₂) = P{X(t₁) ≤ x₁, X(t₂) ≤ x₂}    (14-3)

and

  f_X(x₁, x₂, t₁, t₂) ≜ ∂²F_X(x₁, x₂, t₁, t₂) / ∂x₁∂x₂    (14-4)

represents the second-order density function of the process X(t). Similarly f_X(x₁, x₂, …, x_n, t₁, t₂, …, t_n) represents the nth-order density function of the process X(t). Complete specification of the stochastic process X(t) requires the knowledge of f_X(x₁, x₂, …, x_n, t₁, t₂, …, t_n) for all t_i, i = 1, 2, …, n and for all n (an almost impossible task in reality).

Mean of a Stochastic Process:

  µ(t) ≜ E{X(t)} = ∫_{−∞}^{+∞} x f_X(x, t) dx    (14-5)

represents the mean value of the process X(t). In general, the mean of a process can depend on the time index t.

The autocorrelation function of a process X(t) is defined as

  R_XX(t₁, t₂) ≜ E{X(t₁) X*(t₂)} = ∫∫ x₁ x₂* f_X(x₁, x₂, t₁, t₂) dx₁ dx₂    (14-6)

and it represents the interrelationship between the random variables X₁ = X(t₁) and X₂ = X(t₂) generated from the process X(t).

Properties:

1. R_XX(t₁, t₂) = R*_XX(t₂, t₁) = [E{X(t₂) X*(t₁)}]*    (14-7)

2. R_XX(t, t) = E{|X(t)|²} > 0. (Average instantaneous power)

3. R_XX(t₁, t₂) represents a nonnegative definite function, i.e., for any set of constants {a_i}, i = 1, …, n,

  Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j* R_XX(t_i, t_j) ≥ 0.    (14-8)

Eq. (14-8) follows by noticing that E{|Y|²} ≥ 0 for Y = Σ_{i=1}^{n} a_i X(t_i).

The function

  C_XX(t₁, t₂) = R_XX(t₁, t₂) − µ_X(t₁) µ*_X(t₂)    (14-9)

represents the autocovariance function of the process X(t).

Example 14.1
Let

  z = ∫_{−T}^{T} X(t) dt.

Then

  E[|z|²] = ∫_{−T}^{T} ∫_{−T}^{T} E{X(t₁) X*(t₂)} dt₁ dt₂
          = ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t₁, t₂) dt₁ dt₂.    (14-10)

Example 14.2

  X(t) = a cos(ω₀t + ϕ),  ϕ ~ U(0, 2π).    (14-11)

This gives

  µ_X(t) = E{X(t)} = a E{cos(ω₀t + ϕ)}
         = a cos ω₀t E{cos ϕ} − a sin ω₀t E{sin ϕ} = 0,    (14-12)

since E{cos ϕ} = (1/2π) ∫_0^{2π} cos ϕ dϕ = 0 = E{sin ϕ}.

Similarly

  R_XX(t₁, t₂) = a² E{cos(ω₀t₁ + ϕ) cos(ω₀t₂ + ϕ)}
              = (a²/2) E{cos ω₀(t₁ − t₂) + cos(ω₀(t₁ + t₂) + 2ϕ)}
              = (a²/2) cos ω₀(t₁ − t₂).    (14-13)
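The closed forms (14-12)-(14-13) lend themselves to a direct Monte Carlo check (an added sketch, not from the original notes): over many independent draws of ϕ the sample mean should vanish and the sample correlation should approach (a²/2) cos ω₀(t₁ − t₂). All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a, w0 = 1.0, 2.0 * np.pi
t1, t2 = 0.3, 0.7
phi = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

x1 = a * np.cos(w0 * t1 + phi)
x2 = a * np.cos(w0 * t2 + phi)

print(np.mean(x1))                           # ~ 0, matching (14-12)
print(np.mean(x1 * x2))                      # empirical R_XX(t1, t2)
print(0.5 * a**2 * np.cos(w0 * (t1 - t2)))   # theoretical value (14-13)
```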

Stationary Stochastic Processes

Stationary processes exhibit statistical properties that are invariant to shifts in the time index. Thus, for example, second-order stationarity implies that the statistical properties of the pairs {X(t₁), X(t₂)} and {X(t₁+c), X(t₂+c)} are the same for any c. Similarly, first-order stationarity implies that the statistical properties of X(t_i) and X(t_i+c) are the same for any c.

In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is nth-order Strict-Sense Stationary (S.S.S) if

  f_X(x₁, x₂, …, x_n, t₁, t₂, …, t_n) ≡ f_X(x₁, x₂, …, x_n, t₁+c, t₂+c, …, t_n+c)    (14-14)

for any c, where the left side represents the joint density function of the random variables X₁ = X(t₁), X₂ = X(t₂), …, X_n = X(t_n) and the right side corresponds to the joint density function of the random variables X₁′ = X(t₁+c), X₂′ = X(t₂+c), …, X_n′ = X(t_n+c). A process X(t) is said to be strict-sense stationary if (14-14) is true for all t_i, i = 1, 2, …, n, for all n = 1, 2, …, and for any c.

For a first-order strict-sense stationary process, from (14-14) we have

  f_X(x, t) ≡ f_X(x, t+c)    (14-15)

for any c. In particular, c = −t gives

  f_X(x, t) = f_X(x),    (14-16)

i.e., the first-order density of X(t) is independent of t. In that case

  E[X(t)] = ∫_{−∞}^{+∞} x f(x) dx = µ, a constant.    (14-17)

Similarly, for a second-order strict-sense stationary process we have from (14-14)

  f_X(x₁, x₂, t₁, t₂) ≡ f_X(x₁, x₂, t₁+c, t₂+c)

for any c. For c = −t₂ we get

  f_X(x₁, x₂, t₁, t₂) ≡ f_X(x₁, x₂, t₁ − t₂),    (14-18)

i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices t₁ − t₂ = τ. In that case the autocorrelation function is given by

  R_XX(t₁, t₂) ≜ E{X(t₁) X*(t₂)}
             = ∫∫ x₁ x₂* f_X(x₁, x₂, τ = t₁ − t₂) dx₁ dx₂
             = R_XX(t₁ − t₂) ≜ R_XX(τ) = R*_XX(−τ),    (14-19)

i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices τ = t₁ − t₂.

Notice that (14-17) and (14-19) are consequences of the stochastic process being first- and second-order strict-sense stationary. On the other hand, the basic conditions for first- and second-order stationarity – Eqs. (14-16) and (14-18) – are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S), by making use of (14-17) and (14-19) as the necessary conditions. Thus, a process X(t) is said to be Wide-Sense Stationary if

  (i) E{X(t)} = µ    (14-20)

and

  (ii) E{X(t₁) X*(t₂)} = R_XX(t₁ − t₂),    (14-21)

i.e., for wide-sense stationary processes, the mean is a constant and the autocorrelation function depends only on the difference between the time indices. Notice that (14-20)-(14-21) do not say anything about the nature of the probability density functions; they deal instead with the average behavior of the process. Since (14-20)-(14-21) follow from (14-16) and (14-18), strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process.

This follows since, if X(t) is a Gaussian process, then by definition X₁ = X(t₁), X₂ = X(t₂), …, X_n = X(t_n) are jointly Gaussian random variables for any t₁, t₂, …, t_n, whose joint characteristic function is given by

  φ_X(ω₁, ω₂, …, ω_n) = e^{ j Σ_{k=1}^{n} µ(t_k) ω_k − (1/2) Σ_{i,k} C_XX(t_i, t_k) ω_i ω_k },    (14-22)

where C_XX(t_i, t_k) is as defined in (14-9). If X(t) is wide-sense stationary, then using (14-20)-(14-21) in (14-22) we get

  φ_X(ω₁, ω₂, …, ω_n) = e^{ j µ Σ_{k=1}^{n} ω_k − (1/2) Σ_{i=1}^{n} Σ_{k=1}^{n} C_XX(t_i − t_k) ω_i ω_k },    (14-23)

and hence if the set of time indices is shifted by a constant c to generate a new set of jointly Gaussian random variables X₁′ = X(t₁+c), X₂′ = X(t₂+c), …, X_n′ = X(t_n+c), then their joint characteristic function is identical to (14-23). Thus the sets of random variables {X_i} and {X_i′} have the same joint probability distribution for all n and all c, establishing the strict-sense stationarity of Gaussian processes from their wide-sense stationarity.

To summarize, if X(t) is a Gaussian process, then

  wide-sense stationarity (w.s.s) ⟹ strict-sense stationarity (s.s.s).

Notice that since the joint p.d.f of Gaussian random variables depends only on their second-order statistics, which is also the basis for wide-sense stationarity, we obtain strict-sense stationarity as well.

From (14-12)-(14-13) (refer to Example 14.2), the process X(t) = a cos(ω₀t + ϕ) in (14-11) is wide-sense stationary, but not strict-sense stationary.

Similarly, if X(t) is a zero-mean wide-sense stationary process in Example 14.1, then σ_z² in (14-10) reduces to

  σ_z² = E{|z|²} = ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t₁ − t₂) dt₁ dt₂.

As t₁, t₂ vary from −T to +T, τ = t₁ − t₂ varies from −2T to +2T. Moreover, R_XX(τ) is constant over each diagonal strip τ = t₁ − t₂ of the square shown in Fig. 14.2 (the (t₁, t₂) square [−T, T] × [−T, T] with the strip at lag τ shaded), whose area is given by (for τ > 0, to first order in dτ)

  (1/2)(2T − τ)² − (1/2)(2T − τ − dτ)² = (2T − τ) dτ,

and hence the above integral reduces to

  σ_z² = ∫_{−2T}^{2T} R_XX(τ)(2T − |τ|) dτ = 2T ∫_{−2T}^{2T} R_XX(τ)(1 − |τ|/2T) dτ.    (14-24)
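As a numerical sanity check (added here, not in the original notes), the reduction from the double integral (14-10) to the single integral (14-24) can be verified on a grid for an assumed autocorrelation, e.g., R_XX(τ) = e^{−|τ|} with T = 1.

```python
import numpy as np

T = 1.0
R = lambda tau: np.exp(-np.abs(tau))   # assumed example autocorrelation

# Double integral of R(t1 - t2) over the square [-T, T] x [-T, T]:
t = np.linspace(-T, T, 801)
dt = t[1] - t[0]
T1, T2 = np.meshgrid(t, t, indexing="ij")
double = np.sum(R(T1 - T2)) * dt * dt

# Single integral (14-24) over tau in [-2T, 2T]:
tau = np.linspace(-2 * T, 2 * T, 1601)
dtau = tau[1] - tau[0]
single = np.sum(R(tau) * (2 * T - np.abs(tau))) * dtau

print(double, single)   # the two values should agree closely
```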

Systems with Stochastic Inputs

A deterministic system¹ transforms each input waveform X(t, ξ_i) into an output waveform Y(t, ξ_i) = T[X(t, ξ_i)] by operating only on the time variable t. Thus a set of realizations at the input corresponding to a process X(t) generates a new set of realizations {Y(t, ξ)} at the output associated with a new process Y(t) (Fig. 14.3: X(t, ξ_i) → T[·] → Y(t, ξ_i)).

Our goal is to study the output process statistics in terms of the input process statistics and the system function.

¹A stochastic system, on the other hand, operates on both the variables t and ξ.

Deterministic systems fall into two broad classes:

- Memoryless systems: Y(t) = g[X(t)].
- Systems with memory, which may be time-varying or time-invariant. Among these, linear systems satisfy Y(t) = L[X(t)], and linear time-invariant (LTI) systems are characterized by an impulse response h(t):

  Y(t) = ∫_{−∞}^{+∞} h(t − τ) X(τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ.

Memoryless Systems:

The output Y(t) in this case depends only on the present value of the input X(t), i.e.,

  Y(t) = g{X(t)}.    (14-25)

- A strict-sense stationary input to a memoryless system yields a strict-sense stationary output.
- A wide-sense stationary input to a memoryless system yields an output that need not be stationary in any sense.
- A stationary Gaussian input X(t) with autocorrelation R_XX(τ) yields an output Y(t) that is stationary but not Gaussian, with R_XY(τ) = ηR_XX(τ) (Fig. 14.4; see (14-26) below).

Theorem: If X(t) is a zero-mean stationary Gaussian process and Y(t) = g[X(t)], where g(·) represents a memoryless device, then

  R_XY(τ) = η R_XX(τ),  η = E{g′(X)}.    (14-26)

Proof: see (9-76) (page 397) in the Text.
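The identity (14-26) can be illustrated numerically (an added sketch, not from the original notes) with the assumed memoryless device g(x) = x³, for which η = E{g′(X)} = 3σ_X²; the moving-average construction of the stationary Gaussian input below is likewise an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400_000
w = rng.standard_normal(N + 4)
# Zero-mean stationary Gaussian input: short moving average of white noise.
x = (w[4:] + w[3:-1] + w[2:-2]) / np.sqrt(3.0)
y = x**3                                    # memoryless device g(x) = x^3
eta = 3.0 * np.var(x)                       # E{g'(X)} = 3 E{X^2}

for lag in range(4):
    rxx = np.mean(x[:N - lag] * x[lag:N])   # empirical R_XX(lag)
    rxy = np.mean(x[:N - lag] * y[lag:N])   # empirical R_XY(lag)
    print(lag, rxy, eta * rxx)              # should match, per (14-26)
```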

Linear Systems: L[·] represents a linear system if

  L{a₁X(t₁) + a₂X(t₂)} = a₁L{X(t₁)} + a₂L{X(t₂)}.    (14-28)

Let

  Y(t) = L{X(t)}    (14-29)

represent the output of a linear system.

Time-Invariant System: L[·] represents a time-invariant system if

  Y(t) = L{X(t)}  ⟹  L{X(t − t₀)} = Y(t − t₀),    (14-30)

i.e., a shift in the input results in the same shift in the output. If L[·] satisfies both (14-28) and (14-30), then it corresponds to a linear time-invariant (LTI) system.

LTI systems can be uniquely represented in terms of their output to a delta function: δ(t) → LTI → h(t), where h(t) is the impulse response of the system (Fig. 14.5). Then, for an arbitrary input X(t) (Fig. 14.6),

  Y(t) = ∫_{−∞}^{+∞} h(t − τ) X(τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ.    (14-31)

Eq. (14-31) follows by expressing X(t) as

  X(t) = ∫_{−∞}^{+∞} X(τ) δ(t − τ) dτ    (14-32)

and applying (14-28) and (14-30) to Y(t) = L{X(t)}. Thus

  Y(t) = L{X(t)} = L{ ∫_{−∞}^{+∞} X(τ) δ(t − τ) dτ }
       = ∫_{−∞}^{+∞} L{X(τ) δ(t − τ)} dτ          (by linearity)
       = ∫_{−∞}^{+∞} X(τ) L{δ(t − τ)} dτ          (by time-invariance)
       = ∫_{−∞}^{+∞} X(τ) h(t − τ) dτ = ∫_{−∞}^{+∞} h(τ) X(t − τ) dτ.    (14-33)

Output Statistics: Using (14-33), the mean of the output process is given by

  µ_Y(t) = E{Y(t)} = E{ ∫ X(τ) h(t − τ) dτ }
        = ∫_{−∞}^{+∞} µ_X(τ) h(t − τ) dτ = µ_X(t) ∗ h(t).    (14-34)

Similarly, the cross-correlation function between the input and output processes is given by

  R_XY(t₁, t₂) = E{X(t₁) Y*(t₂)}
             = E{ X(t₁) ∫ X*(t₂ − α) h*(α) dα }
             = ∫ E{X(t₁) X*(t₂ − α)} h*(α) dα
             = ∫_{−∞}^{+∞} R_XX(t₁, t₂ − α) h*(α) dα
             = R_XX(t₁, t₂) ∗ h*(t₂).    (14-35)

Finally, the output autocorrelation function is given by

  R_YY(t₁, t₂) = E{Y(t₁) Y*(t₂)}
             = E{ ∫ X(t₁ − β) h(β) dβ · Y*(t₂) }
             = ∫ E{X(t₁ − β) Y*(t₂)} h(β) dβ
             = ∫_{−∞}^{+∞} R_XY(t₁ − β, t₂) h(β) dβ
             = R_XY(t₁, t₂) ∗ h(t₁),    (14-36)

or

  R_YY(t₁, t₂) = R_XX(t₁, t₂) ∗ h*(t₂) ∗ h(t₁).    (14-37)

Fig. 14.7: (a) µ_X(t) → h(t) → µ_Y(t); (b) R_XX(t₁, t₂) → h*(t₂) → R_XY(t₁, t₂) → h(t₁) → R_YY(t₁, t₂).

In particular, if X(t) is wide-sense stationary, then we have µ_X(t) = µ_X, so that from (14-34)

  µ_Y(t) = µ_X ∫_{−∞}^{+∞} h(τ) dτ = µ_X c, a constant.    (14-38)

Also R_XX(t₁, t₂) = R_XX(t₁ − t₂), so that (14-35) reduces to

  R_XY(t₁, t₂) = ∫_{−∞}^{+∞} R_XX(t₁ − t₂ + α) h*(α) dα
             = R_XX(τ) ∗ h*(−τ) = R_XY(τ),  τ = t₁ − t₂.    (14-39)

Thus X(t) and Y(t) are jointly w.s.s. Further, from (14-36), the output autocorrelation simplifies to

  R_YY(t₁, t₂) = ∫_{−∞}^{+∞} R_XY(t₁ − β − t₂) h(β) dβ
             = R_XY(τ) ∗ h(τ) = R_YY(τ),  τ = t₁ − t₂.    (14-40)

From (14-37), we obtain

  R_YY(τ) = R_XX(τ) ∗ h*(−τ) ∗ h(τ).    (14-41)

From (14-38)-(14-40), the output process is also wide-sense stationary. This gives rise to the following representation (Fig. 14.8):

(a) wide-sense stationary process X(t) → LTI system h(t) → wide-sense stationary process Y(t);
(b) strict-sense stationary process X(t) → LTI system h(t) → strict-sense stationary process Y(t) (see Text for proof);
(c) Gaussian process (also stationary) X(t) → linear system → Gaussian process (also stationary) Y(t).
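In discrete time, the relation (14-41) becomes a chain of finite convolutions that is easy to check numerically. The sketch below is an addition; the MA(1) input and FIR filter are assumed examples, and the empirical output power is compared with the lag-0 value of R_XX(τ) ∗ h*(−τ) ∗ h(τ).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500_000
w = rng.standard_normal(N + 1)
x = w[1:] + 0.5 * w[:-1]        # WSS input; R_XX = [0.5, 1.25, 0.5] at lags -1, 0, 1
h = np.array([1.0, 0.4, 0.2])   # assumed FIR impulse response
y = np.convolve(x, h)[:N]       # output process

# Theoretical R_YY(tau) = R_XX(tau) * h(-tau) * h(tau), per (14-41):
rxx = np.array([0.5, 1.25, 0.5])
ryy = np.convolve(np.convolve(rxx, h[::-1]), h)   # covers lags -3 .. 3

print(ryy[len(ryy) // 2], np.mean(y * y))   # theoretical vs empirical R_YY(0)
```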

White Noise Process:

W(t) is said to be a white noise process if

  R_WW(t₁, t₂) = q(t₁) δ(t₁ − t₂),    (14-42)

i.e., E[W(t₁) W*(t₂)] = 0 unless t₁ = t₂.

W(t) is said to be wide-sense stationary (w.s.s) white noise if E[W(t)] = constant, and

  R_WW(t₁, t₂) = q δ(t₁ − t₂) = q δ(τ).    (14-43)

If W(t) is also a Gaussian process (white Gaussian process), then all of its samples are independent random variables (why?).

Fig. 14.9: white noise W(t) → LTI h(t) → colored noise N(t) = h(t) ∗ W(t).

For w.s.s. white noise input W(t), we have

  E[N(t)] = µ_W ∫_{−∞}^{+∞} h(τ) dτ, a constant,    (14-44)

and

  R_NN(τ) = q δ(τ) ∗ h*(−τ) ∗ h(τ)
         = q h*(−τ) ∗ h(τ) = q ρ(τ),    (14-45)

where

  ρ(τ) = h(τ) ∗ h*(−τ) = ∫_{−∞}^{+∞} h(α) h*(α + τ) dα.    (14-46)

Thus the output of a white noise process through an LTI system represents a (colored) noise process.

Note: White noise need not be Gaussian. "White" and "Gaussian" are two different concepts!
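A discrete-time version of Fig. 14.9 (an added sketch, with an arbitrary example filter) shows white noise acquiring the correlation structure qρ(τ) of (14-45)-(14-46) after LTI filtering.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500_000
w = rng.standard_normal(N)            # w.s.s. white noise with q = 1
h = np.array([1.0, 0.6, 0.3, 0.1])    # assumed impulse response
colored = np.convolve(w, h)[:N]       # N(t) = h(t) * W(t)

# rho(tau) = sum_alpha h(alpha) h(alpha + tau), the discrete form of (14-46):
rho = np.correlate(h, h, mode="full")     # lags -(len(h)-1) .. (len(h)-1)

for lag in range(len(h)):
    emp = np.mean(colored[:N - lag] * colored[lag:N])
    print(lag, emp, rho[len(h) - 1 + lag])   # empirical vs q * rho(lag)
```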

Discrete-Time Stochastic Processes:

A discrete-time stochastic process X_n = X(nT) is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by

  µ_n = E{X(nT)},    (14-57)

  R(n₁, n₂) = E{X(n₁T) X*(n₂T)},    (14-58)

and

  C(n₁, n₂) = R(n₁, n₂) − µ_{n₁} µ*_{n₂}    (14-59)

respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, X(nT) is wide-sense stationary if

  E{X(nT)} = µ, a constant,    (14-60)

and

  E[X{(k + n)T} X*{kT}] = R(n) = r_n ≜ r*_{−n},    (14-61)

i.e., R(n₁, n₂) = R(n₁ − n₂) = R*(n₂ − n₁). The nonnegative-definite property of the autocorrelation sequence in (14-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence {r_n}, −∞ < n < +∞, forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix T_n given by

        [ r₀       r₁        r₂    …    r_n     ]
        [ r₁*      r₀        r₁    …    r_{n−1} ]
  T_n = [ ⋮        ⋮         ⋮     ⋱    ⋮       ]  =  T_n*    (14-62)
        [ r_n*     r_{n−1}*  …     r₁*  r₀      ]

is nonnegative (positive) definite for n = 0, 1, 2, …, ∞.

Proof: Let a = [a₀, a₁, …, a_n]^T represent an arbitrary constant vector. Then from (14-62),

  a* T_n a = Σ_{i=0}^{n} Σ_{k=0}^{n} a_i a_k* r_{k−i},    (14-63)

since the Toeplitz character gives (T_n)_{i,k} = r_{k−i}. Using (14-61), Eq. (14-63) reduces to

  a* T_n a = Σ_{i=0}^{n} Σ_{k=0}^{n} a_i a_k* E{X(kT) X*(iT)} = E{ | Σ_{k=0}^{n} a_k* X(kT) |² } ≥ 0.    (14-64)

From (14-64), if X(nT) is a wide-sense stationary stochastic process, then T_n is a nonnegative definite matrix for every n = 0, 1, 2, …, ∞. Similarly, the converse also follows from (14-64). (See Section 9.4, Text.)
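The theorem is easy to exercise numerically (an added sketch): for the assumed autocorrelation sequence r_n = a^{|n|} with |a| < 1 (that of an AR(1) process, cf. (14-77)), every T_n should have only nonnegative eigenvalues.

```python
import numpy as np

a, n = 0.7, 5
r = a ** np.abs(np.arange(-n, n + 1))        # r_{-n} .. r_n
idx = np.arange(n + 1)
Tn = r[n + (idx[None, :] - idx[:, None])]    # (T_n)_{i,k} = r_{k-i}, per (14-62)
print(np.linalg.eigvalsh(Tn))                # all >= 0 for a valid sequence
```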

If X(nT) represents a wide-sense stationary input to a discrete-time system {h(nT)}, and Y(nT) the system output, then as before the cross-correlation function satisfies

  R_XY(n) = R_XX(n) ∗ h*(−n)    (14-65)

and the output autocorrelation function is given by

  R_YY(n) = R_XY(n) ∗ h(n),    (14-66)

or

  R_YY(n) = R_XX(n) ∗ h*(−n) ∗ h(n).    (14-67)

Thus wide-sense stationarity from input to output is preserved for discrete-time systems also.

Auto Regressive Moving Average (ARMA) Processes

Consider an input-output representation

  X(n) = − Σ_{k=1}^{p} a_k X(n − k) + Σ_{k=0}^{q} b_k W(n − k),    (14-68)

where X(n) may be considered as the output of a system {h(n)} driven by the input W(n) (Fig. 14.12: W(n) → h(n) → X(n)). The Z-transform of (14-68) gives

  X(z) Σ_{k=0}^{p} a_k z^{−k} = W(z) Σ_{k=0}^{q} b_k z^{−k},  a₀ ≡ 1,    (14-69)

or

  H(z) = Σ_{k=0}^{∞} h(k) z^{−k} = X(z)/W(z)
       = (b₀ + b₁z^{−1} + b₂z^{−2} + … + b_q z^{−q}) / (1 + a₁z^{−1} + a₂z^{−2} + … + a_p z^{−p})
       ≜ B(z)/A(z)    (14-70)

represents the transfer function of the associated system response {h(n)} in Fig. 14.12, so that

  X(n) = Σ_k h(n − k) W(k).    (14-71)

Notice that the transfer function H(z) in (14-70) is rational with p poles and q zeros that determine the model order of the underlying system. From (14-68), the output undergoes regression over p of its previous values, and at the same time a moving average based on W(n), W(n−1), …, W(n−q) of the input over (q + 1) values is added to it, thus generating an Auto Regressive Moving Average (ARMA(p, q)) process X(n). Generally the input {W(n)} represents a sequence of uncorrelated random variables of zero mean and constant variance σ_W², so that

  R_WW(n) = σ_W² δ(n).    (14-72)

If, in addition, {W(n)} is normally distributed, then the output {X(n)} also represents a strict-sense stationary normal process.

If q = 0, then (14-68) represents an AR(p) process (all-pole process), and if p = 0, then (14-68) represents an MA(q) process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.

AR(1) process: An AR(1) process has the form (see (14-68))

  X(n) = a X(n − 1) + W(n)    (14-73)

and from (14-70) the corresponding system transfer function is

  H(z) = 1/(1 − a z^{−1}) = Σ_{n=0}^{∞} aⁿ z^{−n},    (14-74)

provided |a| < 1. Thus

  h(n) = aⁿ,  |a| < 1,    (14-75)

represents the impulse response of a stable AR(1) system. Using (14-67) together with (14-72) and (14-75), we get the output autocorrelation sequence of an AR(1) process to be

  R_XX(n) = σ_W² δ(n) ∗ {a^{−n}} ∗ {aⁿ} = σ_W² Σ_{k=0}^{∞} a^{|n|+k} a^k = σ_W² a^{|n|} / (1 − a²),    (14-76)
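A short simulation (added here; a = 0.8 and unit-variance W(n) are illustrative choices) confirms the geometric decay (14-76) of the AR(1) autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(5)
a, N = 0.8, 500_000
w = rng.standard_normal(N)          # unit-variance white input, sigma_W^2 = 1
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]      # X(n) = a X(n-1) + W(n), per (14-73)

for lag in range(5):
    emp = np.mean(x[:N - lag] * x[lag:N])
    print(lag, emp, a**lag / (1 - a**2))   # empirical vs (14-76)
```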

where we have made use of the discrete version of (14-46). The normalized (in terms of R_XX(0)) output autocorrelation sequence is given by

  ρ_X(n) = R_XX(n)/R_XX(0) = a^{|n|},  |n| ≥ 0.    (14-77)

It is instructive to compare the AR(1) model discussed above with one obtained by superimposing a random component on it, which may be an error term associated with observing a first-order AR process X(n). Thus

  Y(n) = X(n) + V(n),    (14-78)

where X(n) ~ AR(1) as in (14-73), and V(n) is an uncorrelated random sequence with zero mean and variance σ_V² that is also uncorrelated with {W(n)}. From (14-73) and (14-78) we obtain the autocorrelation of the observed process Y(n) to be

  R_YY(n) = R_XX(n) + R_VV(n) = R_XX(n) + σ_V² δ(n)
         = σ_W² a^{|n|} / (1 − a²) + σ_V² δ(n),    (14-79)

so that its normalized version is given by

  ρ_Y(n) = R_YY(n)/R_YY(0) = { 1,         n = 0
                             { c a^{|n|},  n = ±1, ±2, …    (14-80)

where

  c = σ_W² / (σ_W² + σ_V²(1 − a²)) < 1.    (14-81)

Eqs. (14-77) and (14-80) demonstrate the effect of superimposing an error sequence on an AR(1) model. For nonzero lags, the autocorrelation of the observed sequence {Y(n)} is reduced by a constant factor compared to the original process {X(n)} (Fig. 14.13: ρ_X(0) = ρ_Y(0) = 1, and ρ_X(k) > ρ_Y(k) for k ≠ 0). From (14-78), the superimposed error sequence V(n) only affects the corresponding term in Y(n) (term by term). However, a particular term in the "input sequence" W(n) affects X(n) and Y(n) as well as all subsequent observations.
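The reduction factor (14-81) can be seen directly in simulation (an added sketch; all parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(6)
a, sw, sv, N = 0.8, 1.0, 0.7, 500_000
w = sw * rng.standard_normal(N)
v = sv * rng.standard_normal(N)                # observation noise V(n)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]                 # AR(1) process (14-73)
y = x + v                                      # observed process (14-78)

c = sw**2 / (sw**2 + sv**2 * (1 - a**2))       # reduction factor (14-81)
ryy0 = np.mean(y * y)
for lag in range(1, 4):
    emp = np.mean(y[:N - lag] * y[lag:N]) / ryy0   # empirical rho_Y(lag)
    print(lag, emp, c * a**lag)                    # vs (14-80)
```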

AR(2) Process: An AR(2) process has the form

  X(n) = a₁ X(n − 1) + a₂ X(n − 2) + W(n)    (14-82)

and from (14-70) the corresponding transfer function is given by

  H(z) = Σ_{n=0}^{∞} h(n) z^{−n} = 1 / (1 − a₁z^{−1} − a₂z^{−2})
       = b₁/(1 − λ₁z^{−1}) + b₂/(1 − λ₂z^{−1}),    (14-83)

so that

  h(0) = 1,  h(1) = a₁,  h(n) = a₁ h(n − 1) + a₂ h(n − 2),  n ≥ 2,    (14-84)

and in terms of the poles λ₁ and λ₂ of the transfer function, from (14-83) we have

  h(n) = b₁λ₁ⁿ + b₂λ₂ⁿ,  n ≥ 0,    (14-85)

which represents the impulse response of the system. From (14-84)-(14-85), we also have b₁ + b₂ = 1, b₁λ₁ + b₂λ₂ = a₁. From (14-83),

  λ₁ + λ₂ = a₁,  λ₁λ₂ = −a₂,    (14-86)

and H(z) stable implies |λ₁| < 1, |λ₂| < 1.

Further, using (14-82), the output autocorrelations satisfy the recursion

  R_XX(n) = E{X(n + m) X*(m)}
         = E{[a₁X(n + m − 1) + a₂X(n + m − 2)] X*(m)} + E{W(n + m) X*(m)}
         = a₁ R_XX(n − 1) + a₂ R_XX(n − 2),    (14-87)

where the term E{W(n + m) X*(m)} vanishes for n > 0, since W(n + m) is uncorrelated with the earlier sample X(m). Hence their normalized version is given by

  ρ_X(n) ≜ R_XX(n)/R_XX(0) = a₁ ρ_X(n − 1) + a₂ ρ_X(n − 2).    (14-88)

By direct calculation using (14-67), the output autocorrelations are given by

  R_XX(n) = R_WW(n) ∗ h*(−n) ∗ h(n) = σ_W² h*(−n) ∗ h(n) = σ_W² Σ_{k=0}^{∞} h*(n + k) h(k)
         = σ_W² [ |b₁|²(λ₁*)ⁿ/(1 − |λ₁|²) + b₁*b₂(λ₁*)ⁿ/(1 − λ₁*λ₂)
                 + b₁b₂*(λ₂*)ⁿ/(1 − λ₁λ₂*) + |b₂|²(λ₂*)ⁿ/(1 − |λ₂|²) ],    (14-89)
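The recursion (14-88), started from ρ_X(0) = 1 and ρ_X(1) = a₁/(1 − a₂) (derived in (14-95) below), can be compared against a simulated AR(2) process (an added sketch; the coefficients are assumed values giving complex-conjugate poles).

```python
import numpy as np

rng = np.random.default_rng(7)
a1, a2, N = 1.0, -0.5, 500_000     # a1^2 + 4*a2 < 0: complex-conjugate poles
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = a1 * x[n - 1] + a2 * x[n - 2] + w[n]   # X(n), per (14-82)

# Theoretical normalized autocorrelations from the recursion (14-88):
rho = [1.0, a1 / (1 - a2)]
for n in range(2, 6):
    rho.append(a1 * rho[-1] + a2 * rho[-2])

rxx0 = np.mean(x * x)
for lag in range(6):
    emp = np.mean(x[:N - lag] * x[lag:N]) / rxx0
    print(lag, emp, rho[lag])      # empirical vs recursion values
```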

where we have made use of (14-85). From (14-89), the normalized output autocorrelations may be expressed as

  ρ_X(n) = R_XX(n)/R_XX(0) = c₁(λ₁*)ⁿ + c₂(λ₂*)ⁿ,    (14-90)

where c₁ and c₂ are appropriate constants.

Damped Exponentials: When the second-order system in (14-83)-(14-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives a₁² + 4a₂ < 0 in (14-83). Thus

  λ₁ = r e^{−jθ},  λ₂ = λ₁*,  r < 1.    (14-91)

In that case c₁ = c₂* = c e^{jϕ} in (14-90), so that the normalized correlations there reduce to

  ρ_X(n) = 2 Re{c₁(λ₁*)ⁿ} = 2c rⁿ cos(nθ + ϕ).    (14-92)

But from (14-86),

  λ₁ + λ₂ = 2r cos θ = a₁,  r² = −a₂ < 1,    (14-93)

and hence 2r sin θ = √(−(a₁² + 4a₂)) > 0, which gives

  tan θ = √(−(a₁² + 4a₂)) / a₁.    (14-94)

Also from (14-88),

  ρ_X(1) = a₁ ρ_X(0) + a₂ ρ_X(−1) = a₁ + a₂ ρ_X(1),

so that

  ρ_X(1) = a₁ / (1 − a₂) = 2cr cos(θ + ϕ),    (14-95)

where the latter form is obtained from (14-92) with n = 1. But ρ_X(0) = 1 in (14-92) gives

  2c cos ϕ = 1, or c = 1/(2 cos ϕ).    (14-96)

Substituting (14-96) into (14-92) and (14-95), we obtain the normalized output autocorrelations to be

  ρ_X(n) = (−a₂)^{n/2} cos(nθ + ϕ)/cos ϕ,  −a₂ < 1,    (14-97)

where ϕ satisfies

  cos(θ + ϕ)/cos ϕ = (a₁/(1 − a₂)) · (1/√(−a₂)).    (14-98)

Thus the normalized autocorrelations of a damped second-order system with real coefficients subject to random uncorrelated impulses satisfy (14-97).

More on ARMA processes

From (14-70), an ARMA(p, q) system has only p + q + 1 independent coefficients (a_k, k = 1, …, p; b_i, i = 0, …, q), and hence its impulse response sequence {h_k} must also exhibit a similar dependence among its members. In fact, according to P. Dienes (The Taylor Series, 1931), an old result due to Kronecker¹ (1881) states that the necessary and sufficient condition for H(z) = Σ_{k=0}^{∞} h_k z^{−k} to represent a rational (ARMA) system is that

  det H_n = 0,  n ≥ N  (for all sufficiently large n),    (14-99)

where

        [ h₀     h₁       h₂       …  h_n     ]
        [ h₁     h₂       h₃       …  h_{n+1} ]
  H_n ≜ [ ⋮      ⋮        ⋮        ⋱  ⋮       ].    (14-100)
        [ h_n    h_{n+1}  h_{n+2}  …  h_{2n}  ]

i.e., in the case of rational systems, for all sufficiently large n the Hankel matrices H_n in (14-100) all have the same rank.

¹Among other things, "God created the integers and the rest is the work of man." (Leopold Kronecker)
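Kronecker's criterion is easy to probe numerically (an added sketch): for the AR(1) impulse response h_k = a^k, a rational H(z), every Hankel matrix H_n has rank 1, so det H_n = 0 for all n ≥ 1.

```python
import numpy as np

a = 0.5                                  # assumed AR(1) parameter, h_k = a^k
for n in range(1, 5):
    k = np.arange(n + 1)
    Hn = a ** (k[:, None] + k[None, :])  # (H_n)_{i,j} = h_{i+j}, per (14-100)
    print(n, np.linalg.matrix_rank(Hn), np.linalg.det(Hn))  # rank 1, det ~ 0
```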
