2.11.9 Autocovariance and Autocorrelation Matrices
The cross-correlation of the complex processes \{x[k]\} and \{y[k]\} is,

r_{xy}(m) = E\{x[k]\,(y[k+m])^*\}

and the cross-covariance is,

c_{xy}(m) = E\{(x[k] - \bar{x})(y[k+m] - \bar{y})^*\}

These are related by,

c_{xy}(m) = r_{xy}(m) - \bar{x}\,\bar{y}^*

where:

m is the lag.
For a wide-sense stationary process the autocorrelation depends only on the difference between the sample indices,

r_x(k, i) = r_x(k - i, 0) \equiv r_x(k - i)

where:

k - i is the lag.
The expectation of this matrix is the (p+1) \times (p+1) autocorrelation matrix, \mathbf{R}_x, as defined by Equation 2.91,

where:

r_x(m) = r_x^*(-m), according to Hermitian symmetry.
By the same process the expectation of the outer product of the vector x minus
the mean vector of the process, i.e., (x − x ) , produces the autocovariance matrix,
Cx, which for a zero mean process is equal to the autocorrelation matrix. The
autocovariance matrix is defined by Equation 2.92.
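As an illustrative numerical sketch (not from the text; NumPy and all variable names are assumed here), the autocorrelation and autocovariance matrices can be estimated from samples of a zero-mean process, confirming that the two matrices coincide in that case:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100_000, 3
x = rng.standard_normal(N)  # zero-mean white test process

# Biased sample autocorrelation r_x(m) for lags m = 0..p
r = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(p + 1)])

# (p+1) x (p+1) Toeplitz autocorrelation matrix R_x
Rx = np.array([[r[abs(i - j)] for j in range(p + 1)]
               for i in range(p + 1)])

# Autocovariance matrix C_x: remove the sample mean first
xc = x - x.mean()
c = np.array([np.dot(xc[:N - m], xc[m:]) / N for m in range(p + 1)])
Cx = np.array([[c[abs(i - j)] for j in range(p + 1)]
               for i in range(p + 1)])

# For a (near) zero-mean process C_x and R_x agree
print(np.max(np.abs(Cx - Rx)))
```

The matrices are built as Toeplitz from the lag estimates, which is exactly the structure implied by wide-sense stationarity.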
\lim_{N \to \infty} \frac{1}{N} \sum_{m=0}^{N-1} c_x^2(m) = 0
where:

\theta = \omega T = \frac{2\pi f}{F_s}
r_x(k) = \frac{1}{2\pi} \int_{-\pi}^{+\pi} P_x(e^{j\theta})\, e^{jk\theta}\, d\theta \qquad (2.94)
The power spectrum of a wide-sense stationary random process x[n] is real-valued and nonnegative, i.e., P_x(e^{j\theta}) = P_x^*(e^{j\theta}), and P_x(z) satisfies the symmetry condition P_x(z) = P_x^*(1/z^*). If x[n] is real then the power spectrum is also even, i.e., P_x(e^{j\theta}) = P_x(e^{-j\theta}), which implies that P_x(z) = P_x^*(z^*). The total power in a zero-mean wide-sense stationary process is proportional to the area under the power spectral curve, as defined by Equation 2.95.
E\{|x[n]|^2\} = \frac{1}{2\pi} \int_{-\pi}^{+\pi} P_x(e^{j\theta})\, d\theta \qquad (2.95)
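Equation 2.95 can be checked numerically: for a finite record the discrete counterpart of the integral is the mean of the periodogram ordinates, which by Parseval's theorem equals the average power exactly. A minimal sketch, assuming NumPy (the data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 65_536, 2.0
x = np.sqrt(sigma2) * rng.standard_normal(N)  # white noise, power 2

# Periodogram ordinates |X(e^{jθ_k})|² / N on the FFT frequency grid
P = np.abs(np.fft.fft(x)) ** 2 / N

# (1/2π) ∫ P_x(e^{jθ}) dθ  ≈  (1/N) Σ_k P(θ_k)  (Riemann sum, Δθ = 2π/N)
area = P.mean()
power = np.mean(np.abs(x) ** 2)  # estimate of E{|x[n]|²}
print(area, power)               # identical, by Parseval's theorem
```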
The power spectrum can also be seen as the expected value of the squared Fourier magnitude, P_N(e^{j\theta}), in the limit as N \to \infty, for 2N + 1 samples of a given realisation of the random process, as defined by Equation 2.97.
P_x(e^{j\theta}) = \lim_{N \to \infty} E\{P_N(e^{j\theta})\} = \sum_{k=-\infty}^{\infty} r_x(k)\, e^{-jk\theta}

= \lim_{N \to \infty} E\left\{ \frac{1}{2N+1} \left| \sum_{n=-N}^{N} x(n)\, e^{-jn\theta} \right|^2 \right\} \qquad (2.97)

where:

P_N(e^{j\theta}) = \frac{1}{2N+1} \left| \sum_{n=-N}^{N} x(n)\, e^{-jn\theta} \right|^2
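The interpretation of the power spectrum as the limit of E\{P_N\} can be illustrated by averaging periodograms over independent realisations. This sketch is not from the text; it assumes NumPy and an arbitrarily chosen MA(1) process x[n] = w[n] + 0.5w[n-1], whose true spectrum is |1 + 0.5e^{-j\theta}|^2 = 1.25 + \cos\theta:

```python
import numpy as np

rng = np.random.default_rng(2)
Nseg, M = 256, 2000  # record length and number of realisations

acc = np.zeros(Nseg)
for _ in range(M):
    w = rng.standard_normal(Nseg + 1)
    x = w[1:] + 0.5 * w[:-1]                  # MA(1) realisation
    acc += np.abs(np.fft.fft(x)) ** 2 / Nseg  # periodogram P_N(e^{jθ_k})
P_avg = acc / M                               # ≈ E{P_N(e^{jθ})}

theta = 2 * np.pi * np.arange(Nseg) / Nseg
P_true = 1.25 + np.cos(theta)                 # |1 + 0.5 e^{-jθ}|²
print(np.max(np.abs(P_avg - P_true)))         # shrinks as M grows
```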
P_x(z) = \sigma_0^2\, Q(z)\, Q^*\!\left(\frac{1}{z^*}\right) \qquad (2.98)
where:

\sigma_0^2 = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln P_x(e^{j\theta})\, d\theta \right\}

For a real-valued process the factorisation becomes,

P_x(z) = \sigma_0^2\, Q(z)\, Q(z^{-1}) \qquad (2.99)
Any process that can be factored in this way is called a regular process and has the following properties:

1. The process x[n] can be realised as the output of a causal and stable filter H(z) that is driven by white noise having a variance of \sigma_0^2.

2. If the process x[n] is filtered with the inverse filter 1/H(z) the output v[n] is white noise having a variance of \sigma_0^2. In this case 1/H(z) is known as a whitening filter.

3. Since v[n] and x[n] are related by an invertible transformation, they both contain the same information and may be derived from each other.
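Properties 1 and 2 can be demonstrated with a simple AR(1) example (an assumed illustration, not the book's): take H(z) = 1/(1 - 0.8z^{-1}), drive it with unit-variance white noise, then apply the whitening filter 1/H(z) = 1 - 0.8z^{-1}:

```python
import numpy as np

rng = np.random.default_rng(3)
N, a = 200_000, 0.8
sigma0 = 1.0

# Property 1: x[n] is H(z) = 1/(1 - a z^-1) driven by white noise w[n]
w = sigma0 * rng.standard_normal(N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

# Property 2: the whitening filter 1/H(z) = 1 - a z^-1 recovers white noise
v = x[1:] - a * x[:-1]

# v should be white with variance σ0²
r0 = np.dot(v, v) / v.size           # ≈ σ0² = 1 at lag 0
r1 = np.dot(v[:-1], v[1:]) / v.size  # ≈ 0 at lag 1
print(r0, r1)
```

Because the two filters are exact inverses, v[n] reproduces the driving noise sample for sample, consistent with property 3.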
For the special case when P_x(z) = N(z)/D(z), a rational function, then according to the spectral factorisation P_x(z) may be factored in the following form,

P_x(z) = \sigma_0^2\, Q(z)\, Q^*\!\left(\frac{1}{z^*}\right) = \sigma_0^2 \left[ \frac{B(z)}{A(z)} \right] \left[ \frac{B^*(1/z^*)}{A^*(1/z^*)} \right]
where:

\sigma_0^2 is a constant.

B(z) = 1 + b[1]z^{-1} + \ldots + b[q]z^{-q}, is a monic polynomial having all its roots inside the unit circle.

A(z) = 1 + a[1]z^{-1} + \ldots + a[p]z^{-p}, is a monic polynomial having all its roots inside the unit circle.
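For a concrete (assumed, not from the text) example of this rational factorisation, take the MA(1) spectrum P_x(z) = 0.5z + 1.25 + 0.5z^{-1}: its roots split into one inside and one outside the unit circle, which yields the monic minimum-phase factor B(z) and the constant \sigma_0^2:

```python
import numpy as np

# Assumed MA(1) spectrum: P_x(z) = 0.5 z + 1.25 + 0.5 z^{-1}
# Multiplying through by z gives the polynomial 0.5 z² + 1.25 z + 0.5
roots = np.roots([0.5, 1.25, 0.5])    # roots at z = -2 and z = -0.5
inside = roots[np.abs(roots) < 1][0]  # minimum-phase root, z = -0.5

b1 = -inside          # B(z) = 1 + b1 z^{-1} has its root at z = -b1
sigma0_sq = 0.5 / b1  # match the z^{-1} coefficient: σ0² b1 = 0.5

# Consistency check: σ0² (1 + b1²) must equal the z⁰ coefficient, 1.25
print(b1, sigma0_sq, sigma0_sq * (1 + b1 ** 2))
```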
When a wide-sense stationary process x[n] is filtered by a linear shift-invariant system with impulse response h[n] the output y[n] is given by the convolution defined by Equation 2.100.

y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} h[k]\, x[n-k] \qquad (2.100)
Taking the expectation of Equation 2.100 gives the mean of the output, as defined by Equation 2.101.

E\{y[n]\} = E\left\{ \sum_{k=-\infty}^{\infty} h[k]\, x[n-k] \right\} = \sum_{k=-\infty}^{\infty} h[k]\, E\{x[n-k]\}

= \bar{x} \sum_{k=-\infty}^{\infty} h[k] = \bar{x}\, H(e^{j0}) \qquad (2.101)
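Equation 2.101 says the output mean is the input mean scaled by the DC gain H(e^{j0}) = \sum_k h[k]. A quick check with an assumed FIR filter (the coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, xbar = 100_000, 3.0
x = xbar + rng.standard_normal(N)    # WSS input with mean 3

h = np.array([0.5, 0.3, 0.2, 0.1])   # arbitrary FIR impulse response
y = np.convolve(x, h, mode='valid')  # y[n] = Σ_k h[k] x[n-k]

H0 = h.sum()                         # DC gain H(e^{j0})
print(y.mean(), xbar * H0)           # both ≈ 3.3
```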
The cross-correlation between y[n] and x[n], r_{yx}(n+k, n), depends only on the difference between n + k and n and is defined by Equation 2.102,

where:

k is the lag, i.e., the difference between n + k and n.
The autocorrelation of the output is then,

r_y(k) = r_{yx}(k) * h^*[-k] = \sum_{l=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} h[l]\, r_x(m - l + k)\, h^*[m] = r_x(k) * h[k] * h^*[-k] \qquad (2.103)

Defining the filter autocorrelation as,

r_h(k) = h[k] * h^*[-k] = \sum_{n=-\infty}^{\infty} h[n]\, h^*[n+k]

therefore,

r_y(k) = r_x(k) * r_h(k)
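The identity r_y(k) = r_x(k) * r_h(k) can be verified numerically for a white input, where r_x(k) = \delta(k) and hence r_y(k) = r_h(k). This is an assumed example with an arbitrary real FIR filter:

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])       # arbitrary real FIR filter

# Filter autocorrelation r_h(k) = h[k] * h*[-k]
rh = np.correlate(h, h, mode='full')  # lags k = -2..2; rh[2] is lag 0

# White input: r_x(k) = δ(k), so r_y(k) = r_h(k)
rng = np.random.default_rng(5)
x = rng.standard_normal(500_000)
y = np.convolve(x, h, mode='valid')
ry = np.array([np.dot(y[:y.size - m], y[m:]) / y.size
               for m in range(3)])

print(ry)      # ≈ r_h(k) for k = 0, 1, 2
print(rh[2:])  # exact values [1.3125, -0.625, 0.25]
```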
In the special case when h[n] is finite with a length of N the variance, or power, of y[n] may be expressed in terms of the autocorrelation matrix \mathbf{R}_x of x[n] and the vector of filter coefficients \mathbf{h}, as defined by Equation 2.105.

\sigma_y^2 = E\{|y[n]|^2\} = \mathbf{h}^H \mathbf{R}_x \mathbf{h} \qquad (2.105)
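Equation 2.105 can be checked against a direct power estimate. This sketch assumes an AR(1) input with a = 0.6, for which r_x(k) = a^{|k|}/(1 - a^2) when the driving noise has unit variance (the filter coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
N, a = 500_000, 0.6
h = np.array([0.8, -0.4, 0.2])  # real FIR filter, so h^H = h^T

# AR(1)-coloured input x[n] = a x[n-1] + w[n]
w = rng.standard_normal(N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

# Theoretical Toeplitz autocorrelation matrix, r_x(k) = a^|k| / (1 - a²)
p = h.size - 1
r = np.array([a ** k / (1 - a * a) for k in range(p + 1)])
Rx = np.array([[r[abs(i - j)] for j in range(p + 1)]
               for i in range(p + 1)])

power_theory = h @ Rx @ h  # σ_y² = h^H R_x h  (Equation 2.105)
power_est = np.mean(np.convolve(x, h, mode='valid') ** 2)
print(power_theory, power_est)
```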