Modern Spectral Estimation
Spectrum Estimation: A Family of Nonparametric Methods

Classical Methods (Fourier Transform Based)
• Periodogram based: Periodogram, Modified Periodogram, Bartlett, Welch
• ACF based: Blackman-Tukey

Nonclassical Methods (Non-Fourier Based)
• Filterbank approach: Minimum Variance
Periodogram Definition (recalled)

Based on the definition of the PSD,

$$ S_x(e^{jw}) = \lim_{M\to\infty} E\left\{ \frac{1}{2M+1}\left|\sum_{n=-M}^{M} x(n)e^{-jwn}\right|^2 \right\}, $$

the periodogram is

$$ \hat S_{PER}(w) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x(n)e^{-jwn}\right|^2 . $$

Periodogram Viewed as a Filterbank

The periodogram may be viewed as the output of a bank of bandpass filters. Let the i-th filter have impulse response h_i(n) = (1/N) e^{jnw_i} for 0 ≤ n ≤ N−1; its frequency response is

$$ H_i(e^{jw}) = e^{-j(w-w_i)(N-1)/2}\,\frac{\sin\big(N(w-w_i)/2\big)}{N\sin\big((w-w_i)/2\big)} , $$

a narrow bandpass filter centered at w_i with H_i(e^{jw_i}) = 1.
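As a quick numerical check (sketched in Python/NumPy rather than the MATLAB used later in these notes; all names are illustrative), the periodogram definition evaluated on the grid w_k = 2πk/N coincides with |FFT(x)|²/N:

```python
import numpy as np

# Check: S_PER(w) = |sum_n x(n) e^{-jwn}|^2 / N, evaluated at the FFT
# frequencies w_k = 2*pi*k/N, equals |FFT(x)|^2 / N.
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)

# Direct evaluation of the definition at each w_k
w = 2 * np.pi * np.arange(N) / N
n = np.arange(N)
S_def = np.abs(np.exp(-1j * np.outer(w, n)) @ x) ** 2 / N

# FFT-based evaluation
S_fft = np.abs(np.fft.fft(x)) ** 2 / N

assert np.allclose(S_def, S_fft)
```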
Periodogram Viewed as a Filterbank (cont.)

With this definition of h_i(n), the output of the i-th filter becomes

$$ y_i(n) = x(n)*h_i(n) = \sum_{k=n-N+1}^{n} x(k)\,h_i(n-k) = \frac{1}{N}\sum_{k=n-N+1}^{n} x(k)\,e^{j(n-k)w_i} . $$

Note that if one chooses n = N−1, the term |y_i(N−1)|² gives the periodogram:

$$ |y_i(N-1)|^2 = \left|\frac{1}{N}\sum_{k=0}^{N-1} x(k)\,e^{j(N-1-k)w_i}\right|^2 = \frac{1}{N^2}\left|\sum_{k=0}^{N-1} x(k)\,e^{-jkw_i}\right|^2 = \frac{1}{N}\,\hat S_{PER}(w_i) . $$
If a WSS random process x(n) is filtered with h_i(n), the output process is

$$ y_i(n) = x(n)*h_i(n) = \sum_{k=n-N+1}^{n} x(k)\,h_i(n-k) = \frac{1}{N}\sum_{k=n-N+1}^{n} x(k)\,e^{j(n-k)w_i} . $$

Since H_i(e^{jw})|_{w=w_i} = 1, if the bandwidth of the filter is small enough, the power spectrum of x(n) may be assumed to be approximately constant over the passband of the filter, and the power in y_i(n) will be approximately

$$ E\{|y_i(n)|^2\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(e^{jw})\,|H_i(e^{jw})|^2\,dw \approx S_x(e^{jw_i})\cdot\frac{1}{2\pi}\int_{-\pi}^{\pi}|H_i(e^{jw})|^2\,dw = \frac{1}{N}\,S_x(e^{jw_i}) , $$

where the last step uses Parseval's relation: (1/2π)∫|H_i(e^{jw})|²dw = Σ_n |h_i(n)|² = 1/N.

Therefore

$$ S_x(e^{jw_i}) = N\,E\{|y_i(n)|^2\} . $$

Thus, if we are able to estimate the power in y_i(n), the power spectrum at frequency w_i may be estimated as follows:

$$ \hat S_x(e^{jw_i}) = N\,\hat E\{|y_i(n)|^2\}, \qquad \hat E\{|y_i(n)|^2\} = |y_i(N-1)|^2 . $$

This is equivalent to

$$ \hat S_x(e^{jw_i}) = N\,|y_i(N-1)|^2 = \frac{1}{N}\left|\sum_{k=0}^{N-1} x(k)\,e^{-jkw_i}\right|^2 = \hat S_{PER}(w_i) . $$
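This filterbank equivalence can be verified numerically. The sketch below (Python/NumPy, illustrative names) builds one bandpass filter h_i and checks that N|y_i(N−1)|² equals the periodogram at w_i:

```python
import numpy as np

# Assumed setup from the derivation: h_i(n) = (1/N) e^{j n w_i}, 0 <= n < N.
# The filter output at n = N-1, scaled by N, reproduces the periodogram at w_i.
rng = np.random.default_rng(1)
N = 32
x = rng.standard_normal(N)
wi = 2 * np.pi * 5 / N           # one analysis frequency

n = np.arange(N)
h = np.exp(1j * n * wi) / N      # bandpass filter impulse response
y = np.convolve(x, h)            # y_i(n) = x(n) * h_i(n)

S_filterbank = N * np.abs(y[N - 1]) ** 2
S_per = np.abs(np.sum(x * np.exp(-1j * wi * n))) ** 2 / N

assert np.isclose(S_filterbank, S_per)
```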
• Filter x[n] with each filter in the filterbank and estimate the
power in each output process yi[n]
Design Goals:
1. Want H_i(w_i) = 1: this will let through the desired component S_x(w_i) undistorted.
2. Minimize the total output power of the filter,

$$ \rho_i = E\{|y_i(n)|^2\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(w)\,|H_i(w)|^2\,dw . $$

Writing H_i(w) = Σ_{k=0}^{p} h_i(k)e^{-jkw} and expanding,

$$ \rho_i = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{k=0}^{p} h_i(k)\,e^{-jkw}\sum_{l=0}^{p} h_i^*(l)\,e^{jlw}\,S_x(w)\,dw = \sum_{k=0}^{p}\sum_{l=0}^{p} h_i(k)\,h_i^*(l)\,\frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(w)\,e^{jw(l-k)}\,dw . $$

Noting that

$$ \frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(w)\,e^{jw(l-k)}\,dw = r_x(l-k), $$

we obtain

$$ \rho_i = \sum_{k=0}^{p}\sum_{l=0}^{p} h_i(k)\,h_i^*(l)\,r_x(l-k) = \mathbf{h}_i^H\mathbf{R}_x\mathbf{h}_i . $$
Minimizing in Matrix Form

For each i, minimize

$$ \rho_i = \mathbf{h}_i^H\mathbf{R}_x\mathbf{h}_i $$

subject to

$$ H_i(w_i) = 1 \quad\Longleftrightarrow\quad \mathbf{e}_i^H\mathbf{h}_i = 1, $$

where e_i = [1, e^{jw_i}, e^{j2w_i}, ..., e^{jpw_i}]^T. Solve this constrained optimization problem using Lagrange multipliers:

$$ J = \mathbf{h}_i^H\mathbf{R}_x\mathbf{h}_i - \lambda\,(\mathbf{e}_i^H\mathbf{h}_i - 1). $$

Setting the gradient with respect to h_i to zero yields

$$ \mathbf{h}_i = \frac{\mathbf{R}_x^{-1}\mathbf{e}_i}{\mathbf{e}_i^H\mathbf{R}_x^{-1}\mathbf{e}_i} . $$
• Given r_x(k) for |k| ≤ p, the minimum variance spectral estimate is:

$$ \hat S_{MV}(w) = \frac{p+1}{\mathbf{e}^H\mathbf{R}_x^{-1}\mathbf{e}} . $$
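A direct sketch of this estimator in Python/NumPy (illustrative names; the autocorrelation values are assumed given, here white noise of variance 2, for which the MV estimate should return the flat true spectrum):

```python
import numpy as np

# Minimum variance spectrum, computed directly from
#   S_MV(w) = (p+1) / (e^H Rx^{-1} e),  e = [1, e^{jw}, ..., e^{jpw}]^T.
p = 8
r = np.zeros(p + 1)
r[0] = 2.0                       # r_x(k) = 2*delta(k): white noise, variance 2
Rx = np.array([[r[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])
Rinv = np.linalg.inv(Rx)

def S_mv(w):
    e = np.exp(1j * w * np.arange(p + 1))
    return (p + 1) / np.real(e.conj() @ Rinv @ e)

# For white noise the MV estimate equals the true flat spectrum.
vals = [S_mv(w) for w in np.linspace(0, np.pi, 16)]
assert np.allclose(vals, 2.0)
```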
MV Estimate of an AR(1) Process

For an AR(1) process with parameter α, r_x(k) = σ_x² α^{|k|} with σ_x² = σ_w²/(1−α²), so

$$ \mathbf{R}_x = \sigma_x^2\,\mathrm{Toep}\,(1,\ \alpha,\ \alpha^2,\ \ldots,\ \alpha^p), $$

whose inverse is tridiagonal:

$$ \mathbf{R}_x^{-1} = \frac{1}{\sigma_w^2} \begin{bmatrix} 1 & -\alpha & 0 & \cdots & 0 & 0\\ -\alpha & 1+\alpha^2 & -\alpha & \cdots & 0 & 0\\ 0 & -\alpha & 1+\alpha^2 & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & 1+\alpha^2 & -\alpha\\ 0 & 0 & 0 & \cdots & -\alpha & 1 \end{bmatrix} . $$

Evaluating e^H R_x^{-1} e term by term gives the closed form

$$ \hat S_{MV}(w) = \frac{(p+1)\,\sigma_w^2}{(p+1)(1+\alpha^2) - 2\alpha^2 - 2\alpha p\cos w} . $$
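This closed form can be checked against the defining expression (p+1)/(e^H R_x^{-1} e); the values of α, σ_w², and p in the Python/NumPy sketch below are arbitrary:

```python
import numpy as np

# Numerical check of the closed-form AR(1) result:
#   S_MV(w) = (p+1) sw2 / [ (p+1)(1+a^2) - 2 a^2 - 2 a p cos(w) ]
a, sw2, p = 0.5, 1.0, 6                   # illustrative parameter values
sx2 = sw2 / (1 - a ** 2)                  # r_x(0) of the AR(1) process
k = np.arange(p + 1)
Rx = sx2 * np.array([[a ** abs(i - j) for j in k] for i in k])
Rinv = np.linalg.inv(Rx)

ws = np.linspace(0.1, 3.0, 7)
S_direct = np.array([(p + 1) / np.real(np.exp(1j * w * k).conj()
                     @ Rinv @ np.exp(1j * w * k)) for w in ws])
S_closed = (p + 1) * sw2 / ((p + 1) * (1 + a ** 2) - 2 * a ** 2
                            - 2 * a * p * np.cos(ws))
assert np.allclose(S_direct, S_closed)
```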
MV Estimate of a Complex Exponential in Noise

For a single complex exponential in white noise,

$$ r_x(k) = A_1\,e^{jkw_1} + \sigma_w^2\,\delta(k) \quad\Longrightarrow\quad \mathbf{R}_x = A_1\,\mathbf{e}_1\mathbf{e}_1^H + \sigma_w^2\,\mathbf{I}, $$

where

$$ \mathbf{e}_1 = \big[\,1,\ e^{jw_1},\ e^{j2w_1},\ \ldots,\ e^{jpw_1}\,\big]^T . $$

By the Woodbury matrix identity,

$$ \mathbf{R}_x^{-1} = \frac{1}{\sigma_w^2}\left[\mathbf{I} - \frac{A_1}{\sigma_w^2+(p+1)A_1}\,\mathbf{e}_1\mathbf{e}_1^H\right] . $$

Thus,

$$ \hat S_{MV}(w) = \frac{p+1}{\mathbf{e}^H\mathbf{R}_x^{-1}\mathbf{e}} = \frac{(p+1)\,\sigma_w^2}{(p+1) - \dfrac{A_1}{\sigma_w^2+(p+1)A_1}\,\big|\mathbf{e}^H\mathbf{e}_1\big|^2} . $$
MV Estimate of a Complex Exponential in Noise (cont.)

Since

$$ \mathbf{e}^H\mathbf{e}_1 = \sum_{k=0}^{p} e^{-jkw}\,e^{jkw_1} = \sum_{k=0}^{p} e^{-jk(w-w_1)} = W_R(w-w_1), $$

the Dirichlet kernel of the rectangular window, the estimate attains its peak at w = w_1, where |W_R(0)| = p+1:

$$ \hat S_{MV}(w)\Big|_{w=w_1} = \frac{(p+1)\,\sigma_w^2}{(p+1)\left[1 - \dfrac{(p+1)A_1}{\sigma_w^2+(p+1)A_1}\right]} = \sigma_w^2 + (p+1)A_1 . $$
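A numerical check of this peak value, with illustrative parameters, in Python/NumPy:

```python
import numpy as np

# Check: for r_x(k) = A1 e^{j k w1} + sw2*delta(k), the MV estimate at
# w = w1 equals sw2 + (p+1)*A1.
A1, sw2, w1, p = 3.0, 0.5, 1.2, 10        # illustrative values
k = np.arange(p + 1)
e1 = np.exp(1j * w1 * k)
Rx = A1 * np.outer(e1, e1.conj()) + sw2 * np.eye(p + 1)
Rinv = np.linalg.inv(Rx)

e = np.exp(1j * w1 * k)                   # evaluate at w = w1
S_peak = (p + 1) / np.real(e.conj() @ Rinv @ e)
assert np.isclose(S_peak, sw2 + (p + 1) * A1)
```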
MV Estimate of a Complex Exponential in Noise (cont.)

The power in the complex exponential may therefore be estimated from the peak of the MV estimate:

$$ \hat\sigma_x^2(w_1) = \frac{1}{p+1}\,\hat S_{MV}(w_1) = \frac{\sigma_w^2}{p+1} + A_1 \approx A_1 \quad\text{for large } p, $$

while away from w_1 the estimate falls back toward the noise level, Ŝ_MV(w) ≈ σ_w².
The Minimum Variance Spectral Estimation: MATLAB code

function Px = minvar(x,p)
%MINVAR Spectrum estimation using the minimum variance method.
%  The spectrum of a process x is estimated using the minimum
%  variance method (sometimes called the maximum likelihood method).
%  x : Input sequence
%  p : Order of the minimum variance estimate - for short
%      sequences, p is typically about length(x)/3
%  The spectrum estimate is returned in Px using a dB scale.
%
x = x(:);
R = covar(x,p);
[v,d] = eig(R);
U = diag(inv(abs(d)+eps));
V = abs(fft(v,1024)).^2;
Px = 10*log10(p) - 10*log10(V*U);
end
function R = covar(x,p)
%COVAR Generates a covariance matrix.
%USAGE  R = covar(x,p)
%
%  Generates a p x p covariance matrix for the sequence x
%  (convm is Hayes' convolution-matrix helper function).
%---------------------------------------------------------------
x = x(:);
m = length(x);
x = x - ones(m,1)*(sum(x)/m);     % remove the sample mean
R = convm(x,p)'*convm(x,p)/(m-1);
end
Maximum Entropy Method

For a data record of length N, the autocorrelation sequence can only be estimated for |k| < N. The maximum entropy method (MEM) extrapolates the autocorrelation beyond this range, and the resulting spectrum has the all-pole form

$$ \hat S_{MEM}(w) = \frac{|b(0)|^2}{\left|1 + \sum_{k=1}^{p} a_p(k)\,e^{-jkw}\right|^2} . $$

Having determined the form of the MEM spectrum, one needs to calculate the coefficients a_p(k) and b(0). These coefficients should be chosen so that the inverse transform of Ŝ_MEM matches the given autocorrelation values for |k| ≤ p, which leads to the autocorrelation (Yule-Walker) normal equations.
function Px = mem(x,p)
%MEM Spectrum estimation using the Maximum Entropy Method (MEM).
%USAGE  Px = mem(x,p)
%
%  The spectrum of a process x is estimated using the maximum
%  entropy method, which uses the autocorrelation method to
%  find a pth-order all-pole model for x(n), and then forms
%  the estimate of the spectrum as follows:
%     Px = b^2(0)/|A(omega)|^2
%  The spectrum estimate is returned in Px using a dB scale.
%  x : Input sequence
%  p : Order of the all-pole model
%---------------------------------------------------------------
% copyright 1996, by M.H. Hayes. For use with the book
% "Statistical Digital Signal Processing and Modeling"
% (John Wiley & Sons, 1996).
%---------------------------------------------------------------
[a,e] = acm(x,p);
Px = 10*log10(e/length(x)) - 20*log10(abs(fft(a,1024)));
end
Matlab Implementation of acm (the autocorrelation method called by mem):

function [a,err] = acm(x,p)
x = x(:); N = length(x);
if p >= length(x), error('Model order too large'), end
X = convm(x,p+1);
Xq = X(1:N+p-1,1:p);
a = [1; -Xq\X(2:N+p,1)];
err = abs(X(1:N+p,1)'*X*a);
end
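A compact Python/NumPy analogue of mem/acm above may make the steps explicit. The function name mem_psd is ours; as a simplification it solves the Yule-Walker equations from the biased sample autocorrelation instead of using convm, and returns the spectrum on a linear rather than dB scale:

```python
import numpy as np

# Sketch of MEM: fit a p-th order all-pole model from the sample
# autocorrelation, then evaluate b^2(0)/|A(w)|^2 on an FFT grid.
def mem_psd(x, p, nfft=1024):
    x = np.asarray(x, float)
    N = len(x)
    # biased sample autocorrelation r(0..p)
    r = np.array([x[:N - k] @ x[k:] for k in range(p + 1)]) / N
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.concatenate(([1.0], -np.linalg.solve(R, r[1:])))  # Yule-Walker
    e = r[0] + a[1:] @ r[1:]                                 # model error b^2(0)
    A = np.fft.fft(a, nfft)
    return e / np.abs(A) ** 2                                # linear scale

# AR(1) process y(n) = 0.5 y(n-1) + w(n): the spectrum should peak at w = 0.
rng = np.random.default_rng(2)
w = rng.standard_normal(50000)
y = np.zeros_like(w)
for n in range(1, len(w)):
    y[n] = 0.5 * y[n - 1] + w[n]
Py = mem_psd(y, 1)
```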
Fs = 1000; p = 100;
t = 0:1/Fs:1;
x = cos(2*pi*200*t) + cos(2*pi*240*t) + randn(size(t));
Px = mem(x,p);
Nfft = 1024;                    % mem evaluates the model on a 1024-point grid
f = (0:Nfft-1)*Fs/Nfft;
figure(1)
plot(f,Px)

[Figure: MEM estimate in dB over 0-1000 Hz, showing two sharp peaks near 200 Hz and 240 Hz.]
Fs = 1000; p = 10;
t = 0:1/Fs:10;
x = randn(size(t));
aa = [1 -.5];
y = filter(1,aa,x);             % all-pole (AR(1)) filter, not FIR
Py = mem(y,p);
Nfft = 1024;
f = (0:Nfft-1)*Fs/Nfft;
H = freqz(1,aa,f,Fs);           % true spectrum of the AR(1) process

figure(1)
plot(f,Py,'r'), hold on
plot(f,10*log10(abs(H).^2),'g') % comparison on a dB scale

figure(2)
plot(f,10.^(Py/10),'r'), hold on
plot(f,abs(H).^2,'g')           % comparison on a linear scale

[Figure: the MEM estimate (red) closely follows the true AR(1) spectrum (green), 0-1000 Hz.]
MEM PSD estimate of an AR(1) process, p = 2

[Figure: MEM estimate vs. true spectrum, linear and dB scales, 0-1000 Hz.]
MEM PSD estimate of an AR(2) process, p = 5

[Figure: MEM estimate vs. true spectrum, 0-1000 Hz.]
MEM PSD estimate of an MA(2) process

[Figure: MEM estimate vs. true spectrum, 0-1000 Hz.]
MEM PSD estimate of an MA(2) process, p = 50

[Figure: MEM estimate vs. true spectrum, 0-1000 Hz.]
SOME FACTS ABOUT HARMONIC PROCESSES AND FREQUENCY ESTIMATION

Consider a single complex exponential with power P_1 in white noise, so that R_x = R_s + σ_w² I with R_s = P_1 e_1 e_1^H and e_1 = [1, e^{jw_1}, ..., e^{j(M-1)w_1}]^T. Since the rank of R_s is one, R_s has only one nonzero eigenvalue. Because

$$ \mathbf{R}_s\mathbf{e}_1 = P_1\,\mathbf{e}_1\big(\mathbf{e}_1^H\mathbf{e}_1\big) = M P_1\,\mathbf{e}_1, $$

the nonzero eigenvalue is λ_1 = M P_1 and the corresponding eigenvector is e_1.

Eigen-Decomposition of the Autocorrelation Matrix

Since R_s is Hermitian, the remaining eigenvectors v_2, v_3, ..., v_M (due to noise) are orthogonal to the eigenvector e_1, which is due to the signal:

$$ \mathbf{v}_i^H\mathbf{e}_1 = 0, \qquad i = 2,\ldots,M . $$

This implies that V_i(e^{jw}) = Σ_k v_i(k)e^{-jkw} = 0 at w = w_1, i.e., at the frequency of the complex exponential.
Eigen-Decomposition of the Autocorrelation Matrix (cont.)

For two complex exponentials in white noise, one can more concisely express R_x as

$$ \mathbf{R}_x = \mathbf{E}\,\mathbf{P}\,\mathbf{E}^H + \sigma_w^2\,\mathbf{I}, $$

where P is a diagonal matrix containing the powers of the exponentials and E = [e_1, e_2].

Let us perform an eigendecomposition on R_x. The eigenvalues and eigenvectors of R_x can be divided into two groups: the first group, consisting of the two eigenvectors whose eigenvalues are greater than σ_w², are referred to as signal eigenvectors and span a two-dimensional subspace called the signal subspace; the remaining eigenvectors, with eigenvalues equal to σ_w², span the noise subspace.
Eigen-Decomposition of the Autocorrelation Matrix (cont.)

More generally, since the eigenvalues of R_x are λ_i = λ_i^s + σ_w², where λ_i^s are the eigenvalues of R_s, the eigenvalues and eigenvectors of R_x can again be divided into two groups:
• the signal eigenvectors v_1, v_2, ..., v_p, which have eigenvalues greater than σ_w²;
• the noise eigenvectors v_{p+1}, v_{p+2}, ..., v_M, which have eigenvalues equal to σ_w².

Or, in matrix form,

$$ \mathbf{R}_x = \begin{bmatrix}\mathbf{V}_s & \mathbf{V}_n\end{bmatrix} \begin{bmatrix}\boldsymbol{\Lambda}_s & \mathbf{0}\\ \mathbf{0} & \boldsymbol{\Lambda}_n\end{bmatrix} \begin{bmatrix}\mathbf{V}_s^H\\ \mathbf{V}_n^H\end{bmatrix}, $$

where Λ_s and Λ_n are diagonal matrices that contain the eigenvalues λ_i^s + σ_w² and σ_w², respectively.
Eigen-Decomposition of the Autocorrelation Matrix (cont.)

Since each signal vector e_1, e_2, ..., e_p lies in the signal subspace, orthogonality of the two subspaces implies that each e_i is orthogonal to the noise eigenvectors:

$$ \mathbf{e}_i^H\mathbf{v}_k = 0, \qquad k = p+1,\ldots,M . $$

Therefore the frequency estimation function

$$ \hat P(e^{jw}) = \frac{1}{\big|\mathbf{e}^H\mathbf{v}_i\big|^2} $$

associated with a noise eigenvector v_i will exhibit sharp peaks at the frequencies of the complex exponentials. The remaining M−p−1 zeros of V_i(z) may lie anywhere, giving rise to spurious peaks if they are close to the unit circle. Therefore, when only one noise eigenvector is used to estimate the complex exponential frequencies, there may be some ambiguity in distinguishing the desired peaks from the spurious ones.
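The noise-subspace reasoning above can be sketched numerically in Python/NumPy. With an exact rank-one-plus-noise R_x (parameters below are illustrative), the smallest-eigenvalue eigenvector is orthogonal to e_1, so the estimation function peaks sharply at w_1:

```python
import numpy as np

# R_x = A1 e1 e1^H + sw2 I: every eigenvector with eigenvalue sw2 lies in
# the noise subspace and is orthogonal to e1, so 1/|e(w)^H v|^2 peaks at w1.
A1, sw2, w1, M = 2.0, 0.1, 0.7, 8
k = np.arange(M)
e1 = np.exp(1j * w1 * k)
Rx = A1 * np.outer(e1, e1.conj()) + sw2 * np.eye(M)

lam, V = np.linalg.eigh(Rx)          # eigenvalues in ascending order
v_noise = V[:, 0]                    # a noise eigenvector (eigenvalue sw2)

grid = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
E = np.exp(-1j * np.outer(grid, k))  # rows are e(w)^H
P = 1.0 / np.abs(E @ v_noise) ** 2   # frequency estimation function
i1 = np.argmin(np.abs(grid - w1))    # grid point closest to w1

assert np.isclose(lam[-1], sw2 + M * A1)   # signal eigenvalue sw2 + M*A1
assert abs(e1.conj() @ v_noise) < 1e-8     # noise eigenvector orthogonal to e1
```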