Computer Exercise 2
[Figure: block diagram of the system-identification setup. The input u(n) drives both the unknown filter (producing the desired signal d(n)) and the adaptive filter w of length M = 5; the LMS algorithm updates w from the error e(n) = d(n) - y(n).]
The filter that should be identified is h(n) = {1, 1/2, 1/4, 1/8, 1/16}. Use
white Gaussian noise as input signal that you filter with h(n) in order to
obtain the desired signal d(n). Write in Matlab

% filter coefficients
h=0.5.^[0:4];
% input signal
u=randn(1000,1);
% filtered input signal == desired signal
d=conv(h,u);
% LMS
[e,w]=lms(0.1,5,u,d);
Compare the final filter coefficients (w) obtained by the LMS algorithm with
the filter that it should identify (h). If the coefficients are equal, your LMS
algorithm is correct.
Note that in the current example there is no noise source influencing the
driving noise u(n). Furthermore, the length M of the adaptive filter
corresponds to the length of the FIR filter to be identified. Therefore, the
error e(n) tends towards zero.
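As a cross-check outside Matlab, the same experiment can be sketched in Python/NumPy. The function name lms_id and the fixed seed are illustrative, not part of the exercise; the update shown is the real-valued LMS recursion.

```python
import numpy as np

def lms_id(mu, M, u, d):
    # LMS with zero initial coefficients; returns error signal and final weights
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        uvec = u[n::-1][:M]          # u(n), u(n-1), ..., u(n-M+1)
        e[n] = d[n] - w @ uvec       # a-priori estimation error
        w = w + mu * uvec * e[n]     # LMS coefficient update (real-valued case)
    return e, w

rng = np.random.default_rng(0)
h = 0.5 ** np.arange(5)              # filter to identify: 1, 1/2, 1/4, 1/8, 1/16
u = rng.standard_normal(1000)        # white Gaussian input
d = np.convolve(h, u)[:1000]         # desired signal, trimmed to the input length
e, w = lms_id(0.1, 5, u, d)
print(np.max(np.abs(w - h)))         # small -> the LMS has identified h
```

Since there is no noise and the model orders match, the final coefficients should agree with h to within a small tolerance.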
Computer exercise 2.3 Now you shall follow the example in Haykin, edition 4, chapter 5.7, pp. 285-291, (edition 3: chapter 9.7, pp. 412-421), Computer Experiment on Adaptive Equalization, and reproduce the result. Below
follow some hints that will simplify the implementation.
A Bernoulli sequence is a random sequence of +1 and -1, where both
occur with probability 1/2. In Matlab, such a sequence is generated by
Adaptive Signal Processing 2010
% Bernoulli sequence of length N
x=2*round(rand(N,1))-1;
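An equivalent construction in Python/NumPy (the generator seed is arbitrary):

```python
import numpy as np

# Bernoulli sequence of length N: +1 and -1, each with probability 1/2
rng = np.random.default_rng(1)
N = 10000
x = 2 * rng.integers(0, 2, size=N) - 1
```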
A learning curve is generated by taking the mean of the squared error e_k^2(n)
over several realizations of an ensemble, i.e.,

J(n) = (1/K) Σ_{k=1}^{K} e_k^2(n),  n = 0, ..., N-1,

where e_k(n) is the estimation error at time instant n for the k-th realization,
and K is the number of realizations to be considered. In order to plot J(n)
with a logarithmic scale on the vertical axis, use the command semilogy.
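The ensemble average can be sketched as follows; for illustration the system-identification setup from the earlier example is reused (the helper name lms_err, the seed, and the ensemble size K = 100 are arbitrary choices of this sketch):

```python
import numpy as np

def lms_err(mu, M, u, d):
    # minimal LMS loop that returns only the error signal e(n)
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        uvec = u[n::-1][:M]          # u(n), u(n-1), ..., u(n-M+1)
        e[n] = d[n] - w @ uvec
        w += mu * uvec * e[n]
    return e

rng = np.random.default_rng(2)
K, N, M = 100, 500, 5
h = 0.5 ** np.arange(M)              # same filter as in the identification example
J = np.zeros(N)
for k in range(K):
    u = rng.standard_normal(N)
    d = np.convolve(h, u)[:N]
    J += lms_err(0.1, M, u, d) ** 2
J /= K                               # J(n) = (1/K) * sum_k e_k^2(n)
# plot with a logarithmic vertical axis, e.g. plt.semilogy(J)
```

The curve starts near the power of d(n) and decays towards zero, which is what the semilogy plot should show.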
Computer exercise 2.4 Calculation of the autocorrelation matrix R =
E{u(n)u^H(n)} and the cross-correlation vector p = E{u(n)d*(n)} for the
system in Haykin yields
    | r(0)   r(1)   r(2)  ...  r(10) |
    | r(1)   r(0)   r(1)  ...  r(9)  |
R = | r(2)   r(1)   r(0)  ...  r(8)  |
    |  :      :      :     .    :    |
    | r(10)  r(9)   r(8)  ...  r(0)  |
with

r(0) = (h1^2 + h2^2 + h3^2) σx^2 + σv^2
r(1) = (h1 h2 + h2 h3) σx^2
r(2) = h1 h3 σx^2
r(k) = 0, k > 2,

and

p = σx^2 [0, 0, 0, 0, h3, h2, h1, 0, 0, 0, 0]^T,
respectively. The autocorrelation matrix can be generated in Matlab by
R=sigmax2*toeplitz([h1^2+h2^2+h3^2,h1*h2+h2*h3,h1*h3,zeros(1,8)])
+sigmav2*eye(11);
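A Python/NumPy equivalent of this construction. The raised-cosine channel taps h_k = (1/2)(1 + cos(2π(k-2)/W)), k = 1, 2, 3, and the noise variance 0.001 follow Haykin's adaptive-equalization experiment and should be treated as assumptions of this sketch:

```python
import numpy as np
from scipy.linalg import toeplitz

# assumed from Haykin's experiment: raised-cosine channel and noise variance
W = 3.1
sigmax2 = 1.0                        # power of the +/-1 Bernoulli input
sigmav2 = 0.001                      # channel-noise variance (Haykin's value)
k = np.arange(1, 4)
h1, h2, h3 = 0.5 * (1 + np.cos(2 * np.pi * (k - 2) / W))

r = np.zeros(11)
r[0] = (h1**2 + h2**2 + h3**2) * sigmax2 + sigmav2
r[1] = (h1 * h2 + h2 * h3) * sigmax2
r[2] = h1 * h3 * sigmax2
R = toeplitz(r)                      # 11 x 11 symmetric Toeplitz matrix

p = sigmax2 * np.array([0, 0, 0, 0, h3, h2, h1, 0, 0, 0, 0], dtype=float)
```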
Calculate the Wiener filter for W = 3.1, and determine Jmin. Give an estimate for Jex(∞) in Haykin, edition 4, figure 5.23, (edition 3: figure 9.23) for
μ = 0.075 and μ = 0.025.
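For orientation, the Wiener solution can be sketched in Python/NumPy. Here σd² = 1 because the desired Bernoulli symbols are ±1, and Jmin = σd² - pᴴwo is the standard minimum-MSE expression; the channel taps and noise variance are again assumed from Haykin's experiment:

```python
import numpy as np
from scipy.linalg import toeplitz

# rebuild R and p (channel and noise variance assumed from Haykin's experiment)
W = 3.1
sigmax2, sigmav2 = 1.0, 0.001
k = np.arange(1, 4)
h1, h2, h3 = 0.5 * (1 + np.cos(2 * np.pi * (k - 2) / W))
r = np.zeros(11)
r[0] = (h1**2 + h2**2 + h3**2) * sigmax2 + sigmav2
r[1] = (h1 * h2 + h2 * h3) * sigmax2
r[2] = h1 * h3 * sigmax2
R = toeplitz(r)
p = sigmax2 * np.array([0, 0, 0, 0, h3, h2, h1, 0, 0, 0, 0], dtype=float)

wo = np.linalg.solve(R, p)           # Wiener filter: R wo = p
sigmad2 = 1.0                        # desired symbols are +/-1, so E{d^2} = 1
Jmin = sigmad2 - p @ wo              # minimum mean-square error
print(Jmin)
```

Jmin should come out small but strictly positive, matching the floor below which the LMS learning curves in the figure cannot descend.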
Program code
LMS
function [e,w]=lms(mu,M,u,d);
% Call:
%   [e,w]=lms(mu,M,u,d);
%
% Input arguments:
%   mu = step size, dim 1x1
%   M  = filter length, dim 1x1
%   u  = input signal, dim Nx1
%   d  = desired signal, dim Nx1
%
% Output arguments:
%   e  = estimation error, dim Nx1
%   w  = final filter coefficients, dim Mx1

% initial values: 0
w=zeros(M,1);
% number of samples of the input signal
N=length(u);
% make sure that u and d are column vectors
u=u(:);
d=d(:);
% LMS
for n=M:N
  uvec=u(n:-1:n-M+1);
  e(n)=d(n)-w'*uvec;
  w=w+mu*uvec*conj(e(n));
end
e=e(:);
Optimal filter
function [Jmin,R,p,wo]=wiener(W,sigmav2);
%WIENER Returns R and p, together with the Wiener filter