
A

Progress Report on
“Noise Cancellation using Adaptive Filter and
Concept of Adaptive Equalizer”
Submitted in partial fulfillment of the requirement for the award of the degree
of
BACHELOR OF TECHNOLOGY
In
Electronics & Communication Engineering

SUBMITTED BY:

Shubham Mishra (15EC47), Kavita Gangwar (15EC22), Shubham Gupta (15EC46)
Dept. of Electronics and Communication Engineering,
IET MJP Rohilkhand University Bareilly, India

SUPERVISED BY:
Mr. Janak Kapoor
Assistant Professor
Dept. of Electronics and Communication Engineering,
IET MJP Rohilkhand University Bareilly, India


CHAPTER 1: INTRODUCTION
1.1 GENERAL
We require a filter that minimizes the effect of noise at the filter output according to some statistical criterion. An optimal solution to this problem is to minimize the mean square value of the error signal; for stationary input data, the filter that achieves this is the Wiener filter. The Wiener filter, however, is not useful for non-stationary data.
The design of a Wiener filter also requires prior statistical information about the signal, which is not always available. We therefore need a filter design that does not require such prior signal information. This leads us to the adaptive filter, which meets our requirement.

1.2 ADAPTIVE FILTER


An adaptive filter can be defined as a filter that adjusts its own transfer function according to some optimization criterion. The filter coefficients are updated at every iteration until they converge to the optimum solution. Based on the specific signals received, it attempts to find the best possible filter design. In a non-stationary environment the filter is expected to track time variations and vary its coefficients accordingly, while in a stationary environment the filter is expected to converge to the Wiener filter.
The operation of an adaptive filter involves two processes:
1) Filtering
2) Weight-adjustment (adaptation)


The signals in the figure are defined as follows: x(n) - input signal plus noise, d(n) - desired signal, e(n) - error signal, y(n) - output signal.

Figure 1.2 Basic block diagram of an adaptive filter used for noise cancellation

Two widely used adaptive algorithms are:

1) Least Mean Square (LMS) algorithm
2) Recursive Least Square (RLS) algorithm

1.2.1 Least Mean Square (LMS):

The LMS algorithm is a linear adaptive filtering algorithm that consists of two basic processes. The first is the filtering process, which involves computing the output of a transversal filter produced by a set of tap inputs and generating an estimation error by comparing this output with the desired response. The second is the adaptation process, in which the filter tap weights are adjusted automatically in accordance with the estimation error.
According to the LMS algorithm, the weight update equation is

W(n + 1) = W(n) + 2µ E(n) X(n)

where µ = step size (convergence factor), W(n) = tap-weight vector, X(n) = input (tap-input) vector and E(n) = error signal.
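A minimal MATLAB sketch of this update loop is given below. The 300 Hz tone standing in for the recorded voice signal, the white-noise reference and the sampling rate are illustrative assumptions (the actual signals behind the simulations of Section 1.3 are not reproduced here); here d(n) is the noisy primary input and x(n) the noise reference, the conventional noise-canceller arrangement, while the filter order N = 20 and step size µ = 0.01 match the parameters reported in Section 1.3.

% Minimal LMS noise-cancellation sketch (illustrative signals only).
% Assumptions: a synthetic 300 Hz tone stands in for the clean voice,
% and the reference input is the same white noise that corrupts it.
fs    = 8000;                      % sampling frequency (Hz), assumed
t     = (0:1/fs:1-1/fs)';          % one second of samples
voice = sin(2*pi*300*t);           % stand-in for the clean voice signal
noise = 0.5*randn(size(t));        % noise source

d = voice + noise;                 % primary input: voice corrupted by noise
x = noise;                         % reference input: noise correlated with the corruption

N  = 20;                           % order of filter (as in Section 1.3)
mu = 0.01;                         % convergence factor / step size
w  = zeros(N,1);                   % tap-weight vector W(n), initialised to zero
e  = zeros(size(d));               % error signal E(n) = cleaned voice estimate

for n = N:length(d)
    xn   = x(n:-1:n-N+1);          % tap-input vector X(n)
    y    = w.' * xn;               % filter output = estimate of the noise in d(n)
    e(n) = d(n) - y;               % error signal
    w    = w + 2*mu*e(n)*xn;       % weight update: W(n+1) = W(n) + 2*mu*E(n)*X(n)
end

subplot(3,1,1); plot(t, voice); title('Voice signal without noise');
subplot(3,1,2); plot(t, d);     title('Voice signal with noise');
subplot(3,1,3); plot(t, e);     title('Noisy voice after filtering');

The error signal e(n) converges towards the clean voice because the filter learns to reproduce, and hence cancel, the noise component present in d(n).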


1.2.2 Flow Chart for the LMS Adaptive Filter

Figure 1.2.2 Flow chart of the LMS adaptive filter


1.3 SIMULATION RESULTS

The desired signal, the output signal and the voice signal were plotted using MATLAB as shown below:

Figure 1.3.1 Simulation results obtained using MATLAB, showing (a) voice signal without noise, (b) voice signal with noise, (c) noisy voice after filtering. Parameters: (i) order of filter N = 20, (ii) convergence factor µ = 0.01.


Figure 1.3.2 Simulation results obtained using MATLAB, showing (a) voice signal without noise, (b) voice signal with noise, (c) noisy voice after filtering. Parameters: (i) order of filter N = 20, (ii) convergence factor µ = 0.03.

Figure 1.3.3 Simulation results obtained using MATLAB, showing (a) voice signal without noise, (b) voice signal with noise, (c) noisy voice after filtering. Parameters: (i) order of filter N = 12, (ii) convergence factor µ = 0.03.


1.4 AIM AND OBJECTIVE:

To study the use of the LMS algorithm in an adaptive equalizer and the stability of the LMS algorithm.

1.5 PROBLEM STATEMENT:


In communication systems that use a linear modulation scheme for
transmission, the fractionally-spaced (FS) samples of the received
signal constitute a wide-sense cyclostationary (WSCS) time series.
Hence, standard Fourier transform techniques cannot be used to
study the spectral characteristics of the received FS samples or to
derive the transfer function (TF) of the corresponding digital
minimum mean-square error (MMSE) receiver. An analytical
expression for the TF of the FS MMSE equalizer is derived which
includes the effects of the continuous-time to discrete-time (C/D)
converter used at the receiver front end. Using this TF, the sources
of instability of the FS least-mean-square (LMS) algorithm and the
effects of the equalizer length and sampling phase on convergence
of the LMS algorithm are explained. For stabilization of the FS
LMS algorithm, conditions on the front-end C/D converter are
provided such that, when satisfied, the LMS algorithm becomes more stable.


CHAPTER 2: LITERATURE REVIEW

2.1 GENERAL

2.1.1 Fundamentals of equalization

Inter-symbol interference (ISI) has been recognized as a major obstacle to high-speed data transmission. For high-speed data transmission we have no choice but to use some kind of equalization technique.
Equalization is used to combat inter-symbol interference. Since fading channels are random and time varying, equalizers must track the time-varying characteristics of the mobile channel; such equalizers are called adaptive equalizers.
Blind equalization does not need a training sequence. It uses some property of the signal, such as a constant envelope or constant amplitude, to estimate the distortion introduced by the channel and undo its effects. The more common equalizers, however, are adaptive equalizers that work on the concept of a training sequence.

2.2 ADAPTIVE EQUALIZERS

The two operating modes of an adaptive equalizer are:

1. Training mode
2. Tracking mode
The training mode precedes the tracking mode.

2.2.1 Training mode of adaptive equalization

Initially, a known fixed-length training sequence is sent by the transmitter so that the receiver's equalizer may average to a proper setting.
The training sequence is usually a pseudo-random sequence or a fixed, known, prescribed bit pattern, usually specified in the standard. Immediately following the training sequence, the user data is sent.


The training sequence in an adaptive equalizer is designed to permit an equalizer at the receiver to acquire the proper filter coefficients under the worst possible channel conditions. Therefore, when the training sequence is finished, the filter coefficients are near their optimal values.
An adaptive equalizer at the receiver uses an LMS algorithm to evaluate the channel and estimate the filter coefficients needed to compensate for the channel.

2.2.2 Tracking mode

When the user data is received, the adaptive algorithm of the equalizer tracks the changing channel. As a result, the adaptive equalizer continuously changes its filter characteristics over time.

2.3 BLOCK DIAGRAM OF AN ADAPTIVE EQUALIZER

The original baseband message x(t) passes through a modulator, then through the transmitter, and over the radio channel. There, noise and fading effects are added, leading to a heavily distorted received signal.

Figure 2.3 Block Diagram Of Adaptive Equalizer


At the receiver, the signal goes through the RF front end, then the IF stage, and then the detector and matched filter. The equalizer usually works either at the IF stage or at baseband. The equalizer, the decision maker and the error computation together track the channel, and finally we obtain the reconstructed message data d(t). During the training phase this d(t) and x(t) should be equal.

2.4 WORKING OF AN ADAPTIVE EQUALIZER

The signal received by the equalizer is given by

y(t) = x(t) * f(t) + nb(t)

where (* denotes convolution)
x(t) = original transmitted (source) signal
f(t) = combined impulse response of the transmitter, channel and receiver front end
nb(t) = baseband noise signal
heq(t) = impulse response of the equalizer
The output of the equalizer is

d(t) = x(t) * f(t) * heq(t) + nb(t) * heq(t)
     = x(t) * g(t) + nb(t) * heq(t),   where g(t) = f(t) * heq(t)

The desired output is x(t), the original source data. Assume for the time being that nb(t) = 0, so that d(t) = x(t). This requires

g(t) = f(t) * heq(t) = δ(t)

The main goal of the equalizer is to satisfy this equation. In the frequency domain the same condition is

Heq(f) F*(-f) = 1

where F*(-f) is the complex conjugate of F(-f). These time-domain and frequency-domain conditions are the two basic equations that must be satisfied for any equalizer to work effectively.
This implies that the equalizer is, in effect, an inverse filter of the channel. If the channel is frequency selective, the equalizer enhances the frequency components with small amplitudes and attenuates the strong frequency components in the received spectrum, so as to obtain a flat composite received frequency response and, in addition, a linear phase response. That is the basic philosophy behind the working of an adaptive equalizer: it acts as an inverse filter, enhancing the frequency components with small amplitudes and attenuating those with larger amplitudes, while at the same time maintaining a linear phase response. For a time-varying channel, the equalizer is designed to track the channel variations so that the above equation is approximately satisfied.
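As a small numerical illustration of this inverse-filter view, the MATLAB sketch below computes the equalizer frequency response as the reciprocal of an assumed channel response and verifies that the combined response is flat, i.e. g(t) = δ(t). The three-tap channel and the FFT length are arbitrary illustrative choices, not values from the report.

% Sketch of the inverse-filter idea: Heq(f)*F(f) = 1 gives a flat combined
% response (for a real channel this is equivalent to Heq(f)*F*(-f) = 1).
f_ch = [1 0.5 0.2];                % assumed channel impulse response f(t) (illustrative)
Nfft = 256;                        % number of frequency points

F    = fft(f_ch, Nfft);            % channel frequency response F(f)
Heq  = 1 ./ F;                     % ideal inverse-filter equalizer response
G    = F .* Heq;                   % combined response G(f) = F(f)*Heq(f), equal to 1 at every bin

g = ifft(G);                       % back to the time domain: g(t) = f(t)*heq(t)
stem(0:Nfft-1, real(g));           % a single impulse at n = 0, i.e. g(t) = delta(t)
title('Combined response g(t)');

Where the channel response F(f) is small, Heq(f) = 1/F(f) is large, which is exactly the "enhance the weak frequencies, attenuate the strong ones" behaviour described above.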

2.5 GENERIC BLOCK DIAGRAM OF AN ADAPTIVE EQUALIZER

Figure 2.5 Generic block diagram of an adaptive equalizer

An adaptive equalizer is a time-varying filter that must constantly be retuned. In the block diagram, the subscript k denotes the discrete time index.

It can be seen from the block diagram that there is a single input yk at any time instant. The value of yk depends on the instantaneous state of the radio channel and the specific value of the noise.

The structure shown is a transversal filter; in this case it has N delay elements, N+1 taps and hence N+1 tunable multipliers, called weights. These weights carry a second subscript k to show explicitly that they vary with time, and they are updated either on a sample-by-sample basis or, in some cases, once per block of samples.

The adaptive algorithm is controlled by the error signal ek. The error signal is derived by comparing the output of the equalizer with a signal dk, which is either a replica of the transmitted signal xk or a signal that represents a known property of the transmitted signal.


The adaptive algorithm uses ek to minimize a cost function and adjusts the equalizer weights in such a manner that the cost function is minimized iteratively. How many iterations the algorithm takes to converge depends on the particular choice of algorithm; the least mean square (LMS) algorithm, which searches for the optimum or near-optimum weights, is one of the most popular methods.
The step-size constant may be adjusted by the algorithm to control the variation between the filter weights on successive iterations. This process is repeated rapidly in a programming loop while the equalizer attempts to converge. When convergence is reached, the algorithm freezes the filter weights. This is, in essence, how an adaptive equalizer works and how its weights are updated.
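As a concrete illustration of this loop, the MATLAB sketch below runs the sample-by-sample LMS weight update for an equalizer in training mode. The BPSK training symbols, the three-tap channel, the 11-tap equalizer and the decision delay are illustrative assumptions, not values taken from the report.

% LMS adaptive equalizer, training mode (illustrative assumptions:
% BPSK training symbols, a fixed three-tap channel, 11 equalizer taps).
Ntrain = 2000;                               % length of the training sequence
xk = sign(randn(Ntrain,1));                  % known training sequence (replica of the transmitted signal)
ch = [0.3 1.0 0.3];                          % assumed channel impulse response
yk = filter(ch, 1, xk) + 0.05*randn(Ntrain,1);   % received signal: channel output plus noise

Ntaps = 11;                                  % N+1 tunable weights
mu    = 0.01;                                % step-size constant
w     = zeros(Ntaps,1);                      % weight vector, updated sample by sample
delay = 6;                                   % decision delay aligning dk with the equalizer output

for k = Ntaps:Ntrain
    y_vec = yk(k:-1:k-Ntaps+1);              % input vector to the transversal filter
    dhat  = w.' * y_vec;                     % equalizer output
    ek    = xk(k-delay) - dhat;              % error against the known training symbol dk = xk
    w     = w + mu * ek * y_vec;             % iterative weight update driven by ek
end
% After convergence the weights are frozen and applied to the user data
% until the next training sequence is detected.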

2.6 MEAN SQUARE ERROR

From classical equalization theory, the most common cost function is the mean square error (MSE) between the desired signal and the output of the equalizer.

The mean square error is represented by E[e(k) e*(k)], i.e. E[|e(k)|²].

When a replica of the transmitted signal is required as the output of the equalizer, a known training sequence must be periodically transmitted.

By detecting the training sequence, the adaptive algorithm in the receiver is able to minimize the cost function by driving the tap weights towards their optimum values until the next training sequence is sent.
How often the training sequence must be retransmitted, i.e. when the equalizer must return to the training mode, depends on how fast the channel is changing.

2.6.1 Mathematics involved with an adaptive equalizer

Let us define the input signal to the equalizer as a vector yk, where

yk = [yk   yk-1   …   yk-N]^T        (1)

The output of the equalizer is a scalar and is written as


d̂k = Σ wnk yk-n   (sum over n = 0, 1, …, N)        (2)

The weight vector can be written as

wk = [w0k   w1k   …   wNk]^T        (3)

Using the previous two equations, the equalizer output d̂k can be written as

d̂k = yk^T wk = wk^T yk        (4)

When the desired equalizer output is known (dk = xk), the error signal ek is given by

ek = dk - d̂k = xk - d̂k        (5)

From equation (4) we have

ek = xk - yk^T wk = xk - wk^T yk        (6)

To compute the mean square error, the error ek at time instant k in equation (6) is squared to obtain

ek² = xk² + wk^T yk yk^T wk - 2 xk yk^T wk        (7)

Taking the expected value of ek² over k, which in practice amounts to computing a time average, yields

E[|ek|²] = E[xk²] + wk^T E[yk yk^T] wk - 2 E[xk yk^T] wk        (8)

It should be noted from equation (8) that wk is not included inside the time averages, since it is assumed to have converged to its optimal value. Note also that xk and yk are not independent; the input vector is correlated with the desired output of the equalizer.


The cross-correlation vector p is defined as

p = E[xk yk] = E[ xk yk   xk yk-1   …   xk yk-N ]^T

The input correlation matrix R, of order (N+1) × (N+1), is the expected value of yk yk^T:

R = E[yk yk^T]

The matrix R is also called the input covariance matrix. The main diagonal of R contains the mean square values of the input samples, and the cross terms specify the autocorrelation terms resulting from delayed samples of the input signal.

If xk and yk are stationary, then the elements of R and p are second-order statistics that do not vary with time, and using these definitions the mean square error can be written as

Mean square error ξ = E[xk²] + w^T R w - 2 p^T w
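For reference, this quadratic cost function can be minimized in closed form: setting its gradient with respect to w to zero gives the optimum (Wiener) weight vector and the minimum mean square error. This is the standard textbook result implied by the expression above, written here in LaTeX notation as a sketch rather than a derivation from the report:

\nabla_{\mathbf{w}}\,\xi = 2\,\mathbf{R}\,\mathbf{w} - 2\,\mathbf{p} = \mathbf{0}
\quad\Longrightarrow\quad
\hat{\mathbf{w}} = \mathbf{R}^{-1}\,\mathbf{p},
\qquad
\xi_{\min} = E[x_k^{2}] - \mathbf{p}^{T}\,\mathbf{R}^{-1}\,\mathbf{p}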


2.7 ALGORITHMS FOR ADAPTIVE EQUALIZER

2.7.1 Factors that determine an algorithm's performance

1. Rate of convergence - The number of iterations required for the algorithm, in response to stationary inputs, to converge close enough to the optimal solution. An algorithm with a high rate of convergence adapts rapidly to a stationary environment.

2. Misadjustment - A quantitative measure of the amount by which the final value of the mean squared error, averaged over an ensemble of adaptive filters, deviates from the optimal mean square error.

3. Computational complexity - The number of operations required to make one complete iteration of the algorithm. It is clearly correlated with the power the equalizer consumes.

4. Numerical properties - Certain algorithms are prone to numerical inaccuracies, such as round-off noise and representation errors in the computer, which influence the stability of the algorithm.

2.7.2 Types of algorithm for adaptive equalization

1. Zero forcing (ZF) algorithm
2. Least mean square (LMS) algorithm
3. Recursive least squares (RLS) algorithm

These are applicable to both linear and non-linear equalizers.


CHAPTER 3

3.1 FUTURE WORK

In this report we have studied the LMS algorithm and the working of an equalizer, and we have also studied the use of the LMS algorithm in an adaptive equalizer to overcome the effect of inter-symbol interference.
Our future work will be to study the stability of the LMS algorithm and to analyse the different types of algorithms that can make it noise immune. We will also try to study the spectral characteristics of the FS MMSE equalizer and to derive its transfer function, including the effects of the receiver front-end C/D converter and the sampling phase.

3.2 TIMELINE

Time Period          Action to be performed

1. AUG 2018          1. LITERATURE REVIEW
2. SEPT 2018         2. SYNOPSIS SUBMISSION
3. OCT-DEC 2018      3. EXPERIMENT BY CHANGING THE CONVERGENCE FACTOR, THE ORDER OF THE FILTER AND THE NOISE SOURCE
4. JAN-MAR 2019      4. STUDY THE FS MMSE EQUALIZER
5. APR-JUNE 2019     5. STUDY AND IMPLEMENT PRACTICAL APPLICATIONS IN THE REAL-TIME WORLD


3.3 REFERENCES

1. J. G. Proakis and M. Salehi, Digital Communications, 5th ed. New York, NY, USA: McGraw-Hill, 2008.

2. G. B. Giannakis, "Cyclostationary signal analysis," in Digital Signal Processing Fundamentals, 2nd ed.

3. W. Rudin, Real and Complex Analysis, 3rd ed. New York, NY, USA: McGraw-Hill, 1987.

4. A. H. Sayed, Fundamentals of Adaptive Filtering. Hoboken, NJ, USA: Wiley, 2003.
