Performance Analysis of Adaptive Noise Cancellation by Using Algorithms



Student of MS EE at FAST-NUCES

Abstract— Adaptive filters play a vital role in statistical signal processing. In adaptive filtering, the filter coefficients change automatically according to some adaptive algorithm, in step with the input signal and taking the error into account, so that the filter adapts to changing signal characteristics. This research paper addresses the noise cancellation application of adaptive filters. First a brief overview of adaptive filters is given, followed by brief introductions to the Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Square (RLS) algorithms. LMS, NLMS and RLS are implemented in MATLAB to remove noise from an information-carrying signal and recover useful information at the output, and the results are compared. The graphs produced by these implementations are used to analyze the performance of each algorithm.

Keywords— Recursive Least Square (RLS), Least Mean Square (LMS), Matrix LABoratory (MATLAB), Normalized Least Mean Square (NLMS).
I. INTRODUCTION

Adaptive filters are appropriate for systems whose statistical parameters are unknown. Commonly used adaptation algorithms are RLS and LMS: the RLS algorithm has a higher convergence speed than the LMS algorithm, but at the expense of computational complexity, while the LMS algorithm is computationally cheaper than RLS. After reading this research paper one can easily understand the theory behind adaptive filters. Possible algorithmic solutions and their performance results are presented here for noise cancellation.

Figure 1: Flow Chart

II. ADAPTIVE FILTERS

An adaptive filter self-adjusts its coefficients according to an adaptive algorithm, iteratively modeling the relationship between the input and output signals. Figure 2, given below, shows a general adaptive filter consisting of a shift-varying filter wn(z) and an adaptive algorithm for updating the filter coefficients wn(k).

Figure 2: Typical adaptive filter

Adaptive filters are capable of adapting to an unknown setting. They are widely used because of their adaptability and low cost: they can operate in an unknown environment and can track time variations of the input statistics. Indeed, adaptive filters have been used frequently and effectively over the years. Their applications can be classified as identification, prediction, inverse modeling, and noise cancelling.

The following characteristic is common to all the above-mentioned applications: the input fed to the adaptive filter is compared with a required output, which results in an error. This error then updates the weights (the filter coefficients) so as to lessen the error and approach the optimum; a minimal sketch of this loop is given below. In the best possible case the error is zero, but in practice it always remains somewhat greater than zero.
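As a minimal MATLAB sketch of this compare-and-update loop (illustrative only; the variable names and the LMS-style update used as a placeholder are our assumptions, not the paper's code):

    % Generic adaptive-filter loop: filter the input, compare with the
    % desired response, and feed the error back into a weight update.
    % Sketch only: x and d are assumed given as column vectors.
    p  = 5;                          % filter order (number of taps)
    mu = 0.01;                       % step size for the weight update
    w  = zeros(p, 1);                % filter weights, initialized to zero
    N  = length(x);
    e  = zeros(N, 1);
    for n = p:N
        xvec = x(n:-1:n-p+1);        % most recent p input samples
        y    = w' * xvec;            % adaptive filter output y(n)
        e(n) = d(n) - y;             % error against desired response d(n)
        w    = w + mu * e(n) * xvec; % weight update (LMS-style placeholder)
    end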
III. APPLICATIONS OF ADAPTIVE FILTERS

To familiarize the reader with the variety of adaptive filter applications, the four main applications are described below.

A. Identification or Modeling:
Figure 3 illustrates the identification problem. Here the same input x(n) is fed to both the adaptive filter and the system. y(n) is the adaptive filter's output, which is compared with the preferred response d(n). This comparison generates the error, which is then used to amend the weights w(n) so as to reduce the error and thereby identify the system.

Figure 3: System Identification or Modeling

B. Inverse Modeling:
Figure 4 illustrates inverse modeling, also known as deconvolution. Its goal is to find and track the inverse transfer function of the system. The system receives an input x(n) and its output u(n) is fed to the adaptive filter. Delaying the input x(n) gives the preferred output d(n). The filter output y(n) is then compared with the desired response d(n), and the error is used to update, or correct, the filter weights.

Figure 4: Inverse Modeling

C. Prediction:
Figure 5 explains the logic of the predictor adaptive filter. This application gives the best estimate of an arbitrary signal: the filter uses the previous values of the arbitrary signal x(n), obtained by delaying the signal fed to the adaptive filter input, and compares its output y(n) with the desired response d(n), which is the actual random signal x(n). When the filter output is used in this way to adjust the filter weights, the filter is said to be a predictor.

Figure 5: Predictor Adaptive Filter

D. Noise Cancellation:
Noise cancellation is the main focal point of this research paper. The idea behind this application is as follows. Let d(n) be a preferred output corrupted by a noise n2(n); this is called the main signal:

Main signal: d(n) = s(n) + n2(n)

d(n) is then compared with y(n). The input fed to the adaptive filter is a reference signal n1(n), which comes from the same noise source that corrupts the main signal. The system output e(n) is the comparison of the filter output and the preferred output; in the best case, this error equals the original signal s(n). A sketch of this signal setup follows.
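As an illustration of this setup, the following MATLAB sketch builds a set of assumed signals: a clean signal s(n), a common noise source filtered two different ways to give the corrupting noise n2(n) and the correlated reference n1(n), and the main signal d(n). The waveforms and filter coefficients here are our assumptions, not the paper's data.

    % Illustrative noise-cancellation signal setup (assumed data).
    N  = 30000;                       % number of samples
    n  = (0:N-1)';
    s  = sin(2*pi*0.01*n);            % clean information signal s(n)
    v  = randn(N, 1);                 % common underlying noise source
    n1 = filter([1 0.5], 1, v);       % reference noise n1(n), fed to the filter
    n2 = filter([1 -0.4 0.2], 1, v);  % correlated noise that corrupts s(n)
    d  = s + n2;                      % main (primary) signal d(n) = s(n) + n2(n)
    % The adaptive filter takes n1 as input and d as the desired response;
    % its error e(n) approaches s(n) once it models the n1 -> n2 relation.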
Figure 6: Interference Cancellation

IV. ACTIVE NOISE CANCELLING

Adaptive noise cancellation is also known as active noise cancellation (ANC) and can simply be regarded as a class of noise cancellation. Our main focus is to lessen the noise intrusion or cancel the disturbance. The approach chosen in the ANC algorithm is to try to mimic the original signal s(n). The central goal is to use an active noise cancelling algorithm to stop speech-noise intrusion and to cope with several types of corrupted signal. Interference cancellation is explained in Figure 6. The intrusion signal at the reference sensor is noise and is given to the system as a reference signal, while the primary sensor picks up the preferred signal. The adaptive filter produces a first response, the preferred signal is compared with that response, and the resulting error is used as feedback to the filter, correcting the filter weights and the system response. Diverse approaches can be used to compute the best active noise canceller algorithm, and noise canceller algorithms can be designed for a preferred usage with diverse outputs. The specific features of each algorithm give it unique applications, with its own merits and demerits.

V. LEAST MEAN SQUARE (LMS) ALGORITHM

In the LMS algorithm we approximate the desired filter by finding the filter coefficients that produce the least mean square of the error signal, the difference between the desired and the actual signal. The stochastic gradient descent method is used in this algorithm. The basic idea behind the LMS filter is to approach the optimum filter weights (R^-1 P) by updating the filter weights in such a way that they converge to the optimum. First a small weight is assumed, most commonly zero. At each step the gradient of the mean square error is determined and the weights are updated accordingly. If the MSE gradient is positive, the error would keep growing if the same weight were used repeatedly, so the weight must be reduced toward the required weight; in the same way, a negative gradient reflects the need to increase the weight to get closer to the desired weight. The basic weight update equation is therefore:

w(n+1) = w(n) − μ∇ε(n)

Here ε denotes the mean square error, and the minus sign shows that the weight is changed opposite to the gradient slope: if the slope is positive, reduce the weight; if negative, increase it.

A. LMS algorithm summary for pth order:
The LMS algorithm for a pth-order filter can be described as:

p = filter order
μ = step size
Initialization: ĥ(0) = 0
Computation: for n = 0, 1, 2, ...
  X(n) = [x(n), x(n − 1), ..., x(n − p + 1)]^T
  e(n) = d(n) − ĥ^H(n) X(n)
  ĥ(n+1) = ĥ(n) + μ e*(n) X(n)

B. Convergence and stability in the mean of LMS:
One drawback of the LMS algorithm is that the optimal weights cannot be obtained in an absolute sense, since the exact values of the expectations are not used; however, convergence in the mean is possible, which is helpful in achieving the desired results. The weights then vary about the optimal weights by a small amount. Convergence in the mean can be misleading if the variance with which the weights change is high; this problem arises when the value of the step size μ is not selected correctly. Thus an upper bound on μ is needed, given as

0 < μ < 2/λmax

where λmax is the largest eigenvalue of the autocorrelation matrix R, whose eigenvalues can never be negative.
The algorithm's stability depends on this bound: if the value of μ selected is very small, the algorithm converges very slowly, while a large value of μ results in faster convergence, but at the expense of stability around the minimum value. Maximum convergence speed is achieved by

μ = 2/(λmax + λmin)

Here, to obtain faster convergence, the value of μ should be large, which is achievable when λmax is close to λmin, where λmin is the smallest eigenvalue of R; λmin also determines the speed of convergence. A MATLAB sketch of the LMS recursion follows.
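The following is a minimal MATLAB sketch of the LMS summary above (real-valued signals are assumed, so the conjugate and Hermitian transpose reduce to plain transposes; the function name lms_filter is ours, not the paper's):

    function [e, w] = lms_filter(x, d, p, mu)
    % LMS adaptive filter (real signals): w(n+1) = w(n) + mu*e(n)*x(n).
    % x: reference input, d: desired signal, p: filter order, mu: step size.
    % For stability, mu should satisfy 0 < mu < 2/lambda_max, where
    % lambda_max is the largest eigenvalue of the input autocorrelation matrix.
        x = x(:); d = d(:);
        w = zeros(p, 1);                 % h(0) = 0
        e = zeros(length(x), 1);
        for n = p:length(x)
            xvec = x(n:-1:n-p+1);        % [x(n), x(n-1), ..., x(n-p+1)]'
            e(n) = d(n) - w' * xvec;     % e(n) = d(n) - h'(n) X(n)
            w    = w + mu * e(n) * xvec; % gradient-descent weight update
        end
    end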
VI. NORMALIZED LEAST MEAN SQUARE (NLMS) ALGORITHM

The learning rate μ assures the stability of the algorithm and makes it more effective. Sensitivity to the scaling of its inputs makes pure LMS less effective, because it makes the selection of the learning rate μ difficult, which is a drawback. The Normalized Least Mean Squares (NLMS) filter is a modification of the LMS algorithm that normalizes the update by the power of the input.

NLMS algorithm outline:

Parameters: p = filter order
μ = step size
Initialization: ĥ(0) = 0
Computation: for n = 0, 1, 2, ...
  X(n) = [x(n), x(n − 1), ..., x(n − p + 1)]^T
  e(n) = d(n) − ĥ^H(n) X(n)
  ĥ(n+1) = ĥ(n) + μ e*(n) X(n) / (X^H(n) X(n))

Optimal learning rate:
If there is no intrusion [v(n) = 0], the optimal learning rate for the NLMS algorithm is

μopt = 1

and it is independent of the input X(n) and the real (unknown) impulse response h(n). In the general case with intrusion v(n) ≠ 0, the optimal learning rate is

μopt = E[|y(n) − ŷ(n)|²] / E[|e(n)|²]

where we assume that the signals v(n) and X(n) are uncorrelated with each other.
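Under the same real-signal assumption as the LMS sketch, the NLMS update can be sketched as follows (the small constant delta, our addition, guards against division by zero and corresponds to the regularization δ reported with the figures):

    function [e, w] = nlms_filter(x, d, p, mu, delta)
    % NLMS adaptive filter: LMS update normalized by the input power.
        x = x(:); d = d(:);
        w = zeros(p, 1);
        e = zeros(length(x), 1);
        for n = p:length(x)
            xvec = x(n:-1:n-p+1);
            e(n) = d(n) - w' * xvec;
            w    = w + mu * e(n) * xvec / (delta + xvec' * xvec); % normalized step
        end
    end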

VII. RECURSIVE LEAST SQUARE (RLS) ALGORITHM

LMS may not provide a fast rate of convergence and a small mean square error. In the case of the LMS algorithm it is also necessary to know the autocorrelation of the input signal and the cross-correlation between the input and the desired output. Therefore we consider error measures that do not involve expectations and may be computed directly from the data:

ℰ(n) = Σ_{i=0}^{n} |e(i)|²

No statistical information about x(n) or d(n) is required, and the measure may be evaluated directly from x(n) and d(n). In the RLS algorithm we introduce a forgetting factor λ:

ℰ(n) = Σ_{i=0}^{n} λ^(n−i) |e(i)|²

The RLS algorithm thus minimizes a weighted linear least squares error, and it gives outstanding performance in non-stationary situations.
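A compact MATLAB sketch of a standard exponentially weighted RLS recursion consistent with this cost function (a textbook form rather than the paper's own code; delta, our assumption, initializes the inverse correlation matrix P):

    function [e, w] = rls_filter(x, d, p, lambda, delta)
    % Exponentially weighted RLS (real signals): minimizes
    % the sum over i of lambda^(n-i) * e(i)^2.
        x = x(:); d = d(:);
        w = zeros(p, 1);
        P = (1/delta) * eye(p);           % inverse autocorrelation estimate
        e = zeros(length(x), 1);
        for n = p:length(x)
            xvec = x(n:-1:n-p+1);
            k    = (P * xvec) / (lambda + xvec' * P * xvec); % gain vector
            e(n) = d(n) - w' * xvec;      % a priori error
            w    = w + k * e(n);          % weight update
            P    = (P - k * (xvec' * P)) / lambda;           % Riccati update
        end
    end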

VIII. MATLAB SIMULATION RESULTS

A. LMS algorithm
Some alterations were made to the Figure 6 setup and the LMS summary to attain the interference cancelling; a sketch of such a driver script is given at the end of this subsection. Figure 7 shows the result of applying the LMS algorithm to the intrusion problem, where s(n) is the input, the preferred signal is x(n) = s(n) + n2(n), and the error signal should equal the input signal s(n). In Figure 7 the step-size parameter μ is set equal to 0.0002 and the length of the adaptive filter is kept at 5. The input signal s(n) is shown in blue, the input signal after the noise corruption s(n) + n2(n) is shown in green, and the error signal e(n) is shown in red.

Figure 7: Output of noise cancellation using the LMS algorithm (L = 5, μ = 0.0002).

Analyzing Figure 7, we can see that the LMS algorithm has not performed excellently: it has not entirely removed the error signal e(n), and hence has not recovered the original signal s(n) entirely free of the noise intrusion n1(n).

Figure 8: Mean-squared error of noise cancellation using the LMS algorithm (L = 5, μ = 0.0002).

Figure 8 shows the mean-squared error for the LMS algorithm for noise cancellation. This error is not the error signal e(n) but the difference between that signal and the input signal s(n). Even after 30000 iterations, convergence has not occurred. Practically, the algorithm should take a reasonable number of iterations (a few seconds) for the error to reach a value close to its optimum in the mean-square sense.
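As an illustration, the experiment of Figures 7 and 8 could be reproduced by combining the earlier sketches (the signal-setup variables and the function lms_filter are our assumed names; L = 5 and μ = 0.0002 are the values reported with the figures):

    % Driver sketch for the LMS experiment of Figures 7-8.
    [e, w] = lms_filter(n1, d, 5, 0.0002);     % L = 5, mu = 0.0002
    figure; plot(n, s, 'b', n, d, 'g', n, e, 'r');
    legend('s(n)', 's(n) + n2(n)', 'e(n)'); title('Noise cancellation, LMS');
    figure; plot(10*log10((e - s).^2 + eps));  % error relative to s(n), in dB
    title('Mean-squared error, LMS');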
B. NLMS algorithm
Figures 9 and 10 show the output of noise cancellation using the NLMS algorithm. The input signal s(n) is corrupted by the noise n1(n), resulting in the corrupted signal x(n) = s(n) + n2(n). The error signal e(n), which should reproduce the input signal s(n), is shown in red. Looking at Figure 9, it can be seen that the algorithm has done a good job of removing the noise interfering with the original signal.

Figure 9: Output of noise cancellation using the NLMS algorithm (L = 5, δ = 3.2, μ = 0.005).

Figure 10 illustrates the MSE, which is the difference between the error signal and the input signal. In this figure the algorithm has converged after roughly 10000 iterations; beyond that point it shows some variance, but not at the expense of convergence.

Figure 10: Mean-squared error of noise cancellation using the NLMS algorithm (L = 5, δ = 3.2, μ = 0.005).
C. RLS algorithm
Figures 11 and 12 show the output of noise cancellation using the RLS algorithm. The algorithm has done a good job of removing the noise, proving itself a good active noise canceller. The original input signal s(n) is shown as the blue line, the green line shows the same signal after the noise corruption x(n) = s(n) + n2(n), and the red line shows the error signal, which should be close to the original input signal s(n).

Figure 11: Output of noise cancellation using the RLS algorithm (L = 5, λ = 1).

Figure 12 shows the cost function of the error between the original input signal s(n) and the error signal e(n). By analyzing the figures it can be seen that the algorithm converges very fast compared to LMS and NLMS, within only the first few thousand iterations, and so works very efficiently in removing noise.

Figure 12: Mean-squared error of noise cancellation using the RLS algorithm (L = 5, λ = 1).
IX. RESULTS COMPARISON

Analyzing the above figures, we come to the conclusion that the LMS algorithm takes time and has a very slow convergence for the given number of iterations, making its noise cancellation less effective compared with NLMS and RLS. Moreover, even when LMS does converge, it retains a high error value, about 30 dB compared with 20 dB for the other algorithms. In a situation where convergence speed is the concern, the LMS algorithm is not favorable; we should select the NLMS or RLS algorithm, depending upon speed and other requirements. The RLS algorithm works about three times more efficiently and faster than the NLMS algorithm. So, within the span of this research, the most efficient algorithm is RLS, having the fastest convergence, the lowest error and an efficient output, though its computational cost is higher than the other two.

X. CONCLUSIONS

In a nutshell, this research paper gives a brief idea of our achievements in the field of adaptive filter usage for noise cancellation: a comprehensive investigation and evaluation of the LMS, NLMS and RLS algorithms for noise cancellation.
