Performance Analysis of Adaptive Noise Cancellation by Using Algorithms
Abstract— In statistical signal processing, adaptive filters play a vital role. In adaptive filtering, the filter coefficients change automatically according to some adaptive algorithm, in synchronization with the input signal and taking the error into account, so as to adapt to changing signal characteristics. This paper considers the noise cancellation application of adaptive filters. First we give a brief overview of adaptive filters, then brief introductions to the Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Square (RLS) algorithms. LMS, NLMS and RLS are implemented in MATLAB to remove noise from the information-carrying signal and present the useful information as an output; results for noise cancellation are presented here. We then compare the results: the graphs produced by these implementations are used to analyze the performance of each algorithm.

Figure 1: Flow Chart

II. ADAPTIVE FILTERS

An adaptive filter self-adjusts its filter coefficients according to an adaptive algorithm; the relationship between the input and output signals is modeled iteratively. The figure given below shows a general adaptive filter, consisting of a shift-varying filter wn(z) and an adaptive algorithm for updating the filter coefficients wn(k).
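This update loop can be sketched in a few lines of code (an illustrative Python sketch, not the paper's MATLAB implementation; the `update` callback stands in for whichever adaptive algorithm is used):

```python
import numpy as np

def adaptive_filter(x, d, p, update):
    """Generic adaptive FIR loop: at each step n, build the tap vector
    X(n) = [x(n), ..., x(n - p + 1)], compute the output y(n) = w . X(n)
    and the error e(n) = d(n) - y(n), then let the algorithm-specific
    `update` rule adjust the coefficient vector w."""
    w = np.zeros(p)                  # initial weights (commonly zero)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        X = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(p)])
        y[n] = w @ X
        e[n] = d[n] - y[n]
        w = update(w, X, e[n])       # e.g. LMS: w + mu * e * X
    return y, e, w
```

Passing a different `update` rule yields the different algorithms discussed later; only the coefficient-update line changes.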
Figure 3: System Identification or Modeling

B. Inverse Modeling:
Figure 4 illustrates inverse modeling, also known as deconvolution. Its goal is to find and track the inverse transfer function of the system. The system receives an input x(n), and its output u(n) is fed to the adaptive filter. When a delay is given to the input x(n), we get the preferred output d(n). Then we compare the filter output y(n) with the desired response d(n); the error is used to update and correct the filter weights.

D. Noise Cancellation:
Noise cancellation is the main focal point of this paper. The idea behind this application is the following: let d(n) be a preferred output tarnished by a noise n2(n); d(n) is said to be the main signal:

Main signal = s(n) + n2(n)

The input fed to the adaptive filter is a reference signal n1(n); the source of this reference signal also produces the noise which tarnishes the main signal. d(n) is then compared with the filter output y(n). e(n) is the system output, which is the comparison of the filter output and the preferred output. In the best case, this error will be equal to the original signal.
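To make the setup concrete, the signals involved can be generated as follows (an illustrative Python sketch; the sinusoidal s(n) and the FIR path from n1(n) to n2(n) are assumptions for the example, not taken from the paper):

```python
import numpy as np

def make_signals(N=5000, seed=0):
    """Build the noise-cancellation signals described above:
    s(n)  - clean information-bearing signal (a sinusoid here, by assumption)
    n1(n) - reference noise fed to the adaptive filter
    n2(n) - noise corrupting the main signal; here n2 is a filtered copy
            of n1, so the two are correlated and cancellation is possible
    d(n)  - main (primary-sensor) signal d(n) = s(n) + n2(n)"""
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    s = np.sin(2 * np.pi * 0.01 * n)          # clean signal (assumed)
    n1 = rng.standard_normal(N)               # reference noise
    # n2 is n1 passed through an (assumed) unknown FIR path
    n2 = np.convolve(n1, [0.6, -0.25, 0.1])[:N]
    d = s + n2                                # main signal
    return s, n1, d
```

Any of the adaptive algorithms below can then be driven with n1(n) as the filter input and d(n) as the desired signal.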
IV. ACTIVE NOISE CANCELLING

Adaptive noise cancellation is also known as active noise cancellation (ANC), and can simply be regarded as a class of noise cancellation. Our main focus is to lessen the noise intrusion or terminate the disturbance. The approach chosen in the ANC algorithm is to try to mimic the original signal s(n). The central goal is to use an active noise cancelling algorithm to stop speech noise intrusion and to cope with several types of tarnished signal. Interference cancellation is explained in figure 6.

The intrusion signal for the reference sensor is a noise and is given to the system as a reference signal. The primary sensor picks up the preferred signal. The adaptive filter produces a first response; the preferred signal is then compared with that first response, which results in an error. The error produced is then used as feedback to the filter, correcting the filter weights and the system response. Diverse approaches can be used to compute the best active noise canceller algorithm, and noise canceller algorithms with diverse outputs can be designed for a preferred usage. The specific features of each algorithm give it its unique applications, with their own merits and demerits.

Figure 6: Interference Cancellation

V. LEAST MEAN SQUARE (LMS) ALGORITHM

In the LMS algorithm we are concerned with approximating the desired filter by finding the filter coefficients that produce the least mean square of the error signal, where the error signal is the difference between the desired and the actual signal. The stochastic gradient descent method is used in this algorithm.

The basic idea behind the LMS filter is to approach the most favorable filter weights (R−1P) by updating the filter weights in such a way that they converge to the most favorable filter weight. Firstly, a small weight is assumed, most commonly zero. At each step the gradient of the mean square error is determined and the weights are updated accordingly. If the MSE gradient is positive, the error will keep increasing if the same weight is used repeatedly, so the weights need to be reduced to come closer to the required weight. In the same way, a negative gradient reflects the need to increase the weight until it gets closer to the desired weight. So the basic weight update equation is:

wn+1 = wn − μ∇ε[n]

Here, ε represents the mean square error. The minus sign shows that the weight must change opposite to the gradient slope: if the slope is positive, reduce the weight; if it is negative, increase the weight.

A. LMS algorithm summary

The LMS algorithm for a pth-order filter can be described as:

Parameters: p = filter order
μ = step size
Start with: ĥ(0) = 0
Computation: For n = 0, 1, 2...
X(n) = [x(n), x(n − 1), …, x(n − p + 1)]T
e(n) = d(n) − ĥH(n) X(n)
ĥ(n + 1) = ĥ(n) + μ e*(n) X(n)

B. Convergence and stability in the mean of LMS

One drawback of the LMS algorithm is that we cannot obtain the optimal weights in an absolute sense, because the exact values of the expectations are not used; convergence in the mean is possible, however, and is helpful in achieving the desired results. The weights then vary around the optimal weights by a small amount. Convergence in the mean can be misleading if the variance with which the weights change is high; this problem arises when the value of the step size μ is not selected correctly. Thus an upper bound on μ is needed, given as

0 < μ < 2/λmax

where λmax is the largest eigenvalue of the autocorrelation matrix R, whose eigenvalues can never be negative. The algorithm's stability depends on this value; a negative λmax would make it unstable. If the selected value of μ is very small, the algorithm converges very slowly; a large value of μ results in faster convergence, but at the expense of stability around the minimum value. Maximum convergence speed can be achieved with

μ = 2/(λmax + λmin)

Here, to get faster convergence, the value of μ should be large, which is achievable when λmax is close to λmin, where λmin is the smallest eigenvalue of R and also determines the speed of convergence.

The learning rate μ assures the stability of the algorithm and makes it more effective. Sensitivity to the scaling of its inputs makes pure LMS less effective, because it makes the selection of the learning rate μ difficult, which is a drawback.

VI. NORMALIZED LEAST MEAN SQUARE (NLMS) ALGORITHM

The Normalized Least Mean Squares (NLMS) filter is a modification of the LMS algorithm: it normalizes with the power of the input.

NLMS algorithm outline:
Parameters: p = filter order
μ = step size
Initialization: ĥ(0) = 0
Computation: For n = 0, 1, 2...
X(n) = [x(n), x(n − 1), …, x(n − p + 1)]T
e(n) = d(n) − ĥH(n) X(n)
ĥ(n + 1) = ĥ(n) + μ e*(n) X(n) / (XH(n) X(n))

Optimal learning rate:

If there is no intrusion (v(n) = 0), the optimal learning rate for the NLMS algorithm is

μopt = 1

and is independent of the input X(n) and the real (unknown) impulse response h(n). In the general case, with intrusion v(n) ≠ 0, the most favorable learning rate is

μopt = E[|y(n) − ŷ(n)|²] / E[|e(n)|²]

Here we assume that the signals v(n) and X(n) are uncorrelated with each other.

VII. RECURSIVE LEAST SQUARE (RLS) ALGORITHM

LMS may not provide a fast rate of convergence and a small mean square error, and in the case of the LMS algorithm it is necessary to have knowledge of the autocorrelation of the input signal and the cross-correlation between the input and the desired output. Therefore, we consider error measures that do not involve expectations and may be computed directly from the data:

ℰ(n) = Σi=0..n |e(i)|²

No statistical information about x(n) or d(n) is required; ℰ(n) may be evaluated directly from x(n) and d(n). In the RLS algorithm we introduce a forgetting factor λ. The RLS algorithm minimizes a weighted linear least squares error and gives outstanding performance in non-stationary situations.

VIII. MATLAB SIMULATION RESULTS

A. LMS algorithm

Some alterations have been made to figure 6 and the LMS summary to attain the interference cancelling. Figure 7 shows the result of applying the LMS algorithm to the intrusion problem, where s(n) is the input, the preferred signal is x(n) = s(n) + n2(n), and the error signal should be equal to the input signal s(n). In figure 7 the step-size parameter μ is set equal to 0.0002 and the length of the adaptive filter is kept at 5. The input signal s(n) is shown in blue, the input signal after the noise corruption, s(n) + n2(n), is shown in green, and the error signal e(n) is shown in red.

L = 5, μ = 0.0002

Figure 7: Output of noise cancellation using LMS algorithm.
Analyzing figure 7, we can see that the LMS algorithm has not delivered an excellent performance: it has not entirely removed the noise from the error signal e(n), and hence does not recover the original signal s(n) entirely free of the noise intrusion n1(n).

B. NLMS algorithm

L = 5, μ = 0.0002

The preferred signal is x(n) = s(n) + n2(n), and the error signal e(n), which is thought to reproduce the input signal s(n), is shown in red. Looking at figure 9, it can be seen that the algorithm has done a good job, removing the noise interfering with the original signal.

C. RLS algorithm

Figure 10 illustrates the MSE, which is the difference between the error signal and the input signal. In this figure the algorithm has converged after roughly 10000 iterations; after around 1000 iterations the algorithm shows some variance, but not at the expense of convergence.

L = 5, δ = 3.2, μ = 0.005
L = 5, λ = 1
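The learning-curve view of figure 10 can be reproduced with a short self-contained sketch (illustrative Python; the NLMS update and all signal parameters here are assumptions for the example, not the paper's MATLAB setup):

```python
import numpy as np

def learning_curve(N=20000, p=5, mu=0.1, seed=2):
    """Track the squared deviation of the canceller output e(n) from the
    clean signal s(n) -- the quantity plotted as MSE against iteration
    number. Uses an NLMS update; signals and parameters are assumed."""
    rng = np.random.default_rng(seed)
    t = np.arange(N)
    s = np.sin(2 * np.pi * 0.005 * t)                         # clean signal
    x = rng.standard_normal(N)                                # reference noise n1(n)
    d = s + np.convolve(x, [0.5, -0.3, 0.2, 0.1, -0.05])[:N]  # main signal s + n2
    w = np.zeros(p)
    mse = np.zeros(N)
    for n in range(N):
        X = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(p)])
        e = d[n] - w @ X                       # canceller output, ideally s(n)
        w = w + mu * e * X / (X @ X + 1e-8)    # NLMS step
        mse[n] = (e - s[n]) ** 2               # deviation from the clean signal
    return mse
```

Plotting (or smoothing) `mse` against the iteration index gives a curve of the same kind as figure 10: large early deviations that settle to a small residual once the filter has converged.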
X. REFERENCES