DSP Adaptive Filter
All content following this page was uploaded by Fabián R. Jiménez L. on 10 February 2017.
factor was executed with values greater than 0.15, where the behavior was unstable.

Figure 5. Learning Curve for Adaptive Algorithms used in System Identification: a) LMS Algorithm; b) NLMS Algorithm; c) RLS Algorithm.

Each of the five step-sizes was interesting: on one hand, the larger the step-size, the faster the convergence; on the other hand, the smaller the step-size, the better the steady-state square error. The RLS algorithm was executed with five different forgetting factors, λ = [0.999; 0.99; 0.9; 0.85; 0.8]; comparatively, the worst behavior was obtained when λ = 0.999, and the best performance was presented when λ = 0.8.

3.1.2 Mean Square Error (MSE)
This parameter is the most commonly used for model testing purposes:

    MSE = (1/N) * sum_{k=1..N} (d[k] - y[k])^2        (1)

where y[k] is the predicted output of the adaptive filter and N is the number of samples used in the identification process. The MSE graph of the filtered output signal with respect to the filter input indicates how fast the Least Square Error (LSE) is reached, and therefore defines the filter convergence rate. The MSE quantifies the difference between the estimated (identified) model and the real model. To obtain the MSE, both the error signal power and the input signal power are calculated over a number of samples.

    Best Adaptive Algorithm     MSE
    LMS  (μ = 0.1)              0.0127
    NLMS (μ = 0.15)             0.0116
    RLS  (λ = 0.8)              0.0100
    Table 1. MSE for Adaptive Algorithms for System Identification.

Table 1 shows that the lowest average MSE was 0.0100 for the RLS algorithm, followed by 0.0116 for the NLMS and 0.0127 for the LMS. In order to get better insight, Figure 6a displays the MSE between the identified system and the unknown system. The convergence speed evaluation was done by defining the point at which the graph shows no significant changes in the MSE along the samples. From Figure 6a it is clear that the RLS achieves a faster convergence speed than the LMS and NLMS, and the RLS algorithm has the lowest MSE compared to the other algorithms. Although the RLS algorithm converges faster, it is important to note that its computational complexity was higher because a correlation matrix inversion is involved. In order to compare these algorithms easily, the best parameters from the implementation results above were selected. In Figure 6a, μ = 0.1 for the LMS adaptive filter, μ = 0.15 for the NLMS algorithm and λ = 0.8 for the RLS adaptive filter were established for their best MSE performance.

Figure 6. Learning Curve for MSE for Adaptive Algorithms used in System Identification: a) MSE; b) MSE in dB.

Under the same filter length for the adaptive algorithms, at first glance the results of Figure 6b show the MSE on a logarithmic magnitude scale. A difference was presented by the NLMS algorithm, which has a higher convergence rate than the LMS; similarly, the RLS algorithm has faster convergence than the NLMS filter.

In addition, the least error value was not reached by the LMS algorithm. The RLS and NLMS algorithms reached the lowest error at approximately –320 dB, while the LMS reaches it at approximately –220 dB. This implies that the RLS and NLMS algorithms had a lower minimum error compared to the LMS algorithm. It is important to state that the minimum error is conditioned by the characteristics of the data transfer channel; in this experiment a 3.5 mm stereo jack connector was used.

3.1.3 Measurement of Error Signal e[k]
The performance of the adaptive filters was assessed by comparing the error signal, i.e. by measuring the difference between the desired signal d[k] and the adaptive filter output y[k]. Convergence of an adaptive algorithm is reached when there is no significant change in the error along several samples; the best behavior is obtained by the adaptive algorithm that reaches this point first. The adaptive algorithms were compared using the same length of N = 60 weights. The best convergence factor was chosen in all experiments: μ = 0.1 for the LMS algorithm and μ = 0.15 for the NLMS algorithm, and the best forgetting factor λ = 0.8 for the RLS algorithm. The output data were captured and displayed in Matlab®. The algorithm error results are indicated in Figure 7.
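The identification experiment above can be reproduced offline. The following is a minimal Python/NumPy sketch, not the authors' DSK C6713 implementation: the unknown system is stood in by an arbitrary illustrative 4-tap FIR filter, and the three update rules are written in their textbook forms, with the MSE of Eq. (1) computed over all samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the unknown system: a short fixed FIR filter.
# (These taps are assumptions for the sketch, not the paper's system.)
h = np.array([0.4, -0.3, 0.2, 0.1])
N_TAPS = len(h)
N_SAMPLES = 2000

x = rng.standard_normal(N_SAMPLES)    # excitation input
d = np.convolve(x, h)[:N_SAMPLES]     # desired signal: unknown-system output

def regressor(x, k, n):
    # Most recent n input samples, newest first: [x[k], x[k-1], ..., x[k-n+1]]
    return x[k - n + 1:k + 1][::-1]

def lms(x, d, n_taps, mu):
    w, e = np.zeros(n_taps), np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = regressor(x, k, n_taps)
        e[k] = d[k] - w @ u
        w = w + 2 * mu * e[k] * u                 # steepest-descent update
    return w, e

def nlms(x, d, n_taps, mu, eps=1e-8):
    w, e = np.zeros(n_taps), np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = regressor(x, k, n_taps)
        e[k] = d[k] - w @ u
        w = w + mu * e[k] * u / (eps + u @ u)     # step normalized by input power
    return w, e

def rls(x, d, n_taps, lam, delta=100.0):
    w, e = np.zeros(n_taps), np.zeros(len(x))
    P = delta * np.eye(n_taps)                    # inverse correlation matrix estimate
    for k in range(n_taps - 1, len(x)):
        u = regressor(x, k, n_taps)
        g = P @ u / (lam + u @ P @ u)             # gain vector
        e[k] = d[k] - w @ u                       # a priori error
        w = w + g * e[k]
        P = (P - np.outer(g, u @ P)) / lam        # rank-1 update, no explicit inversion
    return w, e

def mse(e):
    return float(np.mean(e ** 2))                 # Eq. (1) averaged over all samples
```

Running these with the parameters selected above (μ = 0.1 for LMS, μ = 0.15 for NLMS, λ = 0.8 for RLS) recovers the stand-in taps. The RLS recursion propagates P instead of inverting the correlation matrix explicitly, which is where its extra per-sample cost (order N² versus order N for LMS/NLMS) comes from.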
For the LMS algorithm the error shown is higher and the convergence speed is lower than in the NLMS algorithm; similarly, the RLS algorithm has faster convergence and a lower error than the NLMS. As shown, the LMS algorithm converges after about 8438 steps, while the NLMS converges after 2812 steps and the RLS needs only 704 steps. This means that the adaptive performance of the RLS is much better than that of the NLMS and LMS algorithms.

unknown system was shown around the central frequency of 2 kHz. In order to approach the context of system identification, the set of adaptive algorithms was implemented using the same filter length N = 60. Figure 8 displays the FFT applied to the output of the implemented adaptive filtering algorithms, visualized in CCS. It was observed that the spectrum was strongly attenuated when the value of λ decreases for the RLS adaptive algorithm and when μ increases for the LMS adaptive algorithm. Similar effects appear when the filter length N increases.
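The convergence points quoted above (8438, 2812 and 704 steps) were read from the error curves. That reading can be automated by finding the first sample after which the error stays inside a tolerance band; in this small sketch the window size and tolerance are illustrative assumptions, not the authors' criterion:

```python
import numpy as np

def convergence_step(e, window=100, tol=1e-3):
    """Return the first sample index after which |e[k]| stays below `tol`
    for `window` consecutive samples (len(e) if it never settles)."""
    below = np.abs(np.asarray(e)) < tol
    for k in range(len(below) - window + 1):
        if below[k:k + window].all():
            return k
    return len(below)
```

The function degrades gracefully: if the filter never settles it returns the signal length, and the step count it reports shifts with `window` and `tol`, which is why a fixed criterion must be stated when comparing algorithms.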
when the algorithm outputs from the DSK C6713 were applied to the oscilloscope using its incorporated FFT tool. These responses are shown in Figure 9.

3.2.2 Spectrogram
The specgram function of Matlab shows a time-dependent frequency analysis which gives the power density of the signal (warmer colors correspond to higher density, while colder colors correspond to lower density). In Figure 10 the spectrogram responses for the N = 60 weight order are shown over 0.5 seconds for the analyzed algorithms.

best in comparison with the NLMS and RLS algorithms.

3.4. Accuracy in Weights Estimation
For accuracy analysis, a superimposition of the desired input coefficients (the fixed FIR filter of the unknown system) and the output weights of the adaptive filters was analyzed in Matlab. In Figure 11 the blue signal corresponds to the input coefficients and the red signal to the reached output weights. The adaptive filters were tested with N = 60. The adaptation process illustrated in Figure 11d shows the precise adjustment of the output weights of the adaptive filters to the unknown system coefficients.

a) LMS with N = 60 and μ = 0.1. b) NLMS with N = 60 and μ = 0.15.
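The superimposition check of Section 3.4 can also be quantified numerically. Below is a hypothetical helper (not from the paper) that compares adapted weights against the known FIR taps and reports the maximum deviation, the weight-error MSE, and the normalized misalignment in dB, a common accuracy figure for system identification:

```python
import numpy as np

def weight_accuracy(h_true, w_est):
    """Compare adapted filter weights against the known FIR taps."""
    err = np.asarray(w_est) - np.asarray(h_true)
    return {
        "max_abs_dev": float(np.max(np.abs(err))),
        "mse": float(np.mean(err ** 2)),
        # Normalized misalignment in dB: 10*log10(||w - h||^2 / ||h||^2)
        "misalignment_db": float(
            10 * np.log10(np.sum(err ** 2) / np.sum(np.asarray(h_true) ** 2))
        ),
    }
```

A perfect match drives the misalignment toward minus infinity, so in practice one checks that it falls below a target such as -30 dB rather than testing for exact equality.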