
2012 IEEE International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2012) November 4-7, 2012

A Modified IPNLMS Algorithm Using System Sparseness

Genki Hirano and Tetsuya Shimamura
Graduate School of Science and Engineering, Saitama University
255 Shimo-Okubo, Sakura-ku, Saitama 338-8570, Japan
Email: {hirano, shima}@sie.ics.saitama-u.ac.jp

Abstract—Channels with a sparse impulse response appear in echo paths, such as network echo and acoustic echo paths. Proportionate adaptive algorithms have been proposed to achieve faster convergence than the normalized least mean square (NLMS) algorithm on such sparse systems. In general, the weight of the proportional term in the improved proportionate NLMS (IPNLMS) algorithm is set to a large value when the system is sparse. However, the relationship between the weight of the proportional term and the system sparseness is not clear. By computer simulations, we reveal that the optimal weight increases as the system sparseness increases. We then propose a variable weight of the proportional term in the IPNLMS algorithm based on the system sparseness. Computer simulations demonstrate that the proposed algorithm provides faster convergence than the conventional NLMS and IPNLMS algorithms on sparse systems.

Keywords - Adaptive filter; sparse system; NLMS; IPNLMS

I. INTRODUCTION

Figure 1. An example of sparse channel impulse response.

The normalized least mean square (NLMS) algorithm is widely used in adaptive signal processing applications such as finite impulse response (FIR) estimation and equalization [1][2]. It is well known that the NLMS algorithm is considerably affected by the parameter called the step size [3][4]. The NLMS algorithm is simple, but it is not well suited to estimating a sparse channel. In a sparse channel, most of the individual impulse response values are zero or nearly zero. An example of such a channel is shown in Fig. 1. Such channels arise in room acoustic echo [5], circuit echo paths, underwater channels [6], and so on. Sparse impulse response channel estimation is applied to echo cancellation [7][8], such as acoustic echo cancellation. The NLMS algorithm is inefficient for estimating a sparse channel impulse response because a fixed step size parameter is assigned to every filter tap.

Proportionate adaptive algorithms have been proposed to improve the convergence speed when a channel has a sparse impulse response. In a proportionate adaptive algorithm, a step size is assigned individually to each filter tap. The proportionate NLMS (PNLMS) algorithm [7] was proposed based on this idea, and it accelerates the initial convergence speed for sparse impulse response estimation. On the other hand, the PNLMS algorithm often behaves worse than the NLMS algorithm when the impulse response is not sparse. The improved proportionate NLMS (IPNLMS) algorithm [9], one of the proportionate adaptive algorithms, assigns a combination of a fixed and a variable step size parameter to each filter tap. Through this combination, the IPNLMS algorithm has the characteristics of both NLMS and PNLMS, and it shows faster convergence than the PNLMS and NLMS algorithms on both sparse and non-sparse channels [10]. Experiential knowledge [6] suggests that the parameter of IPNLMS should be set to a large value when the system is expected to be sparse. However, the relationship between the parameter of IPNLMS and the system sparseness is not clear [11]. At present, the parameter is determined experientially and a constant value is used [9][12].

In this paper, we reveal the relationship between the parameter of IPNLMS and the system sparseness by computer simulations. The simulation results provide a clue for selecting the parameter of IPNLMS. Based on this relationship, we propose a modified IPNLMS algorithm which automatically controls the parameter value using the system sparseness. Namely, the proposed algorithm controls the step size of each tap based on the estimated system impulse response at each iteration.

The organization of this paper is as follows. The PNLMS and IPNLMS algorithms are introduced in Section II. In Section III, we show the relationship between the parameter of IPNLMS and the system sparseness, and derive the proposed algorithm. The details of the simulations and the results are presented in Section IV. Conclusions are drawn in Section V.




II. PNLMS AND IPNLMS ALGORITHMS

Figure 2. System identification model using adaptive filter.

We assume the system identification model shown in Fig. 2 throughout this paper. The unknown channel is linear and time invariant; it is described by a discrete-time FIR filter with a maximum lag of L − 1 (i.e., the filter length is L). The adaptive filter is used to identify the unknown system. The input vector x(n) and filter vector w(n) at the nth iteration are defined as

x(n) = [x(n), x(n − 1), ..., x(n − L + 1)]^T    (1)

and

w(n) = [w(1, n), w(2, n), ..., w(L, n)]^T    (2)

where x(n) is the input signal and w(1, n), w(2, n), ..., w(L, n) are the adaptive filter coefficients. The filter output signal y(n) and error signal e(n) are

y(n) = w^T(n) x(n),    (3)

e(n) = d(n) − y(n),    (4)

respectively, where d(n) is the desired signal. The updating equation of the proportionate adaptive algorithm is given by

K(n) = diag[k(1, n), k(2, n), ..., k(L, n)],    (5)

w(n + 1) = w(n) + \frac{µ K(n) x(n) e(n)}{x^T(n) K(n) x(n) + δ}    (6)

where µ (0 < µ < 2) is the fixed step size parameter, K(n) is the diagonal matrix that regulates the step size of the respective taps of the adaptive filter, and δ is a positive constant used to prevent the denominator in (6) from becoming zero. The fixed step size µ significantly governs the convergence speed and the final error level. Therefore, by individual adjustment through the step-size matrix, a proportionate adaptation is expected to improve the convergence rate. The emphasis of most proportionate adaptive algorithms is placed on the calculation of the components of this individual adjustment step-size matrix.
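As a concrete illustration of (3)-(6), the following minimal Python sketch (ours, not code from the paper; NumPy is assumed) performs one iteration of the proportionate update, with the diagonal of K(n) passed as a length-L vector k. Setting every element of k to 1/L reduces the update to the standard NLMS algorithm.

    import numpy as np

    def proportionate_update(w, x, d, k, mu=1.0, delta=1e-6):
        """One iteration of the proportionate update in (5)-(6).

        w : current coefficient vector, shape (L,)
        x : input vector [x(n), ..., x(n - L + 1)], shape (L,)
        d : desired sample d(n)
        k : diagonal of K(n) as a vector; np.full(L, 1.0 / L) gives plain NLMS
        """
        y = w @ x                                     # filter output, Eq. (3)
        e = d - y                                     # error signal, Eq. (4)
        kx = k * x                                    # K(n) x(n) without forming the matrix
        w_new = w + mu * kx * e / (x @ kx + delta)    # Eq. (6)
        return w_new, e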
The diagonal elements of K(n) in the PNLMS algorithm are calculated as follows:

l(n) = max[δ_P, |w(1, n)|, ..., |w(L, n)|],    (7)

g(l, n) = max[ρ l(n), |w(l, n)|],    (8)

k(l, n) = \frac{g(l, n)}{\sum_{i=1}^{L} g(i, n)}.    (9)

The parameters ρ and δ_P are small positive numbers used for regularization. PNLMS shows faster initial convergence than NLMS on sparse systems, but it often becomes worse than NLMS on non-sparse systems. This is because the components of the adjustment step-size matrix are very sensitive to the system impulse response to be estimated. To solve this problem, IPNLMS uses not only the filter coefficients but also a fixed value that depends on the filter length. The components of the adjustment matrix in IPNLMS are described by

k(l, n) = \frac{1 − α}{2L} + (1 + α) \frac{|w(l, n)|}{2 \sum_{i=1}^{L} |w(i, n)| + ε}.    (10)

In (10), ε is used to prevent division by zero. The parameter α (−1 ≤ α < 1) combines the first term with the second term (the so-called proportional term) on the right-hand side of (10). This means that α controls the weighting between NLMS and PNLMS. When α = −1, IPNLMS is equivalent to NLMS. In the case of a sparse system, α is set to a large value so that IPNLMS can track the system sparseness rapidly. The weight of the proportional term, α, significantly affects the convergence rate of IPNLMS.
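For illustration only (the default parameter values below are placeholders, not values prescribed by the paper), the PNLMS gains (7)-(9) and the IPNLMS gains (10) can be computed as:

    import numpy as np

    def pnlms_gains(w, rho=0.01, delta_p=0.01):
        """Diagonal elements of K(n) for PNLMS, Eqs. (7)-(9)."""
        l_n = max(delta_p, np.max(np.abs(w)))          # Eq. (7)
        g = np.maximum(rho * l_n, np.abs(w))           # Eq. (8)
        return g / np.sum(g)                           # Eq. (9)

    def ipnlms_gains(w, alpha=0.0, eps=1e-8):
        """Diagonal elements of K(n) for IPNLMS, Eq. (10)."""
        L = len(w)
        abs_w = np.abs(w)
        return (1 - alpha) / (2 * L) + (1 + alpha) * abs_w / (2 * np.sum(abs_w) + eps)

Either gain vector can be used as the vector k in the update sketch above.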
III. A MODIFIED IPNLMS ALGORITHM USING SYSTEM SPARSENESS

As described in Section II, the convergence rate of IPNLMS is affected by the weight of the proportional term according to the system sparseness. The weight of the proportional term should be a large value when the unknown system is expected to be sparse. However, there is no distinct measure for deciding this parameter. In this section, we show the relationship between the system sparseness and the optimal value of the weight of the proportional term. Then, we introduce a modified IPNLMS algorithm that uses this relationship.

The first problem is to transform the system sparseness into a numerical value, so that a degree of system sparseness can be associated with an optimal value of the weight. The following measure is used for this purpose:

S(w) = \frac{L}{L − \sqrt{L}} \left( 1 − \frac{\|w\|_1}{\sqrt{L} \|w\|_2} \right)    (11)

where w is a system impulse response, L is the length of w, and \|w\|_1 and \|w\|_2 are the L1-norm and L2-norm of w, which are defined as

\|w\|_1 = \sum_{l=1}^{L} |w_l|    (12)


and

\|w\|_2 = \sqrt{\sum_{l=1}^{L} w_l^2}    (13)

respectively. S(w) represents the system sparseness of w [13]. The range of S(w) is [0, 1]. Thus, it is convenient to transform the range of α from [−1, 1) into [0, 1). The transformation is achieved by β = (α + 1)/2. In this case, Eq. (10) is rewritten as

k(l, n) = \frac{1 − β}{L} + \frac{β |w(l, n)|}{\sum_{i=1}^{L} |w(i, n)| + ε}.    (14)
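The sparseness measure (11)-(13) and the reparameterized gains (14) translate directly into code. The sketch below is illustrative only (NumPy assumed), and it leaves the all-zero vector, for which (11) is undefined, unhandled.

    import numpy as np

    def sparseness(w):
        """System sparseness S(w) of Eq. (11); w must not be the all-zero vector."""
        L = len(w)
        l1 = np.sum(np.abs(w))                         # L1-norm, Eq. (12)
        l2 = np.sqrt(np.sum(w ** 2))                   # L2-norm, Eq. (13)
        return (L / (L - np.sqrt(L))) * (1.0 - l1 / (np.sqrt(L) * l2))

    def ipnlms_gains_beta(w, beta, eps=1e-8):
        """Diagonal elements of K(n) with the weight beta in [0, 1), Eq. (14)."""
        L = len(w)
        abs_w = np.abs(w)
        return (1 - beta) / L + beta * abs_w / (np.sum(abs_w) + eps)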
We have investigated the relationship between the value of the system sparseness and the optimal value of β in terms of convergence speed by computer simulations. The specific conditions of the computer simulations are explained in the next section. The result is shown in Fig. 3.

Figure 3. The relationship between system sparseness and the optimal value of β, and an approximated curve of the relationship.

Fig. 3 shows the optimal value of β, i.e., the value which provides the fastest convergence of IPNLMS to the prescribed mean square error (MSE) levels, for different sparseness characteristics of a system. Here, β = S^2 is obtained as the approximated curve. A notable point is the position of the points at each MSE level in Fig. 3. When the prescribed MSE is high (−20 dB), most of the points are located above the approximated curve. However, most of the points for a low prescribed MSE (−30 dB) lie below the approximated curve. From these observations, we propose to vary β (i.e., the weight of the proportional term) automatically using the resulting relationship and tendency. The determination method for β is

β(n) = γ \frac{S^2(w(n))}{S^2(w(n)) + e^2(n) + δ}    (15)

where γ and δ are positive constants. δ is set to a small value to avoid division by zero, and γ plays a role similar to the weight of the proportional term in IPNLMS. For the modified IPNLMS algorithm proposed in this section, (15) is used together with

k(l, n) = \frac{1 − β(n)}{L} + \frac{β(n) |w(l, n)|}{\sum_{i=1}^{L} |w(i, n)| + ε}    (16)

in the proportionate adaptive algorithm.

IV. SIMULATIONS

We consider the situation in which the unknown system is identified by an adaptive filter, as shown in Fig. 2. The input signal is white noise with the standard normal distribution N(0, 1). The noise v(n) is also white, and its power is −40 dB. Hence, the theoretical minimum of the error curve is −40 dB. The fixed step size µ = 1 is used in all simulations. We use the MSE as the performance measure, and the number of independent trials of each simulation is 100 in order to calculate the MSE.

The first simulation finds the relationship between the optimal value of β and the system sparseness. The test systems are generated under the following conditions:

• The length of the impulse response is 100.
• The coefficients of the impulse response take only the values 0 and 1.

We change the ratio of coefficients equal to 0 and 1. The simulation searches for the optimal value of β in IPNLMS for each impulse response. The number of impulse response patterns is 100; hence, the number of sparseness characteristic patterns is also 100. β is chosen from 0 to 1 in increments of 0.05; however, β is set to 0.999 instead of 1, because β ∈ [0, 1). The optimal value of β is decided as the one that achieves the fastest convergence to the prescribed MSE. The relationship between the optimal value of β and the sparseness characteristic is shown in Fig. 3.
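This search can be sketched as follows. The sketch is ours and makes simplifying assumptions: a single run per β with a smoothed squared error standing in for the 100-trial MSE, a fixed maximum number of iterations, and the helper name first_hit_iteration is hypothetical.

    import numpy as np

    def first_hit_iteration(h, beta, n_iters=5000, mse_db=-30.0, mu=1.0,
                            delta=1e-6, eps=1e-8, noise_db=-40.0, seed=0):
        """Iterations needed for IPNLMS with a fixed beta (Eq. (14)) to reach mse_db.

        h : true 0/1 impulse response of one test system, length L.
        Returns np.inf if the prescribed level is never reached.
        """
        rng = np.random.default_rng(seed)
        L = len(h)
        w = np.zeros(L)
        x_buf = np.zeros(L)
        noise_std = 10 ** (noise_db / 20)
        target = 10 ** (mse_db / 10)
        err_avg = 1.0                                   # smoothed squared error
        for n in range(n_iters):
            x_buf = np.roll(x_buf, 1)
            x_buf[0] = rng.standard_normal()            # white N(0, 1) input
            d = h @ x_buf + noise_std * rng.standard_normal()
            e = d - w @ x_buf
            abs_w = np.abs(w)
            k = (1 - beta) / L + beta * abs_w / (np.sum(abs_w) + eps)   # Eq. (14)
            kx = k * x_buf
            w = w + mu * kx * e / (x_buf @ kx + delta)  # Eq. (6)
            err_avg = 0.98 * err_avg + 0.02 * e * e     # single-run proxy for the averaged MSE
            if err_avg <= target:
                return n
        return np.inf

    # Grid search over beta for one artificial 0/1 test system (about 20% ones).
    rng = np.random.default_rng(1)
    h = (rng.random(100) < 0.2).astype(float)
    betas = np.append(np.arange(0.0, 1.0, 0.05), 0.999)  # 0.999 stands in for beta = 1
    best_beta = min(betas, key=lambda b: first_hit_iteration(h, b))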
The next simulation compares the proposed algorithm with the conventional NLMS and IPNLMS algorithms.
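For reference, one full iteration of the proposed algorithm entering this comparison, combining (6), (11), (15) and (16), could be sketched as follows. This is our reading of the method rather than the authors' implementation, and the guard for an all-zero coefficient vector is an added practical detail that the paper does not discuss.

    import numpy as np

    def modified_ipnlms_step(w, x, d, gamma=0.1, mu=1.0, delta=1e-6, eps=1e-8):
        """One iteration of the proposed update, combining Eqs. (6), (11), (15), (16)."""
        L = len(w)
        abs_w = np.abs(w)
        l2 = np.sqrt(np.sum(w ** 2))
        if l2 == 0.0:
            S = 0.0        # all-zero initial estimate: guard not specified in the paper
        else:
            S = (L / (L - np.sqrt(L))) * (1.0 - np.sum(abs_w) / (np.sqrt(L) * l2))  # Eq. (11)
        e = d - w @ x                                                 # Eqs. (3)-(4)
        beta = gamma * S ** 2 / (S ** 2 + e ** 2 + delta)             # Eq. (15)
        k = (1 - beta) / L + beta * abs_w / (np.sum(abs_w) + eps)     # Eq. (16)
        kx = k * x
        w_new = w + mu * kx * e / (x @ kx + delta)                    # Eq. (6)
        return w_new, e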

One of the simulations uses the systems shown in Fig. 4 as typical artificial sparse systems; in this simulation, γ = 0.1 and α = ±0.5 are used. A further simulation uses the network echo path impulse responses defined in ITU-T G.168 [14]; here, γ = 0.01 and α = ±0.5 are used. We use model 1 and model 5 of ITU-T G.168. For model 1 and model 5, several zero coefficients are inserted at the beginning of the impulse response to increase the number of taps to 100.

Figure 4. Impulse response of unknown systems. Systems 1 and 2 both have the sparseness characteristic S = 0.8656 and are considered typical sparse systems.

The simulation result for the typical sparse systems is shown in Fig. 5(a). Fig. 5(b) shows the result of the echo path identification in which the two echo path models, model 1 and model 5, are switched.

Figure 5. Convergence comparison. (a) Performance in typical sparse systems; the unknown system is switched between system 1 and system 2. (b) Performance in the network echo path model; the unknown system is switched between model 1 and model 5 in ITU-T G.168.

The results in Fig. 5 indicate that the proposed algorithm is superior to the conventional algorithms on sparse systems. In particular, the proposed algorithm converges faster than the conventional algorithms on the typical sparse systems.

V. CONCLUSION

In this paper, two purposes have been achieved. The first is to find a measure that provides a clue for selecting the weight of the proportional term in IPNLMS. The second is to propose a modified IPNLMS algorithm in which this weight is automatically controlled. The relationship between the weight of the proportional term in IPNLMS and the system sparseness has been shown, and the proposed algorithm is derived by taking this relationship into account. Computer simulations have demonstrated that the proposed algorithm provides faster convergence than the conventional algorithms on sparse systems.

REFERENCES

[1] J. Homer, "Detection Guided NLMS Estimation of Sparsely Parametrized Channels," IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 47, no. 12, pp. 1437-1442, 2000.
[2] S. F. Cotter and B. D. Rao, "Sparse Channel Estimation via Matching Pursuit With Application to Equalization," IEEE Trans. Communications, vol. 50, no. 3, pp. 374-377, 2002.
[3] Y. Tsuda and T. Shimamura, "An Improved NLMS Algorithm for Channel Equalization," in Proc. IEEE International Symposium on Circuits and Systems, pp. V-353-V-356, 2002.
[4] S. Haykin, Adaptive Filter Theory, Fourth Edition, Prentice-Hall, 2002.
[5] S. Makino, Y. Kaneda and N. Koizumi, "Exponentially Weighted Stepsize NLMS Adaptive Filter Based on the Statistics of a Room Impulse Response," IEEE Trans. Speech and Audio Processing, vol. 1, no. 1, pp. 101-108, 1993.
[6] K. Pelekanakis and M. Chitre, "Comparison of Sparse Adaptive Filters for Underwater Acoustic Channel Equalization/Estimation," in Proc. IEEE International Conference on Communication Systems, pp. 395-399, 2010.
[7] D. L. Duttweiler, "Proportionate Normalized Least-Mean-Squares Adaptation in Echo Cancelers," IEEE Trans. Speech Audio Process., vol. 8, no. 5, pp. 508-518, 2000.
[8] O. Hoshuyama, R. A. Goubran and A. Sugiyama, "A Generalized Proportionate Variable Step-Size Algorithm for Fast Changing Acoustic Environments," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 4, pp. IV-161-IV-164, 2004.
[9] J. Benesty and S. L. Gay, "An Improved PNLMS Algorithm," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. II-1881-II-1884, 2002.
[10] S. Sohn, J. Yun, J. Lee, H. Bae and H. Choi, "Convex Combination of Subband Adaptive Filters for Sparse Impulse Response System," in Proc. IEEE 54th International Midwest Symposium on Circuits and Systems, pp. 1-4, 2011.
[11] L. Liu, M. Fukumoto and S. Zhang, "A Variable Parameter Improved Proportionate Normalized LMS Algorithm," in Proc. IEEE Asia Pacific Conference on Circuits and Systems, pp. 201-204, 2008.
[12] J. Benesty and Y. Huang, Adaptive Signal Processing (Signals and Communication Technology), Springer-Verlag, 2003.
[13] Y. Huang, J. Benesty and J. Chen, Acoustic MIMO Signal Processing (Signals and Communication Technology), Springer-Verlag, 2006.
[14] ITU-T, "ITU-T Recommendation G.168: Digital Network Echo Cancellers," 2009.

