
Signal Processing 148 (2018) 234–240

Short communication

Sign normalised spline adaptive filtering algorithms against impulsive noise

Chang Liu∗, Zhi Zhang, Xiao Tang
School of Electronic Engineering, Dongguan University of Technology, 1 DaXue Road, Dongguan 523808, China

Article history: Received 17 October 2017; Revised 19 December 2017; Accepted 19 February 2018; Available online 22 February 2018

Keywords: Spline adaptive filter; Nonlinear adaptive filter; Sign algorithm; Variable step-size

Abstract

In this paper, a sign normalised least mean square algorithm (SNLMS) based on the Wiener spline adaptive filter, called SAF-SNLMS, is proposed. The proposed algorithm is derived by minimising the absolute value of the a posteriori error. Moreover, to further improve the convergence performance of the SAF-SNLMS, a variable step-size scheme is introduced. Simulation results demonstrate that the SAF-SNLMS and its variable step-size variant obtain more robust performance than the existing spline adaptive filter algorithms in impulsive noise.

© 2018 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail address: chaneaaa@163.com (C. Liu).
https://doi.org/10.1016/j.sigpro.2018.02.022

1. Introduction

The merits of the linear adaptive filter are its simple design and analysis, which lead to its wide application in many practical engineering problems such as acoustic echo cancellation (AEC), active noise control (ANC), channel estimation and equalization. For the linear adaptive filter, the weight coefficients can be updated by several well-established adaptive algorithms such as the least mean square (LMS) algorithm, the normalized least mean square (NLMS) algorithm and the affine projection algorithm (APA). However, the linear model suffers from performance degradation because of its failure to model nonlinearity.

In recent years, in order to model nonlinearity, several nonlinear spline adaptive filters (SAFs) have been introduced, such as the Wiener spline filter [1], the Hammerstein spline filter [2] and the cascade spline filter [3]. The nonlinearity in this kind of structure is modeled by an adaptive look-up table (LUT) whose control points are interpolated by a local low-order polynomial spline curve. The adaptive spline filters achieve improved performance in modelling nonlinearity. However, since their adaptation is derived by minimising the squared value of the instantaneous error, the performance of the spline adaptive filter can deteriorate seriously in impulsive noise. To alleviate this problem, the sign adaptive algorithm is an excellent candidate. The weight vector in a sign adaptive algorithm is commonly updated in accordance with the L1-norm optimization criterion. The affine projection sign algorithm proposed in [4] guarantees robustness against impulsive noise. In addition, several sign subband adaptive filters with variable step-size were introduced to improve the convergence speed and combat impulsive noise [5–7].

In this brief paper we extend the sign idea to the spline adaptive filter and propose a new sign normalised least mean square algorithm based on the Wiener spline adaptive filter, called SAF-SNLMS. It is derived by minimising the absolute value of the a posteriori error and is used to identify Wiener-type nonlinear systems. Furthermore, by adjusting the step-size according to the squared value of the impulsive-free error, the variable step-size SAF-SNLMS (SAF-VSS-SNLMS) algorithm is proposed. It is demonstrated that the proposed algorithms offer better convergence performance and robustness than the conventional SAF algorithms in the impulsive noise environment.

2. SAF-NLMS algorithm

Fig. 1 shows the structure of the SAF [1,8].

Fig. 1. Structure of SAF [1,8].

Assuming that the input of the SAF at time n is x(n), s(n) represents the output of the linear network, which is given by

s(n) = w^T(n) x(n),   (1)

where w(n) = [w(0), w(1), ..., w(M−1)]^T is the weight vector of the FIR filter with length M, and x(n) = [x(n), x(n−1), ..., x(n−M+1)]^T is the input vector of the linear network.

With reference to the spline interpolation scheme in [1], third-order spline curves are applied, thus the output of the whole

system y(n) can be expressed as

y(n) = φ_i(u_n) = u_n^T C q_{i,n},   (2)

where u_n = [u_n³, u_n², u_n, 1]^T and q_{i,n} = [q_{i,n}, q_{i+1,n}, q_{i+2,n}, q_{i+3,n}]^T is the control point vector. The superscript T denotes transposition. C is the spline basis matrix, whose dimension is 4 × 4. Two suitable types of spline basis are the Catmull-Rom (CR) spline and the B-spline, whose basis matrices are given by (rows separated by semicolons)

C_B = (1/6) [−1 3 −3 1; 3 −6 3 0; −3 0 3 0; 1 4 1 0],  C_CR = (1/2) [−1 3 −3 1; 2 −5 4 −1; −1 0 1 0; 0 2 0 0],   (3)

and the span index i and the local parameter u_n are defined as

u_n = s(n)/Δx − ⌊s(n)/Δx⌋,   (4)

i = ⌊s(n)/Δx⌋ + (Q − 1)/2,   (5)

where Δx is the uniform spacing between two adjacent control points, Q is the total number of control points and ⌊·⌋ denotes the floor operator.

Using the Lagrange multiplier method, the cost function for the SAF-NLMS can be defined as [8]

J_0(q_{i,n+1}) = [1/(2 u_n^T u_n)] e²(n) + (1/2) ||q_{i,n+1} − q_{i,n}||²,   (6)

where (1/2) × [e(n)/(u_n^T u_n)] can be viewed as the Lagrange multiplier [8]. e(n) is the a priori error, which can be expressed as e(n) = d(n) − y(n) = d(n) − u_n^T C q_{i,n}, and d(n) is the desired signal, which contains impulsive noise.

Taking the derivatives of (6) with respect to q_{i,n+1} and w_{n+1}, respectively, and setting them to zero, we obtain the two recursive equations of the tap weights and control points for the SAF-NLMS algorithm [8]

w_{n+1} = w_n + μ_w [e(n)/(u_n^T u_n + ε)] (1/Δx) u̇_n^T C q_{i,n} x_n,   (7)

q_{i,n+1} = q_{i,n} + μ_q [e(n)/(u_n^T u_n + ε)] C^T u_n,   (8)

where μ_w and μ_q are the step-sizes for the linear and nonlinear networks, respectively, and the small positive constant ε is used to avoid division by zero.

3. Proposed sign SAF-NLMS algorithms

3.1. SAF-SNLMS algorithm

The updating equation of q_{i,n} in the proposed sign SAF-NLMS algorithm can be formulated through the following constrained optimization problem:

min_{q_{i,n+1}} |e_p(n)| = |d(n) − y(n+1)| = |d(n) − u_n^T C q_{i,n+1}|
subject to ||q_{i,n+1} − q_{i,n}||² ≤ β²,   (9)

where e_p(n) = d(n) − u_n^T C q_{i,n+1} is defined as the a posteriori error, β² is selected to be a small parameter ensuring that the update of q_{i,n} does not change drastically, |·| is the absolute value operation and ||·|| denotes the Euclidean norm of a vector.

Then, using the Lagrange multiplier method, the cost function can be expressed as

J(q_{i,n+1}) = |e_p(n)| + ρ_0 [||q_{i,n+1} − q_{i,n}||² − β²],   (10)

where ρ_0 denotes the Lagrange multiplier. Setting the derivative of the cost function J(q_{i,n+1}) with respect to q_{i,n+1} equal to zero, we have

q_{i,n+1} = q_{i,n} + [1/(2ρ_0)] C^T u_n sgn[e_p(n)],   (11)

where sgn[·] is the sign function.

Substituting (11) into the constraint condition in (9), we obtain

1/(2ρ_0) = β/||C^T u_n||.   (12)

Note that C^T is a constant matrix and ||C^T u_n|| ≤ ||C^T|| · ||u_n||, where ||C^T|| is defined as the spectral norm of the matrix C^T, ||C^T|| := sup_{u_n≠0} [||C^T u_n||/||u_n||]. Thus, (12) can be rewritten as

1/(2ρ_0) ≥ β_0/(√(u_n^T u_n) + ε_0),   (13)

where β_0 = β/||C^T|| and ε_0 is a small positive constant used to avoid division by zero.

Considering the lower bound of 1/(2ρ_0) in (13), the updating equation of q_{i,n} can be derived as

q_{i,n+1} = q_{i,n} + μ_q {sgn[e_p(n)]/(√(u_n^T u_n) + ε_0)} C^T u_n.   (14)

In a similar manner, the cost function associated with the weight vector w_n of the FIR filter can be formulated as

J(w_{n+1}) = |e_p(n)| + ρ_0 [||w_{n+1} − w_n||² − β²],   (15)
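As a concrete numerical illustration of the spline filter's forward computation, Eqs. (1), (2), (4) and (5) can be sketched as below; the function name, the argument layout and the clamping of the span index at the LUT edges are our own assumptions rather than details from the paper.

```python
import numpy as np

# Cubic B-spline basis matrix from Eq. (3).
C_B = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                              [ 3.0, -6.0,  3.0, 0.0],
                              [-3.0,  0.0,  3.0, 0.0],
                              [ 1.0,  4.0,  1.0, 0.0]])

def saf_forward(w, x_vec, q, dx, C=C_B):
    """Forward pass of the Wiener SAF.
    w     : FIR weight vector (length M)
    x_vec : input regressor [x(n), ..., x(n-M+1)]
    q     : LUT of Q control points
    dx    : uniform control-point spacing (delta x)
    Returns (y, s, u_vec, i) so the adaptation step can reuse them."""
    Q = len(q)
    s = np.dot(w, x_vec)                       # Eq. (1): linear network output
    u = s / dx - np.floor(s / dx)              # Eq. (4): local parameter in [0, 1)
    i = int(np.floor(s / dx) + (Q - 1) // 2)   # Eq. (5): span index
    i = min(max(i, 0), Q - 4)                  # clamp so q[i:i+4] stays inside the LUT
    u_vec = np.array([u**3, u**2, u, 1.0])     # u_n = [u^3, u^2, u, 1]^T
    y = u_vec @ C @ q[i:i + 4]                 # Eq. (2): spline interpolation output
    return y, s, u_vec, i
```

For a straight-line LUT the cubic B-spline reproduces the line exactly (linear precision of B-splines), which is why the SAF can be initialised as an identity nonlinearity, as done in Section 4.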
Letting the derivative of the cost function J(w_{n+1}) with respect to w_{n+1} equal zero, the updating equation of w_n can be expressed as

w_{n+1} = w_n + μ_w {sgn[e_p(n)]/(√(u_n^T u_n) + ε_0)} (1/Δx) u̇_n^T C q_{i,n+1} x_n,   (16)

where u̇_n = [3u_n², 2u_n, 1, 0]^T.

Note that in (14) and (16) the a posteriori error e_p(n) is inaccessible; we approximate it by the a priori error e(n) [4,6]. The updating equations can be rewritten as

q_{i,n+1} = q_{i,n} + μ_q {sgn[e(n)]/(√(u_n^T u_n) + ε_0)} C^T u_n,   (17)

w_{n+1} = w_n + μ_w {sgn[e(n)]/(√(u_n^T u_n) + ε_0)} (1/Δx) u̇_n^T C q_{i,n+1} x_n.   (18)

3.2. Variable step-size SAF-SNLMS (SAF-VSS-SNLMS) algorithm

It is well known that the variable step-size scheme is an efficient solution for improving the convergence of linear adaptive filters [9–11]. In [9], the step-size is controlled by the square of the a priori error, which is generally distorted by the ambient noise. To address this problem, the noise-constrained methods presented in [10,11] obtain better convergence performance. For nonlinear filters, it is difficult to acquire an analytical result for the optimal variable step-size. However, as in the case of linear adaptive filters, the step-size for nonlinear filters should be as large as possible to ensure a fast convergence rate at the beginning of filtering. When the filter approaches its steady state, the step-size should decrease gradually, leading to a lower steady-state error [12].

In this work, the adjustments of the variable step sizes are controlled by the squared value of the impulsive-free error, i.e.,

μ_w(n) = α μ_w(n−1) + (1 − α) min[ê_o²(n), μ_w(n−1)],   (19)

μ_q(n) = α μ_q(n−1) + (1 − α) min[ê_o²(n), μ_q(n−1)],   (20)

where α is a forgetting factor approaching one and ê_o²(n) is the estimate of the squared value of the impulsive-free error, which can be obtained by [13]

ê_o²(n) = λ ê_o²(n−1) + c_1 (1 − λ) med(γ_n),   (21)

where λ is another forgetting factor close to but smaller than one, c_1 = 1.483(1 + 5/(N_w − 1)) is a finite correction factor [13], N_w is the data window length, γ_n = [e²(n), e²(n−1), ..., e²(n−N_w+1)] and med(·) denotes the median operator.

It is necessary to select the initial values of the step-sizes μ_w(0) and μ_q(0) in (19) and (20). At the beginning of adaptation, the initial step-sizes should be set to suitable values in order to avoid the probable divergence caused by the large squared value of the impulsive-free error ê_o²(n). In this initial phase, the adaptation is implemented using fixed step-sizes and the SAF-VSS-SNLMS algorithm behaves like the SAF-SNLMS. On the other hand, the initial step-sizes should be selected as large as possible to guarantee a fast convergence rate. In this paper, the initial values are chosen as μ_w(0) = μ_q(0) = 0.05 for all the simulations; thus, at each iteration, μ_w(n) and μ_q(n) are equal, as in [1]. The proposed SAF-VSS-SNLMS algorithm is summarized in Table 1.

Table 1
Summary of the SAF-VSS-SNLMS algorithm.

Initialize: w_{−1} = [κ, 0, ..., 0]^T, q_{−1}, μ_w(0), μ_q(0), ε, ε_0, α, λ, N_w, c_1, ê_o²(0)
1:  for n = 0, 1, ..., do
2:    s(n) = w^T(n) x(n)
3:    u_n = s(n)/Δx − ⌊s(n)/Δx⌋
4:    i_n = ⌊s(n)/Δx⌋ + (Q − 1)/2
5:    y(n) = u_n^T C q_{i,n}
6:    e(n) = d(n) − y(n)
7:    q_{i,n+1} = q_{i,n} + μ_q(n−1) {sgn[e(n)]/(√(u_n^T u_n) + ε_0)} C^T u_n
8:    w_{n+1} = w_n + μ_w(n−1) {sgn[e(n)]/[(√(u_n^T u_n) + ε_0) Δx]} u̇_n^T C q_{i,n+1} x_n
9:    ξ(n) = med[e²(n), e²(n−1), ..., e²(n−N_w+1)]
10:   ê_o²(n) = λ ê_o²(n−1) + c_1 (1 − λ) ξ(n)
11:   μ_w(n) = α μ_w(n−1) + (1 − α) min[ê_o²(n), μ_w(n−1)]
12:   μ_q(n) = α μ_q(n−1) + (1 − α) min[ê_o²(n), μ_q(n−1)]
13: end for

The computational burdens of the algorithms mentioned above are summarized in Table 2. For the spline output calculation and adaptation, only 4K_p multiplications plus 4K_q additions are needed, thanks to the reuse of past computations [1], where K_p and K_q (less than 16) are constants that can be determined with reference to the spline implementation structure in [14]. As can be seen in Table 2, in the case of a long FIR filter contained in the spline adaptive filter, where the FIR length is M ≫ 7, the sign algorithms save approximately M multiplications due to the sign operation on the a priori error, and require only one extra sign operation.

Table 2
Comparison of the computational complexities.

Algorithm      | Multiplications | Additions     | Divisions
SAF-LMS [1]    | 2M + 4K_p + 1   | 2M + 4K_q     | 0
SAF-NLMS [8]   | 2M + 4K_p + 5   | 2M + 4K_q + 4 | 2
SAF-SNLMS      | M + 4K_p + 1    | 2M + 4K_q + 4 | 2
SAF-VSS-SNLMS  | M + 4K_p + 8    | 2M + 4K_q + 7 | 2

3.3. Convergence analysis

In this section we analyse the convergence performance of the proposed sign algorithm in Bernoulli-Gaussian (BG) noise. The analysis is derived using the energy conservation method. Assume that the ambient noise is η(n) = v(n) + ς(n) = v(n) + z(n)b(n), where v(n) is white Gaussian background noise with zero mean and variance σ_v², ς(n) is the contaminated Gaussian (CG) impulse, z(n) is a white Gaussian process with zero mean and variance σ_z² = t σ_v² (t ≫ 1) and b(n) is a Bernoulli sequence with probability mass function P(b) = 1 − p for b = 0 and P(b) = p for b = 1, where p is the probability of occurrence of the impulsive noise. Thus, the variance of η(n) can be expressed as σ_η² = σ_v² + p σ_z² = (1 + pt) σ_v².

In the first phase, we study the bound of the step size μ_w. Defining the tap-weight error vector of the FIR filter as w̃_n = w_0 − w_n, where w_n → w_0, the optimal tap-weight vector of the Wiener-type spline model, and inserting it into the updating equation of w_n, we have

w̃_{n+1} = w̃_n − μ_w {sgn[e(n)]/(√(u_n^T u_n) + ε_0)} (1/Δx) u̇_n^T C q_{i,n+1} x_n.   (22)

Setting the regularization parameter ε_0 to zero and taking the mathematical expectation of the squared Euclidean norm of both sides of (22), we obtain

D(n+1) = D(n) − 2μ_w E{sgn[e(n)] ξ_w(n)} E[φ′_i(u_n)/(c_3 q_{i,n} ||u_n||)] + (μ_w²/Δx²) E[φ′_i²(u_n) ||x_n||² ||u_n||^{−2}],   (23)

where φ′_i(u_n) = u̇_n^T C q_{i,n+1}, the noise-free error is denoted by ξ_w(n) = (c_3 q_{i,n}/Δx) w̃_n^T x_n and D(n) = E[||w̃_n||²] is defined as the mean square deviation (MSD).

Assume that the noise-free error ξ_w(n) is a zero-mean Gaussian process for sufficiently long filters. Using Price's theorem [7] in (23), we obtain the approximation

E{sgn[e(n)] ξ_w(n)} ≈ √(2/π) σ_ξw² [(1 − p)/√(σ_ξw² + σ_v²) + p/√(σ_ξw² + (t + 1)σ_v²)],   (24)
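The approximation (24) can be checked numerically. The following sketch (the parameter values are our own choices) draws the noise-free error and the BG ambient noise of Section 3.3, and compares the empirical expectation with the right-hand side of (24):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
sig_xi2, sig_v2, t, p = 0.5, 0.1, 100.0, 0.05   # assumed test values, not from the paper

# Noise-free error xi_w(n) and BG noise eta(n) = v(n) + b(n) z(n).
xi = rng.normal(0.0, np.sqrt(sig_xi2), N)
eta = rng.normal(0.0, np.sqrt(sig_v2), N)
eta += (rng.random(N) < p) * rng.normal(0.0, np.sqrt(t * sig_v2), N)

# Empirical E{sgn[e(n)] xi_w(n)} with e(n) = xi_w(n) + eta(n).
empirical = np.mean(np.sign(xi + eta) * xi)

# Right-hand side of Eq. (24).
theory = (np.sqrt(2.0 / np.pi) * sig_xi2
          * ((1.0 - p) / np.sqrt(sig_xi2 + sig_v2)
             + p / np.sqrt(sig_xi2 + (t + 1.0) * sig_v2)))
```

The two values agree to within Monte Carlo error, which supports the Gaussian-mixture decomposition behind (24).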
where σ_ξw² is the variance of the noise-free error ξ_w(n). Based on the estimate of the squared value of the impulsive-free error ê_o²(n), σ_ξw² can be estimated by using the shrinkage method [15].

In order to ensure that the algorithm is stable, D(n) must decrease iteratively, i.e., D(n+1) − D(n) < 0. Thus, the bound of the step-size μ_w is given by

0 < μ_w < √(8/π) σ_ξw² Δx² [(1 − p)/√(σ_ξw² + σ_v²) + p/√(σ_ξw² + (t + 1)σ_v²)] × E[φ′_i(u_n)/(c_3 q_{i,n} ||u_n||)] / E[φ′_i²(u_n) ||x_n||² ||u_n||^{−2}].   (25)

In a similar way, we define the control point error vector q̃_n = q_0 − q_{i,n} in the second phase, where q_0 is the optimal control point vector, and then obtain

q̃_{n+1} = q̃_n − μ_q {sgn[e(n)]/(√(u_n^T u_n) + ε_0)} C^T u_n.   (26)

Taking the mathematical expectation of the energies of both sides of (26), we get

K(n+1) = K(n) − 2μ_q E{sgn[e(n)] ξ_q(n)} E[||u_n||^{−1}] + μ_q² E[||C^T u_n||² ||u_n||^{−2}],   (27)

where in this phase the noise-free error is ξ_q(n) = q̃_n^T C^T u_n and K(n) = E[||q̃_n||²]. When Δx ≪ 1 and the length of the control point vector updated at each iteration is sufficiently long, we use Price's theorem in (27) and require that K(n+1) − K(n) < 0. The bound of the step size μ_q can then be obtained as

0 < μ_q < √(8/π) σ_ξq² [(1 − p)/√(σ_ξq² + σ_v²) + p/√(σ_ξq² + (t + 1)σ_v²)] × E[||u_n||^{−1}] / E[||C^T u_n||² ||u_n||^{−2}],   (28)

where σ_ξq² is the variance of ξ_q(n).

4. Experimental results

We evaluate the performance of the proposed algorithms in the context of Wiener-type system identification, as shown in Fig. 2. All the following results are obtained by averaging over 100 Monte Carlo trials. The performance is measured by the mean square error (MSE), defined as 10 log₁₀[e(n)]². The input signal is generated by the process x(n) = ωx(n−1) + √(1 − ω²) a(n), where a(n) is a white Gaussian noise signal with zero mean and unit variance and the parameter ω, selected in the range [0, 0.95], can be interpreted as the degree of correlation between adjacent samples. Real speech inputs are also applied. The FIR filter coefficients of the SAF are initialized as w_{−1} = [1, 0, ..., 0]^T with length M = 5, while the spline model is initially set to a straight line with unit slope. For convenience, only the B-spline basis is applied in the simulations; however, similar results can also be achieved using the CR-spline basis. The unknown Wiener spline model comprises an FIR filter w_o = [0.6, −0.4, 0.25, −0.15, 0.1]^T and a nonlinear spline function represented by a LUT q_0 with 23 control points; Δx is set to 0.2 and q_0 is defined by q_0 = [−2.2, ..., −0.8, −0.91, −0.4, −0.2, 0.05, 0, −0.4, 0.58, 1.0, 1.0, 1.2, ..., 2.2].

Fig. 2. Adaptive identification scheme for simulations.

An independent white Gaussian background noise v(n) is added to the output of the unknown system, with a 30 dB signal-to-noise ratio (SNR) defined as SNR = 10 log₁₀(σ_d̃²/σ_v²), where σ_d̃² is the variance of the noise-free output d̃(n). The impulsive noise is considered as either the contaminated Gaussian (CG) impulse or symmetric α-stable (α-S) noise [7]. For the symmetric α-S noise, the fractional-order signal-to-noise ratio (FSNR) can be defined as FSNR = 10 log₁₀(E|d̃(n)|^{a_0}/E|η_0(n)|^{a_0}), where η_0(n) denotes the symmetric α-S noise, 0 < a_0 < α_0, and α_0 is the characteristic exponent of the symmetric α-S noise; α_0 is set to 0.8 and a_0 is selected to be 0.7 in the simulations. The values of the other parameters are set as follows: μ_w = μ_q = 0.01, ε = 0.001, ε_0 = 0.001, α = λ = 0.99, N_w = 11, μ_w(0) = μ_q(0) = 0.05 and ê_o²(0) = σ_x², where σ_x² denotes the variance of the input.

Fig. 3. MSE curves for white Gaussian input in the absence of impulsive noise (SNR = 30 dB).

Fig. 4. Variation of the step sizes for white Gaussian input in the absence of impulsive noise (SNR = 30 dB).
Figs. 3 and 5 show the MSE learning curves of the SAF-LMS [1], the SAF-NLMS [8] and the proposed SAF-SNLMS and SAF-VSS-SNLMS in the absence of impulsive noise. The input signal is a white Gaussian sequence (ω set to zero) in Fig. 3 and a colored input (ω set to 0.9) in Fig. 5. It can clearly be seen that the proposed SAF-SNLMS suffers from steady-state performance deterioration due to the sign operation on the error. However, the SAF-VSS-SNLMS achieves a steady-state performance nearly comparable to that of the SAF-LMS and SAF-NLMS algorithms; moreover, it obtains better tracking ability than these two algorithms because of the variable step-size scheme. From the inset in the top left corner of Fig. 3, we can also see that the SAF-VSS-SNLMS obtains the fastest convergence rate at the beginning of adaptation (about the first 1000 samples of filtering). Fig. 4 shows the variation of the step sizes of the SAF-VSS-SNLMS for white Gaussian input: the step sizes are higher at the beginning, which leads to a faster convergence rate, and as the filter approaches its steady state the step sizes become lower to ensure a small error.

Fig. 5. MSE curves for colored input in the absence of impulsive noise (SNR = 30 dB).

Figs. 6 and 7 show the learning curves of the four algorithms in the case of CG impulsive noise; the input is the colored signal with ω set to 0.9. It is clear that in this case the proposed sign algorithms outperform the other cited algorithms, obtaining lower steady-state MSE and better tracking ability. In addition, the SAF-VSS-SNLMS achieves the best performance.

Fig. 6. MSE curves for colored input in CG impulsive noise (SNR = 30 dB, t = 100,000, p = 0.01).

Fig. 7. MSE curves for colored input in CG impulsive noise (SNR = 30 dB, t = 10,000, p = 0.1).

Figs. 8–10 show the MSE learning curves of the four algorithms in the symmetric α-S noise environment at different FSNRs. The other simulation parameters are the same as in Fig. 6. As can be seen in the cases of 0 dB FSNR in Fig. 8 and 20 dB FSNR in Fig. 9, the SAF-SNLMS algorithm does not achieve a satisfactory steady-state performance. However, thanks to the variable step-size solution, the SAF-VSS-SNLMS provides good tracking and steady-state performance. At the low FSNR of −5 dB in Fig. 10, the SAF-LMS and SAF-NLMS fail to track the unknown nonlinear system, but the proposed sign algorithms remain robust against the impulsive noise. Fig. 12 shows the MSE learning curves of the four algorithms for the speech signal input shown in Fig. 11. The other simulation parameters are the same as in Fig. 6, and the impulsive noise is CG noise. From Fig. 12, the proposed sign algorithms perform better than the other cited algorithms, which demonstrates their effectiveness for speech signal input.

Fig. 8. MSE curves for colored input in symmetric α-S noise (SNR = 30 dB, FSNR = 0 dB).
Fig. 9. MSE curves for colored input in symmetric α-S noise (SNR = 30 dB, FSNR = 20 dB).

Fig. 10. MSE curves for colored input in symmetric α-S noise (SNR = 30 dB, FSNR = −5 dB).

Fig. 11. Speech signal.

Fig. 12. MSE curves for speech input in CG impulsive noise (SNR = 30 dB, t = 100,000, p = 0.01).

5. Conclusion

This paper proposed a sign normalised least mean square algorithm based on the Wiener spline adaptive filter. This sign adaptive algorithm was derived by minimising the absolute value of the a posteriori error. In addition, its variable step-size variant was introduced. With the benefits of the variable step-size scheme, the variable step-size sign algorithm obtained both a faster convergence rate and a lower steady-state error. The convergence properties and computational complexity were also analysed. Compared with the existing spline adaptive filtering algorithms, the proposed algorithms provided better convergence performance and robustness against impulsive noise.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (61501119).

References

[1] M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Nonlinear spline adaptive filtering, Signal Process. 93 (4) (2013) 772–783.
[2] M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Hammerstein uniform cubic spline adaptive filters: Learning and convergence properties, Signal Process. 100 (2014) 112–123.
[3] M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Novel cascade spline architectures for the identification of nonlinear systems, IEEE Trans. Circuits Syst. I 62 (7) (2015) 1825–1835.
[4] T. Shao, Y.R. Zheng, J. Benesty, An affine projection sign algorithm robust against impulsive interferences, IEEE Signal Process. Lett. 17 (4) (2010) 327–330.
[5] J.H. Kim, J.H. Chang, S.W. Nam, Sign subband adaptive filter with L1-norm minimization based variable step-size, Electron. Lett. 49 (21) (2013) 1325–1326.
[6] J. Ni, X. Chen, J. Yang, Two variants of the sign subband adaptive filter with improved convergence rate, Signal Process. 96 (2014) 325–331.
[7] P. Wen, J. Zhang, Robust variable step-size sign subband adaptive filter algorithm against impulsive noise, Signal Process. 139 (2017) 110–115.
[8] S. Guan, Z. Li, Normalised spline adaptive filtering algorithm for nonlinear system identification, Neural Process. Lett. (2017) 1–13.
[9] R.H. Kwong, E.W. Johnston, A variable step size LMS algorithm, IEEE Trans. Signal Process. 40 (7) (1992) 1633–1642.
[10] M.O.B. Saeed, A. Zerguine, S.A. Zummo, A noise-constrained algorithm for estimation over distributed networks, Int. J. Adapt. Control Signal Process. 27 (10) (2013) 827–845.
[11] H.S. Lee, S.E. Kim, W. Lee, W.J. Song, A variable step-size diffusion LMS algorithm for distributed estimation, IEEE Trans. Signal Process. 63 (7) (2015) 1808–1820.
[12] S. Wang, J. Feng, C.K. Tse, Kernel affine projection sign algorithms for combating impulse interference, IEEE Trans. Circuits Syst. II: Exp. Briefs 60 (11) (2013) 811–815.
[13] Y. Zhou, S.C. Chan, T.S. Ng, Least mean M-estimate algorithms for robust adaptive filtering in impulse noise, IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 47 (12) (2000) 1564–1569.
[14] S. Guarnieri, F. Piazza, A. Uncini, Multilayer feedforward networks with adaptive spline activation function, IEEE Trans. Neural Netw. 10 (3) (1999) 672–683.
[15] S. Zhang, J. Zhang, H. Han, Robust shrinkage normalized sign algorithm in an impulsive noise environment, IEEE Trans. Circuits Syst. II: Exp. Briefs 64 (1) (2017) 91–95.