A Family of Constrained Robust Least Mean Logarithmic Square Algorithms
Abstract—Recently, robust constrained adaptive filtering algorithms have gained the attention of researchers and have been widely studied. In this brief, we present a generalized constrained robust least mean logarithmic square (CRLMLS) family based on the relative logarithmic cost function. Using the energy conservation approach, a mean square convergence analysis in terms of the mean-square deviation (MSD) is derived. In addition, the steady-state MSD analysis for the standard CRLMLS (p = 2) is obtained under both Gaussian and non-Gaussian noise distributions. The proposed algorithms are then examined through numerical simulations on system identification and beamforming applications under different Gaussian and non-Gaussian/impulsive noise scenarios to validate their superiority.

Index Terms—Robust filtering algorithm; Stochastic gradient; Steady-state performance; Transient analysis; Impulsive noise.

I. INTRODUCTION

Linearly constrained adaptive filtering (CAF) algorithms have been widely used in various signal processing applications, such as antenna array processing for multichannel signals, beamforming, and linear-phase system identification [1], [2]. In these methods, the coefficient vector is restricted to a specific hyperplane defined by constraints generated from prior knowledge about the system of concern. In recent years, many efforts have been made to develop algorithms based on various cost functions to achieve good performance in diverse environments. The most commonly used methods are those designed based on the mean-square error (MSE) cost function, such as the constrained least mean square (CLMS) algorithm [3], which was originally designed as the linearly constrained minimum-variance (LCMV) algorithm [4].

Since robustness against impulsive noise is one of the most important attributes when choosing a cost function, adaptive filtering methods based on the constrained maximum correntropy criterion (CMCC) [5], the constrained minimum error entropy (CMEE) criterion [6], the robust constrained minimum mixture kernel risk-sensitive loss criterion [7], and a constrained least lncosh algorithm [8] have been proposed. These algorithms have been demonstrated to be robust to large errors and smooth for small errors, and they have improved the accuracy of adaptive filters. However, the main challenges with the aforementioned methods are that they do not always improve filtering accuracy in non-Gaussian noise with a light tail, and that they have high computational complexity. To reduce computational costs while achieving robustness in heavy-tailed noise, the constrained least mean M-estimate (CLMM) algorithm [9] and a family of constrained adaptive filtering algorithms based on the logarithmic cost function [10] have been introduced. To reduce computational costs and improve the steady-state MSE performance compared to CLMM, the maximum Versoria criterion (MVC) method has been presented [11]. Recently, the robust constrained generalized correntropy (CGMCC) and constrained maximum Versoria criterion (CMVC) algorithms were proposed [12].

To further improve the convergence speed and achieve better solution accuracy while maintaining robustness against heavy-tailed noise, the robust least mean logarithmic square (RLMLS) algorithm has been introduced [13]. The performance of RLMLS is better than that of MSE-based and other conventional algorithms due to the term $\frac{1}{1+\alpha|\epsilon(k)|^2}$ (where $\alpha$ is a scaling parameter and $\epsilon(k)$ is the filter error signal), which is insensitive to outliers/impulsive noise compared with MSE and has lower computational complexity than the exponential term in MCC. However, to our knowledge, no attempt has been made so far to extend these error-nonlinearity concepts to constrained adaptive filtering algorithms. In this brief, we address this gap. Using a gradient descent method, the linearly constrained generalized least mean logarithmic square cost function is minimized to develop the constrained robust least mean logarithmic square (CRLMLS) family. Theoretical analyses of the mean square deviation (MSD) and the steady-state MSD are derived. However, due to the computational burden of the higher-order members of the CRLMLS family, we exclusively investigate the steady-state MSD of CRLMLS for p = 2. Finally, the proposed algorithms are studied through numerical simulations on system identification and beamforming applications under different Gaussian, non-Gaussian, and impulsive noise scenarios to validate their superiority and compare them with other conventional methods.

Notations: $[\cdot]^T$ denotes transpose, $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix, $E[\cdot]$ denotes statistical expectation, and $|\cdot|$ denotes the absolute value operator. Lower-case boldface letters denote column vectors, while upper-case boldface letters denote matrices.
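To make the outlier insensitivity of the logarithmic-square term concrete, the following minimal Python sketch (an illustration of the term $1/(1+\alpha\epsilon^2)$ discussed above, not the paper's code; the function name `rlmls_weight` and the sample values are our own) compares the weight assigned to a typical small error with that assigned to an impulsive outlier:

```python
def rlmls_weight(err, alpha):
    """RLMLS-type weighting term 1 / (1 + alpha * err**2).

    Small errors receive a weight close to 1 (near-MSE behaviour),
    while large outliers are strongly down-weighted.
    """
    return 1.0 / (1.0 + alpha * err ** 2)

alpha = 0.5
small_error = 0.1   # typical background-noise error
outlier = 50.0      # impulsive-noise error

w_small = rlmls_weight(small_error, alpha)
w_large = rlmls_weight(outlier, alpha)

print(w_small)  # close to 1: the update behaves like LMS
print(w_large)  # close to 0: the impulsive sample barely moves the filter
```

Unlike the squared-error cost, whose gradient grows linearly with the error, this weight shrinks as $1/\epsilon^2$ for large errors, which is the mechanism behind the robustness claimed above.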
Manuscript received March 5, 2022; revised XXXXXX, 2022. The associate editor coordinating the review of this manuscript and approving it for publication was XXXXXXXX. This work was supported in part by the National Natural Science Foundation of China under Grant 61971083 and in part by the Fundamental Research Funds for the Central Universities under Grant 3132019341. (Corresponding author: Li Sen.)
Omer Mohamed Abdelrahman and Li Sen are with the Information Science and Technology College, Dalian Maritime University, Dalian, China (e-mail: omer.dmu@gmail.com; listen@dlmu.edu.cn).
$$\Rightarrow E[\beta(\epsilon(k))\mathbf{x}(k)\mathbf{x}^T(k)]\mathbf{w}(k) = E[\beta(\epsilon(k))\mathbf{x}(k)d(k)] + \mathbf{H}\boldsymbol{\kappa}(k)$$

$$\mathbf{w}_{\mathrm{opt}} = \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{r}_{\mathbf{xd}} + \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\boldsymbol{\kappa}(k) \quad (13)$$

where $\tilde{\mathbf{R}}_{\mathbf{x}} = E[\beta(\epsilon(k))\mathbf{x}(k)\mathbf{x}^T(k)]$ and $\mathbf{r}_{\mathbf{xd}} = E[\beta(\epsilon(k))\mathbf{x}(k)d(k)]$ are the weighted covariance matrix of the input $\mathbf{x}(k)$ and the weighted cross-correlation vector between the input $\mathbf{x}(k)$ and the desired filter output $d(k)$, respectively.

The cost function in (3) satisfies [5]
$$\mathbf{H}^T\mathbf{w}_{\mathrm{opt}} = \boldsymbol{\xi}$$
Thus, using (13), it follows that
$$\mathbf{H}^T\left(\tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{r}_{\mathbf{xd}} + \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\boldsymbol{\kappa}(k)\right) = \boldsymbol{\xi}$$
$$\boldsymbol{\kappa}(k) = \left(\mathbf{H}^T\tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\right)^{-1}\left[\boldsymbol{\xi} - \mathbf{H}^T\tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{r}_{\mathbf{xd}}\right] \quad (14)$$
Thus, (13) becomes
$$\mathbf{w}_{\mathrm{opt}} = \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{r}_{\mathbf{xd}} + \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\left[\mathbf{H}^T\tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\right]^{-1}\left[\boldsymbol{\xi} - \mathbf{H}^T\mathbf{w}_o\right] \quad (15)$$

From (1), and by utilizing A.1–A.2, we derive
$$d(k) = \mathbf{w}_o^T\mathbf{x}(k) + \eta(k)$$
$$\Rightarrow d(k)\mathbf{x}^T(k) = \mathbf{w}_o^T\mathbf{x}(k)\mathbf{x}^T(k) + \eta(k)\mathbf{x}^T(k)$$
$$\Rightarrow \beta(\epsilon(k))d(k)\mathbf{x}^T(k) = \beta(\epsilon(k))\mathbf{w}_o^T\mathbf{x}(k)\mathbf{x}^T(k) + \beta(\epsilon(k))\eta(k)\mathbf{x}^T(k)$$
It is known that
$$\mathbf{w}_o = \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{r}_{\mathbf{xd}} \quad (16)$$
Thus, merging (15) and (16), we obtain
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{w}_o + \tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\left[\mathbf{H}^T\tilde{\mathbf{R}}_{\mathbf{x}}^{-1}\mathbf{H}\right]^{-1}\left[\boldsymbol{\xi} - \mathbf{H}^T\mathbf{w}_o\right] \quad (17)$$
Also, we define
$$\boldsymbol{\vartheta} = \mathbf{w}_o - \mathbf{w}_{\mathrm{opt}} \quad (18)$$

After substituting (1), (11), and (18) into (10) and performing some mathematical manipulations, we obtain, as in [12],
$$\tilde{\mathbf{w}}(k+1) = \mathbf{B}\left(\mathbf{I}_M - \mu\beta(\epsilon(k))\mathbf{x}(k)\mathbf{x}^T(k)\right)\tilde{\mathbf{w}}(k) + \mu\beta(\epsilon(k))\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)\boldsymbol{\vartheta} + \mu\beta(\epsilon(k))\eta(k)\mathbf{B}\mathbf{x}(k) + \mathbf{B}\mathbf{w}_{\mathrm{opt}} + \boldsymbol{\phi} - \mathbf{w}_{\mathrm{opt}} \quad (19)$$

From [5], we find that $\mathbf{B}\mathbf{w}_{\mathrm{opt}} + \boldsymbol{\phi} - \mathbf{w}_{\mathrm{opt}} = \mathbf{0}_{M\times 1}$, and $\mathbf{B}$ is idempotent, such that $\mathbf{B} = \mathbf{B}^2$, $\mathbf{B} = \mathbf{B}^T$, and $\mathbf{B}\tilde{\mathbf{w}}(k) = \tilde{\mathbf{w}}(k)$. Thus, using these properties in (19), we have
$$\tilde{\mathbf{w}}(k+1) = \left(\mathbf{I}_M - \mu\beta(\epsilon(k))\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)\mathbf{B}\right)\tilde{\mathbf{w}}(k) + \mu\beta(\epsilon(k))\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)\boldsymbol{\vartheta} + \mu\beta(\epsilon(k))\eta(k)\mathbf{B}\mathbf{x}(k) \quad (20)$$

B). Steady-State MSD

By employing the squared Euclidean norm and taking the stochastic expectation of the last expression (20), the energy conservation relation can be described as [5], [12]
$$E\left[\|\tilde{\mathbf{w}}(k+1)\|^2\right] = E\left[\|\tilde{\mathbf{w}}(k)\|_{\boldsymbol{\Sigma}}^2\right] + \mu^2 E[\beta^2(\epsilon(k))]E[\eta^2(k)]E[\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)] + \mu^2 E[\beta^2(\epsilon(k))]\boldsymbol{\vartheta}^T E[\mathbf{x}(k)\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)]\boldsymbol{\vartheta} \quad (21)$$
where $\|\tilde{\mathbf{w}}(k)\|_{\boldsymbol{\Sigma}}^2 = \tilde{\mathbf{w}}^T(k)\boldsymbol{\Sigma}\tilde{\mathbf{w}}(k)$, and
$$\boldsymbol{\Sigma} = \mathbf{I}_M - 2\mu E[\beta(\epsilon(k))]\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B} + \mu^2 E[\beta^2(\epsilon(k))]\mathbf{B}E[\mathbf{x}(k)\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)]\mathbf{B} \quad (22)$$

By using the property $\mathbf{B} = \mathbf{B}^2$, we have
$$E[\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)] = E[\mathbf{x}^T(k)\mathbf{B}\mathbf{B}\mathbf{x}(k)] = \mathrm{Tr}(\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B}) = \mathrm{Tr}(\boldsymbol{\Upsilon}) \quad (23)$$
where $\boldsymbol{\Upsilon} = \mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B}$, and by utilizing Isserlis' theorem [15], we have
$$E[\mathbf{x}(k)\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)\mathbf{x}^T(k)] = \mathbf{R}_{\mathbf{x}}\mathbf{B}\mathbf{R}_{\mathbf{x}} + \mathbf{R}_{\mathbf{x}}\mathbf{B}\mathbf{R}_{\mathbf{x}} + E[\mathbf{x}^T(k)\mathbf{B}\mathbf{x}(k)]\mathbf{R}_{\mathbf{x}} = 2\mathbf{R}_{\mathbf{x}}\mathbf{B}\mathbf{R}_{\mathbf{x}} + \mathrm{Tr}(\boldsymbol{\Upsilon})\mathbf{R}_{\mathbf{x}} \quad (24)$$

Thus, by invoking (23) and (24) into (21), and using $\mathbf{B}\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} = \mathbf{0}_{M\times 1}$, the mean square evolution can be written as
$$E\left[\|\tilde{\mathbf{w}}(k+1)\|^2\right] = E\left[\|\tilde{\mathbf{w}}(k)\|_{\boldsymbol{\Sigma}}^2\right] + \mu^2 E[\beta^2(\epsilon(k))]\,\mathrm{Tr}(\boldsymbol{\Upsilon})\left(\boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} + E[\eta^2(k)]\right) \quad (25)$$
and $\boldsymbol{\Sigma}$ defined in (22) can be rewritten as
$$\boldsymbol{\Sigma} = \mathbf{I}_M - 2\mu E[\beta(\epsilon(k))]\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B} + \mu^2 E[\beta^2(\epsilon(k))]\left(\mathrm{Tr}(\boldsymbol{\Upsilon})\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B} + 2\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B}\right) \quad (26)$$

Let us assume that $\lambda_t$ ($t = 1, 2, \ldots, M-N$) represents the nonzero eigenvalues of the matrix $\boldsymbol{\Upsilon}$; thus, a sufficient condition to ensure mean square stability is
$$\left|1 - 2\mu E[\beta(\epsilon(k))]\lambda_t + \mu^2 E[\beta^2(\epsilon(k))]\left[\mathrm{Tr}(\boldsymbol{\Upsilon})\lambda_t + 2\lambda_t^2\right]\right| < 1 \quad (27)$$
Thus, the bound on $\mu$ can be defined as
$$0 < \mu < \frac{2E[\beta(\epsilon(k))]}{E[\beta^2(\epsilon(k))]\left(2\lambda_{\max} + \mathrm{Tr}(\boldsymbol{\Upsilon})\right)} \quad (28)$$
where $\lambda_{\max}$ denotes the maximum eigenvalue of $\boldsymbol{\Upsilon}$. Since $E[\beta(\epsilon(k))] \geq E[\beta^2(\epsilon(k))] > 0$, a stronger (more conservative) condition to ensure mean square stability can be obtained:
$$0 < \mu < \frac{2}{2\lambda_{\max} + \mathrm{Tr}(\boldsymbol{\Upsilon})} \quad (29)$$
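As a quick numerical illustration of the step-size bound in (29), the sketch below (our own check, not the paper's code) assumes the usual CLMS-type projection $\mathbf{B} = \mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T$ [3], [5], a white input covariance $\mathbf{R}_{\mathbf{x}} = \mathbf{I}$, and a randomly generated constraint matrix $\mathbf{H}$; it forms $\boldsymbol{\Upsilon} = \mathbf{B}\mathbf{R}_{\mathbf{x}}\mathbf{B}$ and evaluates $2/(2\lambda_{\max} + \mathrm{Tr}(\boldsymbol{\Upsilon}))$:

```python
import numpy as np

# Illustrative constraint matrix: M = 7 taps, N = 3 constraints.
M, N = 7, 3
rng = np.random.default_rng(0)
H = rng.standard_normal((M, N))

# Usual CLMS-type projection onto the constraint null space (an assumption;
# the brief only uses the properties B = B^2 = B^T).
B = np.eye(M) - H @ np.linalg.solve(H.T @ H, H.T)

R_x = np.eye(M)            # white input with unit variance
Upsilon = B @ R_x @ B      # matrix defined below (23)

lam_max = np.linalg.eigvalsh(Upsilon).max()
mu_max = 2.0 / (2.0 * lam_max + np.trace(Upsilon))   # bound (29)

# For R_x = I, B is an orthogonal projector of rank M - N, so Upsilon has
# eigenvalues 0 and 1: lam_max = 1 and Tr(Upsilon) = M - N = 4.
print(mu_max)  # 2 / (2 + 4) = 1/3
```

This also makes the structure of (29) visible: with white input, the bound depends only on the number of unconstrained degrees of freedom $M-N$, not on the particular $\mathbf{H}$.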
By using a similar method as for (23) and (24), we have
$$\cdots + \alpha\left(\boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} + \sigma_\eta^2\right)\,\mathrm{erfc}\!\left(\sqrt{\frac{1}{2\alpha\left(\boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} + \sigma_\eta^2\right)}}\right) = \Gamma(k) \quad (48)$$
Now, after substituting (47) and (48) into (43), we have
$$\mathrm{MSD}(\infty) = \mu^2\left(\boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} + \sigma_\eta^2\right)\mathrm{vec}^T(\boldsymbol{\Upsilon})\lim_{k\to\infty}\left(\mathbf{I}_{M^2} - \mathbf{F}\right)^{-1}\mathrm{vec}(\mathbf{I}_M)\,\Gamma(k) \quad (49)$$

2). Non-Gaussian noise condition
To perform the analysis in this case, a Taylor series expansion of the nonlinearity function $\beta(\epsilon(k))$ with respect to $\epsilon_a(k)$ around the noise value $\eta(k)$ is usually used [14], [17]. It follows that (12) can be expanded as
$$\beta(\epsilon(k)) = \beta(\epsilon_a(k) + \eta(k)) = \beta(\eta(k)) + \beta'(\eta(k))\epsilon_a(k) + \tfrac{1}{2}\beta''(\eta(k))(\epsilon_a(k))^2 + O(\epsilon_a(k))^2 \quad (50)$$
where $\beta'(\eta(k))$ and $\beta''(\eta(k))$ are the first and second derivatives of the nonlinear function $\beta(\epsilon(k))$, and $O(\epsilon_a(k))^2$ denotes the higher-order terms.

For the case of the standard CRLMLS ($p = 2$), $\beta_{p=2}(\eta(k)) = g(\eta(k))$; thus, we have
$$g(\eta(k)) = \frac{1}{1+\alpha\eta^2(k)}$$
$$g'(\eta(k)) = -\frac{2\alpha\eta(k)}{\left(1+\alpha\eta^2(k)\right)^2}$$
$$g''(\eta(k)) = \frac{2\alpha\left(3\alpha\eta^2(k)-1\right)}{\left(1+\alpha\eta^2(k)\right)^3}$$
However, from [5], we use the approximations. By using (17) and (18), and invoking A.3, we obtain
$$\cdots = E\!\left[\frac{1}{\left(1+\alpha\eta^2(k)\right)^2}\right] + \boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta}\,E\!\left[\frac{2\alpha\left(5\alpha\eta^2(k)-1\right)}{\left(1+\alpha\eta^2(k)\right)^4}\right] = \varpi(k)$$
Now, after substituting (51) and (52) into (43), we have
$$\mathrm{MSD}(\infty) = \mu^2\left(\boldsymbol{\vartheta}^T\mathbf{R}_{\mathbf{x}}\boldsymbol{\vartheta} + E[\eta^2(k)]\right)\mathrm{vec}^T(\boldsymbol{\Upsilon})\lim_{k\to\infty}\left(\mathbf{I}_{M^2} - \mathbf{F}\right)^{-1}\mathrm{vec}(\mathbf{I}_M)\,\varpi(k) \quad (53)$$

The steady-state MSD expressions for the CRLMLS algorithm in (49) and (53) were obtained using the approximation $\mathbf{w}(k) \approx \mathbf{w}_{\mathrm{opt}}$. Moreover, for the non-Gaussian condition, the higher-order terms $O(\epsilon_a(k))^2$ are ignored. Hence, if the noise power is very large, the expressions in (49) and (53) will not yield accurate results.

V. SIMULATION RESULTS

In this part, we give numerical simulation results under Gaussian and non-Gaussian noise to evaluate the performance of the CRLMLS family and to verify the theoretical analysis.

The unknown system $\mathbf{w}_o$ to be identified is $\mathbf{w}_o$ = [0.3328, -0.0392, -0.0944, 0.7174, -0.652, -0.0721, 0.58], the constraint matrix $\mathbf{H}$ is set to [-0.1058, 0.2778, -0.115; 1.5423, 1.6399, 0.3357; 0.2614, -0.7101, -0.0217; 0.2191, -0.4895, 0.5751; -1.855, 0.49, 0.6389; -0.718, 0.4914, -0.5332; -0.3816, -0.0427, -1.0077], and the constraint vector is chosen as $\boldsymbol{\xi}$ = [1.0767; -0.5783; -0.5993], as in [3]. The normalized MSD (NMSD) is frequently used as a performance evaluation metric for adaptive filtering algorithms [12], [18], defined as
$$\mathrm{NMSD} = 10\log_{10}\left\{E\left[\frac{\|\mathbf{w}(k)-\mathbf{w}_o\|^2}{\|\mathbf{w}_o\|^2}\right]\right\} \quad (54)$$
An input signal with variance $\sigma_u^2 = 1$ is used. The simulation results are obtained as an average over 100 independent Monte Carlo runs; 20000 iterations are run to ensure the algorithms reach the steady state, and the steady-state results are obtained by averaging over the last 200 iterations.
In Fig. 4, we investigate the NMSD performance of the proposed CRLMLS family algorithms compared with the CLMS, standard CMVC, and CMCC algorithms in the presence of different noise scenarios: Gaussian noise with zero mean and $\sigma_\eta^2 = 1$, binary noise over $\{-1, 1\}$, uniform noise over $\{-\sqrt{3}, \sqrt{3}\}$, and a strong heavy-tailed noise distribution with $S_{\alpha\text{-stable}}(1.15, 0, 1, 0)$. From Fig. 10, it can be observed that the standard CRLMLS algorithm is robust against non-Gaussian noise, while the higher-order members of the CRLMLS family significantly outperform the other robust adaptive filters.

[Figure: Theoretical and simulated MSD of the standard CRLMLS over long iterations under Gaussian, uniform, and alpha-stable noise distributions.]

Fig. 4 shows the theoretical and simulated MSD versus the number of iterations. We consider Gaussian noise with $\sigma_\eta^2 = 0.1$, uniform noise over $\{-\sqrt{3}, \sqrt{3}\}$, and alpha-stable noise with $S_{\alpha\text{-stable}}(1.15, 0, 1, 0)$. It can be observed from this figure that the simulated results are consistent with those calculated by theory.

B). Beamforming Application
Adaptive beamforming is an important technology for mobile communications [22, 23]. Here we examine the performance of the standard CRLMLS in a beamforming application. We consider a uniform linear array (ULA) with M = 7 omnidirectional sensors and an element spacing of half a wavelength. We assume that there are four users with directions-of-arrival (DOA) of $-35°$, $0°$, $30°$, and $60°$, respectively. Among them, the signal of one user is of interest, presumed to arrive at a DOA of $\theta_d = 0°$, and the rest are considered interference signals. We choose the constraint matrix $\mathbf{H} = [\mathbf{I}_{(M-1)/2}, \mathbf{0}, -\mathbf{J}_{(M-1)/2}]^T$, with $\mathbf{J}$ being a reversal matrix (an identity matrix with its rows in reversed order), and the response vector $\boldsymbol{\xi} = \mathbf{0}$ [24]. The measurement noise $\eta(k)$ is non-Gaussian alpha-stable noise with $S_{\alpha\text{-stable}}(1.3, 0, 1, 0)$. From Fig. 10, it can be observed that the standard CRLMLS algorithm has the highest gain in the direction of the signal of the user of interest and successfully suppresses the other interference signals in the presence of alpha-stable noise environments.

VI. CONCLUSION
In this work, we introduced the novel CRLMLS family of constrained adaptive filtering algorithms based on the relative logarithmic cost function, which is robust against impulsive noise and improves filtering convergence speed and accuracy simultaneously. We carried out the mean square analysis in terms of the transient MSD, and the steady-state MSD of the standard CRLMLS (p = 2) was derived under both Gaussian and non-Gaussian noise assumptions. Also, sufficient conditions for the algorithm's stability were provided. The simulation results demonstrated that the proposed algorithms provide good performance under both Gaussian and non-Gaussian noise environments.

REFERENCES

[1] M. L. de Campos, S. Werner, and J. A. Apolinário, "Constrained adaptive filters," in Adaptive Antenna Arrays. Springer, 2004, pp. 46-64.
[2] O. L. Frost, "An algorithm for linearly constrained adaptive array processing," Proceedings of the IEEE, vol. 60, no. 8, pp. 926-935, 1972.
[3] R. Arablouei, K. Doğançay, and S. Werner, "On the mean-square performance of the constrained LMS algorithm," Signal Processing, vol. 117, pp. 192-197, 2015.
[4] J. Zhang and C. Liu, "On linearly constrained minimum variance beamforming," The Journal of Machine Learning Research, vol. 16, no. 1, pp. 2099-2145, 2015.
[5] S. Peng, B. Chen, L. Sun, W. Ser, and Z. Lin, "Constrained maximum correntropy adaptive filtering," Signal Processing, vol. 140, pp. 116-126, 2017.
[6] S. Peng, W. Ser, B. Chen, L. Sun, and Z. Lin, "Robust constrained adaptive filtering under minimum error entropy criterion," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 65, no. 8, pp. 1119-1123, 2018.
[7] G. Qian, F. Dong, and S. Wang, "Robust constrained minimum mixture kernel risk-sensitive loss algorithm for adaptive filtering," Digital Signal Processing, vol. 107, p. 102859, 2020.
[8] T. Liang, Y. Li, Y. V. Zakharov, W. Xue, and J. Qi, "Constrained least lncosh adaptive filtering algorithm," Signal Processing, vol. 183, p. 108044, 2021.
[9] Z. Wang, H. Zhao, and X. Zeng, "Constrained least mean M-estimation adaptive filtering algorithm," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 68, no. 4, pp. 1507-1511, 2020.
[10] V. C. Gogineni and S. Mula, "A family of constrained adaptive filtering algorithms based on logarithmic cost," arXiv preprint arXiv:1711.04907, 2017.
[11] F. Huang, J. Zhang, and S. Zhang, "Maximum versoria criterion-based robust adaptive filtering algorithm," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 64, no. 10, pp. 1252-1256, 2017.
[12] S. S. Bhattacharjee, M. A. Shaikh, K. Kumar, and N. V. George, "Robust constrained generalized correntropy and maximum Versoria criterion adaptive filters," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 68, no. 8, pp. 3002-3006, 2021.
[13] K. Xiong and S. Wang, "Robust least mean logarithmic square adaptive filtering algorithms," Journal of the Franklin Institute, vol. 356, no. 1, pp. 654-674, 2019.
[14] S. C. Douglas and T.-Y. Meng, "Stochastic gradient adaptation under general error criteria," IEEE Transactions on Signal Processing, vol. 42, no. 6, pp. 1335-1351, 1994.
[15] L. Isserlis, "On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables," Biometrika, vol. 12, no. 1/2, pp. 134-139, 1918.
[16] W. Gröbner and N. Hofreiter, Integraltafel, Zweiter Teil: Bestimmte Integrale. Springer, 1973.
[17] B. Chen, L. Xing, B. Xu, H. Zhao, N. Zheng, and J. C. Principe, "Kernel risk-sensitive loss: definition, properties and application to robust adaptive filtering," IEEE Transactions on Signal Processing, vol. 65, no. 11, pp. 2888-2901, 2017.
[18] W. Xu and H. Zhao, "Robust constrained recursive least M-estimate adaptive filtering algorithm," Signal Processing, vol. 194, p. 108433, 2022.
[19] M. Shao and C. L. Nikias, "Signal processing with fractional lower order moments: stable processes and their applications," Proceedings of the IEEE, vol. 81, no. 7, pp. 986-1010, 1993.
[20] R. Leahy, Z. Zhou, and Y.-C. Hsu, "Adaptive filtering of stable processes for active attenuation of impulsive noise," in 1995 International Conference on Acoustics, Speech, and Signal Processing, vol. 5. IEEE, 1995, pp. 2983-2986.
[21] S. P. Talebi, S. Werner, and D. P. Mandic, "Distributed adaptive filtering of α-stable signals," IEEE Signal Processing Letters, vol. 25, no. 10, pp. 1450-1454, 2018.
[22] A. A. Falkovski, E. V. Kuhn, M. V. Matsuo, C. A. Pitz, E. L. O. Batista, and R. Seara, "Stochastic modeling of the CNLMS algorithm applied to adaptive beamforming," Signal Processing, vol. 178, p. 107772, 2021.
[23] J. Li and P. Stoica, Robust Adaptive Beamforming. John Wiley & Sons, 2005.
[24] R. Arablouei and K. Doğançay, "Reduced-complexity constrained recursive least-squares adaptive filtering algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6687-6692, 2012.