Article
Rotate Vector Reducer Fault Diagnosis Model Based
on EEMD-MPA-KELM
Zhijian Tu 1,2, *, Lifu Gao 1,2 , Xiaoyan Wu 3 , Yongming Liu 3 and Zhuanzhe Zhao 3

1 Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences,
Hefei 230031, China
2 Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
3 School of Mechanical Engineering, Anhui Polytechnic University, Wuhu 241000, China
* Correspondence: tuzhijian@ceprei.com; Tel.: +86-156-6543-5118

Abstract: With the increase of service time, the rotation period of rotating machinery may become irregular, and Ensemble Empirical Mode Decomposition (EEMD) can effectively reflect this periodic state. In order to accurately evaluate the working state of the Rotate Vector (RV) reducer, the torque transfer formula of the RV reducer is first derived to theoretically prove the periodicity of torque transfer in normal operation; EEMD is then able to effectively reflect this periodicity in the data. A fault diagnosis model based on EEMD-MPA-KELM was proposed, and a bearing experimental dataset from Xi'an Jiaotong University was used to verify the performance of the model. In view of the fact that industrial robot RV reducer faults are not obvious and the sample data are few, the spectrum diagram was used to diagnose the fault from the RV reducer measured data. EEMD decomposition was performed on the data measured by the RV reducer test platform to obtain several Intrinsic Mode Functions (IMFs). After the overall averaging and optimization of each IMF, several groups of eigenvalues were obtained. The eigenvalues were input into the Kernel Extreme Learning Machine (KELM) optimized by the Marine Predators Algorithm (MPA), and the fault diagnosis model was established. Finally, compared with other models, the prediction results showed that the proposed model can judge the working state of the RV reducer more effectively.

Keywords: RV reducer; EEMD; MPA; KELM; fault diagnosis

1. Introduction

The Rotate Vector (RV) reducer is a new type of cycloidal pinwheel planetary two-stage gear reducer, which is mainly applied in the field of robotics. With the increasing demand for industrial robots, the RV reducer, as a core part of a robot, plays a very important role in the actual production process. If the RV reducer fails, it will reduce the economic benefit of the manufacturer. Therefore, it is urgent to have a method of fault diagnosis and life prediction for the RV reducer to reduce the losses caused by RV reducer failure.
In recent years, domestic and foreign scholars have done a lot of research on RV reducers. Pan Bosong [1] and others analyzed the reliability of the transmission accuracy of a planetary gear reducer considering gear wear. Li Zhe [2] and others placed sensors in multiple locations for data extraction and preprocessing, carried out preliminary processing, and introduced the analyzed information into a deep convolutional neural network, improving the accuracy and robustness of the existing fault diagnosis for the reducer. Wang Jiugen [3] input the data of different fault modes measured by a vibration test bench into a residual neural network for training and five-fold cross-validation, compared it with multiple neural network models, and verified it by using the database of Case Western Reserve University. Mao Jun [4] put forward a fault diagnosis method based on Deep Auto-Encoder Networks (DAENs), taking several common fault features of reducers as input and whether there was a fault or not as output. Peng [5] proposed a convolutional neural network model under the influence of noise, converted
the one-dimensional vibration data signal of the RV reducer into gray image after two-
dimensional transformation, took it as input and fused the features. Chen Lerui [6] selected
a method closely combining frequency bands and a large number of possible optimization
algorithms according to the output phase frequency characteristic function and Kernel
Principal Component Analysis (KPCA) of discrete systems, and obtained the first four
frequency band values. In each case, SVM was introduced and compared with various
diagnostic methods after the KPCA solution was used. An Haibo [7] analyzed the basic
principle of the RV reducer dynamics and established the transmission model of the AE
data signal inside the RV reducer. Yu Ning [8] carried out CMF decomposition of fault signals using improved Local Mean Decomposition (CELMD) combined with the Multi-scale Permutation Entropy (MPE) method to select PF components for reconstruction, envelope analysis and other comprehensive diagnosis methods. HM Qian [9] put forward a time-varying
reliability method for an industrial robot Rotation Vector (RV) reducer with multiple failure
modes using the Kriging model, and a time-varying reliability analysis method for multiple
failure modes based on the double-ring Kriging model. The inner loop is the extreme value
optimization of each limit state function based on Efficient Global Optimization (EGO),
while the outer loop is the active learning reliability analysis combining the Multi-Response
Gaussian Process model (MRGP) and Monte Carlo Simulation (MCS).
With the development of artificial intelligence, extreme learning machine, as a kind of
pattern recognition, has been given wide attention due to its fast running speed and small
amount of training data. Wang H. [10] adopted the extreme learning machine method to
classify the faults of fuel system. Guo S. [11] combined the circle model with the Extreme
Learning Machine (ELM) to form a fault diagnosis method for linear analog circuits. Xia
Y. [12] reported an effective method for early fault diagnosis of a chiller by combining
Kernel Entropy Component Analysis (KECA) and a Voting-based Extreme Learning
Machine (VELM). Liu X. [13] proposed a personalized fault diagnosis method using Finite
Element Method (FEM) simulation and ELM to detect faults in gears. Huang [14] from
Nanyang Technological University introduced a kernel function idea from a support vector
machine into ELM to improve the stability and generalization ability of ELM neural network
model training results, and proposed Kernel Extreme Learning Machine (KELM). Yang
Xin [15] used the kernel extreme learning machine to classify turbine rotor faults. Qiao
Wenshan [16] proposed a fault diagnosis method based on IBOA-KELM to improve the
accuracy and efficiency of diesel engine fault diagnosis. Liang R. [17] proposed a fault
diagnosis method based on the Whale Optimization Algorithm (WOA) to optimize KELM,
aiming at the problems that fault features are difficult to extract and time-frequency features
cannot fully represent state information. Zhang H. [18] proposed a fault diagnosis method
for coal mills using a kernel extreme learning machine based on variational model feature extraction. The above studies combined various methods with the kernel extreme learning machine for fault diagnosis and achieved good results, but most of them have poor universality. Moreover, because KELM introduces kernel functions, it is very sensitive to its parameters and has randomness, so the results are not stable across runs, which easily affects the prediction results.
The above fault diagnosis methods for the RV reducer have high accuracy, but they are relatively complex, involve complicated calculations, and require a large amount of supporting data. The data characteristics of the RV reducer are often few when an initial fault occurs, so the accuracy of the RV reducer in operation is reduced and the fault cannot be detected. In this paper, an MPA-KELM fault diagnosis and classification model based on EEMD is proposed for the small-sample RV reducer of industrial robots. In view of the fact that industrial robot RV reducer faults are not obvious and the sample data are sparse, the spectrum diagram was used to diagnose the fault from the measured data of the RV reducer, the data measured by the RV reducer test platform were decomposed by EEMD, and several Intrinsic Mode Functions (IMFs) were obtained. After the overall averaging and optimization of each IMF, multiple groups of characteristic values were obtained. The eigenvalues are imported as inputs into the KELM optimized by the Marine Predators Algorithm (MPA) in order to accurately evaluate the RV reducer working state and establish a fault diagnosis model.

2. Industrial Robot RV Reducer
RV Reducer Structure of Industrial Robots

As shown in Figure 1 [19], the RV reducer uses the solar wheel as the input shaft, and the solar wheel with fewer teeth drives the planetary wheel with more teeth to rotate, so as to carry out first-stage deceleration. The large planetary wheel drives the crank shaft to rotate, and the crank shaft drives two RV wheels to rotate; the two RV wheels rotate for one circle, and the pinion gear rotates for one tooth, so as to carry out second-stage deceleration.

Figure 1. Structure diagram of RV reducer of industrial robot.


Since the center of gravity of the crank shaft is not on the shaft, its torque will change
periodically with rotation. The calculation formula is shown in Equation (1):
Mx = TF(Wx − B) − Mc sin(θx + γ)   (1)
In the formula, Mx—crank shaft torque, kN·m; TF—torque factor, m; Wx—suspension instantaneous load, kN; B—crank structure unbalanced weight, kN; Mc—maximum crank balance torque; θx—crank angle; γ—phase angle.
Mc = Wp Rp + Wb′ Rb   (2)
Wp is the weight of the crank balance block, Rp is the distance between the crank shaft and the center of gravity of the balance block, Wb′ is the weight of the crank, and Rb is the distance between the crank shaft and its center of gravity.
Since the crank shaft of the RV reducer does not need a crank balance block, the formula for the RV reducer can be expressed as:
Mc = Wb′ Rb   (3)

The torque factor can also be expressed as follows:

TF = d · (A sin α) / (B sin β)   (4)

In the formula, A is the length of the forearm of the traveling beam, B is the length of the rear arm of the traveling beam, d is the distance of the central crank shaft of the cycloidal wheel, and α and β are the angles between the connecting rod and the crank. Because the RV reducer does not have a connecting rod, the angle α is zero, so the torque factor TF is also zero, and Equation (1) can be simplified as:

Mx = −Wb′ Rb sin(θx + γ)   (5)

While the center of the cycloidal wheel is not in line with the axis, assuming that
the difference is s and the radius of the cycloidal wheel is r, the formula of the cycloidal
wheel is:
Mr = Mx [r + s·sin(θ + β)] / (r + s)   (6)
When the cycloidal wheel and the pinion gear transfer torque, a mesh coefficient c1 needs to be added to the torque.
Then its final output torque is:

Moutput = c1 Mr = c1 Mx [r + s·sin(θ + β)] / (r + s)   (7)
Since the efficiency formula is:

η = Moutput / (n Minput) = c1 Mx [r + s·sin(θ + β)] / [n Minput (r + s)]   (8)

In formula, n is the deceleration ratio.


From Formula (8), it can be concluded that the efficiency of RV reducer in normal
operation is periodic.
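To make the periodicity argument concrete, the following minimal Python sketch evaluates Equations (5)-(8) over two crank revolutions. All numerical parameter values (Wb′, Rb, γ, r, s, β, c1, n, Minput) are illustrative placeholders, not measurements from the test platform.

```python
import numpy as np

# Minimal sketch of Equations (5)-(8); every value below is a hypothetical placeholder.
W_b = 2.0      # weight of the crank, kN (hypothetical)
R_b = 0.05     # distance from crank shaft to its center of gravity, m (hypothetical)
gamma = 0.3    # phase angle, rad (hypothetical)
r, s = 0.08, 0.005   # cycloidal wheel radius and center offset, m (hypothetical)
beta = 0.1     # angle in Eq. (6), rad (hypothetical)
c1 = 0.9       # mesh coefficient (hypothetical)
n = 121        # reduction ratio (hypothetical)
M_input = 1.5  # input torque, kN*m (hypothetical)

theta = np.linspace(0.0, 4.0 * np.pi, 501)        # two crank revolutions, step 4*pi/500

M_x = -W_b * R_b * np.sin(theta + gamma)                  # Eq. (5): crank shaft torque
M_r = M_x * (r + s * np.sin(theta + beta)) / (r + s)      # Eq. (6)
M_output = c1 * M_r                                       # Eq. (7): output torque
eta = M_output / (n * M_input)                            # Eq. (8): efficiency

# Under normal operation eta repeats every 2*pi of crank angle; this is the
# periodicity that the EEMD-based features exploit later in the paper.
shift = 250                                               # index offset equal to 2*pi
print("max |eta(theta) - eta(theta + 2*pi)| =",
      np.max(np.abs(eta[:251] - eta[shift:])))
```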

3. Theoretical Basis
3.1. KELM
Extreme Learning Machine (ELM) is a neural network algorithm composed of a single hidden layer. Because the input weights and hidden-layer biases are randomly generated and then fixed in the network, compared with traditional neural networks, fewer parameters need to be set, the learning speed is faster, and the generalization ability is stronger.
Suppose that the input of the model’s training set is N-dimensional X = { x1 , x2 , . . . , xn },
the ideal output is I dimensional T = {t1 , t2 , . . . , ti }, when the number of hidden nodes
L and the excitation function G(x) are determined, the calculation formula of the actual
output value of ELM network is:

yi = Σ (l = 1 to L) βli G(Wnl · xn + bl),   n = 1, 2, . . . , N   (9)

In formula, Wnl is the input weight between node n of ELM input layer and node l of
hidden layer; bl is the bias of the l th hidden layer node; βli is the output weight of hidden
layer node l and output layer node i. G(x) is the activation function.
When the input sample is trained by ELM network model, the output data
Y = {y1 , y2 , . . . , yk } can be infinitely approximated to an ideal output T = {t1 , t2 , . . . , tk },
there is a definite combination of W, b and β, such that Y = T, then the above formula can
be converted into a matrix form:
H·β = T (10)
In formula, H is the output matrix of the hidden layer; β is the weight matrix of the
output layer; T is the expected output matrix.

Using the least square method to solve β, we can obtain:

β = H⁺T = Hᵀ(HHᵀ)⁻¹T   (11)

In formula, H+ is the generalized inverse matrix of matrix H.
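As a reference, here is a minimal NumPy sketch of the basic ELM described by Equations (9)-(11): a randomly initialized hidden layer and output weights obtained by the Moore-Penrose pseudo-inverse. The sigmoid activation and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L=50):
    """Basic ELM per Eqs. (9)-(11): random hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], L))   # input weights W_nl (fixed after init)
    b = rng.standard_normal(L)                 # hidden-layer biases b_l
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden output matrix, sigmoid G(x)
    beta = np.linalg.pinv(H) @ T               # Eq. (11): beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                            # Eqs. (9)/(10): Y = H beta

# Toy usage with hypothetical data: 100 samples, 6 features, 3 one-hot classes.
X = rng.standard_normal((100, 6))
T = np.eye(3)[rng.integers(0, 3, 100)]
W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("training accuracy:", (pred == T.argmax(axis=1)).mean())
```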


Due to the randomness of the setting of input layer weight and hidden layer bias
in ELM, the state of the model is extremely unstable. Therefore, Huang and others from
Nanyang Technological University introduced kernel function idea from the Support Vector
Machine (SVM) into ELM and proposed Kernel Extreme Learning Machine (KELM). The
algorithm not only has the same excellent running speed as the ELM neural network, but
also has more stable performance and generalization ability as SVM. Kernel function is
used to map the input space to higher dimensional space to enhance the stability of the
model. That is, the kernel function matrix ΩELM is used to replace the ELM hidden layer random matrix HHᵀ. Based on the Mercer condition, we can obtain:

ΩELM = HHᵀ,  Ωi,j = h(xi) · h(xj) = K(xi, xj)   (12)

The RBF kernel function is adopted in this paper:

K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))   (13)

In formula, σ is the width parameter of RBF kernel function.


The regularization coefficient C and the identity matrix I are introduced into the ELM neural network random matrix HHᵀ, and the expression for β′ is as follows:

β′ = Hᵀ(I/C + HHᵀ)⁻¹T   (14)

Based on the above formula, the output function of the KELM neural network can be written:

f(x) = [K(x, x1), · · · , K(x, xN)] (I/C + ΩELM)⁻¹ T   (15)
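The following is a minimal sketch of Equations (12)-(15): the RBF kernel matrix ΩELM, the regularized solution (I/C + ΩELM)⁻¹T, and prediction through kernel evaluations against the training samples. The values of C and σ here are arbitrary; they are exactly the two hyperparameters that the MPA tunes in Section 4.2.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2*sigma^2)), Eq. (13)."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

class KELM:
    def __init__(self, C=10.0, sigma=1.0):
        self.C, self.sigma = C, sigma

    def fit(self, X, T):
        self.X_train = X
        omega = rbf_kernel(X, X, self.sigma)              # Omega_ELM, Eq. (12)
        n = X.shape[0]
        # alpha = (I/C + Omega)^(-1) T, so that f(x) = K(x, X_train) alpha, Eq. (15)
        self.alpha = np.linalg.solve(np.eye(n) / self.C + omega, T)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X_train, self.sigma) @ self.alpha

# Toy usage with hypothetical two-class, one-hot labelled data.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 6))
T = np.eye(2)[(X[:, 0] > 0).astype(int)]
model = KELM(C=100.0, sigma=2.0).fit(X, T)
print("training accuracy:", (model.predict(X).argmax(1) == T.argmax(1)).mean())
```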

3.2. MPA
The Marine Predators Algorithm (MPA) is a novel and efficient meta-heuristic algo-
rithm proposed by Faramarzi [20] in 2020, which was mainly inspired by the motion laws
of predators and prey in the ocean and the optimal encounter rate strategies for biological
interactions between predators and prey. The optimization process is divided into three
stages based on the velocity ratio of the predator and the prey, with the predator following Lévy motion or Brownian motion. At the same time, the prey also acts as a predator while being hunted, making the algorithm more realistic in principle. In
addition, considering the Marine environmental factors, it can reduce the phenomenon of
predators falling into the local optimal value.
The detailed steps of MPA algorithm implementation are as follows:
1. Initialize the Elite matrix and Prey matrix. Each element in the Prey matrix is initialized by the following formula:


Xij = lj + rand × (uj − lj)   (16)

In formula, uj and lj are the upper and lower limits of the search space in the j
dimension; rand is a random vector subject to uniform distribution [0,1], and Xij represents
the j-dimensional spatial position of the i-th prey. The final Prey matrix is obtained:
 
Prey = [ X1,1  X1,2  · · ·  X1,d ;  X2,1  X2,2  · · ·  X2,d ;  · · · ;  Xn,1  Xn,2  · · ·  Xn,d ]n×d   (17)

In formula, n is the population size and d is the dimension value.


The fitness value of each predator was calculated, and the predator with the best
fitness was selected to make n copies to form the Elite matrix:
Elite = [ XI1,1  XI1,2  · · ·  XI1,d ;  XI2,1  XI2,2  · · ·  XI2,d ;  · · · ;  XIn,1  XIn,2  · · ·  XIn,d ]n×d   (18)

In formula, n is the population size and d is the dimension value. The Elite matrix has
the same dimension as the Predator matrix.
2. Optimization process. In the optimization process, there are three stages.
Stage 1: In the early iteration, the first third of the total iteration, the prey moves faster
than the predator in this stage. The predator is mainly in the exploratory phase, and the
optimal strategy is to be completely motionless. The mathematical model of this stage is:

si = RB ⊗ (Ei − RB ⊗ Pi),  Pi = Pi + 0.5R ⊗ si,   i = 1, 2, . . . , n;  t < (1/3)T   (19)

In formula, si is the moving step; RB is a random vector with Brownian motion based
on normal distribution; ⊗ is term by term multiplication; R is a uniform random vector in
[0,1]; t is the number of current iterations; T is the maximum number of iterations.
Stage 2: In the middle of the iteration, one-third to two-thirds of the way through the
iteration, prey and predator are moving at the same speed. Predators are mainly in the
exploration and development transition phase, so half of the predators are for exploration
and the other half for exploitation. At this point, the prey uses a Lévy walk for exploitation
and the predator uses Brownian motion for exploration. The mathematical model of this
stage is:

si = RL ⊗ (Ei − RL ⊗ Pi),  Pi = Ei + 0.5R ⊗ si,   i = 1, 2, . . . , n/2;  (1/3)T < t < (2/3)T

si = RB ⊗ (RB ⊗ Ei − Pi),  Pi = Ei + 0.5CF ⊗ si,   i = n/2, . . . , n;  (1/3)T < t < (2/3)T   (20)

CF = (1 − t/T)^(2t/T)
In formula, RL is a random vector generated based on Levy distribution; CF is an
adaptive parameter that controls the predator’s movement step size.
Stage 3: In the late iteration stage, the predator moves faster than the prey. The
predator is in the exploitation stage, and the best strategy is the Lévy walk. Its mathematical
model is:

si = RL ⊗ (RL ⊗ Ei − Pi),  Pi = Ei + 0.5CF ⊗ si,   i = 1, . . . , n;  t > (2/3)T   (21)
3. Fish Aggregating Devices (FADs) effect.

In order to avoid the stagnation of local optimal value, MPA considers the influence
of environmental factors such as fish aggregation device, and the mathematical model of
FADs is as follows:

Pi = Pi + CF[Xmin + RL ⊗ (Xmax − Xmin)] ⊗ U,   r ≤ 0.2
Pi = Pi + [0.2(1 − R) + R](Pr1 − Pr2),   r > 0.2   (22)

In the formula, U is a random binary vector containing 0 and 1; r1 and r2 are random indexes of the prey matrix.
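A compact Python sketch of the algorithm described above is given below. It follows Equations (16)-(22) as stated in this section (population initialization, the three stages, and the FADs perturbation); the Lévy-step generator (Mantegna's method), the greedy memory update, and the use of a uniform random vector in the first FADs branch are common implementation choices rather than details taken from this paper.

```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(2)

def levy(shape, beta=1.5):
    """Levy-distributed steps via Mantegna's algorithm (a common choice for R_L)."""
    sigma = (gamma_fn(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

def mpa(fitness, lb, ub, n=25, dim=2, T=200, fads=0.2):
    """Simplified Marine Predators Algorithm sketch following Eqs. (16)-(22)."""
    prey = lb + rng.random((n, dim)) * (ub - lb)          # Eqs. (16)/(17)
    fit = np.apply_along_axis(fitness, 1, prey)
    best, best_fit = prey[fit.argmin()].copy(), fit.min()
    for t in range(T):
        elite = np.tile(best, (n, 1))                     # Eq. (18): copies of the best predator
        CF = (1 - t / T) ** (2 * t / T)                   # adaptive step-size control
        RB, RL, R = rng.normal(size=(n, dim)), levy((n, dim)), rng.random((n, dim))
        new = prey.copy()
        if t < T / 3:                                     # Stage 1, Eq. (19)
            new = prey + 0.5 * R * (RB * (elite - RB * prey))
        elif t < 2 * T / 3:                               # Stage 2, Eq. (20)
            h = n // 2
            new[:h] = elite[:h] + 0.5 * R[:h] * (RL[:h] * (elite[:h] - RL[:h] * prey[:h]))
            new[h:] = elite[h:] + 0.5 * CF * (RB[h:] * (RB[h:] * elite[h:] - prey[h:]))
        else:                                             # Stage 3, Eq. (21)
            new = elite + 0.5 * CF * (RL * (RL * elite - prey))
        if rng.random() < fads:                           # FADs effect, Eq. (22)
            U = (rng.random((n, dim)) < fads).astype(float)
            new = new + CF * (lb + rng.random((n, dim)) * (ub - lb)) * U
        else:
            r_vec = rng.random((n, dim))
            i1, i2 = rng.permutation(n), rng.permutation(n)
            new = new + (fads * (1 - r_vec) + r_vec) * (prey[i1] - prey[i2])
        new = np.clip(new, lb, ub)
        new_fit = np.apply_along_axis(fitness, 1, new)
        better = new_fit < fit                            # keep improvements (marine memory)
        prey[better], fit[better] = new[better], new_fit[better]
        if fit.min() < best_fit:
            best_fit, best = fit.min(), prey[fit.argmin()].copy()
    return best, best_fit

# Toy usage: minimise the sphere function on [-5, 5]^2.
best, best_fit = mpa(lambda x: float(np.sum(x ** 2)), lb=-5.0, ub=5.0)
print("best solution:", best, "best fitness:", best_fit)
```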

4. Establishment of RV Reducer Fault Diagnosis Model


4.1. EEMD Preprocessing
Empirical Mode Decomposition (EMD) is an adaptive decomposition method that can reflect the instantaneous frequency of data, but it is prone to problems such as mode aliasing, the end effect, and the stopping criterion of the sifting iteration. To solve the above problems, Wu [21] and others proposed Ensemble Empirical Mode Decomposition (EEMD). The method solves the mode aliasing
phenomenon by adding white noise to fill the discontinuous signal segment. In the process
of noise signal decomposition, the filtering characteristics of white noise signal are used to
solve the IMF average value for many times to eliminate the interference of white noise on
the original signal at discontinuous points [22].
Data preprocessing steps are as follows [23]:
Step 1: Divide the fault data into several sample data, and add normal white noise
Sw (ω) to each sample data x(t) to obtain a new overall xs (t).

xs(t) = x(t) + Sw(ω)   (23)

The expression of normal white noise Sw (ω) is as follows:



Sw(ω) = Σ (l = −∞ to +∞) Rw(l) e^(−jωl) = σ²   (24)

In the formula, the expression of Rw (l) is as follows:

Rw(l) = σ² δl,   l = 0, ±1, ±2, · · ·   (25)

In the formula, the expression of δl is as follows:



δl = 1, l = 0;   δl = 0, l ≠ 0   (26)

Step 2: Perform EMD decomposition on xs (t), the sample data with white noise added,
to obtain several IMF components. The flow chart is shown as follows:
xs(t) = Σ (c = 1 to n) imfc(t) + rn(t)   (27)

In the formula, imfc(t) is the c-th IMF of the EMD decomposition, and rn(t) is the residual component after decomposing n IMFs.
Step 3: Repeat the above steps, adding a new white noise sequence each time;
Step 4: Take the definite integral of each IMF obtained and divide it by the segment length. To avoid the influence of the sign of the decomposed values, the absolute value of each IMF is taken first. A code sketch of this procedure is given below.
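A sketch of Steps 1-4 follows. A full EMD implementation is outside the scope of this illustration, so the routine accepts any emd_decompose callable that returns the IMFs of a 1-D signal (for example, a thin wrapper around an external EMD library; this dependency is an assumption, not something specified by the authors).

```python
import numpy as np

def eemd_features(x, emd_decompose, n_ensembles=50, noise_std=0.2, n_imfs=6, seed=0):
    """Steps 1-4 of the EEMD preprocessing described above.

    emd_decompose must be a callable returning an array of IMFs (one per row)
    for a 1-D signal; it is assumed to be provided by an external EMD routine.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros((n_imfs, len(x)))
    for _ in range(n_ensembles):
        xs = x + noise_std * np.std(x) * rng.standard_normal(len(x))  # Step 1, Eq. (23)
        imfs = emd_decompose(xs)                                      # Step 2, Eq. (27)
        k = min(n_imfs, imfs.shape[0])
        acc[:k] += imfs[:k]                                           # Step 3: accumulate over the ensemble
    imfs_mean = acc / n_ensembles          # ensemble averaging cancels the added white noise
    # Step 4: mean absolute value of each IMF (definite integral divided by segment length)
    return np.mean(np.abs(imfs_mean), axis=1)

# Hypothetical usage with one monitoring-data segment and a library-backed EMD:
# from PyEMD import EMD                      # external dependency, assumed to be installed
# features = eemd_features(segment, emd_decompose=EMD().emd)
```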
Its flow chart is shown in Figure 2:
Figure 2. EEMD flow chart.

4.2. Establishment of MPA-KELM Model

The regularization coefficient C and the kernel function parameter σ are the key parameters that affect the performance of the KELM neural network. In this paper, the MPA is used to optimize the regularization coefficient C and the kernel function parameter σ in KELM, and the fault diagnosis model of the RV reducer is established after obtaining the best parameters. The flow chart is shown in Figure 3.

Figure 3. RV Reducer fault diagnosis model based on MPA-KELM.
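The coupling in Figure 3 can be sketched as follows: the MPA searches over (C, σ) and its fitness function is the KELM validation error. The KELM and mpa routines refer to the sketches given in Sections 3.1 and 3.2; the log-scale search range and the hold-out split are assumptions, not settings reported by the authors.

```python
def make_fitness(build_model, X_train, T_train, X_val, T_val):
    """Wrap a (C, sigma) -> validation-error mapping for MPA; search is done in log10 space."""
    def fitness(params):
        C, sigma = 10.0 ** params[0], 10.0 ** params[1]
        model = build_model(C, sigma).fit(X_train, T_train)
        acc = (model.predict(X_val).argmax(1) == T_val.argmax(1)).mean()
        return 1.0 - acc          # MPA minimises the validation error
    return fitness

# Hypothetical usage with the KELM and mpa sketches above and EEMD feature
# vectors X with one-hot labels T (assumed variable names):
# fitness = make_fitness(lambda C, s: KELM(C=C, sigma=s), X_tr, T_tr, X_va, T_va)
# (logC, logs), err = mpa(fitness, lb=-3.0, ub=3.0, n=20, dim=2, T=100)
# best_model = KELM(C=10.0 ** logC, sigma=10.0 ** logs).fit(X_tr, T_tr)
```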

4.3. Evaluation Index


In order to comprehensively and effectively evaluate the performance of the proposed
RV reducer fault diagnosis model based on EEMD-MPA-KELM, a multi-classification
evaluation index system based on confusion matrix is selected in this paper, and the
accuracy λA, precision λP, recall λR and Kappa coefficient are taken as the evaluation
indexes of the RV reducer fault diagnosis model. The calculation formula is as follows:
λA = n / N   (28)

λP = nT / nP   (29)

λR = nT / nR   (30)

Kappa = (λA − Σ (i = 1 to 3) nRi nPi / N²) / (1 − Σ (i = 1 to 3) nRi nPi / N²)   (31)
In the formula, n represents the number of samples predicted correctly, N represents
the total number of samples; nT represents the number of correctly predicted samples of a
fault class, nR represents the total number of samples of this fault class, and nP represents
the total number of samples predicted as this fault class. i = 1,2 indicates the two fault types
of RV reducer.
The accuracy λA is a comprehensive index of fault classification and prediction for the RV reducer; the higher it is, the stronger the fault identification ability. The precision λP represents the misjudgment index of a certain type of fault; the higher the precision, the higher the reliability of the fault judgment. The recall λR represents the missed-judgment index of a certain type of fault; the higher the recall, the higher the sensitivity of fault identification.
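A minimal sketch of Equations (28)-(31) computed from a confusion matrix is given below; the example matrix is hypothetical and is only meant to show how the four indexes are obtained.

```python
import numpy as np

def diagnosis_metrics(cm):
    """Accuracy, per-class precision/recall and Kappa from a confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j,
    following Eqs. (28)-(31).
    """
    N = cm.sum()
    accuracy = np.trace(cm) / N                      # Eq. (28)
    precision = np.diag(cm) / cm.sum(axis=0)         # Eq. (29), per predicted class
    recall = np.diag(cm) / cm.sum(axis=1)            # Eq. (30), per true class
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / N**2
    kappa = (accuracy - pe) / (1.0 - pe)             # Eq. (31)
    return accuracy, precision, recall, kappa

# Hypothetical two-class example (rows: true normal/failure, columns: predicted).
cm = np.array([[45, 5],
               [4, 46]])
print(diagnosis_metrics(cm))
```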

4.4. Model Performance Test


XJTU-SY bearing experimental data set was adopted for model test. In the test,
sampling frequency was set as 25.6 kHz, sampling interval was 1 min, and each sampling
duration was 1.28 s. The speed of the bearing is set at 2100 r/min, and the radial force
received is 12 kN. Table 1 shows the fault forms of Bearing1_2, Bearing1_4, and Bearing1_5
in the data set.

Table 1. Bearing data information.

Data Set Total Sample Size Actual Life Failure Location


Bearing1_2 123 2 h 41 min Outer ring
Bearing1_4 122 2 h 2 min cage
Bearing1_5 52 52 min Inner ring, outer ring

The fault data were divided into 16 groups of sample data and decomposed by
EEMD. The decomposed IMF is taken as input, and the classification number is taken as
output. Classification 1 represents outer ring fault, classification 2 represents cage fault,
classification 3 represents inner ring and outer ring fault, and classification 4 represents
normal. The output is shown in the figure below.
It can be seen from Figure 4 that the prediction accuracy of the MPA-KELM model is much higher than that of the ELM model. As can be seen from Figure 5, the performance of the MPA-KELM prediction model is still good, with fast running speed, small mean square error and good stability.

Figure 4. Comparison of model prediction results.

Figure 5. Convergence diagram.

5. Test and Data Analysis


5.1. Test Platform
The specific model parameters of the RV reducer test platform used in the experiment are shown in Figure 6 and Table 2. The power of the servo motor is 5 kW, the measuring ranges of the two torque sensors are 200 N·m and 500 N·m, and the specification of the magnetic powder brake is 1.5 kW. The spindle speed is set to 151 r/min, and the input shaft and output shaft of the RV reducer are measured by the torque sensors. The RV reducer model number is RV-20E. Samples were collected at 0.5 s intervals and the experiment was conducted at normal temperature.

Figure 6. RV reducer test platform.

Table 2. Type parameters of RV reducer platform.

Equipment    Model Number
Servo motor    Panasonic MSME502GCG
Torque sensor    NCNJ-101 (0 ± 15 N·m)
RV reducer    RV-20E
Torque sensor    NCNJ-101 (0 ± 200 N·m)
Magnetic powder brake    FZ400A-1

In the experiment, the torque, speed and power of the RV reducer showed abnormal data, and the reducer failed. Therefore, the efficiency data before the fault and after the fault are compared.
As can be seen from Figure 7, the efficiency before failure has a certain periodicity, while the efficiency after failure is obviously not periodic, which conforms to the conclusion of Equation (8).

Figure 7. Comparison before and after the fault.

5.2. Simulation of EEMD-MPA-KELM Fault Classification Model

MATLAB R2020b programming was used to select 1680 monitoring data points of pre-fault efficiency and post-fault efficiency as sample data for EEMD decomposition. Some of the graphs obtained are shown in Figures 8 and 9.
Figure 8. IMF image after EEMD decomposition of normal data. (a) IMF1 component after EEMD decomposition of normal data; (b) IMF2 component after EEMD decomposition of normal data; (c) IMF3 component after EEMD decomposition of normal data; (d) IMF4 component after EEMD decomposition of normal data; (e) IMF5 component after EEMD decomposition of normal data; and (f) IMF6 component (residual component) after EEMD decomposition of normal data.

It can be seen from Figures 8 and 9 that normal data and fault data are significantly
different after EEMD decomposition. Since the energy of the signal is mainly concentrated
in the lower IMF, the first six IMF components of each group are selected for average
checking calculation. The two groups of data obtained 21 eigenvalues after collective
average calculation. At this time, the eigenvalues were input into MPA-KELM classification
model for training and classification. The decomposed IMF was taken as input, and the
classification number was taken as output, among which classification 1 represented normal
and classification 2 represented fault. The output is shown in Figures 10 and 11.

Figure 9. Image of fault data after EEMD decomposition. (a) IMF1 component after EEMD decomposition of fault data; (b) IMF2 component after EEMD decomposition of fault data; (c) IMF3 component after EEMD decomposition of fault data; (d) IMF4 component after EEMD decomposition of fault data; (e) IMF5 component after EEMD decomposition of fault data; and (f) IMF6 component (residual component) after EEMD decomposition of fault data.

Figure 10. Single operation result of EEMD-MPA-KELM model. (a) model training sets predict results; (b) model test sets predict results.

Figure 11. Classification confusion matrix diagram of fault diagnosis.

It can be seen from Figure 10 that the EEMD-MPA-KELM diagnosis model has good
recognition performance, and the overall recognition accuracy of its training set is relatively
high, reaching 100%. The overall recognition accuracy of the test set is 90.33%.
Figure 11 shows the visual confusion matrix of RV reducer fault diagnosis results
based on EEMD-MPA-KELM. The diagonal elements in the matrix represent the correct
predicted number of samples of a certain fault class, and the sum of data in each column
represents the total number of samples of this fault class, and the sum of data in each
row represents the total number of samples predicted for this fault class. Table 3 shows
the evaluation indexes of the model. It can be seen from Figure 10 and Table 3 that the
recall and precision rates of the proposed EEMD-MPA-KELM fault diagnosis model for the
various running-state samples are both higher than 85%.
The Kappa coefficient of the model as a whole is 0.82, and the classification accuracy is as high
as 91.00%. The proposed RV reducer fault diagnosis model based on EEMD-MPA-KELM is
verified to have high fault recognition sensitivity, high fault judgment reliability and strong
overall fault-classification ability.

Table 3. Evaluation indexes of EEMD-MPA-KELM model.

Running State    λR    λP
Normal    92.67%    89.68%
Failure    89.33%    92.41%
Overall Kappa coefficient: 0.82; overall accuracy λA: 91.00%

In order to further verify the performance superiority of the proposed RV reducer fault
diagnosis model based on EEMD-MPA-KELM, this model was compared and analyzed
with the diagnostic results of other models. In order to eliminate the contingency of the test
results, a total of 30 times were run, and the results are shown in Figure 12.

Figure 12. Comparison of accuracy of 30 runs of various models.


As can be seen from Figure 12 and Table 4, the accuracy of the KELM model is the lowest and unstable. The accuracy often fluctuates between 30% and 50%, and the overall accuracy is only 44.67%. After adding EEMD decomposition, the recognition accuracy of KELM was improved, and the overall accuracy was 52.33%. Although the EEMD-GA-KELM model is relatively stable, its recognition accuracy is not much different from that of the EEMD-KELM model, and its optimization effect is not obvious. The accuracy of the EEMD-PSO-KELM model is relatively high, mostly around 80%, and occasionally reaches 90%. The EEMD-MPA-KELM model has the highest overall accuracy, with an average accuracy of 90.33% over the 30 runs, mostly above 90%, and even reaching 100% in several runs. Its accuracy is relatively stable, and the optimization effect is the most significant.

Table 4. Accuracy and running time statistics of each model after 30 runs.

Type    Accuracy Rate    Time
KELM    44.67%    6.69 s
EEMD-KELM    52.33%    6.58 s
EEMD-GA-KELM    59.33%    5.62 s
EEMD-PSO-KELM    78.67%    4.84 s
EEMD-MPA-KELM    90.33%    4.58 s

In conclusion, the accuracy and stability of the EEMD-MPA-KELM RV reducer fault diagnosis model are superior to those of the other four models.

6. Conclusions
The RV reducer is the core part of an industrial robot. With the increasing demand for industrial robots, it is very important to predict the failure of RV reducers in actual production. To solve the above problems, a fault diagnosis model of an RV reducer based on EEMD-MPA-KELM was proposed in this paper. After modeling and research, the following conclusions were obtained:
(1) The data measured on the RV reducer test platform are decomposed by EEMD to obtain the optimal feature set. The regularization coefficient C of KELM and the kernel function parameter σ are optimized with MPA to improve the prediction accuracy and stability of the KELM. The feature set is imported into the MPA-KELM model as input for fault identification. Compared with the KELM, EEMD-KELM, EEMD-GA-KELM and EEMD-PSO-KELM models, the EEMD-MPA-KELM RV reducer fault diagnosis and classification model is faster, more accurate and more stable.
(2) Timely and accurate judgment of the operation state of the RV reducer of industrial
robots is conducive to the timely maintenance of the RV reducer, providing guarantee for
the accuracy of the RV reducer test data, so as to help the RV reducer manufacturers to
improve production efficiency and reduce economic losses.

Author Contributions: Conceptualization, Z.T.; Data curation, L.G.; Formal analysis, Y.L.; Fund-
ing acquisition, L.G. and Z.Z.; Investigation, Z.T.; Methodology, Z.T.; Project administration, Y.L.;
Software, X.W.; Supervision, Y.L.; Validation, X.W. and Z.Z.; Visualization, Z.T.; Writing—review
& editing, X.W.; Writing—original draft, X.W. and Z.T. All authors have read and agreed to the
published version of the manuscript.
Funding: This research was funded by the Key research projects supported by the National Nat-
ural Science Foundation of China, grant number No. 92067205; Key Research and Development
Project of Anhui Province, grant number No. 2022a05020035; Major science and technology project
of Anhui Province, grant number No. 202103a05020022; HFIPS Director’s Fund, grant number
No. YZJJQY202305; Supported by Anhui Province Intelligent Mine Technology and Equipment
Engineering Laboratory Open Fund, grant number No. AIMTEEL202201; Key Project of Scientific
Research of Anhui Provincial Education Department, China, grant number No. 2022AH050995.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Pan, B.S.; Lin, C.K.; Xiang, Y.Y.; Wen, J.; Shi, L.J. Time-varying reliability analysis and optimal design of planetary Reducer
transmission accuracy considering gear wear. Comput. Integr. Manuf. Syst. 2022, 28, 745–757.
2. Li, Z.; Huang, X.L.; Zhang, T.F.; Jing, X. Fault diagnosis of Planetary Reducer based on multi-source heterogeneous sensor based
on deep neural network. J. Ordnance Equip. Eng. 2018, 39, 192–195.
3. Wang, J.G.; Ke, L.L. RV retarder fault diagnosis based on residual network. J. Mech. Eng. 2019, 55, 73–80. [CrossRef]
4. Mao, J.; Guo, H.; Chen, H.Y. Fault diagnosis of shearer cutting gear based on deep self-coding network. Coal Sci. Technol. 2019, 47,
123–128.
5. Peng, P.; Ke, L.L.; Wang, J.G. RV reducer fault diagnosis under noise interference. J. Mech. Eng. 2020, 56, 30–36. [CrossRef]
6. Chen, L.R.; Cao, J.F.; Wang, X.Q. Fault diagnosis of RV reducer for robot based on nonlinear spectrum and kernel principal
component analysis. J. Xi’an Jiaotong Univ. 2020, 54, 32–41.
7. An, H.B.; Liang, W.; Zhang, Y.L. Analysis and Experimental study on Acoustic emission Signal Propagation Mechanism of Robot
RV Reducer. Robot 2020, 42, 557–567.
8. Yu, N.; Jing, N.; Chen, H.Y. Segmental fusion diagnosis method for compound fault of mine hoist retarder. Mech. Sci. Technol.
2022, 41, 394–401.
9. Qian, H.-M.; Li, Y.-F.; Huang, H.-Z. Time-variant reliability analysis for industrial robot RV reducer under multiple failure modes
using Kriging model. Reliab. Eng. Syst. Saf. 2020, 199, 106936. [CrossRef]
10. Wang, H.; Jing, W.; Li, Y.; Yang, H. Fault Diagnosis of Fuel System Based on Improved Extreme Learning Machine. Neural Process.
Lett. 2021, 53, 2553–2565. [CrossRef]
11. Guo, S.; Wu, B.; Zhou, J.; Li, H.; Su, C.; Yuan, Y.; Xu, K. An Analog Circuit Fault Diagnosis Method Based on Circle Model and
Extreme Learning Machine. Appl. Sci. 2020, 10, 2386. [CrossRef]
12. Xia, Y.; Ding, Q.; Jiang, A.; Jing, N.; Zhou, W.; Wang, J. Incipient fault diagnosis for centrifugal chillers using kernel entropy
component analysis and voting based extreme learning machine. Korean J. Chem. Eng. 2022, 39, 504–514. [CrossRef]
13. Liu, X.; Huang, H.; Xiang, J. A personalized diagnosis method to detect faults in gears using numerical simulation and extreme
learning machine. Knowl. Based Syst. 2020, 195, 105653. [CrossRef]
14. Huang, G.-B. An Insight into Extreme Learning Machines: Random Neurons, Random Features and Kernels. Cogn. Comput. 2014,
6, 376–390. [CrossRef]
15. Yang, X.; Yu, Z.D.; Zhang, Z.Y.; Bing, H.K.; Shen, H.N.; Wang, J.X. Turbine rotor fault diagnosis based on multi-feature extraction
and nuclear extreme Learning machine. Turbine Technol. 2020, 62, 137–142.
16. Qiao, W.S.; Hua, J.; Lou, R. Optimization of diesel engine fault diagnosis of nuclear Extreme Learning Machine with improved
Butterfly algorithm. Mech. Des. Res. 2022, 38, 211–214.

17. Liang, R.; Chen, Y.; Zhu, R. A novel fault diagnosis method based on the KELM optimized by whale optimization algorithm.
Machines 2022, 10, 93. [CrossRef]
18. Zhang, H.; Pan, C.; Wang, Y.; Xu, M.; Zhou, F.; Yang, X.; Zhu, L.; Zhao, C.; Song, Y.; Chen, H. Fault Diagnosis of Coal Mill Based
on Kernel Extreme Learning Machine with Variational Model Feature Extraction. Energies 2022, 15, 5385. [CrossRef]
19. Zhao, Z.; Ye, G.; Liu, Y.; Zhang, Z. Recognition of Fault State of RV Reducer Based on self-organizing feature map Neural Network.
J. Physics Conf. Ser. 2021, 1986, 012086. [CrossRef]
20. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249.
[CrossRef]
21. Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal.
2009, 1, 1–41. [CrossRef]
22. Chen, J.M.; Liang, Z.C. Research on Speech Enhancement Algorithm based on EEMD Data preprocessing and DNN. J. Ordnance
Equip. Eng. 2019, 40, 96–103.
23. Wang, Y.J.; Kang, S.Q.; Zhang, Y.; Liu, X.; Jiang, Y.C. Condition Recognition Method of Rolling Bearing Based on Ensemble
Empirical Mode Decomposition Sensitive Intrinsic Mode Function Selection Algorithm. J. Electron. Inf. Technol. 2014, 36, 595–600.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
