
A Deep Learning and Softmax Regression Fault Diagnosis Method for Multi-Level Converter

Bin Xin, Shanghai Maritime University, Shanghai, China, xbbx385@163.com
Tianzhen Wang, Shanghai Maritime University, Shanghai, China, wtz0@sina.com
Tianhao Tang, Shanghai Maritime University, Shanghai, China, thtang@shmtu.edu.cn

Abstract—For the single-switch and double-switch faults of the seven-level converter, this paper presents a new way to learn fault features based on a deep neural network of sparse autoencoders. The sparse autoencoder is an unsupervised learning method that learns the feature information of the fault data during training. The feature information is then used to train a softmax classifier by softmax regression to perform the classification. Compared with the traditional BP neural network, the experimental results show that classifying the faults of the seven-level converter with the sparse-autoencoder deep neural network achieves higher accuracy.

Keywords—fault diagnosis; deep neural network; seven-level converter; sparse autoencoder; softmax classifier

I. INTRODUCTION

In order to reduce carbon emissions while meeting growing energy demand, a clean, efficient, safe and diversified modern energy system must be established to reduce reliance on traditional fossil fuels, and great hopes are placed on new energy sources with renewable and environmentally friendly characteristics. At the same time, to meet the needs of power system development, the multilevel converter has emerged [1]. The cascaded H-bridge topology needs no large clamping diodes, has no neutral-point offset of the intermediate DC voltage, and adopts a modular installation; it offers high working efficiency and a wide range of applications [2-4]. However, as the number of levels increases, the number of controllable switches in the main circuit of the converter increases, and the circuit structure and control mode become more complicated. The cascaded H-bridge converter uses a large number of power semiconductor devices. All of these factors increase the probability of failure [5]. Once a fault occurs, it can cause a serious safety accident. Therefore, fault diagnosis of the cascaded H-bridge multilevel converter deserves more attention.

Among existing fault diagnosis methods for cascaded multilevel converters, [6] presents a method based on high-frequency harmonic analysis, using a dynamic prediction of converter behavior that avoids false detections on transients while keeping precision under fault events; however, the cell detector can have problems due to voltage oscillations, which lead to collecting incorrect data. The analysis of current slope and output pattern according to zero-voltage switching states has been proposed in [7]. However, these methods can be rather complicated and may require modifying the modulation scheme, as their characteristics depend on the applied modulation techniques. [8] presents a method based on the fact that the pole voltages of the neutral-point-clamped converter are distorted whenever there is an open-circuit switch fault. In [9], the classical BP neural network is used to classify faults, but the BP network easily converges to a local minimum during training. Geoffrey Hinton and his students developed the deep neural network framework and its training method on the basis of the original neural network [10]. It has excellent unsupervised feature learning ability, can approximate complex functions, prevents overfitting, and overcomes the BP neural network's tendency to converge to local minima, which helps improve classification accuracy.

This paper presents a fault diagnosis method for fault detection of the seven-level converter. The method consists of a 6-layer deep neural network based on a sparse autoencoder, together with a softmax classifier. The deep neural network extracts the features of the fault data, which are then classified by the softmax classifier. The method is validated by experimental studies; theoretical analysis and experimental results show its effectiveness.

II. FAULT SIGNAL ANALYSIS OF CASCADED SEVEN-LEVEL CONVERTER

Figure 1 is a typical topology diagram of the cascaded H-bridge seven-level converter, which is composed of three cascaded H-bridges. Each bridge has four power switching devices, each connected in anti-parallel with a freewheeling diode. In this paper, the sinusoidal pulse width modulation (SPWM) technique is used to control the gate drive signal.

© IEEE
Licensed use limited to: MINISTERE DE L'ENSEIGNEMENT SUPERIEUR ET DE LA RECHERCHE SCIENTIFIQUE. Downloaded on September 29, 2022 at 22:13:28 UTC from IEEE Xplore. Restrictions apply.
Table I (continued):
8    IGBT-7 open circuit    [0,0,0,0,0,0,0,1,0,0,0]T
9    IGBT-8 open circuit    [0,0,0,0,0,0,0,0,1,0,0]T
10   IGBT-9 open circuit    [0,0,0,0,0,0,0,0,0,1,0]T
11   IGBT-10 open circuit   [0,0,0,0,0,0,0,0,0,0,1]T
12   IGBT-11 open circuit   [0,0,0,0,0,0,0,0,0,0,1]T
13   IGBT-12 open circuit   [0,0,0,0,0,0,0,0,0,1,0]T

Fig. 1 Seven-level converter topology

There are two kinds of failure modes in power switching devices: short circuit and open circuit. A short-circuit fault causes overcurrent in the circuit, triggering the system protection and melting the fuse, which makes it equivalent to an open-circuit fault. A gate fault of the power switch or a fault of the drive circuit can lead to an open-circuit fault of the switching device. When an open-circuit fault occurs, the system can continue to work, but the output waveform is abnormal, which can cause the load to fail. Open-circuit failures do not easily trigger the system protection, so they are difficult to detect. This paper mainly discusses the fault diagnosis of one and two switching devices, and collects the output voltage of the converter system as the data source for the diagnosis [11]. The cascaded H-bridge seven-level converter consists of 12 power switches. Considering only single-switch failures, there are 13 states: the 12 single-switch faults plus the normal state; because some faults produce identical waveforms, these map to 11 category labels, as shown in Table I. The numbers 0 and 1 distinguish the states: a 1 marks the category label of the state the converter system is in, and the other entries are 0.

Figures 2, 3 and 4 show the simulated output voltage of the seven-level converter under normal operation and under the 12 kinds of single-switch open circuit. Fig. 2(1) is the output voltage waveform of the normal state of the seven-level converter. Fig. 2(2) through Fig. 4(4) are the output voltage waveforms corresponding to switches IGBT1-IGBT12 in the open-circuit state. It can be seen from the figures that the breaking of each switch changes the output voltage. Some of these fault characteristics are obvious, such as in Fig. 2(4). However, some fault characteristics differ only slightly, as shown in the red circles in Fig. 3(1) and Fig. 3(2); this increases the difficulty of fault feature extraction and limits the accuracy of fault classification. The output voltage signal of the IGBT9 fault (Fig. 4(1)) is similar to that of IGBT12 (Fig. 4(4)), and the same holds for IGBT10 and IGBT11, because of the symmetry of the H-bridge multilevel converter. Therefore, similar signal features are classified into the same fault category.

Fig. 2 Normal state and fault output voltage from IGBT1 to IGBT4

Table I. Fault category label settings

Serial number   Fault category         Category label
1    normal                 [1,0,0,0,0,0,0,0,0,0,0]T
2    IGBT-1 open circuit    [0,1,0,0,0,0,0,0,0,0,0]T
3    IGBT-2 open circuit    [0,0,1,0,0,0,0,0,0,0,0]T
4    IGBT-3 open circuit    [0,0,0,1,0,0,0,0,0,0,0]T
5    IGBT-4 open circuit    [0,0,0,0,1,0,0,0,0,0,0]T
6    IGBT-5 open circuit    [0,0,0,0,0,1,0,0,0,0,0]T
7    IGBT-6 open circuit    [0,0,0,0,0,0,1,0,0,0,0]T

Fig. 3 Fault output voltage from IGBT5 to IGBT8


Fig. 4 Fault output voltage from IGBT9 to IGBT12
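The label scheme of Table I can be sketched in NumPy. The mapping below, in which IGBT-11 shares IGBT-10's label and IGBT-12 shares IGBT-9's (because their waveforms are identical, as noted above), is a minimal illustration, not the paper's code:

```python
# Sketch of the Table I label scheme: each converter state maps to an
# 11-dimensional one-hot vector. Faults with identical output waveforms
# (IGBT-9/IGBT-12 and IGBT-10/IGBT-11) share a label.
import numpy as np

NUM_CLASSES = 11

def one_hot(category_index, num_classes=NUM_CLASSES):
    """Return the one-hot label vector for a 0-based category index."""
    label = np.zeros(num_classes, dtype=int)
    label[category_index] = 1
    return label

# State -> category index (0 = normal, 1..10 = single-switch faults,
# with IGBT-11 reusing IGBT-10's index and IGBT-12 reusing IGBT-9's).
state_to_category = {"normal": 0}
for igbt in range(1, 11):          # IGBT-1 .. IGBT-10
    state_to_category[f"IGBT-{igbt}"] = igbt
state_to_category["IGBT-11"] = 10  # same waveform as IGBT-10
state_to_category["IGBT-12"] = 9   # same waveform as IGBT-9

print(one_hot(state_to_category["IGBT-12"]))  # row 10 of Table I
```

Thirteen states thus collapse to 11 label vectors, matching the table.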

III. A FAULT DIAGNOSIS METHOD BASED ON DEEP LEARNING AND SOFTMAX REGRESSION

In this paper, the basic structure of the fault diagnosis system, based on a deep neural network model of sparse autoencoders and softmax regression, is shown in Fig. 5. It consists of four parts: data acquisition, feature extraction, fault classification, and switch mode calculation. Only the first two parts are studied here. The 6-layer neural network structure based on sparse autoencoders is used to extract the features of the data; the classification step is then completed by softmax regression.

Fig. 5 Fault diagnosis system structure diagram

A. Sparse Autoencoder Algorithm

The sparse autoencoder is one of the deep learning structures [12,13]; Fig. 6 shows its basic block diagram. It consists of three layers: the input layer ($L_1$), the hidden layer ($L_2$) and the output layer ($L_3$), where "+1" denotes the bias neuron. The input layer has the same number of neurons as the output layer. Training uses the propagation algorithm with the target value set equal to the input value, so the network can learn the data characteristics without supervision or labels. Assume an unlabeled training sample set $X = [x^{(1)}, \dots, x^{(m)}]$, $x^{(i)} \in \mathbb{R}^n$. The weight $w^{(1)}_{ji}$ represents the weight between the $i$-th neuron of the input layer and the $j$-th neuron of the hidden layer; the weight between the hidden layer and the output layer is denoted $w^{(2)}_{ij}$. The input threshold of the hidden layer is $b^{(1)}$ and the input threshold of the output layer is $b^{(2)}$. Forward and backward propagation are used to train the parameters. The sigmoid function is chosen as the activation function:

$$f(z) = \frac{1}{1 + \exp(-z)} \qquad (1)$$

Fig. 6 Sparse autoencoder network structure

Its value range is (0, 1). When the input is a dataset containing $m$ samples, the overall cost function is defined as:

$$J(W,b) = \frac{1}{m}\sum_{k=1}^{m}\frac{1}{2}\left\lVert h_{W,b}\big(x^{(k)}\big) - y^{(k)}\right\rVert^{2} + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{j=1}^{s_{l+1}}\sum_{i=1}^{s_l}\left(W^{(l)}_{ji}\right)^{2} \qquad (2)$$

where $y$ is the output, equal to $x$; $n_l$ is the number of layers, equal to 3 here; and $s_l$ is the number of neurons in layer $l$ ($s_2$ in the hidden layer, $s_3$ in the output layer). The first term is the sum of the error energy over all neurons, the same as in BP. The second term is the sum of the squares of all elements of the weight matrices $W^{(1)}, W^{(2)}$; it is a regularization term whose purpose is to reduce the magnitude of the weights and prevent overfitting. $\lambda$ is the weight attenuation parameter.
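Eqs. (1) and (2) can be sketched in NumPy as follows; the network shapes, sample count, and λ value here are illustrative assumptions, not the paper's trained configuration:

```python
# Minimal sketch of Eq. (1) and Eq. (2): sigmoid activation and the
# autoencoder cost (mean squared reconstruction error plus an L2
# weight-decay term). All sizes below are toy values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))           # Eq. (1), range (0, 1)

def autoencoder_cost(W1, b1, W2, b2, X, lam):
    """X: (n_features, m) samples; the target y equals the input x."""
    m = X.shape[1]
    hidden = sigmoid(W1 @ X + b1)             # encoder
    output = sigmoid(W2 @ hidden + b2)        # decoder, h_{W,b}(x)
    recon = np.sum((output - X) ** 2) / (2.0 * m)          # first term of Eq. (2)
    decay = (lam / 2.0) * (np.sum(W1**2) + np.sum(W2**2))  # second term of Eq. (2)
    return recon + decay

rng = np.random.default_rng(0)
X = rng.random((8, 5))                        # 5 toy samples, 8 features
W1 = rng.standard_normal((4, 8)) * 0.01
W2 = rng.standard_normal((8, 4)) * 0.01
b1 = np.zeros((4, 1))
b2 = np.zeros((8, 1))
print(autoencoder_cost(W1, b1, W2, b2, X, lam=0.003))
```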


The sparse autoencoder constrains the output of the hidden layer: the average output of each hidden-layer node should be close to 0, so that most hidden-layer nodes are in the unactivated state. Therefore, following [14], the cost function of the sparse autoencoder is given as:

$$J_{\mathrm{sparse}}(W,b) = J(W,b) + \beta \sum_{j=1}^{s_2} KL\big(\rho \,\|\, \hat{\rho}_j\big) \qquad (3)$$

where $\beta$ is the weight of the sparsity penalty term. The second term is the sparsity penalty factor:

$$KL\big(\rho \,\|\, \hat{\rho}_j\big) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \qquad (4)$$

$\hat{\rho}_j$ is the average activation of hidden unit $j$:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a^{(2)}_j\big(x^{(i)}\big) \qquad (5)$$

$m$ is the number of samples and $a^{(2)}_j(x^{(i)})$ is the activation of unit $j$ in the hidden layer. $\rho$ is the sparsity parameter, which specifies the desired level of sparsity. After obtaining the final cost function, the derivatives with respect to the weights and thresholds are calculated:

$$\frac{\partial}{\partial W^{(l)}_{ij}} J(W,b;x,y) = a^{(l)}_j \,\delta^{(l+1)}_i \qquad (6)$$

$$\frac{\partial}{\partial b^{(l)}_i} J(W,b;x,y) = \delta^{(l+1)}_i \qquad (7)$$

$l$ is the layer index and $\delta^{(l+1)}_i$ is the residual of unit $i$ in the $(l+1)$-th layer:

$$\delta^{(l)}_i = \left(\sum_{j=1}^{s_{l+1}} W^{(l)}_{ji}\,\delta^{(l+1)}_j + \beta\left(-\frac{\rho}{\hat{\rho}_i} + \frac{1-\rho}{1-\hat{\rho}_i}\right)\right) f'\big(z^{(l)}_i\big) \qquad (8)$$

After obtaining the derivative of the cost function with respect to each weight and threshold, the optimal parameter values are found with the L-BFGS iterative algorithm. At this point, the weights and thresholds of the first hidden layer are trained. The hidden-layer output then serves as the input for training the parameters of the second hidden layer, and so on, until the whole deep neural network is trained.
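The sparsity penalty of Eqs. (3)-(5) can be sketched as follows; β and ρ take the values later listed in Table II (3 and 0.1), while the hidden activations here are random stand-ins:

```python
# Sketch of the sparsity penalty, Eqs. (3)-(5): the average hidden
# activations rho_hat are pushed toward a small target rho via the
# KL divergence. The activation matrix is a random placeholder.
import numpy as np

def kl_penalty(rho, rho_hat):
    """Eq. (4): KL(rho || rho_hat_j), summed over hidden units j."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rho, beta = 0.1, 3.0                       # Table II values
rng = np.random.default_rng(0)
hidden_act = rng.random((4, 100))          # 4 hidden units, 100 samples
rho_hat = hidden_act.mean(axis=1)          # Eq. (5): average activation
sparsity_term = beta * kl_penalty(rho, rho_hat)   # term added in Eq. (3)
print(sparsity_term)
```

The penalty is zero exactly when every average activation equals ρ, and grows as activations drift away from it.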
B. Softmax Regression

The softmax regression model is a generalization of the logistic regression model to multi-class problems [15]. Suppose $x$ is the input and $\theta$ is a parameter vector; the corresponding (two-class) cost function is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\log h_\theta\big(x^{(i)}\big) + \big(1-y^{(i)}\big)\log\Big(1-h_\theta\big(x^{(i)}\big)\Big)\right] \qquad (9)$$

$m$ is the number of samples. Given a sample, the output is a probability value representing the probability that the sample belongs to category $k$, so the category labels can range from 1 to $K$. The system equation is:

$$h_\theta\big(x^{(i)}\big) = \begin{bmatrix} p\big(y^{(i)}=1 \mid x^{(i)};\theta\big) \\ p\big(y^{(i)}=2 \mid x^{(i)};\theta\big) \\ \vdots \\ p\big(y^{(i)}=K \mid x^{(i)};\theta\big) \end{bmatrix} = \frac{1}{\sum_{j=1}^{K} e^{\theta_j^T x^{(i)}}} \begin{bmatrix} e^{\theta_1^T x^{(i)}} \\ e^{\theta_2^T x^{(i)}} \\ \vdots \\ e^{\theta_K^T x^{(i)}} \end{bmatrix} \qquad (10)$$

At this point the parameter $\theta$ is no longer a column vector but a matrix; each row of the matrix can be regarded as the classifier parameters of one category, $K$ rows in total. Its cost function is:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{K} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{K} e^{\theta_l^T x^{(i)}}}\right] + \frac{\lambda}{2}\sum_{i=1}^{K}\sum_{j=1}^{n}\theta_{ij}^2 \qquad (11)$$

where $1\{\cdot\}$ is the indicator function: when the expression in braces is true, the function equals 1, otherwise 0. The second term on the right-hand side is the weight attenuation term, which penalizes oversized parameter values; it makes the cost function strictly convex, ensuring a unique solution. The partial derivative with respect to $\theta_j$ is:

$$\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\Big(1\{y^{(i)}=j\} - p\big(y^{(i)}=j \mid x^{(i)};\theta\big)\Big)\right] + \lambda\theta_j \qquad (12)$$

Knowing the cost function and its partial derivative, the L-BFGS iterative algorithm is used to obtain the parameter $\theta$.

C. Fault Diagnosis Procedure

In this paper, single-switch fault classification involves 11 fault categories; when single- and double-switch failures are both considered, the number of label types reaches 41. The first step is to collect enough data for training and testing; here 100 sets of data are collected for each state. The seven-level converter consists of 12 switching devices, so there are one normal state, 12 single-switch fault states and 66 double-switch fault states. However, some states have identical output voltage waveforms and are regarded as the same label type, so the 79 states can be seen as 41 categories. Each set of data contains 1000 sampling points; for each state, 50 sets are used as training samples and the other 50 as test samples. The training matrix (here a 1000 × 3950 matrix) is fed into the input layer of the deep neural network. After training is completed, the test sample data (also a 1000 × 3950 matrix) is input into the input layer of the deep neural network. If the final classifier output label is consistent with the test data label, the
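The softmax quantities of Eqs. (10)-(12) can be sketched in NumPy; the feature dimension, class count, and λ below are toy assumptions:

```python
# Sketch of Eqs. (10)-(12): softmax hypothesis, regularized cost, and
# its gradient. Theta is (K, n), one row of parameters per category.
import numpy as np

def softmax_h(Theta, X):
    """Eq. (10): class probabilities; X is (n, m), returns (K, m)."""
    scores = Theta @ X
    scores -= scores.max(axis=0, keepdims=True)   # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=0, keepdims=True)

def softmax_cost_grad(Theta, X, Y, lam):
    """Eq. (11) cost and Eq. (12) gradient; Y is one-hot, (K, m)."""
    m = X.shape[1]
    P = softmax_h(Theta, X)
    cost = -np.sum(Y * np.log(P)) / m + (lam / 2.0) * np.sum(Theta**2)
    grad = -(Y - P) @ X.T / m + lam * Theta       # Eq. (12), one row per class
    return cost, grad

rng = np.random.default_rng(1)
X = rng.random((6, 10))             # 10 toy samples, 6 features
labels = rng.integers(0, 3, 10)     # 3 classes
Y = np.eye(3)[labels].T             # one-hot, (3, 10)
Theta = np.zeros((3, 6))
cost, grad = softmax_cost_grad(Theta, X, Y, lam=1e-4)
print(cost)  # log(3) at Theta = 0: all classes equally likely
```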


classification result is correct. Here are the parameters and algorithm steps of the whole system:

Table II. The parameters of the whole system

Parameter   Significance                               Value
λ           weight attenuation parameter for SA        0.003
β           weight of the sparsity penalty term        3
ρ           sparsity parameter                         0.1
λ           weight attenuation parameter for softmax   0.0001

1) Offline training process

The training calculates and finds the optimal parameters of the deep neural network and the softmax classifier.

a) Data sampling: Collect data X as the training samples. Rows represent samples; columns represent sampling points.

b) Setting the structure of the neural network and initializing the parameters: Determine the number of layers. The number of neurons in a hidden layer should be less than N-1 (N is the number of samples); the number of neurons in the input layer should equal the number of sampling points; the last hidden layer is regarded as the output layer of the network. Initialize the parameters with random numbers.

c) Calculating the first-layer parameters: Initialize the auxiliary parameters w, b for the first hidden layer. According to Fig. 6, formulas (1), (2), (3), (6), (7), (8) and the L-BFGS iterative algorithm, calculate the values of w(1), b(1). The output of this layer is used as the input for the next layer.

d) Computing the remaining parameters: Use the same method as in step c) to calculate the weights and thresholds of each hidden layer.

e) Training the classifier: The output of the deep neural network is the input of the softmax classifier; calculate θ by formulas (11) and (12) until the error meets the requirement.

f) Fine-tuning all parameters by the backpropagation algorithm: Fine-tune the whole parameter set by replacing J(W,b) with J(θ) in formulas (6) and (7), calculating the final values of all weights and thresholds.

2) Online testing process

g) Data sampling: Resample the voltage signal as the testing data and input it into the neural network.

h) Calculating the output of the network: Using the trained weights and thresholds, calculate the output of the neural network as the input for the classifier.

i) Calculating the probability: Calculate the value of h_θ(x) with the softmax classifier using formula (10).

j) Judging the fault: Use the value of h_θ(x) to judge which one or two switching devices are faulty.

Fig. 7 Training and testing flow chart

IV. COMPARISON AND ANALYSIS OF EXPERIMENTAL RESULTS

The experimental data are collected on the experiment platform shown in Fig. 8. A total of 13 groups of single-switch fault data are sampled on the platform. The simulation data are obtained from Simulink; a total of 79 groups of state data were collected. The number of points collected in one cycle is 1000. Among them, 50 groups of data per state were used as training samples to train the deep neural network and the softmax classifier, and 50 groups were used as test samples. According to the characteristics of the seven-level converter, a 6-layer network structure is used here, with 1000, 620, 300, 160, 90 and 50 neurons in layers 1 through 6 respectively. Finally, the 50 feature points represent the characteristic information of the original sample of 1000 points.

Fig. 8 Experiment platform of the cascaded H-bridge seven-level converter
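The greedy layer-wise training of steps c) and d) can be sketched as follows, using the layer sizes from this section (1000, 620, 300, 160, 90, 50). Plain gradient descent stands in for the paper's L-BFGS and the sparsity penalty is omitted for brevity, so this shows only the stacking scheme, not a faithful reimplementation:

```python
# Structural sketch of greedy layer-wise autoencoder pretraining:
# each layer is trained to reconstruct its input, then its hidden
# output becomes the input of the next layer (steps c and d).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder_layer(X, n_hidden, lr=0.1, steps=10, seed=0):
    """Train one autoencoder on X (n_features, m); return the encoder."""
    rng = np.random.default_rng(seed)
    n_in, m = X.shape
    W1 = rng.standard_normal((n_hidden, n_in)) * 0.01
    W2 = rng.standard_normal((n_in, n_hidden)) * 0.01
    b1 = np.zeros((n_hidden, 1))
    b2 = np.zeros((n_in, 1))
    for _ in range(steps):                    # gradient descent stand-in for L-BFGS
        H = sigmoid(W1 @ X + b1)
        O = sigmoid(W2 @ H + b2)
        dO = (O - X) * O * (1 - O) / m        # output-layer residual
        dH = (W2.T @ dO) * H * (1 - H)        # hidden-layer residual
        W2 -= lr * dO @ H.T
        b2 -= lr * dO.sum(axis=1, keepdims=True)
        W1 -= lr * dH @ X.T
        b1 -= lr * dH.sum(axis=1, keepdims=True)
    return W1, b1

layer_sizes = [1000, 620, 300, 160, 90, 50]   # Section IV network
rng = np.random.default_rng(0)
A = rng.random((layer_sizes[0], 20))          # 20 toy samples
encoders = []
for n_hidden in layer_sizes[1:]:
    W, b = train_autoencoder_layer(A, n_hidden)
    encoders.append((W, b))
    A = sigmoid(W @ A + b)                    # step d): output feeds the next layer
print(A.shape)  # (50, 20): 50 feature points per sample
```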


The test is divided into two rounds. The first round classifies the single-switch faults (a total of 13 states divided into 11 categories); the second round classifies the single and double switch faults (a total of 79 states divided into 41 categories). The method is compared with the Principal Component Analysis (PCA) algorithm and the traditional Back Propagation (BP) neural network, with the network layers, parameters and optimization algorithms kept consistent. The experimental data are shown in Table III and the simulation data in Table IV.

Table III. Experimental results and fault detection based on different methods

Experimental data (single fault) - accuracy
Method             experiment 1   experiment 2   average
BP                 99.77%         99.85%         99.81%
PCA with softmax   99.692%        99.769%        99.7305%
SA with softmax    99.923%        100%           99.9615%

The experiments show that the method based on the sparse-autoencoder deep neural network and softmax classifier is superior to the BP neural network in fault classification accuracy for seven-level converters. The results also show that the feature extraction capability of the sparse autoencoder is better than that of the PCA algorithm. Moreover, it does not need to preprocess the data, which better preserves the original feature information. Most importantly, the accuracy of the diagnostic method based on the sparse autoencoder and softmax classifier can reach 100% without any false alarms.

Table IV. Simulation results and fault detection based on different methods

Simulation data (single and double fault) - accuracy
Method             simulation 1   simulation 2   average
BP                 97.11%         97.37%         97.24%
PCA with softmax   96.203%        96.203%        96.203%
SA with softmax    98.734%        98.734%        98.734%

All simulation accuracies are slightly lower, but the accuracy of the method based on the sparse autoencoder and softmax classifier is still higher than that of the BP neural network and of PCA with a softmax classifier. After feature extraction of the fault data through the 6-layer sparse autoencoder network and classification by the softmax classifier, better accuracy is obtained. This proves that the method adopted in this paper is effective for diagnosing open-circuit faults of the cascaded H-bridge seven-level converter.

V. CONCLUSION

In this paper, the characteristics of the cascaded H-bridge multilevel converter system are analyzed, and a deep neural network algorithm based on the sparse autoencoder is proposed. It achieves higher classification accuracy than the BP neural network and effectively mitigates the traditional BP network's convergence to local minima and its overfitting problem. At the same time, its feature extraction capability is also better than that of PCA. However, the training time is long; simplifying the algorithm and speeding up training remain future work.

ACKNOWLEDGMENT

This paper is supported by the Shanghai Natural Science Foundation (16ZR1414300) and the National Natural Science Foundation of China (61673260).

REFERENCES

[1] J. Mathew, K. Mathew, N. A. Azeez, P. P. Rajeevan, and K. Gopakumar, "A hybrid multilevel inverter system based on dodecagonal space vectors for medium voltage IM drives," IEEE Trans. Power Electron., vol. 28, no. 8, pp. 3723-3732, Aug. 2013.
[2] S. Rivera, S. Kouro, B. Wu, J. I. Leon, J. Rodriguez, and L. G. Franquelo, "Cascaded H-bridge multilevel converter multistring topology for large scale photovoltaic systems," in Proc. 2011 IEEE Int. Symp. on Industrial Electronics, Gdansk, Poland, Aug. 2011, pp. 1837-1844.
[3] M. Malinowski and K. Gopakumar, "A survey on cascaded multilevel inverters," IEEE Trans. Ind. Electron., vol. 57, pp. 2197-2206, Jul. 2010.
[4] B. Mirafzal, "Survey of fault-tolerance techniques for three-phase voltage source inverters," IEEE Trans. Ind. Electron., vol. 61, no. 10, pp. 5192-5202, Oct. 2014.
[5] S. Khomfoi and L. M. Tolbert, "Fault diagnosis system for a multilevel inverter using a neural network," in Proc. IEEE Industrial Electronics Conference, 2005, pp. 1455-1460.
[6] P. Lezana, J. Rodriguez, R. Aguilera, and C. Silva, "Fault detection on multicell converter based on output voltage frequency analysis," IEEE Trans. Ind. Electron., vol. 55, no. 6, pp. 2713-2723, 2009.
[7] H. W. Sim, J. S. Lee, and K. B. Lee, "A detection method for an open-switch fault in cascaded H-bridge multilevel inverters," in Proc. 2014 IEEE Energy Conversion Congress and Exposition (ECCE), Sep. 2014, pp. 2101-2106.
[8] J. He and N. Demerdash, "Diagnosis of open-circuit switch faults in multilevel active-NPC (ANPC) inverters," in Proc. IEEE Transportation Electrification Conference and Expo (ITEC), Detroit, MI, 2014.
[9] B. P. Babu, J. V. S. Srinivas, B. Vikranth, and P. Premchand, "Fault diagnosis in multi-level inverter system using adaptive back propagation neural network," in Proc. India Conference, Kanpur, India, 2008, pp. 494-498.
[10] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, 2006, pp. 504-507.
[11] S. Khomfoi and L. M. Tolbert, "Fault diagnosis and reconfiguration for multilevel inverter drive using AI-based techniques," IEEE Trans. Ind. Electron., vol. 54, no. 6, 2007, pp. 2954-2968.
[12] R. Safdari and M.-S. Moin, "A hierarchical feature learning for isolated Farsi handwritten digit recognition using sparse autoencoder," in Proc. Artificial Intelligence and Robotics, 2016, pp. 67-71.
[13] X. Zhang, H. Dou, and T. Ju, "Fusing heterogeneous features from stacked sparse autoencoder for histopathological image analysis," IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 5, 2016, pp. 1377-1383.
[14] E. Li, P. Du, A. Samat, Y. Meng, and M. Che, "Mid-level feature representation via sparse autoencoder for remotely sensed scene classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016, pp. 1-14.
[15] R. Zeng, J. Wu, Z. Shao, L. Senhadji, and H. Shu, "Quaternion softmax classifier," Electronics Letters, vol. 50, no. 25, 2014, pp. 1929-1931.

