
Energies — Article
Predicting Temperature of Permanent Magnet
Synchronous Motor Based on Deep Neural Network
Hai Guo 1,2 , Qun Ding 1, *, Yifan Song 2 , Haoran Tang 2 , Likun Wang 3 and Jingying Zhao 2,4
1 Post-Doctoral Workstation of Electronic Engineering, Heilongjiang University, Harbin 150080, China;
guohai@dlnu.edu.cn
2 College of Computer Science and Technology, Dalian Minzu University, Dalian 116650, China;
kerlansherry@aliyun.com (Y.S.); qq4728153@126.com (H.T.); zjy@dlnu.edu.cn (J.Z.)
3 College of Electronic and Electrical Engineering, Harbin University of Science and Technology,
Harbin 150080, China; wlkhello@163.com
4 Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology,
Dalian 116650, China
* Correspondence: qunding@aliyun.com

Received: 30 July 2020; Accepted: 10 September 2020; Published: 14 September 2020

Abstract: The heat loss and cooling modes of a permanent magnet synchronous motor (PMSM) directly
affect its temperature rise. The accurate evaluation and prediction of stator winding temperature
is of great significance to the safety and reliability of PMSMs. In order to study the influencing
factors of stator winding temperature and prevent motor insulation ageing, insulation burning,
permanent magnet demagnetization and other faults caused by high stator winding temperature,
we propose a computer model for PMSM temperature prediction. Ambient temperature, coolant
temperature, direct-axis voltage, quadrature-axis voltage, motor speed, torque, direct-axis current,
quadrature-axis current, permanent magnet surface temperature, stator yoke temperature, and stator
tooth temperature are taken as the input, while the stator winding temperature is taken as the
output. A deep neural network (DNN) model for PMSM temperature prediction was constructed.
The experimental results showed that the mean absolute error (MAE) of the model was 0.1515,
the root mean squared error (RMSE) was 0.2368 and the goodness of fit (R2) was 0.9439, indicating
close agreement between the predicted and measured data. Through comparative experiments,
the prediction accuracy of the DNN model proposed in this paper was determined to be better than
that of other models. This model can
effectively predict the temperature change of stator winding, provide technical support to temperature
early warning systems and ensure safe operation of PMSMs.

Keywords: permanent magnet synchronous motor (PMSM); stator winding; temperature prediction;
deep learning

1. Introduction
The thermal loss and cooling modes of the permanent magnet synchronous motor (PMSM) directly
affect its temperature rise [1–3]. The heat loss of the PMSM mainly includes copper loss, iron loss
and mechanical loss. The iron loss mainly depends on the stator’s voltage, and the mechanical loss
mainly depends on the rotor speed. Different from iron loss and mechanical loss, the copper loss of
a permanent magnet motor stator directly affects the heating degree of the stator winding. On one
hand, the heat of the stator winding is first transferred to the insulation. On the other hand,
the insulation has the worst heat resistance of all the materials in the motor, including the winding
and the core. In the engineering field, the selection of the insulation grade of a
PMSM depends entirely on the temperature of the stator winding. When the temperature of the motor

Energies 2020, 13, 4782; doi:10.3390/en13184782 www.mdpi.com/journal/energies



stator winding is too high, the insulation will be thermally aged, and decortication will even occur,
which will seriously threaten the safe operation of the motor. In addition, if the permanent magnet
motor winding heating cannot be effectively controlled, the heat of the stator winding will be further
transmitted to the rotor side through the air gap, which will cause irreversible demagnetization of the
permanent magnet. In conclusion, accurate evaluation and prediction of stator winding temperature is
of great significance to the safety and reliability of permanent magnet motors.
In order to ensure safe operation, many experts and scholars have put forward methods to measure
the temperature of a PMSM. Wallscheid et al. used an accurate flux observer in the fundamental
wave domain that can indirectly obtain magnet temperature without any additional sensors or signal
injections [4]. Mohammed et al. proposed a new sensing method, namely, investigating the application
of dedicated electrically non-conductive and electromagnetic interference immune fiber Bragg grating
(FBG) temperature sensors embedded in PMSM windings to enable winding open-circuit fault diagnosis
based on observing the fault thermal signature [5,6]. However, when these methods are used to
measure the temperature of a PMSM, a lot of experimental preparation is needed, which leads to high
costs and tedious processes. For this reason, many scholars have begun to consider how to predict
temperature after collecting enough data.
The traditional motor temperature prediction model mainly uses the finite element method [7,8].
The specific method is to simulate the transient temperature by using the finite element method,
establish the temperature field, and then predict the motor temperature. This method can only be used
to calculate and process the current linear data, but it cannot deal with a large number of nonlinear
historical data. On the other hand, the method based on machine learning can effectively solve this
problem, and the overall prediction effect can be greatly improved compared with the traditional
methods. Many experts and scholars have done a lot of research and tried a lot of different methods
using machine learning to predict motor temperature. Chen et al. used support vector machine to
predict the hot spot temperature of an oil-immersed transformer, and verified its practicability and
effectiveness on a large power transformer [9]. Hyeontae et al. used a variety of machine learning
methods including decision tree to predict the end temperature of a Linz–Donawiz converter, which
was used to improve the quality of melted pig iron. They injected pure oxygen into hot metal in order
to remove impurities via oxidation-reduction reactions. The relevant simulation results were presented
and compared with the real temperature [10]. The rise of ensemble learning has greatly promoted the
development of motor temperature prediction models. Wang applied the stochastic forest method to
the temperature prediction of a ladle furnace. The sample was divided into several subsets and the
random forest method was applied to obtain higher accuracy than the other temperature models [11].
Zhukov et al. used ensemble methods of classification to assess power system security. The proposed
hybrid approach was based on random forest models and boosting models. The experiment results
showed that the proposed model can be employed to examine whether a power system is secure under
steady-state operating conditions [12]. Su et al. integrated an extreme learning machine to establish a
prediction model of iron temperatures in a blast furnace, selecting the corresponding influencing
factors as input and combining several extreme learning machines with different parameters. This
model achieved high prediction accuracy and generalization performance for molten iron temperature
prediction [13].
In recent years, people have paid more and more attention to deep learning methods. The feature
extraction of machine learning methods mainly depends on humans. For specific simple tasks, manual
feature extraction is simple and effective, but for complex tasks, how to choose appropriate features
can be an extremely difficult problem. The feature extraction of deep learning does not rely on manual
extraction, and features are automatically extracted by the machine, which is often called end-to-end
learning [14]. Therefore, deep learning methods have better versatility. Chinnathambi et al. established
three kinds of deep neural network (DNN) models to predict the day-ahead price of the Iberian
electricity market [15]. Kasburg et al. used a Long Short-Term Memory (LSTM) neural network to
predict the photovoltaic power generation of an active solar tracker, and the results were promising [16].

Tao et al. proposed a short-term forecasting model based on deep learning for PM2.5
concentrations [17]. Sengar et al. combined the DNN with a chicken swarm optimization algorithm to
solve the load forecasting problem of a wind power generation system [18]. Gui et al. proposed a
multi-step time feature selection optimization model of temperature based on the DNN and a genetic
algorithm (GA), which effectively predicted the temperature of a reheater system [19]. Because of their
strong learning ability, wide coverage and good adaptability, deep learning methods have been widely
used in various fields.

However, deep learning methods have barely been applied in PMSM temperature prediction.
In this paper, the prediction of stator winding temperature in PMSMs is studied. A PMSM consists
of two key components, a rotor with a permanent magnet and a stator with properly designed windings.
Figure 1 shows the structure of the PMSM. When the PMSM is operating, the temperature of the stator
winding will rise due to the influence of copper loss. If the temperature is too high, the insulation will
be thermally aged, and in serious cases, it will be shelled, threatening the safe operation of the motor.
If the heat is transferred to the rotor side for a long time, it may cause irreversible demagnetization
of the permanent magnet, which means the total destruction of the PMSM. This paper only uses a
data-driven method to study the thermal management system of PMSMs, and provides a new research
method as a means of aiding the design and analysis of PMSMs. The main purpose of this study is to
establish a stator winding temperature prediction model for PMSMs based on a DNN and to verify
its effectiveness.

Figure 1. Internal structure of a permanent magnet synchronous motor (PMSM).
This study employs the following structure. Section 2 introduces the basic structure of the DNN
and establishes a PMSM stator winding temperature prediction model. In Section 3, the dataset used
in this paper is introduced and the prediction performance of the proposed model is presented.
Conclusions and prospects are drawn in Section 4.

2. Method Based on DNN

2.1. Establishment of a Stator Winding Temperature Prediction Model for PMSMs
There are many factors that affect the stator winding temperature of a PMSM. In this paper,
11 variables are regarded as input, including ambient temperature (ambient), coolant temperature
(coolant), direct-axis voltage (u_d), quadrature-axis voltage (u_q), motor speed (motor_speed), torque,
direct-axis current (i_d), quadrature-axis current (i_q), permanent magnet surface temperature (pm),
stator yoke temperature (stator_yoke) and stator tooth temperature (stator_tooth). Stator winding
temperature (stator_winding) is regarded as the output. Considering the high dimension of the
independent variables, a DNN model was chosen for the prediction.
DNN is an extension of an artificial neural network (ANN), with a structure that is similar to
an ANN but with a number of hidden layers [20–23]. Generally, neural networks that have two or
more hidden layers can be regarded as DNNs. Figure 2 shows the structures of ANNs and DNNs.

Figure 2. Structures of an artificial neural network (ANN) and a deep neural network (DNN). (a) A
simple example of an ANN. (b) A DNN has two or more hidden layers.
In this paper,ofthe
anPMSM
ANN. (b) A DNN
stator has two
winding or more hidden
temperature layers.model based on a DNN has nine
prediction
layers. The
In this first layer
paper, the PMSM is the stator
input layer,
winding andtemperature
the numberprediction
of its nodes is equal
model to the
based on anumberDNN has of input
nine
In this
layers. The paper,
variables X (i.e.,
first the
11).PMSM
layer isThe
theninthstator
input winding
layer
layer, (in
andothertemperature
thewords,
number theprediction
of last layer)model
its nodes the based
is equal output
to theon a DNN
layer.
number Thehas nine
number
of input
layers.
of itsThe
variables X first
nodes is layer
(i.e., equal
11). istothe
The theinput
ninth number
layerlayer,
(in and the
of other number
output-dependent
words, ofvariable
the last its nodes
layer) is
y, the
is equal
which to layer.
equals
output the1.number
The
Thelayersof input
number fromof
variables
itsthe X
second
nodes (i.e.,
to the
is equal 11). The
toeighth ninth
the number layer
are hidden (in other words,
layers, and each of
of output-dependent the last
them has
variable layer) is the
14 nodes,
y, which output layer.
respectively.
equals 1. The layers The number
Thefrom
nodesthe of
of its nodes
the
second former
to theiseighth
equalare
layer to
arethe number
connected
hidden of output-dependent
with
layers, eacheach
and node variable
of thehas
of them latter y, which
layer,
14 nodes, one equals
by one,1.and
respectively. Thenodes
The layersare
there from
of no
the
the second
connections
former toare
layer the eighthnodes
between
connected arewithhidden thelayers,
ofeach same and
nodelayer.
of theeach
The
latterof them has
activation
layer, one 14
by nodes,
functionone,of andrespectively.
a hidden
there are noThe
layer nodes
is the ReLU
connections of
the former
function,
between layer
and the
nodes are
of the connected
activation
same layer. with
function each node
Theofactivation of
the outputfunction the latter
layer is the layer,
of atanhhidden one
function.by one,
layerTheis the and
lossReLU there
function are no
of the
function,
connections
andmodel between
is the
the activation mean nodes of
squared
function oferror
theoutput
the same
(MSE) layer. The
function,
layer is theactivation
and
tanhthe function
back
function. Theofloss
propagation a hidden
algorithm
function layer is the
of the ReLU
Adam
model is
function, and
optimization the activation
algorithm, function
whose of the
learning output
rate layer
is set is
to the tanh
0.001.
the mean squared error (MSE) function, and the back propagation algorithm is the Adam optimization function.
Figure 3 The
shows loss the function
DNN of the
model
model is the
constructed
algorithm, mean
whose in this squared
paper.rate
learning error
is set(MSE) function,
to 0.001. Figureand the back
3 shows the DNNpropagation algorithm is
model constructed inthe
thisAdam
paper.
optimization algorithm, whose learning rate is set to 0.001. Figure 3 shows the DNN model
constructed in this paper. input layer hidden layer hidden layer output layer

11 nodes 14 nodes 14 nodes 1 node

input layer hiddenReLu


layer ... hidden layer
ReLu
output layer

11 nodes 14 nodes 14 nodes 1 node

ambient ReLu
ReLu ...... ReLu
ReLu

coolant
ambient
ReLu
ReLu ...... ReLu
ReLu

coolant
u_d ReLu
ReLu ...... ReLu
ReLu
tanh

......
Stator_winding
u_q ReLu ReLu temperature
u_d ReLu ReLu tanh

u_q ReLu ... ReLu


Stator_winding
temperature

stator_tooth ReLu ... ReLu

stator_tooth
ReLu
ReLu ...... ReLu
ReLu

ReLu
ReLu ...... ReLu
ReLu
7 layers

ReLu ...
Figure 3. PMSM stator winding temperature prediction model.
ReLu
Figure 3. PMSM stator winding temperature prediction model.
7 layers

Figure 3. PMSM stator winding temperature prediction model.
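The topology described above can be sketched as a NumPy forward pass. This is an illustrative reconstruction, not the authors' published code: the He-style weight initialization and the helper names (`init_params`, `forward`) are assumptions, while the layer widths (11 inputs, seven hidden layers of 14 nodes, 1 output), the ReLU hidden activations and the tanh output follow the description in the text.

```python
import numpy as np

# Illustrative reconstruction of the nine-layer topology described in Section 2.1:
# 11 inputs, seven hidden layers of 14 nodes (ReLU), one output node (tanh).
LAYER_SIZES = [11] + [14] * 7 + [1]

def init_params(rng):
    """He-style random initialization (an assumption; the paper does not state one)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

def forward(params, x):
    """Fully connected forward pass: ReLU on hidden layers, tanh on the output."""
    h = x
    for i, (w, b) in enumerate(params):
        z = h @ w + b
        h = np.tanh(z) if i == len(params) - 1 else np.maximum(z, 0.0)
    return h

rng = np.random.default_rng(0)
params = init_params(rng)                    # 8 weight matrices join the 9 layers
batch = rng.standard_normal((5, 11))         # five standardized samples
pred = forward(params, batch)                # shape (5, 1), values in (-1, 1)
```

In practice the model would be trained with the MSE loss and the Adam optimizer (learning rate 0.001) named in the text, e.g. via Keras or PyTorch; the sketch above only fixes the topology.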



The constructed neural network model was used to predict the PMSM stator winding temperature.
Figure 4 shows the prediction process. First of all, the data were standardized; then the standardized
data were divided into five equal parts by means of five-fold cross validation. Each part was taken as
the test set in turn, while the other four parts were used as the training set. The DNN model used the
training set to fit the model, while the test set was used to predict the stator winding temperature.
The results of the DNN model were consolidated; thus, 6000 predicted values were obtained. Together
with the real values, the metrics of the DNN model could be calculated.

dataset
training
subset1

subset2 y_pred

subset3
standardization

subset4
divide
standardized testing
dataset subset5

Figure 4. The process of predicting the PMSM stator winding temperature.


Figure 4. The process of predicting the PMSM stator winding temperature.
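The pipeline in Figure 4 can be sketched as follows. This is a hedged illustration: `fit` and `predict` are hypothetical callables standing in for the DNN's training and inference routines, and the toy mean-predicting "model" in the usage lines exists only to make the sketch runnable.

```python
import numpy as np

def five_fold_predictions(X, y, fit, predict):
    """Standardize the data, split it into five equal parts, and predict each
    part with a model trained on the other four (the process in Figure 4)."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardization, Equation (4)
    indices = np.arange(len(y))
    folds = np.array_split(indices, 5)            # five roughly equal folds
    y_pred = np.empty(len(y))
    for test_idx in folds:
        train_idx = np.setdiff1d(indices, test_idx)
        model = fit(X[train_idx], y[train_idx])   # fit on the four training folds
        y_pred[test_idx] = predict(model, X[test_idx])
    return y_pred                                 # one prediction per sample

# Toy usage with a trivial "model" that predicts the training-set mean.
X_demo = np.random.default_rng(1).standard_normal((100, 11))
y_demo = X_demo.sum(axis=1)
preds = five_fold_predictions(X_demo, y_demo,
                              fit=lambda X, y: y.mean(),
                              predict=lambda m, X: m)
```

Because every sample falls in the test fold exactly once, the concatenated predictions cover the whole dataset, which is how the paper obtains a prediction for each of the 6000 samples.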
2.2. Assessment of Model
In this paper, the mean absolute error (MAE), root mean squared error (RMSE) and goodness of
fit (R2) were selected as metrics to evaluate the prediction performance of the model.

MAE is the average value of the absolute error, which reflects the actual deviation between the
predicted value and the real value. MAE is generally used to measure the error of the predicted value
deviating from the real value, and the smaller the MAE value, the better the prediction performance.
Assume that y_true^t represents the actual value, y_pred^t represents the predicted value and N
represents the number of samples. Equation (1) is the calculation formula of MAE:

$$\mathrm{MAE} = \frac{1}{N} \sum_{t=1}^{N} \left| y_{pred}^{t} - y_{true}^{t} \right| \qquad (1)$$
RMSE is the square root of the ratio of the squared deviation between the predicted value and
the real value to the number of samples. It is often used as a measure of the prediction results of
machine learning models. Similar to the MAE, the smaller the RMSE value, the better the prediction
performance. Equation (2) is the calculation formula of the RMSE:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{t=1}^{N} \left( y_{true}^{t} - y_{pred}^{t} \right)^{2}} \qquad (2)$$
R2 is the goodness of fit, which measures the ability of the prediction model to fit the data.
Its range is 0–1. The closer the value is to 1, the higher the fitting degree and the better the prediction
performance of the model. Equation (3) is the calculation formula of R2, where y_avg is the mean of
the actual values:

$$R^{2} = 1 - \frac{\sum_{t=1}^{N} \left( y_{true}^{t} - y_{pred}^{t} \right)^{2}}{\sum_{t=1}^{N} \left( y_{true}^{t} - y_{avg} \right)^{2}} \qquad (3)$$
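The three metrics can be computed directly; a minimal sketch mirroring Equations (1)–(3) term by term:

```python
import numpy as np

def mae(y_true, y_pred):
    """Equation (1): mean absolute error."""
    return np.mean(np.abs(y_pred - y_true))

def rmse(y_true, y_pred):
    """Equation (2): root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Equation (3): goodness of fit (coefficient of determination)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives MAE = RMSE = 0 and R2 = 1; larger errors push MAE and RMSE up and R2 down.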
3. Experimental Results and Discussion

3.1. Introduction of Dataset
The data used in this study were collected from the PMSM placed on a test bench. The PMSM
was a German prototype from the original equipment manufacturer. The measuring platform was
assembled by the LEA department of Paderborn University [24–27]. The main purpose of recording
the dataset was to simulate the temperatures of the stator and rotor in real time. Figure 5 shows a
random selection of 6000 pieces of the data from this dataset.
Figure 5. Dataset of the PMSM. (a) ambient temperature; (b) coolant temperature; (c) u_d; (d) u_q;
(e) motor_speed; (f) torque; (g) i_d; (h) i_q; (i) pm; (j) stator_yoke; (k) stator_tooth; (l) stator_winding.

The main features of the selected dataset are as follows:

• ambient—the ambient temperature is measured by a temperature sensor located close to the stator;
• coolant—temperature of the coolant. The motor is cooled by water. The measurement is performed
at the outflow of water;
• u_d—voltage of the d-component;
• u_q—voltage of the q-component;
• motor_speed—engine speed;
• torque—torque induced by current;
• i_d—current of the d-component;
• i_q—current of the q-component;
• pm—surface temperature of the permanent magnet, which is the temperature of the rotor;
• stator_yoke—the stator yoke temperature is measured using a temperature sensor;
• stator_tooth—the temperature of the stator tooth is measured using a temperature sensor;
• stator_winding—the temperature of the stator winding is measured using a temperature sensor.

The test conditions for the dataset were as follows:

• all recordings were selected at a frequency of 2 Hz (one row in 0.5 s);
• the engine was accelerated using manually designed driving cycles indicating the reference engine
speed and reference torque;
• currents in the d/q coordinates (columns “i_d” and “i_q”) and voltages in the coordinates d/q
(columns “u_d” and “u_q”) were the result of a standard control strategy that tried to follow the
reference speed and torque;
• the columns “motor_speed” and “torque” are the resulting values achieved by this strategy,
obtained from the specified currents and voltages.

In machine learning, if the variance of a feature is several orders of magnitude different from
other features, it will occupy a dominant position in the learning algorithm, resulting in the learner not
being able to learn from other features as expected. Data standardization can readjust the original data
so that they have the properties of standard normal distribution. The processing of standardization is
shown as Equation (4):

$$x_{i}^{std} = \frac{x_{i} - \mu_{x}}{\delta_{x}} \qquad (4)$$

where x_i^std represents the standardized result, x_i represents the original data, µ_x represents the
mean value and δ_x represents the standard deviation.
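A minimal, column-wise sketch of Equation (4), assuming the data sit in a NumPy array with one feature per column:

```python
import numpy as np

def standardize(X):
    """Equation (4): rescale each feature to zero mean and unit standard deviation."""
    mu = X.mean(axis=0)        # mean value of each column
    sigma = X.std(axis=0)      # standard deviation of each column
    return (X - mu) / sigma

X = np.array([[10.0, 200.0],
              [20.0, 400.0],
              [30.0, 600.0]])   # toy data with features on very different scales
X_std = standardize(X)          # each column now has mean 0 and std 1
```

After this rescaling, no single feature dominates the learning algorithm purely because of its scale.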

3.2. Prediction Results of the Model


The computer used in this experiment had a dual-channel Intel E5 2690 V2 CPU with 128 GB
DDR-RAM, manufactured by ASUS in Taiwan, China. The programming language was Python,
and the model used was the DNN model built in Section 2.1.
After the dataset in Figure 5 was standardized, 6000 pieces of data were divided into five equal parts by
means of five-fold cross validation. Each part was taken as the test set in turn, and the other four parts
were used as the training set. Thus, all the data served as both the training set and test set. Taking the
stator winding temperature as dependent variable y and the remaining 11 variables as independent
variables X, the PMSM stator winding temperature could be predicted.
The curve of the MSE loss value with the number of iterations of the model’s training set is shown
in Figure 6. It can be seen that the loss value of the training set decreases rapidly during the first
10 iteration cycles, while it decreases slowly in the later iteration cycles.
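The shape of such a training curve can be reproduced on a toy problem. This is purely illustrative and not the paper's experiment: the linear model, the synthetic data and the hand-rolled Adam update are stand-ins, with only the learning rate (0.001), the MSE loss and the 50-epoch count taken from the text.

```python
import numpy as np

# Toy MSE loss curve under Adam (lr = 0.001, as stated in Section 2.1).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 11))           # 11 features, matching the input size
true_w = rng.standard_normal(11)
y = X @ true_w                               # synthetic linear target

w = np.zeros(11)                             # stand-in model: linear weights
m, v = np.zeros(11), np.zeros(11)            # Adam first/second moment estimates
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.001
losses = []
for t in range(1, 51):                       # 50 epochs, as in Figure 6
    pred = X @ w
    losses.append(np.mean((pred - y) ** 2))  # MSE loss for this epoch
    grad = 2.0 * X.T @ (pred - y) / len(y)   # gradient of the MSE
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
```

Plotting `losses` against the epoch index yields the same qualitative picture as Figure 6: a steep drop early on followed by a slow tail.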

Figure 6. Curve of the MSE loss value with iteration times.

Figure 7 shows the prediction results of the model. Figure 7a presents a comparison between the
predicted values and the real values of the model. It can be seen that there are many overlapped parts
between predicted values and real values. In this paper, the established DNN selected tanh as the
activation function of the output layer among sigmoid, ReLU (Rectified Linear Unit) and tanh, since it
had the best performance on the dataset. The output value of the tanh activation function was between
−1 and 1, and therefore the values of y_pred in Figure 7a range from −1 to 1, resulting in local saturation
when the values of y_true were out of this range. Figure 7b shows a comparison between the predicted
and real values after the dataset was normalized, and it can be seen that the curves of y_pred and
y_true are almost coincident. Figure 7c,d shows the absolute errors and absolute percentage errors,
respectively. The absolute errors are always lower than 0.6 °C, while the absolute percentage errors are
barely higher than 10%. The results indicate the model has great prediction performance.

Figure 7. Prediction results. (a) Comparison of predicted values and real values. (b) Comparison of
predicted values and real values after the dataset was normalized. (c) Absolute errors of predicted
values and real values. (d) Absolute percentage errors of predicted values and real values.

3.3. Comparison with Other DNN Models


The network topology of a DNN consists of the number of hidden layers, the number of neurons in
each hidden layer, the activation functions of the hidden and output layers, the learning rate and
so on, all of which play very important roles in the prediction performance of the DNN. A DNN with
an unsuitable topology will not only increase the training time, but may also lead to overfitting or
underfitting. In order to compare it more fairly with other models, instead of employing normalization
and anti-normalization methods, we directly used standardized data as input, which better reflects
the performance of our proposed model.
In order to study the influence of the different number of hidden layers on the prediction
performance, DNNs with two, three, four, five, six, seven (this paper) and eight hidden layers were
tested. Other parameters remained at their default values during the test. The corresponding models
were constructed for experiments, and RMSE and R2 were selected to evaluate the model. The results
are shown in Figure 8. It can be seen that the curve of the RMSE is in the shape of “V”. When the
number of hidden layers was two, the maximum RMSE of the model was 0.2375, which then decreased
with increases in the number of layers. However, there was an extreme point beyond which the
prediction accuracy of the model decreased as the number of layers continued to increase. When there
were seven hidden layers, the minimum RMSE of the network was 0.2368 and the maximum R2 was
0.9439. The experimental results verify the superiority of the proposed DNN model in PMSM stator
winding temperature prediction performance.

[Plot: RMSE (left axis, 0.2360–0.2375) and R2 (right axis, 0.9430–0.9440) versus the number of
hidden layers (2–8).]

Figure 8. Prediction performance of DNN models with different numbers of hidden layers.
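The topology variation studied here can be sketched as a plain forward pass through a fully connected network. The layer widths below follow the paper (eleven inputs, seven hidden layers of 14 nodes, one output); the random weights and He-style initialization are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, sizes):
    """Forward pass through fully connected layers: ReLU on hidden
    layers, linear output; weights are random placeholders."""
    layers = list(zip(sizes[:-1], sizes[1:]))
    for i, (n_in, n_out) in enumerate(layers):
        w = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))  # He-style init
        x = x @ w
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

# 11 input features (those listed in the abstract), seven hidden
# layers of 14 nodes each (the topology chosen in this paper), one output.
sizes = [11] + [14] * 7 + [1]
out = forward(np.ones((4, 11)), sizes)
print(out.shape)  # (4, 1)
```

Changing `sizes` to, e.g., `[11] + [14] * 2 + [1]` gives the two-hidden-layer variant from the comparison above.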
In order to study the influence of the number of hidden layer nodes on the prediction performance,
DNNs with 10, 12, 14 (this paper), 16, 18 and 20 nodes in each hidden layer were tested.
Other parameters remained at their default values during the test. The results are shown in Figure 9.
It can be seen that the network with 14 nodes in each hidden layer had the minimum RMSE of 0.2371
and the maximum R2 of 0.9438. The experimental results verify the superiority of the proposed DNN
model in prediction performance.
[Plot: RMSE (left axis, 0.2360–0.2380) and R2 (right axis, 0.9430–0.9440) versus the number of
nodes (10–20).]

Figure 9. Prediction performance of DNN models with different numbers of hidden layer nodes.

In order to study the influence of different activation functions on the prediction performance
of the model, linear function, tanh function, sigmoid function and ReLU function (this paper) were
selected as the activation functions of DNN models, and these models were tested. Other parameters
remained at their default values during the test. Results are shown in Table 1. The prediction accuracy
of the DNN model using a ReLU function was the highest, with an RMSE of 0.2369 and an R2 of 0.9438.
The experimental results verify the superiority of the proposed DNN model in prediction performance.

Table 1. Prediction performance of DNN models with different activation functions.

Activation Functions RMSE R2


linear 0.2625 0.9310
tanh 0.2375 0.9435
sigmoid 0.2394 0.9427
ReLU (this paper) 0.2369 0.9438
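The four candidate activation functions compared in Table 1 have simple closed forms; the following sketch uses their standard definitions (illustrative only, not code from the paper):

```python
import numpy as np

def linear(x):  return x                         # identity
def tanh(x):    return np.tanh(x)                # output in (-1, 1)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))  # output in (0, 1)
def relu(x):    return np.maximum(x, 0.0)        # max(0, x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))     # [0. 0. 2.]
print(sigmoid(0))  # 0.5
```

ReLU is non-saturating for positive inputs and cheap to compute, which is consistent with it performing best here as the hidden-layer activation.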

In order to study the influence of the learning rate of the back propagation algorithm on the
prediction performance of the DNN model, six different learning rates were used to establish models
for testing. Other parameters remained at their default values during the test. Since the differences
between the metrics were very small, six decimal places were retained. Table 2 shows the results:
when the learning rate was 0.001, the model achieved the minimum RMSE of 0.237080 and the maximum
R2 of 0.943793. The experimental results verify the advantages of the proposed DNN model in PMSM
stator winding temperature prediction.

Table 2. Prediction performance of DNN models with different learning rates.

Learning Rates RMSE R2


0.0001 0.239620 0.942582
0.001 (this paper) 0.237080 0.943793
0.002 0.237114 0.943777
0.01 0.242495 0.941196
0.02 0.278323 0.922536
0.1 0.998120 0.003756
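The collapse at a learning rate of 0.1 in Table 2 reflects the classic behavior of gradient descent: a step that is too small converges slowly, while a step that is too large diverges. A toy illustration on minimizing f(w) = w² (plain gradient descent with assumed values, not the paper's actual training loop):

```python
def descend(lr, steps=50, w=1.0):
    """Gradient descent on f(w) = w**2; the gradient is 2*w,
    so each step multiplies w by (1 - 2*lr)."""
    for _ in range(steps):
        w -= lr * 2.0 * w
    return abs(w)

print(descend(0.01))  # small step: converges slowly toward 0
print(descend(0.45))  # moderate step: converges quickly
print(descend(1.5))   # too large: |1 - 2*lr| > 1, |w| blows up
```

The same mechanism explains why R2 drops to essentially zero at the largest learning rate in Table 2.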

In summary, the DNN model proposed in this paper was compared with DNN models with
different numbers of hidden layers, different numbers of hidden layer nodes, different activation
functions and different learning rates. The model proposed in this paper showed the best prediction
performance in all experiments. In practical application, the DNN model proposed in this paper is the
first choice for predicting PMSM stator winding temperature.

3.4. Comparison with Machine Learning Methods


In order to build a fairer comparison with other models, instead of employing normalization and
anti-normalization methods, we directly used standardized data as input, which better reflects the
performance of our proposed model.
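For clarity, the two preprocessing options mentioned here differ as follows: standardization rescales features to zero mean and unit variance, while min-max normalization rescales them into a fixed interval. A minimal sketch with illustrative values (not the benchmark data):

```python
import numpy as np

x = np.array([20.0, 25.0, 30.0, 35.0, 40.0])  # e.g., temperatures in degrees C

# Standardization (z-score): zero mean, unit variance.
z = (x - x.mean()) / x.std()

# Min-max normalization: rescale into [0, 1]
# (a [-1, 1] variant pairs naturally with a tanh output layer).
m = (x - x.min()) / (x.max() - x.min())

print(z.mean(), round(z.std(), 6))  # 0.0 1.0
print(m.min(), m.max())             # 0.0 1.0
```

Standardized inputs keep the comparison fair because every model in Table 3 receives features on the same scale.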
In order to verify the effectiveness of the DNN model in PMSM stator winding temperature
prediction, three traditional machine learning methods (i.e., support vector regression (SVR), decision
tree and ridge regression) and two ensemble learning methods (i.e., random forest and AdaBoosting)
were selected for comparative experiments. Each model was trained and validated 50 times on the
original sample data, and the average of the metrics was calculated. The results are shown in Table 3.
Table 3. Comparison of metrics.

Methods         MAE 1    RMSE     R2
SVR 2           0.3996   0.5094   0.7405
DecisionTree    0.4516   0.5817   0.6616
Ridge           0.1731   0.2510   0.9370
RandomForest    0.4203   0.4937   0.7562
AdaBoosting     0.4354   0.4937   0.7563
DNN             0.1515   0.2368   0.9439

1 mean absolute error. 2 support vector regression.

The results show that, when predicting PMSM stator winding temperature, the DNN model proposed
in this paper obtained the best performance. The MAE was 0.1515 and the RMSE was 0.2368; thus the
prediction error of the model was the smallest. The R2 was 0.9439, the closest to 1, which proves
that the model fitted the real data best.
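The three metrics used in this comparison have standard definitions, which can be sketched in NumPy as follows (generic formulas with illustrative values, not the authors' evaluation script):

```python
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error: average magnitude of the residuals
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # root mean squared error: penalizes large residuals more heavily
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # coefficient of determination: 1 means a perfect fit
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(round(mae(y_true, y_pred), 4))   # 0.15
print(round(rmse(y_true, y_pred), 4))  # 0.1581
print(round(r2(y_true, y_pred), 4))    # 0.98
```

Lower MAE and RMSE and an R2 closer to 1 correspond to the DNN's top row behavior in Table 3.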
Figure 10 shows the three metrics for each of the models. It can be seen that the MAE and RMSE of
the ridge regression model and the DNN model are smaller than those of the other models, that their
R2 values are larger, and that the prediction results of the DNN were better than those of ridge
regression. Compared with the worst model (decision tree), the R2 of the proposed model was
improved by about 42%, and it also improved on the ridge regression model by 0.0069. The DNN model
proposed in this paper can predict PMSM stator winding temperature better than the other models.

[Bar chart: MAE, RMSE and R2 for SVR, DecisionTree, Ridge, RandomForest, AdaBoosting and DNN.]

Figure 10. Prediction performance of the DNN model and other models.

4. Conclusions and Prospections


In order to effectively predict PMSM stator winding temperature, a PMSM stator winding
temperature prediction model based on a DNN was proposed in this paper. The model was trained
and tested on part of a common sample set. Experiments on the number of hidden layers, the number
of hidden layer nodes, different activation functions and different learning rates of the DNN models
were carried out to verify the superiority of the proposed DNN model in prediction performance.
Additionally, by calculating three metrics (i.e., MAE, RMSE and R2),
the prediction performance of the proposed DNN model was compared with other machine learning
methods. The summary of this paper is as follows:

1. This paper presented a PMSM stator winding temperature prediction model based on a DNN.
The model can be used to solve the problem of how to determine the temperature of PMSM stator
winding. It can effectively prevent a series of faults of PMSM due to the high temperature of
stator winding. It is of great significance to ensure the safe and reliable operation of the PMSM.
2. The model proposed in this paper was compared with DNN models with different numbers
of hidden layers, different numbers of hidden layer nodes, different activation functions and
different learning rates, as well as with other machine learning methods. With the standardized
dataset used directly as input, the MAE of this model was 0.1515 and the RMSE was 0.2368,
both smaller than those of the other models, while the R2 was 0.9439, the closest to 1.
With normalization and anti-normalization methods applied, the MAE was 0.0151, the RMSE was 0.0214
and the R2 was 0.9992. Therefore, this model is well suited to PMSM stator winding temperature
prediction under complex nonlinear conditions.

In conclusion, the DNN model proposed in this paper shows better performance than other
machine learning methods in PMSM stator winding temperature prediction. The model can play an
important role in PMSM temperature detection systems, and provide technical support for temperature
warning and the safe operation of PMSMs.
In the future, our next step is to find a lower-cost measurement method and to verify the
universality of the proposed model on PMSMs of different power ratings and on different PMSMs.
Moreover, we are also considering the construction of a larger sample set in order to carry out
time-series prediction of PMSM features.

Author Contributions: H.G. conceived and designed the experiments; H.G., Q.D., Y.S., H.T. and J.Z. designed the
regression model and programming. L.W. analyzed the performance of the PMSM. H.G., Q.D., Y.S., H.T., L.W.
and J.Z. wrote the manuscript. All authors reviewed the manuscript. All authors have read and agreed to the
published version of the manuscript.
Funding: This work was supported only by the Science Foundation of Ministry of Education of China
(No.18YJCZH040).
Acknowledgments: Authors gratefully acknowledge the helpful comments and suggestions of the reviewers who
improved the presentation.
Conflicts of Interest: The authors declare no conflict of interest.


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
