Application in Electronics - Paper 2
Rr = Ra × (235 + θr) / (235 + θa)                (8)

Figure 4. A feed-forward multilayer perceptron (MLP)
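Equation (8) refers a winding resistance Ra, measured at temperature θa, to a reference temperature θr; 235 is the temperature constant for copper windings used in the formula. A minimal sketch (the function name and the example values are illustrative, not taken from the paper):

```python
def refer_resistance(r_a, theta_a, theta_r):
    """Equation (8): refer a resistance r_a measured at theta_a (deg C)
    to the reference temperature theta_r (deg C). The constant 235 is
    the copper temperature constant appearing in the formula."""
    return r_a * (235.0 + theta_r) / (235.0 + theta_a)

# e.g. a winding measuring 57.80 ohms at 21 deg C, referred to 75 deg C:
r_75 = refer_resistance(57.80, 21.0, 75.0)   # about 69.99 ohms
```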
Using supervised learning with the error-correction learning rule (ECL), these networks can learn the mapping from one data space to another using examples. The term back-propagation refers to the error being propagated backward from the output layer to the hidden layer, and finally to the input layer. In the MLP, the data are fed forward through the network without feedback. The neurons in the MLP can be fully or partially interconnected [9].

IV. IMPLEMENTATION OF THE NEURAL NETWORK

In the work by Souza et al. [1], the total transformer loss (PT) was expressed using six variables: the temperature in degrees Celsius (T), the resistances of the high- and low-voltage windings (RHV and RLV), the core and copper losses (Pcore and PCU), and the magnetizing current (IM). These variables were set as the inputs, and the total loss (PT) of the transformer as the output (training target) of the MLP.

Unlike the work proposed by Souza et al., only two variables are employed here to identify the total transformer loss (PT): the temperature, owing to its effect on the transformer loss as shown in Figure 5, and the test current (ITest). An MLP with two neurons at the input layer and one neuron at the output layer was implemented, as shown in Figure 6. The test current (ITest) is adjusted from 0% to 100% in the pre-processing, and six different values of temperature (30 °C, 35 °C, 45 °C, 55 °C, 60 °C and 75 °C) are employed to determine the number of training data sets.

Table 1: Look-up table

Output   PT        PCU       PCORE    RHV     RLV      Temp
Index    (W)       (W)       (W)      (Ω)     (mΩ)     (°C)
   2      240.26      0.06   240.20   57.80   21.00     –
   3      238.80      0.50   238.30   57.87   21.00     –
   4      241.53      1.13   240.40   57.80   21.00     –
   5      241.91      2.01   239.90   57.93   21.00     –
   6      243.83      3.13   240.70   57.70   21.00     –
   7      245.01      4.51   240.50   57.70   21.00     –
   8      246.25      6.15   240.10   57.90   21.00     –
   9      248.94      8.04   240.90   58.13   21.00     –
  10      250.46     10.15   240.30   57.73   21.00     –
  11      252.82     12.52   240.30   57.70   21.00     –
 ...          ...       ...      ...     ...     ...     ...
 601     1560.10   1319.70   240.40   57.47   21.00     –
 602     1591.12   1350.52   240.60   57.73   21.00     –
 603     1622.63   1382.73   239.90   57.47   21.00     –
 604     1631.21   1391.51   239.70   57.27   21.00     –
 605     1665.28   1429.98   235.30   57.20   21.00     –
 606     1704.65   1464.65   240.00   57.73   21.00     –

The Levenberg-Marquardt algorithm [10] is employed for training because it provides stable training with small training rates. An MLP with a log-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer, trained with the Levenberg-Marquardt algorithm, is employed in the ANN.

In this study, we intend to use the trained MLP as a classifier
that is used to identify the address where the set of data is kept in the look-up table. Therefore, not only is the estimated total loss (PT) given, but the related variables are also obtained: the core loss (Pcore), the copper loss (PCU), and the resistances of both the high-voltage winding (RHV) and the low-voltage winding (RLV). The data kept in the look-up table were collected from experiments on 100 transformers carried out at transformer manufacturers in Thailand. The average value of each variable over the 80 data sets is used as the output of the look-up table. Therefore, all of the values at the same address of the look-up table are obtained when the output that functions as an index is given. An example of the look-up table is shown in Table 1.

First, the MLP is trained: sets of the test-current percentage (ITest) and the temperature (T) are fed into the MLP, which has 2 input nodes. The MLP then adapts by adjusting the weights of the nodes in the hidden layer, which has 20 nodes, to achieve the desired output (1 output node) for the given input.
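The pipeline described above — an MLP that estimates PT from (ITest, T), whose output then serves as an index into the look-up table — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the forward pass follows the stated architecture (2 inputs, 20 log-sigmoid hidden nodes, 1 linear output), the weights are random stand-ins for trained values, only three rows of Table 1 are reproduced, and nearest-PT matching is an assumed way of realizing "the output functions as an index".

```python
import math, random

def logsig(x):
    # Log-sigmoid transfer function used in the hidden layer
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(x, w1, b1, w2, b2):
    # 2 -> 20 log-sigmoid hidden layer, then 20 -> 1 linear output
    hidden = [logsig(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# A few look-up table rows: index -> (PT, PCU, PCORE), values from Table 1
lookup = {
    2: (240.26, 0.06, 240.20),
    3: (238.80, 0.50, 238.30),
    606: (1704.65, 1464.65, 240.00),
}

def classify(pt_estimate, table):
    # Treat the MLP output as an address: return the entry whose total
    # loss PT is closest to the estimate, with all its stored variables.
    return min(table.items(), key=lambda kv: abs(kv[1][0] - pt_estimate))

# Random stand-in weights (a trained network would supply these)
random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
b1 = [random.uniform(-1, 1) for _ in range(20)]
w2 = [random.uniform(-1, 1) for _ in range(20)]
b2 = 0.0

pt = mlp_forward([0.5, 30.0], w1, b1, w2, b2)  # (ITest = 50%, T = 30 C)
index, variables = classify(240.0, lookup)     # nearest PT -> index 2
```

With trained weights, `pt` would be fed to `classify` directly, so one network evaluation yields the whole set of related variables rather than the loss alone.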
V. EXPERIMENTAL RESULTS

The experiment is carried out on the trained MLP in simulation in order to evaluate the proposed method. Twenty test data sets, measured from three-phase distribution transformers rated 100 kVA, 22 kV-400/230 V, Dyn11, were used to compare the results. The error in percent can be calculated as in (9)
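Equation (9) itself is not reproduced in this excerpt; as a sketch, assuming it is the conventional percent-error definition comparing the measured and estimated loss:

```python
def percent_error(measured, estimated):
    # Conventional percent error; the paper's exact equation (9) is not
    # shown in this excerpt, so this definition is an assumption.
    return abs(measured - estimated) / measured * 100.0

err = percent_error(240.26, 238.80)  # e.g. measured vs estimated PT in W
```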