
2011 International Conference on Communication Systems and Network Technologies

Application Research Based on Artificial Neural Network (ANN) to Predict No-Load Loss for Transformer's Design

Amit Kr. Yadav, Abdul Azeem, Akhilesh Singh, Hasmat Malik
Electrical Engineering Department
National Institute of Technology
Hamirpur, H.P., India
e-mail: amit1986.529@rediffmail.com

O. P. Rahi
Assistant Professor, Electrical Engineering Department
National Institute of Technology
Hamirpur, H.P., India
e-mail: oprahi2k@gmail.com

Abstract—The transformer is one of the vital components of the electrical network and plays an important role in the power system. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. A major share of transformer cost is related to its no-load loss, so the cost estimation process for transformers is based on the reduction of no-load loss.
This paper presents a new method for classification of transformer no-load losses. It is shown that ANNs are very suitable for this application since they present classification success rates between 78% and 96% for all the situations examined. The method is based on a Multilayer Perceptron Neural Network (MPNN) with sigmoid transfer function. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data are obtained from a transformer company.

Keywords—Artificial Neural Network (ANN), Levenberg-Marquardt (LM) algorithm, estimating no-load loss, design, power system, transformer.

I. INTRODUCTION
Construction of transformers of high quality at the minimum possible cost is crucial for any transformer manufacturing industry facing market competition. A critical measure of transformer quality is transformer no-load loss: the lower the transformer no-load loss, the higher the transformer quality and efficiency [2]. The transformer designer can reduce no-load loss by using lower-loss core materials or by reducing core flux density or flux path length. Electric utilities use more generating capacity to produce additional electrical energy to compensate for transformer energy losses. The production of this additional electrical energy increases electrical energy cost as well as greenhouse gas emissions. Although transformers inherently have high energy transfer efficiencies, the accumulated transformer energy losses in an electric utility distribution network are high since a large number of transformers are installed. In addition, transformer no-load loss appears 24 hours per day, every day, for a continuously energized transformer. Thus, it is in general preferable to design a transformer for minimum no-load loss. Transformer actual (measured) no-load loss deviates from designed no-load loss due to variability in the production process. Reduction of transformer actual no-load loss is a very important task for any manufacturing industry, since (1) it helps the manufacturer avoid paying no-load loss penalties, and (2) it reduces the material cost (since a smaller no-load loss design margin is used) [1].
The Artificial Neural Network is one of the methods that has been used most in recent years in this field. Transformer insulation aging diagnosis, the remaining life of transformer oil, transformer protection and the selection of winding material in order to reduce cost are a few of the topics that have been addressed [4-8].
In this paper an Artificial Neural Network based method has been used to estimate no-load losses during the design phase. The ANN is used to predict no-load losses as a function of core design parameters.
In the following, artificial neural networks with the Levenberg-Marquardt back-propagation algorithm have been used to estimate the no-load losses of transformers. Data extracted from a transformer manufacturing company have been used to train the ANN, and the best parameters for this network have been presented graphically. Finally, the results given by the trained neural network have been compared with actual manufactured transformers to prove the accuracy of the presented method for estimating no-load losses as a function of core design parameters.

II. ARTIFICIAL NEURAL NETWORK
Neural networks are a relatively new artificial intelligence technique. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. The learning procedure tries to find a set of connections w that gives a mapping that fits the training set well. Furthermore, neural networks can be viewed as highly nonlinear functions of the basic form
F(x, w) = y
where x is the input vector presented to the network, w are the weights of the network, and y is the corresponding output vector approximated or predicted by the network. The weight vector w is commonly ordered first by layer, then by neurons, and finally by the weights of each neuron plus its bias. This view of the network as a parameterized function will be the basis for applying standard function optimization methods to solve the problem of neural network training.
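As an illustration of this parameterized-function view, the following Python sketch (a minimal example, not taken from the paper; the function names, layer sizes and random parameters are illustrative only) evaluates y = F(x, w) for a small multilayer perceptron with one sigmoid hidden layer and a linear output layer, with the weights and biases of each layer grouped as in the ordering described above.

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid transfer function used in the hidden layer.
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, w):
    """Evaluate y = F(x, w) for a two-layer perceptron.

    `w` holds the parameters of each layer: W1, b1 for the sigmoid
    hidden layer and W2, b2 for the linear output layer (an
    illustrative structure, not the paper's exact implementation).
    """
    hidden = sigmoid(w["W1"] @ x + w["b1"])   # hidden-layer activations
    return w["W2"] @ hidden + w["b2"]         # linear output layer

# Example: 9 inputs (as in the attribute set introduced later),
# 20 hidden neurons and 2 outputs, with randomly initialized weights.
rng = np.random.default_rng(0)
w = {
    "W1": rng.normal(size=(20, 9)), "b1": np.zeros(20),
    "W2": rng.normal(size=(2, 20)), "b2": np.zeros(2),
}
x = rng.normal(size=9)
print(mlp_forward(x, w))
```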

A. ANN Structure
A neural network is determined by its architecture, training method and activation function. Its architecture determines the pattern of connections among neurons. Network training changes the values of the weights and biases (network parameters) at each step in order to minimize the mean square of the output error.
The Multi-Layer Perceptron (MLP) has been used in load forecasting, nonlinear control, system identification and pattern recognition [9]; thus in this paper a multi-layer perceptron network (with nine inputs, two outputs and one hidden layer) with the Levenberg-Marquardt training algorithm has been used.
In general, on function approximation problems, for networks that contain up to a few hundred weights, the Levenberg-Marquardt algorithm has the fastest convergence. This advantage is especially noticeable if very accurate training is required. In many cases, trainlm obtains a lower mean square error than any other algorithm tested. As the number of weights in the network increases, the advantage of trainlm decreases. In addition, trainlm performance is relatively poor on pattern recognition problems. The storage requirements of trainlm are larger than those of the other algorithms tested.
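For reference, the Levenberg-Marquardt step can be written as dw = (J^T J + mu*I)^(-1) J^T e, where J is the Jacobian of the errors with respect to the parameters, e is the error vector and mu is an adaptive damping term. The sketch below is a simplified illustration on a generic least-squares problem, not the paper's trainlm implementation; the model, function names and damping schedule are assumptions made for the example.

```python
import numpy as np

def residuals(params, x, y):
    # Residuals of a simple illustrative model y ~= a * exp(b * x).
    a, b = params
    return y - a * np.exp(b * x)

def jacobian(params, x):
    # Jacobian of the residuals with respect to (a, b).
    a, b = params
    J = np.empty((x.size, 2))
    J[:, 0] = -np.exp(b * x)           # d(residual)/da
    J[:, 1] = -a * x * np.exp(b * x)   # d(residual)/db
    return J

def levenberg_marquardt(params, x, y, mu=1e-2, n_iter=50):
    for _ in range(n_iter):
        e = residuals(params, x, y)
        J = jacobian(params, x)
        # LM step: solve (J^T J + mu*I) dw = J^T e, then w_new = w - dw.
        dw = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
        trial = params - dw
        if np.sum(residuals(trial, x, y) ** 2) < np.sum(e ** 2):
            params, mu = trial, mu * 0.5   # accept step, reduce damping
        else:
            mu *= 2.0                      # reject step, increase damping
    return params

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.5 * x)
print(levenberg_marquardt(np.array([1.0, 1.0]), x, y))
```

The damping term mu interpolates between gradient descent (large mu) and the Gauss-Newton step (small mu), which is what gives the method its fast convergence on small and medium sized networks.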

Figure 1: Artificial Neural Network

B. Training of ANN
The major justification for the use of ANNs is their ability to learn relationships in complex data sets that may not be easily perceived by engineers. An ANN performs this function as a result of training, which is a process of repetitively presenting a set of training data (typically a representative subset of the complete set of data available) to the network and adjusting the weights so that each input data set produces the desired output.
Unsupervised and supervised learning processes can be used to adjust the weights in an ANN. A supervised learning process requires input/output pairs to train the network, whereas an unsupervised learning process requires only input patterns. Unsupervised learning can be characterized as a fast, but potentially inaccurate, method of adjusting the weights. On the other hand, supervised learning typically requires longer learning times and can be more accurate. There is no way to tell beforehand which learning method will work best for a given application. For this reason, we concentrate on the very popular supervised learning approach based on the back-propagation training algorithm, which has been shown to produce good results for a large number of different problems.
The back-propagation training algorithm is a method of iteratively adjusting the neural network weights until the desired accuracy level is achieved. It is based on a gradient-search optimization method applied to an error function.
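A minimal illustration of this supervised, gradient-based weight adjustment is given below: batch gradient descent on the mean squared error for the small sigmoid-hidden / linear-output network sketched earlier. The learning rate, epoch count and synthetic data are assumptions for the example, not the paper's settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, T, n_hidden=20, lr=0.05, n_epochs=2000, seed=0):
    """Supervised training of a sigmoid-hidden / linear-output MLP
    by batch gradient descent on the mean squared error.
    X: (n_samples, n_inputs), T: (n_samples, n_outputs)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(T.shape[1], n_hidden))
    b2 = np.zeros(T.shape[1])
    n = X.shape[0]
    for _ in range(n_epochs):
        H = sigmoid(X @ W1.T + b1)        # hidden activations
        Y = H @ W2.T + b2                 # linear outputs
        E = Y - T                         # output errors
        # Backpropagate the MSE gradient through the two layers.
        dW2 = E.T @ H / n
        db2 = E.mean(axis=0)
        dH = (E @ W2) * H * (1.0 - H)     # sigmoid derivative
        dW1 = dH.T @ X / n
        db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Tiny usage example with synthetic data (9 inputs, 2 outputs).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 9))
T = np.column_stack([X[:, 0] + X[:, 1], 0.5 * X[:, 2]])
params = train_mlp(X, T)
```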
III. DESIGN OF WOUND CORE DISTRIBUTION TRANSFORMER
In order to construct a three-phase wound core distribution transformer, two small individual cores (width of core window equal to F1) and two large individual cores (width of core window equal to F2) should be assembled (Figure 2). In general, the width F2 is twice F1.

Figure 2: Assembled active part of wound core distribution transformer

The theoretical no-load losses, say W1 (in Watt), of the small individual core, which are called theoretical single-phase no-load losses, are given by:
W1 = WPK1 * CTW1 (1)
where WPK1 are the theoretical individual core specific no-load losses at the rated magnetic induction and CTW1 is the theoretical weight of the small core as defined in [11].
The theoretical no-load losses, say W2 (in Watt), of the large individual core are:
W2 = WPK1 * CTW2 (2)
where CTW2 is the theoretical weight of the large core [11].
Consequently, the theoretical total no-load losses, say W1tot (in Watt), of the four individual cores are:
W1tot = 2 * (W1 + W2) (3)

The theoretical no-load losses of the three-phase transformer, TFLOSSES, which are also called theoretical three-phase no-load losses, are:
TFLOSSES = WPK3 * CTW (4)
where WPK3 are the theoretical transformer specific no-load losses at the rated magnetic induction, and CTW is the theoretical total weight of the transformer.
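The chain of calculations in equations (1)-(4) can be expressed compactly as below (an illustrative Python sketch; the specific-loss and weight values used in the example call are invented numbers, not data from the paper).

```python
def individual_core_losses(wpk1, ctw_small, ctw_large):
    """Equations (1)-(3): theoretical no-load losses of the small and
    large individual cores and of the four assembled cores (in Watt).
    wpk1: specific no-load losses (W/kg) at rated magnetic induction,
    ctw_small / ctw_large: theoretical core weights (kg)."""
    w1 = wpk1 * ctw_small        # Eq. (1): small core
    w2 = wpk1 * ctw_large        # Eq. (2): large core
    w1_tot = 2.0 * (w1 + w2)     # Eq. (3): two small + two large cores
    return w1, w2, w1_tot

def three_phase_losses(wpk3, ctw_total):
    """Equation (4): theoretical three-phase no-load losses (in Watt)."""
    return wpk3 * ctw_total

# Example with made-up values: 1.1 W/kg specific loss, 55 kg small core,
# 110 kg large core, and 330 kg total transformer core weight.
print(individual_core_losses(1.1, 55.0, 110.0))
print(three_phase_losses(1.1, 330.0))
```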
IV. DATABASE
The first step in the application of artificial intelligence to the classification of individual core no-load losses is to create a database with all the attributes (input parameters) that affect the no-load losses of individual cores [12].
In the case of individual cores, nine attributes have been selected and used as the input vector for the artificial intelligence techniques. The selection of these attributes was based on extensive research and on transformer designers' experience. These attributes correspond to parameters that actually affect the no-load losses of individual cores, as depicted in Table 1.1.

TABLE 1.1 ATTRIBUTES FOR THE CLASSIFICATION OF TRANSFORMER NO-LOAD LOSSES

Symbol | Attribute Name
ATTR1  | ASFLTF / DSFLTF
ATTR2  | AKgTF / DKgTF
ATTR3  | (WPK"11",mat,a + WPK"12",mat,a + WPK"13",mat,a + WPK"14",mat,a) / 4
ATTR4  | Rated magnetic induction
ATTR5  | Thickness of core leg
ATTR6  | Width of core leg
ATTR7  | Height of core window
ATTR8  | Width of core window
ATTR9  | Transformer volts per turn

In Table 1.1, the attribute ATTR1 represents the ratio of actual (ASFLTF) to theoretical (DSFLTF) total no-load losses of the four individual cores. The attribute ATTR2 represents the ratio of actual (AKgTF) to theoretical (DKgTF) total weight of the four individual cores. Finally, the attribute ATTR3 represents the average specific no-load losses of the magnetic material of the four individual cores, where WPK"11",mat,a denotes the specific no-load losses (W/kg) at 1500 Gauss of the magnetic material of the individual core that is placed at position "11", as shown in Figure 2.
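A sketch of how such a nine-attribute input vector might be assembled for one core record is given below (illustrative Python only; the field names and sample values are assumptions, not entries from the manufacturer's database used in the paper).

```python
import numpy as np

def attribute_vector(record):
    """Build the nine-attribute input vector of Table 1.1 for one
    individual-core record (a dictionary with illustrative field names)."""
    attr1 = record["actual_total_loss"] / record["theoretical_total_loss"]      # ASFLTF / DSFLTF
    attr2 = record["actual_total_weight"] / record["theoretical_total_weight"]  # AKgTF / DKgTF
    attr3 = np.mean(record["specific_losses_4_cores"])  # average specific loss of the 4 cores
    return np.array([
        attr1, attr2, attr3,
        record["rated_magnetic_induction"],
        record["core_leg_thickness"],
        record["core_leg_width"],
        record["core_window_height"],
        record["core_window_width"],
        record["volts_per_turn"],
    ])

# Example record with made-up values.
sample = {
    "actual_total_loss": 410.0, "theoretical_total_loss": 400.0,
    "actual_total_weight": 505.0, "theoretical_total_weight": 500.0,
    "specific_losses_4_cores": [1.05, 1.08, 1.06, 1.07],
    "rated_magnetic_induction": 1.5, "core_leg_thickness": 0.09,
    "core_leg_width": 0.19, "core_window_height": 0.24,
    "core_window_width": 0.08, "volts_per_turn": 11.4,
}
print(attribute_vector(sample))
```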
V. SIMULATION
For network learning, some input vectors (P) and some output vectors (T) are needed. The simulation has been performed using data extracted for the 250 kVA transformer type from transformer manufacturing over the last 4 years. A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons has been used. The network has been trained with the Levenberg-Marquardt back-propagation algorithm. The number of neurons in the hidden layer is twenty.
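The configuration described above (a two-layer feed-forward network with twenty sigmoid hidden neurons, linear outputs and Levenberg-Marquardt training) corresponds to the trainlm workflow of MATLAB's Neural Network Toolbox mentioned in Section II. The Python sketch below only reproduces the network topology and a train/test split, using scikit-learn's MLPRegressor with the L-BFGS optimizer as a stand-in, since Levenberg-Marquardt training is not exposed there; the synthetic placeholder data and all parameter choices are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data: 200 synthetic records with the nine attributes of
# Table 1.1 as inputs and a no-load loss value as the target. In the
# paper these records come from the manufacturer's database instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Two-layer feed-forward network: 20 sigmoid hidden neurons and a
# linear output layer, trained with L-BFGS as a stand-in for trainlm.
net = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                   solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("test R^2:", net.score(X_test, y_test))
```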

VI. RESULTS AND DISCUSSION

Figure 3: Mean Square Error

The performance curve is shown in Figure 3. In this figure the mean squared error becomes small as the number of epochs increases. The test set error and the validation set error have similar characteristics, and no significant overfitting has occurred by iteration 7 (where the best validation performance has occurred).

Figure 4: Prediction of transformer no-load loss during training analysis.

The output has tracked the targets very well for training in the estimation of transformer no-load loss. The value of regression is one, which indicates a close correlation between outputs and targets.
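Validation-based monitoring of this kind can be illustrated as follows (a generic sketch, not the paper's MATLAB training session; the epoch budget, the incremental-training loop and the synthetic data are assumptions for the example).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 9))
y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=200)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=1)

# Train one epoch at a time (warm_start) and keep the iteration with the
# lowest validation MSE, mimicking the "best validation performance" point.
# The ConvergenceWarning raised by each 1-epoch fit can be ignored here.
net = MLPRegressor(hidden_layer_sizes=(20,), activation="logistic",
                   solver="adam", max_iter=1, warm_start=True, random_state=1)
best_mse, best_iter = np.inf, 0
for epoch in range(1, 101):
    net.fit(X_tr, y_tr)                      # one more epoch of training
    val_mse = mean_squared_error(y_val, net.predict(X_val))
    if val_mse < best_mse:
        best_mse, best_iter = val_mse, epoch
print(f"best validation MSE {best_mse:.4f} at epoch {best_iter}")
```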
Figure 5: Regression plot of transformer no-load loss

The output tracks the targets very well for training, testing, and validation, and the R-value is approximately one for the total response.

Figure 6: Function fit for transformer no-load loss prediction

For the transformer no-load loss prediction problem, the network outputs have been matched with the targets for all data sets.
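The regression value R reported in Figure 5 is the linear correlation coefficient between network outputs and targets; a small illustration of computing it (generic Python with made-up arrays, not the paper's results) is given below.

```python
import numpy as np

def regression_r(outputs, targets):
    """Linear correlation coefficient R between network outputs and
    targets, as reported in a regression plot."""
    return np.corrcoef(outputs, targets)[0, 1]

# Made-up example: predictions that track the targets closely give R near 1.
targets = np.array([395.0, 402.0, 410.0, 388.0, 415.0])
outputs = np.array([396.5, 401.0, 409.2, 389.3, 414.1])
print(f"R = {regression_r(outputs, targets):.4f}")
```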
VII. CONCLUSIONS
In this paper, the Artificial Neural Network is applied for the classification of transformer no-load losses. The basic steps in the application of the method, such as the generation of the knowledge base (training and testing sets), the selection of candidate attributes and the derivation of the appropriate neural network structures (number of neurons in the competitive layer), are presented. The classification success rate ranges between 78% and 96% for the two criteria used and the various limits examined. It is shown that with the knowledge base used and for the selected candidate attribute sets, the Artificial Neural Network method is very suitable for classification of no-load losses of wound core distribution transformers.

REFERENCES
[1] P. S. Georgilakis, "Recursive genetic algorithm-finite element method technique for the solution of transformer manufacturing cost minimization problem", IET Electr. Power Appl., vol. 3, iss. 6, pp. 514-519, 2009.
[2] M. Papadopoulos, Electric Energy Distribution Networks. Book, NTUA, 1994 (in Greek).
[3] Pavlos S. Georgilakis, Marina A. Tsili and Athanassios T. Souflaris, "A Heuristic Solution to the Transformer Manufacturing Cost Optimization Problem", JAPMED'4 - 4th Japanese-Mediterranean Workshop on Applied Electromagnetic Engineering for Magnetic, Superconducting and Nano Materials, Poster Session, Paper 103_PS_1, e-Journal of Science & Technology (e-JST), September 2005, pp. 83-84.
[4] Manish Kumar Srivastava, "An innovative method for design of distribution transformer", e-Journal of Science & Technology (e-JST), April 2009, pp. 49-54.
[5] Geromel, Luiz H. and Souza, Carlos R., "The application of intelligent systems in power transformer design", IEEE Conference, 2002, pp. 1504-1509.
[6] Yang Qiping, Xue Wude and Lan Zida, "Transformer Insulation Aging Diagnosis and Service Life Evaluation", Transformer [J], vol. 41, no. 2.
[7] Tetsuro Matsui, Yasuo Nakahara, Kazuo Nishiyama, Noboru Urabe and Masayoshi Itoh, "Development of Remaining Life Assessment for Oil-immersed Transformer Using Structured Neural Networks", ICROS-SICE International Joint Conference, August 2009, pp. 1855-1858.
[8] M. R. Zaman and M. A. Rahman, "Experimental testing of the artificial neural network based protection of power transformers", IEEE Trans. Power Del., vol. 13, no. 2, pp. 510-517, Apr. 1998.
[9] Eleftherios I. Amoiralis, Pavlos S. Georgilakis and Alkiviadis T. Gioulekas, "An Artificial Neural Network for the Selection of Winding Material in Power Transformers", Springer-Verlag Berlin Heidelberg, 2006, pp. 465-468.
[10] Khaled Shaban, Ayman EL-Hag and Andrei Matveev, "Predicting Transformers Oil Parameters", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009, pp. 196-199.
[11] P. S. Georgilakis, J. A. Bakopoulos and N. D. Hatziargyriou, "A Decision Tree Method for prediction of Distribution Transformer Iron Losses", 32nd UPEC, vol. 1, pp. 257-260, Manchester, September 1997.
[12] P. S. Georgilakis, N. D. Hatziargyriou, N. D. Doulamis, A. D. Doulamis and S. D. Kollias, "Prediction of iron losses of wound core distribution transformers based on artificial neural networks", Neurocomputing, vol. 23, no. 1-3, pp. 15-29, Dec. 1998.
