Conference Paper · May 2015
DOI: 10.1109/MIPRO.2015.7160443
Available at: https://www.researchgate.net/publication/277710911


Anomaly Detection in Thermal Power Plant using
Probabilistic Neural Network

A. Hajdarevic*, I. Dzananovic*, L. Banjanovic-Mehmedovic**, F. Mehmedovic***


* Thermal Power Plant "Tuzla", 75000 Tuzla, Bosnia and Herzegovina
** Faculty of Electrical Engineering, University of Tuzla, P.O.Box 75000 Tuzla, Bosnia and Herzegovina
*** ABB Ltd. Representative Office in Bosnia and Herzegovina, P.O.Box 75000, Tuzla, Bosnia and Herzegovina

hajdarevicamel@gmail.com, i.dzananovic@elektroprivreda.ba, lejla.mehmedovic@untz.ba,


fahrudin.mehmedovic@hr.abb.com

Abstract - Anomalies are an integral part of every system's behavior and sometimes cannot be avoided. It is therefore very important to detect such anomalies in a real-world running power plant system in a timely manner. Artificial neural networks are one of the anomaly detection techniques. This paper applies one type of neural network (probabilistic) to the problem of anomaly detection in selected sections of a thermal power plant. The selected sections are the steam superheaters and the steam drum. The inputs for the neural networks are some of the most important process variables of these sections. It is noteworthy that all of the inputs are observable in the real system installed in the thermal power plant; some of them represent normal behavior and some anomalies. In addition to the implementation of this network for anomaly detection, the effect of a key parameter change on the detection results is also shown. The results confirm that the probabilistic neural network is an excellent solution for the anomaly detection problem, especially in real-time industrial applications.

I. INTRODUCTION

With around a 40% share in global electricity production, thermal power plants are still significant power generation facilities, although their influence is being reduced in favor of renewable energy plants. Adequate control of thermal power plant units is necessary for the system to function properly, since improper functioning may result in electricity supply interruptions and financial losses for electricity producers.

One of the most important systems involved in thermal power plant operation is the plant's boiler. Pulverized coal combustion occurs in this system. The combustion raises the temperature, which drives the conversion of feedwater into steam with the needed parameters. Such steam is driven further to the turbine system, propelling it, which is essential for electricity production in a thermal power plant. From the above, it is clear that a malfunction of any part of the boiler system affects the operation of the thermal power plant. An outage of some part, or of the whole thermal power plant unit, can often be avoided with timely anomaly detection.

Many detection techniques are being researched, and among them are neural network based anomaly detection techniques. If a neural network is created to detect anomalies in the system, the reaction time to an anomaly could be significantly reduced. This would provide greater system stability and therefore fewer losses.

This paper presents one possible solution to the anomaly detection problem in a boiler system using a probabilistic neural network. Given the complexity of the system, and because of the necessary simplification, only some characteristic sections and some characteristic process variables were selected. All of the input data used come from the system for monitoring and control present in Thermal Power Plant "Tuzla", Bosnia and Herzegovina.

II. SELECTED SECTIONS OF THERMAL POWER PLANT

A thermal power plant, as a large and complex system, consists of multiple smaller systems that work together and ensure continuous electricity generation. This paper is focused on one of the main systems of every thermal power plant: the boiler.

The boiler is one of the most important systems in a thermal power plant because of its role in electricity generation [1, 2]. The boiler represents the entire system that participates in the conversion of water into steam. A single division of the boiler system into individual sections does not exist, because many sections are connected and there is often no clear distinction between them. In this paper, the division is made according to the available process images, with certain corrections concerning data availability.

A. Water-Steam System
This system is primarily engaged in converting water into steam and consists of multiple subsystems with separate functions, built from multiple pipes and vessels. Heat exchange between different media occurs in this system in order to achieve optimal steam parameters. The steam is driven further to the turbine, propelling it, which is essential for electricity generation. Given that the quality and parameters of the steam directly affect electricity generation, anomaly detection in this system is of great importance for the plant. The most important measurements of process variables related to steam are temperature, pressure and steam flow, which directly affects the current power output.

B. Feedwater System
One of the important subsystems of the boiler is the system for its feedwater supply, because without the feedwater supply there is no steam generation. After the raw water is treated at the chemical water treatment plant, the water is stored in the feedwater tank. The water is

MIPRO 2015/CTS 1321


distributed further from the tank into the feedwater pipe system using feedwater pumps. The pumps maintain the specified feedwater flow, which is determined by the required power output. The purpose of this system is the distribution of feedwater to the most important subsystem of the water-steam system – the steam drum.

C. Steam Superheaters
The steam generated in the steam drum is distributed through this system; after distribution, it is called superheated steam. The steam superheaters system consists of pipes mounted in the boiler that distribute the steam to the turbine. This system has a major role in removing moisture from the steam, which improves its quality.

The temperature of the steam generated in the drum still does not match the temperature needed in the process, and this steam still contains a certain percentage of moisture. Such steam should not be distributed to the turbine due to the possibility of condensation on the blades. In every thermal power plant, there is a tendency to produce 100% quality steam. Because of that, the steam is heated in this system using flue gases. Increasing the steam temperature removes moisture from the steam, and that is the primary goal of this system. The steam temperature is increased by around 160 °C compared to the steam temperature in the drum. The goal of the system is to maintain the temperature around 535 °C, which is optimal for the boiler analyzed in this paper.

D. Steam Drum
The most important subsystem of the water-steam system is the steam drum, a large tank tasked with steam extraction from the water-steam mixture stored in the drum. Given the importance of the drum for electricity generation, it is logical that timely anomaly detection is required in this system. The most important process variable in the drum is the water level. In addition to that, steam pressure, conductivity and pH value are also measured. A rough block diagram is shown in "Fig. 1". It shows the important boiler sections, the sections selected for anomaly detection (marked in red) and some connections between the sections.

III. ANALYSIS OF EXISTING ANOMALY DETECTION TECHNIQUES

Conventional anomaly detection techniques have been used for a long time, but with the development of computer technology, modern anomaly detection techniques can be developed.

The division of anomaly detection techniques can be done in multiple ways [3, 4]. Based on the means of acquiring knowledge, anomaly detection techniques can operate in one of the following three modes: supervised, semi-supervised and unsupervised. Techniques that operate in unsupervised mode detect anomalies by analyzing an unlabeled data set and making the implicit assumption that the majority of the data set represents normal behavior; anomalous data is the data that least fits the characteristics of the data set. Techniques that operate in semi-supervised mode build a model for the class corresponding to normal behavior; the model is then used to identify anomalies in the test data. Techniques that operate in supervised mode require two data sets, one labeled as normal behavior and the other labeled as anomalous; a classifier is then trained to determine whether the test data belong to the class labeled as normal behavior.

In addition to this division, there is another division that is used in this paper, proposed in [3]. It is a division based on anomaly type and classifies anomaly detection techniques into one of the following three groups: point anomaly detection techniques, contextual anomaly detection techniques and collective anomaly detection techniques. If an individual data instance from a data set can be considered anomalous with respect to the rest of the data, then such an instance can be classified as a point anomaly. The focus of this paper is point anomaly detection techniques.

A. Classification Based Anomaly Detection Techniques
Classification is used to learn a model from a set of labeled data instances (the training data set). After that, the model is used to classify a test instance into one of the classes. These techniques operate under the assumption that there is enough data in the training data set for the classifier to learn how to classify test data.

Figure 1. Block diagram of boiler sections (some turbine sections included)
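The semi-supervised, point anomaly setting described above – learn a model of normal behavior, then test individual instances against it – can be illustrated with a minimal sketch (a generic z-score detector shown for illustration only, not the method used in this paper; the sample data and threshold are hypothetical):

```python
import numpy as np

def fit_normal_model(normal_data):
    """Semi-supervised mode: learn a model of normal behavior
    from a training set that contains only normal instances."""
    return normal_data.mean(axis=0), normal_data.std(axis=0)

def is_point_anomaly(x, model, threshold=3.0):
    """Point anomaly check: flag a single instance whose z-score
    exceeds the threshold in any feature."""
    mean, std = model
    z = np.abs((x - mean) / std)
    return bool(np.any(z > threshold))

# Hypothetical steam temperature / pressure readings (normal behavior only)
normal_data = np.array([[534.0, 100.0], [536.0, 101.0],
                        [535.0, 99.0], [535.0, 100.0]])
model = fit_normal_model(normal_data)
```

A reading close to the training data is accepted, while one far from it is flagged as a point anomaly with respect to the rest of the data.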



Classification based anomaly detection techniques can be categorized into the following two groups: one-class and multi-class anomaly detection techniques. One-class classification based techniques assume that all training instances have only one class label; a test instance different from the labeled ones is declared anomalous. Multi-class classification based techniques assume that the training data contains labeled instances belonging to multiple normal classes; a test instance is declared anomalous if it does not belong to any of the normal classes [5].

Commonly used classification based anomaly detection techniques are: rule based techniques, neural network based techniques and support vector machine (SVM) based techniques.

B. Neural Network Based Anomaly Detection Techniques
Neural networks are used for anomaly detection and represent anomaly detection techniques that use artificial intelligence. They can perform as one-class as well as multi-class classifiers. Neural networks are trained with normal and anomalous instances; after that, the network should be able to determine the anomalous instances in a test data set.

Many published papers deal with applications of neural networks in various systems. Papers [6] and [7] deal with the application of artificial intelligence in anomaly detection, and many techniques that use neural networks are presented there. The application of neural networks in power systems is considered in [8]. In particular, the application of neural networks for anomaly detection in power plants is considered in papers [9] and [10]. Neural networks in combination with fuzzy logic for anomaly detection are considered in [11].

Multiple types of neural networks could be used for anomaly detection: MLP neural networks, recurrent neural networks (RNN), probabilistic neural networks, etc.

IV. PROBABILISTIC ARTIFICIAL NEURAL NETWORK

Probabilistic neural networks (PNN) belong to the group of stochastic neural networks. A PNN is an implementation of a statistical algorithm called kernel discriminant analysis (KDA) [12, 13]. The probabilistic neural network is based on the theory of Bayesian classification and the estimation of a probability density function (pdf). Because of the ease of training and a sound statistical foundation in Bayesian estimation theory, the PNN has become an effective tool for solving many classification problems. A PNN consists of several sub-networks, each of which is a Parzen window pdf estimator for one of the classes. The development of the probabilistic neural network relies on Parzen windows classifiers. The Parzen windows method is a non-parametric procedure that synthesizes an estimate of a probability density function by superposition of a number of windows, replicas of a function (often the Gaussian). The Parzen windows classifier takes a classification decision after calculating the probability density function of each class using the given training examples. The multicategory classifier decision is expressed as follows:

p_k f_k > p_j f_j    (1)

where p_k is the prior probability of occurrence of examples from class k and f_k is the estimated pdf of class k.

The advantages of this network type are: a rapid training process (faster than backpropagation based networks), guaranteed convergence to the optimal classification as the data set grows, and the ability to change the number of learning inputs with minimal or no additional training. The disadvantages are: large memory requirements, slow execution and the requirement for a representative data set. Unlike many other neural networks, the training of a PNN is done in a single iteration.

The PNN architecture consists of four layers: the input layer, pattern layer, summation layer and decision layer, presented in "Fig. 2". This PNN structure recognizes c classes. The first layer takes the input pattern with n features. The number of nodes in the pattern layer is equal to the number of training instances, and the number of nodes in the summation layer is equal to the number of classes in the training instances. The input layer is fully connected to the pattern layer; it performs no computation and simply distributes the input to the neurons in the pattern layer, which is semi-connected to the summation layer. The summation units sum the inputs from the pattern units that correspond to the category from which the training pattern was selected.

The PNN works by creating a set of multivariate probability densities derived from the training vectors presented to the network. An input instance with an unknown category is propagated to the pattern layer. Once each node in the pattern layer receives the input, the output of the node is computed as:

\pi_{ic} = \frac{1}{(2\pi)^{n/2} \sigma^n} e^{-(x - x_{ij})^T (x - x_{ij}) / (2\sigma^2)}    (2)

where n is the number of features of the input instance x, \sigma is the smoothing (spread) parameter and x_{ij} is the j-th training instance of class i.

The summation layer neurons compute the maximum likelihood of pattern x being classified into class c by summarizing and averaging the output of all neurons that belong to the same class:

Figure 2. An example of probabilistic artificial neural network
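The forward pass just described – Gaussian pattern units as in (2), per-class averaging in the summation layer, and an arg-max decision – can be sketched in a few lines of NumPy (an illustrative re-implementation under the equal-priors assumption, not the authors' code; the data and spread value are placeholders):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, spread=1.0):
    """Minimal PNN forward pass: Gaussian pattern units (Eq. 2),
    per-class averaging in the summation layer, arg-max decision.
    Illustrative sketch only, assuming equal class priors."""
    n = train_X.shape[1]                       # number of features
    norm = (2 * np.pi) ** (n / 2) * spread ** n
    diff = train_X - x                         # one pattern unit per training instance
    g = np.exp(-np.sum(diff * diff, axis=1) / (2 * spread ** 2)) / norm
    classes = np.unique(train_y)
    # summation layer: average the pattern-unit outputs of each class
    p = np.array([g[train_y == c].mean() for c in classes])
    return classes[np.argmax(p)]               # decision layer (Bayes rule)
```

For example, with two training instances of class 0 near the origin and one of class 1 far from it, a query near the origin is assigned class 0. Note that the whole "training" step is just storing the training matrix, which is why a PNN trains in a single iteration.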



p_i(x) = \frac{1}{(2\pi)^{n/2} \sigma^n} \frac{1}{N_i} \sum_{j=1}^{N_i} e^{-(x - x_{ij})^T (x - x_{ij}) / (2\sigma^2)}    (3)

where N_i denotes the total number of samples in class i. If the a priori probabilities for each class are the same, and the losses associated with making an incorrect decision for each class are the same, the decision layer unit classifies the pattern x in accordance with the Bayes decision rule based on the output of all the summation layer neurons:

C(x) = \arg\max_i \{ p_i(x) \}, \quad i = 1, 2, \ldots, c    (4)

where C(x) denotes the estimated class of the pattern x and c is the total number of classes in the training samples. If the a priori probabilities for each class are not the same and the losses associated with making an incorrect decision for each class are different, the output of all the summation layer neurons will be:

C(x) = \arg\max_i \{ p_i(x) \, cost_i(x) \, apro_i(x) \}, \quad i = 1, 2, \ldots, c    (5)

where cost_i(x) is the cost associated with misclassifying the input vector and apro_i(x) is the prior probability of occurrence of patterns in class i.

V. EVALUATION OF ACQUIRED KNOWLEDGE

After obtaining the actual classification results, some way of assessing their quality is needed for a given type of neural network. The purpose of the assessment is to determine which type of neural network best fits a classification problem. Therefore, the evaluation of acquired knowledge is an important part of neural network application. During this process, it must be determined which verification and validation measures will be used. Different types of these measures exist for specific applications [14], and some of them are applied in this paper.

A. Result Verification
There are several different measures used for result verification. The majority of the measures are adapted to one-class classification. This corresponds to the aim of this paper, because the neural networks used for anomaly detection here are trained as one-class classifiers. Table I provides an overview of the possible outcomes of a one-class classification problem. There are four possible outcomes of anomaly detection. True positive (TP) and true negative (TN) outcomes represent a correct classification, while false negative (FN) and false positive (FP) outcomes represent an incorrect one. Both types of incorrect classification represent a hazard: false positives can cause an action which is not needed, but false negatives are actually more dangerous because an anomaly would be ignored.

TABLE I. POSSIBLE OUTCOMES OF A ONE-CLASS CLASSIFICATION PROBLEM

                        Actual behavior
Desired behavior    Normal            Anomalous
Normal              True positive     False positive
Anomalous           False negative    True negative

Based on these counts, the following performance metrics are calculated (all can be expressed as percentages):

• Accuracy (ACC):
  ACC = \frac{TP + TN}{TP + FP + FN + TN}    (6)

• Sensitivity, true positive rate (TPR) or recall:
  TPR = \frac{TP}{TP + TN}    (7)

• Specificity or true negative rate (TNR):
  TNR = \frac{TN}{TP + TN}    (8)

• Precision (PR) or positive predictive value (PPV):
  PR = \frac{TP}{TP + FP}    (9)

• Negative predictive value (NPR):
  NPR = \frac{TN}{TN + FN}    (10)

• F1 score:
  F1 = \frac{2TP}{2TP + FP + FN}    (11)

B. Result Validation
Result validation techniques are mainly linked to neural networks when an input data set is not large enough. In addition to that, they are used to test a neural network with a random sample from the data set. Even if the data set is large enough, that does not guarantee the neural network will be trained properly, due to the possibility of overfitting the network with a large number of similar instances. If that occurs, the neural network could treat some noise as an anomaly. In order to prevent this, some kind of result validation should be performed. Result validation techniques can also show whether the neural network parameters are good enough. Commonly used result validation techniques are cross-validation, regularization, early stopping, etc. Result validation techniques are not exclusively used with neural networks; they are also used with other anomaly detection techniques and with value prediction techniques.

The most popular result validation technique is cross-validation [15]. The idea is to separate the available data into a training data set (usually 80% to 90% of the data) and a test data set (the remaining 10% to 20%). The simplest form of cross-validation randomly separates the available data into a single training set and a single test set. It is a risky approach, because an unlucky split could lead to an ineffective neural network. A better approach would be to repeat the previous procedure multiple times, but this approach is also risky because there is a chance that some instances would be used only for training and never for testing, or vice versa. Therefore, a technique called k-fold cross-validation was developed.
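The verification measures (6)–(11) can be computed directly from the four outcome counts. The sketch below uses the formulas exactly as defined above; note that TPR and TNR as defined here share the TP + TN denominator, which is why their sum is always 1:

```python
def verification_measures(tp, tn, fp, fn):
    """Result verification metrics following Eqs. (6)-(11) above."""
    return {
        "ACC": (tp + tn) / (tp + fp + fn + tn),   # (6) accuracy
        "TPR": tp / (tp + tn),                    # (7) as defined in this paper
        "TNR": tn / (tp + tn),                    # (8) as defined in this paper
        "PR":  tp / (tp + fp),                    # (9) precision / PPV
        "NPR": tn / (tn + fn),                    # (10) negative predictive value
        "F1":  2 * tp / (2 * tp + fp + fn),       # (11) F1 score
    }
```

For a balanced, error-free classification (fp = fn = 0, tp = tn) this yields ACC = PR = NPR = F1 = 1 and TPR = TNR = 0.5, matching the ideal scores discussed in Section VI.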



The idea behind k-fold cross-validation is to divide all the available data into k roughly equal-sized groups. In each iteration of k-fold cross-validation, k − 1 groups are used for training and the remaining one is used for testing. The procedure iterates through the folds: after the first iteration, the next group is used for testing and the remaining data are used for training. This repeats until each of the groups has been used for testing once.

The main advantage of k-fold cross-validation compared to other techniques is the utilization of all the available data in the process. The main disadvantage is the potential for different validation results due to the stochastic forming of the groups at the beginning of the validation process: k-fold cross-validation can give different results each time it is performed. This can be avoided by repeating the process multiple times and using the mean validation result.

VI. RESULTS AND DISCUSSION

The results are presented as a set of different verification measures which show the effect of changing the important parameters. Given that the k-fold cross-validation technique is described in this paper, 10-fold cross-validation results are presented in addition to the test results.

The scores of an ideal classification would all be equal to 1, except TPR and TNR (0.5 each – their sum is always 1). Given that the initial synaptic weights are chosen randomly, it is possible to obtain different results if a neural network is trained and tested multiple times. Therefore, all of the presented results are obtained from average confusion matrices (averages of multiple training and testing cycles).

A. Data Used as Neural Network Input Data
Neural networks were applied to data from two sections of the boiler: the steam superheaters and the steam drums. These sections each consist of two separate systems, which together produce steam of adequate quality. The process variables used in this paper represent both systems (I and II). The process variables from the steam superheaters are: superheated steam temperature (TS), superheated steam flow (FS) and superheated steam cooling water flow (CWF). The variables from the steam drums are: drum level (DL), drum pressure (DP) and feedwater flow (FWF). All of the input data come from an actual system for monitoring and control present in unit 4 of Thermal Power Plant "Tuzla", with a sampling period of 1 s.

There are 962 instances that represent normal behavior and the same number representing anomalous behavior in the input data set, which makes a total of 1924 instances for each section. Given the size of the data set, 10-fold cross-validation is not strictly necessary, but it is used to test the neural networks with more random data splits. The data set is divided into a training data set (70% of the data), a validation data set (15%) and a test data set (15%).

B. Results of Probabilistic Neural Network Application
Due to the nature of this type of neural network, there are not many parameters that can be changed. The number of neurons depends on the number of instances in the input data set. Also, the learning process is completed in only one epoch, so the only parameter that can be changed is the spread coefficient. The initial value of this parameter is 1, and the changes are performed by decreasing and increasing the value; the minimum value is 0.01 and the maximum is 10. The best results are obtained if the coefficient value is between 0.5 and 5 for both sections. The results of PNN application for anomaly detection in the steam superheaters are given in Table II. From the results, the PNN gives excellent results if the spread coefficient is properly selected (spread = 2). The results of PNN application for anomaly detection in the steam drums are given in Table III. The best results are again obtained if the value is between 0.5 and 5, but the steam drum results are somewhat better than the steam superheater results when other values are used. The inputs (6 values for both systems) from the steam superheaters and the PNN classification outputs are shown in "Fig. 3". The inputs (6 values for both systems) from the steam drums and the PNN classification outputs are shown in "Fig. 4". The results confirm that the PNN is an excellent solution for the anomaly detection problem.

TABLE II. RESULTS OF PNN (STEAM SUPERHEATERS)

Validation results
Measure   Spread=0.01  Spread=0.1  Spread=0.5  Spread=2  Spread=10
ACC       0.5418       0.9214      0.9995      0.9994    0.8925
TPR       0.9227       0.5425      0.4999      0.4998    0.5601
TNR       0.0773       0.4575      0.5001      0.5002    0.4399
PR        0.5218       0.8643      0.9997      0.9997    0.8231
NPR       0.9977       0.9998      0.9992      0.9991    0.9998
F1        0.6857       0.9271      0.9995      0.9994    0.9029

Testing results
Measure   Spread=0.01  Spread=0.1  Spread=0.5  Spread=2  Spread=10
ACC       0.6501       0.9982      0.9993      0.9994    0.8990
TPR       0.7690       0.5009      0.4997      0.4997    0.5562
TNR       0.2310       0.4991      0.5003      0.5003    0.4438
PR        0.5883       0.9965      0.9999      0.9999    0.8320
NPR       0.9999       0.9999      0.9987      0.9989    0.9999
F1        0.7408       0.9982      0.9993      0.9994    0.9083

TABLE III. RESULTS OF PNN (STEAM DRUMS)

Validation results
Measure   Spread=0.01  Spread=0.1  Spread=0.5  Spread=2  Spread=10
ACC       0.5566       0.8932      0.9961      0.9974    0.9877
TPR       0.8982       0.5597      0.5004      0.4997    0.5036
TNR       0.1018       0.4403      0.4996      0.5003    0.4964
PR        0.5300       0.8241      0.9954      0.9979    0.9809
NPR       0.9983       0.9998      0.9968      0.9969    0.9946
F1        0.6928       0.9035      0.9961      0.9974    0.9877

Testing results
Measure   Spread=0.01  Spread=0.1  Spread=0.5  Spread=2  Spread=10
ACC       0.6822       0.9987      0.9999      0.9999    0.9916
TPR       0.7329       0.5006      0.4997      0.4999    0.5041
TNR       0.2671       0.4994      0.5003      0.5001    0.4959
PR        0.6114       0.9975      0.9999      0.9999    0.9838
NPR       0.9999       0.9999      0.9999      0.9999    0.9998
F1        0.7589       0.9987      0.9999      0.9999    0.9917



Figure 3. Anomaly detection using probabilistic neural network for input data from steam superheaters (spread = 2)

Figure 4. Anomaly detection using probabilistic neural network for input data from steam drums (spread = 2)

VII. CONCLUSION

In this paper, a probabilistic neural network (PNN) was tested with data from a real thermal power system (TPS) to explore the possibilities of anomaly detection.

An anomaly in any part of the boiler affects the operation of a thermal power plant unit, and malfunctions can be prevented by timely anomaly detection. Given the system complexity, for the purpose of necessary simplification, only some of the characteristic boiler sections and process variables from those sections were selected for further analysis. All of the input data come from an actual system for monitoring and control present in Thermal Power Plant "Tuzla".

It was shown that the PNN provides excellent anomaly detection results with the used data. The significance of the results for power generation process control in a thermal power plant could be great. Because the data are divided so that the very beginning of an anomalous state is represented in the anomalous data instances, neural networks could be used as a tool for early anomaly detection. Given that, the anomaly reaction time could be significantly reduced, which would provide greater system stability and therefore fewer losses.

Only some data from large sections of the boiler were included in the analysis provided in this paper. In an actual application of neural networks for anomaly detection, data from many other (smaller) sections should be included. In addition to the problem of anomaly detection, future research could be extended to predictive control in industrial systems using neural networks. In any case, in the area of modern advanced automation, neural networks and other forms of artificial intelligence are seen as the future of complex systems.

REFERENCES

[1] J.B. Kitto, S.C. Stultz, "Steam: Its Generation and Use", The Babcock & Wilcox Company, Edition 41, 2005.
[2] G.F. Gilman, "Boiler Control Systems Engineering", International Society of Automation (ISA), Second Edition, 2010.
[3] V. Chandola, A. Banerjee, V. Kumar, "Anomaly Detection: A Survey", Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, USA, 2007.
[4] V.J. Hodge, J. Austin, "A Survey of Outlier Detection Methodologies", Dept. of Computer Science, University of York, York, UK, 2004.
[5] K. Hempstalk, E. Frank, "Discriminating Against New Classes: One-Class versus Multi-Class Classification", AI 2008: Advances in Artificial Intelligence, Lecture Notes in Computer Science, Vol. 5360, 2008.
[6] T. Ahmed, B. Oreshkin, M. Coates, "Machine Learning Approaches to Network Anomaly Detection", 2nd USENIX Workshop on Tackling Computer Systems Problems with Machine Learning Techniques (SysML), Cambridge, MA, USA, 2007.
[7] S.X. Wu, W. Banzhaf, "The Use of Computational Intelligence in Intrusion Detection Systems: A Review", Applied Soft Computing, Vol. 10, Issue 1, 2010.
[8] M.T. Haque, A.M. Kashtiban, "Application of Neural Networks in Power Systems: A Review", International Journal of Electrical, Electronic Science and Engineering, Vol. 1, No. 6, 2007.
[9] Z. Liu, W. Gao, "Wind Power Plant Prediction by using Neural Networks", IEEE Energy Conversion Conference and Exposition, Raleigh, NC, USA, 2012.
[10] A. Kusiak, Z. Song, "Sensor Fault Detection in Power Plants", Journal of Energy Engineering ASCE, 2009.
[11] F. Amiri, C. Lucas, N. Yazdani, "Anomaly Detection using Neuro Fuzzy System", World Academy of Science, Engineering and Technology, Vol. 3, 2009.
[12] F. Ancona, A.M. Colla, S. Rovetta, R. Zunino, "Implementing Probabilistic Neural Networks", Neural Computing and Applications 7, Springer-Verlag London Ltd., 1998.
[13] S.N. Sivanandam, S. Sumathi, S.N. Deepa, "Special Networks", in Introduction to Neural Networks using MATLAB 6.0, New Delhi: Tata McGraw-Hill, pp. 311-314, 2006.
[14] M. Suljic, L. Banjanovic-Mehmedovic, I. Dzananovic, "Determination of Coal Quality using Artificial Intelligence Algorithms", Journal of Scientific and Industrial Research (JSIR), Vol. 72 (06), 2013.
[15] J. McCaffrey, "Understanding and Using K-Fold Cross Validation for Neural Networks", 2013.

