
Applied Soft Computing 13 (2013) 3628–3635

Contents lists available at SciVerse ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

A spiking neural network (SNN) forecast engine for short-term electrical load forecasting

Santosh Kulkarni*, Sishaj P. Simon, K. Sundareswaran
Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirapalli, Tamil Nadu, India

Article history:
Received 4 June 2012
Received in revised form 28 February 2013
Accepted 8 April 2013
Available online 24 April 2013

Keywords:
Spiking neurons
Spike response model (SRM)
ANN
Hybrid model

Abstract: Short-term load forecasting (STLF) is one of the planning strategies adopted in daily power system operation and control. Although many forecasting models have been developed over the years, the uncertainties present in the load profile significantly degrade their performance. These uncertainties are mainly due to the sensitivity of the load demand to varying weather conditions and to the consumption pattern over the month and day of the year. Therefore, the effect of these weather variables on the load consumption pattern is discussed. Based on the literature survey, artificial neural network (ANN) models are found to be an alternative to classical statistical methods in terms of the accuracy of the forecasted results. However, handling bulk volumes of historical data while maintaining forecasting accuracy is still a major challenge. Third-generation neural networks, such as spike-train models, which are closer to their biological counterparts, are recently emerging as robust models. This paper therefore presents a load forecasting system known as the SNNSTLF (spiking neural network short-term load forecaster). The proposed model has been tested on the database obtained from the Australian Energy Market Operator (AEMO) website for Victoria State.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

Power system operation, planning and the setting up of power generation infrastructure in any nation depend upon the growth in power consumption by its population. Due to the depletion of fossil fuels, the increase in population and the increase in per capita power consumption in the world, forecasting of electrical load has become one of the major areas of research. Load forecasting is generally carried out to assist planners in making strategic decisions with regard to unit commitment, hydro-thermal co-ordination, interchange evaluation, and security assessments. Electrical load forecasting is, in general, classified into three categories: short-term, medium-term and long-term forecasting [1]. The duration of these categories is not explicitly defined in the literature, so different authors use different time horizons to define them. Short-term load forecasting generally ranges from a few minutes to seven days [2]. Since the next day's power generation must be scheduled every day, short-term load forecasting (STLF) is necessary for the economic scheduling of generation capacity. Medium-term or intermediate load forecasting deals with predictions ranging from a few weeks to several months [3]; outage scheduling and maintenance of plants and networks come under these types of forecasts. Long-term forecasting, on the other hand, deals with forecasts longer than a year [3]. It is primarily intended for capacity expansion plans, capital investments, and corporate budgeting. These forecasts are often complex in nature due to several uncertainties such as political factors, the economic situation and per capita growth. Planning of new networks and extensions to the existing power system, for both the utility and consumers, requires long-term forecasts.

Short-term load forecasting is, however, considered a very difficult task. First, the load series is complex and exhibits several levels of seasonality: the load at a given hour depends not only on the load at the previous hour, but also on the loads at several past hours, and even on the loads of past days. Secondly, the consumption pattern depends on the prevailing weather conditions [4]. Also, due to power system deregulation, STLF has become more important: these forecasts are used by the energy management system (EMS) to establish operational plans for power stations and to plan transactions in the energy market. A good example of the importance of load forecasting accuracy is that a 1% increase in forecasting error caused an estimated increase of ten million pounds in operating costs for one electrical utility in the United Kingdom [5]. Thus, accurate STLF reduces the operating costs of electrical utilities in many areas. However, the most important issue in STLF is to extrapolate past load behavior while taking into account the effect of other influencing factors such as weather and day of the week. In a power system, temperature is usually the most dominant variable in driving the electricity

∗ Corresponding author. Tel.: +91 0431 2513298; fax: +91 0431 2500133.
E-mail addresses: santosh.kulkarni1988@gmail.com, vibrantsantu@gmail.com (S. Kulkarni), sishajpsimon@nitt.edu (S.P. Simon).

1568-4946/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.asoc.2013.04.007
Fig. 1. Hourly load variations with respect to temperature change – Victoria (January 2004). (Scatter plot of load (MW) against temperature (Celsius).)

demand. A sample scatter plot showing the variations in the consumption pattern with respect to average temperature is shown in Fig. 1. It is obvious that the relationship between temperature and load demand is non-linear. Previous research works and their results are prone to large errors when the load forecasting model is exposed to large weather forecast errors. Therefore, improving the quality of weather forecasting is an effective way to improve load forecasting accuracy [6–10].

As indicated above, STLF is an important area, and this is reflected in the literature in many traditional and non-conventional techniques, such as regression analysis [11–13], the time series approach [14,15], neural networks [16–22], fuzzy logic [23–26] and support vector machines [27,28]. Based on the literature survey, though various STLF models have been proposed, the scope for developing more efficient and accurate models remains a major challenge.

Section 1 gives an introduction to short-term load forecasting (STLF) and its importance in power system planning. The proposed work and the justification for applying SNN to STLF are discussed in Section 2. The details of the SNN, the step-by-step training algorithm (the spike propagation algorithm) and the setting of the control parameters are given in Section 3. In Section 4, the proposed methodology for the implementation of SNN for STLF is discussed. Section 5 investigates the results obtained during the training and testing of the SNN. The conclusion and the scope for further research are presented in Section 6.

2. Proposed work

The proposed work uses spiking neural networks for STLF. SNNs have evolved from the significant growth achieved in neurophysiology and in the understanding of information processing in the brain. It has been discovered that neural processing inside the brain is carried out in the form of spike trains and not real-valued signals. As a result, the artificial neuron closest to its biological counterpart is known as the spiking neuron, and a network based upon this type of neuron is known as a spiking neural network (SNN). A typical circuit model of the spiking neuron was first proposed in 1952 as the Hodgkin–Huxley model [29,30]. Spiking neurons are generally divided into three types: threshold, compartment and conductance based [31]. The threshold model is again divided into two types: the non-linear leaky integrate-and-fire model and the spike response neuron model (SRNM) [32].

In the proposed work, the SRNM has been used in a feedforward SNN. An understanding of neural coding is necessary for encoding information in terms of spike trains, and this is given in [33,34]. The methods of encoding real values include rate encoding, population encoding and temporal encoding [35]. In this work, the precise time of spikes in the temporal encoding scheme has been implemented. This scheme is chosen since temporally encoded neurons have better computational ability in a spatio-temporal context [35]. Temporally encoded SNNs have been used for pattern recognition, as proposed by Bohte in [36]. In [35], the authors successfully tested the performance of a temporally encoded SNN for electricity time series forecasting using an evolutionary learning technique. In this work, however, an attempt has been made to forecast electrical load using a gradient method: the spike propagation algorithm is similar to the gradient descent method used for the feedforward backpropagation neural network. The real-time dataset is obtained from the Australian Energy Market Operator (AEMO) website [37]. The data consist of the half-hourly load of each day from January 2004 to December 2009, pre-processed by normalizing the load between 0 and 1. The historical data of hourly temperature values are obtained from [38], and the historical data for daily solar radiation from [39].

3. Spiking neural network

The input and the output of a spiking neuron are described by a series of firing times known as the spike train; one firing time marks the instant the neuron fires a pulse. The potential of a spiking neuron is modeled by a dynamic variable and works as a leaky integrator of the incoming spikes, with newer spikes contributing more to the potential than older spikes. If this integrated sum of the incoming spikes is greater than a predefined threshold level, the neuron fires a spike. This makes the SNN a dynamic system, in contrast with the second-generation sigmoid neural networks, which are static, and enables it to perform computation on temporal patterns in a very natural biological manner. Though its likeness is close to the conventional backpropagation ANN, the difference lies in having many connections between individual neurons of successive layers, as in Fig. 2. Formally, between any two neurons 'e' and 'f', the input neuron 'e' receives a set of spikes with firing times t_e. The output neuron 'f' fires a spike only when its membrane potential exceeds its threshold value (θ). The internal state variable x_f(t) is described by the spike response function ε(t) weighted by

Fig. 2. Multiple delayed synaptic terminals between two neurons (e and f).
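The dynamics just described — delayed synaptic terminals, leaky integration of post-synaptic potentials, and a firing threshold — can be sketched in a few lines. The following is an illustrative Python sketch under my own naming, not the authors' implementation; it uses the spike response function ε(t) = (t/τ)e^(1−t/τ) that Section 3 gives as Eq. (3), and the threshold value and step sizes are placeholders.

```python
import math

def epsilon(t, tau):
    """Spike response function shaping the PSP: (t/tau)*exp(1 - t/tau), zero for t <= 0."""
    if t <= 0:
        return 0.0
    return (t / tau) * math.exp(1.0 - t / tau)

def membrane_potential(t, spikes, weights, delays, tau):
    """Leaky-integrator state: weighted sum of delayed PSPs over all
    synaptic terminals (one weight and one delay per terminal)."""
    return sum(w * epsilon(t - ts - d, tau)
               for ts in spikes
               for w, d in zip(weights, delays))

def firing_time(spikes, weights, delays, tau, theta, t_max=50.0, dt=0.1):
    """First time the potential crosses the threshold theta, or None if it never does."""
    t = 0.0
    while t <= t_max:
        if membrane_potential(t, spikes, weights, delays, tau) >= theta:
            return t
        t += dt
    return None
```

Since ε(t) peaks at t = τ with value 1, each terminal contributes at most its weight to the potential, and terminals with different delays spread one input spike over time — the mechanism Fig. 2 depicts.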
the synaptic efficacy, i.e., the weight between the two neurons, as given in Eq. (1):

x_f(t) = Σ_{k=1}^{m} W_ef^k Y_e^k(t)   (1)

where m is the number of synaptic terminals between the two neurons e and f, W_ef^k is the weight of the sub-connection k, and Y gives the unweighted contribution of all spikes to the synaptic terminal, as given in Eq. (2):

Y_i^k(t) = ε(t − t_i − d^k)   (2)

where d^k is the delay between the two nodes in the k-th synaptic terminal and ε(t) is the spike response function shaping the post-synaptic potential (PSP) of a neuron, given in Eq. (3):

ε(t) = (t/τ) e^(1−t/τ)   (3)

where τ is the membrane potential decay time constant.

3.1. Training algorithm for SNN

The SNN is a three-layer network with a single hidden layer, as shown in Fig. 3. The parameters which affect the performance of the proposed SNN are discussed in Section 3.2. The step-by-step procedure of the spike propagation algorithm [37] is given below.

Fig. 3. Architecture of a multilayer spiking neural network. (Input layer, index i = 1…p; hidden layer, index h = 1…q; output layer, index j = 1…n; t_i denotes the input spikes and t_j the output spikes.)

Step 1: Prepare an initial dataset. The dataset is normalized between 0 and 1.
Step 2: Set the weights to small random values. Initialize the SNN parameters such as the membrane potential decay time constant (τ) and the learning rate (α).
Step 3: For the proposed SNN architecture, with input layer (i), hidden layer (h) and output layer (j), do steps 4–8 'itt' times (itt = number of iterations). The network indices represent the numbers of neurons, where p = number of neurons in the input layer; q = number of neurons in the hidden layer; n = number of neurons in the output layer; m = number of synaptic terminals between any two neurons of successive layers.
Step 4: For each pattern in the training set, apply the input vector to the network input.
Step 5: Calculate the internal state variable of the hidden layer neurons, given by Eq. (4):

x_h(t) = Σ_i Σ_{k=1}^{m} W_ih^k Y_i^k(t)   (4)

Step 6: Calculate the network output, given by Eq. (5):

x_j(t) = Σ_h Σ_{k=1}^{m} W_hj^k Y_h^k(t)   (5)

Step 7: Calculate the error, the difference between the actual and the desired spike times:

E = 0.5 Σ_{j=1}^{n} (t_j^a − t_j^d)²   (6)

where t_j^a is the actual output spike, t_j^d is the desired output spike and n is the number of neurons in the output layer.
Step 8: Adjust the weights of the network so as to minimize the training error. Then repeat steps 4–8 for each pair of input and output vectors of the training set until the error for the entire system is acceptably low. The equations for the change in the weights between the hidden and output layers and between the input and hidden layers are given below. The change in the weights from the hidden layer (h) to the output layer (j) is given as

ΔW_hj^k = −α δ_j Y_h^k(t_j^a)   (7)

where δ_j = (t_j^d − t_j^a) / Σ_{k,h} W_hj^k (∂Y_h^k(t_j^a)/∂t_j^a)   (8)

The change in the weights from the input layer (i) to the hidden layer (h) is given as

ΔW_ih^k = −α δ_h Y_i^k(t_h^a)   (9)

where δ_h = [Σ_{j=1}^{n} δ_j Σ_k W_hj^k (∂Y_h^k(t_j^a)/∂t_j^a)] / [Σ_{k,i} W_ih^k (∂Y_i^k(t_h^a)/∂t_h^a)]   (10)

3.2. Setting of control parameters

The proposed SNN has the following adjustable parameters: the number of hidden nodes (h), the weights (W_ih and W_hj), the learning rate (α) and the decay time constant (τ). As with many other forecast methods, the accuracy of the proposed SNN depends on the appropriate adjustment of its parameters. The mean square error (MSE) is considered as the performance index for the adjustment of the control parameters. The range for each parameter is selected as W ∈ [0.01, 3], number of hidden layer neurons h ∈ [12, 30], number of synaptic connections k ∈ [10, 20], decay time constant τ ∈ [4, 8], and learning rate α ∈ [0.0005, 0.01]. Initially, training samples are constructed for a period of 300 days. Keeping the other parameters constant, within the selected neighborhood, the weights from the input to the hidden layer and from the hidden to the output layer are varied, and the training process is stopped when the MSE obtained is a minimum. Then, with the weights set and all other parameters kept constant, the number of hidden layer neurons is varied. The number of neurons in the hidden layer determines the network's learning capability, and its selection is an important issue in network design. So the number of hidden layer neurons is varied within the given neighborhood, and trials are carried out until the least MSE is obtained. The number of synaptic connections is another important parameter for obtaining reliable results from the network; the number of connections between two neurons is varied for each trial, and the setting for which the least MSE is obtained is selected. For the feedforward network used in this problem, the delay value in each synaptic connection is increased in steps of 2 ms; note that the delay in each terminal need not be the same. The next parameter to be set, in order of precedence, is the decay time constant (τ), which depends on the coding interval (Δt). The coding interval (Δt) is the difference between the minimum and maximum values of the normalized input spikes, which is problem dependent. τ is varied for each trial, but the best result is obtained when the coding interval equals the decay time constant. The last parameter to be tuned is the learning rate α. For high
Fig. 4. Block diagram of the proposed SNNSTLF engine. (Phase 1: SNN1, with training error E1, is trained on the historical weather database using the actual maximum and minimum temperatures and tested with the forecasted maximum and minimum temperatures. Phase 2: SNN2, with training error E2, takes the binary inputs and the historical load database and produces the forecasted load pattern.)

values of α, the MSE obtained after SNN training is very large and does not converge; therefore, small values are considered.

However, it should be noted that minimizing the MSE during the training phase of the SNN may lead to an overfitting problem, whereby the SNN begins to memorize the training samples instead of learning them. When overfitting occurs in an SNN, the MSE continues to decrease and the training process seems to progress, while in fact the generalization capability of the SNN degrades and it loses its forecasting ability for the unseen forecast samples. To overcome this problem, the generalization performance of the SNN should also be monitored during its training phase. Since the forecast error is not available in the training phase, the validation error is used as an approximation. For SNN learning, the validation samples are a subset of the training period that is not used for the adjustment of the control parameters of the SNN. Thus, the validation samples can give a better estimate of the error for the forecast samples. Whenever the validation error increases, the generalization performance of the SNN begins to degrade; this indicates the occurrence of overfitting, and the training process of the SNN by the spike propagation algorithm should be terminated at that iteration. Thus, the iteration with the minimum validation error gives the final results of the SNN training.

4. Implementation of SNN for STLF

The design of an SNN model to perform load forecasting includes the determination of the model structure and the selection of the training algorithm and input variables, which significantly influence the network performance. The architecture and the selection of the input variables are discussed in Sections 4.1.1 and 4.1.2.

4.1. Proposed SNN architecture for the STLF problem

The implementation of the proposed model is divided into two phases. The block diagram shown in Fig. 4 consists of two SNNs in a sequential manner. E1 and E2 represent the training errors of SNN1 and SNN2, respectively. In Phase 2, the forecasted temperature values are given as input to SNN2 to forecast the load pattern.

4.1.1. Phase 1

In Phase 1, using SNN1, a day-ahead hourly temperature profile is forecasted. The input data consist of the hourly temperatures of the previous 2 days (12 neurons), the hourly humidity values of the previous 2 days (12 neurons), the maximum solar radiation of the previous day (1 neuron) and the corresponding month encoded in binary form. The twelve months of the yearly calendar are encoded in binary form (0001 to 1100); for example, the binary representation 0010 represents the month of February (4 neurons). The target values consist of the corresponding hourly temperatures on the day of forecast (6 neurons). Therefore, SNN1 consists of 29 input nodes and 6 output nodes, respectively. Training is carried out 4 times, with the input data of the 24-h day divided into 4 parts. The first part consists of the hourly weather variables from 12 A.M. to 5 A.M.; similarly, the other 3 parts consist of the data for the remaining time horizon. For each of the four input formats, the temperatures for 6 h are forecasted; therefore, all 24 hourly temperatures are forecasted. The maximum and minimum temperatures of the forecasted day are recorded and used as test inputs for SNN2. The above scheme of training and testing in Phase 1 is carried out for the city of Melbourne in Victoria, and the same procedure is repeated for the city of Geelong in Victoria. The objective of the forecast engine is to forecast the electrical load for the state of Victoria in Australia; therefore, to obtain accurate forecast results, the weather data for the 2 cities are considered in Phase 1. Finally, the maximum and minimum forecasted temperatures for the 2 cities are recorded, and these act as the 4 test inputs for SNN2 in Phase 2. The best combination of parameters for SNN1 is W_ih ∈ (1, 2), W_hj ∈ (2, 3), h = 15, k = 12, τ = 7 ms and α = 0.001.

The cross-correlation analysis between temperature and humidity is shown in Fig. 5. It enables the choice of input variables with some time lag which affect the other variable. As seen in the figure, the cross-correlation coefficients obtained for time lags of 1–5 h are highly negative; therefore, humidity values for 6 h are chosen as inputs to the forecasting model. A similar analysis performed on the maximum temperature and maximum solar radiation is shown in Fig. 6. It is observed that the coefficient obtained at zero lag is greater than any other coefficient; therefore, the solar radiation of the previous day is taken as an input variable for SNN1. The form of the correlation function adopted here follows that of Box, Jenkins, and Reinsel [40].

Fig. 5. Cross correlation function between temperature and humidity (cross-correlation coefficient versus lag).

Fig. 6. Cross correlation function between temperature and solar radiation (cross-correlation coefficient versus lag).

4.1.2. Phase 2

In Phase 2, a day-ahead half-hourly load profile is forecasted. For example, if the load pattern of Monday is given as the test input, the load pattern of Tuesday is forecasted. Producing accurate load forecasts requires the best combination of input variables. Here, apart from the historical load data, the factors which have a significant impact on the consumption pattern, such as the day of the week, weather variables and the holiday effect, are chosen as inputs to the network. The topology of SNN2 included the following input variables:

One-day half-hourly lagged load: (48 neurons)
One-day lagged temperature values (minimum and maximum): (4 neurons)
Forecasted temperature values obtained from Phase 1 (minimum and maximum): (4 neurons)
Maximum and minimum demand in the last 24 h: (2 neurons)
Average temperature value in the last seven days: (2 neurons)
Day of the week encoded in binary form (001 to 111); for example, the binary representation 010 represents Tuesday: (3 neurons)
Month number (0001 to 1100): (4 neurons)
Holiday effect (0 – holiday; 1 – working day): (1 neuron)

The corresponding target training data consist of the half-hourly load on the day of forecast. Therefore, the SNN2 model has 68 input neurons and 48 output neurons, respectively. The following set of control parameters is observed to provide the best results during the training of the SNN2 model: W_ih ∈ (2, 3), W_hj ∈ (1, 2), h = 10, k = 8, τ = 5 ms and α = 0.0006.

5. Results and discussion

5.1. Performance evaluation

The accuracy of the results in this paper is evaluated on the basis of three error indices: the mean absolute percentage error (MAPE), the normalized mean square error (NMSE) and the error variance (EV). MAPE is defined by the following equation:

MAPE = (1/NH) Σ_{i=1}^{NH} |(P_i − A_i)/A_i|   (11)

NMSE [41] is defined as

NMSE = (1/(Δ² NH)) Σ_{i=1}^{NH} (P_i − A_i)²   (12)

where Δ² = (1/(NH − 1)) Σ_{i=1}^{NH} (A_i − A_Ave)²   (13)

EV [41] is defined as

σ² = (1/NH) Σ_{i=1}^{NH} (|(P_i − A_i)/A_i| − MAPE)²   (14)

where P_i and A_i are the i-th predicted and actual values, respectively, A_Ave is the mean of the actual values and NH is the total number of predictions.

5.2. Numerical results

The performance of the SNN is compared with the ANN and Hybrid models presented in [42] for short-term load forecasting. The ANN model in [42] is a 3-layer feedforward network trained using the Levenberg–Marquardt approach; the hyperbolic tangent function is used in both the hidden and output layers, since it can produce positive and negative values, which can speed up the training process. The Hybrid method in [42] is an adaptive two-stage network with a self-organizing map (SOM) and a support vector machine (SVM). The data from the AEMO website [37] from October 2008 to March 2009 are used to test the proposed SNNSTLF model. Table 1 gives the average mean absolute error (MAE) and MAPE obtained for each month from October 2008 to March 2009. From Table 1, it is obvious that the proposed forecast engine is able to perform better than the ANN and Hybrid models [42]: the average MAPE given in the last row of Table 1 is 2% less than that of the Hybrid model and 19.76% less than that of the ANN model in [42].

Table 2 compares the MAPE and MAE obtained when the actual and the forecasted temperature values are fed as inputs to the SNN2 model. It is to be noted that the actual temperature values cannot be obtained for real-time load forecasting; only the forecasted temperatures can be used. However, to investigate the performance of SNN2, the actual values of the temperature are

Table 1
Monthly performance comparison.

Month            SNN (forecasted temperature   Hybrid [42]        ANN [42]
                 as input to SNN2)
                 MAE       MAPE                MAE      MAPE      MAE      MAPE

October 2008     123.56    2.11                121.83   2.15      134.87   2.57
November 2008    118.58    2.06                123.50   2.12      140.52   2.63
December 2008    119.25    2.09                116.34   2.17      126.39   2.49
January 2009     129.26    2.10                126.73   2.14      168.04   2.81
February 2009    111.54    1.92                119.07   1.95      139.68   2.37
March 2009       109.56    1.90                116.49   1.94      123.21   2.29

Average          118.62    2.03                120.66   2.08      138.79   2.53


taken from the historical data and the errors are calculated. From Table 2, it is observed that improvements of 4% in MAPE and 7.77% in MAE are achieved if the actual temperature is used as the input to the SNN2 model. This analysis shows that if the accuracy of the forecasted temperature is increased, then the electrical load forecasts can be enhanced to a certain extent.

Table 2
Average monthly performance comparison.

Month            SNN (forecasted temperature   SNN (actual temperature
                 as input to SNN2)             as input to SNN2)
                 MAE       MAPE                MAE      MAPE

December 2008    119.25    2.09                111.86   1.98
January 2009     129.26    2.10                113.29   2.07
February 2009    111.54    1.92                105.52   1.84
March 2009       109.56    1.90                100.78   1.79

Average          113.62    2.00                104.86   1.92

A comparative analysis using EV and NMSE is tabulated in Table 3. It is observed from the table that the EV and NMSE obtained are lower when the actual historical temperature values are fed as inputs to the SNNSTLF engine; however, the values obtained for the two cases do not differ greatly. The results in Tables 2 and 3 show that the performance of the forecast engine for the months of February and March is much better than that obtained for the month of January. This is mainly attributed to the abnormally high peak load demands caused by the high temperatures in the month of January.

Table 3
Average monthly performance comparison.

Month            SNN (forecasted temperature   SNN (actual temperature
                 as input to SNN2)             as input to SNN2)
                 EV        NMSE                EV       NMSE

December 2008    0.172     0.0689              0.164    0.0675
January 2009     0.198     0.0784              0.189    0.0725
February 2009    0.185     0.0526              0.174    0.0508
March 2009       0.182     0.0537              0.177    0.0498

To understand the influence of the weather variables on the load pattern, SNN2 is also trained and tested using solely the historical load data. The results evaluated for each day of the week for the months of January and February are tabulated in Tables 4 and 5, respectively. The mean absolute percentage error (MAPE) and maximum absolute percentage error (Max. APE) calculated with the incorporation of the weather variables for all the Mondays in January are 74% and 8.94% lower, respectively, than when the weather variables are excluded.

Table 4
Performance evaluation on all weekdays – January 2009.

                 Influence of weather variables
                 Including weather variables   Excluding weather
                 (Phase 1 and Phase 2)         variables (Phase 2)
Day              MAPE      Max. APE            MAPE     Max. APE

Monday           2.08      8.15                8.03     8.95
Tuesday          2.02      7.46                8.07     8.07
Wednesday        2.00      7.23                6.92     9.35
Thursday         2.04      8.45                7.36     8.31
Friday           2.03      8.25                6.49     9.23
Saturday         2.21      8.78                9.81     19.37
Sunday           2.28      9.15                10.83    24.65

The error analysis carried out for the month of February is presented in Table 5. It is found that the average MAPE (incorporating temperature) obtained for all the Mondays in February is 6% lower than the average MAPE obtained in January. The MAPE and Max. APE computed for Saturdays and Sundays are higher than those of the other weekdays for both January and February. The MAPE and the maximum absolute percentage error for the other typical weekdays (Tuesday to Friday) are also presented in Tables 4 and 5. From the above analysis, it is concluded that the accuracy of the temperature forecasts will indirectly enhance the performance of the load forecasting model.

Table 5
Performance evaluation on all weekdays – February 2009.

                 Influence of weather variables
                 Including weather variables   Excluding weather
                 (Phase 1 and Phase 2)         variables (Phase 2)
Day              MAPE      Max. APE            MAPE     Max. APE

Monday           1.94      8.45                5.95     9.65
Tuesday          1.96      7.26                5.98     7.52
Wednesday        1.87      7.33                5.78     9.17
Thursday         1.82      7.33                5.78     9.17
Friday           1.90      8.18                6.07     8.28
Saturday         1.95      8.84                9.21     15.37
Sunday           2.03      8.38                9.62     19.28

Fig. 7. Actual and forecasted half-hourly load, 5 January 2009 (load (MW) versus time (half hour)).

Fig. 8. Actual and forecasted half-hourly load, 12 January 2009 (load (MW) versus time (half hour)).

Fig. 9. Actual and forecasted half-hourly load, 25 February 2009 (load (MW) versus time (half hour)).
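The three error indices defined in Eqs. (11)–(14) of Section 5.1 are straightforward to compute. The sketch below is a plain-Python illustration with my own variable names (P = predicted values, A = actual values), not the authors' code:

```python
def mape(P, A):
    """Mean absolute percentage error, Eq. (11); returned as a fraction
    (multiply by 100 for a percentage)."""
    NH = len(A)
    return sum(abs(p - a) / abs(a) for p, a in zip(P, A)) / NH

def nmse(P, A):
    """Normalized mean square error, Eq. (12), using the normalizing
    variance delta^2 of Eq. (13)."""
    NH = len(A)
    a_ave = sum(A) / NH
    delta2 = sum((a - a_ave) ** 2 for a in A) / (NH - 1)  # Eq. (13)
    return sum((p - a) ** 2 for p, a in zip(P, A)) / (delta2 * NH)

def error_variance(P, A):
    """Error variance, Eq. (14): spread of the absolute percentage
    errors around their mean (the MAPE)."""
    NH = len(A)
    m = mape(P, A)
    return sum((abs(p - a) / a - m) ** 2 for p, a in zip(P, A)) / NH
```

For example, with P = [110, 114] and A = [100, 120], mape returns 0.075, i.e., 7.5%. Per-weekday figures such as those in Tables 4 and 5 follow by applying these functions to the forecasts grouped by day of the week.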
3634 S. Kulkarni et al. / Applied Soft Computing 13 (2013) 3628–3635

6.5 0.18

Absolute Percentage Error(%)


0.16 SNN(including Weather Variables)
6 0.14
SNN (excluding WeatherVariables)
Load (MW)

0.12

5.5 0.1
Actual Load 0.08
Forecasted Load
5 0.06
0.04
0.02
4.5
0 5 10 15 20 25 30 35 40 45 50 0
Time (Half Hour) 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
Time (Half Hour)
Fig. 10. 23 March 2009.
Fig. 12. 3 January 2009.

Figs. 7–10 compare the half-hourly one-day-ahead load pattern with the actual load pattern when forecasted temperatures are considered as inputs to the proposed SNNSTLF. It is observed from these figures that the forecasted curve closely follows the actual load curve. In Figs. 7 and 8, the MAPE calculated is 2.12% and 2.04%, respectively. In Fig. 7, the MAPE obtained for the first 12 h (24 observations) is 1.91%, and it is 2.33% for the remaining time horizon. The actual peak load is 6600 MW, whereas the forecasted peak demand is 6515 MW. The average error obtained in Fig. 8 is 2.03%, 1.94% and 2.03% for the first 8 h (16 observations), the next 8 h and the remaining time horizon, respectively. The actual peak load demand observed in Fig. 8 is 6652 MW, whereas the peak load obtained from the forecasted curve is 6510 MW. Higher deviations from the actual load pattern are observed from 12 P.M. to 6 P.M. The maximum and minimum absolute errors obtained during the forecast period in Fig. 8 are 324.7 MW and 3.9 MW, respectively.

The MAPE computed for the half-hourly one-day-ahead forecasts in Figs. 9 and 10 is 2.01% and 1.92%, respectively. The minimum absolute error obtained for the forecasts shown in Figs. 9 and 10 is 0.002 MW, and the maximum absolute percentage error is 4.23% and 4.01%, respectively. In Fig. 11, the forecasted load curves with and without the weather variables are plotted; the curve that includes the weather variables follows the actual load pattern more closely than the one that excludes them. Fig. 12 shows the variation in the absolute percentage error for the forecasted load patterns of Fig. 11, and indicates that the percentage error is significantly reduced when the weather variables are incorporated in SNN2. All the load values on the y-axes of Figs. 7–11 are scaled down by a factor of 1000, i.e., a load of 6500 MW is plotted as 6.5.

Fig. 11. 3 January 2009 (actual load and forecasted load with and without weather variables; Load (MW) against Time (Half Hour)).

The total average time taken to train the SNNSTLF model is 8 min. All the SNN code is developed in MATLAB Version 2007b. The computer used has an Intel Core i5 processor with a 2.53 GHz CPU and 4 GB of RAM, running Windows 7 Home Premium.

6. Conclusion

A new forecast engine based on SNN for short-term load forecasting is presented. The results obtained from the proposed forecast engine are compared with existing models available in the literature. The SNNSTLF model handles the historical data in an efficient manner with increased training accuracy, and it can effectively capture the non-linear relationship between the load pattern and several other factors such as temperature, humidity, day of the week and month of the year. This paper also validates the improvement in performance of third-generation neural networks over second-generation neural networks such as the Levenberg–Marquardt back-propagation neural network and the support vector machine based on a self-organizing map (hybrid model). The proposed model can be used in the power system operation and planning environment, and it can be extended to other power system forecasting applications such as electricity price forecasting, wind power generation forecasting and solar radiation forecasting.
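For readers reproducing the accuracy figures, the MAPE and the maximum/minimum absolute errors quoted in the results follow directly from the paired actual and forecasted load series. The sketch below (Python, with made-up illustrative loads rather than the AEMO data used in the paper) shows the arithmetic:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (%) over a forecast horizon."""
    errors = [abs((a - f) / a) for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical half-hourly loads in MW, for illustration only
# (a real one-day-ahead horizon has 48 half-hour observations;
# these are NOT the values behind Figs. 7-12).
actual   = [6400.0, 6500.0, 6600.0, 6450.0]
forecast = [6350.0, 6480.0, 6515.0, 6500.0]

abs_errors = [abs(a - f) for a, f in zip(actual, forecast)]
print(f"MAPE: {mape(actual, forecast):.2f}%")
print(f"max |error|: {max(abs_errors):.1f} MW")  # analogous to the 324.7 MW peak deviation
print(f"min |error|: {min(abs_errors):.1f} MW")
```

Splitting the 48 observations into sub-horizons (e.g., the first 24 vs. the remainder) and applying `mape` to each slice reproduces the per-period error breakdown reported for Figs. 7 and 8.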