
Photovoltaic Power Forecasting using LSTM on Limited Dataset

Vidisha De
School of Electronic Engineering, Indian Institute of Technology
vidisha.14je000632@ece.ism.ac.in

T.T. Teo
School of Electrical and Electronics Engineering, Newcastle University, Singapore
t.t.teo@ncl.ac.uk

W.L. Woo
School of Electrical and Electronics Engineering, Newcastle University, Singapore
lok.woo@ncl.ac.uk

T. Logenthiran
School of Electrical and Electronics Engineering, Newcastle University, Singapore
t.logenthiran@ncl.ac.uk

Abstract—This paper aims to forecast photovoltaic power, which is beneficial for grid planning and aids in anticipating and predicting a shortage. Forecasting of photovoltaic power using a Recurrent Neural Network (RNN) is the focus of this paper. The training algorithm used for the RNN is Long Short-Term Memory (LSTM). To ensure that the amount of energy harvested from the solar panel is sufficient to match the demand, forecasting its output power helps to anticipate and predict times of shortage. However, due to the intermittent nature of photovoltaics, accurate photovoltaic power forecasting can be difficult. Therefore, the purpose of this paper is to use long short-term memory to obtain an accurate forecast of photovoltaic power. In this paper, Python with Keras is used to implement the neural network model. Simulation studies were carried out on the developed model, and simulation results show that the proposed model can forecast photovoltaic power with high accuracy.

Keywords—Photovoltaic, forecasting, Recurrent neural network, LSTM, Energy management system, Renewable energy resources

I. INTRODUCTION

Natural resources such as coal, oil and natural gas are heavily relied on to produce energy around the world. At the current rate of energy consumption, fossil fuels may deplete drastically, leading to a severe energy crisis. Constant burning of these resources not only harms all living things on the planet but also contributes to global warming. Solid toxic wastes such as mercury, lead and chromium are produced during coal mining and petroleum refinement [1]. Combustion releases carbon dioxide, sulfur dioxide, nitrogen oxides and mercury into the atmosphere. These pollutants not only cause death and respiratory illness in humans but also lead to acid rain that damages buildings, destroys ecosystems and depletes the ozone layer [2]. Moreover, such resources are finite and bound to run out some day. With the implementation of renewable energy, the usage of fossil fuels and natural gas can be reduced. Photovoltaic power is obtained by converting irradiance into electricity using solar panels. Photovoltaic cells convert sunlight into electricity using a semiconductor that absorbs solar radiation and emits electrons, thereby generating electricity [3].

The most important benefit of photovoltaic power is that it is a renewable energy source that can regularly be harnessed. However, there are some drawbacks. For a start, there is the high initial cost of purchasing and installing the system [4]. The location of the solar panels is a major factor in generating electricity: sites with high vegetation, unfavorable landscape or surrounding high-rise buildings are not suitable for installation. Solar panels installed in countries that experience all four seasons may not harness as much energy as those in countries with summer all year round. This factor is crucial for countries with long winters, where nights are longer than days and solar energy cannot be collected at night. Forecasting solar energy has been receiving a huge amount of attention, as a more accurate prediction can help avoid costs. However, there are challenges in forecasting solar energy. As most renewable sources are intermittent by nature, the power produced by the PV system fluctuates, affecting the stability of the grid.

Data related to photovoltaic power, such as ambient temperature, panel temperature and irradiance, are readily available in large quantities with the widespread use of data-logging devices [5]-[7]. However, the main drawbacks of this huge influx of data are higher computation time and data integrity. Machine learning methods do not scale well with data, as the complexity grows quickly with the size of the dataset, and a larger dataset may contain more noise. Due to these drawbacks, the proposed methodology uses only one month of data to train and determine the architecture of the LSTM model. The proposed LSTM model can achieve an accurate forecast with the limited dataset and is able to compute in a reasonable amount of time.

This paper presents a methodology for forecasting photovoltaic power using an RNN with LSTM on a limited dataset. The rest of this paper is organized as follows. Section II presents the background information for the RNN and LSTM, Section III presents the proposed LSTM architecture, Section IV presents the simulation and results, Section V discusses the results, and finally, the paper is concluded in Section VI.

II. BACKGROUND INFORMATION

A. RNN Architecture

The model proposed in this paper is a Single Layer Feed-forward Network (SLFN) that consists of an input layer, a hidden layer and an output layer. A Recurrent Neural Network (RNN) works slightly differently from a feedforward neural network by having a feedback loop. Each layer contains many nodes, and weighted lines interconnect the nodes between layers.

Each input node is fully interconnected with the nodes in the hidden layer; these interconnections are called the input layer weights. Nodes in each hidden layer are also fully interconnected with all the nodes in the output layer; these interconnections are called the output layer weights. The weights can be adjusted using different training algorithms.

A recurrent network can be viewed as multiple copies of the same network, each passing a message to the next. In other words, it learns from sequences; this way the recurrent network can remember its previous context as it progresses forward to the next time step.

Fig. 1. Feedforward Neural Network Model

Five major parameters have repeatedly appeared in the literature and require user intervention:

- Size and division of dataset [5], [6]
- Number of hidden layers and hidden nodes [7], [8]
- Activation function [5]-[7], [10]
- Number of input parameters [6], [8], [9]
- Performance evaluation [11]

B. Long Short-Term Memory (LSTM)

LSTM helps to preserve the error by backpropagating it through the neural network's time steps and layers. An LSTM is composed of various gates that control information about the previous state [12]. This information is either stored in, written to or read from a cell, like a computer's memory. The cell decides whether to store the incoming information, and when to read, write and erase, via the opening and closing of its gates. The gates act based on the signals they receive and block or pass on information according to its strength and importance, filtering with their own sets of weights. These weights are similar to those that modulate the input and hidden states and are adjusted through the network's learning process. In other words, the cells learn when to permit data to enter, exit or be erased by making successive guesses, backpropagating the errors and adjusting the respective weights with the gradient descent algorithm.

Fig. 2. LSTM Model

Fig. 2 shows the LSTM model comprising the forget gate $f_t$, input gate $i_t$, output gate $o_t$, and cell state $C_t$:

$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \qquad (1)$$
$$C_t = f_t C_{t-1} + i_t \tilde{C}_t \qquad (2)$$
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \qquad (3)$$
$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \qquad (4)$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \qquad (5)$$
$$h_t = o_t \tanh(C_t) \qquad (6)$$

where $W$, $h$, $x$ and $b$ are the weight, output, input and bias respectively, the subscripts $t$, $f$, $C$ and $o$ denote the time step, forget gate, cell state and output gate respectively, and $\sigma$ is the activation function.

The forget gate in (1) decides what information is kept and what is discarded from the cell state in (2). This decision is made by the logistic function, which outputs a value between 0 and 1: a value of 1 represents "completely keep this" and 0 represents "completely forget this". Equation (3) shows the calculation for the input gate, which decides the input values to be updated, while (4) produces the candidate values $\tilde{C}_t$ added to the cell state in (2). The final state is the output state shown in (6): a sigmoid in $o_t$ decides which parts of the cell state are chosen for output, and the cell state is passed through a hyperbolic tangent, scaling the values to between -1 and 1, then multiplied by $o_t$ to give $h_t$.
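To make equations (1)-(6) concrete, the following is a minimal NumPy sketch of a single LSTM forward step. It is an illustration only, not the paper's implementation (the paper relies on Keras' built-in LSTM layer); the function name `lstm_step`, the weight layout and the dimensions are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, equation (7)."""
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing equations (1)-(6).

    x_t    : input vector at time t
    h_prev : previous hidden state h_{t-1}
    c_prev : previous cell state C_{t-1}
    W, b   : dicts of weight matrices and bias vectors for the
             forget (f), input (i), candidate (c) and output (o) gates
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])       # forget gate, eq. (1)
    i_t = sigmoid(W['i'] @ z + b['i'])       # input gate, eq. (3)
    c_tilde = np.tanh(W['c'] @ z + b['c'])   # candidate state, eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde       # new cell state, eq. (2)
    o_t = sigmoid(W['o'] @ z + b['o'])       # output gate, eq. (5)
    h_t = o_t * np.tanh(c_t)                 # new hidden state, eq. (6)
    return h_t, c_t

# Illustrative dimensions: 5 inputs and 100 hidden nodes, as in Section III.
rng = np.random.default_rng(0)
n_in, n_hid = 5, 100
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in 'fico'}
b = {k: np.zeros(n_hid) for k in 'fico'}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

Stepping this function over a sequence of 15-minute samples reproduces the recurrence that lets the network carry context forward between time steps.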
III. PROPOSED RNN ARCHITECTURE

A. Size and Division of Dataset

The dataset is divided into a training set and a test set. The training set is used to train the neural network, while the testing set is used to evaluate its performance. The size of the dataset has an impact on the performance of the model, and different divisions of the dataset will produce different results for both training and testing. There is no exact method to determine the optimum size of the dataset. However, increasing the amount of available data generally improves the performance of the neural network, as a smaller dataset may not contain enough information to uncover the underlying relationship between the input and output. Intuitively, the size of the dataset should be representative of the population, e.g., using one year of historical data to forecast the next year.

The dataset was acquired from 8th May 2014 to 6th June 2014 and contains:

- Ambience Temperature, Degree Celsius, °C
- Panel Temperature, Degree Celsius, °C
- Accumulated Daily Energy, Joules, J
- Irradiance, Watt per square meter, W/m²
- Power, Watt, W

The five sampled variables will be used as the input, and power will be used as the output. The data are sampled approximately every 15 minutes. In time-series prediction, all values are required to be sampled at a fixed interval.

TABLE I: DIVISION OF DATASET

Dataset   Training Days          Testing Days
01        24th May to 1st June   2nd June to 6th June
02        16th May to 1st June   2nd June to 6th June
03        8th May to 1st June    2nd June to 6th June

B. Number of Hidden Layers and Hidden Nodes

Determining the optimal number of hidden nodes is an open question. The number of hidden nodes greatly impacts the performance of the model [15]. In most cases, the number of hidden nodes is determined heuristically. Too many hidden nodes cause overfitting, while too few cause underfitting: overfitting increases the complexity of the neural network, and underfitting reduces its generalization capability. Determining a suitable number of hidden neurons to prevent overfitting is therefore critical in function approximation using an RNN. The common methods to determine whether a certain number of hidden neurons is optimal are cross-validation and early stopping [16], [17]. In [8], the number of nodes is chosen heuristically to determine the optimum.

C. Activation Function

The purpose of an activation function is to introduce non-linearity into the network so that it is capable of learning nonlinear problems. Choosing a suitable activation function is extremely important when designing a neural network, and the type of activation function used is problem dependent [5], [8]. This paper considers two activation functions:

- Logistic function:

$$f(x) = \frac{1}{1 + e^{-x}} \qquad (7)$$

- Hyperbolic tangent:

$$f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \qquad (8)$$

D. Input Variables

The number of input variables is important in determining the overall performance of the neural network. If there are too few variables, the neural network may miss out on important information, while too many variables may introduce too much noise. Additional input variables may improve the performance of the model [8], [9], but this does not always hold true. Conversely, it has been shown that the model yielding the best performance does not necessarily come from the one with the most input variables.

E. Performance Measure

The forecasting error of the RNN can be measured using several methods, such as the mean absolute percentage error (MAPE), mean square error (MSE) or root mean square error (RMSE). This paper uses the RMSE to measure the performance of the proposed RNN:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\hat{y}_n - y_n\right)^2} \qquad (9)$$

where $\hat{y}$ is the estimated output of the network and $y$ is its actual output.

IV. SIMULATION AND RESULTS

Four simulations are discussed in this paper to analyze and compare the performance of the model:

Simulation 1: Division of Dataset
Simulation 2: Number of Hidden Nodes
Simulation 3: Activation Function
Simulation 4: Number of Input Variables

Each simulation generates an RMSE for both the training and testing phases, which is used to evaluate the performance of the neural network. The training and testing datasets from Table I are used in the RNN simulations. The dataset fed into the training phase is used to adjust the weights, which are then carried over to the testing phase to assess the accuracy of the power predicted by the LSTM.
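As a concrete illustration of the setup described in Sections III and IV, the following is a minimal sketch of how such a model could be defined in Keras. The paper states Python 3.6 with Keras on TensorFlow but does not publish its source code, so the layer arguments, helper names and the commented data-loading step are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features, n_hidden=100, activation='sigmoid'):
    """Single-hidden-layer LSTM as described in Section III."""
    model = keras.Sequential([
        layers.LSTM(n_hidden, activation=activation,
                    input_shape=(1, n_features)),  # one 15-minute step per sample
        layers.Dense(1),                           # forecast power output
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

def rmse(y_true, y_pred):
    """Root mean square error, equation (9)."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Hypothetical arrays shaped (samples, timesteps, features) and (samples,):
# X_train, y_train, X_test, y_test = load_dataset()  # e.g. Set 3 from Table I
# model = build_model(n_features=4)
# model.fit(X_train, y_train, epochs=100, verbose=0)
# print('Testing RMSE:', rmse(y_test, model.predict(X_test).ravel()))
```

Note that in Keras the `activation` argument controls the cell and output activation; mapping the paper's logistic-versus-tanh comparison onto it is one reasonable reading of the described experiment.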
Before proceeding with the simulations, certain parameters are set to establish a proper benchmark against which to compare the performance of the LSTM model across the different simulations. Table II below shows the training and testing RMSE that will be used as the benchmark when each parameter is changed. The LSTM model is trained for 100 epochs. All simulations are implemented in Python 3.6 using Keras with the TensorFlow backend.

TABLE II: INITIAL PARAMETERS AND SIMULATION RESULTS

Dataset                  Initial
Activation Function      Logistic
No. of Input Variables   4
No. of Hidden Nodes      100
Training RMSE            2.6302
Testing RMSE             2.3222

A. Simulation 1: Division of Dataset

This simulation is performed to select the best possible division of the dataset from Table I by analyzing the performance of each division.

TABLE III: RMSE FOR DIFFERENT DIVISIONS OF THE DATASET

Dataset               Set 1      Set 2      Set 3
Activation Function   Logistic   Logistic   Logistic
No. of Hidden Nodes   100        100        100
Training RMSE         2.6302     2.6411     2.6543
Testing RMSE          2.3222     2.2867     2.2093

Table III shows the different divisions of the dataset with their respective RMSE for each set. The results show that Set 3 achieves the lowest testing RMSE of the three sets. Hence, Set 3 is used for the rest of the simulations.

B. Simulation 2: Number of Hidden Nodes

This simulation is done to further analyze the performance of different numbers of hidden nodes. For Set 3, the number of hidden nodes is varied from 50 to 200, and the corresponding RMSE values are calculated.

TABLE IV: RMSE FOR DIFFERENT NUMBERS OF HIDDEN NODES

No. of Hidden Nodes   Training RMSE   Testing RMSE
50                    2.8217          2.3424
100                   2.6543          2.2093
150                   2.6831          2.2344
200                   2.7586          2.4127

Table IV shows the different numbers of hidden nodes used in this simulation, namely 100, which is used in the initial simulation, together with 50, 150 and 200. The results show that Set 3 with 100 hidden nodes achieves a lower RMSE for both the training and testing phases than the other settings. It is difficult to identify the number of hidden nodes that yields the best performance, since the RMSE values fluctuate considerably; nevertheless, the number of hidden nodes is set to 100.

C. Simulation 3: Activation Function

In this simulation, the performance of the hyperbolic tangent function and the logistic function is evaluated on Set 3. In all the previous simulations, the logistic function was used as the activation function. This simulation helps to decide the type of activation function for all further simulations.

TABLE V: RMSE OF DIFFERENT ACTIVATION FUNCTIONS

Dataset               Set 3                Set 3
Activation Function   Hyperbolic Tangent   Logistic
No. of Hidden Nodes   100                  100
Training RMSE         2.6672               2.6543
Testing RMSE          2.2411               2.2093

From Table V, it can be concluded that the performance of the hyperbolic tangent function is similar to that of the logistic function.

D. Simulation 4: Number of Input Variables

The purpose of this simulation is to compare the choices of input variables used in this paper. Power is included as one of the input variables to determine the performance of the forecast. A total of five simulations are done, varying the number of input variables cumulatively as shown in Table VI.

TABLE VI: DIVISION OF INPUT VARIABLES

Variables                  Input(s)
Power                      1
Ambience Temperature       2
Panel Temperature          3
Irradiance                 4
Accumulated Daily Energy   5

TABLE VII: RMSE FOR DIFFERENT DIVISIONS OF INPUT VARIABLES

No. of Input Variables   Training RMSE   Testing RMSE
Input 1                  2.8912          2.2421
Input 2                  2.7152          2.2013
Input 3                  2.6737          2.2623
Input 4                  2.5410          2.3045
Input 5                  2.6523          2.3272

Table VII shows the different divisions of input variables with their respective RMSE. The results show that the division with two input variables yields the lowest testing RMSE of all the divisions. This shows that increasing the number of input variables for the LSTM model to undergo more training does not necessarily improve its overall performance.
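Simulation 4's cumulative addition of input variables can be expressed as a simple ablation loop. The sketch below reuses the hypothetical `build_model` and `rmse` helpers from the previous listing; the dataframe and its column names are likewise illustrative assumptions, not the paper's code.

```python
# Cumulative input-variable ablation as in Simulation 4 (Tables VI and VII):
# Input 1 = {Power}, Input 2 = {Power, Ambience Temperature}, and so on.
VARIABLES = ['power', 'ambience_temp', 'panel_temp',
             'irradiance', 'accumulated_energy']

def make_xy(df, columns):
    """Predict next-step power from the current step's selected columns."""
    X = df[columns].to_numpy()[:-1]           # inputs at time t
    y = df['power'].to_numpy()[1:]            # power at time t+1
    return X.reshape(-1, 1, len(columns)), y  # (samples, timesteps, features)

# for k in range(1, len(VARIABLES) + 1):
#     cols = VARIABLES[:k]
#     X_tr, y_tr = make_xy(train_df, cols)    # train_df/test_df: hypothetical
#     X_te, y_te = make_xy(test_df, cols)
#     model = build_model(n_features=k)
#     model.fit(X_tr, y_tr, epochs=100, verbose=0)
#     print(f'Inputs {k}: testing RMSE =',
#           round(rmse(y_te, model.predict(X_te).ravel()), 4))
```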
Fig. 4. Training Set Results for 26th May

Fig. 4 shows the actual and forecast values from the training set. The forecast closely resembles the actual power, which shows that the final simulation has a higher accuracy than the initial simulation. In the first comparison of Table VIII the size of the dataset was changed, and in the third the input variables were changed. There is no difference between the results of comparisons 1 and 2, but there is an improvement in the training and testing RMSE in comparison 3. Table VIII thus shows a gradual improvement in the performance of the forecasting model, with better training and testing RMSE, as each parameter of the LSTM architecture is determined.

Fig. 5. Testing Set Results for 6th June

TABLE VIII: COMPARISON OF RMSE FOR DIFFERENT SIMULATIONS

Simulation      Initial   1        2        3
Training RMSE   2.6302    2.6543   2.6543   2.7152
Testing RMSE    2.3220    2.2093   2.2093   2.2013

V. DISCUSSION

The purpose of this paper is to establish a neural network model using Long Short-Term Memory (LSTM). This forecasting system uses the backpropagation learning rule to train the neural network to accurately estimate the generated power, and the trained backpropagation neural network successfully predicted the PV power.

Experimental results have shown that the proposed neural network can faithfully reproduce the curve of daily produced energy and predict the daily PV power with a percentage error of less than 5%. Even with such accurate results, the architecture cannot address all forecasting problems: each proposed model is unique to the dataset used for modeling. Many factors can impact the performance of the model, and the best architecture of the model is determined heuristically. Empirical evidence has shown that the model benefited most from using a larger training dataset, while having more input variables did not yield better performance; every division must be tested to determine the optimum number of input variables.

Various simulations were conducted in this paper to obtain the best results and performance by improving on the LSTM configuration used in the initial test. There are many other methods and procedures available, such as using different error calculation methods, internally swapping the sequence of input variables and using different activation functions for each respective gate of the LSTM model. The four simulations presented in this paper are just one way to improve the RMSE of the LSTM used in this research.

The dataset used in this paper is limited when compared to [18]-[21]; however, the forecasting performance of the proposed method is accurate. The dataset is sampled at 15-minute intervals across 30 days, which is sufficient for forecasting. Increasing the sampling rate would increase the computational burden and the noise in the dataset. Increasing the number of input variables may not improve the forecasting accuracy of the proposed model, as shown in Simulation 4, where the accuracy decreases as input variables are added. Increasing the quantity and variety of the dataset should therefore be considered prudently.

VI. CONCLUSION

This paper proposed a Recurrent Neural Network with the Long Short-Term Memory training algorithm to forecast photovoltaic power. The forecasting model is implemented in Python. Firstly, Simulation 1 shows that giving the model more training data yields better performance; these results are analyzed by evaluating the RMSE for each simulation. Next, Simulation 4 shows that increasing the number of input variables does not necessarily produce a result with a lower RMSE. These parameters are determined from the different simulations to establish an RNN architecture and are compared against the initial simulation in Table VIII. Future work will consider more input variables, such as pressure and wind speed, to provide a more detailed dataset. Other methods such as the deep belief network [22] and the traditional artificial neural network [23] can be combined with the LSTM model in an ensemble [24] to test for a more accurate forecast. Forecasting also helps in integration with the energy management systems of smart grids [25], microgrid planning and operation [26]-[29], and smart homes [30], [31].

REFERENCES

[1] G. W. Crabtree and N. S. Lewis, "Solar energy conversion," Physics Today, vol. 60, pp. 37-42, 2007.

[2] C. Candelise, M. Winskel and R. J. K. Gross, "The dynamics of solar PV costs and prices as a challenge for technology forecasting," Renewable and Sustainable Energy Reviews, vol. 26, pp. 96-107, 2013.
[3] "Key World Energy Statistics," International Energy Agency, 2014.
[4] How do solar systems produce energy? http://www.nwwindandsolar.com/solar-power-in-seattle-and-the-northwest/how-do-solar-systems-produce-energy/
[5] Advantages & Disadvantages of Solar Energy | GreenMatch. http://www.greenmatch.co.uk/blog/2014/08/5-advantages-and-5-disadvantages-of-solar-energy
[6] G. Zhang, B. Eddy Patuwo and M. Y. Hu, "Forecasting with artificial neural networks: The state of the art," International Journal of Forecasting, vol. 14, pp. 35-62, 1998.
[7] S. H. Oudjana, A. Hellal and I. H. Mahamed, "Short term photovoltaic power generation forecasting using neural network," 11th International Conference on Environment and Electrical Engineering (EEEIC), pp. 706-711, 2012.
[8] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, pp. 359-366, 1989.
[9] Y. Ting-Chung and C. Hsiao-Tse, "The forecast of the electrical energy generated by photovoltaic systems using neural network method," International Conference on Electric Information and Control Engineering (ICEICE), pp. 2758-2761, 2011.
[10] A. Alzahrani, J. W. Kimball and C. Dagli, "Predicting Solar Irradiance Using Time Series Neural Networks," Procedia Computer Science, vol. 36, pp. 623-628, 2014.
[11] G. W. Crabtree and N. S. Lewis, "Solar energy conversion," Physics Today, vol. 60, pp. 37-42, 2007.
[12] A. Gensler, J. Henze, B. Sick and N. Raabe, "Deep Learning for solar power forecasting — An approach using AutoEncoder and LSTM Neural Networks," 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, 2016, pp. 002858-002865.
[13] Understanding LSTM Networks — colah's blog. http://colah.github.io/posts/2015-08-Understanding-LSTMs/
[14] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink and J. Schmidhuber, "LSTM: A Search Space Odyssey," IEEE Transactions on Neural Networks and Learning Systems, vol. PP, no. 99, pp. 1-11.
[15] Q. Lyu and J. Zhu, "Revisit Long Short-Term Memory: An Optimization Perspective," 2014.
[16] S. Geman, E. Bienenstock and R. Doursat, "Neural networks and the bias/variance dilemma," Neural Computation, vol. 4, no. 1, pp. 1-58, 1992.
[17] L. Prechelt, "Early stopping — but when?," Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science 1524, Springer Verlag, Heidelberg, 1998.
[18] R. Setiono, "Feedforward neural network construction using cross validation," Neural Computation, vol. 13, no. 12, pp. 2865-2877, 2001.
[19] S. Balluff, J. Bendfeld and S. Krauter, "Short term wind and energy prediction for offshore wind farms using neural networks," 2015 International Conference on Renewable Energy Research and Applications (ICRERA), Palermo, 2015, pp. 379-382.
[20] M. Rana, I. Koprinska and V. G. Agelidis, "Forecasting solar power generated by grid connected PV systems using ensembles of neural networks," 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, 2015, pp. 1-8.
[21] H. T. Yang, C. M. Huang, Y. C. Huang and Y. S. Pai, "A Weather-Based Hybrid Method for 1-Day Ahead Hourly Forecasting of PV Power Output," IEEE Transactions on Sustainable Energy, vol. 5, no. 3, pp. 917-926, July 2014.
[22] Y. Q. Neo, T. T. Teo, W. L. Woo, T. Logenthiran and A. Sharma, "Forecasting of photovoltaic power using deep belief network," TENCON 2017 - 2017 IEEE Region 10 Conference, Penang, 2017, pp. 1189-1194.
[23] Z. Y. A. Ang, W. L. Woo and E. Mesbahi, "Artificial Neural Network Based Prediction of Energy Generation from Thermoelectric Generator with Environmental Parameters," Journal of Clean Energy Technologies, vol. 5, no. 6, pp. 458-463, 2017.
[24] T. T. Teo, T. Logenthiran, W. L. Woo and K. Abidi, "Forecasting of photovoltaic power using regularized ensemble Extreme Learning Machine," 2016 IEEE Region 10 Conference (TENCON), Singapore, 2016, pp. 455-458.
[25] T. T. Teo, T. Logenthiran and W. L. Woo, "Forecasting of photovoltaic power using extreme learning machine," 2015 IEEE Innovative Smart Grid Technologies - Asia (ISGT ASIA), Bangkok, 2015, pp. 1-6.
[26] A. W. L. Lim, T. T. Teo, M. Ramadan, T. Logenthiran and V. T. Phan, "Optimum long-term planning for microgrid," TENCON 2017 - 2017 IEEE Region 10 Conference, Penang, 2017, pp. 1457-1462.
[27] Muhammad Ramadan B. M. S., R. T. Naayagi and Woo Li Yee, "Modelling, simulation and experimentation of grid tied inverter for wind energy conversion systems," 2017 International Conference on Green Energy and Applications (ICGEA), Singapore, 2017, pp. 52-56.
[28] J. E. C. Tee, T. T. Teo, T. Logenthiran, W. L. Woo and K. Abidi, "Day-ahead forecasting of wholesale electricity pricing using extreme learning machine," TENCON 2017 - 2017 IEEE Region 10 Conference, Penang, 2017, pp. 2973-2977.
[29] T. T. Teo, T. Logenthiran, W. L. Woo and K. Abidi, "Fuzzy logic control of energy storage system in microgrid operation," 2016 IEEE Innovative Smart Grid Technologies - Asia (ISGT-Asia), Melbourne, VIC, 2016, pp. 65-70.
[30] W. Li, T. Logenthiran and W. L. Woo, "Intelligent multi-agent system for smart home energy management," 2015 IEEE Innovative Smart Grid Technologies - Asia (ISGT ASIA), Bangkok, 2015, pp. 1-6.
[31] Y. T. Quek, W. L. Woo and T. Logenthiran, "Smart Sensing of Loads in an Extra Low Voltage DC Pico-grid using Machine Learning Techniques," IEEE Sensors Journal, vol. 17, no. 23, pp. 7775-7783, 2017.
