
2019 International Conference on Electrical Engineering and Informatics (ICEEI)
9-10 July 2019, Bandung, Indonesia

Rainfall Forecasting for the Natural Disasters Preparation Using Recurrent Neural Networks

Elvan P. Prasetya, *Esmeralda C. Djamal
Department of Informatics
Universitas Jenderal Achmad Yani
Cimahi, Indonesia
*Corresponding Author: esmeralda.contessa@lecture.unjani.ac.id

Abstract— Rainfall forecasting is still a concern for researchers, considering the increasingly uncertain weather conditions in various tropical regions, including Indonesia. Therefore, a more robust computational model is needed because of climate uncertainty. Deep learning is a method that allows machines to learn time-based data patterns, such as climate data. One technique that is often used for time series data is Recurrent Neural Networks (RNN). However, the selection of climate features, time segments, length of historical records, pre-processing methods, and prediction models largely determines accuracy. This paper proposes Recurrent Neural Networks for weekly rainfall forecasting, using the rainfall, temperature, and humidity variables of each week within a year. The training used LSTM to generalize climate data for the past ten years. Weight updates used Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam). The results showed that the number of datasets and the learning rate significantly determine accuracy, so that learning from the last ten years of data gave more than 96% accuracy on new data and more than 98% on training data.

Keywords— rainfall forecasting; floods; RNN; LSTM.

I. INTRODUCTION

Rainfall that falls on the ground is influenced by several factors such as temperature, air humidity around the area, and global climate phenomena. However, current rainfall conditions are increasingly uncertain, so the time and volume of rain are difficult to predict, including in areas of Indonesia. Continuous rainfall can cause natural disasters such as floods and landslides. Also, very high rainfall that occurs over a long time can disrupt human activities. One of the regions in Indonesia that has high rainfall is Bogor, which causes the region to be frequently hit by natural disasters such as floods and landslides. Moreover, heavy rainfall in the Bogor area can increase the water flow of the Katulampa River and cause potential flooding in surrounding areas such as Jakarta and Bekasi.

Given the importance of forecasting rainfall, research on weather prediction is growing. Therefore, an appropriate computational model for forecasting rainfall is needed as an anticipatory step to reduce the impact that will occur. The prediction must have the right level of accuracy on rainfall. The goal in climate science research is to characterize extreme events in the current day and in future climate projections. However, understanding the developing mechanism, life cycle, and future trend requires accurately identifying such patterns in space and time [1].

Rainfall is time series data that has a particular pattern. Furthermore, atmospheric analysis is also carried out; rainfall is related to other variables such as air humidity, temperature, and air pressure. Previous research conducted calibration of predictive data with cross-validation [2]. Previous research on rainfall has also been carried out, such as research analyzing rainfall intensity in the Katulampa river [3] and the potential for extreme rainfall events [4]. Also, other studies have been conducted to predict rainfall in anticipation of flooding using online learning techniques [5] and to identify rain thresholds against the potential for landslides [6]. This provides ideas for this study to predict rainfall.

Climate data is a time series, so various methods can recognize patterns from historical data for future rainfall predictions. Previous research used percentile methods with standard deviations [3], an Embedded System [7], and Convolutional Networks with a Conditional Restricted Boltzmann Machine (CRBM) [8], which can predict rainfall intensity. Other studies also conducted rainfall prediction using the Backpropagation Artificial Neural Network (ANN) method with an accuracy of 96% [9], while other studies using the same Backpropagation ANN method had an accuracy of 77.5% [10], and others used ANFIS [11].

Deep Learning is a machine learning method developed from Artificial Neural Networks (ANN); what distinguishes Deep Learning from ANN is the number of layers used to carry out more complex processes. Deep Learning has a high level of accuracy but must be accompanied by a large amount of data and adequate hardware capabilities. Deep learning methods are mainly various neural networks designed by inspiration from brain neurons, such as Convolutional Neural Networks (CNN) [12] for image processing. In rainfall prediction, previous research used deep learning methods such as CNN [13] and Recurrent Neural Networks (RNN), for example to measure the potential pooling of water in urban areas [14]. Rainfall prediction using RNN has a higher level of accuracy than some other Deep Learning methods [8]. RNN can be used to predict radiation levels based on rainfall [15] and for dynamic weather forecasts [16]. Other studies also show that RNN is suitable for rainfall predictions that involve time series or time-based data [17]. However, time series data can also be viewed as a two-dimensional image, so other studies used 1D Convolutional Neural Networks and LSTM [18].

Data is provided as daily climate parameters. Prediction of meteorological parameters involves several aspects. Viewed by area, it can cover a global area [19] or a specific area [20]. Viewed by time horizon, it can use short terms of minutes to hours [21] [20], daily values [22] [23], estimates of extreme daily climate [24], estimated daily rainfall for the next 7, 10, and 14 days [18], or weeks and months [11] [16] [25]. This research used weekly rainfall forecasts considering their use for anticipating flooding.


Rainfall that occurs at successive times can affect the absorption of water by the soil, which can cause natural disasters such as floods and landslides. So, this research used weekly rainfall forecasting.

This research proposed a rainfall forecasting system based on rainfall, air temperature, and air humidity during a year. A data set of ten years was learned using RNN and LSTM to forecast rainfall for the next week. This model determines the intensity of rainfall in the next week to anticipate puddles that will take place in a specific region, such as the Bogor area. The output is divided into five classes, to find out which of the five intensities of rainfall is likely to occur, namely "Very Low" (<5 mm), "Low" (5-20 mm), "Medium" (20-50 mm), "High" (50-100 mm), and "Very High" (>100 mm). Weight updates used two optimization models, Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam).

II. PROPOSED METHODS

A. Data Set

This research used a data set of rainfall, air temperature, and humidity. The length of the data history also affects the accuracy of the model. A previous study used rainfall history in Semarang over the last ten years [26]. In this study, the previous ten years from the Bogor observation station were used for the variables temperature, humidity, and rainfall, in the years 2008 to 2017, as shown in Table I.

The data was obtained through the Meteorology, Climatology, and Geophysics Agency (BMKG) in Bogor. However, among the available data, there is often missing or zero data. Therefore, pre-processing needs to be done first. Past research excluded such data [2]. However, this study interpolated the missing data from surrounding points.

TABLE I. CLIMATE DATASET
Days   Date        Temperature  Humidity  Rainfall
1.     01/01/2008  18.7         100.0     18.0
2.     02/01/2008  19.0         99.0      78.0
…      …           …            …         …
2022.  14/07/2013  21.5         81.0      76.0
2023.  15/07/2013  21.0         86.0      8888.0
2024.  16/07/2013  21.9         80.0      4.0
…      …           …            …         …
3651.  30/12/2017  21.9         90.0      10.0
3652.  31/12/2017  22.4         82.0      7.0
a. 8888 = unmeasured

Table I shows that lost or unmeasured data were interpolated from the information before and after the point, giving the values in Table II.

TABLE II. CLIMATE DATASET WITH INTERPOLATION
Days   Date        Temperature  Humidity  Rainfall
1.     01/01/2008  18.7         100.0     18.0
2.     02/01/2008  19.0         99.0      78.0
…      …           …            …         …
2022.  14/07/2013  21.5         81.0      76.0
2023.  15/07/2013  21.0         86.0      40.0
2024.  16/07/2013  21.9         80.0      4.0
…      …           …            …         …
3651.  30/12/2017  21.9         90.0      10.0
3652.  31/12/2017  22.4         82.0      7.0

Each data set has a history of one year. In order to minimize the effect of discontinuity in the data, the segments overlap, as shown in Fig. 1.

Fig. 1. Data segmentation (overlapping segments of weekly data over weeks 1-523, forming datasets 1-512)
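As an illustration of the pre-processing described above, the following is a minimal sketch of how the unmeasured values (8888) could be interpolated from the points before and after each gap using pandas. The file name, column names, and the choice of time-based interpolation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import pandas as pd

# Load the daily climate records (Table I layout is assumed:
# date, temperature, humidity, rainfall).
df = pd.read_csv("bogor_daily_climate.csv", parse_dates=["date"])
df = df.set_index("date")

# 8888 marks unmeasured values in the raw data; treat it as missing.
df = df.replace(8888.0, np.nan)

# Fill missing values from the points before and after each gap
# (interpolation in time), as done to build Table II.
df = df.interpolate(method="time", limit_direction="both")
```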
B. Rainfall Forecasting Model

The computational model was designed to forecast the maximum rainfall one week ahead from data sets of one year. The precipitation forecasting model was first trained on ten years of training data using RNNs. Then the generalization resulting from training was used to forecast rainfall into one of the five output ranges provided. This model is shown in Fig. 2.

Fig. 2. Rainfall forecasting model (preprocessing, RNN training with forget, input, and output gates, a dense (sigmoid) layer, and five output classes of rainfall intensity)

1) Feature Extraction

The rainfall data in Table I is a daily history. A diverse range of forecasts was made in previous studies, such as daily [11] [24] and weekly or monthly [16]. This study used weekly predictions, considering its designation for handling disaster management and the ability of the soil to absorb water.

Feature extraction is used to find the maximum value each week of every available variable, so that the data is processed into weekly data. The formula used to find the maximum weekly value of all variables can be seen in (1). Table II is then extracted into Table III.


x_week = max(x_t, x_{t+1}, …, x_{t+6})   (1)

In machine learning, each feature affects the output calculation with weights that change dynamically. In some cases, the features have unequal value ranges. This condition can cause features that have a broad range of values to be dominant. Therefore, normalization to the same range of values, 0-1, is needed first.

TABLE III. WEEKLY CLIMATE DATASET
Week   Date        Temperature  Humidity  Rainfall
1.     06/01/2008  20.9         100.0     78.0
2.     13/01/2008  22.5         81.9      0.0
…      …           …            …         …
288.   01/07/2013  22.4         93.0      12.0
289.   08/07/2013  21.5         94.0      76.0
290.   14/07/2013  21.9         94.0      40.0
…      …           …            …         …
522.   24/12/2017  21.8         98.0      36.8
523.   31/12/2017  22.4         90.0      10.0
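Continuing the sketch above, the weekly maximum extraction in (1), the 0-1 normalization, and the five rainfall classes could be expressed as follows. The use of pandas resampling and global min-max scaling is an assumed implementation; only the class thresholds are taken from the paper.

```python
import numpy as np
import pandas as pd

# `df` is the interpolated daily DataFrame from the sketch above
# (DatetimeIndex with temperature, humidity, and rainfall columns).
weekly = df.resample("W").max()          # weekly maximum of each variable, as in (1)

# Min-max normalization of every feature to the range 0-1.
normalized = (weekly - weekly.min()) / (weekly.max() - weekly.min())

# Rainfall intensity classes used as the model output
# (Very Low <5, Low 5-20, Medium 20-50, High 50-100, Very High >100 mm).
labels = pd.cut(
    weekly["rainfall"],
    bins=[-np.inf, 5, 20, 50, 100, np.inf],
    labels=["Very Low", "Low", "Medium", "High", "Very High"],
)
```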
2) Forecasting Model

RNN is one variation of Deep Learning that utilizes high computing power for large amounts of training data. Therefore, some recent studies show RNNs are quite powerful for prediction problems. The architecture of the RNN can be seen in Fig. 3.

Fig. 3. RNN architecture (the recurrent network unfolded over time, with input x, state s, output o, and shared weights U, V, W)

The RNN architecture is almost the same as the MLP architecture, but with more complexity for sequential data processing; one variant uses Long Short Term Memory (LSTM). LSTM was created to overcome a problem that often occurs in RNNs, namely the short-term memory problem, often called vanishing gradients. RNN is suitable for data sequences because, as the name suggests, it works recurrently, or repeatedly. The architecture of the LSTM is shown in Fig. 4.

After performing the preprocessing stage and producing a vector, the activation process of the Rectified Linear Unit (ReLU) is entered. This activation is done to eliminate negative values of the vector x caused by the dense layer process during training. The equation used for ReLU activation is (2).

f(x) = max(0, x)   (2)

Fig. 4. LSTM architecture (a memory cell with input, forget, and output gates)

In the LSTM architecture, the memory units are called cells, which take input from the previous hidden state (h_{t-1}) and the current input (x_t). The collection of cells decides which information will be stored in or forgotten from memory. The equations for the calculation process of each gate in the LSTM can be seen in (3)-(8).

i = σ(x_t U^i + s_{t-1} W^i)   (3)

f = σ(x_t U^f + s_{t-1} W^f)   (4)

o = σ(x_t U^o + s_{t-1} W^o)   (5)

g = tanh(x_t U^g + s_{t-1} W^g)   (6)

c_t = c_{t-1} ∘ f + g ∘ i   (7)

s_t = tanh(c_t) ∘ o   (8)

Here i is the input gate, f is the forget gate, and o is the output gate, whereas g, c_t, and s_t are the calculations that update the hidden state. LSTM has three gates; the first gate is the forget gate, which determines the information to be forgotten from the cell using the sigmoid function, as shown in (9).

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)   (9)

Where W and b are the initial weight vector and bias, x_t is the input, and h_{t-1} is the previous hidden state.

The second gate is the input gate, which receives the value of the sigmoid layer that will be updated. Then, the updated value is passed through a tanh layer, as seen in (10) and (11).

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)   (10)

ĉ_t = tanh(W_c · [h_{t-1}, x_t] + b_c)   (11)

The calculation process is almost the same as that of the forget gate, where W and b are the initial weights and biases, x_t is the input, and h_{t-1} is the previous hidden state. However, at the input gate, there is an additional process to find new candidate cell states. Then, after the input gate process, the cell state is updated in (12).

c_t = f_t ∗ c_{t-1} + i_t ∗ ĉ_t   (12)

Where f_t is the value derived from the forget gate, c_{t-1} is the value of the previous memory, while i_t and ĉ_t are values based on the input gate.
The last gate is the output gate, which is calculated based on the updated cell state and a sigmoid layer. The calculation process can be seen in (13) and (14).

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)   (13)

h_t = o_t ∗ tanh(c_t)   (14)

Where W and b are the initial weights and biases, x_t is the input, and h_{t-1} is the previous hidden state. Moreover, at the output gate, the hidden state is also updated. After the output gate process is complete, the dense layer is calculated to update the weights (15).

y = W ∗ σ(h_t)   (15)

The prediction is composed of several layers, and each layer is composed of neurons fully connected with the other layers. The prediction is obtained based on the generalization of the weights resulting from training; this process is carried out in a feed-forward manner.
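The gate computations (9)-(14) can be summarized in a short NumPy sketch of a single LSTM cell step; the array sizes and random initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following (9)-(14). W and b hold the parameters of the
    forget (f), input (i), candidate (c), and output (o) gates; each weight
    matrix acts on the concatenation [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])            # (9)  forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])            # (10) input gate
    c_hat = np.tanh(W["c"] @ z + b["c"])          # (11) candidate cell state
    c_t = f_t * c_prev + i_t * c_hat              # (12) cell state update
    o_t = sigmoid(W["o"] @ z + b["o"])            # (13) output gate
    h_t = o_t * np.tanh(c_t)                      # (14) new hidden state
    return h_t, c_t

# Illustrative sizes: 3 climate features per week, 64 hidden units.
n_in, n_hidden = 3, 64
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(n_hidden, n_hidden + n_in)) for k in "fico"}
b = {k: np.zeros(n_hidden) for k in "fico"}
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```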
III. RESULT AND DISCUSSION

Climate data from 2009 to 2018 were used, consisting of 10959 records. The data is divided into two parts, namely 80% for training data and 20% for test data. Training was optimized with variations in the weight update model, the number of training data sets, and the learning rate.

A. Comparison Between RNN and CNN

Deep Learning has several methods that can be used for making predictions. Among them are Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN). Both methods are widely used. For time-series data, CNN is adapted to one dimension, called 1D CNN.

This experiment was conducted to find out which method is more suitable for predicting rainfall. From previous research, RNN usually has better accuracy for time series data such as rainfall [8]. In this study as well, RNN is better than CNN in accuracy and stability, as in Table IV. Both methods used a learning rate of 0.001 and 500 epochs.

TABLE IV. COMPARISON OF RNN AND CNN
No.  Method  Training Loss  Training Accuracy (%)  New Data Loss  New Data Accuracy (%)
1.   RNN     0.0339         98.53                  0.0375         96.07
2.   CNN     0.0519         94.83                  0.1297         81.37

RNN is better in accuracy, namely 96.07% on new data, while CNN has an accuracy of 81.37% on new data. Also, RNN is better than CNN in terms of stability, that is, a smaller loss. This result is possible given that RNN and LSTM consider the connectivity between elements of a sequence through weight sharing.

Both methods have their disadvantages, and their strengths depend on the data characteristics. In this study, the RNN method is better than CNN, with a higher level of accuracy and better stability. Rainfall is sequential data, so RNN is suitable for it. Graphs of the RNN and CNN test results can be seen in Fig. 5 and Fig. 6.

Fig. 5. Model accuracy of RNN and CNN

Fig. 6. Model loss of RNN and CNN
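The paper does not give code, but a training setup consistent with the configuration described (an LSTM of 64 units, dropout 0.2, a dense layer of 13, five output classes, Adam with a learning rate of 0.001, 500 epochs, 80/20 split) might look like the following Keras sketch. The window length, activations, loss function, and the placeholder data are assumptions for illustration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: 52 weekly timesteps per one-year segment, 3 features
# (temperature, humidity, rainfall), 5 rainfall-intensity classes.
TIMESTEPS, N_FEATURES, N_CLASSES = 52, 3, 5

def build_model():
    """LSTM configuration loosely following Table VII (64 hidden units,
    dropout 0.2, dense 13, output 5); activations are assumptions."""
    return keras.Sequential([
        layers.LSTM(64, input_shape=(TIMESTEPS, N_FEATURES)),
        layers.Dropout(0.2),
        layers.Dense(13, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

# Placeholder random arrays standing in for the real weekly segments and labels.
X = np.random.rand(509, TIMESTEPS, N_FEATURES)
y = keras.utils.to_categorical(np.random.randint(N_CLASSES, size=509), N_CLASSES)
split = int(0.8 * len(X))                       # 80% training, 20% test
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = build_model()
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(X_train, y_train, epochs=500, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```

With the real data, evaluating on the held-out 20% would correspond to the "new data" columns reported in Tables IV-VIII.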
B. Comparison of Two Optimization Models

Weight updates can be carried out in various ways. Some previous studies compared Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam). In previous research, Adam usually has better accuracy and is more stable [18], although the difference is usually not significant. In this research, Adam is better than SGD in accuracy and stability, as in Table V. Both models used a learning rate of 0.001 and 500 epochs.

TABLE V. MODEL OPTIMIZATION
No.  Model  Training Loss  Training Accuracy (%)  New Data Loss  New Data Accuracy (%)
1.   SGD    1.1927         25.00                  1.2183         22.54
2.   Adam   0.0339         98.53                  0.0375         96.07

Both optimization methods have their advantages and disadvantages. However, in this study, the Adam optimization method is better than SGD, with a higher level of accuracy and lower loss than the results obtained by SGD. Graphs of the SGD and Adam test results are shown in Fig. 7 and Fig. 8.

Fig. 7. Accuracy of the SGD and Adam models

Fig. 8. Loss of the SGD and Adam models


Based on the results of the SGD and Adam optimization testing, which can be seen in Table V, the accuracy of SGD optimization reaches 25.00% for training data and 22.54% for test data, while Adam's optimization reaches an accuracy value of 99.02% for training data and 99.02% for test data. The comparison of accuracy between the SGD and Adam optimization models can be seen in Fig. 7. This result shows that training using SGD experiences a local minimum; since the SGD training process uses only one or a few randomly selected parts of the training data, a local minimum can occur because not all data in each class is represented.
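A sketch of how the SGD and Adam comparison in Table V might be run, reusing the build_model helper and data arrays from the sketch above (those names are assumptions, not taken from the paper):

```python
from tensorflow import keras

# Both optimizers are run under identical settings
# (learning rate 0.001, 500 epochs), as in the comparison of Table V.
optimizers = {
    "SGD": keras.optimizers.SGD(learning_rate=0.001),
    "Adam": keras.optimizers.Adam(learning_rate=0.001),
}

for name, opt in optimizers.items():
    model = build_model()   # fresh weights for each run (helper from the sketch above)
    model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=500, verbose=0)
    loss, acc = model.evaluate(X_test, y_test, verbose=0)
    print(f"{name}: new-data loss {loss:.4f}, accuracy {acc:.2%}")
```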
C. Testing the Learning Rate Parameter

The experiment was conducted with several learning rates used in the Adam optimization model. It was used to determine the effect of the size of the learning rate on the learning process and on testing the data. The results obtained from the learning rate experiment can be seen in Table VI.

TABLE VI. LEARNING RATE TESTING
No.  Learning Rate  Training Loss  Training Accuracy (%)  New Data Loss  New Data Accuracy (%)
1.   0.001          0.0339         98.53                  0.0375         96.07
2.   0.002          0.0894         90.44                  0.1369         83.33
3.   0.010          0.0677         94.61                  0.1660         84.31
4.   0.040          0.1069         86.52                  0.0982         91.17
5.   0.100          0.1894         80.64                  0.1609         84.31
6.   0.400          0.3593         62.01                  0.3693         65.68
7.   0.600          0.3304         63.48                  0.3785         67.64
8.   0.800          0.2577         70.59                  0.2436         74.50

Table VI shows the differences in the accuracy obtained depending on the learning rate. A learning rate of 0.600 reached an accuracy of 63.48% on training data and 67.64% on new data, while a learning rate of 0.001 has an accuracy of 98.53% for training data and 96.07% for new data. Learning rates from 0.001 to 0.010 are more appropriate, while learning rates from 0.040 to 0.800 give less optimal results. This is because, when the learning rate is too large, gradient descent can accidentally increase errors rather than reduce them during training. Therefore, it can be concluded that a learning rate value that is too high causes errors in updating the weights, which affects the resulting training accuracy.
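A sketch of the learning-rate sweep behind Table VI, again assuming the build_model helper and data arrays from the earlier sketches:

```python
from tensorflow import keras

# Sweep over the learning rates tested in Table VI, keeping Adam,
# 500 epochs, and the same data as in the sketches above.
results = {}
for lr in [0.001, 0.002, 0.010, 0.040, 0.100, 0.400, 0.600, 0.800]:
    model = build_model()
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=lr),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(X_train, y_train, epochs=500, verbose=0)
    results[lr] = model.evaluate(X_test, y_test, verbose=0)

for lr, (loss, acc) in results.items():
    print(f"learning rate {lr}: new-data loss {loss:.4f}, accuracy {acc:.2%}")
```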
D. The Influence of the Amount of Data Set

This study predicts weekly rainfall with ten years of data, namely 2008-2017. However, it is necessary to know how much the amount of training data influences the results. The training process was carried out using Adam optimization with a learning rate of 0.001 and 500 epochs. The configurations to be tested can be seen in Table VII, and the results can be seen in Table VIII.

TABLE VII. MODEL CONFIGURATION OF RNN
No.  Layer         Ten years  Five years  One year
1.   Dataset       509        249         40
2.   Input Layer   64         64          64
3.   Hidden Layer  64         64          64
4.   Dropout       0.2        0.2         0.2
5.   Dense         13         13          13
6.   Output Layer  5          5           5

TABLE VIII. ACCURACY OF RNN MODEL CONFIGURATION
No.  Dataset     Training Loss  Training Accuracy (%)  New Data Loss  New Data Accuracy (%)
1.   Ten years   0.0339         98.53                  0.0375         96.07
2.   Five years  0.0548         94.97                  0.0665         88.00
3.   One year    0.4578         43.75                  1.1024         25.00

Table VIII shows differences in the accuracy obtained based on the number of datasets, where the accuracy decreases as the number of datasets being trained is reduced. Experiments with 509 datasets have the highest accuracy, 98.53% for training data and 96.07% for test data. Meanwhile, experiments with 40 datasets have the lowest accuracy, 43.75% for training data and 25.00% for test data. So it can be concluded that the number of datasets trained affects the accuracy obtained.

IV. CONCLUSION

This study has developed a rainfall forecasting model using Recurrent Neural Networks. The rainfall forecasting model consists of three stages. The first stage is preprocessing the data, consisting of data interpolation, normalization, and segmentation. The second stage is the data training process using RNN, and the third stage is the testing process. The results of this study indicate that rainfall prediction using RNN gives outstanding results.

Using Adam optimization with a learning rate of 0.001, the accuracy for training data is 98.53%, and for new data it is 96.07%. Adam optimization results in higher accuracy and stability compared to SGD. Predictions using data over the past ten years have better results than predictions made using data of five years or one year. Therefore, it can be concluded that the amount of data, the optimization model, and the size of the learning rate used for the training process influence the accuracy obtained.

RNN is better in accuracy on new data, that is 96.07%, while CNN has an accuracy of 81.37%. Also, RNN is better than CNN in terms of stability, that is, a smaller loss. This result is possible given that RNN and LSTM consider the connectivity between elements of a sequence.

REFERENCES
[1] Y. Liu, J. Correa, D. Lavers, M. Wehner, K. Kunkel, and W. Collins, "Application of Deep Convolutional Neural Networks for Detecting Extreme Weather in Climate Datasets."
[2] J. C. Bennett, D. E. Robertson, P. G. D. Ward, H. A. P. Hapuarachchi, and Q. J. Wang, "Calibrating Hourly Rainfall-Runoff Models with Daily Forcings for Streamflow Forecasting Applications in Meso-Scale Catchments," Environmental Modelling and Software, vol. 76, pp. 20–36, 2016.
[3] G. B. Wicaksono and R. Hidayat, "Extreme Rainfall in Katulampa Associated with the Atmospheric Circulation," in Procedia Environmental Sciences, 2016, vol. 33, pp. 155–166.


[4] K. C. Gouda, S. Nahak, and P. Goswami, "Evaluation of a GCM in Seasonal Forecasting of Extreme Rainfall Events Over Continental India," Weather and Climate Extremes, vol. 21, pp. 10–16, 2018.
[5] P. A. Chen, L. C. Chang, and F. J. Chang, "Reinforced recurrent neural networks for multi-step-ahead flood forecasts," Journal of Hydrology, vol. 497, pp. 71–79, 2013.
[6] S. Naidu, K. S. Sajinkumar, T. Oommen, V. J. Anuja, R. A. Samuel, and C. Muraleedharan, "Early Warning System for Shallow Landslides Using Rainfall Threshold and Slope Stability Analysis," Geoscience Frontiers, vol. 9, no. 6, pp. 1871–1882, 2017.
[7] I. Wahyuni, P. Faster, E. Adipraja, and S. A. Dewi, "Implementation of Rainfall Prediction Using the Embedded System Method in Kelurahan Wonokoyo, Kecamatan Kedungkandang, Malang City," Jurnal Ilmiah Teknologi Informasi Asia, vol. 13, no. 1, pp. 35–46, 2019.
[8] A. G. Salman, B. Kanigoro, and Y. Heryadi, "Weather Forecasting Using Deep Learning Techniques," in 2015 International Conference on Advanced Computer Science and Information Systems (ICACSIS), 2015, pp. 281–285.
[9] L. Handayani and M. Adri, "Application of ANN (Backpropagation) for Rainfall Prediction (Case Study: Pekanbaru)," in Seminar Nasional Teknologi Informasi, Komunikasi dan Industri (SNTIKI) 7, 2015, pp. 238–247.
[10] Y. S. Pangestu and R. Gernowo, "Evaluation of Backpropagation Neural Network Model for Extreme Climate Prediction with Correlation of Rainfall and Sea Level in Semarang," Youngster Physics Journal, vol. 4, no. 1, pp. 67–68, 2015.
[11] M. I. Azhar and W. F. Mahmudy, "Prediction of Rainfall Using Adaptive Neuro-Fuzzy Inference System (ANFIS)," Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer, vol. 2, no. 11, pp. 4932–4939, 2018.
[12] H. Fadhilah, E. C. Djamal, and R. Ilyas, "Non-Halal Ingredients Detection of Food Packaging Image Using Convolutional Neural Networks," in 2018 International Symposium on Advanced Intelligent Informatics (SAIN 2018), 2018.
[13] M. Qiu et al., "A short-term rainfall prediction model using multi-task convolutional neural networks," in Proceedings - IEEE International Conference on Data Mining (ICDM), 2017, pp. 395–404.
[14] F. J. Chang, P. A. Chen, Y. R. Lu, E. Huang, and K. Y. Chang, "Real-time Multi-Step-Ahead Water Level Forecasting by Recurrent Neural Networks for Urban Flood Control," Journal of Hydrology, vol. 517, pp. 836–846, 2014.
[15] Z. Liu and C. J. Sullivan, "Prediction of Weather-Induced Background Radiation Fluctuation with Recurrent Neural Networks," Radiation Physics and Chemistry, vol. 155, pp. 275–280, 2019.
[16] N. Sinha, B. Purkayastha, and L. Marbaniang, "Weather prediction by recurrent neural network dynamics," International Journal of Intelligent Engineering Informatics, vol. 2, no. 2/3, 2014.
[17] M. A. Zaytar, "Sequence to Sequence Weather Forecasting with Long Short-Term Memory Recurrent Neural Networks," 2017.
[18] S. Yuan, X. Luo, B. Mu, J. Li, and G. Dai, "Prediction of North Atlantic Oscillation Index with Convolutional LSTM Based on Ensemble Empirical Mode Decomposition," Atmosphere, vol. 10, no. 252, pp. 2–13, 2019.
[19] S. G. Gouda, Z. Hussein, S. Luo, and Q. Yuan, "Model selection for accurate daily global solar radiation prediction in China," Journal of Cleaner Production, vol. 221, pp. 132–144, 2019.
[20] I. Tanaka and H. Ohmori, "Method Selection in Different Regions for Short-Term Wind Speed Prediction in Japan," in SICE Annual Conference 2015, July 2015, vol. 2, pp. 189–194.
[21] F. Li, G. Ren, and J. Lee, "Multi-step wind speed prediction based on turbulence intensity and hybrid deep neural networks," Energy Conversion and Management, vol. 186, pp. 306–322, 2019.
[22] K. Kaba, M. Sarıgül, M. Avcı, and H. K. Mustafa, "Estimation of Daily Global Solar Radiation Using Deep Learning," Energy, vol. 162, no. 1, pp. 126–135, 2018.
[23] L. Wang, Z. Wang, and B. Wang, "Wind Power Day-ahead Prediction Based on LSSVM With Fruit Fly Optimization Algorithm," in 2018 International Conference on Power System Technology (POWERCON), 2018, pp. 999–1003.
[24] Y. Liu et al., "Application of Deep Convolutional Neural Networks for Detecting Extreme Weather in Climate Datasets," in International Conference on Advances in Big Data Analytics, 2016, pp. 81–88.
[25] T. Kaur, S. Kumar, and R. Segal, "Application of artificial neural network for short term wind speed prediction," in Biennial International Conference on Power and Energy Systems: Towards Sustainable Energy (PESTSE), 2016, pp. 217–222.
[26] F. M. Arif, R. Gernowo, A. Setyawan, and D. Febrianty, "Analysis of the Rainfall Climatology Station Semarang Data Using Artificial Neural Networks Model," Berkala Fisika, vol. 15, no. 1, pp. 21–26, 2012.

