
Stock Prediction with a Neural Network Using LSTM

Project 1
Tang Chung Fan
Department of Computer Science and Information Engineering
National Taiwan Normal University, Taipei, Taiwan
Abstract—This is the report for Project 1 of the Neural Networks course. In this project, the next 5 days of a stock's prices are to be predicted, while an enormous amount of past data is fed in to train the model for this prediction.

Keywords—deep learning, LSTM, stock, prediction

I. INTRODUCTION

This report describes Project 1: predicting a stock's closing price for the next 5 days from historical price data with an LSTM network. The sections below cover how the data are loaded and split, how the model was chosen and devised, the experiment settings, and the results.

II. RELATED WORK

A. Data Loading

What people interested in daily stock trading care about are the prices: Open, High, Low, Close, Volume, and so forth. In Project 1, the closing prices of the next 5 days are required to be predicted as accurately as possible. For that reason, we need to pay keen attention to past closing prices and to an apt neural-network method.


In the dataset the TA provided, there is a row of data for each day from 2012 onward. Date, Open, High, Low, Close, and Volume are included in each row, from which the closing prices can be fetched easily with a Python DataFrame.
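As a sketch of this loading step (the file name "stock_data.csv" and the use of pandas are assumptions, since the report does not show its code):

    import pandas as pd

    # Assumed file name; each row holds Date, Open, High, Low, Close, Volume.
    df = pd.read_csv("stock_data.csv")

    # Keep only the closing prices, as a column vector for later scaling.
    close = df["Close"].values.reshape(-1, 1)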
B. Process to Choose and Devise the Model

Concerning the method used to predict the prices: stock data have the character of a sequence ordered by date. In real life, language translation and voice recognition are closely akin to this, and people tend to associate such tasks with recurrent neural networks (RNNs). An RNN is flexible for dealing with serial data, but in terms of long-term dependency, the state from the previous moment affects the next state. As the network layers grow, the model may have difficulty with gradient explosion or vanishing, which can result in failed optimization and less aptitude for learning; the lengthier the series, the deeper the network. In order to incorporate time delays and feedback from previous results, controlling states, referred to as gated states or gated memory, is an acceptable approach.

LSTM, therefore, is chosen for this project. It avoids the gradient problem by being augmented with recurrent gates, so-called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backwards through an unlimited number of virtual layers unfolded in space. That is, an LSTM can learn tasks that require memories of events that occurred thousands or more discrete time steps earlier. LSTM works even given long delays between significant events and can deal with signals that mix low- and high-frequency components.
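For reference, the gated memory mentioned above can be written out explicitly. These are the standard LSTM cell equations (standard notation, not taken from the report): with input $x_t$, previous hidden state $h_{t-1}$, and previous cell state $c_{t-1}$,

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(gated memory)} \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

The forget gate $f_t$ decides how much of the old cell state to keep, which is what lets errors flow backwards without vanishing.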
C. Separating Data into Training and Testing Groups

In the dataset the TA provided, there are approximately 2400 or more days of data, from 2012 to 2022. In a deep-learning process, the data would normally be separated into three groups: training, validation, and testing. The validation set is omitted here because there are enough samples. The proportion devised for training versus testing data is 49:1.
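A sketch of this 49:1 split, continuing from the loading code above (`close` is the closing-price array):

    # 49 parts for training, 1 part for testing, in chronological order.
    split = int(len(close) * 49 / 50)
    train_data = close[:split]
    test_data = close[split:]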
III. EXPERIMENT SETTING

A. Data Scaling

A Min/Max scaler is set up to convert all the closing-price data into the range 0 to 1 while the DataFrame is reshaped into the format the model can accept. The testing data are treated the same way before being put into the prediction session.
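A sketch of the scaling and reshaping, assuming scikit-learn's MinMaxScaler and a sliding window; the 60-day window length is an assumption, since the report does not state it:

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    scaler = MinMaxScaler(feature_range=(0, 1))
    train_scaled = scaler.fit_transform(train_data)  # fit on training data only
    test_scaled = scaler.transform(test_data)        # same scaling for test data

    # Turn the series into (samples, time steps, 1) windows, each labelled
    # with the next 5 days' scaled closing prices.
    def make_windows(series, window=60, horizon=5):
        X, y = [], []
        for i in range(window, len(series) - horizon + 1):
            X.append(series[i - window:i, 0])
            y.append(series[i:i + horizon, 0])
        X, y = np.array(X), np.array(y)
        return X.reshape(X.shape[0], X.shape[1], 1), y

    X_train, y_train = make_windows(train_scaled)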
B. Regularization: Dropout for Overfitting

With respect to overfitting, the number of units per LSTM layer was set to 50 at first; if time allows or further curiosity is inspired, different numbers of units could be tested to find the optimal one, whether more or fewer. The hidden state at every time step is set to be returned, so that all of them are regarded as feedback for the next layer. The next step the model performs is to drop out 20% of the neurons in the hidden layers to prevent the result from overfitting. In this section, three LSTM layers are built and fed with data, each followed by this neuron dropout. As the requirement states, the 5 consecutive days of data after the present are needed from the model, so a Dense layer is used to output 5 numeric values in a row.
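A sketch of this architecture in Keras (assuming the report's Dense and dropout functions refer to Keras; the exact arrangement of layers is an interpretation of the description above, not code from the report):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dropout, Dense

    model = Sequential([
        # Three stacked LSTM layers of 50 units, each followed by 20% dropout.
        LSTM(50, return_sequences=True, input_shape=(X_train.shape[1], 1)),
        Dropout(0.2),
        LSTM(50, return_sequences=True),
        Dropout(0.2),
        LSTM(50),      # last LSTM layer returns only its final hidden state
        Dropout(0.2),
        Dense(5),      # 5 outputs: the next 5 days' (scaled) closing prices
    ])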
The model is compiled after this step is done, to configure the training process. First, the optimizer is set to adaptive moment estimation (Adam), which fits the task of updating the neurons' weights iteratively with less cache and better computational performance. For the loss function, mean-squared error was selected in this model for evaluating the quality of the residuals.
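Under the same Keras assumption, the corresponding compile call is a one-liner:

    # Adam ("adaptive moment estimation") with mean-squared-error loss.
    model.compile(optimizer="adam", loss="mean_squared_error")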

When it comes to the training session, the training data are fed into the model with the epoch count set to 25 and the batch size left at its default. The prediction session follows once the model has finished its training, and the 5 scaled outputs are converted back into authentic predicted prices.
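A sketch of training and prediction under the same assumptions as above (Keras defaults the batch size to 32; because the 49:1 split leaves fewer test days than the assumed 60-day window, the tail of the training series is prepended before windowing, which is also an assumption):

    # Train for 25 epochs with the default batch size.
    model.fit(X_train, y_train, epochs=25)

    # Build test windows with the preceding history, predict, undo the scaling.
    full_scaled = np.concatenate([train_scaled[-60:], test_scaled])
    X_test, y_test = make_windows(full_scaled)
    pred_scaled = model.predict(X_test)
    pred_prices = scaler.inverse_transform(
        pred_scaled.reshape(-1, 1)).reshape(pred_scaled.shape)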

IV. EXPERIMENT RESULT


A. Result
In the 5 given further future data,
B. Fetching Additional Data from Yahoo to Train, to See Whether the Predicting Model Becomes More Accurate
To find out whether the amount of data matters, data from 2007, the year in which Lotus became a listed company, were imported into the project. However, the result, which was expected to surpass the one mentioned above, did not improve.

V. CONCLUSION
