Final PPT ANN


AI ML DL

DEEP LEARNING
• Neural networks, also known as artificial neural networks
(ANNs) or simulated neural networks (SNNs), are a subset
of machine learning and are at the heart of deep
learning algorithms. Their name and structure are inspired by
the human brain, mimicking the way that biological neurons
signal to one another.

• Artificial neural networks (ANNs) are composed of node layers: an input layer, one or
more hidden layers, and an output layer. Each node, or artificial neuron, connects to
others and has an associated weight and threshold. If the output of an individual node is
above the specified threshold value, that node is activated, sending data to the next layer
of the network. Otherwise, no data is passed along to the next layer of the network.
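A minimal MATLAB sketch of this thresholding behaviour for a single artificial neuron (all numbers are illustrative placeholders, not values from any trained network):

    x = [0.5; 0.2; 0.9];        % inputs arriving from the previous layer
    w = [0.4, -0.6, 0.8];       % connection weights of this neuron
    b = -0.3;                   % bias shifts the effective threshold
    z = w * x + b;              % weighted sum of the inputs
    y = double(z > 0);          % node activates (outputs 1) only above the threshold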
• Recurrent Neural Networks (RNNs) are a type of neural network where the output from the
previous step is fed as input to the current step. In traditional neural networks, all the
inputs and outputs are independent of each other, but in cases such as predicting the next
word of a sentence, the previous words are required, and hence there is a need to
remember them. Thus RNNs came into existence, solving this issue with the help of a
hidden layer.
• The main and most important feature of an RNN is the hidden state, which remembers some
information about a sequence. A recurrent neural network is a type of artificial
neural network that uses sequential data or time series data. These deep learning
algorithms are commonly used for ordinal or temporal problems, such as language
translation, natural language processing (NLP), speech recognition, and image captioning;
they are incorporated into popular applications such as Siri, voice search, and Google
Translate. Like feedforward and convolutional neural networks (CNNs), recurrent neural
networks use training data to learn. They are distinguished by their “memory”: they
take information from prior inputs to influence the current input and output. While
traditional deep neural networks assume that inputs and outputs are independent of
each other, the output of a recurrent neural network depends on the prior elements within
the sequence.
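The hidden-state update that gives an RNN its memory can be sketched in a few lines of MATLAB; the weight matrices below are random placeholders standing in for trained parameters:

    Wx = randn(4, 3);                    % input-to-hidden weights
    Wh = randn(4, 4);                    % hidden-to-hidden (recurrent) weights
    b  = zeros(4, 1);                    % bias
    h  = zeros(4, 1);                    % hidden state, initially empty
    for t = 1:5                          % walk through a short sequence
        x = randn(3, 1);                 % input at time step t
        h = tanh(Wx * x + Wh * h + b);   % new state mixes input with old state
    end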
TYPES OF ALGORITHMS IN ANN

FEED FORWARD BACKPROPAGATION
INTRODUCTION

 In this network, the information moves from the input nodes, through the
hidden nodes (if any), and to the output nodes.
 The predicted value of the network is compared to the expected output, and an
error is calculated using a loss function. This error is then propagated back
through the whole network, one layer at a time, and the weights are updated
according to how much each contributed to the error.
REPRESENTATION OF A FFBP NETWORK
CONSISTING OF 2 HIDDEN LAYERS WITH 15
AND 20 NEURONS EACH
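Assuming MATLAB's Deep Learning Toolbox, the network on this slide can be created in script form (a sketch equivalent to the nntool steps described later; layer sizes are taken from the slide):

    net = feedforwardnet([15 20]);   % two hidden layers with 15 and 20 neurons
    view(net)                        % draw the layer diagram of the network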
What is an epoch?

 One round of updating the network for the entire training dataset is
called an epoch.

 A network may be trained for tens, hundreds, or many thousands of
epochs, depending on the complexity of the artificial neural network.
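In MATLAB the epoch budget is an ordinary training parameter; 1000 below is an arbitrary example value, not a recommendation from the slides:

    net = feedforwardnet([15 20]);
    net.trainParam.epochs = 1000;    % train for at most 1000 epochs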
DATA OBSERVED FROM THE EXPERIMENT CAN BE DIVIDED INTO INPUTS AND OUTPUTS,
NORMALIZED, AND SPLIT INTO TWO SETS FOR TRAINING AND TESTING PURPOSES.

THE IDEAL WAY OF SPLITTING THE DATA IS IN A 70/30 FASHION, WITH 70% OF THE
DATA USED FOR TRAINING AND THE REMAINING 30% FOR TESTING, CHOSEN RANDOMLY FOR
BETTER TRAINING.
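A hedged sketch of this normalization and random 70/30 split using the toolbox functions mapminmax and dividerand; data is a placeholder matrix with one sample per column:

    data = rand(8, 25);                              % hypothetical raw samples
    normData = mapminmax(data, 0, 1);                % scale each row to [0, 1]
    [trainInd, ~, testInd] = dividerand(size(normData, 2), 0.7, 0, 0.3);
    trainData = normData(:, trainInd);               % 70% chosen at random for training
    testData  = normData(:, testInd);                % remaining 30% for testing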
TRAINING AND TESTING DATA VARIABLES ARE CREATED AND
IMPORTED INTO THE NNtool WINDOW. TEST OUTPUTS ARE
STORED SEPARATELY IN AN EXCEL SHEET TO COMPARE THEM WITH
THE SIMULATION DATA AND FIND THE CORRELATION
COEFFICIENT (R).

THE NETWORK IS ALSO CREATED, AND THE I/O DATA ARE SELECTED ALONG WITH THE
TRAINING FUNCTION, THE NUMBER OF LAYERS (INCLUDING THE O/P LAYER), AND THE
CONFIGURATION OF EACH HIDDEN LAYER.
TRAINING INFO AND PARAMETERS ARE ENTERED FOR THE NETWORK. PARAMETERS VARY DEPENDING
ON THE COMPLEXITY OF THE NETWORK. DETAILS ARE FILLED IN AND THE NETWORK IS TRAINED.
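A scripted equivalent of entering this information in nntool (a sketch: the training function and parameter values are illustrative, and trainIn/trainOut are placeholder variables):

    net = feedforwardnet([15 20], 'trainlm');   % Levenberg-Marquardt training function
    net.trainParam.epochs = 500;                % maximum number of epochs
    net.trainParam.goal   = 1e-5;               % stop early once MSE reaches this goal
    net = train(net, trainIn, trainOut);        % opens the training window and trains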
THE SIMULATION IS THEN PERFORMED AND THE OUTPUTS ARE STORED IN A VARIABLE,
WHICH IS EXPORTED INTO THE WORKSPACE.

THE CORRELATION COEFFICIENT (AVERAGE R VALUE) IS THEN FOUND FOR THE NETWORK.
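The simulation and correlation step might look as follows in script form (a sketch; testIn and testOut are placeholders for the held-out test set):

    simOut = sim(net, testIn);                  % simulate the trained network
    Rvals = zeros(size(testOut, 1), 1);
    for k = 1:size(testOut, 1)                  % one R value per output variable
        C = corrcoef(simOut(k, :), testOut(k, :));
        Rvals(k) = C(1, 2);
    end
    Ravg = mean(Rvals);                         % average R value for the network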
• Backpropagation (backward propagation) is an important mathematical tool for improving
the accuracy of predictions in data mining and machine learning. Essentially,
backpropagation is an algorithm used to calculate derivatives quickly.

• Artificial neural networks use backpropagation as a learning algorithm: it computes the
gradient of the error with respect to the weights, which drives gradient descent. Desired
outputs are compared to the achieved system outputs, and the system is then tuned by
adjusting connection weights to narrow the difference between the two as much as possible.
The algorithm gets its name because the weights are updated backwards, from output
towards input.
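The resulting update, sketched in MATLAB for a single weight (eta is the learning rate; the numbers are illustrative):

    eta  = 0.1;              % learning rate
    w    = 0.5;              % current weight value
    dEdw = 0.2;              % gradient of the error w.r.t. this weight, from backpropagation
    w    = w - eta * dEdw;   % nudge the weight against the gradient to reduce the error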
• The first successful example of a recurrent network trained with backpropagation was
introduced by Jeffrey Elman, the so-called Elman Network (Elman, 1990).

• The physical layout of the Elman neural network is divided broadly into four layers: the
input layer, the hidden layer, the undertake layer (also known as the context layer), and the
output layer.
• Connections among the input layer, the hidden layer, and the output layer can be considered a feed-
forward network; this part is similar to the traditional multi-layer neural network.
• Besides the above three layers, there exists another layer named the context layer. Its inputs come
from the outputs of the hidden layer, and it stores the hidden layer's output values from the previous
time step, which is why it is called the context layer. The purpose of the context layer is to memorize
the hidden layer output. Since the network is based on a backpropagation neural network, the output
of the hidden layer connects back to its own input via the delay and memory of the context layer.
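MATLAB's toolbox provides an Elman network directly via elmannet; a hedged sketch with random placeholder sequences:

    X = con2seq(rand(2, 50));                 % placeholder input sequence
    T = con2seq(rand(1, 50));                 % placeholder target sequence
    net = elmannet(1:2, 10);                  % delays 1:2, 10 hidden neurons
    [Xs, Xi, Ai, Ts] = preparets(net, X, T);  % shift the data for the delays
    net = train(net, Xs, Ts, Xi, Ai);         % train with backpropagation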
• nntool opens the Network/Data Manager window, which
allows you to import, create, use, and export neural
networks and data.
• MATLAB for Cascade Forward Backpropagation
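A scripted counterpart (a sketch; trainIn/trainOut/testIn are placeholders) uses cascadeforwardnet, in which each layer also receives direct connections from the input:

    net = cascadeforwardnet([15 20]);      % cascade-forward with two hidden layers
    net = train(net, trainIn, trainOut);   % backpropagation training
    out = net(testIn);                     % simulate on the test inputs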
LINEAR NORMALIZED DATA
Inputs (I/P): Blend, Load | Outputs (O/P): BTE, CO2, CO, SMOKE, HC, Pressure
S.No  Blend  Load  BTE  CO2  CO  SMOKE  HC  Pressure
1 0.060302 0 0 0.181671 0.05914 0.063239 0.049219 0.178847
2 0.060302 0.08165 0.128755 0.215314 0.11828 0.112425 0.1125 0.194089
3 0.060302 0.163299 0.193133 0.215314 0.177421 0.140532 0.165234 0.207223
4 0.060302 0.244949 0.25751 0.248957 0.215055 0.196745 0.229687 0.215914
5 0.060302 0.326599 0.261534 0.255686 0.279572 0.274037 0.301172 0.222854
6 0.120605 0 0 0.174943 0.11828 0.105399 0.052734 0.17515
7 0.120605 0.08165 0.14485 0.195129 0.172044 0.126479 0.116016 0.191624
8 0.120605 0.163299 0.225322 0.201857 0.211471 0.161612 0.16875 0.205504
9 0.120605 0.244949 0.249463 0.222043 0.263443 0.196745 0.235547 0.215038
10 0.120605 0.326599 0.281652 0.228772 0.057348 0.295117 0.310547 0.216984
11 0.180907 0 0 0.168214 0.121865 0.098372 0.05625 0.174275
12 0.180907 0.08165 0.136802 0.174943 0.173836 0.147558 0.119531 0.189906
13 0.180907 0.163299 0.209227 0.201857 0.224016 0.175665 0.172265 0.204629
14 0.180907 0.244949 0.253084 0.215314 0.28674 0.203771 0.242578 0.212898
15 0.180907 0.326599 0.265558 0.222043 0.134409 0.302143 0.317578 0.215752
16 0.241209 0 0 0.161486 0.172044 0.119452 0.058594 0.173432
17 0.241209 0.08165 0.120708 0.154757 0.211471 0.133505 0.123047 0.18903
18 0.241209 0.163299 0.193133 0.1884 0.263443 0.168638 0.174609 0.203753
19 0.241209 0.244949 0.233369 0.208586 0.206095 0.238904 0.247265 0.209979
20 0.241209 0.326599 0.247532 0.222043 0.225808 0.316197 0.307031 0.213093
21 0.301511 0 0.112661 0.148029 0.147922 0.147558 0.062109 0.172556
22 0.301511 0.08165 0.177038 0.1413 0.245521 0.112425 0.127734 0.187182
23 0.301511 0.163299 0.249463 0.181671 0.188173 0.161612 0.180469 0.201613
24 0.301511 0.244949 0.227736 0.198762 0.116488 0.281064 0.251953 0.205958
25 0.301511 0.326599 0.27441 0.218679 0.318999 0.344303 0.3 0.208034
