ANN Data Report


ANN, which stands for Artificial Neural Networks, were originally motivated by the desire to create
models of natural brains. Much later it was discovered that ANNs are a very general
statistical framework for modelling posterior probabilities given a set of samples (the input
data).

The basic building block of an artificial neural network (ANN) is the neuron. A neuron is a
processing unit which has several inputs (usually more than one) and only one output. First,
each input xi is weighted by a factor wi, and the weighted sum over all inputs is calculated:

a = ∑ wi xi
Then an activation function f is applied to the result a; the neuron's output is taken to be f(a).
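This single-neuron computation can be sketched in a few lines of plain Python. The sigmoid used here is an assumed choice of activation function (the report does not name a specific f), and all numbers are illustrative:

```python
import math

def neuron_output(inputs, weights):
    # Weighted sum of inputs: a = sum(w_i * x_i)
    a = sum(w * x for w, x in zip(weights, inputs))
    # Apply the activation function f (a sigmoid, as an example choice).
    return 1.0 / (1.0 + math.exp(-a))

# Illustrative inputs x_i and weights w_i.
out = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1])
```

Here a = 0.4·0.5 + 0.3·(−1.0) + 0.1·2.0 = 0.1, and the output is f(0.1) ≈ 0.525.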
Generally, ANNs are built by arranging the neurons in layers and connecting the outputs of
neurons in one layer to the inputs of the neurons in the next layer. A network of this type
is called feedforward (a feedforward network has no feedback, i.e.
no "loops"). Note that there is no processing in layer 0; its role is just to distribute the
inputs to the next layer (data processing really starts with layer 1), and for this reason its
representation will be omitted most of the time. Variations are possible: the output of one
neuron may go to the input of any neuron, including itself; if the outputs of neurons in one
layer are fed to the inputs of neurons in previous layers, then the network is called
recurrent.

Feedback occurs when the output of a neuron goes to other neurons in the same layer.
So, to compute the output, an "activation function" is applied to the weighted sum of the inputs:

total input: a = ∑ wi xi

output = f(a) = f(∑ wi xi)

We have used a 3-layer ANN to analyse our problem; the layers are:
1. Input layer – 1
2. Hidden layer – 7 neurons
3. Target/output layer – 1
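A minimal sketch of a forward pass through such a network in plain Python. The layer sizes (784 inputs, one per pixel; 7 hidden neurons; 1 output) are taken from the report, while the random weights, zero biases, and sigmoid activation are placeholder assumptions:

```python
import math, random

random.seed(0)

def sigmoid(a):
    # Activation function f applied to the total input a.
    return 1.0 / (1.0 + math.exp(-a))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of its inputs plus a bias, then activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

n_in, n_hidden, n_out = 784, 7, 1   # sizes from the report
w1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

pixels = [random.random() for _ in range(n_in)]   # placeholder input image
hidden = layer(pixels, w1, b1)                    # forward through hidden layer
output = layer(hidden, w2, b2)                    # forward through output layer
```

Each call to `layer` is one step of forward propagation: the hidden layer performs most of the computation, and the output layer produces the final prediction.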

The working of the network can be summarised as follows:

1. The input layer receives the input data (one pixel per neuron, 784 in total) and performs no computation; the hidden layer performs most of the computations; the output layer predicts the final output.
2. Layer 1 is connected to layer 2 through channels, and a weight is assigned to each channel.
3. At the second layer, the weighted sum calculation above is used and a bias Bn is added; the activation function then decides whether the neuron is activated.
4. An activated neuron transmits its data to the next layer; this transmission of data is called forward propagation.
5. The outputs are, basically, probabilities: in the output layer, the neuron with the highest value determines the output.
6. The first output is usually a wrong prediction; the error details (whether the values are higher or lower than expected) are propagated back to the network for training and improvement.
7. Based on backpropagation, the weights are adjusted to predict accurate values; the most accurate prediction is made by the network after multiple iterations.

This is the basic working idea of an ANN. It makes a prediction on the data, then uses similar data to learn
more about it, so as to predict the most accurate value with the least error. This training process, i.e. the
forward and backward propagation of the neural network, may take hours or sometimes even
months before it predicts accurate values.
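The forward-and-backward training cycle described above can be sketched for a single sigmoid neuron on one training sample. The sample, learning rate, and squared-error loss are illustrative assumptions, not details from the report:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

x, target = [1.0, 0.5], 1.0   # one illustrative training sample
w = [0.1, -0.2]               # initial weights
lr = 0.5                      # learning rate (assumed value)

for _ in range(1000):
    # Forward propagation: weighted sum of inputs, then activation.
    a = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(a)
    # Backward propagation: gradient of the squared error w.r.t. each
    # weight, obtained by the chain rule through the sigmoid.
    error = y - target
    grad_a = error * y * (1 - y)
    w = [wi - lr * grad_a * xi for wi, xi in zip(w, x)]
```

Each iteration is one forward pass followed by one backward pass that adjusts the weights; over many iterations the output y moves toward the target, which is the training-and-improvement loop the report describes.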
