Machine Learning NN
It validates the model by checking the error between the actual and
predicted values.
• What is the need for Machine Learning?
- A huge amount of data is required.
- The more data there is, the more effective Machine Learning performance becomes.
- As the world becomes data-driven, Machine Learning is necessary in the modern era for solving various complex
problems.
a. Supervised Learning: It is a type of machine learning technique in which we train the machine by providing
labeled data, and on that basis the machine predicts the output.
b. Unsupervised Learning: In this, the machine is trained on data that is not labeled or categorized.
- The machine tries to find useful patterns and insights in the large amount of data and restructures the data on the basis of
features or similarities.
c. Reinforcement Learning: It is a learning method in which a machine gets reward points for each correct prediction and
negative points for each wrong prediction.
• The machine learns automatically from these feedback points and improves its accuracy.
• Regression Algorithms: It is a machine learning technique where the model predicts the output as a continuous
numerical value.
• Regression analysis is often used for forecasting and predictive analysis.
• It finds the relationship between a single dependent variable and one or more independent variables. For example, predicting
house prices, stock market values, or an employee's salary are the most common regression problems.
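A regression problem like the salary example above can be sketched with a simple least-squares line fit. All data values below are made up for illustration:

```python
# A minimal sketch of regression: fit a line y = m*x + b to toy data
# using the closed-form least-squares solution.

def fit_line(xs, ys):
    """Return slope m and intercept b minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Years of experience vs. salary in thousands (hypothetical numbers)
years = [1, 2, 3, 4, 5]
salary = [30, 35, 40, 45, 50]

m, b = fit_line(years, salary)
print(m, b)            # 5.0 25.0
print(m * 6 + b)       # continuous numerical prediction for 6 years: 55.0
```

The output is a continuous number, which is exactly what distinguishes regression from classification.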
• Neural networks can work for both regression and classification problems.
• A neuron is the smallest unit of a neural network; it performs certain computations to detect features in the input data.
• It accepts weighted inputs and applies an activation function to obtain the final result as output.
Feed Forward Neural Network
• The simplest form of neural network, where input data travels in one direction only, passing through artificial neural
nodes and exiting through output nodes.
• Hidden layers may or may not be present; input and output layers are always present.
• Based on this, it can be classified further as a single-layered or multi-layered feed-forward neural network.
• It has uni-directional forward propagation but no backward propagation. Weights are static here.
• It is fast, but it cannot be used for deep learning.
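The uni-directional flow described above can be sketched as a single-layer feed-forward pass with fixed (static) weights; the weight and bias values here are assumptions chosen purely for illustration:

```python
# A minimal sketch of a feed-forward pass: data flows in one direction
# from inputs to output, and the weights never change (no backpropagation).

def step(x):
    """A simple threshold activation: fires if the sum is non-negative."""
    return 1 if x >= 0 else 0

def feed_forward(inputs, weights, bias):
    # Weighted sum of inputs, then the activation function.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(weighted_sum)

print(feed_forward([1, 0], [0.6, 0.6], -0.5))  # 1
print(feed_forward([0, 0], [0.6, 0.6], -0.5))  # 0
```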
Multilayer Perceptron
• An entry point towards complex neural nets where input data travels through various layers of artificial
neurons. Every single node is connected to all neurons in the next layer which makes it a fully connected
neural network.
• Input and output layers are present, along with multiple hidden layers. It has bi-directional propagation, i.e.
forward propagation and backward propagation.
• Inputs are multiplied by weights and fed to the activation function, and in backpropagation they are
modified to reduce the loss.
• Inputs are fed into neuron 1, neuron 2 and neuron 3 as they belong to the Input Layer.
• Each neuron has a weight associated with it. When an input enters a neuron, it is multiplied by the neuron's weight.
• For instance, weight 1 will be applied to the input of Neuron 1. If weight 1 is 0.8 and the input is 1, then Neuron 1 computes:
1 * 0.8 = 0.8
• The sum of (weight * input) over the neurons in a layer is calculated. As an example, the value computed at the hidden layer in the image will be:
(Weight 4 x Input to Neuron 4) + (Weight 5 x Input to Neuron 5)
• Finally, an activation function is applied. The output calculated by the neurons becomes the input to the activation function, which then computes
a new output.
• The output from the activation function is then fed to the subsequent layers.
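The weighted-sum-and-activation steps above can be sketched directly. Weight 1 = 0.8 and input = 1 come from the example; the hidden-layer weights and inputs are assumed values, and sigmoid is used here only as an example activation function:

```python
import math

def sigmoid(x):
    """Example activation function squashing values into (0, 1)."""
    return 1 / (1 + math.exp(-x))

# Neuron 1's contribution: input * weight
print(1 * 0.8)  # 0.8

# Hidden-layer value: (weight4 * input_to_neuron4) + (weight5 * input_to_neuron5)
w4, in4 = 0.5, 0.8   # assumed values for illustration
w5, in5 = 0.3, 0.4
weighted_sum = w4 * in4 + w5 * in5
print(weighted_sum)           # 0.52
print(sigmoid(weighted_sum))  # activation output fed to the next layer
```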
Activation Functions in Neural Networks
• An activation function is a mathematical formula that is activated under certain circumstances. When a neuron computes the
weighted sum of its inputs, the result is passed to the activation function, which checks whether the computed value is above the
required threshold.
• Activation functions are important for learning complicated, non-linear functional mappings between the inputs
and the output variable. They introduce non-linear properties to the network.
• Specifically, in a NN we take the sum of products of the inputs (X) and their corresponding weights (W) and apply an activation
function f(x) to it to get the output of that layer, which is then fed as input to the next layer.
Y = f (Σ(inputs*weights)+bias)
• Here f(x) is the activation function, which can be one of several mathematical functions.
• Some of the popular activation functions are,
• Sigmoid function/ Logistic function
• Tanh function
• ReLU function,etc
Sigmoid Function/Logistic Function
• It is an activation function of the form f(x) = 1 / (1 + exp(-x)).
• Its range is between 0 and 1.
• Plotting the sigmoid for different values of x gives a smooth S-shaped curve.
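A minimal sketch of the sigmoid and its 0-to-1 range:

```python
import math

def sigmoid(x):
    """Sigmoid/logistic function: f(x) = 1 / (1 + exp(-x))."""
    return 1 / (1 + math.exp(-x))

print(sigmoid(0))    # 0.5 (midpoint of the S-curve)
print(sigmoid(10))   # very close to 1
print(sigmoid(-10))  # very close to 0
```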
Tanh function
• Its mathematical formula is f(x) = (exp(2x) - 1) / (exp(2x) + 1).
• Plotting tanh for different values of x gives an S-shaped curve whose range is between -1 and 1.
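A minimal sketch of tanh using the formula above, checked against Python's built-in math.tanh:

```python
import math

def tanh(x):
    """tanh via the formula: (exp(2x) - 1) / (exp(2x) + 1)."""
    e2x = math.exp(2 * x)
    return (e2x - 1) / (e2x + 1)

print(tanh(0))                      # 0.0 (curve passes through the origin)
print(abs(tanh(3) - math.tanh(3)))  # agrees with the library version
```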
ReLu -Function
• It is an activation function of the form R(x) = max(0, x),
i.e. if x < 0, R(x) = 0,
and if x >= 0, R(x) = x.
• Hence, from the mathematical form of this function, we can see that it is very simple and efficient.
• But its limitation is that it should only be used within the hidden layers of a neural network model.
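ReLU's definition translates directly into code:

```python
def relu(x):
    """ReLU: R(x) = max(0, x) - zero for negatives, identity otherwise."""
    return max(0, x)

print(relu(-2.5))  # 0
print(relu(3.7))   # 3.7
```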
Choosing the right Activation Function
• Depending upon the properties of the problem, we can make a better choice for quick convergence of the network.
• Sigmoid and tanh functions are sometimes avoided due to their limitations (for example, vanishing gradients).
• The ReLU function is a general-purpose activation function and is used in most cases these days.
• After generating a predicted value for the actual output value, the model investigates the difference between
the actual output and the predicted output value.
• Different loss functions will give different errors for the same prediction, and thus have a considerable effect
on the performance of the model.
• One of the most widely used loss functions is mean squared error, which calculates the square of the difference
between the actual value and the predicted value.
• Different loss functions are used to deal with different type of tasks, i.e. regression and classification.
Error = P – P’ (where P is the actual value and P’ is the predicted value)
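The mean squared error described above can be sketched as follows; the actual and predicted values are made-up examples:

```python
# Mean squared error: the average of squared differences between
# actual values and predicted values.

def mse(actual, predicted):
    return sum((p - q) ** 2 for p, q in zip(actual, predicted)) / len(actual)

actual    = [3.0, 5.0, 2.0]
predicted = [2.5, 5.0, 4.0]
print(mse(actual, predicted))  # (0.25 + 0 + 4) / 3, about 1.4167
```

Squaring the differences keeps all errors positive and penalizes large errors more heavily, which is why MSE is a common choice for regression tasks.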