
ECO-598 - Open Elective

Artificial Intelligence
And
Machine Learning

Prof. Neelapala Anil Kumar,


Department of ECE,
ACED, Alliance University.
Syllabus

Module 1: Foundations for AI and Neural Networks
Application areas of AI, AI Basics (Divide and Conquer, Greedy, Branch and Bound, Gradient Descent), NN basics (Perceptron and MLP, FFN, back propagation), Convolutional Neural Networks, Recurrent Neural Networks, Representation, Intuition, Multiclass classification.

Module 2: Introduction to Machine Learning
Introduction to Learning and Machine Learning, Components of the Learning process, Applications of Machine Learning, Types of data, Classification of data with real examples, Regression, Learning and its types, Working of machine learning.

Module 3: Types of Machine Learning Algorithms
Supervised Learning: Random Forest, Decision Trees, Logistic Regression, Support Vector Machines, KNN, Naïve Bayes, Regression, Linear Regression, Polynomial Regression.
Unsupervised Learning: Clustering (K-Means, K-Nearest Neighbours), Association Rule Learning, Dimensionality Reduction (PCA, SVD).
Reinforcement Learning: Markov Decision Processes, Monte Carlo Prediction.

Module 4: Introduction to Deep Learning
Introduction, Need for Deep Learning, Deep Learning Models (Restricted Boltzmann Machines), Additional Deep Learning Models (Autoencoders).

Module 5: Applications using ML and AI
Image recognition, Speech recognition, Medical Diagnosis, Self-driving cars, Product recommendations, Traffic Predictions, Online fraud detection, Automatic Language Translation, Process for Machine Learning projects.
Essential Readings:
1. Tom M. Mitchell, "Machine Learning", McGraw Hill, 2013.
2. Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2007.
3. Stuart Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach".
4. Rich and Knight, "Artificial Intelligence", 2nd Edition.

Additional Readings:
1. Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani, "An Introduction to Statistical Learning: with Applications in R", Springer, 2016.
2. Andreas Muller, "Introduction to Machine Learning with Python: A Guide for Data Scientists", Shroff/O'Reilly, First edition, 2016.

Recommended Digital Library:
• IEEE Xplore
1. https://nptel.ac.in/courses/106106139/
2. https://www.javatpoint.com/regression-analysis-in-machine-learning
3. Andrew Ng's online course, Coursera.
4. https://www.analyticsvidhya.com/blog/2017/09/common-machine-learning-algorithms/
5. Machine Learning for All, Coursera.
Foundations for AI and Neural Networks
• Application areas of AI
• AI Basics (Divide and Conquer, Greedy, Branch and Bound, Gradient Descent)
• NN basics (Perceptron and MLP, FFN, Back propagation)
• Convolutional Neural Networks
• Recurrent Neural Networks
• Intuition, Multiclass classification
• Back Propagation Algorithm
• Neural Network Training
• To understand the basics and application areas of AI.
• To understand classifications of Neural Networks.
• To understand algorithms associated with Neural Networks.
• To understand the training process and procedures of Neural Networks.
Introduction
https://www.youtube.com/watch?v=4NsilUpnRY0
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence.
2. Solve knowledge-intensive tasks.
3. Make an intelligent connection between perception and action.
4. Build a machine which can perform tasks that require human intelligence, such as:
   o Proving a theorem
   o Playing chess
   o Planning a surgical operation
   o Driving a car in traffic
5. Create systems which can exhibit intelligent behaviour, learn new things by themselves, demonstrate, explain, and advise their users.
Disadvantages of AI:
• High cost: requires a lot of maintenance to meet current world requirements.
• Can't think outside the box: a robot will only do the work for which it is trained or programmed.
• No feelings and emotions: an AI machine cannot form any kind of emotional attachment with a human, and may sometimes be harmful to users if proper care is not taken.
• Increased dependency on machines: people become dependent on devices and hence lose their mental capabilities.
• No original creativity: AI machines cannot beat the creative and imaginative power of human intelligence.
Divide and Conquer:
1. Divide: break the original problem into a set of subproblems.
2. Conquer: solve every subproblem individually, recursively.
3. Combine: put together the solutions of the subproblems to get the solution to the whole problem.
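A minimal divide-and-conquer sketch using merge sort as the worked example (merge sort itself is an illustrative choice, not named on the slide):

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    # 1. Divide: split the problem into two subproblems.
    mid = len(arr) // 2
    # 2. Conquer: solve each subproblem recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # 3. Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]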
• Branch and Bound is a method to systematically search a solution space.
• Just like backtracking, we use bounding functions to avoid generating subtrees that do not contain an answer node.
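A minimal branch-and-bound sketch, using the 0/1 knapsack problem as an assumed example: the bounding function estimates the best value still reachable from a node, and subtrees that cannot beat the best known solution are never generated:

def knapsack_bb(values, weights, capacity):
    # Sort items by value/weight ratio so the fractional bound is valid.
    items = sorted(zip(values, weights),
                   key=lambda it: it[0] / it[1], reverse=True)
    best = [0]

    def bound(i, value, room):
        # Optimistic bound: fill the remaining room greedily,
        # allowing a fraction of the last item.
        b = value
        while i < len(items) and items[i][1] <= room:
            b += items[i][0]
            room -= items[i][1]
            i += 1
        if i < len(items):
            b += items[i][0] * room / items[i][1]
        return b

    def branch(i, value, room):
        best[0] = max(best[0], value)
        if i == len(items):
            return
        # Prune: skip subtrees whose bound cannot beat the best solution.
        if bound(i, value, room) <= best[0]:
            return
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # take item i
        branch(i + 1, value, room)              # skip item i

    branch(0, 0, capacity)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220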
Batch Gradient Descent vs. Stochastic Gradient Descent
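Batch gradient descent makes one parameter update per pass over the full dataset; stochastic gradient descent updates after every individual example. A minimal sketch on a linear-regression loss (the synthetic data, learning rates, and epoch counts are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(scale=0.1, size=100)
Xb = np.c_[X, np.ones(len(X))]   # append a bias column

epochs = 50

# Batch gradient descent: one update per epoch from the full dataset.
w_batch = np.zeros(2)
for _ in range(epochs):
    grad = 2.0 / len(Xb) * Xb.T @ (Xb @ w_batch - y)
    w_batch -= 0.1 * grad

# Stochastic gradient descent: one update per (shuffled) example.
w_sgd = np.zeros(2)
for _ in range(epochs):
    for i in rng.permutation(len(Xb)):
        grad = 2.0 * Xb[i] * (Xb[i] @ w_sgd - y[i])
        w_sgd -= 0.02 * grad

print("batch:", w_batch)   # both should land near [3.0, 0.5]
print("sgd:  ", w_sgd)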
• Neural networks are networks of neurons. There are two basic goals for neural network research:
• Brain modelling: studying how real brains work. This can potentially help us understand the nature of perception, action, learning and memory, thought, and intelligence.
• Artificial system construction: building systems that solve practical problems. Brain modelling calls for biological plausibility, while artificial system construction calls for computational efficiency.
Sigmoid neuron
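A minimal sketch of a single sigmoid neuron (the input values, weights, and bias below are illustrative assumptions):

import numpy as np

def sigmoid(z):
    # 1 / (1 + exp(-z)): maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_neuron(x, w, b):
    # Weighted sum of inputs plus bias, passed through the sigmoid.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.0])   # inputs
w = np.array([0.8, 0.2])    # weights
b = 0.1                     # bias
print(sigmoid_neuron(x, w, b))   # sigmoid(0.3) ≈ 0.574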
• In a neural network we update the biases and weights based on the error.
• This weight and bias updating process is known as "Back Propagation".
• Back-propagation (BP) algorithms work by determining the loss (or error) at the output and then propagating it back into the network.
• The weights are updated to minimize the error resulting from each neuron.
• Consequently, the first step in minimizing the error is to determine the gradient (derivatives) of each node w.r.t. the final output.
• One round of forward propagation and back propagation is known as one training iteration, aka an "Epoch".
• We take input X as an input matrix and y as an output matrix.
• Then we initialize weights and biases with random values (this is a one-time initialization; in the next iteration, we will use the updated weights and biases).
• Let us define:
   wh as the weight matrix to the hidden layer
   bh as the bias matrix to the hidden layer
   wout as the weight matrix to the output layer
   bout as the bias matrix to the output layer
• Then we take the matrix dot product of the input and the weights assigned to the edges between the input and hidden layer, and add the biases of the hidden layer neurons to the respective inputs; this is known as a linear transformation.
• Perform a non-linear transformation using an activation function (sigmoid). Sigmoid returns the output as 1/(1 + exp(-x)).
• Then perform a linear transformation on the hidden layer activation (take the matrix dot product with the weights and add the bias of the output layer neuron), then apply an activation function (again sigmoid here, but any other activation function can be used depending on the task) to predict the output.
• All the above steps are known as "Forward Propagation" (see the sketch below).
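A minimal NumPy sketch of the forward pass just described, using the wh/bh/wout/bout names from the slide (the XOR-style data and network sizes are illustrative assumptions):

import numpy as np

def sigmoid(z):
    # Non-linear activation: squashes inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # input matrix
y = np.array([[0], [1], [1], [0]])               # output matrix

# One-time random initialization of weights and biases.
wh = rng.normal(size=(2, 3))     # weights: input -> hidden
bh = rng.normal(size=(1, 3))     # biases of the hidden layer
wout = rng.normal(size=(3, 1))   # weights: hidden -> output
bout = rng.normal(size=(1, 1))   # bias of the output layer

# Linear transformation, then non-linear (sigmoid) activation.
hidden = sigmoid(X @ wh + bh)
# The same pattern at the output layer gives the prediction.
output = sigmoid(hidden @ wout + bout)
print(output)   # untrained predictions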
• Compare the prediction with the actual output and calculate the gradient of the error (Actual - Predicted).
• Compute the slope/gradient of the hidden and output layer neurons (to compute the slope, we calculate the derivatives of the non-linear activations at each layer for each neuron).
• Then compute the change factor (delta) at the output layer, which depends on the gradient of the error multiplied by the slope of the output layer activation.
• At this step, the error propagates back into the network, which gives the error at the hidden layer. For this, we take the dot product of the output layer delta with the weight parameters of the edges between the hidden and output layer (wout.T).
• Compute the change factor (delta) at the hidden layer: multiply the error at the hidden layer by the slope of the hidden layer activation.
• Then update the weights at the output and hidden layer: the weights in the network are updated from the errors calculated for the training example(s).
• Finally, update the biases at the output and hidden layer: the biases in the network are updated from the aggregated errors at each neuron (see the sketch after this list):
   bias at output_layer = bias at output_layer + row-wise sum of delta of output_layer * learning_rate
   bias at hidden_layer = bias at hidden_layer + row-wise sum of delta of hidden_layer * learning_rate
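A matching sketch of the backward pass, continuing directly from the forward-pass variables above (the learning rate is an assumed value):

lr = 0.1

# Gradient of the error at the output (Actual - Predicted).
error = y - output

# Slope of the activations: the derivative of sigmoid is a * (1 - a).
slope_output = output * (1 - output)
slope_hidden = hidden * (1 - hidden)

# Delta at the output layer: error times slope of output activation.
delta_output = error * slope_output

# Propagate the error back to the hidden layer via wout.T ...
error_hidden = delta_output @ wout.T
# ... and form the hidden-layer delta.
delta_hidden = error_hidden * slope_hidden

# Update weights from the deltas.
wout += hidden.T @ delta_output * lr
wh += X.T @ delta_hidden * lr

# Update biases from the row-wise sums of the deltas.
bout += delta_output.sum(axis=0, keepdims=True) * lr
bh += delta_hidden.sum(axis=0, keepdims=True) * lr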
• Forward propagation
• Backward propagation
• Loss function
• Training function
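Putting the four pieces together, a complete minimal training script (the XOR data, network size, learning rate, and epoch count are all illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

wh, bh = rng.normal(size=(2, 3)), rng.normal(size=(1, 3))
wout, bout = rng.normal(size=(3, 1)), rng.normal(size=(1, 1))
lr = 0.5

for epoch in range(5000):
    # Forward propagation
    hidden = sigmoid(X @ wh + bh)
    output = sigmoid(hidden @ wout + bout)

    # Loss function (mean squared error, used here for monitoring)
    loss = np.mean((y - output) ** 2)

    # Backward propagation (deltas use the sigmoid slope a * (1 - a))
    delta_out = (y - output) * output * (1 - output)
    delta_hid = (delta_out @ wout.T) * hidden * (1 - hidden)

    # Parameter updates
    wout += hidden.T @ delta_out * lr
    bout += delta_out.sum(axis=0, keepdims=True) * lr
    wh += X.T @ delta_hid * lr
    bh += delta_hid.sum(axis=0, keepdims=True) * lr

# The loss should shrink as training proceeds.
print("final loss:", loss)
print("predictions:", output.round(3).ravel())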
• CNNs are used in various classification tasks involving images, audio, and words. For image classification we use a Convolutional Neural Network.
• Input Layer: this layer holds the raw input image, with width 32, height 32, and depth 3.
• Convolution Layer: this layer computes the output volume by computing the dot product between each filter and each image patch. Suppose we use a total of 12 filters for this layer; we'll get an output volume of dimension 32 x 32 x 12. (Figure: Convolution Operation)
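A small helper for the output-volume arithmetic in the example above, using the standard formula out = (W - F + 2P)/S + 1 (the 3x3 filter size, stride 1, and padding 1 are assumptions; they keep the 32x32 spatial size from the slide):

def conv_output_shape(w, h, f, n_filters, stride=1, pad=0):
    # Standard convolution arithmetic: out = (W - F + 2P) / S + 1
    out_w = (w - f + 2 * pad) // stride + 1
    out_h = (h - f + 2 * pad) // stride + 1
    return out_w, out_h, n_filters

# 32x32x3 input, 12 filters of size 3x3, stride 1, padding 1 -> (32, 32, 12)
print(conv_output_shape(32, 32, f=3, n_filters=12, stride=1, pad=1))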
• Fully Connected Layer: this layer is a regular neural network layer which takes input from the previous layer, computes the class scores, and outputs a 1-D array of size equal to the number of classes.
• In a plain feed-forward network, each hidden layer has its own set of weights and biases, independent of the others, i.e., the layers do not memorize previous outputs.
• An RNN converts these independent activations into dependent activations by providing the same weights and biases to all the layers, thus reducing the complexity of growing parameters, and it memorizes each previous output by giving each output as input to the next hidden layer.
• Hence these layers can be joined together such that the weights and biases of all the hidden layers are the same, forming a single recurrent layer.

Training an RNN (see the sketch below):
• A single time step of the input is provided to the network.
• Its current state is then calculated from the current input and the previous state.
• The current state ht becomes ht-1 for the next time step.
• One can go as many time steps as the problem requires, joining the information from all the previous states.
• Once all the time steps are completed, the final current state is used to calculate the output.
• The output is then compared to the actual (target) output, and an error is generated.
• The error is then back-propagated through the network to update the weights, and hence the network (RNN) is trained.
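A minimal sketch of these time steps, using the recurrence h_t = tanh(Wx·x_t + Wh·h_{t-1} + b) with shared weights across steps (the dimensions, tanh activation, and random data are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
input_dim, hidden_dim, output_dim, T = 4, 8, 2, 5

# The same weights and biases are shared across all time steps.
Wx = rng.normal(size=(hidden_dim, input_dim)) * 0.1
Wh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)
Wy = rng.normal(size=(output_dim, hidden_dim)) * 0.1

xs = rng.normal(size=(T, input_dim))   # one input per time step
h = np.zeros(hidden_dim)               # initial state

for t in range(T):
    # Current state from the current input and the previous state.
    h = np.tanh(Wx @ xs[t] + Wh @ h + b)

# The final current state is used to calculate the output.
y_pred = Wy @ h
print(y_pred)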
One vs. All
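A minimal one-vs-all sketch: train one binary classifier per class and predict the class whose classifier scores highest (scikit-learn, logistic regression, and the Iris data are assumptions, not part of the slide):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# One binary classifier per class: "this class" vs. "all the rest".
models = [LogisticRegression(max_iter=1000).fit(X, (y == c).astype(int))
          for c in classes]

# Predict by picking the class whose classifier is most confident.
scores = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
y_pred = classes[scores.argmax(axis=1)]
print("training accuracy:", (y_pred == y).mean())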
