P 1.1.2 ML Types
Course Objectives
• CO-2: To understand the history and development of Machine Learning.
• CO-3: To provide a comprehensive foundation to Machine Learning and Optimization methodology with applications.
• CO-4: To study learning processes: supervised and unsupervised, deterministic and statistical learning.
• CO-5: To understand modern techniques and practical trends of Machine Learning, with knowledge of Machine learners and ensemble learning.
Syllabus
• UNIT-I
• Chapter-1
Fundamentals of Machine Learning: Introduction to Machine Learning (ML),
Different Types of Machine Learning, Machine Learning Life Cycle: Data Discovery,
Exploratory Analysis, Data Preparation, Model Planning, Model Building, Model
Evaluation, Real-World Case Study. Foundation of ML: ML Techniques.
Deep Learning
• Data Preparation: Format and engineer the data into the optimal format, extracting
important features and performing dimensionality reduction.
• Training: Also known as the fitting stage, this is where the Machine Learning algorithm
actually learns by showing it the data that has been collected and prepared.
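The two stages above can be sketched in plain Python. This is a minimal illustration only: the data, the min-max scaling, and the nearest-class-mean "model" are all made up here, not taken from the slides.

```python
# A minimal, library-free sketch of the two stages above, on made-up data.
# "Preparation" scales features to [0, 1]; "training" fits one mean per class.
samples = [([2.0, 40.0], "cat"), ([3.0, 60.0], "cat"),
           ([8.0, 50.0], "dog"), ([9.0, 70.0], "dog")]

# Data preparation: min-max scale each feature column.
cols = list(zip(*[x for x, _ in samples]))
mins = [min(c) for c in cols]
maxs = [max(c) for c in cols]

def prepare(x):
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(x, mins, maxs)]

X = [prepare(x) for x, _ in samples]
y = [label for _, label in samples]

# Training ("fitting"): learn one mean vector per class from the prepared data.
means = {}
for label in set(y):
    rows = [x for x, l in zip(X, y) if l == label]
    means[label] = [sum(c) / len(c) for c in zip(*rows)]

def predict(x):
    x = prepare(x)  # the same preparation must be applied at prediction time
    return min(means, key=lambda l: sum((a - b) ** 2
                                        for a, b in zip(x, means[l])))

print(predict([2.5, 55.0]))  # a point near the "cat" examples
```

Note that the preparation learned on the training data (the min/max values) is reused at prediction time; preparing training and test data inconsistently is a common pitfall.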
Approaches
• Supervised Learning
• Unsupervised Learning
• Semi-supervised Learning
• Reinforcement Learning
Supervised Learning
• In supervised learning, the goal is to learn the mapping (the rules) between a set
of inputs and outputs.
• For example, the inputs could be the weather forecast, and the outputs would be the
visitors to the beach.
• The goal in supervised learning would be to learn the mapping that describes the
relationship between temperature and other weather conditions and the number of beach
visitors.
• So here number of visitors (dependent variable) will be dependent on weather conditions
(independent variable).
• The output from a supervised Machine Learning model could be a numeric value within a
range, e.g. [500, 2000], for the number of visitors to the beach.
• This is called a regression problem.
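The beach-visitors mapping above can be sketched as a least-squares line fit. The numbers below are invented for illustration; they are not from the slides.

```python
# A least-squares sketch of the beach-visitors regression described above.
# Made-up data: forecast temperature (input) -> number of visitors (output).
temps    = [20.0, 24.0, 28.0, 32.0, 36.0]
visitors = [600.0, 900.0, 1200.0, 1500.0, 1800.0]

n = len(temps)
mx = sum(temps) / n
my = sum(visitors) / n

# Closed-form slope and intercept of the best-fit line y = slope*x + intercept.
slope = (sum((t - mx) * (v - my) for t, v in zip(temps, visitors))
         / sum((t - mx) ** 2 for t in temps))
intercept = my - slope * mx

def predict(temp):
    # The learned mapping from weather (temperature) to visitor numbers.
    return slope * temp + intercept

print(predict(30.0))
```

Because the output is a number rather than a class label, this is regression, not classification.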
Supervised Learning-Classification
• Classification is used to group the similar data points into different sections in order to
classify them.
• Machine Learning is used to find the rules that explain how to separate the different data
points.
• Classification approaches use data and answers to discover rules that linearly separate data
points.
• Linear separability is a key concept in machine learning.
• Classification approaches try to find the best way to separate data points with a line.
• The lines drawn between classes are known as the decision boundaries.
• The entire area that is chosen to define a class is known as the decision surface.
• The decision surface defines that if a data point falls within its boundaries, it will be
assigned a certain class.
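One classic way to learn such a linear decision boundary is the perceptron rule. The slides do not name a specific algorithm, so this is purely an illustrative sketch on made-up 2-D points.

```python
# A sketch of learning a linear decision boundary with the perceptron rule.
# Made-up 2-D points, labelled -1 (class 1) and +1 (class 2).
points = [((1.0, 1.0), -1), ((2.0, 1.5), -1),   # class 1
          ((4.0, 4.5), 1), ((5.0, 4.0), 1)]     # class 2

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                              # a few passes over the data
    for (x1, x2), label in points:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified
            # Nudge the boundary towards classifying this point correctly.
            w[0] += label * x1
            w[1] += label * x2
            b += label

def side(x1, x2):
    # Which side of the decision boundary w.x + b = 0 the point falls on.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(side(1.5, 1.0), side(4.5, 4.2))
```

The line `w[0]*x1 + w[1]*x2 + b = 0` is the decision boundary; the region on each side of it is the decision surface for that class.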
Supervised Learning-Classification
• Binary Classification
• Multi-Class Classification
• Multi-Label Classification
• Imbalanced Classification
[Figure: two linearly separable classes, Class 1 and Class 2]
Binary Classification
• Binary Classification refers to those classification tasks that have two class labels.
• Example: Email spam detection (spam or not).
• Typically, binary classification tasks involve one class that is the normal state and
another class that is the abnormal state.
• For example “not spam” is the normal state and “spam” is the abnormal state.
• Another example is “cancer not detected” is the normal state of a task that involves a
medical test and “cancer detected” is the abnormal state.
• The class for the normal state is assigned the class label 0 and the class with the
abnormal state is assigned the class label 1.
Multi-Class Classification
• Multi-Class Classification refers to those classification tasks that have more than two
class labels.
• Examples include:
• Face classification.
• Plant species classification.
• Optical character recognition.
Multi-Label Classification
• Multi-Label Classification refers to classification tasks where two or more class labels
may be predicted for each example.
• Consider the example of photo classification, where a given photo may have multiple
objects in the scene and a model may predict the presence of multiple known objects in
the photo, such as “bicycle,” “apple,” “person,” etc.
• This is unlike binary classification and multi-class classification, where a single class
label is predicted for each example.
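The contrast can be sketched with a hypothetical model that outputs one score per class for a photo: multi-class prediction takes the single best class, while multi-label prediction keeps every class above a threshold. The scores and the 0.5 threshold below are made up.

```python
# A sketch contrasting multi-class and multi-label prediction, assuming a
# hypothetical model that gives one score per class for an input photo.
scores = {"bicycle": 0.81, "apple": 0.07, "person": 0.66}

# Multi-class: exactly one label -- pick the highest-scoring class.
multi_class = max(scores, key=scores.get)

# Multi-label: every class whose score clears a threshold may be present.
multi_label = sorted(c for c, s in scores.items() if s >= 0.5)

print(multi_class, multi_label)
```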
Imbalanced Classification
• Imbalanced Classification refers to classification tasks where the number of examples
in each class is unequally distributed.
• Typically, imbalanced classification tasks are binary classification tasks where the
majority of examples in the training dataset belong to the normal class and a minority of
examples belong to the abnormal class.
• Examples include:
• Fraud detection.
• Outlier detection.
• Medical diagnostic tests.
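Imbalance is easy to see by counting labels. The counts below are invented fraud-detection labels (0 = normal, 1 = fraud), chosen only to show a typical skew:

```python
# A quick sketch of spotting class imbalance by counting labels.
# Made-up fraud-detection labels: 0 = normal (majority), 1 = fraud (minority).
from collections import Counter

labels = [0] * 980 + [1] * 20
counts = Counter(labels)
ratio = counts[0] / counts[1]   # majority-to-minority ratio

print(counts[0], counts[1], ratio)
```

With a 49:1 skew like this, a model that always predicts "normal" is 98% accurate yet useless, which is why imbalanced tasks need care in evaluation.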
Supervised Learning-Regression
• The difference between classification and regression is that regression outputs a
number rather than a class.
Unsupervised Learning
• In unsupervised learning, only input data is provided in the examples.
• There are no labelled example outputs to aim for.
• But it may be surprising to know that it is still possible to find many interesting and
complex patterns hidden within data without any labels.
• An example of unsupervised learning in real life would be sorting different colour coins
into separate piles. Nobody taught you how to separate them, but by just looking at their
features such as colour, you can see which colour coins are associated and cluster them
into their correct groups.
Unsupervised Learning
• Unsupervised machine learning finds all kinds of unknown patterns in data.
• Unsupervised methods help you to find features which can be useful for categorization.
• Unsupervised learning can take place in real time, so the input data can be analyzed and
grouped without being labelled in advance.
• It is easier to get unlabeled data from a computer than labeled data, which needs manual
intervention.
Unsupervised Learning-Clustering
• Unsupervised learning is mostly used for clustering.
• Clustering is the act of creating groups with differing characteristics.
• Clustering attempts to find various subgroups within a dataset.
• As this is unsupervised learning, we are not restricted to any set of labels and are free to
choose how many clusters to create.
• This is both a blessing and a curse.
• Picking a model that has the correct number of clusters (complexity) has to be conducted
via an empirical model selection process.
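The clustering idea above can be sketched with k-means, a common clustering algorithm (the slides do not name one, so this is purely illustrative). The 1-D values, the choice k = 2, and the initialisation are all made up, and choosing k is exactly the model-selection freedom described above.

```python
# A library-free sketch of clustering 1-D values with k-means,
# e.g. a coin "colour" measured as a single number.
values = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
k = 2
centers = [values[0], values[3]]          # simple deterministic initialisation

for _ in range(10):
    # Assignment step: each point joins its nearest centre.
    clusters = [[] for _ in range(k)]
    for v in values:
        nearest = min(range(k), key=lambda i: abs(v - centers[i]))
        clusters[nearest].append(v)
    # Update step: each centre moves to the mean of its cluster.
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 2) for c in centers))
```

No labels were used: the two groups emerge purely from the structure of the data, like sorting coins by colour without being told the colour names.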
Unsupervised Learning-Association
• In Association Learning you want to uncover the rules that describe your data.
• For example, if a person watches video A they will likely watch video B.
• Association rules are perfect for examples such as this where you want to find related
items.
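The video rule above can be sketched by counting co-occurrences in watch histories. The histories and the support/confidence measures shown are a standard way to score association rules; the data is invented.

```python
# A sketch of scoring the association rule "viewers of A also watch B"
# from made-up watch histories, using support and confidence counts.
histories = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B"}, {"A", "B"}]

watched_a = [h for h in histories if "A" in h]
watched_a_and_b = [h for h in watched_a if "B" in h]

support = len(watched_a_and_b) / len(histories)     # how common A-and-B is
confidence = len(watched_a_and_b) / len(watched_a)  # roughly P(B | A)

print(support, confidence)   # the rule "A -> B"
```

A rule with high support and high confidence, like this one, is the kind of related-items pattern association learning looks for.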
Reinforcement Learning
• This is very similar to how we as humans also learn.
• Throughout our lives, we receive positive and negative signals and constantly learn from
them.
• The chemicals in our brain are one of many ways we get these signals.
• When something good happens, the neurons in our brains provide a hit of positive
neurotransmitters such as dopamine which makes us feel good and we become more
likely to repeat that specific action.
• We don’t need constant supervision to learn like in supervised learning.
• By only giving the occasional reinforcement signals, we still learn very effectively.
Reinforcement Learning
• One of the most exciting parts of Reinforcement Learning is that it is a first step away
from training on static datasets and towards being able to use dynamic, noisy, data-rich
environments.
• This brings Machine Learning closer to a learning style used by humans. The world is
simply our noisy, complex data-rich environment.
• Games are very popular in Reinforcement Learning research. They provide ideal data-
rich environments.
• The scores in games are ideal reward signals to train reward-motivated behaviours.
Additionally, time can be sped up in a simulated game environment to reduce overall
training time.
• A Reinforcement Learning algorithm just aims to maximise its rewards by playing the
game over and over again. If you can frame a problem with a frequent ‘score’ as a
reward, it is likely to be suited to Reinforcement Learning.
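The play-and-improve loop above can be sketched in miniature: an agent repeatedly chooses an action, receives a score, and updates its value estimates to favour the best-scoring action. This is a heavily simplified, bandit-style illustration, not a full Reinforcement Learning algorithm; the actions and rewards are made up.

```python
# A tiny reward-maximisation sketch: the agent learns action values from
# the scores it receives by "playing" over and over. Rewards are fixed,
# made-up game scores per action, hidden from the agent's estimates.
rewards = {"left": 1.0, "right": 5.0}          # the environment (hidden)
q = {a: 0.0 for a in rewards}                  # the agent's value estimates
counts = {a: 0 for a in rewards}

for step in range(100):
    # Try every action once, then greedily repeat the best one so far.
    untried = [a for a in q if counts[a] == 0]
    action = untried[0] if untried else max(q, key=q.get)
    score = rewards[action]                    # the game returns a score
    counts[action] += 1
    q[action] += (score - q[action]) / counts[action]   # running average

best = max(q, key=q.get)
print(best, counts[best])
```

The occasional score is the only feedback signal; no labelled input/output pairs are ever shown to the agent, which is what separates this from supervised learning.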
References
• Books and Journals
• Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to
Algorithms, Cambridge University Press, 2014.
• Osman Omer, Introduction to Machine Learning: The Wikipedia Guide.
• Video Links
• https://www.youtube.com/watch?v=9f-GarcDY58
• https://www.youtube.com/watch?v=GwIo3gDZCVQ
• Web Links
• https://data-flair.training/blogs/types-of-machine-learning-algorithms/
• https://towardsdatascience.com/machine-learning-an-introduction-23b84d51e6d0
• https://towardsdatascience.com/introduction-to-machine-learning-f41aabc55264
THANK YOU