
Learning Methodologies

 Supervised learning is one of the primary approaches to machine learning where a model is
trained on labeled data. The "supervision" consists of the model making predictions and then
being corrected by the labeled output whenever it's wrong. Here's a breakdown of the concept:
 Labeled Data: In supervised learning, the training data includes both the input data and the
correct output, known as labels or annotations. The model learns from this data by adjusting its
parameters to predict the label as accurately as possible.
 Training: During the training process, the algorithm iteratively makes predictions on the training
data and is corrected by the known labels whenever it's wrong. The goal is to adjust the model's
internal parameters so that it can make accurate predictions.
 Evaluation: After training, the model's performance is typically evaluated on a separate set of
data (test data) that it hasn't seen before. This helps in assessing how well the model will perform
on new, unseen data.
 Types of Supervised Learning Tasks:
 Classification: The output variable is a category, such as "spam" or "not spam",
"fraudulent" or "valid", or "cat", "dog", or "horse".
 Regression: The output variable is a real or continuous value, such as "weight" or "price".
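The loop described above (predict, compare against the label, correct) can be sketched as a toy regression in pure Python. The data, learning rate, and epoch count here are made up for illustration, not a recommended setup:

```python
# Minimal supervised regression sketch: learn y ~ w*x + b from labeled pairs
# by gradient descent on mean squared error. All values are illustrative.

def train_linear(data, lr=0.01, epochs=2000):
    """Fit w and b by repeatedly correcting predictions against the labels."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y        # prediction minus known label
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w                 # adjust parameters using the error
        b -= lr * grad_b
    return w, b

# Labeled training data generated from the rule y = 3x + 1
train_data = [(x, 3 * x + 1) for x in range(10)]
w, b = train_linear(train_data)

# Evaluate on an input the model never saw during training
print(round(w * 20 + b, 1))  # close to 61.0 (= 3*20 + 1)
```

The held-out prediction at the end is the "evaluation" step: accuracy on unseen inputs, not on the training pairs, is what matters.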
 Feedback Loop: The model receives feedback directly in the form of error or loss. This feedback
is used to correct and improve the model during training.
 Applications: Supervised learning has a wide range of applications, including:
 Image and voice recognition.
 Medical diagnosis.
 Stock price prediction.
 Email filtering.
 And many more.
 Challenges:
 Overfitting: If a model is too complex, it might perform exceptionally well on the training
data but poorly on new, unseen data. This is because it has memorized the training data
rather than generalizing from it.
 Data Quality: The quality of the training data is crucial. If the data is noisy, biased, or
unrepresentative, the model's performance can be significantly affected.
 Need for Labeled Data: One of the main challenges of supervised learning is the need for
a large amount of labeled data. Labeling data can be time-consuming and expensive.
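The overfitting challenge above can be made concrete with a deliberately extreme "model" that memorizes its training labels. The task and all names here are hypothetical:

```python
# Overfitting sketch: a lookup-table "model" is perfect on training data but
# useless on unseen inputs, because it memorized rather than generalized.

def make_label(x):
    return "even" if x % 2 == 0 else "odd"

train_x = [0, 1, 2, 3, 4, 5]
train_labels = {x: make_label(x) for x in train_x}

def memorizer(x):
    # Pure memorization: no rule was learned, only the training pairs.
    return train_labels.get(x, "unknown")

def generalizer(x):
    # A model that learned the underlying rule handles new inputs too.
    return "even" if x % 2 == 0 else "odd"

test_x = [10, 11, 12]
print([memorizer(x) for x in train_x])   # perfect on training data
print([memorizer(x) for x in test_x])    # fails on unseen data
print([generalizer(x) for x in test_x])  # generalizes correctly
```

Real overfitting is subtler than a lookup table, but the failure mode is the same: excellent training performance, poor performance on new data.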
AI Model

 An AI (Artificial Intelligence) model refers to the computational and
mathematical structure used by machines to make decisions or predictions
based on data.
 Training: Before an AI model can make predictions or decisions, it needs
to be trained. Training involves feeding the model a large amount of data
and allowing it to adjust its internal parameters to best predict the desired
outcome.
 Data: The quality and quantity of data used to train the model play a
crucial role in its performance. The data should be diverse, relevant, and
representative of the real-world scenarios where the model will be applied.
 Architecture: AI models can have various architectures, such as neural
networks, decision trees, support vector machines, etc. The architecture
defines the structure of the model and how data flows through it.
 Parameters: These are the internal variables that the model adjusts during
training. For example, in a neural network, the weights and biases are
parameters.
 Evaluation: After training, the model's performance is evaluated on
unseen data (testing data) to ensure it's making accurate predictions.
 Deployment: Once satisfied with the model's performance, it can be
deployed in real-world applications, such as recommendation systems,
image recognition software, or autonomous vehicles.
 Fine-tuning: Even after deployment, AI models might need periodic
retraining or fine-tuning, especially if the underlying data distribution
changes over time.
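The lifecycle above (parameters, training, evaluation) can be sketched as a toy model object. The class and its "training" rule are invented for illustration and do not correspond to any real framework API:

```python
# Sketch of an AI model's pieces: internal parameters, a training step that
# adjusts them from data, and evaluation on held-out data. All illustrative.

class ThresholdModel:
    def __init__(self):
        self.threshold = 0.0            # the model's single parameter

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def train(self, data):
        # "Training" here just places the threshold between the two classes.
        lows = [x for x, y in data if y == 0]
        highs = [x for x, y in data if y == 1]
        self.threshold = (max(lows) + min(highs)) / 2

    def evaluate(self, data):
        correct = sum(self.predict(x) == y for x, y in data)
        return correct / len(data)

model = ThresholdModel()
model.train([(1, 0), (2, 0), (8, 1), (9, 1)])   # labeled training data
accuracy = model.evaluate([(0, 0), (10, 1)])     # unseen test data
print(accuracy)                                   # 1.0 on this toy split
```

Deployment and fine-tuning would follow the same interface: ship `predict`, and re-run `train` later if the data distribution drifts.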
Artificial Neural Network

 An ANN, or Artificial Neural Network, is a computational model inspired by
the way biological neural networks in the human brain work. It is a
fundamental concept in the field of artificial intelligence and serves as the
backbone for deep learning.
 Neurons: The basic unit of computation in an ANN is the neuron, often
called a node or unit. It receives input from other nodes or from an
external source and computes an output.
 Layers: Neurons are typically organized in layers. There are three types of
layers in an ANN:
 Input Layer: This is where the network starts, and it receives input
from datasets.
 Hidden Layer(s): These are layers between input and output layers. A
network can have a single hidden layer or many, depending on the
complexity of the problem.
 Output Layer: This is where the final prediction or classification is
made.
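The flow through the three layer types above can be sketched as a tiny forward pass. The network shape and weight values are made up purely to show how data moves from layer to layer:

```python
import math

# Forward pass through a tiny fixed network: 2 inputs -> 2 hidden -> 1 output.
# The weights and biases are arbitrary constants, not trained values.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of its inputs plus a bias, then activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                   # input layer values
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.5]], [0.2])
print(output)  # a single value between 0 and 1
```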
 Weights and Biases: Each connection between neurons has a weight,
which is adjusted during training. Biases are additional parameters added
to the neuron's weighted input, providing an extra degree of freedom.
 Activation Function: After a neuron receives a set of inputs and
their corresponding weights, a function processes this information
to produce an output. This function is called the activation
function. Common activation functions include the sigmoid,
hyperbolic tangent (tanh), and rectified linear unit (ReLU).
 Training: The process of adjusting the weights (and biases) of the
network based on the input data and the desired output. This is typically
done using a method called backpropagation together with an optimization
technique such as gradient descent.
 Applications: ANNs have a wide range of applications including:
 Image and voice recognition
 Medical diagnosis
 Financial forecasting
 Game playing
 Language translation
 And many more.
Artificial Neural Network Breakdown

 Inputs: A neuron receives multiple inputs, each associated with a weight.
These inputs can come from the dataset (for neurons in the input layer) or
from the outputs of other neurons (for neurons in hidden and output
layers). The neuron combines them into a weighted sum, plus a bias term.
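The weighted-sum step for a single neuron is just a dot product plus a bias. The input and weight values below are illustrative:

```python
# A single neuron's pre-activation value: weighted sum of inputs plus bias.
inputs = [0.5, -1.0, 2.0]     # from the dataset or from upstream neurons
weights = [0.4, 0.3, -0.2]    # one weight per incoming connection
bias = 0.1

z = sum(w * x for w, x in zip(weights, inputs)) + bias
print(round(z, 2))  # -0.4: the value passed on to the activation function
```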
 Activation Function: The weighted sum is then passed through an
activation function to produce the neuron's output. The activation
function introduces non-linearity into the model, allowing the network to
learn complex patterns. Common activation functions include:
• Sigmoid: Produces an output between 0 and 1.
• Hyperbolic Tangent (tanh): Produces an output between -1 and 1.
• Rectified Linear Unit (ReLU): Produces an output that is either 0
(for negative inputs) or the input itself (for positive inputs).
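The three activation functions listed above can be written out directly, and their output ranges checked:

```python
import math

# The three common activation functions and their output ranges.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))       # output in (0, 1)

def tanh(z):
    return math.tanh(z)                 # output in (-1, 1)

def relu(z):
    return max(0.0, z)                  # 0 for negative z, z otherwise

for z in (-2.0, 0.0, 2.0):
    print(sigmoid(z), tanh(z), relu(z))
```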
 Output: The result from the activation function is the output of the
neuron. This output can serve as an input to neurons in subsequent layers.
 Learning: During the training process, the weights and biases of each
neuron are adjusted to minimize the difference between the predicted
output and the actual target values. This is typically done using the
backpropagation algorithm and an optimization technique like gradient
descent.
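The learning rule above can be sketched for a single linear neuron, where the backpropagated gradient has a closed form. Activation is omitted to keep the derivative simple; the learning rate and targets are illustrative:

```python
# One gradient-descent step for a single linear neuron: nudge the weight and
# bias to reduce the squared error between prediction and target.

def step(w, b, x, target, lr=0.1):
    pred = w * x + b                 # forward pass (no activation, for clarity)
    err = pred - target              # difference from the target value
    # Gradient of err^2: d/dw = 2*err*x, d/db = 2*err
    w -= lr * 2 * err * x
    b -= lr * 2 * err
    return w, b, err ** 2

w, b = 0.0, 0.0
losses = []
for _ in range(50):
    w, b, loss = step(w, b, x=1.0, target=2.0)
    losses.append(loss)

print(losses[0], losses[-1])  # the loss shrinks as training proceeds
```

Backpropagation in a full network chains this same idea through every layer, using the chain rule to attribute each neuron's share of the output error.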
