
Defuzzification

Defuzzification is the conversion of a fuzzy quantity to a precise quantity, just as fuzzification is the conversion of a precise quantity to a fuzzy quantity. The output of a fuzzy process can be the logical union of two or more fuzzy membership functions defined on the universe of discourse of the output variable.
CENTROID METHOD
(Center of area or center of gravity)
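Under the centroid method, the crisp output is the center of gravity of the aggregated membership function: x* = ∫ μ(x)·x dx / ∫ μ(x) dx. Below is a minimal NumPy sketch of the discrete approximation; the universe of discourse and the trapezoidal membership function are illustrative assumptions, not from the slides:

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """Centroid (center-of-gravity) defuzzification:
    discrete approximation of integral(mu(x)*x dx) / integral(mu(x) dx)."""
    return np.sum(mu * x) / np.sum(mu)

# Illustrative aggregated output membership function on a sampled universe
x = np.linspace(0.0, 10.0, 101)                            # universe of discourse
mu = np.maximum(np.minimum((x - 2) / 2, (8 - x) / 2), 0.0) # trapezoidal shape
mu = np.clip(mu, 0.0, 1.0)                                 # membership values in [0, 1]

print(centroid_defuzzify(x, mu))  # ~5.0: the shape is symmetric about x = 5
```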
 Quiz 2: Topics you need to study

 Defuzzification Methods
 Structure and Foundation of a Single Neuron
 Neural Net Architectures
 Neural Learning Applications
 Evaluation of Networks, Implementation
 Supervised Learning: Single-Layer Networks
 Perceptrons, Linear Separability
 Perceptron Training Algorithms
 You should cover all the topics discussed in class after the mid-semester exam.
Structure and Foundation of a Single Neuron
NEURONS
The human body is made up of trillions of cells.
Cells of the nervous system, called nerve cells
or neurons, are specialized to carry "messages"
through an electrochemical process.

The human brain has approximately 86 billion neurons.

Neurons come in many different shapes and sizes. Some of the smallest neurons have cell bodies that are only 4 microns wide; some of the biggest have cell bodies that are 100 microns wide (1 micron is equal to one thousandth of a millimeter!). Neurons can also be very long: some neurons, such as corticospinal neurons (from motor cortex to spinal cord) or primary afferent neurons (neurons that extend from the skin into the spinal cord and up to the brain stem), can be several feet long.

Neurons are similar to other cells in the body because:
 Neurons are surrounded by a cell membrane.
 Neurons have a nucleus that contains genes.
 Neurons contain cytoplasm, mitochondria and other organelles.
 Neurons carry out basic cellular processes such as protein synthesis and energy production.

However, neurons differ from other cells in some ways:
 Neurons are the oldest and longest cells in the body; many of the same neurons remain for your whole life.
 Although other cells die and are replaced, many neurons are never replaced when they die. In fact, you have fewer neurons when you are old than when you were young.
Dendrites bring electrical
signals to the cell body and
axons take information
away from the cell body.

Sensory (or afferent) neurons: send information from sensory receptors (e.g., in skin, eyes, nose, tongue, ears) TOWARD the central nervous system (CNS). They are equipped with specialized receptors for detecting stimuli like touch, temperature, pain, and sensory information from organs like the eyes and ears.
Motor (or efferent) neurons: send information AWAY from the CNS to muscles or glands, enabling voluntary and involuntary muscle movements. They play a crucial role in motor control.
Interneurons: send information between sensory neurons and motor neurons. Most interneurons are located in the CNS; they are responsible for decision-making and reflexes.
(Neurons contain some specialized structures (for example, synapses) and chemicals (for example, neurotransmitters).)

Bipolar Neurons: These neurons have two distinct extensions (or processes) from the cell body: one dendrite and one axon. Bipolar neurons are often associated with sensory functions, such as vision and olfaction (the sense of smell).
Unipolar (Pseudounipolar) Neurons: These neurons have a single extension that divides into two branches, functioning as both an axon and a dendrite. Unipolar neurons are commonly found in the peripheral nervous system and play a role in sensory signaling.
Multipolar Neurons: Multipolar neurons have multiple dendrites and a single axon. They are the most common type of neuron in the CNS and play a crucial role in complex neural networks.
NEURAL NETWORK
(circuit of biological neurons)

"junction between two nerve cells," 1899, medical Latin, from Greek synapsis "conjunction," from or related to synaptein "to clasp, join together, tie or bind together, be connected with," from syn- "together"
(see syn-) + haptein "to fasten" (see apse).
"junction between two nerve cells," 1899, medical Latin, from Greek synapsis "conjunction," from or related to synaptein "to clasp, join together, tie or bind together, be connected with," from syn- "together"
(see syn-) + haptein "to fasten" (see apse).
PERCEPTRON

The term "perceptron" is derived from the word "perception" and is used to describe a simple computational model inspired by the way biological neurons in the brain are thought to work. Frank Rosenblatt, who introduced the perceptron model in the late 1950s, chose this name because he aimed to create a computational unit that could mimic certain aspects of human perception and decision-making.
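As a concrete illustration, the sketch below implements a minimal perceptron with the classic perceptron learning rule in Python/NumPy. The AND-gate data, the function names, and the hyperparameters are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def step(z):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule: w <- w + lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])  # weights
    b = 0.0                   # bias term (shifts the decision boundary)
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(np.dot(w, xi) + b)
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Illustrative example: learn the (linearly separable) AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([step(np.dot(w, xi) + b) for xi in X])  # expected: [0, 0, 0, 1]
```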
PERCEPTRON: BIAS TERM & ACTIVATION FUNCTION

Bias (an adjustable parameter added to the neuron's weighted sum):
 Shifting decision boundaries: allows the model to shift the decision boundary (hyperplane) in the feature space. Effect of removing the bias: the decision boundary passes through the origin, so it cannot shift to accommodate data that isn't centered around the origin.
 Handling non-zero thresholds: enables the perceptron to account for non-zero thresholds or activation points, affecting when the perceptron activates and which class it outputs. (A "threshold" can include zero, meaning that even the smallest value is considered, while a "non-zero threshold" requires a value greater than zero to be met or exceeded before triggering a certain action or decision.) Effect of removing the bias: the perceptron is constrained to make decisions with the origin as the threshold, and struggles with datasets not centered around the origin or requiring non-zero thresholds for decision-making.
 Adaptability: provides flexibility to the perceptron's learning process, allowing it to adapt to various data patterns and relationships.

Activation function (its primary purpose is to squash the weighted sum of inputs into a specific range, or to decide whether a neuron should "fire", i.e., activate):
 Introducing non-linearity: introduces non-linearity to the model, which is crucial for modeling complex, non-linear relationships in data. Effect of removing the activation function: the entire network is limited to a linear transformation.
 Enabling feature learning: allows the neural network to learn hierarchical representations of data and extract abstract, complex features by stacking layers with non-linear activation functions. Effect of removal: inability to learn hierarchical data representations; limited capacity to extract and represent abstract and complex features from the data.
 Solving complex problems: essential for solving tasks like image recognition, natural language processing, and more, where data relationships are highly non-linear. Effect of removal: difficulty solving complex, non-linear problems; ineffectiveness in tasks that involve non-linear patterns.
BIAS TERM &ACTIVATION
FUNCTION

In summary, the bias term in a perceptron is essential for shifting and adapting the decision boundary and setting
non-zero thresholds,

while the activation function introduces non-linearity and enables feature learning, allowing neural networks to
model complex relationships in data.

Together, these elements make neural networks capable of solving a wide range of real-world problems, making
them a fundamental building block in deep learning.

without a bias term and an activation function, the model would be a purely linear system with no ability to
adapt to non-zero thresholds, shift decision boundaries, or model non-linear data relationships. This would
render it ineffective for many real-world machine learning and deep learning tasks where complex patterns and
non-linearities are prevalent.
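A quick sketch of the "purely linear system" point: with no activation function, stacking two layers collapses into a single linear map. The weights below are random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# Two "layers" with no activation function in between...
two_layer = W2 @ (W1 @ x + b1) + b2

# ...equal exactly one linear layer with W = W2 @ W1 and b = W2 @ b1 + b2
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)

print(np.allclose(two_layer, one_layer))  # True: depth adds nothing without non-linearity
```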
IMPORTANCE OF BIAS
In the context of machine learning and neural networks, a bias (also referred to as a bias term or bias unit) is an adjustable
parameter added to each neuron (or perceptron) in a neural network. The bias serves to shift the activation function of the
neuron, allowing the model to better fit the data and make it more expressive. The bias term is an essential component
because it enables the model to adapt to various patterns and relationships in the data.

 Shifting Decision Boundaries: The bias term in a perceptron allows the model to shift the decision boundary (hyperplane) in
the feature space. Without a bias term, the decision boundary would always pass through the origin (0,0) in the input space.
This can be limiting in cases where the data doesn't naturally pass through the origin, or when you need a non-zero threshold
for decision-making.

 Handling Non-Zero Thresholds: The bias term enables the perceptron to account for non-zero thresholds or activation
points. It helps in determining when the perceptron should activate and output a particular class. Without a bias term, the
decision boundary would always be centered around zero. (A "threshold" can include zero, meaning that even the smallest
value is considered, while a "non-zero threshold" requires a value greater than zero to be met or exceeded before
triggering a certain action or decision.)

 Adaptability: Including a bias term provides flexibility to the perceptron's learning process. It allows the model to adapt to
various patterns and relationships in the data, making it more versatile and capable of fitting a wide range of data
distributions.
No Bias Term:
 The decision boundary of the perceptron or the neural network would always pass through the origin (0,0) in the input space. It would be
unable to shift the decision boundary to better fit the data if the data is not centered around the origin.
 The perceptron would be constrained to making decisions based on the origin as the threshold, which might not be suitable for many real-world datasets.
 The model would struggle to adapt to data distributions that are not centered around the origin or that require non-zero thresholds for
decision-making.
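A small illustration of the "boundary through the origin" point (the weights, bias, and points below are illustrative): with weights w = (1, 1) and no bias, the boundary w·x = 0 passes through the origin and cannot separate AND-like data; a bias of -1.5 shifts the boundary to x1 + x2 = 1.5, which does separate it:

```python
import numpy as np

w = np.array([1.0, 1.0])
points = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Without bias: boundary is x1 + x2 = 0, so all non-negative points land on one side
print([int(np.dot(w, p) >= 0) for p in points])      # [1, 1, 1, 1] -- cannot separate

# With bias b = -1.5: boundary shifts to x1 + x2 = 1.5
b = -1.5
print([int(np.dot(w, p) + b >= 0) for p in points])  # [0, 0, 0, 1] -- AND is separable
```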
MODEL OF AN ARTIFICIAL NEURAL NETWORK

The neuron's output is computed in stages: the output of the first stage, the weighted sum, is passed to the activation function; the activation function processes the value it gets from the neuron; the final output is what emerges after processing through the activation function.
ACTIVATION FUNCTION

The sigmoid function is a mathematical function commonly used in machine learning and
neural networks. It's also known as the logistic function.

Sigmoid functions were historically used as activation functions in artificial neural networks.
However, they have some drawbacks, such as the vanishing gradient problem, which can
make training deep networks difficult.
As a result, alternative activation functions like the Rectified Linear Unit (ReLU) have
become more popular in recent years for deep neural networks.

Nonetheless, sigmoid functions still find use in specific contexts, such as binary classification problems where the output represents probabilities, and in closely related sigmoid-shaped activation functions like the hyperbolic tangent (tanh).
ACTIVATION FUNCTION

The sigmoid function: observe that when a value greater than 0 is selected on the x-axis, the corresponding value on the y-axis is greater than 0.5. Conversely, if a value less than 0 is selected on the x-axis, the corresponding value on the y-axis is less than 0.5.
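A minimal sketch of the sigmoid, σ(x) = 1 / (1 + e^-x), confirming the observation above:

```python
import numpy as np

def sigmoid(x):
    """Logistic function: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))        # 0.5 exactly at x = 0
print(sigmoid(2.0) > 0.5)  # True: positive inputs map above 0.5
print(sigmoid(-2.0) < 0.5) # True: negative inputs map below 0.5
```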
IMPORTANCE OF ACTIVATION FUNCTION

The purpose of an activation function is to introduce non-linearity into the network.


NEURAL NETWORK ARCHITECTURES

Various neural network architectures are used in deep learning and machine learning for different types of tasks and data:

 Feedforward Neural Networks (FNN) or Multi-Layer Perceptrons (MLPs)


 Convolutional Neural Networks (CNN)
 Recurrent Neural Networks (RNN)
 Long Short-Term Memory Networks (LSTM)
 Gated Recurrent Unit Networks (GRU)
 Autoencoders
 Generative Adversarial Networks (GANs)
 Reinforcement Learning Architectures
 Transformers
 Capsule Networks (CapsNets)
 Siamese Networks
 Self-Attention Networks (e.g., Transformers)
NEURAL NETWORK ARCHITECTURES

Neural network architectures refer to the specific structures or layouts of artificial neural networks used in deep learning and machine learning. These architectures determine how neural network layers and units are organized and interconnected to solve specific tasks or problems. Some common neural network architectures:

1) Feedforward Neural Networks (FNN): Also known as multi-layer perceptrons (MLPs), these networks
consist of an input layer, one or more hidden layers, and an output layer. They are used for tasks like
regression and classification.

2) Convolutional Neural Networks (CNN): CNNs are designed for tasks involving grid-like data, such as
images and videos. They use convolutional layers to automatically learn spatial hierarchies of features.
They are commonly used in image recognition and computer vision.

3) Recurrent Neural Networks (RNN): RNNs are suitable for sequence data and have connections that loop
back on themselves, allowing them to process data with sequential dependencies. Applications include
natural language processing (NLP) and time series analysis.
NEURAL NETWORK ARCHITECTURES

4) Long Short-Term Memory Networks (LSTM): LSTMs are a type of RNN with memory cells that can
store and retrieve information over long sequences. They are particularly effective in tasks that require
capturing long-term dependencies.

5) Gated Recurrent Unit Networks (GRU): GRUs are another type of RNN that is computationally
efficient and often used for similar tasks as LSTMs.

6) Autoencoders: Autoencoders consist of an encoder that compresses input data into a lower-dimensional
representation and a decoder that reconstructs the original data. They are used for data compression,
denoising, and feature learning.

7) Generative Adversarial Networks (GANs): GANs consist of a generator and a discriminator network. They are used for generating new data instances, such as images, by training the generator to produce data that is indistinguishable from real data according to the discriminator.

8) Reinforcement Learning Architectures: These architectures, such as deep Q-networks (DQN) and policy gradient methods, are designed for reinforcement learning tasks. They learn to make sequential decisions by interacting with an environment.
NEURAL NETWORK ARCHITECTURES
9) Transformers: Transformers are a type of architecture that has gained prominence in NLP and
other sequential data tasks. They use attention mechanisms to model relationships between
elements in a sequence.

10) Capsule Networks (CapsNets): Capsule networks are designed to overcome some limitations of
traditional CNNs by capturing hierarchical relationships between features and pose information.
They are used in image recognition and object segmentation.

11) Siamese Networks: Siamese networks are used for tasks like similarity learning, face recognition,
and signature verification. They consist of two identical subnetworks with shared weights.

12) Self-Attention Networks: These networks, such as the Transformer architecture, use self-attention
mechanisms to weigh the importance of different parts of the input sequence when making
predictions. They have been highly successful in NLP tasks.

There are many more variations and combinations used to address specific problems in machine learning
and deep learning. The choice of architecture depends on the nature of the data and the task at hand.
Researchers and practitioners often customize or create new architectures to tackle emerging challenges and
improve performance on various tasks.
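As a concrete illustration of the simplest of these architectures, here is a minimal feedforward network (MLP) sketch in PyTorch; the layer sizes and the random input batch are illustrative assumptions:

```python
import torch
import torch.nn as nn

# A small feedforward network: input -> hidden (non-linear) -> output
model = nn.Sequential(
    nn.Linear(4, 16),  # input layer -> hidden layer (4 features, 16 hidden units)
    nn.ReLU(),         # non-linear activation enables non-linear decision boundaries
    nn.Linear(16, 3),  # hidden layer -> output layer (e.g., 3 classes)
)

x = torch.randn(8, 4)  # a batch of 8 illustrative input vectors
logits = model(x)
print(logits.shape)    # torch.Size([8, 3])
```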
NEURAL LEARNING APPLICATIONS

Image Recognition: Using CNNs for object recognition, facial recognition, and medical image analysis.
Natural Language Processing: Leveraging RNNs and Transformers for language translation, chatbots, sentiment analysis, and text generation.
Speech Recognition: Employing deep learning models for speech recognition in virtual assistants and voice-controlled systems.
Recommendation Systems: Utilizing machine learning for personalized recommendations in online platforms.
Healthcare: Applying machine learning in disease diagnosis, medical image analysis, drug discovery, and patient outcome prediction.
Autonomous Vehicles: Using neural networks for object detection, lane tracking, and decision-making in self-driving cars.
Financial Services: Employing machine learning models for fraud detection, stock market prediction, and credit risk assessment.
Anomaly Detection: Using machine learning to detect unusual patterns or anomalies in cybersecurity and fraud prevention.
Environmental Monitoring: Leveraging neural networks for weather forecasting, air quality prediction, and ecological modeling.
Robotics: Applying machine learning in robot control, vision-based robotics, and human-robot interaction.
Manufacturing and Quality Control: Utilizing machine learning models for predictive maintenance, defect detection, and quality control in manufacturing processes.
Game Playing: Using reinforcement learning in game playing, including AI victories in complex games like Go and video game agents.
Content Generation: Employing neural networks for generating art, music, and written content.
Energy Management: Using machine learning to optimize energy consumption in smart grids and buildings.
Agriculture: Leveraging machine learning for crop yield prediction, disease detection, and precision farming.
EVALUATION OF NETWORKS

1. Accuracy: Proportion of correctly classified instances out of the total instances in the dataset. (Classification)
2. Precision: Accuracy of positive predictions in binary classification. (Binary Classification)
3. Recall (Sensitivity): Proportion of actual positives that were correctly predicted. (Binary Classification)
4. F1 Score: Harmonic mean of precision and recall, useful for imbalanced datasets. (Binary Classification)
5. Mean Squared Error (MSE): Measures the average squared difference between predicted and true values (for regression tasks). (Regression)
6. Root Mean Squared Error (RMSE): Square root of MSE, providing an interpretable error metric (for regression tasks). (Regression)
7. Mean Absolute Error (MAE): Measures the average absolute difference between predicted and true values in regression. (Regression)
8. R-squared (R^2): Proportion of variance in the dependent variable explained by the model (for regression tasks). (Regression)
9. Area Under the ROC Curve (AUC-ROC): Measures the trade-off between true positive rate and false positive rate. (Binary Classification)
10. Area Under the Precision-Recall Curve (AUC-PR): Measures the area under the precision-recall curve, suitable for imbalanced datasets. (Binary Classification)
11. Confusion Matrix: Provides a detailed breakdown of model performance, showing true positives, true negatives, false positives, and false negatives. (Classification)
12. Cross-Validation: Techniques like k-fold cross-validation assess a model's generalization performance. (General)
13. Hyperparameter Tuning: Evaluating models with different hyperparameters to select the best configuration for optimal performance. (General)
14. Domain-Specific Metrics: Metrics tailored to specific applications, such as sensitivity, specificity, and the Dice coefficient. (Domain-Specific)
15. Custom Metrics: Metrics defined based on specific project goals and requirements. (Custom)
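A minimal sketch computing several of these metrics with scikit-learn; the toy labels and values are illustrative:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, mean_squared_error)

# Toy binary-classification labels (illustrative)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(accuracy_score(y_true, y_pred))    # fraction of correct predictions
print(precision_score(y_true, y_pred))   # TP / (TP + FP)
print(recall_score(y_true, y_pred))      # TP / (TP + FN)
print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall
print(confusion_matrix(y_true, y_pred))  # [[TN, FP], [FN, TP]]

# Toy regression values (illustrative)
print(mean_squared_error([3.0, 5.0, 2.5], [2.5, 5.0, 3.0]))  # MSE
```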
IMPLEMENTATION OF A NEURAL NETWORK

The implementation of a neural network involves the practical steps to build, train, and use a
neural network for a specific task.
Neural network implementation is an iterative process that involves experimentation and
refinement to achieve the best possible performance and utility in the intended application. It
may also involve collaboration with domain experts and data scientists with expertise in the
specific problem domain.
IMPLEMENTATION OF A NEURAL NETWORK

1. Data Collection and Preprocessing: Gather and prepare the dataset. Split the data into training, validation, and testing sets. Preprocess the data (normalize, augment, handle missing values).
2. Choosing a Neural Network Architecture: Select a suitable neural network architecture (e.g., feedforward, CNN, RNN, transformer). Define layers, units, and activation functions.
3. Model Building: Use a deep learning framework (e.g., TensorFlow, PyTorch) to build the neural network model. Specify model layers and connections. Compile the model with a loss function, optimization algorithm, and evaluation metrics.
4. Training: Train the model on the training data using an optimization algorithm (e.g., SGD). Monitor the training process and track metrics on the validation set. Implement early stopping to prevent overfitting.
5. Hyperparameter Tuning: Experiment with different hyperparameters (e.g., learning rate, batch size, architecture). Use grid search or random search for optimization.
6. Evaluation: Evaluate the model on the test dataset to assess performance and generalization. Use task-specific evaluation metrics (e.g., accuracy, F1 score, MSE).
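Steps 3, 4, and 6 in code: a minimal tf.keras sketch, assuming an illustrative synthetic dataset and illustrative layer sizes and hyperparameters:

```python
import numpy as np
import tensorflow as tf

# Illustrative data: 1000 samples, 20 features, binary labels
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# Step 3: build and compile the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

# Step 4: train with a validation split and early stopping to curb overfitting
early_stop = tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)
model.fit(X[:800], y[:800], validation_split=0.2, epochs=50,
          batch_size=32, callbacks=[early_stop], verbose=0)

# Step 6: evaluate on held-out test data
loss, acc = model.evaluate(X[800:], y[800:], verbose=0)
print(f"test accuracy: {acc:.3f}")
```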
IMPLEMENTATION OF A NEURAL NETWORK

7. Deployment: Deploy the model in an application or system if it meets performance criteria. Integrate the model into a web service, mobile app, or software.
8. Monitoring and Maintenance: Continuously monitor the model's performance in the real-world application. Update the model with new data and retrain as needed.
9. Scaling: Consider scaling options such as distributed computing or cloud deployment based on requirements.
10. Documentation: Properly document the implemented model, including architecture, preprocessing, and usage requirements.
11. Testing and Quality Assurance: Conduct thorough testing to ensure the model functions correctly and meets safety and quality standards.
12. Security Considerations: Address security concerns, such as adversarial attacks, data privacy, and securing the deployment environment.
13. User Interface (UI): Design a user-friendly interface for user-facing applications that interact with the model's predictions.
14. Compliance and Regulations: Ensure compliance with relevant regulations, particularly in fields like healthcare and finance.
15. Feedback Loop: Establish a feedback loop for continuous improvement by incorporating user feedback and performance data into model updates.
