
Department of Electronics Engineering
Name of the student : Umang G Pednekar    SAP id no : 60001200113

Experiment No. : 04
Single Layer Perceptron Neuron Model

Aim: a. Implement the two-input AND function with a perceptron and find the final
weights and the number of epochs required to reach them.
b. Implement the two-input OR function with a perceptron and find the final
weights and the number of epochs required to reach them.
c. Train a neural network for the four-input AND function using any 8 input
combinations per epoch, and find the final weights and the number of epochs
required to reach them. In the testing phase, take an input not used in the
training phase and predict its output using the final weights from the training
phase.
d. Train a neural network for the four-input OR function using any 8 input
combinations per epoch, and find the final weights and the number of epochs
required to reach them. In the testing phase, take an input not used in the
training phase and predict its output using the final weights from the training
phase.
e. Train a neural network using the perceptron learning rule to find the weights
required to classify a data set given as:
Vectors [1 1 1 1] and [-1 1 -1 -1] are members that belong to class “A”,
and vectors [1 1 1 -1] and [1 -1 -1 1] are members that do not belong to
class “A”.
Test whether the vectors are classified correctly using the final weights.
Apparatus: MATLAB / Python / C++
Circuit
Diagram: Fig (i) – Perceptron model (sensory unit → associator unit → response unit)


Fig (ii) – Architecture of single layer perceptron

Theory: Frank Rosenblatt [1962], and Minsky and Papert [1988], developed a large class of
artificial neural networks called perceptrons. The perceptron learning rule uses an
iterative weight adjustment that is more powerful than the Hebb rule. Perceptrons
use a threshold output function and the McCulloch-Pitts model of a neuron. Their
iterative learning converges to correct weights, i.e. weights that produce the exact
output value for every training input pattern.
The perceptron has three layers: sensory, associator and response units, as shown
in fig (i). The sensory and associator units have binary activations, while binary or
bipolar activation is used for the response unit. All the units have their
corresponding weighted interconnections. Training of a perceptron continues
until no error occurs, at which point the net has learned the classification.

Perceptrons are of two types: single-layer and multi-layer perceptrons.
A single-layer perceptron is the simplest form of a neural network used for the
classification of patterns that are linearly separable. Fundamentally, it consists of
a single neuron with adjustable weights and bias. Rosenblatt showed that if the
patterns used to train the perceptron are drawn from two linearly separable classes,
the perceptron algorithm converges and positions the decision surface in the form
of a hyperplane between the two classes. A perceptron built around a single
neuron is limited to performing pattern classification with only two classes, and
the classes have to be linearly separable for the perceptron to work properly.

The basic concept of a single-layer perceptron as used in pattern classification is
that it is concerned with only a single neuron. The linearity of the decision
boundary and the iterative learning make the perceptron network very simple.
Training of the perceptron continues till no error occurs.

The architecture of a single-layer perceptron is shown in fig (ii). The input to the
response unit is the output of the associator unit, which is a binary vector. Since
only the weights between the associator and the response unit are adjusted, the
network is regarded as a single-layer network; the sensory unit is effectively
hidden.

The input layer consists of input neurons X1, …, Xi, …, Xn. There is always a
common bias input of ‘1’. The input neurons are connected to the output neuron
through weighted interconnections.
This is a single-layer network because it has only one layer of interconnections
between the input and the output neurons. This network perceives the input signal
received and performs the classification.

Training
Algorithm : Step 1 : Initialize the weights and the bias to zero. Initialize the learning rate to 1.
Step 2 : Obtain the net input to the network:
$$y_{in} = b + \sum_{i=1}^{n} x_i w_i$$

where, n = number of input neurons in the input layer.


Then apply the activation function to the net input to obtain the output of the
network:
$$y = f(y_{in})$$
where the activation function is
$$f(y_{in}) = \begin{cases} 1 & \text{if } y_{in} > \theta \\ 0 & \text{if } -\theta \le y_{in} \le \theta \\ -1 & \text{if } y_{in} < -\theta \end{cases}$$

Step 3 : Weight and bias adjustment : Compare the value of the actual
(calculated) output and the desired (target) output.
If y ≠ t, then

$$w_i(\text{new}) = w_i(\text{old}) + \alpha\, t\, x_i$$
$$b(\text{new}) = b(\text{old}) + \alpha\, t$$
where $t$ = target value ($+1$ or $-1$) and $\alpha$ = learning rate.
Otherwise,
$$w_i(\text{new}) = w_i(\text{old})$$
$$b(\text{new}) = b(\text{old})$$
Train the network until there is no weight change. This is the stopping condition
for the network.
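As an illustration (assuming bipolar inputs and targets, $\alpha = 1$, $\theta = 0$): for the two-input AND function, starting from $w = (0, 0)$ and $b = 0$, the first training pair $x = (1, 1)$, $t = 1$ gives $y_{in} = 0 + 1\cdot 0 + 1\cdot 0 = 0$, so $y = f(0) = 0 \neq t$; the rule then updates $w = (0 + 1\cdot 1\cdot 1,\ 0 + 1\cdot 1\cdot 1) = (1, 1)$ and $b = 0 + 1\cdot 1 = 1$. Cycling through all patterns this way until an error-free epoch is reached yields the final weights.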
Code : Part a)
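The report's code was attached as screenshots; below is a minimal Python sketch of what Part a implements, following the training algorithm above. Bipolar inputs and targets, $\alpha = 1$, and $\theta = 0$ are assumptions (the original may have used binary inputs or a different threshold), and all names are illustrative.

```python
import numpy as np

def activation(y_in, theta=0.0):
    """Bipolar step function with a dead zone of width 2*theta, as defined above."""
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def train_perceptron(X, T, alpha=1.0, theta=0.0, max_epochs=100):
    """Perceptron learning rule: start from zero weights/bias, update only on error."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for epoch in range(1, max_epochs + 1):
        changed = False
        for x, t in zip(X, T):
            y = activation(b + np.dot(x, w), theta)
            if y != t:                       # Step 3: adjust weights on mismatch
                w = w + alpha * t * x
                b = b + alpha * t
                changed = True
        if not changed:                      # stopping condition: error-free epoch
            return w, b, epoch
    return w, b, max_epochs

# Two-input AND and OR with bipolar inputs/targets (an assumption).
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
for name, T in [("AND", np.array([1, -1, -1, -1])),
                ("OR",  np.array([1,  1,  1, -1]))]:
    w, b, epochs = train_perceptron(X, T)
    print(f"{name}: weights = {w}, bias = {b}, epochs = {epochs}")
```

With these assumptions both gates converge within a couple of epochs (e.g. AND reaches $w = (1, 1)$, $b = -1$).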

Part b)
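The original Part b code was also a screenshot; the sketch below reuses `train_perceptron` and `activation` from the Part a sketch for the four-input gates and the class-"A" classification task. The particular choice of 8 training combinations and of the held-out test input are assumptions; with only half of the truth table seen in training, the held-out prediction may or may not match the true gate value.

```python
import itertools
import numpy as np

# Assumes activation() and train_perceptron() from the Part a sketch are defined.

# All 16 bipolar input combinations for four inputs.
all_inputs = np.array(list(itertools.product([1, -1], repeat=4)))

def and4(x):   # four-input AND: +1 only when every input is +1
    return 1 if np.all(x == 1) else -1

def or4(x):    # four-input OR: -1 only when every input is -1
    return -1 if np.all(x == -1) else 1

for name, f in [("AND", and4), ("OR", or4)]:
    X_train = all_inputs[:8]                    # any 8 combinations (assumption)
    T_train = np.array([f(x) for x in X_train])
    w, b, epochs = train_perceptron(X_train, T_train)
    print(f"4-input {name}: weights = {w}, bias = {b}, epochs = {epochs}")

    x_test = all_inputs[12]                     # an input not seen in training
    y = activation(b + np.dot(x_test, w))
    print(f"  test {x_test} -> predicted {y}, true {name} value {f(x_test)}")

# Classification task: +1 = member of class "A", -1 = not a member.
X_cls = np.array([[1, 1, 1, 1], [-1, 1, -1, -1], [1, 1, 1, -1], [1, -1, -1, 1]])
T_cls = np.array([1, 1, -1, -1])
w, b, epochs = train_perceptron(X_cls, T_cls)
print(f"classifier: weights = {w}, bias = {b}, epochs = {epochs}")
for x, t in zip(X_cls, T_cls):                  # test the training vectors
    print(f"  {x} -> {activation(b + np.dot(x, w))} (target {t})")
```

The four classification vectors are linearly separable, so the rule converges (one valid solution is $w = (-2, 2, 0, 2)$, $b = 0$).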


Output : Part a)

Part b)

Calculations :

Conclusion : In this experiment we studied the single-layer perceptron model, used it to learn
the AND and OR logic functions and a two-class classification task, and attached
the supporting calculations.

Reference questions :

Q1. Explain common activation functions used in neural networks.


Q2. Differentiate between supervised and unsupervised learning.
Q3. Briefly describe common neural network architectures.
Q4. List out the applications of artificial neural networks.

Activation Functions in Neural Networks

1. Sigmoid Function (Logistic)
- \( f(x) = \frac{1}{1 + e^{-x}} \)
- Range: (0, 1)
- Suitable for binary classification problems.
- Prone to the vanishing gradient problem.

2. Hyperbolic Tangent (tanh)
- \( f(x) = \tanh(x) \)
- Range: (-1, 1)
- Similar to the sigmoid but with a wider, zero-centred output range.
- Still prone to the vanishing gradient problem.

3. Rectified Linear Unit (ReLU)


- \( f(x) = \max(0, x) \)
- Commonly used for hidden layers.
- Efficient computation and mitigates vanishing gradient problem.

4. Leaky ReLU
- \( f(x) = x \) if \( x > 0 \), \( f(x) = ax \) otherwise (where \( a \) is a small positive constant).
- Addresses the dying ReLU problem.

5. Softmax
- Used in the output layer for multi-class classification problems.
- Converts raw scores into probabilities.
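As a quick illustration of the definitions above, here is a NumPy sketch (function names are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # range (0, 1)

def tanh(x):
    return np.tanh(x)                    # range (-1, 1), zero-centred

def relu(x):
    return np.maximum(0.0, x)            # 0 for x <= 0, identity for x > 0

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)     # small slope a keeps negative units alive

def softmax(z):
    e = np.exp(z - np.max(z))            # subtract max for numerical stability
    return e / e.sum()                   # outputs sum to 1 (probabilities)

print(softmax(np.array([2.0, 1.0, 0.1])))  # -> approx. [0.659 0.242 0.099]
```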

Supervised vs. Unsupervised Learning

1 Supervised Learning
- Involves labeled data, where the algorithm is trained on input-output pairs.
- The goal is to learn a mapping from inputs to outputs.
- Common tasks: classification and regression.

2 Unsupervised Learning
- Deals with unlabeled data, aiming to find hidden patterns or structure within the data.
- Clustering and dimensionality reduction are typical tasks.
- The algorithm learns without explicit supervision.

Neural Network Architecture

1 Neuron
- Basic unit: receives inputs, applies weights, adds a bias, and passes the result
through an activation function (a code sketch follows this list).

2 Layer
- Neurons organized in layers: input, hidden, and output.
- Information flows from input to output through the hidden layers.

3 Feedforward Neural Network


- Information flows one way, from input to output.
- No feedback loops.

4 Recurrent Neural Network (RNN)


- Feedback connections, allowing information persistence.
- Suitable for sequential data.

5 Convolutional Neural Network (CNN)


- Specialized for processing grid-like data (e.g., images).
- Employs convolutional layers to capture spatial hierarchies.
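A minimal sketch of item 1, the neuron, with illustrative input values; `neuron` and its arguments are our own names, and tanh is just one possible activation:

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """One neuron: weighted sum of the inputs plus a bias, then an activation."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])    # inputs
w = np.array([0.1, 0.4, -0.2])    # weights
b = 0.05                          # bias
print(neuron(x, w, b))            # tanh(-0.70) ~= -0.604
```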

Applications of Artificial Neural Networks

1. Image and Speech Recognition


- CNNs excel in image recognition.
- RNNs are used in speech recognition.

2. Natural Language Processing (NLP)


- RNNs and transformers are used for language modeling and translation.

3. Medical Diagnosis
- Neural networks assist in diagnosing diseases based on medical images and patient
data.

4. Financial Forecasting
- ANNs are used for predicting stock prices and financial market trends.

5. Autonomous Vehicles
- Neural networks play a crucial role in object detection and decision-making for
self-driving cars.

6. Game Playing

- Deep reinforcement learning has been used to master complex games like Go and Poker.

7. Fraud Detection
- Neural networks can identify patterns indicative of fraudulent activities in financial
transactions.

8. Drug Discovery
- ANNs assist in predicting the biological activity of potential drug compounds.

These applications demonstrate the versatility and power of artificial neural networks
across various domains.
