
Savitribai Phule Pune University, Pune

April 2021
An Assignment Report Submitted in
Partial Fulfillment for the Award of
BE - SCOA
In
Computer Engineering
By
Ankit Katewa
Roll No.: 3416

Department of Computer Engineering,


Army Institute of Technology,
Pune – 411015
Guide: Prof. P. R. Sonawane

CERTIFICATE
This is to certify that the assignment report entitled “LP-4 ASSIGNMENT: Soft
Computing and Optimization Algorithms”, which is being submitted by ANKIT
KATEWA in partial fulfillment of the requirements of BE-SCOA in Computer
Engineering at “Army Institute of Technology, Pune”, is a record of the candidate's own
work carried out by him under my supervision and guidance. The matter embodied in this report
has not been submitted for the award of any other degree.
Date: 09/06/2021
Place: Pune

Guide: (Prof. P. R. Sonawane)
H.O.D.: (Prof. S. R. Dhore)

ARMY INSTITUTE OF TECHNOLOGY, PUNE


CERTIFICATE OF ORIGINALITY
I, the undersigned, declare the following:
• The work carried out for my assignment is genuine work and is not directly or indirectly
copied from other projects / resources / published work.
• This is original and authentic assignment work and has not been submitted earlier to
other universities for the award of any degree / diploma, by me or by others.
• The data used in the assignment report has not been copied, directly or indirectly, from
any book, journal or website.
• The references used in the assignment work or report do not violate any state / national /
international Copyright Act, Intellectual Property Rights Act or Patent Act.
• I, the undersigned, and Army Institute of Technology, Pune jointly hold the copyright for
the assignment submitted for my degree under the IPR Act.
If I do not abide by the above, I am fully responsible for any legal action.
Yours truly,
Ankit Katewa
(Signature)

Date: 09/06/2021
Name: Ankit Katewa
Title of the Assignment: LP-4 ASSIGNMENT Soft Computing And Optimization Algorithms
Class: BE-A (Computer Engineering)
Roll No: 3416
PRN No.: 71812412L
Name of Guide: Prof. P. R. Sonawane
Place: Pune (Signature of Guide with date)
ACKNOWLEDGEMENT

It gives me great pleasure to present this work, completed during 2020-2021. I express
my deep sense of indebtedness and heartfelt gratitude to Prof. P. R. Sonawane for his expert
and valuable guidance and continuous encouragement throughout every phase of the work.
I express my deep sense of gratitude to our Principal, Prof. B. P. Patil, and to
Mr. Abhay Bhatt of “Army Institute of Technology, Pune”, for giving me the
opportunity to undertake this assignment and for providing the necessary facilities, without
which it would not have been possible to complete the work.
It also gives me great pleasure to extend my many thanks to Prof. Anup Kadam and
Prof. P. R. Sonawane, and to everyone else who contributed, for their meaningful discussions
and valuable suggestions during every phase of the work.
I would like to express my sincere gratitude towards all my friends and classmates for
their cooperation. Lastly, I thank my loving parents and other family members, whose
continuous encouragement, love and affection enabled me to complete this piece of work
successfully.

Place: - Pune Yours,


Date: - 09/06/2021 Ankit Katewa (3416)

Assignment No: 01

Aim: Implement Union, Intersection, Complement and Difference operations on fuzzy sets. Also
create fuzzy relation by Cartesian product of any two fuzzy sets and perform max-min composition
on any two fuzzy relations.
Objectives:
1. To learn different fuzzy operations on fuzzy sets.
2. To learn max-min composition on fuzzy sets.

Software Requirements:
Ubuntu 18.04/ Windows 7+ OS with MATLAB/ Online OCTAVE
Hardware Requirements:
Pentium IV + system with latest configuration

Theory:
Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the
concept of partial truth: truth values between "completely true" and "completely false". As its name
suggests, it is the logic underlying modes of reasoning which are approximate rather than exact. The
importance of fuzzy logic derives from the fact that most modes of human reasoning and especially
common-sense reasoning are approximate in nature.
The essential characteristics of fuzzy logic, as founded by Lotfi Zadeh, are as follows:
• In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
• In fuzzy logic, everything is a matter of degree.
• Any logical system can be fuzzified.
• In fuzzy logic, knowledge is interpreted as a collection of elastic (or, equivalently, fuzzy) constraints
on a collection of variables.
• Inference is viewed as a process of propagation of elastic constraints.

The third statement, hence, defines Boolean logic as a subset of Fuzzy logic.
Fuzzy Sets
Fuzzy Set Theory was formalized by Professor Lotfi Zadeh at the University of
California in 1965. What Zadeh proposed was very much a paradigm shift that first gained acceptance
in the Far East, and its successful application has ensured its adoption around the world.
A paradigm is a set of rules and regulations which defines boundaries and tells us what to do to be
successful in solving problems within these boundaries. For example, the use of transistors instead of
vacuum tubes is a paradigm shift - likewise the development of Fuzzy Set Theory from conventional
bivalent set theory is a paradigm shift.
Bivalent Set Theory can be somewhat limiting if we wish to describe a 'humanistic' problem
mathematically. For example, Fig 1 below illustrates bivalent sets to characterize the temperature of a
room.
The most obvious limiting feature of bivalent sets that can be seen clearly from the diagram is that
they are mutually exclusive - it is not possible to have membership of more than one set (opinion
would widely vary as to whether 50 degrees Fahrenheit is 'cold' or 'cool' hence the expert knowledge
we need to define our system is mathematically at odds with the humanistic world). Clearly, it is not
accurate to define a transition from a quantity such as 'warm' to 'hot' by the application of one degree
Fahrenheit of heat. In the real world a smooth (unnoticeable) drift from warm to hot would occur.
This natural phenomenon can be described more accurately by Fuzzy Set Theory. Fig.2 below shows
how fuzzy sets quantifying the same information can describe this natural drift.
The whole concept can be illustrated with this example. Let's talk about people and "youthness". In
this case the set S (the universe of discourse) is the set of people. A fuzzy subset YOUNG is also
defined, which answers the question "to what degree is person x young?" To each person in the
universe of discourse, we have to assign a degree of membership in the fuzzy subset YOUNG. The
easiest way to do this is with a membership function based on the person's age.
young(x) = { 1,                 if age(x) <= 20
             (30 - age(x))/10,  if 20 < age(x) <= 30
             0,                 if age(x) > 30 }
A graph of this looks like:
Given this definition, here are some example values:
Person      Age   Degree of youth
Johan       10    1.00
Edwin       21    0.90
Parthiban   25    0.50
Arosha      26    0.40
Chin Wei    28    0.20
Rajkumar    83    0.00
So, given this definition, we'd say that the degree of truth of the statement "Parthiban is YOUNG" is
0.50.
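The membership function and the example values above can be checked directly in code. The report's listings are in MATLAB, but as a small illustrative sketch, an equivalent Python version of the piecewise definition of young(x):

```python
def young(age):
    # Membership in the fuzzy subset YOUNG, per the piecewise definition above
    if age <= 20:
        return 1.0
    elif age <= 30:
        return (30 - age) / 10
    else:
        return 0.0

# Reproduce the example values from the table
for name, age in [("Johan", 10), ("Edwin", 21), ("Parthiban", 25),
                  ("Arosha", 26), ("Chin Wei", 28), ("Rajkumar", 83)]:
    print(f"{name}: {young(age):.2f}")
```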
Note: Membership functions almost never have as simple a shape as young(x). They will at least tend to
be triangles pointing up, and they can be much more complex than that. Furthermore, membership
functions have so far been discussed as if they were always based on a single criterion, but this isn't
always the case, although it is the most common one. One could, for example, want the membership
function for YOUNG to depend on both a person's age and their height (Arosha is short for his age). This
is perfectly legitimate, and occasionally used in practice. It's referred to as a two-dimensional
membership function. It's also possible to have even more criteria, or to have the membership
function depend on elements from two completely different universes of discourse.
Fuzzy Set Operations.
Union
The membership function of the union of two fuzzy sets A and B with membership functions μA and
μB respectively is defined as the maximum of the two individual membership functions. This is called
the maximum criterion.
Union (A ∪ B): μA∪B(x) = max(μA(x), μB(x))
Example:
𝐴 = {(𝑥1, 0.5), (𝑥2, 0.1), (𝑥3, 0.4)} and
𝐵 = {(𝑥1, 0.2), (𝑥2, 0.3), (𝑥3, 0.5)}; 𝐶 = 𝐴 ∪ 𝐵 = {(𝑥1, 0.5), (𝑥2, 0.3), (𝑥3, 0.5)};
Intersection
The membership function of the intersection of two fuzzy sets A and B with membership functions
μA and μB respectively is defined as the minimum of the two individual membership functions. This is
called the minimum criterion.
Intersection (A ∩ B): μA∩B(x) = min(μA(x), μB(x))
Example:
𝐴 = {(𝑥1, 0.5), (𝑥2, 0.1), (𝑥3, 0.4)} and
𝐵 = {(𝑥1, 0.2), (𝑥2, 0.3), (𝑥3, 0.5)};
𝐶 = 𝐴 ∩ 𝐵 = {(𝑥1, 0.2), (𝑥2, 0.1), (𝑥3, 0.4)}
Complement
The membership function of the complement of a fuzzy set A with membership function μA is
defined as the negation of the specified membership function. This is called the negation criterion.
Complement (Aᶜ): μAᶜ(x) = 1 − μA(x)
Example:
𝐴 = {(𝑥1, 0.5), (𝑥2, 0.1), (𝑥3, 0.4)}
𝐶 = 𝐴𝑐= {(𝑥1, 1-0.5), (𝑥2, 1-0.1), (𝑥3, 1-0.4)}
𝐶 = 𝐴𝑐 = {(𝑥1, 0.5), (𝑥2, 0.9), (𝑥3, 0.6)}
MATLAB Code:
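The original MATLAB listing is not reproduced in this copy. As an illustrative sketch only, the four operations can be written in Python/NumPy using the same input matrices as the run shown below. Note that the difference here is plain element-wise subtraction, matching the printed output (which contains negative entries), rather than the bounded fuzzy difference min(μA, 1 − μB):

```python
import numpy as np

A = np.array([[0.3, 0.6], [0.2, 0.9]])  # first matrix from the run below
B = np.array([[0.1, 0.9], [0.3, 0.6]])  # second matrix from the run below

union = np.maximum(A, B)   # maximum criterion
inter = np.minimum(A, B)   # minimum criterion
comp_A = 1 - A             # negation criterion for A
comp_B = 1 - B             # negation criterion for B
diff = A - B               # element-wise difference, as in the output below

for name, m in [("Union", union), ("Intersection", inter),
                ("Complement of A", comp_A), ("Complement of B", comp_B),
                ("Difference", diff)]:
    print(name, m, sep="\n")
```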
Output:
Assignment on fuzzy set operations
Enter First Matrix> [0.3 0.6; 0.2 0.9]
Enter Second Matrix> [0.1 0.9; 0.3 0.6]
Union Of Two Matrices
w =
0.3000 0.9000
0.3000 0.9000
Intersection Of Two Matrices
p =
0.1000 0.6000
0.2000 0.6000
Complement Of First Matrix
q1 =
0.7000 0.4000
0.8000 0.1000
Complement Of Second Matrix
q2 =
0.9000 0.1000
0.7000 0.4000
Difference of First and Second Matrix
d =
0.2000 -0.3000
-0.1000 0.3000

Conclusion: Thus, we learnt the implementation of Union, Intersection, Complement and Difference
operations on fuzzy sets.
Assignment No: 02

Aim: Create fuzzy relation by Cartesian product of any two fuzzy sets.

Objectives:
1. To learn different fuzzy operations on fuzzy sets.

Software Requirements:
Ubuntu 18.04/ Windows 7+ OS with MATLAB/ Online OCTAVE
Hardware Requirements:
Pentium IV + system with latest configuration

Theory:
Fuzzy Relations
A fuzzy relation generalizes a classical relation into one that allows partial membership. It describes
a relationship that holds between two or more objects. Example: a fuzzy relation "Friend" describes
the degree of friendship between two persons (in contrast to either being a friend or not being a
friend in a classical relation).
A fuzzy relation is a mapping from the Cartesian space X × Y to the interval [0, 1], where the strength
of the mapping is expressed by the membership function of the relation, μR(x, y).
The "strength" of the relation between ordered pairs of the two universes is measured with a
membership function expressing various degrees of strength in [0, 1].

Fuzzy Cartesian Product


Let
A be a fuzzy set on universe X, and
B be a fuzzy set on universe Y;
then R = A × B is a fuzzy relation on X × Y, whose membership function is
Cartesian product (A × B): μA×B(x, y) = min(μA(x), μB(y))
MATLAB Code:
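The original MATLAB listing is not reproduced in this copy. As an illustrative sketch, the Cartesian product reduces to a pairwise minimum over all (x, y) pairs; a Python/NumPy version using the input vectors from the run shown below:

```python
import numpy as np

A = np.array([0.2, 0.3, 0.5, 0.6])  # fuzzy set on universe X
B = np.array([0.8, 0.6, 0.3])       # fuzzy set on universe Y

# mu_{A x B}(x, y) = min(mu_A(x), mu_B(y)) for every pair (x, y)
R = np.minimum.outer(A, B)

print(R.shape[0])  # number of elements in A
print(R.shape[1])  # number of elements in B
print(R)
```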
Output:
Assignment on fuzzy set operations
Enter First Matrix> [0.2 0.3 0.5 0.6]
Enter Second Matrix> [0.8 0.6 0.3]
4
3
0.2 0.2 0.2
0.3 0.3 0.3
0.5 0.5 0.3
0.6 0.6 0.3

Conclusion: Thus, we learnt how to create fuzzy relation by Cartesian product of any two fuzzy sets.
Assignment No: 03

Aim: Perform max-min composition on any two fuzzy relations.

Objectives:
1. To learn different fuzzy operations on fuzzy sets.

Software Requirements:
Ubuntu 18.04/ Windows 7+ OS with MATLAB/ Online OCTAVE

Hardware Requirements:
Pentium IV + system with latest configuration

Theory:
Fuzzy composition
Two fuzzy relations are given by
R = [0.6 0.3
     0.2 0.9]  and  S = [1.0 0.5 0.3
                         0.8 0.4 0.7]
(the entries can be read off from the worked steps below). Obtain the fuzzy relation T as a
composition of the two fuzzy relations.
Solution: The composition between two fuzzy relations is obtained by,
[a] Max-min composition
[b] Max-product composition
Max-min composition:
MT(x1,z1) = max[min(MR(x1,y1), MS(y1,z1)), min(MR(x1,y2), MS(y2,z1))]
          = max[min(0.6, 1), min(0.3, 0.8)]
          = max[0.6, 0.3] = 0.6
MT(x1,z2) = max[min(MR(x1,y1), MS(y1,z2)), min(MR(x1,y2), MS(y2,z2))]
          = max[min(0.6, 0.5), min(0.3, 0.4)]
          = max[0.5, 0.3] = 0.5
MT(x1,z3) = max[min(0.6, 0.3), min(0.3, 0.7)] = max[0.3, 0.3] = 0.3
MT(x2,z1) = max[min(0.2, 1), min(0.9, 0.8)] = max[0.2, 0.8] = 0.8
MT(x2,z2) = max[min(0.2, 0.5), min(0.9, 0.4)] = max[0.2, 0.4] = 0.4
MT(x2,z3) = max[min(0.2, 0.3), min(0.9, 0.7)] = max[0.2, 0.7] = 0.7
∴ T = R ∘ S = [0.6 0.5 0.3
               0.8 0.4 0.7]

MATLAB Code:
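The original MATLAB listing is not reproduced in this copy. As an illustrative sketch, here is max-min composition in Python/NumPy, checked against the worked example relations R and S from the theory section above:

```python
import numpy as np

def max_min(R, S):
    # T(i, k) = max over j of min(R[i, j], S[j, k])
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

R = np.array([[0.6, 0.3], [0.2, 0.9]])           # relation from the theory section
S = np.array([[1.0, 0.5, 0.3], [0.8, 0.4, 0.7]])
T = max_min(R, S)
print(T)
```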
Output:
Assignment on fuzzy set operations
Enter First Matrix> [0.3 0.6;0.2 0.9]
Enter Second Matrix> [0.1 0.9; 0.3 0.6]
m = 2
n = 2
a = 2
b = 2
c =
0.3000 0.6000
d =
0.1000
0.3000
f =
0.1000
0.3000
e =
0.1000 0.1000
0.3000 0.3000
h = 0.3000
c =
0.3000 0.6000
d =
0.9000
0.6000
f =
0.9000 0.6000
e =
0.3000 0.6000
0.3000 0.6000
h =
0.3000 0.3000
c =
0.2000 0.9000
d =
0.1000
0.3000
f =
0.1000
0.3000
e =
0.1000 0.1000
0.2000 0.3000
h =
0.3000 0.3000
0.2000 0
c =
0.2000 0.9000
d =
0.9000
0.6000
f =
0.9000
0.6000
e =
0.2000 0.9000
0.2000 0.6000
h =
0.3000 0.3000
0.2000 0.2000
the min max composition between two vectors is
h =
0.3000 0.3000
0.2000 0.2000

Conclusion: Thus, we learnt to perform max-min composition on any two fuzzy relations.
Assignment No: 04

Aim: Write a program to find the Boolean function implemented by the following single-layer perceptron.
Assume all activation functions to be the threshold function, which is 1 for all input values greater
than zero, and 0 otherwise.

Objectives:
1. To learn single layer perceptron.
2. To learn Boolean logic implementation using perceptron.

Software Requirements:
Ubuntu 18.04/ Windows 7+ OS with MATLAB/ Online OCTAVE

Hardware Requirements:
Pentium IV + system with latest configuration

Theory:
The most common Boolean functions are AND, OR and NOT. Boolean AND returns 1 only if
both inputs are 1, else 0; Boolean OR returns 1 if at least one input is 1, and returns 0 only if
both inputs are 0; and NOT returns the inverse of its input: if the input is 0 it returns 1, and if the
input is 1 it returns 0. To make this clear, the image below shows the truth table for the basic Boolean
functions.
The columns A and B are the inputs and column Z is the output. So, for the inputs A = 0, B = 0, the
output is Z = 0.

Perceptron:
A perceptron is the basic building block of a neural network. A perceptron represents a single neuron
in a human's brain; it is composed of inputs (Xm), weights (Wm), a bias, and an activation function
that produces an output. The inputs are converted into an ndarray, which is then matrix-multiplied
with another ndarray that holds the weights. Summing up the products and adding a bias gives the net
input, which is then passed into an activation function that determines whether the neuron needs to
fire an output or not.
The most common activation function used for classification is the sigmoid function, which is a great
function for classification (although sigmoid is not the leading activation function for the middle layers
of neural networks [ehem ReLU / Leaky ReLU], it is still widely used for final classifications).
The perceptron is simply separating the input into 2 categories, those that cause a fire, and those that
don't. It does this by looking at (in the 2-dimensional case) whether:
w1I1 + w2I2 < t
If the LHS is less than t, it doesn't fire; otherwise it fires. That is, it is drawing the line:
w1I1 + w2I2 = t
and looking at where the input point lies. Points on one side of the line fall into 1 category, points on
the other side fall into the other category. And because the weights and thresholds can be anything,
this is just any line across the 2-dimensional input space.
So what the perceptron is doing is simply drawing a line across the 2-d input space. Inputs to one
side of the line are classified into one category, inputs on the other side are classified into another.
e.g. the OR perceptron, w1=1, w2=1, t=0.5, draws the line:
I1 + I2 = 0.5
across the input space, thus separating the points (0,1),(1,0),(1,1) from the point (0,0):
As you might imagine, not every set of points can be divided by a line like this. Those that can be,
are called linearly separable. In 2 input dimensions, we draw a 1 dimensional line. In n dimensions,
we are drawing the (n-1) dimensional hyperplane:
w1I1 + .. + wnIn = t
Perceptron Learning Algorithm
We initialize w with some random vector. We then iterate over all the examples in the data (P ∪ N),
both positive and negative. Now, if an input x belongs to P, ideally the dot product w.x should be
greater than or equal to 0, because that is exactly what our perceptron requires; and if x belongs to N,
the dot product must be less than 0.
So, looking at the if conditions in the while loop:
Case 1: x belongs to P and its dot product w.x < 0
Case 2: x belongs to N and its dot product w.x ≥ 0
Only in these cases do we update our randomly initialized w, because Cases 1 and 2 violate the very
rule of the perceptron; otherwise, we don't touch w at all. We add x to w (ahem, vector addition, ahem)
in Case 1 and subtract x from w in Case 2.
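The learning rule above can be sketched in a few lines. This is an illustrative Python version, not the report's own program; it uses the equivalent mistake-driven form of the update (add x on a misclassified positive, subtract on a misclassified negative) with the threshold folded into a trainable bias weight, and the threshold activation from the Aim (1 when the sum is greater than zero, 0 otherwise):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    # Fold the threshold into an extra bias weight (last component of w)
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            out = 1 if w @ xi > 0 else 0
            # (target - out) is +1 in Case 1, -1 in Case 2, 0 when correct
            w += (target - out) * xi
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w_and = train_perceptron(X, np.array([0, 0, 0, 1]))  # learn Boolean AND
w_or = train_perceptron(X, np.array([0, 1, 1, 1]))   # learn Boolean OR
print(predict(w_and, X))
print(predict(w_or, X))
```

Both AND and OR are linearly separable, so the loop converges to weights that draw a separating line across the 2-d input space, as described above.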

Conclusion
Thus, we learned the implementation of a single-layer perceptron for the AND, OR and NOT Boolean
functions.
Assignment No: 05

Aim: Implement basic logic gates using McCulloch-Pitts or Hebb net neural networks.

Objectives:
1. To learn a simple neural network model.
2. To learn logic gate implementation using the McCulloch-Pitts model.

Software Requirements:
Ubuntu 18.04/ Windows 7+ OS with MATLAB/ Online OCTAVE
Hardware Requirements:
Pentium IV + system with latest configuration

Theory:
First artificial neurons: The McCulloch-Pitts model
The McCulloch-Pitts model was an extremely simple artificial neuron. The inputs could be either a
zero or a one. And the output was a zero or a one. And each input could be either excitatory or
inhibitory.
Now, the whole point is to sum the inputs. If an input is one and is excitatory in nature, it adds
one to the sum; if it is one and inhibitory, it subtracts one. This is done for all inputs, and
a final sum is calculated.
Now, if this final sum is less than some value (which you decide, say T), then the output is zero.
Otherwise, the output is one.
Here is a graphical representation of the McCulloch-Pitts model
In the figure, things are represented with named variables. The variables w1, w2 and w3 indicate which
input is excitatory and which one is inhibitory. These are called "weights". So, in this model, if a
weight is 1, it is an excitatory input; if it is -1, it is an inhibitory input.
x1, x2, and x3 represent the inputs. There could be more (or less) inputs if required. And accordingly,
there would be more 'w's to indicate if that particular input is excitatory or inhibitory.
Now, if you think about it, you can calculate the sum using the 'x's and 'w's... something like this:
sum = x1w1 + x2w2 + x3w3 + ...
This is what is called a 'weighted sum'.
Now that the sum has been calculated, we check if sum < T or not. If it is, then the output is made
zero. Otherwise, it is made a one.
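The weighted-sum-and-threshold rule just described can be sketched directly. This is an illustrative Python version; the gate figures did not survive in this copy, so the check below uses a single neuron with all-inhibitory weights as a NOR (the figure's construction uses two neurons, but the same truth table falls out here):

```python
def mp_neuron(inputs, weights, T):
    # Weighted sum of the inputs; output 0 if sum < T, otherwise 1
    s = sum(x * w for x, w in zip(inputs, weights))
    return 0 if s < T else 1

# NOR-like check: all weights inhibitory (-1), threshold T = 0.
# The output is 1 only when every input is zero.
for inputs in [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)]:
    print(inputs, "->", mp_neuron(inputs, (-1, -1, -1), 0))
```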
Now, using this simple neuron model, we can create some interesting things. Here are a few
examples:
NOR Gate
The figure above is a 3-input NOR gate. A NOR gate gives an output of 1 only when all inputs
(in this case x1, x2 and x3) are zero. You can try the different possible cases of inputs (they can be
either zero or one).
Note that this example uses two neurons. The first neuron receives the inputs you give; the second
neuron works on the output of the first neuron and has no clue what the initial inputs were.
NAND Gate
This figure shows how to create a 3-input NAND gate with these neurons. A NAND gate gives a zero
only when all inputs are 1. This gate needs 4 neurons: the outputs of the first three are the inputs for
the fourth neuron. You can try the different combinations of inputs to verify this.
The McCulloch-Pitts model is no longer used in practice. NOR and NAND gates already have extremely
efficient circuits, so it is pointless to redo the same thing with less efficient models; the point is to
use the "interconnections" and the advantages they offer.
It has been replaced by more advanced neurons. The inputs can have decimal values, and so can the
weights. And the neurons actually process the sum instead of just checking whether it is less than a
threshold.
MATLAB PROGRAM:
Generate the ANDNOT function using a McCulloch-Pitts neural net with a MATLAB program.
%ANDNOT function using McCulloch-Pitts neuron
clear;
clc;
% Getting weights and threshold value
disp('Enter the weights');
w1=input('Weight w1=');
w2=input('Weight w2=');
disp('Enter threshold value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
z=[0 0 1 0];
con=1;
while con
    zin = x1*w1 + x2*w2;
    for i=1:4
        if zin(i)>=theta
            y(i)=1;
        else
            y(i)=0;
        end
    end
    disp('Output of net=');
    disp(y);
    if y==z
        con=0;
    else
        disp('Net is not learning Enter another set of weights and threshold value');
        w1=input('Weight w1=');
        w2=input('Weight w2=');
        theta=input('theta=');
    end
end
disp('McCulloch Pitts Net for ANDNOT function');
disp('Weights of neuron');
disp(w1);
disp(w2);
disp('Threshold value=');
disp(theta);
Output:
Enter the weights
Weight w1=1
Weight w2=1
Enter threshold value
theta=1
Output of net=
0 1 1 1
Net is not learning Enter another set of weights and threshold value
Weight w1=1
Weight w2=-1
theta=1
Output of net=
0 0 1 0
McCulloch Pitts Net for ANDNOT function
Weights of neuron
1
-1
Threshold value= 1

Conclusion
Thus, we have studied the implementation of the ANDNOT logic gate using a McCulloch-Pitts neural
network.
