
LAB MANUAL

Class- Final Year B. Tech

Semester-VIII

Subject- Soft Computing

Computer Science and Technology Program

Institute Vision, Mission

Vision
To be a leader in engineering and technology education, a research centre of
global standards to provide valuable resources for industry and society through
development of competent technical human resources.

Mission
1) To undertake collaborative research projects that offer opportunities for consistent
interaction with industries.
2) To organize teaching learning programs to facilitate the development of
competent and committed professionals for practice, research and academics.
3) To develop technocrats of national & international stature committed to the task
of nation building.

Department Vision & Mission

Vision
To be a centre of academic excellence and research in the field of Computer
Science and Technology by imparting knowledge to students and facilitating research
activities that cater to the needs of industries and society.

Mission

1. To provide a learning environment that helps students enhance their
problem-solving skills, be successful in their professional careers, and
become lifelong learners, by offering a theoretical foundation in Computer
Science and Technology.
2. To prepare students in developing research, design, entrepreneur skills and
employability capabilities.
3. To establish Industry Institute Interaction to make students ready for industrial
environment.
4. To educate students about their professional and ethical responsibilities.

Index

Sr. No.   Contents
1.        List of Experiments
2.        Course Outcomes and Experiment Plan
3.        Study and Evaluation Scheme
4.        Experiment No. 1
5.        Experiment No. 2
6.        Experiment No. 3
7.        Experiment No. 4
8.        Experiment No. 5
9.        Experiment No. 6
10.       Experiment No. 7
11.       Experiment No. 8
12.       Experiment No. 9
13.       Experiment No. 10
14.       Experiment No. 11
15.       Experiment No. 12

List of Experiments

Sr. No.   Name of Experiment

1.   Write a program to implement logical XOR
2.   Write a program to implement logical AND using McCulloch-Pitts neuron model
3.   Write a program to implement logical XOR using McCulloch-Pitts neuron model
4.   Write a program to implement logical AND using Perceptron network
5.   Write a program to implement Adaline network
6.   Write a program to implement Madaline network for XOR function
7.   Write a program to implement Back propagation network
8.   Write a program to implement the various primitive operations of classical sets
9.   Write a program to implement various primitive operations on fuzzy sets with dynamic components
10.  Write a program to maximize f(x1, x2) = 4x1 + 3x2 using genetic algorithm
11.  Write a program to minimize f(x) = x² using genetic algorithm
12.  Write a program to implement Travelling Salesman Problem using genetic algorithm

Course Outcome & Experiment Plan


Course Outcomes:

CO1 Demonstrate different soft computing techniques like Genetic Algorithms, Fuzzy
Logic, Neural Networks and their combination.

CO2 Design and implement computing systems by using appropriate Artificial Neural
Network and tools.

CO3 Apply neural networks to pattern classification.

CO4 Apply the concepts of Fuzzy Logic, various fuzzy systems and their functions to
real-time systems.

CO5 Analyze genetic algorithms and their applications to solve engineering
optimization problems.

CO6 Apply soft computing techniques to solve engineering or real-life problems.

Experiment Plan

Experiment No.   Week No.   Experiment Name                                                                      Course Outcome

1    W1    Write a program to implement logical XOR                                                          CO1
2    W2    Write a program to implement logical AND using McCulloch-Pitts neuron model                       CO2
3    W3    Write a program to implement logical XOR using McCulloch-Pitts neuron model                       CO2
4    W4    Write a program to implement logical AND using Perceptron network                                 CO2
5    W5    Write a program to implement Adaline network                                                      CO3
6    W6    Write a program to implement Madaline network for XOR function                                    CO2
7    W7    Write a program to implement Back propagation network                                             CO2
8    W8    Write a program to implement the various primitive operations of classical sets                   CO4
9    W9    Write a program to implement various primitive operations on fuzzy sets with dynamic components   CO4
10   W10   Write a program to maximize f(x1, x2) = 4x1 + 3x2 using genetic algorithm                         CO5
11   W11   Write a program to minimize f(x) = x² using genetic algorithm                                     CO5
12   W12   Write a program to implement Travelling Salesman Problem using genetic algorithm                  CO6

Study and Evaluation Scheme
Course Code: CPE7025
Course Name: Soft Computing

Teaching Scheme: Theory 04, Practical 02, Tutorial --
Credits Assigned: Theory 04, Practical 01, Tutorial --, Total 05

Examination Scheme: Term Work 25, Oral 25, Total 50

Experiment No. : 1

Write a program to implement logical XOR

Title: Write a program to implement logical XOR.

Aim: To write a program to implement logical XOR.

Objectives: Understand the logical XOR operation, its algebraic expression and symbol,
and how to implement it programmatically.

Outcomes: The students will be able to learn the concept of logical XOR, its algebraic
expression and its symbol in detail.

Input: binary inputs (0, 1)

Output: 0 (false) when both inputs are equal, and 1 (true) when exactly one input is
true, as shown in the truth table.

Theory
The XOR gate (sometimes EOR or EXOR, pronounced Exclusive OR) is a
digital logic gate that gives a true (1 or HIGH) output when the number of true inputs is odd.
An XOR gate implements an exclusive or; that is, a true output results if one, and only one,
of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false
output results. XOR represents the inequality function, i.e., the output is true if the inputs are
not alike, otherwise the output is false. A way to remember XOR is "one or the other, but not
both".

INPUT          OUTPUT
A      B       Y
0      0       0
0      1       1
1      0       1
1      1       0

Table: Truth table
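Since XOR is exactly the inequality function described above, this can be checked directly in Python. The following short snippet is an illustration (not part of the manual's listing) asserting the equivalence on all binary inputs:

for a in (0, 1):
    for b in (0, 1):
        assert (a ^ b) == int(a != b)   # XOR agrees with "the inputs differ"
print("XOR equals the inequality function on binary inputs")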

Algorithm/Pseudo code:
# Program for implementation of XOR gate

import pandas as pd
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, True, True, False]
outputs = []

for test_input, correct_output in zip(test_inputs, correct_outputs):

output = int(test_input[0] ^ test_input[1])


outputs.append([test_input[0], test_input[1], output])

output_frame = pd.DataFrame(outputs, columns=['Input A', 'Input B', 'Output'])

print(output_frame.to_string(index=False))

Output:
Input A Input B Output
0 0 0
0 1 1
1 0 1
1 1 0

Example:

INPUT          OUTPUT
A      B       Y
0      0       0
0      1       1
1      0       1
1      1       0

Conclusion: Thus we have studied the concept of logical XOR, its algebraic expression and
its symbol in detail.

Experiment No. : 2

Write a program to implement logical AND using McCulloch-Pitts neuron model

Title: Write a program to implement logical AND using McCulloch-Pitts neuron model

Aim: To write a program to implement logical AND using the McCulloch-Pitts neuron model.

Objectives: Know the concept of logical AND using the McCulloch-Pitts neuron model, with an
example.

Outcomes: The students will be able to learn the logical AND operation, its truth table, and
its realization with the McCulloch-Pitts model.

Input: y_in = A + B (the weighted sum of the inputs)

Output: y = f(y_in)

Theory:

McCULLOCH-PITTS MODEL:

Every neuron model consists of a processing element with synaptic input connections and a
single output. The "neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold, theta.
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.
v. At each time step the neurons are simultaneously (synchronously) updated by summing the
weighted excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or
equal to the threshold and the neuron receives no inhibitory input.
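As a minimal sketch of these assumptions (an illustration only; the manual's own listing follows below), a McCulloch-Pitts neuron with unit excitatory weights and absolute inhibition can be written as:

def mcp_neuron(excitatory, inhibitory, threshold):
    # Any active inhibitory input vetoes firing outright (assumption iv).
    if any(inhibitory):
        return 0
    # Otherwise fire iff the unit-weight excitatory sum reaches the threshold.
    return 1 if sum(excitatory) >= threshold else 0

# AND of two excitatory inputs is obtained with threshold 2:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron([a, b], [], threshold=2))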

AND GATE:
It is a logic gate that implements conjunction: the output is high (1) only when both
inputs are high, and low (0) otherwise.

A      B       Y
0      0       0
0      1       0
1      0       0
1      1       1

Table: Truth table

IMPLEMENTATION OF McCULLOCH-PITTS MODEL:

Fig.: Architecture of AND gate

Algorithm/Pseudo code:
import numpy as np
import pandas as pd
def cal_output_and(threshold=0):
weight1 = 1
weight2 = 1
bias = 0

test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]


correct_outputs = [False, False, False, True]
outputs = []

for test_input, correct_output in zip(test_inputs, correct_outputs):


linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
output = int(linear_combination >= threshold)
is_correct_string = 'Yes' if output == correct_output else 'No'
outputs.append([test_input[0], test_input[1], linear_combination, output,
is_correct_string])

num_wrong = len([output[4] for output in outputs if output[4] == 'No'])


output_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear
Combination', ' Activation Output', ' Is Correct'])
if not num_wrong:
print('all correct for threshold {}.\n'.format(threshold))
else:
threshold = threshold + 1
cal_output_and(threshold)
print('{} wrong, for threshold {} \n'.format(num_wrong,threshold))
print(output_frame.to_string())

return threshold
t = cal_output_and()

Output:
all correct for threshold 2.

Input 1 Input 2 Linear Combination Activation Output Is Correct


0 0 0 0 0 Yes
1 0 1 1 0 Yes
2 1 0 1 0 Yes
3 1 1 2 1 Yes
2 wrong, for threshold 2

Input 1 Input 2 Linear Combination Activation Output Is Correct


0 0 0 0 0 Yes
1 0 1 1 1 No
2 1 0 1 1 No
3 1 1 2 1 Yes
3 wrong, for threshold 1

Input 1 Input 2 Linear Combination Activation Output Is Correct


0 0 0 0 1 No
1 0 1 1 1 No
2 1 0 1 1 No
3 1 1 2 1 Yes

Conclusion: Thus we have studied the logical AND operation, with an example, using the
McCulloch-Pitts model in detail.

Experiment No. : 3

Write a program to implement logical XOR using McCulloch-Pitts neuron model

Title: Write a program to implement logical XOR using McCulloch-Pitts neuron model

Aim: To write a program to implement logical XOR using the McCulloch-Pitts neuron model.

Objectives: To understand in detail logical XOR using the McCulloch-Pitts neuron model,
with an example.

Outcomes: The students will be able to learn the concept of the logical XOR operation, the
architecture of logical XOR, and logical XOR using the McCulloch-Pitts model.

Input: binary inputs (0, 1)

Output: The network takes two inputs x1 and x2 (0 is false, 1 is true). When both inputs are
equal, the output Y is false (0); when exactly one input is true, the output Y is true (1).

Theory:
McCULLOCH-PITTS MODEL:

Every neuron model consists of a processing element with synaptic input connections and a
single output. The "neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold, theta.
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.
v. At each time step the neurons are simultaneously (synchronously) updated by summing the
weighted excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or
equal to the threshold and the neuron receives no inhibitory input.

XOR GATE
The XOR (exclusive OR) gate gives a true output when the number of true inputs is odd. If
both inputs are true or both are false, the output is false. XOR gates are used to implement
binary addition in computers. [7] The truth table is shown below.

X1     X2      Y
0      0       0
0      1       1
1      0       1
1      1       0

IMPLEMENTATION OF MCCULLOCH PITTS MODEL:

Algorithm/Pseudo code:
import numpy as np

def step_function(ip, threshold=0):
    if ip >= threshold:
        return 1
    else:
        return 0

def cal_gate(x, w, b, threshold=0):


linear_combination = np.dot(w, x) + b
#print(linear_combination)
y = step_function(linear_combination,threshold)
#clear_output(wait=True)
return y
def AND_gate_ip(x):
w = np.array([1, 1])
b = -1.5
#threshold = cal_output_or()
return cal_gate(x, w, b)

def NOT_gate_ip(x):
w = -1
b = .5
#threshold = cal_output_not()
return cal_gate(x, w, b)

def OR_ip(x):
w = np.array([1, 1])
b = -0.5
return cal_gate(x, w, b)

def Logical_XOR(x):
A = AND_gate_ip(x)
C = NOT_gate_ip(A)
B = OR_ip(x)

AND_output = np.array([C, B])
output = AND_gate_ip(AND_output)
return output
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

for i in inputs:
    print(Logical_XOR(i))

Output:
0
1
1
0

Conclusion: Thus we have studied the concept of logical XOR using the McCulloch-Pitts
neuron model, with an example.

Experiment No. : 4

Write a program to implement logical AND using Perceptron network

Title: Write a program to implement logical AND using Perceptron network

Aim: To write a program to implement logical AND using a Perceptron network.

Objectives: To understand the implementation of logical AND using a perceptron network.

Outcomes: The students will be able to learn the concept of logical AND using a perceptron
network, with definitions and examples.

Input: 0, 1

Output: 0, 1

Theory:

A Perceptron is an algorithm for supervised learning of binary classifiers. This algorithm
enables neurons to learn and process elements in the training set one at a time.

There are two types of Perceptron: single layer and multilayer.

● Single layer Perceptrons can learn only linearly separable patterns.
● Multilayer Perceptrons, or feed forward neural networks with two or more layers, have
greater processing power.

The Perceptron algorithm learns the weights for the input signals in order to draw a linear
decision boundary. This enables you to distinguish between the two linearly separable
classes +1 and -1. A Perceptron is a function that maps its input "x", multiplied with the
learned weight coefficients, to an output value "f(x)":

f(x) = 1 if Σ (i = 1..M) w_i · x_i + b > 0, and 0 otherwise

In the equation given above:

"w" = vector of real-valued weights
"b" = bias (an element that adjusts the boundary away from the origin without any dependence
on the input value)
"x" = vector of input values

"M" = number of inputs to the Perceptron

The output can be represented as "1" or "0." It can also be represented as "1" or "-1"
depending on which activation function is used.
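The listing below uses hand-picked weights (w = [1, 1], b = -1.5). As a sketch of how such weights could instead be learned, the following snippet (an illustration, not the manual's listing) applies the classic perceptron learning rule w ← w + lr·(t − y)·x to the AND truth table:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 0, 0, 1])             # AND targets
w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for x, t in zip(X, T):
        y = int(np.dot(w, x) + b > 0)  # step activation
        w += lr * (t - y) * x          # update only when the output is wrong
        b += lr * (t - y)

print(w, b)                                    # learned weights and bias
print([int(np.dot(w, x) + b > 0) for x in X])  # expected [0, 0, 0, 1]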

Algorithm/Pseudo code:

# Program for implementation of perceptron network

import numpy as np

def step_function(ip, threshold=0):
    if ip >= threshold:
        return 1
    else:
        return 0

def cal_gate(x, w, b, threshold=0):


linear_combination = np.dot(w, x) + b
#print(linear_combination)
y = step_function(linear_combination,threshold)
#clear_output(wait=True)
return y

def AND_gate_ip(x):
w = np.array([1, 1])
b = -1.5

#threshold = cal_output_or()
return cal_gate(x, w, b)
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

for i in inputs:
    print(AND_gate_ip(i))

Output:
0
0
0
1

Example:

We shall see explicitly how one can construct simple networks that perform NOT, AND, and OR. It is then a
well-known result from logic that we can construct any logical function from these three operations. The
resulting networks, however, will usually have a much more complex architecture than a simple
Perceptron. We generally want to avoid decomposing complex problems into simple logic gates, by
finding the weights and thresholds that work directly in a Perceptron architecture.

Conclusion: Hence we studied the implementation of logical AND using a perceptron network.

Experiment No. : 5

Write a program to implement Adaline network

Title: Write a program to implement Adaline network

Aim: To write a program to implement an Adaline network.

Objectives: To understand the implementation of the Adaline network.

Outcomes: The students will be able to learn the concept of the Adaline network.

Input: bipolar training patterns with their targets

Output: the trained weights and the error at each training step

Theory:

ADALINE (Adaptive Linear Neuron, or later Adaptive Linear Element) is an early single-layer
artificial neural network and the name of the physical device that implemented this network. The
network uses memistors. It was developed by Professor Bernard Widrow and his graduate student
Ted Hoff at Stanford University in 1960. It is based on the McCulloch-Pitts neuron. It consists
of a weight, a bias and a summation function.

The difference between Adaline and the standard (McCulloch-Pitts) perceptron is that in the
learning phase, the weights are adjusted according to the weighted sum of the inputs (the net). In
the standard perceptron, the net is passed to the activation (transfer) function and the function's
output is used for adjusting the weights.

A multilayer network of ADALINE units is known as a MADALINE.

Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple
inputs and generates one output.
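To make the contrast with the perceptron concrete, here is a minimal sketch (an illustration on assumed bipolar OR data, not the manual's listing below) of the delta (LMS) rule, in which the error is computed from the raw net input before any activation is applied:

import numpy as np

X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
T = np.array([-1.0, 1.0, 1.0, 1.0])   # bipolar OR targets
w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(50):
    for x, t in zip(X, T):
        net = np.dot(w, x) + b        # error is taken on the raw net input
        w += lr * (t - net) * x       # LMS / delta-rule update
        b += lr * (t - net)

print(w, b)                  # approaches w = [0.5, 0.5], b = 0.5
print(np.sign(X @ w + b))    # expected [-1, 1, 1, 1]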

Algorithm/Pseudo code:

# Program for implementation of Adaptive Linear Neuron

import numpy as np
import matplotlib.pyplot as plt
import math

LEARNING_RATE = 0.5

def step(x):
    if x > 0:
        return 1
    else:
        return -1

INPUTS = np.array([[-1,-1,1],
[-1,1,1],
[1,-1,1],
[1,1,1] ])

OUTPUTS = np.array([[-1,1,1,1]]).T

WEIGHTS = np.array([[0.0], [0.0], [0.0]])
print("Initial weights {} before training".format(WEIGHTS))

errors=[]

for _ in range(1000):

for input_item,desired in zip(INPUTS, OUTPUTS):

ADALINE_OUTPUT = (input_item[0]*WEIGHTS[0]) +
(input_item[1]*WEIGHTS[1]) + (input_item[2]*WEIGHTS[2])

ADALINE_OUTPUT = step(ADALINE_OUTPUT)

ERROR = desired - ADALINE_OUTPUT

errors.append(ERROR)

WEIGHTS[0] = WEIGHTS[0] + LEARNING_RATE * ERROR * input_item[0]


WEIGHTS[1] = WEIGHTS[1] + LEARNING_RATE * ERROR * input_item[1]
WEIGHTS[2] = WEIGHTS[2] + LEARNING_RATE * ERROR * input_item[2]

print("Random Weights {} after training".format(WEIGHTS))


for input_item,desired in zip(INPUTS, OUTPUTS):
ADALINE_OUTPUT = (input_item[0]*WEIGHTS[0]) +
(input_item[1]*WEIGHTS[1]) + (input_item[2]*WEIGHTS[2])

ADALINE_OUTPUT = step(ADALINE_OUTPUT)

print("Actual {} desired {} ".format(ADALINE_OUTPUT,desired))

ax = plt.subplot(111)
ax.plot(errors, label='Training Errors')
ax.set_xscale("log")
plt.title("ADALINE Errors")
plt.legend()
plt.xlabel('Training step')
plt.ylabel('Error')
plt.show()

Example: (figure: plot of the training errors)

Conclusion: Hence we studied the concept of the Adaline network.

Experiment No. : 6

Write a program to implement Madaline network for XOR function

Title: Write a program to implement Madaline network for XOR function

Aim: To write a program to implement a Madaline network for the XOR function.

Objectives: To understand the implementation of a Madaline network.

Outcomes: The students will be able to learn the concept of the Madaline network.

Input: bipolar input patterns for the XOR function

Output: y = f(y_in), the output of a Madaline consisting of an input layer, a hidden layer of
Adaline units, and one output unit

Theory:

The basic structure of a MADALINE network consists of combining several ADALINEs, with their
corresponding activation functions, into a single forward structure. When suitable weights
are chosen, the network is capable of implementing complicated and nonlinearly separable
mappings such as the XOR gate problem.

Madaline Rule II (MRII)
Training algorithm – a trial-and-error procedure with a minimum disturbance principle (those
nodes that can affect the output error while incurring the least change in their weights should
have precedence in the learning process).

Procedure –
1. Input a training pattern.
2. Count the number of incorrect values in the output layer.
3. For all units on the output layer:
   3.1. Select the first previously unselected error node whose analog output is closest to zero
   (this node can reverse its bipolar output with the least change in its weights).
   3.2. Change the weights on the selected unit so that the bipolar output of the unit changes.
   3.3. Input the same training pattern.
   3.4. If the number of errors is reduced, accept the weight change; otherwise restore the
   original weights.
4. Repeat Step 3 for all layers except the input layer.
5. For all units on the output layer:
   5.1. Select the previously unselected pair of units whose outputs are closest to zero.
   5.2. Apply a weight correction to both units, in order to change their bipolar outputs.
   5.3. Input the same training pattern.
   5.4. If the number of errors is reduced, accept the correction; otherwise restore the
   original weights.
6. Repeat Step 5 for all layers except the input layer.
7. Steps 5 and 6 can be repeated with triplets, quadruplets or longer combinations of units
until satisfactory results are obtained.

The MRII learning rule considers a network with only one hidden layer. For networks with
more hidden layers, the back propagation learning strategy, discussed later, can be employed.

Algorithm/Pseudo code:

# XOR function using Madaline

import copy
import numpy as np

def Activation_function(val):
if val>=0:
return 1
else:
return -1

def Testing(mat_inputs_s,w11,w21,w12,w22,b1,b2,v1,v2,b3):
print("------------------------")
print("Testing for XOR GATE")
print("------------------------")
for i in range(len(mat_inputs_s)):
mat_inputs_x_i=list(mat_inputs_s[i].flat)
z_in_1=b1+mat_inputs_x_i[0]*w11+mat_inputs_x_i[1]*w21
z_in_2=b2+mat_inputs_x_i[0]*w12+mat_inputs_x_i[1]*w22
z1=Activation_function(z_in_1)
z2=Activation_function(z_in_2)
y_in=b3+z1*v1+z2*v2
y=Activation_function(y_in)
print("Input: "+ str(mat_inputs_x_i)+ " Output: "+str(y)+" (Value="+str(y_in)+")"
)

##XOR GATE => (2*AND NOT + OR)
def Madaline_DeltaRule(alpha):
print("------------------------")
print("MADALINE FOR XOR GATE with alpha ="+str(alpha))
print("------------------------")
mat_inputs_s=np.matrix([[1,1],[1,-1],[-1,1],[-1,-1]])
mat_target_t=[-1,1,1,-1]
v1=v2=b3=0.5

w11=w21=b1=0
w12=w22=b2=0
iterations=0
w11=0.05
w21=0.2
w12=0.1
w22=0.2
b1=0.3
b2=0.15

while(True):
iterations+=1

prev_w11=copy.deepcopy(w11);prev_w21=copy.deepcopy(w21);prev_w12=copy.deepcopy(w1
2);prev_w22=copy.deepcopy(w22)
for i in range(len(mat_inputs_s)):
mat_inputs_x_i=list(mat_inputs_s[i].flat)
z_in_1=b1+mat_inputs_x_i[0]*w11+mat_inputs_x_i[1]*w21
z_in_2=b2+mat_inputs_x_i[0]*w12+mat_inputs_x_i[1]*w22
z1=Activation_function(z_in_1)
z2=Activation_function(z_in_2)
y_in=b3+z1*v1+z2*v2
y=Activation_function(y_in)
##Error
if(mat_target_t[i]!=y):
if(mat_target_t[i]==1):
if(z_in_1>z_in_2):

b1= b1+ alpha*(1-z_in_1)


w11=w11+alpha*(1-z_in_1)*mat_inputs_x_i[0]
w21=w21+alpha*(1-z_in_1)*mat_inputs_x_i[1]
else:

b2= b2+ alpha*(1-z_in_2)


w12=w12+alpha*(1-z_in_2)*mat_inputs_x_i[0]
w22=w22+alpha*(1-z_in_2)*mat_inputs_x_i[1]

else:
if(z_in_1>=0):
#Update Adaline z1
b1= b1+ alpha*(-1-z_in_1)
w11=w11+alpha*(-1-z_in_1)*mat_inputs_x_i[0]
w21=w21+alpha*(-1-z_in_1)*mat_inputs_x_i[1]
if(z_in_2>=0):
#Update Adaline z2
b2= b2+ alpha*(-1-z_in_2)
w12=w12+alpha*(-1-z_in_2)*mat_inputs_x_i[0]
w22=w22+alpha*(-1-z_in_2)*mat_inputs_x_i[1]

if(prev_w11==w11 and prev_w21==w21 and prev_w12==w12 and


prev_w22==w22):
print("------------------------")
print("Stopping Condition satisfied. Weights stopped changing.")
print("Total Iterations = "+str(iterations))
print("------------------------")
print("Final Weights:")
print("------------------------")
print("Adaline Z1:")
print("w11 = "+str(w11))
print("w21 = "+str(w21))
print("b1 = "+str(b1))
print("------------------------")
print("Adaline Z2:")
print("w12 = "+str(w12))
print("w22 = "+str(w22))
print("b2 = "+str(b2))
print("------------------------")
Testing(mat_inputs_s,w11,w21,w12,w22,b1,b2,v1,v2,b3)
break

print("Iteration = " +str(iterations))


print("------------------------")
print("Weights till now:")
print("------------------------")
print("Adaline Z1:")
print("w11 = "+ str(w11) )
print("w21 = "+ str(w21))
print("b1 = "+str(b1))
print("------------------------")
print("Adaline Z2:")
print("w12 = "+ str(w12))

print("w22 = "+ str(w22))
print("b2 = "+ str(b2))
print("------------------------")

##Alpha=0.05
Madaline_DeltaRule(0.05)
##Alpha=0.1
Madaline_DeltaRule(0.1)
##Alpha=0.5
Madaline_DeltaRule(0.5)

Output:
------------------------
Weights till now:
------------------------
Adaline Z1:
w11 = 1.3203125000000002
w21 = -1.3390625
b1 = -1.0671875000000002
------------------------
Adaline Z2:
w12 = -1.2921875
w22 = 1.2859375
b2 = -1.0765624999999999
------------------------
------------------------
Stopping Condition satisfied. Weights stopped changing.
Total Iterations = 3
------------------------
Final Weights:
------------------------
Adaline Z1:
w11 = 1.3203125000000002
w21 = -1.3390625
b1 = -1.0671875000000002
------------------------
Adaline Z2:
w12 = -1.2921875
w22 = 1.2859375
b2 = -1.0765624999999999
------------------------
------------------------
Testing for XOR GATE
------------------------
Input: [1, 1] Output: -1 (Value=-0.5)
Input: [1, -1] Output: 1 (Value=0.5)
Input: [-1, 1] Output: 1 (Value=0.5)
Input: [-1, -1] Output: -1 (Value=-0.5)

Conclusion: Hence we studied the implementation of a Madaline network.

Experiment No. : 7

Write a program to implement Back propagation network

Title: Write a program to implement Back propagation network

Aim: To write a program to implement a Back propagation network.

Objectives: To understand and implement the XOR operation using a Back Propagation network.

Outcomes: The students will be able to learn the XOR operation by using the technique of the
Back Propagation Network.

Input: binary (0, 1)

Output: y_k = f(y_in_k)

Theory:

Back propagation:

The back propagation algorithm was originally introduced in the 1970s, but its importance wasn't
fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald
Williams.

Back propagation is a supervised learning algorithm for training Multi-layer Perceptrons
(Artificial Neural Networks).

The Back propagation algorithm looks for the minimum value of the error function in weight
space using a technique called the delta rule or gradient descent. The weights that minimize the
error function are then considered to be a solution to the learning problem.
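The delta rule applied by the listing below can be seen in miniature on a single sigmoid unit. The following sketch (illustrative values, not the manual's listing) performs one gradient-descent step for the squared error E = (y − t)²/2:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 1.0])    # one training input (illustrative values)
t = 1.0                     # its target
w = np.array([0.1, -0.2])   # current weights
lr = 0.5

y = sigmoid(np.dot(w, x))          # forward pass
delta = (y - t) * y * (1.0 - y)    # dE/dnet, using sigmoid'(net) = y(1 - y)
w = w - lr * delta * x             # gradient step: dE/dw = delta * x
print(w)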

Algorithm/Pseudo code:

import numpy as np

def nonlin(x,deriv=False):
if(deriv==True):
return x*(1-x)

return 1/(1+np.exp(-x))

X = np.array([[0,0,1],
[0,1,1],
[1,0,1],

[1,1,1]])

y = np.array([[0],
[1],
[1],
[0]])

np.random.seed(1)

syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

for j in range(60000):

# Feed forward through layers 0, 1, and 2


k0 = X
k1 = nonlin(np.dot(k0,syn0))
k2 = nonlin(np.dot(k1,syn1))

# how much did we miss the target value?


k2_error = y - k2

if (j% 10000) == 0:
print("Error:" + str(np.mean(np.abs(k2_error))))

k2_delta = k2_error*nonlin(k2,deriv=True)

k1_error = k2_delta.dot(syn1.T)

k1_delta = k1_error * nonlin(k1,deriv=True)

syn1 += k1.T.dot(k2_delta)
syn0 += k0.T.dot(k1_delta)

Output:
Error:0.4964100319027255
Error:0.008584525653247157
Error:0.0057894598625078085
Error:0.004629176776769985
Error:0.0039587652802736475
Error:0.003510122567861678

Each hidden neuron's net input is squashed with the logistic function to produce that neuron's
output, and the same computation is carried out for every hidden neuron. The process is then
repeated for the output layer neurons, using the outputs of the hidden layer neurons as their
inputs.

Conclusion: Hence we implemented the Back propagation network.

Experiment No. : 8

Write a program to implement the various primitive operations of classical sets

Title: Write a program to implement the various primitive operations of classical sets

Aim: To write a program to implement the various primitive operations of classical sets.

Objectives: To understand and perform various set operations such as union, intersection,
difference and complement.

Outcomes: The students will be able to perform the various primitive operations of classical sets.

Input: Enter member of sets A & B

Output: Get the output of performing various numerical operation on given set A & B

Theory:

A set is defined as a collection of objects which share certain characteristics. A classical
set is a collection of distinct objects; e.g. a user may define a classical set of negative
integers, or a set of students with passing grades. Each individual entity in a set is called
a member or an element of the set. A classical set divides the universe into exactly two
groups: members and non-members.

Properties of Classical (Crisp) Sets:

● Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A
● Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C
● Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
● Idempotency: A ∪ A = A; A ∩ A = A
● Identity: A ∪ ∅ = A; A ∩ X = A; A ∩ ∅ = ∅; A ∪ X = X
● Transitivity: If A ⊆ B and B ⊆ C, then A ⊆ C
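These identities can be verified directly with Python's built-in sets. The following quick sketch (illustrative, with an assumed third set C for the three-set laws) checks commutativity and distributivity:

A = {0, 2, 4, 6, 8}
B = {1, 2, 3, 4, 5}
C = {4, 5, 6, 7}   # assumed extra set for the three-set laws

assert A | B == B | A and A & B == B & A    # commutativity
assert A | (B & C) == (A | B) & (A | C)     # distributivity
assert A & (B | C) == (A & B) | (A & C)
print("the properties hold for this example")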

Algorithm/Pseudo code:

# Classical set operations

# Define the sets
A = {0, 2, 4, 6, 8}
B = {1, 2, 3, 4, 5}

print("Union :", A | B)

print("Intersection :", A & B)

print("Difference :", A - B)

print("Symmetric difference :", A ^ B)

Output:
Union : {0, 1, 2, 3, 4, 5, 6, 8}
Intersection : {2, 4}
Difference : {0, 8, 6}
Symmetric difference : {0, 1, 3, 5, 6, 8}

Conclusion: Thus we have studied the classical sets and its primitive operations.

Experiment No. : 9

Write a program to implement various primitive operations on fuzzy sets with dynamic components

Title: Write a program to implement various primitive operations on fuzzy sets with dynamic
Components

Aim: To write a program to implement various primitive operations on fuzzy sets with dynamic
Components.

Objectives: To understand and perform various fuzzy set operations such as union,
intersection, complement and difference.

Outcomes: The students will be able to perform various fuzzy set operations such as union,
intersection, complement and difference.

Input: Enter member of fuzzy sets A & B

Output: Get the output of performing various fuzzy set operation on given set A & B

Theory:

A fuzzy set operation is an operation on fuzzy sets. These operations are generalizations of
crisp set operations. There is more than one possible generalization. The most widely used
operations are called standard fuzzy set operations. There are three such operations: fuzzy
complement, fuzzy intersection, and fuzzy union.
A fuzzy set admits gradation, such as all tones between black and white. A fuzzy set has a
graphical description that expresses how the transition from membership to non-membership
takes place. This graphical description is called a membership function.
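On finite universes the standard operations reduce to element-wise max, min and complement of the membership grades. The following sketch (illustrative, using the same membership values as the listing below) shows them on plain dictionaries:

A = {'x1': 0.5, 'x2': 0.7, 'x3': 0.0}
B = {'x1': 0.8, 'x2': 0.2, 'x3': 1.0}

union        = {k: max(A[k], B[k]) for k in A}     # standard fuzzy union
intersection = {k: min(A[k], B[k]) for k in A}     # standard fuzzy intersection
complement_A = {k: round(1 - A[k], 2) for k in A}  # standard fuzzy complement

print(union)         # {'x1': 0.8, 'x2': 0.7, 'x3': 1.0}
print(intersection)  # {'x1': 0.5, 'x2': 0.2, 'x3': 0.0}
print(complement_A)  # {'x1': 0.5, 'x2': 0.3, 'x3': 1.0}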

Algorithm/Pseudo code:

# Program for calculating union, intersection and complement for a given fuzzy set

import numpy as np

class FuzzySet:
def __init__(self, iterable: any):
self.f_set = set(iterable)
self.f_list = list(iterable)
self.f_len = len(iterable)
for elem in self.f_set:
if not isinstance(elem, tuple):
raise TypeError("No tuples in the fuzzy set")
if not isinstance(elem[1], float):
raise ValueError("Probabilities not assigned to elements")

def __or__(self, other):

# fuzzy set union
if len(self.f_set) != len(other.f_set):
raise ValueError("Length of the sets is different")
f_set = [x for x in self.f_set]
other = [x for x in other.f_set]
return FuzzySet([f_set[i] if f_set[i][1] > other[i][1] else other[i]
for i in range(len(self))])

def __and__(self, other):


# fuzzy set intersection
if len(self.f_set) != len(other.f_set):
raise ValueError("Length of the sets is different")
f_set = [x for x in self.f_set]
other = [x for x in other.f_set]

return FuzzySet([f_set[i] if f_set[i][1] < other[i][1] else other[i]


for i in range(len(self))])

def __invert__(self):
f_set = [x for x in self.f_set]
for indx, elem in enumerate(f_set):
f_set[indx] = (elem[0], float(round(1 - elem[1], 2)))
return FuzzySet(f_set)

def __sub__(self, other):


if len(self) != len(other):
raise ValueError("Length of the sets is different")
return self & ~other

def __mul__(self, other):


if len(self) != len(other):
raise ValueError("Length of the sets is different")
return FuzzySet([(self[i][0], self[i][1] * other[i][1]) for i in
range(len(self))])

def __mod__(self, other):


# cartesian product
print(f'The size of the relation will be: {len(self)}x{len(other)} ')
mx = self
mi = other
tmp = [[] for i in range(len(mx))]
i = 0
for x in mx:
for y in mi:
tmp[i].append(min(x[1], y[1]))
i += 1
return np.array(tmp)

@staticmethod
def max_min(array1: np.ndarray, array2: np.ndarray):
tmp = np.zeros((array1.shape[0], array2.shape[1]))
t = list()
for i in range(len(array1)):
for j in range(len(array2[0])):
for k in range(len(array2)):
t.append(round(min(array1[i][k], array2[k][j]), 2))
tmp[i][j] = max(t)

t.clear()
return tmp

def __len__(self):
self.f_len = sum([1 for i in self.f_set])
return self.f_len

def __str__(self):
return f'{[x for x in self.f_set]}'

def __getitem__(self, item):


return self.f_list[item]

def __iter__(self):
for i in range(len(self)):
yield self[i]

a = FuzzySet({('x1', 0.5), ('x2', 0.7), ('x3', 0.0)})


b = FuzzySet({('x1', 0.8), ('x2', 0.2), ('x3', 1.0)})
c = FuzzySet({('x', 0.3), ('y', 0.3), ('z', 0.5)})
x = FuzzySet({('a', 0.5), ('b', 0.3), ('c', 0.7)})
y = FuzzySet({('a', 0.6), ('b', 0.4)})
print(f'a -> {a}')
print(f'b -> {b}')
print(f'Fuzzy union: \n{a | b}')
print(f'Fuzzy intersection: \n{a & b}')
print(f'Fuzzy inversion of b: \n{~b}')
print(f"Fuzzy inversion of a: \n {~a}")
print(f'Fuzzy Subtraction: \n{a - b}')

r = np.array([[0.6, 0.6, 0.8, 0.9], [0.1, 0.2, 0.9, 0.8], [0.9, 0.3, 0.4,
0.8], [0.9, 0.8, 0.1, 0.2]])
s = np.array([[0.1, 0.2, 0.7, 0.9], [1.0, 1.0, 0.4, 0.6], [0.0, 0.0, 0.5,
0.9], [0.9, 1.0, 0.8, 0.2]])
print(f"Max Min: of \n{r} \nand \n{s}\n:\n\n")

print(FuzzySet.max_min(r, s))

Output:
a -> [('x3', 0.0), ('x1', 0.5), ('x2', 0.7)]
b -> [('x1', 0.8), ('x3', 1.0), ('x2', 0.2)]
Fuzzy union:
[('x1', 0.8), ('x3', 1.0), ('x2', 0.7)]
Fuzzy intersection:
[('x3', 0.0), ('x1', 0.5), ('x2', 0.2)]
Fuzzy inversion of b:
[('x3', 0.0), ('x1', 0.2), ('x2', 0.8)]
Fuzzy inversion of a:
[('x1', 0.5), ('x3', 1.0), ('x2', 0.3)]
Fuzzy Subtraction:
[('x3', 0.0), ('x1', 0.2), ('x2', 0.7)]
Max Min: of

[[0.6 0.6 0.8 0.9]
[0.1 0.2 0.9 0.8]
[0.9 0.3 0.4 0.8]
[0.9 0.8 0.1 0.2]]
and
[[0.1 0.2 0.7 0.9]
[1. 1. 0.4 0.6]
[0. 0. 0.5 0.9]
[0.9 1. 0.8 0.2]]

1. The membership function of the fuzzy set of real numbers "close to 1" can be defined as
A(t) = exp(−β(t − 1)²), where β is a positive real number.

A membership function for "x is close to 1".
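Evaluating this membership function numerically shows the intended behaviour: grades near 1 for t close to 1, falling off symmetrically on both sides. A small sketch (β = 4 is an assumed illustrative value):

import math

beta = 4.0   # assumed illustrative value; any positive real number works
for t in (0.0, 0.5, 0.9, 1.0, 1.1, 2.0):
    print(t, round(math.exp(-beta * (t - 1.0) ** 2), 3))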

Conclusion: Thus we have studied the implementation of various primitive operations on fuzzy
sets with dynamic components.

Experiment No. : 10

Write a program to maximize f(x1, x2) = 4x1 + 3x2 using genetic algorithm

Title: Write a program to maximize f(x1, x2) = 4x1 + 3x2 using genetic algorithm

Aim: To write a program to maximize f(x1, x2) = 4x1 + 3x2 using a genetic algorithm.

Objectives: To understand the concept of the genetic algorithm.

Outcomes: The students will be able to implement a genetic algorithm to maximize the given
function.

Input: f(x1, x2) = 4x1 + 3x2

Output: the best result found in each generation of the genetic algorithm

Theory:

A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s theory of natural
evolution. This algorithm reflects the process of natural selection where the fittest individuals are

selected for reproduction in order to produce offspring of the next generation.


Five phases are considered in a genetic algorithm.

1. Initial population

2. Fitness function

3. Selection

4. Crossover

5. Mutation

1. Initial Population

The process begins with a set of individuals which is called a Population. Each individual is a
solution to the problem you want to solve.

An individual is characterized by a set of parameters (variables) known as Genes. Genes are


joined into a string to form a Chromosome (solution).

In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of
an alphabet. Usually, binary values are used (string of 1s and 0s). We say that we encode the
genes in a chromosome.

2. Fitness Function

The fitness function determines how fit an individual is (the ability of an individual to compete


with other individuals). It gives a fitness score to each individual. The probability that an
individual will be selected for reproduction is based on its fitness score.

3. Selection

The idea of selection phase is to select the fittest individuals and let them pass their genes to the
next generation. Two pairs of individuals (parents) are selected based on their fitness scores.
Individuals with high fitness have more chance to be selected for reproduction.

4. Crossover

Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be
mated, a crossover point is chosen at random from within the genes.

5. Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low
random probability. This implies that some of the bits in the bit string can be flipped.
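Before the full listing, note that the fitness phase for this problem is just a matrix product of the population with the coefficient vector. A tiny sketch (illustrative, assuming the stated objective f(x1, x2) = 4x1 + 3x2):

import numpy as np

coeffs = np.array([4.0, 3.0])                        # f(x1, x2) = 4*x1 + 3*x2
population = np.random.uniform(-4, 4, size=(6, 2))   # 6 chromosomes, 2 genes each
fitness = population @ coeffs                        # evaluate f row by row
print(population[np.argmax(fitness)], fitness.max()) # fittest individual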
Algorithm/Pseudo code:

import numpy

def cal_pop_fitness(equation_inputs, pop):


fitness = numpy.sum(pop*equation_inputs, axis=1)
return fitness

def select_mating_pool(pop, fitness, num_parents):

# Selecting the best individuals in the current generation as parents for producing the offspring
of the next generation.
parents = numpy.empty((num_parents, pop.shape[1]))
for parent_num in range(num_parents):
max_fitness_idx = numpy.where(fitness == numpy.max(fitness))
max_fitness_idx = max_fitness_idx[0][0]
parents[parent_num, :] = pop[max_fitness_idx, :]
fitness[max_fitness_idx] = -99999999999
return parents

def crossover(parents, offspring_size):


offspring = numpy.empty(offspring_size)
# The point at which crossover takes place between two parents. Usually it is at the center.
crossover_point = numpy.uint8(offspring_size[1]/2)

for k in range(offspring_size[0]):
# Index of the first parent to mate.
parent1_idx = k%parents.shape[0]
# Index of the second parent to mate.
parent2_idx = (k+1)%parents.shape[0]
# The new offspring will have its first half of its genes taken from the first parent.
offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point]
# The new offspring will have its second half of its genes taken from the second parent.
offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:]
return offspring

def mutation(offspring_crossover):
    # Mutation changes a single gene in each offspring randomly.
    for idx in range(offspring_crossover.shape[0]):
        # The random value to be added to the gene.
        random_value = numpy.random.uniform(-1.0, 1.0, 1)
        # Add it to the last gene (the original listing indexed gene 4,
        # which is out of range for a 2-gene chromosome).
        offspring_crossover[idx, -1] = offspring_crossover[idx, -1] + random_value
    return offspring_crossover

equation_inputs = [4, -2]
# Note: the stated objective f(x1, x2) = 4x1 + 3x2 corresponds to [4, 3];
# the original listing used [4, -2], which is what the sample output below
# was produced with.

# Number of the weights we are looking to optimize.
num_weights = 2

sol_per_pop = 4
num_parents_mating = 4   # equal to sol_per_pop, so no crossover offspring are produced

# Defining the population size.


pop_size = (sol_per_pop,num_weights) # The population will have sol_per_pop chromosome
where each chromosome has num_weights genes.
#Creating the initial population.
new_population = numpy.random.uniform(low=-4.0, high=4.0, size=pop_size)
print(new_population)

num_generations = 5
for generation in range(num_generations):
print("Generation : ", generation)
# Measuring the fitness of each chromosome in the population.
fitness = cal_pop_fitness(equation_inputs, new_population)

# Selecting the best parents in the population for mating.


parents = select_mating_pool(new_population, fitness,
num_parents_mating)

# Generating next generation using crossover.


offspring_crossover = crossover(parents,

offspring_size=(pop_size[0]-parents.shape[0], num_weights))

# Adding some variations to the offspring using mutation.


offspring_mutation = mutation(offspring_crossover)

# Creating the new population based on the parents and offspring.


new_population[0:parents.shape[0], :] = parents
new_population[parents.shape[0]:, :] = offspring_mutation

# The best result in the current iteration.


print("Best result : ", numpy.max(numpy.sum(new_population*equation_inputs, axis=1)))

# Getting the best solution after finishing all generations.


#At first, the fitness is calculated for each solution in the final generation.
fitness = cal_pop_fitness(equation_inputs, new_population)
# Then return the index of that solution corresponding to the best fitness.
best_match_idx = numpy.where(fitness == numpy.max(fitness))

print("Best solution : ", new_population[best_match_idx, :])


print("Best solution fitness : ", fitness[best_match_idx])

Output:
[[-2.19432529 1.70391184]
[ 0.47773586 -3.89955216]]
Generation : 0
Best result : 9.710047743186704
Generation : 1
Best result : 9.710047743186704
Generation : 2
Best result : 9.710047743186704
Generation : 3
Best result : 9.710047743186704

Generation : 4
Best result : 9.710047743186704
Best solution : [[[ 0.47773586 -3.89955216]]]
Best solution fitness : [9.71004774]

Conclusion: Thus we have studied the program to maximize f(x1, x2) = 4x1 + 3x2 using a
genetic algorithm.

Experiment No. : 11

Write a program to minimize f(x) = x² using genetic algorithm

Title: Write a program to minimize f(x) = x² using genetic algorithm

Aim: To write a program to minimize f(x) = x² using a genetic algorithm.

Objectives: To understand the use of a genetic algorithm for minimization.

Outcomes: The students will be able to implement a program using a genetic algorithm.

Input: a randomly initialized population of candidate solutions

Output: the objective values, fitness values and selected population after each iteration

Theory:

A genetic algorithm is a search heuristic that is inspired by Charles Darwin's theory of natural
evolution. This algorithm reflects the process of natural selection, where the fittest individuals
are selected for reproduction in order to produce the offspring of the next generation.

1. Initial Population
The process begins with a set of individuals which is called a Population. Each individual is a
solution to the problem you want to solve. An individual is characterized by a set of parameters
(variables) known as Genes. Genes are joined into a string to form a Chromosome (solution). In a
genetic algorithm, the set of genes of an individual is represented using a string, in terms of an
alphabet. Usually, binary values are used (strings of 1s and 0s). We say that we encode the genes
in a chromosome.

2. Fitness Function

The fitness function determines how fit an individual is (the ability of an individual to compete


with other individuals). It gives a fitness score to each individual. The probability that an
individual will be selected for reproduction is based on its fitness score.

3. Selection

The idea of selection phase is to select the fittest individuals and let them pass their genes to the
next generation. Two pairs of individuals (parents) are selected based on their fitness scores.
Individuals with high fitness have more chance to be selected for reproduction.

4. Crossover

Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be
mated, a crossover point is chosen at random from within the genes.

5. Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low
random probability. This implies that some of the bits in the bit string can be flipped.
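The listing below selects parents by roulette-wheel selection: cumulative selection probabilities are built from the fitness values and compared against a uniform random draw. A minimal sketch (illustrative fitness values assumed):

import random

fitness = [0.2, 0.2, 0.2, 0.5, 0.2, 0.2]   # assumed illustrative values
total = sum(fitness)
cum, running = [], 0.0
for f in fitness:
    running += f / total
    cum.append(running)

r = random.random()                        # one spin of the wheel
selected = next((i for i, c in enumerate(cum) if r <= c), len(cum) - 1)
print("selected index:", selected)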
Algorithm/Pseudo code:

import random
import math

def objective(m):  # Objective function: x^2 evaluated on the first gene
    return abs(m[0] * m[0])

def fitness(m):  # Fitness function
    return 1 / (1 + m)

def probability(m, n):  # Probability function
    return m / n

def crossover(m, n):  # One-point crossover
    pt = random.randint(0, 3)
    return m[:pt + 1] + n[pt + 1:]

print("Minimize x^2")
pop1 = []
# Initialize random population
for i in range(6):
    pop1.append([])
    for j in range(4):
        pop1[i].append(random.randint(0, 30))
print("Initial Population: ")
for i in range(6):
    print(pop1[i])
# Maximum iterations 10
for it in range(10):
    obj = [0] * 6
    print("Objective Function: ")
    for i in range(6):
        obj[i] = objective(pop1[i])
        print(obj[i])
    fit = [0] * 6
    print("Fitness Value: ")
    for i in range(6):
        fit[i] = fitness(obj[i])
        print(fit[i])
    print("Total fitness: ", sum(fit))
    prob = [0] * 6
    # Probability calculation
    for i in range(6):
        prob[i] = probability(fit[i], sum(fit))
    # Cumulative probability calculation
    cmp = [0] * 6
    sum1 = 0
    for i in range(6):
        sum1 += prob[i]
        cmp[i] = sum1
    # Roulette wheel selection
    R = [0] * 6
    for i in range(6):
        R[i] = random.random()
    new_pop = []
    for i in range(6):
        for j in range(6):
            if R[i] <= cmp[j]:
                new_pop.append(pop1[j])
                break
    print("After Selection: ", new_pop)

    # Crossover
    cr = 0.25  # crossover rate
    CR = [0] * 6
    for i in range(6):
        CR[i] = random.random()
    par = []
    par_index = []
    for i in range(6):
        if CR[i] < cr:
            par.append(new_pop[i])
            par_index.append(i)
    x = len(par)
    for i in range(x):
        a = random.randint(0, x - 1)
        b = random.randint(0, x - 1)
        new_pop[par_index[i]] = crossover(par[a], par[b])
    # Mutation
    mr = 0.1  # mutation rate
    total_mut = math.floor(24 * mr)
    mut_index = []
    mut_value = []
    for i in range(total_mut):
        mut_index.append(random.randint(0, 23))
        mut_value.append(random.randint(0, 30))
        cr_num = mut_index[i] // 4   # which chromosome
        gen_num = mut_index[i] % 4   # which gene within it
        new_pop[cr_num][gen_num] = mut_value[i]

    print("After iteration: ", it)
    print(new_pop)
    print(obj)
    pop1 = new_pop

Output:

After Selection: [[1, 7, 29, 27], [1, 7, 29, 27], [1, 7, 29, 27], [1, 7,
29, 27], [1, 7, 29, 27], [1, 7, 29, 27]]
After iteration: 8
[[2, 7, 9, 27], [2, 7, 9, 27], [2, 7, 9, 27], [1, 7, 29, 27], [2, 7, 9,
27], [2, 7, 9, 27]]
[196, 1, 196, 1, 1, 1]
Objective Function:
4
4
4
1
4
4
Fitness Value:
0.2
0.2
0.2
0.5
0.2
0.2
Total fitness: 1.5
After Selection: [[1, 7, 29, 27], [2, 7, 9, 27], [1, 7, 29, 27], [1, 7,
29, 27], [2, 7, 9, 27], [2, 7, 9, 27]]
After iteration: 9
[[1, 7, 29, 27], [17, 7, 9, 27], [1, 20, 29, 27], [1, 20, 29, 27], [17, 7,
9, 27], [17, 7, 9, 27]]
[4, 4, 4, 1, 4, 4]

Conclusion: Thus we have studied the genetic operations involved, including selection,
crossover and mutation.

Experiment No. : 12
Write a program to implement Travelling Salesman Problem using genetic algorithm

Title: Write a program to implement Travelling Salesman Problem using genetic algorithm

Aim: To write a program to implement the Travelling Salesman Problem using a genetic algorithm.

Objectives: To understand the concept of genetic algorithm using travelling salesman problem.

Outcomes: The students will be able to implement Travelling Salesman Problem using genetic
algorithm.

Input: a set of randomly generated city coordinates

Output: the shortest route found and a plot of that route

Theory:

The travelling salesman problem is a classic combinatorial optimization problem; exact
approaches such as branch and bound exist, but the problem is NP-complete, which motivates
heuristic approaches such as genetic algorithms. It is also popularly known as the Travelling
Salesperson Problem. The TSP can be stated as: "From a given set of N cities and the distance
between each pair of cities, find the minimum-length path that covers each and every city
exactly once (without repetition of any path) and terminates the traversal at the starting
point, the city from which the traversal was initiated."
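A candidate solution in the listing below is simply an ordering of the cities, and its cost is the summed length of consecutive legs. A minimal sketch (illustrative; it uses math.dist from Python 3.8+ and closes the loop back to the start, as the problem statement requires):

import math
import random

cities = [(random.random(), random.random()) for _ in range(5)]
tour = list(range(5))
random.shuffle(tour)   # a tour is a random permutation of the city indices

def tour_length(tour, cities):
    # Sum the legs between consecutive cities, closing the loop at the end.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

print(tour, round(tour_length(tour, cities), 3))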

Algorithm/Pseudo code:

from collections import defaultdict
from functools import partial

import matplotlib.pyplot as plt
import random
import math

def create_point():

return random.random(), random.random()

def distance(orig, dest):

return math.sqrt((dest[0] - orig[0]) ** 2 + (dest[1] - orig[1]) ** 2)

def compute_distance_matrix(point_set):

distance_matrix = defaultdict(dict)
for orig in point_set:
for dest in point_set:
distance_matrix[orig][dest] = distance_matrix[dest][orig] = distance(orig, dest)
return distance_matrix

def compute_solution_distance(solution, distance_matrix):

total_distance = 0

for i in range(len(solution) - 1):


total_distance += distance_matrix[solution[i]][solution[i + 1]]

return total_distance

def create_individual(point_set):

points = list(point_set)
random.shuffle(points)

return points

def create_population(n_individuals, point_set, distance_matrix):

individuals = [create_individual(point_set) for _ in


range(n_individuals)]
distances = list(map(partial(compute_solution_distance,
distance_matrix = distance_matrix), individuals))

return sorted(zip(individuals, distances), key = lambda x: x[1])

def plot_result(solution):

xs = [point[0] for point in solution]


ys = [point[1] for point in solution]
plt.plot(xs, ys)
plt.axis('off')
plt.show()

def plot_point_set(point_set):

point_list = list(point_set)
xs = [point[0] for point in point_list]
ys = [point[1] for point in point_list]
plt.scatter(xs, ys)
plt.axis('off')
plt.show()

def mutate(individual, distance_matrix):


def mutation_swap():
swap_idx = random.randint(0, len(individual) - 2)
new_individual = individual[:swap_idx] + \
[individual[swap_idx + 1], individual[swap_idx]] + \
individual[swap_idx + 2:]

return new_individual

def mutation_reverse():
reverse_start = random.randint(0, len(individual) - 2)
reverse_end = random.randint(reverse_start + 1, len(individual) - 1)
new_individual = individual[:reverse_start] + \
individual[reverse_start : reverse_end][::-1] + \
individual[reverse_end:]

return new_individual

mutation = random.choice([mutation_swap, mutation_reverse])


new_individual = mutation()

return new_individual, compute_solution_distance(new_individual, distance_matrix)

def reproduce(individual_1, individual_2, distance_matrix):


def generate_subset_idx(subset_size):
return sorted(random.sample(range(ind_size), subset_size))

def select_subset(individual, subset_idx):


return [individual[i] for i in subset_idx]

def complement_subset(individual_2, individual_1_subset):


s = set(individual_1_subset)

return [point for point in individual_2 if point not in s]

ind_size = len(individual_1)
ind_1_subset_size = ind_size // 2
subset_ind_1_idx = generate_subset_idx(ind_1_subset_size)
ind_1_subset = select_subset(individual_1, subset_ind_1_idx)
ind_2_subset = complement_subset(individual_2, ind_1_subset)

new_individual = ind_1_subset + ind_2_subset

return new_individual, compute_solution_distance(new_individual,


distance_matrix)

def evolve(population, n_reproductions, n_mutations, n_news,


reproductor_pool, distance_matrix, point_set):

population_size = len(population)
n_new_individuals = n_reproductions + n_mutations + n_news
n_survivors = population_size - n_new_individuals
reproductor_pool_size = round(reproductor_pool * population_size)
new_population = population[:n_survivors]

for _ in range(n_reproductions):
individual_1 = population[random.randint(0, reproductor_pool_size - 1)][0]
individual_2 = random.choice(population)[0]
new_population.append(reproduce(individual_1, individual_2, distance_matrix))

for _ in range(n_mutations):

individual_to_mutate = random.choice(population)[0]
new_population.append(mutate(individual_to_mutate, distance_matrix))

for _ in range(n_news):
new_individual = create_individual(point_set)
new_population.append((new_individual,
compute_solution_distance(new_individual, distance_matrix)))

return sorted(new_population, key = lambda x: x[1])

def genetic_algorithm(point_set, population_size, n_generations,


n_reproductions, n_mutations, n_news, reproduction_pool):

distance_matrix = compute_distance_matrix(point_set)
population = create_population(population_size, point_set, distance_matrix)

for i in range(n_generations):
population = evolve(population, n_reproductions, n_mutations,
n_news, reproduction_pool, distance_matrix, point_set)
return population[0]

point_set = {create_point() for _ in range(20)}


plot_point_set(point_set)
best_solution, length = genetic_algorithm(point_set, 300, 120, 100, 50, 0, 0.15)
plot_result(best_solution)

Output: (figures: scatter plot of the city points and plot of the best route found)
Example:

Find the shortest route that traverses all the cities, without repeating any city, and
finally ends the journey where it started.

Conclusion: Thus we have studied the implementation of the Travelling Salesman Problem using
a genetic algorithm.
