Artificial Intelligence MCA

A
Laboratory File
On
Artificial Intelligence & Soft Computing

Submitted For
Master of Computer Applications
At
GJIMT
-: INDEX :-
Sr. No    Experiment                                                            Page No.       Remarks
1.        Use logic programming in Python to check for prime numbers.           3
2.        Use logic programming in Python to parse a family tree and infer
          the relationships between the family members.                         4, 5, 6, 7, 8
GJIMT || 2022
Experiment:- 01
Aim:- Use logic programming in Python to check for prime numbers.
Prime Numbers:- Prime numbers are natural numbers greater than 1 that are
divisible only by 1 and the number itself.
Example:- 2, 3, 5, 7, 11, 13, etc.
Code:-
# Input from the user
num = int(input("Enter a number: "))

# If number is greater than 1
if num > 1:
    # Check if a factor exists
    for i in range(2, num):
        if (num % i) == 0:
            print(num, "is not a prime number")
            break
    else:
        print(num, "is a prime number")
else:
    print(num, "is not a prime number")
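The loop above tries every candidate divisor below num; a common refinement (a sketch, not part of the original experiment) stops at the square root of num, since any factor larger than the square root must be paired with one smaller:

```python
import math

def is_prime(num):
    """Trial division, checking divisors only up to sqrt(num)."""
    if num <= 1:
        return False
    for i in range(2, math.isqrt(num) + 1):
        if num % i == 0:
            return False
    return True

print(is_prime(13))  # → True
print(is_prime(15))  # → False
```

For large inputs this reduces the number of iterations from roughly num to roughly sqrt(num).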
Output:-
Experiment:- 02
Aim:- Use logic programming in Python to parse a family tree and infer the relationships
between the family members.
Family Tree:-
The core of the Family Tree data model is the individual persons that, linked
together by relationships, create the tree. The purpose of the other data objects is
to give supporting and detailed information about the persons, relationships, and
the research recorded in the Family Tree.
John and Megan have three sons - William, David, and Adam. The wives of
William, David, and Adam are Emma, Olivia, and Lily respectively. William and
Emma have two children - Chris and Stephanie. David and Olivia have five
children - Wayne, Tiffany, Julie, Neil, and Peter. Adam and Lily have one child -
Sophia. Based on these facts, we can create a program that can tell us who
Wayne's grandfather is or who Sophia's uncles are. Even though we have not
explicitly specified anything about the grandparent or uncle relationships, logic
programming can infer them.
These relationships are specified in a file called relationships.json provided for you.
The file looks like the following:-
Code:- relationships.json
{
    "father": [
        { "John": "William" },
        { "John": "David" },
        { "John": "Adam" },
        { "William": "Chris" },
        { "William": "Stephanie" },
        { "David": "Wayne" },
        { "David": "Tiffany" },
        { "David": "Julie" },
        { "David": "Neil" },
        { "David": "Peter" },
        { "Adam": "Sophia" }
    ],
    "mother": [
        { "Megan": "William" },
        { "Megan": "David" },
        { "Megan": "Adam" },
        { "Emma": "Stephanie" },
        { "Emma": "Chris" },
        { "Olivia": "Tiffany" },
        { "Olivia": "Julie" },
        { "Olivia": "Neil" },
        { "Olivia": "Peter" },
        { "Lily": "Sophia" }
    ]
}
Code:- family.py
import json
from kanren import Relation, facts, run, conde, var, eq

# Check if 'x' is the parent of 'y'
def parent(x, y):
    return conde([father(x, y)], [mother(x, y)])

# Check if 'x' is the grandparent of 'y'
def grandparent(x, y):
    temp = var()
    return conde((parent(x, temp), parent(temp, y)))

# Check for a sibling relationship between 'x' and 'y'
def sibling(x, y):
    temp = var()
    return conde((parent(temp, x), parent(temp, y)))

# Check if 'x' is 'y's uncle
def uncle(x, y):
    temp = var()
    return conde((father(temp, x), grandparent(temp, y)))

if __name__=='__main__':
    father = Relation()
    mother = Relation()
    with open(r'C:\Users\ajitk\Desktop\PythonAI\.vscode\relationships.json') as f:
        d = json.loads(f.read())
    # Load the facts into the 'father' and 'mother' relations
    for item in d['father']:
        facts(father, (list(item.keys())[0], list(item.values())[0]))
    for item in d['mother']:
        facts(mother, (list(item.keys())[0], list(item.values())[0]))
    x = var()
    # John's children
    name = 'John'
    output = run(0, x, father(name, x))
    print("\nList of " + name + "'s children:")
    for item in output:
        print(item)
    # William's mother
    name = 'William'
    output = run(0, x, mother(x, name))[0]
    print("\n" + name + "'s mother:\n" + output)
    # Adam's parents
    name = 'Adam'
    output = run(0, x, parent(x, name))
    print("\nList of " + name + "'s parents:")
    for item in output:
        print(item)
    # Wayne's grandparents
    name = 'Wayne'
    output = run(0, x, grandparent(x, name))
    print("\nList of " + name + "'s grandparents:")
    for item in output:
        print(item)
    # Megan's grandchildren
    name = 'Megan'
    output = run(0, x, grandparent(name, x))
    print("\nList of " + name + "'s grandchildren:")
    for item in output:
        print(item)
    # David's siblings
    name = 'David'
    output = run(0, x, sibling(x, name))
    siblings = [x for x in output if x != name]
    print("\nList of " + name + "'s siblings:")
    for item in siblings:
        print(item)
    # Tiffany's uncles
    name = 'Tiffany'
    name_father = run(0, x, father(x, name))[0]
    output = run(0, x, uncle(x, name))
    output = [x for x in output if x != name_father]
    print("\nList of " + name + "'s uncles:")
    for item in output:
        print(item)
    # All spouses
Output:-
Experiment:- 03
Aim:- Python script for building a puzzle solver.
Code:-
from kanren import *
from kanren.core import lall
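Only the imports survive in this listing. As a stand-in, here is a self-contained brute-force sketch of the same kind of logic puzzle, written in plain Python instead of kanren; the people, pets, and clues are invented for illustration:

```python
from itertools import permutations

people = ['Steve', 'Jack', 'Emma']
pets = ['dog', 'cat', 'rabbit']

# Hypothetical clues: Steve owns the dog; Jack does not own the rabbit;
# Emma does not own the cat.
def satisfies(assign):
    return (assign['Steve'] == 'dog'
            and assign['Jack'] != 'rabbit'
            and assign['Emma'] != 'cat')

# Try every assignment of pets to people and keep the consistent ones
for perm in permutations(pets):
    assignment = dict(zip(people, perm))
    if satisfies(assignment):
        print(assignment)  # → {'Steve': 'dog', 'Jack': 'cat', 'Emma': 'rabbit'}
```

With kanren, the same clues would instead be expressed as goals combined with lall and solved with run; the exhaustive search above plays the role of the logic engine.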
Output:-
Experiment:- 04
Aim:- Implementation of uninformed search techniques in Python.
BFS Pseudo Code:- The pseudo code for BFS in Python goes as below:-
create a queue Q
mark v as visited and put v into Q
while Q is non-empty
    remove the head u of Q
    mark and enqueue all (unvisited) neighbours of u
Code:-
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []   # List of visited nodes
queue = []     # Initialize a queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5') # function calling
Output:-
Experiment:- 05
Aim:- Implementation of heuristic search techniques in Python.
A* Search Algorithm:-
A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with g(n), the cost to reach node n from the start
state. It combines the features of UCS and greedy best-first search, which lets it
solve problems efficiently.
It finds the shortest path through the search space using the heuristic function;
the algorithm expands fewer search-tree nodes and gives optimal results faster.
Below, we find out how the A* search algorithm can be used to find the most
cost-effective path in a graph. Consider the following graph.
Code:-
open_set = set(start_node)
closed_set = set()
g = {}        # store distance from starting node
parents = {}  # parents contains an adjacency map of all nodes
# if m is in the closed set, remove it and add it to open
if m in closed_set:
    closed_set.remove(m)
    open_set.add(m)
if n == None:
    print('Path does not exist!')
    return None
while parents[n] != n:
    path.append(n)
    n = parents[n]
path.append(start_node)
path.reverse()
return H_dist[n]
aStarAlgo('A', 'G')
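The listing above survives only in fragments. For reference, here is a complete, self-contained A* sketch using a priority queue; the graph, edge weights, and heuristic values are invented for illustration and are not the figure from the original experiment:

```python
import heapq

def a_star(graph, h, start, goal):
    # priority queue ordered by f = g + h
    open_heap = [(h[start], start)]
    g = {start: 0}             # cost from the start node
    parents = {start: start}   # for path reconstruction
    closed = set()
    while open_heap:
        f, n = heapq.heappop(open_heap)
        if n == goal:
            # walk parents back to the start, then reverse
            path = [n]
            while parents[n] != n:
                n = parents[n]
                path.append(n)
            return list(reversed(path))
        if n in closed:
            continue
        closed.add(n)
        for m, weight in graph[n]:
            new_g = g[n] + weight
            if m not in g or new_g < g[m]:
                g[m] = new_g
                parents[m] = n
                heapq.heappush(open_heap, (new_g + h[m], m))
    return None

# Small weighted graph and an admissible heuristic (assumed values)
graph = {
    'A': [('B', 1), ('C', 3)],
    'B': [('C', 1), ('D', 3)],
    'C': [('D', 1)],
    'D': [('G', 2)],
    'G': [],
}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'G': 0}

print(a_star(graph, h, 'A', 'G'))  # → ['A', 'B', 'C', 'D', 'G']
```

The cheapest route here costs 1 + 1 + 1 + 2 = 5 via B and C, which A* finds while skipping the more expensive direct A-C edge.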
Output:-
Experiment:- 06
Aim:- Python script for tokenizing text data.
Tokenizing:- Tokenization is a common task a data scientist comes across when
working with text data. It consists of splitting an entire text into small units, also
known as tokens. Most Natural Language Processing (NLP) projects have
tokenization as the first step because it’s the foundation for developing good
models and helps better understand the text we have.
Code:-
my_text = """Hey I'm Ajit Kumar . I Just Love Coding ."""
print(my_text.split('. '))
print(my_text.split())
Output:-
Code:-
# import the existing word and sentence tokenizing libraries
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize, word_tokenize

text = """Hey I'm Ajit Kumar . I Just Love Coding ."""  # reuse the sample text from above
#print(sent_tokenize(text))
print(word_tokenize(text))
Output:-
Experiment:- 07
Aim:- Extracting the frequency of terms using a Bag of Words model.
Bag of Words model:- The Bag of Words model is used to preprocess text by
converting it into a bag of words, which keeps a count of the total occurrences of
the most frequently used words.
The model can be visualized with a table that contains the count of each word
alongside the word itself.
Code:-
def vectorize(tokens):
vector=[]
for w in filtered_vocab:
vector.append(tokens.count(w))
return vector
def unique(sequence):
seen = set()
stopwords=["to","is","a"]
special_char=[",",":"," ",";",".","?"]
string1=string1.lower()
string2=string2.lower()
tokens1=string1.split()
tokens2=string2.split()
print(tokens1)
Page | 19
GJIMT || 2022
print(tokens2)
vocab=unique(tokens1+tokens2)
print(vocab)
filtered_vocab=[]
for w in vocab:
filtered_vocab.append(w)
print(filtered_vocab)
vector1=vectorize(tokens1)
print(vector1)
vector2=vectorize(tokens2)
print(vector2)
Output:-
Experiment:- 08
Aim:- Predict the category to which a given piece of text belongs.
Code:-
from tkinter import *
from tkinter import messagebox
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# GUI setup (the original listing starts mid-program; the window 'root' and
# the entry widget 'ent1' are assumed to be created roughly as below)
root = Tk()
root.geometry("1000x500")
ent1 = Entry(root, font="Arial 15", width=50)
ent1.place(x=300, y=200)

def sentiment_scores():
    sentence = ent1.get()
    sid_obj = SentimentIntensityAnalyzer()
    sentiment_dict = sid_obj.polarity_scores(sentence)
    print("Overall Sentiment Dictionary is:", sentiment_dict)
    print("Sentence was rated as:", sentiment_dict['neg']*100, "% Negative")
    print("Sentence was rated as:", sentiment_dict['neu']*100, "% Neutral")
    print("Sentence was rated as:", sentiment_dict['pos']*100, "% Positive")
    print("Sentence was overall rated as:", end=' ')
    if sentiment_dict['compound'] >= 0.05:
        print("Positive")
        result = "Positive"
    elif sentiment_dict['compound'] <= -0.05:
        print("Negative")
        result = "Negative"
    else:
        print("Neutral")
        result = "Neutral"
    messagebox.showinfo(title='Result',
                        message="Sentence was overall rated as: %s" % (result))

button1 = Button(root, text="predict", command=sentiment_scores, width=10,
                 background='#A0522D', font="Arial 15", fg="#D2691E")
button1.place(x=600, y=400)
button2 = Button(root, text="Quit", command=root.destroy, width=10,
                 background='#A0522D', font="Arial 15", fg="#D2691E")
button2.place(x=800, y=400)
root.mainloop()
Output:-
Experiment:- 09
Aim:- Python code for visualizing audio speech signal.
Code:-
def showing_audiotrack():
    # We use a variable previousTime to store the time when a plot update is made
    # and to then compute the time taken to update the plot of the audio data.
    previousTime = time.time()
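Only the opening of the plotting function survives here. The core of the task is turning samples into an amplitude-versus-time series; the sketch below synthesizes a tone as a stand-in for loaded speech data (the original reads a real recording, whose file and reader are not shown) and prepares the arrays a plotting call would consume:

```python
import math

# Synthesize one second of a 440 Hz tone in place of loaded speech samples
framerate = 8000
duration = 1.0
n = int(framerate * duration)
signal = [math.sin(2 * math.pi * 440 * i / framerate) for i in range(n)]
time_axis = [i / framerate for i in range(n)]

# With matplotlib available, the waveform could be rendered with, e.g.:
#   plt.plot(time_axis, signal); plt.xlabel("Time (s)"); plt.ylabel("Amplitude"); plt.show()
print("samples:", len(signal), "peak amplitude:", round(max(signal), 4))
```

The same time-axis construction (sample index divided by the frame rate) applies unchanged when the samples come from a WAV file instead of a formula.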
Experiment:- 10
Aim:- Python code for Generating audio signals.
Code:-
#!/usr/bin/env python
import sys
import wave
import math
import struct
import random
import argparse
from itertools import count, zip_longest

def grouper(n, iterable, fillvalue=None):
    '''
    Collect data into fixed-length chunks.
    '''
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

def white_noise(amplitude=0.5):
    '''
    Generate random samples.
    '''
    return (float(amplitude) * random.uniform(-1, 1) for i in count(0))

def write_wavefile(filename, samples, nframes=-1, nchannels=2,
                   sampwidth=2, framerate=44100, bufsize=2048):
    '''
    Write a sequence of per-channel sample tuples to a WAV file.
    '''
    max_amplitude = float(int((2 ** (sampwidth * 8)) / 2) - 1)
    w = wave.open(filename, 'w')
    w.setparams((nchannels, sampwidth, framerate, nframes, 'NONE', 'not compressed'))
    # split the samples into chunks (to reduce memory consumption and improve performance)
    for chunk in grouper(bufsize, samples):
        frames = b''.join(b''.join(struct.pack('h', int(max_amplitude * sample))
                          for sample in channels)
                          for channels in chunk if channels is not None)
        w.writeframesraw(frames)
    w.close()
    return filename

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--channels', help="Number of channels to produce", default=2, type=int)
    parser.add_argument('-b', '--bits', help="Number of bits in each sample", choices=(16,), default=16, type=int)
    parser.add_argument('-r', '--rate', help="Sample rate in Hz", default=44100, type=int)
    parser.add_argument('-t', '--time', help="Duration of the wave in seconds.", default=60, type=int)
    parser.add_argument('-a', '--amplitude', help="Amplitude of the wave on a scale of 0.0-1.0.", default=0.5, type=float)
    parser.add_argument('-f', '--frequency', help="Frequency of the wave in Hz", default=440.0, type=float)
    parser.add_argument('filename', help="The file to generate.")
    args = parser.parse_args()

if __name__ == "__main__":
    main()
Output:-
Experiment:- 11
Aim:- Create a perceptron with an appropriate number of inputs and outputs. Train it using
the fixed increment learning algorithm until no change in weights is required. Output the
final weights.
Code:-
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score

class Perceptron:
    # constructor
    def __init__(self):
        self.w = None
        self.b = None
    # model
    def model(self, x):
        return 1 if (np.dot(self.w, x) >= self.b) else 0
    def predict(self, X):
        return np.array([self.model(x) for x in X])
    # fixed increment learning algorithm
    def fit(self, X, Y, epochs=1, lr=1):
        self.w = np.ones(X.shape[1])
        self.b = 0
        accuracy = {}
        max_accuracy = 0
        wt_matrix = []
        for i in range(epochs):
            for x, y in zip(X, Y):
                y_pred = self.model(x)
                if y == 1 and y_pred == 0:
                    self.w = self.w + lr * x
                    self.b = self.b - lr * 1
                elif y == 0 and y_pred == 1:
                    self.w = self.w - lr * x
                    self.b = self.b + lr * 1
            wt_matrix.append(self.w)
            accuracy[i] = accuracy_score(self.predict(X), Y)
            if (accuracy[i] > max_accuracy):
                max_accuracy = accuracy[i]
                chkptw = self.w
                chkptb = self.b
        # keep the best weights seen during training (checkpoint)
        self.w, self.b = chkptw, chkptb
        print(max_accuracy)
        plt.plot(accuracy.values())
        plt.xlabel("Epoch #")
        plt.ylabel("Accuracy")
        plt.ylim([0, 1])
        plt.show()
        # return the weight matrix, that contains weights over all epochs
        return np.array(wt_matrix)
Output:-
Experiment:- 12
Aim:- Implement AND function using ADALINE with bipolar inputs and outputs.
Code:-
for i in range(epoch):
Output:-
Experiment:- 13
Aim:- Implement AND function using MADALINE with bipolar inputs and outputs.
Code:-
for i in range(epoch):
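Again only the loop header survives. A self-contained MADALINE sketch for the bipolar AND function follows, using two hidden ADALINEs feeding a fixed OR output unit and the MRI training rule; the initial weights, biases, and learning rate are assumed values:

```python
import numpy as np

def sign(x):
    return 1 if x >= 0 else -1

# Bipolar AND training pairs
X = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
T = [1, -1, -1, -1]

# Two hidden ADALINEs; the output unit is a fixed OR (v = 0.5, b3 = 0.5)
w = np.array([[0.05, 0.2], [0.1, 0.2]])   # hidden weights (assumed)
b = np.array([0.3, 0.15])                 # hidden biases (assumed)
v, b3 = np.array([0.5, 0.5]), 0.5
alpha = 0.5
epoch = 10

for i in range(epoch):
    for x, t in zip(X, T):
        x = np.array(x)
        z_in = w @ x + b                       # hidden net inputs
        z = np.array([sign(s) for s in z_in])  # hidden activations
        y = sign(v @ z + b3)                   # OR of the hidden units
        if y == t:
            continue
        if t == 1:
            # MRI rule: nudge the unit whose net input is closest to zero toward +1
            j = int(np.argmin(np.abs(z_in)))
            w[j] += alpha * (1 - z_in[j]) * x
            b[j] += alpha * (1 - z_in[j])
        else:
            # nudge every unit with a positive net input toward -1
            for k in range(2):
                if z_in[k] > 0:
                    w[k] += alpha * (-1 - z_in[k]) * x
                    b[k] += alpha * (-1 - z_in[k])

pred = [sign(v @ np.array([sign(s) for s in (w @ np.array(x) + b)]) + b3)
        for x in X]
print("Predictions:", pred)  # → [1, -1, -1, -1]
```

With these initial values the network classifies all four patterns correctly after three epochs, after which the weights stop changing.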
Output:-
Experiment:- 14
Aim:- Construct and test auto associative network for input vector using HEBB rule.
Code:-
%The MATLAB program for calculating the weight matrix is as follows
%Discrete Hopfield net
clc;
clear;
x=[1 1 1 0];
w=(2*x'-1)*(2*x-1);
for i=1:4
    w(i,i)=0;
end
disp('Weight matrix');
disp(w);
Output:-
Experiment:- 15
Aim:- Construct and test auto associative network for input vector using outer product
rule.
Code:-
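The listing is empty here. A minimal sketch of an auto-associative net built with the outer product rule follows; the stored vector is the bipolar form of the pattern from Experiment 14 (an assumption, since the original code is not shown):

```python
import numpy as np

# Stored input vector (bipolar form of the Experiment 14 pattern)
x = np.array([1, 1, 1, -1])

# Outer product rule: W = x x^T, with self-connections zeroed
W = np.outer(x, x)
np.fill_diagonal(W, 0)
print("Weight matrix:")
print(W)

# Test phase: present the stored vector and check that it is recalled
y_in = x @ W
y = np.where(y_in >= 0, 1, -1)
print("Recalled vector:", y)  # → [ 1  1  1 -1]
```

Since each net input works out to three times the corresponding stored component, thresholding at zero reproduces the stored vector exactly.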
Output:-
Experiment:- 16
Aim:- Construct and test heteroassociative network for binary inputs and targets.
Code:-
# Import Python Libraries
import numpy as np

# Input patterns (Set A) as column vectors and target patterns (Set B) as
# row vectors; these values are assumed, since the original listing omits them.
# Inputs are binary; targets use bipolar values to match the test-phase threshold.
x1 = np.array([[1, 0, 1, 0]]).T
x2 = np.array([[1, 1, 0, 0]]).T
x3 = np.array([[0, 0, 1, 1]]).T
x4 = np.array([[0, 1, 0, 1]]).T
y1 = np.array([[1, -1]])
y2 = np.array([[1, -1]])
y3 = np.array([[-1, 1]])
y4 = np.array([[-1, 1]])
'''
print("Set A: Input Pattern, Set B: Target Pattern")
print("\nThe input for pattern 1 is")
print(x1)
print("\nThe target for pattern 1 is")
print(y1)
print("\nThe input for pattern 2 is")
print(x2)
print("\nThe target for pattern 2 is")
print(y2)
print("\nThe input for pattern 3 is")
print(x3)
print("\nThe target for pattern 3 is")
print(y3)
print("\nThe input for pattern 4 is")
print(x4)
print("\nThe target for pattern 4 is")
print(y4)
print("\n------------------------------")
'''
# Calculate weight Matrix: W (Hebbian sum of outer products)
inputSet = np.concatenate((x1, x2, x3, x4), axis = 1)
targetSet = np.concatenate((y1, y2, y3, y4), axis = 0)
print("\nWeight matrix:")
weight = np.dot(inputSet, targetSet)
print(weight)
print("\n------------------------------")
# Testing Phase
# Test for Input Patterns: Set A
print("\nTesting for input patterns: Set A")
def testInputs(x, weight):
    # Multiply the input pattern with the weight matrix
    # (weight.T X x)
    y = np.dot(weight.T, x)
    y[y < 0] = -1
    y[y >= 0] = 1
    return np.array(y)
Experiment:- 17
Aim:- Create a back propagation network for a given input pattern. Perform 3 epochs of
operation.
Code:-
import numpy as np

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x)*(1.0-sigmoid(x))

def tanh(x):
    return np.tanh(x)

def tanh_prime(x):
    # note: takes the activation value, not the raw input
    return 1.0 - x**2

class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        if activation == 'sigmoid':
            self.activation = sigmoid
            self.activation_prime = sigmoid_prime
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_prime = tanh_prime
        # Set weights
        self.weights = []
        # layers = [2,2,1]
        # range of weight values (-1,1)
        # input and hidden layers - random((2+1, 2+1)) : 3 x 3
        for i in range(1, len(layers) - 1):
            r = 2*np.random.random((layers[i-1] + 1, layers[i] + 1)) - 1
            self.weights.append(r)
        # output layer - random((2+1, 1)) : 3 x 1
        r = 2*np.random.random((layers[i] + 1, layers[i+1])) - 1
        self.weights.append(r)

    def fit(self, X, y, learning_rate=0.2, epochs=100000):
        # add the bias unit to the input layer
        ones = np.atleast_2d(np.ones(X.shape[0]))
        X = np.concatenate((ones.T, X), axis=1)
        for k in range(epochs):
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            # forward pass
            for l in range(len(self.weights)):
                dot_value = np.dot(a[l], self.weights[l])
                activation = self.activation(dot_value)
                a.append(activation)
            # output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_prime(a[-1])]
            # hidden layers
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_prime(a[l]))
            # reverse
            # [level3(output)->level2(hidden)] => [level2(hidden)->level3(output)]
            deltas.reverse()
            # backpropagation
            # 1. Multiply its output delta and input activation
            #    to get the gradient of the weight.
            # 2. Subtract a ratio (percentage) of the gradient from the weight.
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        a = np.concatenate((np.ones(1).T, np.array(x)))
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a

if __name__ == '__main__':
    nn = NeuralNetwork([2,2,1])
    X = np.array([[0, 0],
                  [0, 1],
                  [1, 0],
                  [1, 1]])
    y = np.array([0, 1, 1, 0])
    nn.fit(X, y)
    for e in X:
        print(e, nn.predict(e))
Output:-
Experiment:- 18
Aim:- Implementation of fuzzy set operations (union, intersection, complement,
difference) in Python.
Fuzzy Sets:- Fuzzy refers to something that is unclear or vague. Hence, a Fuzzy Set
is a set where every key is associated with a value between 0 and 1, based on the
certainty. This value is often called the degree of membership. A Fuzzy Set is
denoted with a tilde sign on top of the normal set notation.
Union :-
Code:-
# Example to Demonstrate the
# Union of Two Fuzzy Sets
# (sample membership values)
A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}
B = {"a": 0.9, "b": 0.9, "c": 0.4, "d": 0.5}
Y = dict()
# union takes the maximum membership degree for each element
for key in A:
    Y[key] = max(A[key], B[key])
print('Fuzzy Set Union:', Y)
Output:-
Intersection :-
Code:-
# Example to Demonstrate
# Intersection of Two Fuzzy Sets
A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}
B = {"a": 0.9, "b": 0.9, "c": 0.4, "d": 0.5}
Y = dict()
# intersection takes the minimum membership degree for each element
for key in A:
    Y[key] = min(A[key], B[key])
print('Fuzzy Set Intersection:', Y)
Output:-
Complement :-
Code:-
# Example to Demonstrate the
# Complement of a Fuzzy Set
A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}
Y = dict()
# complement subtracts each membership degree from 1
for A_key in A:
    Y[A_key] = 1 - A[A_key]
print('Fuzzy Set Complement:', Y)
Output:-
Difference:-
Code:-
# Example to Demonstrate the
# Difference Between Two Fuzzy Sets
A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}
B = {"a": 0.9, "b": 0.9, "c": 0.4, "d": 0.5}
Y = dict()
# difference A - B: min(A[key], 1 - B[key])
for key in A:
    Y[key] = min(A[key], 1 - B[key])
print('Fuzzy Set Difference:', Y)
Output:-
Cartesian Product & MaxMin Composition:-
Code:-
def cartesian():
    n = int(input("\nEnter number of elements in first set (A): "))
    A = []
    B = []
    print("Enter elements for A:")
    for i in range(0, n):
        ele = float(input())
        A.append(ele)
    m = int(input("\nEnter number of elements in second set (B): "))
    print("Enter elements for B:")
    for i in range(0, m):
        ele = float(input())
        B.append(ele)
    print("A = {"+str(A)[1:-1]+"}")
    print("B = {"+str(B)[1:-1]+"}")
    # fuzzy Cartesian product: membership of (i, j) is min(A[i], B[j])
    cart_prod = [[0 for j in range(m)] for i in range(n)]
    for i in range(n):
        for j in range(m):
            cart_prod[i][j] = min(A[i], B[j])
    print("A x B = ")
    for i in range(n):
        for j in range(m):
            print(cart_prod[i][j], end=" ")
        print("\n")
    return

def minmax():
    r1 = int(input("Enter number of rows of first relation (R1): "))
    c1 = int(input("Enter number of columns of first relation (R1): "))
    rel1 = [[0 for i in range(c1)] for j in range(r1)]
    print("Enter the elements for R1:")
    for i in range(r1):
        for j in range(c1):
            rel1[i][j] = float(input())
    print("\nR1 = ")
    for i in range(r1):
        for j in range(c1):
            print(rel1[i][j], end=" ")
        print("\n")
    # read the second relation; its row count must equal c1
    r2 = int(input("Enter number of rows of second relation (R2): "))
    c2 = int(input("Enter number of columns of second relation (R2): "))
    rel2 = [[0 for i in range(c2)] for j in range(r2)]
    print("Enter the elements for R2:")
    for i in range(r2):
        for j in range(c2):
            rel2[i][j] = float(input())
    print("\nR2 = ")
    for i in range(r2):
        for j in range(c2):
            print(rel2[i][j], end=" ")
        print("\n")
    # max-min composition: comp[i][j] = max over k of min(rel1[i][k], rel2[k][j])
    comp = []
    for i in range(r1):
        comp.append([])
        for j in range(c2):
            l = []
            for k in range(r2):
                l.append(min(rel1[i][k], rel2[k][j]))
            comp[i].append(max(l))
    print("\nR1 o R2 = ")
    for i in range(r1):
        for j in range(c2):
            print(comp[i][j], end=" ")
        print("\n")
    return

ch = 1
while ch == 1:
    print("MENU:\n----\n1->Cartesian Product\n2->MaxMin Composition\n3->Exit")
    op = int(input("Enter Your Choice: "))
    if op == 1:
        cartesian()
    elif op == 2:
        minmax()
    elif op == 3:
        break
    else:
        print("Wrong Choice!")
    ch = int(input("Do you wish to continue (1-Yes | 0-No): "))
    print("\n")
Experiment:- 19
Aim:- Maximize the function f(x) = x^2 using a genetic algorithm (GA), where x ranges
from 0-25. Perform 6 iterations.
Code:-
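The listing is empty here. A minimal GA sketch for this aim follows, using a 5-bit binary encoding of x, roulette-wheel selection, single-point crossover, and bit-flip mutation; the population size, mutation rate, and random seed are assumed values:

```python
import random

random.seed(1)            # reproducible run
POP_SIZE, BITS, GENS = 4, 5, 6

def fitness(x):
    return x * x          # the function f(x) = x^2 to maximize

def decode(bits):
    return int(bits, 2)

def random_chromosome():
    # 5 bits encode 0-31; re-sample until the value falls in 0-25
    while True:
        bits = ''.join(random.choice('01') for _ in range(BITS))
        if decode(bits) <= 25:
            return bits

def select(pop):
    # roulette-wheel selection proportional to fitness
    weights = [fitness(decode(c)) for c in pop]
    if sum(weights) == 0:
        return random.choice(pop)
    return random.choices(pop, weights=weights, k=1)[0]

def crossover(p1, p2):
    point = random.randint(1, BITS - 1)   # single-point crossover
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(c, rate=0.1):
    # flip each bit with a small probability
    return ''.join(bit if random.random() > rate else str(1 - int(bit)) for bit in c)

pop = [random_chromosome() for _ in range(POP_SIZE)]
for gen in range(GENS):
    nxt = []
    while len(nxt) < POP_SIZE:
        c1, c2 = crossover(select(pop), select(pop))
        for c in (mutate(c1), mutate(c2)):
            if decode(c) <= 25:           # keep x within the 0-25 range
                nxt.append(c)
    pop = nxt[:POP_SIZE]
    best = max(pop, key=lambda c: fitness(decode(c)))
    print("Iteration", gen + 1, ": best x =", decode(best),
          ", f(x) =", fitness(decode(best)))
```

Because f(x) = x^2 is monotone on 0-25, selection pressure drives the population toward x = 25 (fitness 625), typically within the six iterations the aim asks for.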
Output:-
-: THE END :-