Experiment-1:
1. Implementation of DFS for the water jug problem.

Aim: To implement DFS for the water jug problem.


Description:
You are given an m-liter jug and an n-liter jug. Both jugs are initially empty and have no
markings to allow measuring smaller quantities. You have to use the jugs to measure d liters of
water, where d is less than n.
(X, Y) corresponds to a state where X refers to the amount of water in Jug 1 and Y refers to the
amount of water in Jug 2.
Determine the path from the initial state (xi, yi) to the final state (xf, yf), where (xi, yi) is (0, 0),
which indicates that both jugs are initially empty, and (xf, yf) indicates a state that could be
(0, d) or (d, 0).
The operations you can perform are:
1. Empty a jug: (X, Y) -> (0, Y) empties Jug 1.
2. Fill a jug: (0, 0) -> (X, 0) fills Jug 1.
3. Pour water from one jug to the other until one of the jugs is either empty or full: (X, Y) -> (X-d, Y+d).
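Before the lab listing, a minimal generic DFS sketch over (X, Y) states may help; the capacities (3 and 4 liters) and the 2-liter target are assumptions chosen to match the program below:

def dfs_water_jug(cap1=3, cap2=4, target=2):
    """Depth-first search over (jug1, jug2) states; returns one path to the target."""
    stack = [((0, 0), [(0, 0)])]
    visited = {(0, 0)}
    while stack:
        (a, b), path = stack.pop()
        if a == target or b == target:
            return path
        pour12 = min(a, cap2 - b)   # amount movable from jug 1 to jug 2
        pour21 = min(b, cap1 - a)   # amount movable from jug 2 to jug 1
        successors = [
            (cap1, b), (a, cap2),        # fill a jug
            (0, b), (a, 0),              # empty a jug
            (a - pour12, b + pour12),    # pour jug 1 -> jug 2
            (a + pour21, b - pour21),    # pour jug 2 -> jug 1
        ]
        for nxt in successors:
            if nxt not in visited:
                visited.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None

print(dfs_water_jug())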
Program:
def pour_water(juga, jugb):
    max1, max2, fill = 3, 4, 2
    print("%d\t%d" % (juga, jugb))
    if jugb == fill:
        return
    elif jugb == max2:
        pour_water(0, juga)
    elif juga != 0 and jugb == 0:
        pour_water(0, juga)
    elif juga == fill:
        pour_water(juga, 0)
    elif juga < max1:
        pour_water(max1, jugb)
    elif juga < (max2 - jugb):
        pour_water(0, (juga + jugb))
    else:
        pour_water(juga - (max2 - jugb), (max2 - jugb) + jugb)

print("JUGA \t JUGB")
pour_water(0, 0)

Output:

Experiment-2:
2. Implementation of BFS for the tic-tac-toe problem.

Aim: To implement BFS for the tic-tac-toe problem.


Description:
Breadth-First Search (BFS) is a graph traversal algorithm: you select an initial node (the source
or root node), start traversing the graph from it, and explore all the neighboring nodes first.
Then the algorithm selects the nearest node and explores all of its unexplored neighbors.
Basically, BFS traverses a graph layer-wise, in such a way that all the nodes and their respective
children are visited and explored.
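The lab listing below simulates random tic-tac-toe games rather than searching; for reference, a minimal layer-wise BFS sketch, assuming an adjacency-list graph, is:

from collections import deque

def bfs(graph, root):
    """Visit nodes layer by layer starting from root; returns the visit order."""
    visited = {root}
    queue = deque([root])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order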

Program:
import numpy as np
import random
from time import sleep

# Creates an empty board
def create_board():
    return np.array([[0, 0, 0],
                     [0, 0, 0],
                     [0, 0, 0]])

# Check for empty places on board
def possibilities(board):
    l = []
    for i in range(len(board)):
        for j in range(len(board)):
            if board[i][j] == 0:
                l.append((i, j))
    return l

# Select a random place for the player
def random_place(board, player):
    selection = possibilities(board)
    current_loc = random.choice(selection)
    board[current_loc] = player
    return board

# Checks whether the player has three
# of their marks in a horizontal row
def row_win(board, player):
    for x in range(len(board)):
        win = True
        for y in range(len(board)):
            if board[x, y] != player:
                win = False
                continue
        if win == True:
            return win
    return win

# Checks whether the player has three
# of their marks in a vertical row
def col_win(board, player):
    for x in range(len(board)):
        win = True
        for y in range(len(board)):
            if board[y][x] != player:
                win = False
                continue
        if win == True:
            return win
    return win

# Checks whether the player has three
# of their marks in a diagonal row
def diag_win(board, player):
    win = True
    for x in range(len(board)):
        if board[x, x] != player:
            win = False
    if win:
        return win
    win = True
    for x in range(len(board)):
        y = len(board) - 1 - x
        if board[x, y] != player:
            win = False
    return win

# Evaluates whether there is
# a winner or a tie
def evaluate(board):
    winner = 0
    for player in [1, 2]:
        if (row_win(board, player) or
                col_win(board, player) or
                diag_win(board, player)):
            winner = player
    if np.all(board != 0) and winner == 0:
        winner = -1
    return winner

# Main function to start the game
def play_game():
    board, winner, counter = create_board(), 0, 1
    print(board)
    sleep(2)
    while winner == 0:
        for player in [1, 2]:
            board = random_place(board, player)
            print("Board after " + str(counter) + " move")
            print(board)
            sleep(2)
            counter += 1
            winner = evaluate(board)
            if winner != 0:
                break
    return winner

# Driver Code
print("Winner is: " + str(play_game()))

Output:


Experiment-3:
3. Implementation of TSP using a heuristic approach.

Aim: To implement TSP using a heuristic approach.

Description:
The travelling salesman problem (TSP) is to find the shortest possible route that visits every
city exactly once and returns to the starting point.

Equivalently, TSP asks for the shortest Hamiltonian cycle in a graph. This problem is NP-hard and
thus interesting. There are a number of algorithms used to find optimal tours, but none are
feasible for large instances, since their running time grows exponentially.
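The lab listing below checks every permutation, which is exact but exponential; a common heuristic alternative is the nearest-neighbour rule — a minimal sketch, assuming the same adjacency-matrix representation as the program:

def nearest_neighbour(graph, s):
    """Greedy TSP heuristic: always visit the closest unvisited city."""
    n = len(graph)
    unvisited = set(range(n)) - {s}
    path_cost, current = 0, s
    while unvisited:
        nxt = min(unvisited, key=lambda city: graph[current][city])
        path_cost += graph[current][nxt]
        unvisited.remove(nxt)
        current = nxt
    path_cost += graph[current][s]  # return to the start
    return path_cost

This runs in O(n^2) time instead of factorial time, at the cost of possibly missing the optimal tour.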

Program:
from sys import maxsize
from itertools import permutations

V = 4

# implementation of the traveling salesman problem
def travellingSalesmanProblem(graph, s):

    # store all vertices apart from the source vertex
    vertex = []
    for i in range(V):
        if i != s:
            vertex.append(i)

    # store the minimum weight Hamiltonian cycle
    min_path = maxsize
    next_permutation = permutations(vertex)
    for i in next_permutation:

        # store the current path weight (cost)
        current_pathweight = 0

        # compute the current path weight
        k = s
        for j in i:
            current_pathweight += graph[k][j]
            k = j
        current_pathweight += graph[k][s]

        # update the minimum
        min_path = min(min_path, current_pathweight)

    return min_path

# Driver Code
if __name__ == "__main__":

    # matrix representation of the graph
    graph = [[0, 10, 15, 20], [10, 0, 35, 25],
             [15, 35, 0, 30], [20, 25, 30, 0]]
    s = 0
    print(travellingSalesmanProblem(graph, s))

Output:
80

Experiment-4:
4. Implementation of Hill Climbing to solve the 8-Puzzle problem.
Aim: To implement Hill Climbing to solve the 8-Puzzle problem.
Description:
A set of eight numbered tiles is arranged on a puzzle slate with one empty cell. In this problem,
we need to rearrange the unordered tiles into the goal state using the hill-climbing search
strategy.
Hill Climbing is a heuristic search used for mathematical optimization problems in the field of
Artificial Intelligence. It is an iterative algorithm that starts with an arbitrary solution to a problem,
then attempts to find a better solution by making an incremental change to the solution.
So, given a large set of inputs and a good heuristic function, the algorithm tries to find the best
possible solution to the problem in a reasonable time period.
The solution is not necessarily optimal (the global maximum), but it is considered a good
solution for the time allowed.
Mathematical optimization problems: this implies that hill climbing solves problems where we need to
maximize or minimize a given real function by choosing values from the given inputs.
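The lab listing below actually performs a breadth-first search over puzzle states using a queue frontier; for contrast, a minimal hill-climbing sketch, assuming a misplaced-tiles heuristic and the same 3x3 NumPy board representation, is:

import numpy as np

def misplaced_tiles(state, goal):
    """Heuristic: number of non-blank tiles out of place (lower is better)."""
    return int(np.sum((state != goal) & (state != 0)))

def neighbors(state):
    """Yield boards reachable by sliding the blank (0) one step."""
    row, col = map(int, np.argwhere(state == 0)[0])
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            nxt = state.copy()
            nxt[row, col], nxt[r, c] = nxt[r, c], nxt[row, col]
            yield nxt

def hill_climb(start, goal):
    """Greedily move to the best-scoring neighbor until no move improves."""
    current = start
    while True:
        best = min(neighbors(current), key=lambda s: misplaced_tiles(s, goal))
        if misplaced_tiles(best, goal) >= misplaced_tiles(current, goal):
            return current  # stuck at a local optimum (or at the goal)
        current = best

Plain hill climbing can stall at a local optimum, which is why the description stresses that the result need not be globally optimal.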
Program:
import sys
import numpy as np

class Node:
    def __init__(self, state, parent, action):
        self.state = state
        self.parent = parent
        self.action = action

class StackFrontier:
    def __init__(self):
        self.frontier = []

    def add(self, node):
        self.frontier.append(node)

    def contains_state(self, state):
        return any((node.state[0] == state[0]).all() for node in self.frontier)

    def empty(self):
        return len(self.frontier) == 0

    def remove(self):
        if self.empty():
            raise Exception("Empty Frontier")
        else:
            node = self.frontier[-1]
            self.frontier = self.frontier[:-1]
            return node

class QueueFrontier(StackFrontier):
    def remove(self):
        if self.empty():
            raise Exception("Empty Frontier")
        else:
            node = self.frontier[0]
            self.frontier = self.frontier[1:]
            return node

class Puzzle:
    def __init__(self, start, startIndex, goal, goalIndex):
        self.start = [start, startIndex]
        self.goal = [goal, goalIndex]
        self.solution = None

    def neighbors(self, state):
        mat, (row, col) = state
        results = []

        if row > 0:
            mat1 = np.copy(mat)
            mat1[row][col] = mat1[row - 1][col]
            mat1[row - 1][col] = 0
            results.append(('up', [mat1, (row - 1, col)]))
        if col > 0:
            mat1 = np.copy(mat)
            mat1[row][col] = mat1[row][col - 1]
            mat1[row][col - 1] = 0
            results.append(('left', [mat1, (row, col - 1)]))
        if row < 2:
            mat1 = np.copy(mat)
            mat1[row][col] = mat1[row + 1][col]
            mat1[row + 1][col] = 0
            results.append(('down', [mat1, (row + 1, col)]))
        if col < 2:
            mat1 = np.copy(mat)
            mat1[row][col] = mat1[row][col + 1]
            mat1[row][col + 1] = 0
            results.append(('right', [mat1, (row, col + 1)]))
        return results

    def print(self):
        solution = self.solution if self.solution is not None else None
        print("Start State:\n", self.start[0], "\n")
        print("Goal State:\n", self.goal[0], "\n")
        print("\nStates Explored: ", self.num_explored, "\n")
        print("Solution:\n ")
        for action, cell in zip(solution[0], solution[1]):
            print("action: ", action, "\n", cell[0], "\n")
        print("Goal Reached!!")

    def does_not_contain_state(self, state):
        for st in self.explored:
            if (st[0] == state[0]).all():
                return False
        return True

    def solve(self):
        self.num_explored = 0

        start = Node(state=self.start, parent=None, action=None)
        frontier = QueueFrontier()
        frontier.add(start)

        self.explored = []

        while True:
            if frontier.empty():
                raise Exception("No solution")

            node = frontier.remove()
            self.num_explored += 1

            if (node.state[0] == self.goal[0]).all():
                actions = []
                cells = []
                while node.parent is not None:
                    actions.append(node.action)
                    cells.append(node.state)
                    node = node.parent
                actions.reverse()
                cells.reverse()
                self.solution = (actions, cells)
                return

            self.explored.append(node.state)

            for action, state in self.neighbors(node.state):
                if not frontier.contains_state(state) and self.does_not_contain_state(state):
                    child = Node(state=state, parent=node, action=action)
                    frontier.add(child)

start = np.array([[1, 2, 3], [8, 0, 4], [7, 6, 5]])
goal = np.array([[2, 8, 1], [0, 4, 3], [7, 6, 5]])

startIndex = (1, 1)
goalIndex = (1, 0)

p = Puzzle(start, startIndex, goal, goalIndex)
p.solve()
p.print()

Output:


Experiment-5:

5. Implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis based on
a given set of training data samples. Read the training data from a .csv file.

Aim: To implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis
based on a given set of training data samples, reading the training data from a .csv file.

Description:
The FIND-S algorithm is a basic concept learning algorithm in machine learning. It finds the most
specific hypothesis that fits all the positive examples. Note that the algorithm considers only the
positive training examples.

DataSet:

Example Sky Air temperature Humidity Wind Water Forecast Enjoy-sport


1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cold High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes
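Worked through this data, the hypothesis starts as the first positive example (Sunny, Warm, Normal, Strong, Warm, Same); example 2 generalizes Humidity to '?', example 3 is negative and is ignored, and example 4 generalizes Water and Forecast, leaving (Sunny, Warm, ?, Strong, ?, ?).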

Program:
import pandas as pd
import numpy as np

filename = r"C:\Users\LENOVO\Downloads\Find.csv"
df = pd.read_csv(filename)
df.info()
df.head()
a = np.array(df)[::-1]
print(a)
target = np.array(df)[::-1]
print(target)

def func(concepts, target):
    specific_hypothesis = None
    for row in concepts:
        # Column 0 is the example number, the last column is the class label
        attributes, label = row[1:-1], row[-1]
        if label == "Yes":
            if specific_hypothesis is None:
                # Initialize with the first positive example seen
                specific_hypothesis = list(attributes)
            else:
                # Generalize every attribute that disagrees with this positive example
                for x in range(len(specific_hypothesis)):
                    if attributes[x] != specific_hypothesis[x]:
                        specific_hypothesis[x] = '?'
    return specific_hypothesis

print("The final hypothesis is:", func(a, target))

Output:
<class 'pandas.core.frame.DataFrame'>

RangeIndex: 4 entries, 0 to 3

Data columns (total 8 columns):

# Column Non-Null Count Dtype

0 Example 4 non-null int64


1 Sky 4 non-null object
2 Air temperature 4 non-null object
3 Humidity 4 non-null object
4 Wind 4 non-null object
5 Water 4 non-null object
6 Forecast 4 non-null object
7 Enjoy-sport 4 non-null object

dtypes: int64(1), object(7)

memory usage: 384.0+ bytes

[[4 'Sunny' 'Warm' 'High' 'Strong' 'Cool' 'Change' 'Yes']
 [3 'Rainy' 'Cold' 'High' 'Strong' 'Warm' 'Change' 'No']
 [2 'Sunny' 'Warm' 'High' 'Strong' 'Warm' 'Same' 'Yes']
 [1 'Sunny' 'Warm' 'Normal' 'Strong' 'Warm' 'Same' 'Yes']]
[[4 'Sunny' 'Warm' 'High' 'Strong' 'Cool' 'Change' 'Yes']
 [3 'Rainy' 'Cold' 'High' 'Strong' 'Warm' 'Change' 'No']
 [2 'Sunny' 'Warm' 'High' 'Strong' 'Warm' 'Same' 'Yes']
 [1 'Sunny' 'Warm' 'Normal' 'Strong' 'Warm' 'Same' 'Yes']]

Experiment-6:

6. For a given set of training data examples stored in a .csv file, implement and demonstrate the
candidate elimination algorithm to output a description of the set of all hypotheses consistent with
the training examples.
Aim: For a given set of training data examples stored in a .csv file, to implement and demonstrate
the candidate elimination algorithm and output a description of the set of all hypotheses consistent
with the training examples.

Description:
The Candidate Elimination algorithm is used to find the set of consistent hypotheses, that is, the
version space.
The Candidate Elimination algorithm finds all hypotheses that match all the given training
examples. Unlike the FIND-S and List-then-Eliminate algorithms, it goes through both negative
and positive examples, eliminating any inconsistent hypothesis.
DataSet:

Example Sky Air temperature Humidity Wind Water Forecast Enjoy-sport
1 Sunny Warm Normal Strong Warm Same Yes
2 Sunny Warm High Strong Warm Same Yes
3 Rainy Cold High Strong Warm Change No
4 Sunny Warm High Strong Cool Change Yes

Program:
import pandas as pd
import numpy as np

filename = r"C:\Users\LENOVO\Downloads\Find.csv"
df = pd.read_csv(filename)
df.info()
df.head()
concepts = np.array(df.iloc[:, 0:-1])
print(concepts)
target = np.array(df.iloc[:, -1])
print(target)

def learn(concepts, target):
    specific_h = concepts[0].copy()
    print("Initialization of specific_h and general_h")
    print("specific_h:", specific_h)
    general_h = [["?" for i in range(len(specific_h))] for i in range(len(specific_h))]
    print("general_h:", general_h)
    print("concepts:", concepts)
    for i, h in enumerate(concepts):
        if target[i] == "Yes":
            # Positive example: generalize specific_h where it disagrees
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = '?'
                    general_h[x][x] = '?'
        if target[i] == "No":
            # Negative example: specialize general_h where it disagrees
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = '?'
        print("\n steps of candidate elimination algorithm:", i + 1)
        print("specific_h:", specific_h, "\n")
        print("general_h:", general_h)
    # Drop fully general rows ("?" everywhere) from general_h
    indices = [i for i, val in enumerate(general_h) if val == ["?"] * len(specific_h)]
    print("\n indices", indices)
    for i in indices:
        general_h.remove(["?"] * len(specific_h))
    return specific_h, general_h

s_final, g_final = learn(concepts, target)
print("\nfinal specific_h:", s_final, sep="\n")
print("final general_h:", g_final, sep="\n")

Output:
<class 'pandas.core.frame.DataFrame'>

RangeIndex: 4 entries, 0 to 3

Data columns (total 8 columns):

# Column Non-Null Count Dtype


0 Example 4 non-null int64
1 Sky 4 non-null object
2 Air temperature 4 non-null object
3 Humidity 4 non-null object
4 Wind 4 non-null object
5 Water 4 non-null object
6 Forecast 4 non-null object
7 Enjoy-sport 4 non-null object

dtypes: int64(1), object(7)

memory usage: 384.0+ bytes

[[1 'Sunny' 'Warm' 'Normal' 'Strong' 'Warm' 'Same']
 [2 'Sunny' 'Warm' 'High' 'Strong' 'Warm' 'Same']
 [3 'Rainy' 'Cold' 'High' 'Strong' 'Warm' 'Change']
 [4 'Sunny' 'Warm' 'High' 'Strong' 'Cool' 'Change']]
['Yes' 'Yes' 'No' 'Yes']

Experiment-7:
7. Write a program to demonstrate the working of the decision tree classifier. Use an appropriate
dataset for building the decision tree and apply this knowledge to classify a new sample.

Aim: To write a program to demonstrate the working of the decision tree classifier, use an
appropriate dataset for building the decision tree, and apply this knowledge to classify a new sample.

Description:
The decision tree is one of the most powerful and popular algorithms. The decision tree algorithm
falls under the category of supervised learning algorithms, and it works for both continuous and
categorical output variables.
Program:
import pandas as pd

df = pd.read_csv(r"C:\Users\LENOVO\Downloads\company.csv")
df.head()
inputs = df.drop('salary more than 100k', axis='columns')
inputs
target = df['salary more than 100k']
target
from sklearn.preprocessing import LabelEncoder
le_company = LabelEncoder()
le_job = LabelEncoder()
le_degree = LabelEncoder()
inputs['company_n'] = le_company.fit_transform(inputs['company'])
inputs['job_n'] = le_job.fit_transform(inputs['job'])
inputs['degree_n'] = le_degree.fit_transform(inputs['degree'])
inputs
inputs_n = inputs.drop(['company', 'job', 'degree'], axis='columns')
inputs_n
target

Output:


0 0
1 0
2 1
3 1
4 0
5 1
6 0
7 0
8 1
9 1
10 1
11 1
12 1
13 1
14 1
15 1
Name: salary more than 100k, dtype: int64
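The listing above stops at label encoding and never trains the classifier the aim calls for; a minimal continuation, assuming the inputs_n and target frames built above and illustrative encoded values for the new sample, might be:

from sklearn import tree

model = tree.DecisionTreeClassifier()
model.fit(inputs_n, target)

# Classify a new sample; the encoded values (company=2, job=1, degree=0) are assumptions
print(model.predict([[2, 1, 0]]))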

Experiment: 8
8. Write a program to demonstrate the working of the decision tree regressor. Use an appropriate
dataset for the decision tree regressor.
Aim: To write a program to demonstrate the working of the decision tree regressor, using an
appropriate dataset for the decision tree regressor.


Description:
Decision tree regression observes the features of an object and trains a model in the structure
of a tree to predict future data, producing meaningful continuous output. Continuous output
means that the output/result is not discrete, i.e., it is not represented by a known, discrete
set of numbers or values.
DataSet:
Age Height
10 118
11 138
12 138
13 139
14 140
15 140
16 140
17 140
18 141
19 141
20 141
21 142
22 142
23 142
24 143
25 143
26 143
27 143

Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# import the height dataset
data = pd.read_csv(r'C:\Users\admin\Documents\aiml lab\decission tree regression.csv')
data.head()
# store the data as dependent and independent variables separately
x = data.iloc[:, 0:1].values
y = data.iloc[:, 1].values
# split the dataset into training and test datasets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
# import the Decision Tree Regressor
from sklearn.tree import DecisionTreeRegressor
# create a decision tree regressor object from the DecisionTreeRegressor class
DtReg = DecisionTreeRegressor(random_state=0)
# fit the decision tree regressor with the training data represented by x_train and y_train
DtReg.fit(x_train, y_train)
# predict height from the test dataset with the Decision Tree Regressor
y_predict_dtr = DtReg.predict(x_test)
# evaluate the model using R-square for Decision Tree Regression
from sklearn import metrics
r_square = metrics.r2_score(y_test, y_predict_dtr)
print('R-square Error associated with Decision Tree Regression is:', r_square)
'''visualise the Decision Tree regression by creating a range of values from the min value of
x_train with a difference of 0.01 between two consecutive values'''
x_val = np.arange(min(x_train), max(x_train), 0.01)
# reshape the data into a len(x_val)*1 array in order to make a column out of the x_val values
x_val = x_val.reshape((len(x_val), 1))
# define a scatter plot for the training data
plt.scatter(x_train, y_train, color='blue')
# plot the predicted data
plt.plot(x_val, DtReg.predict(x_val), color='red')
# define the title
plt.title('Height prediction using Decision tree Regression')
# define the x axis label
plt.xlabel('age')
# define the y axis label
plt.ylabel('height')
# draw the plot
plt.show()
# import the export_graphviz package
from sklearn.tree import export_graphviz
# store the decision tree in a .dot file in order to visualize the plot
export_graphviz(DtReg, out_file='dtregression.dot',
                feature_names=['Age'])
# predict Height based on Age using Decision Tree Regression
height_pred = DtReg.predict([[41]])
print("predicted Height: %d" % height_pred)

Output:


Experiment: 9
Aim: To write a program to demonstrate the working of the Random Forest classifier. Use an
appropriate dataset for the Random Forest classifier.

Description:
Random forest is a supervised learning algorithm. It can be used both for classification and
regression, and it is also among the most flexible and easy-to-use algorithms. A forest is
comprised of trees, and it is said that the more trees a forest has, the more robust it is.
Random forest selects the best solution by means of voting. It also provides a pretty good
indicator of feature importance.
DataSet:

Program:
#1
from sklearn.datasets import load_iris
iris=load_iris()


dir(iris)

Output:
['DESCR', 'data', 'data_module', 'feature_names', 'filename', 'frame', 'target', 'target_names']

#2
import pandas as pd
df=pd.DataFrame(iris.data, columns=iris.feature_names)
df.head()

Output:
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0                5.1               3.5                1.4               0.2
1                4.9               3.0                1.4               0.2
2                4.7               3.2                1.3               0.2
3                4.6               3.1                1.5               0.2
4                5.0               3.6                1.4               0.2

#3
df['target']=iris.target
df.head()

Output:
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                5.1               3.5                1.4               0.2       0
1                4.9               3.0                1.4               0.2       0
2                4.7               3.2                1.3               0.2       0
3                4.6               3.1                1.5               0.2       0
4                5.0               3.6                1.4               0.2       0


#4
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.drop(['target'], axis='columns'), iris.target, test_size=0.2)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)

Output:
RandomForestClassifier()

#5
model.score(X_test,y_test)

Output:
0.8666666666666667

#6
model = RandomForestClassifier(n_estimators=40)
model.fit(X_train, y_train)
model.score(X_test, y_test)

Output:
0.9

#7
y_predicted = model.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_predicted)
cm


Output:
array([[ 7,  0,  0],
       [ 0, 10,  0],
       [ 0,  3, 10]])

Experiment: 10
Aim: To write a program to demonstrate the working of the Logistic Regression classifier. Use an
appropriate dataset for Logistic Regression.
Description:
Logistic Regression is a machine learning classification algorithm that is used to predict the
probability of a categorical dependent variable. In logistic regression, the dependent variable
is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.).
In other words, the logistic regression model predicts P(Y=1) as a function of X.


Types of Logistic Regression:

Though generally used for predicting binary target variables, logistic regression can be
extended and further classified into three different types, as mentioned below:
Binomial: the target variable can have only two possible types, e.g., predicting whether a mail
is spam or not.
Multinomial: the target variable has three or more possible types, which may not have any
quantitative significance, e.g., predicting a disease.
Ordinal: the target variables have ordered categories, e.g., web series ratings from 1 to 5.
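To make "P(Y=1) as a function of X" concrete, here is a minimal sketch of the logistic (sigmoid) function; the intercept and coefficient values are purely illustrative assumptions:

import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number z to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# P(Y=1 | x) for a model with assumed intercept b0 and coefficient b1
b0, b1, x = -4.0, 0.08, 60.0
print(sigmoid(b0 + b1 * x))  # ~0.69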

Dataset:

Program:
#1
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline

#2


df = pd.read_csv('HR_comma_sep.csv')
df.head()
Output:

#3
left=df[df.left==1]
left.shape

Output:
(3571, 10)

#4
retained=df[df.left==0]
retained.shape

Output:
(11428, 10)

#5
df.groupby('left').mean()

Output:


#6
pd.crosstab(df.salary,df.left).plot(kind='bar')

Output:

#7

pd.crosstab(df.Department,df.left).plot(kind='bar')

Output:


#8
subdf=df[['satisfaction_level','average_montly_hours','promotion_last_5years','salary']]
subdf.head()

Output:

#9
salary_dummies=pd.get_dummies(subdf.salary,prefix="salary")
df_with_dummies=pd.concat([subdf,salary_dummies],axis='columns')
df_with_dummies.head()

Output:


#10
df_with_dummies.drop('salary',axis='columns',inplace=True)
df_with_dummies.head()
X=df_with_dummies
X.head()

Output:

#11
y=df.left

#12
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

#13


from sklearn.linear_model import LogisticRegression


model = LogisticRegression()

#14
model.fit(X_train,y_train)

Output:
LogisticRegression()

#15
model.predict(X_test)

Output:
array([0, 0, 0, ..., 0, 0, 0])

#16
model.score(X_test,y_test)

Output:

0.7877306068015114

Experiment: 11

Aim: To implement the Simulated Annealing algorithm using LISP/PROLOG.

Description:
Simulated Annealing (SA)
 SA is applied to solve optimization problems.
 SA is a stochastic algorithm.
 SA escapes from local optima by allowing worsening moves.
 SA is a memoryless algorithm: it does not use any information gathered during the search.
 SA is applied to both combinatorial and continuous optimization problems.
 SA is simple and easy to implement.
 SA is motivated by the physical annealing process.
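The worsening-move acceptance mentioned above is the Metropolis criterion, which the program below computes as mac = exp(-difference / t); a minimal standalone sketch:

from math import exp
from random import random

def accept(delta, temperature):
    """Metropolis criterion: always accept improvements; accept worsening
    moves with probability exp(-delta / temperature)."""
    return delta < 0 or random() < exp(-delta / temperature)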

Program:
from numpy import asarray, exp
from numpy.random import randn, rand, seed
from matplotlib import pyplot

def objective(step):
    return step[0] ** 2.0

def sa(objective, area, iterations, step_size, temperature):
    start_point = area[:, 0] + rand(len(area)) * (area[:, 1] - area[:, 0])
    start_point_eval = objective(start_point)
    mia_start_point, mia_start_eval = start_point, start_point_eval
    outputs = []
    for i in range(iterations):
        mia_step = mia_start_point + randn(len(area)) * step_size
        mia_step_eval = objective(mia_step)
        if mia_step_eval < start_point_eval:
            start_point, start_point_eval = mia_step, mia_step_eval
            outputs.append(start_point_eval)
            print('Acceptance Criteria = %.5f' % mac, " ", 'iteration Number = ', i, " ",
                  'best_so_far = ', start_point, " ", 'new_best = %.5f' % start_point_eval)
        difference = mia_step_eval - mia_start_eval
        t = temperature / float(i + 1)
        mac = exp(-difference / t)
        if difference < 0 or rand() < mac:
            mia_start_point, mia_start_eval = mia_step, mia_step_eval
    return [start_point, start_point_eval, outputs]

seed(1)
area = asarray([[-6.0, 6.0]])
temperature = 12
iterations = 1200
step_size = 0.1
start_point, output, outputs = sa(objective, area, iterations, step_size, temperature)
pyplot.plot(outputs, 'ro-')
pyplot.xlabel('Improvement Value')
pyplot.ylabel('Evaluation of Objective Function')
pyplot.show()

Output:

Acceptance Criteria = 0.76232 iteration Number = 37 best_so_far = [-0.95340594] new_best = 0.90898

Acceptance Criteria = 0.87733 iteration Number = 39 best_so_far = [-0.91563305] new_best = 0.83838

Acceptance Criteria = 1.44710 iteration Number = 40 best_so_far = [-0.85680363] new_best = 0.73411

Acceptance Criteria = 1.42798 iteration Number = 41 best_so_far = [-0.8221177] new_best = 0.67588

Acceptance Criteria = 1.22608 iteration Number = 42 best_so_far = [-0.68541443] new_best = 0.46979

Acceptance Criteria = 2.09273 iteration Number = 43 best_so_far = [-0.61804282] new_best = 0.38198

Acceptance Criteria = 1.16478 iteration Number = 50 best_so_far = [-0.42564785] new_best = 0.18118

Acceptance Criteria = 0.81414 iteration Number = 66 best_so_far = [-0.35632231] new_best = 0.12697

Acceptance Criteria = 1.74595 iteration Number = 67 best_so_far = [-0.33780667] new_best = 0.11411

Acceptance Criteria = 1.02155 iteration Number = 72 best_so_far = [-0.3368772] new_best = 0.11349

Acceptance Criteria = 1.20897 iteration Number = 75 best_so_far = [-0.29671621] new_best = 0.08804


Acceptance Criteria = 1.47133 iteration Number = 88 best_so_far = [-0.29346186] new_best = 0.08612


Acceptance Criteria = 1.78859 iteration Number = 89 best_so_far = [-0.2525718] new_best = 0.06379

Acceptance Criteria = 0.29369 iteration Number = 93 best_so_far = [-0.20893326] new_best = 0.04365

Acceptance Criteria = 1.68933 iteration Number = 94 best_so_far = [-0.12724854] new_best = 0.01619

Acceptance Criteria = 1.24284 iteration Number = 95 best_so_far = [-0.00883141] new_best = 0.00008

Acceptance Criteria = 0.99774 iteration Number = 102 best_so_far = [0.00581387] new_best = 0.00003
Acceptance Criteria = 1.05301 iteration Number = 118 best_so_far = [0.00137051] new_best = 0.00000

Acceptance Criteria = 0.99611 iteration Number = 138 best_so_far = [0.0009528] new_best = 0.00000

Acceptance Criteria = 1.33458 iteration Number = 158 best_so_far = [-0.00047896] new_best = 0.00000

Acceptance Criteria = 0.50157 iteration Number = 473 best_so_far = [0.00045742] new_best = 0.00000

Acceptance Criteria = 1.02387 iteration Number = 482 best_so_far = [0.00010925] new_best = 0.00000


Experiment: 12
Aim: To implement the Monkey Banana problem using LISP/PROLOG.
Description:
 A hungry monkey is in a room, and he is near the door.
 The monkey is on the floor.
 Bananas have been hung from the center of the ceiling of the room.
 There is a block (or chair) present in the room near the window.
 The monkey wants the banana but cannot reach it.
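The lab listing below is a toy illustration built around a Monkey class rather than a planner; a minimal state-space sketch of the classic problem, assuming three locations (door, window, center) and BFS over (monkey, box, on_box, has_banana) states, might look like:

from collections import deque

def monkey_banana():
    """BFS over (monkey_at, box_at, on_box, has_banana) states."""
    start = ('door', 'window', False, False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (m, b, on, has), plan = queue.popleft()
        moves = []
        if not on:
            for place in ('door', 'window', 'center'):
                if place != m:
                    moves.append(('walk to ' + place, (place, b, on, has)))
            if m == b:
                for place in ('door', 'window', 'center'):
                    if place != b:
                        moves.append(('push box to ' + place, (place, place, on, has)))
                moves.append(('climb box', (m, b, True, has)))
        elif m == 'center':
            moves.append(('grab banana', (m, b, on, True)))
        for action, nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                if nxt[3]:  # banana grabbed: goal reached
                    return plan + [action]
                queue.append((nxt, plan + [action]))

print(monkey_banana())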

Program:
import random

class Monkey:
    def __init__(self, bananas):
        self.bananas = bananas

    def __repr__(self):
        return "Monkey with %d bananas." % self.bananas

monkeys = [Monkey(random.randint(0, 50)) for i in range(5)]

print("Random monkeys:")
print(monkeys)
print()

def number_of_bananas(monkey):
    """Returns the number of bananas that a monkey has."""
    return monkey.bananas

print("number_of_bananas(FIRST MONKEY):", number_of_bananas(monkeys[0]))
print()

max_monkey = max(monkeys, key=number_of_bananas)
print("Max monkey: ", max_monkey)


Output:

Random monkeys:
[Monkey with 40 bananas., Monkey with 13 bananas., Monkey with 0 bananas., Monkey
with 3 bananas., Monkey with 19 bananas.]
number_of_bananas(FIRST MONKEY): 40
Max monkey: Monkey with 40 bananas.
