AI & ML Lab Manual


Ex.No:1
Date: Implement Breadth First Search (BFS)

Aim:
To write a Python program to implement Breadth First Search (BFS).

Algorithm:

Step 1. Start
Step 2. Put any one of the graph’s vertices at the back of the queue.
Step 3. Take the front item of the queue and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add those which are not
within the visited list to the rear of the queue.
Step 5. Continue steps 3 and 4 until the queue is empty.
Step 6. Stop

Program:

graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []   # list of visited nodes
queue = []     # queue of nodes waiting to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)          # dequeue the front node
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Following is the Breadth-First Search")
bfs(visited, graph, '5')
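Note: the program above uses a plain list as the queue, so queue.pop(0) is a linear-time operation. A minimal alternative sketch (an optional addition for illustration, not part of the prescribed program) uses collections.deque from the standard library, whose popleft() runs in constant time:

from collections import deque

def bfs_deque(visited, graph, node):
    # Same traversal as bfs() above, but with an O(1) dequeue
    visited.append(node)
    queue = deque([node])
    while queue:
        m = queue.popleft()
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)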


OUTPUT:

RESULT:

Thus the Python program to implement Breadth First Search (BFS) was developed successfully.
Ex.No:2
Date: Implement Depth First Search (DFS)

Aim:
To write a Python program to implement Depth First Search (DFS).

Algorithm:

Step 1. Start
Step 2. Put any one of the graph's vertices on top of the stack.
Step 3. Take the top item of the stack and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Push those which are not in the visited list onto the stack.
Step 5. Repeat steps 3 and 4 until the stack is empty.
Step 6. Stop

Program:

graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set()

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("Following is the Depth-First Search")
dfs(visited, graph, '5')
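Note: the algorithm above is stated in terms of an explicit stack, while the program uses recursion (the call stack plays the role of the stack). An equivalent iterative sketch with an explicit stack (shown only for comparison; the function name is illustrative) would be:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]              # explicit stack instead of recursion
    while stack:
        node = stack.pop()       # take the top item of the stack
        if node not in visited:
            print(node)
            visited.add(node)
            # push unvisited neighbours onto the stack
            for neighbour in graph[node]:
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited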


OUTPUT:

RESULT:
Thus the Python program to implement Depth First Search (DFS) was
developed successfully.
Ex.No:3 Analysis of Breadth First and Depth First Search in Terms of Time and Space
Date:

Aim:
To write a Python program to implement the Analysis of Breadth First and
Depth First Search in Terms of Time and Space

Algorithm:

Step 1.Start
Step 2. Import the required libraries and define a class Graph.
Step 3. Initialize the class with a defaultdict graph to store the graph structure.
Step 4. Define a function analyze_algorithm to measure the execution time and
memory usage of a given algorithm on a graph.
Step 5. In the analyze_algorithm function, measure the execution time using
time.time() and memory usage using sys.getsizeof().
Step 6. Create an instance of the Graph class and add edges to construct the
graph.
Step 7. Define the start node for traversal.
Step 8. Analyze the BFS and DFS algorithms using the analyze_algorithm
function.
Step 9. Print the execution time and memory usage for both BFS and DFS.
Step 10. Stop

Program

from collections import defaultdict
import time
import sys

class Graph:
    def __init__(self):
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        self.graph[u].append(v)

    def bfs(self, start):
        visited = set()
        queue = [start]
        visited.add(start)
        while queue:
            vertex = queue.pop(0)
            for neighbor in self.graph[vertex]:
                if neighbor not in visited:
                    queue.append(neighbor)
                    visited.add(neighbor)

    def dfs_util(self, vertex, visited):
        visited.add(vertex)
        for neighbor in self.graph[vertex]:
            if neighbor not in visited:
                self.dfs_util(neighbor, visited)

    def dfs(self, start):
        visited = set()
        self.dfs_util(start, visited)

def analyze_algorithm(algorithm, graph, start_node):
    # algorithm is a bound method (g.bfs or g.dfs), so it only needs the start node;
    # the graph is passed separately so its size can be reported.
    start_time = time.time()
    algorithm(start_node)
    end_time = time.time()
    execution_time = end_time - start_time
    memory_usage = sys.getsizeof(graph)
    return execution_time, memory_usage

if __name__ == "__main__":
    g = Graph()
    g.add_edge(0, 1)
    g.add_edge(0, 2)
    g.add_edge(1, 2)
    g.add_edge(2, 0)
    g.add_edge(2, 3)
    g.add_edge(3, 3)

    start_node = 2

    bfs_time, bfs_memory = analyze_algorithm(g.bfs, g.graph, start_node)
    dfs_time, dfs_memory = analyze_algorithm(g.dfs, g.graph, start_node)

    print("BFS Execution Time:", bfs_time)
    print("BFS Memory Usage:", bfs_memory)
    print("DFS Execution Time:", dfs_time)
    print("DFS Memory Usage:", dfs_memory)


OUTPUT:

RESULT:
Thus the Python program to implement the Analysis of Breadth First and
Depth First Search in Terms of Time and Space was developed successfully.
Ex.No:4
Date: Implement and Compare Greedy and A* Algorithm

Aim:
To write a Python program to implement and compare greedy and A* algorithm.

Algorithm:
Step 1. Start
Step 2. Initialize an empty set visited to keep track of visited nodes.
Step 3. Initialize a list path to store the path taken.
Step 4. Set the current node to the start node.
Step 5. While the current node is not equal to the goal node:
Step 5.1. Add the current node to the visited set.
Step 5.2. Get the neighbors of the current node from the graph.
Step 5.3. Choose the next node by selecting the neighbor with the minimum heuristic value (estimated cost to reach the goal).
Step 5.4. Append the chosen next node to the path.
Step 5.5. Update the current node to the chosen next node.
Step 5.6. If there are no neighbors, return None.
Step 6. Append the goal node to the path.
Step 7. Return the path.
Step 8. Stop
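Note (for comparison, restating what the two search functions below compute): greedy best-first search always expands the node with the smallest heuristic value h(n), here the Manhattan distance to the goal, whereas A* expands the node with the smallest

    f(n) = g(n) + h(n)

where g(n) is the number of steps already taken from the start. Because A* also accounts for the cost already paid, it returns an optimal path when the heuristic never overestimates (Manhattan distance on this grid does not), while greedy search may return a longer path.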
Program

import heapq

class Node:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.g = 0
        self.h = 0
        self.parent = None

    def __lt__(self, other):
        return (self.g + self.h) < (other.g + other.h)

    # Compare and hash nodes by their coordinates so that goal tests and
    # membership checks in the open and closed lists work as intended.
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def __hash__(self):
        return hash((self.x, self.y))

def manhattan_distance(node1, node2):
    return abs(node1.x - node2.x) + abs(node1.y - node2.y)

def get_neighbors(node, grid):
    neighbors = []
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = node.x + dx, node.y + dy
        if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] != 1:
            neighbors.append(Node(nx, ny))
    return neighbors

def greedy_search(start, goal, grid):
    open_list = [start]
    while open_list:
        current = open_list.pop(0)
        if current == goal:
            path = []
            while current:
                path.append((current.x, current.y))
                current = current.parent
            return path[::-1]
        for neighbor in get_neighbors(current, grid):
            if neighbor not in open_list:
                neighbor.parent = current
                open_list.append(neighbor)
        # Greedy: order the open list by heuristic value only
        open_list.sort(key=lambda x: manhattan_distance(x, goal))
    return None

def astar_search(start, goal, grid):
    open_list = []
    closed_set = set()
    heapq.heappush(open_list, start)
    while open_list:
        current = heapq.heappop(open_list)
        if current == goal:
            path = []
            while current:
                path.append((current.x, current.y))
                current = current.parent
            return path[::-1]
        closed_set.add(current)
        for neighbor in get_neighbors(current, grid):
            if neighbor in closed_set:
                continue
            tentative_g = current.g + 1
            if neighbor not in open_list or tentative_g < neighbor.g:
                neighbor.parent = current
                neighbor.g = tentative_g
                neighbor.h = manhattan_distance(neighbor, goal)
                if neighbor not in open_list:
                    heapq.heappush(open_list, neighbor)
    return None

def print_grid(grid, path=None):
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if path and (i, j) in path:
                print("P", end=" ")
            elif grid[i][j] == 1:
                print("#", end=" ")
            else:
                print(".", end=" ")
        print()

grid = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0]
]

start = Node(0, 0)
goal = Node(3, 4)

print("Grid:")
print_grid(grid)

print("\nGreedy Search Path:")
greedy_path = greedy_search(start, goal, grid)
print_grid(grid, greedy_path)

print("\nA* Search Path:")
astar_path = astar_search(start, goal, grid)
print_grid(grid, astar_path)
OUTPUT:

RESULT:
Thus the Python program to implement and compare the greedy and A* algorithms was developed successfully.
Ex.No:5 Implement the non-parametric locally weighted regression algorithm in order to fit data points. Select an appropriate data set for your experiment and draw graphs
Date:

Aim:
To write a Python program to implement the non-parametric locally weighted regression algorithm in order to fit data points, selecting an appropriate data set for the experiment and drawing graphs.

Algorithm:

Step 1.Start

Step 2. Import the required libraries and define a class LocallyWeightedRegression.

Step 3. Initialize the class with a parameter tau representing the bandwidth.

Step 4. In the fit method, store the input features X, target values y, and the
number of data points m.
Step 5. Implement the predict method to predict target values for a given set of
input features X_test.
Step 6. Implement the predict_point method to predict the target value for a
single data point using locally weighted regression.
Step 7. Define the kernel method to calculate the weights for each data point
based on the Gaussian kernel.
Step 8. Generate random data points X and y.
Step 9. Sort the data points based on the input features and set the bandwidth.

Step 10. Create an instance of the Locally Weighted Regression class.

Step 11. Fit the model to the data using the fit method.

Step 12. Predict the target values for the test data using the predict method.
Step 13. Plot the original data points and the predicted values using matplotlib.pyplot.
Step 14. Stop
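Note (restating, in formula form, what the kernel and predict_point methods below compute): for a query point x, each training point x_i receives a weight

    w_i = exp( -||x_i - x||^2 / (2 * tau^2) )

and the local parameters are obtained by weighted least squares,

    theta = (X^T W X)^(-1) X^T W y,    y_hat(x) = x . theta

where W is the diagonal matrix holding the weights w_i and tau is the bandwidth that controls how quickly the weights fall off with distance from x.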

Program:

import numpy as np
import matplotlib.pyplot as plt

class LocallyWeightedRegression:
    def __init__(self, tau):
        self.tau = tau

    def fit(self, X, y):
        self.X = X
        self.y = y
        self.m = X.shape[0]

    def predict(self, X_test):
        m_test = X_test.shape[0]
        y_pred = np.zeros(m_test)
        for i in range(m_test):
            y_pred[i] = self.predict_point(X_test[i])
        return y_pred

    def predict_point(self, x):
        # Weighted least squares fitted around the query point x
        weights = self.kernel(x)
        W = np.eye(self.m) * weights
        theta = np.linalg.inv(self.X.T.dot(W).dot(self.X)).dot(self.X.T).dot(W).dot(self.y)
        return x.dot(theta)

    def kernel(self, x):
        # Gaussian weights based on distance from the query point
        weights = np.exp(-np.sum((self.X - x) ** 2, axis=1) / (2 * self.tau ** 2))
        return weights

np.random.seed(42)
X = 5 * np.random.rand(100, 1)
y = 3 * X.squeeze() + 2 + np.random.randn(100)

sorted_indices = np.argsort(X.squeeze())
X_sorted = X[sorted_indices]
y_sorted = y[sorted_indices]

tau = 0.5
lwr = LocallyWeightedRegression(tau)
lwr.fit(X, y)

X_test = np.linspace(0, 5, 100).reshape(-1, 1)
y_pred = lwr.predict(X_test)

plt.figure(figsize=(10, 6))
plt.scatter(X, y, color='blue', label='Data points')
plt.plot(X_test, y_pred, color='red', label='Locally Weighted Regression')
plt.xlabel('X')
plt.ylabel('y')
plt.title('Locally Weighted Regression')
plt.legend()
plt.grid(True)
plt.show()
OUTPUT:

RESULT:
Thus the Python program to implement the non-parametric locally weighted regression algorithm to fit data points, with an appropriate data set and graphs, was developed successfully.
Ex.No:6 Write a program to demonstrate the working of the decision tree based algorithm
Date:

Aim:
To write a Python program to demonstrate the working of the decision tree
based algorithm.

Algorithm:

Step 1.Start
Step 2. Import the necessary libraries: pandas, scikit-learn's DecisionTreeClassifier, and matplotlib.pyplot.
Step 3. Load the data from an Excel file into a pandas DataFrame.
Step 4. Map categorical variables to numerical values for easier processing.
Step 5. Define the features (input variables) and the target variable (output variable).
Step 6. Select the features and the target variable from the DataFrame.
Step 7. Create a DecisionTreeClassifier object and fit it to the data.
Step 8. Plot the decision tree using matplotlib.pyplot.
Step 9. Stop

Program:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import matplotlib.pyplot as plt

df = pd.read_excel("nation.xlsx")

d = {'UK': 0, 'USA': 1, 'N': 2}
df['Nationality'] = df['Nationality'].map(d)
d = {'YES': 1, 'NO': 0}
df['Go'] = df['Go'].map(d)

features = ['Age', 'Experience', 'Rank', 'Nationality']
X = df[features]
y = df['Go']

dtree = DecisionTreeClassifier()
dtree = dtree.fit(X, y)

plt.figure(figsize=(12, 8))
tree.plot_tree(dtree, feature_names=features, class_names=['NO', 'YES'], filled=True)
plt.subplots_adjust(left=0.05, right=0.95, top=0.95, bottom=0.05)
plt.show()
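Note: the program assumes an Excel file nation.xlsx containing the columns Age, Experience, Rank, Nationality and Go (reading .xlsx files with pandas also typically requires the openpyxl package). If the file is not available, the read_excel call can be replaced by a small hand-made DataFrame with the same column names; the values below are purely illustrative, not the contents of the original file:

# Illustrative substitute data (hypothetical values) for running without nation.xlsx
df = pd.DataFrame({
    'Age':         [36, 42, 23, 52, 43, 44, 66, 35, 52, 35],
    'Experience':  [10, 12,  4,  4, 21, 14,  3, 14, 13,  5],
    'Rank':        [ 9,  4,  6,  4,  8,  5,  7,  9,  7,  9],
    'Nationality': ['UK', 'USA', 'N', 'USA', 'USA', 'UK', 'N', 'UK', 'N', 'N'],
    'Go':          ['NO', 'NO', 'NO', 'NO', 'YES', 'NO', 'YES', 'YES', 'YES', 'YES']
})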

OUTPUT:
RESULT:
Thus the Python program to demonstrate the working of the decision tree based
algorithm was developed successfully.
Ex.No:7 Build an artificial neural network by implementing the back propagation algorithm and test the same using appropriate data sets
Date:

Aim:
To write a Python program to build an artificial neural network by
implementing the back propagation algorithm and test the same using appropriate
data sets.

Algorithm:

Step 1.Start
Step 2. Initialize the input size, hidden size, output size, learning rate, and
number of epochs.
Step 3. Initialize weights and biases randomly
Step 4. Define the sigmoid activation function and its derivative.
Step 5. Compute the forward pass through the network by multiplying input data
with weights, applying activation function, and computing output.
Step 6. Compute the error between predicted output and actual output.
Step 7. Compute gradients using back propagation.
Step 8. Update weights and biases using gradients and learning rate.
Step 9. Iterate over the dataset for a specified number of epochs.
Step 10. Use the trained neural network to make predictions on the input data.
Step 11. Print the predictions made by the neural network.
Step 12. Stop
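Note (the update rule implemented by the backward method below, written out): with the sigmoid activation sigma(x) = 1 / (1 + exp(-x)), whose derivative expressed in terms of the output is sigma'(a) = a * (1 - a), the output-layer error term is

    delta_out = (y - y_hat) * sigma'(y_hat)

the hidden-layer error term is

    delta_hidden = (delta_out . W_hidden_output^T) * sigma'(h)

and each weight matrix is moved in the direction of its error term, for example

    W_hidden_output += learning_rate * h^T . delta_out

where h is the hidden-layer activation; the biases are updated with the column sums of the corresponding delta terms.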

Program:

import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size, learning_rate=0.1):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.learning_rate = learning_rate
        self.weights_input_hidden = np.random.randn(input_size, hidden_size)
        self.biases_input_hidden = np.zeros((1, hidden_size))
        self.weights_hidden_output = np.random.randn(hidden_size, output_size)
        self.biases_hidden_output = np.zeros((1, output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already the sigmoid output
        return x * (1 - x)

    def forward(self, X):
        self.hidden_output = self.sigmoid(np.dot(X, self.weights_input_hidden) + self.biases_input_hidden)
        self.output = self.sigmoid(np.dot(self.hidden_output, self.weights_hidden_output) + self.biases_hidden_output)
        return self.output

    def backward(self, X, y):
        output_error = y - self.output
        output_delta = output_error * self.sigmoid_derivative(self.output)
        hidden_error = output_delta.dot(self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)
        self.weights_hidden_output += self.hidden_output.T.dot(output_delta) * self.learning_rate
        self.biases_hidden_output += np.sum(output_delta, axis=0, keepdims=True) * self.learning_rate
        self.weights_input_hidden += X.T.dot(hidden_delta) * self.learning_rate
        self.biases_input_hidden += np.sum(hidden_delta, axis=0, keepdims=True) * self.learning_rate

    def train(self, X, y, epochs):
        for epoch in range(epochs):
            output = self.forward(X)
            self.backward(X, y)

    def predict(self, X):
        return self.forward(X)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

input_size = 2
hidden_size = 4
output_size = 1
learning_rate = 0.1
epochs = 10000

nn = NeuralNetwork(input_size, hidden_size, output_size, learning_rate)
nn.train(X, y, epochs)
predictions = nn.predict(X)
print("Predictions:")
print(predictions)

OUTPUT:
RESULT:
Thus the Python program to build an artificial neural network by implementing
the back propagation algorithm and test the same using appropriate data sets was
developed successfully.
Ex.No:8
Write a program to implement the naive Bayesian classifier
Date:

Aim:
To write a Python program to implement the naive Bayesian classifier.

Algorithm:

Step 1.Start
Step 2. Import necessary libraries for text preprocessing and machine learning.
Step 3. Define a set of labeled training data consisting of text samples and their
corresponding sentiment labels
Step 4. Use a vectorization technique (e.g., CountVectorizer) to convert the text
data into numerical feature vectors.
Step 5. Initialize a Multinomial Naive Bayes classifier.
Step 6. Train the Naive Bayes classifier using the vectorized training data and
their associated sentiment labels.
Step 7. Prompt the user to input a sentence for sentiment analysis.
Step 8. Vectorize the user's input sentence using the same vectorization
technique used for the training data.
Step 9 . Use the trained classifier to predict the sentiment of the user's input.
Step 10. Output the predicted sentiment (e.g., positive or negative) for the user's
input.
Step 11. Stop
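Note (the rule the classifier below applies, stated for reference): with word counts as features, the multinomial naive Bayes classifier scores each class c for a sentence d made of words w_1 ... w_n as

    P(c | d)  proportional to  P(c) * product over i of P(w_i | c)

and predicts the class with the highest score; CountVectorizer supplies the word counts, and MultinomialNB estimates P(c) and P(w_i | c) from the training sentences (with smoothing).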

Program:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_sentences = [
    "I love this movie",
    "This movie is great",
    "Wonderful movie",
    "I hate this movie",
    "This movie is terrible"
]
train_labels = [1, 1, 1, 0, 0]   # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_sentences)

nb_classifier = MultinomialNB()
nb_classifier.fit(X_train, train_labels)

user_input = input("Enter a sentence: ")
X_user = vectorizer.transform([user_input])
prediction = nb_classifier.predict(X_user)[0]
sentiment = "positive" if prediction == 1 else "negative"
print(f"Predicted sentiment for '{user_input}': {sentiment}")


OUTPUT:

RESULT:
Thus the Python program to implement the naive Bayesian classifier was developed successfully.
Ex.No:9 Implementing neural network using self-organizing maps
Date:

Aim:
To write a Python program to implement a neural network using self-organizing maps.

Algorithm:

Step 1.Start

Step 2. Initialize a grid of neurons with random weights.
Step 3. Iterate over the data for a specified number of epochs:
Step 3.1. For each data point, find the Best Matching Unit (BMU), i.e., the neuron with weights closest to the input vector.
Step 3.2. Update the weights of the BMU and its neighbors based on their distance from the BMU and the current epoch.
Step 4. The weights of each neuron are adjusted based on the distance from the BMU and a learning rate that decreases over time.
Step 5. After training, the SOM provides a low-dimensional representation of the input data, where similar data points are mapped to nearby neurons.
Step 6. Plot the input data points and the final positions of the neurons in the SOM grid.
Step 7. Stop
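Note (the update rule implemented by the _update_weights method below, written out): at epoch t out of T epochs, every neuron (i, j) is moved towards the current input x by

    w_ij += alpha(t) * exp( -d_ij / (2 * sigma^2) ) * (x - w_ij)

where d_ij is the grid distance between neuron (i, j) and the best matching unit, and alpha(t) = learning_rate * exp(-t / T) is the influence, which decays over the epochs.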

Program:

import numpy as np
import matplotlib.pyplot as plt

class SOM:
    def __init__(self, input_size, output_size, learning_rate=0.1, sigma=1.0):
        self.input_size = input_size
        self.output_size = output_size
        self.learning_rate = learning_rate
        self.sigma = sigma
        self.weights = np.random.randn(output_size[0], output_size[1], input_size)

    def train(self, data, epochs):
        for epoch in range(epochs):
            for x in data:
                bmu_index = self._find_bmu(x)
                self._update_weights(x, bmu_index, epoch, epochs)

    def _find_bmu(self, x):
        # Best Matching Unit: the neuron whose weight vector is closest to x
        distances = np.linalg.norm(self.weights - x, axis=2)
        bmu_index = np.unravel_index(np.argmin(distances), distances.shape)
        return bmu_index

    def _update_weights(self, x, bmu_index, epoch, epochs):
        influence = self._calculate_influence(epoch, epochs)
        for i in range(self.output_size[0]):
            for j in range(self.output_size[1]):
                dist = np.linalg.norm(np.array(bmu_index) - np.array([i, j]))
                decay = np.exp(-dist / (2 * self.sigma**2))
                self.weights[i, j] += influence * decay * (x - self.weights[i, j])

    def _calculate_influence(self, epoch, epochs):
        return self.learning_rate * np.exp(-epoch / epochs)

    def get_weights(self):
        return self.weights

    def predict(self, x):
        bmu_index = self._find_bmu(x)
        return bmu_index

data = np.random.rand(100, 2)
input_size = data.shape[1]
output_size = (10, 10)
learning_rate = 0.1
sigma = 1.0
epochs = 100

som = SOM(input_size, output_size, learning_rate, sigma)
som.train(data, epochs)
weights = som.get_weights()

plt.figure(figsize=(8, 6))
plt.scatter(data[:, 0], data[:, 1], color='blue', label='Data')
for i in range(output_size[0]):
    for j in range(output_size[1]):
        plt.scatter(weights[i, j, 0], weights[i, j, 1], color='red')
plt.title('Self-Organizing Map')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.show()
OUTPUT:

RESULT:
Thus the Python program to implement a neural network using self-organizing maps was developed successfully.
Ex.No:10
Date:
Implementing k-Means algorithm to cluster a set of data

Aim:
To write a Python program to implement k-Means algorithm to cluster a set of
data.

Algorithm:

Step 1.Start

Step 2. Initialize n_clusters centroids randomly from the dataset.

Step 3. Repeat the following steps for max_iters iterations or until convergence:

Step 3.1. Assign each data point to the nearest centroid.

Step 3.2. Update the centroids by taking the mean of the data points assigned to each centroid.
Step 3.3. If the centroids do not change significantly, stop iterating.
Step 4. Return the labels indicating which cluster each data point belongs to.
Step 5. Plot the data points colored by their assigned cluster, with centroids marked as 'x'.

Step 6. Stop
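Note (the two alternating steps computed by _assign_clusters and _update_centroids below, in formula form): each point x_i is assigned the label of its nearest centroid,

    label_i = argmin over j of ||x_i - mu_j||

and each centroid is then recomputed as the mean of the points assigned to it,

    mu_j = (1 / |C_j|) * sum of x_i over i in C_j

where C_j is the set of points currently labelled j. Iteration stops when the centroids stop changing (np.allclose) or max_iters is reached.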

Program:

import numpy as np
import matplotlib.pyplot as plt

class KMeans:
    def __init__(self, n_clusters=3, max_iters=100):
        self.n_clusters = n_clusters
        self.max_iters = max_iters

    def fit(self, X):
        # Pick n_clusters random points from X as the initial centroids
        self.centroids = X[np.random.choice(X.shape[0], self.n_clusters, replace=False)]
        for _ in range(self.max_iters):
            labels = self._assign_clusters(X)
            new_centroids = self._update_centroids(X, labels)
            if np.allclose(new_centroids, self.centroids):
                break
            self.centroids = new_centroids
        return labels

    def _assign_clusters(self, X):
        distances = np.sqrt(((X - self.centroids[:, np.newaxis])**2).sum(axis=2))
        return np.argmin(distances, axis=0)

    def _update_centroids(self, X, labels):
        new_centroids = np.zeros_like(self.centroids)
        for i in range(self.n_clusters):
            new_centroids[i] = X[labels == i].mean(axis=0)
        return new_centroids

np.random.seed(0)
X, _ = np.random.randn(100, 2), None

kmeans = KMeans(n_clusters=3)
labels = kmeans.fit(X)

plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis')
plt.scatter(kmeans.centroids[:, 0], kmeans.centroids[:, 1], marker='x', c='red', s=100, label='Centroids')
plt.title('K-means Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.show()
OUTPUT:

RESULT:
Thus the Python program to implement the k-Means algorithm to cluster a set of data was developed successfully.


Ex.No:11
Date:
Implementing hierarchical clustering algorithm

Aim:
To write a Python program to implement hierarchical clustering algorithm.

Algorithm:

Step 1.Start

Step 2. Generate sample data using make_blobs function from scikit-learn.

Step 3. Compute the linkage matrix using the linkage function from scipy.cluster.hierarchy, specifying the method as 'ward'.
Step 4. The linkage matrix represents the hierarchical clustering of the data points.
Step 5. Plot the dendrogram using the dendrogram function, passing the computed linkage matrix.
Step 6. Customize the plot by specifying orientation, distance sort, and showing leaf counts.
Step 7. Instantiate an AgglomerativeClustering object with the desired number of clusters, affinity (distance metric), and linkage method.
Step 8. Fit the model to the data and obtain cluster labels using fit_predict.
Step 9. Scatter plot the data points, coloring them according to the cluster labels obtained from hierarchical clustering.

Step 10. Customize the plot with titles and axis labels.

Step 11. Stop


Program:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.60, random_state=0)

linked = linkage(X, method='ward')

plt.figure(figsize=(10, 7))
dendrogram(linked, orientation='top', distance_sort='descending', show_leaf_counts=True)
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Sample Index')
plt.ylabel('Distance')
plt.show()

cluster = AgglomerativeClustering(n_clusters=4, affinity='euclidean', linkage='ward')
cluster.fit_predict(X)

plt.figure(figsize=(10, 7))
plt.scatter(X[:, 0], X[:, 1], c=cluster.labels_, cmap='viridis')
plt.title('Hierarchical Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()
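Note: in recent scikit-learn releases the affinity argument of AgglomerativeClustering has been renamed to metric, so the call above may raise an error on newer versions. If that happens, the following variant should behave the same (with ward linkage the Euclidean metric is the default in any case):

cluster = AgglomerativeClustering(n_clusters=4, metric='euclidean', linkage='ward')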
OUTPUT:

RESULT:
Thus the Python program to implement the hierarchical clustering algorithm was developed successfully.
