AML Record
1. 09.06.22  ID3
2. 16.06.22  C 4.5
4. 30.06.22  APRIORI
9. 04.08.22  XG BOOST
Exp No : 01 ID3
Date : 09.06.22
AIM:
To Write a Python Program to Implement the Iterative Dichotomiser 3 (ID3) algorithm.
ALGORITHM:
3. Import dataset
6. Entropy: H(S) = -Σ pᵢ log₂ pᵢ
7. Information Gain: Gain(S, A) = H(S) - Σᵥ (|Sᵥ|/|S|) · H(Sᵥ)
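Steps 6 and 7 are the heart of ID3: at each node the attribute with the largest information gain is chosen as the split. A stdlib-only sketch; the 9-yes / 5-no counts below are the classic play-golf class distribution and are used purely as an illustration, not taken from the record's output:

```python
# Entropy and information gain for ID3 -- stdlib-only sketch.
from math import log2

def entropy(counts):
    """Shannon entropy of a class distribution given as counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

def information_gain(parent_counts, child_counts_list):
    """Gain = H(parent) - weighted average of the child entropies."""
    total = sum(parent_counts)
    weighted = sum(sum(child) / total * entropy(child)
                   for child in child_counts_list)
    return entropy(parent_counts) - weighted

# Whole dataset: 9 'yes' and 5 'no'
print(round(entropy([9, 5]), 3))                                   # 0.94
# Splitting on Outlook: sunny [2,3], overcast [4,0], rain [3,2]
print(round(information_gain([9, 5], [[2, 3], [4, 0], [3, 2]]), 3))  # 0.247
```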
Program:
#import dataset
import pandas as pd
from chefboost import Chefboost as chef
df = pd.read_csv("golf.txt")
df
#creating model
config = {'algorithm': 'ID3'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement ID3 algorithm was implemented successfully
and the output is verified.
Exp No : 02 C 4.5
Date : 16.06.22
AIM:
To Write a Python Program to Implement C 4.5 algorithm.
ALGORITHM:
2. Import Pandas
7. Gain ratio: GainRatio(S, A) = Gain(S, A) / SplitInfo(S, A)
8. Gain: Gain(S, A) = H(S) - Σᵥ (|Sᵥ|/|S|) · H(Sᵥ)
9. Split info: SplitInfo(S, A) = -Σᵥ (|Sᵥ|/|S|) log₂(|Sᵥ|/|S|)
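Steps 7–9 relate as GainRatio = Gain / SplitInfo, which is how C4.5 penalises attributes that split the data into many small partitions. A stdlib sketch with illustrative golf-style counts (an attribute splitting 14 instances into partitions of 5, 4 and 5; not output from the record):

```python
# Gain ratio for C4.5 -- stdlib-only sketch with illustrative counts.
from math import log2

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

def gain_ratio(parent_counts, child_counts_list):
    total = sum(parent_counts)
    sizes = [sum(child) for child in child_counts_list]
    # Gain: parent entropy minus weighted child entropies (step 8)
    gain = entropy(parent_counts) - sum(
        s / total * entropy(child)
        for s, child in zip(sizes, child_counts_list))
    # Split info: entropy of the partition sizes themselves (step 9)
    split_info = -sum((s / total) * log2(s / total) for s in sizes if s)
    return gain / split_info

# 9-yes/5-no parent, split into partitions of 5, 4 and 5 instances
print(round(gain_ratio([9, 5], [[2, 3], [4, 0], [3, 2]]), 3))  # 0.156
```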
Program:
#import dataset
import pandas as pd
from chefboost import Chefboost as chef
df = pd.read_csv("golf.txt")
df
#creating model
config = {'algorithm': 'C4.5'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement C4.5 algorithm was implemented successfully
and the output is verified.
Exp No : 03 CART
AIM:
To Write a Python Program to Implement CART algorithm.
ALGORITHM:
2. Import Pandas
Gini index: Gini(S) = 1 - Σ pᵢ²
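CART scores candidate splits with the Gini index: the split whose children have the lowest weighted impurity is chosen. A stdlib sketch with the same illustrative counts used for the earlier experiments (not output from the record):

```python
# Gini impurity for CART -- stdlib-only sketch.
def gini(counts):
    """Gini(S) = 1 - sum of squared class proportions."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def gini_split(child_counts_list):
    """Weighted Gini impurity of the children of a split."""
    total = sum(sum(child) for child in child_counts_list)
    return sum(sum(child) / total * gini(child)
               for child in child_counts_list)

# 9-vs-5 class distribution, then a split into [2,3], [4,0], [3,2]
print(round(gini([9, 5]), 3))                        # 0.459
print(round(gini_split([[2, 3], [4, 0], [3, 2]]), 3))  # 0.343
```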
Program:
#import dataset
import pandas as pd
from chefboost import Chefboost as chef
df = pd.read_csv("mammals.csv")
df
#creating model
config = {'algorithm': 'CART'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement CART algorithm was implemented
successfully and the output is verified.
Exp No : 04 APRIORI
Date : 30.06.22
AIM:
To Write a Python Program to implement Apriori algorithm.
ALGORITHM:
PROGRAM:
#importing required packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#data preprocessing
dataset = pd.read_csv('groceries.csv', on_bad_lines='skip')
# build the transaction list (one list of item strings per row)
transactions = []
for i in range(len(dataset)):
    transactions.append([str(item) for item in dataset.iloc[i].dropna()])
dataset
#dataset shape
dataset.shape
#creating model
from apyori import apriori
# support/confidence/lift thresholds are illustrative
rules = apriori(transactions = transactions, min_support = 0.003,
                min_confidence = 0.2, min_lift = 3, min_length = 2)
results = list(rules)
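The `inspect` helper below unpacks rule records in the format produced by the `apyori` package. The core of Apriori itself is the level-wise frequent-itemset search: count support of candidate itemsets, keep those above the threshold, and build larger candidates only from surviving smaller ones. A stdlib-only sketch of that idea; the tiny transaction list is illustrative, not the groceries data:

```python
# Stdlib-only sketch of Apriori's frequent-itemset search.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support} for all itemsets meeting min_support."""
    n = len(transactions)
    frequent = {}
    size = 1
    candidates = list({frozenset([i]) for t in transactions for i in t})
    while candidates:
        level = {}
        for c in candidates:
            support = sum(c <= t for t in transactions) / n
            if support >= min_support:
                level[c] = support
        frequent.update(level)
        size += 1
        # candidate k-itemsets are unions of frequent (k-1)-itemsets
        candidates = [c for c in {a | b for a, b in combinations(level, 2)}
                      if len(c) == size]
    return frequent

transactions = [frozenset(t) for t in
                [{'milk', 'bread'}, {'milk', 'butter'},
                 {'bread', 'butter'}, {'milk', 'bread', 'butter'}]]
freq = frequent_itemsets(transactions, min_support=0.5)
print(freq[frozenset({'milk', 'bread'})])  # 0.5
```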
def inspect(results):
    lhs = [tuple(result[2][0][0])[0] for result in results]
    rhs = [tuple(result[2][0][1])[0] for result in results]
    supports = [result[1] for result in results]
    confidences = [result[2][0][2] for result in results]
    lifts = [result[2][0][3] for result in results]
    return list(zip(lhs, rhs, supports, confidences, lifts))

resultsinDataFrame = pd.DataFrame(inspect(results),
    columns = ['Left Hand Side', 'Right Hand Side', 'Support', 'Confidence', 'Lift'])
resultsinDataFrame
RESULT:
Thus, the Python program to implement Apriori algorithm was implemented
successfully and the output is verified.
Exp No : 05 ECLAT
AIM:
To Write a Python Program to implement Equivalence Class Transformation
algorithm.
ALGORITHM:
PROGRAM:
#importing required packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#data preprocessing
dataset = pd.read_csv('groceries.csv', on_bad_lines='skip')
# build the transaction list (one list of item strings per row)
transactions = []
for i in range(len(dataset)):
    transactions.append([str(item) for item in dataset.iloc[i].dropna()])
dataset
#dataset shape
dataset.shape
#creating model
results = list(rules)
def inspect(results):
    lhs = [tuple(result[2][0][0])[0] for result in results]
    rhs = [tuple(result[2][0][1])[0] for result in results]
    supports = [result[1] for result in results]
    return list(zip(lhs, rhs, supports))

resultsinDataFrame = pd.DataFrame(inspect(results),
    columns = ['Product 1', 'Product 2', 'Support'])
resultsinDataFrame
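What distinguishes ECLAT from Apriori is its vertical data layout: each item maps to the set of transaction ids (a tid-set) containing it, and the support of an itemset is the size of the intersection of its members' tid-sets. A stdlib-only sketch with illustrative transactions (not the groceries data):

```python
# ECLAT support via tid-set intersection -- stdlib-only sketch.
from itertools import combinations

transactions = [{'milk', 'bread'}, {'milk', 'butter'},
                {'bread', 'butter'}, {'milk', 'bread', 'butter'}]

# Vertical layout: item -> set of transaction ids containing it
tidsets = {}
for tid, t in enumerate(transactions):
    for item in t:
        tidsets.setdefault(item, set()).add(tid)

# Support of a pair = |tids(a) & tids(b)| / number of transactions
n = len(transactions)
pairs = {(a, b): len(tidsets[a] & tidsets[b]) / n
         for a, b in combinations(sorted(tidsets), 2)}
print(pairs[('bread', 'milk')])  # 0.5
```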
RESULT:
Thus, the Python program to implement E-Clat algorithm was implemented
successfully and the output is verified.
Exp No : 06 Naïve Bayes Ensemble
Date : 14.07.22
AIM:
To Write a Python Program to implement Naïve Bayes Ensemble
ALGORITHM:
1. Start the program
2. Import the necessary packages like numpy, matplotlib and seaborn
3. Import the dataset
4. Assign the independent and dependent variables
5. Split the datasets using train_test_split
PROGRAM:
#importing required packages
X = dataset.iloc[:, [0,1]].values
y = dataset.iloc[:, 2].values
# split the data into training and testing data
#Preprocessing
#Model-Naïve bayes
# initializing the NB
classifier = BernoulliNB()
#accuracy
#Ensemble
nb = GaussianNB()
model = AdaBoostClassifier(base_estimator=nb, n_estimators=10)
model.fit(X_train, y_train)
#Prediction
model.predict(X_test)
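The record's dataset is not shown; the same ensemble pipeline can be sketched end to end on a synthetic dataset. All data-generation parameters below are illustrative, and the base learner is passed positionally because the keyword name changed across scikit-learn versions (`base_estimator`, as in the record, became `estimator`):

```python
# Self-contained sketch of the Naive Bayes + AdaBoost ensemble
# above, on a synthetic two-feature dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Boost a Gaussian NB base learner, as in the record
model = AdaBoostClassifier(GaussianNB(), n_estimators=10)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```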
RESULT:
Thus, the Python program to implement Naïve Bayes Ensemble was implemented
successfully and the output is verified.
Exp No : 07 Random Forest
AIM:
To Write a Python Program to implement Random Forest algorithm.
ALGORITHM:
1. First, start with the selection of random samples from a given dataset.
2. Next, the algorithm constructs a decision tree for every sample and gets a prediction result from every decision tree.
3. In this step, voting is performed over the predicted results.
4. At last, select the most voted prediction result as the final prediction result.
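The four steps above can be sketched with the standard library alone, using one-split "stumps" as stand-ins for full decision trees; the toy data and every parameter below are illustrative:

```python
# Stdlib-only sketch of bootstrap sampling + majority voting.
import random
from collections import Counter

random.seed(0)
data = [(x, int(x > 5)) for x in range(10)]   # toy labelled data

def fit_stump(sample):
    """One bootstrap learner: the threshold with fewest errors."""
    best_t, best_err = 0, len(sample) + 1
    for t in range(11):
        err = sum((x > t) != bool(y) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def predict_forest(stumps, x):
    """Steps 3-4: every stump votes; the majority wins."""
    votes = [int(x > t) for t in stumps]
    return Counter(votes).most_common(1)[0][0]

# Steps 1-2: draw bootstrap samples, fit one stump per sample
stumps = [fit_stump(random.choices(data, k=len(data))) for _ in range(25)]
print(predict_forest(stumps, 8), predict_forest(stumps, 2))  # 1 0
```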
PROGRAM:
#importing dataset
import pandas as pd
features = pd.read_csv('temps.csv')
features.head(5)
#checking shape
features.shape
#describe function
features.describe()
#preprocessing data
features = pd.get_dummies(features)
features
import numpy as np
labels = np.array(features['actual'])
features= features.drop('actual', axis = 1)
feature_list = list(features.columns)
features = np.array(features)
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, test_size = 0.25, random_state = 42)
#baseline error
#creating model
from sklearn.ensemble import RandomForestRegressor
# hyperparameters are illustrative; regression is used since the
# 'actual' target is a numeric temperature
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(train_features, train_labels)
#prediction
predictions = rf.predict(test_features)
predictions
#accuracy
RESULT:
Thus, the Python program to implement Random Forest algorithm was implemented
successfully and the output is verified.
Exp No : 08 Ada Boost
AIM:
To Write a Python Program to implement Ada Boost algorithm.
ALGORITHM:
1. Creating the First Base Learner
2. Calculating the Total Error (TE)
3. Updating Weights
4. Creating a New Dataset
5. Final Predictions
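The weight-update rule behind steps 2 and 3 can be worked through directly: the learner's say is α = ½·ln((1 − TE)/TE), misclassified points are scaled by e^α, correct ones by e^(−α), and the weights are renormalised. A stdlib sketch with a toy learner that makes one mistake (values illustrative):

```python
# AdaBoost weight update -- stdlib-only sketch of steps 2-3.
from math import log, exp

weights = [1 / 5] * 5                        # step 1: uniform weights
correct = [True, True, False, True, True]    # toy learner: one mistake

# Step 2: total error = summed weight of misclassified points
te = sum(w for w, ok in zip(weights, correct) if not ok)
alpha = 0.5 * log((1 - te) / te)             # learner's say

# Step 3: up-weight mistakes, down-weight correct points, renormalise
weights = [w * exp(alpha if not ok else -alpha)
           for w, ok in zip(weights, correct)]
total = sum(weights)
weights = [w / total for w in weights]
print([round(w, 3) for w in weights])  # [0.125, 0.125, 0.5, 0.125, 0.125]
```

The misclassified point now carries half the total weight, so the next base learner is pushed to get it right.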
PROGRAM:
import pandas as pd
#import dataset
credit = pd.read_csv("CreditCardDefault.csv")
credit.drop(["ID"], axis=1, inplace=True)
credit
X = credit.iloc[:, 0:23]
y = credit.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
#creating model
from sklearn.ensemble import AdaBoostClassifier
# hyperparameters are illustrative
ada = AdaBoostClassifier(n_estimators = 50, random_state = 42)
ada.fit(X_train, y_train)
#prediction
ada_pred = ada.predict(X_test)
ada_pred[1]
#accuracy
RESULT:
Thus, the Python program to implement AdaBoost algorithm was implemented
successfully and the output is verified.
Exp No : 09 XG Boost
Date : 04.08.22
AIM:
To Write a Python Program to implement XG Boost algorithm.
ALGORITHM:
1. Load all the libraries
2. Load the dataset
3. Data Cleaning & Feature Engineering
4. Tune and Run the model
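XGBoost refines gradient boosting: step 4's model is built one learner at a time, each fitted to the residuals of the ensemble so far. A stdlib sketch of that additive idea, using the weakest possible learner (a constant) on a toy target; everything below is illustrative, not XGBoost's actual internals:

```python
# Boosting-on-residuals -- stdlib-only sketch of the additive idea.
data = [(x, 2.0 * x) for x in range(8)]   # toy target: y = 2x

def fit_mean(residuals):
    """Weakest possible learner: predict the mean residual."""
    return sum(residuals) / len(residuals)

preds = [0.0] * len(data)
lr = 0.5                                   # learning rate (shrinkage)
for _ in range(20):                        # boosting rounds
    residuals = [y - p for (x, y), p in zip(data, preds)]
    step = fit_mean(residuals)
    preds = [p + lr * step for p in preds]

# A constant learner can only recover the offset, so every
# prediction converges to the mean of y (here 7.0)
print(round(preds[0], 2))  # 7.0
```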
PROGRAM:
#import required packages
import pandas as pd
#import dataset
df=pd.read_csv("diabetes.csv")
df
X = df.iloc[:, 0:8]
y = df.iloc[:, 8]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
#scaler transform
#creating model
import xgboost
from xgboost import XGBClassifier
xgbt = XGBClassifier(max_depth = 2,
                     learning_rate = 0.2,
                     objective = "multi:softmax",
                     num_class = 2,
                     booster = "gbtree",
                     n_estimators = 10)
xgbt.fit(X_train, y_train)
#prediction
xgbt_pred = xgbt.predict(X_test)
#accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, xgbt_pred))
print(xgbt.score(X_train, y_train))
print(xgbt.score(X_test, y_test))
#plotting
RESULT:
Thus, the Python program to implement XG Boost algorithm was implemented
successfully and the output is verified.
Exp No : 10 Simple Neural Network
Date : 12.08.22
AIM:
To Write a Python Program to implement Simple Neural Network.
ALGORITHM:
1. Start the program
2. Import the necessary packages like numpy, matplotlib, keras and PCA
3. Import the dataset
4. Plot the graph
5. Split the datasets using train_test_split
6. Plot the numbers
7. Flatten the data
8. Import keras module to define the Neural Network
9. Create the model and display the summary
10. Fit the model with training datasets
11. Plot the history
12. Compute the accuracy of the model
13. Predict the output
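The network of step 9 (784 → 64 → 32 → 10, ReLU then softmax) is, at inference time, just three matrix multiplications with activations. A NumPy sketch of one forward pass; the weights are random stand-ins, not trained values:

```python
# One forward pass through the Dense/softmax stack of step 9.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 784))                 # one flattened 28x28 "image"

def dense(x, w, b, activation):
    z = x @ w + b
    if activation == 'relu':
        return np.maximum(z, 0)
    # numerically stable softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Random stand-in weights with the model's layer shapes
w1, b1 = rng.standard_normal((784, 64)) * 0.05, np.zeros(64)
w2, b2 = rng.standard_normal((64, 32)) * 0.05, np.zeros(32)
w3, b3 = rng.standard_normal((32, 10)) * 0.05, np.zeros(10)

h = dense(dense(dense(x, w1, b1, 'relu'), w2, b2, 'relu'),
          w3, b3, 'softmax')
print(h.shape, round(float(h.sum()), 3))  # (1, 10) 1.0
```

The softmax row sums to 1, so each output is a probability distribution over the ten digit classes.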
PROGRAM:
#import required packages
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow import keras
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
#import dataset
iris = load_iris()
df = pd.DataFrame(iris.data, columns = iris.feature_names)
df['class'] = iris.target_names[iris.target]
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
# Normalize
X_train = X_train / 255
X_test = X_test / 255
#Plotting
plt.figure(figsize=(10, 3))
for i in range(30):
    plt.subplot(3, 10, i + 1)
    plt.imshow(X_train[i], cmap='gray')
    plt.axis('off')
plt.show()
# To flatten data
X_train_flat = X_train.reshape((X_train.shape[0], 28*28))
X_test_flat = X_test.reshape((X_test.shape[0], 28*28))
y_train_one_hot = np.eye(10)[y_train]
y_test_one_hot = np.eye(10)[y_test]
import tensorflow.keras as keras
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
#Model
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(64, activation="relu"))
model.add(keras.layers.Dense(32, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
model.summary()
history = model.fit(X_train, y_train, epochs=20, validation_split=0.2)
pd.DataFrame(history.history).plot()
# Accuracy
pred = model.predict(X_test).argmax(axis=1)
print('Accuracy on test set - {0:.02%}'.format((pred == y_test).mean()))
# Prediction
plt.figure(figsize=(10, 10))
for label in range(10):
    for i in range(10):
        plt.subplot(10, 10, label * 10 + i + 1)
        plt.imshow(X_test[pred == label][i], cmap='gray')
        plt.axis('off')
plt.show()
RESULT:
Thus, the Python program to implement Simple neural network was implemented
successfully and the output is verified.