Aml Record b2
1. 13.06.22 ID3
2. 20.06.22 C 4.5
4. 04.07.22 APRIORI
9. 01.08.22 XG BOOST
Exp No : 01 ID3
Date : 13.06.22
AIM:
To Write a Python Program to Implement the Iterative Dichotomiser 3 (ID3) algorithm.
ALGORITHM:
3. Import dataset
6. Entropy:
7. Information Gain:
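The Entropy and Information Gain steps above can be sketched in plain Python. The class counts below are a hypothetical example, not the golf.txt data.

```python
# Entropy and Information Gain on a hypothetical 14-sample split.
from collections import Counter
from math import log2

def entropy(labels):
    # H(S) = -sum(p * log2(p)) over the class proportions p
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    # Gain = H(parent) - weighted average of the subset entropies
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

parent = ['yes'] * 9 + ['no'] * 5
split = [['yes'] * 6 + ['no'] * 1, ['yes'] * 3 + ['no'] * 4]
print(round(entropy(parent), 3))           # ≈ 0.94
print(round(information_gain(parent, split), 3))
```

ID3 evaluates this gain for every attribute and splits on the one with the largest value.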
Program:
#import required packages
import pandas as pd
#import dataset
df = pd.read_csv("golf.txt")
df
#creating model
from chefboost import Chefboost as chef
config = {'algorithm': 'ID3'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, param = df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement the ID3 algorithm was executed successfully
and the output was verified.
Exp No : 02 C 4.5
Date : 20.06.22
AIM:
To Write a Python Program to Implement the C 4.5 algorithm.
ALGORITHM:
2. Import Pandas
7. Gain ratio:
8. Gain:
9. Split info:
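The Gain, Split info, and Gain ratio quantities listed above can be sketched as follows. The three-way split below is a hypothetical example, not the golf.txt data.

```python
# Gain Ratio = Information Gain / Split Info, on a hypothetical 3-way split.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(parent, subsets):
    n = len(parent)
    gain = entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)
    # Split Info penalises attributes that create many small branches
    split_info = -sum((len(s) / n) * log2(len(s) / n) for s in subsets)
    return gain / split_info

parent = ['yes'] * 9 + ['no'] * 5
subsets = [['yes'] * 2 + ['no'] * 3, ['yes'] * 4, ['yes'] * 3 + ['no'] * 2]
print(round(gain_ratio(parent, subsets), 3))
```

Dividing by Split Info is what distinguishes C4.5 from ID3: it avoids favouring attributes with many distinct values.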
Program:
#import required packages
import pandas as pd
#import dataset
df = pd.read_csv("golf.txt")
df
#creating model
from chefboost import Chefboost as chef
config = {'algorithm': 'C4.5'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, param = df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement the C 4.5 algorithm was executed successfully
and the output was verified.
AIM:
To Write a Python Program to Implement the CART algorithm.
ALGORITHM:
2. Import Pandas
Gini index:
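The Gini index above can be sketched directly. The class counts are a hypothetical example, not the mammals.csv data.

```python
# Gini index: 1 - sum(p^2) over the class proportions p.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

labels = ['mammal'] * 7 + ['non-mammal'] * 3
print(round(gini(labels), 2))   # 1 - (0.49 + 0.09) = 0.42
```

A pure node (one class) has Gini 0; a 50/50 two-class node has the maximum value of 0.5. CART splits on the attribute that most reduces this impurity.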
Program:
#import required packages
import pandas as pd
#import dataset
df = pd.read_csv("mammals.csv")
df
#creating model
from chefboost import Chefboost as chef
config = {'algorithm': 'CART'}
model = chef.fit(df, config = config, target_label = 'Decision')
df
#prediction
prediction = chef.predict(model, param = df.iloc[0])
prediction
RESULT:
Thus, the Python program to implement the CART algorithm was executed
successfully and the output was verified.
Exp No : 04 APRIORI
Date : 04.07.22
AIM:
To Write a Python Program to Implement the Apriori algorithm.
ALGORITHM:
1. Set minimum values for support, confidence and lift.
2. Generate the frequent itemsets whose support is above the minimum support.
3. Generate association rules from the frequent itemsets that satisfy the minimum confidence.
4. Sort the rules by descending lift.
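The three rule measures used below (Support, Confidence, Lift) can be sketched on a hypothetical five-transaction basket, not the groceries.csv data.

```python
# Support, Confidence and Lift for the rule {milk} -> {bread}.
transactions = [
    {'milk', 'bread'},
    {'milk', 'butter'},
    {'milk', 'bread', 'butter'},
    {'bread'},
    {'milk', 'bread'},
]
n = len(transactions)

def support(itemset):
    # fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / n

sup = support({'milk', 'bread'})    # P(milk and bread)
conf = sup / support({'milk'})      # P(bread | milk)
lift = conf / support({'bread'})    # confidence relative to bread's base rate
print(sup, round(conf, 2), round(lift, 2))
```

A lift above 1 means the two items occur together more often than chance; Apriori prunes candidate itemsets by generating only those whose subsets are already frequent.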
PROGRAM:
#importing required packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#data preprocessing
dataset = pd.read_csv('groceries.csv', on_bad_lines='skip')
transactions = []
for i in range(len(dataset)):
    transactions.append([str(dataset.values[i, j]) for j in range(dataset.shape[1])])
dataset
#dataset shape
dataset.shape
#creating model
from apyori import apriori
# the threshold values below are example settings
rules = apriori(transactions = transactions, min_support = 0.003, min_confidence = 0.2, min_lift = 3, min_length = 2)
results = list(rules)
def inspect(results):
    lhs = [tuple(result[2][0][0])[0] for result in results]
    rhs = [tuple(result[2][0][1])[0] for result in results]
    supports = [result[1] for result in results]
    confidences = [result[2][0][2] for result in results]
    lifts = [result[2][0][3] for result in results]
    return list(zip(lhs, rhs, supports, confidences, lifts))
resultsinDataFrame = pd.DataFrame(inspect(results), columns = ['Left Hand Side', 'Right Hand Side', 'Support', 'Confidence', 'Lift'])
resultsinDataFrame
RESULT:
Thus, the Python program to implement the Apriori algorithm was executed
successfully and the output was verified.
AIM:
To Write a Python Program to Implement the Equivalence Class Transformation
(ECLAT) algorithm.
ALGORITHM:
1. Represent each item by the set of transaction ids (tid-list) in which it occurs.
2. Compute the support of an itemset as the size of the intersection of its tid-lists.
3. Keep the itemsets whose support is above the minimum support.
4. Sort the resulting item pairs by descending support.
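The ECLAT idea, a vertical layout where each item maps to its set of transaction ids and support comes from intersecting those sets, can be sketched on hypothetical data:

```python
# ECLAT support counting via tid-list intersection.
from itertools import combinations

tidlists = {
    'milk':   {0, 1, 2, 4},
    'bread':  {0, 2, 3, 4},
    'butter': {1, 2},
}
n_transactions = 5
min_support = 0.4

frequent_pairs = []
for a, b in combinations(sorted(tidlists), 2):
    # support of {a, b} = |tids(a) ∩ tids(b)| / number of transactions
    sup = len(tidlists[a] & tidlists[b]) / n_transactions
    if sup >= min_support:
        frequent_pairs.append((a, b, sup))
print(frequent_pairs)
```

Unlike Apriori's repeated horizontal scans, this counts support with a single set intersection per candidate.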
PROGRAM:
#importing required packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#data preprocessing
dataset = pd.read_csv('groceries.csv', on_bad_lines='skip')
transactions = []
for i in range(len(dataset)):
    transactions.append([str(dataset.values[i, j]) for j in range(dataset.shape[1])])
dataset
#dataset shape
dataset.shape
#creating model
from apyori import apriori
# example thresholds; max_length = 2 restricts the rules to item pairs
rules = apriori(transactions = transactions, min_support = 0.003, min_confidence = 0.2, min_lift = 3, min_length = 2, max_length = 2)
results = list(rules)
def inspect(results):
    lhs = [tuple(result[2][0][0])[0] for result in results]
    rhs = [tuple(result[2][0][1])[0] for result in results]
    supports = [result[1] for result in results]
    return list(zip(lhs, rhs, supports))
resultsinDataFrame = pd.DataFrame(inspect(results), columns = ['Product 1', 'Product 2', 'Support'])
resultsinDataFrame
RESULT:
Thus, the Python program to implement the ECLAT algorithm was executed
successfully and the output was verified.
AIM:
To Write a Python Program to Implement the Naïve Bayes Ensemble.
ALGORITHM:
1. Start the program
2. Import the necessary packages like numpy, matplotlib and seaborn
3. Import the dataset
4. Assign the independent and dependent variables
5. Split the datasets using train_test_split
6. Preprocess the data using the fit_transform function
7. Import the module to perform Naive Bayes using the command
from sklearn.naive_bayes import BernoulliNB
8. Fit the model with the training datasets
9. Predict the output
10. Compute the accuracy of the model
11. Perform ensemble by importing AdaBoost using the command
from sklearn.ensemble import AdaBoostClassifier
and Naive Bayes using GaussianNB()
12. Assign the model to a variable nb and pass it to the AdaBoostClassifier model
13. Fit the model with the training datasets
14. Predict the output
PROGRAM:
#importing required packages
import numpy as np
import pandas as pd
#importing Dataset
dataset = pd.read_csv('NaiveBayes.csv')
X = dataset.iloc[:, [0,1]].values
y = dataset.iloc[:, 2].values
# split the data into training and testing data
from sklearn.model_selection import train_test_split
# test_size and random_state are example values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
#Preprocessing
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#Model - Naïve Bayes
from sklearn.naive_bayes import BernoulliNB
# initializing the NB
classifier = BernoulliNB()
classifier.fit(X_train, y_train)
#accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, classifier.predict(X_test)))
#Ensemble
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
nb = GaussianNB()
model = AdaBoostClassifier(base_estimator=nb, n_estimators=10)
model.fit(X_train, y_train)
#Prediction
model.predict(X_test)
RESULT:
Thus, the Python program to implement the Naïve Bayes Ensemble was executed
successfully and the output was verified.
AIM:
To Write a Python Program to Implement the Random Forest algorithm.
ALGORITHM:
1. First, start with the selection of random samples from a given dataset.
2. Next, this algorithm will construct a decision tree for every sample. Then it will get the prediction result from every decision tree.
3. In this step, voting will be performed for every predicted result.
4. At last, select the most voted prediction result as the final prediction result.
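The four steps above can be sketched with a toy ensemble of depth-1 "trees" (threshold stumps); the 1-D data, number of trees, and query point are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical 1-D data: class 1 values are centred higher than class 0
X = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])

def fit_stump(Xs, ys):
    # step 2 stand-in: a depth-1 tree = the threshold with best accuracy
    return max(np.unique(Xs), key=lambda t: np.mean((Xs > t) == ys))

stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))     # step 1: bootstrap sample
    stumps.append(fit_stump(X[idx], y[idx]))  # step 2: one tree per sample

x_new = 2.5
votes = [int(x_new > t) for t in stumps]      # step 3: every tree votes
prediction = int(np.mean(votes) > 0.5)        # step 4: majority vote wins
print(prediction)
```

The bootstrap resampling decorrelates the trees, which is what makes the majority vote more stable than any single tree.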
PROGRAM:
#importing dataset
import pandas as pd
features = pd.read_csv('temps.csv')
features.head(5)
#checking shape
features.shape
#describe function
features.describe()
#preprocessing data
features = pd.get_dummies(features)
features
import numpy as np
labels = np.array(features['actual'])
features= features.drop('actual', axis = 1)
feature_list = list(features.columns)
features = np.array(features)
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size = 0.25, random_state = 42)
#baseline error
#creating model
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(train_features, train_labels)
#prediction
predictions = rf.predict(test_features)
predictions
predictions = rf.predict(test_features)
errors = abs(predictions - test_labels)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
#accuracy
mape = 100 * (errors / test_labels)
print('Accuracy:', round(100 - np.mean(mape), 2), '%.')
RESULT:
Thus, the Python program to implement the Random Forest algorithm was executed
successfully and the output was verified.
AIM:
To Write a Python Program to Implement the AdaBoost algorithm.
ALGORITHM:
1. Creating the First Base Learner
2. Calculating the Total Error (TE)
3. Updating Weights
4. Creating a New Dataset
5. Final Predictions
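Steps 2 and 3 above (Total Error and the weight update) can be worked through numerically. The base learner here is hypothetical and simply assumed to misclassify samples 3 and 7 out of 10.

```python
import numpy as np

n = 10
weights = np.full(n, 1 / n)            # equal initial sample weights
miss = np.zeros(n, dtype=bool)
miss[[3, 7]] = True                    # assumed misclassified samples

TE = weights[miss].sum()               # step 2: Total Error = 0.2
alpha = 0.5 * np.log((1 - TE) / TE)    # performance (stage) weight

# step 3: up-weight errors, down-weight correct samples, then normalise
weights = weights * np.exp(np.where(miss, alpha, -alpha))
weights = weights / weights.sum()
print(round(float(TE), 2), weights.round(4))
```

The two misclassified samples end up with weight 0.25 each, so the next base learner (step 4) concentrates on exactly the points the previous one got wrong.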
PROGRAM:
import pandas as pd
#import dataset
credit = pd.read_csv("CreditCardDefault.csv")
credit.drop(["ID"], axis=1, inplace=True)
credit
X = credit.iloc[:, 0:23]
y = credit.iloc[:, -1]
# example split parameters
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
#creating model
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators = 50, random_state = 0)
ada.fit(X_train, y_train)
#prediction
ada_pred = ada.predict(X_test)
ada_pred[1]
#accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, ada_pred))
RESULT:
Thus, the Python program to implement the AdaBoost algorithm was executed
successfully and the output was verified.
Exp No : 09 XG Boost
Date : 01.08.22
AIM:
To Write a Python Program to Implement the XG Boost algorithm.
ALGORITHM:
1. Load all the libraries
2. Load the dataset
3. Data Cleaning & Feature Engineering
4. Tune and Run the model
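XG Boost is a gradient-boosting method: each new tree fits the residual errors of the ensemble built so far. A minimal residual-fitting sketch with depth-1 regression stumps (squared loss only, without XGBoost's regularisation; the data and hyperparameters are illustrative):

```python
import numpy as np

X = np.arange(10, dtype=float)
y = 2 * X + 1                            # hypothetical target

def fit_stump(X, r):
    # depth-1 regression tree: the split minimising squared error,
    # predicting the mean residual on each side
    best = None
    for s in X[1:]:
        left, right = r[X < s].mean(), r[X >= s].mean()
        err = ((r - np.where(X < s, left, right)) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left, right)
    return best[1:]

pred = np.zeros_like(y)
learning_rate = 0.5                      # example value
for _ in range(50):
    residual = y - pred                  # fit the next stump to residuals
    s, left, right = fit_stump(X, residual)
    pred = pred + learning_rate * np.where(X < s, left, right)

print(float(np.abs(y - pred).mean()))    # training error shrinks toward 0
```

XGBoost adds second-order gradient information, shrinkage, and tree regularisation on top of this basic residual-fitting loop.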
PROGRAM:
#import required packages
import pandas as pd
#import dataset
df=pd.read_csv("diabetes.csv")
df
X = df.iloc[:, 0:8]
y = df.iloc[:, 8]
# example split parameters
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 123)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
#scaler transform
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
#creating model
import xgboost
from xgboost import XGBClassifier
xgbt = XGBClassifier(max_depth = 2,
                     learning_rate = 0.2,
                     objective = "multi:softmax",
                     num_class = 2,
                     booster = "gbtree",
                     n_estimators = 10,
                     random_state = 123)
model=xgbt.fit(X_train, y_train)
model
#prediction
xgbt_pred = xgbt.predict(X_test)
#accuracy
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, xgbt_pred))
print(xgbt.score(X_train, y_train))
print(xgbt.score(X_test, y_test))
#plotting
import matplotlib.pyplot as plt
from xgboost import plot_importance
plot_importance(xgbt)
plt.show()
RESULT:
Thus, the Python program to implement the XG Boost algorithm was executed
successfully and the output was verified.
AIM:
To Write a Python Program to Implement a Simple Neural Network.
ALGORITHM:
1. Start the program
2. Import the necessary packages like numpy, matplotlib, keras and PCA
3. Import the dataset
4. Plot the graph
5. Split the datasets using train_test_split
6. Plot the numbers
7. Flatten the data
8. Import the keras module to define the Neural Network
9. Create the model and display the summary
10. Fit the model with the training datasets
11. Plot the history
12. Compute the accuracy of the model
13. Predict the output
PROGRAM:
#import required packages
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow import keras
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
#import dataset
# the rest of the program works on 28x28 digit images, so load MNIST via keras
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
# Normalize pixel values to the range [0, 1]
X_train = X_train / 255
X_test = X_test / 255
#Plotting
plt.figure(figsize=(10, 3))
for i in range(30):
    plt.subplot(3, 10, i + 1)
    plt.imshow(X_train[i], cmap='gray')
    plt.axis('off')
plt.show()
# One-hot encode the labels (kept for reference; the model below uses
# sparse_categorical_crossentropy, which takes the integer labels directly)
y_train_one_hot = np.eye(10)[y_train]
y_test_one_hot = np.eye(10)[y_test]
#Model
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(64, activation="relu"))
model.add(keras.layers.Dense(32, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
model.summary()
# Fit the model (the epoch count is an example value)
loss = model.fit(X_train, y_train, epochs = 10, validation_data = (X_test, y_test))
# Plot the training history
pd.DataFrame(loss.history).plot()
# Accuracy
pred = model.predict(X_test).argmax(axis=1)
print('Accuracy on test set - {0:.02%}'.format((pred == y_test).mean()))
# Prediction
plt.figure(figsize=(10, 10))
for label in range(10):
    for i in range(10):
        plt.subplot(10, 10, label * 10 + i + 1)
        plt.imshow(X_test[pred == label][i], cmap='gray')
        plt.axis('off')
plt.show()
RESULT:
Thus, the Python program to implement a Simple Neural Network was executed
successfully and the output was verified.