
Practical-1

Program- Upload the Data set

Manually loading a file: This is the most basic and least recommended way to load
data, as it requires a lot of code just to read the file into a usable structure such as a
DataFrame. It is mainly needed when the dataset has no particular pattern or format
that a standard parser can recognize.
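As an illustration, here is a minimal sketch of manual loading, assuming a comma-separated file
named load_dataset_blog.csv with a header row and purely numeric columns (the file name is
borrowed from the later examples; any delimited text file would work the same way):

import csv

rows = []
with open("load_dataset_blog.csv") as f:
    reader = csv.reader(f)      # split each line on commas
    header = next(reader)       # first line holds the column names
    for line in reader:
        # every value has to be converted by hand, which is why this approach is tedious
        rows.append([float(value) for value in line])
print(header)
print(rows[:5])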

np.loadtxt: One of the NumPy methods for loading data. Unlike the manual way of
reading the dataset, it only works when the data follows a specific, recognizable
pattern. This method is widely used for simple data arrays that require very minimal
formatting, i.e., simple values where no special handling is necessary.

Syntax- var = np.loadtxt(filename, skiprows, delimiter)

 filename gives the path of the dataset
 skiprows decides how many leading rows (for example, the column headers)
should be skipped
 delimiter specifies how the values are separated

np.genfromtxt: This is another NumPy way to read the data, but it is more capable than
np.loadtxt(): it recognizes the column headers on its own, which the previous method
cannot do. Along with that, it can also detect the right data type for each column.

Syntax - var = np.genfromtxt(filename, delimiter, names, dtype)

 filename is used to access the path of the dataset
 delimiter specifies how the values are separated
 names=True tells NumPy to read the column names from the header row
 dtype=None lets NumPy auto-detect the type of each column

Using pd.read_csv: This is the most recommended and widely used method for
reading, writing, and manipulating a dataset. It works with CSV-formatted (delimited)
data, and its support for a large number of parameters makes it a gold mine for data
analysts working with many different sorts of data, as long as the data follows a
parseable format.
Code
1- np.loadtxt()
import numpy as np
file_dir = "load_dataset_blog.csv"
d1 = np.loadtxt(file_dir, skiprows=1, delimiter=",")  # skip the header row, comma-separated
print(d1.dtype)
print(d1[:5, :])
Output-

2- np.genfromtxt()
import numpy as np
file_dir = "load_dataset_blog.csv"
d2 = np.genfromtxt(file_dir, delimiter=",", names=True, dtype=None)
print(d2.dtype)
print(d2[:5])

Output

3- pd.read_csv()
import pandas as pd
file_dir = "load_dataset_blog.csv"
d3 = pd.read_csv(file_dir)
print(d3.dtypes)
d3.head()
Output
Practical 2
Program- Implementation of Machine Learning Model
Machine learning is the process of making systems that learn and improve by
themselves, without being explicitly programmed.
The ultimate goal of machine learning is to design algorithms that automatically
help a system gather data and use that data to learn more. Systems are expected
to look for patterns in the data collected and use them to make vital decisions for
themselves.

Steps to implement a machine learning model are as follows:

1. Collecting Data:
Machines initially learn from the data that we give them. It is of the utmost
importance to collect reliable data so that our machine learning model can find
the correct patterns. The quality of the data that we feed to the machine will
determine how accurate our model is. If we have incorrect or outdated data, we
will have wrong outcomes or predictions which are not relevant.

2. Preparing the Data:


After we have our data, we have to prepare it. We can do this by:

 Putting together all the data we have and randomizing it. This helps make sure
that data is evenly distributed, and the ordering does not affect the learning
process.
 Cleaning the data to remove unwanted data, missing values, rows, and
columns, duplicate values, data type conversion, etc. we might even have to
restructure the dataset and change the rows and columns or index of rows and
columns.
 Visualize the data to understand how it is structured and understand the
relationship between various variables and classes present.
 Splitting the cleaned data into two sets: a training set and a testing set. The
training set is the set the model learns from, and the testing set is used to check the
accuracy of the model after training (a minimal split is sketched just after this list).
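
As an illustration of the splitting step, here is a minimal sketch using scikit-learn's
train_test_split on the Iris data (the dataset and the 80/20 ratio are only assumptions
chosen for the example):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# hold out 20% of the rows as the testing set; shuffling ensures ordering does not matter
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)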

3. Choosing a Model:
A machine learning model determines the output we get after running a machine
learning algorithm on the collected data. It is important to choose a model which
is relevant to the task at hand. Over the years, scientists and engineers developed
various models suited for different tasks like speech recognition, image
recognition, prediction, etc. Apart from this, we also have to see if our model is
suited for numerical or categorical data and choose accordingly.

4. Training the Model:


Training is the most important step in machine learning. In training, we pass the
prepared data to our machine learning model so that it can find patterns and make
predictions. As a result, the model learns from the data and can then accomplish the
task it was set. Over time, with training, the model gets better at predicting.

5. Evaluating the Model:


After training the model, we have to check how it is performing. This is done by
testing the performance of the model on previously unseen data, namely the testing
set that we split off earlier. If testing were done on the same data used for training,
we would not get an accurate measure: the model is already used to that data and
finds the same patterns in it that it did before, which gives a disproportionately high
accuracy. When the model is evaluated on the testing data instead, we get an accurate
measure of how it will perform, and of its speed.
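
To make the evaluation step concrete, here is a small sketch that trains a classifier on the
training split and scores it on the held-out testing split (the SVC model and the X_train/X_test
variables from the previous sketch are assumptions used only for illustration):

from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

model = SVC(kernel='linear')
model.fit(X_train, y_train)            # learn only from the training set
y_pred = model.predict(X_test)         # predict on data the model has never seen
print("test accuracy:", accuracy_score(y_test, y_pred))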

6. Parameter Tuning:
Once we have created and evaluated the model, we check whether its accuracy can be
improved in any way. This is done by tuning the parameters present in the model.
These tunable values (often called hyperparameters) are the variables whose values the
programmer generally decides. At particular values of these parameters, the accuracy
will be at its maximum. Parameter tuning refers to finding those values.
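
One common way to carry out this search is a grid search with cross-validation. The following
minimal sketch (the parameter grid for an SVC, and the X_train/y_train split from the earlier
sketch, are only assumptions for the example) tries several candidate values and reports the best:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}  # candidate values to try
search = GridSearchCV(SVC(), param_grid, cv=5)                 # 5-fold cross-validation
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("best cross-validated accuracy:", search.best_score_)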

7. Making Predictions
In the end, you can use your model on unseen data to make predictions accurately.

Implementation of SVM(Support Vector Machine) Model

“Support Vector Machine” (SVM) is a supervised machine learning algorithm that can
be used for both classification and regression challenges. However, it is mostly used in
classification problems, such as text classification. In the SVM algorithm, we plot each
data item as a point in n-dimensional space (where n is the number of features), with
the value of each feature being the value of a particular coordinate.
Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
C = 1.0  # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, y)  # gamma is not used by the linear kernel
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = (x_max / x_min) / 100  # step size of the plotting mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
plt.subplot(1, 1, 1)
Z = svc.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with linear kernel')
plt.show()
Output
Practical 3
Program- Implementation of Perceptron Model.
Perceptron- The perceptron is one of the simplest artificial neural network architectures.
It was introduced by Frank Rosenblatt in 1957. It is the simplest type of feedforward
neural network, consisting of a single layer of input nodes that are fully connected to
a layer of output nodes. It can learn only linearly separable patterns.

The basic components of a perceptron include input values or features, weights
associated with each input, a summation function, an activation function, a bias term,
and an output. These elements collectively enable the perceptron to learn and make
binary classifications in machine learning tasks.
Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import joblib
from matplotlib.colors import ListedColormap
plt.style.use("fivethirtyeight")

class Perceptron:
    def __init__(self, eta, epochs):
        self.weights = np.random.randn(3) * 1e-4  # small random initial weights
        print(f"initial weights before training: \n{self.weights}")
        self.eta = eta          # learning rate
        self.epochs = epochs

    def activationFunction(self, inputs, weights):
        z = np.dot(inputs, weights)
        return np.where(z > 0, 1, 0)  # unit-step activation

    def fit(self, X, y):
        self.X = X
        self.y = y
        X_with_bias = np.c_[self.X, -np.ones((len(self.X), 1))]  # append the bias input
        print(f"X with bias: \n{X_with_bias}")
        for epoch in range(self.epochs):
            print("--" * 10)
            print(f"for epoch: {epoch}")
            print("--" * 10)
            y_hat = self.activationFunction(X_with_bias, self.weights)
            print(f"predicted value after forward pass: \n{y_hat}")
            self.error = self.y - y_hat
            print(f"error: \n{self.error}")
            # perceptron learning rule: move the weights in the direction of the error
            self.weights = self.weights + self.eta * np.dot(X_with_bias.T, self.error)
            print(f"updated weights after epoch:\n{epoch}/{self.epochs} : \n{self.weights}")
            print("#####" * 10)

    def predict(self, X):
        X_with_bias = np.c_[X, -np.ones((len(X), 1))]
        return self.activationFunction(X_with_bias, self.weights)

    def total_loss(self):
        total_loss = np.sum(self.error)
        print(f"total loss: {total_loss}")
        return total_loss

def prepare_data(df):
    X = df.drop("y", axis=1)
    y = df["y"]
    return X, y

AND = {
    "x1": [0, 0, 1, 1],
    "x2": [0, 1, 0, 1],
    "y": [0, 0, 0, 1],
}
df = pd.DataFrame(AND)
X, y = prepare_data(df)
ETA = 0.3  # learning rate, between 0 and 1
EPOCHS = 10
model = Perceptron(eta=ETA, epochs=EPOCHS)
model.fit(X, y)
model.total_loss()
Output
initial weights before training:
[ 1.91234729e-06  4.87681676e-05 -2.10112179e-05]
X with bias:
[[ 0.  0. -1.]
 [ 0.  1. -1.]
 [ 1.  0. -1.]
 [ 1.  1. -1.]]
--------------------
for epoch: 0
--------------------
predicted value after forward pass:
[1 1 1 1]
error:
0   -1
1   -1
2   -1
3    0
Name: y, dtype: int64
updated weights after epoch:
0/10 :
[-0.29999809 -0.29995123  0.89997899]
##################################################
--------------------
for epoch: 1
--------------------
predicted value after forward pass:
[0 0 0 0]
error:
0    0
1    0
2    0
3    1
Name: y, dtype: int64
updated weights after epoch:
1/10 :
[1.91234729e-06 4.87681676e-05 5.99978989e-01]
##################################################
--------------------
for epoch: 2
--------------------
predicted value after forward pass:
[0 0 0 0]
error:
0    0
1    0
2    0
3    1
Name: y, dtype: int64
updated weights after epoch:
2/10 :
[0.30000191 0.30004877 0.29997899]
##################################################
--------------------
for epoch: 3
--------------------
predicted value after forward pass:
[0 1 1 1]
error:
0    0
1   -1
2   -1
3    0
Name: y, dtype: int64
updated weights after epoch:
3/10 :
[1.91234729e-06 4.87681676e-05 8.99978989e-01]
##################################################
--------------------
for epoch: 4
--------------------
predicted value after forward pass:
[0 0 0 0]
error:
0    0
1    0
2    0
3    1
Name: y, dtype: int64
updated weights after epoch:
4/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
--------------------
for epoch: 5
--------------------
predicted value after forward pass:
[0 0 0 1]
error:
0    0
1    0
2    0
3    0
Name: y, dtype: int64
updated weights after epoch:
5/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
--------------------
for epoch: 6
--------------------
predicted value after forward pass:
[0 0 0 1]
error:
0    0
1    0
2    0
3    0
Name: y, dtype: int64
updated weights after epoch:
6/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
--------------------
for epoch: 7
--------------------
predicted value after forward pass:
[0 0 0 1]
error:
0    0
1    0
2    0
3    0
Name: y, dtype: int64
updated weights after epoch:
7/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
--------------------
for epoch: 8
--------------------
predicted value after forward pass:
[0 0 0 1]
error:
0    0
1    0
2    0
3    0
Name: y, dtype: int64
updated weights after epoch:
8/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
--------------------
for epoch: 9
--------------------
predicted value after forward pass:
[0 0 0 1]
error:
0    0
1    0
2    0
3    0
Name: y, dtype: int64
updated weights after epoch:
9/10 :
[0.30000191 0.30004877 0.59997899]
##################################################
total loss: 0
Practical 4
Program- AND Gate classification using Perceptron Model
import numpy as np

def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

def AND_logicFunction(x):
    w = np.array([1, 1])
    b = -1.5
    return perceptronModel(x, w, b)

test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("AND({}, {}) = {}".format(0, 1, AND_logicFunction(test1)))
print("AND({}, {}) = {}".format(1, 1, AND_logicFunction(test2)))
print("AND({}, {}) = {}".format(0, 0, AND_logicFunction(test3)))
print("AND({}, {}) = {}".format(1, 0, AND_logicFunction(test4)))

Output
Practical 5
Program- OR Gate classification using Perceptron Model
import numpy as np

def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

def OR_logicFunction(x):
    w = np.array([1, 1])
    b = -0.5
    return perceptronModel(x, w, b)

test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("OR({}, {}) = {}".format(0, 1, OR_logicFunction(test1)))
print("OR({}, {}) = {}".format(1, 1, OR_logicFunction(test2)))
print("OR({}, {}) = {}".format(0, 0, OR_logicFunction(test3)))
print("OR({}, {}) = {}".format(1, 0, OR_logicFunction(test4)))

Output
Practical 6
Program- Implementation of ANN(Artificial Neural Network) model
ANN- The term "Artificial neural network" refers to a biologically inspired sub-field of
artificial intelligence modeled after the brain. An artificial neural network is a
computational network, based on biological neural networks, that mimics the structure
of the human brain. Just as the human brain has neurons interconnected with each other,
artificial neural networks also have neurons that are linked to each other in the various
layers of the network. These neurons are known as nodes.
● Neurons (Artificial Neurons): In an ANN, artificial neurons, also known as nodes or
units, are the basic building blocks. They take in multiple inputs, compute a weighted
sum of these inputs, apply an activation function, and produce an output (a single-neuron
computation of this kind is sketched after this list). Neurons are organized into layers,
including an input layer, one or more hidden layers, and an output layer.
● Weights and Biases: Each connection between neurons is associated with a weight,
which represents the strength of the connection. A bias term is also associated with each
neuron, allowing it to adjust its output. Training the ANN involves adjusting these
weights and biases to minimize the error in the predictions.
● Activation Function: The activation function determines the output of a neuron based
on the weighted sum of its inputs. Common activation functions include the sigmoid
function, the hyperbolic tangent (tanh), and the rectified linear unit (ReLU). Activation
functions introduce non-linearity into the model, enabling ANNs to capture complex
patterns in data.
● Backpropagation: Backpropagation is the training process for ANNs. It involves
calculating the error between the predicted output and the actual target, propagating this
error backward through the network, and using gradient descent optimization
techniques to update the weights and biases in a way that reduces the error.
● Loss Function: The loss function measures the error between the predicted output and
the true target. The goal during training is to minimize this loss function by adjusting
the network's weights and biases.
● Generalization: After training, an ANN is expected to make accurate predictions on
new, unseen data, which is known as generalization. Avoiding overfitting (where the
model performs well on the training data but poorly on new data) is a crucial
consideration in ANN design.
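To make the neuron computation described above concrete, here is a tiny sketch of a single
artificial neuron with a sigmoid activation (the input values, weights, and bias are arbitrary
numbers chosen only for illustration):

import numpy as np

inputs = np.array([0.5, 0.3, 0.2])    # example input features (arbitrary values)
weights = np.array([0.4, -0.6, 0.9])  # one weight per input connection
bias = 0.1

z = np.dot(weights, inputs) + bias    # weighted sum of the inputs plus the bias term
output = 1 / (1 + np.exp(-z))         # sigmoid activation squashes z into (0, 1)
print(output)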
Code
import numpy as np

class NeuralNet(object):
    def __init__(self):
        np.random.seed(1)
        # single layer: 3 inputs -> 1 output, weights initialized in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def __sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    def train(self, inputs, outputs, training_iterations):
        for iteration in range(training_iterations):
            output = self.learn(inputs)
            error = outputs - output
            # adjust weights in proportion to the error and the sigmoid gradient
            factor = np.dot(inputs.T, error * self.__sigmoid_derivative(output))
            self.synaptic_weights += factor

    def learn(self, inputs):
        return self.__sigmoid(np.dot(inputs, self.synaptic_weights))

if __name__ == "__main__":
    neural_network = NeuralNet()
    inputs = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 1]])
    outputs = np.array([[1, 0, 1]]).T
    neural_network.train(inputs, outputs, 10000)
    print(neural_network.learn(np.array([1, 0, 1])))
Output
Practical 7
Program- Implementation of CNN(Convolution Neural Network) Model.
CNN- A Convolutional Neural Network (CNN) is a type of deep learning neural
network that is well-suited for image and video analysis.
CNNs work by applying a series of convolution and pooling layers to an input image or
video. Convolution layers extract features from the input by sliding a small filter, or
kernel, over the image or video and computing the dot product between the filter and
the input. Pooling layers then downsample the output of the convolution layers to
reduce the dimensionality of the data and make it more computationally efficient.
A Convolutional Neural Network consists of multiple layers: the input layer,
convolutional layers, pooling layers, and fully connected layers.

 Input Layer: This is the layer through which we give input to our model. In a CNN,
the input will generally be an image or a sequence of images.
 Convolutional Layers: This layer is used to extract features from the input dataset.
It applies a set of learnable filters, known as kernels, to the input images. Each kernel
slides over the input image data and computes the dot product between the kernel
weights and the corresponding input image patch. The output of this layer is referred
to as feature maps.
 Activation Layer: By adding an activation function to the output of the preceding
layer, activation layers add non-linearity to the network. The activation function is
applied element-wise to the output of the convolution layer.
 Pooling Layer: This layer is periodically inserted into the network, and its main
function is to reduce the size of the volume, which makes the computation faster,
reduces memory usage, and also helps prevent overfitting.
Code
import tensorflow as tf
from tensorflow.keras import layers, models
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax')) # 10 classes for digits 0-9
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64,
          validation_data=(test_images, test_labels))
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"Test accuracy: {test_acc}")
Output:
Practical-8
Program- Implementation of Resnet
Resnet- ResNet (short for Residual Network) is a type of neural network architecture
introduced in 2015 by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun from
Microsoft Research. It was designed to solve the problem of vanishing gradients in deep
neural networks, which hindered their performance on large-scale image recognition
tasks.
In order to solve the problem of vanishing/exploding gradients, this architecture
introduced the concept of Residual Blocks, which use a technique called skip
connections. A skip connection connects the activations of a layer to later layers by
skipping some layers in between, forming a residual block. ResNets are made by
stacking these residual blocks together.

Code
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     Add, AveragePooling2D, Flatten, Dense)
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import backend as K

def residual_block(x, filters, kernel_size=3, strides=1):
    shortcut = x
    x = Conv2D(filters, kernel_size=kernel_size, strides=strides, padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters, kernel_size=kernel_size, strides=strides, padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x, shortcut])
    x = Activation('relu')(x)
    return x

def ResNet(input_shape, num_classes=10):
    inputs = Input(shape=input_shape)
    x = Conv2D(32, kernel_size=7, strides=2, padding='same')(inputs)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = residual_block(x, filters=32)
    x = residual_block(x, filters=32)
    x = AveragePooling2D(pool_size=3, strides=2, padding='same')(x)
    x = Flatten()(x)
    outputs = Dense(num_classes, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=outputs)
    return model

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
input_shape = X_train.shape[1:]
num_classes = 10
model = ResNet(input_shape, num_classes)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=10,
          validation_data=(X_test, y_test))
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Loss: {loss}, Test Accuracy: {accuracy}")

Output:
Practical-9
Program- Implementation of Chi Square Test (Feature Selection)
Chi-Square Test- The Chi-Square test is a statistical procedure for determining the
difference between observed and expected data. It can also be used to determine
whether there is a relationship between the categorical variables in our data: it helps
find out whether a difference between two categorical variables is due to chance or to
a genuine relationship between them.
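As an illustration of the statistic itself, here is a small sketch that computes the chi-square
value from observed and expected frequencies using SciPy (the 2x2 contingency table of
observed counts is invented purely for the example):

import numpy as np
from scipy.stats import chi2_contingency

# observed counts for two categorical variables (hypothetical 2x2 table)
observed = np.array([[20, 30],
                     [30, 20]])
chi2_stat, p_value, dof, expected = chi2_contingency(observed)
print("expected counts:\n", expected)   # counts we would see if the variables were independent
print("chi-square statistic:", chi2_stat)
print("p-value:", p_value)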
Code
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
iris = load_iris()
X = iris.data
y = iris.target
column_names = [f'feature_{i}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=column_names)
df['target'] = y
print("Original Dataset:")
print(df.head())
k=2
chi2_selector = SelectKBest(chi2, k=k)
X_new = chi2_selector.fit_transform(X, y)
selected_features = df.columns[:-1][chi2_selector.get_support()]
print("\nSelected Features:")
print(selected_features)
Output:
Practical- 10
Program- Implementation of PCA (Feature Extraction)

PCA (Principal Component Analysis)- Principal component analysis, or PCA, is a
dimensionality-reduction method that is often used to reduce the dimensionality of
large data sets by transforming a large set of variables into a smaller one that still
contains most of the information in the large set.
Reducing the number of variables of a data set naturally comes at the expense of
accuracy, but the trick in dimensionality reduction is to trade a little accuracy for
simplicity: smaller data sets are easier to explore and visualize, and they make
analyzing data points much easier and faster for machine learning algorithms, which
no longer have extraneous variables to process.
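How much of the original information the reduced set retains can be checked with the explained
variance ratio; the following minimal sketch (using the same Iris data and two components as the
code below) prints it:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2).fit(X)
# fraction of the total variance captured by each principal component
print(pca.explained_variance_ratio_)
print("total variance retained:", pca.explained_variance_ratio_.sum())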

Code-
import numpy as np
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
data = load_iris()
X = data.data
y = data.target
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
plt.figure(figsize=(8, 6))
for i, target_name in enumerate(data.target_names):
    plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1], label=target_name)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('PCA for Feature Extraction')
plt.legend()
plt.show()

Output:
