
ARUNAI ENGINEERING COLLEGE

(Affiliated to Anna University)


Velu Nagar, Thiruvannamalai-606 603
www.arunai.org

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

BACHELOR OF ENGINEERING

2023 - 2024

SIXTH SEMESTER

CCS355 NEURAL NETWORKS AND DEEP LEARNING LABORATORY
ARUNAI ENGINEERING COLLEGE
TIRUVANNAMALAI – 606 603

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


CERTIFICATE
Certified that this is a bonafide record of work done by

Name :

University Reg.No :

Semester :

Branch :

Year :

Staff-in-Charge Head of the Department

Submitted for the Practical Examination held on

Internal Examiner External Examiner


CONTENT
EX. NO. DATE LIST OF EXPERIMENTS PAGE NO. SIGNATURE
Step 1: Import the necessary libraries.
Step 2: Define the input vectors as tensors with the desired shape and data type.
Step 3: Add the input vectors using the add function.
Step 4: Initialize the session and run the computation.
Step 5: This will output the sum of the input vectors.
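A minimal sample program for these steps, a sketch assuming TensorFlow 2.x; since Step 4 initializes a session, the TF 1.x session behaviour is obtained through the tf.compat.v1 API, and the example vectors are illustrative:

import tensorflow as tf

# Enable session-style (TF 1.x) execution via the compat API
tf.compat.v1.disable_eager_execution()

# Define the input vectors as tensors with the desired shape and data type
vector1 = tf.constant([1.0, 2.0, 3.0], dtype=tf.float32)
vector2 = tf.constant([4.0, 5.0, 6.0], dtype=tf.float32)

# Add the input vectors using the add function
result = tf.add(vector1, vector2)

# Initialize the session and run the computation
with tf.compat.v1.Session() as sess:
    print(sess.run(result))   # prints [5. 7. 9.]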
Step 1: Import the necessary libraries.
Step 2: Define the perceptron function.
Step 3: Define the training data.
Step 4: Define the perceptron variables.
Step 5: Define the perceptron training process.
Step 6: Test the perceptron's prediction.
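A minimal sketch of these steps using NumPy and the classic perceptron learning rule; the AND-gate training data is an illustrative assumption:

import numpy as np

# Perceptron function: step activation over a weighted sum
def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# Training data: the AND gate (illustrative choice)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Perceptron variables: weights, bias and learning rate
w = np.zeros(2)
b = 0.0
lr = 0.1

# Training process: the perceptron learning rule
for epoch in range(10):
    for xi, target in zip(X, y):
        error = target - perceptron(xi, w, b)
        w += lr * error * xi
        b += lr * error

# Test the perceptron's predictions
for xi in X:
    print(xi, '->', perceptron(xi, w, b))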
Step 1: Import the necessary libraries.
Step 2: Create a sequential model.
Step 3: Add a single dense layer with a single output and an input_dim of 2.
Step 4: Add an activation layer with a sigmoid activation function.
Step 5: Compile the model.
Step 6: Train and evaluate the model.
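A minimal sketch of these steps in Keras; input_dim=2 follows Step 3, and the OR-gate data is an illustrative, linearly separable assumption:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

# OR-gate data (illustrative; any linearly separable data works)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype='float32')
y = np.array([0, 1, 1, 1], dtype='float32')

# Sequential model: one dense layer with a single output and input_dim=2
model = Sequential()
model.add(Dense(1, input_dim=2))
model.add(Activation('sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train and evaluate the model
model.fit(X, y, epochs=500, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f'Accuracy: {acc}')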
Step 1: Import the necessary libraries.
Step 2: Load the dataset.
Step 3: Normalise the data.
Step 4: Define the model.
Step 5: Define the loss function and optimizer.
Step 6: Compile the model.
Step 7: Train and evaluate the model.
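A minimal sketch of these steps, assuming the MNIST dataset and a small feed-forward network (both illustrative choices):

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

# Load the dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalise the data to the [0, 1] range
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255

# Define the model: a simple feed-forward network
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Define the loss function and optimizer, then compile the model
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy'])

# Train and evaluate the model
model.fit(train_images, train_labels, epochs=5, batch_size=64)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')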
Step 1: Import the necessary libraries.
Step 2: Generate the dummy data.
Step 3: Create a sequential model.
Step 4: Compile the model.
Step 5: Train the model.
Step 6: Make predictions.
Step 7: Stop.
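A minimal sketch of these steps; the dummy-data shapes and the binary-classification setup are illustrative assumptions:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Generate dummy data: 100 samples with 8 features, binary labels
X = np.random.random((100, 8))
y = np.random.randint(2, size=(100, 1))

# Create a sequential model
model = Sequential([
    Dense(16, activation='relu', input_shape=(8,)),
    Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X, y, epochs=10, batch_size=16, verbose=0)

# Make predictions on new dummy samples
predictions = model.predict(np.random.random((5, 8)))
print(predictions)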
Step 1: Import the necessary libraries.
Step 2: Load the dataset.
Step 3: Define the model using a CNN.
Step 4: Compile the model with an appropriate loss function, optimizer and metrics.
Step 5: Train and evaluate the model.
Step 6: Plot the training and validation accuracy.
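A minimal sketch of these steps, assuming the MNIST dataset (an illustrative choice) and matplotlib for the accuracy plot:

import matplotlib.pyplot as plt
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

# Load the dataset and scale it to [0, 1]
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255

# Define the model using a CNN
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile with an appropriate loss function, optimizer and metrics
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and evaluate the model
history = model.fit(train_images, train_labels, epochs=5,
                    validation_data=(test_images, test_labels))

# Plot the training and validation accuracy
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()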
Step 1: Import the necessary libraries.
Step 2: Load the dataset.
Step 3: Define the model using a CNN.
Step 4: Compile the model with an appropriate loss function, optimizer and metrics.
Step 5: Train and evaluate the model.
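These steps follow the same pattern as the previous CNN experiment; a minimal sketch with CIFAR-10 assumed as the dataset, purely for illustration:

from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10

# Load the dataset and scale it to [0, 1]
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255

# Define the model using a CNN
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile with an appropriate loss function, optimizer and metrics
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and evaluate the model
model.fit(train_images, train_labels, epochs=5,
          validation_data=(test_images, test_labels))
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')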
Ex No: 6
Date:
IMPROVING A DEEP LEARNING MODEL BY FINE-TUNING HYPERPARAMETERS

AIM:

To improve a deep learning model by fine-tuning its hyperparameters.

ALGORITHM:
Step 1: Import the necessary libraries.
Step 2: Load the MNIST dataset.
Step 3: Preprocess the data.
Step 4: Define the model.
Step 5: Define the hyperparameters to tune.
Step 6: Perform a grid search for hyperparameter tuning.

PROGRAM:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

# Load MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Preprocess the data
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# Define the model
def build_model(learning_rate=0.001, dropout_rate=0.25):
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))

    model.add(layers.Flatten())
    model.add(layers.Dropout(dropout_rate))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(10, activation='softmax'))

    # Compile with the chosen learning rate
    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Hyperparameters to tune
learning_rates = [0.001, 0.01, 0.1]
dropout_rates = [0.25, 0.5]

# Perform grid search for hyperparameter tuning
for lr in learning_rates:
    for dr in dropout_rates:
        model = build_model(learning_rate=lr, dropout_rate=dr)
        history = model.fit(train_images, train_labels, epochs=5, batch_size=64,
                            validation_data=(test_images, test_labels), verbose=0)

        # Print results
        print(f"Learning Rate: {lr}, Dropout Rate: {dr}")
        print(f"Train Accuracy: {history.history['accuracy'][-1]}, "
              f"Test Accuracy: {history.history['val_accuracy'][-1]}")
        print("-" * 50)

RESULT:
Thus the implementation of the deep learning model with fine-tuned hyperparameters was executed successfully.
Ex No: 7
Date:
IMPLEMENT THE TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION

AIM:
To implement the transfer learning concept in image classification.

ALGORITHM:
Step 1: Import the necessary libraries.
Step 2: Load the CIFAR-10 dataset.
Step 3: Preprocess the data.
Step 4: One-hot encode the labels.
Step 5: Load the pre-trained model.
Step 6: Freeze the convolutional base.
Step 7: Build the new model.
Step 8: Train and evaluate the model.

PROGRAM:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.utils import to_categorical

# Load CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

# Preprocess the data
train_images = preprocess_input(train_images)
test_images = preprocess_input(test_images)

# One-hot encode the labels
train_labels = to_categorical(train_labels, num_classes=10)
test_labels = to_categorical(test_labels, num_classes=10)

# Load pre-trained MobileNetV2 model without the top (classification) layers
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(32, 32, 3))

# Freeze the convolutional base
base_model.trainable = False

# Build a new model on top of the pre-trained base
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

# Fine-tune the model (optional)
# model.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss='categorical_crossentropy',
#               metrics=['accuracy'])
# history_fine = model.fit(train_images, train_labels, epochs=5,
#                          validation_data=(test_images, test_labels))

RESULT:
Thus the transfer learning concept in image classification was implemented and executed successfully.
8

Step 1: Set the hyperparameters.
Step 2: Load the dataset.
Step 3: Pad the sequences to make them the same length.
Step 4: Define the model.
Step 5: Compile the model.
Step 6: Train and evaluate the model.
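A minimal sketch of these steps; the IMDB sentiment dataset and an LSTM model are illustrative assumptions suggested by the sequence-padding step:

from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Set the hyperparameters
vocab_size = 10000
max_len = 200
embedding_dim = 32

# Load the dataset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)

# Pad the sequences to make them the same length
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

# Define the model
model = Sequential([
    Embedding(vocab_size, embedding_dim),
    LSTM(64),
    Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train and evaluate the model
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))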
9(a)

Step 1: Set the hyperparameters.
Step 2: Define the autoencoder model.
Step 3: Compile the autoencoder.
Step 4: Train the autoencoder using random data.
Step 5: Test the autoencoder on new data.
Step 6: Evaluate the model.
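A minimal sketch of these steps: a small fully connected autoencoder trained on random data; the layer sizes are illustrative assumptions:

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# Set the hyperparameters
input_dim = 32
encoding_dim = 8

# Define the autoencoder model
inputs = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(inputs)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
autoencoder = Model(inputs, decoded)

# Compile the autoencoder
autoencoder.compile(optimizer='adam', loss='mse')

# Train the autoencoder using random data
x_train = np.random.random((1000, input_dim))
autoencoder.fit(x_train, x_train, epochs=20, batch_size=32, verbose=0)

# Test the autoencoder on new data
x_new = np.random.random((5, input_dim))
reconstructed = autoencoder.predict(x_new)
print(reconstructed.shape)  # (5, 32)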
9(b)

Step 1: Set the hyperparameters.
Step 2: Define the autoencoder model.
Step 3: Compile the autoencoder.
Step 4: Train the autoencoder using random data.
Step 5: Test the autoencoder on new data.
Step 6: Evaluate the model.
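These steps repeat 9(a); the sketch below assumes the variant intended here is a denoising autoencoder (an assumption, not stated in the steps), where noisy inputs are mapped to clean targets:

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# Set the hyperparameters
input_dim = 32
encoding_dim = 8
noise_factor = 0.2

# Define the autoencoder model (same shape as in 9(a))
inputs = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(inputs)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
autoencoder = Model(inputs, decoded)

# Compile the autoencoder
autoencoder.compile(optimizer='adam', loss='mse')

# Train using random data: noisy inputs mapped to clean targets
x_train = np.random.random((1000, input_dim))
x_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
autoencoder.fit(x_noisy, x_train, epochs=20, batch_size=32, verbose=0)

# Test the autoencoder on new (noisy) data
x_new = np.clip(np.random.random((5, input_dim))
                + noise_factor * np.random.normal(size=(5, input_dim)), 0.0, 1.0)
print(autoencoder.predict(x_new).shape)  # (5, 32)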
10

Step 1: Import the necessary libraries.
Step 2: Create the model function.
Step 3: Call the function.
Step 4: Generate the image.
Step 5: Plot the image.
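A minimal sketch of these steps, assuming the model function is a small GAN-style generator mapping random noise to a 28x28 image (an untrained, illustrative model):

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape

# Create the model function: a small generator from noise to a 28x28 image
def build_generator(noise_dim=100):
    model = Sequential([
        Dense(128, activation='relu', input_shape=(noise_dim,)),
        Dense(28 * 28, activation='sigmoid'),
        Reshape((28, 28))
    ])
    return model

# Call the function
generator = build_generator()

# Generate the image from random noise
noise = np.random.normal(size=(1, 100))
image = generator.predict(noise)[0]

# Plot the image
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.show()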
