
Indira Gandhi Delhi Technical University for Women

(Established by Govt. of Delhi vide Act 09 of 2012)

Kashmere Gate, Delhi - 110006

LAB FILE
For
DEEP LEARNING-II
(BAI-304)

B.Tech/CSE-AI

Department of Artificial Intelligence & Data Sciences


EVEN Semester 2024

Submitted to:                              Submitted by:

Dr. Himanshu Mittal                        Vinisky Kumar
Dept. of AI&DS                             < >
INDEX

S. No.  Lab                                                                 Date

1.   Write a PyTorch program to create a tensor and print the tensor
     and its rank.

2.   Write a program to implement the bias-variance trade-off for
     classification and verify that the total error equals the sum of
     squared bias and variance.

3.   Write a program to implement the Gradient Descent algorithm along
     with its visualization.

4.   Write a program to implement a CNN and a pretrained CNN on the
     Fashion-MNIST dataset. Also validate the performance of the same
     CNN and pretrained CNN on the German Traffic Sign dataset.

5.   Write a program to compare the performance of a 5-layer CNN model
     on the Fashion-MNIST dataset with the following optimization
     algorithms, along with visualization: gradient descent, RMSprop,
     Adagrad, Adam.

6.   Write a program to perform a performance analysis of a 5-layer CNN
     model with the Adam optimizer on the CIFAR-10 dataset with the
     following regularization techniques: batch normalization, dropout,
     batch normalization + dropout.

7.   Write a program to implement a 3-layer CNN model on the FOOD101
     dataset and validate the results with different performance
     parameters.

8.   Write a program to analyse the performance of a 5-layer CNN model
     on the Oxford102 dataset by performing hyperparameter tuning
     through grid and random search.

9.   Write a program to implement an auto-encoder model to perform
     image segmentation using the Pascal VOC segmentation dataset.

10.  Write a program to use a pre-trained GAN network to synthesize
     face images.
LAB-1

Q1) Write a PyTorch program to create a tensor and print the tensor and its rank.

Code:
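A minimal PyTorch sketch, treating rank as the number of tensor dimensions (obtained with dim()):

import torch

# Create a 2-D tensor from nested lists
t = torch.tensor([[1, 2, 3], [4, 5, 6]])

print("Tensor:\n", t)
print("Rank:", t.dim())  # number of dimensions, here 2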

Output:

Q2) Write a PyTorch program to create a tensor from a list of values using
different methods.

Code:
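A sketch showing several common construction methods (torch.tensor, torch.as_tensor, torch.Tensor and torch.from_numpy):

import torch
import numpy as np

values = [1.0, 2.0, 3.0, 4.0]

t1 = torch.tensor(values)                # copies the data, infers dtype
t2 = torch.as_tensor(values)             # avoids a copy where possible
t3 = torch.Tensor(values)                # legacy constructor, always float32
t4 = torch.from_numpy(np.array(values))  # shares memory with the NumPy array

print(t1, t2, t3, t4, sep="\n")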
Output:

Q3) Write a PyTorch program to reshape a 3x4 tensor into a 6x2 tensor.

Code:
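One way, sketched with reshape (view would also work on a contiguous tensor):

import torch

t = torch.arange(12).reshape(3, 4)  # 3x4 tensor holding 0..11
r = t.reshape(6, 2)                 # same 12 elements, new shape

print("Original:\n", t)
print("Reshaped:\n", r)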

Output:
Q4) Write a PyTorch program to apply addition, subtraction, multiplication and
division to two tensors.

Code:
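A sketch using the element-wise operators (torch.add, torch.sub, torch.mul and torch.div are equivalent):

import torch

a = torch.tensor([4.0, 9.0, 16.0])
b = torch.tensor([2.0, 3.0, 4.0])

print("Addition:", a + b)
print("Subtraction:", a - b)
print("Multiplication:", a * b)  # element-wise product
print("Division:", a / b)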

Output:
Q5) Write a PyTorch program to build a neural network consisting of a single node
with a single input and a linear activation function, trained on the dataset below
with the Adam optimizer and mean squared error as the loss function.

Code:
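The dataset from the question is not reproduced in this copy, so this sketch assumes a simple linear relation y = 2x + 1 for illustration; nn.Linear with no activation acts as the single linear node:

import torch
import torch.nn as nn

# Assumed stand-in data (y = 2x + 1); replace with the dataset from the question
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = torch.tensor([[3.0], [5.0], [7.0], [9.0]])

model = nn.Linear(1, 1)  # single node, single input, linear activation
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

print("Weight:", model.weight.item(), "Bias:", model.bias.item())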

Output:
LAB-2
Q1) Write a program to implement the bias-variance trade-off for classification and
verify that the total error equals the sum of squared bias and variance.

Code:
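A sketch built on mlxtend's bias_variance_decomp helper (an assumed dependency, installable with pip install mlxtend); note that under the 0-1 loss used for classification, the expected error decomposes into bias plus variance rather than squared bias plus variance:

import numpy as np
from mlxtend.evaluate import bias_variance_decomp
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data stands in for any real dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
loss, bias, var = bias_variance_decomp(
    clf, X_tr, y_tr, X_te, y_te,
    loss='0-1_loss', num_rounds=100, random_seed=42)

print(f"Expected 0-1 loss: {loss:.3f}")
print(f"Bias: {bias:.3f}  Variance: {var:.3f}")
print(f"Bias + Variance:   {bias + var:.3f}")  # approximately matches the loss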

Output:
Q2) Write a program to implement the bias-variance trade-off for regression
and verify that the bias and variance components are comparable to each other.

Code:
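A matching sketch for regression, again assuming mlxtend; with mean squared error, mlxtend reports the squared-bias term as bias, so loss is approximately bias + variance and the two components can be compared directly:

import numpy as np
from mlxtend.evaluate import bias_variance_decomp
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data with additive noise
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

reg = DecisionTreeRegressor(random_state=42)
loss, bias, var = bias_variance_decomp(
    reg, X_tr, y_tr, X_te, y_te,
    loss='mse', num_rounds=100, random_seed=42)

print(f"MSE: {loss:.2f}  Bias^2: {bias:.2f}  Variance: {var:.2f}")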

Output:
LAB-3
Write a program to implement the Gradient Descent algorithm along with
its visualization.

Code:
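A self-contained sketch that minimizes a 1-D quadratic and plots the iterates over the loss curve:

import numpy as np
import matplotlib.pyplot as plt

f = lambda x: (x - 3) ** 2    # function to minimize
grad = lambda x: 2 * (x - 3)  # its derivative

x, lr = -4.0, 0.1             # starting point and learning rate
path = [x]
for _ in range(30):
    x -= lr * grad(x)         # gradient descent update
    path.append(x)

xs = np.linspace(-5, 8, 200)
plt.plot(xs, f(xs), label="f(x) = (x - 3)^2")
plt.scatter(path, [f(p) for p in path], color="red", s=20, label="iterates")
plt.xlabel("x"); plt.ylabel("f(x)")
plt.title("Gradient Descent")
plt.legend()
plt.show()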

Output:
LAB-4
Q1) Write a program to implement a CNN and a pretrained CNN on the Fashion-MNIST
dataset. Also validate the performance of the same CNN and pretrained CNN on the
German Traffic Sign dataset.

Code:
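A condensed Keras sketch: a small CNN trained from scratch next to a frozen ImageNet-pretrained MobileNetV2 (an assumed choice of backbone) on Fashion-MNIST; rerunning the same pipeline on the German Traffic Sign dataset only requires swapping the data-loading step:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.fashion_mnist.load_data()
x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0

# Plain CNN trained from scratch
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
cnn.fit(x_tr, y_tr, epochs=3, validation_data=(x_te, y_te))

# Pretrained CNN: upscale a subset to 32x32 RGB (MobileNetV2's minimum size)
def to_rgb32(x):
    x = tf.image.grayscale_to_rgb(tf.constant(x, tf.float32))
    return tf.image.resize(x, (32, 32))

x_tr_rgb, x_te_rgb = to_rgb32(x_tr[:10000]), to_rgb32(x_te[:2000])

base = tf.keras.applications.MobileNetV2(input_shape=(32, 32, 3),
                                         include_top=False, weights='imagenet')
base.trainable = False  # keep the pretrained weights frozen

pretrained = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax'),
])
pretrained.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
pretrained.fit(x_tr_rgb, y_tr[:10000], epochs=3,
               validation_data=(x_te_rgb, y_te[:2000]))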
Output:
LAB-5
Q1) Write a program to compare the performance of a 5-layer CNN model on the
Fashion-MNIST dataset with the following optimization algorithms, along with
visualization:
Gradient descent algorithm
RMSprop
Adagrad
ADAM

Code:
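A sketch that trains the same 5-layer CNN (read here as three convolutional plus two dense weight layers, an assumption) under each optimizer and plots the validation-accuracy curves:

import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.fashion_mnist.load_data()
x_tr, x_te = x_tr[..., None] / 255.0, x_te[..., None] / 255.0

def build_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(10, activation='softmax'),
    ])

# Identical architecture, different optimizer each run
for name in ['sgd', 'rmsprop', 'adagrad', 'adam']:
    model = build_cnn()
    model.compile(optimizer=name, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    hist = model.fit(x_tr, y_tr, epochs=5, batch_size=128,
                     validation_data=(x_te, y_te), verbose=0)
    plt.plot(hist.history['val_accuracy'], label=name)

plt.xlabel('Epoch'); plt.ylabel('Validation accuracy')
plt.title('Optimizer comparison on Fashion-MNIST')
plt.legend()
plt.show()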
Output:
LAB-6
Q1) Write a program to perform a performance analysis of a 5-layer CNN model with
the Adam optimizer on the CIFAR-10 dataset with the following regularization
techniques:
Batch normalization
Dropout
Batch normalization + Dropout

Code:
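A sketch that rebuilds the same CNN with each regularization variant and compares test accuracy (the layer sizes are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def build_cnn(batch_norm=False, dropout=False):
    m = models.Sequential([layers.Input((32, 32, 3))])
    for filters in (32, 64, 64):
        m.add(layers.Conv2D(filters, 3, activation='relu'))
        if batch_norm:
            m.add(layers.BatchNormalization())
        m.add(layers.MaxPooling2D())
    m.add(layers.Flatten())
    m.add(layers.Dense(128, activation='relu'))
    if dropout:
        m.add(layers.Dropout(0.5))
    m.add(layers.Dense(10, activation='softmax'))
    m.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return m

variants = {
    'Batch normalization': (True, False),
    'Dropout': (False, True),
    'Batch normalization + Dropout': (True, True),
}
for name, (bn, do) in variants.items():
    model = build_cnn(batch_norm=bn, dropout=do)
    model.fit(x_tr, y_tr, epochs=5, batch_size=128,
              validation_split=0.1, verbose=0)
    _, acc = model.evaluate(x_te, y_te, verbose=0)
    print(f"{name}: test accuracy = {acc:.3f}")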
Output:
LAB-7

Write a program to implement a 3-layer CNN model on the FOOD101 dataset and
validate the results with different performance parameters.

Code:
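A sketch using a small TFDS slice of FOOD101 (the full dataset is several gigabytes; the 5% slices are an assumption to keep the demo tractable), with per-class precision, recall and F1 as the performance parameters:

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from sklearn.metrics import classification_report
from tensorflow.keras import layers, models

# Small slices for the demo; use the full splits for real results
(ds_train, ds_test), info = tfds.load(
    'food101', split=['train[:5%]', 'validation[:5%]'],
    as_supervised=True, with_info=True)

def prep(img, label):
    return tf.image.resize(img, (64, 64)) / 255.0, label

ds_train = ds_train.map(prep).batch(32).prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(prep).batch(32).prefetch(tf.data.AUTOTUNE)

# 3 convolutional layers plus a softmax head
model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(101, activation='softmax'),  # 101 food classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(ds_train, epochs=3, validation_data=ds_test)

# Performance parameters: per-class precision, recall and F1
y_true = np.concatenate([y.numpy() for _, y in ds_test])
y_pred = np.argmax(model.predict(ds_test), axis=1)
print(classification_report(y_true, y_pred, zero_division=0))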
Output:
LAB-8
Q1) Write a program to analyse the performance of a 5-layer CNN model on the
Oxford102 dataset by performing hyperparameter tuning through grid and random
search.
CODE:
import numpy as np
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from tensorflow.keras import layers, models
from tensorflow.image import resize
from scikeras.wrappers import KerasClassifier
import tensorflow_datasets as tfds

# Load Oxford102 dataset
def load_oxford102_dataset():
    dataset, info = tfds.load('oxford_flowers102', with_info=True)
    return dataset['train'], dataset['test']

train_data, test_data = load_oxford102_dataset()

# Preprocess: resize every image to a fixed size and normalize to [0, 1]
def preprocess_data(dataset, image_size=(128, 128)):
    X, y = [], []
    for example in dataset:
        image = example['image'].numpy()  # convert TensorFlow tensor to NumPy array
        label = example['label'].numpy()
        X.append(resize(image, image_size))  # resize image to a fixed size
        y.append(label)
    X = np.array(X) / 255.0  # normalize pixel values to [0, 1]
    y = np.array(y)
    return X, y

X_train, y_train = preprocess_data(train_data)
X_test, y_test = preprocess_data(test_data)

# Define CNN model function (four conv blocks plus a dense head)
def create_model(input_shape):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dense(102, activation='softmax'),  # 102 classes in Oxford102
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Create a KerasClassifier based on the model function
model = KerasClassifier(build_fn=create_model, input_shape=X_train.shape[1:],
                        verbose=0)

# Define hyperparameter grid for Grid Search
param_grid = {
    'batch_size': [32, 64],
    'epochs': [10, 20],
}

# Perform Grid Search
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3,
                           verbose=2, error_score='raise')
grid_result = grid_search.fit(X_train, y_train)

# Print the best parameters and score from Grid Search
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))

# Define hyperparameter distributions for Random Search
param_dist = {
    'batch_size': [32, 64],
    'epochs': [10, 20],
}

# Perform Random Search
random_search = RandomizedSearchCV(estimator=model,
                                   param_distributions=param_dist,
                                   n_iter=4, cv=3, verbose=2)
random_result = random_search.fit(X_train, y_train)

# Print the best parameters and score from Random Search
print("Best: %f using %s" % (random_result.best_score_, random_result.best_params_))

# Evaluate the best model from Grid Search on the test set
y_pred = grid_result.best_estimator_.predict(X_test)
print("Grid Search Model:")
print("Accuracy:", np.mean(y_pred == y_test))

# Evaluate the best model from Random Search on the test set
y_pred = random_result.best_estimator_.predict(X_test)
print("Random Search Model:")
print("Accuracy:", np.mean(y_pred == y_test))

OUTPUT:
LAB-9

Write a program to implement an auto-encoder model to perform image segmentation by


using Pascal VOC segmentation dataset.
CODE:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, datasets

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()

        # Encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

        # Decoder
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1,
                               output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2, padding=1,
                               output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

class PascalVOCDataset(Dataset):
    def __init__(self, root_dir, transform=None):
        self.root_dir = root_dir
        self.transform = transform
        self.data = datasets.VOCSegmentation(root=root_dir, download=True,
                                             transform=None)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img, mask = self.data[idx]
        if self.transform:
            img = self.transform(img)
            mask = self.transform(mask)
        return img, mask

# Create an instance of PascalVOCDataset
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
pascal_voc_dataset = PascalVOCDataset(root_dir='./data', transform=transform)

# Split the dataset into train and test sets
train_size = int(0.8 * len(pascal_voc_dataset))
test_size = len(pascal_voc_dataset) - train_size
train_set, test_set = torch.utils.data.random_split(pascal_voc_dataset,
                                                    [train_size, test_size])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32, shuffle=False)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

autoencoder = Autoencoder().to(device)
criterion = nn.MSELoss()
optimizer = optim.Adam(autoencoder.parameters(), lr=0.001)

num_epochs = 10
for epoch in range(num_epochs):
    running_loss = 0.0
    for images, masks in train_loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        outputs = autoencoder(images)
        # The 1-channel mask is broadcast against the 3-channel output
        loss = criterion(outputs, masks)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {running_loss/len(train_loader)}")

# Iterate over the test set
autoencoder.eval()  # set model to evaluation mode
total_loss = 0.0
num_batches = 0

with torch.no_grad():
    for images, _ in test_loader:
        images = images.to(device)

        # Forward pass
        outputs = autoencoder(images)

        # Calculate reconstruction loss
        loss = criterion(outputs, images)
        total_loss += loss.item()
        num_batches += 1

# Calculate average loss
average_loss = total_loss / num_batches
print(f"Average loss: {average_loss:.4f}")

# Show a sample of original and reconstructed images
import matplotlib.pyplot as plt

# Get a batch of test images and pass it through the autoencoder
images, _ = next(iter(test_loader))
images = images.to(device)
outputs = autoencoder(images)

# Convert tensors to numpy arrays
images = images.detach().cpu().numpy()
outputs = outputs.detach().cpu().numpy()

# Plot original and reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=8, figsize=(16, 4))
for i in range(8):
    axes[0, i].imshow(images[i].transpose((1, 2, 0)))
    axes[0, i].set_title('Original')
    axes[0, i].axis('off')

    axes[1, i].imshow(outputs[i].transpose((1, 2, 0)))
    axes[1, i].set_title('Reconstructed')
    axes[1, i].axis('off')

plt.show()
Output:
LAB-10
Q1) Write a program to use a pre-trained GAN network to synthesize face images.
CODE:
import numpy as np
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist  # example dataset, replace with your dataset
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt

# Define the generator model
def build_generator(latent_dim):
    model = Sequential()

    # First, project the input into a small spatial and deep representation
    model.add(layers.Dense(7 * 7 * 128, input_dim=latent_dim))
    model.add(layers.LeakyReLU(alpha=0.2))
    model.add(layers.Reshape((7, 7, 128)))

    # Upsample to 14x14
    model.add(layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))

    # Upsample to 28x28
    model.add(layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))

    # Output 28x28x1 image
    model.add(layers.Conv2D(1, (7, 7), activation='tanh', padding='same'))
    return model

# Define the discriminator model
def build_discriminator(input_shape=(28, 28, 1)):
    model = Sequential()

    # Downsample the input
    model.add(layers.Conv2D(64, (3, 3), strides=(2, 2), padding='same',
                            input_shape=input_shape))
    model.add(layers.LeakyReLU(alpha=0.2))

    # Downsample to 14x14
    model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))

    # Downsample to 7x7
    model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))

    # Classifier
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.4))
    model.add(layers.Dense(1, activation='sigmoid'))
    return model

# Define the GAN model (generator + frozen discriminator)
def build_gan(generator, discriminator):
    discriminator.trainable = False
    model = Sequential([generator, discriminator])
    return model

# Load and preprocess dataset (example with MNIST)
(train_images, _), (_, _) = mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # normalize images to [-1, 1]

# Define hyperparameters
latent_dim = 10
epochs = 50
batch_size = 32

# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(learning_rate=0.0002, beta_1=0.5),
                      metrics=['accuracy'])

# Build the generator
generator = build_generator(latent_dim)

# Build and compile the GAN model
gan = build_gan(generator, discriminator)
gan.compile(loss='binary_crossentropy',
            optimizer=Adam(learning_rate=0.0002, beta_1=0.5))

# Train the GAN (one discriminator and one generator update per iteration)
for epoch in range(epochs):
    # Select a random batch of real images
    idx = np.random.randint(0, train_images.shape[0], batch_size)
    real_images = train_images[idx]

    # Generate a batch of fake images
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)

    # Train the discriminator
    d_loss_real = discriminator.train_on_batch(real_images,
                                               np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images,
                                               np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train the generator (through the combined model, discriminator frozen)
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))

    # Print progress
    if epoch % 10 == 0:
        print(f"Epoch {epoch}, Discriminator Loss: {d_loss[0]}, "
              f"Generator Loss: {g_loss}")

# Save the generator model to a file
generator.save('pretrained_generator_model.h5')

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

# Load the pre-trained DCGAN generator model
generator = load_model('/content/pretrained_generator_model.h5')
generator.compile(loss='binary_crossentropy', optimizer='adam')

# Generate synthetic face images
num_images = 10  # number of images to generate
latent_dim = 10  # dimension of the latent space

# Generate random noise as input to the generator
noise = np.random.normal(0, 1, (num_images, latent_dim))

# Generate images from the random noise
synthetic_images = generator.predict(noise)

# Display the generated images
plt.figure(figsize=(10, 5))
for i in range(num_images):
    plt.subplot(2, 5, i + 1)
    # Drop the channel axis and scale pixel values from [-1, 1] to [0, 1]
    plt.imshow((synthetic_images[i].squeeze() + 1) / 2, cmap='gray')
    plt.axis('off')
plt.tight_layout()
plt.show()
OUTPUT:
