Deep Learning Lab Manual - IGDTUW - Vinisky Kumar
LAB FILE
For
DEEP LEARNING-II
(BAI-304)
B.Tech/CSE-AI
LAB-1
Q1) Write a PyTorch program to create a tensor and print the tensor and its rank.
Code:
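A possible solution sketch (the tensor values are illustrative):

```python
import torch

# Create a 2-D tensor from nested lists
t = torch.tensor([[1, 2, 3], [4, 5, 6]])
print("Tensor:\n", t)

# Rank = number of dimensions, given by .dim() (or the .ndim attribute)
print("Rank:", t.dim())  # 2
```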
Output:
Q2) Write a PyTorch program to create a tensor from a list of values using different methods.
Code:
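A possible solution sketch showing four standard construction methods (the list of values is illustrative):

```python
import numpy as np
import torch

values = [1.0, 2.0, 3.0, 4.0]

# Method 1: torch.tensor() copies the data and infers the dtype
a = torch.tensor(values)

# Method 2: torch.as_tensor() avoids a copy where possible
b = torch.as_tensor(values)

# Method 3: the torch.Tensor constructor always yields float32
c = torch.Tensor(values)

# Method 4: round-trip through NumPy with torch.from_numpy()
d = torch.from_numpy(np.array(values))

for name, t in [("tensor", a), ("as_tensor", b), ("Tensor", c), ("from_numpy", d)]:
    print(f"{name}: {t} dtype={t.dtype}")
```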
Output:
Q3) Write a PyTorch program to reshape a 3x4 tensor into a 6x2 tensor.
Code:
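A possible solution sketch (filling the tensor with 0..11 for illustration):

```python
import torch

t = torch.arange(12).reshape(3, 4)  # 3x4 tensor holding 0..11
r = t.reshape(6, 2)                 # same 12 elements, rearranged as 6x2
print(t)
print(r)
print(r.shape)  # torch.Size([6, 2])
```

`.view(6, 2)` would also work here, but `.reshape()` is safer because it handles non-contiguous tensors as well.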
Output:
Q4) Write a PyTorch program to apply addition, subtraction, multiplication, and division to two tensors.
Code:
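A possible solution sketch (the operand values are illustrative):

```python
import torch

a = torch.tensor([10.0, 20.0, 30.0])
b = torch.tensor([2.0, 4.0, 5.0])

# All four operations are element-wise; the operators map to
# torch.add, torch.sub, torch.mul and torch.div respectively
print("a + b =", a + b)  # tensor([12., 24., 35.])
print("a - b =", a - b)  # tensor([ 8., 16., 25.])
print("a * b =", a * b)  # tensor([ 20.,  80., 150.])
print("a / b =", a / b)  # tensor([5., 5., 6.])
```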
Output:
Q5) Write a PyTorch program to build a neural network consisting of a single node with a single input and a linear activation function, trained on the given dataset with the Adam optimizer and mean squared error as the loss function.
Code:
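The dataset table referenced in the question is not reproduced in this file, so the sketch below substitutes an assumed toy dataset following y = 2x + 1:

```python
import torch
import torch.nn as nn

# Assumed toy dataset (the manual's table is not reproduced): y = 2x + 1
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = torch.tensor([[3.0], [5.0], [7.0], [9.0]])

# One node, one input, identity (linear) activation = nn.Linear(1, 1)
model = nn.Linear(1, 1)
criterion = nn.MSELoss()                                   # mean squared error
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)  # Adam optimizer

for epoch in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

# The learned parameters should approach weight=2, bias=1
print(f"weight={model.weight.item():.3f} bias={model.bias.item():.3f}")
```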
Output:
LAB-2
Q1) Write a program to implement the bias-variance trade-off for classification and verify that the total error is the sum of the squared bias and the variance.
Code:
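A possible sketch: a decision stump is refit on many resampled training sets, and the squared-loss decomposition is measured against the noiseless target, under which total error = bias² + variance holds exactly. The data-generating setup (1-D inputs, 10% label noise) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_prob(x):
    # Noiseless target: label 1 exactly when x > 0
    return (x > 0).astype(float)

x_test = np.linspace(-1, 1, 200)
n_sets, n_train = 300, 30
preds = np.empty((n_sets, x_test.size))

# Refit a decision stump on each of many resampled training sets
for i in range(n_sets):
    x = rng.uniform(-1, 1, n_train)
    y = (x > 0).astype(float)
    flip = rng.random(n_train) < 0.1        # 10% label noise
    y[flip] = 1 - y[flip]
    # Pick the threshold minimizing training error
    candidates = np.sort(x)
    errs = [np.mean((x > t).astype(float) != y) for t in candidates]
    t_best = candidates[int(np.argmin(errs))]
    preds[i] = (x_test > t_best).astype(float)

mean_pred = preds.mean(axis=0)
bias_sq = np.mean((mean_pred - true_prob(x_test)) ** 2)
variance = np.mean(preds.var(axis=0))
total = np.mean((preds - true_prob(x_test)) ** 2)

print(f"bias^2          = {bias_sq:.4f}")
print(f"variance        = {variance:.4f}")
print(f"bias^2+variance = {bias_sq + variance:.4f}")
print(f"total error     = {total:.4f}")  # matches the sum exactly
```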
Output:
Q2) Write a program to implement the bias-variance trade-off for regression and verify that the bias and variance are comparable to each other.
Code:
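A possible sketch: polynomial regressions of increasing degree are refit on many noisy training sets so bias² and variance can be compared side by side. The target function sin(πx) and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x)          # true regression function
x_test = np.linspace(-0.9, 0.9, 100)     # evaluate away from the interval edges
n_sets, n_train, noise = 300, 40, 0.2

results = {}
for degree in (1, 3, 9):                 # underfit, balanced, overfit
    preds = np.empty((n_sets, x_test.size))
    for i in range(n_sets):
        x = rng.uniform(-1, 1, n_train)
        y = f(x) + rng.normal(0, noise, n_train)
        preds[i] = np.polyval(np.polyfit(x, y, degree), x_test)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    results[degree] = (bias_sq, variance)
    print(f"degree {degree}: bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")
```

Low degree shows high bias and low variance; high degree shows the reverse, so the two error sources trade off and can be read directly off the printed table.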
Output:
LAB-3
Write a program to implement the gradient descent algorithm along with its visualization.
Code:
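A possible sketch on the 1-D objective f(x) = x², with the iterates plotted over the curve (the start point and learning rate are illustrative):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # render without a display
import matplotlib.pyplot as plt

f = lambda x: x ** 2               # objective to minimize
grad = lambda x: 2 * x             # its derivative

x, lr = 4.0, 0.1                   # start point and learning rate
path = [x]
for _ in range(30):
    x = x - lr * grad(x)           # gradient descent update
    path.append(x)

# Visualization: objective curve with the descent path overlaid
xs = np.linspace(-5, 5, 200)
plt.plot(xs, f(xs), label="f(x) = x^2")
plt.scatter(path, [f(p) for p in path], color="red", s=15, label="iterates")
plt.xlabel("x"); plt.ylabel("f(x)")
plt.legend()
plt.savefig("gradient_descent.png")
print("minimum found near x =", round(path[-1], 4))
```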
Output:
LAB-4
Q1) Write a program to implement a CNN and a pretrained CNN on the Fashion-MNIST dataset. Also validate the performance of the same CNN and pretrained CNN on the German Traffic Sign dataset.
Code:
Output:
LAB-5
Q1) Write a program to compare the performance of a 5-layer CNN model on the Fashion-MNIST dataset with the following optimization algorithms, along with visualization:
Gradient descent algorithm
RMSprop
Adagrad
ADAM
Code:
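A possible skeleton of the comparison: the same 5-layer CNN (three conv + two fully connected layers) is trained from an identical initialization under each optimizer. A single random batch shaped like Fashion-MNIST stands in for the real DataLoader, and the learning rates are illustrative defaults.

```python
import torch
import torch.nn as nn

def make_cnn():
    # Five learnable layers: three conv + two fully connected
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

# One random Fashion-MNIST-shaped batch stands in for the real DataLoader
torch.manual_seed(0)
X = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

optimizers = {
    "Gradient descent (SGD)": lambda p: torch.optim.SGD(p, lr=0.01),
    "RMSprop": lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adagrad": lambda p: torch.optim.Adagrad(p, lr=0.01),
    "Adam": lambda p: torch.optim.Adam(p, lr=0.001),
}

criterion = nn.CrossEntropyLoss()
histories = {}
for name, make_opt in optimizers.items():
    torch.manual_seed(0)                 # identical init for a fair comparison
    model = make_cnn()
    opt = make_opt(model.parameters())
    losses = []
    for _ in range(20):
        opt.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    histories[name] = losses
    print(f"{name}: first={losses[0]:.3f} last={losses[-1]:.3f}")

# For the visualization, plot each histories[name] curve with matplotlib
```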
Output:
LAB-6
Q1) Write a program to perform a performance analysis of a 5-layer CNN model with the Adam optimizer on the CIFAR-10 dataset with the following regularization techniques:
Batch normalization
Dropout
Batch normalization + Dropout
Code:
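A possible skeleton: the same 5-layer CNN is built in the three regularization variants, and one Adam step is taken on a random CIFAR-10-shaped batch standing in for the real DataLoader; the full experiment would train each variant for many epochs and compare test accuracy.

```python
import torch
import torch.nn as nn

def make_cnn(batchnorm=False, dropout=False):
    # A 5-layer CNN for 32x32x3 CIFAR-10 images, with optional regularizers
    blocks = [nn.Conv2d(3, 32, 3, padding=1)]
    if batchnorm:
        blocks.append(nn.BatchNorm2d(32))
    blocks += [nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1)]
    if batchnorm:
        blocks.append(nn.BatchNorm2d(64))
    blocks += [nn.ReLU(), nn.MaxPool2d(2),
               nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
               nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU()]
    if dropout:
        blocks.append(nn.Dropout(0.5))
    blocks.append(nn.Linear(256, 10))
    return nn.Sequential(*blocks)

variants = {
    "batch normalization": make_cnn(batchnorm=True),
    "dropout": make_cnn(dropout=True),
    "batch normalization + dropout": make_cnn(batchnorm=True, dropout=True),
}

# Random CIFAR-10-shaped batch standing in for the real DataLoader
X = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
criterion = nn.CrossEntropyLoss()

losses = {}
for name, model in variants.items():
    opt = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, as required
    opt.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    opt.step()
    losses[name] = loss.item()
    print(f"{name}: initial loss {losses[name]:.3f}")
```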
Output:
LAB-7
Code:
Output:
LAB-8
Q1) Write a program to analyze the performance of a 5-layer CNN model on the Oxford102 dataset by performing hyperparameter tuning through grid search and random search.
Code:
import numpy as np
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from scikeras.wrappers import KerasClassifier  # scikit-learn wrapper (requires the scikeras package)
from tensorflow.keras import layers, models

# Load dataset (placeholder helper: Oxford102 is not bundled with Keras,
# so this function must be provided separately)
train_data, test_data = load_oxford102_dataset()
X_train, y_train = train_data
X_test, y_test = test_data

def create_model(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dense(102, activation='softmax')  # 102 classes in Oxford102
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Wrap the Keras model so scikit-learn's search utilities can drive it
estimator = KerasClassifier(model=create_model, verbose=0)
param_grid = {'batch_size': [32, 64], 'epochs': [10, 20]}  # illustrative values

# Grid search: exhaustive over every combination in param_grid
grid_result = GridSearchCV(estimator, param_grid, cv=3).fit(X_train, y_train)

# Random search: samples a fixed number of combinations
random_result = RandomizedSearchCV(estimator, param_grid, n_iter=3,
                                   cv=3).fit(X_train, y_train)

# Evaluate the best model from Grid Search on the testing set
y_pred = grid_result.best_estimator_.predict(X_test)
print("Grid Search Model:")
print("Accuracy:", np.mean(y_pred == y_test))

# Evaluate the best model from Random Search on the testing set
y_pred = random_result.best_estimator_.predict(X_test)
print("Random Search Model:")
print("Accuracy:", np.mean(y_pred == y_test))
Output:
LAB-9
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, datasets
import matplotlib.pyplot as plt

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # Encoder: two strided convolutions halve the resolution twice
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the original size
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1,
                               output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2, padding=1,
                               output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class PascalVOCDataset(Dataset):
    def __init__(self, root_dir, transform=None):
        self.root_dir = root_dir
        self.transform = transform
        self.data = datasets.VOCSegmentation(root=root_dir, download=True,
                                             transform=None)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img, mask = self.data[idx]
        if self.transform:
            img = self.transform(img)
            mask = self.transform(mask)
        return img, mask

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
transform = transforms.Compose([transforms.Resize((128, 128)),
                                transforms.ToTensor()])
test_loader = DataLoader(PascalVOCDataset('./data', transform=transform),
                         batch_size=8, shuffle=False)

autoencoder = Autoencoder().to(device)
criterion = nn.MSELoss()
optimizer = optim.Adam(autoencoder.parameters(), lr=0.001)
num_epochs = 10
# (training loop over num_epochs goes here)

# Evaluate reconstruction loss on the test set
autoencoder.eval()
total_loss, num_batches = 0.0, 0
with torch.no_grad():
    for images, _ in test_loader:
        images = images.to(device)
        # Forward pass
        outputs = autoencoder(images)
        # Calculate loss
        loss = criterion(outputs, images)
        total_loss += loss.item()
        num_batches += 1
print('Average test loss:', total_loss / num_batches)

# Show the first 8 originals next to their reconstructions
fig, axes = plt.subplots(2, 8, figsize=(16, 4))
images = images.cpu().numpy()
outputs = outputs.cpu().numpy()
for i in range(8):
    axes[0, i].imshow(images[i].transpose((1, 2, 0)))
    axes[0, i].set_title('Original')
    axes[0, i].axis('off')
    axes[1, i].imshow(outputs[i].transpose((1, 2, 0)))
    axes[1, i].set_title('Reconstructed')
    axes[1, i].axis('off')
plt.show()
Output:
LAB-10
Q1) Write a program to use a pre-trained GAN network to synthesize face images.
CODE:
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist  # example dataset, replace with your dataset
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt

def define_generator(latent_dim):
    model = Sequential()
    # First, project the input into a small spatial and deep representation
    model.add(layers.Dense(7 * 7 * 128, input_dim=latent_dim))
    model.add(layers.LeakyReLU(alpha=0.2))
    model.add(layers.Reshape((7, 7, 128)))
    # Upsample to 14x14
    model.add(layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))
    # Upsample to 28x28
    model.add(layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))
    # Output layer: one-channel image in [0, 1]
    model.add(layers.Conv2D(1, (7, 7), activation='sigmoid', padding='same'))
    return model

def define_discriminator(in_shape=(28, 28, 1)):
    model = Sequential()
    # Downsample to 14x14
    model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same',
                            input_shape=in_shape))
    model.add(layers.LeakyReLU(alpha=0.2))
    # Downsample to 7x7
    model.add(layers.Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU(alpha=0.2))
    # Classifier
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.4))
    model.add(layers.Dense(1, activation='sigmoid'))
    return model

# Define hyperparameters
latent_dim = 10
epochs = 50
batch_size = 32

generator = define_generator(latent_dim)
generator.compile(loss='binary_crossentropy', optimizer='adam')
discriminator = define_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(),
                      metrics=['accuracy'])

# Combined model: generator followed by a frozen discriminator,
# used for the generator updates
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer=Adam())

(X_train, _), _ = mnist.load_data()
X_train = (X_train.astype('float32') / 255.0).reshape(-1, 28, 28, 1)

for epoch in range(epochs):
    # Train the discriminator on half real, half generated images
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    real = X_train[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
    # Train the generator to fool the discriminator
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    # Print progress
    if epoch % 10 == 0:
        print(f"Epoch {epoch}, Discriminator Loss: {d_loss[0]}, Generator Loss: {g_loss}")