
MAHARASHTRA INSTITUTE OF TECHNOLOGY, AURANGABAD

LABORATORY MANUAL

PRACTICAL EXPERIMENT INSTRUCTION SHEET

DEPARTMENT: Emerging Science and Technology    LABORATORY: OCC Lab

Class: TY (AIDS), Batu    SUBJECT: DL    YEAR: 2024-2025

EXPERIMENT NO. 1

AIM: Loading a dataset into Keras/PyTorch and creating training and testing splits.

Algorithm:
In machine learning, training a model involves learning patterns from a dataset. However, it's
crucial to evaluate the model's generalization ability, meaning how well it performs on unseen
data. This is where splitting the data into training and testing sets comes in.

 Training Set: Used to train the model. The model learns the patterns and relationships within
this data.
 Testing Set: Used to evaluate the model's performance on unseen data. Since the model hasn't
seen this data during training, its performance on the testing set reflects its ability to generalize
to new examples.

Splitting the data ensures the model isn't simply memorizing the training data but can learn
and apply those learnings to new situations.

1. Import Libraries:
o Keras/PyTorch: Import the necessary libraries for your chosen deep learning framework (e.g., tensorflow.keras for Keras or torch for PyTorch).
o Scikit-learn (Optional): Import train_test_split from scikit-learn for a common splitting approach
(works with both Keras and PyTorch).
2. Load Dataset:
o Built-in Datasets (Keras): If using Keras, leverage built-in datasets like cifar10, mnist, etc.:
o Custom Datasets: For your own data, use appropriate loading functions based on data format (CSV,
images, etc.):
 CSV: Use pandas.read_csv to load CSV data.
 Images: Use PIL (Keras) or torchvision (PyTorch) for image loading and preprocessing.
3. Preprocess Data (Optional):
o Normalization/Standardization: Normalize or standardize features (e.g.,
using MinMaxScaler or StandardScaler from scikit-learn) to improve model performance.
o One-Hot Encoding (Categorical Features): One-hot encode categorical features if necessary.
o Image Augmentation (Images): Consider image augmentation techniques (random cropping,
flipping, etc.) to increase data variability and prevent overfitting.
4. Create Training and Testing Splits:


o Scikit-learn Splitting: Use train_test_split to create train/test splits (works with both Keras and
PyTorch):
o Framework-Specific Splitting (Keras/PyTorch): Explore framework-specific utilities for data
splitting, if available.
5. Convert Data to Tensors (PyTorch): If using PyTorch, convert NumPy arrays to PyTorch
tensors for efficient GPU training:
6. Create DataLoaders (Optional - PyTorch):
o For efficient batching and data pipeline management in PyTorch, use DataLoader:
o Replace train_dataset and test_dataset with your custom dataset objects (if applicable).
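
A minimal end-to-end sketch of these steps is shown below. It assumes TensorFlow/Keras, PyTorch, and scikit-learn are installed and uses the built-in MNIST dataset; the variable names and the 80/20 split are illustrative only.

import numpy as np
from tensorflow.keras.datasets import mnist
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import TensorDataset, DataLoader

# Load a built-in Keras dataset (returned as NumPy arrays) and scale to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Alternative: build a custom 80/20 split with scikit-learn (shuffled, reproducible)
x_all = np.concatenate([x_train, x_test])
y_all = np.concatenate([y_train, y_test])
x_tr, x_te, y_tr, y_te = train_test_split(
    x_all, y_all, test_size=0.2, shuffle=True, random_state=42)

# PyTorch: convert NumPy arrays to tensors and wrap them in DataLoaders
train_ds = TensorDataset(torch.tensor(x_tr, dtype=torch.float32),
                         torch.tensor(y_tr, dtype=torch.long))
test_ds = TensorDataset(torch.tensor(x_te, dtype=torch.float32),
                        torch.tensor(y_te, dtype=torch.long))
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=64)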

Expected Output:

 For built-in datasets, the output will be NumPy arrays (Keras) or tensors (PyTorch) containing
features (X) and target variables (y).
 For custom datasets, the structure depends on your implementation (lists, NumPy arrays,
tensors).

Conclusion
While Keras provides more built-in features to quickly get a model up and running, PyTorch
offers more granular control over the entire process. Both frameworks have their advantages
and are extensively used in the industry.

For best practices:

 Always normalize your data.


 Use data loaders in PyTorch for efficient data handling.
 Utilize validation splits to monitor model performance during training.
 Ensure that your data is shuffled appropriately to prevent model bias.

PROGRAM:-


EXPERIMENT NO. 2

AIM: Creating functions to compute various losses.

Algorithm:

Common Loss Functions:

1. Mean Squared Error (MSE):

o Theory: MSE is a popular choice for regression problems, where the objective is to predict continuous values. It calculates the average squared difference between the true labels (y_true) and the predicted values (y_pred).

o Algorithm:

import numpy as np

def mean_squared_error(y_true, y_pred):
    """
    Calculates the mean squared error between true labels and predictions.

    Args:
        y_true: A numpy array of true labels.
        y_pred: A numpy array of predictions.

    Returns:
        A float representing the mean squared error.
    """
    return np.mean((y_true - y_pred) ** 2)

o Example: Consider true labels y_true = [1, 3, 2] and predictions y_pred = [0.8, 3.5, 1.7]. The squared errors are 0.04, 0.25, and 0.09, so mean_squared_error(y_true, y_pred) ≈ 0.13.

o Expected Output: A single floating-point value representing the average squared difference between true and predicted values.

2. Mean Absolute Error (MAE):

o Theory: MAE is another common regression loss function. It calculates the average absolute difference between true labels and predictions, offering less sensitivity to outliers compared to MSE.

o Algorithm:

def mean_absolute_error(y_true, y_pred):
    """
    Calculates the mean absolute error between true labels and predictions.

    Args:
        y_true: A numpy array of true labels.
        y_pred: A numpy array of predictions.

    Returns:
        A float representing the mean absolute error.
    """
    return np.mean(np.abs(y_true - y_pred))

o Example: Using the same data as for MSE (y_true = [1, 3, 2], y_pred = [0.8, 3.5, 1.7]), the absolute errors are 0.2, 0.5, and 0.3, so mean_absolute_error(y_true, y_pred) ≈ 0.33.

o Expected Output: A single floating-point value representing the average absolute difference between true and predicted values.

3. Binary Cross Entropy (BCE):

o Theory: BCE is employed for binary classification tasks, where the model
predicts the probability of an instance belonging to a particular class
(usually 0 or 1). It measures the divergence between the true labels and
the predicted probabilities.

o Algorithm:


def binary_cross_entropy(y_true, y_pred):
    """
    Calculates the binary cross entropy loss between true labels and predictions.

    Args:
        y_true: A numpy array of true labels (0 or 1).
        y_pred: A numpy array of predictions between 0 and 1.

    Returns:
        A float representing the binary cross entropy loss.
    """
    # Clip predictions to avoid taking log(0)
    y_pred = np.clip(y_pred, 1e-9, 1 - 1e-9)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

o Example: Let y_true = [1, 0, 1, 0] represent true labels and y_pred = [0.8,
0.2, 0.7, 0.3] be the predicted probabilities. The BCE loss would
be: binary_cross_entropy(y_true, y_pred) = 0.289 (approximately).
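
A quick sanity check of the three functions on the example values above (a small sketch, assuming NumPy is imported as np and the functions defined earlier are in scope):

y_true = np.array([1, 3, 2]); y_pred = np.array([0.8, 3.5, 1.7])
print(mean_squared_error(y_true, y_pred))     # ~0.1267
print(mean_absolute_error(y_true, y_pred))    # ~0.3333

yb_true = np.array([1, 0, 1, 0]); yb_pred = np.array([0.8, 0.2, 0.7, 0.3])
print(binary_cross_entropy(yb_true, yb_pred)) # ~0.2899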


o Expected Output: A single floating-point value representing the average BCE loss across all data points.

Conclusion:

The choice of loss function hinges on the specific machine learning problem you're tackling. MSE and MAE are well-suited for regression, while BCE caters to binary classification. It's crucial to understand the underlying assumptions and properties of each loss function before applying it.

PROGRAM:-


EXPERIMENT NO. 3

AIM: Feeding data to a pretrained neural network and making predictions

Algorithm:
Pre-trained neural networks are models trained on massive datasets for general-
purpose tasks like image recognition or natural language processing. "Feeding" data
refers to passing new, unseen data to the pre-trained model to make predictions. This
technique leverages the pre-trained model's learned features and relationships to
make predictions on new data without requiring training from scratch.

Algorithm (General Steps):

1. Data Preprocessing: Ensure your new data format aligns with the pre-trained
model's input requirements. This might involve resizing images, converting text to
numerical representations, or normalizing values.
2. Load Pre-trained Model: Use libraries like TensorFlow or PyTorch to load the pre-
trained model architecture and its weights.
3. Disable Training (Optional): Since you're not retraining, you might want to disable
the training mode of the pre-trained model to improve efficiency.
4. Pass Data Through the Model: Feed your preprocessed new data through the pre-
trained model's layers. Each layer performs its learned transformations, extracting
features and making predictions.
5. Obtain Predictions: Depending on the pre-trained model's purpose, the output could
be:
o Classification: A probability distribution for each class (e.g., cat: 0.8, dog: 0.2)
o Regression: A continuous value representing the prediction (e.g., house price:
$500,000)
o Feature Extraction: Intermediate activations from hidden layers can be used as
features for further analysis.

Example:


Imagine you have a pre-trained image classification model (e.g., VGG16) trained on
millions of images. You want to use it to predict the content of a new image containing
a cat.

1. Preprocess: Resize the image to the size expected by VGG16.


2. Load the Model: Use TensorFlow to load the VGG16 model architecture and its pre-
trained weights.
3. Disable Training (Optional): Set the model to evaluation mode.
4. Pass Data: Feed the preprocessed image through VGG16.
5. Obtain Prediction: The output will be a probability distribution for each class (e.g., cat:
0.9, dog: 0.1), indicating a high probability for "cat."
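
A minimal Keras sketch of this VGG16 workflow (assuming TensorFlow is installed; "cat.jpg" is a placeholder path for your own image):

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load the pre-trained model with ImageNet weights (inference only, no training)
model = VGG16(weights="imagenet")

# Preprocess the new image to the 224x224 input size VGG16 expects
img = image.load_img("cat.jpg", target_size=(224, 224))   # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Forward pass: class probabilities over the 1000 ImageNet classes
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])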

Expected Output:

The expected output depends on the pre-trained model's purpose. In classification tasks, you'll receive class probabilities or labels. For regression, you'll get a continuous predicted value. Additionally, you might obtain intermediate feature representations for further analysis.

Conclusion:

Feeding data to pre-trained models is a powerful technique for leveraging existing knowledge and making quick predictions on new data. However, it's crucial to consider limitations:

 Pre-trained models might not be specifically tailored to your task, potentially leading
to suboptimal results compared to training a model from scratch on your specific data.
 Pre-trained models can be computationally expensive to run, especially large models
like VGG16.

For best results, consider fine-tuning the pre-trained model on your specific data to
improve its performance for your task.

PROGRAM:-


EXPERIMENT NO. 4

AIM: Implementing regression using deep neural network

Algorithm:
DNNs consist of multiple interconnected layers of artificial neurons. Each neuron
applies a non-linear activation function to a weighted sum of its inputs. By stacking
these layers, DNNs can learn complex relationships between input features and the
target variable.

In regression, the DNN learns to map a set of input features (x) to a continuous output
value (y) that represents the predicted value of the target variable. The loss function
(e.g., mean squared error) measures the difference between the predicted and actual
values. During training, the DNN adjusts its weights to minimize the loss function,
effectively learning the underlying relationship between features and the target
variable.

Algorithm:

1. Data Preparation:
o Gather your dataset containing input features and the target variable.
o Preprocess the data by scaling or normalizing features to a common range.
2. Model Definition:
o Choose a DNN architecture with an appropriate number of hidden layers and neurons
per layer.
o Specify an activation function for hidden layers (e.g., ReLU) and a linear activation for
the output layer.
3. Training:
o Define an optimizer (e.g., Adam) that updates the network's weights during training.
o Choose a loss function suitable for regression (e.g., mean squared error).
o Train the DNN by iteratively feeding batches of data through the network, computing
the loss, and updating weights using the optimizer to minimize the loss.
4. Evaluation:
o Use a separate validation set to monitor the DNN's performance during training and
prevent overfitting.
o After training, evaluate the model's performance on unseen test data using metrics
like mean squared error or R-squared.


5. Prediction:
o Once satisfied with the model's performance, use it to predict target values for new,
unseen data points.

Example:

Imagine you have a dataset with features like house size, location, and number of
bedrooms, and the target variable is house price. You can build a DNN with several
hidden layers and train it to predict house prices based on the input features.
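
A minimal Keras sketch of such a regression network on synthetic stand-in data (the three features and their weights are purely illustrative, not a real housing dataset):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for a housing dataset: 3 features -> continuous target
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = 100 * X[:, 0] + 20 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 0.5, 1000)

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(3,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1)                          # linear output for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
print(model.predict(X[:3]))                  # predicted values for three samples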

Expected Output:

For each new data point containing house features, the trained DNN will predict a
continuous value representing the estimated house price.

Conclusion:

DNNs offer a powerful approach to regression problems, especially when dealing with
complex, non-linear relationships between features and the target variable. However,
they require careful consideration of factors like:

 Hyperparameter Tuning: Finding the optimal network architecture, learning rate, and
other hyperparameters can significantly impact performance.
 Overfitting: DNNs are prone to overfitting, where the model memorizes the training
data and performs poorly on unseen data. Techniques like regularization and dropout
can help mitigate this.
 Computational Cost: Training DNNs can be computationally expensive, especially
for large datasets or complex architectures.

For simpler regression problems, linear regression models might be sufficient. However, DNNs shine when dealing with intricate relationships and large datasets.

PROGRAM:-


EXPERIMENT NO. 5

AIM: Classifying IMDB movie review dataset using deep neural network-binary
classification problem

Algorithm:
This task involves building a DNN that can distinguish between positive and negative
movie reviews based on the text content. The DNN learns to extract features from the
reviews and map them to two output classes: positive (e.g., label 1) and negative (e.g.,
label 0).

Algorithm:

1. Data Preprocessing:
o Load the IMDB dataset, which typically comes pre-processed with reviews converted
to sequences of integers representing words.
o Consider further cleaning the text data (e.g., removing stop words,
stemming/lemmatization).
o Split the data into training, validation, and test sets.
2. Model Definition:
o Design a DNN architecture with an embedding layer to convert integer-encoded
reviews into dense vectors capturing word meaning.
o Stack multiple hidden layers with activation functions (e.g., ReLU) to learn complex
features from the review text.
o Use a dropout layer (optional) to prevent overfitting.
o Add a final output layer with one neuron and a sigmoid activation function to predict
the probability of a review being positive (between 0 and 1).
3. Training:
o Define a loss function suitable for binary classification (e.g., binary cross-entropy).
o Choose an optimizer (e.g., Adam) to update the DNN's weights during training.
o Train the DNN by iteratively:
 Feeding batches of review sequences and their corresponding labels (0 for negative,
1 for positive) through the network.
 Calculating the loss between predicted probabilities and actual labels.


 Updating the network's weights using the optimizer to minimize the loss.
o Monitor the DNN's performance on the validation set during training to prevent
overfitting.
4. Evaluation:
o After training, evaluate the model's performance on the unseen test set using metrics
like accuracy (percentage of correctly classified reviews), precision (ratio of true
positives to predicted positives), and recall (ratio of true positives to all actual
positives).
5. Prediction:
o Use the trained DNN to predict the sentiment (positive or negative) of new, unseen
movie reviews.
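
A minimal Keras sketch of this binary classifier on the built-in IMDB dataset (the vocabulary size, sequence length, and layer sizes are illustrative choices, not prescribed values):

from tensorflow import keras
from tensorflow.keras import layers

# Load IMDB, keeping the 10,000 most frequent words; pad reviews to a fixed length
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=10000)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=256)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=256)

model = keras.Sequential([
    layers.Embedding(10000, 16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid")    # probability of a positive review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=512, validation_split=0.2)
print(model.evaluate(x_test, y_test))        # [test loss, test accuracy]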

Example:

Consider a review: "This movie was absolutely fantastic! A must-watch!" The preprocessed review might be a sequence of integers. The DNN, after training, would predict a high probability (close to 1) for the positive class, indicating a positive sentiment.

Expected Output:

For each new review, the DNN will predict a probability between 0 and 1. A value
closer to 1 signifies a positive review, while a value closer to 0 suggests a negative
review. You can define a threshold (e.g., 0.5) to classify the review as positive or
negative based on the predicted probability.

Conclusion:

DNNs are a powerful approach for sentiment analysis tasks like classifying movie
reviews. They can capture complex relationships within text data and achieve high
accuracy. However, keep in mind:

 Hyperparameter tuning is crucial for optimal performance.


 Overfitting can be a challenge. Mitigate it with techniques like dropout,
regularization, and early stopping.
 Data quality significantly impacts performance. Preprocessing and cleaning the text
data are essential.


For simpler classification tasks, traditional machine learning algorithms like Support
Vector Machines (SVMs) might be sufficient. However, DNNs excel when dealing with
large datasets and complex relationships within the text data.

PROGRAM:-


EXPERIMENT NO. 6

AIM: Classifying the Reuters dataset using a deep neural network - multiclass classification problem

Algorithm:
The DNN learns to classify news articles into various predefined categories (e.g.,
sports, business, politics). It extracts features from the text content and maps them to
multiple output neurons, each representing a specific topic class.

Algorithm:

1. Data Preprocessing:
o Load the Reuters dataset, which might come with preprocessed text data.
o Consider further cleaning the text (e.g., removing stop words,
stemming/lemmatization).
o Convert text into numerical representations (e.g., word embedding or bag-of-words).
o Split the data into training, validation, and test sets.
2. Model Definition:
o Design a DNN architecture with an embedding layer to convert text data into dense
vectors.
o Stack multiple hidden layers with activation functions (e.g., ReLU) to learn complex
features from the news articles.
o Use a dropout layer (optional) to prevent overfitting.
o Add a final output layer with a number of neurons equal to the number of topic
classes (e.g., 46 for the standard Reuters dataset). Apply a softmax activation
function to this layer. The softmax function outputs a probability distribution across all
classes, where the sum of probabilities equals 1.
3. Training:
o Define a loss function suitable for multiclass classification (e.g., categorical cross-
entropy).
o Choose an optimizer (e.g., Adam) to update the DNN's weights during training.
o Train the DNN similarly to the binary classification case:
 Feed batches of text data and their corresponding labels (one-hot encoded vectors
representing the topic class) through the network.


 Calculate the loss between predicted probabilities and actual labels.


 Update the network's weights using the optimizer to minimize the loss.
o Monitor the DNN's performance on the validation set during training to prevent
overfitting.
4. Evaluation:
o After training, evaluate the model's performance on the unseen test set using metrics
like accuracy (percentage of correctly classified articles) and macro/micro-averaged
precision, recall, and F1 scores (consider using libraries like scikit-learn for these
metrics).
5. Prediction:
o Use the trained DNN to predict the topic class for new, unseen news articles.
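
A minimal Keras sketch of this multiclass classifier on the built-in Reuters dataset. For brevity it uses a bag-of-words encoding and integer labels with sparse categorical cross-entropy (rather than one-hot labels with categorical cross-entropy); the two formulations are equivalent here.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load the Reuters newswire dataset (46 topics), keeping the 10,000 most frequent words
(x_train, y_train), (x_test, y_test) = keras.datasets.reuters.load_data(num_words=10000)

def vectorize(sequences, dim=10000):
    # Multi-hot (bag-of-words) encoding of each article
    out = np.zeros((len(sequences), dim))
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0
    return out

x_train, x_test = vectorize(x_train), vectorize(x_test)

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(10000,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(46, activation="softmax")   # one neuron per topic class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",   # integer labels
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=512, validation_split=0.2)
print(model.evaluate(x_test, y_test))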

Example:

Consider a news article about an economic downturn. The preprocessed text would
be fed into the DNN. After training, the DNN might predict a high probability for the
"business" class and lower probabilities for other classes like "sports" or "politics."

Expected Output:

For each new article, the DNN will predict a probability distribution across all topic
classes. The class with the highest probability indicates the predicted topic.

Conclusion:

DNNs are well-suited for multiclass classification tasks like classifying news articles.
They can effectively learn complex relationships within text data and achieve high
classification accuracy. However, remember:

 Hyperparameter tuning is crucial to optimize performance for your specific dataset and number of classes.
 Overfitting is a significant concern. Techniques like dropout, regularization, and early
stopping can help mitigate it.
 Data quality significantly impacts performance. Preprocessing and cleaning the text
data are essential.

For smaller datasets or simpler classification problems, other algorithms like Random
Forests or Support Vector Machines (SVMs) with appropriate multiclass adaptations
might be viable alternatives. However, DNNs excel when dealing with large datasets
and intricate relationships within the text data.


EXPERIMENT NO. 7

AIM: Classifying MNIST dataset using CNN

Algorithm:
Convolutional Neural Networks (CNNs) are a powerful architecture specifically
designed for image recognition tasks. They excel at capturing spatial relationships and
local features within images, making them ideal for datasets like MNIST, which
contains handwritten digits. Here's how CNNs work in this context:

 Convolutional Layers: These layers apply filters (kernels) that slide across the
image, extracting features like edges, lines, and curves. The filter weights are learned
during training, allowing the network to identify these features automatically.
 Pooling Layers: These layers reduce the dimensionality of the data by applying
pooling operations (e.g., max pooling) to summarize the most important information
from the previous layer's activations. This helps control overfitting and computational
cost.
 Activation Functions: Non-linear activation functions (e.g., ReLU) are used after
each convolutional layer to introduce non-linearity and help the network learn more
complex feature representations.
 Fully Connected Layers: After the convolutional and pooling layers, the network
transitions to fully connected layers similar to traditional neural networks. These
layers combine the extracted features to make the final classification decision.
 Softmax Activation: The output layer typically uses a softmax activation function to
produce a probability distribution across all digit classes (0-9 for MNIST). Each output
neuron represents a class, and the predicted digit is the one with the highest
predicted probability.

Algorithm

1. Data Preprocessing:
o Load the MNIST dataset, which comes with training and testing images (28x28 pixel
grayscale) and their corresponding labels (0-9).
o Normalize the pixel values (usually to the range [0, 1]) to improve training stability.
o Consider data augmentation techniques (e.g., random cropping, rotations) to
artificially increase the dataset size and make the model more robust to variations.
2. Model Definition:


o Define a CNN architecture with:


 Convolutional layers (e.g., 2-3 convolutional layers) with appropriate filter sizes (e.g.,
3x3, 5x5) and the desired number of filters per layer (e.g., 32, 64). Experiment with
these hyperparameters to find the optimal configuration for your problem.
 Pooling layers (e.g., max pooling layers) with a stride of 2 to downsample the data
and reduce dimensionality.
 Activation functions (e.g., ReLU) after each convolutional layer to introduce non-
linearity.
 Fully connected layers (e.g., 1-2 fully connected layers) with a number of neurons in
the final layer equal to the number of classes (10 for MNIST).
o Choose an appropriate optimizer (e.g., Adam) to update the network's weights during
training.
o Select a loss function suitable for multiclass classification (e.g., categorical cross-
entropy) to measure the difference between the predicted probabilities and the actual
labels.
3. Training:
o Train the CNN in an iterative process:
 Feed batches of images and their corresponding labels through the network.
 Calculate the loss between the predicted probabilities and actual labels.
 Use the optimizer to update the network's weights in a direction that minimizes the
loss.
 Monitor training progress (e.g., loss, accuracy) on the training set and validation set
(a subset of the training data used to prevent overfitting). Early stopping can be used
to stop training if the validation loss starts to increase, indicating overfitting.
4. Evaluation:
o After training, evaluate the model's performance on the unseen test set using metrics
like:
 Accuracy: Percentage of correctly classified images.
 Precision: Ratio of true positives to predicted positives.
 Recall: Ratio of true positives to all actual positives.
 F1-score: Harmonic mean of precision and recall.
5. Prediction:
o Use the trained CNN to predict the class (digit) of new, unseen images.
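
A minimal Keras CNN sketch for MNIST following these steps (the filter counts, layer sizes, and number of epochs are illustrative and worth tuning):

from tensorflow import keras
from tensorflow.keras import layers

# Load and normalize MNIST; add a channel dimension for the convolutional layers
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax")   # one probability per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))        # [test loss, test accuracy]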

Example

Imagine a new handwritten digit image (28x28 pixels). The preprocessed image
would be fed through the trained CNN. After forward propagation, the output layer
would produce a probability distribution across the 10 digit classes. The class with the
highest probability would be the predicted digit.


Expected Output

For each new image, the trained CNN will generate a probability distribution with a
value between 0 and 1 for each digit class (0-9). The class with the highest probability
signifies the predicted digit.

Conclusion

CNNs are a highly effective approach for image classification tasks like MNIST digit recognition, since their convolutional and pooling layers learn spatial features automatically. As with other deep models, careful hyperparameter tuning and overfitting control (e.g., data augmentation and early stopping) are still needed for best results.

PROGRAM:-


EXPERIMENT NO. 8

AIM: Classifying data using pre-trained models/transfer learning

Algorithm:
Transfer learning is a powerful technique in machine learning where you leverage a pre-
trained model on a large dataset for a new task. Here's how it works:

1. Pre-trained Model: A model trained on a massive dataset like ImageNet (for image
recognition) or BERT (for natural language processing) learns general-purpose
features that are useful for various computer vision or NLP tasks.
2. Feature Extraction: The pre-trained model's initial layers typically extract low-level
features like edges, lines, or word embeddings. These layers capture generic
knowledge applicable to many tasks.
3. Fine-tuning: You freeze the weights of the pre-trained model's earlier layers (feature
extraction) and focus on training the final layers with your specific dataset. This
leverages the pre-trained knowledge while adapting it to your specific classification
problem.

Benefits:

 Reduced Training Time: By using pre-trained features, you don't need to train a
model from scratch, saving time and computational resources.
 Improved Performance: Pre-trained models often outperform models trained from
scratch on smaller datasets, especially for complex tasks.
 Reduced Overfitting: Transfer learning helps regulate the model, reducing the risk of
overfitting to your specific dataset.

Algorithm:

1. Choose a Pre-trained Model: Select a pre-trained model appropriate for your task
(e.g., VGG16 for image classification, BERT for text classification).


2. Prepare Your Data: Preprocess your data to match the input format of the pre-
trained model (e.g., image resizing, text tokenization).
3. Freeze Early Layers: Freeze the weights of the earlier layers in the pre-trained
model to preserve their learned features.
4. Add New Layers: Add new fully connected layers on top of the pre-trained model to
handle your specific classification problem and number of output classes.
5. Fine-tune the Model: Train the newly added layers and the top few layers of the pre-
trained model using your dataset and a suitable optimizer and loss function.
6. Evaluate: Monitor performance on a validation set to prevent overfitting and evaluate
the final model's accuracy on a held-out test set.

Example:

Imagine you want to classify cat vs. dog images using a smaller dataset than
ImageNet. You can:

1. Choose a pre-trained model like VGG16 trained on ImageNet.


2. Freeze the weights of VGG16's early convolutional layers.
3. Add new fully connected layers at the end for binary classification (cat or dog).
4. Train only the new layers and a few top layers of VGG16 using your cat and dog
image dataset.
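
A minimal Keras transfer-learning sketch for this cat-vs-dog example (the dense-layer size and learning rate are illustrative; train_ds and val_ds stand for your own preprocessed datasets and are therefore left as a commented-out placeholder):

from tensorflow import keras
from tensorflow.keras import layers

# Load VGG16 without its classification head and freeze the convolutional base
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False

# Add a small new head for binary classification (cat vs. dog)
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid")
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds are hypothetical datasets of preprocessed (image, label) pairs
# built from your own cat/dog image folders:
# model.fit(train_ds, validation_data=val_ds, epochs=5)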

Expected Output:

For each new image, the fine-tuned model will predict a probability of it belonging to
the "cat" or "dog" class.

Conclusion

Transfer learning provides a powerful approach to classifying data, particularly when dealing with limited datasets or complex tasks. It leverages pre-trained models' knowledge while adapting it to your specific problem. However, consider these points:

 Pre-trained Model Choice: Choose a pre-trained model related to your task for
optimal performance transfer.
 Data Suitability: Ensure your dataset has enough data points for fine-tuning. Very
small datasets might not benefit significantly from transfer learning.
 Fine-tuning Parameters: Experiment with the number of layers to freeze and the
learning rate for fine-tuning to achieve the best results.


EXPERIMENT NO. 9

AIM: Training various popular neural networks (ResNet, VGGNet, InceptionV3, etc.) on a custom dataset

Algorithm:
These pre-trained networks (ResNet, VGGNet, InceptionV3) excel at image
recognition tasks due to their deep architectures and ability to learn complex features
from image data. Transfer learning allows you to leverage their pre-trained weights to
improve the efficiency and accuracy of training on your custom dataset.

Algorithm:

1. Data Preparation:
o Gather and Organize: Collect your dataset, ensuring it's labeled and appropriately
structured for training (e.g., separate folders for different classes).
o Preprocess: Normalize or standardize pixel values for image datasets, convert text
to numerical representations if applicable, and consider data augmentation
techniques (random cropping, flipping) to artificially increase dataset size.
o Split: Divide your data into training, validation, and test sets (typically
80%/10%/10%). The validation set helps monitor overfitting during training, while the
test set evaluates final performance.
2. Choose a Pre-trained Network:
o Select a network architecture suitable for your task:
 ResNet: Effective for image classification, known for residual connections that
address vanishing gradient problems in deep architectures. Consider different
variants (ResNet50, ResNet101) based on dataset size and complexity.
 VGGNet: Deep convolutional architecture with good performance on image
classification. Be mindful of its memory requirements if your dataset is large. Consider
smaller variants like VGG16 compared to VGG19.
 InceptionV3: Uses inception modules for efficient feature extraction. Can be
computationally expensive to train, so consider your hardware resources.
3. Load and Modify the Network:
o Use deep learning libraries like TensorFlow or PyTorch to load the pre-trained model
architecture and its weights.
o Transfer Learning:


 Freeze Early Layers: In most cases, freeze the weights of the pre-trained model's
earlier layers (feature extraction) to preserve their general knowledge.
 Add New Layers: Add new fully connected layers at the end, tailored to the number
of classes in your classification task.
4. Model Compilation:
o Define an optimizer (e.g., Adam) that updates the network's weights during training.
o Choose a loss function suitable for your task (e.g., categorical cross-entropy for
multiclass classification, mean squared error for regression).
o Compile the model with the optimizer, loss function, and any additional metrics (e.g.,
accuracy).
5. Training:
o Train the model iteratively:
 Feed batches of data from your training set through the network.
 Calculate the loss between predicted outputs and actual labels.
 Use the optimizer to adjust the weights of the trainable layers (frozen layers in
transfer learning) to minimize the loss.
 Monitor training progress (loss, accuracy) on both the training and validation sets.
Use early stopping if validation loss increases to prevent overfitting.
6. Evaluation:
o After training, evaluate the model's performance on the unseen test set using metrics
relevant to your task (e.g., accuracy, precision, recall, F1-score).
7. Prediction:
o Use the trained model to make predictions on new, unseen data points.

Example:

Imagine you have a custom dataset of cat and dog images. You can:

1. Preprocess the images (resize, normalize).


2. Choose a pre-trained model like ResNet50.
3. Freeze the earlier layers of ResNet50 and add new fully connected layers for two
classes (cat and dog).
4. Train the model using your cat and dog image dataset.
5. Evaluate the model on the test set to see how well it distinguishes cats from dogs.
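
A minimal Keras sketch of this ResNet50 transfer-learning setup (the directory layout data/train and data/val is hypothetical; the image size, batch size, and epoch count are illustrative):

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical layout: data/train/<class_name>/*.jpg and data/val/<class_name>/*.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)

# ResNet50 backbone with frozen ImageNet weights
base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)   # one neuron per class
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)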

Expected Output:

For each new image, the trained model will predict a probability of it belonging to a
specific class (e.g., "cat" with a probability of 0.8 and "dog" with a probability of 0.2).


Conclusion:

Training popular deep neural networks on custom datasets using transfer learning
offers significant advantages:

 Reduced Training Time: Leveraging pre-trained weights significantly reduces training time compared to training from scratch.
 Improved Performance: Transfer learning often leads to better performance,
especially for smaller datasets, as the model can adapt its knowledge to your specific
task.
 Efficiency: You gain access to the capabilities of these powerful architectures without the need to design and train an entirely new network.

PROGRAM:-


EXPERIMENT NO. 10

AIM: Temperature forecasting using RNN

Algorithm:
RNNs are a type of neural network adept at handling sequential data like time series,
making them well-suited for temperature forecasting. They can learn patterns and
dependencies within historical temperature data to predict future values.

RNNs work by processing data one step at a time, maintaining an internal state
(memory) that captures information from previous inputs. This allows them to
consider the temporal relationship between temperatures across different points in
time.

Algorithm:

1. Data Preparation:
o Gather historical temperature data, including timestamps and the desired temperature
variable (e.g., daily average temperature).
o Preprocess the data by scaling or normalizing the temperature values to a common
range.
2. Model Definition:
o Choose an RNN architecture suitable for your task. Common options include:
 Simple RNN: A basic RNN, but can suffer from vanishing gradient problems for long
sequences.
 Long Short-Term Memory (LSTM): Addresses vanishing gradients by controlling
information flow through gates in the network.
 Gated Recurrent Unit (GRU): Similar to LSTM, but with a simpler architecture.
o Define the number of hidden layers (cells) in the RNN, which determines the model's
complexity and capacity to learn patterns.
o Add an output layer with one neuron (for single-value temperature prediction) and a
linear activation function.
3. Training:
o Define an optimizer (e.g., Adam) to update the network's weights during training.
o Choose a loss function suitable for regression tasks (e.g., mean squared error) to
measure the difference between predicted and actual temperatures.
o Train the RNN iteratively:


 Feed sequences of historical temperature data (e.g., past week's daily temperatures)
and their corresponding actual future temperatures (e.g., next day's temperature)
through the network.
 Calculate the loss between predicted and actual temperatures.
 Use the optimizer to adjust the RNN's weights to minimize the loss.
o Monitor training progress (loss) on both the training and validation sets to prevent
overfitting.
4. Evaluation:
o After training, evaluate the model's performance on the unseen test set using metrics
like mean squared error or Mean Absolute Error (MAE) to assess the accuracy of its
temperature predictions.
5. Prediction:
o Use the trained RNN to predict future temperatures based on new, unseen
sequences of historical data.

Example:

Imagine you have a dataset with daily average temperatures for the past year. You
can:

1. Preprocess the temperature values by scaling them to a range between 0 and 1.


2. Choose an LSTM network with one hidden layer.
3. Train the LSTM on sequences of past week's temperatures to predict the next day's
temperature.
4. Evaluate the model's performance on unseen data (e.g., past month's temperatures)
using MAE.
5. Use the trained LSTM to predict the temperature for the upcoming week based on the
most recent historical data.
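
A minimal Keras LSTM sketch of this workflow on synthetic stand-in temperature data (the sinusoidal series, 7-day window, and layer size are illustrative, not a real weather dataset):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for a year of daily average temperatures
days = np.arange(365)
temps = 20 + 10 * np.sin(2 * np.pi * days / 365) + np.random.normal(0, 1, 365)

# Scale to [0, 1] and build (past 7 days -> next day) training pairs
t_min, t_max = temps.min(), temps.max()
scaled = (temps - t_min) / (t_max - t_min)
window = 7
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])
y = scaled[window:]
X = X[..., np.newaxis]                       # shape: (samples, timesteps, features)

model = keras.Sequential([
    layers.LSTM(32, input_shape=(window, 1)),
    layers.Dense(1)                          # linear output: next-day temperature
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)

# Predict the next day from the most recent 7 days and undo the scaling
next_scaled = model.predict(scaled[-window:].reshape(1, window, 1))
print(next_scaled * (t_max - t_min) + t_min)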

Expected Output:

The trained RNN will predict a single temperature value (e.g., next day's average
temperature) for each new sequence of historical data it encounters.

Conclusion:

RNNs offer a powerful approach to temperature forecasting by learning temporal relationships within temperature data. However, consider these points:


 Model Selection: Different RNN architectures (Simple RNN, LSTM, GRU) have their
strengths and weaknesses. Experimentation might be needed to find the best fit for
your dataset.
 Hyperparameter Tuning: Tuning hyperparameters like the number of hidden layers
and learning rate can significantly impact performance.
 Data Quality and Length: The quality and length of your historical temperature data
directly affect the model's accuracy. More data often leads to better predictions.
 External factors: Weather forecasting is complex, influenced by various factors
beyond temperature (e.g., humidity, wind speed). RNNs might not capture all these
complexities.

PROGRAM:-


EXPERIMENT NO. 11

AIM: Implementation of GAN on any suitable dataset

Algorithm:
GANs are a powerful class of deep learning models consisting of two competing neural
networks:

1. Generator (G): Learns to create new data samples that resemble the real data
distribution.
2. Discriminator (D): Acts as a critic, aiming to distinguish between real data samples
(from the dataset) and the samples generated by the Generator.

Through an adversarial training process:

 The Generator continuously improves its ability to generate realistic data by trying to
fool the Discriminator.
 The Discriminator strives to accurately identify real vs. generated data, pushing the
Generator to create even better forgeries.

This competition leads both networks to improve, ultimately resulting in the Generator
producing high-quality, realistic data samples.

Algorithm:

1. Data Preparation:
o Choose a suitable dataset for your task. Common choices include images (e.g.,
MNIST handwritten digits, CelebA faces), music, or text data.
o Preprocess the data as required by the task (e.g., image resizing, normalization).
2. Model Definition:
o Design the architectures for both Generator (G) and Discriminator (D).
 G typically uses convolutional layers (for images) or recurrent layers (for sequential
data) to generate new samples.
 D typically uses convolutional layers (for images) or fully connected layers to classify
data as real or fake.
o Define loss functions:
 Generator Loss: Measures how well the generated samples fool the Discriminator.


 Discriminator Loss: Measures how well the Discriminator distinguishes real from
fake data.
3. Training:
o Train the model iteratively:
 In each iteration:
 Train the Discriminator: Feed real data and generated data to the Discriminator and
update its weights to improve its ability to discriminate.
 Train the Generator: Fix the Discriminator's weights and train the Generator to
minimize the Generator Loss (fooling the Discriminator).
o Monitor training progress by visualizing generated samples and tracking loss values
for both G and D.
4. Evaluation:
o After training, qualitatively assess the generated samples for realism and adherence
to the real data distribution.
o Consider quantitative metrics (e.g., Inception Score for image quality) if applicable to
your data type.
5. Generation:
o Use the trained Generator to create new data samples that resemble the real data
distribution.

Example (MNIST Handwritten Digits):

1. Dataset: MNIST dataset of handwritten digits (0-9).


2. G: Uses convolutional layers to generate new digit images.
3. D: Uses convolutional layers to classify images as real (from MNIST) or fake
(generated by G).
4. Training: Train G and D iteratively, aiming for G to produce realistic digits that fool D.
5. Expected Output: The trained G will create new images of handwritten digits that
resemble the real digits in the MNIST dataset.
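
A minimal Keras GAN sketch for MNIST following these steps. It uses small dense generator and discriminator networks instead of convolutional ones, and a short training loop, purely to illustrate the adversarial training pattern; all sizes and step counts are illustrative.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 64

# Generator: random latent vector -> 28x28 "digit" image
generator = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])

# Discriminator: image -> probability that the image is real
discriminator = keras.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model used to train the generator (discriminator frozen inside it)
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train / 255.0).reshape(-1, 28, 28, 1)

batch = 128
for step in range(1000):                     # short demo run
    # 1) Train the discriminator on half real, half generated images
    real = x_train[np.random.randint(0, len(x_train), batch)]
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    d_real = discriminator.train_on_batch(real, np.ones((batch, 1)))
    d_fake = discriminator.train_on_batch(fake, np.zeros((batch, 1)))
    # 2) Train the generator to make the discriminator label its output "real"
    g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))
    if step % 200 == 0:
        print(step, d_real, d_fake, g_loss)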

Conclusion:

GANs hold immense potential for various applications, including:

 Generating realistic images for image editing or creating training data for other
models.
 Creating realistic music or text data.
 Translating images from one style to another.

However, GANs can be challenging to train due to:


 Mode Collapse: The Generator might get stuck in a loop, producing only a limited set
of outputs.
 Training Instability: Balancing the training of G and D can be tricky, requiring careful
hyperparameter tuning.

PROGRAM:-

PREPARED BY: Ms. Shweta R Moim (Course Teacher)    PREPARED BY: Ms. Mrunal Mule (Course Coordinator)    APPROVED BY: Dr. Kavita Bhosale (HOD)
