Deep Learning Assignment 1,2
1. Feature Selection
Feature selection is the process of choosing a subset of relevant, informative features
from a larger set in a dataset. The goal is to improve model performance, reduce
overfitting, and enhance interpretability. Eliminating irrelevant or redundant features
lowers computational complexity and improves the generalization of machine learning
models. Techniques fall into three families: filter methods, which rank features by
statistical scores (e.g., correlation or mutual information); wrapper methods, which
search over feature subsets using a model's performance (e.g., recursive feature
elimination); and embedded methods, which select features during training itself (e.g.,
L1 regularization). Overall, feature selection improves the efficiency and effectiveness
of deep learning models by focusing them on the most informative features for a given
task.
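As a concrete illustration, here is a minimal sketch of a filter method using
scikit-learn's SelectKBest on synthetic data; the dataset, the ANOVA F-score criterion,
and the choice of k=5 are illustrative assumptions, not part of the assignment.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif

    # Synthetic data: 20 features, only 5 of which are informative.
    X, y = make_classification(n_samples=200, n_features=20,
                               n_informative=5, random_state=0)

    # Filter method: score each feature independently (ANOVA F-score)
    # and keep the 5 highest-scoring features.
    selector = SelectKBest(score_func=f_classif, k=5)
    X_selected = selector.fit_transform(X, y)
    print(X.shape, "->", X_selected.shape)  # (200, 20) -> (200, 5)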
2. Introduction to CNN
Principles of CNN (Convolutional Neural Network), illustrated in the sketch below:
● Convolutional Layers: Apply learned filters to extract local patterns and features
from the input data.
● Flattening: Feature maps are flattened into a vector before the fully connected
layers.
● Fully Connected Layers: Connect every neuron to produce the final predictions from
the learned features.
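A minimal sketch of these principles in PyTorch, assuming 28x28 grayscale inputs
(e.g., MNIST); the class name TinyCNN and all layer sizes are illustrative
assumptions, not the assignment's architecture.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # filters extract local patterns
                nn.ReLU(),
                nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
            )
            self.fc = nn.Linear(16 * 14 * 14, num_classes)   # final prediction

        def forward(self, x):
            x = self.conv(x)
            x = x.flatten(1)       # flatten spatial feature maps into a vector
            return self.fc(x)

    logits = TinyCNN()(torch.randn(8, 1, 28, 28))  # batch of 8 dummy images
    print(logits.shape)                            # torch.Size([8, 10])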
Applications of CNN:
● Image Recognition and Classification: Recognizing and classifying objects in
images.
● Object Detection: Locating and classifying objects within images or video frames, a
core capability for autonomous vehicles and other computer vision systems.
● Medical Image Analysis: Used for tasks like tumor detection in medical images.
3. Introduction to RNN (Recurrent Neural Network)
Recurrent Neural Networks (RNNs) are a class of artificial neural networks designed for
sequential data processing. Unlike traditional feedforward neural networks, RNNs have
connections that form directed cycles, allowing them to maintain a hidden state or
memory of previous inputs. This architecture makes RNNs well-suited for tasks where
context and temporal dependencies are crucial.
This design suits tasks such as natural language processing and time series prediction.
RNNs maintain a hidden state that acts as memory, share the same parameters across all
time steps for generalization, and are trained with Backpropagation Through Time (BPTT),
which unrolls the network over the sequence so that gradients flow through the temporal
dependencies.
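A minimal sketch of the recurrent update using PyTorch's RNNCell; the input size,
hidden size, and sequence length are illustrative assumptions. The same cell (and
hence the same weights) is applied at every time step, and the hidden state h carries
memory forward.

    import torch
    import torch.nn as nn

    cell = nn.RNNCell(input_size=4, hidden_size=8)  # one set of weights, shared across time
    x = torch.randn(2, 5, 4)                        # batch of 2 sequences, 5 time steps
    h = torch.zeros(2, 8)                           # initial hidden state (memory)
    for t in range(x.size(1)):
        # h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh)
        h = cell(x[:, t, :], h)
    print(h.shape)  # torch.Size([2, 8]): hidden state after the last step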
Deep Learning Assignment 2
1. Introduction to Advanced RNN
Advanced RNN variants address limitations in traditional RNNs, such as the vanishing
gradient problem, and enhance their ability to capture long-term dependencies in
sequential data. Two notable variants are Long Short-Term Memory (LSTM) networks and
Gated Recurrent Unit (GRU) networks.
LSTM networks use specialized memory cells and gating mechanisms to selectively
remember and forget information, addressing the vanishing gradient problem and
capturing long-term dependencies. GRU networks, similar to LSTMs, tackle the
vanishing gradient problem with a streamlined architecture and fewer parameters,
maintaining computational efficiency while achieving comparable performance in
capturing temporal dependencies.
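A minimal comparison sketch in PyTorch, with illustrative input and hidden sizes (all
dimensions are assumptions for demonstration); it shows the LSTM's separate cell state
and the GRU's smaller parameter count.

    import torch
    import torch.nn as nn

    x = torch.randn(2, 5, 4)  # batch of 2 sequences, 5 time steps, 4 features
    lstm = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
    gru = nn.GRU(input_size=4, hidden_size=8, batch_first=True)

    out_lstm, (h, c) = lstm(x)  # LSTM keeps a separate cell state c for long-term memory
    out_gru, h_gru = gru(x)     # GRU folds memory into a single hidden state

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(lstm), ">", count(gru))  # 448 > 336: GRU uses fewer parameters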
Applications of GNN (Graph Neural Networks):
● Social Network Analysis
● Recommendation Systems
● Node Classification
● Link Prediction
● Knowledge Graph Completion
5. CycleGAN:
Description: Performs image-to-image translation between two domains without paired
training examples, enforcing a cycle-consistency constraint so that a translated image
can be mapped back to the original.
Application: Style transfer, artistic image transformations.
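A minimal sketch of the cycle-consistency idea, with toy 1x1-convolution "generators"
standing in for CycleGAN's real deep generators (the names G_xy and G_yx, the image
sizes, and the toy layers are all illustrative assumptions): translating X -> Y and
back to X should reconstruct the unpaired input.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    G_xy = nn.Conv2d(3, 3, kernel_size=1)  # toy generator: domain X -> domain Y
    G_yx = nn.Conv2d(3, 3, kernel_size=1)  # toy generator: domain Y -> domain X

    x = torch.randn(4, 3, 64, 64)          # unpaired images from domain X
    # Cycle-consistency loss ||G_yx(G_xy(x)) - x||_1 pushes the round trip
    # X -> Y -> X to reproduce the original image, so no paired data is needed.
    cycle_loss = F.l1_loss(G_yx(G_xy(x)), x)
    print(cycle_loss.item())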