Important Questions (Unit 2)

Loss / Entropy Function


1. What is a loss function in the context of neural networks?
2. Why do neural networks use loss functions?
3. What is the role of the loss function in training a neural network?
4. Can you explain the concept of a cost function in machine learning, and how it
relates to the loss function?
5. What are some commonly used loss functions in neural networks?
6. When would you choose mean squared error (MSE) as the loss function, and what
kind of problems does it address?
7. What is cross-entropy loss, and why is it frequently used in classification problems?
8. How does the choice of loss function affect the learning process and the performance
of a neural network?
9. What is the difference between binary cross-entropy and categorical cross-entropy
loss?
10. In what situations would you use custom loss functions instead of standard ones?
11. How does the concept of weighted loss functions work, and when might you apply
them?
12. What is the impact of an imbalanced dataset on the choice of loss function?
13. Can you explain the concept of information entropy and its relation to loss functions
in neural networks?
14. How is the concept of entropy used in the context of softmax activation and
cross-entropy loss in classification tasks?
15. What is the difference between entropy and cross-entropy in the context of neural
networks?
16. How do you calculate the entropy of a probability distribution, and why is it important
in machine learning?
17. How do you interpret the value of the loss function in the training process of a neural
network?
18. What are some strategies for mitigating the vanishing gradient problem when using
certain loss functions?
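
For quick reference against the questions above, here is a minimal NumPy sketch (not part
of the original question set; the vectors are made-up toy values) of mean squared error,
entropy, and cross-entropy:

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: the average of the squared residuals.
    return np.mean((y_true - y_pred) ** 2)

def entropy(p, eps=1e-12):
    # Shannon entropy H(p) = -sum p_i * log(p_i) of a probability vector.
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def cross_entropy(p, q, eps=1e-12):
    # Cross-entropy H(p, q) = -sum p_i * log(q_i); equals H(p) when q == p.
    q = np.clip(q, eps, 1.0)
    return -np.sum(p * np.log(q))

y = np.array([0.0, 1.0, 0.0])        # one-hot target
y_hat = np.array([0.1, 0.7, 0.2])    # softmax-like prediction
print(mse(y, y_hat))                 # regression-style loss on the same vectors
print(entropy(y_hat))                # entropy of the predicted distribution
print(cross_entropy(y, y_hat))       # classification loss: -log(0.7)
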
Activation Function
1. What is an activation function in a neural network, and why is it necessary?
2. How do activation functions introduce non-linearity into neural networks?
3. Can you explain the concept of a threshold in activation functions, and how it affects
neuron activation?
4. What are the main characteristics that you consider when choosing an activation
function for a neural network?
5. How does the choice of activation function impact the training and performance of a
neural network?
6. What are the challenges associated with vanishing and exploding gradients, and how
do different activation functions address these issues?

ReLU (Rectified Linear Unit)-Specific Questions:

7. What is the ReLU activation function, and how does it work?


8. What are the advantages of using ReLU over other activation functions like sigmoid
or tanh?
9. What is the "dying ReLU" problem, and how can it be mitigated?
10. Can you explain the variants of ReLU, such as Leaky ReLU and Parametric ReLU
(PReLU)?
11. In what types of neural network architectures is ReLU commonly used?

Tanh (Hyperbolic Tangent)-Specific Questions:

12. What is the Tanh activation function, and how does it differ from the sigmoid
function?
13. How does the Tanh activation function map input values to the range [-1, 1]?
14. What are the advantages and disadvantages of using Tanh as an activation function?
15. When might you choose Tanh over ReLU or other activation functions?

Softmax-Specific Questions:

16. What is the Softmax activation function, and where is it typically used in neural
networks?
17. How does the Softmax function transform a vector of real numbers into a probability
distribution?
18. In what types of neural network layers is Softmax commonly applied?
19. Can you explain how Softmax is used in multi-class classification problems?
20. What is the relationship between Softmax and the cross-entropy loss function in
classification tasks?
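
As a companion to the activation-function questions above, here is a minimal NumPy sketch
(the input vector z is a made-up example) of ReLU, Leaky ReLU, tanh, and a numerically
stable softmax:

import numpy as np

def relu(x):
    # ReLU: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small slope alpha for negative inputs avoids "dying ReLU".
    return np.where(x > 0, x, alpha * x)

def tanh(x):
    # Hyperbolic tangent: squashes inputs into (-1, 1) and is zero-centered.
    return np.tanh(x)

def softmax(z):
    # Softmax: exponentiate and normalize; subtracting max(z) avoids overflow.
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-2.0, 0.5, 3.0])
print(relu(z), leaky_relu(z), tanh(z))
print(softmax(z), softmax(z).sum())   # sums to 1.0: a probability distribution
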
Associative Memory Networks
1. What is an associative memory network in the context of neural networks?
2. How does an associative memory network differ from other types of neural networks,
such as feedforward or recurrent networks?
3. Can you explain the concept of content-addressable memory in associative memory
networks?
4. What are some real-world applications where associative memory networks can be
useful?
5. How do associative memory networks store and retrieve information?
6. What role does the Hopfield network play in associative memory networks, and how
does it work?
7. Can you describe the architecture of a Hopfield network and its components?
8. What are the limitations of Hopfield networks in terms of memory capacity and
retrieval accuracy?
9. Are there any alternative models or architectures for associative memory networks
besides Hopfield networks?
10. How do neural networks like the Bidirectional Associative Memory (BAM) differ
from Hopfield networks in terms of operation and applications?
11. What are some techniques for improving the capacity and robustness of associative
memory networks?
12. How do spiking neural networks (SNNs) relate to associative memory networks, and
what advantages do they offer in this context?
13. How can associative memory networks be used in pattern recognition and recall
tasks?
14. What are some challenges or limitations associated with training and fine-tuning
associative memory networks?
15. Are there any hybrid models that combine associative memory networks with other
neural network architectures for improved performance?
16. Can you explain the concept of autoassociative memory and its role in pattern
completion within associative memory networks?
17. How do you evaluate the performance of an associative memory network in terms of
recall and retrieval accuracy?
18. In what scenarios would you choose an associative memory network over other types
of neural networks, such as convolutional or recurrent neural networks?
19. What are the trade-offs between using hardware-based associative memory networks
and software-based implementations?
20. Can you provide examples of industries or fields where associative memory networks
have had a significant impact or potential for breakthroughs?
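
To ground the Hopfield questions above, here is a minimal sketch of a discrete Hopfield
network with Hebbian storage and asynchronous recall. It assumes bipolar (+1/-1)
patterns; the 8-unit pattern and the number of update steps are arbitrary toy choices,
not a definitive implementation:

import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    # Hebbian rule: W is the sum of outer products of the stored patterns,
    # with the diagonal zeroed so units do not reinforce themselves.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, x, steps=50):
    # Asynchronous recall: update one randomly chosen unit at a time
    # toward the sign of its net input.
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(len(x))
        x[i] = 1 if W[i] @ x >= 0 else -1
    return x

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = store(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1                  # corrupt two bits of the cue
print(recall(W, noisy))         # typically converges back to the stored pattern
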
SOM
1. What is a Self-Organizing Map (SOM) in the context of neural networks?
2. How does a SOM differ from other types of neural networks, such as feedforward or
recurrent networks?
3. What are the main principles behind the self-organizing behaviour of SOMs?
4. Can you explain the concept of unsupervised learning and how it relates to SOMs?
5. What are the key components of a SOM, including neurons, weights, and input data?
6. How are the neurons organized in a SOM, and what is the role of the topology?
7. What is the purpose of the Kohonen learning rule, and how does it update the weights
of the neurons in a SOM?
8. Can you describe the process of training a SOM and how it clusters input data?
9. What are the advantages of using a SOM for dimensionality reduction and data
visualization?
10. How do you determine the number of neurons and the size of the grid in a SOM?
11. What are the applications of SOMs in data analysis, such as clustering and
visualization?
12. How do SOMs handle high-dimensional input data and preserve the topological
relationships?
13. What is the neighborhood function in a SOM, and how does it influence weight
updates during training?
14. Can you explain the concept of topological ordering and its importance in SOMs?
15. What are some common distance metrics used in SOMs to measure similarity
between neurons and input data?
16. How do you evaluate the quality of a trained SOM, and what metrics can be used for
this purpose?
17. What are some variations or extensions of SOMs, such as Growing SOMs or
MiniSom?
18. How can SOMs be used for feature extraction and representation learning in machine
learning tasks?
19. Are there any limitations or challenges associated with using SOMs, and how can
they be addressed?
20. Can you provide examples of industries or fields where SOMs have been successfully
applied for data analysis and visualization?
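
For the SOM questions above, here is a minimal training-loop sketch using the Kohonen
rule with a Gaussian neighborhood. The grid size, decay schedules, and toy two-cluster
data are illustrative assumptions, not the only valid choices:

import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    # Kohonen learning: find the best-matching unit (BMU), then pull nearby
    # grid units toward the input; learning rate and neighborhood shrink over time.
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 0.5
            # BMU: the grid cell whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood on the grid, centered on the BMU.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

# Toy 2-D data: two clusters; after training, the weight vectors cover both.
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(1.0, 0.1, (50, 2))])
print(train_som(data).reshape(-1, 2).round(2))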

RNN/LSTM/GRU

General RNN Questions:


1. What is a recurrent neural network (RNN), and how does it differ from feedforward
neural networks?
2. How do RNNs handle sequential data, and why are they suitable for tasks like time
series prediction and natural language processing?
3. Can you explain the concept of recurrent connections and how they enable memory in
RNNs?
4. What are the limitations of traditional RNNs, particularly regarding the vanishing
gradient problem?
5. How do RNNs unfold over time during training and inference?

LSTM-Specific Questions:

6. What is a Long Short-Term Memory (LSTM) network, and how does it address the
vanishing gradient problem?
7. Can you describe the internal architecture of an LSTM cell, including the roles of
gates and memory cells?
8. How do LSTMs handle long-term dependencies in sequential data?
9. What are the forget gate, input gate, and output gate in an LSTM, and how do they
function?
10. How does backpropagation through time (BPTT) work when training LSTM
networks?
11. What are some common applications of LSTMs in natural language processing and
time series analysis?
12. What are the hyperparameters that you can tune when working with LSTMs, and how
do they affect network performance?

GRU-Specific Questions:

13. What is a Gated Recurrent Unit (GRU) network, and how does it compare to LSTMs?
14. Can you explain the key differences between LSTM and GRU cells in terms of
architecture and functionality?
15. How does a GRU handle the trade-off between memory and computational
complexity compared to an LSTM?
16. What are the reset gate and update gate in a GRU, and what roles do they play in
information flow?
17. In what scenarios might you choose a GRU over an LSTM, and vice versa?

Comparative Questions:

18. What are the advantages and disadvantages of using LSTM and GRU networks for
sequence modeling tasks?
19. How do LSTM and GRU networks perform in terms of training speed and
convergence compared to traditional RNNs?
20. Are there any recent developments or variations of LSTMs and GRUs that have
improved their performance in specific applications?
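
To make the gate questions above concrete, here is a minimal NumPy sketch of a single
LSTM time step. Biases are omitted for brevity, and the weight shapes and toy input
sequence are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, params):
    # One LSTM time step. f, i, o are the forget, input, and output gates;
    # g is the candidate cell update.
    Wf, Wi, Wo, Wg = params
    z = np.concatenate([h, x])   # gates see the previous hidden state and the input
    f = sigmoid(Wf @ z)          # what to keep from the old cell state
    i = sigmoid(Wi @ z)          # how much of the candidate to write
    o = sigmoid(Wo @ z)          # what to expose as the new hidden state
    g = np.tanh(Wg @ z)          # candidate values
    c_new = f * c + i * g        # additive cell update eases gradient flow
    h_new = o * np.tanh(c_new)
    return h_new, c_new

n_in, n_h = 3, 4                 # toy sizes: input 3, hidden 4
params = [rng.normal(0, 0.1, (n_h, n_h + n_in)) for _ in range(4)]
h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.normal(size=(5, n_in)):   # run a length-5 random sequence
    h, c = lstm_step(x, h, c, params)
print(h.round(3))
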
Encoders and Autoencoders

General Questions:
1. What is an encoder in the context of neural networks?
2. How does an encoder differ from other components in a neural network architecture?
3. Can you explain the concept of feature encoding and its significance in machine
learning?
4. What are some common tasks where encoders play a crucial role in deep learning?

Autoencoder-Specific Questions:

5. What is an autoencoder, and how does it work?


6. What is the primary purpose of an autoencoder in neural networks?
7. Can you describe the architecture of a basic autoencoder, including the encoder and
decoder components?
8. How does an autoencoder learn to represent data in a lower-dimensional space?
9. What are the key hyperparameters to consider when designing and training an
autoencoder?
10. How is the loss function defined for training an autoencoder, and what is its role?
11. What is the encoder's role in an autoencoder, and how does it perform feature
extraction?
12. How does the decoder reconstruct the original input data from the encoded
representation?
13. What are the applications of autoencoders in unsupervised learning, dimensionality
reduction, and data denoising?
14. Can you explain the concept of variational autoencoders (VAEs) and how they differ
from traditional autoencoders?
15. How do convolutional autoencoders and recurrent autoencoders differ from standard
autoencoders, and what are their use cases?
16. What are some techniques for fine-tuning autoencoders for specific tasks, such as
anomaly detection or image generation?
17. How can you evaluate the performance of an autoencoder, especially when it is used
for tasks like image reconstruction?

Advanced Questions:

18. What is the connection between autoencoders and principal component analysis
(PCA)?
19. How can autoencoders be used for generative modeling and generating new data
samples?
20. What are the challenges and limitations of using autoencoders, and how can they be
addressed in practice?
21. How do denoising autoencoders work, and what advantages do they offer for
noise-robust feature extraction?
22. Can you explain the concept of adversarial autoencoders (AAEs) and their role in
generative modeling?
23. What is the role of bottleneck layers in autoencoders, and how do they affect feature
representation?
24. Are there any real-world examples or applications where autoencoders have been
particularly successful?
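
As a companion to the autoencoder questions above (including the PCA connection in
question 18), here is a minimal sketch of a linear autoencoder with a 2-unit bottleneck
trained by gradient descent on reconstruction error. The sizes, learning rate, and
synthetic data are toy assumptions; with linear activations and MSE loss such a network
learns the same subspace as PCA:

import numpy as np

rng = np.random.default_rng(3)

n_in, n_code = 4, 2
W_enc = rng.normal(0, 0.1, (n_code, n_in))   # encoder weights
W_dec = rng.normal(0, 0.1, (n_in, n_code))   # decoder weights

# Synthetic data living near a 2-D subspace of R^4, plus a little noise.
Z = rng.normal(size=(200, n_code))
X = Z @ rng.normal(size=(n_code, n_in)) + 0.01 * rng.normal(size=(200, n_in))

lr = 0.05
for _ in range(3000):
    code = X @ W_enc.T              # encoder: compress to the bottleneck
    X_hat = code @ W_dec.T          # decoder: reconstruct the input
    err = X_hat - X                 # reconstruction residual
    # Gradients of the mean squared reconstruction error.
    grad_dec = err.T @ code / len(X)
    grad_enc = (err @ W_dec).T @ X / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(np.mean(err ** 2))            # reconstruction MSE after training
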
GAN

General GAN Questions:


1. What is a Generative Adversarial Network (GAN) in the context of neural networks?
2. How do GANs differ from other generative models, such as autoencoders or
variational autoencoders (VAEs)?
3. Can you explain the basic architecture and components of a GAN?
4. What is the primary goal of a GAN in terms of generating data?
5. How is a GAN composed of a generator and a discriminator, and what roles do they
play in the training process?
6. What is the concept of adversarial training in GANs, and why is it important?

Training and Loss Functions:

7. What loss functions are used to train the generator and discriminator in a GAN?
8. How does the generator loss encourage the generation of realistic data, while the
discriminator loss encourages accurate discrimination?
9. Can you explain the concept of the minimax game between the generator and
discriminator in GAN training?
10. How do backpropagation and gradient descent work during the training of GANs?
11. What are some common challenges and issues in training GANs, such as mode
collapse and vanishing gradients, and how can they be addressed?

GAN Architectures:

12. What are some common architectures used for GANs, such as Deep Convolutional
GANs (DCGANs) and Wasserstein GANs (WGANs)?
13. How do Conditional GANs (CGANs) and InfoGANs extend the basic GAN
architecture to address specific tasks and improve control over generated data?
14. Can you explain the role of normalization techniques, such as batch normalization, in
GAN architectures?

Applications:

15. What are some practical applications of GANs in generating images, text, audio, and
other data types?
16. How are GANs used in image-to-image translation tasks, such as style transfer or
super-resolution?
17. What are some creative applications of GANs, such as generating art, music, or
deepfake videos?
18. Can you describe how GANs are employed in data augmentation for improving the
performance of other machine learning models?

Ethical and Security Concerns:

19. What are the ethical concerns and potential misuse of GANs, such as deepfake
generation and privacy issues?
20. How can GAN-generated content be detected and distinguished from real data, and
what are the challenges in this area?

Advanced Topics:
21. What are some recent advancements and variations of GANs, such as Progressive
GANs, CycleGANs, and BigGANs?
22. Can you explain how self-attention mechanisms and transformers have been
integrated into GAN architectures to improve their performance?
23. How do GANs relate to reinforcement learning, and how can they be used to generate
content in RL environments?
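
To illustrate the minimax game asked about above, here is a deliberately tiny 1-D GAN
sketch: the generator is an affine map g(z) = a*z + b and the discriminator is logistic
regression, trained with the standard discriminator loss and the non-saturating
generator loss. All the numbers (data distribution, learning rate, step count) are toy
assumptions; real GANs use deep networks and are much harder to train:

import numpy as np

rng = np.random.default_rng(4)
sig = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0      # generator parameters: fake = a*z + b, z ~ N(0, 1)
w, c = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(3000):
    real = rng.normal(4.0, 0.5, 64)      # real data ~ N(4, 0.5)
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sig(w * real + c), sig(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator step (non-saturating loss): push D(fake) toward 1.
    df = sig(w * fake + c)
    a -= lr * np.mean((df - 1) * w * z)
    b -= lr * np.mean((df - 1) * w)

# On this toy problem the generator often ends near mean 4 and std 0.5.
print(b, abs(a))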

Mixed Questions
Q1. Discuss in detail the generalized delta rule and the weight updates for the hidden
and output layers.
Q2. Illustrate the various applications of neural networks.
Q3. State a few activation functions which are used in single and multilayer networks to
classify…….
Q4. Differentiate between Hopfield and iterative autoassociative networks.
Q5. Exemplify linearly separable problems.
Q6. Compare and contrast LSTM and gated recurrent units.
Q7. Show the linearization of the sigmoid function (see the worked sketch after this list).
Q8. Compare and Contrast stateful and stateless LSTMs.
Q9. Define recurrent neural networks.
Q10. A 3-input neuron has weights 2, 3 and 4. The transfer function is linear………
Q11. Define Associative memory.
Q12. Describe the significance of convolutional layer and pooling layer.
Q13. Describe Bidirectional associative memory.
Q14. Distinguish between recurrent and non-recurrent networks.
Q15. Explain Hopfield memory in brief.
Q16. Compare autoassociative net and Hopfield net.
Q17. Explain self-organization in brief.
Q18. Distinguish between binary and bipolar sigmoid functions.
Q19. Write in brief about convolution layer.
Q20. Write some applications of CNN.
Q21. Illustrate denoising autoencoders.
Q22. Write a short note on sparse autoencoders.
Q23. Explain the suitability of various activation functions with respect to applications.
Q24. Illustrate the operations of the pooling layer in a CNN with a simple example.
Q25. Justify the advantages of autoencoders over principal component analysis for
dimensionality reduction.
Q26. Explain the working of the gated recurrent unit.
Q27. Show the graphical representation of the sigmoid activation function.
Q28. Illustrate the significance of sigmoid activation function.
Q29. Graphically sketch the different activation functions used in NNs.
Q30. Distinguish between autoassociative and heteroassociative memory.
Q31. Describe rectified linear units and their generalized form.
Q32. Differentiate between ReLU and Tanh activation functions.
Q33. Explain the algorithm of the discrete Hopfield network and its architecture.
Q34. Analyse the role of rectified linear units in hidden layers.
Q35. Describe the characteristics of the continuous Hopfield network.
Q36. Illustrate the encoder-decoder sequence-to-sequence architecture.
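
Worked sketch for Q7: since sigmoid(0) = 1/2 and sigmoid'(x) = sigmoid(x)(1 - sigmoid(x))
gives sigmoid'(0) = 1/4, the first-order Taylor (tangent-line) approximation about x = 0
is sigmoid(x) ≈ 1/2 + x/4. A short numerical check (the sample points are arbitrary):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tangent-line approximation at x = 0: sigmoid(x) ≈ 0.5 + 0.25 * x.
for x in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(x, sigmoid(x), 0.5 + 0.25 * x)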
