
EXAM SIMULATION:

1.1
(True or False) The perceptron is a single-layer feedforward neural network.

Answer:

1.2
In a multilayer feedforward network, what are the three main types of layers?

a. Input layer, output layer, convolutional layer
b. Input layer, hidden layers, output layer
c. Hidden layer, pooling layer, fully connected layer

1.3
(Open answer) Explain the backpropagation algorithm and its role in training a neural
network.

Answer:
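For revision purposes, the single-layer perceptron from question 1.1 can be sketched as a weighted sum followed by a step activation. The AND-gate weights below are an assumed illustration, not part of the exam:

```python
def perceptron(x, w, b):
    # Weighted sum of inputs plus bias, then a step activation.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

# Hand-picked weights that realize an AND gate (illustrative values).
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # → 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # → 0
```

Note that a single perceptron can only separate linearly separable classes; training multilayer networks (question 1.3) requires backpropagation of the error gradient through every layer.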

--------------------------------------------------------------------------------------------------------------

2.1
(True or False) Autograd is a technique that automatically calculates the gradients of
tensor operations.

Answer:

2.2
What is the main difference between Full Batch Gradient Descent and Stochastic
Gradient Descent (SGD)?

a. Full Batch Gradient Descent updates weights after each epoch, while SGD
updates weights after each mini-batch.
b. Full Batch Gradient Descent updates weights after each mini-batch, while SGD
updates weights after each epoch.
c. Full Batch Gradient Descent uses the entire dataset to compute the gradient
and update the weights, while SGD updates the weights using a single training
sample or a small mini-batch.

2.3
(Open answer) Describe the advantages and disadvantages of using Stochastic
Gradient Descent in deep learning.

Answer:
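The contrast asked about in 2.2 can be sketched on a toy one-dimensional least-squares problem. The data and learning rate below are assumed illustrative values:

```python
import random

# Toy data generated from y = 2x, so the optimal weight is w = 2.
data = [(x, 2 * x) for x in [1.0, 2.0, 3.0, 4.0]]

def full_batch_step(w, lr=0.01):
    # Gradient of the mean squared error over the ENTIRE dataset.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def sgd_step(w, lr=0.01):
    # Gradient estimated from a SINGLE randomly chosen sample.
    x, y = random.choice(data)
    return w - lr * 2 * (w * x - y) * x

w = 0.0
for _ in range(500):
    w = full_batch_step(w)
print(round(w, 3))  # converges toward 2.0
```

The full-batch update is deterministic but touches every sample per step; the SGD update is cheap and noisy, which is what gives it both its speed advantage and its oscillating convergence (question 2.3).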

--------------------------------------------------------------------------------------------------------------

3.1
(True or False) ReLU (Rectified Linear Unit) is an activation function that is linear for
all input values.

Answer:

3.2
What is the purpose of using activation functions in neural networks?

a. To introduce non-linearity into the network
b. To regularize the weights of the network
c. To normalize the input data for the network

3.3
(Open answer) Discuss the importance of proper parameter initialization in deep
learning and provide an example of a commonly used initialization technique.

Answer:
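Questions 3.1 and 3.3 can both be illustrated in a few lines. ReLU is piecewise linear (identity for positive inputs, zero otherwise), hence not linear over all inputs; and Xavier/Glorot uniform initialization is one commonly cited scheme for 3.3. The layer sizes below are assumed example values:

```python
import math
import random

def relu(x):
    # Identity for positive inputs, zero for negative inputs:
    # piecewise linear, therefore non-linear overall.
    return max(0.0, x)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot uniform initialization: draw weights from
    # U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

print(relu(-3.0), relu(3.0))  # → 0.0 3.0
```

Linearity would require relu(-3) == -relu(3), which fails, answering 3.1; the Xavier limit keeps activation variance roughly constant across layers, which is the motivation asked for in 3.3.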

--------------------------------------------------------------------------------------------------------------

4.1
(True or False) L1 regularization encourages sparsity in the model's weights.

Answer:

4.2
What is the primary function of the learning rate in optimization algorithms?

a. It determines the size of the mini-batch used for training.
b. It controls how quickly or slowly the model learns from the data.
c. It adjusts the momentum term in the optimization process.

4.3
(Open answer) Explain the concept of batch normalization and its benefits in training
deep neural networks.

Answer:
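The normalization step behind question 4.3 can be sketched for a batch of scalar activations. The batch values below are assumed illustrative data; gamma and beta stand in for the learnable scale and shift parameters:

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize the batch to zero mean and unit variance, then apply
    # the learnable scale (gamma) and shift (beta).
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta
            for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(x, 2) for x in out])  # mean ≈ 0, variance ≈ 1
```

Keeping each layer's inputs on a stable scale like this is what lets batch normalization permit higher learning rates and reduce sensitivity to initialization.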

--------------------------------------------------------------------------------------------------------------

5.1
(True or False) Convolutional Neural Networks (CNNs) are particularly well-suited for
processing grid-like data structures such as images.

Answer:

5.2
What are the three main types of layers commonly found in a CNN architecture?

a. Convolutional layers, pooling layers, and fully connected layers
b. Input layer, hidden layers, output layer
c. Convolutional layer, dropout layer, and batch normalization layer

5.3
(Open answer) Describe the role of pooling layers in a Convolutional Neural Network
and provide an example of a common pooling operation.

Answer:
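Max pooling, the common operation question 5.3 asks for, can be sketched as taking the largest value in each non-overlapping window. The feature map below is an assumed example:

```python
def max_pool_2x2(feature_map):
    # 2x2 max pooling with stride 2: keep the largest value in each
    # non-overlapping 2x2 window, halving each spatial dimension.
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fm = [[1, 3, 2, 1],
      [4, 2, 0, 5],
      [6, 1, 1, 2],
      [3, 8, 0, 4]]
print(max_pool_2x2(fm))  # → [[4, 5], [8, 4]]
```

By discarding everything but the strongest response in each window, pooling reduces spatial resolution and computation while giving the network a degree of translation invariance.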

--------------------------------------------------------------------------------------------------------------
