B. Tech - (AR19 & AR 20) Question Bank Template
UNIT - I
PART – A: (Multiple Choice Questions) (1 Mark)
minimum separable data.
(iii) It prevents the model from overfitting
(iv) It accelerates the convergence rate
i. In the context of the perceptron learning algorithm, what role does the learning rate play? 01 01
(i) It determines the number of hidden layers in the network
(ii) It controls the step size during weight updates
(iii) It specifies the size of the input layer
(iv) It influences the choice of activation function
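For revision, the role of the learning rate can be seen directly in the perceptron update rule, w ← w + η(y − ŷ)x. A minimal NumPy sketch (illustrative only, not part of the question bank):

```python
import numpy as np

def perceptron_update(w, x, y, eta=0.1):
    """One perceptron learning step.

    The learning rate eta controls the step size of each weight update:
    w <- w + eta * (y - y_hat) * x.
    """
    y_hat = 1 if np.dot(w, x) >= 0 else 0
    return w + eta * (y - y_hat) * x

# A misclassified point (true y=1 predicted as 0) moves w by eta * x;
# a correctly classified point leaves w unchanged.
w = perceptron_update(np.array([-1.0, 0.0]), np.array([1.0, 1.0]), 1, eta=0.5)
```

A larger eta takes bigger steps per update; it does not change the network's layer structure.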
j. How does threshold logic differ from traditional binary classification in neural networks? 01 01
(i) It uses a linear activation function
(ii) It allows for continuous output values
(iii) It employs multiple thresholds for decision-making
(iv) It has no connection to artificial intelligence concepts
c. What is Backpropagation? 01 01
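As a revision aid for this question: backpropagation applies the chain rule to propagate the loss gradient backward through the network. A single-neuron sketch (illustrative values, not from the question bank):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single sigmoid neuron with squared-error loss.
x, y, w, b = 1.0, 0.0, 0.5, 0.1
z = w * x + b               # forward pass: pre-activation
a = sigmoid(z)              # forward pass: activation (prediction)
loss = 0.5 * (a - y) ** 2

# backward pass via the chain rule: dL/dw = dL/da * da/dz * dz/dw
dL_da = a - y
da_dz = a * (1 - a)
dL_dw = dL_da * da_dz * x
dL_db = dL_da * da_dz
```

The same chain-rule pattern, applied layer by layer, is what backpropagation does in a deep network.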
UNIT – II
PART – A: (Multiple Choice Questions) (1 Mark)
(i) Shallow architecture
(ii) Limited number of layers
(iii) Single hidden unit
(iv) Multiple hidden layers
b. In the context of neural networks, why is the XOR problem considered challenging? 01 02
(i) It has a linearly separable decision boundary
(ii) It requires deep architectures to learn
(iii) It has a small input space
(iv) It does not involve any non-linearity
c. Which regularization technique adds a penalty term based on the absolute values of the weights? 02 01
(i) L1 regularization
(ii) L2 regularization
(iii) Dropout regularization
(iv) Elastic Net regularization
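For revision: the penalty based on absolute weight values is L1 regularization, λ·Σ|w|. A minimal sketch (illustrative, not part of the question bank):

```python
import numpy as np

def l1_penalty(weights, lam=0.01):
    # L1 regularization adds lam * sum(|w|) to the loss; because the
    # penalty grows linearly in |w|, it pushes weights toward exactly
    # zero, producing sparse models.
    return lam * np.sum(np.abs(weights))

penalty = l1_penalty(np.array([0.5, -2.0, 0.0]), lam=0.1)  # 0.1 * 2.5
```

L2 regularization would instead penalize the squared weights, λ·Σw², which shrinks weights but rarely zeroes them.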
d. What is a key advantage of using the Rectified Linear Unit (ReLU) activation function? 02 02
(i) Avoids vanishing gradient problem
(ii) Smooth and differentiable
(iii) Reduces computational complexity
(iv) Suitable for binary classification tasks only
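For revision, ReLU itself is a one-liner (illustrative sketch, not part of the question bank):

```python
import numpy as np

def relu(z):
    # ReLU passes positive inputs through unchanged; its gradient is 1
    # on that region, which helps avoid the vanishing-gradient problem
    # that saturating activations like sigmoid/tanh suffer from.
    return np.maximum(0.0, z)

out = relu(np.array([-2.0, 0.0, 3.0]))  # negatives are clipped to 0
```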
e. How does early stopping prevent overfitting in a neural network? 02 02
(i) By increasing the learning rate
(ii) By reducing the number of hidden layers
(iii) By monitoring the validation loss and stopping training when it starts to increase
(iv) By applying aggressive data augmentation
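For revision, the monitoring logic behind early stopping can be sketched as follows (a simplified patience-based version; not part of the question bank):

```python
def early_stopping(val_losses, patience=2):
    """Return the epoch at which training would stop: when validation
    loss has not improved for `patience` consecutive epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch          # validation loss stopped improving
    return len(val_losses) - 1    # never triggered: ran to completion

# validation loss bottoms out at epoch 2, then rises -> stop at epoch 4
stop = early_stopping([1.0, 0.8, 0.7, 0.75, 0.9])
```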
f. What is the main characteristic of the Adagrad optimization algorithm? 02 02
(i) Maintains a constant learning rate throughout training
(ii) Adjusts the learning rate for each parameter based on its historical gradients
(iii) Only updates the weights after evaluating the entire training dataset
(iv) Ignores the gradient information during optimization
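For revision, one Adagrad step can be sketched as follows (illustrative, not part of the question bank):

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.1, eps=1e-8):
    # Adagrad accumulates each parameter's squared gradients in `cache`
    # and divides the learning rate by their square root, so parameters
    # with large historical gradients take smaller steps.
    cache = cache + grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache

w, cache = adagrad_step(0.0, 3.0, 0.0)  # first step: lr * grad / |grad|
```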
g. What makes the Adam optimization algorithm different from traditional stochastic gradient descent (SGD)? 02 02
(i) It uses a fixed learning rate
(ii) It adapts the learning rates for each parameter individually
(iii) It only updates the weights every few epochs
(iv) It does not require the calculation of gradients
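For revision, one Adam step can be sketched as follows (illustrative, not part of the question bank):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam keeps per-parameter running averages of the gradient (m) and
    # its square (v), so each parameter gets its own adapted step size,
    # unlike plain SGD's single global learning rate.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)      # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# On the first step the bias-corrected update is roughly lr * sign(grad).
w, m, v = adam_step(0.0, 2.0, 0.0, 0.0, t=1)
```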
h. What type of architecture is generally required to solve the XOR problem using a neural network? 02 02
(i) Shallow neural network
(ii) Recurrent neural network
(iii) Deep neural network
(iv) Linear regression model
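For revision: XOR is not linearly separable, so a single-layer perceptron cannot solve it; one hidden layer with a non-linearity already suffices. A hand-weighted 2-2-1 network with step activations (illustrative, not part of the question bank):

```python
import numpy as np

def step(z):
    return (z >= 0).astype(float)

def xor_net(x1, x2):
    # hidden unit 1 computes OR, hidden unit 2 computes AND
    h = step(np.array([x1 + x2 - 0.5, x1 + x2 - 1.5]))
    # output fires for "OR and not AND", i.e. XOR
    return step(h[0] - h[1] - 0.5)

table = [float(xor_net(a, b)) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```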
i. In what scenario would dropout regularization be most beneficial? 02 02
(i) When the training dataset is very small
(ii) When the model has too few parameters
(iii) When the model is prone to overfitting
(iv) When the model suffers from underfitting
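For revision, the inverted-dropout mask used during training can be sketched as follows (illustrative, not part of the question bank):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and rescale the survivors by 1/(1-p), discouraging
    co-adaptation and hence overfitting. At inference, pass through."""
    if not training:
        return a
    mask = (rng.random(a.shape) >= p).astype(a.dtype)
    return a * mask / (1.0 - p)

out = dropout(np.ones(1000), p=0.5)  # surviving units are scaled to 2.0
```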
j. When might the Sigmoid activation function be preferable over the ReLU activation function? 02 02
(i) When dealing with image data
(ii) When the output needs to be in the range [0, 1]
(iii) When computational efficiency is a priority
(iv) When the model requires non-linearity
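For revision: sigmoid squashes any real input into (0, 1), which is why it suits outputs that must read as probabilities (illustrative sketch, not part of the question bank):

```python
import numpy as np

def sigmoid(z):
    # Maps (-inf, inf) to (0, 1); ReLU's range [0, inf) cannot provide
    # this bounded, probability-like output.
    return 1.0 / (1.0 + np.exp(-z))

mid = sigmoid(0.0)  # 0.5, the midpoint of the output range
```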
UNIT – III
PART – A: (Multiple Choice Questions) (1 Mark)
UNIT – IV
PART – A: (Multiple Choice Questions) (1 Mark)