
SET-1

III B.TECH I SEMESTER (KGR21) CSE I MID EXAMINATIONS - MARCH 2024


MACHINE LEARNING
OBJECTIVE EXAM
NAME _____________________________ HALL TICKET NO. A

Answer all the questions. All questions carry equal marks. Time: 20 min. Marks: 10.
I. Choose the correct alternative:

1. Supervised learning differs from unsupervised learning in that supervised learning requires [CO1, k2] [ ]
A. at least one input attribute. B. input attributes to be categorical. C. at least one output attribute. D. output attributes to be categorical.

2. The FIND-S algorithm starts from the most specific hypothesis and generalizes it by considering only? [CO1, k2] [ ]
A. Positive B. Negative C. Negative or Positive D. None

3. What kind of algorithm is used for "facial identities or facial expressions"? [CO1, k1] [ ]
A. Recognizing anomalies B. Prediction C. Generating Patterns D. Recognition Patterns
4. The Candidate Elimination algorithm represents the [CO1, k2] [ ]
A. Solution Space B. Version Space C. Elimination Space D. All of the above

5. ID3 algorithm stands for? [CO1, k1] [ ]
A. Iterative Dichotomiser B. Intrusion Detection C. Intrusion Dichotomiser D. Intrusion Dichotomiser
6. In general, a well-defined learning problem for the learner system does not include? [CO1, k2] [ ]
A. The class of tasks B. The measure of performance to be improved C. The source of experience D. All of the above
7. What is a perceptron? [CO2, k1] [ ]
A. A single layer feed forward network B. A neural network that contains feedback C. An auto associative network D. None
8. The general limitations of the Backpropagation rule are? [CO2, k2] [ ]
A. Scaling B. Slow Convergence C. Local Minima Problem D. All of the above
9. Bayes rule can be used for? [CO3, k1] [ ]
A. Solving queries B. Increasing Complexity C. Answering Probabilistic queries D. None
10. The -------------- of hypothesis h with respect to target concept c and distribution D is the probability that h will misclassify an instance drawn at random according to D. [CO3, k2] [ ]
A. True error B. Sample error C. Type error D. None

II. Fill in the Blanks:

11. -------------------- type of machine learning algorithm makes predictions when you have a set of input data and you know the possible responses. [CO1, k2]

12. -------------------------- is the problem of searching through a predefined space of potential hypotheses for the hypothesis that best fits the training examples. [CO1, k2]

13. --------------------- initializes the version space to contain all the hypotheses in H, then eliminates any hypothesis found inconsistent with any training example. [CO1, k2]

14. ----------------------------------- is the network that involves backward links from the output to the input and hidden layers. [CO2, k1]

15. Full form of MDL: ---------------------------. [CO2, k2]

16. ---------------- is the transmission of error back through the network to allow the weights to be adjusted so that the network can learn. [CO2, k2]

17. The key idea behind the delta rule is to use ------------------- to search the hypothesis space. [CO2, k2]

18. ----------------- is the type of estimate which will get computed from the statistics of the unobserved data. [CO2, k2]

19. ------------------ is the empirical risk of a hypothesis with respect to some sample S of instances drawn from the instance space X. [CO3, k2]
20. -------------- is the principled way to calculate the posterior probability of each hypothesis. [CO3, k2]
Answer Key (Multiple Choice):

1. C. at least one output attribute. (Supervised learning aims to map inputs to desired outputs.)

2. A. Positive (FIND-S starts with the most specific hypothesis and generalizes it by considering only the positive examples; negative examples are ignored. A short sketch appears after this answer list.)

3. D. Recognition Patterns (Facial recognition involves identifying patterns in images.)

4. B. Version Space (Candidate Elimination represents the version space, which shrinks as inconsistent hypotheses are eliminated.)

5. A. Iterative Dichotomiser (ID3 stands for Iterative Dichotomiser 3. A sketch of the entropy measure ID3 splits on appears after this answer list.)

6. D. all of the above (A well-defined learning problem requires all three: the class of tasks, the performance measure, and the source of experience.)

7. A. A single layer feed forward network (A perceptron is a simple neural network with one output layer. A minimal sketch appears after this answer list.)

8. D. All of the above (Backpropagation can struggle with scaling issues, slow convergence, and local minima.)

9. C. Answering Probabilistic queries (Bayes' rule calculates the probability of a hypothesis given evidence. A worked example appears after this answer list.)

10. A. True error (True error is the probability of misclassifying an instance drawn at random according to D; the formula is written out after this answer list.)
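
For answer 2, a minimal FIND-S sketch in Python (not part of the exam paper; the attribute domains and training data are made up for illustration):

# FIND-S sketch: keep the most specific hypothesis consistent with the positive examples.
def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True means positive."""
    positives = [x for x, label in examples if label]
    h = list(positives[0])                 # most specific hypothesis: the first positive example
    for x in positives[1:]:                # negative examples are ignored entirely
        for i, value in enumerate(x):
            if h[i] != value:
                h[i] = '?'                 # generalize only where the example disagrees
    return h

data = [(('sunny', 'warm', 'normal'), True),
        (('sunny', 'warm', 'high'), True),
        (('rainy', 'cold', 'high'), False)]
print(find_s(data))                        # ['sunny', 'warm', '?']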
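
For answer 5, a short sketch of the entropy calculation that ID3 uses when choosing split attributes; the 9-positive / 5-negative label counts are a toy example:

import math

# Entropy of a label collection, the impurity measure behind ID3's information gain.
def entropy(labels):
    n = len(labels)
    return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                for v in set(labels))

labels = ['+'] * 9 + ['-'] * 5
print(round(entropy(labels), 3))           # 0.94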
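
For answer 7, a minimal single layer feed forward perceptron with the threshold update rule; the learning rate, epoch count, and AND-gate data are illustrative assumptions:

# Perceptron sketch: single layer, feed forward, threshold (step) output unit.
def train_perceptron(X, y, lr=0.1, epochs=20):
    w = [0.0] * len(X[0])                  # one weight per input attribute
    b = 0.0                                # bias term
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out             # zero when the prediction is already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the updates converge.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
print(train_perceptron(X, y))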
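
For answer 9, a small worked example of Bayes' rule answering a probabilistic query; the prior and likelihood numbers are invented:

# Bayes' rule: P(h | e) = P(e | h) * P(h) / P(e); all numbers below are made up.
p_h = 0.01                                 # prior P(h)
p_e_given_h = 0.95                         # likelihood P(e | h)
p_e_given_not_h = 0.05                     # P(e | not h)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))               # 0.161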
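
For answer 10, the true error is conventionally written (following the standard textbook definition) as

\[ \operatorname{error}_{\mathcal{D}}(h) \;=\; \Pr_{x \sim \mathcal{D}}\big[\, c(x) \neq h(x) \,\big] \]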

Answer Key (Fill in the Blanks):

11. Supervised learning (Predicts outputs based on labeled data.)

12. Concept learning (Searches for a hypothesis that best fits training examples.)

13. List-Then-Eliminate algorithm (Initializes the version space with all hypotheses in H and eliminates any hypothesis inconsistent with the training examples. A brute-force sketch appears after this answer list.)
14. Recurrent Neural Network (RNN) (RNNs have backward links for processing sequential data.)

15. Minimum Description Length (MDL aims for a balance between model complexity and data fit.)

16. Backpropagation (Adjusts weights based on error propagated back through the network. A compact sketch appears after this answer list.)

17. Gradient descent (The delta rule uses gradient descent to search the hypothesis space; see the sketch after this answer list.)

18. Maximum A Posteriori (MAP) estimate (Estimated from unobserved data statistics; a tiny illustration appears after this answer list.)

19. Sample error (The sample error, i.e. the empirical risk, is the fraction of instances in sample S that the hypothesis misclassifies; the formula is written out after this answer list.)

20. Bayes' theorem (Provides a principled way to calculate posterior probability.)
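
For answer 13, a brute-force sketch of the algorithm the question describes: enumerate a small conjunctive hypothesis space, then discard hypotheses inconsistent with the training data (attribute domains and examples are made up):

from itertools import product

# A hypothesis matches an instance when every attribute is either '?' or an exact match.
def matches(h, x):
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def list_then_eliminate(examples, value_sets):
    H = list(product(*[values + ['?'] for values in value_sets]))   # all conjunctions
    return [h for h in H
            if all(matches(h, x) == label for x, label in examples)]

domains = [['sunny', 'rainy'], ['warm', 'cold']]
data = [(('sunny', 'warm'), True), (('rainy', 'cold'), False)]
print(list_then_eliminate(data, domains))  # three consistent hypotheses survive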
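
For answer 16, a compact backpropagation sketch for a one-hidden-layer sigmoid network; the XOR data, learning rate, and epoch count are illustrative, and (as answer 8 notes) training can still converge slowly or get stuck in a local minimum:

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Backpropagation sketch: forward pass, then propagate the output error backward
# to update the hidden-layer and output-layer weights.
def train(data, hidden=2, lr=0.5, epochs=5000, seed=0):
    random.seed(seed)
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]  # 2 inputs + bias
    w_o = [random.uniform(-1, 1) for _ in range(hidden + 1)]                  # hidden units + bias
    for _ in range(epochs):
        for (x1, x2), t in data:
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]          # forward pass
            o = sigmoid(sum(w_o[i] * h[i] for i in range(hidden)) + w_o[-1])
            d_o = (t - o) * o * (1 - o)                                       # output delta
            d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(hidden)]   # hidden deltas
            for i in range(hidden):                                           # weight updates
                w_o[i] += lr * d_o * h[i]
                w_h[i] = [w_h[i][0] + lr * d_h[i] * x1,
                          w_h[i][1] + lr * d_h[i] * x2,
                          w_h[i][2] + lr * d_h[i]]
            w_o[-1] += lr * d_o
    return w_h, w_o

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
hidden_weights, output_weights = train(xor_data)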
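
For answer 17, a minimal sketch of the delta rule performing incremental gradient descent on a single unthresholded linear unit; the toy data and learning rate are assumptions:

# Delta rule sketch: adjust weights along the negative gradient of the squared error.
def delta_rule(X, y, lr=0.05, epochs=200):
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = sum(wi * xi for wi, xi in zip(w, x))   # linear (unthresholded) output
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Fit y = 2*x + 1, using a constant 1 feature for the intercept.
X = [(1, 0), (1, 1), (1, 2), (1, 3)]
y = [1, 3, 5, 7]
print(delta_rule(X, y))                    # approaches [1.0, 2.0]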
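
For answer 18, a tiny illustration of a MAP estimate, picking the hypothesis that maximizes P(D|h) * P(h) over a hypothetical finite hypothesis set; all numbers are made up:

# MAP estimate sketch: h_MAP = argmax over h of P(D | h) * P(h).
priors = {'h1': 0.6, 'h2': 0.3, 'h3': 0.1}          # P(h)
likelihoods = {'h1': 0.2, 'h2': 0.5, 'h3': 0.9}     # P(D | h)

h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])
print(h_map)                                        # h2 (0.15 beats 0.12 and 0.09)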
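
For answer 19, the sample error is usually written (with n the number of instances in S, f the target function, and delta equal to 1 when its argument is true and 0 otherwise) as

\[ \operatorname{error}_{S}(h) \;=\; \frac{1}{n} \sum_{x \in S} \delta\big(f(x) \neq h(x)\big) \]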
