18CS71 IA2 Scheme (CBCS)
INTERNAL ASSESSMENT – II
Scheme of Evaluation
PROGRAM : U.G.
SEM & SEC : VII / ‘A’ & ‘B’ DATE: 27/12/2021
SUBJECT : Artificial Intelligence & Machine Learning TIME: 9:00 –10:15 AM
FACULTY : A.Rosline Mary & Suma S MAX. MARKS: 30
Q. No. | Marks Distribution | Total Marks
(Details of the answer are given below each entry.)
1a | Derivation, Case 1 + Case 2: 5M + 5M | 10M

Case 2: Training rule for hidden unit weights. In the case where j is an internal, or hidden, unit in the network, the derivation of the training rule for wji must consider the indirect ways in which wji can influence the network outputs and hence Ed. For this reason, we will find it useful to refer to the set of all units immediately downstream of unit j in the network (i.e., all units whose direct inputs include the output of unit j).
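The Case 2 rule can be sketched numerically. Assuming sigmoid units, the hidden-unit error term is delta_j = o_j (1 - o_j) * sum over k in Downstream(j) of delta_k * w_kj, and the weight update is delta_w_ji = eta * delta_j * x_ji; the numbers below are illustrative, not from any particular network.

```python
# Sketch of the Case 2 (hidden-unit) backpropagation update for sigmoid units:
#   delta_j = o_j * (1 - o_j) * sum_{k in Downstream(j)} delta_k * w_kj
#   w_ji   += eta * delta_j * x_ji
o_j = 0.6                      # output of hidden unit j (illustrative)
downstream = [                 # (delta_k, w_kj) for each unit k fed by j
    (0.10, 0.4),
    (-0.05, 0.7),
]
eta, x_ji = 0.1, 1.0           # learning rate and the i-th input to unit j

delta_j = o_j * (1 - o_j) * sum(d_k * w_kj for d_k, w_kj in downstream)
dw_ji = eta * delta_j * x_ji
print(delta_j, dw_ji)
```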
1b | Explanation: 2M; h_ML: 3M | 5M

A straightforward Bayesian analysis will show that, under certain assumptions, any learning algorithm that minimizes the squared error between the output hypothesis predictions and the training data will output a maximum likelihood hypothesis.
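The assumption in question is that each training target is the true function value corrupted by zero-mean Gaussian noise, d_i = f(x_i) + e_i; under that assumption the claim follows directly:

```latex
h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m} p(d_i \mid h)
       = \arg\max_{h \in H} \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}}
         \, e^{-\frac{(d_i - h(x_i))^2}{2\sigma^2}}
       = \arg\min_{h \in H} \sum_{i=1}^{m} \bigl(d_i - h(x_i)\bigr)^2
```

Taking the log of the product turns it into a sum, and the constant Gaussian factors drop out of the argmax, leaving the sum-of-squared-errors criterion.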
OR
2a | Algorithm: 7M | 7M
2b | Bayes Theorem: 2M; Brute Force: 6M | 8M

Bayes theorem provides a way to calculate the probability of a hypothesis based on its prior probability, the probabilities of observing various data given the hypothesis, and the observed data itself.
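Bayes theorem, P(h|D) = P(D|h) P(h) / P(D), combined with the brute-force approach of evaluating the posterior of every hypothesis and outputting the largest, can be sketched as follows; the hypothesis set, priors, and likelihoods are made-up numbers for illustration.

```python
# Brute-force MAP learning sketch: compute P(h|D) for every hypothesis h
# via Bayes theorem and output the hypothesis with the largest posterior.
# Priors P(h) and likelihoods P(D|h) below are illustrative values.
hypotheses = {
    "h1": {"prior": 0.3, "likelihood": 0.10},
    "h2": {"prior": 0.5, "likelihood": 0.40},
    "h3": {"prior": 0.2, "likelihood": 0.05},
}

# P(D) = sum over h of P(D|h) * P(h)  (normalizing constant)
p_d = sum(h["prior"] * h["likelihood"] for h in hypotheses.values())
posterior = {name: h["prior"] * h["likelihood"] / p_d
             for name, h in hypotheses.items()}

h_map = max(posterior, key=posterior.get)
print(h_map, posterior[h_map])
```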
3a | Explanation: 2M; Algorithm: 3M | 5M
Gradient descent is an important general paradigm for learning. It is a strategy for searching through a large or infinite hypothesis space that can be applied whenever
1. the hypothesis space contains continuously parameterized hypotheses (e.g., the weights in a linear unit), and
2. the error can be differentiated with respect to these hypothesis parameters.
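Both conditions hold for a linear unit trained on squared error, so the paradigm can be sketched as follows; the training data, step size, and iteration count are illustrative choices.

```python
# Gradient-descent sketch for a linear unit o = w0 + w1*x, minimizing
# E(w) = 1/2 * sum over examples of (t - o)^2.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # (x, target): t = 2x + 1
w0, w1, eta = 0.0, 0.0, 0.05                       # initial weights, step size

for _ in range(2000):
    g0 = g1 = 0.0
    for x, t in examples:
        err = t - (w0 + w1 * x)   # dE/dw = -err * input; minus sign is
        g0 += err                 # folded into the "+=" updates below,
        g1 += err * x             # so each step moves opposite the gradient
    w0 += eta * g0
    w1 += eta * g1

print(w0, w1)   # converges toward w0 = 1, w1 = 2
```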
3b | Conditional Probability: 10M | 10M

Let us use the naive Bayes classifier and the training data from this table to classify the following novel instance:
(Outlook = sunny, Temperature = cool, Humidity = high, Wind = strong)
Our task is to predict the target value (yes or no) of the target concept PlayTennis for this new instance. Instantiating Equation (20) to fit the current task, the target value vNB is given by

vNB = argmax over vj in {yes, no} of P(vj) P(Outlook = sunny | vj) P(Temperature = cool | vj) P(Humidity = high | vj) P(Wind = strong | vj)

We have

P(yes) P(sunny | yes) P(cool | yes) P(high | yes) P(strong | yes) = 0.0053
P(no) P(sunny | no) P(cool | no) P(high | no) P(strong | no) = 0.0206

Thus, the naive Bayes classifier assigns the target value PlayTennis = no to this new instance, based on the probability estimates learned from the training data.
Furthermore, by normalizing the above quantities to sum to one, we can calculate the conditional probability that the target value is no, given the observed attribute values. For the current example, this probability is 0.0206 / (0.0206 + 0.0053) = 0.795.
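The computation can be reproduced with a few lines of Python; the counts below assume the standard 14-example PlayTennis data set (9 yes, 5 no), which is what the table referenced above appears to be.

```python
# Naive Bayes classification of (sunny, cool, high, strong), using
# probability estimates from the standard 14-example PlayTennis table.
priors = {"yes": 9 / 14, "no": 5 / 14}
cond = {  # P(attribute value | class), e.g. P(Outlook=sunny | yes) = 2/9
    "yes": {"sunny": 2 / 9, "cool": 3 / 9, "high": 3 / 9, "strong": 3 / 9},
    "no":  {"sunny": 3 / 5, "cool": 1 / 5, "high": 4 / 5, "strong": 3 / 5},
}
instance = ["sunny", "cool", "high", "strong"]

scores = {}
for v in ("yes", "no"):
    p = priors[v]
    for a in instance:
        p *= cond[v][a]          # multiply in each conditional probability
    scores[v] = p

v_nb = max(scores, key=scores.get)                    # classifier's output
p_no = scores["no"] / (scores["yes"] + scores["no"])  # normalized posterior
print(v_nb, scores["no"], scores["yes"], p_no)
```

Running this yields the 0.0206 and 0.0053 quantities quoted above and the normalized posterior of roughly 0.795 for PlayTennis = no.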
OR
4a | m-estimate: 4M; Conditional Probability: 4M | 8M
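The m-estimate smooths a conditional-probability estimate toward a prior: (n_c + m*p) / (n + m), where n_c of n examples match, p is the prior estimate, and m is the equivalent sample size. A minimal sketch, with illustrative numbers (3 of 5 negative examples have Wind = strong, uniform prior p = 0.5 over the two Wind values, m = 2):

```python
def m_estimate(n_c, n, p, m):
    """m-estimate of probability: (n_c + m*p) / (n + m).

    n_c -- number of examples matching the attribute value
    n   -- total examples for the class
    p   -- prior estimate of the probability
    m   -- equivalent sample size (weight given to the prior)
    """
    return (n_c + m * p) / (n + m)

# Illustrative: smoothed estimate of P(Wind = strong | PlayTennis = no)
print(m_estimate(3, 5, 0.5, 2))   # (3 + 1) / 7
```

With m = 0 this reduces to the raw relative frequency n_c / n; larger m pulls the estimate toward the prior p, which avoids zero probabilities for unseen attribute values.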