CS 717: Endsem

45 Marks, open class (printed + handwritten) notes ONLY.


Write concise answers. Make assumptions if/when you feel
they are absolutely needed. 3 Hours only. Attempt any
one of Question 3 and Question 6.
1. In this problem, we illustrate the difference between a Markov logic network (MLN) and a bunch of logical statements, but in the simpler case of propositional logic.
The Indian fruit Jambul [1] is often confused with Blackberries [2]. However, we will use certain regional and seasonal growing patterns of these two fruits, along with their physical characteristics, to help classify the fruit [3]. Some village folk tell us that the following logical formulae (together comprising our knowledge base) are good enough for characterizing and discriminating between these fruits.

blackberry $\leftrightarrow$ north
blackberry $\leftrightarrow$ winter
blackberry $\leftrightarrow$ autumn
blackberry $\leftrightarrow$ medium in color
blackberry $\leftrightarrow$ dark in color
blackberry $\leftrightarrow$ small in size
jambul $\leftrightarrow$ south
jambul $\leftrightarrow$ spring
jambul $\leftrightarrow$ summer
jambul $\leftrightarrow$ light in color
jambul $\leftrightarrow$ big in size

[1] http://en.wikipedia.org/wiki/Jambul
[2] http://en.wikipedia.org/wiki/Blackberry. It is the edible blackberry, not the non-edible smartphone :-)
[3] Note that the data we have is just cooked up.
Here, $\leftrightarrow$ means "if and only if" (as was demonstrated in the case of the MLNs). However, feeling uncomfortable with the fact that blackberries do sometimes grow in the south, that blackberries are sometimes light in color, etc., we decide to probabilistically model [4] the correlations between characteristics and fruits using an undirected [5] graphical model.

[4] Obviously we needed to put in more effort and engage a CS717 student in the modeling exercise.
[5] Recall from our discussion on MLNs that since the implication in the logical formulae is a two-sided $\leftrightarrow$, an undirected graph makes more sense.
We thus start off with the following random variables: S, F, L, C, Z, where S stands for the season and takes four values, viz., {winter, spring, summer, autumn}; F stands for the fruit and takes on the values {blackberry, jambul}; L stands for the location and assumes two values {north, south}; C stands for the color and takes values {light, medium, dark}; while Z stands for the size and takes values {big, small}. We will abbreviate blackberry by b and jambul by j. We consider an undirected graph $G = \langle V, E \rangle$ with the vertex set $V = \{S, F, L, C, Z\}$ and edge set $E = \{(S, F), (L, F), (C, F), (Z, F)\}$.
The potential functions over the maximal cliques (i.e., edges) are given in the following tables:

S        $\psi(S, F=b)$   $\psi(S, F=j)$
winter   0.9              0.1
spring   0.3              0.7
summer   0.4              0.6
autumn   0.8              0.2

L        $\psi(L, F=b)$   $\psi(L, F=j)$
north    0.65             0.35
south    0.25             0.75

F   $\psi(C=light, F)$   $\psi(C=medium, F)$   $\psi(C=dark, F)$
b   0.33                 0.33                  0.34
j   0.8                  0.1                   0.1

F   $\psi(Z=big, F)$   $\psi(Z=small, F)$
b   0.4                0.6
j   0.95               0.05
(a) Suppose you are given a piece of fruit and you find that it is small and has medium color. What season is it now, most likely? Determine using the max-product algorithm on the Markov Network thus constructed. What is your probability of being correct as per the sum-product algorithm? (See the numerical sketch after this question.)
(b) Now suppose we decide to employ only the $\leftrightarrow$ (if and only if) rules (and not the probability tables). Let us treat each variable as binary. So what will be the result of (logical) inference on the given logical formulae using the DPLL algorithm, given the truth values (small in size = true and medium in color = true)? Is there any difference between the results from logical and probabilistic inference? Explain.
(c) Actually, we did not construct our Markov network from the logical formulae exactly the way an MLN approach would have dealt with the formulae (using the procedure explained in class). Please explain the most important difference, and intuitively explain what the problem could have been in (a) had we constructed the Markov Network exactly as per the specification for MLNs as discussed in class. This should illustrate that in the case of propositional logic, MLNs could be an overkill.
Show the main message-passing steps as well as the steps of logical in-
ference. You are free to choose between serial and parallel versions of
message passing.
(6+3+3=12 Marks)
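
The following brute-force sketch can be used to sanity-check the hand-run max-product and sum-product computations in (a); on a graph this small, exhaustive enumeration over the unobserved variables must agree with message passing. The names (psi_SF, weight, etc.) are ours; the table values are copied from the problem statement.

    from itertools import product

    seasons = ["winter", "spring", "summer", "autumn"]
    locations = ["north", "south"]
    fruits = ["b", "j"]
    psi_SF = {("winter", "b"): 0.9, ("winter", "j"): 0.1,
              ("spring", "b"): 0.3, ("spring", "j"): 0.7,
              ("summer", "b"): 0.4, ("summer", "j"): 0.6,
              ("autumn", "b"): 0.8, ("autumn", "j"): 0.2}
    psi_LF = {("north", "b"): 0.65, ("north", "j"): 0.35,
              ("south", "b"): 0.25, ("south", "j"): 0.75}
    psi_CF = {("light", "b"): 0.33, ("medium", "b"): 0.33, ("dark", "b"): 0.34,
              ("light", "j"): 0.8, ("medium", "j"): 0.1, ("dark", "j"): 0.1}
    psi_ZF = {("big", "b"): 0.4, ("small", "b"): 0.6,
              ("big", "j"): 0.95, ("small", "j"): 0.05}

    def weight(s, l, f):
        # Product of clique potentials with evidence Z=small, C=medium clamped.
        return psi_SF[(s, f)] * psi_LF[(l, f)] * psi_CF[("medium", f)] * psi_ZF[("small", f)]

    # Sum-product: unnormalized marginal of S given the evidence, then normalize.
    marg = {s: sum(weight(s, l, f) for l, f in product(locations, fruits)) for s in seasons}
    Z = sum(marg.values())
    # Max-product: maximize over the remaining unobserved variables for each season.
    mx = {s: max(weight(s, l, f) for l, f in product(locations, fruits)) for s in seasons}
    print("sum-product marginals:", {s: round(p / Z, 4) for s, p in marg.items()})
    print("max-product scores   :", mx, "-> MAP season:", max(mx, key=mx.get))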
2. Consider the following directed graphical model: $V = \{X_1, X_2, X_3, X_4\}$, $E = \{(X_1, X_3), (X_2, X_3), (X_3, X_4)\}$. None of the variables are observed. Show that $X_1 \perp\!\!\!\perp X_2$. Now suppose we observe the variable $X_4$. Show that $X_1 \not\perp\!\!\!\perp X_2 \mid X_4$.

Now explain how the conditional distribution for a node X in a directed graph, conditioned on all of the nodes in its Markov blanket, is independent of the remaining variables in the graph.
(3+3=6 Marks)
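
As a numerical illustration of the two independence claims (a sketch, not the required proof; all variables binary, names ours), one can build a random distribution that factorizes according to this DAG and check the pattern:

    import numpy as np

    rng = np.random.default_rng(0)
    p1 = rng.dirichlet([1, 1])                # p(x1)
    p2 = rng.dirichlet([1, 1])                # p(x2)
    p3 = rng.dirichlet([1, 1], size=(2, 2))   # p(x3 | x1, x2)
    p4 = rng.dirichlet([1, 1], size=2)        # p(x4 | x3)

    # Joint p(x1,x2,x3,x4) from the factorization p(x1) p(x2) p(x3|x1,x2) p(x4|x3).
    joint = np.einsum("a,b,abc,cd->abcd", p1, p2, p3, p4)

    # Marginally, X1 and X2 are independent (head-to-head node X3 is unobserved).
    px1x2 = joint.sum(axis=(2, 3))
    print("X1 indep X2 (marginal):", np.allclose(px1x2, np.outer(p1, p2)))  # True

    # Conditioning on the descendant X4 couples X1 and X2 ("explaining away").
    px1x2x4 = joint.sum(axis=2)
    px4 = px1x2x4.sum(axis=(0, 1))
    for v in range(2):
        cond = px1x2x4[:, :, v] / px4[v]                     # p(x1, x2 | X4=v)
        prod = np.outer(cond.sum(axis=1), cond.sum(axis=0))  # p(x1|v) p(x2|v)
        print(f"X1 indep X2 given X4={v}:", np.allclose(cond, prod))  # typically False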
3. Let $T = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ be the training data, where for each $i \in \{1, \ldots, m\}$, $x_i = [x_{i1}, \ldots, x_{in}]^\top \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$ represent the $i^{th}$ input data point and the corresponding label respectively. Let $\mathcal{V}$ be a set of indices for the conjunctive features (which we will also refer to as rules). We assume that each rule $\phi_v(x)$, $v \in \mathcal{V}$, is a conjunction of basic propositions concerning the input feature values of x; thus, $\phi_v : \mathbb{R}^n \to \{0, 1\}$. For a nominal input feature, say $j \in \{1, 2, \ldots, n\}$, taking nominal values from the set $\{a_{j1}, \ldots, a_{jn_j}\}$, the basic propositions evaluated at a data point x take one of the forms: $x_j = a_{jk}$ or $x_j \neq a_{jk}$ for all $k = 1, \ldots, n_j$. For a numerical input feature, say $j \in \{1, 2, \ldots, n\}$, we pick $n_j$ critical points, say $a_{j1}, \ldots, a_{jn_j}$. The basic propositions in this case are of one of the following forms: $x_j \geq a_{jk}$ and $x_j \leq a_{jk}$ for all $k = 1, \ldots, n_j$. To summarize, the total number of basic rules is $p = \sum_{j=1}^{n} n_j$. We consider $\mathcal{V}$ to be the set of all possible conjunctions of the p propositions. Then, $\phi_v(x)$ is the $v^{th}$ conjunction of the basic rules evaluated on x. Denote by $\mathbf{f}$ and $\boldsymbol{\phi}(\cdot)$ the vectors with entries $f_v$ and $\phi_v(\cdot)$ for all $v \in \mathcal{V}$ respectively.

Then the prediction model for learning with conjunctive features takes the form:

$$F(x) = \sum_{v \in \mathcal{V}} f_v \, \phi_v(x)$$

where x is the input data point at which the prediction is made, $\phi_v(\cdot)$ is the $v^{th}$ rule/conjunctive feature function and $f_v$ is the weight given to this function. Point x is classified to the class indicated by the sign of F(x), i.e., sgn(F(x)).

Consider any partial order $\preceq$ between the elements of the input space $\mathbb{R}^n$. Now the dataset T is said to be monotone if for each $(x_p, y_p) \in T$ and $(x_q, y_q) \in T$ and for the given partial order $\preceq$,

$$x_p \preceq x_q \Rightarrow y_p \leq y_q$$

And F(x) is said to be monotone if, for any $x_i \in \mathbb{R}^n$ and any $x_j \in \mathbb{R}^n$ and the given $\preceq$,

$$x_i \preceq x_j \Rightarrow \mathrm{sgn}(F(x_i)) \leq \mathrm{sgn}(F(x_j))$$

Prove that there exists a monotone decision function F(x) separating a dataset T if and only if T is monotone.
(8 Marks)
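
A small sketch of the dataset-monotonicity definition, assuming (purely for illustration) the componentwise partial order on $\mathbb{R}^n$; the function names and the toy datasets are ours:

    def preceq(xp, xq):
        """Componentwise partial order: xp precedes xq iff xp <= xq coordinatewise."""
        return all(a <= b for a, b in zip(xp, xq))

    def is_monotone(data):
        """data: list of (x, y) pairs with y in {-1, +1}.
        T is monotone iff x_p preceq x_q implies y_p <= y_q for every pair."""
        return all(yp <= yq
                   for xp, yp in data
                   for xq, yq in data
                   if preceq(xp, xq))

    T_good = [((0, 0), -1), ((0, 1), -1), ((1, 1), +1)]
    T_bad = [((0, 0), +1), ((1, 1), -1)]            # (0,0) preceq (1,1) but +1 > -1
    print(is_monotone(T_good), is_monotone(T_bad))  # True False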
4. Explain the significance(s) of total unimodularity in MAP inference in Markov Networks. Give an example of a Markov Network along with its potentials for which the MAP inference problem is not totally unimodular.
(7 Marks)
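
For intuition, total unimodularity can be tested by brute force on small matrices: every square submatrix must have determinant in {-1, 0, +1}. The helper below (our own sketch, feasible only for tiny matrices) rejects the edge-vertex incidence matrix of an odd cycle, the classic structure behind non-integral MAP LP relaxations:

    from itertools import combinations
    import numpy as np

    def is_totally_unimodular(A):
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        for k in range(1, min(m, n) + 1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                    if d not in (-1, 0, 1):
                        return False
        return True

    # Incidence matrix of a 3-cycle (each row is an edge over 3 vertices).
    C3 = [[1, 1, 0],
          [0, 1, 1],
          [1, 0, 1]]
    print(is_totally_unimodular(C3))  # False: the full 3x3 determinant is 2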
5. Consider the item-set lattice $\langle 2^S, \subseteq \rangle$ and the criteria sub(h) and sup(h) for any $h \subseteq S$, such that sub(h) is true (let true = 1 and false = 0) if and only if none of the subsets of h have the same frequency as h, and sup(h) is true if and only if none of the supersets of h have the same frequency as h. Let the frequency be given by $\mathrm{freq}(h) = |\mathrm{covers}_D(h)|$ where $\mathrm{covers}_D(h) = \{e \mid e \in D, e \supseteq h\}$. Do sub(·) and sup(·) satisfy the anti-monotonicity or monotonicity property? Argue why.

Recall: a criterion $q_r : 2^S \to \{0, 1\}$ is monotonic if $q_r(h) = 0 \Rightarrow q_r(h') = 0$ for all $h' \subseteq h$, and is anti-monotonic if $q_r(h) = 0 \Rightarrow q_r(h') = 0$ for all $h' \supseteq h$.
(4 Marks)
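
To build intuition before arguing (anti-)monotonicity, here is a toy instantiation (the transaction dataset D below is made up, and the helper names are ours) that evaluates freq, sub(·) and sup(·) on a small lattice:

    from itertools import combinations

    S = frozenset("ABC")
    D = [frozenset("AB"), frozenset("ABC"), frozenset("AC"), frozenset("A")]

    def freq(h):
        return sum(1 for e in D if h <= e)   # |covers_D(h)|

    def subsets(h):    # proper subsets of h
        return [frozenset(c) for k in range(len(h)) for c in combinations(h, k)]

    def supersets(h):  # proper supersets of h within S
        extra = S - h
        return [h | frozenset(c) for k in range(1, len(extra) + 1)
                for c in combinations(extra, k)]

    def sub(h):  # true iff no subset of h has the same frequency as h
        return all(freq(g) != freq(h) for g in subsets(h))

    def sup(h):  # true iff no superset of h has the same frequency as h
        return all(freq(g) != freq(h) for g in supersets(h))

    for h in [frozenset(), frozenset("A"), frozenset("AB"), frozenset("ABC")]:
        print(set(h) or "{}", freq(h), sub(h), sup(h))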
6. We define an (edit) distance measure [6] between $c_1, c_2 \in \mathcal{L}$ for a partial order $\langle \mathcal{L}, \preceq \rangle$ as

$$d(c_1, c_2) = |c_1| + |c_2| - 2\,|\mathrm{mgg}(c_1, c_2)| \quad (1)$$

where $|\cdot| : \mathcal{L} \to \mathbb{R}$ is monotonically decreasing, i.e.,

$$\forall c_1, c_2 \in \mathcal{L} : c_1 \preceq c_2 \Rightarrow |c_1| \geq |c_2|$$

and is strictly order preserving, i.e.,

$$\forall c_1, c_2 \in \mathcal{L} : c_1 \prec c_2 \Rightarrow |c_1| > |c_2|$$

Minimally general generalization (mgg) is defined as

$$\mathrm{mgg}(c_1, c_2) = \min \{c \in \mathcal{L} \mid c \succeq c_1, c \succeq c_2\}$$

while maximally general specialization (mgs) is defined as

$$\mathrm{mgs}(c_1, c_2) = \max \{c \in \mathcal{L} \mid c \preceq c_1, c \preceq c_2\}$$

Since $\mathrm{mgg}(c_1, c_2)$ and $\mathrm{mgs}(c_1, c_2)$ need not be unique [7], we define

$$|\mathrm{mgg}(c_1, c_2)| = \max_{m \in \mathrm{mgg}(c_1, c_2)} |m|$$

and

$$|\mathrm{mgs}(c_1, c_2)| = \max_{m \in \mathrm{mgs}(c_1, c_2)} |m|$$

Prove that d is a metric [8] if $|\cdot|$ satisfies the following inequality

$$|x| + |y| \leq |\mathrm{mgg}(x, y)| + |\mathrm{mgs}(x, y)|$$

in addition to the already described properties of $|\cdot|$.
[8 Marks]
For objects such as strings, trees and graphs, simple $|\cdot|$ measures can be used to define distance metrics.
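
For instance, in the usual string instantiation (a sketch of ours, not part of the question): if generalizations of strings are common subsequences and $|\cdot|$ is string length, then $|\mathrm{mgg}|$ is the longest-common-subsequence (LCS) length and d reduces to the indel edit distance:

    def lcs_len(a, b):
        """Dynamic-programming LCS length: plays the role of |mgg(a, b)| here."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
        return dp[len(a)][len(b)]

    def d(a, b):
        return len(a) + len(b) - 2 * lcs_len(a, b)

    # For two strings, LCS length + shortest-common-supersequence length equals
    # |a| + |b|, so the inequality in the problem holds (with equality) and d is
    # indeed a metric in this instantiation.
    print(d("jambul", "jambul"), d("jambul", "blackberry"))
    for x, y, z in [("ab", "abc", "bc")]:
        assert d(x, z) <= d(x, y) + d(y, z)  # spot-check the triangle inequality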
7. In the class, we discussed two representations for sequence labeling using StructSVM: (i) one in terms of a regular grammar (a restricted version of context-free grammar), leading to a modified version of CYK for MAP inference, and (ii) the other in terms of emissions and transitions, leading to the max-product algorithm. Explain (a) the feature representation $\phi(x, y)$ in each case and (b) the time complexities using each of these approaches. How would you decide which representation to choose for a particular application?
[8 Marks]
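
As a concrete reference point for representation (ii) (our own simplified encoding, not the one fixed in class), an emission/transition feature map counts (token, label) emissions and (label, label) transitions:

    from collections import Counter

    def phi(x, y):
        """x: list of tokens, y: list of labels of the same length."""
        feats = Counter()
        for t, (tok, lab) in enumerate(zip(x, y)):
            feats[("emit", tok, lab)] += 1
            if t > 0:
                feats[("trans", y[t-1], lab)] += 1
        return feats

    x = ["the", "fruit", "jambul"]
    y = ["DET", "NOUN", "NOUN"]
    print(phi(x, y))
    # With L labels and sentence length T, max-product (Viterbi) over this
    # representation costs O(T * L^2), whereas CYK over a grammar-based
    # representation incurs cubic-in-T costs scaled by the grammar size.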
[6] This measure is relevant in the context of relational kernels.
[7] If they happen to be unique, they correspond to the lub and glb respectively.
[8] A metric on a set D is a function $f : D \times D \to \mathbb{R}$ such that $\forall x, y, z \in D$: (a) $f(x, y) \geq 0$; (b) $f(x, y) = 0$ if and only if $x = y$; (c) $f(x, y) = f(y, x)$; (d) $f(x, z) \leq f(x, y) + f(y, z)$.