Information Theory and Computing
Assignment No. 1: April 10, 2020
Que. 1.
Total number of outcomes = 32. Since the random variable is uniformly distributed, the probability of each outcome is $\frac{1}{32}$.
Entropy of the random variable X:
\begin{align*}
H(X) &= -\sum_{i=1}^{32} p(x_i) \log_2 p(x_i) \\
     &= -32 \cdot \frac{1}{32} \log_2 \frac{1}{32} \\
     &= -\log_2 \frac{1}{32} \\
     &= \log_2 32 \\
     &= 5 \text{ bits}
\end{align*}
∴ H(X) = 5 bits
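As a quick numerical check, here is a minimal sketch using only Python's standard library (the helper name entropy_bits is our own, not something given in the assignment):

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum_x p(x) * log2 p(x); zero-probability terms are skipped."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Que. 1: 32 equally likely outcomes, each with probability 1/32.
print(entropy_bits([1 / 32] * 32))  # -> 5.0
```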
Que. 2.
Entropy of the random variable X:
\begin{align*}
H(X) &= -\sum_x p(x) \log_2 p(x) \\
     &= -\left[ \frac{1}{2}\log_2\frac{1}{2} + \frac{1}{4}\log_2\frac{1}{4} + \frac{1}{8}\log_2\frac{1}{8} + \frac{1}{16}\log_2\frac{1}{16} + \frac{4}{64}\log_2\frac{1}{64} \right] \\
     &= \frac{1}{2}\log_2 2 + \frac{1}{4}\log_2 4 + \frac{1}{8}\log_2 8 + \frac{1}{16}\log_2 16 + \frac{4}{64}\log_2 64 \\
     &= \frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{16}\cdot 4 + \frac{4}{64}\cdot 6 \\
     &= \frac{1}{2} + \frac{1}{2} + \frac{3}{8} + \frac{1}{4} + \frac{3}{8} \\
     &= 2 \text{ bits}
\end{align*}
∴ H(X) = 2 bits
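A one-line check of the same arithmetic; note that "4 × 1/64" is read here as four outcomes of probability 1/64 each, which is the reading consistent with the 2-bit total:

```python
from math import log2

# Que. 2 distribution: (1/2, 1/4, 1/8, 1/16) plus four outcomes of 1/64 each.
probs = [1 / 2, 1 / 4, 1 / 8, 1 / 16] + [1 / 64] * 4
assert abs(sum(probs) - 1) < 1e-12  # sanity check: probabilities sum to 1
print(-sum(p * log2(p) for p in probs))  # -> 2.0
```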
Que. 3.
From the given marginal distribution of X:
\begin{align*}
H(X) &= -\sum_x p(x) \log_2 p(x) \\
     &= -\left[ \frac{1}{2}\log_2\frac{1}{2} + \frac{1}{4}\log_2\frac{1}{4} + \frac{1}{8}\log_2\frac{1}{8} + \frac{1}{8}\log_2\frac{1}{8} \right] \\
     &= \frac{1}{2}\log_2 2 + \frac{1}{4}\log_2 4 + \frac{1}{8}\log_2 8 + \frac{1}{8}\log_2 8 \\
     &= \frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{8}\cdot 3 \\
     &= 1 + 2\cdot\frac{3}{8} \\
     &= \frac{7}{4} \text{ bits}
\end{align*}
∴ H(X) = 7/4 bits
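The same check for this marginal (a sketch, assuming the distribution (1/2, 1/4, 1/8, 1/8) as stated):

```python
from math import log2

# Que. 3 marginal of X: (1/2, 1/4, 1/8, 1/8).
probs = [1 / 2, 1 / 4, 1 / 8, 1 / 8]
print(-sum(p * log2(p) for p in probs))  # -> 1.75, i.e. 7/4 bits
```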
Que. 4.
From the given marginal distribution of X:
\begin{align*}
H(X) &= -\sum_x p(x) \log_2 p(x) \\
     &= -\left[ \frac{1}{2}\log_2\frac{1}{2} + \frac{1}{4}\log_2\frac{1}{4} + \frac{1}{8}\log_2\frac{1}{8} + \frac{1}{8}\log_2\frac{1}{8} \right] \\
     &= \frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{8}\cdot 3 \\
     &= 1 + 2\cdot\frac{3}{8} \\
     &= \frac{7}{4} \text{ bits}
\end{align*}
∴ H(X) = 7/4 bits

Similarly, the marginal distribution of Y is uniform, p(y) = (1/4, 1/4, 1/4, 1/4), so
\[
H(Y) = -4\cdot\frac{1}{4}\log_2\frac{1}{4} = \log_2 4 = 2 \text{ bits}
\]
∴ H(Y) = 2 bits
Using the values of the joint probability distribution of X and Y:
\begin{align*}
H(X, Y) &= -\sum_i \sum_j p(x_i, y_j) \log_2 p(x_i, y_j) \\
        &= \frac{1}{4}\log_2 4 + 2\times\frac{1}{8}\log_2 8 + 6\times\frac{1}{16}\log_2 16 + 4\times\frac{1}{32}\log_2 32 \\
        &= \frac{1}{8}\left[ 2\log_2 4 + 2\log_2 8 + 3\log_2 16 + \log_2 32 \right] \\
        &= \frac{1}{8}\left[ 2\times 2 + 2\times 3 + 3\times 4 + 5 \right] \\
        &= \frac{27}{8} \text{ bits}
\end{align*}
∴ H(X, Y) = 27/8 = 3.375 bits
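Since the joint table itself is not reproduced here, a sketch can still verify H(X, Y) from the multiset of nonzero cell values read off the calculation above (one 1/4, two 1/8, six 1/16, four 1/32; any zero cells contribute nothing):

```python
from math import log2

# Nonzero joint probabilities taken from the calculation above.
joint = [1 / 4] + [1 / 8] * 2 + [1 / 16] * 6 + [1 / 32] * 4
assert abs(sum(joint) - 1) < 1e-12  # the cells form a valid joint distribution
print(-sum(p * log2(p) for p in joint))  # -> 3.375, i.e. 27/8 bits
```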
Que. 6.
(a) This follows from the chain rule for entropy applied to the random variables X and g(X), i.e. H(X, Y) = H(X) + H(Y|X), so H(X, g(X)) = H(X) + H(g(X)|X).
(c) Again, this follows from the chain rule for entropy, here in the form H(X, Y) = H(Y) + H(X|Y), so H(X, g(X)) = H(g(X)) + H(X|g(X)).
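A small numerical illustration of the point in (a); the example (X uniform on {0, 1, 2, 3} with g(x) = x mod 2) is our own choice, not taken from the assignment:

```python
from math import log2
from collections import Counter

# Toy example: X uniform on {0,1,2,3}, g(x) = x % 2. Because g(X) is fully
# determined by X, H(g(X)|X) = 0, and the chain rule
# H(X, g(X)) = H(X) + H(g(X)|X) collapses to H(X, g(X)) = H(X).
px = {x: 1 / 4 for x in range(4)}
joint = Counter()
for x, p in px.items():
    joint[(x, x % 2)] += p  # each x maps to exactly one pair (x, g(x))

H = lambda dist: -sum(p * log2(p) for p in dist.values() if p > 0)
print(H(joint), H(px))  # both -> 2.0, confirming H(X, g(X)) = H(X)
```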
Que. 7.
Computing the marginal distributions:
\[
p(x) = \left( \frac{2}{3}, \frac{1}{3} \right), \qquad p(y) = \left( \frac{1}{3}, \frac{2}{3} \right)
\]
(a) H(X), H(Y)
\begin{align*}
H(X) &= -\left( \frac{2}{3}\log_2\frac{2}{3} + \frac{1}{3}\log_2\frac{1}{3} \right) = 0.918 \text{ bits} \\
H(Y) &= -\left( \frac{1}{3}\log_2\frac{1}{3} + \frac{2}{3}\log_2\frac{2}{3} \right) = 0.918 \text{ bits}
\end{align*}
(b) H(X|Y), H(Y|X)
\begin{align*}
H(X|Y) &= \sum_{j=0}^{1} p(y = j)\, H(X|Y = j) \\
       &= \frac{1}{3} H(1, 0) + \frac{2}{3} H\!\left( \frac{1}{2}, \frac{1}{2} \right) \\
       &= \frac{2}{3}
\end{align*}
∴ H(X|Y) = 2/3 bits
\begin{align*}
H(Y|X) &= \sum_{i=0}^{1} p(x = i)\, H(Y|X = i) \\
       &= \frac{2}{3} H(Y|X = 0) + \frac{1}{3} H(Y|X = 1) \\
       &= \frac{2}{3} H\!\left( \frac{1}{2}, \frac{1}{2} \right) + \frac{1}{3} H(0, 1) \\
       &= \frac{2}{3}
\end{align*}
∴ H(Y|X) = 2/3 bits
(c) H(X,Y)
\begin{align*}
H(X, Y) &= -\sum_{x=0}^{1} \sum_{y=0}^{1} p(x, y) \log_2 p(x, y) \\
        &= -3\cdot\frac{1}{3}\log_2\frac{1}{3} \\
        &= \log_2 3 = 1.5849625 \text{ bits}
\end{align*}
(e) I(X;Y)
\begin{align*}
I(X; Y) &= \sum_{x,y} p(x, y) \log_2 \frac{p(x, y)}{p(x)\,p(y)} \\
        &= \frac{1}{3}\log_2\frac{1/3}{\frac{2}{3}\cdot\frac{1}{3}} + \frac{1}{3}\log_2\frac{1/3}{\frac{2}{3}\cdot\frac{2}{3}} + \frac{1}{3}\log_2\frac{1/3}{\frac{1}{3}\cdot\frac{2}{3}} \\
        &= 0.25162916 \text{ bits}
\end{align*}
∴ I(X; Y) = 0.25162916 bits
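All the quantities in this question can be verified from the joint distribution implied by the conditionals used above (p(0,0) = p(0,1) = p(1,1) = 1/3 and p(1,0) = 0); a minimal sketch:

```python
from math import log2

# Joint distribution reconstructed from the conditionals used in part (b).
p = {(0, 0): 1 / 3, (0, 1): 1 / 3, (1, 0): 0.0, (1, 1): 1 / 3}

# Marginals of X and Y obtained by summing out the other variable.
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1)}

H = lambda vals: -sum(q * log2(q) for q in vals if q > 0)

Hx, Hy, Hxy = H(px.values()), H(py.values()), H(p.values())
print(Hx, Hy)         # H(X), H(Y) -> 0.918..., 0.918...
print(Hxy)            # H(X,Y)     -> 1.5849... = log2(3)
print(Hxy - Hy)       # H(X|Y)     -> 0.666...
print(Hxy - Hx)       # H(Y|X)     -> 0.666...
print(Hx + Hy - Hxy)  # I(X;Y)     -> 0.25162...
```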