Discrete Random Variable: Entropy and Mutual Information
Definitions:
$H(X) = -\sum_x p(x)\log p(x)$; $I(X;Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)p(y)} = H(X) - H(X|Y)$
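As a quick illustration of these definitions, here is a small Python sketch (not part of the original notes; the joint pmf is an arbitrary example in the style of Cover & Thomas) that computes $H(X)$, $H(Y)$, $H(X,Y)$, and $I(X;Y)$:

```python
import numpy as np

def entropy(p):
    """H(p) = -sum p log2 p, skipping zero-probability outcomes."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example joint pmf p(x, y); rows index x, columns index y.
p_xy = np.array([[1/8,  1/16, 1/32, 1/32],
                 [1/16, 1/8,  1/32, 1/32],
                 [1/16, 1/16, 1/16, 1/16],
                 [1/4,  0,    0,    0   ]])

p_x = p_xy.sum(axis=1)                   # marginal of X
p_y = p_xy.sum(axis=0)                   # marginal of Y

H_X, H_Y, H_XY = entropy(p_x), entropy(p_y), entropy(p_xy)
I_XY = H_X + H_Y - H_XY                  # I(X;Y) = H(X) + H(Y) - H(X,Y)
print(H_X, H_Y, H_XY, I_XY)              # 2.0, 1.75, 3.375, 0.375
```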
Basic Properties:
$H(X) \ge 0$; $I(X;Y) \ge 0$; $H(X) \ge H(X|Y)$
Chain Rule:
$H(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i \mid X_{i-1}, \ldots, X_1)$
$I(X_1, X_2, \ldots, X_n; Y) = \sum_{i=1}^{n} I(X_i; Y \mid X_{i-1}, \ldots, X_1)$
Entropy Bound:
$H(X_1, \ldots, X_n) \le \sum_{i=1}^{n} H(X_i)$, with equality iff the $X_i$ are independent.
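Both the chain rule and the entropy bound are easy to verify numerically. A minimal sketch (not from the notes; the 2×2 joint pmf is hypothetical):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.4, 0.1],
                 [0.2, 0.3]])                  # hypothetical joint pmf
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
# H(Y|X) = sum_x p(x) H(Y | X = x), from the conditional pmfs p(y|x)
H_Y_given_X = sum(px * entropy(row / px) for px, row in zip(p_x, p_xy))

assert np.isclose(entropy(p_xy), entropy(p_x) + H_Y_given_X)  # chain rule
assert entropy(p_xy) <= entropy(p_x) + entropy(p_y) + 1e-12   # entropy bound
print("chain rule and entropy bound verified")
```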
Important Inequality:
Markov Chain: $p(x,y,z) = p(x)\,p(y|x)\,p(z|y)$
Conditional Independence: $p(x,z|y) = p(x|y)\,p(z|y)$
$X \to Y \to Z \iff X \perp Z \mid Y$
Data Processing: if $X \to Y \to Z$, then $I(X;Y) \ge I(X;Z)$ and $I(X;Y) \ge I(X;Y|Z)$

Fano's Inequality: for $X \to Y \to \hat{X}$ with $P_e = \Pr\{\hat{X} \ne X\}$,
$H(P_e) + P_e \log|\mathcal{X}| \ge H(X|\hat{X}) \ge H(X|Y)$,
and in the weaker form $P_e \ge \frac{H(X|Y) - 1}{\log|\mathcal{X}|}$.
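A worked number for the weak form, with hypothetical values ($H(X|Y) = 1.5$ bits, $|\mathcal{X}| = 4$):

```python
import numpy as np

def fano_lower_bound(H_X_given_Y, alphabet_size):
    """Weak form of Fano's inequality: Pe >= (H(X|Y) - 1) / log2|X|."""
    return (H_X_given_Y - 1) / np.log2(alphabet_size)

# If 1.5 bits of uncertainty about X remain after observing Y, no decoder
# over a 4-symbol alphabet can err less than 25% of the time.
print(fano_lower_bound(1.5, 4))   # 0.25
```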
AEP
Discrete Case
Definition: $A_\epsilon^{(n)} = \{x^{(n)} : |-\frac{1}{n}\log p(x^{(n)}) - H(X)| < \epsilon\}$
Properties: $\Pr\{A_\epsilon^{(n)}\} > 1 - \epsilon$ for $n$ sufficiently large.
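The concentration behind the AEP is easy to see empirically. A sketch (illustrative, with arbitrary parameters) for iid Bernoulli(0.3) sequences:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, eps, trials = 0.3, 1000, 0.05, 2000
H = -(p*np.log2(p) + (1 - p)*np.log2(1 - p))   # H(X) ~ 0.881 bits

x = rng.random((trials, n)) < p                # iid Bernoulli(p) sequences
k = x.sum(axis=1)                              # number of ones per sequence
# -(1/n) log2 p(x^n) depends only on k: -(k log2 p + (n-k) log2(1-p)) / n
rate = -(k*np.log2(p) + (n - k)*np.log2(1 - p)) / n
print(np.mean(np.abs(rate - H) < eps))         # fraction typical, close to 1
```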
Jointly Typical
Definition:
$A_\epsilon^{(n)} = \{(x^{(n)}, y^{(n)}) : |-\frac{1}{n}\log p(x^{(n)}) - H(X)| < \epsilon,$
$\quad |-\frac{1}{n}\log p(y^{(n)}) - H(Y)| < \epsilon,$
$\quad |-\frac{1}{n}\log p(x^{(n)}, y^{(n)}) - H(X,Y)| < \epsilon\}$
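To make the definition concrete, here is an illustrative sketch (not from the notes): $X$ is a fair bit and $Y = X \oplus Z$ with $Z \sim \text{Bernoulli}(0.1)$. Both marginals are fair coins, so the two single-sequence conditions hold exactly, and joint typicality reduces to the empirical flip rate between $x^{(n)}$ and $y^{(n)}$ being close to $0.1$. Pairs drawn together are jointly typical with high probability; independently drawn pairs essentially never are (probability about $2^{-nI(X;Y)}$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, q, trials = 5000, 0.05, 0.1, 500
H2 = lambda p: -(p*np.log2(p) + (1 - p)*np.log2(1 - p))
H_XY = 1 + H2(q)                         # H(X,Y) = H(X) + H(Y|X) ~ 1.469 bits

x = rng.integers(0, 2, (trials, n))
y = x ^ (rng.random((trials, n)) < q)    # y drawn jointly with x
y_ind = rng.integers(0, 2, (trials, n))  # y drawn independently of x

def joint_rate(x, y):
    """-(1/n) log2 p(x^n, y^n) for this source, via the empirical flip rate."""
    d = np.mean(x != y, axis=1)
    return 1 - (d*np.log2(q) + (1 - d)*np.log2(1 - q))

print(np.mean(np.abs(joint_rate(x, y) - H_XY) < eps))      # ~1.0
print(np.mean(np.abs(joint_rate(x, y_ind) - H_XY) < eps))  # ~0.0
```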
Continuous Case
Definition: $h(X) = -\int f(x)\log f(x)\,dx$
Chain Rule: $h(X_1, \ldots, X_n) = \sum_{i=1}^{n} h(X_i \mid X_{i-1}, \ldots, X_1)$
Entropy Bound: $h(X_1, \ldots, X_n) \le \sum_{i=1}^{n} h(X_i)$
Important Distributions:
Uniform: $h(X) = \log a$, $x \in [0, a]$
Single Variable Normal: $x \sim N(0, \sigma^2)$, $h(x) = \frac{1}{2}\ln 2\pi e\sigma^2$
Multivariate Normal: $f(x) = \frac{1}{(\sqrt{2\pi})^n |K|^{1/2}}\, e^{-\frac{1}{2}(x-\mu)^T K^{-1}(x-\mu)}$, $h(X) = \frac{1}{2}\ln(2\pi e)^n |K|$
Maximum Entropy Bound: for $X$ with $E(X) = 0$ and $E(XX^T) = K$, $\max_{E(XX^T)=K} h(X) = \frac{1}{2}\ln(2\pi e)^n |K|$, achieved by the Gaussian.
AEP Definition: $A_\epsilon^{(n)} = \{x^{(n)} \in S^n : |-\frac{1}{n}\log f(x^{(n)}) - h(X)| \le \epsilon\}$
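The normal-entropy formula and the continuous AEP can be checked together by Monte Carlo: for iid samples, $-\frac{1}{n}\sum_i \ln f(x_i)$ converges to $h(X)$. A sketch with an arbitrary $\sigma^2 = 2$:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2, n = 2.0, 1_000_000
x = rng.normal(0.0, np.sqrt(sigma2), n)

log_f = -0.5*np.log(2*np.pi*sigma2) - x**2/(2*sigma2)  # ln f(x) pointwise
print(-log_f.mean())                        # Monte Carlo estimate of h(X)
print(0.5*np.log(2*np.pi*np.e*sigma2))      # closed form, ~1.7655 nats
```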
Single Channel
$Y_i = X_i + Z_i$; $Z_i \sim N(0, N)$; power constraint $\frac{1}{n}\sum_i x_i^2 \le P$
$C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$
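A worked number, with hypothetical $P$ and $N$:

```python
import numpy as np

P, N = 10.0, 1.0                     # hypothetical signal power and noise
C = 0.5 * np.log2(1 + P / N)
print(C)                             # ~1.73 bits per channel use
```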
Parallel Channel
The parallel Gaussian channel is defined as
$Y_j = X_j + Z_j$, $1 \le j \le k$, $Z_j \sim N(0, N_j)$
$C = \max_{\sum P_i \le P} \sum_i I(X_i; Y_i)$, where $I(X_i; Y_i) = h(Y_i) - h(Z_i) \le \frac{1}{2}\log\left(1 + \frac{P_i}{N_i}\right)$
The Lagrangian of the system is
$L = \sum_i \frac{1}{2}\log\left(1 + \frac{P_i}{N_i}\right) + \sum_i \lambda_i P_i + \nu\left(\sum_i P_i - P\right)$
For optimality (KKT conditions):
$\lambda_i \ge 0$, $\lambda_i P_i = 0$
which yield the water-filling solution $P_i = (\nu' - N_i)^+$, with the water level $\nu'$ chosen so that $\sum_i P_i = P$.
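The water level can be found by bisection. A sketch (noise levels and total power are hypothetical):

```python
import numpy as np

def water_filling(noise, total_power, iters=100):
    """Bisection on the water level nu; returns the optimal P_i = (nu - N_i)^+."""
    lo, hi = 0.0, noise.max() + total_power
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        power = np.maximum(nu - noise, 0.0)    # P_i = (nu - N_i)^+
        if power.sum() > total_power:
            hi = nu                            # water level too high
        else:
            lo = nu
    return power

noise = np.array([1.0, 2.0, 4.0, 8.0])         # hypothetical N_i
P = water_filling(noise, total_power=5.0)      # -> [3, 2, 0, 0]
C = 0.5 * np.log2(1 + P / noise).sum()         # capacity at the optimum
print(P, P.sum(), C)                           # C = 1.5 bits per channel use
```

Note how the allocation pours power into the quietest channels first and leaves the noisiest ($N_i \ge \nu'$) unused, exactly as complementary slackness requires.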