Channel Capacity and Models
Statistically, a communication channel is usually modelled as a triple consisting of an input alphabet, an output alphabet, and, for each pair (x, y) of input and output elements, a transition probability p(y|x). The transition probability is the probability that symbol y is received given that x was transmitted over the channel.
[Figure: a discrete channel with input alphabet $X = \{x_0, \dots, x_{J-1}\}$, output alphabet $Y = \{y_0, \dots, y_{K-1}\}$, and transition probabilities $P(y_k \mid x_j)$ labelling the arrows.]
Definition of DMC
A discrete memoryless channel (DMC) is a channel with input X and output Y, where Y is a noisy version of X.
Discrete: both alphabets X and Y are of finite size.
Memoryless: the current output symbol depends only on the current input symbol, not on earlier ones.
The channel is described by its transition probability matrix, of size $J$ by $K$:
$$P(Y \mid X) = \begin{bmatrix} p(y_0 \mid x_0) & \cdots & p(y_{K-1} \mid x_0) \\ \vdots & \ddots & \vdots \\ p(y_0 \mid x_{J-1}) & \cdots & p(y_{K-1} \mid x_{J-1}) \end{bmatrix}$$
Each row of the matrix sums to one:
$$\sum_{k=0}^{K-1} p(y_k \mid x_j) = 1 \quad \text{for all } j$$
The marginal probability distribution of the output Y is obtained by averaging over the inputs:
$$p(y_k) = P(Y = y_k) = \sum_{j=0}^{J-1} P(Y = y_k \mid X = x_j)\, P(X = x_j) = \sum_{j=0}^{J-1} p(y_k \mid x_j)\, p(x_j), \quad k = 0, 1, \dots, K-1$$
The input distribution is the vector $P(X) = [\,P(x_0), P(x_1), \dots, P(x_{J-1})\,]$.
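As a quick sanity check (my own sketch, not part of the original notes), both the row-sum condition and the output marginal are one line each in NumPy; the transition matrix used here is the one from Example 1 below.

```python
import numpy as np

# Transition matrix of the channel from Example 1 below:
# rows are inputs x_j, columns are outputs y_k (J = 2, K = 3).
P_Y_given_X = np.array([[0.8, 0.15, 0.05],
                        [0.05, 0.15, 0.8]])

# Each row must sum to one: sum_k p(y_k | x_j) = 1 for all j.
assert np.allclose(P_Y_given_X.sum(axis=1), 1.0)

# Input distribution P(X) = [P(x_0), ..., P(x_{J-1})].
P_X = np.array([0.5, 0.5])

# Marginal output distribution: p(y_k) = sum_j p(y_k | x_j) p(x_j).
P_Y = P_X @ P_Y_given_X
print(P_Y)  # [0.425 0.15  0.425]
```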
Example 1
Let X = {0, 1} with equally probable symbols, and let Y be the three-element set Y = {y_0, y_1, y_2}. Let the channel have transition probability matrix
$$P(Y \mid X) = \begin{bmatrix} 0.8 & 0.15 & 0.05 \\ 0.05 & 0.15 & 0.8 \end{bmatrix}$$
Entropy:
$$H(Y) = -\sum_{k=0}^{K-1} p(y_k) \log_2 p(y_k)$$
Joint entropy:
$$H(X, Y) = -\sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 p(x_j, y_k)$$
Chain rule:
$$H(X, Y) = H(Y \mid X) + H(X) = H(X \mid Y) + H(Y)$$
Conditional entropy:
$$H(Y \mid X) = -\sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_2 p(y_k \mid x_j)$$
Equivalently, averaging the per-input entropies:
$$H(Y \mid X) = \sum_{x} p(x)\, H(Y \mid X = x), \qquad H(Y \mid X = x) = -\sum_{y} p(y \mid X = x) \log_2 p(y \mid X = x)$$
Mutual information:
$$I(X; Y) = H(Y) - H(Y \mid X)$$
(Worked example result: $I(X; Y) = 3/8$ bits.)
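A short sketch (my own illustration) applying these entropy formulas to Example 1 above; the helper treats $0 \log 0$ as 0 by convention. For that channel the mutual information comes out to roughly 0.58 bits.

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a probability vector; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

P_X = np.array([0.5, 0.5])
P_Y_given_X = np.array([[0.8, 0.15, 0.05],
                        [0.05, 0.15, 0.8]])

P_Y = P_X @ P_Y_given_X                      # marginal of the output
H_Y = entropy(P_Y)                           # H(Y)
# H(Y|X) = sum_x p(x) H(Y | X = x): average of the row entropies.
H_Y_given_X = sum(px * entropy(row) for px, row in zip(P_X, P_Y_given_X))
I_XY = H_Y - H_Y_given_X                     # mutual information
print(H_Y, H_Y_given_X, I_XY)                # ≈ 1.460, 0.884, 0.576 bits
```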
Channel capacity
The channel capacity is the maximum average amount of information that can be sent per channel use.
Recall: mutual information is a measure of the information passed through the channel on average.
The information transferred via the noisy channel (in the absence of the observer) is
$$I(X; Y) = H(Y) - H(Y \mid X)$$
Why is the channel capacity not the same as the mutual information?
Channel capacity
The mutual information is a function of the probability distribution of X: by changing P(X) we get different values of I(X; Y).
For a fixed transition probability matrix, a change in P(X) also results in a different output symbol distribution P(Y).
The maximum mutual information achievable for a given transition probability matrix is the channel capacity.
$$C_s = \max_{P(x)} I(X; Y)$$
Equivalently, $I(X; Y) = H(X) - H(X \mid Y)$.
Notes: the capacity is found by maximizing over the input probabilities $P(x)$, because the transition probabilities of the channel are fixed.
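Since the transition matrix is fixed and only $P(x)$ varies, the capacity of a two-input channel reduces to a one-dimensional maximization. A minimal brute-force sketch (my own illustration, not from the notes), checked against the BSC closed form derived below:

```python
import numpy as np

def entropy(p):
    """Entropy in bits; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def mutual_information(P_X, P_Y_given_X):
    """I(X;Y) = H(Y) - H(Y|X) for a channel given as a row-stochastic matrix."""
    P_Y = P_X @ P_Y_given_X
    H_Y_given_X = sum(px * entropy(row) for px, row in zip(P_X, P_Y_given_X))
    return entropy(P_Y) - H_Y_given_X

def capacity_binary_input(P_Y_given_X, n=10001):
    """Brute-force C_s = max over P(x) of I(X;Y) for a two-input channel."""
    return max(mutual_information(np.array([p0, 1 - p0]), P_Y_given_X)
               for p0 in np.linspace(0.0, 1.0, n))

# Check against the BSC closed form: for crossover 0.1, 1 - H2(0.1) ≈ 0.531.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(capacity_binary_input(bsc))  # ≈ 0.531 bits
```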
Example: the binary symmetric channel (BSC). X = {x_0, x_1}, Y = {y_0, y_1}; each symbol is received correctly with probability 1 − p and flipped with probability p:
$$P(Y \mid X) = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}$$
With
$$H(X) = -\sum_{j=0}^{1} p(x_j) \log_2 p(x_j), \qquad H(Y \mid X) = -\sum_{j=0}^{1} \sum_{k=0}^{1} p(x_j, y_k) \log_2 p(y_k \mid x_j) = H_2(p),$$
the maximum of $I(X; Y) = H(Y) - H(Y \mid X)$ is attained for equiprobable inputs, which make $H(Y) = 1$. Hence
$$C_S = 1 - H_2(p)$$
where $H_2(\cdot)$ is the binary entropy function.
[Figure: channel capacity $C_S = 1 - H_2(p)$ versus bit error probability p: capacity is 1.0 at p = 0 and p = 1, and falls to 0 at p = 0.5.]
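The closed form is easy to evaluate directly; a small sketch (my own, not from the notes) tabulating a few points of the curve above:

```python
import math

def H2(p):
    """Binary entropy function in bits, with H2(0) = H2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# C_s = 1 - H2(p): 1 bit at p = 0 or p = 1 (deterministic, hence invertible),
# 0 bits at p = 0.5 (output independent of input).
for p in [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0]:
    print(f"p = {p:4.2f}   C_s = {1 - H2(p):.3f} bits")
```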
A binary erasure channel with erasure probability p is a channel with binary input,
ternary output, and probability of erasure p.
That is, let X be the transmitted random variable with alphabet {0, 1}.
Let Y be the received variable with alphabet {0, 1, e}, where e is the erasure symbol.
Then, the channel is characterized by the conditional probabilities:
Pr( Y = 0 | X = 0) = 1-p
Pr( Y = e | X = 0) = p
Pr( Y = 1 | X = 0) = 0
Pr( Y = 0 | X = 1) = 0
Pr( Y = e | X = 1) = p
Pr( Y = 1 | X = 1) = 1-p
With input alphabet X = {0, 1} and output alphabet Y = {0, e, 1}, the transition probability matrix is
$$P(Y \mid X) = \begin{bmatrix} 1-p & p & 0 \\ 0 & p & 1-p \end{bmatrix}$$
Let $P(x_1) = \alpha$ and $P(x_2) = 1 - \alpha$. Find:
(a) $P(Y) = P(Y \mid X) \cdot P(X)$, where $P(X) = [\alpha,\ 1 - \alpha]$
(b) $H(Y)$
(c) $H(Y \mid X)$
(d) $I(X; Y) = H(Y) - H(Y \mid X)$
Prove that $I(X; Y) = (1 - p)\, H(X)$.
The maximum is attained at $P(x_1) = P(x_2) = \tfrac{1}{2}$; hence $C_S = 1 - p$.
[Figure: erasure-channel capacity $C_S = 1 - p$ versus erasure probability p, decreasing linearly from 1 at p = 0 to 0 at p = 1.]
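Both the exercise identity $I(X; Y) = (1 - p) H(X)$ and the capacity $C_S = 1 - p$ can be checked numerically; a sketch under an arbitrary choice $p = 0.3$ (my own illustration):

```python
import numpy as np

def entropy(p):
    """Entropy in bits; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def mutual_information(P_X, P_Y_given_X):
    P_Y = P_X @ P_Y_given_X
    return entropy(P_Y) - sum(px * entropy(row)
                              for px, row in zip(P_X, P_Y_given_X))

p = 0.3                                 # erasure probability (arbitrary)
bec = np.array([[1 - p, p, 0.0],        # rows: inputs 0, 1
                [0.0,   p, 1 - p]])     # columns: outputs 0, e, 1

for alpha in [0.2, 0.5, 0.8]:
    P_X = np.array([alpha, 1 - alpha])
    lhs = mutual_information(P_X, bec)  # I(X;Y) computed directly
    rhs = (1 - p) * entropy(P_X)        # the identity I(X;Y) = (1-p) H(X)
    print(f"alpha = {alpha}: {lhs:.4f} = {rhs:.4f}")
# The maximum over alpha is at alpha = 0.5, giving C_s = 1 - p = 0.7 bits.
```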
Example: a binary asymmetric channel (often called a Z-channel). The input X takes values 0 (light on) and 1 (light off), with $P(X = 0) = P_0$. Input 0 is always received correctly, while input 1 is received as 0 with probability p and as 1 with probability 1 − p. Then
$$H(Y \mid X) = (1 - P_0)\, H_2(p)$$
For capacity, maximize $I(X; Y) = H(Y) - H(Y \mid X)$ over $P_0$.
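Unlike the BSC and BEC there is no symmetric shortcut here, so the maximization over $P_0$ is typically done numerically. A minimal grid-search sketch (my own illustration; $p = 0.1$ is an arbitrary choice):

```python
import numpy as np

def entropy(p):
    """Entropy in bits; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def I_z(P0, p):
    """I(X;Y) for the Z-channel: H(Y) - (1 - P0) H2(p)."""
    # Rows: X = 0 (light on) passes cleanly; X = 1 (light off)
    # is received as 0 with probability p.
    P_Y_given_X = np.array([[1.0, 0.0],
                            [p, 1 - p]])
    P_Y = np.array([P0, 1 - P0]) @ P_Y_given_X
    return entropy(P_Y) - (1 - P0) * entropy(np.array([p, 1 - p]))

p = 0.1                                  # arbitrary error probability
grid = np.linspace(0.0, 1.0, 100001)
vals = np.array([I_z(P0, p) for P0 in grid])
k = int(np.argmax(vals))
print(f"optimal P0 ≈ {grid[k]:.3f}, capacity ≈ {vals[k]:.4f} bits")
```

Note that, because of the asymmetry, the optimal $P_0$ is in general not 1/2.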