Channel Capacity and Models


A communication channel, or channel, refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel. A channel is used to convey an information signal, for example a digital bit stream, from one or several senders (or transmitters) to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.

Some channel models

[Figure: a channel model mapping input X to output Y through the transition probabilities P(y|x)]
Statistically a communication channel is usually modelled as a triple
consisting of an input alphabet, an output alphabet, and for each pair
(x, y) of input and output elements a transition probability p(y|x).
The transition probability is the probability that the symbol y is
received given that x was transmitted over the channel.

Discrete Memoryless Channel


[Figure: DMC with J inputs x_0, x_1, ..., x_{J-1} and K outputs y_0, y_1, ..., y_{K-1}, connected by the transition probabilities p(y_k | x_j)]

Definition of DMC
A channel with input X and output Y, where Y is a noisy version of X.
Discrete: both alphabets X and Y have finite sizes.
Memoryless: the current output symbol depends only on the current input symbol, not on any previous ones.

Discrete Memoryless Channel (cont)

Channel Matrix (Transition Probability Matrix)

$$P(Y \mid X) = \begin{bmatrix} p(y_0 \mid x_0) & p(y_1 \mid x_0) & \cdots & p(y_{K-1} \mid x_0) \\ \vdots & \vdots & & \vdots \\ p(y_0 \mid x_{J-1}) & p(y_1 \mid x_{J-1}) & \cdots & p(y_{K-1} \mid x_{J-1}) \end{bmatrix}$$

The size is $J$ by $K$. Each row sums to one:

$$\sum_{k=0}^{K-1} p(y_k \mid x_j) = 1 \quad \text{for all } j$$

Discrete Memoryless Channel (cont)


Given the a priori probabilities $p(x_j)$ and the channel matrix $P$, we can find the probabilities of the various output symbols $p(y_k)$, and the joint probability distribution of X and Y.

The joint probability distribution of X and Y:

$$p(x_j, y_k) = p(X = x_j, Y = y_k) = p(Y = y_k \mid X = x_j)\, p(X = x_j) = p(y_k \mid x_j)\, p(x_j)$$

The marginal probability distribution of the output Y:

$$p(y_k) = p(Y = y_k) = \sum_{j=0}^{J-1} p(Y = y_k \mid X = x_j)\, p(X = x_j) = \sum_{j=0}^{J-1} p(y_k \mid x_j)\, p(x_j), \quad k = 0, 1, \ldots, K-1$$

Channel input row matrix: $P(X) = [P(x_0), P(x_1), \ldots, P(x_{J-1})]$

Channel output row matrix: $P(Y) = [P(y_0), P(y_1), \ldots, P(y_{K-1})]$

Channel output prob. matrix: $P(Y) = [P(X)][P(Y \mid X)]$

Joint prob. matrix: $P(X, Y) = [P(X)]_d\, [P(Y \mid X)]$, where $[P(X)]_d$ is the diagonal matrix

$$[P(X)]_d = \begin{bmatrix} P(x_0) & & 0 \\ & \ddots & \\ 0 & & P(x_{J-1}) \end{bmatrix}$$

$P(x_j, y_k)$ is the joint probability of transmitting $x_j$ and receiving $y_k$.

Example 1
Let X = {0, 1} with equally probable symbols, and let Y be the three-element set Y = {y0, y1, y2}. Let the channel have transition probability matrix

$$P(Y \mid X) = \begin{bmatrix} 0.8 & 0.15 & 0.05 \\ 0.05 & 0.15 & 0.8 \end{bmatrix}$$

Find P(Y) and P(X, Y).
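A minimal NumPy sketch of this computation, using the row-vector and diagonal-matrix forms from the previous slide (the variable names are my own, not from the slides):

```python
import numpy as np

# Transition probability matrix P(Y|X): rows = inputs x0, x1;
# columns = outputs y0, y1, y2.
P_Y_given_X = np.array([[0.80, 0.15, 0.05],
                        [0.05, 0.15, 0.80]])

# A priori input probabilities (equally probable symbols).
P_X = np.array([0.5, 0.5])

# Output distribution: P(Y) = [P(X)][P(Y|X)], a row vector times the matrix.
P_Y = P_X @ P_Y_given_X

# Joint probability matrix: P(X,Y) = [P(X)]_d [P(Y|X)].
P_XY = np.diag(P_X) @ P_Y_given_X

print(P_Y)   # [0.425 0.15  0.425]
print(P_XY)  # [[0.4   0.075 0.025]
             #  [0.025 0.075 0.4  ]]
```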

Entropy equations for DMC


For a channel input of $n$ symbols:

$$H(X) = -\sum_{j=0}^{n-1} p(x_j) \log_2 p(x_j)$$

For a channel output of $m$ symbols:

$$H(Y) = -\sum_{k=0}^{m-1} p(y_k) \log_2 p(y_k)$$

Joint entropy:

$$H(X, Y) = -\sum_{j=0}^{n-1} \sum_{k=0}^{m-1} p(x_j, y_k) \log_2 p(x_j, y_k)$$

$$H(X, Y) = H(Y \mid X) + H(X) = H(X \mid Y) + H(Y)$$

Conditional entropy:

$$H(Y \mid X) = -\sum_{j=0}^{n-1} \sum_{k=0}^{m-1} p(x_j, y_k) \log_2 p(y_k \mid x_j)$$
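These definitions translate directly into code. A minimal Python/NumPy sketch (the helper names are my own):

```python
import numpy as np

def entropy(p):
    """H = -sum p log2 p in bits; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(P_XY, P_Y_given_X):
    """H(Y|X) = -sum_{j,k} p(x_j, y_k) log2 p(y_k | x_j)."""
    P_XY = np.asarray(P_XY, dtype=float)
    mask = P_XY > 0
    return -np.sum(P_XY[mask] * np.log2(np.asarray(P_Y_given_X)[mask]))
```

The joint entropy $H(X, Y)$ is then simply `entropy(P_XY)`, since the sum runs over all $(j, k)$ pairs.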

H(X | Y): the conditional entropy (equivocation). It measures the amount of uncertainty remaining about the channel input after the channel output has been observed.

Mutual information: the uncertainty about the input that is resolved by observing the output, i.e. a measure of the information passed through the channel on average.

$$I(X; Y) = H(X) - H(X \mid Y) = \sum_{j=0}^{n-1} \sum_{k=0}^{m-1} p(x_j, y_k) \log_2 \frac{p(y_k \mid x_j)}{p(y_k)}$$
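The double sum can be evaluated with the same machinery; a sketch using NumPy broadcasting (again, names are my own):

```python
import numpy as np

def mutual_information(P_X, P_Y_given_X):
    """I(X;Y) = sum_{j,k} p(x_j,y_k) log2[ p(y_k|x_j) / p(y_k) ], in bits."""
    P_X = np.asarray(P_X, dtype=float)
    P_Y = P_X @ P_Y_given_X              # marginal output distribution
    P_XY = np.diag(P_X) @ P_Y_given_X    # joint distribution
    mask = P_XY > 0                      # skip zero-probability terms
    return np.sum(P_XY[mask] * np.log2((P_Y_given_X / P_Y)[mask]))
```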

Calculate the entropy of Y for the system in Example 1, and compare it with the entropy of the source X.

Calculate the mutual information for the system of Example 1.

Note that the marginal probability $P(x) = \sum_y P(x, y)$, and

$$H(Y \mid X) = \sum_x p(x)\, H(Y \mid X = x)$$

$$H(Y \mid X = x) = -\sum_y p(y \mid X = x) \log_2 p(y \mid X = x)$$

Answer: $I(X; Y) \approx 0.576$ bits.
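A quick numerical check of these answers, reusing the `entropy` and `mutual_information` sketches above:

```python
P_X = np.array([0.5, 0.5])
P_Y_given_X = np.array([[0.80, 0.15, 0.05],
                        [0.05, 0.15, 0.80]])

print(entropy(P_X))                          # H(X) = 1.0 bit
print(entropy(P_X @ P_Y_given_X))            # H(Y) ≈ 1.460 bits
print(mutual_information(P_X, P_Y_given_X))  # I(X;Y) ≈ 0.576 bits
```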

Channel capacity
The channel capacity is the maximum average amount of information that can be sent per channel use.
Recall: mutual information is a measure of the information passed through the channel on average.

The information transferred via the noisy channel (in the absence of the observer):

$$I(X; Y) = H(X) - H(X \mid Y)$$

information transfer = information in a noiseless system (the source entropy) minus the information loss due to noise (the equivocation).

Equivalently, $I(X; Y) = H(Y) - H(Y \mid X)$.

Why is the channel capacity not the same as the mutual information?

Channel capacity
The mutual information is a function of the probability distribution of X: by changing P(X) we get different values of I(X;Y). For a fixed transition probability matrix, a change in P(X) also results in a different output symbol distribution P(Y). The maximum mutual information achievable for a given transition probability matrix is the channel capacity:

$$C_s = \max_{P(X)} I(X; Y)$$

with $C_s$ having units of bits per symbol.

Channel capacity in bits/symbol:

$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) \quad \text{(Shannon 1948)}$$

$$\text{channel capacity} = \max_{P(x)} I(X; Y)$$

Note: the capacity is obtained by maximizing over the input probabilities, because the transition probabilities are fixed by the channel.
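The maximization over P(x) can also be carried out numerically. Below is a minimal sketch of the standard Blahut-Arimoto iteration for an arbitrary DMC matrix; the algorithm is not part of these slides, and the function and variable names are my own:

```python
import numpy as np

def blahut_arimoto(P_Y_given_X, iters=500):
    """Approximate C = max_{P(X)} I(X;Y) for a DMC with J x K channel
    matrix P_Y_given_X. Returns (capacity in bits, optimizing P(X))."""
    W = np.asarray(P_Y_given_X, dtype=float)
    J, K = W.shape
    p = np.full(J, 1.0 / J)                  # start from a uniform input
    logW = np.where(W > 0, np.log(W), 0.0)   # safe elementwise log (nats)
    for _ in range(iters):
        q = p @ W                            # current output distribution
        # D[j] = KL divergence of row j from q, in nats
        D = np.sum(np.where(W > 0, W * (logW - np.log(q)), 0.0), axis=1)
        p *= np.exp(D)                       # multiplicative update
        p /= p.sum()
    q = p @ W
    D = np.sum(np.where(W > 0, W * (logW - np.log(q)), 0.0), axis=1)
    return np.dot(p, D) / np.log(2), p       # convert nats -> bits

# Sanity check on a BSC with crossover 0.11 (capacity ≈ 1 - H2(0.11) ≈ 0.5):
C, p_opt = blahut_arimoto(np.array([[0.89, 0.11],
                                    [0.11, 0.89]]))
print(round(C, 3), p_opt)                    # ≈ 0.5 at P(X) = [0.5, 0.5]
```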

Example: binary symmetric channel (BSC)

The channel has two input symbols x0 and x1 and produces two discrete output symbols y0 and y1. The input probabilities are P(x0) = α and P(x1) = 1 - α; each transmitted symbol is received correctly with probability 1 - p and flipped with crossover probability p.

[Figure: BSC. x0 → y0 and x1 → y1 each with probability 1 - p; x0 → y1 and x1 → y0 each with probability p]

X = {x0, x1}, Y = {y0, y1}

$$P(Y \mid X) = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}$$

Binary symmetric channel (BSC) (cont)

$$H(X) = -\sum_{j=0}^{1} p(x_j) \log_2 p(x_j)$$

$$H(Y \mid X) = -\sum_{j=0}^{1} \sum_{k=0}^{1} p(x_j, y_k) \log_2 p(y_k \mid x_j)$$

Prove that $H(Y \mid X) = -p \log_2 p - (1 - p) \log_2 (1 - p)$.

Channel Capacity of BSC

Mutual information:

$$I(X; Y) = H(Y) - H(Y \mid X) = H(Y) + p \log_2 p + (1 - p) \log_2 (1 - p)$$

Channel capacity:

$$C_S = \max\{I(X; Y)\} = \max\{H(Y)\} + p \log_2 p + (1 - p) \log_2 (1 - p)$$

$\max H(Y) = 1$, reached when $P(y_1) = P(y_2) = \tfrac{1}{2}$ (equiprobable inputs), so

$$C_S = 1 + p \log_2 p + (1 - p) \log_2 (1 - p) = 1 - [-p \log_2 p - (1 - p) \log_2 (1 - p)]$$

Hence $C_S = 1 - H_2(p)$.
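A small sketch of this closed-form result (function names are my own):

```python
import numpy as np

def H2(q):
    """Binary entropy function in bits."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def bsc_capacity(p):
    """C_S = 1 - H2(p), in bits per channel use."""
    return 1.0 - H2(p)

print(bsc_capacity(0.0))             # 1.0: noiseless channel
print(bsc_capacity(0.5))             # 0.0: output independent of input
print(round(bsc_capacity(0.11), 3))  # ≈ 0.5
```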

Channel Capacity of BSC

[Figure: channel capacity C_S = 1 - H_2(p) versus bit error probability p; the capacity is 1.0 at p = 0, drops to 0 at p = 0.5, and returns to 1.0 at p = 1]

Binary Erasure Channel

A binary erasure channel (or BEC) is a common communications channel model used in coding theory and information theory. In this model, a transmitter sends a bit (a zero or a one), and the receiver either receives the bit or receives a message that the bit was not received ("erased"). This channel is used frequently in information theory because it is one of the simplest channels to analyze. The BEC was introduced by Peter Elias of MIT in 1954 as a toy example.

[Figure: the channel model for the binary erasure channel, showing a mapping from channel input X to channel output Y, with known erasure symbol '?'. The probability of erasure is p_e]

A binary erasure channel with erasure probability p is a channel with binary input, ternary output, and probability of erasure p. That is, let X be the transmitted random variable with alphabet {0, 1}, and let Y be the received variable with alphabet {0, 1, e}, where e is the erasure symbol. Then the channel is characterized by the conditional probabilities:

Pr(Y = 0 | X = 0) = 1 - p
Pr(Y = e | X = 0) = p
Pr(Y = 1 | X = 0) = 0
Pr(Y = 0 | X = 1) = 0
Pr(Y = e | X = 1) = p
Pr(Y = 1 | X = 1) = 1 - p

With input probabilities P(x1) = α and P(x2) = 1 - α, X = {0, 1} and Y = {0, e, 1}, the channel matrix is

$$P(Y \mid X) = \begin{bmatrix} 1-p & p & 0 \\ 0 & p & 1-p \end{bmatrix}$$

Find:
(a) $P(Y) = P(X) \cdot P(Y \mid X)$, where $P(X) = [\alpha, 1-\alpha]$
(b) $H(Y)$
(c) $H(Y \mid X)$
(d) $I(X; Y) = H(Y) - H(Y \mid X)$
Prove that $I(X; Y) = (1 - p) H(X)$.
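A numerical spot-check of (a)-(d) and of the identity $I(X;Y) = (1-p)H(X)$, reusing the `entropy` and `conditional_entropy` helpers sketched earlier (the values chosen for p and α are arbitrary):

```python
import numpy as np

p, alpha = 0.2, 0.3                        # erasure prob. and P(X = 0)
P_X = np.array([alpha, 1 - alpha])
P_Y_given_X = np.array([[1 - p, p, 0.0],   # rows: x = 0, x = 1
                        [0.0, p, 1 - p]])  # cols: y = 0, y = e, y = 1

P_Y = P_X @ P_Y_given_X                    # (a) output distribution
H_Y = entropy(P_Y)                         # (b)
P_XY = np.diag(P_X) @ P_Y_given_X
H_Y_given_X = conditional_entropy(P_XY, P_Y_given_X)  # (c)
I = H_Y - H_Y_given_X                      # (d)

print(I, (1 - p) * entropy(P_X))           # both ≈ 0.705 bits
```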

Channel Capacity of BEC

$$C_S = \max\{I(X; Y)\} = \max\{(1 - p) H(X)\}$$

with $\max H(X) = 1$, reached when $P(x_1) = P(x_2) = \tfrac{1}{2}$.

Hence $C_S = 1 - p$.

[Figure: graphically, C_S = 1 - p falls linearly from 1 at p = 0 to 0 at p = 1 (e.g. C_S = 0.5 at p = 0.5)]
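As a cross-check, the Blahut-Arimoto sketch from the channel-capacity slide reproduces this value:

```python
import numpy as np

# BEC with p = 0.2: capacity should come out near 1 - p = 0.8.
C, p_opt = blahut_arimoto(np.array([[0.8, 0.2, 0.0],
                                    [0.0, 0.2, 0.8]]))
print(round(C, 3), p_opt)   # ≈ 0.8 at P(X) = [0.5, 0.5]
```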

Channel capacity: the Z-channel

Application: optical communications.

[Figure: Z-channel. Input 0 ('light on') is always received as y = 0; input 1 ('light off') is received as y = 1 with probability 1 - p and as y = 0 with probability p]

With $P(X = 0) = P_0$ and $H_2(\cdot)$ the binary entropy function:

$$H(Y \mid X) = (1 - P_0)\, H_2(p)$$

$$H(Y) = H_2\big(P_0 + p(1 - P_0)\big)$$

For capacity, maximize $I(X; Y) = H(Y) - H(Y \mid X)$ over $P_0$.