
SHANTILAL SHAH GOVERNMENT ENGINEERING COLLEGE, BHAVNAGAR

NAME: VIVEK MISHRA
ENROLL NO.: 150430111123
BRANCH: E.C.
ROLL NO.: 4068
TOPIC: INFORMATION THEORY


Information Theory
Father of Digital Communication

The roots of modern digital communication stem from the ground-breaking paper “A Mathematical Theory of Communication” by Claude Elwood Shannon in 1948.
Model of a Digital Communication System

[Block diagram] Information Source → Encoder (Coding) → Communication Channel → Decoder (Decoding) → Destination
◦ Message: e.g. English symbols
◦ Encoder: e.g. English to 0,1 sequence
◦ Communication Channel: can have noise or distortion
◦ Decoder: e.g. 0,1 sequence to English
Shannon’s Definition of Communication

“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”

“Frequently the messages have meaning”

“... [which is] irrelevant to the engineering problem.”


Shannon Wants to…

Shannon wants to find a way to “reliably” transmit data through the channel at the “maximal” possible rate.

[Block diagram] Information Source → Coding → Communication Channel → Decoding → Destination

And he thought about this problem for a while…

He later found a solution and published it in his 1948 paper.

In that 1948 paper he built a rich theory for the problem of reliable communication, now called “Information Theory” or “The Shannon Theory” in his honor.
Shannon’s Information Theory

Claude Shannon: “A Mathematical Theory of Communication,” Bell System Technical Journal, 1948

 Shannon’s measure of information is the number of bits to represent the amount of uncertainty (randomness) in a data source, and is defined as entropy:

H = −Σ_{i=1}^{n} p_i log(p_i)
Shannon’s Vision

[Block diagram] Data → Source Encoding → Channel Encoding → Channel → Channel Decoding → Source Decoding → User
Shannon’s Entropy
Consider the following string consisting of symbols a and b:

abaabaababbbaabbabab… ….

◦ On average, there are equal numbers of a and b.
◦ The string can be considered the output of the source below, which emits symbol a or b with equal probability:

[Source diagram] source → a (probability 0.5), b (probability 0.5)

We want to characterize the average information generated by the source!
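As a side note (not on the original slide), the per-symbol entropy of such a string can be estimated empirically by counting symbol frequencies; a minimal Python sketch:

```python
from collections import Counter
import math

def empirical_entropy(s):
    """Estimate per-symbol entropy (in bits) from the symbol frequencies of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# With equal numbers of a's and b's the estimate is 1 bit per symbol.
print(empirical_entropy("abaabaababbbaabbabab"))   # 1.0
```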
Entropy
Example: Binary Memoryless Source

BMS → 01101000…

Let one symbol occur with probability p and the other with probability 1 − p. Then the entropy, often denoted H_b(p), is

H(p) = −p log(p) − (1 − p) log(1 − p)

[Plot of H(p) versus p over 0 ≤ p ≤ 1] The uncertainty (information) is greatest when p = 0.5.
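A short Python sketch of the binary entropy function (the function name and the sample values of p are my own), confirming that the uncertainty peaks at p = 0.5:

```python
import math

def binary_entropy(p):
    """H_b(p) = -p*log2(p) - (1-p)*log2(1-p) for a binary memoryless source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# H_b is symmetric about 0.5 and is largest there.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(binary_entropy(p), 3))   # 0.469, 0.881, 1.0, 0.881, 0.469
```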
Entropy: Three properties

1. It can be shown that 0 ≤ H ≤ log N.

2. Maximum entropy (H = log N) is reached when all symbols are equiprobable, i.e., p_i = 1/N.

3. The difference log N − H is called the redundancy of the source.
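These three properties can be checked numerically; a small sketch (the distributions below are arbitrary examples, not from the slides):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 4
uniform = [1 / N] * N
skewed = [0.7, 0.2, 0.05, 0.05]

# Property 2: the equiprobable source attains the maximum H = log N.
print(entropy(uniform), math.log2(N))        # 2.0  2.0
# Properties 1 and 3: any other source has H < log N; the gap is its redundancy.
H = entropy(skewed)
print(H, math.log2(N) - H)                   # ~1.26  ~0.74
```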
Lossless source coding
Shannon’s First Theorem - Lossless Source Coding

Let X denote a discrete memoryless source. There exists a lossless source code at rate R if

R ≥ H(X) bits per transmission
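A small numerical illustration of the bound (the source probabilities below are my own example, not from the slide): naive fixed-length coding of a biased binary source spends 1 bit per symbol, while the theorem says any rate down to H(X) is achievable losslessly.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.8, 0.2]   # hypothetical biased binary source
print("H(X) =", round(entropy(probs), 3), "bits per symbol")   # 0.722
```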
Huffman coding algorithm

Symbol   Probability   Codeword
x1       P(x1)         00
x2       P(x2)         01
x3       P(x3)         10
x4       P(x4)         110
x5       P(x5)         1110
x6       P(x6)         11110
x7       P(x7)         11111

H(X) = 2.11
R = 2.21 bits per symbol
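The slide does not show the symbol probabilities. A common textbook choice that reproduces H(X) = 2.11 and R = 2.21 is p = (0.35, 0.30, 0.20, 0.10, 0.04, 0.005, 0.005); assuming those values, a minimal Python sketch of the Huffman procedure (the exact bit patterns it produces may differ from the slide, but the codeword lengths and the average rate match):

```python
import heapq, math

# Assumed probabilities (not given on the slide) that yield H(X) = 2.11 and R = 2.21.
probs = {'x1': 0.35, 'x2': 0.30, 'x3': 0.20, 'x4': 0.10,
         'x5': 0.04, 'x6': 0.005, 'x7': 0.005}

def huffman(probs):
    """Binary Huffman code: repeatedly merge the two least probable subtrees."""
    heap = [(p, i, {sym: ''}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in left.items()}
        merged.update({s: '1' + c for s, c in right.items()})
        heapq.heappush(heap, (p0 + p1, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman(probs)
H = -sum(p * math.log2(p) for p in probs.values())
R = sum(probs[s] * len(codes[s]) for s in probs)
print(codes)
print(round(H, 2), round(R, 2))   # 2.11 2.21
```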
Binary symmetric channel (BSC) model

[Block diagram] Source data → Channel encoder → Binary modulator → Channel → Demodulator and detector → Channel decoder → Output data

The binary modulator, the channel, and the demodulator/detector together form a composite discrete-input, discrete-output channel.
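A minimal simulation sketch of a BSC (the crossover probability 0.1 is chosen arbitrarily, not taken from the slides): every transmitted bit is flipped independently with probability p.

```python
import random

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with crossover probability p."""
    return [b ^ (random.random() < p) for b in bits]

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = bsc(tx, p=0.1)
print(tx)
print(rx)   # on average, about 10% of the received bits differ from tx
```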
Discrete memoryless channel (DMC)

[Diagram] Inputs {X} = {x0, x1, …, x_{M−1}} are mapped to outputs {Y} = {y0, y1, …, y_{Q−1}}.

The transition probabilities P[y | x] can be arranged in a matrix.
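A small sketch of what such a transition matrix looks like in code (the matrix values are hypothetical examples, not from the slide): row x holds P[y | x], and each row sums to 1.

```python
import random

# Hypothetical DMC with M = 2 inputs and Q = 3 outputs; entry P[x][y] is P[y | x].
P = [
    [0.90, 0.08, 0.02],   # transition probabilities from x0
    [0.05, 0.15, 0.80],   # transition probabilities from x1
]

def dmc_output(x):
    """Draw one output symbol index according to the row of transition probabilities for x."""
    return random.choices(range(len(P[x])), weights=P[x])[0]

print([dmc_output(0) for _ in range(10)])   # mostly y0, occasionally y1 or y2
```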
AWGN waveform channel

[Block diagram] Source data → Channel encoder → Modulator → Physical channel → Demodulator and detector → Channel decoder → Output data

Assume the channel has bandwidth W, with frequency response C(f) = 1 on [−W, +W]. The input waveform x(t) and the output waveform y(t) are related by

y(t) = x(t) + n(t)
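A minimal NumPy sketch of the relation y(t) = x(t) + n(t) (the waveform and the noise level are arbitrary examples, not from the slides):

```python
import numpy as np

def awgn_channel(x, noise_std):
    """Additive white Gaussian noise channel: y = x + n, with n ~ N(0, noise_std^2)."""
    return x + np.random.normal(0.0, noise_std, size=x.shape)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)          # example transmitted waveform
y = awgn_channel(x, noise_std=0.1)     # received waveform
print(np.mean((y - x) ** 2))           # empirical noise power, roughly 0.01
```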
