Information Theory and Coding - Chapter 3
Text Book: B.P. Lathi, “Modern Digital and Analog Communication Systems”, 3rd edition, Oxford University Press, Inc., 1998
Reference: A. Papoulis, “Probability, Random Variables, and Stochastic Processes”, McGraw-Hill, 2005
Noiseless Coding Theorem
Again, if the code efficiency of the Huffman code is less than 100%, the solution, according to the source-coding theorem, is to encode source extensions.
The Huffman code by definition has the highest code efficiency (the shortest possible average code length), i.e.
H(S) ≤ L_H ≤ L_SF < H(S) + 1
Therefore, for the n-th extension S^n:
H(S^n) ≤ L_Hn ≤ L_SFn < H(S^n) + 1
Dividing by n and using H(S^n) = n H(S):
H(S) ≤ L_Hn/n ≤ L_SFn/n < H(S) + 1/n
In the limit, as n approaches infinity, the lower and upper bounds in the above equation converge to H(S), i.e. unity efficiency:
lim (n→∞) L_SFn/n = H(S)
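To make the convergence concrete, here is a minimal Python sketch (an assumed example, not from the text) for a binary source with probabilities {0.9, 0.1}: it computes the Huffman average code length of the n-th extension and shows the efficiency climbing toward 100% as n grows. It relies on the fact that the Huffman average code length equals the sum of the combined probabilities formed at the merge steps of the Huffman tree.

    import heapq
    from itertools import product
    from math import log2, prod

    def huffman_avg_length(probs):
        # The sum of the combined probabilities created at each Huffman
        # merge equals the average code-word length of the resulting code.
        heap = list(probs)
        heapq.heapify(heap)
        total = 0.0
        while len(heap) > 1:
            a = heapq.heappop(heap)
            b = heapq.heappop(heap)
            total += a + b
            heapq.heappush(heap, a + b)
        return total

    p = [0.9, 0.1]                        # assumed binary source
    H = -sum(q * log2(q) for q in p)      # H(S) in bit/symbol
    for n in (1, 2, 3, 4):
        ext = [prod(c) for c in product(p, repeat=n)]   # probabilities of S^n
        L_n = huffman_avg_length(ext) / n               # binit per source symbol
        print(f"n={n}: L_Hn/n = {L_n:.4f}, efficiency = {H / L_n:.2%}")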
It is noteworthy that the Huffman encoding process (i.e., the Huffman tree) is not unique. In particular, when the probability of a combined symbol (obtained by adding the last two probabilities pertinent to a particular step) is found to equal another probability in the list, we may proceed by placing the probability of the new symbol as high as possible, or, alternatively, as low as possible. (It is presumed that whichever way the placement is made, high or low, it is consistently adhered to throughout the encoding process.) Noticeable differences then arise, in that the code words in the resulting source codes can have different lengths. Nevertheless, the average code-word length remains the same; i.e., there is, in this particular case, more than one Huffman code, as the sketch below illustrates.
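As an illustration, consider the classic five-symbol source with probabilities {0.4, 0.2, 0.2, 0.1, 0.1} (an assumed example, not taken from the text). Placing the combined symbol as high as possible yields code-word lengths (2, 2, 2, 3, 3); placing it as low as possible yields (1, 2, 3, 4, 4). The sketch below verifies that both length sets satisfy the Kraft inequality and give the same average length.

    from math import log2

    p = [0.4, 0.2, 0.2, 0.1, 0.1]   # assumed example probabilities
    high = [2, 2, 2, 3, 3]          # combined symbol placed as high as possible
    low = [1, 2, 3, 4, 4]           # combined symbol placed as low as possible

    for name, lengths in (("high", high), ("low", low)):
        kraft = sum(2.0 ** -l for l in lengths)           # <= 1 for any prefix code
        L = sum(pi * li for pi, li in zip(p, lengths))    # average code-word length
        print(f"{name}: Kraft sum = {kraft}, L = {L:.2f} binit/symbol")

    H = -sum(pi * log2(pi) for pi in p)
    print(f"H(S) = {H:.4f} bit/symbol")   # both codes give efficiency H/2.2, about 96.5%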
Problem 1
Redundancy: γ = 1 - η = 1%
Problem 2
Symbol   SF code   l(s_i) binit   p(s_i)   log2(1/p(s_i)) bit
a        1100      4              1/16     4
b        10        2              7/16     1.2
c        0110      4              1/16     4
d        00        2              7/16     1.2
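From this table the average code length, source entropy, and code efficiency follow directly; the short sketch below reproduces the arithmetic (assuming only the probabilities and lengths shown above).

    from math import log2

    p = [1/16, 7/16, 1/16, 7/16]   # p(a), p(b), p(c), p(d) from the table
    l = [4, 2, 4, 2]               # SF code-word lengths from the table

    L = sum(pi * li for pi, li in zip(p, l))   # average code length
    H = sum(pi * log2(1 / pi) for pi in p)     # source entropy
    print(f"L = {L} binit/symbol")             # 2.25
    print(f"H(S) = {H:.4f} bit/symbol")        # 1.5436
    print(f"efficiency = {H / L:.2%}")         # 68.60%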
Symbol (2nd extension)   SF code
aa                       1100
bb                       0
ab                       101
ba                       111
The average code length of a memory source is
L = sum of [conditional average code length of each state (row) * state probability]
Therefore, the code efficiency of the memory source is
η = H(S)/L
Note that the memory source has a lower entropy than that of its adjoint source (the source obtained if, for simplicity, it were treated as a zero-memory source); therefore, a lower average code length (lower binit rate) can be achieved by coding the states [a different code for each state], as the sketch below illustrates.
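A minimal sketch of state coding, using a hypothetical 3-state Markov source (all numbers assumed, not from the text): one Huffman code per state (row), with the state probabilities obtained by power iteration.

    from math import log2

    # Hypothetical transition matrix: T[i][j] = P(next symbol j | current state i)
    T = [[0.7, 0.2, 0.1],
         [0.2, 0.6, 0.2],
         [0.1, 0.3, 0.6]]
    # Huffman code-word lengths per state: the most probable symbol of each
    # row gets the 1-binit word, the other two get 2-binit words
    lengths = [[1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]]

    P = [1/3, 1/3, 1/3]
    for _ in range(200):   # power iteration converges to the stationary distribution
        P = [sum(P[i] * T[i][j] for i in range(3)) for j in range(3)]

    H = sum(Pi * sum(-t * log2(t) for t in row) for Pi, row in zip(P, T))
    L = sum(Pi * sum(t * li for t, li in zip(row, ls))
            for Pi, row, ls in zip(P, T, lengths))
    print(f"H(S) = {H:.4f} bit/SS, L = {L:.4f} binit/SS, efficiency = {H / L:.2%}")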
Problem 1
Consider a 3-symbol, 1st order Markov (memory) source S with the following state
transition matrix:
T =
[ 0.5    0.25   0.25 ]
[ 0.25   0.5    0.25 ]
[ 0.25   0.25   0.5  ]
where rows and columns correspond to states 1, 2, 3 and entry (i, j) is P(next state j | current state i).
Problem 1 - Solution
2) Stationary state probabilities:
P1 = 0.5 P1 + 0.25 P2 + 0.25 P3 ……………(1)
P2 = 0.25 P1 + 0.5 P2 + 0.25 P3 ……………(2)
P3 = 0.25 P1 + 0.25 P2 + 0.5 P3 ……………(3)
P1 + P2 + P3 = 1 ……………(4)
So;
(1)-(2): P1 = P2
(2)-(3): P2 = P3
Substituting into (4): P1 = P2 = P3 = 1/3
3) Entropy of the source:
H(S) = P1 H1 + P2 H2 + P3 H3 = (1/3)(1.5) + (1/3)(1.5) + (1/3)(1.5) = 1.5 bit/SS
where H_i = H(0.5, 0.25, 0.25) = 1.5 bit is the entropy of state i.
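A quick numeric check of this solution (a sketch using the transition matrix above):

    from math import log2

    T = [[0.5, 0.25, 0.25],
         [0.25, 0.5, 0.25],
         [0.25, 0.25, 0.5]]
    P = [1/3, 1/3, 1/3]   # stationary distribution from the hand solution

    # Stationarity: P_j = sum_i P_i * T[i][j] for every state j
    assert all(abs(sum(P[i] * T[i][j] for i in range(3)) - P[j]) < 1e-12
               for j in range(3))
    H = sum(Pi * sum(-t * log2(t) for t in row) for Pi, row in zip(P, T))
    print(f"H(S) = {H} bit/SS")   # 1.5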
Problem 2
Consider a 2-symbol, 1st order Markov (memory) source with the following state
diagram:
[State diagram: two states, 1 and 2; each state returns to itself with probability 2/3 and crosses to the other state with probability 1/3.]
Problem 2 - Solution
2) Stationary state probabilities: by symmetry of the diagram, P1 = P2 = 1/2.
3) Entropy of the source:
H(S) = P1 H1 + P2 H2 = (1/2) H(2/3, 1/3) + (1/2) H(2/3, 1/3) = 0.918 bit/SS
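The per-state entropy can be checked directly (a one-line sketch):

    from math import log2
    H = (2/3) * log2(3/2) + (1/3) * log2(3)   # H(2/3, 1/3), identical for both states
    print(f"H(S) = {H:.3f} bit/SS")           # 0.918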
Problem 3
Consider a 4-symbol, 1st order Markov (memory) source with the following state diagram:
[State diagram: states a, b, c, d with transition probabilities 0.7, 0.5, 0.4, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1 (assignment to edges not recoverable).]
Problem 3 - Solution
2) Stationary state probabilities:
P(a) = 8/15
P(b) = 2/9
P(c) = 11/90
P(d) = 11/90
3) Entropy of the source:
(c) Calculate the binit rate if a vector quantizer with a codebook of size (512 x 64) (512 codevectors, each of 64 pixels) is used, where the picture is divided into blocks of (8 x 8) pixels each.
The binit rate = [total # of blocks/sec] * log2(512) binit/block
= [((784 x 440)/64) x 30] x 9 = 1455300 = 1.4553 x 10^6 binit/sec
(d) Compare and comment.
The binit rate has been reduced in (b) by [(82.7904 - 51.744)/82.7904] * 100% = 37.5%
The binit rate has been reduced in (c) by [(82.7904 - 1.4553)/82.7904] * 100% = 98.24%
(all rates in units of 10^6 binit/sec)
As the codevector (block) size increases, the binit rate decreases, but at the cost of image quality; the codebook size (# of codevectors) must therefore be increased in order to improve the image quality. A larger codebook in turn requires more efficient design and search algorithms.
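The figures above can be reproduced with a few lines (a sketch assuming the 784 x 440 picture, 30 frames/sec, and the 8 binit/pixel scalar rate implied by the 82.7904 x 10^6 value):

    from math import log2

    width, height, fps = 784, 440, 30       # picture size and frame rate
    block_pixels = 8 * 8                    # (8 x 8) blocks
    codebook_size = 512                     # codevectors in the codebook

    blocks_per_sec = (width * height // block_pixels) * fps
    vq_rate = blocks_per_sec * log2(codebook_size)   # part (c): binit/sec with VQ
    scalar_rate = width * height * fps * 8           # part (a): assumed 8 binit/pixel

    print(f"VQ binit rate = {vq_rate:.0f} binit/sec")                        # 1455300
    print(f"reduction vs (a) = {(scalar_rate - vq_rate) / scalar_rate:.2%}") # 98.24%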