Answer Key ESE - Apr - 2018


CHRIST (DEEMED TO BE UNIVERSITY), BENGALURU-560029

End Semester Examination, March 2018

Bachelor of Technology, VI Semester

Code: EC632 Max. Marks: 100

Subject: INFORMATION THEORY AND CODING Duration: 3 Hrs

1. a)
i. Identify the equations of source efficiency and redundancy
Source efficiency: η = H(S)/H(S)max = H(S)/log2 q, where q is the number of source symbols. 1M
Source redundancy: Rη = 1 − η. 1M

ii. Consider a source emitting one of three symbols A, B and C with respective
probabilities 0.7, 0.15, and 0.15. Find the self-information conveyed by each symbol
and compare. Also calculate its efficiency and redundancy.

Given:

P = (0.7, 0.15, 0.15), S = (A, B, C)

Self-information: I(si) = log2(1/pi) bits

I(A) = log2(1/0.7) = 0.5146 bits

I(B) = log2(1/0.15) = 2.7370 bits

I(C) = log2(1/0.15) = 2.7370 bits

Symbols B and C convey more information than symbol A, since the less probable a symbol is, the more self-information it carries.

Entropy H(S) = Σ pi log2(1/pi) = 0.7(0.5146) + 0.15(2.7370) + 0.15(2.7370) = 1.1813 bits/symbol
Efficiency η = H(S)/log2 3 = 1.1813/1.585 = 74.5%
Redundancy Rη = 1 − η = 25.5%

8M
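The self-information, entropy, efficiency and redundancy figures in 1(a)(ii) can be checked numerically; a minimal Python sketch (symbol names as in the question):

```python
import math

p = {"A": 0.7, "B": 0.15, "C": 0.15}

# Self-information I(s) = log2(1/p(s)) in bits
info = {s: math.log2(1 / q) for s, q in p.items()}

# Source entropy, efficiency (relative to log2 of the alphabet size), redundancy
H = sum(q * math.log2(1 / q) for q in p.values())
eta = H / math.log2(len(p))
print(info)               # B and C carry more self-information than A
print(H, eta, 1 - eta)    # ~1.1813 bits/symbol, ~74.5%, ~25.5%
```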
b)
i. A discrete memoryless source emits one of five symbols every 2 ms. The symbol probabilities
are {0.5, 0.25, 0.125, 0.0625, 0.0625}. Find the average information rate of the source.
Solution:

Average information per symbol:
H = 0.5 log2 2 + 0.25 log2 4 + 0.125 log2 8 + 0.0625 log2 16 + 0.0625 log2 16
  = 0.5 + 0.5 + 0.375 + 0.25 + 0.25 = 1.875 bits/symbol

In the problem it is given that a symbol is emitted every 2 ms. Therefore the symbol rate is
rs = 1/(2 × 10⁻³) = 500 symbols/s, and the average information rate is
R = rs H = 500 × 1.875 = 937.5 bits/s

10M
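As a numerical check of 1(b)(i) (a sketch; the one-symbol-every-2-ms reading of the question is assumed, as in the worked solution):

```python
import math

p = [0.5, 0.25, 0.125, 0.0625, 0.0625]
H = sum(q * math.log2(1 / q) for q in p)  # average information, bits/symbol
r_s = 1 / 2e-3                            # one symbol every 2 ms -> 500 symbols/s
R = r_s * H                               # average information rate, bits/s
print(H, R)                               # 1.875 bits/symbol, 937.5 bits/s
```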

ii. A discrete source emits one of five symbols every 1 s. The symbol probabilities are
{ }. Find the average information rate of the source. Find the average
information content of the source in nats/sym and Hartleys/sym.

Average information rate: with one symbol emitted every 1 s, rs = 1 symbol/s, so
R = rs H = H bits/s, where H = Σ pi log2(1/pi) bits/symbol for the given probabilities.

Average information content of the source in nats/sym: H(nats) = H(bits) × ln 2 = 0.6931 H(bits)

Average information content of the source in Hartleys/sym: H(Hartleys) = H(bits) × log10 2 = 0.3010 H(bits)

10M
2. a) Apply Shannon’s first encoding algorithm to the following set of messages and obtain
code efficiency and redundancy

M1 M2 M3 M4 M5
1/8 1/16 3/16 1/4 3/8

Arrange the probabilities in non-increasing order: P = (3/8, 1/4, 3/16, 1/8, 1/16) for (M5, M4, M3, M1, M2).

The cumulative probabilities αi = Σ_{k<i} pk are:
α1 = 0, α2 = 3/8, α3 = 5/8, α4 = 13/16, α5 = 15/16

The smallest integer li is found using 2^li ≥ 1/pi:
l = (2, 2, 3, 3, 4)

The code word for each symbol is the binary expansion of αi taken to li digits:

Source Symbol  p_i   Code  l_i
M5             3/8   00    2
M4             1/4   01    2
M3             3/16  101   3
M1             1/8   110   3
M2             1/16  1111  4

Average code length L = Σ pi li = 2(3/8) + 2(1/4) + 3(3/16) + 3(1/8) + 4(1/16) = 2.4375 bits/symbol
Source entropy H = Σ pi log2(1/pi) = 2.1085 bits/symbol
Code efficiency η = H/L = 2.1085/2.4375 = 86.5%
Code redundancy Rη = 1 − η = 13.5% 10M
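The Shannon code words in the table can be reproduced programmatically; a sketch, assuming the standard construction (binary expansion of the cumulative probability, truncated to ⌈log2(1/p)⌉ digits):

```python
import math
from fractions import Fraction

# Shannon's first encoding: code word i is the binary expansion of the
# cumulative probability alpha_i, truncated to l_i = ceil(log2(1/p_i)) digits.
symbols = ["M5", "M4", "M3", "M1", "M2"]  # sorted by decreasing probability
probs = [Fraction(3, 8), Fraction(1, 4), Fraction(3, 16),
         Fraction(1, 8), Fraction(1, 16)]

codes = {}
alpha = Fraction(0)
for s, p in zip(symbols, probs):
    l = math.ceil(math.log2(1 / p))
    # binary expansion of alpha to l digits
    bits, frac = "", alpha
    for _ in range(l):
        frac *= 2
        bits += "1" if frac >= 1 else "0"
        frac -= int(frac)
    codes[s] = bits
    alpha += p

print(codes)  # {'M5': '00', 'M4': '01', 'M3': '101', 'M1': '110', 'M2': '1111'}
```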

b)
i. Construct a Shannon-Fano ternary code for the following ensemble and find the code
efficiency and redundancy.
S1 S2 S3 S4 S5 S6 S7
0.3 0.3 0.12 0.12 0.06 0.06 0.04
The symbols are repeatedly partitioned into three groups of (nearly) equal probability, and each group is assigned one ternary digit (2, 1 or 0):

Symbol  p_i   Code  Length
S1      0.3   2     1
S2      0.3   1     1
S3      0.12  02    2
S4      0.12  01    2
S5      0.06  002   3
S6      0.06  001   3
S7      0.04  000   3

Average code length L = (0.3 + 0.3)(1) + (0.12 + 0.12)(2) + (0.06 + 0.06 + 0.04)(3) = 1.56 ternary digits/symbol
Entropy H3 = H(bits)/log2 3 = 2.4491/1.585 = 1.5452 trits/symbol
Code efficiency η = H3/L = 1.5452/1.56 = 99.05%
Code redundancy Rη = 1 − η = 0.95% 10M

ii. Given the messages s1, s2, s3 and s4 with respective probabilities of 0.4, 0.3, 0.2 and 0.1,
construct a binary code by applying the Huffman encoding procedure. Determine the
efficiency and redundancy of the code so formed.

Huffman procedure: combine the two least probable symbols at each stage.
(0.4, 0.3, 0.2, 0.1) → (0.4, 0.3, 0.3) → (0.6, 0.4) → (1.0)

One valid assignment of code words: s1 = 1, s2 = 00, s3 = 010, s4 = 011 (lengths 1, 2, 3, 3)

Average code length L = 0.4(1) + 0.3(2) + 0.2(3) + 0.1(3) = 1.9 bits/symbol
Entropy H = Σ pi log2(1/pi) = 1.8464 bits/symbol
Code efficiency η = H/L = 1.8464/1.9 = 97.2%
Code redundancy Rη = 1 − η = 2.8%

10M
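The Huffman construction for 2(b)(ii) can be sketched with a priority queue; the exact code words depend on tie-breaking and the 0/1 assignment, but the code lengths (1, 2, 3, 3) do not:

```python
import heapq

# Compact Huffman construction (sketch): repeatedly merge the two least
# probable nodes, prepending a bit to every code inside each merged node.
def huffman(probs):
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

probs = {"s1": 0.4, "s2": 0.3, "s3": 0.2, "s4": 0.1}
codes = huffman(probs)
lengths = {s: len(c) for s, c in codes.items()}
print(codes, lengths)  # lengths: s1 -> 1, s2 -> 2, s3 -> 3, s4 -> 3
```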
3. a) A transmitter has an alphabet consisting of five letters {a1, a2, a3, a4, a5} and the receiver
has an alphabet of four letters {b1, b2, b3, b4}. The JPM of the system is given below.
[ ]

[ ]

Compute H(A), H(B), H(A,B), H(A/B), H(B/A) and I(A,B).

10M

b) Consider a discrete memoryless source with S = {C, M, O, E} with respective probabilities
P={0.4, 0.1, 0.2, 0.3}. Develop the code word for the message “COME” using arithmetic
coding.
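The arithmetic-coding interval for the message "COME" can be traced with exact rationals; a sketch, assuming the cumulative intervals follow the listed symbol order C, M, O, E:

```python
from fractions import Fraction

# Arithmetic coding (sketch): each symbol narrows the current interval
# [low, high) proportionally to its probability slot.
probs = {"C": Fraction(4, 10), "M": Fraction(1, 10),
         "O": Fraction(2, 10), "E": Fraction(3, 10)}

# cumulative lower bounds in the listed order C, M, O, E (assumed ordering)
cum, acc = {}, Fraction(0)
for s, p in probs.items():
    cum[s] = acc
    acc += p

low, high = Fraction(0), Fraction(1)
for s in "COME":
    width = high - low
    high = low + width * (cum[s] + probs[s])
    low = low + width * cum[s]

tag = (low + high) / 2   # any number in [low, high) identifies the message
print(float(low), float(high), float(tag))  # 0.2376, 0.24, 0.2388
```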

4. a) List the detailed steps involved in the development of adaptive Huffman code tree and
discuss sibling property.
b) Discuss
i. Perceptual coding 3M
ii. MPEG audio Layers I, II, III 4M
iii. Psychoacoustics 3M

5.
a) Compare the following lossless compression methods 10M
i. RLE
ii. Huffman Coding
iii. LZW Coding

RLE
  Pros: It can compress data made of any combination of symbols, and it does not need to know the frequency of occurrence of symbols.
  Cons: It does not work well at all on continuous-tone images such as photographs.
  Application: Used in palette-based images and fax machines.

Huffman Coding
  Pros: High transmission speed; easy to implement.
  Cons: For image files that contain long runs of identical pixels, Huffman is not as efficient as RLE.
  Application: Used as a "back-end" to some other compression method.

LZW Coding
  Pros: No need to analyse the incoming text.
  Cons: Only good on text files, not on other files.
  Application: Used in the GIF and TIFF image formats.

b) i) Write the block diagram of lossless compression

ii) Explain any two techniques that follow lossless compression


6. a) Write short notes on
i. Multiresolution coding
ii. DCT
iii. DWT
Multiresolution Coding: HINT (hierarchical interpolation) is a multiresolution coding scheme based on sub-sampling. It starts with a low-resolution version of the original image, and interpolates the pixel values to successively generate higher resolutions. The errors between the interpolation values and the real values are stored, along with the initial low-resolution image. Compression is achieved since both the low-resolution image and the error values can be stored with fewer bits than the original image. Laplacian Pyramid is another multiresolution image compression method developed by Burt and Adelson. It successively constructs lower-resolution versions of the original image by down-sampling, so that the number of pixels decreases by a factor of two at each scale. The differences between successive resolution versions, together with the lowest-resolution image, are stored and utilized to perfectly reconstruct the original image. But it cannot achieve a high compression ratio because the number of data values is increased by 4/3 of the original image size.
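The idea of storing a low-resolution image plus residuals, common to HINT and the Laplacian pyramid, can be illustrated in a few lines; a sketch, using simple 2×2 averaging for down-sampling and nearest-neighbour up-sampling rather than the filters an actual codec would use:

```python
import numpy as np

# One pyramid level (sketch): keep a low-resolution image plus the
# residual needed to reconstruct the original exactly.
def down(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    return img.repeat(2, axis=0).repeat(2, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)
low = down(img)               # lower-resolution version (4x4)
residual = img - up(low)      # stored difference values

# Perfect reconstruction from {low, residual}
recon = up(low) + residual
print(np.allclose(recon, img))  # True
```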
b) Compare the following video formats
i. MPEG-1
ii. MPEG-2
iii. MPEG-4
7. a) For a systematic (6,3) linear block code, the parity matrix P is given by

[ ] [ ] Draw the encoder circuit and obtain all possible code vectors.
b) i. Explain the types of errors seen in a digital communication system with examples
ii. List out the differences between block codes and convolutional codes
8. a) The parity check bits of a (7,4) block code are generated by
c5=d1+d3+d4
c6=d1+d2+d3
c7=d2+d3+d4
i. Obtain the generator matrix and parity check matrix for this code
ii. Show that GH^T = 0
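The requested check GH^T = 0 can be verified mechanically; a sketch, with P read off from the three parity equations above:

```python
import numpy as np

# (7,4) systematic code: c5 = d1+d3+d4, c6 = d1+d2+d3, c7 = d2+d3+d4 (mod 2).
# Row i of P records which parity bits data bit d_i contributes to.
P = np.array([[1, 1, 0],    # d1 appears in c5, c6
              [0, 1, 1],    # d2 appears in c6, c7
              [1, 1, 1],    # d3 appears in c5, c6, c7
              [1, 0, 1]])   # d4 appears in c5, c7

G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix G = [I4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity check matrix H = [P^T | I3]

print((G @ H.T) % 2)   # all-zero 4x3 matrix, confirming G.H^T = 0
```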

b) Construct the standard array to perform syndrome decoding for (6,3) code having a
parity matrix

[ ] [ ]
9. a) Consider the (3,1,2) convolutional code with g(1) = (110), g(2) = (101), g(3) = (111).
i. Find the constraint length
ii. Find the rate
iii. Draw the encoder block diagram
iv. Find the codeword for the message sequence (11101) using the time domain approach
b) i. Write the convolutional encoder circuit for K = 3, r = 1/3.
ii. Draw the state table for the encoder circuit obtained in (i)
iii. Draw the trellis diagram and encode the message "110101".
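For 9(a)(iv), the time-domain approach is a mod-2 convolution of the message with each generator sequence; a minimal sketch, assuming the conventional interleaved output ordering (one bit from each stream per time step):

```python
# Time-domain convolutional encoding for the (3,1,2) code of 9(a):
# each output stream is the mod-2 convolution of the message with one generator.
def conv_mod2(u, g):
    n = len(u) + len(g) - 1
    return [sum(u[k] * g[i - k] for k in range(len(u))
                if 0 <= i - k < len(g)) % 2 for i in range(n)]

u = [1, 1, 1, 0, 1]                       # message (11101)
gens = [[1, 1, 0], [1, 0, 1], [1, 1, 1]]  # g(1), g(2), g(3)

streams = [conv_mod2(u, g) for g in gens]
# interleave: at each time step emit one bit from each output stream
codeword = "".join(str(s[t]) for t in range(len(streams[0])) for s in streams)
print(codeword)   # 21-bit codeword: 3 output bits per input bit, over 5+2 steps
```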

10. a) Explain the steps involved in encoding using


i. Time domain approach
ii. Frequency domain approach
b) Let the received sequence be r = (11, 11, 11, 00, 10, 11). Decode the given received
vector using the stack algorithm.

Assumptions: 2M
Drawing State machine: 3M
Decoding and final output: 5M
