
Website : www.aceengineeringpublications.com

ACE Engineering Publications
(A Sister Concern of ACE Engineering Academy, Hyderabad)

Hyderabad | Delhi | Bhopal | Pune | Bhubaneswar | Bengaluru | Lucknow | Patna | Chennai | Vijayawada | Visakhapatnam | Tirupati | Kolkata | Ahmedabad

ESE - 19 (MAINS)

Electronics & Telecommunication Engineering
(PAPER - II)
(Analog & Digital Communication Systems, Control Systems, Signals & Systems,
Computer Organization & Architecture, Electro Magnetics, Advanced Electronics,
and Advanced Communication)

Previous Conventional Questions with Solutions, Subject wise & Chapter wise
(1980 - 2018)

ACE is the leading institute for coaching in ESE, GATE & PSUs
H O: Sree Sindhi Guru Sangat Sabha Association, # 4-1-1236/1/A, King Koti, Abids, Hyderabad-500001.
Ph: 040-23234418 / 19 / 20 / 21, 040 - 24750437

7 All India 1st Ranks in ESE


43 All India 1st Ranks in GATE
Copyright © ACE Engineering Publications 2018

All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any
form or by any means, electronic, mechanical, photocopying, digital, recording or otherwise,
without the prior permission of the publishers.

Published at :

ACE Engineering Publications

Sree Sindhi Guru Sangat Sabha Association,


# 4-1-1236/1/A, King Koti, Abids,
Hyderabad – 500001, Telangana, India.
Phones : 040- 23234419 / 20 / 21
www.aceenggacademy.com
Email: info@aceenggpublications.com
hyderabad@aceenggacademy.com

Authors :
Subject experts of ACE Engineering Academy, Hyderabad

While every effort has been made to avoid any mistake or omission, the publishers do not accept
responsibility for any damage or loss caused to any person on account of an error or omission in
this publication. Mistakes, if any, may be brought to the notice of the publishers at the following
Email-id, for correction in forthcoming editions.
Email : info@aceenggpublications.com

First Edition 2011


Revised Edition 2018

Printed at :
Karshak Art Printers,
Hyderabad.

Price : ₹ 380/-
ISBN : 978-1724241290
Foreword
UPSC Engineering Services in Electronics & Telecommunication Engineering
[Paper - II Conventional Questions with Solutions from 1980–2018]
(More than 38 years)
In the UPSC Engineering Services examination, the conventional papers carry 54.5% weightage in the
overall written test.
In Paper-II of Electronics & Telecommunication Engineering six subjects are
included as follows:
01. Analog & Digital Communication Systems 02. Control systems
03. Computer Organization & Architecture 04. Electro Magnetics
05. Advanced Electronics Topics 06. Advanced Communication Topics
The following approach is advisable to secure maximum marks.
• The solution to any question should generally contain the following steps; each step carries
  due weightage:
  (a) The data given
  (b) The appropriate figure, if applicable, with its parts labelled properly
  (c) The concept on which the problem is solved
  (d) The relevant formulae with standard notations and abbreviations. In case the paper setter
      asks to solve the question from fundamentals, the necessary derivations must be shown;
      otherwise one stands to lose up to 70% of the marks. If the question carries more marks
      (10 to 20 marks), a detailed analysis is compulsory to score high. If the question carries
      fewer marks (< 8), the formulae with abbreviations and the concept may be sufficient. In any
      case, the assumptions made shall be stated clearly while solving a problem.
• Neatness and presentation of the solutions carry about 5% of the marks.
• Note that all parts of a question shall be answered together.
• Short sentences and brevity in expressing the content are appreciated.
• Practising numerical problems with a calculator is essential, rather than merely reading the
  solutions.
• Try to understand and practise the solutions keeping the QCAB format in view.
The solutions have been prepared with utmost care. In spite of this, there may be some
typographical mistakes or an improper sequence. Students are requested to inform us of any errors
at aceenggpublications@gmail.com. ACE Engineering Publications will be grateful in this regard.

Thanks to all the professors who co-operated in the preparation of this booklet. Thanks to the
Academic Assistants and Data Entry section in the design of this booklet.

With best wishes to all those who wish to go through the following pages.

Y.V. Gopala Krishna Murthy,


M Tech. MIE,
Chairman & Managing Director,
ACE Engineering Academy,
ACE Engineering Publications.
Previous Questions with Solutions Subject wise & Chapter wise (E & TE)

(1980 – 2018)

MAIN INDEX

S.No.   Name of the Subject                                   Page No.

1       Analog & Digital Communication Systems                1 - 108

2       Control Systems
          PART - A (Control Systems)                          109 - 221
          PART - B (Signals & Systems)                        222 - 255

3       Computer Organization & Architecture                  256 - 288

4       Electro Magnetics                                     289 - 370

5       Advanced Communication                                371 - 387

6       Advanced Electronics                                  388 - 414


Chapter 1    Basics of Information Theory

Conventional Questions with Solutions

01. a) In a discrete random system with inter-symbol interference, show that the joint entropy is
       given by H(X, Y) = H(X) + H(Y/X).
    b) In a binary symmetric channel, the symbols 1/0 are transmitted with equal probability at a
       rate of 10^4 per sec. The error rate in the channel is P0 = 1/16. Calculate the rate of
       transmission R over the channel.
       (IES-EC-80)

Sol:
a) We know that p(xi, yj) = p(yj/xi) p(xi) and Σj p(xi, yj) = p(xi).

   Average uncertainty of the source:
       H(X) = Σi P(xi) log2 [1/P(xi)]
   Average uncertainty of the received symbol:
       H(Y) = Σj P(yj) log2 [1/P(yj)]
   Average uncertainty of the received symbol when X is transmitted:
       H(Y/X) = Σi Σj P(xi, yj) log2 [1/P(yj/xi)]
   Average uncertainty in the transmitted symbol when Y is received:
       H(X/Y) = Σi Σj P(xi, yj) log2 [1/P(xi/yj)]
   Average uncertainty of the communication channel as a whole is given by the joint entropy:
       H(X, Y) = Σi Σj P(xi, yj) log2 [1/P(xi, yj)]
               = - Σi Σj P(xi, yj) log2 [P(yj/xi) P(xi)]
                 (since, from Bayes' theorem, P(yj/xi) = p(xi, yj)/p(xi))
               = - Σi Σj P(xi, yj) log2 P(yj/xi) - Σi Σj P(xi, yj) log2 P(xi)
               = H(Y/X) - Σi [ Σj P(xi, yj) ] log2 P(xi)
               = H(Y/X) - Σi P(xi) log2 P(xi)
       H(X, Y) = H(Y/X) + H(X)

b) Given: a BSC in which the symbols 1/0 are transmitted with equal probability at a rate of
   10^4 symbols/sec, and the error rate in the channel is P0 = 1/16. Transmission rate R = ?

   P(x = 0) = P(x = 1) = 1/2 (equal probability), so P(X) = [1/2  1/2]
   P(1/0) = P(0/1) = 1/16 (error rate), hence

                        y=0      y=1
       P(Y/X) = x=0 [  15/16    1/16  ]
                x=1 [   1/16   15/16  ]

   (Channel diagram: x1 = 0 and x2 = 1, each with probability 1/2, go to the correct output with
   probability 15/16 and to the wrong output with probability 1/16.)

   P(Y) = P(X) P(Y/X) = [16/32  16/32] = [0.5  0.5]

   H(X) = Σi P(xi) log2 [1/P(xi)] = log2 2 = 1 bit/symbol
   H(Y) = Σj P(yj) log2 [1/P(yj)] = log2 2 = 1 bit/symbol

   H(Y/X) = Σi Σj P(xi, yj) log2 [1/P(yj/xi)] = Σi Σj P(xi) P(yj/xi) log2 [1/P(yj/xi)]
          = (1/2)(15/16) log2 (16/15) + (1/2)(1/16) log2 16 + (1/2)(1/16) log2 16
            + (1/2)(15/16) log2 (16/15)
          = 0.044 + 0.125 + 0.125 + 0.044 = 0.338 bits/symbol

   I(X, Y) = H(Y) - H(Y/X) = 1 - 0.338 = 0.662 bits/symbol
   Information rate R = r × I(X, Y) = 10^4 × 0.662 = 6620 bps ≈ 6.6 kbps
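The arithmetic above can be cross-checked numerically. The short Python sketch below is not part of
the original solution; the variable names and layout are illustrative only.

    from math import log2

    # Q.01(b): BSC with equiprobable inputs, crossover probability 1/16, r = 10^4 symbols/sec
    PX = [0.5, 0.5]
    PYX = [[15/16, 1/16],
           [1/16, 15/16]]      # rows: x = 0, 1; columns: y = 0, 1
    r = 1e4

    PY = [sum(PX[i] * PYX[i][j] for i in range(2)) for j in range(2)]
    H_Y = -sum(p * log2(p) for p in PY if p > 0)
    H_YX = -sum(PX[i] * PYX[i][j] * log2(PYX[i][j])
                for i in range(2) for j in range(2) if PYX[i][j] > 0)
    I_XY = H_Y - H_YX
    print(H_Y, H_YX, I_XY, r * I_XY)   # 1.0, ~0.337, ~0.663, ~6630 bps (about 6.6 kbps)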
02. a) Define mutual information I(x, y) and show that I(x, y) = H(x) - H(x/y).
    b) In a binary communication channel, p(xi = 0) = 0.4 and p(xi = 1) = 0.6. For the given noise
       matrix of the channel, calculate the average information I(x, y) conveyed per symbol.
       (IES-EC-81)

                         y=0     y=1
       P(y/x):   x=0 [  0.99    0.01 ]
                 x=1 [  0       1    ]

Sol:
a) Mutual information is defined as the amount of information transferred when Xi is transmitted
   and Yj is received. It is represented as I(Xi, Yj). The mutual information is a measure of the
   uncertainty about the channel input that is resolved by observing the channel output, and vice
   versa.

   The quantity H(X/Y) is called a conditional entropy. It represents the amount of uncertainty
   about the channel input after observing the channel output, whereas H(X) represents the amount
   of uncertainty about the channel input before observing the channel output.

   The difference H(X) - H(X/Y) must therefore represent the amount of uncertainty about the
   channel input that is resolved by observing the channel output. This quantity is called the
   "mutual information" of the channel, denoted by I(x, y):
       I(x, y) = H(x) - H(x/y)

   Average mutual information:
       I(x, y) = Σi Σj P(xi, yj) log2 [ P(xi/yj) / P(xi) ]
               = Σi Σj P(xi, yj) log2 P(xi/yj) - Σi Σj P(xi, yj) log2 P(xi)
               = -H(x/y) + H(x)
       I(x, y) = H(x) - H(x/y)

b) Given P(xi = 0) = 0.4, P(xi = 1) = 0.6, so P(X) = [0.4  0.6] and

       P(Y/X) = [ 0.99  0.01 ]
                [ 0     1    ]

   P(Y) = P(X) P(Y/X) = [0.396  0.604]

   H(Y) = -[0.396 log2 0.396 + 0.604 log2 0.604] = 0.529 + 0.439 = 0.968 bits/symbol

   H(Y/X) = Σi Σj P(xi, yj) log2 [1/P(yj/xi)] = Σi Σj P(xi) P(yj/xi) log2 [1/P(yj/xi)]
          = -[0.396 log2 0.99 + 0.004 log2 0.01 + 0.6 log2 1]
            (note that the joint probability P(x = 0, y = 1) = 0.4 × 0.01 = 0.004)
          = 5.72 × 10^-3 + 0.0266 ≈ 0.032 bits/symbol

   I(X, Y) = H(Y) - H(Y/X) = 0.968 - 0.032 ≈ 0.936 bits/symbol
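The same kind of numerical check can be applied to Q.02(b); the sketch below is illustrative (not
part of the original solution) and confirms the joint probability 0.004 used above.

    from math import log2

    # Q.02(b): P(X) = [0.4, 0.6], noise matrix rows x = 0, 1; columns y = 0, 1
    PX = [0.4, 0.6]
    PYX = [[0.99, 0.01],
           [0.0, 1.0]]

    PY = [sum(PX[i] * PYX[i][j] for i in range(2)) for j in range(2)]
    H_Y = -sum(p * log2(p) for p in PY if p > 0)
    H_YX = -sum(PX[i] * PYX[i][j] * log2(PYX[i][j])
                for i in range(2) for j in range(2) if PX[i] * PYX[i][j] > 0)
    print(PY, H_Y, H_YX, H_Y - H_YX)   # [0.396, 0.604], ~0.968, ~0.032, ~0.936 bits/symbol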
03. Show that the channel capacity of a noisy channel is C = B log2 (1 + S/N), where B is the
    bandwidth and S/N is the signal to noise ratio.
    (IES-EC-82, 84, 87)(10M)

04. a) State the Hartley-Shannon theorem and explain it.
    b) A system has a bandwidth of 4.0 kHz and an S/N ratio of 28 dB at the input to the receiver.
       Calculate
       i)  its information carrying capacity;
       ii) the capacity of the channel if its bandwidth is doubled while the transmitted signal
           power remains constant.
       (IES-EC-85)(12M)

05. a) State and explain the Hartley-Shannon theorem.
    b) Calculate the amount of information needed to open a lock whose combination consists of
       three numbers, each ranging from 0 to 1.
       (IES-EC-90)(10M)

Sol: Shannon-Hartley Law:
The Hartley-Shannon law is so named in recognition of the early work by Hartley on the subject and
its rigorous derivation by Shannon. The Hartley-Shannon law has two important implications:
1. It gives the upper limit for the rate of reliable information transmission over a Gaussian
   channel.
2. For a specified channel capacity, it defines the way in which the transmission bandwidth BT may
   be traded off for an improved signal to noise ratio S/N, and vice versa.

It relates to the channel capacity, so it is also called the channel capacity theorem, which is
usually stated for a discrete memoryless channel. The channel capacity is the maximum rate of
reliable information transmission over an additive white Gaussian noise channel.

Let C be the capacity of a discrete memoryless channel and let H be the entropy of a discrete
information source emitting r symbols/sec. The capacity theorem states that if rH ≤ C, then there
exists a coding scheme such that the output of the source can be transmitted over the channel with
an arbitrarily small probability of error. It is not possible to transmit messages without error if
rH > C. This gives essentially error-free transmission in the presence of additive noise.

Shannon defined the channel capacity of a continuous channel having an average power limitation and
disturbed by additive white Gaussian noise. In a continuous channel, the input and output signals
are continuous functions of time.
Consider a channel of bandwidth B Hz; let x(t) be the input and y(t) the output.
    S     = average power of the transmitted signal x(t)
    N     = average power of the additive noise component at the received signal
    S + N = average power of the received signal y(t)
Both x(t) and y(t) are assumed to have a PSD in the range -B ≤ f ≤ B and zero elsewhere. The output
signal is sampled at a sampling rate equal to the Nyquist rate.

Let Y denote a sample of the received signal y(t). Since Y has a Gaussian distribution with zero
mean and variance equal to (S + N), the entropy of the received signal is
    H(Y) = log2 √(2πe(S + N))

H(X) of a sample X of the transmitted signal differs from H(Y); the difference equals the
conditional entropy H(Y/X), which is a measure of the average uncertainty of the received sample Y:
    H(Y/X) = - ∫∫ f_XY(x, y) log2 f_Y/X(y/x) dx dy
where the joint pdf is given by f_XY(x, y) = f_Y/X(y/x) f_X(x), so that
    H(Y/X) = - ∫ f_X(x) dx ∫ f_Y/X(y/x) log2 f_Y/X(y/x) dy

When the additive noise at the channel output is independent of the transmitted signal, the
conditional entropy does not depend on x or y except through the combination (Y - X), and H(Y/X)
equals the entropy of the additive noise. Since the noise is a Gaussian process of zero mean and
variance equal to N, the entropy of a noise sample is
    H(Y/X) = log2 √(2πeN)

Therefore the mutual information is
    I(X, Y) = H(Y) - H(Y/X) = log2 √(2πe(S + N)) - log2 √(2πeN)
            = (1/2) log2 (1 + S/N)  bits/sample

Since samples are taken at the maximum rate of 2B samples/sec, for an AWGN channel the channel
capacity is
    C = 2B I(X, Y)
    C = B log2 (1 + S/N)  bits/sec
04. b) i)  Given B = 4 kHz, S/N = 28 dB.
           S/N = 28 dB  →  S/N = 10^2.8 ≈ 631
           C = B log2 (1 + S/N) = 4k × log2 (1 + 631)
           C ≈ 37.21 kbps

        ii) Since noise power ∝ bandwidth, doubling the bandwidth doubles the noise power
            (N' = 2N) while S remains constant, so
            C = 2B log2 (1 + S/2N) = 8k × log2 (1 + 631/2)
            C ≈ 66.45 kbps

05. b) There will be 2^3 = 8 possible combinations. Assuming all are equiprobable,
           H = log2 8 = 3 bits/combination

06. An analog signal is band limited to B Hz, sampled at the Nyquist rate, and the samples are
    quantized into 4 levels. The quantized levels are assumed independent and occur with
    probabilities 1/4, 1/8, 1/8 and 1/2. Find the average information and the information rate of
    the source.
    (IES-EC-86)(14M)

Sol: The average information of the source is its entropy:
       H(X) = Σi pi log2 (1/pi)
            = (1/4) log2 4 + (1/8) log2 8 + (1/8) log2 8 + (1/2) log2 2
            = 2/4 + 6/8 + 1/2 = 14/8 = 1.75 bits/symbol

     Information rate R = r × H(X), where r is the message rate = 2B samples/sec:
       R = 2B × 14/8 = 3.5B bits/sec
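A quick numerical check of Q.04(b) and Q.06 is sketched below in Python; it is illustrative only
(the variable names are not from the book), and simply evaluates the formulas used above.

    from math import log2

    # Q.04(b): Shannon-Hartley capacity; Q.06: source entropy and rate coefficient
    B = 4e3
    snr = 10 ** (28 / 10)                  # 28 dB -> ~631
    C1 = B * log2(1 + snr)                 # (i)  ~37.2 kbps
    C2 = (2 * B) * log2(1 + snr / 2)       # (ii) bandwidth doubled, so noise power doubles too

    probs = [1/4, 1/8, 1/8, 1/2]           # Q.06 quantizer level probabilities
    H = sum(p * log2(1 / p) for p in probs)
    print(C1 / 1e3, C2 / 1e3, H, 2 * H)    # 37.2, 66.5, 1.75 bits/sample, R/B = 2H = 3.5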
07. A source is transmitting SIX messages with probabilities 0.30, 0.25, 0.15, 0.12, 0.10 and 0.08
    respectively.
    i)  Find the binary Huffman code.
    ii) Determine its average word length, efficiency and redundancy.
    (IES-EC-91)(10M)

Sol: Huffman coding: combining the two least probable messages at each stage
     (0.10 + 0.08 = 0.18; 0.15 + 0.12 = 0.27; 0.18 + 0.25 = 0.43; 0.27 + 0.30 = 0.57;
     0.43 + 0.57 = 1) and assigning bits back down the tree gives:

       Xi     P(Xi)   Code
       X1     0.30    00
       X2     0.25    10
       X3     0.15    010
       X4     0.12    011
       X5     0.10    110
       X6     0.08    111

     Entropy:
       H(X) = Σi Pi log2 (1/Pi)
            = -[0.30 log2 0.30 + 0.25 log2 0.25 + 0.15 log2 0.15 + 0.12 log2 0.12
                + 0.10 log2 0.10 + 0.08 log2 0.08]
            = 2.42 bits/message

     Average length:
       L = Σi Pi xi = 0.30×2 + 0.25×2 + 0.15×3 + 0.12×3 + 0.10×3 + 0.08×3 = 2.45 bits/message

     Efficiency:  η = [H(X)/L] × 100 = (2.42/2.45) × 100 ≈ 98.8%
     Redundancy:  γ = 1 - η = 1 - 0.988 ≈ 0.012

08. A message source generates eight symbols m1, m2, ..., m8 with probabilities 0.25, 0.03, 0.19,
    0.16, 0.11, 0.14, 0.08 and 0.04 respectively. Give the Huffman codes for these symbols, and
    calculate the entropy of the source and the average number of bits per symbol.
    (IES-EC-92)(15M)

Sol: Huffman coding (symbols listed in decreasing order of probability):

       Symbol   P(Xi)   Code
       m1       0.25    01
       m3       0.19    11
       m4       0.16    000
       m6       0.14    100
       m5       0.11    101
       m7       0.08    0010
       m8       0.04    00110
       m2       0.03    00111

     Entropy:
       H = -Σi Pi log2 Pi
         = -[0.25 log2 0.25 + 0.19 log2 0.19 + 0.16 log2 0.16 + 0.14 log2 0.14 + 0.11 log2 0.11
             + 0.08 log2 0.08 + 0.04 log2 0.04 + 0.03 log2 0.03]
         = 2.75 bits/message

     Average number of bits per symbol:
       L = 0.25×2 + 0.19×2 + 0.16×3 + 0.14×3 + 0.11×3 + 0.08×4 + 0.04×5 + 0.03×5
         = 2.78 bits/symbol
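The Huffman procedure described above can also be carried out programmatically. The sketch below is
not the book's construction (Huffman codes are not unique), but it reproduces the same codeword
lengths and average length for Q.07; the function name huffman_lengths is chosen here for
illustration.

    import heapq
    from math import log2

    def huffman_lengths(probs):
        """Return binary Huffman codeword lengths for a list of probabilities."""
        heap = [(p, [i]) for i, p in enumerate(probs)]   # (probability, symbols in subtree)
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for s in s1 + s2:        # every merge adds one bit to each symbol in the subtree
                lengths[s] += 1
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        return lengths

    probs = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]         # Q.07
    lens = huffman_lengths(probs)
    Lbar = sum(p * l for p, l in zip(probs, lens))
    H = -sum(p * log2(p) for p in probs)
    print(lens, Lbar, H, 100 * H / Lbar)   # [2, 2, 3, 3, 3, 3], 2.45, ~2.42, ~98.8%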
09. a) Describe briefly the Shannon-Fano algorithm for efficient encoding of messages.
    b) Using this algorithm, obtain the code for a source emitting eight messages with
       probabilities 1/2, 1/8, 1/8, 1/16, 1/16, 1/16, 1/32 and 1/32. Calculate the average
       information per message and the efficiency of the code.
       (IES-EC-93)(15M)

Sol: Shannon-Fano algorithm:
a) 1) List the source symbols in the order of decreasing probability.
   2) Partition the set into two sets that are as close to equiprobable as possible, and assign
      '0' to the upper set and '1' to the lower set.
   3) Continue this process, each time partitioning the sets with as nearly equal probabilities as
      possible, until further partitioning is not possible.

b)     Symbol   Probability P(xi)   Code word   No. of bits
       X1       1/2                 0           1
       X2       1/8                 100         3
       X3       1/8                 101         3
       X4       1/16                1100        4
       X5       1/16                1101        4
       X6       1/16                1110        4
       X7       1/32                11110       5
       X8       1/32                11111       5

   Entropy (average information per message):
       H(X) = Σi Pi log2 (1/Pi)
            = (1/2) log2 2 + (1/8) log2 8 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16
              + (1/16) log2 16 + (1/32) log2 32 + (1/32) log2 32
            = 2.3125 bits/symbol

   Average length:
       L = 1 × (1/2) + 3 × (2/8) + 4 × (3/16) + 5 × (2/32) = 2.3125 bits/symbol

   Efficiency:  η = [H(X)/L] × 100 = 100%
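The partitioning procedure of 09(a) can be sketched in Python as below. This is an illustrative
implementation (the split point is chosen to minimise the difference between the two halves' total
probabilities, which is one common reading of step 2); for the dyadic source of 09(b) it reproduces
the code lengths in the table above.

    from math import log2

    def shannon_fano(symbols):
        """symbols: list of (name, probability), sorted by decreasing probability.
        Returns {name: codeword} built by recursive near-equiprobable splitting."""
        codes = {name: "" for name, _ in symbols}

        def split(group):
            if len(group) <= 1:
                return
            total = sum(p for _, p in group)
            best_k, best_diff, run = 1, float("inf"), 0.0
            for k in range(1, len(group)):       # split minimising |upper - lower|
                run += group[k - 1][1]
                diff = abs(run - (total - run))
                if diff < best_diff:
                    best_k, best_diff = k, diff
            upper, lower = group[:best_k], group[best_k:]
            for name, _ in upper:
                codes[name] += "0"
            for name, _ in lower:
                codes[name] += "1"
            split(upper)
            split(lower)

        split(list(symbols))
        return codes

    src = [("X1", 1/2), ("X2", 1/8), ("X3", 1/8), ("X4", 1/16),
           ("X5", 1/16), ("X6", 1/16), ("X7", 1/32), ("X8", 1/32)]
    codes = shannon_fano(src)
    Lbar = sum(p * len(codes[n]) for n, p in src)
    H = sum(p * log2(1 / p) for _, p in src)
    print(codes, Lbar, H)      # L = H = 2.3125 -> 100% efficiency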
10. a) Define conditional entropy and redundancy.
    b) A binary data source has P0 = 3/8, P1 = 5/8, P(1/0) = 3/4 and P(0/1) = 1/16. Calculate the
       conditional entropy and the redundancy.
       (IES-EC-94)(12M)

Sol:
a) Conditional entropy:
   With input probabilities P(xi), output probabilities P(yj), transition probabilities P(yj/xi)
   and joint probabilities P(xi, yj):

   H(X) is the average uncertainty of the channel input:  H(X) = -Σi P(xi) log2 P(xi)
   H(Y) is the average uncertainty of the channel output: H(Y) = -Σj P(yj) log2 P(yj)

   The conditional entropy
       H(X/Y) = Σj Σi P(xi, yj) log2 [1/P(xi/yj)]
   is a measure of the average uncertainty about the channel input after the channel output has
   been observed; H(X/Y) is sometimes called the equivocation of X with respect to Y. Similarly,
       H(Y/X) = Σi Σj P(xi, yj) log2 [1/P(yj/xi)]
   is the average uncertainty of the channel output when X was transmitted.

   Redundancy:
   Redundancy is the number of bits used to transmit a message minus the number of bits of actual
   information in the message. Informally, it is the amount of wasted "space" used to transmit
   certain data. Data compression is a way to reduce or eliminate unwanted redundancy, while
   checksums are a way of adding desired redundancy for purposes of error detection when
   communicating over a noisy channel of limited capacity.

   The combined role of the channel encoder and decoder is to provide reliable communication over
   a noisy channel. This is done by introducing redundancy in the channel encoder and exploiting
   it in the channel decoder to reconstruct the original encoder input as accurately as possible.
   In source coding we remove redundancy, whereas in channel coding we introduce controlled
   redundancy. Because of redundancy, we are able to decode a message accurately, without errors
   in the received message.

   For example, to the code word 0001 we may add a fifth pulse of positive polarity to make a new
   code word 00011. Now the number of positive pulses is 2 (even). If a single error occurs in any
   position, this parity is violated; the receiver knows that an error has been made and can
   request retransmission of the message. It can detect an error, but cannot locate it.

   Redundancy = 1 - efficiency:  γ = 1 - H(x)/L

b) Given P0 = 3/8, P1 = 5/8, so P(X) = [3/8  5/8].
   P(1/0) = 3/4, P(0/1) = 1/16, hence P(0/0) = 1 - 3/4 = 1/4 and P(1/1) = 1 - 1/16 = 15/16:

                       y=0      y=1
       P(y/x) = x=0 [  1/4      3/4  ]
                x=1 [  1/16    15/16 ]

   H(Y/X) = Σi Σj P(xi, yj) log2 [1/P(yj/xi)] = Σi Σj P(xi) P(yj/xi) log2 [1/P(yj/xi)]
          = (3/8)[(3/4) log2 (4/3) + (1/4) log2 4] + (5/8)[(1/16) log2 16 + (15/16) log2 (16/15)]
          = 0.515 bits/symbol

   H(X) = -[P0 log2 P0 + P1 log2 P1] = -[(3/8) log2 (3/8) + (5/8) log2 (5/8)]
        = 0.530 + 0.423 = 0.953 bits/symbol

   Redundancy:
       γ = 1 - H(Y/X)/H(X) = 1 - 0.515/0.953 = 0.4596, i.e. about 46%
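A brief numerical check of 10(b), in the same illustrative Python style as the earlier sketches:

    from math import log2

    # Q.10(b): P(X) = [3/8, 5/8]; transition matrix rows x = 0, 1
    PX = [3/8, 5/8]
    PYX = [[1/4, 3/4],
           [1/16, 15/16]]

    H_YX = -sum(PX[i] * PYX[i][j] * log2(PYX[i][j]) for i in range(2) for j in range(2))
    H_X = -sum(p * log2(p) for p in PX)
    print(H_YX, H_X, 1 - H_YX / H_X)   # ~0.515, ~0.954, redundancy ~0.46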
11. A binary erasure channel matrix is given by

       P(Y/X) = [ 1-p   p    0  ]
                [ 0     p   1-p ]

    Draw the channel diagram, and if the source has equally likely outputs, compute the
    probabilities associated with the channel output for p = 0.2.
    (IES-EC-97)(8M)

Sol: Given

       P(Y/X) = [ 1-p   p    0  ]
                [ 0     p   1-p ]

    Channel diagram: input x1 = 0 goes to output y1 = 0 with probability 1-p and to the erasure
    output y2 = Δ with probability p; input x2 = 1 goes to output y3 = 1 with probability 1-p and
    to y2 = Δ with probability p. Here Δ is the erasure, i.e. the output is in doubt and should be
    erased.

    For p = 0.2:
       P(Y/X) = [ 0.8  0.2  0   ]
                [ 0    0.2  0.8 ]

    With P(x1) = P(x2) = 1/2, i.e. P(X) = [1/2  1/2]:
       P(Y) = P(X) P(Y/X)
            = [ (1/2)(0.8)   (1/2)(0.2) + (1/2)(0.2)   (1/2)(0.8) ]
            = [ 0.4   0.2   0.4 ]

    i.e. P(y1) = 0.4, P(y2) = 0.2, P(y3) = 0.4.
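The matrix product P(Y) = P(X) P(Y/X) used above is easily checked; the snippet below is purely
illustrative.

    # Q.11: binary erasure channel output distribution, P(Y) = P(X) x P(Y/X)
    PX = [0.5, 0.5]
    PYX = [[0.8, 0.2, 0.0],
           [0.0, 0.2, 0.8]]    # p = 0.2; columns: y1 = 0, y2 = erasure, y3 = 1

    PY = [sum(PX[i] * PYX[i][j] for i in range(2)) for j in range(3)]
    print(PY)                  # [0.4, 0.2, 0.4]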
12. An information source produces 8 different symbols with probabilities 1/2, 1/4, 1/8, 1/16,
    1/32, 1/64, 1/128 and 1/128 respectively. These symbols are encoded as 000, 001, 010, 011,
    100, 101, 110 and 111 respectively.
    (IES-EC-99)(20M)
    i)   What is the amount of information per symbol?
    ii)  What are the probabilities of occurrence of a 0 and a 1?
    iii) What is the efficiency of the code so obtained?
    iv)  Give an efficient code with the help of the method of Shannon.
    v)   What is the efficiency of the code so obtained in (iv) above?

Sol:
i)  Amount of information per symbol:
       H(X) = Σi Pi log2 (1/Pi)
            = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/32) log2 32
              + (1/64) log2 64 + (1/128) log2 128 + (1/128) log2 128
            = 0.5 + 0.5 + 0.375 + 0.25 + 0.156 + 0.094 + 0.055 + 0.055
            ≈ 1.984 bits/symbol

ii) Each symbol is coded with 3 binary digits. The code words 000, 001, 010, 011, 100, 101, 110,
    111 contain 3, 2, 2, 1, 2, 1, 1, 0 zeros respectively, so
       P(0) = (1/3)[3 × 1/2 + 2 × 1/4 + 2 × 1/8 + 1 × 1/16 + 2 × 1/32 + 1 × 1/64 + 1 × 1/128]
            = (1/3)(1.5 + 0.5 + 0.25 + 0.0625 + 0.0625 + 0.015625 + 0.0078)
            = (1/3)(2.398) ≈ 0.8
       P(1) = 1 - P(0) = 0.2

iii) The code is a fixed-length code, so the average length is L = 3 bits/symbol.
     Efficiency of the code:
       η = [H(X)/L] × 100 = (1.984/3) × 100 ≈ 66.1%

iv)  Shannon-Fano coding:

       Symbol   Probability   Code       No. of bits
       A1       1/2           0          1
       A2       1/4           10         2
       A3       1/8           110        3
       A4       1/16          1110       4
       A5       1/32          11110      5
       A6       1/64          111110     6
       A7       1/128         1111110    7
       A8       1/128         1111111    7

v)   Average length of the new code:
       L_new = 1 × 1/2 + 2 × 1/4 + 3 × 1/8 + 4 × 1/16 + 5 × 1/32 + 6 × 1/64 + 7 × 1/128
               + 7 × 1/128
             ≈ 1.984 bits/symbol
     New efficiency:
       η_new = [H(X)/L_new] × 100 = 100%
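The figures of Q.12 can be reproduced with the short illustrative sketch below (the Shannon-Fano
lengths 1, 2, ..., 7, 7 are those listed in part (iv)).

    from math import log2

    # Q.12: dyadic source, fixed 3-bit code vs. Shannon-Fano lengths
    probs = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/128]
    codes = ['000', '001', '010', '011', '100', '101', '110', '111']

    H = sum(p * log2(1 / p) for p in probs)                        # ~1.98 bits/symbol
    P0 = sum(p * c.count('0') for p, c in zip(probs, codes)) / 3   # ~0.8
    L_sf = sum(p * l for p, l in zip(probs, [1, 2, 3, 4, 5, 6, 7, 7]))
    print(H, P0, 1 - P0, 100 * H / 3, 100 * H / L_sf)              # efficiencies ~66% and 100%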
13. a) Explain the meaning of 'information' from the information theory point of view.
       (IES-EC-01)(4M)
    b) Explain how it is measured. How can we quantify one unit of information? Explain.
       (IES-EC-01)(4M)
    c) A source emits 4 messages m1, m2, m3, m4 with probabilities of 1/8, 1/4, 1/2 and 1/8.
       Calculate the entropy of the information.
       (IES-EC-01)(4M)

Sol:
a) The amount of information about an event is closely related to its probability of occurrence.
   The measure of information is real valued, monotonic, and additive for events which are
   statistically independent. The measure of information associated with an event A occurring
   with probability PA is defined as
       I_A = log2 (1/PA)

b) The information content of a message is proportional to the logarithm of the reciprocal of the
   probability of the message. The basic quantum unit of information is called a binary digit,
   normally called a bit. In general, any one of n equiprobable messages contains log2 n bits of
   information, since the probability of occurrence of each one is Pi = 1/n and the information
   associated with each message is
       Ii = log2 n = log2 (1/Pi) = -log2 Pi
   For r-ary digits the information is measured in r-ary units:
       I = log2 (1/P) bits = log_r (1/P) r-ary units,
   so that 1 r-ary unit = log2 r bits.

c) H(x) = Σi Pi log2 (1/Pi)
        = (1/8) log2 8 + (1/4) log2 4 + (1/2) log2 2 + (1/8) log2 8
        = 3/8 + 2/4 + 1/2 + 3/8 = 1.75 bits/symbol

14. a) Define and explain the terms: self information, entropy and mutual information.
       (IES-EC-95)
    b) A code is composed of dots and dashes. Assume that the dash is 3 times as long as the dot
       and has one-third the probability of occurrence. Calculate:
       i)   the information in a dot and that in a dash;
       ii)  the average information in the dot-dash code;
       iii) the average rate of information transmission, assuming that a dot lasts 10 msec and
            the same time interval is allowed between symbols.
Sol:
a) Self information: same as 13(a).

   Entropy: Entropy is a measure of the uncertainty in a random variable. It is defined as the
   average information content per source symbol. The entropy H(X) depends only on the
   probabilities of the symbols of the source, and it provides a quantitative measure of the
   degree of randomness of the system. The entropy of a source is bounded as
       0 ≤ H(X) ≤ log2 N
   H = 0 corresponds to no uncertainty; H = log2 N corresponds to maximum uncertainty, which
   occurs if and only if Pk = 1/N for every symbol.

   The entropy of a binary memoryless source whose symbols are statistically independent is
       H = -p log2 p - (1-p) log2 (1-p)
   with Hmax = 1 when p1 = p0 = 1/2, i.e. when the symbols '1' and '0' are equiprobable.
   (Figure: the binary entropy function H versus p rises from 0 at p = 0 to a maximum of 1.0 at
   p = 0.5 and falls back to 0 at p = 1.)

   Mutual information: same as 02(a).

b) Given that a dash is 3 times as long as a dot:
       t_dash = 3 t_dot, with t_dot = 10 msec and t_space = 10 msec, so t_dash = 30 msec.
   Taking P(dash) = 1/3 and P(dot) = 1 - 1/3 = 2/3:

   i)  Information in a dot  = log2 (3/2) = 0.584 bits
       Information in a dash = log2 3 = 1.58 bits

   ii) Average information:
       H(x) = -[P(dot) log2 P(dot) + P(dash) log2 P(dash)] = (2/3) log2 (3/2) + (1/3) log2 3
            = 0.915 bits/message

   iii) Average rate of information transmission R = r H(x).
        Average time per symbol:
            Ts = P(dot) t_dot + P(dash) t_dash + t_space = (2/3)(10 ms) + (1/3)(30 ms) + 10 ms
               = 26.67 ms
        Message rate:
            r = 1/Ts = 1/(26.67 × 10^-3) = 37.5 messages/sec
        R = r H(x) = 37.5 × 0.915 = 34.3 bps

15. For a binary symmetric channel, the error probability Pe(0) = Pe(1) = p and errors are
    statistically independent. Show that the channel capacity is
    C = s[1 + p log p + (1-p) log (1-p)], where s is the signaling speed.
    (IES-EC-95)

Sol: Channel capacity:
The channel capacity of a binary symmetric channel can be derived as follows. Let
    P(x1) = α, P(x2) = 1 - α
and let the transition matrix be

                     y1      y2
    P(Y/X) = x1 [   1-p      p   ]
             x2 [    p      1-p  ]

(Channel diagram: x1 → y1 and x2 → y2 with probability 1-p; x1 → y2 and x2 → y1 with
probability p.)

The joint probability matrix is
    P(X, Y) = [ P(x1, y1)  P(x1, y2) ] = [ α(1-p)      αp       ]
              [ P(x2, y1)  P(x2, y2) ]   [ (1-α)p    (1-α)(1-p) ]

    H(Y/X) = -[ P(x1, y1) log2 P(y1/x1) + P(x1, y2) log2 P(y2/x1)
                + P(x2, y1) log2 P(y1/x2) + P(x2, y2) log2 P(y2/x2) ]
           = -[ α(1-p) log2 (1-p) + αp log2 p + (1-α)p log2 p + (1-α)(1-p) log2 (1-p) ]
           = -[ p log2 p + (1-p) log2 (1-p) ]

Hence
    I(X, Y) = H(Y) - H(Y/X) = H(Y) + p log2 p + (1-p) log2 (1-p)

This is maximum when H(Y) is maximum. Since the channel output is binary, H(Y) is maximum when each
output has a probability of 1/2, which is achieved for equally likely inputs; then H(Y) = log2 2 = 1.
Hence the channel capacity is
    C = Max I(X, Y) = 1 + p log2 p + (1-p) log2 (1-p)
When the signaling speed s is introduced, the channel capacity becomes
    Cs = s Max I(X, Y) = s[1 + p log2 p + (1-p) log2 (1-p)]
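The capacity expression just derived is easy to evaluate numerically; the illustrative sketch below
(function name chosen here, not from the book) uses the figures of Q.01(b) as a sample input.

    from math import log2

    def bsc_capacity(p, s=1.0):
        """C = s * [1 + p*log2(p) + (1 - p)*log2(1 - p)] for a BSC with crossover probability p."""
        if p in (0.0, 1.0):
            return s
        return s * (1 + p * log2(p) + (1 - p) * log2(1 - p))

    print(bsc_capacity(1/16, s=1e4))   # ~6630 bits/sec, consistent with Q.01(b)
    print(bsc_capacity(0.5))           # 0.0: a BSC with p = 1/2 carries no information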
16. a) When the raw binary bits generated by the source can be transmitted in a channel, why is
       source coding done, which adds to the complexity of the transmission work?
       (IES-EC-01)(4M)
    b) Obtain the Shannon-Fano code for the source information consisting of 5 messages m1, m2,
       m3, m4 and m5 with probabilities of 1/16, 1/4, 1/8, 1/2, 1/16.
       (IES-EC-01)(4M)

Sol:
a) Source coding is an efficient way of representing the output of a source. Consider that there
   are M = 2^N messages, each message coded into N bits. If the messages are equally likely, the
   average information per message is H = N bits; with N bits/message, the average information
   carried by an individual bit is H/N = 1 bit. If the messages are not equally likely, then
   H < N, and each bit carries less than 1 bit of information.

   The efficiency can be improved by using a code in which not all messages are encoded into the
   same number of bits: the more likely a message is, the fewer the number of bits that should be
   used in its code word. Thus, source coding is the way of transmitting the output of a source
   with fewer bits (on average) without any information loss.

b) Shannon-Fano coding:

       Symbol   Probability   Code word   No. of bits
       m4       1/2           1           1
       m2       1/4           01          2
       m3       1/8           001         3
       m1       1/16          0001        4
       m5       1/16          0000        4

17. Prove that an (n, k) linear block code of minimum distance dmin can correct up to t errors if
    and only if t ≤ (1/2)(dmin - 1).
    (IES-EC-01)(8M)

Sol: In block codes, a block of k data digits is encoded by a code word of n digits (n > k), i.e.
k data digits are accumulated and then encoded into an n-digit code word such that the number of
check digits is m = n - k. The code efficiency is k/n; such a code is known as an (n, k) code.

The minimum distance dmin of a linear block code C is defined as the smallest Hamming distance
between any pair of code words in C; equivalently, it is the smallest Hamming weight of the
nonzero code words in C. The minimum distance dmin determines the error detection and correction
capabilities of the code:
  - a linear code C of minimum distance dmin can detect up to t errors if and only if
    dmin ≥ t + 1;
  - a linear code C of minimum distance dmin can correct up to t errors if and only if
    dmin ≥ 2t + 1.
In the figure, two Hamming spheres, each of radius t, are constructed around the points that
represent the code words ci and cj. Figure (a) depicts the case where the two spheres are disjoint,
that is d(ci, cj) ≥ 2t + 1. For this case, if the code word ci is transmitted and the received word
r satisfies d(ci, r) ≤ t, the decoder will choose ci, since it is the code word closest to the
received word r.

Figure (b) depicts the case where the two spheres intersect, that is d(ci, cj) < 2t. If the code
word ci is transmitted, there exists a received word r such that d(ci, r) ≤ t, yet r is as close to
cj as it is to ci; the decoder may then choose cj, which is incorrect.

(Figure: (a) disjoint spheres of radius t around ci and cj, with d(ci, cj) ≥ 2t + 1;
 (b) intersecting spheres with r lying between ci and cj, with d(ci, cj) < 2t.)

Hence the code can correct up to t errors if and only if dmin ≥ 2t + 1, i.e. t ≤ (1/2)(dmin - 1).

18. Consider a source with three messages having symbol probabilities 0.5, 0.4 and 0.1.
    i)  Obtain the Shannon-Fano code and calculate its efficiency.
        (IES-EC-03)(4M)
    ii) Repeat (i) for the second-order extension code and determine its efficiency.
        (IES-EC-03)(4M)

Sol:
i) Shannon-Fano coding:

       Symbol   Probability   Code word
       x1       0.5           1
       x2       0.4           01
       x3       0.1           00

   Entropy:
       H(x) = Σi pi log2 (1/pi) = 0.5 log2 (1/0.5) + 0.4 log2 (1/0.4) + 0.1 log2 (1/0.1)
            = 1.360 bits/symbol
   Average length:
       L = Σi pi xi = 0.5 × 1 + 0.4 × 2 + 0.1 × 2 = 1.5 bits/symbol
   Efficiency:
       η = [H(X)/L] × 100 = (1.360/1.5) × 100 = 90.67%

ii) Second-order extension of the Shannon-Fano code:

       Symbol      Probability   Code word   Length of code
       a1 = x1x1   0.25          11          2
       a2 = x1x2   0.20          10          2
       a3 = x2x1   0.20          011         3
       a4 = x2x2   0.16          010         3
       a5 = x1x3   0.05          0011        4
       a6 = x3x1   0.05          0010        4
       a7 = x2x3   0.04          0001        4
       a8 = x3x2   0.04          00001       5
       a9 = x3x3   0.01          00000       5

   H(x^2) = 0.25 log2 (1/0.25) + 2 × 0.20 log2 (1/0.20) + 2 × 0.05 log2 (1/0.05)
            + 0.16 log2 (1/0.16) + 2 × 0.04 log2 (1/0.04) + 0.01 log2 (1/0.01)
          = 2.72 bits/pair
   (equivalently, H(x^2) = 2 H(x) = 2 × 1.360 = 2.72)

   New average length:
       L_new = 2 × 0.25 + 2 × 0.20 + 3 × 0.20 + 3 × 0.16 + 4 × 0.05 + 4 × 0.05 + 4 × 0.04
               + 5 × 0.04 + 5 × 0.01
             = 2.79 bits/pair

   Efficiency:
       η' = [H(x^2)/L_new] × 100 = (2.72/2.79) × 100 ≈ 97.5%

19. i)   An analog signal has a 4 kHz bandwidth. The signal is sampled at 2.5 times the Nyquist
         rate. Each sample is quantized into one of 256 equally likely levels. The successive
         samples are statistically independent. What is the information rate of this source?
    ii)  Can the output of this source be transmitted without errors over a Gaussian channel with
         a bandwidth of 50 kHz and SNR equal to 23 dB?
    iii) What will be the bandwidth requirement of an analog channel for transmitting the output
         of the source without errors if the SNR is 10 dB?
    (IES-EC-03)(5+5+5M)
Sol:
i)  Bandwidth of the analog signal fm = 4 kHz.
    Nyquist rate fs(min) = 2 fm = 8 kHz.
    Sampling rate r = 2.5 × 8k = 20k samples/sec.
    L = 256 levels, so n = log2 L = 8 bits/sample.
    Information rate R = n × r = 8 × 20k = 160 kbps.

ii) Channel capacity C = B log2 (1 + S/N).
    Since 10 log10 (S/N) = 23 dB, S/N = 10^2.3.
       C = 50 × 10^3 × log2 (1 + 10^2.3) ≈ 382 kbps
    Since C > R, the output of this source can be transmitted without errors over the Gaussian
    channel.

iii) 10 dB = 10 log10 (S/N), so S/N = 10.
     For error-free transmission we need C ≥ R, with the minimum capacity Cmin = Rb:
        160 × 10^3 = B log2 (1 + 10)
        B = 160k / log2 11 ≈ 46.25 kHz
     For error-free transmission over the analog channel, the bandwidth required is greater than
     or equal to 46.25 kHz.
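The three results of Q.19 follow directly from the formulas used above; the short illustrative
sketch below recomputes them (variable names chosen here).

    from math import log2

    # Q.19: source rate vs. Gaussian channel capacity
    fm = 4e3
    r = 2.5 * (2 * fm)                 # samples/sec (2.5 times the Nyquist rate)
    R = r * log2(256)                  # 160 kbps source information rate

    C = 50e3 * log2(1 + 10 ** 2.3)     # 50 kHz, 23 dB SNR -> ~382 kbps > R
    B_needed = R / log2(1 + 10)        # bandwidth needed at 10 dB SNR -> ~46.25 kHz
    print(R, C, C > R, B_needed)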
20. A source emits seven symbols x1, x2, ..., x7 with probabilities 0.35, 0.3, 0.2, 0.1, 0.04,
    0.005, 0.005 respectively. Give the Huffman coding for these symbols. Calculate the average
    bits of information and the average binary digits of information per symbol.
    (IES-EC-07)(10M)

Sol: Huffman coding (combining the two least probable symbols at each stage) gives:

       Xi    P(xi)    Code
       X1    0.35     1
       X2    0.30     01
       X3    0.20     000
       X4    0.10     0010
       X5    0.04     00110
       X6    0.005    001110
       X7    0.005    001111

    Average bits of information per symbol:
       H(x) = Σi Pi log2 (1/Pi)
            = 0.35 log2 (1/0.35) + 0.3 log2 (1/0.3) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1)
              + 0.04 log2 (1/0.04) + 2 × 0.005 log2 (1/0.005)
            ≈ 2.11 bits/symbol

    Average binary digits of information per symbol = Σ (probability × number of code digits)
            = 0.35×1 + 0.30×2 + 0.20×3 + 0.10×4 + 0.04×5 + 0.005×6 + 0.005×6
            = 0.35 + 0.60 + 0.60 + 0.40 + 0.2 + 0.03 + 0.03 = 2.21 binary digits/symbol

21. In a binary PCM, '0' occurs with probability 1/4 and '1' occurs with probability 3/4.
    Calculate the amount of information carried by each bit. Comment on the result obtained.
    (IES-EC-10)(8M)

Sol: Probability of '0' = 1/4; probability of '1' = 3/4.

     Amount of information of '0' = log2 (1/p) = log2 4 = 2 bits.
     Amount of information of '1' = log2 (1/p) = log2 (4/3) = 0.415 bits.

     Since I ∝ log2 (1/p), the amount of information of a zero is more than that of a one: the
     amount of information is inversely related to the probability of occurrence. As the
     probability P decreases from 1 to 0, Ik increases monotonically, going from 0 to infinity. A
     greater amount of information is conveyed when the receiver correctly identifies a less
     likely message.

22. Four source messages are probable to appear as
       m1 = 1/2, m2 = 1/4, m3 = 1/8, m4 = 1/8
    Obtain its Huffman coding and determine the coding efficiency.
    (IES-EC-11)(8M)

Sol: Huffman coding:

       Symbol   Probability   Code word
       m1       1/2           0
       m2       1/4           10
       m3       1/8           110
       m4       1/8           111

     Entropy:
       H(X) = Σi Pi log2 (1/Pi) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/8) log2 8
            = 1.75 bits/symbol
     Average length:
       L = (1/2) × 1 + (1/4) × 2 + (1/8) × 3 + (1/8) × 3 = 1.75 bits/symbol
     Efficiency:
       η = [H(X)/L] × 100 = 100%
23. If I(x1) is the information carried by message x1 and I(x2) is the information carried by
    message x2, then prove that the amount of information carried compositely due to x1 and x2 is
    I(x1, x2) = I(x1) + I(x2).
    (IES-EC-12)(7M)

Sol: The measure of information associated with a message x occurring with probability Px is
defined as I(x) = log2 (1/Px). For two statistically independent events,
P(x1, x2) = P(x1) P(x2). Therefore
    I(x1, x2) = log [1/P(x1, x2)]                (by definition)
              = log [1/(P(x1) P(x2))]
              = log [1/P(x1)] + log [1/P(x2)]
    I(x1, x2) = I(x1) + I(x2)

24. a) Explain Hamming codes.
    b) How many Hamming bits are required for a block length of 20 message bits to correct one
       bit error?
    (IES-EC-12)(10M)

Sol:
a) Hamming codes are error-correcting codes. Error-correcting codes have been classified into
   block codes and convolutional codes. To generate an (n, k) block code, the channel encoder
   accepts information in successive k-bit blocks; for each block, it adds (n - k) redundant bits
   that are related to the k message bits, thereby producing an overall encoded block of n bits,
   where n > k. The n-bit block is called a code word, and n is the block length of the code.

   (Code word structure: parity bits b0, b1, ..., b(n-k-1) followed by message bits
   m0, m1, ..., m(k-1).)

   Rb = (n/k) Rs, where Rs is the bit rate of the information generated by the source. The
   dimensionless ratio r = k/n is called the code rate, where 0 < r < 1. The bit rate Rb coming
   out of the encoder is called the channel data rate.

   Hamming codes are single-error-correcting binary perfect codes. A binary code for which the
   Hamming bound is satisfied with the equality sign is called a "perfect code", and the
   single-error-correcting capability of Hamming codes is also confirmed by their minimum
   distance dmin.

   An (n, k) linear block code can correct up to t errors per code word provided that n and k
   satisfy the Hamming bound
       2^(n-k) ≥ Σ_{i=0}^{t} (n choose i)
   An (n, k) linear block code of minimum distance dmin can correct up to t errors if and only if
   t ≤ (1/2)(dmin - 1).

   The family of (n, k) Hamming codes has the following parameters:
       Block length:             n = 2^m - 1
       Number of message bits:   k = 2^m - m - 1
       Number of parity bits:    n - k = m
b) Given k = 20, t = 1. The Hamming bound is
       Σ_{i=0}^{t} (n choose i) ≤ 2^(n-k)
   For t = 1:
       Σ_{i=0}^{1} (n choose i) = (n choose 0) + (n choose 1) = 1 + n
   so we need
       2^(n-20) ≥ 1 + n
   Given k = 20, n must be greater than 20. By trial and error, n = 24 gives 2^4 = 16 < 25, while
   n = 25 gives 2^5 = 32 ≥ 26, so the smallest block length is n = 25.
       n - k = m (number of parity bits)  →  m = 25 - 20 = 5
   Five Hamming (parity) bits are required.
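The trial-and-error search for n can be automated; the snippet below is a minimal illustrative
sketch of that search.

    from math import comb

    # Q.24(b): smallest n with 2^(n-k) >= sum_{i<=t} C(n, i) for k = 20, t = 1
    k, t = 20, 1
    n = k + 1
    while 2 ** (n - k) < sum(comb(n, i) for i in range(t + 1)):
        n += 1
    print(n, n - k)   # n = 25, so 5 parity (Hamming) bits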
25. A channel has inputs X = {x1, x2} with x1 = 00 and x2 = 11, and outputs
    Y = {y1, y2, y3, y4} = {00, 01, 10, 11}.
    (Channel diagram: x1 = 00 goes to y1 = 00 with probability q and to y2 = 01, y3 = 10,
    y4 = 11 each with probability p; x2 = 11 goes to y4 = 11 with probability q and to
    y1, y2, y3 each with probability p.)
    Calculate the rate of joint information transmission for the above channel. Assume
    p(x1) = p(x2) = 1/2, where X = {x1, x2} is the set of input symbols, q is the probability of
    correct reception, p is the probability of incorrect reception, and Y = {y1, y2, y3, y4} is
    the set of received symbols.
    (IES-EC-13)(10M)

Sol:
    P(x1) = P(x2) = 1/2, so P(X) = [1/2  1/2] and
       H(X) = log2 m = log2 2 = 1 bit/message

                      y1   y2   y3   y4
       P(Y/X) = x1 [  q    p    p    p  ]
                x2 [  p    p    p    q  ]

       H(Y/X) = Σi Σj p(xi, yj) log2 [1/p(yj/xi)] = Σi Σj p(xi) p(yj/xi) log2 [1/p(yj/xi)]
              = -[q log2 q + 3p log2 p]

       P(Y) = P(X) P(Y/X) = [ (q+p)/2   p   p   (p+q)/2 ]

       H(Y) = -[ (p+q) log2 ((p+q)/2) + 2p log2 p ]

       I(X, Y) = H(Y) - H(Y/X)
               = -[ (p+q) log2 ((p+q)/2) + 2p log2 p ] + [ q log2 q + 3p log2 p ]
               = p log2 p + q log2 q - (p+q) log2 (p+q) + (p+q)

    Taking the total probability p + q = 1 (as stated in the problem):
       I(X, Y) = 1 + p log2 p + q log2 q

26. Assume a binary symmetric channel with probability of incorrect reception p = 1/4 and
    probability of correct reception q = 3/4. Let all the transmitted symbols {x1, x2} be equally
    probable, i.e. p(0) = p(1) = 1/2. Calculate the improvements in rate of transmission by 2 and
    3 repetitions of the input.
    (IES-EC-13)(10M)

Sol: Repetition of signals - additivity of mutual information:
To improve the channel efficiency (equivalently, to reduce the error rate), a useful technique is
to repeat the signals {X} at the channel input and to detect only the repeated signals as {Y}.
Consider the BSC with twice-repeated inputs {X} = {00, 11}, where only the outputs {00, 11} are
accepted and the outputs {01, 10} are discarded (erased), as in a BEC:

    X1 = 00 (p1 = 1/2) → y1 = 00 with probability q^2, → y3 = 01 and y4 = 10 (erasures) with
                         probability pq each, → y2 = 11 with probability p^2
    X2 = 11 (p2 = 1/2) → y2 = 11 with probability q^2, → y3 and y4 with probability pq each,
                         → y1 = 00 with probability p^2

The equivalent channel matrix is then

                       y1     y2     y3    y4
    P(Y/X) = x1 [     q^2    p^2    pq    pq  ]
             x2 [     p^2    q^2    pq    pq  ]

    p(y1) = p(y2) = (p^2 + q^2)/2;   p(y3) = p(y4) = pq

Therefore
    H(Y)2   = -(p^2 + q^2) log2 [(p^2 + q^2)/2] - 2pq log2 (pq)
    H(Y/X)2 = -q^2 log2 q^2 - p^2 log2 p^2 - 2pq log2 (pq)

    I(X, Y)2 = H(Y)2 - H(Y/X)2 = (p^2 + q^2) [ 1 - H( p^2/(p^2 + q^2) ) ]

where H(.) denotes the binary entropy function. It is seen that the channel is now equivalent to a
BSC with error probability p1 = p^2/(p^2 + q^2) and a normalizing factor (p^2 + q^2), the
probability of observing either 00 or 11 at the output. Since p1 < p, I(X, Y)2 is now greater than
the original value [1 - H(p)] of the BSC.

It may be shown that with three repetitions of the input, the mutual information is given by
    I(X, Y)3 = (p^3 + q^3) [ 1 - H( p^3/(p^3 + q^3) ) ] + 3pq [ 1 - H(p) ]
Given the BSC with p = 1/4, q = 3/4 and p(0) = p(1) = 0.5:

For the BSC without repetitions,
    I(X, Y) = 1 + p log2 p + q log2 q = 1 + (1/4) log2 (1/4) + (3/4) log2 (3/4)
            ≈ 0.189 bits/symbol

With 2 repetitions, the equivalent error rate is p1 = p^2/(p^2 + q^2) = 1/10, and
    I(X, Y)2 = (p^2 + q^2)[1 - H(p1)]
             = (5/8)[1 + (1/10) log2 (1/10) + (9/10) log2 (9/10)]
             = (5/8)(0.531) ≈ 0.33 bits

With 3 repetitions, the equivalent error rate is p3 = p^3/(p^3 + q^3) = 1/28, q3 = 27/28, and
    I(X, Y)3 = (p^3 + q^3)[1 - H(p3)] + 3pq[1 - H(p)]
             = (7/16)[1 - (1/28) log2 28 - (27/28) log2 (28/27)] + (9/16)(0.189)
             ≈ 0.34 + 0.11 ≈ 0.45 bits/symbol

It is thus seen that the mutual information improves considerably with repetitions.
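The three mutual-information values above can be reproduced with the small illustrative sketch
below, which simply evaluates the formulas derived in this solution.

    from math import log2

    def Hb(p):
        """Binary entropy function."""
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    p, q = 1/4, 3/4
    I1 = 1 - Hb(p)                                                  # no repetition
    I2 = (p**2 + q**2) * (1 - Hb(p**2 / (p**2 + q**2)))             # two repetitions
    I3 = (p**3 + q**3) * (1 - Hb(p**3 / (p**3 + q**3))) + 3 * p * q * (1 - Hb(p))   # three
    print(I1, I2, I3)   # ~0.19, ~0.33, ~0.45 bits per input symbol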
27. A continuous random variable X is constrained to a peak magnitude M. Show that
    (i)  the differential entropy of X is maximum when it is uniformly distributed;
    (ii) the maximum differential entropy of X is log2 2M.
    (IES-EC-14)(10M)

Sol:
(i) The entropy of a continuous random variable X is
        H(x) = -∫_{-∞}^{∞} p(x) log2 p(x) dx                                  ..... (1)
    The entropy is to be maximized under the peak-signal constraint |x| ≤ M, i.e.
        ∫_{-M}^{M} p(x) dx = 1                                                ..... (2)

    The problem of maximizing H(x) = -∫ p(x) log2 p(x) dx is a particular case of the so-called
    isoperimetric problem of the calculus of variations. The problem is to determine the optimum
    p(x) so that the integral I = ∫_a^b F(x, p) dx yields a maximum (or minimum) subject to the
    conditions (i.e. the given constraints)
        ∫_a^b φ1(x, p) dx = k1,  ∫_a^b φ2(x, p) dx = k2,  ...,  ∫_a^b φn(x, p) dx = kn   ..... (3)
    where k1, k2, ..., kn are preassigned constants. The form of p(x) which satisfies the above
    constraints and makes H(x) maximum (or minimum) is obtained by solving the equation
        ∂F/∂p + λ1 ∂φ1/∂p + λ2 ∂φ2/∂p + ... + λn ∂φn/∂p = 0                    ..... (4)
    The quantities λ1, λ2, ..., λn are the Lagrangian (undetermined) multipliers; they are
    determined by substituting the value of p(x) into equation (3).

    Here the signal is peak limited to ±M, so
        H(x) = -∫_{-M}^{M} p(x) log2 p(x) dx
    subject to the single constraint (2). Thus p = p(x), F(x, p) = -p log p and φ1(x, p) = p, so
        ∂F/∂p = -(1 + log p)                                                   ..... (5)
        ∂φ1/∂p = 1                                                             ..... (6)
    Substituting (5) and (6) in (4) (equating to zero for maximum differential entropy):
        -(1 + log p) + λ1 = 0    →    p(x) = exp(λ1 - 1)
    The value of λ1 is obtained from
        ∫_{-M}^{M} p(x) dx = ∫_{-M}^{M} exp(λ1 - 1) dx = 1
        exp(λ1 - 1) × 2M = 1    →    exp(λ1 - 1) = 1/2M
    which gives a rectangular (uniform) distribution, p(x) = 1/2M for -M ≤ x ≤ M.

(ii) Maximum differential entropy:
        Hmax = -∫_{-M}^{M} p(x) log2 p(x) dx = -∫_{-M}^{M} (1/2M) log2 (1/2M) dx
             = log2 2M bits per sample
     Therefore H(x)max = log2 2M bits/sample.

28. (i)  For the channel and message probabilities given in the figure below, determine the best
         decisions about the transmitted message for each possible received response.
    (ii) With decisions made as in part (i), calculate the probability of error.
    (Paper - I)(IES-EC-14)(10M)

    (Figure: messages M0, M1, M2 with P(M0) = 0.3, P(M1) = 0.5, P(M2) = 0.2 are received as
    responses r0, r1, r2 with transition probabilities
        P(r0/M0) = 0.6, P(r1/M0) = 0.3, P(r2/M0) = 0.1;
        P(r0/M1) = 0.1, P(r1/M1) = 0.5, P(r2/M1) = 0.4;
        P(r0/M2) = 0.1, P(r1/M2) = 0.1, P(r2/M2) = 0.8.)
Sol:
(i) P(M0) = 0.3, P(M1) = 0.5, P(M2) = 0.2
    P(r0/M0) = 0.6, P(r0/M1) = 0.1, P(r0/M2) = 0.1
    P(r1/M0) = 0.3, P(r1/M1) = 0.5, P(r1/M2) = 0.1
    P(r2/M0) = 0.1, P(r2/M1) = 0.4, P(r2/M2) = 0.8

    In general, if there are k messages M1, M2, ..., Mk and j received responses r1, r2, ..., rj,
    the optimum receiver is designed according to the rule: if rj is received, choose Mk if
    P(Mk/rj) > P(Mi/rj) for all i ≠ k.

    (a) P(r0/M0) P(M0) > P(r0/M1) P(M1) > P(r0/M2) P(M2),
        i.e. (0.6)(0.3) > (0.1)(0.5) > (0.1)(0.2)
        → we select M0 whenever r0 is received.

    (b) P(r1/M1) P(M1) > P(r1/M0) P(M0) > P(r1/M2) P(M2),
        i.e. (0.5)(0.5) > (0.3)(0.3) > (0.1)(0.2)
        → we select M1 whenever r1 is received.

    (c) P(r2/M1) P(M1) > P(r2/M2) P(M2) > P(r2/M0) P(M0),
        i.e. (0.4)(0.5) > (0.8)(0.2) > (0.1)(0.3)
        → we select M1 whenever r2 is received.

(ii) Based on the decisions made in (i), the probability of error is Pe = 1 - Pc, where
        Pc = P(r0/M0) P(M0) + P(r1/M1) P(M1) + P(r2/M1) P(M1)
           = 0.6 × 0.3 + 0.5 × 0.5 + 0.4 × 0.5
           = 0.63
     Therefore Pe = 0.37.
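The decision rule of Q.28 can be applied mechanically; the illustrative sketch below picks, for
each received response, the message that maximises P(r/M) P(M) and accumulates the probability of
a correct decision.

    # Q.28: MAP decisions and probability of error
    priors = {'M0': 0.3, 'M1': 0.5, 'M2': 0.2}
    P_r_given_M = {'M0': [0.6, 0.3, 0.1],     # P(r0|M), P(r1|M), P(r2|M)
                   'M1': [0.1, 0.5, 0.4],
                   'M2': [0.1, 0.1, 0.8]}

    Pc = 0.0
    for j in range(3):
        best = max(priors, key=lambda M: P_r_given_M[M][j] * priors[M])
        Pc += P_r_given_M[best][j] * priors[best]
        print('r%d -> choose %s' % (j, best))
    print('Pe =', round(1 - Pc, 2))           # decisions M0, M1, M1; Pe = 0.37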
29. 40% of the population of a town are voters, 50% are educated and 20% are educated voters. A
    person is chosen at random.
    (i)   If he is educated, what is the probability that he is a voter?
    (ii)  If he is a voter, what is the probability that he is not educated?
    (iii) What is the probability that he is neither a voter nor educated?
    (Paper - I)(IES-EC-14)(10M)

Sol: Probability of a person being a voter: P(V) = 0.4
     Probability of a person being educated: P(E) = 0.5
     Probability of an educated voter: P(E ∩ V) = 0.20

    (i)  P(V/E) = probability of a person being a voter given that he is educated
               = P(E ∩ V)/P(E) = 0.2/0.5 = 0.4

    (ii) P(E'/V) = probability of a person being not educated given that he is a voter
               = P(E' ∩ V)/P(V) = [P(V) - P(E ∩ V)]/P(V) = (0.4 - 0.2)/0.4 = 0.5

    (iii) P(E' ∩ V') = probability that a person is neither a voter nor educated
               = 1 - P(E ∪ V) = 1 - [P(E) + P(V) - P(E ∩ V)] = 1 - [0.4 + 0.5 - 0.2]
               = 1 - 0.7 = 0.3

30. Three students A, B and C are given a problem in Maths. The probabilities of their solving the
    problem are 3/4, 2/3 and 1/4 respectively. Determine the probability that the problem is
    solved if all of them try to solve the problem.
    (IES-EC-15)(5M)

Sol: P(A) = 3/4, P(B) = 2/3, P(C) = 1/4.
     Let P(s) be the probability that the problem is solved and P(s') the probability that it is
     not solved. Then
        P(s') = [1 - P(A)][1 - P(B)][1 - P(C)] = (1 - 3/4)(1 - 2/3)(1 - 1/4)
              = (1/4)(1/3)(3/4) = 1/16
        P(s) = 1 - P(s') = 1 - 1/16 = 15/16
     The probability that the problem is solved is 15/16.

31. A WSS random process X(t) is applied to the input of an LTI system with impulse response
    h(t) = 3e^(-2t) u(t). Find the mean value of the output Y(t) of the system, if E[X(t)] = 2.
    Here E[.] denotes the expectation operator.
    (IES-EC-16)(5M)

Sol: For an LTI system, the mean value of the output is
        m_y = H(0) E[x(t)] = H(0) m_x
     With h(t) = 3e^(-2t) u(t),
        H(ω) = 3/(2 + jω)   →   H(0) = 3/2
        m_y = H(0) m_x = (3/2) × 2 = 3
     The mean value of the output is 3.

32. The transfer function H(ω) of a communication channel is given by
        H(ω) = (1 + k cos Tω) e^(-jωt_d)   for |ω| ≤ 2πB,   and 0 for |ω| > 2πB.
    A pulse g(t) band-limited to B Hz is applied at the input of this channel. Find the output
    y(t). Consider this to be an LTI system.
    (IES-EC-16)(5M)

Sol: In terms of f, the transfer function of the channel is
        H(f) = [1 + K cos(2πfT)] e^(-j2πf t_d)   for |f| ≤ B, and 0 elsewhere
     (magnitude response 1 + K cos(2πfT); linear phase response -2πf t_d over -B ≤ f ≤ B).

     Since g(t) is band limited to B Hz, y(t) = g(t) * h(t) and the spectrum of the output is
        Y(f) = G(f) H(f) = G(f) e^(-j2πf t_d) + K G(f) cos(2πfT) e^(-j2πf t_d)
     Using cos(2πfT) = (e^(j2πfT) + e^(-j2πfT))/2 and the time-shifting property
     g(t - t_d) ↔ e^(-j2πf t_d) G(f):
        y(t) = g(t - t_d) + (K/2) g(t - t_d - T) + (K/2) g(t - t_d + T)

33. A channel has the following channel matrix:
        P(Y|X) = [ 1-p   p    0  ]
                 [ 0     p   1-p ]
    (i)  Draw the channel diagram.
    (ii) If the source symbols/bits have equally likely probabilities, then compute the
         probabilities associated with the channel outputs for p = 0.2.
    (IES-EC-16)(10M)
Sol:
(i) The channel is a binary erasure channel: input x0 goes to output y0 with probability 1-p and
    to the erasure output y1 with probability p; input x1 goes to output y2 with probability 1-p
    and to y1 with probability p.

(ii) The source symbols/bits have equally likely probabilities, i.e. P(x0) = P(x1) = 1/2. With
     p = 0.2 (given),
        P(Y) = P(X) P(Y/X) = [1/2  1/2] [ 0.8  0.2  0 ; 0  0.2  0.8 ]
             = [ (1/2)(0.8)   (1/2)(0.2) + (1/2)(0.2)   (1/2)(0.8) ]
             = [ 0.4   0.2   0.4 ]
     The probabilities associated with the channel outputs are
        P(y0) = 0.4, P(y1) = 0.2, P(y2) = 0.4.

34. Explain source coding. A discrete message source emits seven symbols {m1, m2, m3, ..., m7}
    with probabilities {0.35, 0.3, 0.2, 0.1, 0.04, 0.005, 0.005} respectively. Give Huffman codes
    for these symbols and calculate the average bits of information and the average binary digits
    of information per symbol. Calculate the code efficiency.
    (IES-EC-17)(15M)

Sol: Source coding: Source encoding techniques assign bits to the symbols using either uniform
(fixed) length coding or non-uniform length coding. Non-uniform length coding can be implemented
using either the Shannon-Fano coding algorithm or the Huffman coding algorithm. The major
disadvantage of the Shannon-Fano coding technique is the ambiguity in selecting the intervals, so
Huffman coding is preferred; Huffman coding also has relatively higher coding efficiency.

Huffman coding (combining the two least probable symbols at each stage) gives:

       Probability   Code     No. of bits
       0.35          1        1
       0.3           01       2
       0.2           000      3
       0.1           0010     4
       0.04          00110    5
       0.005         001110   6
       0.005         001111   6
Average binary digits per symbol (average code word length):
    L = 0.35×1 + 0.3×2 + 0.2×3 + 0.1×4 + 0.04×5 + 0.005×6 + 0.005×6
      = 2.21 binary digits/symbol

Entropy (average bits of information):
    H(x) = -Σk Pk log2 Pk
         = -(0.35 log2 0.35 + 0.3 log2 0.3 + 0.2 log2 0.2 + 0.1 log2 0.1 + 0.04 log2 0.04
             + 0.005 log2 0.005 + 0.005 log2 0.005)
         = -(-0.530 - 0.521 - 0.464 - 0.332 - 0.185 - 0.0382 - 0.0382)
         = 2.1084 bits/symbol

% code efficiency:
    η = [H(x)/L] × 100 = (2.1084/2.21) × 100 ≈ 95.40%

35. Consider a discrete memoryless source whose alphabet consists of K equiprobable symbols.
    (A) Explain why the use of a fixed-length code for the representation of such a source is
        about as efficient as any code can be.
    (B) What conditions have to be satisfied by K and the codeword length for the coding
        efficiency to be 100 percent?
    (IES-EC-18)(10M)

Sol: Alphabet size = K, all symbols equiprobable.
(A) The main objective of coding is to determine the code lengths from the probabilities: if the
    probability of occurrence of a symbol is higher, its code length is shorter; if the
    probability of occurrence is lower, its code length is longer; and if the probabilities of
    occurrence are equal, the code lengths are equal.
    Coding efficiency η = H(x)/L. Since equal probabilities lead to equal code lengths, a
    fixed-length code gives H(X) = L and η ≈ 100%, so it is about as efficient as any code can be
    for such a source.

(B) H(X) = Σ_{i=0}^{K-1} Pi log2 (1/Pi). If the probabilities are equal, H(X) is maximum:
        H(X)max = log2 K
    For the coding efficiency to be 100 percent,
        η = H(X)/L = log2 K / L = 1
    so the codeword length must satisfy
        L = log2 K,  i.e.  K = 2^L
    (K must be an integer power of 2, with L the codeword length in bits).
