ESE 2019 Mains Electronics & Telecommunication Engineering Conventional Paper II Previous Conventional Questions With Solutions PDF
ACE
Engineering Publications
(A Sister Concern of ACE Engineering Academy, Hyderabad)
Hyderabad | Delhi | Bhopal | Pune | Bhubaneswar | Bengaluru | Lucknow | Patna | Chennai | Vijayawada | Visakhapatnam | Tirupati | Kolkata | Ahmedabad
ESE - 19
(MAINS)
Previous Conventional Questions with Solutions, Subject wise & Chapter wise
(1980 - 2018)
ACE is the leading institute for coaching in ESE, GATE & PSUs
H O: Sree Sindhi Guru Sangat Sabha Association, # 4-1-1236/1/A, King Koti, Abids, Hyderabad-500001.
Ph: 040-23234418 / 19 / 20 / 21, 040 - 24750437
Published at :
Authors :
Subject experts of ACE Engineering Academy, Hyderabad
While every effort has been made to avoid mistakes and omissions, the publishers accept no
responsibility for any damage or loss to any person on account of errors or omissions in this
publication. Mistakes, if any, may be brought to the notice of the publishers at the following
e-mail id, for correction in forthcoming editions.
Email : info@aceenggpublications.com
Printed at :
Karshak Art Printers,
Hyderabad.
Price : ₹. 380/-
ISBN : 978-1724241290
Foreword
UPSC Engineering Services in Electronics & Telecommunication Engineering
[Paper - II Conventional Questions with Solutions from 1980–2018]
(More than 38 years)
In UPSC Engineering Services, the Conventional Papers carry 54.5% weightage in the written test.
In Paper-II of Electronics & Telecommunication Engineering, six subjects are included, as follows:
01. Analog & Digital Communication Systems 02. Control Systems
03. Computer Organization & Architecture 04. Electromagnetics
05. Advanced Electronics Topics 06. Advanced Communication Topics
The following approach is advisable to secure maximum marks.
• The solution to any question should, in general, contain the following steps; each step carries
due weightage:
(a) the data given;
(b) the appropriate figure, if applicable, with parts labeled properly;
(c) the concept on which the problem is solved;
(d) the relevant formulae with standard notations and abbreviations. In case the paper
setter asks to solve the question from fundamentals, the necessary derivations must be
shown; otherwise one stands to lose 70% of the marks. If the question carries more marks
(10 to 20), a detailed analysis is compulsory to score high. If the question carries fewer
marks (< 8), the formulae with abbreviations and the concept may be sufficient. In every case,
the assumptions made should be stated clearly while solving the problem.
• The neatness and presentation of solutions will carry 5% of the marks.
• Note that all parts of a question shall be answered together.
• Using short sentences and brevity in expressing the content is appreciated.
• Practising numerical problems with a calculator is essential, rather than merely reading the
solutions.
* Try to understand and practise the solutions keeping the QCAB format in view.
The solutions are prepared with utmost care. In spite of this, there may be some typographical
mistakes or improper sequences. Students are requested to report any errors to
aceenggpublications@gmail.com. ACE Engineering Publications will be grateful in this regard.
Thanks to all the professors who co-operated in the preparation of this booklet. Thanks to the
Academic Assistants and Data Entry section in the design of this booklet.
With best wishes to all those who wish to go through the following pages.
(1980 – 2018)
MAIN INDEX
PART – A : Control Systems (109 – 221)
PART – B : Signals & Systems (222 – 255)
Conventional Questions with Solutions

H(X,Y) = −Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 [P(yj/xi) P(xi)]

From Bayes' theorem, P(yj/xi) = P(xi, yj)/P(xi), i.e. P(xi, yj) = P(xi) P(yj/xi).

For the given channel, P(x1) = 1/2, P(x2) = 1/2 and

P(Y/X):          Y = 0     Y = 1
      X = 0      15/16     1/16
      X = 1      1/16      15/16

H(Y/X) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi) P(yj/xi) log2 [1/P(yj/xi)]
       = (15/32) log2 (16/15) + (1/32) log2 16 + (1/32) log2 16 + (15/32) log2 (16/15)
       = 0.044 + 0.125 + 0.125 + 0.044 = 0.338 bits/symbol

P(Y) = P(X) · P(Y/X) = [0.5  0.5]

H(X) = Σ_{i=1}^{m} P(xi) log2 [1/P(xi)] = log2 m = log2 2 = 1 bit/symbol
H(Y) = Σ_{j=1}^{n} P(yj) log2 [1/P(yj)] = log2 2 = 1 bit/symbol

I(X,Y) = H(Y) − H(Y/X) = 1 − 0.338 = 0.662 bits/symbol

Information rate R = r · I(X,Y) = 10^4 × 0.662 = 6620 bps ≈ 6.62 kbps

02. a) Define Mutual Information I(x, y) and show that I(x, y) = H(x) − H(x/y).
    b) In a binary communication channel, p(xi = 0) = 0.4 and p(xi = 1) = 0.6. For the given
    noise matrix P(y/x) of the channel, calculate the average information I(x, y) conveyed per
    symbol. (IES-EC-81)

P(y/x):          y = 0     y = 1
      x = 0      0.99      0.01
      x = 1      0         1

Sol: a) Mutual information is defined as the amount of information transferred when Xi is
transmitted and Yj is received. It is represented as I(Xi, Yj). The mutual information is a
measure of the uncertainty about the channel input that is resolved by observing the channel
output, and vice versa.
ACE Engg. Publications Hyderabad|Delhi|Bhopal |Pune |Bhubaneswar |Bengaluru |Lucknow |Patna|Chennai |Vijayawada |Vizag | Tirupati |Kukatpally | Kolkata
:5: Basics of Information Theory
The quantity
H(X/Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 [1/P(xi/yj)]
is called a conditional entropy. It represents the amount of uncertainty about the channel
input after observing the channel output. The quantity H(X) represents the amount of
uncertainty about the channel input before observing the channel output.
The difference H(X) − H(X/Y) must therefore represent the amount of uncertainty about the
channel input that is resolved by observing the channel output. This quantity is called the
"Mutual Information" of the channel, denoted by I(x, y):
I(x, y) = H(x) − H(x/y)
Average mutual information:
I(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 [P(xi/yj)/P(xi)]
        = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 P(xi/yj) − Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 P(xi)
        = −H(x/y) + H(x)
∴ I(x, y) = H(x) − H(x/y)

b) Given P(xi = 0) = 0.4 and P(xi = 1) = 0.6, so P(X) = [0.4  0.6] and
P(Y/X) = [0.99  0.01]
         [0     1   ]
P(Y) = P(X) · P(Y/X) = [0.396  0.604]
H(Y) = −[0.396 log2 0.396 + 0.604 log2 0.604] = 0.529 + 0.439 = 0.9685 bits/symbol
H(Y/X) = −[0.396 log2 0.99 + 0.004 log2 0.01 + 0.6 log2 1]
       = 5.72 × 10^−3 + 0.0266 = 0.0323 bits/symbol
I(X, Y) = H(Y) − H(Y/X) = 0.9685 − 0.0323 = 0.936 bits/symbol

03. Show that the channel capacity of a noisy channel is C = B log2 (1 + S/N), where
B = bandwidth and S/N is the signal-to-noise ratio.
(IES-EC-82, 84, 87)(10M)

04. a) State the Hartley–Shannon theorem and explain it.
    b) A system has a bandwidth of 4.0 kHz and an S/N ratio of 28 dB at the input to the
    receiver. Calculate
    i) its information-carrying capacity;
    ii) the capacity of the channel if its bandwidth is doubled while the transmitted signal
    power remains constant.
(IES-EC-85)(12M)

05. a) State and explain the Hartley–Shannon theorem.
    b) Calculate the amount of information needed to open a lock whose combination consists of
    three numbers, each ranging from 0 to 1.
(IES-EC-90)(10M)

Sol: Shannon–Hartley Law:
The Hartley–Shannon law is so named in recognition of the early work by Hartley on the subject
and its rigorous derivation by Shannon. The law has two important implications:
1. It gives the upper limit for the rate of reliable information transmission over a Gaussian
channel.
2. For a specified channel capacity, it defines the way in which the transmission bandwidth BT
may be traded off for improved signal-to-noise ratio S/N, and vice versa.
Since it relates to the channel capacity, it is also called the channel capacity theorem, which
is usually stated for a discrete memoryless channel. The channel capacity is the maximum rate
of reliable information transmission over an additive white Gaussian noise channel.
Let C be the capacity of a discrete memoryless channel and let H be the entropy of a discrete
information source emitting r symbols/sec. The capacity theorem states that if rH ≤ C, then
there exists a coding scheme such that the output of the source can be transmitted over the
channel with an arbitrarily small probability of error. It is not possible to transmit messages
without error if rH > C. This gives essentially error-free transmission in the presence of
additive noise.
Shannon defined the channel capacity of a continuous channel having an average power limitation
and disturbed by additive white Gaussian noise. In a continuous channel, the input and output
signals are continuous functions of time.
Consider a channel of bandwidth B Hz; let x(t) be the input and y(t) the output.
S = average power of the transmitted signal x(t)
N = average power of the additive noise component at the receiver
S + N = average power of the received signal y(t)
Both x(t) and y(t) are assumed to have a PSD confined to the range −B ≤ f ≤ B and zero
elsewhere. The output signal is sampled at the Nyquist rate.
Let Y denote a sample of the received signal y(t). Since Y has a Gaussian distribution with
zero mean and variance equal to (S + N), the entropy of the received signal is
H(Y) = log2 √(2πe(S + N))
H(X) of a sample X of the transmitted signal differs from H(Y). The difference equals the
conditional entropy H(Y/X), which is a measure of the average uncertainty about the received
sample Y:
H(Y/X) = −∬ f_XY(x, y) log2 f_Y/X(y/x) dx dy
where the joint pdf is given by f_XY(x, y) = f_Y/X(y/x) f_X(x), so that
H(Y/X) = −∫ f_X(x) dx ∫ f_Y/X(y/x) log2 f_Y/X(y/x) dy
When the additive noise at the channel output is independent of the transmitted signal, the
conditional entropy does not depend on x or y except through the combination (Y − X); H(Y/X)
then equals the entropy of the additive noise. Since the noise is a Gaussian process of zero
mean and variance equal to N, the entropy of a noise sample is
H(Y/X) = log2 √(2πeN)
Mutual information:
I(X, Y) = H(Y) − H(Y/X) = log2 √(2πe(S + N)) − log2 √(2πeN) = (1/2) log2 (1 + S/N) bits/sample
Since samples are taken at the maximum rate of 2B samples/sec, for an AWGN channel the channel
capacity is
C = 2B · I(X, Y)
C = B log2 (1 + S/N) bits/sec
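Question 04(b) is not worked out in this extract. As a sketch of how the capacity formula applies to it, assuming white noise so that doubling the bandwidth doubles the noise power (N = ηB) while S stays fixed, halving S/N:

```python
from math import log2

B = 4000.0                 # bandwidth, Hz
snr_db = 28.0
snr = 10 ** (snr_db / 10)  # 28 dB -> power ratio of about 631

# i) C = B log2(1 + S/N)
C1 = B * log2(1 + snr)     # ~37.2 kbps

# ii) Doubling B doubles the noise power (N = eta*B) with S constant,
#     so the signal-to-noise ratio halves.
C2 = (2 * B) * log2(1 + snr / 2)   # ~66.4 kbps
```

Note that doubling the bandwidth does not double the capacity, because the SNR inside the logarithm drops at the same time.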
Average length: L̄ = Σ_{i=1}^{6} Pi xi, where xi is the length of the codeword for the i-th symbol.
08. A message source generates eight symbols m1, m2, …, m8 with probabilities 0.25, 0.03, 0.19,
0.16, 0.11, 0.14, 0.08 and 0.04 respectively. Give the Huffman codes for these symbols, and
calculate the entropy of the source and the average number of bits per symbol.
(IES-EC-92)(15M)
Sol: (final codes, with the successive source reductions of the Huffman procedure shown to the right)

Xi   P(Xi)  CODE
m1   0.25   01    | 0.25 01 | 0.25 01 | 0.25 01 | 0.31 00 | 0.44 1 | 0.56 0
m3   0.19   11    | 0.19 11 | 0.19 11 | 0.25 10 | 0.25 01 | 0.31 00 | 0.44 1
m4   0.16   000   | 0.16 000 | 0.16 000 | 0.19 11 | 0.25 10 | 0.25 01
m6   0.14   100   | 0.14 100 | 0.15 001 | 0.16 000 | 0.19 11
m5   0.11   101   | 0.11 101 | 0.14 100 | 0.15 001
m7   0.08   0010  | 0.08 0010 | 0.11 101
m8   0.04   00110 | 0.07 0011
m2   0.03   00111

H = −Σ_{i=1}^{m} Pi log2 Pi ≈ 2.75 bits/symbol
Average length L̄ = Σ Pi li = 0.25×2 + 0.19×2 + 0.16×3 + 0.14×3 + 0.11×3 + 0.08×4 + 0.04×5 + 0.03×5
                 = 2.78 bits/symbol
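The construction in the table can be cross-checked with a short program; a sketch using Python's heapq (tie-breaking may produce codes different from the table, but every Huffman code for a given source has the same average length):

```python
import heapq
from math import log2

# Source from Q08: eight symbols and their probabilities.
probs = {'m1': 0.25, 'm2': 0.03, 'm3': 0.19, 'm4': 0.16,
         'm5': 0.11, 'm6': 0.14, 'm7': 0.08, 'm8': 0.04}

# Min-heap of (probability, tie-breaker, list-of-symbols). Every time two
# nodes merge, each symbol inside them gains one more code bit.
counter = 0
heap = []
for sym, p in probs.items():
    heap.append((p, counter, [sym]))
    counter += 1
heapq.heapify(heap)

depth = {sym: 0 for sym in probs}
while len(heap) > 1:
    p1, _, syms1 = heapq.heappop(heap)
    p2, _, syms2 = heapq.heappop(heap)
    for s in syms1 + syms2:
        depth[s] += 1
    counter += 1
    heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))

avg_len = sum(probs[s] * depth[s] for s in probs)        # bits/symbol
entropy = sum(p * log2(1 / p) for p in probs.values())   # bits/symbol
```

This gives L̄ = 2.78 bits/symbol and H ≈ 2.75 bits/symbol, matching the values above.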
H(Y) = Σ_{j=1}^{n} P(yj) log2 [1/P(yj)]

Entropy H(x) = Σ_{i=1}^{8} Pi log2 (1/Pi) = … + (1/32) log2 32 + … = 2.3125 bits/symbol

The conditional entropy H(x/y) is a measure of the average uncertainty about the channel input
after the channel output has been observed.
The conditional entropy H(Y/X) is the average uncertainty of the channel output when x was
transmitted:
H(Y/X) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 [1/P(yj/xi)]
       = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi) P(yj/xi) log2 [1/P(yj/xi)]
For the given channel, P(0/0) = 3/4, P(1/0) = 1/4, P(0/1) = 1/16 and P(1/1) = 15/16, i.e.
P(y/x) = [3/4    1/4  ]
         [1/16   15/16]
with P(x = 0) = 3/8 and P(x = 1) = 5/8. Then
H(Y/X) = P(0) P(0/0) log2 [1/P(0/0)] + P(1) P(0/1) log2 [1/P(0/1)]
       + P(0) P(1/0) log2 [1/P(1/0)] + P(1) P(1/1) log2 [1/P(1/1)]
       = (3/8)(3/4) log2 (4/3) + (5/8)(1/16) log2 16 + (3/8)(1/4) log2 4 + (5/8)(15/16) log2 (16/15)
       = 0.1167 + 0.1563 + 0.1875 + 0.0546 ≈ 0.515 bits/symbol

Redundancy: Redundancy is the number of bits used to transmit a message minus the number of
bits of actual information in the message. Informally, it is the amount of wasted "space" used
to transmit certain data. Data compression is a way to reduce or eliminate unwanted redundancy,
while checksums are a way of adding desired redundancy for the purpose of error detection when
communicating over a noisy channel of limited capacity.
The combined role of the channel encoder and decoder is to provide reliable communication over
a noisy channel. This is done by introducing redundancy in the channel encoder and exploiting
it in the channel decoder to reconstruct the original encoder input as accurately as possible.
In source coding we remove redundancy, whereas in channel coding we introduce controlled
redundancy. Because of redundancy, we are able to decode a message accurately, without errors,
from the received message.
For example, to the code word 0001 we may add a fifth pulse of positive polarity to make a new
code word, 00011. Now the number of positive pulses is 2 (even). If a single error occurs in
any position, this parity will be violated. The receiver then knows that an error has been made
and can request retransmission of the message. It can detect an error, but cannot locate it.
Redundancy = 1 − Efficiency = 1 − H(x)/L̄
…events which are statistically independent. The measure of information associated with an
event A occurring with probability PA is defined as
I_A = log2 (1/PA)
(b) The information content of a message is proportional to the logarithm of the reciprocal of
the probability of the message. The basic quantum unit of information is called a binary digit,
normally called a bit. In general, any one of n equiprobable messages contains log2 n bits of
information, the probability of occurrence of each one being Pi = 1/n. The information
associated with each message is
Ii = log2 n = log2 (1/Pi) = −log2 Pi
For r-ary digits, I = log_r (1/P) r-ary units, so that
I = log2 (1/P) bits = log_r (1/P) r-ary units, and 1 r-ary unit = log2 r bits.
c) H(x) = Σ_{i=1}^{4} Pi log2 (1/pi)
        = (1/8) log2 8 + (1/4) log2 4 + (1/2) log2 2 + (1/8) log2 8
        = 3/8 + 2/4 + 1/2 + 3/8 = 1.75 bits/symbol

ii) The average information in the dot-dash code;
iii) The average rate of information transmission, assuming that a dot lasts 10 msec and the
same time interval is allowed between symbols.

Sol: a) Self Information: Same as 13(a).
Entropy: Entropy is a measure of the uncertainty in a random variable. It is defined as the
average information content per source symbol. The entropy H(X) depends only on the
probabilities of the symbols of the source and provides a quantitative measure of the degree of
randomness of the source. The entropy H(X) of a source is bounded as
0 ≤ H(X) ≤ log2 N
If H = 0, the entropy corresponds to no uncertainty. If H = log2 N, the entropy corresponds to
maximum uncertainty, which occurs if and only if PK = 1/N for every K.
The entropy of a binary memoryless source whose symbols are statistically independent is
H = −p log2 p − (1 − p) log2 (1 − p)
with Hmax = 1 when p1 = p0 = 1/2, i.e. when the symbols '1' and '0' are equiprobable.
[Figure: entropy H of a binary source versus p; the curve peaks at H = 1.0 for p = 1/2.]
For a BSC operating at s symbols/sec, the capacity is
Cs = max I(X, Y) = s[1 + p log2 p + (1 − p) log2 (1 − p)]
16. a) When the raw binary bits generated by the source can be transmitted in a channel, why is
    source coding done, which adds to the complexity of the transmission work?
    (IES-EC-01)(4M)
    b) Obtain the Shannon–Fano code for the source information consisting of 5 messages m1, m2,
    m3, m4 and m5 with probabilities 1/16, 1/4, 1/8, 1/2, 1/16.
    (IES-EC-01)(4M)

Sol:
a) Source coding is an efficient way of representing the output of a source. Consider that
there are M = 2^N messages, each message coded into N bits. If the messages are equally likely,
the average information per message is H = N, i.e. N bits. These being N bits/message, the
average information carried by an individual bit is H/N = 1 bit.
If the messages are not equally likely, then H < N, and each bit carries less than 1 bit of
information. The efficiency can be improved by using a code in which not all messages are
encoded into the same number of bits: the more likely a message is, the fewer the number of
bits that should be used in its code word.
Thus, source coding is a way of transmitting the output of a source with fewer bits (on
average) without any information loss.

b) Shannon–Fano coding:

Symbol   Probability   Code word   No. of bits
m4       1/2           1           1
m2       1/4           01          2
m3       1/8           001         3
m1       1/16          0001        4
m5       1/16          0000        4

17. Prove that an (n, k) linear block code of minimum distance dmin can correct up to t errors
if and only if t ≤ (1/2)(dmin − 1).
(IES-EC-01)(8M)

Sol: In block codes, a block of k data digits is encoded by a codeword of n digits (n > k);
i.e. k data digits are accumulated and then encoded into an n-digit codeword such that the
number of check digits is m = n − k. The code efficiency is k/n, and such a code is known as an
(n, k) code.
The minimum distance dmin of a linear block code c is defined as the smallest Hamming distance
between any pair of code words in c. Equivalently, the minimum distance dmin of a linear code c
is the smallest Hamming weight of the nonzero code words in c. The minimum distance dmin of a
linear code c determines the error detection and correction capabilities of the code.
A linear code c of minimum distance dmin can detect up to t errors if and only if dmin ≥ t + 1.
A linear code c of minimum distance dmin can correct up to t errors if and only if
dmin ≥ 2t + 1.
In the figure, two Hamming spheres, each of radius t, are constructed around the points that
represent the code words ci and cj. Fig (a) depicts the case where the two spheres are
disjoint, that is d(ci, cj) ≥ 2t + 1. For this case, if the code word ci is transmitted and the
received word r satisfies d(ci, r) ≤ t, the decoder will choose ci, since it is the code word
closest to the received word r.
Fig (b) depicts the case where the two spheres intersect, that is d(ci, cj) < 2t. If the code
word ci is transmitted, there exists a received code word r such that…
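Because the probabilities in 16(b) are dyadic (negative powers of 2), the Shannon–Fano code lengths equal log2(1/p) exactly and the efficiency is 100%; a quick numeric check (names are illustrative):

```python
from math import log2

# Q16(b): probabilities and the code lengths from the table (m4, m2, m3, m1, m5).
probs = [1/2, 1/4, 1/8, 1/16, 1/16]
lengths = [1, 2, 3, 4, 4]

avg_len = sum(p * l for p, l in zip(probs, lengths))   # bits/symbol
entropy = sum(p * log2(1 / p) for p in probs)          # bits/symbol
efficiency = entropy / avg_len                         # 1.0 for a dyadic source
```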
21. In binary PCM, '0' occurs with probability 1/4 and '1' occurs with probability 3/4.
Calculate the amount of information carried by each bit and comment on the result obtained.
(IES-EC-10)(8M)

Sol: Probability of '0' = 1/4
Probability of '1' = 3/4
Amount of information of '0' = log2 (1/p) = log2 4 = 2 bits.
Amount of information of '1' = log2 (1/p) = log2 (4/3) ≈ 0.42 bit.
I ∝ 1/p
The amount of information of a zero is greater than that of a one, since the amount of
information is inversely proportional to the probability of occurrence. Thus, as the
probability P decreases from 1 to 0, Ik increases monotonically, going from 0 to infinity. A
greater amount of information is conveyed when the receiver correctly identifies a less likely
message.

Average length L̄ = (1/2)×1 + (1/4)×2 + (1/8)×3 + (1/8)×3 = 1.75 bits/symbol
Efficiency (η) = [H(X)/L̄] × 100 = 100%.
23. If I(x1) is the information carried by message x1 and I(x2) is the information carried by
message x2, then prove that the amount of information carried compositely due to x1 and x2 is
I(x1, x2) = I(x1) + I(x2).
(IES-EC-12)(7M)

Sol: The measure of information associated with a message x occurring with probability Px is
defined as
I(x) = log2 (1/Px)
For two statistically independent events, P(x1 x2) or P(x1, x2) = P(x1) P(x2). By definition,
I(x1, x2) = log2 [1/P(x1, x2)] = log2 [1/(P(x1) P(x2))] = log2 [1/P(x1)] + log2 [1/P(x2)]
∴ I(x1, x2) = I(x1) + I(x2)

24. a) Explain Hamming codes.
    b) How many Hamming bits are required for a block length of 20 message bits to correct one
    bit error?
(IES-EC-12)(10M)

Sol: a) Hamming codes are error-correcting codes. Error-correcting codes have been classified
into block codes and convolutional codes.
To generate an (n, k) block code, the channel encoder accepts information in successive k-bit
blocks; for each block it adds (n − k) redundant bits related to the k message bits, thereby
producing an overall encoded block of n bits, where n > k. The n-bit block is called a code
word, and n is the block length of the code.
Rb = (n/k) Rs, where Rs is the bit rate of the information generated by the source. The
dimensionless ratio r = k/n is called the code rate, where 0 < r < 1. The bit rate Rb coming
out of the encoder is called the channel data rate.
[Figure: code word structure — parity bits b0, b1, …, b(n−k−1) followed by message bits
m0, m1, …, m(k−1).]
Hamming codes are single-error-correcting binary perfect codes. The single-error-correcting
capability of Hamming codes is also confirmed by the minimum distance (dmin). A binary code for
which the Hamming bound is satisfied with the equality sign is called a "perfect code".
An (n, k) linear block code can correct up to t errors per codeword, provided that n and k
satisfy the Hamming bound
2^(n−k) ≥ Σ_{i=0}^{t} C(n, i)
An (n, k) linear block code of minimum distance dmin can correct up to t errors if and only if
t ≤ (1/2)(dmin − 1).
Hamming codes form a family of (n, k) linear codes with the following parameters:
Block length: n = 2^m − 1
Number of message bits: k = 2^m − m − 1
Number of parity bits: n − k = m

b) Given k = 20, t = 1. The Hamming bound gives
2^(n−k) ≥ Σ_{i=0}^{1} C(n, i) = n!/(n! 0!) + n!/((n − 1)! 1!) = 1 + n
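The trial-and-error step in part (b) of Q24 can be mechanized; a sketch that searches for the smallest n satisfying the Hamming bound 2^(n−k) ≥ 1 + n for k = 20, t = 1 (function name is illustrative):

```python
k, t = 20, 1

def hamming_bound_ok(n, k):
    # For t = 1 the Hamming bound is 2^(n-k) >= C(n,0) + C(n,1) = 1 + n.
    return 2 ** (n - k) >= 1 + n

# Smallest block length n > k that satisfies the bound.
n = k + 1
while not hamming_bound_ok(n, k):
    n += 1
```

The search stops at n = 25 (2^5 = 32 ≥ 26), so m = n − k = 5 parity bits, as found below.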
2^(n−20) ≥ 1 + n
(n − 20) log10 2 ≥ log10 (1 + n)
0.3 (n − 20) ≥ log10 (1 + n)
Since k = 20, n must be greater than 20; by trial and error, n = 25 (2^5 = 32 ≥ 26).
Number of parity bits: m = n − k = 25 − 20 = 5
∴ Five Hamming bits are required.

25. A channel has the input symbols X = {x1, x2}, where x1 = 00 and x2 = 11, and the received
symbols Y = {y1, y2, y3, y4} = {00, 01, 10, 11}. Let q be the probability of correct reception
and p the probability of incorrect reception.
[Figure: x1 = 00 → y1 = 00 with probability q and → y2 = 01, y3 = 10, y4 = 11 each with
probability p; x2 = 11 → y4 = 11 with probability q and → y1, y2, y3 each with probability p.]
Calculate the rate of joint information transmission for the above channel. Assume
p(x1) = p(x2) = 1/2.
(IES-EC-13)(10M)

Sol: p(x1) = p(x2) = 1/2, so P(X) = [1/2  1/2] and
H(X) = log2 m = log2 2 = 1 bit/msg
P(Y/X) = [q  p  p  p]   (rows x1, x2; columns y1 … y4)
         [p  p  p  q]
H(Y/X) = Σ Σ p(xi, yj) log2 [1/p(yj/xi)] = Σ Σ p(xi) p(yj/xi) log2 [1/p(yj/xi)]
Expanding the two rows term by term and using p(x1) = p(x2) = 1/2,
H(Y/X) = −[q log2 q + 3p log2 p]
P(Y) = P(X) · P(Y/X) = [(q + p)/2   p   p   (p + q)/2]
H(Y) = −[2p log2 p + (p + q) log2 ((p + q)/2)]
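The closed-form expressions for H(Y/X) and H(Y) in Q25 can be sanity-checked against direct computation for a sample value of p, say p = 0.1 (so q = 1 − 3p = 0.7); a sketch:

```python
from math import log2

p = 0.1
q = 1 - 3 * p            # each row of P(Y/X) must sum to q + 3p = 1

P_yx = [[q, p, p, p],
        [p, p, p, q]]
P_x = [0.5, 0.5]

# Direct computation of H(Y/X) and H(Y) from the channel matrix.
H_yx = sum(P_x[i] * P_yx[i][j] * log2(1 / P_yx[i][j])
           for i in range(2) for j in range(4))
P_y = [sum(P_x[i] * P_yx[i][j] for i in range(2)) for j in range(4)]
H_y = sum(py * log2(1 / py) for py in P_y)

# Closed forms derived in the solution above.
H_yx_formula = -(q * log2(q) + 3 * p * log2(p))
H_y_formula = -(2 * p * log2(p) + (p + q) * log2((p + q) / 2))
```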
[Figure: a two-use repetition channel — inputs X1 = 00 and X2 = 11 (each with probability 1/2);
the outputs 01 = y3 and 10 = y4, each reached with probability pq, are treated as erasures
(ERASE), while 00 and 11 are decoded directly.]
Since p1 < p, I(X,Y)2 is now greater than the original value [1 − H(p)] of a BSC. It may be
shown that with three repetitions
I(X,Y)3 = (p³ + q³)[1 − H(p³/(p³ + q³))] + 3pq [1 − H(p)]
Given a BSC with p = 1/4, q = 3/4 and p(0) = p(1) = 0.5, used as a repetitive BSC:
With 3 repetitions, the effective error rate is p″ = p³/(p³ + q³) = 1/28 and q″ = 27/28.
I(X,Y)3 = (p³ + q³)[1 − H(p″)] + 3pq [1 − H(p)]
        = (7/16)[1 − ((27/28) log2 (28/27) + (1/28) log2 28)] + (9/16)(0.185)
        = (7/16)(0.78) + 0.104 ≈ 0.444 bits/symbol
It is thus seen that the mutual information improves considerably with repetitions.
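The repetition figures can be reproduced numerically (full precision gives ≈ 0.446 bits/symbol; the printed 0.444 reflects intermediate rounding); a sketch:

```python
from math import log2

def Hb(p):
    # Binary entropy function H(p).
    return -(p * log2(p) + (1 - p) * log2(1 - p))

p, q = 0.25, 0.75
p3q3 = p**3 + q**3          # = 7/16
p_eff = p**3 / p3q3         # effective error rate with 3 repetitions = 1/28

I3 = p3q3 * (1 - Hb(p_eff)) + 3 * p * q * (1 - Hb(p))
I_single = 1 - Hb(p)        # plain (single-use) BSC mutual information
```

Since I3 exceeds I_single, the code confirms that repetition improves the mutual information per transmitted symbol group.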
27. A continuous random variable X is constrained to a peak magnitude M. Show that
(i) the differential entropy of X is maximum when it is uniformly distributed;
(ii) the maximum differential entropy of X is log2 2M.
(IES-EC-14)(10M)

Sol: (i) The entropy of a continuous random variable X is
H(x) = −∫ p(x) log2 p(x) dx ….. (1)
subject to constraints of the form
∫_a^b φ1(x, p) dx = k1 ….. (2)
∫_a^b φ2(x, p) dx = k2 ….. (3)
⋮
∫_a^b φn(x, p) dx = kn
where k1, k2, …, kn are preassigned constants. The form of p(x) which satisfies the above
constraints and makes H(x) maximum (or minimum) is obtained by solving the equation
∂F/∂p + λ1 ∂φ1/∂p + λ2 ∂φ2/∂p + … + λn ∂φn/∂p = 0 ….. (4)
The quantities λ1, λ2, …, λn are the Lagrangians (undetermined multipliers); they are
determined by substituting the value of p(x) into the constraint equations.
Given that the signal is peak-limited to M,
H(x) = −∫_{−M}^{M} p(x) log2 p(x) dx
subject to the single constraint ∫_{−M}^{M} p(x) dx = 1. Here p = p(x), F(x, p) = −p log p,
and φ1(x, p) = p.
Then
∂F/∂p = −(1 + log p) ….. (5)
∂φ1/∂p = 1 ….. (6)
Substituting (5) and (6) in (4) gives 1 + log p = λ1, so p(x) is a constant over (−M, M); with
∫_{−M}^{M} p(x) dx = 1 this means p(x) = 1/2M, i.e. X is uniformly distributed.
(ii) H(x)max = −∫_{−M}^{M} (1/2M) log2 (1/2M) dx = log2 2M

[Figure: decision channel — messages M0 (P(M0) = 0.3) and M1 (P(M1) = 0.5), with transition
probabilities including P(r0/M0) = 0.6.]
(b) P(r1/M1) P(M1) > P(r1/M0) P(M0) > P(r1/M2) P(M2),
i.e. (0.5)(0.5) > (0.3)(0.3) > (0.1)(0.2); we select M1 whenever r1 is received.
(c) P(r2/M1) P(M1) > P(r2/M2) P(M2) > P(r2/M0) P(M0),
i.e. (0.4)(0.5) > (0.8)(0.2) > (0.1)(0.3); we select M1 whenever r2 is received.
(ii) Based on the decisions made in (i), the probability of error is Pe = 1 − Pc, where
Pc = P(r0/M0) P(M0) + P(r1/M1) P(M1) + P(r2/M1) P(M1) = 0.6 × 0.3 + 0.5 × 0.5 + 0.4 × 0.5 = 0.63
Pe = 0.37

29. 40% of the population of a town are voters, 50% are educated and 20% are educated voters.
A person is chosen at random.
(i) If he is educated, what is the probability that he is a voter?
(ii) If he is a voter, what is the probability that he is not educated?
(iii) What is the probability that he is neither a voter nor educated?
(Paper - I) (IES-EC-14)(10M)

Sol: Probability of a person being a voter: P(V) = 0.4. Probability of a person being educated:
P(E) = 0.5. Probability of being an educated voter: P(E ∩ V) = 0.2.
(i) P(V/E) = P(E ∩ V)/P(E) = 0.2/0.5 = 0.4
(ii) P(Eᶜ/V) = P(Eᶜ ∩ V)/P(V) = [P(V) − P(E ∩ V)]/P(V) = (0.4 − 0.2)/0.4 = 0.5
(iii) P(Eᶜ ∩ Vᶜ) = 1 − P(E ∪ V) = 1 − [P(E) + P(V) − P(E ∩ V)] = 1 − [0.4 + 0.5 − 0.2]
    = 1 − 0.7 = 0.3
30. Three students A, B and C are given a problem in Maths. The probabilities of their solving
the problem are 3/4, 2/3 and 1/4 respectively. Determine the probability that the problem is
solved if all of them try to solve it.
(IES-EC-15)(5M)

Sol: P(A) = 3/4; P(B) = 2/3; P(C) = 1/4.
Let P(s) be the probability that the problem is solved and P(s̄) the probability that it is not
solved.
P(s) = 1 − P(s̄) ---- (1)
Since the students work independently, the problem remains unsolved only if all three fail:
P(s̄) = P(A̅) P(B̅) P(C̅) = (1/4)(1/3)(3/4) = 1/16
∴ P(s) = 1 − 1/16 = 15/16
(ii) If the source symbols/bits have equally likely probabilities, then compute the
probabilities associated with the channel outputs for p = 0.2.
(IES-EC-16)(10M)

Sol: (ii) The source symbols have equally likely probabilities, i.e. P(x0) = P(x1) = 1/2.
For the given channel with outputs y0, y1, y2,
P(y/x) = [1−p   p   0  ]   =   [0.8  0.2  0  ]
         [0     p   1−p]       [0    0.2  0.8]
P(Y) = P(X) · P(Y/X) = [1/2  1/2] · P(y/x) = [0.4  0.2  0.4]
34. Explain source coding. A discrete message source emits seven symbols {m1, m2, m3, …, m7}
with probabilities {0.35, 0.3, 0.2, 0.1, 0.04, 0.005, 0.005} respectively. Give Huffman codes
for these symbols, calculate the average bits of information and the average binary digits per
symbol, and calculate the code efficiency. (IES-EC-17)(15M)

Sol: Source Coding: Source encoding techniques assign bits to the symbols using either
uniform-length coding or non-uniform-length coding.
Non-uniform-length coding can be implemented using either the Shannon–Fano coding algorithm or
the Huffman coding algorithm.
The major disadvantage of the Shannon–Fano coding technique is the ambiguity in selecting the
intervals, so Huffman coding is preferred; Huffman coding also has relatively higher coding
efficiency.

(Successive source reductions of the Huffman procedure, left to right:)
0.35   1      | 0.35 1     | 0.35 1    | 0.35 1   | 0.35 1  | 0.65 0
0.3    01     | 0.3 01     | 0.3 01    | 0.3 01   | 0.35 00 | 0.35 1
0.2    000    | 0.2 000    | 0.2 000   | 0.2 000  | 0.3 01
0.1    0010   | 0.1 0010   | 0.1 0010  | 0.15 001
0.04   00110  | 0.04 00110 | 0.05 0011
0.005  001110 | 0.01 00111
0.005  001111

Average binary digits per symbol (average code word length):
L̄ = 0.35×1 + 0.3×2 + 0.2×3 + 0.1×4 + 0.04×5 + 0.005×6 + 0.005×6 = 2.21 binary digits/symbol
% code efficiency = [H(x)/L̄] × 100 = (2.1084/2.21) × 100 ≈ 95.40%
35. Consider a discrete memoryless source whose alphabet consists of K equiprobable symbols.
(A) Explain why the use of a fixed-length code for the representation of such a source is about
as efficient as any code can be.
(B) What conditions have to be satisfied by K and the codeword length for the coding efficiency
to be 100 percent?
(IES-EC-18)(10M)

Sol: Alphabet size = K.
(A) All symbols are equiprobable. The main objective of coding is to assign code lengths based
on the symbol probabilities: if the probability of occurrence is high, the code length is
short; if the probability of occurrence is low, the code length is long; and if the
probabilities of occurrence are equal, the code lengths are equal.
Coding efficiency η = H(x)/L̄. If the probabilities are equal, H(X) = L̄ and η = 100%. Since
equal probabilities lead to equal code lengths, fixed-length coding is about as efficient as
any code can be.
(B) H(X) = Σ_{i=0}^{K−1} Pi log2 (1/Pi)
If the probabilities are equal, H(X) is maximum: H(X)max = log2 K.
For η = H(X)/L̄ = 1:
L̄ = log2 K, i.e. K = 2^L̄
So the coding efficiency is 100% when K is an integer power of 2 and the codeword length is
L = log2 K.
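The condition K = 2^L can be illustrated numerically; a sketch comparing K = 8 (a power of 2, efficiency 100%) with K = 5, where a fixed-length code needs ceil(log2 K) bits per symbol (the helper name is illustrative):

```python
from math import ceil, log2

def fixed_length_efficiency(K):
    # Equiprobable source: H(X) = log2 K; fixed code length L = ceil(log2 K).
    H = log2(K)
    L = ceil(log2(K))
    return H / L

eta8 = fixed_length_efficiency(8)   # K = 2^3 -> efficiency 1.0
eta5 = fixed_length_efficiency(5)   # log2(5)/3, about 0.774
```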