DEPARTMENT OF ECE
EC 6501- DIGITAL COMMUNICATION
ALL UNITS NOTES
SEM/YEAR: V/ III
Since in this case the origin lies in the middle of a riser of the staircase, it
is referred to as a symmetric quantizer of the midriser type.
Both the midtread and midriser quantizers are memoryless; that is, the quantizer
output is determined only by the value of the corresponding input sample.
Overload level:
The overload level is the absolute value of input at which the quantizer saturates; it is one
half of the peak-to-peak range of input sample values.
Quantization Noise: The use of quantization introduces an error defined as the
difference between the input signal m and the output signal v. This error is called
quantization noise.
Let the quantizer input m be the sample value of a zero-mean random variable M.
A quantizer g(.) maps the input random variable M of continuous amplitude into a
discrete random variable V; their respective sample values are related by the equation
v = g(m)
Let the quantization error be denoted by the random variable Q of sample value q:
q = m - v, or
Q = M - V
With the input M having zero mean, and the quantizer assumed to be symmetric,
the quantizer output V and therefore the quantization error Q also have zero mean.
Quantization error Q:
Consider an input m of continuous amplitude in the range (-mmax, mmax). Assuming
a uniform quantizer of the midriser type with L = 2^R representation levels, the step size of
the quantizer is given by
Δ = 2mmax / L = 2mmax / 2^R
The quantization error Q then lies in the interval (-Δ/2, Δ/2) and, for a sufficiently small
step size, may be treated as uniformly distributed over that interval. Then, with the mean
of the quantization error being zero, its variance is the same as the mean-square value:
σQ² = E[Q²] = (1/Δ) ∫ from -Δ/2 to Δ/2 of q² dq = Δ²/12 ------------(2)
Let P be the average power of the message signal m(t). The output
signal-to-noise ratio of the uniform quantizer is then
(SNR)o = P / σQ² = (3P / mmax²) 2^(2R)
The above equation shows that the output signal-to-noise ratio of the quantizer
increases exponentially with increasing number of bits per sample R.
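As a numerical check of this result (an illustrative Python sketch, not part of the original notes; the function names are chosen here), a midriser quantizer driven by a full-load uniform input has P = mmax²/3, so the formula predicts (SNR)o = 2^(2R), i.e. about 6 dB per bit:

```python
import math
import random

def midriser_quantize(m, m_max, R):
    """Uniform midriser quantizer with L = 2**R levels over (-m_max, m_max)."""
    L = 2 ** R
    delta = 2 * m_max / L
    i = math.floor(m / delta)            # index of the quantization cell
    i = max(-L // 2, min(L // 2 - 1, i)) # clip to the allowed range (overload)
    return (i + 0.5) * delta             # midriser output level

def snr_db(R, n=20000, m_max=1.0, seed=1):
    """Simulated output SNR in dB for a full-load uniform input."""
    rng = random.Random(seed)
    sig = noise = 0.0
    for _ in range(n):
        m = rng.uniform(-m_max, m_max)
        q = m - midriser_quantize(m, m_max, R)
        sig += m * m
        noise += q * q
    return 10 * math.log10(sig / noise)
```

Running `snr_db(8)` gives approximately 6.02 x 8 = 48 dB, and each added bit buys about 6 dB, as the equation predicts.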
2. Describe the PCM waveform coder and decoder with a neat sketch and list the
merits compared with analog coders. (Nov/Dec 15, April 17)
Pulse-Code Modulation
PCM is a discrete-time, discrete-amplitude waveform-coding process, by means
of which an analog signal is directly represented by a sequence of coded pulses.
Specifically, the transmitter consists of two components: a pulse-amplitude
modulator followed by an analog-to-digital (A/D) converter. The latter component itself
embodies a quantizer followed by an encoder. The receiver performs the inverse of
these two operations: digital-to- analog (D/A) conversion followed by pulse-amplitude
demodulation. The communication channel is responsible for transporting the encoded
pulses from the transmitter to the receiver. Figure 3, a block diagram of the PCM,
shows the transmitter, the transmission path from the transmitter output to the receiver
input, and the receiver. It is important to realize, however, that once distortion in the
form of quantization noise is introduced into the encoded pulses, there is absolutely
nothing that can be done at the receiver to compensate for that distortion.
The most important feature of a PCM system is its ability to control the effects of
distortion and noise produced by transmitting a PCM signal through the channel
connecting the receiver to the transmitter. This capability is accomplished by
reconstructing the PCM signal through a chain of regenerative repeaters, located at
sufficiently close spacing along the transmission path.
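The two transmitter operations, quantizing and encoding, can be sketched in a few lines (a minimal Python illustration; the uniform quantizer and the choice of R = 8 bits are assumptions of the sketch, not fixed by the notes):

```python
import math

def pcm_encode(samples, m_max, R):
    """Uniformly quantize each sample and emit an R-bit binary code word."""
    L = 2 ** R
    delta = 2 * m_max / L
    codes = []
    for m in samples:
        i = int(round((m + m_max) / delta - 0.5))  # quantization level index
        i = max(0, min(L - 1, i))                  # clip at the overload level
        codes.append(format(i, '0{}b'.format(R)))
    return codes

def pcm_decode(codes, m_max, R):
    """D/A step: map each code word back to the mid-level of its cell."""
    delta = 2 * m_max / 2 ** R
    return [(int(c, 2) + 0.5) * delta - m_max for c in codes]
```

For any input within (-m_max, m_max) the reconstruction error never exceeds Δ/2, which is exactly the quantization-noise bound used above.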
Let gδ(t) denote the signal obtained by individually weighting the elements of a
periodic sequence of delta functions spaced Ts seconds apart by the sequence of
numbers {g(nTs)}, as shown by
gδ(t) = Σ over n of g(nTs) δ(t - nTs)
Taking the Fourier transform of both sides yields
Gδ(f) = fs Σ over m of G(f - m fs)
where G(f) is the Fourier transform of the original signal g(t) and fs = 1/Ts is
the sampling rate.
The process of uniformly sampling a continuous-time signal of finite energy
thus results in a periodic spectrum with a period equal to the sampling rate.
Aliasing Phenomenon
Aliasing refers to the phenomenon of a high-frequency component in the
spectrum of the signal seemingly taking on the identity of a lower frequency in the
spectrum of its sampled version, as illustrated in Figure 8. The aliased spectrum,
shown by the solid curve in Figure 8b, pertains to the undersampled version of the
message signal represented by the spectrum of Figure 8a. To combat the effects of
aliasing in practice, we may use two corrective measures:
Prior to sampling, a low-pass anti-aliasing filter is used to attenuate those high-
frequency components of the signal that are not essential to the information
being conveyed by the message signal g(t).
The filtered signal is sampled at a rate slightly higher than the Nyquist rate.
The use of a sampling rate higher than the Nyquist rate also has the beneficial
effect of easing the design of the reconstruction filter used to recover the original signal
from its sampled version. Consider the example of a message signal that has been
anti-alias (low- pass) filtered, resulting in the spectrum shown in Figure 9 a. The
corresponding spectrum of the instantaneously sampled version of the signal is shown
in Figure 9 b, assuming a sampling rate higher than the Nyquist rate. According to
Figure 9 b, we readily see that design of the reconstruction filter may be specified as
follows:
The reconstruction filter is low-pass with a passband extending from -W to W,
where W is itself determined by the anti-aliasing filter.
The reconstruction filter has a transition band extending (for positive
frequencies) from W to (fs - W), where fs is the sampling rate.
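Aliasing is easy to demonstrate numerically: sampled at fs = 10 kHz, a 7 kHz tone produces exactly the same samples as a 3 kHz tone (fs - 7 kHz). The figures cannot be reproduced here, but the effect itself can (illustrative Python; the frequencies are chosen only for the example):

```python
import math

fs = 10_000.0        # sampling rate (Hz)
f_high = 7_000.0     # above fs/2, so it will alias
f_alias = fs - f_high  # apparent frequency after sampling: 3 kHz

s_high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(50)]
s_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(50)]

# the two sample sequences are indistinguishable
max_diff = max(abs(a - b) for a, b in zip(s_high, s_alias))
```

This is exactly why the anti-aliasing filter must remove such components before sampling; after sampling, no receiver processing can tell the two tones apart.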
At the receiving end of the system, the received signal is applied to a pulse
amplitude demodulator, which performs the reverse operation of the pulse
amplitude modulator. The short pulses produced at the pulse demodulator output
are distributed to the appropriate low-pass reconstruction filters by means of a
decommutator, which operates in synchronism with the commutator in the
transmitter. This synchronization is essential for a satisfactory operation of the
TDM system, and provisions have to be made for it.
Non-uniform quantization (robust quantization):
In telephonic communication, it is preferable to use variable separation
between the representation levels.
For example, the range of voltages covered by voice signals, from the peaks of
loud talk to the weak passages of weak talk, is on the order of 1000 to 1.
The use of a non-uniform quantizer is equivalent to passing the baseband
signal through a compressor and then applying the compressed signal to a
uniform quantizer.
Logarithmic companding of speech signal:
A particular form of compression law that is used in practice is the µ-law, which
is defined by
|v| = ln(1 + µ|m|) / ln(1 + µ)
where m and v are the normalized input and output voltages and µ is a
positive constant.
In figure 11(a), the µ-law is plotted for 3 different values of µ. The
case of uniform quantization corresponds to µ = 0.
For a given value of µ, the reciprocal slope of the compression curve, which
defines the quantum steps, is
d|m| / d|v| = (ln(1 + µ) / µ)(1 + µ|m|)
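The compressor/expander pair follows directly from the definition above (Python sketch; µ = 255, the common North American choice, is an assumption here):

```python
import math

def mu_compress(m, mu=255.0):
    """mu-law compression of a normalized input, |m| <= 1."""
    return math.copysign(math.log(1 + mu * abs(m)) / math.log(1 + mu), m)

def mu_expand(v, mu=255.0):
    """Inverse (expander) characteristic at the receiver."""
    return math.copysign(((1 + mu) ** abs(v) - 1) / mu, v)
```

Small amplitudes are boosted before uniform quantization (e.g. an input of 0.01 maps to about 0.23), which is what equalizes the SNR between loud and weak talkers.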
PART-B
DM TRANSMITTER.
It consists of a summer, a two-level quantizer, and an accumulator interconnected as
shown in fig. Assume that the accumulator is initially set to zero.
In the summer, the accumulator adds the quantizer output (±δ) to the previous
sample approximation:
u(nTs) = u((n-1)Ts) ± δ
At each sampling instant the accumulator increments the approximation to the
input signal by ±δ, depending upon the binary output of the modulator.
The accumulator does the best it can to track the input, one increment of +δ or -δ at
a time.
DM RECEIVER
The staircase approximation u(t) is reconstructed by passing the incoming
sequence of positive and negative pulses through an accumulator in a manner similar to
that used in the transmitter.
The out-of-band quantization noise in the high-frequency staircase waveform u(t)
is rejected by passing it through a low-pass filter with a bandwidth equal to the original
signal bandwidth.
Delta modulation offers two unique features: (1) a one-bit code word for the output,
which eliminates the need for word framing; (2) simplicity of design for both the transmitter
and receiver.
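The transmitter/receiver pair described above reduces to a few lines (illustrative Python sketch; the step size δ and the ramp input below are chosen only to avoid slope overload):

```python
def dm_transmit(x, delta):
    """One-bit DM: compare input with the accumulator, step by +/- delta."""
    approx, bits = 0.0, []
    for sample in x:
        b = 1 if sample >= approx else 0  # two-level quantizer output
        approx += delta if b else -delta  # accumulator update
        bits.append(b)
    return bits

def dm_receive(bits, delta):
    """Rebuild the staircase u(t) with an identical accumulator."""
    approx, out = 0.0, []
    for b in bits:
        approx += delta if b else -delta
        out.append(approx)
    return out
```

For a slowly varying input the staircase hunts around the signal within about one step size; a steeper input than δ per sample would instead show slope-overload distortion.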
QUANTIZATION ERROR
Delta modulation has two types of error:
1) slope overload 2) granular noise
Slope-overload distortion arises when the step size δ is too small for the staircase
approximation u(t) to follow a steep segment of the input waveform.
Granular noise arises when the step size δ is too large relative to the local slope of
the input, thereby causing the staircase approximation u(t) to hunt around a relatively flat
segment of the input waveform.
2. Explain how Adaptive Delta Modulation performs better and gains more SNR
than delta modulation. [Nov/Dec 2016, April/May 2017]
The performance of the delta modulator can be improved significantly by making
the step size of the modulator time-varying. During a steep segment of
the input signal the step size is increased. Conversely, when the input signal is varying
slowly, the step size is reduced.
In this way the step size is adapted to the level of the input signal; this is called
adaptive delta modulation (ADM). Several ADM schemes exist to adjust the step size:
a discrete set of values is provided for the step size, or a continuous range of step-size
variation is provided.
SNR IN DPCM:
The output signal-to-quantization-noise ratio of a signal coder is
(SNR)o = σX² / σQ²
(SNR)o = (σX² / σE²)(σE² / σQ²)
(SNR)o = GP (SNR)P
where (SNR)P is the prediction-error-to-quantization-noise ratio,
(SNR)P = σE² / σQ²
and GP, the prediction gain produced by the differential quantization scheme, is
defined by
GP = σX² / σE²
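The prediction gain GP can be checked by simulation. For a first-order autoregressive source with correlation coefficient ρ and a one-tap predictor x̂(n) = ρ x(n-1), theory gives GP = 1/(1 - ρ²); the sketch below (illustrative Python, not from the notes) reproduces this:

```python
import math
import random

def prediction_gain_db(rho=0.9, n=50000, seed=2):
    """Estimate Gp = var(X)/var(E) for a first-order predictor on an AR(1) source."""
    rng = random.Random(seed)
    x_prev, sx, se = 0.0, 0.0, 0.0
    for _ in range(n):
        x = rho * x_prev + rng.gauss(0.0, 1.0)  # AR(1) speech-like source
        e = x - rho * x_prev                    # prediction error
        sx += x * x
        se += e * e
        x_prev = x
    return 10 * math.log10(sx / se)
```

With ρ = 0.9 this gives about 7.2 dB of prediction gain, i.e. the DPCM coder needs roughly one bit per sample less than PCM for the same quality.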
Reduction in the number of bits per sample from 8 to 4 involves the combined
use of adaptive quantization and adaptive prediction.
Adaptive means being responsive to the changing level and spectrum of the input
speech signal.
The variation of performance with speakers and speech material, together with
variations in signal level, is inherent in the speech communication process.
A digital coding scheme that uses both adaptive quantization and adaptive
prediction is called adaptive differential pulse-code modulation (ADPCM).
The term "adaptive quantization" refers to a quantizer that operates with a time-
varying step size ∆(nTs), where Ts is the sampling period. The step size is varied in
proportion to an estimate σ̂(nTs) of the standard deviation of the quantizer input:
∆(nTs) = φ σ̂(nTs)
where φ is a constant.
Unquantized samples of the input signal are used to derive forward estimates of σ(nTs).
Samples of the quantizer output are used to derive backward estimates of σ(nTs).
ADAPTIVE QUANTIZATION
The respective quantization schemes are referred to as adaptive quantization
with forward estimation (AQF) and adaptive quantization with backward
estimation (AQB).
Provided the quantizer input is bounded, then so is the backward estimate σ̂(nTs)
and the corresponding step size ∆(nTs).
ADAPTIVE PREDICTION
The use of adaptive prediction in ADPCM is justified because speech signals are
inherently nonstationary.
The two schemes for performing adaptive prediction are 1) adaptive prediction
with forward estimation (APF) and 2) adaptive prediction with backward estimation (APB).
In APF, unquantized samples of the input signal are used to derive
estimates of the predictor coefficients.
In the APF scheme, N unquantized samples of the input speech are first buffered and
then released after computation of M predictor coefficients that are optimized for the
buffered segment of input samples.
The choice of M involves a compromise between an adequate prediction gain
and an acceptable amount of side information.
Likewise, the choice of learning period or buffer length N involves a compromise
between the rate at which information on the predictor coefficients must be updated and
transmitted to the receiver.
In APB, the optimum predictor coefficients are estimated on the basis of quantized and
transmitted data, so they can be updated as frequently as desired, from sample to
sample; APB is therefore the preferred method of prediction.
The adaptive prediction is intended to represent the mechanism for updating the
predictor coefficients.
Let y(nTs) denote the quantizer output, where Ts is the sampling period and n is the
time index. With x̂(nTs) denoting the prediction of the speech input sample x(nTs), the
prediction error applied to the quantizer can be written as
e(nTs) = x(nTs) - x̂(nTs)
The transmitter first performs analysis on the input speech signal, block
by block.
Each block is 10-30 ms long, for which the speech production process may
be treated as essentially stationary.
The parameters resulting from the analysis, namely the prediction-error
filter (analyzer) coefficients, a voiced/unvoiced parameter, and the pitch period,
provide a complete description of the particular segment of the input
speech signal.
A digital representation of the parameters of the complete description
constitutes the transmitted signal.
The receiver first performs decoding followed by synthesis of the speech
signal; the standard result of this analysis/synthesis is an artificial-sounding
reproduction of the original speech signal.
6. Briefly explain about Prediction Filter.
Prediction constitutes a special form of estimation: the requirement is to
use a finite set of present and past samples of a stationary process to predict a
sample of the process in the future.
The prediction is linear if it is a linear combination of the given samples of the
process; attention here is confined to linear predictors.
The filter designed to perform the prediction is called a predictor.
The difference between the actual sample of the process at the (future)
time of interest and the predictor output is called the prediction error.
Consider the random samples Xn-1, Xn-2, ..., Xn-M drawn from a stationary
process X(t); the requirement is to make a prediction of the sample Xn.
Let X̂n denote the random variable resulting from this prediction:
X̂n = Σ from k=1 to M of h0k Xn-k
where h01, h02, ..., h0M are the optimum predictor coefficients and
M is the number of delay elements employed in the predictor.
The optimum coefficients are obtained by minimizing the mean-square value of the
prediction error, as a special case of the Wiener filter; proceed as follows.
The variance of the sample Xn, viewed as the desired response, equals
σX² = E[Xn²] = RX(0)
where it is assumed that Xn has zero mean.
The cross-correlation function of Xn, acting as the desired response, and
Xn-k, acting as the kth tap input of the predictor, is given by
E[Xn Xn-k] = RX(k), k = 1, 2, ..., M
The autocorrelation function of the predictor's tap input Xn-k with another
tap input Xn-m is given by
E[Xn-k Xn-m] = RX(m-k), k, m = 1, 2, ..., M
The normal equations for the linear prediction problem then follow as
Σ from m=1 to M of h0m RX(m-k) = RX(k), k = 1, 2, ..., M
Therefore we need only know the autocorrelation function of the signal
for different lags in order to solve the normal equations for the predictor
coefficients.
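For M = 2 the normal equations are a 2x2 linear system that can be solved in closed form (Python sketch; the helper name is ours, not notation from the notes). For an AR(1) process with RX(k) = ρ^|k| the solution should be h01 = ρ and h02 = 0, i.e. only the most recent sample matters:

```python
def predictor_coeffs_order2(r0, r1, r2):
    """Solve the normal equations R h = r for M = 2:
       [r0 r1] [h1]   [r1]
       [r1 r0] [h2] = [r2]
    using Cramer's rule."""
    det = r0 * r0 - r1 * r1
    h1 = (r0 * r1 - r1 * r2) / det
    h2 = (r0 * r2 - r1 * r1) / det
    return h1, h2
```

With r0 = 1, r1 = 0.9, r2 = 0.81 (i.e. ρ = 0.9) this returns (0.9, 0.0), matching the AR(1) prediction used earlier for the DPCM prediction gain.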
PREDICTION ERROR PROCESS
The prediction error, denoted by εn, is defined by
εn = Xn - X̂n
The prediction error εn is computed from the present and past samples
of a stationary process, namely Xn, Xn-1, ..., Xn-M, and the predictor
coefficients h01, h02, ..., h0M, using the structure called the prediction-
error filter, as shown in fig.
A second structure performs the inverse operation, recovering Xn from the
prediction error; the first is called the prediction-error filter and the second the
inverse filter.
The impulse response of the inverse filter has infinite duration because of the
feedback present in the filter, whereas the impulse response of the prediction-error
filter has finite duration.
It follows from these structures that there is a one-to-one correspondence
between the samples of a stationary process and those of the prediction error, in that,
given one, we can compute the other by means of a linear filtering operation.
The reason for representing samples of a stationary process Xn by
samples of the corresponding prediction error is data compression: the prediction-
error variance σε² is less than the signal variance σX².
UNIT-III BASEBAND TRANSMISSION
PART-A
1. What are line codes? Name some popular line codes. (MAY/JUNE 2016)
Line coding refers to the process of representing the bit stream (1s and 0s) in the
form of voltage or current variations optimally tuned for the specific properties of the
physical channel being used.
Unipolar (Unipolar NRZ and Unipolar RZ)
Polar (Polar NRZ and Polar RZ)
Non-Return-to-Zero, Inverted (NRZI)
Manchester encoding
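The formats listed above map bits to pulse levels as follows (a minimal Python sketch; the scheme names and the two-samples-per-bit resolution are choices made for the illustration):

```python
def line_code(bits, scheme, spb=2):
    """Map a bit list to sample levels, `spb` samples per bit interval."""
    out = []
    for b in bits:
        if scheme == 'unipolar_nrz':
            chip = [1.0 if b else 0.0] * spb
        elif scheme == 'polar_nrz':
            chip = [1.0 if b else -1.0] * spb
        elif scheme == 'polar_rz':  # pulse occupies only the first half-bit
            half = [1.0 if b else -1.0] * (spb // 2)
            chip = half + [0.0] * (spb - spb // 2)
        elif scheme == 'manchester':  # 1 -> high-low, 0 -> low-high
            lvl = 1.0 if b else -1.0
            chip = [lvl] * (spb // 2) + [-lvl] * (spb - spb // 2)
        else:
            raise ValueError(scheme)
        out.extend(chip)
    return out
```

Note that the polar and Manchester waveforms have zero dc content for balanced data, while the unipolar waveform does not; this reappears in the power-spectral-density results later in this unit.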
3. List the properties of the syndrome. (NOV/DEC 2015)
The syndrome depends only on the error pattern and not on the transmitted
code word.
All error patterns differing by a code word have the same syndrome.
The syndrome is the sum of those columns of the matrix H corresponding to
the error locations.
With syndrome decoding, an (n, k) linear block code can correct up to t
errors per code word if n and k satisfy the Hamming bound.
Width of the eye - It defines the time interval over which the received waveform can be
sampled without error from intersymbol interference.
Sensitivity of the eye - The sensitivity of the system to timing error is determined by the
rate of closure of the eye as the sampling time is varied.
Margin over noise - The height of the eye opening at a specified sampling time defines
the margin over noise.
Applications of the eye pattern:
Used to study the effect of ISI.
The eye opening indicates the additive noise in the signal.
10. A 64 kbps binary PCM polar NRZ signal is passed through a communication
system with a raised cosine filter with roll-off factor 0.25. Find the bandwidth of the
filtered PCM signal. [NOV12]
Fb = 64 kbps
B0 = Fb/2 = 32 kHz
α = 0.25
B = B0(1 + α) = 32 x 10³ x (1 + 0.25) = 40 kHz
PART-B
1. Derive and explain Nyquist first criterion to minimize ISI. [Nov 16, April 17]
The transfer function of the channel and the transmitted pulse shape are
specified, and the problem is to determine the transfer functions of the transmitting and
receiving filters so as to reconstruct the transmitted data sequence {bk}.
The receiver extracts and then decodes the corresponding sequence of
weights {ak} from the output y(t).
The extraction involves sampling the output y(t) at some time t = iTb.
The decoding requires that the weighted pulse contribution ak p(iTb - kTb) for k = i be free
from ISI due to the overlapping tails of all other weighted pulse contributions with k ≠ i;
that is, p(t) must satisfy
p(iTb - kTb) = 1 for i = k, and 0 for i ≠ k
Taking the Fourier transform of the sequence of samples p(nTb) gives
Pδ(f) = Rb Σ over n of P(f - nRb)
where Rb = 1/Tb is the bit rate.
Let the integer m = i - k. Then i = k corresponds to m = 0, and i ≠ k corresponds to
m ≠ 0.
Imposing the condition of zero ISI on the sample values of p(t),
Pδ(f) = Σ over m of p(mTb) exp(-j2πmfTb) = p(0)
by using the sifting property of the delta function.
As p(0) = 1 by normalization, the condition for zero ISI is satisfied if
Σ over n of P(f - nRb) = Tb
With the ideal (sinc) solution, a timing error Δt in sampling the output y(t) gives
y(Δt) = µ a0 sinc(2B0 Δt) + µ (sin(2πB0 Δt)/π) Σ over k≠0 of (-1)^k ak / (2B0 Δt - k)
The first term on the right side defines the desired symbol, while the remaining
series represents the interference caused by the timing error Δt in sampling the output y(t).
PRACTICAL SOLUTION:
The practical difficulty with the ideal solution is overcome by extending the
bandwidth from B0 = Rb/2 to a value between B0 and 2B0.
A particular form of P(f) is the raised cosine spectrum. The frequency
characteristic consists of a flat portion and a roll-off portion that has a sinusoidal form:
P(f) = 1/(2B0), 0 ≤ |f| < f1
P(f) = (1/(4B0)){1 + cos[π(|f| - f1)/(2B0 - 2f1)]}, f1 ≤ |f| < 2B0 - f1
P(f) = 0, otherwise
where the roll-off factor is α = 1 - f1/B0.
Duobinary signaling: the overall frequency response is
Hc(f) = 2 cos(πfTb) exp(-jπfTb) for |f| ≤ 1/(2Tb), and 0 otherwise.
The impulse response consists of two sinc pulses, time-displaced by Tb seconds:
h(t) = Tb² sin(πt/Tb) / (πt(Tb - t))
The original data {bk} may be detected from the duobinary-coded sequence {ck} by
subtracting the previous decoded binary digit from the currently received digit.
Let b̂k represent the estimate of the original binary digit bk as made by the receiver at
time t = kTb:
b̂k = ck - b̂k-1
If ck is received without error, and if also the previous estimate b̂k-1 at time t = (k-1)Tb
corresponds to a correct decision, then the current estimate will be correct.
The technique of using a stored estimate of the previous symbol is called
decision feedback.
A drawback of this detection process is that once errors are made, they tend to
propagate. This is due to the fact that a decision on the current binary digit bk depends
on the correctness of the decision made on the previous binary digit bk-1.
Error propagation can be avoided by using precoding before the duobinary
coding. The precoder output is
dk = bk ⊕ dk-1 (modulo-2 addition)
and the precoded sequence {dk} is applied to the duobinary coder, giving
ck = dk + dk-1
The detector consists of a rectifier, the output of which is compared to a
threshold of 1 volt, and the original binary sequence is thereby detected.
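Precoding plus the 1-volt rectifier threshold can be verified end to end (illustrative Python; the polar mapping ak = 2dk - 1 and the reference digit d(-1) = 0 are assumptions of the sketch):

```python
def duobinary_precoded_tx(bits):
    """Precoder d_k = b_k XOR d_{k-1}, polar map a = 2d - 1, coder c_k = a_k + a_{k-1}."""
    d_prev = 0                # assumed reference digit
    a_prev = 2 * d_prev - 1
    out = []
    for b in bits:
        d = b ^ d_prev
        a = 2 * d - 1
        out.append(a + a_prev)  # three-level duobinary sample: -2, 0, or +2
        a_prev, d_prev = a, d
    return out

def duobinary_rx(c):
    """Rectify and compare with a 1-volt threshold: |c| < 1 -> 1, else 0."""
    return [1 if abs(v) < 1 else 0 for v in c]
```

Because each decision depends only on the current sample, a single channel error corrupts only one bit: the error propagation of the decision-feedback detector is eliminated.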
Modified duobinary technique:
It involves a correlation span of two binary digits. This is achieved by subtracting
input binary digits spaced 2Tb seconds apart:
ck = bk - bk-2
By choosing various combinations of integer values for the weights wn, we obtain different
forms of correlative coding schemes. In the duobinary case, w0 = +1, w1 = +1, and wn = 0
for n ≥ 2. In the modified duobinary case, we have w0 = +1, w1 = 0, w2 = -1, and wn = 0 for
n ≥ 3.
3. Explain the modes of operation of an adaptive equalizer. [NOV/DEC 2015]
Definition:
Equalization is the process of correcting channel-induced distortion. To realize the
full transmission capability of a telephone channel, adaptive equalization is needed.
An equalizer is said to be adaptive when it adjusts itself continuously during data
transmission by operating on the input signal.
Prechannel equalization is used at the transmitter and postchannel equalization
is used at the receiver.
As prechannel equalization requires a feedback channel, adaptive equalization
at the receiving side is considered here.
This equalization can be achieved before data transmission by training the filter
with a suitable training sequence transmitted through the channel, so as to adjust the filter
parameters to optimal values.
The adaptive equalizer consists of a tapped-delay-line filter with 100 taps or
more, and its coefficients are updated according to the LMS algorithm.
The adjustments to the filter coefficients are made in a step-by-step fashion,
synchronously with the incoming data.
Modes of operation:
(i) Training period mode (ii) decision directed mode
Training period mode
During the training period, a known sequence is transmitted and a synchronized
version of the signal is generated in the receiver. It is applied to the adaptive equalizer as
the desired response. The training sequence may be a pseudo-noise (PN) sequence, and
the length of the training sequence must be equal to or greater than the length of the
adaptive equalizer.
When the training period is completed, the adaptive equalizer is switched to the
decision-directed mode.
Decision-directed mode
The error signal equals e(nT) = b(nT) - y(nT), where y(nT) is the equalizer output
and b(nT) is the final correct estimate of the transmitted symbol.
In normal operation the decisions made by the receiver are correct with high
probability. This means that the error estimates are correct most of the time, so the
adaptive equalizer can continue to operate in the decision-directed mode.
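A minimal LMS training loop for the tapped-delay-line equalizer looks like this (Python sketch; the channel 1 + 0.4z⁻¹, the tap count, and the step size µ are illustrative choices, not values from the notes):

```python
import random

def lms_equalize(received, desired, ntaps=11, mu=0.01):
    """Train a tapped-delay-line equalizer with the LMS update rule."""
    w = [0.0] * ntaps
    buf = [0.0] * ntaps
    for x, d in zip(received, desired):
        buf = [x] + buf[:-1]                         # shift the new sample in
        y = sum(wi * bi for wi, bi in zip(w, buf))   # equalizer output
        e = d - y                                    # error vs. training symbol
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
    return w

def apply_fir(w, received):
    """Run the trained tapped-delay-line filter over a received sequence."""
    buf = [0.0] * len(w)
    out = []
    for x in received:
        buf = [x] + buf[:-1]
        out.append(sum(wi * bi for wi, bi in zip(w, buf)))
    return out
```

In the decision-directed mode the same update is used, with the receiver's symbol decisions substituted for the training symbols d.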
4. a) Determine the power spectral density of NRZ polar and unipolar data
formats. [NOV/DEC 2015, April/May 2017]
Unipolar format (on-off signaling)
The symbol 1 is represented by transmitting a pulse, whereas symbol 0 is
represented by switching off the pulse. When the pulse occupies the full duration of a
symbol, the unipolar format is said to be of the non-return-to-zero (NRZ) type. When it
occupies a fraction (usually one half) of the symbol duration, it is said to be of the
return-to-zero (RZ) type.
Polar format:
A positive pulse is transmitted for symbol 1 and a negative pulse for symbol 0. It can be
of the NRZ or RZ type. The polar waveform has no dc component, provided that 0s and
1s in the input data occur in equal proportion.
The discrete PAM signal can be written as
X(t) = Σ over k of Ak v(t - kT)…………….(1)
where the coefficient Ak is a discrete random variable, v(t) is a basic pulse
shape, and T is the symbol duration. The basic pulse v(t) is centered at the origin
t = 0 and normalized such that v(0) = 1.
The data signaling rate is defined as the rate, measured in bits per second, at
which data are transmitted. It is also common practice to refer to the data signaling
rate as the bit rate. This rate is denoted by Rb = 1/Tb, where Tb is the bit duration.
For an M-ary format, the symbol duration T is related to the bit duration Tb
by T = Tb log2 M. Correspondingly, one baud equals log2 M bits per second.
The source is characterized as having the ensemble-averaged autocorrelation function
RA(n) = E[Ak Ak-n]
where E is the expectation operator. The power spectral density of the discrete
PAM signal X(t) defined in equation (1) is given by
Sx(f) = (|V(f)|²/T) Σ over n of RA(n) exp(-j2πnfT)…………….(2)
where V(f) is the Fourier transform of the basic pulse v(t).
For the NRZ unipolar format, the symbol levels are Ak = a (symbol 1) and Ak = 0
(symbol 0), with
P(Ak = a) = P(Ak = 0) = 1/2
Hence for n = 0, we may write
RA(0) = E[Ak²] = a² P(Ak = a) + 0² P(Ak = 0) = a²/2
Consider next the product Ak Ak-n for n ≠ 0. This product has four possible values,
namely 0, 0, 0, and a². Assuming that the successive symbols in the binary sequence
are statistically independent, these four values occur with a probability of 1/4 each. Hence,
for n ≠ 0, we may write
RA(n) = E[Ak Ak-n] = a²/4, n ≠ 0…………….(3)
For the basic pulse v(t) we have a rectangular pulse of unit amplitude
and duration Tb; hence the Fourier transform of v(t) equals
V(f) = Tb sinc(fTb)…………….(4)
Hence the use of equations (3) and (4) in (2), with T = Tb, yields the following
result for the power spectral density of the NRZ unipolar format:
Sx(f) = (a²Tb/4) sinc²(fTb) [1 + Σ over n of exp(-j2πnfTb)]…………….(5)
We next use Poisson's formula written in the form
Σ over n of exp(-j2πnfTb) = (1/Tb) Σ over m of δ(f - m/Tb)…………….(6)
where δ(f) denotes a Dirac delta function at f = 0. Now, substituting equation (6) into
(5) and recognizing that the sinc function sinc(fTb) has nulls at f = ±1/Tb, ±2/Tb, ..., we
may simplify the expression for the power spectral density Sx(f) as
Sx(f) = (a²Tb/4) sinc²(fTb) + (a²/4) δ(f)…………….(7)
The presence of the Dirac delta function δ(f) accounts for one half of the power
contained in the unipolar waveform. Curve a shows a normalized plot of equation (7).
NRZ polar format:
Consider a polar format of the NRZ type, for which the binary data consist of
independent and equally likely symbols, so that
RA(n) = a² for n = 0, and 0 for n ≠ 0…………….(8)
The basic pulse v(t) for this format is the same as that for the unipolar format.
Hence the use of equations (4) and (8) in equation (2), with the symbol period T = Tb,
yields the power spectral density of the NRZ polar format as
Sx(f) = a²Tb sinc²(fTb)
The normalized form of this equation is plotted in curve b. The power of the NRZ
polar format lies inside the main lobe of the sinc-shaped curve, which extends up to the
bit rate 1/Tb.
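The autocorrelation values used in these derivations are easy to confirm by simulation (illustrative Python, not part of the notes): for unipolar symbols RA(0) = a²/2 and RA(n) = a²/4 for n ≠ 0, while for polar symbols RA(0) = a² and RA(n) = 0.

```python
import random

def autocorr(symbols, n):
    """Time-average estimate of R_A(n) = E[A_k A_{k-n}]."""
    return sum(symbols[k] * symbols[k - n]
               for k in range(n, len(symbols))) / (len(symbols) - n)

rng = random.Random(7)
# a = 1 in both cases
unipolar = [1.0 if rng.random() < 0.5 else 0.0 for _ in range(200_000)]
polar = [rng.choice([-1.0, 1.0]) for _ in range(200_000)]
```

The nonzero RA(n) tail of the unipolar sequence is exactly what produces the dc impulse (a²/4)δ(f) in equation (7); the polar sequence has no such tail and hence no dc impulse.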
4 b) Determine the power spectral density of NRZ and RZ bipolar and unipolar data
formats.
Bipolar format (pseudoternary signaling):
Positive and negative pulses are used alternately for the transmission of
1s, and no pulses for the transmission of 0s. It can be of the NRZ or RZ type. In this
representation there are three levels: +1, 0, -1. An attractive feature of this
format is the absence of a dc component, even though the input binary data may
contain long strings of 0s and 1s. This property does not hold for the unipolar
and polar formats.
NRZ bipolar format:
The bipolar format has three levels a, 0, and -a. Then, assuming that 1s and
0s occur with equal probability, the probabilities of the three levels are as follows:
P(Ak = a) = 1/4
P(Ak = 0) = 1/2
P(Ak = -a) = 1/4
Hence for n = 0, we may write
RA(0) = E[Ak²] = a²/2
Manchester format: For the Manchester format, the Fourier transform of the basic
pulse is
V(f) = jTb sinc(fTb/2) sin(πfTb/2)
Thus, substituting the corresponding equations, we find that the power spectral
density of the Manchester format is given by
Sx(f) = a²Tb sinc²(fTb/2) sin²(πfTb/2)
The normalized form of this equation is plotted in curve d. The power lies inside
a bandwidth equal to 2/Tb.
5. Write short notes on Eye pattern & Intersymbol interference. [Nov/Dec 2015, 2016]
Eye pattern: Eye patterns can be observed using an oscilloscope. The received wave is
applied to the vertical deflection plates of an oscilloscope, and a sawtooth wave at a
rate equal to the transmitted symbol rate is applied to the horizontal deflection plates;
the resulting display is called an eye pattern, as it resembles a human eye. The interior
region of the eye pattern is called the eye opening.
The width of the eye opening defines the time interval over which the received
wave can be sampled without error from ISI. It is apparent that the preferred time for
sampling is the instant of time at which the eye is open widest.
The sensitivity of the system to timing error is determined by the rate of closure
of the eye as the sampling time is varied.
The height of the eye opening at a specified sampling time is a measure of the
margin over channel noise.
Fig: Interpretation of Eye pattern
Intersymbol interference:
When the dispersed pulses originating from different symbol intervals overlap,
and the channel bandwidth is close to the signal bandwidth, the spreading of the signal
exceeds a symbol duration and causes the signals to overlap or interfere with each
other. This is known as intersymbol interference (ISI).
The incoming binary sequence {bk} consists of symbols 0 and 1, each with
duration Tb.
The pulse-amplitude modulator modulates the binary sequence into a sequence of
short pulses.
The signal is applied to the transmit filter of impulse response g(t). The
transmitted signal will be
s(t) = Σ over k of ak g(t - kTb)
The transmitted signal is modified when it is transmitted through the channel
with impulse response h(t). In addition, the channel adds random noise to the
signal at the receiver input. This signal is passed through the receiver filter. The
resultant signal is sampled synchronously with the transmitter. The sampling instants can
be determined by a clock or timing signal.
If the sample value is greater than the threshold, the decision is made in favor
of 1. If the sample value is less than the threshold, the decision is made in favor of 0. If
the sample value is equal to the threshold, the receiver makes a random guess
about which symbol was transmitted. The receiver filter output is
y(t) = µ Σ over k of ak p(t - kTb) + n(t)
where µ is a scaling factor and p(t) is the pulse to be defined.
The delay t0 due to transmission through the system should be
included with the pulse, but for simplification we take t0 to be zero. The scaled
pulse p(t) is obtained by the double convolution of the impulse response g(t) of the
transmit filter, the impulse response h(t) of the channel, and the impulse response c(t)
of the receiver filter:
p(t) = g(t) * h(t) * c(t)
Convolution in the time domain is equal to multiplication in the frequency domain:
P(f) = G(f) . H(f) . C(f)
Here n(t) is the noise produced at the output of the receive filter due to the channel
noise w(t), where w(t) is white Gaussian noise with zero mean.
The receive filter output y(t) is sampled at time ti = iTb:
y(ti) = µ Σ over k of ak p(iTb - kTb) + n(ti)
y(ti) = µ ai + µ Σ over k≠i of ak p(iTb - kTb) + n(ti)
The first term is produced by the ith transmitted bit. The second term represents
the residual effect of all other transmitted bits on the decoding of the ith bit; this residual
effect is called intersymbol interference. The last term n(ti) represents the noise sample
at ti.
In the absence of noise and ISI,
y(ti) = µ ai
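The zero-ISI condition p(iTb - kTb) = 1 for i = k and 0 otherwise is satisfied by the ideal Nyquist pulse p(t) = sinc(t/Tb); in that case the noiseless receiver samples recover the weights exactly (illustrative Python sketch):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def receive_sample(a, i, mu=1.0):
    """y(t_i) = mu * sum_k a_k p((i-k)Tb) with the ideal pulse p(t) = sinc(t/Tb)."""
    return mu * sum(ak * sinc(i - k) for k, ak in enumerate(a))
```

Every sample y(ti) equals µ ai: the ISI sum vanishes because sinc(m) = 0 for every nonzero integer m.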
2. Distinguish coherent and non-coherent reception. (May/June 16, Nov/Dec 16)
Coherent detection: the local carrier generated at the receiver is phase-locked with the
carrier at the transmitter.
Non-coherent detection: the local carrier generated at the receiver need not be
phase-locked with the carrier at the transmitter.
5. Define BER. [MAY14]
The signal gets contaminated by several undesired waveforms in the channel. The net
effect of all these degradations causes errors in detection. The performance measure of
these errors is called the bit error rate (BER).
6. How can the BER of a system be improved? [NOV12]
Increasing the transmitted signal power
Improving frequency filtering techniques
Proper modulation & demodulation techniques
Coding and decoding methods
PART-B
1. Explain the transmitter, receiver and signal space diagram of BPSK. [May/June
2016, April/May 2017]
In a coherent binary PSK system, the pair of signals S1(t) and S2(t) used
to represent binary symbols 1 and 0 are defined by
S1(t) = sqrt(2Eb/Tb) cos(2πfc t) (1)
S2(t) = sqrt(2Eb/Tb) cos(2πfc t + π) = -sqrt(2Eb/Tb) cos(2πfc t) (2)
where Eb is the transmitted signal energy per bit and fc is the carrier frequency.
A coherent binary PSK system has a signal space that is one-dimensional,
i.e., N = 1, with two message points, i.e., M = 2, as shown in figure 1. The coordinates
of the message points are
s11 = +sqrt(Eb) and s21 = -sqrt(Eb)
The signal space of figure 1 is partitioned into two regions:
the set of points closest to the message point at +sqrt(Eb), and the set of points
closest to the message point at -sqrt(Eb).
The decision is based on the observable
x1 = ∫ from 0 to Tb of x(t) φ1(t) dt (8)
The conditional probability of the receiver deciding in favor of symbol 1,
given that symbol 0 was transmitted, is therefore
P(1|0) = (1/2) erfc(sqrt(Eb/N0))
Binary PSK Transmitter:
To generate a binary PSK wave, represent the input binary sequence in polar
form, with symbols 1 and 0 represented by constant amplitude levels of +sqrt(Eb/Tb) and
-sqrt(Eb/Tb), respectively. This binary wave and a sinusoidal carrier wave φ1(t) are applied
to a product modulator as shown in figure 2.
The carrier and the timing pulses used to generate the binary wave are usually
extracted from a common master clock. The desired PSK wave is obtained at the
modulatoroutput.
Binary PSK Receiver:
To detect the original binary sequence of 1s and 0s, apply the noisy PSK wave
x(t) to a correlator, which is also supplied with a locally generated coherent
reference signal φ1(t).
BFSK: In a coherent binary FSK system, the transmitted signals are defined by
Si(t) = sqrt(2Eb/Tb) cos(2πfi t), 0 ≤ t ≤ Tb
0, elsewhere (1)
where i = 1, 2, Eb is the transmitted signal energy per bit,
and the transmitted frequency equals fi = (nc + i)/Tb for some fixed integer nc.
Thus a coherent binary FSK system has a signal space that is two-
dimensional, i.e., N = 2, with two message points, i.e., M = 2, as in figure 1.
The two message points are defined by the signal vectors
S1 = [sqrt(Eb), 0]^T and S2 = [0, sqrt(Eb)]^T
where x(t) is the received signal, the form of which depends on which symbol was
transmitted. Given that symbol 1 was transmitted, x(t) equals s1(t) + w(t), where w(t) is
white Gaussian noise of zero mean and power spectral density N0/2. If symbol 0 was
transmitted, x(t) equals s2(t) + w(t).
After applying the decision rule, the observation space is partitioned into two
decision regions, labeled Z1 and Z2, as shown in Figure 1.
Accordingly, the receiver decides in favor of symbol 1 if the received signal
point represented by the observation vector x falls inside region Z1.
This occurs when x1 > x2. If we have x1 < x2, the received signal point falls inside
region Z2 and the receiver decides in favor of symbol 0. The decision boundary separating
region Z1 from region Z2 is defined by x1 = x2.
Define the decision variable l = x1 − x2. (13)
Since the condition x1 > x2, or equivalently l > 0, corresponds to the receiver making
a decision in favor of symbol 1, we deduce that the conditional probability of error given
that symbol 0 was transmitted is given by
P(e|0) = (1/2) erfc(√(Eb/2N0))
which, by symmetry, is also the conditional probability of error given that symbol 1 was
transmitted. Hence the average probability of bit error for coherent BFSK is
Pe = (1/2) erfc(√(Eb/2N0))
In a binary PSK system, by contrast, the distance between the two message points is equal
to 2√Eb, whereas in binary FSK it is only √(2Eb).
BFSK Transmitter:
When we have symbol 1 at the input, the oscillator with frequency f2 in the lower
channel is switched off, with the result that frequency f1 is transmitted.
If instead we have symbol 0 at the input, the oscillator in the upper channel is
switched off while the oscillator in the lower channel is switched on, with the result that
frequency f2 is transmitted.
In the transmitter we assume that the two oscillators are synchronized, so that their
outputs satisfy the requirements of the two orthonormal basis functions Փ1(t) and Փ2(t), as
in equation (4).
To detect the original binary sequence given the noisy received wave x(t), the
receiver shown in Figure 3 is used.
BFSK Receiver:
It consists of two correlators with a common input, which are supplied with locally
generated coherent reference signals Փ1(t) and Փ2(t).
The correlator outputs are then subtracted, one from the other, and the resulting
difference l is compared with a threshold of zero volts.
If l > 0, the receiver decides in favor of 1. If l < 0, it decides in favor of 0.
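The correlate-subtract-threshold receiver above can be sketched numerically. This is a minimal simulation, not the analytical receiver: the tone frequencies `f1`, `f2`, the sample rate `fs` and the noise level are illustrative choices, with the tones spaced by 1/Tb so they are orthogonal over one bit interval:

```python
import numpy as np

rng = np.random.default_rng(0)
Tb, fs = 1.0, 1000                    # bit duration and sample rate (illustrative)
t = np.arange(0, Tb, 1 / fs)
f1, f2 = 5.0, 6.0                     # tones spaced by 1/Tb -> orthogonal
phi1 = np.sqrt(2 / Tb) * np.cos(2 * np.pi * f1 * t)
phi2 = np.sqrt(2 / Tb) * np.cos(2 * np.pi * f2 * t)

def detect(bit, noise_std=0.5):
    """Correlate a noisy BFSK wave against phi1 and phi2, then compare l with 0."""
    s = phi1 if bit == 1 else phi2
    x = s + noise_std * rng.normal(size=t.size)
    x1 = np.sum(x * phi1) / fs        # discrete approximation of the correlator integral
    x2 = np.sum(x * phi2) / fs
    l = x1 - x2
    return 1 if l > 0 else 0

bits = [1, 0, 1, 1, 0]
print([detect(b) for b in bits])
```

At this signal-to-noise ratio the decision variable l sits far from the zero threshold, so every bit is recovered.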
3.Explain the transmitter, receiver and signal space diagram of QPSK [Nov/Dec 2015, 2016]
As with binary PSK, QPSK is characterized by the fact that the information carried by the
transmitted wave is contained in the phase.
In quadriphase shift keying (QPSK), the phase of the carrier takes on one of four
equally spaced values, such as π/4, 3π/4, 5π/4 and 7π/4, as shown by
Si(t) = √(2E/T) cos[2πfct + (2i − 1)π/4], 0 ≤ t ≤ T
= 0 elsewhere (1)
where i = 1, 2, 3, 4 and E is the transmitted signal energy per symbol, T is the time
duration, and the carrier frequency fc equals nc/T for some fixed integer nc.
Each possible value of the phase corresponds to a unique pair of bits called a dibit,
for example the foregoing set of phase values to represent the Gray-encoded set of
dibits: 10, 00, 01 and 11.
Using a trigonometric identity, we may rewrite (1) in the equivalent form:
Si(t) = √(2E/T) cos[(2i − 1)π/4] cos(2πfct) − √(2E/T) sin[(2i − 1)π/4] sin(2πfct)
A QPSK signal has a two dimensional signal constellation, i.e., N = 2, and
four message points, i.e., M = 4, as illustrated in Figure 1.
To realize the decision rule for the detection of the transmitted data sequence,
the signal space is partitioned into four regions, each region being the set of points
closest to the message point associated with a particular signal vector.
The received signal is
x(t) = si(t) + w(t), i = 1, 2, 3, 4
and the correlator outputs are
x1 = ∫0^T x(t)Փ1(t) dt = √E cos[(2i − 1)π/4] + w1 = ±√(E/2) + w1
and
x2 = ∫0^T x(t)Փ2(t) dt = −√E sin[(2i − 1)π/4] + w2 = ∓√(E/2) + w2
The values w1 and w2 are sample values of independent Gaussian random variables
with zero mean and variance N0/2. The probability of correct reception is
Pc = [1 − (1/2) erfc(√(E/2N0))]²
where the first factor on the right side is the conditional probability of the event
x1 > 0 and the second factor is the conditional probability of the event
x2 > 0, both given that signal s1(t) was transmitted. Hence
Pe = 1 − Pc = erfc(√(E/2N0)) − (1/4) erfc²(√(E/2N0)) (14)
In the region where E/2N0 ≫ 1, we may ignore the second term on the right side of equation
(14) and so approximate the formula for the average probability of symbol error for
coherent QPSK as
Pe ≈ erfc(√(E/2N0)) (15)
In a QPSK system we note that there are two bits per symbol. This means that
the transmitted signal energy per symbol is twice the signal energy per bit, that is,
E = 2Eb (16)
Thus, expressing the average probability of symbol error in terms of the
ratio Eb/N0, we may write
Pe ≈ erfc(√(Eb/N0))
QPSK Transmitter:
The input binary sequence is represented in polar form, with symbols 1 and
0 represented by +√Eb and −√Eb volts respectively.
This binary wave is divided by means of a demultiplexer into two separate
binary waves consisting of the odd and even numbered input bits.
These two binary waves are denoted by a1(t) and a2(t).
In any signaling interval, the amplitudes of a1(t) and a2(t) equal si1 and si2,
and they modulate the quadrature carriers Փ1(t) and Փ2(t).
The result is a pair of binary PSK waves, which may be detected independently
due to the orthogonality of Փ1(t) and Փ2(t).
Finally, the two binary PSK waves are added to produce the desired QPSK
wave. Note that the symbol duration T of a QPSK wave is twice as long as the bit
duration Tb of the corresponding binary PSK wave.
That is, for a given bit rate, a QPSK wave requires half the transmission
bandwidth of the corresponding binary PSK wave. Equivalently, for a given transmission
bandwidth, a QPSK wave carries twice as many bits of information as the corresponding
binary PSK wave.
QPSK Receiver
The QPSK receiver consists of a pair of correlators with a common input and
supplied with a locally generated pair of coherent reference signals Փ1(t) andՓ2(t)
The correlator outputs and are each compared with a threshold of zero volts.
Finally these two binary sequences at the in-phase and quadrature channel
outputs are combined in a multiplexer to reproduce the original binary sequence at the
transmitter input with the minimum probability of symbolerror.
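The dibit-to-phase mapping and the independent sign detection in the two channels can be sketched as a noiseless round trip. The dibit ordering below follows the Gray set {10, 00, 01, 11} quoted earlier; the exact assignment of dibits to phases is an assumption for illustration:

```python
import numpy as np

# Gray-mapped QPSK: dibit -> index i, phase (2i-1)*pi/4
GRAY = {"10": 1, "00": 2, "01": 3, "11": 4}

def qpsk_point(dibit):
    """Map a dibit to its constellation point (x1, x2) = (cos phase, -sin phase)."""
    phase = (2 * GRAY[dibit] - 1) * np.pi / 4
    return np.cos(phase), -np.sin(phase)

def qpsk_detect(x1, x2):
    """The in-phase and quadrature channels are sign-detected independently."""
    for dibit in GRAY:
        p1, p2 = qpsk_point(dibit)
        if (p1 > 0) == (x1 > 0) and (p2 > 0) == (x2 > 0):
            return dibit

dibits = "10 00 01 11".split()
received = [qpsk_point(d) for d in dibits]       # noiseless channel
print([qpsk_detect(x1, x2) for x1, x2 in received])
```

Because each dibit occupies its own quadrant, the two sign decisions together identify the transmitted symbol.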
Let S2(t) denote the transmitted DPSK signal for 0 ≤ t ≤ 2Tb for the case when
we have binary symbol 0 at the transmitter input for Tb ≤ t ≤ 2Tb. The
transmission of 0 advances the carrier phase by 180°, and so we define S2(t).
The average probability of error for DPSK is
Pe = (1/2) exp(−Eb/N0)
The next issue is the generation and demodulation of DPSK. The differential
encoding process at the transmitter input starts with an arbitrary first bit serving as
reference, and thereafter the differentially encoded sequence {dk} is generated by using the
logical equation
dk = bk dk−1 + b̄k d̄k−1
where bk is the input binary digit at time kTb and dk−1 is the previous value of the
differentially encoded digit. The use of an overbar denotes logical inversion. The
following table illustrates the logical operation involved in the use of this equation,
assuming that the reference bit added to the differentially encoded sequence {dk} is a 1.
The differentially encoded sequence {dk} thus generated is used to phase-shift key a
carrier with the phase angles 0 and π radians.
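The logical equation above is the XNOR of bk and dk−1, so the encoding table can be generated programmatically. A minimal sketch, using 1 as the reference bit as stated:

```python
def dpsk_encode(bits, reference=1):
    """Differentially encode: dk = XNOR(bk, dk-1), seeded with a reference bit."""
    d = [reference]
    for b in bits:
        d.append(1 - (b ^ d[-1]))       # XNOR = complement of XOR
    return d

def dpsk_decode(d):
    """Recover bk by comparing adjacent encoded bits: bk = XNOR(dk, dk-1)."""
    return [1 - (d[i] ^ d[i - 1]) for i in range(1, len(d))]

msg = [1, 0, 0, 1, 0, 0, 1, 1]
enc = dpsk_encode(msg)
print("encoded:", enc)
print("round trip ok:", dpsk_decode(enc) == msg)
```

The decoder only compares adjacent bits, which is why DPSK needs no absolute phase reference at the receiver.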
Si(t) = √(2E0/T) ai cos(2πfct) − √(2E0/T) bi sin(2πfct), 0 ≤ t ≤ T
where E0 is the energy of the signal with the lowest amplitude, and ai and bi are a
pair of independent integers chosen in accordance with the location of the pertinent
message point. The signal Si(t) consists of two phase-quadrature carriers, each of which
is modulated by a set of discrete amplitudes, hence the name quadrature amplitude
modulation.
The basis functions are
Փ1(t) = √(2/T) cos(2πfct) and Փ2(t) = √(2/T) sin(2πfct)
The probability of error for one quadrature channel is
Pe′ = (1 − 1/L) erfc(√(E0/N0))
where L is the square root of M.
The probability of symbol error for M-ary QAM is
given by Pe = 1 − Pc
= 1 − (1 − Pe′)²
≈ 2Pe′
where it is assumed that Pe′ is small compared to unity. We thus find that the
probability of symbol error for M-ary QAM is given by
Pe ≈ 2(1 − 1/L) erfc(√(E0/N0))
The transmitted energy in M-ary QAM is variable, in that its instantaneous value
depends on the particular symbol transmitted. It is therefore logical to express Pe in terms
of the average value of the transmitted energy rather than E0. Assuming that the L amplitude
levels of the in-phase or quadrature component are equally likely, we have
Eav = 2[(2E0/L) Σ_{i=1}^{L/2} (2i − 1)²]
where the multiplying factor 2 accounts for the equal contribution made by the in-phase
and quadrature components. The limits of the summation take account of the
symmetric nature of the pertinent amplitude levels around zero. Evaluating the sum, we get
Eav = 2(L² − 1)E0/3
Accordingly, in terms of Eav,
Pe ≈ 2(1 − 1/L) erfc(√(3Eav/2(L² − 1)N0))
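The final formula above can be evaluated for different constellation sizes. A minimal sketch, assuming a square constellation (L = √M an integer) and Eav/N0 supplied in dB for convenience:

```python
import math

def mqam_pe(m, eav_n0_db):
    """Approximate symbol error probability of square M-ary QAM:
    Pe = 2(1 - 1/L) erfc( sqrt( 3*Eav / (2*(L^2 - 1)*N0) ) ), L = sqrt(M)."""
    L = math.isqrt(m)
    assert L * L == m, "square constellation expected"
    ratio = 10 ** (eav_n0_db / 10)      # Eav/N0 in linear units
    return 2 * (1 - 1 / L) * math.erfc(math.sqrt(3 * ratio / (2 * (L * L - 1))))

# At a fixed Eav/N0, larger constellations pay a higher symbol error rate
for m in (4, 16, 64):
    print(m, f"{mqam_pe(m, 14):.3e}")
```

Note that for M = 4 the formula reduces to erfc(√(Eav/2N0)), consistent with the QPSK result derived earlier.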
The serial-to-parallel converter accepts a binary sequence at a bit rate Rb = 1/Tb
and produces two parallel binary sequences whose bit rates are Rb/2 each. The 2-to-L
level converter then maps each group of bits in a parallel sequence onto one of the L
amplitude levels.
Message vector
m = [m0, m1, ..., mk−1] (1 × k)
Parity bits
b = [b0, b1, ..., bn−k−1] (1 × (n − k))
Code word
x = [x0, x1, ..., xn−1] (1 × n)
Coefficient matrix
P (k × (n − k))
b = mP
Identity matrix
Ik (k × k)
Generator matrix
x = [b ⋮ m]
x = [mP ⋮ mIk]
x = m[P ⋮ Ik]
x = mG
G = [P ⋮ Ik] (k × n)
Parity check matrix
H = [In−k ⋮ Pᵀ] ((n − k) × n)
To prove the use of the parity check matrix:
We know that x = mG, and
HGᵀ = Pᵀ + Pᵀ = 0 (mod 2)
so every code word satisfies xHᵀ = mGHᵀ = 0.
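The systematic construction G = [P ⋮ Ik], H = [In−k ⋮ Pᵀ] and the identity GHᵀ = 0 can be checked numerically. A minimal sketch using the parity equations of the (7,4) example worked later in this section (b1 = m1+m3+m4, b2 = m1+m2+m4, b3 = m2+m3+m4):

```python
import numpy as np

# P matches the parity equations b1=m1+m3+m4, b2=m1+m2+m4, b3=m2+m3+m4
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
k, q = P.shape                              # k message bits, q = n-k parity bits
G = np.hstack([P, np.eye(k, dtype=int)])    # G = [P | I_k]
H = np.hstack([np.eye(q, dtype=int), P.T])  # H = [I_{n-k} | P^T]

# G H^T = P + P = 0 (mod 2), so every codeword x = mG satisfies x H^T = 0
print((G @ H.T) % 2)

m = np.array([1, 0, 1, 1])
x = (m @ G) % 2
print("codeword:", x, "syndrome:", (x @ H.T) % 2)
```

The zero matrix and zero syndrome confirm that every row of G (and hence every codeword) lies in the null space of H.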
Syndrome Decoding
y = x + e
y – received vector
e – error pattern
S = yHᵀ
Important properties of the syndrome
Property 1:
The syndrome depends only on the error pattern and not on the transmitted
code word.
S = yHᵀ
S = (x + e)Hᵀ
= xHᵀ + eHᵀ
= 0 + eHᵀ
S = eHᵀ
Property 2:
All error patterns that differ at most by a code word have the same syndrome.
ei = e + xi, i = 0, 1, 2, ...
Multiplying by Hᵀ:
eiHᵀ = eHᵀ + xiHᵀ
= eHᵀ + 0
= eHᵀ
Property 3:
The syndrome S is the sum of those columns of the matrix H corresponding to
the error locations.
H = [h0, h1, ..., hn−1]
For an error pattern e = [e0, e1, ..., en−1],
S = eHᵀ = Σ_{i: ei = 1} hi
Property 4:
With syndrome decoding, an (n, k) LBC can correct up to t errors per code word,
provided n and k satisfy the Hamming bound
2^(n−k) ≥ Σ_{i=0}^{t} (n choose i)
where (n choose i) = n! / (i!(n − i)!).
Minimum Distance Considerations
Hamming distance – the number of locations in which the respective elements of
two code words differ.
Hamming weight – the number of nonzero elements in the code
vector.
Minimum distance (dmin) – the minimum distance of a linear block code is the
smallest Hamming weight of the nonzero code vectors.
Error detection – the code can detect up to s errors provided dmin ≥ s + 1.
Error correction – the code can correct up to t errors provided dmin ≥ 2t + 1.
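For a linear code, dmin equals the smallest weight over all nonzero codewords, so it can be found by brute force over the 2^k messages. A minimal sketch using the same (7,4) example P as above:

```python
from itertools import product
import numpy as np

P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])   # systematic (7,4) generator matrix

# dmin of a linear code = smallest Hamming weight over all nonzero codewords
weights = [int(((np.array(m) @ G) % 2).sum())
           for m in product([0, 1], repeat=4) if any(m)]
dmin = min(weights)
print("dmin =", dmin)
print("detects s <= dmin - 1 =", dmin - 1, "errors")
print("corrects t <= (dmin - 1)//2 =", (dmin - 1) // 2, "errors")
```

For this code dmin = 3, so it detects up to 2 errors and corrects 1, matching the capability quoted for the (7,4) code below.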
Inverse mapping the channel output sequence into an output data sequence in
such a way that the overall effect of channel noise on the system is minimized.
The mapping operation is performed in the transmitter by means of an encoder,
whereas the inverse mapping operation is performed in the receiver by means of
a decoder.
Source coding reduces redundancy to improve efficiency, whereas
channel coding introduces controlled redundancy to improve
reliability.
The message sequence is subdivided into sequential blocks, each k bits long.
Each k-bit block is mapped into an n-bit block,
where n > k. The number of redundant bits added by the encoder to each
transmitted block is n − k bits.
The ratio k/n is called the code rate:
r = k/n
where r is less than unity.
Statement: The channel coding theorem for a discrete memoryless channel is
stated in two parts as follows.
Let a discrete memoryless source with an alphabet ζ have entropy H(ζ)
and produce symbols once every Ts seconds. Let a discrete memoryless channel have
capacity C and be used once every Tc seconds. If
H(ζ)/Ts ≤ C/Tc
there exists a coding scheme for which the source output can be transmitted over
the channel and be reconstructed with an arbitrarily small probability of error. The
parameter C/Tc is called the critical rate. Conversely, if
H(ζ)/Ts > C/Tc
it is not possible to transmit information over the channel and reconstruct
it with an arbitrarily small probability of error.
NOTE:
The channel coding theorem does not show us how to construct a good code.
Rather, it tells us that if the condition
r ≤ C
is satisfied, then good codes do exist.
3.For a (6,3) systematic linear block code, the code word comprises I1, I2, I3, P1, P2,
P3, where the three parity check bits P1, P2 and P3 are formed from the information
bits as follows:
P1 = I1 ⊕ I2
P2 = I1 ⊕ I3
P3 = I2 ⊕ I3
Find
i. The parity check matrix
ii. The generator matrix
iii. All possible code words
iv. Minimum weight and minimum distance
v. The error detecting and correcting capability of the code
vi. If the received sequence is 100000, calculate the syndrome and
decode the received sequence. (16)
[DEC 10]
Solution:
Parity Check Matrix:
Given: n = 6, k = 3
H = [Pᵀ ⋮ I3]
Generator Matrix:
G = [I3 ⋮ P]
SYNDROME TABLE:
SYNDROME ERROR PATTERN
000 000000
110 100000
101 010000
011 001000
100 000100
010 000010
001 000001
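The syndrome table above pairs each single-error pattern with a column of H, so decoding reduces to a table lookup. A minimal sketch, assuming the parity equations reconstructed above (P1 = I1+I2, P2 = I1+I3, P3 = I2+I3) and codewords ordered [I1 I2 I3 P1 P2 P3]:

```python
import numpy as np

# Assumed coefficient matrix for the (6,3) code: b = mP
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
G = np.hstack([np.eye(3, dtype=int), P])    # G = [I_3 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # H = [P^T | I_3]

def decode(y):
    """Syndrome decoding: S = y H^T selects the matching single-error pattern."""
    y = np.array(y)
    s = tuple((y @ H.T) % 2)
    for j in range(6):                       # try each single-bit error pattern
        e = np.zeros(6, dtype=int)
        e[j] = 1
        if tuple((e @ H.T) % 2) == s:
            return (y + e) % 2               # flip the located bit
    return y                                 # zero syndrome: accept as-is

print(decode([1, 0, 0, 0, 0, 0]))            # syndrome 110 -> flip bit 1
```

For the received sequence 100000 the syndrome is 110, matching the table row for error pattern 100000, so the decoded codeword is 000000.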
4.Consider a (7,4) linear block code whose parity check matrix is given by
H =
Pᵀ =
P =
G =
Given k = 4
b. Error Detection:
b1 = m1 ⊕ m2 ⊕ m3
b2 = m1 ⊕ m2 ⊕ m4
b3 = m1 ⊕ m3 ⊕ m4
dmin = 3
Since dmin − 1 = 2, it can detect up to 2 errors.
c. Error Correction:
t = (dmin − 1)/2 = 1, so the code can correct 1 error.
5.Encode the messages 1010, 1111 and 1000 using the (7,4) cyclic code with
generator polynomial g(x) = 1 + x + x³.
Consider data vector 1010:
m1(x) = 1 + x²
Step 1:
x^(n−k) = x^(7−4) = x³
x³m1(x) = x³ + x⁵
Step 2:
(x³ + x⁵) ÷ (1 + x + x³)
Quotient = q(x) = x²
Remainder = R(x) = x²
Step 3:
C1(x) = x² + x³ + x⁵
C1 = 0011010
Consider data vector 1111:
m2(x) = 1 + x + x² + x³
Step 1:
x^(n−k) = x^(7−4) = x³
x³m2(x) = x³(1 + x + x² + x³) = x³ + x⁴ + x⁵ + x⁶
Step 2:
Quotient = q(x) = x³ + x² + 1
Remainder = R(x) = x² + x + 1
= 1 + x + x²
Step 3:
C2(x) = 1 + x + x² + x³ + x⁴ + x⁵ + x⁶
C2 = 1111111
Consider data vector 1000:
m3 = 1000
m3(x) = 1
Step 1:
x³m3(x) = x³(1) = x³
Step 2:
x³ ÷ (1 + x + x³)
Quotient = q(x) = 1
Remainder = R(x) = x + 1
Step 3:
Add the remainder R(x) to x³m3(x):
C3(x) = (x + 1) + x³
= 1 + x + x³
C3 = 1101000
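The three-step systematic encoding above (shift by x^(n−k), divide by g(x), append the remainder) can be implemented with GF(2) polynomial long division. A minimal sketch, with polynomials held as low-to-high coefficient lists and g(x) = 1 + x + x³ as in the problem:

```python
def gf2_div(dividend, divisor):
    """Long division of GF(2) polynomials (low-to-high coefficient lists)."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                              # subtract (= XOR) a shifted divisor
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]               # remainder, degree < deg(g)

def cyclic_encode(m, g=(1, 1, 0, 1)):         # g(x) = 1 + x + x^3
    """Systematic encoding: codeword(x) = R(x) + x^(n-k) m(x)."""
    shifted = [0] * (len(g) - 1) + list(m)    # x^(n-k) * m(x)
    rem = gf2_div(shifted[:], list(g))
    return rem + shifted[len(g) - 1:]

print(cyclic_encode([1, 0, 1, 0]))    # m1(x) = 1 + x^2
print(cyclic_encode([1, 0, 0, 0]))    # m3(x) = 1 -> C3 = 1101000
```

Running this reproduces the hand computations: the remainders x², 1 + x + x² and x + 1, and hence the codewords 0011010, 1111111 and 1101000.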
6.Consider a (7,4) linear block code with the parity check matrix
H = [1 0 0 1 0 1 1
     0 1 0 1 1 0 1
     0 0 1 0 1 1 1]
We know that H = [In−k ⋮ Pᵀ].
From the above equation,
COEFFICIENT MATRIX:
Pᵀ = [1 0 1 1
      1 1 0 1
      0 1 1 1]
P = [1 1 0
     0 1 1
     1 0 1
     1 1 1]
GENERATOR MATRIX:
Given n = 7, k = 4
G = [P ⋮ Ik] = [1 1 0 1 0 0 0
                0 1 1 0 1 0 0
                1 0 1 0 0 1 0
                1 1 1 0 0 0 1]
All possible code words:
b = mP
Number of parity bits = n − k = 7 − 4 = 3
Number of message bits = k = 4
b1 = m1 ⊕ m3 ⊕ m4
b2 = m1 ⊕ m2 ⊕ m4
b3 = m2 ⊕ m3 ⊕ m4
Check the Hamming condition n = 2^q − 1, where q = n − k:
7 = 2³ − 1
7 = 8 − 1
7 = 7 ✓ Yes
3) Number of message bits:
k = n − q = 7 − 3 = 4
SYNDROME TABLE:
SYNDROME ERROR PATTERN
000 0000000
100 1000000
010 0100000
001 0010000
110 0001000
011 0000100
101 0000010
111 0000001
7.A convolutional encoder has generator vectors g1 = (1, 1, 1) and
g2 = (1, 0, 1). The input message is m = 10011. Find the output sequence.
Encoder:
Rate = 1/2 (1 input and 2 outputs)
Dimension of the code: (n, k) = (2, 1)
Code rate: r = k/n = 1/2
Constraint length:
Definition:
Number of shifts over which a message bit can influence the encoder output.
Here it is 3.
Output sequence: Given
generator vectors g1 = (1, 1, 1) and
g2 = (1, 0, 1),
input message m = 10011.
In Polynomial Representation
g1(D) = 1 + D + D²
g2(D) = 1 + (0)D + D² = 1 + D²
m(D) = 1 + (0)D + (0)D² + D³ + D⁴
= 1 + D³ + D⁴
Output of Upper Path
x1(D) = m(D) g1(D)
= (1 + D³ + D⁴)(1 + D + D²)
= 1 + D + D² + D³ + D⁴ + D⁵ + D⁴ + D⁵ + D⁶
= 1 + D + D² + D³ + D⁶
x1 = {1 1 1 1 0 0 1}
Output of Lower Path
x2(D) = m(D) g2(D)
= (1 + D³ + D⁴)(1 + D²)
= 1 + D² + D³ + D⁵ + D⁴ + D⁶
= 1 + D² + D³ + D⁴ + D⁵ + D⁶
x2 = {1 0 1 1 1 1 1}
Overall output
The switch moves between the upper and lower paths alternately.
Code word = {11 10 11 11 01 01 11}
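The polynomial-domain encoding above is GF(2) polynomial multiplication followed by interleaving of the two path outputs. A minimal sketch that reproduces the worked example:

```python
def gf2_mul(a, b):
    """Multiply GF(2) polynomials given as low-to-high coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 0, 1, 1]       # m(D)  = 1 + D^3 + D^4
g1 = [1, 1, 1]             # g1(D) = 1 + D + D^2
g2 = [1, 0, 1]             # g2(D) = 1 + D^2

x1 = gf2_mul(m, g1)        # upper-path output
x2 = gf2_mul(m, g2)        # lower-path output
codeword = [b for pair in zip(x1, x2) for b in pair]   # alternate the two paths
print(x1, x2)
print(codeword)
```

The printed sequences match the hand result: x1 = 1111001, x2 = 1011111 and the interleaved code word 11 10 11 11 01 01 11.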
Generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1); input message m = 10011
(time-domain convolution approach).
Dimension of the code:
The encoder takes 1 input at a time, so k = 1.
It generates 2 output bits, so n = 2.
Dimension = (n, k) = (2, 1)
Code rate: r = k/n = 1/2
Constraint length:
Number of shifts over which a message bit can influence
the encoder output.
Here it is 3.
The top branch output is x1(i) = Σl g1(l) m(i − l), computed modulo 2 for
i = 0, 1, 2, 3, 4, 5, 6 and l = 0, 1, 2, with m = (1, 0, 0, 1, 1):
i = 0: x1(0) = 1·1 = 1
i = 1: x1(1) = 1·0 + 1·1 = 1
i = 2: x1(2) = 1·0 + 1·0 + 1·1 = 1
i = 3: x1(3) = 1·1 + 1·0 + 1·0 = 1
i = 4: x1(4) = 1·1 + 1·1 + 1·0 = 0
i = 5: x1(5) = 1·1 + 1·1 = 0
i = 6: x1(6) = 1·1 = 1
The top branch output sequence is x1 = {1 1 1 1 0 0 1}.
The bottom branch output is x2(i) = Σl g2(l) m(i − l), for
i = 0, 1, 2, 3, 4, 5, 6 and l = 0, 1, 2:
i = 0: x2(0) = 1·1 = 1
i = 1: x2(1) = 1·0 + 0·1 = 0
i = 2: x2(2) = 1·0 + 0·0 + 1·1 = 1
i = 3: x2(3) = 1·1 + 0·0 + 1·0 = 1
i = 4: x2(4) = 1·1 + 0·1 + 1·0 = 1
i = 5: x2(5) = 0·1 + 1·1 = 1
i = 6: x2(6) = 1·1 = 1
The bottom branch output sequence is x2 = {1 0 1 1 1 1 1}.
Overall output
The switch moves between the upper and lower paths alternately.
Code word = {11 10 11 11 01 01 11}
k = 1, n = 3
Using the generator polynomials, x1 = m,
x2 = m ⊕ m1 ⊕ m2, x3 = m ⊕ m2
Encoder:
Code Tree, Trellis & State Diagram: Assume
00 a
01 b
10 c
11 d
State Table (x1 = m, x2 = m ⊕ m1 ⊕ m2, x3 = m ⊕ m2):
S.No  Current state (m2 m1)  Input m  Output (x1 x2 x3)  Next state (m1 m)
1     a = 0 0                0        0 0 0              0 0 = a
                             1        1 1 1              0 1 = b
2     b = 0 1                0        0 1 0              1 0 = c
                             1        1 0 1              1 1 = d
3     c = 1 0                0        0 1 1              0 0 = a
                             1        1 0 0              0 1 = b
4     d = 1 1                0        0 0 1              1 0 = c
                             1        1 1 0              1 1 = d
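The state table can be generated mechanically from the three output equations: the state is the register pair (m2, m1), and each shift emits (x1, x2, x3) and moves to the state (m1, m). A minimal sketch:

```python
# Rate-1/3 encoder: x1 = m, x2 = m ^ m1 ^ m2, x3 = m ^ m2, state = (m2, m1)
STATES = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}

def step(state, m):
    """One shift of the encoder: returns (output bits, next state)."""
    m2, m1 = state
    out = (m, m ^ m1 ^ m2, m ^ m2)
    return out, (m1, m)

for state in STATES:
    for m in (0, 1):
        out, nxt = step(state, m)
        print(STATES[state], m, "->", "".join(map(str, out)), STATES[nxt])
```

Each printed line corresponds to one row of the table; for example, state b with input 0 emits 010 and moves to state c, and the same `step` function is exactly what a trellis or code-tree traversal would iterate.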