Modern Digital Communication: Analog Source Coding

MODERN DIGITAL COMMUNICATION

Analog Stationary Sources

- Dr. P. Susheelkumar S,
- Faculty – Dept. of Electronics Engineering
- Datta Meghe College of Engineering
- Airoli, Navi Mumbai
Analog Sources:

Coding techniques for analog sources are of 3 types:

1) Temporal Waveform Coding: the source encoder is designed to represent digitally the temporal characteristics of the source waveform.
2) Spectral Waveform Coding: the signal waveform is usually subdivided into different frequency bands, and either the time waveform or the spectral characteristics of each band are encoded for transmission.
3) Model-Based Coding: based on a mathematical model representation of the source.
Temporal Waveform Coding covers several methods, beginning with PCM.

Pulse Code Modulation (PCM):

1) Let x(t) be a sample function emitted by a source, and let x_n denote the samples taken at a sampling rate f_s ≥ 2W, where W is the highest frequency in the spectrum of x(t).
2) Each sample is quantized to one of 2^R amplitude levels, where R is the number of bits used to represent each sample.
3) Thus the bit rate from the source is R·f_s bits/s.
Assuming a uniform quantizer with the input-output characteristic shown below, the quantization noise can be characterized statistically by the uniform pdf

$$p(q) = \frac{1}{\Delta}, \qquad -\frac{\Delta}{2} \le q \le \frac{\Delta}{2}$$

where the step size is Δ = 2^{-R} (taking the signal amplitude normalized to unit range).
The mean square value of the quantization error is expressed as:

$$E(q^2) = \int_{-\Delta/2}^{\Delta/2} q^2\, p(q)\, dq = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2\, dq = \frac{1}{\Delta}\left[\frac{q^3}{3}\right]_{-\Delta/2}^{\Delta/2} = \frac{1}{\Delta}\cdot\frac{2}{3}\cdot\frac{\Delta^3}{8} = \frac{\Delta^2}{12} = \frac{2^{-2R}}{12}$$
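The Δ²/12 result can be checked numerically. The sketch below (in Python; the parameter choices are our own, for illustration) quantizes uniformly distributed samples with a uniform quantizer and compares the empirical error power with the formula:

```python
import numpy as np

rng = np.random.default_rng(0)

R = 4                        # bits per sample
delta = 2.0 ** (-R)          # step size for a signal normalized to unit range

# Draw samples uniformly in [0, 1) and apply a uniform quantizer that
# maps each sample to the midpoint of its quantization interval.
x = rng.uniform(0.0, 1.0, size=1_000_000)
xq = (np.floor(x / delta) + 0.5) * delta
q = xq - x                   # quantization error, lies in (-delta/2, delta/2]

empirical_mse = np.mean(q ** 2)
theoretical_mse = delta ** 2 / 12.0      # = 2**(-2R) / 12
print(empirical_mse, theoretical_mse)
```

With a million samples the empirical value agrees with Δ²/12 to within simulation noise.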
Differential Pulse Code Modulation (DPCM):

1) In PCM, each sample is encoded independently of the others.
2) But most source signals, sampled at the Nyquist rate or faster, exhibit strong correlation between successive samples.
3) In other words, the average change in amplitude between successive samples is very small. DPCM exploits this redundancy in the samples, resulting in a lower bit rate for the source output.
4) Thus the simple solution would be to encode the difference between successive samples.
5) A refinement of this approach is the prediction model, i.e. predicting the current sample from the previous p samples.
6) Let x_n denote the current sample and x̂_n its predicted value.
Mathematically, the predicted value x̂_n is a weighted linear combination of the past p samples:

$$\hat{x}_n = \sum_{i=1}^{p} a_i x_{n-i}$$

where the a_i are the predictor coefficients, selected to minimize an error function between x_n and x̂_n; the mean square error (MSE) is the most practical and convenient choice:

$$\mathcal{E}_p = E\!\left[(x_n - \hat{x}_n)^2\right] = E\!\left[\Big(x_n - \sum_{i=1}^{p} a_i x_{n-i}\Big)^{\!2}\right]$$

Minimizing the MSE with respect to each a_i yields the set of linear equations

$$\sum_{i=1}^{p} a_i\, \phi(i-l) = \phi(l), \qquad l = 1, 2, \dots, p \qquad \text{(Eq. 1)}$$

where φ(m) is the autocorrelation function of the sampled source. The values of the predictor coefficients are established from the above equations.
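As a sketch of how these equations are solved in practice (function and variable names below are our own, not from the slides), one can estimate the autocorrelation from the data and solve the resulting Toeplitz system:

```python
import numpy as np

def predictor_coefficients(x, p):
    """Estimate the autocorrelation phi(m) from the samples and solve the
    linear equations of Eq. 1 for the coefficients a_1..a_p.
    (Illustrative sketch; names are our own.)"""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Biased autocorrelation estimate: phi(m) = (1/N) * sum_n x_n x_{n+m}
    phi = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(p + 1)])
    # Toeplitz system: sum_i a_i * phi(i - l) = phi(l),  l = 1..p
    Phi = np.array([[phi[abs(i - l)] for i in range(1, p + 1)]
                    for l in range(1, p + 1)])
    return np.linalg.solve(Phi, phi[1:])

# Example: first-order autoregressive source x_n = 0.9 x_{n-1} + w_n.
# A one-tap predictor should recover a coefficient close to 0.9.
rng = np.random.default_rng(1)
x = np.zeros(50_000)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
a = predictor_coefficients(x, p=1)
print(a)
```

For large p, the Toeplitz structure of the system would normally be exploited (Levinson-Durbin recursion) rather than solved by general elimination.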
But when the autocorrelation function is not known a priori, it is estimated from the samples x_n using the expression

$$\hat{\phi}(m) = \frac{1}{N}\sum_{n=1}^{N-m} x_n x_{n+m} \qquad \text{(Eq. 2)}$$

and these estimates are substituted into Eq. 1.
(a) DPCM Encoder. The relevant relations are:

$$e_n = x_n - \hat{x}_n = x_n - \sum_{k=1}^{p} a_k \tilde{x}_{n-k}$$

$$q_n = \tilde{e}_n - e_n = \tilde{e}_n - (x_n - \hat{x}_n) = \tilde{x}_n - x_n \qquad \text{(quantization error)}$$

$$\tilde{x}_n = \hat{x}_n + \tilde{e}_n$$

The predictor is implemented with a feedback loop around the quantizer.

The input to the predictor is x̃_n, which represents the quantized value of the sampled signal x_n. The output of the predictor is:

$$\hat{x}_n = \sum_{i=1}^{p} a_i \tilde{x}_{n-i}$$

The difference e_n = x_n − x̂_n is the input to the quantizer, and ẽ_n denotes the quantizer output, which is the quantity to be sent.



Each value of the quantized prediction error ẽ_n is encoded into a sequence of binary digits and transmitted over the channel to the destination.

At the encoder, the quantized error ẽ_n is also added to the predicted value x̂_n to get x̃_n.

At the receiver, the same predictor is synthesized, and its output x̂_n is added to ẽ_n to get x̃_n.

The signal x̃_n is the desired excitation for the predictor and also the desired output sequence from which the reconstructed signal x̃(t) is obtained by filtering.
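The encoder/decoder loop described above can be sketched as follows (a toy model with a fixed predictor and a round-to-nearest uniform quantizer; all names and parameter values are illustrative, not from the slides):

```python
import numpy as np

def dpcm_codec(x, a, delta):
    """Toy DPCM loop following the relations above: e_n = x_n - x_hat_n,
    e~_n = Q(e_n), x~_n = x_hat_n + e~_n, with the predictor fed by the
    reconstructed samples x~ rather than the originals."""
    p = len(a)
    hist = np.zeros(p)                    # x~_{n-1}, ..., x~_{n-p}
    x_tilde = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        x_hat = np.dot(a, hist)                  # prediction
        e = xn - x_hat                           # prediction error
        e_tilde = delta * np.round(e / delta)    # quantized error (sent)
        x_tilde[n] = x_hat + e_tilde             # reconstruction
        hist = np.roll(hist, 1)
        hist[0] = x_tilde[n]
    return x_tilde

# Since x~_n - x_n = q_n, the reconstruction error of this loop is bounded
# by delta/2 no matter how good or bad the predictor is.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t)
xr = dpcm_codec(x, a=np.array([0.95]), delta=0.01)
print(np.max(np.abs(xr - x)))
```

Feeding the predictor with x̃ (not x) is what keeps the encoder and decoder in lockstep: both sides compute the same x̂_n from the same reconstructed history.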
Analysis shows that an improvement in the quality of the estimate is obtained by also including linearly filtered past values of the quantized error. The predicted value x̂_n can then be expressed as:

$$\hat{x}_n = \sum_{i=1}^{p} a_i \tilde{x}_{n-i} + \sum_{i=1}^{m} b_i \tilde{e}_{n-i}$$

The two sets of coefficients {a_i} and {b_i} are selected to minimize some function of the error e_n = x_n − x̂_n, such as the mean square error (MSE).
In both PCM and DPCM, the quantization error q_n resulting from a uniform quantizer operating on a quasi-stationary input signal (one whose variance and autocorrelation function vary slowly with time) will have a time-variant variance (quantization noise power).

An adaptive quantizer can reduce this dynamic range of the quantization noise.

A simple approach to implementing an adaptive quantizer is to vary the step size of a uniform quantizer in accordance with the variance of the past signal samples.

For example, a short-term running estimate of the variance of x_n can be computed from the input sequence {x_n}, and the step size adjusted based on that estimate.

In its simplest form, the algorithm for step-size adjustment employs only the previous signal sample. The figure below represents a 3-bit quantizer in which the step size is adjusted recursively according to the relation:

$$\Delta_{n+1} = \Delta_n M(n)$$

where M(n) is a multiplication factor whose value depends on the quantizer level used for the previous sample.
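A step-size recursion of this one-word-memory kind can be sketched as follows (the multiplier values below are illustrative placeholders chosen by us, not taken from the slides):

```python
import numpy as np

def adaptive_quantizer(x, delta0=0.05):
    """Sketch of a 3-bit adaptive quantizer whose step size follows
    delta_{n+1} = delta_n * M(n), with M(n) selected by the quantizer
    level used at time n."""
    # Shrink the step when inner levels are used (signal got quieter),
    # grow it when outer levels are used (signal got louder).
    M = [0.85, 1.0, 1.2, 1.6]
    delta = delta0
    xq = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        # 3-bit mid-rise quantizer: 8 output levels +/-(2k+1)*delta/2, k=0..3
        k = min(int(abs(xn) / delta), 3)
        sign = 1.0 if xn >= 0 else -1.0
        xq[n] = sign * (2 * k + 1) * delta / 2
        delta = min(max(delta * M[k], 1e-6), 10.0)   # bounded recursion
    return xq

# The step size tracks the envelope of a signal whose amplitude grows,
# so the quantizer stays matched to the short-term signal variance.
x = np.sin(np.linspace(0.0, 40.0, 1000)) * np.linspace(0.05, 1.0, 1000)
xq = adaptive_quantizer(x)
```

Clamping Δ to a finite range, as above, is the usual safeguard against the recursion collapsing to zero or diverging during long runs of inner or outer levels.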
In DPCM, the predictors can also be made adaptive when the source output is quasi-stationary.

The coefficients of the predictor can be changed periodically to reflect the changing signal statistics of the source.

The linear equations given by Eq. 1 still apply, with a short-term estimate of the autocorrelation function of x_n substituted in place of the ensemble correlation function.
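The periodic coefficient update described above can be sketched as blockwise re-estimation (function names and block sizes below are illustrative, not from the slides):

```python
import numpy as np

def blockwise_predictor_update(x, p=2, block=2048):
    """Sketch of adaptive prediction: every `block` samples, re-solve the
    linear equations of Eq. 1 using a short-term autocorrelation estimate
    computed from that block alone."""
    coeff_sets = []
    for start in range(0, len(x) - block + 1, block):
        seg = x[start:start + block]
        # Short-term autocorrelation estimate over this block (Eq. 2 form)
        phi = np.array([np.dot(seg[:block - m], seg[m:]) / block
                        for m in range(p + 1)])
        Phi = np.array([[phi[abs(i - l)] for i in range(1, p + 1)]
                        for l in range(1, p + 1)])
        coeff_sets.append(np.linalg.solve(Phi, phi[1:]))
    return np.array(coeff_sets)   # one coefficient set per block

# A source whose statistics change halfway: the per-block coefficients
# track the change in the first-order correlation from 0.5 to 0.9.
rng = np.random.default_rng(2)
x = np.zeros(8192)
for n in range(1, len(x)):
    a1 = 0.5 if n < 4096 else 0.9
    x[n] = a1 * x[n - 1] + rng.standard_normal()
coeffs = blockwise_predictor_update(x, p=2, block=2048)
print(coeffs[0], coeffs[-1])
```

The block length trades off tracking speed against estimation variance: shorter blocks follow the statistics faster but give noisier coefficient estimates.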

The predictor coefficients thus determined may be transmitted along with the quantized error ẽ_n to the receiver, which implements the same predictor.

But this transmission of predictor coefficients results in a higher bit rate over the channel, partially offsetting the savings obtained from the reduced prediction error.
