PPM, Quantizing, Entropy


Final Exam Report


This report is submitted to the Electrical Department,
College of Engineering, Kirkuk University.

Student Name: ‫محمد ابوبكر عثمان‬

Subject: PPM, Quantizing, Entropy

Stage: fourth

Student Signature:
Introduction:
Pulse Position Modulation (PPM) is an analog
modulation scheme in which the amplitude and width
of the pulses are kept constant, while the position of
each pulse, with reference to the position of a
reference pulse, varies according to the instantaneous
sampled value of the message signal. The transmitter
has to send synchronizing pulses (or simply sync
pulses) to keep the transmitter and receiver in
synchronism. These sync pulses help the receiver
recover the position of the pulses. The following
figures explain Pulse Position Modulation.

Pulse position modulation is done in accordance with
the pulse width modulated signal. Each trailing edge of
the pulse width modulated signal becomes the starting
point of a pulse in the PPM signal. Hence, the position of
these pulses is proportional to the width of the PWM
pulses.

Generation of PPM
Fig.1(a) shows the PPM generator. It consists of a differentiator and a monostable multivibrator. The input to the
differentiator is a PWM waveform. The differentiator generates positive and negative spikes corresponding to the
leading and trailing edges of the PWM waveform. Diode D1 is used to bypass the positive spikes, while the negative
spikes trigger the monostable multivibrator. The monostable multivibrator then generates pulses of the same width
and amplitude, each starting at its trigger instant, to give the pulse-position-modulated waveform shown in
Fig.1(b).

Fig.1(a)
Fig.1(b)
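The edge-triggered generation scheme above can be sketched in software. The following is a minimal sketch (not the circuit itself): the differentiator is modeled as a first difference, so the trailing (falling) edges of the PWM waveform appear as negative spikes, and the monostable is modeled as a fixed-width pulse started at each trigger. The sample values and pulse width are illustrative assumptions.

```python
import numpy as np

def ppm_from_pwm(pwm, pulse_width):
    """Generate a PPM waveform from a PWM waveform (both 0/1 sample arrays).

    Mimics the differentiator + monostable chain: each trailing (falling)
    edge of the PWM signal triggers a fixed-width, fixed-amplitude pulse.
    """
    diff = np.diff(pwm, prepend=pwm[0])   # spikes at the PWM edges
    trailing = np.where(diff < 0)[0]      # negative spikes = trailing edges
    ppm = np.zeros_like(pwm)
    for i in trailing:                    # monostable: fixed-width pulse
        ppm[i:i + pulse_width] = 1
    return ppm

# PWM with two pulses of different widths: the PPM pulse positions
# track the trailing-edge locations, i.e. the PWM pulse widths
pwm = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0])
print(ppm_from_pwm(pwm, 2))
```

Note that a wider PWM pulse simply pushes its PPM pulse later within the frame, which is exactly the proportionality described above.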

Demodulation of PPM
In the case of pulse-position modulation, it is customary to convert the received pulses that vary in position to
pulses that vary in length. One way to achieve this is illustrated in Fig.2.

Fig.2

As shown in Fig.2, a flip-flop circuit is set or turned ON (giving a high output) when the reference pulse
arrives. This reference pulse is generated by the reference pulse generator of the receiver, using the
synchronization signal from the transmitter. The flip-flop circuit is reset or turned OFF (giving a low output) at
the leading edge of the position-modulated pulse. This repeats, and we get PWM pulses at the output of the
flip-flop. The PWM pulses are then demodulated by a PWM demodulator to recover the original modulating signal.
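The set/reset behavior of the flip-flop can be sketched as follows; this is a behavioral model under assumed sample-index inputs, not the receiver circuit. The reference-pulse instants and PPM edge positions used below are illustrative.

```python
def ppm_to_pwm(ref_edges, ppm_edges, n):
    """Recover a PWM waveform from PPM pulse positions (flip-flop model).

    The flip-flop is SET at each reference-pulse instant and RESET at the
    leading edge of the following position-modulated pulse, so each output
    pulse width equals the PPM pulse's displacement from its reference.
    """
    refs, ppms = set(ref_edges), set(ppm_edges)
    out, state = [], 0
    for t in range(n):
        if t in refs:
            state = 1          # reference pulse arrives: set (output high)
        elif t in ppms:
            state = 0          # PPM leading edge: reset (output low)
        out.append(state)
    return out

# Reference pulses every 8 samples; PPM pulses displaced by 3 and 5 samples,
# so the recovered PWM pulses are 3 and 5 samples wide
print(ppm_to_pwm([0, 8], [3, 13], 16))
```

The recovered widths (3 and 5 samples) equal the pulse displacements, which is why a PWM demodulator can then recover the message.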

Advantages of PPM

1. Like PWM, in PPM the amplitude is held constant, so there is less noise interference.

2. Like PWM, separating the signal from noise is very easy.

3. Due to the constant pulse width and amplitude, the transmitted power is the same for every pulse.

Disadvantages of PPM

1. Synchronization between transmitter and receiver is required.

2. A larger bandwidth is required compared to PAM.


Quantization
The digitization of analog signals involves rounding off the sampled values to values that approximate the
analog ones. Sampling chooses a set of points on the analog signal, and the amplitude at each point is then
rounded off to a nearby fixed value. Such a process is called Quantization.

Quantizing an analog signal

Analog-to-digital converters perform this function to create a series of digital values out of a given
analog signal. The following figure represents an analog signal. To be converted into digital form, this signal
has to undergo sampling and quantizing.

The quantizing of an analog signal is done by discretizing the signal with a number of quantization
levels. Quantization means representing the sampled values of the amplitude by a finite set of levels, that is,
converting each continuous-amplitude sample into a discrete-amplitude one.

The following figure shows how an analog signal gets quantized. The blue line represents analog signal while
the brown one represents the quantized signal.
Both sampling and quantization result in some loss of information. The quality of a quantizer's output depends
upon the number of quantization levels used. The discrete amplitudes of the quantized output are called
representation levels or reconstruction levels, and the spacing between two adjacent representation levels is
called a quantum or step-size.
The following figure shows the resultant quantized signal which is the digital form for the given analog signal.
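A uniform quantizer as described above can be sketched in a few lines; this is a minimal illustration, with the input range, number of levels, and test signal chosen as assumptions. Each sample is mapped to the mid-point of the quantization interval it falls in, so adjacent representation levels are one step-size apart.

```python
import numpy as np

def quantize_uniform(x, levels, x_min=-1.0, x_max=1.0):
    """Uniform quantizer: map each sample to one of `levels` representation
    levels, spaced one step-size (quantum) apart across [x_min, x_max]."""
    step = (x_max - x_min) / levels                        # the step-size (quantum)
    idx = np.clip(np.floor((x - x_min) / step), 0, levels - 1)
    return x_min + (idx + 0.5) * step                      # mid-point of each interval

# Sampled "analog" signal: one cycle of a sine, 8 samples, 4 levels
t = np.linspace(0, 1, 8, endpoint=False)
analog = np.sin(2 * np.pi * t)
print(quantize_uniform(analog, levels=4))
```

With 4 levels the step-size is 0.5 and the only possible outputs are ±0.25 and ±0.75; increasing `levels` shrinks the step-size and reduces the quantization error, as the text notes.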

Types of Quantization
There are two types of Quantization - Uniform Quantization and Non-uniform Quantization.
The type of quantization in which the quantization levels are uniformly spaced is termed Uniform
Quantization. The type in which the quantization levels are unequal, with a mostly logarithmic relation
between them, is termed Non-uniform Quantization.
There are two types of uniform quantization: the Mid-Rise type and the Mid-Tread type. The following
figures represent the two types of uniform quantization.

• The Mid-Rise type is so called because the origin lies in the middle of a rising part of the
stair-case like graph. The quantization levels in this type are even in number.
• The Mid-Tread type is so called because the origin lies in the middle of a tread of the stair-
case like graph. The quantization levels in this type are odd in number.
• Both the mid-rise and mid-tread types of uniform quantizer are symmetric about the origin.
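The difference between the two staircases can be sketched directly from their defining formulas; this is an illustrative sketch with an assumed step-size of 0.25. The key contrast: a mid-tread quantizer can output exactly zero, while a mid-rise quantizer cannot.

```python
import numpy as np

def mid_rise(x, step):
    # Origin on a riser: outputs are odd multiples of step/2,
    # so there is an even number of levels and 0 is never an output.
    return step * (np.floor(x / step) + 0.5)

def mid_tread(x, step):
    # Origin on a tread: outputs are integer multiples of step,
    # so there is an odd number of levels and 0 IS an output level.
    return step * np.round(x / step)

x = np.array([-0.3, -0.1, 0.0, 0.1, 0.3])
print(mid_rise(x, 0.25))
print(mid_tread(x, 0.25))
```

Small inputs near zero map to ±step/2 under mid-rise but to 0 under mid-tread, which is why mid-tread behaves better for low-level (e.g. idle-channel) signals.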
Companding in PCM
The word Companding is a combination of Compressing and Expanding, which means that it does
both. This is a non-linear technique used in PCM which compresses the data at the transmitter and
expands the same data at the receiver. The effects of noise and crosstalk are reduced by using this
technique.
There are two types of Companding techniques: the µ-law and the A-law.
µ-Law companding
For a given input V1, the expression for the compressed output V2 is

V2(V1) = sign(V1) · log(1 + µ|V1|) / log(1 + µ),   where -1 ≤ V1 ≤ 1

and where
sign(V1) = polarity of the input signal
µ = positive constant which determines the degree of compression
µ = 255 (8-bit) in the North American and Japanese standard

The plot of normalized output |V2| against normalized input |V1| for various values of µ is shown in Fig.3(a).
At µ = 0 there is simply no compression, so the output is that of a uniform quantizer.

Fig.3

A-Law companding
For a given input V1, the expression for the compressed output V2 is piecewise:

V2(V1) = sign(V1) · A|V1| / (1 + log A),             for 0 ≤ |V1| ≤ 1/A
V2(V1) = sign(V1) · (1 + log(A|V1|)) / (1 + log A),  for 1/A ≤ |V1| ≤ 1

where A is the compression parameter; the European standard uses A = 87.6 (the value 87.7 also appears in
some texts).
The plot of normalized output |V2| against normalized input |V1| for various values of A is shown in Fig.3(b). It
should be noted that for A = 1 it behaves just like a uniform quantizer.
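Both compression laws can be sketched directly from the expressions above; this is a minimal sketch using the natural logarithm, with µ = 255 and A = 87.6 following the values quoted in the text. The test inputs are illustrative.

```python
import numpy as np

def mu_law(v, mu=255.0):
    """mu-law compressor: V2 = sign(V1) * ln(1 + mu|V1|) / ln(1 + mu)."""
    return np.sign(v) * np.log1p(mu * np.abs(v)) / np.log1p(mu)

def a_law(v, A=87.6):
    """A-law compressor (piecewise): linear below 1/A, logarithmic above."""
    av = np.abs(v)
    small = A * av / (1 + np.log(A))
    # clip keeps the log argument >= 1 on the branch that is discarded anyway
    large = (1 + np.log(A * np.clip(av, 1 / A, 1))) / (1 + np.log(A))
    return np.sign(v) * np.where(av < 1 / A, small, large)

v = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
print(mu_law(v))   # mu = 0 would give no compression (uniform quantization)
print(a_law(v))
```

Both curves pass through 0 at the origin and ±1 at full scale; the compression shows up as small inputs being boosted toward larger normalized outputs, which is what reduces quantization noise for quiet signals.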

Entropy
When we consider how surprising or uncertain the occurrence of an event is, we are in effect asking about
the average information content coming from the source of that event. Entropy can be defined as a measure
of the average information content per source symbol.
How to calculate entropy
Let us consider an arbitrary signaling interval. Let the source emit the symbol (discrete value) x_k with
probability p_k, k = 0, 1, 2, ..., (M-1), where

Σ_{k=0}^{M-1} p_k = 1

Thus I(x_k), the information produced by the source during that signaling interval, is itself a random variable
which can take on the finite set of values I(x_0), I(x_1), ..., I(x_{M-1}) with probabilities p_0, p_1, p_2, ..., p_{M-1}. Since
there are M discrete values, or symbols, in the alphabet S of the source, the average information given by the
source per signaling interval, or per symbol, is

H(S) = E[I(x_k)] = Σ_{k=0}^{M-1} p_k I(x_k) = -Σ_{k=0}^{M-1} p_k log2 p_k

This quantity, H(S), which represents the average information per symbol emitted by a DMS (discrete
memoryless source) with a source alphabet S of size M, is called the 'entropy' of the source.
Example 1: A discrete memoryless source S has an alphabet {s0, s1} with probabilities 1/4 and 3/4 respectively.
Find the entropy of the source.
Solution:
H(S) = -Σ_{k=0}^{M-1} p_k log2 p_k = -[0.25 log2 0.25 + 0.75 log2 0.75] = 0.81128 bits/symbol
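The entropy formula is short enough to check numerically; the sketch below evaluates H(S) for the alphabet of Example 1 (the function name is our own choice, not from the text).

```python
import math

def entropy(probs):
    """H(S) = -sum p_k log2 p_k, in bits per symbol (0*log 0 taken as 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Example 1: alphabet {s0, s1} with probabilities 1/4 and 3/4
print(round(entropy([0.25, 0.75]), 5))  # → 0.81128
```

This agrees with the hand calculation above; note also that a deterministic source (one symbol with probability 1) has zero entropy, and equiprobable symbols maximize it.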

Example 2: A binary memoryless source produces the binary symbols 0 and 1 with probabilities p and (1 - p)
respectively. Determine the entropy of this source and sketch the variation of the entropy with the value of p.
Solution:
H(S) = -Σ_{k=0}^{1} p_k log2 p_k = -p_0 log2 p_0 - p_1 log2 p_1
where p_0 = P(X=0) and p_1 = P(X=1). Since P(X=0) = p and P(X=1) = (1 - p),

H(S) = -p log2 p - (1 - p) log2 (1 - p) bits per symbol
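The variation of H with p can be tabulated numerically instead of sketched; the following is a small sketch of the binary entropy function, handling the p = 0 and p = 1 endpoints (where p log2 p is taken as 0).

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), in bits per symbol."""
    if p in (0.0, 1.0):
        return 0.0          # limit value: 0*log 0 is taken as 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Entropy is zero at p = 0 and p = 1, symmetric about p = 0.5,
# and maximal (1 bit per symbol) when both symbols are equally likely
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(p, round(binary_entropy(p), 4))
```

The printed values trace the familiar inverted-U curve: the source conveys the most information per symbol when its two outputs are least predictable.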

References
Communication Systems by P. Ramakrishna Rao

Analog communication by A. P. Godse and U.A. Bakshi

Analog Communication System by Abhishek Yadav

www.tutorialspoint.com
