
Introduction to Communication Systems, Lecture 12
Random Processes and Spectral Analysis; Performance Analysis of Digital Communication Systems
Bilge Kartal Çetin, PhD
5/31/2023

Random Processes and Spectral Analysis
• An RV that is a function of time is called a random process or stochastic process.
• Communication signals as well as noise, which are typically random and vary with time, are well characterized by random processes.

Random Variable vs Random Process
• In the case of an RV, the outcome of each trial of the experiment is a number.
• A random process can also be seen as the outcome of an experiment, where the outcome of each trial is a waveform (a sample function) that is a function of t.

Ensemble and Sample Function
• SAMPLE FUNCTION: one waveform in the collection of possible waveforms.
• ENSEMBLE: the collection of all possible waveforms.
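To make the ensemble/sample-function distinction concrete, here is a minimal sketch (not from the slides; Python with numpy assumed) that draws a small ensemble of random-phase sinusoids, a classic example of a random process where each trial of the experiment yields one waveform:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)      # common time axis for every waveform
n_trials = 4                    # a small, finite slice of the ensemble

# Each trial of the experiment yields one waveform (a sample function):
# here, a sinusoid whose phase is drawn uniformly from [0, 2*pi).
phases = rng.uniform(0, 2 * np.pi, n_trials)
ensemble = np.array([np.cos(2 * np.pi * 5 * t + ph) for ph in phases])

print(ensemble.shape)           # (4, 500): 4 sample functions, 500 time points
```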
Characterization of a Random Process
• The ensemble has the complete information about the RP.
• We need some quantitative measure that will specify or characterize the RP.
• Consider the RP as an RV x that is a function of time. Hence an RP is just a collection of an infinite number of RVs, which are generally dependent.
• The joint PDF provides the complete information about several dependent RVs.

The Mean $\bar{x}(t)$ of a Random Process x(t)
• The mean of a random process can be determined from the first-order PDF as
  $\bar{x}(t) = \int_{-\infty}^{\infty} x\, p_x(x; t)\, dx$
• The mean of a random process is typically a deterministic function of time t.
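As a hedged illustration of the ensemble mean (the process and its parameters below are illustrative assumptions, not from the slides), the mean at each instant t is estimated by averaging many sample functions:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
N = 10_000                                  # number of sample functions

# Random-phase sinusoid: x(t) = cos(2*pi*5*t + theta), theta ~ U[0, 2*pi).
# Its true ensemble mean is 0 at every instant t.
theta = rng.uniform(0, 2 * np.pi, size=(N, 1))
x = np.cos(2 * np.pi * 5 * t + theta)       # shape (N, 200)

# Ensemble mean: average across the sample functions at each fixed t.
x_mean = x.mean(axis=0)
print(np.max(np.abs(x_mean)))               # close to 0 for all t
```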

Autocorrelation Function of a Random Process
• The autocorrelation function is the most important statistical characteristic of a random process.
• It carries information about the spectral content of the process, which depends on how rapidly the amplitude changes with time.

Using Correlation to Measure the Similarity of Amplitudes at t1 and t2 = t1 + τ
• The autocorrelation function is defined as $R_x(t_1, t_2) = \overline{x(t_1)\, x(t_2)}$.
• The correlation of the RVs x(t1) and x(t2) indicates the similarity between them.
• Consider a slowly varying process x(t) and a rapidly varying process y(t). For small τ, the product x1x2 is mostly positive, but y1y2 is equally likely to be positive or negative.
• x1 and x2 remain correlated for considerably larger values of τ, whereas y1 and y2 lose correlation quickly, even for small τ.
• The autocorrelation function therefore provides valuable information about the frequency content of the process.
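This slow-versus-fast intuition can be checked numerically. In the sketch below, moving-average-filtered Gaussian noise stands in for the slide's x(t) and y(t) (an illustrative assumption); it estimates the normalized correlation at a fixed t1 for several lags τ:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 2000, 400                         # ensemble size, time samples

def lowpass_noise(rng, N, T, width):
    """White Gaussian noise smoothed by a moving average of `width` samples."""
    w = rng.standard_normal((N, T + width))
    kernel = np.ones(width) / width
    return np.array([np.convolve(row, kernel, mode="valid")[:T] for row in w])

x = lowpass_noise(rng, N, T, width=50)   # slowly varying process
y = lowpass_noise(rng, N, T, width=2)    # rapidly varying process

t1 = 200
for tau in (1, 5, 25):
    # Normalized so that R(0) = 1 for both processes.
    Rx = np.mean(x[:, t1] * x[:, t1 + tau]) / np.mean(x[:, t1] ** 2)
    Ry = np.mean(y[:, t1] * y[:, t1 + tau]) / np.mean(y[:, t1] ** 2)
    print(f"tau={tau:2d}  Rx={Rx:+.3f}  Ry={Ry:+.3f}")
# Rx decays slowly with tau; Ry drops to ~0 even for small tau.
```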

• Hence $\overline{x_1 x_2}$ will be larger than $\overline{y_1 y_2}$: x1 and x2 remain correlated for considerably larger values of τ, whereas y1 and y2 lose correlation quickly.

Power Spectral Density and the Wiener-Khintchine Theorem for a Random Process
• The PSD of a random process x(t) is the ensemble average of the PSDs of all sample functions:
  $S_x(f) = \lim_{T \to \infty} \dfrac{\overline{|X_T(f)|^2}}{T}$
• Here $X_T(f)$ is the Fourier transform of the time-truncated RP $x_T(t) = x(t)\,\mathrm{rect}(t/T)$.
• Wiener-Khintchine theorem: the PSD is the Fourier transform of the autocorrelation function,
  $S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j 2\pi f \tau}\, d\tau$
• For the PSD to exist, the process must be stationary (at least wide sense). Stationary processes are power signals.
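A numerical sanity check of the Wiener-Khintchine theorem (illustrative; FIR-filtered white noise is assumed as the WSS test process): the ensemble-averaged periodogram and the Fourier transform of the autocorrelation are two routes to the same PSD.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 5000, 256                 # ensemble size, samples per waveform

# WSS test process: white Gaussian noise through a short FIR filter h.
h = np.array([1.0, 0.6, 0.3])
x = np.array([np.convolve(rng.standard_normal(T + len(h)), h, "valid")[:T]
              for _ in range(N)])

# Route 1: ensemble-averaged periodogram, |X_T(f)|^2 / T.
psd_avg = np.mean(np.abs(np.fft.fft(x, axis=1)) ** 2, axis=0) / T

# Route 2: Wiener-Khintchine, the DFT of the estimated autocorrelation
# (for this filter, R_x(tau) = 0 for |tau| >= len(h)).
lags = np.arange(-(len(h) - 1), len(h))
R = np.array([np.mean(x[:, 100] * x[:, 100 + k]) for k in lags])
f = np.arange(T)
psd_wk = np.real(sum(R[i] * np.exp(-2j * np.pi * f * lags[i] / T)
                     for i in range(len(lags))))

# The two routes estimate the same PSD (differences are estimation noise).
print(np.max(np.abs(psd_avg - psd_wk)))
```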

The Autocorrelation Function of a Real Process Is an Even Function of τ
• $R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\, e^{j 2\pi f \tau}\, df$
• Because $S_x(f)$ is an even function of f, $R_x(\tau)$ is an even function of τ: $R_x(-\tau) = R_x(\tau)$.

The Mean Square Value of a Random Process Is Rx(0)
• The mean square value $\overline{x^2}$ of the random process x(t) is $R_x(0)$.
• The mean square value $\overline{x^2}$ is not the time mean square of a sample function.
• The mean square value $\overline{x^2}$ is the ensemble average of the squares of the sample-function amplitudes at any instant t.
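A short numerical check of both properties (illustrative; a moving-average Gaussian process is assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
# 20000 sample functions of a moving-average (lowpass) Gaussian process.
x = np.array([np.convolve(rng.standard_normal(260), np.ones(8) / 8, "valid")
              for _ in range(20_000)])

t0 = 100
R = lambda tau: np.mean(x[:, t0] * x[:, t0 + tau])   # ensemble average

print(R(3), R(-3))                    # evenness: R(tau) = R(-tau)
print(R(0), np.mean(x[:, t0] ** 2))   # mean square value equals R_x(0)
```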

The Power of a Random Process
• The power Px (average power) of a wide-sense stationary random process x(t) is its mean square value $\overline{x^2}$.

Bandpass White Gaussian Random Process
• A Gaussian random process with a uniform PSD is called a white Gaussian RP.
• The popular notation for white Gaussian noise is a PSD of $S_n(f) = N/2$ over a bandwidth of 2B.

Quadrature Representation of Noise
• Bandpass noise can be expressed in terms of quadrature components as $n(t) = n_c(t)\cos\omega_c t + n_s(t)\sin\omega_c t$.
• The mean square values of the quadrature components of the noise are identical to that of n(t).
• The RVs $n_c(t)$ and $n_s(t)$ are uncorrelated Gaussian RVs with zero mean and variance 2NB, so their PDFs are identical.

How to Find the White Gaussian Noise Power
• $R_n(\tau) = \overline{n(t)\, n(t+\tau)}$, so $R_n(0) = \overline{n(t)\, n(t)} = \overline{n^2(t)} = \overline{n^2}$.
• $R_n(\tau) = \int_{-\infty}^{\infty} S_n(f)\, e^{j 2\pi f \tau}\, df$
• $P_n = \overline{n^2} = R_n(0) = \int_{-\infty}^{\infty} S_n(f)\, df$
• Since $S_n(f)$ is an even function of f, $P_n = \overline{n^2} = R_n(0) = 2\int_0^{\infty} S_n(f)\, df$. This is the area under the PSD: the mean square value of n(t) equals its power.
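A sketch of the power computation as the area under the PSD (the N, B, and fc values below are illustrative assumptions, not from the slides):

```python
import numpy as np

# Bandpass white noise PSD: S_n(f) = N/2 on two bands of width 2B centered
# at +fc and -fc (all values below are illustrative assumptions).
N_half = 0.5e-9          # N/2 in W/Hz, so N = 1e-9
B = 10e3                 # Hz; each band has total width 2B
fc = 100e3               # carrier frequency, Hz

f = np.linspace(-200e3, 200e3, 2_000_001)
Sn = np.where(np.abs(np.abs(f) - fc) <= B, N_half, 0.0)

Pn_numeric = np.sum(Sn) * (f[1] - f[0])   # area under the PSD
Pn_closed = 2 * (2 * B) * N_half          # = 2NB, matching the variance 2NB
print(Pn_numeric, Pn_closed)              # both ~2e-05 W
```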

Cumulative Distribution Function (CDF)
• For any real number x, the CDF is the probability that the random variable X is no larger than x: $F_X(x) = P(X \le x)$.
• All random variables have a cumulative distribution function, but only discrete random variables have a probability mass function.
• The CDF describes the probability that an outcome will be less than or equal to a specified value.

Probability Density Function (PDF)
• The PDF is the derivative of the CDF: $p_X(x) = dF_X(x)/dx$.
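A minimal empirical-CDF sketch (Python with numpy assumed): $F_X(x)$ is estimated as the fraction of samples no larger than x.

```python
import numpy as np

rng = np.random.default_rng(5)
samples = rng.standard_normal(100_000)

def empirical_cdf(samples, x):
    """F_X(x) = P(X <= x), estimated as the fraction of samples <= x."""
    return np.mean(samples <= x)

for x in (-1.0, 0.0, 1.0):
    print(x, empirical_cdf(samples, x))
# ~0.159, ~0.500, ~0.841 for a standard Gaussian, i.e. F_X(x) = 1 - Q(x).
```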

The Gaussian Random Variable
• Standard Gaussian (or normal) probability density, named after the mathematician Carl Friedrich Gauss:
  $p_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$

Gaussian Random Variable
• From the symmetry of $p_X(x)$ about the origin, and the fact that the total area under $p_X(x)$ equals 1, it follows that $Q(0) = 0.5$ and $Q(-x) = 1 - Q(x)$.
• Tail-approximation error: 18.7% at x = 2, 10.4% at x = 4, 2.3% at x = 6, and below 1% for x > 2.15.
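The error percentages above refer to closed-form tail approximations; the slide's own formula is not visible in this extract, so the sketch below uses two standard approximations of that kind (an assumption) and compares them with the exact Q(x):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * erfc(x / np.sqrt(2))

def Q_approx(x):                 # simple exponential upper bound
    return np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))

def Q_approx_refined(x):         # refined form, roughly 1% error for x > 2.15
    return (1 - 0.7 / x**2) * np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))

for x in (2.0, 4.0, 6.0):
    e1 = Q_approx(x) / Q(x) - 1
    e2 = Q_approx_refined(x) / Q(x) - 1
    print(f"x={x}: Q={Q(x):.3e}, simple err={e1:+.1%}, refined err={e2:+.1%}")
```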

General Gaussian Density Function (m, σ)
• Gaussian PDF with mean m and variance σ²:
  $p_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2 / 2\sigma^2}$
• Letting $(x - m)/\sigma = z$ reduces tail probabilities of X to the standard Gaussian: $P(X > x) = Q\left(\frac{x - m}{\sigma}\right)$.
• The erfc function (complementary form of the Gauss error function):
  $\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\, dt$
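A quick check of the standardization $(x - m)/\sigma = z$ (scipy assumed; m, σ, and x are arbitrary illustrative values):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * erfc(x / np.sqrt(2))

# P(X > x) for X ~ N(m, sigma^2) via the substitution z = (x - m) / sigma.
m, sigma, x = 3.0, 2.0, 5.0
print(Q((x - m) / sigma))        # P(X > 5) = Q(1) ~ 0.1587

# Cross-check by Monte Carlo:
rng = np.random.default_rng(6)
print(np.mean(rng.normal(m, sigma, 1_000_000) > x))
```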

PERFORMANCE ANALYSIS OF DIGITAL COMMUNICATION SYSTEMS
• Analog communication  reproduction of the waveform.
• Digital communication  deciding which waveform was transmitted.
• Analog communication  figure of merit: output signal-to-noise ratio.
• Digital communication  figure of merit: probability of bit error (bit error rate, BER).

Learning Outcomes in Performance Analysis
• Ability to apply the fundamental tools of probability theory and random processes for BER performance analysis.
• Ability to design optimum detection receivers that minimize the receiver BER.

The Gaussian Random Variable (recap)
• Standard Gaussian (normal) probability density, named after the mathematician Carl Friedrich Gauss: $p_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$.
• For a Gaussian PDF with mean m and variance σ², letting $(x - m)/\sigma = z$ reduces it to the standard form.

Threshold Detection
• Transmitted messages: m = 1 is sent as a positive pulse p(t); m = 0 as a negative pulse -p(t).
• To detect the pulses at the receiver, each pulse is sampled at its peak amplitude.
• Let the peak amplitude of p(t) be $A_p$ at $t = T_p$, i.e., $p(T_p) = A_p$.
• In the absence of noise, the sampler output is either $A_p$ (for m = 1) or $-A_p$ (for m = 0).

Received Pulses with Noise: Optimum Detection Threshold
• Due to channel noise, the sampler output is $\pm A_p + n$, where $n = n(T_p)$.
• The optimum detection threshold is zero: the received pulse is detected as 1 if the sample value is positive, and as 0 if it is negative.
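A minimal simulation of the zero-threshold detector (the Ap and σn values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
Ap, sigma_n = 1.0, 0.3          # peak amplitude and noise std (assumed values)
bits = rng.integers(0, 2, 100_000)

# Sampler output at t = T_p: +Ap for m = 1, -Ap for m = 0, plus Gaussian noise.
samples = np.where(bits == 1, Ap, -Ap) + sigma_n * rng.standard_normal(bits.size)

# Zero-threshold detector: decide 1 if the sample is positive, else 0.
decisions = (samples > 0).astype(int)
print("BER:", np.mean(decisions != bits))
```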

Binary Data Detection Error Probability
• The noise n has a Gaussian PDF with zero mean.
• A 0 is transmitted but detected as 1 if $-A_p + n > 0$, that is, if $n > A_p$.
• Error probability given that 0 was transmitted: $P(\epsilon|0) = P(n > A_p)$.
• A 1 is transmitted but detected as 0 if $A_p + n < 0$, that is, if $n < -A_p$.
• Error probability given that 1 was transmitted: $P(\epsilon|1) = P(n < -A_p)$.

Error Probabilities via the Q Function
• Error probabilities can be calculated with the Q(·) function, using $P(X > x) = Q\left(\frac{x - m}{\sigma}\right)$:
  $P(\epsilon|0) = P(n > A_p) = Q\left(\frac{A_p}{\sigma_n}\right)$
  $P(\epsilon|1) = P(n < -A_p) = Q\left(\frac{A_p}{\sigma_n}\right) = P(\epsilon|0)$
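A Monte Carlo check that the simulated BER matches $Q(A_p/\sigma_n)$ (the noise levels are illustrative assumptions):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * erfc(x / np.sqrt(2))

rng = np.random.default_rng(8)
Ap = 1.0
for sigma_n in (0.5, 0.4, 0.3):
    bits = rng.integers(0, 2, 2_000_000)
    noise = sigma_n * rng.standard_normal(bits.size)
    samples = np.where(bits == 1, Ap, -Ap) + noise
    ber = np.mean((samples > 0).astype(int) != bits)
    print(f"sigma_n={sigma_n}: simulated={ber:.5f}  theory Q(Ap/sigma_n)={Q(Ap / sigma_n):.5f}")
```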

Q Function and erfc
• $Q(x) = \frac{1}{2}\,\mathrm{erfc}\left(\frac{x}{\sqrt{2}}\right)$
• $\mathrm{erfc}(x) = 2\, Q(x\sqrt{2})$

A Single Table with μ = 0 and σ = 1 Suffices for the Probability of Bit Error
• Example lookups: Q(1.82) = ?  Q(2.63) = ?
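Both lookups can be reproduced with the erfc relation above (a small sketch using Python's standard library):

```python
from math import erfc, sqrt

def Q(x):
    """Q(x) = 0.5 * erfc(x / sqrt(2)); equivalently erfc(x) = 2 * Q(x * sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

print(Q(1.82))   # ~0.0344
print(Q(2.63))   # ~0.00427
```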
