
Principles of Communications

Weiyao Lin, PhD


Associate Professor
Dept. of Electronic Engineering
Shanghai Jiao Tong University

Chapter 2: Signal, Random Process, and Spectra

Textbook: Sections 2.1–2.6, 5.1–5.3
Signal and Noise in Communication Systems

- In communication systems, the received waveform is usually
  categorized into a desired part containing the information and an
  extraneous or undesired part. The desired part is called the signal,
  and the undesired part is called noise.
- Noise is one of the most critical and fundamental concepts affecting
  communication systems
- The entire subject of communication systems is largely about methods
  to overcome the distorting effects of noise
- To do so, an understanding of random variables and random processes
  is essential

Typical noise source



Topics to be Covered

2.1 Signals

2.2 Review of probability and random variables

2.3 Random processes: basic concepts

2.4 Gaussian and white processes



What is a Signal?

- Any physical quantity that varies with time, space, or any
  other independent variable is called a signal
- In communication systems, signals are used to transmit
  information over a communication channel. Such signals
  are called information-bearing signals



Classification of Signals

- Signals can be characterized in several ways
- Continuous-time signal vs. discrete-time signal
- Continuous-valued signal vs. discrete-valued signal
  o Continuous-time and continuous-valued: analog signal (speech)
  o Discrete-time and discrete-valued: digital signal (CD)
  o Discrete-time and continuous-valued: sampled signal
  o Continuous-time and discrete-valued: quantized signal



- Deterministic signal vs. random signal



Energy and Power

- Energy
  Ex = ∫_{−∞}^{∞} x(t)² dt = lim_{T→∞} ∫_{−T/2}^{T/2} x(t)² dt

- Power
  Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t)² dt

- A signal is an energy signal if and only if Ex is finite
- A signal is a power signal if and only if Px is finite
- Physically realizable waveforms are of energy-type
- Mathematical models are often of power-type
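A small numerical sketch (not from the slides) of the two definitions, using a decaying exponential as an assumed example signal; its energy converges while its power tends to zero, so it is an energy signal:

```python
# Approximating E_x and P_x on a finite time grid.
import numpy as np

T = 100.0                           # truncation window (assumed)
t = np.arange(-T / 2, T / 2, 1e-3)  # time grid, 1 ms resolution
x = np.exp(-np.abs(t))              # example energy-type signal

energy = np.trapz(x**2, t)   # E_x ~ integral of x(t)^2 dt  -> ~1.0 (finite)
power = energy / T           # P_x ~ (1/T) * integral       -> 0 as T grows
print(energy, power)
```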



Probability

- Let A be an event in a sample space S
- The probability P(A) is a real number that
  measures the likelihood of the event A

- Axioms of Probability
  1) P(A) ≥ 0
  2) P(S) = 1
  3) Let A and B be two mutually exclusive events, i.e. AB = ∅.
     Then P(A ∪ B) = P(A) + P(B)


Elementary Properties of Probability

- P(Ā) = 1 − P(A), where Ā is the complement of A

- When A and B are NOT mutually exclusive:
  P(A ∪ B) = P(A) + P(B) − P(AB)

- If A ⊆ B, then P(A) ≤ P(B)


Conditional Probability

- Consider two events A and B in a random experiment
- The probability that event A will occur GIVEN that B has
  occurred, P(A|B), is called the conditional probability
- The probability that both A and B occur, P(AB), is called
  the joint probability
- Joint and conditional probabilities are related by
  P(AB) = P(B) P(A|B) = P(A) P(B|A)
- Alternatively,
  P(A|B) = P(AB) / P(B)    and    P(B|A) = P(AB) / P(A)
- Two events A and B are said to be statistically independent iff
  P(AB) = P(A) P(B); then P(A|B) = P(A) and P(B|A) = P(B)
Law of Total Probability

- Let A1, A2, …, An be mutually exclusive events
  with A1 ∪ A2 ∪ … ∪ An = S

- Then for any event B we have
  P(B) = Σ_{i=1}^{n} P(B | Ai) P(Ai)


Bayes’ Theorem

- An extremely useful relationship for conditional
  probabilities is Bayes’ theorem
- Let A1, A2, …, An be mutually exclusive events such
  that A1 ∪ A2 ∪ … ∪ An = S, and let B be an arbitrary event with
  nonzero probability. Then
  P(Ai | B) = P(B | Ai) P(Ai) / Σ_{j=1}^{n} P(B | Aj) P(Aj)

- This formula will be used to derive the structure of the
  optimal receiver
Example

- Consider a binary communication system:
  P(0) = 0.3      P01 = P(receive 1 | sent 0) = 0.01
                  P00 = P(receive 0 | sent 0) = 1 − P01 = 0.99
  P(1) = 0.7      P10 = P(receive 0 | sent 1) = 0.1
                  P11 = P(receive 1 | sent 1) = 1 − P10 = 0.9

- What is the probability that the output of this channel is 1?

- Assuming that we have observed a 1 at the output, what is
  the probability that the input to the channel was a 1?
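A short sketch of both answers (values taken from the slide; variable names are ours), using the law of total probability and Bayes’ theorem:

```python
# Total probability and Bayes' rule for the binary channel above.
p0, p1 = 0.3, 0.7        # priors: P(sent 0), P(sent 1)
p01 = 0.01               # P(receive 1 | sent 0)
p10 = 0.1                # P(receive 0 | sent 1)
p11 = 1.0 - p10          # P(receive 1 | sent 1)

# Law of total probability: P(receive 1)
p_r1 = p01 * p0 + p11 * p1     # 0.003 + 0.63 = 0.633

# Bayes' theorem: P(sent 1 | receive 1)
p_s1_r1 = p11 * p1 / p_r1      # 0.63 / 0.633 ~ 0.9953

print(p_r1, p_s1_r1)
```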



Random Variables (r.v.)

- A r.v. is a real-valued function assigned to the events of the
  sample space S, denoted by capital letters X, Y, etc.

- A r.v. may be
  - Discrete-valued: range is finite (e.g. {0, 1}) or countably
    infinite (e.g. {1, 2, 3, …})
  - Continuous-valued: range is uncountably infinite (e.g. an
    interval of the real line)



Cumulative Distribution Function (CDF)

- The cumulative distribution function (CDF), or simply the
  probability distribution, of a r.v. X is
  FX(x) ≜ P(X ≤ x)

- Key properties of the CDF
  1. 0 ≤ FX(x) ≤ 1, with FX(−∞) = 0 and FX(∞) = 1
  2. FX(x) is a non-decreasing function of x
  3. P(x1 < X ≤ x2) = FX(x2) − FX(x1)
Probability Density Function (PDF)

- The PDF of a r.v. X is defined as
  fX(x) ≜ d FX(x) / dx    or    FX(x) = ∫_{−∞}^{x} fX(y) dy

- Key properties of the PDF
  1. fX(x) ≥ 0
  2. ∫_{−∞}^{∞} fX(x) dx = 1
  3. P(x1 < X ≤ x2) = FX(x2) − FX(x1) = ∫_{x1}^{x2} fX(x) dx

  (Figure: the CDF FX(x) rising from 0 at Xmin to 1 at Xmax, and the
  PDF fX(x); the area under fX between x1 and x2 equals
  P(x1 < X ≤ x2), and fX(x)dx is the area of a slice of width dx)
Joint Distribution

- In many situations, one must consider TWO or more r.v.'s
  ⇒ joint distribution function
- Consider two r.v.'s X and Y; their joint distribution function is
  defined as
  FXY(x, y) = P(X ≤ x, Y ≤ y)
  and the joint PDF is
  fXY(x, y) = ∂²FXY(x, y) / ∂x∂y
- Key properties of the joint distribution
  ∫_{−∞}^{∞} ∫_{−∞}^{∞} fXY(x, y) dx dy = 1
  P(x1 < X ≤ x2, y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} fXY(x, y) dx dy


- Marginal distribution
  FX(x) = P(X ≤ x, −∞ < Y < ∞) = ∫_{−∞}^{x} ∫_{−∞}^{∞} fXY(α, β) dβ dα
  FY(y) = ∫_{−∞}^{y} ∫_{−∞}^{∞} fXY(α, β) dα dβ

- Marginal density
  fX(x) = ∫_{−∞}^{∞} fXY(x, β) dβ

- X and Y are said to be independent iff
  FXY(x, y) = FX(x) FY(y)
  or, equivalently,
  fXY(x, y) = fX(x) fY(y)



Statistical Averages

- Let X be a r.v. X can be either continuous or
  discrete. For the moment, consider a discrete r.v.
  which takes on the possible values x1, x2, …, xM with
  respective probabilities P1, P2, …, PM. Then the mean
  or expected value of X is
  mX = E[X] = Σ_{i=1}^{M} xi Pi

  where E[·] denotes the expectation operation
  (statistical averaging)



- If X is continuous, then
  mX = E[X] = ∫_{−∞}^{∞} x fX(x) dx

- This is the first moment of the random variable X

- Let g(X) be a function of X; then
  E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx

- For the special case g(X) = xⁿ, we obtain the n-th
  moment of X, that is
  E[Xⁿ] = ∫_{−∞}^{∞} xⁿ fX(x) dx

- Let n = 2; we have the mean-square value of X:
  E[X²] = ∫_{−∞}^{∞} x² fX(x) dx


- The n-th central moment is
  E[(X − mX)ⁿ] = ∫_{−∞}^{∞} (x − mX)ⁿ fX(x) dx

- The expected value of the second central moment
  (n = 2) is called the variance:
  σX² = E[(X − mX)²]
      = E[X² − 2 mX X + mX²]
      = E[X²] − mX²

- σX, the square root of the variance, is called the
  standard deviation. It is the average distance from
  the mean, a measure of the concentration of X
  around the mean
Correlation

- In considering multiple variables, joint moments such as the
  correlation and covariance between pairs of r.v.'s are most
  useful
- The correlation of two r.v.'s X and Y is defined as
  RXY = E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y fXY(x, y) dx dy

- The correlation of X and Y is the mean of the product of X and Y

- The correlation of the two centered r.v.'s X − E[X] and Y − E[Y]
  is called the covariance of X and Y:
  CovXY = E[(X − E[X])(Y − E[Y])]
- The covariance of X and Y normalized w.r.t. σX σY is
  referred to as the correlation coefficient of X and Y:
  ρXY = CovXY / (σX σY)
- X and Y are uncorrelated iff their correlation coefficient is 0:
  ρXY = 0
- X and Y are orthogonal iff their correlation is 0:
  RXY = E[XY] = 0

- If X and Y are independent, then they are uncorrelated.
  However, the converse is not true (the Gaussian case is
  the only exception)
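A quick numerical sketch (ours, not from the slides) of why the converse fails: take X ~ N(0, 1) and Y = X²; Y is fully determined by X, yet their correlation coefficient is essentially zero:

```python
# Uncorrelated does not imply independent: Y = X^2 with X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2                           # fully dependent on x

cov = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov / (x.std() * y.std())    # correlation coefficient
print(rho)                         # ~ 0, yet X and Y are dependent
```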

Some Useful Probability Distributions

- Discrete distributions
  - Binary distribution
  - Binomial distribution
- Continuous distributions
  - Uniform distribution
  - Gaussian distribution (the most important one)
  - Rayleigh distribution (very important in mobile and
    wireless communications)



Binary Distribution

- Let X be a discrete random variable that has two
  possible values, say X = 1 or X = 0. The distribution of X
  can be described by a probability mass function (pmf):
  P(X = 1) = p,    P(X = 0) = 1 − p

- This is frequently used to model binary data
- Mean: E[X] = p
- Variance: σX² = p(1 − p)


Binomial Distribution

- Let Y = Σ_{i=1}^{n} Xi, where the Xi are independent
  binary r.v.'s with P(Xi = 1) = p
- Then
  P(Y = k) = C(n, k) p^k (1 − p)^(n−k)
  where C(n, k) = n! / (k!(n − k)!)
- That is, the probability that Y = k is the probability that
  k of the Xi are equal to 1 and n − k are equal to 0
- Mean: E[Y] = np
- Variance: σY² = np(1 − p)
Example

- Suppose that we transmit a 31-bit long sequence with
  error correction capability of up to 3 bit errors
- If the probability of a bit error is p = 0.001, what is the
  probability that this sequence is received in error?
  P(more than 3 errors) = Σ_{k=4}^{31} C(31, k) p^k (1 − p)^(31−k)

- If no error correction is used, the error probability is
  P(at least 1 error) = 1 − (1 − p)^31 ≈ 0.031


Uniform Distribution

- The pdf of the uniform distribution over [a, b] is given by
  fX(x) = 1/(b − a) for a ≤ x ≤ b,  and 0 otherwise

  Any example?



Gaussian Distribution

- The Gaussian distribution, also called the normal distribution,
  is by far the most important distribution in the statistical
  analysis of communication systems
- The PDF of a Gaussian r.v. is
  fX(x) = (1 / (√(2π) σX)) exp( −(x − mX)² / (2σX²) )

- A Gaussian r.v. is completely determined by its mean and
  variance, and hence is usually denoted as
  X ~ N(mX, σX²)

  (Figure: bell-shaped pdf fX(x) centered at mX)
The Q-Function

- The Q-function is a standard form to express error
  probabilities without a closed form:
  Q(x) = ∫_{x}^{∞} (1/√(2π)) exp(−u²/2) du

- The Q-function is the area under the tail of a Gaussian pdf
  with mean zero and variance one

- Extremely important in error probability analysis!
More about the Q-Function

- The Q-function is monotonically decreasing
- Some features:
  Q(0) = 1/2,   Q(−x) = 1 − Q(x),   Q(∞) = 0
- Craig's alternative form of the Q-function (IEEE MILCOM'91):
  Q(x) = (1/π) ∫_{0}^{π/2} exp( −x² / (2 sin²θ) ) dθ,   x ≥ 0
- Upper bound:
  Q(x) ≤ (1/2) exp(−x²/2),   x ≥ 0

- If we have a Gaussian variable X ~ N(μ, σ²), then
  Pr(X > x) = Q( (x − μ) / σ )
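A minimal implementation sketch: the Q-function has no closed form but is expressible through the complementary error function as Q(x) = ½·erfc(x/√2):

```python
# Q(x) = 0.5 * erfc(x / sqrt(2)); tail of N(mu, sigma^2) via Q.
from math import erfc, sqrt

def Q(x: float) -> float:
    """Tail probability of a standard Gaussian."""
    return 0.5 * erfc(x / sqrt(2.0))

def gaussian_tail(x: float, mu: float, sigma: float) -> float:
    """Pr(X > x) for X ~ N(mu, sigma^2)."""
    return Q((x - mu) / sigma)

print(Q(0.0))                         # 0.5
print(gaussian_tail(3.0, 0.0, 1.0))   # Q(3) ~ 1.35e-3
```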
Joint Gaussian Random Variables

- X1, X2, …, Xn are jointly Gaussian iff
  f(x) = (2π)^(−n/2) |C|^(−1/2) exp( −(1/2)(x − m)ᵀ C⁻¹ (x − m) )
  where
  - x is a column vector (x1, …, xn)ᵀ
  - m is the vector of the means
  - C is the covariance matrix


Two-Variate Gaussian PDF

- Given two r.v.'s X1 and X2 that are jointly Gaussian, with
  means m1, m2, variances σ1², σ2², and correlation coefficient ρ
- Then
  f(x1, x2) = 1 / (2π σ1 σ2 √(1 − ρ²)) ×
      exp{ −1/(2(1 − ρ²)) [ (x1 − m1)²/σ1²
           − 2ρ(x1 − m1)(x2 − m2)/(σ1σ2) + (x2 − m2)²/σ2² ] }


- For uncorrelated X1 and X2, i.e. ρ = 0, the joint pdf factors
  into f(x1) f(x2), so X1 and X2 are also independent

  If X1 and X2 are Gaussian and uncorrelated,
  then they are independent.
Rayleigh Distribution

  (Figure: the Rayleigh pdf, a skewed bell shape starting at 0,
  peaking near 1, and tailing off toward larger values)

- Rayleigh distributions are frequently used to model fading
  for non-line-of-sight (NLOS) signal transmission
- Very important for mobile and wireless communications



Sums of Random Variables

- Consider a sequence of r.v.'s X1, X2, …, Xn
- Weak Law of Large Numbers
  - Let
    Y = (1/n) Σ_{i=1}^{n} Xi
  - And assume that the Xi's are uncorrelated with the same
    mean mX and variance σX²
  - Then, for any ε > 0,
    P( |Y − mX| ≥ ε ) → 0  as n → ∞

  So what?

  i.e. the average converges to the expected value
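A short simulation sketch (our example: exponential samples with mean 2.0) of the sample mean converging:

```python
# Law-of-large-numbers sketch: running sample means approach the mean.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)  # i.i.d., mean 2.0

for n in (10, 1_000, 1_000_000):
    print(n, x[:n].mean())   # -> 2.0 as n grows
```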



Central Limit Theorem

- Let X1, X2, …, Xn be a set of independent random
  variables with common mean mX and common variance σX²
- Next let
  Y = (1/√n) Σ_{i=1}^{n} (Xi − mX) / σX
- Then as n → ∞, the distribution of Y will tend towards a
  Gaussian distribution

  Key conclusion: the sum of many random variables is "Gaussian"

- Thermal noise results from the random movement of many
  electrons; it is well modeled by a Gaussian distribution.
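A simulation sketch (assumed setup: n = 30 uniform r.v.'s per trial) showing the normalized sum behaving like N(0, 1); its empirical tail beyond 2 matches Q(2) ≈ 0.0228:

```python
# Central-limit sketch: normalized sums of uniform r.v.'s look Gaussian.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 30, 100_000
u = rng.uniform(0.0, 1.0, size=(trials, n))        # mean 1/2, variance 1/12

y = (u.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)  # zero mean, unit variance

print((y > 2).mean())   # empirical tail ~ Q(2) ~ 0.0228
```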



Random Process

- A random process is the natural extension of a random
  variable when dealing with signals

- Also referred to as a stochastic process or random signal

- Voice signals, TV signals, and the thermal noise generated by a
  radio receiver are all examples of random signals.



- A random process can be described as follows:
  - For each experiment n, there exists a time function xn(t),
    called a sample function or realization of the random
    process
  - At any time instants t1, t2, …, the values of the random
    process are random variables X(t1), X(t2), …

  (Figure: sample functions x1(t), x2(t), …, xn(t), one per outcome
  of the sample space S, with the random variables X(t1), X(t2)
  read off vertically at times t1 and t2)
Statistics of Random Processes

- By sampling the random process at any time, we
  get a random variable
- From this viewpoint, we can think of a random
  process as an infinite collection of random
  variables specified at time t: {X(t1), X(t2), …, X(tn)}
- Thus, a random process can be completely
  defined statistically as a collection of random
  variables indexed by time, with properties defined
  by a joint PDF


- A random process X(t) is described by its M-th
  order statistics if for all n ≤ M and all (t1, t2, …, tn),
  the joint pdf of {X(t1), X(t2), …, X(tn)} is given
- This joint pdf is written as
  f(x1, x2, …, xn; t1, t2, …, tn)

- In order to completely specify a random process,
  one must give f(x1, x2, …, xn; t1, t2, …, tn) for all
  possible values of {xi}, {ti}, and for all n. This is
  obviously quite difficult in general
First-Order Statistics of Random Processes

- The first-order statistics is simply the PDF of the
  random variable at one particular time
  - f(x; t) = first-order density of X(t)
  - F(x; t) = P(X(t) ≤ x), first-order distribution of X(t)

- Mean
  E[X(t0)] = ∫_{−∞}^{∞} x fX(x; t0) dx = mX(t0)

- Variance
  E[ (X(t0) − mX(t0))² ] = σX²(t0)


Second-Order Statistics of Random Processes

- Second-order statistics means the joint PDF of X(t1) and
  X(t2) for all choices of t1 and t2
- Auto-correlation function: let t1 = t and t2 = t + τ; then
  RX(t; τ) = E[X(t) X(t + τ)]
           = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f(x1, x2; t, t + τ) dx1 dx2

- The physical meaning of RX(t; τ) is a measure of the
  relationship between X(t) and X(t + τ) (correlation
  within a process)
- In general, the autocorrelation function is a function of both
  t and τ



Example

- Given the stochastic process X(t) = A cos(2πf0 t + Θ), where
  Θ is a random variable uniformly distributed from 0 to 2π
- At each t, X(t) can be viewed as a function of Θ
- The mean is
  E[X(t)] = ∫_{0}^{2π} A cos(2πf0 t + θ) (1/2π) dθ = 0
- The auto-correlation is
  RX(t, t + τ) = E[A² cos(2πf0 t + Θ) cos(2πf0 (t + τ) + Θ)]
               = (A²/2) cos(2πf0 τ)
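A Monte-Carlo sketch of this example (assuming, as is standard, X(t) = A·cos(2πf0·t + Θ) with Θ uniform on [0, 2π)); the ensemble mean is ≈ 0 and the correlation matches (A²/2)·cos(2πf0·τ):

```python
# Ensemble averages of the random-phase sinusoid.
import numpy as np

rng = np.random.default_rng(3)
A, f0 = 1.0, 5.0
theta = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

def X(t):
    """Ensemble of process values at time t (one per realization)."""
    return A * np.cos(2.0 * np.pi * f0 * t + theta)

t, tau = 0.13, 0.02
print(X(t).mean())                                # ~ 0
print(np.mean(X(t) * X(t + tau)))                 # empirical R_X
print(0.5 * A**2 * np.cos(2 * np.pi * f0 * tau))  # (A^2/2)cos(2*pi*f0*tau)
```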



Stationary Processes

- A stochastic process is said to be stationary if for any n
  and τ the following holds:
  fX(x1, …, xn; t1, …, tn) = fX(x1, …, xn; t1 + τ, …, tn + τ)   (1)

- Therefore,
  - The first-order statistics is independent of t:
    mean  E{X(t)} = ∫_{−∞}^{∞} x fX(x) dx = mX                  (2)
  - The second-order statistics only depends on the gap
    between t1 and t2:
    autocorrelation function
    RX(t1, t2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fX(x1, x2; t2 − t1) dx1 dx2
               = RX(t2 − t1) = RX(τ),  where τ = t2 − t1        (3)
Wide-Sense Stationary (WSS)

- Engineers often care about the first- and second-
  order statistics only
- A random process is said to be WSS when conditions
  (2) and (3) hold
- A random process is said to be strictly stationary when
  condition (1) holds
- Example: X(t) = A cos(2πf0 t + Θ), where Θ ~ Uniform(0, 2π):
  the mean is constant and RX(τ) = (A²/2) cos(2πf0 τ) only
  depends on the time difference.
  Thus, X(t) is WSS
Averages and Ergodicity

- Ensemble averaging
  X̄(t) ≜ E[X(t)] = ∫_{−∞}^{∞} x p(x; t) dx
  RX(t1, t2) ≜ E[X(t1) X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 p(x1, x2; t1, t2) dx1 dx2

- Time averaging
  <X(t)> ≜ lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt
  <X(t) X(t − τ)> ≜ lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t − τ) dt

- In general, ensemble averages and time averages are not equal

- A random process X(t) is said to be ergodic if all time averages
  and ensemble averages are equal



Random Processes in the Frequency Domain:
Power Spectral Density

- Let X(t) denote a random process and let x(t, n) denote a
  sample function of this process
- Truncate the signal by defining
  xT(t, n) = x(t, n) for |t| ≤ T/2, and 0 otherwise
  in order to get an energy signal
- Performing a Fourier transform on xT(t, n), we get XT(f, n)
- According to Parseval's theorem
  ∫_{−∞}^{∞} xT²(t, n) dt = ∫_{−∞}^{∞} |XT(f, n)|² df

  |XT(f, n)|²: energy spectral density



- The power spectral density is the average energy
  spectral density per unit time, i.e. |XT(f, n)|² / T
- Letting T → ∞, we define the power spectral density of the
  sample function:
  SX(f, n) = lim_{T→∞} |XT(f, n)|² / T
- If we take the ensemble average, the power spectral
  density (PSD) of the random process is
  SX(f) = lim_{T→∞} E[ |XT(f, n)|² ] / T     (4)   [Watts/Hz]

  This is the general definition of power spectral density



PSD of a Stationary Process

Wiener–Khinchin theorem

For a stationary random process X(t), the PSD is equal to
the Fourier transform of the autocorrelation function, i.e.,
SX(f) ↔ RX(τ):
  RX(τ) = ∫_{−∞}^{∞} SX(f) exp(j2πfτ) df
  SX(f) = ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ

- In general, SX(f) is a measure of the relative power in the
  random signal at each frequency component
  RX(0) = ∫_{−∞}^{∞} SX(f) df = total power
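A numerical sketch (our construction: an 8-tap moving average of white noise, whose autocorrelation R(k) = (L − |k|)/L² for |k| < L is known in closed form) checking that the averaged periodogram matches the Fourier transform of R:

```python
# Wiener-Khinchin check for a discrete-time moving-average process.
import numpy as np

rng = np.random.default_rng(4)
n, n_trials, L = 4096, 500, 8

psd_est = np.zeros(n // 2 + 1)
for _ in range(n_trials):
    w = rng.standard_normal(n + L - 1)               # unit-variance white noise
    x = np.convolve(w, np.ones(L) / L, mode="valid") # length-n MA output
    psd_est += np.abs(np.fft.rfft(x)) ** 2 / n       # periodogram |X_T|^2 / T
psd_est /= n_trials

# Fourier transform of the analytic autocorrelation R(k)
k = np.arange(-(L - 1), L)
R = (L - np.abs(k)) / L**2
f = np.fft.rfftfreq(n)                               # cycles/sample
psd_theory = sum(Rk * np.cos(2 * np.pi * f * kk) for Rk, kk in zip(R, k))

print(np.max(np.abs(psd_est - psd_theory)))  # small; shrinks with averaging
```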



Gaussian Process

- The importance of Gaussian processes in communication
  systems is due to the fact that thermal noise in electronic
  devices can be closely modeled by a Gaussian process
- Definition:
  - A random process X(t) is a Gaussian process if for all n
    and all (t1, t2, …, tn), the random variables {X(t1),
    X(t2), …, X(tn)} have a joint Gaussian density function



Properties of Gaussian Processes

- If a Gaussian random process is wide-sense
  stationary, then it is also strictly stationary
- Any sample point from a Gaussian random process is
  a Gaussian random variable
- If the input to a linear system is a Gaussian random
  process, then the output is also a Gaussian random
  process.



Random Process Transmission Through Linear Systems

- Consider a linear system with impulse response h(t):

  X(t) → [ h(t) ] → Y(t)

  Y(t) = X(t) * h(t) = ∫_{−∞}^{∞} h(τ) X(t − τ) dτ

- The mean of the output random process Y(t):
  mY(t) = E[Y(t)] = ∫_{−∞}^{∞} h(τ) E[X(t − τ)] dτ
                  = ∫_{−∞}^{∞} h(τ) mX(t − τ) dτ

  If X(t) is WSS:
  mY = mX ∫_{−∞}^{∞} h(τ) dτ = mX · H(0)

  where H(0) is the zero-frequency response of the system


- The autocorrelation of Y(t):
  RY(t, u) = E[Y(t) Y(u)]
           = E[ ∫_{−∞}^{∞} h(τ1) X(t − τ1) dτ1 · ∫_{−∞}^{∞} h(τ2) X(u − τ2) dτ2 ]
           = ∫_{−∞}^{∞} h(τ1) dτ1 ∫_{−∞}^{∞} h(τ2) E[X(t − τ1) X(u − τ2)] dτ2

  If X(t) is WSS, with τ = t − u:
  RY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2

  If the input is a WSS random process, the output
  is also a WSS random process
Relation Between the Input and Output PSDs

- Autocorrelation of Y(t):
  RY(τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ1) h(τ2) RX(τ − τ1 + τ2) dτ1 dτ2
        = ∫_{−∞}^{∞} h(τ2) dτ2 ∫_{−∞}^{∞} h(τ1) RX(τ + τ2 − τ1) dτ1
        = ∫_{−∞}^{∞} h(τ2) [h(τ + τ2) * RX(τ + τ2)] dτ2
        = h(−τ) * h(τ) * RX(τ)

- PSD of Y(t):
  SY(f) = |H(f)|² SX(f)

  X(t), SX(f) → [ h(t) ] → Y(t), SY(f) = |H(f)|² SX(f)    Key result



Noise

- Noise is a critical component in the analysis of the performance
  of communication receivers
- Often assumed to be Gaussian and stationary
- The mean is taken to be zero, while the autocorrelation is usually
  specified by the power spectral density
- The noise is white noise when all frequency components appear
  with equal power ("white" is used as in white light, for a
  similar reason):
  Sn(f) = N0/2        Rn(τ) = (N0/2) δ(τ)

  White noise is completely uncorrelated!

  (Figure: flat PSD Sn(f) = N0/2 at all f; autocorrelation Rn(τ)
  is an impulse at τ = 0)
Bandlimited Noise

  White noise → [ filter, bandwidth B Hz ] → bandlimited white noise n(t)

- In most applications
  N0 = kT = 4.14 × 10⁻²¹ W/Hz = −174 dBm/Hz

  At what sampling rate can we sample the noise
  to get uncorrelated realizations?
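A sketch of the answer (assuming an ideal low-pass filter of bandwidth B): the output autocorrelation is R(τ) = N0·B·sinc(2Bτ), which is zero at every τ = k/(2B), so sampling at rate 2B gives uncorrelated samples:

```python
# Autocorrelation of ideally bandlimited white noise at offsets k/(2B).
import numpy as np

N0, B = 2.0, 100.0                  # PSD level N0/2 and bandwidth (assumed)
tau = np.arange(1, 5) / (2 * B)     # candidate sampling offsets k/(2B)
R = N0 * B * np.sinc(2 * B * tau)   # np.sinc(x) = sin(pi*x)/(pi*x)
print(R)                            # 0 at every multiple of 1/(2B)
```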
Narrowband Random Process

- The bandwidth of the signal is limited to a narrow
  band around a central frequency fc >> 0

  (Figure: PSD S(ω) concentrated around ±fc)

- Canonical form of a narrowband process:
  X(t) = XI(t) cos(2πf0 t) − XQ(t) sin(2πf0 t)
  where XI(t) is the in-phase component and XQ(t) the
  quadrature component


Narrowband Noise

- Let n(t) be a zero-mean, stationary noise:
  n(t) = nc(t) cos ω0t − ns(t) sin ω0t

- Find the statistics of nc(t) and ns(t)
- Result 1:
  E{n(t)} = E{nc(t)} = E{ns(t)} = 0

- Proof:
  E[n(t)] = E[nc(t)] cos ω0t − E[ns(t)] sin ω0t
  Since n(t) is stationary and zero-mean for any t, we have
  E[n(t)] = 0
  Thus: E{nc(t)} = E{ns(t)} = 0
- Result 2:
  Snc(f) = Sns(f) = { Sn(f − f0) + Sn(f + f0),   |f| ≤ B/2
                    { 0,                          otherwise

- Proof (block diagram):
  n(t) × 2cos ω0t  → [ lowpass HL(f), gain 1 over |f| ≤ B/2 ] → nc(t)
  n(t) × (−2sin ω0t) → [ lowpass HL(f), gain 1 over |f| ≤ B/2 ] → ns(t)


- Result 3: for the same t, nc(t) and ns(t) are
  uncorrelated (independent in the Gaussian case):
  Rnc,ns(0) = 0

- Result 4:
  E{n²(t)} = E{nc²(t)} = E{ns²(t)} = σ²

- Result 5: if n(t) is a Gaussian process, so are nc(t)
  and ns(t)



Envelope and Phase

- Angular representation of n(t):
  n(t) = R(t) cos[ω0t + φ(t)]
  where
  R(t) = √( nc²(t) + ns²(t) )                      envelope
  φ(t) = tan⁻¹( ns(t) / nc(t) ),  0 ≤ φ(t) ≤ 2π    phase

  (Figure: a narrowband waveform n(t) oscillating at roughly f0
  inside a slowly varying envelope R(t); the envelope varies on a
  time scale of about 1/B, the carrier on about 1/f0)


- Let n(t) be a zero-mean, stationary Gaussian process;
  find the statistics of the envelope and phase
- Result:
  - The envelope follows a Rayleigh distribution while the
    phase follows a uniform distribution:
    f(R) = ∫_{0}^{2π} f(R, φ) dφ = (R/σ²) exp(−R²/(2σ²)),  R ≥ 0
    f(φ) = ∫_{0}^{∞} f(R, φ) dR = 1/(2π),  0 ≤ φ ≤ 2π
    Proof?
- For the same t, the envelope variable R and phase
  variable φ are independent (but not the two processes)
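A simulation sketch (assuming nc and ns are i.i.d. zero-mean Gaussians with variance σ², per Results 1–5): the envelope matches the Rayleigh moments E[R] = σ√(π/2) and E[R²] = 2σ², and the phase is symmetric:

```python
# Envelope and phase of Gaussian in-phase/quadrature components.
import numpy as np

rng = np.random.default_rng(6)
sigma = 1.5
nc = rng.normal(0.0, sigma, 1_000_000)
ns = rng.normal(0.0, sigma, 1_000_000)

R = np.hypot(nc, ns)        # envelope sqrt(nc^2 + ns^2)
phi = np.arctan2(ns, nc)    # phase

print(R.mean(), sigma * np.sqrt(np.pi / 2))  # Rayleigh: E[R] = sigma*sqrt(pi/2)
print((R**2).mean(), 2 * sigma**2)           # Rayleigh: E[R^2] = 2*sigma^2
print(phi.mean())                            # ~ 0 -> symmetric (uniform) phase
```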



Homework 1

- Textbook Chapter 2: 2.7(3)(4), 2.13(6)(13)(16)
- Textbook Chapter 5: 5.5, 5.15, 5.22, 5.28, 5.44, 5.49
- Due: in class on Sep. 28 (next Wednesday)
