
Definition of a Random Variable

Random Variable [m-w.org]: a variable that is itself a function of the result of a statistical experiment in which each outcome has a definite probability of occurrence

Copyright © Syed Ali Khayam 2009


Definition of a Random Variable
A random variable is a mapping from an outcome s of a random experiment to a real number

X : S → SX ⊂ R

(S is the domain; SX is the range)
SX is called the image of X


Definition of a Random Variable

X : S → SX ⊂ R

[Figure: a coin-toss random experiment; the random variable X(s) maps the outcomes head and tail of the sample space onto the real numbers 0 and 1 in Sx ⊂ R]


Definition of a Random Variable

X : S → SX ⊂ R

[Figure: a die-roll random experiment; the random variable X(s) maps the outcomes of the sample space S onto the real numbers 1, 2, 3, 4, 5, 6 in Sx]

Image courtesy of www.buzzle.com/


Definition of a Random Variable
More than one outcome can be mapped to the same real number
X : S → SX ⊂ R

[Figure: a random experiment in which several outcomes of the sample space are mapped by the random variable X(s) onto the same real numbers 0 and 1 in Sx]

Image courtesy of www.buzzle.com/


Types of Random Variables
Discrete random variables: have a countable (finite or infinite) image
Sx = {0, 1}
Sx = {…, -3, -2, -1, 0, 1, 2, 3, …}
Continuous random variables: have an uncountable image
Sx = (0, 1]
Sx = R
Mixed random variables: have an image which contains continuous and discrete parts
Sx = {0} U (0, 1]

We will mostly focus on discrete and continuous random variables


Discrete Random Variables


Probability Mass Function
The Probability Mass Function (pmf) or the discrete probability density function provides the probability of a particular point in the sample space of a discrete random variable (rv)
For a countable SX = {a0, a1, …, an}, the pmf is the set of probabilities
pX(ak) = Pr{X = ak}, k = 0, 1, …, n

[Figure: a pmf pX(ak) over the image SX = {a0=0, a1=1, …, a5=5}]


Properties of a PMF
P1: 0 ≤ pX(ak) ≤ 1

P2: ∑_{ak ∈ SX} pX(ak) = 1

[Figure: a pmf pX(ak) over the image SX = {a0=0, a1=1, …, a5=5}]
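Both properties are easy to check mechanically. A minimal Python sketch, using illustrative probabilities on SX = {0, 1, …, 5} (not the values from the slide's figure):

```python
# Check the two pmf properties for an example pmf on SX = {0, 1, ..., 5}.
# The probabilities below are illustrative, not taken from the slide's figure.
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.2, 4: 0.1, 5: 0.1}

# P1: every probability lies in [0, 1]
p1_holds = all(0.0 <= p <= 1.0 for p in pmf.values())

# P2: the probabilities sum to 1
p2_holds = abs(sum(pmf.values()) - 1.0) < 1e-12
```

Any dictionary of point probabilities that passes both checks is a valid pmf.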


Cumulative Distribution Function (CDF) of Discrete Random Variable
The Cumulative Distribution Function (CDF) for a discrete rv is defined as:

FX(t) = Pr{X ≤ t} = ∑_{x ≤ t} pX(x)

[Figure: a pmf pX(x) over x = 1, 2, 3, 4 and the corresponding staircase CDF FX(x)]


Cumulative Distribution Function (CDF) of Discrete Random Variable
The CDF can be used to find the probability of a range of values in a rv’s image:

Pr{a < X ≤ b} = Pr{X ≤ b} − Pr{X ≤ a}
= FX(b) − FX(a)

[Figure: a pmf pX(x) over x = 1, 2, 3, 4 and the corresponding staircase CDF FX(x)]
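A short Python sketch of this identity, assuming an illustrative uniform pmf on {1, 2, 3, 4} (the slide's figure does not give exact heights):

```python
# Build the CDF from a pmf and check Pr{a < X <= b} = F(b) - F(a).
# The pmf is illustrative: uniform on {1, 2, 3, 4}.
pmf = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

def cdf(t):
    """F_X(t) = sum of pmf over x <= t."""
    return sum(p for x, p in pmf.items() if x <= t)

# Pr{1 < X <= 3} directly from the pmf ...
direct = pmf[2] + pmf[3]
# ... and via the CDF difference
via_cdf = cdf(3) - cdf(1)
```

Both routes give the same probability, as the identity requires.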


Properties of a CDF
F1: 0 ≤ FX(x) ≤ 1, ∀ −∞ < x < ∞

F2: a ≤ b ⇒ FX(a) ≤ FX(b)

[Figure: a pmf pX(x) over x = 1, 2, 3, 4 and the corresponding staircase CDF FX(x)]


Properties of a CDF

F3: lim_{x→−∞} FX(x) = 0
    lim_{x→∞} FX(x) = 1

F4: FX(x_{i+1}) = FX(x_i) + pX(x_{i+1})

[Figure: a pmf pX(x) over x = 1, 2, 3, 4 and the corresponding staircase CDF FX(x)]


Expected Value of a Discrete Random Variable
The expected value, expectation or mean of a discrete rv is the “average” value of the random variable

What is the average value of a random variable whose image is SX = {1, 6, 7, 9, 13}?


Expected Value of a Discrete Random Variable
What is the average value of the following random variable whose image is SX = {1, 6, 7, 9, 13}?

If your answer is 7.2 then you assumed that all of the values in the rv’s image have equal weights:

1 × (1/5) + 6 × (1/5) + 7 × (1/5) + 9 × (1/5) + 13 × (1/5) = (1/5)(1 + 6 + 7 + 9 + 13) = 7.2


Expected Value of a Discrete Random Variable
Mathematically, the expected value of a discrete random variable is:

E{X} = µX = ∑_{ak ∈ SX} ak Pr{X = ak}

In some cases, the expected value does not converge
In such cases, we say that the expected value does not exist
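The equal-weights answer from the previous slide can be reproduced directly from this formula. A minimal sketch using the slide's five-point image with equal probabilities:

```python
# E{X} = sum over a_k of a_k * Pr{X = a_k}, for the 5-point image in the slides.
values = [1, 6, 7, 9, 13]
probs = [0.2] * 5          # equal weights, as in the "7.2" example

mean = sum(a * p for a, p in zip(values, probs))   # the weighted average, 7.2
```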


Variance of a Random Variable
Variance of a rv is a measure of “the amount of variation of a rv around its mean”

Intuitively, which of the following discrete rvs has a higher variance?

[Figure: two pmfs over x = 1, 6, 7, 9, 13; pX(x) with E{X} = 3.87 and probabilities 0.5, 0.4, 0.033, 0.033, 0.033; qX(x) with E{X} = 5.2 and probabilities 0.4, 0.2, 0.2, 0.1, 0.1]


Variance of a Random Variable
Intuitively, which of the following discrete rvs has a higher variance?
qX(x) has a higher variance because it varies more around its mean than pX(x)

[Figure: the same two pmfs, pX(x) with E{X} = 3.87 and qX(x) with E{X} = 5.2]


Variance of a Random Variable
Mathematically, the variance of a discrete rv is defined as:

var{X} = σX² = ∑_{ak ∈ SX} (ak − E{X})² Pr{X = ak}


Variance of a Random Variable

[Figure: pX(x) with E{X} = 3.87 and var{X} = 9.91; qX(x) with E{X} = 5.2 and var{X} = 15.36]

var{X} = (1 − 3.87)² × 0.5 + (6 − 3.87)² × 0.4 + (7 − 3.87)² × 0.033 + (9 − 3.87)² × 0.033 + (13 − 3.87)² × 0.033 = 9.91

var{X} = (1 − 5.2)² × 0.4 + (6 − 5.2)² × 0.2 + (7 − 5.2)² × 0.2 + (9 − 5.2)² × 0.1 + (13 − 5.2)² × 0.1 = 15.36
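The qX(x) computation above can be verified in a few lines of Python, using the values and probabilities given on the slide:

```python
# Reproduce the variance computation for q_X(x) from the slide:
# values 1, 6, 7, 9, 13 with probabilities 0.4, 0.2, 0.2, 0.1, 0.1.
values = [1, 6, 7, 9, 13]
probs = [0.4, 0.2, 0.2, 0.1, 0.1]

mean = sum(a * p for a, p in zip(values, probs))               # E{X} = 5.2
var = sum((a - mean) ** 2 * p for a, p in zip(values, probs))  # var{X} = 15.36
```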


Standard Deviation of a Random Variable
In many scenarios, we use the square root of the variance, called the standard deviation:

σX = √var{X}


Standard Deviation of a Random Variable

[Figure: pX(x) with E{X} = 3.87 and σX = 3.14; qX(x) with E{X} = 5.2 and σX = 3.92]

σX = √var{X} = √9.91 = 3.14    σX = √var{X} = √15.36 = 3.92


Discrete Uniform Random Variable
A discrete uniform rv, D, has a finite image and all the elements of the image have equal probabilities

[Figure: a pmf Pr{D = k} with equal height 1/n at each of the points k = x1, x2, x3, …, xn]

What do you think are the expected value and standard deviation of this random variable?


Bernoulli Random Variable
A Bernoulli Random Variable is defined on a single event A
This rv is based on an experiment called a Bernoulli trial
The experiment is performed and the event A either happens or does not happen
Thus the sample space of a Bernoulli rv is binary

[Figure: Bernoulli trial: toss a coin; the Bernoulli random variable X(s) maps A = head and Ac = not head = tail onto the real numbers 0 and 1 in Sx]

Copyright © Syed Ali Khayam 2009
Bernoulli Random Variable

[Figure: another Bernoulli trial: the Pakistan cricket team plays Australia; the Bernoulli random variable X(s) maps A = Pak wins and Ac = Pak loses onto the real numbers 0 and 1]

Image Courtesies of http://www.tribuneindia.com and images.google.com


Bernoulli Random Variable
Of course, Pr{A} = 0 for this experiment

[Figure: the same Bernoulli trial: the Pakistan cricket team plays Australia, with A = Pak wins and Ac = Pak loses mapped onto the real numbers 0 and 1]

Image Courtesies of http://www.tribuneindia.com and images.google.com
Bernoulli Random Variable
Typical examples of Bernoulli rvs in communication:
Transmit a bit over a wireless channel
Outcomes:
0 → bit is received error-free
1 → bit is not received error-free => bit is received with errors
Transmit a packet over the Internet
Outcomes:
0 → packet is received
1 → packet is not received => packet is lost en route due to congestion


Bernoulli Random Variable
Sample space of a Bernoulli rv, I, is binary
Both outcomes are mapped to real numbers
Traditionally, I(A) = 1 and I(Ac) = 0 are used to represent a Bernoulli rv’s outcomes
The pmf of I is:
Pr{I = 1} = p
Pr{I = 0} = 1 − p

[Figure: the pmf Pr{I = k}, with height 1 − p at I = 0 and height p at I = 1]


Bernoulli Random Variable
Example:
Consider the experiment of a fair coin toss. What is the expected value and the variance of this Bernoulli rv?

[Figure: the pmf Pr{I = k}, with height 0.5 at I = 0 and at I = 1]


Bernoulli Random Variable
Example:
Consider the experiment of a fair coin toss. What is the expected value and the variance of this Bernoulli rv?
Since the coin toss is fair: Pr{I = 1} = 0.5 and Pr{I = 0} = 0.5
E{I} = (1) × (0.5) + (0) × (0.5) = 0.5
var{I} = (1 − 0.5)² × (0.5) + (0 − 0.5)² × (0.5) = 0.25

[Figure: the fair-coin pmf Pr{I = k}, with µI = 0.5 and σI = 0.5]
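The arithmetic above generalizes to any p. A minimal sketch checking the fair-coin case p = 0.5 from the slide:

```python
# Mean and variance of a Bernoulli rv I with Pr{I = 1} = p,
# checked for the fair coin (p = 0.5).
p = 0.5
mean = 1 * p + 0 * (1 - p)                             # E{I} = p
var = (1 - mean) ** 2 * p + (0 - mean) ** 2 * (1 - p)  # var{I} = p(1 - p)
```

In general E{I} = p and var{I} = p(1 − p), which peaks at p = 0.5.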


Bernoulli Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error 0.1. What is the expected value and the variance of this Bernoulli rv?

[Figure: the pmf Pr{I = k}, with height 0.1 at I = 0 and height 0.9 at I = 1]


Bernoulli Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error 0.1. What is the expected value and the variance of this Bernoulli rv?
Here I = 1 marks error-free reception, so Pr{I = 1} = 0.9 and Pr{I = 0} = 0.1
E{I} = (1) × (0.9) + (0) × (0.1) = 0.9
var{I} = (1 − 0.9)² × (0.9) + (0 − 0.9)² × (0.1) = 0.09
Note that the variance of this pmf is smaller than the variance of the coin toss pmf

[Figure: the pmf Pr{I = k}, with µI = 0.9 and σI = 0.3]


Binomial Random Variable
Consider a collection of n independent Bernoulli trials

A Binomial Random Variable is the total number of occurrences of an event A in this collection of independent Bernoulli trials
Send n bits, count the number of bits that are received with errors
Send n packets, count the number of packets that are not lost


Binomial Random Variable
If Ij(A) = 1 and Ij(Ac) = 0, j = 1, 2, …, n, are used to represent the outcomes of the Bernoulli trials then the Binomial Random Variable, X, is

X = ∑_{j=1}^{n} Ij

So what is the image of X?


Binomial Random Variable
If Ij(A) = 1 and Ij(Ac) = 0, j = 1, 2, …, n, are used to represent the outcomes of the Bernoulli trials then the Binomial Random Variable, X, is

X = ∑_{j=1}^{n} Ij

So what is the image of X?

SX = {0, 1, 2, 3, …, n}


Binomial Random Variable
Let Pr{Ij(A) = 1} = p and Pr{Ij(A) = 0} = 1 − p

Then a Binomial rv X is defined as:

X = ∑_{j=1}^{n} Ij

And the pmf of a binomial rv is

Pr{X = k} = (n choose k) p^k (1 − p)^(n−k)


Binomial Random Variable
The pmf of a Binomial rv X is:

b(k; n, p) = Pr{X = k} = (n choose k) p^k (1 − p)^(n−k)

This pmf gives the probability that exactly k out of a total of n Bernoulli trials were successes

Be very careful about the definition of a success
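This pmf can be coded directly from the formula. A minimal sketch, with n = 10 and p = 0.3 chosen purely for illustration:

```python
from math import comb

def binom_pmf(k, n, p):
    """b(k; n, p) = (n choose k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Being a pmf, it must sum to 1 over the image SX = {0, 1, ..., n}
total = sum(binom_pmf(k, 10, 0.3) for k in range(11))
```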


Binomial Random Variable

[Figure: binomial pmfs Pr{X = k} for several values of n and p]

Image courtesy of Wikipedia article on Binomial Distribution


Binomial Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error p. Find the probability that a packet of n bits is received with one or more errors.


Binomial Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error p. Find the probability that a packet of n bits is received with one or more errors.

Pr{Ij = 1} = p and Pr{Ij = 0} = 1 − p, j = 1, 2, …, n

Then the probability that a packet is received with errors is
Pr{(X = 1) ∪ (X = 2) ∪ (X = 3) ∪ … ∪ (X = n)}


Binomial Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error p. Find the probability that a packet of n bits is received with one or more errors.

Pr{pkt with errs} = Pr{(X = 1) ∪ (X = 2) ∪ (X = 3) ∪ … ∪ (X = n)}
= Pr{X = 1} + Pr{X = 2} + Pr{X = 3} + … + Pr{X = n}
= (n choose 1) p^1 (1 − p)^(n−1) + (n choose 2) p^2 (1 − p)^(n−2) + (n choose 3) p^3 (1 − p)^(n−3) + … + (n choose n) p^n (1 − p)^(n−n)
= ∑_{i=1}^{n} (n choose i) p^i (1 − p)^(n−i)


Binomial Random Variable
Example:
Consider a binary symmetric channel with probability of bit-error p. Find the probability that a packet of n bits is received with one or more errors.

Pr{pkt with errs} = ∑_{i=1}^{n} (n choose i) p^i (1 − p)^(n−i)

There is an easier way to compute the same probability by noting that:
Pr{pkt with errs} = Pr{X > 0} = 1 − Pr{X ≤ 0} = 1 − Pr{X = 0}
= 1 − (n choose 0) p^0 (1 − p)^(n−0)
= 1 − (1 − p)^n
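The term-by-term sum and the complement shortcut must agree. A quick numerical check, with an illustrative bit-error probability p and packet size n:

```python
from math import comb

# Check that sum_{i=1}^{n} (n choose i) p^i (1-p)^(n-i) equals 1 - (1-p)^n.
# p and n are illustrative values, not fixed by the example.
p, n = 0.1, 8

term_sum = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(1, n + 1))
shortcut = 1 - (1 - p)**n
```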


Connection Between Bernoulli and Binomial RVs

[Figure: a Bernoulli trial with event A in S maps to the Bernoulli rv I, with I(A) = 1, I(Ac) = 0 and Pr{I = 1} = p, Pr{I = 0} = 1 − p; n Bernoulli trials give the Binomial rv X with Pr{X = k} = (n choose k) p^k (1 − p)^(n−k)]


Geometric Random Variable
A Geometric Random Variable, M, is the number of Bernoulli trials until the first occurrence of an event A

The experiment is stopped as soon as event A is observed

The image of a Geometric random variable is infinite but countable
SM = {1, 2, 3, …}


Geometric Random Variable
The pmf of a Geometric Random Variable, Z, is

pZ(k) = Pr{Z = k} = (1 − p)^(k−1) p

OR

pZ(k) = Pr{Z = k} = (1 − p)^k p

(the latter is also called the modified geometric pmf)

Depending on whether the success trial is included in the total count or not
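Both conventions can be coded side by side. A minimal sketch with an illustrative p, checking that each version sums to 1 over its image (truncated far into the tail):

```python
# The two geometric pmf conventions from the slide, with illustrative p.
p = 0.25

def geom_pmf(k, p):
    """Pr{Z = k} = (1-p)^(k-1) p, k = 1, 2, ...  (success trial included)."""
    return (1 - p) ** (k - 1) * p

def mod_geom_pmf(k, p):
    """Pr{Z = k} = (1-p)^k p, k = 0, 1, ...  (counts failures before success)."""
    return (1 - p) ** k * p

# Both versions sum to 1 (checked over a long truncated range)
s1 = sum(geom_pmf(k, p) for k in range(1, 200))
s2 = sum(mod_geom_pmf(k, p) for k in range(0, 200))
```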


Geometric Random Variable

[Figure: geometric pmfs for several values of p]

Image courtesy of Wikipedia article on Geometric Distribution


Connection between Geometric and Binomial RVs
Can we find the probability of a Geometric random variable using the Binomial random variable?

Pr{Z = k} = (1 − p)^(k−1) p    Geometric

Pr{X = k} = (n choose k) p^k (1 − p)^(n−k)    Binomial


Connection between Geometric and Binomial RVs
Can we find the probability of a Geometric random variable using the Binomial random variable?

For n = k trials and exactly one success:
Pr{X = 1} = (k choose 1) p^1 (1 − p)^(k−1)

(1/k) Pr{X = 1} = (1 − p)^(k−1) p = Pr{Z = k}


Connection between Geometric and Binomial RVs
In k Bernoulli trials, there are k ways in which you can have 1 success and k − 1 failures

Since the Binomial random variable counts successes and failures, it sums and considers all the k outcomes together


Connection between Geometric and Binomial RVs
For k = 4, a Binomial random variable X jointly considers the outcomes 0001, 0010, 0100, 1000
Pr{X = 1} = Pr{(0001) U (0010) U (0100) U (1000)}
Pr{X = 1} = Pr{0001} + Pr{0010} + Pr{0100} + Pr{1000}
Since the underlying Bernoulli trials are independent:
Pr{X = 1} = (k) Pr{one success in k trials}

For the Geometric random variable, we are only interested in one of these outcomes, 0001
=> Pr{Z = k} = Pr{X = 1} / k


Memoryless Property
It can be shown that the Geometric rv satisfies the memoryless property

The memoryless property is satisfied when:

Pr{Z = j + k | Z ≥ k} = Pr{Z = j}

This property is also called the Markov Property


Memoryless Property
The memoryless property is satisfied when:
Pr{Z = j + k | Z ≥ k} = Pr{Z = j}

For the Geometric rv (using the modified geometric pmf), the RHS of the above equation is:

Pr{Z = j} = (1 − p)^j p

Thus to prove that the Geometric rv satisfies the memoryless property, we need to show that
Pr{Z = j + k | Z ≥ k} = (1 − p)^j p


Memoryless Property
To prove that the Geometric rv satisfies the memoryless property, we need to show that
Pr{Z = j + k | Z ≥ k} = (1 − p)^j p

Let’s expand the LHS using the definition of conditional probability:

Pr{Z = j + k | Z ≥ k} = Pr{Z = j + k ∩ Z ≥ k} / Pr{Z ≥ k}
= Pr{Z = j + k} / Pr{Z ≥ k}
= (1 − p)^(j+k) p / [(1 − p)^k p + (1 − p)^(k+1) p + (1 − p)^(k+2) p + …]


Memoryless Property
Continued from last page:

Pr{Z = j + k | Z ≥ k} = (1 − p)^(j+k) p / [(1 − p)^k p + (1 − p)^(k+1) p + (1 − p)^(k+2) p + …]
= (1 − p)^(j+k) p / [(1 − p)^k ((1 − p)^0 p + (1 − p)^1 p + (1 − p)^2 p + …)]
= (1 − p)^(j+k) p / (1 − p)^k
= (1 − p)^j p

(the bracketed sum over all possible values of the Geometric pmf equals 1)


Memoryless Property
It can also be shown that the Geometric rv is the only discrete random variable that satisfies the memoryless property

Because of the memoryless property, the Geometric rv can be thought of as the number of failures between two successes, OR the inter-arrival time between successes
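The algebraic proof above can also be checked numerically. A minimal sketch using the modified geometric pmf, with illustrative values of p, j and k:

```python
# Numerical check of the memoryless property for the (modified) geometric pmf:
# Pr{Z = j+k | Z >= k} = Pr{Z = j}.  p, j, k are illustrative.
p, j, k = 0.3, 4, 6

pmf = lambda m: (1 - p) ** m * p               # Pr{Z = m}, m = 0, 1, 2, ...

pr_joint = pmf(j + k)                          # Pr{Z = j+k, Z >= k} = Pr{Z = j+k}
pr_tail = sum(pmf(m) for m in range(k, 500))   # Pr{Z >= k}, equals (1-p)^k
conditional = pr_joint / pr_tail               # should equal Pr{Z = j}
```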


Poisson Random Variable
A Poisson Random Variable, N, is the number of occurrences or arrivals of an event A in a time interval of fixed length
E.g.: the number of packets that arrive at a wireless access point per second

The Poisson rv assumes that:
The average number of arrivals per time interval, denoted by λ, is known
The arrivals are independent of each other

[Figure: Poisson arrivals scattered over a Poisson time interval]


Poisson Random Variable
A good way of understanding the Poisson rv is through the Binomial rv

Let us divide the fixed Poisson time interval into n very small sub-intervals of length ∆t
∆t is so small that only one arrival is possible within each sub-interval

[Figure: the Poisson time interval divided into sub-intervals 1, 2, 3, 4, …, n of length ∆t, with Poisson arrivals]


Poisson Random Variable
Now the experiment can be treated as a Binomial rv where a success corresponds to the presence of an arrival in a sub-interval

Given that λ is the average arrival rate, what is the probability of success (success corresponds to an arrival)?

[Figure: the Poisson time interval divided into n sub-intervals of length ∆t]


Poisson Random Variable
Now the experiment can be treated as a Binomial rv where a success corresponds to the presence of an arrival in a sub-interval

Given that λ is the average arrival rate, what is the probability of success (success corresponds to an arrival)?
Pr{success} = p = λ/n

[Figure: the Poisson time interval divided into n sub-intervals of length ∆t]


Poisson Random Variable
Now the experiment can be treated as a Binomial rv with parameter λ/n:

Pr{X = k} = b(k; n, λ/n) = (n choose k) (λ/n)^k (1 − λ/n)^(n−k)
= [n! / ((n − k)! k!)] (λ/n)^k (1 − λ/n)^(n−k)
= (n/n) ((n − 1)/n) ((n − 2)/n) ⋯ ((n − k + 1)/n) (λ^k / k!) (1 − λ/n)^(n−k)

[Figure: the Poisson time interval divided into n sub-intervals of length ∆t]
Poisson Random Variable
Thus for large n, the Binomial pmf approaches the Poisson pmf:

lim_{n→∞} Pr{X = k} = lim_{n→∞} b(k; n, λ/n) = (λ^k / k!) e^(−λ)

This is called the Poisson approximation of the Binomial random variable

[Figure: the Poisson time interval divided into n sub-intervals of length ∆t]
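The limit can be seen numerically by making n large while holding λ fixed. A minimal sketch, with λ and k chosen for illustration:

```python
from math import comb, exp, factorial

# Compare b(k; n, lambda/n) with the Poisson pmf for large n.
# lam and k are illustrative values.
lam, k = 3.0, 2
poisson = lam**k * exp(-lam) / factorial(k)

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

approx = binom_pmf(k, 10000, lam / 10000)   # n large, p = lambda/n
gap = abs(approx - poisson)                 # shrinks as n grows
```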
Poisson Random Variable
In general, the pmf of a Poisson random variable is

Pr{N = k} = λ^k exp{−λ} / k!

where λ is the average arrival rate (arrivals/time interval)


End of Discrete Random Variables
At this point, we conclude our discussion on examples of discrete random variables

We will cover some other aspects of discrete rvs after looking at examples of continuous rvs


Continuous Random Variables


Continuous Random Variables
Recall that a continuous random variable has an uncountable image
Sx = (0, 1]
Sx = R


Probability Density Function (pdf) of a Continuous Random Variable
The Probability Density Function (pdf) of a continuous random variable, fX(x), is a continuous function of x ∈ Sx

Properties of pdf:
f1: fX(x) ≥ 0, ∀x

f2: ∫_{−∞}^{∞} fX(x) dx = 1

[Figure: a pdf fX(x) over the image SX = (0, n]]


Probability Density Function (pdf) of a Continuous Random Variable
f3:
Pr{a ≤ X ≤ b} = Pr{a ≤ X < b} = Pr{a < X ≤ b} = Pr{a < X < b}
= ∫_{a}^{b} fX(x) dx

[Figure: a pdf fX(x) with the area between a and b shaded]


Cumulative Distribution Function (CDF) of a Continuous Random Variable
The CDF of a continuous rv is computed by integrating the pdf:

FX(t) = Pr{X ≤ t} = ∫_{−∞}^{t} fX(x) dx

Conversely, we also have:

fX(t) = (d/dt) FX(t)

[Figure: a smooth CDF FX(x) increasing from 0 to 1]
Properties of the CDF of a Continuous Random Variable
F1: 0 ≤ FX(x) ≤ 1, −∞ < x < ∞

F2: a ≤ b ⇒ FX(a) ≤ FX(b)

F3: lim_{x→−∞} FX(x) = 0
    lim_{x→∞} FX(x) = 1

[Figure: a smooth CDF FX(x)]


Properties of the CDF of a Continuous Random Variable

F4: Pr{X = a} = Pr{a ≤ X ≤ a} = ∫_{a}^{a} fX(x) dx = 0


Properties of the CDF of a Continuous Random Variable
F4:

Pr{a − ε/2 ≤ X ≤ a + ε/2} = ∫_{a−ε/2}^{a+ε/2} fX(x) dx ≈ ε fX(a)

for small values of ε.

[Figure: the probability of falling in the narrow interval (a − ε/2, a + ε/2)]


Properties of the CDF of a Continuous Random Variable
F5:
Pr{a ≤ X ≤ b} = Pr{a ≤ X < b} = Pr{a < X ≤ b} = Pr{a < X < b}
= FX(b) − FX(a)

[Figure: a CDF FX(x) with FX(a) and FX(b) marked; the gap FX(b) − FX(a) is Pr{a < x < b}]


Expected Value of a Continuous Random Variable
The expected value of a continuous random variable is:

E{X} = µX = ∫_{−∞}^{∞} x fX(x) dx

[Figure: two pdfs fX(x) with their means E{X} marked]


Variance of a Continuous Random Variable
The variance of a continuous random variable is:

var{X} = σX² = ∫_{−∞}^{∞} (x − µX)² fX(x) dx

[Figure: a wide pdf fX(x) and a narrow pdf fY(y), with σX > σY]


Exponential Random Variable
An exponential random variable measures the inter-arrival time between two occurrences of an event
E.g., the inter-arrival time of packets arriving in a router’s queue

Recall that a Poisson distribution counts the total number of arrivals

In that sense, the exponential rv is the inter-arrival time between two Poisson arrivals

[Figure: Poisson arrivals with inter-arrival times τ1, τ2, τ3, τ4, τ5, τ6]
Exponential Random Variable

[Figure: inter-arrival times over a window of 60 time units for arrival rates λ = 60, λ = 20 and λ = 6; the higher the rate, the shorter the inter-arrival times]


Exponential Random Variable
The exponential rv is the inter-arrival time between two Poisson arrivals

Remembering the definition of a Poisson rv, what should be the average (expected) inter-arrival time for an exponential rv, E?

[Figure: Poisson arrivals with inter-arrival times τ1, …, τ6]
Exponential Random Variable
The exponential rv is the inter-arrival time between two Poisson arrivals

Remembering the definition of a Poisson rv, what should be the average (expected) inter-arrival time for an exponential rv, E?
E{E} = 1/λ

[Figure: Poisson arrivals with inter-arrival times τ1, …, τ6]
Exponential Random Variable
An exponential rv is the inter-arrival time between two Poisson arrivals
Let’s look at a sub-interval (0, t] of the Poisson window
On average, how many arrivals will take place in the (0, t] sub-interval?

[Figure: a window T containing Poisson arrivals, divided into sub-intervals of length t]
Exponential Random Variable
On average, how many arrivals will take place in the (0, t] sub-interval?
Arrivals in the (0, t] sub-interval = λt/T
In other words, λt arrivals will take place in a (0, t] sub-interval per window T

[Figure: a window T containing Poisson arrivals, divided into sub-intervals of length t]


Exponential Random Variable
λt arrivals will take place in a (0, t] sub-interval per window T

Pr{E > t} = Pr{N = 0} = (λt/T)^0 e^(−λt/T) / 0! = e^(−λt/T)
⇒ FE(t) = 1 − e^(−λt/T)
fE(t) = (d/dt) FE(t) = (λ/T) e^(−λt/T)

[Figure: a window T containing Poisson arrivals, divided into sub-intervals of length t]


Exponential Random Variable
Considering a window of T = 1 unit time, the probability density and cumulative distribution functions of an exponential rv are:

fE(t) = (d/dt) FE(t) = λe^(−λt)
FE(t) = 1 − e^(−λt)

Like the Poisson rv, the Exponential rv is characterized by a single parameter λ

[Figure: Poisson arrivals with inter-arrival times τ1, …, τ6]
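The pdf/CDF pair can be sanity-checked by numerically integrating fE(t) and comparing with the closed-form FE(t). A minimal sketch, with illustrative λ and t:

```python
from math import exp

# Numerically integrate f_E(t) = lam * exp(-lam * t) from 0 to t and
# compare with F_E(t) = 1 - exp(-lam * t).  lam and t are illustrative.
lam, t = 2.0, 1.5

dt = 1e-5
# Midpoint-rule integration of the pdf over (0, t]
integral = sum(lam * exp(-lam * (i + 0.5) * dt) * dt for i in range(int(t / dt)))
closed_form = 1 - exp(-lam * t)
```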


Exponential Random Variable
The exponential rv is a limiting case of the geometric rv

Like the geometric rv, the exponential rv also possesses the memoryless Markov property (see textbook, Pg. 113)
Pr{E > j + k | E ≥ k} = Pr{E > j}

It can be shown that the exponential is the only continuous distribution with the memoryless property


Exponential Random Variable
Memoryless property:

RHS = Pr{E > j} = 1 − Pr{E ≤ j}
= 1 − (1 − e^(−λj))
= e^(−λj)

LHS = Pr{E > j + k | E ≥ k} = Pr{E > j + k ∩ E ≥ k} / Pr{E ≥ k}
= Pr{E > j + k} / Pr{E ≥ k}
= (1 − FE(j + k)) / (1 − FE(k))
= (1 − (1 − e^(−λ(j+k)))) / (1 − (1 − e^(−λk)))
= e^(−λj) e^(−λk) / e^(−λk)
= e^(−λj)
Uniform Distribution
A uniform rv has a uniform distribution over an interval [a, b]:

fU(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise

FU(x) = 0 for x < a
FU(x) = (x − a)/(b − a) for a ≤ x ≤ b
FU(x) = 1 for x > b

This distribution is also called the Rectangular distribution
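The piecewise pdf and CDF translate directly into code. A minimal sketch with illustrative endpoints a and b:

```python
# pdf and CDF of a uniform rv on [a, b]; the endpoints are illustrative.
a, b = 2.0, 6.0

def pdf(x):
    return 1 / (b - a) if a <= x <= b else 0.0

def cdf(x):
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

# Pr{3 < U <= 5} via the CDF difference
prob = cdf(5) - cdf(3)
```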


Uniform Distribution

[Figure: the uniform pdf fU(x), of constant height 1/(b − a) between a and b]


Gaussian or Normal Distribution
The Gaussian or Normal Distribution is by far the most famous continuous distribution

It has the famous bell-shaped curve which can be used to approximate many other distributions


Gaussian Distribution
The Gaussian rv is defined as:

fN(x) = (1 / (σ√(2π))) e^(−(1/2)((x − µ)/σ)²)

The Gaussian distribution is completely characterized by its mean and variance and is generally written as N(µ, σ²)
A Gaussian distribution with zero mean and unit variance is called a standard normal distribution


Gaussian Distribution

[Figure: Gaussian pdfs fN(x) for several (µ, σ) pairs]

Image Courtesy of Wikipedia


Gaussian Distribution
The CDF of a Gaussian rv does not have a closed form:

FN(x) = (1 / (σ√(2π))) ∫_{−∞}^{x} e^(−(1/2)((t − µ)/σ)²) dt

Tables of CDF values are provided for each (µ, σ) pair
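Although there is no closed form in elementary functions, the Gaussian CDF can be evaluated through the error function, which is how tables (and libraries) compute it. A minimal sketch with illustrative µ and σ:

```python
from math import erf, sqrt

# Gaussian CDF via the error function:
# F(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2)))).
# mu and sigma are illustrative values.
mu, sigma = 10.0, 2.0

def gaussian_cdf(x):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# About 99.7% of the probability lies within 3 standard deviations of the mean
within_3_sigma = gaussian_cdf(mu + 3 * sigma) - gaussian_cdf(mu - 3 * sigma)
```

The last line previews the 3-sigma rule discussed on the next slide.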


Gaussian Distribution
A Gaussian distribution does not deviate from its mean by more than 3 standard deviations 99.7% of the time

[Figure: the Gaussian pdf with the ±1σ, ±2σ and ±3σ regions marked]

Image Courtesy of Wikipedia
