
Chapter 5.

Probability and
Random Process

Review of Probability



Introduction

⚫ Deterministic signals: the class of signals that may be modeled as completely specified functions of time.
⚫ A signal is “random” if it is not possible to predict its precise
value in advance.
⚫ A random process consists of an ensemble (family) of sample
functions, each of which varies randomly with time.
⚫ A random variable is obtained by observing a random process at
a fixed instant of time.



Probability
⚫ Probability theory is rooted in phenomena that can be modeled by an experiment whose outcome cannot be predicted with certainty.
⚫ Familiar random experiments include tossing a coin, throwing a die, and drawing a card from a deck.
⚫ Sample space: The set of all possible outcomes is called the sample space, denoted by Ω.
⚫ The outcomes are denoted by ω's, and each ω lies in Ω. Events are subsets of the sample space.
Example: In the experiment of tossing a coin, the possible outcomes are a head and a tail:
Ω = {Head, Tail}, where ω1 = Head and ω2 = Tail
⚫ A sample space is discrete if the number of its elements is finite or countably infinite; otherwise it is a non-discrete sample space.
Probability cont…
⚫ A sample space corresponding to a random experiment whose outcomes form an infinite, uncountable set is a non-discrete or continuous sample space.
Example: choosing a real number from 0 to 1.
⚫ Disjoint events or mutually exclusive events: Two events are mutually exclusive if the occurrence of one event precludes the occurrence of the other event, i.e., their intersection is empty.
Example: In the experiment of throwing a die, the event of getting an odd outcome and the event of getting an outcome divisible by 2.



Probability cont…
⚫ Probability P can be defined as a set function assigning nonnegative values to all events E such that the following conditions are satisfied:
1. 0 ≤ P(E) ≤ 1 for all events E
2. P(Ω) = 1
3. For disjoint events E1, E2, E3, ..., P(E1 ∪ E2 ∪ E3 ∪ ...) = P(E1) + P(E2) + P(E3) + ...

⚫ Some important properties of the probability are
P(Eᶜ) = 1 − P(E)
P(∅) = 0
P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2)
If E1 ⊂ E2, then P(E1) ≤ P(E2)


Conditional Probability
⚫ Let P(E1|E2) denote the probability of event E1, given that event E2 has occurred. The probability P(E1|E2) is called the conditional probability of E1 given E2, which is given by
P(E1|E2) = P(E1 ∩ E2) / P(E2), provided P(E2) > 0

⚫ If it happens that P(E1|E2) = P(E1), then knowledge of E2 does not change the probability of E1. In this case, the events E1 and E2 are said to be independent. For independent events,
P(E1 ∩ E2) = P(E1)P(E2)
Example 5.1.1
In an experiment of throwing a fair die, find the probability of getting an outcome
a) greater than 3 (event A)

b) an even number (event B)

c) Also find P(A|B)



Example 5.1.1

a) event greater than 3

b) getting an even number

c) P(A|B)
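These finite-sample-space calculations are easy to verify by direct enumeration. A minimal Python sketch (standard library only; the event definitions follow the example above):

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}            # sample space of a fair die
A = {w for w in omega if w > 3}        # event: outcome greater than 3
B = {w for w in omega if w % 2 == 0}   # event: outcome is even

def prob(event):
    # Equally likely outcomes: P(E) = |E| / |Omega|
    return Fraction(len(event), len(omega))

print(prob(A))                 # 1/2
print(prob(B))                 # 1/2
print(prob(A & B) / prob(B))   # P(A|B) = P(A ∩ B)/P(B) = 2/3
```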



Total Probability Theorem and Bayes' Theorem
⚫ Total Probability Theorem
If the events {Ei}, i = 1, 2, ..., n, are disjoint and their union is the entire sample space, then they make a partition of the sample space Ω. Then, if for an event A we have the conditional probabilities {P(A|Ei)}, i = 1, 2, ..., n, P(A) can be obtained by applying the total probability theorem, stated as
P(A) = Σ(i=1..n) P(Ei) P(A|Ei)

⚫ Bayes' Theorem
Bayes' rule gives the conditional probabilities P(Ei|A) by the relation
P(Ei|A) = P(A|Ei) P(Ei) / Σ(j=1..n) P(A|Ej) P(Ej)


Example 5.1.2
In a certain city, 50% of the population drive to work, 30%
take the subway, and 20% take the bus. The probability of
being late for those who drive is 10%, for those who take the
subway is 3%, and for those who take the bus is 5%.
1. What is the probability that an individual in this city will be
late for work?
2. If an individual is late for work, what is the probability that
he drove to work?



Example 5.1.2
Solution: Let D, S, and B denote the events of an individual driving, taking the subway, or taking the bus. Then P(D) = 0.5, P(S) = 0.3, and P(B) = 0.2. If L denotes the event of being late, then, from the assumptions, we have
P(L|D) = 0.1; P(L|S) = 0.03; P(L|B) = 0.05.
1. From the total probability theorem,
P(L) = P(D)P(L|D) + P(S)P(L|S) + P(B)P(L|B)
= 0.5 × 0.1 + 0.3 × 0.03 + 0.2 × 0.05
= 0.069
2. Applying Bayes' rule, we have
P(D|L) = P(D)P(L|D) / P(L) = 0.05/0.069 ≈ 0.725
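The same two-step calculation, total probability followed by Bayes' rule, can be scripted directly. A minimal Python sketch using only the priors and conditional probabilities stated in the example:

```python
# Priors over modes of transport and P(late | mode), from Example 5.1.2
prior = {"drive": 0.5, "subway": 0.3, "bus": 0.2}
p_late = {"drive": 0.10, "subway": 0.03, "bus": 0.05}

# Total probability theorem: P(L) = sum over modes of P(E_i) P(L|E_i)
p_L = sum(prior[m] * p_late[m] for m in prior)
print(p_L)                                     # 0.069

# Bayes' rule: P(drive|L) = P(drive) P(L|drive) / P(L)
print(prior["drive"] * p_late["drive"] / p_L)  # ~0.725
```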


Example 5.1.3
In a binary communication system, the input bits transmitted over the channel are either 0 or 1 with probabilities 0.3 and 0.7, respectively. When a bit is transmitted over the channel, it can be either received correctly or incorrectly (due to channel noise). Let us assume that if a 0 is transmitted, the probability of it being received in error (i.e., being received as 1) is 0.01, and if a 1 is transmitted, the probability of it being received in error (i.e., being received as 0) is 0.1.
1. What is the probability that the output of this channel is 1?
2. Assuming we have observed a 1 at the output of this channel, what is the probability that the input to the channel was a 1?



Example 5.1.3
Solution: Let X denote the input and Y denote the output. From the problem assumptions, we have

P(X = 0) = 0.3; P(X = 1) = 0.7;

P(Y = 0|X = 0) = 0.99; P(Y = 0|X = 1) = 0.1;
P(Y = 1|X = 0) = 0.01; P(Y = 1|X = 1) = 0.9.



1. From the total probability theorem,
P(Y = 1) = P(Y = 1, X = 0) + P(Y = 1, X = 1)
= P(X = 0)P(Y = 1|X = 0) + P(X = 1)P(Y = 1|X = 1)
= 0.3 × 0.01 + 0.7 × 0.9
= 0.003 + 0.63
= 0.633

2. From Bayes' rule we have
P(X = 1|Y = 1) = P(X = 1)P(Y = 1|X = 1) / P(Y = 1) = 0.63/0.633 ≈ 0.995


Random Variables



5.1.3 Random Variable
⚫ A random variable is a mapping from the sample space Ω to the set of
real numbers. OR
⚫ A random variable is an assignment of real numbers to the outcomes of a
random experiment. A schematic diagram representing a random
variable is given in figure 5.1 below

Figure 5.1 A random variable as a mapping from Ω to R


Random Variable cont…
⚫ Example: Throw a die once
Random Variable X = "The score shown on the top face".
X could be 1, 2, 3, 4, 5 or 6
So the Sample Space Ω = {1, 2, 3, 4, 5, 6}
⚫ Example: Tossing a fair coin

Sample Space Ω = { Head, Tail }


X is the random variable that takes the value 1 when a head shows up and the value −1 when a tail shows up:
X(Head) = 1, X(Tail) = −1


Random Variable Cont…
⚫ Random variables are denoted by capital letters X, Y, and so on.
⚫ Individual values of the random variable X are X(ω).
⚫ A random variable is discrete if the range of its values is either finite or countably infinite. This range is usually denoted by {xi}.
⚫ A continuous random variable is one in which the range of values is a continuum.
⚫ The cumulative distribution function, or CDF, is defined as
FX(x) = P(ω ∈ Ω : X(ω) ≤ x)
which can be simply written as FX(x) = P(X ≤ x)



Properties of CDF

⚫ FX(x) is a nondecreasing function of x.
⚫ lim(x→−∞) FX(x) = 0 and lim(x→+∞) FX(x) = 1.
⚫ FX(x) is continuous from the right.
⚫ P(a < X ≤ b) = FX(b) − FX(a).
⚫ For discrete random variables, FX(x) is a staircase function.
⚫ A random variable is continuous if FX(x) is a continuous function.
⚫ A random variable is mixed if it is neither discrete nor continuous.



Figure 5.2 CDF for a discrete random variable

Figure 5.3 CDF for a continuous random variable

Figure 5.4 CDF for a mixed random variable



Probability density function (PDF)
If the distribution function is continuously differentiable, then
fX(x) = d FX(x) / dx
and fX(x) is called the probability density function (PDF) of the random variable X. The probability density function is always a nonnegative function with a total area of one unit.



Properties of PDF

⚫ fX(x) ≥ 0.
⚫ ∫−∞^∞ fX(x) dx = 1.
⚫ P(a < X ≤ b) = ∫a^b fX(x) dx = FX(b) − FX(a).
⚫ FX(x) = ∫−∞^x fX(u) du.

For discrete random variables, we generally define the probability mass function, or PMF, given by {pi}, where pi = P(X = xi). Obviously, for all i we have pi ≥ 0 and Σi pi = 1.


Example: Coin Flips
When a coin is flipped 3 times, the sample space is
Ω = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}

If X is the number of heads, then X is a random variable whose


probability distribution is as follows

Possible Events X P(X)


TTT 0 1/8
HTT, THT, TTH 1 3/8
HHT, HTH, THH 2 3/8
HHH 3 1/8
Total 1
Important Random Variables
⚫ Bernoulli random variable.
⚫ Binomial random variable.
⚫ Uniform random variable
⚫ Gaussian or normal random variable.



Bernoulli random variable
⚫ This is a discrete random variable taking two values 1 and 0, with
probabilities p and 1 - p.
⚫ A Bernoulli random variable is a good model for a binary-data generator.
⚫ When binary data is transmitted over a communication channel some bits
are received in error.
⚫ We can model an error by modulo-2 addition of a 1 to the input bit, thus,
we change a 0 into a 1 and a 1 into a 0.
⚫ Therefore, a Bernoulli random variable can also be employed to model the
channel errors

Figure 5.5 The PMF for the Bernoulli random variable
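As an illustration of the modulo-2 error model described above, the following Python sketch flips each transmitted bit with probability p; the bit pattern and p = 0.01 are arbitrary choices for the demonstration:

```python
import random

def bsc(bits, p):
    # Binary symmetric channel: XOR each bit with a Bernoulli(p) error bit,
    # so a 0 becomes a 1 (and vice versa) with probability p.
    return [b ^ (random.random() < p) for b in bits]

random.seed(1)
tx = [0, 1, 1, 0, 1, 0, 0, 1] * 1000    # 8000 transmitted bits
rx = bsc(tx, p=0.01)
errors = sum(t != r for t, r in zip(tx, rx))
print(errors / len(tx))                  # empirical error rate, close to 0.01
```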

Binomial random variable
⚫ This is a discrete random variable giving the number of 1's in a sequence of n independent Bernoulli trials. The PMF is given by
P(X = k) = (n choose k) p^k (1 − p)^(n−k), k = 0, 1, ..., n

⚫ Example: it is used to model the total number of bits received in error when a sequence of n bits is transmitted over a channel with a bit-error probability of p.

Figure 5.6 The PMF for the binomial random variable
Example 5.1.5
Assume 10,000 bits are transmitted over a channel in which the error probability is 10⁻³. What is the probability that the total number of errors is less than 3?



Example 5.1.5
Assume 10,000 bits are transmitted over a channel in which the error probability is 10⁻³. What is the probability that the total number of errors is less than 3?
Solution: In this example, n = 10,000, p = 0.001, and we are looking for P(X < 3). We have
P(X < 3) = Σ(k=0..2) (10000 choose k) (0.001)^k (0.999)^(10000−k) ≈ 0.0028
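This number is small enough to check both exactly and with the Poisson approximation (λ = np = 10). A Python sketch (math.comb requires Python 3.8+):

```python
import math

n, p = 10_000, 1e-3

# Exact binomial: P(X < 3) = P(X = 0) + P(X = 1) + P(X = 2)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))

# Poisson approximation with lambda = n * p
lam = n * p
approx = math.exp(-lam) * sum(lam**k / math.factorial(k) for k in range(3))

print(exact, approx)   # both ~0.0028
```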


Uniform random variable
This is a continuous random variable taking values between a and b with equal probability for intervals of equal length. The density function is given by
fX(x) = 1/(b − a) for a ≤ x ≤ b; 0 otherwise

Figure 5.7 The PDF for the uniform random variable



Gaussian or normal random variable
⚫ The Gaussian, or normal, random variable is a continuous random variable described by the density function
fX(x) = (1/(√(2π) σ)) e^(−(x − m)²/(2σ²))

⚫ There are two parameters involved in the definition of the Gaussian random variable.
⚫ The parameter m is called the mean and can assume any finite value.
⚫ The parameter σ is called the standard deviation and can assume any finite and positive value.
⚫ The square of the standard deviation, i.e., σ², is called the variance.
A Gaussian random variable with mean m and variance σ² is denoted by N(m, σ²). The random variable N(0, 1) is usually called standard normal.

Figure 5.8 The PDF for the Gaussian random variable



Q-Function
⚫ Assuming that X is a standard normal random variable, we define the function Q(x) as P(X > x). The Q-function is given by the relation
Q(x) = (1/√(2π)) ∫x^∞ e^(−t²/2) dt
⚫ This function represents the area under the tail of a standard normal random variable.
⚫ The Q-function is a decreasing function.
⚫ This function is well tabulated and frequently used in analyzing the performance of communication systems.
Q(x) using a calculator (fx-991ES): Mode 3 → AC; Shift(1) → 5 → 3:(x) = Q(x)

Figure 5.9 The Q-function as the area under the tail of a standard normal random variable
Q(x) satisfies the relations
Q(−x) = 1 − Q(x), Q(0) = 1/2, Q(∞) = 0

Two important upper bounds on the Q-function are widely used to find bounds on the error probability of various communication systems. These bounds are given as
Q(x) ≤ (1/2) e^(−x²/2), x ≥ 0
Q(x) ≤ (1/(√(2π) x)) e^(−x²/2), x > 0

A frequently used lower bound is
Q(x) ≥ (1 − 1/x²) (1/(√(2π) x)) e^(−x²/2), x > 0

For an N(m, σ²) random variable,
P(X > x) = Q((x − m)/σ)
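In Python, Q(x) can be evaluated with the standard library alone, since Q(x) = ½ erfc(x/√2). A minimal sketch, which also checks one tabulated value and the first upper bound:

```python
import math

def Q(x):
    # Q(x) = P(X > x) for X ~ N(0, 1), via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

print(Q(1.0))                          # ~0.1587
print(Q(1.0) <= 0.5 * math.exp(-0.5))  # upper bound (1/2)e^(-x^2/2): True

# For X ~ N(m, sigma^2): P(X > x) = Q((x - m) / sigma)
m, sigma = 1.0, 2.0
print(Q((5 - m) / sigma) - Q((7 - m) / sigma))  # P(5 < X < 7), cf. Example 5.1.6
```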
Example 5.1.6
⚫ X is a Gaussian random variable with mean 1 and variance 4. Find the probability that X is between 5 and 7.
Solution: P(5 < X < 7) = Q((5 − 1)/2) − Q((7 − 1)/2) = Q(2) − Q(3) ≈ 0.0228 − 0.0013 = 0.0214


5.1.4 Function of a Random Variable
⚫ A linear function of a Gaussian random variable is itself a Gaussian
random variable.
⚫ If X is N(m, σ²) and the random variable Y is a linear function of X such that Y = aX + b, then we can show that Y is also a Gaussian random variable, of the form N(am + b, a²σ²).
Example 5.1.8
Assume X is a N(3, 6) random variable. Find the density function of Y = −2X + 3.
Here a = −2, b = 3, m = 3, σ² = 6.
Mean of Y: mY = am + b = −6 + 3 = −3
Variance of Y: σY² = a²σ² = 4 × 6 = 24
Hence Y is N(−3, 24)
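A quick Monte Carlo check of this rule (a Python sketch; the sample size is an arbitrary choice):

```python
import random, statistics

random.seed(0)
# X ~ N(3, 6); note random.gauss takes the standard deviation, not the variance
xs = [random.gauss(3, 6 ** 0.5) for _ in range(200_000)]
ys = [-2 * x + 3 for x in xs]

print(statistics.mean(ys))       # ~ -3, the mean of N(-3, 24)
print(statistics.variance(ys))   # ~ 24, the variance of N(-3, 24)
```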
Statistical Averages
The mean, expected value, or expectation of a continuous random variable X is defined as
E(X) = ∫−∞^∞ x fX(x) dx

If Y = g(X), then
E(Y) = E(g(X)) = ∫−∞^∞ g(x) fX(x) dx

The mean, expected value, or expectation of a discrete random variable X is defined as
E(X) = Σi xi P(X = xi)

If Y = g(X), then E(Y) = Σi g(xi) P(X = xi)

In general, the nth moment of a random variable X is defined as
E(Xⁿ) = ∫−∞^∞ xⁿ fX(x) dx
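For a discrete random variable these expectations are simple sums. A small Python sketch computing the mean, second moment, and variance of a fair die with exact fractions:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 6) for x in range(1, 7)}   # fair die

mean = sum(x * p for x, p in pmf.items())        # E(X) = 7/2
second = sum(x**2 * p for x, p in pmf.items())   # E(X^2) = 91/6
variance = second - mean**2                      # 35/12

print(mean, second, variance)
```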


Variance of X
⚫ In the special case where g(X) = (X − E(X))²,
⚫ E(g(X)) is called the variance of X:
E(g(X)) = E[(X − E(X))²]
= E[X² + (E(X))² − 2X E(X)]
= E(X²) + (E(X))² − 2E(X)E(X)
= E(X²) + (E(X))² − 2(E(X))²
E(g(X)) = E(X²) − (E(X))²
⚫ The variance is denoted by σ², and its square root σ is called the standard deviation:
σ² = E(X²) − (E(X))²



⚫ Variance is a measure of the spread of the density function of X.
⚫ If the variance is small, the random variable is concentrated around its mean and, in a sense, is "less random." However, if the variance is large, the random variable is highly spread out and hence less predictable.

For any constant c, these relations hold:
E(c) = c, E(cX) = cE(X), E(X + c) = E(X) + c

Variance has these properties:
VAR(c) = 0, VAR(cX) = c²VAR(X), VAR(X + c) = VAR(X)


Mean and variance of the important random variables:
Bernoulli(p): mean p, variance p(1 − p)
Binomial(n, p): mean np, variance np(1 − p)
Uniform on [a, b]: mean (a + b)/2, variance (b − a)²/12
Gaussian N(m, σ²): mean m, variance σ²


Multiple Random Variables
Let X and Y represent two random variables defined on the same sample space Ω. For these two random variables, we can define the joint CDF as
FX,Y(x, y) = P(ω ∈ Ω : X(ω) ≤ x, Y(ω) ≤ y)

or simply as
FX,Y(x, y) = P(X ≤ x, Y ≤ y)

The joint PDF, denoted by fX,Y(x, y), is defined by
fX,Y(x, y) = ∂²FX,Y(x, y) / ∂x ∂y


The basic properties of the joint and marginal CDFs and PDFs can be summarized with the following relations:
FX(x) = FX,Y(x, ∞), FY(y) = FX,Y(∞, y)
fX(x) = ∫−∞^∞ fX,Y(x, y) dy, fY(y) = ∫−∞^∞ fX,Y(x, y) dx
∫−∞^∞ ∫−∞^∞ fX,Y(x, y) dx dy = 1


The conditional probability density function of the random variable Y, given that the value of the random variable X is equal to x, is denoted by fY|X(y|x) and defined as
fY|X(y|x) = fX,Y(x, y) / fX(x)

If the density function after the knowledge of X is the same as the density function before the knowledge of X, then the random variables are said to be statistically independent. For statistically independent random variables,
fX,Y(x, y) = fX(x) fY(y)

If g(X, Y) is a function of X and Y, then the expected value of g(X, Y) is obtained from
E(g(X, Y)) = ∫−∞^∞ ∫−∞^∞ g(x, y) fX,Y(x, y) dx dy


Example 5.1.9

Let X be N(3, 4) and Y be N(−2, 6). Assuming X and Y are independent, determine fX,Y(x, y).
Solution: For independent random variables X and Y,
fX,Y(x, y) = fX(x) fY(y)
= (1/(2√(2π))) e^(−(x−3)²/8) · (1/(√(12π))) e^(−(y+2)²/12)
= (1/(4π√6)) exp(−(x − 3)²/8 − (y + 2)²/12)


Correlation and Covariance
⚫ E(XY) is called the correlation of X and Y, which is given by
E(XY) = ∫−∞^∞ ∫−∞^∞ x y fX,Y(x, y) dx dy

⚫ If X and Y are independent, then their correlation is
E(XY) = E(X) E(Y)

⚫ The covariance of X and Y is defined as
COV(X, Y) = E(XY) − E(X) E(Y)

⚫ If COV(X, Y) = 0, i.e., if E(XY) = E(X) E(Y), then X and Y are called uncorrelated random variables.
The correlation coefficient is the normalized version of the covariance; it is denoted by ρX,Y and is defined as
ρX,Y = COV(X, Y) / (σX σY)

⚫ It is obvious that if X and Y are independent, then COV(X, Y) = 0.
⚫ Independence implies lack of correlation, but lack of correlation does not generally imply independence.
⚫ The covariance might be zero, yet the random variables may still be statistically dependent.
⚫ Properties of the expected value and variance applied to multiple random variables (in these relations, the ci's are constants):
E(Σi ci Xi) = Σi ci E(Xi)
VAR(Σi ci Xi) = Σi ci² VAR(Xi) + Σi Σj≠i ci cj COV(Xi, Xj)


Example 5.1.10
Assuming that X is N(3, 4), Y is N(−1, 2), and X and Y are independent, determine the covariance of the two random variables
Z = X − Y and W = 2X + 3Y.
WZ = (X − Y)(2X + 3Y) = 2X² + XY − 3Y²



Example 5.1.10
Assuming that X is N(3, 4), Y is N(−1, 2), and X and Y are independent, determine the covariance of the two random variables
Z = X − Y and W = 2X + 3Y.
COV(W, Z) = E(WZ) − E(W)E(Z)
WZ = (X − Y)(2X + 3Y) = 2X² + XY − 3Y²
E(Z) = E(X) − E(Y) = 3 + 1 = 4
E(W) = 2E(X) + 3E(Y) = 6 − 3 = 3
E(X²) = VAR(X) + (E(X))² = 4 + 9 = 13
E(Y²) = VAR(Y) + (E(Y))² = 2 + 1 = 3, and E(XY) = E(X)E(Y) = −3
COV(W, Z) = E(WZ) − E(W)E(Z)
= E(2X² − 3Y² + XY) − E(Z)E(W)
= 2E(X²) − 3E(Y²) + E(XY) − E(Z)E(W)
= 2 × 13 − 3 × 3 − 3 − 4 × 3
= 2
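A Monte Carlo cross-check of this covariance (a Python sketch; the sample size is arbitrary):

```python
import random

random.seed(0)
N = 500_000
xs = [random.gauss(3, 2) for _ in range(N)]          # X ~ N(3, 4)
ys = [random.gauss(-1, 2 ** 0.5) for _ in range(N)]  # Y ~ N(-1, 2)
ws = [2 * x + 3 * y for x, y in zip(xs, ys)]         # W = 2X + 3Y
zs = [x - y for x, y in zip(xs, ys)]                 # Z = X - Y

mw, mz = sum(ws) / N, sum(zs) / N
cov = sum((w - mw) * (z - mz) for w, z in zip(ws, zs)) / N
print(cov)                                           # ~2 = COV(W, Z)
```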
Jointly Gaussian Random Variables
Jointly Gaussian, or binormal, random variables X and Y are distributed according to a joint PDF of the form
fX,Y(x, y) = (1/(2πσ1σ2√(1 − ρ²))) exp{ −1/(2(1 − ρ²)) [ (x − m1)²/σ1² − 2ρ(x − m1)(y − m2)/(σ1σ2) + (y − m2)²/σ2² ] }
where m1, m2, σ1², and σ2² are the means and variances of X and Y, respectively, and ρ is their correlation coefficient.
When two random variables X and Y are distributed according to a binormal distribution, then X and Y are normal random variables, and the conditional densities f(x|y) and f(y|x) are also Gaussian.
The definition of two jointly Gaussian random variables can be extended to more random variables.
Properties of Jointly Gaussian Random Variables
⚫ If n random variables are jointly Gaussian, any subset of them is also
distributed according to a jointly Gaussian distribution of the appropriate
size. In particular, all individual random variables are Gaussian.
⚫ Jointly Gaussian random variables are completely characterized by the means of all random variables m1, m2, ..., mn and the set of all covariances COV(Xi, Xj) for all 1 ≤ i ≤ n and 1 ≤ j ≤ n. These so-called second-order properties completely describe the random variables.
⚫ Any set of linear combinations of (X1, X2, ..., Xn) is itself jointly Gaussian. In particular, any linear combination of the Xi's is a Gaussian random variable.
⚫ Two uncorrelated jointly Gaussian random variables are independent.
Therefore, for jointly Gaussian random variables, independence and
uncorrelatedness are equivalent. As previously stated, this is not true for
general random variables.
5.1.6 Sums of Random Variables
Law of large numbers and central limit theorem
⚫ Law of large numbers: It states that if the random variables X1, X2, ..., Xn are uncorrelated, with the same mean mX and variance σ² < ∞, or the Xi's are i.i.d. (independent and identically distributed) random variables, then for any ε > 0,
lim(n→∞) P(|Y − mX| > ε) = 0
where
Y = (1/n) Σ(i=1..n) Xi

This means that the average converges (in probability) to the expected value.

⚫ Central limit theorem: This theorem states that if the Xi's are i.i.d. (independent and identically distributed) random variables, each with mean m and variance σ², then Y = (1/n) Σ(i=1..n) Xi converges to N(m, σ²/n). The central limit theorem states that the sum of many i.i.d. random variables converges to a Gaussian random variable. This theorem explains why thermal noise follows a Gaussian distribution.
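Both theorems are easy to observe numerically. The Python sketch below averages n uniform random variables (mean 0.5, variance 1/12) and checks that the sample averages concentrate around 0.5 with variance close to (1/12)/n:

```python
import random, statistics

random.seed(0)
n, trials = 100, 20_000
averages = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]

print(statistics.mean(averages))       # ~0.5, law of large numbers
print(statistics.variance(averages))   # ~(1/12)/n ≈ 0.000833, CLT scaling
```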



Example: A random variable X takes the values 0 and 1 with probabilities α and β = 1 − α, respectively. Find the mean and variance of X.
Solution: It is given that P(X = 0) = α and P(X = 1) = 1 − α = β.
We know that the mean of X is
E(X) = Σ xi P(xi)
= 0·α + 1·β = β

E(X²) = Σ xi² P(xi)
= 0²·α + 1²·β = β

The variance of X is σx² = E(X²) − (E(X))²
= β − β² = β(1 − β)
σx² = αβ
Example: Binary data are transmitted over a noisy communication channel in blocks of 16 binary digits. The probability that a received bit is in error is 0.01. Assume that the errors occurring in the various digit positions of a block are independent.
(a) Find the mean, or average, number of errors per block.

(b) Find the variance of the number of errors per block.

(c) Find the probability that the number of errors per block is greater than or equal to 4.
Solution:
(a) Let X represent the number of errors per block. X has a binomial distribution with n = 16 and p = 0.01. The average number of errors per block is E(X) = np = 0.16.
(b) Variance σx² = np(1 − p) = 0.158.
(c) P(X ≥ 4) = 1 − P(X ≤ 3)
= 1 − [P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)]
≈ 1.6 × 10⁻⁵
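Part (c) is small enough to verify directly from the binomial PMF (a Python sketch):

```python
import math

n, p = 16, 0.01

def pmf(k):
    # Binomial PMF: P(X = k) = C(n, k) p^k (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p_at_most_3 = sum(pmf(k) for k in range(4))
print(1 - p_at_most_3)    # P(X >= 4) ~ 1.6e-05
```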


Example: The pdf of a random variable X is given by
fX(x) = k for a ≤ x ≤ b; 0 otherwise
where k is a constant.
(a) Determine the value of k.
(b) Let a = −1 and b = 2. Calculate P(|X| ≤ c) for c = 0.5.


Example: The pdf of a random variable X is given by
fX(x) = k for a ≤ x ≤ b; 0 otherwise
where k is a constant.
(a) Determine the value of k.
(b) Let a = −1 and b = 2. Calculate P(|X| ≤ c) for c = 0.5.

Solution:
(a) We know that
∫−∞^∞ fX(x) dx = ∫a^b k dx = 1, or k = 1/(b − a)

Hence
fX(x) = 1/(b − a) for a ≤ x ≤ b; 0 otherwise

X is a uniform random variable.


(b) With a = −1 and b = 2, we have
fX(x) = 1/3 for −1 ≤ x ≤ 2; 0 otherwise

P(|X| ≤ 1/2) = P(−1/2 ≤ X ≤ 1/2) = ∫−1/2^1/2 fX(x) dx = ∫−1/2^1/2 (1/3) dx = 1/3



Example 5.10: X is a Gaussian random variable with mean 4 and variance 9, i.e., N(4, 9).
Determine the following probabilities:
1. P(X > 7).
2. P (0 < X < 9).



Example 5.10: X is a Gaussian random variable with mean 4 and variance 9, i.e., N(4, 9).
Determine the following probabilities:
1. P(X > 7).
2. P(0 < X < 9).

1. P(X > 7) = Q((7 − 4)/3) = Q(1) = 0.158
2. P(0 < X < 9) = Q((0 − 4)/3) − Q((9 − 4)/3) = Q(−4/3) − Q(5/3)
= 1 − Q(4/3) − Q(5/3)
≈ 0.861


Example 5.11: The noise voltage in an electric circuit can be modeled as a Gaussian random variable with a mean equal to zero and a variance equal to 10⁻⁸. What is the probability that the value of the noise exceeds 10⁻⁴? What is the probability that it exceeds 4 × 10⁻⁴? What is the probability that the noise value is between −2 × 10⁻⁴ and 10⁻⁴?
Solution: X is a Gaussian random variable with mean zero and variance σ² = 10⁻⁸, so σ = 10⁻⁴. Hence
P(X > x) = Q(x/σ)
P(X > 10⁻⁴) = Q(1) ≈ 0.159
P(X > 4 × 10⁻⁴) = Q(4) ≈ 3.17 × 10⁻⁵
P(−2 × 10⁻⁴ < X < 10⁻⁴) = 1 − Q(1) − Q(2) ≈ 1 − 0.159 − 0.023 = 0.818

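The same numbers can be obtained from the standard library's normal distribution (Python 3.8+; a sketch):

```python
from statistics import NormalDist

X = NormalDist(mu=0.0, sigma=1e-4)   # noise voltage, N(0, 10^-8)

print(1 - X.cdf(1e-4))               # P(X > 10^-4)          ~0.1587
print(1 - X.cdf(4e-4))               # P(X > 4 x 10^-4)      ~3.17e-05
print(X.cdf(1e-4) - X.cdf(-2e-4))    # P(-2e-4 < X < 1e-4)   ~0.8186
```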


Example 5.22: Two random variables X and Y are distributed according to
fX,Y(x, y) = K e^(−x−y) for x ≥ y ≥ 0; 0 otherwise

1. Find the value of the constant K.
2. Find the marginal probability density functions of X and Y.
3. Are X and Y independent?
4. Find fX|Y(x|y).
5. Find E(X|Y = y).
6. Find COV(X, Y) and ρX,Y.
1. We know that
∫−∞^∞ ∫−∞^∞ fX,Y(x, y) dx dy = 1
∫−∞^∞ ∫−∞^∞ fX,Y(x, y) dx dy = ∫0^∞ ∫y^∞ K e^(−x−y) dx dy
= K ∫0^∞ e^(−y) ( ∫y^∞ e^(−x) dx ) dy
= K ∫0^∞ e^(−y) e^(−y) dy = K ∫0^∞ e^(−2y) dy
= K [ e^(−2y)/(−2) ]0^∞ = K/2 = 1
⇒ K = 2


2. The marginal density functions are
fX(x) = ∫0^x 2e^(−x−y) dy = 2e^(−x)(1 − e^(−x)), x ≥ 0
fY(y) = ∫y^∞ 2e^(−x−y) dx = 2e^(−2y), y ≥ 0

3. Two random variables are said to be independent if
fX,Y(x, y) = fX(x) fY(y)

LHS = fX,Y(x, y) = 2e^(−x−y)

RHS = fX(x) fY(y) = 2e^(−x)(1 − e^(−x)) · 2e^(−2y)

LHS ≠ RHS
Hence X and Y are not independent.
4. Conditional density function:
If x < y, then fX|Y(x|y) = 0.

If x ≥ y, then fX|Y(x|y) = fX,Y(x, y)/fY(y) = 2e^(−x−y)/(2e^(−2y)) = e^(−(x−y))

5. E(X|Y = y) = ∫y^∞ x e^(−(x−y)) dx = y + 1

6. Covariance of X and Y:
COV(X, Y) = E(XY) − E(X) E(Y)


Here we are using the standard integral
∫0^∞ yⁿ e^(−ay) dy = n!/aⁿ⁺¹
which gives E(X) = 3/2, E(Y) = 1/2, E(XY) = 1, VAR(X) = 5/4, and VAR(Y) = 1/4.
Hence
COV(X, Y) = E(XY) − E(X)E(Y) = 1 − (3/2)(1/2) = 1/4
and
ρX,Y = COV(X, Y)/(σX σY) = (1/4)/√((5/4)(1/4)) = 1/√5 ≈ 0.447
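These results can be cross-checked by sampling from the joint density: since fY(y) = 2e^(−2y) and fX|Y(x|y) = e^(−(x−y)), Y can be drawn as an exponential with rate 2 and X as Y plus an independent unit-rate exponential (a Python sketch):

```python
import random, statistics

random.seed(0)
N = 400_000
ys = [random.expovariate(2.0) for _ in range(N)]   # Y ~ 2e^{-2y}
xs = [y + random.expovariate(1.0) for y in ys]     # X | Y=y ~ e^{-(x-y)}

mx, my = statistics.mean(xs), statistics.mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N
rho = cov / (statistics.stdev(xs) * statistics.stdev(ys))
print(mx, my)      # ~1.5, ~0.5
print(cov, rho)    # ~0.25, ~0.447 (= 1/sqrt(5))
```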


Example 5.4: Under what conditions can two disjoint events A and B be independent?
Solution: If two events are mutually exclusive (disjoint), then
P(A ∪ B) = P(A) + P(B), which implies that
P(A ∩ B) = 0. If the events are independent, then
P(A ∩ B) = P(A)P(B). Combining these two conditions, we obtain that two disjoint events are independent if
P(A ∩ B) = P(A)P(B) = 0
Thus, at least one of the events should be of zero probability.

