Random Signals and Processes: Chapter 9: Estimation of a Random Variable


Random Signals and Processes

Chapter 9: Estimation of a Random Variable

Dr. Mohammad Rakibul Islam
Professor, EEE Department,
Islamic University of Technology
• Introduction:
• We use observations to calculate an approximate value of a sample value of a
  random variable that has not been observed.
• When the unobserved value is one that has not yet occurred, we refer to the
  estimation as prediction.
• If X is the random variable to be estimated, we adopt the notation X̂ (also a
  random variable) for the estimate. In most of the chapter, we use the mean
  square error
      e = E[(X - X̂)²]
  as a measure of the quality of the estimate.


• Introduction:
• We confine our attention to the following
problems:
– Blind estimation of a random variable
– Estimation of a random variable given an event
– Estimation of a random variable given one other
random variable
– Linear estimation of a random variable given a
random vector
– Linear prediction of one component of a random
vector given other components of the random
vector
• Optimum Estimation Given Another
Random Variable:
– Blind Estimation of X
– Estimation of X Given an Event
– Minimum Mean Square Estimation of X Given
Y
• Linear Estimation of X given Y
• MAP and ML Estimation
• Linear Estimation of Random Variables
from Random Vectors
• Blind Estimation of X
• An experiment produces a random variable X. Prior to performing the
  experiment, what is the best estimate of X? This is the blind estimation
  problem.
• Theorem 9.1
• In the absence of observations, the minimum mean square error estimate of
  random variable X is x̂_B = E[X].
• That is, without observations the best estimate of X is the expected value
  E[X], and the minimum mean square error is e*_X = Var[X].
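To make Theorem 9.1 concrete, here is a minimal numerical sketch (not from the slides; the exponential model is an arbitrary choice): the constant that minimizes the mean square error over simulated samples of X is the sample mean, and the resulting error approaches Var[X].

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)   # samples of X (arbitrary model, E[X] = 2, Var[X] = 4)

x_hat_b = x.mean()                             # blind estimate x_hat_B = E[X]
mse_blind = np.mean((x - x_hat_b) ** 2)        # achieved mean square error

print(f"blind estimate    = {x_hat_b:.3f}")    # close to E[X] = 2
print(f"mean square error = {mse_blind:.3f}")  # close to Var[X] = 4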
• Example 9.1

• Solution:
• Estimation of X Given an Event

• Theorem 9.2
• Given the information that X ∈ A, the minimum mean square error estimate of
  X is x̂_A = E[X | A].
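An illustrative sketch of Theorem 9.2 (not from the slides; the distribution and the event are arbitrary choices): when we only learn that the event A = {X > 5} occurred, the estimate E[X | A] gives a much smaller mean square error on those outcomes than the blind estimate E[X].

import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=100_000)

a = x > 5.0                                  # the observed event A = {X > 5}
x_hat_a = x[a].mean()                        # conditional-mean estimate E[X | A]

mse_event = np.mean((x[a] - x_hat_a) ** 2)   # error when using E[X | A]
mse_blind = np.mean((x[a] - x.mean()) ** 2)  # error when using the blind estimate E[X]
print(f"E[X | A]   = {x_hat_a:.2f}, MSE = {mse_event:.2f}")
print(f"blind E[X] = {x.mean():.2f}, MSE = {mse_blind:.2f}")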

• Example 9.2
• Solution:
• Observation:
• Minimum Mean Square Estimation of X
Given Y:

• Theorem 9.3
• The minimum mean square error estimate of X given the observation Y = y is
  x̂_M(y) = E[X | Y = y].
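A small simulation sketch of Theorem 9.3 (not from the slides; the joint model is an arbitrary choice): E[X | Y = y], approximated here by averaging X within narrow bins of Y, achieves a mean square error close to the noise variance, while the blind estimate does much worse.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
y = rng.uniform(0.0, 1.0, n)
x = y + rng.normal(0.0, 0.3, n)                 # X depends on Y plus noise (arbitrary model)

# Approximate x_hat_M(y) = E[X | Y = y] by averaging X within narrow bins of Y.
bins = np.linspace(0.0, 1.0, 51)
idx = np.digitize(y, bins) - 1
cond_mean = np.array([x[idx == k].mean() for k in range(len(bins) - 1)])
x_hat_m = cond_mean[idx]

mse_cond = np.mean((x - x_hat_m) ** 2)          # close to the noise variance 0.09
mse_blind = np.mean((x - x.mean()) ** 2)        # the blind estimate does worse
print(f"MSE with E[X | Y = y]: {mse_cond:.4f}, blind MSE: {mse_blind:.4f}")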
• Example 9.3:

• Solution:
• Example 9.4:

• Solution:

• Quiz 9.1:
• Solution:
• Solution (continued):
• Linear Estimation of X given Y:
• The previous section derives x̂_M(y) = E[X | Y = y], the optimum estimate for
  each possible observation Y = y.
• By contrast, in this section the estimate is a single function that applies
  for all Y.
• The notation for this function is X̂_L(Y).

• Linear Estimation of X given Y:
• A linear estimator has the form X̂_L(Y) = aY + b.
• Minimum mean square error linear estimation chooses a and b to minimize
      e_L = E[(X - X̂_L(Y))²].
• Linear Estimation of X given Y:
– Minimum mean square error estimation in principle uses a different
  calculation for each y ∈ S_Y.
– By contrast, a linear estimator uses the same coefficients a and b for all y.
• Theorem 9.4:
• Random variables X and Y have expected values μ_X and μ_Y, standard
  deviations σ_X and σ_Y, and correlation coefficient ρ_X,Y. The minimum mean
  square error linear estimator of X given Y is
      X̂_L(Y) = a*Y + b*,
  where a* = Cov[X,Y] / Var[Y] = ρ_X,Y σ_X / σ_Y and b* = μ_X - a*μ_Y. The
  minimum mean square error is e*_L = σ_X²(1 - ρ_X,Y²).

• Proof: Self study
• Understanding Theorem 9.4:
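To help understand Theorem 9.4, the following sketch (not from the slides; the joint distribution is an arbitrary choice) computes a* = Cov[X,Y]/Var[Y] and b* = μ_X - a*μ_Y from simulated data and checks that the achieved mean square error matches σ_X²(1 - ρ²).

import numpy as np

rng = np.random.default_rng(2)
n = 200_000
y = rng.normal(1.0, 2.0, n)
x = 0.5 * y + rng.normal(0.0, 1.0, n)      # correlated pair (arbitrary model)

a_star = np.cov(x, y)[0, 1] / np.var(y)    # a* = Cov[X,Y] / Var[Y]
b_star = x.mean() - a_star * y.mean()      # b* = E[X] - a* E[Y]
x_hat_l = a_star * y + b_star              # linear estimate X_hat_L(Y)

rho = np.corrcoef(x, y)[0, 1]
mse_linear = np.mean((x - x_hat_l) ** 2)
print(f"a* = {a_star:.3f}, b* = {b_star:.3f}")
print(f"linear MSE           = {mse_linear:.4f}")
print(f"sigma_X^2 (1 - rho^2) = {np.var(x) * (1 - rho**2):.4f}")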
• Quiz 9.2:
• Solution:
• Solution (continued):
• MAP and ML Estimation:
– Although neither of these estimates produces
the minimum mean square error, they are
convenient to obtain in some applications, and
they often produce estimates with errors that
are not much higher than the minimum mean
square error.
• Maximum A Posteriori Probability (MAP) Estimate:
• The MAP estimate of X given the observation Y = y is
      x̂_MAP(y) = arg max_x f_X|Y(x | y).

• Theorem 9.6
• The MAP estimate of X given Y = y is
      x̂_MAP(y) = arg max_x f_Y|X(y | x) f_X(x).

• Maximum Likelihood (ML) Estimate:
• The ML estimate of X given Y = y is
      x̂_ML(y) = arg max_x f_Y|X(y | x).
• The primary difference between the MAP and ML procedures is that the maximum
  likelihood procedure does not use information about the a priori probability
  model of X.
• The ML rule is the same as the MAP rule when all possible values of X are
  equally likely.
• For the discrete case, the estimates are
      x̂_MAP(y) = arg max_x P_X|Y(x | y),   x̂_ML(y) = arg max_x P_Y|X(y | x).
• Neither estimate minimizes the mean square error.
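As an illustration of the difference between the two rules (not from the slides; the Gaussian prior and noise model are arbitrary choices), the sketch below evaluates both estimates on a grid: the ML estimate maximizes the likelihood f_Y|X(y|x) alone, while the MAP estimate also weights by the prior f_X(x) and is pulled toward the prior mean.

import numpy as np

def gaussian_pdf(v, mean, std):
    return np.exp(-0.5 * ((v - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Arbitrary model: X ~ N(0, 1) prior, observation Y = X + noise with N(0, 2) noise.
x_grid = np.linspace(-6.0, 6.0, 2001)
prior = gaussian_pdf(x_grid, 0.0, 1.0)            # f_X(x)
y_obs = 3.0                                       # observed value of Y

likelihood = gaussian_pdf(y_obs, x_grid, 2.0)     # f_Y|X(y_obs | x)
x_ml = x_grid[np.argmax(likelihood)]              # ML: maximize the likelihood only
x_map = x_grid[np.argmax(likelihood * prior)]     # MAP: likelihood times prior

print(f"ML estimate  = {x_ml:.2f}")   # about 3.0 (matches the observation)
print(f"MAP estimate = {x_map:.2f}")  # about 0.6 (pulled toward the prior mean)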


• Example 9.7:
• Beta Random Variable:
• Solution:
• Solution (continued):
• Solution (continued):
• Solution (continued):
• Solution (understanding):
Thank you

This is the end of Chapter 9
