
Signals, Systems and Inference, 1st Edition (Oppenheim / Verghese): Solutions Manual
ISBN-10: 0133943283 | ISBN-13: 9780133943283
This work is protected by United States copyright laws and is provided solely for the use of instructors in teaching their courses and assessing student learning. Dissemination or sale of any part of this work (including on the World Wide Web) will destroy the integrity of the work and is not permitted.

Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.011: Communication, Control and Signal Processing
Spring 2015

Problem Set 6 Solutions
Minimum Mean-Square-Error (MMSE) Estimation, Linear MMSE Estimation
Issued: Monday, 16 March. Comments: aabid93@mit.edu and verghese@mit.edu

Problem 6.1

(a) Since the distribution is jointly uniform,

$$f_{X,Y}(x,y) = \frac{1}{\text{Area}} = 1 .$$
We'll need the marginals f_X(x) and f_Y(y) for the next few calculations:

$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy
= \begin{cases} \int_{0}^{1+x} 1\,dy = 1+x, & -1 \le x \le 0 \\[4pt] \int_{0}^{1-x} 1\,dy = 1-x, & 0 < x \le 1 \\[4pt] 0, & \text{elsewhere} \end{cases}
= \begin{cases} 1-|x|, & -1 \le x \le 1 \\ 0, & \text{elsewhere.} \end{cases}$$

$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx = \begin{cases} \int_{y-1}^{1-y} 1\,dx = 2-2y, & 0 \le y \le 1 \\ 0, & \text{elsewhere.} \end{cases}$$

The first moments are:

$$\mu_X = 0, \qquad \mu_Y = \int_{-\infty}^{\infty} y\,f_Y(y)\,dy = \int_{0}^{1} y(2-2y)\,dy = \frac{1}{3}.$$


The variances are:

$$\sigma_X^2 = E[X^2] - \mu_X^2 = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx - \mu_X^2 = \int_{-1}^{1} x^2(1-|x|)\,dx = \frac{1}{6}$$

$$\sigma_Y^2 = E[Y^2] - \mu_Y^2 = \int_{-\infty}^{\infty} y^2 f_Y(y)\,dy - \mu_Y^2 = \int_{0}^{1} y^2(2-2y)\,dy - \left(\frac{1}{3}\right)^2 = \frac{1}{6} - \frac{1}{9} = \frac{1}{18}.$$
Finally, the covariance is:

$$\sigma_{X,Y} = E[XY] - \mu_X\mu_Y = \int_{-1}^{0}\!\int_{0}^{1+x} xy\,dy\,dx + \int_{0}^{1}\!\int_{0}^{1-x} xy\,dy\,dx = 0.$$

We expect this result because the joint uniform PDF of X and Y is defined over a triangular region with even symmetry about the y-axis: for every infinitesimal element of probability mass at some point (x₀, y₀), there is an equal mass at (−x₀, y₀), so the net contribution to the defining integral is 0.
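As a quick numerical sanity check (our addition, in the style of the MATLAB script in Problem 6.3(b)), the sketch below samples uniformly from the triangular region, assumed to have vertices (−1, 0), (1, 0), and (0, 1) consistent with the marginals above, by drawing y from f_Y(y) = 2 − 2y via the inverse-CDF method and then x uniformly on [−(1−y), 1−y]:

% Monte Carlo check of the moments computed above
% Assumes the triangle has vertices (-1,0), (1,0), (0,1), as the marginals imply.
N = 1e6;
y = 1 - sqrt(rand(N,1));            % inverse CDF: F_Y(y) = 1 - (1-y)^2 on [0,1]
x = (2*rand(N,1) - 1).*(1 - y);     % given y, x is uniform on [-(1-y), 1-y]
disp(['mu_Y   = ' num2str(mean(y)) '  (theory: 1/3)'])
disp(['var_X  = ' num2str(var(x))  '  (theory: 1/6)'])
disp(['var_Y  = ' num2str(var(y))  '  (theory: 1/18)'])
disp(['cov_XY = ' num2str(mean(x.*y) - mean(x)*mean(y)) '  (theory: 0)'])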

(b) The minimum mean-squared error (MMSE) estimate of Y, given X = x, is

$$\hat{y}(x) = E[Y \mid X = x] = \int_{-\infty}^{\infty} y\,f_{Y|X}(y|x)\,dy = \int_{-\infty}^{\infty} y\,\frac{f_{X,Y}(x,y)}{f_X(x)}\,dy = \int_{0}^{1-|x|} \frac{y}{1-|x|}\,dy = \begin{cases} \frac{1}{2}(1-|x|), & -1 \le x \le 1 \\ 0, & \text{elsewhere.} \end{cases}$$

This is to be anticipated: since the conditional density of y, given x, is uniform, the conditional mean for any x is the midpoint of the vertical line segment through that x (within the triangular region on which the PDF is nonzero). The corresponding estimator E[Y|X] is simply this estimate viewed as a function of X, plotted below.
The associated mean-squared error, given X = x, is the variance of the conditional density, i.e.,
the variance of the uniform density extending from 0 to 1 − |x|, hence
$$E[(Y - \hat{y}(x))^2 \mid X = x] = \begin{cases} \dfrac{(1-|x|)^2}{12}, & -1 \le x \le 1 \\[4pt] 0, & \text{elsewhere.} \end{cases}$$


The overall MSE can then be calculated by averaging the conditional MSE over all x, weighted by f_X(x) = 1 − |x|:

$$\text{MSE} = \int_{-1}^{1} \frac{(1-|x|)^2}{12}\,(1-|x|)\,dx = \frac{1}{24}.$$

(c) The LMMSE estimator is given by

$$\hat{Y}_L = \frac{\sigma_{X,Y}}{\sigma_X^2}\,X + \mu_Y - \frac{\sigma_{X,Y}}{\sigma_X^2}\,\mu_X = \mu_Y = \frac{1}{3}.$$
See Fig. 6.1-1 below for a plot of this estimator.
The associated mean-squared error is

$$E[(Y - \hat{Y}_L)^2] = \sigma_Y^2(1 - \rho^2) = \sigma_Y^2\left(1 - \frac{\sigma_{X,Y}^2}{\sigma_X^2\,\sigma_Y^2}\right) = \sigma_Y^2 = \frac{1}{18}.$$
This is larger than the MSE of the MMSE estimator, as anticipated.
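To see the gap between 1/24 ≈ 0.0417 and 1/18 ≈ 0.0556 empirically, the short sketch below (our addition) reuses the triangle sampler from part (a) and evaluates both estimators on the same draws:

% Empirical comparison of the MMSE and LMMSE estimators of Y from X
N = 1e6;
y = 1 - sqrt(rand(N,1));            % sample (x,y) uniformly on the triangle
x = (2*rand(N,1) - 1).*(1 - y);
yhat_mmse  = (1 - abs(x))/2;        % conditional-mean estimator from part (b)
yhat_lmmse = 1/3;                   % constant LMMSE estimator from part (c)
disp(['MMSE  estimator: MSE = ' num2str(mean((y - yhat_mmse ).^2)) '  (theory: 1/24)'])
disp(['LMMSE estimator: MSE = ' num2str(mean((y - yhat_lmmse).^2)) '  (theory: 1/18)'])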

Problem 6.2

(a) You are given y[n] = w[n] + w[n−1]. The random input signal w[·] has mean 0 and variance σ_w² at each time, and the values at different times are uncorrelated. It is easily seen that µ_y = µ_w + µ_w = 0.


Figure 6.1-1: Plot of the MMSE and LMMSE estimators for Y given X. Note that the linear estimator is a horizontal line, which is consistent with (i) the random variables being uncorrelated and (ii) the joint PDF having even symmetry. Also note that the MSE is just the variance of Y for this estimator, since it doesn't glean any information about Y when given X = x.

$$\sigma_y^2 = E[y^2[n]] = E\big[w^2[n] + 2w[n]w[n-1] + w^2[n-1]\big] = E[w^2[n]] + E[w^2[n-1]] = 2\sigma_w^2$$

$$\text{cov}(y[n+1], y[n]) = E[y[n+1]\,y[n]] = E\big[(w[n+1] + w[n])(w[n] + w[n-1])\big] = E[w^2[n]] = \sigma_w^2$$

$$\text{cov}(y[n+1], y[n-1]) = E[y[n+1]\,y[n-1]] = E\big[(w[n+1] + w[n])(w[n-1] + w[n-2])\big] = 0$$

The cosine of the angle between y[n+1] and y[n] is σ_w²/(2σ_w²) = 1/2, while the cosine of the angle between y[n+1] and y[n−1] is 0.
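These second-order statistics are straightforward to confirm by simulation; the sketch below (our addition, taking σ_w = 1 for concreteness) estimates the variance and the lag-1 and lag-2 covariances from a long realization:

% Empirical check of the statistics of y[n] = w[n] + w[n-1], with sigma_w = 1
N = 1e6;
w = randn(N,1);                     % zero mean, unit variance, uncorrelated in time
y = w(2:end) + w(1:end-1);          % y[n] = w[n] + w[n-1]
disp(['var(y)       = ' num2str(var(y))                     '  (theory: 2*sigma_w^2 = 2)'])
disp(['cov at lag 1 = ' num2str(mean(y(2:end).*y(1:end-1))) '  (theory: sigma_w^2 = 1)'])
disp(['cov at lag 2 = ' num2str(mean(y(3:end).*y(1:end-2))) '  (theory: 0)'])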


(b) Recall that we have closed-form expressions for the optimal a and b:

$$a = \frac{\text{cov}(y[n+1], y[n])}{\sigma_y^2} = \frac{1}{2},$$

and because µ_y = 0 and b must be chosen so that µ_ŷ = µ_y, we have

$$b = 0.$$

For the mean squared error,

$$E[(y[n+1] - \hat{y}_{\text{one}}[n+1])^2] = \sigma_y^2 - \text{cov}(y[n+1], y[n]) + \frac{1}{4}\sigma_y^2 = 2\sigma_w^2 - \sigma_w^2 + \frac{1}{2}\sigma_w^2 = \frac{3}{2}\sigma_w^2.$$

Equivalently, this MSE equals

$$\sigma_y^2(1 - \rho^2) = \frac{3}{4}\cdot 2\sigma_w^2 = \frac{3}{2}\sigma_w^2.$$

(c) Because cov(y[n+1], y[n−1]) = 0, the LMMSE estimate of y[n+1] based on y[n−1] is always simply zero. The error is then the variance of y[n+1], namely 2σ_w².

(d)

$$E\Big[\big(y[n+1] - \tfrac{1}{2}\,y[n]\big)\,y[n-1]\Big] = E[y[n+1]\,y[n-1]] - \frac{1}{2}\,E[y[n]\,y[n-1]] = -\frac{1}{2}\,\sigma_w^2 \neq 0,$$

so y[n+1] − ŷ_one[n+1] is not orthogonal to y[n−1].

(e) See Figure 6.2-1. According to (d), the one-step prediction error is not orthogonal to the value two steps back, which means that y[n−1] can provide some additional information, in conjunction with y[n], to estimate y[n+1].

(f) We write the normal equations using the covariance matrix as

$$\begin{pmatrix} 2\sigma_w^2 & \sigma_w^2 \\ \sigma_w^2 & 2\sigma_w^2 \end{pmatrix} \begin{pmatrix} e \\ f \end{pmatrix} = \begin{pmatrix} \sigma_w^2 \\ 0 \end{pmatrix}.$$

We can solve this for e = 2/3, f = −1/3, and thus g = 0. The MSE for this estimator is

$$\sigma_y^2 - \mathbf{c}_{YX}^{\mathsf T}\,\mathbf{a} = 2\sigma_w^2 - \frac{2}{3}\,\sigma_w^2 = \frac{4}{3}\,\sigma_w^2,$$

which is lower than the answers in parts (b) and (c).
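The three predictors of parts (b), (c), and (f) can be compared on simulated data; with σ_w = 1 the theoretical MSEs are 3/2, 2, and 4/3 respectively. A minimal sketch (our addition):

% Compare the one-step predictors of parts (b), (c), and (f), taking sigma_w = 1
N = 1e6;
w = randn(N+2,1);
y = w(2:end) + w(1:end-1);                   % y[n] = w[n] + w[n-1]
ynext = y(3:end);                            % y[n+1]
ycur  = y(2:end-1);                          % y[n]
yprev = y(1:end-2);                          % y[n-1]
e_b = ynext - 0.5*ycur;                      % (b): predict from y[n] alone
e_c = ynext;                                 % (c): prediction from y[n-1] is 0
e_f = ynext - ((2/3)*ycur - (1/3)*yprev);    % (f): use both y[n] and y[n-1]
disp(['MSE (b) = ' num2str(mean(e_b.^2)) '  (theory: 3/2)'])
disp(['MSE (c) = ' num2str(mean(e_c.^2)) '  (theory: 2)'])
disp(['MSE (f) = ' num2str(mean(e_f.^2)) '  (theory: 4/3)'])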

Problem 6.3


Figure 6.2-1: Note that the y[n] vector is not actually in the plane spanned by y[n+1] and y[n−1]. This is because we computed the angle above as cos θ = 1/2, so θ = π/3 ≠ π/4.

(a) The formula for the LMMSE estimator is

$$\hat{Q}_L(X_1) = \mu_Q + \frac{\sigma_{Q,X_1}}{\sigma_{X_1}^2}\,(X_1 - \mu_{X_1}).$$

We will begin by calculating the means, variances, and the covariance:

$$\mu_{X_1} = \mu_Q + \mu_{W_1} = \mu_Q,$$

$$\sigma_{X_1,Q} = \sigma_{W_1+Q,\,Q} = \sigma_{W_1,Q} + \sigma_{Q,Q} = 0 + \sigma_Q^2 = \sigma_Q^2,$$

$$\sigma_{X_1}^2 = \sigma_{X_1,X_1} = \sigma_{Q+W_1,\,Q+W_1} = \sigma_{Q,Q} + 2\sigma_{Q,W_1} + \sigma_{W_1,W_1} = \sigma_Q^2 + 2\cdot 0 + \sigma_{W_1}^2 = \sigma_Q^2 + \sigma_{W_1}^2.$$

Therefore,

$$\hat{Q}_L(X_1) = \mu_Q + \frac{\sigma_Q^2}{\sigma_Q^2 + \sigma_{W_1}^2}\,(X_1 - \mu_Q).$$

Note that this formula expresses the optimal estimator as a weighted linear combination of the measurement X₁ and the prior mean µ_Q. (It is in fact what is known as a convex combination, because the weights are nonnegative and sum to 1, so the possible values are precisely those between and at the extremes, namely µ_Q and the measured X₁; see the rewriting below.) The smaller the variance of the noise or disturbance W₁ relative to the variance of the signal Q, the more we trust the measurement and skew the estimator towards it. At the other extreme, when the noise variance is high, we discount our measurement and estimate Q by something closer to its mean (i.e., to our best estimate prior to taking the measurement).
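Making the weighting explicit (a restatement of the formula above, with α denoting the weight on the measurement):

$$\hat{Q}_L(X_1) = (1-\alpha)\,\mu_Q + \alpha\,X_1, \qquad \alpha = \frac{\sigma_Q^2}{\sigma_Q^2 + \sigma_{W_1}^2} \in [0,1],$$

so α → 1 as σ_{W₁}² → 0 (trust the measurement) and α → 0 as σ_{W₁}² → ∞ (fall back on the prior mean).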
The MSE of this LMMSE estimator is given by

$$\text{MSE} = \sigma_Q^2\,\big(1 - \rho_{Q,X_1}^2\big),$$


where the correlation coefficient ρ_{Q,X₁} is

$$\rho_{Q,X_1} = \frac{\sigma_{Q,X_1}}{\sigma_Q\,\sigma_{X_1}} = \frac{\sigma_Q^2}{\sigma_Q\,\sigma_{X_1}} = \frac{\sigma_Q}{\sigma_{X_1}} = \sqrt{\frac{\sigma_Q^2}{\sigma_Q^2 + \sigma_{W_1}^2}},$$

and consequently,

$$\text{MSE} = \sigma_Q^2\left(1 - \frac{\sigma_Q^2}{\sigma_{X_1}^2}\right) = \frac{\sigma_Q^2\,\sigma_{W_1}^2}{\sigma_Q^2 + \sigma_{W_1}^2}.$$

(b) First, from the properties of uniform random variables, we can calculate the means and variances of Q and W₁ as µ_Q = 2, µ_{W₁} = 0, σ_Q² = 4²/12 = 4/3, and σ_{W₁}² = 4²/12 = 4/3. Using the results of part (a), we can then calculate a and b as

$$a = \frac{1}{2}, \qquad b = 2\cdot\frac{1}{2} = 1.$$

Also, ρ² = 1/2 and the MSE is (4/3)(1/2) = 2/3 ≈ 0.66667.
Here are 10 runs of the MATLAB code for this part, and we see that the values are close
to the theoretical value for the mean-squared error:

The theoretical minimum MSE is 0.66667


Run 1: Calculated MSE is 0.64302
Run 2: Calculated MSE is 0.77761
Run 3: Calculated MSE is 0.70552
Run 4: Calculated MSE is 0.77773
Run 5: Calculated MSE is 0.61416
Run 6: Calculated MSE is 0.54351
Run 7: Calculated MSE is 0.67806
Run 8: Calculated MSE is 0.72272
Run 9: Calculated MSE is 0.77715
Run 10: Calculated MSE is 0.57573

and here is the script that generated the output:

% first, let's define the parameters
mean_q = 2; var_q = 4/3; var_w1 = 4/3;

% correlation coefficient
rho = sqrt(var_q)/sqrt(var_q + var_w1);

% theoretical MSE
MSE_t = var_q*(1 - rho^2);
disp(['The theoretical minimum MSE is ' num2str(MSE_t)]);

% LMMSE parameters for *this particular example* can be written as follows:
a = rho^2; b = mean_q*(1 - rho^2);

% let's do it 10 times
for i = 1:10
    % now define Q (uniform on [0,4]) and W1 (uniform on [-2,2])
    Q  = 4*rand(100,1);
    W1 = 4*rand(100,1) - 2;
    % calculate X1
    X1 = Q + W1;
    % define the estimator
    QhatL = a*X1 + b;
    % calculate the empirical MSE
    MSE = (1/100) * sum((QhatL - Q).^2);
    disp(['Run ' num2str(i) ': Calculated MSE is ' num2str(MSE)])
end

(c) The covariance between X₁ and X₂ is

$$\sigma_{X_1,X_2} = \sigma_{Q+W_1,\,Q+W_2} = \sigma_{Q,Q} + \sigma_{Q,W_2} + \sigma_{W_1,Q} + \sigma_{W_1,W_2} = \sigma_Q^2.$$

The normal equations C_XX · a = C_XQ in this case are

$$\begin{pmatrix} \sigma_{X_1,X_1} & \sigma_{X_1,X_2} \\ \sigma_{X_2,X_1} & \sigma_{X_2,X_2} \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} \sigma_{X_1,Q} \\ \sigma_{X_2,Q} \end{pmatrix}.$$

Simplifying the parameters yields

$$\begin{pmatrix} \sigma_Q^2 + \sigma_{W_1}^2 & \sigma_Q^2 \\ \sigma_Q^2 & \sigma_Q^2 + \sigma_{W_2}^2 \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} \sigma_Q^2 \\ \sigma_Q^2 \end{pmatrix}.$$

The constant term is given by

$$e = \mu_Q - c\,\mu_{X_1} - d\,\mu_{X_2},$$

where the normal equations have been solved to yield

$$c = \frac{\sigma_Q^2\,\sigma_{W_2}^2}{\sigma_Q^2\sigma_{W_1}^2 + \sigma_Q^2\sigma_{W_2}^2 + \sigma_{W_1}^2\sigma_{W_2}^2}, \qquad d = \frac{\sigma_Q^2\,\sigma_{W_1}^2}{\sigma_Q^2\sigma_{W_1}^2 + \sigma_Q^2\sigma_{W_2}^2 + \sigma_{W_1}^2\sigma_{W_2}^2}.$$
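As a numerical check on this algebra (our addition), the short sketch below solves the 2×2 normal equations for arbitrary illustrative variance values and compares against the closed-form expressions:

% Numerical check of the closed-form c and d (illustrative variances, our choice)
vq = 4/3; vw1 = 1.0; vw2 = 0.5;
Cxx = [vq + vw1, vq; vq, vq + vw2];   % covariance matrix of (X1, X2)
Cxq = [vq; vq];                       % cross-covariance of (X1, X2) with Q
cd  = Cxx \ Cxq;                      % numerical solution [c; d]
den = vq*vw1 + vq*vw2 + vw1*vw2;      % common denominator in the closed form
disp(['numerical:   c = ' num2str(cd(1)) ', d = ' num2str(cd(2))])
disp(['closed-form: c = ' num2str(vq*vw2/den) ', d = ' num2str(vq*vw1/den)])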


Problem 6.4

(a) False: From Ŷ_L = (3/4)X we have that σ_YX/σ_X² = 3/4. If we want to exchange the roles of the variables in the estimation, we would have X̂_L = (σ_XY/σ_Y²)Y. Since σ_Y² = σ_X² and σ_YX = σ_XY, we get X̂_L = (3/4)Y, and the statement is false.
4
(b) False: The LMMSE estimator of Y in terms of X can be expressed as

$$\hat{Y}_L = \mu_Y + \frac{\sigma_{XY}}{\sigma_X^2}\,(X - \mu_X).$$

We are told that Ŷ_L = µ_Y, so σ_XY = 0, i.e., the random variables X and Y are uncorrelated, but this does not imply independence, as the sketch below illustrates.
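A standard counterexample (our addition) makes the point: take X uniform on [−1, 1] and Y = X². Then σ_XY = E[X³] − E[X]E[X²] = 0, so the LMMSE estimator of Y is the constant µ_Y, yet Y is completely determined by X. A minimal sketch:

% Uncorrelated but not independent: X ~ U[-1,1], Y = X^2
N = 1e6;
X = 2*rand(N,1) - 1;
Y = X.^2;
disp(['cov(X,Y) = ' num2str(mean(X.*Y) - mean(X)*mean(Y)) '  (theory: 0)'])
% yet E[Y | X = x] = x^2, which is far from the constant mu_Y = 1/3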

(c) True: We know that σ_Y²(1 − ρ²) is the minimum mean square error obtained through use of the LMMSE estimator. This will always be greater than or equal to the minimum mean square error obtained using the (possibly nonlinear) MMSE estimator, E[Y|X].

(d) True: The estimation error is orthogonal to each Xi as well as 1, and ŶL is a linear
combination of these quantities, so the estimation error is orthogonal to ŶL itself.

(e) True: The estimation error Y − Ŷ_L is orthogonal to the linear MMSE estimator Ŷ_L (by part (d) above). The corresponding minimum mean-square error is therefore

$$E[(Y - \hat{Y}_L)^2] = E[(Y - \hat{Y}_L)(Y - \hat{Y}_L)] = E[(Y - \hat{Y}_L)\,Y] = E[Y^2] - E[\hat{Y}_L\,Y].$$

(f) False: The above equation is valid, but it is just a single equation, and we have L + 1 unknowns. We need to assert that the estimation error Y − Ŷ_L is orthogonal to 1 (for unbiasedness) and to each of the X_i (eq. 8.67 in the notes), in order to obtain the normal equations for determining the a_i.

(g) (i) True: From the orthogonality conditions applied to the original estimators, V̂_L and Ŵ_L, we deduce the following conditions:

$$E[\hat{V}_L - V] = 0, \qquad E[(\hat{V}_L - V)X] = 0,$$

$$E[\hat{W}_L - W] = 0, \qquad E[(\hat{W}_L - W)X] = 0.$$

Now let Z = 3V + 4W. We can check that the orthogonality conditions hold for the second estimator, Ẑ_L. First note that Z − Ẑ_L = 3(V − V̂_L) + 4(W − Ŵ_L). Thus,

$$E[(Z - \hat{Z}_L)] = 0 \quad \text{and} \quad E[(Z - \hat{Z}_L)X] = 0.$$

(ii) False: As a counterexample, assume V̂_L = aX and Ŵ_L = bX for nonzero constants a and b; then V̂_L(V̂_L + Ŵ_L) = (a² + ab)X², a nonlinear function of X. This cannot be an LMMSE estimator.
(h) (i) True:

$$\hat{Y}_{L2} = \mu_Y + \frac{\sigma_{X_2,Y}}{\sigma_{X_2}^2}\,(X_2 - \mu_{X_2}) = 0.$$

(ii) False: Writing out and solving the normal equations, one can see that in order for this claim to be true, σ_{X₂,X₁} must also be equal to 0.
