Signals Systems and Inference 1st Edition Oppenheim Solutions Manual
Spring 2015
Problem 6.1
\[
f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy
= \begin{cases}
\int_0^{1+x} 1\,dy = 1+x, & -1 \le x \le 0 \\
\int_0^{1-x} 1\,dy = 1-x, & 0 < x \le 1 \\
0, & \text{elsewhere}
\end{cases}
= \begin{cases}
1-|x|, & -1 \le x \le 1 \\
0, & \text{elsewhere.}
\end{cases}
\]
\[
f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx
= \begin{cases}
\int_{y-1}^{1-y} 1\,dx = 2-2y, & 0 \le y \le 1 \\
0, & \text{elsewhere.}
\end{cases}
\]
\[
\mu_X = 0
\]
\[
\mu_Y = \int_{-\infty}^{\infty} y f_Y(y)\,dy = \int_0^1 y(2-2y)\,dy = \frac{1}{3}
\]
\[
\sigma_Y^2 = E[Y^2] - \mu_Y^2 = \int_{-\infty}^{\infty} y^2 f_Y(y)\,dy - \mu_Y^2
= \int_0^1 y^2(2-2y)\,dy - \left(\frac{1}{3}\right)^2
= \frac{1}{6} - \frac{1}{9} = \frac{1}{18}.
\]
Finally, the covariance is:
\[
\sigma_{X,Y} = E[XY] - \mu_X \mu_Y
= \int_{-1}^{0}\!\int_{0}^{1+x} xy\,dy\,dx + \int_{0}^{1}\!\int_{0}^{1-x} xy\,dy\,dx = 0.
\]
We expect this result because the joint uniform PDF of X and Y is defined over a triangular region with even symmetry: for every infinitesimal element of probability mass at some point (x_0, y_0), there is an equal mass at (-x_0, y_0), so the net contribution to the defining integral is 0.
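As a quick numerical cross-check of these moments (a Python sketch, used here purely as an illustration alongside the manual's MATLAB), a midpoint-rule quadrature of the marginal f_Y(y) = 2 - 2y reproduces µ_Y = 1/3, E[Y²] = 1/6, and σ_Y² = 1/18:

```python
# Midpoint-rule quadrature of the marginal density f_Y(y) = 2 - 2y on [0, 1],
# checking the moments mu_Y = 1/3, E[Y^2] = 1/6, var_Y = 1/18 derived above.
N = 100_000
h = 1.0 / N

def f_y(y):
    return 2.0 - 2.0 * y

mids = [(i + 0.5) * h for i in range(N)]
mu_y = sum(y * f_y(y) for y in mids) * h
ey2 = sum(y * y * f_y(y) for y in mids) * h
var_y = ey2 - mu_y ** 2
print(mu_y, ey2, var_y)  # ~0.3333, ~0.1667, ~0.0556
```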
\[
\hat{y}(x) = E[Y \mid X = x] = \int_{-\infty}^{\infty} y f_{Y|X}(y|x)\,dy
= \int_{-\infty}^{\infty} y\,\frac{f_{X,Y}(x,y)}{f_X(x)}\,dy
= \int_0^{1-|x|} \frac{y}{1-|x|}\,dy
= \begin{cases}
\frac{1}{2}(1-|x|), & -1 \le x \le 1 \\
0, & \text{elsewhere.}
\end{cases}
\]
This is to be anticipated: since the conditional density of y, given x, is uniform, the expected
value of the conditional density for any x is the midpoint of the vertical line segment through
that x (within the triangular region on which the PDF is nonzero). The corresponding estimator
E[Y |X] is simply the function of x that is specified by this estimate, plotted below.
The associated mean-squared error, given X = x, is the variance of the conditional density, i.e.,
the variance of the uniform density extending from 0 to 1 − |x|, hence
\[
E[(Y - \hat{y}(x))^2 \mid X = x] =
\begin{cases}
\frac{(1-|x|)^2}{12}, & -1 \le x \le 1 \\
0, & \text{elsewhere.}
\end{cases}
\]
Oppenheim / Verghese Signals, Systems and Inference, 1/E
L-ENGINEERING AND COMPUTER SCIENCE ISBN-10: 0133943283 | ISBN-13: 9780133943283
This work is protected by United States copyright laws and is provided solely for the use of instructors in teaching their courses and assessing student learning.
Dissemination or sale of any part of this work (including on the World Wide Web) will destroy the integrity of the work and is not permitted.
The overall MSE can then be calculated by averaging the conditional MSE over all x.
\[
\text{MSE} = \int_{-1}^{1} \frac{(1-|x|)^2}{12}\,(1-|x|)\,dx = \frac{1}{24}.
\]
The LMMSE estimator, by contrast, is simply the constant
\[
\hat{Y}_L = \mu_Y = \frac{1}{3},
\]
since X and Y are uncorrelated.
See Fig. 6.1-1 below for a plot of this estimator.
The associated mean-squared error is
\[
E[(Y - \hat{Y}_L)^2] = \sigma_Y^2 (1 - \rho^2)
= \sigma_Y^2 \left(1 - \frac{\sigma_{X,Y}^2}{\sigma_X^2 \sigma_Y^2}\right)
= \sigma_Y^2 = \frac{1}{18}.
\]
This is larger than the MSE of the MMSE estimator, as anticipated.
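The ordering of the two errors can be corroborated by simulation. The sketch below (Python, as an illustrative stand-in for the manual's MATLAB) rejection-samples the triangle and compares the empirical MSE of the conditional mean (1 − |x|)/2 (expected ≈ 1/24) against that of the constant estimator µ_Y = 1/3 (expected ≈ 1/18):

```python
import random

# Monte Carlo comparison of the two estimators: the conditional mean
# (1 - |x|)/2 should achieve MSE ~ 1/24, while the constant LMMSE
# estimator mu_Y = 1/3 should achieve sigma_Y^2 ~ 1/18.
random.seed(1)

def sample_triangle():
    # rejection-sample a point uniformly from the triangle with
    # vertices (-1, 0), (1, 0), (0, 1), where f_{X,Y} = 1
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(0.0, 1.0)
        if y <= 1.0 - abs(x):
            return x, y

n = 200_000
se_mmse = se_lmmse = 0.0
for _ in range(n):
    x, y = sample_triangle()
    se_mmse += (y - (1.0 - abs(x)) / 2.0) ** 2
    se_lmmse += (y - 1.0 / 3.0) ** 2
print(se_mmse / n, se_lmmse / n)  # ~1/24 and ~1/18
```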
Problem 6.2
(a) You are given y[n] = w[n] + w[n − 1]. The random input signal w[·] has mean 0 and variance σ_w² at each time, and the values at different times are uncorrelated. It is easily seen that µ_y = µ_w + µ_w = 0.
Figure 6.1-1: Plot of the MMSE and LMMSE estimators for Y given X. Note that the linear estimator is a horizontal line, which is consistent with (i) the random variables being uncorrelated and (ii) the joint PDF having even symmetry. Also note that the MSE is just the variance for this estimator, since it doesn't glean any information about Y when given X = x.
(b) Recall that we have closed-form expressions for the optimal a and b in ŷ[n + 1] = a y[n] + b: since µ_y = 0, b = 0, and a = σ_{y[n+1],y[n]}/σ_y² = σ_w²/(2σ_w²) = 1/2, so ŷ[n + 1] = (1/2) y[n]. The resulting MSE is 2σ_w² − (1/2)σ_w² = (3/2)σ_w².
(c) Because cov(y[n + 1], y[n − 1]) = 0, the LMMSE estimate based on y[n − 1] alone is simply zero. The error is then the variance of y[n + 1], namely 2σ_w².
(d)
\[
E\!\left[\left(y[n+1] - \tfrac{1}{2}y[n]\right)y[n-1]\right]
= E[y[n+1]y[n-1]] - \tfrac{1}{2}E[y[n]y[n-1]]
= -\tfrac{1}{2}\sigma_w^2 \neq 0.
\]
(e) See Figure 6.2-1. According to (d), the one-step predictor is not orthogonal to the value
two steps back, which means that y[n − 1] can provide some information, in conjunction
with y[n], to estimate y[n + 1].
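The covariances used in parts (b)–(e) can be corroborated by simulation. The sketch below (Python, as an illustrative stand-in for MATLAB) generates y[n] = w[n] + w[n − 1] with σ_w² = 1 and estimates E[y[n+1]y[n]], E[y[n+1]y[n−1]], and the part-(d) inner product:

```python
import random

# Simulate y[n] = w[n] + w[n-1] with white w[n], sigma_w^2 = 1, and check:
#   E[y[n+1] y[n]]   = sigma_w^2 = 1
#   E[y[n+1] y[n-1]] = 0
#   E[(y[n+1] - 0.5 y[n]) y[n-1]] = -0.5 * sigma_w^2
random.seed(2)
n = 400_000
w = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
y = [w[k] + w[k - 1] for k in range(1, n + 1)]

m = n - 2
r1 = sum(y[k + 1] * y[k] for k in range(m)) / m
r2 = sum(y[k + 2] * y[k] for k in range(m)) / m
d = sum((y[k + 2] - 0.5 * y[k + 1]) * y[k] for k in range(m)) / m
print(r1, r2, d)  # ~1.0, ~0.0, ~-0.5
```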
(f) We write the normal equations using the covariance matrix as:
\[
\begin{bmatrix} 2\sigma_w^2 & \sigma_w^2 \\ \sigma_w^2 & 2\sigma_w^2 \end{bmatrix}
\begin{bmatrix} e \\ f \end{bmatrix}
= \begin{bmatrix} \sigma_w^2 \\ 0 \end{bmatrix}
\]
We can solve this for e = 2/3, f = −1/3, and thus g = 0. The MSE for this estimator is
\[
\sigma_Y^2 - \mathbf{c}_{YX}\,\mathbf{a} = 2\sigma_w^2 - \frac{2}{3}\sigma_w^2 = \frac{4}{3}\sigma_w^2,
\]
which is lower than the answers in parts (b) and (c).
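The 2×2 solve can be reproduced directly (a Python sketch using Cramer's rule, taking σ_w² = 1 without loss of generality):

```python
# Normal equations for the two-tap predictor e*y[n] + f*y[n-1], with
# sigma_w^2 = 1: covariance matrix [[2, 1], [1, 2]], right-hand side [1, 0].
a11, a12 = 2.0, 1.0
a21, a22 = 1.0, 2.0
b1, b2 = 1.0, 0.0

det = a11 * a22 - a12 * a21
e = (b1 * a22 - b2 * a12) / det   # Cramer's rule
f = (a11 * b2 - a21 * b1) / det
mse = 2.0 - (e * b1 + f * b2)     # sigma_Y^2 - c_YX . a
print(e, f, mse)  # 2/3, -1/3, 4/3
```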
Problem 6.3
Figure 6.2-1: Note that the y[n] vector is not actually in the plane spanned by y[n + 1] and y[n − 1]. This is because we computed the angle above as cos θ = 1/2, so θ = π/3 ≠ π/4.
(a)
\[
\mu_{X_1} = \mu_Q + \mu_{W_1} = \mu_Q.
\]
\[
\sigma_{X_1,Q} = \sigma_{W_1+Q,\,Q} = \sigma_{W_1,Q} + \sigma_{Q,Q} = 0 + \sigma_Q^2 = \sigma_Q^2.
\]
\[
\sigma_{X_1}^2 = \sigma_{X_1,X_1} = \sigma_{Q+W_1,\,Q+W_1}
= \sigma_{Q,Q} + 2\sigma_{Q,W_1} + \sigma_{W_1,W_1}
= \sigma_Q^2 + 2\cdot 0 + \sigma_{W_1}^2
= \sigma_Q^2 + \sigma_{W_1}^2.
\]
Therefore,
\[
\hat{Q}_L(X_1) = \mu_Q + \frac{\sigma_Q^2}{\sigma_Q^2 + \sigma_{W_1}^2}\,(X_1 - \mu_Q).
\]
Note that this formula expresses the optimal estimator as a weighted linear combination
of the measurement X1 and the prior mean µQ . (It is actually what is known as a convex
combination, because the weights are nonnegative and sum to 1, so the possible values
are precisely those between and at the extremes, namely µQ and the measured X1 .) The
less the variance of the noise or disturbance W1 , relative to the variance of the signal
Q, the more we (trust and) skew the estimator towards our measurement. At the other
extreme, when the noise variance is high, we discount our measurement, and estimate Q by
something closer to its mean (i.e., to our best estimate prior to taking the measurement).
The MSE of this LMMSE estimator is given by
\[
\text{MSE} = \sigma_Q^2\,(1 - \rho_{Q,X_1}^2),
\]
and consequently,
\[
\text{MSE} = \sigma_Q^2\left(1 - \frac{\sigma_Q^2}{\sigma_{X_1}^2}\right)
= \frac{\sigma_Q^2\,\sigma_{W_1}^2}{\sigma_Q^2 + \sigma_{W_1}^2}.
\]
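This formula is easy to corroborate by simulation. The sketch below (a Python illustration; Gaussian Q and W1 are an assumption, with variances chosen to match part (b), since the LMMSE result depends only on first and second moments):

```python
import random

# Simulate X1 = Q + W1 and apply the LMMSE estimator
#   Q_hat = mu_Q + (var_Q / (var_Q + var_W1)) * (X1 - mu_Q);
# the empirical MSE should match var_Q * var_W1 / (var_Q + var_W1).
random.seed(3)
mu_q = 2.0
var_q = var_w1 = 4.0 / 3.0
gain = var_q / (var_q + var_w1)

n = 300_000
se = 0.0
for _ in range(n):
    q = mu_q + random.gauss(0.0, var_q ** 0.5)
    x1 = q + random.gauss(0.0, var_w1 ** 0.5)
    q_hat = mu_q + gain * (x1 - mu_q)
    se += (q - q_hat) ** 2
mse_hat = se / n
mse_theory = var_q * var_w1 / (var_q + var_w1)  # = 2/3 here
print(mse_hat, mse_theory)
```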
(b) First, from the properties of uniform random variables, we can calculate the means and variances of Q and W_1 as µ_Q = 2, µ_{W_1} = 0, σ_Q² = 4²/12 = 4/3, and σ_{W_1}² = 4²/12 = 4/3. Using the results of part (a), we can then calculate a and b as
\[
a = \frac{1}{2}, \qquad b = 2\cdot\frac{1}{2} = 1.
\]
Also, ρ² = 1/2 and the MSE is (4/3)(1/2) = 2/3 ≈ 0.6667.
Here are 10 runs of the MATLAB code for this part, and we see that the values are close
to the theoretical value for the mean-squared error:
% correlation coefficient
rho = sqrt(var_q)/sqrt(var_q + var_w1);
% theoretical MSE
MSE_t = var_q*(1 - rho^2);
disp(['The theoretical minimum MSE is ' num2str(MSE_t)]);
% let's do it 10 times
for i = 1:10
\[
e = \mu_Q - c\,\mu_{X_1} - d\,\mu_{X_2}.
\]
\[
d = \frac{\sigma_Q^2\,\sigma_{W_1}^2}{\sigma_Q^2\,\sigma_{W_1}^2 + \sigma_Q^2\,\sigma_{W_2}^2 + \sigma_{W_1}^2\,\sigma_{W_2}^2}.
\]
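The coefficient expression can be sanity-checked by solving the two-measurement normal equations directly (a Python sketch; the variances below are arbitrary assumed values):

```python
# X1 = Q + W1, X2 = Q + W2 with uncorrelated Q, W1, W2.
# Solve the 2x2 normal equations for c and d in Q_hat = c*X1 + d*X2 + e,
# and compare d against the closed form quoted in the text.
var_q, var_w1, var_w2 = 2.0, 1.0, 3.0   # assumed example variances

a11, a12 = var_q + var_w1, var_q        # covariance matrix of (X1, X2)
a21, a22 = var_q, var_q + var_w2
b1 = b2 = var_q                         # cross-covariances with Q

det = a11 * a22 - a12 * a21
c = (b1 * a22 - b2 * a12) / det
d = (a11 * b2 - a21 * b1) / det

den = var_q * var_w1 + var_q * var_w2 + var_w1 * var_w2
print(c, d)  # d equals var_q*var_w1/den; c equals var_q*var_w2/den
```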
Problem 6.4
(a) False: From Ŷ_L = (3/4)X we have that σ_{YX}/σ_X² = 3/4. If we exchange the roles of the variables in the estimation, we would have X̂_L = (σ_{XY}/σ_Y²)Y. Since σ_Y² = σ_X² and σ_{YX} = σ_{XY}, X̂_L = (3/4)Y and the statement is false.
(b) False: The LMMSE estimator of Y in terms of X can be expressed as
\[
\hat{Y}_L = \mu_Y + \frac{\sigma_{XY}}{\sigma_X^2}\,(X - \mu_X).
\]
We are told that ŶL = µY , so σXY = 0, i.e., the random variables X and Y are uncorre-
lated, but this does not imply independence.
(c) True: We know that σ_Y²(1 − ρ²) is the minimum mean square error obtained through use of the LMMSE estimator. This will always be greater than or equal to the minimum mean square error obtained using the (possibly) nonlinear MMSE estimator, E[Y|X].
(d) True: The estimation error is orthogonal to each Xi as well as 1, and ŶL is a linear
combination of these quantities, so the estimation error is orthogonal to ŶL itself.
(e) True: The estimation error Y − Ŷ_L is orthogonal to the linear MMSE estimator Ŷ_L (by part (d) above). The corresponding minimum mean-square error is therefore
\[
E[(Y - \hat{Y}_L)^2] = E[(Y - \hat{Y}_L)(Y - \hat{Y}_L)] = E[(Y - \hat{Y}_L)Y] = E[Y^2] - E[\hat{Y}_L Y].
\]
(f) False: The above equation is valid, but it is just a single equation, and we have L + 1 unknowns. We need to assert that the estimation error Y − Ŷ_L is orthogonal to 1 (for unbiasedness) and to each of the X_i (eq. 8.67 in the notes), in order to obtain the normal equations for determining the a_i.
(g) (i) True: From the orthogonality conditions applied to the original estimators, V̂_L and Ŵ_L, we deduce the following conditions:
\[
E[\hat{V}_L - V] = 0, \qquad E[(\hat{V}_L - V)X] = 0,
\]
\[
E[\hat{W}_L - W] = 0, \qquad E[(\hat{W}_L - W)X] = 0.
\]
Now let Z = 3V + 4W, with Ẑ_L = 3V̂_L + 4Ŵ_L. We can now check that the orthogonality conditions hold for this second estimator. First note that Z − Ẑ_L = 3(V − V̂_L) + 4(W − Ŵ_L). Thus,
\[
E[Z - \hat{Z}_L] = 0 \qquad \text{and} \qquad E[(Z - \hat{Z}_L)X] = 0,
\]
so Ẑ_L satisfies the orthogonality conditions and is the LMMSE estimator of Z.
(ii) False: As a counterexample, assume V̂_L = aX and Ŵ_L = bX for nonzero constants a and b; then V̂_L(V̂_L + Ŵ_L) = (a² + ab)X², a nonlinear function of X. This cannot be an LMMSE estimator.
(h) (i) True:
\[
\hat{Y}_{L2} = \mu_Y + \frac{\sigma_{X_2,Y}}{\sigma_{X_2}^2}\,(X_2 - \mu_{X_2}) = \mu_Y,
\]
since σ_{X_2,Y} = 0.
(ii) False: Writing out and solving the normal equations, one can see that in order for this claim to be true, σ_{X_2,X_1} must also be equal to 0.