Signals, Systems & Inference
Alan V. Oppenheim & George C. Verghese © 2016
Chapter 7 Solutions

Note from the authors

These solutions represent a preliminary version of the Instructors’ Solutions Manual (ISM).
The book has a total of 350 problems, so it is possible and even likely that at this preliminary
stage of preparing the ISM there are some omissions and errors in the draft solutions. It is
also possible that an occasional problem in the book is now slightly different from an earlier
version for which the solution here was generated. It is therefore important for an instructor to
carefully review the solutions to problems of interest, and to modify them as needed. We will,
from time to time, update these solutions with clarifications, elaborations, or corrections.

Many of these solutions have been prepared by the teaching assistants for the course in which
this material has been taught at MIT, and their assistance is individually acknowledged in the
book. For preparing solutions to the remaining problems in recent months, we are particularly
grateful to Abubakar Abid (who also constructed the solution template), Leighton Barnes,
Fisher Jepsen, Tarek Lahlou, Catherine Medlock, Lucas Nissenbaum, Ehimwenma Nosakhare,
Juan Miguel Rodriguez, Andrew Song, and Guolong Su. We would also like to thank Laura
von Bosau for her assistance in compiling the solutions.

© 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved. This material is protected under all copyright laws as they currently
exist. No portion of this material may be reproduced, in any form or by any means, without permission in writing from the publisher.
Solution 7.1

[Figure: x and y are uniformly distributed on the unit square. The four sketches show the
events A = {y > 1/2}, B = {y < 1/2}, C = {x < 1/2}, and
D = {x > 1/2, y > 1/2} ∪ {x < 1/2, y < 1/2}.]

(a) 1.

    P(A ∩ D) = P(y > 1/2, x > 1/2) = 1/4

    P(A) = P(D) = 1/2

Since P(A ∩ D) = P(A)P(D), events A and D are independent.

2.

    P(C ∩ D) = P(y < 1/2, x < 1/2) = 1/4

    P(C) = P(D) = 1/2

Since P(C ∩ D) = P(C)P(D), events C and D are independent.

3.

    P(A ∩ B) = P(y > 1/2, y < 1/2) = 0

    P(A) = P(B) = 1/2

Since P(A ∩ B) ≠ P(A)P(B), events A and B are not independent.

4. Another similar case that was not asked in the problem in the book:

    P(A ∩ C) = P(y > 1/2, x < 1/2) = 1/4

    P(A) = P(C) = 1/2

Since P(A ∩ C) = P(A)P(C), events A and C are independent.

(b) Note that P(A ∩ C ∩ D) = 0 since there is no region where all three sets overlap. However,
P(A) = P(C) = P(D) = 1/2, so P(A ∩ C ∩ D) ≠ P(A)P(C)P(D) and the events A, C,
and D are not mutually independent, even though they are pairwise independent as
seen in (a). For a collection of events to be independent, we require the probability of
the intersection of any subset of the events to equal the product of the probabilities of each
individual event. So for the 3-event case, pairwise independence is a necessary but not
sufficient condition for mutual independence.
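As a numerical sanity check (not part of the original solution), the pairwise and three-way probabilities above can be estimated by Monte Carlo over the unit square, with the events as described in the figure:

```python
# Sketch: verify that A, C, D are pairwise independent but not mutually
# independent, for (x, y) uniform on the unit square.
import random

random.seed(0)
N = 200_000
a = c = d = ad = cd = ac = acd = 0
for _ in range(N):
    x, y = random.random(), random.random()
    A, C = y > 0.5, x < 0.5
    D = (x > 0.5) == (y > 0.5)   # x and y on the same side of 1/2
    a += A; c += C; d += D
    ad += A and D; cd += C and D; ac += A and C
    acd += A and C and D

print(ad / N, cd / N, ac / N)   # each ≈ 1/4 = P(·)P(·): pairwise independent
print(acd / N)                  # exactly 0, while P(A)P(C)P(D) = 1/8
```

The empty triple intersection is exact, not a sampling artifact: A ∩ C forces y > 1/2 and x < 1/2, which contradicts D.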

Solution 7.2
We wish to determine the conditional probability P(S = k | R = k), where S denotes what
was sent, and R denotes what is received. Recall that the conditional probability of an event A,
given that the event B with P(B) > 0 has happened, is defined by

    P(A | B) = P(A ∩ B) / P(B)

Therefore,

    P(S = k | R = k) = P(S = k and R = k) / P(R = k)

    P(S = 1 | R = 1) = 0.05 / (0.05 + 0.10 + 0.09) = 5/24 ≈ 0.21

    P(S = 2 | R = 2) = 0.08 / (0.13 + 0.08 + 0.21) = 4/21 ≈ 0.19

    P(S = 3 | R = 3) = 0.15 / (0.12 + 0.07 + 0.15) = 15/34 ≈ 0.44

The probability of transmission error incurred in using this system is

    P(transmission error) = 1 − P(no transmission error)
                          = 1 − Σ_{k=1}^{3} P(S = k and R = k)
                          = 1 − (0.05 + 0.08 + 0.15)
                          = 0.72
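The arithmetic above can be reproduced exactly with rational arithmetic. The diagonal entries P(S = k, R = k) and the column sums P(R = k) below are taken from the numbers in this solution; the full joint table from the book is not reproduced here.

```python
# Sketch: exact check of the conditional probabilities and error probability.
from fractions import Fraction as F

diag = {1: F(5, 100), 2: F(8, 100), 3: F(15, 100)}    # P(S = k, R = k)
p_r  = {1: F(24, 100), 2: F(42, 100), 3: F(34, 100)}  # P(R = k), the sums above

for k in (1, 2, 3):
    print(k, diag[k] / p_r[k])   # 5/24, 4/21, 15/34

p_err = 1 - sum(diag.values())
print(p_err)                      # 18/25 = 0.72
```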

Solution 7.3

(a)

    µV = (a + b)/2

    E[V^2] = (1/(b − a)) ∫_a^b v^2 dv = (b^2 + ab + a^2)/3

    σV^2 = E[V^2] − µV^2 = (b − a)^2 / 12

(b) Independence implies V and W are uncorrelated. Hence

    µY = µV + µW = a + b

    σY^2 = σV^2 + σW^2 = (b − a)^2 / 6

If V and W were not independent, the expression for σY^2 above would have contained the
additional term 2σV,W , where σV,W is the covariance of the two random variables.
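A quick Monte Carlo check of the two variance formulas, for illustrative endpoints a = 2, b = 5 (an assumption; any a < b works):

```python
# Sketch: estimate Var(V) and Var(V + W) for independent V, W ~ Uniform(a, b)
# and compare with (b-a)^2/12 and (b-a)^2/6.
import random

random.seed(1)
a, b = 2.0, 5.0
N = 200_000
v = [random.uniform(a, b) for _ in range(N)]
w = [random.uniform(a, b) for _ in range(N)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

vw = [x + y for x, y in zip(v, w)]
print(var(v))    # ≈ (b - a)^2 / 12 = 0.75
print(var(vw))   # ≈ (b - a)^2 / 6  = 1.5
```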
Solution 7.4

(a) Recall the tilde notation Z̃ = Z − µZ and note that we have the relations

    Z̃ = 2X̃ − Ỹ

and

    W̃ = X̃ + (1/2)Ỹ .

Then, by definition,

    σZW = E[Z̃ W̃]
        = E[(2X̃ − Ỹ)(X̃ + (1/2)Ỹ)]
        = 2E[X̃^2] − (1/2)E[Ỹ^2]
        = 2(E[X^2] − E[X]^2) − (1/2)(E[Y^2] − E[Y]^2)
        = 2(8) − (1/2)(3) = 29/2 .

(b) Z and W are affine functions of X and Y so they will also be bivariate Gaussian random
variables. The form for the joint distribution of bivariate Gaussians is given in Example
7.5 of the text:

    fZ,W (z, w) = c exp{ −q( (z − µZ)/σZ , (w − µW)/σW ) } ,

where

    c = 1 / (2π σZ σW √(1 − ρ^2))

and

    q(v, w) = (v^2 − 2ρvw + w^2) / (2(1 − ρ^2)) .

So we will need to know the means and variances of Z and W as well as the correlation
coefficient ρ between Z and W.

    µZ = 2E[X] − E[Y] + 5 = 1
    µW = E[X] + (1/2)E[Y] − 1 = −1

    σZ^2 = E[Z̃^2] = E[(2X̃ − Ỹ)^2]
         = 4E[X̃^2] − 4E[X̃Ỹ] + E[Ỹ^2]
         = 4(E[X^2] − E[X]^2) − 4(E[XY] − E[X]E[Y]) + E[Y^2] − E[Y]^2
         = 4(8) − 4(−4 + 2) + 3 = 43 .

    σW^2 = E[W̃^2] = E[(X̃ + (1/2)Ỹ)^2]
         = E[X̃^2] + E[X̃Ỹ] + (1/4)E[Ỹ^2]
         = 8 − 2 + (1/4)(3) = 27/4 .

    ρ = σZW / (σZ σW) = (29/2) / √((43)(27)/4) = 29 / √((43)(27))

Numerically, σZ σW √(1 − ρ^2) = √(σZ^2 σW^2 − σZW^2) = √((43)(27)/4 − (29/2)^2) = √80 = 4√5,
and 1/(2(1 − ρ^2)) = (43)(27)/640. So finally,

    fZ,W (z, w) = (1/(8π√5)) exp{ −(1/640) [ 27(z − 1)^2 − 4(29)(z − 1)(w + 1) + 4(43)(w + 1)^2 ] } .
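A Monte Carlo sanity check of the moments computed above. The underlying moments E[X] = −1, E[Y] = 2, Var(X) = 8, Var(Y) = 3, Cov(X, Y) = −2 are inferred from the numbers used in this solution (they reproduce µZ = 1 and µW = −1); the simulation below assumes them.

```python
# Sketch: simulate (X, Y) with the assumed moments via a Cholesky-style
# construction, then check the moments of Z = 2X - Y + 5 and W = X + Y/2 - 1.
import random

random.seed(2)
N = 400_000
a11 = 8 ** 0.5                 # sqrt(Var X)
a21 = -2 / a11                 # Cov(X, Y) / sqrt(Var X)
a22 = (3 - a21 ** 2) ** 0.5    # so that Var Y = a21^2 + a22^2 = 3
zs, ws = [], []
for _ in range(N):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    x = -1 + a11 * u
    y = 2 + a21 * u + a22 * v
    zs.append(2 * x - y + 5)
    ws.append(x + 0.5 * y - 1)

mz, mw = sum(zs) / N, sum(ws) / N
var_z = sum((z - mz) ** 2 for z in zs) / N
var_w = sum((w - mw) ** 2 for w in ws) / N
cov_zw = sum((z - mz) * (w - mw) for z, w in zip(zs, ws)) / N
print(mz, mw)          # ≈ 1, -1
print(var_z, var_w)    # ≈ 43, 27/4
print(cov_zw)          # ≈ 29/2
```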
Solution 7.5
Since σV^2 = E[V^2] − (E[V])^2 and E[V] = 0 is given, we get E[V^2] = 4. Furthermore,
E[X] = E[Y] = 2.

Correlation

    rX,Y = E[XY] = E[(2 + V)(2 − V)] = E[4 − V^2] = 4 − E[V^2] = 0

Covariance

    σX,Y = E[(X − E[X])(Y − E[Y])] = E[(X − 2)(Y − 2)] = E[−V^2] = −4

Correlation coefficient

    ρX,Y = σX,Y / (σX σY) = −1

Since rX,Y = E[XY] = 0, the two random variables X and Y are orthogonal. Since σX,Y ≠ 0,
the two random variables are correlated.
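These values can be checked exactly for any concrete V with E[V] = 0 and E[V^2] = 4; the choice V = ±2 with equal probability below is an assumption for illustration only.

```python
# Sketch: exact enumeration for X = 2 + V, Y = 2 - V with V = ±2 equiprobable.
vals = [(-2, 0.5), (2, 0.5)]                   # (value of V, probability)
E = lambda f: sum(p * f(v) for v, p in vals)

r   = E(lambda v: (2 + v) * (2 - v))           # correlation E[XY] = 0
cov = E(lambda v: (2 + v - 2) * (2 - v - 2))   # covariance = -4
sV  = E(lambda v: v * v) ** 0.5                # sigma_X = sigma_Y = sigma_V = 2
print(r, cov, cov / (sV * sV))                 # 0.0 -4.0 -1.0
```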
Solution 7.6

(a) We begin by defining the centered random variables

    X̃ = X − µX ,  Ỹ = Y − µY

and similarly

    Z̃ = Z − µZ ,  Ṽ = V − µV ,  W̃ = W − µW .

Note that because

    µX = µZ + µV  and  µY = βµZ + µW

we can subtract these equations from the defining equations for X and Y to obtain

    X̃ = Z̃ + Ṽ                                                    (1)
    Ỹ = βZ̃ + W̃ .                                                 (2)

Now since σX^2 = E[X̃^2], σY^2 = E[Ỹ^2], and so on, and noting that the specified uncorrelatedness
properties imply E[Z̃Ṽ] = σZV = 0 and similarly σZW = 0, we can simply square
each of the equations displayed above and take expectations, obtaining

    σX^2 = σZ^2 + σV^2
    σY^2 = β^2 σZ^2 + σW^2 .

Similarly, for the covariance we take the product of the equations (1) and (2), then
compute expectations and use the fact that σVW = 0, to obtain

    E[X̃Ỹ] = σXY = βσZ^2 .

Finally, we have the correlation coefficient

    ρXY = σXY / (σX σY)
        = βσZ^2 / √((σZ^2 + σV^2)(β^2 σZ^2 + σW^2)) .

(b) Setting α = σZ^2 / σ^2, where σ^2 = σV^2 = σW^2, we have that

    ρXY = βα / √((1 + α)(1 + β^2 α)) .

Some exploration of limiting cases:

If β^2 ≫ 1/α, then ρXY ≈ ±√(α/(1 + α)), with the sign being that of β. If α ≫
max{1, 1/β^2} then ρXY ≈ ±1, with the sign being that of β. This corresponds to the
fluctuations in both Z and βZ being much larger than those in V and W, so Y ≈ βX.
On the other hand, if α ≪ min{1, 1/β^2}, then ρXY ≈ βα. In particular, for α ≪ 1/|β|,
ρXY ≈ 0, because now fluctuations in X and Y are dominated by those in V and W
respectively.
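The limiting cases can be probed numerically; the values of α and β below are illustrative, not from the problem.

```python
# Sketch: evaluate rho_XY = beta*alpha / sqrt((1 + alpha)(1 + beta^2 * alpha))
# in each limiting regime discussed above.
import math

def rho(alpha, beta):
    return beta * alpha / math.sqrt((1 + alpha) * (1 + beta ** 2 * alpha))

print(rho(1e6, 2.0), rho(1e6, -2.0))    # alpha >> max{1, 1/beta^2}: ≈ ±1
print(rho(1.0, 100.0), math.sqrt(0.5))  # beta^2 >> 1/alpha: ≈ sqrt(alpha/(1+alpha))
print(rho(1e-4, 2.0))                   # alpha << min{1, 1/beta^2}: ≈ beta*alpha
```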

Solution 7.7

[Figure: the same four sketches as in Solution 7.1: x and y uniform on the unit square, with
A = {y > 1/2}, B = {y < 1/2}, C = {x < 1/2}, and
D = {x > 1/2, y > 1/2} ∪ {x < 1/2, y < 1/2}.]

(a) 1. As an example, choose E = A, F = C, and G = D. We saw in Solution 7.1(a) that A and
C are independent. Let's check if they are independent when conditioned on D:

    P(A | D) = P(A ∩ D)/P(D) = (1/4)/(1/2) = 1/2

    P(C | D) = P(C ∩ D)/P(D) = (1/4)/(1/2) = 1/2

    P(A ∩ C | D) = P(A ∩ C ∩ D)/P(D) = 0

Since P(A ∩ C | D) ≠ P(A | D)P(C | D), events A and C are not
independent given D, even though A and C are independent when not conditioned on event
D.

2. As an example, choose J = A ∪ C, K = B, and L = C. J and B are not independent:

    P(J) = P(A) + P(C) − P(A ∩ C) = 3/4

    P(B) = 1/2

    P(J ∩ B) = P(x < 1/2, y < 1/2) = 1/4 ≠ P(J)P(B)

However, we can show that J and B are independent when conditioned on L = C:

    P(J | C) = P(J ∩ C)/P(C) = (1/2)/(1/2) = 1

    P(B | C) = P(B ∩ C)/P(C) = (1/4)/(1/2) = 1/2

    P(J ∩ B | C) = P(J ∩ B ∩ C)/P(C) = (1/4)/(1/2) = 1/2 = P(J | C)P(B | C)

(b) The statement is TRUE. To see this, let us define a new event E = W ∩ Q, and use it
as follows:

    P(V ∩ W | Q) = P(V ∩ W ∩ Q)/P(Q)
                 = P(V ∩ E)/P(Q)
                 = P(V | E)P(E)/P(Q)
                 = P(V | E) P(W ∩ Q)/P(Q)
                 = P(V | W ∩ Q)P(W | Q).                          (3)

As an example, we may use the previously defined events B, C, and D. The probability
P(B ∩ C | D) may be found from the definition of conditional probability:

    P(B ∩ C | D) = P(B ∩ C ∩ D)/P(D) = (1/4)/(1/2) = 1/2 .

If we use the result from Equation (3) we have that

    P(B ∩ C | D) = P(B | C ∩ D)P(C | D)
                 = [P(B ∩ C ∩ D)/P(C ∩ D)] [P(C ∩ D)/P(D)]
                 = [(1/4)/(1/4)] [(1/4)/(1/2)] = 1/2 .
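The example in (a)2 can be checked by simulation (a sketch, not part of the original solution): J = A ∪ C and B are dependent unconditionally but independent given C.

```python
# Sketch: Monte Carlo estimates of the probabilities used in part (a)2.
import random

random.seed(3)
N = 200_000
j = b = jb = c = jc = bc = jbc = 0
for _ in range(N):
    x, y = random.random(), random.random()
    A, B, C = y > 0.5, y < 0.5, x < 0.5
    J = A or C
    j += J; b += B; jb += J and B
    c += C; jc += J and C; bc += B and C
    jbc += J and B and C

print(jb / N, (j / N) * (b / N))     # ≈ 1/4 vs 3/8: not independent
print(jbc / c, (jc / c) * (bc / c))  # both ≈ 1/2: independent given C
```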
Solution 7.8
This is a lot easier after Chapter 9, but only uses elementary concepts. We solve (a) and
(b) together.

There are two possible decision rules, D1 or D2:

    D1 ⇒ {m̂ = 0 : A, m̂ = 1 : B},    D2 ⇒ {m̂ = 0 : B, m̂ = 1 : A}

The probability of error, Pe, is given by

    Pe = P(m̂ ≠ m) = P(m = 1, m̂ = 0) + P(m = 0, m̂ = 1)
       = P(m = 1)P(m̂ = 0 | m = 1) + P(m = 0)P(m̂ = 1 | m = 0)

Substituting in the given values, we get

    Pe,D1 = (0.4 × 0.5) + (0.6 × 0.3) = 0.38

    Pe,D2 = (0.4 × 0.5) + (0.6 × 0.7) = 0.62

Since Pe,D1 is the lower of the two, the minimum-probability-of-error decoder is D1.

The minimum-probability-of-error decoder can also be chosen intuitively. One can observe that,
since m = 1 has equal probability of resulting in either outcome, it will not affect the final
probability of error. In the case m = 0, it is clear that A is the more probable outcome,
hence one can conclude that assigning A to m̂ = 0 results in less error.
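Both error probabilities can be reproduced by direct enumeration. The channel probabilities P(A | m = 0) = 0.7, P(B | m = 0) = 0.3, P(A | m = 1) = P(B | m = 1) = 0.5 and prior P(m = 1) = 0.4 are inferred from the numbers substituted above.

```python
# Sketch: enumerate Pe for both decision rules over (message, outcome) pairs.
p_m = {0: 0.6, 1: 0.4}                                  # prior on the message
p_out = {(0, 'A'): 0.7, (0, 'B'): 0.3,                  # P(outcome | m)
         (1, 'A'): 0.5, (1, 'B'): 0.5}
rules = {'D1': {'A': 0, 'B': 1}, 'D2': {'A': 1, 'B': 0}}

pe = {name: sum(p_m[m] * p_out[(m, o)]
                for m in (0, 1) for o in 'AB' if rule[o] != m)
      for name, rule in rules.items()}
print(pe)   # D1 ≈ 0.38, D2 ≈ 0.62
```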

Solution 7.9

(a) There are three possible values for R: −2, 0, or 2. The probability of each value, P(R =
r), is given by

    P(R = r) = Σ_x P(X = x, N = r − x) = Σ_x P(X = x)P(N = r − x)

The independence of X and N allows the last equality.

    P(R = 2) = P(X = 1)P(N = 1) = p(1 − g)
    P(R = 0) = P(X = 1)P(N = −1) + P(X = −1)P(N = 1) = 1 − g − p + 2gp
    P(R = −2) = P(X = −1)P(N = −1) = g(1 − p)

(b) The choice of our estimate will be based on the conditional probabilities P(X = x | R = r):

R = 2 or −2
For each case, only a unique combination of X and N exists. For R = 2, X = N = 1 and
for R = −2, X = N = −1. Therefore, only one reasonable estimate is possible for both
cases: R = 2 → X̂ = 1 and R = −2 → X̂ = −1.

R = 0
We can further simplify the probability,

    P(X = x | R = 0) = P(X = x, R = 0)/P(R = 0) = P(N = −x)/P(R = 0)

We compare the two possible cases, x = 1 and x = −1:

    P(X = 1 | R = 0) = P(N = −1)/P(R = 0) = (1 − p)/(1 − g − p + 2gp)

    P(X = −1 | R = 0) = P(N = 1)/P(R = 0) = p/(1 − g − p + 2gp)

Since 1/2 < p < 1, P(X = −1 | R = 0) has the larger value. Our estimate, X̂, will be −1.

(c) We expand the given equation for P(correct) as follows:

    P(correct) = P(X = −1)P(X̂ = −1 | X = −1) + P(X = 1)P(X̂ = 1 | X = 1)
      = P(X = −1)[ P(X̂ = −1, N = 1 | X = −1) + P(X̂ = −1, N = −1 | X = −1) ]
        + P(X = 1)[ P(X̂ = 1, N = 1 | X = 1) + P(X̂ = 1, N = −1 | X = 1) ]
      = P(X = −1) Σ_{n ∈ {−1,1}} P(N = n)P(X̂ = −1 | X = −1, N = n)
        + P(X = 1) Σ_{n ∈ {−1,1}} P(N = n)P(X̂ = 1 | X = 1, N = n)
      = P(X = −1)[ P(N = 1)P(X̂ = −1 | R = 0) + P(N = −1)P(X̂ = −1 | R = −2) ]
        + P(X = 1)[ P(N = 1)P(X̂ = 1 | R = 2) + P(N = −1)P(X̂ = 1 | R = 0) ]

Using the values for the conditional probabilities from (b), we get

    P(correct) = g [ p^2/(1 − g − p + 2gp) + (1 − p) ] + (1 − g) [ p + (1 − p)^2/(1 − g − p + 2gp) ]
               = ( 1 − p + 2pg(2p − 1) − g^2 (2p − 1)^2 ) / (1 − g − p + 2gp)

This decision rule, which chooses the X of maximum conditional probability given R, also
maximizes the probability of being correct.
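The algebraic simplification in the last step can be verified with exact rational arithmetic for a few sample (p, g) pairs (chosen for illustration only):

```python
# Sketch: check that the expanded and simplified forms of P(correct) agree.
from fractions import Fraction as F

def p_correct_long(p, g):
    D = 1 - g - p + 2 * g * p
    return g * (p * p / D + (1 - p)) + (1 - g) * (p + (1 - p) ** 2 / D)

def p_correct_short(p, g):
    D = 1 - g - p + 2 * g * p
    return (1 - p + 2 * p * g * (2 * p - 1) - g ** 2 * (2 * p - 1) ** 2) / D

for p, g in [(F(3, 5), F(1, 4)), (F(7, 10), F(1, 3)), (F(9, 10), F(1, 2))]:
    assert p_correct_long(p, g) == p_correct_short(p, g)
print("simplification verified")
```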

Solution 7.10

(a) Given each hypothesis (H0, H1), the number of photon arrivals ranges from 0 to ∞. Since the
sum of the probabilities should equal 1, we get the following:

    Σ_{k=0}^{∞} P(k | H0) = Σ_{k=0}^{∞} A0 v0^k = 1  ⇒  A0/(1 − v0) = 1  ⇒  A0 = 1 − v0

    Σ_{k=0}^{∞} P(k | H1) = Σ_{k=0}^{∞} A1 v1^k = 1  ⇒  A1/(1 − v1) = 1  ⇒  A1 = 1 − v1

Therefore,

    P(k | H0) = (1 − v0)v0^k ,    P(k | H1) = (1 − v1)v1^k

(b) We use Bayes' rule.

    P(H1 | m) = P(m | H1)P(H1) / P(m)
              = 0.5(1 − v1)v1^m / [ 0.5(1 − v0)v0^m + 0.5(1 − v1)v1^m ]
              = (1 − v1)v1^m / [ (1 − v0)v0^m + (1 − v1)v1^m ]

(c) The probability of error, given as Pe, can be defined as follows (for more information,
refer to Section 9.3.1):

    Pe = P(H0)P(decide H1 | H0) + P(H1)P(decide H0 | H1)
       = P(H0) Σ_{k ∈ D1} P(k | H0) + P(H1) Σ_{k ∈ D0} P(k | H1)
       = 0.5 Σ_{k ≥ n0} (1 − v0)v0^k + 0.5 Σ_{0 ≤ k < n0} (1 − v1)v1^k
       = (v0^{n0} + 1 − v1^{n0}) / 2

One might be tempted to take the derivative with respect to n0 to find the n0* that minimizes
Pe. However, n0 is an integer, so this approach will be inaccurate.

An alternative approach is to use the hypothesis testing framework from Section 9.2.1,
where we learn that Pe is minimized if, for each possible number of photons, we choose
the hypothesis H with the larger P(H | k). We only have to compare the numerators,
0.5(1 − v1)v1^k and 0.5(1 − v0)v0^k, since the denominators are equal. Note that these
two functions intersect at one point, k = k*. For k > k*, P(H1 | k) > P(H0 | k), and for
k ≤ k*, P(H1 | k) < P(H0 | k). If we let k* = n0, we notice that the receiver in the
problem description already follows the desired decision rule that minimizes Pe, with k*
equal to n0*.

    P(H1 | n0*) = P(H0 | n0*)
    ⇒ (1 − v1)v1^{n0*} = (1 − v0)v0^{n0*}
    ⇒ ln(1 − v1) + n0* ln(v1) = ln(1 − v0) + n0* ln(v0)
    ⇒ n0* = ln( (1 − v0)/(1 − v1) ) / ln( v1/v0 )

As described above, this is the best decision rule.

(d) Since P(H0) ≠ P(H1), we need to generalize n0* from above:

    n0* = ln( P(H0)(1 − v0) / (P(H1)(1 − v1)) ) / ln( v1/v0 )

A simple calculation yields the following result:

    P(H1) = 0.7 → n0* = 0 → Pe = 0.5
    P(H1) = 1 → n0* = −∞ → Pe ∼ 0

P(H1) = 1 makes sense, since if the light signal is always sent, the probability of error
would be 0.
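The threshold formula from (c) can be compared against a brute-force search over integer thresholds. The values v0 = 0.3, v1 = 0.7 below are illustrative assumptions, not the problem's actual parameters.

```python
# Sketch: check that the crossing point from the formula matches the integer
# threshold minimizing Pe = (v0**n0 + 1 - v1**n0) / 2.
import math

v0, v1 = 0.3, 0.7   # illustrative values only
pe = lambda n0: (v0 ** n0 + 1 - v1 ** n0) / 2

n_formula = math.log((1 - v0) / (1 - v1)) / math.log(v1 / v0)
n_best = min(range(50), key=pe)
print(n_formula)          # crossing point of the two numerators
print(n_best, pe(n_best))
```

For these values the formula gives exactly 1, and the brute-force minimum sits at the same place (with n0 = 1 and n0 = 2 tied, since the crossing point is an integer).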

Solution 7.11

(a) In terms of conditional probability, we have to calculate P(Y = y | c used), where y ∈
{−1, 0, 1} and c ∈ {A, B}. The conditional probability can further be expanded as

    P(Y = y | c used) = P(Y = y, X = 1 | c used) + P(Y = y, X = −1 | c used)
      = P(X = 1)P(Y = y | X = 1, c used) + P(X = −1)P(Y = y | X = −1, c used)

The conditional probabilities in the last equation can be read directly from the given
figure.

For Channel A,

    P(Y = −1 | A used) = 0.5 × 0 + 0.5 × 3/4 = 3/8

    P(Y = 0 | A used) = 0.5 × 1/4 + 0.5 × 1/4 = 1/4

    P(Y = 1 | A used) = 0.5 × 3/4 + 0.5 × 0 = 3/8

For Channel B,

    P(Y = −1 | B used) = 0.5 × 1/4 + 0.5 × 1 = 5/8

    P(Y = 1 | B used) = 0.5 × 3/4 + 0.5 × 0 = 3/8

(b) We use the following relation, which is Bayes' rule:

    P(A used | Y = −1) = P(A used, Y = −1)/P(Y = −1)
                       = P(A)P(Y = −1 | A used)/P(Y = −1)

Note that the denominator can further be expanded as

    P(Y = −1) = P(Y = −1, A used) + P(Y = −1, B used)
              = P(A used)P(Y = −1 | A used) + P(B used)P(Y = −1 | B used)
              = α × 3/8 + (1 − α) × 5/8 = (5 − 2α)/8

Therefore,

    P(A used | Y = −1) = (α × 3/8) / ((5 − 2α)/8) = 3α/(5 − 2α)

(c) We first need to calculate P(B used | Y = −1) as in (b).

    P(B used | Y = −1) = ((1 − α) × 5/8) / ((5 − 2α)/8) = 5(1 − α)/(5 − 2α)

Therefore,

    P(A used | Y = −1) > P(B used | Y = −1)
    ⇒ 3α/(5 − 2α) > 5(1 − α)/(5 − 2α)
    ⇒ α > 5/8

5 − 2α is positive, as α itself is positive and cannot be bigger than 1. Therefore, if
5/8 < α ≤ 1, we would decide that channel A was used upon receiving Y = −1.

Next, we need to figure out the range of α for which channel A will be chosen,
regardless of the received Y value. Notice that Y = 0 is only possible through channel
A, regardless of the α value (α cannot be 0, though, as then Y = 0 could not happen in the first
place). Thus, we focus on Y = 1. The following needs to hold:

    P(A used | Y = 1) > P(B used | Y = 1)
    ⇒ (3/8 × α) / (3α/8 + 3(1 − α)/8) > (3/8 × (1 − α)) / (3α/8 + 3(1 − α)/8)
    ⇒ α > 1/2

Combining the cases, we reach the conclusion that channel A will be decided, regardless
of the Y value, for

    5/8 < α ≤ 1

(d) We use the idea from the previous parts. If P(A used | Y = sequence) > P(B used | Y =
sequence), channel A will be chosen; otherwise, channel B will be chosen. The problem
can be broken down into two parts: 1) b ≠ 0 and 2) b = 0.

If b ≠ 0, channel A will always be chosen, as channel B does not allow for the
received value of 0.

If b = 0,

    P(A used | Y = sequence) > P(B used | Y = sequence)
    ⇒ P(Y = sequence | A used)P(A used)/P(Y = sequence) > P(Y = sequence | B used)P(B used)/P(Y = sequence)
    ⇒ P(Y = −1 | A)^c P(Y = 1 | A)^a × α > P(Y = −1 | B)^c P(Y = 1 | B)^a × (1 − α)
    ⇒ (3/8)^c (3/8)^a α > (5/8)^c (3/8)^a (1 − α)  ⇒  3^c α > 5^c (1 − α)
    ⇒ α > 5^c / (5^c + 3^c)

Therefore, channel A will be chosen for 5^c/(5^c + 3^c) < α ≤ 1; otherwise, channel B will be
chosen.
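The thresholds in (b)-(d) can be checked with exact rational arithmetic, using the channel output probabilities derived in (a):

```python
# Sketch: verify the alpha > 5/8 threshold for Y = -1, and the generalized
# threshold 5^c/(5^c + 3^c) for a sequence containing c observations of -1.
from fractions import Fraction as F

pA = {-1: F(3, 8), 0: F(1, 4), 1: F(3, 8)}   # P(Y = y | A used)
pB = {-1: F(5, 8), 0: F(0),    1: F(3, 8)}   # P(Y = y | B used)

def posterior_A(y, alpha):
    num = alpha * pA[y]
    return num / (num + (1 - alpha) * pB[y])

for alpha in (F(5, 8) - F(1, 100), F(5, 8) + F(1, 100)):
    print(alpha > F(5, 8), posterior_A(-1, alpha) > F(1, 2))  # flips at 5/8

c = 3
print(F(5 ** c, 5 ** c + 3 ** c))   # 125/152, the threshold for c = 3
```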

Solution 7.12

(a) False. Suppose (X, Y ) take the values (−1, 0), (0, 1), and (1, 0) with equal probability
(= 1/3). Then E[XY ] = 0 since either X or Y is 0 for every outcome, and also E[X] = 0,
because it is distributed symmetrically around 0. Hence E[XY ] = E[X]E[Y ], so X and
Y are uncorrelated.
Now note that (X^2, Y^2) take the values (1, 0) and (0, 1) with respective probabilities 2/3
and 1/3. Thus E[X^2 Y^2] = 0, because either X^2 or Y^2 is 0 for every outcome. However,
E[X^2] = 2/3 and E[Y^2] = 1/3, so E[X^2 Y^2] ≠ E[X^2]E[Y^2], i.e., X^2 and Y^2 are correlated.
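A minimal sketch confirming the counterexample arithmetic, done exactly with rational arithmetic:

```python
from fractions import Fraction as F

# The three equally likely outcomes for (X, Y) in the counterexample.
outcomes = [(-1, 0), (0, 1), (1, 0)]
p = F(1, 3)

def E(f):
    # Expectation of f(X, Y) under the discrete distribution above.
    return sum(p * f(x, y) for x, y in outcomes)

cov_XY = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
cov_X2Y2 = E(lambda x, y: x**2 * y**2) - E(lambda x, y: x**2) * E(lambda x, y: y**2)
```

cov_XY comes out to 0 (X and Y uncorrelated), while cov_X2Y2 is −2/9 (X^2 and Y^2 correlated), as claimed.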

(b) True. By definition of independence, fX,Y (x, y) = fX (x)fY (y). Then we have

E[g(X)h(Y )] = ∫∫ g(x)h(y) fX,Y (x, y) dx dy

= ( ∫ g(x)fX (x) dx ) ( ∫ h(y)fY (y) dy ) = E[g(X)]E[h(Y )]
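The factorization can also be verified exactly on a small discrete example; the pmfs and the functions g, h below are arbitrary choices for illustration.

```python
from fractions import Fraction as F
from itertools import product

# Independent discrete X and Y (illustrative pmfs; any choice works).
pX = {-1: F(1, 4), 0: F(1, 4), 2: F(1, 2)}
pY = {0: F(1, 3), 1: F(2, 3)}
g = lambda x: x**2 + 1
h = lambda y: 3*y - 2

# E[g(X)h(Y)] under the product (independent) joint pmf ...
lhs = sum(pX[x] * pY[y] * g(x) * h(y) for x, y in product(pX, pY))
# ... equals E[g(X)] E[h(Y)].
rhs = sum(pX[x] * g(x) for x in pX) * sum(pY[y] * h(y) for y in pY)
```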


(c) False. The definition of the conditional density is, as stated in Eq. (7.27),

fX|Y (x | y) = fX,Y (x, y) / fY (y) ,

which is the same as the equation given in the problem. fX,Y (x, y) = fX (x)fY (y) must hold
if X and Y are independent.


Solution 7.13

(a) We would like to show that rXY^2 ≤ rX^2 rY^2. Let's start with what we know:

σXY^2 ≤ σX^2 σY^2

rX^2 = µX^2 + σX^2

rY^2 = µY^2 + σY^2

rXY = µX µY + σXY

Then substituting back into the inequality σXY^2 ≤ σX^2 σY^2,

(rXY − µX µY)^2 ≤ (rX^2 − µX^2)(rY^2 − µY^2)

rXY^2 − 2 rXY µX µY + µX^2 µY^2 ≤ rX^2 rY^2 − µX^2 rY^2 − µY^2 rX^2 + µX^2 µY^2

rXY^2 − 2 rXY µX µY ≤ rX^2 rY^2 − µX^2 rY^2 − µY^2 rX^2

rXY^2 ≤ rX^2 rY^2 − ( µX^2 rY^2 + µY^2 rX^2 − 2 rXY µX µY )

rXY^2 ≤ rX^2 rY^2 − E[(µX Y − µY X)^2]

Since we know that E[(µX Y − µY X)^2] ≥ 0,

rXY^2 ≤ rX^2 rY^2 .

A more direct route to the proof is to start with the observation that E[(αX − Y )^2] ≥ 0
for all α, expand and take expectations to get α^2 rX^2 − 2α rXY + rY^2 ≥ 0, and then note
that this is possible if and only if this quadratic in α has at most one real root, which
then yields the desired inequality.
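Since the inequality rXY^2 ≤ rX^2 rY^2 is just Cauchy-Schwarz for second moments, it also holds exactly for sample moments; a quick numerical sanity check with arbitrary correlated samples:

```python
import random

random.seed(0)
n = 100_000
xs, ys = [], []
for _ in range(n):
    # Build correlated, non-zero-mean pairs from shared randomness.
    u, v = random.gauss(1.0, 2.0), random.gauss(-0.5, 1.0)
    xs.append(u + 0.3 * v)
    ys.append(v)

# Sample second moments and sample cross moment.
rXY = sum(x * y for x, y in zip(xs, ys)) / n
rX2 = sum(x * x for x in xs) / n
rY2 = sum(y * y for y in ys) / n
```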
(b) Using the vector space picture, we can imagine that vector X and vector Y are separated
by an angle θ. Then the inner product of X and Y is expressed as

E[XY ] = √( E[X^2] E[Y^2] ) cos(θ) , i.e., rXY = rX rY cos(θ) .

Since cos^2(θ) ≤ 1,

rXY^2 ≤ rX^2 rY^2 .

(Figure: vectors X and Y separated by an angle θ.)

(c) Let's start with

−σX σY ≤ σXY ≤ σX σY .

Since X and Y here have a common mean µ and a common variance σ^2 (so that r^2 = µ^2 + σ^2), this becomes

−σ^2 ≤ rXY − µ^2 ≤ σ^2 .

After adding µ^2 to both sides,

−σ^2 + µ^2 ≤ rXY ≤ σ^2 + µ^2

−r^2 + 2µ^2 ≤ rXY ≤ r^2 .

Solution 7.14

(a) The probability of error is the sum of the numbers in all the off-diagonal boxes, because
these are precisely the outcomes that contribute to an error for the given decision rule, so

Perror = Σ_{j≠k} P (j sent ∩ k received)

= 0.10 + 0.09 + 0.13 + 0.21 + 0.12 + 0.07


= 0.72

(b) For minimum probability of error, for each received k, declare in favor of the sent message
j that corresponds to the largest element in the column. Thus, j is declared to be sent

according to the rule

j = 2 if k = 1
j = 1 if k = 2
j = 2 if k = 3

The probability of deciding correctly is the sum of the joint probabilities that j was sent and k received for each of the pairs above.
Pcorrect = P (j = 2 ∩ k = 1) + P (j = 1 ∩ k = 2) + P (j = 2 ∩ k = 3)
= 0.13 + 0.10 + 0.21
= 0.44

The probability of guessing in error and the probability of guessing correctly must sum
to 1. Thus,

Perror = 1 − Pcorrect
= 0.56 < 0.72
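The two error computations can be reproduced mechanically from the joint table. In the sketch below, the diagonal entries are made-up placeholders (the actual values are in the problem statement); only the off-diagonal entries, which sum to 0.72, and the column maxima used by the rule in (b) are taken from this solution.

```python
# Hypothetical joint table P(j sent, k received): keys are (j, k).
# Off-diagonal entries sum to 0.72; diagonal values are illustrative only.
P = {(1, 1): 0.12, (1, 2): 0.10, (1, 3): 0.09,
     (2, 1): 0.13, (2, 2): 0.05, (2, 3): 0.21,
     (3, 1): 0.12, (3, 2): 0.07, (3, 3): 0.11}

def map_rule(P, k):
    # Declare the j with the largest joint probability in column k.
    return max((1, 2, 3), key=lambda j: P[(j, k)])

# Part (a): the rule "declare j = k" errs on every off-diagonal cell.
p_err_identity = sum(p for (j, k), p in P.items() if j != k)
# Part (b): the MAP rule is right exactly when the column maximum occurs.
p_correct_map = sum(P[(map_rule(P, k), k)] for k in (1, 2, 3))
```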

(c) The expected cost is


Σ_{j,l} c(j, l) P (j sent ∩ l declared)

For the decision rule in (a), P (j sent ∩ l declared) is the same as P (j sent ∩ l received),
and these terms are all displayed in the table. Multiply each j, k entry in the table by the
corresponding c(j, k), and add the resulting numbers to obtain the risk.

(d) Rewrite the expected cost as


Σ_{j,k,l} c(j, l) P (j sent ∩ l declared | k received) P (k received)

The optimum decision rule minimizes the inner part of this, namely

Σ_{j,l} c(j, l) P (j sent ∩ l declared | k received)

for each received k, because the decision rule specifies what to declare for each received

signal, regardless of what’s specified for some other received signal. But given that k is received, the fact that we declare l is independent of the fact that j was sent. So we can write the expected cost as

Σ_l Σ_j c(j, l) P (j sent | k received) P (l declared | k received)

Any (deterministic) decision rule makes P (l declared | k received) = 1 for some l and 0 for the others. For minimum expected cost, we should make P (l declared | k received) = 1 for the l that has the smallest value of

Σ_j c(j, l) P (j sent | k received)

In the case of the all-or-none cost, this means we should choose l to be the j for which
P (j sent|k received) is largest. This is the same rule we had for minimum probability of
error in (b).

Solution 7.15

(a) The marginal distributions of V and W associated with f^+(v, w) and f^−(v, w) individually
are Gaussian, so the marginals associated with the given fV,W (v, w) are Gaussian as well.
However, the contours of constant density for fV,W (v, w) are evidently not ellipses, so the
joint distribution is not bivariate Gaussian.

(b) The random variables V and W are not independent. For instance, knowing nothing
about the value of V , the distribution of W is Gaussian, but given that V = v0 ≫ 0, the
figure suggests that W will be bimodal. So information about W is obtained by knowing
the value of V , which indicates that these two variables are not independent.

(c) The four-quadrant symmetry of fV,W (v, w) shows that E[V W ] = 0, and E[V ] = 0 =
E[W ]. Therefore, we conclude that V and W are uncorrelated.
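A simulation sketch of this structure, assuming a concrete form for the mixture: f^+ and f^− are taken as zero-mean bivariate Gaussians with unit marginal variances and correlations +ρ and −ρ (an assumption for illustration; the text only gives the figure). The sample statistics show E[V W] ≈ 0 (uncorrelated) while E[V^2 W^2] differs from E[V^2]E[W^2] = 1 (dependent).

```python
import random

random.seed(1)
rho, n = 0.9, 200_000
acc_vw = acc_v2w2 = 0.0
for _ in range(n):
    # Pick one mixture component, then draw a bivariate normal pair
    # with correlation +rho or -rho and unit marginal variances.
    r = rho if random.random() < 0.5 else -rho
    v = random.gauss(0, 1)
    w = r * v + (1 - r * r) ** 0.5 * random.gauss(0, 1)
    acc_vw += v * w
    acc_v2w2 += (v * v) * (w * w)

e_vw = acc_vw / n        # ~ 0: V and W uncorrelated
e_v2w2 = acc_v2w2 / n    # ~ 1 + 2*rho^2, not 1: V and W dependent
```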

Solution 7.16

(a) (i) The expected value of Z is µZ = E[cQ + V ] = E[cQ] + E[V ] = cE[Q] + E[V ] = c.
(ii) The variance computations are all simplified by recognizing that Z̃ = cQ̃ + Ṽ, where
Z̃ denotes the deviation of Z from its mean value µZ , and similarly for the other
variables.

σZ^2 = E[Z̃^2]
     = E[c^2 Q̃^2 + 2c Q̃Ṽ + Ṽ^2]
     = c^2 σQ^2 + σV^2 + 2c σQ,V

(iii) The covariance of Z and Q is

σZ,Q = E[Z̃Q̃]
     = E[(cQ̃ + Ṽ)Q̃]
     = c σQ^2 + σV,Q
(iv) The covariance of Z and V is given by


σZ,V = E[Z̃Ṽ]
     = E[(cQ̃ + Ṽ)Ṽ]
     = c σQ,V + σV^2

(b) We want to choose parameters a and b so as to minimize E[(Q − bZ − a)2 ]. If we assume


that b is fixed and given, and defining a new random variable X = Q − bZ, it is an easy fact
that the value of a that minimizes E[(Q − bZ − a)^2] = E[(X − a)^2] is a = E[X] = µQ − bµZ .
Thus, we want to minimize the expression

E[(Q − bZ − µQ + bµZ )^2] = variance(Q − bZ) = σQ^2 + b^2 σZ^2 − 2b σQ,Z     (4)

with respect to b. This expression is a quadratic function of b, and the minimum is found
by taking the derivative with respect to b, setting it equal to 0 and solving for b:
d/db ( σQ^2 + b^2 σZ^2 − 2b σQ,Z ) = 2b σZ^2 − 2σQ,Z = 0 .

Solving for b we find that


b = σQ,Z / σZ^2 = ρ (σQ / σZ) ,     (5)

where ρ is the correlation coefficient defined as ρ = σQ,Z / (σQ σZ). Then a is given by

a = µQ − ρ (σQ / σZ) µZ .     (6)

With these parameters, the mean squared error is given by

E[(Q − Q̂)^2] = σQ^2 + b^2 σZ^2 − 2b σQ,Z     (from Equation 4)
             = σQ^2 + ρ^2 σQ^2 − 2ρ (σQ / σZ) σQ,Z
             = σQ^2 + σQ,Z^2 / σZ^2 − 2 σQ,Z^2 / σZ^2
             = σQ^2 − σQ,Z^2 / σZ^2
             = σQ^2 − σQ^2 ρ^2
             = σQ^2 (1 − ρ^2) .
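These LMMSE formulas are easy to confirm numerically on the model Z = cQ + V of part (a); the parameter values below are arbitrary illustrative choices.

```python
import random

random.seed(2)
n, c = 200_000, 2.0
qs = [random.gauss(1.0, 1.5) for _ in range(n)]
vs = [random.gauss(0.0, 0.5) for _ in range(n)]  # independent of Q
zs = [c * q + v for q, v in zip(qs, vs)]

def mean(a):
    return sum(a) / len(a)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

b = cov(qs, zs) / cov(zs, zs)   # Equation (5): b = sigma_QZ / sigma_Z^2
a = mean(qs) - b * mean(zs)     # Equation (6)
mse = mean([(q - b * z - a) ** 2 for q, z in zip(qs, zs)])
rho2 = cov(qs, zs) ** 2 / (cov(qs, qs) * cov(zs, zs))
```

With sample moments, mse equals cov(Q,Q)(1 − ρ^2) to floating-point accuracy, mirroring the derivation above.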

Solution 7.17

(a) The mean and variance of X can be written in terms of the means and variances of Q
and W :

X =Q+W
µX = µQ
σX^2 = σQ^2 + σW^2

Since Q and W are independent Gaussians, their sum is Gaussian. Hence knowing the
mean and variance of X suffices to write down its pdf:

fX (x) = ( 1 / (√(2π) σX) ) exp( −(x − µX)^2 / (2σX^2) )

)
eb
er or in ing
ed id n
W
no the iss tea s

itt W tio
w
t p W em ch

e
d on g. in t la

m ld a
(b) The covariance σXQ is most easily computed by first noting that the respective deviations
from their mean for the random variables X, Q and W satisfy

X̃ = Q̃ + W̃ ,

where X̃ = X − µX , and similarly for the other variables. Then

σXQ = E[(X − µX)(Q − µQ)]
    = E[X̃Q̃]
    = E[Q̃^2] + E[Q̃W̃]
    = σQ^2

since Q and W are independent (and therefore uncorrelated).


The correlation coefficient ρXQ is
2
σQ
σXQ σQ
ρXQ = =q =q
σX σQ 2 + σ2 σ
σQ 2 + σ2
σQ
W Q W

Note for use in part (d) below that


1 − ρXQ^2 = σW^2 / (σQ^2 + σW^2) = σW^2 / σX^2 .

1. ρXQ is not a function of µQ


2. If σW ≫ σQ , then noise wins and ρXQ → σQ/σW .
   If σQ ≫ σW , then signal wins and ρXQ → 1 .

(c) The joint pdf fX,Q (x, q) can be found from fX|Q (x|q) and fQ (q):

fX,Q (x, q) = fX|Q (x|q) fQ (q)

fQ (q) = ( 1 / (√(2π) σQ) ) exp( −(q − µQ)^2 / (2σQ^2) )

fX|Q (x|q) = ( 1 / (√(2π) σW) ) exp( −(x − q)^2 / (2σW^2) )

fX,Q (x, q) = ( 1 / (2π σQ σW) ) exp( −(x − q)^2 / (2σW^2) − (q − µQ)^2 / (2σQ^2) )

(d) To find the conditional density fQ|X (q|x), using the fact that we already know fX,Q (x, q)

and fX (x), write

fQ|X (q|x) = fX,Q (x, q) / fX (x)

= ( 1/√(2π) ) ( σX / (σW σQ) ) exp( −(x − q)^2/(2σW^2) − (q − µQ)^2/(2σQ^2) + (x − µX)^2/(2σX^2) )

= ( 1 / (σQ √(2π(1 − ρ^2))) ) exp( −(q − µQ|x)^2 / (2σQ^2 (1 − ρ^2)) )

where we have introduced the notation


2 x + σ2 µ
σQ W Q
µQ|x = 2 + σ2
σQ W

and ρ is streamlined notation for ρXQ . The algebra that gets you to this final result is
straightforward, though painful!
We conclude that the conditional distribution of Q, given X = x, is Gaussian, with mean µQ|x and variance σQ^2 (1 − ρ^2).

(e) The choice of constant that minimizes the expected or mean squared deviation of a random
variable from that constant is just the expected value of the random variable. In this case,
since we are given that X = x, we need the expected value of Q given X = x. Hence
Q̂_MMSE(x) = E[Q | X = x] = µQ|x = ( σQ^2 x + σW^2 µQ ) / ( σQ^2 + σW^2 ) .

This estimate is indeed an affine function of x. As a sanity check on the answer, note that if the noise intensity is low, then Q̂_MMSE(x) ≈ x, which makes sense, since X = Q + W. On the other hand, if the noise dominates, then Q̂_MMSE(x) ≈ µQ , because the measurement is very unreliable.
The corresponding conditional mean square error is
MMSE = σQ^2 (1 − ρ^2) = σQ^2 σW^2 / (σQ^2 + σW^2)

This expression also takes the values that one would expect for the extreme cases of high
noise and low noise.
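A simulation sketch of the estimator and its mean square error, with arbitrary illustrative parameter values:

```python
import random

random.seed(3)
n = 300_000
mu_q, s_q, s_w = 2.0, 1.0, 0.5   # illustrative values for mu_Q, sigma_Q, sigma_W
err2 = 0.0
for _ in range(n):
    q = random.gauss(mu_q, s_q)
    x = q + random.gauss(0.0, s_w)          # X = Q + W, W independent of Q
    # Affine MMSE estimate derived in part (e).
    q_hat = (s_q**2 * x + s_w**2 * mu_q) / (s_q**2 + s_w**2)
    err2 += (q - q_hat) ** 2

mse_sim = err2 / n
mse_theory = s_q**2 * s_w**2 / (s_q**2 + s_w**2)   # = 0.2 for these values
```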

(f) If Q and W are no longer independent, then X is not guaranteed to be Gaussian, so we cannot proceed further to compute the requested pdf's.
cannot proceed further to compute the requested pdf’s.

Another random document with
no related content on Scribd:
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free


distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund from
the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be


used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law in
the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms of
this agreement by keeping this work in the same format with its
attached full Project Gutenberg™ License when you share it without
charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the terms
of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears, or
with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived


from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted


with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning of
this work.

1.E.4. Do not unlink or detach or remove the full Project


Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this


electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing


access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™


electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in paragraph
1.F.3, the Project Gutenberg Literary Archive Foundation, the owner
of the Project Gutenberg™ trademark, and any other party
distributing a Project Gutenberg™ electronic work under this
agreement, disclaim all liability to you for damages, costs and
expenses, including legal fees. YOU AGREE THAT YOU HAVE NO
REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF
WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE
FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of receiving it,
you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or entity
that provided you with the defective work may elect to provide a
replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth in
paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the


Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and distribution
of Project Gutenberg™ electronic works, harmless from all liability,
costs and expenses, including legal fees, that arise directly or
indirectly from any of the following which you do or cause to occur:
(a) distribution of this or any Project Gutenberg™ work, (b)
alteration, modification, or additions or deletions to any Project
Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of


Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the


assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,


Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg
Literary Archive Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many small
donations ($1 to $5,000) are particularly important to maintaining tax
exempt status with the IRS.

The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic
works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.