Signals, Systems & Inference
Alan V. Oppenheim & George C. Verghese © 2016
Chapter 7 Solutions
These solutions represent a preliminary version of the Instructors’ Solutions Manual (ISM).
The book has a total of 350 problems, so it is possible and even likely that at this preliminary
stage of preparing the ISM there are some omissions and errors in the draft solutions. It is
also possible that an occasional problem in the book is now slightly different from an earlier
version for which the solution here was generated. It is therefore important for an instructor to
carefully review the solutions to problems of interest, and to modify them as needed. We will,
from time to time, update these solutions with clarifications, elaborations, or corrections.
Many of these solutions have been prepared by the teaching assistants for the course in which
this material has been taught at MIT, and their assistance is individually acknowledged in the
grateful to Abubakar Abid (who also constructed the solution template), Leighton Barnes,
Fisher Jepsen, Tarek Lahlou, Catherine Medlock, Lucas Nissenbaum, Ehimwenma Nosakhare,
Juan Miguel Rodriguez, Andrew Song, and Guolong Su. We would also like to thank Laura
© 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved. This material is protected under all copyright laws as they currently
exist. No portion of this material may be reproduced, in any form or by any means, without permission in writing from the publisher.
Solution 7.1
[Figure: four unit-square sketches over 0 ≤ x, y ≤ 1 marking the events A, B, C, and D.]
(a) 1.
P(A ∩ D) = P(y > 1/2, x > 1/2) = 1/4
P(A) = P(D) = 1/2
Since P(A ∩ D) = P(A)P(D), events A and D are independent.
2.
P(C ∩ D) = P(y < 1/2, x < 1/2) = 1/4
P(C) = P(D) = 1/2
Since P(C ∩ D) = P(C)P(D), events C and D are independent.
3.
P(A ∩ B) = P(y > 1/2, y < 1/2) = 0
P(A) = P(B) = 1/2
Since P(A ∩ B) ≠ P(A)P(B), events A and B are not independent.
4. Another similar case, not asked in the problem in the book:
P(A ∩ C) = P(y > 1/2, x < 1/2) = 1/4
P(A) = P(C) = 1/2
Since P (A ∩ C) = P (A)P (C), events A and C are independent.
(b) Note that P(A ∩ C ∩ D) = 0 since there is no region where all three sets overlap. However, P(A) = P(C) = P(D) = 1/2, so P(A ∩ C ∩ D) ≠ P(A)P(C)P(D) and the events A, C, and D are not mutually independent, even though they are pairwise independent as seen in (a). For a collection of events to be independent, we require the probability of the intersection of any of the events to equal the product of the probabilities of each individual event. So for the 3-event case, pairwise independence is a necessary but not sufficient condition for independence.
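As a quick check, the quadrant picture can be encoded directly. The region definitions below (A = {y > 1/2}, B = {y < 1/2}, C = {x < 1/2}, and D the two diagonal quadrants where x and y fall on the same side of 1/2) are inferred from the probabilities computed above, since the original figure is not reproduced here.

```python
# Each quadrant of the unit square has probability 1/4 under the uniform
# distribution.  Encode a quadrant as (x > 1/2, y > 1/2).
quadrants = [(qx, qy) for qx in (0, 1) for qy in (0, 1)]

def prob(event):
    return sum(1 for q in quadrants if event(q)) / 4

A = lambda q: q[1] == 1       # y > 1/2
C = lambda q: q[0] == 0       # x < 1/2
D = lambda q: q[0] == q[1]    # diagonal quadrants (assumed region for D)

# Pairwise independence holds:
assert prob(lambda q: A(q) and D(q)) == prob(A) * prob(D)
assert prob(lambda q: C(q) and D(q)) == prob(C) * prob(D)
assert prob(lambda q: A(q) and C(q)) == prob(A) * prob(C)
# ...but the triple intersection is empty, so mutual independence fails:
assert prob(lambda q: A(q) and C(q) and D(q)) == 0.0
assert prob(A) * prob(C) * prob(D) == 0.125
```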
Solution 7.2
We wish to determine the conditional probability P(S = k | R = k), where S denotes what was sent, and R denotes what is received. Recall that the conditional probability of an event A, given that the event B with P(B) > 0 has happened, is defined by
P(A | B) = P(A ∩ B) / P(B)
Therefore,
P(S = k | R = k) = P(S = k and R = k) / P(R = k)
P(S = 1 | R = 1) = 0.05 / (0.05 + 0.10 + 0.09) = 5/24 ≈ 0.21
P(S = 2 | R = 2) = 0.08 / (0.13 + 0.08 + 0.21) = 4/21 ≈ 0.19
P(S = 3 | R = 3) = 0.15 / 0.34 = 15/34 ≈ 0.44
The probability of error is then
P(error) = 1 − Σ_{k=1}^{3} P(S = k and R = k) = 1 − (0.05 + 0.08 + 0.15) = 0.72
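These numbers can be reproduced mechanically. The diagonal entries P(S = k and R = k) and the first two column sums come from the fractions above; P(R = 3) = 0.34 is obtained by normalization, an inference since the full joint table is not reproduced here.

```python
# Diagonal joint probabilities P(S=k and R=k) and column sums P(R=k).
diag = {1: 0.05, 2: 0.08, 3: 0.15}
col = {1: 0.05 + 0.10 + 0.09, 2: 0.13 + 0.08 + 0.21}
col[3] = round(1.0 - col[1] - col[2], 2)   # columns of the table sum to 1

post = {k: diag[k] / col[k] for k in (1, 2, 3)}   # P(S=k | R=k)
p_error = round(1 - sum(diag.values()), 2)        # P(S != R)

print(post)      # approximately {1: 0.21, 2: 0.19, 3: 0.44}
print(p_error)   # 0.72
```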
Solution 7.3
(a)
µV = (a + b)/2
E[V²] = (1/(b − a)) ∫_a^b v² dv = (b² + ab + a²)/3
σV² = E[V²] − µV² = (b − a)²/12
µY = µV + µW = a + b
σY² = σV² + σW² = (b − a)²/6
If V and W were not independent, the expression for σY² above would have contained the additional term 2σV,W , where σV,W is the covariance of the two random variables.
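A quick Monte Carlo check of these moments, with the arbitrary choice a = 0, b = 1 and assuming (as in the solution) that V and W are independent and uniformly distributed on [a, b]:

```python
import random

random.seed(0)
a, b = 0.0, 1.0          # arbitrary interval for the check
N = 200_000

vs = [random.uniform(a, b) for _ in range(N)]
ws = [random.uniform(a, b) for _ in range(N)]
ys = [v + w for v, w in zip(vs, ws)]       # Y = V + W

mean_y = sum(ys) / N
var_y = sum((y - mean_y) ** 2 for y in ys) / N

assert abs(mean_y - (a + b)) < 0.01             # mu_Y = a + b
assert abs(var_y - (b - a) ** 2 / 6) < 0.005    # sigma_Y^2 = (b-a)^2 / 6
```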
Solution 7.4
(a) Recall the tilde notation Z̃ = Z − µZ and note that we have the relations
Z̃ = 2X̃ − Ỹ
and
W̃ = X̃ + (1/2)Ỹ .
Then, by definition,
σZW = E[Z̃W̃]
= E[(2X̃ − Ỹ)(X̃ + (1/2)Ỹ)]
= 2E[X̃²] − (1/2)E[Ỹ²]
= 2(E[X²] − E[X]²) − (1/2)(E[Y²] − E[Y]²)
= 2(8) − (1/2)(3) = 29/2 .
(b) Z and W are affine functions of X and Y, so they will also be bivariate Gaussian random variables. The form for the joint distribution of bivariate Gaussians is given in an example in the text:
fZ,W(z, w) = c exp( −q( (z − µZ)/σZ , (w − µW)/σW ) ) ,
where
c = 1/(2πσZσW√(1 − ρ²))
and
q(v, w) = (v² − 2ρvw + w²) / (2(1 − ρ²)) .
So we will need to know the means and variances of Z and W as well as the correlation
coefficient ρ between Z and W .
µZ = 2E[X] − E[Y] + 5 = 1
µW = E[X] + (1/2)E[Y] − 1 = −1
σW² = E[W̃²] = E[(X̃ + (1/2)Ỹ)²]
= E[X̃²] + E[X̃Ỹ] + (1/4)E[Ỹ²]
= 8 − 2 + (1/4)(3) = 27/4 .
Similarly, σZ² = E[(2X̃ − Ỹ)²] = 4E[X̃²] − 4E[X̃Ỹ] + E[Ỹ²] = 32 + 8 + 3 = 43.
ρ = σZW/(σZσW) = (29/2)/√((43)(27)/4) = 29/√((43)(27))
So finally,
fZ,W(z, w) = 1/(2πσZσW√(1 − ρ²)) · exp( −q( (z − µZ)/σZ , (w − µW)/σW ) )
= (1/(8π√5)) exp( −(1/(2(1 − ρ²))) [ (z − 1)²/43 − (2·29/√((43)(27))) ((z − 1)/√43)((w + 1)/√(27/4)) + (w + 1)²/(27/4) ] )
= (1/(8π√5)) exp( −[ 27(z − 1)² − 4(29)(z − 1)(w + 1) + 4(43)(w + 1)² ] / 640 ) .
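The second-moment bookkeeping can be verified with the bilinearity of covariance. The values σX² = 8, σY² = 3, σXY = −2 below are the ones used in the computation above, taken to come from the problem statement.

```python
from math import isclose, sqrt

var_x, var_y, cov_xy = 8.0, 3.0, -2.0   # moments assumed from the problem

def cov(a, b, c, d):
    # cov(aX + bY, cX + dY) by bilinearity of covariance
    return a * c * var_x + (a * d + b * c) * cov_xy + b * d * var_y

var_z = cov(2, -1, 2, -1)       # Z~ = 2X~ - Y~
var_w = cov(1, 0.5, 1, 0.5)     # W~ = X~ + Y~/2
cov_zw = cov(2, -1, 1, 0.5)
rho = cov_zw / sqrt(var_z * var_w)

assert var_z == 43.0
assert var_w == 27 / 4
assert cov_zw == 29 / 2
assert isclose(rho, 29 / sqrt(43 * 27))
```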
Solution 7.5
Since σV2 = E[V 2 ] − {E[V ]}2 and E[V ] = 0 is given, we get E[V 2 ] = 4. Furthermore,
E[X] = E[Y ] = 2.
Correlation: rX,Y = E[XY] = 0
Covariance: σX,Y = rX,Y − E[X]E[Y] = 0 − 4 = −4
Correlation coefficient:
ρX,Y = σX,Y/(σXσY) = −1
Since rX,Y = E[XY] = 0, the two random variables X and Y are orthogonal. Since σX,Y ≠ 0, the two random variables are correlated.
Solution 7.6
Use the tilde notation for the zero-mean versions of the variables, and similarly
Z̃ = Z − µZ , Ṽ = V − µV , W̃ = W − µW .
Note that because
µX = µZ + µV and
µY = βµZ + µW
we can subtract these equations from the defining equations for X and Y to obtain
X̃ = Z̃ + Ṽ (1)
Ỹ = βZ̃ + W̃ . (2)
Now since σX² = E[X̃²], σY² = E[Ỹ²], and so on, and noting that the specified uncorrelatedness properties imply E[Z̃Ṽ] = σZV = 0 and similarly σZW = 0, we can simply square equations (1) and (2) and take expectations to obtain
σX² = σZ² + σV²
σY² = β²σZ² + σW² .
Similarly, for the covariance we take the product of the equations (1) and (2), then
compute expectations and use the fact that σV W = 0, to obtain
E[X̃Ỹ] = σXY = βσZ² .
ρXY = βα / √((1 + α)(1 + β²α)) .
If β² ≫ 1/α, then ρXY ≈ ±√(α/(1 + α)), with the sign being that of β. If α ≫ max{1, 1/β²}, then ρXY ≈ ±1, with the sign being that of β. This corresponds to the fluctuations in both Z and βZ being much larger than those in V and W, so Y ≈ βX. On the other hand, if α ≪ min{1, 1/β²}, then ρXY ≈ βα. In particular, for α ≪ 1/|β|, ρXY ≈ 0, because now fluctuations in X and Y are dominated by those in V and W respectively.
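The closed form for ρXY and its limiting regimes can be checked numerically, assuming (as the formula suggests) that α = σZ²/σV² with σV² = σW²:

```python
from math import sqrt

def rho_formula(alpha, beta):
    # closed form: beta*alpha / sqrt((1 + alpha)(1 + beta^2 alpha))
    return beta * alpha / sqrt((1 + alpha) * (1 + beta ** 2 * alpha))

def rho_from_moments(var_z, var_v, beta):
    # rho_XY = beta*sigma_Z^2 / sqrt(sigma_X^2 sigma_Y^2), with
    # sigma_X^2 = sigma_Z^2 + sigma_V^2,
    # sigma_Y^2 = beta^2 sigma_Z^2 + sigma_W^2, taking sigma_W^2 = sigma_V^2
    var_x = var_z + var_v
    var_y = beta ** 2 * var_z + var_v
    return beta * var_z / sqrt(var_x * var_y)

# The closed form matches the moment computation:
assert abs(rho_formula(2.0, -1.5) - rho_from_moments(2.0, 1.0, -1.5)) < 1e-12

# Limiting regimes discussed above:
assert abs(rho_formula(1e8, 2.0) - 1.0) < 1e-4    # alpha >> max{1, 1/beta^2}
assert abs(rho_formula(1e-8, 2.0)) < 1e-7         # alpha << min{1, 1/beta^2}
```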
Solution 7.7
[Figure: the four unit-square sketches from Solution 7.1, marking the events A, B, C, and D.]
(a)
P(C | D) = P(C ∩ D)/P(D) = (1/4)/(1/2) = 1/2
P(A ∩ C | D) = P(A ∩ C ∩ D)/P(D) = 0
Since P(A ∩ C | D) ≠ P(A | D)P(C | D), events A and C are not conditionally independent even though A and C are independent when not conditioned on event D.
P(J) = P(A) + P(C) − P(A ∩ C) = 3/4
P(B) = 1/2
P(J ∩ B) = P(x < 1/2, y < 1/2) = 1/4 ≠ P(J)P(B)
However, we can show that J and B are independent when conditioned on L = C:
P(J | C) = P(J ∩ C)/P(C) = (1/2)/(1/2) = 1
P(B | C) = P(B ∩ C)/P(C) = (1/4)/(1/2) = 1/2
P(J ∩ B | C) = P(J ∩ B ∩ C)/P(C) = (1/4)/(1/2) = 1/2 = P(J | C)P(B | C)
(b) The statement is TRUE. To see this, let us define a new event E = W ∩ Q, and use it
as follows:
P(V ∩ W | Q) = P(V ∩ W ∩ Q) / P(Q)
= P(V ∩ E) / P(Q)
= P(V | E)P(E) / P(Q)
= P(V | E) · P(W ∩ Q)/P(Q)
= P(V | W ∩ Q)P(W | Q). (3)
As an example, we may use the previously defined events B, C, and D. The probability P(B ∩ C | D) may be found from the definition of conditional probability by:
P(B ∩ C | D) = P(B ∩ C ∩ D)/P(D) = (1/4)/(1/2) = 1/2 .
Alternatively, using (3),
P(B ∩ C | D) = P(B | C ∩ D)P(C | D) = [P(B ∩ C ∩ D)/P(C ∩ D)] · [P(C ∩ D)/P(D)] = [(1/4)/(1/4)] · [(1/4)/(1/2)] = 1/2 .
Solution 7.8
This is a lot easier after Chapter 9, but only uses elementary concepts. We solve (a) and
(b) together.
D1 ⇒ {m̂ = 0 : A, m̂ = 1 : B},  D2 ⇒ {m̂ = 0 : B, m̂ = 1 : A}
Pe = P(m̂ ≠ m) = P(m = 1, m̂ = 0) + P(m = 0, m̂ = 1)
= P(m = 1)P(m̂ = 0 | m = 1) + P(m = 0)P(m̂ = 1 | m = 0)
Pe,D1 = (0.4 × 0.5) + (0.6 × 0.3) = 0.38
The minimum-probability-of-error decoder can be chosen intuitively. One can observe that,
since m = 1 has equal probability of resulting in either outcome, it will not affect the final
Solution 7.9
(a) There are three possible values for R: −2, 0, or 2. The probability for each value, P (R =
r), is given as,
P (R = r) = P (X = x, N = r − x) = P (X = x)P (N = r − x)
P (R = 2) = P (X = 1)P (N = 1) = p(1 − g)
P (R = 0) = P (X = 1)P (N = −1) + P (X = −1)P (N = 1) = 1 − g − p + 2gp
P (R = −2) = P (X = −1)P (N = −1) = g(1 − p)
(b) The choice of our estimate will be based on the conditional probabilities P (X = x | R = r):
R = 2 or −2:
For each case, only a unique combination of X and N exists. For R = 2, X = N = 1, and for R = −2, X = N = −1. Therefore, only one reasonable estimate is possible for both cases: R = 2 → X̂ = 1 and R = −2 → X̂ = −1.

R = 0:
P(X = x | R = 0) = P(X = x, R = 0)/P(R = 0) = P(N = −x)/P(R = 0)
P(X = 1 | R = 0) = P(N = −1)/P(R = 0) = (1 − p)/(1 − g − p + 2gp)
P(X = −1 | R = 0) = P(N = 1)/P(R = 0) = p/(1 − g − p + 2gp)
Since 1/2 < p < 1, P(X = −1 | R = 0) has the larger value. Our estimate, X̂, will be −1.
(c) We expand the given equation for P(correct) as follows:
P(correct) = P(X = −1)P(X̂ = −1 | X = −1) + P(X = 1)P(X̂ = 1 | X = 1)
= P(X = −1)[ P(X̂ = −1, N = 1 | X = −1) + P(X̂ = −1, N = −1 | X = −1) ]
  + P(X = 1)[ P(X̂ = 1, N = 1 | X = 1) + P(X̂ = 1, N = −1 | X = 1) ]
= P(X = −1) Σ_{n∈{−1,1}} P(N = n)P(X̂ = −1 | X = −1, N = n)
  + P(X = 1) Σ_{n∈{−1,1}} P(N = n)P(X̂ = 1 | X = 1, N = n)
= P(X = −1)[ P(N = 1)P(X̂ = −1 | R = 0) + P(N = −1)P(X̂ = −1 | R = −2) ]
  + P(X = 1)[ P(N = 1)P(X̂ = 1 | R = 2) + P(N = −1)P(X̂ = 1 | R = 0) ]
P(correct) = g( p²/(1 − g − p + 2gp) + (1 − p) ) + (1 − g)( p + (1 − p)²/(1 − g − p + 2gp) )
= ( 1 − p + 2pg(2p − 1) − g²(2p − 1)² ) / ( 1 − g − p + 2gp )
This decision rule that chooses the X of maximum conditional probability, given R, also
maximizes the probability of being correct.
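The final simplification can be spot-checked with exact rational arithmetic: the bracketed form and the single-fraction form of P(correct) above should agree for any p and g.

```python
from fractions import Fraction

def p_correct_expanded(p, g):
    D = 1 - g - p + 2 * g * p          # the common denominator 1 - g - p + 2gp
    return g * (p * p / D + (1 - p)) + (1 - g) * (p + (1 - p) ** 2 / D)

def p_correct_closed(p, g):
    D = 1 - g - p + 2 * g * p
    return (1 - p + 2 * p * g * (2 * p - 1) - g ** 2 * (2 * p - 1) ** 2) / D

# Exact agreement at several (p, g) pairs:
for p in (Fraction(3, 5), Fraction(7, 10), Fraction(9, 10)):
    for g in (Fraction(1, 5), Fraction(1, 2), Fraction(4, 5)):
        assert p_correct_expanded(p, g) == p_correct_closed(p, g)
```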
Solution 7.10
(a) Given each hypothesis (H0 , H1 ), the number of photon arrivals is from 0 to ∞. Since the
sum of probabilities should equal 1, we get the following:
Σ_{k=0}^{∞} P(k | H0) = Σ_{k=0}^{∞} A0·v0^k = A0/(1 − v0) = 1 ⇒ A0 = 1 − v0
Σ_{k=0}^{∞} P(k | H1) = Σ_{k=0}^{∞} A1·v1^k = A1/(1 − v1) = 1 ⇒ A1 = 1 − v1
Therefore,
P(k | H0) = (1 − v0)v0^k ,  P(k | H1) = (1 − v1)v1^k
(b) We use Bayes’ rule.
P(H1 | m) = P(m | H1)P(H1) / P(m)
= 0.5 × (1 − v1)v1^m / ( 0.5 × (1 − v0)v0^m + 0.5 × (1 − v1)v1^m )
= (1 − v1)v1^m / ( (1 − v0)v0^m + (1 − v1)v1^m )
(c) The probability of error, Pe, can be defined as in Section 9.3.1. An alternative approach is to use the hypothesis testing framework from Section 9.2.1, where we learn that Pe is minimized if, for each possible number of photons, we choose the hypothesis H with the larger P(H | k). We only have to compare the numerators,
0.5 × (1 − v1)v1^k and 0.5 × (1 − v0)v0^k, since the denominators are equal. Note that these two functions intersect at one point, k = k∗. For k > k∗, P(H1 | k) > P(H0 | k), and for k ≤ k∗, P(H1 | k) < P(H0 | k). If we let k∗ = n0, we notice that the receiver in the problem description already follows the desired decision rule that minimizes Pe, with k∗ equal to n0.
(d) Since P(H0) ≠ P(H1), we need to generalize n∗0 from above.
n∗0 = ln( P(H0)(1 − v0) / (P(H1)(1 − v1)) ) / ln(v1/v0)
P (H1 ) = 1 → n∗0 = −∞ → Pe ∼ 0
P (H1 ) = 1 makes sense, since if the light signal is always sent, the probability of error
would be 0.
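The threshold behavior can be illustrated with a small sketch; the parameter values below are arbitrary choices, used only to exercise the formula.

```python
from math import floor, log

v0, v1 = 0.3, 0.8            # example geometric parameters, v1 > v0
p0, p1 = 0.6, 0.4            # example priors P(H0), P(H1)

# Threshold where the posterior numerators P(H)(1 - v)v^k cross:
n0 = log(p0 * (1 - v0) / (p1 * (1 - v1))) / log(v1 / v0)

def favors_h1(k):
    # MAP comparison of the two numerators for k observed photons
    return p1 * (1 - v1) * v1 ** k > p0 * (1 - v0) * v0 ** k

# Counts at or below the threshold favor H0; counts above favor H1.
k = floor(n0)
assert not favors_h1(k)
assert favors_h1(k + 1)
```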
Solution 7.11
The conditional probabilities in the last equation can be read directly from the given figure.
For Channel A,
P(Y = −1 | A used) = 0.5 × 0 + 0.5 × 3/4 = 3/8
P(Y = 0 | A used) = 0.5 × 1/4 + 0.5 × 1/4 = 1/4
P(Y = 1 | A used) = 0.5 × 3/4 + 0.5 × 0 = 3/8
For Channel B,
P(Y = −1 | B used) = 0.5 × 1 + 0.5 × 1/4 = 5/8
P(Y = 1 | B used) = 0.5 × 0 + 0.5 × 3/4 = 3/8
Therefore,
P(A used | Y = −1) = P(A used, Y = −1)/P(Y = −1) = (α × 3/8) / ((5 − 2α)/8) = 3α/(5 − 2α)
Therefore, we decide that channel A was used when P(A used | Y = −1) > P(B used | Y = −1), that is, when 3α/(5 − 2α) > 5(1 − α)/(5 − 2α)
⇒ α > 5/8
For 5/8 < α ≤ 1, we would decide that channel A was used upon receiving Y = −1.
Next, we need to figure out the range of α for which channel A will be chosen regardless of the received Y value. Notice that Y = 0 is only possible through channel A. If b = 0, channel A will always be chosen, as channel B does not allow for the received value of 0.
If b ≠ 0,
Solution 7.12
(a) False. Suppose (X, Y) takes the values (−1, 0), (0, 1), and (1, 0) with equal probability (= 1/3). Then E[XY] = 0, since either X or Y is 0 for every outcome, and also E[X] = 0, because it is distributed symmetrically around 0. Hence E[XY] = E[X]E[Y], so X and Y are uncorrelated.
Now note that (X², Y²) takes the values (1, 0) and (0, 1) with respective probabilities 2/3 and 1/3. Thus E[X²Y²] = 0, because either X² or Y² is 0 for every outcome. However, E[X²] = 2/3 and E[Y²] = 1/3, so E[X²Y²] ≠ E[X²]E[Y²], i.e., X² and Y² are not uncorrelated.
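The counterexample is easy to verify directly:

```python
# (X, Y) uniform on {(-1, 0), (0, 1), (1, 0)}.
pts = [(-1, 0), (0, 1), (1, 0)]

def E(f):
    return sum(f(x, y) for x, y in pts) / len(pts)

# X and Y are uncorrelated:
assert E(lambda x, y: x * y) == E(lambda x, y: x) * E(lambda x, y: y) == 0
# ...but X^2 and Y^2 are not: E[X^2 Y^2] = 0 while E[X^2]E[Y^2] = 2/9.
assert E(lambda x, y: (x * y) ** 2) == 0
assert E(lambda x, y: (x * y) ** 2) != E(lambda x, y: x ** 2) * E(lambda x, y: y ** 2)
```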
(b) True. By definition of independence, fX,Y(x, y) = fX(x)fY(y). Then we have
E[g(X)h(Y)] = ∫∫ g(x)h(y) fX,Y(x, y) dx dy = ( ∫ g(x)fX(x) dx )( ∫ h(y)fY(y) dy ) = E[g(X)]E[h(Y)]
(c) False. The definition of conditional probability is, as stated in Eq. (7.27),
fX|Y(x | y) = fX,Y(x, y) / fY(y) ,
which is the same as the equation given in the problem. fX,Y(x, y) = fX(x)fY(y) must hold
Solution 7.13
(a) We would like to show that rXY² ≤ rX²rY². Let's start with what we know:
σXY² ≤ σX²σY²
rX² = µX² + σX²
Then substituting back into the inequality σXY² ≤ σX²σY²,
(rXY − µXµY)² ≤ (rX² − µX²)(rY² − µY²)
rXY² − 2rXY µXµY + µX²µY² ≤ rX²rY² − µX²rY² − µY²rX² + µX²µY²
rXY² ≤ rX²rY² − µX²rY² − µY²rX² + 2rXY µXµY
rXY² ≤ rX²rY² − E[(µX Y − µY X)²]
Since E[(µX Y − µY X)²] ≥ 0, we conclude
rXY² ≤ rX²rY² .
A more direct route to the proof is to start with the observation that E[(αX − Y)²] ≥ 0 for all α, expand and take expectations to get α²rX² − 2αrXY + rY² ≥ 0, and then note that this is possible if and only if this quadratic in α has at most one real root, which then yields the desired inequality.
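The quadratic argument is easy to test numerically on simulated data; the particular joint distribution below is an arbitrary choice for the check.

```python
import random

random.seed(1)
xs = [random.gauss(1.0, 2.0) for _ in range(50_000)]
ys = [0.5 * x + random.gauss(-1.0, 1.0) for x in xs]

n = len(xs)
rx2 = sum(x * x for x in xs) / n                  # E[X^2]
ry2 = sum(y * y for y in ys) / n                  # E[Y^2]
rxy = sum(x * y for x, y in zip(xs, ys)) / n      # E[XY]

# The quadratic alpha^2 rX^2 - 2 alpha rXY + rY^2 is nonnegative for all
# alpha, so its discriminant must satisfy rXY^2 <= rX^2 rY^2:
assert rxy ** 2 <= rx2 * ry2
```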
(b) Using the vector space picture, we can imagine that vector X and vector Y are separated
by an angle θ. Then the inner product of X and Y is expressed as
E[XY] = √(E[X²]E[Y²]) cos(θ)
rXY = rX rY cos(θ) .
Since cos²(θ) ≤ 1,
rXY² ≤ rX²rY² .
[Figure: vectors X and Y separated by the angle θ.]
(c) Let’s start with
−σXσY ≤ σXY ≤ σXσY
−σ² ≤ rXY − µ² ≤ σ²
−σ² + µ² ≤ rXY ≤ σ² + µ²
Solution 7.14
(a) The probability of error is the sum of the numbers in all the off-diagonal boxes, because
these are precisely the outcomes that contribute to an error for the given decision rule, so
Perror = Σ_{j≠k} P(j sent ∩ k received)
(b) For minimum probability of error, for each received k, declare in favor of the sent message
j that corresponds to the largest element in the column. Thus, j is declared to be sent
according to the rule
j = 2 : k = 1
j = 1 : k = 2
j = 2 : k = 3
The probability of deciding correctly is the sum of the joint probabilities that j was sent and declared:
Pcorrect = P (j = 2 ∩ k = 1) + P (j = 1 ∩ k = 2) + P (j = 2 ∩ k = 3)
= 0.13 + 0.10 + 0.21
= 0.44
The probability of guessing in error and the probability of guessing correctly must sum
to 1. Thus,
Perror = 1 − Pcorrect
= 0.56 < 0.72
For the decision rule in (a), P (j sent ∩ l declared) is the same as P (j sent ∩ l received),
and these terms are all displayed in the table. Multiply each j, k entry in the table by the
corresponding c(j, k), and add the resulting numbers to obtain the risk.
The optimum decision rule minimizes the inner part of this, namely
Σ_{j,k,l} c(j, l) P(j sent ∩ l declared | k received)
for each received k, because the decision rule specifies what to declare for each received signal, regardless of what's specified for some other received signal. But given that k is received, the fact that we declare l is independent of the fact that j was sent. So we can write this as
Σ_l Σ_j c(j, l) P(j sent | k received) P(l declared | k received)
Any (deterministic) decision rule makes P(l declared | k received) = 1 for some l and 0 for the others. For minimum expected cost, we should make P(l declared | k received) = 1 for the l that minimizes
Σ_j c(j, l) P(j sent | k received)
In the case of the all-or-none cost, this means we should choose l to be the j for which
P (j sent|k received) is largest. This is the same rule we had for minimum probability of
error in (b).
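The minimum-expected-cost rule above can be sketched in a few lines. The joint table below is made up for illustration (it is not the table from the problem), and the all-or-none cost is used:

```python
import numpy as np

# Hypothetical joint probabilities P(j sent ∩ k received): rows index j, columns k.
P = np.array([[0.20, 0.10, 0.05],
              [0.13, 0.22, 0.30]])
c = 1.0 - np.eye(2)   # all-or-none cost: c(j, l) = 0 if l == j, else 1

# For each received k, declare the l minimizing sum_j c(j, l) P(j sent | k received).
P_j_given_k = P / P.sum(axis=0, keepdims=True)
cost_of_l = c.T @ P_j_given_k           # entry [l, k]: expected cost of declaring l
declare = np.argmin(cost_of_l, axis=0)  # one declared l per received k
print(declare)

# With the all-or-none cost this reduces to the MAP rule of part (b).
print(np.argmax(P_j_given_k, axis=0))
```

Both prints agree, illustrating that the all-or-none cost recovers the maximum a posteriori rule.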
Solution 7.15
(a) The marginal distributions of V and W associated with f + (v, w) and f − (v, w) individually
are Gaussian, so the marginals associated with the given fV,W (v, w) are Gaussian as well.
However, the contours of constant density for fV,W (v, w) are evidently not ellipses, so the
joint distribution is not bivariate Gaussian.
(b) The random variables V and W are not independent. For instance, knowing nothing
about the value of V, the distribution of W is Gaussian, but given that V = v0 ≠ 0, the
figure suggests that W will be bimodal. So information about W is obtained by knowing
the value of V , which indicates that these two variables are not independent.
(c) The four-quadrant symmetry of fV,W (v, w) shows that E[V W ] = 0, and E[V ] = 0 =
E[W ]. Therefore, we conclude that V and W are uncorrelated.
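A simulation makes the point concrete. The mixture below, an equal-weight combination of two bivariate Gaussians with component correlations +ρ and −ρ, is an assumed stand-in for the f⁺, f⁻ construction in the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.9  # assumed within-component correlation

# V is standard Gaussian; given the (hidden) component sign, W is Gaussian
# with conditional mean ±rho*V, so the marginals are Gaussian but the joint is not.
sign = rng.choice([1.0, -1.0], size=n)
v = rng.standard_normal(n)
w = sign * rho * v + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print(np.mean(v * w))                  # near 0: V and W are uncorrelated
mask = np.abs(v - 2.0) < 0.1           # condition on V near v0 = 2
print(np.mean(w[mask] > 0.5), np.mean(w[mask] < -0.5))  # both large: W given V=v0 is bimodal
```

Conditioning on V ≈ 2 leaves roughly half the mass of W near +1.8 and half near −1.8, so knowing V changes the distribution of W even though E[VW] = 0.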
Solution 7.16
(a) (i) The expected value of Z is µZ = E[cQ + V ] = E[cQ] + E[V ] = cE[Q] + E[V ] = c.
(ii) The variance computations are all simplified by recognizing that Z̃ = cQ̃ + Ṽ, where Z̃ denotes the deviation of Z from its mean value µZ, and similarly for the other variables. Then

    σZ² = E[Z̃²] = E[c²Q̃² + 2cQ̃Ṽ + Ṽ²] = c²σQ² + σV² + 2cσQ,V ,

    σZ,Q = E[Z̃Q̃] = E[(cQ̃ + Ṽ)Q̃] = cσQ² + σV,Q ,

    σZ,V = E[Z̃Ṽ] = cσQ,V + σV² .
(b) For an estimator of the form Q̂ = a + bZ, suppose first that b is fixed and given. Defining a new random variable X = Q − bZ, it is an easy fact that the value of a that minimizes E[(Q − bZ − a)²] = E[(X − a)²] is a = E[X] = µQ − bµZ. Thus, we want to minimize the expression

    σQ² + b²σZ² − 2bσQ,Z    (4)

with respect to b. This expression is a quadratic function of b, and the minimum is found by taking the derivative with respect to b, setting it equal to 0, and solving for b:

    (d/db)(σQ² + b²σZ² − 2bσQ,Z) = 2bσZ² − 2σQ,Z = 0 ,

so b = σQ,Z/σZ² = ρ σQ/σZ, where ρ denotes the correlation coefficient ρQ,Z. Substituting into the expression for a,

    a = µQ − ρ (σQ/σZ) µZ .    (6)
With these parameters, the mean squared error is given by
    E[(Q − Q̂)²] = σQ² + b²σZ² − 2bσQ,Z        (from Equation 4)

                = σQ² + ρ²σQ² − 2ρ (σQ/σZ) σQ,Z

                = σQ² + σQ,Z²/σZ² − 2σQ,Z²/σZ²

                = σQ² − σQ,Z²/σZ²

                = σQ² − σQ²ρ²

                = σQ²(1 − ρ²).
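The chain of equalities above can be spot-checked by Monte Carlo. The constant c and the joint distribution of (Q, V) below are arbitrary illustrative choices, not values from the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

c = 2.0
q = rng.normal(1.0, 1.5, n)                       # E[Q] = 1, so E[Z] = c
v = 0.4 * (q - 1.0) + rng.normal(0.0, 1.0, n)     # makes sigma_{Q,V} nonzero
z = c * q + v

# LMMSE estimate Qhat = a + b*Z with b = sigma_{Q,Z}/sigma_Z^2, a = mu_Q - b*mu_Z.
b = np.cov(q, z)[0, 1] / np.var(z)
a = q.mean() - b * z.mean()
qhat = a + b * z

rho = np.corrcoef(q, z)[0, 1]
mse = np.mean((q - qhat) ** 2)
print(mse, np.var(q) * (1 - rho**2))   # the two values agree
```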
Solution 7.17
(a) The mean and variance of X can be written in terms of the means and variances of Q
and W :
    X = Q + W

    µX = µQ

    σX² = σQ² + σW²
Since Q and W are independent Gaussians, their sum is Gaussian. Hence knowing the
mean and variance of X suffices to write down its pdf:
    fX(x) = (1/(√(2π) σX)) exp(−(x − µX)²/(2σX²))
(b) The covariance σXQ is most easily computed by first noting that the respective deviations satisfy

    X̃ = Q̃ + W̃ ,

where X̃ = X − µX, and similarly for the other variables. Then

    σXQ = E[X̃Q̃] = E[Q̃²] + E[Q̃W̃] = σQ² ,

since Q and W are independent, so E[Q̃W̃] = 0.
Note that ρXQ = σXQ/(σX σQ) = σQ/√(σQ² + σW²). If σQ ≫ σW, the signal dominates the noise and ρXQ → 1.
(c) The joint pdf fX,Q(x, q) can be found from fX|Q(x|q) and fQ(q). Given Q = q, X = q + W is Gaussian with mean q and variance σW², so

    fX,Q(x, q) = fX|Q(x|q) fQ(q) = (1/(2π σW σQ)) exp(−(x − q)²/(2σW²) − (q − µQ)²/(2σQ²)) .
(d) To find the conditional density fQ|X(q|x), using the fact that we already know fX,Q(x, q) and fX(x), write

    fQ|X(q|x) = fX,Q(x, q) / fX(x)

              = (1/√(2π)) (σX/(σW σQ)) exp(−(x − q)²/(2σW²) − (q − µQ)²/(2σQ²) + (x − µX)²/(2σX²))

              = (1/(σQ √(2π(1 − ρ²)))) exp(−(q − µQ|x)²/(2σQ²(1 − ρ²))) ,

where µQ|x = (σQ² x + σW² µQ)/(σQ² + σW²) and ρ is streamlined notation for ρXQ. The algebra that gets you to this final result is straightforward, though painful!
We conclude that the conditional distribution of Q, given X = x, is Gaussian, with mean µQ|x and variance σQ²(1 − ρ²).
(e) The choice of constant that minimizes the expected or mean squared deviation of a random variable from that constant is just the expected value of the random variable. In this case, since we are given that X = x, we need the expected value of Q given X = x. Hence

    Q̂MMSE(x) = E[Q | X = x] = µQ|x = (σQ² x + σW² µQ)/(σQ² + σW²) .

This estimate is indeed an affine function of x. As a sanity check on the answer, note that if the noise intensity is low, then Q̂MMSE(x) ≈ x, which makes sense, since X = Q + W. On
the other hand, if the noise dominates, then Q̂MMSE(x) ≈ µQ, because the measurement is very unreliable.
The corresponding conditional mean square error is
    MMSE = σQ²(1 − ρ²) = σQ² σW² / (σQ² + σW²) .
This expression also takes the values that one would expect for the extreme cases of high
noise and low noise.
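Both extremes, and the MMSE formula itself, can be checked by simulation; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000

mu_q, sigma_q, sigma_w = 3.0, 2.0, 1.0   # assumed illustrative parameters
q = rng.normal(mu_q, sigma_q, n)
w = rng.normal(0.0, sigma_w, n)
x = q + w                                 # X = Q + W, independent Gaussians

# MMSE estimate E[Q | X = x] from part (e).
q_hat = (sigma_q**2 * x + sigma_w**2 * mu_q) / (sigma_q**2 + sigma_w**2)

emp_mse = np.mean((q - q_hat) ** 2)
theory = sigma_q**2 * sigma_w**2 / (sigma_q**2 + sigma_w**2)
print(emp_mse, theory)   # empirical MSE matches the closed-form MMSE
```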