EE 278B Friday, November 25, 2011

Statistical Signal Processing Handout #17


Homework #6 Solutions

1. (10 points) Vector CLT. The key to this problem is to realize that we are asked to find
the distribution of the random vector Yn = [ X1n X2n ]T as n → ∞. First note that
$$
\begin{aligned}
E(X_{1n}) &= E\Big(\frac{1}{\sqrt{n}}\sum_{j=1}^{n} Z_j \cos\Theta_j\Big)\\
&= \frac{1}{\sqrt{n}}\sum_{j=1}^{n} E(Z_j \cos\Theta_j) && \text{(by linearity of expectation)}\\
&= \frac{1}{\sqrt{n}}\sum_{j=1}^{n} E(Z_j)\,E(\cos\Theta_j) && \text{(by independence of $Z_j$ and $\Theta_j$)}
\end{aligned}
$$
Since E(cos Θj) = 0, we conclude that E(X1n) = 0. Similarly, E(X2n) = 0.

As discussed in the lecture notes, the Central Limit Theorem applies to a sequence of i.i.d.
random vectors. Thus the pdf of Yn converges to N (0, ΣY ). All that remains is to find the
covariance matrix for Yn .
$$
\begin{aligned}
\operatorname{Var}(X_{1n}) &= \frac{1}{n}\operatorname{Var}\Big(\sum_{j=1}^{n} Z_j \cos\Theta_j\Big)\\
&= \frac{1}{n}\sum_{j=1}^{n} \operatorname{Var}(Z_j \cos\Theta_j) && \text{(independence)}\\
&= \operatorname{Var}(Z_1 \cos\Theta_1) && \text{(identically distributed random variables)}\\
&= E(Z_1^2 \cos^2\Theta_1) - \big(E(Z_1 \cos\Theta_1)\big)^2\\
&= E(Z_1^2)\,E(\cos^2\Theta_1) && \text{(since $E(Z_1 \cos\Theta_1) = 0$)}\\
&= (\sigma^2 + \mu^2)\,E(\cos^2\Theta_1)\\
&= \tfrac{1}{2}(\sigma^2 + \mu^2). && \text{(since $E(\cos^2\Theta_1) = \tfrac{1}{2}$)}
\end{aligned}
$$
Now consider
$$
\begin{aligned}
\operatorname{Cov}(X_{1n}, X_{2n}) &= E\Big(\frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{n} Z_j Z_k \cos\Theta_k \sin\Theta_j\Big) - E(X_{1n})\,E(X_{2n})\\
&= \frac{1}{n}\sum_{j=1}^{n}\sum_{k=1}^{n} E(Z_j Z_k)\,E(\cos\Theta_k \sin\Theta_j). && \text{(independence)}
\end{aligned}
$$
If j ≠ k, the (j, k) term is
$$
E(Z_j)\,E(Z_k)\,E(\cos\Theta_k)\,E(\sin\Theta_j) = 0, \quad \text{since } E(\cos\Theta_k) = 0 .
$$
If j = k, the term is
$$
E(Z_j^2)\,E(\cos\Theta_j \sin\Theta_j) = 0, \quad \text{since } E(\cos\Theta_j \sin\Theta_j) = \tfrac{1}{2}E(\sin 2\Theta_j) = 0 .
$$
Since every term vanishes, Cov(X1n, X2n) = 0 in all cases, and
$$
\Sigma_Y = \begin{bmatrix} \tfrac{1}{2}(\sigma^2 + \mu^2) & 0\\[1mm] 0 & \tfrac{1}{2}(\sigma^2 + \mu^2) \end{bmatrix}.
$$
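As a sanity check, the limiting covariance can be verified numerically. The sketch below assumes, purely for illustration, that Zj ∼ N(µ, σ²) with µ = 1 and σ = 2 and that Θj ∼ U[0, 2π]; it draws many independent copies of Yn for a moderately large n and compares the sample covariance with ½(σ² + µ²) I.

% Monte Carlo check of the limiting covariance of Yn = [X1n; X2n].
% Illustrative assumptions: Zj ~ N(mu, sigma^2), Thetaj ~ U[0, 2*pi].
mu = 1; sigma = 2;                      % hypothetical parameters
n = 500;                                % number of terms in each normalized sum
trials = 10000;                         % independent copies of Yn

Z     = mu + sigma*randn(n, trials);    % Z_j for every trial
Theta = 2*pi*rand(n, trials);           % Theta_j for every trial

X1 = sum(Z.*cos(Theta), 1)/sqrt(n);     % samples of X1n
X2 = sum(Z.*sin(Theta), 1)/sqrt(n);     % samples of X2n

sampleCov = cov([X1' X2'])              % empirical covariance of Yn
theoryCov = 0.5*(sigma^2 + mu^2)*eye(2) % limiting covariance Sigma_Y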

2. (15 points) Continuous time random process.


a. Sample functions of X(t) are straight lines with slope A and intercept B; see Figure 1.

[Figure 1: Sample functions of X(t).]

b. The first-order pdf is the pdf of X(t) as a function of t. Since At and B are independent, the pdf of X(t) is the convolution of a U[−t, t] pdf with a U[−1, 1] pdf. The result is plotted in Figure 2.
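For completeness, carrying out this convolution explicitly (for t > 0) gives the trapezoidal density sketched in Figure 2:
$$
f_{X(t)}(x) =
\begin{cases}
\dfrac{1}{2}\min\!\Big(1, \dfrac{1}{t}\Big), & |x| \le |t-1|,\\[2mm]
\dfrac{t + 1 - |x|}{4t}, & |t-1| < |x| \le t+1,\\[2mm]
0, & \text{otherwise.}
\end{cases}
$$
For t = 0 the pdf is simply that of B, i.e., U[−1, 1].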
c. Consider
P{X(t) ≥ 0, for all t ≥ 0} = P{At + B ≥ 0, for all t ≥ 0}
= P{A ≥ 0 and B ≥ 0} = 1/4.

3. (20 points) Digital modulation using PSK.


a. A sample function is shown in Figure 3.

The following code is used for parts (a) and (c).


clear all;
clf;



[Figure 2: First-order pdf of X(t): a trapezoid supported on [−(t + 1), t + 1], flat at height ½ min{1, 1/t} between −|t − 1| and |t − 1|.]

[Figure 3: Digital modulation using PSK. Top panel (part a): a sample function of X(t) for 0 ≤ t ≤ 5. Bottom panel (part c): a sample function of Y(t) for 0 ≤ t ≤ 5.]



% This code will generate 5-second sample runs of X(t) and
% Y(t) and print out the corresponding plots.

% First, select 5 random phases (either pi/2 or -pi/2 with equal probability).
% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
theta_n = (pi/2)*(2*(rand(5,1)>0.5)-1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Replicate theta_n so that each random phase covers a 100-step time range.
theta_t = ones(1,100) * theta_n(1);

for i=2:5
theta_t = [theta_t ones(1,100)*theta_n(i)];
end

% Generate the time steps corresponding to theta_t.


t=0:.01:4.99;

% Generate the values of X(t).


% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
X_t = cos(4*pi.*t + theta_t);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

subplot(2,1,1);
plot(t, X_t);
xlabel('t');
ylabel('X(t)');
title('Part a');

% Generate the values of Y(t). (Don't forget to generate "psi"!)


% WRITE MATLAB CODE HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Y_t = cos(4*pi.*t + theta_t + rand(1)*2*pi);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

subplot(2,1,2);
plot(t, Y_t);
xlabel('t');
ylabel('Y(t)');
title('Part c');
print -depsc2 pskModulationFigure;



b. The first-order pmf of X(t) is
$$
\begin{aligned}
p_{X(t)}(x) &= P\Big\{\cos\Big(\frac{4\pi}{T}t + \Theta_{\lfloor t/T \rfloor}\Big) = x\Big\}\\
&= P\Big\{\cos\Big(\frac{4\pi}{T}t + \Theta_{\lfloor t/T \rfloor}\Big) = x \,\Big|\, \Theta_{\lfloor t/T \rfloor} = +\tfrac{\pi}{2}\Big\}\, P\Big\{\Theta_{\lfloor t/T \rfloor} = +\tfrac{\pi}{2}\Big\}\\
&\qquad + P\Big\{\cos\Big(\frac{4\pi}{T}t + \Theta_{\lfloor t/T \rfloor}\Big) = x \,\Big|\, \Theta_{\lfloor t/T \rfloor} = -\tfrac{\pi}{2}\Big\}\, P\Big\{\Theta_{\lfloor t/T \rfloor} = -\tfrac{\pi}{2}\Big\}\\
&= \tfrac{1}{2}\, P\Big\{\cos\Big(\frac{4\pi}{T}t + \tfrac{\pi}{2}\Big) = x\Big\} + \tfrac{1}{2}\, P\Big\{\cos\Big(\frac{4\pi}{T}t - \tfrac{\pi}{2}\Big) = x\Big\}\\
&= \tfrac{1}{2}\, P\Big\{-\sin\Big(\frac{4\pi}{T}t\Big) = x\Big\} + \tfrac{1}{2}\, P\Big\{\sin\Big(\frac{4\pi}{T}t\Big) = x\Big\}\\
&= \begin{cases}
1, & x = 0,\ t = \frac{nT}{4}\\[1mm]
\tfrac{1}{2}, & x = +\sin\big(\frac{4\pi}{T}t\big),\ t \ne \frac{nT}{4}\\[1mm]
\tfrac{1}{2}, & x = -\sin\big(\frac{4\pi}{T}t\big),\ t \ne \frac{nT}{4}\\[1mm]
0, & \text{otherwise.}
\end{cases}
\end{aligned}
$$

c. A sample function of Y (t) is shown in Figure 3.


d. Since Ψ ∼ U[0, 2π], its distribution is unchanged (modulo 2π) when we add either +π/2 or −π/2. Thus the first-order pdf of Y(t) is the pdf of the random variable cos(4πt/T + Ψ), which is the same as the first-order pdf of the random phase process in the lecture notes:
$$
f_{Y(t)}(y) = \frac{1}{\pi\sqrt{1 - y^2}} \ \text{ if } |y| < 1, \qquad f_{Y(t)}(y) = 0 \ \text{ if } |y| \ge 1 .
$$
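The arcsine form of this pdf is easy to check numerically. The sketch below assumes T = 1 and an arbitrary fixed sampling time t = 0.3, draws many independent phases Ψ, and overlays the histogram of Y(t) on 1/(π√(1 − y²)):

% Histogram check of the first-order pdf of Y(t).
% Illustrative assumptions: T = 1, fixed time t0 = 0.3.
T = 1; t0 = 0.3;
psi = 2*pi*rand(1e6, 1);                              % phases Psi ~ U[0, 2*pi]
y = cos(4*pi*t0/T + psi);                             % samples of Y(t0)
histogram(y, 'Normalization', 'pdf');                 % empirical pdf
hold on;
yy = linspace(-0.999, 0.999, 400);
plot(yy, 1./(pi*sqrt(1 - yy.^2)), 'LineWidth', 1.5);  % arcsine density
hold off; xlabel('y'); ylabel('f_{Y(t)}(y)');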

4. (10 points) Absolute value random walk.


a. This is a straightforward calculation and we can use results from lecture notes. If k ≥ 0
then
P{Yn = k} = P{Xn = +k or Xn = −k} .
If k > 0 then P{Yn = k} = 2P{Xn = k}, while P{Yn = 0} = P{Xn = 0}. Thus
$$
P\{Y_n = k\} =
\begin{cases}
\dbinom{n}{(n+k)/2}\left(\tfrac{1}{2}\right)^{n-1}, & k > 0,\ n - k \text{ even},\ n - k \ge 0\\[2mm]
\dbinom{n}{n/2}\left(\tfrac{1}{2}\right)^{n}, & k = 0,\ n \text{ even},\ n \ge 0\\[2mm]
0, & \text{otherwise.}
\end{cases}
$$

b. If Y20 = |X20| = 0, then there are only two sample paths with max1≤i<20 |Xi| = 10. These two paths are shown in Figure 4. Since the total number of sample paths is $\binom{20}{10}$ and all paths are equally likely,
$$
P\Big\{\max_{1 \le i < 20} Y_i = 10 \,\Big|\, Y_{20} = 0\Big\} = \frac{2}{\binom{20}{10}} = \frac{2}{184756} = \frac{1}{92378}\,.
$$
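Since conditioning on {Y20 = 0} leaves only the $\binom{20}{10}$ equally likely paths that return to 0 at n = 20, the count of two favorable paths can also be confirmed by brute force. A short MATLAB sketch:

% Brute-force check of part (b): enumerate every 20-step +/-1 path with X_20 = 0
% and count those whose maximum |X_i| reaches 10.
upSteps = nchoosek(1:20, 10);          % positions of the ten +1 steps
total   = size(upSteps, 1);            % = nchoosek(20,10) = 184756 paths
count   = 0;
for r = 1:total
    z = -ones(1, 20);
    z(upSteps(r, :)) = 1;              % steps of one particular path
    x = cumsum(z);                     % the walk X_1, ..., X_20
    if max(abs(x)) == 10
        count = count + 1;
    end
end
fprintf('P = %d/%d = %.3g\n', count, total, count/total);   % expect 2/184756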

5. (10 points) Random walk with random start.


a. We must show that for every sequence of indices i1 < i2 < · · · < in, the increments Xi1, Xi2 − Xi1, . . . , Xin − Xin−1 are independent. This is true by the definition of the {Xi} random process; each Xij − Xij−1 is the sum of a different set of Zi's, and the Zi's are i.i.d. and independent of X0, which appears only in the first increment.

[Figure 4: Sample paths for problem 2: the two sample paths that reach ±10 at n = 10 and return to 0 at n = 20.]
b. Starting at an even number (0 or ±2) can be ruled out, since there is no way that the process
could then end up at X11 = 2. Using Bayes rule for the remaining possibilities, we get
$$
\begin{aligned}
P(X_0 = -1 \mid X_{11} = 2) &= \frac{P(X_{11} = 2 \mid X_0 = -1)\,P(X_0 = -1)}{P(X_{11} = 2)}\\[1mm]
&= \frac{\tfrac{1}{5}\binom{11}{7}\left(\tfrac{1}{2}\right)^{11}}
        {\tfrac{1}{5}\binom{11}{7}\left(\tfrac{1}{2}\right)^{11} + \tfrac{1}{5}\binom{11}{6}\left(\tfrac{1}{2}\right)^{11}}\\[1mm]
&= \frac{\binom{11}{7}}{\binom{11}{7} + \binom{11}{6}} = \frac{330}{330 + 462} = \frac{5}{12}\,.
\end{aligned}
$$
Similarly, P(X0 = 1 | X11 = 2) = 7/12. To summarize,
$$
P(X_0 = x \mid X_{11} = 2) =
\begin{cases}
\tfrac{5}{12}, & x = -1\\[1mm]
\tfrac{7}{12}, & x = +1\\[1mm]
0, & \text{otherwise.}
\end{cases}
$$
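The posterior can also be computed numerically. A short sketch, assuming (as above) a uniform prior on the five possible starting points {−2, −1, 0, 1, 2} and i.i.d. fair ±1 steps:

% Posterior of the starting point X_0 given X_11 = 2.
% Assumptions: uniform prior on {-2,...,2}, steps +/-1 with probability 1/2 each.
x0    = -2:2;
prior = ones(1, 5)/5;
like  = zeros(1, 5);
for i = 1:5
    u = (11 + (2 - x0(i)))/2;                    % number of +1 steps needed
    if u == round(u) && u >= 0 && u <= 11
        like(i) = nchoosek(11, u)*0.5^11;        % P(X_11 = 2 | X_0 = x0(i))
    end
end
post = prior.*like/sum(prior.*like)              % expect [0 5/12 0 7/12 0]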

6. (20 points) Discrete-time Wiener process.


a. Yes, {Xn } is an independent-increment process. The increments
Xn1 , Xn2 − Xn1 , . . . , Xnk − Xnk−1
are sums of disjoint sets of i.i.d. Zi ’s.
b. Yes, {Xn } is a Gaussian process. Every vector (Xi1 , Xi2 , . . . , Xim ) is obtained by a linear
transformation of i.i.d. N (0, 1) random variables and therefore all m-th order distributions
of {Xn } are jointly Gaussian. (It is not sufficient to show that each random variable Xn is
Gaussian.)



c. Mean of {Xn}:
$$
E(X_n) = E\Big(\sum_{i=1}^{n} Z_i\Big) = \sum_{i=1}^{n} E(Z_i) = 0 .
$$

Autocorrelation function of {Xn}:
$$
\begin{aligned}
R_X(n_1, n_2) &= E(X_{n_1} X_{n_2})
= E\Big(\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} Z_i Z_j\Big)
= \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} E(Z_i Z_j)\\
&= \sum_{i=1}^{\min(n_1, n_2)} E(Z_i^2) && \text{($Z_i$'s are uncorrelated)}\\
&= \min(n_1, n_2). && \text{($E(Z_i^2) = 1$)}
\end{aligned}
$$

d. It follows from part (c) that Xn is Gaussian with mean zero and variance
Var(Xn ) = E(Xn2 ) − E2 (Xn ) = RX (n, n) − 0 = n .
Therefore Xn ∼ N (0, n).
e. Since {Xn} is a zero-mean Gaussian random process,
$$
\begin{bmatrix} X_3\\ X_5\\ X_8 \end{bmatrix} \sim
\mathcal{N}\!\left(
\begin{bmatrix} E[X_3]\\ E[X_5]\\ E[X_8] \end{bmatrix},
\begin{bmatrix}
R_X(3,3) & R_X(3,5) & R_X(3,8)\\
R_X(5,3) & R_X(5,5) & R_X(5,8)\\
R_X(8,3) & R_X(8,5) & R_X(8,8)
\end{bmatrix}
\right).
$$
Using the mean and autocorrelation function from part (c), we get
$$
\begin{bmatrix} X_3\\ X_5\\ X_8 \end{bmatrix} \sim
\mathcal{N}\!\left(
\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix},
\begin{bmatrix} 3 & 3 & 3\\ 3 & 5 & 5\\ 3 & 5 & 8 \end{bmatrix}
\right).
$$
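A quick simulation sketch that estimates the covariance of (X3, X5, X8) directly from sample paths of the process:

% Numerical check of part (e): estimate Cov([X_3 X_5 X_8]) by simulation.
trials = 1e5;
Z = randn(8, trials);              % Z_1, ..., Z_8 for each trial (i.i.d. N(0,1))
X = cumsum(Z, 1);                  % X_n = Z_1 + ... + Z_n
C = cov(X([3 5 8], :)')            % should be close to [3 3 3; 3 5 5; 3 5 8]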

f. Since Xn is an independent increment process,


E(X20 | X1, . . . , X10 ) = E(X20 − X10 + X10 | X1, . . . , X10 )
= E(X20 − X10 | X1, . . . , X10 ) + E(X10 | X1 , . . . , X10 )
= E(X20 − X10 ) + X10 = E(X10 ) + X10 = 0 + X10 = X10 .
Note that the conditional expectation is a random variable.



g. The minimum MSE estimate of X4 given X1 = 4, X2 = 2, and 0 ≤ X3 ≤ 4 is
X̂ = E(X4 | X1 = 4, X2 = 2, 0 ≤ X3 ≤ 4)
= E(X4 − X3 + X3 − X2 + X2 | X1 = 4, X2 = 2, 0 ≤ X3 ≤ 4)
= E(X4 − X3 | X1 = 4, X2 = 2, 0 ≤ X3 ≤ 4) +
E(X3 − X2 | X1 = 4, X2 = 2, 0 ≤ X3 ≤ 4) +
E(X2 | X1 = 4, X2 = 2, 0 ≤ X3 ≤ 4)
= E(X4 − X3 ) + E(X3 − X2 | −2 ≤ X3 − X2 ≤ 2) + E(X2 | X2 = 2)
= E(Z4 ) + E(Z3 | −2 ≤ Z3 ≤ 2) + E(X2 | X2 = 2)
= 0+0+2 = 2.

7. (10 points) Poisson process. We want to find fTk|N(t)(τ|n), the conditional pdf of the arrival time Tk of the kth event given that n events have occurred by time t. We have
$$
f_{T_k \mid N(t)}(\tau \mid n)
= \frac{f_{T_k, N(t)}(\tau, n)}{f_{N(t)}(n)}
= \frac{f_{T_k,\, N(t) - N(\tau)}(\tau,\, n - k)}{f_{N(t)}(n)}
= \frac{f_{T_k}(\tau)\, f_{N(t) - N(\tau)}(n - k)}{f_{N(t)}(n)}\,.
$$

We evaluate each term as
$$
f_{N(t)}(n) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}\,, \qquad
f_{N(t)-N(\tau)}(n-k) = \frac{(\lambda(t-\tau))^{n-k} e^{-\lambda(t-\tau)}}{(n-k)!}\,, \qquad
f_{T_k}(\tau) = \frac{\lambda^k \tau^{k-1} e^{-\lambda\tau}}{(k-1)!}\,,
$$
where we have used the fact that the (unconditional) distribution of Tk is the same as that of the time between the ith and the (i + k)th arrivals, by memorylessness; the distribution of such interarrival sums is given in the lecture notes. Combining the three terms, the λ and exponential factors cancel, and we obtain
$$
f_{T_k \mid N(t)}(\tau \mid n) = \binom{n}{k}\, \frac{k\, \tau^{k-1} (t-\tau)^{n-k}}{t^n}\,, \qquad 0 \le \tau \le t .
$$
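This conditional density can be checked by simulation. The sketch below assumes, purely for illustration, λ = 2, t = 5, k = 2, and n = 6; it keeps only the runs with N(t) = n and compares the histogram of Tk against the formula (exponential interarrivals generated in base MATLAB as −log(rand)/λ):

% Simulation check of f_{T_k | N(t) = n}.
% Illustrative assumptions: lambda = 2, t = 5, k = 2, n = 6.
lambda = 2; t = 5; k = 2; n = 6; trials = 2e5;
Tk = [];
for r = 1:trials
    arrivals = cumsum(-log(rand(40, 1))/lambda);      % plenty of arrival times
    if sum(arrivals <= t) == n                        % keep runs with N(t) = n
        Tk(end+1) = arrivals(k);                      %#ok<AGROW> kth arrival time
    end
end
histogram(Tk, 'Normalization', 'pdf'); hold on;
tau = linspace(0, t, 200);
plot(tau, nchoosek(n,k)*k*tau.^(k-1).*(t-tau).^(n-k)/t^n, 'LineWidth', 1.5);
hold off; xlabel('\tau'); ylabel('f_{T_k | N(t)}(\tau | n)');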

8. (10 points) Poisson process probabilities.


a. We use the fact that the number of events in an interval [a, b] has a Poisson(λ(b − a)) distribution, and that the numbers of events in non-overlapping intervals are independent. Therefore,
$$
\begin{aligned}
P\{N(1) - N(0) = 1,\ N(2) - N(1) = 1,\ N(3) - N(2) = 1\}
&= P\{N(1) - N(0) = 1\}\, P\{N(2) - N(1) = 1\}\, P\{N(3) - N(2) = 1\}\\
&= P\{N(1) - N(0) = 1\}^3 = \lambda^3 e^{-3\lambda}.
\end{aligned}
$$



b. We need to decompose the event into events involving non-overlapping intervals:
$$
\begin{aligned}
&P\{N(2) - N(0) = 2,\ N(3) - N(1) = 2\}\\
&\quad = P\{N(2) - N(0) = 2,\ N(3) - N(1) = 2,\ N(2) - N(1) = 0\}\\
&\qquad + P\{N(2) - N(0) = 2,\ N(3) - N(1) = 2,\ N(2) - N(1) = 1\}\\
&\qquad + P\{N(2) - N(0) = 2,\ N(3) - N(1) = 2,\ N(2) - N(1) = 2\}\\
&\quad = P\{N(1) - N(0) = 2,\ N(2) - N(1) = 0,\ N(3) - N(2) = 2\}\\
&\qquad + P\{N(1) - N(0) = 1,\ N(2) - N(1) = 1,\ N(3) - N(2) = 1\}\\
&\qquad + P\{N(1) - N(0) = 0,\ N(2) - N(1) = 2,\ N(3) - N(2) = 0\}\\
&\quad = e^{-3\lambda}\left(\frac{\lambda^4}{4} + \lambda^3 + \frac{\lambda^2}{2}\right),
\end{aligned}
$$
which is indeed larger than the result of part (a).


c. The probability in question is
$$
P\{N(2) - N(1) = 2 \mid N(2) - N(0) = 2,\ N(3) - N(1) = 2\}
= \frac{P\{N(2) - N(1) = 2,\ N(2) - N(0) = 2,\ N(3) - N(1) = 2\}}{P\{N(2) - N(0) = 2,\ N(3) - N(1) = 2\}}\,.
$$
The denominator is the result of part (b). The numerator is one of the terms that appeared in part (b). Therefore
$$
P\{N(2) - N(1) = 2 \mid N(2) - N(0) = 2,\ N(3) - N(1) = 2\}
= \frac{(\lambda^2/2)\, e^{-3\lambda}}{(\lambda^4/4 + \lambda^3 + \lambda^2/2)\, e^{-3\lambda}}
= \frac{2}{\lambda^2 + 4\lambda + 2}\,.
$$
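All three probabilities are easy to check by Monte Carlo. The sketch below picks λ = 1.5 purely for illustration and uses poissrnd from the Statistics Toolbox:

% Monte Carlo check of parts (a)-(c); lambda = 1.5 is an arbitrary choice.
lambda = 1.5; trials = 1e6;
n1 = poissrnd(lambda, trials, 1);   % N(1) - N(0)
n2 = poissrnd(lambda, trials, 1);   % N(2) - N(1)
n3 = poissrnd(lambda, trials, 1);   % N(3) - N(2)
pa = mean(n1 == 1 & n2 == 1 & n3 == 1);         % part (a)
B  = (n1 + n2 == 2) & (n2 + n3 == 2);
pb = mean(B);                                   % part (b)
pc = mean(n2 == 2 & B)/pb;                      % part (c)
[pa, lambda^3*exp(-3*lambda)]
[pb, exp(-3*lambda)*(lambda^4/4 + lambda^3 + lambda^2/2)]
[pc, 2/(lambda^2 + 4*lambda + 2)]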
9. (20 points) Circles in a plane. We need to consider only the centers that are within the circle
of radius 1 around the origin. Let M be the number of such centers.
• Consider first the case that M = 1. As given by the hint, we can assume that the center is
uniformly distributed within the unit circle. Let the distance from the origin be D. The cdf
and pdf of D are
$$
F_D(d) = \frac{\pi d^2}{\pi} = d^2, \qquad f_D(d) = 2d,
$$
for d ∈ [0, 1]. Let the radius of the circle at that center be R, with uniform pdf fR(r) = 1 for r ∈ [0, 1]. Note that R and D are independent. Then
$$
P\{\text{this circle encompasses the origin}\} = P\{D \le R\}
= \int_0^1 \!\! \int_0^r f_{D,R}(d, r)\, dd\, dr
= \int_0^1 \!\! \int_0^r 2d\, dd\, dr
= \frac{1}{3}\,.
$$
• Now consider the case that M = m. Then the number of circles that encompass the origin
is distributed according to Binom(m, 1/3). This is because each circle has probability 1/3
to encompass the origin (as computed before), and all circles are independent of each other.
In other words,
N | {M = m} ∼ Binom(m, 1/3), i.e.,
$$
P\{N = n \mid M = m\} = \binom{m}{n} \frac{2^{m-n}}{3^m}
$$
for n ≤ m, and zero otherwise.
• Now we need the distribution of M, the number of centers in the unit circle. Since the
centers follow a spatial Poisson process of intensity λ, and the area of the unit circle is π,
the distribution of M is Poisson with mean λπ.
$$
P\{M = m\} = \frac{(\pi\lambda)^m}{m!}\, e^{-\pi\lambda}.
$$
• Putting the pieces together, we have
$$
\begin{aligned}
P\{N = n\} &= \sum_{m=n}^{\infty} P\{M = m\}\, P\{N = n \mid M = m\}\\
&= \sum_{m=n}^{\infty} \frac{(\pi\lambda)^m}{m!} e^{-\pi\lambda} \binom{m}{n} \frac{2^{m-n}}{3^m}\\
&= \frac{e^{-\pi\lambda}}{n!} \sum_{m=n}^{\infty} \frac{(\pi\lambda)^m\, 2^{m-n}}{3^m\, (m-n)!}\\
&= \frac{(\pi\lambda)^n e^{-\pi\lambda}}{n!\, 3^n} \sum_{m=n}^{\infty} \frac{(\pi\lambda)^{m-n}\, 2^{m-n}}{3^{m-n}\, (m-n)!}\\
&= \frac{(\pi\lambda)^n e^{-\pi\lambda}}{n!\, 3^n}\, e^{2\pi\lambda/3}\\
&= \frac{(\pi\lambda/3)^n\, e^{-\pi\lambda/3}}{n!}\,.
\end{aligned}
$$
This is a Poisson distribution with mean πλ/3.
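A Monte Carlo sketch of the whole construction (λ = 2 is an arbitrary choice; poissrnd is from the Statistics Toolbox) confirms that the number of circles covering the origin behaves like a Poisson(πλ/3) random variable:

% Monte Carlo check of the circle-coverage count N.
% Illustrative assumption: lambda = 2.
lambda = 2; trials = 1e5;
N = zeros(trials, 1);
for i = 1:trials
    m = poissrnd(lambda*pi);        % number of centers within distance 1 of the origin
    d = sqrt(rand(m, 1));           % their distances from the origin (pdf 2d on [0,1])
    r = rand(m, 1);                 % their radii, uniform on [0,1]
    N(i) = sum(r >= d);             % circles that cover the origin
end
[mean(N), pi*lambda/3]              % sample mean vs. theoretical mean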



Extra Problems Solutions
1. Polls. Let U1, U2, . . . , Un be i.i.d. with
$$
U_i =
\begin{cases}
+1 & \text{if person $i$ votes for candidate A}\\
-1 & \text{otherwise.}
\end{cases}
$$
Thus pUi(1) = 0.5005, and the difference in the number of votes is Xn = U1 + · · · + Un. By the Central Limit Theorem, for large n we can approximate the distribution of Xn by a Gaussian:
$$
X_n \sim \mathcal{N}\big(n\, E(U_1),\ n\,\sigma_{U_1}^2\big),
$$
where
$$
E(U_1) = 0.5005 - 0.4995 = 0.001, \qquad
\sigma_{U_1}^2 = E(U_1^2) - (0.001)^2 = 1 - 10^{-6} = 0.999999 .
$$
The probability that A wins after n votes are counted is P(Xn > 0). Thus
$$
P(X_n > 0) \approx 1 - Q\!\left(\frac{0.001\, n}{\sqrt{0.999999\, n}}\right) = 0.99 ,
$$
which yields n ≥ 5475600.

2. Vector convergence. By the CLT, Zn approaches a Gaussian distribution. It suffices to find the
mean and covariance matrix.
$$
\begin{aligned}
\sigma_{11} &= \operatorname{Var}(X_{1i}) = E(X_1^2) - (E X_1)^2\\
&= \int_{x_1=0}^{1} \int_{x_2=0}^{1-x_1} 2 x_1^2\, dx_2\, dx_1
 - \left(\int_{x_1=0}^{1} \int_{x_2=0}^{1-x_1} 2 x_1\, dx_2\, dx_1\right)^{2}\\
&= 2 \int_{0}^{1} \big(x_1^2 - x_1^3\big)\, dx_1 - 4 \left(\int_{0}^{1} \big(x_1 - x_1^2\big)\, dx_1\right)^{2}\\
&= \frac{1}{18}\,.
\end{aligned}
$$



By symmetry, σ11 = σ22. For the off-diagonal elements,
$$
\begin{aligned}
\sigma_{12} &= \lim_{n\to\infty} \frac{1}{n}\, E\left(\sum_{i=1}^{n} (X_{1i} - E X_{1i}) \sum_{j=1}^{n} (X_{2j} - E X_{2j})\right)\\
&= \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n}\sum_{j=1}^{n} \big(E(X_{1i} X_{2j}) - E(X_{1i})\, E(X_{2j})\big)\\
&= \lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} \big(E(X_{1k} X_{2k}) - E(X_{1k})\, E(X_{2k})\big) && \text{(for $i \ne j$ the samples are independent, so those terms vanish)}\\
&= E(X_1 X_2) - E(X_1)\, E(X_2)\\
&= \int_{x_1=0}^{1} \int_{x_2=0}^{1-x_1} 2 x_1 x_2\, dx_2\, dx_1 - \frac{1}{9}\\
&= \int_{0}^{1} \big(x_1 - 2x_1^2 + x_1^3\big)\, dx_1 - \frac{1}{9}\\
&= -\frac{1}{36}\,.
\end{aligned}
$$
Thus in the limit as n → ∞,
$$
Z_n \sim \mathcal{N}\!\left(0,\ \begin{bmatrix} \tfrac{1}{18} & -\tfrac{1}{36}\\[1mm] -\tfrac{1}{36} & \tfrac{1}{18} \end{bmatrix}\right).
$$

3. Random binary waveform.


a. See Figure 5 for a sketch of a sample function of X(t).

[Figure 5: Sample function of the process X(t): a ±1 waveform, constant on each interval of length T.]

b. The first-order pmf is
$$
P_{X(t)}(x) = P(X(t) = x)
= P\Big\{\sum_{k=0}^{\infty} A_k\, g(t - kT) = x\Big\}
= P\big(A_{\lfloor t/T \rfloor} = x\big) = P(A_0 = x)
= \begin{cases}
\tfrac{1}{2}, & x = \pm 1\\
0, & \text{otherwise.}
\end{cases}
$$
Note that X(t1) and X(t2) are dependent only if t1 and t2 fall within the same length-T interval (indexed by k). Thus the second-order pmf is
$$
\begin{aligned}
P_{X(t_1) X(t_2)}(x_1, x_2) &= P(X(t_1) = x_1,\ X(t_2) = x_2)\\
&= P\Big\{\sum_{k=0}^{\infty} A_k\, g(t_1 - kT) = x_1,\ \sum_{k=0}^{\infty} A_k\, g(t_2 - kT) = x_2\Big\}\\
&= P\big(A_{\lfloor t_1/T \rfloor} = x_1,\ A_{\lfloor t_2/T \rfloor} = x_2\big)\\
&= \begin{cases}
P(A_0 = x_1,\ A_0 = x_2), & \lfloor t_1/T \rfloor = \lfloor t_2/T \rfloor\\
P(A_0 = x_1,\ A_1 = x_2), & \text{otherwise}
\end{cases}\\
&= \begin{cases}
\tfrac{1}{2}, & \lfloor t_1/T \rfloor = \lfloor t_2/T \rfloor \text{ and } (x_1, x_2) = (+1, +1) \text{ or } (-1, -1)\\[1mm]
\tfrac{1}{4}, & \lfloor t_1/T \rfloor \ne \lfloor t_2/T \rfloor \text{ and } (x_1, x_2) \in \{\pm 1\}^2\\[1mm]
0, & \text{otherwise.}
\end{cases}
\end{aligned}
$$

4. Biased random walk.


a.
$$
E[X_n] = E\Big[\sum_{i=1}^{n} Z_i\Big] = \sum_{i=1}^{n} E[Z_i] = n\big(p - (1-p)\big) = n(2p - 1).
$$

b. Following the same steps as in the unbiased case in the lecture notes, we have
$$
P\{X_n = k\} =
\begin{cases}
\dbinom{n}{(n+k)/2}\, p^{(n+k)/2} (1-p)^{(n-k)/2}, & \text{if } n + k \text{ is even and } |k| \le n\\[2mm]
0, & \text{otherwise.}
\end{cases}
$$

c. Since the Zi are i.i.d., we have P{X2n = 2k | Xn = k} = P{Xn = k}, which was computed
in part (b).
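A quick numerical check of the pmf in part (b); the values p = 0.6, n = 10, and the test point k = 4 are arbitrary choices:

% Compare the simulated pmf of X_n with the formula in part (b).
% Arbitrary illustrative values: p = 0.6, n = 10, k = 4.
p = 0.6; n = 10; trials = 1e5;
Z  = 2*(rand(n, trials) < p) - 1;       % steps: +1 w.p. p, -1 w.p. 1-p
Xn = sum(Z, 1);                         % biased random walk at time n
k  = 4;                                 % value to check (n + k must be even)
[mean(Xn == k), nchoosek(n, (n+k)/2)*p^((n+k)/2)*(1-p)^((n-k)/2)]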

5. Modified random walk.


If n = 0,
$$
P\{|X_n| = n - 1\} = P\{X_0 = -1\} = 0.
$$
If n = 1,
$$
P\{|X_n| = n - 1\} = P\{X_1 = 0\} = P\{Z_1 = 0\} = \tfrac{1}{3} .
$$



If n > 1,
$$
\begin{aligned}
P\{|X_n| = n - 1\} &= P\{(n-1) \text{ steps were } +1 \text{ and one step was } 0\}\\
&\quad + P\{(n-1) \text{ steps were } -1 \text{ and one step was } 0\}\\
&= 2\, P\{(n-1) \text{ steps were } +1 \text{ and one step was } 0\}\\
&= 2 \binom{n}{1} \Big(\frac{1}{3}\Big)^{n-1} \frac{1}{3} && \text{(binomial)}\\
&= \frac{2n}{3^n}\,.
\end{aligned}
$$

6. Markov processes.
a. We are given that f (xn+1 |x1 , x2 , . . . , xn ) = f (xn+1 |xn ). From the chain rule, in general,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x1 , x2 ) · · · f (xn |x1 , x2 , . . . , xn−1 ) .
Thus, by the definition of Markovity,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x2 ) · · · f (xn |xn−1 ) . (1)
Similarly, applying the chain rule in reverse we get
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn ).
Next,
$$
f(x_i \mid x_{i+1}, \ldots, x_n)
= \frac{f(x_i, x_{i+1}, \ldots, x_n)}{f(x_{i+1}, \ldots, x_n)}
= \frac{f(x_i)\, f(x_{i+1} \mid x_i)}{f(x_{i+1})}
= f(x_i \mid x_{i+1})\,, \tag{2}
$$
where the second equality follows from (1). Therefore
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn )
= f (xn )f (xn−1 |xn )f (xn−2 |xn−1 ) · · · f (x1 |x2 ) ,
where the second line follows from (2).
b. First consider
$$
f(x_n \mid x_1, \ldots, x_k)
= \frac{f(x_1, \ldots, x_k, x_n)}{f(x_1, \ldots, x_k)}
= \frac{f(x_n)\, f(x_k \mid x_n)\, f(x_{k-1} \mid x_k, x_n) \cdots f(x_1 \mid x_2, \ldots, x_k, x_n)}
       {f(x_k)\, f(x_{k-1} \mid x_k) \cdots f(x_1 \mid x_2)}\,, \tag{3}
$$
where the denominator in the last expression follows from part (a). Next consider
f (xk−1 , xk , . . . , xn ) = f (xk , xn )f (xk−1 |xk , xn )f (xk+1 , xk+2 , · · · , xn−1 |xk−1 , xk , xn )
= f (xn )f (xn−1 |xn ) · · · f (xk−1 |xk ) ,
where the second line follows from (2). Integrating both sides over xk+1 , . . . , xn−1 (i.e., using
the law of total probability), we get
f (xk , xn )f (xk−1 |xk , xn ) = f (xk , xn )f (xk−1 |xk ) .



Finally, substituting into (3), we get
$$
f(x_n \mid x_1, \ldots, x_k)
= \frac{f(x_n)\, f(x_k \mid x_n)\, f(x_{k-1} \mid x_k) \cdots f(x_1 \mid x_2)}
       {f(x_k)\, f(x_{k-1} \mid x_k) \cdots f(x_1 \mid x_2)}
= \frac{f(x_n)\, f(x_k \mid x_n)}{f(x_k)} = f(x_n \mid x_k)\,.
$$
c. By the chain rule for conditional densities,
f (xn+1 , xn−1 |xn ) = f (xn+1 |xn )f (xn−1 |xn+1 , xn ) = f (xn+1 |xn )f (xn−1 |xn ) ,
where the second equality follows from (2).

7. Detection of Poisson process. First consider the problem with observation (N(τ), N(5)) for some τ ∈ [0, 5]. In order to use the MAP detection rule, we need to compute the conditional probability
$$
p_{\Lambda \mid N(\tau), N(5)}(\lambda \mid n_\tau, n_5)
= \frac{p_{N(\tau), N(5) \mid \Lambda}(n_\tau, n_5 \mid \lambda)\, p_{\Lambda}(\lambda)}{p_{N(\tau), N(5)}(n_\tau, n_5)}\,.
$$
The first factor in the numerator is
$$
\begin{aligned}
p_{N(\tau), N(5) \mid \Lambda}(n_\tau, n_5 \mid \lambda)
&= P\{N(\tau) - N(0) = n_\tau,\ N(5) - N(\tau) = n_5 - n_\tau \mid \Lambda = \lambda\}\\
&= \left(\frac{(\tau\lambda)^{n_\tau}}{n_\tau!}\, e^{-\tau\lambda}\right)
   \left(\frac{((5-\tau)\lambda)^{n_5 - n_\tau}}{(n_5 - n_\tau)!}\, e^{-(5-\tau)\lambda}\right)\\
&= \frac{\tau^{n_\tau} (5-\tau)^{n_5 - n_\tau}}{n_\tau!\, (n_5 - n_\tau)!}\, e^{-5\lambda} \lambda^{n_5}.
\end{aligned}
$$
Substituting back, the relevant ratio becomes
$$
\frac{p_{\Lambda \mid N(\tau), N(5)}(1 \mid n_\tau, n_5)}{p_{\Lambda \mid N(\tau), N(5)}(2 \mid n_\tau, n_5)}
= \frac{(1/3)\, e^{-5}\, 1^{n_5}}{(2/3)\, e^{-10}\, 2^{n_5}}
= \frac{e^{5}}{2^{n_5 + 1}}\,,
$$
and the optimal detection rule is therefore
$$
\hat{\Lambda}(n_\tau, n_5) =
\begin{cases}
1 & \text{if } \dfrac{e^{5}}{2^{n_5 + 1}} \ge 1\\[2mm]
2 & \text{otherwise}
\end{cases}
\;=\;
\begin{cases}
1 & \text{if } n_5 \le 6\\
2 & \text{if } n_5 \ge 7.
\end{cases}
$$

Note that this detection rule only depends on the last observation N(5) = n5 , not on the
earlier observation N(τ ) = nτ . In other words, the last observation is a sufficient statistic for
the detection problem. Since our argument holds for any τ ∈ [0, 5], the same must be true if
we observe N(τ ) for all τ ∈ [0, 5]. The rule above is thus the answer to both part (a) and part
(b).
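The rule is easy to exercise numerically. The sketch below assumes the priors P(Λ = 1) = 1/3 and P(Λ = 2) = 2/3 that appear in the ratio above, and estimates the probability of error of the MAP rule (poissrnd is from the Statistics Toolbox):

% Monte Carlo estimate of the MAP rule's probability of error.
% Priors 1/3 and 2/3 are those used in the ratio above.
trials = 1e6;
lam = 1 + (rand(trials, 1) < 2/3);     % Lambda = 1 w.p. 1/3, Lambda = 2 w.p. 2/3
n5  = poissrnd(5*lam);                 % N(5) | Lambda ~ Poisson(5*Lambda)
hat = 1 + (n5 >= 7);                   % MAP decision derived above
Pe  = mean(hat ~= lam)                 % estimated probability of error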

8. Bus passengers.



a. Since the interarrival times T1, T2, . . . are independent exponentially distributed random variables with parameter λ,
$$
D_k = \sum_{i=k+1}^{\lceil k/c \rceil c} T_i ,
$$
so that
$$
E(D_k) = \frac{1}{\lambda}\left(\Big\lceil \frac{k}{c} \Big\rceil c - k\right), \qquad
\operatorname{Var}(D_k) = \frac{1}{\lambda^2}\left(\Big\lceil \frac{k}{c} \Big\rceil c - k\right).
$$

b. Y(t) is not an independent-increment process. As a counterexample, we show that P[Y(t2) − Y(t1) = 0 | Y(t1) = 0] ≠ P[Y(t2) − Y(t1) = 0 | Y(t1) = 1]. Consider
$$
\begin{aligned}
&P[Y(t_2) - Y(t_1) = 0 \mid Y(t_1) = 0]\\
&\quad = \frac{P[X(t_2) = 0,\ X(t_1) = 0] + P[X(t_2) = 1,\ X(t_1) = 0] + P[X(t_2) = 1,\ X(t_1) = 1]}
              {P[X(t_1) = 0] + P[X(t_1) = 1]}\\
&\quad = \frac{e^{-\lambda t_1}\, e^{-\lambda(t_2 - t_1)}\,\big(1 + \lambda(t_2 - t_1) + \lambda t_1\big)}
              {e^{-\lambda t_1}\,(1 + \lambda t_1)}\\
&\quad = \frac{1 + \lambda t_2}{1 + \lambda t_1}\, e^{-\lambda(t_2 - t_1)}
       = \left(1 + \frac{\lambda(t_2 - t_1)}{1 + \lambda t_1}\right) e^{-\lambda(t_2 - t_1)},
\end{aligned}
$$
and
$$
\begin{aligned}
&P[Y(t_2) - Y(t_1) = 0 \mid Y(t_1) = 1]\\
&\quad = \frac{P[X(t_2) = 2,\ X(t_1) = 2] + P[X(t_2) = 3,\ X(t_1) = 2] + P[X(t_2) = 3,\ X(t_1) = 3]}
              {P[X(t_1) = 2] + P[X(t_1) = 3]}\\
&\quad = \frac{e^{-\lambda t_1}\,\frac{(\lambda t_1)^2}{2}\, e^{-\lambda(t_2 - t_1)}\,\big(1 + \lambda(t_2 - t_1) + \frac{\lambda t_1}{3}\big)}
              {e^{-\lambda t_1}\,\frac{(\lambda t_1)^2}{2}\,\big(1 + \frac{\lambda t_1}{3}\big)}\\
&\quad = \frac{3 + 3\lambda t_2 - 2\lambda t_1}{3 + \lambda t_1}\, e^{-\lambda(t_2 - t_1)}
       = \left(1 + \frac{3\lambda(t_2 - t_1)}{3 + \lambda t_1}\right) e^{-\lambda(t_2 - t_1)} .
\end{aligned}
$$
The two expressions are not the same, which implies that Y (t) is not an independent incre-
ment process.
