hw6 - Sol 2011
1. (10 points) Vector CLT. The key point to this problem is to realize that we are asked to find
the distribution of the random vector Yn = [X1n X2n]^T as n → ∞. First note that

E(X1n) = E( (1/√n) Σ_{j=1}^{n} Zj cos Θj )
       = (1/√n) Σ_{j=1}^{n} E(Zj cos Θj)        (by linearity of expectation)
       = (1/√n) Σ_{j=1}^{n} E(Zj) E(cos Θj)     (by independence of Zj and Θj)
If j ≠ k then
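Although the distributions of Zj and Θj are not reproduced in this excerpt, the limiting joint Gaussianity of Yn is easy to visualize by simulation. The following is a minimal sketch, assuming for illustration that the Zj are i.i.d. N(0, 1) and the Θj are i.i.d. Uniform[0, 2π), all independent:

% Monte Carlo sketch of the vector CLT for Yn = [X1n X2n]^T.
% The distributions of Zj and Thetaj are illustrative assumptions only.
n = 1000;                        % terms per normalized sum
m = 2000;                        % Monte Carlo trials
Z     = randn(n, m);             % assumed: Zj i.i.d. N(0,1)
Theta = 2*pi*rand(n, m);         % assumed: Thetaj i.i.d. Uniform[0,2*pi)
X1n = sum(Z.*cos(Theta), 1)/sqrt(n);
X2n = sum(Z.*sin(Theta), 1)/sqrt(n);
plot(X1n, X2n, '.');             % scatter should look jointly Gaussian
cov([X1n' X2n'])                 % ~ (1/2)*eye(2) under these assumptions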
b. The first-order pdf is the pdf of X(t) as a function of t. Since At and B are independent, the pdf of X(t) is the convolution of the U[−t, t] and U[−1, 1] densities. The first-order pdf is plotted in the following figure.
c. Consider

P{X(t) ≥ 0 for all t ≥ 0} = P{At + B ≥ 0 for all t ≥ 0} = P{A ≥ 0 and B ≥ 0} = 1/4 .
[Figure: first-order pdf of X(t) for part (b) — a symmetric trapezoid supported on [−(t + 1), t + 1], with corners at ±|t − 1| and peak height (1/2) min{1, 1/t}.]
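The convolution in part (b) can also be checked numerically. This is a minimal sketch, assuming A and B are independent U[−1, 1] (so that At ∼ U[−t, t]) and fixing an arbitrary example time t = 2:

% Numerical convolution of the U[-t,t] and U[-1,1] densities at a fixed t.
t  = 2;                              % example evaluation time (assumed)
dx = 1e-3;
f1 = ones(1, round(2*t/dx)+1)/(2*t); % pdf of A*t on [-t, t]
f2 = ones(1, round(2/dx)+1)/2;       % pdf of B on [-1, 1]
f  = conv(f1, f2)*dx;                % pdf of X(t) = A*t + B
x  = linspace(-(t+1), t+1, length(f));
plot(x, f);                          % trapezoid of height (1/2)*min(1,1/t)
xlabel('x'); ylabel('f_{X(t)}(x)');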
[Figures: "Problem 1: part a" — sample path of X(t) for t ∈ [0, 5]; "Problem 1: part c" — sample path of Y(t) for t ∈ [0, 5].]
% First, select 5 random phases (either pi/2 or -pi/2 with equal probability).
theta_n = (pi/2)*(2*(rand(5,1)>0.5)-1);
% Replicate theta_n so that each random phase covers a 100-step time range.
theta_t = ones(1,100)*theta_n(1);
for i = 2:5
  theta_t = [theta_t ones(1,100)*theta_n(i)];
end
% The time axis and the processes X(t) and Y(t) are not defined in this
% excerpt. The three lines below are assumptions so the script runs: a
% unit-frequency carrier for X(t) and a placeholder for the part (c) Y(t).
t = (0:499)/100;                  % 5 phase intervals x 100 steps of 0.01
X_t = cos(2*pi*t + theta_t);      % assumed PSK carrier with random phase
Y_t = sin(2*pi*t + theta_t);      % placeholder; replace with part (c) process
subplot(2,1,1);
plot(t, X_t);
xlabel('t');
ylabel('X(t)');
title('Part a');
subplot(2,1,2);
plot(t, Y_t);
xlabel('t');
ylabel('Y(t)');
title('Part c');
print -depsc2 pskModulationFigure;
b. If Y20 = |X20| = 0, then there are only two sample paths with max_{1≤i<20} |Xi| = 10. These two paths are shown in Figure 4. Since the total number of sample paths is C(20, 10) and all paths are equally likely,

P{ max_{1≤i<20} Yi = 10 | Y20 = 0 } = 2 / C(20, 10) = 2/184756 = 1/92378 .
[Figure 4: the two sample paths that reach ±10 and return to X20 = 0, plotted for n = 0 to 20.]
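The count is quickly verified in MATLAB:

% Two favorable paths out of nchoosek(20,10) equally likely paths with X20 = 0.
p = 2/nchoosek(20,10)    % = 1/92378, about 1.1e-5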
of the {Xi} random process; each increment X_{i_j} − X_{i_{j−1}} is the sum of a different set of Zi's, and the Zi's are i.i.d. and independent of X0, which appears only in the first increment.
b. Starting at an even number (0 or ±2) can be ruled out: after 11 steps of ±1 the parity of the walk flips, so there is no way the process could then end up at X11 = 2. Using Bayes' rule for the remaining possibilities, and noting that a path from −1 to 2 requires 7 up-steps and 4 down-steps while a path from +1 to 2 requires 6 up-steps and 5 down-steps, we get

P(X0 = −1 | X11 = 2) = P(X11 = 2 | X0 = −1) P(X0 = −1) / P(X11 = 2)
= ( (1/5) C(11, 7) (1/2)^11 ) / ( (1/5) C(11, 7) (1/2)^11 + (1/5) C(11, 6) (1/2)^11 )
= 1 / ( 1 + C(11, 6)/C(11, 7) )
= 1 / ( 1 + (7! 4!)/(6! 5!) )
= 1 / ( 1 + 7/5 )
= 5/12 .

Similarly, P(X0 = +1 | X11 = 2) = 7/12.
To summarize,

P(X0 = x | X11 = 2) = 5/12   if x = −1,
                      7/12   if x = +1,
                      0      otherwise.
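A one-line numerical check of these posteriors:

% Posterior of X0 given X11 = 2: the (1/5) priors and (1/2)^11 factors cancel.
p_minus1 = nchoosek(11,7)/(nchoosek(11,7) + nchoosek(11,6))   % = 5/12
p_plus1  = nchoosek(11,6)/(nchoosek(11,7) + nchoosek(11,6))   % = 7/12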
d. It follows from part (c) that Xn is Gaussian with mean zero and variance
Var(Xn) = E(Xn^2) − (E(Xn))^2 = RX(n, n) − 0 = n .
Therefore Xn ∼ N (0, n).
e. Since {Xn} is a zero-mean Gaussian random process, [X3 X5 X8]^T is a Gaussian random vector with mean [E(X3) E(X5) E(X8)]^T and covariance matrix

[ RX(3,3) RX(3,5) RX(3,8)
  RX(5,3) RX(5,5) RX(5,8)
  RX(8,3) RX(8,5) RX(8,8) ] .

Using the mean and autocorrelation function from part (c), we get

[X3 X5 X8]^T ∼ N( [0 0 0]^T , [ 3 3 3
                                3 5 5
                                3 5 8 ] ) .
7. (10 points) Poisson process. We want to find fTk|N(t)(τ | n), the conditional pdf of the arrival time of the kth event given that n events have occurred by time t. For 0 ≤ τ ≤ t and k ≤ n,

fTk|N(t)(τ | n) = fTk,N(t)(τ, n) / fN(t)(n)
                = fTk,N(t)−N(τ)(τ, n − k) / fN(t)(n)
                = fTk(τ) fN(t)−N(τ)(n − k) / fN(t)(n) ,

where the second equality holds because {Tk = τ, N(t) = n} = {Tk = τ, N(t) − N(τ) = n − k}, and the third follows from independent increments. Substituting the Erlang pdf of Tk and the Poisson pmfs of N(t) − N(τ) and N(t) gives

fTk|N(t)(τ | n) = ( (λ^k τ^{k−1} e^{−λτ} / (k − 1)!) ((λ(t − τ))^{n−k} e^{−λ(t−τ)} / (n − k)!) ) / ( (λt)^n e^{−λt} / n! )
                = ( n! / ((k − 1)! (n − k)!) ) τ^{k−1} (t − τ)^{n−k} / t^n .
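The resulting density can be verified by simulation. A minimal sketch with example values λ = 1, t = 5, k = 2, n = 6 (all assumed for illustration):

% Monte Carlo check: conditional pdf of T2 given N(5) = 6 for a rate-1
% Poisson process, compared with the closed form derived above.
lambda = 1; t = 5; k = 2; n = 6; m = 2e5;
T2 = [];
for trial = 1:m
  arrivals = cumsum(-log(rand(20,1))/lambda);   % Exp(lambda) interarrivals
  if sum(arrivals <= t) == n                    % keep trials with N(t) = n
    T2(end+1) = arrivals(k);
  end
end
histogram(T2, 'Normalization', 'pdf'); hold on;
tau = linspace(0, t, 200);
plot(tau, factorial(n)/(factorial(k-1)*factorial(n-k)) ...
          * tau.^(k-1) .* (t-tau).^(n-k) / t^n);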
2. Vector convergence. By the CLT, Zn approaches a Gaussian distribution. It suffices to find the
mean and covariance matrix.
σ11 = Var(X1i)
    = E(X1^2) − (E(X1))^2
    = ∫_{x1=0}^{1} ∫_{x2=0}^{1−x1} 2x1^2 dx2 dx1 − ( ∫_{x1=0}^{1} ∫_{x2=0}^{1−x1} 2x1 dx2 dx1 )^2
    = 2 ∫_{0}^{1} (x1^2 − x1^3) dx1 − 4 ( ∫_{0}^{1} (x1 − x1^2) dx1 )^2
    = 1/6 − 1/9
    = 1/18 .
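A numerical check of this variance in base MATLAB:

% sigma11 = Var(X1) for the uniform density f(x1,x2) = 2 on the triangle
% {x1, x2 >= 0, x1 + x2 <= 1}.
EX2 = integral2(@(x,y) 2*x.^2, 0, 1, 0, @(x) 1-x);   % E(X1^2) = 1/6
EX  = integral2(@(x,y) 2*x,    0, 1, 0, @(x) 1-x);   % E(X1)   = 1/3
sigma11 = EX2 - EX^2                                 % = 1/18 ~ 0.0556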
b. Following the same steps as in the unbiased case in the lecture notes, we have

P{Xn = k} = C(n, (n + k)/2) p^{(n+k)/2} (1 − p)^{(n−k)/2}   if k + n is even and |k| ≤ n,
            0                                               otherwise.
c. Since the Zi are i.i.d., we have P{X2n = 2k | Xn = k} = P{Xn = k}, which was computed
in part (b).
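The pmf in part (b) translates directly into MATLAB; a sketch, saved as walk_pmf.m (the function name is ours):

% P{Xn = k} for the biased +/-1 random walk with up-step probability p.
function prob = walk_pmf(n, k, p)
  if mod(n + k, 2) == 0 && abs(k) <= n
    m = (n + k)/2;                              % number of up-steps
    prob = nchoosek(n, m)*p^m*(1 - p)^(n - m);
  else
    prob = 0;
  end
end

For example, walk_pmf(11, 3, 0.5) recovers the C(11, 7)(1/2)^11 ≈ 0.161 term used in the posterior calculation above.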
6. Markov processes.
a. We are given that f (xn+1 |x1 , x2 , . . . , xn ) = f (xn+1 |xn ). From the chain rule, in general,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x1 , x2 ) · · · f (xn |x1 , x2 , . . . , xn−1 ) .
Thus, by the definition of Markovity,
f (x1 , x2 , . . . , xn ) = f (x1 )f (x2 |x1 )f (x3 |x2 ) · · · f (xn |xn−1 ) . (1)
Similarly, applying the chain rule in reverse we get
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn ).
Next,

f(xi | xi+1, . . . , xn) = f(xi, xi+1, . . . , xn) / f(xi+1, . . . , xn)
                         = f(xi) f(xi+1 | xi) / f(xi+1)
                         = f(xi | xi+1) ,        (2)

where the second equality follows from (1), applied to both the numerator and the denominator, and the last is Bayes' rule. Therefore
f (x1 , x2 , . . . , xn ) = f (xn )f (xn−1 |xn )f (xn−2 |xn−1 , xn ) · · · f (x1 |x2 , x3 , . . . , xn )
= f (xn )f (xn−1 |xn )f (xn−2 |xn−1 ) · · · f (x1 |x2 ) ,
where the second line follows from (2).
b. First consider

f(xn | x1, . . . , xk) = f(x1, . . . , xk, xn) / f(x1, . . . , xk)
                       = ( f(xn) f(xk | xn) f(xk−1 | xk, xn) · · · f(x1 | x2, . . . , xk, xn) ) / ( f(xk) f(xk−1 | xk) · · · f(x1 | x2) ) ,   (3)

where the denominator in the second line follows from part (a). Next consider

f(xk−1, xk, . . . , xn) = f(xk, xn) f(xk−1 | xk, xn) f(xk+1, xk+2, . . . , xn−1 | xk−1, xk, xn)
                        = f(xn) f(xn−1 | xn) · · · f(xk−1 | xk) ,

where the second line follows from (2). Integrating both sides over xk+1, . . . , xn−1 (i.e., using the law of total probability), we get

f(xk, xn) f(xk−1 | xk, xn) = f(xk, xn) f(xk−1 | xk) .
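These identities can be sanity-checked numerically on a small discrete chain. A minimal sketch with a two-state chain, where the initial distribution p0 and transition matrix P are made-up example values:

% Numeric check of reverse Markovity, f(x1|x2,x3) = f(x1|x2), for a 2-state
% Markov chain (example initial distribution and transition matrix).
p0 = [0.3 0.7];            % P(X1 = i), arbitrary example values
P  = [0.9 0.1; 0.2 0.8];   % P(X_{n+1} = j | X_n = i), arbitrary values
joint = zeros(2,2,2);      % joint(i,j,k) = P(X1=i, X2=j, X3=k)
for i = 1:2, for j = 1:2, for k = 1:2
  joint(i,j,k) = p0(i)*P(i,j)*P(j,k);
end, end, end
% Compare f(x1 | x2, x3) with f(x1 | x2) for, e.g., x1=1, x2=2, x3=1:
c123 = joint(1,2,1) / sum(joint(:,2,1));             % f(x1=1 | x2=2, x3=1)
c12  = sum(joint(1,2,:)) / sum(sum(joint(:,2,:)));   % f(x1=1 | x2=2)
[c123 c12]                                           % the two agree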
7. Detection of Poisson process. First consider the problem with observation (N(τ), N(5)) for some τ ∈ [0, 5]. In order to use the MAP detection rule, we need to compute the conditional probability

pΛ|N(τ),N(5)(λ | nτ, n5) = pN(τ),N(5)|Λ(nτ, n5 | λ) pΛ(λ) / pN(τ),N(5)(nτ, n5) .

The first factor in the numerator is

pN(τ),N(5)|Λ(nτ, n5 | λ) = P{N(τ) − N(0) = nτ, N(5) − N(τ) = n5 − nτ | Λ = λ}
= ( (τλ)^{nτ} / nτ! ) e^{−τλ} ( ((5 − τ)λ)^{n5−nτ} / (n5 − nτ)! ) e^{−(5−τ)λ}
= ( τ^{nτ} (5 − τ)^{n5−nτ} / (nτ! (n5 − nτ)!) ) e^{−5λ} λ^{n5} .

Substituting back, the relevant ratio becomes

pΛ|N(τ),N(5)(1 | nτ, n5) / pΛ|N(τ),N(5)(2 | nτ, n5) = ( (1/3) e^{−5} 1^{n5} ) / ( (2/3) e^{−10} 2^{n5} )
                                                   = e^5 / 2^{n5+1} ,

and the optimal detection rule is therefore

Λ̂(nτ, n5) = { 1 if e^5 / 2^{n5+1} ≥ 1; 2 otherwise }
           = { 1 if n5 ≤ 6; 2 if n5 ≥ 7 }.
Note that this detection rule only depends on the last observation N(5) = n5 , not on the
earlier observation N(τ ) = nτ . In other words, the last observation is a sufficient statistic for
the detection problem. Since our argument holds for any τ ∈ [0, 5], the same must be true if
we observe N(τ ) for all τ ∈ [0, 5]. The rule above is thus the answer to both part (a) and part
(b).
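The threshold in the final rule can be read off numerically:

% MAP rule for Lambda in {1,2} with priors (1/3, 2/3): decide Lambda = 1
% exactly when the posterior ratio exp(5)/2^(n5+1) is at least 1.
n5 = 0:10;
ratio = exp(5) ./ 2.^(n5 + 1);
decide1 = ratio >= 1        % 1 (true) for n5 <= 6, 0 (false) for n5 >= 7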
8. Bus passengers.
a.

Dk = Σ_{i=k+1}^{c} Ti ,

where the Ti are the i.i.d. Exp(λ) interarrival times, so Dk is a sum of c − k of them. Hence

E(Dk) = (c − k)/λ   and   E(Dk^2) = (c − k)(c − k + 1)/λ^2 .
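Under this reading of Dk as a sum of c − k i.i.d. Exp(λ) interarrival times, both moments can be checked by simulation (the parameter values below are arbitrary examples):

% Monte Carlo check of E(Dk) and E(Dk^2) for Dk = sum of (c-k) i.i.d.
% Exp(lambda) interarrival times. Example parameter values only.
lambda = 2; c = 10; k = 4; m = 1e6;
D = sum(-log(rand(c - k, m))/lambda, 1);      % Exp(lambda) via inverse CDF
[mean(D)    (c - k)/lambda]                   % both ~ 3
[mean(D.^2) (c - k)*(c - k + 1)/lambda^2]     % both ~ 10.5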
b. Y(t) is not an independent-increments process. As a counterexample, we show that

P[Y(t2) − Y(t1) = 0 | Y(t1) = 0] ≠ P[Y(t2) − Y(t1) = 0 | Y(t1) = 1] .

Since Y(t1) = 0 corresponds to X(t1) ∈ {0, 1}, consider

P[Y(t2) − Y(t1) = 0 | Y(t1) = 0]
= ( P[X(t2) = 0, X(t1) = 0] + P[X(t2) = 1, X(t1) = 0] + P[X(t2) = 1, X(t1) = 1] ) / ( P[X(t1) = 0] + P[X(t1) = 1] )
= e^{−λt1} e^{−λ(t2−t1)} ( 1 + λ(t2 − t1) + λt1 ) / ( e^{−λt1} (1 + λt1) )
= ( (1 + λt2)/(1 + λt1) ) e^{−λ(t2−t1)}
= ( 1 + λ(t2 − t1)/(1 + λt1) ) e^{−λ(t2−t1)} ,

and, since Y(t1) = 1 corresponds to X(t1) ∈ {2, 3},

P[Y(t2) − Y(t1) = 0 | Y(t1) = 1]
= ( P[X(t2) = 2, X(t1) = 2] + P[X(t2) = 3, X(t1) = 2] + P[X(t2) = 3, X(t1) = 3] ) / ( P[X(t1) = 2] + P[X(t1) = 3] )
= e^{−λt1} ((λt1)^2/2) e^{−λ(t2−t1)} ( 1 + λ(t2 − t1) + λt1/3 ) / ( e^{−λt1} ((λt1)^2/2) (1 + λt1/3) )
= ( (3 + 3λt2 − 2λt1)/(3 + λt1) ) e^{−λ(t2−t1)}
= ( 1 + 3λ(t2 − t1)/(3 + λt1) ) e^{−λ(t2−t1)} .

The two expressions are not the same, which implies that Y(t) is not an independent-increments process.
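A simulation sketch supports this. It assumes, as the conditioning events above indicate, that Y(t) = floor(X(t)/2) for a rate-λ Poisson process X(t); the parameter values are arbitrary examples, and poissrnd requires the Statistics Toolbox:

% Monte Carlo check that Y(t) = floor(X(t)/2) does not have independent
% increments. Example parameters: lambda = 1, t1 = 1, t2 = 2.
lambda = 1; t1 = 1; t2 = 2; m = 1e6;
X1 = poissrnd(lambda*t1, m, 1);               % X(t1)
X2 = X1 + poissrnd(lambda*(t2-t1), m, 1);     % X(t2), independent increment
Y1 = floor(X1/2); Y2 = floor(X2/2);
p0 = mean(Y2(Y1==0) == 0)                     % P[Y(t2)-Y(t1)=0 | Y(t1)=0]
p1 = mean(Y2(Y1==1) == 1)                     % P[Y(t2)-Y(t1)=0 | Y(t1)=1]
% p0 and p1 differ, matching the two closed-form expressions above.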