
CSE 383M / CS395T Midterm Exam Solution

1 (20 points)
Suppose X1 and X2 are independent random variables, both having p.d.f.
f(x) = 3x^{-4} for 1 < x < ∞, and f(x) = 0 otherwise.

(a) Find Var(X1), Var(X2), and Cov(X1, X2).


Solution
E(X1) = ∫_1^∞ x · 3x^{-4} dx = ∫_1^∞ 3x^{-3} dx = 3/2

E(X1^2) = ∫_1^∞ x^2 · 3x^{-4} dx = ∫_1^∞ 3x^{-2} dx = 3

Var(X1) = E(X1^2) − E(X1)^2 = 3 − (3/2)^2 = 3/4

Since X2 has the same distribution as X1, we have Var(X2) = Var(X1) = 3/4.
Since X1 and X2 are independent, we have Cov(X1, X2) = 0.
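
These moments can be sanity-checked numerically. The sketch below (not part of the exam solution) approximates the integrals with a simple midpoint rule, truncating the infinite domain at an arbitrary point B:

```python
def integrate(g, a, b, n=200_000):
    # Midpoint-rule approximation of the integral of g over [a, b].
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

f = lambda x: 3 * x ** -4   # the given density on (1, inf)
B = 1_000.0                 # arbitrary truncation point; the tails beyond B are tiny

EX = integrate(lambda x: x * f(x), 1.0, B)        # should be close to 3/2
EX2 = integrate(lambda x: x * x * f(x), 1.0, B)   # should be close to 3
var = EX2 - EX ** 2                               # should be close to 3/4
```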

(b) Let Z = max(X1, X2) denote the larger (maximum) of the two random variables. Find the p.d.f.
of Z.
Solution
To find the p.d.f. fZ(z) of Z, we first compute the c.d.f. FZ(z) using the "maximum trick". For z ≥ 1,

FZ(z) = P(Z ≤ z) = P(max(X1, X2) ≤ z)
      = P(X1 ≤ z, X2 ≤ z) = P(X1 ≤ z) P(X2 ≤ z)
      = F(z)^2

Now f(z) = 3z^{-4} for z ≥ 1, so F(z) = ∫_1^z 3t^{-4} dt = 1 − z^{-3}, and therefore

FZ(z) = (1 − z^{-3})^2

fZ(z) = FZ'(z) = 2(1 − z^{-3}) · 3z^{-4} = 6z^{-4} − 6z^{-7}   (z ≥ 1)
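
The derived c.d.f. can be checked by simulation. The sketch below (illustrative only; the evaluation point z = 2 and the sample size are arbitrary choices) draws X via the inverse-CDF transform X = U^{-1/3}, which follows from F(x) = 1 − x^{-3}:

```python
import random

random.seed(42)

def sample_x():
    # Inverse-CDF sampling: F(x) = 1 - x**-3 on (1, inf), so X = U**(-1/3).
    return random.random() ** (-1.0 / 3.0)

n = 200_000
z = 2.0
empirical = sum(max(sample_x(), sample_x()) <= z for _ in range(n)) / n
theoretical = (1 - z ** -3) ** 2   # F_Z(2) = (7/8)**2 = 0.765625
```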

(c) Let Q = X2/X1. For q ≥ 1, find Prob(Q ≥ q).


Solution
By the independence of X1 and X2 and the given density, the joint density is

f(x1, x2) = (3x1^{-4})(3x2^{-4}) = 9 x1^{-4} x2^{-4},   1 < x1, x2 < ∞

Hence

P(Q ≥ q) = P(X2 ≥ q X1) = ∫_{x1=1}^∞ ∫_{x2=q x1}^∞ 9 x1^{-4} x2^{-4} dx2 dx1
         = ∫_{x1=1}^∞ 9 x1^{-4} [ x2^{-3}/(−3) ]_{x2=q x1}^∞ dx1
         = ∫_{x1=1}^∞ 3 x1^{-4} (q x1)^{-3} dx1
         = (3/q^3) ∫_{x1=1}^∞ x1^{-7} dx1
         = 1/(2 q^3)
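
A quick Monte Carlo check of the closed form 1/(2q^3) (illustrative only; q = 2 and the sample size are arbitrary choices):

```python
import random

random.seed(0)

def sample_x():
    # Inverse-CDF sampling from f(x) = 3x^{-4} on (1, inf): X = U**(-1/3).
    return random.random() ** (-1.0 / 3.0)

q = 2.0
n = 200_000
# Draw independent pairs (X1, X2) and count how often X2/X1 >= q.
p_est = sum(sample_x() / sample_x() >= q for _ in range(n)) / n
p_exact = 1 / (2 * q ** 3)   # 1/16 = 0.0625
```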

2 (20 points)
Let X1, X2, X3, . . . be i.i.d. random variables, with mean µ = E(Xi) and variance σ^2 = Var(Xi), and let
Sn = ∑_{i=1}^n Xi denote the partial sums of the Xi. The Weak Law of Large Numbers (WLLN) is: for any ϵ > 0,

lim_{n→∞} P( |Sn/n − µ| ≤ ϵ ) = 1
(a) Show that WLLN is equivalent to the statement that Sn/n is an unbiased estimator of µ.
Solution
Sn/n is an unbiased estimator of µ when E(Sn/n − µ) = 0. By linearity of expectation,

E(Sn/n) = (1/n) ∑_{i=1}^n E(Xi) = (1/n) · nµ = µ

so E(Sn/n − µ) = 0 for every n, and Sn/n is unbiased. The WLLN adds that the estimator not only is
centered at µ on average, but also concentrates around µ: for any ϵ > 0, the probability that Sn/n lies
within ϵ of µ tends to 1 as n → ∞.
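
The concentration described by the WLLN is easy to see numerically. The sketch below (illustrative only; Uniform(0, 1) with µ = 1/2 is an arbitrary choice of distribution) compares the deviation of the sample mean from µ for a small and a large n:

```python
import random

random.seed(3)

def sample_mean(n):
    # Mean of n i.i.d. Uniform(0, 1) draws; here mu = 1/2.
    return sum(random.random() for _ in range(n)) / n

# The deviation |S_n/n - mu| is typically small once n is large.
dev_small_n = abs(sample_mean(10) - 0.5)
dev_large_n = abs(sample_mean(100_000) - 0.5)
```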

(b) If X is a random variable with mean 50 and variance 25, what is the probability that X is between 40
and 60?
Solution
By Chebyshev's inequality, with σ = 5,

P(|X − 50| > 10) = P(|X − 50| > 2σ) ≤ 1/2^2 = 1/4

so

P(40 ≤ X ≤ 60) = 1 − P(|X − 50| > 10) ≥ 1 − 1/4 = 3/4
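
Note that Chebyshev gives only a lower bound, valid for every distribution with this mean and variance. As an illustration (the normal distribution here is one arbitrary example, not implied by the problem), the exact probability for X ~ Normal(50, 25) is about 0.95, comfortably above the 3/4 bound:

```python
import math

mu, sigma = 50.0, 5.0
# Chebyshev lower bound for P(40 <= X <= 60) = P(|X - mu| <= 2*sigma): 1 - 1/2^2.
chebyshev_bound = 1 - 1 / 2 ** 2   # 0.75

# Exact value for the (hypothetical) case X ~ Normal(50, 25):
# P(|X - 50| <= 10) = P(|Z| <= 2) = erf(2 / sqrt(2)).
p_normal = math.erf(2.0 / math.sqrt(2.0))
```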

3 (20 points)
Suppose two points are chosen independently and uniformly from the interval [0,1]. Let D denote the
Euclidean distance (2-norm) between these two points.
(a) What is the probability that D ≤ 0.1?
Solution
We have D = |X − Y|, where X and Y denote the two points, and we need to compute P(|X − Y| ≤ 0.1).
Since X and Y are independent and uniformly distributed on [0, 1], the joint distribution of X and Y is
uniform over the unit square, so we can compute probabilities involving X and Y via areas. With R denoting
the region on which |x − y| ≤ 0.1, we get

P(D ≤ 0.1) = P(|X − Y| ≤ 0.1) = Area(R)/Area(UnitSquare) = (1 − 0.9^2)/1 = 0.19
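
This area computation can be spot-checked by simulation (an illustrative sketch; the sample size is arbitrary):

```python
import random

random.seed(1)

n = 200_000
# Fraction of uniform point pairs in [0, 1]^2 that land within distance 0.1.
hits = sum(abs(random.random() - random.random()) <= 0.1 for _ in range(n))
p_est = hits / n   # expected to be close to 0.19
```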

(b) What is E(D2 )?


Solution
Write D^2 = (X − Y)^2, expand the square, and use linearity of expectation together with independence:

E(D^2) = E((X − Y)^2) = E(X^2) − 2E(X)E(Y) + E(Y^2) = 1/3 − 2 × 1/2 × 1/2 + 1/3 = 1/6

(since E(X^2) = ∫_0^1 x^2 dx = 1/3 and E(X) = ∫_0^1 x dx = 1/2 for a uniform distribution on [0, 1])

(c) Suppose the two random points are chosen independently and uniformly from the unit hypercube [0, 1]^n,
and let D denote the Euclidean distance (2-norm) between the two points. What is E(D^2)? (Note: to
receive credit, please show your derivation.)
Solution
Let (X1, X2, . . . , Xn) and (Y1, Y2, . . . , Yn) denote the coordinates of the two points. By assumption, all of
the coordinates are mutually independent, and each Xi and Yi is uniformly distributed on the interval [0, 1].
The square of the distance between these two points is

D^2 = (Y1 − X1)^2 + (Y2 − X2)^2 + · · · + (Yn − Xn)^2

Hence,

E(D^2) = E((Y1 − X1)^2 + (Y2 − X2)^2 + · · · + (Yn − Xn)^2) = ∑_{i=1}^n E((Yi − Xi)^2)

Squaring out and using linearity of expectation and the independence assumptions, we get

E((Yi − Xi)^2) = E(Yi^2) − 2E(Yi)E(Xi) + E(Xi^2) = 1/3 − 2 × 1/2 × 1/2 + 1/3 = 1/6

since E(Xi^2) = ∫_0^1 x^2 dx = 1/3 and E(Xi) = ∫_0^1 x dx = 1/2, so

E(D^2) = n × 1/6 = n/6
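
The n/6 formula can be spot-checked by simulation (an illustrative sketch; the dimensions tested and the trial count are arbitrary):

```python
import random

random.seed(7)

def mean_sq_dist(dim, trials=100_000):
    # Average squared 2-norm distance between two uniform random points in [0, 1]^dim.
    total = 0.0
    for _ in range(trials):
        total += sum((random.random() - random.random()) ** 2 for _ in range(dim))
    return total / trials

est_1d = mean_sq_dist(1)   # expected to be close to 1/6
est_3d = mean_sq_dist(3)   # expected to be close to 3/6 = 0.5
```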

4 (20 points)
Let A be an n × n invertible (nonsingular) matrix. Let x be a nonzero vector. Suppose that Ax = λx. For
each of the following, first state if it is true or false, then briefly justify your answer.
(a) A^k x = λ^k x, for k ≥ 0
Solution
True. We prove this by mathematical induction on k.
When k = 0: A^0 x = x = 1x = λ^0 x.
Assume that A^k x = λ^k x holds for all k ≤ m. Then for k = m + 1,

A^{m+1} x = A(A^m x) = A(λ^m x) = λ^m Ax = λ^m λx = λ^{m+1} x

Thus, for any k ≥ 0, A^k x = λ^k x.
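
A small numeric illustration of (a) (the matrix and eigenpair below are an arbitrary example, not from the exam): A = [[2, 1], [1, 2]] has eigenvector x = (1, 1) with eigenvalue λ = 3, so five applications of A should scale x by 3^5 = 243:

```python
def mat_vec(A, v):
    # Multiply matrix A (a list of rows) by vector v.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]   # arbitrary example matrix
x = [1.0, 1.0]                 # eigenvector of A with eigenvalue 3
lam = 3.0

v = x[:]
for _ in range(5):             # apply A five times: A^5 x should equal 3^5 x
    v = mat_vec(A, v)
expected = [lam ** 5 * xi for xi in x]   # [243.0, 243.0]
```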

(b) λ^{-k} x = (A^{-1})^k x, for k ≥ 0


Solution
True. First note that λ ≠ 0: if λ = 0, then Ax = 0 for the nonzero vector x, contradicting the invertibility
of A. Since A^k x = λ^k x for any k ≥ 0 by part (a), we have

x = (A^{-1})^k A^k x = (A^{-1})^k λ^k x = λ^k (A^{-1})^k x

and multiplying both sides by λ^{-k} gives

λ^{-k} x = λ^{-k} λ^k (A^{-1})^k x = (A^{-1})^k x

(c) p(A)x = p(λ)x, for any polynomial function p


Solution
True. Let p(A) = α0 I + α1 A + α2 A^2 + · · · + αm A^m. Since αi A^i x = αi (λ^i x) for each i by part (a), we get

p(A)x = (α0 I + α1 A + · · · + αm A^m)x
      = α0 x + α1 Ax + · · · + αm A^m x
      = α0 (λ^0 x) + α1 (λ^1 x) + · · · + αm (λ^m x)
      = (α0 + α1 λ + · · · + αm λ^m)x
      = p(λ)x

(d) A^k x = (1 − λ)^k x, for k ≥ 0


Solution
False in general. By part (a), A^k x = λ^k x, so the claim would force λ^k x = (1 − λ)^k x for all k ≥ 0,
which requires λ = 1 − λ, i.e. λ = 1/2. For any other eigenvalue it fails: e.g. if λ = 2 and k = 1, the claim
gives Ax = −x, contradicting Ax = 2x.

5 (20 points)
For each of the following, first state if it is true or false, then briefly justify your answer.
(a) Every n × n matrix has n distinct (different) eigenvalues.
Solution
False. An n × n matrix has at most n distinct eigenvalues, but eigenvalues can repeat: e.g. the identity
matrix In has the single eigenvalue 1 with multiplicity n.

(b) The eigenvalues of a real matrix are real.


Solution
False. A real symmetric matrix has real eigenvalues, but a general real matrix need not: e.g. the rotation
matrix [[0, −1], [1, 0]] has eigenvalues ±i.

(c) A square matrix and its transpose have the same eigenvalues.
Solution
True. For any n × n matrix A,

det(A − λIn) = det((A − λIn)^T) = det(A^T − λIn)

so A and A^T have the same characteristic polynomial, and hence the same eigenvalues.
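
This can be illustrated numerically with a nonsymmetric example (an arbitrary choice below), computing the eigenvalues of a 2 × 2 matrix directly from its characteristic polynomial:

```python
import math

def eigs_2x2(M):
    # Eigenvalues of a real 2x2 matrix from its characteristic polynomial
    # lambda^2 - tr(M)*lambda + det(M) = 0 (assumes real roots).
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[1.0, 2.0], [0.0, 3.0]]    # a nonsymmetric example (arbitrary)
At = [[1.0, 0.0], [2.0, 3.0]]   # its transpose
```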

(d) Determinant of a symmetric matrix is equal to the product of its eigenvalues.
Solution
True. Let λ1, λ2, . . . , λn be the eigenvalues of the n × n symmetric matrix A; these λi are the roots of the
characteristic polynomial p(λ):

det(A − λI) = p(λ) = (−1)^n (λ − λ1) . . . (λ − λn) = (λ1 − λ) . . . (λn − λ)

Setting λ = 0, we get
det(A) = λ1 λ2 . . . λn
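
A quick numeric illustration with an arbitrary symmetric 2 × 2 example, computing the eigenvalues from the characteristic polynomial and comparing their product with the determinant:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]                   # arbitrary symmetric example
tr = A[0][0] + A[1][1]                         # trace = 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # determinant = 3
# Roots of lambda^2 - tr*lambda + det = 0:
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2  # eigenvalues 1.0 and 3.0
```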
