CSE 383M / CS395T Midterm Exam Solution: 1 (20 Points)
1 (20 points)
Suppose X1 and X2 are independent random variables, both having p.d.f.

f(x) = 3x^{−4} for 1 < x < ∞, and f(x) = 0 otherwise.
(b) Let Z = max(X1 , X2 ) denote the larger (maximum) of the two random variables. Find the p.d.f.
of Z.
Solution
To find the p.d.f. f_Z(z) of Z, we first compute the c.d.f. F_Z(z) using the "maximum trick": for z ≥ 1,

F_Z(z) = P(Z ≤ z) = P(max(X1, X2) ≤ z)
= P(X1 ≤ z, X2 ≤ z) = P(X1 ≤ z) P(X2 ≤ z)
= F(z)^2

Now f(z) = 3z^{−4} for z ≥ 1, so F(z) = ∫_1^z 3t^{−4} dt = 1 − z^{−3}, and therefore

F_Z(z) = (1 − z^{−3})^2

Differentiating gives the p.d.f.:

f_Z(z) = 2(1 − z^{−3}) · 3z^{−4} = 6z^{−4}(1 − z^{−3}) for z ≥ 1, and f_Z(z) = 0 otherwise.
Since X1 and X2 are independent, their joint p.d.f. is

f(x1, x2) = (3x1^{−4})(3x2^{−4}) = 9 x1^{−4} x2^{−4},  1 < x1, x2 < ∞

Hence, with Q = X2/X1 (so that Q ≥ q exactly when X2 > qX1),

P(Q ≥ q) = P(X2 > qX1) = ∫_{x1=1}^∞ ∫_{x2=qx1}^∞ 9 x1^{−4} x2^{−4} dx2 dx1
= ∫_{x1=1}^∞ 9 x1^{−4} [x2^{−3}/(−3)]_{x2=qx1}^∞ dx1
= ∫_{x1=1}^∞ 3 x1^{−4} (qx1)^{−3} dx1
= (3/q^3) ∫_{x1=1}^∞ x1^{−7} dx1
= 1/(2q^3)
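As a quick sanity check (a sketch, not part of the exam solution; the sample size and seed are arbitrary choices), we can sample X by inverse-transform from F(x) = 1 − x^{−3} and verify F_Z(2) = (7/8)^2 = 0.765625 and P(Q ≥ 2) = 1/16 = 0.0625 by Monte Carlo:

```python
import random

random.seed(42)

def sample_x():
    # Inverse-transform sampling from F(x) = 1 - x**(-3), x > 1:
    # F^{-1}(u) = (1 - u)**(-1/3); using 1 - random.random() keeps the
    # base in (0, 1], avoiding a zero base under the negative exponent.
    return (1.0 - random.random()) ** (-1.0 / 3.0)

N = 200_000
z_hits = q_hits = 0
for _ in range(N):
    x1, x2 = sample_x(), sample_x()
    if max(x1, x2) <= 2.0:   # event Z <= 2
        z_hits += 1
    if x2 / x1 >= 2.0:       # event Q >= 2
        q_hits += 1

print(z_hits / N)  # should be close to F_Z(2) = (1 - 2**-3)**2 = 0.765625
print(q_hits / N)  # should be close to 1 / (2 * 2**3) = 0.0625
```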
2 (20 points)
Let X1, X2, X3, … be i.i.d. random variables, with mean µ = E(Xi) and variance σ^2 = Var(Xi), and let S_n = ∑_{i=1}^n Xi denote the partial sums of the Xi. The Weak Law of Large Numbers (WLLN) states that for any ϵ > 0,

lim_{n→∞} P(|S_n/n − µ| ≤ ϵ) = 1
(a) Show that the WLLN is equivalent to the statement that S_n/n is an unbiased estimator of µ.
Solution
To show that S_n/n is an unbiased estimator of µ, we need to show that E(S_n/n − µ) = 0. (Indeed, by linearity of expectation, E(S_n/n) = (1/n) ∑_{i=1}^n E(Xi) = µ for every n.) Since for any ϵ > 0,

lim_{n→∞} P(|S_n/n − µ| ≤ ϵ) = 1

then for any ϵ > 0,

lim_{n→∞} E(|S_n/n − µ|) ≤ ϵ

Letting ϵ → 0,

lim_{n→∞} E(S_n/n − µ) = 0

Thus, we have

E(µ − S_n/n) = 0
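The two statements can be seen side by side in a short simulation (illustrative only; the Uniform(0, 1) distribution, run count, and seed are arbitrary choices): the sample mean stays centered at µ for every n, while the fraction of runs landing within ϵ of µ climbs toward 1 as n grows.

```python
import random

random.seed(1)

def sample_mean(n):
    # S_n / n for n i.i.d. Uniform(0, 1) draws (mu = 0.5, sigma^2 = 1/12)
    return sum(random.random() for _ in range(n)) / n

runs, eps = 2_000, 0.05
for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(runs)]
    inside = sum(1 for m in means if abs(m - 0.5) <= eps) / runs
    avg = sum(means) / runs
    # inside -> 1 as n grows (WLLN); avg stays near 0.5 for every n
    print(n, round(inside, 3), round(avg, 3))
```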
(b) If X is a random variable with mean 50 and variance 25, what is the probability that X is between 40
and 60?
Solution
By Chebyshev's inequality (here σ = 5, so 10 = 2σ),

P(|X − 50| > 10) = P(|X − 50| > 2 × 5) ≤ 1/2^2 = 1/4

so

P(40 ≤ X ≤ 60) = 1 − P(|X − 50| > 10) ≥ 1 − 1/4 = 3/4
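Chebyshev's bound holds for any distribution with this mean and variance; for a concrete illustration (the normal choice is an assumption, not part of the problem) we can check that a Normal(50, 5) sample comfortably exceeds the 3/4 bound:

```python
import random

random.seed(7)

# Chebyshev guarantees P(40 <= X <= 60) >= 3/4 for *any* distribution
# with mean 50 and variance 25; Normal(50, 5) is one illustrative case.
N = 100_000
inside = sum(1 for _ in range(N) if 40 <= random.gauss(50, 5) <= 60) / N
print(inside)  # for the normal case this is about 0.954, well above 0.75
```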
3 (20 points)
Suppose two points are chosen independently and uniformly from the interval [0,1]. Let D denote the
Euclidean distance (2-norm) between these two points.
(a) What is the probability that D ≤ 0.1?
Solution
We have D = |X − Y|, where X and Y denote the two points, so we need to compute P(|X − Y| ≤ 0.1). Since X and Y are independent and uniformly distributed on [0, 1], their joint distribution is uniform over the unit square, so we can compute probabilities involving X and Y via areas. With R denoting the region on which |x − y| ≤ 0.1 (the unit square minus two right triangles with legs of length 0.9), we get

P(D ≤ 0.1) = P(|X − Y| ≤ 0.1) = Area(R)/Area(unit square) = (1 − 0.9^2)/1 = 0.19
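The area computation is easy to cross-check by Monte Carlo (a sketch; sample size and seed are arbitrary):

```python
import random

random.seed(3)

# Monte Carlo estimate of P(|X - Y| <= 0.1) for X, Y ~ Uniform(0, 1)
N = 200_000
hits = sum(1 for _ in range(N)
           if abs(random.random() - random.random()) <= 0.1) / N
print(hits)  # should be close to 1 - 0.9**2 = 0.19
```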
(c) Suppose the two random points are chosen independently and uniformly from the unit hypercube [0, 1]^n, and let D denote the Euclidean distance (2-norm) between the two points. What is E(D^2)? (Note: to receive credit, please show your derivation.)
Solution
Let (X1, X2, …, Xn) and (Y1, Y2, …, Yn) denote the coordinates of the two points. By assumption, the Xi's and Yi's are mutually independent, and each is uniformly distributed on the interval [0, 1]. The square of the distance between the two points is

D^2 = (Y1 − X1)^2 + (Y2 − X2)^2 + · · · + (Yn − Xn)^2

Hence,

E(D^2) = E((Y1 − X1)^2 + (Y2 − X2)^2 + · · · + (Yn − Xn)^2) = ∑_{i=1}^n E((Yi − Xi)^2)
Squaring out and using linearity of expectation and the independence assumptions, we get

E((Yi − Xi)^2) = E(Yi^2) − 2E(Yi)E(Xi) + E(Xi^2) = 1/3 − 2 × (1/2) × (1/2) + 1/3 = 1/6

since E(Xi^2) = ∫_0^1 x^2 dx = 1/3 and E(Xi) = ∫_0^1 x dx = 1/2 (and likewise for Yi). So

E(D^2) = n × (1/6) = n/6
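A quick Monte Carlo check of E(D^2) = n/6 (illustrative; the choice n = 3, sample size, and seed are arbitrary):

```python
import random

random.seed(5)

# Estimate E(D^2) for two uniform random points in [0, 1]^n; expect n/6.
n, N = 3, 200_000
total = 0.0
for _ in range(N):
    total += sum((random.random() - random.random()) ** 2 for _ in range(n))
print(total / N)  # should be close to n / 6 = 0.5
```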
4 (20 points)
Let A be an n × n invertible (nonsingular) matrix. Let x be a nonzero vector. Suppose that Ax = λx. For
each of the following, first state if it is true or false, then briefly justify your answer.
(a) A^k x = λ^k x, for k ≥ 0
Solution
True. We prove this by mathematical induction on k.

When k = 0: A^0 x = x = 1x = λ^0 x.

Assume that for all k ≤ m we have A^k x = λ^k x. Then for k = m + 1,

A^{m+1} x = A(A^m x) = A(λ^m x) = λ^m (Ax) = λ^m (λx) = λ^{m+1} x

For negative powers: since A is invertible, λ ≠ 0, and applying (A^{−1})^k to both sides of A^k x = λ^k x gives x = λ^k (A^{−1})^k x; then

λ^{−k} x = λ^{−k} λ^k (A^{−1})^k x = (A^{−1})^k x

Similarly, for a polynomial p(A) = α1 A^1 + α2 A^2 + · · · + αm A^m,

p(A)x = (α1 A^1 + α2 A^2 + · · · + αm A^m)x
= α1 A^1 x + α2 A^2 x + · · · + αm A^m x
= α1 (λ^1 x) + α2 (λ^2 x) + · · · + αm (λ^m x)
= (α1 λ^1 + α2 λ^2 + · · · + αm λ^m)x
= p(λ)x
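Both identities are easy to verify on a concrete example (the matrix, eigenpair, and polynomial below are illustrative choices, not part of the exam):

```python
# A = [[2, 1], [1, 2]] has eigenvector x = (1, 1) with eigenvalue 3,
# so A^k x should equal 3^k x, and p(A)x should equal p(3)x.

def mat_vec(a, v):
    # 2x2 matrix times 2-vector
    return [a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1]]

A = [[2.0, 1.0], [1.0, 2.0]]
x = [1.0, 1.0]
lam = 3.0

v = x[:]
for k in range(1, 5):          # check A^k x = lam^k x for k = 1..4
    v = mat_vec(A, v)
    assert v == [lam ** k * c for c in x]

# p(t) = 2t + t^2, so p(A)x = 2(Ax) + A(Ax) and p(lam) = 2*3 + 9 = 15
Ax = mat_vec(A, x)
p_A_x = [2 * a + b for a, b in zip(Ax, mat_vec(A, Ax))]
print(p_A_x)  # [15.0, 15.0], i.e. p(lam) * x
```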
5 (20 points)
For each of the following, first state if it is true or false, then briefly justify your answer.
(a) Every n × n matrix has n distinct (different) eigenvalues.
Solution
False. An n × n matrix has at most n distinct eigenvalues (exactly n when counted with multiplicity), but some of them can coincide; for example, the n × n identity matrix has the single eigenvalue 1, with multiplicity n.
(c) A square matrix and its transpose have the same eigenvalues.
Solution
True. For any n × n matrix A,

det(A^T − λI) = det((A − λI)^T) = det(A − λI)

so A and A^T have the same characteristic polynomial and hence the same eigenvalues.
(d) Determinant of a symmetric matrix is equal to the product of its eigenvalues.
Solution
True. Let λ1, λ2, …, λn be the eigenvalues of the n × n symmetric matrix A; these λi are the roots of the characteristic polynomial p(λ) = det(A − λI), so

p(λ) = (λ1 − λ)(λ2 − λ) · · · (λn − λ)

Setting λ = 0, we get

det(A) = λ1 λ2 · · · λn
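A concrete check (the 2 × 2 symmetric matrix below is an illustrative choice; its eigenvalues are computed by hand from the characteristic polynomial):

```python
# A = [[2, 1], [1, 2]] is symmetric with characteristic polynomial
# (2 - t)^2 - 1, whose roots are 3 and 1; det(A) should equal 3 * 1.

A = [[2.0, 1.0], [1.0, 2.0]]
eigenvalues = [3.0, 1.0]       # roots of (2 - t)^2 - 1 = 0

det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 2x2 determinant
prod = eigenvalues[0] * eigenvalues[1]

print(det_A, prod)  # 3.0 3.0
```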