Spring 2010
1. Let $X_1, \dots, X_n$ be iid $\Gamma(p, 1/\lambda)$ with density $g_\theta(x) = \frac{1}{\Gamma(p)} \lambda^p x^{p-1} e^{-\lambda x}$, $x > 0$, $\theta = (p, \lambda)$, $p > 0$, $\lambda > 0$.
b) Show that the moment estimates, $\tilde\theta$, are asymptotically bivariate normal and give their asymptotic mean and variance-covariance matrix.
Solution. Note that by the multivariate Central Limit Theorem, we have that
$$\sqrt{n} \begin{pmatrix} \frac{1}{n}\sum X_i - \frac{p}{\lambda} \\ \frac{1}{n}\sum X_i^2 - \frac{p^2 + p}{\lambda^2} \end{pmatrix} \Rightarrow N(0, \Sigma_\theta),$$
where
$$\Sigma_\theta = \begin{bmatrix} \operatorname{Var}(X_i) & \operatorname{Cov}(X_i, X_i^2) \\ \operatorname{Cov}(X_i, X_i^2) & \operatorname{Var}(X_i^2) \end{bmatrix} = \begin{bmatrix} \frac{p}{\lambda^2} & \frac{2p(p+1)}{\lambda^3} \\ \frac{2p(p+1)}{\lambda^3} & \frac{2p(p+1)(2p+3)}{\lambda^4} \end{bmatrix}.$$
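The entries of $\Sigma_\theta$ follow from the raw Gamma moments $E[X^k] = \Gamma(p+k)/(\Gamma(p)\lambda^k)$. A minimal Python sanity check of the closed forms (the test values $p = 2.5$, $\lambda = 1.7$ are arbitrary):

```python
from math import gamma

# Raw moments of Gamma(shape p, rate lam): E[X^k] = Gamma(p+k) / (Gamma(p) * lam^k).
def moment(p, lam, k):
    return gamma(p + k) / (gamma(p) * lam**k)

def sigma_theta(p, lam):
    """Entries of Sigma_theta computed from raw moments, for comparison
    with the closed forms derived above."""
    m1, m2, m3, m4 = (moment(p, lam, k) for k in (1, 2, 3, 4))
    var_x = m2 - m1**2
    cov_x_x2 = m3 - m1 * m2
    var_x2 = m4 - m2**2
    return var_x, cov_x_x2, var_x2

p, lam = 2.5, 1.7
var_x, cov_x_x2, var_x2 = sigma_theta(p, lam)

# Closed forms: p/lam^2, 2p(p+1)/lam^3, 2p(p+1)(2p+3)/lam^4.
assert abs(var_x - p / lam**2) < 1e-9
assert abs(cov_x_x2 - 2 * p * (p + 1) / lam**3) < 1e-9
assert abs(var_x2 - 2 * p * (p + 1) * (2 * p + 3) / lam**4) < 1e-9
```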
Now, note that if $g(x, y) = \left(\frac{x^2}{y - x^2}, \frac{x}{y - x^2}\right)$, then $\tilde\theta = g\left(\frac{1}{n}\sum X_i, \frac{1}{n}\sum X_i^2\right)$. Also, we have that
$$\nabla g(x, y) = \frac{1}{(y - x^2)^2} \begin{bmatrix} 2xy & -x^2 \\ y + x^2 & -x \end{bmatrix},$$
and so,
$$\nabla g\left(\frac{p}{\lambda}, \frac{p(p+1)}{\lambda^2}\right) = \begin{bmatrix} 2\lambda(1 + p) & -\lambda^2 \\ \lambda^2\left(2 + \frac{1}{p}\right) & -\frac{\lambda^3}{p} \end{bmatrix}.$$
By the delta method, $\sqrt{n}(\tilde\theta - \theta) \Rightarrow N(0, \nabla g\, \Sigma_\theta\, \nabla g^T)$; that is, the asymptotic mean is $(p, \lambda)$, and the asymptotic variance-covariance matrix is
$$\nabla g\, \Sigma_\theta\, \nabla g^T = \begin{bmatrix} 2\lambda(1 + p) & -\lambda^2 \\ \lambda^2\left(2 + \frac{1}{p}\right) & -\frac{\lambda^3}{p} \end{bmatrix} \begin{bmatrix} \frac{p}{\lambda^2} & \frac{2p(p+1)}{\lambda^3} \\ \frac{2p(p+1)}{\lambda^3} & \frac{2p(p+1)(2p+3)}{\lambda^4} \end{bmatrix} \begin{bmatrix} 2\lambda(1 + p) & \lambda^2\left(2 + \frac{1}{p}\right) \\ -\lambda^2 & -\frac{\lambda^3}{p} \end{bmatrix}.$$
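The evaluated gradient can be checked against a central finite difference of $g$ at the population moments. A small Python sketch (the test point $p = 2.5$, $\lambda = 1.7$ is arbitrary):

```python
# g maps the first two raw moments (x, y) to the moment estimates (p-tilde, lam-tilde).
def g(x, y):
    d = y - x * x
    return (x * x / d, x / d)

def grad_g_closed(p, lam):
    # Closed-form Jacobian of g at (p/lam, p(p+1)/lam^2), as derived above.
    return [[2 * lam * (1 + p), -lam**2],
            [lam**2 * (2 + 1 / p), -lam**3 / p]]

def grad_g_numeric(x, y, h=1e-5):
    # Central finite-difference Jacobian of g at (x, y).
    J = []
    for i in range(2):
        row = []
        for (dx, dy) in ((h, 0.0), (0.0, h)):
            row.append((g(x + dx, y + dy)[i] - g(x - dx, y - dy)[i]) / (2 * h))
        J.append(row)
    return J

p, lam = 2.5, 1.7
x, y = p / lam, p * (p + 1) / lam**2
J_num, J_cf = grad_g_numeric(x, y), grad_g_closed(p, lam)

# g recovers the parameters at the population moments, and the Jacobians agree.
assert abs(g(x, y)[0] - p) < 1e-9 and abs(g(x, y)[1] - lam) < 1e-9
for i in range(2):
    for j in range(2):
        assert abs(J_num[i][j] - J_cf[i][j]) < 1e-3
```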
c) Compute the asymptotic variance-covariance matrix of the maximum likelihood estimates. You may leave your answer in terms of $\Gamma$ function derivatives.
Solution. Since MLEs are asymptotically efficient, the Cramér-Rao lower bound matrix will be the asymptotic variance-covariance matrix of the MLEs. The likelihood function is
$$L(p, \lambda; x) = \prod_{i=1}^n \frac{1}{\Gamma(p)} \lambda^p x_i^{p-1} e^{-\lambda x_i} = \Gamma(p)^{-n} \lambda^{np} \left(\prod_{i=1}^n x_i\right)^{p-1} e^{-\lambda \sum x_i},$$
so
$$\frac{\partial}{\partial p} \log L = -n \frac{\Gamma'(p)}{\Gamma(p)} + n \log \lambda + \sum \log x_i, \qquad \frac{\partial}{\partial \lambda} \log L = \frac{np}{\lambda} - \sum x_i,$$
and
$$\frac{\partial^2}{\partial p^2} \log L = -n \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2}, \qquad \frac{\partial^2}{\partial p\, \partial \lambda} \log L = \frac{n}{\lambda}, \qquad \frac{\partial^2}{\partial \lambda^2} \log L = -n \frac{p}{\lambda^2}.$$
Hence, the Fisher information matrix is
$$I(p, \lambda) = -E \begin{bmatrix} -n \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} & \frac{n}{\lambda} \\ \frac{n}{\lambda} & -n \frac{p}{\lambda^2} \end{bmatrix} = n \begin{bmatrix} \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} & -\frac{1}{\lambda} \\ -\frac{1}{\lambda} & \frac{p}{\lambda^2} \end{bmatrix}.$$
Thus, the Cramér-Rao lower bound matrix (and the asymptotic variance-covariance matrix) is
$$I(p, \lambda)^{-1} = \frac{\lambda^2 \Gamma^2}{n\left[p(\Gamma''\Gamma - \Gamma'^2) - \Gamma^2\right]} \begin{bmatrix} \frac{p}{\lambda^2} & \frac{1}{\lambda} \\ \frac{1}{\lambda} & \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} \end{bmatrix}.$$
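Note that $(\Gamma''\Gamma - \Gamma'^2)/\Gamma^2 = \frac{d^2}{dp^2} \log \Gamma(p)$ is the trigamma function $\psi'(p)$. The closed-form inverse can be verified numerically by checking $I(p, \lambda)\, I(p, \lambda)^{-1} = \mathrm{Id}$; a Python sketch (trigamma approximated by a second difference of `lgamma`, parameter values arbitrary):

```python
from math import lgamma

def trigamma(p, h=1e-4):
    # (Gamma''(p)Gamma(p) - Gamma'(p)^2) / Gamma(p)^2 = d^2/dp^2 log Gamma(p),
    # approximated by a second central difference of log-gamma.
    return (lgamma(p + h) - 2 * lgamma(p) + lgamma(p - h)) / h**2

def fisher(p, lam, n=1):
    # Fisher information matrix derived above, written with psi1 = trigamma(p).
    t = trigamma(p)
    return [[n * t, -n / lam], [-n / lam, n * p / lam**2]]

def crlb(p, lam, n=1):
    # Closed-form inverse derived above: the prefactor
    # lam^2 Gamma^2 / (n [p(Gamma''Gamma - Gamma'^2) - Gamma^2])
    # equals lam^2 / (n (p * psi1(p) - 1)) after dividing through by Gamma^2.
    t = trigamma(p)
    c = lam**2 / (n * (p * t - 1))
    return [[c * p / lam**2, c / lam], [c / lam, c * t]]

p, lam = 2.5, 1.7
I, V = fisher(p, lam), crlb(p, lam)
for i in range(2):
    for j in range(2):
        prod = sum(I[i][k] * V[k][j] for k in range(2))
        assert abs(prod - (1.0 if i == j else 0.0)) < 1e-9
```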
2. Recall that the t-distribution with $k > 0$ degrees of freedom, location parameter $l$, and scale parameter $s > 0$ has density
$$\frac{\Gamma((k+1)/2)}{\Gamma(k/2)\sqrt{k\pi s^2}} \left[1 + k^{-1}\left(\frac{x - l}{s}\right)^2\right]^{-(k+1)/2}.$$
Show that the t-distribution can be written as a mixture of Gaussian distributions by letting $X \sim N(\mu, \sigma^2)$, $\tau = 1/\sigma^2 \sim \Gamma(\alpha, \beta)$, and integrating the joint density $f(x, \tau \mid \mu)$ to get the marginal density $f(x \mid \mu)$. What are the parameters of the resulting t-distribution, as functions of $\mu$, $\alpha$, $\beta$?
Solution. Recall that $\Gamma(\alpha, \beta)$ has density
$$f(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha)\beta^\alpha} x^{\alpha - 1} e^{-x/\beta}.$$
The joint density is
$$f(x, \tau \mid \mu) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x - \mu)^2/2} \cdot \frac{1}{\Gamma(\alpha)\beta^\alpha}\, \tau^{\alpha - 1} e^{-\tau/\beta},$$
and integrating out $\tau$ (the integrand is an unnormalized Gamma density in $\tau$ with shape $\alpha + \frac{1}{2}$ and rate $\frac{(x-\mu)^2}{2} + \frac{1}{\beta}$) gives
$$f(x \mid \mu) = \frac{\Gamma(\alpha + 1/2)}{\sqrt{2\pi}\,\Gamma(\alpha)\,\beta^\alpha} \left[\frac{(x - \mu)^2}{2} + \frac{1}{\beta}\right]^{-(\alpha + 1/2)} = \frac{\Gamma((2\alpha + 1)/2)}{\Gamma((2\alpha)/2)\sqrt{(2\alpha)\pi \cdot \frac{1}{\alpha\beta}}} \left[1 + (2\alpha)^{-1} \left(\frac{x - \mu}{\sqrt{1/(\alpha\beta)}}\right)^2\right]^{-(2\alpha + 1)/2},$$
which is the t-density with $k = 2\alpha$ degrees of freedom, location $l = \mu$, and scale $s = 1/\sqrt{\alpha\beta}$.
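The mixture representation can also be checked by simulation: for $\alpha > 1$ the resulting t-distribution has mean $\mu$ and variance $s^2 \cdot \frac{k}{k-2} = \frac{1}{\beta(\alpha - 1)}$. A minimal stdlib-Python sketch (the values $\alpha = 3$, $\beta = 1/2$, $\mu = 1$ are arbitrary; note that `random.gammavariate` uses the same shape-scale convention as the density above):

```python
import random
from math import sqrt

# Sample from the mixture: tau ~ Gamma(alpha, scale beta), X | tau ~ N(mu, 1/tau).
# The marginal of X should be t with k = 2*alpha degrees of freedom,
# location mu, scale 1/sqrt(alpha*beta).
random.seed(0)
alpha, beta, mu, N = 3.0, 0.5, 1.0, 200_000

xs = []
for _ in range(N):
    tau = random.gammavariate(alpha, beta)  # pdf x^(a-1) e^(-x/b) / (Gamma(a) b^a)
    xs.append(random.gauss(mu, 1 / sqrt(tau)))

mean = sum(xs) / N
var = sum((x - mean) ** 2 for x in xs) / N

# For alpha = 3, beta = 1/2: mean = mu = 1 and variance = 1/(beta*(alpha-1)) = 1.
assert abs(mean - mu) < 0.02
assert abs(var - 1 / (beta * (alpha - 1))) < 0.05
```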