Spring 2010


1. Let $X_1, \dots, X_n$ be iid $\Gamma(p, 1/\lambda)$ with density $g_\theta(x) = \frac{1}{\Gamma(p)} \lambda^p x^{p-1} e^{-\lambda x}$, $x > 0$, where $\theta = (p, \lambda)$, $p > 0$, $\lambda > 0$.

a) Find a moment estimate of the parameter.


Solution. We have
$$E X_1 = \int_0^\infty x \, \frac{1}{\Gamma(p)} \lambda^p x^{p-1} e^{-\lambda x} \, dx = \frac{1}{\Gamma(p)} \int_0^\infty (\lambda x)^p e^{-\lambda x} \, dx = \frac{1}{\lambda \Gamma(p)} \int_0^\infty u^{(p+1)-1} e^{-u} \, du = \frac{\Gamma(p+1)}{\Gamma(p)} \cdot \frac{1}{\lambda} = \frac{p}{\lambda},$$
and
$$E X_1^2 = \int_0^\infty x^2 \, \frac{1}{\Gamma(p)} \lambda^p x^{p-1} e^{-\lambda x} \, dx = \frac{1}{\lambda \Gamma(p)} \int_0^\infty (\lambda x)^{p+1} e^{-\lambda x} \, dx = \frac{1}{\lambda^2 \Gamma(p)} \int_0^\infty u^{(p+2)-1} e^{-u} \, du = \frac{\Gamma(p+2)}{\Gamma(p)} \cdot \frac{1}{\lambda^2} = \frac{p(p+1)}{\lambda^2}.$$
Setting $\frac{p}{\lambda} = \bar{X}$ and $\frac{p(p+1)}{\lambda^2} = \frac{1}{n}\sum X_i^2$, subtracting the square of the first equation from the second gives $\frac{\bar{X}}{\lambda} = \frac{1}{n}\sum X_i^2 - \bar{X}^2 = \frac{1}{n}\sum (X_i - \bar{X})^2$, and solving for $\lambda$ and $p$ gives us that the moment estimates are
$$\tilde{\theta} = (\tilde{p}, \tilde{\lambda}) = \left( \frac{\bar{X}^2}{\frac{1}{n}\sum (X_i - \bar{X})^2}, \; \frac{\bar{X}}{\frac{1}{n}\sum (X_i - \bar{X})^2} \right).$$
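As a quick numerical sanity check (not part of the original exam solution), the following Python sketch simulates Gamma data and computes these moment estimates; the parameter values, seed, and sample size are arbitrary choices, and NumPy is assumed to be available.

    import numpy as np

    rng = np.random.default_rng(0)
    p, lam, n = 3.0, 2.0, 100_000

    # Gamma(p, 1/lambda): shape p, scale 1/lambda (i.e., rate lambda)
    x = rng.gamma(shape=p, scale=1.0 / lam, size=n)

    xbar = x.mean()
    s2 = ((x - xbar) ** 2).mean()   # (1/n) * sum of (X_i - Xbar)^2

    p_tilde = xbar ** 2 / s2        # moment estimate of p
    lam_tilde = xbar / s2           # moment estimate of lambda
    print(p_tilde, lam_tilde)       # should be close to (3.0, 2.0)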

b) Show that the moment estimates, $\tilde{\theta}$, are asymptotically bivariate normal and give their asymptotic mean and variance-covariance matrix.
Solution. Note that by the multivariate Central Limit Theorem, we have that
$$\sqrt{n} \left( \begin{pmatrix} \frac{1}{n}\sum X_i \\ \frac{1}{n}\sum X_i^2 \end{pmatrix} - \begin{pmatrix} \frac{p}{\lambda} \\ \frac{p^2 + p}{\lambda^2} \end{pmatrix} \right) \Rightarrow N(0, \Sigma_\theta),$$
where
$$\Sigma_\theta = \begin{bmatrix} \mathrm{Var}(X_i) & \mathrm{Cov}(X_i, X_i^2) \\ \mathrm{Cov}(X_i, X_i^2) & \mathrm{Var}(X_i^2) \end{bmatrix} = \begin{bmatrix} \frac{p}{\lambda^2} & \frac{2p(p+1)}{\lambda^3} \\ \frac{2p(p+1)}{\lambda^3} & \frac{2p(p+1)(2p+3)}{\lambda^4} \end{bmatrix}.$$
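The entries of $\Sigma_\theta$ can be checked by Monte Carlo; the sketch below (an illustration, not part of the original solution, with arbitrary parameter values) compares the empirical covariance of $(X_i, X_i^2)$ to the closed form above.

    import numpy as np

    rng = np.random.default_rng(1)
    p, lam = 3.0, 2.0
    x = rng.gamma(shape=p, scale=1.0 / lam, size=1_000_000)

    # empirical covariance matrix of (X, X^2)
    emp = np.cov(np.vstack([x, x ** 2]))

    # closed-form Sigma_theta derived above
    theory = np.array([
        [p / lam ** 2, 2 * p * (p + 1) / lam ** 3],
        [2 * p * (p + 1) / lam ** 3, 2 * p * (p + 1) * (2 * p + 3) / lam ** 4],
    ])
    print(emp)
    print(theory)  # the two matrices agree up to Monte Carlo error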
Now, note that if $g(x, y) = \left( \frac{x^2}{y - x^2}, \frac{x}{y - x^2} \right)$, then $\tilde{\theta} = g\left( \frac{1}{n}\sum X_i, \frac{1}{n}\sum X_i^2 \right)$. Also, writing $\nabla g$ for the matrix whose columns are the gradients of the coordinates of $g$, we have that
$$\nabla g(x, y) = \frac{1}{(y - x^2)^2} \begin{bmatrix} 2xy & y + x^2 \\ -x^2 & -x \end{bmatrix},$$

and so,
$$\nabla g\left( \frac{p}{\lambda}, \frac{p(1+p)}{\lambda^2} \right) = \begin{bmatrix} 2\lambda(1+p) & \lambda^2\left(\frac{1}{p} + 2\right) \\ -\lambda^2 & -\frac{\lambda^3}{p} \end{bmatrix}.$$

Hence, by the multivariate Delta Method, we have that
$$\sqrt{n} \left( \begin{pmatrix} \tilde{p} \\ \tilde{\lambda} \end{pmatrix} - \begin{pmatrix} p \\ \lambda \end{pmatrix} \right) \Rightarrow N\left(0, \nabla g^T \Sigma_\theta \nabla g\right),$$
where the asymptotic mean is $(p, \lambda)$, and the asymptotic variance-covariance matrix is
$$\nabla g^T \Sigma_\theta \nabla g = \begin{bmatrix} 2\lambda(1+p) & -\lambda^2 \\ \lambda^2\left(\frac{1}{p} + 2\right) & -\frac{\lambda^3}{p} \end{bmatrix} \begin{bmatrix} \frac{p}{\lambda^2} & \frac{2p(p+1)}{\lambda^3} \\ \frac{2p(p+1)}{\lambda^3} & \frac{2p(p+1)(2p+3)}{\lambda^4} \end{bmatrix} \begin{bmatrix} 2\lambda(1+p) & \lambda^2\left(\frac{1}{p} + 2\right) \\ -\lambda^2 & -\frac{\lambda^3}{p} \end{bmatrix}.$$
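If desired, this delta-method covariance can also be verified numerically; this sketch (again illustrative only, with arbitrary parameter values and replication counts) compares the empirical covariance of $\sqrt{n}(\tilde{\theta} - \theta)$ over many replications to $\nabla g^T \Sigma_\theta \nabla g$.

    import numpy as np

    rng = np.random.default_rng(2)
    p, lam, n, reps = 3.0, 2.0, 1_000, 5_000

    x = rng.gamma(shape=p, scale=1.0 / lam, size=(reps, n))
    m1, m2 = x.mean(axis=1), (x ** 2).mean(axis=1)
    p_t = m1 ** 2 / (m2 - m1 ** 2)   # p-tilde in each replication
    l_t = m1 / (m2 - m1 ** 2)        # lambda-tilde in each replication

    # empirical covariance of sqrt(n) * (theta_tilde - theta)
    emp = np.cov(np.sqrt(n) * np.vstack([p_t - p, l_t - lam]))

    # delta-method covariance: columns of G are the gradients of g1, g2
    G = np.array([[2 * lam * (1 + p), lam ** 2 * (1 / p + 2)],
                  [-lam ** 2, -lam ** 3 / p]])
    S = np.array([[p / lam ** 2, 2 * p * (p + 1) / lam ** 3],
                  [2 * p * (p + 1) / lam ** 3,
                   2 * p * (p + 1) * (2 * p + 3) / lam ** 4]])
    print(emp)
    print(G.T @ S @ G)  # should match up to Monte Carlo error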
c) Compute the asymptotic variance-covariance matrix of the maximum likelihood estimates. You may leave your answer in terms of Γ function derivatives.
Solution. Since MLEs are asymptotically efficient, the Cramér-Rao lower bound matrix will be the asymptotic variance-covariance matrix of the MLEs. The likelihood function is
$$L(p, \lambda; x) = \prod_{i=1}^n \frac{1}{\Gamma(p)} \lambda^p x_i^{p-1} e^{-\lambda x_i} = \Gamma(p)^{-n} \lambda^{np} \left( \prod_{i=1}^n x_i \right)^{p-1} e^{-\lambda \sum x_i},$$

and so, the log-likelihood function is
$$\log L(p, \lambda; x) = -n \log(\Gamma(p)) + np \log \lambda + (p - 1) \sum \log x_i - \lambda \sum x_i.$$

Hence, the first partials are
$$\frac{\partial}{\partial p} \log L = -n \frac{\Gamma'(p)}{\Gamma(p)} + n \log \lambda + \sum \log x_i, \quad \text{and} \quad \frac{\partial}{\partial \lambda} \log L = \frac{np}{\lambda} - \sum x_i,$$

and the second partials are
$$\frac{\partial^2}{\partial p^2} \log L = -n \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2}, \quad \frac{\partial^2}{\partial p \, \partial \lambda} \log L = \frac{n}{\lambda}, \quad \text{and} \quad \frac{\partial^2}{\partial \lambda^2} \log L = -n \frac{p}{\lambda^2}.$$
Hence, the Fisher information matrix is
$$I(p, \lambda) = -E \begin{bmatrix} -n \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} & \frac{n}{\lambda} \\ \frac{n}{\lambda} & -n \frac{p}{\lambda^2} \end{bmatrix} = n \begin{bmatrix} \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} & -\frac{1}{\lambda} \\ -\frac{1}{\lambda} & \frac{p}{\lambda^2} \end{bmatrix}.$$

Thus, the Cramér-Rao lower bound matrix (and the asymptotic variance-covariance matrix) is
$$I(p, \lambda)^{-1} = \frac{\lambda^2 \Gamma^2}{n \left( p(\Gamma''\Gamma - \Gamma'^2) - \Gamma^2 \right)} \begin{bmatrix} \frac{p}{\lambda^2} & \frac{1}{\lambda} \\ \frac{1}{\lambda} & \frac{\Gamma''\Gamma - \Gamma'^2}{\Gamma^2} \end{bmatrix}.$$
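Since $(\Gamma''\Gamma - \Gamma'^2)/\Gamma^2 = \frac{d^2}{dp^2} \log \Gamma(p)$ is the trigamma function $\psi_1(p)$, the bound is easy to evaluate numerically; a minimal Python sketch (not part of the original solution, with arbitrary parameter values) using SciPy:

    import numpy as np
    from scipy.special import polygamma

    p, lam, n = 3.0, 2.0, 100

    # (Gamma'' Gamma - Gamma'^2) / Gamma^2 = trigamma(p), the second
    # derivative of log Gamma at p
    c = polygamma(1, p)

    I = n * np.array([[c, -1 / lam],
                      [-1 / lam, p / lam ** 2]])
    print(np.linalg.inv(I))  # Cramér-Rao lower bound matrix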
2. Recall that the t-distribution with $k > 0$ degrees of freedom, location parameter $l$, and scale parameter $s > 0$ has density
$$\frac{\Gamma((k+1)/2)}{\Gamma(k/2) \sqrt{k \pi s^2}} \left[ 1 + k^{-1} \left( \frac{x - l}{s} \right)^2 \right]^{-(k+1)/2}.$$
Show that the t-distribution can be written as a mixture of Gaussian distributions by letting $X \sim N(\mu, \sigma^2)$, $\tau = 1/\sigma^2 \sim \Gamma(\alpha, \beta)$, and integrating the joint density $f(x, \tau \mid \mu)$ to get the marginal density $f(x \mid \mu)$. What are the parameters of the resulting t-distribution, as functions of $\mu, \alpha, \beta$?
Solution. Recall that $\Gamma(\alpha, \beta)$ has density
$$f(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha) \beta^\alpha} x^{\alpha - 1} e^{-x/\beta}.$$

Then, the joint density is
$$f(x, \tau \mid \mu) = f(x \mid \tau, \mu) \cdot f(\tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau (x - \mu)^2}{2} \right) \cdot \frac{1}{\Gamma(\alpha) \beta^\alpha} \tau^{\alpha - 1} \exp\left( -\frac{\tau}{\beta} \right) = \frac{1}{\sqrt{2\pi}\, \Gamma(\alpha) \beta^\alpha} \tau^{\alpha - 1/2} \exp\left( -\tau \left[ \frac{(x - \mu)^2}{2} + \frac{1}{\beta} \right] \right).$$

Now, we integrate the joint density to get the marginal density:
$$f(x \mid \mu) = \int_0^\infty f(x, \tau \mid \mu) \, d\tau = \frac{1}{\sqrt{2\pi}\, \Gamma(\alpha) \beta^\alpha} \int_0^\infty \tau^{\alpha - 1/2} \exp\left( -\tau \left[ \frac{(x - \mu)^2}{2} + \frac{1}{\beta} \right] \right) d\tau$$
$$= \left( \sqrt{2\pi}\, \Gamma(\alpha) \beta^\alpha \right)^{-1} \left( \frac{(x - \mu)^2}{2} + \frac{1}{\beta} \right)^{-(\alpha + 1/2)} \int_0^\infty t^{\alpha - 1/2} e^{-t} \, dt$$
$$= \left( \sqrt{2\pi}\, \Gamma(\alpha) \beta^{-1/2} \right)^{-1} \left( \frac{\beta (x - \mu)^2}{2} + 1 \right)^{-(\alpha + 1/2)} \Gamma\left( \alpha + \frac{1}{2} \right)$$
$$= \frac{\Gamma((2\alpha + 1)/2)}{\Gamma((2\alpha)/2) \sqrt{2\alpha \pi \cdot \frac{1}{\alpha\beta}}} \left[ 1 + (2\alpha)^{-1} \left( \frac{x - \mu}{\sqrt{1/(\alpha\beta)}} \right)^2 \right]^{-(2\alpha + 1)/2},$$

which is the density of a t-distribution with $k = 2\alpha$ degrees of freedom, location parameter $l = \mu$, and scale parameter $s = (\alpha\beta)^{-1/2}$.
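This identity is easy to confirm numerically; the Python sketch below (illustrative only, with arbitrary parameter values) integrates the joint density over $\tau$ and compares the result to SciPy's t density with $k = 2\alpha$, $l = \mu$, $s = (\alpha\beta)^{-1/2}$.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    mu, alpha, beta = 0.5, 2.0, 1.5

    def joint(tau, x):
        # f(x | tau, mu) * f(tau), with tau ~ Gamma(shape alpha, scale beta)
        return (np.sqrt(tau / (2 * np.pi)) * np.exp(-tau * (x - mu) ** 2 / 2)
                * stats.gamma.pdf(tau, a=alpha, scale=beta))

    x = 1.3
    marginal, _ = quad(joint, 0, np.inf, args=(x,))
    t_density = stats.t.pdf(x, df=2 * alpha, loc=mu,
                            scale=1 / np.sqrt(alpha * beta))
    print(marginal, t_density)  # the two values agree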
