
GLS, WLS and OLS

1 Question: What is the Generalised Least Squares (GLS) method?

Answer: GLS is a method of finding the best estimator of β in the model Y = Xβ + u
when E(u) = 0 and E(uu′) = σ²V, where V is an n×n positive definite matrix.
To find the GLS estimator of β, the basic idea is to transform the observations y
into new variables y* which satisfy the usual assumptions, and then apply the OLS
method to the transformed variables.
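As a purely illustrative sketch (not part of the original notes): with a small simulated data set and an assumed known diagonal V, the GLS formula β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y can be computed directly and compared with OLS. The variable names, sample size and choice of V below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])        # design matrix with an intercept
V = np.diag(x ** 2)                         # assumed known error covariance (up to sigma^2)
beta_true = np.array([2.0, 0.5])
u = rng.normal(0.0, 1.0, n) * x             # heteroscedastic errors: sd proportional to x_i
y = X @ beta_true + u

Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # (X'V^-1 X)^-1 X'V^-1 y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                 # plain OLS, for comparison
print("GLS:", beta_gls, "OLS:", beta_ols)
```

Both estimators are unbiased here; the point of GLS is the smaller sampling variance when V is not proportional to the identity.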

2 Question: What do you mean by the WLS method?
Answer: WLS means the weighted least squares method. Consider the linear
regression model (in matrix notation) Y = Xβ + u, u ~ NID(0, σ²I). Sometimes these
assumptions are unreasonable, i.e. if E(ui²) ≠ σ² or if E(ui uj) ≠ 0, the ordinary
(unweighted) sum of squared residuals is inappropriate. Each sample observation
should be given a different weight, and the appropriate procedure is to minimise a
weighted sum of squared residuals, where the weights are chosen to incorporate the
effect of the disturbance terms (ui uj or ui²). The estimator so obtained is known as
the WLS estimator.
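For illustration only (made-up data and weights): minimising the weighted sum of squared residuals Σwi(yi − xi′b)² with wi equal to the reciprocal error variances gives the WLS estimator, which can equivalently be obtained by rescaling each observation by √wi and running OLS on the rescaled data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.uniform(1.0, 4.0, n)
X = np.column_stack([np.ones(n), x])
var_i = 0.5 * x                                        # assumed error variances Var(u_i) = 0.5 x_i
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.sqrt(var_i)

w = 1.0 / var_i                                        # weights = reciprocal variances
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # (X'WX)^-1 X'Wy

# equivalently: rescale rows by sqrt(w_i) and run OLS on the transformed data
Xs = X * np.sqrt(w)[:, None]
ys = y * np.sqrt(w)
beta_check = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
print(beta_wls, beta_check)                            # the two routes agree
```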

3 Question: When is the GLS method applicable?

Answer: One of the fundamental assumptions in regression about the random vector u
is that E(uu′) = σ²I, which is known as the assumption of homoscedasticity (and no
autocorrelation). If this assumption is violated, i.e. the u's are heteroscedastic or
autocorrelated, then we apply GLS to find the best estimator of β.

4 Question: Distinguish between WLS and GLS.

Answer: The distinctions between WLS and GLS are given below:

(i) The WLS estimator of β is β̂ = (X′WX)⁻¹X′Wy, where W = V⁻¹ and V is a diagonal
matrix. The GLS estimator of β is β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y with V(u) = σ²V, where V is
any positive definite (not necessarily diagonal) matrix.
(ii) WLS is just a special case of the more general estimating technique GLS.
(iii) Both the GLS and WLS estimators are unbiased and BLUE.

5 Question: Which assumption of the OLS method, when violated, necessitates the use
of the GLS method for obtaining the best estimator?

Answer: To find the OLS estimator of β in the model Y = Xβ + u, the usual
assumptions are E(u) = 0 and E(uu′) = σ²I.
The assumption E(uu′) = σ²I involves the double assumption that the
disturbance variance is constant at each observation point and that the disturbance
covariances for all possible pairs of observation points are zero.
When either one or both of these two assumptions is violated, we use the GLS
method for obtaining the best estimator.

6 Question: For the model yi = b0 + b1xi + ui, where E(ui²) = k²xi² and E(ui uj) = 0 ∀ i ≠ j,
find the OLSE and GLSE of b1 and compare their variances.

Solution: The given model is

yi = b0 + b1xi + ui

In deviation form the model is

yi = b1xi + ui;  i = 1, 2, 3, …, n

In matrix notation we can write the model as

Y = Xβ + u

where y = (y1, y2, …, yn)′, x = (x1, x2, …, xn)′ and u = (u1, u2, …, un)′.

Applying the OLS method we get

β̂ = (x′x)⁻¹x′y
  = (x′x)⁻¹x′(xβ + u)
  = β + (x′x)⁻¹x′u

Now x′x = Σxi², so (x′x)⁻¹ = 1/Σxi², and x′u = Σxiui

∴ β̂ = b̂1 = b1 + Σxiui / Σxi²

⟹ E(β̂) = b1

Now V(β̂) = E[(b̂1 − b1)(b̂1 − b1)′] = V(b̂1)
 = E[(x′x)⁻¹x′uu′x(x′x)⁻¹]
 = (1/Σxi²)·x′E(uu′)x·(1/Σxi²)
 = k²Σxi⁴ / (Σxi²)²   [since E(uu′) = k²·diag(x1², …, xn²), so x′E(uu′)x = k²Σxi⁴]

∴ V(b̂1) = k²Σxi⁴ / (Σxi²)²   … (i)

Again, from the original model

yi = b0 + b1xi + ui;  i = 1, 2, 3, …, n

In matrix notation

Y = Xβ + u

where y = (y1, …, yn)′, u = (u1, …, un)′, β = (b0, b1)′ and X is the n×2 matrix whose
i-th row is (1, xi).

Now E(ui²) = k²xi²

⟹ E(uu′) = k²·diag(x1², x2², …, xn²) = k²V (say)

where V = diag(x1², x2², …, xn²), so that

V⁻¹ = diag(1/x1², 1/x2², …, 1/xn²)

Then the GLS estimator of β is β̃ = (X′V⁻¹X)⁻¹X′V⁻¹y

Now X′V⁻¹X = [ Σ(1/xi²)   Σ(1/xi)
               Σ(1/xi)    n        ]

∴ (X′V⁻¹X)⁻¹ = adj(X′V⁻¹X) / |X′V⁻¹X|
 = 1/[nΣ(1/xi²) − (Σ1/xi)²] · [ n          −Σ(1/xi)
                                −Σ(1/xi)   Σ(1/xi²)  ]

and X′V⁻¹y = [ Σ(yi/xi²)
               Σ(yi/xi)   ]

β̃ = (X′V⁻¹X)⁻¹X′V⁻¹y
  = 1/[nΣ(1/xi²) − (Σ1/xi)²] · [ nΣ(yi/xi²) − (Σ1/xi)(Σyi/xi)
                                 (Σ1/xi²)(Σyi/xi) − (Σ1/xi)(Σyi/xi²) ]

⟹ b̃1 = [ (Σ1/xi²)(Σyi/xi) − (Σ1/xi)(Σyi/xi²) ] / [ nΣ(1/xi²) − (Σ1/xi)² ]

ν(β̃) = k²(X′V⁻¹X)⁻¹ = k²/[nΣ(1/xi²) − (Σ1/xi)²] · [ n          −Σ(1/xi)
                                                     −Σ(1/xi)   Σ(1/xi²)  ]

⟹ ν(b̃1) = k²Σ(1/xi²) / [ nΣ(1/xi²) − (Σ1/xi)² ]   … (ii)

From (i) and (ii) we get


𝜈(𝑏̂1 ) > 𝜈 (𝑏̃1 )

∴ GLS is more efficient than OLS.
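A small simulation check of this conclusion (added for illustration; the sample size, k and x values are arbitrary): with errors whose variance is k²xi², the empirical variance of the GLS slope should not exceed that of the OLS slope.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, reps = 30, 0.8, 5000
x = np.linspace(1.0, 6.0, n)
X = np.column_stack([np.ones(n), x])
Vinv = np.diag(1.0 / x ** 2)                 # V = diag(x_i^2)
b0, b1 = 1.0, 2.0

ols_slopes, gls_slopes = [], []
for _ in range(reps):
    u = rng.normal(0.0, k * x)               # sd = k*x_i, so Var(u_i) = k^2 x_i^2
    y = b0 + b1 * x + u
    ols_slopes.append(np.linalg.solve(X.T @ X, X.T @ y)[1])
    gls_slopes.append(np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)[1])

print("empirical var of OLS slope:", np.var(ols_slopes))
print("empirical var of GLS slope:", np.var(gls_slopes))   # the smaller of the two
```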

7 Question: Consider the model yi = βxi + ui with E(ui) = 0, E(ui uj) = 0 for i ≠ j and
E(ui²) = kxi², where k is a constant. Find the GLS estimator of β and show that this
estimator is unbiased with variance k/n, where n is the number of observations.

Answer: We have the model

yi = βxi + ui,  i = 1, 2, 3, …, n

In matrix notation we can write

Y = Xβ + u

where y = (y1, …, yn)′, x = (x1, …, xn)′ and u = (u1, …, un)′,

and E(ui) = 0, E(ui²) = kxi², E(ui uj) = 0 for i ≠ j

⟹ E(uu′) = k·diag(x1², x2², …, xn²) = kV (say)

where V = diag(x1², …, xn²), so V⁻¹ = diag(1/x1², …, 1/xn²)

∴ x′V⁻¹x = Σ xi·(1/xi²)·xi = 1 + 1 + … + 1 = n

Now β̂ = (x′V⁻¹x)⁻¹x′V⁻¹y
 = (x′V⁻¹x)⁻¹x′V⁻¹(xβ + u)
 = β + (x′V⁻¹x)⁻¹x′V⁻¹u
∴ E(β̂) = β + (x′V⁻¹x)⁻¹x′V⁻¹E(u) = β + 0 = β   [⸪ E(u) = 0]

Hence the GLS estimator β̂ is an unbiased estimator of β.

∴ ν(β̂) = k(x′V⁻¹x)⁻¹ = k/n
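A quick numerical illustration (with made-up values of n, k and β): for this model the GLS estimator reduces to the sample mean of the ratios yi/xi, its simulated mean is close to β (unbiasedness) and its simulated variance is close to k/n.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k, beta, reps = 25, 0.5, 1.5, 20000
x = rng.uniform(1.0, 3.0, n)
Vinv = np.diag(1.0 / x ** 2)

est = []
for _ in range(reps):
    u = rng.normal(0.0, np.sqrt(k) * x)           # Var(u_i) = k x_i^2
    y = beta * x + u
    est.append((x @ Vinv @ y) / (x @ Vinv @ x))   # GLS for the single-regressor model

est = np.array(est)
print(est.mean())                  # close to beta
print(est.var(), k / n)            # close to the theoretical variance k/n
# note: (x'V^-1 y)/(x'V^-1 x) = (1/n) * sum(y_i/x_i) here, as shown in the algebra above
```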

8 Question: State and prove Aitken's theorem (the generalised Gauss–Markov theorem) on the
GLS estimator, OR show that the GLSE is BLUE.

Statement: In the general linear regression model of full rank, Y = Xβ + u with
E(u) = 0 and ν(u) = σ²V, where V is a positive definite matrix, the GLS estimator of β
is the best linear unbiased estimator, i.e. the GLSE is BLUE.

Proof: Let us consider the regression model in matrix notation

Y = Xβ + u   (i)

under the assumptions

E(u) = 0, ν(u) = E(uu′) = σ²V

where V is a positive definite matrix. Then there exists an n×n non-singular matrix T
such that TVT′ = I.

Now pre-multiplying both sides of (i) by T we get

TY = TXβ + Tu
⟹ Y* = X*β + u*   (ii)

where Y* = TY, X* = TX and u* = Tu.

Now E(u*) = E(Tu) = T·E(u) = 0

ν(u*) = ν(Tu) = T·ν(u)·T′ = T·σ²V·T′ = σ²TVT′ = σ²I

Thus u* satisfies all the assumptions of OLS.

Now applying OLS to (ii) we get

β̂GLS = (X*′X*)⁻¹X*′Y*
     = (X′T′TX)⁻¹X′T′TY

Again we have TVT′ = I

⟹ T′T = V⁻¹

∴ β̂GLS = (X′V⁻¹X)⁻¹X′V⁻¹Y

which is the GLS estimator of β.

Linearity: Under the above assumptions the GLS estimator of β is

β̂GLS = (X′V⁻¹X)⁻¹X′V⁻¹Y = CY, where C = (X′V⁻¹X)⁻¹X′V⁻¹,

so β̂GLS is a linear function of Y, i.e. the GLSE is linear.

Unbiasedness: We have

β̂GLS = (X′V⁻¹X)⁻¹X′V⁻¹y
     = (X′V⁻¹X)⁻¹X′V⁻¹(Xβ + u)
     = (X′V⁻¹X)⁻¹(X′V⁻¹X)β + (X′V⁻¹X)⁻¹X′V⁻¹u
     = β + (X′V⁻¹X)⁻¹X′V⁻¹u
∴ E(β̂GLS) = β + (X′V⁻¹X)⁻¹X′V⁻¹E(u)
          = β + 0   [⸪ E(u) = 0]

Hence GLS estimator is unbiased.


Best estimator: Let β* = [ (x′𝜈-1x)-1x′𝜈′ + M] Y be any other linear estimator of β
where M is any arbitrary no- zero matrix i.e. E(β*) = β and
𝜈 (β̂GL) = 𝜎 2 (x*′x*)-1
= 𝜎 2 (x′T′ T x)-1
= 𝜎 2 (x′ 𝜈 -1x)-1

Now β* = [(x′ 𝜈 -1x)-1 x′ 𝜈 -1+ M]Y


= [A+M] [x β +u] where A= (x′ 𝜈 -1x)-1 x′ 𝜈 -1
= A× β + M× β+ A u + M u
= A× β + M× β + (A+M) u

∴ E(β*) = β + M × β + 0 [⸪ A × = I]
 β = β+M× β
 M× β = 0 [⸪ β ≠ = 0]
M× = 0
=> x′ M′ = 0
Now β* = A× β + M× β + (A+M) u
= β+ ∗ β + (A+M) u
= β + (A + M) u
*
=> β - β = (A + M) u

∴ 𝜈 (β ∗) = E[ (β*- β) (β*- β)′]


= E[(A+M) u u ′ (A′ +M′)]
= (A+M) E (u u ′) (A′ +M′)
= 𝜎 2 (A+M) 𝜈 (A′ +M′)
= 𝜎 2 (A 𝜈 A′ +A 𝜈 M′ + M 𝜈 A′ + M 𝜈 M′)
= 𝜎 2 [(x′ 𝜈 -1x)-1 x′ 𝜈 -1 𝜈 𝜈 -1 × (x′ 𝜈 -1x)-1 + (x′ 𝜈 -1x)-1 x′ 𝜈 -1 𝜈 M′+ M 𝜈 𝜈 -1
×(x′ 𝜈 -1x)-1+ M 𝜈 M′]
= 𝜎 2 [(x′ 𝜈 -1x)-1 + 0 + 0 + M 𝜈 M′]
= 𝜎 2 (x′ 𝜈 -1x)-1+𝜎 2 M 𝜈 M′
= 𝜈 (β̂GLS) + Positive quantity

∴ 𝜈 (β ∗) > 𝜈 (β̂GLS) [ Hence GLS is blue under non spherical disturbance]
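As a hedged illustration of the transformation used in the proof (all data below are simulated): if V = PP′ is a Cholesky factorisation (one valid choice of the matrix whose existence the proof uses), then T = P⁻¹ satisfies TVT′ = I, and OLS on the transformed data (TX, TY) reproduces the direct GLS formula.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# an arbitrary positive definite V (AR(1)-style correlations, purely for illustration)
rho = 0.6
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P = np.linalg.cholesky(V)            # V = P P'
u = P @ rng.normal(size=n)           # errors with covariance V (taking sigma^2 = 1)
y = X @ np.array([1.0, -2.0]) + u

T = np.linalg.inv(P)                 # then T V T' = I
Xs, ys = T @ X, T @ y                # transformed data have spherical errors

beta_via_transform = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
Vinv = np.linalg.inv(V)
beta_direct_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(np.allclose(beta_via_transform, beta_direct_gls))   # True: the two routes agree
```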

9 Question: For a two-variable regression model with a first-order autoregressive
scheme, show that the GLS estimator is more efficient than the OLSE.
OR, for the model Yi = iθ + ui where ν(ui) = ki, i = 1, 2, 3, …, n, and
Cov(ui, uj) = 0 ∀ i ≠ j, compare the OLS and GLS estimators of θ.

Answer: The model is Yi = iθ + ui.

In matrix notation we can write Y = xθ + u,

where y = (y1, …, yn)′, x = (1, 2, …, n)′ and u = (u1, …, un)′.

Here E(u) = 0 and E(uu′) = diag(k, 2k, 3k, …, nk) = kV (say),

where V = diag(1, 2, 3, …, n), so V⁻¹ = diag(1, 1/2, 1/3, …, 1/n).

Now the GLS estimator of θ is

θ* = (x′V⁻¹x)⁻¹x′V⁻¹y

x′V⁻¹x = Σ i·(1/i)·i = 1 + 2 + 3 + … + n = n(n+1)/2

∴ (x′V⁻¹x)⁻¹ = 2/[n(n+1)]  and  x′V⁻¹y = Σ i·(1/i)·yi = Σyi

θ* = (x′V⁻¹x)⁻¹x′V⁻¹y = 2Σyi / [n(n+1)]

ν(θ*) = k(x′V⁻¹x)⁻¹ = 2k/[n(n+1)]

Again, applying OLS,

θ̂ = (x′x)⁻¹x′y

x′x = 1² + 2² + … + n² = n(n+1)(2n+1)/6

x′y = y1 + 2y2 + 3y3 + … + nyn = Σ i·yi

∴ θ̂ = (x′x)⁻¹x′y = [6/(n(n+1)(2n+1))]·Σ i·yi

and ν(θ̂) = k(x′x)⁻¹x′Vx(x′x)⁻¹

Now x′Vx = Σ i·i·i = 1³ + 2³ + … + n³ = {n(n+1)/2}²

∴ ν(θ̂) = k·{n(n+1)/2}²·{6/[n(n+1)(2n+1)]}² = 9k/(2n+1)²

Comparing the two variances,

ν(θ̂) ≥ ν(θ*) ⟺ 9/(2n+1)² ≥ 2/[n(n+1)] ⟺ 9n(n+1) ≥ 2(2n+1)² ⟺ n² + n − 2 ≥ 0
⟺ (n − 1)(n + 2) ≥ 0, which holds for every n ≥ 1.

⟹ ν(θ*) ≤ ν(θ̂)

Hence GLS is more efficient than OLS.
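A quick check (added, not in the original) that the two closed-form variances behave as claimed, taking k = 1 without loss of generality: ν(θ*) = 2k/[n(n+1)] never exceeds ν(θ̂) = 9k/(2n+1)², with equality at n = 1.

```python
k = 1.0
for n in range(1, 11):
    v_gls = 2 * k / (n * (n + 1))          # variance of the GLS estimator theta*
    v_ols = 9 * k / (2 * n + 1) ** 2       # variance of the OLS estimator theta-hat
    print(n, round(v_gls, 4), round(v_ols, 4), v_gls <= v_ols)   # last column is always True
```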

10 Question: For the model Yi = βxi + ui, ui ~ NID(0, iσ²), i = 1, 2, if x1 = 1
and x2 = −1, obtain the GLS estimator of β and find its variance.

Solution: We have the model

yi = βxi + ui,  i = 1, 2

With x1 = 1 and x2 = −1 we get

y1 = β + u1  and  y2 = −β + u2

In matrix notation

Y = Xβ + u, with X = (1, −1)′

Now ui ~ NID(0, iσ²)

∴ E(u) = 0 and E(uu′) = diag(σ², 2σ²) = σ²V,

where V = diag(1, 2) and V⁻¹ = diag(1, 1/2).

Thus the GLS estimator of β is

β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y

X′V⁻¹X = 1·1·1 + (−1)·(1/2)·(−1) = 1 + 1/2 = 3/2

X′V⁻¹y = y1 − y2/2

∴ β̂ = (2/3)(y1 − y2/2) = (2/3)y1 − (1/3)y2

∴ V(β̂) = σ²(X′V⁻¹X)⁻¹ = σ²·(2/3) = 2σ²/3

11 Question: Let yi ~ NID(θ, σ²/ωi), i = 1, 2, …, n. Find the linear unbiased
estimator of θ with minimum variance.

Answer: Since yi ~ NID(θ, σ²/ωi), i = 1, 2, …, n, we have the linear model yi = θ + ui;
i = 1, 2, …, n, with E(ui) = 0, E(ui²) = σ²/ωi and E(ui uj) = 0 ∀ i ≠ j. In matrix notation
we can write this model as

Y = Xθ + u

where Y = (y1, …, yn)′, X = (1, 1, …, 1)′ and u = (u1, …, un)′.

E(u) = 0, E(uu′) = diag(σ²/ω1, σ²/ω2, …, σ²/ωn) = σ²V (say)

where V = diag(1/ω1, 1/ω2, …, 1/ωn)

⟹ V⁻¹ = diag(ω1, ω2, …, ωn)

The GLS estimator of θ is

θ̂ = (X′V⁻¹X)⁻¹X′V⁻¹y

X′V⁻¹X = Σωi  and  X′V⁻¹y = Σωiyi

∴ θ̂ = Σωiyi / Σωi  (the weighted mean of the yi)

and ν(θ̂) = σ²(X′V⁻¹X)⁻¹ = σ²/Σωi

Being the GLS estimator, θ̂ = Σωiyi/Σωi is the linear unbiased estimator of θ with
minimum variance.
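A tiny sketch with hypothetical numbers: the minimum-variance linear unbiased estimator derived above is just the precision-weighted mean of the observations.

```python
import numpy as np

y = np.array([4.2, 3.8, 4.5, 4.0])       # hypothetical observations y_i
w = np.array([1.0, 0.5, 2.0, 1.0])       # hypothetical precisions omega_i
theta_hat = np.sum(w * y) / np.sum(w)    # weighted mean = GLS estimator of theta
var_theta_hat = 1.0 / np.sum(w)          # sigma^2 / sum(omega_i), shown with sigma^2 = 1
print(theta_hat, var_theta_hat)
```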

12 Question: Consider the model Y = Xβ + u, where u is distributed as MN(0, σ²V).
Show that the maximum likelihood estimator b of β is found by choosing b to
minimise (y − Xb)′V⁻¹(y − Xb), and show that this gives b̂ = (X′V⁻¹X)⁻¹X′V⁻¹y. Also obtain
the variance–covariance matrix of b̂.

Solution: Given the model

Y = Xβ + u   … (1)

where E(u) = 0 and V(u) = σ²V.

Since u ~ MN(0, σ²V), the log-likelihood of the sample is

ln L = −(n/2)·ln(2πσ²) − (1/2)·ln|V| − (1/2σ²)·(y − Xb)′V⁻¹(y − Xb)

For given σ² and V, only the last term involves b, so maximising ln L with respect to b
is equivalent to minimising

E = (y − Xb)′V⁻¹(y − Xb)
  = (y′ − b′X′)V⁻¹(y − Xb)
  = y′V⁻¹y − y′V⁻¹Xb − b′X′V⁻¹y + b′X′V⁻¹Xb

Now dE/db = −2X′V⁻¹y + 2X′V⁻¹Xb = 0

⟹ X′V⁻¹Xb = X′V⁻¹y
⟹ b̂ = (X′V⁻¹X)⁻¹X′V⁻¹y

which is the maximum likelihood estimator of β.

Again b̂ = (X′V⁻¹X)⁻¹X′V⁻¹y
 = (X′V⁻¹X)⁻¹X′V⁻¹(Xβ + u)
 = β + (X′V⁻¹X)⁻¹X′V⁻¹u

E(b̂) = β + (X′V⁻¹X)⁻¹X′V⁻¹E(u)
     = β + 0
     = β

so b̂ is an unbiased estimator of β.

Again V(b̂) = E{[b̂ − E(b̂)][b̂ − E(b̂)]′}
 = E[(b̂ − β)(b̂ − β)′]
 = E[(X′V⁻¹X)⁻¹X′V⁻¹uu′V⁻¹X(X′V⁻¹X)⁻¹]
 = (X′V⁻¹X)⁻¹X′V⁻¹E(uu′)V⁻¹X(X′V⁻¹X)⁻¹
 = σ²(X′V⁻¹X)⁻¹(X′V⁻¹X)(X′V⁻¹X)⁻¹
 = σ²(X′V⁻¹X)⁻¹

The variance–covariance matrix of b̂ is therefore

V(b̂) = σ²(X′V⁻¹X)⁻¹

13 Question: Consider the model Y = Xβ + u where E(u) = 0 and V(u) = σ²V. Obtain the
GLS estimator of the parameters.

Answer: The OLSE of β is β̂ = (X′X)⁻¹X′y. An alternative approach to this problem is to
transform the model into a new set of observations that satisfy the standard least
squares assumptions, and then use OLS on the transformed data. Since σ²V is the
dispersion matrix of the errors, V must be positive definite, so there exists an n×n
non-singular matrix P such that

PP′ = V

Let T be a transformation matrix applied to both sides of the model:

Y = Xβ + u

TY = TXβ + Tu

Let TY = Z, TX = W, Tu = C. Then the model becomes

Z = Wβ + C   … (2)

Now E(C) = E(Tu) = T·E(u) = 0

E(CC′) = E(Tuu′T′) = T·E(uu′)·T′

       = T·σ²V·T′

       = σ²TVT′

Now if we choose T such that TVT′ = I, then E(CC′) = σ²I and we can apply the OLS
method to the transformed model (2).

Since P is non-singular, P⁻¹ exists. Now PP′ = V

⟹ P⁻¹V(P′)⁻¹ = I

Hence the appropriate choice is T = P⁻¹.

Now applying OLS to the transformed model Z = Wβ + C, the GLS estimator of β is

β* = (W′W)⁻¹W′Z

   = (X′T′TX)⁻¹X′T′TY   [⸪ W = TX, Z = TY]

Now T′T = (P⁻¹)′P⁻¹

        = (PP′)⁻¹

        = V⁻¹

⸫ β* = (X′V⁻¹X)⁻¹X′V⁻¹Y

which is the GLS estimator.


14 Question: Suppose that Yi = β1 + β2xi + ui; i = 1, 2, …, n, with E(ui) = 0,
E(ui uj) = 0 ∀ i ≠ j and E(ui²) = σ²/xi ∀ i = j.

Find the BLUE of β1 and β2.

Solution: We have the model

Yi = β1 + β2xi + ui;  i = 1, 2, …, n

In matrix notation we can write the model as y = Xβ + u,

where y = (y1, …, yn)′, u = (u1, …, un)′, β = (β1, β2)′ and X is the n×2 matrix whose
i-th row is (1, xi),

with E(u) = 0 and E(uu′) = σ²·diag(1/x1, 1/x2, …, 1/xn) = σ²V (say),

where V = diag(1/x1, …, 1/xn) and V⁻¹ = diag(x1, x2, …, xn).

We know the GLS estimator of β is

β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y

X′V⁻¹X = [ Σxi    Σxi²
           Σxi²   Σxi³ ]

⸫ (X′V⁻¹X)⁻¹ = 1/[ΣxiΣxi³ − (Σxi²)²] · [  Σxi³   −Σxi²
                                          −Σxi²    Σxi   ]

⸫ X′V⁻¹y = [ Σxiyi
             Σxi²yi ]

β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y
  = 1/[ΣxiΣxi³ − (Σxi²)²] · [ Σxi³·Σxiyi − Σxi²·Σxi²yi
                              Σxi·Σxi²yi − Σxi²·Σxiyi  ]

⸫ β̂1 = [Σxi³·Σxiyi − Σxi²·Σxi²yi] / [ΣxiΣxi³ − (Σxi²)²]

β̂2 = [Σxi·Σxi²yi − Σxi²·Σxiyi] / [ΣxiΣxi³ − (Σxi²)²]

which are the GLS estimators, and hence the BLUEs, of β1 and β2.

V(β̂) = σ²(X′V⁻¹X)⁻¹ = σ²/[ΣxiΣxi³ − (Σxi²)²] · [  Σxi³   −Σxi²
                                                   −Σxi²    Σxi   ]

⸫ V(β̂1) = σ²Σxi³ / [ΣxiΣxi³ − (Σxi²)²]  and  V(β̂2) = σ²Σxi / [ΣxiΣxi³ − (Σxi²)²]
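An illustrative check with simulated data (all numbers arbitrary): the closed-form expressions above agree with the matrix form of the GLS/WLS fit using V⁻¹ = diag(xi).

```python
import numpy as np

rng = np.random.default_rng(11)
n = 40
x = rng.uniform(1.0, 5.0, n)
sigma = 0.7
u = rng.normal(0.0, sigma / np.sqrt(x))     # Var(u_i) = sigma^2 / x_i
y = 1.0 + 2.0 * x + u

# matrix route
X = np.column_stack([np.ones(n), x])
Vinv = np.diag(x)
beta_matrix = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# closed-form route from the derivation above
Sx, Sx2, Sx3 = x.sum(), (x**2).sum(), (x**3).sum()
Sxy, Sx2y = (x * y).sum(), (x**2 * y).sum()
den = Sx * Sx3 - Sx2**2
b1 = (Sx3 * Sxy - Sx2 * Sx2y) / den
b2 = (Sx * Sx2y - Sx2 * Sxy) / den
print(beta_matrix, (b1, b2))                # the two routes coincide
```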

15 Question: What do you mean by the OLS method?

Answer: Let (xi, yi), i = 1, 2, …, n, be n pairs of observations that satisfy the
two-variable regression model

Yi = α + βXi + ui;  i = 1, 2, …, n

The values of α and β that minimise the sum of squared residuals Σui² are
defined to be the least squares estimators of α and β. So the method which minimises
the sum of squared residuals Σui² is called the ordinary least squares (OLS) method,
or the principle of least squares.
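A minimal worked sketch with arbitrary numbers: choosing α̂ and β̂ to minimise Σui² gives the familiar formulas β̂ = Σ(xi − x̄)(yi − ȳ)/Σ(xi − x̄)² and α̂ = ȳ − β̂x̄.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
residuals = y - (alpha_hat + beta_hat * x)
print(alpha_hat, beta_hat, np.sum(residuals ** 2))   # the minimised sum of squared residuals
```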

16 Question: Distinguish between regression and causation.

Answer: Regression analysis deals with the dependence of one variable on other
variables, but it does not necessarily imply causation.

A statistical relationship, however strong and however suggestive, can
never by itself establish a causal connection: our ideas of causation must come from
outside statistics, ultimately from some theory or other.

17 Question: Distinguish between linear and non-linear regression
models.

Answer: The distinctions between linear and non-linear regression models are as
follows:

Linear model:
(i) A model which is linear in the parameters is called a linear model.
Example: Yi = β0 + β1x1i + β2x2i + ⋯ + βkxki + ui
(ii) The model can be fitted by the OLS method.
(iii) The parameters of such a model can easily be estimated.

Non-linear model:
(i) A model which is not linear in the parameters is called a non-linear model.
Example: Yi = β1·e^(β2xi) + ui
(ii) The model cannot be fitted by the OLS method.
(iii) The parameters of such a model cannot easily be estimated.

18 Question: Construct an ANOVA table for the k-variable regression
model.

Answer: Let us consider the model

yi = β0 + β1x1i + β2x2i + ⋯ + βkxki + ui;  i = 1, 2, …, n   (1)

After fitting the model we have

yi = ŷi + ei   (2)

Now squaring and summing both sides (in deviation form),

Σyi² = Σŷi² + Σei² + 2Σŷiei

     = Σŷi² + Σei²   [since Σŷiei = 0]

Σyi² = RSS + ESS

i.e. TSS = RSS + ESS, where TSS = Σyi² is the total sum of squares, RSS = Σŷi² is the
regression (explained) sum of squares and ESS = Σei² is the error (residual) sum of squares.

S.V.         d.f.     SS            MS                   F

Regression   k − 1    RSS = Σŷi²    MSR = RSS/(k − 1)    F = MSR/MSE ~ F(k − 1, n − k)
Error        n − k    ESS = Σei²    MSE = ESS/(n − k)
Total        n − 1    TSS = Σyi²
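A minimal sketch with simulated data (here k counts all estimated coefficients, including the intercept, to match the degrees of freedom in the table above): it builds TSS, RSS and ESS in deviation form and the F statistic MSR/MSE.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 60, 3                                  # n observations, k coefficients in total
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -1.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat
e = y - y_hat

TSS = np.sum((y - y.mean()) ** 2)             # total SS (deviation form)
RSS = np.sum((y_hat - y.mean()) ** 2)         # regression (explained) SS
ESS = np.sum(e ** 2)                          # error (residual) SS
MSR, MSE = RSS / (k - 1), ESS / (n - k)
print(round(TSS, 3), round(RSS + ESS, 3))     # TSS = RSS + ESS
print("F(", k - 1, ",", n - k, ") =", round(MSR / MSE, 3))
```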

19 Question: Consider the following formulations of the two-variable
population regression function:

Model I:  yi = β1 + β2xi + ui

Model II: yi = α1 + α2(xi − x̄) + ui

(a) Find the estimators of β1 and α1. Are their variances identical?
(b) Find the estimators of β2 and α2. Are their variances identical?
(c) What is the advantage, if any, of Model II over Model I?
Answer: The given models are

yi = β1 + β2xi + ui   … (i)

yi = α1 + α2(xi − x̄) + ui   … (ii)

For Model I: Let β̂1 and β̂2 be the OLS estimators of β1 and β2 respectively. Then the
fitted model is

yi = β̂1 + β̂2xi + ei

⟹ Σei² = Σ(yi − β̂1 − β̂2xi)²

Minimising Σei² with respect to β̂1 and β̂2, set the partial derivatives equal to zero:

∂Σei²/∂β̂1 = 0
⟹ −2Σ(yi − β̂1 − β̂2xi) = 0
⟹ Σyi − nβ̂1 − β̂2Σxi = 0
⟹ nβ̂1 = Σyi − β̂2Σxi
⟹ β̂1 = Σyi/n − β̂2·Σxi/n
⟹ β̂1 = ȳ − β̂2x̄   … (iii)

Again,

∂Σei²/∂β̂2 = 0
⟹ −2Σ(yi − β̂1 − β̂2xi)xi = 0
⟹ Σxiyi − β̂1Σxi − β̂2Σxi² = 0
⟹ Σxiyi − (ȳ − β̂2x̄)Σxi − β̂2Σxi² = 0   [using (iii)]
⟹ Σxiyi − ΣxiΣyi/n + β̂2(Σxi)²/n − β̂2Σxi² = 0
⟹ β̂2{Σxi² − (Σxi)²/n} = Σxiyi − ΣxiΣyi/n
⟹ β̂2 = [Σxiyi − ΣxiΣyi/n] / [Σxi² − (Σxi)²/n]
⟹ β̂2 = [Σxiyi − x̄Σyi] / Σ(xi − x̄)²
⟹ β̂2 = Σ(xi − x̄)yi / Σ(xi − x̄)²
⟹ β̂2 = Σwiyi   [where wi = (xi − x̄)/Σ(xi − x̄)², so that Σwi = 0 and Σwixi = 1]
⟹ β̂2 = Σwi(β1 + β2xi + ui)
⟹ β̂2 = β1Σwi + β2Σwixi + Σwiui
⟹ β̂2 = 0·β1 + β2·1 + Σwiui
⟹ β̂2 = β2 + Σwiui

⸫ E(β̂2) = β2 + ΣwiE(ui) = β2 + 0 = β2

⸫ β̂2 is an unbiased estimator of β2.

Again β̂1 = ȳ − β̂2x̄

⟹ β̂1 = Σyi/n − x̄Σwiyi
⟹ β̂1 = Σ(1/n − x̄wi)yi
⟹ β̂1 = Σ(1/n − x̄wi)(β1 + β2xi + ui)
⟹ β̂1 = β1 − β1x̄Σwi + β2Σxi/n − β2x̄Σwixi + Σ(1/n − x̄wi)ui
⟹ β̂1 = β1 + β2x̄ − β2x̄ + Σ(1/n − x̄wi)ui   [since Σwi = 0 and Σwixi = 1]
⟹ β̂1 = β1 + Σ(1/n − x̄wi)ui

⸫ E(β̂1) = β1 + Σ(1/n − x̄wi)E(ui) = β1 + 0 = β1

⸫ β̂1 is an unbiased estimator of β1.

Now the variances:

v(β̂1) = E(β̂1 − β1)²
 = E[β1 + Σ(1/n − x̄wi)ui − β1]²
 = E[Σ(1/n − x̄wi)ui]²
 = Σ(1/n − x̄wi)²E(ui²)   [since E(uiuj) = 0 for i ≠ j]
 = σ²Σ(1/n² − 2x̄wi/n + x̄²wi²)
 = σ²[n·(1/n²) − (2x̄/n)Σwi + x̄²Σwi²]
 = σ²[1/n − 0 + x̄²·Σ(xi − x̄)²/{Σ(xi − x̄)²}²]
 = σ²[1/n + x̄²/Σ(xi − x̄)²]
 = σ²[Σ(xi − x̄)² + nx̄²] / [nΣ(xi − x̄)²]
 = σ²Σxi² / [nΣ(xi − x̄)²]   [since Σ(xi − x̄)² + nx̄² = Σxi²]

⸫ v(β̂1) = σ²Σxi² / [nΣ(xi − x̄)²]

Again v(β̂2) = E(β̂2 − β2)²
 = E[β2 + Σwiui − β2]²
 = Σwi²E(ui²)
 = σ²Σwi²
 = σ²/Σ(xi − x̄)²   [since Σwi² = Σ(xi − x̄)²/{Σ(xi − x̄)²}² = 1/Σ(xi − x̄)²]

For Model II:

yi = α1 + α2(xi − x̄) + ui

Let α̂1 and α̂2 be the OLS estimators of α1 and α2 respectively; then the fitted model is

yi = α̂1 + α̂2(xi − x̄) + ei

Σei² = Σ[yi − α̂1 − α̂2(xi − x̄)]²   … (1)

Now differentiating (1) with respect to α̂1 and α̂2 and setting the derivatives equal to zero:

∂Σei²/∂α̂1 = 0
⟹ −2Σ{yi − α̂1 − α̂2(xi − x̄)} = 0
⟹ Σyi − nα̂1 − α̂2Σ(xi − x̄) = 0
⟹ nα̂1 = Σyi   [since Σ(xi − x̄) = 0]
⟹ α̂1 = ȳ

Again

∂Σei²/∂α̂2 = 0
⟹ −2Σ{yi − α̂1 − α̂2(xi − x̄)}(xi − x̄) = 0
⟹ Σ(xi − x̄)yi − α̂1Σ(xi − x̄) − α̂2Σ(xi − x̄)² = 0
⟹ Σ(xi − x̄)yi = α̂2Σ(xi − x̄)²   [as Σ(xi − x̄) = 0]
⟹ α̂2 = Σ(xi − x̄)yi / Σ(xi − x̄)²
⟹ α̂2 = Σwiyi   [with the same wi = (xi − x̄)/Σ(xi − x̄)² as before]
⟹ α̂2 = Σwi{α1 + α2(xi − x̄) + ui}
⟹ α̂2 = α1Σwi + α2Σwi(xi − x̄) + Σwiui
⟹ α̂2 = 0 + α2Σwixi − x̄α2Σwi + Σwiui
⟹ α̂2 = α2 + Σwiui   [Σwixi = 1 and Σwi = 0]

⸫ E(α̂2) = α2

Thus α̂2 is an unbiased estimator of α2.

Now V(α̂2) = E(α̂2 − α2)²
 = E(α2 + Σwiui − α2)²
 = E(Σwiui)²
 = Σwi²E(ui²)
 = σ²Σwi²
 = σ²/Σ(xi − x̄)²   [since Σwi² = 1/Σ(xi − x̄)²]

Now α̂1 = ȳ = Σyi/n
 = (1/n)Σ{α1 + α2(xi − x̄) + ui}
 = (1/n)·nα1 + (α2/n)·Σ(xi − x̄) + (1/n)Σui
 = α1 + α2·0 + Σui/n

⟹ α̂1 = α1 + Σui/n

E(α̂1) = α1 + (1/n)ΣE(ui) = α1 + 0 = α1

⸫ α̂1 is an unbiased estimator of α1.

And v(α̂1) = v(ȳ) = E(α̂1 − α1)² = E(Σui/n)² = (1/n²)ΣE(ui²) = σ²/n

i.e. v(α̂1) = σ²/n  and  v(α̂2) = σ²/Σ(xi − x̄)²

Comment:

(a) β̂1 and α̂1 are not identical, because α̂1 = ȳ while β̂1 = ȳ − β̂2x̄, which are not
equal in general. Their variances are not identical either: v(α̂1) = σ²/n while
v(β̂1) = σ²Σxi²/[nΣ(xi − x̄)²].
(b) Yes, β̂2 and α̂2 are identical, because β̂2 = α̂2 = Σ(xi − x̄)(yi − ȳ)/Σ(xi − x̄)², and
their variances are also identical: v(β̂2) = v(α̂2) = σ²/Σ(xi − x̄)².
(c) Model II has an advantage over Model I: with the regressor in deviation form, the
intercept estimator is simply α̂1 = ȳ, and its variance σ²/n never exceeds
v(β̂1) = σ²Σxi²/[nΣ(xi − x̄)²] (since Σxi² ≥ Σ(xi − x̄)²), so the intercept is estimated
more simply and at least as precisely, while the slope estimator is unchanged.
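An illustrative numerical check with made-up data of the conclusions above: the two parameterisations give the same slope, the Model II intercept is simply ȳ, and β̂1 = α̂1 − β̂2x̄.

```python
import numpy as np

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0, 11.0])
y = np.array([3.1, 5.2, 5.9, 8.4, 9.7, 12.1])
xc = x - x.mean()                               # deviation-form regressor of Model II

# Model I: y regressed on (1, x)
b2 = np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)
b1 = y.mean() - b2 * x.mean()

# Model II: y regressed on (1, x - xbar)
a2 = np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)
a1 = y.mean()                                   # the Model II intercept is just ybar

print(np.isclose(b2, a2))                       # identical slope estimators
print(np.isclose(b1, a1 - b2 * x.mean()))       # beta1-hat = alpha1-hat - beta2-hat * xbar
```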
