Assignment Id 1810047
1. Define order statistics, sample range, sample mid-range, and the smallest and largest
order statistics.
Answer:
Order statistics: Order statistics are the statistics obtained when the sample values are
placed in ascending order. Let X_1, X_2, ..., X_n be a random sample of size n from a
distribution of continuous type having pdf f(x), a < x < b. Let X_(1) be the smallest of the
X_i, X_(2) the second smallest, ..., and X_(n) the largest of the X_i.
So, the order statistics are defined as
X_(1) < X_(2) < ... < X_(n); a < x < b
Sample range: The sample range is the distance between the smallest and largest
order statistics, i.e.
R = X_{n:n} − X_{1:n}
Sample mid-range: The sample mid-range is the mean of the smallest and largest
order statistics, i.e.
m = (X_{1:n} + X_{n:n})/2
Smallest and largest order statistics: Let X_{1:n}, X_{2:n}, X_{3:n}, ..., X_{n:n} be the order
statistics. The smallest of the order statistics is denoted by X_{1:n} and the largest of
the order statistics is denoted by X_{n:n}.
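As a quick illustration of these definitions (a sketch; the sample distribution and the sample size are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=10)   # X1, ..., Xn
x_ord = np.sort(x)                   # X(1) <= X(2) <= ... <= X(n)

smallest, largest = x_ord[0], x_ord[-1]    # X(1:n) and X(n:n)
sample_range = largest - smallest          # R = X(n:n) - X(1:n)
mid_range = (smallest + largest) / 2       # m = (X(1:n) + X(n:n)) / 2

print(sample_range, mid_range)
```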
3. Derive the pdf of the r-th order statistic, and prove that it is a pdf.
Answer:
Derivation of the pdf of the r-th order statistic:
Let us assume that X_1, X_2, ..., X_n is a random sample from an absolutely
continuous population with probability density function (pdf) f(x)
and cumulative distribution function (cdf) F(x). Let X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n} be
the order statistics obtained by arranging the preceding random sample in
increasing order of magnitude. Then, the event x < X_{r:n} ≤ x + δx is essentially the
same as the following event: r − 1 of the X_i fall below x, exactly one falls in
(x, x + δx], and the remaining n − r fall above x + δx. Dividing by δx and letting
δx → 0 gives
f_{r:n}(x) = n!/((r−1)!(n−r)!) · {F(x)}^{r−1} {1 − F(x)}^{n−r} f(x); a < x < b
Proof that it is a pdf:
∴ ∫ f_{r:n}(x) dx
= n!/((r−1)!(n−r)!) ∫_a^b {F(x)}^{r−1} {1 − F(x)}^{n−r} f(x) dx

Let, F(x) = z, ⇒ dz/dx = f(x), ⇒ f(x)dx = dz; x: a (−∞) → b (∞) corresponds to z: 0 → 1.

= n!/((r−1)!(n−r)!) ∫_0^1 z^{r−1} (1 − z)^{n−r} dz
= n!/((r−1)!(n−r)!) · β(r, n−r+1)
= n!/((r−1)!(n−r)!) · (r−1)!(n−r)!/n!
= 1
∴ ∫ f_{r:n}(x) dx = 1
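The normalization just proved can also be checked numerically. The sketch below (my own choice: F is the Uniform(0,1) cdf, with r = 3 and n = 7) integrates f_{r:n} and should return approximately 1:

```python
from math import factorial
from scipy.integrate import quad

def f_r(x, r, n):
    # pdf of the r-th order statistic for Uniform(0,1): F(x) = x, f(x) = 1
    c = factorial(n) / (factorial(r - 1) * factorial(n - r))
    return c * x**(r - 1) * (1 - x)**(n - r)

total, _ = quad(f_r, 0.0, 1.0, args=(3, 7))
print(total)  # ≈ 1
```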
4. Derive the joint density function of r-th and s-th order statistics.
Solution:
Derivation of the joint density function of the r-th and s-th order statistics:
Let us first visualize the event (x_r < X_{r:n} ≤ x_r + ∂x_r, x_s < X_{s:n} ≤ x_s + ∂x_s) as
follows:
X_i ≤ x_r for r − 1 of the X_i's; x_r < X_i ≤ x_r + ∂x_r for exactly one of the X_i's;
x_r + ∂x_r < X_i ≤ x_s for s − r − 1 of the X_i's; x_s < X_i ≤ x_s + ∂x_s for exactly one of the
X_i's; and X_i > x_s + ∂x_s for the remaining n − s of the X_i's. By considering ∂x_r
and ∂x_s to be both small, we may write
P(x_r < X_{r:n} ≤ x_r + ∂x_r, x_s < X_{s:n} ≤ x_s + ∂x_s)
= n!/((r−1)!(s−r−1)!(n−s)!) {F(x_r)}^{r−1} {F(x_s) − F(x_r)}^{s−r−1} {1 − F(x_s)}^{n−s} f(x_r) f(x_s) ∂x_r ∂x_s
+ O((∂x_r)² ∂x_s) + O(∂x_r (∂x_s)²)
Here the O((∂x_r)²)∂x_s and O((∂x_s)²)∂x_r terms are higher-order terms which correspond
to the probabilities of the event of having more than one X_i in the interval
(x_r, x_r + ∂x_r], and of the event of having more than one X_i in (x_s, x_s + ∂x_s].
Dividing by ∂x_r ∂x_s and letting both tend to zero,
f_{r,s:n}(x_r, x_s) = n!/((r−1)!(s−r−1)!(n−s)!) {F(x_r)}^{r−1} {F(x_s) − F(x_r)}^{s−r−1} {1 − F(x_s)}^{n−s} f(x_r) f(x_s); x_r < x_s
5. Prove that the joint density of the r-th and s-th order statistics is a pdf.
Answer:
∫∫ f_{r,s:n}(x, y) dx dy
= n!/((r−1)!(s−r−1)!(n−s)!) ∫_{−∞}^{∞} ∫_{−∞}^{y} {F(x)}^{r−1} {F(y) − F(x)}^{s−r−1} {1 − F(y)}^{n−s} f(x) f(y) dx dy

For the inner integral, let F(x) = w, ⇒ f(x)dx = dw; x: −∞ → y corresponds to w: 0 → F(y). Then

∫_{−∞}^{y} {F(x)}^{r−1} {F(y) − F(x)}^{s−r−1} f(x) dx = ∫_0^{F(y)} w^{r−1} (F(y) − w)^{s−r−1} dw
= {F(y)}^{s−1} β(r, s−r) = {F(y)}^{s−1} (r−1)!(s−r−1)!/(s−1)!

Hence

∫∫ f_{r,s:n}(x, y) dx dy = n!/((s−1)!(n−s)!) ∫_{−∞}^{∞} {F(y)}^{s−1} {1 − F(y)}^{n−s} f(y) dy

Now let F(y) = z, ⇒ f(y)dy = dz; y: −∞ → ∞ corresponds to z: 0 → 1. Then

= n!/((s−1)!(n−s)!) ∫_0^1 z^{s−1} (1 − z)^{n−s} dz
= n!/((s−1)!(n−s)!) · β(s, n−s+1)
= n!/((s−1)!(n−s)!) · (s−1)!(n−s)!/n!
= 1
(proved)
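The double integral above can be checked numerically as well. The sketch below (my own choices: Uniform(0,1) parent, r = 2, s = 5, n = 8) integrates the joint density over 0 < x < y < 1 and should give approximately 1:

```python
from math import factorial
from scipy.integrate import dblquad

def f_rs(y, x, r, s, n):
    # joint density of (X(r:n), X(s:n)) for Uniform(0,1): F(x) = x, f(x) = 1
    c = factorial(n) / (factorial(r - 1) * factorial(s - r - 1) * factorial(n - s))
    return c * x**(r - 1) * (y - x)**(s - r - 1) * (1 - y)**(n - s)

# dblquad integrates func(y, x) with x in [0, 1] and y in [x, 1]
total, _ = dblquad(f_rs, 0.0, 1.0, lambda x: x, lambda x: 1.0, args=(2, 5, 8))
print(total)  # ≈ 1
```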
Order statistics are particularly useful when some sample values are censored. They
also provide some quick and simple estimators which are quite often highly efficient.
7. Find the k-th moment of the r-th order statistic for the standard power function distribution.
Solution:
We know the probability density function of the standard power function distribution is as follows
f(x) = βx^{β−1}; 0 < x < 1, β > 0
Now,
F(x) = ∫_0^x f(t) dt = ∫_0^x βt^{β−1} dt = β[t^β/β]_0^x = x^β
∴ The density function of the r-th order statistic is,
f_{r:n}(x) = n!/((r−1)!(n−r)!) (x^β)^{r−1} (1 − x^β)^{n−r} βx^{β−1}
= n!/((r−1)!(n−r)!) β x^{βr−1} (1 − x^β)^{n−r}
Now, the k-th moment of X_{r:n} (1 ≤ r ≤ n) is,
μ^{(k)}_{r:n} = E(X^k_{r:n}) = ∫_0^1 x^k f_{r:n}(x) dx
= n!/((r−1)!(n−r)!) ∫_0^1 x^k β x^{βr−1} (1 − x^β)^{n−r} dx

Let, x^β = z, ⇒ βx^{β−1} dx = dz, with x = z^{1/β}; x: 0 → 1 corresponds to z: 0 → 1.

= n!/((r−1)!(n−r)!) ∫_0^1 z^{k/β} z^{r−1} (1 − z)^{n−r} dz
= n!/((r−1)!(n−r)!) · β(r + k/β, n − r + 1)
= n!/((r−1)!(n−r)!) · (r + k/β − 1)!(n − r)!/(n + k/β)!
= n!(r + k/β − 1)! / ((r−1)!(n + k/β)!)
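A Monte Carlo sanity check of the closed form just derived (a sketch; the values n = 6, r = 4, β = 2, k = 2 are my own choices, and sampling uses the inverse cdf X = U^{1/β}):

```python
import numpy as np
from math import gamma, factorial

def moment_formula(k, r, n, beta):
    # n! Γ(r + k/β) / ((r-1)! Γ(n + 1 + k/β)), the k-th moment of X(r:n)
    return factorial(n) * gamma(r + k/beta) / (factorial(r - 1) * gamma(n + 1 + k/beta))

rng = np.random.default_rng(1)
n, r, beta, k = 6, 4, 2.0, 2
u = rng.uniform(size=(200_000, n))
x = u ** (1.0 / beta)                 # inverse-cdf sampling: F(x) = x**beta
xr = np.sort(x, axis=1)[:, r - 1]     # r-th order statistic of each sample
mc = np.mean(xr**k)
print(mc, moment_formula(k, r, n, beta))
```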
The joint density of the sample range R = X_{n:n} − X_{1:n} and sample mid-range
T = (X_{1:n} + X_{n:n})/2, obtained from the transformation X_{1:n} = t − r/2, X_{n:n} = t + r/2
(whose Jacobian is 1), is
f_{R,T}(r, t) = n(n−1) {F(t + r/2) − F(t − r/2)}^{n−2} f(t − r/2) f(t + r/2) · 1; 0 < r < ∞, −∞ < t < ∞
Now, integrating out t we derive the pdf of the sample range,
f(r) = ∫_{−∞}^{∞} f_{R,T}(r, t) dt = ∫_{−∞}^{∞} n(n−1) {F(t + r/2) − F(t − r/2)}^{n−2} f(t − r/2) f(t + r/2) dt; 0 < r < ∞
9. Find the pdfs of the r-th, 1st and n-th order statistics for f(x) = αx^{α−1}; 0 < x < 1, and
also prove that they are pdfs.
Solution:
Given that, f(x) = αx^{α−1}; 0 < x < 1
Now,
F(x) = ∫_0^x f(t) dt = ∫_0^x αt^{α−1} dt = α[t^α/α]_0^x = x^α
∴ The pdf of the r-th order statistic is,
f_{r:n}(x) = n!/((r−1)!(n−r)!) (x^α)^{r−1} (1 − x^α)^{n−r} αx^{α−1}; 0 < x < 1
= n!/((r−1)!(n−r)!) α x^{αr−1} (1 − x^α)^{n−r}
∴ The pdf of the 1st order statistic is,
f_{1:n}(x) = n!/((1−1)!(n−1)!) (x^α)^0 (1 − x^α)^{n−1} αx^{α−1}; 0 < x < 1
= nα x^{α−1} (1 − x^α)^{n−1}
∴ The pdf of the n-th order statistic is,
f_{n:n}(x) = n!/((n−1)!(n−n)!) (x^α)^{n−1} (1 − x^α)^0 αx^{α−1}; 0 < x < 1
= nα x^{αn−1}
The r-th order statistic's density is a pdf:
Proof:
∫_0^1 f_{r:n}(x) dx = n!/((r−1)!(n−r)!) ∫_0^1 (x^α)^{r−1} (1 − x^α)^{n−r} αx^{α−1} dx

Let, x^α = z, ⇒ αx^{α−1} dx = dz; x: 0 → 1 corresponds to z: 0 → 1.

= n!/((r−1)!(n−r)!) ∫_0^1 z^{r−1} (1 − z)^{n−r} dz
= n!/((r−1)!(n−r)!) · β(r, n−r+1)
= n!/((r−1)!(n−r)!) · (r−1)!(n−r)!/n!
= 1
(Proved)

The 1st order statistic's density is a pdf:
Proof:
∫_0^1 f_{1:n}(x) dx = ∫_0^1 nα x^{α−1} (1 − x^α)^{n−1} dx
= n ∫_0^1 (1 − x^α)^{n−1} αx^{α−1} dx

Let, x^α = z, ⇒ αx^{α−1} dx = dz.

= n ∫_0^1 (1 − z)^{n−1} dz
= n [−(1 − z)^n / n]_0^1
= −[(1 − z)^n]_0^1
= (−1)(0 − 1)
= 1
(Proved)

The n-th order statistic's density is a pdf:
Proof:
∫_0^1 f_{n:n}(x) dx = ∫_0^1 nα x^{αn−1} dx
= nα [x^{αn}/(αn)]_0^1
= (nα/αn)[x^{αn}]_0^1
= 1
(Proved)
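The three normalizations above can be confirmed numerically in one pass. This is a sketch with my own parameter choices (n = 5, α = 1.5); r = 1 and r = n reduce to the 1st- and n-th-order-statistic forms:

```python
from math import factorial
from scipy.integrate import quad

def f_r(x, r, n, a):
    # pdf of the r-th order statistic for F(x) = x**a on (0, 1)
    c = factorial(n) / (factorial(r - 1) * factorial(n - r))
    return c * x**(a*(r - 1)) * (1 - x**a)**(n - r) * a * x**(a - 1)

n, a = 5, 1.5
vals = [quad(f_r, 0.0, 1.0, args=(r, n, a))[0] for r in (1, 3, n)]
print(vals)  # each ≈ 1
```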
Central and non-central chi-square variates: Let x_1, x_2, ..., x_n be n independent normal
random variables with means μ_1, μ_2, ..., μ_n and common variance σ². Then
χ² = Σ_{j=1}^n (x_j − μ_j)²/σ²
is a central chi-square variate with n degrees of freedom, while
χ′² = Σ_{j=1}^n x_j²/σ²
is a non-central chi-square variate with n degrees of freedom and non-centrality parameter
λ = (1/2) Σ_{j=1}^n μ_j²/σ²
Solution:
The characteristic function of χ′² = Σ_j x_j²/σ² is
Q_{χ′²}(t) = E[e^{it Σ_j x_j²/σ²}] = Π_j E[e^{it x_j²/σ²}]
Now, for each j,
E[e^{it x_j²/σ²}] = ∫_{−∞}^{∞} e^{itx²/σ²} (1/(σ√2π)) e^{−(x−μ_j)²/(2σ²)} dx
= (1/(σ√2π)) ∫ e^{−(1/(2σ²))[(x−μ_j)² − 2itx²]} dx
= (1/(σ√2π)) ∫ e^{−(1/(2σ²))[x²(1−2it) − 2μ_j x + μ_j²]} dx
= (1/(σ√2π)) ∫ e^{−(1/(2σ²))[(x√(1−2it) − μ_j/√(1−2it))² + μ_j² − μ_j²/(1−2it)]} dx
= e^{−(μ_j²/(2σ²))(1 − 1/(1−2it))} (1/(σ√2π)) ∫ e^{−(1/(2σ²))(x√(1−2it) − μ_j/√(1−2it))²} dx

Let, x√(1−2it) − μ_j/√(1−2it) = p, ⇒ dx = dp/√(1−2it); when x = −∞ then p = −∞, and
when x = ∞ then p = ∞.

∴ E[e^{it x_j²/σ²}] = e^{−(μ_j²/(2σ²))(1 − 1/(1−2it))} (1−2it)^{−1/2} (1/(σ√2π)) ∫ e^{−p²/(2σ²)} dp
= e^{−(μ_j²/(2σ²))(1 − 1/(1−2it))} (1−2it)^{−1/2} · 1
∴ Q_{χ′²}(t) = Π_j e^{−(μ_j²/(2σ²))(1 − 1/(1−2it))} (1−2it)^{−1/2}
= e^{−λ(1 − 1/(1−2it))} (1−2it)^{−n/2}, with λ = Σ_j μ_j²/(2σ²)
= e^{−λ} e^{λ/(1−2it)} (1−2it)^{−n/2}
We know,
e^{λ/(1−2it)} = 1 + λ/(1−2it) + λ²/(2!(1−2it)²) + ⋯
= Σ_{j=0}^∞ λ^j/(j!(1−2it)^j)
∴ Q_{χ′²}(t) = e^{−λ} Σ_j (λ^j/j!) (1−2it)^{−j} (1−2it)^{−n/2}
= e^{−λ} Σ_j (λ^j/j!) (1−2it)^{−(n+2j)/2}

By the inversion theorem, the pdf of z = χ′² is
f(z) = (1/2π) ∫_{−∞}^{∞} e^{−itz} Q_{χ′²}(t) dt
= e^{−λ} Σ_j (λ^j/j!) (1/2π) ∫_{−∞}^{∞} e^{−itz} (1−2it)^{−(n+2j)/2} dt
= e^{−λ} Σ_j (λ^j/j!) (1/2π) ∫_{−∞}^{∞} e^{−itz} / [2^{(n+2j)/2} (1/2 − it)^{(n+2j)/2}] dt

Now using
(1/2π) ∫_{−∞}^{∞} e^{−itz}/(c − it)^a dt = e^{−cz} z^{a−1}/Γ(a),

f(z) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) · e^{−z/2} z^{(n+2j)/2 − 1} / (2^{(n+2j)/2} Γ((n+2j)/2)); 0 < z < ∞
E(z) = ∫_0^∞ z f(z) dz
= e^{−λ} Σ_j (λ^j/j!) ∫_0^∞ z · e^{−z/2} z^{(n+2j)/2 − 1} / (2^{(n+2j)/2} Γ((n+2j)/2)) dz
= e^{−λ} Σ_j (λ^j/j!) · (1/(2^{(n+2j)/2} Γ((n+2j)/2))) ∫_0^∞ e^{−z/2} z^{(n+2j)/2} dz
= e^{−λ} Σ_j (λ^j/j!) · (1/(2^{(n+2j)/2} Γ((n+2j)/2))) · 2^{(n+2j)/2 + 1} Γ((n+2j)/2 + 1)
= e^{−λ} Σ_j (λ^j/j!) · 2 · ((n+2j)/2)
= e^{−λ} Σ_j (λ^j/j!) (n + 2j)
= n e^{−λ} Σ_j λ^j/j! + 2 e^{−λ} Σ_j j λ^j/j!
= n e^{−λ} e^{λ} + 2λ e^{−λ} Σ_{j≥1} λ^{j−1}/(j−1)!
= n + 2λ e^{−λ} e^{λ}
= n + 2λ
∴ Mean = n + 2λ.
E(z²) = ∫_0^∞ z² f(z) dz
= e^{−λ} Σ_j (λ^j/j!) ∫_0^∞ z² · e^{−z/2} z^{(n+2j)/2 − 1} / (2^{(n+2j)/2} Γ((n+2j)/2)) dz
= e^{−λ} Σ_j (λ^j/j!) · (1/(2^{(n+2j)/2} Γ((n+2j)/2))) · 2^{(n+2j)/2 + 2} ((n+2j)/2 + 1)((n+2j)/2) Γ((n+2j)/2)
= e^{−λ} Σ_j (λ^j/j!) (n + 2j + 2)(n + 2j)
= e^{−λ} Σ_j (λ^j/j!) {(n² + 2n) + 4j(n + 1) + 4j²}
= (n² + 2n) e^{−λ} Σ_j λ^j/j! + 4(n + 1) e^{−λ} Σ_j j λ^j/j! + 4 e^{−λ} Σ_j j² λ^j/j!
= (n² + 2n) + 4(n + 1)λ + 4(λ² + λ)   [∵ Σ_j j λ^j/j! = λe^{λ} and Σ_j j² λ^j/j! = (λ² + λ)e^{λ}]
= n² + 2n + 4nλ + 4λ² + 8λ
∴ Variance = E(z²) − {E(z)}² = n² + 2n + 4nλ + 4λ² + 8λ − (n + 2λ)² = 2n + 8λ.
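A quick simulation check of these moments (a sketch; the means μ_j and sample sizes are arbitrary choices), building z = Σ x_j²/σ² directly from independent normals with λ = Σ μ_j²/(2σ²):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 5, 1.0
mu = np.array([0.5, -1.0, 0.3, 0.0, 2.0])
lam = np.sum(mu**2) / (2 * sigma**2)    # λ = Σμ_j²/(2σ²)

x = rng.normal(mu, sigma, size=(400_000, n))
z = np.sum(x**2, axis=1) / sigma**2     # non-central chi-square variate

print(z.mean(), n + 2*lam)      # ≈ n + 2λ
print(z.var(), 2*n + 8*lam)     # ≈ 2n + 8λ
```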
f(χ′²) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) e^{−χ′²/2} (χ′²)^{(n+2j)/2 − 1} / (2^{(n+2j)/2} Γ((n+2j)/2));
= e^{−λ} [ e^{−χ′²/2}(χ′²)^{n/2 − 1}/(2^{n/2} Γ(n/2)) + λ e^{−χ′²/2}(χ′²)^{(n+2)/2 − 1}/(2^{(n+2)/2} Γ((n+2)/2)) + ⋯ ] − − − − − (i)
Putting λ = 0, only the first term survives, and (i) reduces to the central chi-square pdf
f(χ²) = e^{−χ²/2} (χ²)^{n/2 − 1} / (2^{n/2} Γ(n/2))
The cumulant generating function is
K_{χ′²}(t) = log Q_{χ′²}(t) = log[ e^{−λ} e^{λ/(1−2it)} (1−2it)^{−n/2} ]
= −λ + λ/(1−2it) − (n/2) log(1−2it)
= −λ + λ(1−2it)^{−1} − (n/2) log(1−2it)
= −λ + λ[1 + 2it + (2it)² + (2it)³ + (2it)⁴ + ⋯] + (n/2)[2it + (2it)²/2 + (2it)³/3 + (2it)⁴/4 + ⋯]
= (n + 2λ)it + (n + 4λ)(it)² + (4/3)(n + 6λ)(it)³ + 2(n + 8λ)(it)⁴ + ⋯
= (n + 2λ)(it) + 2(n + 4λ)(it)²/2! + 8(n + 6λ)(it)³/3! + 48(n + 8λ)(it)⁴/4! + ⋯

Now taking the coefficient of (it)^r/r!, we get the cumulants κ_r:
κ₁ = coefficient of (it)/1! = n + 2λ
κ₂ = coefficient of (it)²/2! = 2(n + 4λ)
κ₃ = coefficient of (it)³/3! = 8(n + 6λ)
κ₄ = coefficient of (it)⁴/4! = 48(n + 8λ)
(κ₁ and κ₂ agree with the mean n + 2λ and variance 2n + 8λ found above.)
∴ The skewness is,
β₁(χ′²) = κ₃²/κ₂³ = 64(n + 6λ)²/(8(n + 4λ)³) = 8(n + 6λ)²/(n + 4λ)³
∴ The (excess) kurtosis is
γ₂ = κ₄/κ₂² = 48(n + 8λ)/(4(n + 4λ)²) = 12(n + 8λ)/(n + 4λ)²
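These cumulant-based shape measures can be checked against scipy's non-central chi-square. Note the convention mapping is an assumption to flag: scipy's `nc` parameter equals 2λ in the notation used here (scipy's mean is df + nc, and here the mean is n + 2λ):

```python
import numpy as np
from scipy.stats import ncx2

n, lam = 7, 1.8                     # here λ = Σμ²/(2σ²); scipy's nc = 2λ
skew, kurt = ncx2.stats(df=n, nc=2*lam, moments='sk')

k2 = 2*(n + 4*lam)
k3 = 8*(n + 6*lam)
k4 = 48*(n + 8*lam)
print(float(skew), k3 / k2**1.5)    # γ1 = κ3 / κ2^(3/2)
print(float(kurt), k4 / k2**2)      # excess kurtosis γ2 = κ4 / κ2²
```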
The pdf of the non-central F variate v = (z₁/n₁)/(z₂/n₂) = F′, where z₁ ~ χ′²_{n₁,λ} and
z₂ ~ χ²_{n₂} are independent, is
f(v) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) (n₁/n₂)^{(n₁+2j)/2} v^{(n₁+2j)/2 − 1} / [β((n₁+2j)/2, n₂/2) (1 + (n₁/n₂)v)^{(n₁+n₂+2j)/2}]; 0 < v < ∞
20. Derive the non-central F-distribution. Show that it is a pdf. Write down the
applications and properties of the non-central F-distribution.
Solution:
Derivation:
Suppose, z₁ ~ χ′²_{n₁,λ}
and z₂ ~ χ²_{n₂}, independently.
∴ f(z₁, z₂) = f(z₁) f(z₂)
= e^{−λ} Σ_j (λ^j/j!) [e^{−z₁/2} z₁^{(n₁+2j)/2 − 1} / (2^{(n₁+2j)/2} Γ((n₁+2j)/2))] · [e^{−z₂/2} z₂^{n₂/2 − 1} / (2^{n₂/2} Γ(n₂/2))]
= e^{−λ} Σ_j (λ^j/j!) e^{−(z₁+z₂)/2} z₁^{(n₁+2j)/2 − 1} z₂^{n₂/2 − 1} / (2^{(n₁+n₂+2j)/2} Γ((n₁+2j)/2) Γ(n₂/2))
Let, u = z₁ + z₂ ---------------(i)
v = (z₁/n₁)/(z₂/n₂) ---------------(ii)
⇒ z₁ = (n₁/n₂) v z₂ ---------------(iii)
Using (iii) with z₂ = u − z₁,
z₁ = (n₁/n₂)v(u − z₁) ⇒ z₁[1 + (n₁/n₂)v] = (n₁/n₂)vu
∴ z₁ = (n₁/n₂)vu / [1 + (n₁/n₂)v]
Using z₁ in equation (i),
u = (n₁/n₂)vu/[1 + (n₁/n₂)v] + z₂
⇒ z₂ = u − (n₁/n₂)vu/[1 + (n₁/n₂)v]
⇒ z₂ = {u + (n₁/n₂)vu − (n₁/n₂)vu}/[1 + (n₁/n₂)v]
∴ z₂ = u/[1 + (n₁/n₂)v]
Now,
∂z₁/∂u = (n₁/n₂)v / [1 + (n₁/n₂)v]
∂z₂/∂u = 1 / [1 + (n₁/n₂)v]
∂z₁/∂v = {[1 + (n₁/n₂)v](n₁/n₂)u − (n₁/n₂)uv · (n₁/n₂)} / [1 + (n₁/n₂)v]²
= (n₁/n₂)u {1 + (n₁/n₂)v − (n₁/n₂)v} / [1 + (n₁/n₂)v]²
= (n₁/n₂)u / [1 + (n₁/n₂)v]²
∂z₂/∂v = −u(n₁/n₂) / [1 + (n₁/n₂)v]²
And the Jacobian of the transformation is,
J = (∂z₁/∂u)(∂z₂/∂v) − (∂z₂/∂u)(∂z₁/∂v)
= −(n₁/n₂)²uv / [1 + (n₁/n₂)v]³ − (n₁/n₂)u / [1 + (n₁/n₂)v]³
= −(n₁/n₂)u {1 + (n₁/n₂)v} / [1 + (n₁/n₂)v]³
= −(n₁/n₂)u / [1 + (n₁/n₂)v]²
∴ |J| = (n₁/n₂)u / [1 + (n₁/n₂)v]²
Now,
f(u, v) = f(z₁(u,v), z₂(u,v)) |J|
= e^{−λ} Σ_j (λ^j/j!) e^{−u/2} [(n₁/n₂)vu/(1 + (n₁/n₂)v)]^{(n₁+2j)/2 − 1} [u/(1 + (n₁/n₂)v)]^{n₂/2 − 1}
  / (2^{(n₁+n₂+2j)/2} Γ((n₁+2j)/2) Γ(n₂/2)) · (n₁/n₂)u/[1 + (n₁/n₂)v]²
= e^{−λ} Σ_j (λ^j/j!) e^{−u/2} u^{(n₁+n₂+2j)/2 − 1} (n₁/n₂)^{(n₁+2j)/2} v^{(n₁+2j)/2 − 1}
  / [2^{(n₁+n₂+2j)/2} Γ((n₁+2j)/2) Γ(n₂/2) (1 + (n₁/n₂)v)^{(n₁+n₂+2j)/2}]
f(v) = ∫_0^∞ f(u, v) du
= e^{−λ} Σ_j (λ^j/j!) (n₁/n₂)^{(n₁+2j)/2} v^{(n₁+2j)/2 − 1}
  / [2^{(n₁+n₂+2j)/2} Γ((n₁+2j)/2) Γ(n₂/2) (1 + (n₁/n₂)v)^{(n₁+n₂+2j)/2}] · ∫_0^∞ e^{−u/2} u^{(n₁+n₂+2j)/2 − 1} du
= e^{−λ} Σ_j (λ^j/j!) (n₁/n₂)^{(n₁+2j)/2} v^{(n₁+2j)/2 − 1} Γ((n₁+n₂+2j)/2)
  / [Γ((n₁+2j)/2) Γ(n₂/2) (1 + (n₁/n₂)v)^{(n₁+n₂+2j)/2}]
∴ f(v) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) (n₁/n₂)^{(n₁+2j)/2} v^{(n₁+2j)/2 − 1} / [β((n₁+2j)/2, n₂/2) (1 + (n₁/n₂)v)^{(n₁+n₂+2j)/2}]; 0 < v < ∞
∫_0^∞ f(v) dv = e^{−λ} Σ_j (λ^j/j!) [(n₁/n₂)^{(n₁+2j)/2} / β((n₁+2j)/2, n₂/2)] ∫_0^∞ v^{(n₁+2j)/2 − 1} (1 + (n₁/n₂)v)^{−(n₁+n₂+2j)/2} dv
Let,
(n₁/n₂)v = t
⇒ dt/dv = n₁/n₂
⇒ dv = (n₂/n₁) dt; v: 0 → ∞ corresponds to t: 0 → ∞.
= e^{−λ} Σ_j (λ^j/j!) [(n₁/n₂)^{(n₁+2j)/2} / β((n₁+2j)/2, n₂/2)] ∫_0^∞ ((n₂/n₁)t)^{(n₁+2j)/2 − 1} (1 + t)^{−(n₁+n₂+2j)/2} (n₂/n₁) dt
= e^{−λ} Σ_j (λ^j/j!) (1/β((n₁+2j)/2, n₂/2)) ∫_0^∞ t^{(n₁+2j)/2 − 1} (1 + t)^{−[(n₁+2j)/2 + n₂/2]} dt
= e^{−λ} Σ_j (λ^j/j!) β((n₁+2j)/2, n₂/2) / β((n₁+2j)/2, n₂/2)   [∵ β(p, q) = ∫_0^∞ t^{p−1}(1 + t)^{−(p+q)} dt]
= e^{−λ} Σ_j λ^j/j!
= e^{−λ} e^{λ}
= 1
∴ ∫ f(v) dv = 1
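The series form of f(v) can also be integrated numerically as a check. This sketch (my own choices: n₁ = 4, n₂ = 10, λ = 2.5, with the Poisson series truncated at 60 terms) should return approximately 1:

```python
from math import exp, factorial
from scipy.special import beta as B
from scipy.integrate import quad

def ncf_pdf(v, n1, n2, lam, terms=60):
    # series form of f(v) derived above: a Poisson(λ)-weighted mixture over j
    total = 0.0
    for j in range(terms):
        w = exp(-lam) * lam**j / factorial(j)
        a = (n1 + 2*j) / 2
        total += w * (n1/n2)**a * v**(a - 1) / (B(a, n2/2) * (1 + n1*v/n2)**(a + n2/2))
    return total

area, _ = quad(ncf_pdf, 0.0, float('inf'), args=(4, 10, 2.5))
print(area)  # ≈ 1
```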
Application
The non-central F distribution is used extensively in the theory of regression and
analysis of variance.
The main application of the non-central F distribution is to calculate the
power of a hypothesis test relative to a particular alternative.
It can be used to test the homogeneity of several means.
It can be used to test the equality of two population variances.
Properties:
The non-central F distribution is a continuous distribution having the range 0 to
∞.
The distribution has three parameters n₁, n₂, λ, where n₁, n₂ are called the
degrees of freedom and λ is the non-centrality parameter.
It reduces to the central F distribution when λ = 0.
For the mode, consider the j-th term of the series
f(F′) = Σ_j [e^{−λ} λ^j/j!] (n₁/n₂)^{(n₁+2j)/2} (F′)^{(n₁+2j)/2 − 1} / [β((n₁+2j)/2, n₂/2) (1 + (n₁/n₂)F′)^{(n₁+n₂+2j)/2}]
Taking logarithms of the j-th term,
log f(F′) = const + ((n₁+2j)/2 − 1) log F′ − ((n₁+n₂+2j)/2) log(1 + (n₁/n₂)F′)
Differentiating,
∴ f′(F′) = f(F′) [ ((n₁+2j)/2 − 1)/F′ − ((n₁+n₂+2j)/2)(n₁/n₂)/(1 + (n₁/n₂)F′) ]
We know that the mode is the value of F′ for which f′(F′) = 0 and f″(F′) < 0.
According to the first condition of the mode, we set f′(F′) = 0. Then,
f(F′) [ ((n₁+2j)/2 − 1)/F′ − ((n₁+n₂+2j)/2)(n₁/n₂)/(1 + (n₁/n₂)F′) ] = 0
Since f(F′) ≠ 0,
((n₁+2j)/2 − 1)/F′ − ((n₁+n₂+2j)/2)(n₁/n₂)/(1 + (n₁/n₂)F′) = 0
⇒ ((n₁+2j−2)/2)/F′ = (n₁(n₁+n₂+2j)/2)/(n₂ + n₁F′)
⇒ (n₁+2j−2)(n₂ + n₁F′) = n₁F′(n₁+n₂+2j)
⇒ n₂(n₁+2j−2) + n₁F′(n₁+2j−2) = n₁F′(n₁+n₂+2j)
⇒ n₂(n₁+2j−2) = n₁F′(n₁+n₂+2j − n₁ − 2j + 2) = n₁F′(n₂+2)
⇒ F′ = n₂(n₁+2j−2)/(n₁(n₂+2))
For j = 0 this reduces to the mode of the central F distribution, n₂(n₁−2)/(n₁(n₂+2)).
24. Find the k-th raw moment of the non-central F distribution. Hence also find the
mean and variance.
Solution:
The k-th raw moment of the non-central F distribution is,
∴ μ′_k = E[v^k]
= ∫_0^∞ v^k f(v) dv
= e^{−λ} Σ_j (λ^j/j!) [(n₁/n₂)^{(n₁+2j)/2} / β((n₁+2j)/2, n₂/2)] ∫_0^∞ v^{(n₁+2j)/2 + k − 1} (1 + (n₁/n₂)v)^{−(n₁+n₂+2j)/2} dv
Let, (n₁/n₂)v = t, ∴ dv = (n₂/n₁) dt. Then
= e^{−λ} Σ_j (λ^j/j!) (n₂/n₁)^k (1/β((n₁+2j)/2, n₂/2)) ∫_0^∞ t^{(n₁+2j+2k)/2 − 1} (1 + t)^{−[(n₁+2j+2k)/2 + (n₂−2k)/2]} dt
= e^{−λ} Σ_j (λ^j/j!) (n₂/n₁)^k β((n₁+2j+2k)/2, (n₂−2k)/2) / β((n₁+2j)/2, n₂/2); n₂ > 2k

For k = 1, using β(p+1, q−1)/β(p, q) = p/(q−1),
μ′₁ = e^{−λ} Σ_j (λ^j/j!) (n₂/n₁) · (n₁+2j)/(n₂−2)
= (n₂/(n₁(n₂−2))) [n₁ e^{−λ} Σ_j λ^j/j! + 2λ e^{−λ} Σ_{j≥1} λ^{j−1}/(j−1)!]
= (n₂/(n₁(n₂−2))) [n₁ e^{−λ} e^{λ} + 2λ e^{−λ} e^{λ}]
= (n₂/(n₁(n₂−2))) (n₁ + 2λ)
= mean

For k = 2, using β(p+2, q−2)/β(p, q) = p(p+1)/((q−1)(q−2)),
μ′₂ = (n₂²/(n₁²(n₂−2)(n₂−4))) e^{−λ} Σ_j (λ^j/j!) (n₁+2j)(n₁+2j+2)
= (n₂²/(n₁²(n₂−2)(n₂−4))) e^{−λ} Σ_j (λ^j/j!) {n₁(n₁+2) + 4j(n₁+1) + 4j²}
= (n₂²/(n₁²(n₂−2)(n₂−4))) [n₁(n₁+2) + 4λ(n₁+1) + 4(λ² + λ)]
= (n₂²/(n₁²(n₂−2)(n₂−4))) [n₁(n₁+2) + 4λ(n₁+2) + 4λ²]

∴ the variance = μ′₂ − (μ′₁)²
= n₂²{n₁(n₁+2) + 4λ(n₁+2) + 4λ²}/(n₁²(n₂−2)(n₂−4)) − n₂²(n₁+2λ)²/(n₁²(n₂−2)²)
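The mean formula can be checked by simulating v = (z₁/n₁)/(z₂/n₂) directly. One convention to flag: numpy's `noncentral_chisquare` takes the standard non-centrality `nonc`, which equals 2λ in the notation used here (its mean is df + nonc):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, lam = 4, 12, 1.5
reps = 300_000
z1 = rng.noncentral_chisquare(n1, 2*lam, size=reps)  # numpy's nonc = 2λ
z2 = rng.chisquare(n2, size=reps)
v = (z1/n1) / (z2/n2)

mean_formula = n2*(n1 + 2*lam) / (n1*(n2 - 2))
print(v.mean(), mean_formula)
```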
with (n−1) degrees of freedom and non-centrality parameter λ = nμ²/(2σ²). Then the pdf
of the non-central t distribution is,
f(t′) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2}; −∞ < t′ < ∞
Derivation:
We know the non-central t statistic,
t′ = √n x̄ / s = (√n x̄/σ) / (s/σ)
Now,
E[√n x̄/σ] = (√n/σ) E(x̄) = √n μ/σ
And
V[√n x̄/σ] = (n/σ²) V(x̄) = (n/σ²)(σ²/n) = 1
So we can say that √n x̄/σ is distributed as N(√n μ/σ, 1).
Now,
(t′)² = (n x̄²/σ²) / (s²/σ²) = χ′²_{(1,λ)} / (Σ(x_i − x̄)²/((n−1)σ²)) = (n−1)χ′²_{(1,λ)} / χ²_{(n−1)}
⇒ (t′)² = (n−1)z₁/z₂; where z₁ ~ χ′²_{(1,λ)} and z₂ ~ χ²_{(n−1)}
⇒ z₁ = z₂(t′)²/(n−1) − − − − − − − (i)
Let,
s = z₁ + z₂ = z₂(t′)²/(n−1) + z₂
⇒ s = z₂(1 + (t′)²/(n−1))
⇒ z₂ = s / (1 + (t′)²/(n−1))
and hence z₁ = s − z₂ = s[(t′)²/(n−1)] / (1 + (t′)²/(n−1)).
Differentiating with respect to t′ for fixed s,
dz₂/dt′ = −s(2t′/(n−1)) / (1 + (t′)²/(n−1))²
dz₁/dt′ = { (2st′/(n−1))(1 + (t′)²/(n−1)) − (s(t′)²/(n−1))(2t′/(n−1)) } / (1 + (t′)²/(n−1))²
= (2st′/(n−1)) / (1 + (t′)²/(n−1))²
f(z₁, z₂) = e^{−λ} Σ_j (λ^j/j!) [e^{−z₁/2} z₁^{(2j+1)/2 − 1} / (2^{(2j+1)/2} Γ((2j+1)/2))] [e^{−z₂/2} z₂^{(n−1)/2 − 1} / (2^{(n−1)/2} Γ((n−1)/2))]
= e^{−λ} Σ_j (λ^j/j!) e^{−(z₁+z₂)/2} z₁^{(2j−1)/2} z₂^{(n−3)/2} / (2^{(n+2j)/2} Γ((2j+1)/2) Γ((n−1)/2))

Changing variables to (t′, s) with the Jacobian found above (taking t′ > 0),
f(t′, s) = e^{−λ} Σ_j (λ^j/j!) e^{−s/2} s^{(n+2j)/2 − 1} · 2(t′)^{2j} / [2^{(n+2j)/2} Γ((2j+1)/2) Γ((n−1)/2) (n−1)^{(2j+1)/2} (1 + (t′)²/(n−1))^{(n+2j)/2}]

f(t′) = ∫_0^∞ f(t′, s) ds
= e^{−λ} Σ_j (λ^j/j!) 2(t′)^{2j} Γ((n+2j)/2) / [Γ((2j+1)/2) Γ((n−1)/2) (n−1)^{(2j+1)/2} (1 + (t′)²/(n−1))^{(n+2j)/2}]

Since each value of (z₁, z₂) corresponds to the two values ±t′, the density spreads
equally over both signs, and
f(t′) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2}
Let t′ = −t′; then
f(−t′) = e^{−λ} Σ_j (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((−t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2}
= e^{−λ} Σ_j (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2}
= f(t′)
∴ f(−t′) = f(t′)
Therefore the non-central t distribution, in this form, is a symmetric distribution.
Application:
Properties:
f(t′) = e^{−λ} Σ_{j=0}^∞ (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2}; −∞ < t′ < ∞
27. Show that under certain conditions the non-central t distribution tends to the central
t-distribution.
Solution:
We know (writing n for the degrees of freedom),
f(t′) = e^{−λ} Σ_j (λ^j/j!) (1/(√n β((2j+1)/2, n/2))) ((t′)²/n)^j (1 + (t′)²/n)^{−(n+2j+1)/2}
= e^{−λ} [ 1/(√n β(1/2, n/2) (1 + (t′)²/n)^{(n+1)/2}) + λ ((t′)²/n)/(√n β(3/2, n/2) (1 + (t′)²/n)^{(n+3)/2}) + ⋯ ]
Putting λ = 0, only the first term survives:
f(t′) = (1/(√n β(1/2, n/2))) (1 + (t′)²/n)^{−(n+1)/2}; −∞ < t′ < ∞,
which is the pdf of the central t distribution with n degrees of freedom.
The even raw moments of the non-central t distribution (with n−1 degrees of freedom) are
μ_{2r} = ∫_{−∞}^{∞} (t′)^{2r} f(t′) dt′
= ∫_{−∞}^{∞} (t′)^{2r} e^{−λ} Σ_j (λ^j/j!) (1/(√(n−1) β((2j+1)/2, (n−1)/2))) ((t′)²/(n−1))^j (1 + (t′)²/(n−1))^{−(n+2j)/2} dt′
= Σ_j C_j ∫_{−∞}^{∞} (t′)^{2r+2j} (1 + (t′)²/(n−1))^{−(n+2j)/2} dt′
Where,
C_j = e^{−λ} (λ^j/j!) · 1/((n−1)^{(2j+1)/2} β((2j+1)/2, (n−1)/2))
= Σ_j C_j [ ∫_{−∞}^0 (t′)^{2r+2j} (1 + (t′)²/(n−1))^{−(n+2j)/2} dt′ + ∫_0^∞ (t′)^{2r+2j} (1 + (t′)²/(n−1))^{−(n+2j)/2} dt′ ]
= Σ_j C_j · 2 ∫_0^∞ (t′)^{2r+2j} (1 + (t′)²/(n−1))^{−(n+2j)/2} dt′

Let, z = (t′)²/(n−1)
⇒ dz = 2t′ dt′/(n−1)
⇒ dt′ = √(n−1) dz/(2√z)

= Σ_j C_j · 2 ∫_0^∞ ((n−1)z)^{r+j} (1 + z)^{−(n+2j)/2} · √(n−1)/(2√z) dz
= Σ_j C_j (n−1)^{r+j+1/2} ∫_0^∞ z^{(2r+2j+1)/2 − 1} (1 + z)^{−[(2r+2j+1)/2 + (n−2r−1)/2]} dz
= Σ_j C_j (n−1)^{r+j+1/2} β((2r+2j+1)/2, (n−2r−1)/2)
= e^{−λ} Σ_j (λ^j/j!) (n−1)^r · [β((2r+2j+1)/2, (n−2r−1)/2) / β((2j+1)/2, (n−1)/2)]
= e^{−λ} Σ_j (λ^j/j!) (n−1)^r · Γ((2r+2j+1)/2) Γ((n−2r−1)/2) Γ((n+2j)/2) / [Γ((n+2j)/2) Γ((2j+1)/2) Γ((n−1)/2)]
∴ μ_{2r} = e^{−λ} Σ_j (λ^j/j!) (n−1)^r · Γ((2r+2j+1)/2) Γ((n−2r−1)/2) / (Γ((2j+1)/2) Γ((n−1)/2))
Now, putting r = 1, we get,
μ₂ = (n−1) e^{−λ} Σ_j (λ^j/j!) Γ((2j+3)/2) Γ((n−3)/2) / (Γ((2j+1)/2) Γ((n−1)/2))
= (n−1) e^{−λ} Σ_j (λ^j/j!) (2j+1)/(n−3)
= ((n−1)/(n−3)) [e^{−λ} Σ_j λ^j/j! + 2λ e^{−λ} Σ_{j≥1} λ^{j−1}/(j−1)!]
= ((n−1)/(n−3)) (e^{−λ} e^{λ} + 2λ e^{−λ} e^{λ})
= ((n−1)/(n−3)) (2λ + 1)
Again, by putting r = 2, we get,
μ₄ = (n−1)² e^{−λ} Σ_j (λ^j/j!) Γ((2j+5)/2) Γ((n−5)/2) / (Γ((2j+1)/2) Γ((n−1)/2))
= (n−1)² e^{−λ} Σ_j (λ^j/j!) (2j+3)(2j+1)/((n−3)(n−5))
= ((n−1)²/((n−3)(n−5))) e^{−λ} Σ_j (λ^j/j!) [4j(j−1) + 12j + 3]
= ((n−1)²/((n−3)(n−5))) [4λ² e^{−λ} Σ_{j≥2} λ^{j−2}/(j−2)! + 12λ e^{−λ} Σ_{j≥1} λ^{j−1}/(j−1)! + 3 e^{−λ} Σ_j λ^j/j!]
= ((n−1)²/((n−3)(n−5))) [4λ² e^{−λ} e^{λ} + 12λ e^{−λ} e^{λ} + 3 e^{−λ} e^{λ}]
∴ μ₄ = ((n−1)²/((n−3)(n−5))) [4λ² + 12λ + 3]
Hence, the skewness and kurtosis of the non-central t-distribution are respectively,
β₁ = μ₃²/μ₂³
= 0/μ₂³
= 0   (the odd central moments vanish by symmetry)
and, β₂ = μ₄/μ₂²
= [(n−1)²(4λ² + 12λ + 3)/((n−3)(n−5))] · [(n−3)²/((n−1)²(2λ+1)²)]
= ((n−3)/(n−5)) · (4λ² + 12λ + 3)/(4λ² + 4λ + 1)
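The series density and the second-moment formula above can be cross-checked numerically. This sketch (my own choices: n = 9, λ = 1.2, series truncated at 40 terms) integrates the density over the real line, which should give 1 and μ₂ = (n−1)(2λ+1)/(n−3):

```python
from math import exp, factorial, sqrt, inf
from scipy.special import beta as B
from scipy.integrate import quad

def f_t(t, nu, lam, terms=40):
    # symmetric series density derived above, with nu degrees of freedom
    total = 0.0
    for j in range(terms):
        w = exp(-lam) * lam**j / factorial(j)
        total += w * (t*t/nu)**j / (sqrt(nu) * B((2*j + 1)/2, nu/2)
                                    * (1 + t*t/nu)**((nu + 2*j + 1)/2))
    return total

n, lam = 9, 1.2
nu = n - 1                       # the moment formulas above use n−1 df
area, _ = quad(f_t, -inf, inf, args=(nu, lam))
m2, _ = quad(lambda t: t*t*f_t(t, nu, lam), -inf, inf)
print(area, m2, (n - 1)*(2*lam + 1)/(n - 3))
```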
Negative definite: A matrix is negative definite if it is symmetric and all of its
eigenvalues are negative.
Positive semidefinite: A positive semidefinite matrix is a Hermitian matrix all of
whose eigenvalues are non-negative.
30. If A = [3 1; 1 3] and A = [2 1; 1 2], then find the eigendecomposition
(i.e. A = PDP⁻¹). Finally prove that PDP⁻¹ = A.
Solution:
Given that,
A = [3 1; 1 3]
The characteristic equation is,
|A − λI| = 0
⇒ |[3 1; 1 3] − λ[1 0; 0 1]| = 0
⇒ |[3−λ 1; 1 3−λ]| = 0
⇒ 9 − 3λ − 3λ + λ² − 1 = 0
⇒ λ² − 6λ + 8 = 0
⇒ λ² − 4λ − 2λ + 8 = 0
⇒ λ(λ − 4) − 2(λ − 4) = 0
⇒ (λ − 4)(λ − 2) = 0
∴ λ = 4, λ = 2
For λ = 4 the eigenvector equation is,
Ax = λx
⇒ [3 1; 1 3][x₁; x₂] = 4[x₁; x₂]
⇒ [3x₁ + x₂; x₁ + 3x₂] = [4x₁; 4x₂]
⇒ 3x₁ + x₂ = 4x₁ − − − − − (i)
x₁ + 3x₂ = 4x₂ − − − − − −(ii)
Solving the equations we find,
x₁ = 1 and x₂ = 1
Now,
For λ = 2 the eigenvector equation is,
Ax = λx
⇒ [3 1; 1 3][x₁; x₂] = 2[x₁; x₂]
⇒ [3x₁ + x₂; x₁ + 3x₂] = [2x₁; 2x₂]
⇒ 3x₁ + x₂ = 2x₁ − − − − − (i)
x₁ + 3x₂ = 2x₂ − − − − − −(ii)
Solving the equations we find,
x₁ = 1 and x₂ = −1
We find the eigenvectors,
[1; 1] and [1; −1]
So, P = [1 −1; 1 1]
And D = [4 0; 0 2], with P⁻¹ = (1/2)[1 1; −1 1]
∴ PDP⁻¹
= [1 −1; 1 1] [4 0; 0 2] · (1/2)[1 1; −1 1]
= (1/2) [4 −2; 4 2] [1 1; −1 1]
= (1/2) [4+2 4−2; 4−2 4+2]
= (1/2) [6 2; 2 6]
= [3 1; 1 3]
= A
(proved)
Again,
Given that,
A = [2 1; 1 2]
The characteristic equation is,
|A − λI| = 0
⇒ |[2 1; 1 2] − λ[1 0; 0 1]| = 0
⇒ |[2−λ 1; 1 2−λ]| = 0
⇒ 4 − 4λ + λ² − 1 = 0
⇒ λ² − 4λ + 3 = 0
⇒ λ² − 3λ − λ + 3 = 0
⇒ λ(λ − 3) − 1(λ − 3) = 0
⇒ (λ − 3)(λ − 1) = 0
∴ λ = 3, λ = 1
For λ = 1 the eigenvector equation is,
Ax = λx
⇒ 2x₁ + x₂ = x₁ and x₁ + 2x₂ = x₂
Solving the equations we find,
x₁ = 1 and x₂ = −1
Now,
For λ = 3 the eigenvector equation is,
Ax = λx
⇒ 2x₁ + x₂ = 3x₁ and x₁ + 2x₂ = 3x₂
Solving the equations we find,
x₁ = 1 and x₂ = 1
So, P = [1 −1; 1 1], D = [3 0; 0 1], P⁻¹ = (1/2)[1 1; −1 1]
∴ PDP⁻¹
= [1 −1; 1 1] [3 0; 0 1] · (1/2)[1 1; −1 1]
= (1/2) [3 −1; 3 1] [1 1; −1 1]
= (1/2) [3+1 3−1; 3−1 3+1]
= [2 1; 1 2]
= A
(proved)
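Both decompositions can be verified with numpy. Since each A is symmetric, `eigh` returns an orthogonal eigenvector matrix P, so P⁻¹ = Pᵀ and A = PDPᵀ:

```python
import numpy as np

for A in (np.array([[3.0, 1.0], [1.0, 3.0]]),
          np.array([[2.0, 1.0], [1.0, 2.0]])):
    w, P = np.linalg.eigh(A)        # eigenvalues in ascending order
    D = np.diag(w)
    # For symmetric A, P is orthogonal, so P^{-1} = P^T
    print(w, np.allclose(P @ D @ P.T, A))
```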
31. If Y is a p × 1 random vector and Y ~ N(0, I), then find the matrix and
sampling distribution of Q² = Σ_{i=1}^p (Y_i − Ȳ)².
Solution:
We have,
Q² = Σ_{i=1}^p (Y_i − Ȳ)²
= Σ Y_i² − pȲ²
= Σ Y_i² − (1/p)(Σ Y_i)²
= (1 − 1/p) Σ Y_i² − (1/p) Σ_{i≠j} Y_i Y_j
= [Y₁ Y₂ ⋯ Y_p] · B · [Y₁; Y₂; ⋮; Y_p]
= Y′BY
Where B is the p × p matrix with diagonal elements 1 − 1/p and off-diagonal
elements −1/p, i.e.
B = I_p − (1/p)J_p, with J_p the p × p matrix of ones.
Since we know that if Z ~ N(0, I), then Z′AZ ~ χ²_{(K)} if A is idempotent of rank K.
Now,
B² = B·B = (I_p − (1/p)J_p)(I_p − (1/p)J_p)
= I_p − (2/p)J_p + (1/p²)J_p²
= I_p − (2/p)J_p + (1/p)J_p   [∵ J_p² = pJ_p]
= I_p − (1/p)J_p
= B
Therefore, B is idempotent.
The rank of B is,
tr(B) = p(1 − 1/p)
= p − 1
Hence,
Q² ~ χ²_{(p−1)}
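A simulation sketch of this result (the dimension p and the replication count are arbitrary choices): Q² should have the mean p − 1 and variance 2(p − 1) of a χ²(p − 1) variate.

```python
import numpy as np

rng = np.random.default_rng(4)
p, reps = 6, 200_000
Y = rng.standard_normal((reps, p))
Q2 = ((Y - Y.mean(axis=1, keepdims=True))**2).sum(axis=1)

print(Q2.mean())   # ≈ p − 1 = 5
print(Q2.var())    # ≈ 2(p − 1) = 10
```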
32. If X_i ~ N(μ, σ²), then what is the distribution of Σ_{i=1}^n ((X_i − X̄)/σ)²?
Solution:
Let,
Y_i = X_i/σ
∴ E[Y_i] = E[X_i]/σ = μ/σ
V[Y_i] = V[X_i]/σ² = σ²/σ² = 1
∴ Y ~ N((μ/σ)1, I)
Let us consider an orthogonal matrix P whose first row is (1/√n)1′, and define Z = PY. Then
Z₁ = (1/√n)(Y₁ + Y₂ + ⋯ + Y_n) = √n Ȳ
∴ Z₁² = nȲ²
Now, since P is orthogonal,
Z′Z = Y′P′PY = Y′Y
⇒ Σ_{i=1}^n Z_i² = Σ_{i=1}^n Y_i²
⇒ Σ_{i=2}^n Z_i² = Σ Y_i² − Z₁²
⇒ Σ_{i=2}^n Z_i² = Σ Y_i² − nȲ²
⇒ Σ_{i=2}^n Z_i² = Σ (Y_i − Ȳ)²
∴ Σ_{i=2}^n Z_i² = Σ_{i=1}^n ((X_i − X̄)/σ)²
Now,
E[Z] = P E[Y] = P(μ/σ)1
V[Z] = P V[Y] P′ = P I P′ = I
∴ Z ~ N(P(μ/σ)1, I), and for i ≥ 2, E[Z_i] = 0 since the rows of P other than the first
are orthogonal to 1.
Since the Z_i, i = 2, ..., n, are independent standard normals,
Σ_{i=1}^n ((X_i − X̄)/σ)² = Σ_{i=2}^n Z_i² ~ χ²_{(n−1)}
(proved)
33. Prove that the adjusted sample variance s² = (1/(n−1)) Σ_{i=1}^n (X_i − X̄)² has a
Gamma distribution with parameters n − 1 and σ², where X_i ~ N(μ, σ²).
Solution:
Let X₁, ⋯ ⋯ ⋯ X_n be n independent random variables, all having a normal
distribution with mean μ and variance σ². Let their sample mean X̄ be defined as
X̄ = (1/n) Σ X_i, write ι for the n × 1 vector of ones, and let M = I − (1/n)ιι′, so that
X − X̄ι = MX. Then
s² = (1/(n−1)) Σ_{i=1}^n (X_i − X̄)²
= (1/(n−1)) (MX)′(MX)
= (1/(n−1)) X′M′MX
(A) = (1/(n−1)) X′MMX
(B) = (1/(n−1)) X′MX
Where, in step A we have used the fact that M is symmetric; in step B we have
used the fact that M is idempotent.
Now define a new random vector
Z = (1/σ)(X − μι)
And note that Z has a standard (mean zero and covariance I) multivariate normal
distribution. Since Mι = 0, we have MX = M(X − μι) = σMZ.
= 𝑍 𝑀𝑍
𝑛−1
𝜎
∴𝑠 = 𝑍 𝑀𝑍
𝑛−1
Is proportional to a quadratic form in a standard normal random vector(𝑍 𝑀𝑍).
We know that the quadratic form 𝑍 𝑀𝑍 has a Chi-square distribution with 𝑡𝑟(𝑀)
degrees of freedom. But the trace of M is,
1
𝑡𝑟(𝑀) = 𝑀 = 1−
𝑛
1
= 𝑛 1− =𝑛−1
𝑛
So, the quadratic form 𝑍 𝑀𝑍 has a Chi-square distribution with 𝑛 − 1 degrees of
freedom. Multiplying a Chi-square random variable with 𝑛 − 1 degrees of
freedom by one obtains a Gamma random variable with parameters 𝑛 −
1 and 𝜎 .
So, summing up, the adjusted sample variance s2 has a Gamma distribution
with parameters 𝑛 − 1 and 𝜎 .
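A simulation check of this result (parameter values are my own choices). In the shape/scale parametrization, (n−1)s²/σ² ~ χ²(n−1) is equivalent to s² ~ Gamma(shape = (n−1)/2, scale = 2σ²/(n−1)), which corresponds to the "parameters n − 1 and σ²" convention used above:

```python
import numpy as np

rng = np.random.default_rng(5)
n, mu, sigma = 10, 2.0, 1.5
X = rng.normal(mu, sigma, size=(200_000, n))
s2 = X.var(axis=1, ddof=1)              # adjusted sample variance

shape, scale = (n - 1)/2, 2*sigma**2/(n - 1)
print(s2.mean(), shape*scale)           # both ≈ σ² = 2.25
print(s2.var(), shape*scale**2)         # ≈ 1.125
```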
The End