1. Define order statistics, sample range, sample mid-range, and the smallest and largest order statistics.
Answer:
Order Statistics: Order statistics are the statistics obtained when the sample values are arranged in ascending order. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution of the continuous type having pdf $f(x)$, $a < x < b$. Let $X_{(1)}$ be the smallest of the $X_i$, $X_{(2)}$ the second smallest, and so on up to $X_{(n)}$, the largest. Then the order statistics are defined by
$$X_{(1)} < X_{(2)} < \cdots < X_{(n)}; \quad a < x < b$$
Sample Range: The sample range is the distance between the smallest and largest order statistics, i.e.
$$R = X_{(n)} - X_{(1)}$$
Sample mid-range: The sample mid-range is the mean of the smallest and largest order statistics, i.e.
$$m = \frac{X_{(1)} + X_{(n)}}{2}$$
Smallest and largest order statistics: Let $X_{1:n}, X_{2:n}, \ldots, X_{n:n}$ be the order statistics. The smallest order statistic is denoted by $X_{1:n}$ and the largest by $X_{n:n}$.
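These definitions can be checked directly on a simulated sample; a minimal sketch in Python (NumPy assumed available, sample values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=10)                 # a hypothetical sample of size n = 10
x_ord = np.sort(x)                       # order statistics X_(1) <= ... <= X_(n)

smallest, largest = x_ord[0], x_ord[-1]  # X_{1:n} and X_{n:n}
sample_range = largest - smallest        # R = X_(n) - X_(1)
sample_midrange = (smallest + largest) / 2
```

Sorting gives the full vector of order statistics at once; the range and mid-range are then just functions of its two extremes.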

2. Write down the applications and properties of order statistics.

Answer:
The applications of order statistics:
 It is used to detect outliers.
 It is used in censored sampling.
 To measure the strength of materials.
 To construct control charts in quality control.
 It is used to test goodness of fit.
 To select the best product.
 It is also used to test inequality requirements.
 It is also used in reliability testing.

Properties of order statistics:

 The cdf of $X_{r:n}$ is
$$F_{r:n}(x) = P(X_{r:n} \le x) = \sum_{i=r}^{n}\binom{n}{i}\{F(x)\}^{i}\{1-F(x)\}^{n-i}; \quad -\infty < x < \infty$$
 The density of $X_{r:n}$ is given by
$$f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}\{F(x)\}^{r-1}\{1-F(x)\}^{n-r}f(x); \quad -\infty < x < \infty$$
 The joint density of $X_{i:n}$ and $X_{j:n}$ ($i < j$) is given by
$$f_{i,j:n}(x_i, x_j) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\{F(x_i)\}^{i-1}\{F(x_j)-F(x_i)\}^{j-i-1}\{1-F(x_j)\}^{n-j}f(x_i)f(x_j);$$
$$-\infty < x_i < x_j < \infty$$
 The joint pdf of all the order statistics is
$$n!\,f(z_1)f(z_2)\cdots f(z_n) \quad \text{for } -\infty < z_1 < z_2 < \cdots < z_n < \infty$$

3. Derive the pdf of the r-th order statistic. Also prove that it is a pdf.
Answer:
Derivation of the pdf of the r-th order statistic:
Let us assume that $X_1, X_2, \ldots, X_n$ is a random sample from an absolutely continuous population with probability density function (pdf) $f(x)$ and cumulative distribution function (cdf) $F(x)$. Let $X_{1:n} \le X_{2:n} \le \cdots \le X_{n:n}$ be the order statistics obtained by arranging the preceding random sample in increasing order of magnitude. Then the event $x < X_{r:n} \le x + \delta x$ is essentially the same as the following event:

$X_i \le x$ for $r-1$ of the $X_i$'s, $x < X_i \le x + \delta x$ for exactly one of the $X_i$'s, and $X_i > x + \delta x$ for the remaining $n-r$ of the $X_i$'s.

By considering $\delta x$ to be small, we may write
$$P(x < X_{r:n} \le x + \delta x) = \frac{n!}{(r-1)!\,(n-r)!}\{F(x)\}^{r-1}\{1 - F(x+\delta x)\}^{n-r}\{F(x+\delta x) - F(x)\} + O\!\left((\delta x)^2\right) \quad ---(i)$$
where $O\!\left((\delta x)^2\right)$, a term of order $(\delta x)^2$, is the probability corresponding to the event of having more than one $X_i$ in the interval $(x, x+\delta x]$.
From (i) we may derive the density function of $X_{r:n}$ $(1 \le r \le n)$ to be
$$f_{r:n}(x) = \lim_{\delta x \to 0}\frac{P(x < X_{r:n} \le x + \delta x)}{\delta x} = \frac{n!}{(r-1)!\,(n-r)!}\{F(x)\}^{r-1}\{1-F(x)\}^{n-r}f(x); \quad -\infty < x < \infty$$

Proof that it is a pdf:
$$\int_{-\infty}^{\infty} f_{r:n}(x)\,dx = \int_{-\infty}^{\infty}\frac{n!}{(r-1)!\,(n-r)!}\{F(x)\}^{r-1}\{1-F(x)\}^{n-r}f(x)\,dx$$
Let $F(x) = z$, so that $dz = f(x)\,dx$; when $x = -\infty$, $z = 0$, and when $x = \infty$, $z = 1$. Then
$$= \frac{n!}{(r-1)!\,(n-r)!}\int_{0}^{1} z^{r-1}(1-z)^{n-r}\,dz = \frac{n!}{(r-1)!\,(n-r)!}\,\beta(r, n-r+1) = \frac{n!}{(r-1)!\,(n-r)!}\cdot\frac{(r-1)!\,(n-r)!}{n!} = 1$$
$$\therefore \int_{-\infty}^{\infty} f_{r:n}(x)\,dx = 1$$
Hence the distribution of the r-th order statistic is a pdf.
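As a quick numerical check of this result, the sketch below integrates $f_{r:n}$ for a Uniform(0,1) parent (so $F(x) = x$, $f(x) = 1$); the choices $n = 5$, $r = 3$ are arbitrary illustrations:

```python
import math
import numpy as np

n, r = 5, 3                                           # arbitrary illustrative choices
x = np.linspace(0.0, 1.0, 100001)
const = math.factorial(n) / (math.factorial(r - 1) * math.factorial(n - r))
f_rn = const * x**(r - 1) * (1 - x)**(n - r)          # f_{r:n}(x) for a U(0,1) parent

# trapezoid rule; the total mass should be ~1
area = np.sum(0.5 * (f_rn[1:] + f_rn[:-1]) * np.diff(x))
```

The beta-function identity in the proof is exactly why this numerical area comes out to 1 for any valid $r$ and $n$.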

4. Derive the joint density function of the r-th and s-th order statistics.
Solution:
Derivation of the joint density function of the i-th and j-th order statistics ($i < j$):
Let us first visualize the event $(x_i < X_{i:n} \le x_i + \delta x_i,\; x_j < X_{j:n} \le x_j + \delta x_j)$: on the axis from $-\infty$ to $\infty$ there are $i-1$ observations below $x_i$, exactly one in $(x_i, x_i + \delta x_i]$, $j-i-1$ between $x_i + \delta x_i$ and $x_j$, exactly one in $(x_j, x_j + \delta x_j]$, and $n-j$ above $x_j + \delta x_j$.
By considering $\delta x_i$ and $\delta x_j$ to be both small, we may write
$$P(x_i < X_{i:n} \le x_i + \delta x_i,\; x_j < X_{j:n} \le x_j + \delta x_j) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\{F(x_i)\}^{i-1}\{F(x_j) - F(x_i+\delta x_i)\}^{j-i-1}$$
$$\times\{1 - F(x_j+\delta x_j)\}^{n-j}\{F(x_i+\delta x_i)-F(x_i)\}\{F(x_j+\delta x_j)-F(x_j)\} + O\!\left((\delta x_i)^2\,\delta x_j\right) + O\!\left(\delta x_i\,(\delta x_j)^2\right)$$
Here the $O$ terms are higher-order terms which correspond to the probabilities of the events of having more than one $X$ in the interval $(x_i, x_i+\delta x_i]$ or in $(x_j, x_j+\delta x_j]$. Dividing by $\delta x_i\,\delta x_j$ and letting both tend to zero, we obtain
$$f_{i,j:n}(x_i, x_j) = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}\{F(x_i)\}^{i-1}\{F(x_j)-F(x_i)\}^{j-i-1}\{1-F(x_j)\}^{n-j}f(x_i)f(x_j);$$
$$-\infty < x_i < x_j < \infty$$

5. Prove that
$$\int\!\!\int_{x<y} f_{r,s:n}(x, y)\,dx\,dy = 1$$

Answer:
$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{y} f_{r,s:n}(x, y)\,dx\,dy = \int_{-\infty}^{\infty}\!\int_{-\infty}^{y}\frac{n!}{(r-1)!\,(s-r-1)!\,(n-s)!}\{F(x)\}^{r-1}\{F(y)-F(x)\}^{s-r-1}\{1-F(y)\}^{n-s}f(x)f(y)\,dx\,dy$$
For the inner integral, substitute $F(x) = z\,F(y)$, so that $f(x)\,dx = F(y)\,dz$; when $x = -\infty$, $z = 0$, and when $x = y$, $z = 1$. Then
$$\int_{-\infty}^{y}\{F(x)\}^{r-1}\{F(y)-F(x)\}^{s-r-1}f(x)\,dx = \{F(y)\}^{s-1}\int_{0}^{1} z^{r-1}(1-z)^{s-r-1}\,dz = \{F(y)\}^{s-1}\,\beta(r, s-r)$$
Hence, using $\beta(r, s-r) = \dfrac{(r-1)!\,(s-r-1)!}{(s-1)!}$ and then substituting $F(y) = w$, $f(y)\,dy = dw$,
$$\int\!\!\int f_{r,s:n}\,dx\,dy = \frac{n!\,\beta(r, s-r)}{(r-1)!\,(s-r-1)!\,(n-s)!}\int_{-\infty}^{\infty}\{F(y)\}^{s-1}\{1-F(y)\}^{n-s}f(y)\,dy = \frac{n!}{(s-1)!\,(n-s)!}\int_{0}^{1} w^{s-1}(1-w)^{n-s}\,dw$$
$$= \frac{n!}{(s-1)!\,(n-s)!}\,\beta(s, n-s+1) = \frac{n!}{(s-1)!\,(n-s)!}\cdot\frac{(s-1)!\,(n-s)!}{n!} = 1 \qquad \text{(proved)}$$

6. What is the role of order statistics in statistical inference?

Solution:
Order statistics play an important role in several optimal inference procedures. In quite a few instances the order statistics become sufficient statistics and thus provide minimum variance unbiased estimators (MVUEs) and most powerful test procedures for the unknown parameters. The vector of order statistics is maximal invariant under the permutation group of transformations. Order statistics appear in a natural way in inference procedures when the sample is censored. They also provide some quick and simple estimators which are quite often highly efficient.

7. Find the k-th moment of the r-th order statistic for the standard power function distribution.
Solution:
We know the probability density function of the standard power function distribution is
$$f(x) = \beta x^{\beta-1}; \quad 0 < x < 1,\ \beta > 0$$
Now,
$$F(x) = \int_0^x f(t)\,dt = \int_0^x \beta t^{\beta-1}\,dt = \beta\cdot\frac{x^{\beta}}{\beta} = x^{\beta}$$
$\therefore$ The density function of the r-th order statistic is
$$f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}(x^{\beta})^{r-1}(1-x^{\beta})^{n-r}\,\beta x^{\beta-1} = \frac{n!}{(r-1)!\,(n-r)!}\,\beta x^{\beta r-1}(1-x^{\beta})^{n-r}$$
Now, the k-th moment of $X_{r:n}$ $(1 \le r \le n)$ is
$$\mu^{(k)}_{r:n} = E(X_{r:n}^k) = \int_0^1 x^k f_{r:n}(x)\,dx = \frac{n!}{(r-1)!\,(n-r)!}\int_0^1 \beta\,x^{k+\beta r-1}(1-x^{\beta})^{n-r}\,dx$$
Let $x^{\beta} = z$, so that $\beta x^{\beta-1}\,dx = dz$ and $x^k = z^{k/\beta}$. Then
$$\mu^{(k)}_{r:n} = \frac{n!}{(r-1)!\,(n-r)!}\int_0^1 z^{r+\frac{k}{\beta}-1}(1-z)^{n-r}\,dz = \frac{n!}{(r-1)!\,(n-r)!}\,\beta\!\left(r+\frac{k}{\beta},\, n-r+1\right) = \frac{n!\;\Gamma\!\left(r+\frac{k}{\beta}\right)}{(r-1)!\;\Gamma\!\left(n+\frac{k}{\beta}+1\right)}$$
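This closed form can be checked by Monte Carlo: sample from $F(x) = x^{\beta}$ via the inverse cdf $X = U^{1/\beta}$, sort, and average the k-th power of the r-th order statistic (all parameter values below are arbitrary illustrations):

```python
import math
import numpy as np

n, r, k, beta = 5, 2, 2, 3.0            # hypothetical choices
rng = np.random.default_rng(1)

# closed form: mu_k = n! * Gamma(r + k/beta) / ((r-1)! * Gamma(n + k/beta + 1))
mu_k = (math.factorial(n) * math.gamma(r + k / beta)
        / (math.factorial(r - 1) * math.gamma(n + k / beta + 1)))

# Monte Carlo: F(x) = x**beta on (0,1), so X = U**(1/beta)
u = rng.random((200000, n))
x_rn = np.sort(u ** (1 / beta), axis=1)[:, r - 1]   # r-th order statistic per sample
mc = np.mean(x_rn ** k)
```

With 200,000 replications the Monte Carlo average should sit within about three decimal places of the formula.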

8. Derive the distribution of the sample range.

Solution:
Let $X_{1:n} \le X_{2:n} \le \cdots \le X_{n:n}$ be the order statistics. Then the joint density of the extremes is
$$f_{1,n:n}(x_1, x_n) = n(n-1)\{F(x_n) - F(x_1)\}^{n-2}f(x_1)f(x_n) \quad \cdots (i)$$
We know the sample range is
$$R = X_{n:n} - X_{1:n}, \qquad r = x_n - x_1 \quad \cdots (ii)$$
And the sample mid-range is
$$T = \frac{X_{1:n} + X_{n:n}}{2}, \qquad t = \frac{x_1 + x_n}{2} \quad \cdots (iii)$$
so that $2t = x_1 + x_n \ \cdots (iv)$. Adding (ii) and (iv),
$$2x_n = r + 2t \;\Rightarrow\; x_n = t + \frac{r}{2}, \qquad x_1 = x_n - r = t - \frac{r}{2}$$
The Jacobian of the transformation is
$$|J| = \begin{vmatrix} \dfrac{\partial x_1}{\partial r} & \dfrac{\partial x_1}{\partial t} \\[4pt] \dfrac{\partial x_n}{\partial r} & \dfrac{\partial x_n}{\partial t} \end{vmatrix} = \begin{vmatrix} -\tfrac{1}{2} & 1 \\[2pt] \tfrac{1}{2} & 1 \end{vmatrix} = \left|-\tfrac{1}{2}-\tfrac{1}{2}\right| = |-1| = 1$$
From (i),
$$f_{R,T}(r, t) = n(n-1)\left\{F\!\left(t+\frac{r}{2}\right) - F\!\left(t-\frac{r}{2}\right)\right\}^{n-2}f\!\left(t-\frac{r}{2}\right)f\!\left(t+\frac{r}{2}\right)\cdot 1; \quad r > 0,\ -\infty < t < \infty$$
Now, integrating out $t$, we derive the pdf of the sample range:
$$f(r) = \int_{-\infty}^{\infty} f_{R,T}(r, t)\,dt = n(n-1)\int_{-\infty}^{\infty}\left\{F\!\left(t+\frac{r}{2}\right) - F\!\left(t-\frac{r}{2}\right)\right\}^{n-2}f\!\left(t-\frac{r}{2}\right)f\!\left(t+\frac{r}{2}\right)dt; \quad r > 0$$
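For a Uniform(0,1) parent this integral works out to $f(r) = n(n-1)r^{n-2}(1-r)$ on $0 < r < 1$, so $E[R] = (n-1)/(n+1)$; a small simulation check (n = 5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5                                      # arbitrary sample size
u = rng.random((200000, n))
r = u.max(axis=1) - u.min(axis=1)          # simulated sample ranges

mean_theory = (n - 1) / (n + 1)            # E[R] for the U(0,1) parent
```

The simulated mean of the range should agree with $(n-1)/(n+1)$ to roughly three decimal places at this replication count.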

9. Find the r-th, 1st, and n-th order statistics for $f(x) = \alpha x^{\alpha-1};\ 0 < x < 1$, and also prove that they are pdfs.
Solution:
Given that $f(x) = \alpha x^{\alpha-1};\ 0 < x < 1$. Now,
$$F(x) = \int_0^x \alpha t^{\alpha-1}\,dt = \alpha\cdot\frac{x^{\alpha}}{\alpha} = x^{\alpha}$$
$\therefore$ The density of the r-th order statistic is
$$f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!}(x^{\alpha})^{r-1}(1-x^{\alpha})^{n-r}\,\alpha x^{\alpha-1} = \frac{n!}{(r-1)!\,(n-r)!}\,\alpha x^{\alpha r-1}(1-x^{\alpha})^{n-r}; \quad 0 < x < 1$$
$\therefore$ The density of the 1st order statistic is
$$f_{1:n}(x) = \frac{n!}{0!\,(n-1)!}(x^{\alpha})^{0}(1-x^{\alpha})^{n-1}\,\alpha x^{\alpha-1} = n\alpha x^{\alpha-1}(1-x^{\alpha})^{n-1}; \quad 0 < x < 1$$
$\therefore$ The density of the n-th order statistic is
$$f_{n:n}(x) = \frac{n!}{(n-1)!\,0!}(x^{\alpha})^{n-1}(1-x^{\alpha})^{0}\,\alpha x^{\alpha-1} = n\alpha x^{n\alpha-1}; \quad 0 < x < 1$$
The r-th order statistic is a pdf:
Proof: Let $x^{\alpha} = z$, so that $\alpha x^{\alpha-1}\,dx = dz$. Then
$$\int_0^1 f_{r:n}(x)\,dx = \frac{n!}{(r-1)!\,(n-r)!}\int_0^1 z^{r-1}(1-z)^{n-r}\,dz = \frac{n!}{(r-1)!\,(n-r)!}\,\beta(r, n-r+1) = 1 \qquad \text{(Proved)}$$
The 1st order statistic is a pdf:
Proof: With the same substitution $x^{\alpha} = z$,
$$\int_0^1 f_{1:n}(x)\,dx = n\int_0^1(1-z)^{n-1}\,dz = n\left[-\frac{(1-z)^{n}}{n}\right]_0^1 = -\left[(1-z)^{n}\right]_0^1 = (-1)(-1) = 1 \qquad \text{(Proved)}$$
The n-th order statistic is a pdf:
Proof:
$$\int_0^1 f_{n:n}(x)\,dx = n\alpha\int_0^1 x^{n\alpha-1}\,dx = n\alpha\cdot\frac{1}{n\alpha}\left[x^{n\alpha}\right]_0^1 = 1 \qquad \text{(Proved)}$$

11. Define statistic. Define central and non-central chi-square distributions.

Solution:
Statistic: A statistic is a measurable characteristic of a sample. Generally, a statistic is used to estimate the value of a population parameter.

Central chi-square variate: If $x_1, x_2, \ldots, x_n$ are $n$ independent random variables with means $\mu_1, \mu_2, \ldots, \mu_n$ and common variance $\sigma^2$, then
$$\chi^2 = \sum_{i=1}^{n}\left(\frac{x_i - \mu_i}{\sigma}\right)^2$$
is called a central chi-square variate with $n$ degrees of freedom.

Non-central chi-square variate: If $x_1, x_2, \ldots, x_n$ are $n$ independent random variables with means $\mu_1, \mu_2, \ldots, \mu_n$ and common variance $\sigma^2$, then
$$\chi'^2 = \sum_{i=1}^{n}\left(\frac{x_i}{\sigma}\right)^2$$
is called a non-central chi-square variate with $n$ degrees of freedom and non-centrality parameter
$$\lambda = \frac{1}{2}\sum_{i=1}^{n}\frac{\mu_i^2}{\sigma^2}$$
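A sketch of both variates by direct simulation (the means and σ below are arbitrary); the sample averages should sit near the theoretical means $n$ and $n + 2\lambda$:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
mu = np.array([1.0, -0.5, 0.3, 2.0])       # arbitrary means; n = 4
n = mu.size

x = rng.normal(mu, sigma, size=(200000, n))
central = np.sum(((x - mu) / sigma) ** 2, axis=1)   # central chi-square, n df
noncentral = np.sum((x / sigma) ** 2, axis=1)       # non-central chi-square
lam = np.sum(mu ** 2) / (2 * sigma ** 2)            # non-centrality parameter
```

Centering each $x_i$ at its own mean before squaring is exactly what removes the non-centrality.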

12. Distinguish between central and non-central distributions.

Solution:
The distinctions between central and non-central distributions are given below:
1. A central distribution describes how a test statistic is distributed when the tested difference is null; a non-central distribution describes the distribution of a test statistic when the null is false (so the alternative hypothesis is true).
2. A central distribution has no non-centrality parameter; a non-central distribution has a non-centrality parameter. If the non-centrality parameter of any distribution is zero, the distribution is identical to a distribution in the central family.

13. Distinguish between central and non-central chi-square distributions.

Solution:
The distinctions between the central and non-central chi-square distributions are given below:
1. The central chi-square variate is $\chi^2 = \sum_i\left(\frac{x_i-\mu_i}{\sigma}\right)^2$, while the non-central chi-square variate is $\chi'^2 = \sum_i\left(\frac{x_i}{\sigma}\right)^2$.
2. The central chi-square distribution has no non-centrality parameter. The non-central chi-square distribution has the non-centrality parameter $\lambda = \frac{1}{2}\sum_i\frac{\mu_i^2}{\sigma^2}$. If we put $\lambda = 0$, the pdf of the non-central chi-square distribution reduces to that of the central chi-square distribution.
3. The mean of the central chi-square distribution equals the number of degrees of freedom $n$; the mean of the non-central chi-square distribution is $n + 2\lambda$.
4. The variance of the central chi-square distribution is $2n$; the variance of the non-central chi-square distribution is $2(n + 4\lambda)$.
5. The mgf of the central $\chi^2$ distribution is $M(t) = (1-2t)^{-n/2}$, while the mgf of the non-central $\chi'^2$ distribution is $M(t) = (1-2t)^{-n/2}\,e^{2\lambda t/(1-2t)}$; $t < \frac{1}{2}$.

14. Write down the applications and properties of the non-central chi-square

distribution.
Solution:
Applications of the non-central chi-square distribution:
 The non-central chi-square distribution is used in evaluating the power of tests of significance of various hypotheses about means and variances of normal populations.
 To test goodness of fit.
 In likelihood ratio tests.
 To test the independence of variances.
Properties of the non-central chi-square distribution:
 The non-central chi-square distribution has a non-centrality parameter $\lambda$.
 When the non-centrality parameter $\lambda = 0$, the pdf of the non-central chi-square distribution tends to the pdf of the central chi-square distribution with $n$ degrees of freedom.
 The non-central chi-square distribution has the additive property.
 The mean of $\chi'^2$ is $n + 2\lambda$ and the variance of $\chi'^2$ is $2(n + 4\lambda)$.
 The mgf of the $\chi'^2$ distribution is $M(t) = (1-2t)^{-n/2}\,e^{2\lambda t/(1-2t)}$; $t < \frac{1}{2}$.
 The non-central chi-square distribution is positively skewed and leptokurtic.

15. Let $\{x_j\}$ be a sequence of random variables independently distributed as $N(\mu_j, \sigma^2)$. Find the characteristic function of
$$Z = \sum_{j=1}^{n}\frac{x_j^2}{\sigma^2}$$
Hence find the density function of Z.

Solution:
Let $x_1, x_2, \ldots, x_n$ be independent normal variates with means $\mu_1, \mu_2, \ldots, \mu_n$ and constant variance $\sigma^2$. Then $Z = \sum_j x_j^2/\sigma^2$ is distributed as a non-central chi-square variate with $n$ degrees of freedom and non-centrality parameter $\lambda = \frac{1}{2}\sum_j\mu_j^2/\sigma^2$.

The characteristic function of $Z$ is
$$Q_Z(t) = E\!\left(e^{itZ}\right) = E\!\left(e^{it\sum_j x_j^2/\sigma^2}\right) = \prod_{j=1}^{n} E\!\left(e^{itx_j^2/\sigma^2}\right)$$
Now, for a single term,
$$E\!\left(e^{itx_j^2/\sigma^2}\right) = \int_{-\infty}^{\infty} e^{itx^2/\sigma^2}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu_j)^2}{2\sigma^2}}\,dx = \int_{-\infty}^{\infty}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{1}{2\sigma^2}\left\{x^2(1-2it) - 2x\mu_j + \mu_j^2\right\}\right]dx$$
Completing the square in $x$,
$$x^2(1-2it) - 2x\mu_j + \mu_j^2 = \left(x\sqrt{1-2it} - \frac{\mu_j}{\sqrt{1-2it}}\right)^2 + \mu_j^2\left(1 - \frac{1}{1-2it}\right)$$
Substituting $p = \frac{1}{\sigma}\left(x\sqrt{1-2it} - \frac{\mu_j}{\sqrt{1-2it}}\right)$, so that $dx = \sigma\,dp/\sqrt{1-2it}$ (with $p$ running from $-\infty$ to $\infty$), and using $\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-p^2/2}\,dp = 1$, we get
$$E\!\left(e^{itx_j^2/\sigma^2}\right) = (1-2it)^{-1/2}\exp\!\left[-\frac{\mu_j^2}{2\sigma^2}\left(1-\frac{1}{1-2it}\right)\right]$$
Multiplying over $j = 1, \ldots, n$ and writing $\lambda = \frac{1}{2}\sum_j\mu_j^2/\sigma^2$,
$$Q_Z(t) = (1-2it)^{-n/2}\exp\!\left[-\lambda + \frac{\lambda}{1-2it}\right] = (1-2it)^{-n/2}\,e^{-\lambda}\,e^{\lambda/(1-2it)}$$
We know
$$e^{\lambda/(1-2it)} = 1 + \frac{\lambda}{1-2it} + \frac{\lambda^2}{2!\,(1-2it)^2} + \cdots = \sum_{j=0}^{\infty}\frac{\lambda^j}{j!\,(1-2it)^j}$$
So the characteristic function of $Z$ is
$$Q_Z(t) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\,(1-2it)^{-\frac{n+2j}{2}}$$
Using the inversion theorem, the density function of $Z$ is
$$f(z) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-itz}\,Q_Z(t)\,dt = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{-itz}}{(1-2it)^{\frac{n+2j}{2}}}\,dt$$
Each term is the inversion of the characteristic function of a central chi-square with $n+2j$ degrees of freedom, so
$$f(z) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{e^{-z/2}\,z^{\frac{n+2j}{2}-1}}{2^{\frac{n+2j}{2}}\,\Gamma\!\left(\frac{n+2j}{2}\right)}; \quad 0 < z < \infty$$
which is the density function of $Z$.
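The series form of $f(z)$ can be checked numerically: truncating the Poisson-weighted mixture and integrating should give total mass 1 and mean $n + 2\lambda$ (the values of $n$ and $\lambda$ below are arbitrary):

```python
import math
import numpy as np

def ncx2_pdf(z, n, lam, terms=60):
    """Truncated Poisson(lam)-weighted mixture of central chi-square(n+2j) densities."""
    total = np.zeros_like(z)
    for j in range(terms):
        w = math.exp(-lam) * lam**j / math.factorial(j)
        k = n + 2 * j
        total += w * z**(k / 2 - 1) * np.exp(-z / 2) / (2**(k / 2) * math.gamma(k / 2))
    return total

z = np.linspace(1e-6, 80.0, 40001)
f = ncx2_pdf(z, n=4, lam=2.5)
dz = np.diff(z)
area = np.sum(0.5 * (f[1:] + f[:-1]) * dz)                       # trapezoid rule
mean = np.sum(0.5 * (z[1:] * f[1:] + z[:-1] * f[:-1]) * dz)
```

Sixty Poisson terms are far more than needed for $\lambda = 2.5$; the truncation error is negligible next to the quadrature error.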

16. Find the mean and variance of the non-central chi-square distribution.

Solution:
Mean of the $\chi'^2$ distribution: By the series form of the density, $Z$ is a Poisson($\lambda$) mixture of central chi-square variates with $n+2j$ degrees of freedom, each of which has mean $n+2j$. Hence
$$E(Z) = \int_0^{\infty} z\,f(z)\,dz = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}(n+2j) = n\,e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!} + 2e^{-\lambda}\sum_{j=1}^{\infty}\frac{\lambda^j}{(j-1)!} = n\,e^{-\lambda}e^{\lambda} + 2\lambda\,e^{-\lambda}e^{\lambda} = n + 2\lambda$$
$$\therefore \text{Mean} = n + 2\lambda$$
Variance of the $\chi'^2$ distribution: A central chi-square with $n+2j$ degrees of freedom has second raw moment $(n+2j)(n+2j+2)$, so
$$E(Z^2) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}(n+2j)(n+2j+2) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\left\{n^2 + 2n + 4j^2 + 4j(n+1)\right\}$$
Using $\sum_j e^{-\lambda}\lambda^j j/j! = \lambda$ and $\sum_j e^{-\lambda}\lambda^j j^2/j! = \lambda^2 + \lambda$,
$$E(Z^2) = n^2 + 2n + 4(\lambda^2 + \lambda) + 4\lambda(n+1) = n^2 + 2n + 4n\lambda + 4\lambda^2 + 8\lambda$$
$$V(Z) = E(Z^2) - \{E(Z)\}^2 = n^2 + 2n + 4n\lambda + 4\lambda^2 + 8\lambda - (n+2\lambda)^2 = 2n + 8\lambda$$
$$\therefore \text{Variance} = 2(n + 4\lambda)$$
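A simulation sketch of these two formulas (mean $n + 2\lambda$, variance $2(n + 4\lambda)$); the μ values below are arbitrary and σ = 1:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, 2.0, 0.5])             # arbitrary means; sigma = 1, n = 3
n = mu.size
lam = np.sum(mu ** 2) / 2.0                # non-centrality parameter

z = np.sum(rng.normal(mu, 1.0, size=(400000, n)) ** 2, axis=1)
```

The empirical mean and variance of `z` should land close to $n + 2\lambda$ and $2n + 8\lambda$ respectively.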

17. Establish the relationship between the central and non-central chi-square

distributions.
Solution:
The probability density function of the $\chi'^2$ distribution is
$$f(\chi'^2) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{e^{-\chi'^2/2}\,(\chi'^2)^{\frac{n+2j}{2}-1}}{2^{\frac{n+2j}{2}}\,\Gamma\!\left(\frac{n+2j}{2}\right)} = e^{-\lambda}\left[\frac{e^{-\chi'^2/2}\,(\chi'^2)^{\frac{n}{2}-1}}{2^{\frac{n}{2}}\,\Gamma\!\left(\frac{n}{2}\right)} + \lambda\cdot\frac{e^{-\chi'^2/2}\,(\chi'^2)^{\frac{n+2}{2}-1}}{2^{\frac{n+2}{2}}\,\Gamma\!\left(\frac{n+2}{2}\right)} + \cdots\right] \quad ---(i)$$
Now, putting $\lambda = 0$ in equation (i), we obtain
$$f(\chi^2) = \frac{e^{-\chi^2/2}\,(\chi^2)^{\frac{n}{2}-1}}{2^{\frac{n}{2}}\,\Gamma\!\left(\frac{n}{2}\right)}$$
which is the pdf of the central $\chi^2$ distribution with $n$ degrees of freedom.

Hence, we may conclude that by putting the non-centrality parameter $\lambda = 0$, the non-central $\chi^2$ distribution converges to the central $\chi^2$ distribution. This is the required relationship between the central and non-central $\chi^2$ distributions.

18. Find the cumulant generating function of the non-central chi-square

distribution. Also find the skewness and kurtosis, and interpret.
Solution:
We know the characteristic function of the non-central $\chi^2$ distribution is
$$Q_Z(t) = \frac{e^{-\lambda + \frac{\lambda}{1-2it}}}{(1-2it)^{n/2}}$$
Hence the cumulant generating function of the non-central $\chi^2$ distribution is
$$K_Z(t) = \log Q_Z(t) = -\lambda + \frac{\lambda}{1-2it} - \frac{n}{2}\log(1-2it)$$
Expanding $(1-2it)^{-1} = 1 + 2it + 4(it)^2 + 8(it)^3 + 16(it)^4 + \cdots$ and $-\log(1-2it) = 2it + \frac{(2it)^2}{2} + \frac{(2it)^3}{3} + \frac{(2it)^4}{4} + \cdots$,
$$K_Z(t) = (n+2\lambda)\,it + (n+4\lambda)(it)^2 + \left(\tfrac{4n}{3}+8\lambda\right)(it)^3 + (2n+16\lambda)(it)^4 + \cdots$$
$$= (n+2\lambda)\,it + 2(n+4\lambda)\frac{(it)^2}{2!} + 8(n+6\lambda)\frac{(it)^3}{3!} + 48(n+8\lambda)\frac{(it)^4}{4!} + \cdots$$
Now, reading off the coefficient of $\frac{(it)^r}{r!}$, the cumulants are
$$\kappa_1 = n + 2\lambda, \qquad \kappa_2 = 2(n+4\lambda), \qquad \kappa_3 = 8(n+6\lambda), \qquad \kappa_4 = 48(n+8\lambda)$$
$\therefore$ The skewness is
$$\beta_1 = \frac{\kappa_3^2}{\kappa_2^3} = \frac{64(n+6\lambda)^2}{8(n+4\lambda)^3} = \frac{8(n+6\lambda)^2}{(n+4\lambda)^3} > 0$$
$\therefore$ The excess kurtosis is
$$\gamma_2 = \frac{\kappa_4}{\kappa_2^2} = \frac{48(n+8\lambda)}{4(n+4\lambda)^2} = \frac{12(n+8\lambda)}{(n+4\lambda)^2} > 0$$
Since $\beta_1 > 0$ and $\gamma_2 > 0$, the distribution is positively skewed and leptokurtic; both quantities tend to 0 as $n$ or $\lambda$ grows, so the distribution approaches normality.

19. Define the non-central F distribution and distinguish it from the central F

distribution.
Solution:
Non-central F distribution: The non-central F variate is defined as the ratio
$$F' = \frac{z_1/n_1}{z_2/n_2}$$
where $z_1$ is a non-central $\chi'^2$ random variable with $n_1$ degrees of freedom and non-centrality parameter $\lambda$, and $z_2$ is a central $\chi^2$ random variable with $n_2$ degrees of freedom, independent of $z_1$.
The probability density function of the non-central F distribution is
$$f(v) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}v^{\frac{n_1+2j}{2}-1}}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)\left(1+\frac{n_1}{n_2}v\right)^{\frac{n_1+n_2+2j}{2}}}; \quad 0 < v < \infty$$
where $v = \dfrac{z_1/n_1}{z_2/n_2} = F'$.

Differences between the central and non-central F distributions:
(i) The central F variate is defined as the ratio $F = \dfrac{\chi^2_{(n_1)}/n_1}{\chi^2_{(n_2)}/n_2}$, whereas the non-central F variate is $F' = \dfrac{\chi'^2_{(n_1,\lambda)}/n_1}{\chi^2_{(n_2)}/n_2}$.
(ii) The mean of the central F distribution is $\dfrac{n_2}{n_2-2}$, whereas the mean of the non-central F distribution is $\dfrac{n_2(n_1+2\lambda)}{n_1(n_2-2)}$.
(iii) The pdf of the central F distribution is the single $j = 0$ term (the $\lambda = 0$ case) of the series above, whereas the pdf of the non-central F distribution is the full Poisson-weighted series; in both cases $0 < F < \infty$.


20. Derive the non-central F distribution. Show that it is a pdf. Write down the
applications and properties of the non-central F distribution.
Solution:
Derivation:
Suppose $z_1 \sim \chi'^2_{(n_1,\lambda)}$ and, independently, $z_2 \sim \chi^2_{(n_2)}$. Then
$$f(z_1, z_2) = f(z_1)\,f(z_2) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{e^{-z_1/2}\,z_1^{\frac{n_1+2j}{2}-1}}{2^{\frac{n_1+2j}{2}}\,\Gamma\!\left(\frac{n_1+2j}{2}\right)}\cdot\frac{e^{-z_2/2}\,z_2^{\frac{n_2}{2}-1}}{2^{\frac{n_2}{2}}\,\Gamma\!\left(\frac{n_2}{2}\right)}$$
Let
$$u = z_1 + z_2 \quad ---(i), \qquad v = \frac{z_1/n_1}{z_2/n_2} \quad ---(ii)$$
From (ii), $z_1 = \frac{n_1}{n_2}v\,z_2$; substituting into (i) and solving,
$$z_1 = \frac{\frac{n_1}{n_2}vu}{1+\frac{n_1}{n_2}v}, \qquad z_2 = \frac{u}{1+\frac{n_1}{n_2}v}$$
The Jacobian of the transformation $(z_1, z_2) \to (u, v)$ is
$$|J| = \begin{vmatrix} \dfrac{\partial z_1}{\partial u} & \dfrac{\partial z_1}{\partial v} \\[4pt] \dfrac{\partial z_2}{\partial u} & \dfrac{\partial z_2}{\partial v} \end{vmatrix} = \frac{\frac{n_1}{n_2}u}{\left(1+\frac{n_1}{n_2}v\right)^2}$$
Hence the joint density of $u$ and $v$ is
$$f(u, v) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{e^{-u/2}\,u^{\frac{n_1+n_2+2j}{2}-1}\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}v^{\frac{n_1+2j}{2}-1}}{2^{\frac{n_1+n_2+2j}{2}}\,\Gamma\!\left(\frac{n_1+2j}{2}\right)\Gamma\!\left(\frac{n_2}{2}\right)\left(1+\frac{n_1}{n_2}v\right)^{\frac{n_1+n_2+2j}{2}}}$$
Therefore, the marginal density of $v$ is obtained by integrating out $u$, using $\int_0^{\infty}e^{-u/2}u^{\frac{n_1+n_2+2j}{2}-1}\,du = 2^{\frac{n_1+n_2+2j}{2}}\,\Gamma\!\left(\frac{n_1+n_2+2j}{2}\right)$:
$$f(v) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}v^{\frac{n_1+2j}{2}-1}\,\Gamma\!\left(\frac{n_1+n_2+2j}{2}\right)}{\Gamma\!\left(\frac{n_1+2j}{2}\right)\Gamma\!\left(\frac{n_2}{2}\right)\left(1+\frac{n_1}{n_2}v\right)^{\frac{n_1+n_2+2j}{2}}}$$
$$\therefore f(v) = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}v^{\frac{n_1+2j}{2}-1}}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)\left(1+\frac{n_1}{n_2}v\right)^{\frac{n_1+n_2+2j}{2}}}; \quad 0 < v < \infty$$
The non-central F distribution is a pdf (proof):
Substitute $t = \frac{n_1}{n_2}v$, so that $dv = \frac{n_2}{n_1}\,dt$; then
$$\int_0^{\infty} f(v)\,dv = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{1}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)}\int_0^{\infty}\frac{t^{\frac{n_1+2j}{2}-1}}{(1+t)^{\frac{n_1+n_2+2j}{2}}}\,dt$$
$$= e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)} \qquad \left[\because \beta(p,q) = \int_0^{\infty}\frac{t^{p-1}}{(1+t)^{p+q}}\,dt\right]$$
$$= e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!} = e^{-\lambda}e^{\lambda} = 1$$
$$\therefore \int_0^{\infty} f(v)\,dv = 1$$
$\therefore$ The non-central F distribution is a pdf. (Proved)
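The derivation can be exercised by simulation: build $z_1$ as a non-central chi-square via shifted normals, $z_2$ as a central chi-square, and compare the mean of $v$ with $n_2(n_1+2\lambda)/(n_1(n_2-2))$ (all parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, lam = 4, 10, 1.5                       # arbitrary choices
N = 400000

mu = np.zeros(n1)
mu[0] = np.sqrt(2 * lam)                       # so that sum(mu^2)/2 = lam
z1 = np.sum(rng.normal(mu, 1.0, size=(N, n1)) ** 2, axis=1)   # chi'2(n1, lam)
z2 = np.sum(rng.normal(0.0, 1.0, size=(N, n2)) ** 2, axis=1)  # chi2(n2)
v = (z1 / n1) / (z2 / n2)                      # non-central F variate

mean_theory = n2 * (n1 + 2 * lam) / (n1 * (n2 - 2))
```

Putting all the non-centrality into a single shifted coordinate is legitimate because only $\sum\mu_i^2$ enters the distribution of $z_1$.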

21. Write down the applications and properties of the non-central F distribution.

Solution:

Applications:
 The non-central F distribution is used extensively in the theory of regression and analysis of variance.
 The main application of the non-central F distribution is to calculate the power of a hypothesis test relative to a particular alternative.
 It can be used to test the homogeneity of several means.
 It can be used to test the equality of two population variances.

Properties:
 The non-central F distribution is a continuous distribution having the range 0 to ∞.
 The distribution has three parameters $n_1, n_2, \lambda$, where $n_1, n_2$ are the degrees of freedom and $\lambda$ is the non-centrality parameter.
 It reduces to the central F distribution when $\lambda = 0$.

23. Find the mode of the non-central F distribution.

Solution:
We know the pdf of the non-central F distribution is the series
$$f(F') = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}(F')^{\frac{n_1+2j}{2}-1}}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)\left(1+\frac{n_1}{n_2}F'\right)^{\frac{n_1+n_2+2j}{2}}}$$
Working term by term (fix $j$), take logarithms of the $j$-th term:
$$\log f_j(F') = \text{const} + \left(\frac{n_1+2j}{2}-1\right)\log F' - \frac{n_1+n_2+2j}{2}\log\!\left(1+\frac{n_1}{n_2}F'\right)$$
We know that the mode is the value of $F'$ for which $f'(F') = 0$ and $f''(F') < 0$. Differentiating with respect to $F'$ and setting the derivative to zero (since $f_j(F') \ne 0$ at an interior point),
$$\frac{\frac{n_1+2j}{2}-1}{F'} = \frac{\frac{n_1+n_2+2j}{2}\cdot\frac{n_1}{n_2}}{1+\frac{n_1}{n_2}F'}$$
$$\Rightarrow \frac{n_1+2j-2}{F'} = \frac{n_1(n_1+n_2+2j)}{n_2+n_1F'}$$
$$\Rightarrow (n_1+2j-2)(n_2+n_1F') = n_1(n_1+n_2+2j)F'$$
$$\Rightarrow n_2(n_1+2j-2) = F'\left[n_1(n_1+n_2+2j) - n_1(n_1+2j-2)\right] = F'\,n_1(n_2+2)$$
$$\Rightarrow F' = \frac{n_2(n_1+2j-2)}{n_1(n_2+2)}$$
which is the mode of the $j$-th term of the series; at $j = 0$ it reduces to the familiar mode of the central F distribution, $\dfrac{n_2(n_1-2)}{n_1(n_2+2)}$.

24. Find the k-th raw moment of the non-central F distribution. Hence also find the mean and variance.
Solution:
The k-th raw moment of the non-central F distribution is
$$\mu'_k = E[v^k] = \int_0^{\infty} v^k f(v)\,dv = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{\left(\frac{n_1}{n_2}\right)^{\frac{n_1+2j}{2}}}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)}\int_0^{\infty}\frac{v^{\frac{n_1+2j+2k}{2}-1}}{\left(1+\frac{n_1}{n_2}v\right)^{\frac{n_1+n_2+2j}{2}}}\,dv$$
Substituting $t = \frac{n_1}{n_2}v$, $dv = \frac{n_2}{n_1}\,dt$, the integral becomes $\left(\frac{n_2}{n_1}\right)^{\frac{n_1+2j+2k}{2}}\beta\!\left(\frac{n_1+2j+2k}{2}, \frac{n_2-2k}{2}\right)$, so
$$\mu'_k = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\left(\frac{n_2}{n_1}\right)^{k}\frac{\beta\!\left(\frac{n_1+2j+2k}{2}, \frac{n_2-2k}{2}\right)}{\beta\!\left(\frac{n_1+2j}{2}, \frac{n_2}{2}\right)}; \quad n_2 > 2k$$
Now putting $k = 1$, we obtain
$$\mu'_1 = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{n_2}{n_1}\cdot\frac{\Gamma\!\left(\frac{n_1+2j+2}{2}\right)\Gamma\!\left(\frac{n_2-2}{2}\right)}{\Gamma\!\left(\frac{n_1+2j}{2}\right)\Gamma\!\left(\frac{n_2}{2}\right)} = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{n_2}{n_1}\cdot\frac{n_1+2j}{n_2-2}$$
$$= \frac{n_2}{n_1(n_2-2)}\left[n_1\,e^{-\lambda}\sum_{j}\frac{\lambda^j}{j!} + 2\,e^{-\lambda}\sum_{j\ge 1}\frac{\lambda^j}{(j-1)!}\right] = \frac{n_2(n_1+2\lambda)}{n_1(n_2-2)} = \text{mean}$$
$\therefore$ The mean of the non-central F distribution is $\dfrac{n_2(n_1+2\lambda)}{n_1(n_2-2)}$.
Now putting $k = 2$, we obtain
$$\mu'_2 = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\left(\frac{n_2}{n_1}\right)^2\frac{(n_1+2j+2)(n_1+2j)}{(n_2-2)(n_2-4)}$$
Writing $(n_1+2j)(n_1+2j+2) = n_1(n_1+2) + 4j(n_1+1) + 4j^2$, and using $E(J) = \lambda$ and $E\{J(J-1)\} = \lambda^2$ for $J \sim \text{Poisson}(\lambda)$,
$$\mu'_2 = \frac{n_2^2}{n_1^2(n_2-2)(n_2-4)}\left[n_1(n_1+2) + 4\lambda(n_1+2) + 4\lambda^2\right]$$
$\therefore$ The variance is
$$\mu_2 = \mu'_2 - \mu'^2_1 = \frac{n_2^2\left\{n_1(n_1+2) + 4\lambda(n_1+2) + 4\lambda^2\right\}}{n_1^2(n_2-2)(n_2-4)} - \frac{n_2^2(n_1+2\lambda)^2}{n_1^2(n_2-2)^2}$$

25. Define and derive the non-central t distribution. Is it a symmetric distribution?

Solution:
Non-central t distribution: If $x_1, x_2, \ldots, x_n$ are $n$ independent random variables distributed as $N(\mu, \sigma^2)$, then the statistic $t' = \dfrac{\sqrt{n}\,\bar{x}}{s}$ is called a non-central t variate with $n-1$ degrees of freedom and non-centrality parameter $\lambda = \dfrac{n\mu^2}{2\sigma^2}$. The pdf derived below is
$$f(t') = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{(t'^2)^{j}\,(n-1)^{-\frac{2j+1}{2}}}{\beta\!\left(\frac{2j+1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n+2j}{2}}}; \quad -\infty < t' < \infty$$
Derivation:
We can write the non-central t statistic as
$$t' = \frac{\sqrt{n}\,\bar{x}}{s} = \frac{\sqrt{n}\,\bar{x}/\sigma}{s/\sigma}$$
Now,
$$E\!\left(\frac{\sqrt{n}\,\bar{x}}{\sigma}\right) = \frac{\sqrt{n}}{\sigma}E(\bar{x}) = \frac{\sqrt{n}\,\mu}{\sigma}, \qquad V\!\left(\frac{\sqrt{n}\,\bar{x}}{\sigma}\right) = \frac{n}{\sigma^2}V(\bar{x}) = \frac{n}{\sigma^2}\cdot\frac{\sigma^2}{n} = 1$$
So $\sqrt{n}\,\bar{x}/\sigma$ is distributed as $N(\sqrt{n}\mu/\sigma,\,1)$, and hence its square is a non-central $\chi'^2$ with 1 degree of freedom and non-centrality parameter $\lambda = n\mu^2/(2\sigma^2)$; independently, $(n-1)s^2/\sigma^2 = \sum(x_i-\bar{x})^2/\sigma^2 \sim \chi^2_{(n-1)}$. Squaring,
$$t'^2 = \frac{n\bar{x}^2/\sigma^2}{\dfrac{(n-1)s^2}{\sigma^2}\Big/(n-1)} = \frac{(n-1)\,z_1}{z_2}; \qquad z_1 \sim \chi'^2_{(1,\lambda)},\ z_2 \sim \chi^2_{(n-1)}$$
Let $s_0 = z_1 + z_2$, so that
$$z_1 = \frac{s_0\,\frac{t'^2}{n-1}}{1+\frac{t'^2}{n-1}}, \qquad z_2 = \frac{s_0}{1+\frac{t'^2}{n-1}}$$
Computing the Jacobian of $(z_1, z_2) \to (t', s_0)$, forming the joint density $f(z_1)f(z_2)$, and integrating out $s_0$ with $\int_0^{\infty}e^{-s_0/2}s_0^{\frac{n+2j}{2}-1}\,ds_0 = 2^{\frac{n+2j}{2}}\,\Gamma\!\left(\frac{n+2j}{2}\right)$, exactly as in the non-central F derivation, yields the marginal density
$$f(t') = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{(t'^2)^{j}\,(n-1)^{-\frac{2j+1}{2}}}{\beta\!\left(\frac{2j+1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n+2j}{2}}}$$
This form is a symmetric distribution (proof):
Putting $-t'$ in place of $t'$, the density depends on $t'$ only through $t'^2 = (-t')^2$, so
$$f(-t') = f(t')$$
Therefore this density is symmetric about zero. (Caution: because the derivation squares the numerator, this is the density obtained when the sign of $t'$ is symmetrized; the exact non-central t distribution, which keeps the sign of the normal numerator, is not symmetric when $\lambda \ne 0$.)

26. State some important applications and properties of the non-central t

distribution.
Solution:

Applications:

 It is used in regression analysis and the analysis of variance.

 It is used to test hypotheses about a population mean and in pairwise tests.
 It is used to test the significance of observed partial and multiple correlation coefficients.
 It is used to test the equality of two population means.
 It is used in computing the power of tests of H0.

Properties:

 The pdf of the (symmetrized) non-central t distribution is
$$f(t') = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{(t'^2)^{j}\,(n-1)^{-\frac{2j+1}{2}}}{\beta\!\left(\frac{2j+1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n+2j}{2}}}; \quad -\infty < t' < \infty$$
with $n-1$ degrees of freedom and non-centrality parameter $\lambda = \frac{n\mu^2}{2\sigma^2}$.

 This form of the non-central t distribution is symmetric.
 Its mean is zero, but its variance is $\frac{(n-1)(1+2\lambda)}{n-3}$.
 All its odd-order moments are zero.
 The distribution is continuous and its range is $-\infty < t' < \infty$.
 When the non-centrality parameter $\lambda = 0$, the non-central t distribution tends to the central t distribution.

27. Show that under a certain condition the non-central t distribution tends to the central t distribution.
Solution:
We know
$$f(t') = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{(t'^2)^{j}\,(n-1)^{-\frac{2j+1}{2}}}{\beta\!\left(\frac{2j+1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n+2j}{2}}}$$
$$= e^{-\lambda}\left[\frac{(n-1)^{-\frac{1}{2}}}{\beta\!\left(\frac{1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n}{2}}} + \lambda\cdot\frac{t'^2\,(n-1)^{-\frac{3}{2}}}{\beta\!\left(\frac{3}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n+2}{2}}} + \cdots\right]$$
Putting $\lambda = 0$,
$$f(t') = \frac{1}{\sqrt{n-1}\,\beta\!\left(\frac{1}{2}, \frac{n-1}{2}\right)\left(1+\frac{t'^2}{n-1}\right)^{\frac{n}{2}}}; \quad -\infty < t' < \infty$$
which is the pdf of the central t distribution with $n-1$ degrees of freedom.

Hence, we may conclude that by putting the non-centrality parameter $\lambda = 0$, the non-central t distribution converges to the central t distribution.

28. Find the skewness and kurtosis of the non-central t-distribution.

Solution:
As this form of the non-central t-distribution is a symmetric distribution, all odd-order moments of the distribution are zero, and the raw moments coincide with the central moments, i.e.
$$\mu_{2r} = \mu'_{2r} = E[t'^{2r}] = \int_{-\infty}^{\infty}(t')^{2r}f(t')\,dt' = 2\int_0^{\infty}(t')^{2r}f(t')\,dt'$$
Substituting $z = \frac{t'^2}{n-1}$, so that $dt' = \frac{\sqrt{n-1}}{2\sqrt{z}}\,dz$, each term of the series reduces to a beta-type integral $\int_0^{\infty}z^{r+j+\frac{1}{2}-1}(1+z)^{-\frac{n+2j}{2}}\,dz = \beta\!\left(r+j+\frac{1}{2}, \frac{n-1-2r}{2}\right)$, which gives
$$\mu_{2r} = e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\,(n-1)^{r}\cdot\frac{\Gamma\!\left(r+j+\frac{1}{2}\right)\Gamma\!\left(\frac{n-1-2r}{2}\right)}{\Gamma\!\left(j+\frac{1}{2}\right)\Gamma\!\left(\frac{n-1}{2}\right)}$$
Now, putting $r = 1$, and using $\Gamma\!\left(j+\frac{3}{2}\right)/\Gamma\!\left(j+\frac{1}{2}\right) = \frac{2j+1}{2}$ and $\Gamma\!\left(\frac{n-3}{2}\right)/\Gamma\!\left(\frac{n-1}{2}\right) = \frac{2}{n-3}$,
$$\mu_2 = (n-1)\,e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{2j+1}{n-3} = \frac{n-1}{n-3}\left[e^{-\lambda}\sum_j\frac{\lambda^j}{j!} + 2\lambda\,e^{-\lambda}\sum_{j\ge1}\frac{\lambda^{j-1}}{(j-1)!}\right] = \frac{(n-1)(2\lambda+1)}{n-3}$$
Again, putting $r = 2$, and using $\Gamma\!\left(j+\frac{5}{2}\right)/\Gamma\!\left(j+\frac{1}{2}\right) = \frac{(2j+3)(2j+1)}{4}$ and $\Gamma\!\left(\frac{n-5}{2}\right)/\Gamma\!\left(\frac{n-1}{2}\right) = \frac{4}{(n-3)(n-5)}$,
$$\mu_4 = (n-1)^2\,e^{-\lambda}\sum_{j=0}^{\infty}\frac{\lambda^j}{j!}\cdot\frac{(2j+3)(2j+1)}{(n-3)(n-5)}$$
Since $(2j+3)(2j+1) = 4j(j-1) + 12j + 3$ and, for $J \sim \text{Poisson}(\lambda)$, $E\{J(J-1)\} = \lambda^2$ and $E(J) = \lambda$,
$$\mu_4 = \frac{(n-1)^2}{(n-3)(n-5)}\left[4\lambda^2 + 12\lambda + 3\right]$$
Hence the skewness and kurtosis of the non-central t-distribution are, respectively,
$$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{0}{\mu_2^3} = 0$$
$$\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{(n-1)^2\left[4\lambda^2+12\lambda+3\right]}{(n-3)(n-5)}\cdot\frac{(n-3)^2}{(n-1)^2(2\lambda+1)^2} = \frac{n-3}{n-5}\cdot\frac{4\lambda^2+12\lambda+3}{(2\lambda+1)^2}$$
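The $\mu_2$ formula can be checked by simulating $t'^2 = z_1/(z_2/(n-1))$ with $z_1$ a non-central $\chi'^2$ on 1 df (the values n = 12, λ = 1 are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(6)
n, lam = 12, 1.0                           # arbitrary choices; df = n - 1
df = n - 1
N = 400000

z1 = rng.normal(np.sqrt(2 * lam), 1.0, N) ** 2    # chi'2(1, lam): squared shifted normal
z2 = rng.chisquare(df, N)
t2 = z1 / (z2 / df)                        # simulated t'^2

mu2_theory = df * (2 * lam + 1) / (df - 2)
```

Since $E[z_1] = 1 + 2\lambda$ and $E[df/z_2] = df/(df-2)$ independently, the simulated mean of `t2` should match the closed form.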

29. Define quadratic form, positive definite, negative definite, positive

semidefinite, and negative semidefinite.
Solution:
Quadratic form: A homogeneous polynomial of the second degree in any number of variables is called a quadratic form. In general, a quadratic form can be written as
$$Q(x) = \sum_{i}\sum_{j} a_{ij}\,x_i x_j = x'Ax$$
Positive definite: A positive definite matrix is a symmetric matrix where every eigenvalue is positive.

Negative definite: A matrix is negative definite if it is symmetric and all its eigenvalues are negative.
Positive semidefinite: A positive semidefinite matrix is a Hermitian matrix all of whose eigenvalues are non-negative.

Negative semidefinite: A negative semidefinite matrix is a Hermitian matrix all of whose eigenvalues are non-positive.

30. If $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$ and $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, then find the eigendecomposition (i.e. $A = PDP^{T}$). Finally prove that $PDP^{T} = A$.
Solution:
Given that
$$A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$$
The characteristic equation is
$$|A - \lambda I| = 0 \;\Rightarrow\; \begin{vmatrix} 3-\lambda & 1 \\ 1 & 3-\lambda \end{vmatrix} = 0 \;\Rightarrow\; (3-\lambda)^2 - 1 = 0 \;\Rightarrow\; \lambda^2 - 6\lambda + 8 = 0 \;\Rightarrow\; (\lambda-4)(\lambda-2) = 0$$
$$\therefore \lambda = 4,\ \lambda = 2$$
For $\lambda = 4$, the eigenvector equation $Ax = 4x$ gives $3x_1 + x_2 = 4x_1$ and $x_1 + 3x_2 = 4x_2$; solving, $x_1 = 1$ and $x_2 = 1$. For $\lambda = 2$, $Ax = 2x$ gives $3x_1 + x_2 = 2x_1$ and $x_1 + 3x_2 = 2x_2$; solving, $x_1 = 1$ and $x_2 = -1$.
Normalizing the eigenvectors $\binom{1}{1}$ and $\binom{1}{-1}$ (each has length $\sqrt{2}$), take
$$P = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad D = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}$$
$$\therefore PDP^{T} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 4 & 2 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 6 & 2 \\ 2 & 6 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix} = A \qquad (proved)$$
Again, given that
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$
The characteristic equation is
$$(2-\lambda)^2 - 1 = 0 \;\Rightarrow\; \lambda^2 - 4\lambda + 3 = 0 \;\Rightarrow\; (\lambda-3)(\lambda-1) = 0 \;\Rightarrow\; \lambda = 3,\ \lambda = 1$$
For $\lambda = 1$, $Ax = x$ gives $2x_1 + x_2 = x_1$, so $x_1 = 1$, $x_2 = -1$; for $\lambda = 3$, $Ax = 3x$ gives $2x_1 + x_2 = 3x_1$, so $x_1 = 1$, $x_2 = 1$.
Normalizing, take
$$P = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}$$
$$\therefore PDP^{T} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 3 \\ -1 & 3 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 4 & 2 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = A \qquad (proved)$$
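The same decomposition can be reproduced with NumPy's symmetric eigensolver; `eigh` returns orthonormal eigenvectors as columns, so $PDP^{T}$ rebuilds $A$:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigvals, P = np.linalg.eigh(A)       # ascending eigenvalues, orthonormal columns
D = np.diag(eigvals)
A_rebuilt = P @ D @ P.T
```

The eigenvalues come back as 2 and 4, matching the hand computation up to ordering and sign of the eigenvectors.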

31. If Y is a $p \times 1$ random vector and $Y \sim N(0, I)$, then find the matrix and sampling distribution of $Q^2 = \sum_{i=1}^{p}(Y_i - \bar{Y})^2$.
Solution:
We have
$$Q^2 = \sum_{i=1}^{p}(Y_i - \bar{Y})^2 = \sum_{i=1}^{p}Y_i^2 - p\bar{Y}^2 = Y'Y - \frac{1}{p}(Y'\mathbf{1})(\mathbf{1}'Y) = Y'\!\left(I - \frac{1}{p}\mathbf{1}\mathbf{1}'\right)\!Y = Y'BY$$
where $\mathbf{1}$ is the $p \times 1$ vector of ones and
$$B = I - \frac{1}{p}\mathbf{1}\mathbf{1}' = \begin{pmatrix} 1-\frac{1}{p} & -\frac{1}{p} & \cdots & -\frac{1}{p} \\ -\frac{1}{p} & 1-\frac{1}{p} & \cdots & -\frac{1}{p} \\ \vdots & \vdots & \ddots & \vdots \\ -\frac{1}{p} & -\frac{1}{p} & \cdots & 1-\frac{1}{p} \end{pmatrix}$$
Since we know that if $Z \sim N(0, I)$, then $Z'AZ \sim \chi^2_{(k)}$ if $A$ is idempotent of rank $k$, we check $B$:
$$B^2 = \left(I - \frac{1}{p}\mathbf{1}\mathbf{1}'\right)\left(I - \frac{1}{p}\mathbf{1}\mathbf{1}'\right) = I - \frac{2}{p}\mathbf{1}\mathbf{1}' + \frac{1}{p^2}\mathbf{1}(\mathbf{1}'\mathbf{1})\mathbf{1}' = I - \frac{2}{p}\mathbf{1}\mathbf{1}' + \frac{1}{p}\mathbf{1}\mathbf{1}' = B$$
Therefore, $B$ is idempotent. The rank of $B$ equals its trace:
$$tr(B) = p\left(1 - \frac{1}{p}\right) = p - 1$$
Hence,
$$Q^2 \sim \chi^2_{(p-1)}$$
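A direct check of the centering matrix $B$ and the $\chi^2_{(p-1)}$ result (p = 5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(7)
p = 5                                        # arbitrary dimension
B = np.eye(p) - np.ones((p, p)) / p          # B = I - (1/p) 1 1'

y = rng.normal(size=(200000, p))             # draws of Y ~ N(0, I)
q2 = np.einsum('ij,jk,ik->i', y, B, y)       # Q^2 = Y' B Y for each draw
```

The empirical mean and variance of `q2` should be near $p-1$ and $2(p-1)$, the chi-square moments.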

32. If $X_i \sim N(\mu, \sigma^2)$, then what is the distribution of $\sum_{i=1}^{n}\left(\frac{X_i - \bar{X}}{\sigma}\right)^2$?

Solution:
Let
$$Y_i = \frac{X_i}{\sigma}, \qquad E[Y_i] = \frac{E[X_i]}{\sigma} = \frac{\mu}{\sigma}, \qquad V[Y_i] = \frac{V[X_i]}{\sigma^2} = \frac{\sigma^2}{\sigma^2} = 1, \qquad \therefore Y \sim N\!\left(\frac{\mu}{\sigma}\mathbf{1},\, I\right)$$
Let us consider the transformation $Z = PY$, where $P$ is an orthogonal matrix whose first row is $\left(\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}\right)$. Then
$$Z_1 = \frac{1}{\sqrt{n}}(Y_1 + Y_2 + \cdots + Y_n) = \sqrt{n}\,\bar{Y}, \qquad \therefore Z_1^2 = n\bar{Y}^2$$
Now, since $P$ is orthogonal, $Z'Z = Y'P'PY = Y'Y$, so
$$\sum_{i=1}^{n} Z_i^2 = \sum_{i=1}^{n} Y_i^2 \;\Rightarrow\; \sum_{i=2}^{n} Z_i^2 = \sum_{i=1}^{n} Y_i^2 - Z_1^2 = \sum_{i=1}^{n} Y_i^2 - n\bar{Y}^2 = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 = \sum_{i=1}^{n}\left(\frac{X_i - \bar{X}}{\sigma}\right)^2$$
Now,
$$E[Z] = P\,E[Y] = \frac{\mu}{\sigma}P\mathbf{1}, \qquad V[Z] = P\,V[Y]\,P' = PIP' = PP' = I, \qquad \therefore Z \sim N\!\left(\frac{\mu}{\sigma}P\mathbf{1},\, I\right)$$
The $Z_i$'s are independent of each other, and for $i \ge 2$ each $Z_i$ has mean zero (each later row of $P$ is orthogonal to the first row, while $E[Y]$ is proportional to the first row). Hence
$$\sum_{i=1}^{n}\left(\frac{X_i - \bar{X}}{\sigma}\right)^2 = \sum_{i=2}^{n} Z_i^2 \sim \chi^2_{(n-1)} \qquad (proved)$$

33. Prove that the adjusted sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ has a Gamma distribution with parameters $n-1$ and $\sigma^2$, where $X_i \sim N(\mu, \sigma^2)$.

Solution:
Let $X_1, \ldots, X_n$ be $n$ independent random variables, all having a normal distribution with mean $\mu$ and variance $\sigma^2$. Let their sample mean be defined as $\bar{X} = \frac{1}{n}\sum_i X_i$ and their adjusted sample variance be defined as
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$$
Define the following matrix:
$$M = I - \frac{1}{n}\,\mathbf{1}\mathbf{1}'$$
where $I$ is the n-dimensional identity matrix and $\mathbf{1}$ is an $n \times 1$ vector of ones. In other words, $M$ has $1-\frac{1}{n}$ in every diagonal entry and $-\frac{1}{n}$ in every off-diagonal entry.

$M$ is a symmetric matrix. By computing the product $M \cdot M$, it can also be easily verified that $M$ is idempotent.
Denote by $X$ the $n \times 1$ random vector whose i-th entry is equal to $X_i$, and note that $X$ has a multivariate normal distribution with mean $\mu\mathbf{1}$ and covariance matrix $\sigma^2 I$.
The matrix $M$ can be used to write the sample variance as
$$s^2 = \frac{1}{n-1}(MX)'(MX) = \frac{1}{n-1}X'M'MX = \frac{1}{n-1}X'MX$$
where in the second step we have used the fact that $M$ is symmetric, and in the last step the fact that $M$ is idempotent.
Now define a new random vector
$$Z = \frac{1}{\sigma}(X - \mu\mathbf{1})$$
and note that $Z$ has a standard (mean zero and covariance $I$) multivariate normal distribution.
Since $M\mathbf{1} = 0$, the sample variance can be written as
$$s^2 = \frac{1}{n-1}X'MX = \frac{\sigma^2}{n-1}Z'MZ$$
which is proportional to a quadratic form in a standard normal random vector ($Z'MZ$). We know that the quadratic form $Z'MZ$ has a chi-square distribution with $tr(M)$ degrees of freedom. But the trace of $M$ is
$$tr(M) = \sum_{i=1}^{n}\left(1 - \frac{1}{n}\right) = n\left(1 - \frac{1}{n}\right) = n - 1$$
So the quadratic form $Z'MZ$ has a chi-square distribution with $n-1$ degrees of freedom. Multiplying a chi-square random variable with $n-1$ degrees of freedom by $\frac{\sigma^2}{n-1}$, one obtains a Gamma random variable with parameters $n-1$ and $\sigma^2$ (equivalently, with shape $\frac{n-1}{2}$ and scale $\frac{2\sigma^2}{n-1}$).
So, summing up, the adjusted sample variance $s^2$ has a Gamma distribution with parameters $n-1$ and $\sigma^2$.
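The stated Gamma law (shape $(n-1)/2$, scale $2\sigma^2/(n-1)$) can be checked against simulated sample variances (n, μ, σ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
n, mu, sigma = 6, 10.0, 2.0                 # arbitrary choices
x = rng.normal(mu, sigma, size=(300000, n))
s2 = x.var(axis=1, ddof=1)                  # adjusted sample variance per row

shape = (n - 1) / 2
scale = 2 * sigma**2 / (n - 1)              # Gamma(shape, scale) claimed for s^2
```

The empirical mean and variance of `s2` should match the Gamma moments `shape * scale` (= σ²) and `shape * scale**2` (= 2σ⁴/(n−1)).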

The End
