
QUEUING AND RELIABILITY THEORY

[MATH712]

Dr. Ritu Gupta

Department of Mathematics, Amity University, Uttar Pradesh


rgupta@amity.edu

MODULE 3: System Reliability Concepts


Methods of parameter estimation:
 Method of moments
 Maximum likelihood estimation (MLE)
 Bayes estimation

MLE Method: Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. r.v.'s with pdf $f_\theta(x)$. Then the likelihood function is
$$L(\theta; x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f_\theta(x_i)$$
The maximum likelihood estimator of $\theta$ is the value of $\theta$, say $\hat{\theta}$, that maximizes the likelihood function $L(\theta; x_1, x_2, \ldots, x_n)$. It is often much easier to work with the log-likelihood function $\ln L(\theta; x_1, x_2, \ldots, x_n)$. Since $\ln$ is a monotone function, the likelihood function and the log-likelihood function are maximized at the same value of $\theta$.
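As a quick illustration of this idea (a sketch, not part of the lecture; the sample values are made up), the exponential distribution with rate $\lambda$ has $\ln L(\lambda) = n \ln \lambda - \lambda \sum x_i$, whose maximizer has the closed form $\hat{\lambda} = n / \sum x_i$. Maximizing the log-likelihood numerically recovers the same value:

```python
import math
from scipy.optimize import minimize_scalar

def neg_log_likelihood(lam, data):
    """Negative log-likelihood of an exponential(lambda) sample:
    ln L(lambda) = n ln(lambda) - lambda * sum(x_i); return its negative
    so that a minimizer can be used to maximize ln L."""
    n = len(data)
    return -(n * math.log(lam) - lam * sum(data))

data = [0.5, 1.2, 0.3, 2.1, 0.9, 1.5]  # illustrative sample

# Maximize ln L by minimizing -ln L over lambda > 0.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100.0),
                      args=(data,), method="bounded")
lam_hat = res.x

# Closed-form MLE for the exponential distribution: n / sum(x_i).
lam_closed = len(data) / sum(data)
print(lam_hat, lam_closed)
```

The numerical maximizer and the closed-form estimator agree, which is exactly the monotonicity argument above in action.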
2 Lecture-25
Parameter Estimation of Weibull Distribution:
Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. r.v.'s taken from the two-parameter Weibull distribution.
The two-parameter probability density function is
$$f(x; \beta, \theta) = \frac{\beta}{\theta^{\beta}}\, x^{\beta-1}\, e^{-(x/\theta)^{\beta}}, \qquad x \ge 0,\ \beta, \theta > 0$$
The likelihood function is defined as
$$L(x; \beta, \theta) = \prod_{i=1}^{n} \frac{\beta}{\theta^{\beta}}\, x_i^{\beta-1}\, e^{-(x_i/\theta)^{\beta}}$$
$$\therefore\quad L(x; \beta, \theta) = \frac{\beta^{n}}{\theta^{n\beta}} \prod_{i=1}^{n} x_i^{\beta-1}\; e^{-\sum_{i=1}^{n} (x_i/\theta)^{\beta}}$$

The log-likelihood function is defined as
$$\ln L(x; \beta, \theta) = \ln\left[\frac{\beta^{n}}{\theta^{n\beta}} \prod_{i=1}^{n} x_i^{\beta-1}\; e^{-\sum_{i=1}^{n} (x_i/\theta)^{\beta}}\right]$$
$$= n \ln \beta - n\beta \ln \theta + (\beta - 1) \sum_{i=1}^{n} \ln x_i - \sum_{i=1}^{n} \left(\frac{x_i}{\theta}\right)^{\beta}$$
Now, the likelihood equations are given by
$$\frac{\partial}{\partial \beta} \ln L(x; \beta, \theta) = 0 \quad \text{and} \quad \frac{\partial}{\partial \theta} \ln L(x; \beta, \theta) = 0$$
Thus we get,

$$\frac{n}{\beta} - n \ln \theta + \sum_{i=1}^{n} \ln x_i - \sum_{i=1}^{n} \left(\frac{x_i}{\theta}\right)^{\beta} \ln \frac{x_i}{\theta} = 0 \qquad (1)$$
and
$$-\frac{n\beta}{\theta} - \frac{\partial}{\partial \theta} \sum_{i=1}^{n} x_i^{\beta}\, \theta^{-\beta} = 0$$
or
$$\frac{n\beta}{\theta} - \beta\, \theta^{-\beta-1} \sum_{i=1}^{n} x_i^{\beta} = 0 \qquad (2)$$

Now, eqs. (1) and (2) can be solved numerically, using the observed data, to obtain the estimators $\hat{\beta}$ and $\hat{\theta}$.
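Equations (1) and (2) have no closed-form solution in $\beta$. One common numerical approach (a sketch, assuming SciPy is available; the data vector is made up for illustration) uses eq. (2), which gives $\theta^{\beta} = \frac{1}{n}\sum x_i^{\beta}$, to eliminate $\theta$ from eq. (1), leaving a single equation in $\beta$ that a root finder can solve:

```python
import math
from scipy.optimize import brentq

def profile_eq(beta, data):
    """Profile equation in beta obtained by substituting
    theta^beta = (1/n) * sum(x_i^beta) (from eq. (2)) into eq. (1):
    sum(x_i^b * ln x_i)/sum(x_i^b) - 1/b - (1/n) * sum(ln x_i) = 0."""
    s1 = sum(x**beta * math.log(x) for x in data)
    s0 = sum(x**beta for x in data)
    mean_log = sum(math.log(x) for x in data) / len(data)
    return s1 / s0 - 1.0 / beta - mean_log

data = [1.3, 0.8, 2.5, 1.9, 0.6, 1.1, 3.2, 1.7]  # illustrative sample

# Solve the profile equation for beta-hat, then recover theta-hat
# from eq. (2): theta = ((1/n) * sum(x_i^beta))^(1/beta).
beta_hat = brentq(profile_eq, 0.05, 20.0, args=(data,))
theta_hat = (sum(x**beta_hat for x in data) / len(data)) ** (1.0 / beta_hat)
print(beta_hat, theta_hat)
```

The bracketing interval for `brentq` works here because the profile function tends to $-\infty$ as $\beta \to 0$ and is positive for large $\beta$ whenever the sample values are not all equal.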

Parameter Estimation of Gamma Distribution:
Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. r.v.'s taken from the Gamma distribution.
The probability density function is
$$f(x; \alpha, \beta) = \frac{x^{\alpha-1}}{\beta^{\alpha}\, \Gamma(\alpha)}\, e^{-x/\beta}, \qquad x \ge 0,\ \alpha, \beta > 0$$
The likelihood function is defined as
$$L(x; \alpha, \beta) = \prod_{i=1}^{n} \frac{x_i^{\alpha-1}}{\beta^{\alpha}\, \Gamma(\alpha)}\, e^{-x_i/\beta}$$
$$\therefore\quad L(x; \alpha, \beta) = \frac{1}{(\Gamma(\alpha))^{n}\, \beta^{n\alpha}} \prod_{i=1}^{n} x_i^{\alpha-1}\; e^{-\frac{1}{\beta}\sum_{i=1}^{n} x_i}$$

The log-likelihood function is defined as
$$\ln L(x; \alpha, \beta) = \ln \frac{1}{(\Gamma(\alpha))^{n}\, \beta^{n\alpha}} + \ln \prod_{i=1}^{n} x_i^{\alpha-1} + \ln e^{-\frac{1}{\beta}\sum_{i=1}^{n} x_i}$$
$$= -n \ln \Gamma(\alpha) - n\alpha \ln \beta + (\alpha - 1) \sum_{i=1}^{n} \ln x_i - \frac{1}{\beta} \sum_{i=1}^{n} x_i$$
Now, the likelihood equations are given by
$$\frac{\partial}{\partial \alpha} \ln L(x; \alpha, \beta) = 0 \quad \text{and} \quad \frac{\partial}{\partial \beta} \ln L(x; \alpha, \beta) = 0$$
Thus we get,

$$-n\, \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} - n \ln \beta + \sum_{i=1}^{n} \ln x_i = 0$$
or
$$\frac{\Gamma'(\alpha)}{\Gamma(\alpha)} = -\ln \beta + \frac{1}{n} \sum_{i=1}^{n} \ln x_i \qquad (1)$$
Also
$$-\frac{n\alpha}{\beta} + \frac{1}{\beta^{2}} \sum_{i=1}^{n} x_i = 0$$
or
$$\beta = \frac{1}{n\alpha} \sum_{i=1}^{n} x_i \qquad (2)$$

Now, eqs. (1) and (2) can be solved numerically, using the observed data, to obtain the estimators $\hat{\alpha}$ and $\hat{\beta}$.
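The ratio $\Gamma'(\alpha)/\Gamma(\alpha)$ in eq. (1) is the digamma function $\psi(\alpha)$. Substituting $\beta = \bar{x}/\alpha$ from eq. (2) into eq. (1) leaves a single equation in $\alpha$, which can be solved with a root finder. A sketch, assuming SciPy is available (the data vector is made up for illustration):

```python
import math
from scipy.optimize import brentq
from scipy.special import digamma

def alpha_eq(alpha, data):
    """Eliminating beta via eq. (2) (beta = mean(x)/alpha) turns eq. (1)
    into: psi(alpha) - ln(alpha) = (1/n)*sum(ln x_i) - ln(mean(x)),
    where psi = Gamma'(alpha)/Gamma(alpha) is the digamma function."""
    n = len(data)
    mean_log = sum(math.log(x) for x in data) / n
    log_mean = math.log(sum(data) / n)
    return digamma(alpha) - math.log(alpha) - (mean_log - log_mean)

data = [2.1, 3.4, 1.2, 4.8, 2.9, 3.7, 1.8, 2.5]  # illustrative sample

# psi(a) - ln(a) rises from -inf to 0 as a grows, while the right-hand
# side is strictly negative (Jensen's inequality), so a root exists.
alpha_hat = brentq(alpha_eq, 1e-3, 1e3, args=(data,))
beta_hat = sum(data) / (len(data) * alpha_hat)   # eq. (2)
print(alpha_hat, beta_hat)
```

Note that eq. (2) forces $\hat{\alpha}\hat{\beta} = \bar{x}$, i.e. the fitted Gamma mean matches the sample mean.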
Parameter Estimation of Log-normal Distribution:
Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. r.v.'s taken from the log-normal distribution.
The probability density function is
$$f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}\; x}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}, \qquad x > 0,\ \sigma > 0,\ -\infty < \mu < \infty$$
The likelihood function is defined as
$$L(x; \mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}\; x_i}\, e^{-\frac{(\ln x_i - \mu)^2}{2\sigma^2}}$$

The log-likelihood function is given by
$$\ln L(x; \mu, \sigma^2) = -\frac{n}{2} \ln(2\pi\sigma^2) - \sum_{i=1}^{n} \ln x_i + \frac{\mu}{\sigma^2} \sum_{i=1}^{n} \ln x_i - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (\ln x_i)^2 - \frac{n\mu^2}{2\sigma^2}$$
Now, we find $\hat{\mu}$ and $\hat{\sigma}^2$, which maximize $\ln L(x; \mu, \sigma^2)$. To do this we take the gradient with respect to $\mu$ and $\sigma^2$ and set it equal to zero.
The likelihood equations are given by
$$\frac{\partial}{\partial \mu} \ln L(x; \mu, \sigma^2) = 0 \qquad (1)$$
and
$$\frac{\partial}{\partial \sigma^2} \ln L(x; \mu, \sigma^2) = 0 \qquad (2)$$
Thus we get,
$$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} \ln x_i \qquad (3)$$
and
$$\hat{\sigma}^2 = \frac{1}{n}\left[\sum_{i=1}^{n} (\ln x_i)^2 - \frac{\left(\sum_{i=1}^{n} \ln x_i\right)^2}{n}\right] \qquad (4)$$

Now, eqs. (3) and (4) can be evaluated directly from the observed data to obtain the estimators $\hat{\mu}$ and $\hat{\sigma}^2$.
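Unlike the Weibull and Gamma cases, these estimators are closed-form: the MLEs are just the sample mean and (biased) sample variance of the log-data. A minimal sketch (the data vector is made up for illustration):

```python
import math

def lognormal_mle(data):
    """Closed-form MLEs from eqs. (3) and (4):
    mu-hat    = (1/n) * sum(ln x_i)
    sigma2-hat = (1/n) * sum((ln x_i - mu-hat)^2),
    which equals the expanded form in eq. (4)."""
    n = len(data)
    logs = [math.log(x) for x in data]
    mu_hat = sum(logs) / n
    sigma2_hat = sum((l - mu_hat) ** 2 for l in logs) / n
    return mu_hat, sigma2_hat

data = [1.5, 2.3, 0.9, 3.1, 1.8, 2.6]  # illustrative sample
mu_hat, sigma2_hat = lognormal_mle(data)
print(mu_hat, sigma2_hat)
```

The centered form used in the code and the expanded form of eq. (4) are algebraically identical, since $\sum (\ln x_i - \hat{\mu})^2 = \sum (\ln x_i)^2 - (\sum \ln x_i)^2 / n$ when $\hat{\mu}$ is the mean of the $\ln x_i$.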

