
Applied Mathematical Modelling 36 (2012) 5380–5392


Parameter estimation for a two-parameter bathtub-shaped lifetime distribution

Ammar M. Sarhan a,*, D.C. Hamilton b, B. Smith b

a St. Francis Xavier University, Antigonish, NS, Canada B2G 2W5
b Dalhousie University, Halifax, NS, Canada B3H 4R2

Article info

Article history:
Received 5 August 2011
Received in revised form 15 December 2011
Accepted 21 December 2011
Available online 29 December 2011

Keywords:
Maximum likelihood estimator
Bayes estimator
Reliability
Monte Carlo integration
Simulation-based methods
Lindley approximation

Abstract

A two-parameter distribution was revisited by Chen (2000) [7]. This distribution can have a bathtub-shaped or increasing failure rate function, which enables it to fit real lifetime data sets. Maximum likelihood and Bayes estimates of the two unknown parameters are discussed in this paper. It is assumed in the Bayes case that the unknown parameters have gamma priors. Explicit forms of the Bayes estimators cannot be obtained, so different approximations are used to establish point estimates and two-sided Bayesian probability intervals for the parameters. Monte Carlo simulations are used to compare the maximum likelihood estimates with the approximate Bayes estimates obtained under non-informative prior assumptions. Analysis of a real data set is also presented for illustrative purposes.

Published by Elsevier Inc.

1. Introduction

Many parametric probability distributions have been introduced to analyze real data sets with bathtub-shaped failure rates. The bathtub-shaped hazard function provides an appropriate conceptual model for some electronic and mechanical products as well as for human lifetimes. Parametric probability distributions with bathtub-shaped failure rate functions have been studied by many authors. The exponential power distribution was suggested by Smith and Bain [1] and further studied by Leemis [2]. A four-parameter distribution was proposed by Gaver and Acar [3]. A similar distribution with increasing, decreasing, or bathtub-shaped failure rate was considered by Hjorth [4]. An exponentiated Weibull distribution with three parameters was suggested by Mudholkar and Srivastava [5].
Complex systems usually have a bathtub-shaped failure rate over the product lifecycle. Although the modified Weibull distribution and the Weibull extension are reported to have bathtub-shaped failure rates, they may not provide a good bathtub shape. Zhang et al. [6] discussed the parametric analysis of some models which do exhibit a good bathtub shape.
In this paper, we study parameter estimation for a two-parameter lifetime distribution with either bathtub-shaped or
increasing failure rate investigated by Chen [7]. The cumulative distribution function (cdf) of this distribution is
$$ F(x; \alpha, \beta) = 1 - e^{\alpha(1 - e^{x^{\beta}})}, \quad x > 0, \qquad (1.1) $$

* Corresponding author. Address: Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt. Tel.: +1 902 867 5732; fax: +1 902 867 3302.
E-mail addresses: asarhan@stfx.ca, asarhan0@yahoo.com, asarhan@mathstat.dal.ca (A.M. Sarhan), hamilton@mathstat.dal.ca (D.C. Hamilton), bsmith@mathstat.dal.ca (B. Smith).


where α > 0 is a parameter which does not affect the shape of the failure rate function and β > 0 is the shape parameter. The corresponding survival function (sf) is

$$ S(x; \alpha, \beta) = e^{\alpha(1 - e^{x^{\beta}})}, \quad x > 0. \qquad (1.2) $$

The probability density function (pdf) is

$$ f(x; \alpha, \beta) = \alpha\beta\, x^{\beta-1} e^{x^{\beta}}\, e^{\alpha(1 - e^{x^{\beta}})}, \quad x > 0, \qquad (1.3) $$

and the corresponding failure rate function of this distribution is

$$ h(x; \alpha, \beta) = \alpha\beta\, x^{\beta-1} e^{x^{\beta}}, \quad x > 0. \qquad (1.4) $$

Using the first derivative of h(x), given by $h'(x) = \alpha\beta x^{\beta-2} e^{x^{\beta}}(\beta - 1 + \beta x^{\beta})$, one can easily verify that: (1) h(x) has a bathtub shape when β < 1 and reaches its minimum at $x = \{(1-\beta)/\beta\}^{1/\beta}$; (2) h(x) is an increasing function when β ≥ 1. For simplicity, we will refer to this distribution as CH(α, β).
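To make the shape result concrete, the following sketch (Python with NumPy; illustrative parameter values and the helper name `hazard` are ours, not from the paper) evaluates the failure rate (1.4) on a grid and checks the analytical location of its minimum when β < 1.

```python
import numpy as np

def hazard(x, alpha, beta):
    """Failure rate h(x; alpha, beta) = alpha*beta*x**(beta-1)*exp(x**beta) of the CH distribution."""
    return alpha * beta * x**(beta - 1.0) * np.exp(x**beta)

# Illustrative values only (not tied to the paper's examples).
alpha, beta = 0.02, 0.5

# For beta < 1 the hazard is bathtub-shaped with minimum at x* = ((1 - beta)/beta)**(1/beta).
x_star = ((1.0 - beta) / beta) ** (1.0 / beta)

x = np.linspace(0.01, 5.0, 500)
h = hazard(x, alpha, beta)
print("analytical minimizer x* =", x_star)
print("numerical minimizer     =", x[np.argmin(h)])   # should agree closely with x*
```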
Chen [7] discussed exact confidence intervals and exact joint confidence regions for the parameters based on a type-II
censored sample. Wu et al. [8] discussed statistical inference about the shape parameter of this distribution based on
type-II right censored data. Wu [9] investigated the estimation problem of progressively type-II censored data from this dis-
tribution using the maximum likelihood procedure.
Nadarajah and Kotz [10] pointed out that this distribution is a particular case of the distribution considered by Gurvich et al. [11] and of the distribution proposed by Haynatzki et al. [12]. Pham and Lai [13] later clarified Nadarajah and Kotz's observation.
In this paper we derive both the maximum likelihood and the Bayes estimators of the unknown parameters using a complete sample. The Bayes estimates are derived under the assumption of gamma priors and squared error loss. Since the Bayes estimators cannot be derived in closed form, different approximation methods are used to obtain both point estimates and two-sided Bayesian probability intervals for α and β. We also compute an approximation to the Bayes estimators under non-informative priors. Bayes and maximum likelihood estimators are compared using Monte Carlo simulations. A real data set is analyzed for illustrative purposes, and a simulated sample presented in [7] is revisited for comparison.

2. Maximum likelihood estimation

In this section we use the maximum likelihood method to estimate the two unknown parameters α and β. Suppose X₁, X₂, ..., Xₙ is a random sample from CH(α, β); then the likelihood function of the observed data is

$$ L = (\alpha\beta)^n \left[\prod_{i=1}^n x_i\right]^{\beta-1} e^{\sum_{i=1}^n \{x_i^{\beta} + \alpha(1 - e^{x_i^{\beta}})\}}. \qquad (2.1) $$

The log-likelihood function $\mathcal{L} = \ln L$ becomes

$$ \mathcal{L} = n(\ln\alpha + \ln\beta) + (\beta - 1)\sum_{i=1}^n \ln x_i + \sum_{i=1}^n \{x_i^{\beta} + \alpha(1 - e^{x_i^{\beta}})\}. \qquad (2.2) $$

The corresponding likelihood equations are

$$ \frac{\partial\mathcal{L}}{\partial\alpha} = \frac{n}{\alpha} + n - \sum_{i=1}^n e^{x_i^{\beta}} = 0, \qquad (2.3) $$

$$ \frac{\partial\mathcal{L}}{\partial\beta} = \frac{n}{\beta} + \sum_{i=1}^n \ln x_i + \sum_{i=1}^n x_i^{\beta}\ln x_i - \alpha\sum_{i=1}^n e^{x_i^{\beta}} x_i^{\beta}\ln x_i = 0. \qquad (2.4) $$

From (2.3), the MLE of α as a function of the MLE of β is

$$ \hat{\alpha} = \frac{n}{\sum_{i=1}^n e^{x_i^{\hat{\beta}}} - n}, \qquad (2.5) $$

where β̂ is the solution of the following non-linear equation:

$$ \frac{n}{\beta} + \sum_{i=1}^n \ln x_i + \sum_{i=1}^n x_i^{\beta}\ln x_i - \frac{n\sum_{i=1}^n e^{x_i^{\beta}} x_i^{\beta}\ln x_i}{\sum_{i=1}^n e^{x_i^{\beta}} - n} = 0. \qquad (2.6) $$

A closed-form solution of (2.6) does not exist, so a numerical technique must be used to find the maximum likelihood estimate of β for any given data set.
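A minimal sketch of this numerical step, assuming NumPy/SciPy are available and using, for illustration, the simulated sample that appears later in Example 1; the root bracket and the helper names (`profile_score`, `chen_mle`) are our own choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import brentq

def profile_score(beta, x):
    """Left-hand side of the likelihood equation (2.6) as a function of beta."""
    n = len(x)
    lx = np.log(x)
    xb = x**beta
    exb = np.exp(xb)
    return (n / beta + lx.sum() + (xb * lx).sum()
            - n * (exb * xb * lx).sum() / (exb.sum() - n))

def chen_mle(x, lo=1e-3, hi=1.0):
    """MLEs (alpha_hat, beta_hat) of the CH distribution from a complete sample x.
    The bracket [lo, hi] is assumed to contain the root of (2.6); widen it if brentq complains."""
    x = np.asarray(x, float)
    beta_hat = brentq(profile_score, lo, hi, args=(x,))
    alpha_hat = len(x) / (np.exp(x**beta_hat).sum() - len(x))   # equation (2.5)
    return alpha_hat, beta_hat

# Illustration with the simulated sample of Example 1:
sample = [0.29, 1.44, 8.38, 8.66, 10.2, 11.04, 13.44, 14.37, 17.05, 17.13, 18.35]
print(chen_mle(sample))   # roughly (0.0148, 0.5618), as reported in Table 1
```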

Large-sample intervals: The MLEs of the parameters α and β are asymptotically normally distributed with means equal to the true values of α and β and variances given by the inverse of the information matrix. In particular,

$$ \begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} \sim AN\!\left( \begin{pmatrix} \alpha \\ \beta \end{pmatrix},\; \Sigma_{\hat{\alpha},\hat{\beta}} = \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix}^{-1} \right), \qquad (2.7) $$

where

$$ L_{11} = -E\!\left[\frac{\partial^2 \mathcal{L}}{\partial \alpha^2}\right] = \frac{n}{\alpha^2}, \qquad L_{12} = -E\!\left[\frac{\partial^2 \mathcal{L}}{\partial \alpha\,\partial \beta}\right] = \sum_{i=1}^n x_i^{\beta} e^{x_i^{\beta}} \ln x_i = L_{21}, $$

$$ L_{22} = -E\!\left[\frac{\partial^2 \mathcal{L}}{\partial \beta^2}\right] = \frac{n}{\beta^2} - \sum_{i=1}^n x_i^{\beta}(\ln x_i)^2 + \alpha\sum_{i=1}^n x_i^{\beta}\big(1 + x_i^{\beta}\big)(\ln x_i)^2 e^{x_i^{\beta}}. $$

Let var(α̂) and var(β̂) denote the (1, 1) and (2, 2) entries of Σ_{α̂,β̂}, respectively. Then large-sample (1 − ϑ)100% confidence intervals for α and β are

$$ \hat{\alpha} \pm Z_{\vartheta/2}\sqrt{\widehat{\operatorname{var}}(\hat{\alpha})} \qquad \text{and} \qquad \hat{\beta} \pm Z_{\vartheta/2}\sqrt{\widehat{\operatorname{var}}(\hat{\beta})}, $$

where $\widehat{\operatorname{var}}(\cdot)$ indicates that the estimated values of α and β have been substituted into the variance formula, and Z_{ϑ/2} is the upper ϑ/2 quantile of the standard normal distribution.
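Continuing the sketch above under the same assumptions (NumPy/SciPy; hypothetical helper names), the information entries L11, L12 and L22 can be evaluated at the MLEs and inverted to give the variance estimates and the Wald-type intervals.

```python
import numpy as np
from scipy.stats import norm

def wald_intervals(x, alpha_hat, beta_hat, level=0.95):
    """Large-sample (Wald) confidence intervals for alpha and beta based on the
    information entries given in Section 2, evaluated at the MLEs."""
    x = np.asarray(x, float)
    n, lx = len(x), np.log(x)
    xb, exb = x**beta_hat, np.exp(x**beta_hat)

    L11 = n / alpha_hat**2
    L12 = (xb * exb * lx).sum()
    L22 = (n / beta_hat**2 - (xb * lx**2).sum()
           + alpha_hat * (xb * (1.0 + xb) * lx**2 * exb).sum())

    cov = np.linalg.inv(np.array([[L11, L12], [L12, L22]]))   # estimated covariance matrix
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)
    se_alpha, se_beta = np.sqrt(np.diag(cov))
    return ((alpha_hat - z * se_alpha, alpha_hat + z * se_alpha),
            (beta_hat - z * se_beta, beta_hat + z * se_beta))

# Hypothetical usage, reusing chen_mle and sample from the previous sketch:
# a_hat, b_hat = chen_mle(sample)
# print(wald_intervals(sample, a_hat, b_hat))
```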

3. Bayes estimation

This section presents Bayes estimates of the parameters α and β. It is assumed here that α and β are independent and follow the gamma prior distributions

$$ \pi_1(\alpha) \propto \alpha^{a_1-1} e^{-a_2\alpha}, \quad \alpha > 0, \qquad (3.1) $$

$$ \pi_2(\beta) \propto \beta^{b_1-1} e^{-b_2\beta}, \quad \beta > 0, \qquad (3.2) $$

where all the hyperparameters a₁, a₂, b₁ and b₂ are assumed to be nonnegative and known.

Using (3.1), (3.2) and (2.1) and applying Bayes' theorem, the joint posterior pdf of (α, β), for α, β > 0, becomes

$$ \pi(\alpha,\beta\,|\,\text{data}) = \frac{1}{I_{(0,0)}}\,\alpha^{n+a_1-1}\beta^{n+b_1-1} P_x^{\beta-1} e^{-[a_2+K_2(\beta)]\alpha - b_2\beta + K_1(\beta)}, \qquad (3.3) $$

where

$$ I_{(0,0)} = \iint \alpha^{n+a_1-1}\beta^{n+b_1-1} P_x^{\beta-1} e^{-[a_2+K_2(\beta)]\alpha - b_2\beta + K_1(\beta)}\, d\alpha\, d\beta, $$

$P_x = \prod_{i=1}^n x_i$, $K_1(\beta) = \sum_{i=1}^n x_i^{\beta}$, and $K_2(\beta) = \sum_{i=1}^n (e^{x_i^{\beta}} - 1)$.

The integral I_{(0,0)} can be written as

$$ I_{(0,0)} = \Gamma(n+a_1)\int_0^{\infty} \frac{\beta^{n+b_1-1} P_x^{\beta-1}}{[a_2+K_2(\beta)]^{n+a_1}}\, e^{-b_2\beta + K_1(\beta)}\, d\beta. \qquad (3.4) $$
Marginal posterior distributions: The marginal posterior pdfs of α and β can be derived from (3.3) (for α > 0, β > 0) as

$$ \pi_\alpha(\alpha\,|\,\text{data}) = \frac{\alpha^{n+a_1-1} e^{-a_2\alpha}}{I_{(0,0)}} \int_0^{\infty} P_x^{\beta-1}\beta^{n+b_1-1} e^{-b_2\beta + K_1(\beta) - K_2(\beta)\alpha}\, d\beta \qquad (3.5) $$

and

$$ \pi_\beta(\beta\,|\,\text{data}) = \frac{1}{I_{(0,0)}}\,\frac{\Gamma(n+a_1)}{[a_2+K_2(\beta)]^{n+a_1}}\, P_x^{\beta-1}\beta^{n+b_1-1} e^{-b_2\beta + K_1(\beta)}. \qquad (3.6) $$
Bayes point estimates: Under squared error loss, the Bayes estimator of any function of the unknown parameters and its corresponding minimum Bayes risk (MBR) are the posterior expectation and posterior variance of that function. To obtain the Bayes estimates of the unknown parameters α and β and their corresponding MBRs, we need the first and second posterior moments of α and β. The rth (r = 1, 2, ...) posterior moments of α and β are

$$ \mu_\alpha^{(r)} = \frac{\Gamma(n+a_1+r)}{I_{(0,0)}} \int_0^{\infty} \frac{P_x^{\beta-1}\beta^{n+b_1-1}}{[a_2+K_2(\beta)]^{n+a_1+r}}\, e^{-b_2\beta + K_1(\beta)}\, d\beta \qquad (3.7) $$

and

$$ \mu_\beta^{(r)} = \frac{\Gamma(n+a_1)}{I_{(0,0)}} \int_0^{\infty} \frac{P_x^{\beta-1}\beta^{n+b_1+r-1}}{[a_2+K_2(\beta)]^{n+a_1}}\, e^{-b_2\beta + K_1(\beta)}\, d\beta. \qquad (3.8) $$

Thus, the Bayes estimators and the corresponding MBRs of α and β are

$$ \tilde{\alpha} = \mu_\alpha^{(1)}, \qquad R_{\pi_1}(\tilde{\alpha}) = \mu_\alpha^{(2)} - \left[\mu_\alpha^{(1)}\right]^2 \qquad (3.9) $$

and

$$ \tilde{\beta} = \mu_\beta^{(1)}, \qquad R_{\pi_2}(\tilde{\beta}) = \mu_\beta^{(2)} - \left[\mu_\beta^{(1)}\right]^2. \qquad (3.10) $$

Two-sided Bayesian probability intervals: Once the marginal posterior pdfs of α and β are obtained, two-sided Bayesian probability intervals for these parameters can be derived.

The (1 − ϑ)100% two-sided Bayesian probability interval (TBPI) for α, say (α_L, α_U), can be found by solving the following integral equations with respect to α_L and α_U:

$$ \int_0^{\alpha_L} \pi_\alpha(\alpha\,|\,\text{data})\, d\alpha = \frac{\vartheta}{2}, \qquad (3.11) $$

$$ \int_0^{\alpha_U} \pi_\alpha(\alpha\,|\,\text{data})\, d\alpha = 1 - \frac{\vartheta}{2}. \qquad (3.12) $$

Similarly, the (1 − ϑ)100% TBPI for β, say (β_L, β_U), can be found by solving

$$ \int_0^{\beta_L} \pi_\beta(\beta\,|\,\text{data})\, d\beta = \frac{\vartheta}{2}, \qquad (3.13) $$

$$ \int_0^{\beta_U} \pi_\beta(\beta\,|\,\text{data})\, d\beta = 1 - \frac{\vartheta}{2}. \qquad (3.14) $$

The integral I_{(0,0)}, the joint and marginal posterior pdfs of α and β, the posterior moments, the Bayes estimates and their MBRs, and the integral equations (3.11)–(3.14) have no analytic solutions in the general case, so numerical techniques must be used. We apply Monte Carlo integration and simulation-based methods; the Lindley approximation can also be used.

Known β: The Bayes estimator and the corresponding minimum Bayes risk of α can be expressed in explicit form when β is known. Under the same gamma prior (3.1) for α, the posterior distribution of α is gamma with parameters n + a₁ and a₂ + K₂(β). Therefore, the Bayes estimator of α and its minimum Bayes risk become

$$ \tilde{\alpha} = \frac{n+a_1}{a_2+K_2(\beta)} \qquad \text{and} \qquad r_{\tilde{\alpha}} = \frac{n+a_1}{[a_2+K_2(\beta)]^2}. $$
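A minimal sketch of this conjugate special case (hypothetical helper name and prior values; NumPy assumed): with β fixed, the gamma posterior parameters, and hence the closed-form Bayes estimate and its MBR, follow directly.

```python
import numpy as np

def bayes_alpha_known_beta(x, beta, a1, a2):
    """Closed-form Bayes estimate of alpha (squared error loss) and its minimum
    Bayes risk when beta is known and alpha has a Gamma(a1, rate a2) prior."""
    x = np.asarray(x, float)
    n = len(x)
    K2 = (np.exp(x**beta) - 1.0).sum()          # K_2(beta) from Section 3
    shape, rate = n + a1, a2 + K2               # gamma posterior parameters
    return shape / rate, shape / rate**2        # posterior mean and posterior variance

# Hypothetical usage with beta fixed at 0.5 and illustrative prior values:
# print(bayes_alpha_known_beta(sample, beta=0.5, a1=4, a2=200))
```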

4. Approximation to Bayes estimates

In this section we use three approximation techniques to obtain Bayes estimates as shown in the following subsections.

4.1. Simulation-based method (SB)

A simulation-based method can be used to compute the posterior distributions of the unknown parameters. This method is general and easy to apply, requiring only computable expressions for the relative likelihood and the inverse cdf of the independent prior distributions [14]. The procedure is as follows (a sketch in code is given after the list):

1. Generate a random sample θ_i, i = 1, 2, ..., M, from the prior π(θ).
2. Compute the likelihood ratio at θ_i, R(θ_i) = L(θ_i)/L(θ̂), i = 1, 2, ..., M, where θ̂ is the MLE of θ computed from the original sample.
3. Generate a random sample U_i, i = 1, 2, ..., M, from the uniform (0, 1) distribution.
4. Retain the ith sample value θ_i if U_i ≤ R(θ_i).

The retained sample values, say θ*₁, θ*₂, ..., θ*_{M*} (M* ≤ M), are a random sample from the posterior distribution of θ [14].
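The sketch below implements this accept–reject scheme for the present model under the assumed independent gamma priors, evaluating the likelihood ratio on the log scale for numerical stability; the function names and the default value of M are our own choices.

```python
import numpy as np

def loglik(alpha, beta, x):
    """Log-likelihood (2.2) of the CH distribution."""
    xb = x**beta
    return (len(x) * (np.log(alpha) + np.log(beta))
            + (beta - 1.0) * np.log(x).sum()
            + (xb + alpha * (1.0 - np.exp(xb))).sum())

def sb_posterior_sample(x, mle, hyper, M=20_000, seed=None):
    """Simulation-based (SB) sampler: draw (alpha_i, beta_i) from the gamma priors,
    accept each draw with probability L(theta_i)/L(theta_hat), keep the accepted ones."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    a1, a2, b1, b2 = hyper                               # gamma hyperparameters of (3.1)-(3.2)
    alphas = rng.gamma(shape=a1, scale=1.0 / a2, size=M)
    betas = rng.gamma(shape=b1, scale=1.0 / b2, size=M)
    log_ref = loglik(mle[0], mle[1], x)                  # log L at the MLE
    log_ratio = np.array([loglik(a, b, x) for a, b in zip(alphas, betas)]) - log_ref
    keep = np.log(rng.uniform(size=M)) <= log_ratio      # U_i <= R(theta_i), on the log scale
    return alphas[keep], betas[keep]

# Hypothetical usage with the Example 1 sample and priors a1=4, a2=200, b1=25, b2=50:
# a_post, b_post = sb_posterior_sample(sample, chen_mle(sample), (4, 200, 25, 50))
# print(a_post.mean(), b_post.mean())              # approximate Bayes estimates
# print(np.percentile(a_post, [2.5, 97.5]))        # approximate 95% TBPI for alpha
```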

4.2. Monte Carlo integration (MCI)

Suppose we want to compute a complex integral

$$ I = \int_a^b w(x)\, dx. \qquad (4.1) $$

If the function w(x) can be decomposed into the product of a function v(x) and a probability density function p(x) defined over the support (a, b), then the integral I can be expressed as the expectation of v(x) with respect to p(x):

$$ I = \int_a^b v(x)\, p(x)\, dx = E_{p(x)}[v(x)], \qquad (4.2) $$

where the subscript p(x) means that the expectation is taken with respect to p(x). If we generate a large random sample x₁, x₂, ..., x_M from the density p(x), then

$$ I = E_{p(x)}[v(x)] \simeq \frac{1}{M}\sum_{i=1}^M v(x_i) \qquad (4.3) $$

is an unbiased estimator of I. This is referred to as Monte Carlo integration (MCI) [15].
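As a toy illustration of (4.1)–(4.3) (the integrand and density here are chosen purely for demonstration and are not from the paper), the integral ∫₀^∞ x² e^{−x} dx = 2 can be estimated by sampling from the Exponential(1) density p(x) = e^{−x} and averaging v(x) = x².

```python
import numpy as np

rng = np.random.default_rng(0)

# I = integral of x^2 * e^{-x} over (0, inf) = Gamma(3) = 2.
# Write w(x) = v(x) p(x) with v(x) = x^2 and p(x) = e^{-x}, then average v over draws from p.
x = rng.exponential(scale=1.0, size=200_000)
print(np.mean(x**2))      # Monte Carlo estimate, close to the exact value 2
```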


The integral I_{(0,0)}, the marginal posterior pdf of α, and the rth posterior moments of α and β can be written as

$$ I_{(0,0)} = \Gamma(n+a_1)\, E_{p(\beta)}\big[g_{(0,0)}(\beta)\big], \qquad (4.4) $$

$$ \pi_\alpha(\alpha\,|\,\text{data}) = \frac{\alpha^{n+a_1-1} e^{-a_2\alpha}}{I_{(0,0)}}\, E_{p(\beta)}\big[g(\alpha\,|\,\beta)\big], \qquad (4.5) $$

$$ \mu_\alpha^{(r)} = \frac{\Gamma(n+a_1+r)}{I_{(0,0)}}\, E_{p(\beta)}\big[g_{(r,0)}(\beta)\big] \qquad (4.6) $$

and

$$ \mu_\beta^{(r)} = \frac{\Gamma(n+a_1)}{I_{(0,0)}}\, E_{p(\beta)}\big[g_{(0,r)}(\beta)\big], \qquad (4.7) $$

where, for i, j = 0, 1, 2, ...,

$$ g_{(i,j)}(\beta) = \frac{\beta^{j}\,\Gamma(n+b_1)\, P_x^{\beta-1} e^{K_1(\beta)}}{b_2^{\,n+b_1}\,[a_2+K_2(\beta)]^{n+a_1+i}}, \qquad g(\alpha\,|\,\beta) = \frac{\Gamma(n+b_1)}{b_2^{\,n+b_1}}\, P_x^{\beta-1} e^{K_1(\beta) - K_2(\beta)\alpha}, \qquad (4.8) $$

and p(β) is the gamma density with parameters n + b₁ and b₂, that is,

$$ p(\beta) = \frac{b_2^{\,n+b_1}}{\Gamma(n+b_1)}\, \beta^{n+b_1-1} e^{-b_2\beta}. $$

Now E_{p(β)}[g_{(i,j)}(β)] and E_{p(β)}[g(α|β)] can be approximated by Monte Carlo integration: drawing a large random sample β₁, β₂, ..., β_M from Gamma(n + b₁, b₂) gives

$$ E_{p(\beta)}\big[g_{(i,j)}(\beta)\big] \simeq \frac{1}{M}\sum_{l=1}^M g_{(i,j)}(\beta_l) \qquad \text{and} \qquad E_{p(\beta)}\big[g(\alpha\,|\,\beta)\big] \simeq \frac{1}{M}\sum_{l=1}^M g(\alpha\,|\,\beta_l). $$

Once the marginal posterior pdfs of α and β are approximated, the two-sided Bayesian probability intervals for these parameters can be computed.
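A sketch of this step using the g-functions as written above, with the moderately informative gamma hyperparameters of Example 1 in mind (NumPy assumed; function names hypothetical). The ratios are formed on the log scale, which leaves the method unchanged but avoids overflow in e^{K₁(β)}.

```python
import numpy as np

def mci_posterior_summaries(x, hyper, M=100_000, seed=None):
    """Monte Carlo integration (Section 4.2): approximate the first two posterior moments
    of alpha and beta by averaging the g-functions over beta drawn from Gamma(n+b1, b2).
    Everything is handled on the log scale to avoid overflow in exp(K1(beta))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    a1, a2, b1, b2 = hyper
    n, sum_log_x = len(x), np.log(x).sum()

    beta = rng.gamma(shape=n + b1, scale=1.0 / b2, size=M)   # draws from p(beta)
    xb = x[None, :] ** beta[:, None]
    K1 = xb.sum(axis=1)                                      # K_1(beta) = sum x_i^beta
    K2 = np.expm1(xb).sum(axis=1)                            # K_2(beta) = sum (e^{x_i^beta} - 1)

    def log_g(i, j):
        # log g_{(i,j)}(beta), dropping the constant Gamma(n+b1)/b2^{n+b1}, which cancels in ratios
        return j * np.log(beta) + (beta - 1.0) * sum_log_x + K1 - (n + a1 + i) * np.log(a2 + K2)

    base = log_g(0, 0)
    shift = base.max()                                       # common shift before exponentiating
    denom = np.exp(base - shift).mean()

    def ratio(i, j):
        return np.exp(log_g(i, j) - shift).mean() / denom    # E[g_{(i,j)}] / E[g_{(0,0)}]

    mu_a1, mu_a2 = (n + a1) * ratio(1, 0), (n + a1) * (n + a1 + 1) * ratio(2, 0)
    mu_b1, mu_b2 = ratio(0, 1), ratio(0, 2)
    # (Bayes estimate, MBR) pairs for alpha and beta, cf. (3.9)-(3.10)
    return (mu_a1, mu_a2 - mu_a1**2), (mu_b1, mu_b2 - mu_b1**2)

# Hypothetical usage with the Example 1 sample and priors:
# print(mci_posterior_summaries(sample, hyper=(4, 200, 25, 50)))
```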

4.3. Lindley approximation (LA)

Lindley [16] introduced an approximation technique to evaluate ratios of integrals of the form

$$ \frac{\int w(\theta)\, e^{L(\theta)}\, d\theta}{\int v(\theta)\, e^{L(\theta)}\, d\theta}, \qquad (4.9) $$

where θ = (θ₁, ..., θ_m), L(θ) is the logarithm of the likelihood function, and w(θ) and v(θ) are arbitrary functions of θ. Setting w(θ) = u(θ)π(θ), where π(θ) is the joint prior pdf of θ, gives the posterior expectation of u(θ):

$$ E[u(\theta)\,|\,\text{data}] = \frac{\int u(\theta)\, e^{L(\theta)+q(\theta)}\, d\theta}{\int e^{L(\theta)+q(\theta)}\, d\theta}, \qquad (4.10) $$

where q(θ) = log(π(θ)). The ratio of integrals in (4.10) can be asymptotically approximated by

$$ E[u(\theta)\,|\,\text{data}] \approx \left\{ u + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \big(u_{ij} + 2u_i q_j\big)\sigma_{ij} + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^m\sum_{\ell=1}^m L_{ijk}\,\sigma_{ij}\sigma_{k\ell}\, u_{\ell} \right\}_{\hat{\theta}}, \qquad (4.11) $$

where $L_{ijk} = \partial^3 L/\partial\theta_i\partial\theta_j\partial\theta_k$, $u_i = \partial u/\partial\theta_i$, $u_{ij} = \partial^2 u/\partial\theta_i\partial\theta_j$, $q_i = \partial q/\partial\theta_i$, and σ_{ij} is the (i, j)th element of the inverse of the Fisher information matrix of the parameters; all quantities are evaluated at the MLE θ̂. Gren [17] stated that the approximation (4.11) is a very good and operational approximation for ratios of multidimensional integrals. In the case studied here, θ₁ = α and θ₂ = β.
The Lindley approximations to the Bayes estimators of α and β under the squared error loss function are

$$ \tilde{\alpha} = \hat{\alpha}_M + q_1\sigma_{11} + q_2\sigma_{12} + \tfrac{1}{2}\Big[L_{111}\sigma_{11}^2 + L_{221}\big(\sigma_{11}\sigma_{22} + 2\sigma_{12}^2\big) + L_{222}\sigma_{22}\sigma_{21}\Big], \qquad (4.12) $$

$$ \tilde{\beta} = \hat{\beta}_M + q_1\sigma_{21} + q_2\sigma_{22} + \tfrac{1}{2}L_{111}\sigma_{11}\sigma_{12} + \tfrac{3}{2}L_{221}\sigma_{22}\sigma_{12} + \tfrac{1}{2}L_{222}\sigma_{22}^2, \qquad (4.13) $$

where α̂_M and β̂_M are the MLEs of α and β, respectively, and

$$ L_{111} = \frac{2n}{\alpha^3}, \qquad L_{221} = -\sum_{i=1}^n \big(1 + x_i^{\beta}\big)\, x_i^{\beta} e^{x_i^{\beta}} (\ln x_i)^2, $$

$$ L_{222} = \frac{2n}{\beta^3} + \sum_{i=1}^n x_i^{\beta}(\ln x_i)^3 - \alpha\sum_{i=1}^n \big(1 + 3x_i^{\beta} + x_i^{2\beta}\big)\, x_i^{\beta} e^{x_i^{\beta}} (\ln x_i)^3, $$

$$ q_1 = -\frac{1}{\alpha}\big(1 - a_1 + a_2\alpha\big), \qquad q_2 = -\frac{1}{\beta}\big(1 - b_1 + b_2\beta\big), $$

$$ \sigma_{11} = \frac{L_{22}}{D}, \qquad \sigma_{22} = \frac{L_{11}}{D}, \qquad \sigma_{12} = \sigma_{21} = -\frac{L_{12}}{D}, $$

with L₁₁, L₁₂ and L₂₂ as given in Section 2 and

$$ D = L_{11}L_{22} - L_{12}^2 = \frac{n}{\alpha^2}\left[\frac{n}{\beta^2} - \sum_{i=1}^n x_i^{\beta}(\ln x_i)^2 + \alpha\sum_{i=1}^n x_i^{\beta}\big(1 + x_i^{\beta}\big)(\ln x_i)^2 e^{x_i^{\beta}}\right] - \left[\sum_{i=1}^n x_i^{\beta} e^{x_i^{\beta}}\ln x_i\right]^2, $$

all evaluated at the MLEs.
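The formulas above can be evaluated directly once the MLEs are available; the following sketch (hypothetical function name, NumPy assumed) reuses the information entries from Section 2 and the gamma hyperparameters of Section 3.

```python
import numpy as np

def lindley_estimates(x, alpha_hat, beta_hat, hyper):
    """Lindley approximation (4.12)-(4.13) to the Bayes estimates of alpha and beta
    under gamma priors with hyperparameters (a1, a2, b1, b2), evaluated at the MLEs."""
    x = np.asarray(x, float)
    a1, a2, b1, b2 = hyper
    n, lx = len(x), np.log(x)
    xb, exb = x**beta_hat, np.exp(x**beta_hat)

    # Information entries (Section 2) and the elements of their inverse (sigma).
    L11 = n / alpha_hat**2
    L12 = (xb * exb * lx).sum()
    L22 = n / beta_hat**2 - (xb * lx**2).sum() + alpha_hat * (xb * (1 + xb) * lx**2 * exb).sum()
    D = L11 * L22 - L12**2
    s11, s22, s12 = L22 / D, L11 / D, -L12 / D

    # Third derivatives of the log-likelihood and the log-prior gradient.
    L111 = 2 * n / alpha_hat**3
    L221 = -((1 + xb) * xb * exb * lx**2).sum()
    L222 = (2 * n / beta_hat**3 + (xb * lx**3).sum()
            - alpha_hat * ((1 + 3 * xb + xb**2) * xb * exb * lx**3).sum())
    q1 = (a1 - 1) / alpha_hat - a2
    q2 = (b1 - 1) / beta_hat - b2

    alpha_tilde = (alpha_hat + q1 * s11 + q2 * s12
                   + 0.5 * (L111 * s11**2 + L221 * (s11 * s22 + 2 * s12**2) + L222 * s22 * s12))
    beta_tilde = (beta_hat + q1 * s12 + q2 * s22
                  + 0.5 * L111 * s11 * s12 + 1.5 * L221 * s22 * s12 + 0.5 * L222 * s22**2)
    return alpha_tilde, beta_tilde

# Hypothetical usage with the Example 1 priors and the MLEs from Section 2:
# print(lindley_estimates(sample, *chen_mle(sample), hyper=(4, 200, 25, 50)))
```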

Finally, it is worth pointing out some differences between the three approximation techniques: (1) the SB method provides the TBPI; (2) the MCI method provides both the TBPI and the posterior pdfs of the parameters; (3) the LA method provides neither the TBPI nor the posterior pdfs. Overall, the marginal posterior variances are computed much more easily with the SB and MCI methods than with the LA method.

5. Numerical results

In order to (i) illustrate the use of the three approximation methods for Bayes estimation proposed in this paper; (ii) make
comparisons among these three methods; and (iii) compare the performance of Bayes estimates of the parameters discussed
in this paper with that of the maximum likelihood estimates, we provide in this section two simple examples and a large
simulation study.

Example 1 [7]. In this example, the following random sample was generated from (1.1) with α = 0.02 and β = 0.5:

0.29, 1.44, 8.38, 8.66, 10.2, 11.04, 13.44, 14.37, 17.05, 17.13, 18.35.

Both the MLE and the Bayes estimates of the parameters α and β are computed, the latter using the three approximation techniques. To show that the likelihood equations have a unique solution, the profile log-likelihood function of β is plotted in Fig. 1; the vertical line is at its maximum, β̂ = 0.5618.

Comparisons between the MLE and the Bayes estimates are made based on their percentage errors, computed as

$$ \mathrm{PE}_{\theta} = \frac{|\text{estimated value of } \theta - \text{exact value of } \theta|}{\text{exact value of } \theta} \times 100\%. $$

Fig. 1. The profile log-likelihood function of β.

Table 1
Point estimates (P. est) and the corresponding percentage errors (PE).

           MLE               SB                MCI               LA
           α       β         α       β         α       β         α       β
P. est     0.0148  0.5618    0.0199  0.5399    0.0206  0.5355    0.0193  0.5677
PE         26%     12.36%    0.292%  7.972%    2.847%  7.091%    3.483%  13.544%

Table 2
95% TBPI and MBR.

Method   TBPI for α           TBPI for β         MBR for α        MBR for β
SB       (0.00754, 0.0351)    (0.4799, 0.600)    5.1195 × 10⁻⁵    9.3269 × 10⁻⁴
MCI      (0.0079, 0.0402)     (0.454, 0.592)     7.142 × 10⁻⁵     1.5 × 10⁻³

Fig. 2. Prior and posterior pdfs of α.



Following the method in [18], it is assumed here that the prior parameters for α and β are a₁ = 4, a₂ = 200, b₁ = 25, b₂ = 50. Table 1 gives the MLE and the Bayes estimates of the parameters, obtained using the three approximation techniques, together with their percentage errors. Based on the percentage errors in Table 1, we conclude that: (i) the SB method provides the best approximation to the Bayes estimates of the parameters among the three methods applied, while the LA method is the worst; and (ii) the LA method still provides better estimates than the MLE. This means that the Bayes procedure provides better estimates than the maximum likelihood approach in this example.

The 95% TBPIs of the parameters α and β obtained using the SB and MCI approximation methods are given in Table 2, together with the minimum Bayes risks associated with the corresponding Bayes estimates. Because the sample size is small, we do not compute the asymptotic confidence intervals for the parameters. The marginal prior and posterior pdfs of α and β are plotted in Figs. 2 and 3.

For comparison purposes, the Lindley approximations of the Bayes estimates of α and β using noninformative priors are 0.01480076 and 0.56179999, with corresponding PEs of 25.996% and 12.360%, respectively. As expected, the LA Bayes estimates are very close to the MLEs.

The results in Table 2 agree with the conclusion drawn from Table 1, since the SB method yields a smaller MBR than the MCI method.

The empirical marginal posterior cumulative distribution functions of α and β using gamma priors are plotted in Fig. 4.

Fig. 3. Prior and posterior pdfs of β.

Fig. 4. Empirical marginal posterior CDFs of α and β using gamma priors.



Using exponential prior distributions for α and β with parameters 49.751 and 3.4431, respectively, we obtain the results presented in Table 3.

Using the Lindley approximation, the Bayes estimates of the parameters α and β are 0.006207 and 0.6194, respectively, and the corresponding PEs are 68.67% and 26.39%. The MCI technique does not work in this case. Based on these results, we conclude that the SB method works better than the LA method, in the sense of having smaller PE, for this example when the parameters follow exponential priors. Thus, the SB method is the best among all the methods discussed here for this example under exponential priors.

The empirical marginal posterior cumulative distribution functions of α and β under exponential priors, which are used to calculate the TBPIs, are plotted in Fig. 5.

Table 3
Results using the SB method with exponential priors.

Parameter   Bayes est.   PE (%)   MBR             95% TBPI
α           0.02392      19.59    1.342 × 10⁻⁴    (0.00264, 0.0501)
β           0.52952      5.904    2.050 × 10⁻³    (0.456, 0.627)

Fig. 5. Empirical marginal posterior CDFs of α and β using exponential priors.

Fig. 6. Empirical and fitted scaled TTT-transform for the electrical appliances data.

Example 2. In this example, we analyze a real data set from Lawless [19], giving the number of thousands of cycles to failure for electrical appliances in a life test:

0.014, 0.034, 0.059, 0.061, 0.069, 0.08, 0.123, 0.142, 0.165, 0.21, 0.381, 0.464, 0.479, 0.556, 0.574, 0.839, 0.917, 0.969, 0.991, 1.064, 1.088, 1.091, 1.174, 1.27, 1.275, 1.355, 1.397, 1.477, 1.578, 1.649, 1.702, 1.893, 1.932, 2.001, 2.161, 2.292, 2.326, 2.337, 2.628, 2.785, 2.811, 2.886, 2.993, 3.122, 3.248, 3.715, 3.79, 3.857, 3.912, 4.1, 4.106, 4.116, 4.315, 4.51, 4.58, 5.267, 5.299, 5.583, 6.065, 9.701.

The MLEs of α and β are α̂ = 0.2452 and β̂ = 0.5318. The Bayes estimates are 0.2462 and 0.5307 (using LA) and 0.1954 and 0.5222 (using SB).

To assess whether the Chen distribution is a good candidate for fitting this data set, we plotted (1) the empirical scaled TTT-transform together with the fitted one using the Chen distribution (see Fig. 6) and (2) the empirical cdf and the fitted cdf using the Chen distribution (see Fig. 7), and we calculated the Kolmogorov–Smirnov statistic. Since Fig. 6 suggests that the hazard rate of the data has a bathtub shape, the Chen distribution is a suitable candidate for this data set. Figs. 8–10 give plots of the cdf, histogram and pdf of the marginal posterior distributions of α and β. The 95% TBPIs for α and β are shown in Fig. 10.
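The empirical scaled TTT-transform shown in Fig. 6 is computed from the order statistics by the standard formula φ(i/n) = [Σ_{j≤i} x_(j) + (n − i) x_(i)] / Σ_j x_(j); a minimal sketch (hypothetical helper name, NumPy assumed):

```python
import numpy as np

def scaled_ttt(x):
    """Empirical scaled TTT-transform: phi(i/n) = (sum_{j<=i} x_(j) + (n-i)*x_(i)) / sum_j x_(j).
    A concave (convex) plot suggests an increasing (decreasing) hazard; a plot that is first
    convex and then concave suggests a bathtub-shaped hazard."""
    xs = np.sort(np.asarray(x, float))
    n = len(xs)
    num = np.cumsum(xs) + (n - np.arange(1, n + 1)) * xs
    return np.arange(1, n + 1) / n, num / xs.sum()

# Hypothetical usage with the appliance data (in thousands of cycles):
# u, phi = scaled_ttt(appliance_data)
# # plotting phi against u reproduces the empirical curve of Fig. 6
```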

Fig. 7. Empirical and fitted cdf for the electrical appliances data.

Fig. 8. Histograms for the marginal posterior distributions of α and β.



Fig. 9. Histograms for the marginal posterior distributions of α and β.

Fig. 10. The marginal posterior probability density functions of α and β.

Example 3. To evaluate the performance of the ML and Bayes procedures as a function of the sample size, a large Monte Carlo simulation study is carried out according to the following scheme (a minimal code sketch is given after the list):

1. Specify the sample size n.
2. Generate a random sample of size n from CH(α, β).
3. Compute the MLEs of α and β from (2.5) and (2.6).
4. Compute the Bayes estimates of α and β using the LA and SB methods.
5. Compute the squared deviation of each point estimate from the corresponding true parameter value.
6. Repeat Steps 2–5 5000 times.
7. Calculate the MSE associated with each estimate of the two parameters for the three techniques.
8. Steps 1–7 are carried out for n = 10, 20, ..., 100.
9. Steps 1–8 are carried out with α = 0.02 and β = 0.5.
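A minimal sketch of this scheme (with a reduced number of replications and only the MLE step spelled out; the helper `chen_mle` and the inverse-cdf sampler are assumptions carried over from the earlier sketches):

```python
import numpy as np

def simulate_chen(n, alpha, beta, rng):
    """Inverse-cdf sampling from CH(alpha, beta): solving F(x) = u in (1.1) gives
    x = (log(1 - log(1 - u)/alpha))**(1/beta)."""
    u = rng.uniform(size=n)
    return (np.log(1.0 - np.log(1.0 - u) / alpha)) ** (1.0 / beta)

def mse_study(alpha=0.02, beta=0.5, sizes=(10, 20, 50, 100), reps=500, seed=None):
    """Monte Carlo comparison of the MLE (the Bayes estimates would be handled analogously)
    in terms of MSE, following the scheme listed above."""
    rng = np.random.default_rng(seed)
    results = {}
    for n in sizes:
        sq_err = np.zeros(2)
        for _ in range(reps):
            x = simulate_chen(n, alpha, beta, rng)
            a_hat, b_hat = chen_mle(x, hi=1.5)          # step 3; LA/SB estimates would go here too
            sq_err += (np.array([a_hat, b_hat]) - [alpha, beta]) ** 2
        results[n] = sq_err / reps                      # MSE for (alpha, beta) at this n
    return results
```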

Figs. 11 and 12 show the MSE values associated with the MLE and Bayes estimates of the parameters α and β, respectively, plotted against the sample size.

Fig. 11. The MSE associated with the MLE and Bayes estimates of α.

Fig. 12. The MSE associated with the MLE and Bayes estimates of β.

Several points are clear from Figs. 11 and 12. As expected, the performance of all estimators improves as the sample size increases. In terms of MSE, the Bayes estimates using SB and the MLEs become closer for large sample sizes. For all sample sizes, the Lindley approximation to the Bayes estimates and the MLEs are very close. SB provides better Bayes estimates of both α and β for all sample sizes.

6. Conclusion

In this paper we discussed parameter estimation for a two-parameter distribution [18,7]. This distribution can exhibit a bathtub-shaped failure rate, which makes it suitable for fitting several real lifetime data sets. We used the maximum likelihood and Bayes methods to estimate the two unknown parameters of this distribution. In the Bayes case, we assumed that the prior distributions of the two unknown parameters are gamma distributions.

The Bayes estimators cannot be obtained in explicit form; therefore, we used different approximations to derive point estimates and two-sided Bayesian probability intervals for the two parameters.

A real data set from Lawless [19] was analyzed using this distribution, and comparisons between the point estimates obtained by the maximum likelihood and Bayes methods were presented for that data set. General comparisons between the maximum likelihood estimates and the approximate Bayes estimates obtained under non-informative prior assumptions were made using Monte Carlo simulations.

In future work, Markov chain Monte Carlo (MCMC) simulation techniques could be applied to estimate the two unknown parameters, and the resulting estimates could be compared with those presented in the current paper.

Acknowledgement

The authors thank the referee for his valuable suggestions.

References

[1] R.M. Smith, L.J. Bain, An exponential power life-testing distribution, Commun. Stat. 4 (1975) 469–481.
[2] L. Leemis, Relationships among common univariate distributions, The Am. Stat. 40 (1986) 143–146.
[3] D.P. Gaver, M. Acar, Analytical hazard representations for use in reliability, mortality, and simulation studies, Commun. Stat. B (Sim. Comput.) 8 (1979)
91–111.
[4] U. Hjorth, A reliability distribution with increasing, decreasing, and bathtub-shaped failure rates, Technometrics 22 (1980) 99–107.
[5] G.S. Mudholkar, D.K. Srivastava, Exponentiated Weibull family for analyzing bathtub failure-rate data, IEEE Trans. Rel. 42 (2) (1993) 299–302.
[6] T. Zhang, M. Xie, L.C. Tang, S.H. Ng, Reliability and modeling of systems integrated with firmware and hardware, Int. J. Reliab. Qual. Saf. Eng. 12 (3)
(2005) 227–239.
[7] Z. Chen, A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function, Stat. Prob. Lett. 49 (2000) 155–161.
[8] J.-W. Wu, H.-L. Lu, C.-H. Chen, C.-H. Wu, Statistical inference about the shape parameter of the new two-parameter bathtub-shaped lifetime
distribution, Qual. Reliab. Eng. Int. 20 (2004) 607–616.
[9] S.-J. Wu, Estimation of the two-parameter bathtub-shaped lifetime distribution with progressive censoring, J. Appl. Stat. 35 (10) (2008) 1139–1150.
[10] S. Nadarajah, S. Kotz, The two-parameter bathtub-shaped lifetime distribution, Qual. Reliab. Eng. Int. 23 (2007) 279–280.
[11] M.R. Gurvich, A.T. Dibenedetto, S.V. Rande, A new statistical distribution for characterizing the random strength of brittle materials, J. Mater. Sci. 32
(1997) 2559–2564.
[12] G.R. Haynatzki, K. Weron, V.R. Haynatzka, A new statistical model of tumor latency time, Math. Comput. Model. 32 (2000) 251–256.
[13] H. Pham, C.-D. Lai, On recent generalizations of the Weibull distribution, IEEE Trans. Reliab. 56 (2007) 454–458.
[14] W.O. Meeker, L.A. Escobar, Statistical Methods for Reliability Data, John Wiley & Sons, Inc., New York, 1998.
[15] R.Y. Rubinstein, D.P. Kroese, Simulation and the Monte Carlo Method, 2nd edition., John Wiley and Sons, Inc., Hoboken, New Jersey, 2006.
[16] D.V. Lindley, Approximate Bayesian methods, Trabajos Estadistica 31 (1980) 223–237.
[17] J. Gren, Discussion on D.V. Lindley's (1980) paper on approximate Bayesian methods, Trabajos Estadistica 31 (1980) 241.
[18] H.F. Martz, R.A. Waller, Bayesian Reliability Analysis, Wiley, New York, 1982.
[19] J.F. Lawless, Statistical Models and Methods for Lifetime Data, Wiley, New York, 2003.
