Statistical Methodology 15 (2013) 73–94


Estimation of the stress–strength reliability for the generalized logistic distribution

A. Asgharzadeh a,∗, R. Valiollahi b, Mohammad Z. Raqab c

a Department of Statistics, University of Mazandaran, Babolsar, Iran
b Department of Statistics, Semnan University, Semnan, Iran
c Department of Statistics and Operations Research, Kuwait University, P.O. Box 5969 Safat, 13060, Kuwait

Article history:
Received 26 June 2012
Received in revised form 15 May 2013
Accepted 16 May 2013

Keywords:
Maximum likelihood estimator
Bootstrap confidence interval
Bayesian estimation
Metropolis–Hastings method

Abstract: Ragab [A. Ragab, Estimation and predictive density for the generalized logistic distribution, Microelectronics and Reliability 31 (1991) 91–95] described the Bayesian and empirical Bayesian methods for estimation of the stress–strength parameter R = P(Y < X), when X and Y are independent random variables from two generalized logistic (GL) distributions having the same known scale but different shape parameters. In the present paper, we consider the estimation of R when X and Y both follow two-parameter GL distributions with the same unknown scale but different shape parameters, or with the same unknown shape but different scale parameters. We also consider the general case in which both the shape and scale parameters differ. The maximum likelihood estimator of R and its asymptotic distribution are obtained, and the latter is used to construct the asymptotic confidence interval of R. We also implement Gibbs and Metropolis sampling to provide a sample-based estimate of R and its associated credible interval. Finally, analyses of a real data set and a Monte Carlo simulation study are presented for illustrative purposes.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

The problem of estimating R = P(Y < X), where X and Y are independent random variables, has received continuing interest. R is referred to as the reliability parameter: the probability that one random variable
exceeds another. If X ≤ Y , then either the component fails or the system that uses the component may
malfunction. This problem also arises in situations where X and Y represent lifetimes of two devices
and one wants to estimate the probability that one fails before the other.
It has many applications especially in electrical, electronic and mechanical systems such as fatigue
failure of aircraft structures, and the aging of concrete pressure vessels. For a technical system and its
elements to be capable of functioning, their operating characteristics must lie between certain limits.
For example, the strength of a chain link should be higher than the stress (load) applied. Consequently,
R plays an important role in this context. Hall [15] provided an example of a system application where
the breakdown voltage X of a capacitor must exceed the voltage output Y of a transverter (power
supply) in order for a component to work properly. As another example, if X is the response for a control group and Y refers to a treatment group, then R is a measure of the effect of the treatment. R also has applications in biological, medical and health service research (see [8]).
Based on complete X -sample and Y -sample, the problem of estimating R has been extensively
studied for many statistical models including exponential, normal, Burr, generalized exponential,
Weibull and Pareto distributions. For example, see [5,13,1,19,6,25,20,26]. For more details on the
results of the stress–strength model, one may refer to [18]. Recently, some authors have also studied
the inferential procedures of R for some lifetime distributions under progressive censoring. See, for
example, the work of Saracoglu et al. [27] and Asgharzadeh et al. [4].
Balakrishnan and Leung [7] defined the generalized logistic (GL) distribution as one of three
generalized forms of the standard logistic distribution. The GL distribution has received additional
attention in estimating its parameters for practical usage. For α > 0 and λ > 0, the two-parameter
GL distribution has the cumulative distribution function (cdf)

F(x; α, λ) = (1 + e^{-λx})^{-α},   −∞ < x < ∞,   (1.1)

and has the probability density function (pdf)

f(x; α, λ) = αλ e^{-λx} (1 + e^{-λx})^{-α-1},   −∞ < x < ∞.   (1.2)

Here α and λ are the shape and scale parameters, respectively. The GL distribution is negatively
skewed when α > 1 and positively skewed when 0 < α < 1, and for α = 1, it becomes the standard
logistic distribution. The two-parameter GL distribution will be denoted by GL(α, λ). It is observed
that the pdf (1.2) is unimodal and log-concave and it can be used to model both left and right skewed
data. Extensive work on GL distribution can be found in [17,3,2,14].
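Because the cdf (1.1) inverts in closed form, GL(α, λ) variates are easy to simulate by inversion: setting u = (1 + e^{-λx})^{-α} and solving gives x = −ln(u^{-1/α} − 1)/λ. A minimal sketch (in Python; the helper name gl_sample is our choice and is reused in later sketches) is:

```python
import numpy as np

def gl_sample(alpha, lam, size, rng=None):
    """Draw GL(alpha, lam) variates by inverting the cdf (1.1):
    u = (1 + exp(-lam*x))**(-alpha)  =>  x = -ln(u**(-1/alpha) - 1) / lam."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return -np.log(u ** (-1.0 / alpha) - 1.0) / lam

# e.g. a negatively skewed sample (alpha > 1), as noted above
x = gl_sample(2.0, 1.0, 1000)
```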
In this paper, we consider the estimation of R when X and Y both follow two-parameter GL distributions. To the best of our knowledge, this problem has not been studied in the literature except for the special case when X and Y are two independent GL random variables with different shape parameters but the same known scale parameter (see [24]). The main difference between our work and the existing work [24] is that the common scale parameter considered here is assumed to be unknown. We also consider the case when X and Y both follow two-parameter GL distributions with the same unknown shape parameter and different scale parameters, which has not been considered before. Further, the general case where the shape and scale parameters are unknown and different is also discussed.
The rest of the paper is organized as follows. In Section 2, the estimation of R with same scale and
different shape parameters is studied. In this section, the maximum likelihood estimator (MLE) of R
and its asymptotic distribution, bootstrap confidence intervals, Bayes estimator and highest posterior
density (HPD) credible interval of R are presented. The estimation of R with same shape and different
scale parameters is considered in Section 3. In Section 4, the estimation of R with different shape and
scale parameters is also discussed. Numerical comparisons of the different proposed estimators are
provided in Section 5. A real life example is analyzed and discussed for illustrative purposes.

2. Estimation of R with different shape and same scale parameters

In this section, we consider the problem of estimating R = P (Y < X ), under the assumption that
X ∼ GL(α, λ), Y ∼ GL(β, λ), and X and Y are independently distributed. Then it can be easily checked that

R = P(Y < X) = \frac{α}{α + β}.   (2.1)

2.1. Maximum likelihood estimator of R

To compute the MLE of R, first we obtain the MLEs of α , β and λ. Suppose X1 , X2 , . . . , Xn is a random
sample from GL(α, λ) and Y1 , Y2 , . . . , Ym is another independent random sample from GL(β, λ).
Therefore the log-likelihood function of α, β and λ is given by

l(α, β, λ) = n \ln(α) + m \ln(β) + (n + m) \ln(λ) − λ\Big(\sum_{i=1}^{n} x_i + \sum_{j=1}^{m} y_j\Big) − (α + 1)S_1(x, λ) − (β + 1)S_2(y, λ),

where

S_1(x, λ) = \sum_{i=1}^{n} \ln(1 + e^{-λx_i}),  and  S_2(y, λ) = \sum_{j=1}^{m} \ln(1 + e^{-λy_j}).   (2.2)

The MLEs of α, β and λ, say \hat{α}_{ML}, \hat{β}_{ML} and \hat{λ}_{ML} respectively, can be obtained numerically by solving the following equations:

\frac{∂l}{∂α} = \frac{n}{α} − S_1(x, λ) = 0, \qquad \frac{∂l}{∂β} = \frac{m}{β} − S_2(y, λ) = 0,   (2.3)

\frac{∂l}{∂λ} = \frac{n + m}{λ} − \Big(\sum_{i=1}^{n} x_i + \sum_{j=1}^{m} y_j\Big) + (α + 1)\sum_{i=1}^{n} \frac{x_i e^{-λx_i}}{1 + e^{-λx_i}} + (β + 1)\sum_{j=1}^{m} \frac{y_j e^{-λy_j}}{1 + e^{-λy_j}} = 0.   (2.4)

From (2.3), we obtain

\hat{α}(λ) = \frac{n}{S_1(x, λ)}  and  \hat{β}(λ) = \frac{m}{S_2(y, λ)}.   (2.5)

Substituting \hat{α}(λ) and \hat{β}(λ) into (2.4), we get \hat{λ} as a fixed-point solution of the following equation:

g(λ) = λ,   (2.6)

where

g(λ) = \frac{n + m}{\sum_{i=1}^{n} x_i + \sum_{j=1}^{m} y_j − (\hat{α}(λ) + 1)\sum_{i=1}^{n} \frac{x_i e^{-λx_i}}{1 + e^{-λx_i}} − (\hat{β}(λ) + 1)\sum_{j=1}^{m} \frac{y_j e^{-λy_j}}{1 + e^{-λy_j}}}.

A simple iterative procedure λ^{(j+1)} = g(λ^{(j)}), where λ^{(j)} is the j-th iterate, can be used to find the solution of (2.6). Once we obtain \hat{λ}_{ML}, the MLEs of α and β can be deduced from (2.5) as \hat{α}_{ML} = \hat{α}(\hat{λ}_{ML}) and \hat{β}_{ML} = \hat{β}(\hat{λ}_{ML}). Therefore, the MLE of R is computed to be

\hat{R}_{ML} = \frac{\hat{α}_{ML}}{\hat{α}_{ML} + \hat{β}_{ML}}.   (2.7)

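For illustration, the fixed-point iteration for \hat{λ} and the resulting estimate (2.7) can be coded in a few lines. The following Python sketch is one possible implementation (the paper's own computations used Maple 16; the function name mle_common_scale and the convergence settings below are our choices):

```python
import numpy as np

def S(data, lam):
    """S(data, lam) = sum_i ln(1 + exp(-lam * data_i)), as in (2.2)."""
    return np.sum(np.log1p(np.exp(-lam * data)))

def mle_common_scale(x, y, lam0=1.0, tol=1e-5, max_iter=500):
    """MLEs of (alpha, beta, lambda) and of R via the fixed point (2.5)-(2.7)."""
    lam = lam0
    for _ in range(max_iter):
        alpha = len(x) / S(x, lam)                     # alpha_hat(lam), Eq. (2.5)
        beta = len(y) / S(y, lam)                      # beta_hat(lam),  Eq. (2.5)
        denom = (x.sum() + y.sum()
                 - (alpha + 1) * np.sum(x * np.exp(-lam * x) / (1 + np.exp(-lam * x)))
                 - (beta + 1) * np.sum(y * np.exp(-lam * y) / (1 + np.exp(-lam * y))))
        lam_new = (len(x) + len(y)) / denom            # g(lam), Eq. (2.6)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    alpha, beta = len(x) / S(x, lam), len(y) / S(y, lam)
    return alpha, beta, lam, alpha / (alpha + beta)    # last entry is R_hat, Eq. (2.7)
```

With the settings reported in Section 5.2 (initial value 1, tolerance 10^{-5}) this mirrors the iteration used there; convergence is not guaranteed for arbitrary starting values, so the iterates should be monitored in practice.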
2.2. Asymptotic distribution

In this section, the asymptotic distribution of the MLE \hat{θ} = (\hat{α}, \hat{β}, \hat{λ}) and of \hat{R} are obtained. Based on the asymptotic distribution of \hat{R}, we can obtain the asymptotic confidence interval of R.

Theorem 1. As n → ∞, m → ∞ and n/m → p, where p is a positive real constant, then

(\sqrt{n}(\hat{α} − α), \sqrt{m}(\hat{β} − β), \sqrt{n}(\hat{λ} − λ)) \xrightarrow{d} N_3(0, A^{-1}(α, β, λ)),

where

A(α, β, λ) = \begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},

and

a_{11} = \lim_{n,m→∞} \frac{J_{11}}{n} = \frac{1}{α^2},  a_{13} = a_{31} = \lim_{n,m→∞} \frac{J_{13}}{n} = −\frac{Ψ(α) + γ − 1}{λ(α + 1)},

a_{22} = \lim_{n,m→∞} \frac{J_{22}}{m} = \frac{1}{β^2},  a_{23} = a_{32} = \lim_{n,m→∞} \frac{\sqrt{p}}{n} J_{23} = −\frac{Ψ(β) + γ − 1}{λ(β + 1)\sqrt{p}},

a_{33} = \lim_{n,m→∞} \frac{J_{33}}{n} = \frac{p + 1}{pλ^2} + \frac{1}{λ^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\}
      + \frac{1}{pλ^2(β + 2)}\Big\{β[Ψ′(β) + Ψ^2(β)] + 2[(β(γ − 1) + 1)Ψ(β) − (γ(β − 1) + 1)] + β\Big(γ^2 + \frac{π^2}{6}\Big)\Big\}.

Proof. The proof follows from the asymptotic properties of MLEs and the multivariate central limit
theorem. The details of derivation can be seen in Appendix A. 

The main result is presented in Theorem 2.

Theorem 2. As n → ∞, m → ∞ and n/m → p, then

\sqrt{n}(\hat{R} − R) \xrightarrow{d} N(0, B_A),   (2.8)

where

B_A = \frac{1}{u_A(α + β)^4}\big[β^2(a_{22}a_{33} − a_{23}^2) − 2αβ(a_{13}a_{23}) + α^2(a_{11}a_{33} − a_{13}^2)\big],

and

u_A = a_{11}a_{22}a_{33} − a_{11}a_{23}a_{32} − a_{13}a_{22}a_{31}.

Proof. By using Theorem 1 and applying the delta method, we immediately conclude the asymptotic distribution of \hat{R} = g(\hat{α}, \hat{β}, \hat{λ}), where g(α, β, λ) = α/(α + β), as follows:

\sqrt{n}(\hat{R} − R) \xrightarrow{d} N(0, B_A),

where B_A = b_A^t A^{-1} b_A, with

b_A = \begin{pmatrix} ∂R/∂α \\ ∂R/∂β \\ ∂R/∂λ \end{pmatrix} = \frac{1}{(α + β)^2}\begin{pmatrix} β \\ −α \\ 0 \end{pmatrix},
A^{-1} = \frac{1}{u_A}\begin{pmatrix} a_{22}a_{33} − a_{23}^2 & a_{13}a_{32} & −a_{13}a_{22} \\ a_{23}a_{31} & a_{11}a_{33} − a_{13}^2 & −a_{11}a_{23} \\ −a_{22}a_{31} & −a_{11}a_{32} & a_{11}a_{22} \end{pmatrix},

and u_A = a_{11}a_{22}a_{33} − a_{11}a_{23}a_{32} − a_{13}a_{22}a_{31}. Therefore

B_A = b_A^t A^{-1} b_A = \frac{1}{u_A(α + β)^4}\big[β^2(a_{22}a_{33} − a_{23}^2) − 2αβ(a_{13}a_{23}) + α^2(a_{11}a_{33} − a_{13}^2)\big].

The proof is thus completed.

Theorem 2 can be used to construct the asymptotic confidence interval of R. To compute the
confidence interval of R, the variance BA has to be estimated. In fact, the estimate of BA is obtained
by replacing α, β and λ involved in BA by their corresponding MLEs.
Now, we can obtain the 100(1 − γ)% confidence interval for R as

\Big(\hat{R} − z_{1−γ/2}\sqrt{\frac{\hat{B}_A}{n}}, \; \hat{R} + z_{1−γ/2}\sqrt{\frac{\hat{B}_A}{n}}\Big),   (2.9)

where z_γ is the 100γ-th percentile of N(0, 1).
As mentioned by one of the reviewers, instead of approximating (\hat{R} − R)/\sqrt{Var(\hat{R})} as a standard normal variable, we may consider some normalizing transformation g(R) of R and treat [g(\hat{R}) − g(R)]/\sqrt{Var[g(\hat{R})]} as N(0, 1). It is known that Var[g(\hat{R})] = [g′(R)]^2 Var(\hat{R}) = [g′(R)]^2 B_A/n. Based on the pivotal quantity [g(\hat{R}) − g(R)]/\sqrt{Var[g(\hat{R})]}, we establish a 100(1 − γ)% confidence interval for g(R) as g(\hat{R}) ± z_{1−γ/2}\sqrt{Var[g(\hat{R})]}. If g(R) is strictly increasing, then the approximate 100(1 − γ)% confidence interval for R is derived to be

\Big(g^{-1}\big(g(\hat{R}) − z_{1−γ/2}\sqrt{Var[g(\hat{R})]}\big), \; g^{-1}\big(g(\hat{R}) + z_{1−γ/2}\sqrt{Var[g(\hat{R})]}\big)\Big).

Specifically, we may consider the following two transformations.

(a) Logit transformation: g(R) = \ln\big(\frac{R}{1 − R}\big), with g′(R) = \frac{1}{R(1 − R)}.

(b) Arcsin transformation: g(R) = \sin^{-1}(\sqrt{R}), with g′(R) = \frac{1}{2\sqrt{R(1 − R)}}.

For more details about these transformations, see [22] and [21, p. 88].
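As an illustration of (2.9) and of the two transformed intervals, a small Python sketch follows (the function name asymptotic_ci and its arguments are our choices; R_hat is the MLE of R, B_hat the plug-in estimate of B_A, and n the size of the X-sample):

```python
import numpy as np
from scipy.stats import norm

def asymptotic_ci(R_hat, B_hat, n, gamma=0.05, transform="logit"):
    """Asymptotic CI for R: untransformed (2.9), or via the logit/arcsin transforms."""
    z = norm.ppf(1 - gamma / 2)
    se = np.sqrt(B_hat / n)                              # sqrt of Var(R_hat)
    if transform == "none":
        return R_hat - z * se, R_hat + z * se            # Eq. (2.9)
    if transform == "logit":
        g = lambda r: np.log(r / (1 - r))
        g_inv = lambda u: 1.0 / (1.0 + np.exp(-u))
        g_prime = 1.0 / (R_hat * (1 - R_hat))
    else:                                                # "arcsin"
        g = lambda r: np.arcsin(np.sqrt(r))
        g_inv = lambda u: np.sin(np.clip(u, 0.0, np.pi / 2)) ** 2
        g_prime = 1.0 / (2.0 * np.sqrt(R_hat * (1 - R_hat)))
    se_g = g_prime * se                                  # delta method applied to g(R_hat)
    return g_inv(g(R_hat) - z * se_g), g_inv(g(R_hat) + z * se_g)
```

Both transforms keep the endpoints inside (0, 1), which is the practical motivation for using them with small samples.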

2.3. Bootstrap confidence intervals

Confidence intervals based on the asymptotic results are expected not to perform very well for
small sample size(s). In this subsection, we propose to use two confidence intervals based on the
parametric bootstrap methods: (i) the percentile bootstrap method (we call it Boot-p) based on
the idea of Efron [11], and (ii) the bootstrap-t method (we refer to it as Boot-t) based on the idea
of Hall [16]. The algorithms for estimating the confidence intervals of R using both methods are
illustrated below.
(i) Boot-p method
1. From the samples x_1, . . . , x_n and y_1, . . . , y_m, compute \hat{α}_{ML}, \hat{β}_{ML} and \hat{λ}_{ML}.
2. Using \hat{α}_{ML} and \hat{λ}_{ML}, generate a bootstrap sample {x_1^*, . . . , x_n^*} and, similarly, using \hat{β}_{ML} and \hat{λ}_{ML}, generate a bootstrap sample {y_1^*, . . . , y_m^*}. Based on {x_1^*, . . . , x_n^*} and {y_1^*, . . . , y_m^*}, compute the bootstrap estimate of R, say \hat{R}^*, using (2.7).
3. Repeat Step 2, NBOOT times.
4. Let g_1(x) = P(\hat{R}^* ≤ x) be the cdf of \hat{R}^*. Define \hat{R}_{Bp}(x) = g_1^{-1}(x) for a given x. The approximate 100(1 − λ)% confidence interval of R is given by

\big(\hat{R}_{Bp}(λ/2), \; \hat{R}_{Bp}(1 − λ/2)\big).   (2.10)

(ii) Boot-t method
1. From the samples x_1, . . . , x_n and y_1, . . . , y_m, compute \hat{α}_{ML}, \hat{β}_{ML} and \hat{λ}_{ML}.
2. Using \hat{α}_{ML} and \hat{λ}_{ML}, generate a bootstrap sample {x_1^*, . . . , x_n^*} and, similarly, using \hat{β}_{ML} and \hat{λ}_{ML}, generate a bootstrap sample {y_1^*, . . . , y_m^*} as before. Based on {x_1^*, . . . , x_n^*} and {y_1^*, . . . , y_m^*}, compute the bootstrap estimate of R using (2.7), say \hat{R}^*, and the statistic

T^* = \frac{\sqrt{n}(\hat{R}^* − \hat{R})}{\sqrt{Var(\hat{R}^*)}}.

Compute Var(\hat{R}^*) using Theorem 2.
3. Repeat Step 2, NBOOT times.
4. Let g_2(x) = P(T^* ≤ x) be the cumulative distribution function of T^*. For a given x, define

\hat{R}_{Bt}(x) = \hat{R} + g_2^{-1}(x)\sqrt{\frac{Var(\hat{R})}{n}}.

The approximate 100(1 − λ)% confidence interval of R is given by

\big(\hat{R}_{Bt}(λ/2), \; \hat{R}_{Bt}(1 − λ/2)\big).   (2.11)
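A minimal sketch of the Boot-p procedure, reusing gl_sample (Section 1) and mle_common_scale (Section 2.1) from the earlier sketches, might look as follows; the values of NBOOT and the seed are arbitrary choices here:

```python
import numpy as np

def boot_p_ci(x, y, level=0.95, n_boot=1000, seed=0):
    """Percentile bootstrap (Boot-p) interval for R, Steps 1-4 of Section 2.3(i)."""
    rng = np.random.default_rng(seed)
    a_hat, b_hat, lam_hat, _ = mle_common_scale(x, y)    # Step 1: MLEs from the data
    r_star = np.empty(n_boot)
    for b in range(n_boot):                              # Steps 2-3: resample and re-estimate
        xb = gl_sample(a_hat, lam_hat, len(x), rng)
        yb = gl_sample(b_hat, lam_hat, len(y), rng)
        r_star[b] = mle_common_scale(xb, yb)[3]          # bootstrap R_hat* via (2.7)
    lo, hi = np.quantile(r_star, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi                                        # Step 4: percentiles of R_hat*
```

The Boot-t interval differs only in that each replicate is studentized by the Theorem 2 variance estimate before the quantiles are taken.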

2.4. Bayes estimation of R

In this section, we obtain the Bayes estimation of R under the assumption that the scale parameter λ
and shape parameters α and β are random variables. It is quite natural to assume independent gamma
priors on the shape and scale parameters. In this case, it is observed that the problem becomes quite
intractable analytically and it has to be solved numerically. Specifically, we assume that α, β and λ
have density functions Gamma(a1 , b1 ), Gamma(a2 , b2 ) and Gamma(a3 , b3 ), respectively. Here all the
hyper-parameters ai and bi (i = 1, 2, 3) are assumed to be known and non-negative.
Based on the above assumptions, we have the likelihood function of the observed data as

L(data | α, β, λ) = α^n β^m λ^{n+m} \exp\Big\{−λ\Big(\sum_{i=1}^{n} x_i + \sum_{j=1}^{m} y_j\Big) − (α + 1)S_1(x, λ) − (β + 1)S_2(y, λ)\Big\},

where S_1(x, λ) and S_2(y, λ) are defined in (2.2).


The joint density of the data, α, β and λ can be obtained as
L(data, α, β, λ) = L(data; λ, α, β) × π1 (α) × π2 (β) × π3 (λ), (2.12)
where π1 (α), π2 (β) and π3 (λ) are the gamma prior densities for α, β and λ. Therefore, the joint
posterior density of λ, α and β given the data is
L(data, λ, α, β)
L(λ, α, β|data) =  ∞  ∞  ∞ . (2.13)
0 0 0
L ( data, λ, α, β) dλ dα d β
Since the expression (2.13) cannot be written in a closed form, we adopt the Gibbs sampling technique
to compute the Bayes estimate of R and the corresponding credible interval of R.
The posterior pdfs of α, β and λ can be obtained as follows:

α | β, λ, data ∼ Gamma(n + a_1, b_1 + S_1(x, λ)),
β | α, λ, data ∼ Gamma(m + a_2, b_2 + S_2(y, λ)),
f(λ | α, β, data) ∝ λ^{n+m+a_3−1} \exp\Big\{−λ\Big(b_3 + \sum_{i=1}^{n} x_i + \sum_{j=1}^{m} y_j\Big) − (α + 1)S_1(x, λ) − (β + 1)S_2(y, λ)\Big\}.

Clearly, the form of the posterior density does not lead to explicit Bayes estimates of the model parameters. For this, we use the Metropolis method with a normal proposal distribution. Therefore, the Gibbs sampling algorithm is described as follows:
1. Start with an initial guess (α^{(0)}, β^{(0)}, λ^{(0)}).
2. Set t = 1.
3. Generate α^{(t)} from Gamma(n + a_1, b_1 + S_1(x, λ^{(t−1)})).
4. Generate β^{(t)} from Gamma(m + a_2, b_2 + S_2(y, λ^{(t−1)})).
5. Using the Metropolis method, generate λ^{(t)} from f(λ | α^{(t)}, β^{(t)}, data) with the N(λ^{(t−1)}, 0.5) proposal distribution.
6. Compute R^{(t)} from (2.1).
7. Set t = t + 1.
8. Repeat Steps 3–7, T times.
Now the approximate posterior mean and variance of R become

\hat{E}(R | data) = \frac{1}{T}\sum_{t=1}^{T} R^{(t)},   (2.14)

\widehat{Var}(R | data) = \frac{1}{T}\sum_{t=1}^{T}\big(R^{(t)} − \hat{E}(R | data)\big)^2.   (2.15)

Using the results in [9], we immediately construct the 100(1 − γ)% highest posterior density (HPD) credible interval as

\big(R_{[(γ/2)T]}, \; R_{[(1−γ/2)T]}\big),

where R_{[(γ/2)T]} and R_{[(1−γ/2)T]} are the [(γ/2)T]-th smallest and the [(1 − γ/2)T]-th smallest values of {R^{(t)}, t = 1, 2, . . . , T}, respectively.
It is worth pointing out that the Metropolis algorithm admits only symmetric proposal distributions; therefore the normal distribution is an appropriate choice. When choosing a proposal distribution, its scale (for example σ) needs to be chosen carefully: a poor choice of σ may result in slow convergence and slow mixing, so it is often a good idea to experiment with different values of σ. It is checked here that the normal proposal distribution with variance σ^2 = 0.5 is the best choice for rapid convergence of the Metropolis algorithm.
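A compact Python sketch of this Gibbs/Metropolis scheme is given below (variable names, the flat initial guess and the absence of a burn-in period are our simplifications; the proposal standard deviation is √0.5, matching the variance σ² = 0.5 quoted above):

```python
import numpy as np

def gibbs_R(x, y, a=(1e-4,) * 3, b=(1e-4,) * 3, T=1000, prop_sd=np.sqrt(0.5), seed=0):
    """Posterior draws of R = alpha/(alpha + beta) following Steps 1-8 of Section 2.4."""
    rng = np.random.default_rng(seed)
    S1 = lambda lam: np.sum(np.log1p(np.exp(-lam * x)))
    S2 = lambda lam: np.sum(np.log1p(np.exp(-lam * y)))
    n, m, sx, sy = len(x), len(y), x.sum(), y.sum()

    def log_post_lam(lam, alpha, beta):
        # log of the (unnormalized) full conditional f(lambda | alpha, beta, data)
        if lam <= 0:
            return -np.inf
        return ((n + m + a[2] - 1) * np.log(lam) - lam * (b[2] + sx + sy)
                - (alpha + 1) * S1(lam) - (beta + 1) * S2(lam))

    alpha, beta, lam = 1.0, 1.0, 1.0                         # Step 1: initial guess
    R = np.empty(T)
    for t in range(T):
        alpha = rng.gamma(n + a[0], 1.0 / (b[0] + S1(lam)))  # Step 3
        beta = rng.gamma(m + a[1], 1.0 / (b[1] + S2(lam)))   # Step 4
        cand = rng.normal(lam, prop_sd)                      # Step 5: Metropolis update of lambda
        if np.log(rng.uniform()) < log_post_lam(cand, alpha, beta) - log_post_lam(lam, alpha, beta):
            lam = cand
        R[t] = alpha / (alpha + beta)                        # Step 6: R^(t) from (2.1)
    return R
```

The mean of the returned draws gives (2.14), their empirical quantiles give the credible interval, and a burn-in portion would normally be discarded.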

3. Estimation of R with different scale and same shape parameters

In this section, we consider the problem of estimating R = P (Y < X ), under the assumption that
X ∼ GL(α, λ1 ), Y ∼ GL(α, λ2 ), and X and Y are independently distributed. Then it can be easily seen
that

R = P(Y < X) = H(α, λ_1, λ_2),   (3.1)

where

H(α, λ_1, λ_2) = α \int_0^∞ (1 + t)^{−α−1}\big(1 + t^{λ_2/λ_1}\big)^{−α}\, dt.

3.1. Maximum likelihood estimator of R

Suppose X_1, X_2, . . . , X_n is a random sample from GL(α, λ_1) and Y_1, Y_2, . . . , Y_m is another independent random sample from GL(α, λ_2). Therefore the log-likelihood function of α, λ_1 and λ_2 is given by

l(α, λ_1, λ_2) = (n + m) \ln(α) + n \ln(λ_1) + m \ln(λ_2) − λ_1 \sum_{i=1}^{n} x_i − λ_2 \sum_{j=1}^{m} y_j − (α + 1)[S_1(x, λ_1) + S_2(y, λ_2)],

where

S_1(x, λ_1) = \sum_{i=1}^{n} \ln(1 + e^{-λ_1 x_i}),  and  S_2(y, λ_2) = \sum_{j=1}^{m} \ln(1 + e^{-λ_2 y_j}).   (3.2)

The MLEs of α, λ_1 and λ_2, say \hat{α}_{ML}, \hat{λ}_{1ML} and \hat{λ}_{2ML} respectively, can be found by solving

\frac{∂l}{∂α} = \frac{n + m}{α} − S_1(x, λ_1) − S_2(y, λ_2) = 0,   (3.3)

\frac{∂l}{∂λ_1} = \frac{n}{λ_1} − \sum_{i=1}^{n} x_i + (α + 1)\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}} = 0,   (3.4)

\frac{∂l}{∂λ_2} = \frac{m}{λ_2} − \sum_{j=1}^{m} y_j + (α + 1)\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}} = 0.   (3.5)

From (3.3), we obtain

\hat{α}(λ_1, λ_2) = \frac{n + m}{S_1(x, λ_1) + S_2(y, λ_2)}.   (3.6)

Substituting \hat{α}(λ_1, λ_2) into (3.4) and (3.5), \hat{λ}_1 and \hat{λ}_2 can be obtained as a solution of the following non-linear equations:

\frac{n}{λ_1} − \sum_{i=1}^{n} x_i + (\hat{α}(λ_1, λ_2) + 1)\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}} = 0,

\frac{m}{λ_2} − \sum_{j=1}^{m} y_j + (\hat{α}(λ_1, λ_2) + 1)\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}} = 0.   (3.7)

Once we obtain \hat{λ}_{1ML} and \hat{λ}_{2ML}, the MLE of α can be deduced from (3.6) as \hat{α}_{ML} = \hat{α}(\hat{λ}_{1ML}, \hat{λ}_{2ML}). Therefore, we compute the MLE of R as

\hat{R}_{ML} = H(\hat{α}_{ML}, \hat{λ}_{1ML}, \hat{λ}_{2ML}).   (3.8)
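The one-dimensional integral H can be evaluated numerically; a short sketch (our illustration, using SciPy) is:

```python
import numpy as np
from scipy.integrate import quad

def H(alpha, lam1, lam2):
    """R = P(Y < X) for X ~ GL(alpha, lam1), Y ~ GL(alpha, lam2); Eq. (3.1)."""
    integrand = lambda t: (1 + t) ** (-alpha - 1) * (1 + t ** (lam2 / lam1)) ** (-alpha)
    value, _ = quad(integrand, 0, np.inf)
    return alpha * value
```

Plugging the MLEs into H gives (3.8); a convenient sanity check is H(α, λ, λ) = 1/2 for any α.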

3.2. Asymptotic distribution

To derive the asymptotic distribution of \hat{R} and the corresponding asymptotic confidence interval of R, we first obtain the asymptotic distribution of \hat{θ} = (\hat{α}, \hat{λ}_1, \hat{λ}_2). From Appendix B, we describe the results in Theorems 3 and 4. The proofs are similar to the ones in Theorems 1 and 2 and thus they are omitted.

Theorem 3. As n → ∞, m → ∞ and n/m → p, then

(\sqrt{m}(\hat{α} − α), \sqrt{n}(\hat{λ}_1 − λ_1), \sqrt{n}(\hat{λ}_2 − λ_2)) \xrightarrow{d} N_3(0, C^{-1}(α, λ_1, λ_2)),

where

C(α, λ_1, λ_2) = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & 0 \\ c_{31} & 0 & c_{33} \end{pmatrix},

with

c_{11} = \lim_{n,m→∞} \frac{J_{11}}{m} = \frac{p + 1}{α^2},

c_{12} = c_{21} = \lim_{n,m→∞} \frac{\sqrt{p}}{n} J_{12} = −\frac{[Ψ(α) + γ − 1]\sqrt{p}}{λ_1(α + 1)},

c_{13} = c_{31} = \lim_{n,m→∞} \frac{1}{m\sqrt{p}} J_{13} = −\frac{Ψ(α) + γ − 1}{λ_2(α + 1)\sqrt{p}},

c_{22} = \lim_{n,m→∞} \frac{J_{22}}{n} = \frac{1}{λ_1^2} + \frac{1}{λ_1^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

c_{33} = \lim_{n,m→∞} \frac{J_{33}}{n} = \frac{1}{pλ_2^2} + \frac{1}{pλ_2^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\}.

Theorem 4. As n → ∞, m → ∞ and n/m → p, then

\sqrt{m}(\hat{R} − R) \xrightarrow{d} N(0, B_C),

where B_C = b_C^t C^{-1} b_C, with

b_C = \begin{pmatrix} h_1(α, λ_1, λ_2) \\ \frac{α^2 λ_2}{λ_1^2} h_2(α, λ_1, λ_2) \\ −\frac{α^2}{λ_1} h_2(α, λ_1, λ_2) \end{pmatrix},
C^{-1} = \frac{1}{u_C}\begin{pmatrix} c_{22}c_{33} & −c_{21}c_{33} & −c_{22}c_{31} \\ −c_{21}c_{33} & c_{11}c_{33} − c_{13}^2 & c_{13}c_{21} \\ −c_{22}c_{31} & c_{12}c_{31} & c_{11}c_{22} − c_{12}^2 \end{pmatrix},

with

h_1(α, λ_1, λ_2) = \int_0^∞ \big[1 − α \ln(1 + t) − α \ln\big(1 + t^{λ_2/λ_1}\big)\big](1 + t)^{−α−1}\big(1 + t^{λ_2/λ_1}\big)^{−α}\, dt,

h_2(α, λ_1, λ_2) = \int_0^∞ \ln(t)\, t^{λ_2/λ_1}(1 + t)^{−α−1}\big(1 + t^{λ_2/λ_1}\big)^{−α−1}\, dt,

and u_C = c_{11}c_{22}c_{33} − c_{12}c_{21}c_{33} − c_{13}c_{31}c_{22}.

Using Theorem 4, we obtain the 100(1 − γ)% asymptotic confidence interval for R as

\Big(\hat{R} − z_{1−γ/2}\sqrt{\frac{\hat{B}_C}{m}}, \; \hat{R} + z_{1−γ/2}\sqrt{\frac{\hat{B}_C}{m}}\Big).   (3.9)

Here \hat{B}_C is the estimate of B_C, which is obtained by replacing α, λ_1 and λ_2 in B_C by their corresponding MLEs. Note also that we can find confidence intervals of R based on the percentile bootstrap and bootstrap-t methods. Since they are similar to those presented in Section 2.3, we omit them.

3.3. Bayes estimation of R

In this section, we obtain the Bayes estimation of R under the assumption that the common shape parameter α and the different scale parameters λ_1 and λ_2 are random variables. It is assumed that α, λ_1 and λ_2 have density functions Gamma(a_1, b_1), Gamma(a_2, b_2) and Gamma(a_3, b_3), respectively. Moreover, it is assumed that α, λ_1 and λ_2 are independent. Based on the above assumptions, we obtain the likelihood function of the observed data as

L(data | α, λ_1, λ_2) = α^{n+m} λ_1^n λ_2^m \exp\Big\{−λ_1\sum_{i=1}^{n} x_i − λ_2\sum_{j=1}^{m} y_j − (α + 1)[S_1(x, λ_1) + S_2(y, λ_2)]\Big\}.

Once again, the Bayes estimates of α, λ_1 and λ_2 cannot be obtained in closed forms. For this, we adopt the Gibbs sampling technique to compute the Bayes estimate of R and its respective credible interval. The posterior pdfs of α, λ_1 and λ_2 are given by

α | λ_1, λ_2, data ∼ Gamma(n + m + a_1, b_1 + S_1(x, λ_1) + S_2(y, λ_2)),
f(λ_1 | α, λ_2, data) ∝ λ_1^{n+a_2−1} \exp\Big\{−λ_1\Big(b_2 + \sum_{i=1}^{n} x_i\Big) − (α + 1)S_1(x, λ_1)\Big\},
f(λ_2 | α, λ_1, data) ∝ λ_2^{m+a_3−1} \exp\Big\{−λ_2\Big(b_3 + \sum_{j=1}^{m} y_j\Big) − (α + 1)S_2(y, λ_2)\Big\}.

We observe that the posterior pdfs of λ_1 and λ_2 are not of well-known forms. Therefore, we generate random numbers from these distributions by using the Metropolis method with a normal proposal distribution. The following algorithm summarizes how the Bayes estimates of λ_1 and λ_2 are simulated:
1. Start with an initial guess (α^{(0)}, λ_1^{(0)}, λ_2^{(0)}).
2. Set t = 1.
3. Generate α^{(t)} from Gamma(n + m + a_1, b_1 + S_1(x, λ_1^{(t−1)}) + S_2(y, λ_2^{(t−1)})).
4. Using the Metropolis method, generate λ_1^{(t)} from f(λ_1 | α^{(t)}, λ_2^{(t−1)}, data) with the N(λ_1^{(t−1)}, 0.5) proposal distribution.
5. Using the Metropolis method, generate λ_2^{(t)} from f(λ_2 | α^{(t)}, λ_1^{(t)}, data) with the N(λ_2^{(t−1)}, 1) proposal distribution.
6. Compute R^{(t)} from (3.1).
7. Set t = t + 1.
8. Repeat Steps 3–7, T times.
It should be mentioned here that when the same shape parameter α is known, the results can
be obtained with minor modifications. If we assume α = 1, then X1 , X2 , . . . , Xn is a random sample
from GL(1, λ1 ) and Y1 , Y2 , . . . , Ym is another independent random sample from GL(1, λ2 ). In this case,
the results for the scaled logistic distributions can be obtained as a special case with different scale
parameters.

4. Estimation of R in the general case

In this section, we consider the problem of estimating R = P (Y < X ), under the assumption that
X ∼ GL(α, λ1 ), Y ∼ GL(β, λ2 ), and X and Y are independently distributed. Then it can be easily seen
that
R = P(Y < X) = K(α, β, λ_1, λ_2),   (4.1)

where

K(α, β, λ_1, λ_2) = α \int_0^∞ (1 + t)^{−(α+1)}\big(1 + t^{λ_2/λ_1}\big)^{−β}\, dt.

4.1. MLE of R

Let X_1, X_2, . . . , X_n be a random sample from GL(α, λ_1) and Y_1, Y_2, . . . , Y_m be another independent random sample from GL(β, λ_2). Therefore the log-likelihood function of α, β, λ_1 and λ_2 is given by

l(α, β, λ_1, λ_2) = n[\ln(α) + \ln(λ_1)] + m[\ln(β) + \ln(λ_2)] − λ_1\sum_{i=1}^{n} x_i − λ_2\sum_{j=1}^{m} y_j − (α + 1)S_1(x, λ_1) − (β + 1)S_2(y, λ_2),

where

S_1(x, λ_1) = \sum_{i=1}^{n} \ln(1 + e^{-λ_1 x_i}),  and  S_2(y, λ_2) = \sum_{j=1}^{m} \ln(1 + e^{-λ_2 y_j}).   (4.2)

The MLEs of α, β, λ_1 and λ_2, say \hat{α}_{ML}, \hat{β}_{ML}, \hat{λ}_{1ML} and \hat{λ}_{2ML} respectively, can be obtained as the solution of

\frac{∂l}{∂α} = \frac{n}{α} − S_1(x, λ_1) = 0,   (4.3)

\frac{∂l}{∂β} = \frac{m}{β} − S_2(y, λ_2) = 0,   (4.4)

\frac{∂l}{∂λ_1} = \frac{n}{λ_1} − \sum_{i=1}^{n} x_i + (α + 1)\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}} = 0,   (4.5)

\frac{∂l}{∂λ_2} = \frac{m}{λ_2} − \sum_{j=1}^{m} y_j + (β + 1)\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}} = 0.   (4.6)

From (4.3) and (4.4), we obtain

\hat{α}(λ_1) = \frac{n}{S_1(x, λ_1)}  and  \hat{β}(λ_2) = \frac{m}{S_2(y, λ_2)}.   (4.7)

Putting the values of \hat{α}(λ_1) and \hat{β}(λ_2) into (4.5) and (4.6), \hat{λ}_{1ML} and \hat{λ}_{2ML} can be obtained as a solution of the following non-linear equations:

\frac{n}{λ_1} − \sum_{i=1}^{n} x_i + (\hat{α}(λ_1) + 1)\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}} = 0,

\frac{m}{λ_2} − \sum_{j=1}^{m} y_j + (\hat{β}(λ_2) + 1)\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}} = 0.   (4.8)

Once we obtain \hat{λ}_{1ML} and \hat{λ}_{2ML}, the MLEs of α and β can be deduced from (4.7) as \hat{α}_{ML} = \hat{α}(\hat{λ}_{1ML}) and \hat{β}_{ML} = \hat{β}(\hat{λ}_{2ML}). Therefore the MLE of R is computed to be

\hat{R}_{ML} = K(\hat{α}_{ML}, \hat{β}_{ML}, \hat{λ}_{1ML}, \hat{λ}_{2ML}).   (4.9)
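In this general case the two equations in (4.8) decouple: the first involves only λ_1 and the x-sample, the second only λ_2 and the y-sample, so each can be solved by a one-dimensional root finder. A sketch (ours; the bracketing interval must straddle the root) is:

```python
import numpy as np
from scipy.optimize import brentq

def profile_mle_one_sample(data, lam_lo=1e-3, lam_hi=50.0):
    """Solve one of the profile equations (4.8); returns (shape_hat, lam_hat)."""
    def score(lam):
        s = np.sum(np.log1p(np.exp(-lam * data)))            # S(data, lam)
        shape = len(data) / s                                 # Eq. (4.7)
        w = data * np.exp(-lam * data) / (1 + np.exp(-lam * data))
        return len(data) / lam - data.sum() + (shape + 1) * np.sum(w)
    lam_hat = brentq(score, lam_lo, lam_hi)                   # root of (4.8)
    shape_hat = len(data) / np.sum(np.log1p(np.exp(-lam_hat * data)))
    return shape_hat, lam_hat
```

Applying this separately to the x- and y-samples yields (\hat{α}_{ML}, \hat{λ}_{1ML}) and (\hat{β}_{ML}, \hat{λ}_{2ML}); \hat{R}_{ML} then follows from (4.9) by numerical integration of K, analogous to the sketch for H in Section 3.1.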

4.2. Asymptotic distribution

The asymptotic distribution of \hat{R} is obtained by deriving the asymptotic distribution of \hat{θ} = (\hat{α}, \hat{β}, \hat{λ}_1, \hat{λ}_2). The details of the derivations are presented in Appendix C. Now, we describe the main results in Theorems 5 and 6.

Theorem 5. Let n → ∞ and m → ∞ such that n/m → p, where p is some positive real constant. Then

(\sqrt{n}(\hat{α} − α), \sqrt{m}(\hat{β} − β), \sqrt{n}(\hat{λ}_1 − λ_1), \sqrt{m}(\hat{λ}_2 − λ_2)) \xrightarrow{d} N_4(0, D^{-1}(α, β, λ_1, λ_2)),

where

D(α, β, λ_1, λ_2) = \begin{pmatrix} d_{11} & 0 & d_{13} & 0 \\ 0 & d_{22} & 0 & d_{24} \\ d_{31} & 0 & d_{33} & 0 \\ 0 & d_{42} & 0 & d_{44} \end{pmatrix},

with

d_{11} = \lim_{n,m→∞} \frac{J_{11}}{n} = \frac{1}{α^2},  d_{13} = d_{31} = \lim_{n,m→∞} \frac{J_{13}}{n} = −\frac{Ψ(α) + γ − 1}{λ_1(α + 1)},

d_{22} = \lim_{n,m→∞} \frac{J_{22}}{m} = \frac{1}{β^2},  d_{24} = d_{42} = \lim_{n,m→∞} \frac{J_{24}}{m} = −\frac{Ψ(β) + γ − 1}{λ_2(β + 1)},

d_{33} = \lim_{n,m→∞} \frac{J_{33}}{n} = \frac{1}{λ_1^2} + \frac{1}{λ_1^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

d_{44} = \lim_{n,m→∞} \frac{J_{44}}{m} = \frac{1}{λ_2^2} + \frac{1}{λ_2^2(β + 2)}\Big\{β[Ψ′(β) + Ψ^2(β)] + 2[(β(γ − 1) + 1)Ψ(β) − (γ(β − 1) + 1)] + β\Big(γ^2 + \frac{π^2}{6}\Big)\Big\}.

Theorem 6. As n → ∞, m → ∞ and n/m → p, then

\sqrt{n}(\hat{R} − R) \xrightarrow{d} N(0, B_D),

where B_D = b_D^t D^{-1} b_D, with

b_D = \begin{pmatrix} k_1(α, β, λ_1, λ_2) \\ −α k_2(α, β, λ_1, λ_2) \\ \frac{αβλ_2}{λ_1^2} k_3(α, β, λ_1, λ_2) \\ −\frac{αβ}{λ_1} k_3(α, β, λ_1, λ_2) \end{pmatrix},

D^{-1} = \begin{pmatrix}
\frac{d_{33}}{d_{11}d_{33} − d_{13}d_{31}} & 0 & −\frac{d_{13}}{d_{11}d_{33} − d_{13}d_{31}} & 0 \\
0 & \frac{d_{44}}{d_{22}d_{44} − d_{24}d_{42}} & 0 & −\frac{d_{24}}{d_{22}d_{44} − d_{24}d_{42}} \\
−\frac{d_{31}}{d_{11}d_{33} − d_{13}d_{31}} & 0 & \frac{d_{11}}{d_{11}d_{33} − d_{13}d_{31}} & 0 \\
0 & −\frac{d_{42}}{d_{22}d_{44} − d_{24}d_{42}} & 0 & \frac{d_{22}}{d_{22}d_{44} − d_{24}d_{42}}
\end{pmatrix},

and

k_1(α, β, λ_1, λ_2) = \int_0^∞ [1 − α \ln(1 + t)](1 + t)^{−(α+1)}\big(1 + t^{λ_2/λ_1}\big)^{−β}\, dt,

k_2(α, β, λ_1, λ_2) = \int_0^∞ \ln\big(1 + t^{λ_2/λ_1}\big)(1 + t)^{−(α+1)}\big(1 + t^{λ_2/λ_1}\big)^{−β}\, dt,

k_3(α, β, λ_1, λ_2) = \int_0^∞ \ln(t)\, t^{λ_2/λ_1}(1 + t)^{−(α+1)}\big(1 + t^{λ_2/λ_1}\big)^{−(β+1)}\, dt.

Using Theorem 6, we obtain the 100(1 − γ)% asymptotic confidence interval for R as

\Big(\hat{R} − z_{1−γ/2}\sqrt{\frac{\hat{B}_D}{n}}, \; \hat{R} + z_{1−γ/2}\sqrt{\frac{\hat{B}_D}{n}}\Big).   (4.10)

Here \hat{B}_D is the estimate of B_D, which is obtained by replacing α, β, λ_1 and λ_2 involved in B_D by their corresponding MLEs. We can also find the confidence intervals of R based on the percentile bootstrap and bootstrap-t methods.

4.3. Bayes estimation of R

In this section, we obtain the Bayes estimation of R under the assumption that the shape parameters α and β and the different scale parameters λ_1 and λ_2 are random variables. It is assumed that α, β, λ_1 and λ_2 have density functions Gamma(a_1, b_1), Gamma(a_2, b_2), Gamma(a_3, b_3) and Gamma(a_4, b_4), respectively. Moreover, it is assumed that all parameters are independent. Based on the above assumptions, we obtain the likelihood function of the observed data as

L(data | α, β, λ_1, λ_2) = (αλ_1)^n (βλ_2)^m \exp\Big\{−λ_1\sum_{i=1}^{n} x_i − λ_2\sum_{j=1}^{m} y_j − (α + 1)S_1(x, λ_1) − (β + 1)S_2(y, λ_2)\Big\}.

Therefore, the joint density of the data, α, β, λ_1 and λ_2 can be obtained as

L(data, α, β, λ_1, λ_2) = L(data | α, β, λ_1, λ_2) π(α) π(β) π(λ_1) π(λ_2),   (4.11)

where π(·) denotes the corresponding prior distribution. Therefore the joint posterior density of α, β, λ_1 and λ_2 given the data is

L(α, β, λ_1, λ_2 | data) = \frac{L(data, α, β, λ_1, λ_2)}{\int_0^∞\int_0^∞\int_0^∞\int_0^∞ L(data, α, β, λ_1, λ_2)\, dα\, dβ\, dλ_1\, dλ_2}.   (4.12)

Since the joint posterior density of α, β, λ_1 and λ_2 given the data cannot be written in a closed form, we again adopt the Gibbs sampling technique to compute the Bayes estimate of R and the corresponding credible interval of R. The posterior pdfs of α, β, λ_1 and λ_2 have the following distributional forms:

α | β, λ_1, λ_2, data ∼ Gamma(n + a_1, b_1 + S_1(x, λ_1)),
β | α, λ_1, λ_2, data ∼ Gamma(m + a_2, b_2 + S_2(y, λ_2)),
f(λ_1 | α, β, λ_2, data) ∝ λ_1^{n+a_3−1} \exp\Big\{−λ_1\Big(b_3 + \sum_{i=1}^{n} x_i\Big) − (α + 1)S_1(x, λ_1)\Big\},
f(λ_2 | α, β, λ_1, data) ∝ λ_2^{m+a_4−1} \exp\Big\{−λ_2\Big(b_4 + \sum_{j=1}^{m} y_j\Big) − (β + 1)S_2(y, λ_2)\Big\}.

We observe that the posterior pdfs of λ1 and λ2 are not well-known and therefore random numbers
from these distributions can be generated by using the Metropolis method with normal proposal
distribution. The simulation process can be summarized in the following steps.
1. Start with an initial guess (α^{(0)}, β^{(0)}, λ_1^{(0)}, λ_2^{(0)}).
2. Set t = 1.
3. Generate α^{(t)} from Gamma(n + a_1, b_1 + S_1(x, λ_1^{(t−1)})).
4. Generate β^{(t)} from Gamma(m + a_2, b_2 + S_2(y, λ_2^{(t−1)})).
5. Using the Metropolis method, generate λ_1^{(t)} from f(λ_1 | α^{(t)}, β^{(t)}, λ_2^{(t−1)}, data) with the N(λ_1^{(t−1)}, 0.25) proposal distribution.
6. Using the Metropolis method, generate λ_2^{(t)} from f(λ_2 | α^{(t)}, β^{(t)}, λ_1^{(t)}, data) with the N(λ_2^{(t−1)}, 0.20) proposal distribution.
7. Compute R^{(t)} from (4.1).
8. Set t = t + 1.
9. Repeat Steps 3–8, T times.
Now, the Bayes estimator and the HPD credible interval of R can be obtained similarly to those given in Sections 2 and 3.

Table 1
Ln times to breakdown for insulating fluid.

32 kV: −1.3094 −0.9163 −0.3711 −0.2358 1.0116 1.3635 2.2905 2.6354 2.7682 3.3250 3.9748 4.4170 4.4918 4.6109 5.3711
34 kV: −1.6608 −0.2485 −0.0409 0.2700 1.0224 1.1505 1.4231 1.5411 1.5789 1.8718 1.9947 2.0806 2.1126 2.4898 3.4578 3.4818 3.5237 3.6030 4.2889
36 kV: −1.0499 −0.5277 −0.0409 −0.0101 0.5247 0.6780 0.7275 0.9477 0.9969 1.0647 1.3001 1.3837 1.6770 2.6224 3.2386

Table 2
Shape parameter, scale parameter, K–S distance, and p-values of the fitted generalized logistic models.

Data set (kV)   Shape parameter   Scale parameter   K–S      p-value
32              2.6797            0.5918            0.1408   0.9055
34              3.0672            0.8260            0.1235   0.9164
36              2.3542            1.3242            0.1188   0.9764

5. Data analysis and numerical simulations

In this section, we analyze a real data set and conduct a Monte Carlo simulation study for illustrative
and comparative purposes.

5.1. Real data analysis

Here, we analyze Ln times to breakdown of an insulating fluid in an accelerated test reported by Nelson [23]. Ln times to breakdown were reported at voltages of 26, 28, 30, 32, 34, 36 and 38 kV. Here we consider the Ln times to breakdown at 32, 34 and 36 kV. The data are presented in Table 1. We fit the generalized logistic distribution to the three data sets separately. We present the estimated shape and scale parameters, the Kolmogorov–Smirnov (K–S) distances between the fitted and empirical distribution functions, and the corresponding p-values in Table 2.
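The computations in this paper were done in Maple 16; for readers who prefer Python, essentially the same fit can be obtained with SciPy's genlogistic distribution, which is the Type I generalized logistic with shape c = α, loc = 0 and scale = 1/λ. A sketch for the 32 kV sample follows (estimates may differ slightly from Table 2 because of the optimizer used):

```python
import numpy as np
from scipy import stats

# Ln times to breakdown at 32 kV (first row of Table 1)
kv32 = np.array([-1.3094, -0.9163, -0.3711, -0.2358, 1.0116, 1.3635, 2.2905, 2.6354,
                 2.7682, 3.3250, 3.9748, 4.4170, 4.4918, 4.6109, 5.3711])

c, loc, scale = stats.genlogistic.fit(kv32, floc=0)     # shape c = alpha, scale = 1/lambda
alpha_hat, lam_hat = c, 1.0 / scale
ks = stats.kstest(kv32, "genlogistic", args=(c, loc, scale))
print(alpha_hat, lam_hat, ks.statistic, ks.pvalue)
```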
From Table 2, it is clear that the generalized logistic model fits quite well to the three data sets. By
using different methods discussed in the preceding sections, different point and interval estimations
are obtained. For this purpose, we consider the following two cases.
• Case 1 (Different scales and same shape parameters)
In this case, we consider 32 and 36 kV data sets. Since the two estimated shape parameters are
not very different, it is natural to assume the two shape parameters are equal. The estimates of the
parameters and K–S distances are reported in Table 3.

Table 3
Shape parameter, scale parameter, K–S distance, and p-values assuming that the two shape parameters are equal.

Data set (kV)   Shape parameter   Scale parameter   K–S      p-value
32              2.5051            0.5872            0.1507   0.8036
36              2.5051            1.3305            0.1087   0.9910

From Table 3, it is clear that we cannot reject the null hypothesis that the two shape parameters are
equal. Therefore, the assumption that the two shape parameters are equal is justified for these data
sets (32 and 36 kV). The K–S values and the corresponding p-values indicate that the generalized
logistic models with equal shape parameters fit reasonably well to both data sets. Therefore,
according to the results given in Section 3, we can obtain different estimates of R. The MLE and
Bayes estimates of R are 0.666 and 0.701, respectively. Since we do not have any prior information, we would take a1 = a2 = a3 = b1 = b2 = b3 = 0, but the prior distributions then become improper. As suggested by Congdon [10] and Kundu and Gupta [20], we instead choose very small positive values of the hyper-parameters, i.e. a1 = a2 = a3 = b1 = b2 = b3 = 0.0001, which are almost like Jeffreys priors but are proper. The 95% confidence intervals using the asymptotic distributions of the MLEs (untransformed, logit and arcsin) and the HPD credible interval of R become (0.418, 0.912), (0.396, 0.858), (0.408, 0.879) and (0.384, 0.924), respectively. Further, the 95% Boot-p
and Boot-t confidence intervals are computed to be (0.526, 0.747) and (0.515, 0.701), respectively.
• Case 2 (General case)
In this case, we consider 32 and 34 kV data sets. Therefore, according to Section 4, we can obtain
different estimates of R. The MLE and Bayes estimates of R are 0.552 and 0.575, respectively. The
95% confidence intervals using asymptotic distributions of MLEs (untransformed, logit and arcsin)
and HPD credible interval of R become (0.351, 0.753), (0.353, 0.736), (0.353, 0.744) and (0.299,
0.791) respectively. The 95% Boot-p and Boot-t confidence intervals are also computed to be (0.388,
0.647) and (0.378, 0.701), respectively.

5.2. Numerical simulations

Since the performance of the different methods cannot be compared theoretically, we present here
some simulation results to compare the performances of the different methods proposed in the previ-
ous sections. We compare the MLEs and Bayes estimators in terms of their biases and mean squared er-
rors (MSEs). We also compare different confidence intervals, namely the confidence intervals obtained
by using asymptotic distribution of the MLEs, bootstrap confidence intervals and the HPD credible in-
tervals in terms of the average confidence lengths and coverage percentages. The Bayes estimators
are computed under the squared error loss function.
We use different parameter values, different hyper-parameters and different sample sizes. For
computing the Bayes estimators and HPD credible intervals, we assume two priors as follows:
Prior 1: aj = 0.0001, bj = 0.0001, j = 1, 2, 3,
Prior 2: aj = 1, bj = 3, j = 1, 2, 3.
Clearly, Prior 2 is more informative than Prior 1.
We report the average biases and MSEs of the MLEs and Bayes estimators over 1000 replications. The results for the different cases are reported in Table 4. The average confidence/credible lengths and the corresponding coverage percentages are reported in Table 5. All the computations are performed using the mathematical software Maple 16. In the first case (different shape and same scale parameters), we obtain the MLE of λ by solving Eq. (2.6); we use 1 as the initial estimate of λ, and the iterative process stops when the difference between two consecutive iterates is less than 10^{-5}. One can also use Maple 16 to solve Eq. (2.6) directly; if the function fsolve is used, a reasonable interval may be considered as the starting value. In Sections 3 and 4, the MLEs were obtained as solutions of the likelihood equations using the function fsolve from Maple 16. Note also that, for both bootstrap methods, we compute the confidence intervals based on 1000 bootstrap iterations.

Table 4
Biases and MSEs of the MLE and Bayes estimators of R.
(n, m)          MLE     BS (Prior 1)   BS (Prior 2)   MLE     BS (Prior 1)   BS (Prior 2)

Different shapes and same scale. Left block: α = 1.5, β = 2, λ = 1; right block: α = 2, β = 1.5, λ = 1.
(15, 15) Bias −0.012 0.126 0.073 −0.007 0.059 0.022
MSE 0.007 0.045 0.024 0.008 0.051 0.035
(15, 25) Bias −0.001 0.064 0.088 −0.007 0.378 0.326
MSE 0.007 0.028 0.020 0.007 0.046 0.029
(15, 50) Bias −0.002 0.302 0.288 −0.007 0.505 0.490
MSE 0.003 0.023 0.015 0.004 0.035 0.021
(25, 15) Bias −0.026 0.321 0.231 −0.005 0.037 0.039
MSE 0.007 0.040 0.020 0.007 0.045 0.030
(25, 25) Bias −0.010 0.107 0.060 −0.006 0.045 0.013
MSE 0.006 0.023 0.017 0.007 0.037 0.022
(25, 50) Bias −0.020 0.164 0.161 −0.005 0.439 0.433
MSE 0.002 0.018 0.013 0.003 0.024 0.019
(50, 15) Bias −0.008 0.427 0.420 −0.006 0.200 0.190
MSE 0.005 0.033 0.017 0.006 0.035 0.022
(50, 25) Bias −0.005 0.051 0.045 −0.010 0.032 0.027
MSE 0.002 0.019 0.013 0.004 0.022 0.020
(50, 50) Bias −0.008 0.088 0.038 −0.003 0.029 0.013
MSE 0.002 0.015 0.009 0.003 0.016 0.013
Same shape and different scales. Left block: α = 2, λ1 = 0.5, λ2 = 1.5; right block: α = 2, λ1 = 1.5, λ2 = 0.5.
(15, 15) Bias −0.010 0.173 0.092 −0.014 0.146 0.082
MSE 0.004 0.004 0.003 0.005 0.004 0.004
(15, 25) Bias −0.013 0.164 0.084 −0.018 0.124 0.064
MSE 0.004 0.003 0.002 0.005 0.003 0.003
(15, 50) Bias −0.009 0.134 0.067 −0.012 0.127 0.060
MSE 0.003 0.003 0.001 0.004 0.003 0.002
(25, 15) Bias −0.008 0.094 0.055 −0.011 0.116 0.076
MSE 0.003 0.002 0.002 0.003 0.003 0.002
(25, 25) Bias −0.007 0.106 0.071 −0.010 0.123 0.092
MSE 0.002 0.002 0.001 0.003 0.003 0.002
(25, 50) Bias −0.016 0.112 0.061 −0.013 0.094 0.059
MSE 0.002 0.001 0.001 0.003 0.002 0.002
(50, 15) Bias −0.007 0.069 0.037 −0.009 0.067 0.051
MSE 0.002 0.001 0.001 0.002 0.002 0.001
(50, 25) Bias −0.005 0.043 0.027 −0.008 0.043 0.026
MSE 0.002 0.001 0.000 0.002 0.001 0.001
(50, 50) Bias −0.003 0.032 0.023 −0.005 0.026 0.017
MSE 0.001 0.001 0.000 0.002 0.001 0.000
General case. Left block: α = 1.5, β = 2, λ1 = 0.5, λ2 = 1.5; right block: α = 2, β = 1.5, λ1 = 1.5, λ2 = 0.5.
(15, 15) Bias −0.017 0.087 0.064 −0.021 0.081 0.053
MSE 0.012 0.021 0.018 0.010 0.019 0.015
(15, 25) Bias −0.076 0.102 0.094 −0.068 0.091 0.086
MSE 0.017 0.032 0.027 0.016 0.028 0.024
(15, 50) Bias −0.060 0.095 0.088 −0.080 0.107 0.103
MSE 0.013 0.019 0.018 0.014 0.024 0.019
(25, 15) Bias −0.041 0.113 0.106 −0.039 0.107 0.094
MSE 0.009 0.017 0.014 0.009 0.016 0.012
(25, 25) Bias −0.073 0.109 0.090 −0.076 0.127 0.110
MSE 0.011 0.020 0.018 0.010 0.019 0.016
(25, 50) Bias −0.057 0.086 0.081 −0.070 0.099 0.080
MSE 0.009 0.018 0.016 0.010 0.018 0.015
(50, 15) Bias −0.067 0.082 0.072 −0.031 0.061 0.056
MSE 0.006 0.013 0.015 0.005 0.010 0.008
(50, 25) Bias −0.073 0.036 0.025 −0.062 0.034 0.020
MSE 0.010 0.017 0.016 0.008 0.014 0.012
(50, 50) Bias −0.074 0.062 0.051 −0.063 0.083 0.042
MSE 0.008 0.014 0.015 0.006 0.011 0.009

Table 5
Average confidence/credible length and coverage percentage.
(n, m)          MLE (Untransformed)   MLE (Logit)   MLE (Arcsin)   Boot-t   Boot-p   Bayes (Prior 1)   Bayes (Prior 2)

Different shapes and same scale; α = 1.5, β = 2, λ = 1:
(15, 15) Len. 0.343 0.331 0.336 0.310 0.317 0.380 0.364
CP 0.931 0.934 0.932 0.937 0.935 0.927 0.930
(15, 25) Len. 0.338 0.325 0.331 0.303 0.306 0.368 0.353
CP 0.928 0.932 0.931 0.934 0.933 0.924 0.928
(15, 50) Len. 0.336 0.319 0.328 0.303 0.305 0.354 0.341
CP 0.932 0.935 0.934 0.940 0.939 0.929 0.931
(25, 15) Len. 0.271 0.265 0.267 0.258 0.271 0.321 0.310
CP 0.932 0.934 0.933 0.939 0.937 0.930 0.932
(25, 25) Len. 0.266 0.262 0.263 0.252 0.264 0.314 0.306
CP 0.935 0.937 0.937 0.941 0.940 0.933 0.935
(25, 50) Len. 0.263 0.258 0.260 0.250 0.257 0.303 0.292
CP 0.937 0.941 0.940 0.943 0.942 0.934 0.936
(50, 15) Len. 0.191 0.189 0.190 0.196 0.202 0.225 0.219
CP 0.940 0.942 0.941 0.935 0.935 0.927 0.928
(50, 25) Len. 0.189 0.187 0.188 0.191 0.198 0.223 0.211
CP 0.941 0.942 0.941 0.938 0.937 0.930 0.933
(50, 50) Len. 0.182 0.177 0.179 0.188 0.190 0.219 0.206
CP 0.937 0.940 0.939 0.932 0.930 0.924 0.925

Different shapes and same scale; α = 2, β = 1.5, λ = 1:
(15, 15) Len. 0.348 0.345 0.346 0.321 0.333 0.384 0.378
CP 0.934 0.937 0.936 0.939 0.939 0.931 0.934
(15, 25) Len. 0.347 0.340 0.341 0.316 0.325 0.373 0.367
CP 0.937 0.940 0.939 0.943 0.940 0.935 0.937
(15, 50) Len. 0.342 0.337 0.339 0.307 0.313 0.360 0.354
CP 0.940 0.942 0.941 0.945 0.943 0.937 0.939
(25, 15) Len. 0.274 0.270 0.271 0.253 0.266 0.301 0.295
CP 0.928 0.932 0.932 0.935 0.934 0.925 0.927
(25, 25) Len. 0.270 0.263 0.266 0.250 0.259 0.292 0.285
CP 0.932 0.935 0.934 0.939 0.938 0.930 0.932
(25, 50) Len. 0.268 0.262 0.265 0.247 0.255 0.282 0.274
CP 0.935 0.938 0.937 0.941 0.940 0.933 0.935
(50, 15) Len. 0.194 0.192 0.193 0.217 0.220 0.231 0.226
CP 0.926 0.929 0.926 0.926 0.927 0.922 0.923
(50, 25) Len. 0.189 0.187 0.188 0.210 0.215 0.223 0.219
CP 0.929 0.931 0.931 0.929 0.929 0.925 0.926
(50, 50) Len. 0.188 0.186 0.187 0.205 0.211 0.219 0.213
CP 0.932 0.935 0.933 0.930 0.928 0.927 0.929


The Bayes estimates and the corresponding credible intervals are based on T = 1000 samples. The
nominal level for the confidence intervals or the credible intervals is 0.95 in each case.
Let us first consider the case where we have different shape and same scale parameters. From
Table 4, we observe that the MLE compares very well with the Bayes estimators in terms of biases
and MSEs. Interestingly, in all of the cases considered, it provides the smallest biases and MSEs.
Comparing the two Bayes estimators based on Priors 1 and 2, as expected, we observe that the Bayes estimators based on Prior 2 perform better than those based on Prior 1, in terms of both biases and MSEs. From Table 5, comparing the different confidence intervals, we observe that the confidence intervals based on the asymptotic distributions of the MLEs work quite well. For all sample sizes (n, m), except when n and m are small, the MLEs provide the smallest average lengths. Bootstrap methods also work well; it is observed that Boot-t confidence intervals perform better than the Boot-p confidence intervals. The performances of the HPD credible intervals are not satisfactory: they are wider than the other confidence intervals. From Table 5, it is also evident that Boot-t confidence intervals provide the highest coverage probabilities. It is also observed that when n and m increase, the biases, MSEs and all interval lengths decrease.
Let us now consider the case where different scale and same shape parameters are assumed. From Tables 4 and 5, several points are clear. The performances of the Bayes

Table 5 (continued)

(n, m)          MLE (Untransformed)   MLE (Logit)   MLE (Arcsin)   Boot-t   Boot-p   Bayes (Prior 1)   Bayes (Prior 2)

Different scales and same shapes; α = 2, λ1 = 0.5, λ2 = 1.5:
(15, 15) Len. 0.154 0.152 0.153 0.107 0.119 0.137 0.129
CP 0.932 0.935 0.932 0.938 0.936 0.935 0.936
(15, 25) Len. 0.142 0.138 0.139 0.105 0.110 0.125 0.118
CP 0.934 0.936 0.935 0.941 0.940 0.936 0.938
(15, 50) Len. 0.116 0.110 0.113 0.092 0.099 0.108 0.103
CP 0.934 0.937 0.935 0.943 0.942 0.938 0.938
(25, 15) Len. 0.112 0.107 0.109 0.087 0.096 0.101 0.098
CP 0.935 0.938 0.936 0.946 0.942 0.940 0.942
(25, 25) Len. 0.107 0.102 0.103 0.081 0.093 0.096 0.090
CP 0.937 0.939 0.939 0.946 0.943 0.941 0.943
(25, 50) Len. 0.098 0.094 0.095 0.078 0.086 0.095 0.088
CP 0.937 0.939 0.938 0.947 0.944 0.940 0.943
(50, 15) Len. 0.095 0.092 0.092 0.097 0.096 0.093 0.087
CP 0.937 0.940 0.940 0.936 0.934 0.941 0.943
(50, 25) Len. 0.089 0.087 0.087 0.092 0.095 0.089 0.084
CP 0.939 0.940 0.940 0.936 0.935 0.942 0.943
(50, 50) Len. 0.084 0.082 0.083 0.091 0.092 0.082 0.078
CP 0.941 0.943 0.942 0.937 0.936 0.944 0.946

Different scales and same shapes; α = 2, λ1 = 1.5, λ2 = 0.5:
(15, 15) Len. 0.164 0.163 0.164 0.143 0.149 0.160 0.154
CP 0.924 0.926 0.926 0.930 0.929 0.926 0.929
(15, 25) Len. 0.150 0.148 0.148 0.138 0.145 0.147 0.142
CP 0.925 0.927 0.926 0.932 0.930 0.926 0.930
(15, 50) Len. 0.137 0.134 0.136 0.125 0.129 0.134 0.130
CP 0.926 0.928 0.928 0.933 0.930 0.927 0.930
(25, 15) Len. 0.125 0.124 0.124 0.118 0.122 0.124 0.122
CP 0.927 0.930 0.930 0.935 0.933 0.929 0.931
(25, 25) Len. 0.119 0.118 0.118 0.113 0.116 0.116 0.114
CP 0.930 0.932 0.930 0.935 0.934 0.931 0.932
(25, 50) Len. 0.107 0.105 0.106 0.102 0.105 0.105 0.103
CP 0.931 0.933 0.931 0.937 0.935 0.933 0.933
(50, 15) Len. 0.103 0.100 0.101 0.106 0.110 0.098 0.096
CP 0.931 0.934 0.933 0.929 0.928 0.934 0.935
(50, 25) Len. 0.096 0.093 0.094 0.103 0.106 0.090 0.087
CP 0.934 0.934 0.934 0.931 0.930 0.938 0.940
(50, 50) Len. 0.090 0.087 0.088 0.100 0.102 0.085 0.081
CP 0.937 0.938 0.938 0.932 0.932 0.941 0.941


estimators are quite satisfactory in terms of MSEs when compared to the MLEs. The Bayes credible
intervals perform well when compared to the approximate CIs based on MLEs. The bootstrap CIs work
quite well, especially when n and m are small. We also note that Bootstrap confidence intervals provide
the highest coverage probabilities in most of the cases considered.
For the general case, it is observed that the MLEs perform better than the Bayes estimators in terms
of biases and MSEs. The HPD credible intervals do not perform well when compared to other CIs,
except when n and m are very large. In most of the cases considered, the bootstrap CIs provide the
smallest average lengths. As expected, the performances of the approximate CIs based on MLEs are
satisfactory for large sample sizes.
From Table 5, in all of the cases considered, we note that the approximate CIs based on the MLEs can be improved by using the logit and arcsin transformations. It is also noted that the HPD credible intervals based on Prior 2 perform better than those based on Prior 1, in terms of the average credible lengths.

Acknowledgments

The authors are grateful to two referees for their insightful comments and suggestions that led to
substantive improvements in the article.

Table 5 (continued)

(n, m)          MLE (Untransformed)   MLE (Logit)   MLE (Arcsin)   Boot-t   Boot-p   Bayes (Prior 1)   Bayes (Prior 2)

General case; α = 1.5, β = 2, λ1 = 0.5, λ2 = 1.5:
(15, 15) Len. 0.386 0.369 0.376 0.319 0.328 0.401 0.393
CP 0.918 0.925 0.923 0.934 0.928 0.911 0.913
(15, 25) Len. 0.390 0.372 0.380 0.333 0.344 0.406 0.400
CP 0.914 0.921 0.919 0.931 0.926 0.904 0.908
(15, 50) Len. 0.387 0.371 0.378 0.322 0.335 0.404 0.396
CP 0.914 0.922 0.920 0.933 0.926 0.907 0.910
(25, 15) Len. 0.303 0.294 0.298 0.287 0.291 0.317 0.313
CP 0.926 0.932 0.929 0.936 0.932 0.919 0.924
(25, 25) Len. 0.305 0.295 0.299 0.293 0.295 0.330 0.323
CP 0.925 0.930 0.928 0.934 0.932 0.917 0.924
(25, 50) Len. 0.302 0.292 0.297 0.281 0.287 0.312 0.304
CP 0.930 0.934 0.933 0.939 0.937 0.922 0.928
(50, 15) Len. 0.214 0.212 0.213 0.226 0.230 0.225 0.219
CP 0.934 0.942 0.940 0.933 0.929 0.930 0.932
(50, 25) Len. 0.216 0.213 0.215 0.223 0.227 0.222 0.218
CP 0.931 0.937 0.935 0.926 0.921 0.926 0.927
(50, 50) Len. 0.206 0.203 0.205 0.218 0.222 0.217 0.211
CP 0.939 0.944 0.942 0.932 0.931 0.937 0.939

General case; α = 2, β = 1.5, λ1 = 1.5, λ2 = 0.5:
(15, 15) Len. 0.373 0.357 0.364 0.318 0.329 0.395 0.384
CP 0.924 0.929 0.927 0.934 0.931 0.920 0.921
(15, 25) Len. 0.377 0.360 0.368 0.226 0.236 0.406 0.397
CP 0.920 0.927 0.924 0.931 0.930 0.916 0.918
(15, 50) Len. 0.375 0.358 0.366 0.320 0.333 0.401 0.387
CP 0.923 0.927 0.926 0.932 0.931 0.918 0.921
(25, 15) Len. 0.270 0.266 0.265 0.258 0.267 0.281 0.276
CP 0.937 0.941 0.940 0.943 0.942 0.934 0.935
(25, 25) Len. 0.290 0.282 0.286 0.275 0.281 0.297 0.291
CP 0.930 0.937 0.934 0.939 0.938 0.926 0.929
(25, 50) Len. 0.273 0.269 0.270 0.263 0.270 0.280 0.276
CP 0.935 0.940 0.937 0.940 0.939 0.928 0.933
(50, 15) Len. 0.194 0.192 0.193 0.200 0.208 0.204 0.199
CP 0.940 0.943 0.942 0.937 0.935 0.939 0.940
(50, 25) Len. 0.195 0.193 0.194 0.206 0.213 0.206 0.200
CP 0.939 0.943 0.941 0.935 0.933 0.937 0.939
(50, 50) Len. 0.193 0.192 0.192 0.204 0.210 0.202 0.197
CP 0.943 0.945 0.945 0.937 0.935 0.940 0.942

Appendix A

 Letus denote the Fisher information matrix of θ = (α, β, λ) as J(θ) = E (I(θ )), where I(θ ) =
Iij (θ) for i, j = 1, 2, 3. Therefore,
 ∂ 2l ∂ 2l ∂ 2l 
 ∂α 2 ∂α∂β ∂α∂λ  
I11 I12 I13

 ∂ l ∂ 2l ∂ 2l 
 2 
I(θ) = − 
 ∂β∂α
 = I21 I22 I23 .
 ∂β 2 ∂β∂λ  I31 I32 I33
∂ l ∂ 2l ∂ 2l
 2 
∂λ∂α ∂λ∂β ∂λ2
It is easy to see that
n
n  xi e−λxi
I11 = , I13 = I31 = −
α 2
i =1
1 + e−λxi
m
m  yj e−λyj
I22 = , I23 = I32 =−
β2 j=1
1 + e−λyj
Author's personal copy

92 A. Asgharzadeh et al. / Statistical Methodology 15 (2013) 73–94

n+m
n
 x2i e−λxi
m
 y2j e−λyj
I33 = + (α + 1) + (β + 1) ,
λ2 i=1
(1 + e−λxi )2 j =1
(1 + e−λyj )2
I12 = I21 = 0.
By using the integrals of the forms (see [12])

Ψ (α) + γ

ln(t )(1 + t )−α−1 dt = −
α
0 ∞
Ψ (α) + γ − 1
t ln(t )(1 + t )−α−2 dt = −
0 α(α + 1)
Ψ ′ (α) + Ψ 2 (α) + γ 2 + π6
 ∞ 2

t [ln(t )] (1 + t )
2 −α−3
dt =
0 (α + 1)(α + 2)
[α(γ − 1) + 1]Ψ (α) − [γ (α − 1) + 1]
+2 ,
α(α + 1)(α + 2)
we obtain the elements of the Fisher information matrix as
n n
J11 = 2 , J13 = J31 = − [Ψ (α) + γ − 1]
α λ(α + 1)
m m
J22 = 2 , J23 = J32 = − [Ψ (β) + γ − 1]
β λ(β + 1)

n+m n
J33 = + α[Ψ ′ (α) + Ψ 2 (α)] + 2[(α(γ − 1) + 1)Ψ (α)
λ2 λ2 (α + 2)
π2
  
m
− (γ (α − 1) + 1)] + α γ +
2
+ 2 β[Ψ ′ (β) + Ψ 2 (β)]
6 λ (β + 2)
π2

+ 2[(β(γ − 1) + 1)Ψ (β) − (γ (β − 1) + 1)] + β γ + 2
,
6
J12 = J21 = 0,
where Ψ (t ) = d
dt
ln(Γ (t )), Ψ ′ (t ) = d
dt
Ψ (t ) and γ = −Ψ (1) = 0.5772.

Appendix B

The Fisher information matrix of θ = (α, λ_1, λ_2) is J(θ) = E(I(θ)), where I(θ) = [I_{ij}(θ)], i, j = 1, 2, 3, is the observed information matrix defined by

I(θ) = −\begin{pmatrix} \frac{∂^2 l}{∂α^2} & \frac{∂^2 l}{∂α∂λ_1} & \frac{∂^2 l}{∂α∂λ_2} \\ \frac{∂^2 l}{∂λ_1∂α} & \frac{∂^2 l}{∂λ_1^2} & \frac{∂^2 l}{∂λ_1∂λ_2} \\ \frac{∂^2 l}{∂λ_2∂α} & \frac{∂^2 l}{∂λ_2∂λ_1} & \frac{∂^2 l}{∂λ_2^2} \end{pmatrix} = \begin{pmatrix} I_{11} & I_{12} & I_{13} \\ I_{21} & I_{22} & I_{23} \\ I_{31} & I_{32} & I_{33} \end{pmatrix}.

It is easy to show that

I_{11} = \frac{n + m}{α^2},  I_{12} = I_{21} = −\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}},

I_{22} = \frac{n}{λ_1^2} + (α + 1)\sum_{i=1}^{n} \frac{x_i^2 e^{-λ_1 x_i}}{(1 + e^{-λ_1 x_i})^2},  I_{33} = \frac{m}{λ_2^2} + (α + 1)\sum_{j=1}^{m} \frac{y_j^2 e^{-λ_2 y_j}}{(1 + e^{-λ_2 y_j})^2},

I_{13} = I_{31} = −\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}},  I_{23} = I_{32} = 0.

The elements of the Fisher information matrix are derived as follows:

J_{11} = \frac{n + m}{α^2},  J_{12} = J_{21} = −\frac{n}{λ_1(α + 1)}[Ψ(α) + γ − 1],

J_{22} = \frac{n}{λ_1^2} + \frac{n}{λ_1^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

J_{33} = \frac{m}{λ_2^2} + \frac{m}{λ_2^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

J_{13} = J_{31} = −\frac{m}{λ_2(α + 1)}[Ψ(α) + γ − 1],  J_{23} = J_{32} = 0.

Appendix C

The Fisher information matrix of θ = (α, β, λ_1, λ_2) is J(θ) = E(I(θ)), where I(θ) = [I_{ij}(θ)], i, j = 1, 2, 3, 4, is the observed information matrix defined by

I(θ) = −\begin{pmatrix} \frac{∂^2 l}{∂α^2} & \frac{∂^2 l}{∂α∂β} & \frac{∂^2 l}{∂α∂λ_1} & \frac{∂^2 l}{∂α∂λ_2} \\ \frac{∂^2 l}{∂β∂α} & \frac{∂^2 l}{∂β^2} & \frac{∂^2 l}{∂β∂λ_1} & \frac{∂^2 l}{∂β∂λ_2} \\ \frac{∂^2 l}{∂λ_1∂α} & \frac{∂^2 l}{∂λ_1∂β} & \frac{∂^2 l}{∂λ_1^2} & \frac{∂^2 l}{∂λ_1∂λ_2} \\ \frac{∂^2 l}{∂λ_2∂α} & \frac{∂^2 l}{∂λ_2∂β} & \frac{∂^2 l}{∂λ_2∂λ_1} & \frac{∂^2 l}{∂λ_2^2} \end{pmatrix} = \begin{pmatrix} I_{11} & I_{12} & I_{13} & I_{14} \\ I_{21} & I_{22} & I_{23} & I_{24} \\ I_{31} & I_{32} & I_{33} & I_{34} \\ I_{41} & I_{42} & I_{43} & I_{44} \end{pmatrix}.

It is easily shown that

I_{11} = \frac{n}{α^2},  I_{13} = I_{31} = −\sum_{i=1}^{n} \frac{x_i e^{-λ_1 x_i}}{1 + e^{-λ_1 x_i}},

I_{22} = \frac{m}{β^2},  I_{24} = I_{42} = −\sum_{j=1}^{m} \frac{y_j e^{-λ_2 y_j}}{1 + e^{-λ_2 y_j}},

I_{33} = \frac{n}{λ_1^2} + (α + 1)\sum_{i=1}^{n} \frac{x_i^2 e^{-λ_1 x_i}}{(1 + e^{-λ_1 x_i})^2},  I_{44} = \frac{m}{λ_2^2} + (β + 1)\sum_{j=1}^{m} \frac{y_j^2 e^{-λ_2 y_j}}{(1 + e^{-λ_2 y_j})^2},

I_{12} = I_{14} = I_{21} = I_{23} = I_{32} = I_{34} = I_{41} = I_{43} = 0.

The elements of the Fisher information matrix can be obtained as

J_{11} = \frac{n}{α^2},  J_{13} = J_{31} = −\frac{n}{λ_1(α + 1)}[Ψ(α) + γ − 1],

J_{22} = \frac{m}{β^2},  J_{24} = J_{42} = −\frac{m}{λ_2(β + 1)}[Ψ(β) + γ − 1],

J_{33} = \frac{n}{λ_1^2} + \frac{n}{λ_1^2(α + 2)}\Big\{α[Ψ′(α) + Ψ^2(α)] + 2[(α(γ − 1) + 1)Ψ(α) − (γ(α − 1) + 1)] + α\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

J_{44} = \frac{m}{λ_2^2} + \frac{m}{λ_2^2(β + 2)}\Big\{β[Ψ′(β) + Ψ^2(β)] + 2[(β(γ − 1) + 1)Ψ(β) − (γ(β − 1) + 1)] + β\Big(γ^2 + \frac{π^2}{6}\Big)\Big\},

J_{12} = J_{14} = J_{21} = J_{23} = J_{32} = J_{34} = J_{41} = J_{43} = 0.

References

[1] K.E. Ahmad, M.E. Fakhry, Z.F. Jaheen, Empirical Bayes estimation of P (Y < X ) and characterizations of the Burr-type X
model, Journal of Statistical Planning and Inference 64 (1997) 297–308.
[2] M.R. Alkasasbeh, M.Z. Raqab, Estimation of the generalized logistic distribution parameters: comparative study, Statistical
Methodology 6 (2009) 262–279.
[3] A. Asgharzadeh, Point and interval estimation for a generalized logistic distribution under progressive type-II censoring,
Communications in Statistics—Theory and Methods 35 (2006) 1685–1702.
[4] A. Asgharzadeh, R. Valiollahi, M.Z. Raqab, Stress–strength reliability of Weibull distribution based on progressively
censored samples, SORT. Statistics and Operations Research Transactions 35 (2) (2011) 103–124.
[5] A.M. Awad, M.M. Azzam, M.A. Hamadan, Some inference results in P (Y < X ) in the bivariate exponential model,
Communications in Statistics—Theory and Methods 10 (1981) 2515–2524.
[6] A. Baklizi, Likelihood and Bayesian estimation of Pr (X < Y ) using lower record values from the generalized exponential
distribution, Computational Statistics and Data Analysis 52 (2008) 3468–3473.
[7] N. Balakrishnan, M.Y. Leung, Order statistics from the type I generalized logistic distribution, Communications in
Statistics—Simulation and Computation 17 (1) (1988) 25–50.
[8] D. Bamber, The area above the ordinal dominance graph and the area below the receiver operating graph, Journal of
Mathematical Psychology 12 (1975) 387–415.
[9] M.H. Chen, Q.M. Shao, Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and
Graphical Statistics 8 (1999) 69–92.
[10] P. Congdon, Bayesian Statistical Modeling, John Wiley, New York, 2001.
[11] B. Efron, The Jackknife, the Bootstrap and Other Re-Sampling Plans, in: CBMS-NSF Regional Conference Series in Applied
Mathematics, vol. 34, SIAM, Philadelphia, PA, 1982.
[12] I.S. Gradshteyn, I.M. Ryzhik, Table of Integrals, Series, and Products, sixth ed., Academic Press, San Diego, 2007.
[13] R.D. Gupta, R.C. Gupta, Estimation of P (a0 X > b0 Y ) in the multivariate normal case, Statistics 1 (1990) 91–97.
[14] R.D. Gupta, D. Kundu, Generalized logistic distributions, Journal of Applied Statistical Sciences 18 (2010) 51–66.
[15] I.J. Hall, Approximate one-sided tolerance limits for the difference or sum of two independent normal variates, Journal of
Quality Technology 16 (1984) 15–19.
[16] P. Hall, Theoretical comparison of bootstrap confidence intervals, Annals of Statistics 16 (1988) 927–953.
[17] N.L. Johnson, S. Kotz, N. Balakrishnan, Continuous Univariate Distributions, Vol. 2, second ed., Wiley and Sons, New York,
1995.
[18] S. Kotz, Y. Lumelskii, M. Pensky, The Stress–Strength Model and its Generalizations, World Scientific, New York, 2003.
[19] D. Kundu, R.D. Gupta, Estimation of P (Y < X ) for the generalized exponential distribution, Metrika 61 (2005) 291–308.
[20] D. Kundu, R.D. Gupta, Estimation of P (Y < X ) for Weibull distributions, IEEE Transactions on Reliability 55 (2) (2006)
270–280.
[21] J.F. Lawless, Statistical Models and Methods for Lifetime Data, second ed., John Wiley and Sons, New York, 2003.
[22] S.P. Mukherjee, S.S. Maiti, Stress–strength reliability in the Weibull case, in: Frontiers in Reliability, vol. 4, World Scientific,
Singapore, 1998, pp. 231–248.
[23] W. Nelson, Applied Life Data Analysis, John Wiley and Sons, New York, 1982.
[24] A. Ragab, Estimation and predictive density for the generalized logistic distribution, Microelectronics and Reliability 31
(1991) 91–95.
[25] M.Z. Raqab, T. Madi, D. Kundu, Estimation of P (Y < X ) for the three-parameter generalized exponential distribution,
Communications in Statistics—Theory and Methods 37 (2008) 2854–2865.
[26] S. Rezaei, R. Tahmasbi, B.M. Mahmoodi, Estimation of P (Y < X ) for generalized Pareto distribution, Journal of Statistical
Planning and Inference 140 (2010) 480–494.
[27] B. Saracoglu, I. Kinaci, D. Kundu, On estimation of R = P (Y < X ) for exponential distribution under progressive type-II
censoring, Journal of Statistical Computation and Simulation 82 (5) (2012) 729–744.
