
Inference of Generalized Pareto Parameters under a Joint Two-Sample Progressive Type-II Censoring Scheme

Abstract

A new joint Type-II progressive censoring scheme was introduced by Mondal
and Kundu (2016). They introduced a two-sample Type-II progressive censoring
scheme and provided inference for the unknown parameters when the two
populations are exponentially distributed. They noted that the new two-sample
progressive censoring scheme is more tractable than the well-known joint
progressive censoring scheme. In this paper we consider the same joint
progressive censoring scheme suggested by Mondal and Kundu. The lifetime
distributions of the experimental units follow the two-parameter Generalized
Pareto distribution with a common shape parameter and different scale
parameters. For statistical inference we use the maximum likelihood method
(MLE) and the Bayesian method under the quadratic loss function. Numerical
methods are used to obtain point estimates of the unknown parameters. We use
different methods to construct interval estimates for the unknown parameters,
and a simulation study is used to compare these methods and to select the
most efficient method of estimation. An optimal censoring scheme is obtained
based on the efficiency of the Fisher information matrix, from which we
conclude that the .... is the optimal censoring scheme. Finally, a real-life
data set is analyzed for illustrative purposes.

Key words: Progressive censoring, MLE, Bayes estimation, credible interval,
bootstrap confidence interval, simulation, optimal censoring.

1 Introduction

In life-testing and reliability experiments, units may be lost or removed from
the experiment before failure. The loss may be unplanned, as in the accidental
breakage of an experimental unit, or when a unit under study drops out.
Sometimes the experiment must stop because testing facilities become
unavailable. Most often, however, the removal of units from the experiment is
pre-planned and is made to save time and cost. The benefit of progressive
censoring lies in its efficient use of the available resources: when some of
the surviving units are removed from the experiment early, they can be used in
other tests. Cohen (1963) discussed the importance of progressive censoring in
life-testing experiments. The most commonly used censoring schemes are
progressive Type-I and progressive Type-II censoring. During the last ten
years many authors have studied various properties of different progressive
censoring schemes. One can refer to the book of Balakrishnan and Cramer (2014)
for progressive censoring schemes and related issues; see also Balakrishnan
(2007), Pradhan and Kundu (2009) and Kundu (2008) in this respect. The joint
progressive censoring (JPC) scheme was introduced by Rasouli and Balakrishnan
(2010), who provided exact likelihood inference for two exponential
populations under this scheme. One can also see the work of Parsi et al.
(2011), who extended the results of Rasouli and Balakrishnan (2010) to
two-parameter Weibull lifetimes, and of Ashour and Eraki (2014) and
Balakrishnan et al. (2015), who dealt with problems related to the joint
progressive censoring scheme. Doostparast et al. (2013) worked under joint
progressive censoring and used Bayesian inference to estimate the unknown
parameters.

Mondal and Kundu (2016) recently described a new joint progressive censoring
(NJPC) scheme and used likelihood inference to estimate the unknown parameters
of two exponential populations under it. Later, Mondal and Kundu (2018)
generalized the population distribution by considering Weibull lifetimes under
the NJPC scheme with a common shape parameter and different scale parameters.
They used likelihood inference to estimate the unknown parameters and
constructed exact and approximate interval estimates for them. An optimal
censoring scheme was also obtained.

In this paper we use the NJPC scheme described by Mondal and Kundu (2016). The
units under consideration follow the two-parameter Generalized Pareto (GP)
lifetime distribution. The main aim of this work is to study different point
and interval estimation methods for the Generalized Pareto parameters under
the NJPC scheme with a common shape parameter and different scale parameters.
Numerical techniques are used to compare these methods in order to select the
most efficient one. The generation of samples from the new scheme is
straightforward, so the simulation experiments can be carried out easily.
Finally, a real data analysis is performed for illustrative purposes.

The rest of the paper is organized as follows. In Section 2 the model is
described. Point estimation methods are studied in Section 3, while confidence
intervals are obtained in Section 4. Data analysis and simulation are used in
Section 5 to compare the different point and interval estimation methods for
the GP parameters, and a real data set is analyzed to check the advantage of
the new scheme over the old one. Finally, an optimal censoring scheme is
suggested in Section 6.

2 Model Description

Suppose $X_1, X_2, \ldots, X_m$ denote the ordered lifetimes of the $m$ units
of population 1, assumed to be independent and identically distributed
(i.i.d.) from a Generalized Pareto (GP) distribution with shape parameter
$\alpha$ and scale parameter $\sigma_1$. Similarly, $Y_1, Y_2, \ldots, Y_n$
denote the ordered lifetimes of the $n$ units of population 2, assumed to be
i.i.d. from a Generalized Pareto distribution with shape parameter $\alpha$
and scale parameter $\sigma_2$.

The Generalized Pareto probability density function (pdf) is given by

$$f(x;\alpha,\sigma)=\begin{cases}\dfrac{1}{\sigma}\left(1+\dfrac{\alpha x}{\sigma}\right)^{-\frac{1}{\alpha}-1}, & \alpha\neq 0,\\[4pt] \dfrac{1}{\sigma}\,e^{-x/\sigma}, & \alpha=0,\end{cases}$$

and its cumulative distribution function (CDF) is

$$F(x;\alpha,\sigma)=\begin{cases}1-\left(1+\dfrac{\alpha x}{\sigma}\right)^{-\frac{1}{\alpha}}, & \alpha\neq 0,\\[4pt] 1-e^{-x/\sigma}, & \alpha=0,\end{cases}$$

where $\sigma>0$; for $\alpha>0$ the support is $x>0$, and for $\alpha<0$ it
is $0<x<-\sigma/\alpha$, so the GP distribution has bounded support. For
$\alpha>0$ the GP distribution is one of the usual forms of the Pareto family,
often simply called the Pareto distribution. As $\alpha\to 0$, the GP
distribution reduces to the exponential distribution, and the special case
$\alpha=-1$ corresponds to the uniform distribution $U(0,\sigma)$. In this
work we consider the case $\alpha>0$, with support $x>0$.

For the joint progressive censored sampling scheme described by Mondal and
Kundu (2016), let $k$ be the total number of failures and $R_1,\ldots,R_{k-1}$
be the numbers of units withdrawn at the failure times $W_1,\ldots,W_{k-1}$.
We also define a new set of random variables $Z_1,\ldots,Z_k$, where $Z_j=1$
if the $j$th failure comes from population 1 and $Z_j=0$ otherwise.

Case I: the $k$th failure comes from population 1.

Case II: the $k$th failure comes from population 2.

The likelihood function of the joint progressive censored sample under
Generalized Pareto lifetimes (JPCGP) can be written as

$$L(\alpha,\sigma_1,\sigma_2\mid w,z)=C\,\sigma_1^{-m_k}\,\sigma_2^{-n_k}\,(A_1A_2)^{-\frac{1}{\alpha}-1},\qquad(1)$$

where

$$C=\prod_{i=1}^{k}\Big[\Big(m-\sum_{j=1}^{i-1}(R_j+1)\Big)z_i+\Big(n-\sum_{j=1}^{i-1}(R_j+1)\Big)(1-z_i)\Big],$$

$$A_1=\sum_{i=1}^{k-1}(R_i+1)\Big(1+\frac{\alpha w_i}{\sigma_1}\Big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)\Big(1+\frac{\alpha w_k}{\sigma_1}\Big),$$

$$A_2=\sum_{i=1}^{k-1}(R_i+1)\Big(1+\frac{\alpha w_i}{\sigma_2}\Big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)\Big(1+\frac{\alpha w_k}{\sigma_2}\Big),$$

$$m_k=\sum_{i=1}^{k}z_i,\qquad n_k=\sum_{i=1}^{k}(1-z_i)=k-m_k.$$

As noted by Mondal and Kundu (2016), the new joint progressive censoring
scheme is easier to handle analytically, hence the properties of the
estimators of the GP parameters can be derived easily. In the following
sections we provide inference for two GP populations under the new joint
progressive censoring scheme: we obtain the maximum likelihood estimators
(MLEs) and Bayes estimators of the unknown parameters, and numerical methods
are used to compute the estimates.

3 Point Estimation

In this section we use likelihood inference together with the Bayesian
estimation method. Numerical methods are used to solve the resulting nonlinear
equations, since the estimators cannot be written in explicit form. These
results will be used in Section 4 to obtain exact and approximate confidence
intervals for the unknown parameters.

3.1 Maximum Likelihood Estimators (MLEs)

Let $\ell(\theta;w,z)$ denote the log-likelihood function, where
$\theta=(\alpha,\sigma_1,\sigma_2)$ is the vector of unknown parameters.
Taking partial derivatives of $\ell(\theta;w,z)$ with respect to the
parameters, we obtain

$$\frac{\partial\ell(\theta;w,z)}{\partial\alpha}=\frac{1}{\alpha^{2}}\log(A_1A_2)-\Big(\frac{1}{\alpha}+1\Big)\Big(\frac{\dot A_1}{A_1}+\frac{\dot A_2}{A_2}\Big),$$

where

$$\dot A_1=\sum_{i=1}^{k-1}(R_i+1)\frac{w_i}{\sigma_1}+\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{w_k}{\sigma_1},\qquad
\dot A_2=\sum_{i=1}^{k-1}(R_i+1)\frac{w_i}{\sigma_2}+\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{w_k}{\sigma_2}.$$

Equating this derivative to zero gives $\hat\alpha$. Taking the partial
derivatives with respect to $\sigma_1$ and $\sigma_2$ and equating them to
zero yields

$$\frac{\partial\ell(\theta;w,z)}{\partial\sigma_1}=-\frac{m_k}{\sigma_1}+\Big(\frac{1}{\alpha}+1\Big)\,\frac{\displaystyle\sum_{i=1}^{k-1}(R_i+1)\frac{\alpha w_i}{\sigma_1^{2}}+\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{\alpha w_k}{\sigma_1^{2}}}{A_1}=0$$

and

$$\frac{\partial\ell(\theta;w,z)}{\partial\sigma_2}=-\frac{n_k}{\sigma_2}+\Big(\frac{1}{\alpha}+1\Big)\,\frac{\displaystyle\sum_{i=1}^{k-1}(R_i+1)\frac{\alpha w_i}{\sigma_2^{2}}+\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{\alpha w_k}{\sigma_2^{2}}}{A_2}=0.$$

If all three parameters are unknown, a numerical method such as
Newton-Raphson is needed to solve the above system of nonlinear equations.
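As an illustrative sketch, the system above can also be solved by maximizing
the log-likelihood of Eq. (1) numerically; a derivative-free optimizer is used
here in place of Newton-Raphson for simplicity. The data arrays `w`, `z`, `R`
and the sample sizes `m`, `n` below are hypothetical, and the likelihood form
follows Eq. (1) as reconstructed above:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, w, z, R, m, n):
    """Negative log-likelihood of Eq. (1), dropping the constant C.

    theta = (alpha, sigma1, sigma2); w are the k ordered failure times,
    z the population indicators, R the k-1 removal numbers.
    """
    alpha, s1, s2 = theta
    if alpha <= 0 or s1 <= 0 or s2 <= 0:
        return np.inf                      # parameters must be positive
    k = len(w)
    mk = int(np.sum(z))
    nk = k - mk
    r1 = np.asarray(R) + 1.0               # R_i + 1, i = 1..k-1
    rem1 = m - r1.sum()                    # sample-1 units still on test at w_k
    rem2 = n - r1.sum()
    A1 = np.sum(r1 * (1 + alpha * w[:-1] / s1)) + rem1 * (1 + alpha * w[-1] / s1)
    A2 = np.sum(r1 * (1 + alpha * w[:-1] / s2)) + rem2 * (1 + alpha * w[-1] / s2)
    if A1 <= 0 or A2 <= 0:
        return np.inf
    return mk * np.log(s1) + nk * np.log(s2) + (1.0 / alpha + 1.0) * np.log(A1 * A2)

# Hypothetical censored sample, for illustration only.
w = np.array([0.3, 0.7, 1.1, 1.8])
z = np.array([1, 0, 1, 1])
R = np.array([1, 0, 1])
res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0], args=(w, z, R, 10, 10),
               method="Nelder-Mead")
alpha_hat, s1_hat, s2_hat = res.x
```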

3.2 Bayes Estimators

In this section we find Bayes estimates of the unknown parameters
$\alpha,\sigma_1$ and $\sigma_2$. We consider both the case in which the shape
parameter is known and the case in which it is unknown. In the Bayesian method
we first need to specify a loss function; here we adopt the quadratic loss
function $L(\hat\theta,\theta)=(\hat\theta-\theta)^{2}$, where $\hat\theta$ is
a point estimate of the parameter $\theta$, because it is the most commonly
used symmetric loss function. Under quadratic loss, the Bayes estimator is the
posterior mean.

In the Bayesian method all parameters are treated as random variables with
certain distributions, called prior distributions. If prior information is not
available, which is usually the case, we need to select suitable prior
distributions. Since this choice plays an important role in the estimation of
the parameters, we take independent gamma priors $G(a_1,b_1)$, $G(a_2,b_2)$
and $G(a_3,b_3)$ for $\alpha,\sigma_1$ and $\sigma_2$ respectively. The gamma
prior is chosen for its flexible nature; in particular, it behaves as a
non-informative prior when the hyperparameters are taken to be zero. Thus the
suggested priors for $\alpha,\sigma_1$ and $\sigma_2$ are

$$f_1(\alpha)\propto \alpha^{a_1-1}e^{-b_1\alpha},\qquad f_2(\sigma_1)\propto \sigma_1^{a_2-1}e^{-b_2\sigma_1},\qquad f_3(\sigma_2)\propto \sigma_2^{a_3-1}e^{-b_3\sigma_2},\qquad(2)$$

respectively, where $a_1,a_2,a_3,b_1,b_2$ and $b_3$ are the hyperparameters of
the prior distributions.

The joint prior of $\alpha,\sigma_1$ and $\sigma_2$ is

$$g(\alpha,\sigma_1,\sigma_2)\propto \alpha^{a_1-1}\sigma_1^{a_2-1}\sigma_2^{a_3-1}e^{-b_1\alpha-b_2\sigma_1-b_3\sigma_2},\qquad \alpha,\sigma_1,\sigma_2,a_1,a_2,a_3,b_1,b_2,b_3>0,$$

while the joint posterior of $\alpha,\sigma_1$ and $\sigma_2$ is given by

$$p(\alpha,\sigma_1,\sigma_2\mid w,z)=\frac{L(\alpha,\sigma_1,\sigma_2\mid w,z)\,g(\alpha,\sigma_1,\sigma_2)}{\int_{\alpha}\int_{\sigma_1}\int_{\sigma_2}L(\alpha,\sigma_1,\sigma_2\mid w,z)\,g(\alpha,\sigma_1,\sigma_2)\,d\alpha\,d\sigma_1\,d\sigma_2},$$

where $L(\alpha,\sigma_1,\sigma_2\mid w,z)$ is the likelihood function of the
JPCGP model.


Substituting $L(\alpha,\sigma_1,\sigma_2\mid w,z)$ and
$g(\alpha,\sigma_1,\sigma_2)$ for the JPCGP model, the joint posterior is

$$p(\alpha,\sigma_1,\sigma_2\mid w,z)\propto \alpha^{a_1-1}\sigma_1^{a_2-m_k-1}\sigma_2^{a_3-n_k-1}e^{-b_1\alpha-b_2\sigma_1-b_3\sigma_2}(A_1A_2)^{-\frac{1}{\alpha}-1}\qquad(3)$$
$$\propto G_{\alpha}(a_1,b_1)\,G_{\sigma_1}(a_2-m_k,b_2)\,G_{\sigma_2}(a_3-n_k,b_3)\,e^{-\Delta},$$

where $\Delta=\big(\tfrac{1}{\alpha}+1\big)\ln(A_1A_2)$.

The Bayes estimate of any function $h(\alpha,\sigma_1,\sigma_2)$ under the
quadratic loss function is
$\hat h_B(\alpha,\sigma_1,\sigma_2)=E_{\alpha,\sigma_1,\sigma_2\mid \mathrm{data}}\big(h(\alpha,\sigma_1,\sigma_2)\big)$.
Since it is difficult to compute this expected value analytically, we use the
MCMC technique; see Lindley (1980) and Karandikar (2006).

We use the Gibbs sampling method to generate a sample from the posterior
density $p(\alpha,\sigma_1,\sigma_2\mid w,z)$ and compute the Bayes estimates,
with the prior densities as described in Eq. (2). The full conditional
posterior densities of $\alpha,\sigma_1$ and $\sigma_2$ given the data are

$$\pi(\alpha\mid \sigma_1,\sigma_2,w,z)\propto G_{\alpha}(a_1,b_1)\,e^{-\Delta},$$
$$\pi(\sigma_1\mid \alpha,\sigma_2,w,z)\propto G_{\sigma_1}(a_2-m_k,b_2)\,e^{-(\frac{1}{\alpha}+1)\ln A_1},\qquad(4)$$
$$\pi(\sigma_2\mid \alpha,\sigma_1,w,z)\propto G_{\sigma_2}(a_3-n_k,b_3)\,e^{-(\frac{1}{\alpha}+1)\ln A_2}.$$

To apply the Gibbs technique we use the following algorithm:

(1) Start with initial values $(\alpha^{(0)},\sigma_1^{(0)},\sigma_2^{(0)})$.
(2) Use the M-H algorithm to generate a posterior sample for
$\alpha,\sigma_1$ and $\sigma_2$ from Eq. (4).
(3) Repeat step 2 $M$ times to obtain
$(\alpha^{(1)},\sigma_1^{(1)},\sigma_2^{(1)}),\ldots,(\alpha^{(M)},\sigma_1^{(M)},\sigma_2^{(M)})$.
(4) From the posterior sample, the Bayes estimates of $\alpha,\sigma_1$ and
$\sigma_2$ with respect to the quadratic loss function are

$$\hat\alpha_{MC}=\hat E(\alpha\mid w,z)=\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\alpha^{(i)},\qquad(5)$$
$$\hat\sigma_{1,MC}=\hat E(\sigma_1\mid w,z)=\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\sigma_1^{(i)},$$
$$\hat\sigma_{2,MC}=\hat E(\sigma_2\mid w,z)=\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\sigma_2^{(i)},$$

where $M_0$ is the burn-in period of the Markov chain.
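The algorithm above can be sketched as a Metropolis-within-Gibbs sampler. For
simplicity, the sketch below proposes each coordinate by a Gaussian random
walk and evaluates the full log posterior of Eq. (3) rather than the
individual conditionals of Eq. (4) (the two are proportional coordinate-wise).
The data, hyperparameters and tuning step are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta, w, z, R, m, n, hyper):
    """Log posterior of Eq. (3) up to a constant (alpha, sigma1, sigma2 > 0)."""
    alpha, s1, s2 = theta
    if min(alpha, s1, s2) <= 0:
        return -np.inf
    a1, b1, a2, b2, a3, b3 = hyper
    k = len(w); mk = int(np.sum(z)); nk = k - mk
    r1 = np.asarray(R) + 1.0
    A1 = np.sum(r1 * (1 + alpha * w[:-1] / s1)) + (m - r1.sum()) * (1 + alpha * w[-1] / s1)
    A2 = np.sum(r1 * (1 + alpha * w[:-1] / s2)) + (n - r1.sum()) * (1 + alpha * w[-1] / s2)
    if A1 <= 0 or A2 <= 0:
        return -np.inf
    return ((a1 - 1) * np.log(alpha) - b1 * alpha
            + (a2 - mk - 1) * np.log(s1) - b2 * s1
            + (a3 - nk - 1) * np.log(s2) - b3 * s2
            - (1.0 / alpha + 1.0) * np.log(A1 * A2))

def gibbs_mh(w, z, R, m, n, hyper, M=5000, M0=1000, step=0.2):
    """Metropolis-within-Gibbs: random-walk update of each coordinate in turn."""
    theta = np.array([1.0, 1.0, 1.0])          # initial values (step 1)
    chain = np.empty((M, 3))
    lp = log_post(theta, w, z, R, m, n, hyper)
    for t in range(M):
        for j in range(3):                      # M-H step for each parameter
            prop = theta.copy()
            prop[j] += step * rng.standard_normal()
            lp_new = log_post(prop, w, z, R, m, n, hyper)
            if np.log(rng.random()) < lp_new - lp:   # accept/reject
                theta, lp = prop, lp_new
        chain[t] = theta
    return chain[M0:].mean(axis=0), chain       # posterior means, as in Eq. (5)

# Hypothetical data and hyperparameters, for illustration only.
w = np.array([0.3, 0.7, 1.1, 1.8]); z = np.array([1, 0, 1, 1]); R = np.array([1, 0, 1])
est, chain = gibbs_mh(w, z, R, m=10, n=10, hyper=(2, 1, 5, 1, 3, 1))
```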

4 Interval Estimation

The normal approximation is an efficient method for constructing confidence
intervals when the sample size is large enough; otherwise it may not be
useful, and bootstrap methods may provide more accurate approximate confidence
intervals. In this section four different approximate confidence interval
methods are proposed. Our goal is to select the best interval, namely the one
with the shortest length.

4.1 Asymptotic con…dence interval

When the sample size is large enough, the normal approximation of the MLE can
be used to construct asymptotic confidence intervals for the parameters
$\alpha,\sigma_1$ and $\sigma_2$. The asymptotic normality of the MLE can be
stated as $(\hat\theta-\theta)\xrightarrow{d} N_3(0,I^{-1}(\theta))$, where
$\theta=(\alpha,\sigma_1,\sigma_2)$ is the vector of parameters,
$\xrightarrow{d}$ denotes convergence in distribution and $I(\theta)$ is the
Fisher information matrix, i.e.

$$I(\theta)=-\begin{pmatrix} E(\ell_{\alpha\alpha}) & E(\ell_{\alpha\sigma_1}) & E(\ell_{\alpha\sigma_2})\\ E(\ell_{\sigma_1\alpha}) & E(\ell_{\sigma_1\sigma_1}) & E(\ell_{\sigma_1\sigma_2})\\ E(\ell_{\sigma_2\alpha}) & E(\ell_{\sigma_2\sigma_1}) & E(\ell_{\sigma_2\sigma_2})\end{pmatrix}.$$

The expected values of the second-order partial derivatives can be evaluated
using numerical integration. Therefore, the $100(1-\gamma)\%$ approximate
confidence intervals for $\alpha,\sigma_1$ and $\sigma_2$ are

$$\hat\alpha\pm z_{\gamma/2}\sqrt{v_{11}},\qquad \hat\sigma_1\pm z_{\gamma/2}\sqrt{v_{22}},\qquad \hat\sigma_2\pm z_{\gamma/2}\sqrt{v_{33}},$$

respectively, where $v_{11},v_{22},v_{33}$ are the diagonal entries of
$I^{-1}(\theta)$ and $z_{\gamma/2}$ is the upper $(\gamma/2)$th quantile of the
standard normal distribution.
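In practice the expected information is often replaced by the observed
information, i.e. the numerical Hessian of the negative log-likelihood at the
MLE. A minimal sketch under that assumption follows; `neg_log_lik` stands for
any implementation of the model's negative log-likelihood:

```python
import numpy as np
from scipy.stats import norm

def num_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at the point x."""
    p = len(x)
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = h
            ej = np.zeros(p); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

def wald_intervals(neg_log_lik, mle, level=0.95):
    """Two-sided Wald intervals from the observed information at the MLE."""
    mle = np.asarray(mle, dtype=float)
    H = num_hessian(neg_log_lik, mle)          # observed information matrix
    V = np.linalg.inv(H)                       # estimated asymptotic covariance
    zq = norm.ppf(1.0 - (1.0 - level) / 2.0)   # upper gamma/2 normal quantile
    se = np.sqrt(np.diag(V))
    return np.column_stack([mle - zq * se, mle + zq * se])
```

For instance, for the quadratic test function $0.5\sum x^2$ with minimum at
the origin, the Hessian is the identity and the 95% intervals reduce to
$\pm z_{0.025}\approx\pm 1.96$.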

4.2 Bootstrap con…dence interval

Since confidence intervals based on asymptotic theory do not perform very well
for small sample sizes, an alternative to the traditional approach is the
bootstrap method; see Ahmad (2014). Parametric and nonparametric bootstrap
methods are presented in Davison and Hinkley (1997) and Efron and Tibshirani
(1993). In this section we use two parametric bootstrap methods: (a) the
percentile bootstrap and (b) the bootstrap-t (see Hall (1988) and Efron
(1981)).

4.2.1 Percentile bootstrap con…dence interval

For this method we use the following algorithm:

(1) Given the original data set $(w,z)=\{(w_i,z_i),\,i=1,\ldots,k\}$,
$1\le k<\max\{n,m\}$, where $z_i=1$ or $0$ according to whether the failure
comes from population 1 or 2, estimate $\alpha,\sigma_1$ and $\sigma_2$ by
maximum likelihood (say $\hat\alpha,\hat\sigma_1,\hat\sigma_2$).
(2) Generate a bootstrap sample $(w^{*},z^{*})$ from the joint Generalized
Pareto model with the parameters $\hat\alpha,\hat\sigma_1,\hat\sigma_2$
obtained in step (1).
(3) From $(w^{*},z^{*})$ compute the bootstrap estimates
$\hat\alpha^{*},\hat\sigma_1^{*},\hat\sigma_2^{*}$.
(4) Repeat steps 2 and 3 $M$ times to obtain $M$ bootstrap estimates.
(5) Arrange each set of bootstrap estimates in ascending order as
$(\hat\theta_j^{*[1]},\hat\theta_j^{*[2]},\ldots,\hat\theta_j^{*[M]})$,
$j=1,2,3$, where $\hat\theta_1=\hat\alpha$, $\hat\theta_2=\hat\sigma_1$ and
$\hat\theta_3=\hat\sigma_2$.

A two-sided $100(1-\gamma)\%$ percentile bootstrap confidence interval for
each unknown parameter is then given by

$$\big(\hat\theta_j^{*[M\gamma/2]},\,\hat\theta_j^{*[M(1-\gamma/2)]}\big),\qquad j=1,2,3.$$
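The percentile machinery of steps (1)-(5) can be sketched generically. The
scheme-specific sample generator and the MLE routine are passed in as
user-supplied callables (`simulate`, `fit`), which are our own placeholder
names since those routines are not spelled out here:

```python
import numpy as np

def percentile_bootstrap_ci(fit, simulate, theta_hat, B=1000, gamma=0.05, seed=0):
    """Percentile bootstrap CIs for each component of theta.

    `simulate(theta, rng)` draws a new censored sample under the scheme and
    `fit(sample)` returns the estimate vector; both are user-supplied
    placeholders (the NJPC sample generator itself is not shown here).
    """
    rng = np.random.default_rng(seed)
    boot = np.array([fit(simulate(theta_hat, rng)) for _ in range(B)])
    lo = np.quantile(boot, gamma / 2, axis=0)       # theta*[B * gamma/2]
    hi = np.quantile(boot, 1 - gamma / 2, axis=0)   # theta*[B * (1 - gamma/2)]
    return np.column_stack([lo, hi])
```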

4.2.2 Bootstrap-t con…dence intervals

For this method we use the following algorithm:

(1) Given the original data set $(w,z)=\{(w_i,z_i),\,i=1,\ldots,k\}$,
$1\le k<\max\{n,m\}$, where $z_i=1$ or $0$ according to whether the failure
comes from population 1 or 2, estimate $\alpha,\sigma_1$ and $\sigma_2$ by
maximum likelihood (say $\hat\alpha,\hat\sigma_1,\hat\sigma_2$).
(2) Generate a bootstrap sample $(w^{*},z^{*})$ from the joint Generalized
Pareto model with the parameters $\hat\alpha,\hat\sigma_1,\hat\sigma_2$.
(3) From $(w^{*},z^{*})$ compute the bootstrap estimates
$\hat\alpha^{*},\hat\sigma_1^{*},\hat\sigma_2^{*}$.
(4) Compute the t-statistics

$$T_1=\frac{\hat\alpha^{*}-\hat\alpha}{\sqrt{Var(\hat\alpha^{*})}},\qquad T_2=\frac{\hat\sigma_1^{*}-\hat\sigma_1}{\sqrt{Var(\hat\sigma_1^{*})}},\qquad T_3=\frac{\hat\sigma_2^{*}-\hat\sigma_2}{\sqrt{Var(\hat\sigma_2^{*})}},$$

where $Var(\hat\alpha^{*})$, $Var(\hat\sigma_1^{*})$ and $Var(\hat\sigma_2^{*})$
are the asymptotic variances of $\hat\alpha,\hat\sigma_1$ and $\hat\sigma_2$
respectively.
(5) Repeat steps 2 to 4 $M$ times to obtain $T_j^{(1)},T_j^{(2)},\ldots,T_j^{(M)}$,
$j=1,2,3$.

(6) Arrange the $T$ values obtained in step 5 in ascending order as
$T_j^{[1]},T_j^{[2]},\ldots,T_j^{[M]}$, $j=1,2,3$.

Two-sided $100(1-\gamma)\%$ bootstrap-t confidence intervals for the unknown
parameters $\alpha,\sigma_1$ and $\sigma_2$ are then given by

$$\Big(\hat\alpha+T_1^{[M\gamma/2]}\sqrt{Var(\hat\alpha)},\ \hat\alpha+T_1^{[M(1-\gamma/2)]}\sqrt{Var(\hat\alpha)}\Big),$$
$$\Big(\hat\sigma_1+T_2^{[M\gamma/2]}\sqrt{Var(\hat\sigma_1)},\ \hat\sigma_1+T_2^{[M(1-\gamma/2)]}\sqrt{Var(\hat\sigma_1)}\Big),$$
$$\Big(\hat\sigma_2+T_3^{[M\gamma/2]}\sqrt{Var(\hat\sigma_2)},\ \hat\sigma_2+T_3^{[M(1-\gamma/2)]}\sqrt{Var(\hat\sigma_2)}\Big).$$

4.3 Credible Intervals

Using the MCMC techniques of Section 3.2, Bayes credible intervals for the
parameters $\alpha,\sigma_1$ and $\sigma_2$ can be obtained as follows:

(1) Arrange $\alpha^{(i)}$, $\sigma_1^{(i)}$ and $\sigma_2^{(i)}$ in ascending
order as $\alpha^{[1]},\alpha^{[2]},\ldots,\alpha^{[M]}$,
$\sigma_1^{[1]},\sigma_1^{[2]},\ldots,\sigma_1^{[M]}$ and
$\sigma_2^{[1]},\sigma_2^{[2]},\ldots,\sigma_2^{[M]}$.

(2) Two-sided $100(1-\gamma)\%$ credible intervals for the unknown parameters
$\alpha,\sigma_1$ and $\sigma_2$ are given by

$$\big(\alpha^{[M\gamma/2]},\,\alpha^{[M(1-\gamma/2)]}\big),\qquad \big(\sigma_1^{[M\gamma/2]},\,\sigma_1^{[M(1-\gamma/2)]}\big)\qquad\text{and}\qquad \big(\sigma_2^{[M\gamma/2]},\,\sigma_2^{[M(1-\gamma/2)]}\big).$$
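In code, the two steps amount to taking empirical quantiles of the post
burn-in draws; a minimal sketch, assuming `chain` holds the MCMC output of
Section 3.2 with one column per parameter:

```python
import numpy as np

def credible_interval(chain, gamma=0.05):
    """Equal-tail 100(1-gamma)% credible intervals from MCMC draws.

    `chain` is an (M - M0) x p array of post burn-in posterior draws;
    np.quantile sorts internally, which replaces the explicit ordering step."""
    lo = np.quantile(chain, gamma / 2, axis=0)
    hi = np.quantile(chain, 1 - gamma / 2, axis=0)
    return np.column_stack([lo, hi])
```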

5 Data Analysis and Simulations

In this section we compare the different point and interval estimation methods
developed in the previous sections. These comparisons require numerical
analysis and simulation, so a Monte Carlo simulation study is carried out to
compare the performances of the different estimators under different parameter
values and different sampling schemes. We also analyze a real data set for
illustrative purposes.

6 Optimal Censoring Scheme

7 References

[1] Ashour, S. and Eraki, O. (2014), Parameter estimation for multiple Weibull
populations under joint Type-II censoring, International Journal of Advanced
Statistics and Probability, vol 2, 2, pp. 102-107.

[2] Balakrishnan, N. (2007), Progressive censoring methodology: an appraisal,
TEST, vol 16, pp. 211-296 (with discussions).

[3] Balakrishnan, N. and Cramer, E. (2014), The Art of Progressive Censoring,
Springer, New York.

[4] Balakrishnan, N., Su, F. and Liu, K. Y. (2015), Exact likelihood inference
for k exponential populations under joint progressive Type-II censoring,
Communications in Statistics - Simulation and Computation, vol 44, 4,
pp. 902-923.

[5] Cohen, A. C. (1963), Progressively censored samples in life testing,
Technometrics, vol 5, pp. 327-329.

[6] Davison, A. C. and Hinkley, D. V. (1997), Bootstrap Methods and their
Application, Cambridge University Press, Cambridge.

[7] Doostparast, M., Ahmadi, M. V. and Ahmadi, J. (2013), Bayes estimation
based on joint progressive Type-II censored data under LINEX loss function,
Communications in Statistics - Simulation and Computation, vol 42, 8,
pp. 1865-1886.

[8] Efron, B. (1979), Bootstrap methods: another look at the jackknife, Annals
of Statistics, vol 7, pp. 1-26.

[9] Efron, B. (1981), Censored data and the bootstrap, Journal of the American
Statistical Association, vol 76, p. 312.

[10] Efron, B. and Tibshirani, R. J. (1993), An Introduction to the Bootstrap,
Chapman and Hall, New York.

[11] Hall, P. (1988), Theoretical comparison of bootstrap confidence
intervals, Annals of Statistics, vol 16, p. 927.

[12] Karandikar, R. L. (2006), On the Markov Chain Monte Carlo (MCMC) method,
Sadhana, vol 31, part 2, pp. 81-104.

[13] Kundu, D. (2008), Bayesian inference and life testing plan for the
Weibull distribution in presence of progressive censoring, Technometrics,
vol 50, pp. 144-154.

[14] Lindley, D. V. (1980), Approximate Bayesian methods, Trabajos de
Estadistica, vol 31, pp. 223-237.

[15] Mondal, S. and Kundu, D. (2016), A new two sample Type-II progressive
censoring scheme, arXiv:1609.05805.

[16] Mondal, S. and Kundu, D. (2018), Inference on Weibull parameters under a
balanced two sample Type-II progressive censoring scheme, arXiv:1801.00434v1.

[17] Parsi, S., Ganjali, M. and Farsipour, N. S. (2011), Conditional maximum
likelihood and interval estimation for two Weibull populations under joint
Type-II progressive censoring, Communications in Statistics - Theory and
Methods, vol 40, 12, pp. 2117-2135.

[18] Pradhan, B. and Kundu, D. (2009), On progressively censored generalized
exponential distribution, Test, vol 18, 3, pp. 497-515.

[19] Rasouli, A. and Balakrishnan, N. (2010), Exact likelihood inference for
two exponential populations under joint progressive Type-II censoring,
Communications in Statistics - Theory and Methods, vol 39, 12, pp. 2172-2191.
