Inference of Generalized Pareto Parameters Under Joint Two Sample Progressive Type-II Censoring Scheme
1 Introduction
for progressive censoring schemes and their related issues. See also Balakrishnan (2007), Pradhan and Kundu (2009) and Kundu (2008) in this respect. The joint progressive censoring (JPC) scheme was introduced by Rasouli and Balakrishnan (2010), who provided exact likelihood inference for two exponential populations under this scheme. One can also see the work of Parsi et al. (2011), who extended the results of Rasouli and Balakrishnan (2010) to the two-parameter Weibull lifetime distribution, and of Ashour and Eraki (2014) and Balakrishnan et al. (2015), who dealt with further problems related to the joint progressive censoring scheme. Doostparast et al. (2013) worked under joint progressive censoring and used Bayesian inference to estimate the unknown parameters.
Mondal and Kundu (2016) recently described a new scheme of joint progressive censoring (NJPC) and used likelihood inference to estimate the unknown parameters of two exponential populations under this scheme. Later, in 2018, Mondal and Kundu generalized the population distribution by considering Weibull lifetimes under the NJPC scheme with a common shape parameter and different scale parameters. They used likelihood inference to estimate the unknown parameters, constructed exact and approximate interval estimates for the parameters, and also obtained an optimal censoring scheme.
In this paper we use the NJPC scheme described by Mondal and Kundu (2016). The units under consideration follow a two-parameter Generalized Pareto (GP) lifetime distribution. The main aim of this work is to study different point and interval estimation methods for the Generalized Pareto parameters under the NJPC scheme with a common shape parameter and different scale parameters. Numerical techniques are used to compare these methods in order to select the most efficient one. The generation of samples from the new scheme is described, so the simulation experiments can be carried out easily. Finally, real data analysis is performed for illustrative purposes.
2 Model Description
The two-parameter Generalized Pareto (GP) distribution has probability density function
\[
f(x;\alpha,\sigma)=\begin{cases}\dfrac{1}{\sigma}\left(1+\dfrac{\alpha x}{\sigma}\right)^{-\frac{1}{\alpha}-1}, & \alpha\neq 0,\\[1ex] \dfrac{1}{\sigma}\,e^{-x/\sigma}, & \alpha=0,\end{cases}
\]
and cumulative distribution function
\[
F(x;\alpha,\sigma)=\begin{cases}1-\left(1+\dfrac{\alpha x}{\sigma}\right)^{-\frac{1}{\alpha}}, & \alpha\neq 0,\\[1ex] 1-e^{-x/\sigma}, & \alpha=0,\end{cases}
\]
where \sigma>0; for \alpha>0 the support is x>0, and for \alpha<0 it is 0<x<-\sigma/\alpha. For \alpha>0, the GP distribution is one of several forms of the usual Pareto family of distributions, often called the Pareto distribution. For \alpha<0, the support of the distribution is 0<x<-\sigma/\alpha, and the GP distribution has bounded support. As \alpha\to 0, the GP distribution reduces to the exponential distribution. The special case \alpha=-1 corresponds to the uniform distribution U(0,\sigma). In this work we consider the case \alpha>0, with support x>0.
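The pdf, cdf and the corresponding inverse-transform relation above can be sketched numerically as follows (a minimal illustration; the function names are ours, and \alpha, \sigma denote the shape and scale as above):

```python
import math
import random

def gp_pdf(x, alpha, sigma):
    """Density f(x; alpha, sigma) = (1/sigma)(1 + alpha*x/sigma)^(-1/alpha - 1)."""
    if alpha == 0.0:
        return math.exp(-x / sigma) / sigma          # exponential limit
    return (1.0 / sigma) * (1.0 + alpha * x / sigma) ** (-1.0 / alpha - 1.0)

def gp_cdf(x, alpha, sigma):
    """Distribution function F(x; alpha, sigma) = 1 - (1 + alpha*x/sigma)^(-1/alpha)."""
    if alpha == 0.0:
        return 1.0 - math.exp(-x / sigma)
    return 1.0 - (1.0 + alpha * x / sigma) ** (-1.0 / alpha)

def gp_sample(alpha, sigma, rng=random):
    """Inverse-transform sampling: solve F(x) = u for x."""
    u = rng.random()
    if alpha == 0.0:
        return -sigma * math.log(1.0 - u)
    return (sigma / alpha) * ((1.0 - u) ** (-alpha) - 1.0)
```

Solving F(x) = u gives x = (\sigma/\alpha)((1-u)^{-\alpha}-1), which is the expression used in `gp_sample`.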
For the joint progressive censored sampling scheme described by Mondal and Kundu (2016), let k be the total number of failures, observed at failure times W_1, \ldots, W_k, and let R_1, \ldots, R_{k-1} be the numbers of units withdrawn at the first k-1 failure times. We also define a new set of random variables Z_1, \ldots, Z_k, where Z_j = 1 if the j-th failure takes place from population 1 and Z_j = 0 otherwise.
Case II: the k-th failure comes from population 2

The likelihood function of the joint progressively censored sample under Generalized Pareto lifetimes (JPCGP) can be written as
\[
L(\alpha,\sigma_1,\sigma_2 \mid w,z) = C\,\sigma_1^{-m_k}\,\sigma_2^{-n_k}\,(A_1 A_2)^{-\frac{1}{\alpha}-1}, \tag{1}
\]
where
\[
C=\prod_{i=1}^{k}\Big[\Big(m-\sum_{j=1}^{i-1}(R_j+1)\Big)z_i+\Big(n-\sum_{j=1}^{i-1}(R_j+1)\Big)(1-z_i)\Big],
\]
\[
A_1=\sum_{i=1}^{k-1}(R_i+1)\Big(1+\frac{\alpha w_i}{\sigma_1}\Big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)\Big(1+\frac{\alpha w_k}{\sigma_1}\Big),
\]
\[
A_2=\sum_{i=1}^{k-1}(R_i+1)\Big(1+\frac{\alpha w_i}{\sigma_2}\Big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)\Big(1+\frac{\alpha w_k}{\sigma_2}\Big),
\]
\[
m_k=\sum_{i=1}^{k}z_i, \qquad n_k=\sum_{i=1}^{k}(1-z_i)=k-m_k.
\]
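The likelihood (1) can be evaluated numerically. Below is a minimal sketch of the log-likelihood, dropping the multiplicative constant C, which is free of the parameters; the function name, argument layout and toy data are our assumptions:

```python
import math

def njpc_gp_loglik(alpha, s1, s2, w, z, R, m, n):
    """Log of Eq. (1) up to the parameter-free constant C.
    w: ordered failure times w_1 <= ... <= w_k; z[i] = 1 if the (i+1)-th
    failure comes from population 1; R: withdrawal numbers R_1,...,R_{k-1}."""
    k = len(w)
    mk = sum(z)                      # failures from population 1
    nk = k - mk                      # failures from population 2
    sumR = sum(r + 1 for r in R)     # sum_{i=1}^{k-1} (R_i + 1)
    A1 = sum((R[i] + 1) * (1.0 + alpha * w[i] / s1) for i in range(k - 1)) \
        + (m - sumR) * (1.0 + alpha * w[-1] / s1)
    A2 = sum((R[i] + 1) * (1.0 + alpha * w[i] / s2) for i in range(k - 1)) \
        + (n - sumR) * (1.0 + alpha * w[-1] / s2)
    return (-mk * math.log(s1) - nk * math.log(s2)
            - (1.0 / alpha + 1.0) * (math.log(A1) + math.log(A2)))
```

For instance, with w = [0.2, 0.5, 0.9], z = [1, 0, 1], R = [1, 0] and m = n = 5 (hypothetical data), the function returns a finite log-likelihood value that can be fed to any numerical optimizer.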
new joint progressive censoring scheme. We obtain the maximum likelihood estimators (MLEs) and Bayes estimators of the unknown parameters; numerical methods are used to compute the estimates.
3 Point Estimation

In this section we use likelihood inference together with the nonclassical Bayesian estimation method. Numerical methods are used to solve the resulting nonlinear equations, since they cannot be written in explicit form. These methods will also be used in Section 4 to obtain exact and approximate confidence intervals for the unknown parameters.
Setting this equation equal to zero and solving gives \hat{\alpha}. Now taking the partial derivative with respect to \sigma_1 and equating it to zero yields
\[
\frac{\partial \ell(\Theta;w,z)}{\partial \sigma_1} = -\frac{m_k}{\sigma_1} + (\alpha+1)\,\frac{\displaystyle\sum_{i=1}^{k-1}(R_i+1)\frac{w_i}{\sigma_1^2}+\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{w_k}{\sigma_1^2}}{A_1},
\]
and
\[
\frac{\partial \ell(\Theta;w,z)}{\partial \sigma_2} = -\frac{n_k}{\sigma_2} + (\alpha+1)\,\frac{\displaystyle\sum_{i=1}^{k-1}(R_i+1)\frac{w_i}{\sigma_2^2}+\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)\frac{w_k}{\sigma_2^2}}{A_2}.
\]
If all parameters are unknown, then numerical methods such as Newton-Raphson are needed to solve the above system of nonlinear equations.
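As a self-contained illustration, the score equation for \sigma_1 (with \alpha treated as known) can be solved numerically. For robustness this sketch uses bisection rather than Newton-Raphson; all function names and the toy data are ours:

```python
def score_sigma1(s1, alpha, w, z, R, m):
    # dl/d(sigma1) = -mk/s1 + (alpha+1) * B / (s1^2 * A1),
    # following the score equation reconstructed above.
    k = len(w)
    mk = sum(z)
    sumR = sum(r + 1 for r in R)
    A1 = sum((R[i] + 1) * (1.0 + alpha * w[i] / s1) for i in range(k - 1)) \
        + (m - sumR) * (1.0 + alpha * w[-1] / s1)
    B = sum((R[i] + 1) * w[i] for i in range(k - 1)) + (m - sumR) * w[-1]
    return -mk / s1 + (alpha + 1.0) * B / (s1 ** 2 * A1)

def bisect_root(g, lo, hi, tol=1e-10, iters=200):
    # Simple derivative-free root finder; assumes g(lo), g(hi) have opposite signs.
    glo = g(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) * glo > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With the hypothetical data w = [0.2, 0.5, 0.9], z = [1, 0, 1], R = [1, 0], m = 5 and \alpha = 0.5, the score has an interior root at \sigma_1 = 0.135, which the solver recovers.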
In this section we find Bayes estimates for the unknown parameters \alpha, \sigma_1 and \sigma_2, considering both the case where the shape parameter is known and the case where it is unknown. In Bayes estimation we first need to specify a loss function. We adopt the quadratic loss function, the most commonly used symmetric loss function, defined as L(\hat{\theta},\theta)=(\hat{\theta}-\theta)^2, where \hat{\theta} is a point estimate of the parameter \theta. Under quadratic loss, the Bayes estimator is the posterior mean.
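A quick numerical sanity check on this fact (toy numbers, names ours): the mean of a set of posterior draws attains a smaller average quadratic loss than any nearby alternative point estimate.

```python
# Toy posterior draws (hypothetical numbers, for illustration only).
sample = [0.8, 1.1, 1.3, 0.9, 1.5, 1.0]
post_mean = sum(sample) / len(sample)

def risk(t):
    # Average quadratic loss (t - theta)^2 over the posterior draws.
    return sum((t - th) ** 2 for th in sample) / len(sample)

# The posterior mean beats shifted point estimates under quadratic loss.
assert risk(post_mean) < risk(post_mean + 0.1)
assert risk(post_mean) < risk(post_mean - 0.1)
```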
\[
p(\alpha,\sigma_1,\sigma_2 \mid w,z) = \frac{L(\alpha,\sigma_1,\sigma_2 \mid w,z)\,g(\alpha,\sigma_1,\sigma_2)}{\displaystyle\int_{\alpha}\int_{\sigma_1}\int_{\sigma_2} L(\alpha,\sigma_1,\sigma_2 \mid w,z)\,g(\alpha,\sigma_1,\sigma_2)\,d\alpha\,d\sigma_1\,d\sigma_2},
\]
\[
p(\alpha,\sigma_1,\sigma_2 \mid w,z) \propto \alpha^{a_1-1}\,\sigma_1^{a_2-m_k-1}\,\sigma_2^{a_3-n_k-1}\,e^{-b_1\alpha-b_2\sigma_1-b_3\sigma_2}\,(A_1A_2)^{-\frac{1}{\alpha}-1} \tag{3}
\]
\[
\propto G_{\alpha}(a_1,b_1)\,G_{\sigma_1}(a_2-m_k,b_2)\,G_{\sigma_2}(a_3-n_k,b_3)\,e^{-\psi},
\]
where \psi=\left(\frac{1}{\alpha}+1\right)\ln(A_1A_2).
We will use the Gibbs sampling method to generate a sample from the posterior density p(\alpha,\sigma_1,\sigma_2 \mid w,z) and compute the Bayes estimates. For the purpose of generating a sample from the posterior distribution, we assume that the prior densities are as described in Eq. (2). The full conditional posterior densities of \alpha, \sigma_1 and \sigma_2 given the data are:
\[
\pi(\alpha \mid \sigma_1,\sigma_2,w,z) \propto G_{\alpha}(a_1,b_1)\,e^{-\psi},
\]
\[
\pi(\sigma_1 \mid \alpha,\sigma_2,w,z) \propto G_{\sigma_1}(a_2-m_k,b_2)\,e^{-(\frac{1}{\alpha}+1)\ln(A_1)}, \tag{4}
\]
\[
\pi(\sigma_2 \mid \alpha,\sigma_1,w,z) \propto G_{\sigma_2}(a_3-n_k,b_3)\,e^{-(\frac{1}{\alpha}+1)\ln(A_2)}.
\]
\[
\hat{\alpha}_{MC}=\hat{E}(\alpha \mid w,z)=\frac{1}{M-M_0}\sum_{i=1}^{M-M_0}\alpha_{(i)}, \tag{5}
\]
\[
\hat{\sigma}_{1,MC}=\hat{E}(\sigma_1 \mid w,z)=\frac{1}{M-M_0}\sum_{i=1}^{M-M_0}\sigma_{1(i)},
\]
\[
\hat{\sigma}_{2,MC}=\hat{E}(\sigma_2 \mid w,z)=\frac{1}{M-M_0}\sum_{i=1}^{M-M_0}\sigma_{2(i)},
\]
where M is the total number of MCMC iterations, M_0 is the burn-in period, and \alpha_{(i)}, \sigma_{1(i)}, \sigma_{2(i)} are the draws retained after burn-in.
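Because of the e^{-\psi} factor, the full conditionals in (4) are not standard densities, so in practice Metropolis steps can be used within the Gibbs cycle. A minimal sketch (the gamma hyperparameters, random-walk step size, starting values and all function names are our assumptions):

```python
import math
import random

def log_post(alpha, s1, s2, w, z, R, m, n, a=(2.0, 2.0, 2.0), b=(2.0, 2.0, 2.0)):
    # Log posterior kernel of Eq. (3): gamma priors times the likelihood (1),
    # parameter-free constants dropped; hyperparameters are illustrative.
    if min(alpha, s1, s2) <= 0.0:
        return -math.inf
    k = len(w)
    mk = sum(z)
    nk = k - mk
    sumR = sum(r + 1 for r in R)
    A1 = sum((R[i] + 1) * (1.0 + alpha * w[i] / s1) for i in range(k - 1)) \
        + (m - sumR) * (1.0 + alpha * w[-1] / s1)
    A2 = sum((R[i] + 1) * (1.0 + alpha * w[i] / s2) for i in range(k - 1)) \
        + (n - sumR) * (1.0 + alpha * w[-1] / s2)
    lp = (a[0] - 1.0) * math.log(alpha) - b[0] * alpha
    lp += (a[1] - 1.0) * math.log(s1) - b[1] * s1
    lp += (a[2] - 1.0) * math.log(s2) - b[2] * s2
    lp += -mk * math.log(s1) - nk * math.log(s2)
    lp += -(1.0 / alpha + 1.0) * (math.log(A1) + math.log(A2))
    return lp

def mwg_sampler(w, z, R, m, n, M=2000, M0=500, step=0.3, seed=7):
    # Metropolis-within-Gibbs: update each coordinate in turn by a random
    # walk on the log scale; keep the draws after M0 burn-in iterations.
    rng = random.Random(seed)
    th = [1.0, 1.0, 1.0]                  # current (alpha, sigma1, sigma2)
    cur = log_post(th[0], th[1], th[2], w, z, R, m, n)
    draws = []
    for t in range(M):
        for j in range(3):
            prop = th[:]
            prop[j] = th[j] * math.exp(step * rng.gauss(0.0, 1.0))
            lp = log_post(prop[0], prop[1], prop[2], w, z, R, m, n)
            # acceptance ratio includes the Jacobian of the log-scale proposal
            logr = (lp + math.log(prop[j])) - (cur + math.log(th[j]))
            u = rng.random()
            if u > 0.0 and math.log(u) < logr:
                th, cur = prop, lp
        if t >= M0:
            draws.append(tuple(th))
    return draws
```

The Bayes estimates of Eq. (5) are then the coordinatewise means of `draws`.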
4 Interval Estimation
When the sample size is large enough, the normal approximation of the MLE can be used to construct asymptotic confidence intervals for the parameters \alpha, \sigma_1 and \sigma_2. The asymptotic normality of the MLE can be stated as (\hat{\Theta}-\Theta)\xrightarrow{d} N_3(0, I^{-1}(\Theta)), where \Theta=(\alpha,\sigma_1,\sigma_2) is the vector of parameters, \xrightarrow{d} denotes convergence in distribution, and I(\Theta) is the Fisher information matrix, i.e.
\[
I(\Theta) = -\begin{bmatrix} E(\ell_{\alpha\alpha}) & E(\ell_{\alpha\sigma_1}) & E(\ell_{\alpha\sigma_2})\\ E(\ell_{\sigma_1\alpha}) & E(\ell_{\sigma_1\sigma_1}) & E(\ell_{\sigma_1\sigma_2})\\ E(\ell_{\sigma_2\alpha}) & E(\ell_{\sigma_2\sigma_1}) & E(\ell_{\sigma_2\sigma_2}) \end{bmatrix}.
\]
The expected values of the second-order partial derivatives can be evaluated using integration techniques. Therefore, the 100(1-\gamma)\% approximate confidence intervals for \alpha, \sigma_1 and \sigma_2 are
\[
\hat{\alpha}\pm z_{\gamma/2}\sqrt{v_{11}}, \qquad \hat{\sigma}_1\pm z_{\gamma/2}\sqrt{v_{22}}, \qquad \hat{\sigma}_2\pm z_{\gamma/2}\sqrt{v_{33}},
\]
respectively, where v_{11}, v_{22}, v_{33} are the entries on the main diagonal of the inverse Fisher matrix I^{-1}(\Theta), and z_{\gamma/2} is the upper (\gamma/2)100\% percentile point of the standard normal distribution.
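In practice the expected Fisher matrix is often replaced by the observed information, the negative Hessian of the log-likelihood at the MLE, computed by finite differences. A generic sketch (names ours), demonstrated on a simple quadratic log-likelihood whose information matrix is known:

```python
def observed_info(loglik, theta, h=1e-5):
    # Negative numerical Hessian of loglik at theta (central differences);
    # its inverse approximates the asymptotic covariance of the MLE.
    p = len(theta)
    H = [[0.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            tpp = list(theta); tpp[i] += h; tpp[j] += h
            tpm = list(theta); tpm[i] += h; tpm[j] -= h
            tmp = list(theta); tmp[i] -= h; tmp[j] += h
            tmm = list(theta); tmm[i] -= h; tmm[j] -= h
            H[i][j] = (loglik(tpp) - loglik(tpm) - loglik(tmp) + loglik(tmm)) / (4.0 * h * h)
    return [[-H[i][j] for j in range(p)] for i in range(p)]

def wald_ci(est, var, z=1.959964):
    # est +/- z_{gamma/2} * sqrt(var); default z is the upper 2.5% point of N(0,1).
    half = z * var ** 0.5
    return est - half, est + half

# Demo: quadratic log-likelihood with independent curvatures 25 and 4.
ll = lambda t: -12.5 * (t[0] - 1.0) ** 2 - 2.0 * (t[1] - 2.0) ** 2
info = observed_info(ll, [1.0, 2.0])
```

Here `info` recovers the diagonal information matrix diag(25, 4), so the Wald interval for the first parameter uses variance 1/25.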
Two parametric bootstrap confidence interval methods are considered: (a) the percentile bootstrap and (b) the bootstrap-t (see Hall (1988) and Efron (1981)).
(1) Given the original data set (w,z)=\{(w_i,z_i);\, i=1,\ldots,k\}, 1\le k<\max\{n,m\}, where z_i=1 if the failure comes from population one and z_i=0 otherwise, estimate \alpha, \sigma_1 and \sigma_2 by maximum likelihood (say \hat{\alpha}, \hat{\sigma}_1, \hat{\sigma}_2).
(2) Generate a bootstrap sample (w^*,z^*) from the joint Generalized Pareto distribution with the parameters \hat{\alpha}, \hat{\sigma}_1, \hat{\sigma}_2 obtained in step (1).
(3) From (w^*,z^*), compute the bootstrap estimates \hat{\alpha}^*, \hat{\sigma}_1^*, \hat{\sigma}_2^*.
(4) Repeat steps (2) and (3) M times to obtain M bootstrap samples.
(5) Arrange the bootstrap estimates in ascending order as (\hat{\theta}_j^{[1]},\hat{\theta}_j^{[2]},\ldots,\hat{\theta}_j^{[M]}), j=1,2,3, where \hat{\theta}_1=\hat{\alpha}, \hat{\theta}_2=\hat{\sigma}_1, \hat{\theta}_3=\hat{\sigma}_2.
A two-sided 100(1-\gamma)\% percentile bootstrap confidence interval for each of the unknown parameters \alpha, \sigma_1 and \sigma_2 is then given by
\[
\big(\hat{\theta}_j^{[M\gamma/2]},\; \hat{\theta}_j^{[M(1-\gamma/2)]}\big), \qquad j=1,2,3.
\]
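The percentile bootstrap recipe is generic: estimate, regenerate parametrically, re-estimate, and read off ordered quantiles. For brevity this sketch uses a complete (uncensored) GP sample with known shape and the simple moment estimator of the scale based on E(X)=\sigma/(1-\alpha) for \alpha<1; the same loop applies with samples generated under the NJPC scheme. All names and the toy setup are ours:

```python
import random

def gp_sample(alpha, sigma, rng):
    # inverse-transform draw from the GP(alpha, sigma) distribution
    u = rng.random()
    return (sigma / alpha) * ((1.0 - u) ** (-alpha) - 1.0)

def percentile_boot_ci(data, shape, gamma, B, rng):
    n = len(data)
    est = (1.0 - shape) * sum(data) / n          # moment estimator of sigma
    boots = []
    for _ in range(B):                           # parametric resamples
        bs = [gp_sample(shape, est, rng) for _ in range(n)]
        boots.append((1.0 - shape) * sum(bs) / n)
    boots.sort()
    lo = boots[int(B * gamma / 2.0)]             # [B*gamma/2]-th ordered value
    hi = boots[int(B * (1.0 - gamma / 2.0)) - 1]
    return est, (lo, hi)
```

For example, with 200 draws from GP(0.25, 2.0) and B = 400 resamples, the routine returns the point estimate together with an equal-tailed 95% percentile interval.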
(1) Given the original data set (w,z)=\{(w_i,z_i);\, i=1,\ldots,k\}, 1\le k<\max\{n,m\}, with z_i defined as before, estimate \alpha, \sigma_1 and \sigma_2 by maximum likelihood (say \hat{\alpha}, \hat{\sigma}_1, \hat{\sigma}_2).
(2)-(3) As in the percentile bootstrap, generate a bootstrap sample (w^*,z^*) and compute the bootstrap estimates \hat{\alpha}^*, \hat{\sigma}_1^*, \hat{\sigma}_2^*.
(4) Compute the t-statistics T_1=\frac{\hat{\alpha}^*-\hat{\alpha}}{\sqrt{Var(\hat{\alpha}^*)}}, T_2=\frac{\hat{\sigma}_1^*-\hat{\sigma}_1}{\sqrt{Var(\hat{\sigma}_1^*)}} and T_3=\frac{\hat{\sigma}_2^*-\hat{\sigma}_2}{\sqrt{Var(\hat{\sigma}_2^*)}}, where Var(\hat{\alpha}^*), Var(\hat{\sigma}_1^*) and Var(\hat{\sigma}_2^*) are the asymptotic variances of \hat{\alpha}, \hat{\sigma}_1 and \hat{\sigma}_2, respectively.
(5) Repeat steps (2) to (4) M times to obtain T_j^{(1)}, T_j^{(2)}, \ldots, T_j^{(M)}, j=1,2,3.
(6) Arrange the T values obtained in step (5) in ascending order T_j^{[1]}, T_j^{[2]}, \ldots, T_j^{[M]}, j=1,2,3.
The two-sided 100(1-\gamma)\% bootstrap-t confidence intervals are then
\[
\Big(\hat{\alpha}+T_1^{[M\gamma/2]}\sqrt{Var(\hat{\alpha})},\; \hat{\alpha}+T_1^{[M(1-\gamma/2)]}\sqrt{Var(\hat{\alpha})}\Big),
\]
\[
\Big(\hat{\sigma}_1+T_2^{[M\gamma/2]}\sqrt{Var(\hat{\sigma}_1)},\; \hat{\sigma}_1+T_2^{[M(1-\gamma/2)]}\sqrt{Var(\hat{\sigma}_1)}\Big),
\]
\[
\Big(\hat{\sigma}_2+T_3^{[M\gamma/2]}\sqrt{Var(\hat{\sigma}_2)},\; \hat{\sigma}_2+T_3^{[M(1-\gamma/2)]}\sqrt{Var(\hat{\sigma}_2)}\Big).
\]
Using the MCMC techniques of Section 3.2, Bayes credible intervals for the parameters \alpha, \sigma_1 and \sigma_2 can be obtained as follows:
(1) Arrange \alpha_i, \sigma_{1i} and \sigma_{2i} in ascending order as \alpha_{[1]},\alpha_{[2]},\ldots,\alpha_{[M]}; \sigma_{1[1]},\sigma_{1[2]},\ldots,\sigma_{1[M]}; and \sigma_{2[1]},\sigma_{2[2]},\ldots,\sigma_{2[M]}.
(2) The two-sided 100(1-\gamma)\% credible intervals are then
\[
\big(\alpha_{[M\gamma/2]},\,\alpha_{[M(1-\gamma/2)]}\big), \qquad \big(\sigma_{1[M\gamma/2]},\,\sigma_{1[M(1-\gamma/2)]}\big) \quad \text{and} \quad \big(\sigma_{2[M\gamma/2]},\,\sigma_{2[M(1-\gamma/2)]}\big).
\]
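Extracting such an equal-tailed credible interval from the ordered MCMC draws is a one-liner (sketch, name ours):

```python
def credible_interval(draws, gamma):
    # equal-tailed 100(1 - gamma)% interval from posterior draws
    s = sorted(draws)
    M = len(s)
    return s[int(M * gamma / 2.0)], s[int(M * (1.0 - gamma / 2.0)) - 1]
```

For instance, applied to the draws of one coordinate returned by an MCMC run, it gives the lower and upper credible limits directly.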
6 Optimal Censoring Scheme
7 References
[1] Ashour, S. and Eraki, O. (2014), Parameter estimation for multiple Weibull populations under joint Type-II censoring, International Journal of Advanced Statistics and Probability, vol. 2, no. 2, pp. 102-107.
[4] Balakrishnan, N., Su, F. and Liu, K. Y. (2015), Exact likelihood inference for k exponential populations under joint progressive Type-II censoring, Communications in Statistics - Simulation and Computation, vol. 44, no. 4, pp. 902-923.
[9] Efron, B. (1981), Censored data and the bootstrap, Journal of the American Statistical Association, vol. 76, p. 312.
[10] Efron, B. and Tibshirani, R. J. (1993), An Introduction to the Bootstrap, Chapman and Hall, New York.
[12] Karandikar, R. L. (2006), On Markov Chain Monte Carlo (MCMC) method, Sadhana, vol. 31, part 2, pp. 81-104.
[13] Kundu, D. (2008), Bayesian inference and life testing plan for the Weibull distribution in presence of progressive censoring, Technometrics, vol. 50, pp. 144-154.
[15] Mondal, S. and Kundu, D. (2016), A new two sample Type-II progressive censoring scheme, arXiv:1609.05805.