
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/338046763

Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution

Article in Communications in Statistics - Simulation and Computation · December 2019
DOI: 10.1080/03610918.2019.1699111



Communications in Statistics - Simulation and Computation

ISSN: 0361-0918 (Print) 1532-4141 (Online) Journal homepage: https://www.tandfonline.com/loi/lssp20

Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution

M. Mohammadi Monfared, Reza Arabi Belaghi, M. H. Behzadi & Sukhdev Singh

To cite this article: M. Mohammadi Monfared, Reza Arabi Belaghi, M. H. Behzadi & Sukhdev Singh (2019): Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution, Communications in Statistics - Simulation and Computation, DOI: 10.1080/03610918.2019.1699111

To link to this article: https://doi.org/10.1080/03610918.2019.1699111

Published online: 18 Dec 2019.

COMMUNICATIONS IN STATISTICS - SIMULATION AND COMPUTATION
https://doi.org/10.1080/03610918.2019.1699111

Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution

M. Mohammadi Monfared^a, Reza Arabi Belaghi^b, M. H. Behzadi^a, and Sukhdev Singh^c

^a Department of Statistics, Science and Research Branch, Islamic Azad University, Tehran, Iran; ^b Department of Statistics, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran; ^c Department of Mathematics, Chandigarh University, Punjab, India

ABSTRACT
This paper considers the problems of estimation and prediction when lifetime data following the Poisson-exponential distribution are observed under type-I hybrid censoring. For both problems, we compute point and associated interval estimates under the classical and Bayesian approaches. For point estimates in the estimation problem, we compute maximum likelihood estimates using the Newton-Raphson, Expectation-Maximization and Stochastic Expectation-Maximization algorithms under the classical approach; under the Bayesian approach we compute Bayes estimates with the help of the Lindley approximation and an importance sampling technique, under informative and non-informative priors, using symmetric and asymmetric loss functions. The associated interval estimates are obtained using the Fisher information matrix and the Chen and Shao method under the classical and Bayesian approaches, respectively. Further, in the prediction problem, predictive point estimates and associated predictive interval estimates are computed by making use of the best unbiased and conditional median predictors under the classical approach, and Bayesian predictive and associated Bayesian predictive interval estimates. We analyze a real data set and conduct a Monte Carlo simulation study to compare the various proposed methods of estimation and prediction. Finally, a conclusion is given.

ARTICLE HISTORY
Received 10 July 2018
Accepted 23 November 2019

KEYWORDS
Bayesian estimation; Poisson-exponential distribution; EM algorithm; SEM algorithm; Shrinkage estimation; Lindley approximation; Prediction

1. Introduction
In the field of life testing and reliability analysis, the failure lifetimes of units put on a life test experiment are studied through various statistical distributions. The distributions are used to model the lifetime data according to the failure behavior of the units, such as decreasing, constant and increasing failure/hazard rates, or combinations of these such as a bathtub-shaped hazard rate. The choice of a lifetime distribution plays a very important role, and a lot of work on the problems of estimation and prediction has been done in the existing literature for various statistical distributions, such as the Weibull, generalized exponential, Burr and inverted families, when the observed data are complete or censored; see Banerjee and Kundu (2008), Kundu and Pradhan (2009), Kundu and Howlader

CONTACT Reza Arabi Belaghi rezaarabi11@gmail.com Department of Statistics, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran.
© 2019 Taylor & Francis Group, LLC

(2010), Singh et al. (2019) and the references cited therein. However, not only the hazard rate behavior but also the structure and cause of failure of the units put on the experiment need attention. For instance, consider the situation where a unit may fail due to one of several risks rather than a particular risk, and the true cause of failure is not known. In fact, problems involving latent risks (latent in the sense that there is no information about which factor was responsible for the failure) often occur in the field of reliability analysis. For example, in modular systems, a module that contains many components can be replaced without identifying the exact failing component, to keep a system running. Further, problems involving complementary risks (in which the lifetime associated with a particular risk is not observable, so only the maximum lifetime among all risks is considered) also arise in many experiments. For example, the maximum component lifetimes are often considered in parallel systems; one may also refer to Basu and Klein (1982) for several such types of problems. However, in many situations lifetime data arise under complementary risk problems in the presence of latent risks, and the failure rate function may stabilize after an initial increase. For instance, consider the ball bearings data, which represent the number of revolutions measured before failure of the ball bearings; see Lawless (2011). During the test, the ball bearings may be exposed to several unobserved complementary risks, such as contamination from dirt from the casting of the casing, wear particles from hardened steel gear wheels, and harsh working environments. All these risks lead to an increasing failure rate function; however, if a ball bearing survives exposure to several complementary risks, its failure rate may stabilize during the rest of its mission time. To study such types of lifetime data, Cancho et al. (2011) suggested the Poisson-exponential (PE) distribution, which motivates us to write this paper.
In their work, Cancho et al. (2011) discussed the failure rate and moments of the PE distribution, and also showed the existence and uniqueness of the maximum likelihood estimators. Further, they employed the Expectation-Maximization algorithm to compute the maximum likelihood estimates, and obtained associated interval estimates using the Fisher information matrix. Louzada-Neto et al. (2011) also considered the PE distribution, and discussed Bayesian inference using a non-informative prior; they also presented a discussion on model comparison, followed by the analysis of the ball bearings data set using the developed methodology. Later, Singh et al. (2014) considered the problem of estimating the unknown parameters of the PE distribution under the Bayesian approach by making use of symmetric and asymmetric loss functions, and also computed highest posterior density interval estimates. Recently, Rodrigues et al. (2016) discussed various methods for estimating the unknown parameters of the PE distribution. Singh et al. (2016) also discussed maximum likelihood and Bayesian estimation under progressive type-II censoring with binomial removals, and illustrated the application of the PE distribution to ovarian cancer data. We observe that all the work mentioned has been done for complete observed data, and not much attention has been given to data observed under some censoring, which becomes necessary due to time restrictions and cost constraints. Further, the problem of prediction under the classical and Bayesian approaches has been discussed neither for complete samples nor in the presence of any censoring. We also observe that, although the Bayesian approach has been discussed based on complete samples, the selection of hyper-parameter values and preliminary test estimators have not received much attention. All

these gaps in the existing literature motivate us to write this paper, with three main objectives, when lifetime data following the PE distribution are observed in the presence of type-I hybrid censoring. The first objective is to consider the problem of estimating the unknown parameters of the PE distribution using classical and Bayesian approaches. The second objective is to provide inference on the censored observations using various classical and Bayesian predictors. We also compute the associated interval estimates for both the estimation and the prediction problems. Finally, a discussion on the selection of hyper-parameter values and on shrinkage preliminary test estimators is given. The results from this work can be used for complete samples and for samples observed under type-I and type-II censoring schemes. The rest of this paper is organized as follows. In Section 2 we first discuss the PE distribution and type-I hybrid censoring in detail. Section 3 deals with maximum likelihood estimation. Section 4 covers estimation methods under the Bayesian framework. Section 5 deals with the shrinkage preliminary test estimators. Sections 6 and 7 discuss the problem of prediction under the classical and Bayesian approaches, respectively. The analysis of a real data set and a simulation study are presented in Section 8. Finally, this work is concluded in Section 9.

2. Lifetime model and censoring


In this section we first discuss the Poisson-exponential distribution as a lifetime model, and then type-I hybrid censoring in detail. Suppose that for the units under consideration there are, say, M random competing risks, where the random variable M has a zero-truncated Poisson distribution with probability mass function (PMF)

P(M = m) = e^{−θ} θ^m / (m! (1 − e^{−θ})),  m = 1, 2, ...,  θ > 0.

Further assume that T_j denotes the time to the occurrence of the event of interest due to the j-th competing risk, j = 1, 2, ..., and that given M = m the random variables T_j, j = 1, 2, ..., m, are independent and identically distributed with an exponential density function with rate parameter λ. Then the maximum over all causes, say the random variable X = max(T_1, T_2, ..., T_M), follows the Poisson-exponential (PE) distribution, denoted PE(θ, λ), with probability density function (PDF) and cumulative distribution function (CDF) of the following form:

f(x; θ, λ) = θλ e^{−λx − θe^{−λx}} / (1 − e^{−θ}),  x > 0, θ > 0, λ > 0,   (1)

F(x; θ, λ) = 1 − (1 − e^{−θe^{−λx}}) / (1 − e^{−θ}),   (2)
where θ is the shape parameter, and θ/(1 − e^{−θ}) represents the mean number of complementary risks. Further, λ is the scale parameter, and also denotes the hazard rate of the distribution of time-to-event due to an individual complementary risk; the PE distribution itself has an increasing hazard rate. As θ approaches zero, this distribution converges to the traditional exponential distribution with constant hazard rate λ. One may further refer to Cancho et al. (2011) for the various characteristics of this distribution.
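Since later sections repeatedly invert the CDF (2), a small computational sketch may help. The following Python snippet (our illustration, assuming NumPy; the names `pe_pdf`, `pe_cdf`, `pe_rvs` are ours, not from the paper) implements (1), (2) and inverse-CDF sampling: solving v = F(x) gives x = −(1/λ) ln(−(1/θ) ln(1 − (1 − v)(1 − e^{−θ}))).

```python
import numpy as np

def pe_pdf(x, theta, lam):
    """PDF of PE(theta, lambda), Eq. (1)."""
    return theta * lam * np.exp(-lam * x - theta * np.exp(-lam * x)) / (1.0 - np.exp(-theta))

def pe_cdf(x, theta, lam):
    """CDF of PE(theta, lambda), Eq. (2)."""
    return 1.0 - (1.0 - np.exp(-theta * np.exp(-lam * x))) / (1.0 - np.exp(-theta))

def pe_rvs(theta, lam, size, seed=None):
    """Draw PE variates by inverting the CDF: with v uniform on (0, 1),
    x = -(1/lam) * ln(-(1/theta) * ln(1 - (1 - v) * (1 - exp(-theta))))."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=size)
    return -np.log(-np.log(1.0 - (1.0 - v) * (1.0 - np.exp(-theta))) / theta) / lam
```

The same inversion reappears in the predictors of Section 6.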
Hybrid censoring is a mixture of the two well-known type-I and type-II censoring schemes. The type-I censoring scheme may not be useful in situations in which very few or no failures occur before a prefixed time, say T, is reached. On the other hand, type-II censoring may not be desirable if the units are highly reliable and a prefixed number of units, say r, take too long to fail. To take care of the time restriction and the cost involved, type-I/type-II hybrid censoring suggests terminating the experiment either on reaching a prefixed time T or on the occurrence of a prefixed number r of failures, whichever is earlier/later. The lifetime data under this censoring are observed in the following way. Suppose that n test units, whose lifetimes are identically distributed with PDF f(x; Θ) and CDF F(x; Θ), Θ being the vector of unknown parameters, are put on a life test experiment. Further assume that X_1 < X_2 < ... < X_n are the ordered lifetimes of the test units. Then termination of the experiment under type-I hybrid censoring at the random time point C = min(T, X_r) leads to one of the following two types of observed lifetimes:

Case I: {X_1 < ... < X_d}  if d < r and X_d < T < X_{d+1},
Case II: {X_1 < ... < X_r}  if X_r < T.

Therefore the observed data under this censoring can be represented as x = (x_1, x_2, ..., x_R), where R = d and C = T when Case I occurs, and R = r and C = x_r when Case II occurs. Notice that Case I and Case II correspond to the type-I and the type-II censoring schemes, respectively. Moreover, type-I hybrid censoring reduces to the complete sampling case when R = n and X_R < T. In a similar way, type-II hybrid censoring can also be defined, with termination time point C = max(T, X_r).
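The two cases above can be sketched in a few lines (our illustrative helper, assuming NumPy; `type1_hybrid_censor` is not code from the paper):

```python
import numpy as np

def type1_hybrid_censor(lifetimes, r, T):
    """Observe a sample under type-I hybrid censoring: the test stops at
    C = min(T, X_(r)). Returns (observed order statistics, R, C)."""
    x = np.sort(np.asarray(lifetimes))
    if x[r - 1] < T:                 # Case II: the r-th failure occurs before T
        return x[:r], r, x[r - 1]
    d = int(np.sum(x < T))           # Case I: only d (< r) failures before T
    return x[:d], d, T
```

For example, with r = 4 and T = 1.0, a sample whose fourth order statistic exceeds 1.0 is censored at C = T (Case I), while with r = 2 the second failure arrives first and C = X_(2) (Case II).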

3. Maximum likelihood estimation


This section deals with obtaining the maximum likelihood estimators (MLEs) of the unknown parameters of the Poisson-exponential distribution when the lifetime data are observed under type-I hybrid censoring. Suppose that a random sample of n units whose lifetimes follow the PE distribution is put on a life test experiment, and the test is terminated at the random time point C = min{T, X_{r:n}}, where 1 ≤ r ≤ n. Here r (0 < r ≤ n) and T are, respectively, the prefixed number of failures and the prefixed time responsible for terminating the experiment. The observed data under type-I hybrid censoring then consist of R lifetimes, say x = (x_1, x_2, ..., x_R), up to the termination time point C. Now the likelihood function of (θ, λ) based on the observed sample x can be written as

L(θ, λ | x) = k ∏_{i=1}^{R} f(x_i; θ, λ) [1 − F(C; θ, λ)]^{n−R},   (3)

where k = n!/(n − R)!, and the PDF f(·; θ, λ) and CDF F(·; θ, λ) are given by (1) and (2), respectively. In their work, Cancho et al. (2011) gave a formal proof of the existence and uniqueness of the MLEs based on complete data; the same idea can also be used here. Therefore, to obtain the MLEs of θ and λ, we first require the partial derivatives of the log-likelihood function with respect to θ and λ, and then need to solve the equations obtained by equating both partial derivatives to zero. However, the MLEs do not turn out to have closed forms, and some numerical technique such as the Newton-Raphson (NR) method is required to compute them. The Expectation-Maximization (EM) algorithm suggested by Dempster et al. (1977) has also been used for computing MLEs, and the existing literature suggests that the EM algorithm is more reliable than the NR method, particularly when data are incomplete and observed under some censoring; see Pradhan and Kundu (2009). In the next section we consider the EM algorithm.
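As a hedged numerical illustration, the censored log-likelihood built from (3) can also be maximized directly with a generic optimizer standing in for a hand-coded Newton-Raphson (our sketch, assuming NumPy/SciPy; `pe_mle` and the Nelder-Mead choice are ours, not the paper's routine):

```python
import numpy as np
from scipy.optimize import minimize

def negloglik(params, x, n, C):
    """Negative log-likelihood from Eq. (3), dropping the constant ln k."""
    theta, lam = params
    if theta <= 0.0 or lam <= 0.0:
        return np.inf
    R = len(x)
    logf = (np.log(theta) + np.log(lam) - lam * x
            - theta * np.exp(-lam * x) - np.log1p(-np.exp(-theta)))
    # ln(1 - F(C)) = ln(1 - e^{-theta e^{-lam C}}) - ln(1 - e^{-theta})
    logS = np.log1p(-np.exp(-theta * np.exp(-lam * C))) - np.log1p(-np.exp(-theta))
    return -(logf.sum() + (n - R) * logS)

def pe_mle(x, n, C, start=(1.0, 1.0)):
    """Numerical MLE of (theta, lambda); a derivative-free stand-in for NR."""
    res = minimize(negloglik, start, args=(np.asarray(x), n, C), method="Nelder-Mead")
    return res.x
```

The same `negloglik` can feed any gradient-based solver if the derivatives of Section 3 are coded explicitly.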

3.1. EM algorithm
The EM algorithm starts by writing the likelihood function of the lifetimes of all n units of the complete sample, say W = (W_1, W_2, ..., W_n). This complete sample can be seen as a combination W = (X, Z) of the observed lifetimes x = (x_1, x_2, ..., x_R) and the censored lifetimes Z = (Z_1, Z_2, ..., Z_{n−R}). Setting the partial derivatives of the complete-sample log-likelihood l = ln L(θ, λ | W) to zero gives

∂l/∂θ = n/θ − n/(e^θ − 1) − ( Σ_{i=1}^{R} e^{−λx_i} + Σ_{i=1}^{n−R} e^{−λz_i} ) = 0,   (4)

∂l/∂λ = n/λ − ( Σ_{i=1}^{R} x_i + Σ_{i=1}^{n−R} z_i ) + θ ( Σ_{i=1}^{R} x_i e^{−λx_i} + Σ_{i=1}^{n−R} z_i e^{−λz_i} ) = 0.   (5)

Notice that the Expectation step (E-step) of the EM algorithm replaces the censored observations with their conditional expectations, and the Maximization step (M-step) searches for the values of (θ, λ), called the maximum likelihood estimates, which maximize the expected log-likelihood from the E-step. So if we denote by λ^{(k)} the estimate of λ at the k-th stage, then the (k+1)-th stage estimate of λ can be obtained by solving

n/θ^{(k)} − n/(e^{θ^{(k)}} − 1) − Σ_{i=1}^{R} e^{−λ^{(k+1)} x_i} − (n − R) E_1(θ^{(k)}, λ^{(k)}) = 0,   (6)

where E_1(θ, λ) = E(e^{−λZ_i} | Z_i > C), obtained using the conditional density f_{Z|X}(z_i | x, Z_i > C) = f(z_i; θ, λ)/(1 − F(C; θ, λ)), z_i > C, is given by

E_1(θ, λ) = E(e^{−λZ_i} | Z_i > C) = ( θλ / (1 − e^{−θe^{−λC}}) ) ∫_C^∞ e^{−2λx − θe^{−λx}} dx.

Next, given the updated value λ^{(k+1)}, the (k+1)-th stage estimate of θ is given by

θ^{(k+1)} = [ Σ_{i=1}^{R} x_i + (n − R) E_2(θ^{(k)}, λ^{(k)}) − n (λ^{(k+1)})^{−1} ] / [ Σ_{i=1}^{R} x_i e^{−λ^{(k)} x_i} + (n − R) E_3(θ^{(k)}, λ^{(k)}) ],   (7)
where

E_2(θ, λ) = E(Z_i | Z_i > C) = ( θλ / (1 − e^{−θe^{−λC}}) ) ∫_C^∞ x e^{−λx − θe^{−λx}} dx,

and

E_3(θ, λ) = E(Z_i e^{−λZ_i} | Z_i > C) = ( θλ / (1 − e^{−θe^{−λC}}) ) ∫_C^∞ x e^{−2λx − θe^{−λx}} dx.

The iterative procedure in Eqs. (6) and (7) is terminated on achieving convergence, that is, when |θ^{(k+1)} − θ^{(k)}| + |λ^{(k+1)} − λ^{(k)}| < ε for a small value of ε. We denote the resulting maximum likelihood estimates of (θ, λ) by (θ̂, λ̂).
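One EM iteration can be sketched numerically (our illustration, assuming NumPy/SciPy; the integrals E_1, E_2, E_3 are evaluated by quadrature, and we use the sign convention of Eq. (4) for the censored term in (6)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def cond_expectations(theta, lam, C):
    """E1, E2, E3 of Section 3.1: conditional moments of a censored Z > C."""
    norm = theta * lam / (1.0 - np.exp(-theta * np.exp(-lam * C)))
    dens = lambda x: np.exp(-lam * x - theta * np.exp(-lam * x))
    E1 = norm * quad(lambda x: np.exp(-lam * x) * dens(x), C, np.inf)[0]
    E2 = norm * quad(lambda x: x * dens(x), C, np.inf)[0]
    E3 = norm * quad(lambda x: x * np.exp(-lam * x) * dens(x), C, np.inf)[0]
    return E1, E2, E3

def em_step(theta, lam, x, n, C):
    """One EM iteration: solve Eq. (6) for the new lambda, then Eq. (7)
    for the new theta (censored sums replaced by their expectations)."""
    R = len(x)
    E1, E2, E3 = cond_expectations(theta, lam, C)
    g = lambda l: (n / theta - n / np.expm1(theta)
                   - np.exp(-l * x).sum() - (n - R) * E1)
    lam_new = brentq(g, 1e-8, 100.0)
    theta_new = ((x.sum() + (n - R) * E2 - n / lam_new)
                 / ((x * np.exp(-lam * x)).sum() + (n - R) * E3))
    return theta_new, lam_new
```

Iterating `em_step` until the convergence criterion above is met yields (θ̂, λ̂).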
There is one more advantage of employing the EM algorithm: the missing information principle suggested by Louis (1982) can be used to compute the Fisher information matrix, which is in turn helpful for computing interval estimates. As the complete data can be seen as the combination of the observed data and the censored data, the observed information can likewise be decomposed as: observed information = complete information − missing information. Mathematically, we can write the observed information I_X(θ, λ) as

I_X(θ, λ) = I_W(θ, λ) − I_{W|X}(θ, λ),   (8)

where I_W(θ, λ) and I_{W|X}(θ, λ) represent the complete information and the missing information, respectively; the evaluated expressions for both are reported in the Appendix. Now the asymptotic variance-covariance matrix of (θ̂, λ̂) can be obtained by inverting the Fisher information matrix in (8) at the maximum likelihood estimates, that is, I_X^{−1}(θ̂, λ̂). Subsequently, 100(1 − α)% asymptotic confidence intervals for θ and λ are given by θ̂ ± Z_{α/2} √(var(θ̂)) and λ̂ ± Z_{α/2} √(var(λ̂)), respectively, where Z_{α/2} is the upper (α/2)-th percentile of the standard normal distribution.
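A numerical stand-in for this interval construction is to invert a finite-difference observed-information matrix rather than the Appendix expressions (our own sketch; `negloglik` is any function returning the negative log-likelihood of the two parameters):

```python
import numpy as np
from scipy.stats import norm

def wald_ci(theta_hat, lam_hat, negloglik, args, alpha=0.05, h=1e-4):
    """100(1 - alpha)% asymptotic CIs from a central-difference Hessian of
    the negative log-likelihood, inverted at the MLE."""
    p = np.array([theta_hat, lam_hat])
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            e_i, e_j = np.eye(2)[i] * h, np.eye(2)[j] * h
            H[i, j] = (negloglik(p + e_i + e_j, *args) - negloglik(p + e_i - e_j, *args)
                       - negloglik(p - e_i + e_j, *args) + negloglik(p - e_i - e_j, *args)) / (4 * h * h)
    cov = np.linalg.inv(H)                      # asymptotic variance-covariance
    z = norm.ppf(1 - alpha / 2)
    se = np.sqrt(np.diag(cov))
    return (p[0] - z * se[0], p[0] + z * se[0]), (p[1] - z * se[1], p[1] + z * se[1])
```

This agrees with the missing-information construction up to numerical error whenever the log-likelihood is smooth at the MLE.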
Observe that one of the purposes of employing the EM algorithm instead of the NR method was to avoid the computations involved in the second derivatives of the associated log-likelihood function. However, even with the EM algorithm we are stuck with the integrals involved in the expressions E_i(θ, λ), i = 1, 2, 3, which in turn require a numerical technique to evaluate. A similar issue arises when using the missing information principle. In the next section we employ the Stochastic Expectation-Maximization (SEM) algorithm to avoid this situation.

3.2. SEM algorithm


In many situations, replacing the censored observations with their expectations in the E-step leads to complex and intractable expressions, so employing the EM algorithm does not deliver the advantage of avoiding numerical techniques. For such situations, Wei and Tanner (1990) suggested approximating the expectations in the E-step by a Monte Carlo average. However, Wang and Cheng (2009) found that the Monte Carlo average approximation may still lead to a complicated and time-consuming maximization. Later, Diebolt and Celeux (1993) suggested that the E-step can be replaced by a Stochastic step (S-step) executed by simulation. The idea is to replace the censored observations by data randomly drawn from the conditional distribution given the results of the previous step, and then to proceed as with a complete sample. This turns out to be more appropriate and computationally less burdensome than the EM algorithm (see Tregouet et al. (2004)), and has recently received much attention from researchers; see Zhang et al. (2014), Belaghi et al. (2017) and Singh et al. (2019). Next we make use of the same idea, and generate the (n − R) independent censored lifetimes z_i from the conditional distribution function G_i(z_i; θ, λ | Z_i > C), i = 1, 2, ..., (n − R), given by

G_i(z_i; θ, λ | Z_i > C) = ( F(z_i; θ, λ) − F(C; θ, λ) ) / ( 1 − F(C; θ, λ) ),  z_i > C.
Then the maximum likelihood estimate of λ at the (k+1)-th stage can be obtained by solving (compare (4))

n/θ^{(k)} − n/(e^{θ^{(k)}} − 1) − ( Σ_{i=1}^{R} e^{−λ^{(k+1)} x_i} + Σ_{i=1}^{n−R} e^{−λ^{(k+1)} z_i} ) = 0.

Next, given the updated value λ^{(k+1)}, the (k+1)-th stage estimate of θ is given by

θ^{(k+1)} = [ ( Σ_{i=1}^{R} x_i + Σ_{i=1}^{n−R} z_i ) − n (λ^{(k+1)})^{−1} ] / [ Σ_{i=1}^{R} x_i e^{−λ^{(k+1)} x_i} + Σ_{i=1}^{n−R} z_i e^{−λ^{(k+1)} z_i} ].

As in the previous section, the above iterative procedure is terminated on achieving convergence.
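A minimal sketch of one SEM iteration (our code, assuming NumPy/SciPy; the S-step inverts the conditional CDF above by drawing v uniformly between F(C) and 1):

```python
import numpy as np
from scipy.optimize import brentq

def pe_cdf(x, theta, lam):
    return 1.0 - (1.0 - np.exp(-theta * np.exp(-lam * x))) / (1.0 - np.exp(-theta))

def pe_ppf(v, theta, lam):
    return -np.log(-np.log(1.0 - (1.0 - v) * (1.0 - np.exp(-theta))) / theta) / lam

def sem_step(theta, lam, x, n, C, rng):
    """One SEM iteration: the S-step imputes the n - R censored lifetimes by
    inverting G(z) = (F(z) - F(C)) / (1 - F(C)), z > C; the M-step then solves
    the completed-sample likelihood equations."""
    R = len(x)
    u = rng.uniform(size=n - R)
    FC = pe_cdf(C, theta, lam)
    z = pe_ppf(FC + u * (1.0 - FC), theta, lam)      # draws from (C, inf)
    w = np.concatenate([x, z])                       # completed sample
    g = lambda l: n / theta - n / np.expm1(theta) - np.exp(-l * w).sum()
    lam_new = brentq(g, 1e-9, 500.0)
    theta_new = (w.sum() - n / lam_new) / (w * np.exp(-lam_new * w)).sum()
    return theta_new, lam_new
```

Repeating `sem_step` and averaging the later iterates gives the SEM estimates; unlike the EM sketch, no numerical integration is needed.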

4. Bayesian estimation
This section deals with obtaining Bayesian estimators of the parameters θ and λ. A survey of the literature shows that, based on complete samples drawn from the PE(θ, λ) distribution, Louzada-Neto et al. (2011) suggested a gamma prior for θ and Jeffreys' prior for λ, and the same priors were considered by Singh et al. (2016). In this work we suggest independent gamma priors for both θ and λ, with prior densities

π(θ | a_1, b_1) ∝ θ^{a_1 − 1} e^{−b_1 θ},  a_1 > 0, b_1 > 0,
π(λ | a_2, b_2) ∝ λ^{a_2 − 1} e^{−b_2 λ},  a_2 > 0, b_2 > 0,

where a_1, b_1, a_2 and b_2 are the hyper-parameters, and provide information about the parameters θ and λ. The joint prior density for (θ, λ) then becomes π(θ, λ) ∝ π(θ | a_1, b_1) π(λ | a_2, b_2). The prior proposed by Louzada-Neto et al. (2011) can be seen as a particular case when the hyper-parameter values a_2 and b_2 are set to zero. Furthermore, as all hyper-parameter values approach zero, the prior density reduces to the non-informative prior π(θ, λ) ∝ 1/(θλ). The joint posterior density of (θ, λ) under the proposed prior π(θ, λ) becomes
π(θ, λ | x) ∝ G_λ( R + a_2, b_2 + Σ_{i=1}^{R} x_i ) · G_{θ|λ}( R + a_1, b_1 + Σ_{i=1}^{R} e^{−λx_i} ) · h(θ, λ),   (9)

where G_·(·, ·) represents the density function of the gamma distribution, and h(θ, λ) is given by

h(θ, λ) = (1 − e^{−θ})^{−n} [1 − e^{−θe^{−λC}}]^{n−R}.   (10)
Notice that the selection of hyper-parameter values can be made using past available data. For instance, consider the situation in which some K samples drawn from the PE(θ, λ) distribution are available. Then the maximum likelihood estimates of θ and λ, say θ̂_j and λ̂_j, for each sample j = 1, 2, ..., K can be obtained. Equating the mean and variance of the θ̂_j (and of the λ̂_j) with the mean and variance of the corresponding priors then yields the hyper-parameter values. For example, consider the parameter θ. For the suggested gamma prior of θ, Mean(θ) = a_1/b_1 and Variance(θ) = a_1/b_1². Now, equating the mean and variance of θ̂_j, j = 1, 2, ..., K, with the mean and variance of the gamma prior of θ, we get

(1/K) Σ_{j=1}^{K} θ̂_j = a_1/b_1,  and  (1/(K−1)) Σ_{j=1}^{K} ( θ̂_j − (1/K) Σ_{i=1}^{K} θ̂_i )² = a_1/b_1².

Solving the above equations, the estimated hyper-parameters turn out to be

a_1 = [ (1/K) Σ_{j=1}^{K} θ̂_j ]² / [ (1/(K−1)) Σ_{j=1}^{K} ( θ̂_j − (1/K) Σ_{i=1}^{K} θ̂_i )² ],  and

b_1 = [ (1/K) Σ_{j=1}^{K} θ̂_j ] / [ (1/(K−1)) Σ_{j=1}^{K} ( θ̂_j − (1/K) Σ_{i=1}^{K} θ̂_i )² ].

Owing to the similarity of the priors, the hyper-parameter values a_2 and b_2 for the parameter λ can be obtained in the same way, by replacing a_1, b_1 with a_2, b_2 and θ with λ in the above equations. One may also refer to the similar discussions presented by Dey et al. (2016) and Singh and Tripathi (2018) in their works.
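This moment-matching step is a two-liner in practice (our helper, assuming NumPy; `past_mles` stands for the K past estimates θ̂_j):

```python
import numpy as np

def gamma_hyperparams(past_mles):
    """Moment-matching choice of gamma hyper-parameters: equate the sample
    mean m and unbiased sample variance s2 of past MLEs to a/b and a/b^2."""
    est = np.asarray(past_mles, dtype=float)
    m = est.mean()
    s2 = est.var(ddof=1)        # the (K - 1)-divisor variance used in the text
    return m * m / s2, m / s2   # (a, b)
```

For example, past estimates with sample mean 2 and sample variance 1 give (a, b) = (4, 2), so that the matched gamma prior has mean 4/2 = 2 and variance 4/4 = 1.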
Since Bayesian inference is associated with a loss function, the selection of the loss function plays a very important role. In the existing literature a lot of attention has been given to the squared error loss (SEL) function, under which underestimation and overestimation are treated the same. However, in many practical situations underestimation may be more serious than overestimation, or vice versa, and asymmetric loss functions such as the linear-exponential (LINEX) loss and the general entropy loss (GEL) are then suggested. Notice that the SEL function is given by δ_SEL(φ(η), φ̂(η)) = (φ̂(η) − φ(η))², the LINEX loss (LL) function is given by δ_LL(φ(η), φ̂(η)) = exp( ν(φ̂(η) − φ(η)) ) − ν(φ̂(η) − φ(η)) − 1, ν ≠ 0, and the GEL function is given by δ_GEL(φ(η), φ̂(η)) = (φ̂(η)/φ(η))^q − q ln(φ̂(η)/φ(η)) − 1, q ≠ 0. Further notice that the magnitudes of ν in LL and q in GEL reflect the degree of asymmetry: negative values make underestimation more serious than overestimation, and vice versa for positive values. Now the Bayes estimators of a function φ(η), under the considered prior π(θ, λ), using the LL and GEL functions are respectively given by

φ̂_LL(η) = −(1/ν) ln( E(e^{−νφ(η)} | x) )  and  φ̂_GEL(η) = ( E((φ(η))^{−q} | x) )^{−1/q}.

It is to be noticed that, for q = −1 in GEL, the Bayes estimator of a function φ(η) coincides with the Bayes estimator of φ(η) under the SEL function. Further, it can be observed that the Bayes estimators do not admit closed form expressions; therefore in the next section we consider approximation methods.

4.1. Lindley approximation


Lindley (1980) suggested an approximation method which has been used extensively in the existing literature to compute Bayes estimates when the Bayes estimators do not admit a closed form. For a two-parameter distribution, we have briefed the method in the Appendix. Let us consider the parameter θ and the LL function. We first need to evaluate E(e^{−νθ} | x), and following the Appendix with g(φ) = e^{−νθ}, the Bayes estimator turns out to be

θ̂_LL = −(1/ν) ln[ e^{−νθ̂} − ν e^{−νθ̂} ( q̂_1 σ̂_12 + q̂_2 σ̂_22 ) + 0.5 ( ν² e^{−νθ̂} σ̂_22 − ν e^{−νθ̂} ( σ̂_11 σ̂_12 l_30 + σ̂_22² l_03 + 3 σ̂_21 σ̂_22 l_12 ) ) ].

In a similar way, the Bayes estimator of θ under the GEL function turns out to be

θ̂_GEL = [ θ̂^{−q} − q θ̂^{−(q+1)} ( q̂_1 σ̂_12 + q̂_2 σ̂_22 ) + 0.5 ( q(q+1) θ̂^{−(q+2)} σ̂_22 − q θ̂^{−(q+1)} ( σ̂_22² l_03 + 3 σ̂_12 σ̂_22 l_12 + σ̂_11 σ̂_21 l_30 ) ) ]^{−1/q}.

Notice that all the quantities on the right-hand sides of the above expressions are evaluated at the maximum likelihood estimates of θ and λ. Bayes estimates of λ can be obtained in a similar way; details of the involved expressions are not reported here for the sake of conciseness. Notice that the method of Lindley provides only the Bayes estimates, not the associated highest posterior density (HPD) interval estimates. Therefore, next we use an importance sampling technique, which will further help to construct HPD interval estimates.

4.2. Importance sampling


Importance sampling is a very useful technique for drawing samples from the posterior density, which can then be used to compute Bayes estimates and to construct HPD interval estimates for θ and λ using the idea of Chen and Shao (1999). Observe from (9) that values λ_i can be generated from the G_λ( R + a_2, b_2 + Σ_{i=1}^{R} x_i ) distribution, and then, using the generated values λ_i, values θ_i can be generated from G_{θ|λ_i}( R + a_1, b_1 + Σ_{i=1}^{R} e^{−λ_i x_i} ). In this way, we can generate a sample of size s for (θ, λ), say {(θ_1, λ_1), (θ_2, λ_2), ..., (θ_s, λ_s)}. This sample can be used to obtain the Bayes estimators of θ under the LL and GEL functions as

θ̂_LL = −(1/ν) ln( Σ_{i=1}^{s} e^{−νθ_i} h(θ_i, λ_i) / Σ_{i=1}^{s} h(θ_i, λ_i) ),  and

θ̂_GEL = ( Σ_{i=1}^{s} θ_i^{−q} h(θ_i, λ_i) / Σ_{i=1}^{s} h(θ_i, λ_i) )^{−1/q},

where h(θ, λ) is given by (10). Next we use the idea of Chen and Shao (1999), also explained in the work of Singh and Tripathi (2016), to construct HPD interval estimates for θ and λ.
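A sketch of this sampler (our code, assuming NumPy; `nu` and `q` are the LL and GEL asymmetry parameters, and the function names are ours):

```python
import numpy as np

def bayes_importance(x, n, C, a1, b1, a2, b2, nu=1.0, q=1.0, s=4000, seed=0):
    """Importance sampling from the factorization in Eq. (9): lambda from a
    gamma, theta from a gamma given lambda, weights h(theta, lambda) of Eq. (10)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    R = len(x)
    lam = rng.gamma(R + a2, 1.0 / (b2 + x.sum()), size=s)
    theta = rng.gamma(R + a1, 1.0 / (b1 + np.exp(-np.outer(lam, x)).sum(axis=1)))
    w = ((1.0 - np.exp(-theta)) ** (-n)
         * (1.0 - np.exp(-theta * np.exp(-lam * C))) ** (n - R))
    w /= w.sum()                                               # normalized weights
    theta_sel = np.sum(w * theta)                              # SEL (q = -1 in GEL)
    theta_ll = -np.log(np.sum(w * np.exp(-nu * theta))) / nu   # LINEX
    theta_gel = np.sum(w * theta ** (-q)) ** (-1.0 / q)        # GEL
    return theta_sel, theta_ll, theta_gel
```

The weighted sample {(θ_i, λ_i), w_i} is the same one fed to the Chen-Shao HPD construction.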

5. Shrinkage preliminary test estimator


In the Bayesian approach we make use of known prior information on some or all of the parameters; however, incorporating the information as a constraint may lead to restricted models. Therefore, the validity of an estimator resulting from a restricted model may be under suspicion, and this motivates the need for a preliminary test on the restrictions. For the relevant literature, we refer to the work of Saleh (2006) and Belaghi et al. (2015a,b) and the references cited therein. Now let us assume that, rather than from the sample, some non-sample prior information of the form θ = θ_0 is available, and we are going to estimate θ based on this information. To check the accuracy of this information, we set the null hypothesis H_0: θ = θ_0 against the alternative hypothesis H_1: θ ≠ θ_0. The existing literature suggests that the construction of shrinkage estimators for θ based on a fixed alternative such as H_1: θ = θ_0 + δ, for fixed δ, does not offer a substantial performance change compared to θ̂; see Saleh (2006). So we consider a local alternative A(R): θ = θ_0 + R^{−1/2} δ, where δ is a fixed number, and further define the Shrinkage Preliminary Test (SPT) estimators of θ based on the maximum likelihood estimate (M.SPT) and the Bayes estimate (B.SPT), respectively, as

θ̂_{M.SPT} = w θ_0 + (1 − w) θ̂_{MLE} I(W_R < χ²_1(γ)),
θ̂_{B.SPT} = w θ_0 + (1 − w) θ̂_{Bayes} I(W_R < χ²_1(γ)).

Here w ∈ [0, 1], γ denotes the type-I error, and χ²_1(γ) represents the upper γ-quantile of the chi-square distribution with 1 degree of freedom. Notice that √R(θ̂ − θ_0) is asymptotically N(0, Var(θ̂)), and the asymptotic distribution of the test statistic W_R = ( √R(θ̂ − θ_0) )² / Var(θ̂) converges to a non-central chi-square distribution with one degree of freedom and non-centrality parameter Δ²/2, where Δ² = δ²/σ²(θ̂); we reject H_0 when W_R > χ²_1(γ). The SPT estimators for λ can also be defined in a similar fashion.

6. Prediction
This section deals with providing inference on the censored observations, say Z_i = X_{R+i}, i = 1, 2, ..., (n − R). The conditional density of the k-th order statistic X_k, R < k ≤ n, given the observed data x can be written as

f_1(z | x, θ, λ) = K ( F(z; θ, λ) − F(C; θ, λ) )^{k−R−1} ( 1 − F(z; θ, λ) )^{n−k} f(z; θ, λ) / ( 1 − F(C; θ, λ) )^{n−R},  z > C,

where K = (k − R) (n−R choose k−R). Now, using the binomial expansion, the above conditional density can be written as (see Singh and Tripathi (2016))

f_1(z | x, θ, λ) = K Σ_{j=0}^{k−R−1} (k−R−1 choose j) (−1)^{k−R−1−j} [ (1 − e^{−θe^{−λC}}) / (1 − e^{−θ}) ]^{j−n+R} [ (1 − e^{−θe^{−λz}}) / (1 − e^{−θ}) ]^{n−R−1−j} f(z).   (11)

Next, we use different predictors to predict the censored observations.

6.1. Best unbiased predictor

A best unbiased predictor (BUP) is a statistic $\hat{z}$ used to predict $z$ such that the prediction error $(\hat{z} - z)$ has mean zero and its prediction error variance $\mathrm{var}(\hat{z} - z)$ is less than or equal to that of any other unbiased predictor of $z$. In this section, our objective is to predict the observations $z = x_k$, $R < k \leq n$, using the BUP $\hat{z} = E[Z \mid C]$; therefore, using the conditional density given by (11), we get

$$\hat{z} = \int_C^{\infty} z\, f_1(z \mid x, \theta, \lambda)\, dz$$
$$= \frac{K}{\lambda} \sum_{j=0}^{k-R-1} \binom{k-R-1}{j} (-1)^{k-R-1-j} \left(\frac{1 - e^{-\theta e^{-\lambda C}}}{1 - e^{-\theta}}\right)^{j-n+R} \int_{F(C; \theta, \lambda)}^{1} -\ln\left(-\frac{1}{\theta} \ln\left(1 - (1-v)(1 - e^{-\theta})\right)\right) (1-v)^{n-R-1-j}\, dv.$$

Notice that we put $u = \left[1 - e^{-\theta e^{-\lambda z}}\right]/\left[1 - e^{-\theta e^{-\lambda C}}\right]$ in the conditional density given by (11) to obtain the above expression, and we observe that $u \mid x$ follows a $\mathrm{Beta}(n-R-k+1, k)$ distribution. Finally, the involved $\theta$ and $\lambda$ in the above expression can be replaced by their respective maximum likelihood estimates to obtain the BUP of $z = x_{R+i}$, $i = 1, 2, \ldots, n-R$.
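The BUP can equivalently be obtained by direct numerical integration of the conditional density $f_1$, which sidesteps the Beta-substitution algebra. The sketch below is our own illustration, not the authors' code: the function names and the plug-in values ($n = 30$, $R = 20$, $C = 1$, estimates $(0.5, 2)$) are assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def pe_cdf(x, th, lam):
    """CDF of the Poisson-Exponential distribution, consistent with the
    survival form (1 - exp(-th*exp(-lam*x)))/(1 - exp(-th)) used in the text."""
    return (np.exp(-th * np.exp(-lam * x)) - np.exp(-th)) / (1.0 - np.exp(-th))

def pe_pdf(x, th, lam):
    return th * lam * np.exp(-lam * x - th * np.exp(-lam * x)) / (1.0 - np.exp(-th))

def cond_density(z, k, n, R, C, th, lam):
    """f_1(z | x): density of the k-th order statistic given R failures before C."""
    K = (k - R) * comb(n - R, k - R)
    num = (pe_cdf(z, th, lam) - pe_cdf(C, th, lam)) ** (k - R - 1) \
          * (1.0 - pe_cdf(z, th, lam)) ** (n - k) * pe_pdf(z, th, lam)
    return K * num / (1.0 - pe_cdf(C, th, lam)) ** (n - R)

def bup(k, n, R, C, th, lam):
    """Best unbiased predictor E[X_(k) | data], computed by quadrature."""
    return quad(lambda z: z * cond_density(z, k, n, R, C, th, lam), C, np.inf)[0]

print(bup(22, 30, 20, 1.0, 0.5, 2.0))
```

A quick sanity check is that `cond_density` integrates to one over $(C, \infty)$, which guards against indexing mistakes in $K$ and the exponents.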

6.2. Conditional median predictor

The conditional median predictor (CMP), suggested by Raqab and Nagaraja (1995), is a predictor $\hat{z}$ of a statistic $Z$ that is the median of the conditional density of $Z$ given the observed data, that is, $P(Z \leq \hat{z} \mid x, C) = P(Z \geq \hat{z} \mid x, C)$. Observe that the random variable $w = (1-u) \mid x$ follows a $\mathrm{Beta}(k, n-R-k+1)$ distribution, and subsequently, using the distribution of $w$, the CMP of $Z$ is given by

$$\hat{z} = -\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - M(1 - e^{-\theta})\right)\right),$$

where $M$ stands for the median of the $\mathrm{Beta}(k, n-R-k+1)$ distribution.

Further, the associated $100(1-\alpha)\%$ prediction interval obtained by the pivotal method is

$$\left(-\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - B_{\alpha/2}(1 - e^{-\theta})\right)\right),\ -\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - B_{1-\alpha/2}(1 - e^{-\theta})\right)\right)\right),$$

where $B_p$ represents the $100p$-th percentile of the $\mathrm{Beta}(k, n-R-k+1)$ distribution. Further, observe that the PDF of the random variable $w$ is a unimodal function of $w$ for $1 < k < (n-R)$. Subsequently, the $100(1-\alpha)\%$ highest conditional density (HCD) prediction interval is

$$\left(-\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - w_1(1 - e^{-\theta})\right)\right),\ -\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - w_2(1 - e^{-\theta})\right)\right)\right).$$

Here, $w_1$ and $w_2$ are the solutions of the following equations:

$$\int_{w_1}^{w_2} g(w)\, dw = 1 - \alpha, \quad \text{and} \quad g(w_1) = g(w_2).$$

More precisely, the above equations can be rewritten as

$$\mathrm{Beta}_{w_2}(k, n-R-k+1) - \mathrm{Beta}_{w_1}(k, n-R-k+1) = 1 - \alpha, \quad \text{and} \quad \left(\frac{1-w_1}{1-w_2}\right)^{n-R-k} = \left(\frac{w_2}{w_1}\right)^{k-1}.$$

Notice that $\mathrm{Beta}_{\cdot}(\cdot, \cdot)$ represents the CDF of the $\mathrm{Beta}(\cdot, \cdot)$ distribution.
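The CMP and its pivotal interval reduce to Beta quantiles pushed through the closed form above. The sketch below is a hedged illustration rather than the paper's implementation: we index the predicted observation so that both Beta parameters stay positive (i.e., we read $k$ here as satisfying $k \leq n-R$), the mapping is monotone decreasing in the Beta variate so the interval endpoints are sorted, and the parameter values are hypothetical.

```python
from math import exp, log
from scipy.stats import beta

def _to_lifetime(b, th, lam):
    """Map a Beta variate b in (0, 1) through the closed form in the text."""
    return -log(-log(1.0 - b * (1.0 - exp(-th))) / th) / lam

def cmp_predict(k, n, R, th, lam):
    """Conditional median predictor: plug the Beta median into the closed form."""
    M = beta.median(k, n - R - k + 1)
    return _to_lifetime(M, th, lam)

def pivotal_interval(k, n, R, th, lam, alpha=0.05):
    """100(1-alpha)% prediction interval by the pivotal method."""
    a, b = k, n - R - k + 1
    ends = (_to_lifetime(beta.ppf(alpha / 2, a, b), th, lam),
            _to_lifetime(beta.ppf(1 - alpha / 2, a, b), th, lam))
    return tuple(sorted(ends))   # the map is decreasing in b, so order the ends

# Hypothetical setting: n = 30, R = 20 observed, k = 3, estimates (0.5, 2).
print(cmp_predict(3, 30, 20, 0.5, 2.0))
print(pivotal_interval(3, 30, 20, 0.5, 2.0))
```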

7. Bayesian prediction

The objective of this section is to provide predictive inference using the Bayesian approach. Observe that, under the considered prior $\pi(\theta, \lambda)$, the posterior predictive density is given by

$$f_1^*(z \mid x) = \int_0^{\infty} \int_0^{\infty} f_1(z \mid x, \theta, \lambda)\, \pi(\theta, \lambda \mid x)\, d\theta\, d\lambda.$$

Now the Bayesian predictive estimates of $z$ under the LL and GEL functions are given by

$$\hat{z}_{LL} = -\frac{1}{\epsilon} \ln\left(\int_C^{\infty} e^{-\epsilon z} f_1^*(z \mid x)\, dz\right) = -\frac{1}{\epsilon} \ln\left(\int_0^{\infty} \int_0^{\infty} I_1(\theta, \lambda)\, \pi(\theta, \lambda \mid x)\, d\theta\, d\lambda\right), \tag{12}$$

$$\hat{z}_{GEL} = \left(\int_C^{\infty} z^{-q} f_1^*(z \mid x)\, dz\right)^{-1/q} = \left(\int_0^{\infty} \int_0^{\infty} I_2(\theta, \lambda)\, \pi(\theta, \lambda \mid x)\, d\theta\, d\lambda\right)^{-1/q}, \tag{13}$$

where

$$I_1(\theta, \lambda) = \int_C^{\infty} e^{-\epsilon z} f_1(z \mid x, \theta, \lambda)\, dz = K \sum_{j=0}^{k-R-1} \binom{k-R-1}{j} (-1)^{k-R-1-j} \left(\frac{1 - e^{-\theta e^{-\lambda C}}}{1 - e^{-\theta}}\right)^{j-n+R} \int_{F(C; \theta, \lambda)}^{1} \exp\left\{\frac{\epsilon}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - (1-v)(1 - e^{-\theta})\right)\right)\right\} (1-v)^{n-R-1-j}\, dv,$$

and

$$I_2(\theta, \lambda) = \int_C^{\infty} z^{-q} f_1(z \mid x, \theta, \lambda)\, dz = K \sum_{j=0}^{k-R-1} \binom{k-R-1}{j} (-1)^{k-R-1-j} \left(\frac{1 - e^{-\theta e^{-\lambda C}}}{1 - e^{-\theta}}\right)^{j-n+R} \int_{F(C; \theta, \lambda)}^{1} \left(-\frac{1}{\lambda} \ln\left(-\frac{1}{\theta} \ln\left(1 - (1-v)(1 - e^{-\theta})\right)\right)\right)^{-q} (1-v)^{n-R-1-j}\, dv.$$

Now observe that the expressions given by (12) and (13) can respectively be seen as $-\ln\left[E\left(I_1(\theta, \lambda) \mid x\right)\right]/\epsilon$ and $\left[E\left(I_2(\theta, \lambda) \mid x\right)\right]^{-1/q}$. Subsequently, samples drawn from the posterior distribution given by (9) using the importance sampling procedure mentioned in Sec. 4.2 can be used to compute these expressions. Further, the predictive survival function is given by

$$S^*(t \mid x) = \int_0^{\infty} \int_0^{\infty} S_1(t \mid x, \theta, \lambda)\, \pi(\theta, \lambda \mid x)\, d\theta\, d\lambda,$$

where

$$S_1(t \mid x, \theta, \lambda) = \frac{P(X > t \mid x, \theta, \lambda)}{P(X > C \mid x, \theta, \lambda)} = \left[\frac{1 - e^{-\theta e^{-\lambda t}}}{1 - e^{-\theta e^{-\lambda C}}}\right]^{n-R}.$$

Now the $100(1-\alpha)\%$ equal-tail (ET) credible predictive interval estimates can be obtained by solving the following equations for the lower bound $L$ and the upper bound $U$:

$$S^*(L \mid x) = 1 - \frac{\alpha}{2}, \quad \text{and} \quad S^*(U \mid x) = \frac{\alpha}{2}.$$

One may further refer to the algorithm given by Singh and Tripathi (2016) to solve the above expressions.
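Given posterior draws, the ET interval amounts to two one-dimensional root-finding problems on the Monte Carlo estimate of $S^*(t \mid x)$. The sketch below is illustrative only: the posterior draws are fabricated placeholders standing in for importance-sampling output, and the bracketing heuristic is ours.

```python
import numpy as np
from scipy.optimize import brentq

def s1(t, C, n, R, th, lam):
    """Conditional survival S_1(t | x, theta, lambda) for t > C."""
    num = 1.0 - np.exp(-th * np.exp(-lam * t))
    den = 1.0 - np.exp(-th * np.exp(-lam * C))
    return (num / den) ** (n - R)

def et_interval(post_draws, C, n, R, alpha=0.05):
    """Equal-tail credible predictive interval from posterior draws (th, lam)."""
    s_star = lambda t: np.mean([s1(t, C, n, R, th, lam) for th, lam in post_draws])
    hi = C + 1.0
    while s_star(hi) > alpha / 4.0:       # expand until both roots are bracketed
        hi += 1.0
    L = brentq(lambda t: s_star(t) - (1.0 - alpha / 2.0), C, hi)
    U = brentq(lambda t: s_star(t) - alpha / 2.0, C, hi)
    return L, U

# Placeholder "posterior draws" around (0.5, 2); real draws would come from
# the importance sampler of Sec. 4.2.
draws = [(0.5, 2.0), (0.6, 1.8), (0.45, 2.2), (0.55, 2.1)]
print(et_interval(draws, C=1.0, n=30, R=25))
```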
Remark. Notice that the prediction problem discussed here is in fact the so-called one-sample prediction problem, which provides inference on the remaining censored observations based on the observed part of the same sample. However, the two-sample prediction problem, which provides inference about a future sample based on the observed sample, is also important. It can be stated as follows. Suppose that we have observed a sample, say $x = (x_1, x_2, \ldots, x_R)$, under type-I hybrid censoring with prefixed time $T$ and number $r$. Further, assume that $y_1, y_2, \ldots, y_s, \ldots, y_m$ are the ordered lifetimes, following the $PE(\theta, \lambda)$ distribution, of a future sample of size $m$ under type-I hybrid censoring with prefixed time $T_0$ and number $s$. Then two-sample prediction provides inference on the future sample $y$ based on the observed sample $x$. Notice that the PDF of the $k$-th order statistic, $1 \leq k \leq m$, of this future sample is given by

$$f_2(y \mid \theta, \lambda) = k \binom{m}{k} \left(F(y; \theta, \lambda)\right)^{k-1} \left(1 - F(y; \theta, \lambda)\right)^{m-k} f(y; \theta, \lambda), \quad y > 0,$$
$$= k \binom{m}{k} \sum_{j=0}^{k-1} \binom{k-1}{j} (-1)^{k-1-j} \left(1 - F(y; \theta, \lambda)\right)^{m-j-1} f(y; \theta, \lambda). \tag{14}$$

Further, the CDF under two-sample prediction is given by

$$F_2(y \mid \theta, \lambda) = k \binom{m}{k} \sum_{j=0}^{k-1} \binom{k-1}{j} (-1)^{k-1-j}\, \frac{1 - \left(1 - F(y; \theta, \lambda)\right)^{m-j}}{m-j}. \tag{15}$$

Now the point predictive estimates can be obtained by making use of the PDF given by (14) rather than (11). Similarly, the CDF given by (15) can be utilized for the associated predictive interval estimates. For relevant literature on the two-sample prediction problem, one may refer to Shafay and Balakrishnan (2012) and Balakrishnan and Shafay (2012).

Table 1. Goodness-of-fit tests for given distributions.

Model     PDF                                                                  $\hat{\theta}$   $\hat{\lambda}$   NLC        AIC        BIC        K-S
PE        $f(x; \theta, \lambda)$                                              96.955           0.1669            104.1426   212.2852   215.1532   0.1276
WD        $\theta \lambda^{-\theta} x^{\theta-1} e^{-(x/\lambda)^{\theta}}$    4.635            33.673            105.4889   214.9778   217.8458   0.1271
GE        $\theta \lambda e^{-\lambda x} (1 - e^{-\lambda x})^{\theta-1}$      0.190            193               104.8668   213.7336   216.6016   0.1386
Burr XII  $\theta \lambda x^{\lambda-1} (1 + x^{\lambda})^{-\theta-1}$         4.2567           0.0690            269.3431   542.6862   545.5541   0.9690
Lomax     $\lambda \theta (1 + \lambda x)^{-\theta-1}$                         0.1227           115.248           201.4810   406.9621   409.8301   0.6190
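The binomial-expansion form in (14) can be checked numerically against the direct order-statistic density. The snippet below is our own verification, with arbitrary parameter values; it confirms that the two forms of $f_2$ agree.

```python
import numpy as np
from scipy.special import comb

def pe_cdf(x, th, lam):
    return (np.exp(-th * np.exp(-lam * x)) - np.exp(-th)) / (1.0 - np.exp(-th))

def pe_pdf(x, th, lam):
    return th * lam * np.exp(-lam * x - th * np.exp(-lam * x)) / (1.0 - np.exp(-th))

def f2_direct(y, k, m, th, lam):
    """PDF of the k-th order statistic of a future sample of size m."""
    F = pe_cdf(y, th, lam)
    return k * comb(m, k) * F ** (k - 1) * (1 - F) ** (m - k) * pe_pdf(y, th, lam)

def f2_expanded(y, k, m, th, lam):
    """Same PDF via the binomial expansion of F^(k-1), as in (14)."""
    F = pe_cdf(y, th, lam)
    s = sum(comb(k - 1, j) * (-1.0) ** (k - 1 - j) * (1 - F) ** (m - j - 1)
            for j in range(k))
    return k * comb(m, k) * s * pe_pdf(y, th, lam)

print(f2_direct(0.7, 3, 10, 0.5, 2.0), f2_expanded(0.7, 3, 10, 0.5, 2.0))
```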

8. Data analysis and simulation study


8.1. Data analysis
This section deals with analyzing a real data set on the strength of polished windows from an all-glass airplane window design. The data set is taken from Pepi (1994), and the observations are as follows:

18.83, 20.8, 21.657, 23.03, 23.23, 24.05, 24.321, 25.5, 25.52, 25.8, 26.69, 26.77, 26.78, 27.05, 27.67, 29.9, 31.11, 33.2, 33.73, 33.76, 33.89, 34.76, 35.75, 35.91, 36.98, 37.08, 37.09, 39.58, 44.045, 45.29, 45.381.

In the existing literature, Kumar et al. (2016) have also discussed this data set for the PE distribution; the authors used a graphical method to test the goodness of fit of the PE distribution for these data. We further compare the fit of the PE distribution on this data set with that of other lifetime distributions, namely the Weibull (WD), generalized exponential (GE), Burr XII, and Lomax distributions. For this purpose, we consider Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the Kolmogorov-Smirnov (KS) test statistic. All the values based on the considered criteria are reported in Table 1. Notice that the smaller the values of the considered criteria, the better the fit of a distribution, so the tabulated values suggest that the PE distribution fits this data set more appropriately than the other distributions.
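The criteria in Table 1 for the PE model are straightforward to recompute. The sketch below is our own check, not the authors' code: it plugs the reported MLEs into the PE log-likelihood, so the resulting values should be close to, but need not exactly match, Table 1 because the reported estimates are rounded.

```python
import numpy as np
from scipy.stats import kstest

data = np.array([18.83, 20.8, 21.657, 23.03, 23.23, 24.05, 24.321, 25.5,
                 25.52, 25.8, 26.69, 26.77, 26.78, 27.05, 27.67, 29.9,
                 31.11, 33.2, 33.73, 33.76, 33.89, 34.76, 35.75, 35.91,
                 36.98, 37.08, 37.09, 39.58, 44.045, 45.29, 45.381])
th, lam = 96.955, 0.1669          # PE MLEs as reported in Table 1

def pe_cdf(x):
    return (np.exp(-th * np.exp(-lam * x)) - np.exp(-th)) / (1.0 - np.exp(-th))

def pe_logpdf(x):
    return (np.log(th) + np.log(lam) - lam * x - th * np.exp(-lam * x)
            - np.log(1.0 - np.exp(-th)))

nlc = -np.sum(pe_logpdf(data))            # negative log-likelihood
aic = 2 * 2 + 2 * nlc                     # two parameters
bic = 2 * np.log(len(data)) + 2 * nlc
ks = kstest(data, pe_cdf).statistic
print(nlc, aic, bic, ks)
```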
Next, we first generate two data sets from the given data under type-I hybrid censoring, corresponding to $(T = 35, r = 25)$ and $(T = 50, r = 25)$. The first generated data set resembles type-I censored data and turns out to have 22 observations up to 34.76, that is, $R = d = 22$ and $C = T = 35$. The second generated data set resembles type-II censored data and turns out to have 25 observations, that is, $R = r = 25$ and $C = x_{25} = 36.98$. We next consider both data sets and compute the maximum likelihood estimates of $\theta$ and $\lambda$ using the NR, EM, and SEM algorithms. We further compute Bayes estimates under the non-informative prior using the Lindley method and the importance sampling technique. Notice that under the LL function we have considered two values of $\epsilon$, namely $\epsilon = -0.25, 0.5$, and under the GEL function two values of $q$, namely $q = -0.5, 1.25$. All the estimated values are reported in Table 2. From the tabulated values it can be observed that the estimates obtained using the EM and SEM algorithms are close to each other, whereas the NR method gives much larger values. Also,

Table 2. ML and Bayes estimates for the real data set.

                        Maximum likelihood estimates            Bayes estimates
                                                                LINEX                   GEL
r    T    Parameter    NR        EM        SEM      Method     ε = -0.25  ε = 0.5    SEL      q = -0.5  q = 1.25
25   35   θ            0.1548    0.0310    0.0309   Lindley    0.0213     0.0227     0.0356   0.0293    0.0272
          λ            72.344    0.6990    0.6895              0.9902     1.0776     1.1249   1.1218    1.0517
          θ                                         IS         0.0252     0.0247     0.0247   0.0258    0.0240
          λ                                                    1.1023     1.1083     1.1864   1.1769    1.1455
     50   θ            0.1390    0.0260    0.0262   Lindley    0.0251     0.0182     0.0379   0.0312    0.0257
          λ            55.860    0.6487    0.6449              0.3890     0.4290     0.9816   0.9727    0.9366
          θ                                         IS         0.0172     0.0171     0.0189   0.0188    0.0183
          λ                                                    0.4801     0.4675     0.9376   0.9459    0.9184
Note: IS represents importance sampling.

Table 3. 95% asymptotic confidence and HPD interval estimates for the real data set.

r    T    Parameter    Asymptotic confidence interval    HPD interval
25   35   θ            0.0272–2.9200                     0.0191–0.0379
          λ            0.5978–0.7052                     0.9531–3.2315
     50   θ            0.0253–2.0263                     0.0149–0.0348
          λ            0.5648–0.7488                     0.2728–1.7459

the Bayes estimates obtained using the Lindley method and importance sampling are close to each other and to the maximum likelihood estimates. It can also be observed that a positive value of $q$ under the GEL function leads to smaller estimates, as expected. Further, Table 3 reports the associated interval estimates, and it can be seen that the interval estimates include the reported point estimates. Table 4 reports the shrinkage preliminary test estimates based on the maximum likelihood and Bayes estimates. Notice that in the estimation process we considered $\theta_0 = 0.02$ and $\lambda_0 = 0.5$ as non-sample prior information, with type-I error 0.05 and $w = 0.5$. Further, Tables 5 and 6 respectively report the predictive and the associated predictive interval estimates for the censored observations. It can be seen that the predictive estimates are close to the true values. The estimates obtained using the BUP are larger than those obtained using the CMP. It can also be seen that the predictive interval estimates contain the true values, and the predictive intervals obtained using the HCD method are shorter than those obtained using the pivotal method.

8.2. Simulation study

In this section, we conduct a Monte Carlo simulation study to examine the performance of the proposed estimators and predictors. We first simulate complete samples of size $n$ from the $PE(0.5, 2)$ distribution using the R statistical programming language. We then sort the simulated observations and observe the data under type-I hybrid censoring according to different given values of $T$ and $r$. We mention that the results reported in this paper are based on 5000 Monte Carlo replications. We first report the maximum likelihood estimates of $\theta$ and $\lambda$ computed using the EM and SEM algorithms in Table 7, and to make a comparison we consider the average (Avg) estimates and the mean squared error (MSE) values. The tabulated values suggest that the performance of the estimates obtained

Table 4. Shrinkage preliminary test estimates for the real data set.
LINEX GEL
r T Parameter MLE  ¼ 0:25  ¼ 0:5 SEL q ¼ 0:5 q ¼ 1.25
25 35 h 0.0310 0.0287 0.0282 0.0284 0.0291 0.0276
k 0.5848 1.1246 1.0838 1.0859 0.9514 0.9353
50 h 0.0260 0.0218 0.0216 0.0215 0.0207 0.0204
k 0.5553 0.7342 0.7157 0.7190 0.7290 0.7040

Table 5. Predictive estimates of xRþ1 for the real data set.


Classical Bayesian prediction
prediction LINEX GEL
r T True value BUP CMP  ¼ 0:25  ¼ 0:5 SEL q ¼ 0:5 q ¼ 1.25
25 35 35.75 36.0170 35.8957 36.6472 36.9855 37.2184 38.2148 36.3524
50 37.08 39.5876 37.3245 38.4152 36.1958 38.4014 39.5182 37.4011

Table 6. Predictive interval estimates of xRþ1 for the real data set.
r T True value Pivotal method HCD method ET credible
25 35 35.75 34.6935–38.9587 34.7857–38.8551 34.7342–38.9421
50 37.08 36.7141–41.6875 36.9804–41.3957 36.9905–41.0175

Table 7. Average and MSE values of maximum likelihood estimates.

                         θ                                   λ
               EM              SEM               EM              SEM
n    T    r    Avg     MSE     Avg     MSE      Avg     MSE     Avg     MSE
30 3 20 0.5649 0.0159 0.5640 0.0158 1.9940 0.5010 2.0152 0.5001
23 0.5345 0.0157 0.5270 0.0156 2.0466 0.4120 2.0343 0.4110
25 0.5168 0.0128 0.5089 0.0125 2.0647 0.3070 2.0459 0.2912
5 20 0.5582 0.0158 0.5501 0.0151 1.9600 0.4716 1.9753 0.4498
23 0.5355 0.0151 0.5209 0.0121 1.9987 0.4235 2.0169 0.4189
25 0.5238 0.0109 0.5065 0.0098 2.0123 0.3230 2.0210 0.3156
50 3 35 0.5632 0.0106 0.5590 0.0101 1.9474 0.4595 1.9651 0.4370
40 0.5570 0.0094 0.5480 0.0089 1.9827 0.3655 1.9959 0.3562
45 0.5309 0.0089 0.5272 0.0081 2.0444 0.1682 2.0312 0.1681
5 35 0.5612 0.0124 0.5535 0.0123 1.9031 0.4471 1.9453 0.4412
40 0.5396 0.0092 0.5310 0.0091 1.9200 0.3587 1.9659 0.3496
45 0.5164 0.0086 0.5069 0.0085 1.9860 0.1595 1.9886 0.1473

using the SEM algorithm is good compared to the EM algorithm. Further, Table 8 reports the Bayes estimates obtained under the non-informative (NIN) and informative (IN) priors. Notice that, to obtain the hyper-parameter values under the IN prior, we first generated 1000 complete samples from the $PE(0.5, 2)$ distribution, each with 30 observations. Then, following the discussion presented in Sec. 4, we obtained the hyper-parameter values $a_1 = 16.5443$, $b_1 = 7.3979$, $a_2 = 1.2145$ and $b_2 = 1.3608$. The results show that the performance under the IN prior is better than under the NIN prior, both in terms of average estimates and MSE values. The performance of the Bayes estimates obtained using the importance sampling technique is also good compared to the estimates obtained using the Lindley method. It is further observed that, as expected, the higher values of $\epsilon$ and $q$

Table 8. Average and MSE values of Bayes estimates.

                          NIN prior                                          IN prior
                LINEX                GEL                         LINEX                GEL
n   T   r   Method   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25
30 3 20 Lin h Avg 0.5541 0.5456 0.5592 0.5371 0.5087 0.5792 0.5565 0.4971 0.5121 0.4770
MSE 0.0714 0.0762 0.0836 0.0856 0.0868 0.0921 0.0916 0.0838 0.0910 0.0921
k Avg 1.6651 1.6402 1.6218 1.7192 1.6885 1.8971 1.8865 1.8328 1.8224 1.7995
MSE 0.4891 0.4965 0.4279 0.4951 0.4470 0.4017 0.4818 0.3801 0.4734 0.4324
Is h Avg 0.4791 0.4711 0.4719 0.5124 0.4935 0.4712 0.4336 0.4441 0.4871 0.4338
MSE 0.1191 0.1192 0.1201 0.1121 0.1161 0.1117 0.1106 0.1105 0.1120 0.1118
k Avg 1.8417 1.8265 1.8331 1.8521 1.8047 2.0124 1.8507 1.8545 1.9011 1.8311
MSE 0.2489 0.2398 0.2544 0.3278 0.2369 0.3251 0.3231 0.3154 0.3584 0.3678
23 Lin h Avg 0.5742 0.5262 0.5194 0.4852 0.4434 0.5981 0.5657 0.4838 0.4792 0.4564
MSE 0.0847 0.0838 0.0718 0.0968 0.0960 0.0812 0.0791 0.0840 0.0951 0.0901
k Avg 1.6752 1.6021 1.5849 1.6682 1.6607 1.8142 1.7801 1.7671 1.7853 1.7592
MSE 0.4241 0.4555 0.4177 0.4921 0.4451 0.4181 0.3997 0.3863 0.3964 0.3877
Is h Avg 0.5842 0.5833 0.5841 0.5851 0.5782 0.4651 0.4244 0.4248 0.4212 0.4157
MSE 0.0719 0.0721 0.0724 0.0751 0.0740 0.0110 0.0079 0.0078 0.0112 0.0093
k Avg 1.7912 1.7833 1.7866 1.7832 1.7634 1.8142 1.7043 1.7071 1.7572 1.6893
MSE 0.1258 0.1141 0.1244 0.1190 0.1010 0.0948 0.0946 0.0934 0.0891 0.1020
25 Lin h Avg 0.5433 0.5096 0.5117 0.5289 0.4911 0.5719 0.5429 0.5060 0.5358 0.5141
MSE 0.0774 0.0795 0.0794 0.0768 0.0841 0.0715 0.0766 0.0788 0.0791 0.0730
k Avg 1.5876 1.5344 1.4940 1.5581 1.5392 1.7411 1.7249 1.6343 1.7182 1.4854
MSE 0.4412 0.4109 0.4514 0.4582 0.4798 0.3012 0.2807 0.3564 0.4152 0.3944
Is h Avg 0.5912 0.5921 0.5988 0.5697 0.5639 0.4661 0.4275 0.4688 0.4577 0.4538
MSE 0.0734 0.0728 0.0755 0.0601 0.0605 0.0048 0.0040 0.0040 0.0051 0.0032
k Avg 1.7401 1.7223 1.7125 1.7282 1.7093 1.9951 1.9753 1.9866 1.9980 1.9008
MSE 0.1119 0.1190 0.1065 0.1151 0.1108 0.0763 0.0743 0.0737 0.0752 0.0751
5 20 Lin h Avg 0.5682 0.5524 0.5609 0.4872 0.4560 0.5898 0.5628 0.4895 0.5321 0.4789
MSE 0.0728 0.0769 0.0831 0.0921 0.0954 0.0997 0.0966 0.0840 0.0935 0.0923
k Avg 1.7252 1.7034 1.6899 1.6754 1.6434 1.9072 1.8864 1.8361 1.8239 1.7919
MSE 0.4652 0.4704 0.4989 0.5492 0.5050 0.3314 0.3057 0.4162 0.3992 0.3454
Is h Avg 0.4392 0.4304 0.4306 0.5012 0.4917 0.5411 0.5241 0.5211 0.5221 0.5090
MSE 0.0146 0.0154 0.0154 0.0112 0.0157 0.0081 0.0022 0.0023 0.0018 0.0021
k Avg 1.8112 1.7894 1.7930 1.7847 1.7707 1.7814 1.7583 1.7589 1.7814 1.7446
MSE 0.0841 0.0842 0.0838 0.0876 0.0878 0.0641 0.0614 0.0642 0.0691 0.0678
23 Lin h Avg 0.5471 0.5223 0.5276 0.4912 0.4439 0.5953 0.5613 0.4771 0.4772 0.4612
MSE 0.0817 0.0819 0.0718 0.0854 0.0848 0.0821 0.0819 0.0850 0.0996 0.0905
k Avg 1.7732 1.7483 1.7253 1.7402 1.7315 1.8591 1.8289 1.7641 1.7791 1.7460
MSE 0.4172 0.4458 0.4732 0.4298 0.4026 0.3571 0.3250 0.3569 0.3892 0.3748
Is h Avg 0.4682 0.4619 0.4617 0.4910 0.4620 0.4891 0.4511 0.4496 0.4891 0.4430
MSE 0.0128 0.0131 0.0129 0.0107 0.0118 0.0081 0.0025 0.0029 0.0051 0.0033
k Avg 1.8614 1.8387 1.8415 1.8521 1.8241 1.7621 1.6411 1.6407 1.6917 1.6338
MSE 0.0887 0.0868 0.0862 0.1091 0.1009 0.0751 0.0534 0.0531 0.0813 0.0854
25 Lin h Avg 0.5541 0.5162 0.5498 0.4712 0.4832 0.5742 0.5453 0.4009 0.5318 0.5123
MSE 0.0765 0.0788 0.0775 0.0761 0.0843 0.0821 0.0969 0.0957 0.0991 0.0903
k Avg 1.6152 1.6483 1.6253 1.6402 1.6315 1.8591 1.8289 1.7641 1.7791 1.7460
MSE 0.5671 0.5318 0.5781 0.5172 0.5388 0.3171 0.2941 0.3645 0.3271 0.3068
Is h Avg 0.4840 0.4831 0.4832 0.4812 0.4793 0.4414 0.4264 0.4665 0.4241 0.4239
MSE 0.0488 0.0485 0.0484 0.0492 0.0499 0.0061 0.0055 0.0055 0.0062 0.0059
k Avg 1.6961 1.6783 1.6649 1.7012 1.6688 2.2371 2.1054 2.1137 2.2114 2.0669
MSE 0.1262 0.1177 0.1519 0.1249 0.1224 0.0213 0.0184 0.0202 0.0162 0.0117
50 3 35 Lin h Avg 0.5891 0.5677 0.5911 0.5285 0.5094 0.6121 0.5800 0.5128 0.5481 0.5237
MSE 0.0844 0.0843 0.0855 0.0864 0.0842 0.0817 0.0864 0.0763 0.0812 0.0825
k Avg 1.6957 1.6674 1.6494 1.6882 1.6474 1.8752 1.8676 1.8365 1.8221 1.7919
MSE 0.5167 0.5158 0.5304 0.5381 0.4458 0.4021 0.3667 0.3682 0.3691 0.3318
Is h Avg 0.7121 0.6791 0.6818 0.6817 0.6669 0.6129 0.4396 0.4397 0.4551 0.4380
(continued)
Table 8. Continued.

                          NIN prior                                          IN prior
                LINEX                GEL                         LINEX                GEL
n   T   r   Method   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25
MSE 0.0843 0.0842 0.0845 0.0841 0.0840 0.0771 0.0744 0.0744 0.0744 0.0746
k Avg 1.7264 1.6984 1.6844 1.7251 1.6894 1.6012 1.5923 1.5928 1.6082 1.5891
MSE 0.1819 0.1808 0.2256 0.2612 0.1768 0.2013 0.1714 0.1711 0.1918 0.1739
40 Lin h Avg 0.5571 0.5343 0.556 0.5192 0.4876 0.5812 0.5669 0.5248 0.5442 0.5048
MSE 0.0812 0.0768 0.0756 0.0751 0.0693 0.0577 0.0576 0.0712 0.0698 0.0705
k Avg 1.7712 1.7279 1.7217 1.7852 1.7646 2.091 1.9521 1.9484 1.9021 1.8555
MSE 0.4582 0.4469 0.4533 0.4241 0.4126 0.3971 0.3769 0.3607 0.3971 0.3894
Is h Avg 0.5122 0.4819 0.4823 0.4917 0.4799 0.4917 0.4576 0.4577 0.4951 0.4548
MSE 0.0717 0.0716 0.0716 0.0715 0.0716 0.0515 0.0551 0.0686 0.0620 0.0639
k Avg 1.7622 1.7311 1.7296 1.7514 1.7208 1.8812 1.8337 1.8376 1.8722 1.8113
MSE 0.1974 0.1959 0.1806 0.1641 0.1565 0.2294 0.2275 0.2082 0.2825 0.2572
45 Lin h Avg 0.5498 0.5429 0.5622 0.4851 0.4681 0.5971 0.5748 0.5077 0.4896 0.4837
MSE 0.0924 0.0839 0.0797 0.0991 0.0952 0.0652 0.0542 0.0684 0.0599 0.0619
k Avg 1.8751 1.8316 1.8434 1.8392 1.8153 2.0851 2.0420 2.0332 1.9596 1.9285
MSE 0.2789 0.2765 0.2673 0.3081 0.2914 0.2298 0.2078 0.2008 0.2623 0.2572
Is h Avg 0.5981 0.5487 0.5490 0.5821 0.5435 0.5912 0.5836 0.5838 0.5811 0.5803
MSE 0.0571 0.0653 0.0554 0.0545 0.0639 0.0387 0.0305 0.0305 0.0312 0.0390
k Avg 1.9249 1.7741 1.7764 1.9541 1.7632 1.9112 1.8675 1.8694 2.0121 1.8568
MSE 0.1791 0.1832 0.1786 0.1522 0.1511 0.1475 0.1454 0.1467 0.1441 0.1436
5 35 Lin h Avg 0.5761 0.5574 0.5871 0.5271 0.5025 0.5971 0.5694 0.4999 0.5382 0.5181
MSE 0.0872 0.0824 0.0842 0.0841 0.0803 0.1080 0.1078 0.0869 0.0869 0.0858
k Avg 1.6821 1.6518 1.6428 1.6551 1.6144 1.8651 1.8525 1.8213 1.7961 1.7668
MSE 0.5048 0.5027 0.5084 0.5122 0.4704 0.4112 0.3851 0.3869 0.3759 0.3483
Is h Avg 0.6722 0.5171 0.5174 0.5471 0.5218 0.6172 0.4451 0.4452 0.4817 0.4689
MSE 0.0239 0.0237 0.0237 0.0238 0.0238 0.0172 0.0145 0.0145 0.0139 0.0136
k Avg 1.8145 1.6397 1.6176 1.8451 1.6344 1.8126 1.7095 1.7114 1.81521 1.6984
MSE 0.1612 0.1408 0.1645 0.1342 0.1437 0.1152 0.1138 0.1132 0.1192 0.1178
40 Lin h Avg 0.5312 0.5249 0.5453 0.5071 0.4764 0.5991 0.5786 0.5059 0.5012 0.4965
MSE 0.0831 0.0783 0.0774 0.0899 0.0849 0.0771 0.0769 0.0758 0.0712 0.0733
k Avg 1.7821 1.7307 1.7249 1.7451 1.7191 2.0151 1.9402 1.9329 1.9012 1.8329
MSE 0.4012 0.3993 0.4058 0.3574 0.3433 0.2851 0.2649 0.2508 0.3371 0.3239
Is h Avg 0.4721 0.4401 0.4402 0.4513 0.4382 0.5712 0.5317 0.5320 0.5612 0.5265
MSE 0.0146 0.0145 0.0143 0.0144 0.0149 0.0111 0.0123 0.0123 0.0119 0.0121
k Avg 1.8214 1.6873 1.6887 1.8534 1.6791 1.8121 1.7081 1.7099 1.8241 1.6976
MSE 0.1041 0.1142 0.1136 0.1059 0.1180 0.0912 0.0887 0.0878 0.0964 0.0945
45 Lin h Avg 0.5421 0.5357 0.5513 0.4671 0.4402 0.5891 0.5674 0.5007 0.4819 0.4743
MSE 0.0820 0.0735 0.0734 0.0798 0.0771 0.0754 0.0726 0.0725 0.0798 0.0742
k Avg 1.8752 1.8495 1.8630 1.7921 1.7613 2.0721 2.0299 2.0251 1.9371 1.9063
MSE 0.3421 0.3155 0.3033 0.3471 0.3330 0.1671 0.1414 0.1486 0.1891 0.1783
Is h Avg 0.5142 0.5180 0.5182 0.5162 0.5146 0.6182 0.6095 0.6098 0.6030 0.6040
MSE 0.0121 0.0124 0.0124 0.0122 0.0122 0.0102 0.0104 0.0109 0.0113 0.0108
k Avg 1.9241 1.7680 1.7704 1.9432 1.7549 1.8122 1.6962 1.6977 1.8531 1.6864
MSE 0.0811 0.0831 0.0825 0.0819 0.0869 0.0873 0.0858 0.0850 0.0834 0.0815
Note: Lin and Is respectively represent Lindley method and importance sampling.

respectively, under the LINEX and GEL functions lead to smaller estimates compared to the negative values of $\epsilon$ and $q$. Further, in Table 9 we present the 95% asymptotic confidence and highest posterior density (HPD) interval estimates along with the average interval lengths (AILs). From the tabulated values, it is observed that the HPD interval estimates obtained under the IN prior have smaller AILs than those under the NIN prior. Also, a higher value of $r$ or $T$ for a fixed $n$ provides intervals with smaller AILs. This observation also holds true for the asymptotic confidence intervals.
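The simulation design above (draw a complete PE sample, sort it, then censor at $C = \min(x_{(r)}, T)$) can be sketched as follows. The paper uses R; we give an equivalent Python illustration, with the inverse-CDF sampler derived from the PE distribution function, and the seed and settings chosen arbitrarily.

```python
import numpy as np

def pe_sample(n, th, lam, rng):
    """Inverse-CDF draws from PE(th, lam): solve F(x) = u for x."""
    u = rng.uniform(size=n)
    inner = np.exp(-th) + u * (1.0 - np.exp(-th))   # equals exp(-th * exp(-lam*x))
    return -np.log(-np.log(inner) / th) / lam

def type1_hybrid_censor(x, T, r):
    """Type-I hybrid censoring: observe failures only up to C = min(x_(r), T)."""
    xs = np.sort(x)
    C = min(xs[r - 1], T)
    return xs[xs <= C], C

rng = np.random.default_rng(2019)
x = pe_sample(30, 0.5, 2.0, rng)            # complete sample, true (theta, lambda) = (0.5, 2)
obs, C = type1_hybrid_censor(x, T=3.0, r=20)
print(len(obs), C)
```

Repeating this over many replications and re-estimating $(\theta, \lambda)$ from each censored sample yields the Avg and MSE summaries of the kind reported in Table 7.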

Table 9. 95% asymptotic confidence and HPD interval estimates.

                                                      HPD interval
n    T    r    Asymptotic confidence interval/AIL     NIN prior: interval/AIL     IN prior: interval/AIL
30 3 20 h (0.3098–0.5608)/0.2510 (0.2649–0.6208)/0.3559 (0.3401–0.6177)/0.2776
k (0.9442–2.1526)/1.2084 (0.5179–2.0548)/1.5369 (1.4197–2.2645)/0.8448
23 h (0.3057–0.5558)/0.2501 (0.2454–0.6012)/0.3558 (0.3705–0.6423)/0.2718
k (0.9760–2.0505)/1.0745 (0.5198–2.0471)/1.5273 (1.4536–2.2575)/0.8039
25 h (0.2997–0.5046)/0.2049 (0.2792–0.6214)/0.3422 (0.3822–0.5815)/0.1993
k (1.0995–2.0411)/0.9411 (0.7019–2.1154)/1.4135 (1.3497–2.0925)/0.7428
5 20 h (0.2628–0.5136)/0.2508 (0.2761–0.5714)/0.2953 (0.3370–0.5533)/0.2163
k (1.1418–2.1195)/0.9777 (0.6818–2.1864)/1.5046 (1.4038–2.1929)/0.7891
23 h (0.3095–0.5352)/0.2257 (0.2534–0.5401)/0.2867 (0.3570–0.5448)/0.1878
k (1.2372–2.1164)/0.8792 (0.6718–2.1659)/1.4941 (1.5917–2.1854)/0.5937
25 h (0.3276–0.5379)/0.2103 (0.2654–0.5123)/0.2469 (0.3771–0.5356)/0.1585
k (1.4766–2.1282)/0.6516 (0.6709–2.0159)/1.3450 (1.6605–2.1398)/0.4793
50 3 35 h (0.3291–0.5527)/0.2236 (0.3129–0.6402)/0.3273 (0.3624–0.5670)/0.2046
k (1.1269–2.1455)/1.0186 (0.9022–2.0971)/1.1949 (1.1566–2.0701)/0.9135
40 h (0.3612–0.5762)/0.2150 (0.3251–0.6214)/0.2963 (0.3761–0.5773)/0.2012
k (1.2007–2.1894)/0.9887 (0.9751–2.1199)/1.1448 (1.2948–2.0829)/0.7881
45 h (0.3651–0.5708)/0.2057 (0.3582–0.6212)/0.2630 (0.3881–0.5669)/0.1788
k (1.2716–2.0889)/0.8173 (0.9771–2.0978)/1.1207 (1.3493–2.0791)/0.7298
5 35 h (0.3271–0.5256)/0.1985 (0.4081–0.6192)/0.2111 (0.3954–0.5857)/0.1903
k (1.2039–2.1182)/0.9143 (0.9211–2.0861)/1.1650 (1.3509–2.0764)/0.7255
40 h (0.3440–0.5287)/0.1847 (0.4271–0.6121)/0.1850 (0.3986–0.5644)/0.1658
k (1.2451–2.0973)/0.8522 (0.9514–2.0817)/1.1303 (1.4016–2.0619)/0.6603
45 h (0.3544–0.5281)/0.1737 (0.4375–0.6151)/0.1776 (0.4022–0.5606)/0.1584
k (1.3141–2.1072)/0.7931 (1.0724–2.1142)/1.0418 (1.4097–2.0963)/0.6866

Next, to obtain the shrinkage preliminary test estimates, we first calculate the test statistic $W_R$ under the null hypothesis $H_0: \theta = \theta_0$. In the process we considered $w = 0.5$ and a type-I error equal to 0.05, and assumed $\lambda_0 = 2.2$ and $\theta_0 = 0.45$ as non-sample prior information. We then make use of the maximum likelihood estimates obtained by the EM algorithm for the maximum likelihood-based shrinkage preliminary test estimates, and of the Bayes estimates obtained by importance sampling for the Bayes-based shrinkage preliminary test estimates. All the estimates are reported in Table 10, and to compare them we use the relative efficiency (RE). Note that the RE of a suggested estimator, say $\tilde{\theta}$, to an estimator $\hat{\theta}$ is defined by $RE(\tilde{\theta} : \hat{\theta}) = MSE(\hat{\theta})/MSE(\tilde{\theta})$, and an RE larger than one indicates the degree of superiority of the estimator $\tilde{\theta}$ over $\hat{\theta}$. It is observed that the Bayes shrinkage preliminary test estimates have higher relative efficiencies than the maximum likelihood-based estimates. Further, the performance of the Bayes shrinkage preliminary test estimates of $\lambda$ under the LINEX loss is found to be more satisfactory than that of the other Bayes shrinkage preliminary test estimates. Further, Table 11 reports the first predictive estimated values using the classical and Bayesian predictors. Notice that the Bayesian predictive estimates are computed under both the NIN and IN priors. The tabulated values suggest that the predicted observations using the CMP are generally smaller than those using the BUP and the Bayes predictors. Also, it can be seen that a higher value of $r$ with fixed $n$ leads to smaller predictive estimates. Finally, in Table 12 we report the predictive interval estimates along with AILs. It is observed that the predictive intervals using the HCD method have smaller AILs than those using the pivotal method, and the predictive ET credible intervals have smaller AILs under the IN prior than under the NIN prior.

Table 10. Average and RE values of shrinkage preliminary test estimates.

                         NIN prior                                          IN prior
               LINEX                GEL                          LINEX                GEL
n   T   r   MLE   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25
30 3 20 h Avg 0.5068 0.5182 0.5410 0.5643 0.5917 0.6107 0.5591 0.5570 0.5739 0.6748 0.6696
RE 1.0251 1.0729 1.0626 1.0858 1.0512 1.0423 1.0631 1.0626 1.0706 1.0328 1.0308
k Avg 1.8991 1.5581 1.5503 1.8949 1.3912 1.3818 1.6913 1.6842 1.9037 1.4971 1.5630
RE 1.0174 1.0872 1.0941 1.0095 1.0976 1.0955 1.0121 1.0451 1.0064 1.0941 1.0918
23 h Avg 0.5055 0.4851 0.5129 0.5699 0.6162 0.6567 0.5481 0.5431 0.5682 0.7102 0.7071
RE 1.0374 1.1272 1.1251 1.1344 1.1371 1.1326 1.0889 1.1277 1.1643 1.1451 1.1432
k Avg 1.8545 1.5112 1.4922 1.8573 1.3251 1.3115 1.6271 1.6065 1.8443 1.5848 1.4843
RE 1.0142 1.1151 1.1215 1.0271 1.1224 1.1209 1.1231 1.1629 1.1217 1.1228 1.1207
25 h Avg 0.5037 0.5037 0.5473 0.5777 0.6014 0.6415 0.5433 0.5335 0.5499 0.7144 0.7054
RE 1.0524 1.1291 1.1308 1.1361 1.1381 1.1321 1.1082 1.1311 1.1338 1.1385 1.1325
k Avg 1.8274 1.3941 1.3706 1.7895 1.1672 1.1492 1.5671 1.5468 1.8118 1.4121 1.3930
RE 1.0167 1.1175 1.1114 1.0274 1.1321 1.1302 1.1241 1.1698 1.1208 1.1218 1.1205
5 20 h Avg 0.5086 0.5214 0.5538 0.5519 0.5712 0.5932 0.5731 0.5715 0.5516 0.6591 0.6471
RE 1.0305 1.1812 1.1705 1.1841 1.1828 1.1821 1.1428 1.1417 1.1445 1.1512 1.1522
k Avg 1.9777 1.9777 1.597 1.9075 1.4451 1.4361 1.7371 1.7262 1.9182 1.5400 1.6289
RE 1.0027 1.0026 1.0801 1.0998 1.1017 1.1179 1.0428 1.0469 1.0986 1.0541 1.0693
23 h Avg 0.5031 0.5571 0.5884 0.5871 0.6172 0.6475 0.6117 0.6016 0.5684 0.7154 0.7061
RE 1.0471 1.1302 1.1398 1.1308 1.1412 1.1481 1.1414 1.1498 1.1303 1.1492 1.1421
k Avg 1.9276 1.5371 1.5148 1.8605 1.3571 1.3345 1.6871 1.6656 1.8834 1.5811 1.5598
RE 1.0021 1.0671 1.0778 1.0014 1.1021 1.1053 1.0437 1.0514 1.0998 1.0891 1.0763
25 h Avg 0.4996 0.5471 0.5736 0.5773 0.6019 0.6393 0.5207 0.5117 0.5611 0.7012 0.6908
RE 1.0481 1.1267 1.1352 1.1378 1.1461 1.1475 1.1521 1.1524 1.1399 1.1546 1.1545
k Avg 1.8875 1.4417 1.4207 1.8084 1.4051 1.4248 1.5821 1.5611 1.8176 1.4869 1.4364
RE 1.0027 1.0981 1.1112 1.0085 1.1321 1.1321 1.0412 1.0484 1.0037 1.0521 1.0636
50 3 35 h Avg 0.5614 0.5571 0.5602 0.6128 0.6011 0.5997 0.5928 0.5907 0.5797 0.6311 0.6290
RE 1.0325 1.1228 1.1229 1.1983 1.1905 1.1902 1.1431 1.1430 1.1998 1.1897 1.1897
k Avg 1.9686 1.3641 1.3550 1.7254 1.4012 1.3934 1.5102 1.5057 1.7671 1.3341 1.3545
RE 1.0269 1.0999 1.0988 1.0337 1.1447 1.1526 1.0722 1.0761 1.0299 1.1851 1.1877
40 h Avg 0.5361 0.5171 0.5767 0.5796 0.5619 0.5889 0.5217 0.5526 0.5835 0.5971 0.6496
RE 1.0523 1.1219 1.1292 1.1715 1.1871 1.1821 1.1245 1.1318 1.1776 1.1732 1.1859
k Avg 1.7388 1.5281 1.4528 1.7667 1.5214 1.4706 1.5358 1.5309 1.7974 1.4039 1.4031
RE 1.0292 1.0762 1.0808 1.0441 1.1519 1.1525 1.0651 1.0663 1.0385 1.1721 1.1735
45 h Avg 0.5338 0.5041 0.5507 0.5386 0.5217 0.4984 0.5291 0.5718 0.5363 0.5391 0.5823
RE 1.0546 1.1348 1.1302 1.1858 1.1871 1.1847 1.1391 1.1403 1.2475 1.8745 1.1879
k Avg 1.8022 1.5421 1.5310 1.8599 1.6124 1.6033 1.7121 1.6359 1.8389 1.5829 1.5374
RE 1.0308 1.0564 1.0575 1.1772 1.1778 1.1867 1.0879 1.0904 1.0558 1.1282 1.1271
5 35 h Avg 0.5584 0.6491 0.6272 0.5803 0.6381 0.6390 0.6268 0.6246 0.5736 0.6331 0.6314
RE 1.0154 1.1291 1.1289 1.1674 1.1771 1.1771 1.1275 1.1274 1.1642 1.7431 1.7219
k Avg 1.8171 1.4371 1.4243 1.7542 1.4102 1.3933 1.5412 1.5306 1.7773 1.3914 1.4141
RE 1.0004 1.0697 1.0688 1.0868 1.0771 1.0694 1.0491 1.0526 1.0726 1.0971 1.0990
40 h Avg 0.5351 0.5971 0.6462 0.5856 0.5471 0.5600 0.6614 0.7048 0.5891 0.5171 0.5632
RE 1.0153 1.1871 1.2176 1.2572 1.1871 1.1806 1.2298 1.2335 1.1883 1.1843 1.1959
k Avg 1.8816 1.5761 1.5172 1.7999 1.5181 1.4649 1.7115 1.6414 1.8441 1.5281 1.5458
RE 1.0006 1.0428 1.0561 1.0042 1.0759 1.0923 1.0314 1.0434 1.0977 1.0523 1.0566
45 h Avg 0.5217 0.6371 0.6864 0.5409 0.5471 0.5614 0.5971 0.6451 0.5160 0.4371 0.4875
RE 1.0521 1.1481 1.1397 1.1373 1.1401 1.1506 1.1397 1.1459 1.1579 1.1097 1.1865
k Avg 1.9605 1.6671 1.6553 1.7879 1.5871 1.5778 1.7951 1.7264 1.8892 1.6172 1.6551
RE 1.0010 1.0413 1.0436 1.0597 1.1367 1.1377 1.0165 1.0265 1.0592 1.1451 1.1394

9. Conclusion

In this paper, the problems of estimation and prediction from the classical and Bayesian viewpoints are considered when lifetime data following the Poisson-exponential distribution are observed under type-I hybrid censoring. Under classical estimation, we observed

Table 11. Best unbiased, conditional median and Bayesian predictive estimates of $x_{R+1}$.

            Classical prediction    Bayesian prediction under NIN prior               Bayesian prediction under IN prior
                                    LINEX               GEL                           LINEX               GEL
n   T   r   BUP      CMP       ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25   ε = -0.25   ε = 0.5   SEL   q = -0.5   q = 1.25
30 3 20 5.8324 2.9155 7.2491 6.8751 6.5814 6.9817 6.2492 5.4992 6.1567 5.2753 5.4812 5.1717
23 3.6771 1.3772 5.6784 4.9178 3.7619 4.9618 4.1358 2.7222 3.0121 2.7047 2.8951 2.6399
25 2.5971 1.1292 4.2819 3.8272 2.8618 3.8581 3.5618 2.0867 2.3781 2.0608 2.3211 2.0022
5 20 5.8989 2.9203 7.5912 7.0172 6.8912 7.0714 6.5482 5.6166 6.2341 5.3977 5.5412 5.2977
23 3.9941 1.5637 5.8918 5.4926 4.7386 5.6127 5.0195 3.9361 4.1121 3.8264 3.8254 3.6923
25 2.5496 1.1436 4.6933 4.2761 3.5287 4.5181 4.0277 2.9371 4.2451 2.9109 3.0124 2.8511
50 3 35 8.1408 3.7374 12.678 11.895 11.023 12.081 11.968 9.8659 11.017 9.6734 9.7821 9.6841
40 5.7268 2.5298 9.9178 9.2879 8.5482 9.0172 8.6671 5.9637 6.6145 5.8711 6.1204 5.8391
45 2.4663 1.1799 5.0916 4.7619 4.1109 4.8129 3.8914 2.8866 3.1278 2.8716 3.1854 2.8381
5 35 8.7951 3.1263 13.682 12.391 11.796 12.015 11.192 9.2255 11.112 9.8943 10.182 9.9189
40 5.6286 2.7739 9.7951 8.8591 7.4962 8.2091 7.0291 5.6981 6.3214 5.5931 5.7512 5.5514
45 2.4045 1.2009 4.5184 3.7945 3.6603 3.8381 3.3984 2.6917 2.9812 2.6759 2.8372 2.6375

Table 12. Predictive interval estimates of $x_{R+1}$.

            Classical prediction                                 Bayesian prediction
n   T   r   Pivotal method interval/AIL   HCD method interval/AIL   ET under NIN prior interval/AIL   ET under IN prior interval/AIL
30 3 20 (0.8624–5.8987)/5.0363 (0.5994–3.9266)/3.3272 (1.1591–5.3028)/4.1437 (3.1319–5.8694)/2.7375
23 (0.6138–3.8030)/3.1892 (0.4359–2.8160)/2.3801 (0.9028–4.3092)/3.4064 (2.1465–4.8530)/2.7065
25 (0.5499–2.7585)/2.2086 (0.3502–2.1128)/1.7626 (0.7892–3.5028)/2.7136 (0.9028–3.0269)/2.1241
5 20 (0.8640–6.0445)/5.1805 (0.5794–3.9349)/3.3555 (1.0018–5.0198)/4.0180 (3.1119–5.9166)/2.8047
23 (0.6256–4.0049)/3.3793 (0.4796–2.9146)/2.4350 (0.7985–3.9818)/3.1833 (1.8229–4.5171)/2.6942
25 (0.5410–3.5732)/3.0322 (0.3788–2.0819)/1.7031 (0.5793–3.2102)/2.6309 (0.3455–3.3242)/2.9787
50 3 35 (0.9470–7.3363)/6.3893 (0.3561–4.1731)/3.8170 (2.2010–7.0982)/4.8972 (5.7937–9.0801)/3.2864
40 (0.7528–5.8360)/5.0832 (0.2624–3.3829)/3.1205 (1.2502–5.0391)/3.7889 (3.2890–6.4357)/3.1467
45 (0.4405–4.7829)/4.3424 (0.4111–3.0969)/2.6858 (0.8918–4.0198)/3.1280 (1.9884–4.7733)/2.7849
5 35 (1.0667–8.9324)/7.8657 (0.5435–4.7530)/4.2095 (2.1099–7.2573)/5.1474 (5.0963–9.2780)/4.1817
40 (0.8123–6.3157)/5.5034 (0.5326–3.6889)/3.1563 (1.0981–4.7602)/3.6621 (3.0018–6.3716)/3.3698
45 (0.4349–4.1281)/3.6932 (0.4764–2.3015)/1.8251 (0.9376–3.9915)/3.0539 (1.8915–4.1733)/2.2788

that the MLEs of the unknown parameters of the distribution do not admit closed forms; we therefore employed the EM algorithm to compute the maximum likelihood estimates, and further computed approximate confidence intervals using the missing information principle and the asymptotic normality of the MLEs. However, the E-step of the EM algorithm involves complicated and intractable expressions, and to avoid this difficulty we suggested the SEM algorithm. In the simulation study, the SEM algorithm also performed well compared with the EM algorithm. In the Bayesian approach, since the Bayes estimators are not available in closed form, we obtained them under the SEL, LINEX and GEL loss functions using Lindley's approximation and importance sampling. Further, with the help of importance sampling and the form of the posterior density, we computed highest posterior density interval estimates using the method of Chen and Shao (1999). On the basis of average and mean squared error values, our simulation study found the EM algorithm more satisfactory than the NR method, and the Bayes estimators under the informative prior more satisfactory than their competitors. Furthermore, to provide predictive inference on the censored observations, we used the best unbiased and conditional median predictors under the classical approach and importance sampling under the Bayesian approach. In the real data analysis we observed that the predicted observations are close to the true observations, and further that the predictive interval estimates contain the true observations.
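To make the SEM iteration described above concrete, the following sketch (an illustrative Python outline, not the paper's exact implementation; the function names and the Nelder-Mead maximizer are our own choices) imputes the censored lifetimes from the Poisson-exponential distribution truncated to (C, ∞) and then maximizes the complete-data log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def pe_cdf(x, theta, lam):
    # Poisson-exponential CDF: F(x) = (e^{-theta e^{-lam x}} - e^{-theta}) / (1 - e^{-theta})
    return (np.exp(-theta * np.exp(-lam * x)) - np.exp(-theta)) / (1.0 - np.exp(-theta))

def pe_quantile(p, theta, lam):
    # Analytic inverse of the CDF above
    inner = p * (1.0 - np.exp(-theta)) + np.exp(-theta)
    return -np.log(-np.log(inner) / theta) / lam

def sem_step(obs, n, C, theta, lam):
    # S-step: impute each of the n - R censored lifetimes from the
    # Poisson-exponential distribution truncated to (C, infinity)
    u = rng.uniform(pe_cdf(C, theta, lam), 1.0, size=n - obs.size)
    full = np.concatenate([obs, pe_quantile(u, theta, lam)])
    # M-step: maximize the complete-data log-likelihood numerically
    def nll(p):
        th, la = p
        if th <= 0 or la <= 0:
            return np.inf
        return -(full.size * (np.log(th) + np.log(la))
                 - la * full.sum()
                 - th * np.exp(-la * full).sum()
                 - full.size * np.log1p(-np.exp(-th)))
    return minimize(nll, [theta, lam], method="Nelder-Mead").x
```

Iterating `sem_step` from the current estimates and averaging the iterates after a burn-in gives the SEM estimate of (θ, λ).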

Acknowledgments
The authors would like to thank the Editor, Associate Editor and anonymous reviewers for their constructive and valuable suggestions, which led to much improvement over the earlier version of this manuscript.

ORCID
Reza Arabi Belaghi http://orcid.org/0000-0002-6989-9267
Sukhdev Singh http://orcid.org/0000-0001-6282-4281

Appendix
The complete information is given by
$$
I_W(\lambda,\theta) =
\begin{bmatrix}
\dfrac{n}{\lambda^{2}} + n\lambda\theta^{2}\left(1-e^{-\theta}\right)^{-1}\displaystyle\int_{0}^{\infty} x^{2} e^{-2\lambda x - \theta e^{-\lambda x}}\,dx &
n\lambda\theta\left(1-e^{-\theta}\right)^{-1}\displaystyle\int_{0}^{\infty} x e^{-2\lambda x - \theta e^{-\lambda x}}\,dx \\[2ex]
n\lambda\theta\left(1-e^{-\theta}\right)^{-1}\displaystyle\int_{0}^{\infty} x e^{-2\lambda x - \theta e^{-\lambda x}}\,dx &
\dfrac{n}{\theta^{2}} - \dfrac{n e^{-\theta}}{\left(1-e^{-\theta}\right)^{2}}
\end{bmatrix}.
$$

Further, the missing information, denoted by $I_{W|X}(\theta,\lambda)$, can be computed as
$$
I_{W|X}(\theta,\lambda) = (n-R)\, I_{Z_i|X}(\theta,\lambda)
= (n-R)\begin{bmatrix} a_{11}(C,\theta,\lambda) & a_{12}(C,\theta,\lambda) \\ a_{21}(C,\theta,\lambda) & a_{22}(C,\theta,\lambda) \end{bmatrix},
$$
where
$$
a_{11}(x,\theta,\lambda) = \frac{1}{\lambda^{2}} - \frac{\theta x^{2} e^{-\lambda x - \theta e^{-\lambda x}}\left(e^{-\theta e^{-\lambda x}} + \theta e^{-\lambda x} - 1\right)}{\left(1-e^{-\theta e^{-\lambda x}}\right)^{2}},
$$
$$
a_{12}(x,\theta,\lambda) = a_{21}(x,\theta,\lambda) = -\frac{x e^{-\lambda x - \theta e^{-\lambda x}}\left(1 - e^{-\theta e^{-\lambda x}} - \theta e^{-\lambda x}\right)}{\left(1-e^{-\theta e^{-\lambda x}}\right)^{2}}
- \theta\lambda\left(1-e^{-\theta}\right)^{-1}\int_{0}^{\infty} x e^{-2\lambda x - \theta e^{-\lambda x}}\,dx,
$$
$$
a_{22}(x,\theta,\lambda) = \frac{1}{\theta^{2}} - \frac{e^{-\theta e^{-\lambda x} - 2\lambda x}}{\left(1-e^{-\theta e^{-\lambda x}}\right)^{2}}.
$$
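By the missing information principle, the observed information is $I_X = I_W - I_{W|X}$. The sketch below (illustrative Python; `observed_info` and `approx_se` are our own hypothetical names, and it assumes the matrix entries given above) evaluates both matrices numerically, with rows and columns ordered (λ, θ), and returns approximate standard errors from the inverse observed information:

```python
import numpy as np
from scipy.integrate import quad

def observed_info(theta, lam, n, R, C):
    # Complete information I_W(lam, theta); the two integrals are evaluated numerically
    c = 1.0 / (1.0 - np.exp(-theta))
    g2 = quad(lambda x: x**2 * np.exp(-2*lam*x - theta*np.exp(-lam*x)), 0, np.inf)[0]
    g1 = quad(lambda x: x * np.exp(-2*lam*x - theta*np.exp(-lam*x)), 0, np.inf)[0]
    IW = np.array([
        [n/lam**2 + n*lam*theta**2*c*g2, n*lam*theta*c*g1],
        [n*lam*theta*c*g1, n/theta**2 - n*np.exp(-theta)/(1-np.exp(-theta))**2],
    ])
    # Missing information (n - R) * I_{Z|X}: a11, a12, a22 evaluated at x = C
    u, e = np.exp(-theta*np.exp(-lam*C)), np.exp(-lam*C)
    a11 = 1/lam**2 - theta*C**2*e*u*(u + theta*e - 1)/(1-u)**2
    a12 = -C*e*u*(1 - u - theta*e)/(1-u)**2 - theta*lam*c*g1
    a22 = 1/theta**2 - e**2*u/(1-u)**2
    Imiss = (n - R)*np.array([[a11, a12], [a12, a22]])
    return IW - Imiss

def approx_se(theta, lam, n, R, C):
    # Asymptotic standard errors of (lam_hat, theta_hat) from the inverse observed information
    V = np.linalg.inv(observed_info(theta, lam, n, R, C))
    return np.sqrt(np.diag(V))
```

Plugging the MLEs into `approx_se` gives the standard errors used to build the approximate normal-theory confidence intervals.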

Lindley's approximation: For the two-parameter case $(\Theta_1,\Theta_2)$, Lindley's approximation to $E(g(\Theta)\mid x)$ is given by
$$
\hat{g}(\Theta) = g(\hat{\Theta}_1,\hat{\Theta}_2) + \frac{1}{2}\left[A + l_{30}B_{12} + l_{03}B_{21} + l_{21}C_{12} + l_{12}C_{21}\right] + \rho_1 A_{12} + \rho_2 A_{21}, \tag{16}
$$
where
$$
A = \sum_{i=1}^{2}\sum_{j=1}^{2} w_{ij}\sigma_{ij}, \qquad
l_{ij} = \frac{\partial^{\,i+j} L(\Theta_1,\Theta_2)}{\partial\Theta_1^{i}\,\partial\Theta_2^{j}}, \qquad
\rho_i = \frac{\partial\rho}{\partial\Theta_i}, \qquad
w_i = \frac{\partial g}{\partial\Theta_i}, \qquad
w_{ij} = \frac{\partial^{2} g}{\partial\Theta_i\,\partial\Theta_j},
$$
$$
\rho = \ln \pi(\Theta_1,\Theta_2), \qquad
A_{ij} = w_i\sigma_{ii} + w_j\sigma_{ji}, \qquad
B_{ij} = \left(w_i\sigma_{ii} + w_j\sigma_{ij}\right)\sigma_{ii}, \qquad
C_{ij} = 3w_i\sigma_{ii}\sigma_{ij} + w_j\left(\sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^{2}\right).
$$
Here $L(\cdot,\cdot)$ denotes the log-likelihood function, $\pi(\Theta_1,\Theta_2)$ denotes the prior distribution, and $\sigma_{ij}$ is the $(i,j)$th element of the inverse of the Fisher information matrix. Note that the expressions in (16) are evaluated at the maximum likelihood estimates $(\hat{\Theta}_1,\hat{\Theta}_2)$. For our estimation problem, with $(\Theta_1,\Theta_2) = (\theta,\lambda)$, the approximate Bayes estimates of $\theta$ and $\lambda$ are computed from expression (16), and the quantities involved are evaluated as

$$
l_{11} = \sum_{i=1}^{R} x_i e^{-\lambda x_i}
- (n-R)\left[\frac{C e^{-\lambda C - \theta e^{-\lambda C}}\left(1-\theta e^{-\lambda C}\right)}{1-e^{-\theta e^{-\lambda C}}}
- \frac{\theta C e^{-2\lambda C - 2\theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}}\right],
$$
$$
l_{02} = -\frac{R}{\lambda^{2}} - \theta\sum_{i=1}^{R} x_i^{2} e^{-\lambda x_i}
+ (n-R)\left[\frac{\theta C^{2} e^{-\lambda C - \theta e^{-\lambda C}}\left(1-\theta e^{-\lambda C}\right)}{1-e^{-\theta e^{-\lambda C}}}
- \frac{\theta^{2} C^{2} e^{-2\lambda C - 2\theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}}\right],
$$
$$
l_{20} = -\frac{R}{\theta^{2}} + \frac{n e^{-\theta}}{\left(1-e^{-\theta}\right)^{2}}
- (n-R)\,\frac{e^{-2\lambda C - \theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}},
$$
$$
l_{12} = -\sum_{i=1}^{R} x_i^{2} e^{-\lambda x_i}
+ (n-R)\left[\frac{C^{2} e^{-\lambda C - \theta e^{-\lambda C}}\left(\theta^{2} e^{-2\lambda C} - 3\theta e^{-\lambda C} + 1\right)}{1-e^{-\theta e^{-\lambda C}}}
- \frac{3\theta C^{2} e^{-2\lambda C - 2\theta e^{-\lambda C}}\left(1-\theta e^{-\lambda C}\right)}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}}
+ \frac{2\theta^{2} C^{2} e^{-3\lambda C - 3\theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{3}}\right],
$$
$$
l_{21} = (n-R)\left[\frac{C e^{-2\lambda C - \theta e^{-\lambda C}}\left(2-\theta e^{-\lambda C}\right)}{1-e^{-\theta e^{-\lambda C}}}
+ \frac{C e^{-2\lambda C - 2\theta e^{-\lambda C}}\left(2-3\theta e^{-\lambda C}\right)}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}}
- \frac{2\theta C e^{-3\lambda C - 3\theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{3}}\right],
$$
$$
l_{30} = \frac{2R}{\theta^{3}} - \frac{n e^{-\theta}}{\left(1-e^{-\theta}\right)^{2}} + \frac{2n e^{-2\theta}}{\left(1-e^{-\theta}\right)^{3}}
+ (n-R)\,\frac{e^{-3\lambda C - \theta e^{-\lambda C}}\left(1+e^{-\theta e^{-\lambda C}}\right)}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{3}},
$$
$$
l_{03} = \frac{2R}{\lambda^{3}} + \theta\sum_{i=1}^{R} x_i^{3} e^{-\lambda x_i}
- (n-R)\left[\frac{\theta C^{3} e^{-\lambda C - \theta e^{-\lambda C}}\left(\theta^{2} e^{-2\lambda C} - 3\theta e^{-\lambda C} + 1\right)}{1-e^{-\theta e^{-\lambda C}}}
- \frac{3\theta^{2} C^{3} e^{-2\lambda C - 2\theta e^{-\lambda C}}\left(1-\theta e^{-\lambda C}\right)}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{2}}
+ \frac{2\theta^{3} C^{3} e^{-3\lambda C - 3\theta e^{-\lambda C}}}{\left(1-e^{-\theta e^{-\lambda C}}\right)^{3}}\right].
$$
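The approximation (16) can also be evaluated without the closed-form derivatives above. The following sketch is a generic two-parameter Lindley evaluator (our own illustrative code, not the paper's implementation), taking all derivatives by central finite differences at the MLE:

```python
import numpy as np

def lindley(loglik, logprior, g, theta_hat, h=1e-3):
    # Two-parameter Lindley approximation to E[g(Theta) | x], expression (16).
    # loglik, logprior, g are callables of a length-2 array; theta_hat is the MLE.
    t = np.asarray(theta_hat, dtype=float)
    e = [np.array([h, 0.0]), np.array([0.0, h])]

    def d1(f, i):  # first derivative in coordinate i
        return (f(t + e[i]) - f(t - e[i])) / (2 * h)

    def d2(f, i, j):  # second derivative
        if i == j:
            return (f(t + e[i]) - 2 * f(t) + f(t - e[i])) / h**2
        return (f(t + e[i] + e[j]) - f(t + e[i] - e[j])
                - f(t - e[i] + e[j]) + f(t - e[i] - e[j])) / (4 * h**2)

    def d3(f, i, j):  # third derivative d^2/d_i^2 d_j (i may equal j)
        if i == j:
            return (f(t + 2*e[i]) - 2*f(t + e[i])
                    + 2*f(t - e[i]) - f(t - 2*e[i])) / (2 * h**3)
        fj = lambda s: (f(s + e[j]) - f(s - e[j])) / (2 * h)
        return (fj(t + e[i]) - 2*fj(t) + fj(t - e[i])) / h**2

    H = np.array([[d2(loglik, i, j) for j in range(2)] for i in range(2)])
    S = np.linalg.inv(-H)  # sigma_ij: inverse of the information matrix
    w = [d1(g, 0), d1(g, 1)]
    wij = [[d2(g, i, j) for j in range(2)] for i in range(2)]
    rho = [d1(logprior, 0), d1(logprior, 1)]
    l30, l03 = d3(loglik, 0, 0), d3(loglik, 1, 1)
    l21, l12 = d3(loglik, 0, 1), d3(loglik, 1, 0)
    A = sum(wij[i][j] * S[i, j] for i in range(2) for j in range(2))
    A12 = w[0]*S[0, 0] + w[1]*S[1, 0]
    A21 = w[1]*S[1, 1] + w[0]*S[0, 1]
    B12 = (w[0]*S[0, 0] + w[1]*S[0, 1]) * S[0, 0]
    B21 = (w[1]*S[1, 1] + w[0]*S[1, 0]) * S[1, 1]
    C12 = 3*w[0]*S[0, 0]*S[0, 1] + w[1]*(S[0, 0]*S[1, 1] + 2*S[0, 1]**2)
    C21 = 3*w[1]*S[1, 1]*S[1, 0] + w[0]*(S[1, 1]*S[0, 0] + 2*S[1, 0]**2)
    return (g(t) + 0.5*(A + l30*B12 + l03*B21 + l21*C12 + l12*C21)
            + rho[0]*A12 + rho[1]*A21)
```

With a Gaussian log-likelihood and a flat prior, all correction terms vanish and the formula returns the MLE itself, which provides a quick sanity check of any implementation.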

References
Banerjee, A., and D. Kundu. 2008. Inference based on Type-II hybrid censored data from a
Weibull distribution. IEEE Transactions on Reliability 57 (2):369–78. doi:10.1109/TR.2008.
916890.
Balakrishnan, N., and R. Shafay. 2012. One- and two-sample Bayesian prediction intervals based
on Type-II hybrid censored data. Communications in Statistics - Theory and Methods 41 (9):
1511–31. doi:10.1080/03610926.2010.543300.
Basu, A. P., and J. P. Klein. 1982. Some recent results in competing risks theory. Lecture Notes-
Monograph Series 2:216–29.
Belaghi, R. A., M. Arashi, and S. Tabatabaey. 2015a. Improved estimators of the distribution
function based on lower record values. Statistical Papers 56 (2):453–77. doi:10.1007/s00362-
014-0591-9.
Belaghi, R. A., M. Arashi, and S. Tabatabaey. 2015b. On the construction of preliminary test esti-
mator based on record values for the Burr XII model. Communications in Statistics - Theory
and Methods 44 (1):1–23. doi:10.1080/03610926.2012.733473.

Belaghi, R. A., M. N. Asl, and S. Singh. 2017. On estimating the parameters of the Burr XII
model under progressive type-I interval censoring. Journal of Statistical Computation and
Simulation 87 (16):3132–51. doi:10.1080/00949655.2017.1359600.
Cancho, V. G., F. Louzada-Neto, and G. D. Barriga. 2011. The Poisson-exponential lifetime dis-
tribution. Computational Statistics & Data Analysis 55 (1):677–86. doi:10.1016/j.csda.2010.05.
033.
Chen, M.-H., and Q.-M. Shao. 1999. Monte Carlo estimation of Bayesian credible and HPD
intervals. Journal of Computational and Graphical Statistics 8 (1):69–92. doi:10.2307/1390921.
Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data
via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological) 39 (1):
1–38. doi:10.1111/j.2517-6161.1977.tb01600.x.
Dey, S., S. Singh, Y. M. Tripathi, and A. Asgharzadeh. 2016. Estimation and prediction for a pro-
gressively censored generalized inverted exponential distribution. Statistical Methodology 32:
185–202. doi:10.1016/j.stamet.2016.05.007.
Diebolt, J., and G. Celeux. 1993. Asymptotic properties of a stochastic EM algorithm for estimat-
ing mixing proportions. Stochastic Models 9 (4):599–613. doi:10.1080/15326349308807283.
Kumar, M., S. Kumar, S. Singh, and U. Singh. 2016. Reliability estimation for Poisson-
Exponential model under progressive type-II censoring data with binomial removal data.
Statistica 76 (1):3–26.
Kundu, D., and B. Pradhan. 2009. Estimating the parameters of the generalized exponential dis-
tribution in presence of hybrid censoring. Communications in Statistics - Theory and Methods
38 (12):2030–41. doi:10.1080/03610920802192505.
Kundu, D., and H. Howlader. 2010. Bayesian inference and prediction of the inverse Weibull dis-
tribution for Type-II censored data. Computational Statistics & Data Analysis 54 (6):1547–58.
doi:10.1016/j.csda.2010.01.003.
Lawless, J. F. 2011. Statistical models and methods for lifetime data. John Wiley & Sons.
Lindley, D. V. 1980. Approximate Bayesian methods. Trabajos de Estadistica Y de Investigacion
Operativa 31 (1):223–45. doi:10.1007/BF02888353.
Louis, T. A. 1982. Finding the observed information matrix when using the EM algorithm.
Journal of the Royal Statistical Society: Series B (Methodological) 44 (2):226–33. doi:10.1111/j.
2517-6161.1982.tb01203.x.
Louzada-Neto, F., V. G. Cancho, and G. D. Barriga. 2011. The Poisson-exponential distribution:
A Bayesian approach. Journal of Applied Statistics 38 (6):1239–48. doi:10.1080/02664763.2010.
491862.
Pepi, J. W. 1994. Failsafe design of an all BK-7 glass aircraft window. SPIE Proc 2286:431–43.
Pradhan, B., and D. Kundu. 2009. On progressively censored generalized exponential distribution.
Test 18 (3):497–515. doi:10.1007/s11749-008-0110-1.
Raqab, M., and H. Nagaraja. 1995. On some predictors of future order statistics. Metron 53
(1–2):185–204.
Rodrigues, G. C., F. Louzada, and P. L. Ramos. 2016. Poisson-exponential distribution: Different
methods of estimation. Journal of Applied Statistics 45 (1):128–144. doi:10.1080/02664763.2016.
1268571.
Saleh, A. M. E. 2006. Theory of preliminary test and Stein-type estimation with applications, Vol.
517, John Wiley & Sons.
Shafay, R., and N. Balakrishnan. 2012. One and two sample Bayesian prediction intervals based
on Type-I hybrid censored data. Communications in Statistics - Simulation and Computation
41 (1):65–88. doi:10.1080/03610918.2011.579367.
Singh, S., and Y. M. Tripathi. 2016. Bayesian estimation and prediction for a hybrid censored
lognormal distribution. IEEE Transactions on Reliability 65 (2):782–95. doi:10.1109/TR.2015.
2494370.
Singh, S., R. A. Belaghi, and M. N. Asl. 2019. Estimation and prediction using classical and Bayesian approaches for Burr III model under progressive type-I hybrid censoring. International Journal of System Assurance Engineering and Management 10 (4):746–64.
Singh, S. K., U. Singh, and M. Kumar. 2014. Estimation for the parameter of Poisson-exponential
distribution under Bayesian paradigm. Journal of Data Science 12 (1):157–73.
Singh, S. K., U. Singh, and M. Kumar. 2016. Bayesian estimation for Poisson-exponential model
under progressive type-II censoring data with binomial removal and its application to ovarian
cancer data. Communications in Statistics - Simulation and Computation 45 (9):3457–75. doi:
10.1080/03610918.2014.948189.
Singh, S., and Y. M. Tripathi. 2018. Estimating the parameters of an inverse Weibull distribution
under progressive type-I interval censoring. Statistical Papers 59 (1):21–56. doi:10.1007/s00362-
016-0750-2.
Tregouet, D., S. Escolano, L. Tiret, A. Mallet, and J. Golmard. 2004. A new algorithm for haplo-
type-based association analysis: The stochastic-EM algorithm. Annals of Human Genetics 68
(2):165–77. doi:10.1046/j.1529-8817.2003.00085.x.
Wang, F. K., and Y. Cheng. 2009. EM algorithm for estimating the Burr XII parameters with
multiple censored data. Quality and Reliability Engineering International 26 (6):615–30. doi:10.
1002/qre.1087.
Wei, G. C., and M. A. Tanner. 1990. A Monte Carlo implementation of the EM algorithm and
the poor man’s data augmentation algorithms. Journal of the American Statistical Association
85 (411):699–704. doi:10.1080/01621459.1990.10474930.
Zhang, M., Z. Ye, and M. Xie. 2014. A stochastic EM algorithm for progressively censored data
analysis. Quality and Reliability Engineering International 30 (5):711–22. doi:10.1002/qre.1522.
