
Available online at www.sciencedirect.com
Mathematics and Computers in Simulation 77 (2008) 34–44

Estimation of parameters of the Makeham distribution using the least squares method夽
Xinlong Feng a,b,∗ , Guoliang He b , Abdurishit a
a College of Mathematics and System Science, Xinjiang University, Urmuqi 830046, PR China
b Faculty of Science, Xi’an Jiaotong University, Xi’an 710049, PR China

Received 23 July 2006; received in revised form 15 January 2007; accepted 15 January 2007
Available online 20 January 2007

Abstract
The Makeham distribution has been used to describe human mortality and establish actuarial tables. Its hazard function is defined by μ(t) = A + BC^t. In this paper we use least squares type estimation to estimate the parameters of the Makeham distribution. Seven cases are considered, according to whether A, B and C are known or unknown. We also evaluate the mean square errors of these estimators.
© 2007 IMACS. Published by Elsevier B.V. All rights reserved.

AMS Classification: 62N05; 62J02

Keywords: Reliability; Makeham distribution; Least squares estimation; Parameter

1. Introduction

Reliability theory is a body of ideas, mathematical models, and methods directed to predict, estimate, understand,
and optimize the lifespan distribution of systems and their components. Reliability of the system refers to its ability to
operate properly according to a specified standard. Reliability is described by the reliability function (also called the
survival function) R(t), which is the probability that a system will carry out its mission through time t.
Assume that the distribution function of the life T of a system is F(t). The life of a system is obviously a non-negative number, and therefore F(t) = 0 for t ≤ 0. It follows that the corresponding density function f(t) is also zero for t ≤ 0. From the definition of reliability it follows that the reliability function R(t) of the system at time t is given by
R(t) = P(T > t) = 1 − P(T ≤ t) = 1 − F(t).   (1.1)
Assuming that the distribution function F(t) is differentiable with respect to t, we define, for t > 0,
μ(t) = (dF(t)/dt)/(1 − F(t)) = −(dR(t)/dt)/R(t).   (1.2)

夽 This research was subsidized by the NSF of China (10471110, 10571148, 10671154).
∗ Corresponding author at: College of Mathematics and System Science, Xinjiang University, Urmuqi 830046, PR China.
E-mail addresses: fxlmath@xju.edu.cn (X. Feng), hegl@mailst.xjtu.edu.cn (G. He), rashit@xju.edu.cn ( Abdurishit).

0378-4754/$32.00 © 2007 IMACS. Published by Elsevier B.V. All rights reserved.


doi:10.1016/j.matcom.2007.01.009

The function μ(t) is called the hazard function of the system at time t. It gives the instantaneous rate of failure of
the system at time t knowing that it was working in the interval [0, t). Furthermore, from (1.1) and (1.2) it follows
that
F(t) = 1 − exp(−∫_0^t μ(x) dx).   (1.3)
It is well known that the reliability function takes the following forms in several special cases.

Case 1. If μ(t) = μ = const, then

S(t) = exp(−μt),  μ > 0,   (1.4)

where μ is the parameter; the reliability function of the system is described by the exponential distribution.

Case 2. If μ(t) = at^k, then

S(t) = exp(−(a/(k + 1)) t^{k+1}),  a > 0, k > 0,   (1.5)

where a, k are the parameters; the reliability function of the system is described by the Weibull distribution.

Case 3. If μ(t) = BC^t, then

S(t) = exp(−(B/ln C)(C^t − 1)),  B > 0, C > 1,   (1.6)

where B, C are the parameters; the reliability function of the system is described by the Gompertz distribution.

Case 4. If μ(t) = A + BC^t, then

S(t) = exp(−At − (B/ln C)(C^t − 1)),  B > 0, A ≥ −B, C > 1,   (1.7)

where A, B, C are the parameters; the reliability function of the system is described by the Makeham distribution [12].
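For concreteness, a minimal Python sketch (not from the paper) evaluating the Case 4 hazard and survival functions; the parameter values are only illustrative, chosen to match the simulation settings used later (C = e^{0.5}):

```python
import numpy as np

def makeham_hazard(t, A, B, C):
    # mu(t) = A + B C^t, Case 4
    return A + B * C ** t

def makeham_survival(t, A, B, C):
    # S(t) = exp(-A t - (B / ln C)(C^t - 1)), Eq. (1.7)
    return np.exp(-A * t - (B / np.log(C)) * (C ** t - 1.0))

t = np.linspace(0.0, 5.0, 6)
print(makeham_hazard(t, A=-0.185, B=0.2, C=np.exp(0.5)))
print(makeham_survival(t, A=-0.185, B=0.2, C=np.exp(0.5)))
```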

Balasooriya [4] gave failure-censored reliability sampling plans for Case 1, and Bernhard [5] gave a class of tests for exponentiality against HNBUE alternatives. Bain [2] studied the distribution having the linear hazard function μ(t) = a + bt and obtained the least squares type estimates of the parameters a and b, while Lin et al. [11] estimated the parameters based on records and inter-record times. Various properties of the linear hazard function have been studied by the authors of Refs. [3,19]. Mugdadi [15] studied Case 2 and obtained the least squares type estimates of the parameters a and k; three cases are discussed there. Many authors have studied Case 3 and obtained parameter estimates, uniqueness results, etc. (see [1,8,10,13,14,21–23]). Scollnik [18] investigated Case 4 and simulated random variates from Makeham's distribution and from others with exact or nearly log-concave densities. Golubev [9] pointed out that the Makeham distribution does make a lot of sense and should not be ignored in the analysis of mortality data. A meaningful interpretation of the Makeham parameter seems more intuitively clear than the interpretations suggested so far in physical, biochemical and/or physiological terms for the parameters of the Gompertz member of the GM law. If the GM law of mortality and the geographical Strehler–Mildvan correlation are indeed true reflections of biological laws, as some authors feel, then it is expedient to provide meaningful biological interpretations for all their terms and to define the factors that superimpose on these laws or interfere with them, rather than to refute or modify the laws to produce the observed deviations. So the term A should not be ignored in any analysis of mortality data.
In this paper, we study only Case 4 and obtain the least squares type estimates of the parameters using the smallest r observations of a censored sample. The results for the complete sample are easily obtained by taking r = n. Finally, we evaluate the mean square errors of these estimators. Seven cases are discussed in the following: (i) A is the parameter and B, C are known; (ii) B is the parameter and A, C are known; (iii) C is the parameter and A, B are known; (iv) A, B are the parameters and C is known; (v) A, C are the parameters and B is known; (vi) B, C are the parameters and A is known; (vii) A, B, C are the parameters. Notice that the maximum likelihood method may be ineffective in cases (iii), (v), (vi) and (vii), where C is a parameter, as compared with least squares estimation, since in these four cases the nonlinear equations for C obtained by the MLE method are very complex, containing unwieldy terms such as ∑_{i=1}^{r} 1/(A + BC^{t_i}), ∑_{i=1}^{r} t_i BC^{t_i}/(A + BC^{t_i}) or ∑_{i=1}^{r} BC^{t_i}/(ln C)^2. In view of the numerical computation required to solve the nonlinear equations, the least squares approximation is very effective.

2. A is the parameter and B, C are known

Setting C = e^α, the Makeham distribution function is

F(t) = 1 − exp[−At − (B/α)(e^{αt} − 1)]   (2.1)

and therefore the density function is

f(t) = (A + B e^{αt}) exp[−At − (B/α)(e^{αt} − 1)],  α > 0.   (2.2)
Let t_1, . . . , t_r denote the smallest r observations in a random sample of size n from the density function (2.2). Let θ represent the vector of parameters. We apply the approach of Refs. [6–8,15–17,20,22] to obtain the least squares type estimate of θ, that is, by minimizing

∑_{i=1}^{r} [ln(1 − F(t_i, θ)) − ln E(1 − F(t_i, θ))]^2   (2.3)

over the parameter space, where

F(t, θ) = 1 − exp[−At − (B/α)(e^{αt} − 1)].   (2.4)
Let H(t, θ) = −ln[1 − F(t, θ)] and β_i = −ln[1 − EF(t_i, θ)]. Thus, we can write (2.3) in the form

∑_{i=1}^{r} [ln(1 − F(t_i, θ)) − ln E(1 − F(t_i, θ))]^2 = ∑_{i=1}^{r} [H(t_i, θ) − β_i]^2.   (2.5)

It is clear that (2.5) will be minimized when H(ti , θ) = βi , i = 1, . . . , r. This leads to a system of linear equations
which in matrix notation can be written as

β = Xθ,

where

β = (β_1 − (B/α)(e^{αt_1} − 1), β_2 − (B/α)(e^{αt_2} − 1), . . . , β_r − (B/α)(e^{αt_r} − 1))^T,  X = (t_1, t_2, . . . , t_r)^T  and  θ = A.

We can estimate θ, when (X′X)^{−1} exists, by

θ̂ = (X′X)^{−1} X′β,

 
where X′X = ∑_{i=1}^{r} t_i^2. Obviously, (X′X)^{−1} exists and is given by (X′X)^{−1} = 1/∑_{i=1}^{r} t_i^2. It is easy to show that E(F(t_i, θ)) = i/(n + 1). Thus

β_i = −ln(1 − i/(n + 1)),  i = 1, . . . , r.
Finally, we have

Â = ∑_{i=1}^{r} [β_i − (B/α)(e^{αt_i} − 1)] t_i / ∑_{i=1}^{r} t_i^2.
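As an illustration only, a minimal Python sketch of this closed-form estimator, assuming B and α are known and the data array holds the smallest r observations (all names and values below are illustrative, not the authors' code):

```python
import numpy as np

def beta_values(r, n):
    # beta_i = -ln(1 - i/(n+1)), i = 1, ..., r
    i = np.arange(1, r + 1)
    return -np.log(1.0 - i / (n + 1.0))

def estimate_A(t, n, B, alpha):
    """Closed-form least squares type estimate of A when B and alpha are known."""
    t = np.asarray(t, dtype=float)
    beta = beta_values(t.size, n)
    num = np.sum((beta - (B / alpha) * (np.exp(alpha * t) - 1.0)) * t)
    den = np.sum(t ** 2)
    return num / den

# Illustrative usage with made-up censored data (r = 5 smallest of n = 10)
t = np.array([0.3, 0.7, 1.1, 1.6, 2.0])
print(estimate_A(t, n=10, B=0.2, alpha=0.5))
```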

3. B is the parameter and A, C are known

Similar to Section 2, we have

B̂ = ∑_{i=1}^{r} α(β_i − At_i)(e^{αt_i} − 1) / ∑_{i=1}^{r} (e^{αt_i} − 1)^2.
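A corresponding sketch for this closed-form estimator of B, again with hypothetical data:

```python
import numpy as np

def estimate_B(t, n, A, alpha):
    """Closed-form least squares type estimate of B when A and alpha are known."""
    t = np.asarray(t, dtype=float)
    i = np.arange(1, t.size + 1)
    beta = -np.log(1.0 - i / (n + 1.0))     # beta_i = -ln(1 - i/(n+1))
    g = np.exp(alpha * t) - 1.0
    return np.sum(alpha * (beta - A * t) * g) / np.sum(g ** 2)

print(estimate_B([0.3, 0.7, 1.1, 1.6, 2.0], n=10, A=-0.185, alpha=0.5))
```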

4. α is the parameter and A, B are known

It is easy to write (2.5) in the form

L(α) = ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i]^2.

Therefore, to obtain the value of α that minimizes L(α), we consider

∂L(α)/∂α = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] · (B/α^2)[(αt_i − 1) e^{αt_i} + 1].

To obtain the least squares type estimate of α, let ∂L(α)/∂α = 0. It follows that ∂L(α)/∂α = 0 iff

∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] [(αt_i − 1) e^{αt_i} + 1] = 0.   (4.1)

Furthermore,

∂²L(α)/∂α² = 2 ∑_{i=1}^{r} [(B/α) t_i e^{αt_i} − (B/α²)(e^{αt_i} − 1)]² + 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] × [(2B/α³)(e^{αt_i} − 1) − (2B/α²) t_i e^{αt_i} + (B/α) t_i² e^{αt_i}].

Let α̂ be the solution of Eq. (4.1). If (∂²L(α)/∂α²)|_{α=α̂} > 0, then α̂ is the least squares type estimate of α.
Since we are unable to obtain an analytic expression for the mean and the variance of α̂, we do not know whether the least squares type estimate of α is unbiased, nor do we know its variance. Therefore, we study these quantities by simulation.
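As a hedged illustration (the paper's iterative algorithm is not specified), Eq. (4.1) can be solved by bracketing and Brent's method; the data and the bracket [0.05, 5.0] below are assumptions that must be checked for each data set:

```python
import numpy as np
from scipy.optimize import brentq

def eq_4_1(alpha, t, beta, A, B):
    """Left-hand side of Eq. (4.1); its root gives the candidate alpha-hat."""
    g = np.exp(alpha * t)
    return np.sum((A * t + (B / alpha) * (g - 1.0) - beta)
                  * ((alpha * t - 1.0) * g + 1.0))

# Illustrative data (not from the paper): r = 5 smallest of n = 10 observations
t = np.array([0.3, 0.7, 1.1, 1.6, 2.0])
n = 10
beta = -np.log(1.0 - np.arange(1, t.size + 1) / (n + 1.0))
A, B = -0.185, 0.2

# Assumed bracket in which eq_4_1 changes sign; verify before use.
alpha_hat = brentq(eq_4_1, 0.05, 5.0, args=(t, beta, A, B))
```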

The first step of the simulation is to obtain a random sample from the density function (2.2). For this purpose:

1. Find the distribution function F(t) = 1 − exp[−At − (B/α)(e^{αt} − 1)].
2. Set U = F(t), where U is a random number in [0, 1). Therefore, we have
   U = 1 − exp[−At − (B/α)(e^{αt} − 1)].
3. Find t by an iterative algorithm (a sketch is given below).
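A minimal sketch of this inversion step, using a bracketing root finder as a stand-in for the unspecified iterative algorithm; the upper bracket t_max is an assumption chosen large enough that F(t_max) > U:

```python
import numpy as np
from scipy.optimize import brentq

def makeham_cdf(t, A, B, alpha):
    # F(t) = 1 - exp(-A t - (B/alpha)(exp(alpha t) - 1))
    return 1.0 - np.exp(-A * t - (B / alpha) * (np.exp(alpha * t) - 1.0))

def sample_makeham(size, A, B, alpha, t_max=100.0, rng=None):
    """Draw `size` variates by solving U = F(t) for t (inversion by root finding)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(0.0, 1.0, size)
    # F(t) - u changes sign on [0, t_max] provided F(t_max) > u.
    return np.array([brentq(lambda t: makeham_cdf(t, A, B, alpha) - ui, 0.0, t_max)
                     for ui in u])

sample = sample_makeham(50, A=-0.185, B=0.2, alpha=0.5, rng=np.random.default_rng(1))
```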

The following symbols will be used: N, number of samples; A, B, α, variables; SS, sample size; MB, mean bias; MSE, mean square error; R, number of failed systems at which the experiment is terminated.
The simulation procedure is as follows:

i. Fix the values of A, B, α, N, R and SS.
ii. Obtain a value of U and calculate t using the iterative algorithm.
iii. Repeat step (ii) SS times. This gives the sample t_1, . . . , t_SS; take the smallest R observations t_1, . . . , t_R.
iv. Obtain the estimate α̂ of α using Eq. (4.1) and then check whether the condition (∂²L(α)/∂α²)|_{α=α̂} > 0 holds.
v. Repeat (ii)–(iv) N times, obtaining the estimates α̂_1, . . . , α̂_N.
vi. Calculate MB(α) and MSE(α) using

MB(α) = |(1/N) ∑_{i=1}^{N} (α̂_i − α)|,   MSE(α) = (1/N) ∑_{i=1}^{N} (α̂_i − α)^2.
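A brief sketch of step (vi), assuming the N estimates have been collected in an array (the values below are hypothetical):

```python
import numpy as np

def mean_bias_and_mse(estimates, true_value):
    """MB = |mean(est - true)|, MSE = mean((est - true)^2), as defined above."""
    err = np.asarray(estimates, dtype=float) - true_value
    return abs(err.mean()), np.mean(err ** 2)

alpha_hats = [0.62, 0.41, 0.55, 0.48, 0.70]   # hypothetical estimates of alpha
mb, mse = mean_bias_and_mse(alpha_hats, true_value=0.5)
```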

Some of the simulation values for Case 2, with a = 0.1 and k = 2, are given in [15]. Here, we give the simulation values for Case 4 with A = −0.185, B = 0.2, α = 0.5 in Table 1. For these two cases, the Weibull hazard function and the Makeham hazard function are illustrated in Fig. 1, and the Weibull distribution and the Makeham distribution are illustrated in Fig. 2. From Table 1 we can conclude that, given A, B, α, R and N, the MB and MSE of α̂ decrease as SS increases. Also, the least squares type estimate of α is biased.

Remark. When μ(t) = 0.1t^2 in Case 2 and μ(t) = −0.185 + 0.2 e^{0.5t} in Case 4, the relative error between them, obtained by numerical computation, is 0.0348 (see Fig. 2). In order to reduce the computation, in the first step of the simulation we can approximately replace a random sample from f(t) = (A + B e^{αt}) exp[−At − (B/α)(e^{αt} − 1)] with A = −0.185, B = 0.2, α = 0.5 by a random sample from f(t) = 0.1t^2 exp(−(1/30)t^3), namely t = [−30 ln(1 − U)]^{1/3} (see [15]).
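A one-line sketch of this shortcut, i.e. the inversion of the approximating distribution F(t) = 1 − exp(−t^3/30):

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(0.0, 1.0, 50)
t_approx = (-30.0 * np.log(1.0 - u)) ** (1.0 / 3.0)   # approximate Makeham sample
```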

Table 1
The simulation values as α is the parameter

α    SS   MB(α)    MSE(α)
0.5  10   1.1763   4.8738
0.5  20   0.6477   2.8656
0.5  30   0.5343   1.4165
0.5  40   0.3815   0.3778
0.5  50   0.1520   0.2195

N = 100, A = −0.185, B = 0.2, R = INT(SS/2).



Fig. 1. The solid line denotes the Makeham hazard function; the broken line denotes the Weibull hazard function.

5. A, B are the parameters and C is known

Similar to Section 4, we have

L(A, B) = ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i]^2.

Therefore, to obtain the values of A and B that minimize L(A, B), we have

∂L(A, B)/∂A = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] t_i,   (5.1)

∂L(A, B)/∂B = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] (e^{αt_i} − 1)/α.   (5.2)

Furthermore,

∂²L(A, B)/∂A² = 2 ∑_{i=1}^{r} t_i^2,   ∂²L(A, B)/∂A∂B = 2 ∑_{i=1}^{r} t_i(e^{αt_i} − 1)/α,   ∂²L(A, B)/∂B² = 2 ∑_{i=1}^{r} (e^{αt_i} − 1)^2/α^2.

Fig. 2. The solid line denotes the Makeham distribution; the broken line denotes the Weibull distribution.

Then the determinant of the Hessian matrix H is

det(H) = 4 ∑_{i=1}^{r} ∑_{j=1}^{r} [t_j^2 (e^{αt_i} − 1)^2/α^2 − t_i t_j (e^{αt_i} − 1)(e^{αt_j} − 1)/α^2].

To find the least squares type estimates of A and B, we equate Eqs. (5.1) and (5.2) to zero. Let Â and B̂ be the solutions of these equations for A and B, respectively. If

det(H)|_{A=Â,B=B̂} > 0   and   (∂²L(A, B)/∂A²)|_{A=Â,B=B̂} > 0,

then Â and B̂ will be the least squares type estimates of A and B, respectively. It is obvious that (∂²L(A, B)/∂A²)|_{A=Â,B=B̂} > 0, but the condition

det(H)|_{A=Â,B=B̂} > 0   (5.3)

must be checked numerically. We do not know whether the least squares type estimates of A and B are unbiased, and their variances are not known either. Therefore, we study these quantities by simulation; we mentioned earlier how to obtain a random sample from the density function (2.2).
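Since setting (5.1) and (5.2) to zero is an ordinary linear least squares problem in (A, B) with regressors t_i and (e^{αt_i} − 1)/α, a minimal sketch (with hypothetical data) is:

```python
import numpy as np

def estimate_A_B(t, n, alpha):
    """Least squares type estimates of A and B when alpha is known:
    ordinary least squares of beta_i on t_i and (exp(alpha*t_i) - 1)/alpha."""
    t = np.asarray(t, dtype=float)
    i = np.arange(1, t.size + 1)
    beta = -np.log(1.0 - i / (n + 1.0))
    X = np.column_stack([t, (np.exp(alpha * t) - 1.0) / alpha])
    (A_hat, B_hat), *_ = np.linalg.lstsq(X, beta, rcond=None)
    return A_hat, B_hat

A_hat, B_hat = estimate_A_B([0.3, 0.7, 1.1, 1.6, 2.0], n=10, alpha=0.5)
```

In this formulation, condition (5.3) simply amounts to the two regressor columns being linearly independent.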
The simulation procedure is as follows:

i. Fix the values of A, B, α, N, R and SS.
ii. Obtain a value of U and calculate t using the iterative algorithm.
iii. Repeat step (ii) SS times. This gives the sample t_1, . . . , t_SS; take the smallest R observations t_1, . . . , t_R.
iv. Obtain the estimates Â and B̂ of A and B, respectively, using Eqs. (5.1) and (5.2), then check whether the condition det(H)|_{A=Â,B=B̂} > 0 holds.
v. Repeat (ii)–(iv) N times, obtaining the least squares estimates Â_1, . . . , Â_N and B̂_1, . . . , B̂_N.
vi. Calculate MB(A), MSE(A), MB(B) and MSE(B) by

MB(A) = |(1/N) ∑_{i=1}^{N} (Â_i − A)|,   MSE(A) = (1/N) ∑_{i=1}^{N} (Â_i − A)^2,
MB(B) = |(1/N) ∑_{i=1}^{N} (B̂_i − B)|,   MSE(B) = (1/N) ∑_{i=1}^{N} (B̂_i − B)^2.

The simulation values are given in Table 2. From Table 2, we can conclude that, given A, B, α, R, N, the MB(A),
MSE(A), MB(B) and MSE(B) decrease as SS increases.

Table 2
The simulation values as A and B are the parameters

SS   MB(A)    MSE(A)   MB(B)    MSE(B)
10   0.1082   0.2816   0.1309   0.2822
20   0.0858   0.0865   0.1077   0.0983
30   0.0563   0.0302   0.0607   0.0435
40   0.0324   0.0161   0.0275   0.0146
50   0.0109   0.0084   0.0183   0.0135

N = 100, A = −0.185, B = 0.2, α = 0.5, R = INT(SS/2).



6. A, α are the parameters and B is known

Similar to Section 5,

L(A, α) = ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i]^2.

Therefore, to obtain the values of A and α that minimize L(A, α), we have

∂L(A, α)/∂A = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] t_i,   (6.1)

∂L(A, α)/∂α = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] [−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}].   (6.2)

Furthermore,

∂²L(A, α)/∂A² = 2 ∑_{i=1}^{r} t_i^2,

∂²L(A, α)/∂A∂α = 2 ∑_{i=1}^{r} t_i [−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}],

∂²L(A, α)/∂α² = 2 ∑_{i=1}^{r} {[−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}]^2 + [At_i + (B/α)(e^{αt_i} − 1) − β_i] × [(2B/α³)(e^{αt_i} − 1) − (2B/α²) t_i e^{αt_i} + (B/α) t_i^2 e^{αt_i}]}.
If the Hessian matrix satisfies

det(H)|_{A=Â,α=α̂} > 0   and   (∂²L(A, α)/∂A²)|_{A=Â,α=α̂} > 0,

then Â and α̂ will be the least squares type estimates of A and α, respectively. The simulation procedure is similar to that of Section 5; we simply replace B by α.
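As an illustrative sketch only (not the authors' implementation), L(A, α) can be minimized directly with a general-purpose bounded quasi-Newton optimizer, used here as a stand-in for the paper's iterative algorithm; the data and starting values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def L_A_alpha(params, t, beta, B):
    """Objective L(A, alpha) = sum_i [A t_i + (B/alpha)(exp(alpha t_i) - 1) - beta_i]^2."""
    A, alpha = params
    resid = A * t + (B / alpha) * (np.exp(alpha * t) - 1.0) - beta
    return np.sum(resid ** 2)

# Illustrative data and assumed starting values
t = np.array([0.3, 0.7, 1.1, 1.6, 2.0])
n = 10
beta = -np.log(1.0 - np.arange(1, t.size + 1) / (n + 1.0))
res = minimize(L_A_alpha, x0=[0.0, 0.5], args=(t, beta, 0.2),
               method="L-BFGS-B", bounds=[(None, None), (1e-6, None)])
A_hat, alpha_hat = res.x
```

The bound on α simply keeps the optimizer away from the singularity at α = 0; the second-order condition should still be checked at the returned point.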
The simulation values are given in Table 3. From the table we can conclude that, given A, B, α, R, N, the MB(A),
MSE(A), MB(α) and MSE(α) decrease as SS increases.

7. B and α are the parameters and A is known

Similar to Section 6, we have

L(B, α) = ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i]^2.

Table 3
The simulation values as A and α are the parameters

SS   MB(A)    MSE(A)   MB(α)    MSE(α)
10   0.1329   0.1067   0.4050   0.4658
20   0.0942   0.0860   0.2826   0.2653
30   0.0518   0.0545   0.1782   0.1668
40   0.0435   0.0258   0.0988   0.0805
50   0.0195   0.0128   0.0255   0.0217

N = 100, A = −0.185, B = 0.2, α = 0.5, R = INT(SS/2).



Table 4
The simulation values as B and α are the parameters

SS   MB(B)    MSE(B)   MB(α)    MSE(α)
10   0.1374   0.1086   0.2562   0.2265
20   0.0986   0.0675   0.1064   0.1026
30   0.0633   0.0320   0.0798   0.0439
40   0.0217   0.0163   0.0315   0.0152
50   0.0158   0.0102   0.0226   0.0165

N = 100, A = −0.185, B = 0.2, α = 0.5, R = INT(SS/2).

Therefore, to obtain the values of B and α that minimize L(B, α), we have

∂L(B, α)/∂B = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] (e^{αt_i} − 1)/α,   (7.1)

∂L(B, α)/∂α = 2 ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i] [−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}].   (7.2)

Furthermore,

∂²L(B, α)/∂B² = 2 ∑_{i=1}^{r} (e^{αt_i} − 1)^2/α^2,

∂²L(B, α)/∂B∂α = 2 ∑_{i=1}^{r} {[−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}] (e^{αt_i} − 1)/α + [At_i + (B/α)(e^{αt_i} − 1) − β_i] [t_i e^{αt_i}/α − (e^{αt_i} − 1)/α^2]},

∂²L(B, α)/∂α² = 2 ∑_{i=1}^{r} {[−(B/α²)(e^{αt_i} − 1) + (B/α) t_i e^{αt_i}]^2 + [At_i + (B/α)(e^{αt_i} − 1) − β_i] × [(2B/α³)(e^{αt_i} − 1) − (2B/α²) t_i e^{αt_i} + (B/α) t_i^2 e^{αt_i}]}.

If the Hessian matrix satisfies

det(H)|_{B=B̂,α=α̂} > 0   and   (∂²L(B, α)/∂B²)|_{B=B̂,α=α̂} > 0,

then B̂ and α̂ will be the least squares type estimates of B and α, respectively. The simulation procedure is similar to that of Section 6; we simply replace A by B.

The simulation values are given in Table 4. From the table we can conclude that, given A, B, α, R, N, the MB(B), MSE(B), MB(α) and MSE(α) decrease as SS increases.
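The same kind of numerical minimization sketch as in Section 6 applies here with (B, α) free and A fixed; again, the data, function names and starting values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def L_B_alpha(params, t, beta, A):
    # L(B, alpha) with A known
    B, alpha = params
    resid = A * t + (B / alpha) * (np.exp(alpha * t) - 1.0) - beta
    return np.sum(resid ** 2)

t = np.array([0.3, 0.7, 1.1, 1.6, 2.0])
beta = -np.log(1.0 - np.arange(1, t.size + 1) / 11.0)          # n = 10
res = minimize(L_B_alpha, x0=[0.2, 0.5], args=(t, beta, -0.185),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
B_hat, alpha_hat = res.x
```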

8. A, B and α are the parameters

Similar to Section 7, we have

L(A, B, α) = ∑_{i=1}^{r} [At_i + (B/α)(e^{αt_i} − 1) − β_i]^2.

Therefore, to obtain the values of A, B and α that minimize L(A, B, α), we equate Eqs. (5.1), (5.2) and (6.2) to zero.

Table 5
The simulation values as A, B and α are the parameters

SS   MB(A)    MSE(A)   MB(B)    MSE(B)   MB(α)    MSE(α)
10   0.0940   0.0855   0.1659   0.1384   0.1563   0.1368
20   0.0752   0.0624   0.1286   0.0976   0.1280   0.1090
30   0.0435   0.0251   0.0534   0.0353   0.0934   0.0653
40   0.0186   0.0129   0.0201   0.0125   0.0541   0.0093
50   0.0101   0.0066   0.0117   0.0094   0.0216   0.0076

N = 100, A = −0.185, B = 0.2, α = 0.5, R = INT(SS/2).

Furthermore, the Hessian matrix H is

H = ( ∂²L/∂A²    ∂²L/∂A∂B   ∂²L/∂A∂α
      ∂²L/∂B∂A   ∂²L/∂B²    ∂²L/∂B∂α
      ∂²L/∂α∂A   ∂²L/∂α∂B   ∂²L/∂α²  ).

If H|_{A=Â,B=B̂,α=α̂} is a positive definite matrix, then Â, B̂ and α̂ will be the least squares type estimates of A, B and α, respectively. The simulation procedure is similar to that of Section 5; we only add the computation of MB(α) and MSE(α).
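A hedged sketch of the three-parameter case: minimize L(A, B, α) numerically and then check positive definiteness of a finite-difference Hessian at the optimum. The data, starting values and step size are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def L_full(params, t, beta):
    """L(A, B, alpha) = sum_i [A t_i + (B/alpha)(exp(alpha t_i) - 1) - beta_i]^2."""
    A, B, alpha = params
    resid = A * t + (B / alpha) * (np.exp(alpha * t) - 1.0) - beta
    return np.sum(resid ** 2)

def numerical_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    x = np.asarray(x, dtype=float)
    k = x.size
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4.0 * h * h)
    return H

# Illustrative data and assumed starting values
t = np.array([0.3, 0.7, 1.1, 1.6, 2.0])
beta = -np.log(1.0 - np.arange(1, t.size + 1) / 11.0)          # n = 10
res = minimize(L_full, x0=[0.0, 0.2, 0.5], args=(t, beta),
               method="L-BFGS-B", bounds=[(None, None), (1e-6, None), (1e-6, None)])
H = numerical_hessian(lambda p: L_full(p, t, beta), res.x)
is_pd = np.all(np.linalg.eigvalsh(H) > 0)                      # positive definiteness check
```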
The simulation values are given in Table 5. From the table we can conclude that, given A, B, α, R, N, the MB(A),
MSE(A), MB(B), MSE(B), MB(α) and MSE(α) decrease as SS increases.

Remark. In Section 4, we need to solve a nonlinear equation in one unknown many times. In Section 5, we need to solve linear equations in two unknowns many times. In Sections 6 and 7, we need to solve nonlinear equations in two unknowns many times. In Section 8, we need to solve nonlinear equations in three unknowns many times. We solve these nonlinear equations using the iterative algorithm.

In conclusion, the least squares type estimation is effective for estimating the parameters of the Makeham distribution. The variances of the parameters decrease as the sample size increases.

References

[1] S. Asmussen, J.R. Moller, Risk comparisons of premium rules: optimality and a life insurance study, Insur. Math. Econ. 32 (3) (2003) 331–344.
[2] L. Bain, Analysis for the linear failure-rate life testing distribution, Technometrics 15 (4) (1974) 551–559.
[3] N. Balakrishnan, H. Malik, Order statistics from the linear exponential distribution. I. Increasing hazard rate case, Commun. Stat.-Theory
Methods 15 (11) (1986) 179–203.
[4] U. Balasooriya, Failure-censored reliability sampling plans for the exponential distribution, J. Stat. Comput. Simul. 52 (1995) 337–349.
[5] K. Bernhard, A class of tests for exponentiality against HNBUE alternatives, Stat. Probab. Lett. 47 (2000) 199–207.
[6] A. Debon, F. Montes, R. Sala, A comparison of models for dynamic life tables. Application to mortality data from the Valencia Region (Spain),
Lifetime Data Anal. 12 (2) (2006) 223–244.
[7] A. Drapella, An improved failure-free time estimation method, Qual. Reliab. Eng. Int. 15 (3) (1999) 235–238.
[8] L.A. Gavrilov, N.S. Gavrilova, The reliability theory of aging and longevity, J. Theor. Biol. 213 (2001) 527–545.
[9] A. Golubev, Does Makeham make sense? Biogerontology 5 (2004) 159–167.
[10] E.S. Lakshminarayanan, M. Pitchaimani, Unique estimation of mortality rates in Gompertz survival model parameters, Appl. Math. Lett. 16
(2003) 211–219.
[11] C. Lin, S. Wu, N. Balakrishnan, Parameter estimation for the linear hazard rate distribution based on records and inter-record times, Commun.
Stat.-Theory Methods 32 (4) (2003) 729–748.
[12] W.M. Makeham, On the law of mortality and the construction of annuity tables, J. Inst. Actuaries 13 (1867) 325–358.
[13] D. Miklavcic, T. Jarm, R. Karba, G. Sersa, Mathematical modelling of tumor growth in mice following electrotherapy and bleomycin treatment,
Math. Comput. Simulat. 39 (1995) 597–602.
[14] H.M. Moustafa, S.G. Ramadan, Errors of misclassification and their probability distributions when the parent populations are Gompertz, Appl. Math. Comput. 163 (2005) 423–442.
[15] A.R. Mugdadi, The least squares type estimation of the parameters in the power hazard function, Appl. Math. Comput. 169 (2005) 737–748.
[16] R. Norberg, Differential equations for moments of present values in life insurance, Insur. Math. Econ. 17 (1995) 171–180.

[17] F. Poschet, K. Bernaerts, et al., Sensitivity analysis of microbial growth parameter distributions with respect to data quality and quantity by
using Monte Carlo analysis, Math. Comput. Simulat. 65 (2004) 231–243.
[18] D.P.M. Scollnik, Simulating random variates from Makeham’s distribution and from others with exact or nearly Log-Concave densities, Trans.
Soc. Actuaries 47 (1995) 409–437.
[19] A. Sen, G. Bhattacharyya, Inference procedures for the linear failure rate model, J. Stat. Plan. Inference 46 (1995) 59–76.
[20] T.Z. Sithole, S. Haberman, R.J. Verrall, An investigation into parametric models for mortality projections, with applications to immediate
annuitants’ and life office pensioners’ data, Insur. Math. Econ. 27 (2000) 285–312.
[21] J.W. Wu, W.L. Hung, C.H. Tsai, Estimation of parameters of the Gompertz distribution using the least squares method, Appl. Math. Comput.
158 (2004) 133–147.
[22] J.W. Wu, P.L. Li, Optimal estimation of the parameters of the Gompertz distribution based on the doubly type II censored sample, Qual. Quant.
38 (2004) 753–769.
[23] C.C. Wu, S.F. Wu, H.Y. Chan, MLE and the estimated expected test time for the two-parameter Gompertz distribution under progressive
censoring with binomial removals, Appl. Math. Comput. 181 (2006) 1657–1670.
