Chapter 2

An Individual Risk Model for a Short Period

1 THE DISTRIBUTION OF AN INDIVIDUAL PAYMENT


1.1 The distribution of the loss given that it has occurred
Two notions: loss event, and loss given that the loss event occurred.
Let q be the probability of the loss event, and ξ be the r.v. of the loss given that the loss
event has occurred.
Thus, the real loss of the insured is the r.v.
    X = ξ with probability q, and X = 0 with probability 1 − q.   (1.1.1)

Example: ξ has an exponential distribution, which, in a certain sense, may be viewed as
a key case. This is the distribution of a positive r.v. ξ = ξa with the density
    fa(x) = ae^{−ax} for x ≥ 0, and fa(x) = 0 for x < 0.   (1.1.2)

The parameter a is positive and plays the role of a scale parameter.


If a r.v. ξ1 has the density f1 (x), then the r.v. ξa = ξ1 /a has density fa (x).
The corresponding distribution function (d.f.)
    Fa(x) = 1 − e^{−ax} for x ≥ 0, and Fa(x) = 0 for x < 0;   (1.1.3)

the tail

    P(ξa > x) = e^{−ax} for x ≥ 0;   (1.1.4)

and

    E{ξa} = 1/a,  Var{ξa} = 1/a².   (1.1.5)
The exponential distribution has the
Lack-of-Memory (or Memoryless) Property: for any x, y ≥ 0,

P(ξ > x + y | ξ > x) = P(ξ > y), (1.1.6)

where ξ = ξa is defined above.


The exponential distribution is the only distribution with the memoryless property.
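The memoryless property (1.1.6) is easy to confirm numerically. A minimal simulation sketch; the parameter a and the points x, y below are arbitrary choices, not values from the text:

```python
import random

# Numerical check of the memoryless property (1.1.6) for the exponential
# distribution; the parameter a and the points x, y are arbitrary choices.
a, x, y = 2.0, 0.5, 0.7

random.seed(0)
n = 200_000
samples = [random.expovariate(a) for _ in range(n)]

# Empirical P(xi > x + y | xi > x) versus empirical P(xi > y).
exceed_x = [s for s in samples if s > x]
lhs = sum(s > x + y for s in exceed_x) / len(exceed_x)
rhs = sum(s > y for s in samples) / n

print(lhs, rhs)  # both should be close to e^{-a y} = e^{-1.4}
```

Both empirical frequencies agree with the theoretical tail e^{−ay}, illustrating that "used lifetime" carries no information for an exponential r.v.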

The probability P(ξ > x) for large x is often called the tail of the distribution.
Below, for the tail of a r.v. ξ with the d.f. F(x), we will use the notation F̄(x) = 1 − F(x) = P(ξ > x).
Clearly, F̄(x) → 0 as x → ∞.
The distributions for which the tail F̄(x) → 0 as an exponential function or faster are called light-tailed.
The distributions for which this is not true are called heavy-tailed.
For example, the normal distribution is light-tailed because in this case, for large x,

    F̄(x) ≤ e^{−bx²},

where b is a positive constant.
For an example of a heavy-tailed distribution, consider
the Pareto distribution, which may be defined as a distribution whose tail

    F̄(x) = (θ/(x + θ))^α for x ≥ 0,

where the scale parameter θ > 0 and the exponent α > 0.

1.2 The distribution of the loss


Next, we consider the random loss
    X = ξ with probability q, and X = 0 with probability 1 − q,

assuming ξ to be positive. Clearly,

P(X = 0) = 1 − q. (1.2.1)

Furthermore, for x ≥ 0, P(X > x) = qP(ξ > x) = qF̄ξ(x), and hence

    FX(x) = P(X ≤ x) = 1 − P(X > x) = 1 − qF̄ξ(x).   (1.2.2)

This also may be rewritten as

    FX(x) = 1 − q + qFξ(x),   (1.2.3)

where Fξ(x) = 1 − F̄ξ(x) is the d.f. of ξ.



In particular, since ξ was assumed to be positive, F̄ξ(0) = 1 and

FX (0) = 1 − q. (1.2.4)

Because X is non-negative, FX (x) = 0 for all x < 0.


Let us also recall that if for a r.v. Z and a number c, it is true that P(Z = c) = 0, then the
d.f. FZ (x) is continuous at the point c. If P(Z = c) = ∆ > 0, then FZ (x) “jumps” at the point
c, and ∆ is the size of the jump.
We see that FX (x) jumps at the point 0 by 1 − q. Hence, P(X = 0) = 1 − q, which was
already stated in (1.2.1).
EXAMPLE 1. Let ξ be exponential and E{ξ} = 1/a. Then, by (1.2.2),

    FX(x) = 1 − qe^{−ax} for x ≥ 0, and FX(x) = 0 for x < 0.

The graph is shown in Fig. 1: it jumps from 0 to 1 − q at x = 0 and approaches 1 as x → ∞. It is quite typical.

FIGURE 1. The d.f. of the loss X for exponential ξ.
Furthermore,
    E{X^k} = qE{ξ^k}.   (1.2.5)

Set µ = E{ξ} and v² = Var{ξ}. Then, in view of (1.2.5),

E{X} = qµ, (1.2.6)


    Var{X} = E{X²} − (E{X})² = qE{ξ²} − q²(E{ξ})² = q(v² + µ²) − q²µ²
           = qv² + q(1 − q)µ².   (1.2.7)

EXAMPLE 2. In the situation of Example 1, we get

    E{X} = q/a,  Var{X} = q/a² + q(1 − q)/a² = (2q − q²)/a².   (1.2.8)
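The moment formulas (1.2.6)-(1.2.8) lend themselves to a quick Monte Carlo check. A sketch for exponential ξ; the values of q and a are arbitrary choices:

```python
import random

# Monte Carlo check of (1.2.6)-(1.2.8): X = xi with probability q and X = 0
# otherwise, where xi is exponential with parameter a (q, a are arbitrary).
q, a = 0.1, 2.0

random.seed(1)
n = 500_000
xs = [random.expovariate(a) if random.random() < q else 0.0 for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

mean_theory = q / a                      # (1.2.6): q/a = 0.05
var_theory = (2 * q - q * q) / (a * a)   # (1.2.8): (2q - q^2)/a^2 = 0.0475

print(mean, mean_theory)
print(var, var_theory)
```

The empirical mean and variance of the simulated losses match the closed-form values to within sampling error.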

1.3 The distribution of the payment and types of insurance


Let Y be the amount to be paid by the company in accordance with the insurance contract.
If the coverage is full, then Y = X. However, the insurance often pays only a part of the
loss, that is,
Y = r(X),

where r(x) is called a payment function. We assume that

• r(x) is non-negative,

• non-decreasing,

• r(0) = 0.

Consider several particular but important cases. In the first case,

    r(x) = 0 if x ≤ d, and r(x) = x − d if x > d,   (1.3.1)

that is, the payment policy involves a deductible d.


Next, consider the payment function

    r(x) = x if x ≤ s, and r(x) = s if x > s,   (1.3.2)

where s is a maximal or limit payment.


The combination of these types, when both restrictions, a deductible and a limit payment, are included, is given by

    r(x) = 0 if x ≤ d,  r(x) = x − d if d < x < s + d,  r(x) = s if x ≥ s + d   (1.3.3)

(see the graph in Fig. 2).

FIGURE 2. The payment function in the case of deductible and limit payment.

One more type is proportional or quota share insurance, where

    r(x) = kx   (1.3.4)

for a positive k < 1.
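The payment functions (1.3.1)-(1.3.4) translate directly into code. A minimal sketch; the function and parameter names are ours, introduced for illustration:

```python
# The combined rule (1.3.3); s = infinity gives the pure deductible (1.3.1),
# and d = 0 gives the pure limit-payment rule (1.3.2).
def payment(x: float, d: float = 0.0, s: float = float("inf")) -> float:
    """Payment r(x) with deductible d and limit payment s."""
    return min(max(x - d, 0.0), s)

# Proportional (quota share) insurance (1.3.4), for 0 < k < 1.
def proportional(x: float, k: float) -> float:
    return k * x

# With d = 1 and s = 3:
print(payment(0.5, d=1, s=3))   # 0.0 (the loss is below the deductible)
print(payment(2.5, d=1, s=3))   # 1.5 (partial reimbursement)
print(payment(10.0, d=1, s=3))  # 3.0 (the limit payment)
```

Note that (1.3.3) is exactly the composition "clip below at the deductible, then clip above at the limit", which is why a single min/max expression suffices.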


Our goal is to write the d.f. of Y, its expectation, and variance. Here, we restrict ourselves to the case of a deductible with s = ∞ (no limit payment).
First,

    FY(y) = 1 − qF̄ξ(y + d) for y ≥ 0, and FY(y) = 0 for y < 0.   (1.3.5)

In particular, FY(0) = 1 − qF̄ξ(d).
EXAMPLE 1. If ξ is exponentially distributed with a parameter a, then

    FY(y) = 1 − qe^{−a(y+d)} for y ≥ 0, and FY(y) = 0 for y < 0.   (1.3.6)

To compute moments, we use a general formula for positive r.v.'s Z:

    E{r(Z)} = ∫_0^∞ (1 − FZ(x)) dr(x) = ∫_0^∞ F̄Z(x) dr(x).

So, in our case,

    E{Y} = ∫_0^∞ F̄X(x) dr(x) = q ∫_d^∞ F̄ξ(x) dx,   (1.3.7)

and

    E{Y²} = E{r²(X)} = ∫_0^∞ F̄X(x) dr²(x) = q ∫_d^∞ F̄ξ(x)·2(x − d) dx
          = 2q ∫_d^∞ F̄ξ(x)(x − d) dx.   (1.3.8)

EXAMPLE 2. Let ξ be exponentially distributed with parameter a and, consequently, E{ξ} = 1/a.
Then, by (1.3.7) and (1.3.8),

    E{Y} = q ∫_d^∞ e^{−ax} dx = (q/a)e^{−ad},

and

    E{Y²} = 2q ∫_d^∞ (x − d)e^{−ax} dx = (2q/a²)e^{−ad}.

Hence, after simple calculations, we get

    E{Y} = (q/a)e^{−ad},  Var{Y} = (q/a²)e^{−ad}[2 − qe^{−ad}].   (1.3.9)
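Formula (1.3.9) can also be checked by simulation. A sketch with arbitrarily chosen q, a, and deductible d:

```python
import math
import random

# Monte Carlo check of (1.3.9): Y = max(X - d, 0), where X = xi with
# probability q and X = 0 otherwise, xi exponential with parameter a.
q, a, d = 0.2, 1.0, 0.5

random.seed(2)
n = 500_000
ys = [max((random.expovariate(a) if random.random() < q else 0.0) - d, 0.0)
      for _ in range(n)]

mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n

mean_theory = (q / a) * math.exp(-a * d)
var_theory = (q / a**2) * math.exp(-a * d) * (2 - q * math.exp(-a * d))

print(mean, mean_theory)  # both close to 0.1213
print(var, var_theory)    # both close to 0.2279
```

The simulated mean and variance of the payment agree with (1.3.9) to within sampling error.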

2 THE AGGREGATE PAYMENT


In this section, we do not specify particular details of insurance contracts such as de-
ductible or payment limits and denote by X’s the payments provided by a company to
particular clients.
Consider a group consisting of a fixed number n of clients. Let Xi be the payment to the
ith client. Then the cumulative payment

S = Sn = X1 + ... + Xn .

We assume the Xi ’s to be independent. If the X’s are also identically distributed, we call the
group homogeneous.

2.1 Convolutions
2.1.1 Definition and examples
Let Fi (x) be the d.f. of Xi . Consider first the case when n = 2, so S = X1 + X2 . The basic
fact is that if X1 and X2 are independent, then the d.f. of S is
    FS(x) = ∫_{−∞}^∞ F1(x − y) dF2(y).   (2.1.1)

The operation (2.1.1) is called convolution and is denoted in symbols as F1 ∗ F2 .


In general,
FSn = F1 ∗ ... ∗ Fn .

If there exist the probability densities fi (x) = Fi′ (x), then, for n = 2, the density of S is
    fS(x) = ∫_{−∞}^∞ f1(x − y) f2(y) dy.   (2.1.2)

This operation is denoted by f1 ∗ f2 and is called the convolution of densities. In general,


for an arbitrary integer n
fSn = f1 ∗ ... ∗ fn .
The counterpart of (2.1.2) for discrete integer-valued r.v.'s is as follows. Let X1, X2 take
on values 0, 1, 2, ..., and f_k^{(i)} = P(Xi = k), i = 1, 2. Then, setting fm = P(S = m), for n = 2,
we have

    fm = Σ_{k=0}^{m} f_k^{(1)} f_{m−k}^{(2)}.   (2.1.3)

Consider the sequences

    f^{(1)} = (f_0^{(1)}, f_1^{(1)}, ...),  f^{(2)} = (f_0^{(2)}, f_1^{(2)}, ...),  f = (f_0, f_1, ...).

The above sequences of probabilities specify the distributions of X1 , X2 , and S, respectively.


Then (2.1.3) may be written in compact form as

f = f (1) ∗ f (2) .

EXAMPLE 1. Let the independent r.v.'s

    X1 = 1 with probability p1 = 1/3, and X1 = 0 with probability q1 = 2/3;
    X2 = 1 with probability p2 = 1/2, and X2 = 0 with probability q2 = 1/2.

Clearly, S takes on values 0, 1, 2. The problem is very simple and can be solved directly,
but we use this example to demonstrate how (2.1.3) works. We have

    f_0 = f_0^{(1)} f_0^{(2)} = q1 q2 = (2/3)·(1/2) = 1/3,

    f_1 = Σ_{k=0}^{1} f_k^{(1)} f_{1−k}^{(2)} = f_0^{(1)} f_1^{(2)} + f_1^{(1)} f_0^{(2)} = q1 p2 + p1 q2 = (2/3)·(1/2) + (1/3)·(1/2) = 1/2,

    f_2 = Σ_{k=0}^{2} f_k^{(1)} f_{2−k}^{(2)} = f_0^{(1)} f_2^{(2)} + f_1^{(1)} f_1^{(2)} + f_2^{(1)} f_0^{(2)} = q1·0 + p1 p2 + 0·q2 = (1/3)·(1/2) = 1/6.
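The discrete convolution (2.1.3) is a short double loop. A sketch reproducing Example 1; the helper name `convolve` is ours:

```python
# Discrete convolution (2.1.3) of two distributions on {0, 1, 2, ...},
# each given as a list of probabilities starting at the value 0.
def convolve(f1, f2):
    return [sum(f1[k] * f2[j - k]
                for k in range(len(f1)) if 0 <= j - k < len(f2))
            for j in range(len(f1) + len(f2) - 1)]

f1 = [2 / 3, 1 / 3]  # distribution of X1: (q1, p1)
f2 = [1 / 2, 1 / 2]  # distribution of X2: (q2, p2)

print(convolve(f1, f2))  # [1/3, 1/2, 1/6], as computed above
```

The output matches the three probabilities computed by hand, and the entries sum to one.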

EXAMPLE 2. This is a classical example. Let X1 and X2 be independent and uniformly


distributed on [0, 1]. Obviously, S2 = X1 + X2 takes on values from [0, 2].
The densities fi(x) = 1 for x ∈ [0, 1], and = 0 otherwise. Hence, by (2.1.2),

    fS(x) = ∫_0^1 f1(x − y) dy.   (2.1.4)

The integrand f1 (x − y) = 1 if 0 ≤ x − y ≤ 1 which is equivalent to x − 1 ≤ y ≤ x.




FIGURE 3. For Example 2: (a) the graph of fS2 ; (b) the graph of fS3 .

Let 0 ≤ x ≤ 1. Then the left inequality holds automatically because x − 1 ≤ 0 while y ≥ 0.


So, f1 (x − y) = 1 if y ≤ x, and = 0 otherwise. Hence, for 0 ≤ x ≤ 1, we may integrate in
(2.1.4) only over [0, x], which implies that
    fS2(x) = ∫_0^x dy = x.

On the other hand, in view of the symmetry of the distributions of the X's, the density fS(x)
should be symmetric with respect to the center of [0, 2], that is, the point one (see Fig. 3a).
So, for 1 ≤ x ≤ 2, we should have fS(x) = 2 − x. Eventually,


    fS2(x) = x if 0 ≤ x ≤ 1,  fS2(x) = 2 − x if 1 ≤ x ≤ 2,  and = 0 otherwise;   (2.1.5)

see again Fig.3a. This distribution is called triangular. We see that while the values of X’s
are equally likely, the values of the sum are not.
Now let X3 be also uniformly distributed on [0, 1] and independent of X1 and X2 . Obvi-
ously, the sum S3 = X1 + X2 + X3 assumes values from [0, 3]. To find its density, we can
again apply (2.1.2), replacing f2 (y) by f3 (y), and f1 (x − y) by fS2 (x − y). Thus,
    fS3(x) = ∫_0^1 fS2(x − y) dy,   (2.1.6)

where fS2 is given in (2.1.5). We relegate the somewhat tedious calculations to Exercises. The result is

    fS3(x) = x²/2 if 0 ≤ x ≤ 1,
    fS3(x) = (−2x² + 6x − 3)/2 if 1 ≤ x ≤ 2,
    fS3(x) = (x − 3)²/2 if 2 ≤ x ≤ 3,
    and fS3(x) = 0 otherwise.
The graph is given in Fig.3b.
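The closed-form answer for fS3 can be checked against a direct numeric evaluation of the integral (2.1.6). A sketch; the step size h of the Riemann sum is an arbitrary choice:

```python
# Numeric check of the density of S3: approximate the integral (2.1.6)
# by a midpoint Riemann sum and compare with the closed-form expression.
def f_s2(x):
    # Triangular density (2.1.5).
    if 0 <= x <= 1:
        return float(x)
    if 1 < x <= 2:
        return 2.0 - x
    return 0.0

def f_s3_formula(x):
    if 0 <= x <= 1:
        return x * x / 2
    if 1 < x <= 2:
        return (-2 * x * x + 6 * x - 3) / 2
    if 2 < x <= 3:
        return (x - 3) ** 2 / 2
    return 0.0

def f_s3_numeric(x, h=1e-4):
    # Midpoint rule for the integral over y in [0, 1] in (2.1.6).
    steps = int(1 / h)
    return sum(f_s2(x - (k + 0.5) * h) for k in range(steps)) * h

for x in (0.5, 1.0, 1.5, 2.5):
    print(x, f_s3_numeric(x), f_s3_formula(x))
```

At each test point, the numeric convolution agrees with the piecewise-quadratic formula to high accuracy.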

EXAMPLE 3. Let X1 and X2 be independent, X1 be exponential with E{X1 } = 1, and X2


be uniformly distributed on [0, 1]. Then the density of the sum is given by

    fS(x) = ∫_0^1 f1(x − y) dy.

The integrand f1(x − y) = e^{−(x−y)} if y ≤ x, and = 0 otherwise. Hence, for x ≤ 1, we should
integrate only up to x, which implies that

    fS(x) = ∫_0^x e^{−(x−y)} dy = 1 − e^{−x} for x ≤ 1.

For x > 1, we should consider the total integral ∫_0^1, and

    fS(x) = ∫_0^1 e^{−(x−y)} dy = e^{−x}(e − 1).

The graph for all x's is given in Fig. 4.

FIGURE 4.
In the examples above, we saw that the distribution of a
sum may essentially differ from the distributions of the sep-
arate terms. Next, we consider cases when the convolution
inherits properties of individual terms in the sum.

2.1.2 Some classical examples

I. Sums of normals (normal r.v.’s).

Proposition 1 Let X1 and X2 be independent normals with expectations m1 and m2, and
variances σ1² and σ2², respectively. Then the r.v. S = X1 + X2 is normal with expectation
m1 + m2, and variance σ1² + σ2². In other words, if φ_{m,σ²} is the normal density with mean
m and variance σ², then

    φ_{m1,σ1²} ∗ φ_{m2,σ2²} = φ_{m1+m2, σ1²+σ2²}.



II. Sums of Poisson r.v.’s.


We call a r.v. X Poisson if

    f_k = P(X = k) = e^{−λ} λ^k / k!,

where λ > 0. We know that λ = E{X}.
Proposition 2 Let X1 and X2 be independent Poisson r.v.’s with parameters λ1 and λ2 ,
respectively. Then the r.v. S = X1 + X2 is a Poisson r.v. with parameter λ1 + λ2 . In other
words, if πλ is the Poisson distribution with mean λ, then

πλ1 ∗ πλ2 = πλ1 +λ2 .

Proof. By virtue of (2.1.3), for fm = P(S = m), we have

    fm = Σ_{k=0}^{m} f_k^{(1)} f_{m−k}^{(2)} = Σ_{k=0}^{m} e^{−λ1} (λ1^k / k!) e^{−λ2} (λ2^{m−k} / (m − k)!)
       = e^{−(λ1+λ2)} (1/m!) Σ_{k=0}^{m} (m! / (k!(m − k)!)) λ1^k λ2^{m−k}.

The last sum above is the binomial expansion of (λ1 + λ2)^m, which leads to the Poisson
formula for the probability fm.
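Proposition 2 can be confirmed numerically by carrying out the convolution (2.1.3) term by term; λ1 and λ2 below are arbitrary choices:

```python
import math

# Convolving two Poisson distributions via (2.1.3) and comparing with the
# Poisson distribution with parameter lam1 + lam2 (Proposition 2).
lam1, lam2 = 1.5, 2.5

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

for m in range(10):
    conv = sum(poisson_pmf(lam1, k) * poisson_pmf(lam2, m - k)
               for k in range(m + 1))
    direct = poisson_pmf(lam1 + lam2, m)
    assert abs(conv - direct) < 1e-12
print("convolution matches Poisson(lam1 + lam2) for m = 0, ..., 9")
```

Each convolved probability coincides with the corresponding Poisson(λ1 + λ2) probability up to floating-point error, exactly as the binomial-expansion argument predicts.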

III. Sums of Γ-distributed r.v.’s.


First, for all ν > 0, we define the Γ (gamma) function

    Γ(ν) = ∫_0^∞ x^{ν−1} e^{−x} dx.

The main properties:


• Γ(1) = ∫_0^∞ e^{−x} dx = 1.

• Γ(ν + 1) = νΓ(ν).

• Γ(k + 1) = k! for any non-negative integer k, so the Γ-function may be viewed as a generalization of the notion of
  factorial.
Consider, first, a continuous r.v. X1ν whose density is the function
    f_{1ν}(x) = (1/Γ(ν)) x^{ν−1} e^{−x} for x ≥ 0, and = 0 otherwise.
For ν = 1, this is the standard exponential density. The parameter ν characterizes the type
of the distribution. Figure 5 illustrates how this type depends on ν by sketching the graphs
of f1ν (x) for particular values ν = 1, 2, 3.
It may be proved that

    E{X_{1ν}} = ν,  E{X_{1ν}²} = (ν + 1)ν,  and hence Var{X_{1ν}} = ν.   (2.1.7)

FIGURE 5. The Γ-densities for ν = 1, 2, 3: f(x) = e^{−x}, f(x) = x e^{−x}, and f(x) = x²e^{−x}/2.

Now, let a > 0 and the r.v. Xaν = X1ν/a. Then the density of Xaν is

    f_{aν}(x) = (a^ν / Γ(ν)) x^{ν−1} e^{−ax} for x ≥ 0, and = 0 otherwise.   (2.1.8)
The distribution defined and its density faν (x) are called the Γ(Gamma)-distribution and
Γ-density, respectively, with parameters a and ν. As we saw, a is just a scale parameter,
while parameter ν may be called essential since it specifies the type of the distribution.
From (2.1.7) it follows that

    E{Xaν} = ν/a,  E{Xaν²} = (ν + 1)ν/a²,  and Var{Xaν} = ν/a².   (2.1.9)

Proposition 3 Let X1 and X2 be independent Γ-r.v.’s with parameters (a, ν1 ) and (a, ν2 )
respectively. (Notice that the scale parameter a is the same.) Then the r.v. S = X1 + X2
is a Γ-r.v. with parameters (a, ν1 + ν2 ). In other words, if faν denotes the Γ-density with
parameters (a, ν), then
faν1 ∗ faν2 = fa,ν1 +ν2 . (2.1.10)

EXAMPLE 1. During a day, a company received four telephone calls with claims from
clients from a homogeneous group. The distribution of a particular claim, given that a loss
event happened, is exponential with a mean of one (unit of money). However, the real
sizes of these particular claims have not yet been evaluated. What is the probability that the
cumulative claim will exceed, for example, 5?
We deal with S4 = X1 + X2 + X3 + X4, where the X's are exponential with a = 1.
The exponential distribution is the Γ-distribution with ν = 1; hence, by Proposition 3, S4 is a
Γ-r.v. with parameters a = 1, ν = 4, and

    fS4(x) = x³e^{−x}/Γ(4) = x³e^{−x}/3! = x³e^{−x}/6,

and

    P(S4 > 5) = 1 − (1/6) ∫_0^5 x³e^{−x} dx ≈ 0.27.
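The integral above can be avoided altogether: repeated integration by parts gives the well-known closed form of the Γ tail for a = 1 and integer ν, namely a truncated Poisson sum. A sketch:

```python
import math

# P(S4 > t) for the Gamma distribution with a = 1, nu = 4: repeated
# integration by parts gives the tail e^{-t} * sum_{k=0}^{nu-1} t^k / k!.
t, nu = 5.0, 4
tail = math.exp(-t) * sum(t**k / math.factorial(k) for k in range(nu))
print(round(tail, 3))  # 0.265, i.e. approximately 0.27
```

This agrees with the numeric value ≈ 0.27 quoted in the example.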

2.2 Moment generating functions (m.g.f.’s) — general facts


In this section, we use m.g.f.’s.
The moment generating function of a r.v. X and its distribution F is the function

    MX(z) = E{e^{zX}} = ∫_{−∞}^∞ e^{zx} dF(x).   (2.2.1)

Note that for real z’s, the integral in (2.2.1) may not exist. Therefore, m.g.f.’s exist not for
all r.v.’s and/or not for all values of z. Certainly for z = 0, we can always write that

MX (0) = E{e0·X } = E{1} = 1. (2.2.2)

The terminology chosen is related to the following fact. As usual, set mk = E{X^k}, the
kth moment of X. Then the Taylor expansion for the m.g.f. is given by

    MX(z) = 1 + m1 z + (m2/2!) z² + (m3/3!) z³ + ... .
PROPERTIES:

A. The main property:

For any independent r.v.’s X1 and X2 ,


MX1 +X2 (z) = MX1 (z)MX2 (z) (2.2.3)
for all z for which the m.g.f.’s above are well defined.

Proof: by independence,

    MX1+X2(z) = E{e^{z(X1+X2)}} = E{e^{zX1} e^{zX2}} = E{e^{zX1}} E{e^{zX2}} = MX1(z) MX2(z).

B. The m.g.f. of a linear transformation:

For Y = a + bX, the m.g.f. MY(z) = e^{az} MX(bz).   (2.2.4)

Proof:

    Ma+bX(z) = E{e^{z(a+bX)}} = E{e^{za} e^{zbX}} = e^{za} E{e^{(zb)X}} = e^{az} MX(bz).

C. Uniqueness.

Theorem 4 R.v.’s with distinct distributions have distinct m.g.f.’s.

Consider particular examples.



2.2.1 The binomial distribution


First, let a r.v. ξ = 1 with probability p, and ξ = 0 with probability q = 1 − p. Then

    Mξ(z) = e^{z·1} p + e^{z·0} q = e^z p + q = 1 + p(e^z − 1).
The easiest way to find the m.g.f. of a binomial r.v. X is to use the representation

X = X1 + ... + Xn ,

where the r.v.’s Xi are independent and has the same distribution as ξ above.
By property (2.2.3),
MX (z) = (1 + p(ez − 1))n .

2.2.2 The geometric distributions


For the geometric r.v. K = 0, 1, 2, ..., with respective probabilities p, pq, pq², ...,

    MK(z) = Σ_{k=0}^∞ e^{kz} p q^k = p Σ_{k=0}^∞ (e^z q)^k = p / (1 − qe^z),   (2.2.5)

provided that qe^z < 1.

2.2.3 The Poisson distribution


For the Poisson distribution with parameter λ, its m.g.f.

    M(z) = Σ_{k=0}^∞ e^{zk} e^{−λ} λ^k/k! = e^{−λ} Σ_{k=0}^∞ (e^z λ)^k/k! = e^{−λ} e^{λe^z} = exp{λ(e^z − 1)}.   (2.2.6)

Next we consider continuous distributions.

2.2.4 The uniform distribution


For the distribution uniform on [a, b], the m.g.f.

    M(z) = (1/(b − a)) ∫_a^b e^{zx} dx = (e^{bz} − e^{az}) / (z(b − a))   (2.2.7)

if z ̸= 0. For z = 0, as we know, M(0) = 1.

2.2.5 The exponential and gamma distributions


For a r.v. X, let the density f(x) = ae^{−ax} for x ≥ 0. Then the m.g.f.

    M(z) = ∫_0^∞ e^{zx} a e^{−ax} dx = a/(a − z) = 1/(1 − z/a)   (2.2.8)

for z < a. It is important to emphasize that for z ≥ a the m.g.f. does not exist.

In general, for the density (2.1.8) and z < a, making the change of variable y = (a − z)x,
we have

    M(z) = ∫_0^∞ e^{zx} f_{aν}(x) dx = (a^ν/Γ(ν)) ∫_0^∞ x^{ν−1} e^{−(a−z)x} dx
         = (a^ν/Γ(ν)) (1/(a − z)^ν) ∫_0^∞ y^{ν−1} e^{−y} dy = (a^ν/Γ(ν)) (Γ(ν)/(a − z)^ν) = (a/(a − z))^ν = 1/(1 − z/a)^ν,

again provided that z < a.

2.2.6 The normal distribution


For a standard normal r.v. X,

    MX(z) = ∫_{−∞}^∞ e^{zx} (1/√(2π)) e^{−x²/2} dx = e^{z²/2}.
For the (m, σ²)-normal r.v. Y = m + σX, by virtue of (2.2.4),

    MY(z) = e^{mz} MX(σz) = exp{mz + σ²z²/2}.   (2.2.9)

2.2.7 The moment generating function and moments


Consider a r.v. X with a d.f. F(x), and assume that in the definition
    M(z) = E{e^{zX}} = ∫_{−∞}^∞ e^{zx} dF(x)

the integral is well defined for all z from the interval (−c0 , c0 ).
Clearly,
M(0) = 1.
It may be proved for the z’s above, we can differentiate M(z) an arbitrary number of
times, and we can do that by passing the operation of differentiation through the integral.
In particular, differentiating M(z) once, we get
∫ ∞
M ′ (z) = E{XezX } = xezx dF(x). (2.2.10)
−∞

Hence,
M ′ (0) = E{X}.
Differentiating (2.2.10), we have
    M′′(z) = E{X² e^{zX}} = ∫_{−∞}^∞ x² e^{zx} dF(x),   (2.2.11)

and
M ′′ (0) = E{X 2 }.
Continuing in the same fashion, we get that the kth derivative
    M^{(k)}(z) = E{X^k e^{zX}} = ∫_{−∞}^∞ x^k e^{zx} dF(x),   (2.2.12)

and for all k,

    M^{(k)}(0) = mk = E{X^k},   (2.2.13)

the kth moment of X.
From (2.2.11) it follows, in particular, that M′′(z) = E{X² e^{zX}} ≥ 0 for all z, and hence

    M(z) is always convex.   (2.2.14)
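Relations (2.2.10)-(2.2.13) are easy to illustrate numerically: differentiate a known m.g.f. at z = 0 by finite differences. A sketch using the exponential m.g.f. (2.2.8) with an arbitrarily chosen a:

```python
# Numeric illustration of (2.2.13) for the exponential m.g.f. (2.2.8):
# central finite differences at z = 0 recover the first two moments.
a, h = 2.0, 1e-4

def M(z):
    return 1 / (1 - z / a)  # valid for z < a

m1 = (M(h) - M(-h)) / (2 * h)          # ~ M'(0)  = E{X}   = 1/a   = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # ~ M''(0) = E{X^2} = 2/a^2 = 0.5

print(m1, m2)
```

Both numerical derivatives match the exponential moments E{X} = 1/a and E{X²} = 2/a² from (1.1.5), and the second difference is positive, consistent with convexity (2.2.14).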


FIGURE 6. Three graphs of functions g(z), panels (a), (b), (c).

FIGURE 7. Graphs of m.g.f.'s, panels (a), (b), (c); in panel (c), the graphs of M1(z) and M2(z) are tangent at z = 0.

2.3 Moment generating functions — Examples


• Which of the functions in Fig. 6 look like m.g.f.'s?

• What is the difference between the two distributions whose m.g.f.’s are graphed in
Fig.7a and 7b?

• Let M(z) be the m.g.f. of a r.v. X, E{X} = m, Var{X} = σ². Write M′(0) and M′′(0).

  M′(0) = m,  M′′(0) = E{X²} = σ² + m².

• Compare the means and variances of the r.v.’s whose m.g.f.’s are graphed in Fig.7c.
(The graphs of M1 (z) and M2 (z) are tangent at z = 0.)

2.4 Moment generating functions — sums of r.v.’s

Let again Sn = X1 + ... + Xn , where Xi ’s are independent r.v.’s (for example, of payments).
Let Mi (z) be the m.g.f. of Xi . Then, the m.g.f. of Sn is

MSn (z) = M1 (z) · M2 (z) · ... · Mn (z).

To demonstrate the power of the method of m.g.f.’s, we begin with the classical examples
corresponding to the three convolution cases considered in the previous section.
I. Sums of normals.
Let X1 and X2 be normal with expectations m1 and m2 and variances σ1² and σ2², respectively,
and let S = X1 + X2. Since the m.g.f. of an (m, σ²)-normal r.v. is exp{mz + σ²z²/2},
the m.g.f.

    MS(z) = exp{m1 z + σ1²z²/2} exp{m2 z + σ2²z²/2} = exp{(m1 + m2)z + (σ1² + σ2²)z²/2}.

This is the m.g.f. of the normal distribution with expectation m1 + m2, and variance σ1² + σ2²,
which proves Proposition 1.

II. Sums of Poisson r.v.’s.


Now, let X1 and X2 be Poisson r.v.'s with respective parameters λ1 and λ2. The m.g.f. of
a Poisson r.v. with parameter λ is exp{λ(e^z − 1)}. Then the m.g.f.

    MS(z) = exp{λ1(e^z − 1)} exp{λ2(e^z − 1)} = exp{(λ1 + λ2)(e^z − 1)}.

This is the m.g.f. of the Poisson distribution with parameter λ1 + λ2, and Proposition 2 is
proved.

III. Sums of Γ-distributed r.v.’s.


Let X1 and X2 be Γ-r.v.’s with parameters (a, ν1 ) and (a, ν2 ), respectively. Since the m.g.f.
of the Γ-r.v. with parameters (a, ν) is (1 − z/a)−ν , the m.g.f.

MS (z) = (1 − z/a)−ν1 (1 − z/a)−ν2 = (1 − z/a)−(ν1 +ν2 ) .

This is the m.g.f. of the Γ-distribution with parameters (a, ν1 + ν2 ), which proves Proposi-
tion 3.

3 PREMIUMS AND SOLVENCY. NORMAL APPROXIMATION FOR AGGREGATE CLAIM DISTRIBUTIONS
Let Sn = X1 + ... + Xn , where Xi ’s are r.v.’s. For now, we do not assume them to be
independent or identically distributed.
Consider the normalized sum

    Sn∗ = (Sn − E{Sn}) / √Var{Sn}.   (3.1)

The goal of normalization is to consider the sum Sn in an appropriate scale; namely, after
normalization, E{Sn∗} = 0 and Var{Sn∗} = 1. Modern probability theory establishes a
wide spectrum of conditions under which the distribution of Sn∗ is asymptotically normal;
that is, conditions under which, for any x,

P(Sn∗ ≤ x) → Φ(x) as n → ∞, (3.2)

where Φ(x) is the standard normal d.f.


Let us return to insurance, treating Xi, i = 1, ..., n, as the (random) payment of the
company to the ith client during a fixed period of time.
Assume that the premium the company collects is proportional to the expected payment.
More specifically, we assume that for the ith client, the premium will equal

ki = (1 + θ) E{Xi }.

As was already mentioned in Chapter 1, the coefficient θ is called a relative security load-
ing. The quantity θE{Xi } is called a security loading.
A distinctive feature of the model is that, while for clients with different mean losses the
premiums will be different, the relative security loading is the same for all clients.
Then the total premium

    cn = Σ_{i=1}^{n} (1 + θ)E{Xi} = (1 + θ)E{Sn}.   (3.3)

Next, we introduce the security or solvency level β as the minimal probability, acceptable
for the company, of not suffering a loss.
In other words, the probability of not suffering a loss is P(Sn ≤ cn). The company wants
this probability to be not less than a chosen β; in the worst case, to be equal to β. Certainly,
such a probability depends on the premium charged. So, for the least acceptable premium,
we would have P(Sn ≤ cn) = β.
For example, if β = 0.95, then the company wants to specify a premium (more precisely,
a relative security coefficient θ) such that the probability of not suffering a loss will be not
less than 0.95.

So, we may write

    β = P(Sn ≤ cn) = P(Sn ≤ (1 + θ)E{Sn}) = P(Sn − E{Sn} ≤ θE{Sn})
      = P((Sn − E{Sn})/√Var{Sn} ≤ θE{Sn}/√Var{Sn}) = P(Sn∗ ≤ θE{Sn}/√Var{Sn}).   (3.4)

We restrict ourselves here to a non-rigorous estimation. If we consider normal approxi-


mation acceptable, we can write that

P(Sn∗ ≤ x) ≈ Φ(x)

for any x.
In particular,

    P(Sn∗ ≤ θE{Sn}/√Var{Sn}) ≈ Φ(θE{Sn}/√Var{Sn}).

Thus,

    Φ(θE{Sn}/√Var{Sn}) ≈ β.

This, in turn, implies that

    θE{Sn}/√Var{Sn} = qβs,

or

    θ ≈ qβs √Var{Sn} / E{Sn},   (3.5)

where qβs is the β-quantile of the standard normal distribution, which means that Φ(qβs) = β.
For instance, if β = 0.9, then qβs ≈ 1.28.
Consider for a while, the particular case of independent and identically distributed (i.i.d)
r.v.’s Xi . Let
m = E{Xi }, σ2 = Var{Xi }.
Then E{Sn} = mn, Var{Sn} = σ²n, and, as is easy to verify, (3.5) may be rewritten as

    θ ≈ qβs σ/(m√n).   (3.6)

For a distribution with mean m and standard deviation σ, the fraction σ/m is called a coef-
ficient of variation.
It makes sense to note also that θ in (3.5) should not be viewed as the real security loading
coefficient to be used by the company. If the law and circumstances allow it, the company
may proceed from a larger θ. The coefficient in (3.5) is the minimal coefficient acceptable
for the company.
EXAMPLE 1. Consider a homogeneous group of n = 2000 clients. Assume that the
probability of a loss event for each client is q = 0.1 and, if a loss event occurs, the payment
is a r.v. uniformly distributed on [0, 1]. In accordance with (1.2.6)-(1.2.7), the expected
value and the variance of the separate payment X are m = q·(1/2) = 0.05 and σ² = q·(1/12) + q(1 − q)·(1/2)² ≈ 0.0308. Let β = 0.9. Then the quantile qβs ≈ 1.281.
Since the X's are identically distributed, we use (3.6), which gives

    θ ≈ 1.281·√0.0308 / (0.05·√2000) ≈ 0.1005.

This approximately amounts to a 10% loading. The premium for each client is (1 + θ)m ≈
(1 + 0.1)·0.05 = 0.055 units of money.
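The arithmetic of this example, recomputed in code as a sketch:

```python
import math

# Recomputing Example 1: n = 2000 clients, loss probability q = 0.1,
# loss uniform on [0, 1], beta = 0.9 (standard normal quantile ~1.281).
n, q, q_beta = 2000, 0.1, 1.281

m = q * 0.5                                  # (1.2.6): mean payment
sigma2 = q * (1 / 12) + q * (1 - q) * 0.25   # (1.2.7): variance of the payment
theta = q_beta * math.sqrt(sigma2) / (m * math.sqrt(n))  # (3.6)
premium = (1 + theta) * m

print(round(sigma2, 4))   # 0.0308
print(round(theta, 3))    # about 0.10, i.e. a 10% loading
print(round(premium, 3))  # 0.055
```

The computed loading and premium reproduce the values in the example.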

EXAMPLE 2. Assume that a portfolio of a company consists of two homogeneous


groups of risks. For the first group, the number of clients n1 = 2000 and the probability of a
loss event for each client is q1 = 0.1. The payment, if a loss event occurs, is a non-random
amount of z1 = 10 units of money. For the second group, the corresponding quantities are
n2 = 400, q2 = 0.05, and z2 = 30. In particular, n = n1 + n2 = 2400.
Assume that the loss events are independent and β = 0.9. For a particular payment X
(the index is omitted), E{X} = qz and Var{X} = z²q(1 − q). Hence,

    E{Sn} = n1 q1 z1 + n2 q2 z2 = 2600,
    Var{Sn} = n1 q1 (1 − q1) z1² + n2 q2 (1 − q2) z2² = 35100.

Then, by (3.5),

    θ ≈ 1.281·√35100 / 2600 ≈ 0.092,

that is, about 9.2%. Each client from the first group should pay a premium of (1 + θ)q1 z1 ≈
1.092, while for the second group, the individual premium is (1 + θ)q2 z2 ≈ 1.638.
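Example 2 can likewise be recomputed directly from (3.5); a sketch:

```python
import math

# Recomputing Example 2: two homogeneous groups with deterministic claim
# sizes z1, z2; beta = 0.9 (standard normal quantile ~1.281).
n1, q1, z1 = 2000, 0.1, 10
n2, q2, z2 = 400, 0.05, 30
q_beta = 1.281

mean_s = n1 * q1 * z1 + n2 * q2 * z2
var_s = n1 * q1 * (1 - q1) * z1**2 + n2 * q2 * (1 - q2) * z2**2
theta = q_beta * math.sqrt(var_s) / mean_s  # (3.5)

print(round(mean_s), round(var_s))       # 2600 35100
print(round(theta, 3))                   # 0.092
print(round((1 + theta) * q1 * z1, 3))   # first-group premium:  1.092
print(round((1 + theta) * q2 * z2, 3))   # second-group premium: 1.638
```

The recomputed θ ≈ 9.2% and the two group premiums agree with the values in the example.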
