Faculty of Actuaries Institute of Actuaries

EXAMINATIONS

April 2000

Subject 103 — Stochastic Modelling

EXAMINERS’ REPORT

© Faculty of Actuaries
© Institute of Actuaries
Subject 103 (Stochastic Modelling) — April 2000 — Examiners’ Report

1   E[M_{n+1} | F_n] = E[e^{-λ(n+1) + γX_{n+1}} | F_n]

    = e^{-λ(n+1)} E[e^{γ(X_n + Y_{n+1})} | F_n] = e^{-λ(n+1)} e^{γX_n} E[e^{γY_{n+1}} | F_n]

    = e^{-λ} M_n E[e^{γY_{n+1}}] = e^{-λ} M_n (pe^γ + (1 - p)e^{-γ}).

    Hence the condition for a martingale:

    pe^γ + (1 - p)e^{-γ} = e^λ.

    Multiply by e^γ and solve the resulting quadratic equation in the unknown e^γ:

    e^γ = (e^λ ± √(e^{2λ} - 4p(1 - p))) / (2p).
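As a quick numerical check, the root above can be substituted back into the martingale condition; p and λ below are illustrative values chosen so that the discriminant is positive.

```python
import math

# Check the root e^gamma = (e^lambda + sqrt(e^{2 lambda} - 4p(1-p))) / (2p):
# substituting it back should satisfy p e^g + (1 - p) e^{-g} = e^lambda.
p, lam = 0.3, 0.1          # illustrative values with a positive discriminant
el = math.exp(lam)
eg = (el + math.sqrt(el ** 2 - 4 * p * (1 - p))) / (2 * p)
residual = p * eg + (1 - p) / eg - el
print(abs(residual) < 1e-12)   # True: the martingale condition holds
```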

2   Assume E[X | Y] = a + bY and determine a, b using:

    (i) the orthogonality condition E{(X - E[X | Y])Y} = 0;

    (ii) E{E[X | Y]} = E[X].

    (i) gives E[XY] - aE[Y] - bE[Y²] = 0.

    Since the correlation coefficient ρ is

    ρ = (E[XY] - E[X]E[Y]) / (σ_X σ_Y),

    we have E[XY] = ρσ_X σ_Y + μ_X μ_Y and (i) yields

    aμ_Y + b(σ_Y² + μ_Y²) = ρσ_X σ_Y + μ_X μ_Y.

    (ii) gives a + bμ_Y = μ_X.

    Solve the two simultaneous equations to get

    b = ρσ_X / σ_Y,   a = μ_X - (ρσ_X / σ_Y) μ_Y.
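A short simulation check, with hypothetical values of the means, standard deviations and correlation: the coefficients a and b derived above should make the orthogonality condition hold for a bivariate Gaussian pair.

```python
import random

# Verify b = rho * sX / sY, a = mX - b * mY on simulated bivariate normals.
rng = random.Random(0)
mX, mY, sX, sY, rho = 1.0, -2.0, 2.0, 3.0, 0.5   # hypothetical parameters
b = rho * sX / sY                                # = 1/3
a = mX - b * mY                                  # = 5/3
n, sxy = 200_000, 0.0
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = mX + sX * z1
    y = mY + sY * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
    sxy += (x - (a + b * y)) * y                 # orthogonality residual times Y
print(round(a, 4), round(b, 4))                  # 1.6667 0.3333
print(round(sxy / n, 2))                         # close to 0
```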


3   The states 0, 1, 2, 3, … form a chain in which the only transitions are j → j + 1, each occurring at rate λ(t). The generator is

    A(t) =
        [ -λ(t)   λ(t)                      ]
        [         -λ(t)   λ(t)              ]
        [                 -λ(t)   λ(t)      ]
        [                        ⋱        ⋱ ]

    Forward equations:

    ∂/∂t P(s, t) = P(s, t) A(t),   t ≥ s.

    ∂/∂t P_00(s, t) = -λ(t) P_00(s, t) and P_00(s, s) = 1 imply that

    P_00(s, t) = exp(-∫_s^t λ(u) du) = e^{-m(s,t)}.

    For j > 0, we have ∂/∂t P_0j(s, t) = λ(t) P_{0,j-1}(s, t) - λ(t) P_0j(s, t) with initial condition P_0j(s, s) = 0.

    Verify that the form of P_0j(s, t) given in the question satisfies this equation:

    LHS = (e^{-m(s,t)} / j!) (j m(s, t)^{j-1} - m(s, t)^j) ∂m(s, t)/∂t,

    RHS = λ(t) m(s, t)^{j-1} e^{-m(s,t)} / (j - 1)! - λ(t) m(s, t)^j e^{-m(s,t)} / j!.

    The observation that ∂m(s, t)/∂t = λ(t) is sufficient to finish the verification.
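The closed form can be checked numerically: with a hypothetical rate λ(t) = 1 + t, an Euler integration of the forward equation for P_00 should reproduce exp(-m(s, t)).

```python
import math

# Integrate d/dt P_00(0, t) = -lambda(t) P_00(0, t) by Euler steps and
# compare with exp(-m(0, 1)), where m(0, 1) = int_0^1 (1 + u) du = 1.5.
lam = lambda t: 1.0 + t        # hypothetical illustrative rate function
dt, t, p = 1e-5, 0.0, 1.0      # initial condition P_00(0, 0) = 1
while t < 1.0:
    p -= dt * lam(t) * p
    t += dt
exact = math.exp(-1.5)
print(round(p, 4), round(exact, 4))   # both approximately 0.2231
```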

4   (a) Z is stationary, i.e. I(0), as it is a first-order autoregression.

        X is not stationary, but ∇X is just a linear combination of Z and e_1, so is stationary; this implies that X is I(1). The same goes for Y.

    (b) Z satisfies the Markov property on its own; X and Y do not, since they depend on values of Z.

    (c) (X, Y, Z) is Markov; indeed, it is a vector autoregression.

    (d) X and Y are not cointegrated. Although both are I(1), any linear combination W = αX + βY satisfies W_n = W_{n-1} + θ_W Z_{n-1} + e_{3,n}, which does not define a stationary process.


5   (i) E[B_t² | F_s] = E[(B_t - B_s + B_s)² | F_s]

        = E[(B_t - B_s)² + 2(B_t - B_s)B_s + B_s² | F_s]

        = E[(B_t - B_s)² | F_s] + 2B_s E[B_t - B_s | F_s] + B_s²,

        by the property of conditional expectations which allows one to “take out what is known”. Moreover, by independence of the increments, the above is

        E[(B_t - B_s)²] + B_s² = t - s + B_s².

        Similarly,

        E[B_t⁴ | F_s] = E[(B_t - B_s + B_s)⁴ | F_s]

        = E[(B_t - B_s)⁴ + 4(B_t - B_s)³B_s + 6(B_t - B_s)²B_s² + 4(B_t - B_s)B_s³ + B_s⁴ | F_s]

        = E[(B_t - B_s)⁴] + 6B_s² E[(B_t - B_s)²] + B_s⁴,

        where we used the independence of increments property as well as the fact that moments of odd order of N(0, σ²) vanish. Finally,

        E[B_t⁴ | F_s] = B_s⁴ + 6(t - s)B_s² + 3(t - s)².

    (ii) From above,

        E[B_t⁴ - 6tB_t² | F_s] = B_s⁴ + 6(t - s)B_s² + 3(t - s)² - 6t(t - s + B_s²)

        = B_s⁴ - 6sB_s² + 3(t - s)² - 6t(t - s) = B_s⁴ - 6sB_s² + 3(s² - t²)

        ∴ B_t⁴ - 6tB_t² + 3t² is a martingale.
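A Monte Carlo sanity check: since the process starts at B_0 = 0, the martingale property implies E[B_t⁴ - 6tB_t² + 3t²] = 0 for every t.

```python
import random

# Estimate E[B_t^4 - 6 t B_t^2 + 3 t^2] at a fixed time t; the martingale
# property forces it to equal its value at time 0, namely 0.
rng = random.Random(1)
t, n = 2.0, 200_000
total = 0.0
for _ in range(n):
    b = rng.gauss(0.0, t ** 0.5)       # B_t ~ N(0, t)
    total += b ** 4 - 6 * t * b ** 2 + 3 * t ** 2
print(round(total / n, 2))             # close to 0
```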

6   (i) u = F_1(x) = x / (1 + x) is solved by x = F_1^{-1}(u) = u / (1 - u).

        For the symmetrised version, the simplest thing is to multiply x by a variable y which takes the values ±1 depending on whether another pseudo-random uniform number v is in the range (0, 0.5) or (0.5, 1).

    (ii) By symmetry we only need consider x > 0, so we find

        max_{x>0} 2θ(1 + x)² / (π(θ² + x²)).

        Differentiating the logarithm of this fraction and setting it equal to 0, we get

        2 / (1 + x) = 2x / (θ² + x²),

        with solution x = θ². Substituting this value in, we obtain the required value of C.


        Let g(x) = f(x | θ) / (C f_2(x)), which we observe is less than or equal to 1 everywhere. The method of acceptance-rejection sampling goes as follows: use (i) to generate a variable y from density f_2. We accept y as a valid observation from f(x | θ) with probability g(y), otherwise reject it. (Generate a uniform variable u, and reject if u > g(y).) If we reject it, go back and generate another y from f_2, and continue in the same way until eventual acceptance.
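A sketch of the whole scheme in Python. Reading the ratio in (ii) as f(x | θ)/f₂(x) identifies the target density as the Cauchy form f(x | θ) = θ/(π(θ² + x²)) and the envelope as f₂(x) = 1/(2(1 + |x|)²), with C = 2(1 + θ²)/(πθ) obtained by substituting x = θ²; θ = 1 below is an illustrative value.

```python
import math
import random

def sample_f2(rng):
    """Draw from f2(x) = 1 / (2 (1 + |x|)^2) by inversion plus a random sign."""
    u = rng.random()
    x = u / (1.0 - u)            # inverts F1(x) = x / (1 + x)
    return x if rng.random() < 0.5 else -x

def cauchy_rejection(theta, n, seed=0):
    """Acceptance-rejection sampling from f(x|theta) = theta / (pi (theta^2 + x^2))."""
    rng = random.Random(seed)
    C = 2.0 * (1.0 + theta ** 2) / (math.pi * theta)
    samples, proposals = [], 0
    while len(samples) < n:
        proposals += 1
        y = sample_f2(rng)
        f = theta / (math.pi * (theta ** 2 + y ** 2))
        f2 = 1.0 / (2.0 * (1.0 + abs(y)) ** 2)
        if rng.random() <= f / (C * f2):   # accept with probability g(y)
            samples.append(y)
    return samples, proposals

samples, proposals = cauchy_rejection(theta=1.0, n=20_000)
print(round(len(samples) / proposals, 3))   # acceptance rate near 1/C = pi/4
```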

7   (i) (a) γ_0 = Var(e_t + β_1 e_{t-1}) = (1 + β_1²)σ_e² and γ_1 = Cov(e_t + β_1 e_{t-1}, e_{t-1} + β_1 e_{t-2}) = β_1 σ_e², with γ_k = 0 for k > 1.

            This gives ρ_0 = 1, ρ_1 = β_1 / (1 + β_1²), ρ_k = 0 otherwise.

        (b) Invertibility requires that |β_1| < 1, so that the sum X_t - β_1 X_{t-1} + β_1² X_{t-2} - … converges. μ and σ_e are irrelevant.

    (ii) We need to solve (1 + β_1²)σ_e² = 14.5, β_1 σ_e² = 5.0. Eliminating σ_e², we have 1 + β_1² = 2.9β_1, or β_1 = ½(2.9 ± √(2.9² - 4)) = 2.5 or 0.4.

        β_1 = 2.5 corresponds to σ_e² = 2, whereas β_1 = 0.4 corresponds to σ_e² = 12.5.

        For invertibility, solve 1 + β_1 z = 0. In the first case, z = -0.4 (no good); in the second, z = -2.5 (OK).
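The arithmetic can be confirmed in a couple of lines:

```python
import math

# Solve (1 + b^2) s2 = 14.5 and b s2 = 5.0: eliminating s2 gives the
# quadratic 1 + b^2 = 2.9 b in b = beta_1.
disc = math.sqrt(2.9 ** 2 - 4)                  # sqrt(4.41) = 2.1
roots = [(2.9 + disc) / 2, (2.9 - disc) / 2]    # 2.5 and 0.4
for b in roots:
    s2 = 5.0 / b
    print(round(b, 10), round(s2, 10), abs(b) < 1)   # invertible iff |b| < 1
```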

8   (i) (a) States:

            C: healthy contributor
            C′: contributor but ill
            B_1, B_2, B_3: beneficiary, with the index giving the duration of illness

        (b) Transition graph: from C, the chain stays in C with probability 0.9 and moves to B_1 with probability 0.1; from C′, it moves to C with probability 0.8 and to B_1 with probability 0.2; from B_1 and B_2, it returns to C with probability 0.8 and moves to the next B state with probability 0.2; from B_3, it returns to C with probability 0.8 and moves to C′ with probability 0.2.


        (c) Transition matrix (states ordered C, C′, B_1, B_2, B_3):

                [ 0.9   0     0.1   0     0   ]
                [ 0.8   0     0.2   0     0   ]
            P = [ 0.8   0     0     0.2   0   ]
                [ 0.8   0     0     0     0.2 ]
                [ 0.8   0.2   0     0     0   ]
(ii) The chain is irreducible by inspection: every state is accessible from every
other state. State C is clearly aperiodic because of the one-step loop from C
to C; because of irreducibility, every other state must be aperiodic too.

    (iii) (a) π = πP reads

            π_C = 0.9π_C + 0.8(π_{C′} + π_1 + π_2 + π_3)

            π_{C′} = 0.2π_3

            π_1 = 0.1π_C + 0.2π_{C′}

            π_2 = 0.2π_1

            π_3 = 0.2π_2.

            Discard the first equation and choose π_{C′} as the working variable:

            π_3 = (1/0.2) π_{C′} = 5π_{C′}

            π_2 = (1/0.2) π_3 = 5π_3 = 25π_{C′}

            π_1 = (1/0.2) π_2 = 5π_2 = 125π_{C′}

            π_C = (1/0.1) π_1 - (0.2/0.1) π_{C′} = 10π_1 - 2π_{C′} = 1248π_{C′}

            ∴ π = π_{C′}(1248, 1, 125, 25, 5).

            Find π_{C′} by normalisation: π_{C′}(1248 + 1 + 125 + 25 + 5) = 1,

            ∴ π_{C′} = 1/1404.

        (b) Proportion of beneficiaries: (125 + 25 + 5)/1404 = 11.04%.
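The stationary distribution can be checked by iterating π ← πP from any starting distribution:

```python
# Power iteration for the stationary distribution; states ordered C, C', B1, B2, B3.
P = [
    [0.9, 0.0, 0.1, 0.0, 0.0],
    [0.8, 0.0, 0.2, 0.0, 0.0],
    [0.8, 0.0, 0.0, 0.2, 0.0],
    [0.8, 0.0, 0.0, 0.0, 0.2],
    [0.8, 0.2, 0.0, 0.0, 0.0],
]
pi = [1.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(5)) for j in range(5)]

print([round(x * 1404, 3) for x in pi])   # proportional to (1248, 1, 125, 25, 5)
print(round(sum(pi[2:]), 4))              # beneficiary proportion: 0.1104
```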


    (iv) (a) The average profit per period per member in the stationary régime is

            Z = (f - c) (1248 + 1)/1404 + (f - b) (125 + 25 + 5)/1404

              = f - c (1249/1404) - b (155/1404).

            For Z > 0 you need f > c (1249/1404) + b (155/1404).

        (b) With the given data,

            Z = 300 - (150 × 1249 + 600 × 155)/1404 = 100.32.
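The profit figure follows directly from the stationary proportions:

```python
# Average profit per period per member: contributing states yield f - c,
# beneficiary states yield f - b, weighted by the stationary proportions.
weights = {"C": 1248, "Cp": 1, "B1": 125, "B2": 25, "B3": 5}
total = sum(weights.values())                       # 1404
f, c, b = 300, 150, 600                             # data given in (b)
contributors = (weights["C"] + weights["Cp"]) / total
beneficiaries = (weights["B1"] + weights["B2"] + weights["B3"]) / total
Z = (f - c) * contributors + (f - b) * beneficiaries
print(round(Z, 2))   # 100.32
```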

9   (i) (a) γ_1 = Cov(X_t, X_{t-1}) = Cov(α_1 X_{t-1} + α_2 X_{t-2} + e_t, X_{t-1}) = α_1 γ_0 + α_2 γ_1 + 0, since e_t is independent of X_{t-1}.

        (b) Similarly γ_2 = α_1 γ_1 + α_2 γ_0 and γ_0 = α_1 γ_1 + α_2 γ_2 + Cov(X_t, e_t). A further application of the same technique gives Cov(X_t, e_t) = σ_e².

            Thus γ_1 = (α_1 / (1 - α_2)) γ_0 and γ_2 = (α_2 + α_1² / (1 - α_2)) γ_0.

        (c) ρ_k is found by the relation ρ_k = γ_k / γ_0.

    (ii) We have α_1 / (1 - α_2) = ρ_1 and α_2 + α_1² / (1 - α_2) = ρ_2, which are solved by

        α̂_1 = ρ_1(1 - ρ_2) / (1 - ρ_1²),   α̂_2 = (ρ_2 - ρ_1²) / (1 - ρ_1²).
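A round-trip check of the relations in (i)(b) and (ii), using hypothetical AR(2) coefficients:

```python
# Start from known AR(2) coefficients, compute rho_1 and rho_2 from (i)(b),
# then recover the coefficients with the formulae from (ii).
a1, a2 = 0.5, 0.3                    # hypothetical stationary AR(2) values
rho1 = a1 / (1 - a2)
rho2 = a2 + a1 ** 2 / (1 - a2)
a1_hat = rho1 * (1 - rho2) / (1 - rho1 ** 2)
a2_hat = (rho2 - rho1 ** 2) / (1 - rho1 ** 2)
print(round(a1_hat, 10), round(a2_hat, 10))   # 0.5 0.3
```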

10  (i) (a) B_t is defined by the following properties:

            · Independent increments: B_t - B_s is independent of B_a, 0 ≤ a ≤ s, whenever s ≤ t.

            · Stationary Gaussian increments: B_t - B_s ~ N(0, t - s).

            · Continuous sample paths: t → B_t is continuous.


        The transition density to go from x at time s to y at time t is

        g_{t-s}(y - x) = (1 / √(2π(t - s))) e^{-(y-x)²/2(t-s)}.

        (b) {W_s = x, W_t = y} = {σB_s + μs = x, σB_t + μt = y} = {B_s = (x - μs)/σ, B_t = (y - μt)/σ}. Hence the transition density of W is

        (1/σ) g_{t-s}((y - x - μ(t - s))/σ).
    (ii) By Itô’s lemma,

        d(log S_t) = (1/S_t) dS_t + ½(-1/S_t²)(dS_t)²

        = μ dt + σ dB_t - (σ²/2) dt.

        Hence

        log S_t = log S_0 + (μ - σ²/2)t + σB_t,

        and finally

        S_t = S_0 e^{(μ - σ²/2)t + σB_t}.

    (iii) P[S_t > b | S_0 = a] = P[σB_t + (μ - σ²/2)t > log(b/a)]

        = P[B_t > (1/σ)(log(b/a) - (μ - σ²/2)t)]

        = 1 - Φ((log(b/a) - (μ - σ²/2)t) / (σ√t)).

        Here a = 38, b = 45, μ = 0.25, σ = 0.2, t = 1/3 year.

        So the above quantity is

        1 - Φ(0.800) = 1 - 0.7881 = 0.2119.
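The figure can be reproduced without tables using the error function (Φ(x) = ½(1 + erf(x/√2))):

```python
import math

def Phi(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b, mu, sigma, t = 38.0, 45.0, 0.25, 0.2, 1.0 / 3.0
z = (math.log(b / a) - (mu - sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
p = 1.0 - Phi(z)
print(round(p, 4))   # approximately 0.2118, agreeing with the tables value 0.2119
```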

    (iv) P[max_{0≤s≤t} S_s ≥ b | S_0 = a] = P[max_{0≤s≤t} (B_s + ((μ - σ²/2)/σ)s) ≥ (1/σ) log(b/a)]

        = Φ(((μ - σ²/2)t - log(b/a)) / (σ√t)) + exp(((2μ - σ²)/σ²) log(b/a)) Φ(-((μ - σ²/2)t + log(b/a)) / (σ√t)).

        The first term is 0.2119 by (iii).

        The second term is the product of (b/a)^{(2μ-σ²)/σ²} = 6.9893 with Φ(-2.128) = 1 - Φ(2.128) = 1 - 0.9833 = 0.0167.

        So the result is finally 0.2119 + 6.9893 × 0.0167 = 0.2119 + 0.1167 = 0.3286.
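Both terms can be evaluated in the same way:

```python
import math

def Phi(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b, mu, sigma, t = 38.0, 45.0, 0.25, 0.2, 1.0 / 3.0
nu = mu - sigma ** 2 / 2               # drift of log S
l = math.log(b / a)
st = sigma * math.sqrt(t)
term1 = Phi((nu * t - l) / st)
term2 = (b / a) ** ((2 * mu - sigma ** 2) / sigma ** 2) * Phi(-(nu * t + l) / st)
print(round(term1 + term2, 3))         # approximately 0.328 (0.3286 with rounded table values)
```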

11  (i) The generator matrix of the process (states ordered A, F, I, O, D) would be

        [ -1    0.4    0.1    0.5    0    ]
        [  0   -1/3    1/12   1/12   1/6  ]
        [  0    0     -1/60   0      1/60 ]
        [  0    0      0     -1/2    1/2  ]
        [  0    0      0      0      0    ]
    (ii) The probability of ever visiting state I is 1/10 + (4/10) × (1/4) = 1/5.

    (iii) (a) (d/dt) p_AA(t) = -p_AA(t), which has solution p_AA(t) = e^{-t}.

        (b) Similarly, (d/dt) p_AF(t) = -(1/3) p_AF(t) + 0.4 p_AA(t), so that

            (d/dt){e^{t/3} p_AF(t)} = 0.4 e^{t/3} p_AA(t) = 0.4 e^{-2t/3},

            giving p_AF(t) = e^{-t/3} × 0.6(1 - e^{-2t/3}).
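The solution for p_AF can be verified by integrating the forward equation numerically:

```python
import math

# Euler integration of d/dt p_AF = -(1/3) p_AF + 0.4 e^{-t}, p_AF(0) = 0,
# compared with the closed form 0.6 e^{-t/3} (1 - e^{-2t/3}) at t = 2.
dt, t, p = 1e-5, 0.0, 0.0
while t < 2.0:
    p += dt * (-(1.0 / 3.0) * p + 0.4 * math.exp(-t))
    t += dt
exact = 0.6 * math.exp(-2.0 / 3.0) * (1.0 - math.exp(-4.0 / 3.0))
print(round(p, 4), round(exact, 4))   # both approximately 0.2268
```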


    (iv) (a) The equation arises as follows: when the process arrives in state i, the subsequent holding time has mean λ_i^{-1}, after which the process jumps to a different state, choosing state j with probability p_ij = s_ij / λ_i (independent of the length of the holding time). The total time to reach state D is therefore the time until the first jump plus the time from arriving in the new state until hitting D (unless the new state is D).

        (b) We have m_I = 60, m_O = 2, m_F = 3 + (1/4) × 60 + (1/4) × 2 = 18.5,

            m_A = 1 + 0.1 × 60 + 0.5 × 2 + 0.4 × 18.5 = 15.4 hours.
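The back-substitution in (b) is short enough to script:

```python
# Mean times to reach the absorbing state D, working backwards through the
# states (holding-time means are reciprocals of the diagonal rates, and
# jump probabilities are s_ij / lambda_i as in (a)).
m_I = 60.0                                   # I -> D directly
m_O = 2.0                                    # O -> D directly
m_F = 3.0 + 0.25 * m_I + 0.25 * m_O          # F -> I, O, D w.p. 1/4, 1/4, 1/2
m_A = 1.0 + 0.1 * m_I + 0.5 * m_O + 0.4 * m_F
print(m_F, round(m_A, 4))                    # 18.5 15.4
```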

(v) The time-homogeneous Markov model has exponential holding times, so the
distribution is completely determined by the expectation.

(vi) A simple check on whether the Markov model fits the data is therefore to
verify that the distributions of holding times are at least roughly
exponential, and a simple way of doing that is to compare sample standard
deviations with sample means. More detailed comparisons might be
possible, depending on the size of the data set.

(vii) (a) Calculations required in the first case would include working out the
expected duration of stay if the change were implemented, which
involves solving the equations in (iv) again. For the second situation,
just replace m_O in the original calculation. New parameter values
will need to be guessed. Whichever model comes out better should be
compared with the initial situation, to determine whether the
improvement was worth the additional resources.

(b) Model suitability: on the one hand the required decision is couched in
terms of expectations, which lend themselves well to Markov process
treatment. On the other, the fundamental problem in the system is
queue length, which can never be successfully modelled by a process
which tracks only a single individual at a time. (A network of
queuing processes would be a much better model.)
