PDEs and Transition Density Functions
Invariant Scalings
Taylor for two Variables
Suppose a function $f = f(x, y)$ and both $x, y$ change by a small amount, so $x \to x + \delta x$ and $y \to y + \delta y$; then we can examine the change in $f$ using a two-dimensional form of Taylor:
$$f(x + \delta x, y + \delta y) = f(x, y) + f_x\,\delta x + f_y\,\delta y + \frac{1}{2} f_{xx}\,\delta x^2 + \frac{1}{2} f_{yy}\,\delta y^2 + f_{xy}\,\delta x\,\delta y + O\!\left(\delta x^3, \delta y^3\right).$$
By taking $f(x, y)$ to the left-hand side, writing
$$\mathrm{d}f = f(x + \delta x, y + \delta y) - f(x, y)$$
and considering only linear terms, i.e.
$$\mathrm{d}f = \frac{\partial f}{\partial x}\,\delta x + \frac{\partial f}{\partial y}\,\delta y,$$
we obtain a formula for the differential, or total change, in $f$.
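A quick numerical sketch (the test function $f$, the point, and the increments below are arbitrary choices for illustration): the linear terms alone reproduce the change in $f$ up to a second-order error.

```python
import math

# Illustrative test function; fx, fy below are its exact partial derivatives
def f(x, y):
    return math.sin(x) * math.exp(y)

x, y = 0.7, 0.3
dx, dy = 1e-3, 1e-3

fx = math.cos(x) * math.exp(y)   # df/dx at (x, y)
fy = math.sin(x) * math.exp(y)   # df/dy at (x, y)

exact = f(x + dx, y + dy) - f(x, y)   # true change df
linear = fx * dx + fy * dy            # total-differential estimate

# the discrepancy is the O(dx^2, dx*dy, dy^2) remainder
print(exact, linear, abs(exact - linear))
```

Shrinking `dx` and `dy` by a factor of 10 shrinks the discrepancy by roughly a factor of 100, as the quadratic remainder predicts.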
The Dirac delta function

The delta function, denoted $\delta(x)$, is a very useful 'object' in applied maths and, more recently, in quant finance. It is the mathematical representation of a point source, e.g. a force or a payment. Although labelled a function, it is really a distribution or generalised function. Consider the following definition of a piecewise function, for a parameter $\varepsilon > 0$:
$$f_\varepsilon(x) = \begin{cases} \dfrac{1}{2\varepsilon}, & x \in [-\varepsilon, \varepsilon] \\[4pt] 0, & \text{otherwise.} \end{cases}$$
Now set the delta function equal to the above in the following limiting sense:
$$\delta(x) = \lim_{\varepsilon \to 0} f_\varepsilon(x).$$
What is happening here? As $\varepsilon$ decreases we note the 'hat' narrows whilst becoming taller, eventually becoming a spike. By the definition, the area under the curve (i.e. the rectangle) is fixed at 1, i.e. $\frac{1}{2\varepsilon} \times 2\varepsilon = 1$.
Looking at what happens in the limit $\varepsilon \to 0$, the spike-like (singular) behaviour at the origin gives the following definition:
$$\delta(x) = \begin{cases} \infty, & x = 0 \\ 0, & x \neq 0 \end{cases}$$
with the property
$$\int_{-\infty}^{\infty} \delta(x)\,\mathrm{d}x = 1.$$
There are many ways to define $\delta(x)$. Consider the Gaussian/normal distribution with pdf
$$G_\varepsilon(x) = \frac{1}{\varepsilon\sqrt{2\pi}} \exp\left(-\frac{x^2}{2\varepsilon^2}\right).$$
The function takes its highest value at $x = 0$; as $|x| \to \infty$ there is exponential decay away from the origin. If we stay at the origin, then as $\varepsilon$ decreases, $G_\varepsilon(x)$ exhibits the earlier spike (it shoots up to infinity), so
$$\lim_{\varepsilon \to 0} G_\varepsilon(x) = \delta(x).$$
The normalising constant $\frac{1}{\varepsilon\sqrt{2\pi}}$ ensures that the area under the curve will always be unity.
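This can be checked numerically. In the sketch below (illustrative values of $\varepsilon$ and an ad hoc trapezoidal quadrature, not part of the notes), the peak $G_\varepsilon(0)$ grows like $1/\varepsilon$ while the area stays at 1.

```python
import math

def G(x, eps):
    # Gaussian pdf with mean 0 and standard deviation eps
    return math.exp(-x**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))

def area(eps, L=5.0, n=100001):
    # trapezoidal rule on [-L, L]; for eps << L the tails are negligible
    h = 2 * L / (n - 1)
    s = 0.5 * (G(-L, eps) + G(L, eps))
    s += sum(G(-L + i * h, eps) for i in range(1, n - 1))
    return s * h

for eps in (0.5, 0.25, 0.125):
    print(eps, G(0.0, eps), area(eps))   # peak grows, area stays ~1
```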
The graph below shows $G_\varepsilon(x)$ for $\varepsilon = 2.0$ (royal blue), $1.0$ (red), $0.5$ (green), $0.25$ (purple), $0.125$ (turquoise); the Gaussian curve becomes slimmer and more peaked as $\varepsilon$ decreases.

$G_\varepsilon(x)$ is plotted for $\varepsilon = 0.01$.
The figure will be as before, except now centred at $x_0$ and not at the origin; this gives the shifted delta function $\delta(x - x_0)$. So we have seen two definitions of $\delta(x)$. Another comes from the Cauchy distribution:
$$L_\varepsilon(x) = \frac{1}{\pi}\,\frac{\varepsilon}{x^2 + \varepsilon^2},$$
so here
$$\delta(x) = \lim_{\varepsilon \to 0} \frac{1}{\pi}\,\frac{\varepsilon}{x^2 + \varepsilon^2}.$$
Now suppose we have a smooth function $g(x)$ and consider the following integral:
$$\int_{-\infty}^{\infty} g(x)\,\delta(x - x_0)\,\mathrm{d}x = g(x_0).$$
This sifting property of the delta function is a very important one.
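The sifting property can be verified numerically by replacing $\delta$ with the Gaussian approximation $G_\varepsilon$ and shrinking $\varepsilon$. In this sketch the test function $g$, the point $x_0$, and the quadrature grid are arbitrary illustrative choices.

```python
import math

def g(x):                      # arbitrary smooth test function
    return math.cos(x) + x**2

def delta_eps(x, eps):         # Gaussian approximation to delta(x)
    return math.exp(-x**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))

def sift(x0, eps, n=20001):
    # trapezoidal rule on [x0 - 10 eps, x0 + 10 eps], where the mass sits
    a = x0 - 10 * eps
    h = 20 * eps / (n - 1)
    s = 0.0
    for i in range(n):
        x = a + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        s += w * g(x) * delta_eps(x - x0, eps)
    return s * h

x0 = 0.4
for eps in (0.1, 0.01):
    print(eps, sift(x0, eps), g(x0))   # converges to g(x0) as eps -> 0
```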
Heaviside Function

The Heaviside step function is defined by $H(x) = 0$ for $x < 0$ and $H(x) = 1$ for $x > 0$; in the distributional sense its derivative is the delta function, $H'(x) = \delta(x)$.
Similarity Methods

$f(x, y)$ is homogeneous of degree $t \geq 0$ if $f(\lambda x, \lambda y) = \lambda^t f(x, y)$.

1. $f(x, y) = \sqrt{x^2 + y^2}$:
$$f(\lambda x, \lambda y) = \sqrt{(\lambda x)^2 + (\lambda y)^2} = \lambda\sqrt{x^2 + y^2} = \lambda f(x, y),$$
so $f$ is homogeneous of degree 1.

2. $g(x, y) = \dfrac{x + y}{x - y}$; then
$$g(\lambda x, \lambda y) = \frac{\lambda x + \lambda y}{\lambda x - \lambda y} = \lambda^0\,\frac{x + y}{x - y} = \lambda^0\, g(x, y),$$
so $g$ is homogeneous of degree 0.

3. $h(x, y) = x^2 + y^3$:
$$h(\lambda x, \lambda y) = (\lambda x)^2 + (\lambda y)^3 = \lambda^2 x^2 + \lambda^3 y^3 \neq \lambda^t \left(x^2 + y^3\right)$$
for any $t$. So $h$ is not homogeneous.
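The three examples above can be tested numerically: if $f(\lambda x, \lambda y) = \lambda^t f(x, y)$, then $t = \log\bigl(f(\lambda x, \lambda y)/f(x, y)\bigr)/\log \lambda$ is independent of the sample point and of $\lambda$. The sample point and scale factors below are arbitrary choices.

```python
import math

# The three example functions from the text
f = lambda x, y: math.sqrt(x**2 + y**2)        # homogeneous, degree 1
g = lambda x, y: (x + y) / (x - y)             # homogeneous, degree 0
h = lambda x, y: x**2 + y**3                   # not homogeneous

def degree(F, x=1.3, y=0.7, lam=2.0):
    # recover t from F(lam*x, lam*y) = lam^t F(x, y)
    return math.log(F(lam * x, lam * y) / F(x, y)) / math.log(lam)

print(degree(f))                               # ~1.0
print(degree(g))                               # ~0.0
# for h the computed 't' depends on lam, betraying non-homogeneity:
print(degree(h, lam=2.0), degree(h, lam=3.0))
```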
Consider the function
$$F(x, y) = \frac{x^2}{x^2 + y^2}$$
and the differential equation $\dfrac{dy}{dx} = F(x, y)$. If for any $\lambda > 0$ we write
$$x' = \lambda x, \qquad y' = \lambda y,$$
then
$$\frac{dy'}{dx'} = \frac{dy}{dx}, \qquad \frac{x^2}{x^2 + y^2} = \frac{x'^2}{x'^2 + y'^2}.$$
We see that the equation is invariant under the change of variables. It also makes sense to look for a solution which is itself invariant under the transformation. One choice is to write
$$v = \frac{y}{x} = \frac{y'}{x'},$$
so write
$$y = vx.$$
Definition. The differential equation $\dfrac{dy}{dx} = f(x, y)$ is said to be homogeneous when $f(x, y)$ is homogeneous of degree $t$ for some $t$.

Method of Solution
Substituting $y = vx$ (so $\frac{dy}{dx} = v'x + v$ and, by homogeneity, $f(x, vx) = x^t f(1, v)$), the differential equation now becomes
$$v'x + v = x^t f(1, v),$$
which is not always solvable: the method may not work. But when $t = 0$ (homogeneous of degree zero), $x^t = 1$. Hence
$$v'x + v = f(1, v),$$
or
$$x\frac{dv}{dx} = f(1, v) - v,$$
which is separable, i.e.
$$\int \frac{dv}{f(1, v) - v} = \int \frac{dx}{x} + c,$$
and the method is guaranteed to work.
Example
$$\frac{dy}{dx} = \frac{y - x}{y + x}.$$
First we check:
$$\frac{\lambda y - \lambda x}{\lambda y + \lambda x} = \lambda^0\,\frac{y - x}{y + x},$$
which is homogeneous of degree zero. So put $y = vx$:
$$v'x + v = f(x, vx) = \frac{vx - x}{vx + x} = \frac{v - 1}{v + 1} = f(1, v),$$
therefore
$$v'x = \frac{v - 1}{v + 1} - v = -\frac{1 + v^2}{v + 1}$$
and the D.E. is now separable:
$$\int \frac{v + 1}{v^2 + 1}\,dv = -\int \frac{dx}{x}$$
$$\int \frac{v}{v^2 + 1}\,dv + \int \frac{1}{v^2 + 1}\,dv = -\int \frac{dx}{x}$$
$$\frac{1}{2}\ln\left(1 + v^2\right) + \arctan v = -\ln x + c$$
$$\frac{1}{2}\ln\left(x^2\left(1 + v^2\right)\right) + \arctan v = c.$$
Now we turn to the original problem, so put $v = \dfrac{y}{x}$:
$$\frac{1}{2}\ln\left(x^2\left(1 + \frac{y^2}{x^2}\right)\right) + \arctan\frac{y}{x} = c,$$
which simplifies to
$$\frac{1}{2}\ln\left(x^2 + y^2\right) + \arctan\frac{y}{x} = c.$$
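The implicit solution can be sanity-checked by integrating the original equation numerically (a sketch using a hand-rolled RK4 stepper and an arbitrary initial condition): the quantity $C(x, y) = \frac{1}{2}\ln(x^2 + y^2) + \arctan(y/x)$ should stay constant along the solution curve.

```python
import math

def rhs(x, y):
    # right-hand side of dy/dx = (y - x)/(y + x)
    return (y - x) / (y + x)

def C(x, y):
    # the conserved quantity from the implicit solution
    return 0.5 * math.log(x**2 + y**2) + math.atan(y / x)

x, y = 1.0, 2.0                # illustrative initial condition
c0 = C(x, y)
hstep = 1e-4
for _ in range(10000):         # classical RK4 from x = 1 to x = 2
    k1 = rhs(x, y)
    k2 = rhs(x + hstep / 2, y + hstep * k1 / 2)
    k3 = rhs(x + hstep / 2, y + hstep * k2 / 2)
    k4 = rhs(x + hstep, y + hstep * k3)
    y += hstep * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += hstep

print(c0, C(x, y), abs(C(x, y) - c0))   # drift should be tiny
```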
Probability Distributions
At the heart of modern finance theory lies the uncertain movement of financial quantities. For modelling purposes we are concerned with the
evolution of random events through time.
Trinomial Random Walk
The Transition Probability Density Function
The transition pdf is denoted by
$$p(y, t; y', t').$$
We can gain information, such as the centre of the distribution and where the random variable might be in the long run, by studying its probabilistic properties. It gives the density of particles diffusing from $(y, t)$ to $(y', t')$.
$p(y, t; y', t')$ satisfies two equations:
Forward Equation

So the random variable can either rise or fall by $\delta y$ with equal probability $\alpha < \frac{1}{2}$, and remain at the same location with probability $1 - 2\alpha$.
At the previous time step we must have been at one of $(y' + \delta y, t' - \delta t)$, $(y' - \delta y, t' - \delta t)$ or $(y', t' - \delta t)$. So
$$p(y, t; y', t') = \alpha\, p(y, t; y' + \delta y, t' - \delta t) + (1 - 2\alpha)\, p(y, t; y', t' - \delta t) + \alpha\, p(y, t; y' - \delta y, t' - \delta t).$$
Taylor series expansion gives (omitting the dependence on $(y, t)$ in the working, since it does not change):
$$p\left(y' + \delta y, t' - \delta t\right) = p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t + \frac{\partial p}{\partial y'}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y'^2}\,\delta y^2 + \ldots$$
$$p\left(y', t' - \delta t\right) = p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t + \ldots$$
$$p\left(y' - \delta y, t' - \delta t\right) = p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t - \frac{\partial p}{\partial y'}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y'^2}\,\delta y^2 + \ldots$$
Substituting into the above:
$$p\left(y', t'\right) = \alpha\left(p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t + \frac{\partial p}{\partial y'}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y'^2}\,\delta y^2\right) + (1 - 2\alpha)\left(p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t\right) + \alpha\left(p\left(y', t'\right) - \frac{\partial p}{\partial t'}\,\delta t - \frac{\partial p}{\partial y'}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y'^2}\,\delta y^2\right).$$
Collecting terms:
$$0 = -\frac{\partial p}{\partial t'}\,\delta t + \alpha\,\frac{\partial^2 p}{\partial y'^2}\,\delta y^2,$$
i.e.
$$\frac{\partial p}{\partial t'} = \alpha\,\frac{\delta y^2}{\delta t}\,\frac{\partial^2 p}{\partial y'^2}.$$
Now take limits. This only makes sense if $\alpha\,\dfrac{\delta y^2}{\delta t}$ is $O(1)$, i.e. $\delta y^2 = O(\delta t)$; letting $\delta y, \delta t \to 0$ gives the equation
$$\frac{\partial p}{\partial t'} = c^2\,\frac{\partial^2 p}{\partial y'^2},$$
where $c^2 = \alpha\,\dfrac{\delta y^2}{\delta t}$. This is called the forward Kolmogorov equation, also called the Fokker-Planck equation.
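The limiting argument can be sanity-checked by Monte Carlo (a sketch with arbitrarily chosen parameters): simulate the trinomial walk and compare the sample variance of $y$ at time $t'$ with the value $2c^2 t'$ implied by the Gaussian solution of the forward equation.

```python
import random

random.seed(2024)

# Illustrative parameters: alpha < 1/2, with dy^2/dt held fixed
alpha = 0.25
dy, dt = 0.01, 1e-4
c2 = alpha * dy**2 / dt        # c^2 = alpha * dy^2 / dt

n_steps, n_paths = 2000, 2000  # t' = n_steps * dt = 0.2
finals = []
for _ in range(n_paths):
    y = 0.0
    for _ in range(n_steps):
        u = random.random()
        if u < alpha:          # up with probability alpha
            y += dy
        elif u < 2 * alpha:    # down with probability alpha
            y -= dy
    finals.append(y)

t_final = n_steps * dt
sample_var = sum(v * v for v in finals) / n_paths
print(sample_var, 2 * c2 * t_final)  # the Gaussian solution has variance 2 c^2 t'
```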
Backward Equation

The backward equation tells us the probability that we end up at $(y', t')$ given that we were at an initial state $(y, t)$.
So the probability of being at $(y', t')$, given that we are at $y$ at the earlier time $t$, is linked to the probabilities of being at each of $(y + \delta y, t + \delta t)$, $(y, t + \delta t)$ and $(y - \delta y, t + \delta t)$, combined with then reaching $(y', t')$ from there. Now for a concrete example!
At 7pm you are at the office - this is the point $(y, t)$.
Remember $p(y, t; y', t')$ represents the probability of being at the future point $(y', t')$, i.e. bed at midnight, given that you started at $(y, t)$, the office at 7pm.

Looking at the earlier figure, we can only get to bed at midnight via either

the pub

the office

Madame Jojo's

at 8pm.
What happens after 8pm doesn't matter - we don't care, you may not even remember! We are only concerned with being in bed at midnight. The earlier figure shows many different paths; only the ones ending up in 'our' bed are of interest to us.
In words: the probability of going from the office at 7pm to bed at midnight is

the probability of going to the pub from the office and then to bed at midnight, plus

the probability of staying at the office and then going to bed at midnight, plus

the probability of going to Madame Jojo's from the office and then to bed at midnight.
$$p(y, t; y', t') = \alpha\, p(y + \delta y, t + \delta t; y', t') + (1 - 2\alpha)\, p(y, t + \delta t; y', t') + \alpha\, p(y - \delta y, t + \delta t; y', t').$$
Now since $(y', t')$ do not change, drop these for the time being and use a Taylor series expansion on the right-hand side:
$$p(y, t) = \alpha\left(p(y, t) + \frac{\partial p}{\partial t}\,\delta t + \frac{\partial p}{\partial y}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y^2}\,\delta y^2 + \ldots\right) + (1 - 2\alpha)\left(p(y, t) + \frac{\partial p}{\partial t}\,\delta t + \ldots\right) + \alpha\left(p(y, t) + \frac{\partial p}{\partial t}\,\delta t - \frac{\partial p}{\partial y}\,\delta y + \frac{1}{2}\frac{\partial^2 p}{\partial y^2}\,\delta y^2 + \ldots\right),$$
which simplifies to
$$0 = \frac{\partial p}{\partial t} + \alpha\,\frac{\delta y^2}{\delta t}\,\frac{\partial^2 p}{\partial y^2}.$$
Putting $\alpha\,\dfrac{\delta y^2}{\delta t} = c^2 = O(1)$ and taking limits gives the backward equation
$$\frac{\partial p}{\partial t} = -c^2\,\frac{\partial^2 p}{\partial y^2},$$
or, as it is more commonly written,
$$\frac{\partial p}{\partial t} + c^2\,\frac{\partial^2 p}{\partial y^2} = 0.$$
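As a numerical sanity check (with illustrative values for $c$ and for the two points), central finite differences confirm that a Gaussian transition density, viewed as a function of the starting point $(y, t)$ with $(y', t')$ fixed, satisfies the backward equation.

```python
import math

c = 0.8                      # illustrative diffusion coefficient
yp, tp = 0.3, 2.0            # fixed future point (y', t')

def p(y, t):
    # Gaussian transition density in the backward variables (y, t), t < t'
    tau = tp - t
    return math.exp(-(yp - y)**2 / (4 * c**2 * tau)) / (2 * c * math.sqrt(math.pi * tau))

y, t, h = 0.1, 1.0, 1e-3
p_t  = (p(y, t + h) - p(y, t - h)) / (2 * h)              # dp/dt
p_yy = (p(y + h, t) - 2 * p(y, t) + p(y - h, t)) / h**2   # d2p/dy2
residual = p_t + c**2 * p_yy                              # backward-equation residual
print(residual)   # should be ~0 up to finite-difference error
```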
The above can be expressed mathematically as
$$p(y, t; y', t') = \alpha\, p(y + \delta y, t + \delta t; y', t') + (1 - 2\alpha)\, p(y, t + \delta t; y', t') + \alpha\, p(y - \delta y, t + \delta t; y', t').$$
Performing a Taylor expansion gives
$$0 = \frac{\partial p}{\partial t}\,\delta t + \alpha\,\frac{\partial^2 p}{\partial y^2}\,\delta y^2 + \ldots,$$
which becomes
$$0 = \frac{\partial p}{\partial t} + \alpha\,\frac{\delta y^2}{\delta t}\,\frac{\partial^2 p}{\partial y^2} + \ldots,$$
and letting $\alpha\,\dfrac{\delta y^2}{\delta t} = c^2$, where $c$ is non-zero and finite as $\delta t, \delta y \to 0$, we have
$$\frac{\partial p}{\partial t} + c^2\,\frac{\partial^2 p}{\partial y^2} = 0.$$
Diffusion Equation

The model equation is
$$\frac{\partial p}{\partial t'} = c^2\,\frac{\partial^2 p}{\partial y'^2}$$
for the unknown function $p = p(y', t')$. The idea is to obtain a solution in terms of Gaussian curves. Let's drop the primed notation.
We can now look for a similarity solution of the form
$$p(y, t) = t^a f(\xi), \qquad \xi = \frac{y}{t^{1/2}}.$$
Therefore
$$\frac{\partial p}{\partial y} = t^a f'(\xi)\,\frac{\partial \xi}{\partial y} = t^a f'(\xi)\,\frac{1}{t^{1/2}} = t^{a - 1/2} f'(\xi)$$
$$\frac{\partial^2 p}{\partial y^2} = \frac{\partial}{\partial y}\left(t^{a - 1/2} f'(\xi)\right) = t^{a - 1/2} f''(\xi)\,\frac{1}{t^{1/2}} = t^{a - 1} f''(\xi).$$
For the time derivative,
$$\frac{\partial p}{\partial t} = a\,t^{a - 1} f(\xi) + t^a\,\frac{\partial}{\partial t} f(\xi),$$
and we can use the chain rule to write
$$\frac{\partial}{\partial t} f(\xi) = f'(\xi)\,\frac{\partial \xi}{\partial t} = -\frac{1}{2}\,y\,t^{-3/2}\,f'(\xi) = -\frac{1}{2}\,\xi\,t^{-1} f'(\xi),$$
so we have
$$\frac{\partial p}{\partial t} = a\,t^{a - 1} f(\xi) - \frac{1}{2}\,\xi\,t^{a - 1} f'(\xi),$$
and then substituting these expressions into the pde gives
$$a\,t^{a - 1} f(\xi) - \frac{1}{2}\,\xi\,t^{a - 1} f'(\xi) = c^2\,t^{a - 1} f''(\xi);$$
the common factor $t^{a - 1}$ cancels, leaving
$$a f - \frac{1}{2}\,\xi f' = c^2 f''.$$
Thus we have so far
$$p(y, t) = t^a f\!\left(\frac{y}{t^{1/2}}\right),$$
which gives us a whole family of solutions dependent upon the choice of $a$.
Choosing $a = -\frac{1}{2}$ (so that $\int p\,\mathrm{d}y = \int f\,\mathrm{d}\xi$ is constant in time, as a density requires), the D.E. becomes
$$-\frac{1}{2}\left(f + \xi f'\right) = c^2 f''.$$
We have an exact derivative on the lhs, i.e. $\frac{d}{d\xi}(\xi f) = f + \xi f'$, hence
$$-\frac{1}{2}\,\frac{d}{d\xi}\left(\xi f\right) = c^2 f''$$
and we can integrate once to get
$$-\frac{1}{2}\,\xi f = c^2 f' + K.$$
We set $K = 0$ in order to get the correct (decaying) solution, i.e.
$$-\frac{1}{2}\,\xi f = c^2 f',$$
which can be solved as a simple first-order variable-separable equation:
$$f(\xi) = A \exp\left(-\frac{\xi^2}{4c^2}\right).$$
$A$ is a normalising constant, so write
$$\int_{-\infty}^{\infty} A \exp\left(-\frac{\xi^2}{4c^2}\right) \mathrm{d}\xi = 1.$$
Now substitute $x = \xi / 2c$, so $2c\,\mathrm{d}x = \mathrm{d}\xi$:
$$2cA \underbrace{\int_{-\infty}^{\infty} \exp\left(-x^2\right) \mathrm{d}x}_{=\ \sqrt{\pi}} = 1,$$
which gives $A = \dfrac{1}{2c\sqrt{\pi}}$. Returning to
$$p(y, t) = t^{-1/2} f(\xi),$$
this becomes
$$p\left(y', t'\right) = \frac{1}{2c\sqrt{\pi t'}} \exp\left(-\frac{y'^2}{4t'c^2}\right).$$
This is a pdf for a variable $y'$ that is normally distributed with mean zero and standard deviation $c\sqrt{2t'}$, which we ascertain by the following comparison with the normal density:
$$\frac{1}{2c\sqrt{\pi t'}} \exp\left(-\frac{y'^2}{4t'c^2}\right) \quad\text{vs.}\quad \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right),$$
i.e. $\mu = 0$ and $\sigma^2 = 2t'c^2$.
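Both facts, unit mass and variance $2t'c^2$, can be confirmed by quadrature (a sketch with arbitrary illustrative values of $c$ and $t'$):

```python
import math

c, tprime = 0.6, 1.5   # illustrative parameter values

def p(y_prime):
    # the transition density derived above (started from the origin)
    return math.exp(-y_prime**2 / (4 * tprime * c**2)) / (2 * c * math.sqrt(math.pi * tprime))

# trapezoidal quadrature over a wide interval (many standard deviations)
L, n = 30.0, 60001
h = 2 * L / (n - 1)
mass = var = 0.0
for i in range(n):
    x = -L + i * h
    w = 0.5 if i in (0, n - 1) else 1.0
    mass += w * p(x)
    var  += w * x * x * p(x)
mass *= h
var  *= h
print(mass, var, 2 * tprime * c**2)  # mass ~ 1, variance ~ 2 t' c^2 = sigma^2
```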
At $t' = t$ this is a Dirac delta function $\delta(y' - y)$: the particle is known to start from $(y, t)$ and diffuses out to $(y', t')$ with mean $y$ and standard deviation $c\sqrt{2(t' - t)}$.