Then the law of large numbers states that y_n converges to µ as n → ∞; that is, Var[y_n] → 0.

The mean of y_n is

E[y_n] = (1/n)(E[x_1] + E[x_2] + ... + E[x_n]) = (1/n) nµ = µ,

and the variance is

Var[y_n] = (1/n²)(Var[x_1] + Var[x_2] + ... + Var[x_n]) = (1/n²) nσ² = σ²/n,

where we have used the fact that the variance of the sum of independent random variables is the sum of their variances; see Appendix C.2.

We have therefore shown that as n → ∞, Var[y_n] → 0.
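The shrinking variance σ²/n can be checked with a short simulation. The sketch below is in Python for brevity (the appendix itself contains no code); the seed, sample sizes, and replication count are arbitrary illustrative choices. For U(0,1) draws, µ = 0.5 and σ² = 1/12.

```python
import random

random.seed(42)

def sample_mean(n):
    """Mean y_n of n IID U(0,1) draws (mu = 0.5, sigma^2 = 1/12)."""
    return sum(random.random() for _ in range(n)) / n

# Estimate Var[y_n] from 2000 replications for growing n; it should
# track the theoretical value sigma^2/n = (1/12)/n.
for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    m = sum(means) / len(means)
    v = sum((x - m) ** 2 for x in means) / len(means)
    print(f"n={n:5d}  Var[y_n]={v:.6f}  sigma^2/n={1 / (12 * n):.6f}")
```

The empirical variance of the sample mean falls roughly by a factor of 10 each time n grows by a factor of 10, as the law of large numbers requires.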
Let

s_n = Σ_{i=1}^{n} u_i + nµ, where u_i = x_i − µ.

So

s_n = Σ_{i=1}^{n} x_i.
Computational Finance Using C and C#: Derivatives and Valuation. DOI: 10.1016/B978-0-12-803579-5.00019-X. Copyright © 2016 Elsevier Ltd. All rights reserved.
The central limit theorem states that as n tends to infinity the probability distribution of z_n tends to a normal distribution with zero mean and unit variance; mathematically, z_n → N(0, 1) as n → ∞.

The moment generating function of z_n is

M_{z_n}(t) = E[exp(t z_n)] = E[exp((t/(σ√n)) Σ_{i=1}^{n} u_i)] = {M_u(t/(σ√n))}^n,

where the final step uses the independence and identical distribution of the u_i.
Thus,

M_u(t/(σ√n)) = 1 + t²/(2n) + o(1/n),

where we have used the facts that E[u_i] = 0, E[u_i²] = σ², and t/(σ√n) ≪ 1; see Grimmett and Welsh (1986).
We have therefore shown that as n → ∞,

M_{z_n}(t) = {1 + t²/(2n) + o(1/n)}^n → e^{t²/2}.
However, from Appendix D.1, the moment generating function M_z(t) for a standard normal distribution (µ = 0, σ² = 1) is

M_z(t) = e^{t²/2}, where z ∼ N(0, 1).

Since a distribution is uniquely determined by its moment generating function, it follows that z_n → N(0, 1).
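This limiting behaviour can be illustrated numerically. The Python sketch below (seed, n, and replication count are arbitrary choices) forms the standardized sum z_n = (s_n − nµ)/(σ√n) from U(0,1) draws, for which µ = 0.5 and σ² = 1/12, and checks that the first two moments of z_n are close to those of N(0, 1):

```python
import math
import random

random.seed(7)

MU = 0.5                         # mean of U(0,1)
SIGMA = math.sqrt(1.0 / 12.0)    # standard deviation of U(0,1)

def z_n(n):
    """Standardised sum (s_n - n*mu) / (sigma * sqrt(n)) of n U(0,1) draws."""
    s = sum(random.random() for _ in range(n))
    return (s - n * MU) / (SIGMA * math.sqrt(n))

# The first two moments of z_n should be close to those of N(0, 1).
zs = [z_n(50) for _ in range(5000)]
zbar = sum(zs) / len(zs)
zvar = sum((z - zbar) ** 2 for z in zs) / len(zs)
print(f"mean ~ {zbar:.3f}, variance ~ {zvar:.3f}")
```

Even with n as small as 50, the sample mean and variance of z_n sit close to 0 and 1 respectively.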
For Z = a + bX, where a and b are constants, the variance is

Var[Z] = E[(Z − E[Z])²]
= E[(a + bX − a − bE[X])²]
= E[(bX − bE[X])²]
= E[b²(X − E[X])²]
= b² E[(X − E[X])²] = b² Var[X].
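The identity Var[a + bX] = b² Var[X] is easy to confirm exactly on a small discrete distribution. A minimal Python sketch; the support, probabilities, a, and b are arbitrary illustrative choices:

```python
# Support and probabilities of a small discrete X (arbitrary choices).
xs = [1.0, 2.0, 5.0]
ps = [0.2, 0.5, 0.3]

def pmean(vals):
    """E[V] for values `vals` carried by the probabilities `ps`."""
    return sum(p * v for p, v in zip(ps, vals))

def pvar(vals):
    """Var[V] for values `vals` carried by the probabilities `ps`."""
    m = pmean(vals)
    return sum(p * (v - m) ** 2 for p, v in zip(ps, vals))

a, b = 3.0, -2.0
zs = [a + b * x for x in xs]          # Z = a + bX keeps the same probabilities
print(pvar(zs), b ** 2 * pvar(xs))    # the two values agree
```

The shift a drops out entirely, and the scale b enters only through b², exactly as the derivation shows.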
Two Variables
Let Z = a + b1 X1 + b2 X2 , where a, b1 , and b2 are constants.
Then the mean is E[Z] = E[a] + E[b_1 X_1] + E[b_2 X_2] = a + b_1 E[X_1] + b_2 E[X_2].
The variance Var[Z] is computed as follows:

Var[Z] = E[{a + b_1 X_1 + b_2 X_2 − a − b_1 E[X_1] − b_2 E[X_2]}²]
= E[{b_1(X_1 − E[X_1]) + b_2(X_2 − E[X_2])}²]
= b_1² E[(X_1 − E[X_1])²] + b_2² E[(X_2 − E[X_2])²]
  + 2 b_1 b_2 E[(X_1 − E[X_1])(X_2 − E[X_2])]
= b_1² Var[X_1] + b_2² Var[X_2] + 2 b_1 b_2 Cov[X_1, X_2].
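The two-variable formula, including its cross term, can be verified exactly against a small joint distribution. A Python sketch; the joint probabilities and the constants a, b_1, b_2 are arbitrary illustrative choices:

```python
# A small joint pmf for (X1, X2); the probabilities are arbitrary and sum to 1.
joint = {(0.0, 1.0): 0.3, (1.0, 1.0): 0.2, (0.0, 2.0): 0.1, (1.0, 2.0): 0.4}

def E(f):
    """Expectation of f(X1, X2) under the joint pmf."""
    return sum(p * f(x1, x2) for (x1, x2), p in joint.items())

a, b1, b2 = 1.0, 2.0, -3.0
m1 = E(lambda x1, x2: x1)
m2 = E(lambda x1, x2: x2)
v1 = E(lambda x1, x2: (x1 - m1) ** 2)
v2 = E(lambda x1, x2: (x2 - m2) ** 2)
c12 = E(lambda x1, x2: (x1 - m1) * (x2 - m2))

mz = a + b1 * m1 + b2 * m2                                  # E[Z]
vz = E(lambda x1, x2: (a + b1 * x1 + b2 * x2 - mz) ** 2)    # Var[Z] directly
rhs = b1 ** 2 * v1 + b2 ** 2 * v2 + 2 * b1 * b2 * c12       # the formula
print(vz, rhs)   # equal up to floating-point rounding
```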
Three Variables
Let Z = a + b_1 X_1 + b_2 X_2 + b_3 X_3, where a, b_1, b_2, and b_3 are constants.
Var[Z] = E[{a + b_1 X_1 + b_2 X_2 + b_3 X_3 − a − b_1 E[X_1] − b_2 E[X_2] − b_3 E[X_3]}²]
= E[{b_1(X_1 − E[X_1]) + b_2(X_2 − E[X_2]) + b_3(X_3 − E[X_3])}²]
= b_1² E[(X_1 − E[X_1])²] + b_2² E[(X_2 − E[X_2])²] + b_3² E[(X_3 − E[X_3])²]
  + 2 b_1 b_2 E[(X_1 − E[X_1])(X_2 − E[X_2])]
  + 2 b_2 b_3 E[(X_2 − E[X_2])(X_3 − E[X_3])]
  + 2 b_1 b_3 E[(X_1 − E[X_1])(X_3 − E[X_3])].

If X_1, X_2, and X_3 are IID, all the covariance terms are zero and the variance is

Var[Z] = b_1² Var[X_1] + b_2² Var[X_2] + b_3² Var[X_3].
Variance of n Variables
We will now derive an expression for the variance of a linear combination of n random variables.
Let Z = a + Σ_{i=1}^{n} b_i X_i, where a and b_i, i = 1, . . . , n, are constants.
Var[Z] = E[{a + Σ_{i=1}^{n} b_i X_i − a − Σ_{i=1}^{n} b_i E[X_i]}²]
= E[{Σ_{i=1}^{n} b_i X_i − Σ_{i=1}^{n} b_i E[X_i]}²]
= E[{Σ_{i=1}^{n} b_i(X_i − E[X_i])}²]
= Σ_{i=1}^{n} b_i² E[(X_i − E[X_i])²] + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} b_i b_j E[(X_i − E[X_i])(X_j − E[X_j])]
= Σ_{i=1}^{n} b_i² Var[X_i] + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} b_i b_j Cov[X_i, X_j].
As before, if all the X variables are IID, then the covariance terms are zero and we have

Var[Z] = Σ_{i=1}^{n} b_i² Var[X_i].

If in addition all the b_i terms are one and all the X variables have variance σ², we obtain

Var[Z] = Σ_{i=1}^{n} Var[X_i] = nσ².
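Because sample variances and covariances obey exactly the same algebra, the general n-variable formula can be confirmed on data as an identity, not merely approximately. A Python sketch; the dependent data construction and the coefficients b_i are arbitrary illustrative choices:

```python
import random

random.seed(3)

# Three deliberately dependent variables (an arbitrary construction purely
# for illustration): X2 and X3 are built from X1, so covariances are non-zero.
n = 2000
x1 = [random.random() for _ in range(n)]
x2 = [u + 0.5 * random.random() for u in x1]
x3 = [-u + 0.5 * random.random() for u in x1]
X = [x1, x2, x3]
b = [2.0, -1.0, 3.0]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    """Sample covariance (divisor n); cov(u, u) is the sample variance."""
    mu, mv = mean(u), mean(v)
    return sum((s - mu) * (t - mv) for s, t in zip(u, v)) / len(u)

# Left-hand side: Var[Z] for Z = sum_i b_i X_i (any constant a drops out).
z = [sum(bi * Xi[k] for bi, Xi in zip(b, X)) for k in range(n)]
lhs = cov(z, z)

# Right-hand side: sum_i b_i^2 Var[X_i] plus the cross terms b_i b_j Cov[X_i, X_j].
rhs = sum(b[i] * b[j] * cov(X[i], X[j]) for i in range(3) for j in range(3))
print(lhs, rhs)   # agree up to floating-point rounding
```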
C.3.2 Covariance
The covariance between two variables X and Y is defined by

Cov[X, Y] = E[(X − E[X])(Y − E[Y])] = E[XY − Y E[X] − X E[Y] + E[X]E[Y]] = E[XY] − E[X]E[Y].
Two Variables
Let Z_1 = a + bX and Z_2 = c + dY, where a, b, c, and d are constants.
We have

Cov[Z_1, Z_2] = E[(Z_1 − E[Z_1])(Z_2 − E[Z_2])] = E[(bX − bE[X])(dY − dE[Y])] = bd E[(X − E[X])(Y − E[Y])],

∴ Cov[Z_1, Z_2] = bd Cov[X, Y].
Three Variables
Let Z1 = a + b1 X1 + b2 X2 and Z2 = c + dY , where a, b1 , b2 , c, and d are
constants.
We have

Cov[Z_1, Z_2] = Cov[a + b_1 X_1 + b_2 X_2, c + dY]
= E[(a + b_1 X_1 + b_2 X_2)(c + dY)] − E[a + b_1 X_1 + b_2 X_2] E[c + dY]
= E[(a + b_1 X_1)(c + dY) + b_2 X_2(c + dY)]
  − {E[a + b_1 X_1] + E[b_2 X_2]} E[c + dY]
= Cov[a + b_1 X_1, c + dY] + Cov[b_2 X_2, c + dY],
∴ Cov[Z_1, Z_2] = b_1 d Cov[X_1, Y] + b_2 d Cov[X_2, Y].
Four Variables
Let Z1 = a + b1 X1 + b2 X2 + b3 X3 and Z2 = c + dY , where a, b1 , b2 , b3 , c, and
d are constants.
We have
Cov[Z_1, Z_2] = Cov[a + b_1 X_1 + b_2 X_2 + b_3 X_3, c + dY]
= E[(a + b_1 X_1 + b_2 X_2 + b_3 X_3)(c + dY)]
  − E[a + b_1 X_1 + b_2 X_2 + b_3 X_3] E[c + dY]
= E[(a + b_1 X_1 + b_2 X_2)(c + dY) + b_3 X_3(c + dY)]
  − {E[a + b_1 X_1 + b_2 X_2] + E[b_3 X_3]} E[c + dY]
= Cov[a + b_1 X_1 + b_2 X_2, c + dY] + Cov[b_3 X_3, c + dY],
∴ Cov[Z_1, Z_2] = b_1 d Cov[X_1, Y] + b_2 d Cov[X_2, Y] + b_3 d Cov[X_3, Y].
Covariance of n Variables
In a similar manner to that outlined above,
Cov[a + Σ_{i=1}^{n} b_i X_i, c + dY] = d Σ_{i=1}^{n} b_i Cov[X_i, Y].
So

Cov[a + Σ_{i=1}^{n} b_i X_i, c + Σ_{j=1}^{M} d_j Y_j] = Cov[Σ_{i=1}^{n} b_i X_i, Σ_{j=1}^{M} d_j Y_j]
= Σ_{i=1}^{n} b_i Cov[X_i, Σ_{j=1}^{M} d_j Y_j]
= Σ_{i=1}^{n} b_i Cov[Σ_{j=1}^{M} d_j Y_j, X_i],
∴ Cov[Z_1, Z_2] = Σ_{i=1}^{n} Σ_{j=1}^{M} b_i d_j Cov[Y_j, X_i].
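This double-sum rule also holds exactly for sample covariances, so it can be verified on data as an identity. A Python sketch; the data construction and the constants a, c, b_i, d_j are arbitrary illustrative choices:

```python
import random

random.seed(11)

# Check Cov[a + sum_i b_i X_i, c + sum_j d_j Y_j]
#     = sum_i sum_j b_i d_j Cov[X_i, Y_j] on dependent sample data.
n = 2000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y1 = [u + v for u, v in zip(x1, x2)]          # Y1 depends on X1 and X2
y2 = [u - 0.5 * w for u, w in zip(x1, x2)]    # Y2 depends on X1 and X2

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    """Sample covariance with divisor n."""
    mu, mv = mean(u), mean(v)
    return sum((s - mu) * (t - mv) for s, t in zip(u, v)) / len(u)

a, c = 4.0, -2.0
b = [1.5, -0.5]
d = [2.0, 3.0]
z1 = [a + b[0] * x1[k] + b[1] * x2[k] for k in range(n)]
z2 = [c + d[0] * y1[k] + d[1] * y2[k] for k in range(n)]

lhs = cov(z1, z2)
rhs = sum(b[i] * d[j] * cov([x1, x2][i], [y1, y2][j])
          for i in range(2) for j in range(2))
print(lhs, rhs)   # equal up to floating-point rounding
```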
The (i, j)th element of the covariance matrix of a random vector X therefore satisfies

Cov[X_i, X_j] = E[(X_i − E[X_i])(X_j − E[X_j])] = Cov[X]_ij. (C.3.2)
C.4 Conditional Mean and Covariance of Normal Distributions

Writing the inverse of the partitioned covariance matrix in block form,

Σ⁻¹ = ( Σ^11  Σ^12 )
      ( Σ^21  Σ^22 ).   (C.4.1)
Multiplying equation (C.4.5) on the left by (Σ^11)⁻¹ and on the right by (Σ^22)⁻¹ gives
and substituting for (Σ^11)⁻¹ Σ^12 from equation (C.4.7) into equation (C.4.8) gives

G = (x_1 − µ_1)ᵀ Σ^11 (x_1 − µ_1) + (x_1 − µ_1)ᵀ Σ^12 (x_2 − µ_2)
  + (x_2 − µ_2)ᵀ Σ^21 (x_1 − µ_1),

which can be written as the completed square

G = {x_1 − µ_1 + (Σ^11)⁻¹ Σ^12 (x_2 − µ_2)}ᵀ Σ^11 {x_1 − µ_1 + (Σ^11)⁻¹ Σ^12 (x_2 − µ_2)}
  − (x_2 − µ_2)ᵀ Σ^21 (Σ^11)⁻¹ Σ^12 (x_2 − µ_2), (C.4.11)

where, for instance, we have used the fact that the scalar quantity
(x_1 − µ_1)ᵀ Σ^12 (x_2 − µ_2) = (x_2 − µ_2)ᵀ Σ^21 (x_1 − µ_1).
Since the last term in equation (C.4.11) only involves constants (as far as f(x_1 | x_2) is concerned), it follows that

f(x_1 | x_2) ∝ exp( −(1/2) {x_1 − µ_1 + (Σ^11)⁻¹ Σ^12 (x_2 − µ_2)}ᵀ Σ^11 {x_1 − µ_1 + (Σ^11)⁻¹ Σ^12 (x_2 − µ_2)} ).
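The completed square implies a conditional mean of µ_1 − (Σ^11)⁻¹ Σ^12 (x_2 − µ_2). In the bivariate case all the blocks are scalars, so this coefficient can be checked directly against the familiar regression coefficient Σ_12 (Σ_22)⁻¹. A minimal Python sketch, assuming the superscripted blocks denote the blocks of Σ⁻¹ as in (C.4.1); the entries of Σ are arbitrary, chosen positive definite:

```python
# Bivariate covariance matrix Sigma = [[s11, s12], [s12, s22]] (arbitrary,
# positive definite since s11*s22 > s12^2).
s11, s12, s22 = 2.0, 0.8, 1.5
det = s11 * s22 - s12 * s12

# Scalar blocks of Sigma^{-1}: Sigma^11, Sigma^12 (= Sigma^21), Sigma^22.
inv11 = s22 / det
inv12 = -s12 / det

# Coefficient of (x2 - mu2) in the conditional mean, two ways:
coef_from_inverse = -(1.0 / inv11) * inv12   # -(Sigma^11)^{-1} Sigma^12
coef_familiar = s12 / s22                    # Sigma_12 (Sigma_22)^{-1}
print(coef_from_inverse, coef_familiar)      # both equal s12/s22
```

So the negative sign inside the completed square is exactly what recovers the usual conditional mean µ_1 + Σ_12 (Σ_22)⁻¹ (x_2 − µ_2).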
Therefore,

M_y(t) = e^{bt} M_x(at). (C.5.3)

Moment generating function of a linear combination of random variables
Let z = x + y, where x and y are independent random variables; then

M_z(t) = E[e^{tz}] = E[e^{t(x+y)}] = E[e^{tx} e^{ty}] = E[e^{tx}] E[e^{ty}] = M_x(t) M_y(t),

where the factorization of the expectation uses the independence of x and y.
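The product rule M_{x+y}(t) = M_x(t) M_y(t) can be verified exactly for small independent discrete variables, where both sides reduce to finite sums. A Python sketch; the supports, probabilities, and the value of t are arbitrary illustrative choices:

```python
import math

# Supports and probabilities of two small independent discrete variables
# (arbitrary illustrative choices; each probability vector sums to 1).
xs, px = [0.0, 1.0], [0.4, 0.6]
ys, py = [1.0, 2.0, 3.0], [0.2, 0.3, 0.5]

def mgf(vals, probs, t):
    """Moment generating function E[exp(t*V)] of a discrete variable."""
    return sum(p * math.exp(t * v) for v, p in zip(vals, probs))

t = 0.7
# MGF of z = x + y computed directly from the independent joint distribution.
mz = sum(p * q * math.exp(t * (u + v))
         for u, p in zip(xs, px) for v, q in zip(ys, py))
print(mz, mgf(xs, px, t) * mgf(ys, py, t))   # the two values coincide
```

Because the joint probabilities factor as p·q under independence, the double sum factors into the product of the two individual MGFs, mirroring the derivation above.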