Continuous Distributions: Section 2
CONTINUOUS DISTRIBUTIONS
2.1 Things to remember
The variance of X is
Var(X) = E(X^2) − [E(X)]^2 = E[(X − E(X))^2] ≥ 0.
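The two forms of the variance always agree; a quick numeric check in Python on an arbitrary small sample:

```python
xs = [1.0, 2.0, 2.0, 5.0]                    # arbitrary illustrative data
n = len(xs)
mean = sum(xs) / n                           # E(X) = 2.5
ex2 = sum(x * x for x in xs) / n             # E(X^2) = 8.5
var1 = ex2 - mean ** 2                       # E(X^2) - [E(X)]^2 = 2.25
var2 = sum((x - mean) ** 2 for x in xs) / n  # E[(X - E(X))^2] = 2.25
assert abs(var1 - var2) < 1e-12
```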
2.2 Some continuous families of distributions
2.2.1 Continuous Uniform
E(X) = (a + b)/2,   Var(X) = (b − a)^2 / 12.
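A quick Monte Carlo check of these formulas, using only Python's standard library (the endpoints a = 2, b = 5 are arbitrary illustrative choices):

```python
import random

random.seed(0)
a, b = 2.0, 5.0
n = 200_000
xs = [random.uniform(a, b) for _ in range(n)]

mean = sum(xs) / n                           # theory: (a + b)/2 = 3.5
var = sum((x - mean) ** 2 for x in xs) / n   # theory: (b - a)^2/12 = 0.75
print(mean, var)
```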
2.2.2 Exponential
Examples of use:
• the time until the next claim arrives at an insurance office
• the time until the next bus arrives
2.2.3 Gamma
Properties of the Gamma distribution
2.2.4 Normal
Normal
[Figure: plot of Normal density curves, x between −5 and 10]
The Standard Normal distribution
When µ = 0 and σ = 1
• the distribution is called the standard Normal distribution
• we use the notation Φ(z) for the distribution function, φ(z) for
the density.
It has two key properties.
• The symmetry property
If Z ∼ N(0, 1) then
P(Z < z) = P(Z > −z)
Calculation of Normal probabilities
You will also find tables showing the percentage points of Φ, where
you specify a value of p in the range (0, 1) and want to know what
value of z gives P(Z > z) = p, i.e., we calculate
z_p = Φ^{-1}(1 − p).
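In Python, the table lookup can be replaced by `statistics.NormalDist`, whose `inv_cdf` method plays the role of Φ^{-1} (the choice p = 0.05 below is just an illustration):

```python
from statistics import NormalDist

p = 0.05
z_p = NormalDist().inv_cdf(1 - p)   # z_p = Phi^{-1}(1 - p)
print(round(z_p, 4))                # the familiar 5% point, about 1.645
```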
Normal approximations to Binomial
As already seen,
• when k is large and θ small, the probability function of Bin(k, θ)
can be approximated by that of Pois(kθ).
• However, this is only really useful when kθ is not very large.
If kθ > 5 and k(1 − θ) > 5 it is more helpful to use a Normal
approximation to Binomial.
Suppose
• X ∼ Bin(k, θ) and
• Y ∼ N(µ, σ^2), where µ = kθ, σ^2 = kθ(1 − θ).
Then
P(X ≤ x) ≈ P(Y ≤ x + 1/2),
obtained from tables as above.
The 1/2 added to (or in some cases subtracted from) x is called the
continuity correction, and it arises because we are approximating
P(X = x) by P(x − 0.5 < Y < x + 0.5).
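A sketch of the approximation in Python, using only the standard library (the choices k = 50, θ = 0.4, and x = 22 are illustrative):

```python
import math

def binom_cdf(x, k, theta):
    """Exact P(X <= x) for X ~ Bin(k, theta)."""
    return sum(math.comb(k, i) * theta ** i * (1 - theta) ** (k - i)
               for i in range(x + 1))

def normal_cdf(y, mu, sigma):
    """P(Y <= y) for Y ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

k, theta = 50, 0.4                       # k*theta = 20 > 5, k*(1-theta) = 30 > 5
mu, sigma = k * theta, math.sqrt(k * theta * (1 - theta))

x = 22
exact = binom_cdf(x, k, theta)
approx = normal_cdf(x + 0.5, mu, sigma)  # continuity correction: x + 1/2
print(exact, approx)                     # the two values agree closely
```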
Other Normal approximations
2.3 Distributions arising from transformations
2.3.1 The chi-squared distribution
2.3.2 The lognormal distribution
If
• X is Normal with mean µ and variance σ^2,
• Y = e^X,
then Y is said to have the lognormal distribution with parameters µ
and σ^2.
Note that e^µ is the median of Y, but not the expectation.
Lognormal variables are always strictly positive, unlike Normal
variables. They are right-skewed, again unlike the Normal.
Examples of use:
• The value of a share index today is 1250. The value in one year’s
time might be modelled as a logNormal random variable.
• When valuing put and call options using the Black-Scholes
formula, a fundamental assumption is that the value of the
“underlying” (the asset on which the options are based) at the
exercise time is logNormally distributed.
logNormal mean and variance
so that
Var[Y] = e^{2µ+σ^2} (e^{σ^2} − 1).
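A simulation check of these moments in Python (E[Y] = e^{µ+σ^2/2} is the standard lognormal mean; the parameter values µ = 0.2, σ = 0.5 are arbitrary):

```python
import math, random

random.seed(1)
mu, sigma = 0.2, 0.5
n = 200_000
# Y = e^X with X ~ N(mu, sigma^2)
ys = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]

sample_mean = sum(ys) / n
sample_var = sum((y - sample_mean) ** 2 for y in ys) / n

theory_mean = math.exp(mu + sigma ** 2 / 2)
theory_var = math.exp(2 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1)
print(sample_mean, theory_mean)
```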
2.3.3 The beta distribution
E[X] = α/(α + β),   Var[X] = αβ / [(α + β)^2 (α + β + 1)].
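These moments can be verified by simulation with `random.betavariate` from the standard library (α = 2, β = 5 chosen for illustration):

```python
import random

random.seed(2)
alpha, beta = 2.0, 5.0
n = 200_000
xs = [random.betavariate(alpha, beta) for _ in range(n)]

sample_mean = sum(xs) / n
sample_var = sum((x - sample_mean) ** 2 for x in xs) / n

theory_mean = alpha / (alpha + beta)                              # 2/7
theory_var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(sample_mean, theory_mean)
```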
2.4 Transformations
2.4.1 Linear transformations
2.4.2 Monotone transformations
Example of a monotone transformation
Example
Suppose that X ∼ U(0, 1) and Y = 1/X. Then
F_Y(y) = P[Y ≤ y] = P[1/X ≤ y] = P[X ≥ 1/y]
       = ∫_{1/y}^{1} dx = 1 − 1/y,   for y ≥ 1.
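Simulating Y = 1/X and comparing the empirical distribution function with 1 − 1/y gives a quick check (the sample size and the evaluation point y = 4 are arbitrary):

```python
import random

random.seed(3)
n = 100_000
# X ~ U(0,1); use 1 - random.random() so the divisor is never exactly 0
ys = [1 / (1 - random.random()) for _ in range(n)]

y = 4.0
empirical = sum(1 for v in ys if v <= y) / n
theoretical = 1 - 1 / y                # F_Y(y) = 1 - 1/y for y >= 1
print(empirical, theoretical)
```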
2.5 Random numbers and simulation
2.5.1 Principles of simulation
2.5.2 Examples of simulation
Example
If F(x) = 1 − e^{−λx} then
F^{-1}(u) = −(1/λ) log(1 − u),
so we set
X = −(1/λ) log(1 − U).
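A sketch of this inverse-transform recipe in Python (the rate λ = 2 is illustrative):

```python
import math, random

random.seed(4)
lam = 2.0
n = 200_000
# X = -(1/lam) * log(1 - U); 1 - random.random() lies in (0, 1], so log is safe
xs = [-math.log(1 - random.random()) / lam for _ in range(n)]

sample_mean = sum(xs) / n
print(sample_mean)   # close to E(X) = 1/lam = 0.5
```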
Example
If we want to simulate from N(µ, σ 2 ), we observe that
F(x) = Φ((x − µ)/σ), so
F^{-1}(u) = µ + σ Φ^{-1}(u).
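A sketch in Python, using `statistics.NormalDist().inv_cdf` for Φ^{-1} (the helper name `simulate_normal` and the parameter values µ = 10, σ = 3 are our own illustrative choices):

```python
import random
from statistics import NormalDist

random.seed(5)

def simulate_normal(mu, sigma, nd=NormalDist()):
    """Simulate N(mu, sigma^2) via the inverse-CDF method."""
    u = random.random()
    while u == 0.0:              # inv_cdf requires 0 < u < 1
        u = random.random()
    return mu + sigma * nd.inv_cdf(u)

mu, sigma = 10.0, 3.0
n = 100_000
xs = [simulate_normal(mu, sigma) for _ in range(n)]
sample_mean = sum(xs) / n
print(sample_mean)               # close to mu
```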
Example
Suppose F(x) = e^{λx} / (e^{λx} + e^{−λx}). If X = F^{-1}(U), then U = F(X),
so
U = 1/(1 + e^{−2λX})
e^{−2λX} = 1/U − 1
X = −(1/(2λ)) log(1/U − 1).
So you simulate U from U(0, 1), then apply the formula to produce
the simulated value of X.
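The derivation can be sanity-checked by confirming that F and the derived F^{-1} really are inverses of one another (λ = 1.5 is illustrative):

```python
import math, random

random.seed(6)
lam = 1.5

def F(x):
    # F(x) = e^{lam x} / (e^{lam x} + e^{-lam x}) = 1 / (1 + e^{-2 lam x})
    return 1 / (1 + math.exp(-2 * lam * x))

def F_inv(u):
    # derived above: X = -(1/(2 lam)) * log(1/u - 1)
    return -math.log(1 / u - 1) / (2 * lam)

# round-trip check: applying F to F^{-1}(u) recovers u
for _ in range(1000):
    u = random.uniform(1e-6, 1 - 1e-6)
    assert abs(F(F_inv(u)) - u) < 1e-9
print("round-trip OK")
```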
2.6 Summary