
Probability spaces

- Sample space: set of all possible outcomes S.
- Events: a collection F of subsets of S.
- Probability: function from F into [0, 1].

Events: Terminology

- Aᶜ: A does not occur (complement of A).
- A ∩ B: Both A and B occur (AB).
- A ∪ B: Either A or B occurs.
- ∅: The impossible event.
- S: The certain event.
- If A ∩ B = ∅, then A and B are disjoint.
- The events A1, A2, ... are mutually exclusive iff Ai ∩ Aj = ∅ for all i ≠ j.
- The events A1, A2, ... are exhaustive iff ∪i Ai = S.
- The events A1, A2, ... form a partition if they are mutually exclusive and exhaustive.

Set theory: De Morgan's laws

- (∪i Ai)ᶜ = ∩i Aiᶜ.
- (∩i Ai)ᶜ = ∪i Aiᶜ.

Axioms of probability

Axiom 1. 0 ≤ P(A).
Axiom 2. P(S) = 1.
Axiom 3. If A1, A2, ... are mutually exclusive then
    P(∪i Ai) = Σi P(Ai).    (1)

Consequences of the axioms

- If A ⊂ B then P(A) ≤ P(B).
- P(∅) = 0.
- P(A) ≤ 1.
- P(Aᶜ) = 1 − P(A).
- P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Discrete Univariate Distributions

- Probability mass function (pmf):
    p(x) = P(X = x),  x ∈ SX.    (9)
  • 0 < p(x) ≤ 1.
  • Σ_{i∈SX} p(i) = 1.
- Probability of events for discrete random variables:
    P(B) = Σ_{i∈B∩SX} P(X = i) = Σ_{i∈B∩SX} p(i),  B ∈ B(R).    (10)
- Relations between the cdf and pmf of a discrete random variable:
    F(x) = Σ_{i≤x} p(i),  p(x) = F(x) − F(x−).    (11)

Continuous Univariate Distributions

- Probability density function (pdf):
  • f(x) ≥ 0 for all x ∈ R.
  • ∫_R f(x) dx = 1 (Riemann integrability implicit).
- Probability of events for continuous random variables:
    P(X ∈ A) = ∫_A f(t) dt,  A ∈ B(R).
- Relations between the cdf/pdf of a continuous random variable:
    F(x) = ∫_{−∞}^{x} f(t) dt,  f(x) = (d/dx) F(x).    (12)

Discrete Joint Distributions

- Joint probability mass function:
    pX,Y(x, y) = P(X = x, Y = y).    (13)
  • 0 < pX,Y(x, y) ≤ 1.
  • Σ_{(x,y)∈SX,Y} pX,Y(x, y) = 1.
- Probability of an event B ∈ B(R²):
    P(B) = Σ_{(x,y)∈B∩SX,Y} pX,Y(x, y).    (14)
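The axioms and their consequences can be sanity-checked on a finite sample space with exact rational arithmetic; a minimal Python sketch (the fair-die space and the events A, B are hypothetical choices):

```python
from fractions import Fraction

# Hypothetical finite sample space: a fair six-sided die under the uniform measure.
S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under the uniform measure."""
    return Fraction(len(event & S), len(S))

A = {1, 2, 3}   # "at most 3"
B = {2, 4, 6}   # "even"
union = A | B   # set union, i.e. the event A ∪ B

# Inclusion-exclusion: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
assert P(union) == P(A) + P(B) - P(A & B)
# Complement rule: P(Aᶜ) = 1 − P(A).
assert P(S - A) == 1 - P(A)
```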
Conditional probability (events)

- Definition: If P(B) ≠ 0, then the probability of A given B is:
    P(A|B) = P(AB) / P(B).    (2)
- Chain rule: two events.
    P(AB) = P(A|B)P(B).    (3)
- Chain rule: more than two events.
    P(A1 A2 ... An) = P(A1)P(A2|A1) ... P(An|A1 ... An−1).    (4)
- Law of total probability: Let B1, B2, ... be a partition. Then
    P(A) = Σi P(A|Bi)P(Bi).    (5)
- Bayes' Rule: Let B1, B2, ... be a partition and A such that P(A) ≠ 0. Then
    P(Bk|A) = P(A|Bk)P(Bk) / Σi P(A|Bi)P(Bi).    (6)

Independence of events

- A, B are independent iff P(AB) = P(A)P(B).
- A1, A2, ... are independent iff
    P(Ak1 Ak2 ... Akn) = P(Ak1)P(Ak2) ... P(Akn),    (7)
  for any subset of events Ak1, Ak2, ..., Akn.
- Product rule: If A1, ..., An are independent, then
    P(A1 A2 ... An) = P(A1)P(A2) ... P(An).    (8)

Cumulative Distribution Functions (CDF)

- Defining properties:
  • 0 ≤ F(x) ≤ 1.
  • lim_{x→∞} F(x) = 1, lim_{x→−∞} F(x) = 0.
  • If x ≤ y, then F(x) ≤ F(y) (non-decreasing).
  • If xn ↓ x then lim_{n→∞} F(xn) = F(x) (right-continuity).
- The cdf of a discrete random variable is a step function.
- The cdf of a continuous random variable is a continuous function.

Continuous Joint Distributions

- Joint probability density function. Defining properties:
  • fX,Y(x, y) ≥ 0.
  • ∫∫_{R²} fX,Y(x, y) dx dy = 1.
- Probability of an event B ∈ B(R²):
    P(B) = ∫∫_B fX,Y(x, y) dx dy.    (15)
- Joint Cumulative Distribution Function:
    FX,Y(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fX,Y(t1, t2) dt2 dt1.    (16)
- Relation between the joint pdf and joint cdf:
    fX,Y(x, y) = ∂²/(∂x ∂y) FX,Y(x, y).    (17)

Marginal Distributions

- Marginal cumulative distribution function:
    FX(x) = lim_{y→∞} FX,Y(x, y).    (18)
- Marginal probability mass functions:
    pX(x) = P(X = x) = Σ_y pX,Y(x, y).    (19)
- Marginal probability density functions:
    fX(x) = ∫_R fX,Y(x, y) dy.    (20)

Conditional Distributions

- Conditional pmf: If P(Y = y) ≠ 0 then
    P(X = x|Y = y) = P(X = x, Y = y) / P(Y = y).    (21)
- Conditional pdf:
    fX|Y(x|y) = fX,Y(x, y) / fY(y).    (22)
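Bayes' rule (6) together with the law of total probability (5) can be verified exactly with rational arithmetic; a small Python sketch (the two-block partition and the prior/likelihood values are hypothetical):

```python
from fractions import Fraction

# Hypothetical partition {B1, B2} (e.g. "has condition" / "does not") and an event A ("test positive").
prior = {"B1": Fraction(1, 100), "B2": Fraction(99, 100)}          # P(Bi)
likelihood = {"B1": Fraction(95, 100), "B2": Fraction(5, 100)}     # P(A | Bi)

# Law of total probability: P(A) = Σ_i P(A|Bi) P(Bi).
P_A = sum(likelihood[b] * prior[b] for b in prior)

# Bayes' rule: P(B1|A) = P(A|B1) P(B1) / P(A).
posterior = likelihood["B1"] * prior["B1"] / P_A
assert posterior == Fraction(19, 118)   # exact posterior probability
```

Note how a rare event (prior 1/100) stays unlikely even after a fairly accurate positive test, which is exactly what (6) quantifies.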
Dr Leonardo Rojas–Nandayapa, Institute of Financial and Actuarial Mathematics, University of Liverpool, leorojas@liverpool.ac.uk
Independence of random variables

- A collection of random variables X1, ..., Xn is independent iff
    FX1,...,Xn(x1, ..., xn) = FX1(x1) ... FXn(xn)    (general)
    pX1,...,Xn(x1, ..., xn) = pX1(x1) ... pXn(xn)    (discrete)
    fX1,...,Xn(x1, ..., xn) = fX1(x1) ... fXn(xn)    (continuous)

Expectation, moments, variance, covariance

- LOTUS: Law of the Unconscious Statistician. Let h : SX → R, then
    E[h(X)] = Σ_{x∈SX} h(x) · P(X = x)    (discrete case),    (23)
    E[h(X)] = ∫_{Rⁿ} h(t) · f(t) dt    (continuous case).    (24)
- Linearity of the Expectation:
    E[aX + bY + c] = aE[X] + bE[Y] + c.    (25)
- Moments: The k-th moment of X is defined as E[X^k].
- Variance and Covariance:
    Var(X) = E[(X − E[X])²] = E[X²] − E²[X],    (26)
    Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y].    (27)
- Properties of the Covariance:
    Cov(X, X) = Var(X),    (28)
    Cov(X, Y) = Cov(Y, X),    (29)
    Cov(a0 + Σ_{i=1}^{n} ai Xi, b0 + Σ_{j=1}^{m} bj Yj) = Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj Cov(Xi, Yj).    (30)
- Properties of the Variance:
    Var(aX + b) = a² Var(X),    (32)
    Var(a0 + Σ_{i=1}^{n} ai Xi) = Σ_{i=1}^{n} ai² Var(Xi) + 2 Σ_{i<j} ai aj Cov(Xi, Xj).    (33)
- Independence: If X, Y are independent, then
    E[XY] = E[X]E[Y],    (34)
    Cov(X, Y) = 0    (the reverse is not always true),    (35)
    Var(Σ_{i=1}^{n} Xi) = Σ_{i=1}^{n} Var(Xi).    (36)

Transformations of random variables/vectors

Let X = (X1, ..., Xn) be a random vector (possibly just a r.v.) and let
    h : Rⁿ → Rᵐ.    (50)
AIM: Determine the distribution of Y = h(X).

- Classical Method: h : Rⁿ → Rᵐ. General procedure:
  (a) Determine the support SY of Y (image of SX under h).
  (b) Let FY(y) be the cdf of Y. Then
      FY(y) = P(Y ≤ y) = P(h(X) ≤ y).    (51)
  (c) Find the set Ay = {x : h(x) ≤ y} so
      P(h(X) ≤ y) = P(X ∈ Ay).    (52)
  (d) Return FY(y) = P(X ∈ Ay).
- Univariate Inversion Formula: h : R → R, where the inverse h⁻¹ exists and is differentiable.
    FY(y) = FX(h⁻¹(y)) if h is increasing,  FY(y) = 1 − FX(h⁻¹(y)) if h is decreasing,    (53)
    fY(y) = fX(h⁻¹(y)) |(d/dy) h⁻¹(y)|.    (54)
- Multivariate Inversion Formula: h : Rⁿ → Rⁿ, where the inverse exists and is differentiable.
    fY(y) = fX(h⁻¹(y)) |Jy(h⁻¹)| = fX(h⁻¹(y)) / |Jz(h)|.    (55)
  Here z = h⁻¹(y) and Jz(h) is the Jacobian of h at z = h⁻¹(y).

Transformations: Special Methods

- Censored distributions. Let X be such that P(X ∈ A) > 0. Define
    X|A := (X | X ∈ A),    (56)
  the random variable censored to the set A.
  • If X is absolutely continuous with density fX then
      fX|A(x) = fX(x) IA(x) / P(X ∈ A).    (57)
  • If X is discrete with probability mass function pX then
      pX|A(x) = pX(x) IA(x) / P(X ∈ A).    (58)
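Censoring as in (58) amounts to restricting the pmf to A and renormalising by P(X ∈ A); a minimal Python sketch (a fair die censored to A = {5, 6} is a hypothetical choice):

```python
from fractions import Fraction

# Hypothetical pmf: a fair six-sided die.
pX = {k: Fraction(1, 6) for k in range(1, 7)}
A = {5, 6}

# P(X ∈ A): normalising constant of the censored pmf.
PA = sum(pX[k] for k in A)

# Eq. (58): pX|A(x) = pX(x) I_A(x) / P(X ∈ A).
pX_given_A = {k: pX[k] / PA for k in A}

assert sum(pX_given_A.values()) == 1          # a valid pmf on A
assert pX_given_A[5] == Fraction(1, 2)        # uniform on the surviving support
```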
- Sums: Let X1, X2 be jointly distributed rv and Z = X1 + X2.
  • If X1, X2 have a continuous joint density fX1,X2 then
      fZ(z) = ∫_{−∞}^{∞} fX1,X2(z − x2, x2) dx2.    (59)
  • If X1, X2 have a discrete joint pmf pX1,X2 then
      pZ(z) = Σ_{x2∈SX2} pX1,X2(z − x2, x2).    (60)
  • Convolutions: If X1, X2 are continuous and independent, then
      fZ(z) = ∫_{−∞}^{∞} fX1(z − x2) fX2(x2) dx2,    (61)
      FZ(z) = ∫_{−∞}^{∞} FX1(z − x2) fX2(x2) dx2.    (62)
  • Affine Transformations: Let X be a random vector and Z = b + AX where A is an invertible matrix. Then
      fZ(z) = fX(A⁻¹(z − b)) / |A|.    (63)
- Products and Quotients: Let X1, X2 be jointly distributed continuous rv with X2 nonnegative. Let Y1 = X1 X2 and Y2 = X1/X2. Then
      fY1(y) = ∫_{−∞}^{∞} (1/x2) fX1,X2(y/x2, x2) dx2,    (64)
      fY2(y) = ∫_{−∞}^{∞} x2 fX1,X2(yx2, x2) dx2.    (65)
- Extremes: If X1, ..., Xn are independent and Y1 = min{X1, ..., Xn} and Yn = max{X1, ..., Xn}, then
      FY1(y) = 1 − Π_{k=1}^{n} (1 − FXk(y)),    (66)
      FYn(y) = Π_{k=1}^{n} FXk(y).    (67)

Conditional Expectation

- Law of the Unconscious Statistician:
    E[h(X)|Y = y] = Σ_{i∈SX} h(i) · P(X = i|Y = y),    (37)
    E[h(X)|Y = y] = ∫_R h(x) · fX|Y(x|y) dx.    (38)
- Tower properties:
    E[X] = E[E[X|Y]],    (39)
    Var[X] = E[Var[X|Y]] + Var[E[X|Y]].    (40)
- Marginal distributions from conditional arguments:
    P(X ∈ A) = Σ_{y∈SY} P(X ∈ A|Y = y) p(y),    (41)
    P(X ∈ A) = ∫_R P(X ∈ A|Y = y) fY(y) dy.    (42)

Moment Generating Functions

- Univariate:
  • Definition: If X is a random variable, then MX : R → R is
      MX(θ) = E[e^{θX}].    (43)
  • Moment theorem:
      (dⁿ/dθⁿ) MX(θ) |_{θ=0} = E[Xⁿ].    (44)
- Multivariate:
  • Definition: If X is a random vector, then MX : Rⁿ → R is
      MX(θ) = E[e^{θᵀX}].    (45)
  • Moment theorem:
      ∂^{m1+···+mn}/(∂θ1^{m1} ... ∂θn^{mn}) MX(θ) |_{θ=0} = E[X1^{m1} X2^{m2} ... Xn^{mn}].    (46)
- Linear Transformations:
    M_{a0+a1X}(θ) = e^{θa0} MX(θa1),    (47)
    M_{a0+a1X1+···+anXn}(θ) = e^{θa0} Π_{k=1}^{n} MXk(θak)  (Xk independent),    (48)
    M_{b+AX}(θ) = e^{θᵀb} MX(Aᵀθ).    (49)
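The moment theorem (44) can be checked numerically by differentiating an mgf at θ = 0; a small Python sketch (a Bernoulli variable with p = 0.3 is a hypothetical choice, and the derivative is approximated by a central difference):

```python
import math

# Hypothetical example: X ∼ Ber(p), whose mgf is M(θ) = (1 − p) + p e^θ.
p = 0.3

def M(theta):
    return (1 - p) + p * math.exp(theta)

# Moment theorem (44): M′(0) = E[X] = p.
# Approximate the derivative at θ = 0 by a central difference with step h.
h = 1e-6
first_deriv = (M(h) - M(-h)) / (2 * h)
assert abs(first_deriv - p) < 1e-8   # agrees with E[X] = p up to O(h²)
```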
Sums of independent rv: Special cases

- Xi ∼ Bin(mi, p):
    X1 + · · · + Xn ∼ Bin(Σ_{i=1}^{n} mi, p).    (68)
- Xi ∼ Poisson(λi):
    X1 + · · · + Xn ∼ Poisson(Σ_{i=1}^{n} λi).    (69)
- Xi ∼ Geo(p):
    X1 + · · · + Xn ∼ NB(n, p).    (70)
- Xi ∼ Gamma(αi, β):
    X1 + · · · + Xn ∼ Gamma(Σ_{i=1}^{n} αi, β).    (71)
- Xi ∼ N(µi, σi²):
    X1 + · · · + Xn ∼ N(Σ_{i=1}^{n} µi, Σ_{i=1}^{n} σi²).    (72)
- Xi ∼ Cauchy(mi, bi):
    X1 + · · · + Xn ∼ Cauchy(Σ_{i=1}^{n} mi, Σ_{i=1}^{n} bi).    (73)
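Equation (68) can be verified exactly for small parameters by convolving the two binomial pmfs as in (60); a Python sketch (m1 = 2, m2 = 3, p = 1/3 are hypothetical choices):

```python
from fractions import Fraction
from math import comb

# Convolving Bin(m1, p) with Bin(m2, p) should give Bin(m1 + m2, p).
p = Fraction(1, 3)
m1, m2 = 2, 3

def binom_pmf(m, k):
    return comb(m, k) * p**k * (1 - p)**(m - k)

for z in range(m1 + m2 + 1):
    # Discrete convolution (60): pZ(z) = Σ_k pX1(k) pX2(z − k).
    conv = sum(binom_pmf(m1, k) * binom_pmf(m2, z - k)
               for k in range(max(0, z - m2), min(m1, z) + 1))
    assert conv == binom_pmf(m1 + m2, z)   # exact match, term by term
```

The identity behind the match is Vandermonde's convolution of binomial coefficients.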

Bounds

- Markov inequality: If X is nonnegative (P(X ≥ 0) = 1) and µX = E[X] is defined, then
    P(X ≥ a) ≤ µX / a,  a > 0.    (74)
- Chebyshev inequality: If the mean µX and variance σX² of X are defined, then
    P(|X − µX| ≥ a) ≤ σX² / a²,  a > 0.    (75)
- Jensen inequality: If h : SX → R is well-defined and convex then
    E[h(X)] ≥ h(E[X]).    (76)
- Cauchy–Schwarz inequality: Let X, Y be random variables with finite second moments. Then
    E²[XY] ≤ E[X²] E[Y²].

Approximations and Limit Theorems

WARNING! The following should only be used with large n.

- Normal approximation to the Binomial: Let X ∼ Bin(n, p). If both np ≥ 10 and n(1 − p) ≥ 10, then for large n it holds
    P(X ≤ k) ≈ Φ((k − np) / √(np(1 − p))).    (77)
- Poisson approximation to the Binomial: Let X ∼ Bin(n, p). If np < 10, then for large n it holds
    P(X ≤ k) ≈ P(Y ≤ k),  Y ∼ Poisson(np).    (78)
  (The case n(1 − p) < 10 is analogous.)

Let X1, X2, ... be iid rv with µ = E[Xi] and σ² = Var(Xi). Define
    Sn = X1 + · · · + Xn,  X̄n = Sn / n.    (79)

- Law of large numbers:
    X̄n → µ.    (80)
  (a) Weak Law of Large Numbers:
      lim_{n→∞} P(|X̄n − µ| > ε) = 0,  ∀ε > 0.    (81)
  (b) Strong Law of Large Numbers:
      P(lim_{n→∞} X̄n = µ) = 1.    (82)
- Central limit theorem:
  • Version for the sum Sn:
      lim_{n→∞} P((Sn − nµ)/(σ√n) ≤ x) = Φ(x),    (83)
    where Φ(x) is the cdf of the standard normal distribution.
  • Version for the sample mean X̄n:
      lim_{n→∞} P((X̄n − µ)/(σ/√n) ≤ x) = Φ(x).    (84)
- Lévy's continuity theorem: Let X, X1, X2, ... be a collection of random variables with characteristic functions φX, φX1, φX2, ... If φXn → φX and φX is continuous at 0, then
    Xn →d X.    (85)

Probability Generating Function (Optional)

- Definition: If N is non-negative and integer-valued, let GN : [0, 1] → [0, 1] be defined as
    GN(z) = E[z^N] = Σ_{k=0}^{∞} z^k P(N = k).    (86)
- Factorial moments:
    (d^k/dz^k) GN(z) = E[(N!/(N − k)!) z^{N−k}].    (87)
- Inversion formula:
    (1/k!) (d^k/dz^k) GN(z) |_{z=0} = P(N = k).    (88)

Non Negative Random Variables (Optional)

All definitions below correspond to a nonnegative random variable X.

- Expectation as an Integrated Tail: Let X be a nonnegative random variable with finite expected value.
  • If X has an absolutely continuous distribution then
      E[X] = ∫_{0}^{∞} P(X > x) dx.    (89)
  • If X has a discrete distribution supported on N ∪ {0}, then
      E[X] = Σ_{k=1}^{∞} P(X ≥ k).    (90)

All definitions below correspond to a nonnegative random variable X having absolutely continuous distribution FX and density fX(x).

- Survival Function (also tail probability):
    SX(x) = P(X > x) = 1 − FX(x),  x > 0.    (91)
- Hazard Rate Function (also force of mortality or failure rate):
    λX(x) = fX(x) / SX(x),  x > 0.    (92)
- Relationship between SX(x) and λX(x):
  • λX(x) = −(d/dx) ln SX(x),  x > 0.    (93)
  • SX(x) = exp(−∫_{0}^{x} λX(t) dt),  x > 0.    (94)
- Mean Excess Function:
    eX(x) = E[X − x | X > x],  x > 0.    (95)

Heavy-tailed distributions (Optional)

- A nonnegative distribution F is called heavy-tailed iff
    lim_{x→∞} (1 − F(x)) / e^{−θx} = ∞,  ∀θ > 0.    (96)
  Otherwise F is light-tailed.
- Equivalently, a nonnegative distribution F is heavy-tailed iff
    ∫_{0}^{∞} e^{θx} dF(x) = ∞,  ∀θ > 0.    (97)
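The normal approximation to the binomial (77) can be compared against the exact binomial cdf using only the standard library; a Python sketch (n = 100, p = 0.5, k = 55 are hypothetical choices satisfying np ≥ 10 and n(1 − p) ≥ 10):

```python
from math import comb, sqrt
from statistics import NormalDist

# Exact P(X ≤ k) for X ∼ Bin(n, p), versus the approximation (77).
n, p, k = 100, 0.5, 55
exact = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))
approx = NormalDist().cdf((k - n * p) / sqrt(n * p * (1 - p)))

assert abs(exact - approx) < 0.05   # close, but not identical, at n = 100
```

A continuity correction (replacing k by k + 1/2 in (77)) would tighten the agreement further.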

Summary of Important Univariate Distributions

Discrete Distributions

- Bernoulli with parameter p.
  • X ∼ Ber(p), p ∈ (0, 1).
      P(X = k) = p^k (1 − p)^{1−k},  k ∈ {0, 1}.    (98)
  • E[X] = p, Var[X] = p(1 − p).
  • G(z) = 1 − p + pz, M(θ) = 1 − p + pe^θ.
- Binomial with parameters n and p. (Here C(n, k) = n!/(k!(n − k)!) denotes the binomial coefficient.)
  • X ∼ Bin(n, p), n ∈ N, p ∈ (0, 1).
      P(X = k) = C(n, k) p^k (1 − p)^{n−k},  k ∈ {0, 1, ..., n}.    (99)
  • E[X] = np, Var[X] = np(1 − p).
  • G(z) = (1 − p + pz)^n, M(θ) = (1 − p + pe^θ)^n.
- Geometric with parameter p.
  Parametrization A: number of trials needed to observe a success.
  • X ∼ Geo(p), p ∈ (0, 1).
      P(X = k) = p(1 − p)^{k−1},  k ∈ N.    (100)
  • E[X] = 1/p, Var[X] = (1 − p)/p².
  • G(z) = pz/(1 − (1 − p)z), M(θ) = pe^θ/(1 − (1 − p)e^θ).
  Parametrization B: number of failures before a success.
  • X ∼ Geo(p), p ∈ (0, 1).
      P(X = k) = p(1 − p)^k,  k ∈ {0} ∪ N.    (101)
  • E[X] = (1 − p)/p, Var[X] = (1 − p)/p².
  • G(z) = p/(1 − (1 − p)z), M(θ) = p/(1 − (1 − p)e^θ).
- Poisson with parameter λ.
  • X ∼ Poisson(λ), λ > 0.
      P(X = k) = (λ^k / k!) e^{−λ},  k ∈ {0} ∪ N.    (102)
  • E[X] = λ, Var[X] = λ.
  • G(z) = e^{−λ(1−z)}, M(θ) = e^{−λ(1−e^θ)}.
- Hypergeometric with parameters n, N and r.
  • X ∼ Hyp(n, N, r), n, N, r ∈ N, N ≥ max{n, r}.
      P(X = k) = C(r, k) C(N − r, n − k) / C(N, n),  k ∈ {0, 1, ..., min{n, r}}.    (103)
  • E[X] = n(r/N), Var[X] = n(r/N)((N − r)/N)((N − n)/(N − 1)).
- Negative Binomial with parameters r and p.
  • X ∼ NB(r, p), r ∈ N, p ∈ (0, 1).
      P(X = k) = C(k + r − 1, k) p^k (1 − p)^r,  k = 0, 1, ...    (104)
  • E[X] = pr/(1 − p), Var[X] = pr/(1 − p)².
  • G(z) = ((1 − p)/(1 − pz))^r, M(θ) = ((1 − p)/(1 − pe^θ))^r.

Continuous Distributions

- Uniform. X ∼ U(a, b), a, b ∈ R with a < b.
    f(x) = I(a,b)(x) / (b − a),    (105)
    F(x) = (x − a)/(b − a),  x ∈ (a, b).    (106)
  E[X] = (a + b)/2, Var[X] = (b − a)²/12, M(θ) = (e^{bθ} − e^{aθ})/(θ(b − a)).
- Exponential. X ∼ Exp(λ), λ > 0.
    f(x) = λ e^{−λx} I(0,∞)(x),    (107)
    F(x) = [1 − e^{−λx}] I(0,∞)(x).    (108)
  E[X] = 1/λ, Var[X] = 1/λ², M(θ) = λ/(λ − θ).
- Gamma. X ∼ Gamma(α, β), α > 0, β > 0.
    f(x) = (β^α x^{α−1} e^{−βx} / Γ(α)) I(0,∞)(x).    (109)
  E[X] = α/β, Var[X] = α/β², M(θ) = (β/(β − θ))^α.
- Gaussian or Normal. X ∼ N(µ, σ²), µ ∈ R, σ > 0.
    f(x) = (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²)).    (110)
  E[X] = µ, Var[X] = σ², M(θ) = e^{θµ + σ²θ²/2}.
- Cauchy. X ∼ Cauchy(m, b), m ∈ R, b > 0.
    f(x) = b / (π(b² + (x − m)²)),    (111)
    F(x) = 1/2 + (1/π) arctan((x − m)/b).    (112)
  E[X] and Var[X] not defined. Characteristic function φ(ω) = e^{iωm − b|ω|}.
- Pareto. X ∼ Pareto(α, λ, c), with α > 0, λ > 0 and c ∈ R.
    f(x) = (α λ^α / (λ + x − c)^{α+1}) I(c,∞)(x),    (113)
    F(x) = [1 − λ^α/(λ + x − c)^α] I(c,∞)(x).    (114)
  E[X] = λ/(α − 1) + c, α > 1; Var[X] = λ²α/((α − 1)²(α − 2)), α > 2.
- Lognormal. X ∼ Lognormal(µ, σ²), µ ∈ R, σ > 0.
    f(x) = (1/(x√(2π) σ)) exp(−(log x − µ)²/(2σ²)) I(0,∞)(x),    (115)
    F(x) = Φ((log x − µ)/σ) I(0,∞)(x).    (116)
  E[X] = e^{µ+σ²/2}, Var[X] = (e^{σ²} − 1) e^{2µ+σ²}.
- Weibull. X ∼ Weibull(k, λ), λ > 0, k > 0.
    f(x) = k λ^k x^{k−1} e^{−(λx)^k} I(0,∞)(x),    (117)
    F(x) = [1 − e^{−(λx)^k}] I(0,∞)(x).    (118)
  E[X] = Γ(1 + 1/k)/λ, Var[X] = [Γ(1 + 2/k) − Γ²(1 + 1/k)]/λ².
- Beta. X ∼ Beta(α, β), α > 0, β > 0.
    f(x) = (x^{α−1} (1 − x)^{β−1} / B(α, β)) I(0,1)(x),  B(α, β) = Γ(α)Γ(β)/Γ(α + β).    (119)
  E[X] = α/(α + β), Var[X] = αβ/((α + β)²(α + β + 1)).
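The tabulated moments can be cross-checked against truncated series; a Python sketch for the Geometric distribution, parametrization A (p = 0.25 is a hypothetical choice):

```python
# Geometric, parametrization A: P(X = k) = p(1 − p)^{k−1}, k = 1, 2, ...
# Its mean is E[X] = 1/p; we check this with a long truncated sum.
p = 0.25
K = 10_000   # truncation point; the neglected tail is astronomically small

total_mass = sum(p * (1 - p) ** (k - 1) for k in range(1, K))
mean = sum(k * p * (1 - p) ** (k - 1) for k in range(1, K))

assert abs(total_mass - 1.0) < 1e-12   # pmf sums to 1
assert abs(mean - 1 / p) < 1e-9        # E[X] = 1/p = 4
```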

Multivariate Distributions

Discrete Distributions

- Multinomial. X ∼ Mult(n, p1, ..., pk), n ∈ N, pj > 0, Σ_{i=1}^{k} pi = 1.
    P(X = x) = (n! / (x1! x2! ... xk!)) p1^{x1} p2^{x2} ... pk^{xk},  Σ_{i=1}^{k} xi = n, xi ∈ {0} ∪ N.    (120)
  E[Xi] = npi, Var[Xi] = npi(1 − pi), Cov(Xi, Xj) = −npi pj.
  M(θ) = (Σ_{i=1}^{k} pi e^{θi})^n.

Continuous Distributions

- Dirichlet. X ∼ Dirichlet(α1, α2, ..., αn), αi > 0.
    f(x) = (Π_{i=1}^{n} xi^{αi−1} I(0,1)(xi)) / B(α1, ..., αn),  B(α1, ..., αn) = Π_{i=1}^{n} Γ(αi) / Γ(Σ_{i=1}^{n} αi).    (121)
  E[Xi] = αi / Σ_{k=1}^{n} αk, Var[Xi] = E[Xi](1 − E[Xi]) / (1 + Σ_{i=1}^{n} αi), Cov(Xi, Xj) = −E[Xi]E[Xj] / (1 + Σ_{i=1}^{n} αi).
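The multinomial moments can be verified by brute-force enumeration of the support for a small n; a Python sketch (n = 3 and the probability vector (1/2, 1/3, 1/6) are hypothetical choices):

```python
from fractions import Fraction
from itertools import product
from math import factorial

# Hypothetical Mult(n; p1, p2, p3) with small n, so the support can be enumerated.
n = 3
p = (Fraction(1, 2), Fraction(1, 3), Fraction(1, 6))

def pmf(x):
    coef = factorial(n) // (factorial(x[0]) * factorial(x[1]) * factorial(x[2]))
    return coef * p[0]**x[0] * p[1]**x[1] * p[2]**x[2]

support = [x for x in product(range(n + 1), repeat=3) if sum(x) == n]
assert sum(pmf(x) for x in support) == 1      # pmf (120) sums to 1

E1 = sum(x[0] * pmf(x) for x in support)      # E[X1]
E12 = sum(x[0] * x[1] * pmf(x) for x in support)
assert E1 == n * p[0]                          # E[Xi] = n pi
assert E12 - E1 * (n * p[1]) == -n * p[0] * p[1]   # Cov(Xi, Xj) = −n pi pj
```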
Multivariate Gaussian Distribution

- Definition: Let Z1, ..., Zm be iid rv with common distribution N(0, 1), and set Z = (Z1, ..., Zm)′. Take µ ∈ R^{n×1} and A ∈ R^{n×m}. Define
    X = µ + AZ.    (122)
  We say that X has a multivariate Gaussian distribution of dimension n with mean vector µ and covariance matrix Σ = AA′. We denote:
    X ∼ Nn(µ, Σ).    (123)
  • Density function. If det(Σ) ≠ 0 then the joint density exists:
      fX(x) = (1/((2π)^{n/2} |Σ|^{1/2})) exp(−(x − µ)′Σ⁻¹(x − µ)/2).    (124)
  • Joint Moment Generating Function: If X ∼ Nn(µ, Σ), then
      MX(t) = e^{⟨t,µ⟩ + ⟨t,Σt⟩/2},  t ∈ R^{n×1},    (125)
    where ⟨x, y⟩ denotes the inner product in Rⁿ.
- Mean, variance, covariance: If X ∼ Nn(µ, Σ), then
    E[Xi] = µi, Var[Xi] = Σii, Cov(Xi, Xj) = Σij.    (126)
  If (X1, X2) ∼ N2(µ, Σ), then
    X1, X2 are independent ⇐⇒ Cov(X1, X2) = 0.    (127)
- Transformations. If X ∼ Nn(µ, Σ), then
    X1 + · · · + Xn ∼ N(Σ_{i=1}^{n} µi, Σ_{i=1}^{n} Σ_{j=1}^{n} Cov(Xi, Xj)).    (128)
  • Convolution. If X1 ∼ Nd(µ1, Σ1) and X2 ∼ Nd(µ2, Σ2) are independent, then
      X1 + X2 ∼ Nd(µ1 + µ2, Σ1 + Σ2).    (129)
  • Linear transformations. If X1 ∼ Nd(µ1, Σ1), A ∈ R^{k×d} and u ∈ R^k, then
      AX1 + u ∼ Nk(Aµ1 + u, AΣ1A′).    (130)
    In particular,
      u′X1 ∼ N(u′µ1, u′Σ1u).    (131)
  • Transformation to the Standard Normal. Let Σ1 = U⁻¹ΛU be the eigenvalue decomposition of Σ1. Then
      Λ^{−1/2} U (X1 − µ1) ∼ Nd(0, I).    (132)
- Marginal and Conditional Distributions. Partition X1 into XA ∈ R^r and XB ∈ R^s with r + s = d:
    µ1 = (µA, µB)′,  Σ1 = [ΣAA  ΣAB; ΣBA  ΣBB].
  • Marginal distributions:
      XA ∼ Nr(µA, ΣAA),    (133)
      XB ∼ Ns(µB, ΣBB).    (134)
  • Conditional distributions:
      XA|XB ∼ Nr(µA|B, ΣA|B),    (135)
    where
      µA|B = µA + ΣAB ΣBB⁻¹ (XB − µB),    (136)
      ΣA|B = ΣAA − ΣAB ΣBB⁻¹ ΣBA.    (137)
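For a diagonal covariance matrix, the joint density (124) factorizes into a product of univariate normal densities, which illustrates the independence statement (127); a Python sketch with a hypothetical 2-dimensional example:

```python
from math import exp, pi, sqrt
from statistics import NormalDist

# Hypothetical N2(µ, Σ) with Σ = diag(4, 9), so Cov(X1, X2) = 0.
mu = (1.0, -2.0)
var = (4.0, 9.0)

def joint_pdf(x1, x2):
    # Eq. (124) specialised to a diagonal Σ: |Σ| = var[0]*var[1], Σ⁻¹ diagonal.
    quad = (x1 - mu[0])**2 / var[0] + (x2 - mu[1])**2 / var[1]
    return exp(-quad / 2) / (2 * pi * sqrt(var[0] * var[1]))

x = (0.5, 0.3)
prod = (NormalDist(mu[0], sqrt(var[0])).pdf(x[0])
        * NormalDist(mu[1], sqrt(var[1])).pdf(x[1]))

# Zero covariance here means the joint density equals the product of marginals.
assert abs(joint_pdf(*x) - prod) < 1e-12
```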

Normal distribution

    Φ(x) = (1/√(2π)) ∫_{−∞}^{x} exp(−t²/2) dt
x .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
3.1 0.9990 0.9991 0.9991 0.9991 0.9992 0.9992 0.9992 0.9992 0.9993 0.9993
3.2 0.9993 0.9993 0.9994 0.9994 0.9994 0.9994 0.9994 0.9995 0.9995 0.9995
3.3 0.9995 0.9995 0.9995 0.9996 0.9996 0.9996 0.9996 0.9996 0.9996 0.9997
3.4 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9997 0.9998
3.5 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998 0.9998
3.6 0.9998 0.9998 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999 0.9999
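The table entries can be reproduced with the standard library's NormalDist; a brief Python check:

```python
from statistics import NormalDist

# Φ(x): cdf of the standard normal distribution, as tabulated above.
Phi = NormalDist().cdf

assert round(Phi(0.00), 4) == 0.5000
assert round(Phi(1.00), 4) == 0.8413
assert round(Phi(1.96), 4) == 0.9750
assert round(Phi(3.00), 4) == 0.9987
```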
