
Statistical Inference

Extra Notes and Exercises

Igor N Litvine

“And the first Morning of Creation wrote / What the Last Dawn of Reckoning shall read.”
Omar Khayyam

June 21, 2021


Contents

1 Module Summary
  1.1 Module Time-line

2 Exercises
  2.1 Exercises for chapter 2
  2.2 Exercises for chapter 3
  2.3 Exercises for chapter 4
  2.4 Exercises for chapter 5
  2.5 Exercises for chapter 6
  2.6 Exercises for chapter 7 and 8
  2.7 Exercises for chapter 10 and 11
  2.8 Exercises for chapter 13
  2.9 Exercises for chapter 14
  2.10 Exercises for chapter 15

3 Solutions
  3.1 Solutions for chapter 2
  3.2 Solutions to exercises for chapter 3
  3.3 Solutions to exercises for chapter 4
  3.4 Solutions to exercises for chapter 5
  3.5 Solutions to exercises for chapter 6
  3.6 Solutions to Exercises for chapter 7 and 8
  3.7 Solutions to Exercises for chapter 10 and 11
  3.8 Solutions to Exercises for chapter 13
  3.9 Solutions to Exercises for chapter 14

A Important Properties and Theorems from Elementary Probability
  A.1 Properties of population mean and variance
  A.2 Important Distributions

Chapter 1

Module Summary

The course consists of two parts:

• The Theory of Estimation.


• The Theory of Hypothesis Testing.

The content covers only the classical (also known as "frequentist") approach. The Bayesian approach
is covered in the next module.

In the philosophy of science there are two competing directions. The first is known as determinism.
This philosophy is based on the assumption that our world is essentially deterministic;
that is, the future is fully determined by the past. Therefore, since the past is already fixed, there is
only one possible future. In other words, if we know everything about a system at present, we can
(given sufficient knowledge and resources) calculate the future of the system exactly.

The extreme branch of determinism, known as fatalism, goes even further. Since the human brain
is also a system, everything that happens in our brains is fully predictable. Consequently, humans
have no free will, and everything they will do or think in the future is predetermined by the past.

More reading about determinism may be found at https://www.britannica.com/topic/determinism.

By contrast, indeterminism assumes that our world is essentially stochastic: some
processes are to a certain extent random and therefore cannot be predicted with absolute certainty.
As a result, a theory may only predict the probabilities of future scenarios, not an exact outcome.

More reading about indeterminism may be found at https://www.britannica.com/topic/indeterminism.

Statistical theory fully supports indeterminism and provides tools for making decisions
(or inferences) under uncertain, random circumstances. Every conclusion made by a statistician is
subject to error and is therefore accompanied by some measure of the degree to which the conclusion
is true.

1.1 Module Time-line
• June 25 - submission of the last assignment
• June 28 - submission of the draft Essay (optional)
• July 8 - submission of the Portfolio and Essay (as part of the portfolio)

Chapter 2

Exercises

2.1 Exercises for chapter 2


1. The following is an estimate of the population variance σ²:

   σ̂² = 1/(2(n−1)) · Σ_{i=1}^{n−1} (x_{i+1} − x_i)²

   (a) Prove that the above estimate is unbiased.

   (b) What is the advantage of this estimate compared to S²?

2. As an estimate for the variance the following statistic was used:

   C · Σ_{i=1}^{n−1} (x_{i+1} − x_i)²

   (a) If the estimate is unbiased, what is the value of the constant C?

   (b) What is the advantage of the suggested estimate compared to S²?

2.2 Exercises for chapter 3


1. Derive the estimate for the mode of a distribution applying the ECDF method.
2. Write (as a formula) the ECDF for the following sample:

1, 2, 4, 2, 5, 6, 8, 9, 3, 7, 3, 5, 2, 8, 7, 6

and sketch the graph of the ECDF.

2.3 Exercises for chapter 4


1. Prove that the CDF and the PDF (if it exists) of X_(n) are given by:

   F_n(x) = (F(x))^n

   and

   f_n(x) = n(F(x))^{n−1} f(x)

   where F(x) and f(x) are the population CDF and PDF respectively.

   Hint: The proof may be done in two different ways. The first is to use Theorem 4.1 of the lecture notes (page 12). The second is to use the following statement: the largest observation is smaller than a certain number x (X_(n) < x) if and only if all observations in the sample are smaller than this x.

2. Prove that the CDF and the PDF (if it exists) of X_(1) are given by:

   F_1(x) = 1 − (1 − F(x))^n

   and

   f_1(x) = n(1 − F(x))^{n−1} f(x)

   where F(x) and f(x) are the population CDF and PDF respectively.

   Hint: The proof may be done in two different ways. The first is to use Theorem 4.1 of the lecture notes (page 12); in this case you should also use the binomial formula (a + b)^n = a^n + n a^{n−1} b + ... + b^n. The second is to use the following statement: the smallest observation is greater than a certain number x (X_(1) > x) if and only if all observations in the sample are greater than this x.

3. Let X have a Uniform(0, θ) distribution. Find E(X_(n)) and suggest an unbiased estimate for θ.

2.4 Exercises for chapter 5


1. Suppose X₁, ..., Xₙ is a random sample from a Uniform(0, θ) distribution, for which θ is unknown.

   (a) Find the distribution of X_(n).
   (b) Find the mean and variance of X_(n).
   (c) Prove that X_(n) is a biased estimate of θ.
   (d) Prove that X_(n) is a consistent estimate of θ.

   Hint: Use the Beta distribution (see Appendix of the lecture notes).

2.5 Exercises for chapter 6


1. Suppose X₁, ..., Xₙ is a random sample from a Poisson(θ) distribution, for which θ is unknown.

   (a) Determine the ML estimate of θ, assuming that at least one of the observed values is different from 0. Show that the ML estimate does not exist if every observed value is 0.
   (b) Determine the ML estimate of the standard deviation of the distribution. Hint: Use Theorem 6.2.

2.6 Exercises for chapter 7 and 8
1. Prove that T = Π_{i=1}^n x_i is a sufficient statistic for the parameter α of the Gamma(α, β) distribution (β is assumed to be known).

2. Prove that T = Σ_{i=1}^n x_i is a sufficient statistic for the parameter β of the Gamma(α, β) distribution (α is assumed to be known).

3. Find the method of moments estimates of the parameters α and β of the Beta(α, β) distribution.

2.7 Exercises for chapter 10 and 11


1. Let a random variable X be distributed uniformly on the interval (θ − 1, θ + 1), where θ is unknown. The hypotheses on θ are:

   H₀: θ = 3
   H₁: θ = 4

   A single observation X is to be taken from the distribution. Let the rejection rule be: reject H₀ if X > 3.6.

   (a) Derive the power function and sketch its plot. Indicate clearly on the plot: the type I error α, the type II error β, and the power of the test.
   (b) Using the power function and assuming that n = 3, compute the type I error α, the type II error β, and the power of the test.
   (c) Suppose one needs to change the critical value (which was 3.6) to a value that guarantees α = 0. What condition should the new critical value satisfy?

2. Suppose that x1 , ..., xn form a random sample from an Exponential distribution with parameter
λ, which is unknown. The hypotheses on λ are:
H0 : λ = λ0
H1 : λ = λ1

Prove that the value of 2α + 3β is minimised by a test procedure which rejects H0 when x̄ > K
and derive the formula for K.

2.8 Exercises for chapter 13


1. Refer to the example in section 13.3 (page 60) of the lecture notes. Perform the test of randomness for the given data using "runs up and down" (as suggested at the end of the section).

2.9 Exercises for chapter 14
The following table summarises observations on a discrete random variable X (f is the frequency):

   x:  0   1   2   3   4 and above
   f:  8  10  11   8  11

1. Perform Pearson's goodness of fit test of the hypothesis that X follows a Poisson distribution with parameter λ = 2. Significance level α = 0.4.

2. What can you say about the inference error?

2.10 Exercises for chapter 15


1. A researcher was interested in whether the age of a person impacts the risk of severe COVID-19 infection. The following data (table 2.1) was collected:

   age group / ward type               0-18   19-59   60+   total
   admitted to general hospital ward     23      36    47
   admitted to an ICU ward               15      34    42
   totals

   Table 2.1: Contingency table: age and COVID-19

   Perform Pearson's test of independence to check whether age and ward type are dependent variables (α = 0.05).
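Since chapter 3 does not contain a worked solution for this exercise, a computational sketch for checking a hand calculation is given below; it is only a sketch and assumes SciPy is available.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Observed counts from table 2.1 (rows: general ward, ICU; columns: 0-18, 19-59, 60+)
    observed = np.array([[23, 36, 47],
                         [15, 34, 42]])

    stat, pvalue, dof, expected = chi2_contingency(observed)
    print(stat, dof, pvalue)   # reject independence at alpha = 0.05 if pvalue < 0.05
    print(expected)            # expected counts under the independence hypothesis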

Chapter 3

Solutions

3.1 Solutions for chapter 2


1. The following is an estimate of the population variance σ²:

   σ̂² = 1/(2(n−1)) · Σ_{i=1}^{n−1} (x_{i+1} − x_i)²

   (a) Prove that the above estimate is unbiased.

   (b) What is the advantage of this estimate compared to S²?

   Solution.

   (a) Let

   σ̂² = C · Σ_{i=1}^{n−1} (X_{i+1} − X_i)²

   and let E[X] = µ and V[X] = σ², where X₁, ..., Xₙ is a random sample. Then

   E[X_{i+1} − X_i] = E[X_{i+1}] − E[X_i] = µ − µ = 0

   V[X_{i+1} − X_i] = V[X_{i+1}] + V[X_i] = σ² + σ² = 2σ² (by independence)

   It is easy to see that E[(X_{i+1} − X_i)²] = V(X_{i+1} − X_i) + (E(X_{i+1} − X_i))² = 2σ² + 0 = 2σ².

   For σ̂² to be unbiased we need E[σ̂²] = σ², so

   E[C · Σ_{i=1}^{n−1} (X_{i+1} − X_i)²] = σ²
   ⇒ C · Σ_{i=1}^{n−1} E[(X_{i+1} − X_i)²] = σ²
   ⇒ C · Σ_{i=1}^{n−1} 2σ² = σ²
   ⇒ C(n − 1) · 2σ² = σ²
   ⇒ C = 1/(2(n−1))

   Thus, in order for σ̂² to be unbiased, C = 1/(2(n−1)).

(b) This estimate does not use the sample mean x̄ (nor the population mean µ), so, unlike S², it is not affected by a slow drift (trend) in the mean of the observations.
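A quick Monte Carlo check of part (a), given as a minimal sketch that assumes NumPy is available: averaged over many samples, the estimate should be close to the true σ².

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps, sigma2 = 50, 20_000, 4.0

    estimates = np.empty(reps)
    for r in range(reps):
        x = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=n)
        d = np.diff(x)                                # successive differences x_{i+1} - x_i
        estimates[r] = (d ** 2).sum() / (2 * (n - 1))

    print(estimates.mean())   # should be close to sigma2 = 4.0 if the estimate is unbiased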

3.2 Solutions to exercises for chapter 3


1. Derive the estimate for the mode of a distribution applying the ECDF method.
Solution (for a continuous population). We know that the CDF F(x) and the PDF f(x) of a continuous distribution are connected as follows:

   F'(x) = f(x)

We also know that a mode is a point where the PDF is highest:

   x₀ = mode(X), if f(x₀) = max_{−∞<x<+∞} f(x)

Now, from this course we know:

• the ECDF is a close approximation to the true CDF;

• the ECDF is a step function with a fixed jump of 1/n.

The points of the jumps are the observations x_i. Therefore (assuming the x_i are ordered), the estimate for the mode will be where the difference x_{i+1} − x_i is smallest: this is where the slope of the ECDF is steepest!

So we can suggest as an estimate for the mode the x_i for which x_{i+1} − x_i is smallest (equally, we may use x_{i+1} or the average of the two).
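The suggested estimate translates directly into code; the sketch below assumes NumPy is available and that the sample has no heavy ties (a tie would make the smallest gap zero).

    import numpy as np

    def ecdf_mode_estimate(sample):
        """Estimate the mode as the midpoint of the smallest gap between
        consecutive order statistics (the steepest slope of the ECDF)."""
        x = np.sort(np.asarray(sample, dtype=float))
        gaps = np.diff(x)                  # x_{i+1} - x_i for the ordered sample
        i = np.argmin(gaps)                # index of the smallest gap
        return (x[i] + x[i + 1]) / 2       # average of the two endpoints

    sample = np.random.default_rng(1).normal(loc=5.0, scale=1.0, size=500)
    print(ecdf_mode_estimate(sample))      # should be near the true mode, 5.0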

2. Write (as a formula) the ECDF for the following sample:

1, 2, 4, 2, 5, 6, 8, 9, 3, 7, 3, 5, 2, 8, 7, 6

and sketch the graph of the ECDF.

Solution.

Firstly, we order the sample:

1, 2, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9

Then the ECDF may be written as:

   F_n(x) = 0      for x < 1
            1/16   for 1 ≤ x < 2
            1/4    for 2 ≤ x < 3
            3/8    for 3 ≤ x < 4
            7/16   for 4 ≤ x < 5
            9/16   for 5 ≤ x < 6
            11/16  for 6 ≤ x < 7
            13/16  for 7 ≤ x < 8
            15/16  for 8 ≤ x < 9
            1      for x ≥ 9

The graph of the ECDF is presented in figure 3.1.

Figure 3.1: ECDF plot
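The ECDF and its step plot can be reproduced in a few lines; this is a minimal sketch assuming NumPy and Matplotlib are available.

    import numpy as np
    import matplotlib.pyplot as plt

    sample = [1, 2, 4, 2, 5, 6, 8, 9, 3, 7, 3, 5, 2, 8, 7, 6]
    x = np.sort(sample)
    y = np.arange(1, len(x) + 1) / len(x)   # ECDF heights 1/16, 2/16, ..., 1

    plt.step(x, y, where="post")            # right-continuous step function
    plt.xlabel("x")
    plt.ylabel("F_n(x)")
    plt.title("ECDF")
    plt.show()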

3.3 Solutions to exercises for chapter 4


1. Prove that the CDF and the PDF (if it exists) of X_(n) are given by:

   F_n(x) = (F(x))^n

   and

   f_n(x) = n(F(x))^{n−1} f(x)

   where F(x) and f(x) are the population CDF and PDF respectively.

Solution.

We have:

   F_n(x) = P(X_(n) < x) = P(all x_i < x) = P(x₁ < x, ..., xₙ < x) =

(since the sample is iid)

   = P(x₁ < x) ⋯ P(xₙ < x) = F(x) ⋯ F(x) = (F(x))^n

Differentiating the above we have:

   f_n(x) = F_n'(x) = n(F(x))^{n−1} F'(x) = n(F(x))^{n−1} f(x)

2. Prove that the CDF and the PDF (if it exists) of X_(1) are given by:

   F_1(x) = 1 − (1 − F(x))^n

   and

   f_1(x) = n(1 − F(x))^{n−1} f(x)

   where F(x) and f(x) are the population CDF and PDF respectively.

Solution.

We shall prove this using Theorem 4.1. For the smallest order statistic X_(1) we have:

   F_1(x) = Σ_{j=1}^{n} C(n, j) (F(x))^j (1 − F(x))^{n−j}

where C(n, j) denotes the binomial coefficient. We add and subtract (1 − F(x))^n, which is exactly the missing j = 0 term of the binomial sum:

   F_1(x) = Σ_{j=0}^{n} C(n, j) (F(x))^j (1 − F(x))^{n−j} − (1 − F(x))^n

Now we employ the binomial formula:

   F_1(x) = (F(x) + (1 − F(x)))^n − (1 − F(x))^n = 1 − (1 − F(x))^n

The above proves the first part. Differentiating F_1(x) we get the required expression for f_1(x).

3. Let X have a Uniform(0, θ) distribution. Find E(X_(n)) and suggest an unbiased estimate for θ.
Solution.

The PDF of the Uniform(0, θ) distribution is:

   f(x) = 1/θ for 0 < x < θ, and 0 otherwise.

Consequently, the CDF of the Uniform(0, θ) distribution is:

   F(x) = 0 for x ≤ 0;  x/θ for 0 < x < θ;  1 for x ≥ θ.

From corollary 4.1 we have:

   f_n(x) = n(F(x))^{n−1} f(x) = n (x/θ)^{n−1} (1/θ) = n x^{n−1}/θ^n for 0 < x < θ, and 0 otherwise.

Therefore, the expected value E(X_(n)) is:

   E(X_(n)) = ∫₀^θ x f_n(x) dx = ∫₀^θ n x^n/θ^n dx = (n/θ^n) · θ^{n+1}/(n+1) = nθ/(n+1)

Therefore, we can set

   θ̂ = ((n+1)/n) X_(n)

which is an unbiased estimate of θ.
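A short simulation confirming the bias correction, a sketch assuming NumPy is available: the raw maximum underestimates θ, while ((n+1)/n)X_(n) is on target on average.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 7.0, 10, 20_000

    samples = rng.uniform(0.0, theta, size=(reps, n))
    maxima = samples.max(axis=1)             # X_(n) for each replication

    print(maxima.mean())                     # ~ n*theta/(n+1) = 6.36..., biased low
    print(((n + 1) / n * maxima).mean())     # ~ theta = 7.0 after the correction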

3.4 Solutions to exercises for chapter 5


1. Suppose X₁, ..., Xₙ is a random sample from a Uniform(0, θ) distribution, for which θ is unknown.

   (a) Find the distribution of X_(n).
   (b) Find the mean and variance of X_(n).
   (c) Prove that X_(n) is a biased estimate of θ.
   (d) Prove that X_(n) is a consistent estimate of θ.

   Hint: Use the Beta distribution (see Appendix of the lecture notes).
Solution.

(a) As was shown in the previous question, the PDF of X_(n) is:

   f_n(x) = n x^{n−1}/θ^n for 0 < x < θ, and 0 otherwise.

Important note: the above distribution is not a Beta distribution, because in a Beta distribution 0 < x < 1. We need to apply a simple transformation to convert it to a Beta distribution.

Consider the random variable Y = X_(n)/θ. The PDF of Y is:

   f_Y(y) = θ f_n(θy) = θ · n (θy)^{n−1}/θ^n = n y^{n−1} for 0 < y < 1, and 0 otherwise.

Matching the above with the Beta distribution, we see that Y is distributed as Beta(n, 1).

(b) Therefore:

   E(Y) = n/(n+1)  and  V(Y) = n/((n+1)²(n+2))

Consequently,

   E(X_(n)) = E(θY) = θE(Y) = nθ/(n+1)

and

   V(X_(n)) = V(θY) = θ²V(Y) = nθ²/((n+1)²(n+2))

(c) Since E(X_(n)) ≠ θ, the estimate is biased.

(d) Since E(X_(n)) → θ and V(X_(n)) → 0 (as n → ∞), we conclude that the estimate is consistent.
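Part (d) can also be illustrated numerically; the sketch below (assuming NumPy is available) shows the mean of X_(n) approaching θ and its variance shrinking as n grows.

    import numpy as np

    rng = np.random.default_rng(3)
    theta = 7.0

    # E(X_(n)) -> theta and V(X_(n)) -> 0 as n grows, so the estimate is consistent
    for n in (5, 50, 500, 5000):
        maxima = rng.uniform(0.0, theta, size=(5000, n)).max(axis=1)
        print(n, round(maxima.mean(), 4), round(maxima.var(), 6))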

3.5 Solutions to exercises for chapter 6


1. Suppose X1 , ..., Xn is a random sample from a Poisson(θ) distribution, for which θ is unknown.

(a) Determine the ML estimate of θ, assuming that at least one of the observed values is different from 0. Show that the ML estimate does not exist if every observed value is 0.

(b) Determine the ML estimate of the standard deviation of the distribution.

Solution. (a) The likelihood function is:

   L(θ) = Π_{i=1}^n e^{−θ} θ^{x_i} / x_i! = C e^{−nθ} θ^{Σx_i}

where

   C = Π_{i=1}^n 1/x_i!

To find the maximum we differentiate the above (as a function of θ):

   L'(θ) = −Cn e^{−nθ} θ^{Σx_i} + C e^{−nθ} (Σx_i) θ^{Σx_i − 1}

Equating the above to zero, we get:

   n θ^{Σx_i} = (Σx_i) θ^{Σx_i − 1}

or

   n = (Σx_i) θ^{−1}

and, assuming Σx_i ≠ 0:

   θ̂ = Σx_i / n = x̄

Obviously, if Σx_i = 0 we would get θ̂ = 0; however, the parameter of a Poisson distribution is always greater than zero. So, if all observations are zero, we cannot find an estimate for θ.

(b) For the Poisson distribution V(X) = θ, so s.d. = √θ. Therefore, using Theorem 6.2 (for positive θ and s.d. the square root is a one-to-one function), we have the ML estimate for the standard deviation:

   ŝ.d. = √(Σx_i / n) = √x̄
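The result θ̂ = x̄ can be verified by maximising the log-likelihood numerically; a sketch assuming NumPy and SciPy are available (the constant −Σ log x_i! is dropped since it does not depend on θ):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    x = rng.poisson(3.5, size=200)

    # Negative Poisson log-likelihood, up to a constant that does not depend on theta
    neg_loglik = lambda t: t * len(x) - x.sum() * np.log(t)
    res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")

    print(res.x, x.mean())     # the numerical MLE should match x-bar
    print(np.sqrt(x.mean()))   # ML estimate of the standard deviation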

3.6 Solutions to Exercises for chapter 7 and 8


1. Prove that T = Π_{i=1}^n x_i is a sufficient statistic for the parameter α of the Gamma(α, β) distribution (β is assumed to be known).

Solution. The likelihood function for the Gamma distribution (α is unknown):

   L(α) = Π_{i=1}^n (β^α/Γ(α)) x_i^{α−1} exp(−βx_i)

where all x_i > 0. The above may be written as:

   L(α) = (β^{αn}/Γ(α)^n) (Π_{i=1}^n x_i)^{α−1} exp(−β Σx_i)

Setting T = h(x₁, ..., xₙ) = Π_{i=1}^n x_i, we obtain the factorisation

   L(α) = V(h, α) W(x₁, ..., xₙ)

with

   V(h, α) = (β^{αn}/Γ(α)^n) h^{α−1}

   W(x₁, ..., xₙ) = exp(−β Σx_i) if all x_i > 0, and 0 otherwise.
2. Prove that T = Σ_{i=1}^n x_i is a sufficient statistic for the parameter β of the Gamma(α, β) distribution (α is assumed to be known).

Solution. The likelihood function for the Gamma distribution (β is unknown):

   L(β) = Π_{i=1}^n (β^α/Γ(α)) x_i^{α−1} exp(−βx_i)

where all x_i > 0. The above may be written as:

   L(β) = (β^{αn}/Γ(α)^n) (Π_{i=1}^n x_i)^{α−1} exp(−β h(x₁, ..., xₙ))

where T = h(x₁, ..., xₙ) = Σ_{i=1}^n x_i, and

   L(β) = V(h, β) W(x₁, ..., xₙ)

   V(h, β) = (β^{αn}/Γ(α)^n) exp(−βh)

   W(x₁, ..., xₙ) = (Π_{i=1}^n x_i)^{α−1} if all x_i > 0, and 0 otherwise.
3. Find the method of moments estimates of the parameters α and β of the Beta(α, β) distribution.

Solution. We know that for the Beta distribution (see Appendix A):

   E(X) = α/(α+β);  V(X) = αβ/((α+β)²(α+β+1))

Therefore, E(X²) = V(X) + (E(X))² = α(1+α)/((α+β)(1+α+β)).

We have two equations:

   α/(α+β) = (1/n) Σx_i

   α(1+α)/((α+β)(1+α+β)) = (1/n) Σx_i²

Solving the above for α and β we have:

   α̂ = Σx_i (Σx_i² − Σx_i) / ((Σx_i)² − n Σx_i²)

   β̂ = (n − Σx_i)(Σx_i² − Σx_i) / ((Σx_i)² − n Σx_i²)

NB: the easiest way to solve the equations is to define a new variable y = α + β; then the system becomes:

   α/y = (1/n) Σx_i

   α(1+α)/(y(1+y)) = (1/n) Σx_i²

From the first equation, α = (y/n) Σx_i. This may be substituted into the second equation to find y. Knowing y, we find α and then β.
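In terms of the sample moments m₁ = (1/n)Σx_i and m₂ = (1/n)Σx_i², the estimates above read α̂ = m₁(m₁ − m₂)/(m₂ − m₁²) and β̂ = (1 − m₁)(m₁ − m₂)/(m₂ − m₁²) (divide the numerators and denominators by n²), which is convenient for computation. A sketch assuming NumPy is available:

    import numpy as np

    def beta_mom(x):
        """Method of moments estimates for Beta(alpha, beta)."""
        x = np.asarray(x, dtype=float)
        m1 = x.mean()              # first sample moment
        m2 = (x ** 2).mean()       # second sample moment
        alpha = m1 * (m1 - m2) / (m2 - m1 ** 2)
        beta = (1 - m1) * (m1 - m2) / (m2 - m1 ** 2)
        return alpha, beta

    x = np.random.default_rng(5).beta(2.0, 5.0, size=10_000)
    print(beta_mom(x))             # should be near the true values (2.0, 5.0)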

3.7 Solutions to Exercises for chapter 10 and 11


1. Let a random variable X be distributed uniformly on the interval (θ − 1, θ + 1), where θ is unknown. The hypotheses on θ are:

   H₀: θ = 3
   H₁: θ = 4

   A single observation X is to be taken from the distribution. Let the rejection rule be: reject H₀ if X > 3.6.

   (a) Derive the power function and sketch its plot. Indicate clearly on the plot: the type I error α, the type II error β, and the power of the test.
   (b) Using the power function and assuming that n = 3, compute the type I error α, the type II error β, and the power of the test.
   (c) Suppose one needs to change the critical value (which was 3.6) to a value that guarantees α = 0. What condition should the new critical value satisfy?
2. Suppose that x1 , ..., xn form a random sample from an Exponential distribution with parameter
λ, which is unknown. The hypotheses on λ are:
H0 : λ = λ0
H1 : λ = λ1

Prove that the value of 2α + 3β is minimised by a test procedure which rejects H0 when x̄ > K
and derive the formula for K.

Solution.
The linear combination aα + bβ is minimised by the rejection rule:

   a f₀(x₁, ..., xₙ) < b f₁(x₁, ..., xₙ)

where here a = 2, b = 3, and:

   f₀(x₁, ..., xₙ) = Π λ₀ e^{−λ₀ x_i} = λ₀^n e^{−λ₀ n x̄}

   f₁(x₁, ..., xₙ) = Π λ₁ e^{−λ₁ x_i} = λ₁^n e^{−λ₁ n x̄}

So the rejection rule is:

   2 λ₀^n e^{−λ₀ n x̄} < 3 λ₁^n e^{−λ₁ n x̄}

   e^{(λ₁ − λ₀) n x̄} < 3λ₁^n / (2λ₀^n)

Taking the natural logarithm of both sides:

   (λ₁ − λ₀) n x̄ < log(3λ₁^n / (2λ₀^n))

For the rejection region to have the stated form x̄ > K, we must have λ₁ < λ₀ (under H₁ the mean 1/λ₁ is larger); dividing both sides by the negative quantity (λ₁ − λ₀)n reverses the inequality:

   x̄ > log(3λ₁^n / (2λ₀^n)) / ((λ₁ − λ₀) n)

So K is:

   K = log(3λ₁^n / (2λ₀^n)) / ((λ₁ − λ₀) n)

Solution ends.
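The critical value and the resulting error rates can be checked by simulation; the sketch below assumes NumPy and uses the hypothetical values λ₀ = 2, λ₁ = 1, n = 25 (note λ₁ < λ₀, so the rejection region is x̄ > K).

    import numpy as np

    def critical_value(lam0, lam1, n):
        """K for the rule: reject H0 when x-bar > K (requires lam1 < lam0)."""
        return np.log(3 * lam1 ** n / (2 * lam0 ** n)) / ((lam1 - lam0) * n)

    lam0, lam1, n = 2.0, 1.0, 25
    k = critical_value(lam0, lam1, n)

    rng = np.random.default_rng(6)
    xbar0 = rng.exponential(1 / lam0, size=(100_000, n)).mean(axis=1)  # samples under H0
    xbar1 = rng.exponential(1 / lam1, size=(100_000, n)).mean(axis=1)  # samples under H1

    alpha = (xbar0 > k).mean()    # type I error of the rule
    beta = (xbar1 <= k).mean()    # type II error of the rule
    print(k, alpha, beta, 2 * alpha + 3 * beta)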

3.8 Solutions to Exercises for chapter 13
1. Refer to the example in section 13.3 (page 60) of the lecture notes. Perform the test of randomness for the given data using "runs up and down" (as suggested at the end of the section).
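No worked solution is given here, but the mechanics of the test can be sketched in code. The normal approximation with E(R) = (2n − 1)/3 and V(R) = (16n − 29)/90 for the number R of runs up and down is the standard one; the data below is only a placeholder for the series from section 13.3, and SciPy is assumed to be available.

    import numpy as np
    from scipy.stats import norm

    def runs_up_down_test(x):
        """Runs-up-and-down randomness test via the normal approximation."""
        signs = np.sign(np.diff(x))       # +1 for a move up, -1 for a move down
        signs = signs[signs != 0]         # drop ties between neighbouring values
        r = 1 + np.count_nonzero(signs[1:] != signs[:-1])   # number of runs
        n = len(x)
        mean = (2 * n - 1) / 3
        var = (16 * n - 29) / 90
        z = (r - mean) / np.sqrt(var)
        return r, z, 2 * norm.sf(abs(z))  # runs, z-statistic, two-sided p-value

    data = [3, 7, 1, 8, 5, 9, 2, 6, 4, 8, 3, 7]   # placeholder: use the data from section 13.3
    print(runs_up_down_test(data))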

3.9 Solutions to Exercises for chapter 14


The following table summarises observations on a discrete random variable X (f is the frequency):

   x:  0   1   2   3   4 and above
   f:  8  10  11   8  11

1. Perform Pearson's goodness of fit test of the hypothesis that X follows a Poisson distribution with parameter λ = 2. Significance level α = 0.4.

2. What can you say about the inference error?
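No worked solution is given here; the following computational sketch (assuming SciPy is available) can be used to check a hand calculation. Since λ = 2 is fully specified, no parameter is estimated and the χ² statistic has 5 − 1 = 4 degrees of freedom.

    import numpy as np
    from scipy.stats import poisson, chisquare

    observed = np.array([8, 10, 11, 8, 11])   # cells: x = 0, 1, 2, 3, >= 4
    n = observed.sum()                        # 48 observations in total

    p = poisson.pmf([0, 1, 2, 3], 2.0)
    p = np.append(p, 1 - p.sum())             # P(X >= 4) as the last cell
    expected = n * p

    stat, pvalue = chisquare(observed, f_exp=expected)   # df = 5 - 1 = 4
    print(stat, pvalue)                       # reject H0 at level 0.4 if pvalue < 0.4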

Appendix A

Important Properties and Theorems from Elementary Probability

A.1 Properties of population mean and variance


Properties of mean.

• E(C) = C, where C is a non-random constant.


• E(X1 + X2 ) = E(X1 ) + E(X2 )
• E(CX) = CE(X), where C is a non-random constant.
• For any non-random a and b: E(aX + b) = aE(X) + b

Properties of variance.

• V(C) = 0, where C is a non-random constant.


• V (X1 + X2 ) = V (X1 ) + V (X2 ), if X1 and X2 are uncorrelated.
• V (CX) = C 2 V (X), where C is a non-random constant.
• For any non-random a and b: V (aX + b) = a2 V (X)

A.2 Important Distributions


1. Bernoulli(p) Distribution

   p(x) = p^x q^{1−x}; x = 0, 1
   0 ≤ p ≤ 1
   E(X) = p; V(X) = pq

   Note that here and onwards q = 1 − p.

2. Binomial(n, p) Distribution

   p(x) = C(n, x) p^x q^{n−x}; x = 0, 1, ..., n
   0 ≤ p ≤ 1; n is a positive integer
   E(X) = np; V(X) = npq

3. Geometric(p) Distribution

   p(x) = p q^x; x = 0, 1, 2, 3, ...
   0 ≤ p ≤ 1
   E(X) = q/p; V(X) = q/p²

4. Negative Binomial(r, p) Distribution

   p(x) = C(r+x−1, x) p^r q^x; x = 0, 1, 2, ...
   0 ≤ p ≤ 1; r is a positive integer
   E(X) = rq/p; V(X) = rq/p²

5. Poisson(λ) Distribution

   p(x) = e^{−λ} λ^x / x!; x = 0, 1, 2, ...
   λ > 0
   E(X) = V(X) = λ

6. Uniform(a, b) Distribution

   f(x) = 1/(b − a) if a ≤ x ≤ b, and 0 otherwise
   a < b
   E(X) = (a + b)/2; V(X) = (b − a)²/12

7. Normal Distribution (N(µ, σ²))

   f(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)); −∞ < x < +∞
   −∞ < µ < +∞; σ > 0
   E(X) = µ; V(X) = σ²

8. Exponential(λ) Distribution

   f(x) = λ e^{−λx}; x > 0
   F(x) = 1 − e^{−λx}; x > 0
   λ > 0
   E(X) = 1/λ; V(X) = 1/λ²

9. Gamma(α, β) distribution

   f(x) = (β^α/Γ(α)) x^{α−1} exp(−βx); x > 0
   α > 0; β > 0
   E(X) = α/β; V(X) = α/β²

10. Beta(α, β) distribution

   f(x) = (Γ(α+β)/(Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1}; 0 < x < 1
   α > 0; β > 0
   E(X) = α/(α+β); V(X) = αβ/((α+β)²(α+β+1))

11. χ²_n distribution

   f(x) = (1/(2^{n/2} Γ(n/2))) x^{(n/2)−1} exp(−x/2); x > 0
   n > 0
   E(X) = n; V(X) = 2n

12. Student or t(n) distribution (t-distribution)

   f(x) = (Γ((n+1)/2)/((nπ)^{1/2} Γ(n/2))) (1 + x²/n)^{−(n+1)/2}; −∞ < x < +∞
   n > 0
   E(X) = 0; V(X) = n/(n − 2) (exists if n > 2)

13. Weibull(β, δ) distribution

   f(x) = (β/δ) (x/δ)^{β−1} exp(−(x/δ)^β); x > 0
   β > 0; δ > 0
   E(X) = δ Γ(1 + 1/β); V(X) = δ² Γ(1 + 2/β) − δ² [Γ(1 + 1/β)]²

14. Fisher F(m, n) distribution

   f(x) = (Γ((m+n)/2) m^{m/2} n^{n/2} / (Γ(m/2) Γ(n/2))) · x^{(m/2)−1} / (mx + n)^{(m+n)/2}; x > 0
   E(X) = n/(n − 2) (n > 2); V(X) = 2n²(m + n − 2) / (m(n − 2)²(n − 4)) (n > 4)