
Statistics 630

The Bayesian Approach to Parameter Estimation

In the Bayesian approach, we suppose that the unknown parameter θ is a random variable with a prior distribution π(θ). For a given value θ of the parameter, the data have a pdf or pmf f_θ(s). The joint distribution of (S, θ) is

f(s, θ) = f_θ(s) π(θ).
The marginal distribution of S has pmf or pdf

m(s) = ∫ f(s, θ) dθ = ∫ f_θ(s) π(θ) dθ.

For the purposes of inference, we are interested in the conditional distribution of θ given S = s:

π(θ | s) = f(s, θ) / m(s) = f_θ(s) π(θ) / ∫ f_θ(s) π(θ) dθ.
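This update can be carried out numerically by discretizing θ on a grid; a minimal sketch, in which the binomial likelihood, uniform prior, and data values are assumed purely for illustration (they are not from the slides):

```python
import numpy as np

# Grid approximation to Bayes' rule: discretize the parameter and
# normalize numerically.  Likelihood, prior, and data are assumed.
h = 0.001
theta = np.arange(h, 1.0, h)                   # grid over (0, 1)
prior = np.ones_like(theta)                    # pi(theta): uniform prior
n, x = 10, 3                                   # assumed binomial data
likelihood = theta**x * (1 - theta)**(n - x)   # f_theta(s), up to a constant

joint = likelihood * prior                     # f(s, theta) = f_theta(s) pi(theta)
m_s = joint.sum() * h                          # m(s): integral of f(s, theta)
posterior = joint / m_s                        # pi(theta | s)

# With a uniform prior the posterior is beta(x + 1, n - x + 1),
# whose mean is (x + 1)/(n + 2) = 1/3 here.
post_mean = (theta * posterior).sum() * h
print(round(post_mean, 3))                     # 0.333
```

The same grid recipe works for any likelihood–prior pair, which is what makes the formula operational even when the integral in the denominator has no closed form.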


Slide 1

Chapter 7: Bayesian Inference

© 2010 TAMU Statistics (Thomas E. Wehrly, UUMH)

The distribution specified by π(θ | s) is called the posterior distribution of θ given S = s. It represents the knowledge of the statistician arising from prior information and from the data. We see that, as a function of θ, we can write the posterior distribution as

π(θ | s) ∝ f_θ(s) π(θ),

that is, posterior density ∝ likelihood × prior density.

Example: Suppose that X is binomial(n, θ), where θ has a beta(a, b) distribution. Find the posterior distribution of θ given X = x. Writing C(n, x) for the binomial coefficient,

f_θ(x) = C(n, x) θ^x (1 − θ)^(n−x),  x = 0, 1, …, n

π(θ) = [Γ(a + b) / (Γ(a) Γ(b))] θ^(a−1) (1 − θ)^(b−1),  0 < θ < 1


Slide 2


f(x, θ) = C(n, x) θ^x (1 − θ)^(n−x) · [Γ(a + b) / (Γ(a) Γ(b))] θ^(a−1) (1 − θ)^(b−1)
        = C(n, x) [Γ(a + b) / (Γ(a) Γ(b))] θ^(x+a−1) (1 − θ)^(n−x+b−1)

Then the marginal pmf of X is

m(x) = ∫₀¹ f(x, θ) dθ = C(n, x) [Γ(a + b) / (Γ(a) Γ(b))] ∫₀¹ θ^(x+a−1) (1 − θ)^(n−x+b−1) dθ
     = C(n, x) [Γ(a + b) / (Γ(a) Γ(b))] · Γ(x + a) Γ(n − x + b) / Γ(n + a + b).

The posterior density of θ given X = x is

π(θ | x) = f(x, θ) / m(x)
         = [Γ(n + a + b) / (Γ(x + a) Γ(n − x + b))] θ^(x+a−1) (1 − θ)^(n−x+b−1),  0 < θ < 1,

which is the beta(x + a, n − x + b) density.
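A quick sketch confirming the normalizing constant of the beta posterior, using `math.gamma` and a plain Riemann sum; the values of a, b, n, x here are assumed for illustration:

```python
import math

# The posterior beta(x + a, n - x + b) has normalizing constant
# Gamma(n+a+b) / (Gamma(x+a) * Gamma(n-x+b)).
a, b, n, x = 2.0, 3.0, 20, 7
const = math.gamma(n + a + b) / (math.gamma(x + a) * math.gamma(n - x + b))

# Riemann sum of the kernel theta^(x+a-1) (1-theta)^(n-x+b-1) over (0, 1);
# const times this integral should equal 1.
h = 1e-5
integral = h * sum(
    (i * h) ** (x + a - 1) * (1 - i * h) ** (n - x + b - 1)
    for i in range(1, 100_000)
)
print(round(const * integral, 6))  # 1.0
```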


Slide 3


Remark: We could have saved a lot of work by using the relationship

posterior density ∝ likelihood × prior density.

We notice that

π(θ | x) ∝ θ^x (1 − θ)^(n−x) · θ^(a−1) (1 − θ)^(b−1) = θ^(x+a−1) (1 − θ)^(n−x+b−1).

We can form the posterior density by recognizing the constant (depending on x) that makes this function of θ integrate to 1. The form of the density is that of a beta density; hence, the constant is

Γ(n + a + b) / (Γ(x + a) Γ(n − x + b)).


Slide 4

Back to the Bill of Rights example: We will consider two approaches to specifying the prior distribution and look at the resulting posterior distributions. The problem of specifying the prior distribution is central to the Bayes paradigm. Ideally the prior distribution reflects the investigator's knowledge about the parameter.

Investigator 1 desires to be objective and supposes that he has no prior information about θ. A uniform prior density on (0, 1) is a reasonable choice for a noninformative prior. This corresponds to a beta(1, 1) distribution.

Investigator 2 has information from an earlier study that found that the proportion knowing the Bill of Rights was 0.3, so she chooses a beta prior with this mean and a standard deviation of 0.1. This corresponds to a beta(6, 14) distribution.
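Investigator 2's choice can be reproduced by solving the beta mean and variance equations for (a, b); a small sketch (the function name is ours, not from the slides):

```python
# Solve for a beta(a, b) prior with a given mean m and standard deviation s:
# from m = a/(a+b) and s^2 = ab/((a+b)^2 (a+b+1)) we get
# a + b = m(1 - m)/s^2 - 1, then a = m(a + b) and b = (1 - m)(a + b).
def beta_from_mean_sd(m, s):
    total = m * (1 - m) / s**2 - 1
    return m * total, (1 - m) * total

a2, b2 = beta_from_mean_sd(0.3, 0.1)
print(round(a2, 6), round(b2, 6))  # 6.0 14.0 -- the beta(6, 14) prior
```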


Slide 5

The data had n = 50 and x = 14. Thus, the two investigators ended up with the following posterior densities:

Investigator 1: beta(15, 37)
Investigator 2: beta(20, 50)
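The beta–binomial update is just parameter arithmetic, so the two posteriors can be checked in a couple of lines (the helper name is ours):

```python
# Beta-binomial updating: a beta(a, b) prior plus binomial data (n, x)
# gives a beta(a + x, b + n - x) posterior.
def beta_update(a, b, n, x):
    return a + x, b + n - x

n, x = 50, 14
print(beta_update(1, 1, n, x))   # (15, 37) -- Investigator 1
print(beta_update(6, 14, n, x))  # (20, 50) -- Investigator 2
```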

Plots of the prior and posterior densities are below:


[Figure: Prior 1, Prior 2, Posterior 1, and Posterior 2 densities plotted against θ over (0, 1).]


Slide 6

1.1 Summarizing Results from a Bayesian Analysis

The posterior distribution summarizes the statistical analysis based on the prior information and the data. We can use summary measures for the distribution as Bayes estimates.

The posterior mode is the mode of π(θ | s); it represents the most likely value of θ given S = s. The posterior mean is the mean of π(θ | s); it represents the average value of θ given S = s. The posterior variance is used to summarize the variability of θ given S = s.

We could also report percentiles of the posterior distribution. The median is sometimes used as a point estimate.


Slide 7

Back to the example. We present a table comparing the prior and posterior distributions for the two investigators:

Estimate              Prior 1    Bayes 1    Prior 2    Bayes 2
mode                  none       0.28       0.2778     0.2794
mean                  0.5        0.2885     0.3        0.2857
standard deviation    0.289      0.0622     0.1        0.0533

We compare these with the mle and its estimated standard error:

θ̂ = 0.28,  SE(θ̂) = 0.0635
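Most of the table entries follow from the closed-form beta summaries; a sketch (the function name is ours):

```python
import math

# Summaries of a beta(a, b) density: mode (a-1)/(a+b-2) when a, b > 1,
# mean a/(a+b), and sd sqrt(ab / ((a+b)^2 (a+b+1))).
def beta_summary(a, b):
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else None
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mode, mean, sd

m1, mu1, s1 = beta_summary(15, 37)   # Investigator 1's posterior
m2, mu2, s2 = beta_summary(20, 50)   # Investigator 2's posterior
print(round(m1, 4), round(mu1, 4), round(s1, 4))  # 0.28 0.2885 0.0622
print(round(m2, 4), round(mu2, 4))                # 0.2794 0.2857
```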


Slide 8

Back to the nerve impulse example. We assumed that X₁, …, Xₙ formed a random sample from an exponential(λ) distribution. Suppose now that λ is a rv with a gamma(a, b) distribution. Then

π(λ) ∝ λ^(a−1) e^(−bλ)

L(λ) ∝ λ^n e^(−λ Σᵢ xᵢ)

Thus, the posterior pdf can be seen to be a gamma(a + n, b + Σᵢ xᵢ) pdf:

π(λ | x₁, …, xₙ) ∝ λ^(a+n−1) e^(−(b + Σᵢ xᵢ) λ)

Suppose that the investigator wanted to use a prior distribution for λ with mean = 5 and variance = 16. This corresponds to a gamma(25/16, 5/16) distribution. The posterior distribution given the observed data, with Σᵢ xᵢ = 174.64 and n = 799, is a gamma(800.5625, 174.9525).
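The gamma update, like the beta one, is pure parameter arithmetic; the numbers below are the ones from the slide:

```python
# Gamma-exponential conjugacy: a gamma(a, b) prior for lambda with
# exponential(lambda) data updates to gamma(a + n, b + sum(x_i)).
a, b = 25 / 16, 5 / 16      # prior mean a/b = 5, variance a/b**2 = 16
n, sum_x = 799, 174.64      # observed data from the slide

post_a, post_b = a + n, b + sum_x
print(post_a, round(post_b, 4))  # 800.5625 174.9525
```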


Slide 9

[Figure: prior and posterior densities of λ, with one panel over the full range 0–20 and a zoomed panel over 4.2–5.0.]

We can compare various estimates of λ:

E(λ | x₁, …, xₙ) = (n + a) / (b + Σᵢ xᵢ) = 800.5625 / 174.9525 = 4.5759
mode of posterior = (n + a − 1) / (b + Σᵢ xᵢ) = 799.5625 / 174.9525 = 4.5702
mle = 1 / x̄ = 4.5751
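The three estimates can be reproduced directly:

```python
# Point estimates of lambda from the gamma(a + n, b + sum_x) posterior
# and the mle 1/xbar, using the slide's numbers.
a, b, n, sum_x = 25 / 16, 5 / 16, 799, 174.64

post_mean = (a + n) / (b + sum_x)      # E(lambda | x_1, ..., x_n)
post_mode = (n + a - 1) / (b + sum_x)  # mode of the gamma posterior
mle = n / sum_x                        # 1 / xbar

print(round(post_mean, 4), round(post_mode, 4), round(mle, 4))
# 4.5759 4.5702 4.5751
```

With n = 799 the data dominate the prior, which is why all three estimates agree to two decimal places.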


Slide 10

