
STAT 3112

STATISTICAL INFERENCE - II
INTRODUCTION
Traditionally, problems of statistical inference are divided into two classes: problems of estimation
and tests of hypotheses. The problem of estimation, as it will be considered here, is usually
defined as follows.
Assume that some characteristic of the elements in the population can be represented by a
random variable X whose density function is
f_X(· ; θ) = f(· ; θ),
where the form of the density is assumed known except that it contains an unknown
parameter θ. (If θ were known, the density function would be completely specified, and
there would be no need to make inferences about it.)
Further assume that the values x1, x2, …, xn of a random sample X1, X2, …, Xn from f(· ; θ)
can be observed. On the basis of the observed sample values x1, x2, …, xn it is desired to estimate
the value of the unknown parameter θ, or the value of some function, say τ(θ), of the
unknown parameter.
The estimation can be made in two ways. The first, called point estimation, is to let the
value of some statistic, say t(X1, X2, …, Xn), represent, or estimate, the unknown τ(θ); such
a statistic t(X1, X2, …, Xn) is called a point estimator. The second way is interval estimation.
Definition: Statistic
A statistic is a function of observable random variables that does not contain any
unknown parameters.
T1 = X̄ = (X1 + X2 + ⋯ + Xn)/n
T2 = (X̄ − μ)²
Here, T1 is a statistic, whereas T2 is not a statistic, since it contains the unknown parameter μ.
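As a quick illustration (a minimal sketch, with hypothetical sample values), T1 can be computed from the observed data alone, whereas T2 cannot be evaluated without knowing μ:

import numpy as np

# hypothetical observed sample values x1, ..., xn
x = np.array([4.2, 5.1, 3.8, 4.9, 5.4])

# T1 = sample mean: a statistic, since it depends only on the observations
t1 = x.mean()
print(t1)  # 4.68

# T2 = (X-bar - mu)^2 involves the unknown population mean mu,
# so it cannot be computed from the data alone and is not a statistic.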
Definition: Estimator
Any statistic (a known function of observable random variables that is itself a random
variable) whose values are used to estimate τ(θ), where τ(·) is some function of the
parameter θ, is defined to be an estimator of τ(θ).
Methods of finding estimators:
1. Method of Moments
2. Method of Maximum Likelihood

CHAPTER 01 - PROPERTIES OF POINT ESTIMATORS
We have discussed methods of obtaining point estimators. The question that now arises is:
are some of the many possible estimators better, in some sense, than others? In this section, we
will define certain properties that an estimator may or may not possess, which will help us
decide whether one estimator is better than another.
1. Closeness
2. Unbiasedness
3. Mean-Squared Error (MSE)
4. Consistency
5. Efficiency
6. Sufficiency
7. Completeness

1. Closeness
If we have a random sample X1, X2, …, Xn from a density, say f(x; θ), which is known
except for θ, then a point estimator of τ(θ) is a statistic, say 𝓉(X1, X2, …, Xn), whose value
is used as an estimate of τ(θ). Assume here that τ(θ) is a real-valued (not a vector)
function of the unknown parameter θ. Often τ(θ) will be θ itself.
It is not always possible to estimate the unknown τ(θ) accurately. Therefore, we look for
an estimator 𝓉(X1, X2, …, Xn) that is “close” to τ(θ).
There are several ways of defining “close”. T = 𝓉(X1, X2, …, Xn) is a statistic and hence
has a distribution, or rather a family of distributions, depending on what θ is. The
distribution of T tells how the values 𝓉 of T are distributed, and we would like to have the
values of T distributed near τ(θ); that is, we would like to select 𝓉(·, …, ·) so that the
values of T = 𝓉(X1, X2, …, Xn) are concentrated near τ(θ). So we need an estimator whose
mean is near or equal to τ(θ) and whose variance is small. Let us see what “concentration”
might mean in terms of the distribution itself.

Definition: More concentrated and most concentrated


Let T = t(X1, X2, …, Xn) and T′ = t′(X1, X2, …, Xn) be two estimators of τ(θ). T′ is
called a more concentrated estimator of τ(θ) than T if and only if
P_θ[τ(θ) − λ < T′ ≤ τ(θ) + λ] ≥ P_θ[τ(θ) − λ < T ≤ τ(θ) + λ] for all λ > 0 and for
each θ in the parameter space.
An estimator T* = t*(X1, X2, …, Xn) is called most concentrated if it is more concentrated
than any other estimator.
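As an illustrative sketch (not part of the original notes), the two probabilities in the definition can be approximated by simulation. Here the sample mean and the sample median of a normal sample are compared as estimators of the population mean; all numerical values are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps, lam = 5.0, 25, 10_000, 0.3   # arbitrary illustrative values

# reps independent samples of size n from N(theta, 1)
samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))
T = samples.mean(axis=1)               # estimator T: the sample mean
T_prime = np.median(samples, axis=1)   # estimator T': the sample median

# Monte Carlo estimate of P_theta[ theta - lam < estimator <= theta + lam ]
def coverage(est):
    return np.mean((theta - lam < est) & (est <= theta + lam))

print(coverage(T), coverage(T_prime))  # for normal data the mean is the more concentrated of the two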

2. Unbiasedness
It is difficult to find an estimator that estimates the unknown parameter exactly. So, at the
very least, we would like an estimator whose mean (expected value) is near or equal to τ(θ).
Definition: Unbiased Estimator
An estimator T = t(X1, X2, …, Xn) is defined to be an unbiased estimator of τ(θ) if and
only if
E_θ[T] = E_θ[t(X1, X2, …, Xn)] = τ(θ) for all θ in the parameter space.
An estimator is unbiased if the mean of its distribution equals τ(θ), the function of the
parameter being estimated.
Note: The quantity E_θ[T] − τ(θ) is called the bias of the estimator T and can be
positive, negative, or zero.
Example 1:
Let X1, X2, X3 be a random sample taken from a population with mean μ and variance σ².
Find which of the following estimators are unbiased for μ.
i. T1 = (1/4)X1 + (1/2)X2 + (1/4)X3
ii. T2 = (1/3)X1 + (3/5)X2
iii. T3 = (4/5)X1 + (1/10)X2 + (1/10)X3

Answer:
i. E[T1] = E[(1/4)X1 + (1/2)X2 + (1/4)X3]
= (1/4)E[X1] + (1/2)E[X2] + (1/4)E[X3]
= (1/4)μ + (1/2)μ + (1/4)μ
E[T1] = μ
∴ 𝑇1 is an unbiased estimator for 𝜇.
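The remaining parts follow in the same way:
ii. E[T2] = (1/3)E[X1] + (3/5)E[X2] = (1/3)μ + (3/5)μ = (14/15)μ ≠ μ
∴ T2 is a biased estimator for μ.
iii. E[T3] = (4/5)E[X1] + (1/10)E[X2] + (1/10)E[X3] = (4/5)μ + (1/10)μ + (1/10)μ = μ
∴ T3 is an unbiased estimator for μ.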

Example 2:
Let X1, X2, …, Xn be a random sample from f(x; θ) = ϕ_{μ,σ²}(x) (the normal density). Recall
that the maximum likelihood estimators of μ and σ² are, respectively, X̄ and
(1/n)Σ(Xi − X̄)². Show that these two maximum likelihood estimators are unbiased
and biased for μ and σ², respectively.
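A brief sketch of the argument: since E[Xi] = μ for each i, E[X̄] = (1/n)Σ E[Xi] = μ, so X̄ is unbiased for μ. For the variance estimator, using the identity Σ(Xi − X̄)² = Σ(Xi − μ)² − n(X̄ − μ)² together with E[(X̄ − μ)²] = Var(X̄) = σ²/n, taking expectations gives
E[(1/n)Σ(Xi − X̄)²] = (1/n)[nσ² − σ²] = ((n − 1)/n)σ² ≠ σ²,
so (1/n)Σ(Xi − X̄)² is a biased estimator of σ².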

