THEOREM 1. If $S^2$ is the variance of a random sample from an infinite population with the finite variance $\sigma^2$, then $E(S^2) = \sigma^2$.

Proof. By the definition of the sample mean and the sample variance,

$$
E(S^2) = E\left[\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar X)^2\right]
= \frac{1}{n-1}\,E\left[\sum_{i=1}^{n}\left\{(X_i-\mu)-(\bar X-\mu)\right\}^2\right]
= \frac{1}{n-1}\left[\sum_{i=1}^{n}E\{(X_i-\mu)^2\} - n\,E\{(\bar X-\mu)^2\}\right]
$$

Then, since $E\{(X_i-\mu)^2\} = \sigma^2$ and $E\{(\bar X-\mu)^2\} = \dfrac{\sigma^2}{n}$, it follows that

$$
E(S^2) = \frac{1}{n-1}\left[n\sigma^2 - n\cdot\frac{\sigma^2}{n}\right] = \sigma^2
$$

EXAMPLE 5
Show that $\bar X$ is a minimum variance unbiased estimator of the mean $\mu$ of a normal population.

Solution
Since

$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
$$

it follows that

$$
\ln f(x) = -\ln \sigma\sqrt{2\pi} - \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2
$$

so that

$$
\frac{\partial \ln f(x)}{\partial \mu} = \frac{1}{\sigma}\left(\frac{x-\mu}{\sigma}\right) = \frac{x-\mu}{\sigma^2}
$$

and hence

$$
E\left[\left(\frac{\partial \ln f(X)}{\partial \mu}\right)^2\right]
= E\left[\left(\frac{X-\mu}{\sigma^2}\right)^2\right]
= \frac{1}{\sigma^4}\,E\left[(X-\mu)^2\right]
= \frac{1}{\sigma^2}
$$

Thus, the Cramér–Rao lower bound is

$$
\frac{1}{n\cdot E\left[\left(\dfrac{\partial \ln f(X)}{\partial \mu}\right)^2\right]}
= \frac{1}{n\cdot\dfrac{1}{\sigma^2}}
= \frac{\sigma^2}{n}
$$

and since $\bar X$ is unbiased and $\mathrm{var}(\bar X) = \dfrac{\sigma^2}{n}$, it follows that $\bar X$ is a minimum variance unbiased estimator of $\mu$.

EXAMPLE 6
In Example 4 we showed that if $X_1, X_2, \ldots, X_n$ constitute a random sample from a uniform population with $\alpha = 0$, then $\dfrac{n+1}{n}\,Y_n$ is an unbiased estimator of $\beta$.
(a) Show that $2\bar X$ is also an unbiased estimator of $\beta$.
(b) Compare the efficiency of these two estimators of $\beta$.

Solution
(a) Since the mean of the population is $\mu = \dfrac{\beta}{2}$ according to the theorem "The mean and the variance of the uniform distribution are given by $\mu = \frac{\alpha+\beta}{2}$ and $\sigma^2 = \frac{1}{12}(\beta-\alpha)^2$," it follows from the theorem "If $X_1, X_2, \ldots, X_n$ constitute a random sample from an infinite population with the mean $\mu$ and the variance $\sigma^2$, then $E(\bar X) = \mu$ and $\mathrm{var}(\bar X) = \frac{\sigma^2}{n}$" that $E(\bar X) = \dfrac{\beta}{2}$ and hence that $E(2\bar X) = \beta$. Thus, $2\bar X$ is an unbiased estimator of $\beta$.

(b) First we must find the variances of the two estimators. Using the sampling distribution of $Y_n$ and the expression for $E(Y_n)$ given in Example 4, we get

$$
E(Y_n^2) = \int_0^\beta y_n^2 \cdot \frac{n\,y_n^{\,n-1}}{\beta^n}\,dy_n = \frac{n}{n+2}\,\beta^2
$$

and, leaving the details to the reader in Exercise 27, it can be shown that

$$
\mathrm{var}\left(\frac{n+1}{n}\,Y_n\right) = \frac{\beta^2}{n(n+2)}
$$

Since the variance of the population is $\sigma^2 = \dfrac{\beta^2}{12}$ according to the first theorem quoted above, it follows from the second theorem that

$$
\mathrm{var}(\bar X) = \frac{\beta^2}{12n}
$$

and hence that

$$
\mathrm{var}(2\bar X) = 4\cdot\mathrm{var}(\bar X) = \frac{\beta^2}{3n}
$$
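The unbiasedness and variance formulas of Example 6 can be spot-checked with a short Monte Carlo simulation. This is a sketch using only the Python standard library; the particular values $\beta = 1$, $n = 10$, and the number of trials are illustrative assumptions, not taken from the text:

```python
import random

random.seed(42)
beta, n, trials = 1.0, 10, 100_000   # assumed: Uniform(0, beta) population, sample size n

est_mean = []   # realizations of 2 * Xbar
est_max = []    # realizations of (n + 1) / n * Y_n, Y_n the largest order statistic
for _ in range(trials):
    xs = [random.uniform(0.0, beta) for _ in range(n)]
    est_mean.append(2.0 * sum(xs) / n)
    est_max.append((n + 1) / n * max(xs))

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(mean(est_mean), mean(est_max))   # both near beta = 1 (unbiasedness)
print(var(est_mean))                   # near beta^2 / (3n)     = 0.0333...
print(var(est_max))                    # near beta^2 / (n(n+2)) = 0.00833...
print(var(est_max) / var(est_mean))    # near 3 / (n + 2)       = 0.25
```

For $n = 10$ the simulated variance ratio settles near $3/(n+2) = 0.25$, matching the relative efficiency derived in the text.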
Therefore, the efficiency of $2\bar X$ relative to $\dfrac{n+1}{n}\,Y_n$ is given by

$$
\frac{\mathrm{var}\left(\frac{n+1}{n}\,Y_n\right)}{\mathrm{var}(2\bar X)}
= \frac{\beta^2/[n(n+2)]}{\beta^2/(3n)}
= \frac{3}{n+2}
$$

and it can be seen that for $n > 1$ the estimator based on the $n$th order statistic is much more efficient than the other one. For $n = 10$, for example, the relative efficiency is only 25 percent, and for $n = 25$ it is only 11 percent.

Exercises

1. If $X_1, X_2, \ldots, X_n$ constitute a random sample from a population with the mean $\mu$, what condition must be imposed on the constants $a_1, a_2, \ldots, a_n$ so that $a_1X_1 + a_2X_2 + \cdots + a_nX_n$ is an unbiased estimator of $\mu$?

2. If $\hat\Theta_1$ and $\hat\Theta_2$ are unbiased estimators of the same parameter $\theta$, what condition must be imposed on the constants $k_1$ and $k_2$ so that $k_1\hat\Theta_1 + k_2\hat\Theta_2$ is also an unbiased estimator of $\theta$?

10. If $X_1, X_2, \ldots, X_n$ constitute a random sample from a normal population with $\mu = 0$, show that

$$
\frac{1}{n}\sum_{i=1}^{n} X_i^2
$$

is an unbiased estimator of $\sigma^2$.

25. If $X_1$, $X_2$, and $X_3$ constitute a random sample of size $n = 3$ from a normal population with the mean $\mu$ and the variance $\sigma^2$, find the efficiency of $\dfrac{X_1 + 2X_2 + X_3}{4}$ relative to $\dfrac{X_1 + X_2 + X_3}{3}$ as estimates of $\mu$.

EXAMPLE 10
If $X_1, X_2, \ldots, X_n$ constitute a random sample of size $n$ from a Bernoulli population, show that

$$
\hat\Theta = \frac{X_1 + X_2 + \cdots + X_n}{n}
$$

is a sufficient estimator of the parameter $\theta$.

Solution
By the definition "BERNOULLI DISTRIBUTION. A random variable $X$ has a Bernoulli distribution and it is referred to as a Bernoulli random variable if and only if its probability distribution is given by $f(x;\theta) = \theta^x(1-\theta)^{1-x}$ for $x = 0, 1$," we have

$$
f(x_1, x_2, \ldots, x_n; \theta)
= \prod_{i=1}^{n} \theta^{x_i}(1-\theta)^{1-x_i}
= \theta^{\sum x_i}(1-\theta)^{n - \sum x_i}
$$

for $x_i = 0$ or $1$ and $i = 1, 2, \ldots, n$.
Also, since $X = X_1 + X_2 + \cdots + X_n$ is a binomial random variable with the parameters $\theta$ and $n$, its distribution is given by

$$
b(x; n, \theta) = \binom{n}{x}\,\theta^{x}(1-\theta)^{n-x}
$$

and the transformation-of-variable technique yields the distribution of $\hat\Theta = X/n$:

$$
g(\hat\theta) = \binom{n}{n\hat\theta}\,\theta^{n\hat\theta}(1-\theta)^{n - n\hat\theta}
$$

Now, substituting into the formula for the conditional density $f(x_1, x_2, \ldots, x_n \mid \hat\theta)$ given previously, we get

$$
f(x_1, x_2, \ldots, x_n \mid \hat\theta)
= \frac{f(x_1, x_2, \ldots, x_n; \theta)}{g(\hat\theta)}
= \frac{\theta^{n\hat\theta}(1-\theta)^{n-n\hat\theta}}{\binom{n}{n\hat\theta}\,\theta^{n\hat\theta}(1-\theta)^{n-n\hat\theta}}
= \frac{1}{\binom{n}{n\hat\theta}}
$$

for $x_i = 0$ or $1$ and $i = 1, 2, \ldots, n$. Evidently, this does not depend on $\theta$, and we have shown, therefore, that $\hat\Theta = \bar X$ is a sufficient estimator of $\theta$.

The Estimation of Means

To illustrate how the possible size of errors can be appraised in point estimation, suppose that the mean of a random sample is to be used to estimate the mean of a normal population with the known variance $\sigma^2$. By the theorem "If $\bar X$ is the mean of a random sample of size $n$ from a normal population with the mean $\mu$ and the variance $\sigma^2$, its sampling distribution is a normal distribution with the mean $\mu$ and the variance $\sigma^2/n$," we can write

$$
P(|Z| < z_{\alpha/2}) = 1 - \alpha
$$

where

$$
Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}}
$$

and $z_{\alpha/2}$ is such that the integral of the standard normal density from $z_{\alpha/2}$ to $\infty$ equals $\alpha/2$. It follows that

$$
P\left(|\bar X - \mu| < z_{\alpha/2}\cdot\frac{\sigma}{\sqrt{n}}\right) = 1 - \alpha
$$
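The probability statement above can be illustrated numerically: drawing repeated samples from a normal population, the event $|\bar X - \mu| < z_{\alpha/2}\,\sigma/\sqrt{n}$ should occur with relative frequency close to $1 - \alpha$. A minimal sketch, where the population parameters, sample size, and number of trials are assumptions chosen for illustration:

```python
import random

random.seed(7)
mu, sigma, n = 50.0, 6.0, 36        # assumed population mean/sd and sample size
alpha, z = 0.05, 1.959964           # z_{alpha/2} for alpha = 0.05
bound = z * sigma / n ** 0.5        # maximum error of estimate with probability 1 - alpha

trials = 20_000
covered = sum(
    abs(sum(random.gauss(mu, sigma) for _ in range(n)) / n - mu) < bound
    for _ in range(trials)
)
print(covered / trials)   # close to 1 - alpha = 0.95
```

The quantity $z_{\alpha/2}\,\sigma/\sqrt{n}$ thus bounds, with probability $1 - \alpha$, the error made when $\bar X$ is used as a point estimate of $\mu$.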
