
Chapter 7

Inference Concerning Proportion


• Let p denote the proportion of individuals or
objects in a population who possess a
specified property (e.g., the proportion of
students who passed an exam, or the
proportion of smokers).
• If an individual or object with the property is
labeled a success (S), then p is the population
proportion of successes.
I. Large Sample Tests for proportion (Z test)

Parameter: P
Null hypothesis: H0: P = P0
Sample statistic: p̂ = x/n, the sample proportion of successes
Test statistic:
  Z = (p̂ - P0) / sqrt(P0 Q0 / n),  where Q0 = 1 - P0
Rejection region at level of significance α:
  H1: P > P0  →  reject H0 if Z ≥ zα
  H1: P < P0  →  reject H0 if Z ≤ -zα
  H1: P ≠ P0  →  reject H0 if |Z| ≥ zα/2
Decision: reject H0 if Z falls in the rejection region;
otherwise do not reject H0.
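As a concrete illustration, the one-sample proportion test can be sketched in Python using only the standard library; the counts below are invented for the example:

```python
import math

def one_prop_z(x, n, p0):
    """Large-sample Z statistic for H0: P = P0, given x successes in n trials."""
    p_hat = x / n                      # sample proportion of successes
    q0 = 1 - p0
    return (p_hat - p0) / math.sqrt(p0 * q0 / n)

# Hypothetical data: 120 successes out of n = 200, testing H0: P = 0.5
z = one_prop_z(120, 200, 0.5)

# Two-sided alternative (H1: P != P0) at alpha = 0.05, so z_{alpha/2} = 1.96
reject_h0 = abs(z) >= 1.96
print(round(z, 3), reject_h0)  # z ≈ 2.828, so H0 is rejected
```

Here |Z| ≈ 2.83 exceeds 1.96, so at the 5% level the sample proportion 0.6 is significantly different from the hypothesized 0.5.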
II. Large Sample Tests for difference
between proportions (Z test)

Parameter: P1 - P2
Null hypothesis: H0: P1 = P2
Sample statistic: p̂1 - p̂2
Test statistic:
  Z = (p̂1 - p̂2) / sqrt( p̂ q̂ (1/n1 + 1/n2) ),
  where p̂ = (x1 + x2) / (n1 + n2) is the pooled sample proportion
  and q̂ = 1 - p̂
Rejection region at level of significance α:
  H1: P1 > P2  →  reject H0 if Z ≥ zα
  H1: P1 < P2  →  reject H0 if Z ≤ -zα
  H1: P1 ≠ P2  →  reject H0 if |Z| ≥ zα/2
Decision: reject H0 if Z falls in the rejection region;
otherwise do not reject H0.
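A minimal Python sketch of the two-proportion test, again with made-up counts:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Large-sample Z statistic for H0: P1 = P2 using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)    # pooled estimate of the common proportion
    q_pool = 1 - p_pool
    se = math.sqrt(p_pool * q_pool * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical data: 90/150 successes in group 1, 60/150 in group 2
z = two_prop_z(90, 150, 60, 150)
reject_h0 = abs(z) >= 1.96   # two-sided test at alpha = 0.05
print(round(z, 3), reject_h0)  # z ≈ 3.464, so H0: P1 = P2 is rejected
```

The pooled proportion is used in the standard error because, under H0, both samples come from populations with the same proportion.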
III. Chi-Square Test of goodness of fit
• It allows us to determine whether the deviation of the observed
values from theory is due to chance alone, or whether there really
is a significant difference.
• The method involves the comparison of a set of observed
frequencies with the frequencies specified by some hypothesis to
be tested.
• This is a test for comparing a theoretical distribution, such as the
Binomial or Poisson, with the observed data from a sample.
H0: There is no significant difference between the observed and
expected frequencies (the data support the theory).
H1: There is a significant difference between the observed and
expected frequencies (the data do not support the theory).
Test statistic:
  χ² = Σ (O - E)² / E,
  where O = observed frequency, E = expected frequency, and
  Ei = n·pi (sample size times the proportion hypothesized under
  the null hypothesis for category i)
Rejection region at level of significance α:
  Reject H0 if χ² ≥ χ²α,(k-1), where k = number of categories into
  which the data have been grouped
Decision: accept H0 if χ² < χ²α,(k-1); reject H0 if χ² ≥ χ²α,(k-1).
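For instance, testing whether a die is fair (all six faces equally likely) can be sketched as below; the observed counts are invented, and 11.070 is the chi-square critical value for α = 0.05 with k - 1 = 5 degrees of freedom:

```python
observed = [8, 12, 9, 11, 10, 10]   # hypothetical face counts from 60 rolls
n = sum(observed)
p0 = 1 / 6                          # hypothesized proportion for each face
expected = [n * p0] * 6             # E_i = n * p_i = 10 per face

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

reject_h0 = chi2 >= 11.070   # chi-square critical value, alpha = 0.05, df = 5
print(round(chi2, 3), reject_h0)  # chi2 = 1.0, so H0 (die is fair) is not rejected
```

Since χ² = 1.0 is far below 11.070, the observed deviations are consistent with chance.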
IV. Chi Square Test of independence
• The chi-square test of independence is an analysis which
enables us to see whether two classifications of the same
data are independent of each other.
• We use contingency tables to test the independence of
the factors.

H0: The variables represented by the two classifications are
independent.
H1: The variables represented by the two classifications are not
independent.
Test statistic:
  χ² = Σ (O - E)² / E,
  where O = observed frequency, E = expected frequency, and the
  expected value for the ith row and jth column is
  Eij = (ith row total × jth column total) / n
Rejection region at level of significance α:
  Reject H0 if χ² ≥ χ²α,(r-1)(c-1), where r = number of rows and
  c = number of columns in the contingency table
Decision: accept H0 if χ² < χ²α,(r-1)(c-1); reject H0 if
χ² ≥ χ²α,(r-1)(c-1).
• Note: If a parameter of the theoretical distribution must be
estimated from the given data, then the degrees of freedom for
the goodness-of-fit test become k - 1 - r, where r is the number
of parameters of the distribution being fitted that are estimated
from the data.

Distribution   No. of parameters (r)             No. of degrees of freedom
Binomial       1 (proportion of success)         k - 1 - 1 = k - 2
Poisson        1 (mean)                          k - 1 - 1 = k - 2
Normal         2 (mean and standard deviation)   k - 1 - 2 = k - 3
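To illustrate the adjustment, here is a Poisson goodness-of-fit sketch in which the mean is estimated from the data, so df = k - 1 - 1 = 2 for k = 4 categories. The counts are invented, and the "3+" category is treated as exactly 3 events when estimating the mean (a simplification):

```python
import math

# Hypothetical counts of intervals containing 0, 1, 2, and 3+ events (n = 100)
observed = {0: 30, 1: 35, 2: 25, 3: 10}
n = sum(observed.values())

# Estimate the Poisson mean from the data (treating "3+" as exactly 3)
lam = sum(k * f for k, f in observed.items()) / n

# Expected frequencies: Poisson pmf for 0, 1, 2; the remainder lumped into "3+"
pmf = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(3)]
probs = pmf + [1 - sum(pmf)]
expected = [n * p for p in probs]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed.values(), expected))
df = len(observed) - 1 - 1   # k - 1 - r, with r = 1 estimated parameter (mean)

reject_h0 = chi2 >= 5.991    # chi-square critical value, alpha = 0.05, df = 2
print(round(chi2, 3), df, reject_h0)  # chi2 ≈ 1.02, df = 2, H0 not rejected
```

Because the mean was estimated from the sample, the comparison uses df = 2 rather than k - 1 = 3; the fit here is not rejected.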
