
TESTS OF SIGNIFICANCE

STATISTICAL TESTS
• Statistical tests are intended to decide whether a hypothesis about the distribution of one or more populations or samples should be rejected or accepted.
• Statistical tests are broadly classified into parametric tests and non-parametric tests.
PARAMETRIC TESTS
A parametric test is a statistical test that makes assumptions about the parameters of the population distribution(s) from which one’s data are drawn.
APPLICATIONS
• Used for quantitative data.
• Used for continuous variables.
• Used when data are measured on approximate interval or ratio scales of measurement.
• Data should follow a normal distribution.
NON-PARAMETRIC TESTS
A non-parametric test is a statistical test that makes no assumptions about the parameters of the population distribution(s) from which one’s data are drawn.
APPLICATIONS
• Used for qualitative data.
• Used for nominal, ordinal, and discrete variables.
• Used when data do not follow a normal distribution.
PARAMETRIC TESTS
1. t-test
   • One sample t-test
   • Two sample t-test
     – Unpaired two sample t-test
     – Paired two sample t-test
2. ANOVA
   • One way ANOVA
   • Two way ANOVA
3. Pearson’s r correlation
4. Z test
Student’s t-test

• Developed by Prof. W. S. Gosset in 1908.

• A t-test compares the difference between the means of two groups to determine whether the difference is statistically significant.
One Sample t-test
Assumptions:
• Population is normally distributed
• Sample is drawn from the population at random
• We should know the population mean
Conditions:
• Population standard deviation is not known
• Size of the sample is small (n < 30)
Contd..
• In a one sample t-test, we know the population mean.

• We draw a random sample from the population, compare the sample mean with the population mean, and make a statistical decision as to whether or not the sample mean is different from the population mean.
Let x1, x2, ……., xn be a random sample of size "n" drawn from a normal population with mean µ and variance σ².

Null hypothesis (H0):

Population mean (µ) is equal to a specified value µ0.

Under H0, the test statistic is

t = (x̄ − µ0) / (s / √n),  df = n − 1

where x̄ is the sample mean and s is the sample standard deviation.
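The statistic above can be checked numerically. A minimal sketch using SciPy, with hypothetical blood-pressure readings and a claimed population mean of 120 (all values made up for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical sample and a claimed population mean µ0 = 120
sample = np.array([118, 122, 125, 130, 121, 119, 127, 124])
mu0 = 120
n = len(sample)

# Library computation
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)

# Manual computation: t = (x̄ − µ0) / (s / √n), with s using ddof = 1
t_manual = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

print(t_stat, t_manual)  # the two values agree
```

`ttest_1samp` uses n − 1 degrees of freedom internally, matching df = n − 1 in the formula.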
Two sample t-test
• Used when two independent random samples come from normal populations having unknown but equal variances.

• We test the null hypothesis that the two population means are the same, i.e., µ1 = µ2.
Contd…
Assumptions:

1. Populations are distributed normally
2. Samples are drawn independently and at random

Conditions:

1. Standard deviations in the populations are the same and not known
2. Size of the sample is small
If two independent samples xi (i = 1, 2, …, n1) and yj (j = 1, 2, …, n2) of sizes n1 and n2 have been drawn from two normal populations with means µ1 and µ2 respectively, then under the null hypothesis H0: µ1 = µ2, the test statistic is

t = |x̄ − ȳ| / (s √(1/n1 + 1/n2)),  df = n1 + n2 − 2

where s is the pooled standard deviation of the two samples.
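A minimal sketch of the unpaired test using SciPy, on two hypothetical independent samples; `equal_var=True` corresponds to the assumption of equal but unknown population standard deviations:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from two independent groups
x = np.array([5.1, 4.9, 5.6, 5.2, 5.0, 5.4])
y = np.array([4.8, 4.7, 5.1, 4.9, 4.6])

# Library computation (pooled-variance t-test)
t_stat, p_value = stats.ttest_ind(x, y, equal_var=True)

# Manual computation mirroring the pooled formula
n1, n2 = len(x), len(y)
sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
t_manual = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

print(t_stat, t_manual)  # the two values agree
```

SciPy reports a signed statistic; the formula in the slide takes the absolute value, which does not change the two-sided p-value.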
Paired t-test
Used when measurements are taken from the same subject before and after some manipulation or treatment.

Ex: To determine the significance of a difference in blood pressure before and after administration of an experimental pressor (pressure-raising) substance.
Assumptions:

1. Populations are distributed normally
2. Samples are drawn independently and at random

Conditions:

1. Samples are related with each other
2. Sizes of the samples are small and equal
3. Standard deviations in the populations are equal and not known
Null Hypothesis:
H0: µd = 0

Under H0, the test statistic is

t = |d̄| / (s / √n),  df = n − 1

Where, d = difference between x1 and x2
d̄ = average of d
s = standard deviation of the differences
n = sample size
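A minimal sketch of the paired test using SciPy, with hypothetical before/after blood-pressure readings on the same six subjects:

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings (same subjects before and after treatment)
before = np.array([150, 142, 138, 160, 155, 148])
after  = np.array([144, 140, 132, 150, 150, 146])

# Library computation
t_stat, p_value = stats.ttest_rel(before, after)

# Manual computation: t = |d̄| / (s_d / √n) on the per-subject differences
d = before - after
t_manual = abs(d.mean()) / (d.std(ddof=1) / np.sqrt(len(d)))

print(t_stat, t_manual)  # same magnitude
```

Note that the paired test is just a one sample t-test applied to the differences, with H0: µd = 0.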
Z-Test
• The Z-test is a statistical test in which the normal distribution is applied; it is basically used for problems involving large samples, where the sample size is greater than or equal to 30.

• It is used when the population standard deviation is known.
Contd…
Assumptions:
• Population is normally distributed

• The sample is drawn at random

Conditions:
• Population standard deviation σ is known

• Size of the sample is large (say n > 30)


Let x1, x2, ……, xn be a random sample of size n from a normal population with mean µ and variance σ².

Let x̄ be the sample mean of the sample of size "n".

Null Hypothesis:

Population mean (µ) is equal to a specified value µο.

H0: µ = µο

Under Hο, the test statistic is

Z = (x̄ − µο) / (σ / √n)

If the calculated value of |Z| is less than the table value of Z at the 5% level of significance, H0 is accepted, and we conclude that there is no significant difference between the population mean and the value µο specified in H0.
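SciPy has no dedicated z-test function, so the sketch below computes the statistic directly from the formula, on simulated data with a hypothetical known σ (all numbers are made up for illustration):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical: 50 readings, known population sigma = 12, claimed mu0 = 100
rng = np.random.default_rng(42)
sample = rng.normal(loc=104, scale=12, size=50)
mu0, sigma = 100, 12
n = len(sample)

# Z = (x̄ − µ0) / (σ / √n); note σ (known), not the sample s
z = (sample.mean() - mu0) / (sigma / np.sqrt(n))
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

# Decision at the 5% level: compare |Z| with the critical value ≈ 1.96
z_crit = norm.ppf(0.975)
reject = abs(z) > z_crit
```

The key difference from the t-test is that σ is known, so the reference distribution is the standard normal rather than a t distribution.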
Pearson’s ‘r’ Correlation

• Correlation is a technique for investigating the relationship between two quantitative, continuous variables.

• Pearson’s correlation coefficient (r) is a measure of the strength of the association between the two variables.
Types of correlation

Type of correlation              Correlation coefficient
Perfect positive correlation     r = +1
Partial positive correlation     0 < r < +1
No correlation                   r = 0
Partial negative correlation     −1 < r < 0
Perfect negative correlation     r = −1
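A minimal sketch computing r with SciPy, on hypothetical paired data (hours studied vs. exam score, invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical continuous paired data with a clear positive trend
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([52, 55, 61, 60, 68, 72, 74, 80])

# Pearson's r and its two-sided p-value
r, p_value = stats.pearsonr(hours, score)

# r always lies in [−1, +1]; a near-linear increasing trend gives r close to +1
print(r)
```

Since the made-up data rise almost linearly, r falls near +1 (partial positive correlation in the table above).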
ANOVA (Analysis of Variance)
• Analysis of Variance (ANOVA) is a collection of statistical models used to analyse the differences between group means or variances.

• Compares multiple groups at one time.

• Developed by R. A. Fisher.
ANOVA
• One way ANOVA
• Two way ANOVA
One way ANOVA
Compares two or more unmatched groups when data are categorized by one factor.

Ex:

1. Comparing a control group with three different doses of aspirin

2. Comparing the productivity of three or more employees based on working hours in a company
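A minimal sketch of a one way ANOVA with SciPy, mirroring the first example above with hypothetical responses for a control group and three aspirin doses (all values invented):

```python
from scipy import stats

# Hypothetical response measurements, one factor (dose) with four levels
control = [4.2, 4.5, 4.1, 4.4]
low     = [4.8, 5.0, 4.7, 5.1]
medium  = [5.5, 5.3, 5.6, 5.4]
high    = [6.0, 6.2, 5.9, 6.1]

# H0: all four group means are equal
f_stat, p_value = stats.f_oneway(control, low, medium, high)

# A large F with a small p-value leads us to reject H0
print(f_stat, p_value)
```

With group means this far apart relative to the within-group spread, the F statistic is large and H0 is rejected at the 5% level.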
Two way ANOVA
• Used to determine the effect of two nominal predictor variables on a continuous outcome variable.

• It analyses the effect of the independent variables on the expected outcome along with their relationship to the outcome itself.

Ex: Comparing employee productivity based on working hours and working conditions.
Assumptions of ANOVA:
• The samples are independent and selected randomly.

• The parent population from which samples are taken has a normal distribution.

• Various treatment and environmental effects are additive in nature.

• The experimental errors are distributed normally with mean zero and variance σ².
• ANOVA compares variances by means of the F-ratio:

F = Treatment mean square (TrMS) / Error mean square (EMS)

• The exact form of the F-ratio depends on the experimental design.

Null hypothesis:
H0: All population means are the same.

• If the computed F value is greater than the F critical value, we reject the null hypothesis.
• If the computed F value is less than the F critical value, the null hypothesis is accepted.
ANOVA Table

Mean square (MS) = sum of squares / degrees of freedom

Source of variation         Sum of squares (SS)          Degrees of freedom (d.f.)   Mean squares (MS)      F-ratio
Between samples or groups   Treatment sum of squares     k − 1                       TrMS = TrSS/(k − 1)    F = TrMS/EMS
(Treatments)                (TrSS)
Within samples or groups    Error sum of squares         n − k                       EMS = ESS/(n − k)
(Errors)                    (ESS)
Total                       Total sum of squares         n − 1
                            (TSS)
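The table can be built by hand from its definitions. A minimal sketch on three hypothetical groups (k = 3, n = 12), computing TrSS, ESS, TSS, the mean squares, and the F-ratio, with the 5% critical value taken from the F distribution:

```python
import numpy as np
from scipy.stats import f as f_dist

# Hypothetical data: three groups of four observations each
groups = [np.array([4.2, 4.5, 4.1, 4.4]),
          np.array([4.8, 5.0, 4.7, 5.1]),
          np.array([5.5, 5.3, 5.6, 5.4])]

all_obs = np.concatenate(groups)
n, k = len(all_obs), len(groups)
grand_mean = all_obs.mean()

# Sums of squares from the table's definitions
trss = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between groups
ess = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within groups
tss = ((all_obs - grand_mean) ** 2).sum()                          # total; TSS = TrSS + ESS

# Mean squares and F-ratio
trms = trss / (k - 1)
ems = ess / (n - k)
f_ratio = trms / ems

# 5% critical value with (k − 1, n − k) degrees of freedom
f_crit = f_dist.ppf(0.95, k - 1, n - k)
reject = f_ratio > f_crit
```

The identity TSS = TrSS + ESS (and likewise (k − 1) + (n − k) = n − 1) is a useful arithmetic check on a hand-built table.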
S.No   Type of group                                         Parametric test
1.     Comparison of two paired groups                       Paired t-test
2.     Comparison of two unpaired groups                     Unpaired two sample t-test
3.     Comparison of population and sample drawn from        One sample t-test
       the same population
4.     Comparison of three or more matched groups but        Two way ANOVA
       varied in two factors
5.     Comparison of three or more matched groups but        One way ANOVA
       varied in one factor
6.     Correlation between two variables                     Pearson correlation