
HYPOTHESIS TESTING
The process of determining the probability of obtaining a particular result or set of results.
TWO TYPES OF HYPOTHESES
• NULL HYPOTHESIS - A prediction of no
difference between groups; the hypothesis the
researcher expects to reject.
• ALTERNATIVE HYPOTHESIS - A prediction of
what the researcher expects to find in a study.
REJECTING A NULL HYPOTHESIS?
Region of acceptance:
• The area of the sampling distribution generally defined by the mean ± 1.96 SD (roughly ±2 SD), i.e., the central 95% of the distribution; results falling in this region imply that our sample belongs to the sampling distribution defined by H0, so the researcher retains H0.
REGION OF REJECTION
• The extreme 5% (generally) of a sampling distribution; results falling in this area imply that our sample does not belong to the sampling distribution defined by H0, so the researcher rejects H0 and accepts Ha.
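As an illustration of this decision rule, here is a minimal sketch of a two-tailed z-test at the 5% level. The population mean and SD (100 and 15), the sample size, and the sample mean are made-up assumptions, and scipy is assumed to be available; the slides do not specify any of these.

```python
# Minimal sketch of the retain/reject decision for a two-tailed z-test at the 5% level.
# All numbers below are assumed for illustration; they are not from the slides.
from scipy.stats import norm

pop_mean, pop_sd, n = 100.0, 15.0, 36   # assumed population parameters and sample size
sample_mean = 106.5                     # assumed sample result

# Where the sample mean falls within the sampling distribution defined by H0.
z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)
critical = norm.ppf(0.975)              # ~1.96 SD: boundary of the central 95%

if abs(z) <= critical:
    print(f"z = {z:.2f} falls in the region of acceptance (within ±{critical:.2f}): retain H0")
else:
    print(f"z = {z:.2f} falls in the region of rejection (beyond ±{critical:.2f}): reject H0, accept Ha")
```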
ONE-TAILED TEST VS. TWO-TAILED TEST
ONE-TAILED TEST
• A hypothesis stating the direction (higher or lower) in which a sample statistic will differ from the population or another group.
• Also known as a directional hypothesis.
TWO-TAILED TEST
• A hypothesis stating that results from a sample will differ from the population or another group, but without stating how the results will differ.
• Also known as a non-directional hypothesis.
Critical value
• The value of a test statistic that marks off the extreme 5% of the distribution for a one-tailed test, or the extreme 2.5% in each tail of the distribution for a two-tailed test (at a 5% criterion level).
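A short sketch of how the critical value differs between one- and two-tailed tests at a 5% criterion level, using the standard normal distribution (scipy is an assumed tool here; the slides do not name one).

```python
# Critical z-values at alpha = .05 for one- and two-tailed tests (standard normal).
from scipy.stats import norm

alpha = 0.05
one_tailed = norm.ppf(1 - alpha)        # ~1.645: the extreme 5% sits in a single tail
two_tailed = norm.ppf(1 - alpha / 2)    # ~1.960: the extreme 2.5% sits in each tail

print(f"one-tailed critical z: {one_tailed:.3f}")
print(f"two-tailed critical z: ±{two_tailed:.3f}")
```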
Setting the Criterion Level (p)
• Criterion level: The percentage of a sampling distribution that the researcher selects for the region of rejection; typically researchers set it at 5%, rejecting H0 when p < .05.
ERRORS OF HYPOTHESIS TESTING
TYPE I and TYPE II ERRORS
TYPE I ERROR
• The probability of rejecting a true H0; it equals the significance (criterion) level set for the test.
TYPE II ERROR
• The probability of incorrectly retaining a false H0.
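A simulation sketch of the Type I error rate, with assumed numbers (population mean 50, SD 10, n = 30, one-sample t-test via scipy): when H0 is really true, the share of samples we wrongly reject settles near the criterion level.

```python
# Monte Carlo sketch: sampling repeatedly from a population where H0 is true,
# the proportion of (wrong) rejections approaches the criterion level (alpha).
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000
false_rejections = 0

for _ in range(trials):
    sample = rng.normal(loc=50, scale=10, size=n)      # H0 (mean = 50) is actually true here
    if ttest_1samp(sample, popmean=50).pvalue < alpha:
        false_rejections += 1                          # rejecting a true H0 = a Type I error

print(f"observed Type I error rate: {false_rejections / trials:.3f} (criterion level = {alpha})")
```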
REDUCING THE CHANCE OF A TYPE I ERROR
• You can make your study less vulnerable to a Type I error by changing your criterion level, i.e., setting it more stringently (e.g., p < .01 rather than p < .05).
• Most of the time, social scientists use p < .05.
REDUCING THE CHANCE OF A TYPE II ERROR
1. Sample size (larger sample size = more
power)
• To have power, your sample must include enough members of the population for you to be confident that the results found in your study reflect those that exist in the population.
• The larger the sample size, the more confident we can be that the mean of our sample approaches the mean of the population from which the sample is drawn (see the simulation sketch after this list).
2. Amount of error in the research design
(less error = more power)
• Within-groups variance (or error variance):
The differences in your sample measure that
are not accounted for in the study.
• Homogeneity of the sample: The degree to
which the members of a sample have similar
characteristics.
• Sensitivity: The ability of a measurement
instrument to detect differences.
3. Strength of the effect (stronger effect =
more power)
• The strength of the effect refers to the magnitude or intensity of a pattern or relationship; a stronger effect increases the likelihood that the pattern or relationship will be detected.
• The strength of the effect is something that
we measure in research, and as such you do
not have as much ability to directly impact it
as you do with sample size and error.
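The sample-size point above can be illustrated with a small simulation (all numbers assumed: true population mean 53 vs. a null mean of 50, SD 10, one-sample t-test via scipy): as n grows, a real but modest effect is detected more often, so power rises and the Type II error rate falls.

```python
# Monte Carlo sketch of power vs. sample size for an assumed modest effect.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
alpha, true_mean, null_mean, sd, trials = 0.05, 53.0, 50.0, 10.0, 5_000

for n in (20, 50, 200):
    rejections = sum(
        ttest_1samp(rng.normal(true_mean, sd, n), popmean=null_mean).pvalue < alpha
        for _ in range(trials)
    )
    power = rejections / trials                        # chance of correctly rejecting a false H0
    print(f"n = {n:3d}: power ≈ {power:.2f}, Type II error rate ≈ {1 - power:.2f}")
```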
