EXPERIMENTAL RESEARCH

Quantitative Research in ELT


What is experimental research?
• The researcher manipulates at least one independent
variable, while controlling other relevant variables,
and observes the effect on one or more dependent
variables.
Characteristics of Experimental Research
Random Assignment
• Random assignment is “the process of assigning individuals at
random to groups or to different groups in an experiment”
(Creswell, 2008, p. 300).
• The purpose of random assignment is to ensure that the groups
receiving different treatments are as equal or similar as possible
in any way that could impact the outcome (Slavin, 2007).
• It is important to note that random assignment and random
selection do not refer to the same process. Rather, random
selection refers to “the process of selecting individuals so
each individual has an equal chance of being
selected” (Creswell, 2008, p. 645).
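
The distinction can be sketched in a few lines of Python; the population, sample size, and group sizes below are invented for illustration, not taken from the source. Random selection draws the sample from a population, and random assignment then allocates that sample to the comparison groups.

import random

# Illustrative sketch only: the population, sample size, and group sizes
# below are assumptions, not taken from the source.
population = [f"student_{i:03d}" for i in range(200)]   # sampling frame

# Random selection: each individual in the population has an equal chance
# of being chosen for the study (Creswell's sense of selecting a sample).
sample = random.sample(population, k=40)

# Random assignment: the selected participants are then allocated at random
# to the treatment conditions, so the groups start out comparable.
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))   # -> 20 20
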
Control of extraneous variables
• Extraneous variables are participant or environmental
variables that “confuse relationships among the variables
being studied” (Slavin, 2007, p. 384). Participant variables that
act as extraneous variables include both intervening variables
and organismic variables. Intervening variables are
variables that are not directly observable (e.g., anxiety,
boredom) and must be controlled for by researchers.
Organismic variables, such as participants' gender, cannot be
altered by researchers, but they too can be controlled.
• Extraneous variables are controlled through random selection
and random assignment.
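
As a rough illustration of why random assignment controls such variables, the sketch below (with a hypothetical participant list and invented group sizes) shows that an organismic variable like gender tends to end up similarly distributed across randomly formed groups.

import random
from collections import Counter

# Hypothetical participants: the gender mix and group sizes are invented
# purely to illustrate the balancing effect of random assignment.
random.seed(4)
participants = [{"id": i, "gender": random.choice(["F", "M"])} for i in range(60)]

# Random assignment into two groups of 30.
random.shuffle(participants)
experimental, control = participants[:30], participants[30:]

print(Counter(p["gender"] for p in experimental))
print(Counter(p["gender"] for p in control))
# With reasonable group sizes the two distributions are usually similar,
# so the organismic variable is unlikely to explain outcome differences.
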
Manipulation of treatment
• Experimental research requires the manipulation of at least
one independent variable. This means that researchers decide
which variable will serve as the independent variable to be
manipulated and which group of participants will receive the
treatment (Gay et al., 2006).
Measurement of outcomes
• In an experiment the two variables of major interest are the
independent variable and the dependent variable. The independent
variable is manipulated (changed) by the experimenter. The variable
on which the effects of the changes are observed is called the
dependent variable, which is observed but not manipulated by the
experimenter.
• The effect of the manipulation of the independent variable on the
dependent variable is observed.
• To examine the effect of different teaching methods on achievement
in reading, a researcher would manipulate method (the
independent variable) by using different teaching methods in order
to assess their effect on reading achievement (the dependent
variable).
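
As a concrete sketch of measuring the outcome in this example, the snippet below compares simulated post-test reading scores for two hypothetical method groups with an independent-samples t-test; it assumes SciPy is available, and all scores are generated for illustration only.

import random
from scipy import stats   # assumes SciPy is installed

# Simulated, illustrative scores only: method = independent variable,
# reading achievement (post-test score) = dependent variable.
random.seed(1)
method_a_scores = [random.gauss(70, 8) for _ in range(30)]
method_b_scores = [random.gauss(74, 8) for _ in range(30)]

# Compare the groups on the dependent variable; the t-test asks whether
# the observed difference is larger than chance variation would explain.
t_stat, p_value = stats.ttest_ind(method_a_scores, method_b_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
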
Validity of experimental research
• Internal validity, in experimental research, refers both to how
well the study was run (research design, operational definitions
used, how variables were measured, what was/wasn't measured,
etc.) and to how confidently one can conclude that the change
in the dependent variable was produced solely by the
independent variable and not by extraneous variables.
• External validity is the extent to which a study's results can be
generalized/applied to other people or settings.
Threats to internal validity
1. History--These are the unique experiences subjects have between the
various measurements done in an experiment. These experiences function
like extra, and unplanned, independent variables. Compounding this, the
experiences are likely to vary across subjects, which has a differential effect
on the subjects' responses. Studies that take repeated measures on
subjects over time are more likely to be affected by history variables than
those that collect data in shorter time periods, or that do not use repeated
measures.
2. Maturation--These are natural (rather than experimenter imposed)
changes that occur as a result of the normal passage of time. For example,
the more time that passes in a study the more likely subjects are to
become tired and bored, more or less motivated as a function of hunger or
thirst, older, etc. As Isaac and Michael (1971) point out, subjects may
perform better or worse on a dependent variable not as a result of the
independent variable but because they are older, more/less motivated,
etc.
3. Testing--Many experiments pretest subjects to establish that
all the subjects are starting the study at approximately the
same level, etc. A consequence of pretesting
programs/protocols is that they can contaminate/change the
subjects' performance on later tests (e.g., those used as
dependent variables) that measure the same domain beyond
any effects caused by the treatment itself.
4. Instrumentation--Changing the measurement methods (or
their administration) during a study affects what is measured.
Additionally, if human observers are used, it may be the
judgment of the observer(s) that changes over time rather than
the subjects' performance.
5. Statistical Regression--When subjects in a study are selected as
participants because they scored extremely high or extremely low on some
measure of performance (e.g., a test), retesting the subjects will almost
always produce a different distribution of scores, and the average of this
new distribution will be closer to the population mean. For example, if the
chosen subjects all had high scores initially, the group's average on the
retest will be lower (i.e., less extreme) than it was originally. Conversely, if
the group's mean was originally low, their retest mean would be higher. (A
brief simulation of this effect appears after this list.)
6. Selection--The subjects in comparison (e.g., the control and experimental)
groups should be functionally equivalent at the beginning of a study. If
they are, then observed differences between the groups, as measured by
the performance dependent variable(s), at the end of the study are more
likely to be caused only by the independent variable rather than by
organismic ones. If the comparison groups are different from one another
at the beginning of the study, the results of the study are biased.
7. Experimental Mortality--Subjects drop out of studies. If
one comparison group experiences a higher level of
subject withdrawal/mortality than other groups, then
observed differences between groups become
questionable. Were the observed differences produced by
the independent variable or by the different drop out
rates? (Mortality is also a threat when drop out rates are
similar across comparison groups but high.)
8. Selection Interactions--In some studies the selection
method interacts with one or more of the other threats
(described above), biasing the study's results.
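
The statistical regression threat (item 5 above) can be demonstrated with a short simulation; every number below is an arbitrary illustration rather than data from any study.

import random

# Arbitrary illustration: 1,000 "true" abilities measured twice with error.
random.seed(2)
true_ability = [random.gauss(50, 10) for _ in range(1000)]

def noisy_test(ability):
    return ability + random.gauss(0, 10)   # measurement error

first_test = [noisy_test(a) for a in true_ability]
retest = [noisy_test(a) for a in true_ability]

# Select the 100 highest scorers on the first test and compare group means.
cutoff = sorted(first_test)[-100]
selected = [i for i, score in enumerate(first_test) if score >= cutoff]
mean_first = sum(first_test[i] for i in selected) / len(selected)
mean_retest = sum(retest[i] for i in selected) / len(selected)

print(f"first test mean: {mean_first:.1f}, retest mean: {mean_retest:.1f}")
# The retest mean is reliably lower, i.e. closer to the population mean,
# even though no treatment was applied between the two tests.
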
Threats to external validity
• Selection–treatment interaction
An effect found with certain kinds of subjects might not apply if other kinds of subjects
were used. Researchers should therefore use a large, random sample of participants.
• Setting–treatment interaction
An effect found in one kind of setting may not hold if other kinds of settings were
used.
• Pretest–treatment interaction
The pretest may sensitize subjects to the treatment and produce an effect that does not
generalize to an unpretested population.
• Subject effects
Subjects’ attitudes developed during study may affect the generalizability of the
results. Examples are the Hawthorne and the John Henry effects.
• Experimenter effects
Characteristics unique to a specific experimenter may limit generalizability to
situations with a different experimenter.
Experimental research design
Pre-experimental design (one-group design)
  Pre-test → treatment → post-test

Quasi-experimental design (one-group design)
  Pre-test → treatment → post-test

Quasi-experimental design (two-group design)
  Experimental group: pre-test → treatment → post-test
  Control group: pre-test → post-test

  Experimental group: treatment → post-test
  Control group: post-test
Experimental research design
True experimental design (EG = experimental group, CG = control group)
  Randomization:
  EG: pre-test → treatment → post-test
  CG: pre-test → post-test

  Randomization:
  EG: treatment → post-test
  CG: post-test
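
To make the true experimental pre-test/post-test control group design concrete, here is a minimal simulation sketch; the group sizes, score scale, and treatment effect are assumptions chosen only to walk through the design.

import random

# Invented numbers throughout: 40 participants, a pretest scale around 50,
# and an assumed treatment effect, just to walk through the design.
random.seed(3)
participants = list(range(40))
random.shuffle(participants)            # randomization
eg, cg = participants[:20], participants[20:]

pre_scores = {p: random.gauss(50, 5) for p in participants}

def post_test(pre, treated):
    effect = 8 if treated else 1        # assumed treatment effect vs. maturation
    return pre + effect + random.gauss(0, 3)

post_scores = {p: post_test(pre_scores[p], p in eg) for p in participants}

def mean_gain(group):
    return sum(post_scores[p] - pre_scores[p] for p in group) / len(group)

print(f"EG mean gain: {mean_gain(eg):.1f}, CG mean gain: {mean_gain(cg):.1f}")
# Because both groups were randomized and pretested, the difference in mean
# gains is attributed to the treatment rather than to selection or maturation.
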
Advantages and disadvantages of
Experimental Research
The most obvious advantages are control over the independent variable,
straightforward determination of causal relationships, the possibility of verifying
results through repeatability/replicability, and the opportunity to create
conditions that are not easily observed in natural settings or that would take too
long to occur.

Among the disadvantages are its unnaturalness (which may produce outcomes not
otherwise found in natural situations, or not to the same degree); the difficulty of
generalizing, that is, of applying the results to real-life situations (in part because
the sample or situation may not be representative); and ethical considerations.
Although good experimental design is a powerful tool, it cannot be applied to all
types of research problems, and results may appear significant because of
experimenter error or the inability to control for all extraneous variables. Finally,
although experimental research reveals causality (i.e., there is an effect that can
be attributed to a cause), it does not necessarily explain why the effect occurred.
