
Experimental Research Designs

Dr. Siddhartha Sen


PhD, MPT(Ortho), PGDHHM, BPT(NIOH), B.Sc
What is an Experiment?
• Research method in which
– conditions are controlled
– so that 1 or more independent variables
– can be manipulated to test a hypothesis
about a dependent variable.
• Allows
– evaluation of causal relationships among variables
– while all other variables are eliminated or
controlled.
Some Definitions
• Dependent Variable
– Criterion by which the results of the experiment are
judged.
– Variable that is expected to be dependent on the
manipulation of the independent variable
– Physical conditions, behaviors, attitudes, feelings, or
beliefs of subjects that change in response to a
treatment.
– How to measure
• Questionnaire, interviews
• Observation
• Test
• Direct outcome
• Independent Variable
– Any variable that can be manipulated, or altered,
independently of any other variable
– Cause
– Manipulation of the independent variable creates
different treatments.
• Event manipulation
– Affecting the independent variable by altering the events that
subjects experience
– Presence versus absence
• Instructional manipulation
– Varying the independent variable by giving different sets of
instructions to the subjects
More Definitions
• Experimental Treatments
– Alternative manipulations of the independent
variable being investigated
• Experimental Group
– Group of subjects exposed to the experimental
treatment
• Control Group
– Group of subjects exposed to the control condition
– Not exposed to the experimental treatment
More Definitions

• Test Unit
– Entity whose responses to experimental treatments are
being observed or measured
• Randomization
– Assignment of subjects and treatments to groups is based
on chance
– Provides “control by chance”
– Random assignment allows the assumption that the groups
are identical with respect to all variables except the
experimental treatment
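Random assignment can be sketched in a few lines. This is an illustrative example, not part of the original slides: the subject IDs and the fixed seed are assumptions added for reproducibility.

```python
import random

def randomly_assign(subjects, n_groups=2, seed=42):
    """Assign subjects to groups purely by chance ("control by chance")."""
    rng = random.Random(seed)   # fixed seed only so the sketch is reproducible
    pool = list(subjects)
    rng.shuffle(pool)
    # Deal the shuffled pool round-robin into n_groups equal groups
    return [pool[i::n_groups] for i in range(n_groups)]

# 20 hypothetical subjects split into experimental and control groups
experimental, control = randomly_assign(range(1, 21))
```

Because assignment depends only on the shuffle, the groups can be assumed equivalent on all variables except the treatment.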
Constant Error (bias)
• Constant error is error that occurs in the same
experimental condition every time the basic
experiment is repeated – a systematic bias
• Example:
– Experimental groups always administered the treatment in
the morning
– Control groups always in the afternoon
– Introduces an uncontrolled extraneous variable – time of
day
– Hence, systematic or constant error
• Extraneous Variables
– Variables other than the manipulated variables that affect
the results of the experiment
– Can potentially invalidate the results
Sources of Constant Error
• Demand Characteristics
– Experimental design procedures or situational aspects of
the experiment that provide unintentional hints to
subjects about the experimenter’s hypothesis
– If this occurs, participants are likely to act in a manner
consistent with the experimental treatment.
– Most prominent demand characteristic is the person
actually administering the experimental treatments.
• Experimenter Bias
– Effect on the subjects’ behavior caused by an
experimenter’s presence, actions, or comments.
• Guinea Pig Effect
– Effect on experimental results caused by subjects changing
normal behavior or attitudes to cooperate with
experimenter.
Controlling Extraneous Variables
• Blinding
– Technique used to control subjects’ knowledge of
whether or not they have been given the
experimental treatment.
– Taste tests, placebos (chemically inert pills), etc.
• Constancy of Conditions
– Subjects in experimental & control groups are
exposed to identical situations except for differing
conditions of the independent variable.
Controlling Extraneous Variables
• Order of Presentation
– If experimental method requires that the same
subjects be exposed to 2 or more experimental
treatments, error may occur due to order in which
the treatments are presented
– Counterbalancing
• ½ the subjects exposed to Treatment A first, then to
Treatment B.
• Other ½ exposed to Treatment B first, then to Treatment
A.
• Eliminates the effects of order of presentation
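The counterbalancing split described above can be sketched as follows (the subject IDs and seed are illustrative assumptions):

```python
import random

def counterbalance(subject_ids, seed=7):
    """Half the subjects receive Treatment A first, the other half
    Treatment B first, so a pure order effect cancels across halves."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {("A", "B"): ids[:half], ("B", "A"): ids[half:]}

# 12 hypothetical subjects, 6 per presentation order
orders = counterbalance(range(1, 13))
```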
Experimental Validity
• Internal Validity
– Indicates whether the independent variable was
the sole cause of the change in the dependent
variable
• External Validity
– Indicates the extent to which the results of the
experiment are applicable to the real world
Extraneous Variables that Jeopardize Internal Validity

• History Effect
– Specific events in the external environment
between the 1st & 2nd measurements that are
beyond the experimenter’s control
– Common history effect occurs when competitors
change their marketing strategies during a test
marketing experiment
• Cohort Effect
– Change in the dependent variable that occurs
because members of one experimental group
experienced different historical situations than
members of other experimental groups
Extraneous Variables that Jeopardize Internal Validity

• Maturation Effect
– Effect on experimental results caused by experimental
subjects maturing or changing over time
– During a daylong experiment, subjects may grow
hungry, tired, or bored
• Testing Effect
– In before-and-after studies, pretesting may sensitize
subjects when taking a test for the 2nd time.
– May cause subjects to act differently than they would
have if no pretest measures were taken
Extraneous Variables that Jeopardize Internal Validity

• Instrumentation Effect
– Caused by a change in the wording of questions, in
interviewers, or in other procedures used to measure the
dependent variable.
• Selection Effect
– Sampling bias that results from differential selection of
respondents for the comparison groups.
• Mortality or Sample Attrition
– Results from the withdrawal of some subjects from the
experiment before it is completed
– Affects randomization
– Especially troublesome if some withdraw from one treatment
group and not from the others (or at least at different rates)
Experimental Design
• Advantages
– Best method for establishing cause-and-effect
relationships
• Disadvantages
– Artificiality of experiments
– Feasibility
Types of Experimental Designs
• Simple True Experimental
• Complex True Experimental
• Quasi-Experimental
Types of Experimental Designs
• Simple True Experimental
• Complex True Experimental
• Quasi-Experimental
Characteristics of True Designs
• Manipulation (treatment)
• Randomization
• Control group

Characteristics of simple true designs


• One IV with 2 levels (T, C)
• One DV
Types
• Randomized posttest control group design
• Randomized pretest-posttest control group
design
Randomized posttest control group design

• A.K.A., After-Only with Control


• True experimental design
• Experimental group tested after treatment exposure
• Control group tested at same time without exposure to
experimental treatment
• Includes random assignment to groups
• Effect of all extraneous variables assumed to be the
same on both groups
• Do not run the risk of a testing effect
• Use in situations where pretesting is not possible
Cont…
• Diagrammed as
– Experimental Group: X O1
– Control Group: O2
• Effect of the experimental treatment equals
(O1 – O2)
Randomized pretest-posttest control group design
• A.K.A., Before-After with Control
• True experimental design
• Experimental group tested before and after
treatment exposure
• Control group tested at same two times
without exposure to experimental treatment
• Includes random assignment to groups
• Effect of all extraneous variables assumed to
be the same on both groups
• Do run the risk of a testing effect
Cont…
• Diagrammed as
– Experimental Group: O1 X O2
– Control Group: O3 O4
• Effect of the experimental treatment equals
(O2 – O1) – (O4 – O3)
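The effect formula is a difference-in-differences: the experimental group's change minus the control group's change. A minimal sketch, with illustrative scores:

```python
def treatment_effect(o1, o2, o3, o4):
    """(O2 - O1) - (O4 - O3): subtracting the control group's change
    removes influences shared by both groups (history, maturation,
    testing)."""
    return (o2 - o1) - (o4 - o3)

# Illustrative scores: experimental group improves 50 -> 65, control 50 -> 55
effect = treatment_effect(o1=50, o2=65, o3=50, o4=55)  # 10-point effect
```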
Advantages & Disadvantages
• Advantages of pretest design
– Equivalency of groups
– Can measure extent of change
– Determine inclusion
– Assess reasons for and effects of mortality
• Disadvantages of pretest design
– Time-consuming
– Sensitization to pre-test
Multigroup Pretest-Posttest Design
Solomon Four-Group Design
• True experimental design
• Combines pretest-posttest with control group
design and the posttest-only with control
group design
• Provides means for controlling the interactive
testing effect and other sources of extraneous
variation
• Does include random assignment
Solomon four-group design
• Diagrammed as
– Experimental Group 1: O1 X O2
– Control Group 1: O3 O4
– Experimental Group 2: X O5
– Control Group 2: O6
• Effect of independent variable (O2 – O4) & (O5 – O6)
• Effect of pretesting (O4 – O6)
• Effect of pretesting & measuring (O2 – O5)
• Effect of random assignment (O1 – O3)
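The four-group contrasts above can be collected in one place. The observation scores below are invented purely for illustration (a 10-point treatment effect, a 2-point pretesting effect, and successful randomization):

```python
def solomon_effects(o):
    """Contrasts from the Solomon four-group design, using the
    observation labels O1..O6 from the diagram."""
    return {
        "treatment (pretested)":   o["O2"] - o["O4"],
        "treatment (unpretested)": o["O5"] - o["O6"],
        "pretesting":              o["O4"] - o["O6"],
        "pretesting & measuring":  o["O2"] - o["O5"],
        "randomization check":     o["O1"] - o["O3"],
    }

scores = {"O1": 50, "O2": 62, "O3": 50, "O4": 52, "O5": 60, "O6": 50}
effects = solomon_effects(scores)
```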
Types of Experimental Designs
• Simple True Experimental
• Complex True Experimental
• Quasi-Experimental
• Pre-Experimental
Complex True Experimental
• Factorial design
• Randomized block design
• Repeated Measure Design
• Crossover design
Factorial Design
• A factorial design incorporates two or more independent variables, with
independent groups of subjects randomly assigned to various
combinations of levels of the two variables. Although such designs can
theoretically be expanded to include any number of variables, clinical
studies usually involve two or three at most.

• Factorial designs are described according to their dimensions or number


of factors, so that a two-way or two-factor design has two independent
variables, a three-way or three-factor design has three independent
variables, and so on.
• These designs can also be described by the number of levels within each
factor, so that a 3 x 3 design includes two variables, each with three
levels, and a 2 x 3 x 4 design includes three variables, with two, three and
four levels, respectively.
Two-Way Factorial Design
• A two-way factorial design incorporates two independent
variables, A and B.
• Example : Researchers were interested in studying the effect
of intensity and location of exercise programs on the self-
efficacy of sedentary women. Using a 2 x 2 factorial design,
subjects were randomly assigned to one of four groups,
receiving a combination of moderate or vigorous exercise at
home or a community center. The change in their exercise
behavior and their self-efficacy in maintaining their exercise
program was monitored over 18 months.
Two-Way Factorial Design
In this example, the two independent variables are intensity of exercise
(A) and location of exercise (B), each with two levels (2 x
2). One group (A1B1) will engage in moderate exercise
at home. A second group (A2B1) will engage in vigorous
exercise at home. The third group (A1B2) will engage in
moderate exercise at a community center. And the
fourth group (A2B2) will engage in vigorous exercise at a
community center. The two independent variables are
completely crossed in this design, which means that
every level of one factor is represented at every level of
the other factor.
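"Completely crossed" means the design cells are the Cartesian product of the factor levels. A small sketch using the factor labels from the exercise example:

```python
from itertools import product

intensity = ["moderate", "vigorous"]       # factor A, 2 levels
location = ["home", "community center"]    # factor B, 2 levels

# Every level of A is paired with every level of B: 2 x 2 = 4 cells
cells = list(product(intensity, location))
```

A 2 x 3 x 4 design would simply add a third list and yield 24 cells.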
Three-Way Factorial Design
• Factorial designs can be extended to include more than two
independent variables.
• In a three-way factorial design the relationship among variables
can be conceptualized in a three-dimensional format. We can
also think of it as a two-way design crossed on a third factor.
• We could expand the exercise study to include a third variable
such as frequency of exercise.
• We would then evaluate the simultaneous effect of intensity,
location and frequency of exercise.
• We could assign subjects to exercise 1 day or 3 days per week.
Then we would have a 2 x 2 x 2 design, with subjects assigned to
one of 8 independent groups
Three-Way Factorial Design

Analysis of Factorial Designs. A two-way or three-way analysis of variance is most
commonly used to examine the main effects and interaction effects of a factorial
design.
Randomized Block Design
• When a researcher is concerned that an extraneous factor
might influence differences between groups, one way to
control for this effect is to build the variable into the design
as an independent variable.

• The randomized block design is used when an attribute
variable, or blocking variable, is crossed with an active
independent variable; that is, homogeneous blocks of
subjects are randomly assigned to levels of a manipulated
treatment variable. In the following example, we have a 2 x
3 randomized block design, with a total of 6 groups.
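Blocked randomization means shuffling within each block separately. A sketch mirroring the drug-dose example that follows (12 men, 12 women, three doses; subject IDs and seed are illustrative):

```python
import random

def assign_within_blocks(blocks, treatments, seed=3):
    """Randomize subjects to treatment levels separately within each
    homogeneous block (here, each gender group)."""
    rng = random.Random(seed)
    cells = {}
    for block, members in blocks.items():
        pool = list(members)
        rng.shuffle(pool)
        size = len(pool) // len(treatments)
        for i, t in enumerate(treatments):
            cells[(block, t)] = pool[i * size:(i + 1) * size]
    return cells

blocks = {"men": list(range(1, 13)), "women": list(range(13, 25))}
doses = ["0.5 mg/kg", "1.5 mg/kg", "3.0 mg/kg"]
cells = assign_within_blocks(blocks, doses)  # 6 cells of 4 subjects each
```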
Randomized Block Design
A study was performed to assess the
action of an antiarrhythmic agent in
healthy men and women after a single
intravenous dose. Researchers wanted to
determine if effects were related to dose
and gender. Twenty-four subjects were
recruited, 12 men and 12 women. Each
gender group was randomly assigned to
receive 0.5, 1.5 or 3.0 mg/kg of the drug
for 2 minutes. Therefore, 4 men and 4
women received each dose.
Through blood tests, volume of distribution of the drug at
steady state was assessed before and 72 hours after drug
administration.

Analysis of Randomized Block Designs. Data from a
randomized block design can be analyzed using a two-way
analysis of variance, multiple regression or discriminant
analysis.
Repeated Measures Designs
• In a repeated measures design, one group of subjects is
tested under all conditions and each subject acts as his
own control.
• Conceptually, a repeated measures design can be
considered a series of trials, each with a single
subject. Therefore, such a design is also called a
within-subjects design, because treatment effects
are associated with differences observed within a
subject across treatment conditions, rather than
between subjects across randomized groups.
Repeated Measures Designs
• Therefore, repeated measures can only be used when the
outcome measure will revert back to baseline between
interventions, and the patient problem will remain relatively
stable throughout the study period.
• There are many treatments for which carryover cannot be
eliminated. For example, if we evaluate the effects of different
exercise programs for increasing strength over a 4-week period,
the effects of each exercise regimen will probably be long lasting,
and rest periods will be ineffective for reversing the effect.
• With variables that produce permanent or long-term
physiological or psychological effects, repeated measures
designs are not appropriate.
One-Way Repeated Measures Design
• Example of a One-Way Repeated Measures Design
• Researchers were interested in the effect of using a cane on the
intramuscular forces on prosthetic hip implants during walking.
• They studied 24 subjects with unilateral prosthetic hips under
three conditions: walking with a cane on the side contralateral to
the prosthesis, on the same side as the prosthesis, and on the
contralateral side with instructions to push with "near maximal
effort."
• They monitored electromyographic (EMG) activity of hip abductor
muscles and cane force under each condition.
• The order of testing under the three test conditions was
randomly assigned.
One-Way Repeated Measures Design
• the researchers wanted to examine EMG activity of the hip
abductor muscles, with all subjects exposed to all three cane
conditions.
• It would be possible to use a randomized design to investigate this
question, by assigning different groups to each condition, but it
doesn't make logical sense. By using a repeated measures format
we can be assured that differences across conditions are a function
of cane use, and not individual physiological differences.
• In this example, the independent variable does not present a
problem of carryover that would preclude one subject participating
at all three levels. This design is commonly referred to as a one-
way repeated measures design.
Order Effects
• Because subjects are exposed to multiple-
treatment conditions in a repeated measures
design, there must be some concern about the
potentially biasing effect of test sequence;
• that is, the researcher must determine if
responses might be dependent on which condition
preceded which other condition. Effects such as
fatigue, learning or carryover may influence
responses if subjects are all tested in the same
order.
Order Effects
• One solution to the problem of order effects is to randomize
the order of presentation for each subject, often by the flip of
a coin, so that there is no bias involved in choosing the order
of testing
• A second solution utilizes a Latin Square, which is a matrix
composed of equal numbers of rows and columns, designating
random permutations of sequence combinations.
• For example, in the cane study, if we had 30 subjects, we could
assign 10 subjects to each of three sequences, as shown in
Figure 10.10. Using random assignment, we would determine
which group would get each sequence, and then assign each
testing condition to A, B or C.
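A simple cyclic construction produces a Latin square in which each condition appears exactly once in every row (sequence) and every column (test position). Note this sketch is an assumption about the construction, not the figure from the source; a cyclic square does not also balance which condition immediately follows which (a Williams square is needed for that).

```python
def latin_square(conditions):
    """Cyclic Latin square: shift the condition list by one for each row."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Three sequences, e.g. A-B-C, B-C-A, C-A-B, one per subgroup of 10 subjects
sequences = latin_square(["A", "B", "C"])
```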
Analysis of One-Way Repeated Measures Designs.
The one-way analysis of variance for repeated measures is
used to test for differences across levels of one repeated factor.
Crossover Design
• When only two levels of an independent variable are
repeated, a preferred method to control for order
effects is to counterbalance the treatment conditions
so that their order is systematically varied.
• This creates a crossover design in which half the
subjects receive Treatment A followed by B, and half
receive B followed by A. Two subgroups are created,
one for each sequence, and subjects are randomly
assigned to one of the sequences.
Example of a Crossover Design
• Researchers were interested in comparing the
effects of prone and supine positions on stress
responses in mechanically ventilated preterm
infants.16 They randomly assigned 28 infants to a
supine/prone or prone/supine position sequence.
Infants were placed in each position for 2 hours.
Stress signs were measured following each 2 hour
period, including startle, tremor, and twitch
responses.
Crossover Design

Analysis
In the analysis of a crossover design, researchers will usually group scores by
treatment condition, regardless of which order they were given. A paired t-test
can then be used to compare change scores, or a two-way analysis of variance
with two repeated measures can be used to compare pretest and posttest
measures across both treatment conditions. The Wilcoxon signed-ranks test
should be used to look at change scores when ordinal data are used.
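The paired t-test on scores grouped by condition can be sketched by hand; the formula below is the standard paired t-statistic (the scores are invented for illustration and are not from the preterm-infant study):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t-statistic on the difference scores x_i - y_i:
    mean(d) / (sd(d) / sqrt(n)), with the sample (n-1) standard deviation."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Illustrative stress scores pooled by condition, regardless of order received
prone = [5.0, 6.0, 9.0]
supine = [4.0, 4.0, 6.0]
t_stat = paired_t(prone, supine)
```

In practice a statistics package would also return the p-value for the computed t.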
Types of Experimental Designs
• Simple True Experimental
• Complex True Experimental
• Quasi-Experimental
Characteristics of True Designs
• Manipulation (treatment)
• Randomization
• Control group

Quasi-experimental designs, by contrast, offer:
• Less control
• More real-world settings
• Suitability for program evaluation
Randomized posttest control group design

R T Post
R C Post
Randomized pretest-posttest control group
design
R Pre T Post
R Pre C Post
Quasi-experimental Designs
• One group posttest-only design
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Quasi-experimental Designs
• One group posttest-only design
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Randomized posttest control group design

R T Post
R C Post
One group posttest-only design (One shot
study)
T Post

No control of internal validity threats
When is it useful?
One-Shot Design
• A.K.A. – after-only design
• A single measure is recorded after the treatment
is administered
• Study lacks any comparison or control of
extraneous influences
• No measure of test units not exposed to the
experimental treatment
• May be the only viable choice in taste tests
• Diagrammed as: X O1
Quasi-experimental Designs
• One shot study
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Randomized pretest-posttest control group
design
R Pre T Post
R Pre C Post
One group pretest-posttest design
Pre T Post

• History
• Maturation
• Testing
• Instrument decay
• Regression

Remedy: use a control group
One-Group Pretest-Posttest Design
• Subjects in the experimental group are
measured before and after the treatment is
administered.
• No control group
• Offers comparison of the same individuals
before and after the treatment (e.g., training)
• If time between 1st & 2nd measurements is
extended, may suffer maturation
• Can also suffer from history, mortality, and
testing effects
• Diagrammed as O1 X O2
Quasi-experimental Designs
• One shot study
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Randomized posttest control group design

R T Post
R C Post
Non-equivalent control group design
(Static group comparison design)
T Post
C Post

•Selection bias
Static Group Design
• A.K.A., after-only design with control group
• Experimental group is measured after being exposed to
the experimental treatment
• Control group is measured without having been exposed
to the experimental treatment
• No pre-measure is taken
• Major weakness is lack of assurance that the groups
were equal on variables of interest prior to the
treatment
• Diagrammed as: Experimental Group X O1
Control Group O2
Quasi-experimental Designs
• One shot study
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Randomized pretest-posttest control group
design
R Pre T Post
R Pre C Post
Non-equivalent control group pretest-
posttest design
Pre T Post
Pre C Post

•Can check selection bias


Quasi-experimental Designs
• One shot study
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Time series
Pre Pre Pre Pre T Post Post Post Post
Time Series Design
• Involves periodic measurements on the
dependent variable for a group of test units
• After multiple measurements, experimental
treatment is administered (or occurs naturally)
• After the treatment, periodic measurements are
continued in order to determine the treatment
effect
• Diagrammed as:
O1 O2 O3 O4 X O5 O6 O7 O8
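A naive estimate of the treatment effect compares the mean of the post-treatment observations with the mean of the pre-treatment observations. The measurements below are illustrative; a real interrupted time-series analysis would also model trend and autocorrelation (e.g., segmented regression).

```python
def level_shift(series, treatment_index):
    """Naive interrupted time-series estimate: mean of post-treatment
    observations minus mean of pre-treatment observations."""
    pre, post = series[:treatment_index], series[treatment_index:]
    return sum(post) / len(post) - sum(pre) / len(pre)

# Illustrative O1..O8, with treatment X after the fourth observation
obs = [10, 11, 10, 11, 14, 15, 14, 15]
shift = level_shift(obs, 4)  # post mean 14.5 minus pre mean 10.5
```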
Quasi-experimental Designs
• One shot study
• One group pretest-posttest design
• Non-equivalent control group design
• Non-equivalent control group pretest-posttest design
• Time series
• Single subject designs (Case study)
• Developmental designs
Thanks
