
Experimental Research (Scientific Inquiry)

McGraw-Hill, © 2006 The McGraw-Hill Companies, Inc. All rights reserved.

What is Research?

"Research is an unusually stubborn and persisting effort to think straight, which involves the gathering and the intelligent use of relevant data."
H. M. Hamlin, "What Is Research?", American Vocational Journal, September 1966.


What is an Experiment?

A research method in which conditions are controlled so that one or more independent variables can be manipulated to test a hypothesis about a dependent variable.

Allows evaluation of causal relationships among variables while all other variables are eliminated or controlled.

Uniqueness of Experimental Research

Experimental Research is unique in two important respects:

1) It is the only type of research that attempts to influence a particular variable.
2) It is the best type of research for testing hypotheses about cause-and-effect relationships.

Experimental Research looks at the following variables:

- Independent variable (treatment)
- Dependent variable (outcome)


Characteristics of Experimental Research


- The researcher manipulates the independent variable and decides the nature and the extent of the treatment.
- After the treatment has been administered, researchers observe or measure the groups receiving the treatments to see if they differ.
- Experimental research enables researchers to go beyond description and prediction and attempt to determine what caused the observed effects.


Some Definitions

Dependent Variable

- Criterion by which the results of the experiment are judged.
- Expected to depend on the manipulation of the independent variable.

Independent Variable

- Any variable that can be manipulated, or altered, independently of any other variable.
- Hypothesized to be the causal influence.


More Definitions

Experimental Treatments

- Alternative manipulations of the independent variable being investigated.

Experimental Group

- Group of subjects exposed to the experimental treatment.

Control Group

- Group of subjects exposed to the control condition; not exposed to the experimental treatment.


More Definitions

Test Unit

- Entity whose responses to experimental treatments are being observed or measured.

Randomization

- Assignment of subjects and treatments to groups based on chance.
- Provides control by chance.
- Random assignment allows the assumption that the groups are identical with respect to all variables except the experimental treatment.
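A minimal sketch of random assignment in Python (the subject labels, group names, and seed are hypothetical, not from the slides): shuffling the subject list and dealing subjects out in turn gives every subject an equal chance of landing in either group.

```python
import random

def randomly_assign(subjects, groups=("experimental", "control"), seed=None):
    """Randomly assign each subject to one of the groups.

    Shuffling the list and dealing subjects out in turn gives every subject
    an equal chance of landing in any group, so the groups should differ
    only by chance on everything except the treatment.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return {subject: groups[i % len(groups)] for i, subject in enumerate(shuffled)}

# Example: six hypothetical subjects split into two groups of three
print(randomly_assign(["S1", "S2", "S3", "S4", "S5", "S6"], seed=42))
```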

Constant Error (Bias)

- Error that occurs in the same experimental condition every time the basic experiment is repeated; a systematic bias.
- Example:
  - Experimental groups are always administered the treatment in the morning, control groups always in the afternoon.
  - This introduces an uncontrolled extraneous variable (time of day), and hence systematic, or constant, error.

Extraneous Variables

- Variables other than the manipulated variables that affect the results of the experiment.
- Can potentially invalidate the results.

Sources of Constant Error

Demand Characteristics

- Experimental design procedures or situational aspects of the experiment that provide unintentional hints to subjects about the experimenter's hypothesis.
- If this occurs, participants are likely to act in a manner consistent with the experimental treatment.
- The most prominent demand characteristic is the person actually administering the experimental treatments.

Experimenter Bias

- Effect on the subjects' behavior caused by an experimenter's presence, actions, or comments.

Guinea Pig Effect

- Effect on experimental results caused by subjects changing their normal behavior or attitudes in order to cooperate with the experimenter.

Controlling Extraneous Variables

Blinding

- Technique used to control subjects' knowledge of whether or not they have been given the experimental treatment.
- Examples: taste tests, placebos (chemically inert pills).

Constancy of Conditions

- Subjects in the experimental and control groups are exposed to identical situations except for the differing conditions of the independent variable.

Controlling Extraneous Variables

Order of Presentation

- If the experimental method requires that the same subjects be exposed to two or more experimental treatments, error may occur due to the order in which the treatments are presented.

Counterbalancing

- Half of the subjects are exposed to Treatment A first and then to Treatment B; the others are exposed to Treatment B first and then to Treatment A.
- Eliminates the effects of order of presentation.

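A minimal sketch of how counterbalanced orders might be generated (the subject labels, treatment names, and helper function are hypothetical illustrations, not from the slides):

```python
import random

def counterbalanced_orders(subjects, treatments=("A", "B"), seed=None):
    """Assign half of the subjects to receive the treatments in one order
    and the other half in the reverse order, so that order-of-presentation
    effects cancel out across the groups."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    order_ab = tuple(treatments)               # e.g. A then B
    order_ba = tuple(reversed(treatments))     # e.g. B then A
    return {s: (order_ab if i < half else order_ba) for i, s in enumerate(shuffled)}

# Example with six hypothetical subjects
print(counterbalanced_orders(["S1", "S2", "S3", "S4", "S5", "S6"], seed=7))
```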

Experimental Validity

Internal Validity

- Indicates whether the independent variable was the sole cause of the change in the dependent variable.

External Validity

- Indicates the extent to which the results of the experiment are applicable to the real world.


Extraneous Variables that Jeopardize Internal Validity

History Effect

- Specific events in the external environment between the first and second measurements that are beyond the experimenter's control.
- A common history effect occurs when competitors change their marketing strategies during a test-marketing experiment.

Cohort Effect

- Change in the dependent variable that occurs because members of one experimental group experienced different historical situations than members of other experimental groups.


Extraneous Variables that Jeopardize Internal Validity

Maturation Effect

- Effect on experimental results caused by experimental subjects maturing or changing over time.
- During a daylong experiment, subjects may grow hungry, tired, or bored.

Testing Effect

- In before-and-after studies, pretesting may sensitize subjects when they take the test for the second time.
- May cause subjects to act differently than they would have if no pretest measures were taken.

Extraneous Variables that Jeopardize Internal Validity

Instrumentation Effect

- Caused by a change in the wording of questions, in interviewers, or in other procedures used to measure the dependent variable.

Selection Effect

- Sampling bias that results from differential selection of respondents for the comparison groups.

Mortality (Sample Attrition)

- Results from the withdrawal of some subjects from the experiment before it is completed.
- Affects randomization.
- Especially troublesome if some subjects withdraw from one treatment group and not from the others (or withdraw at different rates).


Weak Experimental Designs

The following designs are considered weak since they do not have built-in controls for threats to internal validity:

The One-Shot Case Study

- A single group is exposed to a treatment and its effects are assessed.

The One-Group Pretest-Posttest Design

- A single group is measured both before and after exposure to a treatment.

The Static-Group Comparison Design

- Two intact groups receive two different treatments.


Example of a One-Shot Case Study Design (Figure 13.1)


Example of a One-Group Pretest-Posttest Design (Figure 13.2)


Example of a Static-Group Comparison Design (Figure 13.3)


True Experimental Designs


The essential ingredient of a true experiment is random assignment of subjects to treatment groups. Random assignment is a powerful tool for controlling threats to internal validity.

The Randomized Posttest-Only Control Group Design

- The two randomly formed groups receive different treatments, and only a posttest is given.

The Randomized Pretest-Posttest Control Group Design

- A pretest is included in this design.

The Randomized Solomon Four-Group Design

- Four groups are used, with two pretested and two not pretested.

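Because random assignment makes the groups comparable at the outset, the analysis of a posttest-only control group design can be as simple as comparing posttest means. A minimal sketch with hypothetical scores (not from the slides):

```python
from statistics import mean

# Hypothetical posttest scores for two randomly formed groups
treatment_scores = [78, 85, 81, 90, 76, 84]
control_scores   = [72, 70, 79, 75, 68, 74]

# Because subjects were randomly assigned, the groups are assumed comparable
# before treatment, so the posttest difference estimates the treatment effect.
difference = mean(treatment_scores) - mean(control_scores)
print(f"Treatment mean: {mean(treatment_scores):.1f}")
print(f"Control mean:   {mean(control_scores):.1f}")
print(f"Estimated treatment effect: {difference:.1f}")
# In practice this difference would also be tested for statistical
# significance (e.g., with an independent-samples t test).
```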

Example of a Randomized Posttest-Only Control Group Design (Figure 13.4)


Example of a Randomized Pretest-Posttest Control Group Design (Figure 13.5)


Example of a Randomized Solomon Four-Group Design (Figure 13.6)


Random Assignment with Matching

- To increase the likelihood that groups of subjects will be equivalent, pairs of subjects may be matched on certain variables.
- Members of matched groups are then assigned to the experimental or control groups.
- Matching can be mechanical or statistical.


A Randomized Posttest-Only Control Group Design, Using Matched Subjects (Figure 13.7)


Mechanical and Statistical Matching

- Mechanical matching is a process of pairing two persons whose scores on a particular variable are similar.
- Statistical matching does not necessitate a loss of subjects, nor does it limit the number of matching variables.
  - Each subject is given a predicted score on the dependent variable, based on the correlation between the dependent variable and the variable on which the subjects are being matched.
  - The difference between the predicted and actual scores for each individual is then used to compare the experimental and control groups (see the sketch below).
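A minimal sketch of the statistical matching procedure described above, with hypothetical aptitude and achievement data (the variable names, scores, and group labels are illustrative assumptions): each subject's achievement is predicted from the matching variable via the correlation, and the residuals are compared across groups.

```python
from statistics import mean, stdev

def residual_scores(matching_var, dependent_var):
    """Statistical matching: predict each subject's dependent-variable score
    from the matching variable, then return actual-minus-predicted residuals.

    Prediction uses the simple regression implied by the correlation:
        predicted_y = mean_y + r * (sd_y / sd_x) * (x - mean_x)
    """
    mx, my = mean(matching_var), mean(dependent_var)
    sx, sy = stdev(matching_var), stdev(dependent_var)
    n = len(matching_var)
    # Pearson correlation between matching variable and dependent variable
    r = sum((x - mx) * (y - my) for x, y in zip(matching_var, dependent_var)) / ((n - 1) * sx * sy)
    predicted = [my + r * (sy / sx) * (x - mx) for x in matching_var]
    return [actual - pred for actual, pred in zip(dependent_var, predicted)]

# Hypothetical data: pretest aptitude (matching variable) and posttest achievement
aptitude    = [52, 61, 45, 70, 58, 66, 49, 63]
achievement = [48, 66, 41, 75, 60, 70, 45, 68]
group       = ["E", "C", "E", "C", "E", "C", "E", "C"]   # E = experimental, C = control

res = residual_scores(aptitude, achievement)
exp_mean = mean(r_ for r_, g in zip(res, group) if g == "E")
ctl_mean = mean(r_ for r_, g in zip(res, group) if g == "C")
print(f"Mean residual, experimental: {exp_mean:.2f}; control: {ctl_mean:.2f}")
```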

Quasi-Experimental Designs

Quasi-experimental designs do not include random assignment but use other techniques to control for threats to internal validity:

The Matching-Only Design

- Similar to random assignment with matching, except that no random assignment occurs.

The Counterbalanced Design

- All groups are exposed to all treatments, but in a different order.

The Time-Series Design

- Involves repeated measurements over time, both before and after the treatment.


Results (Means) from a Study Using a Counterbalanced Design (Figure 13.8)


Possible Outcome Patterns in a Time-Series Design (Figure 13.9)

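A minimal sketch of how a time-series design might be summarized, using hypothetical weekly measurements (the data, variable names, and treatment point are assumptions; a full analysis would also examine trends across the repeated measurements):

```python
from statistics import mean

def level_shift(measurements, treatment_index):
    """Compare the average outcome before and after the treatment point in a
    single-group time series (a simple descriptive summary only)."""
    pre = measurements[:treatment_index]
    post = measurements[treatment_index:]
    return mean(post) - mean(pre)

# Hypothetical weekly sales measured for four weeks before and four weeks
# after a promotional treatment introduced at week 5 (index 4)
weekly_sales = [102, 98, 101, 99, 110, 113, 109, 112]
print(f"Estimated level shift after treatment: {level_shift(weekly_sales, 4):.1f}")
```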

Factorial Designs

- Factorial designs extend the number of relationships that may be examined in an experimental study.
- They are modifications of either the posttest-only control group design or the pretest-posttest control group design that permit the investigation of additional independent variables.
- They also allow a researcher to study the interaction of an independent variable with one or more other variables (moderator variables).

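A minimal sketch of how cell means and an interaction contrast might be computed in a 2 by 2 factorial design; the factors (teaching method and class size) echo Figure 13.10, but the scores and labels are hypothetical:

```python
from statistics import mean
from itertools import product

# Hypothetical achievement scores in a 2 x 2 factorial design:
# factor A = teaching method (lecture vs. inquiry), factor B = class size (small vs. large)
scores = {
    ("lecture", "small"): [72, 75, 70],
    ("lecture", "large"): [68, 66, 71],
    ("inquiry", "small"): [84, 88, 86],
    ("inquiry", "large"): [74, 73, 76],
}

cell_means = {cell: mean(vals) for cell, vals in scores.items()}
for cell in product(("lecture", "inquiry"), ("small", "large")):
    print(cell, round(cell_means[cell], 1))

# An interaction exists when the effect of one factor differs across levels of
# the other: compare the method effect in small classes to that in large classes.
effect_small = cell_means[("inquiry", "small")] - cell_means[("lecture", "small")]
effect_large = cell_means[("inquiry", "large")] - cell_means[("lecture", "large")]
print(f"Method effect in small classes: {effect_small:.1f}")
print(f"Method effect in large classes: {effect_large:.1f}")
print(f"Interaction contrast (difference of effects): {effect_small - effect_large:.1f}")
```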

Using a Factorial Design to Study Effects of Method and Class Size on Achievement (Figure 13.10)


Illustration of Interaction and No Interaction in a 2 by 2 Factorial Design (Figure 13.11)


(Figure 13.12)


Example of a 4 by 2 Factorial Design (Figure 13.13)


(Figure 13.14)


Effectiveness of Experimental Designs in Controlling Threats to Internal Validity (Table 13.1)

KEY: (++) = strong control, threat unlikely to occur; (+) = some control, threat may possibly occur; (-) = weak control, threat likely to occur; (?) = can't determine; (NA) = threat does not apply

Threats compared (columns): subject characteristics, mortality, location, instrument decay, data collector characteristics, data collector bias, testing, history, maturation, attitudinal, regression, implementation.

Designs compared (rows): one-shot case study; one-group pretest-posttest; static-group comparison; randomized posttest-only control group; randomized pretest-posttest control group; Solomon four-group; randomized posttest-only control group with matched subjects; matching-only pretest-posttest control group; counterbalanced; time-series; factorial with randomization; factorial without randomization.

(Cell-by-cell ratings appear in Table 13.1.)


Controlling Threats to Internal Validity (Table 13.1)

- Subject characteristics
- Mortality
- Location
- Instrument decay
- Data collector characteristics
- Data collector bias
- Testing
- History
- Maturation
- Attitudinal
- Regression
- Implementation

The above must be controlled to reduce threats to internal validity.


