Pracre1 Lec8 Basics of Experimentation Key Terms
Basics of Experimentation
Key Terms
Experimentation Basics
The goal of the experimental research strategy is to establish the existence of a
cause-and-effect relationship between two variables.
To rule out the possibility of a coincidental relationship, an experiment, often called
a true experiment, must demonstrate that changes in one variable are directly
responsible for causing changes in the second variable.
Four basic elements
1. Manipulation. The researcher manipulates one variable by changing its value to create a
set of two or more treatment conditions.
2. Measurement. A second variable is measured for a group of participants to obtain a set
of scores in each treatment condition.
3. Comparison. The scores in one treatment condition are compared with the scores in
another treatment condition. Consistent differences between treatments provide evidence
that the manipulation has caused changes in the scores.
4. Control. All other variables are controlled to be sure that they do not influence the two
variables being examined.
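The four elements above can be sketched as a short simulation. The example below is illustrative only: the variables (room temperature as the independent variable, quiz score as the dependent variable), the condition values, and the scores are all made up, not drawn from any real study.

```python
import random

random.seed(42)  # fixed seed so the simulated "data" are reproducible

# Hypothetical experiment: does room temperature (IV) affect quiz scores (DV)?
# 1. Manipulation: two treatment conditions (levels of the IV).
conditions = {"cool_room": 70, "warm_room": 90}  # degrees Fahrenheit

# 2. Measurement: one quiz score (the DV) per participant in each condition.
#    Scores here are random stand-ins generated around an assumed mean.
def run_condition(n_participants, mean_score):
    return [random.gauss(mean_score, 5) for _ in range(n_participants)]

scores = {
    "cool_room": run_condition(20, mean_score=80),
    "warm_room": run_condition(20, mean_score=74),
}

# 3. Comparison: a consistent difference between condition means is the
#    evidence that the manipulation changed the scores.
means = {cond: sum(s) / len(s) for cond, s in scores.items()}
difference = means["cool_room"] - means["warm_room"]
print(means, difference)

# 4. Control: in this simulation every other variable is held constant by
#    construction; in a real experiment that must be arranged deliberately
#    (random assignment, identical materials and procedures, and so on).
```

In a real experiment, of course, the scores come from participants rather than a random number generator; the point of the sketch is only how the four elements fit together.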
Terms
In an experiment, the variable that is manipulated by the researcher is called the
independent variable.
Typically, the independent variable is manipulated by creating a set of treatment
conditions.
The specific conditions that are used in the experiment are called the levels of the
independent variable.
The variable that is measured in each of the treatment conditions is called the
dependent variable.
All other variables in the study are extraneous variables.
Causation and the Third-Variable Problem
One problem for experimental research is that variables rarely exist in isolation.
In natural circumstances, changes in one variable are typically accompanied by
changes in many other related variables.
As a result, in natural circumstances researchers are often confronted with a tangled
network of interrelated variables.
Although it is relatively easy to demonstrate that one variable is related to another,
it is much more difficult to establish the underlying cause of the relationship.
To determine the nature of the relationships among variables, particularly to
establish the causal influence of one event on another, it is essential that an
experiment separate and isolate the specific variables being studied.
Causation and the Directionality Problem
Although a research study may establish a relationship between two variables, the
existence of a relationship does not always explain the direction of the relationship.
The remaining problem is to determine which variable is the cause and which is the
effect.
Controlling Nature
We cannot establish a cause-and-effect relationship by simply observing two
variables.
Instead, the researcher must actively unravel the tangle of relationships that
exists naturally.
To establish a cause-and-effect relationship, an experiment must control nature,
essentially creating an unnatural situation wherein the two variables being
examined are isolated from the influence of other variables and wherein the exact
character of a relationship can be seen clearly.
We acknowledge that it is somewhat paradoxical that experiments must interfere
with natural phenomena to gain a better understanding of nature.
How can observations made in an artificial, carefully controlled experiment reveal
any truth about nature?
An experimenter can be so intent on ensuring internal validity that external validity
is compromised. Researchers are aware of this problem and have developed
techniques to increase the external validity (natural character) of experiments.
Distinguishing Elements of an Experiment
The general purpose of the experimental research strategy is to demonstrate the
existence of a cause-and-effect relationship between two variables.
An experiment attempts to demonstrate that changing one variable (the
independent variable) causes changes in a second variable (the dependent variable).
This general purpose can be broken down into two specific goals.
1. The first step in demonstrating a cause-and-effect relationship is to demonstrate that the
“cause” happens before the “effect” occurs.
2. To establish that one specific variable is responsible for changes in another variable, an
experiment must rule out the possibility that the changes are caused by an extraneous
variable.
Key Terms
Confounding. An error that occurs when the value of an extraneous variable changes systematically
along with the independent variable in an experiment, an alternative explanation for the findings.
Construct validity. The degree to which an operational definition accurately represents the construct it is
intended to manipulate or measure.
Content validity. The degree to which the content of a measure reflects the content of the construct being measured.
Continuous dimension. The concept that traits, attitudes, and preferences can be viewed as continuous
dimensions, and each individual may fall at any point along each dimension; for example, sociability can
be viewed as a continuous dimension ranging from very unsociable to very sociable.
Dependent variable (DV). The specific behavior that a researcher tries to explain in an experiment; the
variable that is measured.
Environmental variable. An aspect of the physical environment that the experimenter can bring under
direct control as an independent variable.
Experimental condition. A treatment condition in which the researcher applies a particular value of an
independent variable to subjects and then measures the dependent variable.
Experimental operational definition. The explanation of the meaning of independent variables; defines
exactly what was done to create the various treatment conditions of the experiment.
External validity. The degree to which the findings of an experiment generalize or apply to people and settings that
were not tested directly.
Extraneous variable. A variable other than an independent or dependent variable; a variable that is not
the main focus of an experiment and that may confound the results if not controlled.
Face validity. The degree to which a manipulation or measurement technique is self-evident.
History threat. A threat to internal validity in which an outside event or occurrence might have
produced effects on the dependent variable.
Hypothetical construct. A concept used to explain unseen processes, such as hunger or learning;
postulated to explain observable behavior.
Independent variable (IV). The variable (antecedent condition) that the experimenter intentionally
manipulates.
Instrumentation threat. A threat to internal validity produced by changes in the measuring instrument
itself.
Interitem reliability. The degree to which different items measuring the same variable attain consistent
results.
Internal validity. The determination that the changes in behavior observed across treatment conditions in
the experiment were actually caused by the independent variable.
Interrater reliability. The degree of agreement among different observers or raters.
Interval scale. The measurement of magnitude or quantitative size having equal intervals between values
but no true zero point.
Latent content. The “hidden meaning” behind a question.
Level of measurement. The type of scale of measurement used to measure a variable.
Levels of the independent variable. The two or more values of the independent variable manipulated by
the experimenter.
Maturation threat. A threat to internal validity produced by internal (physical or psychological) changes
in subjects.
Measured operational definition. The definition of exactly how a variable in an experiment is measured.
Method. The section of a research report in which the subjects and experiment are described in enough
detail that the experiment may be replicated by others; it is typically divided into subsections, such as
Participants, Apparatus or Materials, and Procedures.
Nominal scale. The simplest level of measurement; classifies items into two or more distinct categories
on the basis of some common feature.
Operational definition. The specification of the precise meaning of a variable within an experiment;
defines a variable in terms of observable operations, procedures, and measurements.
Ordinal scale. A measure of magnitude in which each value is measured in the form of ranks.
Predictive validity. The degree to which a measure, definition, or experiment yields information
allowing prediction of actual behavior or performance.
Ratio scale. A measure of magnitude having equal intervals between values and having an absolute zero
point.
Reliability. The consistency and dependability of experimental procedures and measurements.
Response set. A tendency to answer questions based on their latent content with the goal of creating a
certain impression of oneself.
Scale of measurement. The type of precise, quantitative measurement used to measure variables.
Selection interactions. A family of threats to internal validity produced when a selection threat combines
with one or more of the other threats to internal validity; when a selection threat is already present, other
threats can affect some experimental groups but not others.
Selection threat. A threat to internal validity that can occur when nonrandom procedures are used to
assign subjects to conditions or when random assignment fails to balance out differences among subjects
across the different conditions of the experiment.
Statistical regression threat. A threat to internal validity that can occur when subjects are assigned to
conditions on the basis of extreme scores on a test; upon retest, extreme scores tend to
regress toward the mean even without any treatment.
Subject mortality threat. A threat to internal validity produced by differences in dropout rates across
conditions of the experiment.
Subject variable. A characteristic of the subjects in an experiment that cannot be manipulated; an
independent variable in the sense that a researcher can select subjects for their particular values.
Task variable. An aspect of a task that the experimenter intentionally manipulates as an independent
variable.
Testing threat. A threat to internal validity produced by a previous administration of the same test or
other measure.
Test-retest reliability. Consistency between an individual’s scores on the same test taken at two or more
different times.
Validity. The soundness of an operational definition; in experiments, the principle of actually studying
the variables intended to be manipulated or measured.
Assignments
2. Identify the independent and dependent variable in each of the following hypotheses:
a. Absence makes the heart grow fonder.
b. It takes longer to recognize a person in a photograph seen upside down.
c. People feel sadder in rooms painted blue.
d. Smoking cigarettes causes cancer.