
Persuasion

Third Edition

SAGE was founded in 1965 by Sara Miller McCune to support the
dissemination of usable knowledge by publishing innovative and high-
quality research and teaching content. Today, we publish more than 750
journals, including those of more than 300 learned societies, more than
800 new books per year, and a growing range of library products including
archives, data, case studies, reports, conference highlights, and video.
SAGE remains majority-owned by our founder, and after Sara’s lifetime
will become owned by a charitable trust that secures our continued
independence.

Los Angeles | London | Washington DC | New Delhi | Singapore | Boston

Persuasion
Theory and Research

Third Edition

Daniel J. O’Keefe
Northwestern University

Los Angeles
London
New Delhi
Singapore
Washington DC
Boston

Copyright © 2016 by SAGE Publications, Inc.

All rights reserved. No part of this book may be reproduced or utilized in
any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval
system, without permission in writing from the publisher.

FOR INFORMATION:

SAGE Publications, Inc.

2455 Teller Road

Thousand Oaks, California 91320

E-mail: order@sagepub.com

SAGE Publications Ltd.

1 Oliver’s Yard

55 City Road

London, EC1Y 1SP

United Kingdom

SAGE Publications India Pvt. Ltd.

B 1/I 1 Mohan Cooperative Industrial Area

Mathura Road, New Delhi 110 044

India

SAGE Publications Asia-Pacific Pte. Ltd.

3 Church Street

#10-04 Samsung Hub

Singapore 049483

Acquisitions Editor: Matthew Byrnie

Digital Content Editor: Gabrielle Piccininni

Editorial Assistant: Janae Masnovi

Production Editor: Jane Haenel

Copy Editor: Lynn Weber

Typesetter: Hurix Systems Pvt. Ltd.

Proofreader: Gretchen Treadwell

Cover Designer: Gail Buschman

Marketing Manager: Liz Thornton

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

O’Keefe, Daniel J., 1950–

Persuasion : theory and research / Daniel J. O’Keefe. — 3rd edition.

pages cm

Includes bibliographical references and index.

ISBN 978-1-4522-7667-0 (pbk. : acid-free paper) 1. Persuasion (Psychology) I. Title.

BF637.P4054 2016

153.8’52—dc23         2015000192

This book is printed on acid-free paper.

15 16 17 18 19 10 9 8 7 6 5 4 3 2 1

Brief Contents
Preface
1. Persuasion, Attitudes, and Actions
2. Social Judgment Theory
3. Functional Approaches to Attitude
4. Belief-Based Models of Attitude
5. Cognitive Dissonance Theory
6. Reasoned Action Theory
7. Stage Models
8. Elaboration Likelihood Model
9. The Study of Persuasive Effects
10. Communicator Factors
11. Message Factors
12. Receiver Factors
References
Author Index
Subject Index
About the Author

Detailed Contents
Preface
1 Persuasion, Attitudes, and Actions
The Concept of Persuasion
About Definitions: Fuzzy Edges and Paradigm Cases
Five Common Features of Paradigm Cases of Persuasion
A Definition After All?
The Concept of Attitude
Attitude Measurement Techniques
Explicit Measures
Semantic Differential Evaluative Scales
Single-Item Attitude Measures
Features of Explicit Measures
Quasi-Explicit Measures
Implicit Measures
Summary
Attitudes and Behaviors
The General Relationship
Moderating Factors
Correspondence of Measures
Direct Experience
Summary
Encouraging Attitude-Consistent Behavior
Enhance Perceived Relevance
Induce Feelings of Hypocrisy
Encourage Anticipation of Feelings
Summary
Assessing Persuasive Effects
Attitude Change
Beyond Attitude Change
Conclusion
For Review
Notes
2 Social Judgment Theory
Judgments of Alternative Positions on an Issue
The Ordered Alternatives Questionnaire
The Concept of Ego-Involvement
Ego-Involvement and the Latitudes

Measures of Ego-Involvement
Size of the Ordered Alternatives Latitude of Rejection
Own Categories Procedure
Reactions to Communications
Assimilation and Contrast Effects
Attitude Change Effects
Assimilation and Contrast Effects Reconsidered
The Impact of Assimilation and Contrast Effects on
Persuasion
Ambiguity in Political Campaigns
Adapting Persuasive Messages to Recipients Using Social
Judgment Theory
Critical Assessment
The Confounding of Involvement With Other Variables
The Concept of Ego-Involvement
The Measures of Ego-Involvement
Conclusion
For Review
Notes
3 Functional Approaches to Attitude
A Classic Functional Analysis
Subsequent Developments
Identifying General Functions of Attitude
Assessing the Function of a Given Attitude
Influences on Attitude Function
Individual Differences
Attitude Object
Situational Variations
Multifunctional Attitude Objects Revisited
Adapting Persuasive Messages to Recipients: Function
Matching
The Persuasive Effects of Matched and Mismatched
Appeals
Explaining the Effects of Function Matching
Commentary
Generality and Specificity in Attitude Function Typologies
Functional Confusions
Some Functional Distinctions
Conflating the Functions
Reconsidering the Assessment and Conceptualization of
Attitude Function

Assessment of Attitude Function Reconsidered
Utilitarian and Value-Expressive Functions
Reconsidered
Summary
Persuasion and Function Matching Revisited
Reviving the Idea of Attitude Functions
Conclusion
For Review
Notes
4 Belief-Based Models of Attitude
Summative Model of Attitude
The Model
Adapting Persuasive Messages to Recipients Based on the
Summative Model
Alternative Persuasive Strategies
Identifying Foci for Appeals
Research Evidence and Commentary
General Correlational Evidence
Attribute Importance
Belief Content
Role of Belief Strength
Scoring Procedures
Alternative Integration Schemes
The Sufficiency of Belief-Based Analyses
Persuasive Strategies Reconsidered
Belief Strength as a Persuasion Target
Belief Evaluation as a Persuasion Target
Changing the Set of Salient Beliefs as a Persuasion
Mechanism
Conclusion
For Review
Notes
5 Cognitive Dissonance Theory
General Theoretical Sketch
Elements and Relations
Dissonance
Factors Influencing the Magnitude of Dissonance
Means of Reducing Dissonance
Some Research Applications
Decision Making
Conflict

Decision and Dissonance
Factors Influencing the Degree of Dissonance
Dissonance Reduction
Regret
Selective Exposure to Information
The Dissonance Theory Analysis
The Research Evidence
Summary
Induced Compliance
Incentive and Dissonance in Induced-Compliance
Situations
Counterattitudinal-Advocacy–Based Interventions
The “Low, Low Price” Offer
Limiting Conditions
Summary
Hypocrisy Induction
Hypocrisy as a Means of Influencing Behavior
Hypocrisy Induction Mechanisms
Backfire Effects
Revisions of, and Alternatives to, Dissonance Theory
Conclusion
For Review
Notes
6 Reasoned Action Theory
The Reasoned Action Theory Model
Intention
The Determinants of Intention
Attitude Toward the Behavior
Injunctive Norm
Descriptive Norm
Perceived Behavioral Control
Weighting the Determinants
The Distinctiveness of Perceived Behavioral Control
The Predictability of Intention Using the RAT Model
Influencing Intentions
Influencing Attitude Toward the Behavior
The Determinants of AB
Changing AB
Influencing the Injunctive Norm
The Determinants of IN
Changing IN

Influencing the Descriptive Norm
The Determinants of DN
Changing DN
Influencing Perceived Behavioral Control
The Determinants of PBC
Changing PBC
Altering the Weights
Intentions and Behaviors
Factors Influencing the Intention-Behavior Relationship
Correspondence of Measures
Temporal Stability of Intentions
Explicit Planning
The Sufficiency of Intention
Adapting Persuasive Messages to Recipients Based on Reasoned
Action Theory
Commentary
Additional Possible Predictors
Anticipated Affect
Moral Norms
The Assessment of Potential Additions
Revision of the Attitudinal and Normative Components
The Attitudinal Component
The Normative Components
The Nature of the Perceived Control Component
PBC as a Moderator
Refining the PBC Construct
Conclusion
For Review
Notes
7 Stage Models
The Transtheoretical Model
Decisional Balance and Intervention Design
Decisional Balance
Decisional Balance Asymmetry
Implications of Decisional Balance Asymmetry
Self-Efficacy and Intervention Design
Intervention Stage-Matching
Self-Efficacy Interventions
Broader Concerns About the Transtheoretical Model
The Distinctive Claims of Stage Models
Other Stage Models

Conclusion
For Review
Notes
8 Elaboration Likelihood Model
Variations in the Degree of Elaboration: Central Versus
Peripheral Routes to Persuasion
The Nature of Elaboration
Central and Peripheral Routes to Persuasion
Consequences of Different Routes to Persuasion
Factors Affecting the Degree of Elaboration
Factors Affecting Elaboration Motivation
Personal Relevance (Involvement)
Need for Cognition
Factors Affecting Elaboration Ability
Distraction
Prior Knowledge
Summary
Influences on Persuasive Effects Under Conditions of High
Elaboration: Central Routes to Persuasion
The Critical Role of Elaboration Valence
Influences on Elaboration Valence
Proattitudinal Versus Counterattitudinal Messages
Argument Strength
Other Influences on Elaboration Valence
Summary: Central Routes to Persuasion
Influences on Persuasive Effects Under Conditions of Low
Elaboration: Peripheral Routes to Persuasion
The Critical Role of Heuristic Principles
Varieties of Heuristic Principles
Credibility Heuristic
Liking Heuristic
Consensus Heuristic
Other Heuristics
Summary: Peripheral Routes to Persuasion
Multiple Roles for Persuasion Variables
Adapting Persuasive Messages to Recipients Based on the ELM
Commentary
The Nature of Involvement
Argument Strength
One Persuasion Process?
The Unimodel of Persuasion

Explaining ELM Findings
Comparing the Two Models
Conclusion
For Review
Notes
9 The Study of Persuasive Effects
Experimental Design and Causal Inference
The Basic Design
Variations on the Basic Design
Persuasiveness and Relative Persuasiveness
Two General Challenges in Studying Persuasive Effects
Generalizing About Messages
Ambiguous Causal Attribution
Nonuniform Effects of Message Variables
Designing Future Persuasion Research
Interpreting Past Persuasion Research
Beyond Message Variables
Variable Definition
Message Features Versus Observed Effects
The Importance of the Distinction
Conclusion
For Review
Notes
10 Communicator Factors
Communicator Credibility
The Dimensions of Credibility
Factor-Analytic Research
Expertise and Trustworthiness as Dimensions of
Credibility
Factors Influencing Credibility Judgments
Education, Occupation, and Experience
Nonfluencies in Delivery
Citation of Evidence Sources
Position Advocated
Liking for the Communicator
Humor
Summary
Effects of Credibility
Two Initial Clarifications
Influences on the Magnitude of Effect
Influences on the Direction of Effect

Liking
The General Rule
Some Exceptions and Limiting Conditions
Liking and Credibility
Liking and Topic Relevance
Greater Effectiveness of Disliked Communicators
Other Communicator Factors
Similarity
Similarity and Liking
Similarity and Credibility: Expertise Judgments
Similarity and Credibility: Trustworthiness Judgments
Summary: The Effects of Similarity
Physical Attractiveness
Physical Attractiveness and Liking
Physical Attractiveness and Credibility
Summary
About Additional Communicator Characteristics
Conclusion
The Nature of Communication Sources
Multiple Roles for Communicator Variables
For Review
Notes
11 Message Factors
Message Structure and Format
Conclusion Omission
Recommendation Specificity
Narratives
Complexities in Studying Narrative and Persuasion
The Persuasive Power of Narratives
Factors Influencing Narrative Persuasiveness
Entertainment-Education
Summary
Prompts
Message Content
Consequence Desirability
One-Sided Versus Two-Sided Messages
Gain-Loss Framing
Overall Effects
Disease Prevention Versus Disease Detection
Other Possible Moderating Factors
Summary

Threat Appeals
Protection Motivation Theory
Threat Appeals, Fear Arousal, and Persuasion
The Extended Parallel Process Model
Summary
Beyond Fear Arousal
Sequential Request Strategies
Foot-in-the-Door
The Strategy
The Research Evidence
Explaining FITD Effects
Door-in-the-Face
The Strategy
The Research Evidence
Explaining DITF Effects
Conclusion
For Review
Notes
12 Receiver Factors
Individual Differences
Topic-Specific Differences
General Influences on Persuasion Processes
Summary
Transient Receiver States
Mood
Reactance
Other Transient States
Influencing Susceptibility to Persuasion
Reducing Susceptibility: Inoculation, Warning, Refusal
Skills Training
Inoculation
Warning
Refusal Skills Training
Increasing Susceptibility: Self-Affirmation
Conclusion
For Review
Notes
References
Author Index
Subject Index
About the Author

Preface

This preface is intended to provide a general framing of this book and is
particularly directed to those who already have some familiarity with the
subject matter. Such readers will be able to tell at a glance that this book is
in many ways quite conventional (in the general plan of the work, the
topics taken up, and so forth) and will come to see the inevitable
oversimplifications, bypassed subtleties, elided details, and suchlike.
Because this book is pitched at roughly the level of a graduate-
undergraduate course, it is likely to be defective both by having sections
that are too shallow or general for some and by having segments that are
too detailed or technical for others; the hope is that complaints are not too
badly maldistributed across these two categories. This book aims at a
relatively generalized treatment of persuasion; in certain contexts in which
persuasion is a central or recurring activity, correspondingly localized
treatments of relevant research literatures are available elsewhere, such as
for consumer advertising (e.g., Armstrong, 2010) and for certain legal
settings (e.g., Devine, 2012). Readers acquainted with the second edition
will notice the addition of chapters concerning social judgment theory and
stage models, revision of the treatment of the theories of reasoned action
and planned behavior, and new attention to subjects such as reactance and
the use of narratives as vehicles for persuasion.

This edition also gives special attention to questions of message
adaptation. One broad theme that recurs in theoretical treatments of
persuasion is the need to adapt persuasive messages to their audiences:
different recipients may be persuaded by different sorts of messages. Thus
one way of approaching any given theoretical framework for persuasion is
to ask how it identifies ways in which messages might be adapted to
audiences. For this reason, a number of the chapters concerning theoretical
perspectives contain a section addressing this issue (and, as appropriate,
this matter also arises in other chapters).

Some readers will see the relationship of this theme to concepts such as
“message tailoring” and “message targeting.” In the research literature,
these labels have often been used to apply quite loosely to any sort of way
in which messages are adapted to (customized for) recipients, although
sometimes there have been efforts to use different labels to describe
different degrees or kinds of message customization (e.g., sometimes
“targeting” is described as adaptation on the basis of group-level
characteristics, whereas “tailoring” is based on individual-level
properties). But no matter the label, there is a common underlying
conceptual thread here, namely, that different kinds of messages are likely
to be persuasive for different recipients—and hence to maximize
persuasiveness, messages should be adapted to their audiences.

As should be apparent, there are quite a few different bases for such
adaptation: messages might be adapted to the audience’s literacy level,
cultural background, values, sex, degree of extroversion, age, regulatory
focus, level of self-monitoring, or race/ethnicity. A message may be
customized to the audience’s current psychological state as described by,
say, reasoned action theory (e.g., is perceived behavioral control low?),
protection motivation theory (is perceived vulnerability sufficiently high?),
or the transtheoretical model (which stage is the recipient in?). It may be
superficially personalized (e.g., by mentioning the recipient’s name in a
direct mail appeal), mention shared attitudes not relevant to the advocacy
subject, and so on.

For this reason, it is not fruitful to pursue questions such as “are tailored
messages more persuasive than non-tailored messages?” because the
answer is virtually certain to be “it depends”—if nothing else, the answer
may vary depending on the basis of tailoring. For example, it might be that
adapting messages through superficial personalization typically makes
very little difference to persuasiveness, but adapting messages by matching
the message’s appeals to the audience’s core values could
characteristically substantially enhance persuasiveness.

Still, the manifest importance of adapting messages to recipients
recommends its prominence. Aristotle was right (in the Rhetoric): the art
of persuasion consists of discerning, in any particular situation, the
available means of persuasion. Those means will vary from case to case,
and hence maximizing one’s chances for persuasion will require adapting
one’s efforts to the circumstance at hand. Whether one calls this message
adaptation, message tailoring, message targeting, message customization,
or something else, the core idea is the same: different approaches are
required in different persuasive circumstances.

Adding material (whether about audience adaptation or other matters) is an
easy decision; omitting material is not, because one fears encouraging the
loss of good (if imperfect) ideas. Someone somewhere once pointed out
that in the social and behavioral sciences, findings and theories often seem
to just fade away, not because of any decisive criticisms or
counterarguments but rather because they seem to be “too old to be true.”
This apt observation seems to me to identify one barrier to social-scientific
research synthesis, namely, that useful results and concepts somehow do
not endure but rather disappear—making it impossible for subsequent
work to exploit them.

As an example: If message assimilation and contrast effects are genuine
and have consequences for persuasive effects, then—although there is little
research attention being given to the theoretical framework within which
such phenomena were first clearly conceptualized (social judgment theory)
—we need somehow to ensure that our knowledge of these phenomena
does not evaporate. Similarly, although it has been some time since
substantial work was done on the question of the dimensions underlying
credibility judgments, the results of those investigations (the dimensions
identified in those studies) should not thereby fail to be mentioned in
discussions of credibility research.

To sharpen the point here: It has been many years since the islets of
Langerhans (masses of endocrine cells in the pancreas) were first noticed,
but medical textbooks do not ignore this biological structure. Indeed, it
would be inconceivable to discuss (for example) mechanisms of insulin
secretion without mentioning these structures. Now I do not mean to say
that social-scientific phenomena such as assimilation and contrast effects
are on all fours with the islets of Langerhans, but I do want to suggest that
premature disappearance of social-scientific concepts and findings seems
to happen all too easily. Without forgetting how grumpy old researchers
can sometimes view genuinely new developments (“this new phenomenon
is just another name for something that used to be called X”), one can
nevertheless acknowledge the real possibility that “old” knowledge can
somehow be lost, misplaced, insufficiently understood, unappreciated, or
overlooked.

It is certainly the case that the sheer amount of social-scientific research
output makes it difficult to keep up with current research across a number
of topics, let alone hold on to whatever accumulated information there
might be. In the specific case of persuasion research—which has seen an
explosion of interest in recent years—the problem is not made any easier
by the relevant literature’s dispersal across a variety of academic locales.
Yet somehow the insights available from this research and theorizing must
not be lost.

Unfortunately, there are no appealing shortcuts. One cannot simply
reproduce others’ citations or research descriptions with an easy mind (for
illustrations of the attendant pitfalls, see Gould, 1991, pp. 155–167; Gould,
1993, pp. 103–105; Tufte, 1997, p. 71). One hopes that it would be
unnecessary to say that as in the previous editions, I have read everything I
cite. I might inadvertently misrepresent or misunderstand, but at least such
flaws will be of my own hand.

Moreover, customary ways of drawing general conclusions about
persuasive effects can be seen to have some important shortcomings. One
source of difficulty here is a reliance on research designs using few
persuasive messages, a matter addressed in Chapter 9. Here I will point out
only the curiosity that generalizations about persuasive message effects—
generalizations intended to be general across both persons and messages—
have commonly been offered on the basis of data from scores or hundreds
of human respondents but from only one or two messages. One who is
willing to entertain seriously the possibility that the same manipulation
may have different effects in different messages should, with such data in
hand, be rather cautious.

Another source of difficulty has been the widespread misunderstandings
embedded in common ways of interpreting and integrating research
findings in the persuasion literature. To illuminate the relevant point,
consider the following hypothetical puzzle:

Suppose there have been two studies of the effect on persuasive
outcomes of having a concluding metaphor (versus having an
ordinary conclusion that does not contain a metaphor) in one’s
message, but with inconsistent results. In Study A, conclusion type
made a statistically significant difference (such that greater
effectiveness is associated with the metaphorical conclusion), but
Study B failed to replicate this result.

In Study A, the participants were female high school students who
read a written communication arguing that most persons need from 7
to 9 hours of sleep each night. The message was attributed to a
professor at the Harvard Medical School; the communicator’s
identification, including a photograph of the professor (an attractive,
youthful-looking man), was provided on a cover sheet immediately
preceding the message. The effect of conclusion type on persuasive
outcome was significant, t(60) = 2.35, p < .05: Messages with a
concluding metaphor were significantly more effective than messages
with an ordinary (nonmetaphorical) conclusion.

In Study B, the participants were male college undergraduates who
listened to an audio message that used a male voice. The message
advocated substantial tuition increases (of roughly 50% to 60%) at the
students’ university and presented five arguments to show the
necessity of such increases. The communicator was described as a
senior at the university, majoring in education. Although the means
were ordered as in Study A, conclusion type did not significantly
affect persuasive outcome, t(21) = 1.39, ns.

Why the inconsistency (the failure to replicate)?

A typical inclination has been to entertain possible explanatory stories
based on such differences as the receivers’ sex (“Women are more
influenced by the presence of a metaphorical conclusion than are men”),
the medium (“Metaphorical conclusions make more difference in written
messages than in oral messages”), the advocated position (“Metaphorical
conclusions are helpful in proattitudinal messages but not in
counterattitudinal ones”), and so on. But for this hypothetical example,
those sorts of explanatory stories are misplaced. Not only is the direction
of effect identical in Study A and Study B (each finds that the concluding-
metaphor message is more effective) but also the size of the advantage
enjoyed by the concluding-metaphor message is the same in the two
studies (expressed as a correlation, the effect size is .29). The difference in
the level of statistical significance achieved is a function of the difference
in sample size, not any difference in effect size.
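The arithmetic behind this point can be checked directly. For a two-group design, the t statistic converts to an effect size via r = sqrt(t² / (t² + df)), and an approximate 95% confidence interval for r follows from the Fisher z transformation. These are standard conversion formulas, not anything specific to the hypothetical studies above; a minimal sketch:

```python
import math

def r_from_t(t, df):
    """Convert an independent-groups t statistic to the effect size r."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via the Fisher z transformation."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Study A: t(60) = 2.35, so N = 62 participants in a two-group design
r_a = r_from_t(2.35, 60)
# Study B: t(21) = 1.39, so N = 23
r_b = r_from_t(1.39, 21)

print(round(r_a, 2), round(r_b, 2))    # both effect sizes come out to .29
print(r_confidence_interval(r_a, 62))  # interval excludes zero: "significant"
print(r_confidence_interval(r_b, 23))  # wider interval spans zero: "ns"
```

Both studies yield r ≈ .29; what differs is only the width of the confidence intervals, and Study B’s wider interval (a consequence of its smaller sample) is what spans zero.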

Happily, recent years have seen some progress in the diffusion of more
careful understandings of statistical significance, effect sizes, statistical
power, confidence intervals, and related matters. (Some progress—but not
enough. It remains distressingly common that even graduate students with
statistical training can reason badly when faced with a problem such as
that hypothetical.) With the hope of encouraging greater sensitivity
concerning specifically the magnitude of effects likely to be found in
persuasion research, I have tried to include mention of average effect sizes
where appropriate and available.

But there is at present something of a disjuncture between the available
methods for describing research findings (in terms of effect sizes and
confidence intervals) and our theoretical equipment for generating
predictions. Although research results can be described in specific
quantitative terms (“the correlation was .37”), researchers are currently
prepared to offer only directional predictions (“the correlation will be
positive”). Developing more refined predictive capabilities is very much to
be hoped for, but significant challenges lie ahead (for some discussion, see
O’Keefe, 2011a).

Even with increasing attention to effect sizes and their meta-analytic
treatment, however, we are still not in a position to do full justice to the
issues engaged by the extensive research literature in persuasion, given the
challenges in doing relevant, careful, reflective research reviews. For
example, research reviews all too often exclude unpublished studies,
despite wide recognition of publication biases favoring statistically
significant results (see, e.g., Dwan, Gamble, Williamson, Kirkham, & the
Reporting Bias Group, 2013; Ferguson & Heene, 2012; Ioannidis, 2005,
2008). Similarly, meta-analytic reviews too often rely on fixed-effect
analyses rather than the random-effects analyses appropriate where
generalization is the goal (for discussion, see Card, 2012, pp. 233–234).
All these considerations conspire to encourage a rather conservative
approach to the persuasion literature (conservative in the sense of
exemplifying prudence with respect to generalization), and that has been
the aim in this treatment.
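The fixed-effect versus random-effects contrast can be illustrated with the standard inverse-variance pooling formulas. The sketch below uses the common DerSimonian-Laird estimator of the between-study variance; the three effect sizes and their variances are invented purely for illustration:

```python
def fixed_effect(effects, variances):
    """Inverse-variance weighted mean: assumes a single true effect size."""
    w = [1 / v for v in variances]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects mean: lets true effects vary by study."""
    w = [1 / v for v in variances]
    mean_f = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Q statistic: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (y - mean_f) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # estimated between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    return sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)

# Hypothetical effect sizes and sampling variances from three studies
effects = [0.1, 0.3, 0.5]
variances = [0.01, 0.02, 0.05]

print(round(fixed_effect(effects, variances), 3))
print(round(random_effects(effects, variances), 3))
```

With heterogeneous studies the two models disagree (here roughly .21 under the fixed-effect model versus .24 under random effects), because adding the between-study variance to every weight reduces the dominance of the largest study; only the random-effects estimate is framed as a generalization to a population of studies.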

Of course, one cannot hope to survey the range of work covered here
without errors, oversights, and unclarities. These have been reduced by
advice and assistance from a number of quarters. Students in my
persuasion classes have helped make my lectures—and so this book—
clearer than otherwise might have been the case. Many good insights and
suggestions came from the reviewers arranged by Sage Publications:
Jonathan H. Amsbary, William B. Collins, Julia Jahansoozi, Bonnie Kay,
Andrew J. Kirk, Susan L. Kline, Sanja Novitsky, Charles Soukup, Kaja
Tampere, and Beth M. Waggenspack. Jos Hornikx also provided
especially useful commentary on drafts of this edition’s chapters. And I
thank Barbara O’Keefe both for helpful conversation and for an
unceasingly interesting life: “Age cannot wither her, nor custom stale / Her
infinite variety.”

Chapter 1 Persuasion, Attitudes, and Actions

The Concept of Persuasion


About Definitions: Fuzzy Edges and Paradigm Cases
Five Common Features of Paradigm Cases of Persuasion
A Definition After All?
The Concept of Attitude
Attitude Measurement Techniques
Explicit Measures
Quasi-Explicit Measures
Implicit Measures
Summary
Attitudes and Behaviors
The General Relationship
Moderating Factors
Encouraging Attitude-Consistent Behavior
Assessing Persuasive Effects
Attitude Change
Beyond Attitude Change
Conclusion
For Review
Notes

This book surveys social-scientific theory and research concerning
persuasive communication. The relevant work, as will become apparent, is
scattered across the academic landscape—in communication, psychology,
advertising, marketing, political science, law, and so on. Although the
breadth and depth of this literature rule out a completely comprehensive
and detailed treatment, the main lines of work are at least sketched here.

This introductory chapter begins, naturally enough, with a discussion of
the concept of persuasion. But because social-scientific treatments of
persuasion have closely linked persuasion and attitude change, the concept
of attitude is discussed as well, some common attitude assessment
procedures are described, and the relationship of attitudes and behavior is
considered; a concluding section discusses the assessment of persuasive
effects.

The Concept of Persuasion

About Definitions: Fuzzy Edges and Paradigm Cases


A common way to clarify a concept is to provide a definition of the
concept. But definitions can be troublesome things, precisely because they
commonly are treated as providing sharp-edged distinctions between what
is included in the category and what is not. What is troublesome about
such sharp lines is that no matter where they are drawn, it is possible to
sustain objections to their location; for some the definition will be too
broad and for others too narrow.

Definitions are almost inevitably open to such criticisms, no matter where
the definitional lines are drawn, because most concepts have fuzzy edges,
that is, gray areas in which application of the concept is arguable. For any
concept, there are some cases that virtually everyone agrees are cases of
the concept (few would deny that a chair is an instance of the category
“furniture”), and there are some cases that virtually everyone agrees are
not cases of the concept (a pencil is not an instance of furniture)—but
there are also some cases that fall in a gray area and can give rise to
disagreements (is a television set a piece of furniture? or perhaps is it an
appliance?). No matter how the line is drawn, some objection is possible.

So, for example, if one defines persuasion in such a way as to distinguish
cases of persuasion from cases of manipulation by requiring that in
genuine instances of persuasion, the persuader “acts in good faith” (as do
Burnell & Reeve, 1984), then some will object that the definition is too
narrow; after all, such a definition almost certainly excludes at least some
instances of advertising. But including manipulation as instances of
persuasion will meet objections from those who think it important to
exclude instances of sheer manipulation from the definition of persuasion.

Happily, it is possible to clarify a concept without having to be committed
to a sharp-edged definition of the concept (and thus without having to
settle such border disputes). Such clarification can be obtained by focusing
on the shared features of paradigm cases of the concept. Paradigm cases of
a concept are the sorts of instances that nearly everyone would agree were
instances of the concept in question; they are straightforward,
uncontroversial examples. By identifying the common features of
paradigm cases, one can get a sense of the concept’s ordinary central
application, without having to draw sharp-edged definitional lines.

Five Common Features of Paradigm Cases of Persuasion

Consider, then: What is ordinarily involved when we say that someone (a
persuader) has persuaded someone else (a persuadee)? In such
straightforward applications of the concept of persuasion, what sorts of
shared features can be observed? (For an alternative to the following
analysis, see Gass & Seiter, 2004.)

First, when we say that one person persuaded another, we ordinarily
identify a successful attempt to influence. That is, the notion of success is
embedded in the concept of persuasion. For instance, it does not make
sense to say, “I persuaded him but failed.” One can say, “I tried to
persuade him but failed,” but to say simply “I persuaded him” is to imply a
successful attempt to influence.1

Second, in paradigm cases of persuasion, the persuader intends to
influence the persuadee. For example, if I say, “I persuaded Sally to vote
for Jones,” you are likely to infer that I intended to obtain that effect. For
just that reason, it is entirely understandable that someone might say, “I
accidentally persuaded Mary to vote for Brown” precisely in the
circumstance in which the speaker does not want a hearer to draw the
usual inference of intent; absent such mention of accident, the ordinary
inference will be that the persuasion was purposeful.

A third feature shared by paradigm cases of persuasion is some measure of
freedom (free will, free choice, voluntary action) on the persuadee’s part.
Consider, for example, a circumstance in which a person is knocked
unconscious by a robber, who then takes the victim’s money; one would
not (except humorously) say that the victim had been “persuaded” to give
the money. By contrast, being induced by a television ad to make a
donation to a charitable cause is obviously an instance of persuasion.

When the persuadee’s freedom is minimized or questionable, it becomes
correspondingly questionable whether persuasion is genuinely involved;
one no longer has a straightforward exemplary case of persuasion.
Suppose a robber threatens to shoot the victim if the money is not
forthcoming, and the victim complies: Is this an instance of persuasion?
We need not settle this question here, as it requires a sharp line of
definition that we are avoiding.2 It is enough to notice that such cases are
borderline instances of persuasion, precisely because the persuadee’s
freedom is not so clear-cut as in paradigm instances.

Fourth, paradigm cases of persuasion are ones in which the effects are
achieved through communication (and perhaps especially through the
medium of language). My physically lifting you and throwing you off the
roof of a building is something quite different from my talking you into
jumping off the same roof; the latter might possibly be a case of
persuasion (depending on the circumstances, exactly what I have said to
you, and so on), but the former is certainly not. What distinguishes these
two instances is that communication is involved in the latter case but not in
the former.

Finally, paradigm cases of persuasion involve a change in the mental state
of the persuadee (principally as a precursor to a change in behavior). Some
ordinary instances of persuasion may be described as involving only a
change in mental state (as in “I persuaded Jan that the United States should
refuse to recognize the authority of the World Court”). But even when
behavioral change is involved (as in “I persuaded Tom to take golf
lessons”), there is ordinarily presumed to be some underlying change in
mental state that gave rise to the behavioral change (e.g., Tom came to
believe that his golf skills were poor, that his skills could be improved by
taking lessons, etc.). Thus even when a persuader’s eventual aim is to
influence what people do (how they vote or what products they buy), at
least in paradigm cases of persuasion that aim is ordinarily accomplished
by changing what people think (what they think of the political candidate
or of the product). That is, persuasion is ordinarily conceived of as
influencing others by influencing their mental states (rather than by
somehow influencing their conduct directly).

In persuasion theory and research, the relevant mental state has most
commonly been characterized as an attitude (and thus the concept of
attitude receives direct discussion later in this chapter).3 Even when a
persuader’s ultimate goal is the modification of another’s behavior, that
goal is often seen to be achieved through a process of attitude change—the
presumption being that attitude change is a means of behavioral change.

A Definition After All?


These shared features of exemplary cases of persuasion can be strung
together into something that looks like a definition of persuasion: a
successful intentional effort at influencing another’s mental state through
communication in a circumstance in which the persuadee has some
measure of freedom. But it should be apparent that constructing such a
definition would not eliminate the fuzzy edges of the concept of
persuasion. Such a definition leaves open to dispute just how much success
is required, just how intentional the effort must be, and so on.

Hence, by recognizing these shared features of paradigm cases of
persuasion, one can get a sense of the central core of the concept of
persuasion, but one need not draw sharp definitional boundaries around
that concept. Indeed, these paradigm case features permit one to see
clearly just how definitional disputes can arise—for instance, disputes
about the issue of just how much, and what sorts, of freedom the persuadee
must have before an instance qualifies as an instance of persuasion. It is
also easy to see that there can be no satisfactory definitive solution to these
disputes, given the fuzzy edges that the concept of persuasion naturally
has. Definitions of persuasion can serve useful functions, but a clear sense
of the concept of persuasion can be had without resorting to a hard-edged
definition.

The Concept of Attitude


As mentioned above, the mental state that has been seen (in theory and
research) to be most centrally implicated in persuasion is that of attitude.
The concept of attitude has a long history (see D. Fleming, 1967). Early
uses of the term “attitude” referred to posture or physical arrangement (as
in someone’s being in “the attitude of prayer”), uses that can be seen today
in descriptions of dance or airplane orientation. Gradually, however,
attitudes came to be seen as “orientations of mind” rather than of body, as
internal states that exerted influence on overt behavior.

Perhaps it was inevitable, thus, that in the early part of the 20th century,
the emerging field of social psychology should have seized on the concept
of attitude as an important one. Attitude offered to social psychologists a
distinctive psychological mechanism for understanding and explaining
individual variation in social conduct (Allport, 1935). And although for a
time there was considerable discussion of alternative definitions of attitude
(e.g., Audi, 1972; Eagly & Chaiken, 1993, pp. 1–21; McGuire, 1985), a
broad consensus emerged that an attitude is a person’s general evaluation
of an object (where “object” is understood in a broad sense, as
encompassing persons, events, products, policies, institutions, and so on).
Even when conceptual treatments of attitude differ in other ways, a
common theme is that an attitude is an evaluative judgment of (reaction to)
an object (Fishbein & Ajzen, 2010, pp. 75–79).

Understood this way, it is perhaps obvious why attitude should so often be
a mental state of interest to persuaders. What products people buy, which
candidates they vote for, which policies they endorse, what hobbies they
pursue, which businesses they patronize—influencing such things will
often involve influencing people’s attitudes. Precisely because attitudes
represent relatively stable evaluations that can influence behavior, they are
a common persuasive target.

Attitude Measurement Techniques


If persuasion is conceived of as fundamentally involving attitude change,
then the systematic study of persuasion requires means of assessing
persons’ attitudes. A great many attitude measurement techniques have
been proposed, and a large literature addresses the use of attitude measures
in specific circumstances such as public opinion polling and survey
research. The intention here is to give a brief overview of some exemplary
attitude measurement procedures (for more detailed information and
reviews, see Banaji & Heiphetz, 2010, pp. 359–370; Krosnick, Judd, &
Wittenbrink, 2005; Schwarz, 2008).

Attitude assessment procedures can be usefully distinguished by the
degree of explicitness (directness) with which they assess the respondent’s
evaluation of the attitude object. Some techniques directly obtain an
evaluative judgment; others do so in more roundabout ways.

Explicit Measures
Explicit attitude measurement techniques directly ask the respondent for
an evaluative judgment of the attitude object. Two commonly employed
explicit assessment procedures are semantic differential evaluative scales
and single-item attitude questions.

Semantic Differential Evaluative Scales


One popular means of directly assessing attitude is to employ the
evaluative scales from the semantic differential scale of Osgood, Suci, and
Tannenbaum (1957). In this procedure, respondents rate the attitude object
on a number of (typically) 7-point bipolar scales that are end-anchored by
evaluative adjective pairs (such as good-bad, desirable-undesirable, and so
forth). An example appears in Figure 1.1. The instructions for this scale
ask the respondent to place a check mark at the point on the scale that best
represents the respondent’s judgment. The investigator can
straightforwardly assign numerical values to the scale points (say, +3 for
the extreme positive point, through 0 for the midpoint, to −3 for the
extreme negative end) and then sum each person’s responses to obtain an
indication of the person’s attitude toward (general evaluative judgment of)
the object.

Figure 1.1 Example of a semantic differential scale.
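The scoring arithmetic just described can be sketched in a few lines of code. This is an illustrative sketch only: the item names, the ratings, the `score_semantic_differential` helper, and the assumption that scales anchored with the unfavorable adjective on the positive-numbered end are reverse-keyed are all hypothetical, not part of Osgood, Suci, and Tannenbaum's procedure.

```python
# Hypothetical sketch of scoring semantic differential evaluative scales.
# Each item is a 7-point scale recorded as 1..7; we recode to -3..+3
# (0 at the midpoint) and sum across items, reverse-keying any scale
# whose anchors run in the opposite evaluative direction.

def score_semantic_differential(responses, reverse_keyed=()):
    """responses: dict mapping item name -> raw rating (1..7)."""
    total = 0
    for item, raw in responses.items():
        if not 1 <= raw <= 7:
            raise ValueError(f"rating out of range for {item!r}: {raw}")
        value = raw - 4            # recode 1..7 to -3..+3
        if item in reverse_keyed:  # flip oppositely anchored scales
            value = -value
        total += value
    return total

ratings = {"good-bad": 6, "desirable-undesirable": 7, "harmful-beneficial": 2}
attitude = score_semantic_differential(ratings,
                                       reverse_keyed={"harmful-beneficial"})
print(attitude)  # (6-4) + (7-4) - (2-4) = 7
```

The sum (here +7 out of a possible +9) serves as the overall index of the respondent's evaluation of the object.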

Single-Item Attitude Measures


Another explicit means of assessing attitude is simply to have the
respondent complete a single questionnaire item that asks for the relevant
judgment (see Figure 1.2 for an example). There are various ways of
wording the question and of anchoring the scale (e.g., “In general, how
much do you like the United Nations?” with end anchors “very much” and
“not at all”), and it is possible to vary the number of scale points, but the
basic procedure is the same. A single-item attitude measure familiar to
U.S. survey researchers is the “feeling thermometer,” which asks
respondents to report their evaluation on a scale akin to a Fahrenheit
thermometer; the endpoints of the scale are zero degrees (very “cold” or
unfavorable feelings) and 100 degrees (very “warm” or favorable feelings;
see, e.g., Alwin, 1997).

Figure 1.2 Example of a single-item attitude measure.

A single-item attitude measure is an understandably attractive technique
for circumstances such as public opinion polling. The attitude assessment
can be undertaken orally (as in telephone surveys or face-to-face
interviewing); the question is typically straightforward and easily
comprehended by the respondent; the question can be asked (and
answered) in a short time.

The central drawback of single-item assessments of attitude is potentially
weak reliability. That is, a person’s response to a single attitude question
may not be as dependable an indicator of attitude as the person’s response
to three or four items all getting at roughly the same thing.

Features of Explicit Measures


Explicit attitude measurement techniques obviously offer the advantage of
being simple and straightforward, easy to administer, and so forth. Another
advantage of these techniques is that they are relatively easy to construct.
For instance, a public opinion survey of attitudes toward possible
presidential candidates can easily accommodate some new possible
candidate: The surveyor simply asks the standard question but inserts the
name of the new candidate. General evaluative scales from the semantic
differential can be used for rating all sorts of attitude objects (consumer
products, political candidates, government policies, etc.); to assess
attitudes toward Crest toothpaste rather than toward the United Nations,
one simply makes the appropriate substitution above the rating scales.
(This may be a false economy, however: for arguments emphasizing the
importance of customizing semantic differential evaluative scales for each
different attitude object, see Fishbein & Ajzen, 2010, pp. 79–82.)

One salient disadvantage of these explicit techniques is that because they
are so direct, they yield an estimate only of the respondent’s attitude. Of
course, this is not a drawback if all the researcher wants to know is the
respondent’s attitude. But investigators will often want other information
as well (about, for example, beliefs that might lie behind the attitude), and
in such circumstances, direct attitude assessment techniques will need to
be supplemented or replaced by other procedures.

Quasi-Explicit Measures
Quasi-explicit attitude measurement techniques assess attitude not by
directly eliciting an evaluative judgment of the attitude object but by
eliciting information that is obviously attitude-relevant and that offers a
straightforward basis for attitude assessment. For example, paired-comparison procedures and ranking techniques do not ask directly for an
evaluation of any single attitude object but ask for comparative judgments
of several objects. In a paired-comparison technique, the respondent is
asked a series of questions about the relative evaluation of each of a
number of pairs of objects (e.g., “Which candidate do you prefer, Archer
or Barker? Archer or Cooper? Barker or Cooper?”); in a ranking
procedure, the respondent ranks a set of attitude objects (e.g., “Rank these
various leisure activities, from your most favorite to your least favorite”).
The obtained responses obviously permit an investigator to draw some
conclusions about the respondent’s evaluation of a given object.
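A minimal sketch of how an investigator might recover such conclusions from paired-comparison responses (the candidate names and answers are hypothetical, and counting "wins" per object is just one simple aggregation rule):

```python
# Hypothetical sketch: inferring a preference ordering from
# paired-comparison responses. Each response names the preferred member
# of a pair; counting wins per object yields a simple ranking.
from collections import Counter

pairs = [("Archer", "Barker"), ("Archer", "Cooper"), ("Barker", "Cooper")]
preferred = {("Archer", "Barker"): "Archer",
             ("Archer", "Cooper"): "Archer",
             ("Barker", "Cooper"): "Cooper"}

wins = Counter(preferred[p] for p in pairs)          # wins per candidate
candidates = {name for pair in pairs for name in pair}
ranking = sorted(candidates, key=lambda c: -wins[c])  # most-preferred first
print(ranking)  # ['Archer', 'Cooper', 'Barker']
```

Here Archer is preferred in two comparisons, Cooper in one, and Barker in none, so the comparative judgments imply the ordering Archer, Cooper, Barker.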

The two most common and well-known quasi-explicit attitude
measurement procedures are those devised by Louis Thurstone and by
Rensis Likert. In their procedures, the respondent’s attitude is inferred
from agreement or disagreement with statements that are rather obviously
attitude-relevant. The attitude assessment instrument, then, consists of
statements to which the respondent reacts (by agreeing or disagreeing with
each one), and the respondent’s attitude is inferred from the pattern of
responses.

Obviously, however, if a researcher is going to gauge respondents’
attitudes by examining respondents’ reactions to a set of statements, not
just any statements will do; for example, one is not likely to learn much
about attitudes toward the United Nations by assessing persons’ agreement
with a statement such as “Baseball is a better game than football.” Thus
the task faced in constructing a Thurstone or Likert attitude scale is the
task of selecting items (statements) that appear to serve as suitable
indicators of attitude. One may start with a large pool of statements that
might possibly be included on a final attitude instrument, but the problem
is to somehow winnow that pool down.

This winnowing is accomplished by gathering and analyzing data about
respondents’ reactions to a large number of possible items. Detailed
descriptions of these procedures are available elsewhere (for some
specifics, see Green, 1954; Likert, 1932; Thurstone, 1931), but the key is
to identify those items (statements) that can dependably be taken as
indicators of attitudes. For example, if the topic of investigation concerns
attitudes toward the First Federal Bank, suitable statements might turn out
to be ones such as “This bank is reliable,” “This bank is inefficient,” “This
bank has unfriendly personnel,” and so on; by contrast, a statement such as
“This bank has a branch at the corner of Main and Elm” would be unlikely
to be included (because knowing whether a respondent agreed or disagreed
with such a statement would not provide information about the
respondent’s attitude).

Given a set of suitable statements, one elicits respondents’ agreement with
each statement. This can be accomplished in various ways. For example,
respondents can be given a list of statements and asked to check the ones
with which they agree (this is Thurstone’s, 1931, procedure). Or the
strength of agreement with each statement can be assessed through some
appropriate scale (this is Likert’s, 1932, procedure; see Figure 1.3). An
overall attitude score can then be obtained straightforwardly for each
respondent, to serve as an estimate of that person’s overall attitude.

Figure 1.3 Example of an item on a Likert quasi-explicit attitude measure.
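The Likert scoring procedure can likewise be sketched in code. The statements, ratings, and `score_likert` helper below are hypothetical illustrations (echoing the First Federal Bank examples above), assuming a 5-point agreement scale on which agreement with unfavorably worded statements is reverse-scored.

```python
# Hypothetical sketch of scoring a Likert attitude scale. Agreement is
# rated 1 (strongly disagree) through 5 (strongly agree); agreement with
# unfavorably worded statements is reverse-scored, so a higher total
# always indicates a more favorable attitude toward the object.

ITEMS = {  # statement -> True if favorably worded toward the bank
    "This bank is reliable": True,
    "This bank is inefficient": False,
    "This bank has unfriendly personnel": False,
}

def score_likert(responses, items=ITEMS, scale_max=5):
    total = 0
    for statement, rating in responses.items():
        if items[statement]:
            total += rating
        else:
            total += (scale_max + 1) - rating  # reverse-score: 1<->5, 2<->4
    return total

responses = {
    "This bank is reliable": 4,
    "This bank is inefficient": 2,
    "This bank has unfriendly personnel": 1,
}
print(score_likert(responses))  # 4 + (6 - 2) + (6 - 1) = 13
```

This respondent scores 13 of a possible 15, indicating a quite favorable attitude toward the bank.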

There is a good deal of variation in quasi-explicit attitude assessment
techniques, but as a rule, these procedures provide more information than
do explicit attitude measurement techniques. For example, when a
Thurstone or Likert scale has been employed, a researcher can see what
specific items were especially likely to be endorsed by respondents with
particular attitudes; an investigator who finds, for instance, that those with
unfavorable attitudes toward the bank very often agreed with the statement
that “this bank has unfriendly personnel” may well have learned about a
possible cause of those negative attitudes. Similarly, ranking techniques
can give information about a large number of attitudes and so provide
insight about comparative evaluations. Precisely because quasi-explicit
procedures involve acquiring attitude-relevant information (rather than the
attitude itself), these procedures offer information not available with
explicit measurement techniques.

But this additional information is obtained at a cost. Thurstone and Likert
attitude scales have to be constructed anew for each attitude object;
obviously, one cannot use the First Federal Bank attitude scale to assess
attitudes toward other objects. (Indeed, the substantial effort needed to
obtain a sound Thurstone or Likert scale is often a deterrent to the use of
such techniques.) Procedures such as paired-comparison ratings or ranking
tasks may take more time to administer than would direct attitude
measures.

Implicit Measures
Explicit and quasi-explicit measures are overwhelmingly the most
common ways of measuring attitudes. But a variety of other techniques
have been developed that assess attitude not by directly eliciting an
evaluation of the attitude object or even by eliciting information obviously
relevant to such an overall evaluation but instead by some more
roundabout (implicit, indirect) means.

Quite a few different implicit measures of attitude have appeared (for
collections and general discussions, see De Houwer, Teige-Mocigemba,
Spruyt, & Moors, 2009; Goodall, 2011; Petty, Fazio, & Briñol, 2009a;
Wittenbrink & Schwarz, 2007). These include physiological indices, such
as autonomic responses (e.g., heart rate) and measures of brain activity (for
general reviews, see Cunningham, Packer, Kesek, & Van Bavel, 2009; Ito
& Cacioppo, 2007); priming measures, in which attitudes are assessed by
examining the speed (reaction time) with which people make evaluative
judgments when those judgments are preceded (primed) by the attitude
object (for a review, see Wittenbrink, 2007); the Implicit Association Test
(IAT), in which attitudes are assessed by examining the strength of
association (as measured by reaction time) between attitude objects and
evaluative categories (for reviews and discussion, see Fiedler, Messner, &
Bluemke, 2006; Greenwald, Poehlman, Uhlmann, & Banaji, 2009; Lane,
Banaji, Nosek, & Greenwald, 2007; Oswald, Mitchell, Blanton, Jaccard, &
Tetlock, 2013); and a variety of others (for examples and discussion, see
Kidder & Campbell, 1970; Tykocinski & Bareket-Bojmel, 2009).

What is common to all implicit measures is that it is generally not obvious
to respondents that their attitudes are being assessed. For that reason,
implicit measures are likely to be most attractive in circumstances in
which one fears respondents may, for whatever reason, distort their true
attitudes. In most research on persuasion, however, these circumstances
are rather uncommon (respondents are ensured anonymity, message topics
are generally not unusually sensitive ones, etc.); consequently, implicit
attitude measures are rarely employed (for examples and discussion, see
Briñol, Petty, & McCaslin, 2009; Hefner, Rothmund, Klimmt, &
Gollwitzer, 2011; Maio, Haddock, Watt, & Hewstone, 2009).4

Summary

As this survey suggests, a variety of attitude measurement techniques are
available. The overwhelmingly most frequently used attitude measurement
procedures are explicit or quasi-explicit techniques; reliability and validity
are more readily established for attitude measures based on these
techniques than for measures derived from implicit procedures. Explicit
procedures are often preferred over quasi-explicit techniques because of
the effort required for constructing Thurstone or Likert scales. But which
specific attitude assessment procedure an investigator employs in a given
instance will depend on the particulars of the situation. Depending on what
the researcher wants to find out, the time available to prepare the attitude
assessment, the time available to question respondents, the sensitivity of
the attitude topic, and so forth, different techniques will recommend
themselves.

Attitudes and Behaviors

The General Relationship


Attitude has been taken to be a key mental state relevant to persuasion
because of a presumed relationship between attitudes and actions. The
assumption has been that attitudes are important determinants of behavior
and, correspondingly, that one avenue to changing a person’s behavior will
be to change that person’s attitudes.5 This assumption is generally well-
founded: A number of systematic reviews have found that attitudes and
behaviors are commonly reasonably consistent (for some reviews, see
Eckes & Six, 1994; Glasman & Albarracín, 2006; M.-S. Kim & Hunter,
1993a; Kraus, 1995).6

Moderating Factors
The degree of attitude-behavior consistency has been found to vary
depending on other “moderating” factors—factors that moderate or
influence the relationship between attitudes and behaviors. A large number
of possible moderating variables have been explored, including the degree
to which the behavior is effortful or difficult (Kaiser & Schultz, 2009;
Wallace, Paulson, Lord, & Bond, 2005); the perceived relevance of the
attitude to the behavior (Snyder, 1982; Snyder & Kendzierski, 1982);
attitude accessibility (Smith & Terry, 2003); attitudinal ambivalence
(Conner et al., 2002; Jonas, Broemer, & Diehl, 2000); having a vested
interest in a position (Crano & Prislin, 1995); the extent of attitude-
relevant knowledge (Fabrigar, Petty, Smith, & Crites, 2006); and many
others. In what follows, two well-studied factors are discussed as
illustrative: the correspondence between the attitudinal and behavioral
measures, and the degree of direct experience with the attitude object.

Correspondence of Measures
One factor that influences the observed consistency between an attitudinal
measure and a behavioral measure is the nature of the measures involved.
Good evidence indicates that substantial attitude-behavior correlations will
be obtained only when the attitudinal measure and the behavioral measure
correspond in specificity (Ajzen & Fishbein, 1977). A general attitude will
probably not be especially strongly correlated with any one particular
specific behavior. A general attitude measure corresponds to a general
behavioral measure, not to a specific one.

For example, general attitudes toward religion might or might not be
strongly correlated with performance of the particular act of (say) reading
books about religious philosophy. But attitudes toward religion may well
be strongly correlated with a general religious behavior index—an index
based on multiple behaviors (whether the person reads books about
religious philosophy, attends religious services, watches or listens to
religious programs, owns religious music, donates money to religious
institutions, consults clergy about personal problems, and so on). No one
of these behaviors may be very strongly predicted by religious attitude, but
the overall pattern of these behaviors might well be associated with
religious attitude. That is, although the correlation of the general attitude
with any one of these behaviors might be relatively small, the correlation
of the general attitude with a multiple-act behavioral measure may be
much greater.7

Several investigations have yielded information about the relative strength
of the attitude-behavior association when single-act and multiple-act
behavioral measures are predicted on the basis of general attitudes. In
these studies, the average correlation between general attitude and any
single-act index of behavior was roughly .30; by contrast, the average
correlation between general attitude and a multiple-act behavioral measure
was approximately .65 (Babrow & O’Keefe, 1984; Fishbein & Ajzen,
1974; O’Keefe & Shepherd, 1982; Sjoberg, 1982; Weigel & Newman,
1976; see also Bamberg, 2003). These findings plainly indicate that
attitudinal measures and behavioral measures are likely to be rather more
strongly associated when there is substantial correspondence between the
two measures and underscore the folly of supposing that a single specific
behavior will necessarily or typically be strongly associated with a
person’s general attitude (for some relevant reviews, see Ajzen & Cote,
2008; Eckes & Six, 1994; M.-S. Kim & Hunter, 1993a; Kraus, 1995).
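The aggregation logic behind these findings can be illustrated with a small simulation (hypothetical data, not results from the cited studies): each simulated act reflects the general attitude only weakly, yet a multiple-act index built from many such acts correlates with the attitude far more strongly, because act-specific noise tends to cancel out across behaviors.

```python
# Illustrative simulation of single-act vs. multiple-act prediction.
# Attitude and behaviors are simulated; the exact correlations depend on
# the assumed signal strength (0.3) and number of acts (12).
import random
import statistics

random.seed(1)
N, ACTS = 2000, 12
attitude = [random.gauss(0, 1) for _ in range(N)]
# each act = weak attitude signal + act-specific noise
acts = [[a * 0.3 + random.gauss(0, 1) for a in attitude] for _ in range(ACTS)]
index = [sum(person_acts) for person_acts in zip(*acts)]  # multiple-act index

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

print(round(corr(attitude, acts[0]), 2))  # single act: weak (around .3)
print(round(corr(attitude, index), 2))    # multiple-act index: much stronger
```

Under these assumptions the single-act correlation comes out near .3 while the multiple-act correlation is roughly twice as large, paralleling the .30 versus .65 pattern reported above.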

Correspondingly, these findings underscore the importance of carefully
considering the focus of persuasive efforts. For instance, to encourage
participation in a community recycling program, it might seem natural to
construct persuasive messages aimed at inducing favorable attitudes
toward protecting the environment. But this is not likely to be a
particularly efficient persuasive strategy. Even if the messages succeed in
producing positive environmental protection attitudes, those general
attitudes may not be especially strongly associated with the specific
behavior that is wanted (recycling program participation). A more
effective focus for persuasive efforts might well be specific attitudes
toward participation in the recycling program, rather than general
environmental attitudes.8

Direct Experience
A second factor influencing attitude-behavior consistency is the degree of
direct experience with the attitude object. Attitudes based on direct
behavioral experience with the attitude object have been found to be more
predictive of later behavior toward the object than are attitudes based on
indirect experience. (For some examples and discussion, see Doll & Ajzen,
1992; Doll & Mallu, 1990; Eagly & Chaiken, 1993, pp. 194–200; Glasman
& Albarracín, 2006; Kraus, 1995; Steffen & Gruber, 1991. For some
complexities, see Millar & Millar, 1998.)

For example, during a housing shortage at Cornell University—well-publicized on campus—some new students had to be placed in temporary
quarters (thus giving them firsthand experience with the problem); other
new students were given permanent dormitory rooms (and so knew of the
problem less directly). The two groups had equally negative attitudes
regarding the housing crisis, but the strength of the attitude-behavior
relationship differed. Those whose attitudes were formed on the basis of
direct experience exhibited greater consistency between their attitudes and
behaviors aimed at alleviating the crisis than did students whose attitudes
were based on indirect experience (Regan & Fazio, 1977).

A similar effect was observed in a study comparing attitude-behavior
consistency for product attitudes that were based either on a trial
experience with a sample of the product (direct experience) or on exposure
to advertising messages about the product (indirect experience). Much
greater attitude-behavior consistency was observed for those persons who
had had the opportunity to try the product than for those who had merely
read about it. For example, purchase of the product was more highly
correlated with attitudes based on product trial (.57) than with attitudes
based on product advertising (.18) (R. E. Smith & Swinyard, 1983).

This finding does not mean that product trial influence strategies (e.g.,
providing free samples through the mail, offering grocery store shoppers a
taste of a new food product, etc.) will necessarily be more effective in
producing sales than will advertising strategies: Direct experience
strengthens both positive and negative attitudes. The shopper who has a
negative attitude toward a food product because of having read about it
might still come to purchase the product; the shopper whose negative
attitude is based on tasting the product, however, is much less likely to do
so.

In short, attitudes induced by direct experience will be more strongly
correlated with behavior than attitudes induced by indirect experience.
Two persons may have equally positive attitudes but may differ in whether
they act consistently with those attitudes because of underlying differences
in the ways in which the attitudes were formed.

Summary
Research has examined a great many possible moderators of attitude-
behavior consistency (for some general discussions, see Ajzen & Sexton,
1999; Eagly & Chaiken, 1993, pp. 193–215; Fazio & Roskos-Ewoldsen,
2005; Fazio & Towles-Schwen, 1999; Glasman & Albarracín, 2006;
Wallace, Paulson, Lord, & Bond, 2005). The two mentioned here,
although relatively prominent, are only illustrative.

Encouraging Attitude-Consistent Behavior


Sometimes a persuader’s challenge is not so much to change a person’s attitude as to get that person to act on that attitude. For example, it is not
enough to convince people to have favorable attitudes toward good health
(indeed, they probably already have such attitudes); what’s needed is to
convince people to make attitude-consistent behavioral choices about
exercise, diet, medical care, and the like. Similarly, persons who express
positive attitudes toward energy conservation and environmental
protection may nevertheless need to be induced to act consistently with
those views—to engage in recycling, consider packaging considerations
when buying products, choose appropriate thermostat settings, and so on.
Thus the question arises of how persuaders might approach such tasks. At
least three related strategies can be identified.

Enhance Perceived Relevance


One strategy for enhancing attitude-behavior consistency is to encourage
people to see their attitudes as relevant to their behavioral choices; people
are likely to act more consistently with their attitudes when they do.

For example, in a study by Snyder and Kendzierski (1982), participants
were undergraduates known to have attitudes favorable to psychological
research; they were asked to volunteer to participate in extra sessions of a
psychology experiment. This was an especially demanding request
(involving returning on different days, at inconvenient times, and so on).
Indeed, in the control condition only 25% of the participants volunteered,
despite their favorable attitudes. Before responding to the request, each
participant overheard a conversation between two other students
(confederates of the experimenters) who were discussing the request. The
first student said, “I don’t know if I should volunteer or if I shouldn’t
volunteer. What do you think?” In the control condition, the second
student responded, “Beats me—it’s up to you.” In the experimental
condition, the response was, “Well, I guess that whether you do or whether
you don’t is really a question of how worthwhile you think experiments
are”—a response designed to underscore the relevance of attitudes toward
psychological research as guides for decision making in this situation.
Although only 25% of the control condition participants agreed to
volunteer, 60% of the experimental condition participants agreed.

Obviously, then, one means of influencing behavior is the strategy of emphasizing the relevance of an existing attitude to a current behavioral
choice. Little systematic research evidence concerns this strategy (see
Borgida & Campbell, 1982; Prislin, 1987; Shepherd, 1985; Snyder, 1982),
but as a testimony to the strategy’s potential effectiveness, consider the
many products (DVDs, computer programs, tutoring sessions, and so on)
purchased by parents who were prodded by sellers asking, “You want your children to have a good education, don’t you? To have an edge in school?
To get ahead in life?” Fundamentally, these questions reflect the seller’s
understanding that enhancing the perceived relevance of an attitude to an
action can be a means of increasing attitude-behavior consistency.

Induce Feelings of Hypocrisy


A second strategy for encouraging attitude-behavior consistency can be
appropriate in situations in which people have previously acted
inconsistently with their attitudes: the strategy of hypocrisy induction. As
discussed more thoroughly in Chapter 5 (concerning cognitive dissonance
theory), a number of studies suggest that when persons have been
hypocritical (in the sense of believing one thing but doing something
different), one way of encouraging attitude-consistent behavior can be to
draw persons’ attention to the hypocrisy (Stone, 2012). Specifically, when
both the existing attitude and the previous inconsistency are made salient,
persons are likely subsequently to act more consistently with their
attitudes. For example, Stone, Aronson, Crain, Winslow, and Fried (1994)
varied the salience of participants’ positive attitudes about safe sex
practices (by having some participants write and deliver a speech about the
importance of safe sex) and varied the salience of previous behavior that
was inconsistent with such attitudes (by having some participants be
reminded of their past failures to engage in safe sex practices, through
having to list circumstances surrounding their past failures to use
condoms). The combination of salient attitudes and salient inconsistency
induced greater subsequent attitude-behavior consistency (reflected in
greater likelihood of buying condoms, and buying more condoms, at the
end of the experiment) than either one alone. Thus one means of inducing
attitude-behavior consistency may be to lead people to recognize their
hypocrisy.9

Encourage Anticipation of Feelings


A third strategy for enhancing attitude-behavior consistency is to invite
people to consider how they will feel if they fail to act consistently with
their attitudes. As discussed more extensively in Chapter 6 (concerning
reasoned action theory), anticipated emotions such as regret
and guilt can shape people’s behavioral choices—and hence one way of
influencing such choices is precisely by activating such anticipated
feelings. A number of studies have influenced the salience of anticipated emotions simply by asking about such feelings, with consequent effects on
intention or behavior. For example, Richard, van der Pligt, and de Vries
(1996b) asked people either to indicate how they would expect to feel after
having unprotected sex (by rating the likelihood of experiencing various
positive and negative emotions) or to indicate how they felt about having
unprotected sex (using similar ratings). Those participants whose attention
was drawn to their anticipated feelings were more likely to intend to use
condoms (and subsequently were more consistent condom users) than the
other participants. Such results plainly suggest that making salient the
emotion-related consequences of contemplated attitude-inconsistent
behavior may have the effect of enhancing attitude-behavior consistency.

Summary
These three strategies all seek to tap some general desire for consistency as
a way of influencing behavior in a circumstance in which persons will
have an opportunity to act consistently with some existing attitude. But the
strategies vary in the means of engaging that motivation. The perceived-
relevance strategy amounts to saying, “You might not have realized it, but
this really is an opportunity to act consistently with your attitude.” The
hypocrisy-induction strategy says, in effect, “You haven’t been acting
consistently with your attitude, but here is an opportunity to do so.” The
anticipated-feelings strategy implicitly says, “Here is an opportunity to act
consistently with your attitude—and think how bad you’ll feel if you
don’t.”10

Assessing Persuasive Effects

Attitude Change
Attitude measurement procedures obviously provide means of assessing
persuasive effects. To see whether a given message changes attitudes, an
investigator can assess attitudes before and after exposure to the message
(perhaps also collecting parallel attitude assessments from persons not
exposed to the message, as a way of reducing ambiguity about the
potential causes of any observed changes). Indeed, such attitude
assessment procedures are the most common ones used in studies of
persuasive effects. The concrete realizations of attitude assessment may
vary depending on the particulars of the research design; for example, in
an experiment in which participants are randomly assigned to conditions, one might dispense with the premessage attitude assessment and examine
only postmessage differences, on the assumption that random assignment
makes substantial initial differences unlikely. But effects on attitude are
the effects most frequently considered in persuasion research.
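As a minimal illustration of this logic (with hypothetical attitude ratings, not data from any actual study), a posttest-only design with random assignment can estimate a message’s persuasive effect from postmessage scores alone:

```python
# Hypothetical posttest-only design: with random assignment to conditions,
# the persuasive effect can be estimated from postmessage attitude means alone.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical 1-7 evaluative attitude ratings from the two conditions
message_group = [6, 5, 7, 6, 5, 6]
control_group = [4, 5, 3, 4, 4, 5]

# A positive difference indicates more favorable postmessage attitudes
effect = mean(message_group) - mean(control_group)
print(round(effect, 2))  # → 1.67
```

The control group stands in for the premessage assessment: because random assignment makes substantial initial differences unlikely, any postmessage difference between groups can be attributed to the message.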

Beyond Attitude Change


Although attitude has historically been considered the key mental state
relevant to persuasive effects, attitudes are not the only possible focus for
persuasive efforts. Obviously, when other psychological states are of
interest, other assessments will be useful or necessary. (For a general
discussion, see Rhodes & Ewoldsen, 2013.)

Sometimes the focus of a persuasive effort will be some determinant of attitude, such as a particular belief about the attitude object. For example,
an advertising campaign might try to persuade people that a product is
environmentally friendly (as a means of influencing persons’ attitudes
toward the product and, eventually, product purchase). The appropriate
assessment of the campaign’s persuasive effectiveness would involve
changes in that specific belief about the product, not changes in the overall
attitude toward the product. The belief that the product is environmentally
friendly might well influence the overall attitude, but to see whether the
target belief is changed by the persuasive effort, assessments of that belief
will be needed.11

Sometimes persuaders want to influence some property of an attitude other than its valence (positive or negative) and extremity. That is, rather than
influencing whether (or the degree to which) an attitude is positive or
negative, a persuader might want to influence the salience (prominence,
accessibility) of the attitude, the confidence with which it is held, the
degree to which it is linked to other attitudes, and so forth (for discussions
of some such attitudinal properties, see Conner & Armitage, 2008;
Fabrigar, MacDonald, & Wegener, 2007; Petty, Briñol, Tormala, &
Wegener, 2007, pp. 260–262; van Harreveld, Schneider, Nohlen, & van
der Pligt, 2012; Visser, Bizer, & Krosnick, 2006). For example, when
consumers already have positive attitudes toward one’s product, the
persuasive task may be to ensure that those attitudes are salient (activated)
at the right time, perhaps by somehow reminding people of their attitudes.

Some such attitudinal properties have sometimes been grouped together under the general heading of “attitude strength” (for some discussions, see Petty & Krosnick, 1995; Visser, Bizer, & Krosnick, 2006).
Conceptualizations of attitude strength vary, but (as an example) Krosnick
and Petty (1995) proposed that attitude strength is best understood as an
amalgam of persistence (stronger attitudes are more persistent than are
weaker ones), resistance (stronger attitudes are more resistant to change
than are weaker ones), impact on information processing and judgments
(stronger attitudes are more likely to affect such processes than are weaker
attitudes), and impact on behavior (stronger attitudes will have more effect
on behavior than will weaker ones). It should be apparent that persuaders
might have an interest in influencing not merely attitude (valence and
extremity) but attitude strength as well.12

Finally, persuasive efforts sometimes will be concerned not with any aspect of attitudes but rather with other mental states. For example, the key
to changing some behaviors might involve not influencing persons’
attitudes but rather changing their perceived ability to perform the desired
behavior. (For discussion of such persuasion targets, see Chapter 6
concerning reasoned action theory.) Consider, for instance, smokers who
have a positive attitude toward quitting but have not yet really tried to do
so because of a belief that quitting would be impossible; one can imagine
such people finally making a serious attempt to quit if they are persuaded
that they are indeed capable of quitting (e.g., by seeing examples of similar
people who have managed to quit).

In short, it should be apparent that persuasive efforts might seek changes in mental states other than attitude, and hence researchers will want
correspondingly different outcome assessments. Attitude change will
often, but not always, be a persuader’s goal.13

Conclusion
This introductory chapter has elucidated the concepts of persuasion and
attitude, described some common attitude assessment procedures, sketched
the relationship of attitudes and behavior, and discussed the assessment of
persuasive effects. In the following chapters, extant social-scientific theory
and research about persuasion are reviewed. Several theoretical
perspectives that have been prominent in the explanation of persuasive
effects are discussed in Chapters 2 through 8. Research on various factors
influencing persuasive effects is explored in Chapters 9 through 12.

For Review
1. What is a paradigm (exemplary) case? Give examples. Describe how
the shared features of paradigm cases of a concept can provide
clarification of the concept. Explain how the “sharp edges” of a
definition can lead to disputes over borderline cases.
2. What are the shared features of exemplary cases of persuasion?
Explain how a successful attempt to influence is such a feature.
Explain how the persuader’s intending to influence is such a feature.
Explain how some measure of freedom on the persuadee’s part is
such a feature. Explain how having the effects be achieved through
communication is such a feature. Explain how a change in the
persuadee’s mental state is such a feature. Explain how features
present in full-fledged ways in paradigm cases can, when present in
only some diminished fashion, make for borderline cases of a
concept.
3. Identify one important mental state often changed in persuasion.
What is an attitude? Explain why attitudes are a common target for
persuasive messages.
4. What are explicit attitude measurement techniques? What are
semantic differential evaluative scales? Explain how they work. What
are single-item attitude measures? What is the feeling thermometer?
Identify a circumstance in which single-item attitude measures are
especially useful. Identify and explain a weakness of such measures.
5. What are quasi-explicit attitude measurement techniques? Explain
how a respondent’s agreement or disagreement with belief statements
can serve as a measure of attitude. Describe the process of identifying
suitable belief statements for such attitude measurement procedures.
Identify an advantage (and accompanying disadvantage) of using
such attitude measures.
6. What are implicit attitude measurement techniques? Give examples.
How are implicit attitude measures different from explicit and quasi-
explicit measures?
7. Are attitudes and behaviors generally consistent? What factors
influence the degree of attitude-behavior consistency? How does the
correspondence between the attitudinal measure and the behavioral
measure influence attitude-behavior consistency? How does the
degree of direct experience with the attitude object influence attitude-
behavior consistency? Describe three general ways of encouraging
attitude-consistent behavior. Explain how increasing the perceived
relevance of an attitude to a behavior might enhance attitude-behavior consistency. Explain how inducing feelings of hypocrisy might
enhance attitude-behavior consistency. Explain how encouraging
anticipation of feelings might enhance attitude-behavior consistency.
8. How can persuasion be assessed using attitude measurement
techniques? Explain why other assessments (that is, other than
attitude) may be useful or necessary.

Notes
1. This point can also be expressed by saying that to persuade is a
perlocutionary act, whereas (for example) to urge is an illocutionary act. In
this regard there is a difference between “A persuaded B” and “A
attempted to persuade B” (Gass & Seiter, 2004, p. 27n3).

2. For discussion of some challenges in sharply distinguishing persuasion and coercion, see Powers (2007).

3. Descriptions of the relationship between persuasion and attitude change vary. For instance, sometimes attitude change is treated as a necessary
aspect of persuasion (e.g., in the claim that “persuasion inherently has
attitude change as its goal”; Beisecker & Parson, 1972, p. 5); sometimes
persuasion is defined as one species of attitude change (e.g., as “a
modification in one’s attitude that is the consequence of exposure to a
communication”; Levy, Collins, & Nail, 1998, p. 732); and sometimes
persuasion is simply treated as identical with attitude change generally, no
matter how such change arises (e.g., Chaiken, Wood, & Eagly, 1996, p.
702). Whatever the particular characterization of the relationship,
however, persuasion and attitude change have long been seen as closely
linked.

4. Recent work on implicit measures has invited important new lines of investigation concerning the nature of attitude, but many open questions
remain (for some discussion, see Bodenhausen & Gawronski, 2013;
Bohner & Dickel, 2011; De Houwer, 2009; Gawronski & Bodenhausen,
2007; Petty, Fazio, & Briñol, 2009b).

5. For a period of time, it appeared as if the assumption of a close relationship between attitudes and behaviors was mistaken. Some classic
studies (e.g., LaPiere, 1934) and some reviews (e.g., Wicker, 1969)
suggested that people’s actions were commonly inconsistent with their
attitudes. But these pessimistic conclusions about the attitude-behavior relationship were overdrawn.

6. These reviews have used different procedures and analyzed different numbers of studies, but their estimates of the mean attitude-behavior
correlation range from roughly .40 to .50. Larger mean correlations are
reported when various methodological artifacts are corrected or with
optimal levels of moderator variables.

7. Ajzen and Fishbein’s (1977) analysis specifies four ways in which attitudes and behaviors might correspond (action, target, context, and
time). So, for example, the behavior of attending church services on
campus this Sunday corresponds most directly to the attitude toward
attending church services on campus this Sunday; this attitude and
behavior correspond in the action specified (attending), the target toward
which the action is directed (church services), the context of the action (on
campus), and the time of the action (this Sunday). A more general
behavior (e.g., one without a specified context or time, such as attending
church services) corresponds most directly to a more general attitude
(obviously, the attitude toward attending church services). Thus for an
attitude toward an object (e.g., a consumer product), the corresponding
behavioral measure would include assessments involving various actions,
contexts, and times—which is the point of the multiple-act behavioral
measure.

8. Notably, reasoned action theory (discussed in Chapter 6) includes attitudes toward specific behaviors as a key determinant of behavioral
intentions.

9. As discussed in Chapter 5 (on cognitive dissonance theory), however, hypocrisy induction efforts can also backfire as a behavioral influence
mechanism; instead of changing their future behaviors to be more
consistent with their attitudes, people might change their attitudes to be
consistent with their previous behavior (Fried, 1998).

10. Actually, some sense of hypocrisy may be a deeper connecting thread among these strategies. The perceived relevance strategy and the
anticipated feelings strategy might be described as alerting people to
hypocrisy (or to potential hypocrisy or hypocrisy-related feelings). So, for
example, the reason that heightening the perceived relevance of an attitude
to an action enhances attitude-behavior consistency may be precisely that
such enhanced perceived relevance leads to an increased recognition of
past inconsistency (and thus to feelings of hypocrisy, guilt, and so on—which then motivate attitude-consistent future behavior) and/or to an
increased expectation that negative feelings (guilt, regret, and so forth) will
arise if attitude-inconsistent behavior is undertaken (with attitude-
consistent behavior then motivated by a desire to avoid such negative
feelings).

11. As discussed in Chapter 8, the elaboration likelihood model (ELM) has pointed to several “metacognitive” states that might influence attitude,
such as thought confidence (Briñol & Petty, 2009a, 2009b; Petty & Briñol,
2010, pp. 230–231).

12. Research on attitude strength is somewhat unsettled conceptually. For example, Krosnick and Petty’s (1995, p. 4) approach treats strength’s
effects (persistence, resistance, and impact on information processing,
judgments, and behavior) as the “defining features” of strength. But if
strength is defined as (say) resistance, then it is necessarily true that
“strong” attitudes are resistant. That is, this leaves unanswered the
question of what makes attitudes resistant (saying “these attitudes are
resistant because they are strong” would be akin to saying “these men are
single because they are unmarried”). An alternative approach might define
attitude strength not by its effects but by the conjunction of various effect-
independent properties of attitude (e.g., an attitude’s interconnectedness
with other attitudes, its importance, and the certainty with which it is held)
or even dispense with any overarching concept of attitude strength in favor
of studying the particular individual effect-independent features (see
Visser, Bizer, & Krosnick, 2006). It will obviously be a substantial
undertaking to distinguish these various features conceptually and to
investigate their empirical interrelationships and effects (for work along
these lines, see Petty, Briñol, Tormala, & Wegener, 2007).

13. In experimental persuasion research, the most common outcome assessments have been of attitudes, intentions, and behaviors. The
persuasiveness of a given message might vary across these outcomes (e.g.,
a message might produce greater attitude change than behavioral change).
However, where the research question concerns the relative persuasiveness
of two message kinds, those three outcomes yield substantively identical
conclusions. Carefully expressed: The mean effect sizes (describing the
difference in persuasiveness between two message types) for attitudinal
outcomes, for intention outcomes, and for behavior outcomes are
statistically indistinguishable and hence functionally interchangeable
(O’Keefe, 2013b). Hence meta-analyses aimed at drawing conclusions about relative persuasiveness need not (should not) distinguish studies on
the basis of the outcome measure used; similarly, in formative message
design research (e.g., campaign planning) that tests the relative
persuasiveness of alternative possible messages, assessments of
appropriate intentions will provide a perfectly suitable guide to the relative
behavioral persuasiveness of the messages.

Chapter 2 Social Judgment Theory

Judgments of Alternative Positions on an Issue


The Ordered Alternatives Questionnaire
The Concept of Ego-Involvement
Ego-Involvement and the Latitudes
Measures of Ego-Involvement
Reactions to Communications
Assimilation and Contrast Effects
Attitude Change Effects
Assimilation and Contrast Effects Reconsidered
Adapting Persuasive Messages to Recipients Using Social
Judgment Theory
Critical Assessment
The Confounding of Involvement With Other Variables
The Concept of Ego-Involvement
The Measures of Ego-Involvement
Conclusion
For Review
Notes

Social judgment theory is a theoretical perspective most closely associated with Muzafer Sherif, Carolyn Sherif, and their associates (C. W. Sherif,
Sherif, & Nebergall, 1965; M. Sherif & Hovland, 1961; for a classic
review, see Kiesler, Collins, & Miller, 1969, pp. 238–301; for additional
discussions, see Eagly & Chaiken, 1993, pp. 363–382; Granberg, 1982; C.
W. Sherif, 1980). The central tenet of social judgment theory is that
messages produce attitude change through judgmental processes and
effects. More specifically, the claim is that the effect of a persuasive
communication depends upon the way in which the receiver evaluates the
position it advocates.

Hence attitude change is seen as a two-step process: First, the receiver makes an assessment of what position is being advocated by the message.
Then attitude change occurs after this judgment—with the amount and
direction of change dependent on that judgment.

The plausibility of this general approach should be apparent: Our reaction to a particular persuasive communication will depend (at least in part) on
what we think of—how favorable we are toward—the point of view that it
advocates. But this suggests that, in order to understand a recipient’s
reaction to a given message, it is important to understand how the receiver
assesses the various positions on that issue (that is, the various different
stands that a message might advocate). Hence the next section discusses
the nature of people’s judgments of the alternative positions on an issue.
Subsequent sections discuss receivers’ reactions to persuasive messages,
how social judgment theory suggests adapting messages to recipients, and
some weaknesses of social judgment theory.

Judgments of Alternative Positions on an Issue


On any given persuasive issue, a number of different positions or points of
view are likely to be available. Consider, for example, some different
possible stands on an issue such as gun control: One might think that there
should be very few restrictions on ordinary citizens’ possession of firearms
or (the other extreme) that almost no ordinary citizen should be permitted
to possess a firearm; or one might hold any number of intermediate
positions varying in the degree of restriction.

A person is likely to have different assessments of these various positions, finding some of them acceptable, others objectionable, perhaps some neither particularly acceptable nor unacceptable. If, as social judgment
theory suggests, a person’s reaction to a persuasive message depends on
the person’s judgment of the position being advocated, then it is important
to be able to assess persons’ judgments of the various possible positions.

The Ordered Alternatives Questionnaire


For obtaining person’s judgments of the various different positions, social
judgment theory researchers developed the Ordered Alternatives
questionnaire. An Ordered Alternatives questionnaire provides the
respondent with a set of statements, each representing a different point of
view on the issue being studied. The statements are chosen so as to
represent the range of positions on the issue (from the extreme view on
one side to the extreme view on the other) and are arranged in order from
one extreme to the other—hence the name “Ordered Alternatives.” For
example, the following Ordered Alternatives questionnaire was developed
for research on a presidential election campaign (M. Sherif & Hovland, 1961, pp. 136–137; for other examples, see Hovland, Harvey, & Sherif,
1957; C. W. Sherif, 1980).

______ (A) The election of the Republican presidential and vice-presidential candidates in November is absolutely essential from all
angles in the country’s interests.
______ (B) On the whole the interests of the country will be served
best by the election of the Republican candidates for president and
vice-president in the coming election.
______ (C) It seems that the country’s interests would be better
served if the presidential and vice-presidential candidates of the
Republican party are elected this November.
______ (D) Although it is hard to decide, it is probable that the
country’s interests may be better served if the Republican presidential
and vice-presidential candidates are elected in November.
______ (E) From the point of view of the country’s interests, it is
hard to decide whether it is preferable to vote for the presidential and
vice-presidential candidates of the Republican party or the
Democratic party in November.
______ (F) Although it is hard to decide, it is probable that the
country’s interests may be better served if the Democratic presidential
and vice-presidential candidates are elected in November.
______ (G) It seems that the country’s interests would be better
served if the presidential and vice-presidential candidates of the
Democratic party are elected this November.
______ (H) On the whole the interests of the country will be served
best by the election of the Democratic candidates for president and
vice-president in the coming election.
______ (I) The election of the Democratic presidential and vice-
presidential candidates in November is absolutely essential from all
angles in the country’s interests.

The respondent is asked first to indicate the one statement that he or she
finds most acceptable (for example, by putting a + + in the corresponding
blank). The respondent is then asked to indicate the other statements that
are acceptable to the respondent (+), the one statement that is most
objectionable (XX), and the other statements that are unacceptable (X).
The respondent need not mark every statement as acceptable or
unacceptable; that is, some of the positions can be neither accepted nor
rejected by the respondent (and so be left blank, or marked with a zero).
(For procedural details, see Granberg & Steele, 1974.)

These responses are said to form the person’s judgmental latitudes on that
issue. The range of positions that the respondent finds acceptable form the
respondent’s latitude of acceptance, the positions that the respondent finds
unacceptable constitute the latitude of rejection, and the positions that the
respondent neither accepts nor rejects form the latitude of noncommitment.

The structure of these judgmental latitudes can vary from person to person.
In fact, two people might have the same “most preferred” position on an
issue, but differ in their assessment of the other positions on the issue and
hence have very different latitudes of acceptance, rejection, and
noncommitment. For example, suppose that on the presidential election
issue, Carol and Mary both find statement B most acceptable: their own
most-preferred position is that, on the whole, the interests of the country
will be best served by the election of the Republicans. Mary finds
statements A, C, D, and E also acceptable, is noncommittal toward F, G,
and H, and rejects only the extreme Democratic statement I; Carol, on the
other hand, thinks that A is the only other acceptable statement, is
noncommittal regarding C and D, and rejects E, F, G, H, and I. Mary thus
has a larger latitude of acceptance than Carol (Mary finds five positions
acceptable, Carol only two), a larger latitude of noncommitment (three
positions as opposed to two), and a smaller latitude of rejection (only one
position is objectionable to Mary, whereas five are to Carol). Notice (to
jump ahead for a moment) that even though Carol and Mary have the same
most preferred position, they would presumably react very differently to a
message advocating position E: Mary finds that to be an acceptable
position on the issue, but Carol finds it objectionable. As this example
suggests, from the point of view of social judgment theory, a person’s
stand on an issue involves not merely a most preferred position, but also
assessment of all the other possible positions on the issue—as reflected in
the set of judgmental latitudes (the latitudes of acceptance, rejection, and
noncommitment).
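The marking scheme described above can be tallied mechanically. The following sketch (hypothetical code, not part of the original research procedure) scores Mary’s and Carol’s marks into the three latitudes:

```python
# Hypothetical scoring of an Ordered Alternatives questionnaire.
# Marks: "++" most acceptable, "+" acceptable, "XX" most objectionable,
# "X" unacceptable, "0" (or blank) neither accepted nor rejected.

def latitudes(marks):
    """Return the sizes of the latitudes of acceptance, rejection,
    and noncommitment from a dict of statement -> mark."""
    acceptance = sum(1 for m in marks.values() if m in ("++", "+"))
    rejection = sum(1 for m in marks.values() if m in ("XX", "X"))
    noncommitment = sum(1 for m in marks.values() if m in ("0", ""))
    return acceptance, rejection, noncommitment

# Mary's and Carol's judgments of statements A-I from the example above
mary = {"A": "+", "B": "++", "C": "+", "D": "+", "E": "+",
        "F": "0", "G": "0", "H": "0", "I": "XX"}
carol = {"A": "+", "B": "++", "C": "0", "D": "0",
         "E": "X", "F": "X", "G": "X", "H": "X", "I": "XX"}

print(latitudes(mary))   # → (5, 1, 3): acceptance 5, rejection 1, noncommitment 3
print(latitudes(carol))  # → (2, 5, 2): acceptance 2, rejection 5, noncommitment 2
```

The counts reproduce the comparison in the example: although both respondents mark statement B as most acceptable, their latitude structures differ sharply.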

Social judgment theory proposes that the structure of the judgmental latitudes systematically varies depending on one’s level of ego-involvement with the issue. Before discussing this relationship, some
attention to the concept of ego-involvement is required.

The Concept of Ego-Involvement


The concept of ego-involvement has been variously described in social
judgment theory, and there is room for some uncertainty about just what it comes to (for discussion, see Wilmot, 1971a). However, very broadly
speaking, what is meant by “ego-involvement” is roughly the same as
would be meant in colloquially referring to someone’s being “involved
with an issue.” Thus a person might be said to be ego-involved when the
issue has personal significance to the individual, when the person’s stand
on the issue is central to his or her sense of self (hence ego-involvement),
when the issue is important to the person, when the person takes a strong
stand on the issue, when the person is strongly committed to the position,
and so forth. Ego-involvement is thus in a sense an omnibus concept,
meant to refer to this constellation of properties.

Notice, however, that ego-involvement is distinct from the extremity of the most-preferred position (C. W. Sherif, 1980, p. 36; M. Sherif & Hovland,
1961, p. 171). That is, to be ego-involved on an issue is not the same thing
as holding an extreme position on the issue. For example, one might take
an extreme stand on an issue without being highly ego-involved (e.g., a
person might hold an extreme position on the issue of controlling the
federal deficit without being especially ego-involved in that stand). And
one can be highly ego-involved in a middle-of-the-road position (“I’m
strongly committed to this moderate position, my sense of self is
connected with my being a moderate on this issue,” and so on). Thus ego-
involvement and position extremity are conceptually different. Social
judgment theory does suggest that ego-involvement and position extremity
will be empirically related, however, such that those with more extreme
positions on an issue will tend to be more ego-involved in that issue (M.
Sherif & Hovland, 1961, pp. 138–140). But this empirical relationship
should not obscure the conceptual distinction between ego-involvement
and position extremity.

Ego-Involvement and the Latitudes


Social judgment theory suggests that one’s level of ego-involvement on an
issue will influence the structure of one’s judgmental latitudes on that
issue. Specifically, the claim is that as one’s level of ego-involvement
increases, the size of the latitude of rejection will also increase (and the
sizes of the latitudes of acceptance and noncommitment will decrease).
Hence highly involved persons will have a relatively large latitude of
rejection and relatively small latitudes of acceptance and noncommitment.
That is, the more involved person will find relatively few stands on the
issue to be acceptable (small latitude of acceptance), won’t be neutral or
noncommittal toward very many positions (small latitude of noncommitment), and will find many positions objectionable (large
latitude of rejection).

To gather evidence bearing on this claim, one needs a way to assess the
relative sizes of the judgmental latitudes (which the Ordered Alternatives
questionnaire provides) and a procedure for assessing ego-involvement.
Two such ego-involvement measurement procedures are described in the
next section.

Measures of Ego-Involvement
Several different techniques have been devised for assessing ego-
involvement. Two particular measures can serve as useful examples.

Size of the Ordered Alternatives Latitude of Rejection


In early studies of the relationship of ego-involvement to the structure of
the judgmental latitudes, the participants were often persons whose
involvement levels could be presumed on the basis of their group
memberships.1 For example, to locate people who could be presumed to
be relatively highly involved in an election campaign, researchers might
go to local Democratic and Republican party headquarters. For
comparison, other participants could be obtained from unselected samples
(e.g., undergraduate students) that presumably would be comparatively
lower in ego-involvement.

In studies such as these, persons in the presumably higher-involvement
groups had larger latitudes of rejection than did presumably less involved
participants (for a general review of such work, see C. W. Sherif et al.,
1965). On the basis of such results, the size of the latitude of rejection on
the Ordered Alternatives questionnaire has been recommended as a
measure of ego-involvement (e.g., Granberg, 1982, p. 313; C. W. Sherif et
al., 1965, p. 234): The larger one’s latitude of rejection, the greater one’s
degree of ego-involvement.

Of course, as the latitude of rejection increases, the combined size of the
latitudes of acceptance and noncommitment must necessarily decrease. It
appears that the latitude of noncommitment tends to shrink more than does
the latitude of acceptance. That is, as the latitude of rejection increases, the
latitude of noncommitment decreases but there is sometimes little change
in the latitude of acceptance (for a review, see C. W. Sherif et al., 1965).

This regularity has sometimes led to the suggestion that the size of the
latitude of noncommitment might serve as a measure of ego-involvement
(e.g., C. W. Sherif et al., 1965, p. 234), but the size of the latitude of
rejection is the far more frequently studied index.
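The arithmetic behind this index is simple counting. As a minimal sketch (the coding scheme below is invented for illustration, not taken from Sherif et al.): each Ordered Alternatives position statement is marked acceptable, marked objectionable, or left unmarked, and each latitude's size is the number of positions so coded.

```python
from collections import Counter

def latitude_sizes(codes):
    """Count the positions falling in each judgmental latitude.

    codes: one entry per position statement on the questionnaire, coded
    "accept" (marked acceptable), "reject" (marked objectionable), or
    "none" (left unmarked, i.e., noncommitment).
    """
    counts = Counter(codes)
    return {
        "acceptance": counts["accept"],
        "noncommitment": counts["none"],
        "rejection": counts["reject"],
    }

# A hypothetical nine-position issue: the latitude-of-rejection size
# (here 4) would serve as the ego-involvement index.
codes = ["accept", "accept", "none", "none", "none",
         "reject", "reject", "reject", "reject"]
print(latitude_sizes(codes))
# {'acceptance': 2, 'noncommitment': 3, 'rejection': 4}
```

On this coding, the shrinkage pattern described above would appear as the "none" count falling faster than the "accept" count as the "reject" count grows.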

Own Categories Procedure


A second measure of ego-involvement was derived from what is called the
Own Categories procedure. Participants are presented with a large number
of statements (60 or more) on the topic of interest and are asked to sort
these statements into however many categories they think necessary to
represent the range of positions on the issue. They are told to sort the items
such that those in a given category seem to reflect the same basic
viewpoint on the topic (for procedural details, see C. W. Sherif et al.,
1965, pp. 92–126). What is of central interest is the number of categories a
respondent creates. As in the studies of the Ordered Alternatives
questionnaire, results were compared from selected and unselected
respondents whose involvement levels could be presumed on independent
grounds.

Systematic differences were observed in the number of categories created.
Those participants who were presumably highly involved created fewer
categories than did low-involvement participants.2 Such results suggested
the use of the Own Categories procedure as an index of ego-involvement:
The fewer categories created, the greater the degree of ego-involvement
(e.g., C. W. Sherif et al., 1965, p. 126).

This result can seem to be counterintuitive, but it makes good sense from
the perspective of social judgment theory (particularly against the
backdrop of assimilation and contrast effects, to be discussed shortly).
With increasing ego-involvement, increased perceptual distortion is likely.
When involvement is exceptionally high, the individual’s thinking takes on
an absolutist, black-or-white quality; in such a case, only two categories
might be thought necessary (“Here are the few statements representing the
right point of view—the one I hold—and here are all the wrongheaded
ones”).2

Reactions to Communications
Social judgment theory holds that a receiver’s reaction to a given
persuasive communication will depend centrally on how he or she
evaluates the point of view it is advocating. That implies that, in reacting
to a persuasive message, the receiver must initially come to decide just
what position the message is forwarding. Social judgment theory suggests
that, in making this judgment, the receiver may be subject to perceptual
distortions called assimilation and contrast effects.

Assimilation and Contrast Effects


Assimilation and contrast effects are perceptual effects concerning the
judgment of what position is being advocated by a message. An
assimilation effect is said to occur when a receiver perceives the message
to be advocating a position closer to his or her own position than it actually
does; that is, an assimilation effect involves the receiver minimizing the
difference between the message’s position and the receiver’s position. A
contrast effect is said to occur when a receiver perceives the message to be
advocating a position farther away from his or her position than it actually
does; thus a contrast effect involves the receiver’s exaggerating the
difference between the message’s position and the receiver’s position.3

Social judgment theory offers a rule of thumb concerning the occurrence
of assimilation and contrast effects (C. W. Sherif et al., 1965, p. 129).
Broadly speaking, a communication advocating a position in the latitude of
acceptance is likely to be assimilated (perceived as even closer to the
receiver’s own view), and a communication advocating a position in the
latitude of rejection is likely to be contrasted (perceived as even more
discrepant from the receiver’s view). In the latitude of noncommitment,
either assimilation or contrast effects might be found; the location of the
boundary in the latitude of noncommitment (the point at which
assimilation effects stop and contrast effects begin) is not clear, but it
seems likely to occur somewhere closer to the latitude of rejection than the
latitude of acceptance (Kiesler et al., 1969, p. 247).
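This rule of thumb can be sketched as a toy model (the numeric scale, the classification rule, and the size of the perceptual shift are all invented for illustration; the theory itself states only the direction of distortion):

```python
def perceived_position(actual, own, classify, shift=1.0):
    """Toy sketch of assimilation and contrast on a numeric position scale.

    classify(actual) -> "acceptance" | "noncommitment" | "rejection",
    indicating the latitude in which the message's actual position falls.
    """
    if actual == own:
        return actual                  # no difference to distort
    toward_own = -shift if actual > own else shift
    region = classify(actual)
    if region == "acceptance":
        return actual + toward_own     # assimilation: pulled toward own view
    if region == "rejection":
        return actual - toward_own     # contrast: pushed farther away
    return actual                      # noncommitment: either effect possible

# Hypothetical receiver at position 3 on a 1-9 scale:
classify = lambda p: ("acceptance" if p <= 4
                      else "rejection" if p >= 7
                      else "noncommitment")
print(perceived_position(4, 3, classify))  # 3.0 (assimilated)
print(perceived_position(8, 3, classify))  # 9.0 (contrasted)
```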

Notice, thus, that the perceived position of a persuasive communication
may be different for persons with differing stands on the issue. An
illustration of this phenomenon is provided by a study in which
participants saw a message concerning a presidential election. The
communication briefly listed the claims of the two major parties on
various campaign issues, but did not take sides or draw clear conclusions.
When pro-Republican respondents were asked what position the message
advocated, they characterized it as being slightly pro-Democratic; pro-Democratic respondents, on the other hand, saw the message as being
slightly pro-Republican. Both groups of respondents thus exhibited a
contrast effect, exaggerating the difference between the message and their
own position (M. Sherif & Hovland, 1961, p. 151). (For other research
illustrating assimilation and contrast effects, see Atkins, Deaux, & Bieri,
1967; Hurwitz, 1986; Manis, 1960; Merrill, Grofman, & Adams, 2001; C.
W. Sherif et al., 1965, pp. 149–163.)

Assimilation and contrast effects appear to be magnified by ego-involvement. That is, there is a greater degree of perceptual distortion
(regarding what position a message is advocating) as the receiver’s degree
of involvement increases (C. W. Sherif, Kelly, Rodgers, Sarup, & Tittler,
1973; C. W. Sherif et al., 1965, p. 159). This relationship can be seen to
underlie the previously described involvement-related differences revealed
in the Own Categories procedure: Because higher ego-involvement means
a greater propensity toward perceptual distortion, the higher-involvement
perceiver finds it difficult to discern fine differences between advocated
positions—and thus needs fewer categories to represent (what appears to be) the range of different positions on the issue.

However, assimilation and contrast effects are minimized by messages that
make clear what position is being advocated. That is, only relatively ambiguous messages are subject to assimilation and contrast effects (see
Granberg & Campbell, 1977; C. W. Sherif et al., 1965, p. 153; M. Sherif &
Hovland, 1961, p. 153). When a persuader makes clear just what view is
being forwarded, assimilation and contrast effects are minimized.4

Attitude Change Effects


Whether receivers will change their attitudes following reception of a
persuasive communication is said by social judgment theory to depend on
what position the message is perceived to be advocating—that is, the
perceived location of the communication with respect to the latitudes of
acceptance, rejection, and noncommitment. The general principle offered
by social judgment theory is this: A communication that is perceived to
advocate a position that falls in the latitude of acceptance or the latitude of
noncommitment will produce attitude change in the advocated direction
(that is, in the direction sought by the message), but a communication that
is perceived to advocate a position that falls in the latitude of rejection will
produce no attitude change and may even provoke “boomerang” attitude
change (i.e., change in the direction opposite that advocated by the
message). A number of studies have reported results consistent with this
general principle (Atkins et al., 1967; Eagly & Telaak, 1972; B. T.
Johnson, Lin, Symons, Campbell, & Ekstein, 1995; Sarup, Suchner, &
Gaylord, 1991; C. W. Sherif et al., 1973; Siero & Doosje, 1993).
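The general principle can be summarized as a small decision sketch (a hypothetical representation in which each latitude is a set of position labels; this is an illustration of the principle, not a formal model from the theory):

```python
def predicted_reaction(perceived, acceptance, noncommitment, rejection):
    """Predicted attitude-change direction for a message whose *perceived*
    position falls in one of the receiver's judgmental latitudes."""
    if perceived in acceptance or perceived in noncommitment:
        return "change toward the advocated position"
    if perceived in rejection:
        return "no change, possibly boomerang"
    raise ValueError("position not judged by this receiver")

# Hypothetical receiver on a nine-position issue labeled A through I:
acceptance = {"B", "C"}
noncommitment = {"A", "D", "E"}
rejection = {"F", "G", "H", "I"}
print(predicted_reaction("E", acceptance, noncommitment, rejection))
# change toward the advocated position
print(predicted_reaction("G", acceptance, noncommitment, rejection))
# no change, possibly boomerang
```

Note that the input is the *perceived* position: on the theory's account, any assimilation or contrast effect happens first, and only then does this classification apply.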

This principle has important implications for the question of the effects of
discrepancy (the difference between the message’s position and the
receiver’s position) on attitude change. A persuader might advocate a
position very discrepant from (very different from) the receiver’s own
view, thus asking for a great deal of attitude change; or a persuader might
advocate a position only slightly discrepant from the receiver’s, so seeking
only a small amount of change. The question is: What amount of
discrepancy (between the message’s position and the receiver’s position)
will produce the greatest amount of attitude change in the advocated
direction?

Social judgment theory suggests that with increasing discrepancy, more
favorable attitude change will occur—up to a point, namely, the latitude of
rejection. But beyond that point, increasing discrepancy will produce less
favorable reactions (indeed, may produce boomerang attitude change).
Thus the general relationship between discrepancy and attitude change is
suggested to be something like an inverted-U-shaped curve, and indeed the
available research evidence is largely consistent with that suggestion. (For
a general discussion of this view, see Whittaker, 1967. For findings of
such a relationship—at least under some conditions—see, e.g., E.
Aronson, Turner, & Carlsmith, 1963; Freedman, 1964; Sakaki, 1980; M. J.
Smith, 1978; Whittaker, 1963, 1965. For complexities and additional
discussion, see Chung, Fink, & Kaplowitz, 2008; Clark & Wegener, 2013;
Fink & Cai, 2013; Fishbein & Lange, 1990; Kaplowitz & Fink, 1997.)
However, for social judgment theory, any effects of discrepancy on
attitude change are simply indirect reflections of the role played by the
judgmental latitudes. Correspondingly, the inverted-U curve (relating
discrepancy to attitude change) is only a crude and general guide to what
persuasive effects may be expected in a given circumstance.5
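That inverted-U can be illustrated with a toy function (the numeric scale, the boundary value, and the linear rise and fall are all invented for the sketch; the research supports only the general shape): change grows with discrepancy until the advocated position crosses into the latitude of rejection, then declines and can eventually go negative (boomerang).

```python
def toy_change(discrepancy, rejection_boundary):
    """Toy inverted-U: attitude change as a function of message-receiver
    discrepancy, peaking at the latitude-of-rejection boundary."""
    if discrepancy <= rejection_boundary:
        return discrepancy  # within acceptance/noncommitment: more asked, more obtained
    # inside the latitude of rejection: effect declines, eventually boomerangs
    return 2 * rejection_boundary - discrepancy

for d in range(1, 10):
    print(d, toy_change(d, rejection_boundary=4))
# change rises 1, 2, 3, 4, then falls 3, 2, 1, 0, -1
```

Lowering `rejection_boundary` (a larger latitude of rejection, as with a highly involved receiver) both shrinks the peak and moves it toward smaller discrepancies, which is the interplay the theory describes.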

To illustrate this complexity, consider the interplay of discrepancy and
ego-involvement. As receivers become increasingly ego-involved in an
issue, their latitudes of rejection presumably grow larger. Thus for low-
involvement receivers, a persuader might be able to advocate a very
discrepant viewpoint without entering the (small) latitude of rejection; by
contrast, for high-involvement receivers, a very discrepant message will
almost certainly fall into the (large) latitude of rejection. Thus with any
one influence attempt, a persuader facing a highly involved receiver may
be able to advocate safely only a small change; obtaining substantial
change from the highly involved receiver may require a series of small
steps over time. By contrast, considerable attitude change might be
obtained from the low-involvement receiver rather rapidly, through
advocating a relatively discrepant (but not too discrepant) position (as
suggested by Harvey & Rutherford, 1958).

The larger point for persuaders is this: Effective persuasion requires
knowing more than the receiver’s most preferred position; one needs to
also know the structure of the judgmental latitudes. As noted earlier, two
people with the same most preferred position might nevertheless have very
different evaluations of other positions on the issue—and so a given
persuasive message might fall in one recipient’s latitude of acceptance but
the other’s latitude of rejection, even though the two people have the same
most preferred position on the issue.

Assimilation and Contrast Effects Reconsidered


The attitude-change principles discussed in the preceding section refer to
what position the message is perceived to advocate. It thus becomes
important to reconsider the role of assimilation and contrast effects in
persuasion, since these influence the perceived position of a message. The
key point to be noticed is this: Assimilation and contrast effects reduce the
effectiveness of persuasive messages.

The Impact of Assimilation and Contrast Effects on Persuasion
Consider first the case of a contrast effect. If a message that advocates a
position in the receiver’s latitude of rejection—and so already is unlikely
to produce much favorable attitude change—is perceived as advocating an
even more discrepant position, then the chances for favorable attitude
change diminish even more (and the chances for boomerang attitude
change increase). Obviously, then, contrast effects impair persuasive
effectiveness.

But assimilation effects also reduce persuasive effectiveness. When an
assimilation effect occurs, the perceived discrepancy between the
message’s stand and the receiver’s position is reduced—and hence the
communicator is seen as asking for less change than he or she actually
seeks.6 Consider the case of a message that advocates a position in the
latitude of acceptance or the latitude of noncommitment; with increasing
perceived discrepancy, the chances of favorable attitude change
presumably increase. But an assimilation effect will reduce the perceived
discrepancy between the message’s view and the receiver’s position, and
so it will reduce the amount of attitude change obtained. Indeed, in the
extreme case of complete assimilation (when the receivers think that the
message is simply saying what they already believe), no attitude change
will occur, because the audience has misperceived the communicator’s
position. That is, when the recipient mistakenly believes (because of the
perceptual distortion of assimilation) that the message advocates the
recipient’s current position, then the recipient’s attitude will not change.7

However, persuaders can minimize assimilation and contrast effects by
being clear about their position on the persuasive issue at hand. As
mentioned previously, only relatively ambiguous communications (that is,
messages that aren’t clear about their stand on the persuasive issue) are
subject to assimilation and contrast effects. Thus social judgment theory
emphasizes for persuaders the importance of making one’s advocated
position clear.

Ambiguity in Political Campaigns


One might think that the prevalence (and apparent success) of ambiguity in
political campaigns suggests that something is amiss (with social judgment
theory, if not the political campaign process). After all, if ambiguity
reduces persuasive effectiveness, why is it that successful political
campaigners so frequently seem to be ambiguous about their stands on the
issues?

To understand this phenomenon, it is important to keep in mind the
persuasive aims of election campaigns. Ordinarily the candidate is not
trying to persuade audiences to favor this or that approach to the matter of
budget policy or gun control or any other “campaign issue.” Rather, the
persuasive aim of the campaign is to get people to vote for the candidate—
and candidates are never ambiguous about their stand on that question.
Thus on the topic on which candidates seek persuasion (namely, who to
vote for), candidates obviously take clear positions.

Candidates do sometimes adopt ambiguous positions on “campaign issues”
(economic policy, social issues, and so on). If a candidate were trying to
persuade voters that “the right approach to the issue of gun control is thus-
and-so,” then being ambiguous about the candidate’s position on gun
control would reduce the chances of successful persuasion on that topic.
Such ambiguity would encourage assimilation and contrast effects, thereby
impairing the candidate’s chances of changing anyone’s mind about that
issue.

But, ordinarily, candidates don’t seek to persuade voters about the wisdom
of some particular policy on some campaign issue. Usually, the candidate
hopes to encourage voters to believe that the candidate’s view on a given
issue is the same as the voter’s view. That is, candidates hope that with
respect to campaign issues, voters will assimilate the candidate’s views
(overestimate the degree of similarity between the candidate’s views and
their own).

Social judgment theory straightforwardly suggests how such an effect
might be obtained. Suppose—as seems plausible—that for many voters,
the positions around the middle of the scale on a given campaign issue
commonly fall in the latitude of noncommitment or the latitude of
acceptance; for a small number of voters (e.g., those with extreme views
and high ego-involvement on that topic), such positions might fall in the
latitude of rejection, but most of the electorate feels noncommittal toward,
if not accepting of, such views. In such a circumstance, if the message
suggests some sort of vaguely moderate position on the issue, without
being very clear about exactly what position is being defended, then the
conditions are ripe for assimilation effects regarding the candidate’s stand
on that topic. Voters who themselves have widely varying views on the
issue may nevertheless all perceive the candidate’s issue position to be
similar to their own. (For research concerning assimilation and contrast
effects in political contexts, see Drummond, 2011; Granberg, 1982;
Granberg, Kasmer, & Nanneman, 1988; Judd, Kenny, & Krosnick, 1983;
M. King, 1978.)

Adapting Persuasive Messages to Recipients Using Social Judgment Theory
One recurring theme in theoretical analyses of persuasion is the idea that to
maximize effectiveness, persuasive messages should be adapted (tailored,
adjusted) to fit the audience. From the perspective of social judgment
theory, this especially means adapting messages to the recipient’s
judgmental latitudes. As mentioned earlier, social judgment theory
emphasizes that a persuader needs to know more than simply the
receiver’s most preferred position; the structure of the judgmental latitudes
—the sizes and locations of the latitudes of acceptance, rejection, and
noncommitment—is also important. Even if two receivers have the same
most preferred position, a given persuasive message might fall in the
latitude of acceptance for one person but in the latitude of rejection for
another, leading to quite different reactions to a given message.

Persuaders are often not in a position to vary their advocated view for
different audiences. (For example, politicians who attempt to do so can
find themselves accused of “flip-flopping” or “talking out of both sides of
the mouth.”) But in some circumstances, persuaders can be free to vary
what they ask of audiences. For example, a charity might vary how large a
donation is requested depending on the recipient’s financial circumstance;
people who are financially better-off may be asked for larger sums. In such
a circumstance, having some sense of the recipient’s judgmental latitudes
—what requested amounts might seem outrageously large to them (latitude
of rejection) and which might seem at least worthy of considering (latitude
of noncommitment)—can be crucial.

More broadly, one might think of the judgmental latitudes as identifying
what sorts of claims might be found plausible by the intended audience.
For example, S. Smith, Atkin, Martell, Allen, and Hembroff (2006) were
planning a campus campaign to reduce alcohol abuse. Because people can
overestimate the frequency of campus drinking, the campaign wanted to
convey accurate information about alcohol consumption (accurate
information about the “descriptive norm,” the actual frequency of a given
behavior). However, if the audience were to perceive the (accurate)
information to be unbelievable, then presumably the campaign would not
be successful. So in preliminary campaign planning research, Smith and
colleagues adapted the Ordered Alternatives questionnaire to assess how
believable respondents found various percentages of students who drink
five or fewer drinks when they party. The accurate description of the
frequency of such drinking fell within the audience’s latitude of
noncommitment—that is, the audience was apparently not predisposed to
reject that percentage as unrealistic. The researchers were thus able to
confidently design campaign communications using accurate descriptive-
norm information, knowing that such claims would not be thought
unreasonable.

Social judgment theory also plainly suggests that messages may need to be
adapted to the audience’s level of ego-involvement. Where message
recipients are not very involved in the issue, a persuader might be able to
advocate a relatively discrepant position without encountering the latitude
of rejection; where the audience is highly involved, on the other hand, the
large latitude of rejection is likely to necessitate a smaller discrepancy if
the message is to be effective.

Critical Assessment
Social judgment theory obviously offers a number of concepts and
principles useful for illuminating persuasive effects. But several
weaknesses in social judgment theory and research have become apparent.

The Confounding of Involvement With Other Variables
One weakness in much social judgment theory research stems from the use
of participants from preexisting groups thought to differ in involvement
(e.g., in research on a presidential election, using committed members of a
political party to represent persons high in ego-involvement). This research
procedure has created ambiguities in interpreting results, because the
procedure has confounded involvement with a number of other variables.

Two variables are said to be confounded in a research design when they
are associated in such a way as to make it impossible to disentangle their
separate effects. In the case of much social judgment theory research, the
persons selected to serve as high-involvement participants differed from
the low-involvement participants not just in involvement but in other ways
as well. For example, the high-involvement participants had more extreme
attitudes than the low-involvement participants (e.g., M. Sherif &
Hovland, 1961, pp. 134–135). In such a circumstance, when the high-
involvement group displays a larger latitude of rejection than the low-
involvement group, one cannot unambiguously attribute the difference to
involvement (as social judgment theory might propose). The difference in
latitude size could instead be due to differences in position extremity.

The problem is that, according to social judgment theory, ego-involvement
and position extremity are distinct concepts. Involvement and extremity
are often correlated (such that higher involvement is characteristically
associated with more extreme views) but nevertheless conceptually
distinct. Hence it is important to be able to distinguish the effects of ego-
involvement from the effects of position extremity. Social judgment theory
claims that larger latitudes of rejection are the result of heightened ego-
involvement, not the result of extreme positions per se (e.g., C. W. Sherif
et al., 1965, p. 233); but because the research evidence in hand confounds
ego-involvement and position extremity, the evidence is insufficient to
support such a claim.

In fact, the groups used in much social judgment research differed not only
in involvement and position extremity but in age, educational
achievement, and other variables. As a result, one cannot confidently
explain observed differences (e.g., in the size of the latitude of rejection, or
in the number of categories used in the own-categories procedure) as being
the result simply of involvement differences; one of the other factors, or
some combination of other factors, might have been responsible for the
observed effects. (A more general discussion of this problem with social
judgment research has been provided by Kiesler et al., 1969, pp. 254–257.)

The Concept of Ego-Involvement


The concept of ego-involvement is a global or omnibus concept, one that
involves a constellation of various properties—the person’s stand on the
issue being central to the person’s sense of self, the issue’s importance to
the person, the issue’s personal relevance to the person, the degree of
commitment the person has to the position, and the degree of intensity
with which the position is held, and so on (for a useful discussion, see
Wilmot, 1971a).

But these are distinguishable properties. For instance, I can think an issue
is important without my stand on that issue being central to my self-
concept (e.g., I think the issue of controlling the federal deficit is
important, but my sense of identity isn’t connected to my stand on this
matter). I can hold a given belief intensely, even though the issue isn’t
very important to me (e.g., my belief that the Earth is round). An issue
may not be personally relevant to me (e.g., abortion), but I could
nonetheless be strongly committed to a position on that issue, and my
stand on that issue could be important to my sense of self. I can hold a
belief strongly (e.g., about the superiority of a given basketball team), even
though that belief isn’t central to my self-concept.

The general point is that the notion of ego-involvement runs together a
number of distinct concepts in an unsatisfactory manner. It is possible to
distinguish (conceptually, if not empirically) commitment to a position,
importance of the issue, personal relevance of the issue, and so forth, and
hence a clear understanding of the roles these play in persuasion will
require separate treatment of each. (For examples of efforts at clarifying
one or another aspect of involvement, see B. T. Johnson & Eagly, 1989;
Levin, Nichols, & Johnson, 2000; Slater, 2002; Thomsen, Borgida, &
Lavine, 1995.)8

The Measures of Ego-Involvement


Research concerning the common measures of ego-involvement—the size
of the latitude of rejection in the Ordered Alternative questionnaire and the
number of categories created in the Own Categories procedure—has
revealed some worrisome findings, of two sorts.9

First, the measures are not very strongly correlated with each other. Two
instruments that measure the same property ought to be strongly
correlated. For example, in the case of the two common measures of ego-
involvement, the two measures should be strongly negatively correlated: as
the size of the latitude of rejection increases, the number of categories
created should decrease. But research that has examined the correlations
among various involvement measures (including, but not limited to, the
size of the latitude of rejection on the Ordered Alternatives questionnaire
and the number of categories in the Own Categories procedure) has
commonly yielded correlations that are roughly zero (e.g., Wilmot,
1971b). The implication is that the different measures of involvement
cannot all be measuring the same thing. Maybe one of them is measuring
involvement and the others are not, or maybe none of them is measuring
involvement. But plainly these assessments are not all measuring the same
thing.

Second, the measures do not display the expected patterns of association
with other variables. For example, ego-involvement measures are not
strongly correlated with such variables as the perceived importance of the
issue to the respondent, the perceived importance of the issue to society,
the respondent’s perceived commitment to their most acceptable position,
or the respondent’s self-reported certainty, intensity of feeling, or interest
in the topic (R. A. Clark & Stewart, 1971; Krosnick, Boninger, Chuang,
Berent, & Carnot, 1993; Wilmot, 1971b).

In short, there are good empirical grounds for concern about the adequacy
and meaning of the common measures of ego-involvement. This is perhaps
to be expected, however, given the lack of clarity of the concept of ego-
involvement; one cannot hope to have a very satisfactory assessment
procedure for a vague and indistinct concept. In any case, the empirical
evidence suggests that the various indices of ego-involvement ought not be
employed unreflectively.

Conclusion
In some ways social judgment theory is too simplified to serve as a
complete account of persuasive effects. From a social judgment theory
point of view, the only features of the message that are relevant to its
impact are (a) the position it advocates and (b) the clarity with which it
identifies its position. It doesn’t matter whether the message contains
sound arguments and good evidence or specious reasoning and poor
evidence; it doesn’t matter what sorts of values the message appeals to,
how the message is organized, or who the communicator is. Everything
turns simply on what position the message is seen to defend. And surely
this is an incomplete account of what underlies persuasive message effects.

But a theory can be useful even when incomplete. Social judgment theory
does draw one’s attention to various important facets of the process of
persuasion. For example, realizing the possibility of assimilation and
contrast effects can be crucially important to persuasive success. A
persuader who is not sufficiently clear about his or her advocated position
may think persuasion has been achieved because the message recipient
professes complete agreement with the message, but if the recipient
misperceived what view was being advocated (an assimilation effect), such
expressions of agreement will be misleading indicators of persuasive
success.

Similarly, recognizing that people commonly have not only a most-preferred position but also assessments of other positions on the issue
—the judgmental latitudes—can help persuaders understand their goals
and challenges more clearly. For example, the persuader’s objective does
not always have to be to induce the recipients to change their most-
preferred positions so as to match the persuader’s most-preferred position.
Sometimes it may be enough to get recipients to see that the persuader’s
position falls in their latitude of noncommitment. For example, where
public policy issues are the subject of advocacy, “the battle is not to
convince citizens that one’s policy is right, but simply that it is not
unreasonable” (Diamond & Cobb, 1996, p. 242).

And even if social judgment theory’s notion of ego-involvement is too
much of an omnibus concept, it nevertheless points to the importance of
variations in how individuals relate to a given persuasive issue. (Indeed,
social judgment theory here provides a useful example of how difficult it
can be to take a broad, common-sense concept like involvement and
articulate it in a careful, empirically well-grounded way.)

So although social judgment theory must be judged something of an historical relic at present, in the sense that it is not the object of much
current research attention, it nevertheless is a framework that offers some
concepts and principles of continuing utility—and it offers some
instructive object lessons.

For Review
1. What is the central tenet of social judgment theory? Upon what is the
effect of a persuasive communication said to centrally depend?
According to social judgment theory, what are the two steps involved
in attitude change?
2. Explain the idea that people have judgments of the alternative
positions available on an issue. How can one obtain such judgments?
Describe the Ordered Alternatives questionnaire. What instructions
are respondents given for completing the Ordered Alternatives
questionnaire?
3. What are the judgmental latitudes? What is the latitude of
acceptance? The latitude of rejection? The latitude of
noncommitment? Explain how, for social judgment theory, a person’s
stand on an issue is represented by more than the person’s most-
acceptable position.
4. What is ego-involvement? What is the conceptual relationship of ego-
involvement and position extremity? Is being ego-involved in an issue
the same thing as holding an extreme position on the issue?
According to social judgment theory, what is the empirical
relationship of ego-involvement and position extremity?
5. How is ego-involvement predicted to influence the structure of the judgmental latitudes? What latitude structure is said to be
characteristic of a person high in ego-involvement? Of a person low
in ego-involvement?
6. Explain how, in social judgment theory research, group membership
was used to validate the use of the size of the latitude of rejection (on
the Ordered Alternatives questionnaire) as a measure of ego-
involvement. What is the Own Categories procedure? Explain how
ego-involvement is thought to influence the number of categories
used in the Own Categories procedure.
7. What are assimilation and contrast effects (broadly speaking)? What
is a contrast effect? What is an assimilation effect? What is the rule of
thumb concerning when each effect will occur? Explain how, because
of assimilation and contrast effects, the perceived position of a
persuasive message may be different for people with different
positions on the issue. What is the relationship between ego-
involvement and assimilation and contrast effects? What kinds of
messages are subject to assimilation and contrast effects? How can a
persuader minimize assimilation and contrast effects?
8. Describe social judgment theory’s rule of thumb concerning attitude
change effects following persuasive communications. What is
“discrepancy”? What is the relationship between discrepancy and
attitude change, according to social judgment theory? Describe how
this analysis suggests different approaches to persuading high- and
low-involvement receivers.
9. Explain how contrast effects reduce the effectiveness of persuasive
messages. Explain how assimilation effects reduce the effectiveness
of persuasive messages. How can political campaigns exploit
assimilation effects concerning positions on policy issues?
10. Explain how persuasive messages might be adapted to the recipient’s
judgmental latitudes. Explain how messages might be adapted to the
recipient’s level of ego-involvement.
11. What does it mean to say that two factors (variables) are confounded?
Describe how extremity and involvement have been confounded in
social judgment research. Explain the implications of this
confounding for interpreting social judgment research. Explain how
the concept of ego-involvement conflates a number of different
concepts.
12. Identify and describe two worrisome findings concerning the
measures of ego-involvement. What sort of correlation is expected
between two instruments that measure the same property? If two
measures of involvement do measure involvement, what correlations would be expected between them (e.g., between the number of
categories in the Own Categories procedure and the size of the
latitude of rejection on the Ordered Alternatives questionnaire)? What
correlations have been observed? Are the measures of ego-
involvement strongly correlated with each other? Do the measures of
ego-involvement display the expected patterns of association with
other variables? How are measures of ego-involvement related to
assessments of perceived topic importance or commitment to one’s
position?

Notes
1. The anchoring of attitudes in reference groups is emphasized in some
social judgment theory conceptualizations of involvement (e.g., M. Sherif
& Sherif, 1967, pp. 135–136), and hence this was an attractive research
procedure.

2. Ego-involvement is thought to influence not only the number of categories but also the distribution of statements across categories. Where
ego-involvement is lower, statements are likely to be distributed roughly
evenly across whatever categories are created; the higher-involved person
is likely to use categories disproportionately (C. W. Sherif et al., 1965, p.
239).

3. Assimilation and contrast effects, more broadly defined, are familiar psychophysical phenomena. If you’ve been lifting 20-pound boxes all day,
a 40-pound box will feel even heavier than 40 pounds (contrast effect), but
a 21-pound box will feel very similar (assimilation effect). The
psychophysical principle involved is that when a stimulus (the 40-pound
box) is distant from one’s judgmental anchor (the 20-pound boxes), a
contrast effect is likely; when the stimulus is close to the anchor, an
assimilation effect is likely. Indeed, social judgment theory was explicitly
represented as an attempt to generalize psychophysical judgmental
principles and findings to the realm of social judgment, with the person’s
own most-preferred position serving as the judgmental anchor (see M.
Sherif & Hovland, 1961).

4. For social judgment theory, the perceived position of a message thus is influenced by message properties (the advocated position and the clarity
with which it is expressed) in conjunction with the recipient’s own
position (which serves as a perceptual anchor). For discussion of other factors influencing the perceived position of messages, see, for example,
Kaplowitz and Fink (1997) and R. Smith and Boster (2009).

5. From a social judgment theory perspective, the relation of discrepancy and attitude change is presumably not a completely symmetrical inverted-
U-shaped curve but something rather more like half of such a curve:
gradually increasing favorable attitude change up to the latitude of
rejection, but with a sharp drop-off (not a gentle decline) at that point.

6. Concerning social judgment theory’s predictions about the impact of assimilation effects on persuasion, the current description—that
assimilation effects reduce persuasion—parallels that of most
commentators (e.g., Kiesler, Collins, & Miller, 1969, p. 260). Eagly and
Chaiken (1993, p. 387n13), however, have argued that “social judgment
theory posits a positive relation between assimilation and persuasion.”
Social judgment theorists have certainly described assimilation effects as
making it more likely that recipients will have certain sorts of evaluatively
positive reactions to a message (thinking it fair, unbiased, and so forth;
e.g., M. Sherif & Sherif, 1967, p. 130), but such positive assessments are
not necessarily incompatible with a lack of persuasion (attitude change).
Consider, for example, that C. W. Sherif et al. (1965) indicated that high-
involvement persons “are particularly prone to displace the position of a
communication in such a way that their stand is unaffected” (p. 176). This
displacement can involve either assimilation or contrast, but “in either case
… their placement of communications is such that little effect on their
attitudes could be expected” (p. 177). This passage certainly suggests that
social judgment theory depicts assimilation effects as reducing attitude
change.

7. Because involvement magnifies assimilation and contrast effects, highly involved people are understandably often difficult to persuade: It’s not just
that they have small latitudes of acceptance and noncommitment (and so
are predisposed to react negatively to many advocated positions) but also
that they’re especially prone to misperceiving what view is being
advocated.

8. The elaboration likelihood model, discussed in Chapter 8, invokes a rather narrower concept of involvement: direct personal relevance of the
issue.

9. A third troubling finding, not discussed here, is that there is more cross-issue consistency in an individual’s apparent level of ego-involvement (as assessed by common measures of ego-involvement) than should be
expected given that ego-involvement is an issue-specific property (that is,
one that varies from issue to issue for a given individual), not a
personality-trait-like disposition; for discussion and references, see
O’Keefe (1990, pp. 42–43).

Chapter 3 Functional Approaches to Attitude

A Classic Functional Analysis


Subsequent Developments
Identifying General Functions of Attitude
Assessing the Function of a Given Attitude
Influences on Attitude Function
Adapting Persuasive Messages to Recipients: Function
Matching
Commentary
Generality and Specificity in Attitude Function Typologies
Functional Confusions
Reconsidering the Assessment and Conceptualization of
Attitude Function
Persuasion and Function Matching Revisited
Reviving the Idea of Attitude Functions
Conclusion
For Review
Notes

One general approach to the analysis of attitudes focuses on the functions that attitudes can serve. The basic idea is that attitudes may serve various
functions for persons, that is, may do different jobs, meet different needs
or purposes. The relevance of this idea for understanding persuasion is that
the most effective technique for changing an attitude may vary depending
on the attitude’s function. Functional analyses of attitude have a long
history; the treatment here first discusses one classic example of such an
analysis and then turns to more recent developments. (For useful
collections and discussions concerning functional approaches to attitude,
see Carpenter, Boster, & Andrews, 2013; Eagly & Chaiken, 1993, pp.
479–490; Maio & Olson, 2000b; Shavitt & Nelson, 2002; Watt, Maio,
Haddock, & Johnson, 2008.)

A Classic Functional Analysis


In a well-known analysis, Katz (1960) proposed four attitude functions: utilitarian, ego-defensive, value-expressive, and knowledge. The utilitarian
function is represented by attitudes that help people maximize rewards and
minimize punishments. For example, students who experience success
with essay exams are likely to develop favorable attitudes toward such
exams. Attitudes serving a utilitarian function, Katz suggested, will be
susceptible to change when the attitude (and related activities) no longer
effectively maximizes rewards and minimizes punishments. Thus
utilitarian attitudes are probably most effectively changed by either
creating new rewards and punishments (as when, for instance, a company
creates a new incentive program to encourage suggestions by employees)
or by changing what is associated with existing rewards and punishments
(as when a company changes the basis on which salespeople’s bonuses are
based).

Attitudes serving an ego-defensive function do the job of defending one’s self-image. Ego-defensive attitudes are exemplified most clearly by
prejudicial attitudes toward minorities; such attitudes presumably bolster
the holder’s self-image (ego) by denigrating others (see, e.g., Fein &
Spencer, 1997). The most promising avenues to changing attitudes serving
such a function, Katz suggested, might involve removing the threat to the
ego (thus removing the need for self-defense) or giving persons insight
into their motivational dynamics (getting people to see that their attitudes
are not substantively well-grounded but simply stem from ego-defensive
needs).

With attitudes serving a value-expressive function, persons get satisfaction from holding and expressing attitudes that reflect their central values and
self-images. For example, a person whose self-image is that of a
conservative Republican might get satisfaction from supporting a balanced
budget amendment because such a viewpoint reflects the person’s self-
image. Attitudes serving a value-expressive function are thought to be
likely to change either when the underlying beliefs and self-images change
(because then there would be no need to express the old values) or when
an alternative, superior means of expressing the values is presented (as
when a political candidate says, “If you’re looking for a real conservative
[or liberal or whatever], then vote for me, because I represent those values
better than the other candidates do”).

The knowledge function of attitudes reflects the role of attitudes in organizing and understanding information and events. For example, one
way of making sense of complex sociopolitical situations (such as in the Middle East) can be to, in effect, identify the “good guys” and the “bad
guys.” That is, attitudes (evaluations) can serve as at least a superficial
mechanism for organizing one’s understandings of such situations.
Attitudes serving a knowledge function, Katz suggested, are especially
susceptible to change through the introduction of ambiguity (as when the
good guys do something bad or the bad guys do something good); such
ambiguity indicates that the attitudes are not functioning well to organize
information, thus making the attitudes more likely to change.

Katz’s description of these four attitude functions provides a useful concrete example of a functional analysis of attitudes. It is appropriately
nuanced; for example, it acknowledges that a given attitude might serve
more than one function. It makes plain the connection between functional
attitude analysis and the understanding of alternative persuasion
mechanisms by suggesting means of influence especially well tailored to
each functional attitude type. (The analysis does not claim that the
recommended means of changing each type of attitude will be guaranteed
to be successful, but only that a given attitude type is more likely to be
changed when approached with the appropriate means of influence.)

Katz’s analysis did not initially attract much research attention, in good
part because of perceived difficulties in assessing attitude function (for
some discussion, see Kiesler, Collins, & Miller, 1969, pp. 302–330;
Shavitt, 1989). But functional analyses of attitude have subsequently
flowered.

Subsequent Developments

Identifying General Functions of Attitude


Katz’s list of attitude functions is only one of many proposed function
typologies. In other analyses, different functions have been proposed, the
relationships among various functions reconsidered, and alternative
organizational schemes suggested.

For example, M. B. Smith, Bruner, and White (1956) suggested a social-adjustive function, in which attitudes help people adjust to social situations
and groups. As described by Snyder and DeBono (1989), persons hold
attitudes serving a social-adjustive function because such attitudes “allow
them to fit into important social situations and allow them to interact smoothly with their peers” (p. 341); expression of the attitude may elicit
social approval, make it easier to adapt to social situations, and the like.1
Shavitt’s (1990) taxonomy distinguished a utilitarian function, a social-
identity function (understood as including both social-adjustive and value-
expressive functions), and a self-esteem maintenance function (including
ego-defensive purposes). Gastil (1992) proposed six attitude functions:
personal utility, social utility, value expressive, social adjustment (easing
social interaction), social-identity (forging one’s identity), and self-esteem
maintenance.

But there is not yet a consensus on any one functional typology. This
surely reflects the lack of any simple, easily assessed source of evidence
for or against a given function list. An attitude function taxonomy
presumably shows its worth by being broadly useful, across a number of
applications, in illuminating the underlying motivational bases of attitude.
Expressed generally, this illumination consists of showing that the scheme
in question permits one to detect or predict relevant events or relationships,
but this evidence can be quite diverse. A given typology’s value might be
displayed by showing that knowledge of an attitude’s function (as captured
by the typology in question) permits one to predict or detect (for example)
the product features that persons will find most appealing, the relative
effectiveness of various persuasive messages, the connection between
personality traits and attitude functions, and so on. But because for any
given typology there commonly is relatively little research evidence
distinctively bearing on that scheme, there is at present little basis for
supposing that any given specific typology is unquestionably superior to
all others. (There is even less evidence comparing the usefulness of
alternative taxonomies; for an example, see Gastil, 1992.)

This lack of consensus makes for a rather chaotic and unsettled situation,
one in which a genuine accumulation of results (and corresponding
confident generalization) is difficult. If there were one widely agreed-on set
of specific functions, then research could straightforwardly be
accumulated; more could be learned about (say) what personality traits or
situational features incline persons to favor this or that function, what sorts
of messages are best adapted for changing attitudes serving the various
functions, and so forth. Instead, most of the research evidence concerning
functional attitude analyses is of a piecemeal sort: One study compares
personality correlates of social-adjustive and value-expressive functions,
another examines different means of influencing attitudes serving ego-
defensive functions, and so on.

In such a circumstance, one promising approach might be to paint in
broader strokes, deferring matters of detailed functional typologies in favor
of identifying some general functional differences. One broad functional
distinction has been found widely useful and seems contained (implicitly
or explicitly) in a great many attitude function analyses: a distinction
between symbolic and instrumental attitude functions (see Abelson &
Prentice, 1989; Ennis & Zanna, 2000, pp. 396–397). Briefly expressed,
symbolic functions focus on the symbolic associations of the object;
attitudes serving a symbolic function do the jobs of expressing
fundamental moral beliefs, symbolizing significant values, projecting self-
images, and the like (e.g., Katz’s ego-defensive function). Instrumental
functions focus on the intrinsic properties of the object; attitudes serving
instrumental functions do the jobs of summarizing the desirable and
undesirable aspects of the object, appraising the object through specific
intrinsic consequences or attributes, and so forth (e.g., Katz’s utilitarian
function).

For example, concerning stricter gun control laws in the United States, a
supporter’s positive attitudes might have a predominantly symbolic basis
(beliefs such as “It represents progress toward a more civilized world”) or
an instrumental basis (“It will reduce crime because criminals won’t be
able to get guns so easily”); similarly, an opponent’s negative attitudes
might be motivated by largely symbolic considerations (“It represents
impingement on constitutional rights”) or by largely instrumental
considerations (“It will increase crime because criminals will still have
guns, but law-abiding citizens won’t”). Of course, it is possible for a
person’s attitude on a given topic to have a mixture of symbolic and
instrumental underpinnings. And an attitude’s function might change
through time. For instance, an attitude might initially serve a symbolic
function but subsequently come to predominantly serve instrumental ends
(see Mangleburg et al., 1998). But the general distinction between
symbolic and instrumental attitude functions appears to be a broadly useful
one (see, e.g., Crandall, Glor, & Britt, 1997; Herek & Capitanio, 1998; A.
Kim, Stark, & Borgida, 2011; Prentice & Carlsmith, 2000).

Assessing the Function of a Given Attitude


Given a typology of attitude functions, the question that naturally arises is
how one can tell what function an individual’s attitude is serving. Indeed,
one recurring challenge facing functional attitude theories has been the
assessment of attitude functions (see Shavitt, 1989).

One straightforward procedure for assessing the function of a given
attitude involves coding (classifying) relevant free-response data (data
derived from open-ended questions). For example, Shavitt (1990) asked
participants to write down “what your feelings are about the attitude
object, and why you feel the way you do. … Write down all of your
thoughts and feelings that are relevant to your attitude, and try to describe
the reasons for your feelings” (p. 130). Responses were then classified on
the basis of the apparent attitude function. For example, responses
concerning what the attitude communicates to others were coded as
indicating a social-identity function, whereas responses focused on
attributes of the attitude object were classified as reflecting a utilitarian
function.

Such free-response data can be elicited in various ways (participants might write essays or simply list their beliefs), and the classification system will
vary depending on the functional typology being used (e.g., one might
simply contrast symbolic and instrumental bases of attitudes; see Ennis &
Zanna, 1993). But the general principle behind these procedural variants is
that different attitude functions will have different characteristic clusters of
affiliated beliefs, spawned by the different motivations behind (different
functions of) the attitude, and hence examination of such freely elicited
beliefs will illuminate attitude functions. (For other examples of such
procedures, see Herek, 1987; Maio & Olson, 1994.)

A second avenue to the assessment of attitude functions is the use of a questionnaire with standardized scale response items. The leading example
is Herek’s (1987) Attitude Functions Inventory, which presents
respondents with statements about different possible bases for their views
(statements of the form “My views about X mainly are based on …”);
respondents are asked to indicate the degree to which each statement is
true of them (giving answers on a scale anchored by the phrases “very true
of me” and “not at all true of me”). So, for instance, in an item assessing
the value-expressive function, persons are asked about the degree to which
it is true that their views are based on their “moral beliefs about how things
should be”; for the ego-defensive function, one item asks whether the
respondent’s views are based on “personal feelings of disgust or
revulsion.” (Each attitude function is assessed using several items.) As
another example of such procedures, Clary, Snyder, Ridge, Miene, and
Haugen (1994) had participants rate the importance of 30 possible reasons
for volunteering (six reasons for each of five attitude functions). For
example, “I can gain prestige at school or work” was one utilitarian reason,

76
whereas “members of a social group to which I belong expect people to
volunteer” was a social-adjustive reason. (For other examples of the use of
these or similar instruments, see Clary et al., 1998; Ennis & Zanna, 1993;
Gastil, 1992; Herek, 2000; Shavitt, 1990.)

In much attitude function research, however, a third approach has been adopted, that of using proxy indices such as personality characteristics to
stand in for more direct assessments of function (on the basis of
associations between such characteristics and attitude functions). Among
these, the most frequently employed has been the individual-difference
variable of self-monitoring. Self-monitoring refers to the control or
regulation (monitoring) of one’s self-presentation, and specifically to the
tendency to tailor one’s behavior to fit situational considerations. Broadly
speaking, high self-monitors are concerned about the image they project to
others and tailor their conduct to fit the particular circumstances they are
in. Low self-monitors are less concerned about their projected image and
mold their behavior to fit inner states (their attitudes and values) rather
than external circumstances (social norms of appropriateness). In a well-
established questionnaire used to assess self-monitoring, high self-
monitoring is reflected by agreement with statements such as “I guess I put
on a show to impress or entertain others” and “I would probably make a
good actor”; low self-monitoring is reflected by agreement with statements
such as “I have trouble changing my behavior to suit different people and
different situations” and “I can only argue for ideas which I already
believe” (see Gangestad & Snyder, 2000).

Self-monitoring is taken to be broadly reflective of differences in likely attitude function. For example, as described by DeBono (1987), the
expectation is that high self-monitors will emphasize social-adjustive
functions (letting the high self-monitor behave in ways appropriate to the
social situation), whereas low self-monitors will favor value-expressive
functions (in the sense that the low self-monitor’s attitudes will be chosen
on the basis of the degree to which the attitude is consistent with the
person’s underlying values).2 So, for example, high self-monitors are
likely to especially stress the image-related aspects of products (because of
the social-adjustive function), whereas low self-monitors are more likely to
focus on whether the product’s intrinsic characteristics and qualities match
the person’s criteria for such products.

For any such proxy measure, of course, the key question will be the degree
to which the proxy is actually related to differences in attitude function, a question probably best addressed by examining the relationship between
proxy measures and more direct assessments. In the specific case of
personality characteristics such as self-monitoring, presumably such
characteristics merely incline persons (in appropriate circumstances) to be
more likely to favor one or another function. For instance, it is surely not
the case that all the attitudes of high self-monitors (whether toward aspirin
or automobiles or affirmative action) serve social-adjustive functions. (See
Herek, 2000, pp. 332–335, for commentary on the use of such proxy
measures.)

Influences on Attitude Function


A variety of factors might influence the function that a given attitude
serves. Three such classes of factors merit mention: individual differences
(from person to person), the nature of the attitude object, and features of
the situation.

Individual Differences
Different persons can favor different attitude functions, as
straightforwardly illustrated by self-monitoring. As just discussed, high
self-monitors appear to favor social-adjustive functions, whereas low self-
monitors seem more likely to adopt value-expressive functions. Other
personality correlates of attitude function differences have not received so
much recent research attention, although plainly it is possible that other
individual-difference variables might be related to differences in attitude
function (see, e.g., Katz, McClintock, & Sarnoff, 1957; Zuckerman,
Gioioso, & Tellini, 1988). But apart from any underlying personality
differences, people’s motivations can vary. For example, different people
can have different reasons for volunteering, although those differences
might not be systematically related to any general personality disposition.

Attitude Object
The function of an attitude toward an object may also be shaped by the
nature of the object because objects can differentially lend themselves to
attitude functions. For example, air conditioners commonly evoke
predominantly utilitarian thoughts (“keeps the air cool,” “expensive to
run”), whereas wedding rings are more likely to elicit social-identity
thoughts (“represents a sacred vow”; Shavitt, 1990). Similarly, attitudes

78
toward shampoo are determined more by instrumental attributes (such as
conditioning hair) than by symbolic ones (such as being a high-fashion
brand), whereas for perfume, symbolic attributes are more influential than
instrumental ones (Mittal, Ratchford, & Prabhakar, 1990).

Each of these objects (air conditioners, shampoo, wedding rings, and perfume) appears to predominantly encourage one particular attitude
function (and so might be described as unifunctional). But other objects
are multifunctional, in the sense of easily being able to accommodate
different attitude functions. For instance, automobiles can readily permit
both symbolic and instrumental functions; a person’s attitude toward an
automobile might have a largely instrumental basis (“provides reliable
transportation”), a largely symbolic one (“looks sexy”), or some mixture of
these.

Situational Variations
Different situations can elicit different attitude functions (for a general
discussion, see Shavitt, 1989, pp. 326–332). For example, if the situation
makes salient the intrinsic attributes and outcomes associated with an
object, presumably instrumental (utilitarian) functions will be more likely
to be activated; by contrast, social-identity functions might be engaged by
“situations that involve using or affiliating with an attitude object, or
expressing one’s attitude toward the object, in public or in the presence of
reference group members” (p. 328). Thus attitude functions may vary
depending on features of the immediate situation.

Multifunctional Attitude Objects Revisited


As noted above, attitude objects differ in the degree to which they
accommodate multiple attitude functions, and this influences the role that
individual-difference variations and situational variations can play in
determining attitude function. For unifunctional attitude objects (those
eliciting predominantly one function), individual-difference variations and
situational variations may not have much impact. For example, aspirin is
likely to be generally (that is, across individuals and across situations)
perceived largely in instrumental terms. But (as emphasized by Shavitt,
1989) multifunctional attitude objects (such as automobiles) represent
objects for which individual-difference variations and situational
variations are likely to have greater impact on attitude function. It is
possible for the attitudes of high and low self-monitors toward automobiles to serve different functions because the attitude object can accommodate
different functions. Similarly, situational factors can influence the salience
of various functions only if the attitude object permits different attitude
functions. The larger point is that the attitude object, individual
differences, and situational factors all intertwine to influence attitude
function.3

Adapting Persuasive Messages to Recipients: Function Matching
One recurring theme in theoretical analyses of persuasion is the idea that to
maximize effectiveness, persuasive messages should be adapted (tailored,
adjusted) to fit the audience. In functional approaches to attitude, this
general idea is concretized as the matching of the persuasive appeal to the
functional basis of the recipient’s attitude. Two recipients with identical
overall evaluations may have different underlying functional bases for
their attitudes, which will require correspondingly different persuasive
approaches.

The Persuasive Effects of Matched and Mismatched Appeals
Consistent with this analysis, a variety of investigations have found that
messages with appeals matching the receiver’s attitude function are indeed
more persuasive than messages containing mismatched appeals (Carpenter,
2012b). A good number of these studies have used self-monitoring as an
indirect index of variation in attitude function and have focused on how
self-monitoring differences are related to differential susceptibility to
various types of appeals in consumer product advertising. As discussed
above, high self-monitors are expected to favor social-adjustive functions
and low self-monitors to favor value-expressive functions; the appeal
variation consists of using arguments emphasizing correspondingly
different aspects of the attitude object. In the specific domain of consumer
products, the contrast can be characterized as a difference between appeals
emphasizing the image of the product or its users (a social-adjustive
appeal) and appeals emphasizing the intrinsic quality of the product (a
value-expressive appeal; see, e.g., Snyder & DeBono, 1987). This contrast
is exemplified by a pair of experimental magazine advertisements for
Canadian Club whiskey (Snyder & DeBono, 1985, Study 1). Both ads
showed a bottle of the whiskey resting on a set of blueprints for a house; for the image-oriented advertisement, the slogan read, “You’re not just
moving in, you’re moving up,” whereas the product quality-oriented
advertisement claimed, “When it comes to great taste, everyone draws the
same conclusion.”

Across a number of studies, high self-monitors have been found to react more favorably to image-oriented advertisements than to product quality-oriented ads, whereas the opposite effect is found for low self-monitors
(e.g., DeBono & Packer, 1991; Lennon, Davis, & Fairhurst, 1988; Snyder
& DeBono, 1985; Zuckerman et al., 1988; cf. Bearden, Shuptrine, & Teel,
1989; Browne & Kaldenberg, 1997; for related work, see DeBono &
Snyder, 1989; DeBono & Telesca, 1990).4 Outside the realm of consumer
product advertising, parallel differences (between high and low self-
monitors) have been found with related appeal variations. For example,
concerning the topic of institutionalization of the mentally ill, DeBono
(1987) found that low self-monitors were more persuaded by value-
expressive messages (indicating what values were associated with positive
attitudes toward institutionalization) and high self-monitors by social-
adjustive messages (indicating that a substantial majority of the receiver’s
peers favored institutionalization). Similar effects have been reported
concerning dating attitudes (Bazzini & Shaffer, 1995), voting (Lavine &
Snyder, 1996), HIV prevention (Dutta-Bergman, 2003), and breastfeeding
(Sailors, 2011).

Parallel findings have been reported in investigations that assessed individual attitude function differences in ways other than self-monitoring
differences. For example, Clary et al. (1994) initially assessed attitude
function through participants’ ratings of the importance of various possible
reasons for volunteering. Participants were then presented with
provolunteering messages that were matched or mismatched to their
attitude functions; matched messages were more persuasive than
mismatched messages in inducing intentions to volunteer (see also Clary et
al., 1998). In Celuch and Slama’s (1995) research, variations in the degree
to which persons’ self-presentation motives emphasized “getting ahead” or
“getting along” were related to the persuasiveness of messages
emphasizing either the self-advancement aspects of a product or the
conformity-relevant aspects of a product.5

Similar results have been obtained concerning the effectiveness of different appeals for different types of products. Shavitt (1990, Study 2)
compared the effects of a utilitarian or social-identity appeal for either a utilitarian product (such as air conditioners) or a social-identity product
(such as greeting cards); brands advertised with function-relevant appeals
were preferred over brands advertised with function-irrelevant appeals (so
that, for example, ads using utilitarian appeals were preferred over ads
using social-identity appeals when air conditioners were advertised, but
this preference was reversed when greeting cards were advertised).

Finally, these same effects have been observed when the situational
salience of attitude functions was varied experimentally. Julka and Marsh
(2005) varied the degree to which a knowledge or value-expressive
function was activated, and then they exposed participants to either a
matched or mismatched persuasive appeal for organ donation; matched
appeals produced more favorable attitude change and led participants to be
more likely to take an organ-donor registration card.

In short, substantial evidence suggests that persuasive appeals that are matched to the receiver’s attitude function will be more persuasive than
mismatched appeals (for a review, see Carpenter, 2012b). And it is worth
noticing that studies in this area often report relatively large effects,
suggesting the substantive importance of functionally matched messages.6

Explaining the Effects of Function Matching


Exactly why are function-matched appeals typically more persuasive than
unmatched appeals? The answer to this question is not yet entirely clear,
but there are two broad possibilities. One is that functionally matched
appeals simply speak to a receiver’s psychological needs in ways that
unmatched appeals do not. This explanation is unsurprising, really—after
all, such correspondence is what makes the appeals matched. A receiver
who values a vehicle’s gas mileage more than the image projected by the
vehicle will naturally be more persuaded by appeals based on the former
than by those based on the latter. Correspondingly, people are likely to
perceive functionally matched messages as containing better arguments
than mismatched messages (Hullett, 2002, 2004, 2006; Lavine & Snyder,
1996, 2000; see, relatedly, Shavitt & Lowrey, 1992).

A second possibility is that function-matched messages are processed more carefully than mismatched messages. For example, in Petty and
Wegener’s (1998b) study, high and low self-monitors read consumer
product messages that varied in the functional match of the appeals
(image-based versus product quality-based appeals) and in the quality of the supporting arguments (strong versus weak arguments). Attitudes were
more strongly influenced by argument quality when the message contained
matched appeals than when it contained mismatched appeals. For instance,
high self-monitors were more influenced by the strength of the arguments
when the appeals were image-based than when the appeals were product
quality-based. This effect suggests that receivers more carefully
scrutinized messages with appeals matching their functional attitude bases
than they did messages with mismatched appeals (see also DeBono, 1987;
Lavine & Snyder, 1996, 2000; Petty, Wheeler, & Bizer, 2000). Several
studies have reported related findings suggesting that high self-monitors
more carefully process messages from attractive than unattractive (or
expert) communicators (DeBono & Harnish, 1988; DeBono & Telesca,
1990), findings that might reflect the propensity for high self-monitors to
favor social-adjustive functions. And this explanation is at least not
inconsistent with evidence suggesting that function-matched messages
may be better remembered than mismatched messages (DeBono & Packer,
1991, Study 3).

If this second explanation is sound, then at least sometimes function-matched messages should be less persuasive than mismatched messages,
namely, when the messages contain poor-quality arguments (as indeed was
observed by Petty & Wegener, 1998b). The weaknesses of such poor-
quality messages will be recognized by receivers who scrutinize the
message closely but can go unnoticed by receivers who do not process so
carefully. So this explanation supposes that the widely observed greater
persuasiveness of function-matched messages is actually a consequence of
the generally high argumentative quality of the appeals; with poorer-
quality arguments, function-matched messages might not have displayed
such a persuasive advantage.

Both explanations might turn out to have some merit. For example, it may
be that when message scrutiny is already likely to be high, the different
intrinsic appeal of matched and mismatched arguments will play a key
role, whereas in other circumstances, the functional match or mismatch of
arguments will influence the degree of scrutiny given (Petty et al., 2000;
Ziegler, Dobre, & Diehl, 2007).7 Further research on these questions will
be welcomed.

Commentary

Generality and Specificity in Attitude Function Typologies
The general enterprise of functional attitude analysis is driven by the
search for a small set of universal, exhaustive, and mutually exclusive
attitude functions that can be used to dependably and perceptively
distinguish any and all attitudes. But as discussed earlier, there is not yet a
consensus on any such set of functions, and perhaps there never will be
such consensus.

The general idea of functional analysis, however, can still be of use in illuminating attitudes and persuasion processes, even without some
universal scheme of attitude functions. It has proved possible, for at least
some attitude objects or issues, to distinguish different attitudinal functions
in a way that is dependable (reliable) and that provides insight into
attitudes on that subject. For example, functional analyses have provided
insight concerning attitudes on such matters as volunteering (Stukas,
Snyder, & Clary, 2008), meat (M. W. Allen & Ng, 2003), and democracy
(Gastil, 1992). In particular, a number of studies have illuminated attitudes
on various HIV/AIDS-related issues by considering the relative
contributions of symbolic and instrumental attitudinal underpinnings (e.g.,
Crandall et al., 1997; Herek & Capitanio, 1998; Hosseinzadeh & Hossain,
2011). That is, the functions served by attitudes on a given subject can be
analyzed even in the absence of a general typology of attitude functions;
insights into the motivational dynamics of particular attitudes are possible
even without having some universal set of functions.

Of course, there is variation in the specific functional typologies used to analyze these different particular attitudes. On one subject, it may be
helpful to distinguish various subtypes of a function, but distinguishing
those subtypes may not be necessary when considering attitudes on a
different topic. For instance, a generalized utilitarian function might
suffice when analyzing the functions of attitudes toward amusement parks,
whereas when analyzing attitudes toward democracy, it is useful to
distinguish personal utility functions (my beliefs about how democracy
benefits me personally) and social utility functions (my beliefs about how
democracy benefits society as a whole; see Gastil, 1992). The larger point
is that although one may hope that continuing research will eventuate in a
well-evidenced small set of general and well-articulated attitude functions,
one need not wait for such an outcome to appreciate the value of
functional attitude analyses. Indeed, even if there comes to be some consensus on a general attitude function typology, analyses of specific
attitudes will still almost certainly require the typology to be modified
(elaborated, refined, adapted) to provide maximum illumination of the
particular attitude under study.

Functional Confusions

Some Functional Distinctions


There is some underlying murkiness in the conceptualization of attitude
functions. This can be displayed by considering that there are distinctions
—often unappreciated—among the functions of an attitude, the functions
of expressing an attitude, and the functions of the attitude object.

Consider first that there is a distinction to be drawn between the functions of an attitude (that is, the functions of having that attitude) and the
functions of expressing an attitude. For example, imagine that John has
unfavorable views about minorities. His having that attitude might serve
an ego-defensive or self-esteem maintenance function; having that attitude
makes him feel better about himself—even if he never reveals his views to
anyone else. On the other hand, his expressing that attitude around his
bigoted friends might serve a social-adjustive function, one of letting him
fit in with people who are expressing similar attitudes. Thus the job done
by the attitude (the having of the attitude) and the job done by the
expression of the attitude can be different.8

Similarly, there is a plain distinction between the functions of an attitude and the functions of an attitude object. After all, no one would give the
same answer to the question, “What are attitudes toward amusement parks
good for?” and the question, “What are amusement parks good for?” The
functions of an attitude (the jobs that the evaluation does) and the
functions of an object (the jobs that the object does) are plainly different.

Finally, there is a distinction between the functions of an attitude object and the functions of expressing an attitude. The purposes served by
abolition of capital punishment (the attitude object) are different from the
purposes served by a person’s saying, “I support the abolition of capital
punishment” (the expression of the attitude). Consider, for example, that a
person might believe that abolishing capital punishment would serve an
instrumental function (“Doing away with capital punishment would prevent execution of the innocent”), but this does not mean that the
person’s expressing opposition to capital punishment serves an
instrumental function; depending on the circumstances, expression of that
attitude might serve some thoroughly symbolic end (symbolizing one’s
values, for instance). Thus there is a difference between the jobs done by
the attitude object (the product, the policy) and the jobs done by
expressing attitudes concerning that object.9

Conflating the Functions


These three elements—the functions of an attitude, the functions of
expressions of an attitude, and the functions of an attitude object—are
commonly conflated in theory and research on attitude functions. For
example, the functions of having an attitude and the functions of
expressing an attitude are often not carefully distinguished. Such
conflation can be detected in (among other places) Katz’s (1960, p. 187)
discussion of voting behavior as reflecting a value-expressive function, in
Snyder and DeBono’s (1989, p. 341) description of social-adjustive
attitudes as allowing people to fit into social situations smoothly, and in
Shavitt and Nelson’s (2000, p. 55) treatment of the meanings
communicated by a person’s consumer product choices as an aspect of a
social-identity function of attitudes. All these would seem to be more
accurately characterized as describing functions of expressing attitudes
rather than functions of having attitudes.

Similarly, the functions of attitudes and the functions of attitude objects are often treated as if these were indistinguishable. For instance, Clary et
al. (1994) quietly shift from discussing the idea that “attitudes could serve
a variety of distinct social and psychological functions” (and that “the
same attitude could serve very different motivational functions for
different people”) to discussing “the relevant motivations underlying
volunteerism” (note: motivations underlying volunteerism, not motivations
underlying attitudes toward volunteerism) and “the specific functions
served by volunteerism” (pp. 1130–1131; again, not the specific functions
served by positive or negative attitudes toward volunteerism but the
functions served by volunteerism itself). Similarly, Ennis and Zanna’s
(1993) opening paragraphs slide from discussion of “the psychological
functions of attitudes” to discussion of “the psychological functions of a
product” (p. 662; see also Ennis & Zanna, 2000). In discussions of attitude
function, such elision of the functions of attitudes, the functions of attitude
objects, and the functions of attitude expressions is common (see, e.g., Lavine & Snyder, 2000; Pratkanis & Greenwald, 1989; Shavitt, 1990).
This state of affairs suggests that it may be useful to reconsider the
assessment and conceptualization of attitude function with the relevant
distinctions in mind.

Reconsidering the Assessment and Conceptualization of Attitude Function

Assessment of Attitude Function Reconsidered


The common conflation of attitude function and attitude object function
naturally raises some questions about procedures for assessing attitude
function. The procedures that assess attitude function through coding free-
response data or through standardized scale item data rest on the idea that
the function an attitude serves is reflected in the beliefs held about the
attitude object, the reasons given for the attitude, the importance of
alternative reasons for the attitude, and so forth. But all these may most
directly reflect not the function served by the respondent’s attitude but
rather the respondent’s perceptions of the functions served by the object
(the attitude object’s useful properties, the purposes served by the attitude
object, and so forth)—and hence the respondent’s perception of what is
valuable about the object.

To concretize this, imagine asking people questions of the form, “What’s important about X?” or “What’s right or wrong [or good or bad] about
X?”—that is, questions of the sort one might use in an open-ended
questionnaire designed to assess attitude function. Depending on the topic,
a variety of answers might be received. For example, when asked, “What’s
right or wrong about gun control?” Al says, “It encourages crime by
disarming citizens,” but Bob says, “It infringes rights.” When asked,
“What’s good or bad about this automobile?” Christine says, “It gets good
gas mileage,” whereas Donna says, “It makes me look cool.” Asked,
“What’s important about volunteering?” Ed says, “It helps me develop job
skills,” but Frank says, “It contributes to the community.”

One straightforward way of understanding the variation in these answers is to see it simply as reflecting differences in what people value (not only
differences in general abstract values but also differences in what goals
they seek, what attributes they value in particular types of objects, and so
on). That is, these sorts of differences—which commonly have been taken to indicate differences in attitude functions—might more lucidly be
characterized as simply differences in what people value in (that is, what
they want from) attitude objects.

Consider, for example, the procedure in which attitude function assessment is based on respondents’ perceived importance of various
reasons for volunteering (Clary et al., 1998; Clary et al., 1994). On the face
of things, it seems that this procedure classifies persons on the basis of
their perceived importance of various functions of the action of
volunteering (jobs that volunteering performs, outcomes of volunteering),
not on the basis of any functions of their attitudes. So, for instance,
someone who says that improving one’s resume is an important reason for
volunteering appears to be identifying a perceived important function of
(job done by, purpose served by) the action—which is not the same as
identifying a function of one’s positive or negative attitude toward that
action.

Proxy measures of attitude function such as self-monitoring are, if anything, even more susceptible to being understood in this way. For
example, it is plain that high and low self-monitors can (in appropriate
circumstances) have systematically different beliefs about attitude objects.
But these different beliefs appear to correspond to differences in how self-
monitors value certain functions of the object. (For example, high self-
monitors value identity projection functions of automobiles more than do
low self-monitors.) Just because high self-monitors value certain functions
of objects more than do low self-monitors does not show that the attitudes
of high self-monitors serve different functions than do the attitudes of low
self-monitors.

To put it another way: High and low self-monitors want different things
from their consumer products. But this does not mean that high and low
self-monitors want different things from their attitudes. One might
plausibly say that high and low self-monitors want their attitudes to do the
same job—the job of identifying good and bad products for them.10 Thus,
instead of the supposition that high and low self-monitors have attitudes
that serve different functions, what seems invited is the conclusion that
although high and low self-monitors may sometimes differ in the criteria
they use to appraise objects, the underlying function of the evaluation (the
function of the attitude) is identical.

Consider, as a concrete example, Wang’s (2009) study of attitudes toward regular physical activity. Respondents rated the desirability and likelihood
of various possible consequences of regular exercise. These consequences
were then grouped into three attitude functions: a “utilitarian” function
(e.g., “would help me reduce stress,” “would help me feel more
energetic”), a “social-identity” function (e.g., “would provide me with
more opportunities to socialize,” “would help me improve my social
relationships”), and a “self-esteem maintenance” function (e.g., “would
help me lose weight,” “would help me stay in shape”). The importance of
these different functions of exercise (note: not functions of attitudes
toward exercise, but functions of exercise) varied across people. In
particular, the importance of social-identity outcomes (as influences on
intention) varied depending on self-monitoring: high self-monitors placed
more emphasis on social-identity outcomes than did low self-monitors. So
although high and low self-monitors appear to want different things from
exercise (they differently value various outcomes), they would seem to
want the same thing from their attitudes (namely, serving the function of
indicating whether regular exercise would be a good thing for them to do).

In sum, the procedures commonly used for assessing attitude functions can
instead be understood as assessing variations in the perceived value or
importance of attributes (or functions) of the attitude object.11

Utilitarian and Value-Expressive Functions Reconsidered


Against this backdrop, it may be useful to reconsider how utilitarian and
value-expressive attitude functions have been conceptually differentiated.
In Katz’s (1960) treatment of these two functions, utilitarian attitudes are
exemplified by attitudes based on economic gain or other concrete rewards
(p. 171), whereas value-expressive attitudes are concerned with abstract
“central values” and self-images (p. 173). Indeed, Katz specifically
described value-expressive attitudes as different from attitudes aimed at
“gaining social recognition or monetary rewards” (p. 173).

A similar way of distinguishing utilitarian and value-expressive functions appears in Maio and Olson’s (1994, 1995) research examining the
hypothesis that persons with value-expressive attitudes will exhibit closer
connections between attitudes and values than will persons with utilitarian
attitudes. In this work, the values implicated in value-expressive attitudes
are conceived of as abstract ends such as equality, honesty, and freedom
(Maio & Olson, 1994, p. 302), as opposed to the narrower self-interested
ends represented by utilitarian attitudes; values are “evaluations of abstract ideas (e.g., equality, honesty) in terms of their importance as guiding
principles in one’s life” (Maio & Olson, 1995, p. 268). From this point of
view, a person considering whether to make a charitable donation who
thinks about “the importance of helping others” has a value-expressive
attitude, whereas people who think about “whether they can afford to
donate” have utilitarian attitudes (Maio & Olson, 2000a, p. 251).

That is, value-expressive attitudes have commonly been distinguished from utilitarian attitudes on the basis of the nature of the outcomes sought:
Abstract, prosocial ends indicate value-expressive attitudes, whereas
concrete, self-enhancing ends indicate utilitarian attitudes. But plainly this
way of distinguishing attitudes seems less a matter of attitude function
than a matter of the abstractness or nobility of the ends served by the
object. On the conventional view, “protecting the environment” might be a
value, but “protecting my savings account” would not be. Yet obviously
each represents a potential outcome that can be valued, and hence each
represents a basis of assessment of objects (assessment of objects for the
degree to which the objects realize the outcome). Thus value-expressive
attitudes and utilitarian attitudes arguably do not actually serve different
attitude functions; the underlying attitude function is identical in the two
cases (evaluative object appraisal in the service of obtaining satisfactory
outcomes), although the criteria for assessing objects (that is, the outcomes
of interest) may vary.

This sort of reasoning led Maio and Olson (2000a, pp. 258–260) to
introduce the idea of “goal-expressive” attitudes, precisely meant to
“encompass what Katz referred to as value-expressive and utilitarian
functions” (p. 259). By collapsing value-expressive and utilitarian attitudes
into one functional category, this approach abandons the idea that value-
expressive and utilitarian attitudes serve different purposes; it recognizes
their similarity in abstract attitude function (appraisal) while not losing
sight of the variation possible in substantive motivational content (for a
related view, see Eagly & Chaiken, 1998, p. 304; for a similar treatment of
value-expressive and social-adjustive attitudes, see Hullett & Boster,
2001).

Summary
Taken together, these considerations invite a simpler, more straightforward
account of much research on attitude function variation. Specifically, this
work might more perspicaciously be described as work identifying variation in what people value (their wants, goals, evaluations of various
properties of objects, and so on). As indicated above, both the procedures
commonly used to differentiate attitude functions and the conceptual
treatment of value-expressive and utilitarian functions can be seen to
distinguish cases on the basis of persons’ values, not on the basis of
attitude function.

Persuasion and Function Matching Revisited


Existing research on persuasion and function matching is entirely
congenial with the idea that apparent attitude function variation reflects
variation in people’s values. Indeed, approached from such a perspective,
it is hardly surprising that function-matched messages are so often more
persuasive than unmatched messages—because the matched messages
speak to what people want.

Consider the case of self-monitoring: high and low self-monitors characteristically differ in their evaluation of various outcomes and object
attributes. For instance, high self-monitors characteristically place a higher
value on aspects of self-image presentation. Given this difference, it is
perhaps unsurprising that high self-monitors find image-oriented appeals
and certain normatively oriented appeals (concerning what their peers
think) to be especially congenial (e.g., DeBono, 1987; Snyder & DeBono,
1985); such appeals fit their values (not their attitude functions).

As another example, consider the previously mentioned finding that variations in the degree to which persons’ self-presentation motives
emphasize “getting ahead” or “getting along” are related to the
persuasiveness of messages emphasizing either the self-advancement
aspects of a product or the conformity-relevant aspects of a product
(Celuch & Slama, 1995). These results can obviously be straightforwardly
described as a matter of matching appeals to the motivations (wants,
desires, values, goals) of message receivers (specifically, motivations for
self-presentation).

The same holds true for appeals matched not to individual-difference variations but to variations in the nature of the object. Different objects are
valued for different types of reasons. People generally want certain sorts of
things from air conditioners and different sorts of things from greeting
cards—and hence appeals matched to what people want from these objects
(not to what they want from their attitudes toward those objects, but to what they want from those objects) will naturally be likely to enjoy some
persuasive advantage (as observed by Shavitt, 1990). Similarly for
situational variations: When certain values (attributes, outcomes, etc.) are
made more salient, persuasive appeals engaging those wants are likely to
be more successful than appeals engaging nonsalient desires (e.g., Maio &
Olson, 2000a, Study 4).

This redescription is also congenial with the proposed account of function-matching persuasion effects that suggests that functionally matched
messages engender greater message scrutiny. It would not be surprising for a receiver’s attention to be especially engaged by messages that appear to discuss something important to the receiver.

Finally, research directly exploring the role of function-relevant values has found that the strength with which recipients held such values influenced
the degree to which recipients were persuaded by corresponding appeals:
the stronger a recipient held the values engaged by a message’s appeals,
the greater the message’s persuasiveness. The implication is that function-
matched appeals are more persuasive than mismatched appeals because the
matched appeals engage values that the recipient holds more strongly
(Hullett, 2002, 2004, 2006; Hullett & Boster, 2001; see, relatedly, Bailis,
Fleming, & Segall, 2005).

In short, existing research concerning attitude functions and persuasive appeals appears to be well captured by two core ideas: First, what is valued
varies. Different persons can have different values (with systematic
relationships here, such as connected with self-monitoring differences);
different types of objects are characteristically valued for different reasons;
and as situations vary so can the salience of different values. Second,
persuasive messages are more effective when they engage what people
value than when they do not.

These two ideas are currently clothed in talk about variation in attitude
function, but such talk is at least misleading and arguably dispensable in
favor of talk about variation in values.12 In the long run, however, clear
treatment of variation in values will require some typology of values, that
is, some systematic analysis of the ways in which values (goals, desired
properties of objects, etc.) can vary (for some classic examples, see
Rokeach, 1973; Schwartz, 1992). The empirical success of research using
attitude function categories suggests that these categories might provide
some leads in this regard (although now the consequences of the lack of agreement about a functional taxonomy may be more acutely felt). For
example, a carefully formulated version of the symbolic-instrumental
contrast might serve as one way of distinguishing variation in values (see
Allen, Ng, & Wilson, 2002; Eagly & Chaiken, 1998, p. 304). It may be
profitable, however, to consider other sources as well and, in particular, to
consider independent work on typologies of general, abstract values as a
possible source of further insight (as recommended by Maio & Olson,
2000a).

Reviving the Idea of Attitude Functions


The analysis offered in the preceding section might appear to recommend
jettisoning the idea of attitude functions and replacing it with an analysis
of systematic differences in what people value. Such an approach does
seem to capture much of the work conducted under the aegis of functional
approaches to attitude. That reframing, however, arguably fails to
appreciate the potential contribution afforded by considering genuine
differences in attitude function.

The value-based reframing of attitude functions implicitly focuses on only
one attitude function, that of object appraisal (evaluative appraisal in the
service of satisfaction of wants). But this overlooks another apparent
general function of attitudes, a self-maintenance function, as exemplified
by Katz’s (1960) ego-defensive function.13 The ego-defensive function is
genuinely a function of an attitude, not a function of an attitude object. For
example, the ego-defensive function of prejudicial attitudes toward a
minority group is different from the function of the minority group itself:
The minority group does not serve the function of bolstering the person’s
self, but the negative attitude toward the minority group can.14

This suggests that there are at least two distinguishable broad functions of
attitude.15 But most of the work on persuasion and attitude functions has
implicitly addressed attitudes serving object appraisal functions and so has
focused on adapting messages to different bases of object appraisal. Scant
work is concerned with (for example) how persuasion might be effected
when attitudes serve ego-defensive ends or with how to influence attitudes
adopted because of the reference group identification purposes served by
holding the attitude.16

Thus there is good reason to want to retain some version of the idea of
different attitude functions, as illustrated by the apparent usefulness of a
contrast between object-appraisal functions and self-maintenance
functions.17 But if the idea of attitude function is to be revived, a
consistent and clear focus on the functions of attitudes (as opposed to the
functions of objects or the functions of attitude expression) will be needed,
accompanied by attention to the continuing challenge of attitude function
assessment.

Conclusion
Despite some conceptual unclarities, work on the functional approach to
attitudes has pointed to some fundamentally important aspects of attitude
and persuasion. In cases in which attitudes are primarily driven by an
interest in object appraisal, persuaders will want to attend closely to the
receiver’s basis for assessing the attitude object. What people value can
vary, and hence the persuasiveness of a message can depend in good
measure on whether the message’s appeals match the receiver’s values.

For Review
1. Explain the general idea behind functional approaches to attitude.
2. In Katz’s classic analysis of attitude function, what four attitude
functions are identified? Explain the utilitarian function. What
techniques are best adapted to changing attitudes serving a utilitarian
function? Explain the ego-defensive function. What techniques are
best adapted to changing attitudes serving an ego-defensive function?
Explain the value-expressive function. Under what conditions are
attitudes serving a value-expressive function likely to be susceptible
to change? Explain the knowledge function. What is the primary
mechanism of change for attitudes that serve a knowledge function?
3. Is there a consensus about a particular typology of attitude functions?
Is there a broad distinction (among functions) that is common to
alternative functional typologies? Explain symbolic functions of
attitude. Explain instrumental functions of attitude.
4. Describe three ways of assessing the function of a given attitude.
What is free-response data? Explain how free-response data can be
analyzed to reveal attitude functions. Explain how standardized
questionnaires can be used to assess attitude functions. Explain how
proxy indices can be used to assess attitude functions; give an
example.
5. Identify three kinds of factors that can influence attitude function.
Describe how individual differences can influence attitude function;
give an example. Explain how the nature of the attitude object can
influence attitude function. Give examples of objects for which
attitudes likely serve a generally instrumental function; give examples
of objects for which attitudes likely serve a generally symbolic
function. What are multifunctional attitude objects? Describe how
situational variations can affect attitude function. For what kinds of
attitude objects are individual differences and situational variations
likely to have the greatest effect on attitude function?
6. Explain how functional approaches provide a basis for adapting
persuasive messages to recipients. What is function matching? Are
function-matched appeals generally more persuasive than mismatched
appeals? Describe image-oriented advertising appeals. Describe
product quality-oriented advertising appeals. Are high self-monitors
generally more persuaded by image-oriented or by product quality-
oriented appeals? Are low self-monitors generally more persuaded by
image-oriented or by product quality-oriented appeals? Describe two
possible explanations of the persuasive advantage of function-
matched appeals over mismatched appeals.
7. Explain how the general idea of attitude functions can be useful even
in the absence of an agreed-upon universal typology of attitude
functions. How might different functional typologies be useful for
different specific attitudes?
8. Explain the distinction between the functions of an attitude and the
functions of expressing an attitude. Explain the distinction between
the functions of an attitude and the functions of an attitude object.
Explain the distinction between the functions of an attitude object and
the functions of expressing an attitude. Describe how these different
functions have been confused in theory and research about attitude
functions.
9. Explain how differences in attitude function as assessed through
open-ended questions reflect differences in what respondents value in
attitude objects. Explain how differences in attitude function as
assessed through self-monitoring reflect differences in what
respondents value in attitude objects. Explain how the distinction
between utilitarian and value-expressive functions reflects differences
in what respondents value in attitude objects.
10. How can the idea of function-matched appeals be redescribed in
terms of matching the audience’s values? Describe the difference
between object appraisal and self-maintenance as two broad attitude
functions. Which has been the focus of most research attention?

Notes
1. Snyder and DeBono’s (1989) description of the social-adjustive function
implicitly focused not on the function of the attitude but on the function of
the attitude expression. By contrast, M. B. Smith et al.’s (1956) discussion
of this function emphasized that “one must take care to distinguish the
functions served by holding an opinion and by expressing it” (p. 41). The
potential social-adjustive function of attitude expression is straightforward
enough (e.g., one can fit into social situations by expressing this or that
opinion). The social-adjustive function of simply holding an attitude, on
the other hand, is “at once more subtle and more complex” (p. 42). At
base, it involves the creation of feelings of identification or similarity
through attitudes; the mere holding of certain attitudes can be “an act of
affiliation with reference groups” (M. B. Smith et al., 1956, p. 42),
independent of any overt expression of the attitude. Unhappily, as
discussed later in this chapter, the distinction between attitude functions
and attitude expression functions has not commonly been closely
observed.

2. The use of the term value-expressive is potentially confusing here. The
kind of attitude function putatively favored by low self-monitors might
better be described as value-matching (because the low self-monitor’s
attitudes are influenced by the extent to which the object’s properties
match the person’s values). This is different from the value-expressive
function described by Katz (1960), in which satisfaction is had by holding
attitudes that represent (express) fundamental values. Katz’s value-
expressive function seems rather more symbolic than instrumental (and
hence more similar to the ego-defensive function than to the utilitarian or
knowledge functions); the attitude’s basis is less the particular pros and
cons of the attitude object and more the symbolic connection with core
values. By contrast, DeBono’s (1987) value-expressive function seems
more instrumental than symbolic; the attitude’s job is not to represent or
display to others one’s central values but rather to summarize the object’s
pros and cons as assessed against one’s desiderata for such objects.

3. As an illustration of the complexity that’s possible here, consider that
attitudes toward organ donation appear to be unifunctional (specifically,
serving the same attitude function for high and low self-monitors alike) but
attitudes toward discussing organ donation appear multifunctional (serving
different functions depending on self-monitoring; Wang, 2012).

4. With respect to consumer product advertising, the differences between
high and low self-monitors extend beyond the differential appeal of image-
based and product quality-based ads. There are also related differences in
the ability to remember whether an ad has been seen before (e.g., high self-
monitors more accurately remember exposure to image ads than to quality
ads; DeBono & Packer, 1991, Study 3); in how self-relevant ads are
perceived to be (e.g., high self-monitors see image ads as more self-
relevant than quality ads; DeBono & Packer, 1991, Study 2); in the types
of advertisements they create for multiple-function attitude objects such as
watches (e.g., low self-monitors prefer to use utilitarian appeals, whereas
high self-monitors prefer social-identity arguments; Shavitt & Lowrey,
1992); and in the impact of the appearance (DeBono & Snyder, 1989),
name (Smidt & DeBono, 2011), or packaging (DeBono, Leavitt, &
Backus, 2003) of the product (e.g., the better-looking the car, the higher
the quality ratings given by high self-monitors). For a general discussion,
see DeBono (2006).

5. Actually, there are a number of studies that (a) are not commonly
treated as representing research on attitude function-matching and (b) may
not even cite attitude function-matching research but that nevertheless (c)
examine the relative effectiveness of persuasive appeals that have been
designed to match variations in receivers’ psychological needs as
extrapolated from some individual-difference variable (thus paralleling the
research format of much function-matching research). Studies by Cesario,
Grant, and Higgins (2004, Study 2), Orbell and Hagger (2006), and Aaker
and Schmitt (2001)—examining, respectively, regulatory focus
(prevention-focused versus promotion-focused), consideration of future
consequences (temporally distant versus temporally proximate
consequences), and individualism-collectivism (as reflected in cultural
variations)—provide just three examples. For a general discussion of this
point, see O’Keefe (2013a).

6. In a meta-analysis of the effect of function matching on persuasion,
Carpenter (2012b) reported a mean effect size (expressed as a correlation)
of .37 across 38 cases, and a mean effect size of .31 for the 29 cases in
which the basis of functional matching was self-monitoring (as opposed to,
e.g., the nature of the attitude object). But for two reasons these means
should be interpreted cautiously. First, these mean effects were based on
effect sizes that had been adjusted for measurement unreliability; the
application of such adjustments inflates effect sizes relative to those in
other persuasion meta-analyses based on unadjusted effect sizes. Second,
at least some studies in this research area have used designs in which a
given participant saw both a matched and a mismatched appeal, often in
close proximity; as Shavitt (1990, pp. 141–142) pointed out, such designs
might be expected to yield larger effect sizes than would the more usual
between-subjects designs (in which a participant sees only one kind of
appeal).

7. Some readers will recognize fragments of elaboration likelihood model
reasoning here (see Chapter 8), and specifically the idea that the variable
of functional matching versus mismatching might (like many variables)
play multiple roles in persuasion, depending on the circumstance; for some
amplification, see Petty et al. (2000, p. 145).

8. To further cement that distinction, notice that people might express
attitudes they do not hold simply because the (deceptive) expression of the
attitude serves some purpose, some function. A lifelong committed
Democrat, newly introduced to a group of Republicans, is asked by them
about a preference among presidential candidates. The Democrat strongly
prefers the Democratic candidate but—not wanting to initiate a potentially
unpleasant discussion—says, “I prefer the Republican.” The function
served by the negative attitude toward the Republican candidate is
obviously different from the function served by expressing a positive
attitude toward that candidate. The larger point is that the functions of
attitudes should not be confused with the functions of expressing attitudes.
(A complexity: The possessing of an attitude can serve the purpose of
having the attitude available for ready expression. But this does not
underwrite confusing the functions of attitudes with the functions of
expressing attitudes.)

9. There is a complexity here, however. The functions of an object and the
functions of expressing an attitude toward that object can sometimes
overlap (or coincide), at least in the realm of attitude objects that can be
possessed or used, such as consumer products. For such objects,
possession or use of the attitude object presumably counts as expression of
the corresponding favorable attitude (one’s ownership of the object
presumably expresses one’s liking for the object), and hence similar jobs
can be potentially done by the attitude object (that is, one’s having or using
the attitude object) and by other means of expressing the attitude (e.g.,
saying one likes the object).

10. In a sense, of course, the consumer product attitudes of high and low
self-monitors do different jobs, because the attitudes of high self-monitors
focus on one type of product attribute and the attitudes of low self-
monitors focus on another type: High self-monitors want their attitudes to
do the job of identifying objects that satisfy high self-monitor values, and
low self-monitors want their attitudes to do the job of identifying objects
that satisfy low self-monitor values. However, such a way of
differentiating attitude functions could be taken to absurd lengths, in that
whenever two persons differentially valued some attribute of an object,
their attitudes could be said to serve different functions; if Alice values,
but Betty does not, an automobile’s having a built-in navigation system,
then their attitudes toward automobiles serve different functions (in that
only Alice’s attitude would do the job of identifying cars that satisfy
Alice’s valuing of navigation systems). The real question is how to group
different possible attitude jobs (when to lump them together, when to
distinguish them), and the suggestion here is that it will be useful to
recognize that although high and low self-monitors may vary in what they
value, there is a sense in which the fundamental job done by their attitudes
—evaluative appraisal in the service of value satisfaction—is the same. (In
particular, as will be suggested shortly, attitudes driven by this sort of
interest look rather different from attitudes driven by an interest in ego
protection.)

11. As Eagly and Chaiken (1993, p. 490; 1998, p. 308) have stressed, early
functional approaches emphasized latent motivational aspects of attitudes,
aspects not necessarily apparent in manifest belief content or conscious
thought—and hence not necessarily well captured by coding the manifest
content of answers to open-ended questions or by examining responses to
standardized self-report instruments.

12. For a related attempt at reinterpreting function-matching appeal
research in ways that do not involve reference to attitude functions, see
Brannon and Brock (1994), who propose that “schema-relevance,” not
attitude function relevance, actually underlies the findings of attitude
function research.

13. A self-maintenance function might include not only ego-defensive
functions but also those social-adjustive functions of holding (as opposed
to expressing) attitudes, as described by M. B. Smith et al. (1956, p. 42)
and mentioned above in note 1; having a given attitude can create feelings
of identification or similarity, thus serving the function of creating and
maintaining one’s view of oneself.

14. The ego-defensive function of the attitude may also be shared,
however, by the attitude expression. That is, expressing prejudicial
attitudes may serve an ego-defensive function. But the focus here is on
functions of attitudes (not functions of attitude expression), and the point is
that ego defense is indeed one job that can be done by the holding of an
attitude.

15. In fact, Pratkanis and Greenwald’s (1989) analysis proposed just these
two functions: “First, an attitude is used to make sense of the world and to
help the organism operate on its environment. … Second, an attitude is …
used to define and maintain self-worth” (p. 249). This latter function
unfortunately elides attitude function and attitude expression function:
“We attach different labels to this self-related function of attitude,
depending on the audience (public, private, or collective) that is observing
the attitude and its expression” (p. 249).

16. Some work exists concerning attitudes about objects that serve self-
related purposes (such as attitudes about class rings), but this is different
from work concerning attitudes serving self-related purposes.

17. To be sure, even this contrast is contestable, in the following sort of
way: “Self-maintenance is itself a value, something wanted. Thus even
ego-defensive attitudes actually reflect an object appraisal function
(evaluative appraisal in the service of value satisfaction); it’s just that
‘self-maintenance’ is the value that’s being served instead of some other
value.” And it is certainly true that understood in a sufficiently abstract
way, all attitudes presumably (indeed, perhaps by definition) serve some
broad appraisal function. Still, there looks to be a difference between
appraisal that is in some sense object-driven (I know what I’m looking for
in aspirin or automobiles or whatever, and I engage in object appraisal to
see how this candidate stacks up) and appraisal that seems somehow self-
driven (I want to ensure a certain sort of self-evaluation, and I engage in
object appraisal to produce that outcome). Expressed differently: The
attitude functions of high self-monitors (“I like the car because it’s sexy”)
and low self-monitors (“I like the car because it’s reliable”) look rather
similar when contrasted with those of the bigot’s ego-defensive prejudices.
And that contrast is particularly sharp from a persuader’s point of view:
The same general sort of approach might be taken to persuade high and
low self-monitors (emphasizing different object attributes, to be sure, but
otherwise a similar approach), whereas persuading bigots likely requires
something rather different. For a similar general conclusion, see Carpenter,
Boster, and Andrews (2013).

Chapter 4 Belief-Based Models of Attitude

Summative Model of Attitude
The Model
Adapting Persuasive Messages to Recipients Based on the
Summative Model
Research Evidence and Commentary
General Correlational Evidence
Attribute Importance
Belief Content
Role of Belief Strength
Scoring Procedures
Alternative Integration Schemes
The Sufficiency of Belief-Based Analyses
Persuasive Strategies Reconsidered
Conclusion
For Review
Notes

This chapter discusses belief-based approaches to the analysis of attitude
and attitude change. The central theme of these approaches is that one’s
attitude toward an object is a function of the beliefs that one has about the
object. There are a number of variants of this general approach, with the
variations deriving from differences in what features of beliefs are seen to
contribute to attitude and from differences in how beliefs are seen to
combine to yield an attitude. One particular belief-based approach, the
summative model of attitude, has enjoyed special prominence among
students of persuasion and social influence and hence is the focus of the
chapter’s attention. (The summative model also figures in reasoned action
theory; see Chapter 6.)

Summative Model of Attitude

The Model
The summative model of attitude (Fishbein, 1967a, 1967b) is based on the
claim that one’s attitude toward an object is a function of one’s salient
beliefs about the object. For any given attitude object, a person may have a
large number of beliefs about the object. But at any given time, only some
of these are likely to be salient (prominent)—and it is those that are
claimed to determine one’s attitude. In, say, a public opinion or marketing
questionnaire, one might elicit the respondent’s salient beliefs (e.g., about
a product or a political candidate) by asking the respondent to list the
characteristics, qualities, and attributes of the object. Across a number of
respondents, the most frequently mentioned attributes represent the
modally salient beliefs, which can be used as the basis for a standardized
questionnaire. (For discussion of procedures for identifying salient beliefs,
see Ajzen & Fishbein, 1980, pp. 68–71; Ajzen, Nichols, & Driver, 1995;
Breivik & Supphellen, 2003; Fishbein & Ajzen, 2010, pp. 100–103;
Middlestadt, 2012; van der Pligt & de Vries, 1998b; for some
complexities, see Roskos-Ewoldsen & Fazio, 1997.)

In particular, the model holds that one’s attitude toward an object is a
function of belief strength (that is, the strength with which one holds one’s
salient beliefs about the object) and belief evaluation (the evaluation one
has of these beliefs).1 Specifically, the relation of belief strength and belief
evaluation to attitude is said to be described by the following formula:
A_O = Σ b_i e_i

where A_O is the attitude toward the object, b_i is the strength of a given
belief, and e_i is the evaluation of a given belief. The sigma (Σ) indicates
that one sums across the products of the belief strength and belief
evaluation ratings for each belief. That is, one multiplies each belief
evaluation by the strength with which that belief is held and then sums
those products to arrive at an estimate of the overall attitude toward the
object. If there are five salient beliefs about the object, then the attitude
estimate is given by b_1e_1 + b_2e_2 + b_3e_3 + b_4e_4 + b_5e_5.

The procedures for assessing the elements of this model are well
established. One’s attitude toward the object (A_O) can be obtained by
familiar attitude measurement techniques. The strength with which a belief
is held (b_i) can be assessed through scales such as likely–unlikely,
probable–improbable, and true–false. The evaluation of a belief (e_i) is
assessed through semantic-differential evaluative scales such as good–bad,
desirable–undesirable, favorable–unfavorable, and the like.

As an example: Suppose that a preliminary survey had indicated that the
most salient beliefs held about Senator Smith by the senator’s constituents
were that the senator supports defense cuts, is helpful to constituents, is
respected in the Senate, and is unethical. One might assess the strength
with which the first of these beliefs was held by respondents through items
such as those listed in Figure 4.1. The evaluation of that belief can be
assessed with items such as those in Figure 4.2.

Figure 4.1 Examples of questionnaire items assessing belief strength (b_i).

Figure 4.2 Examples of questionnaire items assessing belief evaluation (e_i).

Suppose (to simplify matters) that for each belief, belief strength and belief
evaluation were assessed by a single scale (perhaps “likely–unlikely” for
belief strength, “good–bad” for belief evaluation) scored from +3 (likely
or good) to −3 (unlikely or bad). A particular respondent might have the
belief strength and belief evaluation ratings for the four salient beliefs
about Senator Smith shown in Figure 4.3. The respondent in Figure 4.3
believes that it is quite likely that the senator supports defense cuts (belief
strength of +3), and supporting defense cuts is seen as a moderately
negative characteristic (evaluation of −2); the respondent thinks it very
unlikely that the senator is helpful to constituents (helpfulness to
constituents being thought to be a very good quality); the respondent
thinks it moderately likely that Smith is respected in the Senate, and that is
a slightly positive characteristic; and the respondent thinks it rather
unlikely that Smith possesses the highly negative characteristic of being
unethical.

Figure 4.3 Estimating attitude from belief strength (b_i) and belief evaluation (e_i).

Because (in this example) each belief strength score (b_i) can range from −3
to +3 and each belief evaluation score (e_i) can range from −3 to +3, each
product (b_i e_i) can range from −9 to +9, and hence the total (across the four
beliefs in this example) can range from −36 to +36. A person who thought
that the qualities of supporting defense cuts, being helpful to constituents,
and being respected in the Senate were all very positive characteristics
(belief evaluations of +3 in each case) and who thought it very likely that
the senator possessed each of these qualities (belief strength of +3 for
each), and who also thought it quite unlikely (−3 belief strength) that the
senator possessed the strongly negative (−3 belief evaluation)
characteristic of being unethical would have a total (Σ b_i e_i) of +36,
indicating an extremely positive attitude toward the senator—as befits
such a set of beliefs. By comparison, the hypothetical respondent with a
total of −7 might be said to have a slightly negative attitude toward Senator
Smith.
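
The model’s arithmetic is easy to sketch in code. The following minimal Python example (illustrative only) reproduces the hypothetical Senator Smith respondent described above, with each belief strength and belief evaluation scored from −3 to +3, and computes the attitude estimate as the sum of the b_i e_i products.

```python
# Summative model: attitude estimate A_O = sum of b_i * e_i over salient beliefs.
# Ratings follow the chapter's hypothetical Senator Smith respondent (Figure 4.3),
# each scored from -3 to +3.
beliefs = {
    "supports defense cuts":   (+3, -2),  # quite likely; moderately negative trait
    "helpful to constituents": (-3, +3),  # very unlikely; very good quality
    "respected in the Senate": (+2, +1),  # moderately likely; slightly positive
    "unethical":               (-2, -3),  # rather unlikely; highly negative trait
}

attitude = sum(b * e for b, e in beliefs.values())
print(attitude)  # -7: a slightly negative attitude toward the senator
```

Note that a negative belief strength paired with a negative evaluation yields a positive product: thinking it unlikely that the senator has a bad quality raises the attitude estimate.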

Perhaps it is apparent how this general approach could be used for other
attitude objects (with different salient beliefs, of course). In consumer
marketing, for example, the attitude object of interest is a product or brand,
and the salient beliefs typically concern the attributes of the product or
brand. Thus, for instance, the underlying bases of consumers’ attitudes
toward a given brand of toothpaste might be investigated by examining the
belief strength and belief evaluation associated with consumers’ salient
beliefs about that brand’s attributes: whitening power, taste, ability to
prevent cavities, cost, ability to freshen breath, and so forth.

As another example of application, people’s attitudes toward public policy
proposals can be studied; here the salient beliefs might well include beliefs
about the consequences of adoption of the policy. Consider, for instance,
some possible cognitive bases of attitudes toward capital punishment.
Does capital punishment deter crime (belief strength), and how good an
outcome is that (belief evaluation)? Is capital punishment inhumane, and
how negatively valued is that? Is capital punishment applied inequitably,
and how disadvantageous is that? And so forth. Two persons with opposed
attitudes on this issue might equally value crime deterrence—that is, have
the same evaluation of that attribute—but disagree about whether capital
punishment has that attribute. Or two people with opposed attitudes might
agree that capital punishment has the characteristic of satisfying the desire
for vengeance but differ in the evaluation of that characteristic.

Adapting Persuasive Messages to Recipients Based on the Summative Model
A recurring theme in theoretical analyses of persuasion is the idea that to
maximize effectiveness, persuasive messages should be adapted (tailored,
adjusted) to fit the audience. And one of the most common ways of
persuading others is by making arguments that change the recipient’s
beliefs in some way: an advertisement leads people to believe that a
cleaning product has exceptional stain-removal properties, a speaker at a
city council meeting convinces council members that additional dedicated
bicycle paths would enhance road safety, and so forth. The summative
model of attitude points to a number of alternative strategies for
influencing attitude in this way—and provides a systematic way of
adapting persuasive messages to audiences by identifying plausible foci
for persuasive appeals.

Alternative Persuasive Strategies
Because, on this view, one’s attitude is taken to be a function of the
strength and evaluation of one’s salient beliefs about the object, attitude
change will involve changing these putative bases of attitude. The model
thus suggests three broad ways in which attitude might be changed. First,
the evaluation of an existing salient belief might be changed. For example,
to encourage a more positive attitude, a persuader might try to make some
existing positive belief even more positively evaluated (“Senator Smith is,
as you know, respected in the Senate, but you may not realize just how
desirable that attribute is—it means Senator Smith can be more effective in
passing legislation to help our state”) or to make some existing negative
belief less negatively evaluated (“Sure, Senator Smith was only an average
student—but then again, being an average student isn’t so bad”).

Second, the strength (likelihood) of an existing salient belief might be
changed. For example, to encourage a more positive attitude, a persuader
might try to weaken the strength of an existing negative belief (“It’s not
likely that Senator Smith accepted bribes, because Senator Smith is
already very wealthy”) or to enhance the strength of an existing positive
belief (“You already know it’s true that Senator Smith has worked hard for
the people of this state—but you don’t know just how true that is”).

Third, the set of salient beliefs might be changed. This can be
accomplished in two ways. One is to add a new belief of the appropriate
valence (“You might not realize it, but Senator Smith has been quietly
working to fix the government’s budget problems”). The other is to change
the relative salience of existing beliefs (“Have you forgotten that five years
ago Senator Smith helped keep XYZ Industries from moving out of
state?”).
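
In the model’s terms, each of these three strategies is a different change to the b_i and e_i ratings. A brief Python sketch (the numbers are hypothetical, building on the Senator Smith ratings used earlier) shows how each strategy shifts the attitude estimate in a positive direction:

```python
# Each belief is a (belief strength, belief evaluation) pair, scored -3 to +3.
def attitude(beliefs):
    return sum(b * e for b, e in beliefs)

base = [(+3, -2), (-3, +3), (+2, +1), (-2, -3)]   # the Senator Smith example

# Strategy 1: change a belief evaluation ("respected in the Senate" from +1 to +3).
strategy1 = [(+3, -2), (-3, +3), (+2, +3), (-2, -3)]
# Strategy 2: change a belief strength ("helpful to constituents" from -3 to +1).
strategy2 = [(+3, -2), (+1, +3), (+2, +1), (-2, -3)]
# Strategy 3: add a new positive salient belief (hypothetically, "fixing the
# budget": strength +2, evaluation +3).
strategy3 = base + [(+2, +3)]

print(attitude(base), attitude(strategy1), attitude(strategy2), attitude(strategy3))
# -7 -3 5 -1: each strategy moves the estimate in the positive direction
```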

Identifying Foci for Appeals
The summative attitude model can also be useful in identifying likely foci
for persuasive appeals, that is, identifying what kinds of appeals will likely
be most appropriate for influencing a given message recipient. This facet
of the model is particularly apparent when considering mass persuasion
contexts. In planning persuasive messages, a persuader might survey those
favoring and opposing the advocated view, to get a clear sense of how the
strength and evaluation of their beliefs differ. Such information can be
helpful in identifying exactly which beliefs to address in one’s persuasive
appeals—and more specifically whether to target belief strength or belief
evaluation (or both). Indeed, such data can produce unexpected insights.

Consider, for example, the challenge of persuading people to eat a low-fat
diet. One apparent advantage of such diets is that they reduce the risk of
cardiovascular disease—so a persuader might naturally think of
constructing persuasive appeals around that consequence. But this
argument might not necessarily be especially effective. A study of UK
undergraduates distinguished participants on the basis of whether they
intended to eat a low-fat diet over the next month (Armitage & Conner,
1999). Using modally salient beliefs (identified in a preliminary study), the
mean belief strength (likelihood) and belief evaluation ratings were
examined separately for “intenders” and “non-intenders.” Intenders and
non-intenders had equally positive evaluations of reducing the risk of heart
disease, which is perhaps not surprising. But they also had equally strong
beliefs about the likelihood that eating a low-fat diet would lead to that consequence. That is, non-intenders didn't need to be convinced about the outcome of heart disease risk reduction; a persuader would only be wasting effort by constructing persuasive appeals based on that benefit.

But the data did suggest other, more likely targets for persuasive messages.
For example, intenders and non-intenders had equally negative
assessments of “eating boring food,” but intenders thought that to be much
less likely a consequence than did non-intenders; a persuader thus might
try to convince non-intenders that in fact eating a low-fat diet doesn’t
mean having to eat boring food. Similarly, intenders and non-intenders had
equally positive evaluations of “feeling healthier,” but non-intenders were
not as convinced (as were intenders) that eating a low-fat diet would make
them feel healthier. Notably, for beliefs about whether a low-fat diet
“helps to maintain lower weight,” intenders and non-intenders differed
with respect to both belief strength (intenders thought that outcome more
likely than did non-intenders) and belief evaluation (intenders valued that
outcome more than did non-intenders). A persuader who wanted to
emphasize this advantage would face the task of convincing non-intenders
both that this was a desirable outcome and that eating a low-fat diet would
produce the outcome.
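
The diagnostic logic of such an analysis can be sketched in a few lines. The ratings, the function name, and the 0.5 difference threshold below are all invented for illustration; they are not Armitage and Conner's data or procedure.

```python
# A sketch of the diagnostic logic: for each belief, compare mean belief-
# strength (b) and belief-evaluation (e) ratings between intenders and
# non-intenders; components on which the groups differ are candidate
# message targets. All numbers are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def target_components(intenders, nonintenders, gap=0.5):
    """Each argument is a list of (b, e) ratings for one belief; returns the
    components on which the group means differ by at least `gap`."""
    targets = []
    if abs(mean([b for b, _ in intenders]) - mean([b for b, _ in nonintenders])) >= gap:
        targets.append("strength")
    if abs(mean([e for _, e in intenders]) - mean([e for _, e in nonintenders])) >= gap:
        targets.append("evaluation")
    return targets

# "Reduces heart disease risk": both groups agree on likelihood and value,
# so this belief offers no leverage for a persuader.
print(target_components([(3, 3), (3, 3)], [(3, 3), (3, 3)]))        # []

# "Means eating boring food": equally disliked by both groups, but intenders
# think it far less likely -- so belief strength is the component to target.
print(target_components([(-2, -3), (-2, -3)], [(2, -3), (2, -3)]))  # ['strength']
```

A survey-based version of this comparison would simply substitute the observed group means for the invented ratings.
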

The general point is this: The summative model of attitude offers a framework within which persuaders can think systematically about which
persuasive appeals to make. Instead of selecting arguments haphazardly
and hoping to stumble into an effective appeal, persuaders can
methodically identify the most likely avenues to persuasive success. (For
examples of such analyses, see Brown, Ham, & Hughes, 2010; Cappella,
Yzer, & Fishbein, 2003; Fishbein & Yzer, 2003; Jung & Heald, 2009;
Middlestadt, 2012; Parvanta et al., 2013; Rhodes, Blanchard, Courneya, &
Plotnikoff, 2009; relatedly, see Niederdeppe, Porticella, & Shapiro, 2012.)

Research Evidence and Commentary


The commentary that follows initially takes up some questions addressed
specifically to the summative model (the general evidence concerning the
model; the roles of attribute importance, belief content, and belief strength
in the model; and the procedures for scoring the model’s scales), then turns
to another belief-based model (offering an alternative image of how beliefs
are related to attitudes) and to the general question of the sufficiency of
belief-based analyses of attitude; a concluding section reconsiders the
persuasive strategies suggested by the summative model.

General Correlational Evidence
A number of investigations have examined the correlation between a direct
measure of the respondent’s attitude toward the object (AO) and the
predicted attitude based on the summative formula (Σbiei) using modally
salient beliefs. Reasonably strong positive correlations have commonly
been found, ranging roughly from .55 to .80 with a variety of attitude
objects including public policy proposals (e.g., Peay, 1980; Petkova,
Ajzen, & Driver, 1995), political candidates (e.g., M. H. Davis & Runge,
1981; Holbrook & Hulbert, 1975), and consumer products (e.g., Holbrook,
1977; Nakanishi & Bettman, 1974).2 That is, attitude appears to often be
reasonably well predicted by this model.
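
By way of illustration, the model's prediction can be sketched in a few lines of code. The respondent ratings below are invented (scored -3 to +3, one common convention discussed later in this chapter), and the correlation function is a plain Pearson computation; nothing here reproduces the cited studies' data.

```python
# Illustrative sketch of the summative model: predicted attitude is the sum
# of belief-strength (b) x belief-evaluation (e) products over salient
# beliefs, which can then be correlated with direct attitude measures.

def predicted_attitude(beliefs):
    """beliefs: list of (b, e) pairs, each rated here on -3..+3 scales."""
    return sum(b * e for b, e in beliefs)

def pearson_r(xs, ys):
    """Pearson correlation between predicted and directly measured attitudes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Three respondents' (b, e) ratings for the same three modally salient beliefs:
respondents = [
    [(3, 2), (2, -1), (1, 3)],    # predicted attitude: 6 - 2 + 3 = 7
    [(1, 2), (3, -1), (0, 3)],    # 2 - 3 + 0 = -1
    [(2, 2), (1, -1), (3, 3)],    # 4 - 1 + 9 = 12
]
predicted = [predicted_attitude(r) for r in respondents]
direct = [5, -2, 10]              # invented direct attitude-measure scores
print(pearson_r(predicted, direct))
```

With real data, `predicted` and `direct` would come from questionnaire responses rather than invented lists, but the computation is the same.
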

This correlational evidence, however, does not offer compelling support for the claim that attitude is determined by salient beliefs. Attitude can
sometimes be equally well-predicted (using Σbiei) from nonsalient beliefs
as from salient ones (e.g., Ajzen et al., 1995; A. J. Smith & Clark, 1973).
Such findings might reflect respondents’ use of their current attitudes as
guides to responding to items concerning nonsalient beliefs (e.g., if I have
a negative attitude toward the object, and the standardized belief list asks
for my reactions to statements associating the object with some attribute
that I had not considered, I might well give relatively unfavorable
responses precisely because I already have a negative attitude toward the
object). Moreover, when standardized belief lists (that is, lists containing
modally salient beliefs) and individualized belief lists (in which each
respondent gets an individually constructed questionnaire, listing only his
or her particular salient beliefs) have been compared as the basis for
attitude prediction, often there is no dependable difference (e.g., Agnew,
1998; Bodur, Brinberg, & Coupey, 2000; O’Sullivan, McGee, & Keegan,
2008; see, relatedly, Steadman & Rutter, 2004). So the correlational
evidence in hand certainly shows that belief assessments can indicate a
person’s attitude (if only because persons give attitude-consistent
responses to questionnaire items about beliefs) but falls short of showing
that attitudes are determined by salient beliefs. (For a careful discussion of
these matters, see Eagly & Chaiken, 1993, pp. 232–234.)

Attribute Importance
Several investigations have explored the potential role of attribute
importance or relevance in predicting attitude. The summative model, it will be noticed, uses only belief strength and belief evaluation to predict
attitude; some researchers have thought that the predictability of attitude
might be improved by adding the importance or relevance of the attribute
as a third variable. That is, in addition to assessing belief strength and
belief evaluation, one would also obtain measures of the relevance or
importance of each belief to the respondent; then some three-component
formula such as ΣbieiIi (where Ii refers to the importance of the attribute)
could be used to predict attitude.3

But the evidence in hand suggests that adding relevance or importance to the summative formula does not improve the predictability of attitude
(e.g., L. R. Anderson, 1970; Hackman & Anderson, 1968; Holbrook &
Hulbert, 1975; Kenski & Fishbein, 2005). In understanding this result, it
may be helpful to consider the possibility that the attributes judged more
important or relevant may also have more extreme evaluations; the
assessment of belief evaluation (ei) may already involve indirect
assessment of relevance and importance (e.g., Holbrook & Hulbert, 1975;
cf. van der Pligt & de Vries, 1998a). Moreover, if an investigator is careful
to select only salient attributes as the basis for attitude prediction, then
presumably all the attributes assessed are comparatively relevant and
important ones. So there appears to be little reason to suppose that the
predictability of attitude from the original summative formula (Σbiei) can
be improved by adding a belief importance or belief relevance
component.4

Enhancing the predictability of attitude, however, is arguably not the main relevant research goal. The larger purpose is that of illuminating how
beliefs contribute to attitude; given that attitude can be predicted even
from nonsalient beliefs (as discussed above), the use of predictability as
the relevant criterion is a bit misleading. Indeed, with that larger goal in
mind, belief importance ratings can be seen to be valuable in another way
(i.e., beyond whether they add to the predictability of attitude from Σbiei).
Suppose, for example, that an investigator has not been careful to ensure
(by pretesting) that the listed beliefs are modally salient for respondents.
Although many of the listed beliefs are not actually ones that determine the
respondents’ attitudes, it is still possible that the belief list will produce a
reasonably strong correlation between Σbiei and attitude (because, as
discussed earlier, respondents can use their current attitudes as guides to
responding to these nonsalient belief items). In such a circumstance,
importance ratings might indicate which of the listed beliefs are actually salient for the respondents. Indeed, even if the belief list has been pretested
to ensure that it contains modally salient beliefs, belief importance ratings
may still give some insight into what underlies attitudes, especially if there
is some reason to think that different respondents (or subgroups of
respondents) may have importantly different sets of salient beliefs. In
short, although belief importance might not add to the predictability of
attitude from Σbiei, belief importance ratings may be crucial in permitting
the identification of those beliefs (on a standardized list) that actually
determine the respondent’s attitude—and hence the beliefs that warrant a
persuader’s attention. (For illustrations of such a role for importance
ratings, see, e.g., Elliott, Jobber, & Sharp, 1995; van der Pligt & de Vries,
1998a, 1998b. For a general discussion, see van der Pligt, de Vries,
Manstead, & van Harreveld, 2000.)

Belief Content
The summative model offers what might be called a content-free analysis
of the underpinnings of attitude. That is, for the summative model’s
analysis, the content of a belief is irrelevant; what matters is simply how
the belief is evaluated and how strongly it is held. Ignoring content may
indeed be appropriate given an interest simply in attitude prediction. But
for other purposes, systematic attention to belief content may be important.

Functional approaches to attitude (discussed in Chapter 3) provide a useful contrast here. Functional approaches identify different syndromes
(coherent sets) of beliefs based on belief content; different attitude
functions correspond to substantively different (sets of) salient beliefs. So,
for example, two automobile owners may have equivalently positive
attitudes about a car but very different kinds of underlying salient beliefs:
One has beliefs about gas mileage and frequency-of-repair records,
whereas the other has beliefs about what identity is conveyed by the
automobile.

Such differences in belief content can figure importantly in persuasion. Consider, for example, the persuasive strategy of creating a more positive
attitude by inducing belief in some new positive attribute. If content is
ignored, one positive attribute might seem as good as another for this
purpose. But a functional perspective recommends considering the
substantive content of the belief to be instilled; after all, if the receiver has
predominantly image-oriented beliefs about the object, then trying to add
some product quality–oriented belief (“gets good gas mileage”) may not be successful.

The point here is not that such considerations cannot be represented within
a summative model framework (see, e.g., Belch & Belch, 1987). For
example, one might say that the “good gas mileage” attribute is more
likely to be salient for (or valued by) one person than another or that that
attribute is more likely to be perceived as associated with the attitude
object by one person than by another. Rather, the point is that the
summative model provides no systematic ways of thinking about belief
content, although such content is manifestly important. In a sense, then,
one might think of these approaches as complementary: Functional
approaches emphasize the (manifest or latent) content of beliefs, whereas
belief-based attitude models (such as the summative model) are aimed at
illuminating how underlying beliefs contribute to an overall attitude.

Role of Belief Strength


There is good reason to think that the apparent contribution of belief
strength scores to the prediction of attitude does not reflect a genuine role
for belief strength in determining attitude but instead is a methodological
artifact (something artificially arising from the research methods employed
rather than from any genuine phenomenon). The relevant evidence comes
from research comparing Σei (that is, the simple sum of the belief
evaluations) with Σbiei (the summative formula) as predictors of attitude.
The relative success of these two formulas varies, depending on the way in
which the list of salient beliefs is prepared.

The most common way of preparing the list of salient beliefs is by eliciting
beliefs from a test sample, identifying the most frequently mentioned
beliefs, and using these on the questionnaire. In this procedure, a
standardized belief list is composed (i.e., every respondent receives the
same set of modally salient beliefs). An alternative procedure is to elicit
salient beliefs from each respondent individually and so have each
respondent provide belief strength and belief evaluation ratings for his or
her unique set of salient beliefs. That is, an individualized belief list can be
constructed for each respondent.

The research evidence indicates that when individualized belief lists are
used, Σei and Σbiei are equally good predictors of attitude; adding belief
strength scores to the formula does not improve the predictability of attitude. With standardized belief lists, however, Σbiei is a better predictor
than is Σei. That is, belief strength scores significantly improve the
predictability of attitude only when standardized (as opposed to
individualized) belief lists are used (Cronen & Conville, 1975; Delia,
Crockett, Press, & O’Keefe, 1975; Eagly, Mladinic, & Otto, 1994).5 On
reflection, of course, this result makes good sense. With individualized
belief lists, the respondent has just indicated that he or she thinks the
object possesses the attribute; only beliefs that the respondent already
holds are rated for belief strength. By contrast, with standardized belief
lists, belief strength scores distinguish those beliefs the respondent holds
from those the respondent does not hold. The use of standardized lists thus
creates a predictive role for belief strength scores (namely, the role of
differentiating those beliefs the respondent holds from those the
respondent does not hold), but the predictive contribution of belief strength
scores is a methodological artifact, not an indication of any genuine place
for belief strength in the cognitive states underlying attitude.
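
The logic can be illustrated with a small sketch. The ratings are hypothetical, bipolar -3 to +3 scoring is assumed for both components, and the point is only structural: on a standardized list, a belief-strength rating of zero removes an unheld belief from the summative formula, whereas the evaluations-only sum still counts it.

```python
# Hypothetical illustration (not data from the cited studies) of why belief-
# strength scores matter only with standardized belief lists. Bipolar -3..+3
# scoring is assumed for both components.

def sum_e(beliefs):
    """Sum of belief evaluations alone (the Σe predictor)."""
    return sum(e for _, e in beliefs)

def sum_be(beliefs):
    """The summative formula: evaluations weighted by belief strength (Σbe)."""
    return sum(b * e for b, e in beliefs)

# Standardized list: the third listed belief is not one this respondent holds,
# so they mark the belief-strength midpoint (b = 0).
standardized = [(3, 3), (2, -1), (0, 3)]
# Σe still counts the unheld belief's +3 evaluation; Σbe zeroes it out.
print(sum_e(standardized), sum_be(standardized))    # 5 vs 7

# Individualized list: only beliefs the respondent actually holds are rated,
# so belief strength adds little beyond the categorical fact of holding them.
individualized = [(3, 3), (2, -1)]
print(sum_e(individualized), sum_be(individualized))
```

Here the two formulas diverge only because the standardized list includes a belief the respondent does not hold, which is the artifact the text describes.
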

To put the point somewhat differently: These results suggest that—insofar as the underlying bases of attitude are concerned—we may more usefully
think of persons’ beliefs about an object as rather more categorical (“I
think the object has the attribute,” “I don’t think the object has the
attribute,” or “I’m not sure”) than continuous (“I think that the probability
that the object possesses the attribute is thus-and-so”). The belief strength
scales give the appearance of some continuous gradation of belief
probability, but these scales make a contribution to attitude prediction only
because standardized belief lists are used. When individualized belief lists
are used, belief strength scores are unhelpful in predicting attitude because
in each case the individual thinks that the object has the attribute—and it is
that simple categorical judgment (not variations in the reported degree of
probabilistic association) that is important in determining the individual’s
attitude. (For some evidence suggesting a more categorical than purely
continuous image of belief strength, see Weinstein, 2000, esp. pp. 72–73.)6

Scoring Procedures
There has been a fair amount of discussion in the literature concerning
how the belief strength and belief evaluation scales should be scored (e.g.,
Ajzen & Fishbein, 2008; Bagozzi, 1984; Fishbein & Ajzen, 2010, pp. 105–
110; Lauver & Knapp, 1993; J. L. Smith, 1996). By way of illustration,
two common ways of scoring a 7-point scale are from −3 to +3 (bipolar scoring) and from 1 to 7 (unipolar scoring). (There are possibilities in
addition to −3 to +3 and 1 to 7, but these two provide a useful basis for
discussion.) With belief strength and belief evaluation scales, one might
score both scales −3 to +3, score both scales 1 to 7, or score one scale −3
to +3 and the other 1 to 7. But (because the scales are multiplied) these
different scoring procedures can yield different correlations of Σbiei with
attitude, and hence a question has arisen concerning which scoring
procedures are preferable.

Sometimes conceptual considerations have been adduced as a basis for choosing a scoring method. These arguments commonly take one of two
forms. One is an appeal to the nature of the relevant psychological states.
For example, it is sometimes suggested that evaluation is naturally better
understood as bipolar rather than as unipolar or that belief strength scales
should not be scored in a bipolar way because it is not psychologically
meaningful for attitude objects to be negatively associated with attributes
(for an example of such arguments, see Bagozzi, 1984). The other
considers the plausibility of the consequences of employing various
combinations of scoring procedures. For example, to take the simplifying
case of a person with just one salient belief, a respondent who strongly
believes that the object possesses a very undesirable characteristic should
presumably have the least favorable attitude possible—but if both scales
are scored from 1 to 7, that respondent will not have the lowest possible
Strength × Evaluation product (which thus suggests the implausibility of
such scoring). In particular, the combination of bipolar scoring for belief
evaluation and unipolar scoring for belief strength has often been argued to
be the theoretically most appropriate scoring combination (e.g., Steinfatt,
1977).

But the main criterion for assessing scoring procedures has been the
predictability of attitude thereby afforded. That is, the criterion has been
the observed correlation between Σbiei and attitude.7 Several studies have
compared the predictability of attitude using different scoring methods.
Although results vary, the most common finding seems to have been that
scoring both scales in a bipolar fashion yields larger correlations (of Σbiei,
with attitude) than do alternative combinations and, in particular, is
superior to the intuitively appealing bipolar evaluation and unipolar
strength combination (e.g., Ajzen, 1991; Gagné & Godin, 2000; Sparks,
Hedderley, & Shepherd, 1991; for discussion, see Fishbein & Ajzen, 2010,
pp. 108–109).8

But now the task becomes explaining why bipolar scoring for both scales
appears to maximize the correlation between Σbiei and attitude. Bipolar
scoring seems to make intuitive psychological sense in the case of belief
evaluation scales, but the general empirical success of bipolar scoring for
belief strength scales may appear puzzling. One possibility is simply this:
When standardized lists of modal salient beliefs are used, bipolar scoring
of belief strength scales may permit participants to remove all effects of
beliefs that they do not have (or beliefs that are not salient for them).
When such beliefs appear on the standardized belief list, a mark at the
midpoint of belief strength scales is a sensible response (the respondent
does not know, or is not sure, whether the object has the attribute, so
marks the midpoint rather than favoring either “likely” or “unlikely”).
With bipolar scoring, such a response is scored as zero—which, when
multiplied by the corresponding belief evaluation, will yield a product of
zero (no matter what the evaluation is); this has the entirely appropriate
effect of removing that belief from having any impact on the respondent’s
predicted attitude.9 In short, the common superiority of bipolar (over
unipolar) scoring of belief strength scales might be a consequence of the
use of standardized lists of beliefs and so may be a methodological artifact
rather than a source of substantive information about how belief strength
perceptions operate.
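
A small sketch may make the midpoint mechanism concrete. The ratings are invented; the rescaling simply maps a 1-to-7 response onto -3 to +3.

```python
# Sketch of the scoring issue: under unipolar 1..7 scoring, a midpoint
# belief-strength mark (4) still lets an unheld belief's evaluation affect
# the predicted attitude; under bipolar -3..+3 scoring the midpoint becomes
# zero and that belief drops out of the sum. Ratings are invented.

def to_bipolar(x):
    """Rescale a 1..7 rating to -3..+3."""
    return x - 4

def sum_be(beliefs):
    return sum(b * e for b, e in beliefs)

# (strength on 1..7, evaluation on -3..+3); the second belief's strength is
# marked at the midpoint because the respondent is unsure it applies.
raw = [(7, 3), (4, -2)]

unipolar_score = sum_be(raw)                                    # 21 + (4)(-2) = 13
bipolar_score = sum_be([(to_bipolar(b), e) for b, e in raw])    # (3)(3) + (0)(-2) = 9
print(unipolar_score, bipolar_score)
```

Only under bipolar scoring does the "not sure" belief contribute nothing to the predicted attitude, which is the effect the paragraph above describes.
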

Alternative Integration Schemes


The summative model depicts beliefs as combining in a particular way to
yield an overall evaluation, namely, in an additive way (summing across
the biei products). Hence, for instance, everything else being equal, adding
a new positive belief will make an attitude more favorable. But different
integration schemes—that is, different images of how beliefs combine to
yield attitudes—have been proposed. The most prominent of these is an
averaging model, as embedded in Anderson’s information integration
theory (N. H. Anderson, 1971, 1981b, 1991).10 Crudely expressed, an
averaging model suggests that attitude is determined by the average, not
the sum, of the relevant belief properties.

An averaging model can produce some counterintuitive predictions. For example, it suggests that adding a new positive belief about an object will
not necessarily make the attitude more positive. Suppose that a person’s
current attitude toward the object is based on four beliefs evaluated +3, +3,
+3, and +2. Imagine that the person acquires a new belief that is evaluated +2 (and, to simplify matters, assume equal belief strength weights for each
belief). A summative picture of belief combination expects the additional
belief to make the attitude more positive (because the sum of the
evaluations would be 13 rather than 11), but an averaging model predicts
that the overall attitude would be less positive: The average of the initial
four beliefs is 2.75, but the average of the set of five beliefs is 2.60 (that is,
adding the new attribute lowers the average evaluation).
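
The arithmetic of the two integration schemes can be sketched directly from this example:

```python
# The worked comparison from the text: four beliefs evaluated +3, +3, +3, +2,
# then a fifth belief evaluated +2 is added (equal belief-strength weights
# assumed, so only the evaluations matter).

def summative(evals):
    return sum(evals)

def averaging(evals):
    return sum(evals) / len(evals)

before = [3, 3, 3, 2]
after = before + [2]

# Summation predicts a more favorable attitude after the addition (11 -> 13);
# averaging predicts a less favorable one (2.75 -> 2.60).
print(summative(before), summative(after))
print(averaging(before), averaging(after))
```
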

For a time, a fair amount of research attention was devoted to comparing summative and averaging (and other) models of belief integration. But for
various reasons, no general conclusion issues from this research. For one
thing, in many circumstances, the models make equivalent predictions
(and so cannot be empirically distinguished); moreover, in circumstances
in which the models do make divergent predictions, each can point to
some evidence suggesting its superiority over the other (e.g., N. H.
Anderson, 1965; Chung, Fink, Waks, Meffert, & Xie, 2012; Fishbein &
Hunter, 1964). Thus neither seems to provide an entirely satisfactory
general account of how beliefs are related to attitudes; indeed, there may
be no single simple rule by which persons combine beliefs into an attitude.
(For some evidence and discussion, see Betsch, Kaufmann, Lindow,
Plessner, & Hoffmann, 2006; Eagly & Chaiken, 1993, pp. 241–255; Harris
& Hahn, 2009; R. Wyer, 1974, pp. 263–306.)11

An inability to display any decisive general superiority of one model over the other is in some ways unfortunate, as summative and averaging models
can yield different recommendations to persuaders. Suppose, for example,
that voters have a generally favorable attitude toward some policy issue
(e.g., gun control) that appears as a referendum ballot item. The organizers
of the campaign favoring that policy discover some new advantage to the
proposed policy. Naturally enough, they undertake an advertising
campaign to publicize this new positive attribute of the policy, hoping to
make voters’ attitudes even more favorable toward their position.

The initiation of this new campaign rests implicitly on a summative image of how this new information will be integrated: Adding a new positive
belief about an object should make attitudes toward that object more
favorable. But an averaging model will predict that, at least under some
circumstances, the addition of this new positive attribute could make
attitudes toward the policy less favorable than they had been—and hence
might conclude that this new advertising campaign is ill-advised. In the
absence of good evidence about just what sort of belief combination principle might best describe what will occur in a circumstance such as
this, one can hardly give persuaders firm recommendations.

The Sufficiency of Belief-Based Analyses


Belief-based attitude models depict beliefs about the object as the sole
determinants of attitudes. But the question has arisen whether such beliefs
are a sufficient basis for understanding attitudes; the issue is whether some
non–belief-based (noncognitive) elements might independently contribute
to attitude (independent, that is, of representations of belief structure such
as Σbiei).

The central research evidence here takes the form of studies investigating
whether a given noncognitive element makes a contribution to the
prediction of attitude beyond that afforded by measures of belief structure
(Σbiei). A convenient illustration is provided by research concerning the
effects of consumer advertising. Advertising presumably attempts to
influence the consumer’s beliefs about the product’s attributes or
characteristics, thereby influencing the consumer’s attitude toward the
product. But evidence suggests that at least under some circumstances, the
influence of advertising on receivers’ attitudes toward a given brand or
product may come about not only through receivers’ beliefs about the
product’s characteristics but also through the receivers’ evaluation of the
advertisement itself (the receivers’ “attitude toward the ad”). As receivers
have more favorable evaluations of the advertising, they come to have
more favorable attitudes toward the product being advertised. And several
studies have reported that this effect occurs over and above the
advertising’s effects on product beliefs—that is, attitude toward the ad and
Σbiei jointly have been found to be more successful in predicting attitude
than is Σbiei alone (e.g., Mitchell, 1986; for related findings, see
MacKenzie, Lutz, & Belch, 1986; for a review, see S. P. Brown &
Stayman, 1992). Such evidence appears to point to some influence on
attitudes beyond beliefs about the object and hence suggests the
insufficiency of a purely belief-based analysis of the determinants of
attitude.

The research evidence bearing on these matters is not uncontroversial. Fishbein and Middlestadt (1995) argued that most of the research
purporting to show an independent effect for noncognitive elements
(including attitude toward the ad) is methodologically flawed (for discussion, see, e.g., Fishbein & Middlestadt, 1997; Herr, 1995; Miniard &
Barone, 1997; Priester & Fleming, 1997). One illustration of such flaws is
that if an investigator is not careful to ensure that salient beliefs are being
assessed, then the apparent ability of some noncognitive factor to add to
the predictability of attitude beyond Σbiei might reflect not some genuine
influence of the noncognitive factor but rather a shortcoming in the
assessment of beliefs; the suggestion is that with better belief assessment,
the apparent noncognitive contribution might disappear.12

There does, however, seem to be good evidence pointing to an independent role for some noncognitive elements, namely, feelings (emotions, affect).
The suggestion is that attitude might be influenced either by cognitive
(belief-related) considerations or by affective (feeling-related)
considerations. So, for example, a person’s evaluation of a politician might
reflect cognitions concerning the politician’s personal attributes and issue
positions or might be based on the feelings that the politician evokes in the
person (hope, anger, disgust, pride, etc.). Consistent with this suggestion,
several studies have reported that attitudes are often better predicted from
a combination of affective and cognitive assessments than from either one
alone (e.g., Abelson, Kinder, Peters, & Fiske, 1982; Agarwal & Malhotra,
2005; C. T. Allen, Machleit, Kleine, & Notani, 2005; Bodur et al., 2000;
Crites, Fabrigar, & Petty, 1994; Eagly et al., 1994; Haddock & Zanna,
1998). Such evidence invites a picture of attitudes as potentially having
both affective and cognitive determinants (as offered by, e.g., Eagly &
Chaiken, 1993, pp. 14–16; Zanna & Rempel, 1988) and suggests the
incompleteness of a purely belief-based analysis of attitude.13

Of course, if belief is understood sufficiently broadly, none of this is necessarily inconsistent with a belief-based model of attitude. The
distinction between affect and cognition might sensibly be said to be one
of emphasis: No cognition is free from affect (every belief has some
evaluative aspect, even if neutral), and even self-reported feelings amount
to reports about what people believe (what people believe their feelings
are). Indeed, these kinds of considerations have led some commentators to
suggest that instead of drawing a contrast between pure affect and pure
cognition, it might be more useful to distinguish affective beliefs and
cognitive beliefs (see, e.g., Trafimow & Sheeran, 1998) or instrumental
beliefs and experiential beliefs (Fishbein & Ajzen, 2010, pp. 82–85).

Even approached in such a fashion, however, the evidence in hand suggests the importance of being alert to the different types of beliefs that might underlie attitudes.14 Some attitudes might be primarily based on
affective or experiential considerations, others predominantly on cognitive
or instrumental considerations, and still others on a mixture of these
elements.15 And, of course, understanding the current basis of a person’s
attitude is commonly a first step toward understanding how the attitude
might be changed.

Persuasive Strategies Reconsidered


The various research findings discussed above invite some reconsideration
of the persuasive strategies suggested by the summative model. Those
strategies involve changing the strength of a current belief, changing the
evaluation of a current belief, or changing the set of salient beliefs (by
adding new beliefs or by altering the relative salience of current ones).
Scant evidence directly compares the relative effectiveness of these
different strategies, but other research provides some insight into the likely
utility of the various alternatives.16

Belief Strength as a Persuasion Target


The apparently artifactual role of belief strength scores suggests the
implausibility of certain persuasive strategies that the summative model
might recommend. Consider a persuader who is trying to induce a
favorable attitude toward Boffo Beer. Suppose that a particular respondent
has the salient belief that Boffo Beer tastes good and on 7-point scales
(scored −3 to +3) indicates that this attribute is highly desirable (+3 for
belief evaluation) and that it is moderately likely (+2 for belief strength).
The summative attitude model suggests that this respondent’s attitude
could be made more positive by influencing the belief strength rating for
this attribute—specifically, by getting the respondent to believe that it is
very likely that Boffo Beer tastes good (+3 for belief strength).

But if belief strength does not actually influence attitude, then such a
strategy is misguided; if a person already has the relevant categorical
judgment in place, trying to influence the degree of association between
the object and the attribute will not influence attitude. Thus if our
hypothetical respondent already believes that Boffo Beer tastes good, there
appears to be little point in seeking changes in the exact degree of the
respondent’s subjective probability judgment that Boffo Beer tastes good.

Of course, if our respondent thinks Boffo Beer does not taste good (or has
no opinion about Boffo’s taste), then in seeking to induce a positive
attitude toward the beer, a persuader may well want to induce the belief
that Boffo does taste good. But this will be a matter of changing the
relevant categorical judgment (e.g., from “Boffo Beer doesn’t taste good”
to “Boffo Beer does taste good”) and need not be approached as though
there is some psychologically real probabilistic degree of perceived
association between object and attribute. That is, the key distinction will
be between whether the person does or does not have the belief, not
between finer gradations of belief strength.

Indeed, a surprisingly large number of studies have found that messages varying in the depicted likelihood of consequences did not differentially
influence persuasive outcomes (whereas messages varying in the
desirability of depicted outcomes did correspondingly differ in
persuasiveness). For example, in a series of studies, Smith-McLallen
(2005) manipulated both likelihood information and desirability
information, finding that attitudes were more influenced by variations in
the desirability of the claimed consequences than by variations in the
likelihood of those consequences’ occurrence (see also Johnson, Smith-
McLallen, Killeya, & Levin, 2004; Levin, Nichols, & Johnson, 2000).
Similar results have been reported in investigations of colorectal cancer
screening (Lipkus, Green, & Marcus, 2003), consumer product safety
perceptions (Wogalter & Barlow, 1990, Experiment 1), and energy
consumption reduction (Hass, Bagley, & Rogers, 1975): Messages varying
in the desirability of the depicted consequences varied correspondingly in
persuasiveness, but messages varying in the depicted likelihood of the
consequences did not (see, relatedly, Jerit, 2009; Mevissen, Meertens,
Ruiter, & Schaalma, 2010).17 These findings thus are consistent with the
idea that variation in belief strength may not be consequential—and thus
may not be a useful target for persuasive efforts. (For a review, see
O’Keefe, 2013a.)

Belief Evaluation as a Persuasion Target


Little research directly addresses the persuasive strategy of changing the
evaluation of a currently held belief. Lutz (1975) found that messages
designed specifically to change laundry detergent attribute evaluations had
little effect on those evaluations (and, correspondingly, little effect on
attitudes). It may be that, as Eagly and Chaiken (1993, p. 237) suggest,
some attribute evaluations are relatively stable by virtue of their basis in prior experience. For example, the evaluation of the laundry detergent
attribute “gets your clothes clean” might well be expected to be relatively
stable for most people.

Still, sometimes attribute evaluations appear to be a useful focus for persuaders. For example, a consumer may need to be convinced that a
faster Internet connection would be desirable. Death penalty opponents
might seek to convince the public that vengeance is a base and unworthy
motive—and hence that capital punishment’s attribute of providing
vengeance is less desirable than one might previously have thought.
Although it may not be easy to influence the evaluations associated with
existing beliefs, changing such evaluations may nevertheless sometimes be
a key aspect of a persuader’s campaign.18

Changing the Set of Salient Beliefs as a Persuasion Mechanism
Changing the set of salient beliefs might be accomplished in two (not
mutually exclusive) ways: by adding new salient beliefs or by altering the
relative salience of existing beliefs.

Adding a new appropriately valenced belief (e.g., adding a belief that associates some new positive attribute with the object when the goal is to
make the attitude more favorable) would appear to be relatively attractive
as an avenue to attitude change.19 But the previously discussed research
evidence suggests that two considerations should be kept in mind here.
First, the content (not just the evaluation) of the advocated new belief may
need to be considered closely, as some candidate new beliefs may be more
compatible than others with the current set of beliefs. For instance, a
receiver with predominantly concrete, instrumental beliefs about a given
automobile (“It gets bad gas mileage” or “It gets good gas mileage”) may
not be receptive to advertising invoking image-oriented appeals (“You’ll
feel so sexy driving it”); for such a person, adding new instrumentally
oriented beliefs (“It’s a very safe vehicle”) might be a more plausible
approach.20

Second, nonsummative models predict that adding a new belief may fail to
move the attitude in the desired direction. If beliefs combine in a way that
involves averaging the evaluations of the individual beliefs (rather than
summing them), then it is possible that (for example) adding a new positive belief may not make the attitude more positive.21
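
The contrast between summing and averaging can be made concrete; the evaluations below are hypothetical, and belief strength is held constant and omitted so the difference between the two combination rules stands out.

```python
# Summing vs. averaging combination rules, with hypothetical belief
# evaluations (belief strength held constant and omitted).

def summed(evaluations):
    return sum(evaluations)

def averaged(evaluations):
    return sum(evaluations) / len(evaluations)

current = [3, 3]            # two strongly positive salient beliefs
with_new = current + [1]    # add a new, mildly positive belief

print(summed(current), summed(with_new))      # 6 then 7: summing predicts a more positive attitude
print(averaged(current), averaged(with_new))  # 3.0 then about 2.33: averaging predicts a less positive one
```

Because the new belief's evaluation (+1) falls below the current average (+3), the averaging rule predicts that adding this positive belief makes the attitude less favorable.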

The other broad way of changing the set of salient beliefs is to alter the
relative salience of currently held beliefs. For example, a persuader might
seek to make the audience’s beliefs about positive attributes more salient,
thereby enhancing the attitude. There is little direct evidence about the
effectiveness of implementing this strategy in persuasive messages (see
Batra & Homer, 2004; Delia et al., 1975; Shavitt & Fazio, 1990).
Nevertheless, it is easy to see that (for example) one purpose of point-of
purchase displays (e.g., in grocery stores) can be to influence which of the
product’s attributes are salient.

In employing such a strategy, it is important to identify just which beliefs are already actually salient, and (as intimated earlier) belief importance
ratings can be especially valuable for this purpose when standardized lists
(of modally salient beliefs) are used. For example, as van der Pligt et al.
(2000) have pointed out, smokers do not necessarily evaluate the
undesirable health consequences of smoking any less negatively than do
nonsmokers (nor do they necessarily give different belief strength ratings),
but the health consequences are less important (less salient)—and other
consequences more important—for smokers than for nonsmokers. Thus
attempting to shift the relative salience of these beliefs, making the
negative consequences more prominent and the positive consequences less
salient, may be a more productive avenue for persuasion than attempting to
influence belief evaluation or belief strength.

An indication of the potential effects of belief salience manipulations can be seen in studies of “issue framing,” in which messages (such as news
reports) concerning public policy issues vary in the interpretive framework
through which the issue is reported. In one classic study, participants read
one of two editorials concerning whether a hate group should be allowed
to stage a public rally. Those who read an editorial emphasizing free
speech were more inclined to permit the rally than those who read an
editorial about the public safety risks of such a rally (Nelson, Clawson, &
Oxley, 1997). One plausible mechanism for such effects is that the varying
frames of the communication produce corresponding variation in the
salience of frame-related beliefs. (For other examples and reviews, see
Bolsen, Druckman, & Cook, 2014; Chong & Druckman, 2007; Druckman,
2011; Nelson, Oxley, & Clawson, 1997; Tewksbury & Scheufele, 2009.)

Conclusion
The general idea that the beliefs one has about an object influence one’s
attitude toward that object is enormously plausible, and, correspondingly,
it seems obvious that one natural avenue to attitude change involves
influencing beliefs. Hence it is not surprising that belief-based models of
attitude have received such attention from students of persuasion. Indeed,
the summative model of attitude obviously offers some straightforward
recommendations to persuaders.22 Still, many particulars of the
relationship of beliefs and attitudes remain elusive, with corresponding
uncertainties for the understanding of persuasion.

For Review
1. Explain the general idea of belief-based approaches to attitude. What
is a salient belief? How can one identify a person’s salient beliefs
about a given object? Explain how, in a survey context, one might
identify the modal (most common) salient beliefs about an object.
2. According to the summative model of attitude, what are the two
determinants of attitude? What is belief strength? Describe
questionnaire items that might be used to assess belief strength. What
is belief evaluation? Describe questionnaire items that might be used
to assess belief evaluation. Explain the summative model’s
description of how belief strength and belief evaluation combine to
produce attitude; that is, describe and explain the summative model’s
formula. Give an example that illustrates the model’s application.
3. Sketch three alternative strategies for attitude change suggested by
the summative model. Explain (and give examples of) the strategy of
changing the evaluation of an existing salient belief, the strategy of
changing the strength of an existing salient belief, and the strategy of
changing the set of salient beliefs. Describe two ways of changing the
set of salient beliefs. Explain how the summative model can be useful
in identifying possible foci for persuasive appeals.
4. What is the general pattern of correlations between the summative
model’s predictions and direct measures of attitude? Explain why
such correlational evidence does not necessarily show that attitude is
determined by salient beliefs. What is attribute importance? Does
adding attribute importance to the summative model’s formula
improve the predictability of attitude? Explain how belief importance
ratings can be useful even if they do not improve the predictability of attitude.
5. Is the summative model concerned with the content (as opposed to
the evaluation) of beliefs? Explain the complementary relationship of
functional approaches and belief-based models of attitude.
6. Describe the difference between standardized and individualized
belief lists. Do belief strength scores improve the predictability of
attitude when individualized belief lists are used? Do belief strength
scores improve the predictability of attitude when standardized belief
lists are used? Explain.
7. Explain why it matters whether belief strength and belief evaluation
scales are scored in different ways. What is unipolar scoring? What is
bipolar scoring? Which kind of scoring maximizes the correlation
between Σbiei and attitude?
8. Describe an averaging model of how beliefs combine to yield
attitude. What does the research evidence indicate about whether an
averaging model or a summative (adding) model is superior? Explain
how adding models and summative models can have different
implications for persuasive strategy.
9. Explain the idea that non–belief-based (noncognitive) elements might
independently contribute to attitude. What sort of evidence bears on
such claims? Identify a noncognitive element that improves the
predictability of attitude. Explain how such elements might be
redescribed in belief-based terms.
10. Explain how the artifactual role of belief strength scores (in
predicting attitude) has implications for the persuasive strategy of
changing belief strength. Do messages that vary in the depicted
likelihood of consequences also vary correspondingly in
persuasiveness? Describe the challenges of trying to influence attitude
by changing belief evaluations. Explain why the strategy of adding
new beliefs (as a way of changing attitudes) might require attending
to belief content (not just evaluation). Why might adding a new
positive belief not make attitudes more positive? Describe how belief
importance ratings might be useful when trying to change attitudes by
influencing the relative salience of beliefs.

Notes
1. The summative model of attitude is sometimes referred to as an
expectancy-value (EV) model of attitude. An EV model of attitude
represents attitude as a function of the products of the value of a given attribute (e.g., the attribute’s desirability) and the expectation that the
object has the attribute (e.g., belief strength). The summative model is only
one version of an EV model, however; this basic EV idea has been
formulated in various ways (e.g., Rosenberg, 1956). But the summative
model is the best studied, appears to have been the most successful
empirically, and indeed is the standard against which alternative EV
models have commonly been tested. (For some general discussions of EV
models of attitude, see Bagozzi, 1984, 1985; Eagly & Chaiken, 1993;
Kruglanski & Stroebe, 2005.)

2. This attitude model is embedded in reasoned action theory (RAT; see Chapter 6). Research concerning RAT has produced evidence concerning
the use of the summative formula to predict specifically attitudes toward
behaviors (AB), with similar results (for some review discussions, see
Albarracín, Johnson, Fishbein, & Muellerleile, 2001; Armitage & Conner,
2001; Conner & Sparks, 1996).

3. Although not discussed here, the potential complexities in assessing belief importance should not be underestimated; see Jaccard, Radecki,
Wilson, and Dittus (1995), van der Pligt et al. (2000, pp. 145–155), and
van Ittersum, Pennings, Wansink, and van Trijp (2007).

4. Relatedly, equally good correlations of Σbiei with attitude have been observed using a list of modally salient beliefs and using a smaller set of
beliefs identified by the respondent as most important (Budd, 1986;
Steadman & Rutter, 2004; van der Pligt & de Vries, 1998a; van Harreveld,
van der Pligt, & de Vries, 1999; cf. Elliott et al., 1995). As with the
previously mentioned findings concerning the predictability of attitude
from nonsalient beliefs, these results might reflect the use of one’s current
attitude as a guide to responding to whatever belief items are presented.

5. Esses, Haddock, and Zanna (1993) reported a related but slightly different finding. With individualized belief lists, attitudes toward various
social groups were equally well-predicted by the average of the attribute
evaluations (that is, Σei/n, where n is the number of beliefs) and by the
average of a multiplicative composite in which each attribute evaluation
was multiplied by the individual’s judgment of the percentage of the group
to which each attribute applies (that is, ΣeiPi/n, where Pi is the relevant
percentage).

6. Convincing evidence of a continuous-probability role for belief strength in contributing to attitude might be had by research that used
individualized belief lists but replaced the conventional belief strength
end-anchors (“likely–unlikely”) with ones more appropriate to the desired
judgment, such as “slightly likely” and “very likely.” If, with
individualized belief lists and such end-anchors, Σbiei were found to
generally be superior to Σei as a predictor of attitude, the case for
conceiving of belief strength in continuous probability terms would be
strengthened.

7. This is a curious criterion, because the goal of maximizing the predictability of attitude from Σbiei (that is, maximizing the correlation
between attitude and Σbiei) is at best an interim research goal. The goal is
understandable, because predictive accuracy provides evidence bearing on
the adequacy of the summative model. But (as intimated earlier) such
predictability does not necessarily show that the model’s implicit depiction
of the underlying psychological processes is correct (recall, for example,
that attitude can be predicted even from lists of nonsalient beliefs). Some
measure of predictability is surely a necessary condition for the model’s
adequacy, but the more important question concerns the substantive
adequacy of the model, that is, whether the model provides an accurate
account of the relationship of beliefs and attitudes.

8. This conclusion is also recommended by several studies using optimal scaling procedures, which assign scale values in such a fashion as to
maximize the resulting correlation. With these procedures, a constant is
added to each belief strength score and another is added to each belief
evaluation score; the constants are computed precisely to produce the
largest possible correlation between Σbiei and attitude. Several studies
have reported that the computed scaling constants suggest that both scales
should be bipolar (Ajzen, 1991; Holbrook, 1977; cf. Doll, Ajzen, &
Madden, 1991). The Σbiei-attitude correlations that result from optimal
scaling are not themselves good evidence for the summative model (after
all, the data have been manipulated to maximize the correlations) and are
not meant to be used that way. Instead, the result of interest is the
particular scaling constants recommended—not necessarily the specific
numerical values (as these might bounce around from study to study) but
rather what general sort of scale the constants recommend. For example, a
researcher might start with a belief strength scale scored in a unipolar
fashion, but the optimal scaling constants might be such as to transform it
into a bipolar scale. That is, optimal scaling results can give evidence about whether unipolar or bipolar scoring will maximize the correlation.

9. Notice the contrast: With bipolar belief strength scoring, it does not
matter what the respondent’s evaluation is of an attribute for which the
respondent has marked the midpoint of the belief strength scales (because
the Strength × Evaluation product for that attribute will be zero). But with
unipolar belief strength scoring, the Strength × Evaluation product will
vary depending on the respondent’s evaluation of that attribute. Hence
even if a respondent is completely uncertain about whether the object has
the attribute (and so marks the midpoint of the belief strength scales), the
respondent would nevertheless be predicted to have a relatively more
favorable attitude if the attribute were evaluated positively than if the
attribute were evaluated negatively. Scott Moore helped me see this point
clearly.
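
This contrast can be made concrete with illustrative scale codings; the particular codings below (bipolar −3 to +3 with midpoint 0, unipolar 1 to 7 with midpoint 4) are assumptions for illustration, not codings mandated by the research discussed.

```python
# A respondent marks the MIDPOINT of the belief strength scale
# ("completely uncertain whether the object has the attribute").
# Illustrative codings: bipolar -3..+3 (midpoint 0) vs. unipolar 1..7
# (midpoint 4).

bipolar_midpoint = 0
unipolar_midpoint = 4

for evaluation in (3, -3):  # attribute evaluated positively vs. negatively
    bipolar_product = bipolar_midpoint * evaluation    # 0 in both cases
    unipolar_product = unipolar_midpoint * evaluation  # 12, then -12
    print(evaluation, bipolar_product, unipolar_product)
```

With bipolar scoring the Strength × Evaluation product for an uncertain belief is zero regardless of the attribute's evaluation; with unipolar scoring the same uncertain belief still pushes the predicted attitude up or down with the evaluation.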

10. Anderson’s information integration theory is much broader than a simple averaging model (see, e.g., N. H. Anderson, 1981a, 1991). The
general notion is that there are many information integration principles that
persons employ, one of which is a weighted-averaging principle (for useful
general discussions, see Eagly & Chaiken, 1984, pp. 321–331; 1993, pp.
241–253). My purpose here is simply to introduce the idea that
nonsummative images of belief combination are possible; an
uncomplicated averaging model provides a convenient example.

11. As an indication of the hidden complexities here, consider that the summative model suggests that if a new belief is added to the set of salient
beliefs, then (assuming constant belief strength) the increase in overall
attitude resulting from the addition of a positively evaluated belief will be
the same size as the decrease in overall attitude resulting from the addition
of a negatively evaluated belief of equivalent extremity. But there is good
evidence that new pieces of positive and negative information do not
always have equal-sized effects on attitudes. In fact, negative information
often has a disproportionate impact on evaluations or decisions compared
to otherwise equivalent positive information (e.g., Hamilton & Zanna,
1972; Lutz, 1975; for reviews, see Cacioppo, Gardner, & Berntson, 1997;
Rozin & Royzman, 2001; Skowronski & Carlston, 1989).

12. There are actually some rather difficult methodological challenges here. For instance, the very assessment of beliefs may create apparent
consistency between attitudes and belief-based elements (as when existing
attitudes guide one’s responses to nonsalient belief items). Such consistency may suggest the operation of a belief-based attitude process
even where none exists. For example, an attitude might be formed in a
wholly non–belief-based way, then used to guide responses to belief items
in such a way that the attitude appears to be largely determined by those
belief elements (for discussion of such problems, see Fishbein &
Middlestadt, 1997, pp. 112–113; Herr, 1995).

13. The idea that attitudes might have multiple underlying components—
including both affective and cognitive ones—has a long history in the
study of attitudes (e.g., Rosenberg & Hovland, 1960). But (as pointed out
by Eagly et al., 1994) it took quite some time for research to explicitly take
up the question of whether the predictability of attitude can be enhanced
by including non–belief-based considerations. And although
multicomponent views of attitude commonly treat affect and cognition as
representing just two of three attitudinal bases (the third being conation or
behavioral elements, as when one’s past behavior influences one’s
attitudes through self-perception processes; see, e.g., Bem, 1972), research
has come to focus on only the affective and cognitive elements (see, e.g.,
Haddock & Zanna, 1998, p. 328n4).

14. There is good reason to think that the wording of belief elicitation
questionnaires may influence the types of beliefs that people report. For
example, some common procedures may generally elicit predominantly
instrumental-utilitarian beliefs rather than symbolic beliefs (see Ennis &
Zanna, 1993, 2000; Sutton et al., 2003). This suggests the importance of
careful questionnaire design that minimizes the chances of missing some
important class of underlying beliefs. For example, it is possible to ask
different questions to elicit affective considerations and cognitive ones
(e.g., French et al., 2005; Haddock & Zanna, 1998).

15. A contrast between experiential/affective beliefs and instrumental/cognitive beliefs can be thought of as another way of
analyzing belief content. That is, this distinction points to a substantive
variation in the beliefs underlying attitudes, and in that sense it can be seen
to be similar to elements of functional analyses of attitude (discussed in
Chapter 3). However, functional analyses offer the idea that beliefs
characteristically coalesce in substantively different motivationally
coherent packages or syndromes; by contrast, a general distinction
between affective and cognitive beliefs need not imply that an individual’s
beliefs commonly cluster together on the basis of being affective or
cognitive (but see Huskinson & Haddock, 2004; Trafimow & Sheeran, 1998).

16. Some general evidence indicates covariation between attitude change and change in the underlying bases of attitude. That is, changes in belief
strength and evaluation (or Σbiei) have been found to be accompanied by
corresponding changes in attitude (e.g., DiVesta & Merwin, 1960; Lutz,
1975; Peay, 1980). Such evidence is consistent with the model’s
suggestions that attitude change can be influenced by changes in belief
strength and belief evaluation; however, other uncertainties (e.g., the
apparent artifactual contribution of belief strength scores) make such
evidence less helpful than it might be.

17. Notably, in Hass et al.’s (1975) study, across message conditions, an energy crisis was perceived to be moderately likely; the perceived
likelihood of an energy crisis (on a 10-point scale) was 5.9 in the low-
likelihood message condition and 6.9 in the high-likelihood message
condition. This provides an illustration of the idea that when the relevant
categorical judgment is in place, smaller belief strength variations may not
matter much.

18. One way of enhancing the desirability of an attribute may be to emphasize its scarcity, because “opportunities seem more valuable to us
when they are less available” (Cialdini, 2009, p. 200). For examples of
scarcity-based phenomena, see Aggarwal, Jun, and Huh (2011), Brannon
and Brock (2001), and Eisend (2008). For reviews and general discussion,
see Brock (1968) and Lynn (1991). For some complexities, see Cialdini,
Griskevicius, Sundie, and Kenrick (2007), Gierl and Huettl (2010), and
Jung and Kellaris (2004).

19. Adding a new belief might be thought of as influencing belief strength but only in the sense that it involves changing some categorical judgment
(e.g., from “I don’t know whether the object has attribute X” to “I think
the object has attribute X”). That is, it need not be understood as
necessarily involving some gradation of subjective probability.

20. Given the previously discussed distinction between affectively oriented and cognitively oriented beliefs, one might naturally hypothesize that
attitudes would most effectively be changed by appeals that invoke the
same sorts of considerations as underlie the attitude—that affectively
oriented appeals would be more persuasive than cognitively oriented
appeals for affectively based attitudes, for instance. A systematic review of the relevant research is not in hand, but the evidence at least suggests a
rather more complicated picture (see, e.g., Clarkson, Tormala, & Rucker,
2011; Conner, Rhodes, Morris, McEachan, & Lawton, 2011; Edwards,
1990; Fabrigar & Petty, 1999; Haddock, Mio, Arnold, & Huskinson, 2008;
Ruiz & Sicilia, 2004).

21. As a complexity: Adding a new salient belief may cause some existing
belief to become less salient. The number of beliefs that can be salient is
surely limited (given that human information-processing capacity is not
unbounded). If the current set of beliefs has exhausted that capacity, then
the addition of some new salient belief will necessarily mean that some old
belief has to drop from the set of salient beliefs. Presumably in such a
circumstance, a comparison of the evaluations of the two beliefs in
question (the new salient one and the previously salient one) will, ceteris
paribus, indicate the consequences for attitude change.

22. The model, however, implicitly emphasizes message content as central to persuasive effects and does not directly speak to the roles played by
such factors as communicator credibility, message organization, receiver
personality traits, and so forth. From the model’s point of view, all such
factors only indirectly influence message-induced attitude change—
indirectly in the sense that their influence is felt only through whatever
effects they might have on belief strength, evaluation, and salience.

Chapter 5 Cognitive Dissonance Theory

General Theoretical Sketch


Elements and Relations
Dissonance
Factors Influencing the Magnitude of Dissonance
Means of Reducing Dissonance
Some Research Applications
Decision Making
Selective Exposure to Information
Induced Compliance
Hypocrisy Induction
Revisions of, and Alternatives to, Dissonance Theory
Conclusion
For Review
Notes

A number of attitude theories have been based on the idea of cognitive consistency—the idea that persons seek to maximize the internal
psychological consistency of their cognitions (beliefs, attitudes, etc.).
Cognitive inconsistency is taken to be an uncomfortable state, and hence
persons are seen as striving to avoid it (or, failing that, seeking to get rid of
it). Heider’s (1946, 1958) balance theory was perhaps the earliest effort at
developing such a consistency theory (for discussion and reviews, see
Crockett, 1982; Eagly & Chaiken, 1993, pp. 133–144). Osgood and
Tannenbaum’s (1955) congruity theory represented another variety of
consistency theory (for discussion and reviews, see Eagly & Chaiken,
1993, pp. 460–462; R. Wyer, 1974, pp. 151–185).1

But of all the efforts at articulating the general notion of cognitive consistency, the most influential and productive has been Leon Festinger’s
(1957) cognitive dissonance theory. This chapter offers first a sketch of the
general outlines of dissonance theory and then a discussion of several
areas of research application.

General Theoretical Sketch

Elements and Relations
Cognitive dissonance theory is concerned with the relations among
cognitive elements (also called cognitions). An element is any belief,
opinion, attitude, or piece of knowledge about anything—about other
persons, objects, issues, oneself, and so on.

Three possible relations might hold between any two cognitive elements.
They might be irrelevant to each other, have nothing to do with each other.
My belief that university tuition will increase next year and my favorable
opinion of Swiss chocolate are presumably irrelevant to each other. Two
cognitive elements might be consonant (consistent) with each other; they
might hang together, form a package. My belief that the Greater Chicago
Food Depository is a worthy charity and my knowing I donate money to
that organization are presumably consonant cognitions.

Finally, two cognitive elements might be dissonant (inconsistent) with each other. The careful specification of a dissonant relation is this: Two
elements are said to be in a dissonant relation if the opposite of one
element follows from the other. Thus (to use Festinger’s classic example
of a smoker) the cognition that “I smoke” and the cognition that “smoking
causes cancer” are dissonant with each other; from knowing that smoking
causes cancer, it follows that I should not smoke—but I do.2

Dissonance
When two cognitions are in a dissonant relation, the person with those two
cognitions is said to have dissonance, to experience dissonance, or to be in
a state of dissonance. Dissonance is taken to be an aversive motivational
state; persons will want to avoid experiencing dissonance, and if they do
encounter dissonance, they will attempt to reduce it.

Dissonance may vary in magnitude: one might have a lot of dissonance, a little, or a moderate amount. As the magnitude of dissonance varies, so
will the pressure to reduce it; with increasing dissonance, there will be
increasing pressure to reduce it. With small amounts of dissonance, there
may be little or no motivational pressure.

Factors Influencing the Magnitude of Dissonance

Expressed most broadly, the magnitude of dissonance experienced will be
a function of two factors. One is the relative proportions of consonant and
dissonant elements. Thus far, dissonance has been discussed as a simple
two-element affair, but usually two clusters of elements are involved. A
smoker may believe, on the one hand, that smoking reduces anxiety,
makes one appear sophisticated, and tastes good and, on the other hand,
also believe that smoking causes cancer and is expensive. There are here
two clusters of cognitions, one of elements consonant with smoking and
one of dissonant elements. Just how much dissonance this smoker
experiences will depend on the relative size of these two clusters. As the
proportion of consonant elements (to the total number of elements)
increases, less and less dissonance will be experienced, but as the cluster
of dissonant elements grows (compared with the size of the consonant
cluster), the amount of dissonance will increase.

The second factor that influences the degree of dissonance is the importance of the elements or issue. The greater importance this smoker
assigns to the expense and cancer-causing aspects of smoking, the greater
the dissonance experienced; correspondingly, the greater importance
assigned to anxiety reduction and the maintenance of a sophisticated
appearance, the less dissonance felt. If the entire question of smoking is
devalued in importance, less dissonance will be felt.
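
These two factors are sometimes summarized as a ratio of importance-weighted dissonant cognitions to all relevant cognitions; the ratio form and the particular importance weights below are an illustrative sketch, not a formula given in the text.

```python
# Illustrative ratio: magnitude of dissonance as the importance-weighted
# share of dissonant cognitions. Both the ratio form and the importance
# weights are assumptions for illustration.

def dissonance_magnitude(dissonant_weights, consonant_weights):
    """Each argument: a list of importance weights (0..1), one per cognition."""
    d, c = sum(dissonant_weights), sum(consonant_weights)
    return d / (d + c) if (d + c) > 0 else 0.0

# Smoker example: dissonant = causes cancer, is expensive;
# consonant = reduces anxiety, looks sophisticated, tastes good.
print(dissonance_magnitude([0.9, 0.3], [0.5, 0.2, 0.2]))  # about 0.571

# Devaluing the "expensive" cognition (0.3 -> 0.1) reduces dissonance:
print(dissonance_magnitude([0.9, 0.1], [0.5, 0.2, 0.2]))  # about 0.526
```

The sketch captures both claims in the text: growing the dissonant cluster (relative to the consonant one) raises the ratio, and lowering the importance assigned to a dissonant cognition lowers it.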

Means of Reducing Dissonance


There are two broad means of reducing dissonance, corresponding to the
two factors influencing the magnitude of dissonance. The first way to
reduce dissonance is by changing the relative proportions of consonant and
dissonant elements. This can be accomplished in several ways. One can
add new consonant cognitions; the smoker, for instance, might come to
believe that smoking prevents colds—a new consonant cognition added to
the consonant cluster. One can change or delete existing dissonant
cognitions; the smoker might persuade him- or herself that, say, smoking
does not really cause cancer.

The other way to reduce dissonance is by altering the importance of the issue or the elements involved. The smoker could reduce dissonance by
deciding that the expense of smoking is not that important (devaluing the
importance of that dissonant cognition), might come to think that reducing
anxiety is an important outcome (increasing the importance of a consonant
cognition), or might decide that the whole question of smoking just is not that important. (For an illustration of this means of dissonance reduction,
see Denizeau, Gosling, & Oberle, 2009.)

Some Research Applications


Cognitive dissonance theory has produced a great deal of empirical work
(for general reviews, see Brehm, 2007; Cooper, 2007; Harmon-Jones,
2002; Harmon-Jones & Mills, 1999; Stone & Fernandez, 2008a). In the
study of persuasive communication, at least four research areas are of
interest: decision making, selective exposure to information, induced
compliance, and hypocrisy induction.3

Decision Making
One application of dissonance theory concerns decision making (or choice
making). Dissonance is said to be a postdecisional phenomenon;
dissonance arises after a decision or choice has been made. When facing a
decision (in the simplest case, a choice between two alternatives), one is
said to experience conflict. But after making the choice, one will almost
inevitably experience at least some dissonance, and thus one will be faced
with the task of dissonance reduction. So the general sequence is (a)
conflict, (b) decision, (c) dissonance, and (d) dissonance reduction.

Conflict
Virtually every decision a person makes is likely to involve at least some
conflict. Rarely does one face a choice between one perfectly positive
option and one absolutely negative alternative. Usually, one chooses
between two (or more) alternatives that are neither perfectly good nor
perfectly bad—and hence there is at least some conflict, because the
choice is not without some trade-offs. Just how much conflict is
experienced by a person facing a decision will depend (at least in part) on
the initial evaluation of the alternatives. When (to take the simplest two-
option case) the two alternatives are initially evaluated similarly, the
decision maker will experience considerable conflict; two nearly equally
attractive options make for a difficult choice.

This conflict stage is the juncture at which persuasive efforts are most
obviously relevant. Ordinarily, persuasive efforts are aimed at regulating
(either increasing or decreasing) the amount of conflict experienced by
decision makers. If one’s friend is inclined toward seeing the new action-
adventure film, rather than the new romantic comedy, one can attempt to
undermine that preference and so increase the friend’s conflict (by saying
things aimed at getting the friend to have a less positive evaluation of the
action film and a more positive evaluation of the comedy), or one can
attempt to persuade the friend to follow that inclination and so reduce the
friend’s conflict (by saying things aimed at enhancing the evaluation of the
already preferred action film and at reducing further the evaluation of the
comedy).

Of course, a persuader might attempt to regulate a decision maker’s
conflict by trying to alter the evaluation of only one (not both) of the
alternatives; I might try to get you to have a more positive attitude toward
my preferred position on the persuasive issue, although I do not attack the
opposing point of view. But—perhaps not surprisingly—the research
evidence suggests that persuasive communications that only make
arguments supporting the persuader’s position (one-sided messages) are
generally not as effective as messages that also refute arguments favoring
the opposing side (refutational two-sided messages). This research (also
discussed in Chapter 11) suggests that as a rule persuaders are most likely
to successfully regulate the conflict experienced by the persuadee if they
attempt to influence the evaluation not only of their preferred alternative
but of other options as well (for a review, see O’Keefe, 1999a).

In any case, by regulating the degree of conflict experienced, the persuader
can presumably make it more likely that the persuadee will choose the
option desired by the persuader. But after the persuadee has made a choice
(whether or not the one wanted by the persuader), the persuadee will
almost inevitably face at least some dissonance—and, as will be seen, the
processes attendant to the occurrence of dissonance have important
implications for persuasion.4

Decision and Dissonance


Some dissonance is probably inevitable after a decision because in
virtually every decision, at least some aspects of the situation are dissonant
with one’s choice. Specifically, there are likely to be some undesirable
aspects to the chosen alternative and some desirable aspects to the
unchosen alternative; each of these is dissonant with the choice made.

Consider, for example, a person choosing where to eat lunch. Al’s Fresco
Restaurant offers good food and a pleasant atmosphere but is some
distance away and usually has slow service. The Bistro Cafe has so-so
food and the atmosphere isn’t much, but it’s nearby and has quick service.
No matter which restaurant is chosen, there will be some things dissonant
with the person’s choice. In choosing the Bistro, for instance, the diner
will face certain undesirable aspects of the chosen alternative (e.g., the
poor atmosphere) and certain desirable aspects of the unchosen alternative
(e.g., the good food the diner could have had at Al’s).

Factors Influencing the Degree of Dissonance


The amount of dissonance that one faces following a choice depends most
centrally on two factors. One is the similarity of the initial evaluations:
The closer the initial evaluations of the alternatives, the greater the
dissonance. Thus a choice between two nearly equally attractive sweaters
is likely to evoke more dissonance than a choice between one fairly
attractive and one fairly unattractive sweater. The other factor is the
relative importance of the decision, with more important decisions
predicted to yield more dissonance. A choice about what to eat for dinner
this evening is likely to provoke less dissonance than a choice of what
career to pursue.

These two factors represent particularized versions of the general factors
influencing the degree of dissonance experienced: the relative proportions
of consonant and dissonant elements (because when the two alternatives
are evaluated similarly, the proportions of consonant and dissonant
elements will presumably approach 50–50) and the importance of the issue
or elements (here represented as the importance of the decision).

Dissonance Reduction
One convenient way in which a decision maker can reduce the dissonance
felt following a choice is by reevaluating the alternatives. By evaluating
the chosen alternative more positively than one did before and by
evaluating the unchosen alternative less positively than before, the amount
of dissonance felt can be reduced. Because this process of re-rating the
alternatives will result in the alternatives being less similarly evaluated
than they were prior to the decision, this effect is sometimes described as
the “postdecisional spreading” of alternatives (in the sense that the
alternatives are spread further apart along the evaluative dimension than
they had been). If (as dissonance theory predicts) people experience
dissonance following decisions, then one should find dissonance reduction
in the form of this postdecisional spreading of the alternatives, and one
should find greater spreading (i.e., greater dissonance reduction) in
circumstances in which dissonance is presumably greater.

In simplified form, the typical experimental arrangement in dissonance-based
studies of choice making is one in which respondents initially give
evaluations of several objects or alternatives and are then faced with
making a choice between two of these. After making the choice,
respondents are then asked to reevaluate the alternatives, with these
rankings inspected for evidence of dissonance reduction through
postdecisional spreading of alternatives.

The research appears to indicate that one does often find the predicted
changes in evaluations following decisions (e.g., Brehm, 1956; G. L.
White & Gerard, 1981). However, the evidence is not so strong that the
magnitude of dissonance reduction is greater when the conditions for
heightened dissonance are present (as when the two alternatives are
initially rated closely, or the decision is important), because conflicting
findings have been reported, especially for the effects of decisional
importance (for discussion, see Converse & Cooper, 1979).

However, much of the research evidence concerning postdecisional
re-evaluation suffers from methodological problems. In particular, M. K.
Chen and Risen (2010; Risen & Chen, 2010) have argued that the research
procedures common in this research area can produce the appearance of
the postdecisional spreading of alternatives (attitude change) even if no
change has actually occurred. The gist of the argument is that the initial
ranking is an imperfect index (as can be seen by the fact that when faced
with a choice between two differently ranked alternatives, people do not
always choose the better-ranked one)—and this imperfection will generate
the appearance of postdecisional spreading even if there is no genuine
change in evaluations.5 (For some discussion, see M. K. Chen & Risen,
2009; Sagarin & Skowronski, 2009a,b.6)
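The logic of this critique can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not Chen and Risen's actual analysis: the normal distributions, noise level, and near-tie threshold are all illustrative assumptions. In the model, true attitudes never change, yet the measured "spreading" comes out positive, because noisy initial ratings understate the true preference gap in pairs selected for being nearly tied.

```python
# Toy Monte Carlo sketch of the measurement-artifact argument (illustrative
# assumptions throughout): attitudes are fixed, ratings are noisy readings of
# them, and choices tap the true attitude. Apparent "spreading" still emerges.
import random

random.seed(1)

def trial(noise=1.0):
    # Fixed true attitudes; ratings are noisy measures of them.
    t_a, t_b = random.gauss(0, 1), random.gauss(0, 1)
    r1_a = t_a + random.gauss(0, noise)
    r1_b = t_b + random.gauss(0, noise)
    # Keep only "hard" choices: the two initial ratings are nearly tied.
    if abs(r1_a - r1_b) > 0.1:
        return None
    # The choice reflects the true attitude, which the ratings measure imperfectly.
    chosen_is_a = t_a > t_b
    r2_a = t_a + random.gauss(0, noise)
    r2_b = t_b + random.gauss(0, noise)
    spread_1 = (r1_a - r1_b) if chosen_is_a else (r1_b - r1_a)
    spread_2 = (r2_a - r2_b) if chosen_is_a else (r2_b - r2_a)
    return spread_2 - spread_1  # apparent postdecisional spreading

results = [s for s in (trial() for _ in range(100_000)) if s is not None]
print(sum(results) / len(results))  # positive, despite zero attitude change
```

The point of the sketch is only that selection plus measurement noise can mimic dissonance reduction; as the text goes on to note, better-controlled procedures suggest the real effect exists but is smaller than early studies implied.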

But alternative experimental procedures (aimed at addressing the apparent
defects) appear to yield results similar to those previously obtained (e.g.,
Alós-Ferrer, Granić, Shi, & Wagner, 2012; Sharot, Fleming, Yu, Koster, &
Dolan, 2012), indicating that postdecisional preference shifts are in fact
real. However, when researchers control for the artifacts identified by M.
K. Chen and Risen (2010), the effects appear to be diminished or
weakened relative to those observed in earlier research (Izuma &
Murayama, 2013). In sum, the sorts of postdecisional re-evaluations
expected by dissonance theory appear to be genuine, but the effect is not as
large as one might have supposed on the basis of the initial research
findings.

The general finding of postdecisional spreading in the evaluations of the
alternatives suggests that decision maker satisfaction will “take care of
itself” (Wicklund & Brehm, 1976, p. 289). Because people are likely to
more positively value that which they have freely chosen, then if one can
induce them to choose a given alternative, they will be likely to more
positively value that alternative just because they have chosen it. For
example, if people are induced to buy a product, they will likely have a
more positive attitude toward the product just as a consequence of having
chosen to buy it. Of course, this does not mean that every purchaser is
guaranteed to end up a satisfied customer; it still may happen that (say) a
new car buyer decides that the car is a lemon and returns it to the dealer.
Nevertheless, there are forces at work that incline people to be happier
with whatever they have chosen, just because they have chosen it.

Indeed, in the service of postdecisional dissonance reduction, people may
engage in selective information seeking or processing that confirms their
choice. For example, purchasers of a given model of automobile may be
especially drawn to advertisements for the vehicle they just purchased (the
classic study is Ehrlich, Guttman, Schönbach, & Mills, 1957; for examples
of related work, see Fischer, Lea, Kastenmüller, Greitemeyer, Fischer, &
Frey, 2011; Fischer, Schulz-Hardt, & Frey, 2008; Keng & Liao, 2009;
Shani & Zeelenberg, 2007). Information supportive of one’s decision will
naturally be seen as a source of dissonance-reducing material (for more
general discussions of studies of post-decisional information preferences,
see D’Alessio & Allen, 2007; Fischer, 2011; Fischer & Greitemeyer,
2010).7

Regret
Given these postdecisional dissonance-reduction processes, persuaders
might naturally infer that once a persuadee has been induced to decide the
way the persuader wants, then the persuader’s job is done; after all, having
made the choice, the persuadee is likely to become more satisfied with it
through the ordinary processes of dissonance reduction. However, this
inference is unsound; persuaders who reason in this fashion may find their
persuasive efforts failing in the end, in part because of the occurrence of
regret (see Festinger, 1964).

When regret occurs, it arises after the decision has been made but before
dissonance has been reduced (through postdecisional spreading of
alternatives). When regret is happening, the alternatives are temporarily
evaluated more similarly than they were initially. Then, following this
regret phase (during which dissonance presumably increases), the person
moves on to dissonance reduction, with the evaluations of the alternatives
spreading farther apart (see Festinger & Walster, 1964). Regret is not
inevitable, and research is only beginning to explore the factors
influencing the arousal and resolution of regret (e.g., Keaveney, Huber, &
Herrmann, 2007; Mannetti, Pierro, & Kruglanski, 2007; Rosenzweig &
Gilovich, 2012; Zeelenberg & Pieters, 2007), but regret occurs sufficiently
commonly to be quite familiar. (Indeed, some readers will have recognized
“buyer’s remorse” in the preceding description.)

One plausible account of this regret phenomenon is that having made the
choice, the decision maker now faces the task of dissonance reduction.
Naturally, the decision maker’s attention focuses on those cognitions that
are dissonant with his or her choice—on undesirable aspects of the chosen
option and on desirable aspects of the unchosen option—perhaps in the
hope of eventually being able to minimize each. As the decision maker
focuses on undesirable aspects of the chosen alternative, that alternative
may seem (at least temporarily) less attractive than it had before; focusing
on desirable aspects of the unchosen option may make that option seem (at
least temporarily) more attractive than it had before. With the chosen
alternative becoming rated less favorably, and the unchosen alternative
becoming rated more favorably, the two alternatives naturally become
evaluated more similarly than they had been.

During this regret phase, it is even possible that the initial evaluations
become reversed, so that the initially unchosen alternative becomes rated
more favorably than the chosen option. In such a circumstance, the
decision maker may back out of the original choice. This outcome
becomes more likely when the two alternatives are initially evaluated
rather similarly because in such a circumstance, comparatively small
swings in absolute evaluations can make for reversals in the relative
evaluations of the alternatives.

There is a moral here for persuaders concerning the importance of
follow-up persuasive efforts. It can be too easy for a persuader to assume that the
job is done when the persuadee has been induced to choose in the way the
persuader wants, but the possibility of regret, and particularly the
possibility that the decision maker’s mind may change, should make the
persuader realize that simply inducing the initial decision may not be
enough.

A fitting example is provided by an investigation of automobile buying. In
purchases of automobiles from a dealer, ordinarily some time elapses
between the buyer’s agreeing to buy the car and the delivery of the car to
the buyer. It sometimes happens that during this interval, the would-be
purchaser changes his or her mind and backs out of the decision to buy the
car. (There are likely any number of reasons why this happens, but it
should be easy enough to imagine that at least some of the time regret is at
work.) In this investigation, during the interval between decision and
delivery, some automobile purchasers received two follow-up telephone
calls from the seller; the calls emphasized the desirable aspects of the
automobile that had been chosen, reassured the purchaser of the wisdom of
the decision, and (one might say) encouraged the purchaser to move past
the regret phase and on to the stage of dissonance reduction. Other
purchasers received no such calls. Significantly fewer of the purchasers
receiving the follow-up calls backed out of their decisions than did those
not receiving the call (the back-out rate was cut in half), underscoring the
potential importance of follow-up persuasive efforts (Donnelly &
Ivancevich, 1970). (For another illustration of post-purchase follow-up
messages, with complexities, see Hunt, 1970.)

Selective Exposure to Information


A second area of dissonance theory research that is relevant to persuasion
concerns persons’ propensities to expose themselves selectively to
information. Below, the dissonance theory analysis of information
exposure is presented, followed by a discussion of the relevant research.

The Dissonance Theory Analysis


If dissonance is an aversive motivational state, then naturally persons will
want to do what they can to avoid dissonance-arousing situations and will
prefer instead to be in circumstances that do not arouse dissonance (or that
even increase the consonance of their cognitions). This general idea finds
specific expression in the form of dissonance theory’s selective exposure
hypothesis. Broadly put, this hypothesis has it that persons will prefer to be
exposed to information that is supportive of (consonant with) their current
beliefs rather than to nonsupportive information (which presumably could
arouse dissonance).8

This hypothesis applies to information exposure in any circumstance, but it
perhaps becomes especially pointed in the context of exposure to mass
media political information. If persons generally seek out only media
sources that confirm or reinforce their prior political beliefs (and,
correlatively, avoid exposure to nonsupportive or inconsistent
information), a polarized electorate might be the result.

More generally, of course, the selective exposure hypothesis suggests that
persuaders (through the mass media or otherwise) may need to be
concerned about getting receivers to attend to their messages. If, as
dissonance theory suggests, there is a predisposition to avoid
nonsupportive information, then persuaders may face the task of somehow
overcoming that obstacle so that their communications can have a chance
to persuade.

The Research Evidence


In the typical experimental research paradigm for the investigation of
selective exposure, respondents’ attitudes on a given issue are assessed.
Then respondents are given the choice of seeing (reading, hearing) one of
several communications on the issue. These communications are described
in a way that makes clear what position on the issue is advocated by each,
and both supportive and nonsupportive messages are included. The
respondent is then asked to select one of the messages. Support for the
selective exposure hypothesis consists of respondents’ preferring to see
supportive rather than nonsupportive communications. (For a discussion of
methodological issues in such experiments, see Feldman, Stroud, Bimber,
& Wojcieszak, 2013.)

There is now considerable accumulated evidence indicating a general
preference for supportive over nonsupportive information (for some
reviews, see Hart et al., 2009; Smith, Fabrigar, & Norris, 2008).9 That is,
people do commonly prefer to be exposed to information that is congenial
with their existing beliefs and attitudes rather than to uncongenial information.
Such a preference is seen across a variety of concrete circumstances,
including preferences for news media outlets (e.g., Iyengar & Hahn, 2009;
Stroud, 2008).

However, the strength of this preference can vary depending on a number
of factors. A great many different such factors have been explored, the
research evidence is sometimes sparse, and there is not yet an entirely clear
picture of how all these might fit into some larger structure (for some
reviews, see Cotton, 1985; Frey, 1986; Hart et al., 2009; Smith, Fabrigar,
& Norris, 2008).10 But as an example: The relevance of an issue to a
person’s core values influences the strength of the preference for
supportive information; the preference is stronger when the issue concerns
core values (e.g., issues such as abortion or euthanasia) than when such
values are not relevant (for a review, see Hart et al., 2009). Even in the
latter circumstances, however, a (relatively weak) preference for
supportive information is apparent.

But research has also identified other, potentially competing, influences on
information exposure—influences that can be strong enough to overcome
the usual preference for supportive information. One such influence is the
perceived utility of the information, with persons preferring information
with greater perceived usefulness even if it is nonsupportive. Consider, for
example, an investigation in which undergraduates initially chose to take
either a multiple-choice exam or an essay exam. The students were then
asked for their preferences among reading several articles, some
supporting the decision and some obviously nonsupportive. For instance,
for a student who chose the multiple-choice exam, the nonsupportive
articles were described as arguing that students who prefer multiple-choice
tests would actually be likely to do better on essay exams. Contrary to the
selective exposure hypothesis—but not surprisingly—most of the students
preferred articles advocating a change from the type of exam they had
chosen (Rosen, 1961). Obviously, in this study, the nonsupportive
communication offered information that might be of substantial usefulness
to the students, and the perceived utility of the information could well have
outweighed any preference for supportive information (for a review
indicating the importance of information utility as a moderator of selective
exposure, see Hart et al., 2009).11

Fairness norms may also play a role in information exposure. In certain
social settings, there is an emphasis on obtaining the greatest amount of
information possible, being fair to all sides, and being open-minded until
all the evidence is in. One such setting is the trial. For example, in one
study, participants received brief synopses of a murder case and then
rendered a judgment about the guilt of the defendant. They were
subsequently offered a chance of seeing either confirming or
disconfirming information. Participants showed a general preference for
nonsupportive information, perhaps because the trial setting was one that
made salient the norms of fairness and openness to evidence (Sears, 1965).

Summary
All told, there is a general preference for supportive information. The
strength of this preference may vary, and the preference can be overridden
by other considerations (e.g., information utility). But dissonance theory’s
expectations about general information preferences have certainly been
confirmed. For that reason, persuaders who hope to encourage attention to
their messages will want to be attentive to the factors influencing
information exposure, as these may suggest avenues by which such
attention can be sought (see, e.g., Flay, McFall, Burton, Cook, &
Warnecke, 1993).

However, the idea of selective exposure comprises two distinguishable
processes: avoidance of dissonant information (“selective avoidance”) and
attraction to consonant information (“selective approach”). Either (or both)
might be responsible for the appearance of selective exposure effects. For
example, in an experimental situation in which people choose between
consonant and dissonant information, the choice of consonant information
might be driven by selective avoidance (motivation to avoid the dissonant
information), selective approach (desire to see the consonant information),
or some combination of these.

There is not yet much research evidence on the matter, but there is some
reason to suspect that selective avoidance effects may be weaker than
selective approach effects (e.g., Garrett, 2009; Garrett & Stroud, 2014; see
also Cotton, 1985, p. 26; Frey, 1986, pp. 69–70). That is, people may
actively look for confirming information, but not necessarily avoid
disconfirming information. In any case, one ought not assume that
selective approach and selective avoidance are equally powerful processes.
So, for example, even though an information environment such as afforded
by the Internet may enable selective approach, and although persons do
generally have a preference for supportive information, people may
nevertheless not actively avoid discrepant information—and under the
right circumstances (e.g., high perceived information utility, a setting that
prioritizes fairness) might even seek out nonsupportive information (for
some relevant work, see Valentino, Banks, Hutchings, & Davis, 2009;
Wojcieszak & Mutz, 2009).

Induced Compliance
Perhaps the greatest amount of dissonance research concerns what is
commonly called induced compliance. Induced compliance is said to occur
when an individual is induced to act in a way discrepant from his or her
beliefs and attitudes.

One special case of induced compliance is counterattitudinal advocacy,
which is said to occur when persons are led to advocate some viewpoint
opposed to their position. Most of the research on induced compliance
concerns counterattitudinal advocacy because that circumstance has
proved a convenient focus for study. (For a detailed discussion of induced
compliance research, see Eagly & Chaiken, 1993, pp. 505–521.)

Incentive and Dissonance in Induced-Compliance Situations
Obviously, induced compliance situations have the potential to arouse
dissonance; after all, a person is acting in a way discrepant from his or her
beliefs. Dissonance theory suggests that the amount of dissonance
experienced in an induced compliance situation will depend centrally on
the amount of incentive offered to the person to engage in the discrepant
action. Any incentive offered for performing the counterattitudinal action
(e.g., some promised reward or threatened punishment) is consistent
(consonant) with engaging in the action. Thus someone who performs a
counterattitudinal action with large incentives for doing so will experience
relatively little dissonance.

To use Festinger’s (1957) example: Suppose you are offered a million
dollars to publicly state that you like reading comic books (assume, for the
purpose of the example, that you find this offer believable and that you do
not like reading comic books). Presumably, you would accept the money
and engage in the counterattitudinal advocacy. You might experience some
small amount of dissonance (from saying one thing and believing another)
—but the million dollars is an important element that is consonant with
your having performed the action, and hence overall there is little
dissonance experienced.

But if the incentive had been smaller (less money offered), then the
amount of dissonance experienced would have been greater. The greatest
possible dissonance would occur if the incentive were only just enough to
induce compliance. Suppose that you would not have agreed to engage in
the counterattitudinal advocacy for anything less than $100. In that case,
an offer of exactly $100—the minimum needed to induce compliance—
would have produced the maximum possible dissonance. Any incentive
larger than that minimum would only have reduced the amount of
dissonance experienced.

When substantial dissonance is created through induced compliance,
pressure is created to reduce that dissonance. One easy route to dissonance
reduction is to bring one’s private beliefs into line with one’s behavior. For
example, if you declared that you liked reading comic books when offered
only $100 (your minimum price) for doing so, you would experience
considerable dissonance and could easily reduce it by deciding that you
think reading comic books isn’t quite as bad as you thought.

What happens if the incentive offered is insufficient to induce compliance?
That is, what are the consequences if a person is offered some incentive for
engaging in a counterattitudinal action, and the person does not comply?
To continue the example, suppose that you had been offered only $10 to
say that you like to read comic books. You would decline the offer,
thereby losing the possibility of getting the $10—and hence you would
experience some dissonance over that (“I could have had that $10”). But
you would not experience much dissonance, and certainly not as much as
if you had turned down an offer of $90. Faced with the dissonance of
having turned down $90, one natural avenue to dissonance reduction
would be to strengthen one’s initial negative attitude (“I was right to turn
down that $90, because reading comic books really is pretty bad”).

So the relationship between the amount of incentive offered and the
amount of dissonance experienced is depicted by dissonance theory as
something like an inverted V. With increasing incentive, there is
increasing dissonance—up to the point at which the incentive is
sufficiently large to induce compliance. But beyond that point, increasing
incentive produces decreasing dissonance, such that with very large
incentives, there is little or no dissonance experienced from engaging in
the counterattitudinal action. Thus, so long as the amount of incentive is
sufficient to induce compliance, additional incentive will make it less
likely that the person will come to have more favorable attitudes toward
the position being advocated.
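This inverted-V relationship can be sketched as a simple piecewise function. Everything in the sketch below is an illustrative assumption for the running example (the linear shape, the $100 compliance threshold, the function name); dissonance theory itself makes no quantitative claim of this kind.

```python
# Hypothetical sketch of the inverted-V relation: dissonance rises with
# incentive up to the minimum needed to induce compliance, then falls as
# extra incentive adds consonant justification. The linear shape and the
# $100 threshold are illustrative assumptions, not part of the theory.

def dissonance_from_incentive(incentive, minimum_to_comply=100.0):
    if incentive < minimum_to_comply:
        # Offer refused: dissonance grows with the size of the forgone incentive
        # ("I could have had that $90").
        return incentive / minimum_to_comply
    # Offer accepted: dissonance peaks at the threshold, then declines as the
    # larger incentive justifies the counterattitudinal act.
    return minimum_to_comply / incentive

for offer in (10, 90, 100, 1_000, 1_000_000):
    print(offer, round(dissonance_from_incentive(offer), 3))
```

Running the sketch reproduces the pattern in the text: a refused $10 offer yields little dissonance, a refused $90 offer considerably more, the just-sufficient $100 the maximum, and the million-dollar offer almost none.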

In an archetypal experiment, Festinger and Carlsmith (1959) obtained
striking evidence for this analysis. In this study, participants performed an
exceedingly dull and tedious task. At the conclusion of the task, they were
asked to tell a student who was waiting to participate in the experiment
(the student was a confederate of the experimenter) that the task was
enjoyable and interesting. As incentive for performing this
counterattitudinal behavior, participants were offered money; half were
offered $1 (low incentive), and half were offered $20 (high incentive).
After engaging in the counterattitudinal advocacy, participants’ attitudes
toward the task were assessed.

Festinger and Carlsmith found, consistent with dissonance theory’s
predictions, that those receiving $1 came to think that the task was
significantly more enjoyable than did those who complied for $20. Those
who complied under the influence of a large incentive ($20) presumably
experienced less dissonance from engaging in the counterattitudinal act
(because they had the $20 that was consonant with performing the act)—
and so had little need for attitude change. By contrast, participants
receiving the small incentive ($1) presumably experienced more
dissonance and hence had more motivation to change their attitudes to
reduce dissonance; they reduced their dissonance by coming to have a
more favorable attitude toward the dull task.

Subsequent investigations provided additional confirming evidence (for a
classic collection of studies on induced compliance, see Elms, 1969). For
example, E. Aronson and Carlsmith (1963) found that children prohibited
from playing with an attractive toy by a mild threat (of punishment for
disobedience) subsequently found the toy less attractive than did children
prohibited by a severe threat. That is, those who engaged in the
counterattitudinal action of avoiding the toy when given only mild incentive to do
so apparently experienced greater dissonance (than did those who avoided
the toy when given strong incentives to do so) and hence displayed greater
underlying attitude change. There have been relatively fewer studies of
circumstances in which the incentives offered are insufficient to induce
compliance, but this evidence is also generally consistent with dissonance
theory predictions. For instance, Darley and Cooper (1972) found that
persons who were offered insufficient incentives (to engage in
counterattitudinal advocacy) were inclined to strengthen their initial
attitudes, and—as expected from dissonance theory—greater strengthening
occurred with larger incentives.

Counterattitudinal-Advocacy–Based Interventions
The potential utility of induced-compliance processes as a basis for
attitude change is nicely illustrated by counterattitudinal-advocacy
interventions. In these interventions, participants are led to engage in
counterattitudinal advocacy (under conditions of minimal incentive) as a
means of producing attitude change.

In particular, such interventions have formed the basis of an effective
eating disorder prevention program. As background: Among young
women, one source of eating disorders such as bulimia is internalization of
an excessively thin ideal body image. Interventions have been designed in
which at-risk women (ones with excessively elevated body image
concerns) engage, voluntarily, in what amounts to counterattitudinal
advocacy, in which they argue against the thin ideal (with this concretized
in various ways, such as role-playing exercises in which they attempt to
dissuade a friend from pursuing the thin ideal). A number of studies have
found that this intervention significantly reduces various risk factors for
eating disorders (e.g., degree of thin-ideal internalization, body
dissatisfaction, and bulimic symptoms); for details and discussion, see
Becker, Smith, and Ciao (2006), Perez, Becker, and Ramirez (2010),
Roehrig, Thompson, Brannick, and van den Berg (2006), Stice, Chase,
Stormer, and Appel (2001), Stice, Marti, Spoor, Presnell, and Shaw
(2008), and Stice, Shaw, Becker, and Rohde (2008).

Similar counterattitudinal advocacy interventions have been explored in
other contexts such as prejudice reduction (Eisenstadt, Leippe, Rivers, &
Stambush, 2003; Heitland & Bohner, 2010) and attitudes toward online
gaming (Wan & Chiou, 2010). The general idea is the same: Persons who,
with minimal incentive, voluntarily engage in counterattitudinal advocacy
will emerge with attitudes more favorable to the views they have just
advocated.

The “Low, Low Price” Offer
Another example of an induced compliance process is provided by the
familiar marketing ploy of the “low, low price” offer. This offer is
sometimes cast as a straightforward lower price (“fifty cents off”),
sometimes as “two for the price of one” (“buy one get one free,” “three for
the price of two,” etc.). The central idea is that a lower price is offered to
the consumer, making purchase more likely.

Now imagine a situation in which a particular consumer is faced with
competing brands of soap. This consumer does not have an especially
positive impression of Brand A—it’s not the consumer’s usual brand—but
Brand A is running a really good low-price special (“three bars for the
price of one”). From a dissonance theory perspective, this lower price
represents an increased incentive for the consumer to purchase Brand A.
As the deal gets better and better—that is, as the price gets lower and
lower—there is more and more incentive to comply (incentive to
purchase). For example, there is more incentive to comply when the deal is
“three for the price of one” than when the deal is “two for the price of
one.”

The key insight offered by dissonance theory here is this: The greater the
incentive to comply, the less dissonance created by the purchase—and
hence less chance for favorable attitude change toward the brand. This
consumer might buy Brand A this time (because the price is so low), but
the consumer’s underlying unfavorable attitude toward Brand A is not
likely to change—precisely because the incentive to comply was so great.
So while the “low, low price” offer might boost sales for a while, it can
also undermine the development of more positive attitudes toward the
brand.

An illustration of these processes was offered in a set of five field
experiments concerning the effects of low introductory selling prices.
House brands of common household products (e.g., aluminum foil,
toothpaste, light bulbs) were introduced in various stores in a chain of
discount houses. In some of the stores, the brands were introduced at the
regular price, whereas at other stores, the brands were introduced with a
low introductory price offer for a short period (before the price increased
to the regular price). As one might expect, when the low-price offer was in
effect, sales were higher at the stores offering the lower prices. But when
prices returned to normal, the subsequent sales were greater at the stores
that had the initial higher prices (Doob, Carlsmith, Freedman, Landauer, &
Tom, 1969). Introducing these products at low introductory prices proved
to be harmful to long-run sales, presumably because there was relatively
little brand loyalty established by the low introductory selling price. Thus
the greater incentive created by the lower price apparently prevented the
development of sufficiently positive attitudes toward the brand.

One should not conclude from this that the low-price offer is a foolish
marketing stratagem that should never be used. The point is that this
marketing technique can set in motion forces opposed to the development
of positive attitudes toward the brand and that these forces are greater as
the incentive becomes greater (as the deal gets better). But some low-price
offers are better than others (from the standpoint of creating favorable attitude
change): A low-price offer that is only just barely good enough to induce
purchase—an offer that provides just enough incentive to induce
compliance—will create the maximum possible dissonance (and so, a
marketer might hope, maximum favorable attitude change toward the
product). Low-price offers may also be useful as strategies for introducing
new brands; the marketer’s plan is that the low price would induce initial
purchase and that this exposure to the brand’s intrinsic positive
characteristics will create a positive attitude toward the brand. Of course, if
the brand does not have sufficiently great intrinsic appeal (as was likely
with the house brands studied by Doob et al., 1969), then using low
introductory prices to induce trial will not successfully create underlying
positive attitudes toward the brand. (For indications of the complexity of
the effects of price promotion on attitudes, see DelVecchio, Henard, &
Freling, 2006; Raghubir & Corfman, 1999; Yi & Yoo, 2011.)

Limiting Conditions
Researchers have not always obtained the induced compliance effects
predicted by dissonance theory. Two important limiting conditions have
been identified. First, the predicted dissonance effects seem to occur only
when the participants feel that they had a choice about whether to comply
(e.g., about whether to perform the advocacy). That is, freedom of choice
seems to be a necessary condition for the appearance of dissonance effects
(the classic work on this subject is Linder, Cooper, & Jones, 1967; for a
relevant review, see Preiss & Allen, 1998). Thus one can expect that
inducing counterattitudinal action with minimal incentive will produce
substantial dissonance (and corresponding favorable attitude change) only
when the person freely chooses to engage in the counterattitudinal
behavior.12

Second, the predicted dissonance effects are obtained only when there is
no obvious alternative cause to which the feelings of dissonance can be
attributed. Attributional processes are the (often nonconscious) methods by
which people arrive at explanations for their feelings. If people can
attribute their dissonance feelings to some cause other than their
counterattitudinal behavior, the usual dissonance effects will not be
observed. For example, if people take a pill (actually a placebo) before
engaging in counterattitudinal advocacy and are told the pill will probably
make them feel tense or anxious, then counterattitudinal advocacy does not
produce the usual changes in attitude (because people attribute their
discomfort to the pill, not to the counterattitudinal action; Zanna &
Cooper, 1974; for related work, see J. Cooper, 1998, Study 1; Fried &
Aronson, 1995; Joule & Martinie, 2008).

Summary
Dissonance theory’s expectations about the effects of incentive for
counterattitudinal action on attitude change have been confirmed in broad
outline—although not without the discovery of unanticipated limiting
conditions. When a person freely chooses to engage in counterattitudinal
action (without an apparent alternative account of the resulting feelings),
increasing incentive for such action leads to lessened pressure for making
one’s beliefs and attitudes consistent with the counterattitudinal act. Hence
a persuader seeking long-term behavioral change (by means of underlying
attitude change) ought not to create intense pressure to engage in the
counterattitudinal behavior; rather, the persuader should seek to offer only
just enough incentive to induce compliance and let dissonance reduction
processes encourage subsequent attitude change.13

Consider, as an example, marketing contests in which consumers are
invited to submit a slogan or ad for a product or to write an essay
explaining why they like the product, with prizes (cash or goods) to be
received by selected entries. When the behavior is counterattitudinal (“I
don’t really like the product, I just want to try to win the prize”), larger
prizes will likely minimize the development of more favorable attitudes
toward the advocated product (compared with smaller prizes).

Or consider some social influence tasks commonly faced by parents. In
hoping to encourage the young child not to play with the expensive
electronic equipment, parents ought to provide only just enough
punishment to induce compliance; excessive punishment might produce
short-term obedience but not underlying change (e.g., when the parents are
present, the child will not play with the equipment—but the child will still
want to, and when the back is turned …). Or in trying to encourage
children to do their homework, parents ought to think carefully about
offering extremely large rewards for compliance; such rewards can
undermine the development of positive attitudes toward homework
(whereas a minimal reward can induce immediate compliance while also
promoting the development of positive attitudes). All these examples
illustrate the potential application of the general principle that smaller
incentives for freely chosen counterattitudinal behavior are more likely
than larger incentives to produce underlying favorable attitudes toward
that behavior.

Hypocrisy Induction

Hypocrisy as a Means of Influencing Behavior
Sometimes a persuader’s task is not so much to encourage people to have
the desired attitudes as it is to encourage people to act on existing attitudes.
For example, people commonly express positive attitudes toward
recycling, natural resource conservation, condom use, and so forth, yet
often fail to act accordingly.

Such inconsistencies might be exploited by persuaders, however, as
suggested by dissonance research on hypocrisy induction. The basic idea is
that calling attention to the inconsistency of a person’s attitudes and
actions—that is, the person’s hypocrisy—can arouse dissonance, which
then is reduced through behavioral change (altering the behavior to make it
consistent with the existing attitude).14

For example, in a study of safer-sex practices, Stone et al. (1994) varied
whether participants engaged in public proattitudinal advocacy about safe
sex and varied whether they were made mindful of their past unsafe
practices (by listing circumstances surrounding their past failures to use
condoms). The combination of advocacy and mindfulness (the hypocrisy
condition) was expected to induce greater dissonance—and so greater
subsequent behavioral consistency—than either treatment alone (or neither
treatment). Consistent with this expectation, hypocrisy-condition
participants (compared with those in other conditions) were more likely to
buy condoms (and bought more condoms on average) at the end of the
experiment. That is, faced with the reality of their inconsistent actions,
these persons reduced their dissonance by bringing their behaviors in line
with their safer-sex attitudes. (For other examples and relevant discussion,
see Dickerson, Thibodeau, Aronson, & Miller, 1992; Fointiat, 2004; Freijy
& Kothe, 2013; Fried & Aronson, 1995; Hing, Li, & Zanna, 2002; Stone &
Fernandez, 2011; Stone, Wiegand, Cooper, & Aronson, 1997.)

Hypocrisy Induction Mechanisms
The specific interventions or treatments that might induce hypocrisy have
not yet been carefully distinguished or explored. Most hypocrisy induction
studies to date have employed structured proattitudinal advocacy exercises
of the sort used by Stone et al. (1994), but presumably the underlying
mechanism involves the salience of attitude-behavior inconsistency. The
general idea is that inconsistencies between beliefs and behavior can, if
made sufficiently salient, lead individuals to seek consistency, such as by
changing their actions to accord with their attitudes. Having persons
engage in proattitudinal advocacy is one possible way, but surely not the
only possible means, of enhancing the salience of an existing
inconsistency. For instance, in the right circumstances, a simple reminder
of one’s attitudinal commitments might be sufficient.

Consider, for example, Aitken, McMahon, Wearing, and Finlayson’s
(1994) research concerning water conservation. Households given
feedback about their water consumption, combined with a reminder of
their previously expressed belief in their responsibility to conserve water,
significantly reduced their consumption. Feedback alone was useful in
reducing consumption but not as effective as the combination of feedback
and the (presumably hypocrisy-inducing) reminder. (For a similar
intervention, with similar results, concerning electricity conservation, see
Kantola, Syme, & Campbell, 1984.)

To date, it appears that successful hypocrisy induction treatments involve a
combination of two key elements: (a) ensuring the salience of the relevant
attitude (e.g., through proattitudinal advocacy or through being explicitly
reminded of one’s commitment to the attitude) and (b) ensuring the
salience of past failures to act in ways consistent with that attitude (e.g., by
having the person recall such failures or by giving feedback indicating
such failures). When only one of these elements is present, hypocrisy
effects are weaker or nonexistent (Aitken et al., 1994; E. Aronson, Fried,
& Stone, 1991; Dickerson et al., 1992; Kantola et al., 1984; Stone et al.,
1994; Stone, Wiegand, Cooper, & Aronson, 1997, Experiment 1). It will
plainly be useful to have some clarification of alternative means of
implementing these two elements, identification of circumstances in which
one or another form of implementation is more powerful, and so forth.
(For some discussion focused specifically on proattitudinal advocacy
mechanisms, see Stone, 2012; Stone & Fernandez, 2008b; Stone &
Focella, 2011.)

Backfire Effects
It might appear straightforward enough to use hypocrisy as a means of
inducing behavioral change, but it is important to consider that, faced with
evidence of inconsistency between attitudes and actions, people might
change their attitudes rather than their behaviors. Fried (1998) had
participants engage in public advocacy about the importance of recycling,
under one of three conditions varying the salience of past inconsistent
behavior. Some participants listed their past recycling failures
anonymously (as in previous hypocrisy induction manipulations), some
listed their past failures in ways that permitted them to be personally
identified, and some did not list past failures (the no-salience condition).
Persons in the anonymous-salience condition exhibited the usual
behavioral effects of hypocrisy (e.g., they pledged larger amounts of
money to a recycling fund than did persons in the no-salience condition),
but persons in the identifiable-salience condition did not. These persons,
instead of changing behaviors to become consistent with their prorecycling
attitudes, changed their attitudes to become consistent with their recycling
failures—specifically, they displayed a reduced belief in the importance of
recycling.

It is not yet clear exactly how to explain such reversal of effects, how
general such outcomes are, the conditions under which they are likely to
occur (perhaps, say, with relatively unimportant attitudes), and so forth.
But persuaders will certainly want to take note of the potential dangers of
hypocrisy induction as an influence mechanism. As a means of changing a
person’s behavior, pointing out that the person’s conduct is inconsistent
with the person’s professed beliefs might lead to the desired behavioral
change—or might lead to belief revision (and so backfire on the
persuader).

One factor that might plausibly encourage such backfire effects is
self-efficacy, people’s perceived ability to perform the behavior (perceived
behavioral control, in the terminology of reasoned action theory as
discussed in Chapter 6). If people think that behavior change is unavailable
as a method to reduce dissonance, they may turn to attitude change instead.
Thus one likely limiting condition on the effectiveness of hypocrisy
induction is that the level of self-efficacy (perceived behavioral control) be
sufficiently high. In fact, if this limiting condition is not met, hypocrisy
induction might well produce boomerang attitude change, that is, attitude
change in a direction opposite that wanted by the persuader.15

This exemplifies a general point about arousing dissonance as a means of
influence. The key to the successful use of dissonance arousal as an
influence strategy is to arouse dissonance and then to shut off all the
possible modes of dissonance reduction except the desired one. (For some
classic discussion of this idea, see Abelson, 1968; E. Aronson, 1968, pp.
14–17. For discussion of this idea in the context of hypocrisy induction
specifically, see Fried, 1998.) When hypocrisy induction arouses
dissonance, but the only avenue to dissonance reduction is attitude change
(not behavioral change), then naturally that’s the route people will choose.

Revisions of, and Alternatives to, Dissonance Theory
A number of revisions to dissonance theory have been suggested, and
several competing explanations have also been proposed. These various
alternative possibilities are too numerous and individually complex (e.g.,
some are focused specifically on explaining induced compliance effects,
whereas others offer broader reinterpretations of dissonance work) to be
easily compared here. But by way of illustration, it may be useful to
discuss one facet that is common to a number of the alternatives, namely,
an emphasis on the centrality of the concept of self (self-concept,
self-identity) to dissonance phenomena.

For example, E. Aronson (1992, 1999) has suggested revising dissonance
theory to specify that dissonance arises most plainly from inconsistencies
that distinctly involve the self. That is, “dissonance is greatest and clearest
when it involves not just any two cognitions but, rather, a cognition about
the self and a piece of our behavior that violates that self-concept” (E.
Aronson, 1992, p. 305).

Steele’s self-affirmation theory, offered more as a competitor to
dissonance theory, suggests that the key motivating force behind
dissonance phenomena is a desire for self-integrity (Steele, 1988; for
reviews, see J. Aronson, Cohen, & Nail, 1999; Sherman & Cohen, 2006).
The various attitudinal and behavioral changes attendant to dissonance are
argued to reflect the desire to maintain an image of the self as “adaptively
and morally adequate, that is, as competent, good, coherent, unitary,
stable, capable of free choice, capable of controlling important outcomes,
and so on” (Steele, 1988, p. 262).

Given the apparent centrality of the self to dissonance phenomena, perhaps
it is unsurprising that several commentators have pointed to the close
conceptual connections between dissonance and guilt (e.g., Baumeister,
Stillwell, & Heatherton, 1995; Kenworthy, Miller, Collins, Read, &
Earleywine, 2011; Klass, 1978; O’Keefe, 2000, pp. 85–88; Stice, 1992).
Guilt paradigmatically arises from conduct that is inconsistent with
self-standards, and it commonly motivates actions aimed at restoring a sense of
integrity and worth. Some dissonance phenomena (e.g., induced
compliance) may be more amenable to guilt-based analyses than others
(e.g., selective exposure; cf. Kenworthy et al., 2011), but continuing
attention to the relationship of dissonance and guilt seems appropriate.16

It remains to be seen how successful these and other alternatives will prove
to be. (For examples and discussion of various approaches, see Beauvois
& Joule, 1999; J. Cooper, 2007; Eagly & Chaiken, 1993, pp. 505–552;
Harmon-Jones, Amodio, & Harmon-Jones, 2010; Nail, Misak, & Davis,
2004; Stone & Cooper, 2001; Stone & Fernandez, 2008a; Van Overwalle
& Jordens, 2002.) The general question is the degree to which a given
framework can successfully encompass the variety of findings currently
housed within dissonance theory, while also pointing to new phenomena
recommending distinctive explanation. But no matter the particulars of the
resolution of such issues, it is plain that dissonance-related phenomena
continue to provide rich sources of theoretical and empirical development.
For students of persuasion, these various alternatives bear watching
because of the possibility that these new frameworks will shed additional
light on processes of social influence.

Conclusion
Dissonance theory does not offer a systematic theory of persuasion (and
was not intended to). But dissonance theory has served as a fruitful source
of ideas bearing on social influence processes and has stimulated
substantial relevant research. To be sure, unanticipated complexities have
emerged (as in the discovery of limiting conditions on induced compliance
effects or the phenomenon of postdecisional regret). But cognitive
dissonance theory has yielded a number of useful and interesting findings
bearing on processes of persuasion.

For Review
1. Explain the general idea of cognitive consistency. What is a cognitive
element (cognition)? What are the possible relationships between two
cognitions? Explain how two cognitions can be irrelevant to each
other, consistent with each other, or inconsistent with each other.
When are two cognitions said to be in a dissonant relationship?
2. What are the properties of dissonance? What sort of state is it? Can
dissonance vary in magnitude? What factors influence the degree of
dissonance experienced? Explain how the relative proportion of
consonant and dissonant elements influences dissonance. Explain
how the importance of the elements and the issue influence
dissonance. Describe and explain two basic ways of reducing
dissonance.
3. Explain how choice (decision making) inevitably arouses dissonance.
Is dissonance a predecisional or postdecisional state? What state is a
decision maker said to be in before having made the decision? What
state is a decision maker said to be in after having made the decision?
Identify two factors that influence the amount of postdecisional
dissonance. How can dissonance be reduced following a decision?
What is postdecisional spreading of alternatives? Has research
commonly detected postdecisional spreading of alternatives?
Describe how selective information seeking or processing can reduce
postdecisional dissonance. How is regret manifest following a
decision? Does regret precede or follow dissonance reduction?
Explain how regret can lead to a reversal of a decision. Describe the
function of follow-up persuasive efforts in the context of
postdecisional processes.
4. What is the selective exposure hypothesis? Explain how the
hypothesis reflects the main tenets of dissonance theory. Describe the
usual research design for studying selective exposure. In such
designs, what sort of result represents evidence of selective exposure?
Is there evidence of a general preference for supportive information?
Is this a strong preference? Explain how the strength of the preference
for supportive information is related to the relevance of the issue to
one’s core values. What other factors influence information
exposure? Explain how perceived information utility and fairness
norms can influence information exposure. What is the distinction
between selective avoidance effects and selective approach effects?
Which appears to be stronger?
5. What is induced compliance? What is counterattitudinal advocacy?
Explain the dissonance theory view of induced compliance situations.
What is the key influence on the amount of dissonance experienced in
such situations? Describe the relationship between incentive and
dissonance in such situations. Explain how counterattitudinal
advocacy interventions can be a way of changing attitudes. Explain,
from a dissonance perspective, the effects of low-price offers for
consumer goods. From the marketer’s point of view, what is the ideal
amount of incentive to offer? Explain, from a dissonance perspective,
the operation of promotions that invite consumers to send in essays
explaining why they like the product (or to send in advertisements,
etc.), in return for being entered in a prize drawing. Identify two
limiting conditions on the occurrence of the predicted dissonance
effects in induced compliance situations. How is freedom of choice
such a condition? How is the lack of an apparent alternative cause
(for the feelings of dissonance) such a condition?
6. What is hypocrisy induction? Explain how hypocrisy induction can
lead to behavioral change. Identify a common persuasive situation in
which hypocrisy induction might be useful to a persuader. What are
the two key elements of successful hypocrisy inductions? Describe
how and why hypocrisy induction efforts might backfire. Identify a
limiting condition on the success of using hypocrisy induction to
change behavior.
7. Explain the central role of the self in dissonance processes. What does
self-affirmation theory identify as the motivation behind dissonance
phenomena? Explain how dissonance and guilt might be related.

Notes
1. Despite their age and relatively narrowed focus, both balance theory and
congruity theory continue to find useful application (see, e.g., Basil &
Herr, 2006; E. Walther & Weil, 2012; J. B. Walther, Liang, Ganster,
Wohn, & Emington, 2012; Woodside, 2004; Woodside & Chebat, 2001).
More generally, cognitive consistency remains an enduring subject of
research attention (e.g., Gawronski & Strack, 2012).

2. This “follows from” is, obviously, a matter of psychological
implication, not logical implication. What matters is whether I think that
one belief follows from another—not whether it logically does so follow.

3. Among the lines of research not discussed here, work on “self-prophecy”
effects (also called the “question-behavior” effect and the “mere
measurement” effect) is worth mention. The effect of interest is that when
people are asked to predict what they will do, such self-predictions can
make performance of the predicted behavior more likely (for illustrations
and discussion, see Conner, Godin, Norman, & Sheeran, 2011; Godin et
al., 2010; J. K. Smith, Gerber, & Orlich, 2003; Spangenberg & Greenwald,
1999; Spangenberg, Sprott, Grohmann, & Smith, 2003; Sprott et al.,
2006). One might imagine any number of possible explanations for such
effects (e.g., Gollwitzer & Oettingen, 2008), among them a dissonance-
based account (Spangenberg, Sprott, Grohmann, & Smith, 2003).

4. The biased processing that is characteristic of postdecisional dissonance
reduction mechanisms (to be discussed shortly) can also be apparent at
predecisional phases (for a review, see Brownstein, 2003).

5. So (the argument runs) although it may appear that people became more
positive about an option after choosing it, they might in reality have had
that more positive evaluation even before choosing that option—a more
positive evaluation that went undetected because the initial evaluation
assessment was imperfect. Thus the postdecisional evaluation may appear
to have changed (appear to have become more positive) even though there
has been no underlying change in the actual evaluations. M. K. Chen and
Risen’s (2010) argument is more complex and nuanced than this (and in
particular emphasizes the importance of choosing an appropriate
comparison condition), but this will serve to convey the flavor of the
argument.

6. M. K. Chen and Risen (2010) offer a mathematical proof showing how
spreading can occur absent genuine change. That proof may not be entirely
secure (Alós-Ferrer & Shi, 2012), but it is not clear that the proof is
essential to their arguments.

7. Note that persons engaged in postdecisional dissonance reduction might
exhibit a preference for supportive information, whereas those engaged in
postdecisional regret (discussed shortly) might have different preferences
(e.g., R. L. Miller, 1977).

8. The discussion in this section concerns information preferences
generally. As mentioned earlier, selective information exposure might be a
way of pursuing specifically postdecisional dissonance reduction.

9. In Hart et al.’s (2009) review, the mean effect in a random-effects
meta-analysis, across 300 cases, was a standardized mean difference (d) of .38,
which corresponds to a correlation (r) of .19.
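
The d-to-r conversion here follows the standard formula for two equal-sized groups, r = d/√(d² + 4). As a purely illustrative check (the function name is mine, not from the review):

```python
import math

def d_to_r(d):
    # Standard conversion from a standardized mean difference (d)
    # to a correlation (r), assuming equal group sizes.
    return d / math.sqrt(d**2 + 4)

print(round(d_to_r(0.38), 2))  # 0.19
```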

10. Some caution is appropriate in interpreting Hart et al.’s (2009)
proffered conclusions, as these are marred by two procedural choices. One
choice was to place more weight on results from fixed-effect analyses than
on results from random-effects analyses; the random-effects results should
be preferred, because only those results provide a basis for generalizing
beyond the cases in hand (see, e.g., Borenstein, Hedges, Higgins, &
Rothstein, 2010). The second was to use one-tailed tests for selected
hypotheses when two-tailed tests should be preferred; as several
commentators have suggested, no researcher actually endorses the beliefs
logically implied by one-tailed tests (e.g., a belief that finding an
extremely large effect in the direction opposite from that predicted is the
same as finding no effect) and the use of one-tailed tests encourages
“significance chasing” (for some discussion, see Abelson, 1995,
pp. 57–59; Burke, 1953; Dwan, Gamble, Williamson, Kirkham, & the Reporting
Bias Group, 2013; Kaiser, 1960). As an illustration of the consequences:
Hart et al. said that “past reviews concluded that attitudinal confidence and
congeniality [that is, selective exposure to attitude-congenial information]
are unrelated … but our results suggested that congeniality is weaker at
high (vs. low or moderate) levels of confidence” (p. 581). But the results
of the random-effects analyses showed no significant differences in
selective exposure as a function of variations in attitudinal confidence (see
Tables 3 and 4).

11. Indeed, in Hart et al.’s (2009) review, utility emerged as an especially
powerful moderator of selective exposure effects—to all appearances, the
only one capable of perhaps producing dependable preferences for
nonsupportive information (this, even in the random-effects analyses).

12. This description adopts the conventional language that characterizes
choice as a necessary condition for the induced compliance effects
predicted by dissonance theory. But any claims about necessary conditions
for dissonance-predicted effects have to be offered rather tentatively, if
only because the research evidence (concerning not only choice but other
putatively necessary conditions as well) has commonly not been analyzed
in ways entirely conducive to supporting such claims. There is a difference
between saying (for example), “Choice is necessary for the appearance of
dissonance-predicted effects” and “Dissonance-predicted effects are larger
under conditions of choice than under conditions without choice.” The
former depicts choice as a necessary condition, the latter as a moderating
factor; the former thus predicts null (zero) effects in no-choice conditions,
the latter only that the size of the effect will be smaller in no-choice
conditions. Any hypothesis of zero effect, however, is almost certainly
literally false; a more appropriate hypothesis would presumably be that the
effects would be trivially small (or, perhaps, opposite in direction). There
has not been discussion of what “trivially small” might be in this context,
however. The evidence that is usually advanced to support necessary
condition claims about choice commonly takes the form of (a) a finding
that a dissonance-predicted effect is statistically significantly different
from zero under choice conditions but not under no-choice conditions or
(b) a finding that a dissonance-predicted effect is significantly larger under
choice than under no-choice conditions. But neither of these is good
evidence for the necessary condition claim (and only the latter is good
evidence for a moderating factor claim). (For a first pass at a better
approach, see Preiss & Allen, 1998.) The evidentiary situation is more
complicated, but no more satisfactory, in the case of the claim that a
particular combination of conditions is necessary. In short, although it has
become customary to characterize research in this area as having identified
various necessary conditions for the appearance of dissonance-predicted
effects, these characterizations should be seen as deserving further
attention (attention especially focused on matters of effect size and
statistical power).

13. Conversely, a persuader seeking long-term commitment to an existing behavior—and so who wants to discourage behavioral change—might
offer just-barely-insufficient incentive for change, thereby arousing
dissonance that is resolved in favor of the existing behavior. Consider
Amazon’s “Pay to Quit” program: Once a year, an employee who works in
an Amazon fulfillment center is offered money to quit. In the first year the
offer is for $2000, increasing $1000 each year to a maximum of $5000.
The headline on the offer is “Please Don’t Take This Offer”—because
Amazon hopes the employees will stay. If the financial incentive is not
quite sufficient to induce quitting, then employees will experience
dissonance (because of forgoing the money)—and to resolve that
dissonance they will strengthen their existing positive attitudes (about
being Amazon employees). But those stronger positive attitudes created
each year mean that in subsequent years, the offer has to keep increasing (in order to keep inducing significant dissonance). (This program is
described in Amazon’s 2013 report to shareholders, widely available
online including here:
http://www.sec.gov/Archives/edgar/data/1018724/000119312514137753/d702518dex9
Thanks to Steve Booth-Butterfield for spotting this ploy.)

14. Notice the contrast with induced compliance situations, in which counterattitudinal behavior is induced by some incentive and (under
appropriate conditions) leads to dissonance and subsequent attitudinal
realignment. In hypocrisy induction circumstances, counterattitudinal
behavior need not be induced because it already has occurred. But in the
colloquial sense of hypocrisy, both counterattitudinal advocacy situations
and hypocrisy induction situations represent cases of hypocrisy (in which a
person says one thing but thinks or does another).

15. As a concrete illustration, consider recycling: Several studies have found that key common barriers to recycling include a perceived lack of information about how to perform the behavior, a lack of appropriate facilities, the perception that the behavior is difficult to do, and so forth
(see, e.g., M. F. Chen & Tung, 2010; De Young, 1989, 1990; Ojala, 2008).
(Correlatively, several studies have found perceived behavioral control to
be a strong predictor of recycling intentions—often the predictor with the
largest zero-order correlation with intention. See, e.g., Mannetti, Pierro, &
Livi, 2004; Nigbur, Lyons, & Uzzell, 2010; K. M. White, Smith, Terry,
Greenslade, & McKimmie, 2009.) People holding such beliefs, if made to
feel hypocritical about not recycling, might well resolve their dissonance
by concluding that recycling is not all that valuable.

16. Concerning counterattitudinal advocacy specifically, consider that (a) one common source of guilt feelings is having told a lie (e.g., Keltner &
Buswell, 1996, Study 1); (b) another common source of guilt is having
inflicted harm on others (e.g., Baumeister, Reis, & Delespaul, 1995, Study
2); and (c) presumably doing both these things would (ceteris paribus)
arouse greater guilt than doing either alone. From this perspective, it
should not be surprising that (a) counterattitudinal advocacy, even without
aversive consequences, has the capacity to arouse dissonance (e.g.,
Harmon-Jones, Brehm, Greenberg, Simon, & Nelson, 1996; for a review,
see Harmon-Jones, 1999); (b) aversive consequences, even from
proattitudinal advocacy, have the capacity to arouse dissonance (e.g.,
Scher & Cooper, 1989); and (c) the combination of counterattitudinal
advocacy and aversive consequences arouses greater dissonance than does either element alone (e.g., R. W. Johnson, Kelly, & LeBlanc, 1995).

Chapter 6 Reasoned Action Theory

The Reasoned Action Theory Model


Intention
The Determinants of Intention
The Distinctiveness of Perceived Behavioral Control
The Predictability of Intention Using the RAT Model
Influencing Intentions
Influencing Attitude Toward the Behavior
Influencing the Injunctive Norm
Influencing the Descriptive Norm
Influencing Perceived Behavioral Control
Altering the Weights
Intentions and Behaviors
Factors Influencing the Intention-Behavior Relationship
The Sufficiency of Intention
Adapting Persuasive Messages to Recipients Based on Reasoned
Action Theory
Commentary
Additional Possible Predictors
Revision of the Attitudinal and Normative Components
The Nature of the Perceived Control Component
Conclusion
For Review
Notes

The behaviors of central interest to persuaders are voluntary actions, ones under the actor’s volitional control. The most immediate determinant of
such an action is presumably the actor’s behavioral intention—what the
person intends to do. Influencing behavior, then, is to be accomplished
through influencing persons’ intentions. For example, getting voters to
vote for a given political candidate will involve (at a minimum) getting the
voters to intend to vote for the candidate. The question that naturally arises
is, “What determines intentions?” This chapter discusses reasoned action
theory (RAT), a framework that provides a broad, general account of the
determinants of intention—thereby identifying underlying targets for
persuasive messages.1

The Reasoned Action Theory Model
Reasoned action theory (RAT) is a general model of the determinants of
volitional behavior developed by Martin Fishbein and Icek Ajzen
(Fishbein & Ajzen, 2010). In what follows, the RAT model is described
and the current state of research on the theory is reviewed. Subsequent
sections describe the theory’s implications for influencing intentions,
discuss the relationship of intentions and behaviors, and offer some
commentary on the model.

Intention
RAT is focused on understanding behavioral intentions. A behavioral
intention represents a person’s readiness to perform a specified action.2 In
assessing behavioral intention, a questionnaire item such as the one shown in Figure 6.1 is commonly employed.

Figure 6.1 Assessing behavioral intention (BI).

The Determinants of Intention


RAT proposes that one’s intention to perform or not perform a given
behavior is a function of four factors: one’s attitude toward the behavior in
question, one’s injunctive norm, one’s descriptive norm, and perceived
behavioral control.

Attitude Toward the Behavior


The attitude toward the behavior (abbreviated AB) is the person’s general
evaluation of the behavior. The expectation, of course, is that as the
attitude toward the behavior becomes more positive, the intention will
become more positive.3

To measure the attitude toward the behavior, several evaluative semantic differential scales can be used, as shown in Figure 6.2.

Figure 6.2 Assessing attitude toward a behavior (AB).

Injunctive Norm
The injunctive norm (abbreviated IN) is the person’s general perception of
whether “important others” desire the performance or nonperformance of
the behavior. As the injunctive norm becomes more positive, the intention
is expected to become more positive.

To obtain an index of the injunctive norm, an item such as that in Figure 6.3 is commonly employed.

Figure 6.3 Assessing the injunctive norm (IN).

Descriptive Norm
The descriptive norm (abbreviated DN) is the person’s perception of
whether other people perform the behavior. The idea is that as people
come to think a given behavior is more widely performed by others, then
they themselves may be more likely to intend to perform the action. Thus
as the descriptive norm becomes more positive, intentions are expected to
become more positive.4

The descriptive norm can be assessed in various ways, as illustrated in Figure 6.4. Such questions can be phrased in different ways and can ask
about people in general or about a specific group. That is, it is possible to
assess descriptive-normative perceptions for different comparison groups.
For example, a college student might give different answers to questions
such as “How many people your age exercise regularly?” “How many
college students exercise regularly?” “How many students at this
university exercise regularly?” “How many of your friends exercise
regularly?” and so forth. Formulating useful and informative descriptive-
norm questions can be a considerable challenge, but the general idea is to
assess the respondent’s perception of what other people do.

Figure 6.4 Assessing the descriptive norm (DN).

Perceived Behavioral Control
Perceived behavioral control (abbreviated PBC) is the person’s perception
of the ease or difficulty of performing the behavior. PBC is similar to the
concept of self-efficacy, which refers to a person’s perceived ability to
perform or control a behavior (see Bandura, 1997). The expectation is that
as PBC becomes more negative, intentions will correspondingly become
more negative.5

The plausibility of this idea can perhaps be seen by considering that sometimes the obstacle to behavioral performance appears to reside not in
negative attitudes or norms but rather in a perceived lack of ability to
perform the action. For example, a person might have a positive attitude
toward exercising regularly (“I think exercising regularly would be a good
thing”), a positive injunctive norm (“Most people who are important to me
think I should exercise regularly”), and a positive descriptive norm (“Most
people like me exercise regularly”), but believe himself incapable of
engaging in the action (“I can’t do it, I don’t have the time”—negative
perceived behavioral control); as a result he does not even form the
intention to exercise regularly. As another illustration, in a comparison of
householders who recycled and those who did not, De Young (1989) found
that recyclers and nonrecyclers had similar positive attitudes about
recycling; however, nonrecyclers perceived recycling as much more
difficult to do than did recyclers and indicated uncertainty about exactly
how to perform the behavior. That is, the barrier to recycling appeared to
be a matter of perceived inability to perform the action, not a negative
attitude toward the behavior. Perceived behavioral control can be assessed
in various ways, but questionnaire items have often taken forms such as
those in Figure 6.5.

Figure 6.5 Assessing perceived behavioral control (PBC).

Weighting the Determinants


These four factors will not always contribute equally to the formation of
intentions. In some circumstances, a person’s intentions may be
determined largely by the attitude toward the behavior, and normative
considerations may play little or no role; for other circumstances, a person
might be strongly influenced by descriptive and injunctive normative
considerations while the person’s own attitude is put aside. That is, the
various influences on intention may carry varying weights in influencing
intention. The RAT model expresses this algebraically, as follows:
BI = AB(w1) + IN(w2) + DN(w3) + PBC(w4)

Here, BI refers to behavioral intention; AB represents the attitude toward the behavior; IN represents the injunctive norm; DN represents the
descriptive norm; PBC represents perceived behavioral control; and w1,
w2, w3, and w4 represent the weights for each factor. One’s behavioral
intentions are thus a joint function of attitude, injunctive norms,
descriptive norms, and perceived behavioral control, each appropriately
weighted.

The relative weights of the components are determined empirically. These weights are not readily assessable for any single person. That is, there is
not any satisfactory way to measure the relative weights of the components
for an individual’s intention to exercise regularly. One can, however,
assess the relative weights of the components (for a given behavior) across
a group of respondents. For example, for a group of (say) first-year college
students, one can estimate the relative influence of the various components
on exercise intentions.

This information is obtained through examination of the weights (the beta-weights, the standardized partial regression coefficients) from a multiple
regression analysis.6 In such an analysis, the four variables are used
simultaneously to predict intention; the relative size of the correlation of
each component with intention (in conjunction with other information,
specifically, the correlations between the components) yields an indication
of the relative weight of each component. (For instance, if the attitudinal
component is strongly correlated with intention, and the normative
components are not, the attitudinal component will receive a larger weight
than either of the other two—reflecting its greater influence on intention.)
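The weighting logic can be sketched in code. In this hypothetical simulation (all respondent scores and true weights are invented for illustration), intention scores are generated from known component weights, and a standardized multiple regression recovers the beta-weights:

```python
# Hypothetical sketch: recovering the relative weights of the RAT
# components via standardized multiple regression (numbers invented).
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated component scores for n respondents.
AB = rng.normal(size=n)    # attitude toward the behavior
IN = rng.normal(size=n)    # injunctive norm
DN = rng.normal(size=n)    # descriptive norm
PBC = rng.normal(size=n)   # perceived behavioral control

# Suppose intention is driven mostly by attitude, less by the other components.
BI = 0.6 * AB + 0.2 * IN + 0.1 * DN + 0.1 * PBC + rng.normal(scale=0.3, size=n)

def zscore(x):
    return (x - x.mean()) / x.std()

# Standardize everything, then regress BI on the four components;
# the resulting coefficients are the beta-weights.
X = np.column_stack([zscore(v) for v in (AB, IN, DN, PBC)])
betas, *_ = np.linalg.lstsq(X, zscore(BI), rcond=None)

for name, b in zip(["AB", "IN", "DN", "PBC"], betas):
    print(f"{name}: beta = {b:.2f}")
```

In this simulated sample the attitudinal component receives the largest beta-weight, mirroring the case described above in which attitude is the strongest influence on intention.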

The Distinctiveness of Perceived Behavioral Control

There is some reason to think that PBC is not the same sort of influence on
intention that AB, IN, and DN are. It makes sense that everything else
being equal, a more positive AB, IN, or DN should be associated with
more positive intentions. But it does not make sense that everything else
being equal, greater perceived control should be associated with more
positive intentions. There are many actions that I perceive to be entirely
under my control—for instance, setting fire to my office—that I have no
intention of performing. Just because I think I have the capability to
perform an action surely does not mean that I am more likely to intend to
do so.

One possibility is that rather than being a straightforward influence on intention (in the ways that AB, IN, and DN are), PBC might instead be a
necessary (but not sufficient) condition for the formation of intention. That
is, if I do not think I have the ability to perform the behavior, then of
course I will not intend to perform the action, but if I do think I have the
ability to perform the behavior, then I might or might not intend to perform
it (depending on my attitude and norms).

This reasoning suggests an image in which AB, IN, and DN influence intention when PBC is relatively high, but when PBC is relatively low,
then AB, IN, and DN will be less strongly related to intention. For
example, if I think I am capable of performing the behavior of mountain
climbing, then my attitude, injunctive norm, and descriptive norm can
influence my intentions (if I like mountain climbing, then I’ll intend to do
it; if I don’t like it, then I won’t intend to do it); but if I think the behavior
is not under my control (there are no mountains where I live, it’s hard to
travel to the mountains, and so forth), then my attitude, injunctive norm,
and descriptive norm are irrelevant (I don’t think I can go mountain
climbing, so—no matter what my attitude and norms are—I don’t intend
to). That is, PBC might be thought of not as a variable that
straightforwardly influences intention in the way that AB, IN, and DN do
but rather as a variable that moderates the influence of AB, IN, and DN on
intention; PBC might be said to enable AB, IN, and DN, in the sense that
those variables will influence intention only when PBC is sufficiently
high.
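The difference between these two images of PBC can be made concrete with a toy model (the functions, weights, and threshold here are invented for illustration and are not part of RAT itself):

```python
# Toy contrast between two possible roles for PBC (all values illustrative).

def intention_additive(ab, inj, dn, pbc):
    # PBC as a fourth additive influence, weighted like AB, IN, and DN:
    # higher perceived control always pushes intention upward.
    return 0.4 * ab + 0.2 * inj + 0.1 * dn + 0.3 * pbc

def intention_enabling(ab, inj, dn, pbc, threshold=0.5):
    # PBC as an enabler: AB, IN, and DN matter only when perceived
    # control is sufficiently high; otherwise no intention forms.
    if pbc < threshold:
        return 0.0
    return 0.5 * ab + 0.3 * inj + 0.2 * dn

# Setting fire to my office: fully controllable (pbc = 1.0) but
# negatively evaluated (ab = inj = dn = -1.0).
print(intention_additive(-1.0, -1.0, -1.0, 1.0))  # control still adds +0.3
print(intention_enabling(-1.0, -1.0, -1.0, 1.0))  # stays firmly negative
```

Under the additive model, perceived control raises the intention score even for a negatively evaluated act; under the enabling model, high control merely permits the (negative) attitude and norms to determine intention.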

The Predictability of Intention Using the RAT Model

Various combinations of the four predictors have been explored
empirically in hundreds of research studies. Behavioral intentions have
proved to be rather predictable using the RAT model, across a variety of
behaviors, including exercise (Brickell, Chatzisarantis, & Pretty, 2006;
Everson, Daley, & Ussher, 2007; Paek, Oh, & Hove, 2012; for a review,
see Hausenblas, Carron, & Mack, 1997), conservation (recycling, water
conservation, and the like; Kaiser, Hübner, & Bogner, 2005; Lam, 2006;
Nigbur, Lyons, & Uzzell, 2010), health screening (Mason & White, 2008;
Michie, Dormandy, French, & Marteau, 2004; Sieverding, Matterne, &
Ciccarello, 2010), bicycle helmet use (Lajunen & Rasanen, 2004), voting
(Fishbein & Ajzen, 1981), vaccination (Dillard, 2011; Gerend & Shepherd,
2012), smoking (Hassandra et al., 2011), consumer purchases (Brinberg &
Durand, 1983; Smith et al., 2008), skin cancer prevention (Branstrom,
Ullen, & Brandberg, 2004; K. M. White et al., 2008), and many others.
The multiple correlations (obtained using RAT model variables to predict
intention) in these applications are commonly in the range of .50 to .90,
with an average multiple correlation of between .65 and .70. (For some
review discussions, see Albarracín, Johnson, Fishbein, & Muellerleile,
2001; Armitage & Conner, 2001; Conner & Sparks, 2005; Cooke &
French, 2008; Hagger & Chatzisarantis, 2009; Hale, Householder, &
Greene, 2002; McEachan, Conner, Taylor, & Lawton, 2011; Sutton, 2004;
Trafimow, Sheeran, Conner, & Finlay, 2002.)

This research has progressed in waves. Much early work examined only
two predictors of intention: attitude and injunctive norms (Fishbein &
Ajzen, 1975; for an illustrative review, see Sheppard, Hartwick, &
Warshaw, 1988). A second wave of research added perceived behavioral
control as a predictor (beginning with Ajzen, 1991). More recently,
descriptive norms have been added as a general predictor (Fishbein &
Ajzen, 2010).

The rationale for this succession of additional predictors has been that each
new variable has shown its value in contributing to the prediction of
intentions. That is, predictions based on AB, IN, and PBC are commonly
better than those based on AB and IN alone (for some relevant reviews and
discussions, see Conner & Armitage, 1998; Conner & Sparks, 1996; Godin
& Kok, 1996; Notani, 1998; Sutton, 1998). Similarly, adding DN has often
been found to improve the prediction of intention beyond that based on
AB, IN, and PBC (for reviews, see Manning, 2009; Rivis & Sheeran,
2003).7

Thus the four predictors described here (AB, IN, DN, and PBC) appear to
be predictors of sufficiently common utility to warrant their inclusion in a
single general model. That does not mean that in any given application, all
four will play a significant role in influencing intention, but it does suggest
that the four-predictor model is likely to be a useful starting point in trying
to unravel influences on intention.

Influencing Intentions
The RAT model identifies five possible avenues for changing a person’s
intention to perform a given behavior: by influencing one of the four
determinants of intention (AB, IN, DN, PBC)—assuming that the
determinant is significantly weighted—or by changing the relative
weighting of the components. (It is presumably apparent why inducing
change by altering one of the four components requires that the component
be significantly weighted. RAT underscores the futility of attempts to
change, say, the injunctive norm in circumstances in which only the
attitudinal component is significantly related to intention.) In what follows,
each of those five avenues is discussed in more detail. (For some general
discussion and reviews concerning RAT-based interventions, see Cappella,
2006; Fishbein & Yzer, 2003; Hackman & Knowlden, 2014; Hardeman et
al., 2002; Sutton, 2002; Yzer, 2012a, 2013. For some illustrative
applications, see Armitage & Talibudeen, 2010; Dillard, 2011; Elliot &
Armitage, 2009; French & Cooke, 2012; Giles et al., 2014; Jemmott, 2012;
Kothe, Mullan, & Amaratunga, 2011; Paek, Oh, & Hove, 2012; Stead,
Tagg, MacKintosh, & Eadie, 2005.)

Influencing Attitude Toward the Behavior


Presumably, a person’s attitude toward a behavior might be influenced by
any number of different attitude-change mechanisms. But the RAT model
provides an account of the determinants of AB that can be useful in
identifying some specific ways in which it can be influenced.

The Determinants of AB

An individual’s attitude toward the behavior is taken to be a function of his or her salient beliefs about the act (which commonly are beliefs concerning outcomes of the behavior). More specifically, the proposal is that the evaluation of each belief (ei) and the strength with which each
belief is held (bi) jointly influence one’s attitude toward the behavior, as
represented in the following equation:
AB = ∑biei

This is the same summative conception of attitude discussed in Chapter 4 (concerning belief-based attitude models). The assessment procedures are identical: a set of modally salient beliefs is usually identified, and the same sorts of belief strength scales (e.g., probable–improbable, true–false) and belief evaluation scales (e.g., good–bad, desirable–undesirable) are
employed. For instance, the items in Figure 6.6 assess the respondent’s
belief strength (bi) concerning a particular outcome of regular exercise.
The evaluation of that outcome (ei) can be assessed with items such as
those in Figure 6.7.

Figure 6.6 Assessing belief strength (bi).

Figure 6.7 Assessing belief evaluation (ei).
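A worked example may help. For a hypothetical respondent thinking about regular exercise (the beliefs and the −3 to +3 scale values are invented for illustration), the summative model computes AB as follows:

```python
# Hypothetical worked example of the summative model AB = sum(b_i * e_i).
# Belief strengths (b) and evaluations (e) are on illustrative -3..+3 scales.
beliefs = {
    # outcome of exercising regularly: (belief strength b, evaluation e)
    "improves my health": (3, 3),    # very likely, very good -> +9
    "takes a lot of time": (2, -2),  # fairly likely, fairly bad -> -4
    "helps me sleep": (1, 2),        # slightly likely, good -> +2
}

AB = sum(b * e for b, e in beliefs.values())
print(AB)  # 9 - 4 + 2 = 7: a moderately positive attitude
```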

RAT’s claims about the determinants of one’s attitude toward the act have
received rather good empirical support, with correlations between ∑biei
and AB commonly averaging more than .50 (for review discussions, see
Albarracín, Johnson, Fishbein, & Muellerleile, 2001; Armitage & Conner,
2001; Conner & Sparks, 1996; Eagly & Chaiken, 1993, p. 176).8

Changing AB

RAT thus identifies a number of possible means of changing the attitude toward the behavior (AB). Consider the case of attempting to induce an
unfavorable attitude toward a given behavior such as smoking. Three
broad strategies are possible. First, the evaluation of an existing salient
belief might be changed. This might involve increasing the unfavorability
of an existing negative belief (“You probably already know that smoking
can lead to blood circulation problems—but you may not realize just how
serious such problems are. Impaired circulation is very undesirable, even dangerous, …”) or decreasing the favorability of an existing positive belief
(“Maybe smoking does give you something to do with your hands, but
that’s a pretty trivial thing”). Second, the strength (likelihood) of an
existing salient belief might be changed. This might involve attempting to
increase the belief strength of an existing negative belief (“You probably
already realize that smoking can lead to health problems. But maybe you
don’t realize just how likely it is to do so. You really are at risk …”) or to
decrease the belief strength associated with an existing positive belief
(“Actually, smoking won’t help you keep your weight down”). Third, the
set of salient beliefs might be changed. This can be accomplished in two
ways. One is to add a new salient belief (of the appropriate valence)
about the act (“Maybe you didn’t realize that smoking leaves a bad odor
on your clothes”). The other is to change the relative saliency of current
beliefs such that a different set of beliefs is salient (“Have you forgotten
just how expensive cigarettes are nowadays?”). Obviously, these are not
mutually exclusive possibilities; a persuader might implement all these
strategies.
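In terms of the summative model, each of these strategies changes a different element of ∑biei. A hypothetical sketch for the smoking example (the belief set and all scale values are invented for illustration):

```python
# Hypothetical sketch: the three strategies as operations on AB = sum(b * e).
def attitude(beliefs):
    return sum(b * e for b, e in beliefs.values())

# A smoker's initial salient beliefs (b and e on illustrative -3..+3 scales).
beliefs = {
    "relaxes me": (3, 2),
    "keeps my weight down": (2, 2),
    "causes health problems": (2, -3),
}
before = attitude(beliefs)  # 6 + 4 - 6 = 4

# 1. Change the evaluation of an existing belief ("a pretty trivial thing").
beliefs["relaxes me"] = (3, 1)

# 2. Change the strength of an existing belief ("won't keep weight down").
beliefs["keeps my weight down"] = (0, 2)

# 3. Add a new salient belief of the appropriate valence ("bad odor").
beliefs["leaves odor on clothes"] = (3, -2)

after = attitude(beliefs)  # 3 + 0 - 6 - 6 = -9
print(before, after)
```

Each strategy, on its own, lowers the total; applied together they move this hypothetical attitude from mildly positive to clearly negative.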

However, as discussed more extensively in Chapter 4 (concerning belief-based attitude models), there are hidden complexities here. For example,
the property of belief strength may be more categorical (“I think the
behavior has the attribute”) than continuous (“I think the probability is
such-and-such that the behavior has the attribute”). Hence if the persuadee
already has the desired categorical judgment associating the behavior with
an attribute, trying to influence the degree of association may not be
useful.

Influencing the Injunctive Norm


As with AB, RAT offers an account of the determinants of the injunctive
norm that can be useful in identifying ways in which IN can be influenced.

The Determinants of IN
An individual’s injunctive norm is taken to be based on two elements. The
first is the person’s judgment of the normative expectations of specific
important others (what I think my parents want me to do, what I think my
best friend wants me to do, and so on). The second is the individual’s
motivation to comply with each of those referents (how much I want to do
what my parents think I should, etc.). Specifically, a person’s injunctive norm is suggested to be a joint function of the normative beliefs that one
ascribes to particular salient others (ni) and one’s motivation to comply
with those others (mi). This is expressed algebraically as follows:
IN = ∑nimi

An individual’s normative beliefs (ni) are commonly obtained through a set of items in which the normative expectation of each referent is
assessed. Figure 6.8A provides one example. The motivation to comply
with each referent (mi) is typically assessed through a question such as the
one in Figure 6.8B. If I believe that my parents, my best friend, my
physician, and others who are important to me all think that I should
exercise regularly, and I am motivated to comply with each referent’s
expectations, then I will surely have a positive injunctive norm regarding
regular exercise.

Figure 6.8 Assessing normative expectations (ni) and motivation to comply (mi).
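Paralleling the attitude example, here is a hypothetical ∑nimi computation for regular exercise (the referents, the −3 to +3 normative-belief values, and the 1 to 7 motivation-to-comply values are all invented for illustration):

```python
# Hypothetical worked example of IN = sum(n_i * m_i).
referents = {
    # referent: (normative belief n, motivation to comply m)
    "my parents": (3, 5),      # they strongly think I should -> 15
    "my best friend": (2, 6),  # -> 12
    "my physician": (3, 7),    # -> 21
}

IN = sum(n * m for n, m in referents.values())
print(IN)  # 15 + 12 + 21 = 48: a strongly positive injunctive norm
```

Dropping the motivation-to-comply term would give the simpler index ∑ni (here, 3 + 2 + 3 = 8).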

RAT’s claims about the determinants of the injunctive norm have generally received good empirical support, with correlations between
∑nimi and IN often .50 and greater (for review discussions, see Albarracín
et al., 2001; Armitage & Conner, 2001; Conner & Sparks, 1996; Eagly &
Chaiken, 1993, p. 176; McEachan et al., 2011).9 Even so, there is reason
for concern about the RAT’s analysis of the IN, and specifically about the
motivation to comply (mi) element. These worries are of two sorts.

The first is uncertainty about the most appropriate way to phrase motivation-to-comply questionnaire items. These items can be worded in a
way that focuses on the specific behavior of interest (“When it comes to
exercising regularly, how much do you want to do what your best friend
thinks you should do?”), in a general way (“In general, how much do you
want to do what your best friend thinks you should do?”), or at some
intermediate level of specificity (“When it comes to health, how much
…”). It is not clear how one might most appropriately choose among
these.10

The second is some troubling empirical results concerning the role of the
motivation-to-comply element. Specifically, ∑ni has often been found to
be at least as good, and sometimes better, a predictor of IN than ∑nimi;
that is, deleting the motivation-to-comply element does not reduce, and
sometimes even improves, the prediction of IN (for some examples, see
Budd, North, & Spencer, 1984; Doll & Orth, 1993; Kantola, Syme, &
Campbell, 1982; Montaño, Thompson, Taylor, & Mahloch, 1997; Sayeed,
Fishbein, Hornik, Cappella, & Ahern, 2005). It may be that, when a
normative referent is salient, motivation to comply with that referent is
already likely to be reasonably high, and hence a measure of motivation-
to-comply does not add useful information (Fishbein & Ajzen, 2010, p.
143).

Changing IN
From the perspective of the RAT, one would influence the injunctive norm
by influencing ni and mi, in ways precisely parallel to the ways in which
AB is influenced through bi and ei. For example, one might attempt to
reconfigure the set of salient referents by adding a new referent or by
increasing the relative salience of an existing potential referent: “Have you
considered what your mother would think about your doing this?” Or one
might attempt to change the normative belief attributed to a current
referent: “Oh, no, you’re wrong—I talked to George, and he thinks you
should go ahead and do this.” Or one might try to change the motivation to
comply with a current referent: “You really shouldn’t worry about what he
thinks—he has no sense when it comes to things like this.”

But the previously mentioned uncertainties concerning the nature and determinants of the injunctive norm make for some corresponding
difficulties here. For example, if one attempts to influence a receiver’s
motivation to comply with a particular referent, it is not clear whether one
should attempt to change the receiver’s motivation to comply (with that
referent) generally, or concerning the relevant broad behavioral domain, or
regarding the specific behavior at hand. Moreover, altering motivation to
comply with a given referent may not affect the receiver’s IN; given the
research evidence that the IN has often been better predicted by ∑ni than
by ∑nimi, perhaps changing the motivation to comply component may not
affect the IN in the expected ways.

So influencing injunctive norms will often be challenging for persuaders.

In some circumstances, some messages concerning others’ normative
beliefs are likely to simply be implausible (e.g., “your friends would really
be opposed to you doing this”). Yet it is plainly possible to devise
successful interventions based on something like alterations of the
injunctive norm. For example, Kelly et al. (1992) identified “trendsetters”
who subsequently communicated HIV risk reduction information to gay
men in their communities, producing substantial and sustained risk
reduction behavior; one way of understanding such effects is to see them
as reflecting changes in the receivers’ injunctive norms (see, relatedly,
Vet, de Wit, & Das, 2011). As another example, Prince and Carey’s (2010)
alcohol abuse intervention was able to affect college students’ injunctive-
normative perceptions of whether the typical student approved of
excessive drinking, although not parallel perceptions concerning close
friends’ approval (see also Armitage & Talibudeen, 2010; Reid & Aiken,
2013).

But rather than trying to change the persuadee’s injunctive norms by addressing messages to the persuadee, persuaders might sometimes
consider targeting messages at the relevant important others—because if
the views of those referents change, then the persuadee’s normative beliefs
(the beliefs the persuadee attributes to those referents) may also change.
For example, to encourage potential military recruits to enlist, recruiters
might try to persuade parents to favor their child’s enlistment, thereby
laying the groundwork for the potential recruit to develop the desired
injunctive-normative beliefs.

Influencing the Descriptive Norm

The Determinants of DN
RAT does not yet provide an elaborated account of the determinants of the
descriptive norm (DN). One possibility might be to conceive of the DN as
arising from perceptions that parallel those determining IN (see Fishbein &
Ajzen, 2010, pp. 146–148). That is, a given respondent (or set of
respondents) might have a set of salient descriptive-norm referents
(parallel to the salient injunctive-norm referents)—a set of individuals or
groups whose behavior might be seen as a source of guidance. And the
descriptive-normative beliefs about such referents might be weighted in
some way (giving more weight to some referents than to others), thus
yielding the person’s overall perception of the DN. But these ideas have
not received sustained empirical attention.
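If the DN were modeled this way, it would take the form of a weighted sum over descriptive-norm referents, parallel to the expectancy-value treatment of IN. The sketch below is purely speculative; as just noted, this account has not received empirical attention, and the referents, scales, and weights are hypothetical.

```python
# Speculative sketch of DN as a weighted sum over descriptive-norm referents
# (hypothetical structure and numbers; this account is untested).

# referent: (descriptive-normative belief, e.g., -3 = referent never performs
#            the behavior, +3 = always performs it; weight given that referent)
referents = {
    "close friends": (2, 0.5),
    "classmates":    (1, 0.3),
    "family":        (-1, 0.2),
}

dn = sum(belief * weight for belief, weight in referents.values())
print(round(dn, 1))  # 1.1
```

On this (hypothetical) scoring, the overall DN leans positive because the more heavily weighted referents are seen as performing the behavior.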

Changing DN
Even without a fully explicit account of the determinants of the descriptive
norm, however, it is plain that the DN might most straightforwardly be
influenced by messages that convey DN information. Such messages
might influence intentions either by altering the DN (e.g., in cases where
people don’t know, or misperceive, the DN) or by enhancing the salience
of the DN.

In fact, providing descriptive-norm information has been the primary basis
of quite a number of successful persuasive interventions, on such diverse
subjects as conservation behavior (Goldstein, Cialdini, & Griskevicius,
2008), food choice (Burger et al., 2010), tax compliance (Wenzel, 2005),
physical activity (Burger & Shelton, 2011; Slaunwhite, Smith, Fleming, &
Fabrigar, 2009), and skin cancer prevention (Mahler, Kulik, Butler,
Gerrard, & Gibbons, 2008). As just one illustration: People can be
influenced to vote by learning that some of their Facebook friends have
voted (Bond et al., 2012; see, similarly, Glynn, Huge, & Lunney, 2009).

One extensively studied arena for descriptive-norm interventions has been
college student alcohol consumption. Such undertakings have been
motivated by the frequent observation that students commonly
overestimate the frequency or amount of alcohol consumption by others
(for a review discussion, see Berkowitz, 2005). On the supposition that
such inaccurate descriptive-norm beliefs might lead to alcohol abuse,
campaigns conveying accurate DN information have had a natural appeal.
These interventions have had mixed success (for examples and discussion,
see Clapp, Lange, Russell, Shillington, & Voas, 2003; Lewis & Neighbors,
2006; Mattern & Neighbors, 2004; Wechsler et al., 2003).

The challenges in creating effective descriptive-norm–based interventions
should not be underestimated. DN-based persuasive efforts can go off the
rails in a variety of ways: The DN information in the campaign messages
might not be believable (e.g., Polonec, Major, & Atwood, 2006), the
messages might not provide DN information about the most appropriate
referent comparison groups (e.g., Burger, LaSalvia, Hendricks,
Mehdipour, & Neudeck, 2011; Larimer et al., 2009), the DN might not be
strongly related to intentions or behavior (e.g., Cameron & Campo, 2006),
or DN information might backfire (e.g., Campo & Cameron, 2006; see,
relatedly, Cialdini et al., 2006). One hopes that the accumulation of
research evidence about DN-based interventions will eventuate in
guidelines about how to maximize the effectiveness of such interventions
(see DeJong & Smith, 2013).

Influencing Perceived Behavioral Control

The Determinants of PBC


Perceived behavioral control is taken to be a function of the person’s
beliefs about the resources and obstacles relevant to performance of the
behavior. More specifically, the determinants of PBC are taken to reflect
jointly the person’s perception of (a) the likelihood or frequency that a
given control factor will occur and (b) the power of the control factor to
inhibit or facilitate the behavior. PBC is expressed algebraically as
follows:
PBC = ∑ ci pi

where ci refers to the individual control belief (the perceived likelihood or
frequency that the control factor will occur) and pi refers to the perceived
facilitating or inhibiting power of the individual control factor. Procedures
for assessing these variables are not well established, but an individual’s
control beliefs (ci) might be assessed using items such as the one in Figure 6.9A.
The perceived power of each control factor (pi) can be assessed through a
question like the one in Figure 6.9B. If, for example, I think that bad
weather occurs frequently where I live, that I don’t have ready access to
exercise facilities, and that I don’t have much spare time, and I think that
each of these conditions makes it very difficult to exercise regularly, then I
will likely perceive that I have relatively little control over whether I
exercise regularly.11

Figure 6.9 Assessing individual control belief (ci) and the power of each
control factor (pi).
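The expectancy-value structure of PBC can be sketched in a few lines of code. The control factors and ratings below are hypothetical illustrations echoing the exercise example above, not items from an actual RAT questionnaire.

```python
# Hypothetical sketch of PBC = sum of ci * pi over salient control factors.
# ci: perceived likelihood/frequency of the control factor (e.g., 1 = rare, 7 = frequent)
# pi: perceived power of the factor to facilitate (+) or inhibit (-) the behavior

control_factors = {
    # factor: (ci, pi) -- illustrative ratings for "exercising regularly"
    "bad weather":             (6, -3),
    "no access to facilities": (5, -2),
    "little spare time":       (6, -3),
}

pbc = sum(ci * pi for ci, pi in control_factors.values())
print(pbc)  # -46: frequent, strongly inhibiting factors yield low perceived control
```

Frequent and strongly inhibiting factors drive the sum down, matching the verbal example: someone facing bad weather, no facilities, and little spare time will perceive relatively little control over exercising regularly.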

Relatively little research attention has been given to RAT’s claims about
the determinants of perceived behavioral control. Many RAT studies have
not collected data about ci, pi, and PBC (and of those that have, some do
not report the relevant correlation between ∑cipi and direct measures of
PBC). The few reported results are not especially encouraging, as the
correlations commonly range from roughly .10 to .35 (see, e.g., Cheung,
Chan, & Wong, 1999; Elliott, Armitage, & Baughan, 2005; Parker,
Manstead, & Stradling, 1995; Povey, Conner, Sparks, James, & Shepherd,
2000; Valois, Desharnais, Godin, Perron, & LeComte, 1993).12 However,
stronger relationships have been reported between direct assessments of
PBC and other belief-based measures, including measures based on
questions about only likelihood of occurrence (i.e., ∑ci), questions about
only powerfulness (∑pi), questions that appear to involve some amalgam
of likelihood of occurrence and powerfulness considerations (e.g., “Which
of the following reasons would be likely to stop you from exercising
regularly?”), and questions about the perceived importance of various
barriers. Using measures such as these, correlations with PBC measures of
between roughly .25 and .60 have been obtained (Ajzen & Madden, 1986;
Courneya, 1995; Elliott et al., 2005; Estabrooks & Carron, 1998; Godin,
Gagné, & Sheeran, 2004; Godin, Valois, & Lepage, 1993; P. Norman &
Smith, 1995; Sutton, McVey, & Glanz, 1999; Theodorakis, 1994;
Trafimow & Duran, 1998).13

Interpretation of these findings is complicated by variation in the direct
assessments of perceived behavioral control, in the means of establishing
the set of control beliefs, and in the assessments of likelihood (ci) and
powerfulness (pi).14 Taken together, however, these findings do suggest
that in some fashion perceptions of behavioral control are belief-based, in
the sense of being related to persons’ beliefs about resources and obstacles
relevant to behavioral performance. RAT may not yet have an adequate
account of exactly how beliefs combine to yield perceptions of behavioral
control, but it seems plain that assessments of the resources for, and
obstacles to, behavior play some role in shaping persons’ perceptions of
control.

Changing PBC
Influencing perceived behavioral control involves addressing the perceived
barriers to and resources for behavioral performance. Unfortunately, the
lack of a well-evidenced account of the determinants of PBC means that
there is less guidance than one might like concerning specific means of
influencing PBC. Even so, there appear to be four broad alternative means
by which a persuader might influence PBC. The appropriateness of each
mechanism will vary depending on the particular target behavior, and
combinations of these approaches may prove more effective than any one
individually, but each offers an avenue to influencing perceptions of
behavioral control.

One means of influencing perceived behavioral control may be for the
persuader to directly remove an obstacle to behavioral performance. Some
such obstacles are the result of a lack of relevant information, and in such
cases persuaders might find success simply by providing the information.
For example, parents’ self-efficacy for lowering the temperature setting of
a water heater (to prevent tap water scalding of infants) can be enhanced
by a simple informational brochure describing how to perform the action
(Cardenas & Simons-Morton, 1993). Similarly, better instructions may
improve self-efficacy concerning do-it-yourself medical tests (Feufel,
Schneider, & Berkel, 2010). Adolescents may not know how to use
condoms properly, voters may not know the location of their polling
places, and potential first-time home buyers may not understand the
process of buying a house; in all these cases, simply providing the relevant
information may remove a barrier to behavioral performance.

Even when the obstacle is substantive (rather than informational),
persuaders may be able to address it. For example, among low-income
patients whose initial medical test results indicate a need for a return
hospital visit, transportation problems might represent a significant barrier
to returning; Marcus et al. (1992) found that providing such patients with
free bus passes or parking permits significantly increased the likelihood of
a return visit.15 Similarly, racquetball players who didn’t have eye
protection equipment were willing to wear it when the recreational facility
provided it at the court (Dingus, Hunn, & Wreggit, 1991).

Second, a persuader might create the opportunity for successful
performance of the behavior in question. The core idea is that rehearsal of
a behavior—practice at performing the behavior successfully—will
enhance perception of control over the action (the underlying reasoning
being something such as “I’ve done it before, so I can do it again”). For
instance, several studies have found that self-efficacy for condom use can
be enhanced by interventions that include role-playing (or mental
rehearsal) of discussions with sexual partners, practice at using condoms
correctly, and the like (e.g., Calsyn et al., 2010; Yzer, Fisher, Bakker,
Siero, & Misovich, 1998). For other suggestions of the effect of successful
performance on self-efficacy, see Duncan, Duncan, Beauchamp, Wells,
and Ary (2000), Latimer and Ginis (2005a), Luzzo, Hasper, Albert, Bibby,
and Martinelli (1999), and Mishra et al. (1998).

Third, a persuader can provide examples of others (models) performing the
action successfully; such modeling can enhance self-efficacy (by message
recipients reasoning that “if they can do it, I can do it”). For example,
compared with a no-treatment control group, preservice teachers who
viewed a videotape that described and demonstrated various effective
behavior management techniques subsequently reported enhanced self-
efficacy for using such techniques (Hagen, Gutkin, Wilson, & Oats, 1998).
For other examples of the potential effects of modeling on self-efficacy,
see R. B. Anderson (1995, 2000), Gaston, Cramp, and Prapavessis (2012),
and Ng, Tam, Yew, and Lam (1999); for some discussion of factors
relevant to the choice of models, see Berry and Howe (2005), Corby,
Enguidanos, and Kay (1996), and R. B. Anderson and McMillion (1995).

Finally, simple encouragement may make a difference. That is, hearing a
communicator say (in effect) “you can do it” may enhance a person’s
perceived ability to perform an action. For instance, assuring receivers that
they can successfully prevent a friend from driving while drunk can
enhance receivers’ self-efficacy for that action (compared with a no-
treatment control condition; R. B. Anderson, 1995).16

Several studies have examined multicomponent self-efficacy interventions
(i.e., interventions that combine different potential means of influencing
self-efficacy, such as modeling and information; see, e.g., Luszczynska,
2004; Robinson, Turrisi, & Stapleton, 2007), and self-efficacy treatments
have sometimes been included as part of a larger intervention package (as
when, for instance, participants receive information designed to persuade
participants of the importance of the behavior in combination with a self-
efficacy treatment; for examples, see Darker, French, Eves, & Sniehotta,
2010; Fisher, Fisher, Misovich, Kimble, & Malloy, 1996; Kellar &
Abraham, 2005). Such research designs can provide evidence for the
influenceability of self-efficacy but obviously cannot provide information
about the relative impact of different specific mechanisms of influence
(although evidence is beginning to accumulate concerning that question;
e.g., Anderson, 2009; Ashford, Edmunds, & French, 2010; Hyde, Hankins,
Deale, & Marteau, 2008; Prestwich et al., 2014) or about the conditions
under which a given mechanism is most effective (although here, too,
research is developing; e.g., J. K. Fleming & Ginis, 2004; Hoeken &
Geurts, 2005; Luszczynska & Tryburcy, 2008; Mellor, Barclay, Bulger, &
Kath, 2006).

Altering the Weights


The final possible avenue of influence suggested by RAT is changing the
relative weights of AB, IN, and DN.17 For instance, suppose a person has a
positive attitude toward the act of attending law school but has a negative
injunctive norm (believes that important others think that she should not
go to law school). If the person places greater emphasis on injunctive-
normative than on attitudinal considerations in making this behavioral
decision, she would not intend to go to law school. A persuader who
wanted to encourage the person’s attending law school might try to
emphasize that insofar as a decision such as this is concerned, one’s
personal feelings ought to be more important than what others think (“It’s
your career choice, your life, not theirs; in situations like this, you need to
do what’s right for you,” “You’re the one who has to live with the
consequences, not them,” and so on). That is, the persuader might attempt
to have the person place more emphasis on attitudinal than injunctive-
normative considerations in forming the relevant intention.

This strategy can succeed in changing intention only when the relevant
components incline the person in opposite directions. For example, if a
person has a positive AB, a positive IN, and a positive DN, then it won’t
matter how the weights are shifted around among those three elements—
the person will still have a positive intention. Intention can be changed by
altering the weights of these three components only when one of those
three components differs in direction from the other two.18
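The constraint can be illustrated with a small numeric sketch; the component scores and weights here are made-up values, not empirical estimates.

```python
# Intention modeled as a weighted sum of AB, IN, and DN (illustrative only).
def intention(ab, inj, dn, w_ab, w_in, w_dn):
    return w_ab * ab + w_in * inj + w_dn * dn

# Law school example: positive attitude (+2), negative injunctive norm (-2).
ab, inj, dn = 2, -2, 0

# Norm-weighted person: intention is negative.
print(round(intention(ab, inj, dn, 0.2, 0.7, 0.1), 1))  # -1.0
# Shift emphasis to attitudinal considerations: intention turns positive.
print(round(intention(ab, inj, dn, 0.7, 0.2, 0.1), 1))  # 1.0

# When all three components are positive, no reweighting can flip the sign.
print(round(intention(2, 2, 2, 0.1, 0.8, 0.1), 1))  # 2.0
```

Reweighting changes the sign of the sum only when the components disagree in direction, which is exactly the condition stated above.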

However, these three components are often positively correlated. More
evidence is available concerning the relationship between AB and IN than
concerning either the AB-DN or IN-DN relationships, but all three
variables are generally reasonably positively correlated with each other,
with mean correlations ranging from roughly .35 to .60 (Manning, 2009;
Rivis & Sheeran, 2003).19 So, for example, persons with negative
injunctive norms are likely to also have relatively unfavorable attitudes
toward the behavior and relatively negative descriptive norms; as the
attitude toward the behavior becomes more positive, so do injunctive
norms and descriptive norms; and so on. As a rule, then, AB, IN, and DN are
likely to all point in the same direction.20 The implication
is that the strategy of influencing their relative weights will not find wide
application.

Intentions and Behaviors


Reasoned action theory focuses on factors influencing the formation of
behavioral intentions, but such a focus promises to illuminate persuasion
only to the extent that intentions are related to action. As it happens, there
is good evidence that voluntary actions can often be successfully predicted
from intentions. Several broad reviews have reported mean intention-
behavior correlations ranging from roughly .40 to .55 (Eckes & Six, 1994;
M.-S. Kim & Hunter, 1993b; Sheeran, 2002; Sheppard et al., 1988), and
reviews of selected subsets of relevant work have reported similar
magnitudes (e.g., Cooke & French, 2008; Godin & Kok, 1996; Hagger,
Chatzisarantis, & Biddle, 2002; Hausenblas et al., 1997; Ouellette &
Wood, 1998; Schepers & Wetzels, 2007; Schwenk & Moser, 2009;
Sheeran & Orbell, 1998).21

Factors Influencing the Intention-Behavior Relationship
Given that measures of intention are thus often reasonably strongly related
to behavioral assessments, the question that naturally arises is what
variables influence the strength of this relationship. A variety of factors
have been examined as possible influences on the intention-behavior
relationship (for some examples, see Chatzisarantis & Hagger, 2007; Hall,
Fong, Epp, & Elias, 2008; Prestwich, Perugini, & Hurling, 2008; for a
general discussion, see Cooke & Sheeran, 2004). Three factors are
discussed here as illustrative.

Correspondence of Measures
First, the degree of correspondence between the measure of intention and
the measure of behavior influences the strength of the observed intention-
behavior relationship (see Courneya, 1994; Fishbein & Ajzen, 2010, pp.
44–47). For instance, a questionnaire item asking about my intention to
buy diet cola at the grocery tonight may well be strongly related to
whether I buy diet cola at the grocery tonight—but it will be less strongly
related to whether I buy Diet Coke (specifically) at the grocery tonight or
to whether I buy diet cola at the cafeteria tomorrow. That is, as the degree
of correspondence between the two measures weakens, the intention
becomes a poorer predictor of (less strongly related to) the behavior. This
methodological consideration emphasizes how different means of
assessing intention and behavior can affect the size of the observed
association.

Temporal Stability of Intentions


A second influence on the intention-behavior relationship is the temporal
stability of intentions. If a person’s intentions fluctuate a good deal
through time, then a measure of intention (taken at one particular time)
may not necessarily be predictive of subsequent behavior (e.g., Conner,
Sheeran, Norman, & Armitage, 2000; Dibonaventura & Chapman, 2005;
for reviews, see Conner & Godin, 2007; Cooke & Sheeran, 2004; Rhodes
& Dickau, 2013).22 In part, this is a methodological point, in the sense that
if the value of a predictor variable is volatile over time, then any single
assessment of it is likely to be relatively weakly related to a subsequent
assessment of an outcome variable (even if the two properties are actually
closely related). That is, even if behavior is entirely determined by
whatever the actor’s intention is at the moment of behavioral performance,
an earlier assessment of intention will be predictive of that behavioral
performance only if the (earlier) assessed intention matches the (later) at-
the-moment-of-action intention. Thus if people’s intentions are stable over
time, then there is a good chance that their earlier intentions will match
their later ones, thus yielding a strong observed relationship between the
measure of intention and the measure of behavior. But if people’s
intentions are variable over time, then the observed relationship will be
weaker—not because intentions do not actually influence actions but
because the temporal instability of intention inevitably introduces error.

But there is also a substantive point here, because of the possibility that
some intentions (for some people or for some types of behaviors) are
generally more stable than others (see Sheeran & Abraham, 2003). There
is not yet much accumulated research on this matter, but (for example)
some evidence suggests that for behaviors deemed relatively important
(e.g., ones taken to be closely related to one’s self-image), intentions may
be more stable (compared with corresponding intentions for less important
behaviors) and hence more closely related to action (see Kendzierski &
Whitaker, 1997; Radecki & Jaccard, 1999; Sheeran & Orbell, 2000a). In
any case, the general point to notice is that to the degree that persons’
intentions are unstable, to that same degree intentions may not provide a
good basis for predicting subsequent action.

Explicit Planning
Third, explicit planning about behavioral performance can strengthen the
relationship between intentions and actions. In a large number of studies,
participants who specified when and where they would perform the action
were more likely (than control group participants) to subsequently engage
in the behavior. For example, Sheeran and Orbell (2000b) found that
participants who specified when, where, and how they would make an
appointment for a medical screening test were much more likely to
subsequently attend the screening than those in a control condition. Similar
effects of explicit-planning interventions have been reported for a great
variety of behaviors, including exercise (e.g., Andersson & Moss, 2011),
single-occupancy car use (Armitage, Reid, & Spencer, 2011), parent-
teacher communication (Arriaga & Longoria, 2011), smoking prevention
(Conner & Higgins, 2010), contraceptive adherence (Martin, Slade,
Sheeran, Wright, & Dibble, 2011), voting (Nickerson & Rogers, 2010),
and many others (for some reviews, see Adriaanse, Vinkers, de Ridder,
Hox, & De Wit, 2011; Gollwitzer & Sheeran, 2006; Sheeran, Milne,
Webb, & Gollwitzer, 2005).23

These effects are notable because one common persuasive challenge is
precisely that of encouraging people to translate their existing good
intentions into action. For example, people may form an initial intention to
exercise or recycle or eat a healthier diet but then fail to follow through.
Obviously, encouraging receivers to engage in explicit behavioral planning
is a possible mechanism for addressing such challenges.

Several different explanations have been considered for these planning
effects. One possibility is simply that planning makes intentions more
positive, but in fact these effects do not necessarily involve enhancing
intentions (see, e.g., Milne, Orbell, & Sheeran, 2002; Sheeran & Orbell,
1999b). Explicit planning appears to be able to influence the likelihood of
subsequent behavior without necessarily changing intentions.

A second explanation is that planning enhances PBC (self-efficacy). The
suggestion is that thinking through concrete action plans may convince
people of their ability to successfully perform the behavior. But two
considerations incline against this explanation. First, if planning enhances
PBC, then (given PBC’s influence on intention) planning should also have
the indirect effect of making intentions more positive; but, as just
indicated, that effect seems not to occur. Second, several studies have
found that planning enhances intention-behavior consistency only when
PBC is already relatively high (Koring et al., 2012; Lippke, Wiedemann,
Ziegelmann, Reuter, & Schwarzer, 2009; Schwarzer et al., 2010; Wieber,
Odenthal, & Gollwitzer, 2010; see, relatedly, Koestner et al., 2006); that is,
having high PBC appears to be a necessary condition for explicit-planning
interventions to be effective.

A third explanation—and it seems the best available—is that planning
encourages the development of “implementation intentions,” subsidiary
intentions related to the concrete realization (implementation) of a more
abstract intention (for a general discussion and review, see Gollwitzer &
Sheeran, 2006). A more abstract intention (“I intend to get a flu shot next
week”) may be thought of as describing a goal, whereas implementation
intentions specify both the concrete behavior to be performed in order to
achieve the goal and the context in which that behavior will be enacted
(“On Tuesday, on the way to work, I’ll stop at that pharmacy on Main
Street to get my flu shot”). Interventions that encourage explicit behavioral
planning thus naturally boost the development of implementation
intentions.

A number of factors can be expected to influence the success of explicit-
planning interventions. People must already have the appropriate abstract
intentions (e.g., Elliott & Armitage, 2006), the intervention must in fact
lead people to plan (Michie, Dormandy, & Marteau, 2004), and, as
mentioned earlier, perceived behavioral control must be sufficiently high
(e.g., Koring et al., 2012). The implementation intentions explanation
additionally suggests that, to be successful, explicit-planning interventions
should specifically emphasize the linkage between situational cues (the
context of performance) and the concrete action (Chapman, Armitage, &
Norman, 2009; Van Osch, Lechner, Reubsaet, & de Vries, 2010; Webb &
Sheeran, 2007; cf. Ajzen, Czasch, & Flood, 2009), so that when those
contextual cues are encountered, they will naturally prompt behavioral
performance. (For examples of discussion of some additional possible
factors, see Adriaanse, de Ridder, & de Wit, 2009; Churchill & Jessop,
2011; Hall, Zehr, Ng, & Zanna, 2012; Knäuper, Roseman, Johnson, &
Krantz, 2009; Prestwich et al., 2005.)

The Sufficiency of Intention


One other general aspect of the intention-behavior relationship worth
considering is whether intention is a sufficient basis for the prediction of
voluntary action. The RAT model proposes that intention is the only
significant influence on (volitional) behavior; any additional factors that
might be related to behavior are claimed to have their effect indirectly, via
intention (or via the determinants of intention).24 The question at issue is
this: Are there factors that have effects on behavior that are not mediated
through intention? Alternatively put: Are there additional variables that
might improve the prediction of behavior (over and above the
predictability afforded by intention)?

Among various possibilities, the most prominent and well-studied
suggestion focuses on prior performance of the behavior in question. Some
studies have found the prediction of behavior to be improved by taking
prior behavior into account (e.g., De Wit, Stroebe, De Vroome, Sandfort,
& van Griensven, 2000); specifically, persons who had performed the
action in the past were more likely to perform it in the future—over and
above the effects of intention on future performance. But other studies
have failed to find such an effect (e.g., Brinberg & Durand, 1983). A
systematic review of this research has indicated that a key differentiating
factor is whether the behavior is routinized (Ouellette & Wood, 1998).
Specifically, prior behavior makes an independent contribution (to the
prediction of behavior) only when the behavior has become habitual and
routine (and so, in a sense, automatic rather than fully intentional); where
conscious decision making is required, this effect disappears, as the
influence of prior behavior seems to largely be mediated through intention
or its determinants. And thus, as a number of studies have found,
intention-behavior correlations are smaller when habit is relatively strong
than when it is relatively weak—an effect observed across such diverse
behaviors as cancer screening (P. Norman & Cooper, 2011), bicycle use
(de Bruijn & Gardner, 2011; de Bruijn, Kremers, Singh, van den Putte, &
van Mechelen, 2009), fruit consumption (de Bruijn, 2010), exercise (de
Bruijn & Rhodes, 2010), saturated fat consumption (de Bruijn, Kroeze,
Oenema, & Brug, 2008), and binge drinking (P. Norman & Conner, 2006).

For persuaders, these findings serve as a reminder of the persuasive
difficulties created by entrenched behavioral patterns: Past behavior may
exert an influence on conduct that is not mediated by intention, and hence
securing changes in intention may not be sufficient to yield changes in
well-established behavioral routines. On the other hand, these findings also
suggest the durability of persuasive effects that involve establishing such
habits. (For examples and discussion concerning establishing or breaking
habitual or routinized behavior, see Aarts, Paulussen, & Schaalma, 1997;
Adriaanse, Gollwitzer, de Ridder, de Wit, & Kroese, 2011; Allcott &
Rogers, 2012; de Vries, Aarts, & Midden, 2011; Judah, Gardner, &
Aunger, 2013; Lally & Gardner, 2013.)

Adapting Persuasive Messages to Recipients Based on Reasoned Action Theory
One recurring theme in theoretical analyses of persuasion is the idea that to
maximize effectiveness, persuasive messages should be adapted (tailored,
adjusted) to fit the audience. RAT provides an explicit treatment of two
ways in which such adaptation can occur (for discussion and details, see
Abraham, 2012; Ajzen, Albarracín, & Hornik, 2007; Ajzen & Manstead,
2007; Fishbein & Yzer, 2003; Sutton, 2002; Yzer, 2012a, 2013).

First, messages should be adapted by addressing whichever determinants
of intention are the most important influences. Concretely, this means
examining the relative weights of the different RAT model elements so as
to identify the ones most strongly related to intention. Generally speaking,
of the four components, AB is typically the most strongly related to
intention (see the reviews of Albarracín, Johnson, Fishbein, &
Muellerleile, 2001; Armitage & Conner, 2001; Cooke & French, 2008;
Hagger, Chatzisarantis, & Biddle, 2002; Manning, 2009; McEachan,
Conner, Taylor, & Lawton, 2011; Rivis & Sheeran, 2003).25 But the
relative contribution of the various components to the prediction of
intention can vary from behavior to behavior (compare, e.g., Cooke and
French’s 2008 findings concerning health screening behaviors and Hagger
et al.’s 2002 findings concerning exercise). Indeed, sometimes AB may
make a smaller contribution (to the prediction of intention) than do
normative elements (e.g., Croy, Gerrans, & Speelman, 2010).26 The
implication is that in any given persuasion circumstance, there will be no
substitute for direct evidence about the relative importance of the various
components.

Second, messages should be further adapted by addressing relevant beliefs
underlying the component to be changed. For example, if AB is the target
component, RAT-based questionnaires can be used to identify differences
between those who already intend to perform the persuader’s advocated
action (“intenders”) and those who do not (“nonintenders”)—differences
in the strength and evaluation of salient beliefs about the behavior (see,
e.g., Fishbein et al., 2002; French & Cooke, 2012; Marin, Marin, Perez-
Stable, Sabogal, & Otero-Sabogal, 1990; Rhodes, Blanchard, Courneya, &
Plotnikoff, 2009; Silk, Weiner, & Parrott, 2005; J. R. Smith &
McSweeney, 2007). Such data can then be used as the basis for
constructing persuasive messages—messages focused on changing those
specific elements known to distinguish intenders and nonintenders (for
examples of RAT-based message design, see Booth-Butterfield & Reger,
2004; Chatzisarantis & Hagger, 2005; Jordan, Piotrowski, Bleakley, &
Mallya, 2012; Jung & Heald, 2009; Milton & Mullan, 2012; Stead, Tagg,
MacKintosh, & Eadie, 2005).27

As attractive and useful as this general approach is, a word of caution is
appropriate: The relative sizes of the weights for the four components can
be misleading as a way of identifying the most important targets for
persuasive messages. This is a consequence of the often-substantial
correlations among the RAT model’s predictors (Albarracín et al., 2001;
Hagger et al., 2002; Manning, 2009; McEachan et al., 2011; Rivis &
Sheeran, 2003). Because the predictors are correlated with each other,
small differences in zero-order correlations (of each predictor with
intention) can produce large differences in the weights.
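The arithmetic behind this caution can be made concrete. With two standardized predictors, the regression weights follow directly from the three correlations; the numbers below are hypothetical, chosen only to show how a tiny zero-order difference is amplified when the predictors are highly correlated.

```python
# Hypothetical correlations (not from any study): AB and IN correlate almost
# identically with intention (.60 vs .58) but strongly with each other (.90).
r_ab_int, r_in_int, r_ab_in = 0.60, 0.58, 0.90

# Standardized regression weights for two correlated predictors:
#   beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12**2)
denom = 1 - r_ab_in ** 2
beta_ab = (r_ab_int - r_in_int * r_ab_in) / denom
beta_in = (r_in_int - r_ab_int * r_ab_in) / denom

print(round(beta_ab, 2), round(beta_in, 2))  # 0.41 0.21
```

A difference of .02 in the zero-order correlations thus yields a nearly two-to-one difference in weights, which is why the weights alone can mislead about relative importance.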

As a concrete example: In Fishbein et al.’s (2002) study of marijuana use
intentions, AB and IN had virtually identical zero-order correlations with
intention, but attitude had the larger weight in the regression. Fishbein et
al. reasoned that attitude was the more important target for persuasion
“because attitude was the most important determinant of intention” (p.
105). But this reasoning is defective; the zero-order correlations showed
that AB and IN were statistically indistinguishable (and indeed literally
nearly identical) as determinants of intention. There might have been good
reasons for preferring AB over IN as an intervention focus—perhaps the
malleability of attitude might plausibly be expected to be greater—but the
invocation of the relative sizes of the weights was not such a reason.28 In
general, then, intervention designers will want to examine not only the
weights but also the zero-order correlations in order to have an accurate
picture of the influences on intention.29
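The arithmetic behind this caution can be sketched briefly. For two standardized predictors, the regression weights follow directly from the zero-order correlations, and a high predictor intercorrelation magnifies small differences between them. The correlation values below are hypothetical illustrations, not data from any study cited above:

```python
def regression_weights(r1y, r2y, r12):
    """Standardized regression (beta) weights for two correlated predictors
    of a standardized outcome, computed from the zero-order correlations."""
    denom = 1 - r12 ** 2
    b1 = (r1y - r12 * r2y) / denom
    b2 = (r2y - r12 * r1y) / denom
    return b1, b2

# Hypothetical values: nearly identical zero-order correlations with
# intention (.60 vs. .55), but the two predictors themselves correlate
# at .80 (substantial intercorrelations are common among RAT components).
b1, b2 = regression_weights(0.60, 0.55, 0.80)
print(round(b1, 2), round(b2, 2))  # 0.44 0.19
```

Although the two predictors are statistically almost indistinguishable in their zero-order relationships to the outcome, the first receives a weight more than twice the size of the second, which is exactly the misleading pattern described above.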

As a further illustration, consider the descriptive norm specifically: Even if
the DN is not a significantly weighted predictor, a persuader might
nevertheless want to present descriptive norm information, because such
information could potentially affect other determinants of intention.
Specifically, DN information (e.g., “everybody’s doing it”) might
influence AB (“if everybody’s doing it, there must be something good
about it that I haven’t seen” or “if everybody’s doing it, these negative
outcomes I’ve been worried about must not be very likely, or must not be
all that bad”) or PBC (“if other people are doing it, it must not be as hard as I
thought, so maybe I can do it too;” e.g., Fornara, Carrus, Passafaro, &
Bonnes, 2011). But when DN influences intention entirely through such
effects on AB and PBC, DN would not be significantly weighted; that is,
DN would not make an independent contribution to the prediction of
intention over and above the contributions of AB, IN, and PBC. The point
here is that even when DN is not significantly weighted, presenting DN
information might nevertheless be useful as a means of influencing
intentions (via the effect of DN information on AB or PBC).

Commentary
Three general aspects of the RAT merit some comment: consideration of
additional possible predictors, some suggested revisions of the attitudinal
and normative components, and the nature of the perceived control
component.

Additional Possible Predictors


As mentioned earlier, research on RAT has progressively added predictors
to the model, with each new predictor justified by virtue of its adding to
the predictability of intention. The natural question that arises is whether
any other additional predictors might be generally useful. Of the
possibilities that have been explored, two suggestions—anticipated affect
and moral norms—are discussed here as illustrations.30

Anticipated Affect
Behaviors sometimes have affective (feeling-related) consequences—they
can arouse regret, happiness, guilt, and so forth—and people can often
foresee these consequences (as when one scans the offerings at the movie
theatre, looking for a mood-brightening comedy). A good deal of research
now indicates that various anticipated emotions are dependably related to
intentions and behavior. A number of studies have reported such effects
specifically for anticipated regret (e.g., McConnell et al., 2000; for a
review, see Sandberg & Conner, 2008); for instance, Lechner, de Vries,
and Offermans (1997) found that among women who had not previously
undergone mammography, the best predictor of participation intentions
was anticipated regret (the greater the regret anticipated from not
undergoing mammography, the greater the intention to do so). Related
effects have been reported for anticipated guilt (e.g., Birkimer, Johnston,
& Berry, 1993; Steenhaut & Van Kenhove, 2006) and other anticipated
emotions (e.g., S. P. Brown, Cron, & Slocum, 1997; Leone, Perugini, &
Bagozzi, 2005).

Moreover, anticipated emotions can influence intentions above and beyond
the more common RAT predictors. Studies of cancer screening
(McGilligan, McClenahan, & Adamson, 2009), safer sex behaviors (e.g.,
Conner, Graham, & Moore, 1999; Richard, de Vries, & van der Pligt,
1998), vaccination (Gallagher & Povey, 2006), lottery playing (Sheeran &
Orbell, 1999a), safe driving practices (Parker et al., 1995; Simsekoglu &
Lajunen, 2008), and using drugs, using alcohol, and eating fast food
(Richard, van der Pligt, & de Vries, 1996a) have reported that measures of
anticipated emotional states (some general, others more specific) have
improved the predictability of behavioral intentions beyond that afforded
by various sets of RAT predictors (for some discussion, see Conner &
Armitage, 1998, pp. 1446–1448; for a review concerning anticipated regret
specifically, see Sandberg & Conner, 2008).

Notice that the observed independent effect of anticipated affect implies
that anticipated affective reactions are distinct from AB and from IN. One
might have imagined that the expected affective consequences of an action
would already be included in AB (because those consequences would
contribute to the overall evaluation of the action) or in IN (through
recognition of the views of significant others; e.g., “My mother would
want me to do this, and I’ll feel bad if I disappoint her”) and hence would
not make a separate contribution to the prediction of intention. But these
studies suggest otherwise.

Correspondingly, such studies suggest that anticipated affective reactions
provide a distinctive persuasive target (see Cappella, 2007). There is good
evidence that the anticipation of emotion can indeed be influenced,
primarily by heightening the salience of such anticipations. Several studies
have apparently influenced the salience of anticipated emotions simply by
asking about such feelings, with consequent effects on intention or
behavior. For example, Sheeran and Orbell (1999a, Study 4) found that
persons who answered a questionnaire item about regretting not playing
the lottery (and so who presumably were induced to anticipate regret)
intended to buy more lottery tickets than persons who did not answer such
a question. (For related manipulations, see Abraham & Sheeran, 2004,
Study 2; Hetts, Boninger, Armor, Gleicher, & Nathanson, 2000; O’Carroll,
Dryden, Hamilton-Barclay, & Ferguson, 2011; Richard et al., 1996b;
Sandberg & Conner, 2009.) Thus it seems that one straightforward
mechanism for engaging anticipated emotions is simply to invite receivers
to consider how they will feel if they follow (or do not follow) a particular
course of action.

A more focused mechanism might involve suggesting that receivers would
experience a given emotion if they followed a particular course of action—
say, that they would feel guilty if they cheated on their taxes. The potential
of such an approach is illustrated by Parker, Stradling, and Manstead’s
(1996) research, in which participants saw one of four videos aimed at
influencing intentions to speed in residential areas; the videos were meant
to influence either normative beliefs, behavioral beliefs, perceived
behavioral control, or anticipated regret. The anticipated regret video
appeared to evoke greater anticipated regret than other videos, and only the
anticipated regret video proved more successful than a control video in
inducing negative attitudes toward speeding. A similar illustration is
offered by an antilittering campaign in Oklahoma (employing appeals
meant to make people feel guilty if they littered), which seems to have
produced substantial increases in the proportion of residents who said that
they would feel guilty if they littered (Grasmick, Bursik, & Kinsey, 1991).

The potential influenceability of anticipated emotions is perhaps also
suggested by the occurrence of certain forms of advertising. For example,
guilt-based consumer advertising often seeks to engage anticipated guilt
feelings. Huhmann and Brotherton (1997) examined guilt-based
advertisements in popular magazines over a 2-year period and found that
most of the guilt appeals were “anticipatory” appeals (offering consumers
the opportunity to avoid a guilt-inducing transgression) as opposed to
appeals meant to arouse guilt. Sweepstakes promoters and lottery
advertising often seem to seek to induce thoughts about anticipated
emotions—including not just the potential positive emotional
consequences of winning but also the regret of not playing (“Suppose
you’ve been assigned the winning mega-prize number, but because you
didn’t enter we had to give the 10 million dollars to someone else”; see
Hetts et al., 2000, p. 346; Landman & Petty, 2000).

All the examples thus far are ones in which a persuader seeks to encourage
the anticipation of particular emotions. But sometimes a persuader might
want to prevent the anticipation of certain emotions. In consumer
purchases, one type of possible anticipated regret involves the prospect of
finding a lower price elsewhere (“If I find a lower price at another store,
I’ll regret buying the product now—hence I’ll postpone my purchase”). An
appropriate price guarantee (in which the seller promises that if the buyer
finds the product offered at a lower price elsewhere, the seller will match
that price) can undermine the creation of that anticipated regret (as
observed by McConnell et al., 2000).

In short, it seems clear that persuaders can indeed effectively engage
anticipated emotions, perhaps even through relatively simple mechanisms
that make anticipated emotions more salient. To the extent that anticipated
affect influences intentions beyond the factors identified by RAT,
anticipated affect will correspondingly be an important potential target for
persuaders.

Moral Norms
Another possible addition to the RAT is what can be called moral norms
(also sometimes termed personal norms or moral obligation), that is, a
person’s conception of morally correct or required behavior.
Questionnaires for assessing moral norms have included items concerning
perceived obligation (e.g., “I feel a strong personal obligation to use
energy-saving light bulbs,” Harland, Staats, & Wilke, 1999) or perceived
moral propriety (e.g., “It would be morally wrong for me to use
marijuana,” Conner & McMillan, 1999; “Not using condoms would go
against my principles,” Conner, Graham, & Moore, 1999).

Several studies have found that moral norms can enhance the prediction of
intention above and beyond the predictors already contained in the RAT.
Such increased predictability has been found, for example, in studies of
marijuana use (Conner & McMillan, 1999), condom use (Kok, Hospers,
Harterink, & De Zwart, 2007), environmental behaviors (M. F. Chen &
Tung, 2010; Harland et al., 1999), smoking cessation (Høie, Moan, Rise,
& Larsen, 2012; Moan & Rise, 2005), volunteering (Warburton & Terry,
2000), driving behaviors (Conner, Smith, & McMillan, 2003; Moan &
Rise, 2011), and charitable donations (J. R. Smith & McSweeney, 2007).
(For review discussions concerning moral norms, see Conner & Armitage,
1998, pp. 1441–1444; Manstead, 2000.) One supposes that the inclusion of
moral norms will not always contribute to the prediction of intention, but
there is little firm evidence yet concerning relevant moderating factors (see
Hübner & Kaiser, 2006; Manstead, 2000, pp. 27–28).

As with any potential addition to RAT, the apparent influence of moral
norms on intention suggests that such norms may be a distinctive focus of
influence efforts. Two general influence paths are possible: One involves
the creation of some new perceived moral norm, the other (surely more
generally useful and plausible) involves making some existing moral norm
more salient. But there is little explicit research guidance on such
questions.31

The Assessment of Potential Additions


The impulse to add predictors to the RAT model is a natural one. After all,
earlier additions of variables proved useful, and hence pursuit of still
further additions is to be expected. But in assessing possible new
predictors, two criteria might be kept in mind (for related general
discussion, see Ajzen, 2011; Fishbein & Ajzen, 2010, pp. 281–282;
Sutton, 1998). One is the size of the improvement in predictability
afforded by a given candidate addition. It will not be enough that a given
variable make a dependable (statistically significant) additional
contribution to the prediction of intention; a large additional contribution is
what is sought.

The second is the breadth of behaviors across which the proposed addition
is useful. In articulating a general model of behavioral intentions, one
wants evidence suggesting that a proposed addition is broadly useful. It
might be the case that improved prediction results from including variable
X when predicting behavioral intention Y but that result (even if frequently
replicated in studies of Y) does not show that X adds to the prediction of
intention sufficiently broadly (i.e., across enough different behaviors) to
merit the creation of a new general model that includes X.

But it is important also to bear in mind that there is a natural tension
between a generally useful model and accurate prediction in a given
application. In studying a particular behavior, an investigator might add
variables that improve the prediction of that intention, never mind whether
those added variables would be helpful in improving prediction in other
applications. Thus when one’s interest concerns some particular behavior
of substantive interest (as opposed to concerning the elaboration of general
models), RAT might be thought of as providing useful general starting
points. In any particular application, there might be additional predictors
(beyond AB, IN, DN, and PBC) that prove to be useful in illuminating the
behavior of interest—even if those additional factors are not generally
useful (that is, even if not useful in studying other behaviors). And any
such additional predictor, whether general or case-specific, is another
distinguishable potential target for persuaders. (An example of a
specialized RAT-like model is provided by the “technology acceptance
model,” which has a distinctive set of predictors of the intention to use a
new technology; for discussion and reviews, see Davis, Bagozzi, &
Warshaw, 1989; King & He, 2006; Schepers & Wetzels, 2007; Venkatesh
& Bala, 2008. Protection motivation theory, discussed in Chapter 11,
although not explicitly conceived of as a specialized RAT model, is
functionally similar in focusing specifically on protective intentions and
behaviors.)

Revision of the Attitudinal and Normative Components

The Attitudinal Component


Some commentators have suggested replacing the general attitudinal
component (AB) with two more specific ones, namely, an “instrumental”
attitude, reflecting the behavior’s anticipated positive or negative material
consequences, and an “experiential” (or “affective”) attitude, reflecting the
positive or negative experiences associated with the behavior. A number of
studies have suggested the potential value of such a distinction (e.g., Elliott
& Ainsworth, 2012; Keer, van den Putte, & Neijens, 2010; Lowe, Eves, &
Carroll, 2002).

However, rather than thinking of these as two different kinds of attitudes,
one might instead think of these categories as representing two different
kinds of beliefs that might contribute to attitude. So, for example, one’s
beliefs about exercise might include both (instrumental) beliefs about
health consequences and (experiential) beliefs about how it makes one
feel. So rather than distinguish two attitudes, one might instead distinguish
two possible kinds of belief underpinnings to attitude. Correspondingly,
persuaders will want to be attentive to the potential importance of
addressing each kind of belief (see Kiviniemi, Voss-Humke, & Seifert,
2007; Lawton, Conner, & McEachan, 2009; Wang, 2009).

This does, however, point to the importance of ensuring that one’s belief-
elicitation procedures evoke both kinds of belief (if both kinds are salient).
Some frequently used belief-elicitation procedures may be more likely to
elicit instrumental beliefs than affective ones (e.g., Sutton et al., 2003). To
ensure good representation of salient beliefs, then, researchers need to be
attentive to such issues. (For discussion of belief elicitation procedures, see
Breivik & Supphellen, 2003; Darker, French, Longdon, Morris, & Eves,
2007; Dean et al., 2006; Middlestadt, 2012.)

The Normative Components


The version of RAT presented here distinguishes injunctive and
descriptive norms as distinct influences on intention. Although often
positively correlated, these variables are conceptually distinct.32 Still,
because both concern normative influences on behavior, one might be
tempted to somehow combine IN and DN into a single general normative
element (e.g., Fishbein & Ajzen, 2010, pp. 130–133). But there are two
good reasons to resist such a merger, at least at the moment.

The first reason is that the injunctive norm and the descriptive norm appear
to operate in substantively different ways. Manning’s (2009) meta-analytic
review pointed to several differences between IN and DN in their
relationships to intentions and behaviors (e.g., the effects of DN may be
affected by the degree of social approval for the behavior in ways that IN
is not). A variety of other studies have pointed to the independent
operation of DN and IN (e.g., Park, Klein, Smith, & Martell, 2009; Park &
Smith, 2007; Vitoria, Salgueiro, Silva, & de Vries, 2009). For example,
there is evidence suggesting that whereas the effects of injunctive norms
characteristically require some degree of systematic thinking, descriptive
norms can operate in ways that require little cognitive effort (e.g.,
Göckeritz et al., 2010; Jacobson, Mortensen, & Cialdini, 2011; Melnyk,
van Herpen, Fischer, & van Trijp, 2011; for a general discussion, see
Goldstein & Mortensen, 2012). Taken together, such findings argue for
separate treatment of these two factors.

The second reason is measurement-related: Whereas there are well-established
ways of assessing attitude toward the behavior and perceived
behavioral control, there are not (yet, anyway) parallel means of assessing
generalized perceived norms (i.e., some assessment of the overall
generalized combination of injunctive and descriptive norms). And if
injunctive and descriptive norms in fact do operate substantively
differently (the first reason), then such measurement challenges will
eventually prove insurmountable. In sum, rather than treating IN and DN
as two contributors to one general normative factor, it seems preferable—
at least at present—to distinguish these as two different influences on
intention.

The Nature of the Perceived Control Component

PBC as a Moderator
As noted above, PBC does not seem to be quite like the other determinants
of intention. Instead of straightforwardly influencing intention as the
attitudinal and normative components do, PBC can instead plausibly be
thought to moderate the effects of those variables on intention.
Specifically, PBC can be seen as a necessary, but not sufficient, condition
for the formation of intentions, and hence AB, IN, and DN will influence
intention only when PBC is sufficiently high.

If this image is correct, persuaders would want to be alert to a possible
pitfall in focusing on PBC as a persuasion target. Specifically, increasing
PBC seems likely to increase behavioral intentions only when the other
determinants of intention (AB, IN, DN) incline the person toward a
positive intention. So, for example, persuading a person that exercising
regularly is under her control would presumably lead her to intend to
exercise only if she would otherwise be inclined to have a positive
intention given her AB, IN, and DN. When PBC is low, persuaders ought
not assume that increasing PBC will automatically mean corresponding
increases in intention.

If PBC operates in this sort of moderating fashion, then the usual statistical
tests of RAT should reveal an interaction effect such that the relationships
of intention to AB, IN, and DN would vary depending on the level of PBC
(and, specifically, that as PBC increases, there should be stronger relations
of AB, IN, and DN to intention). There is less empirical evidence on this
question than one might like, because researchers have not often
conducted the appropriate analyses. When the analyses have been
reported, some studies have not found the expected interaction (e.g.,
Crawley, 1990; Giles & Cairns, 1995), but an increasing number of studies
have detected it (e.g., Bansal & Taylor, 2002; Dillard, 2011; Hukkelberg,
Hagtvet, & Kovac, 2014; Kidwell & Jewell, 2003; Park, Klein, Smith, &
Martell, 2009; for a review, see Yzer, 2007). There are substantial
challenges to obtaining empirical evidence indicating such interactions
(e.g., considerable statistical power demands; see Manning, 2009, p. 662;
Yzer, 2007), so reports of nonsignificant interactions are not entirely
unexpected.
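The kind of analysis at issue can be sketched with a minimal simulation (made-up data illustrating the logic, not the method of any cited study): intention is regressed on the components plus a product term, and a reliably nonzero product-term weight signals the hypothesized moderation.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 2000
ab = rng.normal(size=n)    # attitude toward the behavior (standardized)
pbc = rng.uniform(size=n)  # perceived behavioral control, 0 = none, 1 = full

# Simulated mechanism: AB moves intention only to the extent PBC permits.
intention = pbc * ab + rng.normal(scale=0.1, size=n)

# Moderated regression: intercept, main effects, and the AB x PBC product.
X = np.column_stack([np.ones(n), ab, pbc, ab * pbc])
coefs, *_ = np.linalg.lstsq(X, intention, rcond=None)
b0, b_ab, b_pbc, b_interaction = coefs
print(round(b_interaction, 2))  # product-term weight is near 1 here
```

In this simulated data the main-effect weight for AB is near zero while the product-term weight is substantial, reflecting that AB’s relationship to intention strengthens as PBC rises; detecting such interactions in real data requires considerable statistical power, as noted above.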

In any case, that PBC is not quite on all fours with AB, IN, and DN is
perhaps indicated by the finding that PBC can be significantly negatively
related to intentions. Wall, Hinson, and McKee (1998) observed such a
relationship in a study of excessive drinking; the less control that people
thought they had over excessive drinking, the more likely they were to
report intending to drink to excess. (Relatedly, PBC has been found to be
negatively related to binge drinking—that is, frequent binge drinkers were
less likely than others to think that the behavior was under their control; P.
Norman, Bennett, & Lewis, 1998.) In a similar vein, Conner and McMillan
(1999) found PBC to be significantly negatively related to intentions to use
marijuana. Such results are consistent with Eagly and Chaiken’s (1993, p.
189) supposition that increasing PBC might enhance intention only to the
degree that the behavior is positively evaluated—and these results
certainly indicate that PBC is something rather different from the other
three components.

Indeed, such findings invite the conclusion that PBC can be a repository
for rationalization. “Why do I keep doing these bad things that I know I
shouldn’t [drinking to excess, smoking, and so forth]? Because I can’t help
myself; it’s not really under my control. And why do I fail to do these
good things I know I should do [exercising, recycling, and so on]? Gee, I’d
like to do them, I really would—but I just can’t, it’s out of my hands, not
under my control.” Persuaders may need to address such rationalizations in
order to lay the groundwork for subsequent behavioral change.

Refining the PBC Construct

Several commentators have suggested that it may be useful to distinguish
different facets of perceived behavioral control (e.g., Armitage & Conner,
1999a, 1999b; Cheung et al., 1999; Estabrooks & Carron, 1998; Rhodes,
Blanchard, & Matheson, 2006; Rodgers, Conner, & Murray, 2008). In
good measure this suggestion has been stimulated by findings indicating
that items used to measure PBC often fall into two distinct clusters (e.g.,
Myers & Horswill, 2006; P. Norman & Hoyle, 2004; Pertl et al., 2010;
Trafimow, Sheeran, Conner, & Finlay, 2002). But the nature of these
clusters is not entirely clear, and the labels used to distinguish them exhibit
considerable variety (for some discussion, see Ajzen, 2002; Fishbein &
Ajzen, 2010, pp. 153–178; Gagné & Godin, 2007; Yzer, 2012b).

As an illustration, one possible distinction might be between internal and
external aspects of PBC, that is, between internal resources and obstacles
to action (motivation, personal capabilities, and the like) and external
resources and obstacles (facilities, equipment, cooperation of others, and
so forth). For example, I might feel as though I am personally capable of
exercising regularly (internal resources) but believe that a lack of facilities
prevents me from doing so (external obstacles). It is possible, however, to
appreciate this distinction (and others) without replacing PBC with two
separate predictors (one for internal aspects and one for external aspects);
one could retain the single PBC component but recognize that different
kinds of underlying beliefs might contribute to an overall sense of
behavioral efficacy.

For persuaders, any way of distinguishing different elements underlying
PBC points to correspondingly different possible targets of persuasion. For
example, internal and external control elements might need separate
attention in a given persuasion setting—and these two elements might well
be differentially responsive to different influence mechanisms.33

Conclusion
Reasoned action theory has undergone extensive empirical examination
and development over time. It is unquestionably the most influential
general framework for understanding the determinants of voluntary action.
And in illuminating the underpinnings of behavioral intention, RAT
provides manifestly useful applications to problems of persuasion,
primarily by identifying potential points of focus for persuasive efforts.

For Review
1. What is the most immediate determinant of voluntary action?
According to reasoned action theory (RAT), what are the four
primary determinants of behavioral intention?
2. What is the attitude toward the behavior (AB)? Explain the difference
between attitude toward the behavior and attitude toward the object.
Describe the sorts of questionnaire items commonly used for
assessing the AB. What is the injunctive norm (IN)? Describe the
sorts of questionnaire items commonly used for assessing the IN.
What is the descriptive norm (DN)? Describe the sorts of
questionnaire items commonly used for assessing the DN. Explain the
difference between the injunctive norm and the descriptive norm.
What is perceived behavioral control (PBC)? Describe the sorts of
questionnaire items commonly used for assessing PBC. Give
examples of circumstances in which PBC might plausibly be the key
barrier to behavioral performance.
3. Do the components above influence intention equally? How are the
relative weights of the components assessed? How is PBC different
from the other three components? How predictable are intentions
from the four components?
4. Describe the five possible ways of influencing intention as identified
by RAT. If persuasion is attempted by changing one of the
components, does that component need to be significantly weighted?
Explain.
5. What are the determinants of the attitude toward the behavior (AB)?
What is belief strength, and how is it assessed? What is belief
evaluation, and how is it assessed? Explain how these combine to
yield the AB. What does the research evidence suggest about the
predictability of the AB from its determinants? Describe alternative
means by which the AB might be influenced. Explain (and give
examples of) changing the strength or evaluation of existing salient
beliefs. Explain (and give examples of) reconfiguring the set of
salient beliefs; identify two ways in which such reconfiguration might
be accomplished.
6. What are the determinants of the injunctive norm (IN)? What are
normative beliefs (and how are they assessed)? What is motivation to
comply (and how is it assessed)? Explain how these combine to yield
the IN. What does the research evidence suggest about the
predictability of the IN from its determinants? Identify two concerns
about the motivation-to-comply element. Describe alternative means
by which the IN might be influenced. Explain (and give examples of)
changing the normative belief or motivation to comply that is
associated with an existing salient referent. Explain (and give
examples of) the two ways of reconfiguring the set of salient
referents. Why is it often difficult to change the IN? Explain how
directing messages to salient referents might lead to changes in the
IN.
7. Describe the current state of understanding of the determinants of the
descriptive norm (DN). Explain how the DN might be changed. Give
an example of a message designed to influence the DN. Describe
some potential pitfalls for DN interventions.
8. Describe RAT’s account of the determinants of perceived behavioral
control (PBC). What is a control belief, and how can it be assessed?
What is the perceived power of a control factor, and how can it be
assessed? What is the current state of the research evidence
concerning the determinants of PBC? Describe four means of
influencing PBC. Explain how directly removing an obstacle to
performance can influence PBC. Distinguish (and give examples of)
two kinds of obstacles a persuader might try to remove. Explain how
successful performance of a behavior can influence PBC; give an
example. Explain how modeling can influence PBC; give an example.
Explain how encouragement can influence PBC; give an example.
9. Explain the strategy of influencing intention by changing the relative
weights of the components. To which of the four components does
this strategy potentially apply? In what sort of circumstance can this
strategy succeed in changing intention? What is the usual pattern of
association (correlation) between the AB, the IN, and the DN? What
does this pattern imply about changing the weights as a means of
influencing intention?
10. What does the research evidence suggest about the predictability of
behavior from intention? Identify three factors influencing the
strength of the relationship between measures of intention and
measures of behavior. Explain how the relationship between
measures of intention and measures of behavior is affected by the
degree of correspondence between the two measures. Do more
specific intention measures lead to higher correlations with behavioral
measures than do less specific intention measures? Explain how the
relationship between measures of intention and measures of behavior
is affected by the temporal stability of intentions. Explain how the
relationship between measures of intention and measures of behavior
is affected by explicit planning about behavioral performance. Give
examples of circumstances in which the task facing the persuader is
that of encouraging persons to act on existing intentions; describe
how a persuader might approach such a task. What explains the effect
of explicit-planning interventions on behavior? Does planning make
intentions more positive? Does planning increase perceived
behavioral control (PBC)? What are implementation intentions? Does
planning encourage the development of implementation intentions?
Identify four factors that might influence the effectiveness of explicit-
planning interventions.
11. What factor might improve the prediction of behavior (beyond the
predictability afforded by intention)? Under what conditions does
assessment of prior behavior improve the prediction of behavior?
12. Describe two general ways in which RAT suggests persuasive
messages might be adapted to recipients. Explain how the weights of
the determinants of intention provide a basis for message adaptation.
How can such weights be misleading? Describe how message
adaptation can be guided by consideration of the beliefs that underlie
the component to be changed.
13. Describe the basis on which additional possible predictors of
intention (beyond AB, IN, DN, and PBC) might be considered for
inclusion in the model. Describe two criteria for assessing such
additions. Identify two specific possible additional predictors. What is
anticipated affect? Can anticipated affect improve the prediction of
intentions beyond the predictability afforded by the four RAT
components? Describe how persuaders might try to influence
anticipated affect. What are moral norms? Can moral norms improve
the prediction of intentions beyond the predictability afforded by the
four RAT components? Describe how persuaders might try to
influence moral norms.
14. How might the attitudinal component (AB) be revised? Describe the
distinction between instrumental and experiential attitudes. Explain
how this distinction might reflect differences in the kinds of beliefs
underlying an attitude. Describe how the injunctive norm (IN) and the
descriptive norm (DN) might be revised by being merged into a
single normative factor. Discuss why such a merger might not be
advisable.
15. Explain how perceived behavioral control (PBC) might moderate the
effects of the other three components. Describe the current state of the
research evidence concerning such a moderating role. Explain how
different kinds of PBC questionnaire items might represent different
kinds of underlying control beliefs.

Notes
1. The viewpoint described in this chapter has appeared in a number of
different forms, with a number of different labels including the “theory of
reasoned action” (Ajzen & Fishbein, 1980), the “theory of planned
behavior” (Ajzen, 1991), the “integrative model of behavioral prediction”
(Fishbein, 2008), and the “extended theory of planned behavior”
(Sieverding, Matterne, & Ciccarello, 2010). Following Fishbein and Ajzen
(2010), this presentation identifies four predictors of intention: attitude
toward the behavior, injunctive norms (formerly called the “subjective
norm”), descriptive norms, and perceived behavioral control. However,
whereas Fishbein and Ajzen’s (2010) presentation has one general
“perceived norms” factor that includes both injunctive and descriptive
norms, the presentation here treats those two normative elements as
distinct.

2. The specification of the behavior of interest is a matter requiring close
attention. For some discussion, see Fishbein and Ajzen (2010, pp. 29–32)
and Middlestadt (2007).

3. The attitude of interest here is specifically the attitude toward the
behavior in question. The suggestion is that, for example, the intention to
buy a Ford automobile is influenced most directly by one’s attitude toward
the behavior of buying a Ford automobile rather than by one’s attitude
toward Ford automobiles. Attitudes toward objects may have some
relationship to attitudes toward actions, but RAT suggests that attitudes
toward actions are the more immediate determinant of intentions.

4. Cialdini’s (2009, p. 99) principle of “social proof”—that “we view a
behavior as correct in a given situation to the degree that we see others
performing it”—offers another illustration of the power of descriptive-
normative perceptions.

5. RAT also expects that sometimes PBC will appear to have a direct
relationship to behavior. In circumstances in which actual (not perceived)
behavioral control influences performance of the behavior, then to the
extent that persons’ perceptions of behavioral control are accurate (and so
co-vary with actual behavioral control), to that same extent PBC will be
related to behavior.

6. This procedure—rather than simply asking people directly how
important the attitudinal and normative considerations are to them—is
used because there is reason to think such self-reports are not sufficiently
accurate (Fishbein & Ajzen, 1975, pp. 159–160; for an illustration, see
Nolan, Schultz, Cialdini, Goldstein, & Griskevicius, 2008). The
inadequacy of these self-reports has prevented satisfactory estimation of
the weights for a given individual’s intention to perform a given behavior
(for exploration of some possibilities other than such self-reports, see
Budd & Spencer, 1984; Hedeker, Flay, & Petraitis, 1996).

7. There is room for reasonable doubt about the generality of the
contribution of descriptive norms to the prediction of intention, so some
caution may be appropriate. On the one hand, DN seems to be correlated
reasonably strongly with intention—sometimes as strongly as AB (see the
review of Rivis & Sheeran, 2003), sometimes less strongly (see the review
of Manning, 2009), but generally quite positively. And DN has often been
found to make an independent contribution to the prediction of intention
(i.e., it improves the predictability of intention beyond that afforded by
AB, IN, and PBC). On the other hand, there is much less research evidence
concerning DN compared with the other three variables; for example,
Manning’s (2009) review identified 162 reports of the correlation between
AB and intention, but only 17 for the correlation between DN and
intention. One might reasonably anticipate that the early investigations of
the role of DN could (quite sensibly) have explored behaviors for which
DN was especially likely to play an important role. (A similar point about
studies of anticipated regret as a possible additional predictor—discussed
separately—was made by Sandberg & Conner, 2008.) It often enough
happens that early research findings turn out to be overstated (Ioannidis,
2008), so the apparent contribution of DN to the prediction of behavior
may well weaken as additional research results accumulate.

8. Many of the issues that have arisen in the context of belief-based models
of attitude (see Chapter 4)—such as the potentially artifactual contribution
of belief strength scores to the prediction of attitude—can naturally arise
here as well, because the same summative model of attitude, with the same
procedures, is involved (e.g., Gagné & Godin, 2000; O’Sullivan, McGee,
& Keegan, 2008; Steadman & Rutter, 2004; Trafimow, 2007). Although
Armitage and Conner (2001) report a mean correlation of .50 between
behavioral beliefs and AB across 42 studies, it is not clear whether this
represents correlations with ∑biei, ∑bi, ∑ei, or some combination of these.

9. McEachan et al.’s (2011) reported mean correlation of .53 (with IN)
includes both studies using ∑nimi as the predictor and studies using ∑ni. It
is not clear whether Armitage and Conner’s (2001) reported mean
correlation of .50 between normative beliefs and IN (across 34 studies) is
an average of correlations involving ∑nimi, ∑ni, ∑mi, or some
combination of these.

10. In addition, the issues discussed in Chapter 4 (on belief-based models
of attitude) concerning alternative scale scoring procedures also arise in
the context of scoring normative belief and motivation-to-comply items.
For some discussion of the phrasings and scorings of such items, see
Fishbein and Ajzen (2010, pp. 137–138), Gagné and Godin (2000), and
Kantola et al. (1982).

11. Different types of resources and obstacles call for different phrasings
of questionnaire items, especially with respect to control beliefs
(likelihood or frequency of occurrence). For instance, although the control
belief associated with bad weather could be assessed by asking
respondents how frequently bad weather occurs where they live (with
scales end-anchored by phrases such as “very frequently” and “very
rarely”), the control belief concerning a lack of facilities might better be
assessed by asking a question such as “I have easy access to exercise
facilities” (with end-anchors such as “true” and “false”). An additional
complexity: Because ∑cipi is a multiplicative composite (as are ∑biei and
∑nimi), the same scale-scoring issues (e.g., unipolar vs. bipolar) can arise
(see Gagné & Godin, 2000).

12. The challenge here can be illustrated by Elliott et al.’s (2005) research
concerning speed limit compliance. In preliminary research, 12 possible
control beliefs were identified. In the main study, a belief was retained for
analysis if its control belief (ci) or its power belief (pi) or its product (cipi)
was statistically significantly related to PBC. Only four beliefs survived
this winnowing process.

13. Armitage and Conner (2001) reported a mean correlation of .52 (across
18 studies) between control beliefs and PBC, but it’s not clear whether this
represented correlations of PBC with ∑cipi, ∑ci, ∑pi, or some combination
of these. McEachan et al. (2011) reported a mean correlation of .41 across
27 studies examining either the ∑cipi-PBC correlation or the ∑ci-PBC
correlation, but results were not reported separately for these two sets of
correlations.

14. The lack of any standardized item format for assessing control beliefs
and powerfulness—necessitated by variation in the types of factors under
study (see note 11 above)—has produced considerable diversity in the
details of these belief-based assessments, and it is not always clear how
best to characterize the measures employed. For example, P. Norman and
Smith (1995) presented respondents with a list of seven barriers to
physical activity (such as a lack of time or the distance from facilities) and
asked respondents to indicate, “Which of the following reasons would be
likely to stop you from taking regular exercise?” (with responses given on
a 7-point scale anchored by “extremely likely” and “extremely unlikely”).
Although based on likelihood ratings, the resulting index appears not to
assess control beliefs (the perceived frequency or likelihood of occurrence
of a control factor); it might better be seen as amalgamating assessment of
powerfulness and likelihood of occurrence (notice that the question asks
for the likelihood that the factor will prevent the behavior—not the
likelihood that the factor will occur) or perhaps even as assessing simply
the factor’s powerfulness (the question might be taken to mean, “Which of
the following reasons, if they occurred, would be likely to stop you from
taking regular exercise?”).

15. The effect of including the parking permit or bus pass was only partly
attributable to its removal of transportation obstacles. Apparently, the
inclusion of the permit/pass also helped convince recipients of the value of
making a return visit (“This must be important, otherwise they would not
send me a bus pass”), which in turn helped boost return rates (Marcus et
al., 1992, p. 227). Sometimes persuasion happens in unexpected ways.

16. This list of alternative mechanisms reflects some parallels with
Bandura’s (1997, pp. 79–115) analysis of sources of self-efficacy, which
include “enactive attainment” (experiences of genuine mastery of the
behavior), “vicarious experience” (seeing or imagining others perform
successfully), and “verbal persuasion” (having others provide assurances
about one’s possession of the relevant capabilities).

17. This strategy (of altering the relative weights of the components so as
to influence intention) is probably in general applicable only to AB, IN,
and DN—not PBC. For example, if a person has a negative AB, negative
IN, and negative DN, then emphasizing “you really do have the ability to
do this” is unlikely to be very persuasive. This is related to the earlier point
that PBC is not quite like the other determinants of intention, in that it
seems more a variable that enables AB, IN, and DN (in the sense that those
variables will influence intention only when PBC is sufficiently high).
However, such an image does suggest that if a person has a positive AB,
positive IN, and positive DN, then emphasizing “you really don’t have the
ability to do this” might help to discourage formation of a positive
intention. For example, imagine trying to discourage a friend from an
excessively expensive purchase by saying (though perhaps not in so many
words) “you really can’t afford this.”

18. More carefully: The direction of intention can be changed by altering
the weights of these three components only when one of those three
components differs in direction from the other two. But even if all three
components incline the person in the same direction, the extremity of the
intention might be changeable. For example, if AB, IN, and DN are all
positive but vary in just how positive they are, then if a slightly positive
component were to come to be weighted more heavily than a strongly
positive component, the intention would presumably weaken (though it
would still be positive).

19. As another indication that PBC may operate in a different fashion from
the other three variables, the average correlations of PBC with the other
three components appear to be smaller than the correlations among those
three. In Rivis and Sheeran’s (2003) review, the mean correlations of PBC
with AB, IN, and DN were between .05 and .20; in Manning’s (2009)
review, those mean correlations ranged from roughly .20 to .45.

20. To be careful here: A positive correlation between two components
does not necessarily mean that if one component is positive, the other will
be as well; it means only that the two components vary directly (so that as
one becomes more positive, so does the other). Imagine, for example, that
in a group of respondents, each respondent has a positive attitude toward
the behavior and a negative injunctive norm; those with very strongly
positive attitudes, however, have only slightly negative injunctive norms,
whereas those with only slightly positive attitudes have very strongly
negative norms. There is a positive correlation between the two
components (as the attitude becomes more positive, the norm also

206
becomes more positive—that is, less negative), although for each
individual, one component is positive and the other is negative. But insofar
as the persuasive strategy of altering the weights is concerned, the
implication is (generally speaking) the same: Altering the weights of the
components is not likely to be a broadly successful way of changing
intention (because of the unusual requirements for the strategy’s working
—e.g., a dramatic change in the weights may be necessary).
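The scenario in this note can be made concrete with a small numeric sketch. The scores below are invented for illustration (they are not data from any study): every respondent's attitude is positive and every injunctive norm is negative, yet the two components correlate perfectly positively.

```python
# Invented illustrative scores for five respondents. Respondents with very
# strongly positive attitudes (AB) have only slightly negative injunctive
# norms (IN); those with slightly positive attitudes have strongly negative norms.
attitudes = [5, 4, 3, 2, 1]        # all positive
norms = [-1, -2, -3, -4, -5]       # all negative

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(pearson(attitudes, norms))   # 1.0: as AB grows more positive, IN grows less negative
```

The correlation is +1.0 even though, for every individual, one component is positive and the other negative — exactly the point made in the text.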

21. There are many potential pitfalls in interpreting intention-behavior
correlations. For example, there is obviously some basis for worry about
causal interpretations of cross-sectional intention-behavior correlations
(i.e., correlations based on data collected at a single point in time).

But even positive longitudinal correlations (where behavior is assessed
subsequent to intentions) can potentially be misleading, because the
correlation between intention and subsequent behavior does not indicate
whether people’s intentions were realized in their behaviors. If intention
and subsequent behavior are positively correlated in a given data set, that
does not necessarily mean that—in any ordinary sense—people acted
consistently with their intentions. For example, imagine a sample in which
people said that they intended to exercise an average of 25 days over the
next 30 days, but subsequent behavioral assessments indicated that in fact
they exercised an average of only 5 days. This would seem to be a
straightforward case of inconsistency between intention and behavior—
and yet with such data intentions and behaviors could be perfectly
positively correlated. To see this, imagine a small data set (N = 11) in
which the intention scores (intended days of exercise) were 30, 29, 28, …
22, 21, and 20 (mean = 25.0) and the corresponding behavior scores
(actual days exercised) were 10, 9, 8, … 2, 1, and 0 (mean = 5.0). That
is, the participant with an intention score of 30 had a behavior score of 10,
the participant with an intention score of 29 had a behavior score of 9, and
so on. The intention-behavior correlation is +1.00.

Similarly, imagine a sample in which people said that they intended to
exercise an average of 25 days over the next 30 days, and subsequent
behavioral assessments indicated that in fact they did exercise an average
of 25 days. One would think that this represents pretty decent intention-
behavior consistency, and yet the intention-behavior correlation in such a
sample could be perfectly negative (-1.00). To see this, imagine another
small data set (N = 11) in which the intention scores (intended days of
exercise) were again 30, 29, 28, … 22, 21, and 20 (mean = 25.0), but with
corresponding behavior scores (actual days exercised) of 20, 21, 22, … 28,
29, and 30 (mean = 25.0). That is, the participant with an intention score of
30 had a behavior score of 20, the participant with an intention score of 29
had a behavior score of 21, and so on. The intention-behavior correlation is
-1.00.
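Both hypothetical data sets can be checked with a few lines of code (a sketch using only the note's illustrative values):

```python
# Intention scores (intended days of exercise) for the hypothetical N = 11 sample.
intentions = list(range(30, 19, -1))      # 30, 29, ..., 20; mean = 25.0

# First data set: gross shortfall (people exercised far less than intended).
behavior_low = list(range(10, -1, -1))    # 10, 9, ..., 0; mean = 5.0

# Second data set: the means match exactly, but the ordering is reversed.
behavior_reversed = list(range(20, 31))   # 20, 21, ..., 30; mean = 25.0

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(pearson(intentions, behavior_low))       # 1.0 despite a 20-day shortfall in the means
print(pearson(intentions, behavior_reversed))  # -1.0 despite identical means
```

The correlation is blind to the means of the two variables, which is why it cannot by itself index whether intentions were realized in behavior.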

So rather than simply examining intention-behavior correlations as indices
of intention-behavior consistency, it might be more helpful to pursue
something like Sheeran’s (2002) strategy. He analyzed intention-behavior
relationships as a set of four cases that form a 2 × 2, where the contrasts
are positive vs. negative intention and performance vs. nonperformance of
the behavior. The four kinds of cases are the “inclined actor” (positive
intention, performs the behavior), the “inclined abstainer” (positive
intention, does not perform the behavior), the “disinclined actor” (negative
intention, but performs the behavior), and the “disinclined abstainer”
(negative intention, does not perform the behavior). The first and last cases
represent intention-behavior consistency; the two middle cases represent
inconsistency. Sheeran reviewed several studies that collectively suggest
that intention-behavior inconsistency is largely due to inclined abstainers
rather than to disinclined actors. Application of Sheeran’s (2002)
framework is sometimes, but not always, straightforward: It’s easy enough
to create such a matrix with two-category variables (e.g., “Do you intend
to get a flu shot next month—yes or no?” and “Did vs. didn’t get a shot”),
but continuous variables can be a bit more challenging. Still, this
framework underscores the need for closer attention to intention-behavior
relationships than one would obtain from examination of correlations
alone. Indeed, there are good reasons for broader concerns about the use of
correlational data to assess models like RAT (Noar & Mehrotra, 2011;
Weinstein, 2007).

22. It has sometimes been suggested that the strength of the intention-
behavior relationship will be affected by the time interval between the
assessment of intention and the assessment of behavior (the idea being that
as the time interval increases, the predictability of behavior from intention
will decrease; see, e.g., Ajzen, 1985; Ajzen & Fishbein, 1980, p. 47). The
supposition is that with an increased time interval between intention
assessment and behavioral assessment, there would be increased
opportunity for a change in intention. As it happens, it is not clear that
variations in the size of the time interval have general effects on the
intention-behavior relationship (for a review, see Randall & Wolff, 1994,
but also see Sheeran & Orbell, 1998, pp. 234–235). But of course if (for a
particular behavior) persons’ intentions are relatively stable across time,
then variations in the interval between assessments would not show much
effect on the intention-behavior relationship. The relevant points to notice
here are that (a) time interval variation is a poor proxy measure of
temporal instability in intentions, and (b) an apparent absence of broad
time interval effects on the strength of the intention-behavior relationship
is not necessarily inconsistent with the hypothesis that temporal instability
of intentions influences the strength of the intention-behavior relationship.

23. At least some explicit-planning interventions have told participants
they will be more likely to perform the behavior if they write out a plan.
For example, Sheeran and Orbell (2000b, p. 285) told participants “You
are more likely to go for a cervical smear if you decide when and where
you will go” (similarly, see Michie, Dormandy, & Marteau, 2004;
Steadman & Quine, 2004). This raises the possibility that expectancy-
related effects (as have been observed with subliminal self-help
audiotapes: Spangenberg, Obermiller, & Greenwald, 1992) might have
contributed to some of the observed effects (but see also Chapman,
Armitage, & Norman, 2009).

24. There is a slight complexity here, as RAT acknowledges that in
addition to intention, PBC may also have a direct relationship with
behavior by virtue of a possible correspondence between PBC and actual
barriers to action.

25. At least some of these meta-analytic reviews reported confidence
intervals that appear to have been based on fixed-effect analyses, which
are less appropriate than random-effects analyses given interests in
generalization (see, e.g., Borenstein, Hedges, Higgins, & Rothstein, 2010).
Random-effects analyses would typically produce wider confidence
intervals, so comparison of mean effect sizes within these meta-analyses
should be undertaken cautiously.

26. Research exploring factors that might systematically affect the relative
influence of the components is unfortunately scattered across a variety of
factors, including self-monitoring (DeBono & Omoto, 1993), culture (Al-
Rafee & Dashti, 2012; Bagozzi, Lee, & Van Loo, 2001), mood (Armitage,
Conner, & Norman, 1999), degree of group identification (Terry & Hogg,
1996), state versus action orientation (Bagozzi, Baumgartner, & Yi, 1992),
private versus collective self-concepts (Ybarra & Trafimow, 1998), and
others (Latimer & Ginis, 2005b; Thuen & Rise, 1994). For some
discussion, see Fishbein and Ajzen (2010, pp. 193–201).

27. Nonintenders might be further subdivided on the basis of other
characteristics, and those subgroups compared for potential differences (in
salient referents, behavioral beliefs, and so forth) relevant to constructing
persuasive messages. For instance, in designing antimarijuana messages, it
may be important to recognize differences between low-risk and high-risk
adolescents (Yzer et al., 2004). Of course, sometimes one size will fit all
(Darker, French, Longdon, Morris, & Eves, 2007).

28. Two other examples to illustrate the importance of examining zero-order
correlations: In Dillard’s (2011) study of HPV vaccination
intentions, AB and IN had relatively similar zero-order correlations with
intention (.78 and .63), but very different beta-weights (.51 and .17). In
Sheeran and Orbell’s (1999a) two studies of lottery playing, IN had a
significant beta-weight (in regressions predicting intention) in Study 1 but
not in Study 2—even though the zero-order correlation of IN with
intention was identical (.41) in the two studies.

29. Some readers will recognize this as simply an example of the general
point that the presence of multicollinearity conditions the interpretation of
partial coefficients.

30. Of additional possible predictors not discussed here, self-identity is
perhaps the most notable. For research examples and review discussions,
see Astrom and Rise (2001); Booth, Norman, Harris, and Goyder (2014);
Conner and Armitage (1998); Cook, Kerr, and Moore (2002); Mannetti,
Pierro, and Livi (2004); Nigbur, Lyons, and Uzzell (2010); and Rise,
Sheeran, and Hukkelberg (2010).

31. At least in some circumstances, moral norms and anticipated affect
may be closely intertwined; one’s moral obligations might lead to
expecting to feel guilty or regretful if one fails to live up to one’s personal
standards. For example, the beliefs “I feel I have a moral obligation to
donate to charity” (moral norm) and “I’ll feel bad [guilty, regretful] if I fail
to donate to charity” (anticipated affect) naturally hang together. In fact,
some measures of moral norms (or personal norms or moral obligation)
have included items concerning affective states (e.g., Conner & McMillan,
1999; Godin et al., 1996; Harland et al., 1999; Moan & Rise, 2005;
Warburton & Terry, 2000). But anticipated affect and moral norms may
sometimes also be usefully distinguished with respect to their influences
on intention. For example, a committed environmentalist’s expectation of
guilt feelings from failing to recycle is probably different from a person’s
anticipated regret about not playing the lottery; the former is more closely
bound up with significant personal identity questions, the latter probably
more with potentially forgone monetary gains. The larger point is that
although there are plainly connections to be explored between moral
norms and anticipated affect, one probably should not fuse these into a
single element. Similarly, moral norms may be seen to be related to
injunctive norms (IN). The IN has a particular referent group (“people who
are important to me”) and a particular target person (the respondent):
“Most people who are important to me think I should/should not engage in
behavior X.” This is not far from “Most people who are important to me
think it is wrong to engage in behavior X,” “Most people think it is wrong
to engage in behavior X,” and “(I think) it is wrong to engage in behavior
X.” But, again, running all these together into a single element is probably
not advisable.

32. Rivis and Sheeran (2003) reported a mean correlation of .38 across 14
studies; Manning (2009) reported a mean correlation of .59 across 12
studies.

33. A contrast between internal and external aspects is not the only
possible way of distinguishing elements of PBC. For example,
Fishbein and Ajzen (2010, pp. 168–177) have suggested that the key
distinction (between facets of PBC) is actually that between perceived
capacity (ability, capability) and perceived autonomy (degree of control), a
distinction they argue is conceptually and empirically independent of a
contrast between internal and external factors. For present purposes, the
question of how best to interpret the various observed PBC item
clusterings does not need to be settled (and there may not be only one
appropriate taxonomy). The point here is that any such clusters might be
seen as representing substantively different sorts of beliefs underlying
PBC, with correspondingly distinct targets for persuasion.

Chapter 7 Stage Models

The Transtheoretical Model


Decisional Balance and Intervention Design
Self-Efficacy and Intervention Design
Broader Concerns About the Transtheoretical Model
The Distinctive Claims of Stage Models
Other Stage Models
Conclusion
For Review
Notes

Persuasion characteristically has behavior change as its eventual goal.
Stage models of behavioral change depict such change as involving
movement through a sequence of distinct phases (stages). People at
different stages are taken to need different kinds of messages
(interventions, treatments, etc.) to encourage them to move to the next
stage.

A number of different stage models have been offered. This chapter
focuses on the approach of the most-studied stage model, the
“transtheoretical model.” Subsequent sections briefly discuss other stage
models and the distinctiveness of stage approaches.

The Transtheoretical Model


The transtheoretical model (TTM) is an approach developed from analysis
of a number of different theories of psychotherapy and behavior change, in
the context of changing undesirable health behaviors such as smoking. The
goal was to integrate these diverse approaches by placing them in a larger
(“transtheoretical”) framework. (For some general presentations, see
Prochaska & DiClemente, 1984; Prochaska, Redding, & Evers, 2002.)

Instead of seeing behavior change as a singular event (quitting smoking,
for example), the TTM suggests that behavior change involves movement
through a sequence of five distinct stages: precontemplation,
contemplation, planning, action, and maintenance. In the precontemplation
stage, the person is not considering changing his or her behavior; for
example, a smoker in the precontemplation stage is not even thinking
about giving up smoking. In the contemplation stage, the person is
thinking about the possibility of behavioral change; a smoker in the
contemplation stage is at least considering quitting. In the planning stage,
the person is making preparations for behavior change; a smoker in the
planning stage is making arrangements to quit (choosing a quit date,
purchasing nicotine gum, and the like). In the action stage, the person has
initiated behavioral change; in this stage, the (now ex-) smoker has
stopped smoking. In the maintenance stage, the person sustains that
behavioral change; an ex-smoker in the maintenance stage has managed to
remain an ex-smoker.1

The TTM does not claim that stage movement is always a straightforward
linear process. To the contrary, it acknowledges that people may move
forward, backslide, cycle back and forth between stages, and so on, in a
complex and dynamic way. However, individuals are not expected to skip
any stage (Prochaska, DiClemente, Velicer, & Rossi, 1992, p. 825), and
hence these stages are offered as representing a general sequence through
which people pass in the course of behavior change.

People are said to progress through these stages using various “processes
of change.” TTM presentations often list 10 such processes (e.g.,
Prochaska, Redding, & Evers, 2002): self-reevaluation (reconsideration of
one’s self-image, such as one’s image as a smoker), environmental
reevaluation (assessment of the effects of one’s behavior on others, as
when a smoker considers the effects of secondhand smoke),
counterconditioning (healthier behaviors that can substitute for the
problem behavior, such as the use of nicotine replacement products),
consciousness raising (increased awareness of causes and effects of, and
cures for, the problem behavior), dramatic relief (the arousal and
attenuation of emotion, as through psychodrama), self-liberation
(willpower, a commitment to change), helping relationships (support for
behavioral change), contingency management (creation of consequences
for choices), stimulus control (removing cues that trigger the problem
behavior, adding cues to trigger the new behavior), and social liberation
(external policies and structures such as smoke-free zones).2

The TTM suggests that different processes of change are relevant at
different stages, but research evidence on this matter is sparse. This is
especially unfortunate because such evidence would provide valuable
guidance about how to construct effective interventions. For example, if
self-reevaluation were known to be a change process distinctly associated
with the movement from precontemplation to contemplation, then
interventions based on that process might be targeted specifically to
individuals in precontemplation. But what evidence is in hand seems to
suggest that the various processes of change can often be useful across a
number of different stages (see, e.g., Callaghan & Taylor, 2007; Guo,
Aveyard, Fielding, & Sutton, 2009; Rosen, 2000; Segan, Borland, &
Greenwood, 2004).

Even so, it seems apparent that different behavior-change interventions
(even if not described in terms of the 10 processes of change) should be
effective for people at different stages. For example, a smoker in
precontemplation presumably needs a different intervention than does a
smoker in the planning stage. So the key question is how to develop
interventions that are matched to the recipient’s stage. In support of the
development of stage-matched interventions, the TTM has come to
identify two particular mediating processes as crucial to the process of
behavior change: decisional balance (the person’s assessment of the pros
and cons of the new behavior) and self-efficacy (the person’s assessment
of his or her ability to perform the new behavior). The following sections
consider each of these.3

Decisional Balance and Intervention Design

Decisional Balance
One intriguing aspect of TTM research concerns “decisional balance,” the
person’s assessment of the importance of the pros (advantages, gains) and
cons (disadvantages, losses) associated with the behavior in question.4 The
expectation is that as people progress through the stages, the importance of
the pros of behavior change will come to outweigh the importance of the
cons. In the relevant research, respondents are provided with a
standardized list of pros and cons of the new behavior and are asked to rate
the importance of each to the behavioral decision (e.g., on a scale with
end-anchors such as “not important” and “extremely important”;
Prochaska et al., 1994, p. 42). The relative importance of the various pros
and cons can then straightforwardly be assessed.

Considerable research evidence has accumulated concerning decisional
balance, confirming that these assessments do vary depending on the
person’s stage (for some reviews, see Di Noia & Prochaska, 2010; Hall &
Rossi, 2008; Prochaska et al., 1994; but see also Sutton, 2005b, pp. 228–
233). As summarized by Di Noia and Prochaska (2010, p. 619): “The
balance between the pros and cons varies across stages. Because
individuals in pre-contemplation are not intending to take action to change
a behavior, the cons outweigh the pros in this stage. Pros increase and cons
decrease from earlier to later stages. In action and maintenance stages, the
pros outweigh the cons. A crossover between the pros and cons occurs
between precontemplation and action stages.”

This description is potentially a little misleading, however. The research in
hand shows that “pros increase and cons decrease from earlier to later
stages” in the specific sense that there are such changes in the perceived
importance of the pros and cons. The research evidence does not concern,
for example, possible changes in the number of perceived advantages
(pros) and disadvantages (cons), in the desirability or perceived likelihood
of the advantages and disadvantages, or in other properties of potential
interest. What the evidence shows is that in pre-action stages, the
perceived importance of the advantages of the new behavior is not greater
than the perceived importance of the disadvantages, but that once action
has been initiated, the perceived importance of the pros is greater than the
perceived importance of the cons.5

In some ways this may not be too surprising. If people think that the
importance of the advantages of a given behavior is not greater than the
importance of its disadvantages, they may not be especially motivated to
adopt that behavior. People who have adopted the behavior, on the other
hand, are naturally likely to think the advantages are more important than
the disadvantages.

Decisional Balance Asymmetry


TTM research has identified an unusual aspect of the changes in the
perceived importance of the pros and cons: they are not symmetrical. The
characteristic pattern is that the size of the increase in the perceived
importance of the pros is larger than the size of the decrease in the
perceived importance of the cons.6

This difference has been expressed as a matter of “strong and weak
principles” for progression through the stages described. These principles
describe the different amounts of change (in the perceived importance of
the pros and the cons) in terms of the standard deviation of each. The
strong principle is that “progress from precontemplation to action involves
approximately one standard deviation increase in the pros of changing.”
The weak principle is that “progress from precontemplation to action
involves approximately .5 SD decrease in the cons of changing”
(Prochaska, Redding, & Evers, 2002, pp. 105, 106). That is, the perceived
importance of the new behavior’s advantages increases by about one
standard deviation between precontemplation and action, while the
perceived importance of the behavior’s disadvantages decreases by about
half that much. (For reviews and discussion, see Di Noia & Prochaska,
2010; Hall & Rossi, 2008; Prochaska, 1994.)7
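
The standard-deviation arithmetic behind the two principles can be sketched as follows; the mean ratings and the standard deviation below are invented purely to illustrate the computation.

```python
# Change in the perceived importance of pros (or cons) between stages,
# expressed in standard deviation units (numbers invented for
# illustration).

def standardized_change(mean_early, mean_late, sd):
    """Change from an earlier to a later stage, in SD units."""
    return (mean_late - mean_early) / sd

# Strong principle: pros rise by about one SD between
# precontemplation and action.
pros_change = standardized_change(mean_early=2.8, mean_late=3.8, sd=1.0)

# Weak principle: cons fall by about half an SD.
cons_change = standardized_change(mean_early=3.4, mean_late=2.9, sd=1.0)
```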

Implications of Decisional Balance Asymmetry


The asymmetry in these changes suggests that the adoption of a new
behavior may be less a matter of the person’s deciding that the behavior’s
disadvantages are insignificant than it is a matter of deciding that the
advantages make the behavior worth doing. If these two kinds of
considerations (the importance of the advantages and the importance of the
disadvantages) were equally influential, one might expect roughly equal
amounts of change as people move through the stages. But stage
progression is associated with distinctly different amounts of change in the
perceived importance of the advantages and disadvantages.

This asymmetry can be seen to have straightforward implications for the
design of effective interventions meant to move people from the
precontemplation stage to the action stage: “In progressing from
precontemplation to action, tailored interventions should place primary
emphasis on increasing the pros of change” (Hall & Rossi, 2008, p. 271).
“For example, individuals in precontemplation could receive feedback
designed to increase their pros of changing to help them progress to
contemplation” (Prochaska, Redding, & Evers, 2002, p. 108). Expressed
more carefully, the apparent implication is that for message recipients in
precontemplation, it may be more useful for persuaders to try to increase
the perceived importance of the advantages of the new behavior than to try
to reduce the perceived importance of the disadvantages.8

However, such conclusions (about how to tailor interventions for pre-
action stages) cannot be justified by the data in hand. The characteristic
research design concerning decisional balance consists of measuring both
people’s stages and their decisional balance assessments at one point in
time. The evidence concerning decisional balance thus takes the form of a
finding that (for example) for people who are in precontemplation, the
perceived importance of the pros of the new behavior is not greater than
the perceived importance of the cons, whereas for people who are in action
or post-action stages, the perceived importance of the pros is greater than
that of the cons.

The trouble with such data is that one cannot tell whether these decisional
balance shifts caused the movement from one stage to the next, as opposed
to simply being associated with (or being a result of) stage change. For
example, decisional balance might work in the following way. When the
perceived importance of the pros is greater than the perceived importance
of the cons, the person adopts the new behavior. This activates post-choice
dissonance reduction processes (see Chapter 5), in which the perceived
importance of the pros increases further and the perceived importance of
the cons decreases further—and this post-action period can be the time at
which the asymmetry appears (i.e., after behavioral initiation, not before).

The point is: The current evidence is not sufficient to underwrite a design
principle for interventions aimed at influencing decisional balance. One
cannot tell, on the basis of the kinds of studies in hand, whether the
asymmetry in decisional balance changes is a precursor, correlate, or
consequence of such change. For supporting recommendations about
intervention design, better evidence would be provided by experimental
research. Such work could compare the effectiveness (in moving people
from precontemplation to action) of messages that aimed either at
increasing the perceived importance of the advantages of the new behavior
or at reducing the perceived importance of the disadvantages of the new
behavior. Evidence of this sort is not yet in hand, but would plainly be
welcomed.

Self-Efficacy and Intervention Design

Intervention Stage-Matching
As briefly discussed above, the general idea of stage-matching of
interventions (messages, treatments) is straightforward: People in different
stages of change presumably need different interventions to encourage
movement to the next stage. This idea is depicted in an abstract way in
Figure 7.1. Intervention A is adapted to (matched to) persons who are in
Stage 1 and is designed to move people from Stage 1 to Stage 2.
Interventions B and C are matched to persons in Stages 2 and 3,
respectively, because those interventions are meant to move people to the
next stage in the sequence. Thus for people in Stage 1, Intervention A
should be more effective than Interventions B or C (more effective in
moving people to Stage 2); for people in Stages 2 or 3, Intervention B or
C, respectively, should be most effective.

Figure 7.1 Matching interventions to stages.
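
The matching logic depicted in Figure 7.1 amounts to a simple lookup, sketched below with the figure's abstract labels (the stage and intervention names carry no TTM-specific content).

```python
# Each stage has exactly one matched intervention, designed to move
# recipients from that stage to the next one in the sequence.
MATCHED = {"Stage 1": "Intervention A",
           "Stage 2": "Intervention B",
           "Stage 3": "Intervention C"}

def is_matched(stage, intervention):
    """True when the intervention is the one designed for this stage."""
    return MATCHED.get(stage) == intervention

# For someone in Stage 1, Intervention A is matched; B and C are
# mismatched and so should be less effective at producing movement
# to Stage 2.
```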

Assessing the relative effectiveness of stage-matched and stage-
mismatched interventions can face some difficult challenges. Among other
things, researchers need dependable ways of classifying participants into
distinguishable stages, clear conceptual treatments of what counts as a
matched intervention for a given stage, effective realization of those
interventions, and sound assessment of relevant outcomes (such as stage
progression, that is, forward movement of people through the stages).

In this regard, the TTM is still a work in progress. For example, there is
room for some uncertainty concerning exactly what makes for matched
and mismatched interventions at various specific stages. And the
difficulties in creating reliable stage assessments should not be
underestimated. At the same time, it is possible to illustrate some of the
relevant issues by considering one specific matter: the question of the
point at which interventions should target the receiver’s self-efficacy
concerning the desired behavior (their perceived ability to perform the
behavior, akin to perceived behavioral control as discussed in Chapter 6).

Self-Efficacy Interventions
The TTM suggests that self-efficacy interventions are not well-suited to
people at earlier stages (e.g., precontemplation), because those people have
not yet decided that they want to adopt the new behavior. At early stages,
interventions should presumably focus on developing positive attitudes
toward the new behavior (by influencing decisional balance). Self-efficacy
interventions are expected to be useful only when people have already
made that initial decision (e.g., are in the planning stage). The reasoning is
that until people have become convinced of the desirability of an action,
there is little reason to worry about whether they think they can perform
the behavior.

These expectations have been assessed in several studies. These studies
begin by identifying each participant’s stage, with an eye to distinguishing
those in earlier (especially preplanning) stages and those in later
(especially planning) stages. Then participants receive either an
intervention designed for those in early stages (e.g., one aimed at
encouraging a positive attitude toward the new behavior) or an
intervention designed for those in later stages (one aimed at enhancing
perceived self-efficacy); thus participants receive either a stage-matched
intervention or a stage-mismatched intervention.

The results of such studies have pointed to a rather more complex picture
than that suggested by the TTM. Sometimes the expected pattern of results
has obtained, such that interventions were more effective when matched to
participants’ stages than when mismatched—and, specifically, self-
efficacy interventions were effective for those at later stages but not for
those at earlier stages. For example, Prentice-Dunn, McMath, and Cramer
(2009) found that for encouraging sunscreen use, movement from
precontemplation to contemplation was affected by the nature of threat
appraisal information (about the dangers of sun exposure) but not by the
nature of self-efficacy information (about the ease of using sunscreen). For
individuals in contemplation, however, movement to the preparation stage
was influenced by the presentation of self-efficacy information. That is,
self-efficacy information was effective for influencing people in later
stages but not people in earlier stages.

However, a number of other studies have found that self-efficacy
interventions can be useful even at relatively early stages of behavior
change. For example, Schwarzer, Cao, and Lippke (2010) studied two
interventions designed to encourage physical activity in adolescents. Some
of the participants did not yet intend to engage in such activity (pre-
intenders); others had formed physical activity intentions but had not yet
acted upon them (intenders). One intervention, designed for pre-intenders,
stressed the benefits of regular physical activity and the risks of a
sedentary lifestyle (“resource communication”); the other, designed for
intenders, focused on action planning to translate those intentions into
behavior (“planning treatment”). As expected, the resource communication
treatment was effective in increasing the frequency of physical activity for
pre-intenders but not intenders—but the planning intervention was
effective for both pre-intenders and intenders. That is, a self-efficacy–
focused intervention was effective even for people in an early (pre-
intention) stage.

As another example: Weinstein, Lyon, Sandman, and Cuite (1998)
examined interventions for persuading people to undertake home radon
testing. Two target audiences were distinguished: individuals who had not
made up their minds about testing (the “undecided” group) and persons
who had formed the intention to test but had not yet done so (the “decided
to test” group). Two intervention messages were developed: The “high-
likelihood” intervention aimed at convincing people that their homes were
indeed vulnerable to the threat of radon; this intervention was taken to be
matched to undecided receivers (because it was focused on developing the
appropriate attitudes). The “low-effort” intervention aimed at convincing
people that the testing process was simple and inexpensive; this
intervention was taken to be matched to decided-to-test receivers (because
it was focused on self-efficacy). As expected, for decided-to-test receivers,
the low-effort intervention was dependably more successful (than the high-
likelihood intervention) in moving people to a subsequent stage. But for
undecided receivers, the two interventions were equally effective. That is,
the intervention aimed at influencing self-efficacy was successful in
producing stage progression (movement to a more advanced stage) even
for people at early stages—people for whom the intervention was
putatively mismatched. (For similar indications of the effectiveness of self-
efficacy interventions at early stages, see Malotte et al., 2000; Quinlan &
McCaul, 2000.)

In short, several studies have failed to confirm the expected matching
effects for self-efficacy interventions (for a review offering a similar
conclusion concerning smoking specifically, see Cahill, Lancaster, &
Green, 2010). Such failures might be explained in any number of ways.
For example, participants might not have been correctly classified with
respect to stages; if some participants who were classified as being in early
stages (pre-intending, precontemplation, etc.) were actually in later stages
(intending, planning, etc.), then the self-efficacy–focused intervention
would in fact have been matched to those early-stage participants. Or
perhaps the self-efficacy interventions inadvertently included some
elements not focused on self-efficacy (but instead focused on developing
more positive attitudes), which made those interventions effective for
early-stage participants.
However, there is another reason to have doubts that self-efficacy–focused
interventions should be deployed only for persons in later stages of
change: research on threat appeals (discussed more extensively in Chapter
11). A threat appeal message has two components, one designed to arouse
fear or anxiety about possible negative events or consequences associated
with a possible threat, and one that offers a recommended course of action
to avert or reduce those negative outcomes. So, for instance, a message
might depict the dangers of not wearing a seat belt (the threat component)
as a way of encouraging seat belt use (the recommended action).

One much-studied threat-appeal message variation concerns differences in
the depicted ease with which the recommended action can be adopted.
Perhaps unsurprisingly, when the advocated action is depicted as easy to
do, people are (ceteris paribus) subsequently more likely to intend to
perform it. But the finding relevant to the present discussion is this: When
the advocated action is depicted as easy to do, people are also
subsequently more likely to have positive attitudes about the behavior (see
the meta-analytic review of Witte & Allen, 2000).9 That is, what looks like
a self-efficacy intervention can have effects on people’s attitudes. In the
context of the TTM, then, the plain implication is that persuaders should
not withhold self-efficacy interventions until people have already
developed positive attitudes. On the contrary, self-efficacy interventions
might be useful even when—or especially when—people do not yet have
positive attitudes about the new behavior.10

All told, then, there is good reason to think that self-efficacy–oriented
interventions can potentially be useful at earlier stages than are
contemplated by the transtheoretical model. That is to say, with respect to
this one aspect, the TTM’s expectations about stage-matching appear not
to be well-founded.11

Broader Concerns About the Transtheoretical Model


As just indicated, self-efficacy interventions may be useful at earlier stages
than one would expect on the basis of the transtheoretical model. Of
course, even if the TTM is mistaken about this one particular aspect of
stage-matching (concerning the timing of self-efficacy interventions), that
does not by itself somehow invalidate or undermine the TTM. However,
two broader concerns have been raised about the TTM.

One continuing concern is the transtheoretical model’s description of, and
procedures for assessing, the various stages. In TTM research, an
individual’s stage is most commonly assessed based on answers to a small
number of yes-no questions about current behavior, intentions to change,
and the like. But some of the resulting stage classifications can appear
artificial; for example, in some classification systems, an individual
planning to stop smoking in the next 30 days is placed in the preparation
stage, but an individual planning to quit in the next 31 days is described as
being in the contemplation stage (see Sutton, 2000, 2005b, pp. 238–242).
Moreover, the questions used (and the criteria used to subsequently
classify respondents) have varied from study to study, making for
difficulties in assessing the validity of such measures (Littell & Girvin,
2002). The challenge of creating reliable and valid stage assessments has
received considerable attention, with a variety of alternative stage-
measurement procedures being explored, but no easy resolution is in hand
(for some illustrative discussions, see Balmford, Borland, & Burney, 2008;
Bamberg, 2007; de Nooijer, van Assema, de Vet, & Brug, 2005; Lippke,
Ziegelmann, Schwarzer, & Velicer, 2009; Marttila & Nupponen, 2003;
Napper et al., 2008; Richert, Schüz, & Schüz, 2013).12

A second general concern is the paucity of empirical support for TTM-
based stage-matched interventions as compared with nonmatched
interventions. As indicated earlier, the expectations of the transtheoretical
model have not been confirmed concerning the specific question of the
timing of self-efficacy interventions. More generally, however, several
broader reviews have raised questions about the effectiveness of TTM-
based stage-matched interventions as compared to nonmatched
interventions (e.g., Adams & White, 2005; Bridle et al., 2005; Cahill,
Lancaster, & Green, 2010; Riemsma et al., 2003; Tuah et al., 2011). To be
sure, there are many ways in which doubt can creep in about such
conclusions (e.g., measurement error in assessing stages, or inadequate
realization of interventions; see, e.g., Hutchison, Breckon, & Johnston,
2009). But at present there is insufficient evidence to permit one to
confidently conclude that TTM-based stage-matched interventions are in
general more successful than nonmatched interventions.13

Other questions have been raised about, for example, whether the
transtheoretical model’s stages in fact constitute mutually exclusive
categories, whether there is good evidence of sequential movement
through the stages, whether the model is sufficiently well-specified to
permit useful empirical examination, and so forth. (For a particularly nice
review, see Sutton, 2005b. For other general critical discussions of the
TTM, see Armitage, 2009; Herzog, 2008; Littell & Girvin, 2002; Sutton,
2000, 2005a; R. West, 2005; Whitelaw, Baldwin, Bunton, & Flynn, 2000.)

These doubts about the adequacy of the transtheoretical model, however,
do not necessarily undermine the general idea of a stage model of behavior
change. Any failings of the TTM might reflect its particular realization of
the stage approach rather than some defect with the very idea of a stage
model. But before considering some alternative (non-TTM) stage models,
it will be useful to reflect on the distinctive claims advanced by stage (as
opposed to nonstage) approaches.

The Distinctive Claims of Stage Models


Stage models such as the TTM take on a set of distinctive empirical
commitments. A stage model must have a clear set of categories (with
principles for assigning people to categories), must show that those
categories are temporally ordered and so constitute stages, must define
those stages in a way that people in the same stage face common behavior-
change challenges (and so would profit from the same interventions), and
must define those stages in a way that people in different stages face
different challenges (and so need different interventions). (For a careful
elaboration of these evidentiary requirements, see Weinstein, Rothman, &
Sutton, 1998.)

As a consequence, stage models are quite different from nonstage models
of behavior such as reasoned action theory (RAT; Chapter 6). Nonstage
models might be called “continuum” approaches, in the sense that people
are seen as arrayed along a continuum reflecting the likelihood of their
engaging in the behavior (Weinstein, Rothman, & Sutton, 1998). In RAT,
for instance, people’s placements on the intention continuum are seen to be
influenced by four broad factors (attitude toward the behavior, injunctive
norms, descriptive norms, and perceived behavioral control). Where
continuum models see a range of propensities to act, with the same general
factors potentially influencing everyone, stage models see qualitative
differences between people on the basis of action preparation or propensity
(i.e., different stages), with different factors influencing people in those
different stages.

If one were to arbitrarily divide the intention continuum into distinct parts,
a stage-like classification might seem to result. For example, on an 11-
point intention scale, one could group together persons with scores of 1
and 2, those with scores of 3 and 4, those with scores of 5, 6, and 7, those
with scores of 8 and 9, and those with scores of 10 and 11—and then
conceive of these as five distinct “stages” along the way to action. But
these would more appropriately be called “pseudo-stages” (Weinstein,
Rothman, & Sutton, 1998). After all, there is no reason to suppose that
people need to pass through these various “stages” in succession; a person
might have a weak intention at one point in time (say, an intention score of
2) but subsequently be convinced to have a much stronger intention (say, a
score of 10) without having to pass through the points in between.
Moreover, two people in the same region of the continuum might need
different kinds of treatments; for example, two people might have equally
negative intentions, but (expressed in terms of RAT) one person could
have a negative attitude while the other person had low perceived
behavioral control—and hence those two people would need different
kinds of interventions.
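
The arbitrariness of such a division is easy to see when it is written out; the cut points below are the ones named in the example, and the numeric stage labels are invented.

```python
# Carving an 11-point intention scale into five "pseudo-stages" using
# the example's (arbitrary) cut points.

def pseudo_stage(intention_score):
    bins = [(1, 2), (3, 4), (5, 7), (8, 9), (10, 11)]
    for stage, (low, high) in enumerate(bins, start=1):
        if low <= intention_score <= high:
            return stage
    raise ValueError("score must be between 1 and 11")

# A person can move from a weak intention (score 2, "stage" 1) to a
# strong one (score 10, "stage" 5) without ever occupying the
# intervening "stages" -- one reason these are only pseudo-stages.
```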

To come at this same point from a different angle: Many theoretical
perspectives on persuasion and behavior change recognize that recipients
might be in different states that are relevant to behavioral performance.
Correspondingly, many different approaches emphasize the importance of
matching messages (interventions, treatments) to the current state of the
recipient—adapting the intervention to the recipient’s current beliefs,
attitude function, cultural values, personality characteristics, elaboration
likelihood, and so forth. For example, although reasoned action theory
identifies four abstract general influences on intention (attitude toward the
behavior, injunctive norms, descriptive norms, and perceived behavioral
control), RAT stresses that interventions should target only those
determinants of intention that significantly influence the intention in
question. That is, RAT emphasizes that interventions should be adapted to
the recipients’ current state; for instance, if injunctive norms are not
significantly related to the relevant intention, then injunctive-norm
interventions are presumably inappropriate.

Stage models share this emphasis on adapting interventions to recipients’
states—but differ by virtue of suggesting that the various different
recipient states form a temporal sequence, that is, a set of stages. It’s not
just that different people might be in different states (and so need different
interventions); it’s that people pass through those states in a definite
temporal order. The claim that the relevant states are temporally ordered
(and so form a set of stages) brings with it additional burdens of proof. For
example, the appropriate empirical evidence for stage models must consist
of something more than showing that state-matched interventions are more
effective than non-matched (mismatched or unmatched) interventions.
After all, any number of continuum (that is, non-stage) approaches also
contain the idea that state-matched interventions will enjoy greater success
than non-matched interventions. A finding that state-matched interventions
were more effective than non-matched interventions does not show that the
states form a sequence, that is, does not show that the states are stages.14

One kind of evidence especially relevant to the distinctive claims of stage
models concerns intervention sequencing. If different recipient states
represent genuine stages (i.e., are temporally sequenced), then some
intervention sequences should be more effective than others. To express
this point in terms of the abstract representation in Figure 7.1: If
Intervention A is needed to move people from Stage 1 to Stage 2, and
intervention B is needed to move people from Stage 2 to Stage 3, then it
should be more effective to deliver Intervention A first followed by
Intervention B, rather than the reverse.

Again, approaches such as reasoned action theory provide a useful
contrast. From a RAT perspective, if both attitude (AB) and descriptive
norms (DN) are significant predictors of intention, then each would be an
appropriate target for influence—and the sequence of such changes would
be irrelevant (that is, there’d be no necessary reason to change AB before
DN or vice versa). By contrast, with a stage model, the sequence of
treatments is expected to be crucial.

So for comparing the utility of stage models against non-stage (continuum)
models such as reasoned action theory, studies of the relative effectiveness
of different intervention sequences would be very helpful. In the case of
the TTM (and stage models more generally), research attention has
unfortunately focused on examining the relative effectiveness of stage-
matched and non-matched interventions without yet taking up the more
complex question of intervention sequencing. This focus is
understandable, especially in the early stages of research, but a genuine
stage model will contain clear predictions about intervention sequencing
that deserve empirical attention.

Other Stage Models

The transtheoretical model (TTM) is the most-studied stage model of
behavior change, but several others have also been developed. For
example, a number of stage models have concerned consumer purchasing
behavior. These models propose that a sequence of distinct stages precedes
product purchase. The description of these stages varies (in number and
composition), but common elements include awareness of the brand,
knowledge about the brand, attitude toward the brand, brand purchase
intention, purchase, and brand loyalty. These models are often described as
“hierarchy of advertising effects” models, because they putatively identify
a sequence of desired effects of advertising—to make the consumer aware
of the brand, to ensure the consumer has knowledge about the brand, and
so forth. (For one classic treatment, see Lavidge & Steiner, 1961.)
Although there is variation in the details of these hierarchy-of-effects
models, they share a common shortcoming, namely, that there is not good
evidence for the expected temporal sequence of advertising effects (Fennis
& Stroebe, 2010, pp. 29–34; Vakratsas & Ambler, 1999; Weilbacher,
2001).15

Health-related behaviors have been the focus of several non-TTM stage
models (e.g., the precaution adoption model of Weinstein & Sandman,
1992, 2002; for a general review of health-related stage models, see
Sutton, 2005b). Of these, perhaps the best developed is the Health Action
Process Approach (HAPA; Schwarzer & Fuchs, 1996). The HAPA
identifies three stages in behavioral change: non-intention, intention, and
action. Those in the non-intention stage (“non-intenders”) have not formed
the relevant intention; those in the intention stage (“intenders”) have
formed the intention but have not acted on it; those in the action stage
(“actors”) have engaged in the new behavior.16
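
The HAPA's three-way classification can be expressed as a simple decision rule; the function below is an illustrative sketch, not part of the model's formal statement.

```python
# Classify a respondent into a HAPA stage from two observations:
# whether the relevant intention has been formed, and whether the
# behavior has been performed.

def hapa_stage(has_intention, has_acted):
    if has_acted:
        return "actor"
    return "intender" if has_intention else "non-intender"

# Someone who intends to exercise regularly but has not yet begun
# is classified as an intender.
```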

The HAPA suggests that different kinds of messages are appropriate at
different stages. For example, information about the disadvantages of the
current behavior or the advantages of the new behavior (e.g., the benefits
of regular exercise) is likely to be most relevant for non-intenders, whereas
intenders may need explicit planning advice (about how to translate their
intentions into action). But the HAPA suggests that self-efficacy
information (about one’s capability to perform the new behavior) is
potentially valuable at all stages. (For a general review of HAPA research,
see Schwarzer, 2008a. For some illustrative specific applications, see
Craciun, Schüz, Lippke, & Schwarzer, 2012; Parschau et al., 2012; Schüz,
Sniehotta, Mallach, Wiedemann, & Schwarzer, 2009; Schüz, Sniehotta, &
Schwarzer, 2007. For some commentaries on the HAPA, see Abraham,
2008; Conner, 2008; Leventhal & Mora, 2008; Sutton, 2008.)

However, there is room for doubt about whether the HAPA is in fact a
stage model, as opposed to being something more like a continuum model
(see especially Sutton, 2005b). Notice that the HAPA stages correspond to
different portions of an intention continuum: Pre-intenders presumably
have negative (or insufficiently positive) intentions, and intenders and
actors presumably have positive intentions (but are distinguished by
whether they have acted on those intentions). That is, HAPA seems to
divide the propensity-to-act (intention) continuum into two general
categories: those with negative intentions and those with positive
intentions. This differentiation, however, is arguably insufficiently sharp to
create a genuine stage distinction. The intention continuum has no natural
boundary (the mean? the median? the scale midpoint?), but rather offers
only a blurry distinction between intenders and non-intenders (Abraham,
2008). The implication is that the HAPA’s demarcation of intenders and
non-intenders seems an arbitrary division rather than a genuine stage
distinction.

At the same time, some such differentiation does seem potentially useful.
In this context it may be illuminating to consider the HAPA against the
backdrop of reasoned action theory. From the perspective of RAT, one set
of factors underlies the formation of positive intentions (intention
formation is influenced by attitude toward the behavior, injunctive norms,
descriptive norms, and perceived behavioral control), but the realization of
intentions in action involves different processes.17 In this regard, RAT and
the HAPA look rather similar, in that each (implicitly or explicitly)
recognizes a distinction between getting people to have the desired
intention and getting people to act on that intention.18

In any event, the HAPA is not a straightforward stage model. It might be
thought of instead as a continuum theory, akin to—and indeed an
alternative to—reasoned action theory (see Sutton, 2005b), or as an
attempt to somehow blend stage and continuum approaches (see
Schwarzer, 2008b; cf. Conner, 2008).

Conclusion
Stage models of behavioral change are naturally quite appealing. The idea
that behavior change requires movement through a sequence of stages
sounds like a very plausible idea, but it turns out to be surprisingly difficult
to redeem that abstract idea in empirically and conceptually sound ways.

The emphasis that stage models place on matching treatments to recipients
is valuable but not unique. What makes stage models distinctive is the
suggestion of a set of distinct stages—temporally ordered states through
which people pass in the course of behavior change. However, defining
these stages clearly, developing sound assessments of stages, identifying
stage-appropriate interventions, displaying the greater effectiveness of
stage-matched treatments over nonmatched treatments, showing the
expected effects of variation in intervention sequencing—these have
proved difficult challenges to surmount.

For Review
1. How do stage models describe the process of behavioral change? In
the transtheoretical model (TTM), what stages are distinguished?
Describe each stage: precontemplation, contemplation, planning,
action, and maintenance. Is stage movement a linear process? Explain
why different kinds of treatments (messages) might be needed for
people at different stages of change.
2. What is decisional balance? How does decisional balance change as
people progress through the stages? Is the size of the change in the
perceived importance of the pros the same as the size of the decrease
in the perceived importance of the cons? Which is larger? Describe
the implications of such a difference for the design of effective
interventions. Explain why the existing research evidence about
decisional balance does not necessarily support recommendations
about intervention design.
3. Explain the general idea of stage-matching, and identify challenges in
comparing the effectiveness of stage-matched and stage-mismatched
interventions. According to the TTM, when should self-efficacy
interventions be most effective, at early stages or later ones? What
does the research evidence indicate about the appropriate timing of
self-efficacy interventions?
4. Describe some concerns that have been raised about the TTM’s
procedures for assessing stages. How extensive is the empirical
evidence for the superiority of TTM-based stage-matched
interventions over nonmatched interventions?
5. Describe the distinctive claims of stage models. Explain the
difference between stage models and continuum models of behavior.
How can dividing an intention continuum into segments produce the
appearance of a stage-like set of categories? Why is such a set of
categories not a genuine stage model? Describe the difference
between thinking of recipients as being in different states and
thinking of them as being in different stages. Why would finding that
state-matched interventions are more effective than nonmatched
interventions not necessarily show that the states are stages? Explain
why, according to stage models, some intervention sequences should
be more effective than others.
6. Describe some stage models other than the transtheoretical model (the
TTM). Sketch the kinds of stages included in a hierarchy-of-
advertising-effects model. Describe the stages identified by the
Health Action Process Approach (HAPA). Is the HAPA a
straightforward stage model? Explain.

Notes
1. The number and description of the stages can vary, especially depending
on the application area. For example, some TTM presentations focused on
smoking cessation have described the maintenance stage as one in which
the ex-smoker is actively working to prevent relapse, and have added a
subsequent “termination” stage in which the ex-smoker has achieved
complete self-control (Prochaska, Redding, & Evers, 2002).

2. The various processes of change appear to form a haphazard collection
of activities and circumstances that some smokers have sometimes found
useful in quitting rather than a principled or systematically organized set of
fundamental processes. This naturally impairs their value in guiding
research or intervention design.

3. Viewed from the perspective of reasoned action theory (RAT; see
Chapter 6), decisional balance can be seen to be related to the person’s
attitude toward the new behavior (AB), with self-efficacy reflecting
perceived behavioral control (PBC). But the TTM’s treatment of these two
elements is rather less careful than that of RAT, and TTM does not
emphasize normative considerations (injunctive or descriptive) in the way
that RAT does.

4. Decisional balance was originally developed as part of Janis and Mann’s
(1968) account of decision making.

5. The TTM’s research evidence focuses exclusively on the perceived
importance of pros and cons, but—as suggested by belief-based models of
attitude (see Chapter 4)—other properties of perceived advantages and
disadvantages (such as evaluation or perceived likelihood) might arguably
be of at least as much interest to persuaders. It should not pass unnoticed
that, as discussed in Chapter 4, when the evaluation and strength
(likelihood) of salient beliefs are used to predict attitude, belief importance
(the property apparently of key interest to the TTM) does not add to the
predictability of attitude.

6. Again, common ways of expressing this finding can be misleading.
Consider, for example: “the magnitude of the increase in the pros was
greater than the magnitude of the decrease in the cons” (Di Noia &
Prochaska, 2010, p. 629). This might more carefully be expressed as “the
magnitude of the increase in the perceived importance of the pros was
greater than the magnitude of the decrease in the perceived importance of
the cons,” so as not to invite misapprehension of the property under
discussion.

7. A divergent effect was reported for organ donation decisions by
McIntyre et al. (1987). Organ donors and nondonors generally did not
differ in their ratings of the importance of various reasons for donating
(pros) but did often differ in their ratings of the importance of reasons for
not donating (cons).

8. TTM presentations of these principles are not always as clear as might
be wanted. For instance, Prochaska, Redding, and Evers (2002, p. 106)
misleadingly assert that one of the “practical implications” of the strong
and weak principles is that increasing the number of perceived advantages
(pros) should be emphasized more than decreasing the number of
perceived disadvantages (cons). The research evidence concerning
decisional balance, however, does not address variations in the number of
pros or cons, only in their perceived importance.

9. In Witte and Allen’s (2000) meta-analysis, the mean effect of variation
in depicted self-efficacy on attitudes, expressed as a correlation, was .12
(across eight cases). For comparison, the mean effect of variation in
depicted self-efficacy on perceived self-efficacy was .36 (across 17 cases;
Witte & Allen, 2000, p. 599, Table 2). This latter effect, however, was
based only on cases with a successful “manipulation check” (that is,
studies in which the experimental variation in depicted self-efficacy
produced statistically significant differences in perceived self-efficacy); if
studies with “failed” manipulation checks were to have been included, the
observed mean effect would probably be smaller.

10. Expressed in the framework of reasoned action theory (Chapter 6),
these findings indicate that an intervention meant to influence perceived
behavioral control (PBC) can also influence behavioral attitudes (AB).
Hence if AB is negative, an intervention focused on PBC (self-efficacy)
might nevertheless be useful, because that intervention might both enhance
PBC and make AB more positive.

11. One might also suppose that an assumption of some strict temporal
segregation between attitudinal considerations (the person’s evaluation of
the new behavior) and self-efficacy considerations (the person’s
perceptions of their ability to perform the action) is implausible. The
implicit suggestion of the TTM is that if people have a negative attitude
toward the new behavior, then they won’t even think about self-efficacy
considerations; the only time people think about self-efficacy (according to
this view) is when they already have a positive attitude (and so have
reached a stage at which self-efficacy becomes relevant). But if this were
the case, then no one would ever simultaneously express a negative
attitude and self-efficacy doubts—no one would ever say, for example, “I
don’t think exercise is all that valuable, and besides I don’t have time for it
anyway.” And yet this seems like a perfectly natural state of mind. In fact,
in a circumstance in which people have both a negative attitude toward the
desired behavior and doubts about their ability to perform it, persuaders
might well sometimes want to address those self-efficacy concerns first.
For instance, low perceived self-efficacy might sometimes be a
rationalization device, a way of justifying failing to do something that
people know they should do (“the recycling rules are so hard to
understand”). And when low perceived self-efficacy does function this
way, then removal of that rationalization could be crucially important for
encouraging behavioral change.

12. One potential issue with stage models is that stages can be defined in
ways that evade (or submerge) certain empirical questions. As one
illustration, stage definitions can guarantee a certain sequencing of stages.
For example, suppose a “precontemplation” stage is defined in such a way
that once a person has thought about adopting the new behavior, by
definition that person cannot ever be in precontemplation again. In that
case, a “contemplation” stage would by definition follow precontemplation
—which would mean that no empirical evidence would be needed to
confirm that temporal relation. Similarly, stage definitions can guarantee
that a given stage cannot possibly be skipped. For example, if part of the
definition of a “planning” stage is that the person must have already at
least been thinking about adopting the new behavior (i.e., must already
have been in a “contemplation” stage), then it would be conceptually
impossible for a person to reach the “planning” stage without passing
through the “contemplation” stage. The larger point here is that the
development of reliable and valid stage assessments is bound up with
definitional questions—and certain ways of defining stages can transform
what would look to be empirical questions (“Does stage Y always follow
stage X?”) into definitional matters (“Stage Y, by definition, follows stage
X”).

13. This conclusion is not inconsistent with a belief that state-matched
interventions are generally more effective than nonmatched interventions
(e.g., Noar, Benac, & Harris, 2007), or with a belief that non-TTM-based
stage-matched interventions are more effective than nonmatched
interventions, or with a belief that TTM-based stage-matched interventions
are more effective than a no-treatment control condition.

14. As a parallel case: One could imagine devising different messages that
were matched to differing political dispositions (say, Republicans and
Democrats in the U.S.). Finding that Republicans were more persuaded by
messages matched to Republicans than by messages matched to
Democrats (with the reverse true for Democrats) would not show that the
two categories (Democrat and Republican) formed any sort of temporal
sequence. Similarly here: Finding that people in the contemplation
category were more persuaded by interventions matched to contemplation
than by messages matched to planning (with the reverse true for people in
the planning category) would not show that the two categories form any
sort of temporal sequence. Showing that stage-matched interventions are
more successful than nonmatched interventions is a minimum requirement
for a valid stage theory but does not satisfy all the burdens of proof
incurred by stage models.

15. Notice, however, that even if the sequence of stages posited by a
hierarchy-of-advertising-effects model is not accurate, those different
“stages” might nevertheless be useful as a conceptualization of different
possible purposes of advertising. So, for example, the challenges of an
advertiser may be different if consumers don’t know the product exists
than if they know it exists but believe it’s not very good.

16. Some presentations of the HAPA have described just two stages—a
motivational stage (in which persons need to develop the appropriate
motivations to change) and a volitional stage (in which people already
have the relevant motivations)—but with the volitional stage further
divided between people who merely intend to adopt the new behavior and
those who have already adopted that behavior. Because such an analysis
eventuates in three relevant categories, it seems better to describe the
HAPA as proposing three (not two) stages. However, the “two-stage”
language does draw attention to the similarity between those in the
intention stage and those in the action stage, namely, that they all have the
requisite intention (which distinguishes them from those in the non-
intention stage).

17. On this latter subject (the intention-behavior relationship), RAT has
focused on the question of what influences the intention-behavior
correlation, so as to be able to better specify the conditions under which
intentions will predict behavior. RAT itself has not given so much
attention to the question of how to influence the intention-behavior
relationship.

18. This distinction, or something like it, has a long intellectual history,
dating back at least to 18th-century faculty psychology. A distinction
between “conviction” (influencing belief, associated with the faculty of
understanding) and “persuasion” (influencing action, associated with the
faculty of the will) can be found in George Campbell’s 1776 The
Philosophy of Rhetoric and even more sharply formulated in Bishop
Richard Whately’s 1828 The Elements of Rhetoric. However, this
conviction-persuasion distinction needlessly confused distinctions between
communicative purposes and distinctions between communicative means,
with unhappy consequences (O’Keefe, 2012a).

Chapter 8 Elaboration Likelihood Model

Variations in the Degree of Elaboration: Central Versus Peripheral Routes to Persuasion
The Nature of Elaboration
Central and Peripheral Routes to Persuasion
Consequences of Different Routes to Persuasion
Factors Affecting the Degree of Elaboration
Factors Affecting Elaboration Motivation
Factors Affecting Elaboration Ability
Summary
Influences on Persuasive Effects Under Conditions of High
Elaboration: Central Routes to Persuasion
The Critical Role of Elaboration Valence
Influences on Elaboration Valence
Summary: Central Routes to Persuasion
Influences on Persuasive Effects Under Conditions of Low
Elaboration: Peripheral Routes to Persuasion
The Critical Role of Heuristic Principles
Varieties of Heuristic Principles
Summary: Peripheral Routes to Persuasion
Multiple Roles for Persuasion Variables
Adapting Persuasive Messages to Recipients Based on the ELM
Commentary
The Nature of Involvement
Argument Strength
One Persuasion Process?
Conclusion
For Review
Notes

The elaboration likelihood model (ELM) of persuasion is an approach
developed by Richard Petty, John Cacioppo, and their associates (the most
comprehensive single treatment of the ELM is provided by Petty &
Cacioppo, 1986a; for briefer presentations, see Petty & Briñol, 2010,
2012a; Petty, Cacioppo, Strathman, & Priester, 2005). The ELM suggests
that important variations in the nature of persuasion are a function of the
likelihood that receivers will engage in elaboration of (that is, thinking
about) information relevant to the persuasive issue. Depending on the
degree of elaboration, two types of persuasion process can be engaged
(one involving systematic thinking and the other involving cognitive
shortcuts)—with different factors influencing persuasive outcomes in each.
In the sections that follow, the nature of variations in the degree of
elaboration is described, factors influencing the degree of elaboration are
discussed, the two persuasion processes are described, and then various
complexities of persuasion processes are considered.

The ELM is an example of a dual-process approach to social information-
processing phenomena (see Chaiken & Trope, 1999), an example focused
specifically on persuasion phenomena. An alternative dual-process image
of persuasion has been provided by the heuristic-systematic model (HSM;
see Chaiken, 1987; S. Chen & Chaiken, 1999; Todorov, Chaiken, &
Henderson, 2002). Although the two models differ in some important
ways, the ELM and HSM share the broad idea that persuasion can be
achieved through two general avenues that vary in the amount of careful
thinking involved.

Variations in the Degree of Elaboration: Central Versus Peripheral Routes to Persuasion

The Nature of Elaboration


The ELM is based on the idea that under different conditions, receivers
will vary in the degree to which they are likely to engage in elaboration of
information relevant to the persuasive issue. Elaboration here refers to
engaging in issue-relevant thinking. Thus sometimes receivers will engage
in extensive issue-relevant thinking: They will attend closely to a
presented message, carefully scrutinize the arguments it contains, reflect
on other issue-relevant considerations (e.g., other arguments recalled from
memory or arguments they devise), and so on. But sometimes receivers
will not undertake so much issue-relevant thinking; no one can engage in
such effort for every persuasive topic or message, and hence sometimes
receivers will display relatively little elaboration.1

A number of means have been developed for assessing variations in the
degree of elaboration that occurs in a given circumstance (for discussion,
see Petty & Cacioppo, 1986a, pp. 35–47). Perhaps the most
straightforward of these is the thought-listing technique: Immediately
following the receipt of a persuasive message, receivers are simply asked
to list the thoughts that occurred to them during the communication (for a
more detailed description, see Cacioppo, Harkins, & Petty, 1981, pp. 38–
47; for a broad review of such techniques, see Cacioppo, von Hippel, &
Ernst, 1997). The number of issue-relevant thoughts reported is
presumably at least a rough index of the amount of issue-relevant
thinking.2 Of course, the reported thoughts can also be classified in any
number of ways (e.g., according to their substantive content or according
to what appeared to provoke them); one classification obviously relevant
to the illumination of persuasive effects is one that categorizes thoughts
according to their favorability to the position being advocated by the
message.

As is probably already apparent, the degree to which receivers engage in
issue-relevant thinking forms a continuum, from cases of extremely high
elaboration to cases of little or no elaboration. One might be tempted to
think that in circumstances in which little or no elaboration occurs, little or
no persuasion will occur; after all, the receiver has not really engaged the
message. But the ELM suggests that persuasion can take place at any point
along the elaboration continuum, although the nature of the persuasion
processes will be different as the degree of elaboration varies. To bring out
the differences in these persuasion processes, the ELM offers a broad
distinction between two routes to persuasion: a central and a peripheral
route.

Central and Peripheral Routes to Persuasion


The central route to persuasion represents the persuasion processes
involved when elaboration is relatively high. When persuasion is achieved
through the central route, it commonly comes about through extensive
issue-relevant thinking: careful examination of the information contained
in the message, close scrutiny of the message’s arguments, consideration
of other issue-relevant material (e.g., arguments recalled from memory,
arguments devised by the receiver), and so on. In short, persuasion through
the central route is achieved through the receiver’s thoughtful examination
of issue-relevant considerations.

The peripheral route represents the persuasion processes involved when
elaboration is relatively low. When persuasion is achieved through
peripheral routes, it commonly comes about because the receiver employs
some simple decision rule (some heuristic principle) to evaluate the
advocated position. For example, receivers might be guided by whether
they like the communicator or by whether they find the communicator
credible. That is, receivers may rely on various peripheral cues (such as
communicator credibility) as guides to attitude and belief, rather than
engaging in extensive issue-relevant thinking.

Thus as elaboration decreases, peripheral cues presumably become
progressively more important determinants of persuasive effects, but as
elaboration increases, peripheral cues should have relatively smaller
effects on persuasive outcomes. Indeed, one indirect marker of the amount
of elaboration (in a given circumstance) is precisely the extent to which
observed persuasive effects are a function of available peripheral cues as
opposed to (for example) the quality of the message’s arguments. If, in a
given experimental condition, variations in peripheral cues have more
influence on persuasive outcomes than do variations in the strength of the
message’s arguments, then presumably relatively little elaboration
occurred. That is, the persuasive outcomes were presumably achieved
through a peripheral, not a central, route (but see Bless & Schwarz, 1999).

This distinction between the two routes to persuasion should not be
permitted to obscure the underlying elaboration continuum. The central
and peripheral routes to persuasion are not two exhaustive and mutually
exclusive categories or kinds of persuasion; they simply represent
prototypical extremes on the high-to-low elaboration continuum. The ELM
recognizes, for example, that at moderate levels of elaboration, persuasion
involves a mixture of central route and peripheral route processes, with
correspondingly complex patterns of effects (see, e.g., Petty & Wegener,
1999, pp. 44–48). Thus, in considering the differing character of
persuasion achieved through central and peripheral routes, it is important
to bear in mind that these routes are offered as convenient idealized cases
representing different points on the elaboration continuum.

A classic illustration of the distinction between central and peripheral
routes to persuasion is provided by Petty, Cacioppo, and Goldman’s
(1981) study of the effects of argument strength and communicator
expertise on persuasive effectiveness. In this investigation, the personal
relevance of the message topic for receivers was varied, such that for some
receivers the topic was quite relevant (and so presumably disposed
receivers to engage in high elaboration), whereas for other receivers the
topic was much less relevant (and hence these receivers would presumably
be less likely to engage in elaboration). The design also varied the quality
of the message’s arguments (strong vs. weak arguments) and the expertise
of the communicator (high vs. low).

High-topic-relevance receivers were significantly affected by the quality of
the arguments contained in the message (being more persuaded by strong
arguments than by weak arguments) but were not significantly influenced
by the communicator’s degree of expertise. By contrast, low-topic-
relevance receivers were more affected by expertise variations (being more
persuaded by the high-expertise source than by the low) than by variations
in argument quality. That is, when receivers were inclined (by virtue of
topic relevance) to engage in extensive elaboration, the results of their
examination of the message’s arguments were much more influential than
was the peripheral cue of the communicator’s expertise. But when
receivers were not inclined to invest the cognitive effort in argument
scrutiny, the peripheral cue of expertise had more influence.

As this investigation indicates, persuasion can be obtained either through a
central route (involving relatively high elaboration) or through a peripheral
route (in which little elaboration occurs). But the factors influencing
persuasive success are different in the two cases. Moreover, as indicated in
the next section, the consequences of persuasion are not identical for the
two routes.

Consequences of Different Routes to Persuasion


Although persuasion can be accomplished at any point along the
elaboration continuum, the persuasive effects obtained are not necessarily
identical. The ELM suggests that with variations in the amount of
elaboration (i.e., variations in the route to persuasion), there are
corresponding variations in the character of the persuasive outcomes
effected. Specifically, the ELM suggests that attitudes shaped under
conditions of high elaboration will (compared with attitudes shaped under
conditions of low elaboration) display greater temporal persistence, be
more predictive of intentions and subsequent behavior, and be more
resistant to counterpersuasion.

Each of these claims enjoys both some supportive direct research evidence
and some previous research that can be interpreted as indicating such
effects. For example, Petty, Cacioppo, and Schumann (1983) reported that
attitudes were more strongly correlated with intentions when the attitudes
were formed under conditions of high (as opposed to low) personal
relevance of the topic. Cacioppo, Petty, Kao, and Rodriguez (1986) found
that persons high in need for cognition (and so presumably higher in
elaboration motivation) displayed greater attitude-intention and attitude-
behavior consistency than did persons lower in need for cognition.
Verplanken (1991) reported greater persistence of attitudes and greater
attitude-intention consistency under conditions of high (rather than low)
elaboration likelihood (as indicated by topic relevance and need for
cognition). MacKenzie and Spreng (1992) experimentally varied
elaboration motivation and found stronger attitude-intention relationships
under conditions of higher (as opposed to lower) elaboration motivation.
(For other illustrations, see Gasco, Briñol, & Horcajo, 2010; Haugtvedt,
Schumann, Schneier, & Warren, 1994. For some general reviews and
discussions, see Petty & Cacioppo, 1986a, pp. 173–195; Petty, Haugtvedt,
& Smith, 1995; Petty & Wegener, 1999, pp. 61–63.)

These effects may seem intuitively plausible (in the sense that the greater
issue-relevant thinking affiliated with central route processes might well
be expected to yield attitudes that are stronger in these ways), but the
mechanism by which these outcomes arise is not entirely well understood
(for some discussion, see Petty, Haugtvedt, & Smith, 1995, pp. 119–123).3
Nevertheless, there is good reason for persuaders to presume that
persuasion accomplished through high elaboration is likely to be more
enduring (less likely to decay through time, less likely to succumb to
counterpersuasion) and to be more directive of behavior than is persuasion
accomplished through low elaboration.

Given that the underlying persuasion process varies depending on the level
of elaboration, and given that the different routes to persuasion have these
different consequences, it becomes important to consider what factors
influence the degree of elaboration that receivers are likely to undertake.

Factors Affecting the Degree of Elaboration


Two broad classes of factors influence the degree of elaboration that a
receiver will likely undertake in any given circumstance. One concerns the
receiver’s motivation for engaging in elaboration, the other the receiver’s
ability to engage in such elaboration. For extensive elaboration to occur,
both ability and motivation must be present. High elaboration will not
occur if the receiver is motivated to undertake issue-relevant thinking but
is unable to do so, nor will it occur if the receiver is able to engage in
elaboration but is unmotivated to do so.

Factors Affecting Elaboration Motivation
A variety of factors have received research attention as influences on
receivers’ motivation to engage in issue-relevant thinking, including the
receiver’s mood (e.g., Banas, Turner, & Shulman, 2012; Bless, Mackie, &
Schwarz, 1992; Bohner & Weinerth, 2001; Côté, 2005; Ziegler, 2010; but
also see Bless & Schwarz, 1999),4 attitudinal ambivalence (i.e., the degree
to which the attitude is based on a mixture of positive and negative
elements; e.g., Hänze, 2001; Jonas, Diehl, & Brömer, 1997; Maio, Esses,
& Bell, 2000), and perceived information sufficiency (B. B. Johnson,
2005; Trumbo, 1999). Two influences are discussed here as illustrative:
the personal relevance of the topic to the receiver and the receiver’s degree
of need for cognition.

Personal Relevance (Involvement)


The most studied influence on the receiver’s motivation for engaging in
issue-relevant thinking is the personal relevance of the topic to the
receiver. As a given issue becomes increasingly personally relevant to a
receiver, the receiver’s motivation for engaging in thoughtful
consideration of that issue presumably increases—and indeed a number of
investigations have reported findings confirming this expectation (e.g.,
Petty & Cacioppo, 1979b, 1981; Petty, Cacioppo, & Goldman, 1981;
Petty, Cacioppo, & Schumann, 1983).5

The ELM’s research evidence on this matter has employed a clever
methodological innovation (introduced by Apsler & Sears, 1968). In many
earlier studies of the effect of topic relevance variations on persuasive
processes, researchers commonly employed two message topics, one
presumably quite relevant for the population from which receivers were
drawn and one not so relevant. This obviously creates difficulties in
interpreting experimental results because any observed differences
between high- and low-relevance conditions might be due not to the
relevance differences but to some factor connected to the topic differences
(e.g., the necessarily different arguments used in the messages on the two
topics).

The procedure followed by ELM researchers is exemplified in a study in
which the participants were college undergraduates. The persuasive
messages advocated the adoption of senior comprehensive examinations as
a graduation requirement—either at the receivers’ college (the high-
relevance condition) or at a different, distant college (the low-relevance
condition). With this form of manipulation, receivers in parallel high- and
low-relevance conditions could hear messages identical in every respect
(e.g., with the same arguments and evidence) save for the name of the
college involved, thus simplifying interpretation of experimental findings
(Petty & Cacioppo, 1979b).

A note about terminology: In ELM research reports, these variations in
personal relevance have often been labeled as variations in the receiver’s
level of “involvement” with the message topic (and so, for instance, in the
high-relevance condition, receivers would be said to be “highly involved”
with the topic). But in persuasion research, the term involvement has also
been used to cover other variations in the sort of relationship that message
recipients have to the topic of advocacy, including the person’s judgment
of the importance of the issue, the degree to which the person is strongly
committed to a stand on the issue, the extent to which the person’s sense of
self is connected to the stand taken, and ego-involvement (see Chapter 2),
an omnibus concept meant to encompass a number of such elements. But
these are different properties, so it is important to bear in mind that the
involvement manipulations in ELM research are specifically ones that
induce variation in the personal relevance of the topic.6

Need for Cognition


A second factor influencing elaboration motivation is the receiver’s level
of need for cognition (NFC), which refers to “the tendency for an
individual to engage in and enjoy thinking” (Cacioppo & Petty, 1982, p.
116). This tendency varies among people; some persons are generally
disposed to enjoy and engage in effortful cognitive undertakings, whereas
others are not. Need-for-cognition scales have been developed to assess
this individual difference (e.g., Cacioppo & Petty, 1982). Persons high in
NFC tend to agree with statements such as “I really enjoy a task that
involves coming up with new solutions to problems” and “I like to have
the responsibility of handling a situation that requires a lot of thinking,”
whereas individuals low in NFC are more likely to agree with statements
such as “I like tasks that require little thought once I’ve learned them” and
“I think only as hard as I have to.” (For general reviews of research on
need for cognition, see Cacioppo, Petty, Feinstein, & Jarvis, 1996; Petty,
Briñol, Loersch, & McCaslin, 2009.)

As one might suppose, a good deal of research suggests that need for
cognition influences elaboration likelihood. Persons high in NFC are likely
to report a larger number of issue-relevant thoughts (following message
exposure) than are persons low in need for cognition (e.g., S. M. Smith,
Haugtvedt, & Petty, 1994; for a review, see Cacioppo, Petty, et al., 1996,
pp. 230–231). Relatedly, those high in NFC are more influenced by the
quality of the message’s arguments than are those low in need for
cognition (e.g., Axsom, Yates, & Chaiken, 1987; Green, Garst, Brock, &
Chung, 2006; for a review, see Cacioppo, Petty, et al., 1996, pp. 229–
230).7 Such findings, of course, are consistent with the supposition that
persons high in need for cognition have generally greater motivation for
engaging in issue-relevant thinking than do persons low in need for
cognition.8

Factors Affecting Elaboration Ability


Several possible influences on receivers’ ability to engage in issue-relevant
thinking have been investigated, including such variables as message
repetition (Claypool, Mackie, Garcia-Marques, McIntosh, & Udall, 2004)
and the receiver’s body posture (Petty, Wells, Heesacker, Brock, &
Cacioppo, 1983). Two factors with relatively more extensive research
support are discussed here: the presence of distraction in the persuasive
setting and the receiver’s prior knowledge about the persuasive topic.

Distraction
In this context, distraction refers to the presence of some distracting
stimulus or task accompanying a persuasive message. Research concerning
the effects of such distractions has used a variety of forms of distraction,
including having an audio message be accompanied by static or beep
sounds and having receivers monitor a bank of flashing lights, copy a list
of two-digit numbers, or record the location of an X flashing from time to
time on a screen in front of them (for a general discussion of such
manipulations, see Petty & Brock, 1981).

The theoretical importance of distraction effects to the ELM should be
plain. Under conditions that would otherwise produce relatively high
elaboration, distraction should interfere with such issue-relevant thinking.
Such interference should enhance persuasion in some circumstances and
reduce it in others. Specifically, if a receiver would ordinarily be inclined
to engage in favorable elaboration (that is, to predominantly have thoughts
favoring the advocated position), then distraction, by interfering with such
elaboration, would presumably reduce persuasive effectiveness. But if a
receiver would ordinarily be inclined to predominantly have thoughts
unfavorable to the position advocated, then distraction should presumably
enhance the success of the message (by interfering with the having of
those unfavorable thoughts).9

Distraction’s effects on persuasion have been extensively studied, although
regrettably little of this research is completely suitable for assessing the
predictions of the ELM (for some general discussions of this literature, see
Baron, Baron, & Miller, 1973; Buller & Hall, 1998; Petty & Brock, 1981).
But what relevant evidence exists does seem largely compatible with the
ELM. For example, studies reporting that distraction enhances persuasive
effects have commonly relied on circumstances in which elaboration
likelihood was high and predominantly unfavorable thoughts would be
expected (see Petty & Brock, 1981, p. 65). More direct tests of the ELM’s
predictions have also been generally supportive (for a review, see Petty &
Cacioppo, 1986a, pp. 61–68). For instance, one study found that increasing
distraction increased the effectiveness of a counterattitudinal message
containing weak arguments but decreased the effectiveness of a
counterattitudinal message containing strong arguments. The weak-
argument message ordinarily evoked predominantly unfavorable thoughts,
and hence distraction—by interfering with such thoughts—enhanced
persuasion for that message, but the strong-argument message ordinarily
evoked predominantly favorable thoughts, and thus distraction inhibited
persuasion for that message (Petty, Wells, & Brock, 1976, Experiment 1;
see, relatedly, Albarracín & Wyer, 2001; Jeong & Hwang, 2012; Miarmi
& DeBono, 2007).

Prior Knowledge
A second factor influencing elaboration ability is the receiver’s prior
knowledge about the persuasive topic: The more extensive such prior
knowledge, the better able the receiver is to engage in issue-relevant
thinking. Several studies have indicated that as the extent of receivers’
prior knowledge increases, more issue-relevant thoughts occur, the
influence of argument strength on persuasive effects increases, and the
influence of peripheral cues (such as source likability and message length)
decreases (e.g., Averbeck, Jones, & Robertson, 2011; Laczniak, Muehling,
& Carlson, 1991; W. Wood, 1982; W. Wood & Kallgren, 1988; W. Wood,
Kallgren, & Preisler, 1985).10 As one might expect, this suggests that
when receivers with extensive prior knowledge encounter a
counterattitudinal message, such receivers are better able to generate
counterarguments and hence are less likely to be persuaded (in comparison
with receivers with less extensive topic knowledge). But receivers with
extensive prior knowledge are also more affected by variations in message
argument strength; hence increasing the strength of a counterattitudinal
message’s arguments will presumably enhance persuasion for receivers
with extensive knowledge but will have little effect on receivers with less
extensive knowledge.11

Summary
As should be apparent, a variety of factors can influence the likelihood of
elaboration in a given circumstance by affecting the motivation or the
ability to engage in issue-relevant thinking. With variations in elaboration
likelihood, of course, different sorts of persuasion processes are engaged:
As elaboration increases, peripheral cues have diminished effects on
persuasive outcomes, and central route processes play correspondingly
greater roles. But the factors influencing persuasive effects are different,
depending on whether central or peripheral routes to persuasion are
followed. Thus the next two sections consider what factors influence
persuasive outcomes when elaboration likelihood is relatively high and
when it is relatively low.

Influences on Persuasive Effects Under
Conditions of High Elaboration: Central Routes
to Persuasion

The Critical Role of Elaboration Valence


Under conditions of relatively high elaboration, the outcomes of
persuasive efforts will largely depend on the outcomes of the receiver’s
thoughtful consideration of issue-relevant arguments (as opposed to simple
decision principles activated by peripheral cues). Broadly put, when
elaboration is high, persuasive effects will depend on the predominant
valence (positive or negative) of the receiver’s issue-relevant thoughts: To
the extent that the receiver is led to have predominantly favorable thoughts
about the advocated position, the message will presumably be relatively
successful in eliciting attitude change in the desired direction; but if the
receiver has predominantly unfavorable thoughts, then the message will
presumably be relatively unsuccessful. Thus the question becomes: Given
relatively high elaboration, what influences the predominant valence (the
overall evaluative direction) of elaboration?

Influences on Elaboration Valence


Of the many influences on the evaluative direction of receivers’ issue-
relevant thinking, two factors merit attention here: whether the message’s
advocated position is proattitudinal or counterattitudinal and the strength
(quality) of the message’s arguments.

Proattitudinal Versus Counterattitudinal Messages


The receiver’s initial attitude and the message’s advocated position,
considered jointly, will surely influence the valence of elaboration. When
the advocated position is one toward which the receiver is already
favorably inclined—that is, when the message advocates a proattitudinal
position—the receiver will presumably ordinarily be inclined to have
favorable thoughts about the position advocated. By contrast, when the
message advocates a counterattitudinal position, receivers will ordinarily
be inclined to have unfavorable thoughts about the view being advocated.
That is, everything else being equal, one expects proattitudinal messages to
evoke predominantly favorable thoughts and counterattitudinal messages
to evoke predominantly unfavorable thoughts.

But of course, this cannot be the whole story—otherwise nobody would
ever be persuaded by a counterattitudinal message. At least sometimes
people are persuaded by the arguments contained in counterattitudinal
communications, and hence the ELM suggests that a second influence on
elaboration valence is the strength of the message’s arguments.

Argument Strength
Recall that under conditions of high elaboration, receivers are motivated
(and able) to engage in extensive issue-relevant thinking, including careful
examination of the message’s arguments. Presumably, then, the valence of
receivers’ elaboration will depend (at least in part) on the results of such
scrutiny: The more favorable the reactions evoked by that scrutiny of
message material, the more effective the message should be. If a receiver’s
examination of the message’s arguments reveals shoddy arguments and
bad evidence, one presumably expects little persuasion; but a different
outcome would be expected if the message contains powerful arguments,
sound reasoning, good evidence, and the like.

That is, under conditions of high elaboration, the strength (quality) of the
message’s arguments should influence the evaluative direction of
elaboration (and hence should influence persuasive success). Many
investigations have reported results indicating just such effects (e.g., Lee,
2008; Levitan & Visser, 2008; Petty & Cacioppo, 1979b; Petty, Cacioppo,
& Schumann, 1983; for complexities, see Park, Levine, Westermann,
Orfgen, & Foregger, 2007).

Other Influences on Elaboration Valence


Some research has examined other possible influences on elaboration
valence. For example, some evidence indicates that when elaboration
likelihood is high, warning receivers of an impending counterattitudinal
message can encourage receivers to have more unfavorable thoughts about
the advocated position than they otherwise would have had (e.g., Petty &
Cacioppo, 1979a; for a review, see W. Wood & Quinn, 2003). As another
example, when elaboration is high, the receiver’s mood can incline the
receiver to have mood-congruent thoughts, so that positive moods
encourage positive thoughts (e.g., Wegener & Petty, 2001; for
complexities, see, e.g., Agrawal, Menon, & Aaker, 2007; Mitchell, Brown,
Morris-Villagran, & Villagran, 2001). But the greatest research attention
concerning influences on elaboration valence has been given to variations
in argument strength.

Summary: Central Routes to Persuasion


Under conditions of high elaboration (e.g., high personal relevance of the
topic to the receiver), the outcome of persuasive efforts depends on the
valence of receivers’ elaboration: When a persuasive message leads
receivers to have predominantly favorable thoughts about the position
being advocated, persuasive success is correspondingly more likely. And
the valence of receivers’ elaboration will depend (at least in part) on the
character of the message’s arguments.12

Influences on Persuasive Effects Under
Conditions of Low Elaboration: Peripheral
Routes to Persuasion

The Critical Role of Heuristic Principles


The ELM suggests that under conditions of relatively low elaboration, the
outcomes of persuasive efforts will not generally turn on the results of the
receiver’s thoughtful consideration of the message’s arguments or other
issue-relevant information. Instead, persuasive effects will be much more
influenced by the receiver’s use of simple decision rules or heuristic
principles.13 These heuristic principles (or heuristics, for short) represent
simple decision procedures requiring little information processing. The
principles are activated by peripheral cues, that is, by extrinsic features of
the communication situation such as the characteristics of the
communicator (e.g., credibility). For example, in a circumstance in which
elaboration likelihood is low, receivers may display agreement with a liked
communicator because a simplifying decision rule (“If I like the source,
I’ll agree”) has been invoked.

Heuristic principles have ordinarily not been studied in a completely direct
fashion—and for good reason. One would not expect (for instance) that
self-report indices of heuristic use would be valuable; presumably, these
heuristics are commonly used in a tacit, nonconscious way, and thus
receivers may well not be in a good position to report on their use of such
principles (S. Chen & Chaiken, 1999, pp. 86–87; Petty & Cacioppo,
1986a, p. 35). Instead, the operation of heuristic principles has been
inferred from the observable influence of peripheral cues on persuasive
outcomes. The ELM expects particular patterns of cue effects on
persuasion: The influence of peripheral cues should be greater under
conditions of relatively low elaboration likelihood (e.g., lower topic
relevance) or under conditions in which the cue is relatively more salient.
The primary evidence for the operation of heuristic principles consists of
research results conforming to just such patterns of effect (for some
discussion, see Bless & Schwarz, 1999).

Varieties of Heuristic Principles


Although a number of heuristic principles have been suggested, three
heuristics have received relatively more extensive research attention: the
credibility, liking, and consensus heuristics.14

Credibility Heuristic
One heuristic principle is based on the apparent credibility of the
communicator and amounts to a belief that “statements by credible sources
can be trusted” (for alternative expressions of related ideas, see Chaiken,
1987, p. 4; Cialdini, 1987, p. 175). As discussed in Chapter 10, studies
have indicated that as the personal relevance of the topic to the receiver
increases, the effects of communicator credibility diminish (e.g., Byrne,
Guillory, Mathios, Avery, & Hart, 2012; H. H. Johnson & Scileppi, 1969;
Petty, Cacioppo, & Goldman, 1981; Rhine & Severance, 1970). Similar
results have been obtained when elaboration likelihood has been varied in
other ways (e.g., Janssen, Fennis, Pruyn, & Vohs, 2008; Kumkale,
Albarracín, & Seignourel, 2010). Thus, consistent with ELM expectations,
the peripheral cue of credibility has been found to have greater impact on
persuasive outcomes when elaboration likelihood is relatively low.
Moreover, some research suggests that variations in the salience of
credibility cues lead to corresponding variations in credibility’s effects
(e.g., Andreoli & Worchel, 1978). All told, there looks to be good
evidence for the existence of a credibility heuristic in persuasion.

Liking Heuristic
A second heuristic principle is based on how well the receiver likes the
communicator and might be expressed by beliefs such as these: “People
should agree with people they like” and “People I like usually have correct
opinions” (for alternative formulations of this heuristic, see Chaiken, 1987,
p. 4; Cialdini, 1987, p. 178). When this heuristic is invoked, liked sources
should prove more persuasive than disliked sources. As discussed in more
detail in Chapter 10, the research evidence does suggest that the ordinary
advantage of liked communicators over disliked communicators
diminishes as the personal relevance of the topic to the receiver increases
(e.g., Chaiken, 1980, Experiment 1; Petty, Cacioppo, & Schumann, 1983).
Confirming findings have been obtained in studies in which elaboration
likelihood varied in other ways (e.g., Kang & Kerr, 2006; W. Wood &
Kallgren, 1988) and in studies varying the salience of liking cues (e.g.,
Chaiken & Eagly, 1983): As elaboration likelihood declines or cue
saliency increases, the impact of liking cues on persuasion increases.

Taken together, then, these studies point to the operation of a liking
heuristic that can influence persuasive effects.

Consensus Heuristic
A third heuristic principle is based on the reactions of other people to the
message and could be expressed as a belief that “if other people believe it,
then it’s probably true” (for variant phrasings of such a heuristic, see
Chaiken, 1987, p. 4; Cialdini, 1987, p. 174). When this heuristic is
employed, the approving reactions of others should enhance message
effectiveness (and disapproving reactions should impair effectiveness). A
number of studies now indicate the operation of such a consensus heuristic
in persuasion (for a more careful review, see Axsom et al., 1987). For
example, several investigations have found that receivers are less
persuaded when they overhear an audience expressing disapproval (versus
approval) of the communicator’s message (e.g., Landy, 1972; Silverthorne
& Mazmanian, 1975). (For some related work, see Darke et al., 1998. For
complexities, see Beatty & Kruger, 1978; Hilmert, Kulik, & Christenfeld,
2006; Hodson, Maio, & Esses, 2001; Mercier & Strickland, 2012.)

Other Heuristics
Various other principles have been suggested as heuristics that receivers
may employ in reacting to persuasive messages (e.g., Chang, 2004;
Forehand, Gastil, & Smith, 2004). For example, it may be that the number
of arguments in the message (Chaiken, 1980, Experiment 2) or the sheer
length of the message (W. Wood et al., 1985) can serve as cues that
engage corresponding heuristic principles (“the more arguments, the
better” or “the longer the message, the better its position must be”). But for
the most part, relatively little research evidence concerns such heuristics,
and hence confident conclusions are perhaps premature.

Summary: Peripheral Routes to Persuasion


Under conditions of low elaboration likelihood, the outcome of persuasive
efforts depends less on the valence of receivers’ issue-relevant thinking
than on the operation of heuristic principles, simple decision rules
activated by peripheral cues in the persuasion setting. When receivers are
unable or unmotivated to engage in extensive issue-relevant thinking, their
reactions to persuasive communications will be guided by simpler
principles such as the credibility, liking, and consensus heuristics.

Multiple Roles for Persuasion Variables


One important contribution of the ELM to the general understanding of
persuasion is its emphasizing that a given variable might play different
roles in persuasion under different conditions. Viewed through the lens of
the ELM, a variable might influence persuasion in three broad ways.15
First, it might influence the degree of elaboration (and thus influence the
degree to which central route or peripheral route processes are engaged).
Second, it might serve as a peripheral cue (and so influence persuasive
outcomes when peripheral route persuasion is occurring). Third, it might
influence the valence of elaboration (and so influence persuasive outcomes
when central route persuasion is occurring), by being an argument or by
otherwise biasing (that is, encouraging one or another valence of)
elaboration.16

The ELM emphasizes that a given variable need not play only one of these
roles (e.g., Petty & Cacioppo, 1986a, pp. 204–215; Petty & Wegener,
1998a, 1999). In different circumstances, a variable might affect
persuasion through different mechanisms. For example, consider the
variable of message length (the simple length of a written message). This
might serve as a peripheral cue that activates a length-based heuristic (such
as “longer messages probably have lots of good reasons for the advocated
view”; see W. Wood et al., 1985). When message length operates this way,
longer messages will be more persuasive than shorter ones.

But message length might also (or instead) influence elaboration
motivation. For example, on a highly technical subject, the length of the
message might serve as a sign of whether the message was likely to be
worth close examination. Shorter messages might get little attention
(because receivers would think that the message could not possibly contain
the necessary amount of technical information), whereas longer messages
would be examined more carefully. (For some evidence of such a
phenomenon, see Soley, 1986.) In such a circumstance, obviously, a
longer message would not necessarily be more persuasive than a shorter
one; the persuasiveness of the longer message would turn on the outcome
of the closer scrutiny engendered by the message’s length. For example,
lengthening a message by adding weak arguments might enhance
persuasion for recipients who were not examining the message carefully
but diminish persuasion for recipients who were engaged in close scrutiny
(e.g., Friedrich, Fetherstonhaugh, Casey, & Gallagher, 1996).

Similarly, communicator attractiveness might operate as a peripheral cue
(engaging some version of the liking heuristic), might influence the
amount of elaboration (a communicator’s attractiveness might draw
attention toward or away from the message), or might serve as an
argument (e.g., in advertisements for beauty products) and hence influence
elaboration valence (see, e.g., Puckett, Petty, Cacioppo, & Fischer, 1983).
Another example: The articulation of justification (supporting
argumentation and evidence) in a message might influence the amount of
elaboration (as when the presence of such support leads receivers to think
that paying close attention to the message’s arguments will be
worthwhile), might serve as a peripheral cue (by suggesting the credibility
of the communicator or by activating a heuristic such as “if the message
cites information sources, the position must be worthy of belief”), or might
influence elaboration valence by encouraging more positive thoughts about
the advocated view (see O’Keefe, 1998).

The possibility of different persuasion roles for a single variable implies
considerable complexity in persuasion. Consider, for instance: What will
be the effect (on persuasive outcomes) of varying the communicator’s
attractiveness? The ELM’s analysis implies that no simple prediction can
be made; instead, the effects will be expected to vary depending on
(among other things) whether attractiveness operates as an influence on the
extent of elaboration, as an influence on the valence of elaboration, or as a
peripheral cue. So, for instance, increasing the communicator’s
attractiveness might enhance persuasion (e.g., if attractiveness operates as
a peripheral cue that activates a liking-implies-correctness heuristic, if
attractiveness enhances message scrutiny and the message contains strong
arguments, if attractiveness reduces message scrutiny and the message
contains weak arguments, or if greater attractiveness encourages positive
elaboration by serving as an argument) or inhibit persuasion (e.g., if
attractiveness enhances message scrutiny and the message contains weak
arguments, or if attractiveness reduces message scrutiny and the message
contains strong arguments). (For some illustrations of multiple persuasion
roles for variables, see J. K. Clark, Wegener, & Evans, 2011; Mondak,
1990; Tormala, Briñol, & Petty, 2007.)

Obviously, the key question that arises concerns specifying exactly when a
variable is likely to play one or another role. The ELM offers a general
rule of thumb for anticipating the likely function for a given variable,
based on the overall likelihood of elaboration (Petty, Wegener, Fabrigar,
Priester, & Cacioppo, 1993, p. 354). When elaboration likelihood is low,
then if a variable affects attitude change, it most likely does so by serving
as a peripheral cue. When elaboration likelihood is high, then any effects
of a variable on attitude change probably come about through influencing
elaboration valence. When elaboration likelihood is moderate, then the
effects of a variable on attitude change are likely to arise from affecting
the degree of elaboration (e.g., when some aspect of the persuasive
situation suggests that closer scrutiny of the message will be worthwhile).

There is reason to doubt that this ELM rule of thumb is genuinely
informative, because it amounts to little more than a restatement of the
distinction between the two routes to persuasion. For instance, the
proffered principle says in effect that “when elaboration is high, attitude
change happens through elaboration valence and so anything that affects
attitude change under such conditions does so by influencing elaboration
valence.” This verges on a tautology, in which by definition something
that influences attitude change under conditions of high elaboration must
be affecting elaboration valence. The value of this rule of thumb thus turns
on the degree to which one can independently assess whether peripheral or
central processes are engaged, and such independent assessments are not
easily had (as acknowledged by Petty & Briñol, 2006, p. 217).

However, the ELM’s analysis does point to distinctive predictions about
the different roles of a given variable, based on the operation of
moderating variables. For example, if the physical attractiveness of a
communicator in an advertisement is processed as a peripheral cue (which
activates the liking heuristic), then the nature of the advertised product is
unlikely to influence the cue’s effects. By contrast, if attractiveness
influences elaboration valence because of being processed as an argument,
then the nature of the product should be a moderating variable: The effect
of attractiveness should occur for some products (namely, those for which
attractiveness is a plausible argument, such as beauty products) but not for
others (Petty & Briñol, 2006, p. 218). The implication is that by examining
the effects of a moderator variable, one can distinguish whether a given
property is activating a heuristic or influencing elaboration valence.

The larger point is that the ELM draws attention to the mistake of thinking
that a given variable can influence persuasive outcomes through only one
pathway. Even in the absence of some well-articulated account of the
circumstances under which a given variable will serve in this or that
persuasion role, persuaders will be well-advised to be alert to such
complexities—and the ELM’s underscoring of these intricacies of
persuasion represents an especially important contribution.

Adapting Persuasive Messages to Recipients
Based on the ELM

Given that variations in the likelihood of the message recipient’s
elaboration make for very different underlying persuasion processes, the
ELM would presumably recommend that persuaders tailor their persuasive
efforts to the audience’s likely level of elaboration.

For high-elaboration recipients, the key to effective persuasion will be
having strong arguments. Under conditions of high elaboration, message
receivers will be engaged in close scrutiny of the message’s arguments, so
having high-quality arguments will be important. For low-elaboration
recipients, on the other hand, argument quality will presumably be less of
an influence on persuasive outcomes than will whatever heuristics the
receiver is led to engage. Thus when elaboration is likely to be low, the
ELM would recommend considering closely just what sorts of heuristics
might potentially be available to be activated in the persuasion situation;
for example, if the communicator’s expertise can easily be displayed, then
perhaps one might try to encourage use of the credibility heuristic.17

But sometimes persuaders will have opportunities to influence the
recipient’s degree of elaboration. For persuaders, achieving persuasion
through central route processes is clearly advantageous, so one might think
that persuaders will always want to encourage elaboration. But successful
persuasion using counterattitudinal messages under conditions of high
elaboration is likely to require having strong arguments, and at least
sometimes persuaders might worry that the quality of their arguments will
not be sufficient to overcome recipients’ inclination to counterargue. So
where a persuader expects considerable counterarguing (negative
elaboration), that persuader might well prefer to interfere with the
audience’s ability to elaborate, perhaps by ensuring that something
distracting is at hand. For instance, a scam artist selling dubious financial
products might not want people to give his pitch their undivided attention
—and so makes that pitch while people are eating the free dinner he has
provided.

Commentary
The ELM has stimulated a great deal of research. It is noteworthy that the
ELM provides a framework that offers the prospect of reconciling
apparently competing findings about the role played in persuasion by
various factors. For example, why might the receiver’s liking for the
communicator sometimes exert a large influence on persuasive outcomes
and sometimes little? One possibility is simply that as elaboration varies,
so will the impact of a simple decision rule such as the liking heuristic.
Indeed, the ELM’s capacity to account for conflicting findings from earlier
research makes it an especially important theoretical framework and
unquestionably the most influential recent theoretical development in
persuasion research. Even so, several facets of ELM theory and research
require some commentary.

The Nature of Involvement


In persuasion research, the concept of involvement has been used by a
variety of theoretical frameworks to describe variations in the relationship
that receivers have to the message topic (e.g., social judgment theory’s use
of ego-involvement; see Chapter 2). In the ELM, as noted earlier,
involvement refers specifically to the personal relevance of the message
topic to the recipient. But several commentators have recommended
distinguishing different forms of involvement, arguing that different
varieties of involvement have different effects on persuasion.

For example, B. T. Johnson and Eagly (1989) distinguished outcome-
relevant involvement (in which concrete short-term outcomes or goals are
involved) and value-relevant involvement (in which abstract values are
engaged). Their meta-analytic evidence suggested that high outcome-
relevant involvement produces the pattern of effects expected by the ELM,
in which variations in argument strength produce corresponding variations
in persuasive effects, but that high value-relevant involvement leads
receivers to defend their opinions when exposed to counterattitudinal
messages, regardless of whether the message contains strong or weak
arguments. Petty and Cacioppo (1990), however, have argued that the
same process might underlie these apparently divergent patterns of effect
(for some further discussion, see B. T. Johnson & Eagly, 1990; Levin,
Nichols, & Johnson, 2000; Petty & Cacioppo, 1990; Petty, Cacioppo, &
Haugtvedt, 1992; see also Park, Levine, Westermann, Orfgen, & Foregger,
2007).

As another example, Slater (2002) approached the task of clarifying involvement’s role in persuasion not by identifying different kinds of
involvement but by identifying different kinds of message processing—
and then working backward to consider how different kinds of
involvement (and other factors) might influence different kinds of
processing. Slater’s analysis distinguished outcome-based processing
(motivated by the goal of self-interest assessment), value-affirmative
processing (motivated by the goal of value reinforcement), and hedonic
processing (motivated by the goal of entertainment)—with these
influenced by, respectively, outcome relevance (akin to “outcome-relevant
involvement”), value centrality (akin to “value-relevant involvement”),
and narrative interest. Slater (2002, p. 179) thus argued that “simply
distinguishing value-relevant involvement from the issue- or outcome-
relevant involvement manipulated in ELM research does not go far
enough.”

In the present context, the point to be borne in mind is that the ELM
conception of involvement is a specific one, namely, personal relevance—
and other kinds of “involvement” might not have the same pattern of
effects as is associated with personal relevance.

Argument Strength
In ELM research, argument strength (argument quality) variations have
been defined in an unusual way: in terms of persuasive effects under
conditions of high elaboration. To obtain experimental messages
containing strong or weak arguments, ELM researchers commonly pretest
various messages: A strong-argument message is defined as “one
containing arguments such that when subjects are instructed to think about
the message, the thoughts that they generate are predominantly favorable,”
and a weak-argument message is defined as one in which the arguments
“are such that when subjects are instructed to think about them, the
thoughts that they generate are predominantly unfavorable” (Petty &
Cacioppo, 1986a, p. 32). That is, a high-quality argument is one that, in
pretesting, is relatively more persuasive (compared to a low-quality
argument) under conditions of high elaboration.

By definition, then, high-quality arguments lead to greater persuasion under conditions of higher elaboration than do low-quality arguments.
Thus to say, “Under conditions of high elaboration, strong arguments have
been found to be more effective than weak arguments” is rather like
saying, “Bachelors have been found to be unmarried.” No empirical
research is needed to confirm this claim (and indeed, there would be
something wrong with any empirical research that seemed to disconfirm
such claims). Notice, thus, how misleading the following statement might
be: “A message with strong arguments should tend to produce more
agreement when it is scrutinized carefully than when scrutiny is low, but a
message with weak arguments should tend to produce less overall
agreement when scrutiny is high rather than low” (Petty & Cacioppo,
1986a, p. 44). Appearances to the contrary, these are not empirical
predictions; these are not expectations that might be disconfirmed by
empirical results. If a message does not produce more agreement when
scrutinized carefully than when scrutiny is low, then (by definition) it
cannot possibly be a message with strong arguments.18

This way of defining argument quality reflects the role that argument
quality has played in ELM research designs. In ELM research, argument
quality variations have been used “primarily as a methodological tool to
examine whether some other variable increases or decreases message
scrutiny, not to examine the determinants of argument cogency per se”
(Petty & Wegener, 1998a, p. 352). When argument quality is
operationalized as the ELM has defined it, argument quality variations
provide simply a means of indirectly assessing the amount of elaboration
that has occurred. Thus to see whether a given factor influences
elaboration, one can examine the difference in the relative persuasiveness
of high- and low-quality arguments as that factor varies: High- and low-
quality arguments will be most different in persuasiveness precisely when
message scrutiny is high, and hence examining the size of the difference in
persuasiveness between high- and low-quality arguments provides a means
of assessing the degree of message scrutiny. For instance, one might detect
the effect of distraction on elaboration by noticing that when distraction is
present, there is relatively little difference in the persuasiveness of high-
quality arguments and low-quality arguments, but that without distraction,
there is a relatively large difference in persuasiveness. Such a pattern of
effects presumably reflects distraction’s effect on elaboration, because—
by definition—high- and low-quality arguments differ in persuasiveness
when elaboration is high.

But this way of defining argument quality means the ELM has a curious
lacuna. Consider the plight of a persuader who seeks advice about how to
construct an effective counterattitudinal persuasive message under
conditions of high elaboration. Presumably the ELM’s advice would be
“use strong arguments.” But because argument strength has been defined
in terms of effects (a strong argument is one that persuades under
conditions of high elaboration), this advice amounts to saying “to be
persuasive under conditions of high elaboration, use arguments that will be
persuasive”—which is obviously unhelpful (for some elaboration of this
line of reasoning, see O’Keefe, 2003).

Avoiding this shortcoming will require identification of the particular argument features that give rise to the observed effects of “argument
quality” variations. Unfortunately, the experimental messages used in
ELM experiments appear to have confounded a great many different
appeal variations, making it challenging to identify just which features
might have been responsible for the observed effects.

However, at least one active ingredient in ELM messages has been identified, namely, variation in the perceived desirability of the outcomes
associated with the advocated view (Areni & Lutz, 1988; Hustinx, van
Enschot, & Hoeken, 2007; van Enschot-van Dijk, Hustinx, & Hoeken,
2003; see also B. T. Johnson, Smith-McLallen, Killeya, & Levin, 2004).
That is, one key way in which ELM “strong argument” and “weak
argument” messages have varied is in the desirability of the consequences
of the advocated action or policy.

As an example: One recurring message topic in ELM research has been a proposal to mandate senior comprehensive examinations as a university
graduation requirement. In studies with undergraduates as research
participants, the “strong argument” messages used arguments such as
“with mandatory senior comprehensive exams at our university, graduates
would have better employment opportunities and higher starting salaries,”
whereas the “weak argument” messages had arguments such as “with
mandatory senior comprehensive exams at our university, enrollment
would increase” (see Petty & Cacioppo, 1986, pp. 54–59, for other
examples of such arguments). Obviously, for these message recipients,
these different outcomes almost certainly varied in desirability. And,
correspondingly, it’s not surprising that, at least under conditions of
relatively close attention to message content, the “strong argument”
messages would be more persuasive than the “weak argument” messages
for these receivers.

The identification of this key ingredient permits a redescription of the
research findings concerning the effects of argument strength in terms of
outcome desirability, as follows: When elaboration is low, the
persuasiveness of a message is relatively unaffected by variation in the
perceived desirability of the outcomes, whereas when elaboration is high,
persuasive success is significantly influenced by the perceived desirability
of the outcomes. That is, under conditions of high elaboration, receivers
are led to have more positive thoughts about the advocated view when the
message’s arguments indicate that the advocated view will have outcomes
that the receivers think are relatively desirable than they do when the
arguments point to outcomes that are not so desirable—but this difference
is muted under conditions of low elaboration.19

What is not yet clear is whether there are other message variations that
might function in a way similar to outcome desirability, that is, ones that
serve as an influence on elaboration valence. Put somewhat differently, the
question is: Are there other quality-related features of persuasive appeals
whose variation makes relatively little difference to persuasive outcomes
under conditions of low elaboration, but whose variation makes a more
substantial difference under conditions of high elaboration?20

One natural candidate is outcome likelihood. A general expectancy-value treatment of attitudes (as in Fishbein’s, 1967a, belief-based model,
described in Chapter 4) suggests that attitudes are a joint function of
evaluations (the perceived desirability of the attitude object’s various
characteristics) and likelihood (the likelihood that the object has each of
those different characteristics). So one might expect that messages varying
in the depicted likelihood of outcomes might have effects parallel to those
of messages varying in the depicted desirability of outcomes, such that
variations in outcome likelihood would make a greater difference to
persuasiveness under conditions of high elaboration than under conditions
of low elaboration.
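
This expectancy-value idea can be written compactly. In the conventional notation for a Fishbein-style belief-based model (the symbols below are the standard ones in that literature, not taken from this chapter):

```latex
% Fishbein-style expectancy-value model (conventional notation):
%   A_o : attitude toward object o
%   b_i : belief strength -- the subjective likelihood that o has attribute i
%   e_i : evaluation -- the perceived desirability of attribute i
A_o = \sum_{i=1}^{n} b_i \, e_i
```

On this formulation, varying the depicted desirability of outcomes amounts to varying the e_i terms, and varying the depicted likelihood of outcomes amounts to varying the b_i terms; so, on the face of it, one might expect the two manipulations to matter in parallel ways.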

Little direct evidence bears on this expectation. However, the indirect evidence in hand is not very encouraging. A number of studies have
reported that messages varying in the depicted likelihood of consequences
did not differentially influence persuasive outcomes, but messages varying
in the desirability of depicted outcomes did vary correspondingly in
persuasiveness (e.g., Hass, Bagley, & Rogers, 1975; B. T. Johnson et al.,
2004; for a review, see O’Keefe, 2013a). That is, under conditions in
which elaboration was presumably sufficiently high to permit
consequence-desirability variations to produce differential effects,
consequence-likelihood variations did not produce parallel effects.21

In any case, the general question remains open: There may be other
quality-related message characteristics (in addition to outcome
desirability) that enhance message persuasiveness under conditions of high
elaboration. Identification of such message properties would represent an
important advance in the understanding of persuasion generally and
argument quality specifically.

One Persuasion Process?

The Unimodel of Persuasion


The two persuasion routes sketched by the ELM can be seen to be similar
in a key way: In each route, people are trying to reach conclusions about
what views to hold, and they do so on the basis of evidence that is
available to them. Different sorts of evidence might be relied on in the two
cases (peripheral cues in the peripheral route, the carefully scrutinized
message arguments in the central route), but—it has been argued—there
are not really two fundamentally different underlying processes here.
Instead, there is just one process—the process of reasoning to conclusions
based on evidence. Hence (this analysis suggests) in place of a dual-
process analysis, all that is needed is a unimodel of persuasion. (For some
presentations of the unimodel approach, see Bohner, Erb, & Siebler, 2008;
Kruglanski et al., 2006; Kruglanski & Thompson, 1999a, 1999b; E. P.
Thompson, Kruglanski, & Spiegel, 2000.)

It is important to be clear about exactly how the unimodel approach differs from a framework such as the ELM. The unimodel approach does not
deny, for example, the roles played by motivational and ability variables in
influencing the degree to which evidence is processed. The key difference
is that the unimodel denies, whereas the ELM is said to assert, that a
qualitative difference in persuasion processes arises as a consequence of
whether persuasion occurs through the processing of message contents as
opposed to the processing of extrinsic information (peripheral cues). The
unimodel claims that there is an underlying uniformity to the persuasion
process, no matter which type of information is processed. That is, the
unimodel proposes that there is a “functional equivalence of cues and
message arguments” (E. P. Thompson et al., 2000, p. 91), in the sense that
cues and message arguments simply serve as evidence bearing on the
receiver’s conclusion about whether to accept the advocated view.

One way of expressing this equivalence is to see that both peripheral cues
and message arguments can be understood as supplying premises that
permit the receiver to complete a conditional (“if-then”) form of reasoning.
In the case of peripheral cues, the reasoning can be exemplified by a
receiver who believes that “if a statement comes from an expert, the
statement is correct.” A message from a source that the receiver recognizes
as an expert, then, satisfies the antecedent condition (a statement coming
from an expert), and hence the receiver reasons to the appropriate
conclusion (that the statement is correct). In the case of message
arguments, the reasoning can be exemplified by a receiver who believes
(for instance) that “if a public policy has the effect of reducing crime, it is
a good policy.” Accepting a message argument indicating that current gun
control policies have the effect of reducing crime, then, satisfies the
antecedent condition (that the policy reduces crime), and hence the
receiver reasons to the indicated conclusion (that current gun control
policies are good ones). Thus the unimodel proposes that there is really
only one type of persuasion process, a process that accommodates
different (but functionally equivalent) sources of evidence (viz., cues and
message arguments).
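
The if-then reasoning sketched above is simply modus ponens, and writing it out makes the claimed equivalence vivid. Using the expertise example (with E standing for “the statement comes from an expert” and C for “the statement is correct”):

```latex
% Modus ponens: from "if E then C" together with "E", infer "C"
\frac{E \rightarrow C \qquad E}{C}
```

The message-argument case has exactly the same inferential form; only the content of E and C changes (e.g., E = “the policy reduces crime,” C = “the policy is good”). That formal identity is the unimodel’s point.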

Explaining ELM Findings


One question that immediately arises is how the unimodel might explain
the substantial accumulated evidence supporting the ELM. For instance,
there appears to be considerable evidence showing that receivers vary in
their relative reliance on peripheral cues or message arguments depending
on such factors as the personal relevance of the topic; such evidence seems
to imply that cues and message arguments are not actually functionally
equivalent evidence sources.

The unimodel’s analysis of such research begins with the point that both
peripheral cues and message arguments can vary in their complexity, ease
of processing, brevity, and so forth. The unimodel acknowledges that
peripheral cues are often the sorts of things that are easily processed (and
message arguments are commonly the sorts of things that require more
processing), but that need not be so: “Cue and heuristic information need
not be briefer, less complex, or easier to process than message
information” (Kruglanski & Thompson, 1999b, p. 96).

But (the unimodel suggests) ELM research has commonly confounded the
cue-versus-message contrast with other contrasts—in particular, with
complexity and temporal location. That is, in ELM research, receivers are
offered a simple source of evidence at the beginning of the message in the
form of a peripheral cue (e.g., information about source credibility) and
then later are given a complex source of evidence (in the form of message
arguments). The unimodel analysis suggests that in such a research design,
under conditions of low personal relevance (low motivation to process),
receivers will naturally be more influenced by the brief, easily processed,
initially presented peripheral cue than by the subsequent difficult-to-
process argumentative material; when the message arguments appear later
in the sequence of information—and require more processing than do the
cues—they will likely affect only those receivers who (by virtue of higher
topic relevance) have greater motivation to process. Thus the unimodel
approach argues that the apparent differences between peripheral cues and
message arguments (in their relative impacts on persuasion as personal
relevance varies) do not reflect some general difference between cues and
arguments (as the ELM is taken to assert) but rather a confounding of
evidence type (peripheral cue vs. message argument) and other features of
evidence (brevity, ease of processing, and temporal location). If (the
unimodel suggests) peripheral cues and message arguments were equalized
on these other dimensions, then the putative dual-process differences
between them would evaporate.

As an illustration of research supporting the unimodel’s view, Kruglanski and Thompson (1999b, Study 1) found that when source expertise
information was relatively lengthy, source expertise influenced the
attitudes of receivers for whom the topic was personally relevant but not
the attitudes of receivers for whom the topic was not relevant. In other
words, source expertise information and topic relevance interacted in just
the way that argument quality and topic relevance did in earlier ELM
studies. The apparent implication is that cues (such as expertise) and
arguments function identically in persuasion, once the complexity and
temporal location of each is equalized. (For related research, see Erb,
Pierro, Mannetti, Spiegel, & Kruglanski, 2007; Pierro, Mannetti, Erb,
Spiegel, & Kruglanski, 2005.)

Comparing the Two Models


The unimodel raises both empirical and conceptual issues concerning the
ELM, and these issues are sufficiently complicated that it will take some
time to sort them out. (For some discussion of these and related issues, see,
e.g., Chaiken, Duckworth, & Darke, 1999; Petty & Briñol, 2006; Petty,
Wheeler, & Bizer, 1999; Wegener & Claypool, 1999.)

Empirically, there is room for some uncertainty about exactly when (or,
indeed, whether) the ELM and the unimodel make genuinely different
predictions. The description given here of the unimodel has stressed its
putative contrasts with the ELM, but those contrasts may be less
substantial than is supposed. For instance, presentations of the unimodel
depict the distinction between cues and arguments as crucially important to
the ELM, in that the ELM is seen to treat these as functionally different
influences on persuasive outcomes (as opposed to the unimodel view, in
which cue and argument are simply two content categories and are not
functionally different as sources of evidence in the receiver’s reasoning
processes). It is certainly true that ELM theorists have sometimes used the
terms cue and argument in ways that make these into opposed categories
(e.g., Petty & Wegener, 1999), which invites some misunderstanding.
However, the key distinction for the ELM is not the contrast between
peripheral cues and message arguments but variation along the elaboration
continuum that yields a general trade-off between peripheral processes
(e.g., as represented by the influence of peripheral cues) and central
processes (as represented by elaboration valence, not message arguments
specifically) as influences on persuasive outcomes.22

Indeed, one important source of confusion can be a failure to grasp the ELM’s insistence that a given variable can play different roles in
persuasion—and hence (for example) it is inappropriate to treat source
characteristics as necessarily always and only serving as peripheral cues
(Petty & Briñol, 2006, p. 217). Consider, for instance, the previously
mentioned finding that complex information about source expertise had
more influence on persuasive outcomes when the topic was personally
relevant to receivers than when it was not (Kruglanski & Thompson,
1999b, Study 1). From a unimodel perspective, this is taken to be
inconsistent with the ELM, because the ELM is assumed to expect that
source cues will have a smaller influence on persuasion as topic relevance
increases. But—bearing in mind that a given variable might affect
persuasion through various pathways—the ELM might explain this result
in several ways, including the possibility that expertise information was
processed as an argument or provoked elaboration of self-generated (as
opposed to message) arguments (Petty, Wheeler, & Bizer, 1999, pp. 159–
160).

The general point is that it is not yet clear whether (or exactly how) the
ELM and the unimodel can be made to offer contrasting empirical
predictions. Research findings indicating that communicator
characteristics and message arguments can both function either as
peripheral cues or as influences on elaboration valence—the kinds of
research findings offered as support for the unimodel—are in fact not
necessarily inconsistent with the ELM.

Conceptually, the unimodel does implicitly point to some unclarities in the ELM. Consider, for example, the question of whether it is true by
definition that peripheral cues are easy to process. If part of the very
concept of a peripheral cue is that it is easy to process, then it does not
make sense to speak of there being any “confounding” of cues and
simplicity—and so the unimodel’s suggestion that there might be complex
cues would be conceptually malformed. On the other hand, if peripheral
cues are not necessarily (definitionally) easy to process, then it makes
sense to explore the effects of hard-to-process peripheral cues.

Thus the unimodel has raised some important questions concerning the
ELM. One may hope that continuing attention to these issues will lead to
more focused empirical predictions and better-articulated conceptual
equipment.

Conclusion
The elaboration likelihood model may be seen to contribute two key
insights about persuasion. One is the recognition of the variable character
of topic-related thinking engaged in by message recipients. Because the
extensiveness of topic-relevant thinking varies (from person to person,
from situation to situation, etc.), the central factors influencing persuasive
success vary: Simple heuristic principles may prevail when little
elaboration occurs, but when extensive elaboration is undertaken, then the
character of the message’s contents takes on greater importance. The
second is the recognition that a given variable may play different roles in
the persuasion process. The same variable (in different circumstances)
might influence the degree of elaboration, might influence the valence of
elaboration, and might serve as a peripheral cue—and so might have
different effects on persuasive outcomes depending on the situation. Taken
together, these two ideas offer the prospect of reconciling apparently
conflicting findings in the research literature concerning the role played by
various factors in influencing persuasive effects and mark the ELM as an
important step forward in the understanding of persuasion.

For Review
1. What is elaboration? How can the degree of elaboration be assessed?
Do variations in the amount of elaboration form a continuum or
discrete categories? Describe the general difference between central
and peripheral routes to persuasion. Explain how persuasion can
occur even under conditions of low elaboration.
2. Are the consequences of central route persuasion and peripheral route
persuasion identical? Identify three differences in the consequences of
persuasion’s being achieved through one or the other route. How is
the persistence of persuasion different? How is the strength of the
relationship of attitudes to intentions and behaviors different? How is
resistance to counterpersuasion different?
3. Identify two broad categories of factors that influence the amount of
elaboration undertaken. What is elaboration motivation? Identify two
factors influencing elaboration motivation. Explain how the personal
relevance of the topic (involvement) influences elaboration
motivation. What is “need for cognition”? Explain how need for
cognition influences elaboration motivation. What is elaboration
ability? Identify two factors influencing elaboration ability. What is
“distraction”? Explain how distraction influences elaboration ability.
Explain how prior knowledge influences elaboration ability.
4. In central route persuasion, what is the key determinant of persuasive
outcomes? Explain. Identify two factors that influence elaboration
direction (valence). Explain how the message’s proattitudinal or
counterattitudinal position influences elaboration direction. What is
argument strength (quality)? Explain how argument strength
influences elaboration direction.
5. In peripheral route persuasion, what influences the outcomes of
persuasive efforts? What is a heuristic principle? What activates
heuristic principles? Give three examples of heuristic principles.
What is the credibility heuristic? Explain how it works. Under what
conditions does credibility have relatively greater influence on
persuasive outcomes? What is the liking heuristic? Explain how it
works. Under what conditions does liking have relatively greater
influence on persuasive outcomes? What is the consensus heuristic?
Explain how it works. Under what conditions will the consensus
heuristic have relatively greater influence on persuasive outcomes?
6. Explain the idea that a given variable might play different roles in
persuasion in different circumstances. Describe three different roles
identified by the ELM. Give examples of how a variable might serve
in different roles. What is the ELM’s rule of thumb for expecting
what role a given variable will play? How useful is that rule of
thumb?
7. Describe how persuaders might adapt messages to recipients using
the ELM. How should messages be adapted to high-elaboration
recipients? How should messages be adapted to low-elaboration
recipients? Why might persuaders want to influence the likely amount
of elaboration?
8. How is involvement defined in ELM research? How is this sense of
involvement different from social judgment theory’s ego-
involvement? What various kinds of involvement might be
distinguished?
9. How is argument strength defined in ELM research? Explain how this
definition does not specify the message features that underlie
argument quality variations. Identify one active argument-quality
ingredient in the messages used in ELM research.
10. What is the unimodel of persuasion? Describe how the unimodel
suggests that one process, not two, underlies persuasion effects. How
does the unimodel attempt to explain ELM findings (about, e.g., the
relative influence of peripheral cues and argument quality)? Do the
unimodel and the ELM make different predictions? Explain how the
unimodel raises questions about the clarity of some ELM concepts.
11. Identify and explain two key insights about persuasion contributed by
the ELM.

Notes
1. There has been some variation in the ELM’s definition of elaboration.
Elaboration has sometimes been conceived in broad terms (as here),
namely, engaging in issue-relevant thinking (e.g., Petty & Wegener, 1999,
p. 46). But elaboration has also been defined more narrowly as issue-
relevant thinking undertaken with the motivation of impartially
determining the merits of the arguments (e.g., Cacioppo, Petty, &
Stoltenberg, 1985, p. 229) or as message scrutiny (e.g., Petty & Cacioppo,
1986a, p. 7). But the broadest definition is the most common.

2. Variations in the conceptualization of elaboration (mentioned in note 1)
have produced corresponding variations in proposed assessments of
elaboration (see, e.g., Cacioppo et al., 1985, p. 229). But most procedures
for the assessment of elaboration (as discussed by Petty & Cacioppo,
1986a, pp. 35–47) appear to represent indices of the amount of issue-
relevant thinking generally.

3. It can be tempting to say that central route persuasion makes for “stronger” attitudes, but it is not plain that this is an entirely satisfactory
account, for several reasons. First, dangers of tautology lurk; if a “strong”
attitude is one that by definition is resistant to persuasion, then attitude
strength cannot possibly be an explanation of resistance. Second, a number
of different strength-related attitude properties are distinguishable, and
there is reason to suppose that these are best treated as distinct attributes
rather than being bundled together in an omnibus concept (see Visser,
Bizer, & Krosnick, 2006).

4. Although it seems common for this research literature to be glossed as showing that positive moods lead to less elaboration than do negative
moods, this characterization seems not to quite capture the complexity in
the literature. In Hullett’s (2005) meta-analysis, the mean correlation of
argument strength with attitude (with this correlation serving as a proxy
for elaboration, because it presumably reflects sensitivity to argument-
strength variation) in positive-mood conditions (.29 across 12 effect sizes)
was not significantly different from that in negative-mood conditions (.39
across 21 effect sizes). (For additional discussion of this meta-analysis, see
Chapter 12, note 4.) And several studies have reported significantly greater
elaboration (or elaboration proxies) in positive moods than in negative
moods, at least under some conditions (e.g., Das & Fennis, 2008; Das,
Vonkeman, & Hartmann, 2012; Sinclair, Moore, Mark, Soldat, & Lavis,
2010; Ziegler & Diehl, 2011).

5. This research support largely consists of evidence showing that as personal relevance increases, the effects of argument quality increase and
the effects of peripheral cues decrease.

6. Other properties captured under the term involvement may not have the
same effects as does personal relevance. As a simple illustration, the
effects on message scrutiny (that is, close attention to the message’s
contents) may not be the same for increasing personal relevance and for
increasing commitment to a position. As personal relevance increases,
message scrutiny increases, but as position commitment increases, one can
imagine message scrutiny either increasing or decreasing (e.g., increasing
when there are cues that message scrutiny will yield position-bolstering
material but decreasing when scrutiny looks to yield position-threatening
material). For some general discussions of involvement, see B. T. Johnson
and Eagly (1989, 1990), K. D. Levin et al. (2000), Petty and Cacioppo
(1990), Slater (2002), and Thomsen, Borgida, and Lavine (1995).

7. Across the 11 cases reviewed by Cacioppo, Petty, et al. (1996, p. 229), the mean effect corresponds to a correlation of roughly .15. There is not a
corresponding difference in the influence of peripheral cues. That is,
persons low in need for cognition are not dependably more influenced by
peripheral persuasion cues than are those high in need for cognition (for a
review and discussion, see Cacioppo, Petty, et al., 1996, p. 230).

8. Because need-for-cognition indices are positively correlated with various measures of intellectual ability (mean correlations are roughly in
the range of .15 to .30; for a review, see Cacioppo, Petty, et al., 1996, p.
214), one might wonder whether the apparent effects of need for cognition
on elaboration likelihood should be ascribed to differences in elaboration
motivation or differences in elaboration ability (Chaiken, 1987, pp. 16–
17). The evidence in hand appears to favor a motivational difference
explanation rather than an ability difference explanation (e.g., Cacioppo,
Petty, Kao, & Rodriguez, 1986, Study 1; See, Petty, & Evans, 2009); for
instance, the presence of additional motivational incentives (to engage in
elaboration) can minimize these effects of need for cognition (e.g., Priester
& Petty, 1995), suggesting that a difference in dispositional motivation
(not an ability difference) underlies the effects.

9. The ELM’s analysis of distraction effects is actually a bit more complex than this. For instance, the ELM acknowledges that when the distraction is
so intense as to become the focus of attention, thus interfering with even
minimal message reception, one does not expect to find the otherwise
predicted distraction effects. For a more careful discussion, see Petty and
Cacioppo (1986a, pp. 61–68).

10. The studies by W. Wood (1982), W. Wood and Kallgren (1988), and
W. Wood et al. (1985) all use the same message topic with (it appears)
similar messages, which means that this research evidence does not
underwrite generalizations as confident as one might prefer.

11. As a further complexity, however, consider that prior knowledge might have still other effects. For instance, although prior knowledge may enhance elaboration ability, it could also diminish elaboration motivation
—as might happen if receivers think that they have sufficient information
and so expect that there would be little gained from close processing of the
message (see B. T. Johnson, 1994; Trumbo, 1999). (For another example
of diverse effects of receiver knowledge, see Biek, Wood, & Chaiken,
1996; for a general discussion, see W. Wood, Rhodes, & Biek, 1995.)

12. The organization of this description of the ELM has separated influences on the amount of elaboration (factors affecting elaboration
motivation and/or elaboration ability) and influences on the valence
(evaluative direction) of elaboration (following, e.g., Petty & Wegener,
1999, p. 43, Figure 3.1). Alternatively (see Petty & Wegener, 1999, pp.
52–59), one might distinguish (a) variables that affect message processing
“in a relatively objective manner” (i.e., that influence elaboration
motivation and/or ability in such a way as to affect positive and negative
thoughts more or less equally; e.g., distraction interferes with elaboration
ability generally) and (b) variables that affect message processing “in a
relatively biased manner” (i.e., that influence elaboration motivation
and/or ability in selective ways that encourage a particular evaluative
direction to thinking; e.g., a message’s counterattitudinal stance might
enhance motivation to engage in specifically negative elaboration, that is,
counterarguing).

13. The ELM suggests that there are other peripheral route processes in
addition to heuristic principles—specifically, “simple affective processes”
(Petty & Cacioppo, 1986a, p. 8) in which attitudes change “as a result of
rather primitive affective and associational processes” (p. 9) such as
classical conditioning. Indeed, this additional element is one important
difference between the ELM and the HSM. The HSM’s systematic
processing mode corresponds to the ELM’s central route, and the HSM’s
heuristic mode refers specifically to the use of heuristic principles of the
sort discussed here.

Although the ELM’s peripheral route is thus broader than is the HSM’s
heuristic mode, here the peripheral route is treated in a way that makes it
look like the heuristic mode. That is, the present treatment focuses on the
simple rules/inferences (the heuristic principles) rather than on the
primitive affective processes that are taken to also represent peripheral
routes to persuasion. There are several reasons for this. First, the
nonheuristic peripheral route processes have not gotten much attention in
ELM research. Second, the ELM could abandon a belief in any particular nonheuristic peripheral process (say, classical conditioning) with little
consequence for the model, which suggests that the ELM’s commitment to
any specific such process is inessential to the model. Third, it may be
possible to translate some apparently nonheuristic peripheral processes
into heuristic principle form (e.g., mood effects might reflect a tacit
heuristic such as “if it makes me feel good, it must be right”).

14. As Chaiken (1987, p. 5, n. 1) pointed out, a number of heuristic principles appear to be represented in the various compliance principles
identified by Cialdini (1984; Cialdini & Trost, 1998); see, similarly, Petty
and Briñol (2012b).

15. Presentations of the ELM have expressed this “multiple roles” idea in
various ways (e.g., Petty & Briñol, 2008, 2010), but these have not always
been as clear as one might like. For instance, one formulation is that “the
ELM notes that a variable can influence attitudes in four ways: (1) by
serving as an argument, (2) by serving as a cue, (3) by determining the
extent of elaboration, and (4) by producing a bias in elaboration” (Petty &
Wegener, 1999, p. 51). But the ways in which (what would conventionally
be called) arguments can influence attitudes, from the perspective of the
ELM, seem to be (a) by serving as a cue (e.g., when the number of
arguments activates a heuristic such as “there are many supporting
arguments so the position must be correct”), (b) by influencing the extent
of elaboration (as when a receiver thinks that “there seem to be a lot of
arguments here so maybe it’s worth looking at them closely”), and (c) by
producing a bias in elaboration (i.e., by influencing the evaluative
direction of elaboration). That is, the roles of arguments appear already
subsumed in the other three roles (peripheral cue, influence on degree of
elaboration, and influence on valence of elaboration); it is not clear how
arguments might otherwise function in persuasion within an ELM
framework. Hence the presentation here does not distinguish “serving as
an argument” as a distinct role for a persuasion variable.

At least part of the confusion appears to concern the ELM’s use of the
word argument, about which three points might be noted. First,
“arguments” are sometimes conceived of as “bits of information contained
in a communication that are relevant to a person’s subjective determination
of the true merits of an advocated position” (Petty & Cacioppo, 1986b, p.
133); but taken at face value, such a definition would accommodate at
least some peripheral cues as arguments (after all, from the perspective of
the heuristic processor, a cue is a bit of information relevant to assessing the true merits of the advocated view—it just happens to provide a shortcut
to such assessment), which seems a unimodel-like view (Kruglanski &
Thompson, 1999b), surely to be resisted by the ELM. Second, “argument”
and “cue” sometimes appear to be used as shorthand to cover anything that
affects, respectively, central route and peripheral route persuasion (e.g.,
Petty & Wegener, 1999, p. 49). But when they are used this way, it is not
clear why argument-based persuasion roles are to be distinguished from
persuasion roles involving influencing elaboration valence (given that
elaboration valence is presumably the engine of persuasion within central
route processes). Third, distinguishing “serving as an argument” does at
least underscore the broad possible application of “argument” within an
ELM perspective. For example, the communicator’s physical
attractiveness is recognized by the ELM as potentially not simply a
peripheral cue but also an argument (as in advertisements for beauty
products). Still, when attractiveness serves this argumentative role, it
presumably influences persuasive effects by influencing elaboration
valence (just as arguments of more conventional form do).

16. One additional possible role for variables is as influences on metacognitive states (e.g., Petty & Briñol, 2010, pp. 230–232) such as
thought confidence (i.e., confidence in the accuracy or correctness of one’s
thoughts or beliefs) or attitude confidence (i.e., confidence in the accuracy
or correctness of one’s attitude). These metacognitive effects appear to
represent an expansion of the sorts of persuasion-relevant outcomes that
researchers might examine. The implicit suggestion is that in addition to
assessing attitude (valence and extremity of a general evaluation), one
might also assess (a) putative determinants of attitude (such as thought
confidence; Briñol & Petty, 2009a, 2009b; Petty & Briñol, 2010, pp. 230–
231) and (b) properties of attitude other than valence and extremity (e.g.,
attitudinal certainty or confidence; Petty, Briñol, Tormala, & Wegener,
2007, pp. 260–262).

With respect to metacognitive states that might be determinants of attitude: Having a sound account of the determinants of attitude is very much to be
hoped for. Considerable work has already explored the roles of belief
evaluation and belief likelihood as possible determinants (see Chapter 4).
It may be that additional factors, such as belief confidence (thought
confidence), will also have some role to play, but this is not yet entirely
clear. (For example, it is not plain that belief confidence is so
straightforwardly an influence on attitude in the way that belief evaluation
is: e.g., Bennett & Harrell, 1975; Feldman, 1974.) In principle, at least, metacognitive properties that influence attitude provide persuaders with
additional avenues to attitude change. However, such metacognitive states
presumably—though this is not entirely clear—influence attitude change
only under conditions of relatively high elaboration (i.e., only under
conditions in which attributes of thoughts, such as their valence or
confidence, influence attitudes). So one might treat the roles of
“influencing elaboration valence” and “influencing metacognitive states
that affect attitudes” as representing two concrete realizations of a single
more abstract role, namely, “influencing attitude-relevant thought
properties” (valence, confidence, likelihood, and so forth). But having a
clear picture of all this will require a careful enumeration of attitude-
relevant thought properties and a description of their relationships and
effects, and this task is not to be underestimated (for one such sketch, see
Petty, Briñol, Tormala, & Wegener, 2007).

With respect to metacognitive states that are properties of attitude: Additional attitudinal properties, such as certainty or importance, represent
additional possible targets for persuasive efforts. For example, a persuader
might not want (or need) to make an existing attitude any more positive
but might want to make people more confident in those positive attitudes.
Pursuing research on these lines will want a clear conceptual treatment of
these various attitudinal properties (for some discussion, see Fabrigar,
MacDonald, & Wegener, 2007; Visser, Bizer, & Krosnick, 2006).
Increasing research attention is being given especially to attitude certainty
(e.g., Druckman & Bolsen, 2012; Nan, 2009; Tormala & Rucker, 2007)
and attitudinal ambivalence (e.g., Conner & Armitage, 2008; van
Harreveld, Schneider, Nohlen, & van der Pligt, 2012). Research on the
determinants and consequences of these additional attitudinal properties is
not yet as extensive as that concerning attitude valence and extremity, but
it is an extraordinarily promising new line of development.

17. Not a few investigators have tried to design message formats or appeals so as to adapt messages to different levels of need for cognition
(NFC)—without much success. For example, Steward, Schneider, Pizarro,
and Salovey (2003) compared the effects of a simple message (with
framed cartoons, grammatically simple sentences, and so on) and a
complex message (no pictures, detailed statistical information, and so on)
on recipients varying in NFC, expecting the simple message to be more
persuasive than the complex message for those low in NFC, with the
reverse effect expected for those high in NFC; no such effect obtained. For
similar failures to find expected interaction effects involving NFC and message variations, see McKay-Nesbitt, Manchanda, Smith, and Huhmann
(2011), Rosen and Haaga (1998), and Williams-Piehota, Pizarro, Silvera,
Mowad, and Salovey (2006); for an exception, see Bakker (1999).

18. Another example: “Subjects led to believe that the message topic (e.g.,
comprehensive exams) will (vs. won’t) impact on their own lives have also
been shown to be less persuaded by weak messages but more persuaded by
strong ones” (Chaiken & Stangor, 1987, p. 594). Despite the statement’s
appearance, this is not a discovery. It is not an empirical result or finding,
something that research “shows” to be true, or something that could have
been otherwise (given the effect of topic relevance on elaboration). The
described relationship is true by definition.

19. Indeed, when elaboration is low as a consequence of low personal relevance of the topic (low receiver involvement), then an even more
pointed redescription is possible: When a message recipient is directly
affected by the advocated policy, the desirability of the claimed outcomes
matters a great deal—but when the policy isn’t personally relevant,
receivers are less affected by the apparent desirability of the outcomes
(after all, the outcomes aren’t going to happen to them). Unsurprising,
really: When the outcomes affect the message recipient directly, the
desirability of those outcomes becomes especially important as an
influence on the message’s persuasiveness.

20. The importance of this undertaking (identifying quality-related message characteristics other than outcome desirability that influence
persuasiveness under conditions of high elaboration) is magnified when
one recognizes the commonality with which personal relevance
(“involvement”) variations have been used as an experimental means of
inducing variations in elaboration likelihood. The ELM’s claim is that
“argument quality variations make a larger difference to persuasive
outcomes under conditions of high elaboration than under conditions of
low elaboration”—but this is different from the narrower claim that
“outcome desirability variations make a larger difference to persuasive
outcomes under conditions of high elaboration than under conditions of
low elaboration,” and still more different from the even narrower claim
that “outcome desirability variations make a larger difference to persuasive
outcomes under conditions of high personal relevance than under
conditions of low personal relevance.” This last claim is amply supported
by ELM research, but that research is not necessarily evidence for any
broader claims.

As a first step in seeing what additional evidence is needed, consider
whether it is possible to have high elaboration without high personal
relevance (high involvement). It is certainly possible to have low
elaboration without low personal relevance (e.g., when personal relevance
—and so elaboration motivation—is high, distraction might interfere with
elaboration ability and so produce low elaboration despite high personal
relevance), but it is difficult to imagine conditions under which elaboration
would be high while personal relevance was low.

If high personal relevance is indeed a necessary condition for high elaboration, then ELM-related descriptions ought to reflect that
relationship. For example, high elaboration should not be described simply
as something that might be influenced by (among other things) personal
relevance but rather as something that is entirely dependent on the level of
personal relevance.

If, on the other hand, high personal relevance is not required for high
elaboration, then additional research evidence is needed to fill the gap
between the narrower (and well-supported) claim that “outcome
desirability variations make a larger difference to persuasive outcomes
under conditions of high personal relevance than under conditions of low
personal relevance” and the broader claim that “argument quality
variations make a larger difference to persuasive outcomes under
conditions of high elaboration than under conditions of low elaboration.”
What is needed is evidence that parallel effects (parallel to those observed
with outcome desirability variations and personal relevance variations) can
be obtained when (a) argument quality is varied but outcome desirability is
held constant and (b) elaboration is varied but personal relevance is low. It
is not plain that such evidence is in hand.

21. It may be that outcome likelihood variations just don’t have the same
effects that outcome desirability variations do, or it might be that the
amount of message scrutiny required to yield outcome likelihood effects is
higher than that needed to produce outcome desirability effects (so that a
given level of elaboration might be high enough for recipients to be
affected by the apparent desirability of the depicted outcomes but not yet
high enough for recipients to be affected by the relative likelihood of those
outcomes).

22. In unimodel presentations and research, the "cue versus argument" distinction sometimes seems to be cast as a "source versus message" distinction (e.g., Kruglanski & Thompson, 1999b, p. 84). But this also
does not seem to capture the ELM’s assertions. The ELM does not
partition source variables and message variables as having intrinsically
different roles to play in persuasion but, on the contrary, emphasizes that
each category of variable can serve different persuasion roles in different
circumstances (Petty & Briñol, 2006, p. 217; Petty et al., 1999, p. 157;
Wegener & Claypool, 1999, pp. 176–177).

Chapter 9 The Study of Persuasive Effects

Experimental Design and Causal Inference


The Basic Design
Variations on the Basic Design
Persuasiveness and Relative Persuasiveness
Two General Challenges in Studying Persuasive Effects
Generalizing About Messages
Variable Definition
Conclusion
For Review
Notes

The research to be discussed in the next three chapters is, overwhelmingly, experimental research that systematically investigates the influence that
various factors (communicator characteristics, message variations, and so
on) have on persuasive outcomes. This chapter first provides some general
background on the underlying logic of such experimental research and
then discusses some challenges that arise in the study of persuasive effects.

Experimental Design and Causal Inference


Various experimental arrangements are used in persuasion effects
research, but these can usefully be thought of as variations on a basic
design.

The Basic Design


The simplest sort of research design employed in the work to be discussed
is an experimental design in which the researcher manipulates a single
factor (the independent variable) to see its effects on persuasive outcomes
(the dependent variable). For instance, an investigator who wishes to
investigate the effects of explicit conclusion drawing on attitude change
might design a laboratory investigation of the following sort. The
researcher prepares two persuasive messages identical in every respect
except that in one message, the persuader’s conclusion is drawn explicitly at the end of the message (the explicit conclusion message), whereas in the
other message, the persuader’s conclusion is left implicit (the implicit
conclusion message). When participants in this experiment arrive at the
laboratory, their attitudes on the persuasive topic are assessed, and then
they receive one of the two messages; which message a given participant
receives is a matter of chance, perhaps determined by flipping a coin. After
exposure to the persuasive message, receivers’ attitudes are assessed again
to ascertain the degree of attitude change produced by the message.
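
The coin-flip idea in this paragraph can be sketched in a few lines of code; this is a minimal illustration only, and the participant labels and group sizes are invented rather than taken from any actual study:

```python
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]  # hypothetical IDs

random.seed(0)  # fixed seed so the sketch is reproducible
random.shuffle(participants)

# Split the shuffled list in half: chance alone determines who hears
# which message, so systematic differences between the two groups
# (e.g., one group being generally more persuadable) are unlikely.
half = len(participants) // 2
explicit_group = participants[:half]
implicit_group = participants[half:]
print(explicit_group, implicit_group)
```

The same logic scales to any number of participants and conditions: shuffle, then partition.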

Suppose that (following conventional statistical procedures) the results indicate reliably greater attitude change for those receiving the explicit
conclusion message than for those receiving the implicit conclusion
message. In considering how such a result might be explained, one can
rule out systematic bias in assigning participants to hear one or the other of
the messages because participants were randomly assigned to hear
messages. For instance, one can confidently say that it is unlikely that
those hearing the explicit conclusion message were people who are just
generally more easily persuaded than those hearing the implicit conclusion
message because participants were randomly distributed across the two
groups.

The obvious explanation for the obtained results, of course, is the presence
or absence of an explicit conclusion. Indeed, because this is the only factor
that varies between the two messages, it presumably must be the locus of
the observed differences. This is the general logic of experimental designs
such as this: These designs are intended to permit unambiguous causal
attribution precisely by virtue of experimental control over factors other
than the independent variable.

Variations on the Basic Design


There are innumerable ways in which this basic experimental arrangement
might be varied. For example, one might dispense with the initial attitude
assessment, reasoning that the random assignment of participants to the
two experimental conditions would likely ensure that the two groups
would have roughly comparable initial attitudes; this is commonly called a
posttest-only design (because there would be only a postmessage
assessment of attitude). Or an investigator might create an independent
variable with more than two conditions (more than two levels). For
instance, one might compare the persuasive effects of communicators who
are high, moderate, or low in credibility.

The most common and important variation, however, is the inclusion of
more than one independent variable in a single experiment. Thus (for
instance) rather than doing one experiment to study implicit versus explicit
conclusions and a second study to examine high versus moderate versus
low credibility, a researcher could design a single investigation to study
these two variables simultaneously. This would involve creating all six
possible combinations of conclusion type and credibility level (3
credibility conditions × 2 conclusion type conditions = 6 combinations).

Experimental designs with more than one independent variable permit the
detection of interaction effects involving those variables. An interaction
effect is said to occur if the effect of one independent variable depends on
the level of another independent variable; conversely, if the effect of one
variable does not depend on the level of another variable, then no
interaction effect exists. For example, if the effect of having an implicit or
explicit conclusion is constant, no matter what the credibility of the source,
then no interaction effect exists between credibility and conclusion type.
But if the effect of having an implicit or explicit conclusion varies
depending on the credibility of the source (say, if high-credibility sources
are most effective with explicit conclusions, and low-credibility sources
most effective with implicit conclusions), then an interaction effect
(involving credibility and conclusion type) exists; the effect of one
variable (conclusion type) depends on the level of another (credibility).1
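
As a minimal numerical sketch of this definition (the cell means below are invented for illustration, not results from any study), one can compute the effect of conclusion type at each credibility level and check whether it is constant:

```python
# Hypothetical mean attitude change in each cell of a 2 (conclusion type)
# x 3 (credibility) design; the numbers are invented for illustration.
means = {
    ("explicit", "high"): 4.0, ("explicit", "moderate"): 3.0, ("explicit", "low"): 2.0,
    ("implicit", "high"): 3.0, ("implicit", "moderate"): 2.0, ("implicit", "low"): 1.0,
}

# The effect of conclusion type at each credibility level is the
# explicit-minus-implicit difference in mean attitude change.
effects = {
    cred: means[("explicit", cred)] - means[("implicit", cred)]
    for cred in ("high", "moderate", "low")
}
print(effects)  # {'high': 1.0, 'moderate': 1.0, 'low': 1.0}

# A constant difference across credibility levels means no interaction;
# if the difference varied with credibility (say, positive at high
# credibility but negative at low), an interaction effect would exist.
no_interaction = len(set(effects.values())) == 1
print(no_interaction)  # True
```

In actual research the comparison is made with inferential statistics rather than by eyeballing cell means, but the underlying question is the same one this snippet asks.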

Persuasiveness and Relative Persuasiveness


These experimental designs are meant to provide information about the
relative persuasiveness of two or more messages—not the absolute
persuasiveness of any one message. So, for example, an experiment that
found that explicit conclusion messages are more persuasive than implicit
conclusion messages would not necessarily show that explicit conclusion
messages are highly persuasive. That is, the research question in these
studies is characteristically not “How persuasive are messages of kind X?”
but rather “Which is more persuasive, kind X or kind Y?” So the question
is not “How persuasive are messages with explicit conclusions?” but
“Which is more persuasive, messages with explicit conclusions or
messages with implicit conclusions?”2

There is much potential for misunderstanding here. Imagine, for example, that research underwrites a general conclusion that messages with explicit conclusions are generally more persuasive than those with implicit
conclusions. Message designers should not think that having explicit
conclusions will somehow automatically make their messages highly
persuasive. Messages with explicit conclusions may be relatively
persuasive compared with those with implicit conclusions, but that does
not mean that explicit conclusion messages are inevitably highly
persuasive in absolute terms. The larger point is that the research under
discussion here can certainly provide useful information to message
designers about how to enhance message persuasiveness (by creating
messages of one sort rather than another), but it does not offer evidence
bearing on the absolute persuasiveness of any given kind of message.3

Two General Challenges in Studying Persuasive Effects

Two noteworthy general challenges arise in investigating factors
influencing the effectiveness of persuasive messages. One of these
concerns the difficulty in making reliable generalizations about the effects
of message types; the other concerns the task of defining independent
variables in studies of persuasive effects.

Generalizing About Messages


The earlier description of experimental design might make it seem easy to
arrive at generalizations about factors influencing persuasive effects. To
compare the persuasive effects of (for example) explicit and implicit
conclusions, one simply does an experiment of the sort previously
described: Create two versions of a given message (one of each conclusion
type), and see whether there is any difference in persuasive effect. Indeed,
this is overwhelmingly the most common sort of experimental design used
in studies of persuasive effects.

But this experimental design has important weaknesses, at least if one is interested in arriving at dependable generalizations about the persuasive
effects of variations in message features (such as implicit vs. explicit
conclusions). This design uses a single message to represent each general
category (level, type) of the message variable. That is, the experiment
compares one particular instance of an explicit conclusion message and
one particular instance of an implicit conclusion message. Such single-message designs create two important barriers to generalization: One is that the design does not permit unambiguous causal attribution; the other is
that the design is blind to the possibility that the effects of a given message
factor may not be constant (uniform) across different messages (Jackson &
Jacobs, 1983).

Ambiguous Causal Attribution


Although the logic of experimental research is designed to permit clear
and unambiguous causal attribution, single-message designs inevitably
create some ambiguity concerning the cause of any observed differences.
This ambiguity arises because manipulating the variable of interest (say,
implicit vs. explicit conclusion) inevitably means also concomitantly
manipulating other variables that are not of interest.

For example, suppose a researcher created the implicit and explicit conclusion messages in the following way. First, the explicit conclusion
message was written. Then, to create the implicit conclusion message, the
researcher simply eliminated the final paragraph (which contained the
explicit conclusion). These two messages differ in conclusion type, but
that is not the only thing that distinguishes the two messages. For one
thing, the explicit conclusion message is now longer than the implicit
conclusion message.

It is probably apparent what difficulty this poses for arriving at generalizations. If the persuasiveness of the two messages differs, one’s
initial inclination might well be to explain that difference as resulting from
the type of conclusion used. But one could equally well suppose (given the
evidence) that it was message length, not conclusion type, that created the
difference. Worse, these are not the only two possibilities. The explicit
conclusion message might be more repetitive than the implicit conclusion
message, it might seem more insulting (because it says the obvious), or it
might be more coherent or better organized, and so on. The problem is that
one does not know whether conclusion type or some other variable leads to
the observed difference in persuasiveness.

There is another way of expressing this same problem: In a single-message design, the manipulation of a given message variable can be described in
any number of ways, and, consequently, problems of causal attribution and
generalization arise. Consider, for example, the following experimental
manipulation. Two persuasive messages are prepared arguing in favor of
making the sale of cigarettes illegal. Both messages emphasize the harmful physical consequences of smoking and indeed are generally similar, except
for the following sort of variation. Message A reads, “There can be no
doubt that cigarette smoking produces harmful physical effects,” whereas
Message B reads, “Only an ignorant person would doubt that cigarette
smoking produces harmful physical effects.” The statement in Message A,
“It is therefore readily apparent that the country should pass legislation to
make the sale of cigarettes illegal,” is replaced in Message B by the
statement, “Only the stupid or morally corrupt would oppose passage of
legislation to make the sale of cigarettes illegal” (with four such alterations
in the messages).

What is the independent variable under investigation here? That is, how
should this experimental manipulation be described? Framing some causal
generalization will require that the difference between the two messages be
expressed somehow—but exactly how? Several different answers have
been offered. The original investigators described this manipulation as a
matter of “opinionated” as opposed to “nonopinionated” language (G. R.
Miller & Baseheart, 1969), but others have characterized it as varying "language intensity" (Bradac, Bowers, & Courtright, 1980, pp. 200–201),
having a “confident style in debating” (Abelson, 1986, p. 227), or using a
“more dynamic” rather than “subdued” style (McGuire, 1985, p. 270). Of
course, not even these exhaust the possibilities. For instance, one could
describe this as a contrast between extreme and mild (or nonexistent)
denigration of those holding opposing views.

These different descriptions of the experimental manipulation could all be correct, but of course they are not identical. Unfortunately, if one wishes to
frame a causal generalization from research using this concrete
experimental manipulation, one must choose some particular description.
But which one? Given this single-message design, all the various
interpretations are equally good—which is to say, researchers cannot make
the desired sorts of unambiguous causal attributions.

Nonuniform Effects of Message Variables


A second barrier to generalization created by single-message designs
arises from the possibility (or probability) that the effect of a given
message variable will not be uniform (constant) across all messages.

Consider again the example of a single-message study examining the effects of having implicit versus explicit conclusions. Suppose that this study found the explicit conclusion message to be significantly more
persuasive than the implicit conclusion message (and let’s overlook the
problem of deciding that it was conclusion type, not some other factor, that
was responsible for the difference). One should not necessarily conclude
that having explicit conclusions will always improve the effectiveness of a
message. After all, there might have been something peculiar about the
particular message that was studied (remember that only one message was
used). Perhaps something (unnoticed by the researchers) made that
particular message especially hospitable to having an explicit conclusion—
maybe the topic, maybe the way the rest of the message was organized, or
maybe the nature of the arguments that were made. Other messages might
not be so receptive to explicit conclusions.

To put that point more abstractly: The effect of a given message variable
may not be uniform across messages. Some messages might be helped a
lot by having an explicit conclusion, some helped only a little, and some
even hurt by it. But if that is true, then looking at the effects of conclusion
type on a single message does not really provide a good basis for drawing
a general conclusion. So, once again, the typical single-message design
used in persuasion effects research creates an obstacle to dependable
message generalization because it overlooks the possibility of nonuniform
effects across messages.4
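The possibility of nonuniform effects can be made concrete with a small numerical sketch. The effect values below are invented for illustration only; they are not from any actual study. The sketch assumes each of ten hypothetical messages has its own true effect of adding an explicit conclusion: some helped a lot, some barely, one hurt.

```python
import math

# Invented (hypothetical) per-message effects of adding an explicit
# conclusion -- expressed on a correlation-like scale for illustration.
true_effects = [0.40, 0.30, 0.25, 0.15, 0.10, 0.05, 0.02, 0.00, -0.05, -0.10]

mean = sum(true_effects) / len(true_effects)
sd = math.sqrt(sum((e - mean) ** 2 for e in true_effects) / len(true_effects))

# A single-message design samples exactly one of these effects, so its
# "general" conclusion could land anywhere from -0.10 to 0.40.
print(f"mean effect across messages: {mean:.3f}")
print(f"spread (SD) across messages: {sd:.3f}")
print(f"single-message estimates range from {min(true_effects)} to {max(true_effects)}")
```

Notice that on these invented numbers the spread across messages exceeds the mean effect, the pattern note 4 reports as common in meta-analytic reviews: any one message can badly misrepresent the average.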

Designing Future Persuasion Research


It has probably already occurred to the reader that there is a
straightforward way of dealing with these two obstacles to dependable
message generalization. Because those two obstacles arise from the use of
a single message to represent an entire category of messages, the
straightforward solution is to use multiple messages to represent each
category (Jackson & Jacobs, 1983; for discussion of parallel approaches in
other contexts, see Highhouse, 2009; Wells & Windschitl, 1999). For
example, a study of implicit versus explicit conclusions would want to
have many instances of each message type with as much variation within a
category as one could achieve (variation in topic, organization, length,
etc.).

With such a multiple-message design, the possibility of nonuniform effects across messages is acknowledged. There is no presumption that the effect
of conclusion type will be constant across messages; on the contrary, the
design may permit the detection of variation in the effect that conclusion type has across messages. And the chances for unambiguous causal
attribution are improved by such a design: Given the variation within the
set of messages for a given conclusion type, the researcher can rule out
alternative explanations and be more confident in attributing observed
differences to the sort of conclusion used.5

Beyond the desirability of using multiple-message designs, the generalization problems associated with single-message designs also have
implications for how experimental messages are constructed in persuasion
research. A number of complex considerations bear on the question of how
to construct (or obtain) experimental messages (for discussion, see Bradac,
1986; Jackson, 1992, pp. 131–149, 1993; Jackson & Jacobs, 1983). But a
sense of the relevant implications can be obtained by focusing on one
particular research practice: the practice of using the same experimental
messages more than once.

The problem of generalizing about message types from individual messages would be serious enough if each investigation of persuasive
message effects used only one message to represent each message type
(category) but with a different concrete message used in each study (so
that, e.g., every investigation of implicit versus explicit conclusions used
only one instance of each type, but every investigation created a new
instance of each type). But the problem is worse because sometimes the
same messages are used more than once in persuasion research (consider,
e.g., Pratkanis, Greenwald, Ronis, Leippe, & Baumgardner, 1986). For
example, related messages were used by Burnkrant and Howard (1984),
Petty, Cacioppo, and Heesacker (1981), and Swasy and Munch (1985); by
Holbrook (1978) and Venkatraman, Marlino, Kardes, and Sklar (1990); by
B. T. Johnson (1994), Petty and Cacioppo (1979b, 1984), Solomon,
Greenberg, Pyszczynski, and Pryzbylinski (1995), and Sorrentino, Bobocel,
Gitta, Olson, and Hewitt (1988); by Kamins and Assael (1987a, 1987b)
and Kamins and Marks (1987); by Lalor and Hailey (1990) and
Meyerowitz and Chaiken (1987); by Mann, Sherman, and Updegraff
(2004), Sherman, Mann, and Updegraff (2006), and Updegraff, Sherman,
Luyster, and Mann (2007); by Shiv, Edell, and Payne (1997, 2004); and by
W. Wood (1982), W. Wood and Kallgren (1988), and W. Wood et al.
(1985).

This practice is readily understandable. First, the task of creating satisfactory experimental materials is difficult and time-consuming, and if
existing messages already represent the variables of interest, then it can be awfully tempting to employ those materials. Second, in a continuing line
of research, a desire for tight experimental control may suggest the reuse
of earlier messages.

But in the end this way of proceeding is unsatisfactory, precisely because it complicates, rather than eases, the task of obtaining sound causal
generalizations. It complicates this task because single-message
instantiations are an unsatisfactory basis for generalizations about message
types.6 Even a very large number of studies of one message (or message
pair) cannot provide evidence for generalizations beyond that message.

Interpreting Past Persuasion Research


Employing multiple-message designs (and avoiding reusing experimental
messages) may help future researchers avoid the message generalization
problems created by single-message designs, but a great deal of earlier
research relied on single-message designs. Obviously, any such single-
message study should be interpreted very cautiously. The interpretive
difficulties created by single-message designs are such that one cannot
confidently make broad generalizations from a single study using such a
design.

But if several single-message studies address the same research question, then some greater confidence may be warranted. If 10 investigations
compare explicit and implicit conclusions, and each one has a different
single example of each message category, a review that considers the body
of studies taken as a whole can transcend this limitation of the individual
investigations and provide a sounder basis for generalization.

Meta-analytic statistical techniques can be particularly helpful here. Broadly, meta-analysis is a family of quantitative techniques for
summarizing the results obtained in a number of separate research studies.
Using meta-analytic techniques, a researcher can systematically examine
the different results obtained in separate investigations, combine these
separate studies to yield a picture of the overall effect obtained, look for
variations among the results of different studies, and so on. (For general
treatments, see Borenstein, Hedges, Higgins, & Rothstein, 2009; Card,
2012; Field & Gillett, 2010. For a single comprehensive source, see
Cooper, Hedges, & Valentine, 2009.) Obviously, meta-analysis offers the
possibility of overcoming some of the limitations of existing research
using single-message designs.7

However, two aspects of meta-analytic practice deserve some notice. The
first concerns which studies to include in a review, specifically whether to
attempt to locate unpublished studies (conference papers, dissertations, and
so forth). There are good reasons to want to include unpublished studies
whenever possible, because the published research literature may produce
a misleading picture. For example, studies that find statistically significant
differences may be more likely to be published than those with
nonsignificant results. (For some discussions of publication biases and
related questionable research practices, see Bakker, van Dijk, & Wicherts,
2012; Ioannidis, 2005, 2008; John, Loewenstein, & Prelec, 2012; Levine,
Asada, & Carpenter, 2009.)

The second concerns how to analyze meta-analytic data, specifically the choice between fixed-effect and random-effects models. The technical
details need not detain us here, but results derived from fixed-effect
analyses characteristically overstate the precision of meta-analytic findings
—and only random-effects models permit generalization beyond the cases
in hand, which is usually the goal of undertaking meta-analytic reviews
(see Borenstein, Hedges, Higgins, & Rothstein, 2010; Card, 2012, pp.
233–234).8 Even so, the use of fixed-effect analyses is distressingly
common (for some discussion, see Cafri, Kromrey, & Brannick, 2010).
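The contrast between the two models can be sketched computationally. The effect sizes and within-study variances below are invented, and the between-study variance estimator shown (DerSimonian–Laird) is one common choice that the text does not itself specify; the point is only that, with between-study variability present, the random-effects interval is wider, reflecting the extra uncertainty that fixed-effect analyses understate.

```python
import math

# Invented effect sizes (d) and within-study variances for five studies.
effects   = [0.50, 0.10, 0.30, -0.05, 0.45]
variances = [0.04, 0.03, 0.05,  0.02, 0.06]

def pool(ys, vs):
    """Inverse-variance weighted mean and its standard error."""
    w = [1 / v for v in vs]
    mean = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return mean, math.sqrt(1 / sum(w))

# Fixed-effect pooling: weights reflect within-study variance only.
fe_mean, fe_se = pool(effects, variances)

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = [1 / v for v in variances]
q = sum(wi * (yi - fe_mean) ** 2 for wi, yi in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooling: tau^2 is added to each study's variance.
re_mean, re_se = pool(effects, [v + tau2 for v in variances])

print(f"fixed-effect:   {fe_mean:.3f} +/- {1.96 * fe_se:.3f}")
print(f"random-effects: {re_mean:.3f} +/- {1.96 * re_se:.3f}")  # wider interval
```

Because only the random-effects weights acknowledge between-study variability, only that model supports generalizing beyond the studies in hand.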

Beyond Message Variables


This discussion has focused on the message-generalizing problems
associated with single-message designs. These problems are especially
salient for persuasion researchers because—despite widespread interest in
generalizing across messages—single-message designs have been the
norm for studies of persuasive effects (but see Brashers & Jackson, 1999).
Of course, the same general considerations apply not just to message
factors but to anything; dependable generalizations about a collection of
things (messages, people, tables, and so on) commonly require multiple
instances of the class. There is, that is, nothing unique about these
problems of message generalization. But some focused attention to matters
of message generalization is important, if only because single-message
designs are so common.

Variable Definition
The other noteworthy challenge that arises in studying persuasive effects concerns how independent variables are defined in research practice.
Because this issue arises most clearly in the context of defining message
variables (i.e., message variations or message types), the following
discussion focuses on such variables; as will be seen, however, the
difficulties that ensue are not limited to message variables.

Message Features Versus Observed Effects


Broadly put, a message variable can be defined in one of two ways: on the
basis of intrinsic message features or on the basis of observed effects on
recipients. Most message variables have been defined using message
features (as one might expect), but occasionally investigators have defined
message types using engendered recipient responses (i.e., the effects
observed in message receivers). Because these two ways of defining
message variations are consequentially different, the distinction is an
important one.

A useful example is provided by the extensive research on threat appeals (also called fear appeals) in persuasive messages. A threat appeal is a
particular type of persuasive message, but it has been defined in varying
ways. Some investigators define a threat appeal as a message that contains
certain sorts of message content (e.g., graphic depictions of consequences
of not following the communicator’s recommendations, as in gruesome
films of traffic accidents in driver education classes). But for other
investigators, a threat appeal message is one that arouses fear or anxiety in
message recipients (that is, threat appeal is defined by the responses of
message receivers).

Obviously, these two definitions will not necessarily correspond. That is, a
message that contains gruesome content (a threat appeal by the first
definition) might not arouse fear or anxiety in message recipients (i.e.,
might not be a threat appeal by the second definition). Similarly, a
message might succeed in arousing fear without containing graphic
message content.

The Importance of the Distinction


It is important to be clear about the different ways of defining message
variables (by reference to message features or by reference to effects)
because the distinction is consequential. (For discussion, see O’Keefe,
2003; Tao & Bucy, 2007.)

First, generalizations about message types can only cautiously lump
together investigations that employ different ways of defining a given
variable. Two studies might call themselves studies of threat appeals, but if
one of them defines threat appeal by message content whereas the other
defines it by recipient response, it may be difficult to draw reliable
generalizations that encompass the two studies.

Second, the different ways of defining message types raise different evidentiary issues concerning the soundness of experimental
manipulations. Consider that to construct defensible examples of threat
appeal messages for use in research, an investigator who defines threat
appeal by the presence of certain sorts of message content need only
ensure that the messages do contain the requisite sort of content. By
contrast, an investigator who defines threat appeal by audience responses
must show that the messages engender the required responses.9

Third, and most important, feature-based definitions offer several advantages over effect-based definitions. In particular, research using
feature-based definitions can give obvious direct advice for persuaders
concerning the construction of persuasive messages (“Put features X, Y,
and Z in your message”), whereas effect-based definitions are likely to be
much less helpful (“Do something that engenders such-and-such
effects”).10 Similarly, exploration of the role of mediating states
(intervening between messages and effects) can be obscured with effect-
based definitions (O’Keefe, 2003).

The extended example in this discussion of the problem of variable definition has been that of a particular message variable (threat appeals),
although other message variables—most notably, the variable of argument
strength that figures prominently in elaboration likelihood model research
(discussed in Chapter 8)—could have served as well. But these issues of
variable definition are not limited to message factors. For example,
communicator credibility could be defined by observed persuasive effects
(so that, by definition, higher credibility would be associated with greater
persuasion) or by other criteria (such as receivers’ impressions of the
source’s believability, expertise, honesty, and the like; see Chapter 10).

Conclusion
Experimental research examining the influence of various factors on persuasive outcomes offers the prospect of useful insights into persuasive
processes and effects, but the task of creating dependable generalizations
from such research can be more challenging than might appear at first
look.

For Review
1. Describe the simplest experimental design used in the study of
persuasive message effects. What is an independent variable? A
dependent variable? Explain how experimental designs are meant to
permit unambiguous causal attribution for observed effects. Describe
some variations on the basic design. What is an interaction effect?
Explain how experimental designs with more than one independent
variable permit detection of interaction effects. Explain the difference
between a conclusion about the relative persuasiveness of two
messages and a conclusion about the absolute persuasiveness of one
message. Are experimental designs meant to provide evidence about
the absolute persuasiveness of a given message?
2. What is a single-message experimental design? Explain why such
designs do not provide good evidence for generalizations. How do
such designs undermine unambiguous causal attribution for effects?
How do such designs overlook the potential variability
(nonuniformity) of a given message factor across different concrete
messages? What is a multiple-message design? How do multiple-
message designs address some concerns about generalization? Why is
re-using messages (from earlier studies) a generally undesirable
research practice? How can generalizations be obtained from previous
studies using single-message designs? What is meta-analysis?
3. Describe two ways in which a message variable can be defined in
research. Explain the difference between defining a message variable
on the basis of message features and defining it on the basis of
recipient responses (effects). Describe how the criteria for satisfactory
experimental manipulations differ depending on whether a feature-
based or an effect-based definition is used. Explain why feature-based
definitions provide a better basis for advice to message designers.

Notes
1. An interaction effect can also be described as a “moderator” effect, in
the sense that one variable moderates (influences) the effect of another variable. To continue the example: If the effect of implicit-versus-explicit
conclusions varies depending on the level of credibility, then credibility
would be said to moderate the effect of conclusion type (credibility would
be a moderator variable). Moderator effects, in which variable X
influences the relationship of variables A and B, are different from
mediator effects, in which X mediates the relationship of A and B by being
between them in a causal chain (A influences X, which in turn influences
B). The classic treatment of this distinction is Baron and Kenny (1986).
For some subsequent discussion, see Fairchild and MacKinnon (2009),
Green, Ha, and Bullock (2010), Kraemer, Kiernan, Essex, and Kupfer
(2008), Preacher and Hayes (2008), Spencer, Zanna, and Fong (2005), and
Zhao, Lynch, and Chen (2010).

2. In experimental research concerning relative persuasiveness, the most common persuasive outcome assessments have been attitudes, intentions,
and behaviors. (Attitude assessment is briefly discussed in Chapter 1. For
some treatment of intention and behavior assessments, see Fishbein &
Ajzen, 2010, pp. 29–43.) Although attitudes, intentions, and behaviors are
generally positively correlated (see Chapter 6 concerning
reasoned action theory), it is also the case that the persuasiveness of a
given message might vary across these outcomes (e.g., in response to a
given message, people might change their attitudes a lot, their intentions
only a little, and their behaviors not at all). However, where the research
question concerns the relative persuasiveness of two message kinds (are
messages of type A more persuasive than messages of type B?), those
different outcomes yield substantively identical research conclusions: If
message type A is more persuasive than message type B with attitudinal
outcomes, it is also more persuasive (and equally more persuasive) with
intention outcomes and with behavioral outcomes. More carefully: The
mean effect sizes (describing the difference in persuasiveness between two
message types) for attitudinal outcomes, for intention outcomes, and for
behavior outcomes are statistically indistinguishable (O’Keefe, 2013b).

3. Such potential confusions should also be borne in mind when experimental results are described (as they should be) using effect sizes. In
experimental studies comparing the persuasiveness of two message types,
the effect size of interest describes the size and direction of the difference
between the two message conditions—not the persuasiveness of either
message. And when effect sizes are compared (as when, e.g., the effect
size for a given message variable is examined under two different
conditions), the same cautions are relevant. If the effect size for a given message variable is larger under condition X than under condition Y, that
does not mean that messages are more persuasive in condition X than in
condition Y; it means that the difference in persuasiveness between the
two messages is larger in condition X than in condition Y. As an example:
O’Keefe and Jensen’s (2011) meta-analysis of gain-loss message framing
effects concerning obesity-related behaviors reported that the mean effect
size (expressed as a correlation, with positive values indicating a
persuasive advantage for gain-framed appeals) was .17 for physical
activity messages and .02 for healthy eating messages, a statistically
significant difference. This does not mean that physical activity messages
were more persuasive than healthy eating messages, or that gain-framed
physical activity messages were more persuasive than gain-framed healthy
eating messages. It means only that the difference in persuasiveness
between gain-framed and loss-framed appeals was larger for physical
activity messages than for healthy eating messages.
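The caution can be made concrete with arithmetic. Only the two effect sizes (.17 and .02) come from the text; the absolute persuasiveness values below are invented to show that a larger framing effect is compatible with the messages being, in absolute terms, the less persuasive ones.

```python
# Invented absolute-persuasiveness values; only the differences (.17, .02)
# correspond to the effect sizes reported in the text.
physical_activity = {"gain": 0.60, "loss": 0.43}   # difference = .17
healthy_eating    = {"gain": 0.82, "loss": 0.80}   # difference = .02

diff_pa = physical_activity["gain"] - physical_activity["loss"]
diff_he = healthy_eating["gain"] - healthy_eating["loss"]

# The framing effect is larger for physical activity messages even though,
# on these invented numbers, the healthy-eating messages are the more
# persuasive messages in absolute terms.
print(diff_pa > diff_he)                                    # True
print(healthy_eating["gain"] > physical_activity["gain"])   # True
```

The effect size indexes the difference between message conditions, never the absolute persuasiveness of either condition.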

4. Indeed, heterogeneity (variability, nonuniformity) in persuasive message effects is not just an abstract possibility. Meta-analytic reviews of
persuasive message variables can provide straightforward evidence here,
because such reviews contain multiple estimates of the size of a given
variable’s effect. In such reviews, effect size variability is common and
substantial; indeed, it is rare for the mean effect size to be larger than the
standard deviation and common for the variability to be twice or three
times as large as might be expected given human sampling error (for a
review, see O’Keefe, 1999c). For examples of primary research studies
reporting such variability from multiple-message designs, see Greene,
Krcmar, Rubin, Walters, and Hale (2002), Reichert, Heckler, and Jackson
(2001), and Siegel et al. (2008).

5. It’s not enough just to have multiple messages; an appropriate statistical analysis (a random-effects analysis) is also required. For a general
treatment of these matters, see Jackson (1992). For additional discussion,
see M. Burgoon, Hall, and Pfau (1991), Jackson and Brashers (1994),
Jackson, O’Keefe, and Brashers (1994), and Slater (1991). For discussion
of such issues in other research contexts, see Baayen, Davidson, and Bates
(2008), H. H. Clark (1973), Fontenelle, Phillips, and Lane (1985), Judd,
Westfall, and Kenny (2012), Raaijmakers, Schrijnemakers, and Gremmen
(1999), Rietveld and van Hout (2007), and Siemer and Joormann (2003).

6. When the same message is used repeatedly as the instantiation of a message type, meta-analytic techniques (discussed shortly) are less helpful than they might be as a means of coping with message generalization
problems. Indeed, where messages have been reused in primary research
and generalizations about message types are wanted, the appropriate meta-
analytic procedure is to collapse the results (across studies) for a given
message pair; that is, the appropriate unit of analysis is the message pair,
not the study. To concretize this: Imagine two data sets in which 20 studies
have provided the experimental contrast of interest (comparing a message
of kind A versus a message of kind B). In one data set, each study used a
different message pair. In the other, 10 studies used one specific message
pair and the other 10 studies used a second message pair. Plainly, one’s
confidence in any generalizations would be greater in the first data set than
in the second, and the meta-analytic procedure should reflect that (making
the number of cases 20 in the first data set and 2 in the second).
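The collapsing procedure described in this note can be sketched as follows; the study labels and effect values are invented placeholders.

```python
# Data set 1: 20 studies, each using a different (invented) message pair.
studies = [("pair_%02d" % i, 0.10 + 0.01 * i) for i in range(20)]

# Data set 2: 20 studies, but only two distinct message pairs between them.
reused = ([("pair_A", 0.10 + 0.01 * i) for i in range(10)] +
          [("pair_B", 0.30 + 0.01 * i) for i in range(10)])

def collapse(results):
    """Average results within each message pair: the message pair,
    not the study, is the unit of analysis."""
    sums, counts = {}, {}
    for pair, effect in results:
        sums[pair] = sums.get(pair, 0.0) + effect
        counts[pair] = counts.get(pair, 0) + 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

print("cases in data set 1:", len(collapse(studies)))  # 20
print("cases in data set 2:", len(collapse(reused)))   # 2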

7. Meta-analytic methods can be especially attractive because they naturally shift the focus away from statistical significance (was there a
statistically significant difference in persuasiveness between the two
message kinds?) and toward effect sizes (how large was the difference in
persuasiveness between the two message kinds?) and confidence intervals
(what is the likely range of population values for the effect, given the data
in hand?). In addition, meta-analysis can encourage conceiving of message
effects in terms of the effect size distribution (with some mean and
variance) associated with a given variable (see Brashers & Jackson, 1999).

8. These questions (about the statistical treatment of meta-analytic data) arise in parallel form in the context of multiple-message primary research
designs (see note 5 above).

9. This can create some confusion about “manipulation checks” concerning message variables. Where message variations are defined on
the basis of intrinsic features, customary manipulation checks (under that
description) are unnecessary and inappropriate (O’Keefe, 2003).

10. As an illustration, consider Stephenson et al.’s (2005) research in which hearing protection messages were pretested to identify messages
evoking positive, negative, and neutral affect; in a subsequent study, the
relative persuasiveness of these messages was then tested. The finding that
the affectively neutral messages were most successful gives only limited
help to future message designers because the message categories were
defined not by some intrinsic properties of the messages but by the
reactions the messages evoked.

Chapter 10 Communicator Factors

Communicator Credibility
The Dimensions of Credibility
Factors Influencing Credibility Judgments
Effects of Credibility
Liking
The General Rule
Some Exceptions and Limiting Conditions
Other Communicator Factors
Similarity
Physical Attractiveness
About Additional Communicator Characteristics
Conclusion
The Nature of Communication Sources
Multiple Roles for Communicator Variables
For Review
Notes

Persuasion researchers have quite naturally focused considerable research attention on the question of how various characteristics of the
communicator influence the outcomes of the communicator’s persuasive
efforts. This chapter’s review of such research is focused on two particular
communicator characteristics—the communicator’s credibility and
likability—but also considers other source factors, including the
communicator’s similarity to the audience.

Communicator Credibility

The Dimensions of Credibility


Credibility (or, more carefully expressed, perceived credibility) consists of
the judgments made by a perceiver (e.g., a message recipient) concerning
the believability of a communicator. Communicator credibility is thus not
an intrinsic property of a communicator; a message source may be thought
highly credible by one perceiver and not at all credible by another. But this
general notion of credibility has been given a somewhat more careful specification in investigations aimed at identifying the basic underlying
dimensions of credibility.

Factor-Analytic Research
There have been quite a few factor-analytic studies of the dimensions
underlying credibility judgments (e.g., Andersen, 1961; Applbaum &
Anatol, 1972, 1973; Baudhuin & Davis, 1972; Berlo, Lemert, & Mertz,
1969; Bowers & Phillips, 1967; Falcione, 1974; McCroskey, 1966;
Schweitzer & Ginsburg, 1966). In the most common research design in
these investigations, respondents rate communication sources on a large
number of scales. The ratings given of the sources are then submitted to
factor analysis, a statistical procedure that (broadly put) groups the scales
on the basis of their intercorrelations: Scales that are comparatively highly
intercorrelated will be grouped together as indicating some underlying
“factor” or dimension.

Expertise and Trustworthiness as Dimensions of Credibility
Without overlooking potential weaknesses in this research (see, e.g., Delia,
1976; McCroskey & Young, 1981) or the variations in obtained factor
structures (compare, e.g., Berlo et al., 1969, with Schweitzer & Ginsburg,
1966), one may nevertheless say that with some frequency, two broad (and
sensible) dimensions have commonly emerged in factor-analytic
investigations of communicator credibility. These are variously labeled in
the literature, but two useful terms are expertise and trustworthiness.

The expertise dimension (sometimes called competence, expertness, authoritativeness, or qualification) is commonly represented by scales such
as experienced-inexperienced, informed-uninformed, trained-untrained,
qualified-unqualified, skilled-unskilled, intelligent-unintelligent, and
expert–not expert. These items all seem directed at the assessment of
(roughly) whether the communicator is in a position to know the truth, to
know what is right or correct. Three or more of these scales are reported as
loading on a common factor in investigations by Applbaum and Anatol
(1972), Baudhuin and Davis (1972), Beatty and Behnke (1980), Beatty and
Kruger (1978), Berlo et al. (1969), Bowers and Phillips (1967), Falcione
(1974), McCroskey (1966), Pearce and Brommel (1972), Schweitzer and
Ginsburg (1966), and Tuppen (1974). And (as these factor-analytic results
would indicate) measures of perceived expertise that are composed of several such items commonly exhibit high internal reliability (e.g.,
reliability coefficients of .85 or greater have been reported by Beatty &
Behnke, 1980; McCroskey, 1966).

The trustworthiness dimension (sometimes called character, safety, or personal integrity) is commonly represented by scales such as honest-
dishonest, trustworthy-untrustworthy, open-minded–closed-minded, just-
unjust, fair-unfair, and unselfish-selfish. These items all appear to be
related to the assessment of (roughly) whether the communicator will
likely be inclined to tell the truth as he or she sees it. Three or more of
these scales are reported as loading on a common factor in investigations
by Applbaum and Anatol (1972), Baudhuin and Davis (1972), Berlo et al.
(1969), Falcione (1974), Schweitzer and Ginsburg (1966), Tuppen (1974),
and Whitehead (1968). Correspondingly, indices of perceived
trustworthiness that are composed of several such items have displayed
high internal reliability (e.g., reliabilities of .80 or better have been
reported by Bradley, 1981; Tuppen, 1974).1
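One common index of the internal reliability reported in these studies is Cronbach's alpha (the text cites coefficients without naming a particular index). The sketch below computes alpha for simulated data, where four invented trustworthiness items are each driven by one underlying judgment plus noise.

```python
import random

random.seed(2)

# Simulate 400 respondents rating four items, all reflecting one latent
# trustworthiness judgment plus independent measurement noise.
n, k = 400, 4
latent = [random.gauss(0, 1) for _ in range(n)]
items = [[x + random.gauss(0, 0.6) for x in latent] for _ in range(k)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
alpha = (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))
print(f"alpha = {alpha:.2f}")  # high, because the items share one judgment
```

When the items share a common underlying judgment, as the factor-analytic results suggest these scales do, alpha comes out high, matching the .80-plus coefficients reported.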

These two dimensions parallel what have been described as the two types
of communicator bias that message recipients might perceive: knowledge
bias and reporting bias. “Knowledge bias refers to a recipient’s belief that
a communicator’s knowledge about external reality is nonveridical, and
reporting bias refers to the belief that a communicator’s willingness to
convey an accurate version of external reality is compromised” (Eagly,
Wood, & Chaiken, 1978, p. 424; see also Eagly, Wood, & Chaiken, 1981).
A communicator perceived as having a knowledge bias will presumably be
viewed as relatively less expert; a communicator viewed as having a
reporting bias will presumably be seen as comparatively less trustworthy.

Perhaps it is not surprising that both expertise and trustworthiness emerge as basic dimensions of credibility because as a rule, only the conjunction
of expertise and trustworthiness makes for reliable communications. A
communicator who knows what is correct (has expertise) but who
nevertheless misleads the audience (is untrustworthy, has a reporting bias)
produces messages that are unreliable guides to belief and action, just as
does the sincere (trustworthy) but uninformed (low-expertise, knowledge-
biased) communicator.

These two dimensions, however, represent only the most general sorts of credibility-relevant judgments made by recipients about communicators. The particular judgments underlying credibility may vary from circumstance to circumstance, as can the emphasis placed on one or
another dimension of judgment. Thus it may be useful to develop
credibility assessments tailored to particular situations (for some examples,
see Frewer, Howard, Hedderley, & Shepherd, 1996; Gaziano & McGrath,
1986; Hilligoss & Rieh, 2008; Ohanian, 1990; M. D. West, 1994).
Notably, however, even such situation-specific assessments commonly
identify expertise and trustworthiness as key credibility dimensions, as in
studies of expert courtroom witnesses (Brodsky, Griffin, & Cramer, 2010),
risk communication (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey,
& Harries, 2008), and corporations (Newell & Goldsmith, 2001).

Factors Influencing Credibility Judgments


Judgments of a communicator’s expertise and trustworthiness are surely
influenced by a great many factors, and it is fair to say that research to date
leaves us rather far from a comprehensive picture of the determinants of
these judgments. What follows is a selective review of some features that
have received relatively more attention.

Education, Occupation, and Experience


Although little systematic research investigates exactly how credibility
judgments are influenced by information about the communicator’s
training, experience, and occupation, these characteristics are precisely the
ones most frequently manipulated by investigators in experimental studies
of the effects of variations in communicator credibility. That is, a
researcher who wishes to compare the effects of a high-credibility source
with those of a low-credibility source will most commonly manipulate the
receiver’s perception of the communicator’s credibility by varying the
information given about the communicator’s credentials. For instance, a
classic study of messages about nuclear radiation described the high-
credibility communicator as “a professor of nuclear research, recognized
as a national authority on the biological effects of radioactivity,” whereas
the low-credibility introduction described the source as “a high school
sophomore, whose information was based on a term paper prepared for a
social studies class” (Hewgill & Miller, 1965, p. 96).

Similar manipulations are commonplace, and researchers commonly confirm the success of these manipulations by assessing respondents’
judgments of the communicators’ expertise and trustworthiness. As one
might expect, such high-credibility introductions do indeed generally lead receivers to perceive the source as more trustworthy and (particularly)
more expert than do low-credibility introductions. What systematic
research exists on this matter is (perhaps not surprisingly) consistent with
these effects (e.g., Falomir-Pichastor, Butera, & Mugny, 2002; Hurwitz,
Miron, & Johnson, 1992; Tormala, Briñol, & Petty, 2006).

Nonfluencies in Delivery
There have been a number of investigations of how variations in delivery
can influence the credibility judgments made of a speaker. Unfortunately,
several of these studies have investigated conceptions of delivery that
embrace a number of behavioral features (e.g., Pearce & Brommel, 1972).
But one delivery characteristic that has been studied in isolation is the
occurrence of nonfluencies in the delivery of oral communications.
Nonfluencies include vocalized pauses (“uh, uh”), the superfluous
repetition of words or sounds, corrections of slips of the tongue,
articulation difficulties, and the like. Several investigations have found that
with increasing numbers of nonfluencies, speakers are rated significantly
lower on expertise, with judgments of trustworthiness typically unaffected
(e.g., Engstrom, 1994; for a review of effects on expertise judgments, see
Carpenter, 2012a).

Citation of Evidence Sources


Persuaders commonly include evidence in their persuasive messages, that
is, relevant facts, opinions, information, and the like, intended to support
the persuader’s claims. Several investigations have studied how citing the
sources of such evidence—as opposed to providing only vague
documentation (“Studies show that”) or no documentation at all—
influences perceived communicator credibility. On the whole, a
communicator’s citation of the sources of evidence appears to enhance
perceptions of the communicator’s expertise and trustworthiness, although
these effects are sometimes small (e.g., Whitehead, 1971; for reviews and
discussion, see O’Keefe, 1998; Reinard, 1998). These investigations
employed relevant supporting materials that were attributed (when source
citations were provided) to high-credibility sources. One should not expect
enhanced communicator credibility to result from citations to low-
credibility evidence sources or from citations for poor or irrelevant
evidence (Luchok & McCroskey, 1978). But the citation of expert and
trustworthy sources of evidence in the message appears to influence the
communicator’s perceived expertise and trustworthiness; in a sense, the high credibility of the cited sources seems to rub off on the
communicator.2

Position Advocated
The position that the communicator advocates on the persuasive issue can
influence perceptions of the communicator’s expertise and trustworthiness.
Specifically, a communicator is likely to be perceived as more expert and
more trustworthy if the advocated position disconfirms the audience’s
expectations about the communicator’s views (when such expectations
derive from knowledge of the source’s characteristics or circumstances),
although certain sorts of trustworthiness judgments (concerning
objectivity, open-mindedness, and unbiasedness) appear to be more
affected than others (such as sincerity and honesty).

The most straightforward examples of this phenomenon are communicators who argue for positions that are apparently opposed to
their own self-interest. Ordinarily, of course, we expect persons to take
positions that forward their own interests; sources who support views
opposed to their interests thus disconfirm our expectations. If we wonder
why a source is taking this (apparently unusual) position, we may well be
led to conclude that the communicator must be especially well-informed
(expert) and honest (trustworthy): The source must really know the truth
and must really be willing to tell the truth, otherwise why would the source
be advocating that position? Conversely, where communicators advocate
views that forward their self-interest, credibility is likely to suffer by
comparison.

So, for example, online contributors who donate their revenue shares to
charity (as compared with those who retain the revenue; Hsieh, Hudson, &
Kraut, 2011), salespeople whose compensation is salary-based (rather than
commission-based; Straughan & Lynn, 2002) or who flatter the customer
after the sale (rather than before; M. C. Campbell & Kirmani, 2000),
prosecutors who argue that prosecutorial powers should be decreased
(rather than increased; Walster, Aronson, & Abrahams, 1966), politicians
who praise their opponents (as opposed to denigrating them; Combs &
Keller, 2010)—all are likely to enjoy relatively enhanced credibility.
Similarly, physicians find their colleagues more believable sources of drug
information than they do advertisements or salespeople (Beltramini &
Sirsi, 1992), and they are more likely to prescribe drugs that have been
studied in government-funded research than in industry-funded research (Kesselheim et al., 2012).

Of course, receivers’ expectations about the position that a communicator will express can derive from sources other than the ordinary presumption
that people will favor viewpoints that are in their own interest. A general
analysis of the bases of premessage expectancies (and their effects on
perceived credibility and persuasive outcomes) has been provided by
Eagly et al. (1981). As briefly mentioned earlier, Eagly et al. (1981)
distinguished two sorts of perceived communicator bias that receivers can
use to form premessage expectancies about the communicator’s position.
One is knowledge bias, which refers to the receiver’s belief that the
communicator’s knowledge of relevant information is somehow biased
(perhaps because of the source’s background or experience) and thus that
the source’s message will not accurately reflect reality. The other is
reporting bias, which refers to the receiver’s belief that a communicator
may not be willing to accurately convey relevant information (for instance,
because situational pressures might lead the source to withhold or distort
information). A receiver’s perception of either sort of communicator bias
will lead the receiver to have certain expectations about the position that a
communicator will express on the issue. When communicators confirm
those expectations (e.g., the lifelong Democrat speaking in favor of a
Democratic political candidate, or a speaker opposing gun control
legislation when addressing the National Rifle Association), we have ready
explanations for why the communicators acted as they did.

But when a communicator advocates a position that violates an expectancy based on knowledge or reporting bias, the receiver faces the task of
explaining why the communicator is defending the advocated position—
why the lifelong Democrat is speaking in support of a Republican
candidate or why the speaker addressing the National Rifle Association is
urging stricter gun control legislation. The most plausible explanation at
least sometimes will be that the facts of the matter were so compelling that
the communicator was led to override those personal or situational
pressures (that had generated the receiver’s expectations) and thus defend
the advocated position. Correspondingly, the receiver may be led to
perceive the communicator as especially expert and trustworthy, precisely
because the communicator’s expressed position violates the receiver’s
expectations (for relevant research, see L. Anderson, 1970; Eagly &
Chaiken, 1975; Eagly et al., 1978; Peters, Covello, & McCallum, 1997; W.
Wood & Eagly, 1981).3

A related expectancy disconfirmation effect has been observed in studies
of advertisements for consumer products. Ordinarily, consumers expect
advertisements to tout the advertised product or brand as “the best” on
every feature or characteristic that is mentioned. Thus an advertisement for
exterior house paint that claimed that the product was superior to its
competitors on only three mentioned product features (durability, number
of coats needed, and ease of cleanup) while being equal on two others
(number of colors available and nonspill lip on container) would
disconfirm receivers’ expectations about the message’s contents
(particularly by contrast to an advertisement claiming that the product was
superior on each of these five features; R. E. Smith & Hunt, 1978). There
have been several experimental comparisons of these two types of
advertisements—an advertisement suggesting superiority for all the
mentioned features of the product (a one-sided advertisement), and an
advertisement that acknowledges (and does not refute or deny) some ways
in which the product is not superior (a nonrefutational two-sided
advertisement). As one might suppose, when an advertisement
acknowledges ways in which competing products are just as good as the
advertised product (or acknowledges weaknesses of the advertised
product), the ad is commonly perceived as more credible than when the ad
claims superiority on every product feature that is mentioned (e.g., Alden
& Crowley, 1995; Eisend, 2010; Pechmann, 1992; for reviews, see Eisend,
2006; O’Keefe, 1999a).4

Liking for the Communicator


Some indirect evidence indicates that the receiver’s liking for the
communicator can influence judgments of the communicator’s
trustworthiness, although not judgments of the communicator’s expertise.
This evidence, derived from factor-analytic investigations of credibility
judgments, is the finding that various general evaluation items often load
on the same factor as do trustworthiness scales. For example, items such as
friendly-unfriendly, pleasant-unpleasant, nice–not nice, and valuable-
worthless have been reported as loading on a common factor with such
trustworthiness items as honest-dishonest, trustworthy-untrustworthy,
unselfish-selfish, and just-unjust (see, e.g., Applbaum & Anatol, 1972;
Bowers & Phillips, 1967; Falcione, 1974; McCroskey, 1966; Pearce &
Brommel, 1972). This suggests that liking and trustworthiness judgments
are probably more likely to co-vary than are liking and expertise
judgments. Such a pattern of results surely makes good sense: One’s
general liking for a communicator is much more likely to influence one’s judgments about the communicator’s dispositional trustworthiness (the
communicator’s general honesty, fairness, open-mindedness, and the like)
than the communicator’s expertise.5

Humor
Including humor in persuasive messages has been found to have rather
varied effects on perceptions of the communicator. When positive effects
of humor are found, they tend to most directly involve enhancement of the
audience’s liking for the communicator—and thus occasionally the
trustworthiness of the communicator (because liking and trustworthiness
are associated)—but rarely judgments of expertise (e.g., Chang & Gruner,
1981; Gruner, 1967, 1970; Gruner & Lampton, 1972; Skalski, Tamborini,
Glazer, & Smith, 2009). The use of humor, however, can also decrease the
audience’s liking for the communicator, the perceived trustworthiness of
the communicator, and even the perceived expertise of the source (e.g.,
Bryant, Brown, Silberberg, & Elliott, 1981; Munn & Gruner, 1981). These
negative effects seem most likely when the humor is perceived as
excessive or inappropriate for the context. Small amounts of appropriate
humor thus may have small enhancing effects on perceived trustworthiness
but are unlikely to affect assessments of the communicator’s expertise.

Summary
This selective review has touched on several broad factors that can
influence credibility judgments. However, different specific influences
may be at work in different persuasion circumstances. So, for example,
one might expect that different factors will affect the perceived credibility
of courtroom witnesses (e.g., Dahl et al., 2007), blogs (e.g., Armstrong &
McAdams, 2009), journalists (e.g., Jensen, 2008), online reviews (e.g., Pan
& Chiou, 2011), and so forth.6

Effects of Credibility
What effects do variations in communicator credibility have on persuasive
outcomes? It might be thought that the answer to this question is pretty
simple: As one’s credibility increases, so will one’s effectiveness. But the
answer turns out to be much more complicated.

Two Initial Clarifications


Two preliminary clarifications need to be made concerning the research on
the effects of communicator credibility. The first is that in this research,
the two primary dimensions of credibility (expertise and trustworthiness)
are usually not separately manipulated. That is, the research commonly
compares a source that is relatively high in both expertise and
trustworthiness (the high-credibility source) with a source that is relatively
low in both (the low-credibility source).

Obviously, because expertise and trustworthiness are conceptually distinct aspects of credibility, it would be possible to manipulate these separately
and so examine their separate effects on persuasive outcomes. One could,
for instance, compare the effectiveness of a source high in expertise but
low in trustworthiness with that of a source low in expertise but high in
trustworthiness.

Overwhelmingly, however, expertise and trustworthiness have not been independently manipulated in investigations of credibility’s effects. There
have been a few efforts at disentangling the effects of expertise and
trustworthiness (e.g., Mowen, Wiener, & Joag, 1987; O’Hara, Netemeyer,
& Burton, 1991; Terwel, Harinck, Ellemers, & Daamen, 2009), but to date
no clear generalizations seem possible.7 The point, thus, of this first
clarification is to emphasize the limits of current research on credibility’s
effects: This research concerns credibility generally, rather than different
dimensions of credibility individually.8

The second preliminary clarification concerns the nature of the low-credibility sources in this research: The low-credibility sources are not low
in absolute terms but simply relatively low in credibility. In absolute
terms, the low-credibility communicators are probably accurately
described as no better than moderate in credibility.9 Several researchers
have remarked that it is difficult to create believable experimental
manipulations that will consistently yield credibility ratings that are low in
absolute terms.10 Thus although this discussion (like most in the literature)
will be cast as a matter of the differential persuasive effectiveness of high-
as opposed to low-credibility communicators, the comparison made in the
relevant research is nearly always between a relatively higher credibility
communicator and a relatively lower one, not necessarily between two
sources that are in absolute terms high and low in credibility.

With these preliminaries out of the way, we can now turn to a consideration of just how variations in communicator credibility influence
persuasive effectiveness. The effects of credibility on persuasive outcomes
are not completely straightforward but depend centrally on other factors.
These factors can be usefully divided into two general categories: factors
that influence the magnitude of credibility’s effects and factors that
influence the direction of credibility’s effects.

Influences on the Magnitude of Effect


The size of the effect that communicator credibility has on persuasive
outcomes is not constant but varies from one circumstance to another.
Researchers have identified at least two factors that affect just how
consequential a role communicator credibility plays in persuasion.

The first is the degree of direct personal relevance that the issue has for the
receiver. As the issue becomes more personally relevant for the receiver,
variations in the source’s credibility make less difference; under conditions
of low personal relevance, the communicator’s credibility may make a
great deal of difference to the outcome, whereas on highly relevant topics,
the source’s credibility may have little impact (for a classic illustration, see
Petty, Cacioppo, & Goldman, 1981; for a review, see E. J. Wilson &
Sherrell, 1993).11

In some ways, it may seem paradoxical that as an issue becomes more personally relevant for a receiver, the source’s expertise and
trustworthiness become less important. But this relationship may be more
understandable when viewed from the perspective of the elaboration
likelihood model (ELM; see Chapter 8). For issues of little personal
relevance, receivers may be content to let their opinions be shaped by the
communicator’s apparent credibility; for such an issue, it is not worth the
effort to follow the details of the arguments. But for highly relevant topics,
receivers will be more likely to attend closely to the details of the message,
to scrutinize the communicator’s arguments and evidence, and to invest
the effort involved in thinking closely about the contents of the message—
and this comparatively greater importance of the message contents means
that the communicator’s credibility will play a smaller role than it
otherwise might have.12 Indeed, this first factor might be cast in a more
general framework, by suggesting that as the amount of issue-relevant
thinking varies (whether because of personal relevance or other factors), so
will the effect of credibility variations. For example, Kumkale, Albarracín,
and Seignourel’s (2010) meta-analysis indicated that the effect of
credibility variations is greatest when recipients have poorly formed attitudes and little background knowledge—conditions likely to be
conducive to relatively low elaboration.

The second factor influencing the magnitude of credibility’s impact is the timing of the identification of the communicator. Often, of course, the
communicator’s identity is known before the message is received by the
audience (e.g., because the source is well-known and can be seen by the
audience or because another person introduces the communicator). But in
some circumstances, it can be possible to delay identification of the source
until after the audience has been exposed to the message (e.g., in television
advertisements, in which the source’s identity may be withheld until the
end of the ad, or in multipage web or print articles, in which information
about the writer may not appear on the first page but instead only at the
end). The timing of the identification of the source does make a substantial
difference in the role that source credibility plays in persuasion.

Specifically, the impact of communicator credibility appears to be minimized when the identity of the source is withheld from the audience
until after the message has been presented (e.g., Allen et al., 2002;
O’Keefe, 1987; see, relatedly, Nan, 2009; Tormala, Briñol, & Petty, 2007).
When the communicator’s identity is delayed until after the audience has
received the message, the message is apparently heard more nearly on its
own terms, without the influence of the communicator’s credibility.

It might be thought that this finding implies that high-credibility communicators should be sure not to delay their identification (but instead
should be sure to identify themselves before the message), whereas low-
credibility communicators should strive, where circumstances permit, to
have their messages received before the audience is given information
about their credibility. But that is a mistaken conclusion because it is based
on an unsound (although natural) assumption that the direction of
credibility’s effect is constant, with higher credibility always yielding
greater persuasion. As discussed in the next section, sometimes lower-
credibility communicators can be more successful persuaders than higher-
credibility sources.

Influences on the Direction of Effect


One might plausibly suppose that the direction of credibility’s effect would
be constant—specifically, that increases in credibility would yield only
increases in persuasive effectiveness. Perhaps sometimes only small increases would occur, and sometimes (e.g., when the topic is personally
relevant to the receiver) no increase at all, but at least whenever credibility
had an effect, it would be in a constant direction, with high-credibility
sources being more effective than low-credibility sources.

However plausible such a supposition may seem, it is not consistent with the empirical evidence. The direction of credibility’s effect is not constant:
Several investigations have found that at least sometimes low-credibility
communicators are significantly more effective than high-credibility
communicators (e.g., Bock & Saine, 1975; Bohner, Ruder, & Erb, 2002;
Chebat, Filiatrault, Laroche, & Watson, 1988; Dholakia, 1987; Falomir-
Pichastor, Butera, & Mugny, 2002; Harmon & Coney, 1982; Sternthal,
Dholakia, & Leavitt, 1978; Tormala, Briñol, & Petty, 2006; for some
discussion, see Pornpitakpan, 2004). This finding is not easily impeached,
as these results have been obtained by different investigators, using
various topics, with different participant populations, and with good
evidence for the success of the credibility manipulations employed.

An entirely clear picture is not yet in hand, but one factor critical in determining the direction of credibility’s effects appears to be the nature of the position advocated by the message—specifically, whether
the message advocates a position initially opposed by the receiver (a
counterattitudinal message) or advocates a position toward which the
receiver initially feels at least somewhat favorable (a proattitudinal
message). With a counterattitudinal message, the high-credibility
communicator will tend to have a persuasive advantage over the low-
credibility source; with a proattitudinal message, however, the low-
credibility communicator appears to enjoy greater persuasive success than
the high-credibility source.

The most direct evidence of this relationship comes from investigations that have varied the counter- or proattitudinal stance of the message (under
conditions of low topic relevance and with communicators identified prior
to messages). Under these conditions, high-credibility communicators are
more effective than low-credibility communicators with counterattitudinal
messages, but this advantage diminishes as the advocated position gets
closer to the receiver’s position, to the point that with proattitudinal
messages, the low-credibility communicator is often more effective than
the high-credibility source (Bergin, 1962; Bochner & Insko, 1966; Chebat
et al., 1988; J. K. Clark, Wegener, Habashi, & Evans, 2012; Harmon &
Coney, 1982; McGinnies, 1973; Sternthal et al., 1978, Study 2).

Perhaps one way of understanding this effect is to consider the degree to
which, given a proattitudinal message, receivers might be stimulated to
think about arguments and evidence supporting the advocated view. When
receivers hear their views defended by a high-credibility source, they may
well be inclined to presume that the communicator will do a perfectly good
job of advocacy, will defend the viewpoint adequately, will present the
best arguments, and so forth—and so they sit back and let the source do
the work. But when the source is low in credibility, receivers might be
more inclined to help the communicator in defending their common
viewpoint, and hence they might be led to think more extensively about
supporting arguments—thereby ending up being more persuaded than if
they had listened to a higher-credibility source. Expressed in ELM terms, a
proattitudinal message may provoke more elaboration, and more favorable
elaboration, when it comes from a low-credibility communicator than
when it comes from a high-credibility communicator (for some evidence
consistent with such an account, see Clark et al., 2012; Sternthal et al.,
1978).13

However, greater success of low-credibility communicators should not be expected in every case of proattitudinal messages, nor should one expect
that high-credibility communicators will have an edge whenever
counterattitudinal messages are employed. Rather, one should find such
effects only when the conditions promote credibility’s having a substantial
effect (e.g., only when the topic is not especially personally relevant and
the communicator is identified prior to the message).

Liking

The General Rule


Perhaps it comes as no surprise that a number of investigations have found
support for the general principle that on the whole, liked communicators
are more effective influence agents than are disliked communicators (e.g.,
Eagly & Chaiken, 1975; Sampson & Insko, 1964).14 But the general
principle that liked persuaders are more successful can be misleading.
Important exceptions and limiting conditions on that principle are
discussed in the following section.

Some Exceptions and Limiting Conditions

Extant research evidence suggests at least three important caveats
concerning the effects of liking for the communicator on persuasive
outcomes: The effects of liking can apparently be overridden by
credibility, the superiority of liked over disliked communicators is
minimized as the topic becomes more personally relevant to the receiver,
and disliked communicators can at least sometimes be significantly more
effective persuaders than can liked communicators. (For indications of
additional possible limiting conditions, see Chebat, Laroche, Baddoura, &
Filiatrault, 1992; Roskos-Ewoldsen & Fazio, 1992.)

Liking and Credibility


The effects of liking on persuasive outcomes appear to be weaker than the
effects of credibility (e.g., Lupia & McCubbins, 1998, pp. 196–199;
Simons, Berkowitz, & Moyer, 1970; see, relatedly, Eisend & Langner,
2010). Thus when the receiver’s judgment of the source’s credibility
conflicts with the receiver’s liking for the source, the effects of liking may
be overridden by the effects of credibility. This may be exemplified by the
results of an investigation in which participants were asked to make a
judgment about the size of the monetary award to be given in a personal
injury damage suit. Each participant heard a persuasive message from a
source who advocated either a relatively small or a relatively large
monetary award; the source was portrayed either as cold and stingy or as
warm and generous. Although the warm, generous source was liked better
than was the cold, stingy communicator, the stingy source was
nevertheless sometimes a more effective persuader, namely, when the
stingy source was arguing for a relatively large award. Indeed, of the four
source-message combinations, the two most effective combinations were
the stingy source arguing for a large award and the generous source
arguing for a small award (Wachtler & Counselman, 1981). Both these
combinations, of course, represent sources who are (given their
personalities) advocating an unexpected position and who thus may well
have been perceived as relatively higher in credibility. Of particular
interest is that the communicator who was disliked and (presumably) high
in credibility (the stingy source advocating the large award) was
significantly more effective than the communicator who was liked and
(presumably) low in credibility (the generous source advocating the large
award), thus suggesting that the effects of liking for the communicator are
weaker than the effects of communicator credibility.

Liking and Topic Relevance
The effects of liking on persuasive outcomes are minimized as the topic
becomes more personally relevant to the receiver. Thus, although better-
liked sources may enjoy some general persuasive advantage, that
advantage is reduced when the issue is personally relevant to the receiver
(Chaiken, 1980; see, relatedly, Kang & Kerr, 2006). This result is, of
course, compatible with the image offered by the ELM (discussed in
Chapter 8). When receivers find the topic personally relevant, they are
more likely to engage in systematic active processing of message contents
and to minimize reliance on peripheral cues such as whether they happen
to like the communication source. But when personal relevance is low,
receivers are more likely to rely on simplifying heuristics emphasizing
cues such as liking (“I like this person, so I’ll agree”).

Greater Effectiveness of Disliked Communicators


At least sometimes disliked communicators can be more effective
persuaders than liked communicators—even when the communicators are
comparable in other characteristics such as credibility. A demonstration of
this possibility was provided by a classic investigation in which
participants were induced to eat fried grasshoppers. In one condition, the
communicator acted snobbish, cold, bossy, tactless, and hostile (the
disliked communicator); the liked communicator displayed none of these
characteristics. The two communicators were roughly equally successful in
inducing participants to eat the fried grasshoppers, but that is not the result
of interest. What is of interest is how, among participants who did eat the
grasshoppers, their attitudes toward eating grasshoppers changed. As one
might predict from dissonance-theoretic considerations, among those who
ate the grasshoppers, the disliked communicator was much more effective
in changing attitudes in the desired direction than was the liked
communicator: The person who ate the grasshoppers under the influence
of the disliked communicator presumably experienced more dissonance—
and thus exhibited more attitude change—than did the person induced to
eat by the liked source (Zimbardo, Weisenberg, Firestone, & Levy, 1965).

This, of course, is the familiar induced compliance counterattitudinal action circumstance (as discussed in Chapter 5). But similar results have
been obtained in straightforward persuasive communication situations (J.
Cooper, Darley, & Henderson, 1974; Himmelfarb & Arazi, 1974; R. A.
Jones & Brehm, 1967; cf. Eagly & Chaiken, 1975). That is, disliked communicators can indeed potentially be more persuasive than liked
communicators.

However, when disliked communicators have been found to be more successful persuaders than liked communicators, the circumstances appear
to typically have involved the receivers’ having freely chosen to listen to
the message. For example, in J. Cooper et al.’s (1974) investigation,
suburban householders received a counterattitudinal communication (i.e.,
one opposed to the receiver’s views) from either a deviant-appearing
communicator (a long-haired hippie) or a conventional-appearing
communicator. The deviant-appearing communicator was significantly
more effective than the conventionally dressed communicator in
persuading these suburbanites, but the message recipients all had freely
chosen to receive the communication (and indeed had had two
opportunities to decline to receive the communication).

If one remembers that dissonance effects are expected in induced compliance circumstances only when the person has freely chosen to
engage in the discrepant action, the finding that disliked communicators
can be more successful than liked communicators only under conditions of
choice is perhaps not surprising. Receivers who freely choose to listen to
(what turns out to be) an unlikable communicator presumably face a
dissonance reduction task that is not faced by receivers who find
themselves (through no fault of their own) listening to an unlikable source.
Hence the greater success of disliked (as opposed to liked) communicators
is, as the research evidence suggests, obtained only when the receiver has
chosen to listen to the message. (For an experimental illustration of this
moderating factor, see R. A. Jones & Brehm, 1967.)

Other Communicator Factors


Beyond credibility and liking, a large number of other communicator
factors have received at least some research attention as possible
influences on persuasive outcomes. This section focuses on two such
factors—similarity and physical attractiveness—but concludes with a more
general discussion of other communicator characteristics.

Similarity
It seems common and natural to assume that to the degree that receivers
perceive similarities between themselves and a persuader, to that same
degree the persuader’s effectiveness will be enhanced. The belief that
“greater similarity means greater effectiveness” is an attractive one and is
commonly reflected in recommendations that persuaders emphasize
commonalities between themselves and the audience.

But the relationship of similarity to persuasive effectiveness is much more
complex than this common assumption indicates. Indeed, while some
research findings indicate that persuasive effectiveness can be enhanced by
similarity (e.g., Brock, 1965; Woodside & Davenport, 1974), other
findings suggest that persuasive effectiveness can be reduced by similarity
(e.g., Infante, 1978; S. W. King & Sereno, 1973; Leavitt & Kaigler-Evans,
1975) or that similarity has little or no effect on persuasive outcomes (e.g.,
Klock & Traylor, 1983; Wagner, 1984).

Two initial clarifications will be helpful in untangling these complexities.
First, there is “an infinite number of possible dimensions” of similarity-
dissimilarity (Simons et al., 1970, p. 3). One might perceive oneself to be
similar or dissimilar to another person in age, occupation, attitudes,
physique, income, education, speech dialect, personality, ethnicity,
political affiliation, interpersonal style, clothing preferences, and on and
on. Thus there is not likely to be any truly general relationship between
similarity and persuasive effectiveness, or indeed between similarity and
any other variable. Different particular similarities or dissimilarities will
have different effects, making impossible any sound generalization about
similarity.

Second, similarities most likely do not influence persuasive effectiveness
directly (R. G. Hass, 1981; Simons et al., 1970). Rather, similarities
influence persuasive outcomes indirectly, especially by affecting the
receiver’s liking for the communicator and the receiver’s perception of the
communicator’s credibility (expertise and trustworthiness). Because the
effects of similarities may not be identical for liking, perceived expertise,
and perceived trustworthiness, the relationship of similarities to each of
these needs separate attention.

Similarity and Liking


Given the infinite varieties of possible similarities, any general relationship
between perceived similarity and liking for another person is unlikely.
That is, “there is no singular ‘similarity’ effect” on liking but rather “a
multiplicity of effects that depend on both content and context” (Huston &
Levinger, 1978, p. 126). However, the effect on liking of one particular
sort of similarity—attitudinal similarity—has received a good deal of
empirical attention. Attitudinal similarity is having similar attitudes
(similar evaluations of attitude objects), as opposed to, say, having similar
traits, abilities, occupations, or backgrounds.

A fair amount of evidence now indicates that as a general rule, perceived
attitudinal similarity engenders greater liking, at least among previously
unacquainted persons (for reviews, see Berscheid, 1985; Byrne, 1969;
Sunnafrank, 1991). Thus to the extent that message recipients perceive that
the communicator has attitudes (on matters other than the topic of the
influence attempt) that are similar to theirs, those recipients are likely to
come to like the communicator more. Hence, even when not especially
relevant to the topic of the influence attempt, perceived attitudinal
similarities (between source and audience) can enhance the audience’s
liking of the source and so can potentially influence persuasive
effectiveness.

The hypothesis that attitudinal similarities can influence persuasive
effectiveness by influencing the receiver’s liking for the communicator is
bolstered by the results of investigations that have varied both
communicator credibility (specifically, expertise) and communicator-
receiver attitudinal similarity. As discussed previously, the effects of liking
on persuasive effectiveness appear to be weaker than the effects of
credibility. Thus, if attitudinal similarities influence persuasive effects by
influencing liking for the communicator, then the effect of attitudinal
similarities on persuasive effectiveness should be smaller than the effect of
credibility. Indeed, several studies have found persuasive success to be
more influenced by the communicator’s expertise than by the
communicator’s attitudinal similarity (Wagner, 1984; Woodside &
Davenport, 1974).

But enhanced liking of a communicator will not always mean enhanced
persuasive effectiveness; as discussed earlier, greater liking for a
communicator may enhance, reduce, or have no effect on persuasive
effectiveness. Correspondingly, greater perceived attitudinal similarities
may (through their influence on the receiver’s liking for the
communicator) enhance, reduce, or have no influence on persuasive
effectiveness. Thus one should not assume that with greater perceived
attitudinal similarity comes greater persuasive effectiveness. Rather, with
greater perceived attitudinal similarity comes greater liking, which may or
may not mean greater effectiveness.15

Similarity and Credibility: Expertise Judgments


Perceived similarities (or dissimilarities) between source and audience can
certainly influence the audience’s judgment of the source’s expertise. But
there are two noteworthy features of this relationship.

First, the similarity or dissimilarity must be relevant to the influence
attempt if it is likely to influence judgments of expertise. For example, a
communicator seeking to influence a receiver’s judgment of the
president’s budget policy will probably not obtain enhanced expertise
judgments by pointing out that the communicator and recipient are
wearing the same color shirt. In a study that varied the communicator’s
occupational similarity (student vs. nonstudent, for an audience of
students) in advertisements for several consumer products, receivers’
judgments of the source’s expertise were found to be unrelated to
judgments of perceived similarity (Swartz, 1984); presumably, the
variations in similarity were not relevant to the persuasive issues involved.
Only relevant similarities (or dissimilarities) are likely to influence
judgments of the communicator’s expertise.

Second, not all relevant similarities will enhance perceived expertise, and
not all relevant dissimilarities will damage perceived expertise. For
example, a perceived similarity in relevant training and experience may
reduce the perceived expertise of a communicator (because the receiver
may be thinking, “I know as much about this topic as the speaker does”).
A perceived dissimilarity in relevant training and experience, on the other
hand, might either enhance or damage perceived expertise, depending on
the direction of the dissimilarity: If the receiver thinks that the
communicator is dissimilar because the communicator has better training
and experience, then presumably enhanced judgments of the
communicator’s expertise will be likely, but if the receiver thinks that the
communicator is dissimilar because the communicator has poorer training
and experience, then most likely the communicator’s perceived expertise
will suffer.

A demonstration of this sort of complexity was provided by a study of
speech dialect similarity, in which persons who spoke a general American
dialect heard one of two versions of a message from a speaker using either
a general American dialect or a Southern dialect; the message concerned a
well-known Southern governor (who enjoyed some popularity in the South
but not elsewhere), with one version offering a favorable view of the
governor and the other an unfavorable view. Regardless of the position
advocated, the speaker with the Southern (dissimilar) speech dialect was
perceived as more expert than the speaker with the general American
(similar) dialect, presumably because the Southern speaker could be
assumed to have better access to relevant information than would the
general American speaker (Delia, 1975).

Thus similarities should have varying effects on perceived expertise,
depending on the particulars of the circumstances. One should not be
surprised that the research literature indicates that similar others are
sometimes seen as more expert than are dissimilar others (e.g., Mills &
Kimble, 1973), sometimes as less expert (e.g., Delia, 1975), and
sometimes as not significantly differing in expertise (e.g., Atkinson,
Winzelberg, & Holland, 1985; Swartz, 1984). The effects of perceived
similarities and dissimilarities on judgments of communicator expertise
depend on whether, and how, the receiver perceives these as relevant to the
issue at hand.

Similarity and Credibility: Trustworthiness Judgments


The relationship between similarities and judgments of the
communicator’s trustworthiness appears to be complex as well. As
previously mentioned, certain sorts of similarities—specifically, perceived
attitudinal similarities—can influence the receiver’s liking for the
communicator, and enhanced liking for the communicator is commonly
accompanied by enhanced judgments of the communicator’s
trustworthiness. One would thus expect that perceived attitudinal
similarities might (through their influence on liking) exert some effect on
perceptions of the communicator’s trustworthiness.

This interplay of attitudinal similarity, liking, and trustworthiness
judgments is nicely illustrated in research by Meijinders et al. (2009). The
perceived trustworthiness of a journalist writing about genetically
modified food was greater when the journalist was described as having
similar (as opposed to dissimilar) attitudes about an unrelated subject. One
can see this effect as straightforwardly reflecting how attitudinal similarity
can enhance liking, which in turn enhances perceived trustworthiness.

However, there are intricacies here. In the previously described speech
dialect investigation, greater trustworthiness was ascribed to the
progovernor speaker using the similar (general American) dialect and to
the antigovernor speaker using the dissimilar (Southern) dialect (Delia,
1975). This effect is, of course, readily understandable: The Southern
speaker arguing against the Southern governor and the non-Southern
speaker supporting that governor could each have been seen as offering
views that ran against the tide of regional opinion—and hence seen as
speakers who must be especially sincere and honest in their expressions of
their opinions. But notice the complexity of these results regarding
similarity: Sometimes similarity enhanced perceptions of trustworthiness,
but sometimes it diminished such perceptions, depending on the position
advocated. And (to round things out) other investigators have found that
sometimes similarities have no significant effect on trustworthiness
judgments (e.g., Atkinson et al., 1985).

Summary: The Effects of Similarity


Perhaps it is now clear just how inadequate is a generalization such as
“greater similarity leads to greater persuasive effectiveness.” The effects of
similarity on persuasive outcomes are complex and indirect, and no single
easy generalization will encompass those varied effects. Indeed, in several
instances, similarities have been found in a single investigation to enhance
persuasive effectiveness under some conditions but to inhibit persuasive
effectiveness under other circumstances (e.g., Goethals & Nelson, 1973; S.
W. King & Sereno, 1973; Mills & Kimble, 1973).

So consider, as an example, the effects of salient similarities or
dissimilarities in group membership (“Ah, this communicator is a student
at my university”) on persuasion. Such group membership similarities
might provide bases for inferences about likely attitudinal similarities
between receiver and communicator or more generally might provide
bases for inferences about likability or credibility. Hence (following ELM-
like reasoning), on topics that are not especially personally relevant to the
receiver, such similarities might serve as peripheral cues (that engage
corresponding heuristics) and thus enhance the persuasiveness of messages
from similar communicators (those sharing group membership with the
receiver). By contrast, on topics of greater personal relevance (for which
greater message scrutiny is likely), these peripheral cue effects of group
membership similarities may be diminished. On such personally relevant
topics, however, group membership similarity might encourage closer
scrutiny of messages from similar communicators; such closer scrutiny
might enhance or inhibit persuasion, depending on such factors as the
quality of the message’s arguments. (For some relevant empirical work
and general discussions, see M. A. Fleming & Petty, 2000; Mackie &
Queller, 2000; N. Wyer, 2010.)

Such complexities might lead one to wonder about the common practice of
using peers (of the target audience) in health education programs on such
topics as smoking and unsafe sex; this practice can be seen to reflect a
generalized belief in the persuasive power of similarity. Given the
observed complexities of similarity’s roles and effects in persuasion,
however, perhaps it should not be surprising that several reviews have
concluded that peer-based health interventions are not dependably more
successful—and sometimes are significantly less successful—than
programs without such peer bases (Durantini, Albarracín, Mitchell, Earl, &
Gillette, 2006; Posavac, Kattapong, & Dew, 1999; cf. Cuijpers, 2002).16

In sum, if there is a general conclusion to be drawn about source-receiver
similarities in persuasion, it surely is that simple generalizations will not
do. To say, for example, that “receivers are more likely to be persuaded by
communicators they perceive as similar to themselves” is to overlook the
complexities of the effects that similarities have on persuasive outcomes.

Physical Attractiveness
The effects of physical attractiveness on persuasive outcomes—like the
effects of similarity—are rather varied. For the most part, “existing
research does indicate that heightened physical attractiveness generally
enhances one’s effectiveness as a social influence agent” (Chaiken, 1986,
p. 150; for some illustrative examples, see Horai, Naccari, & Fatoullah,
1974; Micu, Coulter, & Price, 2009; Widgery & Ruch, 1981). But physical
attractiveness appears to commonly operate in persuasion in a fashion akin
to similarity; that is, physical attractiveness affects persuasive outcomes
indirectly, by means of its influence on the receiver’s liking for the
communicator and the receiver’s assessment of the communicator’s
credibility.

Physical Attractiveness and Liking


Unsurprisingly, greater physical attractiveness tends to lead to greater
liking (for a review, see Berscheid & Walster, 1974). And, as discussed
previously, there is good evidence for the general proposition that on the
whole, liked communicators will be more effective persuaders than
disliked communicators. Hence the observed effects of physical
attractiveness on persuasive success might straightforwardly be explained
as arising from the recipient’s liking for the communicator (for a careful
elaboration of this idea, see Chaiken, 1986; for some illustrative results,
see Horai et al., 1974; Snyder & Rothbart, 1971).

In addition to the parallel overall general effect on persuasion (i.e., the
parallelism of the generally positive effect of attractiveness on persuasion
and the generally positive effect of liking on persuasion), there is also at
least some evidence that the complexities attendant to liking’s persuasive
effects also attach to attractiveness’s effects. Specifically, there are
indications that (a) credibility can be a more important determinant of
persuasion than physical attractiveness (e.g., Maddux & Rogers, 1980), (b)
the effect of physical attractiveness on persuasion is reduced as elaboration
increases (e.g., Kang & Kerr, 2006; see Chaiken, 1986, for a discussion of
the preponderance of low-relevance topics in studies reporting significant
effects of communicator physical attractiveness on persuasion), and (c) an
unattractive communicator can under some circumstances be a more
successful persuader than an attractive one (Buunk & Dijkstra, 2011; J.
Cooper et al., 1974; Kang & Kerr, 2006). These parallel effects thus
further strengthen the case for supposing that many of the effects of
communicator physical attractiveness on persuasive outcomes can best be
explained by the hypothesis that “physical attractiveness affects social
influence via its more direct impact on liking for the social influence
agent” (Chaiken, 1986, p. 151).

Physical Attractiveness and Credibility


It is, however, also conceivable that physical attractiveness can affect
persuasive outcomes through its effects on perceived credibility. But a
clear treatment of this possibility requires separate consideration of the
expertise and trustworthiness dimensions of credibility.

Concerning the effects of attractiveness on expertise: Investigations that
have found physically attractive persuaders to be more successful than
unattractive persuaders have typically not found the attractive
communicators to be rated higher in expertise (e.g., Chaiken, 1979; Horai
et al., 1974; Snyder & Rothbart, 1971; see also R. Norman, 1976; cf.
Patzer, 1983; and Praxmarer, 2011). Thus it is not plausible to suppose that
differential judgments of the communicator’s expertise generally mediate
the effect of communicator physical attractiveness on persuasive
outcomes. To be sure, in certain specific circumstances, the
communicator’s physical attractiveness might influence judgments of
expertise—namely, when the topic of influence is related to physical
attractiveness in relevant ways. For example, physically attractive sources
might enjoy greater perceived expertise in the realm of beauty products.
But generally speaking, the effect of the source’s physical attractiveness
on persuasive outcomes appears not to be achieved through enhanced
perceptions of the source’s expertise.

Concerning the effects of attractiveness on trustworthiness: Physical
attractiveness may (at least indirectly) influence judgments of the
communicator’s trustworthiness. Physical attractiveness influences liking
for the communicator; and, as discussed earlier, there is at least some
indirect evidence that the receiver’s liking for the communicator can
influence the receiver’s judgment of communicator trustworthiness. But
this roundabout path of influence is likely to mean that physical
attractiveness will have only weak effects on trustworthiness judgments: If
the effect of communicator physical attractiveness on trustworthiness
judgments is mediated by the receiver’s liking for the communicator, then
(given that liking for the communicator can be influenced by many other
things besides physical attractiveness) one should expect that the effect of
attractiveness on trustworthiness will be less strong than the effect of
attractiveness on liking (as was found by Patzer, 1983) and indeed will
typically be comparatively small, even negligible; Maddux and Rogers
(1980), for example, found that physically attractive persuaders were
indeed better liked but were not rated as significantly more sincere or
honest than were their physically unattractive counterparts (for related
results, see Snyder & Rothbart, 1971).

Summary
Understanding the role that communicator physical attractiveness plays in
influencing persuasive outcomes seems to require that central emphasis be
given to the influence of physical attractiveness on liking. Physical
attractiveness appears to affect persuasive outcomes not directly but rather
indirectly, especially (though not exclusively) by means of its influence on
the receiver’s liking for the communicator.

About Additional Communicator Characteristics
This discussion of the persuasive effects of communicator-receiver
similarity and communicator physical attractiveness has focused on how
those factors might influence credibility and liking, because the research
evidence seems to indicate that similarity and physical attractiveness
influence persuasive outcomes indirectly, through their effects on
credibility and liking. Indeed, in thinking about the effects of any given
additional source characteristic on persuasion, one useful avenue to
illuminating that characteristic’s effects can be a consideration of how that
characteristic might influence credibility or liking (and thereby indirectly
influence persuasive outcomes).

Such an approach will reveal considerable complexities in how a given
factor might eventually influence persuasive effects. Consider, as an
example, the question of the comparative persuasive success of
communicators varying in ethnicity (e.g., a Latino communicator and an
Anglo communicator) in influencing receivers who also vary in ethnicity.
The answer to this question almost certainly varies from case to case,
depending on the particulars involved. With one topic, the Latino
communicator may be perceived (by all receivers, no matter their
ethnicity) to be more credible than the Anglo communicator; with a
different topic, the Anglo communicator may be perceived as more
credible (no matter the receiver’s ethnicity); with yet another topic, there
may be no credibility differences associated with the ethnicity of the
communicator; or the credibility judgments may depend not just on the
topic addressed but also on the position advocated (as in the previously
discussed study of speech dialects, which had a topic on which regional
differences in knowledge and attitude were likely). But (to add to the
complexity here) these credibility judgments may not influence persuasive
outcomes substantially because variations in credibility are not always
associated with variations in effects; even when variations in these
credibility judgments are associated with variations in outcomes,
sometimes the lower-credibility communicator will be more effective than
the higher-credibility source. When one adds the complex relationship
between ethnicity and credibility to the complex relationship between
credibility and persuasive outcomes, the result is a rococo set of
possibilities for the relationship between ethnicity and persuasive effects.
(And notice: The discussion of this example has focused only on the direct
ethnicity-credibility relationship; the picture of ethnicity’s effects becomes
even more complex when one considers in addition the ethnicity-liking
relationship or the role of perceived ethnic similarity.)

Conclusion
As perhaps is apparent, communicator characteristics can have
complicated relationships with each other and can have various direct and
indirect effects on persuasive outcomes. But two further complexities
deserve mention: the nature of communication sources and the multiple
roles that communicator variables might play in persuasion.

The Nature of Communication Sources


The treatment of communicator factors here—and, for the most part, in the
research literature—rests implicitly on an image of the communicator as
an individual person communicating through linguistic means (speaking or
writing). In such cases, it is straightforward to conceptualize properties
such as the recipient’s liking for the source or the source’s perceived
credibility. But when persuasive messages take other forms, the nature of
the communication “source” undergoes some transformation.

Consider, for example, consumer advertisements, in which often there is
no identifiable individual person who is the communicator.
Advertisements themselves, however, are recognizable objects toward
which people might have attitudes (evaluations) and—given their
communicative function—they are objects whose credibility might
appropriately be assessed. That is, just as message recipients might have
attitudes toward, and credibility perceptions concerning, some identifiable
person delivering a message, so they can have attitudes toward
advertisements and perceptions of advertisement credibility. And thus
corresponding research questions can arise about the conceptualization and
assessment of ad liking (attitudes toward the advertisement) and ad
credibility (e.g., Eisend, 2002), about the factors that influence ad liking
and ad credibility (e.g., Kelly, Slater, & Karan, 2002; Lord, Lee, & Sauer,
1995), and about the persuasive effects of variations in ad liking and ad
credibility (e.g., S. P. Brown & Stayman, 1992; Nan & Zhao, 2010; Smit,
van Meurs, & Neijens, 2006).

Even in the context of advertisements, however, there are further
complications introduced by the possibility of advertisements’ using
spokespeople or endorsers. For example, in a consumer product ad that
uses a celebrity endorser, message recipients might have relevant attitudes
about the endorser (how well liked the endorser is), perceptions of the
endorser’s credibility (expertise, trustworthiness), attitudes toward the ad
as a whole, and perceptions of the ad’s overall credibility. (For some
illustrative research on endorsers, see Amos, Holmes, & Strutton, 2008;
Austin, de Vord, Pinkleton, & Epstein, 2008; Biswas, Biswas, & Das,
2006; Eisend & Langner, 2010; Lafferty, Goldsmith, & Newell, 2002;
Magnini, Garcia, & Honeycutt, 2010; Ohanian, 1990.)17

Websites (of all sorts, including personal or institutional webpages, wikis,
blogs, etc.) provide a similar case. As with advertisements, people might
have attitudes toward, and credibility perceptions concerning, websites.
And thus corresponding research questions can arise about the
conceptualization and assessment of website credibility and liking (e.g.,
Hu & Sundar, 2010; Metzger, Flanagin, Eyal, Lemus, & McCann, 2003;
Walther, Wang, & Loh, 2004) and about the antecedents and consequences
of variations in website credibility and liking (e.g., Cheung, Sia, & Kuan,
2012; Flanagin & Metzger, 2007; Hong, 2006; Lim, 2013; Metzger,
Flanagin, & Medders, 2010; Robins, Holmes, & Stansbury, 2010; Yi,
Yoon, Davis, & Lee, 2013).

The general point is that message “sources” can take a variety of forms—
people, advertisements, websites, and so forth—and consequently the
nature and operation of source characteristics (such as credibility)
naturally may vary across these different communication formats. Parallel
research questions will arise (about the nature, antecedents, and effects of
source characteristics), but the answers can be expected to differ.

Multiple Roles for Communicator Variables


A still larger complexity is also to be borne in mind here, namely, that a
variable can play many roles in persuasion (as suggested by the
elaboration likelihood model; see Chapter 8). For example, when a higher-
credibility communicator is observed to be more persuasive than a lower-
credibility communicator, this might have occurred because the
communicator’s apparent credibility served as a cue (and so engaged a
credibility-based heuristic), because the higher-credibility communicator
engendered greater message scrutiny than did the lower-credibility
communicator (in a circumstance in which the message had strong
arguments), because the higher-credibility communicator engendered less
message scrutiny than did the lower-credibility communicator (in a
circumstance in which the message had weak arguments), or because the
higher-credibility communicator more or less directly biased (influenced
the evaluative direction of) elaboration in a way favorable to the advocated
view. Moreover—apart from whatever influence credibility might
otherwise have on the persuasiveness of a message—the communicator’s
credibility may affect whether the communicator has access to the
audience (e.g., editors may provide space in the op-ed section of a news
outlet only to persons who appear to have relevant expertise) and whether
the audience pays much attention to the message (i.e., credibility may
influence message exposure or scrutiny). (For some examples of research
illustrating such varied roles for communicator characteristics, see J. K.
Clark, Wegener, & Evans, 2011; Howard & Kerin, 2011; Sinclair, Moore,
Mark, Soldat, & Lavis, 2010; Tormala, Briñol, & Petty, 2007; Ziegler &
Diehl, 2001.)18

For Review
1. What is credibility? What are the primary dimensions of credibility?
What is expertise? Describe the questionnaire items commonly used
to assess expertise. What is trustworthiness? Describe the
questionnaire items commonly used to assess trustworthiness.
Describe the research used to identify the primary dimensions of
credibility. What is factor analysis? What is knowledge bias?
Reporting bias? Explain the relationships of knowledge bias,
reporting bias, expertise, and trustworthiness.
2. Identify factors influencing credibility. Which of these influence
expertise and which trustworthiness? Describe the effect of
knowledge of the communicator’s education, occupation, experience,
and training on expertise and on trustworthiness. Describe the effect
of nonfluencies in delivery on expertise and on trustworthiness.
Describe the effect of citation of evidence sources on expertise and on
trustworthiness. Describe the effect of the advocated position on
expertise and on trustworthiness; explain the roles of knowledge bias
and reporting bias in this phenomenon. Describe the effect of liking
for the communicator on expertise and on trustworthiness. Describe
the effects of humor on expertise and on trustworthiness.
3. In research on the effects of credibility variations, are expertise and
trustworthiness usually manipulated separately? Explain. In this
research, are the low-credibility communicators low in absolute terms
or only relatively low? Explain.
4. Explain the idea that the magnitude of credibility’s effect on
persuasive outcomes might vary. Identify two factors that influence
the magnitude of credibility’s effect. Describe how the personal
relevance of the topic influences the magnitude of credibility’s effect.
Under what sort of relevance condition (high or low) will the effect of
credibility be relatively larger? Describe how the timing of
identification of the communicator influences the magnitude of
credibility’s effect. What timing of identification leads to relatively
larger effects of credibility?
5. Explain the idea that the direction of credibility’s effect on persuasive
outcomes might vary. Identify a factor that influences the direction of
credibility’s effect. Under what conditions will higher-credibility
sources be more persuasive than lower-credibility sources? And under
what conditions will the opposite effect occur? Describe a possible
explanation for the latter effect.
6. What is the general rule of thumb concerning the effect of variations
in liking (of the communicator) on persuasive outcomes? Explain
how that general principle can be misleading (e.g., identify a limiting
condition). Describe the relative strength of the effects of credibility
and the effects of liking (on persuasive outcomes). Describe how
variations in the personal relevance of the topic influence the effects
of liking. What relevance conditions (high or low) lead to relatively
larger effects of liking? Can a disliked communicator be more
persuasive than a liked communicator? Can a disliked communicator
be more persuasive than a liked communicator even when the two
communicators are equivalent with respect to other characteristics
(e.g., credibility)? Identify a necessary condition for an otherwise
equivalent disliked communicator’s being more persuasive than a
liked communicator.
7. Does perceived similarity influence persuasive outcomes directly or
indirectly? Explain. Through what avenues does perceived similarity
influence persuasive outcomes? What is attitudinal similarity? How
does perceived attitudinal similarity influence liking? Can liking be
influenced by perceived similarities that are not relevant to the
message topic? Can perceived similarities influence judgments of
communicator expertise? Identify a necessary condition for a
perceived similarity to influence expertise judgments. Will all
relevant perceived similarities enhance expertise? Will all relevant
perceived dissimilarities diminish expertise? Explain. Can perceived
similarities influence judgments of communicator trustworthiness?
Explain. Why is it misleading to assume that greater perceived
similarity enhances persuasive effectiveness?
8. Does the physical attractiveness of the communicator influence
persuasive outcomes directly or indirectly? Explain. Through what
avenues does physical attractiveness influence persuasive outcomes?
How does physical attractiveness influence liking? Can physical
attractiveness enhance perceived expertise? Give an example. Can
physical attractiveness enhance perceived trustworthiness?
9. Explain how other communicator characteristics (i.e., other than
credibility, liking, similarity, and physical attractiveness) influence
persuasive outcomes indirectly.
10. Explain how a communication “source” might not be an identifiable
individual; give examples. Describe how questions about the nature,
antecedents, and effects of source characteristics can arise concerning
such sources. Explain how communicator variables might play
multiple roles in persuasion; give examples.

Notes
1. Not all the factors that in the research literature have been labeled
“trustworthiness” (or “character,” “safety,” or the like) contain many of the
items that here are identified as assessing trustworthiness (e.g.,
McCroskey, 1966). An important source of confusion is the apparent
empirical association between a receiver’s liking for a communicator and
the receiver’s judgment of the communicator’s trustworthiness; this
covariation is reflected in factor analyses that have found items such as
honest-dishonest, trustworthy-untrustworthy, and fair-unfair to load on the
same factor with items such as friendly-unfriendly, pleasant-unpleasant,
nice–not nice, and valuable-worthless (see, e.g., Applbaum & Anatol,
1972; Bowers & Phillips, 1967; Falcione, 1974; McCroskey, 1966; Pearce
& Brommel, 1972). This pattern can plausibly be interpreted as reflecting
the effects of liking on trustworthiness judgments (receivers being inclined
to ascribe greater trustworthiness to persons they like). But such empirical
association should not obscure the conceptual distinction between
trustworthiness and liking, especially because the empirical association is
imperfect; see Delia’s (1976, pp. 374–375) discussion of Whitehead’s
(1968) results, or consider the stereotypical used car salesman who is
likable but untrustworthy. In this chapter, investigations are treated as
bearing on judgments of trustworthiness only when it appears that
trustworthiness (and not liking) has been assessed.

2. This finding (that citation of evidence sources can enhance perceptions
of the communicator’s expertise and trustworthiness) may be seen to have
implications for the elaboration likelihood model (ELM; see Chapter 8).
Although source and message variables are not partitioned by the ELM as
having intrinsically different roles to play in persuasion, it is clear that
message materials might have implications for perceptions of source
characteristics (as when advocacy of an unexpected position enhances
perceptions of communicator trustworthiness and thereby engenders
reduced message scrutiny; Priester & Petty, 1995). The finding under
discussion points specifically to the possibility that variations in
argumentative message content may alter impressions of the
communicator’s credibility (Slater & Rouner, 1996). (As an aside:
Compared with premessage identification, postmessage identification of
communicators has sometimes been seen to yield more positive
impressions of credibility [Ward & McGinnies, 1973]. This result might
easily be understood as a consequence of participants’ larger reliance on
message materials—which commonly appear to have been of good quality
—as a basis for credibility judgments in conditions in which identification
follows the message as compared with those in which it precedes the
message.) In the context of the ELM, this implies that variations in
argument strength might affect persuasive outcomes by providing what
amounts to credibility-related cue information (for some relevant evidence,
see Reimer, 2003). The existence of such a pathway, in turn, invites
reconsideration of the commonly observed enhanced effect that argument
strength manipulations have (on persuasive outcomes) under conditions of
high personal relevance; that effect could come about through the use of a
credibility-related heuristic (and not through anything such as genuinely
thoughtful consideration of substantive arguments), as long as there was
sufficiently close message scrutiny to permit receivers to notice whatever
message elements are used as a basis for inferences about credibility. The
point here is not that this pathway provides an entirely satisfactory account
of the accumulated findings on this matter but only that the possibility of
this pathway points to some complexities in untangling what lies behind
the effects observed in ELM research.

3. Although expectancy disconfirmation can enhance perceptions of the
communicator’s expertise and trustworthiness, the communicator likely to
be perceived as the most expert and trustworthy may be a qualified source
about whom the audience has no expectations so far as message position is
concerned (see, e.g., Arnold & McCroskey, 1967; Weinberger & Dillon,
1980).

4. Estimates of the mean effect on credibility perceptions, expressed as a
correlation, range from .16 to .22 (Eisend, 2006; O’Keefe, 1999a).
Notably, the credibility-enhancing effect (of mentioning opposing
considerations without refuting them) that obtains in consumer advertising
messages is not found in other messages (e.g., those concerning public
policy issues; O’Keefe, 1999a). It may simply be that skepticism about
consumer advertising is substantially greater than that about public policy
advocacy—and hence nonrefutational acknowledgment of potential
counterarguments is more surprising when it occurs in consumer
advertisements than when it occurs in other messages.

5. For results consistent with this expectation, see Marquart, O’Keefe, and
Gunther (1995), who found that perceived attitudinal similarity (which can
influence liking; see, e.g., Berscheid, 1985) influenced ratings of sources’
trustworthiness but not expertise.

6. For some reviews of research concerning factors affecting credibility in
specific persuasion contexts, see Hoyt (1996), G. R. Miller and Burgoon
(1982), and Wathen and Burkell (2002).

7. Some experimental manipulations do appear to be especially targeted to
influencing perceptions of expertise (e.g., by providing background
information about occupation and training), and it can be tempting to
interpret such studies as speaking specifically to questions about the
effects of variations in perceived expertise. But a manipulation of apparent
expertise may also affect perceptions of trustworthiness, and thus the
results of such a study should not necessarily be interpreted as reflecting
distinctly expertise effects. (Indeed, at least in some domains, perceptions
of expertise, trustworthiness, and attractiveness are sufficiently highly
correlated that some recommend collecting these under a single global
credibility construct; see Hoyt, 1996, concerning therapist credibility.) One
way of protecting against this problem can be to assess perceptions of both
expertise and trustworthiness and to examine the effects of the
manipulation on each. (It will not be sufficient to show that the
experimental manipulation significantly influenced expertise perceptions
but not trustworthiness perceptions. Such a finding would not necessarily
mean that the manipulation influenced expertise perceptions significantly
more than it influenced trustworthiness perceptions.) But when the
research question of interest concerns the relationship between perceived
expertise (or perceived trustworthiness) and persuasive outcomes,
analyzing data by examining the relationship between the experimental
manipulation and persuasive outcomes fails to provide relevant
information; with such a research question, the relationship between the
perceptual state (e.g., perceived expertise) and persuasive outcomes should
be examined directly.

8. E. J. Wilson and Sherrell’s (1993) review reported that expertise
manipulations appear to have larger effects on persuasive outcomes than
do trustworthiness manipulations. But (as discussed in the preceding note)
this does not speak to the question of the relative influence (on persuasive
effects) of perceived expertise and perceived trustworthiness. Concerning
perceived therapist credibility specifically, Hoyt’s (1996) review found
(with a relatively small number of effect sizes) no reason to suppose that
perceived expertise and perceived trustworthiness were differentially
related to therapist influence; but the generality of such a result is an open
question.

9. For example, in Terwel, Harinck, Ellemers, & Daamen’s (2009) study,
the means on the trustworthiness index (average of three 7-point scales,
with a midpoint of 4) were 5.24 for the high-credibility source and 3.52 for
the low-credibility source. On Nan’s (2009) 7-point trustworthiness scale
(midpoint of 4), the mean ratings were 4.41 for the high-credibility source
and 4.00 for the low-credibility source. Such results are common (e.g.,
Bochner & Insko, 1966; Falomir-Pichastor, Butera, & Mugny, 2002;
Greenberg & Miller, 1966, Experiment 1; H. H. Johnson & Scileppi, 1969;
Sternthal et al., 1978; Tormala, Briñol, & Petty, 2006).

10. See, for example, Greenberg and Miller (1966) and Sternthal,
Dholakia, and Leavitt (1978). This difficulty is consistent with studies of
the ratings given to “ideal” high- and low-credibility communicators,
which have found that when respondents are asked to indicate where a
perfectly credible and a perfectly noncredible communicator would be
rated on expertise and trustworthiness scales, the ratings are not at the
absolute extremes (R. A. Clark, Stewart, & Marston, 1972; see also J. K.
Burgoon, 1976).

11. As mentioned in Chapter 8 (concerning the elaboration likelihood
model), a good deal of research concerning variations in personal
relevance has used the term involvement as a label for this variable (and so
the point under discussion could be phrased as a matter of credibility’s
impact declining as the receiver’s involvement with the issue increases).
But the term involvement has also been used to cover other variations in
the relationship that receivers have to the message topic and so, in interests
of clarity, is avoided here.

12. As discussed in Chapter 8, the elaboration likelihood model (ELM)
actually proposes a slightly more specific version of this generalization
(about the decline of credibility’s effect on persuasive outcomes as
personal relevance increases); it stresses that as elaboration declines (e.g.,
as topic relevance declines), credibility plays less of a role as a peripheral
cue but may still influence persuasion through other mechanisms. But
when (for instance) credibility influences persuasion through influencing
the degree of message scrutiny, then (given that increasing or reducing
message scrutiny might lead to either greater or lesser persuasion,
depending on the outcomes of greater message scrutiny) credibility’s
apparent relationship with persuasive outcomes will presumably also
appear to weaken. On the one hand, there is the observed empirical
regularity (as personal relevance increases, the simple apparent
relationship between credibility variations and persuasive effects
weakens), and on the other, an ELM-based explanation of that observed
regularity (viz., that credibility has a lessened role as a peripheral cue but
might serve in other roles that also would naturally make for a weaker
apparent relationship); one presumably does not want to confuse these.

13. For a suggestion that parallel effects (i.e., generation of supportive
advocacy for proattitudinal messages) may be more likely to occur when
manifest argument quality is weak than when it is strong, see Akhtar,
Paunesku, & Tormala (2013).

14. Because variation in liking can affect persuasive outcomes, anything
that influences the message recipient’s liking for the communicator thus
might potentially influence persuasive outcomes (via its effects on liking).
For example, doing a favor for the recipient can influence liking and
thereby persuasion (e.g., Goei, Lindsey, Boster, Skalski, & Bowman,
2003). Or, as discussed in a subsequent section, perceived attitudinal
similarities can influence liking and hence persuasion. But a word of
caution: Something that enhances both liking and persuasion might do so
without liking being the mechanism that produces the persuasive effects.
For example, compliments (given to the recipient by the communicator)
can enhance both liking and persuasion—but the increased liking might
not be responsible for the increased persuasion (Grant, Fabrigar, & Lim,
2010).

15. Other kinds of perceived similarities (beyond attitudinal similarities)
might also enhance liking and thereby potentially influence persuasive
outcomes. For example, incidental similarities in first names, birthdays, or
birthplaces appear capable of producing such effects (see, e.g., Burger,
Messian, Patel, del Prado, & Anderson, 2004; Garner, 2005; Guéguen,
Pichot, & Le Dreff, 2005; Jiang, Hoegg, Dahl, & Chattopadhyay, 2010;
Silvia, 2005; for some complexities, see Howard & Kerin, 2011).

16. Several other reviews have offered evidence concerning the
effectiveness of peer-based health interventions (e.g., Kennedy, O’Reilly,
& Sweat, 2009; Maticka-Tyndale & Barnett, 2010; Simoni, Nelson,
Franks, Yard, & Lehavot, 2011; Webel, Okonsky, Trompeta, & Holzemer,
2010)—but the question of interest here is the relative effectiveness of
peer-based and non-peer-based interventions.

17. And this does not exhaust the potentially relevant endorser-related
perceptions. For example, there is reason to think that the effectiveness of
endorser ads can be driven not so much by liking or credibility as by
perceptions of the fit between other attributes of the endorser and attributes
of the product (see, e.g., Kamins, 1990; Misra & Beatty, 1990;
Mittelstaedt, Riesz, & Burns, 2000; Till & Busler, 2000; Törn, 2012).

18. As discussed in Chapter 8 (concerning the elaboration likelihood
model), there is not yet a completely well-articulated account of the
circumstances under which a given variable will serve in one or another
persuasion role. For the moment, then, the point is to be attentive to the
mistake of assuming that a given communicator characteristic can function
in only one way in persuasion.

Chapter 11 Message Factors

Message Structure and Format


Conclusion Omission
Recommendation Specificity
Narratives
Prompts
Message Content
Consequence Desirability
One-Sided Versus Two-Sided Messages
Gain-Loss Framing
Threat Appeals
Sequential Request Strategies
Foot-in-the-Door
Door-in-the-Face
Conclusion
For Review
Notes

This chapter reviews research concerning the effects that selected message
variations have on persuasion. The message factors discussed are grouped
into three broad categories: message structure and format, message
content, and sequential-request strategies.

Message Structure and Format


This section discusses four aspects of the structure and format of
persuasive messages that have been investigated for their possible effects
on persuasive outcomes: whether the message’s conclusion is explicitly
stated, the degree of specificity with which the communicator’s advocated
action is described, the use of narratives as vehicles for persuasive
messages, and the use of simple prompts.

Conclusion Omission
Obviously, persuasive messages have some point—some opinion or belief
that the communicator hopes the audience will accept, some recommended
action that the communicator wishes to have adopted. But should the
message explicitly make that point—explicitly state the conclusion or
recommendation—or should the message omit the conclusion and so leave
the point unstated?1

Intuitively, there look to be good reasons for each alternative. For instance,
one might think that making the conclusion explicit would be superior
because receivers would then be less likely to misunderstand the point of
the message. On the other hand, it might be that if the communicator
simply supplies the premises, and the audience reasons its own way to the
conclusion, then perhaps the audience will be more persuaded than if the
communicator had presented the desired conclusion (more persuaded,
because they reached the conclusion on their own).

There have been a number of investigations of this question, in which an
explicit conclusion is either included in or omitted from the message.2 For
example, Struckman-Johnson and Struckman-Johnson (1996) compared
AIDS public service announcements with and without an explicit
recommendation to use condoms. In such studies, the overwhelmingly
predominant finding is that messages that include explicit conclusions or
recommendations are more persuasive than messages without such
elements (for a review, see O’Keefe, 2002b; see, relatedly, Moyer-Gusé,
Jain, & Chung, 2012).3

There has often been speculation that the apparent advantage of explicit
conclusions may be moderated by factors involving the hearer’s ability and
willingness to draw the appropriate conclusion when left unstated. Hence
variables such as the receiver’s intelligence (which bears on ability) and
initial opinion (which bears on willingness) have often been mentioned as
possible moderators (e.g., McGuire, 1985). The expectation has been that
explicit conclusions may not be necessary to, and might even impair,
persuasive success for intellectually more capable audiences and for
audiences initially favorable to the advocated view (because such
audiences should be able and willing to reason to the advocated
conclusion). What little relevant empirical evidence exists, however, gives
no support to these speculations. For example, in several studies, the
audience was comparatively intelligent and well-educated (college
students), and even so, there was a significant advantage for messages with
explicit recommendations or conclusions (e.g., Fine, 1957).

One possible explanation for the persuasive advantage of explicitly stated
conclusions is that when the conclusion is omitted, assimilation and
contrast effects are encouraged. As discussed in Chapter 2 (concerning
social judgment theory), assimilation and contrast effects are perceptual
distortions concerning what position is being advocated by a message (C.
W. Sherif et al., 1965; M. Sherif & Hovland, 1961): An assimilation effect
occurs when a receiver perceives the message to advocate a view closer to
his or her own than it actually does; a contrast effect occurs when a
receiver perceives the message to advocate a position more discrepant
from his or her own than it actually does. Both assimilation and contrast
effects reduce persuasive effectiveness—contrast effects because they
make the message appear to urge an even more unacceptable viewpoint,
assimilation effects because they reduce the amount of change apparently
sought by the message. Notably, relatively ambiguous messages (i.e.,
messages ambiguous about what position is being advocated) appear
especially susceptible to assimilation and contrast effects (Granberg &
Campbell, 1977). Thus the reduced persuasive success of messages
omitting explicit conclusions may arise because such messages are
relatively more subject to assimilation and contrast effects.

In any case, the research evidence suggests that persuaders commonly
have little to gain (and much to lose) by leaving the message’s conclusion
implicit. Ordinarily, messages containing explicit statements of the
conclusion will be more effective than messages that omit such
statements.4

Recommendation Specificity
When a communicator is urging some particular action, the message can
vary in the specificity with which the advocated action is described. The
contrast here is between messages that provide only a general description
of the advocate’s recommended action and messages that provide a more
specific (detailed) recommendation. Both messages contain an explicitly
stated conclusion (in the form of an explicitly identified desired action),
but one conclusion is more detailed than the other. For example,
Leventhal, Jones, and Trembly (1966) compared persuasive messages
recommending that students get tetanus shots at the student health clinic
with messages providing a more detailed description of the recommended
action (e.g., mentioning the location and hours of the clinic—although
students were already familiar with such information). Similarly, Evans,
Rozelle, Lasater, Dembroski, and Allen (1970) compared messages giving
relatively general and unelaborated dental care recommendations with
messages giving more detailed, specific recommendations. Such studies
have commonly found that messages with more specific descriptions of the
recommended action are more persuasive than those providing general,
nonspecific recommendations (for a review, see O’Keefe, 2002b).5

It is not yet clear what might explain this effect. One possibility is that
more specific descriptions of the recommended action enhance the
receiver’s behavioral self-efficacy (perceived behavioral control). As
discussed in Chapter 6, reasoned action theory suggests that one factor
influencing a person’s behavioral intention is the individual’s belief in his
or her ability to engage in the behavior (perceived behavioral control). For
example, people who do not think that they have the ability to engage in a
regular exercise program (because they lack the time, the equipment, and
so forth) are unlikely to undertake such behavior, even if they have
positive attitudes toward exercising. It may be that—akin to the enhanced
self-efficacy that can arise from seeing another person perform the action
—receivers who encounter a detailed description of the recommended
action may become more convinced of their ability to perform the
behavior. A second (not necessarily competing) possible explanation is
that a specific action description encourages people to plan their
behavioral performance and thus develop implementation intentions
(subsidiary intentions related to the concrete realization of a more abstract
intention, as discussed in Chapter 6; see Gollwitzer & Sheeran, 2006),
which in turn make behavioral performance more likely.6

Narratives
Broadly conceived, a narrative is a story, that is, a depiction of a sequence
of related events. Much research attention has recently been given to
studying narrative as a distinctive message format for persuasion. For
example, instead of trying to persuade by making explicit arguments,
one might use a story as the vehicle for persuasive information.

Complexities in Studying Narrative and Persuasion


Studying narratives and persuasion is complicated for at least three
reasons. First, narratives can take many forms. A narrative might figure as
a brief illustration (e.g., when one part of an advertisement for a diet plan
is a before-and-after testimonial), as a more extensive story that forms the
bulk of the message (e.g., when one example serves as a “case study”), or
as an even more extended story (e.g., when a daytime television drama has
a multiepisode story arc concerning some health topic). Narratives might
be fictional or factual, might have a simple natural order or a more
complex structure (e.g., with flashbacks), might be delivered in first person
(“I did X”) or third person (“He did X”) forms, and so forth.

Second, there is no single clearly conceived message form against which
to contrast narrative messages. Researchers have naturally wanted to
compare the effects of narrative messages against nonnarrative messages,
but a great variety of nonnarrative forms have been explored. For example,
some research has compared a narrative form in which a single illustrative
case is presented against a nonnarrative form in which statistical
information is presented (e.g., Uribe, Manzur, & Hidalgo, 2013). Other
studies have compared the effects of a narrative message against a
“didactic” or “argument-based” message (e.g., Prati, Pietrantoni, & Zani,
2011).7

Third, narratives might conceivably play any number of different roles in
persuasion. As examples: A narrative might engage the audience’s
attention and so enhance message exposure or message elaboration. A
narrative might provide evidence for a claim and hence enhance belief in
that claim (“you actually can integrate exercise into your daily routine—
here’s the story of someone like you who did just that”). A narrative might
induce a persuasion-relevant psychological state, such as a (positive or
negative) mood (e.g., Cerully & Klein, 2010) or a counterfactual mind-set
(e.g., Nan, 2008), which in turn might affect the amount or evaluative
direction of subsequent message processing. (This surely does not exhaust
the possible roles that narrative might play in persuasion.)8

The Persuasive Power of Narratives


For all that studying narrative persuasion is a complex challenge, and
despite the scattered and incomplete research to date, the potential
persuasiveness of narratives is undeniable. In a number of studies, (various
kinds of) narrative messages have been found to be more persuasive than
(various kinds of) nonnarrative messages. As examples: Adamval and
Wyer (1998) found that vacations were evaluated more favorably when a
travel brochure presented a narrative rather than a list of features of the
vacation. Dillard, Fagerlin, Dal Cin, Zikmund-Fisher, and Ubel (2010)
reported that adding a narrative to a traditional educational message
increased interest in colorectal cancer screening. In Murphy, Frank,
Chatterjee, and Baezconde-Garbanati’s (2013) study, a fictional narrative
film was more effective than a nonnarrative message in enhancing
knowledge and intentions concerning cervical cancer. Polyorat, Alden, and
Kim (2007) reported that narrative ads produced more favorable
evaluations of several consumer products than did “factual” ads. Prati et al.
(2011) found that a narrative message was more effective than a didactic
message in influencing various risk and efficacy perceptions concerning
flu shots. (For some other illustrations, see Appel & Richter, 2007; H. S.
Kim, Bigman, Leader, Lerman, & Cappella, 2012; Larkey & Gonzalez,
2007; Masser & France, 2010; Morgan, Cole, Struttmann, & Piercy, 2002;
Morman, 2000; Niederdeppe, Shapiro, & Porticella, 2011; Ricketts,
Shanteau, McSpadden, & Fernandez-Medina, 2010.)

In short, there is little room for doubt that narratives can be more
persuasive than nonnarrative messages. This research evidence does not
show that narratives are generally more persuasive than nonnarrative
messages—only that it is possible for narratives to have a persuasive
advantage. It remains to be seen exactly under what circumstances a given
narrative form will be more or less persuasive than some specific
nonnarrative message form.9

Factors Influencing Narrative Persuasiveness


The persuasiveness of narratives (in absolute terms or as compared with
nonnarrative messages) is likely to vary from one instance to another. A
number of different possible moderating factors have received at least
some research attention, but two specific factors warrant mention here.

One is the degree to which the recipient identifies with the narrative’s
characters.10 In a number of studies, greater identification with characters
has been found to be associated with greater persuasive effects of
narratives. As examples: In Moyer-Gusé, Chung, and Jain’s (2011) study
of narratives in which safer-sex conversations were modeled, greater
identification with the characters enhanced recipients’ self-efficacy for
having such conversations themselves. Igartua (2010) found that character
identification influenced the degree to which a fictional film affected
story-relevant beliefs and attitudes (see, relatedly, Igartua & Barrios,
2012). De Graaf, Hoeken, Sanders, and Beentjes (2012) manipulated
character identification by varying the perspective from which the story
was told, which yielded corresponding effects on story-consistent attitudes
(recipients were more inclined to have attitudes consistent with the
perspective of the character from whose perspective the story was told).
(For other illustrations, see Sestir & Green, 2010; van den Hende, Dahl,
Schoormans, & Snelders, 2012. For a review, see Tukachinsky &
Tokunaga, 2013).11

A second factor influencing the persuasiveness of narratives is the degree
to which the recipient is “transported” by the narrative—caught up in, or
carried away by, the story. Narratives are potentially capable of inducing a
state of “transportation” in message recipients, a state in which the
recipients are so immersed in (transported by) the story that they become
completely focused on the world depicted in the story.12 (For discussions
of the assessment of transportation, see Busselle & Bilandzic, 2009; Green
& Brock, 2000.) The suggestion has been that when people are thus
transported, their real-world beliefs are more likely to be affected by
narrative content. For example, Green and Brock (2000) had participants
read a story about a student whose sister is stabbed to death by a
psychiatric patient at a shopping mall. Those participants who were
relatively highly transported into the story were (compared with less-
transported participants) more likely to subsequently report story-
consistent beliefs (e.g., enhanced beliefs in the likelihood of violence,
reduced beliefs that the world is just, etc.). That is, transportation made
people more susceptible to narrative influence. (For related findings, see,
e.g., Banerjee & Greene, 2012; Dunlop, Wakefield, & Kashima, 2010;
Escalas, 2004, 2007; Reinhart & Anker, 2012; Vaughn, Hesse, Petkova, &
Trudeau, 2009; Zwarun & Hall, 2012. For reviews, see Tukachinsky &
Tokunaga, 2013; van Laer, de Ruyter, Visconti, & Wetzels, 2014.)13

Transportation might lead to enhanced persuasion through any number of
different mechanisms (see Carpenter & Green, 2012)—reduced
counterarguing, enhanced imagery, and so on—but there is relatively little
evidence bearing on these possibilities. (For discussion, see Slater &
Rouner, 2002; Vaughn, Childs, Maschinski, Niño, & Ellsworth, 2010. For
some illustrative research findings, see Dunlop, Wakefield, & Kashima,
2010; McQueen, Kreuter, Kalesan, & Alcaraz, 2011; Moyer-Gusé & Nabi,
2009; Niederdeppe, Kim, Lundell, Fazili, & Frazier, 2012.) Research is
also only beginning to explore the factors that might affect the likelihood
of transportation, such as features of the story (e.g., de Graaf & Hustinx,
2011), characteristics of the recipient (e.g., Appel & Richter, 2010; Dal
Cin, Zanna, & Fong, 2004; Thompson & Haddock, 2012), and properties
of the medium (e.g., Braverman, 2008); for a general discussion, see Green
(2008).

The relationship of these two influences on narrative persuasion
(transportation and character identification) is not entirely clear. One might
treat each of these as a more specific realization of a broader concept of
narrative “involvement” (e.g., Tukachinsky & Tokunaga, 2013). Or one
might treat them as distinct influences on (i.e., alternative pathways to)
narrative persuasion. Or these might be related in some fashion; in
particular, character identification might foster transportation (see, e.g.,
van Laer et al., 2014). But to date there is too little research evidence to
permit confident conclusions about the relationships of these factors (for
some relevant work, see Murphy, Frank, Moran, & Patnoe-Woodley,
2011; Sestir & Green, 2010; Tal-Or & Cohen, 2010).

And, of course, character identification and transportation are not the only
possible influences on narrative persuasiveness. A variety of other factors
have also been explored, including the nature of the communication source
(e.g., Hopfer, 2012) and various properties of the narrative material (see,
e.g., Appel & Mara, 2013; Dahlstrom, 2010; H. S. Kim et al., 2012;
Moyer-Gusé, Jain, & Chung, 2012; Tal-Or, Boninger, Poran, & Gleicher,
2004). However, obtaining dependable generalizations about any such
factors—and identifying exactly how and why such factors influence
narrative persuasiveness (keeping in mind that the effects might be
obtained by affecting character identification or transportation)—remains
some distance in the future.

Entertainment-Education
One particular application of persuasive narratives is worth mention:
entertainment-education. Entertainment-education (EE) is the purposeful
design of entertainment media specifically as vehicles for educating—and
thereby influencing behavior. A classic example is provided by the South
African dramatic television series Soul City, which was initially created for
the purpose of conveying HIV prevention information. The series proved
both enormously popular and effective in providing the desired
information. In subsequent years the program expanded to address other
subjects (e.g., tobacco control and domestic violence) and to include other
media (e.g., a radio series). (For an overview of Soul City development,
see Usdin, Singhal, Shongwe, Goldstein, & Shabalala, 2004.) Similar
programs have been created in a number of developing countries (see, e.g.,
Abdulla, 2004; Kuhlmann et al., 2008; Ryerson & Teffera, 2004; Smith,
Downs, & Witte, 2007) and, less commonly, in the developed world (e.g.,
van Leeuwen, Renes, & Leeuwis, 2013; Wilkin et al., 2007). The
challenge in creating EE programs is striking the right balance between
entertainment (which attracts the audience) and education (which is the
reason for creating the program in the first place)—and this can be difficult
to manage (for some discussion, see Renes, Mutsaers, & van Woerkum,
2012).

A variant approach is to use some existing entertainment program as the
vehicle for conveying health information. That is, rather than creating an
entertainment program from scratch, instead one can weave the relevant
information into some existing program (ideally, a relatively popular one
that already attracts the desired audience). For example, storylines about
breast cancer, immunization, and safer sex have been embedded in various
popular television shows with the explicit purpose of influencing viewers
(for examples, see Bouman, 2004; Glik et al., 1998; Hether, Huang, Beck,
Murphy, & Valente, 2008; Kennedy, O’Leary, Beck, Pollard, & Simpson,
2004; Whittier, Kennedy, St. Lawrence, Seeley, & Beck, 2005).

In EE applications of narrative persuasion, persuasive effects can come
about in two ways. First, the program can have the usual sort of direct
effects on viewers. For example, consistent with the findings discussed
earlier, several studies have reported that the effectiveness of EE
programming is enhanced when recipients identify with a narrative
character (e.g., Kuhlmann et al., 2008; Smith, Downs, & Witte, 2007;
Wilkin et al., 2007). Second, indirect effects can arise from
communication stimulated by the entertainment programming. For
example, exposure to a Nepalese radio drama serial concerning family
planning enhanced the likelihood that women would discuss family
planning with their spouse (Sharan & Valente, 2002; for related findings,
see Love, Mouttapa, & Tanjasiri, 2009; Pappas-DeLuca et al., 2008).14

Summary
Narratives can be powerful vehicles for persuasion, but many open
questions remain about how and why narratives persuade. At the moment
not much is securely known about exactly when any given narrative form
will be more persuasive than some specifiable nonnarrative form, what
factors influence the relative persuasiveness of different narrative forms, or
how such moderating factors are related to the mechanisms underlying
narrative effects. Continuing attention to these questions will be
welcomed. (For some general discussions of narrative persuasion, see
Bilandzic & Busselle, 2013; Carpenter & Green, 2012; Green & Clark,
2013; Hinyard & Kreuter, 2007; Larkey & Hecht, 2010; Larkey & Hill,
2012; Moyer-Gusé, 2008; Slater & Rouner, 2002; Vaughn et al., 2010;
Winterbottom, Bekker, Conner, & Mooney, 2008.)

Prompts
A prompt (reminder) is a simple cue that makes behavioral
performance salient and hence can trigger the behavior. Depending on the
context, a prompt might be delivered by a small sign or poster, a text
message, an automated phone call, an email, regular mail, and so on. The
message can be variously phrased—as an explicit reminder (“Don’t forget
to …”), as an invitation (“Have you considered … ?”), as a simple
rationale for the behavior (“Taking the stairs burns calories”), and so on—
but is characteristically relatively brief.

A large number of experiments have demonstrated the potential of such
prompts to influence behavior.15 As examples: Simple signs can increase
stair use in shopping malls, train stations, and workplaces, especially when
the alternative is an escalator (e.g., Andersen, Franckowiak, Snyder,
Bartlett, & Fontaine, 1998; Boen, Maurissen, & Opdenacker, 2010; Kwak,
Kremers, van Baak, & Brug, 2007; for a review, see Nocon, Müller-
Riemenschneider, Nitzschke, & Willich, 2010). Text message reminders
can encourage sunscreen use (Armstrong et al., 2009), voting (Dale &
Strauss, 2009), and saving money (Karlan, McConnell, Mullainathan, &
Zinman, 2010). Reminders can increase cancer screenings (Burack &
Gimotty, 1997), medication initiation (Waalen, Bruning, Peters, & Blau,
2009), medication adherence (Lester et al., 2010; Vervloet et al., 2012),
diabetes monitoring (Derose, Nakahiro, & Ziel, 2009), and newborn
immunization rates (Alemi et al., 1996). Prompts can increase organ donor
registration (A. J. King, Williams, Harrison, Morgan, & Havermahl,
2012), seat belt use (Austin, Alvero, & Olson, 1998; Austin, Sigurdsson, &
Rubin, 2006; Cox, Cox, & Cox, 2000), and the frequency with which
physicians recommend preventive care measures (such as immunization
and cancer screening) to their patients (Dexheimer, Talbot, Sanders,
Rosenbloom, & Aronsky, 2008; Shea, DuMouchel, & Bahamonde, 1996).
(For some reviews, see Head, Noar, Iannarino, & Harrington, 2013; Fry &
Neff, 2009; Szilagyi et al., 2002; Tseng, Cox, Plane, & Hia, 2001. For
some examples of unsuccessful prompting, see Amass, Bickel, Higgins,
Budney, & Foerg, 1993; Blake, Lee, Stanton, & Gorely, 2008.)16

There are probably at least two necessary conditions for a prompt to be
effective in inducing behavior. First, the recipients must presumably
already have the appropriate positive attitude; reminding people to do
something they don’t want to do is unlikely to be very effective. Second,
the recipients must presumably already believe themselves capable of
performing the behavior (expressed in terms of reasoned action theory,
perceived behavioral control must be sufficiently high); prompting people
to do something they don’t think they can do is presumably unlikely to be
very effective. So when people are willing to perform a behavior (positive
attitude) and think themselves able to perform the behavior (high
perceived behavioral control), but nevertheless aren’t doing the behavior,
then perhaps all that’s needed is a simple prompt.17

Message Content
This section reviews research concerning the persuasive effects of certain
variations in the contents of messages. Literally dozens of content
variables have received at least some empirical attention; this review
focuses mainly on selected message content factors for which the
empirical evidence is relatively more extensive.

Consequence Desirability
One common way of trying to persuade people is by appealing to the
consequences of the advocated action. The general abstract form is “If the
advocated action A is undertaken, then desirable consequence D will
occur.” A good deal of research has addressed questions about the relative
persuasiveness of various forms of consequence-based arguments. Of
specific interest here is the comparison of appeals invoking more and less
desirable consequences of compliance with the advocated view. Abstractly
put, the experimental contrast is between arguments of the form “If
advocated action A is undertaken, then very desirable consequence D1 will
occur” and “If advocated action A is undertaken, then slightly desirable
consequence D2 will occur.”

Now one might—with some reason—think that this research question is
hardly worth investigating, because the answer is obvious. And, perhaps
understandably, the overt research question has not been expressed quite
this way. Even so, substantial research evidence, collected in other guises,
has accumulated on this matter. For example, many studies have examined
a question of the form “do people who differ with respect to characteristic
X differ in their responsiveness to corresponding kinds of persuasive
appeals?”—where characteristic X is actually a proxy for variations in
what people value.

For example (as discussed in Chapter 3 concerning functional approaches
to attitude), a number of studies have examined how people varying in
self-monitoring (concern about the image one projects to others) differ in
their responsiveness to different kinds of persuasive appeals. Specifically,
high self-monitors are more persuaded by consumer ads that use image-
based appeals than by ads that use product-quality-oriented appeals,
whereas the reverse effect is found for low self-monitors. As suggested in
Chapter 3, this effect can be seen to reflect differences in the values of
high and low self-monitors—high self-monitors value the image-related
attributes of products in ways low self-monitors do not. Naturally, then,
each kind of person is more persuaded by messages invoking
consequences that (from their perspective) are relatively more desirable.

A similar example is offered by research on the individual-difference
variable called “consideration of future consequences” (CFC; Strathman,
Gleicher, Boninger, & Edwards, 1994). CFC refers to differences in the
degree to which people consider longer-term as opposed to shorter-term
behavioral consequences. As one might expect, persons differing in CFC
respond differently to persuasive messages depending on whether the
message’s arguments emphasize immediate consequences (more
persuasive for those low in CFC) or long-term consequences (more
persuasive for those high in CFC; see, e.g., Orbell & Hagger, 2006; Orbell
& Kyriakaki, 2008).

Yet another example is provided by research on cultural differences in
“individualism-collectivism,” which refers to the degree to which
individualist values (e.g., independence) are prioritized as opposed to
collectivist values (e.g., interdependence; Hofstede, 2001). Persons from
cultures differing in individualism-collectivism respond differently to
persuasive messages depending on whether the message’s appeals
emphasize individualistic or collectivistic outcomes. For example,
advertisements for consumer goods have been more persuasive for
American audiences when the ads emphasize individualistic outcomes
(“this watch will help you stand out”) rather than collectivistic ones (“this
watch will help you fit in”), with the reverse being true for Chinese
audiences (e.g., Aaker & Schmitt, 2001; for a review, see Hornikx &
O’Keefe, 2009). Plainly, this effect reflects underlying differences in the
perceived desirability of various product attributes.

Several other lines of research similarly converge on the unsurprising
general conclusion: Consequence-based appeals are more persuasive when
they invoke outcomes of the advocated action that are (taken by the
audience to be) relatively more desirable than when they invoke outcomes
that are not valued so highly (for a review, see O’Keefe, 2013a).18

As obvious as this point might be, it nevertheless underscores the
importance of fashioning appeals that are tailored to the audience’s wants.
And it is an empirical question which appeals will be most persuasive for a
given audience. For instance, one might think that skin protection
behaviors (sunscreen use, tanning avoidance, and so on) might be
successfully influenced by invoking health-related consequences such as
skin cancer—but at least some people are more persuaded by appeals
concerning appearance-related consequences (J. L. Jones & Leary, 1994;
Thomas et al., 2011). Similarly, HPV (human papillomavirus) vaccine can
prevent both cancer and sexually transmitted infections, but these two
consequences may not be equally persuasive for all recipients (Krieger &
Sarge, 2013; Leader, Weiner, Kelly, Hornik, & Cappella, 2009). (See,
relatedly, Gollust, Niederdeppe, & Barry, 2013; Segar, Updegraff,
Zikmund-Fisher, & Richardson, 2012.)19

One-Sided Versus Two-Sided Messages
In many circumstances, a persuader will be aware of potential arguments
supporting an opposing view. How should a persuader handle such
arguments? One possibility is simply to ignore the opposing arguments, to
not mention or acknowledge them in any way; the persuader would offer
only constructive (supporting) arguments. Alternatively, the persuader
might not ignore opposing arguments but rather discuss them explicitly
(while also presenting supporting arguments). This basic contrast—
ignoring or discussing opposing arguments—has commonly been captured
in persuasion research as the difference between a one-sided message
(which presents supporting arguments but ignores opposing ones) and a
two-sided message (which discusses both supporting and opposing
arguments). (The broadest review is O’Keefe, 1999a; for other reviews and
discussions, see Allen, 1991, 1993, 1998; Crowley & Hoyer, 1994; Eisend,
2006, 2007; O’Keefe, 1993; Pechmann, 1990.)20

There appears to be no general difference in persuasiveness between one-
sided and two-sided messages. Previous discussions of sidedness effects
have mentioned many possible moderating factors that might influence the
relative persuasiveness of one-sided and two-sided messages, including the
audience’s level of education, the availability of counterarguments to the
audience (sometimes represented as the audience’s familiarity with the
topic), and the audience’s initial opinion on the topic (see, e.g., Chu, 1967;
R. G. Hass & Linder, 1972). None of these factors, however, appears to
moderate this effect; for example, the relative persuasiveness of one- and
two-sided messages does not vary as a function of whether the audience
initially favors or opposes the advocated view (O’Keefe, 1999a).

A more complex picture emerges, however, if one distinguishes two
varieties of two-sided messages. A refutational two-sided message attempts
to refute opposing arguments; this might involve criticizing the reasoning
of an opposing argument, offering evidence to undermine an opposing
claim, challenging the relevance of an opposing argument, and so forth. A
nonrefutational two-sided message acknowledges opposing considerations
but does not attempt to refute them directly; the message might suggest
that the supporting arguments outweigh the opposing ones, but it does not
directly attack the opposing considerations. One way of expressing the
difference is to say that refutational two-sided messages characteristically
attempt to undermine opposing arguments (by refuting them), whereas
nonrefutational two-sided messages characteristically attempt to
overwhelm opposing arguments (by deploying supportive ones).

These two types of two-sided message have dramatically different
persuasive effects when compared with one-sided messages. Specifically,
refutational two-sided messages are dependably more persuasive than one-
sided messages; nonrefutational two-sided messages, on the other hand,
are slightly less persuasive than their one-sided counterparts.21 That is,
acknowledging opposing arguments without refuting them generally
makes messages somewhat less persuasive (compared with ignoring
opposing arguments), whereas refuting opposing arguments enhances
persuasiveness.22 As with the overall contrast between one-sided and two-
sided messages, this pattern of effects seems largely unaffected by
commonly proposed moderator factors such as audience education or
initial attitude, although the research evidence is often sketchy (O’Keefe, 1999a).

However, there is almost certainly a limiting condition on the apparent
persuasive advantage of refutational two-sided messages. Examination of
the messages in these studies suggests that the refuted counterarguments
were ones that might well have been entertained by the audience as
potentially significant objections. One ought not necessarily expect the
same results if implausible or trivial objections were to be refuted. Notice
that, at least when designed in recognition of this limiting condition, the
use of refutational two-sided messages represents a form of audience
adaptation. Two-sided messages that attempt to undermine the audience’s
active objections to the recommended action are tailored to the audience’s
psychological state (the audience’s potential counterarguments). Perhaps it
is only to be expected that meeting such objections head-on will be more
persuasive than ignoring them.

As an example of the potential importance of addressing active objections,
consider smoking cessation. Smokers often express concern that if they
quit smoking, they will gain weight. One possible way of defusing this
objection would be to couple a smoking cessation intervention with a
weight control intervention. And, indeed, such combined treatments
produce significantly greater abstinence, at least in the short term, than do
smoking cessation treatments alone (for a review, see Spring et al.,
2009).23

One additional complexity is worth noting: Nonrefutational two-sided
messages appear to have different effects in consumer advertising
messages than in other persuasive messages (messages concerning
political questions, public policy issues, and the like). In nonadvertising
messages, nonrefutational two-sided messages are dependably less
persuasive than their one-sided counterparts, but in consumer
advertisements, nonrefutational two-sided messages are neither more nor
less persuasive than one-sided advertisements (O’Keefe, 1999a).24 Put
differently: Nonrefutational two-sided advertising messages do not seem to
suffer the same negative persuasion consequences that parallel
nonadvertising messages do.

Some light might be shed on this by considering the effects of
nonrefutational two-sided messages on credibility perceptions. The
credibility of consumer advertising is boosted by the use of nonrefutational
two-sided messages, but nonrefutational two-sided messages on other
persuasive topics do not produce the same enhancement of credibility
(O’Keefe, 1999a). It may be that receivers’ initial skepticism about
consumer advertising leads receivers to expect that advertisers will provide
a one-sided depiction of the advertised product—and thus when an
advertisement freely acknowledges (and does not refute) opposing
considerations, the advertiser’s credibility is enhanced (akin to the
credibility enhancement effects obtained when communicators advocate
positions opposed to their apparent self-interest, as discussed in Chapter
8).

This enhanced credibility for advertisements, in turn, could have varying
effects. It might boost the believability of both the supportive arguments
and the acknowledged counterarguments (with these effects canceling each
other out), it might enhance the counterarguments more than the
supportive arguments (making the ad less persuasive), or it might enhance
the supportive arguments more than the counterarguments (making the ad
more persuasive). Across a number of nonrefutational two-sided ads, then,
one might expect to find no dependable overall difference in
persuasiveness—which is precisely the observed effect.25 In sum,
persuaders are best advised to meet opposing arguments head-on, by
refuting them, rather than ignoring or (worse still) merely mentioning such
counterarguments—save, perhaps, in the case of consumer advertising,
where nonrefutational acknowledgment of opposing arguments promises
to be about as persuasive as ignoring opposing considerations.

Gain-Loss Framing
One especially well-studied persuasive message variation is gain-loss
message framing.26 A gain-framed message emphasizes the advantages of
undertaking the advocated action; a loss-framed message emphasizes the
disadvantages of not engaging in the advocated action. So, for example, “If
you wear sunscreen you’ll have attractive skin when you’re older” is a
gain-framed appeal, whereas “If you don’t wear sunscreen you’ll have
unattractive skin when you’re older” is a loss-framed appeal.

Overall Effects
The phenomenon of negativity bias provides a reason for expecting that
loss-framed appeals might have a general persuasive advantage over gain-
framed appeals. Negativity bias refers to the greater sensitivity to, and
impact of, negative information compared with equally extreme positive
information (for a review, see Cacioppo, Gardner, & Berntson, 1997). For
example, negative information has a disproportionate impact on
evaluations or decisions compared with otherwise equivalent positive
information (for a review, see Rozin & Royzman, 2001); learning one new
negative thing about a person often has a much larger effect than learning
one new positive thing. The phenomenon of negativity bias naturally
suggests that loss-framed messages, which emphasize the negative
consequences of noncompliance with the recommended action, should be
more persuasive than gain-framed appeals. However, there is no such
general advantage for loss-framed appeals. Gain-framed and loss-framed
appeals do not generally differ in persuasiveness (for a review, see
O’Keefe & Jensen, 2006).27

Disease Prevention Versus Disease Detection
Even though gain-framed and loss-framed appeals do not generally differ
in persuasiveness, some moderating factor might be at work—a factor that
makes gain-framed appeals more advantageous in some circumstances and
loss-framed appeals more advantageous in others (thus yielding no overall
difference). One especially well-studied moderator concerns the nature of
the advocated behavior, specifically whether the advocated action is a
disease detection behavior (such as cancer screening) or a disease
prevention behavior (such as regular flossing). The hypothesis was that
loss-framed messages will be more persuasive than gain-framed messages
for disease detection behaviors, with the reverse pattern expected for
disease prevention behaviors (see, e.g., Salovey, Schneider, & Apanovitch,
2002). This hypothesis was motivated by researchers having noticed what
looked like a pattern in some early framing studies. For example, a loss-
framed appeal was more persuasive than a gain-framed appeal in a study
of breast cancer screening (Meyerowitz & Chaiken, 1987), but gain-
framed appeals were more persuasive than loss-framed appeals in a study
of sunscreen use (Detweiler, Bedell, Salovey, Pronin, & Rothman,
1999).28

The accumulated research evidence, however, indicates that this
hypothesis is also faulty. Concerning disease detection topics, several
reviews have concluded that there is no overall persuasive advantage for
loss-framed appeals (Gallagher & Updegraff, 2012; O’Keefe & Jensen,
2009).29 With respect to disease prevention topics, the evidence is more
complex, but here too there does not appear to be any general persuasive
advantage for gain-framed appeals (Gallagher & Updegraff, 2012;
O’Keefe & Jensen, 2007).30

Other Possible Moderating Factors
In the search for other potential moderators of gain-loss message framing
effects, a number of different possibilities have been explored, including
the temporal proximity of the consequences (Gerend & Cullen, 2008), the
type of supporting evidence (Dardis & Shen, 2008), and the colors used in
the message (Chien, 2011). But the greatest attention has focused on
individual-level variables (see Latimer, Salovey, & Rothman, 2007;
Updegraff & Rothman, 2013), including self-efficacy (van ’t Riet, Ruiter,
Werrij, & de Vries, 2010), attitudinal ambivalence (Broemer, 2002),
perceived threat susceptibility (Gallagher, Updegraff, Rothman, & Sims,
2011), need for cognition (Steward, Schneider, Pizarro, & Salovey, 2003),
mood (Chang, 2007), and so on. However, the evidence is not yet
sufficiently extensive to permit confident conclusions about any such
candidates (see, e.g., Covey, 2014).

Moreover, identifying a genuine moderator of gain-loss message framing
effects can be a challenging task, because isolating the effect of this
message variation is potentially difficult. The difficulty arises from the
nature of the appeal variation. Both gain-framed and loss-framed appeals
are conditional arguments (“if-then” arguments) that depict some
consequence as being a result of some antecedent. Gain- and loss-framed
appeals differ in the antecedent condition: in gain-framed appeals, the
antecedent is compliance with the communicator’s recommended course
of action (“If you wear sunscreen, then …”), whereas in loss-framed
appeals the antecedent is noncompliance (“If you don’t wear sunscreen,
then …”).31 Because the communicative purpose is persuasion, there is
correspondingly a difference in the valence of the claimed consequences:
persuaders naturally argue that compliance produces desirable
consequences and that noncompliance produces undesirable
consequences.32 However, even though the valence of the consequences is
necessarily dictated by the nature of the antecedent, the substance of the
consequences can potentially vary. And here experimenters can face quite
a challenge. If the consequences invoked by the two appeals are
substantively different, then any difference in persuasiveness might reflect
the difference in the consequences, not the difference in the antecedent.

As an example: One suggested individual-level moderating factor is the
recipient’s approach/avoidance motivation (BAS/BIS; Carver & White,
1994). Individuals vary in their general sensitivity to reward (desirable
outcome) or punishment (undesirable outcome) cues, and the hypothesis
has been that approach-oriented individuals will be more persuaded by
gain-framed appeals than by loss-framed appeals, with the reverse pattern
holding for avoidance-oriented individuals (e.g., Jeong et al., 2011;
Latimer, Salovey, & Rothman, 2007). A related motivational difference is
the recipient’s regulatory focus (Higgins, 1998); regulatory focus
variations reflect a broad motivational difference between a promotion
focus, which emphasizes obtaining desirable outcomes, and a prevention
focus, which emphasizes avoiding undesirable outcomes.
Correspondingly, this hypothesis has sometimes been phrased in terms of
regulatory focus: Promotion-oriented individuals should be more
persuaded by gain-framed appeals than by loss-framed appeals, but
prevention-oriented individuals should be more persuaded by loss-framed
appeals than by gain-framed appeals.

But these motivational differences are presumably reflected in differing
evaluations of substantively different consequences; for example,
approach-oriented people will presumably prefer approach-oriented
consequences over avoidance-oriented consequences. And the distinction
between these two kinds of consequences (approach-oriented vs.
avoidance-oriented) is different from the distinction between the two kinds
of antecedents (compliance vs. noncompliance). Thus the conjunction of
gain-loss variations (different kinds of antecedents) and these motivational
variations (different kinds of consequences) yields four possible appeal
types: (1) gain-framed appeals that emphasize avoidance-oriented
consequences (e.g., “if you exercise, you’ll reduce your heart attack risk”),
(2) gain-framed appeals that emphasize approach-oriented consequences
(e.g., “if you exercise, you’ll increase your energy”), (3) loss-framed
appeals that emphasize avoidance-oriented consequences (e.g., “if you
don’t exercise, you won’t reduce your heart attack risk”), and (4) loss-
framed appeals that emphasize approach-oriented consequences (e.g., “if
you don’t exercise, you won’t increase your energy”).

In order to show that approach/avoidance orientation affects the relative
persuasiveness of gain-framed and loss-framed appeals, investigators must
take care to ensure that the gain-loss framing manipulation (compliance vs.
noncompliance) is not confounded with variation in the kinds of
consequences invoked. Consider, for example, a study of messages aimed
at encouraging flossing in which the gain-framed appeal emphasized
having healthy gums (an approach-oriented consequence) and the loss-
framed appeal emphasized avoiding gum disease (an avoidance-oriented
consequence). The results indicated that the former appeal was more
persuasive than the latter for approach-oriented participants, with the
reverse result for avoidance-oriented participants (Sherman, Mann, &
Updegraff, 2006). But, given the confounding of the type of antecedent
and the type of consequence, such results might more plausibly be said to
reflect differences in the consequences invoked (approach vs. avoidance)
than in the antecedent (compliance vs. noncompliance). (For some relevant
evidence, see Chang, 2010, Experiment 2; for discussion, see Cesario,
Corker, & Jelinek, 2013; O’Keefe, 2013a.)

Broadly speaking, individual differences in motivational orientation
(approach vs. avoidance, promotion vs. prevention) map easily onto a
contrast between different kinds of consequences (approach/promotion-
oriented consequences and avoidance/prevention-oriented consequences),
but do not match up so well with a contrast between gain-framed
(compliance-focused) and loss-framed (noncompliance-focused) appeals.
For that reason, one might be justifiably skeptical that individual
motivational-orientation differences will turn out to moderate the effects of
gain-loss message framing variations independent of the kinds of
consequences invoked.33 But this illustrates the potential difficulty of
identifying moderating factors for this message variation—difficulties
arising from the need to distinguish the gain-loss framing variation
(variation in the antecedent of the appeal) from variation in the substantive
consequences invoked.34

Summary
Gain-framed and loss-framed appeals do not differ much in
persuasiveness. Research has not yet identified moderating factors that
yield substantial differences in persuasiveness between gain-framed and
loss-framed appeals—and identifying such factors will be challenging.

Threat Appeals
Threat appeals (also called fear appeals) are messages designed to
encourage the adoption of behaviors aimed at protecting against a potential
threat. Threat appeals have two components. One is material depicting the
threatening event or consequences; the other is material describing the
recommended protective action. So, for example, driver education
programs may show films depicting gruesome traffic accidents (in an
effort to reduce dangerous driving practices such as drinking and driving),
antismoking messages may display the horrors of lung cancer (so as to
discourage smoking initiation), and dental hygiene messages may
emphasize the ravages of gum disease (in an effort to encourage regular
flossing).

Protection Motivation Theory


Because threat appeals concern specifically protective behaviors, an
analysis of the factors underlying such behaviors can be useful. Protection
motivation theory (PMT; Rogers & Prentice-Dunn, 1997) identifies two
broad processes that influence the adoption of protective behaviors,
namely, threat appraisal and coping appraisal (appraisal of the
recommended action). That is, whether a person adopts a given protective
behavior will be a function of the person’s assessment of the threat and the
person’s assessment of the suggested way of coping with the threat (the
advocated action). Each of these two elements, however, is further
unpacked by PMT.

Threat appraisal is said to depend on perceived threat severity (the perceived severity of the problem) and perceived threat vulnerability
(one’s perception of the likelihood that one will encounter or be
susceptible to the threat); as persons perceive the threat to be more severe
and as they perceive themselves to be more vulnerable to the threat,
persons are expected to have higher protection motivation. Coping
appraisal is said to depend on perceived response efficacy (perceived
recommendation effectiveness, the degree to which the recommended
action is perceived to be effective in dealing with the problem) and on
perceived self-efficacy (one’s perceived ability to adopt or perform the
protective action); as the perceived efficaciousness of the response
increases and as perceived ability to perform the action increases, persons
are expected to have greater protection motivation.35

For example, imagine someone contemplating adopting an exercise program to prevent heart disease who thinks that heart disease is not all
that serious a problem (low perceived threat severity) and, in any case,
perceives herself to be relatively unlikely to have heart disease because no
one in her family has had it (low perceived threat vulnerability); moreover, she is not convinced that exercise will really prevent heart disease (low
perceived recommendation effectiveness), and she does not think that she
has the discipline to stick with an exercise program (low perceived self-
efficacy). Such a person presumably will have relatively low protection
motivation, as reflected in corresponding actions (namely, not exercising)
and intentions (not intending to exercise).

A number of studies have indicated that protective intentions and behaviors are indeed affected by these four underlying factors (for a
review, see Milne, Sheeran, & Orbell, 2000).36 That is, PMT appears to
have identified four influences on protective intentions and actions—and
thereby also identified four possible foci for persuasive messages
concerning protective behaviors. Such messages might focus on the
severity of the threat, on the recipient’s vulnerability to the threat, on the
effectiveness of the advocated action, or on the recipient’s ability to adopt
that action. (Of course these are not mutually exclusive.) And, plainly, this
framework provides a basis for adapting (tailoring) messages to recipients.
A person who acknowledges that a threat has really bad consequences
(high threat severity) but thinks it won’t happen to them (low perceived
vulnerability) presumably should be sent messages emphasizing
susceptibility, whereas a person who acknowledges their vulnerability but
doesn’t think the threat’s consequences are all that terrible presumably
should be sent messages focused on threat severity.

Considerable research evidence has shown that persuasive messages can affect each of those four factors. The relevant experimental designs
involve manipulating some message feature in an effort to influence the
theoretically important mediating state. For instance, to influence
perceived threat severity, participants would receive either a message
suggesting that the consequences are extremely negative or one suggesting
that they are minor; to influence perceived recommendation effectiveness,
the message either would depict the recommended action as highly
effective in dealing with the threat or would describe it as an inconsistent
or unreliable means of coping with the threat; and so on. (For some
examples, see Brouwers & Sorrentino, 1993; Smerecnik & Ruiter,
2010.)37 Such experimental message variations have been found to have
the anticipated effects on the corresponding mediating states (e.g.,
messages varying in their depictions of recommendation effectiveness
produce corresponding variations in perceived recommendation
effectiveness; for reviews, see Milne, Sheeran, & Orbell, 2000; Witte &
Allen, 2000; see also Mongeau, 1998, p. 62, note 4) and have been found to have parallel (but weaker) effects on relevant persuasive outcome
variables such as attitudes, intentions, and behavior (for reviews, see de
Hoog, Stroebe, & de Wit, 2007; Floyd, Prentice-Dunn, & Rogers, 2000;
Witte & Allen, 2000).38

Threat Appeals, Fear Arousal, and Persuasion


Threat appeals plainly have the potential to arouse fear or anxiety about
the threatening events. For that reason, much research attention has
focused on variations in how threat appeals depict the negative
consequences associated with the threat—and on how such message
variations are associated with fear arousal and persuasion. The specific
contrast of interest is between a message containing explicit, intense, vivid
depictions of those negative consequences (a strong threat appeal) and a
message with a tamer, toned-down version (a weak threat appeal).39

Research concerning the evocation of fear by threat appeals is extensive and complex, but an overview of the key findings can be expressed in five
broad conclusions. First, messages with more intense content do generally
arouse greater fear. To be sure, the relationship between threat appeal
message variations and aroused fear is not perfect (expressed as a
correlation, estimates of the mean effect range between .25 and .40),
suggesting that influencing the audience’s level of fear is not necessarily
something easily accomplished (for reviews, see Boster & Mongeau, 1984;
de Hoog et al., 2007; Mongeau, 1998; Witte & Allen, 2000).40 On
reflection, of course, this may not be too surprising. A persuader may be
mistaken about what will be fearful, and what one person finds extremely
fearful may be only mildly worrisome to another person (for examples and
discussion, see Botta, Dunker, Fenson-Hood, Maltarich, & McDonald,
2008; Ditto, Druley, Moore, Danks, & Smucker, 1996; Henley &
Donovan, 2003; Murray-Johnson et al., 2001). Still, in general, stronger
threat appeal contents do arouse greater fear.41

Second, threat messages with more intense content also are more
persuasive than those with less intense content, although this effect is
smaller than the effect of threat appeal variations on aroused fear
(expressed as correlations, the effects average between .10 and .20; for
reviews, see Boster & Mongeau, 1984; Mongeau, 1998; Sutton, 1982;
Witte & Allen, 2000). This weaker effect on persuasive outcomes is
consistent with the idea that fear (the aroused emotional state) mediates the effect of threat appeal message manipulations on persuasive outcomes.
That is, the invited image is that varying the message contents produces
variations in aroused fear, which in turn are related to changes in attitudes,
intentions, and actions (and thus the relationship between message
manipulations and persuasive outcomes would be expected to be weaker
than the relationship between message manipulations and fear).42

Third (and a natural corollary of the first two), messages that successfully
arouse greater fear are also generally more persuasive. That is, in studies
with messages that have been shown to arouse dependably different
amounts of fear, the messages that arouse greater fear are more persuasive
than the messages that arouse lesser fear (for reviews, see Sutton, 1982;
Witte & Allen, 2000).43

Fourth, these relationships are roughly linear. That is, generally speaking,
as message content becomes more intense, greater fear is aroused and
greater persuasion occurs. It has sometimes been thought that very intense
message materials will (compared with less intense materials) produce less
persuasion (because recipients tune out the message and so do not come to
accept the recommendations). That is, the thought has been that a
persuader might go “too far” in threat appeal intensity, producing a
curvilinear relationship between intensity and persuasion (and specifically
an inverted-U-shaped relationship, such that the highest levels of message
intensity are associated with relatively lower levels of persuasion). But the
evidence in hand gives little indication of any such curvilinear effects (see
Boster & Mongeau, 1984; Mongeau, 1998; Sutton, 1992; Witte & Allen,
2000).44

Fifth, there are at least two conditions under which more intense threat
appeals are unlikely to be more persuasive than less intense ones. One is
circumstances in which the recipients’ fear level is already relatively high.
If message recipients are already experiencing sufficiently high levels of
concern, it may not be necessary—or even possible—to increase it further.
In such circumstances, messages might more appropriately focus on
barriers to adopting the recommended action—perhaps receivers’ concerns
about whether the action is effective or their doubts about their ability to
perform the action. For example, Earl and Albarracín’s (2007) review of
HIV prevention interventions found that HIV counseling was more
effective than fear-inducing arguments in encouraging condom use,
arguably because HIV-related anxiety was already relatively high, and
counseling provided information about how to address such concerns. (See, relatedly, Kessels & Ruiter, 2012; Muthusamy, Levine, & Weber,
2009.)

The second is circumstances in which the message recipients do not have a sufficiently positive assessment of the recommended action (e.g., when the
advocated action is perceived as ineffective or difficult to perform).
Increasing the intensity of the depiction of the threat is unlikely to be
helpful when the key barrier to recommendation acceptance is a negative
appraisal of the recommended action. In the absence of a sufficiently
positive assessment of the advocated behavior, more intense threat appeals
are unlikely to be more persuasive than less intense appeals.
Correspondingly, greater fear arousal is likely to be associated with greater
persuasion only when a workable, effective protective action is (perceived
to be) available. (For a review, see Peters, Ruiter, & Kok, 2012.)45

One way of understanding these two limiting conditions is to see that a threat appeal has what amounts to a problem-solution message format: It
identifies a potential (threatening, fear-inducing) problem and offers a
solution (the recommended action). If people are already convinced of the
problem (already experiencing high fear) or if they do not see a good
solution available (insufficiently positive assessment of the recommended
action), then it doesn’t much matter whether the message has more or less
intense depictions of the problem.

The Extended Parallel Process Model


The extended parallel process model (EPPM) provides a useful framework
for organizing these findings concerning threat appeals, fear, and
persuasive effects (Witte, 1992, 1998). Like protection motivation theory
(PMT), the EPPM is concerned with the factors influencing protective
actions. Although the two models have different terminology, they
incorporate the same key components. EPPM identifies “perceived threat”
(threat appraisal) and “perceived efficacy” (coping appraisal, that is,
recommendation appraisal) as the immediate influences on protective
behaviors and offers the same elements underlying each of those:
Perceived threat is influenced by perceived threat severity and perceived
threat susceptibility (vulnerability), and perceived efficacy is influenced by
response efficacy (i.e., perceived recommendation effectiveness) and
perceived self-efficacy.

But the EPPM additionally identifies two different (parallel) processes that can be activated by threat appeals. People may want to control the
apparent danger posed by the threat (danger control processes), and they
may want to control their feelings of fear (fear control processes). The
activation of these processes varies depending on variations in the
combination of perceived threat and perceived efficacy, as follows.46

In a circumstance in which message recipients have high perceived threat (high perceived threat severity and susceptibility) and high perceived
efficacy (high perceived recommendation effectiveness and high perceived
self-efficacy), then danger control processes will be engaged. People will
understand the danger posed by the threat and will see themselves as in a
position to deal with that danger by adopting the recommended action.

By contrast, in a circumstance in which message recipients have high perceived threat but low perceived efficacy, then fear control processes
will be engaged. People will be facing a significant threat, which arouses
fear, but they do not believe they have a suitable way to control that threat
—and hence people look for ways to control their fear. For example,
people might avoid thinking about the threat (defensive avoidance), or
they might reassess their threat perceptions so as to diminish their feelings
of fear (e.g., they might decide that the threat really isn’t all that severe or
that they’re not really vulnerable to it).47

Where perceived threat is low (because the threat is seen as trivial or because people think themselves invulnerable to it), then people are
unlikely to adopt the protective action. After all, there’s no apparent threat.
So under conditions of low perceived threat, variations in perceived
efficacy will not affect behavioral adoption.

Briefly, then, from the perspective of the EPPM, the role of threat-related
perceptions (perceived severity and vulnerability) is contingent on
efficacy-related perceptions (perceived recommendation efficacy and self-
efficacy). High perceived threat alone is insufficient to motivate protective
action; only the combination of high perceived threat and high perceived
efficacy activates the danger control processes that encourage protective
behavior.

Correspondingly, the effect of threat-related message materials is expected to vary depending on the recipient’s efficacy-related perceptions. In
particular, the EPPM emphasizes that efforts to increase threat perceptions
may be unhelpful or unwise in circumstances in which efficacy perceptions are not sufficiently high. Thus the EPPM provides a more
nuanced basis for message design than simply suggesting that persuaders
focus on whichever of the four perceptual determinants of protective
action (threat severity, threat vulnerability, recommendation effectiveness,
self-efficacy) needs attention. (For some examples of EPPM applications,
see Campo, Askelson, Carter, & Losch, 2012; Kotowski, Smith,
Johnstone, & Pritt, 2011; Krieger & Sarge, 2013; Murray-Johnson et al.,
2004. For general discussions of EPPM-based message design, see Basil &
Witte, 2012; Cho & Witte, 2005.)

Other theoretical frameworks for understanding threat appeals have also offered distinctive message design recommendations (see, e.g., de Hoog,
Stroebe, & de Wit, 2007, 2008; Rimal, Bose, Brown, Mkandawire, &
Folda, 2009; Rimal & Juon, 2010; Rimal & Real, 2003; Ruiter & Kok,
2012), although the EPPM is currently the most widely applied
framework. At the same time, for all that the EPPM provides an attractive
housing for many threat-appeal research findings, at least some of the
model’s more detailed claims have not received much research attention—
and of those that have, the research evidence may not be univocal (for a
useful review, see Popova, 2012; see also Mongeau, 2013, pp. 191–192;
cf. Maloney, Lapinski, & Witte, 2011).48

Summary
Although some aspects of threat appeals have come into focus, many
unanswered questions remain, including exactly how threat-related
perceptions and efficacy-related perceptions combine to influence
protective intentions and actions (see, e.g., Goei et al., 2010), the
appropriate structure of the threat and efficacy components in threat
appeals (e.g., Carcioppolo et al., 2013; Wong & Cappella, 2009), whether
the within-individual dynamics of fear over time conform to theoretical
expectations (e.g., Algie & Rossiter, 2010; Dillard & Anderson, 2004; for
a useful discussion, see Shen & Dillard, 2014), and the potential role of
individual differences in reactions to threat appeals (e.g., Nestler & Egloff,
2012; Ruiter, Verplanken, De Cremer, & Kok, 2004; Schlehofer &
Thompson, 2011; van ’t Riet, Ruiter, & de Vries, 2012; for a general
treatment, see Cho & Witte, 2004). In short, there is much more to be
learned about threat appeals. (For some general discussions of threat
appeal research, see Dilliplane, 2010; Mongeau, 2013; Ruiter, Abraham, &
Kok, 2001; Ruiter, Kessels, Peters, & Kok, 2014; Yzer, Southwell, &
Stephenson, 2013.)

Beyond Fear Arousal
Fear is perhaps the best studied of the various emotions that persuasive
appeals might try to engage, although there has also been some work on
other emotions such as anger (Moons & Mackie, 2007; Quick, Bates, &
Quinlan, 2009), disgust (Leshner, Bolls, & Thomas, 2009; Nabi, 1998;
Porzig-Drummond, Stevenson, Case, & Oaten, 2009), and especially guilt
(e.g., Cotte, Coulter, & Moore, 2005; Hibbert, Smith, Davies, & Ireland,
2007; Turner & Underhill, 2012; for some reviews, see O’Keefe, 2000,
2002a). These lines of work have a common underlying idea, namely, that
one avenue to persuasion involves the arousal of an emotional state (such
as fear or guilt), with the advocated action providing a means for the
receiver to deal with those aroused feelings.49 (For some general
treatments of emotions and persuasion, see Dillard & Nabi, 2006; Dillard
& Seo, 2013; Nabi, 2002, 2007; Turner, 2012.)

There is another way, however, in which emotions might figure in persuasion, namely, through the anticipation of emotional states.
Anticipated emotions plainly shape intentions and actions (e.g., people
often avoid actions that they think would make them feel guilty; Birkimer
et al., 1993), and such anticipations can be influenced (e.g., O’Carroll,
Dryden, Hamilton-Barclay, & Ferguson, 2011). This research is discussed
more extensively in Chapter 6 (concerning reasoned action theory) in the
context of considering how inclusion of anticipated affective states might
enhance the predictability of intentions. The point to be noticed here is
simply that emotional considerations might play a role in persuasive
messages either through the arousal of emotions or through the invocation
of expected emotional states.

Sequential Request Strategies


Substantial research has been conducted concerning the effectiveness of
two sequential request influence strategies: the foot-in-the-door (FITD)
strategy and the door-in-the-face (DITF) strategy. In each strategy, the
request of primary interest to the communicator (the target request) is
preceded by some other request; the question is how compliance with the
target request is affected by the presence of the preceding request.50

Foot-in-the-Door

The Strategy
The FITD strategy consists of initially making a small request of the
receiver, which the receiver grants, and then making the (larger) target
request. The hope is that having gotten one’s foot in the door, the second
(target) request will be looked on more favorably by the receiver. The
question thus is whether receivers will be more likely to grant a second
request if they have already granted an initial, smaller request.51

The Research Evidence


The research evidence suggests that this FITD strategy can enhance
compliance with the second (target) request. For example, in Freedman
and Fraser’s (1966, Experiment 2) FITD condition, homeowners were
initially approached by a member of the “Community Committee for
Traffic Safety” or the “Keep California Beautiful Committee.” The
requester either asked that the receiver display a small sign in their front
window (“Be a Safe Driver” or “Keep California Beautiful”) or asked that
the person sign a petition supporting appropriate legislation (legislation
that would promote either safer driving or keeping California beautiful).
Two weeks later, a different requester (from “Citizens for Safe Driving”)
approached the receiver, asking if the receiver would be willing to have a
large, unattractive “Drive Carefully” sign installed in the front yard for a
week. In the control condition, in which receivers heard only the large
request, fewer than 20% agreed to put the sign in the yard. But in the FITD
conditions, more than 55% agreed.52 This effect was obtained regardless of whether the same topic was involved in the two requests (safe driving or beautification) and regardless of whether the same action was involved (displaying a sign or signing a petition): Even those who initially signed
the “Keep California Beautiful” petition were more likely to agree to
display the large safe driving yard sign. As these results suggest, the FITD
strategy can be quite successful.

Several factors that influence the strategy’s effectiveness have been identified. First, if the FITD strategy is to be successful, there must be no
obvious external justification for complying with the initial request (for
reviews, see Burger, 1999; Dillard, Hunter, & Burgoon, 1984). For
example, if receivers are given some financial reward in exchange for
complying with the first request, then the FITD strategy is not very
successful. Second, the larger the first request (presuming it is agreed to by the receiver), the more successful the FITD strategy (see the review by
Fern, Monroe, & Avila, 1986). Third, the FITD strategy appears to be
more successful if the receiver actually performs the action requested in
the initial request, as opposed to simply agreeing to perform the action (for
reviews, see Beaman, Cole, Preston, Klentz, & Steblay, 1983; Burger,
1999; Fern et al., 1986; cf. Dillard et al., 1984). Fourth, the FITD strategy
is more effective when the requests are prosocial requests (that is, requests
from institutions that might provide some benefit to the community at
large, such as civic or environmental groups) as opposed to nonprosocial
requests (from profit-seeking organizations such as marketing firms;
Dillard et al., 1984).53

Notably, several factors apparently do not affect the success of the FITD
strategy. The time interval between the two requests does not make a
difference (Beaman et al., 1983; Burger, 1999; Dillard et al., 1984; Fern et
al., 1986); for example, Cann, Sherman, and Elkes (1975) obtained
equivalent FITD effects with no delay between the two requests and with a
delay of 7–10 days. Similarly, it does not appear to matter whether the
same person makes the two requests (Fern et al., 1986).54

Explaining FITD Effects


The most widely accepted explanation for FITD effects is based on self-
perception processes (for a brief statement, see Freedman & Fraser, 1966;
a more extensive discussion is provided by DeJong, 1979). Briefly, the
explanation is that compliance with the first request leads receivers to
make inferences about themselves; in particular, initial compliance is taken
to enhance receivers’ conceptions of their helpfulness, cooperativeness,
and the like. These enhanced self-perceptions, in turn, are thought to
increase the probability of the receiver’s agreeing to the second request.

The observed moderating factors are consistent with this explanation. For
example, the presence of an external justification for initial compliance
obviously undermines enhancement of the relevant self-perceptions: If one
is paid money in exchange for agreeing to the initial request, it is more
difficult to conclude that one is especially cooperative and helpful just
because one agreed. Similarly, the larger the request initially agreed to, the
more one’s self-perceptions of helpfulness and cooperativeness should be
enhanced (“If I’m going along with this big request, without any obvious
external justification, then I must really be a pretty nice person, the kind of
person who does this sort of thing”). And it’s easier to think of oneself as a helpful, socially minded person when one agrees to requests from civic
groups (as opposed to marketing firms) or when one actually performs the
requested action (as opposed to merely agreeing to perform it).55

Only a few studies have included direct assessments of participants’ self-perceptions of helpfulness, but recent results have been supportive.
example, Burger and Caldwell (2003) found that first-request compliance
produced the expected changes in self-perceived helpfulness—and such
self-perceptions were in turn related to second-request compliance. (For
similar results, see Burger & Guadagno, 2003; cf. Gorassini & Olson,
1995; Rittle, 1981. For some discussion, see Cialdini & Goldstein, 2004,
pp. 602–604.) So the direct evidence to date is slim but encouraging.

As attractive as the self-perception account is, there is one nagging concern. As several commentators have remarked, it seems implausible to
suppose that self-perceptions of helpfulness would be deeply affected by
compliance with small requests of the sort used in FITD research (Rittle,
1981, p. 435; Gorassini & Olson, 1995, p. 102). Presumably, persons’
beliefs about their helpfulness rest on some large number of relevant
experiences, and it seems unlikely that such beliefs would be significantly
altered by consenting to a single small request. However, the self-
perception explanation’s ability to accommodate the observed moderator-
variable effects, and the emerging direct evidence of the role played by
self-perceptions in FITD effects, recommend it as the best available
explanation.56

Door-in-the-Face

The Strategy
The DITF strategy turns the FITD strategy on its head. The DITF strategy
consists of initially making a large request, which the receiver turns down,
and then making the smaller target request. The question is whether
initially having (metaphorically) closed the door in the requester’s face
will enhance the receiver’s compliance with the second request.

The Research Evidence


Studies of the DITF strategy have found that it can indeed enhance
compliance. That is, receivers will at least sometimes be more likely to agree to a second smaller request if they have initially turned down a
larger first request. For example, in a study reported by Cialdini et al.
(1975, Experiment 1), individuals on campus sidewalks were approached
by a student who indicated that he or she represented the county youth
counseling program. In the DITF condition, persons were initially asked to
volunteer to spend 2 hours a week for a minimum of 2 years as an unpaid
counselor at a local juvenile detention center; no one agreed to this
request. The requester then asked if the person would volunteer to
chaperone a group of juveniles from the detention center on a 2-hour trip
to the zoo. Among those in the control condition, who received only the
target request, only 17% agreed to chaperone the zoo trip; but among those
in the DITF condition, who initially turned down the large request, 50%
agreed.57

The research evidence also suggests that various factors moderate the
success of the DITF strategy. DITF effects are larger if the two requests
are made by the same person as opposed to by different persons (for
relevant reviews, see Feeley, Anker, & Aloe, 2012; Fern et al., 1986;
O’Keefe & Hale, 1998, 2001), if the two requests have the same
beneficiary as opposed to benefiting different persons (Feeley et al., 2012;
O’Keefe & Hale, 1998, 2001), if there is no delay between the requests
(Dillard et al., 1984; Feeley et al., 2012; Fern et al., 1986; O’Keefe &
Hale, 1998, 2001), and if the requests come from prosocial rather than
nonprosocial organizations (Dillard et al., 1984; Feeley et al., 2012;
O’Keefe & Hale, 1998, 2001).58

Explaining DITF Effects


Several alternative explanations of DITF effects have been offered. One is
the reciprocal concessions explanation (see Cialdini et al., 1975). This
explanation proposes that the successive requests make the situation
appear to be one involving bargaining or negotiation—that is, a situation in
which a concession by one side is supposed to be reciprocated by the
other. The smaller second request represents a concession by the requester
—and so the receiver reciprocates (“Okay, you gave in a little bit by
making a smaller request, so I’ll also make a concession and agree with
that request”). Some of the observed moderator variable effects are nicely
explained by this account. For example, given this analysis, it makes sense
that DITF effects should be smaller if different persons make the requests;
if different persons make the requests, then no concession has been made
(and hence there is no pressure to reciprocate a concession).

But for several reasons, one might doubt whether the reciprocal
concessions explanation is entirely satisfactory. First, some moderator
variable effects are not so obviously accommodated by the explanation.
For example, it is not clear why the strategy should work better for
prosocial requests than for nonprosocial requests.

Second, several meta-analytic reviews have found that DITF effects are
not influenced by the size of the concession made (Fern et al., 1986;
O’Keefe & Hale, 1998, 2001), and this seems inconsistent with the
reciprocal concessions account. The reciprocal concessions account
appears to predict that larger concessions will make the DITF strategy
more effective (by putting greater pressure on the receiver), and hence the
failure to find such an effect seemingly indicates some weakness in the
explanation.59

Third, DITF effects do not appear to be influenced by emphasizing or deemphasizing the concession. For example, stressing that the second
request represents a concession does not enhance the strategy’s
effectiveness, and deemphasizing the fact of a concession does not weaken
the strategy (e.g., Goldman, McVeigh, & Richterkessing, 1984; for a
review, see O’Keefe, 1999b).

A second explanation suggests that guilt arousal underlies DITF effects. The general idea is that first-request refusal potentially represents a
violation of one’s own standards for socially responsible conduct, which
gives rise to feelings of guilt. Those guilt feelings then motivate target-
request compliance (for discussion of such explanations, see O’Keefe &
Figgé, 1999; Tusing & Dillard, 2000).

A guilt-based account appears capable of accommodating many of the observed moderator variable effects. For example, declining prosocial
requests probably generates greater guilt than does declining nonprosocial
requests, thus making the strategy more effective for prosocial
organizations. Similarly, because guilt feelings presumably dissipate over
time, delaying the second request will diminish the strategy’s
effectiveness.

However, the guilt-based account does not provide a good explanation of why the DITF strategy is more effective when the same person makes the
two requests (see Cialdini & Goldstein, 2004, p. 601). One might think that feelings of guilt would be better reduced by “making it up to” the
requester (as opposed to agreeing to a request from someone else)—but
that’s not the way guilt works. There is considerable evidence that guilt-
reduction behaviors do not necessarily have to involve making amends to
the victim of the guilt-inducing behavior. For example, when people are
feeling guilty about having committed a transgression (e.g., telling a lie)
that harmed another person, they are more likely to comply with a
subsequent request (than are people in a no-transgression control
condition)—but this effect is the same no matter whether the request
comes from the injured party or someone else (for a review, see O’Keefe,
2000). The implication of this finding is that DITF compliance cannot be
explained by guilt arousal alone; if DITF compliance arose purely from
guilt, then the strategy’s effectiveness would not be influenced by whether
the same person made both requests.

So it may be that DITF effects arise from a combination of reciprocity-
based processes and guilt-based processes. The reciprocity-based
explanation can explain the effect of having the requests come from the
same person, and the guilt-based account can accommodate other observed
moderating factors. Guilt-based and reciprocity-based motivations can
presumably operate simultaneously, and the evidence in hand points to just
such a combination.

Conclusion
Researchers have investigated a large number of message characteristics as
possible influences on persuasive effectiveness. These message factors are
varied, ranging from the details of message components (the phrasing of
the message’s conclusion) to the sequencing of multiple messages (as in
the FITD and DITF strategies). Indeed, this discussion can do no more
than provide a sampling of the message features that have been studied.
(For other general discussions of persuasive message variations, see
Perloff, 2014; Pratkanis, 2007; Shen & Bigsby, 2013; Stiff & Mongeau,
2003.)

For Review
1. What does the research evidence suggest about the relative
persuasiveness of stating the message’s conclusion explicitly as
opposed to omitting the conclusion (leaving the conclusion implicit)?
Does this difference vary depending on the audience’s educational
level? Does it vary depending on the audience’s initial favorability
toward the advocated view? Describe a possible explanation for the
observed effect.
2. What does the research evidence suggest about the relative
persuasiveness of providing a general (as opposed to a more specific)
description of the advocated action? Describe two possible
explanations for the observed effect.
3. What is a narrative? Explain why studying the role of narratives in
persuasion can be challenging. Can narratives be more persuasive
than nonnarrative messages? Are narratives generally more
persuasive than nonnarrative messages? Identify two factors that
influence the persuasiveness of narratives. What is character
identification? How does character identification influence narrative
persuasiveness? What is transportation? How does transportation
influence narrative persuasiveness? What is entertainment-education?
Describe two ways in which entertainment-education programs can
produce persuasive effects.
4. What is a prompt? Give examples. Can prompts influence behavior?
Identify two necessary conditions for prompts to be effective in
influencing behavior. Why is an existing positive attitude such a
condition? Why is sufficiently high perceived behavioral control
(PBC, self-efficacy) such a condition?
5. What is a consequence-based argument? How do variations in the
perceived desirability of the consequences affect the persuasiveness
of such arguments? Give examples.
6. What is a one-sided message? A two-sided message? Which is more
persuasive? Distinguish two varieties of two-sided messages. What is
a refutational two-sided message? What is a nonrefutational two-
sided message? Comparing one-sided messages and refutational two-
sided messages, which generally is more persuasive? Identify an
implicit limiting condition on the occurrence of these differences.
What general differences, if any, are there in persuasiveness between
one-sided messages and nonrefutational two-sided messages? In
advertising contexts, how do one-sided messages and nonrefutational
two-sided messages differ in persuasiveness? Outside advertising
contexts (that is, in “nonadvertising” messages), how do one-sided
messages and nonrefutational two-sided messages differ in
persuasiveness? What might explain the observed differences
between advertising messages and other persuasive messages in how
nonrefutational two-sided messages work? Are the effects of
nonrefutational two-sided messages (compared with one-sided
messages) on credibility perceptions the same for advertising
messages and for nonadvertising messages? Explain how skepticism
about advertising might underlie the different effects of
nonrefutational two-sided messages in advertising contexts as
opposed to nonadvertising contexts.
7. What is a gain-framed message? A loss-framed message? Describe a
reason for hypothesizing that loss-framed appeals might generally be
more persuasive than gain-framed appeals. Which kind of appeal is
generally more persuasive? Which kind of appeal is more persuasive
when the message topic concerns disease prevention? Which kind of
appeal is more persuasive when the message topic concerns disease
detection? Explain why it is difficult to identify a factor that
moderates the effects of gain- and loss-framed appeals. Describe the
hypothesis that the relative persuasiveness of gain-framed and loss-
framed appeals will vary depending on whether the recipient is
relatively approach/promotion–oriented or avoidance/prevention–
oriented; explain how such motivational differences are related to
different kinds of behavioral consequences.
8. What is a threat appeal? Describe the two parts of a threat appeal.
What is protection motivation theory (PMT)? What is protection
motivation? Identify the two processes underlying protection
motivation. What is threat appraisal? Identify two factors that
influence threat appraisal. What is perceived threat severity? What is
perceived vulnerability to threat? What is coping appraisal? Identify
two factors that influence coping appraisal. What is perceived
response efficacy? What is perceived self-efficacy? What is the
relationship between the intensity of threat appeal content and the
degree of fear aroused in receivers? Are messages with more intense
content generally more persuasive than those with less intense
content? Are messages that arouse greater fear generally more
persuasive than those that arouse lesser amounts of fear? Does the
relationship between the intensity of message contents and the
amount of aroused fear take the shape of an inverted U? Explain.
Does the relationship between the intensity of message contents and
persuasive outcomes take the shape of an inverted U? Explain.
Identify two conditions under which more intense threat appeals are
unlikely to be more persuasive than less intense appeals. Describe the
extended parallel process model (EPPM). What does the EPPM add
to protection motivation theory (PMT)? What is danger control?
What is fear control? Describe how the activation of fear control and
danger control processes varies as a function of variations in
perceived threat and perceived efficacy. From the perspective of the
EPPM, is high perceived threat sufficient to motivate protective
action? Explain. What emotions other than fear might be involved in
persuasion? Describe how the anticipation of emotional states can
play a role in persuasion.
9. Describe the foot-in-the-door (FITD) strategy. Identify four factors
that influence the success of the FITD strategy (four moderating
factors). How does the presence of an obvious external justification
(for initial-request compliance) influence the effectiveness of the
strategy? How does the size of the initial request influence the
effectiveness of the strategy? How is the strategy’s effectiveness
influenced by whether the initially requested behavior is actually
performed? How is the strategy’s effectiveness influenced by whether
the requests come from prosocial or nonprosocial organizations?
Does the time interval between the two requests influence the
strategy’s success? Is the strategy’s success affected by whether the
same person makes both requests? What is the self-perception
explanation of FITD effects? Describe how that explanation accounts
for the observed moderating factors. Identify a potential problem with
the self-perception explanation.
10. Describe the door-in-the-face (DITF) strategy. Identify four factors
that influence the success of the DITF strategy (four moderating
factors). How is the success of the strategy affected by whether the
same person makes the two requests? How is the success of the
strategy affected by whether the two requests have the same
beneficiary? How is the success of the strategy affected by the
presence of a delay between the requests? How is the strategy’s
effectiveness influenced by whether the requests come from prosocial
or nonprosocial organizations? Describe the reciprocal concessions
explanation of DITF effects. Describe how that explanation accounts
for some of the observed moderating factors; describe how that
explanation has a difficult time accounting for other moderating
factors. Does the size of the concession (the reduction in request size
from the first to the second request) influence the success of the
strategy? Are DITF effects influenced by emphasizing or
deemphasizing the concession? Describe the guilt-based explanation
of DITF effects. Describe how that explanation accounts for some of
the observed moderating factors; describe how that explanation has a
difficult time accounting for other moderating factors. Do guilt-
reduction behaviors necessarily involve making amends to the person
injured by the guilt-producing behavior? Explain how DITF effects
might reflect a combination of reciprocity-based and guilt-based
processes.

Notes
1. This variation (stating or omitting the message’s overall conclusion)
thus is different from varying whether the message states the conclusions
to its individual supporting arguments (e.g., Kao, 2007). Both have been
glossed as “conclusion omission” manipulations but are plainly
distinguishable variations.

2. The experimental contrast of interest here is between a message in
which the overall conclusion is stated explicitly and one in which that
conclusion is simply omitted. A slightly different contrast compares a
message with an explicit overall conclusion and a message in which
recipients are explicitly urged to “decide for yourself” (Martin, Lang, &
Wong, 2003; Sawyer & Howard, 1991). The latter contrast might yield
effects different from the former; for example, a “decide for yourself”
message might minimize reactance and thereby diminish any relative
advantage of an explicit conclusion message. There is good evidence that
adding “but you are free to refuse” can enhance request compliance (e.g.,
Guéguen & Pascual, 2005; for a review, see Carpenter, 2013), so one
ought not assume that an omitted conclusion message and a decide-for-
yourself conclusion message will produce identical effects when compared
with an explicit conclusion message.

3. The observed mean effect size (across 17 such studies) corresponds to a
correlation of .10 (O’Keefe, 2002b; for some earlier reviews, see Cruz,
1998; O’Keefe, 1997).

4. It may be that in social influence settings such as psychotherapy, or
perhaps in unusual persuasive message circumstances (see, e.g., Linder &
Worchel, 1970), there can be some benefit to letting the receiver draw the
conclusion. But in ordinary persuasive message contexts, the evidence
indicates that such benefits are unlikely to accrue. And it might be that in
consumer advertising, any advantage of explicit conclusions (over omitted
conclusions) would be diminished because recipients are unlikely to
misperceive what position is being advocated. It would plainly be useful
for future research to obtain a principled description of the circumstances
in which explicit conclusions might impair persuasive success relative to
omitted conclusions.

5. Across 18 such studies, the mean effect size, expressed as a correlation,
was .10 (O’Keefe, 2002b).

6. Either of these explanations might be related to the possibility that a
message with a more specific description of the recommended action
makes it easier for receivers to imagine themselves performing that action.
This possibility is worth noticing because at least under some conditions,
imagining performing a hypothetical future behavior can lead to increased
(perceived and actual) likelihood of performing that behavior (e.g., C. A.
Anderson, 1983; Armitage & Reidy, 2008; Gregory, Cialdini, &
Carpenter, 1982; Libby, Shaeffer, Eibach, & Slemmer, 2007; R. T.
Sherman & Anderson, 1987). But the mechanism underlying such
imagined behavior effects is not yet clear. It might be that imagined
behavioral performance enhances perceived behavioral control (self-
efficacy), enhances the development of implementation intentions
(because it is akin to explicit planning; see Chapter 6), makes reasons for
performing the behavior more salient (see R. T. Sherman & Anderson,
1987), or increases behavioral performance through some other
mechanism. (For related work, see Escalas & Luce, 2004; Knäuper et al.,
2011; Petrova & Cialdini, 2005; Ratcliff et al., 1999.) The larger point is
that there are several strands of research (concerning specific action
descriptions, explicit planning, imagining behavioral performance, and
developing implementation intentions) that appear to be related but whose
connections have not been fully explored.

7. Taken together, these first two complexities should make plain the
challenges in reaching dependable generalizations about the persuasive
effects of narrative. Given that there are many different narrative forms
(the first point) and many different nonnarrative forms against which a
narrative form might be compared (the second point), quite a diverse set of
message contrasts is naturally possible.

8. This analysis is not quite of the sort recommended by the elaboration
likelihood model’s idea of different persuasion roles—or at least this is not
an analysis that relies on the ELM’s description of such roles (see Chapter
8). But this general line of thinking has obvious affinities with an ELM-
based analysis. For a general discussion of the capabilities of narrative in
communication (not only persuasive communication) in the context of
cancer prevention and control, see Kreuter et al. (2007).

9. One much-studied specific realization of the contrast between narrative
and nonnarrative messages is the contrast between a message that provides
a single example (in a form amounting to a narrative) and a message that
provides corresponding statistical information about many cases
(nonnarrative). In primary research, one can find results indicating a
statistically significant advantage for examples (e.g., Uribe, Manzur, &
Hidalgo, 2013), results indicating a statistically significant advantage for statistical
summaries (e.g., Lindsey & Ah Yun, 2003), and results reporting no
significant difference (e.g., Hong & Park, 2012; Mazor et al., 2007; Schulz
& Meuffels, 2012). A thorough review of the relevant research is not yet in
hand. Allen and Preiss’s (1997) review included studies that did not
compare the persuasive effectiveness of examples and parallel quantitative
summaries (e.g., Harte, 1976). The review by Zebregs, van den Putte,
Neijens, and de Graaf (2015) was careful to avoid such problems but did
not include unpublished studies or several seemingly relevant published
studies (e.g., Dardis & Shen, 2008; Studts, Ruberg, McGuffin, & Roetzer,
2010).

10. Character identification has been conceptualized and assessed in a
number of different ways, including perceived similarity to a character,
liking for a character, adopting the role or perspective of a character,
wishful identification with a character (wanting to be like the character in
some way), and “parasocial interaction” with a character (a sense of
having a relationship with the character). The present treatment bundles
these together, but there is reason to think that, in the long run, a more
differentiated analysis will be useful (see Moyer-Gusé, 2008; Tukachinsky
& Tokunaga, 2013).

11. Tukachinsky and Tokunaga’s (2013) meta-analysis distinguished, inter
alia, perceived character similarity (homophily), empathic character
identification, and parasocial relationships. In a random-effects analysis,
perceived similarity and empathic identification produced statistically
significant effects on story-consistent attitudes, beliefs, and behaviors, but
parasocial relationships did not. The mean effects were r = .30 for
perceived similarity (31 studies), r = .26 for empathic identification (17
studies), and r = .12 for parasocial relationships (five studies).

12. A number of different terms have been used to describe this
phenomenon, including transportation, engagement, immersion, and
absorption (Slater & Rouner, 2002, p. 179).

13. In Tukachinsky and Tokunaga’s (2013) random-effects meta-analysis,
greater transportation was significantly associated with story-consistent
attitudes, beliefs, and behaviors; the mean effect was r = .29 (across 31
cases). In van Laer, de Ruyter, Visconti, and Wetzels’s (2014) meta-
analysis, greater transportation was significantly associated with stronger
persuasive effects on beliefs (the random-effects unadjusted mean effect,
across 31 studies, corresponded to a correlation of .23), attitudes (31
studies, mean r = .41), and intentions (nine studies, mean r = .29).

14. Video games may provide another vehicle for EE narrative persuasion.
Games can easily encourage transportation (immersion in the game world)
and character identification (as when the player is a character). When
exposure to a game can be mandated (e.g., when school children are
required to play a health-oriented game as part of their instruction), then
the intrinsic appeal of the game may not matter so much; however, where
voluntary game playing is concerned, then (just as with, say, voluntary
exposure to EE television programming), the challenge arises of making
the game sufficiently entertaining (so people will want to play it) while
also ensuring delivery of the desired persuasive contents. (For some
review discussions of games as persuasive vehicles, see Lieberman, 2012,
2013; Lu, Baranowski, Thompson, & Buday, 2012; Peng, Crouse, & Lin,
2013; Primack et al., 2012.)

15. In these studies, the comparison condition is commonly a no-message
control (or the evidence of effectiveness is drawn from a pre- versus
postintervention comparison). The consequence is that conclusions are not
yet possible about the relative effectiveness of prompts and other (e.g.,
more complex) messages.

16. Head et al.’s (2013) reported meta-analytic results were based on
fixed-effect analyses and hence cannot appropriately be used as a basis for
generalizing about text messaging interventions other than the ones
already studied. A reanalysis of Head et al.’s (2013) effect sizes, converted
from standardized mean differences (ds) to correlations (rs) and analyzed
using Borenstein and Rothstein’s (2005) random-effects analysis, suggests
that some, but not all, of Head et al.’s conclusions can appropriately be
generalized. For example, the overall comparison between text messaging
interventions and comparison conditions yields a statistically significant
effect with both analyses (across 19 cases, the random-effects mean r was
.13, with 95% confidence interval limits of .08 and .19). By contrast, the
claimed effect of personalization (where personalized text messaging was
reported as more effective than nonpersonalized text messaging) does not
generalize: in a random-effects analysis, the mean effects for personalized
interventions (four cases, mean r = .17, 95% CI limits of .06 and .28) and
for nonpersonalized interventions (15 cases, mean r = .12, 95% CI limits
of .06 and .18) were not significantly different, Q(1) = .625, p = .43.
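The d-to-r conversion mentioned in this note can be illustrated with a short sketch. The formula below is the standard approximation assuming two groups of roughly equal size, not necessarily the exact procedure implemented in Borenstein and Rothstein’s (2005) software, and the sample value d = 0.26 is purely illustrative (chosen because it corresponds approximately to the mean r of .13 reported above, not a figure taken from Head et al., 2013).

```python
import math

def d_to_r(d):
    # Standard approximate conversion from a standardized mean
    # difference (Cohen's d) to a correlation, assuming two groups
    # of roughly equal size: r = d / sqrt(d^2 + 4).
    return d / math.sqrt(d ** 2 + 4)

# An illustrative d of 0.26 corresponds to roughly the mean r of .13
# reported in this note.
print(round(d_to_r(0.26), 2))  # 0.13
```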

17. Of course, if behavioral performance is already at a practical maximum
in the target population (such that nearly all the people with appropriately
positive attitudes and perceived behavioral control are already engaging in
the behavior), then prompts may have little or no effect. For example,
Clack, Pitts, and Kellermann’s (2000) finding that parking deck prompts
did not increase safety belt use might have been a consequence of the high
baseline performance rate (83%).

18. A parallel generalization can be drawn concerning messages that
invoke undesirable consequences of noncompliance with the advocated
action—for example, “If you don’t do the advocated action [wear your seat
belt, use sunscreen, etc.] then this bad thing can happen to you [serious
injury, skin cancer, etc.].” Appeals invoking consequences of
noncompliance are more persuasive when they invoke consequences that
are (taken by the audience to be) relatively more undesirable than when
they invoke outcomes that are relatively less undesirable. The research
evidence here is drawn from work on threat appeals (fear appeals), where
the message variation of interest is commonly described as variations in
“threat severity”—and the relevant finding is that threats perceived as
more severe (i.e., more undesirable) make for more effective persuasive
appeals than do threats perceived as less severe (less undesirable); see the
meta-analytic reviews of de Hoog, Stroebe, and de Wit (2007), Floyd,
Prentice-Dunn, and Rogers (2000), and Witte and Allen (2000). For
discussion, see O’Keefe (2013a).

19. Persuaders might usefully be reminded here that their reasons for
wanting a behavior performed are not necessarily the reasons that will be
most persuasive to message recipients. The public health official may want
to encourage sunscreen use so as to reduce skin cancer, but appeals to
health-related consequences might be less persuasive than appeals to
appearance-related consequences.

20. Keller and Lehmann’s (2008) meta-analysis also offered conclusions
about message sidedness (and a number of other message variations), but
their procedures included nonexperimental data. For example, a study in
which all the messages were two-sided had its results included in the
analysis of the effects of two-sided messages. That is, Keller and
Lehmann’s conclusions about a given message variable were not based
exclusively on experiments (randomized trials) in which levels of that
variable were manipulated. In fact, they reported, “we had relatively few
manipulated levels for many of the variables” whose effects they reviewed
(p. 120). There are, of course, very good and familiar reasons to prefer
conclusions based on randomized trials (“this experiment compared the
effectiveness of one-sided and two-sided messages and found …”) over
those based on observational studies (“in this study all the messages were
two-sided, and people were really persuaded, so therefore …”).
Correspondingly, there are good reasons to prefer meta-analytic
conclusions based exclusively on randomized-trial data over those based
largely on observational studies.

21. The average persuasive advantage of refutational two-sided messages
over one-sided messages (across 42 studies) corresponds to a correlation of
.08. The average persuasive disadvantage of nonrefutational two-sided
messages compared with one-sided messages (across 65 studies)
corresponds to a correlation of –.05 (O’Keefe, 1999a).

22. Both kinds of two-sided messages are perceived as more credible than
one-sided messages. For refutational two-sided messages, the effect
corresponds to a correlation of .11 (across 20 studies); for nonrefutational
two-sided messages, the correlation is .08 (across 36 cases; O’Keefe,
1999a).

23. In Spring et al.’s (2009) review, the (statistically significant) mean
effect size for short-term abstinence (under three months) for this
comparison, expressed as an odds ratio, was 1.29, which corresponds to a
correlation of .07. The (nonsignificant) mean effect size for long-term
abstinence (over six months) was 1.23, which corresponds to a correlation
of .06. (For some discussion, see Parsons, Lycett, & Aveyard, 2011;
Spring, Rademaker, McFadden, & Hitsman, 2011.)
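The correspondence between these odds ratios and correlations can be reproduced with the common logit-based conversion (odds ratio to Cohen’s d, then d to r). The sketch below uses standard approximation formulas and is not necessarily the exact procedure used in the cited review.

```python
import math

def odds_ratio_to_r(odds_ratio):
    # Logit method: convert an odds ratio to Cohen's d ...
    d = math.log(odds_ratio) * math.sqrt(3) / math.pi
    # ... then d to r, assuming roughly equal group sizes.
    return d / math.sqrt(d ** 2 + 4)

print(round(odds_ratio_to_r(1.29), 2))  # short-term abstinence: 0.07
print(round(odds_ratio_to_r(1.23), 2))  # long-term abstinence: 0.06
```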

24. Refutational two-sided messages appear to enjoy a persuasive
advantage (over one-sided messages) in both advertising (mean effect
corresponding to a correlation of .07, across nine cases) and
nonadvertising (mean effect corresponding to a correlation of .08, across
33 cases) messages. However, there are too few studies of refutational
two-sided advertisements to permit one to be confident of this effect (see
O’Keefe, 1999a).

25. Eisend’s (2006, 2007) meta-analysis of the effects of sidedness
variations, which was restricted to studies of consumer advertising,
appears to provide only partial confirmation of this picture. Concerning
effects on credibility perceptions, O’Keefe’s (1999a) results concerning
nonrefutational two-sided advertising messages and Eisend’s (2006)
results for (refutational and nonrefutational) two-sided advertising
messages were consistent: Such messages were more credible than one-
sided messages (with mean effect sizes, expressed as correlations, of .16
and .22, respectively). However, the two meta-analyses produced
dramatically different results concerning persuasion outcomes: Where
O’Keefe (1999a) reported no significant difference in persuasiveness
between nonrefutational two-sided advertisements and one-sided
advertisements (mean effect size of –.02), Eisend (2006) reported
significant advantages for two-sided advertising messages (mean effect
sizes of .12 and .14 for brand attitude and purchase intention,
respectively). However, these two meta-analyses differed in several ways.
When the same message pair was used in multiple studies, O’Keefe’s
procedure combined results across studies (thus treating the message pair
as the unit of analysis), whereas Eisend’s procedures recorded separate
effect sizes when a message pair was reused. When multiple measures of a
given outcome were available in a given study (e.g., three measures of
attitude), O’Keefe’s procedures created a single composite effect size for
that outcome, whereas Eisend’s procedures treated these as contributing
separate effect sizes. These and other procedural differences made for a
large apparent difference in the size of the meta-analytic databases: For
advertising messages, O’Keefe’s (1999a) database had 35 persuasion-
outcome and 22 credibility-outcome effect sizes. Eisend’s (2006) database
had 65 brand-attitude, 37 purchase-intention, and 32 credibility-outcome
effect sizes—despite not including a number of cases included in
O’Keefe’s (1999a) data set. [Comparison of O’Keefe’s (1999a) Tables 6.1
and 6.4 with Eisend’s (2006) Table 2 indicates that 18 effect sizes
concerning persuasion outcomes (brand attitude and purchase intention)
and 12 effect sizes concerning credibility outcomes that appeared in the
earlier data set were not included in the later one.] If Eisend’s (2006)
procedures had treated multiple measures of a given outcome in a study as
contributing a single effect size, and if brand-attitude and intention
outcomes had been analyzed together as representing persuasion outcomes
(as in O’Keefe, 1999a; see also O’Keefe, 2013b), then the data set would
have consisted of 21 persuasion effect sizes—of which 17 were included
in O’Keefe’s (1999a) data set of 35 advertising persuasion effect sizes. For
credibility outcomes, if multiple measures in a study had been treated as
contributing a single effect size, Eisend’s (2006) data set would have
consisted of 10 effect sizes—of which seven were included in O’Keefe’s
(1999a) data set of 22 advertising credibility effect sizes. As an indication
of the potential of such differences to influence the results: The 18
advertising persuasion-outcome effect sizes included in O’Keefe’s (1999a)
data set but not in Eisend’s (2006) data set had a random-effects mean
effect size, expressed as a correlation, of –.04 (N = 4,148; 95% confidence
interval limits of –.12 and .05). This effect was not different for
refutational (six cases, mean r = .02, 95% CI limits of –.07 and .12) and
nonrefutational (12 cases, mean r = –.08, 95% CI limits of –.19 and .04)
advertising messages, Q(1) = 1.7, p = .19. There may have been good
reasons for the observed procedural variations (e.g., for some cases in the
earlier data set to have been excluded from the later one). On the face of
things, however, one might reasonably be cautious about supposing that
nonrefutational two-sided advertising messages generally enjoy the size of
persuasive advantage over one-sided messages that might be implied by
Eisend’s (2006) results.

26. The phrase message framing has been used to cover a variety of
different message variations. Messages have been described as differently
framed when they have varied in the substantive consequences invoked (as
when HPV vaccine is described either as preventing genital warts or as
preventing both genital warts and cancer; McRee, Reiter, Chantala, &
Brewer, 2010) or in the description of a property of the attitude object (as
when ground beef is described as “75% lean” or “25% fat”; Levin &
Gaeth, 1988). Messages have also been labeled as framed differently when
a given outcome of the advocated action is described in various ways; for
example, the results of a surgical procedure might be described in terms of
the probability of living or the probability of dying (e.g., McNeil, Pauker,
Sox, & Tversky, 1982), or price might be characterized in terms of daily
expense (“pennies a day”) as opposed to aggregate cost (e.g., Gourville,
1999; see, relatedly, Chandran & Menon, 2004). All of these are plainly
distinguishable message variations, and not much is gained by lumping all
of them together as “message framing.” For one effort at sorting out such
matters, see Levin, Schneider, and Gaeth (1998).

27. In O’Keefe and Jensen’s (2006) review, the mean effect size,
expressed as a correlation, was .02 (not significantly different from zero)
across 165 cases; that review covered a variety of advocacy topics and
included both published and unpublished research. O’Keefe’s (2011)
review analyzed O’Keefe and Jensen’s (2006) cases, the studies of disease
prevention topics reviewed by O’Keefe and Jensen (2007), and the studies
of disease detection topics reviewed by O’Keefe and Jensen (2009); it
produced a similar result (mean effect size of r = .01 across 219 cases).
Other meta-analytic reviews of gain-loss message framing studies have
commonly had more limited scope by virtue of examining only certain
kinds of advocacy subjects (say, only health behaviors) or excluding
unpublished studies (e.g., Akl et al., 2011; Gallagher & Updegraff, 2012;
O’Keefe & Jensen, 2011).

28. This hypothesis was seemingly buttressed by Kahneman and Tversky’s
(1979) prospect theory, which was taken to suggest that preventive actions
would be motivated more by gains than by losses, with disease detection
behaviors more motivated by potential losses than by gains. But this
reading of prospect theory arguably misapplied the theory (for discussion,
see O’Keefe & Jensen, 2006, p. 23; O’Keefe, 2012b, pp. 12–15).
Moreover, direct examination of the mechanism hypothesized to underlie
putative framing differences between prevention and detection topics
(namely, perceived risk) has not yielded confirming evidence (van ’t Riet
et al., 2014).

29. O’Keefe and Jensen’s (2009) review reported a small but statistically
significant advantage for loss-framed appeals for disease detection topics
(mean r = –.04 across 53 cases), but this effect reflected the results for
breast cancer detection (a statistically significant mean r of –.06 across 17
cases) and did not obtain for other kinds of detection (a nonsignificant
mean r of –.03 across 36 cases). Gallagher and Updegraff’s (2012) review
undertook separate analyses for different persuasion outcomes (attitude,
intention, and behavior); for disease detection topics, they reported no
significant differences between framing conditions (no statistically
significant mean effect size) for attitude outcomes (mean r = –.03 across
16 cases), intention outcomes (mean r = –.03 across 32 cases), or behavior
outcomes (mean r = –.04 across 18 cases).

The population effect (for gain-loss framing for disease detection topics) is
almost certainly not literally zero, but, taken together, these meta-analytic
results suggest that any such effect is likely to be quite small. For example,
O’Keefe and Jensen (2009, p. 306) pointed out that their results were
consistent with a belief that the population effect is –.02 both overall and
for each of the different detection topics they distinguished; that is, that
value falls within the 95% confidence interval around each of the various
mean effects. And that value also falls within the 95% confidence interval
for the three separate effects computed over (a corrected version of)
Gallagher and Updegraff’s (2012) data set (O’Keefe, 2013b). So the gain-
loss message framing population effect for disease detection topics may
not be zero, but it is not very distant from zero.
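
The confidence-interval reasoning in this note and the next can be sketched numerically. In the sketch below, the standard error (on Fisher's z scale) is a hypothetical value chosen purely for illustration, not a figure from the reviews; the point is only the mechanics of checking whether a candidate population value falls inside a 95% interval around a mean correlation.

```python
import math

def r_confidence_interval(mean_r, se_z, crit=1.96):
    """95% CI for a mean correlation, computed on Fisher's z scale
    and transformed back to the r metric."""
    z = math.atanh(mean_r)
    return math.tanh(z - crit * se_z), math.tanh(z + crit * se_z)

# Hypothetical numbers: a mean r of -.04 with a standard error of .02
# on the z scale (illustrative only, not the reviews' actual values).
lo, hi = r_confidence_interval(-0.04, 0.02)

# A candidate population value of -.02 falls inside this interval,
# so data like these would be consistent with that value.
print(lo < -0.02 < hi)  # True
```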

30. O’Keefe and Jensen’s (2007) review, which included both published
and unpublished studies, reported a small but statistically significant
advantage for gain-framed appeals for disease prevention topics (mean r =
.03 across 93 cases). But this effect reflected the results for dental hygiene
messages (a statistically significant mean r of .15 across nine cases) and
did not obtain for other kinds of prevention topics (a nonsignificant mean r
of .02 across 84 cases). Gallagher and Updegraff’s (2012) review, which
was restricted to published studies, undertook separate analyses for
different persuasion outcomes (attitude, intention, and behavior); for
disease prevention topics, they reported no significant difference between
framing conditions for attitude (mean r = .04 across 45 cases) and
intention (mean r = .03 across 46 cases) outcomes but did find a
significant mean effect size for behavioral outcomes (mean r = .08 across
32 cases). But a closer analysis of (a corrected version of) Gallagher and
Updegraff’s data set indicates that those three mean effect sizes were not
significantly different from each other (for details and discussion, see
O’Keefe, 2013b); that is, for prevention topics, the mean effect size for
behavior outcomes was not significantly larger than the mean effect sizes
for attitude or intention. Expressed differently: There is no evidence that
gain-loss framing effects on prevention topics vary as a consequence of the
kind of outcome examined; the observed effects on the different outcomes
were statistically indistinguishable.

Of course, the population effect (for gain-loss framing for disease
prevention topics) is almost certainly not literally zero; but, taken together,
these meta-analytic results suggest that any such effect is likely to be quite
small. For example, O’Keefe and Jensen’s (2007) results are consistent
with a belief that the population effect is .03 both overall and for each of
the different prevention topics they distinguished; that is, that value falls
within the 95% confidence interval around each of the various mean
effects. And a value of .04 falls within the 95% confidence intervals for
the three separate effects computed over Gallagher and Updegraff’s (2012)
corrected data set (O’Keefe, 2013b). So, for disease prevention topics, as
with disease detection topics, the gain-loss message framing population
effect may not literally be zero, but it is not very far from that value.

31. It might have been more transparent to have labeled this appeal
variation as the difference between “compliance-focused” and
“noncompliance-focused” appeals. But the terminology of “gain-framed”
and “loss-framed” is too well established to hope for any better labeling to
take hold.

32. Nonrefutational two-sided messages are an exception. In such
messages, the persuader as much as acknowledges some disadvantage of
compliance.

33. It is worth noticing, however, that an appropriately chosen gain-loss
frame might amplify differences (in persuasive effectiveness) arising from
whether the invoked consequences match the recipient’s motivational
focus. That is, the persuasive advantage that accrues to a message’s having
motivationally matched consequences (e.g., promotion-oriented
consequences for promotion-oriented people) could potentially be
enhanced or diminished depending on whether the appeal was phrased in
terms of consequences of compliance (gain-framed) or consequences of
noncompliance (loss-framed). For some related work, see Cesario, Corker,
and Jelinek (2013).

34. To put this problem a bit more abstractly: If a given individual-level
variable is associated with value differences (differences in the evaluations
of various substantively different consequences), then special care must be
taken in constructing experimental messages so as to ensure that such
value differences are not confounded with the gain-loss framing
manipulation. A sufficiently clever experimenter could show that
promotion-oriented people are more persuaded by gain-framed appeals
than by loss-framed appeals (by having the gain-framed appeal invoke
promotion consequences and the loss-framed appeal invoke prevention
consequences)—or could show the exact opposite (by having the gain-
framed appeal invoke prevention consequences and the loss-framed appeal
invoke promotion consequences). Or (see Chapter 3 concerning functional
approaches to attitude) an experimenter could show that high self-monitors
are more persuaded by gain-framed appeals than by loss-framed appeals
(by having the gain-framed appeal invoke symbolic consequences and the
loss-framed appeal invoke instrumental consequences)—or could show the
exact opposite. Or an experimenter could show that people in individualist
cultures are more persuaded by gain-framed than by loss-framed appeals
(by having the gain-framed appeal invoke individualist consequences and
the loss-framed appeal invoke collectivistic consequences)—or could
show the exact opposite. In general, any individual-difference variable that
goes proxy for, or straightforwardly represents, value variations makes the
task of experimental message construction especially challenging.
Showing the effect of such an individual-difference variable on the relative
persuasiveness of gain- and loss-framed appeals requires ruling out
variations in consequence desirability as an alternative explanation.

35. PMT is actually a bit more complex than this. Threat appraisal is said
to depend not just on threat severity and threat vulnerability but also on the
rewards of adopting a maladaptive response (e.g., the perceived rewards of
not adopting the protective behavior); coping appraisal is said to depend
not just on response efficacy and self-efficacy but also on response costs
(perceived costs of taking the protective response, such as money, time,
and so forth). But maladaptive rewards and response costs have received
less research attention than the other four elements, and the simpler
version presented here suffices to introduce the relevant general issues.
(Moreover, the relation between self-efficacy and response costs is not
lucid. After all, one reason why I might think that I can’t actually carry out
a protective behavior such as an exercise program [low self-efficacy] is
that it takes too much time [high response cost]. But PMT treats these
separately.)

36. The mean correlations between PMT-related perceptions (perceived
threat severity and so on) and persuasive outcomes (intentions and
behaviors) were reported by Milne et al. (2000) to range roughly from .05
to .35, although relatively few cases were available for analysis. The
paucity of cases reflects unhappy data-analytic decisions in primary
research experiments in which messages are varied in an attempt to
influence one or more of the four underlying perceptual states (e.g., an
experiment in which participants receive a message depicting the
recommended action as either relatively easy or relatively difficult to adopt,
so as to influence perceived self-efficacy). When researchers have assessed
the relevant perceptual states (e.g., perceived self-efficacy), those data
have been used as a “manipulation check” to confirm that the messages
produced appropriate perceptual differences—and then those data have
been discarded, with the analysis focused on how the message conditions
vary in their effects on the outcome variables (e.g., intention). The
unfortunate consequence is that even when researchers have collected data
relevant to the relationship of PMT’s perceptual states to persuasive
outcomes such as intentions and behaviors, research reports commonly
have not reported such information.

37. For ethical reasons, when the message topic concerns a real (as
opposed to fabricated) threat, the experimental conditions often involve
contrasts between (for instance) a high-vulnerability message and a no-
message control condition (e.g., Yzer et al., 1998).

38. Expressed as correlations, the mean effects of message manipulations
(of threat severity, threat vulnerability, response efficacy, and self-
efficacy) on corresponding perceptions (perceived threat severity,
perceived threat vulnerability, and so forth) range from roughly .30 to .45
(Witte & Allen, 2000); Milne et al. (2000) reported mean effects ranging
from about .25 to .65 but analyzed a much smaller set of studies. The mean
effects (expressed as correlations) of these message manipulations on
persuasive outcomes (attitudes, intentions, and behaviors) were reported
by Witte and Allen (2000) to be in the neighborhood of .10 to .20; Floyd et
al. (2000), reviewing a different (and usually smaller) set of studies,
reported mean effects corresponding to correlations roughly in a range
from .20 to .40; de Hoog et al. (2007), reviewing effects of severity,
susceptibility, and response efficacy manipulations, reported mean effects
corresponding to correlations generally ranging from about .10 to .20—
though with a notable lack of effect of vulnerability manipulations on
attitude.

39. Notice that this way of defining the message variation is based on the
properties of the communication, not the reactions of an audience. By
contrast, sometimes threat appeal variations have been defined on the basis
of evoked reactions (so that a strong threat appeal is one that arouses more
fear than does a weak one). But this latter way of defining message
variations should be dispreferred (for discussion, see O’Keefe, 2003; Tao
& Bucy, 2007).

40. Even these estimates may be misleading. For example, Witte and Allen
(2000) reported a mean correlation of .30 between threat appeal
manipulations and aroused fear (across 51 cases). But this figure was
inflated by (a) the exclusion of studies with a failed “manipulation check”
(studies in which there was not a dependable difference in aroused fear
between message conditions) and (b) the adjustment of individual effect
sizes, before being analyzed, for factors such as range restriction (thereby
increasing the size of the individual effects). An analysis that included all
studies and used unadjusted correlations would presumably yield a smaller
mean effect. (This treatment passes over complexities such as questions
about how to interpret postmessage fear reports and about the potential
role of evoked emotions other than fear. For helpful discussion, see Shen
& Dillard, 2014.)

41. There has been regrettably little attention to describing the particulars
of threat appeal variations. The meta-analytic treatments of this literature
commonly simply rely on the primary research categories (e.g., strong and
weak appeals) and do not consider what specific message features might
have been experimentally varied. The consequence is that we know rather
less than we might about what particular message variations might produce
the observed effects.

42. Unfortunately, threat appeal research results are often reported in ways
that do not permit full examination of the relationships of interest (see note
36 above). For example, it is common that a researcher will create two
message variations (with strong and weak threat appeals), check that they
aroused different levels of fear (in a manipulation check), and then report
the contrast between the two message conditions for the persuasion-
outcome dependent variables (such as attitude and intention)—leaving
unreported the direct relationship between the presumed mediating state
(fear) and the persuasion outcome variables. This has been a rather
widespread problem in persuasion research. A brief way of putting the
problem is to say that assessments of intervening states, rather than
properly being understood (and analyzed) as assessing mediating states,
have instead unfortunately been seen (and analyzed) as providing
manipulation checks for independent variables (for discussion, see
O’Keefe, 2003).

43. Carey, McDermott, and Sarma’s (2013) meta-analytic results are not
obviously an exception to this generalization. That meta-analysis
examined studies that compared messages that addressed road safety using
threat appeals against various control messages. Across four studies, threat
messages aroused greater fear than control messages (mean effect size of r
= .64); across 15 studies, these two kinds of messages were not associated
with differences in driving practices (mean effect size of r = .03). But of
those 15 studies, only two assessed both fear arousal and driving outcomes
(and these two were not separately analyzed), so there is not much direct
evidence in this data set concerning the question of whether messages that
arouse greater fear are also generally more persuasive. Notice that in
meta-analyses of threat appeal research where the experimental contrast of
interest is between high-intensity and low-intensity depictions of negative
consequences in threat appeals (e.g., Witte & Allen, 2000), the meta-
analytic results speak to the question of how message designers should
implement threat appeals. In Carey et al.’s (2013) review, where the
experimental contrast was between threat appeals and nonthreat appeals,
the meta-analytic results speak to the question of whether road safety
message designers should have any reason to prefer threat appeals over
nonthreat (control) appeals.

44. The relevant relationships are almost certainly only roughly linear, not
rectilinear. For instance, in a given persuasive circumstance, as message
intensity increases, there might come a point at which aroused fear
plateaus. That is, at some point further increases in message intensity
might not yield any greater fear (or any greater persuasion).

45. Although this is a commonly mentioned limiting condition on the
persuasive effects of variations in threat-related aspects of threat appeals,
the evidence is surprisingly slim. Consider, for example: In an effort to
locate such evidence, Peters et al.’s (2012) meta-analytic review was
limited to studies that varied both threat (depicted severity, depicted
vulnerability, or both) and efficacy (depicted response efficacy, depicted
self-efficacy, or both) and that had behavioral outcomes. Only six studies
were eventually included. The results indicated that threat variations had a
statistically significant effect on behavioral adoption when depicted
efficacy was high (mean effect size reported as a standardized mean
difference of .31, which corresponds to a correlation of .15) but not when
depicted efficacy was low (mean effect size reported as a standardized
mean difference of –.31, which corresponds to a correlation of –.15), with
those two mean effect sizes indicated as being significantly different.
These results look to be consistent with the claimed limiting condition. (It
should not pass unremarked that this interpretation relies on treating
experimental variations in depicted efficacy as a proxy for perceived
efficacy.) But the fragility of these findings can be seen by noticing that
these results were obtained only with the exclusion of another apparently
relevant study (Study 43; Chu, 1966). A reanalysis of the set of seven
studies (i.e., including Study 43), with effect sizes converted from
standardized mean differences to correlations and analyzed using the
random-effects procedures of Borenstein and Rothstein (2005), yields
results that are not reassuring: Threat variations did not have a statistically
significant effect on behavioral adoption when depicted efficacy was high
(mean r = .22, 95% CI limits of –.08 and .48) or when depicted efficacy
was low (mean r = –.05, 95% CI limits of –.24 and .15), with those two
mean effect sizes not significantly different from each other, Q(1) = 2.2, p
= .14. Peters et al. (2012, pp. 11–12) did offer a rationale for the exclusion
of Study 43, but the point here is the substantial effect that a single study
can have on these meta-analytic conclusions. In sum, it may well be the
case that more intense depictions of threat severity (and correspondingly
greater fear arousal) will be associated with greater persuasion only when
a workable, effective solution is perceived to be in hand—but the evidence
to date is not as robust as one might like.
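
The conversions in this note between standardized mean differences and correlations follow the standard formula (assuming equal group sizes), r = d / √(d² + 4); a minimal sketch:

```python
import math

def d_to_r(d):
    """Convert a standardized mean difference (Cohen's d) to the
    equivalent correlation, assuming equal group sizes."""
    return d / math.sqrt(d * d + 4)

print(round(d_to_r(0.31), 2))   # 0.15, matching the conversion in the note
print(round(d_to_r(-0.31), 2))  # -0.15
```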

46. This description of the EPPM is necessarily only a brief gloss (the
most detailed description is Witte, 1998). There is room for some
uncertainty about the EPPM’s predictions, at least in part because
presentations of the EPPM sometimes run together questions about
influences on protective intentions and actions and questions about
message effects. To clarify: Read in one way, the EPPM—and protection
motivation theory (PMT)—offer an account of what influences protective
intentions and actions. There is an obvious parallel here with reasoned
action theory (RAT; Chapter 6). RAT, the EPPM, and PMT each identify a
set of determinants of (influences on, precursors to) intentions and
behavior. The EPPM and PMT are narrower than RAT (because the EPPM
and PMT are focused on protective behaviors specifically) and so naturally
have a distinct set of determinants, but the overall analytical approach is
quite similar. The EPPM goes beyond PMT because it incorporates
Leventhal’s (1970) concepts of fear control and danger control processes
and because it offers an elaborated account of the interplay between these
processes and among the various determinants. Even so, the EPPM can be
seen to have the same central focus as PMT: understanding what factors
affect protective intentions and actions. Notice that one can have such an
account without ever considering questions about persuasive messages or
interventions. Each distinct determinant is of course a potential target for
persuasive messages, but how and why messages influence those
perceptual determinants make for a separate set of research questions. It
would be possible to know that perceptual state X (e.g., perceived self-
efficacy) is strongly related to protective behaviors without knowing what
sorts of messages or interventions influence X. As a parallel case from
RAT: It is possible to know that perceived behavior control (PBC) is
generally correlated with behavioral intentions without knowing how to
influence PBC. But the EPPM wants to offer not only an account of what
influences protective actions and how those influences are interrelated
(e.g., Propositions 6–10 in Witte, 1998) but also an account of what
happens when threat-related persuasive messages are encountered (e.g.,
Propositions 1, 2, and 5). These enterprises are naturally related but
conceptually distinct. Questions about what happens when this or that
perceptual state has a given value (e.g., what happens when perceived
threat severity is high) are different from questions about what happens
when this or that message feature has a given value (e.g., what happens
when depicted threat severity is high). In this research domain,
unfortunately, such differences are not always fully grasped. For example,
as Popova (2012, p. 457) pointed out, “the conceptual difference between
threat as a message characteristic and perceived threat is often overlooked
in practice.” The upshot is that sorting out predictions (and, for that matter,
empirical findings) in this research area can be quite challenging.

47. Notice that where fear control processes are activated, recipients may
end up experiencing relatively little fear and hence exhibit relatively little
persuasion. That is, the EPPM’s picture here is consistent with the
generally positive relationship between fear and persuasion: If people
aren’t experiencing much fear (because they don’t find the message
contents scary, don’t think they’re vulnerable, aren’t thinking about the
threat, or any other reason), then they’re not likely to be especially
motivated to adopt the protective action.

48. Popova’s (2012) review of EPPM research nicely draws attention to
ways in which the EPPM’s concepts and claims have not been formulated
as carefully as one might like. But that review’s lack of a quantitative
treatment of the relevant research is felt acutely. For example, where the
EPPM expects two conditions to differ, a study that fails to find a
statistically significant difference between those conditions does not
necessarily represent evidence that is inconsistent with the EPPM (the
observed effect might have been in the predicted direction even though not
statistically significant). For all one knows, a meta-analytic integration of
significant and nonsignificant primary research effects would produce a
pattern of mean effect sizes wholly consistent with the EPPM.

49. These various lines of research also commonly focus on a single
emotional reaction (fear, guilt, and so on). Although research is only
beginning to untangle the complexities here, there is good reason to think
that messages influence multiple emotions and that the interplay of evoked
emotions may be important in influencing message effects (e.g., Dillard &
Peck, 2001; Morales, Wu, & Fitzsimons, 2012).

50. Other compliance techniques have also received some research
attention. Notable among these are the “that’s-not-all” technique, in which
before any response is given to the initial request, the communicator
makes the offer more attractive (see, e.g., Banas & Turner, 2011; Burger,
Reed, DeCesare, Rauner, & Rozolis, 1999); the “low-ball” technique, in
which the communicator initially obtains the receiver’s commitment to an
action and then increases the cost of performing the action (see, e.g.,
Cialdini, Cacioppo, Bassett, & Miller, 1978; Guéguen, Pascual, & Dagot,
2002); and the “legitimizing paltry contributions” technique, in which
fundraisers explicitly legitimize small contributions (e.g., by saying “even
a penny helps”; for an illustrative study, see Cialdini & Schroeder, 1976; for
a review, see Andrews, Carpenter, Shaw, & Boster, 2008). Cialdini and
Griskevicius (2010) provide a useful general discussion of compliance
techniques.

51. The following representation of FITD research should be treated
cautiously, for two reasons. First, the extant systematic reviews of this
topic are now rather dated (the most recent was reported in 1999), which
means that their conclusions do not reflect accumulated subsequent work.
Second, the data-analytic procedures of these reviews can leave room for
doubt about the security of their conclusions. For example, Burger’s
(1999) analysis aggregated raw frequencies across studies (rather than
treating each study’s data as providing a separate case). This had the
unfortunate consequence of making the analysis’s confidence intervals
insensitive both to the number of studies and to the amount of between-
studies variation. Aggregating raw frequencies in this way can also
potentially lead to a problem known as Simpson’s paradox, discussed by
Borenstein et al. (2009, pp. 303–309), but Burger (1999, p. 306n1)
suggested that that problem was unlikely to arise in this data set.
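
Simpson's paradox, mentioned above, can be illustrated with hypothetical counts (chosen purely to show the reversal, not drawn from any actual FITD study): within each of two studies the control condition has the higher compliance rate, yet pooling raw frequencies across the studies reverses the comparison.

```python
# Hypothetical compliance counts for two studies:
# (FITD yeses, FITD n, control yeses, control n).
studies = [(70, 100, 8, 10), (2, 10, 30, 100)]

# Within each study, the control condition shows the higher rate.
for fy, fn, cy, cn in studies:
    assert fy / fn < cy / cn

# But aggregating raw frequencies reverses the direction: the pooled
# FITD rate exceeds the pooled control rate.
fitd_rate = sum(s[0] for s in studies) / sum(s[1] for s in studies)
ctrl_rate = sum(s[2] for s in studies) / sum(s[3] for s in studies)
print(round(fitd_rate, 2), round(ctrl_rate, 2))  # 0.65 0.35
```

This is why treating each study as a separate case (rather than summing frequencies across studies) is the safer meta-analytic procedure.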

52. In computing the reported compliance rate in the FITD conditions
(55%), all participants who heard the first request (regardless of whether
they agreed to it) were included in the denominator in Freedman and
Fraser’s (1966) data analysis. This is the appropriate way of figuring the
FITD compliance rate (as opposed to figuring it by computing the
compliance proportion among only those who received the second
request). If those who declined the initial request had been excluded from
the analysis, then a higher target-request compliance rate in the FITD
condition might have been explained as an artifact of having excluded
dispositionally uncooperative persons (those generally unwilling to accede
to requests) from the FITD condition denominator but not the control
condition denominator (or, alternatively expressed, as an artifact of having
dispositionally cooperative persons overrepresented in the FITD condition
by virtue of having passed through what amounted to the screening
procedure of the initial request).

53. The average effect size in FITD studies is roughly equivalent to a
correlation of between .10 and .15, with larger effects under optimal
conditions (Beaman et al., 1983; Dillard et al., 1984; Fern et al., 1986).

54. Chartrand, Pinckert, and Burger (1999) found that if the same person
makes both requests with no delay between them, the FITD technique may
backfire. But even this effect is apparently not general. Burger’s (1999)
review reported an advantage for FITD conditions over control conditions
when the same person made both requests without a delay (overall effect
corresponding to a correlation of .05 across 24 studies); when the same
person made the requests but with a delay between them (correlation of
.07, seven studies); when different persons made the requests without a
delay (correlation of .11, five studies); and when different persons made
the requests with a delay (correlation of .12, 28 studies). Taken at face
value (see note 51 above), these reported overall effects underwrite a
conclusion that FITD effects are unaffected by delay and a suspicion that
FITD effects perhaps might be larger when different persons make the
requests than when the same person makes them (but in the absence of
appropriate statistical analyses—comparing the differences between the
relevant effects—this can be only a suspicion).

55. The lack of an effect for the time interval between the requests is
sometimes seen as inconsistent with the self-perception explanation (e.g.,
Dillard et al., 1984). But it is not clear what predictions the self-perception
explanation would make here. On the one hand, it might be expected that
with increasing delay between the two requests, the FITD effect would
weaken (because there would be many opportunities, during the interval,
for other events to undermine the self-attributions of helpfulness and
cooperativeness). On the other hand, it might be predicted that with
increasing delay between the requests, the FITD effect would become
stronger (because it takes time for receivers to reflect on the causes of their
behavior and so to make the required self-attributions). Or (as Beaman et
al., 1983, noted) it might be that both these processes are at work and
cancel each other out.

56. A number of other explanations have also been proposed for FITD
effects (e.g., Ahluwalia & Burnkrant, 1993; Gorassini & Olson, 1995), but
at present none seems entirely satisfactory. For many explanations, there is
little direct relevant evidence. And it is not always obvious how the
explanations can accommodate the existing evidence concerning
moderating factors. For example, Fennis, Janssen, and Vohs’s (2009; see
also Fennis & Janssen, 2010) account invokes self-regulatory resource
depletion processes (the effortful character of responding to the first
request makes yielding to the second request more likely); but resource
depletion presumably dissipates relatively quickly whereas FITD effects
have been observed at some temporal remove (e.g., two weeks: Freedman
& Fraser, 1966).

57. The average effect size in DITF studies is equivalent to a correlation of
between .05 and .15, with larger effects under optimal conditions (Dillard
et al., 1984; Feeley, Anker, & Aloe, 2012; Fern et al., 1986; O’Keefe &
Hale, 1998, 2001).

58. This representation of DITF moderator-variable effects is unsubtle. In
the most recent meta-analytic review (Feeley et al., 2012, concerning
verbal outcomes), mean DITF effects were numerically larger with the
same requester, with the same beneficiary, with no delay between requests,
and with prosocial requests (than in the respective comparison conditions).
And in that review, whether statistically significant mean DITF effects
were obtained did vary depending on each of those moderators (e.g., there
was a statistically significant DITF effect when the beneficiaries of the two
requests were the same but not when the beneficiaries differed). But the
only statistically significant difference in mean DITF effects between
conditions of those moderators was for requester variation (the mean effect
was significantly larger when the same person made both requests than
when different people made the two requests). For other moderator
variables, differences between mean effect sizes for different moderator
conditions (e.g., between the mean effect size for prosocial requests and
the mean effect size for nonprosocial requests) were not statistically
significant. So, on the one hand, one cannot say that (for example) DITF
effects are dependably larger with prosocial requests than with
nonprosocial requests; but on the other hand, one can confidently
recommend the use of the DITF strategy only with prosocial requests
(because with nonprosocial requests, the obtained mean effect is not
significantly different from zero).

59. The explanation might be defended against this criticism, however, by
suggesting that any concession merely needs to be large enough to trigger
the reciprocal concessions norm; so long as the concession surpasses this
threshold, the reciprocal concessions mechanism will be engaged (and thus
increasing the size of the concession beyond that threshold would not
affect the strategy’s effectiveness). This defense is certainly adequate as
far as it goes, but consider that if larger concessions had been found to be
associated with greater DITF effectiveness, such a result surely would
have been counted as evidence supporting the reciprocal concessions
explanation. Thus the failure to find such effects requires at a minimum
some revision in the account (such as represented by the articulation of a
threshold-model version of the explanation).

Chapter 12 Receiver Factors

Individual Differences
Topic-Specific Differences
General Influences on Persuasion Processes
Summary
Transient Receiver States
Mood
Reactance
Other Transient States
Influencing Susceptibility to Persuasion
Reducing Susceptibility: Inoculation, Warning, Refusal Skills Training
Increasing Susceptibility: Self-Affirmation
Conclusion
For Review
Notes

This chapter reviews research concerning the effects that various recipient
characteristics have on persuasive outcomes. The discussion is organized
around three main topics: individual differences (such as personality
traits), transient receiver states (such as moods), and means of influencing
receivers’ susceptibility to persuasion.

Individual Differences
Individual differences (ways in which people vary with respect to
relatively stable characteristics, such as personality variations) can
influence persuasion in two broad ways: by virtue of their association with
topic-specific differences or by virtue of their general influence on
persuasion processes.

Topic-Specific Differences
Some individual-difference variables can be associated with topic-specific
differences in attitudes, beliefs, values, or behaviors—and hence (where
relevant to the topic) such individual differences may be related to
persuasive effects. A convenient example is provided by the personality
variable of self-monitoring (the degree to which a person regulates self-
presentation). As discussed in Chapter 3 (concerning functional
approaches to attitude), high and low self-monitors differ in what they tend
to value in certain consumer products; for instance, high self-monitors
especially favor the image projection attributes of automobiles, whereas
low self-monitors are more likely to value characteristics such as
reliability. Hence high and low self-monitors are differentially influenced
by corresponding persuasive appeals; high self-monitors react more
favorably to image-oriented advertisements than to ads focused on product
quality, whereas the opposite tendency is found for low self-monitors (e.g.,
Snyder & DeBono, 1985). To put the relevant point generally, this
personality difference serves as a marker of differences in receiver values
and hence is related to the success of persuasive appeals that vary in the
degree to which those values are engaged.

Similarly, variations in receiver age can sometimes serve as a marker of
topic-specific variations in persuasion-relevant beliefs and attitudes. For
instance, the motivations underlying volunteering may vary as a function
of age: There is some evidence that as people become older, the
importance of career-oriented motivations declines and the importance of
the interpersonal relationships that can come from volunteering increases
(Okun & Schultz, 2003). Thus when seeking to encourage volunteering,
one might want to use different persuasive appeals for receivers varying in
age—not because age per se is important but because age serves as a proxy
for the relevant value differences. (See, relatedly, Williams & Drolet,
2005; Zhang, Fung, & Ching, 2009.)

As another example, differences in receivers’ cultural backgrounds might
serve to index various topic-specific differences relevant to persuasion. For
instance, cultural variation may be associated with value differences and
hence with differential effectiveness of corresponding persuasive appeals
(e.g., Chang, 2006; Gregory, Munch, & Peterson, 2002; Laufer, Silvera,
McBride, & Schertzer, 2010; for a review, see Hornikx & O’Keefe,
2009).1 Or receivers of varying cultural backgrounds might typically have
different salient beliefs on a given topic, suggesting correspondingly
different persuasive approaches (see, e.g., Marin, Marin, Perez-Stable,
Sabogal, & Otero-Sabogal, 1990). (It is also possible that within a given
cultural group, individual variation in, for example, cultural identification
might provide still further bases for message adaptation. See Davis &
Resnicow, 2012; Kreuter, Lukwago, Bucholtz, Clark, & Sanders-Thompson, 2003.)

General Influences on Persuasion Processes
The second way in which individual differences might influence
persuasion is through their general influence on persuasion processes. For
example, as discussed in Chapter 8 (concerning the elaboration likelihood
model), individual differences in need for cognition (NFC) are associated
with differences in elaboration motivation: Those higher in NFC are
generally likely to have greater elaboration motivation than those lower in
NFC. Other factors also influence elaboration motivation, of course, but
the point here is that (ceteris paribus) across advocacy topics generally,
one expects NFC variations to produce corresponding variations in
elaboration likelihood.

Variations in receiver self-esteem and intelligence may also affect
persuasion processes generally. There is some evidence that persuasibility
may be maximized at intermediate levels of self-esteem and at lower levels
of intelligence (for a review, see Rhodes & Wood, 1992).2 Explanations for
these differences are at present somewhat speculative, but the most
plausible accounts look to the possible influence of these personality
characteristics on various general aspects of persuasion processes (see
Rhodes & Wood, 1992). Concerning self-esteem, the suggestion has been
that persons low in self-esteem are unlikely to pay sufficient attention to
the message (by virtue of being withdrawn or distracted), and those high in
self-esteem are likely to be confident in the correctness of their current
opinions (and so, perhaps, be more likely to counterargue), thus making
each group less likely to be persuaded than those of moderate levels of
self-esteem (for potential complexities, see Sanaktekin & Sunar, 2008).
With regard to intelligence, it may be that the greater knowledgeability
commonly associated with greater intelligence enables more critical
scrutiny of messages.

Another (more complex) example is provided by the individual-difference
variable of sensation seeking, which concerns preferences for novel,
complex, and ambiguous stimuli and situations. High sensation seekers
appear to use drugs and alcohol more frequently, and to begin using them
at an earlier age, than do low sensation seekers (e.g., Zuckerman, 1979, pp.
278–294). In designing programs aimed at reducing drug use, then, this
personality variable provides a means of identifying people who are the
most important and appropriate targets for persuasive messages
(Stephenson et al., 1999). But high and low sensation seekers also differ in
the kinds of messages to which they are especially susceptible; for
example, the use of rapid edits, intense imagery, and surprise endings can
make for more effective antidrug public service announcements for high
sensation seekers (e.g., Morgan, Palmgreen, Stephenson, Hoyle, & Lorch,
2003; Niederdeppe, Davis, Farrelly, & Yarsevich, 2007; for complexities,
see Kang, Cappella, & Fishbein, 2006). The implication is that sensation
seeking might provide not only a basis for identifying members of the
target audience but also a means of adapting messages to that audience (for
a useful review, see Morgan, 2012).

Summary
A great many individual-difference receiver characteristics have
been examined for their possible relationships to persuasibility. For most
such characteristics, the research evidence is commonly not extensive, and
dependable generalizations seem hard to come by. (For some illustrative
studies, see Geers, Handley, & McLarney, 2003; Guadagno & Cialdini,
2010; Gunnell & Ceci, 2010; Hirsh, Kang, & Bodenhausen, 2012; Lee &
Bichard, 2006; Magee & Kalyanaraman, 2009; Resnicow et al., 2008;
Saucier & Webster, 2009; Stephenson, Quick, & Hirsch, 2010; van ’t Riet,
Ruiter, & de Vries, 2012; Williams-Piehota et al., 2009.) But, as this brief
sketch indicates, individual differences may affect persuasion in a number
of different ways, so perhaps it is unsurprising that research has so often
yielded complex results. A given individual difference such as receiver age
might potentially be related to general persuasibility differences (Krosnick
& Alwin, 1989), to dispositional differences in information-processing
inclinations (e.g., Williams & Drolet, 2005), and to topic-specific
differences in persuasion-relevant beliefs and attitudes (as in the observed
age-related differences in evaluations of volunteering outcomes; Okun &
Schultz, 2003). Similarly, cultural variations may be related not only to
variations in underlying values but also to some information-processing
differences (e.g., Hornikx & Hoeken, 2007; Larkey & Gonzalez, 2007). It
is likely to take some time to sort out completely the different pathways by
which various individual-difference variables exert their influence on
persuasive effects. (For one attempt, see Briñol & Petty, 2005. For other
general discussions of individual differences and persuasion, see Briñol,
Rucker, Tormala, & Petty, 2004; Shakarchi & Haugtvedt, 2004; W. Wood
& Stagner, 1994.)

Transient Receiver States
Whereas the previous section discussed the effects of relatively stable
(individual-difference) receiver characteristics, persuasion can also
potentially be influenced by more transient receiver states. Two such states
are discussed here: mood and reactance.3

Mood
There seems to be a natural appeal to the hypothesis that a receiver’s
preexisting mood (affective state) will influence persuasion quite
straightforwardly, such that positive moods will enhance persuasion and
negative moods will diminish persuasion. And although the research
evidence to date does not yet yield an entirely clear picture, it is
nevertheless plain that this simple hypothesis will not suffice.

Rather, the research in hand appears to suggest that receivers in (at least
some kinds of) negative moods are more likely to engage in close message
processing than are receivers in (at least some kinds of) positive moods.
Expressed in terms of the elaboration likelihood model (ELM; see Chapter
8), mood influences elaboration likelihood. For example, Bless, Bohner,
Schwarz, and Strack (1990) found that sad participants were persuaded by
a counterattitudinal message if the message’s arguments were strong but
not if the arguments were weak (indicating relatively high elaboration); by
contrast, happy participants were equally persuaded by strong and by weak
arguments (suggesting relatively low elaboration). (For similar results, see,
e.g., Mackie & Worth, 1989. For a review, see Hullett, 2005.)4

Research has only begun to explore possible moderating conditions for this
effect (circumstances under which this effect weakens or even reverses),
so conclusions are not yet secure (see, e.g., Banas, Turner, & Shulman,
2012; Das, Vonkeman, & Hartmann, 2012; Shen, 2013; Sinclair, Moore,
Mark, Soldat, & Lavis, 2010; Ziegler, 2013, 2014). However, one notable
theme has been the suggestion that instead of referring generally to
positive and negative moods (affective states), a more differentiated
treatment of affective states will be needed—because, for example,
different positive affective states may have different message-processing
consequences. (For some illustrative studies, see Agrawal, Menon, &
Aaker, 2007; Griskevicius, Shiota, & Neufeld, 2010. For some useful
general discussions of affect and persuasion, see Bless & Schwarz, 1999;
Dillard & Seo, 2013; Nabi, 2007, 2010.)

Reactance
Reactance is a motivational state that is aroused when a person’s freedom
is perceived to be threatened or eliminated (Brehm, 1966; Brehm &
Brehm, 1981). When a person believes that his or her freedom may be
diminished, the person will be motivated to restore (defend, exercise) that
freedom—perhaps by acting counter to the impending pressure. (For some
general treatments of reactance, see Miron & Brehm, 2006; Quick, Shen,
& Dillard, 2013.)

Although reactance can be aroused by any number of circumstances (e.g.,
people may resist granting favors that they think would obligate them in
the future, so as to retain their freedom of action), counterattitudinal
persuasive messages are an obvious potential cause of reactance. Such
communications seek to reduce recipients’ freedom (of belief and action)
by suggesting that receivers adopt the advocated view, so such messages
might potentially be seen as trying to manipulate or pressure the recipient.
When a message arouses reactance, its persuasiveness is naturally
diminished. Receivers experiencing reactance are likely to reject the
advocated view; they may cling to their existing attitudes more strongly
and perhaps even change in the direction opposite that sought by the
message (a boomerang effect).

For many years, reactance was treated as an unmeasurable state
(identifiable only by its antecedents and consequences); however, recent
work has developed useful assessments of reactance, which in turn have
permitted reactance to be further unpacked. Specifically, where persuasive
messages are concerned, reactance seems best conceived of as a
combination of anger (an affective reaction) and counterarguing (a
cognitive reaction; see Dillard & Shen, 2005; Rains, 2013). That is,
reactance is not merely an affective (emotional) state but also involves
cognitive activity (specifically, counterarguing).

Research is only beginning to explore what makes reactance more or less
likely to be evoked.5 For many potential influences, the evidence is as yet
rather slim (for some illustrative studies, see Edwards, Li, & Lee, 2002;
Feiler, Tost, & Grant, 2012; Fitzsimons & Lehmann, 2004; Lee & Lee,
2009; Reinhart & Anker, 2012; Seibel & Dowd, 2001; Silvia, 2005).6
However, a number of studies have found that explicit freedom-
threatening language can evoke reactance and lead to diminished
persuasiveness (compared with parallel messages without such language).7
For example, Quick and Considine (2008) compared exercise messages
with “forceful” language (e.g., “it is impossible to deny all the evidence”
of exercise benefits, “no other conclusion makes sense,” and so forth) and
ones with non-forceful language (e.g., “there is pretty good evidence” of
exercise benefits, “it’s a sensible conclusion,” and so on), finding that
forceful language evoked reactance and reduced perceived message
persuasiveness. (For similar results, see, e.g., Bensley & Wu, 1991;
Burgoon et al., 2002; Dillard & Shen, 2005; Rains & Turner, 2007.)8

Avoiding such directive language is thus one way in which to reduce the
likelihood of the arousal of reactance. Another potential strategy for
minimizing reactance or its effect might be to emphasize the receiver’s
freedom of choice (e.g., Miller, Lane, Deatrick, Young, & Potts, 2007).
(See also the “but you are free to refuse” request strategy, as in Guéguen &
Pascual, 2005; for a review, see Carpenter, 2013.) Indeed, in work on
influencing addictive and related problematic health behaviors, the
approach known as motivational interviewing specifically recommends
that counselors avoid confrontation and instead affirm the client’s
autonomy and capacity for self-direction (for a general treatment of
motivational interviewing, see Miller & Rollnick, 2002; for some reviews
of relevant research, see Hettema & Hendricks, 2010; Jensen et al., 2011;
Knight, McGowan, Dickens, & Bundy, 2006; Morton et al., 2014).

Other Transient States
A number of other temporary states (beyond moods and reactance) have
also been investigated for their potential roles in persuasion processes. For
example, researchers have examined the effects of counterfactual mindsets
(in which a person thinks about what might have been; e.g.,
Krishnamurthy & Sivaraman, 2002; Tal-Or et al., 2004); of “bolstering”
and “counterarguing” mindsets (which orient the recipient to have
thoughts supporting or opposing the advocated view; Xu & Wyer, 2012);
of low-level, concrete mindsets as opposed to higher-level, abstract
mindsets (e.g., K. White, MacDonnell, & Dahl, 2011); of the receiver’s
expecting to have to communicate with others about the subject (e.g.,
Nienhuis, Manstead, & Spears, 2001; Tal-Or, Nemets, & Ziv, 2009); and
so forth. But the evidence in hand on these states is as yet too slim to
support confident conclusions.

Influencing Susceptibility to Persuasion
How can people’s susceptibility to persuasive messages be influenced?
Sometimes an advocate will have an interest in making people more
resistant (reducing their susceptibility to persuasion by opposing
messages), and sometimes a persuader will hope to make people less
resistant (increasing their susceptibility to the persuader’s efforts). This
section discusses ways of influencing people’s susceptibility to persuasion.
(For a collection of papers relevant to the general topic of influencing
susceptibility, see Knowles & Linn, 2004.)

Reducing Susceptibility: Inoculation, Warning, Refusal Skills Training
It’s all very well to persuade someone to one’s point of view—but once
persuaded, the person may be exposed to counterpersuasion, that is,
persuasive messages advocating some opposing viewpoint. The question
that naturally arises is how receivers might be made resistant to such
persuasive efforts (recognizing that making receivers resistant to
counterpersuasion may involve something different from persuading them
in the first place).9 In what follows, three approaches to creating such
resistance to persuasion are discussed: inoculation, warning, and refusal
skills training.

Inoculation
The fundamental ideas of inoculation can be usefully displayed through a
biomedical metaphor. Consider the question of how persons can be made
resistant to a disease virus (such as smallpox). One possibility is what
might be called supportive treatments—making sure that people get
adequate rest, a good diet, sufficient exercise, necessary vitamin
supplements, and so on. The hope, obviously, is that this treatment will
make it less likely that the disease will be contracted. But another
approach to inducing resistance is inoculation (as with smallpox vaccines).
An inoculation treatment consists of exposing persons to small doses of
the disease virus. The dose is small (to avoid bringing on the disease itself)
but is sufficient to stimulate and build the body’s defenses so that any later
massive attack (e.g., a smallpox epidemic) can be defeated.

As this analogy suggests, the parallel approach for inducing resistance to
persuasion consists of giving people “small doses” of the opposing view.
Specifically, an inoculation treatment consists of exposing people to a
weak attack on their current attitudes and then refuting that attack. That is,
the advocate explicitly discusses and refutes some opposing argument.10
In experimental designs in this research area, receivers are initially
exposed to a treatment designed to induce resistance to persuasion on a
given topic and then are exposed to an attack message (i.e., a
counterattitudinal message) on that topic to see whether the treatment has
made them resistant to the attack. (For some general discussions of
inoculation theory and research, see Compton, 2013; Compton & Pfau,
2005; Ivanov, 2012; Pfau & Szabo, 2004. For a classic treatment, see
McGuire, 1964.)

Such inoculation treatments do commonly create resistance to persuasion
when compared with no-treatment control conditions. (For some
illustrative studies, see Bither, Dolich, & Nell, 1971; Nabi, 2003; Pfau,
Holbert, Zubric, Pasha, & Lin, 2000. For a review, see Banas & Rains,
2010.)11 That is, showing receivers refutations of weak opposing
arguments makes receivers more resistant to persuasion (by subsequent
attack messages) than they otherwise would have been.

Notably, this resistance-creating effect of inoculation is not limited to
attack messages that use the same arguments that were refuted in the
inoculation treatment. That is, the resistance conferred by inoculation
generalizes beyond the refuted arguments; receivers who have been
inoculated are also more resistant (than they would have been) to novel
opposing arguments (Banas & Rains, 2010). The implication is that an
advocate need not try to inoculate receivers against all possible opposing
arguments; application of an inoculation treatment to only some opposing
arguments will create generalized resistance.

Following the biomedical metaphor, however, supportive treatments are a
possible alternative to inoculation. In persuasion, a supportive treatment
would consist of providing people with arguments and information
supporting their current views.12 One might imagine that supportive
treatments could also create resistance to persuasion, by bolstering the
existing attitude (just as, in the biomedical realm, supportive treatments—
good diet, adequate rest, and so forth—might create some resistance to
disease). Research to date suggests that supportive treatments may indeed
confer some resistance to persuasion (compared to no-treatment control
conditions), but the evidence is not yet quite as decisive as one might
want. (For examples of relevant research, see Bernard, Maio, & Olson,
2003; Rosnow, 1968. For a review, see Banas & Rains, 2010.)13

The more illuminating comparison, however, is between inoculation
treatments and supportive treatments, and the evidence in hand plainly
indicates an advantage for inoculation: Inoculation treatments confer
greater resistance to persuasion than do supportive treatments. (For
illustrative research, see Adams & Beatty, 1977; Kamins & Assael, 1987b.
For a review, see Banas & Rains, 2010.)14 That is, providing refutation of
weak counterarguments is more effective in making people resistant to
counterpersuasion than is bolstering their existing attitudes by providing
supportive material. Thus, given a choice between administering a
supportive treatment or an inoculation treatment, advocates should
presumably prefer inoculation.15 Evidence concerning the resistance-
creating effects of combining inoculation and supportive treatments is
unhappily slim (and has not been systematically reviewed); there is,
however, some evidence suggesting that the combination may be more
effective in conferring resistance than are supportive treatments alone
(e.g., Koehler, 1972; McCroskey, Young, & Scott, 1972; Szybillo &
Heslin, 1973).16

Explaining how and why inoculation induces resistance to persuasion has
proved rather challenging. Given that the resistance created by inoculation
treatments generalizes to novel arguments, the underlying mechanism is
unlikely to involve receivers simply acquiring the contents of the particular
refutation to which they are exposed; obviously, something more general
happens, something that makes receivers resistant even to new attacks on
their views.

The most commonly invoked mechanism for explaining inoculation effects
has been the receiver’s perception that their views are vulnerable to attack.
In McGuire’s (1964) classic formulation of inoculation theory, the
supposition was that if receivers are not aware that their beliefs might be
opposed, then they will be unmotivated to prepare their cognitive defenses.
An inoculation treatment was thought to stimulate the receiver’s natural
defenses and so make them resistant to subsequent attack messages.17 That
is, one effect of the inoculation treatment is (supposedly) to make the
recipient aware of the possibility of opposing arguments or views (because
the recipient sees an opposing argument).18

But the idea of perceived vulnerability has not yet been entirely carefully
specified. For example, does the receiver need to think that an attack
message is actually about to be encountered? Or only that it might
plausibly occur at some time in the future? Or perhaps merely that in the
abstract, somebody somewhere might believe differently (never mind
whether an actual attack is expected)? Is it necessary that the receiver think
that the attack message (or the imagined interlocutor) has good reasons for
the opposing view (reasons that might form the basis of good arguments
against the receiver’s opinion)? Or perhaps is the mere recognition of the
possibility of opposition (whether or not well-founded) sufficient?19

Even given some specification of the idea of vulnerability, the issue will
then become explaining exactly how and why perceived vulnerability
leads to resistance. Perhaps it stimulates counterarguing, or possibly it
simply inclines the receiver to reject the opposing view out of hand
without thinking about it very much.20 (For some discussion of alternative
means of resistance, see Ahluwalia, 2000; Blumberg, 2000; Burkley,
2008.) In short, much remains to be learned about how inoculation creates
resistance to persuasion.

Warning
If one’s awareness that a belief is vulnerable to attack might be sufficient
to lead one to bolster one’s defense of that belief (and thereby reduce the
effectiveness of attacks on it), then perhaps simply warning a person of an
impending counterattitudinal message will decrease the effectiveness of
the attack once it is presented. A fair amount of research has been
conducted concerning the effects of such warning on resistance to
persuasion.

Two sorts of warnings have been studied. One type simply warns receivers
that they will hear a message intended to persuade them, without providing
any information about the topic of the message, the viewpoint to be
advocated, and so on. The other type of warning tells receivers the topic
and position of the message.

Both sorts of warnings can confer resistance to persuasion and appear to
do so by stimulating counterarguing in the audience (e.g., Petty &
Cacioppo, 1977, 1979a; for a review, see W. Wood & Quinn, 2003).21
Topic-position warnings make it possible for receivers to engage in
anticipatory counterarguing because the audience knows the issue to be
discussed and the view to be advocated. Thus as the time interval between
the topic-position warning and the onset of the message increases (up to a
point, anyway), there is more opportunity for the audience to engage in
counterarguing. For example, in one study, high school students were
shown messages arguing that teenagers should not be allowed to drive.
Students received a warning of the topic and position of the impending
message, but the interval between the warning and the message varied (no
delay between warning and message, a 2-minute delay, or a 10-minute
delay). With increasing delay, there was increasing resistance to
persuasion (Freedman & Sears, 1965).

Persuasive-intent warnings, of course, do not permit anticipatory
counterarguing because the receivers do not know the subject of the
message; consequently, variations in the time interval between a
persuasive-intent warning and the communication have little effect on
resistance (e.g., R. G. Hass & Grady, 1975). But persuasive-intent
warnings do apparently stimulate greater counterarguing during the
persuasive message, thereby reducing receivers’ susceptibility to
persuasion.

Because warning apparently creates resistance by encouraging
counterarguing, the effectiveness of warnings is influenced by factors that
affect receivers’ motivation and ability to counterargue. When receivers
are not motivated to counterargue (e.g., because the issue is insufficiently
important to them) or are unable to counterargue (e.g., because
accompanying distraction prevents them from doing so), then the
resistance-inducing effects of warning are reduced (see, e.g., H. C. Chen,
Reardon, Rea, & Moore, 1992; Neimeyer et al., 1991; Petty & Cacioppo,
1979a; Romero, Agnew, & Insko, 1996; for a review, see W. Wood &
Quinn, 2003).22

Refusal Skills Training
Another, more specialized approach to creating resistance to persuasion
focuses on training the receiver in skills for refusing unwanted offers. The
central idea is that in some circumstances, the key to resistance is being
able to refuse offers or requests made by an influence agent. In particular,
it has often been supposed that children and adolescents are commonly
unable to resist offers of illegal drugs, alcohol, or tobacco and so end up
using these substances—even if they have negative attitudes about such
substances. Hence it has been thought that one avenue to preventing
substance use (or abuse) might be to provide training in how to refuse such
offers.

Refusal skills training is a different approach to resistance induction from
inoculation or warning. Inoculation and warning seek to provide the
receiver with certain sorts of cognitive defenses (hardening the initial
attitude, preparing the receiver’s attitudinal defenses, encouraging mental
counterarguing). In contrast, refusal skills training aims at equipping the
receiver with certain communicative abilities.

A good deal of research has explored refusal skills induction in the context
of preventing children and adolescents from using or misusing drugs
(alcohol, tobacco, marijuana, and so on). Three broad conclusions may be
drawn from this research. First, it is possible to teach such refusal skills.
Studies have found that resistance skills training does improve the quality
of role-played refusals, participants’ perceived self-efficacy for refusing
offers, and the like (see, e.g., Brown, Birch, Thyagaraj, Teufel, & Phillips,
2007; Langlois, Petosa, & Hallam, 1999; Wynn, Schulenberg, Maggs, &
Zucker, 2000).

Second, the programs that are most effective at teaching refusal skills
commonly involve rehearsal with directed feedback (i.e., opportunities for
participants to practice their refusal skills and to receive systematic
evaluation of their performance). Simply encouraging participants to
refuse offers or providing information about refusal skills seems less
effective in developing such skills than is providing guided practice (see,
e.g., Corbin, Jones, & Schulman, 1993; Turner et al., 1993).

Third, refusal skills programs are generally not very effective in
preventing or reducing drug, alcohol, or tobacco use/misuse. Evaluations
of such programs commonly find that refusal skills (or refusal skills self-
efficacy or exposure to refusal skills training) are unrelated to substance
use or abuse (for examples and reviews, see Donaldson, Graham, &
Hansen, 1994; Elder, Sallis, Woodruff, & Wildey, 1993; Gorman, 1995;
D. C. Smith, Tabb, Fisher, & Cleeland, 2014; Wynn et al., 2000). Some
successes have been reported (e.g., Donaldson, Graham, Piccinin, &
Hansen, 1995; Hecht, Graham, & Elek, 2006), but in a few circumstances,
boomerang effects—where refusal skills training has outcomes that appear
to encourage substance use—have also been observed (Biglan et al., 1987;
Donaldson et al., 1995; S. Kim, McLeod, & Shantzis, 1989).

In the United States, one particularly prominent refusal skills training
program has been Drug Abuse Resistance Education (DARE); the program
is aimed at children and adolescents and has featured police officers as
influence agents. The traditional DARE curriculum has many elements,
but its core is focused on teaching students skills for recognizing and
resisting pressures to use drugs and alcohol. Despite widespread
implementation, there is strikingly little evidence that DARE dependably
reduces substance use (for reviews, see Ennett, Tobler, Ringwalt, &
Flewelling, 1994; Pan & Bai, 2009; S. K. West & O’Neal, 2004).23

The lack of effectiveness of these refusal skills training programs may
suggest that the key to preventing substance use/misuse is not found in the
ability to refuse offers but rather lies somewhere else. (One possibility is
that descriptive norms—the perceived prevalence of substance use among
one’s peers—are a more important determinant of use than are refusal
skills; see, e.g., Donaldson et al., 1994; Wynn et al., 2000. See, relatedly,
LaBrie, Grossbard, & Hummer, 2009; Lai, Ho, & Lam, 2004; Vitoria,
Salgueiro, Silva, & De Vries, 2009.) That is, the apparent relative
ineffectiveness of the refusal skills programs aimed at preventing (or
reducing) substance misuse does not mean that the general refusal skills
induction strategy is somehow intrinsically defective as a mechanism for
creating resistance to persuasion, only that it may not be especially helpful
for these particular applications.

Alternatively, one might think that, because some refusal skill programs
appear to have been more successful than others in addressing substance
use, the key to future program development is the identification of the
relevant program ingredients. For some discussion along these lines, see
Krieger et al. (2013) and Miller-Day and Hecht (2013).

Increasing Susceptibility: Self-Affirmation
Some persuasive messages seem to be able to evoke a particular sort of
defensive reaction in recipients—an avoidance motivation, in which
recipients do not want to engage the message, want to avoid thinking about
its information, perhaps want to avoid the topic entirely. This (avoidant,
defensive) reaction is different from reactance. Where reactance is
activated, recipients undertake counterarguing; that is, reactance is an
active form of resistance that engages the message. Defensive avoidance,
on the other hand, represents a withdrawal from the message, an
unwillingness to engage with it. It’s as though the message is somehow so
threatening that people want to close themselves off from it.24

Consider, for example, that smokers may want to avoid information
suggesting that smoking harms health, those who consume alcohol may
not want to attend closely to messages describing alcohol risks, and so on.
Generally speaking, people may well avoid information if attending to it
seems capable of causing distress. And, obviously, such reactions are
likely to minimize the success of persuasive messages.25 These avoidance
behaviors can be seen to arise from a broad desire to maintain a positive
self-image (itself a widely recognized general motivation; see, e.g., Briñol
& Petty, 2005; Chaiken, Liberman, & Eagly, 1989; Prislin & Wood,
2005). For example, smokers may find it hard to hold a positive view of
the self if they have to confront information suggesting they are harming
themselves.

The question that arises is how, in such circumstances, people can be made
more susceptible to influence, more open to persuasion. The apparent
motivational foundation for avoidance—the desire to maintain a positive
self-image—suggests a possible avenue to minimizing these avoidance
tendencies: self-affirmation. Self-affirmation refers to treatments aimed at
affirming (confirming, supporting) the recipient’s positive characteristics
or important values. Self-affirmation can be accomplished in a variety of
ways, but the most common methods in research studies have had
participants reflect on a core value (e.g., by writing about a value that is
important to them, by describing instances in which they performed
positive actions such as kindness behaviors, and so on). The idea is that
active affirmation of some positive aspect of one’s self-concept will permit
people to be open to information that would otherwise be threatening. (For
some discussion of self-affirmation manipulations, see Armitage, Harris,
& Arden, 2011; McQueen & Klein, 2006; Napper, Harris, & Epton, 2009.)

Substantial evidence has accumulated that self-affirmation treatments can
increase acceptance of subsequent messages with otherwise threatening
contents. For example, self-affirmation has been reported to make coffee
drinkers more accepting of information about the health risks of caffeine
(Van Koningsbruggen, Das, & Roskos-Ewoldsen, 2009), to make
sunbathers less defensive about sun-exposure risk information (Jessop,
Simmonds, & Sparks, 2009), to make smokers more accepting of
information about smoking risks (Harris, Mayle, Mabbott, & Napper,
2007), to make opponents and proponents of capital punishment more
open to opposing viewpoints (Cohen, Aronson, & Steele, 2000), and so
forth. (For other examples, see Howell & Shepperd, 2012; Reed &
Aspinwall, 1998; Schüz, Schüz, & Eid, 2013; Sparks, Jessop, Chapman, &
Holmes, 2010; Van Koningsbruggen & Das, 2009. For reviews, see
Epton, Harris, Kane, van Koningsbruggen, & Sheeran, in press; Harris &
Epton, 2009; Sweeney & Moyer, 2015.)

At present, however, little can be confidently said about factors that might
moderate these self-affirmation effects—the circumstances under which
self-affirmation effects are most likely to occur, what sorts of self-
affirmation treatments might be most effective, whether individual
differences affect the success of self-affirmation treatments, and so forth
(for some illustrative studies, see Klein et al., 2010; Nan & Zhao, 2012;
Pietersma & Dijkstra, 2011; Sherman et al., 2009). Similarly, research is
only beginning to explore the mechanisms by which self-affirmation
treatments have their effects (e.g., Crocker, Niiya, & Mischkowski, 2008;
Klein & Harris, 2009; Van Koningsbruggen et al., 2009).26 But the
manifest usefulness of self-affirmation recommends its continued
investigation. (For some general discussions of self-affirmation theory and
research, see J. Aronson, Cohen, & Nail, 1999; Harris, 2011; Harris &
Epton, 2009, 2010; Sherman & Cohen, 2006.)

Conclusion
Researchers have investigated a large number of recipient characteristics
as possible influences on persuasive effectiveness; in particular, a great
many individual-difference variables have received some attention. The
present treatment provides only an overview of several especially
prominent lines of research.

For Review
1. What are individual differences? Explain how individual-difference
variables can be associated with topic-specific differences in
attitudes, beliefs, values, or behavior. Give examples. Explain how
individual-difference variables might be related to general differences
in persuasion processes. Give examples.
2. Are people in positive moods generally more easily persuaded than
people in negative moods? Describe the effect of variation in moods
on the extensiveness of message processing. What is reactance? How
does reactance influence message persuasiveness? Is reactance purely
an affective (emotional) state? Explain. Identify a message feature
that might arouse reactance. Describe how persuaders might
minimize the arousal of reactance.
3. Describe the general idea of resistance to counterpersuasion. Identify
two general ways persons might be made resistant to a disease virus.
Describe supportive medical treatments; describe how inoculation
against disease works. Describe inoculation treatments for inducing
resistance to persuasion; are these treatments effective in creating
resistance? Do inoculation treatments create resistance only to the
particular attack arguments that are refuted, or does the resistance
generalize to other attack arguments? Describe supportive treatments
for inducing resistance to persuasion. Which treatment, supportive or
inoculation, is more effective in creating resistance to persuasion?
Describe one possible explanation for the resistance-creating effects
of inoculation treatments.
4. Can warning a person of an impending counterattitudinal message
create resistance to persuasion? Distinguish two kinds of warnings.
Explain the mechanism by which warning confers resistance to
persuasion. Identify factors that influence the effectiveness of
warnings. How might the effectiveness of warnings be influenced by
the presence of distraction or by the degree of personal relevance of
the topic to the receiver?
5. What is refusal skills training? How is refusal skills training meant to
create resistance to persuasion? Explain how refusal skills training is
different from inoculation and warning as means of creating
resistance to persuasion. Is it possible to teach refusal skills
effectively? What are the most important elements in programs aimed
at teaching refusal skills? What effect do refusal skills training
programs have on substance use/misuse?
6. Describe how persuasive messages might evoke defensive avoidance
reactions from recipients. How are such reactions different from
reactance? Describe one way of minimizing the arousal of defensive
avoidance. What is self-affirmation? Can self-affirmation treatments
enhance acceptance of threatening messages?

Notes

1. As discussed in Chapter 11 (concerning message factors), a number of
individual-difference variables (such as “consideration of future
consequences”) appear to be proxies for value differences (for a general
treatment, see O’Keefe, 2013a).

2. Rhodes and Wood (1992) reviewed both self-esteem and intelligence
effects on persuasibility. Their mean reported persuasibility difference
between low and medium levels of self-esteem corresponds to a
correlation of .12 (across nine cases), indicating greater persuasibility at
medium levels; the difference between medium and high levels of self-
esteem corresponds to a correlation of –.06 (across nine cases), again
indicating greater persuasibility at medium levels. The mean persuasibility
difference between low and high levels of intelligence corresponds to a
correlation of –.14 (across seven cases), indicating greater persuasibility at
lower levels.

3. Several other such states have been mentioned in the context of
discussing messages aimed at arousing such states as fear or guilt (see
Chapter 11 concerning message factors).

4. Hullett’s (2005) analysis is not quite as decisive on this point as one
might suppose, for two reasons. First, Hullett’s analysis was limited to one
particular index of message processing, namely, differentiated effects of
messages varying in argument strength (the greater the observed difference
in persuasiveness of strong and weak arguments, the greater the degree of
message processing is taken to be). One cannot say whether studies with
other indices of elaboration (e.g., number of thoughts reported, or memory
for message contents) would yield similar conclusions. Second, the results
were taken to suggest “some interference of more positive moods on
message processing” (Hullett, 2005, p. 435), because the mean correlation
between argument quality and attitude was larger in the negative mood
condition (mean r = .39 across 21 cases) than in the positive mood
condition (mean r = .29 across 12 cases). But the reported confidence
intervals for those means suggest that those two mean effect sizes were
not significantly different; indeed, a random-effects reanalysis (using the
methods of Borenstein & Rothstein, 2005) of Hullett’s (2005, Table 1)
effect sizes yielded a nonsignificant (p = .23) difference between negative
(mean r = .377) and positive (mean r = .292) mood conditions. However, a
random-effects reanalysis restricted to counterattitudinal and neutral
messages (i.e., excluding proattitudinal messages) produced a significant
(p = .030) difference in processing between mood conditions: mean rs of
.418 (based on 11 cases; 95% confidence interval limits of .278 and .541)
and .236 (based on 8 cases; 95% confidence interval limits of .148 and
.321) for negative and positive moods, respectively. (But see also Chapter
8, note 4.)

5. As Quick, Shen, and Dillard (2013) pointed out, a good deal of
“reactance” research has not measured reactance directly but rather only
inferred its presence. This makes for some uncertainty about how to
interpret past findings, especially when alternative (nonreactance)
interpretations are possible.

6. As discussed elsewhere in this chapter, if receivers are warned of an
impending message described as intended to persuade, persuasiveness is
commonly diminished (for a review, see W. Wood & Quinn, 2003). One
might think that this effect reflects the arousal of reactance (generated by
the manifest intent to influence), but similar resistance-to-persuasion
effects are associated with warnings that do not mention the intent to
persuade (they mention only the topic and message position), which
suggests that reactance cannot explain the observed effects of intent-to-
persuade warnings (W. Wood & Quinn, 2003, p. 132).

7. Unfortunately this research literature does not contain a careful
conceptual treatment of the relevant linguistic variations, which are
variously described as “forceful,” “controlling,” or “freedom-threatening”
language. Because the message features have not been specified, drawing
dependable generalizations is difficult (and offering guidance to message
designers is correspondingly challenging).

8. A number of studies that have found such manipulations to reduce
persuasiveness have invoked reactance-based explanations but without
assessing reactance. One may hope that future research will illuminate just
how these language variations produce their effects (for some work along
such lines, see, e.g., Craig & Blankenship, 2011; Haugtvedt, Shakarchi,
Samuelson, & Liu, 2004; Silvia, 2006a).

9. One matter of some delicacy that is not treated here concerns the
definition of resistance to persuasion, which poses more difficulties than
one might initially suppose; a useful (if incomplete) discussion of this
topic has been provided by Pryor and Steinfatt (1978, pp. 220–221).

10. As discussed in Chapter 11 (concerning message factors), a
refutational two-sided message is one that both presents supporting
arguments and refutes opposing arguments. An inoculation treatment thus
functionally consists of the refutational portion of a refutational two-sided
message.

11. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant advantage (in resistance creation) for
inoculation treatments over no-treatment controls, with a mean d
(standardized mean difference) of .43 across 41 cases (an effect that
corresponds to an r of .21). A random-effects analysis (using the methods
of Borenstein & Rothstein, 2005) of the unadjusted effect sizes (converting
the reported ds to rs for the analysis) also yields a significant difference:
mean r = .200 (95% CI limits of .160 and .238).
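(The d-to-r conversions reported in this note and in notes 13 and 14 are consistent with the standard formula for converting a standardized mean difference to a correlation under equal group sizes, r = d / √(d² + 4). The following brief Python sketch, not part of the original analyses, illustrates the arithmetic; the assumption of equal group sizes is the author's illustration, not something stated by Banas and Rains:

```python
import math

def d_to_r(d):
    # Standard conversion from standardized mean difference d to
    # correlation r, assuming equal group sizes: r = d / sqrt(d^2 + 4)
    return d / math.sqrt(d ** 2 + 4)

# Mean ds reported by Banas and Rains (2010) for the three comparisons:
print(round(d_to_r(0.43), 2))  # inoculation vs. no-treatment control -> .21
print(round(d_to_r(0.34), 2))  # supportive vs. no-treatment control -> .17
print(round(d_to_r(0.22), 2))  # inoculation vs. supportive -> .11
```

These reproduce the rounded rs of .21, .17, and .11 given in notes 11, 13, and 14, respectively.)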

12. A supportive treatment thus amounts to a one-sided persuasive
message (presenting only supportive materials).

13. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant difference (in resistance creation)
between supportive treatments and no-treatment controls, with a mean d
(standardized mean difference) of .34 across 10 cases (an effect that
corresponds to an r of .17). But a random-effects analysis (using the
methods of Borenstein & Rothstein, 2005) of the unadjusted effect sizes
(converting the reported ds to rs for the analysis) yields a nonsignificant (p
= .052) difference: mean r = .130 (95% CI limits of –.001 and .256). Given
the 95% confidence interval around that random-effects mean, however,
smart money will surely bet that the population effect is positive.

14. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant difference (in resistance creation)
between inoculation treatments and supportive treatments, with a mean d
(standardized mean difference) of .22 across 19 cases (an effect that
corresponds to an r of .11). A random-effects analysis (using the methods
of Borenstein & Rothstein, 2005) of the unadjusted effect sizes (converting
the reported ds to rs for the analysis) also yields a significant difference:
mean r = .099 (95% CI limits of .045 and .153).

15. This discussion has focused on the effects of inoculation on creating
resistance to persuasion among people for whom the subsequent attack
message is indeed an attack (i.e., is counterattitudinal). When an
“inoculation” treatment is applied to people in advance of a proattitudinal
message (i.e., what would otherwise be the “attack” message), on the other
hand, that treatment is probably better conceived in different terms (e.g., as

404
a refutation of beliefs that the audience might hold); for some discussion,
see M. Wood (2007).

16. Combining supportive and inoculation treatments amounts to creating
a refutational two-sided persuasive message (discussed in Chapter 11), that
is, a message that both gives arguments supporting the communicator’s
view and gives refutations of counterarguments. Thus this point can be
expressed in terms of message sidedness: Refutational two-sided messages
may be more effective in conferring resistance to persuasion than are one-
sided messages.

17. This reasoning led to the expectation that “cultural truisms” (beliefs
that a person rarely, if ever, hears attacked, such as “it’s a good idea to
brush after every meal if possible”) would be especially vulnerable to
attack, precisely because people were unpracticed at (and had no
motivation to rehearse) defending those beliefs (McGuire, 1964). This reasoning also
suggests that resistance-to-persuasion processes might differ between
cultural truisms and more controversial beliefs. Specifically, (a) supportive
treatments should differentially induce resistance in these two
circumstances (with supportive treatments inducing more resistance on
more controversial topics than on truisms), (b) refutational treatments
should differentially induce resistance in these two circumstances (with
inoculation producing greater resistance for truisms than for more
controversial topics), (c) for truisms, refutational treatments will confer
more resistance than supportive treatments, and (d) the difference in
resistance-induction between the two kinds of treatments will be smaller
for controversial topics than for truisms (such that it might even turn out
that for controversial topics, refutational treatments and supportive
treatments might not differ in resistance induction). However, the reports
of much early inoculation research concerning truisms (e.g., McGuire &
Papageorgis, 1961) do not contain sufficient statistical information to
permit such questions to be addressed meta-analytically (Banas & Rains,
2010, p. 304). This lack is especially acutely felt because the contrast
between truisms and controversial topics affords a natural basis for
examination of the role putatively played by awareness of opposing views
as a stimulus for arousing defenses.

18. Approached with the contrast between cultural truisms and
controversial beliefs in mind, it is easy to see how inoculation might
function this way for truisms. However, for more ordinary beliefs, people
may already be entirely prepared to think that others might hold different
views—which would imply that inoculation ought to have no special powers
compared with, say, supportive treatments. And yet inoculation is
demonstrably more effective than supportive treatments for conferring
resistance on ordinary (nontruism) beliefs (Banas & Rains, 2010).

19. In some formulations, the essential element is said to be specific
awareness of an impending counterattitudinal (i.e., attack) message (e.g.,
Compton & Pfau, 2005, p. 100). But this is rather different from a global
awareness that opposing views are possible. Indeed, this approach as much
as suggests that one necessary component of inoculation treatments is a
warning of an impending counterattitudinal communication. But (as
discussed elsewhere in this chapter) given that extant research indicates
that such warnings alone (i.e., without any accompanying refutational
material) create resistance to persuasion, it would seem necessary to
sort out exactly what contribution refutational material makes to the
creation of resistance via inoculation. The apparent resistance-creating
effect of warnings alone (as compared with no-treatment controls)
corresponds to a mean correlation of .21 (W. Wood & Quinn, 2003),
whereas the parallel effect for inoculation treatments is .20 (see note 11
above). Taken at face value, these results might suggest that the active
ingredient in inoculation treatments is not the refutational component but
rather the element of warning.

20. One might think that refutational inoculation treatments would create
resistance by encouraging counterarguing (in response to subsequent
attack messages), but what little evidence exists on this matter appears not
to be encouraging (see, e.g., Benoit, 1991; Pfau et al., 1997, 2000). This is
especially puzzling given that (a) warnings of impending counterattitudinal
messages do stimulate counterarguing (as discussed shortly) and (b) such
warnings may be responsible for the resistance-creating effects of
inoculation treatments (as discussed in note 19 above).

21. In W. Wood and Quinn’s (2003) random-effects analysis, across 17
cases, the mean persuasion-inhibiting effect of warnings corresponded to a
correlation of .21. The mean effects associated with the different kinds of
warnings (warnings of intent to persuade, warnings of topic and position,
or a combination of these) were statistically indistinguishable (though
there were relatively few cases of each).

22. A number of complexities in the research literature on warning are
passed over here. For example, on topics that are not especially personally
relevant to receivers, warnings sometimes seem to initially produce
opinion change toward the to-be-advocated position (e.g., J. Cooper &
Jones, 1970), but this change is apparently an anticipatory strategic shift
meant to minimize the threat to self of having to change later in response
to the message (and hence this effect evaporates when the expectation of
the impending message is canceled); for discussion, see W. Wood and
Quinn (2003). As another example of complexity, at least some prosocial
solicitations appear to be made more persuasive if preceded by a warning
(Kennedy, 1982); this, however, might reflect processes engaged when
warning of an impending proattitudinal communication (e.g., such a
warning, instead of eliciting the counterarguing engendered by warnings of
counterattitudinal messages, might encourage supportive argumentation).

23. A random-effects analysis (using the methods of Borenstein &
Rothstein, 2005) of the correlations in S. K. West and O’Neal’s (2004)
Table 1 yields a nonsignificant (p = .18) mean correlation of .03 across 11
cases. A similar reanalysis of the effect sizes concerning drug use in Pan
and Bai’s (2009) Table 1, converted to correlations, yields a significant (p
= .02) mean correlation of .02 across 18 cases. The DARE program has
undergone some revisions, but it is not clear that the changes have
improved the program’s effectiveness (e.g., Vincus, Ringwalt, Harris, &
Shamblen, 2010).

24. This discussion (of avoidance motivations) offers a simplified treatment
of a complex subject. A number of different defensive reactions and
processes might be distinguished (including a variety of avoidant
defensive reactions), and it is not yet clear exactly how things will get
sorted out conceptually or empirically. For useful discussions, see
Blumberg (2000), Good and Abraham (2007), and van ’t Riet and Ruiter
(2013). And one might place defensive reactions in a larger framework,
such as Chaiken, Liberman, and Eagly’s (1989) differentiation of accuracy
motivation (when recipients want to align their views with the facts),
defense motivation (when recipients want to defend particular views), and
impression motivation (when recipients want to make a good impression
on others); or such as Briñol and Petty’s (2005) differentiation of
knowledge, consistency, self-worth, and social approval motivations. For a
general discussion of such frameworks in the context of defensive
processing, see Eagly (2007).

25. Avoidant defensive reactions reduce persuasiveness, but presumably
not simply because the recipient is avoiding thinking much about the issue.
After all, as the elaboration likelihood model (Chapter 8) suggests, even
when there is little issue-relevant thinking (little elaboration), persuasion
can still come about through the receiver’s use of heuristics. But in the
case of avoidance motivation, recipients don’t want to think about the
issue at all (and so don’t even use the cognitive shortcut of a heuristic).

26. A word of caution: Stapel and van der Linde’s (2011) report on self-
affirmation mechanisms was based on falsified data; see
https://www.commissielevelt.nl/.

References

Aaker, J. L., & Schmitt, B. (2001). Culture-dependent assimilation and
differentiation of the self: Preferences for consumption symbols in the
United States and China. Journal of Cross-Cultural Psychology, 32,
561–576.

Aarts, H., Paulussen, T., & Schaalma, H. (1997). Physical exercise habit:
On the conceptualization and formation of habitual health behaviours.
Health Education Research, 12, 363–374.

Abdulla, R. A. (2004). Entertainment-education in the Middle East:
Lessons from the Egyptian oral rehydration therapy campaign. In A.
Singhal, M. J. Cody, E. M. Rogers, & M. Sabido (Eds.), Entertainment-
education and social change: History, research, and practice (pp.
301–320). Mahwah, NJ: Lawrence Erlbaum.

Abelson, R. P. (1968). A summary of hypotheses on modes of resolution.
In R. P. Abelson, E. Aronson, W. J. McGuire, T. M. Newcomb, M. J.
Rosenberg, & P. H. Tannenbaum (Eds.), Theories of cognitive
consistency: A sourcebook (pp. 716–720). Chicago: Rand McNally.

Abelson, R. P. (1986). Beliefs are like possessions. Journal for the Theory
of Social Behavior, 16, 223–250.

Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ:
Lawrence Erlbaum.

Abelson, R. P., Kinder, D. R., Peters, M. D., & Fiske, S. T. (1982).
Affective and semantic components in political person perception.
Journal of Personality and Social Psychology, 42, 619–630.

Abelson, R. P., & Prentice, D. A. (1989). Beliefs as possessions: A
functional perspective. In A. R. Pratkanis, S. J. Breckler, & A. G.
Greenwald (Eds.), Attitude structure and function (pp. 361–381).
Hillsdale, NJ: Lawrence Erlbaum.

Abraham, C. (2008). Beyond stages of change: Multi-determinant
continuum models of action readiness and menu-based interventions.
Applied Psychology: An International Review, 57, 30–41.

Abraham, C. (2012). Developing evidence-based content for health
promotion materials. In C. Abraham & M. Kools (Eds.), Writing health
communication: An evidence-based guide (pp. 83–98). Los Angeles:
Sage.

Abraham, C., & Sheeran, P. (2004). Deciding to exercise: The role of
anticipated regret. British Journal of Health Psychology, 9, 269–278.

Adams, J., & White, M. (2005). Why don’t stage-based activity promotion
interventions work? Health Education Research, 20, 237–243.

Adams, W. C., & Beatty, M. J. (1977). Dogmatism, need for social
approval, and the resistance to persuasion. Communication
Monographs, 44, 321–325.

Adaval, R., & Wyer, R. S., Jr. (1998). The role of narratives in consumer
information processing. Journal of Consumer Psychology, 7, 207–245.

Adriaanse, M. A., de Ridder, D. T. D., & de Wit, J. B. F. (2009). Finding
the critical cue: Implementation intentions to change one’s diet work
best when tailored to personally relevant reasons for unhealthy eating.
Personality and Social Psychology Bulletin, 35, 60–71.

Adriaanse, M. A., Gollwitzer, P. M., de Ridder, D. T. D., de Wit, J. B. F.,
& Kroese, F. M. (2011). Breaking habits with implementation
intentions: A test of underlying processes. Personality and Social
Psychology Bulletin, 37, 502–513.

Adriaanse, M. A., Vinkers, C. D. W., de Ridder, D. T. D., Hox, J. J., & De
Wit, J. B. F. (2011). Do implementation intentions help to eat a healthy
diet? A systematic review and meta-analysis of the empirical evidence.
Appetite, 56, 183–193.

Agarwal, J., & Malhotra, N. K. (2005). An integrated model of attitude
and affect: Theoretical foundation and an empirical investigation.
Journal of Business Research, 58, 483–493.

Agarwal, N., Menon, G., & Aaker, J. L. (2007). Getting emotional about
health. Journal of Marketing Research, 44, 100–113.

Aggarwal, P., Jun, S. Y., & Huh, J. H. (2011). Scarcity messages: A
consumer competition perspective. Journal of Advertising, 40(3),
19–30.

Agnew, C. R. (1998). Modal versus individually-derived beliefs about
condom use: Measuring the cognitive underpinnings of the theory of
reasoned action. Psychology and Health, 13, 271–287.

Ahluwalia, R. (2000). Examination of psychological processes underlying
resistance to persuasion. Journal of Consumer Research, 27, 217–232.

Ahluwalia, R., & Burnkrant, R. E. (1993). A framework for explaining
multiple request effectiveness: The role of attitude toward the request.
Advances in Consumer Research, 20, 620–624.

Aitken, C. K., McMahon, T. A., Wearing, A. J., & Finlayson, B. L. (1994).
Residential water use: Predicting and reducing consumption. Journal of
Applied Social Psychology, 24, 136–158.

Ajzen, I. (1985). From intentions to actions: A theory of planned behavior.
In J. Kuhl & J. Beckmann (Eds.), Action control: From cognition to
behavior (pp. 11–39). Berlin: Springer-Verlag.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior
and Human Decision Processes, 50, 179–211.

Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of
control, and the theory of planned behavior. Journal of Applied Social
Psychology, 32, 1–19.

Ajzen, I. (2011). The theory of planned behaviour: Reactions and
reflections. Psychology & Health, 26, 1113–1127.

Ajzen, I., Albarracín, D., & Hornik, R. (Eds.). (2007). Prediction and
change of health behavior: Applying the reasoned action approach.
Mahwah, NJ: Lawrence Erlbaum.

Ajzen, I., & Cote, N. G. (2008). Attitudes and the prediction of behavior.
In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude change (pp.
289–311). New York: Psychology Press.

Ajzen, I., Czasch, C., & Flood, M.G. (2009). From intentions to behavior:
Implementation intention, commitment, and conscientiousness. Journal
of Applied Social Psychology, 39, 1356–1372.

Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical
analysis and review of empirical research. Psychological Bulletin, 84,
888–918.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting
social behavior. Englewood Cliffs, NJ: Prentice Hall.

Ajzen, I., & Fishbein, M. (2008). Scaling and testing multiplicative
combinations in the expectancy-value model of attitudes. Journal of
Applied Social Psychology, 38, 2222–2247.

Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior:
Attitudes, intentions, and perceived behavioral control. Journal of
Experimental Social Psychology, 22, 453–474.

Ajzen, I., & Manstead, A. S. R. (2007). Changing health-related
behaviours: An approach based on the theory of planned behaviour. In
M. Hewstone, H. A. W. Schut, J. B. F. de Wit, K. van den Bos, & M. S.
Stroebe (Eds.), The scope of social psychology: Theory and applications
(pp. 43–63). New York: Psychology Press.

Ajzen, I., Nichols, A. J., III, & Driver, B. L. (1995). Identifying salient
beliefs about leisure activities: Frequency of elicitation versus response
latency. Journal of Applied Social Psychology, 25, 1391–1410.

Ajzen, I., & Sexton, J. (1999). Depth of processing, belief congruence, and
attitude-behavior correspondence. In S. Chaiken & Y. Trope (Eds.),
Dual-process models in social psychology (pp. 117–138). New York:
Guilford.

Akhtar, O., Paunesku, D., & Tormala, Z. L. (2013). Weak > strong: The
ironic effect of argument strength on supportive advocacy. Personality
and Social Psychology Bulletin, 39, 1214–1226.

Akl, E. A., Oxman, A. D., Herrin, J., Vist, G. E., Terrenato, I., Sperati, F.,
… Schünemann, H. (2011). Framing of health information messages.
Cochrane Database of Systematic Reviews, 2011(12), CD006777.

Albarracín, D., Johnson, B. T., Fishbein, M., & Muellerleile, P. A. (2001).
Theories of reasoned action and planned behavior as models of condom
use: A meta-analysis. Psychological Bulletin, 127, 142–161.

Albarracín, D., & Wyer, R. S., Jr. (2001). Elaborative and nonelaborative
processing of a behavior-related communication. Personality and Social
Psychology Bulletin, 27, 691–705.

Alden, D. L., & Crowley, A. E. (1995). Improving the effectiveness of
condom advertising: A research note. Health Marketing Quarterly,
12(4), 25–38.

Alemi, F., Alemagno, S. A., Goldhagen, J., Ash, L., Finkelstein, B., Lavin,
A., … Ghadiri, A. (1996). Computer reminders improve on-time
immunization rates. Medical Care, 34, OS45–OS51.

Algie, J., & Rossiter, J. R. (2010). Fear patterns: A new approach to
designing road safety advertisements. Journal of Prevention and
Intervention in the Community, 38, 264–279.

Allcott, H., & Rogers, T. (2012). The short-run and long-run effects of
behavioral interventions: Experimental evidence from energy
conservation. NBER Working Paper No. w18492. Retrieved from
SSRN: http://ssrn.com/abstract=2167595.

Allen, C. T., Machleit, K. A., Kleine, S. S., & Notani, A. S. (2005). A
place for emotion in attitude models. Journal of Business Research, 58,
494–499.

Allen, M. (1991). Meta-analysis comparing the persuasiveness of one-sided and two-sided messages. Western Journal of Speech
Communication, 55, 390–404.

Allen, M. (1993). Determining the persuasiveness of message sidedness: A
prudent note about utilizing research summaries. Western Journal of
Communication, 57, 98–103.

Allen, M. (1998). Comparing the persuasive effectiveness of one- and two-sided messages. In M. Allen & R. W. Preiss (Eds.), Persuasion:
Advances through meta-analysis (pp. 87–98). Cresskill, NJ: Hampton.

Allen, M., Adamski, L., Bates, M., Bernhagen, M., Callendar, A., Casey,
M., … Zirbel, C. (2002). Effect of timing of communicator
identification and level of source credibility on attitude. Communication
Research Reports, 19, 46–55.

Allen, M., & Preiss, R. W. (1997). Comparing the persuasiveness of
narrative and statistical evidence using meta-analysis. Communication
Research Reports, 14, 125–131.

Allen, M. W., & Ng, S. H. (2003). Human values, utilitarian benefits and
identification: The case of meat. European Journal of Social
Psychology, 33, 37–56.

Allen, M. W., Ng, S. H., & Wilson, M. (2002). A functional approach to
instrumental and terminal values and the value-attitude-behaviour
system of consumer choice. European Journal of Marketing, 36,
111–135.

Allport, G. W. (1935). Attitudes. In C. Murchison (Ed.), A handbook of
social psychology (pp. 798–844). Worcester, MA: Clark University
Press.

Alós-Ferrer, C., Granić, Đ.-G., Shi, F., & Wagner, A. K. (2012). Choices
and preferences: Evidence from implicit choices and response times.
Journal of Experimental Social Psychology, 48, 1336–1342.

Alós-Ferrer, C., & Shi, F. (2012). Choice-induced preference change: In
defense of the free-choice paradigm. Social Science Research Network
(SSRN) Working Paper 2062507. doi: 10.2139/ssrn.2062507.

Al-Rafee, S., & Dashti, A. E. (2012). A cross cultural comparison of the
extended TPB: The case of digital piracy. Journal of Global Information
Technology Management, 15(1), 5–24.

Alwin, D. F. (1997). Feeling thermometers versus 7-point scales: Which
are better? Sociological Methods and Research, 25, 318–340.

Amass, L., Bickel, W. K., Higgins, S. T., Budney, A. J., & Foerg, F. E.
(1993). The taking of free condoms in a drug abuse treatment clinic:
The effects of location and posters. American Journal of Public Health,
83, 1466–1468.

Amos, C., Holmes, G., & Strutton, D. (2008). Exploring the relationship
between celebrity endorser effects and advertising effectiveness: A
quantitative synthesis of effect size. International Journal of
Advertising, 27, 209–234.

Andersen, K. E. (1961). An experimental study of the interaction of artistic
and nonartistic ethos in persuasion (Doctoral dissertation, University of
Wisconsin-Madison). ProQuest No. 6103079.

Andersen, R. E., Franckowiak, S. C., Snyder, J., Bartlett, S. J., & Fontaine,
K. R. (1998). Can inexpensive signs encourage the use of stairs? Results
from a community intervention. Annals of Internal Medicine, 129,
363–369.

Anderson, C. A. (1983). Imagination and expectation: The effect of
imagining behavioral scripts on personal intentions. Journal of
Personality and Social Psychology, 45, 293–305.

Anderson, L. (1970). An experimental study of reluctant and biased
authority-based assertions. Journal of the American Forensic
Association, 7, 79–84.

Anderson, L. R. (1970). Prediction of negative attitude from congruity,
summation, and logarithm formulae for the evaluation of complex
stimuli. Journal of Social Psychology, 81, 37–48.

Anderson, N. H. (1965). Averaging versus adding as a stimulus-combination rule in impression formation. Journal of Experimental
Psychology, 70, 394–400.

Anderson, N. H. (1971). Integration theory and attitude change.
Psychological Review, 78, 171–206.

Anderson, N. H. (1981a). Foundations of information integration theory.
New York: Academic Press.

Anderson, N. H. (1981b). Integration theory applied to cognitive responses
and attitudes. In R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.),
Cognitive responses in persuasion (pp. 361–397). Hillsdale, NJ:
Lawrence Erlbaum.

Anderson, N. H. (Ed.). (1991). Contributions to information integration
theory (Vols. 1–3). Hillsdale, NJ: Lawrence Erlbaum.

Anderson, R. (2009). Comparison of indirect sources of efficacy
information in pretesting messages for campaigns to prevent drunken
driving. Journal of Public Relations Research, 21, 428–454.

Anderson, R. B. (1995). Cognitive appraisal of performance capability in
the prevention of drunken driving: A test of self-efficacy theory. Journal
of Public Relations Research, 7, 205–229.

Anderson, R. B. (2000). Vicarious and persuasive influences on efficacy
expectations and intentions to perform breast self-examination. Public
Relations Review, 26, 97–114.

Anderson, R. B., & McMillion, P. Y. (1995). Effects of similar and
diversified modeling on African American women’s efficacy
expectations and intentions to perform breast self-examination. Health
Communication, 7, 327–343.

Andersson, E. K., & Moss, T. P. (2011). Imagery and implementation
intention: A randomised controlled trial of interventions to increase
exercise behaviour in the general population. Psychology of Sport and
Exercise, 12, 63–70.

Andreoli, V., & Worchel, S. (1978). Effects of media, communicator, and
message position on attitude change. Public Opinion Quarterly, 42,
59–70.

Andrews, K. R., Carpenter, C. J., Shaw, A. S., & Boster, F. J. (2008). The
legitimization of paltry favors effect: A review and meta-analysis.
Communication Reports, 21, 59–69.

Appel, M., & Mara, M. (2013). The persuasive influence of a fictional
character’s trustworthiness. Journal of Communication, 63, 912–932.

Appel, M., & Richter, T. (2007). Persuasive effects of fictional narratives
increase over time. Media Psychology, 10, 113–134.

Appel, M., & Richter, T. (2010). Transportation and need for affect in
narrative persuasion: A mediated moderation model. Media Psychology,
13, 101–135.

Applbaum, R. L., & Anatol, K. W. E. (1972). The factor structure of
source credibility as a function of the speaking situation. Speech
Monographs, 39, 216–222.

Applbaum, R. L., & Anatol, K. W. E. (1973). Dimensions of source
credibility: A test for reproducibility. Speech Monographs, 40,
231–237.

Apsler, R., & Sears, D. O. (1968). Warning, personal involvement, and
attitude change. Journal of Personality and Social Psychology, 9,
162–166.

Areni, C. S., & Lutz, R. J. (1988). The role of argument quality in the
elaboration likelihood model. Advances in Consumer Research, 15,
197–203.

Armitage, C. J. (2009). Is there utility in the transtheoretical model?
British Journal of Health Psychology, 14, 195–210.

Armitage, C. J., & Conner, M. (1999a). Distinguishing perceptions of
control from self-efficacy: Predicting consumption of a low-fat diet
using the theory of planned behavior. Journal of Applied Social
Psychology, 29, 72–90.

Armitage, C. J., & Conner, M. (1999b). The theory of planned behaviour:
Assessment of predictive validity and “perceived control.” British
Journal of Social Psychology, 38, 35–54.

Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned
behaviour: A meta-analytic review. British Journal of Social
Psychology, 40, 471–499.

Armitage, C. J., Conner, M., & Norman, P. (1999). Differential effects of
mood on information processing: Evidence from the theories of
reasoned action and planned behaviour. European Journal of Social
Psychology, 29, 419–433.

Armitage, C. J., Harris, P. R., & Arden, M. A. (2011). Evidence that self-
affirmation reduces alcohol consumption: Randomized exploratory trial
with a new, brief means of self-affirming. Health Psychology, 30,
633–641.

Armitage, C. J., Reid, J. C., & Spencer, C. P. (2011). Evidence that
implementation intentions reduce single-occupancy car use in a rural
population: Moderating effects of compliance with instructions.
Transportmetrica, 7, 455–466.

Armitage, C. J., & Reidy, J. G. (2008). Use of mental simulations to
change theory of planned behaviour variables. British Journal of Health
Psychology, 13, 513–524.

Armitage, C. J., & Talibudeen, L. (2010). Test of a brief theory of planned
behaviour-based intervention to promote adolescent safe sex intentions.
British Journal of Psychology, 101, 155–172.

Armstrong, A. W., Watson, A. J., Makredes, M., Frangos, J. E., Kimball,
A. B., & Kvedar, J. C. (2009). Text-message reminders to improve
sunscreen use: A randomized, controlled trial using electronic
monitoring. Archives of Dermatology, 145, 1230–1236.

Armstrong, C. L., & McAdams, M. J. (2009). Blogs of information: How
gender cues and individual motivations influence perceptions of
credibility. Journal of Computer-Mediated Communication, 14,
435–456.

Armstrong, J. S. (2010). Persuasive advertising: Evidence-based
principles. New York: Palgrave Macmillan.

Arnold, W. E., & McCroskey, J. C. (1967). The credibility of reluctant
testimony. Central States Speech Journal, 18, 97–103.

Aronson, E. (1968). Dissonance theory: Progress and problems. In R. P.
Abelson, E. Aronson, W. J. McGuire, T. M. Newcomb, M. J.
Rosenberg, & P. H. Tannenbaum (Eds.), Theories of cognitive
consistency: A sourcebook (pp. 5–27). Chicago: Rand McNally.

Aronson, E. (1992). The return of the repressed: Dissonance theory makes
a comeback. Psychological Inquiry, 3, 303–311.

Aronson, E. (1999). Dissonance, hypocrisy, and the self-concept. In E.
Harmon-Jones & J. Mills (Eds.), Cognitive dissonance: Progress on a
pivotal theory in social psychology (pp. 103–126). Washington, DC:
American Psychological Association.

Aronson, E., & Carlsmith, J. M. (1963). Effect of the severity of threat on
the devaluation of forbidden behavior. Journal of Abnormal and Social
Psychology, 66, 584–588.

Aronson, E., Fried, C., & Stone, J. (1991). Overcoming denial and
increasing the intention to use condoms through the induction of
hypocrisy. American Journal of Public Health, 81, 1636–1638.

Aronson, E., Turner, J. A., & Carlsmith, J. M. (1963). Communicator
credibility and communication discrepancy as determinants of opinion
change. Journal of Abnormal and Social Psychology, 67, 31–36.

Aronson, J., Cohen, G., & Nail, P. R. (1999). Self-affirmation theory: An
update and appraisal. In E. Harmon-Jones & J. Mills (Eds.), Cognitive
dissonance: Progress on a pivotal theory in social psychology (pp.
127–147). Washington, DC: American Psychological Association.

Arriaga, X. B., & Longoria, Z. N. (2011). Implementation intentions
increase parent-teacher communication among Latinos. Basic and
Applied Social Psychology, 33, 365–373.

Ashford, S., Edmunds, J., & French, D. P. (2010). What is the best way to
change self-efficacy to promote lifestyle and recreational physical
activity? A systematic review with meta-analysis. British Journal of
Health Psychology, 15, 265–288.

Astrom, A. N., & Rise, J. (2001). Young adults’ intention to eat healthy
food: Extending the theory of planned behavior. Psychology and Health,
16, 223–237.

Atkins, A. L., Deaux, K. K., & Bieri, J. (1967). Latitude of acceptance and
attitude change: Empirical evidence for a reformulation. Journal of
Personality and Social Psychology, 6, 47–54.

Atkinson, D. R., Winzelberg, A., & Holland, A. (1985). Ethnicity, locus of
control for family planning, and pregnancy counselor credibility.
Journal of Counseling Psychology, 32, 417–421.

Audi, R. (1972). On the conception and measurement of attitudes in
contemporary Anglo-American psychology. Journal for the Theory of
Social Behavior, 2, 179–203.

Austin, E. W., de Vord, R. V., Pinkleton, B. E., & Epstein, E. (2008).
Celebrity endorsements and their potential to motivate young voters.
Mass Communication & Society, 11, 420–436.

Austin, J., Alvero, A. M., & Olson, R. (1998). Prompting patron safety belt
use at a restaurant. Journal of Applied Behavior Analysis, 31, 655–657.

Austin, J., Sigurdsson, S. O., & Rubin, Y. S. (2006). An examination of
the effects of delayed versus immediate prompts on safety belt use.
Environment and Behavior, 38, 140–149.

Averbeck, J. M., Jones, A., & Robertson, K. (2011). Prior knowledge and
health messages: An examination of affect as heuristics and information
as systematic processing for fear appeals. Southern Communication
Journal, 76, 35–54.

Axsom, D., Yates, S., & Chaiken, S. (1987). Audience response as a
heuristic cue in persuasion. Journal of Personality and Social
Psychology, 53, 30–40.

Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects
modeling with crossed random effects for subjects and items. Journal of
Memory and Language, 59, 390–412.

Babrow, A. S., & O’Keefe, D. J. (1984). Construct differentiation as a
moderator of attitude-behavior consistency: A failure to confirm.
Central States Speech Journal, 35, 160–165.

Bagozzi, R. P. (1984). Expectancy-value attitude models: An analysis of
critical measurement issues. International Journal of Research in
Marketing, 1, 295–310.

Bagozzi, R. P. (1985). Expectancy-value attitude models: An analysis of
critical theoretical issues. International Journal of Research in
Marketing, 2, 43–60.

Bagozzi, R. P., Baumgartner, H., & Yi, Y. (1992). State versus action
orientation and the theory of reasoned action: An application to coupon
usage. Journal of Consumer Research, 18, 505–518.

Bagozzi, R. P., Lee, K. H., & Van Loo, M. F. (2001). Decisions to donate
bone marrow: The role of attitudes and subjective norms across cultures.
Psychology and Health, 16, 29–56.

Bailis, D. S., Fleming, J. A., & Segall, A. (2005). Self-determination and
functional persuasion to encourage physical activity. Psychology and
Health, 20, 691–708.

Bakker, A. B. (1999). Persuasive communication about AIDS prevention:
Need for cognition determines the impact of message format. AIDS
Education and Prevention, 11, 150–162.

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game
called psychological science. Perspectives on Psychological Science, 7,
543–554.

Balmford, J., Borland, R., & Burney, S. (2008). Is contemplation a
separate stage of change to precontemplation? International Journal of
Behavioral Medicine, 15, 141–148.

Bamberg, S. (2003). How does environmental concern influence specific
environmentally related behaviors? A new answer to an old question.
Journal of Environmental Psychology, 23, 21–32.

Bamberg, S. (2007). Is a stage model a useful approach to explain car
drivers’ willingness to use public transportation? Journal of Applied
Social Psychology, 37, 1757–1783.

Banaji, M. R., & Heiphetz, L. (2010). Attitudes. In S. T. Fiske, D. T.
Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed.,
Vol. 1, pp. 353–393). Hoboken, NJ: Wiley.

Banas, J. A., & Rains, S. A. (2010). A meta-analysis of research on
inoculation theory. Communication Monographs, 77, 281–311.

Banas, J. A., & Turner, M. M. (2011). Exploring the “that’s-not-all” effect:
A test of theoretical explanations. Southern Communication Journal, 76,
305–322.

Banas, J. A., Turner, M. M., & Shulman, H. (2012). A test of competing
hypotheses of the effects of mood on persuasion. Communication
Quarterly, 60, 143–164.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York:
Freeman.

Banerjee, S. C., & Greene, K. (2012). Role of transportation in the
persuasion process: Cognitive and affective responses to antidrug
narratives. Journal of Health Communication, 17, 564–581.

Bansal, H. S., & Taylor, S. F. (2002). Investigating interactive effects in
the theory of planned behavior in a service-provider switching context.
Psychology and Marketing, 19, 407–425.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable
distinction in social psychological research: Conceptual, strategic, and
statistical considerations. Journal of Personality and Social Psychology,
51, 1173–1182.

Baron, R. S., Baron, P. H., & Miller, N. (1973). The relation between
distraction and persuasion. Psychological Bulletin, 80, 310–323.

Basil, D. Z., & Herr, P. M. (2006). Attitudinal balance and cause-related
marketing: An empirical application of balance theory. Journal of
Consumer Psychology, 16, 391–403.

Basil, M., & Witte, K. (2012). Health risk message design using the
extended parallel process model. In H. Cho (Ed.), Health
communication message design: Theory and practice (pp. 41–58). Los
Angeles: Sage.

Batra, R., & Homer, P. M. (2004). The situational impact of brand image
beliefs. Journal of Consumer Psychology, 14, 318–330.

Baudhuin, E. S., & Davis, M. K. (1972). Scales for the measurement of
ethos: Another attempt. Speech Monographs, 39, 296–301.

Baumeister, R. F., Reis, H. T., & Delespaul, P. A. E. G. (1995). Subjective
and experiential correlates of guilt in daily life. Personality and Social
Psychology Bulletin, 21, 1256–1268.

Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. (1995). Personal
narratives about guilt: Role in action control and interpersonal
relationships. Basic and Applied Social Psychology, 17, 173–198.

Bazzini, D. G., & Shaffer, D. R. (1995). Investigating the social-adjustive
and value-expressive functions of well-grounded attitudes: Implications
for change and for subsequent behavior. Motivation and Emotion, 19,
279–305.

Beaman, A. L., Cole, C. M., Preston, M., Klentz, B., & Steblay, N. M.
(1983). Fifteen years of foot-in-the-door research: A meta-analysis.
Personality and Social Psychology Bulletin, 9, 181–196.

Bearden, W. O., Shuptrine, F. K., & Teel, J. E. (1989). Self-monitoring
and reactions to image appeals and claims about product quality.
Advances in Consumer Research, 16, 703–710.

Beatty, M. J., & Behnke, R. R. (1980). Teacher credibility as a function of
verbal content and paralinguistic cues. Communication Quarterly, 28(1),
55–59.

Beatty, M. J., & Kruger, M. W. (1978). The effects of heckling on speaker
credibility and attitude change. Communication Quarterly, 26(2), 46–50.

Beauvois, J.-L., & Joule, R. V. (1999). A radical point of view on
dissonance theory. In E. Harmon-Jones & J. Mills (Eds.), Cognitive
dissonance: Progress on a pivotal theory in social psychology (pp.
43–70). Washington, DC: American Psychological Association.

Becker, C. B., Smith, L. M., & Ciao, A. C. (2006). Peer-facilitated eating
disorder prevention: A randomized effectiveness trial of cognitive
dissonance and media advocacy. Journal of Counseling Psychology, 53,
550–555.

Beisecker, T. D., & Parson, D. W. (1972). Introduction. In T. D. Beisecker
& D. W. Parson (Eds.), The process of social influence (pp. 1–6).
Englewood Cliffs, NJ: Prentice Hall.

Belch, G. E., & Belch, M. A. (1987). The application of an expectancy
value operationalization of function theory to examine attitudes of
boycotters and nonboycotters of a consumer product. Advances in
Consumer Research, 14, 232–236.

Beltramini, R. F., & Sirsi, A. K. (1992). Physician information acquisition
and believability. Journal of Health Care Marketing, 12(4), 52–59.

Bem, D. J. (1972). Self-perception theory. In L. Berkowitz (Ed.),
Advances in experimental social psychology (Vol. 6, pp. 1–62). New
York: Academic Press.

Bennett, P. D., & Harrell, G. D. (1975). The role of confidence in
understanding and predicting buyers’ attitudes and purchase intentions.
Journal of Consumer Research, 2, 110–117.

Benoit, W. L. (1991). Two tests of the mechanism of inoculation theory.
Southern Communication Journal, 56, 219–229.

Bensley, L. S., & Wu, R. (1991). The role of psychological reactance in
drinking following alcohol prevention messages. Journal of Applied
Social Psychology, 21, 1111–1124.

Bergin, A. E. (1962). The effect of dissonant persuasive communications
upon changes in a self-referring attitude. Journal of Personality, 30,
423–438.

Berkowitz, A. D. (2005). An overview of the social norms approach. In L.
C. Lederman & L. P. Stewart (Eds.), Changing the culture of college
drinking: A socially situated health communication campaign (pp.
193–214). Cresskill, NJ: Hampton Press.

Berlo, D. K., Lemert, J. B., & Mertz, R. J. (1969). Dimensions for
evaluating the acceptability of message sources. Public Opinion
Quarterly, 33, 563–576.

Bernard, M. M., Maio, G. R., & Olson, J. M. (2003). The vulnerability of
values to attack: Inoculation of values and value-relevant attitudes.
Personality and Social Psychology Bulletin, 29, 63–75.

Berry, T. R., & Howe, B. L. (2005). The effects of exercise advertising on
self-efficacy and decisional balance. American Journal of Health
Behavior, 29, 117–126.

Berscheid, E. (1985). Interpersonal attraction. In G. Lindzey & E. Aronson
(Eds.), Handbook of social psychology (3rd ed., Vol. 2, pp. 413–484).
New York: Random House.

Berscheid, E., & Walster, E. (1974). Physical attractiveness. In L.
Berkowitz (Ed.), Advances in experimental social psychology (Vol. 7,
pp. 157–215). New York: Academic Press.

Betsch, T., Kaufmann, M., Lindow, F., Plessner, H., & Hoffmann, K.
(2006). Different principles of information integration in implicit and
explicit attitude formation. European Journal of Social Psychology, 36,
887–905.

Biek, M., Wood, W., & Chaiken, S. (1996). Working knowledge,
cognitive processing, and attitudes: On the determinants of bias.
Personality and Social Psychology Bulletin, 22, 547–556.

Biglan, A., Glasgow, R., Ary, D., Thompson, R., Severson, H.,
Lichtenstein, E., … Gallison, C. (1987). How generalizable are the
effects of smoking prevention programs? Refusal skills training and
parent messages in a teacher-administered program. Journal of
Behavioral Medicine, 10, 613–628.

Bilandzic, H., & Busselle, R. (2013). Narrative persuasion. In J. P. Dillard
& L. Shen (Eds.), The SAGE handbook of persuasion: Developments in
theory and practice (2nd ed., pp. 200–219). Thousand Oaks, CA: Sage.

Birkimer, J. C., Johnston, P. L., & Berry, M. M. (1993). Guilt and help
from friends: Variables related to healthy behavior. Journal of Social
Psychology, 133, 683–692.

Biswas, D., Biswas, A., & Das, N. (2006). The differential effects of
celebrity and expert endorsements on consumer risk perceptions: The
role of consumer knowledge, perceived congruency, and product
technology orientation. Journal of Advertising, 35(2), 17–31.

Bither, S. W., Dolich, I. J., & Nell, E. B. (1971). The application of
attitude immunization techniques in marketing. Journal of Marketing
Research, 8, 56–61.

Blake, H., Lee, S., Stanton, T., & Gorely, T. (2008). Workplace
intervention to promote stair-use in an NHS setting. International
Journal of Workplace Health Management, 1, 162–175.

Bless, H., Bohner, G., Schwarz, N., & Strack, F. (1990). Mood and
persuasion: A cognitive response analysis. Personality and Social
Psychology Bulletin, 16, 331–345.

Bless, H., Mackie, D. M., & Schwarz, N. (1992). Mood effects on attitude
judgments: Independent effects of mood before and after message
elaboration. Journal of Personality and Social Psychology, 63, 585–595.

Bless, H., & Schwarz, N. (1999). Sufficient and necessary conditions in
dual-process models: The case of mood and information processing. In
S. Chaiken & Y. Trope (Eds.), Dual-process models in social
psychology (pp. 423–440). New York: Guilford.

Blumberg, S. J. (2000). Guarding against threatening HIV prevention
messages: An information-processing model. Health Education and
Behavior, 27, 780–795.

Bochner, S., & Insko, C. A. (1966). Communicator discrepancy, source
credibility, and opinion change. Journal of Personality and Social
Psychology, 4, 614–621.

Bock, D. G., & Saine, T. J. (1975). The impact of source credibility,
attitude valence, and task sensitivity on trait errors in speech evaluation.
Speech Monographs, 42, 229–236.

Bodenhausen, G. V., & Gawronski, B. (2013). Attitude change. In D.
Reisberg (Ed.), The Oxford handbook of cognitive psychology. New
York: Oxford University Press.
doi:10.1093/oxfordhb/9780195376746.013.0060.

Bodur, H. O., Brinberg, D., & Coupey, E. (2000). Belief, affect, and
attitude: Alternative models of the determinants of attitude. Journal of
Consumer Psychology, 9, 17–28.

Boen, F., Maurissen, K., & Opdenacker, J. (2010). A simple health sign
increases stair use in a shopping mall and two train stations in Flanders,
Belgium. Health Promotion International, 25, 183–191.

Bohner, G., & Dickel, N. (2011). Attitudes and attitude change. Annual
Review of Psychology, 62, 391–417.

Bohner, G., Erb, H.-P., & Siebler, F. (2008). Information processing
approaches to persuasion: Integrating assumptions for the dual- and
single-processing perspectives. In W. D. Crano & R. Prislin (Eds.),
Attitudes and attitude change (pp. 161–188). New York: Psychology
Press.

Bohner, G., Ruder, M., & Erb, H.-P. (2002). When expertise backfires:
Contrast and assimilation effects in persuasion. British Journal of Social
Psychology, 41, 495–519.

Bohner, G., & Weinerth, T. (2001). Negative affect can increase or
decrease message scrutiny: The affect interpretation hypothesis.
Personality and Social Psychology Bulletin, 27, 1417–1428.

Bolsen, T., Druckman, J. N., & Cook, F. L. (2014). How frames can
undermine support for scientific adaptations: Politicization and the
status-quo bias. Public Opinion Quarterly, 78, 1–26.

Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle,
J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social
influence and political mobilization. Nature, 489, 295–298.

Booth, A. R., Norman, P., Harris, P. R., & Goyder, E. (2014). Using the
theory of planned behaviour and self-identity to explain chlamydia
testing intentions in young people living in deprived areas. British
Journal of Health Psychology, 19, 101–112.

Booth-Butterfield, S., & Reger, B. (2004). The message changes belief and
the rest is theory: The “1% or less” milk campaign and reasoned action.
Preventive Medicine, 39, 581–588.

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R.


(2009). Introduction to meta-analysis. Chichester, UK: Wiley.

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R.


(2010). A basic introduction to fixed-effect and random-effects models
for meta-analysis. Research Synthesis Methods, 1, 97–111.

Borenstein, M., & Rothstein, H. (2005). Comprehensive meta-analysis


(Version 2.2.023) [Computer software]. Englewood, NJ: Biostat.

Borgida, E., & Campbell, B. (1982). Belief relevance and attitude-behavior consistency: The moderating role of personal experience.
Journal of Personality and Social Psychology, 42, 239–247.

Boster, F. J., & Mongeau, P. (1984). Fear-arousing persuasive messages.
Communication Yearbook, 8, 330–375.

Botta, R. A., Dunker, K., Fenson-Hood, K., Maltarich, S., & McDonald, L.
(2008). Using a relevant threat, EPPM and interpersonal communication
to change hand-washing behaviours on campus. Journal of
Communication in Healthcare, 1, 373–381.

Bouman, M. (2004). Entertainment-education television drama in the
Netherlands. In A. Singhal, M. J. Cody, E. M. Rogers, & M. Sabido
(Eds.), Entertainment-education and social change: History, research,
and practice (pp. 225–242). Mahwah, NJ: Lawrence Erlbaum.

Bowers, J. W., & Phillips, W. A. (1967). A note on the generality of
source-credibility scales. Speech Monographs, 34, 185–186.

Bradac, J. J. (1986). Threats to generalization in the use of elicited,
purloined, and contrived messages in human communication research.
Communication Quarterly, 34, 55–65.

Bradac, J. J., Bowers, J. W., & Courtright, J. A. (1980). Lexical variations
in intensity, immediacy, and diversity: An axiomatic theory and causal
model. In R. N. St. Clair & H. Giles (Eds.), The social and
psychological contexts of language (pp. 193–223). Hillsdale, NJ:
Lawrence Erlbaum.

Bradley, P. H. (1981). The folk-linguistics of women’s speech: An
empirical examination. Communication Monographs, 48, 73–90.

Brannon, L. A., & Brock, T. C. (1994). Test of schema correspondence
theory of persuasion: Effects of matching an appeal to actual, ideal, and
product “selves.” In E. M. Clark, T. C. Brock, & D. W. Stewart (Eds.),
Attention, attitude, and affect in response to advertising (pp. 169–188).
Hillsdale, NJ: Lawrence Erlbaum.

Brannon, L. A., & Brock, T. C. (2001). Scarcity claims elicit extreme
responding to persuasive messages: Role of cognitive elaboration.
Personality and Social Psychology Bulletin, 27, 365–375. [Erratum
notice: Personality and Social Psychology Bulletin, 27 (2001), 639.]

Branstrom, R., Ullen, H., & Brandberg, Y. (2004). Attitudes, subjective
norms and perception of behavioural control as predictors of sun-related
behaviour in Swedish adults. Preventive Medicine, 39, 992–999.

Brashers, D. E., & Jackson, S. (1999). Changing conceptions of message
effects: A 24-year overview. Human Communication Research, 25,
457–477.

Braverman, J. (2008). Testimonials versus informative persuasive
messages: The moderating effect of delivery mode and personal
involvement. Communication Research, 35, 666–694.

Brehm, J. W. (1956). Postdecision changes in the desirability of
alternatives. Journal of Abnormal and Social Psychology, 52, 384–389.

Brehm, J. W. (1966). A theory of psychological reactance. New York:
Academic Press.

Brehm, J. W. (2007). A brief history of dissonance theory. Social and
Personality Psychology Compass, 1, 381–391.

Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A theory of
freedom and control. New York: Academic Press.

Breivik, E., & Supphellen, M. (2003). Elicitation of product attributes in
an evaluation context: A comparison of three elicitation techniques.
Journal of Economic Psychology, 24, 77–98.

Brickell, T. A., Chatzisarantis, N. L. D., & Pretty, G. M. (2006).
Autonomy and control: Augmenting the validity of the theory of
planned behaviour in predicting exercise. Journal of Health Psychology,
11, 51–63.

Bridle, C., Riemsma, R. P., Pattenden, J., Sowden, A. J., Mather, L., Watt,
I. S., & Walker, A. (2005). Systematic review of the effectiveness of
health behavior interventions based on the transtheoretical model.
Psychology and Health, 20, 283–301.

Brinberg, D., & Durand, J. (1983). Eating at fast-food restaurants: An
analysis using two behavioral intention models. Journal of Applied
Social Psychology, 13, 459–472.

Briñol, P., & Petty, R. E. (2005). Individual differences in attitude change.
In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), Handbook of
attitudes (pp. 575–615). Mahwah, NJ: Lawrence Erlbaum.

Briñol, P., & Petty, R. E. (2009a). Persuasion: Insights from the self-
validation hypothesis. In M. P. Zanna (Ed.), Advances in experimental
social psychology (Vol. 41, pp. 69–118). New York: Academic Press.

Briñol, P., & Petty, R. E. (2009b). Source factors in persuasion: A self-validation approach. European Review of Social Psychology, 20, 49–96.

Briñol, P., Petty, R. E., & McCaslin, M. J. (2009). Changing attitudes on
implicit versus explicit measures: What is the difference? In R. E. Petty,
R. H. Fazio, & P. Briñol (Eds.), Attitudes: Insights from the new
implicit measures (pp. 285–326). New York: Psychology Press.

Briñol, P., Rucker, D. D., Tormala, Z. L., & Petty, R. E. (2004). Individual
differences in resistance to persuasion: The role of beliefs and meta-
beliefs. In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion
(pp. 83–104). Mahwah, NJ: Lawrence Erlbaum.

Brock, T. C. (1965). Communicator–recipient similarity and decision


change. Journal of Personality and Social Psychology, 1, 650–654.

Brock, T. C. (1968). Implications of commodity theory for value change.


In A. G. Greenwald, T. C. Brock, & T. M. Ostrom (Eds.), Psychological
foundations of attitudes (pp. 243–275). New York: Academic Press.

Brodsky, S. L., Griffin, M. P., & Cramer, R. J. (2010). The witness


credibility scale: An outcome measure for expert witness research.
Behavioral Sciences & the Law, 28, 892–907.

Broemer, P. (2002). Relative effectiveness of differently framed health


messages: The influence of ambivalence. European Journal of Social
Psychology, 32, 685–703.

434
Brouwers, M. C., & Sorrentino, R. M. (1993). Uncertainty orientation and
protection motivation theory: The role of individual differences in
health compliance. Journal of Personality and Social Psychology, 65,
102–112.

Brown, S., Birch, D., Thyagaraj, S., Teufel, J., & Phillips, C. (2007).
Effects of a single-lesson tobacco prevention curriculum on knowledge,
skill identification and smoking intention. Journal of Drug Education,
37, 55–69.

Brown, S. P., Cron, W. L., & Slocum, J. W., Jr. (1997). Effects of goal-
directed emotions on salesperson volitions, behavior, and performance:
A longitudinal study. Journal of Marketing, 61 (1), 39–50.

Brown, S. P., & Stayman, D. M. (1992). Antecedents and consequences of attitude toward the ad: A meta-analysis. Journal of Consumer Research, 19, 34–51.

Brown, T. J., Ham, S. H., & Hughes, M. (2010). Picking up litter: An application of theory-based communication to influence tourist behaviour in protected areas. Journal of Sustainable Tourism, 18, 879–900.

Browne, B. A., & Kaldenberg, D. O. (1997). Self-monitoring and image appeals in advertising. Psychological Reports, 81, 1267–1275.

Brownstein, A. L. (2003). Biased predecision processing. Psychological Bulletin, 129, 545–568.

Bryant, J., Brown, D., Silberberg, A. R., & Elliott, S. M. (1981). Effects of humorous illustrations in college textbooks. Human Communication Research, 8, 43–57.

Budd, R. J. (1986). Predicting cigarette use: The need to incorporate measures of salience in the “theory of reasoned action.” Journal of Applied Social Psychology, 16, 663–685.

Budd, R. J., North, D., & Spencer, C. (1984). Understanding seat-belt use:
A test of Bentler and Speckart’s extension of the theory of reasoned
action. European Journal of Social Psychology, 14, 69–78.

Budd, R. J., & Spencer, C. (1984). Latitude of rejection, centrality, and certainty: Variables affecting the relationship between attitudes, norms, and behavioural intentions. British Journal of Social Psychology, 23, 1–8.

Buller, D. B., & Hall, J. R. (1998). The effects of distraction during persuasion. In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances through meta-analysis (pp. 155–173). Cresskill, NJ: Hampton.

Burack, R. C., & Gimotty, P. A. (1997). Promoting screening mammography in inner-city settings: The sustained effectiveness of computerized reminders in a randomized controlled trial. Medical Care, 35, 921–932.

Burger, J. M. (1999). The foot-in-the-door compliance procedure: A multiple-process analysis and review. Personality and Social Psychology Review, 3, 303–325.

Burger, J. M., Bell, H., Harvey, K., Johnson, J., Stewart, C., Dorian, K., & Swedroe, M. (2010). Nutritious or delicious? The effect of descriptive norm information on food choice. Journal of Social and Clinical Psychology, 29, 228–242.

Burger, J. M., & Caldwell, D. F. (2003). The effects of monetary incentives and labeling on the foot-in-the-door effect: Evidence for a self-perception process. Basic and Applied Social Psychology, 25, 235–241.

Burger, J. M., & Guadagno, R. E. (2003). Self-concept clarity and the foot-in-the-door procedure. Basic and Applied Social Psychology, 25, 79–86.

Burger, J. M., LaSalvia, C. T., Hendricks, L. A., Mehdipour, T., & Neudeck, E. M. (2011). Partying before the party gets started: The effects of descriptive norms on pregaming behavior. Basic and Applied Social Psychology, 33, 220–227.

Burger, J. M., Messian, N., Patel, S., del Prado, A., & Anderson, C. (2004). What a coincidence! The effects of incidental similarity on compliance. Personality and Social Psychology Bulletin, 30, 35–43.

Burger, J. M., Reed, M., DeCesare, K., Rauner, S., & Rozolis, J. (1999). The effects of initial request size on compliance: More about the that’s-not-all technique. Basic and Applied Social Psychology, 21, 243–249.

Burger, J. M., & Shelton, M. (2011). Changing everyday health behaviors through descriptive norm manipulations. Social Influence, 6, 69–77.

Burgoon, J. K. (1976). The ideal source: A reexamination of source credibility measurement. Central States Speech Journal, 27, 200–206.

Burgoon, M., Alvaro, E. M., Broneck, K., Miller, C., Grandpre, J. R., Hall, J. R., & Frank, C. A. (2002). Using interactive media tools to test substance abuse prevention messages. In W. D. Crano & M. Burgoon (Eds.), Mass media and drug prevention: Classic and contemporary theories and research (pp. 67–87). Mahwah, NJ: Lawrence Erlbaum.

Burgoon, M., Hall, J., & Pfau, M. (1991). A test of the “messages-as-fixed-effect fallacy” argument: Empirical and theoretical implications of design choices. Communication Quarterly, 39, 18–34.

Burke, C. J. (1953). A brief note on one-tailed tests. Psychological Bulletin, 50, 384–387.
Burkley, E. (2008). The role of self-control in resistance to persuasion.
Personality and Social Psychology Bulletin, 34, 419–432.

Burnell, P., & Reeve, A. (1984). Persuasion as a political concept. British Journal of Political Science, 14, 393–410.

Burnkrant, R. E., & Howard, D. J. (1984). Effects of the use of introductory rhetorical questions versus statements on information processing. Journal of Personality and Social Psychology, 47, 1218–1230.

Busselle, R., & Bilandzic, H. (2009). Measuring narrative engagement. Media Psychology, 12, 321–347.

Buunk, A. P., & Dijkstra, P. (2011). Does attractiveness sell? Women’s attitude toward a product as a function of model attractiveness, gender priming, and social comparison orientation. Psychology and Marketing, 28, 958–973.

Byrne, D. (1969). Attitudes and attraction. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 4, pp. 35–89). New York: Academic Press.

Byrne, S., Guillory, J. E., Mathios, A. D., Avery, R. J., & Hart, P. S. (2012). The unintended consequences of disclosure: Effect of manipulating sponsor identification on the perceived credibility and effectiveness of smoking cessation advertisements. Journal of Health Communication, 17, 1119–1137.

Cacioppo, J. T., Gardner, W. L., & Berntson, G. G. (1997). Beyond bipolar conceptualizations and measures: The case of attitudes and evaluative space. Personality and Social Psychology Review, 1, 3–25.

Cacioppo, J. T., Harkins, S. G., & Petty, R. E. (1981). The nature of attitudes and cognitive responses and their relationships to behavior. In R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion (pp. 31–54). Hillsdale, NJ: Lawrence Erlbaum.

Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of
Personality and Social Psychology, 42, 116–131.

Cacioppo, J. T., & Petty, R. E. (1984). The elaboration likelihood model of persuasion. Advances in Consumer Research, 11, 673–675.

Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119, 197–253.

Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology, 51, 1032–1043.

Cacioppo, J. T., Petty, R. E., & Stoltenberg, C. D. (1985). Processes of social influence: The elaboration likelihood model of persuasion. In P. C. Kendall (Ed.), Advances in cognitive-behavioral research and therapy (Vol. 4, pp. 215–274). New York: Academic Press.

Cacioppo, J. T., von Hippel, W., & Ernst, J. M. (1997). Mapping cognitive structures and processes through verbal content: The thought-listing technique. Journal of Consulting and Clinical Psychology, 65, 928–940.

Cafri, G., Kromrey, J. D., & Brannick, M. T. (2010). A meta-meta-analysis: Empirical review of statistical power, type I error rates, effect sizes, and model selection of meta-analyses published in psychology. Multivariate Behavioral Research, 45, 239–270.

Cahill, K., Lancaster, T., & Green, N. (2010). Stage-based interventions for smoking cessation. Cochrane Database of Systematic Reviews, 2010(11), CD004492.

Callaghan, R. C., & Taylor, L. (2007). Is stage-matched process-use associated with the transition from the preparation to action stage among smokers? Longitudinal test for the transtheoretical model. Addiction Research and Theory, 15, 493–501.

Calsyn, D. A., Hatch-Maillette, M. A., Doyle, S. R., Cousins, S., Chen, T., & Godinez, M. (2010). Teaching condom use skills: Practice is superior to observation. Substance Abuse, 31, 231–239.

Cameron, K. A., & Campo, S. (2006). Stepping back from social norms campaigns: Comparing normative influences to other predictors of health behaviors. Health Communication, 20, 277–288.

Campbell, G. (1776). The philosophy of rhetoric. Retrieved from http://people.cohums.ohio-state.edu/Ulman1/Campbell/.

Campbell, M. C., & Kirmani, A. (2000). Consumers’ use of persuasion knowledge: The effects of accessibility and cognitive capacity on perceptions of an influence agent. Journal of Consumer Research, 27, 69–83.

Campo, S., Askelson, N. M., Carter, K. D., & Losch, M. (2012). Segmenting audiences and tailoring messages using the extended parallel process model and cluster analysis to improve health campaigns. Social Marketing Quarterly, 18, 98–111.

Campo, S., & Cameron, K. A. (2006). Differential effects of exposure to social norms campaigns: A cause for concern. Health Communication, 19, 209–219.

Cann, A., Sherman, S. J., & Elkes, R. (1975). Effects of initial request size and timing of a second request on compliance: The foot in the door and the door in the face. Journal of Personality and Social Psychology, 32, 774–782.

Cappella, J. N. (2006). Integrating message effects and behavior change theories: Organizing comments and unanswered questions. Journal of Communication, 56, S265–S279.

Cappella, J. N. (2007). The role of discrete emotions in the theory of reasoned action and its successors: Quitting smoking in young adults. In I. Ajzen, D. Albarracín, & R. Hornik (Eds.), Prediction and change of health behavior: Applying the reasoned action approach (pp. 43–51). Mahwah, NJ: Lawrence Erlbaum.

Cappella, J. N., Yzer, M., & Fishbein, M. (2003). Using beliefs about positive and negative consequences as the basis for designing message interventions for lowering risky behavior. In D. Romer (Ed.), Reducing adolescent risk (pp. 210–219). Thousand Oaks, CA: Sage.

Carcioppolo, N., Jensen, J. D., Wilson, S. R., Collins, W. B., Carrion, M., & Linnemeier, G. (2013). Examining HPV threat-to-efficacy ratios in the extended parallel process model. Health Communication, 28, 20–28.

Card, N. A. (2012). Applied meta-analysis for social science research. New York: Guilford.

Cardenas, M. P., & Simons-Morton, B. G. (1993). The effect of anticipatory guidance on mothers’ self-efficacy and behavioral intentions to prevent burns caused by hot tap water. Patient Education and Counseling, 21, 117–123.

Carey, R. N., McDermott, D. T., & Sarma, K. M. (2013). The impact of threat appeals on fear arousal and driver behavior: A meta-analysis of experimental research 1990–2011. PLOS ONE, 8(5), e62821.

Carpenter, C. J. (2012a). A meta-analysis and an experiment investigating the effects of speaker disfluency on persuasion. Western Journal of Communication, 76, 552–569.

Carpenter, C. J. (2012b). A meta-analysis of the functional matching effect based on functional attitude theory. Southern Communication Journal, 77, 438–451.

Carpenter, C. J. (2013). A meta-analysis of the effectiveness of the “but you are free” compliance-gaining technique. Communication Studies, 64, 6–17.

Carpenter, C., Boster, F. J., & Andrews, K. R. (2013). Functional attitude theory. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (2nd ed., pp. 104–119). Los Angeles: Sage.

Carpenter, J. M., & Green, M. C. (2012). Flying with Icarus: Narrative transportation and the persuasiveness of entertainment. In L. J. Shrum (Ed.), The psychology of entertainment media: Blurring the lines between entertainment and persuasion (2nd ed., pp. 169–194). New York: Routledge.

Carver, C. S., & White, T. L. (1994). Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS scales. Journal of Personality and Social Psychology, 67, 319–333.

Celuch, K., & Slama, M. (1995). “Getting along” and “getting ahead” as motives for self-presentation: Their impact on advertising effectiveness. Journal of Applied Social Psychology, 25, 1700–1713.

Cerully, J. L., & Klein, W. M. P. (2010). Effects of emotional state on behavioral responsiveness to personal risk feedback. Journal of Risk Research, 13, 591–598.

Cesario, J., Corker, K. S., & Jelinek, S. (2013). A self-regulatory framework for message framing. Journal of Experimental Social Psychology, 49, 238–249.

Cesario, J., Grant, H., & Higgins, E. T. (2004). Regulatory fit and
persuasion: Transfer from “feeling right.” Journal of Personality and
Social Psychology, 86, 388–404.

Chaiken, S. (1979). Communicator physical attractiveness and persuasion. Journal of Personality and Social Psychology, 37, 1387–1397.

Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39, 752–766.

Chaiken, S. (1986). Physical appearance and social influence. In C. P. Herman, M. P. Zanna, & E. T. Higgins (Eds.), Physical appearance, stigma, and social behavior: The Ontario Symposium, vol. 3 (pp. 143–177). Hillsdale, NJ: Lawrence Erlbaum.

Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario Symposium, vol. 5 (pp. 3–39). Hillsdale, NJ: Lawrence Erlbaum.

Chaiken, S., Duckworth, K. L., & Darke, P. (1999). When parsimony fails … Psychological Inquiry, 10, 118–123.

Chaiken, S., & Eagly, A. H. (1983). Communication modality as a determinant of persuasion: The role of communicator salience. Journal of Personality and Social Psychology, 45, 241–256.

Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic information processing within and beyond the persuasion context. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252). New York: Guilford.
Chaiken, S., & Stangor, C. (1987). Attitudes and attitude change. Annual
Review of Psychology, 38, 575–630.

Chaiken, S., & Trope, Y. (Eds.). (1999). Dual-process theories in social psychology. New York: Guilford.

Chaiken, S., Wood, W., & Eagly, A. H. (1996). Principles of persuasion. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 702–742). New York: Guilford.

Chandran, S., & Menon, G. (2004). When a day means more than a year: Effects of temporal framing on judgments of health risk. Journal of Consumer Research, 31, 375–389.

Chang, C. (2004). Country of origin as a heuristic cue: The effects of message ambiguity and product involvement. Media Psychology, 6, 169–192.

Chang, C. (2006). Cultural masculinity/femininity influences on advertising appeals. Journal of Advertising Research, 46, 315–323.

Chang, C. (2010). Message framing and interpersonal orientation at cultural and individual levels: Involvement as a moderator. International Journal of Advertising, 29, 765–794.

Chang, C.-T. (2007). Interactive effects of message framing, product perceived risk, and mood: The case of travel healthcare product advertising. Journal of Advertising Research, 47, 51–65.

Chang, M.-J., & Gruner, C. R. (1981). Audience reaction to self-disparaging humor. Southern Speech Communication Journal, 46(4), 19–26.

Chapman, J., Armitage, C. J., & Norman, P. (2009). Comparing implementation intention interventions in relation to young adults’ intake of fruit and vegetables. Psychology and Health, 24, 317–332.

Chartrand, T., Pinckert, S., & Burger, J. M. (1999). When manipulation backfires: The effects of time delay and requester on the foot-in-the-door technique. Journal of Applied Social Psychology, 29, 211–221.

Chatzisarantis, N., & Hagger, M. (2005). Effects of a brief intervention based on the theory of planned behavior on leisure-time physical activity participation. Journal of Sport & Exercise Psychology, 27, 470–487.

Chatzisarantis, N. L. D., & Hagger, M. S. (2007). Mindfulness and the intention-behavior relationship within the theory of planned behavior. Personality and Social Psychology Bulletin, 33, 663–676.

Chebat, J.-C., Filiatrault, P., Laroche, M., & Watson, C. (1988). Compensatory effects of cognitive characteristics of the source, the message, and the receiver upon attitude change. Journal of Psychology, 122, 609–621.

Chebat, J.-C., Laroche, M., Baddoura, D., & Filiatrault, P. (1992). Effects of source likability on attitude change through message repetition. Advances in Consumer Research, 19, 353–358.

Chen, H. C., Reardon, R., Rea, C., & Moore, D. J. (1992). Forewarning of content and involvement: Consequences for persuasion and resistance to persuasion. Journal of Experimental Social Psychology, 28, 523–541.

Chen, M. F., & Tung, P. T. (2010). The moderating effect of perceived lack of facilities on consumers’ recycling intentions. Environment and Behavior, 42, 824–844.

Chen, M. K., & Risen, J. L. (2009). Is choice a reliable predictor of choice? A comment on Sagarin and Skowronski. Journal of Experimental Social Psychology, 45, 425–427.

Chen, M. K., & Risen, J. L. (2010). How choice affects and reflects
preferences: Revisiting the free-choice paradigm. Journal of Personality
and Social Psychology, 99, 573–594.

Chen, S., & Chaiken, S. (1999). The heuristic-systematic model in its broader context. In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 73–96). New York: Guilford.

Cheung, C. M.-Y., Sia, C.-L., & Kuan, K. K. Y. (2012). Is this review believable? A study of factors affecting the credibility of online consumer reviews from an ELM perspective. Journal of the Association for Information Systems, 13, 618–635.

Cheung, S. F., Chan, D. K.-S., & Wong, Z. S.-Y. (1999). Reexamining the theory of planned behavior in understanding wastepaper recycling. Environment and Behavior, 31, 587–612.

Chien, Y. H. (2011). Use of message framing and color in vaccine information to increase willingness to be vaccinated. Social Behavior and Personality, 39, 1063–1071.

Cho, H., & Witte, K. (2004). A review of fear-appeal effects. In J. S. Seiter & R. H. Gass (Eds.), Perspectives on persuasion, social influence, and compliance gaining (pp. 223–238). Boston: Pearson Allyn and Bacon.

Cho, H., & Witte, K. (2005). Managing fear in public health campaigns: A theory-based formative evaluation process. Health Promotion Practice, 6, 482–490.

Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10, 103–126.
Chu, G. C. (1966). Fear arousal, efficacy, and imminency. Journal of
Personality and Social Psychology, 4, 517–524.

Chu, G. C. (1967). Prior familiarity, perceived bias, and one-sided versus two-sided communications. Journal of Experimental Social Psychology, 3, 243–254.

Chung, S., Fink, E. L., & Kaplowitz, S. A. (2008). The comparative statics and dynamics of beliefs: The effect of message discrepancy and source credibility. Communication Monographs, 75, 158–189.

Chung, S., Fink, E. L., Waks, L., Meffert, M. F., & Xie, X. (2012). Sequential information integration and belief trajectories: An experimental study using candidate evaluations. Communication Monographs, 79, 160–180.

Churchill, S., & Jessop, D. C. (2011). Too impulsive for implementation intentions? Evidence that impulsivity moderates the effectiveness of an implementation intention intervention. Psychology & Health, 26, 517–530.

Cialdini, R. B. (1984). Influence: How and why people agree to things. New York: William Morrow.

Cialdini, R. B. (1987). Compliance principles of compliance professionals: Psychologists of necessity. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario Symposium, vol. 5 (pp. 165–184). Hillsdale, NJ: Lawrence Erlbaum.

Cialdini, R. B. (2009). Influence: Science and practice (5th ed.). Boston: Pearson.

Cialdini, R. B., Cacioppo, J. T., Bassett, R., & Miller, J. A. (1978). Low-ball procedure for producing compliance: Commitment then cost. Journal of Personality and Social Psychology, 36, 463–476.
Cialdini, R. B., Demaine, L. J., Sagarin, B. J., Barrett, D. W., Rhoads, K.,
& Winter, P. L. (2006). Managing social norms for persuasive impact.
Social Influence, 1, 3–15.

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591–621.

Cialdini, R. B., & Griskevicius, V. (2010). Social influence. In R. F. Baumeister & E. J. Finkel (Eds.), Advanced social psychology: The state of the science (pp. 385–417). Oxford, UK: Oxford University Press.

Cialdini, R. B., Griskevicius, V., Sundie, J. M., & Kenrick, D. T. (2007). Persuasion paralysis: When unrelated motives immobilize influence. Social Influence, 2, 4–17.

Cialdini, R. B., & Schroeder, D. A. (1976). Increasing compliance by legitimizing paltry contributions: When even a penny helps. Journal of Personality and Social Psychology, 34, 599–604.

Cialdini, R. B., & Trost, M. R. (1998). Social influence: Social norms, conformity, and compliance. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed., Vol. 2, pp. 151–192). Boston: McGraw-Hill.

Cialdini, R. B., Vincent, J. E., Lewis, S. K., Catalan, J., Wheeler, D., & Darby, B. L. (1975). Reciprocal concessions procedure for inducing compliance: The door-in-the-face technique. Journal of Personality and Social Psychology, 31, 206–215.

Clack, Z. A., Pitts, S. R., & Kellermann, A. L. (2000). Do reminder signs promote use of safety belts? Annals of Emergency Medicine, 36, 597–601.

Clapp, J. D., Lange, J. E., Russell, C., Shillington, A., & Voas, R. B. (2003). A failed norms social marketing campaign. Journal of Studies on Alcohol, 64, 409–414.

Clark, H. H. (1973). The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior, 12, 335–359.

Clark, J. K., & Wegener, D. T. (2013). Message position, information processing, and persuasion: The discrepancy motives model. Advances in Experimental Social Psychology, 47, 189–232.

Clark, J. K., Wegener, D. T., & Evans, A. T. (2011). Perceptions of source efficacy and persuasion: Multiple mechanisms for source effects on attitudes. European Journal of Social Psychology, 41, 596–607.

Clark, J. K., Wegener, D. T., Habashi, M. M., & Evans, A. T. (2012). Source expertise and persuasion: The effects of perceived opposition or support on message scrutiny. Personality and Social Psychology Bulletin, 38, 90–100.

Clark, R. A., & Stewart, R. (1971). Latitude of rejection as a measure of ego involvement. Speech Monographs, 38, 228–234.

Clark, R. A., Stewart, R., & Marston, A. (1972). Scale values for highest and lowest levels of credibility. Central States Speech Journal, 23, 193–196.

Clarkson, J. J., Tormala, Z. L., & Rucker, D. D. (2011). Cognitive and affective matching effects in persuasion: An amplification perspective. Personality and Social Psychology Bulletin, 37, 1415–1427.

Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A., Haugen, J., & Miene, P. (1998). Understanding and assessing the motivations of volunteers: A functional approach. Journal of Personality and Social Psychology, 74, 1516–1530.
Clary, E. G., Snyder, M., Ridge, R. D., Miene, P. K., & Haugen, J. A.
(1994). Matching messages to motives in persuasion: A functional
approach to promoting volunteerism. Journal of Applied Social
Psychology, 24, 1129–1149.

Claypool, H. M., Mackie, D. M., Garcia-Marques, T., McIntosh, A., & Udall, A. (2004). The effects of personal relevance and repetition on persuasive processing. Social Cognition, 22, 310–335.

Cohen, G. L., Aronson, J., & Steele, C. M. (2000). When beliefs yield to evidence: Reducing biased evaluation by affirming the self. Personality and Social Psychology Bulletin, 26, 1151–1164.

Combs, D. J. Y., & Keller, P. S. (2010). Politicians and trustworthiness: Acting contrary to self-interest enhances trustworthiness. Basic and Applied Social Psychology, 32, 328–339.

Compton, J. (2013). Inoculation theory. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (2nd ed., pp. 220–236). Thousand Oaks, CA: Sage.

Compton, J. A., & Pfau, M. W. (2005). Inoculation theory of resistance to influence at maturity: Recent progress in theory development and application and suggestions for future research. Communication Yearbook, 29, 97–145.

Conner, M. (2008). Initiation and maintenance of health behaviors. Applied Psychology: An International Review, 57, 42–50.

Conner, M., & Armitage, C. J. (1998). Extending the theory of planned behavior: A review and avenues for further research. Journal of Applied Social Psychology, 28, 1429–1464.

Conner, M., & Armitage, C. J. (2008). Attitudinal ambivalence. In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude change (pp. 261–286). New York: Psychology Press.

Conner, M., & Godin, G. (2007). Temporal stability of behavioural intention as a moderator of intention-health behaviour relationships. Psychology and Health, 22, 875–897.

Conner, M., Godin, G., Norman, P., & Sheeran, P. (2011). Using the question-behavior effect to promote disease prevention behaviors: Two randomized controlled trials. Health Psychology, 30, 300–309.

Conner, M., Graham, S., & Moore, B. (1999). Alcohol and intentions to use condoms: Applying the theory of planned behaviour. Psychology and Health, 14, 795–812.

Conner, M., & Higgins, A. R. (2010). Long-term effects of implementation intentions on prevention of smoking uptake among adolescents: A cluster randomized controlled trial. Health Psychology, 29, 529–538.

Conner, M., & McMillan, B. (1999). Interaction effects in the theory of planned behaviour: Studying cannabis use. British Journal of Social Psychology, 38, 195–222.

Conner, M., Rhodes, R. E., Morris, B., McEachan, R., & Lawton, R. (2011). Changing exercise through targeting affective or cognitive attitudes. Psychology & Health, 26, 133–149.

Conner, M., Sheeran, P., Norman, P., & Armitage, C. J. (2000). Temporal stability as a moderator of relationships in the theory of planned behaviour. British Journal of Social Psychology, 39, 469–493.

Conner, M., Smith, N., & McMillan, B. (2003). Examining normative pressure in the theory of planned behaviour: Impact of gender and passengers on intentions to break the speed limit. Current Psychology, 22, 252–263.
Conner, M., & Sparks, P. (1996). The theory of planned behaviour and
health behaviours. In M. Conner & P. Norman (Eds.), Predicting health
behaviour: Research and practice with social cognition models (pp.
121–162). Buckingham, UK: Open University Press.

Conner, M., & Sparks, P. (2005). Theory of planned behaviour and health
behaviour. In M. Conner & P. Norman (Eds.), Predicting health
behaviour: Research and practice with social cognition models (2nd ed.,
pp. 170–222). Maidenhead, UK: Open University Press.

Conner, M., Sparks, P., Povey, R., James, R., Shepherd, R., & Armitage,
C. J. (2002). Moderator effects of attitudinal ambivalence on attitude-
behaviour relationships. European Journal of Social Psychology, 32,
705–718.

Converse, J., Jr., & Cooper, J. (1979). The importance of decisions and
free-choice attitude change: A curvilinear finding. Journal of
Experimental Social Psychology, 15, 48–61.

Cook, A. J., Kerr, G. N., & Moore, K. (2002). Attitudes and intentions
towards purchasing GM food. Journal of Economic Psychology, 23,
557–572.

Cooke, R., & French, D. P. (2008). How well do the theory of reasoned
action and theory of planned behaviour predict intentions and
attendance at screening programmes? A meta-analysis. Psychology &
Health, 23, 745–765.

Cooke, R., & Sheeran, P. (2004). Moderation of cognition-intention and cognition-behaviour relations: A meta-analysis of properties of variables from the theory of planned behaviour. British Journal of Social Psychology, 43, 159–186.

Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.). (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York: Russell Sage Foundation.
Cooper, J. (1998). Unlearning cognitive dissonance: Toward an
understanding of the development of dissonance. Journal of
Experimental Social Psychology, 34, 562–575.

Cooper, J. (2007). Cognitive dissonance: Fifty years of a classic theory. Los Angeles: Sage.

Cooper, J., Darley, J. M., & Henderson, J. E. (1974). On the effectiveness of deviant- and conventional-appearing communicators: A field experiment. Journal of Personality and Social Psychology, 29, 752–757.

Cooper, J., & Jones, R. A. (1970). Self-esteem and consistency as determinants of anticipatory opinion change. Journal of Personality and Social Psychology, 14, 312–320.

Corbin, S. K. T., Jones, R. T., & Schulman, R. S. (1993). Drug refusal behavior: The relative efficacy of skills-based and information-based treatment. Journal of Pediatric Psychology, 18, 769–784.

Corby, N. H., Enguidanos, S. M., & Kay, L. S. (1996). Development and use of role model stories in a community level HIV risk reduction intervention. Public Health Reports, 111 (Suppl. 1), 54–58.

Côté, S. (2005). Reconciling the feelings-as-information and hedonic contingency models of how mood influences systematic information processing. Journal of Applied Social Psychology, 35, 1656–1679.

Cotte, J., Coulter, R. A., & Moore, M. (2005). Enhancing or disrupting guilt: The role of ad credibility and perceived manipulative intent. Journal of Business Research, 58, 361–368.

Cotton, J. L. (1985). Cognitive dissonance in selective exposure. In D. Zillmann & J. Bryant (Eds.), Selective exposure to communication (pp. 11–33). Hillsdale, NJ: Lawrence Erlbaum.
Courneya, K. S. (1994). Predicting repeated behavior from intention: The
issue of scale correspondence. Journal of Applied Social Psychology,
24, 580–594.

Courneya, K. S. (1995). Understanding readiness for regular physical activity in older individuals: An application of the theory of planned behavior. Health Psychology, 14, 80–87.

Covey, J. (2014). The role of dispositional factors in moderating message framing effects. Health Psychology, 33, 52–65.

Cox, B. S., Cox, A. B., & Cox, D. J. (2000). Motivating signage prompts safety belt use among drivers exiting senior communities. Journal of Applied Behavior Analysis, 33, 635–638.

Craciun, C., Schüz, N., Lippke, S., & Schwarzer, R. (2012). A mediator model of sunscreen use: A longitudinal analysis of social-cognitive predictors and mediators. International Journal of Behavioral Medicine, 19, 65–72.

Craig, T. Y., & Blankenship, K. L. (2011). Language and persuasion: Linguistic extremity influences message processing and behavioral intentions. Journal of Language and Social Psychology, 30, 290–310.

Crandall, C. S., Glor, J., & Britt, T. W. (1997). AIDS-related stigmatization: Instrumental and symbolic attitudes. Journal of Applied Social Psychology, 27, 95–123.

Crano, W. D., & Prislin, R. (1995). Components of vested interest and attitude-behavior consistency. Basic and Applied Social Psychology, 17, 1–21.

Crawley, F. E., III (1990). Intentions of science teachers to use investigative teaching methods: A test of the theory of planned behavior. Journal of Research in Science Teaching, 27, 685–698.
Crites, S. L., Jr., Fabrigar, L. R., & Petty, R. E. (1994). Measuring the affective and cognitive properties of attitudes: Conceptual and methodological issues. Personality and Social Psychology Bulletin, 20, 619–634.

Crocker, J., Niiya, Y., & Mischkowski, D. (2008). Why does writing about important values reduce defensiveness? Self-affirmation and the role of positive other-directed feelings. Psychological Science, 19, 740–747.

Crockett, W. H. (1982). Balance, agreement, and positivity in the cognition of small social structures. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 15, pp. 1–57). New York: Academic Press.

Cronen, V. E., & Conville, R. L. (1975). Fishbein’s conception of belief strength: A theoretical, methodological, and experimental critique. Speech Monographs, 42, 143–150.

Crowley, A. E., & Hoyer, W. D. (1994). An integrative framework for understanding two-sided persuasion. Journal of Consumer Research, 20, 561–574.

Croy, G., Gerrans, P., & Speelman, C. (2010). Injunctive social norms primacy over descriptive social norms in retirement savings decisions. International Journal of Aging and Human Development, 71, 259–282.

Cruz, M. G. (1998). Explicit and implicit conclusions in persuasive messages. In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances through meta-analysis (pp. 217–230). Cresskill, NJ: Hampton.

Cuijpers, P. (2002). Peer-led and adult-led school drug prevention: A meta-analytic comparison. Journal of Drug Education, 32, 107–119.

Cunningham, W. A., Packer, D. J., Kesek, A., & Van Bavel, J. J. (2009). Implicit measurement of attitudes: A physiological approach. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes: Insights from the new implicit measures (pp. 485–512). New York: Psychology Press.

Dahl, J., Enemo, I., Drevland, G. C. B., Wessel, E., Eilertsen, D. E., & Magnussen, S. (2007). Displayed emotions and witness credibility: A comparison of judgments by individuals and mock juries. Applied Cognitive Psychology, 21, 1145–1156.

Dahlstrom, M. F. (2010). The role of causality in information acceptance in narratives: An example from science communication. Communication Research, 37, 857–875.

Dal Cin, S., Zanna, M. P., & Fong, G. T. (2004). Narrative persuasion and overcoming resistance. In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion (pp. 175–191). Mahwah, NJ: Lawrence Erlbaum.

Dale, A., & Strauss, A. (2009). Don’t forget to vote: Text message reminders as a mobilization tool. American Journal of Political Science, 53, 787–804.

D’Alessio, D., & Allen, M. (2007). The selective exposure hypothesis and media choice processes. In R. W. Preiss, B. M. Gayle, N. Burrell, M. Allen, & J. Bryant (Eds.), Mass media effects research: Advances through meta-analysis (pp. 103–118). Mahwah, NJ: Lawrence Erlbaum.

Dardis, F. E., & Shen, F. (2008). The influence of evidence type and product involvement on message-framing effects in advertising. Journal of Consumer Behaviour, 7, 222–238.

Darke, P. R., Chaiken, S., Bohner, G., Einwiller, S., Erb, H.-P., & Hazlewood, J. D. (1998). Accuracy motivation, consensus information, and the law of large numbers: Effects on attitude judgment in the absence of argumentation. Personality and Social Psychology Bulletin, 24, 1205–1215.

Darker, C. D., French, D. P., Eves, F. F., & Sniehotta, F. F. (2010). An intervention to promote walking amongst the general population based on an ‘extended’ theory of planned behaviour: A waiting list randomised controlled trial. Psychology and Health, 25, 71–88.

Darker, C. D., French, D. P., Longdon, S., Morris, K., & Eves, F. F. (2007). Are beliefs elicited biased by question order? A theory of planned behaviour belief elicitation study about walking in the UK general population. British Journal of Health Psychology, 12, 93–110.

Darley, S. A., & Cooper, J. (1972). Cognitive consequences of forced noncompliance. Journal of Personality and Social Psychology, 24, 321–326.

Das, E., & Fennis, B. M. (2008). In the mood to face the facts: When a positive mood promotes systematic processing of self-threatening information. Motivation and Emotion, 32, 221–230.

Das, E., Vonkeman, C., & Hartmann, T. (2012). Mood as a resource in dealing with health recommendations: How mood affects information processing and acceptance of quit smoking messages. Psychology and Health, 27, 116–127.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982–1003.

Davis, M. H., & Runge, T. E. (1981). Beliefs and attitudes in a gubernatorial primary: Some limitations on the Fishbein model. Journal of Applied Social Psychology, 11, 93–113.

Davis, R. E., & Resnicow, K. (2012). The cultural variance framework for tailoring health messages. In H. Cho (Ed.), Health communication message design: Theory and practice (pp. 115–135). Los Angeles: Sage.

Dean, M., Arvola, A., Vassallo, M., Lähteenmäki, L., Raats, M. M., Saba, A., & Shepherd, R. (2006). Comparison of elicitation methods for moral and affective beliefs in the theory of planned behaviour. Appetite, 47, 244–252.

DeBono, K. G. (1987). Investigating the social-adjustive and value-expressive functions of attitudes: Implications for persuasion processes. Journal of Personality and Social Psychology, 52, 279–287.

DeBono, K. G. (2006). Self-monitoring and consumer psychology. Journal of Personality, 74, 715–737.

DeBono, K. G., & Harnish, R. J. (1988). Source expertise, source attractiveness, and the processing of persuasive information: A functional approach. Journal of Personality and Social Psychology, 55, 541–546.

DeBono, K. G., Leavitt, A., & Backus, J. (2003). Product packaging and product evaluation: An individual difference approach. Journal of Applied Social Psychology, 33, 513–521.

DeBono, K. G., & Omoto, A. M. (1993). Individual differences in predicting behavioral intentions from attitude and subjective norm. Journal of Social Psychology, 133, 825–831.

DeBono, K. G., & Packer, M. (1991). The effects of advertising appeal on perceptions of product quality. Personality and Social Psychology Bulletin, 17, 194–200.

DeBono, K. G., & Snyder, M. (1989). Understanding consumer decision-making processes: The role of form and function in product evaluation. Journal of Applied Social Psychology, 19, 416–424.

DeBono, K. G., & Telesca, C. (1990). The influence of source physical attractiveness on advertising effectiveness: A functional perspective. Journal of Applied Social Psychology, 20, 1383–1395.

de Bruijn, G. J. (2010). Understanding college students’ fruit consumption. Integrating habit strength in the theory of planned behaviour. Appetite, 54, 16–22.

de Bruijn, G. J., & Gardner, B. (2011). Active commuting and habit strength: An interactive and discriminant analyses approach. American Journal of Health Promotion, 25, 27–36.

de Bruijn, G. J., Kremers, S. P. J., Singh, A., van den Putte, B., & van Mechelen, W. (2009). Adult active transportation: Adding habit strength to the theory of planned behavior. American Journal of Preventive Medicine, 36, 189–194.

de Bruijn, G. J., Kroeze, W., Oenema, A., & Brug, J. (2008). Saturated fat consumption and the theory of planned behaviour: Exploring additive and interactive effects of habit strength. Appetite, 51, 318–323.

de Bruijn, G. J., & Rhodes, R. E. (2011). Exploring exercise behavior, intention and habit strength relationships. Scandinavian Journal of Medicine & Science in Sports, 21, 482–491.

de Graaf, A., Hoeken, H., Sanders, J., & Beentjes, J. W. J. (2012). Identification as a mechanism of narrative persuasion. Communication Research, 39, 802–823.

de Graaf, A., & Hustinx, L. (2011). The effect of story structure on emotion, transportation, and persuasion. Information Design Journal, 19, 142–154.

de Hoog, N., Stroebe, W., & de Wit, J. (2007). The impact of vulnerability to and severity of a health risk on processing and acceptance of fear-arousing communications: A meta-analysis. Review of General Psychology, 11, 258–285.

de Hoog, N., Stroebe, W., & de Wit, J. B. F. (2008). The processing of fear-arousing communications: How biased processing leads to persuasion. Social Influence, 3, 84–113.

De Houwer, J. (2009). Comparing measures of attitudes at the functional and procedural level: Analysis and implications. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes: Insights from the new implicit measures (pp. 362–390). New York: Psychology Press.

De Houwer, J., Teige-Mocigemba, S., Spruyt, A., & Moors, A. (2009). Implicit measures: A normative analysis and review. Psychological Bulletin, 135, 347–368.

DeJong, W. (1979). An examination of self-perception mediation of the foot-in-the-door effect. Journal of Personality and Social Psychology, 37, 2221–2239.

DeJong, W., & Smith, S. W. (2013). Truth in advertising: Social norms marketing campaigns to reduce college drinking. In R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th ed., pp. 177–187). Los Angeles: Sage.

Delia, J. G. (1975). Regional dialect, message acceptance, and perceptions of the speaker. Central States Speech Journal, 26, 188–194.

Delia, J. G. (1976). A constructivist analysis of the concept of credibility. Quarterly Journal of Speech, 62, 361–375.

Delia, J. G., Crockett, W. H., Press, A. N., & O’Keefe, D. J. (1975). The dependency of interpersonal evaluations on context-relevant beliefs about the other. Speech Monographs, 42, 10–19.

DelVecchio, D., Henard, D. H., & Freling, T. H. (2006). The effect of sales promotion on post-promotion brand preference: A meta-analysis. Journal of Retailing, 82, 203–213.

Denizeau, M., Gosling, P., & Oberlé, D. (2009). L’effet de l’ordre et du délai sur l’usage de trois modes de réduction de la dissonance cognitive: le changement d’attitude, la trivialisation et le déni de responsabilité [The effects of order and delay on the use of three modes of dissonance reduction: Attitude change, trivialization and denial of responsibility]. Année Psychologique, 109, 629–654.

De Nooijer, J., van Assema, P., de Vet, E., & Brug, J. (2005). How stable are stages of change for nutrition behaviors in the Netherlands? Health Promotion International, 20, 27–32.

Derose, S. F., Nakahiro, R. K., & Ziel, F. H. (2009). Automated messaging to improve compliance with diabetes test monitoring. American Journal of Managed Care, 15, 425–436.

Detweiler, J. B., Bedell, B. T., Salovey, P., Pronin, E., & Rothman, A. J. (1999). Message framing and sunscreen use: Gain-framed messages motivate beach-goers. Health Psychology, 18, 189–196.

Devine, D. J. (2012). Jury decision making: The state of the science. New York: New York University Press.

de Vries, P., Aarts, H., & Midden, C. J. H. (2011). Changing simple energy-related consumer behaviors: How the enactment of intentions is thwarted by acting and non-acting habits. Environment and Behavior, 43, 612–633.

de Wit, J. B. F., Stroebe, W., De Vroome, E. M. M., Sandfort, T. G. M., & van Griensven, G. J. P. (2000). Understanding AIDS preventive behavior with casual and primary partners in homosexual men: The theory of planned behavior and the information-motivation-behavioral-skills model. Psychology and Health, 15, 325–340.

Dexheimer, J. W., Talbot, T. R., Sanders, D. L., Rosenbloom, S. T., & Aronsky, D. (2008). Prompting clinicians about preventive care measures: A systematic review of randomized controlled trials. Journal of the American Medical Informatics Association, 15, 311–318.

De Young, R. (1989). Exploring the difference between recyclers and non-recyclers: The role of information. Journal of Environmental Systems, 18, 341–351.

De Young, R. (1990). Recycling as appropriate behavior: A review of survey data from selected recycling education programs in Michigan. Resources, Conservation, and Recycling, 3, 253–266.

Dholakia, R. R. (1987). Source credibility effects: A test of behavioral persistence. Advances in Consumer Research, 14, 426–430.

Diamond, G. A., & Cobb, M. D. (1996). The candidate as catastrophe: Latitude theory and the problems of political persuasion. In D. C. Mutz, P. M. Sniderman, & R. A. Brody (Eds.), Political persuasion and attitude change (pp. 225–247). Ann Arbor: University of Michigan Press.

Dibonaventura, M. D., & Chapman, G. B. (2005). Moderators of the intention–behavior relationship in influenza vaccinations: Intention stability and unforeseen barriers. Psychology & Health, 20, 761–774.

Dickerson, C. A., Thibodeau, R., Aronson, E., & Miller, D. (1992). Using cognitive dissonance to encourage water conservation. Journal of Applied Social Psychology, 22, 841–854.

Dillard, A. J., Fagerlin, A., Dal Cin, S., Zikmund-Fisher, B. J., & Ubel, P. A. (2010). Narratives that address affective forecasting errors reduce perceived barriers to colorectal cancer screening. Social Science & Medicine, 71, 45–52.

Dillard, J. P. (2011). An application of the integrative model to women’s intention to be vaccinated against HPV: Implications for message design. Health Communication, 26, 479–486.

Dillard, J. P., & Anderson, J. W. (2004). The role of fear in persuasion. Psychology and Marketing, 21, 909–926.

Dillard, J. P., Hunter, J. E., & Burgoon, M. (1984). Sequential-request persuasive strategies: Meta-analysis of foot-in-the-door and door-in-the-face. Human Communication Research, 10, 461–488.

Dillard, J. P., & Nabi, R. L. (2006). The persuasive influence of emotion in cancer prevention and detection messages. Journal of Communication, 56, S123–S139.

Dillard, J. P., & Peck, E. (2001). Persuasion and the structure of affect: Dual systems and discrete emotions as complementary models. Human Communication Research, 27, 38–68.

Dillard, J. P., & Seo, K. (2013). Affect and persuasion. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (2nd ed., pp. 150–166). Thousand Oaks, CA: Sage.

Dillard, J. P., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72, 144–168.

Dilliplane, S. (2010). Raising the specter of death: What terror management theory brings to the study of fear appeals. Communication Yearbook, 34, 93–131.

Dingus, T. A., Hunn, B. P., & Wreggit, S. S. (1991). Two reasons for providing protective equipment as part of hazardous consumer product packaging. In Proceedings of the Human Factors Society 35th annual meeting (pp. 1039–1042). Santa Monica, CA: Human Factors Society.

Di Noia, J., & Prochaska, J. O. (2010). Dietary stages of change and decisional balance: A meta-analytic review. American Journal of Health Behavior, 34, 618–632.

Ditto, P. H., Druley, J. A., Moore, K. A., Danks, J. H., & Smucker, W. D. (1996). Fates worse than death: The role of valued life activities in health-state evaluations. Health Psychology, 15, 332–343.

DiVesta, F. J., & Merwin, J. C. (1960). The effects of need-oriented communications on attitude change. Journal of Abnormal and Social Psychology, 60, 80–85.

Doll, J., & Ajzen, I. (1992). Accessibility and stability of predictors in the theory of planned behavior. Journal of Personality and Social Psychology, 63, 754–765.

Doll, J., Ajzen, I., & Madden, T. J. (1991). Optimale Skalierung und Urteilsbildung in unterschiedlichen Einstellungsbereichen: Eine Reanalyse [Optimal scaling and judgment in different attitude domains: A reanalysis]. Zeitschrift für Sozialpsychologie, 22, 102–111.

Doll, J., & Mallu, R. (1990). Individuierte Einstellungsformation, Einstellungsstruktur und Einstellungs-Verhalten-Konsistenz [Individuated attitude formation, attitude structure and attitude-behavior consistency]. Zeitschrift für Sozialpsychologie, 21, 2–14.

Doll, J., & Orth, B. (1993). The Fishbein and Ajzen theory of reasoned action applied to contraceptive behavior: Model variants and meaningfulness. Journal of Applied Social Psychology, 23, 395–415.

Donaldson, S. I., Graham, J. W., & Hansen, W. B. (1994). Testing the generalizability of intervening mechanism theories: Understanding the effects of adolescent drug use prevention interventions. Journal of Behavioral Medicine, 17, 195–216.

Donaldson, S. I., Graham, J. W., Piccinin, A. M., & Hansen, W. B. (1995). Resistance-skills training and onset of alcohol use: Evidence for beneficial and potentially harmful effects in public schools and in private Catholic schools. Health Psychology, 14, 291–300.

Donnelly, J. H., Jr., & Ivancevich, J. M. (1970). Post-purchase reinforcement and back-out behavior. Journal of Marketing Research, 7, 399–400.

Doob, A. N., Carlsmith, J. M., Freedman, J. L., Landauer, T. K., & Tom, S., Jr. (1969). Effect of initial selling price on subsequent sales. Journal of Personality and Social Psychology, 11, 345–350.

Druckman, J. N. (2011). What’s it all about? Framing in political science. In G. Keren (Ed.), Perspectives on framing (pp. 279–302). New York: Psychology Press.

Druckman, J. N., & Bolsen, T. (2012). How scientific evidence links attitudes to behaviors. In D. A. Dana (Ed.), The nanotechnology challenge: Creating legal institutions for uncertain risks (pp. 84–102). Cambridge, UK: Cambridge University Press.

Drummond, A. J. (2011). Assimilation, contrast and voter projections of parties in left-right space: Does the electoral system matter? Party Politics, 17, 711–743.

Duncan, T. E., Duncan, S. C., Beauchamp, N., Wells, J., & Ary, D. V. (2000). Development and evaluation of an interactive CD-ROM refusal skills program to prevent youth substance use: “Refuse to Use.” Journal of Behavioral Medicine, 23, 59–72.

Dunlop, S. M., Wakefield, M., & Kashima, Y. (2010). Pathways to persuasion: Cognitive and experiential responses to health-promoting mass media messages. Communication Research, 37, 133–164.

Durantini, M. R., Albarracín, D., Mitchell, A. L., Earl, A. N., & Gillette, J. C. (2006). Conceptualizing the influence of social agents of behavior change: A meta-analysis of the effectiveness of HIV-prevention interventions for different groups. Psychological Bulletin, 132, 212–248.

Dutta-Bergman, M. J. (2003). The linear interaction model of personality effects in health communication. Health Communication, 15, 101–116.

Dwan, K., Gamble, C., Williamson, P. R., Kirkham, J. J., & the Reporting Bias Group. (2013). Systematic review of the empirical evidence of study publication bias and outcome reporting bias: An updated review. PLoS ONE, 8(7), e66844.

Eagly, A. H. (2007). In defence of ourselves: The effects of defensive processing on attitudinal phenomena. In M. Hewstone, H. A. W. Schut, J. B. F. de Wit, K. van den Bos, & M. S. Stroebe (Eds.), The scope of social psychology: Theory and applications (pp. 65–83). New York: Psychology Press.

Eagly, A. H., & Chaiken, S. (1975). An attribution analysis of the effect of communicator characteristics on opinion change: The case of communicator attractiveness. Journal of Personality and Social Psychology, 32, 136–144.

Eagly, A. H., & Chaiken, S. (1984). Cognitive theories of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 17, pp. 267–359). New York: Academic Press.

Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Fort Worth, TX: Harcourt Brace Jovanovich.

Eagly, A. H., & Chaiken, S. (1998). Attitude structure and function. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (4th ed., Vol. 1, pp. 269–322). Boston: McGraw-Hill.

Eagly, A. H., Mladinic, A., & Otto, S. (1994). Cognitive and affective bases of attitudes toward social groups and social policies. Journal of Experimental Social Psychology, 30, 113–137.

Eagly, A. H., & Telaak, K. (1972). Width of the latitude of acceptance as a determinant of attitude change. Journal of Personality and Social Psychology, 23, 388–397.

Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about communicators and their effect on opinion change. Journal of Personality and Social Psychology, 36, 424–435.

Eagly, A. H., Wood, W., & Chaiken, S. (1981). An attribution analysis of persuasion. In J. H. Harvey, W. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol. 3, pp. 37–62). Hillsdale, NJ: Lawrence Erlbaum.

Earl, A., & Albarracín, D. (2007). Nature, decay, and spiraling of the effects of fear-inducing arguments and HIV counseling and testing: A meta-analysis of the short- and long-term outcomes of HIV-prevention interventions. Health Psychology, 26, 496–506.

Eckes, T., & Six, B. (1994). Fakten und Fiktionen in der Einstellungs-Verhaltens-Forschung: Eine Meta-Analyse [Fact and fiction in attitude-behavior research: A meta-analysis]. Zeitschrift für Sozialpsychologie, 25, 253–271.

Edwards, K. (1990). The interplay of affect and cognition in attitude formation and change. Journal of Personality and Social Psychology, 59, 202–216.

Edwards, S. M., Li, H. R., & Lee, J. H. (2002). Forced exposure and psychological reactance: Antecedents and consequences of the perceived intrusiveness of pop-up ads. Journal of Advertising, 31(3), 83–95.

Ehrlich, D., Guttman, I., Schönbach, P., & Mills, J. (1957). Postdecision exposure to relevant information. Journal of Abnormal and Social Psychology, 54, 98–102.

Eisend, M. (2002). Dimensions of credibility in marketing communication. In R. Zwick & T. Ping (Eds.), Asia Pacific advances in consumer research (Vol. 5, pp. 366–373). Valdosta, GA: Association for Consumer Research.

Eisend, M. (2006). Two-sided advertising: A meta-analysis. International Journal of Research in Marketing, 23, 187–198.

Eisend, M. (2007). Understanding two-sided persuasion: An empirical assessment of theoretical approaches. Psychology and Marketing, 24, 615–640.

Eisend, M. (2008). Explaining the impact of scarcity appeals in advertising: Affect integration in a simultaneous presentation context. Journal of Advertising, 37(3), 33–41.

Eisend, M. (2010). Explaining the joint effect of source credibility and negativity of information in two-sided messages. Psychology & Marketing, 27, 1032–1049.

Eisend, M., & Langner, T. (2010). Immediate and delayed advertising effects of celebrity endorsers’ attractiveness and expertise. International Journal of Advertising, 29, 527–546.

Eisenstadt, D., Leippe, M. R., Rivers, J. A., & Stambush, M. A. (2003). Counterattitudinal advocacy on a matter of prejudice: Effects of distraction, commitment, and personal importance. Journal of Applied Social Psychology, 33, 2123–2152.

Elder, J. P., Sallis, J. F., Woodruff, S. I., & Wildey, M. B. (1993). Tobacco-refusal skills and tobacco use among high-risk adolescents. Journal of Behavioral Medicine, 16, 629–642.

Elliott, M. A., & Ainsworth, K. (2012). Predicting university undergraduates’ binge-drinking behavior: A comparative test of the one- and two-component theories of planned behavior. Addictive Behaviors, 37, 92–101.

Elliott, M. A., & Armitage, C. J. (2006). Effects of implementation intentions on the self-reported frequency of drivers’ compliance with speed limits. Journal of Experimental Psychology: Applied, 12, 108–117.

Elliott, M. A., Armitage, C. J., & Baughan, C. J. (2005). Exploring the beliefs underpinning drivers’ intentions to comply with speed limits. Transportation Research Part F: Traffic Psychology and Behaviour, 8, 459–480.

Elliott, R., Jobber, D., & Sharp, J. (1995). Using the theory of reasoned action to understand organizational behaviour: The role of belief salience. British Journal of Social Psychology, 34, 161–172.

Elms, A. C. (Ed.). (1969). Role playing, reward, and attitude change. New York: Van Nostrand Reinhold.

Engstrom, E. (1994). Effects of nonfluencies on speaker’s credibility in newscast settings. Perceptual and Motor Skills, 78, 739–743.

Ennett, S. T., Tobler, N. S., Ringwalt, C. L., & Flewelling, R. L. (1994). How effective is drug abuse resistance education? A meta-analysis of Project DARE outcome evaluations. American Journal of Public Health, 84, 1394–1401.

Ennis, R., & Zanna, M. P. (1993). Attitudes, advertising, and automobiles: A functional approach. Advances in Consumer Research, 20, 662–666.

Ennis, R., & Zanna, M. P. (2000). Attitude function and the automobile. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes (pp. 395–415). Mahwah, NJ: Lawrence Erlbaum.

Epton, T., Harris, P. R., Kane, R., van Koningsbruggen, G. M., & Sheeran, P. (in press). The impact of self-affirmation on health-behavior change: A meta-analysis. Health Psychology. doi:10.1037/hea0000116

Erb, H.-P., Pierro, A., Mannetti, L., Spiegel, S., & Kruglanski, A. W. (2007). Biased processing of persuasive information: On the functional equivalence of cues and message arguments. European Journal of Social Psychology, 37, 1057–1075.

Escalas, J. E. (2004). Imagine yourself in the product: Mental simulation, narrative transportation, and persuasion. Journal of Advertising, 33(2), 37–48.

Escalas, J. E. (2007). Self-referencing and persuasion: Narrative transportation versus analytic elaboration. Journal of Consumer Research, 33, 421–429.

Escalas, J. E., & Luce, M. F. (2004). Understanding the effect of process-focused versus outcome-focused thought in response to advertising. Journal of Consumer Research, 31, 274–285.

Esses, V. M., Haddock, G., & Zanna, M. P. (1993). Values, stereotypes, and emotions as determinants of intergroup attitudes. In D. M. Mackie & D. L. Hamilton (Eds.), Affect, cognition, and stereotyping: Interactive processes in group perception (pp. 137–166). San Diego: Academic Press.

Estabrooks, P., & Carron, A. V. (1998). The conceptualization and effect of control beliefs on exercise attendance in the elderly. Journal of Aging and Health, 10, 441–457.

Evans, R. I., Rozelle, R. M., Lasater, T. M., Dembroski, T. M., & Allen, B. P. (1970). Fear arousal, persuasion, and actual versus implied behavioral change: New perspective utilizing a real-life dental hygiene program. Journal of Personality and Social Psychology, 16, 220–227.

Everson, E. S., Daley, A. J., & Ussher, M. (2007). Brief report: The theory of planned behaviour applied to physical activity in young people who smoke. Journal of Adolescence, 30, 347–351.

Fabrigar, L. R., MacDonald, T. K., & Wegener, D. T. (2007). The structure of attitudes. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), Handbook of attitudes and attitude change (pp. 79–124). Mahwah, NJ: Lawrence Erlbaum.

Fabrigar, L. R., & Petty, R. E. (1999). The role of the affective and cognitive bases of attitudes in susceptibility to affectively and cognitively based persuasion. Personality and Social Psychology Bulletin, 25, 363–381.

Fabrigar, L. R., Petty, R. E., Smith, S. M., & Crites, S. L., Jr. (2006). Understanding knowledge effects on attitude-behavior consistency: The role of relevance, complexity, and amount of knowledge. Journal of Personality and Social Psychology, 90, 556–577.

Fairchild, A. J., & MacKinnon, D. P. (2009). A general model for testing mediation and moderation effects. Prevention Science, 10, 87–99.

Falcione, R. L. (1974). The factor structure of source credibility scales for immediate superiors in the organizational context. Central States Speech Journal, 25, 63–66.

Falomir-Pichastor, J. M., Butera, F., & Mugny, G. (2002). Persuasive constraint and expert versus non-expert influence in intention to quit smoking. European Journal of Social Psychology, 32, 209–222.

Fazio, R. H., & Roskos-Ewoldsen, D. R. (2005). Acting as we feel: When and how attitudes guide behavior. In T. C. Brock & M. C. Green (Eds.), Persuasion: Psychological insights and perspectives (2nd ed., pp. 41–79). Thousand Oaks, CA: Sage.

Fazio, R. H., & Towles-Schwen, T. (1999). The MODE model of attitude-behavior processes. In S. Chaiken & Y. Trope (Eds.), Dual-process models in social psychology (pp. 97–116). New York: Guilford.

Feeley, T. H., Anker, A. E., & Aloe, A. M. (2012). The door-in-the-face persuasive message strategy: A meta-analysis of the first 35 years. Communication Monographs, 79, 316–343.

Feiler, D. C., Tost, L. P., & Grant, A. M. (2012). Mixed reasons, missed givings: The costs of blending egoistic and altruistic reasons in donation requests. Journal of Experimental Social Psychology, 48, 1322–1328.

Fein, S., & Spencer, S. J. (1997). Prejudice as self-image maintenance: Affirming the self through derogating others. Journal of Personality and Social Psychology, 73, 31–44.

Feldman, J. M. (1974). Note on the utility of certainty weights in expectancy theory. Journal of Applied Psychology, 59, 727–730.

Feldman, L., Stroud, N. J., Bimber, B., & Wojcieszak, M. (2013). Assessing selective exposure in experiments: The implications of different methodological choices. Communication Methods and Measures, 7, 172–194.

Fennis, B. M., & Janssen, L. (2010). Mindlessness revisited: Sequential request techniques foster compliance by draining self-control resources. Current Psychology, 29, 235–246.

Fennis, B. M., Janssen, L., & Vohs, K. D. (2009). Acts of benevolence: A limited-resource account of compliance with charitable requests. Journal of Consumer Research, 35, 906–924.

Fennis, B. M., & Stroebe, W. (2010). The psychology of advertising. New York: Psychology Press.

Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7, 555–561.

Fern, E. F., Monroe, K. B., & Avila, R. A. (1986). Effectiveness of multiple request strategies: A synthesis of research results. Journal of Marketing Research, 23, 144–152.

Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.

Festinger, L. (Ed.). (1964). Conflict, decision, and dissonance. Stanford, CA: Stanford University Press.

Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. Journal of Abnormal and Social Psychology, 58, 203–210.

Festinger, L., & Walster, E. (1964). Post-decision regret and decision reversal. In L. Festinger (Ed.), Conflict, decision, and dissonance (pp. 100–110). Stanford, CA: Stanford University Press.

Feufel, M. A., Schneider, T. R., & Berkel, H. J. (2010). A field test of the effects of instruction design on colorectal cancer self-screening accuracy. Health Education Research, 25, 709–723.

Fiedler, K., Messner, C., & Bluemke, M. (2006). Unresolved problems with the “I,” the “A,” and the “T”: A logical and psychometric critique of the implicit association test (IAT). European Review of Social Psychology, 17, 74–147.

Field, A. P., & Gillett, R. (2010). How to do a meta-analysis. British Journal of Mathematical & Statistical Psychology, 63, 665–694.

Fine, B. J. (1957). Conclusion-drawing, communicator credibility, and anxiety as factors in opinion change. Journal of Abnormal and Social Psychology, 54, 369–374.

Fink, E. L., & Cai, D. A. (2013). Discrepancy models of belief change. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of persuasion: Developments in theory and practice (2nd ed., pp. 84–103). Thousand Oaks, CA: Sage.

Fischer, P. (2011). Selective exposure, decision uncertainty, and cognitive economy: A new theoretical perspective on confirmatory information search. Social and Personality Psychology Compass, 5, 751–762.

Fischer, P., & Greitemeyer, T. (2010). A new look at selective-exposure effects: An integrative model. Current Directions in Psychological Science, 19, 384–389.

Fischer, P., Lea, S., Kastenmüller, A., Greitemeyer, T., Fischer, J., & Frey, D. (2011). The process of selective exposure: Why confirmatory information search weakens over time. Organizational Behavior and Human Decision Processes, 114, 37–48.

Fischer, P., Schulz-Hardt, S., & Frey, D. (2008). Selective exposure and information quantity: How different information quantities moderate decision makers’ preference for consistent and inconsistent information. Journal of Personality and Social Psychology, 94, 231–244.

Fishbein, M. (1967a). A behavior theory approach to the relations between beliefs about an object and the attitude toward the object. In M. Fishbein (Ed.), Readings in attitude theory and measurement (pp. 389–400). New York: Wiley.

Fishbein, M. (1967b). A consideration of beliefs, and their role in attitude measurement. In M. Fishbein (Ed.), Readings in attitude theory and measurement (pp. 257–266). New York: Wiley.

Fishbein, M. (2008). A reasoned action approach to health promotion. Medical Decision Making, 28, 834–844.

Fishbein, M., & Ajzen, I. (1974). Attitudes towards objects as predictors of single and multiple behavioral criteria. Psychological Review, 81, 59–74.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior. Reading, MA: Addison-Wesley.

Fishbein, M., & Ajzen, I. (1981). Attitudes and voting behaviour: An application of the theory of reasoned action. In G. M. Stephenson & J. M. Davis (Eds.), Progress in applied social psychology (Vol. 1, pp. 253–313). New York: Wiley.

Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York: Psychology Press.

Fishbein, M., Cappella, J., Hornik, R., Sayeed, S., Yzer, M., & Ahern, R. K. (2002). The role of theory in developing effective anti-drug public service announcements. In W. D. Crano & M. Burgoon (Eds.), Mass media and drug prevention: Classic and contemporary theories and research (pp. 89–117). Mahwah, NJ: Lawrence Erlbaum.

Fishbein, M., & Hunter, R. (1964). Summation versus balance in attitude organization and change. Journal of Abnormal and Social Psychology, 69, 505–510.

Fishbein, M., & Lange, R. (1990). The effects of crossing the midpoint on belief change: A replication and extension. Personality and Social Psychology Bulletin, 16, 189–199.

Fishbein, M., & Middlestadt, S. (1995). Noncognitive effects on attitude formation and change: Fact or artifact? Journal of Consumer Psychology, 4, 181–202.

Fishbein, M., & Middlestadt, S. E. (1997). A striking lack of evidence for nonbelief-based attitude formation and change: A response to five commentaries. Journal of Consumer Psychology, 6, 107–115.

Fishbein, M., & Yzer, M. C. (2003). Using theory to design effective health behavior interventions. Communication Theory, 13, 164–183.

Fisher, J. D., Fisher, W. A., Misovich, S. J., Kimble, D. L., & Malloy, T. E. (1996). Changing AIDS-risk behavior: Effects of an intervention emphasizing AIDS risk reduction information, motivation, and behavioral skills in a college student population. Health Psychology, 15, 114–123.

Fitzsimons, G. J., & Lehmann, D. R. (2004). Reactance to recommendations: When unsolicited advice yields contrary responses. Marketing Science, 23, 82–94.

Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information verification behaviors on the perceived credibility of web-based information. New Media and Society, 9, 319–342.

Flay, B. R., McFall, S., Burton, D., Cook, T. D., & Warnecke, R. B. (1993). Health behavior changes through television: The roles of de facto and motivated selection processes. Journal of Health and Social Behavior, 34, 322–335.

Fleming, D. (1967). Attitude: The history of a concept. Perspectives in American History, 1, 287–365.

Fleming, J. K., & Ginis, K. A. M. (2004). The effects of commercial exercise video models on women’s self-presentational efficacy and exercise task self-efficacy. Journal of Applied Sport Psychology, 16, 92–102.

Fleming, M. A., & Petty, R. E. (2000). Identity and persuasion: An elaboration likelihood approach. In D. J. Terry & M. A. Hogg (Eds.), Attitudes, behavior, and social context: The role of norms and group membership (pp. 171–199). Mahwah, NJ: Lawrence Erlbaum.

Floyd, D. L., Prentice-Dunn, S., & Rogers, R. W. (2000). A meta-analysis of research on protection motivation theory. Journal of Applied Social Psychology, 30, 407–429.

Fointiat, V. (2004). “I know what I have to do, but …”: When hypocrisy leads to behavioral change. Social Behavior and Personality, 32, 741–746.

Fontenelle, G. A., Phillips, A. P., & Lane, D. M. (1985). Generalizing across stimuli as well as subjects: A neglected aspect of external validity. Journal of Applied Psychology, 70, 101–107.

Forehand, M., Gastil, J., & Smith, M. A. (2004). Endorsements as voting cues: Heuristic and systematic processing in initiative elections. Journal of Applied Social Psychology, 34, 2215–2231.

Fornara, F., Carrus, G., Passafaro, P., & Bonnes, M. (2011). Distinguishing the sources of normative influence on proenvironmental behaviors: The role of local norms in household waste recycling. Group Processes & Intergroup Relations, 14, 623–635.

Freedman, J. L. (1964). Involvement, discrepancy, and change. Journal of Abnormal and Social Psychology, 69, 290–295.

Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure: The foot-in-the-door technique. Journal of Personality and Social Psychology, 4, 195–202.

Freedman, J. L., & Sears, D. O. (1965). Warning, distraction, and resistance to influence. Journal of Personality and Social Psychology, 1, 262–266.

Freijy, T., & Kothe, E. J. (2013). Dissonance-based interventions for health behaviour change: A systematic review. British Journal of Health Psychology, 18, 310–337.

French, D. P., & Cooke, R. (2012). Using the theory of planned behaviour to understand binge drinking: The importance of beliefs for developing interventions. British Journal of Health Psychology, 17, 1–17.

French, D. P., Sutton, S., Hennings, S. J., Mitchell, J., Wareham, N. J., Griffin, S., … Kinmonth, A. L. (2005). The importance of affective beliefs and attitudes in the theory of planned behavior: Predicting intention to increase physical activity. Journal of Applied Social Psychology, 35, 1824–1848.

Frewer, L. J., Howard, C., Hedderley, D., & Shepherd, R. (1996). What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis, 16, 473–486.

Frey, D. (1986). Recent research on selective exposure to information. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 41–80). New York: Academic Press.

Fried, C. B. (1998). Hypocrisy and identification with transgressions: A case of undetected dissonance. Basic and Applied Social Psychology, 20, 145–154.

Fried, C. B., & Aronson, E. (1995). Hypocrisy, misattribution, and dissonance reduction. Personality and Social Psychology Bulletin, 21, 925–933.

Friedrich, J., Fetherstonhaugh, D., Casey, S., & Gallagher, D. (1996). Argument integration and attitude change: Suppression effects in the integration of one-sided arguments that vary in persuasiveness. Personality and Social Psychology Bulletin, 22, 179–191.

Fry, J. P., & Neff, R. A. (2009). Periodic prompts and reminders in health promotion and health behavior interventions: Systematic review. Journal of Medical Internet Research, 11(2), e16.

Gagné, C., & Godin, G. (2000). The theory of planned behavior: Some measurement issues concerning belief-based variables. Journal of Applied Social Psychology, 30, 2173–2193.

Gagné, C., & Godin, G. (2007). Does the easy-difficult item measure attitude or perceived behavioural control? British Journal of Health Psychology, 12, 543–557.

Gallagher, K. M., & Updegraff, J. A. (2012). Health message framing effects on attitudes, intentions, and behavior: A meta-analytic review. Annals of Behavioral Medicine, 43, 101–116.

Gallagher, K. M., Updegraff, J. A., Rothman, A. J., & Sims, L. (2011). Perceived susceptibility to breast cancer moderates the effect of gain- and loss-framed messages on use of screening. Health Psychology, 30, 145–152.

Gallagher, S., & Povey, R. (2006). Determinants of older adults’ intentions to vaccinate against influenza: A theoretical application. Journal of Public Health, 28, 139–144.

Gangestad, S. W., & Snyder, M. (2000). Self-monitoring: Appraisal and re-appraisal. Psychological Bulletin, 126, 530–555.

Garner, R. (2005). What’s in a name? Persuasion perhaps. Journal of Consumer Psychology, 15, 108–116.

Garrett, R. K. (2009). Politically motivated reinforcement seeking: Reframing the selective exposure debate. Journal of Communication, 59, 676–699.

Garrett, R. K., & Stroud, N. J. (2014). Partisan paths to exposure diversity: Differences in pro- and counterattitudinal news consumption. Journal of Communication, 64, 680–701.

Gasco, M., Briñol, P., & Horcajo, J. (2010). Cambio de actitudes hacia la imagen corporal: El efecto de la elaboración sobre la fuerza de las actitudes [Attitude change toward body image: The role of elaboration on attitude strength]. Psicothema, 22, 71–76.

Gass, R. H., & Seiter, J. S. (2004). Embracing divergence: A definitional analysis of pure and borderline cases of persuasion. In J. S. Seiter & R. H. Gass (Eds.), Perspectives on persuasion, social influence, and compliance gaining (pp. 13–29). Boston: Pearson Allyn and Bacon.

Gastil, J. (1992). Why we believe in democracy: Testing theories of attitude functions and democracy. Journal of Applied Social Psychology, 22, 423–450.

Gaston, A., Cramp, A., & Prapavessis, H. (2012). Enhancing self-efficacy and exercise readiness in pregnant women. Psychology of Sport and Exercise, 13, 550–557.

Gawronski, B., & Bodenhausen, G. V. (2007). What do we know about implicit attitude measures and what do we have to learn? In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp. 265–286). New York: Guilford.

Gawronski, B., & Strack, F. (Eds.). (2012). Cognitive consistency: A fundamental principle in social cognition. New York: Guilford.

Gaziano, C., & McGrath, K. (1986). Measuring the concept of credibility. Journalism Quarterly, 63, 451–462.

Geers, A. W., Handley, I. M., & McLarney, A. R. (2003). Discerning the role of optimism in persuasion: The valence-enhancement hypothesis. Journal of Personality and Social Psychology, 85, 554–565.

Gerend, M. A., & Cullen, M. (2008). Effects of message framing and temporal context on college student drinking behavior. Journal of Experimental Social Psychology, 44, 1167–1173.

Gerend, M. A., & Shepherd, J. E. (2012). Predicting human papillomavirus vaccine uptake in young adult women: Comparing the health belief model and theory of planned behavior. Annals of Behavioral Medicine, 44, 171–180.

Gierl, H., & Huettl, V. (2010). Are scarce products always more attractive? The interaction of different types of scarcity signals with products’ suitability for conspicuous consumption. International Journal of Research in Marketing, 27, 225–235.

Giles, M., & Cairns, E. (1995). Blood donation and Ajzen’s theory of planned behaviour: An examination of perceived behavioural control. British Journal of Social Psychology, 34, 173–188.

Giles, M., McClenahan, C., Armour, C., Millar, S., Rae, G., Mallett, J., & Stewart-Knox, B. (2014). Evaluation of a theory of planned behaviour–based breastfeeding intervention in Northern Irish schools using a randomized cluster design. British Journal of Health Psychology, 19, 16–35.

Glasman, L. R., & Albarracín, D. (2006). Forming attitudes that predict future behavior: A meta-analysis of the attitude–behavior relation. Psychological Bulletin, 132, 778–822.

Glik, D., Berkanovic, E., Stone, K., Ibarra, L., Jones, M. C., Rosen, B., … Richardes, D. (1998). Health education goes Hollywood: Working with prime-time and daytime entertainment television for immunization promotion. Journal of Health Communication, 3, 263–284.

Glynn, C. J., Huge, M. E., & Lunney, C. A. (2009). The influence of perceived social norms on college students’ intention to vote. Political Communication, 26, 48–64.

Göckeritz, S., Schultz, P. W., Rendon, T., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2010). Descriptive normative beliefs and conservation behavior: The moderating roles of personal involvement and injunctive normative beliefs. European Journal of Social Psychology, 40, 514–523.

Godin, G., Gagné, C., & Sheeran, P. (2004). Does perceived behavioural control mediate the relationship between power beliefs and intention? British Journal of Health Psychology, 9, 557–568.

Godin, G., & Kok, G. (1996). The theory of planned behavior: A review of its applications to health-related behaviors. American Journal of Health Promotion, 11, 87–98.

Godin, G., Maticka-Tyndale, E., Adrien, A., Manson-Singer, S., Willms, D., & Cappon, P. (1996). Cross-cultural testing of three social cognitive theories: An application to condom use. Journal of Applied Social Psychology, 26, 1556–1586.

Godin, G., Sheeran, P., Conner, M., Delage, G., Germain, M., Bélanger-Gravel, A., & Naccache, H. (2010). Which survey questions change behavior? Randomized controlled trial of mere measurement interventions. Health Psychology, 29, 636–644.

Godin, G., Valois, P., & Lepage, L. (1993). The pattern of influence of perceived behavioral control upon exercising behavior: An application of Ajzen’s theory of planned behavior. Journal of Behavioral Medicine, 16, 81–102.

Goei, R., Boyson, A. R., Lyon-Callo, S. K., Schott, C., Wasilevich, E., & Cannarile, S. (2010). An examination of EPPM predictions when threat is perceived externally: An asthma intervention with school workers. Health Communication, 25, 333–344.

Goei, R., Lindsey, L. L. M., Boster, F. J., Skalski, P. D., & Bowman, J. M. (2003). The mediating roles of liking and obligation on the relationship between favors and compliance. Communication Research, 30, 178–197.

Goethals, G. R., & Nelson, R. E. (1973). Similarity in the influence process: The belief-value distinction. Journal of Personality and Social Psychology, 25, 117–122.

Goldman, M., McVeigh, J. F., & Richterkessing, J. L. (1984). Door-in-the-face procedure: Reciprocal concession, perceptual contrast, or worthy person. Journal of Social Psychology, 123, 245–251.

Goldstein, N. J., Cialdini, R. B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35, 472–482.

Goldstein, N. J., & Mortensen, C. R. (2012). Social norms: A how-to (and how-not-to) guide. In D. T. Kenrick, N. J. Goldstein, & S. L. Braver (Eds.), Six degrees of social influence: Science, application, and the psychology of Robert Cialdini (pp. 68–78). New York: Oxford University Press.

Gollust, S. E., Niederdeppe, J., & Barry, C. L. (2013). Framing the consequences of childhood obesity to increase public support for obesity prevention policy. American Journal of Public Health, 103, e96–e102.

Gollwitzer, P. M., & Oettingen, G. (2008). The question-behavior effect from an action control perspective. Journal of Consumer Psychology, 18, 107–110.

Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 38, pp. 69–120). San Diego: Elsevier Academic Press.

Good, A., & Abraham, C. (2007). Measuring defensive responses to threatening messages: A meta-analysis of measures. Health Psychology Review, 1, 208–229.

Goodall, C. E. (2011). An overview of implicit measures of attitudes: Methods, mechanisms, strengths, and limitations. Communication Methods and Measures, 5, 203–222.

Gorassini, D. R., & Olson, J. M. (1995). Does self-perception change explain the foot-in-the-door effect? Journal of Personality and Social Psychology, 69, 91–105.

Gorman, D. R. (1995). Are school-based resistance skills training programs effective in preventing alcohol misuse? Journal of Alcohol and Drug Education, 41, 74–98.

Gould, S. J. (1991). Bully for brontosaurus: Reflections in natural history. New York: Norton.

Gould, S. J. (1993). Eight little piggies: Reflections in natural history. New York: Norton.

Gourville, J. T. (1999). The effect of implicit versus explicit comparisons on temporal pricing claims. Marketing Letters, 10, 113–124.

Granberg, D. (1982). Social judgment theory. Communication Yearbook, 6, 304–329.

Granberg, D., & Campbell, K. E. (1977). Effect of communication discrepancy and ambiguity on placement and opinion shift. European Journal of Social Psychology, 7, 137–150.

Granberg, D., Kasmer, J., & Nanneman, T. (1988). An empirical examination of two theories of political perception. Western Political Quarterly, 41, 29–46.

Granberg, D., & Steele, L. (1974). Procedural considerations in measuring latitudes of acceptance, rejection, and noncommitment. Social Forces, 52, 538–542.

Grant, N. K., Fabrigar, L. R., & Lim, H. (2010). Exploring the efficacy of compliments as a tactic for securing compliance. Basic and Applied Social Psychology, 32, 226–233.

Grasmick, H. G., Bursik, R. J., Jr., & Kinsey, K. A. (1991). Shame and embarrassment as deterrents to noncompliance with the law: The case of an antilittering campaign. Environment and Behavior, 23, 233–251.

Green, B. F. (1954). Attitude measurement. In G. Lindzey (Ed.), Handbook of social psychology (Vol. 1, pp. 335–369). Reading, MA: Addison-Wesley.

Green, D. P., Ha, S. E., & Bullock, J. G. (2010). Enough already about “black box” experiments: Studying mediation is more difficult than most scholars suppose. Annals of the American Academy of Political and Social Science, 628, 200–208.

Green, M. C. (2008). Research challenges in narrative persuasion. Information Design Journal, 16, 47–52.

Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of Personality and Social Psychology, 79, 701–721.

Green, M. C., & Clark, J. L. (2013). Transportation into narrative worlds: Implications for entertainment media influences on tobacco use. Addiction, 108, 477–484.

Green, M. C., Garst, J., Brock, T. C., & Chung, S. (2006). Fact versus fiction labeling: Persuasion parity despite heightened scrutiny of fact. Media Psychology, 8, 267–285.

Greenberg, B. S., & Miller, G. R. (1966). The effects of low-credible sources on message acceptance. Speech Monographs, 33, 127–136.

Greene, K., Krcmar, M., Rubin, D. L., Walters, L. H., & Hale, J. L. (2002). Elaboration in processing adolescent health messages: The impact of egocentrism and sensation seeking on message processing. Journal of Communication, 52, 812–831.

Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., & Banaji, M. R. (2009). Understanding and using the implicit association test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17–41.

Gregory, G. D., Munch, J. M., & Peterson, M. (2002). Attitude functions in consumer research: Comparing value-attitude relations in individualist and collectivist cultures. Journal of Business Research, 55, 933–942.

Gregory, W. L., Cialdini, R. B., & Carpenter, K. M. (1982). Self-relevant scenarios as mediators of likelihood estimates and compliance: Does imagining make it so? Journal of Personality and Social Psychology, 43, 89–99.

Griskevicius, V., Shiota, M. N., & Neufeld, S. L. (2010). Influence of different positive emotions on persuasion processing: A functional evolutionary approach. Emotion, 10, 190–206.

Gruner, C. R. (1967). Effect of humor on speaker ethos and audience information gain. Journal of Communication, 17, 228–233.

Gruner, C. R. (1970). The effect of humor in dull and interesting informative speeches. Central States Speech Journal, 21, 160–166.

Gruner, C. R., & Lampton, W. E. (1972). Effects of including humorous material in a persuasive sermon. Southern Speech Communication Journal, 38, 188–196.

Guadagno, R. E., & Cialdini, R. B. (2010). Preference for consistency and social influence: A review of current research findings. Social Influence, 5, 152–163.

Guéguen, N., & Pascual, A. (2005). Improving the response rate to a street survey: An evaluation of the “but you are free to accept or to refuse” technique. Psychological Record, 55, 297–303.

Guéguen, N., Pascual, A., & Dagot, L. (2002). Low-ball and compliance to a request: An application in a field setting. Psychological Reports, 91, 81–84.

Guéguen, N., Pichot, N., & Le Dreff, G. (2005). Similarity and helping behavior on the web: The impact of the convergence of surnames between a solicitor and a subject in a request made by e-mail. Journal of Applied Social Psychology, 35, 423–429.

Gunnell, J. J., & Ceci, S. J. (2010). When emotionality trumps reason: A study of individual processing style and juror bias. Behavioral Sciences & the Law, 28, 850–877.

Guo, B. L., Aveyard, P., Fielding, A., & Sutton, S. (2009). Do the transtheoretical model processes of change, decisional balance and temptation predict stage movement? Evidence from smoking cessation in adolescents. Addiction, 104, 828–838.

Hackman, C. L., & Knowlden, A. P. (2014). Theory of reasoned action and theory of planned behavior-based dietary interventions in adolescents and young adults: A systematic review. Adolescent Health, Medicine, and Therapeutics, 5, 101–114.

Hackman, J. R., & Anderson, L. R. (1968). The strength, relevance, and source of beliefs about an object in Fishbein’s attitude theory. Journal of Social Psychology, 76, 55–67.

Haddock, G., Maio, G. R., Arnold, K., & Huskinson, T. (2008). Should persuasion be affective or cognitive? The moderating effects of need for affect and need for cognition. Personality and Social Psychology Bulletin, 34, 769–779.

Haddock, G., & Zanna, M. P. (1998). Assessing the impact of affective and cognitive information in predicting attitudes toward capital punishment. Law and Human Behavior, 22, 325–339.

Hagen, K. M., Gutkin, T. B., Wilson, C. P., & Oats, R. G. (1998). Using vicarious experience and verbal persuasion to enhance self-efficacy in pre-service teachers: “Priming the pump” for consultation. School Psychology Quarterly, 13, 169–178.

Hagger, M. S., & Chatzisarantis, N. L. D. (2009). Integrating the theory of planned behaviour and self-determination theory in health behaviour: A meta-analysis. British Journal of Health Psychology, 14, 275–302.

Hagger, M. S., Chatzisarantis, N. L. D., & Biddle, S. J. H. (2002). A meta-analytic review of the theories of reasoned action and planned behavior in physical activity: Predictive validity and the contribution of additional variables. Journal of Sport and Exercise Psychology, 24, 3–32.

Hale, J. L., Householder, B. J., & Greene, K. L. (2002). The theory of reasoned action. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 259–286). Thousand Oaks, CA: Sage.

Hall, K. L., & Rossi, J. S. (2008). Meta-analytic examination of the strong and weak principles across 48 health behaviors. Preventive Medicine, 46, 266–274.

Hall, P. A., Fong, G. T., Epp, L. J., & Elias, L. J. (2008). Executive function moderates the intention-behavior link for physical activity and dietary behavior. Psychology and Health, 23, 309–326.

Hall, P. A., Zehr, C. E., Ng, M., & Zanna, M. P. (2012). Implementation intentions for physical activity in supportive and unsupportive environmental conditions: An experimental examination of intention–behavior consistency. Journal of Experimental Social Psychology, 48, 432–436.

Hamilton, D. L., & Zanna, M. P. (1972). Differential weighting of favorable and unfavorable attributes in impressions of personality. Journal of Experimental Research in Personality, 6, 204–212.

Hänze, M. (2001). Ambivalence, conflict, and decision making: Attitudes and feelings in Germany towards NATO’s military intervention in the Kosovo war. European Journal of Social Psychology, 31, 693–706.

Hardeman, W., Johnston, M., Johnston, D. W., Bonetti, D., Wareham, N. J., & Kinmonth, A. L. (2002). Application of the theory of planned behaviour in behaviour change interventions: A systematic review. Psychology and Health, 17, 123–158.

Harland, P., Staats, H., & Wilke, H. A. M. (1999). Explaining proenvironmental intention and behavior by personal norms and the theory of planned behavior. Journal of Applied Social Psychology, 29, 2505–2528.

Harmon, R. R., & Coney, K. A. (1982). The persuasive effects of source credibility in buy and lease situations. Journal of Marketing Research, 19, 255–260.

Harmon-Jones, E. (1999). Toward an understanding of the motivation underlying dissonance effects: Is the production of aversive consequences necessary? In E. Harmon-Jones & J. Mills (Eds.), Cognitive dissonance: Progress on a pivotal theory in social psychology (pp. 71–99). Washington, DC: American Psychological Association.

Harmon-Jones, E. (2002). A cognitive dissonance theory perspective on persuasion. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 99–116). Thousand Oaks, CA: Sage.

Harmon-Jones, E., Amodio, D. M., & Harmon-Jones, C. (2010). Action-based model of dissonance: On cognitive conflict and attitude change. In J. P. Forgas, J. Cooper, & W. D. Crano (Eds.), The psychology of attitudes and attitude change (pp. 163–181). New York: Psychology Press.

Harmon-Jones, E., Brehm, J. W., Greenberg, J., Simon, L., & Nelson, D. E. (1996). Evidence that the production of aversive consequences is not necessary to create cognitive dissonance. Journal of Personality and Social Psychology, 70, 5–16.

Harmon-Jones, E., & Mills, J. (Eds.). (1999). Cognitive dissonance: Progress on a pivotal theory in social psychology. Washington, DC: American Psychological Association.

Harris, A. J. L., & Hahn, U. (2009). Bayesian rationality in evaluating multiple testimonies: Incorporating the role of coherence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 1366–1373.

Harris, P. R. (2011). Self-affirmation and the self-regulation of health behavior change. Self and Identity, 10, 304–314.

Harris, P. R., & Epton, T. (2009). The impact of self-affirmation on health cognition, health behaviour, and other health-related responses: A narrative review. Social and Personality Psychology Compass, 3, 962–978.

Harris, P. R., & Epton, T. (2010). The impact of self-affirmation on health-related cognition and health behaviour: Issues and prospects. Social and Personality Psychology Compass, 4, 439–454.

Harris, P. R., Mayle, K., Mabbott, L., & Napper, L. (2007). Self-affirmation reduces smokers’ defensiveness to graphic on-pack cigarette warning labels. Health Psychology, 26, 437–446.

Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135, 555–588.

Harte, T. (1976). The effects of evidence in persuasive communication. Central States Speech Journal, 27, 42–46.

Harvey, O., & Rutherford, J. (1958). Gradual and absolute approaches to attitude change. Sociometry, 21, 61–68.

Hass, J. W., Bagley, G. S., & Rogers, R. W. (1975). Coping with the energy crisis: Effects of fear appeals upon attitudes toward energy consumption. Journal of Applied Psychology, 60, 754–756.

Hass, R. G. (1981). Effects of source characteristics on cognitive responses and persuasion. In R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion (pp. 141–172). Hillsdale, NJ: Lawrence Erlbaum.

Hass, R. G., & Grady, K. (1975). Temporal delay, type of forewarning, and resistance to influence. Journal of Experimental Social Psychology, 11, 459–469.

Hass, R. G., & Linder, D. E. (1972). Counterargument availability and the effects of message structure on persuasion. Journal of Personality and Social Psychology, 23, 219–233.

Hassandra, M., Vlachopoulos, S. P., Kosmidou, E., Hatzigeorgiadis, A., Goudas, M., & Theodorakis, Y. (2011). Predicting students’ intention to smoke by theory of planned behaviour variables and parental influences across school grade levels. Psychology & Health, 26, 1241–1258.

Haugtvedt, C. P., Schumann, D. W., Schneier, W. L., & Warren, W. L. (1994). Advertising repetition and variation strategies: Implications for understanding attitude strength. Journal of Consumer Research, 21, 176–189.

Haugtvedt, C. P., Shakarchi, R. J., Samuelson, B. M., & Liu, K. (2004). Consumer psychology and attitude change. In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion (pp. 283–297). Mahwah, NJ: Lawrence Erlbaum.

Hausenblas, H. A., Carron, A. V., & Mack, D. E. (1997). Application of the theories of reasoned action and planned behavior to exercise behavior: A meta-analysis. Journal of Sport and Exercise Psychology, 19, 36–51.

Head, K. J., Noar, S. M., Iannarino, N. T., & Harrington, N. G. (2013). Efficacy of text messaging-based interventions for health promotion: A meta-analysis. Social Science and Medicine, 97, 41–48.

Hecht, M. L., Graham, J. W., & Elek, E. (2006). The drug resistance strategies intervention: Program effects on substance use. Health Communication, 20, 267–276.

Hedeker, D., Flay, B. R., & Petraitis, J. (1996). Estimating individual influences of behavioral intentions: An application of random-effects modeling to the theory of reasoned action. Journal of Consulting and Clinical Psychology, 64, 109–120.

Hefner, D., Rothmund, T., Klimmt, C., & Gollwitzer, M. (2011). Implicit measures and media effects research: Challenges and opportunities. Communication Methods and Measures, 5, 181–202.

Heider, F. (1946). Attitudes and cognitive organization. Journal of Psychology, 21, 107–112.

Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

Heitland, K., & Bohner, G. (2010). Reducing prejudice via cognitive dissonance: Individual differences in preference for consistency moderate the effects of counter-attitudinal advocacy. Social Influence, 5, 164–181.

Henley, N., & Donovan, R. J. (2003). Young people’s response to death threat appeals: Do they really feel immortal? Health Education Research, 18, 1–14.

Herek, G. M. (1987). Can functions be measured? A new perspective on the functional approach to attitudes. Social Psychology Quarterly, 50, 285–303.

Herek, G. M. (2000). The social construction of attitudes: Functional consensus and divergence in the U.S. public’s reactions to AIDS. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes (pp. 325–364). Mahwah, NJ: Lawrence Erlbaum.

Herek, G. M., & Capitanio, J. P. (1998). Symbolic prejudice or fear of infection? A functional analysis of AIDS-related stigma among heterosexual adults. Basic and Applied Social Psychology, 20, 230–241.

Herr, P. M. (1995). Whither fact, artifact, and attitude: Reflections on the theory of reasoned action. Journal of Consumer Psychology, 4, 371–380.

Herzog, T. A. (2008). Analyzing the transtheoretical model using the
framework of Weinstein, Rothman, and Sutton (1998): The example of
smoking cessation. Health Psychology, 27, 548–556.

Hether, H. J., Huang, G. C., Beck, V., Murphy, S. T., & Valente, T. W.
(2008). Entertainment-education in a media-saturated environment:
Examining the impact of single and multiple exposures to breast cancer
storylines on two popular medical dramas. Journal of Health
Communication, 13, 808–823.

Hettema, J. E., & Hendricks, P. S. (2010). Motivational interviewing for smoking cessation: A meta-analytic review. Journal of Consulting and Clinical Psychology, 78, 868–884.

Hetts, J. J., Boninger, D. S., Armor, D. A., Gleicher, F., & Nathanson, A.
(2000). The influence of anticipated counterfactual regret on behavior.
Psychology and Marketing, 17, 345–368.

Hewes, D. E. (1983). Confessions of a methodological puritan: A response to Jackson and Jacobs. Human Communication Research, 9, 187–191.

Hewgill, M. A., & Miller, G. R. (1965). Source credibility and response to fear-arousing communications. Speech Monographs, 32, 95–101.

Hibbert, S., Smith, A., Davies, A., & Ireland, F. (2007). Guilt appeals: Persuasion knowledge and charitable giving. Psychology and Marketing, 24, 723–742.

Higgins, E. T. (1998). Promotion and prevention: Regulatory focus as a motivational principle. Advances in Experimental Social Psychology, 30, 1–46.

Highhouse, S. (2009). Designing experiments that generalize. Organizational Research Methods, 12, 554–566.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of
credibility assessment: Construct, heuristics, and interaction in context.
Information Processing & Management, 44, 1467–1484.

Hilmert, C. J., Kulik, J. A., & Christenfeld, N. J. S. (2006). Positive and negative opinion modeling: The influence of another’s similarity and dissimilarity. Journal of Personality & Social Psychology, 90, 440–452.

Himmelfarb, S., & Arazi, D. (1974). Choice and source attractiveness in exposure to discrepant messages. Journal of Experimental Social Psychology, 10, 516–527.

Hing, L. S. S., Li, W., & Zanna, M. P. (2002). Inducing hypocrisy to reduce prejudicial responses among aversive racists. Journal of Experimental Social Psychology, 38, 71–78.

Hinyard, L. J., & Kreuter, M. W. (2007). Using narrative communication as a tool for health behavior change: A conceptual, theoretical, and empirical overview. Health Education and Behavior, 34, 777–792.

Hirsh, J. B., Kang, S. K., & Bodenhausen, G. V. (2012). Personalized persuasion: Tailoring persuasive appeals to recipient personality traits. Psychological Science, 23, 578–581.

Hodson, G., Maio, G. R., & Esses, V. M. (2001). The role of attitudinal ambivalence in susceptibility to consensus information. Basic and Applied Social Psychology, 23, 197–205.

Hoeken, H., & Geurts, D. (2005). The influence of exemplars in fear appeals on the perception of self-efficacy and message acceptance. Information Design Journal + Document Design, 13, 238–248.

Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions, and organizations across nations (2nd ed.). Thousand Oaks, CA: Sage.

Høie, M., Moan, I. S., Rise, J., & Larsen, E. (2012). Using an extended
version of the theory of planned behaviour to predict smoking cessation
in two age groups. Addiction Research and Theory, 20, 42–54.

Holbrook, M. B. (1977). Comparing multi-attribute attitude models by optimal scaling. Journal of Consumer Research, 4, 165–171.

Holbrook, M. B. (1978). Beyond attitude structure: Toward the informational determinants of attitude. Journal of Marketing Research, 15, 545–556.

Holbrook, M. B., & Hulbert, J. M. (1975). Multi-attribute attitude models: A comparative analysis. Advances in Consumer Research, 2, 375–388.

Hong, S., & Park, H. S. (2012). Computer-mediated persuasion in online reviews: Statistical versus narrative evidence. Computers in Human Behavior, 28, 906–919.

Hong, T. (2006). The influence of structural and message features on Web site credibility. Journal of the American Society for Information Science and Technology, 57, 114–127.

Hopfer, S. (2012). Effects of narrative HPV vaccination intervention aimed at reaching college women: A randomized controlled trial. Prevention Science, 13, 173–182.

Horai, J., Naccari, N., & Fatoullah, E. (1974). The effects of expertise and physical attractiveness upon opinion agreement and liking. Sociometry, 37, 601–606.

Hornikx, J., & Hoeken, H. (2007). Cultural differences in the persuasiveness of evidence types and evidence quality. Communication Monographs, 74, 443–463.

Hornikx, J., & O’Keefe, D. J. (2009). Adapting consumer advertising
appeals to cultural values: A meta-analytic review of effects on
persuasiveness and ad liking. Communication Yearbook, 33, 39–71.

Hosseinzadeh, H., & Hossain, S. Z. (2011). Functional analysis of HIV/AIDS stigma: Consensus or divergence? Health Education & Behavior, 38, 584–595.

Hovland, C. I., Harvey, O. J., & Sherif, M. (1957). Assimilation and contrast effects in reactions to communication and attitude change. Journal of Abnormal and Social Psychology, 55, 244–252.

Howard, D. J., & Kerin, R. A. (2011). The effects of name similarity on message processing and persuasion. Journal of Experimental Social Psychology, 47, 63–71.

Howell, J. L., & Shepperd, J. A. (2012). Reducing information avoidance through affirmation. Psychological Science, 23, 141–145.

Hoyt, W. T. (1996). Antecedents and effects of perceived therapist credibility: A meta-analysis. Journal of Counseling Psychology, 43, 430–447.

Hsieh, G., Hudson, S. E., & Kraut, R. E. (2011). Donate for credibility: How contribution incentives can improve credibility. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) (pp. 3435–3438). New York: Association for Computing Machinery. doi:10.1145/1978942.1979454.

Hu, Y. F., & Sundar, S. S. (2010). Effects of online health sources on credibility and behavioral intentions. Communication Research, 37, 105–132.

Hübner, G., & Kaiser, F. G. (2006). The moderating role of the attitude-subjective norms conflict on the link between moral norms and intention. European Psychologist, 11, 99–109.

Huhmann, B. A., & Brotherton, T. P. (1997). A content analysis of guilt appeals in popular magazine advertisements. Journal of Advertising, 26(2), 35–45.

Hukkelberg, S. S., Hagtvet, K. A., & Kovac, V. B. (2014). Latent interaction effects in the theory of planned behaviour applied to quitting smoking. British Journal of Health Psychology, 19, 83–100.

Hullett, C. R. (2002). Charting the process underlying the change of value-expressive attitudes: The importance of value-relevance in predicting the matching effect. Communication Monographs, 69, 158–178.

Hullett, C. R. (2004). Using functional theory to promote sexually transmitted disease (STD) testing: The impact of value-expressive messages and guilt. Communication Research, 31, 363–396.

Hullett, C. R. (2005). The impact of mood on persuasion: A meta-analysis. Communication Research, 32, 423–442.

Hullett, C. R. (2006). Using functional theory to promote HIV testing: The impact of value-expressive messages, uncertainty, and fear. Health Communication, 20, 57–67.

Hullett, C. R., & Boster, F. J. (2001). Matching messages to the values underlying value-expressive and social-adjustive attitudes: Reconciling an old theory with a contemporary measurement approach. Communication Monographs, 68, 133–153.

Hunt, S. D. (1970). Post-transaction communications and dissonance reduction. Journal of Marketing, 34(3), 46–51.

Hurwitz, J. (1986). Issue perception and legislative decision making: An application of social judgment theory. American Politics Quarterly, 14, 150–185.

Hurwitz, S. D., Miron, M. S., & Johnson, B. T. (1992). Source credibility and the language of expert testimony. Journal of Applied Social Psychology, 24, 1909–1939.

Huskinson, T. L. H., & Haddock, G. (2004). Individual differences in attitude structure: Variance in the chronic reliance on affective and cognitive information. Journal of Experimental Social Psychology, 40, 82–90.

Hustinx, L., van Enschot, R., & Hoeken, H. (2007). Argument quality in the elaboration likelihood model: An empirical study of strong and weak arguments in a persuasive message. In F. H. van Eemeren, J. A. Blair, C. A. Willard, & B. Garssen (Eds.), Proceedings of the Sixth Conference of the International Society for the Study of Argumentation (pp. 651–657). Amsterdam: Sic Sat.

Huston, T. L., & Levinger, G. (1978). Interpersonal attraction and relationships. Annual Review of Psychology, 29, 115–156.

Hutchison, A. J., Breckon, J. D., & Johnston, L. H. (2009). Physical activity behavior change interventions based on the transtheoretical model: A systematic review. Health Education and Behavior, 36, 829–845.

Hyde, J., Hankins, M., Deale, A., & Marteau, T. M. (2008). Interventions to increase self-efficacy in the context of addiction behaviours: A systematic literature review. Journal of Health Psychology, 13, 607–623.

Igartua, J.-J. (2010). Identification with characters and narrative persuasion through fictional feature films. Communications, 35, 347–373.

Igartua, J.-J., & Barrios, I. (2012). Changing real-world beliefs with
controversial movies: Processes and mechanisms of narrative
persuasion. Journal of Communication, 62, 514–531.

Infante, D. A. (1978). Similarity between advocate and receiver: The role of instrumentality. Central States Speech Journal, 29, 187–193.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2, 696–701.

Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19, 640–648.

Ito, T. A., & Cacioppo, J. T. (2007). Attitudes as mental and neural states of readiness: Using physiological measures to study implicit attitudes. In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp. 125–158). New York: Guilford.

Ivanov, B. (2012). Designing inoculation messages for health communication campaigns. In H. Cho (Ed.), Health communication message design: Theory and practice (pp. 73–93). Los Angeles: Sage.

Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59, 19–39.

Izuma, K., & Murayama, K. (2013). Choice-induced preference change in the free-choice paradigm: A critical methodological review. Frontiers in Psychology, 4, 41.

Jaccard, J., Radecki, C., Wilson, T., & Dittus, P. (1995). Methods for
identifying consequential beliefs: Implications for understanding
attitude strength. In R. E. Petty & J. A. Krosnick (Eds.), Attitude
strength: Antecedents and consequences (pp. 337–359). Mahwah, NJ:
Lawrence Erlbaum.

Jackson, S. (1992). Message effects research: Principles of design and
analysis. New York: Guilford.

Jackson, S. (1993). How to do things to words: The experimental manipulation of message variables. Southern Communication Journal, 58, 103–114.

Jackson, S., & Brashers, D. (1994). M > 1: Analysis of treatment × replication designs. Human Communication Research, 20, 356–389.

Jackson, S., & Jacobs, S. (1983). Generalizing about messages: Suggestions for design and analysis of experiments. Human Communication Research, 9, 169–181.

Jackson, S., O’Keefe, D. J., & Brashers, D. (1994). The messages replication factor: Methods tailored to messages as objects of study. Journalism Quarterly, 71, 984–996.

Jacobson, R. P., Mortensen, C. R., & Cialdini, R. B. (2011). Bodies obliged and unbound: Differentiated response tendencies for injunctive and descriptive social norms. Journal of Personality and Social Psychology, 100, 433–448.

Janis, I. L., & Mann, L. (1968). A conflict-theory approach to attitude change and decision making. In A. G. Greenwald, T. C. Brock, & T. M. Ostrom (Eds.), Psychological foundations of attitudes (pp. 327–360). New York: Academic Press.

Janssen, L., Fennis, B. M., Pruyn, A. T. H., & Vohs, K. D. (2008). The path of least resistance: Regulatory resource depletion and the effectiveness of social influence techniques. Journal of Business Research, 61, 1041–1046.

Jemmott, J. B. (2012). The reasoned action approach in HIV risk-reduction strategies for adolescents. Annals of the American Academy of Political and Social Science, 640, 150–172.

Jensen, C. D., Cushing, C. C., Aylward, B. S., Craig, J. T., Sorell, D. M.,
& Steele, R. G. (2011). Effectiveness of motivational interviewing
interventions for adolescent substance use behavior change: A meta-
analytic review. Journal of Consulting and Clinical Psychology, 79,
433–440.

Jensen, J. D. (2008). Scientific uncertainty in news coverage of cancer research: Effects of hedging on scientists’ and journalists’ credibility. Human Communication Research, 34, 347–369.

Jeong, E. S., Shi, Y., Baazova, A., Chiu, C., Nahai, A., Moons, W. G., & Taylor, S. E. (2011). The relation of approach/avoidance motivation and message framing to the effectiveness of charitable appeals. Social Influence, 6, 15–21.

Jeong, S.-H., & Hwang, Y. (2012). Does multitasking increase or decrease persuasion? Effects of multitasking on comprehension and counterarguing. Journal of Communication, 62, 571–587.

Jerit, J. (2009). How predictive appeals affect policy opinions. American Journal of Political Science, 53, 411–426.

Jessop, D. C., Simmonds, L. V., & Sparks, P. (2009). Motivational and behavioral consequences of self-affirmation interventions: A study of sunscreen use among women. Psychology & Health, 24, 529–544.

Jiang, L., Hoegg, J., Dahl, D. W., & Chattopadhyay, A. (2010). The persuasive role of incidental similarity on attitudes and purchase intentions in a sales context. Journal of Consumer Research, 36, 778–791.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532.

Johnson, B. B. (2005). Testing and expanding a model of cognitive processing of risk information. Risk Analysis, 25, 631–650.

Johnson, B. T. (1994). Effects of outcome-relevant involvement and prior information on persuasion. Journal of Experimental Social Psychology, 30, 556–579.

Johnson, B. T., & Eagly, A. H. (1989). Effects of involvement on persuasion: A meta-analysis. Psychological Bulletin, 106, 290–314.

Johnson, B. T., & Eagly, A. H. (1990). Involvement and persuasion: Types, traditions, and the evidence. Psychological Bulletin, 107, 375–384.

Johnson, B. T., Lin, H.-Y., Symons, C. S., Campbell, L. A., & Ekstein, G. (1995). Initial beliefs and attitudinal latitudes as factors in persuasion. Personality and Social Psychology Bulletin, 21, 502–511.

Johnson, B. T., Smith-McLallen, A., Killeya, L. A., & Levin, K. D. (2004). Truth or consequences: Overcoming resistance to persuasion with positive thinking. In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion (pp. 215–233). Mahwah, NJ: Lawrence Erlbaum.

Johnson, H. H., & Scileppi, J. A. (1969). Effects of ego-involvement conditions on attitude change to high and low credibility communicators. Journal of Personality and Social Psychology, 13, 31–36.

Johnson, R. W., Kelly, R. J., & LeBlanc, B. A. (1995). Motivational basis of dissonance: Aversive consequences or inconsistency. Personality and Social Psychology Bulletin, 21, 850–855. [Erratum notice: Personality and Social Psychology Bulletin, 22 (1996), 222.]

Jonas, K., Broemer, P., & Diehl, M. (2000). Experienced ambivalence as a moderator of the consistency between attitudes and behaviors. Zeitschrift für Sozialpsychologie, 31, 153–165.

Jonas, K., Diehl, M., & Brömer, P. (1997). Effects of attitudinal ambivalence on information processing and attitude-intention consistency. Journal of Experimental Social Psychology, 33, 190–210.

Jones, J. L., & Leary, M. R. (1994). Effects of appearance-based admonitions against sun exposure on tanning intentions in young adults. Health Psychology, 13, 86–90.

Jones, R. A., & Brehm, J. W. (1967). Attitudinal effects of communicator attractiveness when one chooses to listen. Journal of Personality and Social Psychology, 6, 64–70.

Jordan, A., Piotrowski, J. T., Bleakley, A., & Mallya, G. (2012). Developing media interventions to reduce household sugar-sweetened beverage consumption. Annals of the American Academy of Political and Social Science, 640, 118–135.

Joule, R. V., & Martinie, M. A. (2008). Forced compliance, misattribution and trivialization. Social Behavior and Personality, 36, 1205–1212.

Judah, G., Gardner, B., & Aunger, R. (2013). Forming a flossing habit: An
exploratory study of the psychological determinants of habit formation.
British Journal of Health Psychology, 18, 338–353.

Judd, C. M., Kenny, D. A., & Krosnick, J. A. (1983). Judging the positions
of political candidates: Models of assimilation and contrast. Journal of
Personality and Social Psychology, 44, 952–963.

Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103, 54–69.

Julka, D. L., & Marsh, K. L. (2005). An attitude functions approach to increasing organ donation participation. Journal of Applied Social Psychology, 35, 821–849.

Jung, J. M., & Kellaris, J. M. (2004). Cross-national differences in proneness to scarcity effects: The moderating roles of familiarity, uncertainty avoidance, and need for cognitive closure. Psychology and Marketing, 21, 739–753.

Jung, T. J., & Heald, G. R. (2009). The effects of discriminate message interventions on behavioral intentions to engage in physical activities. Journal of American College Health, 57, 527–535.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.

Kaiser, F. G., Hübner, G., & Bogner, F. X. (2005). Contrasting the theory of planned behavior with the value-belief-norm model in explaining conservation behavior. Journal of Applied Social Psychology, 35, 2150–2170.

Kaiser, F. G., & Schultz, P. W. (2009). The attitude-behavior relationship: A test of three models of the moderating role of behavioral difficulty. Journal of Applied Social Psychology, 39, 186–207.

Kaiser, H. F. (1960). Directional statistical decisions. Psychological Review, 67, 160–167.

Kamins, M. A. (1990). An investigation into the “match-up” hypothesis in celebrity advertising: When beauty may be only skin deep. Journal of Advertising, 19(1), 4–13.

Kamins, M. A., & Assael, H. (1987a). Moderating disconfirmation of
expectations through the use of two-sided appeals: A longitudinal
approach. Journal of Economic Psychology, 8, 237–254.

Kamins, M. A., & Assael, H. (1987b). Two-sided versus one-sided appeals: A cognitive perspective on argumentation, source derogation, and the effect of disconfirming trial on belief change. Journal of Marketing Research, 24, 29–39.

Kamins, M. A., & Marks, L. J. (1987). Advertising puffery: The impact of using two-sided claims on product attitude and purchase intention. Journal of Advertising, 16(4), 6–15.

Kang, Y., Cappella, J., & Fishbein, M. (2006). The attentional mechanism of message sensation value: Interaction between message sensation value and argument quality on message effectiveness. Communication Monographs, 73, 351–378.

Kang, Y.-S., & Kerr, P. M. (2006). Beauty and the beholder: Toward an integrative model of communication source effects. Journal of Consumer Research, 33, 123–130.

Kantola, S. J., Syme, G. J., & Campbell, N. A. (1982). The role of individual differences and external variables in a test of the sufficiency of Fishbein’s model to explain behavioral intentions to conserve water. Journal of Applied Social Psychology, 12, 70–83.

Kantola, S. J., Syme, G. J., & Campbell, N. A. (1984). Cognitive dissonance and energy conservation. Journal of Applied Psychology, 69, 416–421.

Kao, D. T. (2007). Conclusion explicitness in message communication: The roles of NFC and knowledge in attitude formation. Social Behavior and Personality, 35, 819–826.

Kaplowitz, S. A., & Fink, E. L. (1997). Message discrepancy and
persuasion. In G. A. Barnett & F. J. Boster (Eds.), Progress in
communication sciences: Vol. 13. Advances in persuasion (pp. 75–106).
Greenwich, CT: Ablex.

Karlan, D., McConnell, M., Mullainathan, S., & Zinman, J. (2010). Getting to the top of mind: How reminders increase saving. National Bureau of Economic Research Working Paper No. 16205. Retrieved from http://www.nber.org/papers/w16205.pdf

Katz, D. (1960). The functional approach to the study of attitudes. Public Opinion Quarterly, 24, 163–204.

Katz, D., McClintock, C., & Sarnoff, I. (1957). The measurement of ego defense as related to attitude change. Journal of Personality, 25, 465–474.

Keaveney, S. M., Huber, F., & Herrmann, A. (2007). A model of buyer regret: Selected prepurchase and postpurchase antecedents with consequences for the brand and the channel. Journal of Business Research, 12, 1207–1215.

Keer, M., van den Putte, B., & Neijens, P. (2010). The role of affect and
cognition in health decision making. British Journal of Social
Psychology, 49, 143–153.

Kellar, I., & Abraham, C. (2005). Randomized controlled trial of a brief research-based intervention promoting fruit and vegetable consumption. British Journal of Health Psychology, 10, 543–558.

Keller, P. A., & Lehmann, D. R. (2008). Designing effective health communications: A meta-analysis. Journal of Public Policy and Marketing, 27, 117–130.

Kelly, J. A., St. Lawrence, J. S., Stevenson, Y., Hauth, A. C., Kalichman, S. C., Diaz, Y. E., … Morgan, M. G. (1992). Community AIDS/HIV risk reduction: The effects of endorsements by popular people in three cities. American Journal of Public Health, 82, 1483–1489.

Kelly, K. J., Slater, M. D., & Karan, D. (2002). Image advertisements’ influence on adolescents’ perceptions of the desirability of beer and cigarettes. Journal of Public Policy and Marketing, 21, 295–304.

Keltner, D., & Buswell, B. N. (1996). Evidence for the distinctness of embarrassment, shame, and guilt: A study of recalled antecedents and facial expressions of emotion. Cognition and Emotion, 10, 155–171.

Kendzierski, D., & Whitaker, D. J. (1997). The role of self-schema in linking intentions with behavior. Personality and Social Psychology Bulletin, 23, 139–147.

Keng, C.-J., & Liao, T.-H. (2009). Consequences of postpurchase dissonance: The mediating role of an external information search. Social Behavior and Personality, 37, 1327–1340.

Kennedy, C., O’Reilly, K., & Sweat, M. (2009). Effectiveness of peer education interventions for HIV prevention in developing countries: A systematic review and meta-analysis. AIDS Education and Prevention, 21, 181–206.

Kennedy, M. G., O’Leary, A., Beck, V., Pollard, K., & Simpson, P. (2004). Increases in calls to the CDC National STD and AIDS Hotline following AIDS-related episodes in a soap opera. Journal of Communication, 54, 287–301.

Kennedy, N. B. (1982). Contact donors before making solicitations. Fund Raising Management, 13(9), 16–17.

Kenski, K., & Fishbein, M. (2005). The predictive benefits of importance: Do importance ratings improve the prediction of political attitudes? Journal of Applied Social Psychology, 35, 487–507.

Kenworthy, J. B., Miller, N., Collins, B. E., Read, S. J., & Earleywine, M.
(2011). A trans-paradigm theoretical synthesis of cognitive dissonance
theory: Illuminating the nature of discomfort. European Review of
Social Psychology, 22, 36–113.

Kesselheim, A. S., Robertson, C. T., Myers, J. A., Rose, S. L., Gillet, V.,
Ross, K. M., … Avorn, J. (2012). A randomized study of how
physicians interpret research funding disclosures. New England Journal
of Medicine, 367, 1119–1127.

Kessels, L. T. E., & Ruiter, R. A. C. (2012). Eye movement responses to health messages on cigarette packages. BMC Public Health, 12, 352–360.

Kidder, L. H., & Campbell, D. T. (1970). The indirect testing of social attitudes. In G. F. Summers (Ed.), Attitude measurement (pp. 333–385). Chicago: Rand McNally.

Kidwell, B., & Jewell, R. D. (2003). The moderated influence of internal control: An examination across health-related behaviors. Journal of Consumer Psychology, 13, 377–386.

Kiesler, C. A., Collins, B. E., & Miller, N. (1969). Attitude change: A critical analysis of theoretical approaches. New York: Wiley.

Kim, A., Stark, E., & Borgida, E. (2011). Symbolic politics and the
prediction of attitudes toward federal regulation of reduced-exposure
tobacco products. Journal of Applied Social Psychology, 41, 381–400.

Kim, H. S., Bigman, C. A., Leader, A. E., Lerman, C., & Cappella, J. N.
(2012). Narrative health communication and behavior change: The
influence of exemplars in the news on intention to quit smoking. Journal
of Communication, 62, 473–492.

Kim, M.-S., & Hunter, J. E. (1993a). Attitude-behavior relations: A meta-
analysis of attitudinal relevance and topic. Journal of Communication,
43(1), 101–142.

Kim, M.-S., & Hunter, J. E. (1993b). Relationships among attitudes, behavioral intentions, and behavior: A meta-analysis of past research, part 2. Communication Research, 20, 331–364.

Kim, S., McLeod, J. H., & Shantzis, C. (1989). An outcome evaluation of refusal skills program as a drug abuse prevention strategy. Journal of Drug Education, 19, 363–371.

King, A. J., Williams, E. A., Harrison, T. R., Morgan, S. E., & Havermahl, T. (2012). The “Tell Us Now” campaign for organ donation: Using message immediacy to increase donor registration rates. Journal of Applied Communication Research, 40, 229–246.

King, M. (1978). Assimilation and contrast of presidential candidates’ issue positions, 1972. Public Opinion Quarterly, 41, 515–522.

King, S. W., & Sereno, K. K. (1973). Attitude change as a function of degree and type of interpersonal similarity and message type. Western Speech, 37, 218–232.

King, W. R., & He, J. (2006). A meta-analysis of the technology acceptance model. Information & Management, 43, 740–755.

Kiviniemi, M. T., Voss-Humke, A. M., & Seifert, A. L. (2007). How do I feel about the behavior? The interplay of affective associations with behaviors and cognitive beliefs as influences on physical activity behavior. Health Psychology, 26, 152–158.

Klass, E. T. (1978). Psychological effects of immoral actions: The experimental evidence. Psychological Bulletin, 85, 756–771.

Klein, W. M. P., & Harris, P. R. (2009). Self-affirmation enhances
attentional bias toward threatening components of a persuasive message.
Psychological Science, 20, 1463–1467.

Klein, W. M. P., Lipkus, I. M., Scholl, S. M., McQueen, A., Cerully, J. L.,
& Harris, P. R. (2010). Self-affirmation moderates effects of unrealistic
optimism and pessimism on reactions to tailored risk feedback.
Psychology & Health, 25, 1195–1208.

Klock, S. J., & Traylor, M. B. (1983). Older and younger models in advertising to older consumers: An advertising effectiveness experiment. Akron Business and Economic Review, 14(4), 48–52.

Knäuper, B., McCollam, A., Rosen-Brown, A., Lacaille, J., Kelso, E., &
Roseman, M. (2011). Fruitful plans: Adding targeted mental imagery to
implementation intentions increases fruit consumption. Psychology and
Health, 26, 601–617.

Knäuper, B., Roseman, M., Johnson, P. J., & Krantz, L. H. (2009). Using mental imagery to enhance the effectiveness of implementation intentions. Current Psychology, 28, 181–186.

Knight, K. M., McGowan, L., Dickens, C., & Bundy, C. (2006). A systematic review of motivational interviewing in physical health care settings. British Journal of Health Psychology, 11, 319–332.

Knowles, E. S., & Linn, J. A. (Eds.). (2004). Resistance and persuasion. Mahwah, NJ: Lawrence Erlbaum.

Koehler, J. W. (1972). Effects on audience opinion of one-sided and two-sided speeches supporting and opposing a proposition. In T. D. Beisecker & D. W. Parson (Eds.), The process of social influence (pp. 351–369). Englewood Cliffs, NJ: Prentice Hall.

Koestner, R., Horberg, E. J., Gaudreau, P., Powers, T., Di Dio, P., Bryan, C., … Salter, N. (2006). Bolstering implementation plans for the long haul: The benefits of simultaneously boosting self-concordance or self-efficacy. Personality and Social Psychology Bulletin, 32, 1547–1558.

Kok, G., Hospers, H. J., Harterink, P., & De Zwart, O. (2007). Social-
cognitive determinants of HIV risk-taking intentions among men who
date men through the Internet. AIDS Care, 19, 410–417.

Koring, M., Richert, J., Lippke, S., Parschau, L., Reuter, T., & Schwarzer,
R. (2012). Synergistic effects of planning and self-efficacy on physical
activity. Health Education & Behavior, 39, 152–158.

Kothe, E. J., Mullan, B. A., & Amaratunga, R. (2011). Randomised controlled trial of a brief theory-based intervention promoting breakfast consumption. Appetite, 56, 148–155.

Kotowski, M. R., Smith, S. W., Johnstone, P. M., & Pritt, E. (2011). Using
the EPPM to create and evaluate the effectiveness of brochures to
reduce the risk for noise-induced hearing loss in college students. Noise
and Health, 13, 261–271.

Kraemer, H. C., Kiernan, M., Essex, M., & Kupfer, D. J. (2008). How and
why criteria defining moderators and mediators differ between the
Baron & Kenny and MacArthur approaches. Health Psychology, 27,
S101–S109.

Kraus, S. J. (1995). Attitudes and the prediction of behavior: A meta-analysis of the empirical literature. Personality and Social Psychology Bulletin, 21, 58–75.

Kreuter, M. W., Green, M. C., Cappella, J. N., Slater, M. D., Wise, M. E.,
Storey, D., … Wooley, S. (2007). Narrative communication in cancer
prevention and control: A framework to guide research and application.
Annals of Behavioral Medicine, 33, 221–235.

Kreuter, M. W., Lukwago, S. N., Bucholtz, D. C., Clark, E. M., &
Sanders-Thompson, V. (2003). Achieving cultural appropriateness in
health promotion programs: Targeted and tailored approaches. Health
Education and Behavior, 30, 133–146.

Krieger, J. L., Coveleski, S., Hecht, M. L., Miller-Day, M., Graham, J. W.,
Pettigrew, J., & Kootsikas, A. (2013). From kids, through kids, to kids:
Examining the social influence strategies used by adolescents to
promote prevention among peers. Health Communication, 28, 683–695.

Krieger, J. L., & Sarge, M. A. (2013). A serial multiple mediation model of message framing on intentions to receive the human papillomavirus (HPV) vaccine: Revisiting the role of threat and efficacy perceptions. Health Communication, 28, 5–19.

Krishnamurthy, P., & Sivaraman, A. (2002). Counterfactual thinking and advertising responses. Journal of Consumer Research, 28, 650–658.

Krosnick, J. A., & Alwin, D. F. (1989). Aging and susceptibility to attitude change. Journal of Personality and Social Psychology, 57, 416–425.

Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., & Carnot,
C. G. (1993). Attitude strength: One construct or many related
constructs? Journal of Personality and Social Psychology, 65,
1132–1151.

Krosnick, J. A., Judd, C. M., & Wittenbrink, B. (2005). The measurement of attitudes. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), Handbook of attitudes and attitude change (pp. 21–76). Mahwah, NJ: Lawrence Erlbaum.

Krosnick, J. A., & Petty, R. E. (1995). Attitude strength: An overview. In R. E. Petty & J. A. Krosnick (Eds.), Attitude strength: Antecedents and consequences (pp. 1–24). Mahwah, NJ: Lawrence Erlbaum.

Kruglanski, A. W., Chen, X., Pierro, A., Mannetti, L., Erb, H.-P., &
Spiegel, S. (2006). Persuasion according to the unimodel: Implications
for cancer communication. Journal of Communication, 56, S105–S122.

Kruglanski, A. W., & Stroebe, W. (2005). The influence of beliefs and
goals on attitudes: Issues of structure, function, and dynamics. In D.
Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of
attitudes (pp. 323–368). Mahwah, NJ: Lawrence Erlbaum.

Kruglanski, A. W., & Thompson, E. P. (1999a). The illusory second mode
or, the cue is the message. Psychological Inquiry, 10, 182–193.

Kruglanski, A. W., & Thompson, E. P. (1999b). Persuasion by a single
route: A view from the unimodel. Psychological Inquiry, 10, 83–109.

Kuhlmann, A. K. S., Kraft, J. M., Galavotti, C., Creek, T. L., Mooki, M.,
& Ntumy, R. (2008). Radio role models for the prevention of mother-to-
child transmission of HIV and HIV testing among pregnant women in
Botswana. Health Promotion International, 23, 260–268.

Kumkale, G. T., Albarracín, D., & Seignourel, P. J. (2010). The effects of
source credibility in the presence or absence of prior attitudes:
Implications for the design of persuasive communication campaigns.
Journal of Applied Social Psychology, 40, 1325–1356.

Kwak, L., Kremers, S. P. J., van Baak, M. A., & Brug, J. (2007). A poster-
based intervention to promote stair use in blue- and white-collar
worksites. Preventive Medicine, 45, 177–181.

LaBrie, J. W., Grossbard, J. R., & Hummer, J. F. (2009). Normative
misperceptions and marijuana use among male and female college
athletes. Journal of Applied Sport Psychology, 21, S77–S85.

Laczniak, R. N., Muehling, D. D., & Carlson, L. (1991). Effects of
motivation and ability on ad-induced cognitive processing. In R.
Holman (Ed.), Proceedings of the 1991 Conference of the American
Academy of Advertising (pp. 81–87). New York: D’Arcy Masius
Benton & Bowles.

Lafferty, B. A., Goldsmith, R. E., & Newell, S. J. (2002). The dual
credibility model: The influence of corporate and endorser credibility on
attitudes and purchase intentions. Journal of Marketing Theory and
Practice, 10(3), 1–12.

Lai, M. K., Ho, S. K., & Lam, T. H. (2004). Perceived peer smoking
prevalence and its association with smoking behaviours and intentions
in Hong Kong Chinese adolescents. Addiction, 99, 1195–1205.

Lajunen, T., & Rasanen, M. (2004). Can social psychological models be
used to promote bicycle helmet use among teenagers? A comparison of
the health belief model, theory of planned behavior, and the locus of
control. Journal of Safety Research, 35, 115–123.

Lally, P., & Gardner, B. (2013). Promoting habit formation. Health
Psychology Review, 7(S1), S137–S158.

Lalor, K. M., & Hailey, B. J. (1990). The effects of message framing and
feelings of susceptibility to breast cancer on reported frequency of
breast self-examination. International Quarterly of Community Health
Education, 10, 183–192.

Lam, S.-P. (2006). Predicting intention to save water: Theory of planned
behavior, response efficacy, vulnerability, and perceived efficiency of
alternative solutions. Journal of Applied Social Psychology, 36,
2803–2824.

Landman, J., & Petty, R. (2000). “It could have been you”: How states
exploit counterfactual thought to market lotteries. Psychology and
Marketing, 17, 299–321.

Landy, D. (1972). The effects of an overheard audience’s reaction and
attractiveness on opinion change. Journal of Experimental Social
Psychology, 8, 276–288.

Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007).
Understanding and using the Implicit Association Test: What we know
(so far) about the method. In B. Wittenbrink & N. Schwarz (Eds.),
Implicit measures of attitudes (pp. 59–102). New York: Guilford.

Langlois, M. A., Petosa, R., & Hallam, J. S. (1999). Why do effective
smoking prevention programs work? Student changes in social cognitive
theory constructs. Journal of School Health, 69, 326–331.

LaPiere, R. T. (1934). Attitudes vs. actions. Social Forces, 13, 230–237.

Larimer, M. E., Kaysen, D. L., Lee, C. M., Kilmer, J. R., Lewis, M. A.,
Dillworth, T., … Neighbors, C. (2009). Evaluating level of specificity of
normative referents in relation to personal drinking behavior. Journal of
Studies on Alcohol and Drugs, S16, 115–121.

Larkey, L. K., & Gonzalez, J. (2007). Storytelling for promoting colorectal
cancer prevention and early detection among Latinos. Patient Education
and Counseling, 67, 272–278.

Larkey, L. K., & Hecht, M. (2010). A model of effects of narrative as
culture-centric health promotion. Journal of Health Communication, 15,
114–135.

Larkey, L. K., & Hill, A. L. (2012). Using narratives to promote health: A
culture-centric approach. In H. Cho (Ed.), Health communication
message design: Theory and practice (pp. 95–112). Los Angeles: Sage.

Latimer, A. E., & Ginis, K. A. M. (2005a). Change in self-efficacy
following a single strength training session predicts sedentary older
adults’ subsequent motivation to join a strength training program.
American Journal of Health Promotion, 20, 135–138.

Latimer, A. E., & Ginis, K. A. M. (2005b). The importance of subjective
norms for people who care what others think of them. Psychology and
Health, 20, 53–62.

Latimer, A. E., Salovey, P., & Rothman, A. J. (2007). The effectiveness of
gain-framed messages for encouraging disease prevention behavior: Is
all hope lost? Journal of Health Communication, 12, 645–649.

Laufer, D., Silvera, D. H., McBride, J. B., & Schertzer, S. M. B. (2010).
Communicating charity successes across cultures: Highlighting
individual or collective achievement? European Journal of Marketing,
44, 1322–1333.

Lauver, D., & Knapp, T. R. (1993). Sum-of-products variables: A
methodological critique. Research in Nursing and Health, 16, 385–391.

Lavidge, R. J., & Steiner, G. A. (1961). A model for predictive
measurements of advertising effectiveness. Journal of Marketing, 25(6),
59–62.

Lavine, H., & Snyder, M. (1996). Cognitive processing and the functional
matching effect in persuasion: The mediating role of subjective
perceptions of message quality. Journal of Experimental Social
Psychology, 32, 580–604.

Lavine, H., & Snyder, M. (2000). Cognitive processes and the functional
matching effect in persuasion: Studies of personality and political
behavior. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate:
Functions of attitudes (pp. 97–131). Mahwah, NJ: Lawrence Erlbaum.

Lawton, R., Conner, M., & McEachan, R. (2009). Desire or reason:
Predicting health behaviors from affective and cognitive attitudes.
Health Psychology, 28, 56–65.

Leader, A. E., Weiner, J. L., Kelly, B. J., Hornik, R. C., & Cappella, J. N.
(2009). Effects of information framing on human papillomavirus
vaccination. Journal of Women’s Health, 18, 225–233.

Leavitt, C., & Kaigler-Evans, K. (1975). Mere similarity versus
information processing: An exploration of source and message
interaction. Communication Research, 2, 300–306.

Lechner, L., de Vries, H., & Offermans, N. (1997). Participation in a
breast cancer screening program: Influence of past behavior and
determinants on future screening participation. Preventive Medicine, 26,
473–482.

Lee, E.-J. (2008). When are strong arguments stronger than weak
arguments? Deindividuation effects on message elaboration in
computer-mediated communication. Communication Research, 35,
646–665.

Lee, G., & Lee, W. J. (2009). Psychological reactance to online
recommendation services. Information & Management, 46, 448–452.

Lee, M. J., & Bichard, S. L. (2006). Effective message design targeting
college students for the prevention of binge-drinking: Basing design on
rebellious risk-taking tendency. Health Communication, 20, 299–308.

Lennon, S. J., Davis, L. L., & Fairhurst, A. (1988). Evaluations of apparel
advertising as a function of self-monitoring. Perceptual and Motor
Skills, 66, 987–996.

Leone, L., Perugini, M., & Bagozzi, R. P. (2005). Emotions and decision
making: Regulatory focus moderates the influence of anticipated
emotions on action evaluation. Cognition and Emotion, 19, 1175–1198.

Leshner, G., Bolls, P., & Thomas, E. (2009). Scare’em or disgust’em: The
effects of graphic health promotion messages. Health Communication,
24, 447–451.

Lester, R. T., Ritvo, P., Mills, E. J., Kariri, A., Karanja, S., Chung, M. H., …
Plummer, F. A. (2010). Effects of a mobile phone short message service
on antiretroviral treatment adherence in Kenya (WelTel Kenya1): A
randomised trial. The Lancet, 376, 1838–1845.

Leventhal, H. (1970). Findings and theory in the study of fear
communications. Advances in Experimental Social Psychology, 5,
119–186.

Leventhal, H., Jones, S., & Trembly, G. (1966). Sex differences in attitude
and behavior change under conditions of fear and specific instructions.
Journal of Experimental Social Psychology, 2, 387–399.

Leventhal, H., & Mora, P. A. (2008). Predicting outcomes or modeling
process? Commentary on the health action process approach. Applied
Psychology: International Review, 57, 51–65.

Levin, I. P., & Gaeth, G. J. (1988). How consumers are affected by the
framing of attribute information before and after consuming the product.
Journal of Consumer Research, 15, 374–378.

Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not
created equal: A typology and critical analysis of framing effects.
Organizational Behavior and Human Decision Processes, 76, 149–188.

Levin, K. D., Nichols, D. R., & Johnson, B. T. (2000). Involvement and
persuasion: Attitude functions for the motivated processor. In G. R.
Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes
(pp. 163–194). Mahwah, NJ: Lawrence Erlbaum.

Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect
sizes are negatively correlated in meta-analyses: Evidence and
implications of a publication bias against non-significant findings.
Communication Monographs, 76, 286–302.

Levitan, L. C., & Visser, P. S. (2008). The impact of the social context on
resistance to persuasion: Effortful versus effortless responses to counter-
attitudinal information. Journal of Experimental Social Psychology, 44,
640–649.

Levy, D. A., Collins, B. E., & Nail, P. R. (1998). A new model of
interpersonal influence characteristics. Journal of Social Behavior and
Personality, 13, 715–733.

Lewis, M. A., & Neighbors, C. (2006). Social norms approaches using
descriptive drinking norms education: A review of the research on
personalized normative feedback. Journal of American College Health,
54, 213–218.

Libby, L. K., Shaeffer, E. M., Eibach, R. P., & Slemmer, J. A. (2007).
Picture yourself at the polls: Visual perspective in mental imagery
affects self-perception and behavior. Psychological Science, 18,
199–203.

Lieberman, D. A. (2012). Digital games for health behavior change:
Research, design, and future directions. In S. M. Noar & N. G.
Harrington (Eds.), eHealth applications: Promising strategies for
behavior change (pp. 110–127). New York: Routledge.

Lieberman, D. A. (2013). Designing digital games, social media, and
mobile technologies to motivate and support health behavior change. In
R. E. Rice & C. K. Atkin (Eds.), Public communication campaigns (4th
ed., pp. 273–287). Los Angeles: Sage.

Likert, R. (1932). A technique for the measurement of attitudes. Archives
of Psychology, 22(140), 1–55.

Lim, S. (2013). College students’ credibility judgments and heuristics
concerning Wikipedia. Information Processing and Management, 49,
405–419.

Linder, D. E., Cooper, J., & Jones, E. E. (1967). Decision freedom as a
determinant of the role of incentive magnitude in attitude change.
Journal of Personality and Social Psychology, 6, 245–254.

Linder, D. E., & Worchel, S. (1970). Opinion change as a result of
effortfully drawing a counterattitudinal conclusion. Journal of
Experimental Social Psychology, 6, 432–448.

Lindsey, L. L. M., & Ah Yun, K. (2003). Examining the persuasive effect
of statistical messages: A test of mediating relationships.
Communication Studies, 54, 306–321.

Lipkus, I. M., Green, L. G., & Marcus, A. (2003). Manipulating
perceptions of colorectal cancer threat: Implications for screening
intentions and behaviors. Journal of Health Communication, 8,
213–228.

Lippke, S., Wiedemann, A. U., Ziegelmann, J. P., Reuter, T., & Schwarzer,
R. (2009). Self-efficacy moderates the mediation of intentions into
behavior via plans. American Journal of Health Behavior, 33, 521–529.

Lippke, S., Ziegelmann, J. P., Schwarzer, R., & Velicer, W. F. (2009).
Validity of stage assessment in the adoption and maintenance of
physical activity and fruit and vegetable consumption. Health
Psychology, 28, 183–193.

Littell, J. H., & Girvin, H. (2002). Stages of change: A critique. Behavior
Modification, 26, 223–273.

Lord, K. R., Lee, M. S., & Sauer, P. L. (1995). The combined influence
hypothesis: Central and peripheral antecedents of attitude toward the ad.
Journal of Advertising, 24(1), 73–85.

Love, G. D., Mouttapa, M., & Tanjasiri, S. P. (2009). Everybody’s talking:
Using entertainment-education video to reduce barriers to discussion of
cervical cancer screening among Thai women. Health Education
Research, 24, 829–838.

Lowe, R., Eves, F., & Carroll, D. (2002). The influence of affective and
instrumental beliefs on exercise intentions and behavior: A longitudinal
analysis. Journal of Applied Social Psychology, 32, 1241–1252.

Lu, A. S., Baranowski, T., Thompson, D., & Buday, R. (2012). Story
immersion of video games for youth health promotion: A review of
literature. Games for Health Journal, 1, 199–204.

Luchok, J. A., & McCroskey, J. C. (1978). The effect of quality of
evidence on attitude change and source credibility. Southern Speech
Communication Journal, 43, 371–383.

Lupia, A., & McCubbins, M. D. (1998). The democratic dilemma: Can
citizens learn what they need to know? Cambridge, UK: Cambridge
University Press.

Luszczynska, A. (2004). Change in breast self-examination behavior:
Effects of intervention on enhancing self-efficacy. International Journal
of Behavioral Medicine, 11, 95–104.

Luszczynska, A., & Tryburcy, M. (2008). Effects of a self-efficacy
intervention on exercise: The moderating role of diabetes and
cardiovascular diseases. Applied Psychology: An International Review,
57, 644–659.

Lutz, R. J. (1975). Changing brand attitudes through modification of
cognitive structure. Journal of Consumer Research, 1(4), 49–59.

Luzzo, D. A., Hasper, P., Albert, K. A., Bibby, M. A., & Martinelli, E. A.,
Jr. (1999). Effects of self-efficacy-enhancing interventions on the
math/science self-efficacy and career interests, goals, and actions of
career undecided college students. Journal of Counseling Psychology,
46, 233–243.

Lynn, M. (1991). Scarcity effects on value: A quantitative review of the
commodity theory literature. Psychology and Marketing, 8, 43–57.

MacKenzie, S. B., Lutz, R. J., & Belch, G. E. (1986). The role of attitude
toward the ad as a mediator of advertising effectiveness: A test of
competing explanations. Journal of Marketing Research, 23, 130–143.

MacKenzie, S. B., & Spreng, R. A. (1992). How does motivation moderate
the impact of central and peripheral processing on brand attitudes and
intentions? Journal of Consumer Research, 18, 519–529.

Mackie, D. M., & Queller, S. (2000). The impact of group membership on
persuasion: Revisiting “who says what to whom with what effect?” In
D. J. Terry & M. A. Hogg (Eds.), Attitudes, behavior, and social
context: The role of norms and group-membership (pp. 135–155).
Mahwah, NJ: Lawrence Erlbaum.

Mackie, D. M., & Worth, L. T. (1989). Processing deficits and the
mediation of positive affect in persuasion. Journal of Personality and
Social Psychology, 57, 27–40.

Maddux, J. E., & Rogers, R. W. (1980). Effects of source expertness,
physical attractiveness, and supporting arguments on persuasion: A case
of brains over beauty. Journal of Personality and Social Psychology, 39,
235–244.

Magee, R. G., & Kalyanaraman, S. (2009). Effects of worldview and
mortality salience in persuasion processes. Media Psychology, 12,
171–194.

Magnini, V. P., Garcia, C., & Honeycutt, E. D. (2010). Identifying the
attributes of an effective restaurant chain endorser. Cornell Hospitality
Quarterly, 51, 238–250.

Mahler, H. I. M., Kulik, J. A., Butler, H. A., Gerrard, M., & Gibbons, F. X.
(2008). Social norms information enhances the efficacy of an
appearance-based sun protection intervention. Social Science &
Medicine, 67, 321–329.

Maio, G. R., Esses, V. M., & Bell, D. W. (2000). Examining conflict
between components of attitudes: Ambivalence and inconsistency are
distinct constructs. Canadian Journal of Behavioural Science, 32, 71–83.

Maio, G. R., Haddock, G., Watt, S. E., & Hewstone, M. (2009). Implicit
measures in applied contexts: An illustrative examination of antiracism
advertising. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes:
Insights from the new implicit measures (pp. 327–357). New York:
Psychology Press.

Maio, G. R., & Olson, J. M. (1994). Value-attitude-behaviour relations:
The moderating role of attitude functions. British Journal of Social
Psychology, 33, 301–312.

Maio, G. R., & Olson, J. M. (1995). Relations between values, attitudes,
and behavioral intentions: The moderating role of attitude function.
Journal of Experimental Social Psychology, 31, 266–285.

Maio, G. R., & Olson, J. M. (2000a). What is a “value-expressive”
attitude? In G. R. Maio & J. M. Olson (Eds.), Why we evaluate:
Functions of attitude (pp. 249–269). Mahwah, NJ: Lawrence Erlbaum.

Maio, G. R., & Olson, J. M. (Eds.). (2000b). Why we evaluate: Functions
of attitude. Mahwah, NJ: Lawrence Erlbaum.

Maloney, E. K., Lapinski, M. K., & Witte, K. (2011). Fear appeals and
persuasion: A review and update of the extended parallel process model.
Social and Personality Psychology Compass, 5, 206–219.

Malotte, C. K., Jarvis, B., Fishbein, M., Kamb, K., Iatesta, M., Hoxworth,
T., … the Project RESPECT Study Group. (2000). Stage of change
versus an integrated psychosocial theory as a basis for developing
effective behaviour change interventions. AIDS Care, 12, 357–364.

Mangleburg, T. F., Sirgy, M. J., Grewal, D., Axsom, D., Hatzios, M.,
Claiborne, C. B., & Bogle, T. (1998). The moderating effect of prior
experience in consumers’ use of user–image based versus utilitarian
cues in brand attitude. Journal of Business and Psychology, 13,
101–113.

Manis, M. (1960). The interpretation of opinion statements as a function of
recipient attitude. Journal of Abnormal and Social Psychology, 60,
340–344.

Mann, T., Sherman, D., & Updegraff, J. (2004). Dispositional motivations
and message framing: A test of the congruency hypothesis in college
students. Health Psychology, 23, 330–334.

Mannetti, L., Pierro, A., & Kruglanski, A. (2007). Who regrets more after
choosing a non-status-quo option? Post decisional regret under need for
cognitive closure. Journal of Economic Psychology, 28, 186–196.

Mannetti, L., Pierro, A., & Livi, S. (2004). Recycling: Planned and self-
expressive behaviour. Journal of Environmental Psychology, 24,
227–236.

Manning, M. (2009). The effects of subjective norms on behavior in the
theory of planned behaviour: A meta-analysis. British Journal of Social
Psychology, 48, 649–705.

Manstead, A. S. R. (2000). The role of moral norms in the attitude-
behavior relation. In D. J. Terry & M. A. Hogg (Eds.), Attitudes,
behavior, and social context: The role of norms and group membership
(pp. 11–30). Mahwah, NJ: Lawrence Erlbaum.

Marcus, A. C., Crane, L. A., Kaplan, C. R., Reading, A. E., Savage, E.,
Gunning, J., … Berek, J. S. (1992). Improving adherence to screening
follow-up among women with abnormal pap smears: Results from a
large clinic-based trial of three intervention strategies. Medical Care, 30,
216–230.

Marin, G., Marin, B. V., Perez-Stable, E. J., Sabogal, F., & Otero-Sabogal,
R. (1990). Cultural differences in attitudes and expectancies between
Hispanic and non–Hispanic white smokers. Hispanic Journal of
Behavioral Sciences, 12, 422–436.

Marquart, J., O’Keefe, G. J., & Gunther, A. C. (1995). Believing in
biotech: Farmers’ perceptions of the credibility of BGH information
sources. Science Communication, 16, 388–402.

Martin, B. A. S., Lang, B., & Wong, S. (2003). Conclusion explicitness in
advertising: The moderating role of need for cognition (NFC) and
argument quality (AQ) on persuasion. Journal of Advertising, 32(4),
57–65.

Martin, J., Slade, P., Sheeran, P., Wright, A., & Dibble, T. (2011). “If-
then” planning in one-to-one behaviour change counselling is effective
in promoting contraceptive adherence in teenagers. Journal of Family
Planning and Reproductive Health Care, 37, 85–88.

Marttila, J., & Nupponen, R. (2003). Assessing stage of change for
physical activity: How congruent are parallel methods? Health
Education Research, 18, 419–428.

Mason, T. E., & White, K. M. (2008). Applying an extended model of the
theory of planned behaviour to breast self-examination. Journal of
Health Psychology, 13, 946–956.

Masser, B., & France, C. R. (2010). An evaluation of a donation coping
brochure with Australian non-donors. Transfusion and Apheresis
Science, 43, 291–297.

Maticka-Tyndale, E., & Barnett, J. P. (2010). Peer-led interventions to
reduce HIV risk of youth: A review. Evaluation and Program Planning,
33, 98–112.

Mattern, J. L., & Neighbors, C. (2004). Social norms campaigns:
Examining the relationship between changes in perceived norms and
changes in drinking levels. Journal of Studies on Alcohol, 65, 489–493.

Mazor, K. M., Baril, J., Dugan, E., Spencer, F., Burgwinkle, P., &
Gurwitz, J. H. (2007). Patient education about anticoagulant medication:
Is narrative evidence or statistical evidence more effective? Patient
Education and Counseling, 69, 145–157.

McConnell, A. R., Niedermeier, K. E., Leibold, J. M., El-Alayli, A. G.,
Chin, P. P., & Kuiper, N. M. (2000). What if I find it cheaper
somewhere else?: Role of prefactual thinking and anticipated regret in
consumer behavior. Psychology and Marketing, 17, 281–298.

McCroskey, J. C. (1966). Scales for the measurement of ethos. Speech
Monographs, 33, 65–72.

McCroskey, J. C., & Young, T. J. (1981). Ethos and credibility: The
construct and its measurement after three decades. Central States Speech
Journal, 32, 24–34.

McCroskey, J. C., Young, T. J., & Scott, M. D. (1972). The effects of
message sidedness and evidence on inoculation against
counterpersuasion in small group communication. Speech Monographs,
39, 205–212.

McEachan, R. R. C., Conner, M., Taylor, N. J., & Lawton, R. J. (2011).
Prospective prediction of health-related behaviours with the theory of
planned behaviour: A meta-analysis. Health Psychology Review, 5,
97–144.

McGilligan, C., McClenahan, C., & Adamson, G. (2009). Attitudes and
intentions to performing testicular self-examination: Utilizing an
extended theory of planned behavior. Journal of Adolescent Health, 44,
404–406.

McGinnies, E. (1973). Initial attitude, source credibility, and involvement
as factors in persuasion. Journal of Experimental Social Psychology, 9,
285–296.

McGuire, W. J. (1964). Inducing resistance to persuasion: Some
contemporary approaches. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 1, pp. 191–229). New York:
Academic Press.

McGuire, W. J. (1985). Attitudes and attitude change. In G. Lindzey & E.
Aronson (Eds.), Handbook of social psychology (3rd ed., Vol. 2, pp.
233–346). New York: Random House.

McGuire, W. J., & Papageorgis, D. (1961). The relative efficacy of various
types of prior belief-defense in producing immunity against persuasion.
Journal of Abnormal and Social Psychology, 62, 327–337.

McIntyre, P., Barnett, M. A., Harris, R. J., Shanteau, J., Skowronski, J. J.,
& Klassen, M. (1987). Psychological factors influencing decisions to
donate organs. Advances in Consumer Research, 14, 331–334.

McKay-Nesbitt, J., Manchanda, R. V., Smith, M. C., & Huhmann, B. A.
(2011). Effects of age, need for cognition, and affective intensity on
advertising effectiveness. Journal of Business Research, 64, 12–17.

McNeil, B. J., Pauker, S. G., Sox, H. C., Jr., & Tversky, A. (1982). On the
elicitation of preferences for alternative therapies. New England Journal
of Medicine, 306, 1259–1262.

McQueen, A., & Klein, W. M. P. (2006). Experimental manipulations of
self-affirmation: A systematic review. Self and Identity, 5, 289–354.

McQueen, A., Kreuter, M. W., Kalesan, B., & Alcaraz, K. I. (2011).
Understanding narrative effects: The impact of breast cancer survivor
stories on message processing, attitudes, and beliefs among African
American women. Health Psychology, 30, 674–682.

McRee, A. L., Reiter, P. L., Chantala, K., & Brewer, N. T. (2010). Does
framing human papillomavirus vaccine as preventing cancer in men
increase vaccine acceptability? Cancer Epidemiology, Biomarkers &
Prevention, 19, 1937–1944.

Meijnders, A., Midden, C., Olofsson, A., Ohman, S., Matthes, J.,
Bondarenko, O., … Rusanen, M. (2009). The role of similarity cues in
the development of trust in sources of information about GM food. Risk
Analysis, 29, 1116–1128.

Mellor, S., Barclay, L. A., Bulger, C. A., & Kath, L. M. (2006).
Augmenting the effect of verbal persuasion on self-efficacy to serve as a
steward: Gender similarity in a union environment. Journal of
Occupational and Organizational Psychology, 79, 121–129.

Melnyk, V., van Herpen, E., Fischer, A. R. H., & van Trijp, H. C. M.
(2011). To think or not to think: The effect of cognitive deliberation on
the influence of injunctive versus descriptive social norms. Psychology
& Marketing, 28, 709–729.

Mercier, H., & Strickland, B. (2012). Evaluating arguments from the
reaction of the audience. Thinking and Reasoning, 18, 365–378.

Merrill, S., Grofman, B., & Adams, J. (2001). Assimilation and contrast
effects in voter projections of party locations: Evidence from Norway,
France, and the USA. European Journal of Political Research, 40,
199–221.

Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. M.
(2003). Credibility for the 21st century: Integrating perspectives on
source, message, and media credibility in the contemporary media
environment. Communication Yearbook, 27, 293–335.

Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and
heuristic approaches to credibility evaluation online. Journal of
Communication, 60, 413–439.

Mevissen, F. E. F., Meertens, R. M., Ruiter, R. A. C., & Schaalma, H. P.
(2010). Testing implicit assumptions and explicit recommendations: The
effects of probability information on risk perception. Journal of Health
Communication, 15, 578–589.

Meyerowitz, B. E., & Chaiken, S. (1987). The effect of message framing
on breast self-examination attitudes, intentions, and behavior. Journal of
Personality and Social Psychology, 52, 500–510.

Miarmi, L., & DeBono, K. G. (2007). The impact of distractions on
heuristic processing: Internet advertisements and stereotype use. Journal
of Applied Social Psychology, 37, 539–548.

Michie, S., Dormandy, E., French, D. P., & Marteau, T. M. (2004). Using
the theory of planned behaviour to predict screening uptake in two
contexts. Psychology and Health, 19, 705–718. [Erratum notice:
Psychology and Health, 20 (2005), 275.]

Michie, S., Dormandy, E., & Marteau, T. M. (2004). Increasing screening
uptake amongst those intending to be screened: The use of action plans.
Patient Education and Counseling, 55, 218–222.

Micu, C. C., Coulter, R. A., & Price, L. L. (2009). How product trial alters
the effects of model attractiveness. Journal of Advertising, 38(2), 69–81.

Middlestadt, S. E. (2007). What is the behavior? Strategies for selecting
the behavior to be addressed by health promotion interventions. In I.
Ajzen, D. Albarracín, & R. Hornik (Eds.), Prediction and change of
health behavior: Applying the reasoned action approach (pp. 129–147).
Mahwah, NJ: Lawrence Erlbaum.

Middlestadt, S. E. (2012). Beliefs underlying eating better and moving
more: Lessons learned from comparative salient belief elicitations with
adults and youths. Annals of the American Academy of Political and
Social Science, 640, 81–100.

Millar, M. G., & Millar, K. U. (1998). Effects of prior experience and
thought on the attitude-behavior relation. Social Behavior and
Personality, 26, 105–114.

Miller, C. H., Lane, L. T., Deatrick, L. M., Young, A. M., & Potts, K. A.
(2007). Psychological reactance and promotional health messages: The
effects of controlling language, lexical concreteness, and the restoration
of freedom. Human Communication Research, 33, 219–240.

Miller, G. R., & Baseheart, J. (1969). Source trustworthiness, opinionated
statements, and response to persuasive communication. Speech
Monographs, 36, 1–7.

Miller, G. R., & Burgoon, J. K. (1982). Factors affecting assessments of
witness credibility. In N. L. Kerr & R. M. Bray (Eds.), Psychology of
the courtroom (pp. 169–194). New York: Academic Press.

Miller, R. L. (1977). The effects of postdecisional regret on selective
exposure. European Journal of Social Psychology, 7, 121–127.

Miller, W. R., & Rollnick, S. (2002). Motivational interviewing: Preparing
people for change (2nd ed.). New York: Guilford.

Miller-Day, M., & Hecht, M. L. (2013). Narrative means to preventative
ends: A narrative engagement framework for designing prevention
interventions. Health Communication, 28, 657–670.

Mills, J., & Kimble, C. E. (1973). Opinion change as a function of
perceived similarity of the communicator and subjectivity of the issue.
Bulletin of the Psychonomic Society, 2, 35–36.

Milne, S., Orbell, S., & Sheeran, P. (2002). Combining motivational and
volitional interventions to promote exercise participation: Protection
motivation theory and implementation intentions. British Journal of
Health Psychology, 7, 163–184.

Milne, S., Sheeran, P., & Orbell, S. (2000). Prediction and intervention in
health-related behavior: A meta-analytic review of protection
motivation theory. Journal of Applied Social Psychology, 30, 106–143.

Milton, A. C., & Mullan, B. A. (2012). An application of the theory of
planned behavior: A randomized controlled food safety pilot
intervention for young adults. Health Psychology, 31, 250–259.

Miniard, P. W., & Barone, M. J. (1997). The case for noncognitive
determinants of attitude: A critique of Fishbein and Middlestadt. Journal
of Consumer Psychology, 6, 77–91.

Miron, A. M., & Brehm, J. W. (2006). Reaktanztheorie: 40 Jahre später
[Reactance theory: 40 years later]. Zeitschrift für Sozialpsychologie, 37,
9–18.

Mishra, S. I., Chavez, L. R., Magana, J. R., Nava, P., Valdez, R. B., &
Hubbell, F. A. (1998). Improving breast cancer control among Latinas:
Evaluation of a theory-based educational program. Health Education
and Behavior, 25, 653–670.

Misra, S., & Beatty, S. E. (1990). Celebrity spokesperson and brand
congruence: An assessment of recall and affect. Journal of Business
Research, 21, 159–173.

Mitchell, A. A. (1986). The effect of verbal and visual components of
advertisements on brand attitudes and attitude toward the advertisement.
Journal of Consumer Research, 13, 12–24.

Mitchell, M. M., Brown, K. M., Morris-Villagran, M., & Villagran, P. D.
(2001). The effects of anger, sadness and happiness on persuasive
message processing: A test of the negative state relief model.
Communication Monographs, 68, 347–359.

Mittal, B., Ratchford, B., & Prabhakar, P. (1990). Functional and
expressive attributes as determinants of brand-attitude. In J. N. Sheth
(Ed.), Research in marketing (Vol. 10, pp. 135–155). Greenwich, CT:
JAI.

Mittelstaedt, J. D., Riesz, P. C., & Burns, W. J. (2000). Why are
endorsements effective? Sorting among theories of product and endorser
effects. Journal of Current Issues and Research in Advertising, 22(1),
55–65.

Moan, I. S., & Rise, J. (2005). Quitting smoking: Applying an extended
version of the theory of planned behavior to predict intention and
behavior. Journal of Applied Biobehavioral Research, 10, 39–68.

Moan, I. S., & Rise, J. (2011). Predicting intentions not to “drink and
drive” using an extended version of the theory of planned behaviour.
Accident Analysis & Prevention, 43, 1378–1384.

Mondak, J. J. (1990). Perceived legitimacy of Supreme Court decisions:
Three functions of source credibility. Political Behavior, 12, 363–384.

Mongeau, P. A. (1998). Another look at fear-arousing persuasive appeals.
In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances through meta-
analysis (pp. 53–68). Cresskill, NJ: Hampton.

Mongeau, P. A. (2013). Fear appeals. In J. P. Dillard & L. Shen (Eds.),
The SAGE handbook of persuasion: Developments in theory and
practice (2nd ed., pp. 184–199). Thousand Oaks, CA: Sage.

Montaño, D. E., Thompson, B., Taylor, V. M., & Mahloch, J. (1997).
Understanding mammography intention and utilization among women
in an inner city public hospital clinic. Preventive Medicine, 26,
817–824.

Moons, W. G., & Mackie, D. M. (2007). Thinking straight while seeing
red: The influence of anger on information processing. Personality and
Social Psychology Bulletin, 33, 706–720.

Morales, A. C., Wu, E. C., & Fitzsimons, G. J. (2012). How disgust
enhances the effectiveness of fear appeals. Journal of Marketing
Research, 49, 383–393.

Morgan, S. E. (2012). Designing high sensation value messages for the
sensation seeking audience. In H. Cho (Ed.), Health communication
message design: Theory and practice (pp. 231–247). Los Angeles: Sage.

Morgan, S. E., Cole, H. P., Struttmann, T., & Piercy, L. (2002). Stories or
statistics? Farmers’ attitudes toward messages in an agricultural safety
campaign. Journal of Agricultural Safety and Health, 8, 225–239.

Morgan, S. E., Palmgreen, P., Stephenson, M. T., Hoyle, R. H., & Lorch,
E. P. (2003). Associations between message features and subjective
evaluations of the sensation value of antidrug public service
announcements. Journal of Communication, 53, 512–526.

Morman, M. T. (2000). The influence of fear appeals, message design, and
masculinity on men’s motivation to perform the testicular self-exam.
Journal of Applied Communication Research, 28, 91–116.

Morton, K., Beauchamp, M., Prothero, A., Joyce, L., Saunders, L.,
Spencer-Bowdage, S., … Pedlar, C. (2014). The effectiveness of
motivational interviewing for health behaviour change in primary care
settings: A systematic review. Health Psychology Review. doi:
10.1080/17437199.2014.882006.

Mowen, J. C., Wiener, J. L., & Joag, S. (1987). An information integration
analysis of how trust and expertise combine to influence source
credibility and persuasion. Advances in Consumer Research, 14, 564.

Moyer-Gusé, E. (2008). Toward a theory of entertainment persuasion:
Explaining the persuasive effects of entertainment-education messages.
Communication Theory, 18, 407–425.

Moyer-Gusé, E., Chung, A. H., & Jain, P. (2011). Identification with
characters and discussion of taboo topics after exposure to an
entertainment narrative about sexual health. Journal of Communication,
61, 387–406.

Moyer-Gusé, E., Jain, P., & Chung, A. H. (2012). Reinforcement or
reactance? Examining the effect of an explicit persuasive appeal
following an entertainment-education narrative. Journal of
Communication, 62, 1010–1027.

Moyer-Gusé, E., & Nabi, R. L. (2009). Explaining the effects of narrative
in an entertainment television program: Overcoming resistance to
persuasion. Human Communication Research, 36, 26–52.

Munn, W. C., & Gruner, C. R. (1981). “Sick” jokes, speaker sex, and
informative speech. Southern Speech Communication Journal, 46,
411–418.

Murphy, S. T., Frank, L. B., Chatterjee, J. S., & Baezconde-Garbanati, L.
(2013). Narrative versus nonnarrative: The role of identification,
transportation, and emotion in reducing health disparities. Journal of
Communication, 63, 116–137.

Murphy, S. T., Frank, L. B., Moran, M. B., & Patnoe-Woodley, P. (2011).
Involved, transported, or emotional? Exploring the determinants of
change in knowledge, attitudes, and behavior in entertainment-
education. Journal of Communication, 61, 407–431.

Murray-Johnson, L., Witte, K., Liu, W.-Y., Hubbell, A. P., Sampson, J., &
Morrison, K. (2001). Addressing cultural orientations in fear appeals:
Promoting AIDS-protective behaviors among Mexican immigrant and
African American adolescents and American and Taiwanese college
students. Journal of Health Communication, 6, 335–358.

Murray-Johnson, L., Witte, K., Patel, D., Orrego, V., Zuckerman, C.,
Maxfield, A. M., & Thimons, E. D. (2004). Using the extended parallel
process model to prevent noise-induced hearing loss among coal miners
in Appalachia. Health Education and Behavior, 31, 741–755.

Muthusamy, N., Levine, T. R., & Weber, R. (2009). Scaring the already
scared: Some problems with HIV/AIDS fear appeals in Namibia.
Journal of Communication, 59, 317–344.

Myers, L. B., & Horswill, M. S. (2006). Social cognitive predictors of sun
protection intention and behavior. Behavioral Medicine, 32, 57–63.

Nabi, R. L. (1998). The effect of disgust-eliciting visuals on attitudes
toward animal experimentation. Communication Quarterly, 46,
472–484.

Nabi, R. L. (2002). Discrete emotions and persuasion. In J. P. Dillard &
M. Pfau (Eds.), The persuasion handbook: Developments in theory and
practice (pp. 289–308). Thousand Oaks, CA: Sage.

Nabi, R. L. (2003). “Feeling” resistance: Exploring the role of emotionally
evocative visuals in inducing inoculation. Media Psychology, 5,
199–223.

Nabi, R. L. (2007). Emotion and persuasion: A social cognitive
perspective. In D. R. Roskos-Ewoldsen & J. L. Monahan (Eds.),
Communication and social cognition: Theories and methods (pp.
377–398). Mahwah, NJ: Lawrence Erlbaum.

Nabi, R. L. (2010). The case for emphasizing discrete emotions in
communication research. Communication Monographs, 77, 153–159.

Nail, P. R., Misak, J. E., & Davis, R. M. (2004). Self-affirmation versus
self-consistency: A comparison of two competing self-theories of
dissonance phenomena. Personality and Individual Differences, 36,
1893–1905.

Nakanishi, M., & Bettman, J. R. (1974). Attitude models revisited: An
individual level analysis. Journal of Consumer Research, 1(3), 16–21.

Nan, X. (2008). The pursuit of self-regulatory goals. Journal of
Advertising, 37(1), 17–27.

Nan, X. (2009). The influence of source credibility on attitude certainty:
Exploring the moderating effects of timing of source identification and
individual need for cognition. Psychology and Marketing, 26, 321–332.

Nan, X., & Zhao, X. (2010). The influence of liking for antismoking PSAs
on adolescents’ smoking-related behavioral intentions. Health
Communication, 25, 459–469.

Nan, X., & Zhao, X. (2012). When does self-affirmation reduce negative
responses to antismoking messages? Communication Studies, 63,
482–497.

Napper, L., Harris, P. R., & Epton, T. (2009). Developing and testing a
self-affirmation manipulation. Self and Identity, 8, 45–62.

Napper, L. E., Wood, M. M., Jaffe, A., Fisher, D. G., Reynolds, G. L., &
Klahn, J. A. (2008). Convergent and discriminant validity of three
measures of stage of change. Psychology of Addictive Behaviors, 22,
362–371.

Neimeyer, G. J., MacNair, R., Metzler, A. E., & Courchaine, K. (1991).
Changing personal beliefs: Effects of forewarning, argument quality,
prior bias, and personal exploration. Journal of Social and Clinical
Psychology, 10, 1–20.

Nelson, T. E., Clawson, R. A., & Oxley, Z. M. (1997). Media framing of a
civil liberties conflict and its effect on tolerance. American Political
Science Review, 91, 567–583.

Nelson, T. E., Oxley, Z. M., & Clawson, R. A. (1997). Toward a
psychology of framing effects. Political Behavior, 19, 221–246.

Nestler, S., & Egloff, B. (2012). Interactive effect of dispositional
cognitive avoidance, magnitude of threat, and response efficacy on the
persuasive impact of threat communications. Journal of Individual
Differences, 33, 94–100.

Newell, S. J., & Goldsmith, R. E. (2001). The development of a scale to
measure perceived corporate credibility. Journal of Business Research,
52, 235–247.

Ng, J. Y. Y., Tam, S. F., Yew, W. W., & Lam, W. K. (1999). Effects of
video modeling on self-efficacy and exercise performance of COPD
patients. Social Behavior and Personality, 27, 475–486.

Nickerson, D. W., & Rogers, T. (2010). Do you have a voting plan?
Implementation intentions, voter turnout, and organic plan making.
Psychological Science, 21, 194–199.

Niederdeppe, J., Davis, K. C., Farrelly, M. C., & Yarsevich, J. (2007).
Stylistic features, need for sensation, and confirmed recall of national
smoking prevention advertisements. Journal of Communication, 57,
272–292.

Niederdeppe, J., Kim, H. K., Lundell, H., Fazili, F., & Frazier, B. (2012).
Beyond counterarguing: Simple elaboration, complex integration, and
counterelaboration in response to variations in narrative focus and
sidedness. Journal of Communication, 62, 758–777.

Niederdeppe, J., Porticella, N., & Shapiro, M. A. (2012). Using theory to
identify beliefs associated with support for policies to raise the price of
high-fat and high-sugar foods. Journal of Health Communication, 17,
90–104.

Niederdeppe, J., Shapiro, M. A., & Porticella, N. (2011). Attributions of
responsibility for obesity: Narrative communication reduces reactive
counterarguing among liberals. Human Communication Research, 37,
295–323.

Nienhuis, A. E., Manstead, A. S. R., & Spears, R. (2001). Multiple
motives and persuasive communication: Creative elaboration as a result
of impression motivation and accuracy motivation. Personality and Social
Psychology Bulletin, 27, 118–132.

Nigbur, D., Lyons, E., & Uzzell, D. (2010). Attitudes, norms, identity and
environmental behaviour: Using an expanded theory of planned
behaviour to predict participation in a kerbside recycling programme.
British Journal of Social Psychology, 49, 259–284.

Noar, S. M., Benac, C. N., & Harris, M. S. (2007). Does tailoring matter?
Meta-analytic review of tailored print health behavior change
interventions. Psychological Bulletin, 133, 673–693.

Noar, S. M., & Mehrotra, P. (2011). Toward a new methodological
paradigm for testing theories of health behavior and health behavior
change. Patient Education and Counseling, 82, 468–474.

Nocon, M., Müller-Riemenschneider, F., Nitzschke, K., & Willich, S. N.
(2010). Review article: Increasing physical activity with point-of-choice
prompts—a systematic review. Scandinavian Journal of Public Health,
38, 633–638.

Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., &
Griskevicius, V. (2008). Normative social influence is underdetected.
Personality and Social Psychology Bulletin, 34, 913–923.

Norman, P., Bennett, P., & Lewis, H. (1998). Understanding binge
drinking among young people: An application of the theory of planned
behavior. Health Education Research, 13, 163–169.

Norman, P., & Conner, M. (2006). The theory of planned behaviour and
binge drinking: Assessing the moderating role of past behaviour within
the theory of planned behaviour. British Journal of Health Psychology,
11, 55–70.

Norman, P., & Cooper, Y. (2011). The theory of planned behaviour and
breast self-examination: Assessing the impact of past behaviour, context
stability and habit strength. Psychology & Health, 26, 1156–1172.

Norman, P., & Hoyle, S. (2004). The theory of planned behavior and
breast self-examination: Distinguishing between perceived control and
self-efficacy. Journal of Applied Social Psychology, 34, 694–708.

Norman, P., & Smith, L. (1995). The theory of planned behaviour and
exercise: An investigation into the role of prior behaviour, behavioural
intentions and attitude variability. European Journal of Social
Psychology, 25, 403–415.

Norman, R. (1976). When what is said is important: A comparison of
expert and attractive sources. Journal of Experimental Social
Psychology, 12, 294–300.

Notani, A. S. (1998). Moderators of perceived behavioral control’s
predictiveness in the theory of planned behavior: A meta-analysis.
Journal of Consumer Psychology, 7, 247–271.

O’Carroll, R. E., Dryden, J., Hamilton-Barclay, T., & Ferguson, E. (2011).
Anticipated regret and organ donor registration: A pilot study. Health
Psychology, 30, 661–664.

Ohanian, R. (1990). Construction and validation of a scale to measure
celebrity endorsers’ perceived expertise, trustworthiness, and
attractiveness. Journal of Advertising, 19(3), 39–52.

O’Hara, B. S., Netemeyer, R. G., & Burton, S. (1991). An examination of
the relative effects of source expertise, trustworthiness, and likability.
Social Behavior and Personality, 19, 305–314.

Ojala, M. (2008). Recycling and ambivalence: Quantitative and qualitative
analyses of household recycling among young adults. Environment and
Behavior, 40, 777–797.

O’Keefe, D. J. (1987). The persuasive effects of delaying identification of
high- and low-credibility communicators: A meta-analytic review.
Central States Speech Journal, 38, 63–72.

O’Keefe, D. J. (1990). Persuasion: Theory and research. Newbury Park,
CA: Sage.

O’Keefe, D. J. (1993). The persuasive effects of message sidedness
variations: A cautionary note concerning Allen’s (1991) meta-analysis.
Western Journal of Communication, 57, 87–97.

O’Keefe, D. J. (1997). Standpoint explicitness and persuasive effect: A
meta-analytic review of the effects of varying conclusion articulation in
persuasive messages. Argumentation and Advocacy, 34, 1–12.

O’Keefe, D. J. (1998). Justification explicitness and persuasive effect: A
meta-analytic review of the effects of varying support articulation in
persuasive messages. Argumentation and Advocacy, 35, 61–75.

O’Keefe, D. J. (1999a). How to handle opposing arguments in persuasive
messages: A meta-analytic review of the effects of one-sided and two-
sided messages. Communication Yearbook, 22, 209–249.

O’Keefe, D. J. (1999b). Three reasons for doubting the adequacy of the
reciprocal-concessions explanation of door-in-the-face effects.
Communication Studies, 50, 211–220.

O’Keefe, D. J. (1999c). Variability of persuasive message effects: Meta-
analytic evidence and implications. Document Design, 1, 87–97.

O’Keefe, D. J. (2000). Guilt and social influence. Communication
Yearbook, 23, 67–101.

O’Keefe, D. J. (2002a). Guilt as a mechanism of persuasion. In J. P.
Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in
theory and practice (pp. 329–344). Thousand Oaks, CA: Sage.

O’Keefe, D. J. (2002b). The persuasive effects of variation in standpoint
articulation. In F. H. van Eemeren (Ed.), Advances in pragma-dialectics
(pp. 65–82). Amsterdam: Sic Sat.

O’Keefe, D. J. (2003). Message properties, mediating states, and
manipulation checks: Claims, evidence, and data analysis in
experimental persuasive message effects research. Communication
Theory, 13, 251–274.

O’Keefe, D. J. (2011a). The asymmetry of predictive and descriptive
capabilities in quantitative communication research: Implications for
hypothesis development and testing. Communication Methods and
Measures, 5, 113–125.

O’Keefe, D. J. (2011b). Generalizing about the persuasive effects of
message variations: The case of gain-framed and loss-framed appeals. In
T. van Haaften, H. Jansen, J. de Jong, & W. Koetsenruijter (Eds.),
Bending opinion: Essays on persuasion in the public domain (pp.
117–131). Leiden, Netherlands: Leiden University Press.

O’Keefe, D. J. (2012a). Conviction, persuasion, and argumentation:
Untangling the ends and means of influence. Argumentation, 26, 19–32.

O’Keefe, D. J. (2012b). From psychological theory to message design:
Lessons from the story of gain-framed and loss-framed persuasive
appeals. In H. Cho (Ed.), Health communication message design:
Theory, research, and practice (pp. 3–20). Thousand Oaks, CA: Sage.

O’Keefe, D. J. (2013a). The relative persuasiveness of different forms of
arguments-from-consequences: A review and integration. In C. T.
Salmon (Ed.), Communication Yearbook 36 (pp. 109–135). New York:
Routledge.

O’Keefe, D. J. (2013b). The relative persuasiveness of different message
types does not vary as a function of the persuasive outcome assessed:
Evidence from 29 meta-analyses of 2,062 effect sizes for 13 message
variations. In E. L. Cohen (Ed.), Communication Yearbook 37 (pp.
221–249). New York: Routledge.

O’Keefe, D. J., & Figgé, M. (1999). Guilt and expected guilt in the door-in-the-face technique. Communication Monographs, 66, 312–324.

O’Keefe, D. J., & Hale, S. L. (1998). The door-in-the-face influence
strategy: A random-effects meta-analytic review. Communication
Yearbook, 21, 1–33.

O’Keefe, D. J., & Hale, S. L. (2001). An odds-ratio-based meta-analysis of
research on the door-in-the-face influence strategy. Communication
Reports, 14, 31–38.

O’Keefe, D. J., & Jensen, J. D. (2006). The advantages of compliance or
the disadvantages of noncompliance? A meta-analytic review of the
relative persuasive effectiveness of gain-framed and loss-framed
messages. Communication Yearbook, 30, 1–43.

O’Keefe, D. J., & Jensen, J. D. (2007). The relative persuasiveness of
gain-framed and loss-framed messages for encouraging disease
prevention behaviors: A meta-analytic review. Journal of Health
Communication, 12, 623–644.

O’Keefe, D. J., & Jensen, J. D. (2009). The relative persuasiveness of
gain-framed and loss-framed messages for encouraging disease
detection behaviors: A meta-analytic review. Journal of
Communication, 59, 296–316.

O’Keefe, D. J., & Jensen, J. D. (2011). The relative effectiveness of gain-
framed and loss-framed persuasive appeals concerning obesity-related
behaviors: Meta-analytic evidence and implications. In R. Batra, P. A.
Keller, & V. J. Strecher (Eds.), Leveraging consumer psychology for
effective health communications: The obesity challenge (pp. 171–185).
Armonk, NY: M. E. Sharpe.

O’Keefe, D. J., & Shepherd, G. J. (1982). Interpersonal construct
differentiation, attitudinal confidence, and the attitude-behavior
relationship. Central States Speech Journal, 33, 416–423.

Okun, M. A., & Schultz, A. (2003). Age and motives for volunteering:
Testing hypotheses derived from socioemotional selectivity theory.
Psychology and Aging, 18, 231–239.

Orbell, S., & Hagger, M. (2006). Temporal framing and the decision to
take part in Type 2 diabetes screening: Effects of individual differences
in consideration of future consequences on persuasion. Health
Psychology, 25, 537–548.

Orbell, S., & Kyriakaki, M. (2008). Temporal framing and persuasion to
adopt preventive health behavior: Moderating effects of individual
differences in consideration of future consequences on sunscreen use.
Health Psychology, 27, 770–779.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement
of meaning. Urbana: University of Illinois Press.

Osgood, C. E., & Tannenbaum, P. H. (1955). The principle of congruity in
the prediction of attitude change. Psychological Review, 62, 42–55.

O’Sullivan, B., McGee, H., & Keegan, O. (2008). Comparing solutions to
the “expectancy-value muddle” in the theory of planned behaviour.
British Journal of Health Psychology, 13, 789–802.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E.
(2013). Predicting ethnic and racial discrimination: A meta-analysis of
IAT criterion studies. Journal of Personality and Social Psychology,
105, 171–192.

Ouellette, J. A., & Wood, W. (1998). Habit and intention in everyday life:
The multiple processes by which past behavior predicts future behavior.
Psychological Bulletin, 124, 54–74.

Paek, H.-J., Oh, H. J., & Hove, T. (2012). How media campaigns influence
children’s physical activity: Expanding the normative mechanisms of
the theory of planned behavior. Journal of Health Communication, 17,
869–885.

Pan, L. Y., & Chiou, J. S. (2011). How much can you trust online
information? Cues for perceived trustworthiness of consumer-generated
online information. Journal of Interactive Marketing, 25, 67–74.

Pan, W., & Bai, H. (2009). A multivariate approach to a meta-analytic
review of the effectiveness of the D.A.R.E. program. International
Journal of Environmental Research and Public Health, 6, 267–277.

Pappas-DeLuca, K. A., Kraft, J. M., Galavotti, C., Warner, L., Mooki, M.,
Hastings, P., … Kilmarx, P. H. (2008). Entertainment-education radio
serial drama and outcomes related to HIV testing in Botswana. AIDS
Education and Prevention, 20, 486–504.

Park, H. S., Klein, K. A., Smith, S., & Martell, D. (2009). Separating
subjective norms, university descriptive and injunctive norms, and U. S.
descriptive and injunctive norms for drinking behavior intentions.
Health Communication, 24, 746–751.

Park, H. S., Levine, T. R., Westermann, C. Y. K., Orfgen, T., & Foregger,
S. (2007). The effects of argument quality and involvement type on
attitude formation and attitude change: A test of dual-process and social
judgment predictions. Human Communication Research, 33, 81–102.

Park, H. S., & Smith, S. W. (2007). Distinctiveness and influence of
subjective norms, personal descriptive and injunctive norms, and
societal descriptive and injunctive norms on behavioral intent: A case of
two behaviors critical to organ donation. Human Communication
Research, 33, 194–218.

Parker, D., Manstead, A. S. R., & Stradling, S. G. (1995). Extending the
theory of planned behaviour: The role of personal norm. British Journal
of Social Psychology, 34, 127–137.

Parker, D., Stradling, S. G., & Manstead, A. S. R. (1996). Modifying
beliefs and attitudes to exceeding the speed limit: An intervention study
based on the theory of planned behavior. Journal of Applied Social
Psychology, 26, 1–19.

Parschau, L., Richert, J., Koring, M., Ernsting, A., Lippke, S., &
Schwarzer, R. (2012). Changes in social-cognitive variables are
associated with stage transitions in physical activity. Health Education
Research, 27, 129–140.

Parsons, A., Lycett, D., & Aveyard, P. (2011). Response to Spring et al.:
What is the best method to assess the effect of combined interventions
for smoking cessation and post-cessation weight gain? Addiction, 106,
675–676.

Parvanta, S., Gibson, L., Forquer, H., Shapiro-Luft, D., Dean, L., Freres,
D., … Hornik, R. (2013). Applying quantitative approaches to the
formative evaluation of antismoking campaign messages. Social
Marketing Quarterly, 19, 242–264.

Patzer, G. L. (1983). Source credibility as a function of communicator
physical attractiveness. Journal of Business Research, 11, 229–241.

Pearce, W. B., & Brommel, B. J. (1972). Vocalic communication in
persuasion. Quarterly Journal of Speech, 58, 298–306.

Peay, M. Y. (1980). Changes in attitudes and beliefs in two-person
interaction situations. European Journal of Social Psychology, 10,
367–377.

Pechmann, C. (1990). How do consumer inferences moderate the
effectiveness of two-sided messages? Advances in Consumer Research,
17, 337–341.

Pechmann, C. (1992). Predicting when two-sided ads will be more
effective than one-sided ads: The role of correlational and correspondent
inferences. Journal of Marketing Research, 29, 441–453.

Peng, W., Crouse, J. C., & Lin, J.-H. (2013). Using active video games for
physical activity promotion: A systematic review of the current state of
research. Health Education & Behavior, 40, 171–192.

Perez, M., Becker, C. B., & Ramirez, A. (2010). Transportability of an
empirically supported dissonance-based prevention program for eating
disorders. Body Image, 7, 179–186.

Perloff, R. M. (2014). The dynamics of persuasion: Communication and
attitudes in the 21st century (5th ed.). New York: Routledge.

Pertl, M., Hevey, D., Thomas, K., Craig, A., Chuinneagáin, S. N., &
Maher, L. (2010). Differential effects of self-efficacy and perceived
control on intention to perform skin cancer-related health behaviours.
Health Education Research, 25, 769–779.

Peters, G.-J. Y., Ruiter, R. A. C., & Kok, G. (2013). Threatening
communication: A critical re-analysis and a revised meta-analytic test of
fear appeal theory. Health Psychology Review, 7(suppl. 1), S8–S31. doi:
10.1080/17437199.2012.703527.

Peters, R. G., Covello, V. T., & McCallum, D. B. (1997). The
determinants of trust and credibility in environmental risk
communication: An empirical study. Risk Analysis, 17, 43–54.

Petkova, K. G., Ajzen, I., & Driver, B. L. (1995). Salience of anti-abortion
beliefs and commitment to an attitudinal position: On the strength,
structure, and predictive validity of anti-abortion attitudes. Journal of
Applied Social Psychology, 25, 463–483.

Petrova, P. K., & Cialdini, R. B. (2005). Fluency of consumption imagery
and the backfire effects of imagery appeals. Journal of Consumer
Research, 32, 442–452.

Petty, R. E., & Briñol, P. (2006). Understanding social judgment: Multiple
systems and processes. Psychological Inquiry, 17, 217–223.

Petty, R. E., & Briñol, P. (2010). Attitude change. In R. F. Baumeister &
E. J. Finkel (Eds.), Advanced social psychology: The state of the
science (pp. 217–259). Oxford, UK: Oxford University Press.

Petty, R. E., & Briñol, P. (2012a). The elaboration likelihood model. In P.
A. M. Van Lange, A. Kruglanski, & E. T. Higgins (Eds.), Handbook of
theories of social psychology (Vol. 1, pp. 224–245). London: Sage.

Petty, R. E., & Briñol, P. (2012b). A multiprocess approach to social
influence. In D. T. Kenrick, N. J. Goldstein, & S. L. Braver (Eds.), Six
degrees of social influence: Science, application, and the psychology of
Robert Cialdini (pp. 49–58). New York: Oxford University Press.

Petty, R. E., Briñol, P., Loersch, C., & McCaslin, M. J. (2009). The need
for cognition. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of
individual differences in social behavior (pp. 318–329). New York:
Guilford.

Petty, R. E., Briñol, P., Tormala, Z. L., & Wegener, D. T. (2007). The role
of metacognition in social psychology. In A. W. Kruglanski & E. T.
Higgins (Eds.), Social psychology: Handbook of basic principles (2nd
ed., pp. 254–284). New York: Guilford.

Petty, R. E., & Brock, T. C. (1981). Thought disruption and persuasion:
Assessing the validity of attitude change experiments. In R. E. Petty, T.
M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in persuasion
(pp. 55–79). Hillsdale, NJ: Lawrence Erlbaum.

Petty, R. E., & Cacioppo, J. T. (1977). Forewarning, cognitive responding,
and resistance to persuasion. Journal of Personality and Social
Psychology, 35, 645–655.

Petty, R. E., & Cacioppo, J. T. (1979a). Effects of forewarning of
persuasive intent and involvement on cognitive responses and
persuasion. Personality and Social Psychology Bulletin, 5, 173–176.

Petty, R. E., & Cacioppo, J. T. (1979b). Issue involvement can increase or
decrease persuasion by enhancing message-relevant cognitive
responses. Journal of Personality and Social Psychology, 37,
1915–1926.

Petty, R. E., & Cacioppo, J. T. (1986a). Communication and persuasion:
Central and peripheral routes to attitude change. New York: Springer-
Verlag.

Petty, R. E., & Cacioppo, J. T. (1986b). The elaboration likelihood model
of persuasion. In L. Berkowitz (Ed.), Advances in experimental social
psychology (Vol. 19, pp. 123–205). New York: Academic Press.

Petty, R. E., & Cacioppo, J. T. (1990). Involvement and persuasion:
Tradition versus integration. Psychological Bulletin, 107, 367–374.

Petty, R. E., Cacioppo, J. T., & Goldman, R. (1981). Personal involvement
as a determinant of argument-based persuasion. Journal of Personality
and Social Psychology, 41, 847–855.

Petty, R. E., Cacioppo, J. T., & Haugtvedt, C. P. (1992). Ego-involvement
and persuasion: An appreciative look at the Sherifs’ contribution to the
study of self-relevance and attitude change. In D. Granberg & G. Sarup
(Eds.), Social judgment and intergroup relations: Essays in honor of
Muzafer Sherif (pp. 147–174). New York: Springer-Verlag.

Petty, R. E., Cacioppo, J. T., & Heesacker, M. (1981). Effects of rhetorical
questions on persuasion: A cognitive response analysis. Journal of
Personality and Social Psychology, 40, 432–440.

Petty, R. E., Cacioppo, J. T., & Schumann, D. (1983). Central and
peripheral routes to advertising effectiveness: The moderating role of
involvement. Journal of Consumer Research, 10, 135–146.

Petty, R. E., Cacioppo, J. T., Strathman, A. J., & Priester, J. R. (2005). To
think or not to think: Exploring two routes to persuasion. In T. C. Brock
& M. C. Green (Eds.), Persuasion: Psychological insights and
perspectives (2nd ed., pp. 81–116). Thousand Oaks, CA: Sage.

Petty, R. E., Fazio, R. H., & Briñol, P. (Eds.). (2009a). Attitudes: Insights
from the new implicit measures. New York: Psychology Press.

Petty, R. E., Fazio, R. H., & Briñol, P. (2009b). The new implicit
measures: An overview. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.),
Attitudes: Insights from the new implicit measures (pp. 3–18). New
York: Psychology Press.

Petty, R. E., Haugtvedt, C. P., & Smith, S. M. (1995). Elaboration as a
determinant of attitude strength: Creating attitudes that are persistent,
resistant, and predictive of behavior. In R. E. Petty & J. A. Krosnick
(Eds.), Attitude strength: Antecedents and consequences (pp. 93–130).
Mahwah, NJ: Lawrence Erlbaum.

Petty, R. E., & Krosnick, J. A. (Eds.). (1995). Attitude strength:
Antecedents and consequences. Mahwah, NJ: Lawrence Erlbaum.

Petty, R. E., & Wegener, D. T. (1998a). Attitude change: Multiple roles for
persuasion variables. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.),
Handbook of social psychology (4th ed., Vol. 1, pp. 323–390). Boston:
McGraw-Hill.

Petty, R. E., & Wegener, D. T. (1998b). Matching versus mismatching
attitude functions: Implications for scrutiny of persuasive messages.
Personality and Social Psychology Bulletin, 24, 227–240.

Petty, R. E., & Wegener, D. T. (1999). The elaboration likelihood model:
Current status and controversies. In S. Chaiken & Y. Trope (Eds.),
Dual-process theories in social psychology (pp. 41–72). New York:
Guilford.

Petty, R. E., Wegener, D. T., Fabrigar, L. R., Priester, J. R., & Cacioppo, J.
T. (1993). Conceptual and methodological issues in the elaboration
likelihood model: A reply to the Michigan State critics. Communication
Theory, 3, 336–362.

Petty, R. E., Wells, G. L., & Brock, T. C. (1976). Distraction can enhance
or reduce yielding to propaganda: Thought disruption versus effort
justification. Journal of Personality and Social Psychology, 34,
874–884.

Petty, R. E., Wells, G. L., Heesacker, M., Brock, T. C., & Cacioppo, J. T.
(1983). The effects of recipient posture on persuasion: A cognitive
response analysis. Personality and Social Psychology Bulletin, 9,
209–222.

Petty, R. E., Wheeler, S. C., & Bizer, G. Y. (1999). Is there one persuasion
process or more? Lumping versus splitting in attitude change theories.
Psychological Inquiry, 10, 156–163.

Petty, R. E., Wheeler, S. C., & Bizer, G. Y. (2000). Attitude functions and
persuasion: An elaboration likelihood approach to matched versus
mismatched messages. In G. R. Maio & J. M. Olson (Eds.), Why we
evaluate: Functions of attitudes (pp. 133–162). Mahwah, NJ: Lawrence
Erlbaum.

Pfau, M., Holbert, R. L., Zubric, S. J., Pasha, N. H., & Lin, W.-K. (2000).
Role and influence of communication modality in the process of
resistance to persuasion. Media Psychology, 2, 1–33.

Pfau, M., & Szabo, E. A. (2004). Inoculation and resistance to persuasion.
In J. S. Seiter & R. H. Gass (Eds.), Perspectives on persuasion, social
influence, and compliance gaining (pp. 265–286). Boston: Pearson
Allyn and Bacon.

Pfau, M., Tusing, K. J., Koerner, A. F., Lee, W., Godbold, L. C., Penaloza,
L. C., … Hong, Y.-H. (1997). Enriching the inoculation construct: The
role of critical components in the process of resistance. Human Communication Research, 24, 187–215.

Pierro, A., Mannetti, L., Erb, H.-P., Spiegel, S., & Kruglanski, A. W. (2005). Informational length and order of presentation as determinants
of persuasion. Journal of Experimental Social Psychology, 41, 458–469.

Pietersma, S., & Dijkstra, A. (2011). Do behavioural health intentions engender health behaviour change? A study on the moderating role of
self-affirmation on actual fruit intake versus vegetable intake. British
Journal of Health Psychology, 16, 815–827.

Polonec, L. D., Major, A. M., & Atwood, L. E. (2006). Evaluating the believability and effectiveness of the social norms message “most
students drink 0 to 4 drinks when they party.” Health Communication,
20, 23–34.

Polyorat, K., Alden, D. L., & Kim, E. S. (2007). Impact of narrative versus
factual print ad copy on product evaluation: The mediating role of ad
message involvement. Psychology and Marketing, 24, 539–554.

Popova, L. (2012). The extended parallel process model: Illuminating the gaps in research. Health Education and Behavior, 39, 455–473.

Pornpitakpan, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social
Psychology, 34, 243–281.

Porzig-Drummond, R., Stevenson, R., Case, T., & Oaten, M. (2009). Can
the emotion of disgust be harnessed to promote hand hygiene?
Experimental and field-based tests. Social Science & Medicine, 68,
1006–1012.

Posavac, E. J., Kattapong, K. R., & Dew, D. E., Jr. (1999). Peer-based
interventions to influence health-related behaviors and attitudes: A
meta-analysis. Psychological Reports, 85, 1179–1194.

Povey, R., Conner, M., Sparks, P., James, R., & Shepherd, R. (2000).
Application of the theory of planned behaviour to two dietary
behaviours: Roles of perceived control and self-efficacy. British Journal
of Health Psychology, 5, 121–139.

Powers, P. (2007). Persuasion and coercion: A critical review of philosophical and empirical approaches. HEC Forum, 19, 125–143.

Prati, G., Pietrantoni, L., & Zani, B. (2011). Influenza vaccination: The
persuasiveness of messages among people aged 65 years and older.
Health Communication, 27, 413–420.

Pratkanis, A. R. (2007). Social influence analysis: An index of tactics. In A. R. Pratkanis (Ed.), The science of social influence: Advances and
future progress (pp. 17–82). New York: Psychology Press.

Pratkanis, A. R., & Greenwald, A. G. (1989). A sociocognitive model of attitude structure and function. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 22, pp. 245–285). New York:
Academic Press.

Pratkanis, A. R., Greenwald, A. G., Ronis, D. L., Leippe, M. R., & Baumgardner, M. H. (1986). Consumer-product and sociopolitical
messages for use in studies of persuasion. Personality and Social
Psychology Bulletin, 12, 536–538.

Praxmarer, S. (2011). How a presenter’s perceived attractiveness affects persuasion for attractiveness-unrelated products. International Journal of
Advertising, 30, 839–865.

Preacher, K. J., & Hayes, A. F. (2008). Contemporary approaches to assessing mediation in communication research. In A. F. Hayes, M. D.
Slater, & L. B. Snyder (Eds.), The Sage sourcebook of advanced data
analysis methods for communication research (pp. 13–54). Thousand
Oaks, CA: Sage.

Preiss, R. W., & Allen, M. (1998). Performing counterattitudinal
advocacy: The persuasive impact of incentives. In M. Allen & R. W.
Preiss (Eds.), Persuasion: Advances through meta-analysis (pp.
231–242). Cresskill, NJ: Hampton Press.

Prentice, D. A., & Carlsmith, K. M. (2000). Opinions and personality: On the psychological functions of attitudes and other valued possessions. In
G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of
attitudes (pp. 223–248). Mahwah, NJ: Lawrence Erlbaum.

Prentice-Dunn, S., McMath, B. F., & Cramer, R. J. (2009). Protection motivation theory and stages of change in sun protective behavior.
Journal of Health Psychology, 14, 297–305.

Prestwich, A., Conner, M., Lawton, R., Bailey, W., Litman, J., &
Molyneaux, V. (2005). Individual and collaborative implementation
intentions and the promotion of breast self-examination. Psychology and
Health, 20, 743–760. [Erratum notice: Psychology & Health, 21 (2006),
143.]

Prestwich, A., Kellar, I., Parker, R., MacRae, S., Learmonth, M., Sykes,
B., … Castle, H. (2014). How can self-efficacy be increased? Meta-
analysis of dietary interventions. Health Psychology Review, 8,
270–285.

Prestwich, A., Perugini, M., & Hurling, R. (2008). Goal desires moderate
intention-behaviour relations. British Journal of Social Psychology, 47,
49–73.

Priester, J. R., & Fleming, M. A. (1997). Artifact or meaningful theoretical constructs? Examining evidence for nonbelief- and belief-based attitude
change processes. Journal of Consumer Psychology, 6, 67–76.

Priester, J. R., & Petty, R. E. (1995). Source attributions and persuasion: Perceived honesty as a determinant of message scrutiny. Personality and
Social Psychology Bulletin, 21, 637–654.

Primack, B. A., Carroll, M. V., McNamara, M., Klem, M. L., King, B.,
Rich, M., … Nayak, S. (2012). Role of video games in improving
health-related outcomes: A systematic review. American Journal of
Preventive Medicine, 42, 630–638.

Prince, M. A., & Carey, K. B. (2010). The malleability of injunctive norms among college students. Addictive Behaviors, 35, 940–947.

Prislin, R. (1987). Attitude-behaviour relationship: Attitude relevance and behaviour relevance. European Journal of Social Psychology, 17,
483–485.

Prislin, R., & Wood, W. (2005). Social influence in attitudes and attitude
change. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The
handbook of attitudes (pp. 671–706). Mahwah, NJ: Lawrence Erlbaum.

Prochaska, J. O. (1994). Strong and weak principles for progressing from precontemplation to action on the basis of twelve problem behaviors.
Health Psychology, 13, 47–51.

Prochaska, J. O., & DiClemente, C. C. (1984). The transtheoretical approach: Crossing traditional boundaries of therapy. Homewood, IL:
Dow Jones-Irwin.

Prochaska, J. O., DiClemente, C. C., Velicer, W. F., & Rossi, J. S. (1992). Criticisms and concerns of the transtheoretical model in light of recent
research. British Journal of Addiction, 87, 825–828.

Prochaska, J. O., Redding, C. A., & Evers, K. E. (2002). The transtheoretical model and stages of change. In K. Glanz, B. K. Rimer,
& F. M. Lewis (Eds.), Health behavior and health education: Theory,
research, and practice (3rd ed., pp. 99–120). San Francisco: Jossey-
Bass.

Prochaska, J. O., Velicer, W. F., Rossi, J. S., Goldstein, M. G., Marcus, B. H., Rakowski, W., … Rossi, S. R. (1994). Stages of change and
decisional balance for 12 problem behaviors. Health Psychology, 13,
39–46.

Pryor, B., & Steinfatt, T. M. (1978). The effects of initial belief level on
inoculation theory and its proposed mechanisms. Human
Communication Research, 4, 217–230.

Puckett, J. M., Petty, R. E., Cacioppo, J. T., & Fischer, D. L. (1983). The
relative impact of age and attractiveness stereotypes on persuasion.
Journal of Gerontology, 38, 340–343.

Quick, B. L., Bates, B. R., & Quinlan, M. R. (2009). The utility of anger in
promoting clean indoor air policies. Health Communication, 24,
548–561.

Quick, B. L., & Considine, J. R. (2008). Examining the use of forceful language when designing exercise persuasive messages for adults: A
test of conceptualizing reactance arousal as a two-step process. Health
Communication, 23, 483–491.

Quick, B. L., Shen, L., & Dillard, J. P. (2013). Reactance theory and
persuasion. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of
persuasion: Developments in theory and practice (2nd ed., pp.
167–183). Thousand Oaks, CA: Sage.

Quinlan, K. B., & McCaul, K. D. (2000). Matched and mismatched interventions with young adult smokers: Testing a stage theory. Health
Psychology, 19, 165–171.

Raaijmakers, J. G. W., Schrijnemakers, J. M. C., & Gremmen, F. (1999). How to deal with “the language-as-fixed-effect fallacy”: Common
misconceptions and alternative solutions. Journal of Memory and
Language, 41, 416–426.

Radecki, C. M., & Jaccard, J. (1999). Signing an organ donation letter:
The prediction of behavior from behavioral intentions. Journal of
Applied Social Psychology, 29, 1833–1853.

Raghubir, P., & Corfman, K. (1999). When do price promotions affect pretrial brand evaluations? Journal of Marketing Research, 36, 211–222.

Rains, S. A. (2013). The nature of psychological reactance revisited: A meta-analytic review. Human Communication Research, 39, 47–73.

Rains, S. A., & Turner, M. M. (2007). Psychological reactance and persuasive health communication: A test and extension of the
intertwined model. Human Communication Research, 33, 241–269.

Randall, D. M., & Wolff, J. A. (1994). The time interval in the intention-
behaviour relationship: Meta-analysis. British Journal of Social
Psychology, 33, 405–418.

Ratcliff, C. D., Czuchry, M., Scarberry, N. C., Thomas, J. C., Dansereau, D. F., & Lord, C. G. (1999). Effects of directed thinking on intentions to
engage in beneficial activities: Actions versus reasons. Journal of
Applied Social Psychology, 29, 994–1009.

Reed, M. B., & Aspinwall, L. G. (1998). Self-affirmation reduces biased processing of health-risk information. Motivation and Emotion, 22,
99–132.

Regan, D. T., & Fazio, R. (1977). On the consistency between attitudes and behavior: Look to the method of attitude formation. Journal of
Experimental Social Psychology, 13, 28–45.

Reichert, T., Heckler, S. E., & Jackson, S. (2001). The effects of sexual
social marketing appeals on cognitive processing and persuasion.
Journal of Advertising, 30(1), 13–27.

Reid, A. E., & Aiken, L. S. (2013). Correcting injunctive norm
misperceptions motivates behavior change: A randomized controlled
sun protection intervention. Health Psychology, 32, 551–560.

Reimer, T. (2003). Direkte und indirekte Effekte der Argumentqualität: Der Einfluss der Argumentstärke auf die wahrgenommene Expertise eines Kommunikators [Direct and indirect effects of argument quality: The
impact of argument strength on the perceived expertise of a
communicator]. Zeitschrift für Sozialpsychologie, 34, 243–255.

Reinard, J. C. (1998). The persuasive effects of testimonial assertion evidence. In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances
through meta-analysis (pp. 69–86). Cresskill, NJ: Hampton.

Reinhart, A. M., & Anker, A. E. (2012). An exploration of transportation and psychological reactance in organ donation PSAs. Communication
Research Reports, 29, 274–284.

Renes, R. J., Mutsaers, K., & van Woerkum, C. (2012). The difficult
balance between entertainment and education: A qualitative evaluation
of a Dutch health-promoting documentary series. Health Promotion
Practice, 13, 259–264.

Resnicow, K., Davis, R. E., Zhang, G., Konkel, J., Strecher, V. J., Shaikh,
A. R., … Weise, C. (2008). Tailoring a fruit and vegetable intervention
on novel motivational constructs: Results of a randomized study. Annals
of Behavioral Medicine, 35, 159–170.

Rhine, R. J., & Severance, L. J. (1970). Ego-involvement, discrepancy, source credibility, and attitude change. Journal of Personality and Social
Psychology, 16, 175–190.

Rhodes, N., & Ewoldsen, D. R. (2013). Outcomes of persuasion: Behavioral, cognitive, and social. In J. P. Dillard & L. Shen (Eds.), The
SAGE handbook of persuasion: Developments in theory and practice
(2nd ed., pp. 53–69). Thousand Oaks, CA: Sage.

Rhodes, N., & Wood, W. (1992). Self-esteem and intelligence affect
influenceability: The mediating role of message reception.
Psychological Bulletin, 111, 156–171.

Rhodes, R. E., Blanchard, C. M., Courneya, K. S., & Plotnikoff, R. C. (2009). Identifying belief-based targets for the promotion of leisure-time
walking. Health Education & Behavior, 36, 381–393.

Rhodes, R. E., Blanchard, C. M., & Matheson, D. H. (2006). A multicomponent model of the theory of planned behaviour. British
Journal of Health Psychology, 11, 119–137.

Rhodes, R. E., & Dickau, L. (2013). Moderators of the intention-behaviour relationship in the physical activity domain: A systematic review.
British Journal of Sports Medicine, 37, 215–225.

Richard, R., de Vries, N. K., & van der Pligt, J. (1998). Anticipated regret
and precautionary sexual behavior. Journal of Applied Social
Psychology, 28, 1411–1428.

Richard, R., van der Pligt, J., & de Vries, N. (1996a). Anticipated affect
and behavioral choice. Basic and Applied Social Psychology, 18,
111–129.

Richard, R., van der Pligt, J., & de Vries, N. (1996b). Anticipated regret
and time perspective: Changing sexual risk-taking behavior. Journal of
Behavioral Decision Making, 9, 185–199.

Richert, J., Schüz, N., & Schüz, B. (2013). Stages of health behavior
change and mindsets. Health Psychology, 32, 273–282.

Ricketts, M., Shanteau, J., McSpadden, B., & Fernandez-Medina, K. (2010). Using stories to battle unintentional injuries: Narratives in safety
and health communication. Social Science & Medicine, 70, 1441–1449.

Riemsma, R. P., Pattenden, J., Bridle, C., Sowden, A. J., Mather, L., Watt,
I. S., & Walker, A. (2003). Systematic review of the effectiveness of
stage based interventions to promote smoking cessation. BMJ, 326,
1175–1181.

Rietveld, T., & van Hout, R. (2007). Analysis of variance for repeated
measures designs with word materials as a nested random or fixed
factor. Behavior Research Methods, 39, 735–747.

Rimal, R. N., Bose, K., Brown, J., Mkandawire, G., & Folda, L. (2009).
Extending the purview of the risk perception attitude framework:
Findings from HIV/AIDS prevention research in Malawi. Health
Communication, 24, 210–218.

Rimal, R. N., & Juon, H.-S. (2010). Use of the risk perception attitude
framework for promoting breast cancer prevention. Journal of Applied
Social Psychology, 40, 287–310.

Rimal, R. N., & Real, K. (2003). Perceived risk and efficacy beliefs as
motivators of change: Use of the risk perception attitude (RPA)
framework to understand health behaviors. Human Communication
Research, 29, 370–399.

Rise, J., Sheeran, P., & Hukkelberg, S. (2010). The role of self-identity in
the theory of planned behavior: A meta-analysis. Journal of Applied
Social Psychology, 40, 1085–1105.

Risen, J. L., & Chen, M. K. (2010). How to study choice-induced attitude change: Strategies for fixing the free-choice paradigm. Social and
Personality Psychology Compass, 4, 1151–1164.

Rittle, R. H. (1981). Changes in helping behavior: Self- versus situational perceptions as mediators of the foot-in-the-door effect. Personality and
Social Psychology Bulletin, 7, 431–437.

Rivis, A., & Sheeran, P. (2003). Descriptive norms as an additional
predictor in the theory of planned behaviour: A meta-analysis. Current
Psychology, 22, 218–233.

Robins, D., Holmes, J., & Stansbury, M. (2010). Consumer health information on the web: The relationship of visual design and
perceptions of credibility. Journal of the American Society for
Information Science and Technology, 61, 13–29.

Robinson, J. K., Turrisi, R., & Stapleton, J. (2007). Efficacy of a partner assistance intervention designed to increase skin self-examination
performance. Archives of Dermatology, 143, 37–41.

Rodgers, W. M., Conner, M., & Murray, T. C. (2008). Distinguishing among perceived control, perceived difficulty, and self-efficacy as
determinants of intentions and behaviour. British Journal of Social
Psychology, 47, 607–630.

Roehrig, M., Thompson, J. K., Brannick, M., & van den Berg, P. (2006).
Dissonance-based eating disorder prevention program: A preliminary
dismantling investigation. International Journal of Eating Disorders, 39,
1–10.

Rogers, R. W., & Prentice-Dunn, S. (1997). Protection motivation theory. In D. Gochman (Ed.), Handbook of health behavior research: Vol. 1.
Personal and social determinants (pp. 113–132). New York: Plenum.

Rokeach, M. (1973). The nature of human values. New York: Free Press.

Romero, A. A., Agnew, C. R., & Insko, C. A. (1996). The cognitive mediation hypothesis revisited: An empirical response to
methodological and theoretical criticism. Personality and Social
Psychology Bulletin, 22, 651–665.

Rosen, C. S. (2000). Is the sequencing of change processes by stage consistent across health problems? A meta-analysis. Health Psychology,
19, 593–604.

Rosen, J., & Haaga, D. A. F. (1998). Facilitating cooperation in a social dilemma: A persuasion approach. Journal of Psychology, 132, 143–153.

Rosen, S. (1961). Postdecision affinity for incompatible information. Journal of Abnormal and Social Psychology, 63, 188–190.

Rosenberg, M. J. (1956). Cognitive structure and attitudinal affect. Journal of Abnormal and Social Psychology, 53, 367–372.

Rosenberg, M. J., & Hovland, C. I. (1960). Cognitive, affective, and behavioral components of attitudes. In C. I. Hovland & M. J. Rosenberg
(Eds.), Attitude organization and change: An analysis of consistency
among attitude components (pp. 1–14). New Haven, CT: Yale
University Press.

Rosenzweig, E., & Gilovich, T. (2012). Buyer’s remorse or missed opportunity? Differential regrets for material and experiential purchases.
Journal of Personality and Social Psychology, 102, 215–223.

Roskos-Ewoldsen, D. R., & Fazio, R. H. (1992). The accessibility of source likability as a determinant of persuasion. Personality and Social
Psychology Bulletin, 18, 19–25.

Roskos-Ewoldsen, D. R., & Fazio, R. H. (1997). The role of belief accessibility in attitude formation. Southern Communication Journal,
62, 107–116.

Rosnow, R. L. (1968). One-sided vs. two-sided communication under indirect awareness of persuasive intent. Public Opinion Quarterly, 32,
95–101.

Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity
dominance, and contagion. Personality and Social Psychology Review,
5, 296–320.

Ruiter, R. A. C., Abraham, C., & Kok, G. (2001). Scary warnings and
rational precautions: A review of the psychology of fear appeals.
Psychology and Health, 16, 613–630.

Ruiter, R. A. C., Kessels, L. T. E., Peters, G.-J. Y., & Kok, G. (2014).
Sixty years of fear appeal research: Current state of the evidence.
International Journal of Psychology, 49, 63–70.

Ruiter, R. A. C., & Kok, G. (2012). Planning to frighten people? Think again! In C. Abraham & M. Kools (Eds.), Writing health
communication: An evidence-based guide (pp. 117–133). Los Angeles:
Sage.

Ruiter, R. A. C., Verplanken, B., De Cremer, D., & Kok, G. (2004). Danger and fear control in response to fear appeals: The role of need for
cognition. Basic and Applied Social Psychology, 26, 13–24.

Ruiz, S., & Sicilia, M. (2004). The impact of cognitive and/or affective
processing styles on consumer response to advertising appeals. Journal
of Business Research, 57, 657–664.

Ryerson, W. N., & Teffera, N. (2004). Organizing a comprehensive national plan for entertainment-education in Ethiopia. In A. Singhal, M.
J. Cody, E. M. Rogers, & M. Sabido (Eds.), Entertainment-education
and social change: History, research, and practice (pp. 177–190).
Mahwah, NJ: Lawrence Erlbaum.

Sagarin, B. J., & Skowronski, J. J. (2009a). The implications of imperfect measurement for free-choice carry-over effects: Reply to M. Keith
Chen’s (2008) “Rationalization and cognitive dissonance: Do choices
affect or reflect preferences?” Journal of Experimental Social
Psychology, 45, 421–423.

Sagarin, B. J., & Skowronski, J. J. (2009b). In pursuit of the proper null:
Reply to Chen and Risen (2009). Journal of Experimental Social
Psychology, 45, 428–430.

Sailors, J. J. (2011). Preventing childhood obesity by persuading mothers to breastfeed: Matching appeal type to personality. In R. Batra, P. A.
Keller, & V. J. Strecher (Eds.), Leveraging consumer psychology for
effective health communications: The obesity challenge (pp. 253–271).
Armonk, NY: M. E. Sharpe.

Sakaki, H. (1980). [Communication discrepancy and ego involvement as determinants of attitude change]. Journal of the Nihon University
College of Industrial Technology, 13, 1–9.

Salovey, P., Schneider, T. R., & Apanovitch, A. M. (2002). Message framing in the prevention and early detection of illness. In J. P. Dillard
& M. Pfau (Eds.), The persuasion handbook: Developments in theory
and practice (pp. 391–406). Thousand Oaks, CA: Sage.

Sampson, E. E., & Insko, C. A. (1964). Cognitive consistency and performance in the autokinetic situation. Journal of Abnormal and
Social Psychology, 68, 184–192.

Sanaktekin, O. H., & Sunar, D. (2008). Persuasion and relational versus personal bases of self-esteem: Does the message need to be one- or two-sided? Social Behavior and Personality, 36, 1315–1332.

Sandberg, T., & Conner, M. (2008). Anticipated regret as an additional predictor in the theory of planned behaviour: A meta-analysis. British
Journal of Social Psychology, 47, 589–606.

Sandberg, T., & Conner, M. (2009). A mere measurement effect for anticipated regret: Impacts on cervical screening attendance. British
Journal of Social Psychology, 48, 221–236.

Sarup, G., Suchner, R. W., & Gaylord, G. (1991). Contrast effects and
attitude change: A test of the two-stage hypothesis of social judgment
theory. Social Psychology Quarterly, 54, 364–372.

Saucier, D., & Webster, R. (2009). Social vigilantism: Measuring individual differences in belief superiority and resistance to persuasion.
Personality and Social Psychology Bulletin, 36, 19–32.

Sawyer, A. G., & Howard, D. J. (1991). Effects of omitting conclusions in advertisements to involved and uninvolved audiences. Journal of
Marketing Research, 28, 467–474.

Sayeed, S., Fishbein, M., Hornik, R., Cappella, J., & Ahern, R. K. (2005).
Adolescent marijuana use intentions: Using theory to plan an
intervention. Drugs: Education, Prevention, and Policy, 12, 19–34.

Schepers, J., & Wetzels, M. (2007). A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation
effects. Information and Management, 44, 90–103.

Scher, S. J., & Cooper, J. (1989). Motivational basis of dissonance: The singular role of behavioral consequences. Journal of Personality and
Social Psychology, 56, 899–906.

Schlehofer, M. M., & Thompson, S. C. (2011). Individual differences in mediators and reactions to a personal safety threat message. Basic and
Applied Social Psychology, 33, 194–205.

Schulz, P. J., & Meuffels, B. (2012). Justifying age thresholds for mammographic screening: An application of pragma-dialectical
argumentation theory. Health Communication, 27, 167–178.

Schüz, B., Sniehotta, F. F., Mallach, N., Wiedemann, A. U., & Schwarzer,
R. (2009). Predicting transitions from preintentional, intentional and
actional stages of change. Health Education Research, 24, 64–75.

Schüz, B., Sniehotta, F. F., & Schwarzer, R. (2007). Stage-specific effects
of an action control intervention on dental flossing. Health Education
Research, 22, 332–341.

Schüz, N., Schüz, B., & Eid, M. (2013). When risk communication
backfires: Randomized controlled trial on self-affirmation and reactance
to personalized risk feedback in high-risk individuals. Health
Psychology, 32, 561–570.

Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. Zanna
(Ed.), Advances in experimental social psychology (Vol. 25, pp. 1–65).
San Diego: Academic Press.

Schwarz, N. (2008). Attitude measurement. In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude change (pp. 41–60). New York:
Psychology Press.

Schwarzer, R. (2008a). Modeling health behavior change: How to predict and modify the adoption and maintenance of health behaviors. Applied
Psychology: An International Review, 57, 1–29.

Schwarzer, R. (2008b). Response: Some burning issues in research on health behavior change. Applied Psychology: An International Review,
57, 84–93.

Schwarzer, R., Cao, D. S., & Lippke, S. (2010). Stage-matched minimal interventions to enhance physical activity in Chinese adolescents.
Journal of Adolescent Health, 47, 533–539.

Schwarzer, R., & Fuchs, R. (1996). Self-efficacy and health behaviours. In M. Conner & P. Norman (Eds.), Predicting health behaviour: Research
and practice with social cognition models (pp. 163–196). Buckingham,
UK: Open University Press.

Schwarzer, R., Richert, J., Kreausukon, P., Remme, L., Wiedemann, A. U.,
& Reuter, T. (2010). Translating intentions into nutrition behaviors via
planning requires self-efficacy: Evidence from Thailand and Germany.
International Journal of Psychology, 45, 260–268.

Schweitzer, D., & Ginsburg, G. P. (1966). Factors of communicator credibility. In C. W. Backman & P. F. Secord (Eds.), Problems in social
psychology (pp. 94–102). New York: McGraw-Hill.

Schwenk, G., & Moser, G. (2009). Intention and behavior: A Bayesian meta-analysis with focus on the Ajzen-Fishbein model in the field of
environmental behavior. Quality & Quantity, 43, 743–756.

Sears, D. O. (1965). Biased indoctrination and selectivity of exposure to new information. Sociometry, 28, 363–376.

See, Y. H. M., Petty, R. E., & Evans, L. M. (2009). The impact of perceived message complexity and need for cognition on information
processing and attitudes. Journal of Research in Personality, 43,
880–889.

Segan, C. J., Borland, R., & Greenwood, K. M. (2004). What is the right
thing at the right time? Interactions between stages and processes of
change among smokers who make a quit attempt. Health Psychology,
23, 86–93.

Segar, M. L., Updegraff, J. A., Zikmund-Fisher, B. J., & Richardson, C. R. (2012). Physical activity advertisements that feature daily well-being
improve autonomy and body image in overweight women but not men.
Journal of Obesity, 2012, article ID 354721.

Seibel, C. A., & Dowd, E. T. (2001). Personality characteristics associated with psychological reactance. Journal of Clinical Psychology, 57,
963–969.

Sestir, M., & Green, M. C. (2010). You are who you watch: Identification
and transportation effects on temporary self-concept. Social Influence,
5, 272–288.

Shakarchi, R. J., & Haugtvedt, C. P. (2004). Differentiating individual differences in resistance to persuasion. In E. S. Knowles & J. A. Linn
(Eds.), Resistance and persuasion (pp. 105–113). Mahwah, NJ:
Lawrence Erlbaum.

Shani, Y., & Zeelenberg, M. (2007). When and why do we want to know?
How experienced regret promotes post-decision information search.
Journal of Behavioral Decision Making, 20, 207–222.

Sharan, M., & Valente, T. W. (2002). Spousal communication and family planning adoption: Effects of a radio drama serial in Nepal.
International Family Planning Perspectives, 28, 16–25.

Sharot, T., Fleming, S. M., Yu, X., Koster, R., & Dolan, R. J. (2012). Is
choice-induced preference change long lasting? Psychological Science,
23, 1123–1129.

Shavitt, S. (1989). Operationalizing functional theories of attitude. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure
and function (pp. 311–337). Hillsdale, NJ: Lawrence Erlbaum.

Shavitt, S. (1990). The role of attitude objects in attitude functions. Journal of Experimental Social Psychology, 26, 124–148.

Shavitt, S., & Fazio, R. H. (1990). Effects of attribute salience on the consistency of product evaluations and purchase predictions. Advances
in Consumer Research, 17, 91–97.

Shavitt, S., & Lowrey, T. M. (1992). Attitude functions in advertising effectiveness: The interactive role of product type and personality type.
Advances in Consumer Research, 19, 323–328.

Shavitt, S., & Nelson, M. R. (2000). The social-identity function in person
perception: Communicated meanings of product preferences. In G. R.
Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes
(pp. 37–57). Mahwah, NJ: Lawrence Erlbaum.

Shavitt, S., & Nelson, M. R. (2002). The role of attitude functions in persuasion and social judgment. In J. P. Dillard & M. Pfau (Eds.), The
persuasion handbook: Developments in theory and practice (pp.
137–153). Thousand Oaks, CA: Sage.

Shea, S., DuMouchel, W., & Bahamonde, L. (1996). A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical
reminder systems for preventive care in the ambulatory setting. Journal
of the American Medical Informatics Association, 3, 399–409.

Sheeran, P. (2002). Intention-behavior relations: A conceptual and empirical review. European Review of Social Psychology, 12, 1–36.

Sheeran, P., & Abraham, C. (2003). Mediator of moderators: Temporal stability of intention and the intention-behavior relation. Personality and
Social Psychology Bulletin, 29, 205–215.

Sheeran, P., Milne, S., Webb, T. L., & Gollwitzer, P. M. (2005). Implementation intentions and health behaviour. In M. Conner & P.
Norman (Eds.), Predicting health behaviour: Research and practice with
social cognition models (2nd ed., pp. 276–323). Maidenhead, UK: Open
University Press.

Sheeran, P., & Orbell, S. (1998). Do intentions predict condom use? Meta-
analysis and examination of six moderator variables. British Journal of
Social Psychology, 37, 231–250.

Sheeran, P., & Orbell, S. (1999a). Augmenting the theory of planned behavior: Roles for anticipated regret and descriptive norms. Journal of
Applied Social Psychology, 29, 2107–2142.

Sheeran, P., & Orbell, S. (1999b). Implementation intentions and repeated
behaviour: Augmenting the predictive validity of the theory of planned
behaviour. European Journal of Social Psychology, 29, 349–369.

Sheeran, P., & Orbell, S. (2000a). Self-schemas and the theory of planned
behaviour. European Journal of Social Psychology, 30, 533–550.

Sheeran, P., & Orbell, S. (2000b). Using implementation intentions to increase attendance for cervical cancer screening. Health Psychology,
19, 283–289.

Shen, L. (2013). Incidental affect and message processing: Revisiting the competing hypotheses. Communication Studies, 64, 337–352.

Shen, L., & Bigsby, E. (2013). The effects of message features: Content,
structure, and style. In J. P. Dillard & L. Shen (Eds.), The SAGE
handbook of persuasion: Developments in theory and practice (2nd ed.,
pp. 20–35). Thousand Oaks, CA: Sage.

Shen, L., & Dillard, J. P. (2014). Threat, fear, and persuasion: Review and
critique of questions about functional form. Review of Communication
Research, 2, 94–114.

Shepherd, G. J. (1985). Linking attitudes and behavioral criteria. Human Communication Research, 12, 275–284. [Erratum notice: Human
Communication Research, 12 (1985), 358.]

Sheppard, B. H., Hartwick, J., & Warshaw, P. R. (1988). The theory of reasoned action: A meta-analysis of past research with
recommendations for modifications and future research. Journal of
Consumer Research, 15, 325–343.

Sherif, C. W. (1980). Social values, attitudes, and involvement of the self. In M. M. Page (Ed.), Nebraska Symposium on Motivation 1979: Beliefs, attitudes, and values (pp. 1–64). Lincoln: University of Nebraska Press.

Sherif, C. W., Kelly, M., Rodgers, H. L., Jr., Sarup, G., & Tittler, B. I.
(1973). Personal involvement, social judgment and action. Journal of
Personality and Social Psychology, 27, 311–328.

Sherif, C. W., Sherif, M., & Nebergall, R. E. (1965). Attitude and attitude
change: The social judgment-involvement approach. Philadelphia: W.
B. Saunders.

Sherif, M., & Hovland, C. I. (1961). Social judgment: Assimilation and contrast effects in communication and attitude change. New Haven, CT:
Yale University Press.

Sherif, M., & Sherif, C. W. (1967). Attitude as the individual’s own categories: The social judgment-involvement approach to attitude and
attitude change. In C. W. Sherif & M. Sherif (Eds.), Attitude, ego-
involvement, and change (pp. 105–139). New York: Wiley.

Sherman, D. K., & Cohen, G. L. (2006). The psychology of self-defense: Self-affirmation theory. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 38, pp. 183–242). San Diego: Academic Press.

Sherman, D. K., Cohen, G. L., Nelson, L. D., Nussbaum, A. D., Bunyan, D. P., & Garcia, J. (2009). Affirmed yet unaware: Exploring the role of awareness in the process of self-affirmation. Journal of Personality and Social Psychology, 97, 745–764.

Sherman, D. K., Mann, T., & Updegraff, J. A. (2006). Approach/avoidance orientation, message framing, and health behavior: Understanding the congruency effect. Motivation and Emotion, 30, 165–169.

Sherman, R. T., & Anderson, C. A. (1987). Decreasing premature termination from psychotherapy. Journal of Social and Clinical Psychology, 5, 298–312.

Shiv, B., Edell, J. A., & Payne, J. W. (1997). Factors affecting the impact
of negatively and positively framed ad messages. Journal of Consumer
Research, 24, 285–294.

Shiv, B., Edell, J. A., & Payne, J. W. (2004). Does elaboration increase or
decrease the effectiveness of negatively versus positively framed
messages? Journal of Consumer Research, 31, 199–208.

Siegel, J. T., Alvaro, E. M., Crano, W. D., Lac, A., Ting, S., & Jones, S. P.
(2008). A quasi-experimental investigation of message appeal variations
on organ donor registration rates. Health Psychology, 27, 170–178.

Siegrist, M., Earle, T. C., & Gutscher, H. (2003). Test of a trust and
confidence model in the applied context of electromagnetic field (EMF)
risks. Risk Analysis, 23, 705–716.

Siemer, M., & Joormann, J. (2003). Power and measure of effect size in
analysis of variance with fixed versus random nested factors.
Psychological Methods, 8, 497–517.

Siero, F. W., & Doosje, B. J. (1993). Attitude change following persuasive communication: Integrating social judgment theory and the elaboration likelihood model. European Journal of Social Psychology, 23, 541–554.

Sieverding, M., Matterne, U., & Ciccarello, L. (2010). What role do social
norms play in the context of men’s cancer screening intention and
behavior? Application of an extended theory of planned behavior.
Health Psychology, 29, 72–81.

Silk, K. J., Weiner, J., & Parrott, R. L. (2005). Gene cuisine or frankenfood? The theory of reasoned action as an audience segmentation strategy for messages about genetically modified foods. Journal of Health Communication, 10, 751–767.

Silverthorne, C. P., & Mazmanian, L. (1975). The effects of heckling and
media of presentation on the impact of a persuasive communication.
Journal of Social Psychology, 96, 229–236.

Silvia, P. J. (2005). Deflecting reactance: The role of similarity in increasing compliance and reducing resistance. Basic and Applied Social Psychology, 27, 277–284.

Silvia, P. J. (2006a). Reactance and the dynamics of disagreement: Multiple paths from threatened freedom to resistance to persuasion. European Journal of Social Psychology, 36, 673–685.

Silvia, P. J. (2006b). A skeptical look at dispositional reactance. Personality and Individual Differences, 40, 1291–1297.

Simoni, J., Nelson, K., Franks, J., Yard, S., & Lehavot, K. (2011). Are
peer interventions for HIV efficacious? A systematic review. AIDS and
Behavior, 15, 1589–1595.

Simons, H. W., Berkowitz, N. N., & Moyer, R. J. (1970). Similarity, credibility, and attitude change: A review and a theory. Psychological Bulletin, 73, 1–16.

Simsekoglu, O., & Lajunen, T. (2008). Social psychology of seat belt use:
A comparison of theory of planned behavior and health belief model.
Transportation Research Part F: Traffic Psychology and Behaviour, 11,
181–191.

Sinclair, R. C., Moore, S. E., Mark, M. M., Soldat, A. S., & Lavis, C. A.
(2010). Incidental moods, source likeability, and persuasion: Liking
motivates message elaboration in happy people. Cognition & Emotion,
24, 940–961.

Sjoberg, L. (1982). Attitude-behavior correlation, social desirability, and perceived diagnostic value. British Journal of Social Psychology, 21, 283–292.

Skalski, P., Tamborini, R., Glazer, E., & Smith, S. (2009). Effects of
humor on presence and recall of persuasive messages. Communication
Quarterly, 57, 136–153.

Skowronski, J. J., & Carlston, D. E. (1989). Negativity and extremity biases in impression formation: A review of explanations. Psychological Bulletin, 105, 131–142.

Slater, M. D. (1991). Use of message stimuli in mass communication experiments: A methodological assessment and discussion. Journalism Quarterly, 68, 412–421.

Slater, M. D. (2002). Involvement as goal-directed strategic processing: Extending the elaboration likelihood model. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 175–194). Thousand Oaks, CA: Sage.

Slater, M. D., & Rouner, D. (1996). How message evaluation and source
attributes may influence credibility assessment and belief change.
Journalism and Mass Communication Quarterly, 73, 974–991.

Slater, M. D., & Rouner, D. (2002). Entertainment-education and elaboration likelihood: Understanding the processing of narrative persuasion. Communication Theory, 12, 173–191.

Slaunwhite, J. M., Smith, S. M., Fleming, M. T., & Fabrigar, L. R. (2009). Using normative messages to increase healthy behaviours. International Journal of Workplace Health Management, 2, 231–244.

Smerecnik, C. M. R., & Ruiter, R. A. C. (2010). Fear appeals in HIV prevention: The role of anticipated regret. Psychology, Health, & Medicine, 15, 550–559.

Smidt, K. E., & DeBono, K. G. (2011). On the effects of product name on
product evaluation: An individual difference perspective. Social
Influence, 6, 131–141.

Smit, E. G., van Meurs, L., & Neijens, P. C. (2006). Effects of advertising
likeability: A 10-year perspective. Journal of Advertising Research, 46,
73–83.

Smith, A. J., & Clark, R. D., III. (1973). The relationship between attitudes
and beliefs. Journal of Personality and Social Psychology, 26, 321–326.

Smith, D. C., Tabb, K. M., Fisher, D., & Cleeland, L. (2014). Drug refusal
skills training does not enhance outcomes of African American
adolescents with substance use problems. Journal of Substance Abuse
Treatment, 46, 274–279.

Smith, J. K., Gerber, A. S., & Orlich, A. (2003). Self-prophecy effects and
voter turnout: An experimental replication. Political Psychology, 24,
593–604.

Smith, J. L. (1996). Expectancy, value, and attitudinal semantics. European Journal of Social Psychology, 26, 501–506.

Smith, J. R., & McSweeney, A. (2007). Charitable giving: The effects of revised theory of planned behavior model in predicting donating intentions and behavior. Journal of Community & Applied Social Psychology, 17, 363–386.

Smith, J. R., & Terry, D. J. (2003). Attitude-behaviour consistency: The role of group norms, attitude accessibility, and mode of behavioural decision-making. European Journal of Social Psychology, 33, 591–608.

Smith, J. R., Terry, D. J., Manstead, A. S. R., Louis, W. R., Kotterman, D., & Wolfs, J. (2008). The attitude-behavior relationship in consumer conduct: The role of norms, past behaviors, and self-identity. Journal of Social Psychology, 148, 311–334.

Smith, M. B., Bruner, J. S., & White, R. W. (1956). Opinions and personality. New York: Wiley.

Smith, M. J. (1978). Discrepancy and the importance of attitudinal freedom. Human Communication Research, 4, 308–314.

Smith, R. A., & Boster, F. J. (2009). Understanding the influence of others on perceptions of a message’s advocacy: Testing a two-step model. Communication Monographs, 76, 333–350.

Smith, R. A., Downs, E., & Witte, K. (2007). Drama theory and
entertainment education: Exploring the effects of a radio drama on
behavioral intentions to limit HIV transmissions in Ethiopia.
Communication Monographs, 74, 133–153.

Smith, R. E., & Hunt, S. D. (1978). Attributional processes and effects in promotional situations. Journal of Consumer Research, 5, 149–158.

Smith, R. E., & Swinyard, W. R. (1983). Attitude-behavior consistency: The impact of product trial versus advertising. Journal of Marketing Research, 20, 257–267.

Smith, S. M., Fabrigar, L. R., & Norris, M. E. (2008). Reflecting on six decades of selective exposure research: Progress, challenges, and opportunities. Social and Personality Psychology Compass, 2, 464–493.

Smith, S. M., Haugtvedt, C. P., & Petty, R. E. (1994). Need for cognition
and the effects of repeated expression on attitude accessibility and
extremity. Advances in Consumer Research, 21, 234–237.

Smith, S. W., Atkin, C. K., Martell, D. C., Allen, R., & Hembroff, L. (2006). A social judgment theory approach to conducting formative research in a social norms campaign. Communication Theory, 16, 141–152.

Smith-McLallen, A. (2005). Is it true? (When) does it matter? The roles of likelihood and desirability in argument judgments and attitudes (Doctoral dissertation). Retrieved from UMI (UMI No. AAT-3187759).

Snyder, M. (1982). When believing means doing: Creating links between attitudes and behavior. In M. P. Zanna, E. T. Higgins, & C. P. Herman (Eds.), Consistency in social behavior: The Ontario Symposium, vol. 2 (pp. 105–130). Hillsdale, NJ: Lawrence Erlbaum.

Snyder, M., & DeBono, K. G. (1985). Appeals to image and claims about
quality: Understanding the psychology of advertising. Journal of
Personality and Social Psychology, 49, 586–597.

Snyder, M., & DeBono, K. G. (1987). A functional approach to attitudes and persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario Symposium, vol. 5 (pp. 107–125). Hillsdale, NJ: Lawrence Erlbaum.

Snyder, M., & DeBono, K. G. (1989). Understanding the functions of attitudes: Lessons from personality and social behavior. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 339–359). Hillsdale, NJ: Lawrence Erlbaum.

Snyder, M., & Gangestad, S. (1986). On the nature of self-monitoring: Matters of assessment, matters of validity. Journal of Personality and Social Psychology, 51, 125–139.

Snyder, M., & Kendzierski, D. (1982). Acting on one’s attitudes: Procedures for linking attitude and behavior. Journal of Experimental Social Psychology, 18, 165–183.

Snyder, M., & Rothbart, M. (1971). Communicator attractiveness and opinion change. Canadian Journal of Behavioural Science, 3, 377–387.

Soley, L. C. (1986). Copy length and industrial advertising readership. Industrial Marketing Management, 15, 245–251.

Solomon, S., Greenberg, J., Pyszczynski, T., & Pryzbylinski, J. (1995). The effects of mortality salience on personally-relevant persuasive appeals. Social Behavior and Personality, 23, 177–190.

Sorrentino, R. M., Bobocel, D. R., Gitta, M. Z., Olson, J. M., & Hewitt, E.
C. (1988). Uncertainty orientation and persuasion: Individual
differences in the effects of personal relevance on social judgments.
Journal of Personality and Social Psychology, 55, 357–371.

Spangenberg, E. R., & Greenwald, A. G. (1999). Social influence by requesting self-prophecy. Journal of Consumer Psychology, 8, 61–69.

Spangenberg, E. R., Obermiller, C., & Greenwald, A. G. (1992). A field test of subliminal self-help audiotapes: The power of expectancies. Journal of Public Policy and Marketing, 11(2), 26–36.

Spangenberg, E. R., Sprott, D. E., Grohmann, B., & Smith, R. J. (2003). Mass-communicated prediction requests: Practical application and a cognitive dissonance explanation for self-prophecy. Journal of Marketing, 67(3), 47–62.

Sparks, P., Hedderley, D., & Shepherd, R. (1991). Expectancy-value models of attitude: A note on the relationship between theory and methodology. European Journal of Social Psychology, 21, 261–271.

Sparks, P., Jessop, D. C., Chapman, J., & Holmes, K. (2010). Pro-environmental actions, climate change, and defensiveness: Do self-affirmations make a difference to people’s motives and beliefs about making a difference? British Journal of Social Psychology, 49, 553–568.

Spencer, S. J., Zanna, M. P., & Fong, G. T. (2005). Establishing a causal
chain: Why experiments are often more effective than mediational
analyses in examining psychological processes. Journal of Personality
and Social Psychology, 89, 845–851.

Spring, B., Howe, D., Berendsen, M., McFadden, H. G., Hitchcock, K.,
Rademaker, A. W., & Hitsman, B. (2009). Behavioral intervention to
promote smoking cessation and prevent weight gain: A systematic
review and meta-analysis. Addiction, 104, 1472–1486.

Spring, B., Rademaker, A. W., McFadden, H. G., & Hitsman, B. (2011). Reducing bias in systematic reviews of behavioral interventions: A response to Parsons et al. Addiction, 106, 676–678.

Sprott, D. E., Spangenberg, E. R., Block, L. G., Fitzsimons, G. J., Morwitz, V. G., & Williams, P. (2006). The question–behavior effect: What we know and where we go from here. Social Influence, 1, 128–137.

Stapel, D. A., & van der Linde, L. A. J. G. (2011). What drives self-
affirmation effects? On the importance of differentiating value
affirmation and attribute affirmation. Journal of Personality and Social
Psychology, 101, 34–45. See also https://www.commissielevelt.nl/.

Stead, M., Tagg, S., MacKintosh, A. M., & Eadie, D. (2005). Development
and evaluation of a mass media theory of planned behaviour
intervention to reduce speeding. Health Education Research, 20, 36–50.

Steadman, L., & Quine, L. (2004). Encouraging young males to perform testicular self-examination: A simple, but effective, implementation intentions intervention. British Journal of Health Psychology, 9, 479–487.

Steadman, L., & Rutter, D. R. (2004). Belief importance and the theory of planned behaviour: Comparing modal and ranked modal beliefs in predicting attendance at breast screening. British Journal of Health Psychology, 9, 447–463.

Steele, C. M. (1988). The psychology of self-affirmation: Sustaining the integrity of the self. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 21, pp. 261–302). San Diego: Academic Press.

Steenhaut, S., & Van Kenhove, P. (2006). The mediating role of anticipated guilt in consumers’ ethical decision-making. Journal of Business Ethics, 69, 269–288.

Steffen, V. J., & Gruber, V. A. (1991). Direct experience with a cancer self-exam: Effects on cognitions and behavior. Journal of Social Psychology, 131, 165–177.

Steinfatt, T. M. (1977). Measurement, transformations, and the real world: Do the numbers represent the concept? Et Cetera, 34, 277–289.

Stephenson, M. T., Palmgreen, P., Hoyle, R. H., Donohew, L., Lorch, E. P., & Colon, S. E. (1999). Short-term effects of an anti-marijuana media campaign targeting high sensation seeking adolescents. Journal of Applied Communication Research, 27, 175–195.

Stephenson, M. T., Quick, B. L., & Hirsch, H. A. (2010). Evidence in support of a strategy to target authoritarian and permissive parents in antidrug media campaigns. Communication Research, 37, 73–104.

Stephenson, M. T., Witte, K., Vaught, C., Quick, B. L., Booth-Butterfield, S., Patel, D., & Zuckerman, C. (2005). Using persuasive messages to encourage voluntary hearing protection among coal miners. Journal of Safety Research, 36, 9–17.

Sternthal, B., Dholakia, R., & Leavitt, C. (1978). The persuasive effect of
source credibility: Tests of cognitive response. Journal of Consumer
Research, 4, 252–260.

Steward, W. T., Schneider, T. R., Pizarro, J., & Salovey, P. (2003). Need
for cognition moderates responses to framed smoking-cessation
messages. Journal of Applied Social Psychology, 33, 2439–2464.

Stice, E. (1992). The similarities between cognitive dissonance and guilt: Confession as relief of dissonance. Current Psychology Research and Reviews, 11, 69–78.

Stice, E., Chase, A., Stormer, S., & Appel, A. (2001). A randomized trial
of a dissonance-based eating disorder prevention program. International
Journal of Eating Disorders, 29, 247–262.

Stice, E., Marti, C. N., Spoor, S., Presnell, K., & Shaw, H. (2008).
Dissonance and healthy weight eating disorder prevention programs:
Long-term effects from a randomized efficacy trial. Journal of
Consulting and Clinical Psychology, 76, 329–340.

Stice, E., Shaw, H., Becker, C. B., & Rohde, P. (2008). Dissonance-based
interventions for the prevention of eating disorders: Using persuasion
principles to promote health. Prevention Science, 9, 114–128.

Stiff, J. B., & Mongeau, P. A. (2003). Persuasive communication (2nd ed.). New York: Guilford.

Stone, J. (2012). Consistency as a basis for behavioral interventions: Using hypocrisy and cognitive dissonance to motivate behavior change. In B. Gawronski & F. Strack (Eds.), Cognitive consistency: A fundamental principle in social cognition (pp. 326–347). New York: Guilford.

Stone, J., Aronson, E., Crain, A. L., Winslow, M. P., & Fried, C. B.
(1994). Inducing hypocrisy as a means of encouraging young adults to
use condoms. Personality and Social Psychology Bulletin, 20, 116–128.

Stone, J., & Cooper, J. (2001). A self-standards model of cognitive dissonance. Journal of Experimental Social Psychology, 37, 228–243.

Stone, J., & Fernandez, N. C. (2008a). How behavior shapes attitudes:
Cognitive dissonance processes. In W. D. Crano & R. Prislin (Eds.),
Attitudes and attitude change (pp. 313–334). New York: Psychology
Press.

Stone, J., & Fernandez, N. C. (2008b). To practice what we preach: The use of hypocrisy and cognitive dissonance to motivate behavior change. Social and Personality Psychology Compass, 2, 1024–1051.

Stone, J., & Fernandez, N. C. (2011). When thinking about less failure
causes more dissonance: The effect of elaboration and recall on
behavior change following hypocrisy. Social Influence, 6, 199–211.

Stone, J., & Focella, E. (2011). Hypocrisy, dissonance and the self-
regulation processes that improve health. Self and Identity, 10, 295–303.

Stone, J., Wiegand, A. W., Cooper, J., & Aronson, E. (1997). When
exemplification fails: Hypocrisy and the motive for self-integrity.
Journal of Personality and Social Psychology, 72, 54–65.

Strathman, A., Gleicher, F., Boninger, D. S., & Edwards, C. S. (1994). The
consideration of future consequences: Weighing immediate and distant
outcomes of behavior. Journal of Personality and Social Psychology, 66,
742–752.

Straughan, R. D., & Lynn, M. (2002). The effects of salesperson compensation on perceptions of salesperson honesty. Journal of Applied Social Psychology, 32, 719–731.

Stroud, N. J. (2008). Media use and political predispositions: Revisiting the concept of selective exposure. Political Behavior, 30, 341–366.

Struckman-Johnson, D., & Struckman-Johnson, C. (1996). Can you say condom? It makes a difference in fear-arousing AIDS prevention public service announcements. Journal of Applied Social Psychology, 26, 1068–1083.

Studts, J. L., Ruberg, J. L., McGuffin, S. A., & Roetzer, L. M. (2010). Decisions to register for the National Marrow Donor Program: Rational vs. emotional appeals. Bone Marrow Transplantation, 45, 422–428.

Stukas, A. A., Snyder, M., & Clary, E. G. (2008). The social marketing of
volunteerism: A functional approach. In C. P. Haugtvedt, P. M. Herr, &
F. R. Kardes (Eds.), Handbook of consumer psychology (pp. 959–979).
New York: Lawrence Erlbaum.

Sunnafrank, M. (1991). Interpersonal attraction and attitude similarity: A communication-based assessment. Communication Yearbook, 14, 451–483.

Sutton, S. (1992). Shock tactics and the myth of the inverted U. British
Journal of Addiction, 87, 517–519.

Sutton, S. (1998). Predicting and explaining intentions and behavior: How well are we doing? Journal of Applied Social Psychology, 28, 1317–1338.

Sutton, S. (2000). A critical review of the transtheoretical model applied to smoking cessation. In P. Norman, C. Abraham, & M. Conner (Eds.), Understanding and changing health behaviour: From health beliefs to self-regulation (pp. 207–225). Amsterdam: Harwood.

Sutton, S. (2002). Using social cognition models to develop health behaviour interventions: Problems and assumptions. In D. Rutter & L. Quine (Eds.), Changing health behaviour: Intervention and research with social cognition models (pp. 193–208). Buckingham, UK: Open University Press.

Sutton, S. (2004). Determinants of health-related behaviours: Theoretical and methodological issues. In S. Sutton, A. Baum, & M. Johnston (Eds.), The SAGE handbook of health psychology (pp. 94–126). London: Sage.

Sutton, S. (2005a). Another nail in the coffin of the transtheoretical model? A comment on West (2005). Addiction, 100, 1043–1046.

Sutton, S. (2005b). Stage theories of health behaviour. In M. Conner & P. Norman (Eds.), Predicting health behaviour: Research and practice with social cognition models (2nd ed., pp. 223–275). Maidenhead, UK: Open University Press.

Sutton, S. (2008). How does the health action process approach (HAPA) bridge the intention-behavior gap? An examination of the model’s causal structure. Applied Psychology: An International Review, 57, 66–74.

Sutton, S., French, D. P., Hennings, S. J., Mitchell, J., Wareham, N. J.,
Griffin, S., … Kinmonth, A. L. (2003). Eliciting salient beliefs in
research on the theory of planned behaviour: The effect of question
wording. Current Psychology, 22, 234–251.

Sutton, S., McVey, D., & Glanz, A. (1999). A comparative test of the
theory of reasoned action and the theory of planned behavior in the
prediction of condom use intentions in a national sample of English
young people. Health Psychology, 18, 72–81.

Sutton, S. R. (1982). Fear-arousing communications: A critical examination of theory and research. In J. R. Eiser (Ed.), Social psychology and behavioral medicine (pp. 303–337). New York: Wiley.

Swartz, T. A. (1984). Relationship between source expertise and source similarity in an advertising context. Journal of Advertising, 13(2), 49–55.

Swasy, J. L., & Munch, J. M. (1985). Examining the target of receiver elaborations: Rhetorical question effects on source processing and persuasion. Journal of Consumer Research, 11, 877–886.

Sweeney, A. M., & Moyer, A. (2015). Self-affirmation and responses to health messages: A meta-analysis on intentions and behavior. Health Psychology, 34, 149–159.

Szilagyi, P., Vann, J., Bordley, C., Chelminski, A., Kraus, R., Margolis, P., & Rodewald, L. (2002). Interventions aimed at improving immunization rates. Cochrane Database of Systematic Reviews, 2002(4), CD003941.

Szybillo, G. J., & Heslin, R. (1973). Resistance to persuasion: Inoculation theory in a marketing context. Journal of Marketing Research, 10, 396–403.

Tal-Or, N., Boninger, D. S., Poran, A., & Gleicher, F. (2004). Counterfactual thinking as a mechanism in narrative persuasion. Human Communication Research, 30, 301–328.

Tal-Or, N., & Cohen, J. (2010). Understanding audience involvement: Conceptualizing and manipulating identification and transportation. Poetics, 38, 402–418.

Tal-Or, N., Nemets, E., & Ziv, S. (2009). The transmitter-persistence effect: Resolving the dispute. Social Influence, 4, 274–281.

Tao, C.-C., & Bucy, E. P. (2007). Conceptualizing media stimuli in experimental research: Psychological versus attribute-based definitions. Human Communication Research, 33, 397–426.

Terry, D. J., & Hogg, M. A. (1996). Group norms and the attitude-
behavior relationship: A role for group identification. Personality and
Social Psychology Bulletin, 22, 776–793.

Terwel, B. W., Harinck, F., Ellemers, N., & Daamen, D. D. L. (2009). Competence-based and integrity-based trust as predictors of acceptance of carbon dioxide capture and storage (CCS). Risk Analysis, 29, 1129–1140.

Tewksbury, D., & Scheufele, D. A. (2009). News framing theory and research. In J. Bryant & M. B. Oliver (Eds.), Media effects: Advances in theory and research (3rd ed., pp. 17–33). New York: Routledge.

Theodorakis, Y. (1994). Planned behavior, attitude strength, role identity, and the prediction of exercise behavior. Sport Psychologist, 8, 149–165.

Thomas, K., Hevey, D., Pertl, M., Chuinneagáin, S. N., Craig, A., &
Maher, L. (2011). Appearance matters: The frame and focus of health
messages influences beliefs about skin cancer. British Journal of Health
Psychology, 16, 418–429.

Thompson, E. P., Kruglanski, A. W., & Spiegel, S. (2000). Attitudes as knowledge structures and persuasion as a specific case of subjective knowledge acquisition. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes (pp. 59–95). Mahwah, NJ: Lawrence Erlbaum.

Thompson, R., & Haddock, G. (2012). Sometimes stories sell: When are
narrative appeals most likely to work? European Journal of Social
Psychology, 42, 92–102.

Thomsen, C. J., Borgida, E., & Lavine, H. (1995). The causes and
consequences of personal involvement. In R. E. Petty & J. A. Krosnick
(Eds.), Attitude strength: Antecedents and consequences (pp. 191–214).
Mahwah, NJ: Lawrence Erlbaum.

Thuen, F., & Rise, J. (1994). Young adolescents’ intention to use seat
belts: The role of attitudinal and normative beliefs. Health Education
Research, 9, 215–223.

Thurstone, L. L. (1931). The measurement of social attitudes. Journal of
Abnormal and Social Psychology, 26, 249–269.

Till, B. D., & Busler, M. (2000). The match-up hypothesis: Physical attractiveness, expertise, and the role of fit on brand attitude, purchase intent and brand beliefs. Journal of Advertising, 29(3), 1–14.

Todorov, A., Chaiken, S., & Henderson, M. D. (2002). The heuristic-systematic model of social information processing. In J. P. Dillard & M. Pfau (Eds.), The persuasion handbook: Developments in theory and practice (pp. 195–211). Thousand Oaks, CA: Sage.

Tormala, Z. L., Briñol, P., & Petty, R. E. (2006). When credibility attacks:
The reverse impact of source credibility on persuasion. Journal of
Experimental Social Psychology, 42, 684–691.

Tormala, Z. L., Briñol, P., & Petty, R. E. (2007). Multiple roles for source credibility under high elaboration: It’s all in the timing. Social Cognition, 25, 536–552.

Tormala, Z. L., & Rucker, D. D. (2007). Attitude certainty: A review of past findings and emerging perspectives. Social and Personality Psychology Compass, 1, 469–492.

Tormala, Z. L., Rucker, D. D., & Seger, C. R. (2008). When increased confidence yields increased thought: A confidence-matching hypothesis. Journal of Experimental Social Psychology, 44, 141–147.

Törn, F. (2012). Revisiting the match-up hypothesis: Effects of brand-incongruent celebrity endorsements. Journal of Current Issues & Research in Advertising, 33, 20–36.

Trafimow, D. (2007). Distinctions pertaining to Fishbein and Ajzen’s theory of reasoned action. In I. Ajzen, D. Albarracín, & R. Hornik (Eds.), Prediction and change of health behavior: Applying the reasoned action approach (pp. 23–42). Mahwah, NJ: Lawrence Erlbaum.

Trafimow, D., & Duran, A. (1998). Some tests of the distinction between
attitude and perceived behavioural control. British Journal of Social
Psychology, 37, 1–14.

Trafimow, D., & Sheeran, P. (1998). Some tests of the distinction between
cognitive and affective beliefs. Journal of Experimental Social
Psychology, 34, 378–397.

Trafimow, D., Sheeran, P., Conner, M., & Finlay, K. A. (2002). Evidence
that perceived behavioural control is a multidimensional construct:
Perceived control and perceived difficulty. British Journal of Social
Psychology, 41, 101–121.

Trumbo, C. W. (1999). Heuristic-systematic information processing and risk judgment. Risk Analysis, 19, 391–400.

Tseng, D. S., Cox, E., Plane, M. B., & Hia, K. (2001). Efficacy of patient
letter reminders on cervical cancer screening: A meta-analysis. Journal
of General Internal Medicine, 16, 563–568.

Tuah, N. A. A., Amiel, C., Qureshi, S., Car, J., Kaur, B., & Majeed, A.
(2011). Transtheoretical model for dietary and physical exercise
modification in weight loss management for overweight and obese
adults. Cochrane Database of Systematic Reviews 2011(10), CD008066.

Tufte, E. R. (1997). Visual explanations: Images and quantities, evidence and narrative. Cheshire, CT: Graphics Press.

Tukachinsky, R., & Tokunaga, R. S. (2013). The effects of engagement with entertainment. In E. L. Cohen (Ed.), Communication Yearbook 37 (pp. 287–321). New York: Routledge.

Tuppen, C. J. S. (1974). Dimensions of communicator credibility: An
oblique solution. Speech Monographs, 41, 253–260.

Turner, G. E., Burciaga, C., Sussman, S., Klein-Selski, E., Craig, S., Dent,
C. W., … Flay, B. (1993). Which lesson components mediate refusal
assertion skill improvement in school-based adolescent tobacco use
prevention? International Journal of the Addictions, 28, 749–766.

Turner, M. M. (2012). Using emotional appeals in health messages. In H. Cho (Ed.), Health communication message design: Theory and practice (pp. 59–71). Los Angeles: Sage.

Turner, M. M., & Underhill, J. C. (2012). Motivating emergency preparedness behaviors: The differential effects of guilt appeals and actually anticipating guilty feelings. Communication Quarterly, 60, 545–559.

Tusing, K. J., & Dillard, J. P. (2000). The psychological reality of the door-in-the-face: It’s helping, not bargaining. Journal of Language and Social Psychology, 19, 5–25.

Twyman, M., Harvey, N., & Harries, C. (2008). Trust in motives, trust in
competence: Separate factors determining the effectiveness of risk
communication. Judgment and Decision Making Journal, 2, 111–120.

Tykocinski, O. E., & Bareket-Bojmel, L. (2009). The lost e-mail technique: Use of an implicit measure to assess discriminatory attitudes toward two minority groups in Israel. Journal of Applied Social Psychology, 39, 62–81.

Updegraff, J. A., & Rothman, A. J. (2013). Health message framing: Moderators, mediators, and mysteries. Social and Personality Psychology Compass, 7, 668–679.

Updegraff, J. A., Sherman, D. K., Luyster, F. S., & Mann, T. L. (2007). The effects of message quality and congruency on perceptions of tailored health communications. Journal of Experimental Social Psychology, 43, 249–257.

Uribe, R., Manzur, E., & Hidalgo, P. (2013). Exemplars’ impacts in marketing communication campaigns. Journal of Business Research, 66, SI1787–SI1790.

Usdin, S., Singhal, A., Shongwe, T., Goldstein, S., & Shabalala, A. (2004).
No short cuts in entertainment-education: Designing Soul City step-by-
step. In A. Singhal, M. J. Cody, E. M. Rogers, & M. Sabido (Eds.),
Entertainment-education and social change: History, research, and
practice (pp. 153–175). Mahwah, NJ: Lawrence Erlbaum.

Vakratsas, D., & Ambler, T. (1999). How advertising works: What do we really know? Journal of Marketing, 63(1), 26–43.

Valentino, N. A., Banks, A. J., Hutchings, V. L., & Davis, A. K. (2009). Selective exposure in the Internet age: The interaction between anxiety and information utility. Political Psychology, 30, 591–614.

Valois, P., Desharnais, R., Godin, G., Perron, J., & LeComte, C. (1993).
Psychometric properties of a perceived behavioral control multiplicative
scale developed according to Ajzen’s theory of planned behavior.
Psychological Reports, 72, 1079–1083.

van den Hende, E. A., Dahl, D. W., Schoormans, J. P. L., & Snelders, D.
(2012). Narrative transportation in concept tests for really new products:
The moderating effect of reader-protagonist similarity. Journal of
Product Innovation Management, 29, 157–170.

van der Pligt, J., & de Vries, N. K. (1998a). Belief importance in expectancy-value models of attitudes. Journal of Applied Social Psychology, 28, 1339–1354.

van der Pligt, J., & de Vries, N. K. (1998b). Expectancy-value models of
health behaviour: The role of salience and anticipated affect.
Psychology and Health, 13, 289–305.

van der Pligt, J., de Vries, N. K., Manstead, A. S. R., & van Harreveld, F.
(2000). The importance of being selective: Weighing the role of
attribute importance in attitudinal judgment. In M. P. Zanna (Ed.),
Advances in experimental social psychology (Vol. 32, pp. 135–200).
San Diego: Academic Press.

van Enschot-van Dijk, R., Hustinx, L., & Hoeken, H. (2003). The concept
of argument quality in the elaboration likelihood model: A normative
and empirical approach to Petty and Cacioppo’s “strong” and “weak”
arguments. In F. H. van Eemeren, J. A. Blair, C. A. Willard, & A. F.
Snoeck Henkemans (Eds.), Anyone who has a view: Theoretical
contributions to the study of argumentation (pp. 319–335). Amsterdam:
Kluwer.

van Harreveld, F., Schneider, I. K., Nohlen, H., & van der Pligt, J. (2012).
The dynamics of ambivalence: Evaluative conflict in attitudes and
decision making. In B. Gawronski & F. Strack (Eds.), Cognitive
consistency: A fundamental principle in social cognition (pp. 267–284).
New York: Guilford.

van Harreveld, F., van der Pligt, J., & de Vries, N. K. (1999). Attitudes
towards smoking and the subjective importance of attributes:
Implications for changing risk-benefit ratios. Swiss Journal of
Psychology, 58, 65–72.

van Ittersum, K., Pennings, J. M. E., Wansink, B., & van Trijp, H. C. M.
(2007). The validity of attribute-importance measurement: A review.
Journal of Business Research, 60, 1177–1190.

Van Koningsbruggen, G. M., & Das, E. (2009). Don’t derogate this
message! Self-affirmation promotes online type 2 diabetes risk test
taking. Psychology & Health, 24, 635–650.

Van Koningsbruggen, G. M., Das, E., & Roskos-Ewoldsen, D. R. (2009).
How self-affirmation reduces defensive processing of threatening health
information: Evidence at the implicit level. Health Psychology, 28,
563–568.

van Laer, T., de Ruyter, K., Visconti, L. M., & Wetzels, M. (2014). The
extended transportation-imagery model: A meta-analysis of the
antecedents and consequences of consumers’ narrative transportation.
Journal of Consumer Research, 40, 797–817.

van Leeuwen, L., Renes, R. J., & Leeuwis, C. (2013). Televised
entertainment-education to prevent adolescent alcohol use: Perceived
realism, enjoyment, and impact. Health Education and Behavior, 40,
193–205.

Van Osch, L., Lechner, L., Reubsaet, A., & de Vries, H. (2010). From
theory to practice: An explorative study into the instrumentality and
specificity of implementation intentions. Psychology and Health, 25,
351–364.

Van Overwalle, F., & Jordens, K. (2002). An adaptive connectionist model
of cognitive dissonance. Personality and Social Psychology Review, 6,
204–231.

van ’t Riet, J., Cox, A. D., Cox, D., Zimet, G. D., De Bruijn, G.-J., van den
Putte, B., … Ruiter, R. A. C. (2014). Does perceived risk influence the
effects of message framing? A new investigation of a widely held
notion. Psychology and Health, 29, 933–949.

van ’t Riet, J., & Ruiter, R. A. C. (2013). Defensive reactions to health-
promoting information: An overview and implications for future
research. Health Psychology Review, 7(Suppl. 1), S104–S136.

van ’t Riet, J., Ruiter, R. A. C., & de Vries, H. (2012). Avoidance
orientation moderates the effect of threatening messages. Journal of
Health Psychology, 17, 14–25.

van ’t Riet, J., Ruiter, R. A. C., Werrij, M. Q., & de Vries, H. (2010). Self-
efficacy moderates message-framing effects: The case of skin-cancer
detection. Psychology and Health, 25, 339–349.

Vaughn, L. A., Childs, K. E., Maschinski, C., Niño, N. P., & Ellsworth, R.
(2010). Regulatory fit, processing fluency, and narrative persuasion.
Social and Personality Psychology Compass, 4(12), 1181–1192.

Vaughn, L. A., Hesse, S. J., Petkova, Z., & Trudeau, L. (2009). “This story
is right on”: The impact of regulatory fit on narrative engagement and
persuasion. European Journal of Social Psychology, 39, 447–456.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a
research agenda on interventions. Decision Sciences, 39, 273–315.

Venkatraman, M. P., Marlino, D., Kardes, F. R., & Sklar, K. B. (1990).
The interactive effects of message appeals and individual differences on
information processing and persuasion. Psychology and Marketing, 7,
85–96.

Verplanken, B. (1991). Persuasive communication of risk information: A
test of cue versus message processing effects in a field experiment.
Personality and Social Psychology Bulletin, 17, 188–193.

Vervloet, M., Linn, A. J., van Weert, J. C. M., de Bakker, D. H., Bouvy,
M. L., & van Dijk, L. (2012). The effectiveness of interventions using
electronic reminders to improve adherence to chronic medication: A
systematic review of the literature. Journal of the American Medical
Informatics Association, 19, 696–704.

Vet, R., de Wit, J. B. F., & Das, E. (2011). The efficacy of social role
models to increase motivation to obtain vaccination against hepatitis B
among men who have sex with men. Health Education Research, 26,
192–200.

Vincus, A. A., Ringwalt, C., Harris, M. S., & Shamblen, S. R. (2010). A
short-term, quasi-experimental evaluation of D.A.R.E.’s revised
elementary school curriculum. Journal of Drug Education, 40, 37–49.

Visser, P. S., Bizer, G. Y., & Krosnick, J. A. (2006). Exploring the latent
structure of strength-related attitude attributes. In M. P. Zanna (Ed.),
Advances in experimental social psychology (Vol. 38, pp. 1–68). San
Diego: Elsevier Academic Press.

Vitoria, P. D., Salgueiro, M. F., Silva, S. A., & de Vries, H. (2009). The
impact of social influence on adolescent intention to smoke: Combining
types and referents of influence. British Journal of Health Psychology,
14, 681–699.

Waalen, J., Bruning, A. L., Peters, M. J., & Blau, E. M. (2009). A
telephone-based intervention for increasing the use of osteoporosis
medication: A randomized controlled trial. American Journal of
Managed Care, 15, E60–E70.

Wachtler, J., & Counselman, E. (1981). When increased liking for a
communicator decreases opinion change: An attribution analysis of
attractiveness. Journal of Experimental Social Psychology, 17, 386–395.

Wagner, W. (1984). Social comparison of opinions: Similarity, ability, and
the value-fact distinction. Journal of Psychology, 117, 197–202.

Wall, A.-M., Hinson, R. E., & McKee, S. A. (1998). Alcohol outcome
expectancies, attitudes toward drinking and the theory of planned
behavior. Journal of Studies on Alcohol, 59, 409–419.

Wallace, D. S., Paulson, R. M., Lord, C. G., & Bond, C. F., Jr. (2005).
Which behaviors do attitudes predict? Meta-analyzing the effects of
social pressure and perceived difficulty. Review of General Psychology,
9, 214–227.

Walster, E., Aronson, E., & Abrahams, D. (1966). On increasing the
persuasiveness of a low prestige communicator. Journal of Experimental
Social Psychology, 2, 325–342.

Walther, E., & Weil, R. (2012). Balance principles in attitude formation
and change: The desire to maintain consistent cognitions about people.
In B. Gawronski & F. Strack (Eds.), Cognitive consistency: A
fundamental principle in social cognition (pp. 351–368). New York:
Guilford.

Walther, J. B., Liang, Y. H., Ganster, T., Wohn, D. Y., & Emington, J.
(2012). Online reviews, helpfulness ratings, and consumer attitudes: An
extension of congruity theory to multiple sources in web 2.0. Journal of
Computer-Mediated Communication, 18, 97–112.

Walther, J. B., Wang, Z., & Loh, T. (2004). The effect of top-level
domains and advertisements on health web site credibility. Journal of
Medical Internet Research, 6, e24.

Wan, C. S., & Chiou, W. B. (2010). Inducing attitude change toward
online gaming among adolescent players based on dissonance theory:
The role of threats and justification of effort. Computers & Education,
54, 162–168.

Wang, X. (2009). Integrating the theory of planned behavior and attitude
functions: Implications for health campaign design. Health
Communication, 24, 426–434.

Wang, X. (2012). The role of attitude functions and self-monitoring in
predicting intentions to register as organ donors and to discuss organ
donation with family. Communication Research, 39, 26–47.

Warburton, J., & Terry, D. J. (2000). Volunteer decision making by older
people: A test of a revised theory of planned behavior. Basic and
Applied Social Psychology, 22, 245–258.

Ward, C. D., & McGinnies, E. (1973). Perception of communicator’s
credibility as a function of when he is identified. Psychological Record,
23, 561–562.

Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing
credibility on the web. Journal of the American Society for Information
Science and Technology, 53, 134–144.

Watt, S. E., Maio, G. R., Haddock, G., & Johnson, B. T. (2008). Attitude
functions in persuasion: Matching, involvement, self-affirmation, and
hierarchy. In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude
change (pp. 189–211). New York: Psychology Press.

Webb, T. L., & Sheeran, P. (2007). How do implementation intentions
promote goal attainment? A test of component processes. Journal of
Experimental Social Psychology, 43, 295–302.

Webel, A. R., Okonsky, J., Trompeta, J., & Holzemer, W. L. (2010). A
systematic review of the effectiveness of peer-based interventions on
health related behaviors in adults. American Journal of Public Health,
100, 247–263.

Wechsler, H., Nelson, T. F., Lee, J. E., Seibring, M., Lewis, C., & Keeling,
R. P. (2003). Perception and reality: A national evaluation of social
norms marketing interventions to reduce college students’ heavy alcohol
use. Journal of Studies on Alcohol, 64, 484–494.

Wegener, D. T., & Claypool, H. M. (1999). The elaboration continuum by
any other name does not smell as sweet. Psychological Inquiry, 10,
176–181.

Wegener, D. T., & Petty, R. E. (2001). Understanding effects of mood
through the elaboration likelihood and flexible correction models. In L.
L. Martin & G. L. Clore (Eds.), Theories of mood and cognition: A
user’s guidebook (pp. 177–201). Mahwah, NJ: Lawrence Erlbaum.

Weigel, R. H., & Newman, L. S. (1976). Increasing attitude-behavior
correspondence by broadening the scope of the behavioral measure.
Journal of Personality and Social Psychology, 33, 793–802.

Weilbacher, W. M. (2001). Point of view: Does advertising cause a
“hierarchy of effects”? Journal of Advertising Research, 41(6), 19–26.

Weinberger, M. G., & Dillon, W. R. (1980). The effects of unfavorable
product rating information. Advances in Consumer Research, 7,
528–532.

Weinstein, N. D. (2000). Perceived probability, perceived severity, and
health-protective behavior. Health Psychology, 19, 65–74.

Weinstein, N. D. (2007). Misleading tests of health behavior theories.
Annals of Behavioral Medicine, 33, 1–10.

Weinstein, N. D., Lyon, J. E., Sandman, P. M., & Cuite, C. L. (1998).
Experimental evidence for stages of health behavior change: The
precaution adoption process model applied to home radon testing.
Health Psychology, 17, 445–453.

Weinstein, N. D., Rothman, A. J., & Sutton, S. R. (1998). Stage theories of
health behavior: Conceptual and methodological issues. Health
Psychology, 17, 290–299.

Weinstein, N. D., & Sandman, P. M. (1992). A model of the precaution
adoption process: Evidence from home radon testing. Health
Psychology, 11, 170–180.

Weinstein, N. D., & Sandman, P. M. (2002). Reducing the risks of
exposure to radon gas: An application of the precaution adoption
process model. In D. Rutter & L. Quine (Eds.), Changing health
behaviour: Intervention and research with social cognition models (pp.
66–86). Buckingham, UK: Open University Press.

Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social
psychological experimentation. Personality and Social Psychology
Bulletin, 25, 1115–1125.

Wenzel, M. (2005). Misperceptions of social norms about tax compliance:
From theory to intervention. Journal of Economic Psychology, 26,
862–883.

West, M. D. (1994). Validating a scale for the measurement of credibility:
A covariance structure modeling approach. Journalism Quarterly, 71,
159–168.

West, R. (2005). Time for a change: Putting the transtheoretical (stages of
change) model to rest. Addiction, 100, 1036–1039.

West, S. K., & O’Neal, K. K. (2004). Project D.A.R.E. outcome
effectiveness revisited. American Journal of Public Health, 94,
1027–1029.

Whately, R. (1828). The elements of rhetoric (2nd ed.). Oxford, UK:
Murray and Parker.

White, G. L., & Gerard, H. B. (1981). Postdecision evaluation of choice
alternatives as a function of valence of alternatives, choice, and
expected delay of choice consequences. Journal of Research in
Personality, 15, 371–382.

White, K., MacDonnell, R., & Dahl, D. W. (2011). It’s the mind-set that
matters: The role of construal level and message framing in influencing
consumer efficacy and conservation behaviors. Journal of Marketing
Research, 48, 472–485.

White, K. M., Robinson, N. G., Young, R. M., Anderson, P. J., Hyde, M.
K., Greenbank, S., … Baskerville, D. (2008). Testing an extended
theory of planned behaviour to predict young people’s sun safety in a
high risk area. British Journal of Health Psychology, 13, 435–448.

White, K. M., Smith, J. R., Terry, D. J., Greenslade, J. H., & McKimmie,
B. M. (2009). Social influence in the theory of planned behaviour: The
role of descriptive, injunctive, and in-group norms. British Journal of
Social Psychology, 48, 135–158.

Whitehead, J. L., Jr. (1968). Factors of source credibility. Quarterly
Journal of Speech, 54, 59–63.

Whitehead, J. L., Jr. (1971). Effects of authority-based assertion on
attitude and credibility. Speech Monographs, 38, 311–315.

Whitelaw, S., Baldwin, S., Bunton, R., & Flynn, D. (2000). The status of
evidence and outcomes in stages of change research. Health Education
Research, 15, 707–718.

Whittaker, J. O. (1963). Opinion change as a function of communication-
attitude discrepancy. Psychological Reports, 13, 763–772.

Whittaker, J. O. (1965). Attitude change and communication-attitude
discrepancy. Journal of Social Psychology, 65, 141–147.

Whittaker, J. O. (1967). Resolution of the communication discrepancy
issue in attitude change. In C. W. Sherif & M. Sherif (Eds.), Attitude,
ego-involvement, and change (pp. 159–177). New York: Wiley.

Whittier, D. K., Kennedy, M. G., St. Lawrence, J. S., Seeley, S., & Beck,
V. (2005). Embedding health messages into entertainment television:
Effect on gay men’s response to a syphilis outbreak. Journal of Health
Communication, 10, 251–259.

Wicker, A. W. (1969). Attitudes versus actions: The relationship of verbal
and overt behavioral responses to attitude objects. Journal of Social
Issues, 25(4), 41–78.

Wicklund, R. A., & Brehm, J. W. (1976). Perspectives on cognitive
dissonance. Hillsdale, NJ: Lawrence Erlbaum.

Widgery, R. N., & Ruch, R. S. (1981). Beauty and the Machiavellian.
Communication Quarterly, 29, 297–301.

Wieber, F., Odenthal, G., & Gollwitzer, P. (2010). Self-efficacy feelings
moderate implementation intention effects. Self and Identity, 9,
177–194.

Wilkin, H. A., Valente, T. W., Murphy, S., Cody, M. J., Huang, G., &
Beck, V. (2007). Does entertainment-education work with Latinos in the
United States? Identification and the effects of a telenovela breast
cancer storyline. Journal of Health Communication, 12, 455–470.

Williams, P., & Drolet, A. (2005). Age-related differences in responses to
emotional advertisements. Journal of Consumer Research, 32, 343–354.

Williams-Piehota, P., Latimer, A. E., Katulak, N. A., Cox, A., Silvera, S.
A. N., Mowad, L., & Salovey, P. (2009). Tailoring messages to
individual differences in monitoring-blunting styles to increase fruit and
vegetable intake. Journal of Nutrition Education and Behavior, 41,
398–405.

Williams-Piehota, P., Pizarro, J., Silvera, S. A. N., Mowad, L., & Salovey,
P. (2006). Need for cognition and message complexity in motivating
fruit and vegetable intake among callers to the Cancer Information
Service. Health Communication, 19, 75–84.

Wilmot, W. W. (1971a). Ego-involvement: A confusing variable in speech
communication research. Quarterly Journal of Speech, 57, 429–436.

Wilmot, W. W. (1971b). A test of the construct and predictive validity of
three measures of ego involvement. Speech Monographs, 38, 217–227.

Wilson, E. J., & Sherrell, D. L. (1993). Source effects in communication
and persuasion research: A meta-analysis of effect size. Journal of the
Academy of Marketing Science, 21, 101–112.

Winterbottom, A., Bekker, H. L., Conner, M., & Mooney, A. (2008). Does
narrative information bias individual’s decision making? A systematic
review. Social Science and Medicine, 67, 2079–2088.

Witte, K. (1992). Putting the fear back into fear appeals: The extended
parallel process model. Communication Monographs, 59, 329–349.

Witte, K. (1998). Fear as motivator, fear as inhibitor: Using the extended
parallel process model to explain fear appeal successes and failures. In
P. A. Andersen & L. K. Guerrero (Eds.), Handbook of communication
and emotion: Research, theory, applications, and contexts (pp.
423–450). San Diego: Academic Press.

Witte, K., & Allen, M. (2000). A meta-analysis of fear appeals:
Implications for effective public health campaigns. Health Education
and Behavior, 27, 591–615.

Wittenbrink, B. (2007). Measuring attitudes through priming. In B.
Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp.
17–58). New York: Guilford.

Wittenbrink, B., & Schwarz, N. (Eds.). (2007). Implicit measures of
attitudes. New York: Guilford.

Wogalter, M. S., & Barlow, T. (1990). Injury severity and likelihood in
warnings. In Proceedings of the Human Factors Society 34th Annual
Meeting (pp. 580–583). Santa Monica, CA: Human Factors Society.

Wojcieszak, M. E., & Mutz, D. C. (2009). Online groups and political
discourse: Do online discussion spaces facilitate exposure to political
disagreement? Journal of Communication, 59, 40–56.

Wong, N. C. H., & Cappella, J. N. (2009). Antismoking threat and efficacy
appeals: Effects on smoking cessation intentions for smokers with low
and high readiness to quit. Journal of Applied Communication
Research, 37, 1–20.

Wood, M. L. M. (2007). Rethinking the inoculation analogy: Effects of
subjects with differing preexisting beliefs. Human Communication
Research, 33, 357–378.

Wood, W. (1982). Retrieval of attitude-relevant information from
memory: Effects on susceptibility to persuasion and on intrinsic
motivation. Journal of Personality and Social Psychology, 42, 798–810.

Wood, W., & Eagly, A. H. (1981). Stages in the analysis of persuasive
messages: The role of causal attributions and message comprehension.
Journal of Personality and Social Psychology, 40, 246–259.

Wood, W., & Kallgren, C. A. (1988). Communicator attributes and
persuasion: Recipients’ access to attitude-relevant information in
memory. Personality and Social Psychology Bulletin, 14, 172–182.

Wood, W., Kallgren, C. A., & Preisler, R. M. (1985). Access to attitude-
relevant information in memory as a determinant of persuasion: The role
of message attributes. Journal of Experimental Social Psychology, 21,
73–85.

Wood, W., & Quinn, J. M. (2003). Forewarned and forearmed? Two meta-
analytic syntheses of forewarnings of influence appeals. Psychological
Bulletin, 129, 119–138.

Wood, W., Rhodes, N., & Biek, M. (1995). Working knowledge and
attitude strength: An information-processing analysis. In R. E. Petty & J.
A. Krosnick (Eds.), Attitude strength: Antecedents and consequences
(pp. 283–313). Mahwah, NJ: Lawrence Erlbaum.

Wood, W., & Stagner, B. (1994). Why are some people easier to influence
than others? In S. Shavitt & T. C. Brock (Eds.), Persuasion:
Psychological insights and perspectives (pp. 149–174). Boston: Allyn
and Bacon.

Woodside, A. G. (2004). Advancing means–end chains by incorporating
Heider’s balance theory and Fournier’s consumer–brand relationship
typology. Psychology and Marketing, 21, 279–294.

Woodside, A. G., & Chebat, J.-C. (2001). Updating Heider’s balance
theory in consumer behavior: A Jewish couple buys a German car and
additional buying-consuming transformation stories. Psychology and
Marketing, 18, 475–496.

Woodside, A. G., & Davenport, J. W., Jr. (1974). The effect of salesman
similarity and expertise on consumer purchasing behavior. Journal of
Marketing Research, 11, 198–202.

Wyer, N. A. (2010). Selective self-categorization: Meaningful
categorization and the in-group persuasion effect. Journal of Social
Psychology, 150, 452–470.

Wyer, R. S., Jr. (1974). Cognitive organization and change: An
information processing approach. Potomac, MD: Lawrence Erlbaum.

Wynn, S. R., Schulenberg, J., Maggs, J. L., & Zucker, R. A. (2000).
Preventing alcohol misuse: The impact of refusal skills and norms.
Psychology of Addictive Behaviors, 14, 36–47.

Xu, A. J., & Wyer, R. S., Jr. (2012). The role of bolstering and
counterarguing mind-sets in persuasion. Journal of Consumer Research,
38, 920–932.

Ybarra, O., & Trafimow, D. (1998). How priming the private self or
collective self affects the relative weights of attitudes and subjective
norms. Personality and Social Psychology Bulletin, 24, 362–370.

Yi, M. Y., Yoon, J. J., Davis, J. M., & Lee, T. (2013). Untangling the
antecedents of initial trust in Web-based health information: The roles
of argument quality, source expertise, and user perceptions of
information quality and risk. Decision Support Systems, 55, 284–295.

Yi, Y., & Yoo, J. (2011). The long-term effects of sales promotions on
brand attitude across monetary and non-monetary promotions.
Psychology and Marketing, 28, 879–896.

Yzer, M. (2007). Does perceived control moderate attitudinal and
normative effects on intention? A review of conceptual and
methodological issues. In I. Ajzen, D. Albarracín, & R. Hornik (Eds.),
Prediction and change of health behavior: Applying the reasoned action
approach (pp. 111–127). Mahwah, NJ: Lawrence Erlbaum.

Yzer, M. (2012a). The integrative model of behavioral prediction as a tool
for designing health messages. In H. Cho (Ed.), Health communication
message design: Theory and practice (pp. 21–40). Los Angeles: Sage.

Yzer, M. (2012b). Perceived behavioral control in reasoned action theory:
A dual-aspect interpretation. Annals of the American Academy of
Political and Social Science, 640, 101–117.

Yzer, M. (2013). Reasoned action theory: Persuasion as belief-based
behavior change. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook
of persuasion: Developments in theory and practice (2nd ed., pp.
120–136). Thousand Oaks, CA: Sage.

Yzer, M. C., Cappella, J. N., Fishbein, M., Hornik, R., Sayeed, S., &
Ahern, R. K. (2004). The role of distal variables in behavior change:
Effects of adolescents’ risk for marijuana use on intention to use
marijuana. Journal of Applied Social Psychology, 34, 1229–1250.

Yzer, M. C., Fisher, J. D., Bakker, A. B., Siero, F. W., & Misovich, S. J.
(1998). The effects of information about AIDS risk and self-efficacy on
women’s intentions to engage in AIDS preventive behavior. Journal of
Applied Social Psychology, 28, 1837–1852.

Yzer, M. C., Southwell, B. G., & Stephenson, M. T. (2013). Inducing fear
as a public communication campaign strategy. In R. E. Rice & C. K.
Atkin (Eds.), Public communication campaigns (4th ed., pp. 163–176).
Los Angeles: Sage.

Zanna, M. P., & Cooper, J. (1974). Dissonance and the pill: An attribution
approach to studying the arousal properties of dissonance. Journal of
Personality and Social Psychology, 29, 703–709.

Zanna, M. P., & Rempel, J. K. (1988). Attitudes: A new look at an old
concept. In D. Bar-Tal & A. W. Kruglanski (Eds.), The social
psychology of knowledge (pp. 315–334). Cambridge, UK: Cambridge
University Press.

Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A. (2015). The
differential impact of statistical and narrative evidence on beliefs,
attitude, and intention: A meta-analysis. Health Communication, 30,
282–289.

Zeelenberg, M., & Pieters, R. (2007). A theory of regret regulation 1.0.
Journal of Consumer Psychology, 17, 3–18.

Zhang, X., Fung, H., & Ching, B. H. (2009). Age differences in goals:
Implications for health promotion. Aging and Mental Health, 13,
336–348.

Zhao, X. S., Lynch, J. G., & Chen, Q. M. (2010). Reconsidering Baron and
Kenny: Myths and truths about mediation analysis. Journal of
Consumer Research, 37, 197–206.

Ziegler, R. (2010). Mood, source characteristics, and message processing:
A mood-congruent expectancies approach. Journal of Experimental
Social Psychology, 46, 743–752.

Ziegler, R. (2013). Mood and processing of proattitudinal and
counterattitudinal messages. Personality and Social Psychology
Bulletin, 39, 482–495.

Ziegler, R. (2014). Mood and processing effort: The mood-congruent
expectancies approach. Advances in Experimental Social Psychology,
49, 287–355.

Ziegler, R., & Diehl, M. (2001). The effect of multiple source information
on message scrutiny: The case of source expertise and likability. Swiss
Journal of Psychology, 60, 253–263.

Ziegler, R., & Diehl, M. (2011). Mood and multiple source characteristics:
Mood congruency of source consensus status and source trustworthiness
as determinants of message scrutiny. Personality and Social Psychology
Bulletin, 37, 1016–1030.

Ziegler, R., Dobre, B., & Diehl, M. (2007). Does matching versus
mismatching message content to attitude functions lead to biased
processing? The role of message ambiguity. Basic and Applied Social
Psychology, 29, 268–278.

Zimbardo, P. G., Weisenberg, M., Firestone, I., & Levy, B. (1965).
Communicator effectiveness in producing public conformity and private
attitude change. Journal of Personality, 33, 233–255.

Zuckerman, M. (1979). Sensation seeking: Beyond the optimal level of
arousal. Hillsdale, NJ: Lawrence Erlbaum.

Zuckerman, M., Gioioso, C., & Tellini, S. (1988). Control orientation, self-
monitoring, and preference for image versus quality approach to
advertising. Journal of Research in Personality, 22, 89–100.

Zwarun, L., & Hall, A. (2012). Narrative persuasion, transportation, and
the role of need for cognition in online viewing of fantastical films.
Media Psychology, 15, 327–355.

Author Index

Aaker, J. L., 54, 157, 222, 255
Aarts, H., 116
Abdulla, R. A., 219
Abelson, R. P., 38, 67, 96–97, 180
Abraham, C., 111, 113, 116, 119, 143, 233, 267
Abrahams, D., 192
Ackers, M., 221
Adams, J., 25, 139
Adams, W. C., 258
Adamski, L., 196
Adamson, G., 118
Adaval, R., 217
Adriaanse, M. A., 114–116
Adrien, A., 131
Agarwal, J., 67
Aggarwal, P., 75
Agnew, C. R., 61, 260
Agrawal, N., 157, 255
Ah Yun, K., 241
Ahern, R. K., 106, 116–117, 131
Ahluwalia, R., 250, 259
Aiken, L. S., 107
Ainsworth, K., 121
Aitken, C. K., 91
Ajzen, I., 4, 6, 10–12, 17, 57, 61, 64–65, 68, 73, 99, 103, 106–107,
109, 113, 115–116, 120, 122–123, 126–127, 130–131, 185
Akhar, O., 212
Akl, E. A., 244
Albarracín, D., 10–12, 72, 84, 96, 103–104, 106, 116–117, 155, 158,
196, 204, 231
Albert, K. A., 111
Alcaraz, K. I., 219
Alden, D. L., 193, 217
Alemagno, S. A., 221
Alemi, F., 221
Alexander, G., 254
Algie, J., 232

Allcott, H., 116
Allen, B. P., 216
Allen, C. T., 67
Allen, M., 81, 89, 96, 139, 145, 196, 223, 229–230, 241–242, 246–247
Allen, M. W., 44, 50
Allen, R., 29
Allport, G. W., 4
Aloe, A. M., 236, 250–251
Alós-Ferrer, C., 81, 95
Al-Rafee, S., 130
Alvaro, E. M., 186, 256
Alvero, A. M., 221
Alwin, D. F., 6, 254
Amaratunga, R., 104
Amass, L., 221
Ambler, T., 142
Amiel, C., 139
Amodio, D. M., 93
Amos, C., 207
Anatol, K. W. E., 189, 193, 210
Andersen, K. E., 189
Andersen, R. E., 221
Anderson, C., 212
Anderson, C. A., 240
Anderson, J. P., 254
Anderson, J. W., 233
Anderson, L., 193
Anderson, L. R., 62
Anderson, N. H., 66, 73
Anderson, P. J., 103
Anderson, R. B., 111
Andersson, E. K., 114
Andreoli, V., 158
Andrews, K. R., 35, 55, 249
Anker, A. E., 218, 236, 250–251, 256
Apanovitch, A. M., 226
Appel, A., 88
Appel, M., 217, 219
Applbaum, R. L., 189, 193, 210
Apsler, R., 153

Arazi, D., 199
Arden, M. A., 262
Areni, C. S., 165
Armitage, C. J., 10, 15, 60, 72, 103–104, 106–107, 109, 113–116,
118, 120, 123, 127–128, 130–131, 140, 174, 240, 262
Armor, D. A., 119
Armour, C., 104
Armstrong, A. W., 221
Armstrong, C. L., 194
Armstrong, J. S., xv
Arnold, K., 75
Arnold, W. E., 211
Aronsky, D., 221
Aronson, E., 13, 26, 87, 89–91, 93, 97, 192
Aronson, J., 93, 262–263
Arriaga, X. B., 114
Arvola, A., 121
Ary, D. V., 111, 261
Asada, K. J., 182
Ash, L., 221
Ashford, S., 111
Askelson, N. M., 232
Aspinwall, L. G., 262
Assael, H., 181, 258
Astrom, A. N., 131
Atkin, C. K., 29
Atkins, A. L., 25
Atkinson, D. R., 203
Atwood, L. E., 108
Audi, R., 4
Aunger, R., 116
Austin, E. W., 207
Austin, J., 221
Averbeck, J. M., 155
Avery, R. J., 158
Aveyard, P., 133, 243
Avila, R. A., 234, 236, 250
Avom, J., 192
Axsom, D., 38, 154, 159
Aylward, B. S., 256

Baayen, R. H., 186
Baazova, A., 227
Babrow, A. S., 10
Backus, J., 53
Baezconde-Garbanati, L., 217
Bagley, G. S., 69, 74, 166
Bagozzi, R. P., 64–65, 72, 118, 121, 130
Bahamonde, L., 221
Bai, H., 261, 267
Bailey, W., 115
Bailis, D. S., 50
Bakker, A. B., 111, 174, 246
Bakker, M., 182
Bala, H., 121
Baldwin, S., 140
Ball, T. B., 221
Balmford, J., 139
Bamberg, S., 11, 139
Banaji, M. R., 5, 9
Banas, J. A., 152, 249, 255, 258, 265–266
Bandura, A., 100, 128
Banerjee, S. C., 218
Banks, A. J., 85
Bansal, H. S., 123
Baranowski, T., 241
Barclay, L. A., 111
Bareket-Bojmel, L., 9
Baril, J., 241
Barlow, T., 69
Barnett, J. P., 212
Barnett, M. A., 145
Baron, P. H., 154
Baron, R. M., 185
Baron, R. S., 154
Barone, M. J., 67
Barrett, D. W., 108
Barrios, I., 218
Barry, C. L., 223
Bartlett, S. J., 221
Baseheart, J., 180
Basil, D. Z., 95

Basil, M., 232
Baskerville, D., 103
Bassett, R., 249
Bates, B. R., 233
Bates, D. M., 186
Bates, M., 196
Batra, R., 70
Baudhuin, E. S., 189
Baughan, C. J., 109, 128
Baumeister, R. F., 93, 97
Baumgardner, M. H., 181
Baumgartner, H., 130
Bazzini, D. G., 42
Beaman, A. L., 234, 250
Bearden, W. O., 42
Beatty, M. J., 159, 189, 258
Beatty, S. E., 213
Beauchamp, M., 256
Beauchamp, N., 111
Beauvois, J.-L., 93
Beck, V., 219, 220
Becker, C. B., 88
Bedell, B. T., 226
Beentjes, J. W. J., 218
Behnke, R. R., 189
Beisecker, T. D., 17
Bekker, H. L., 220
Bélanger-Gravel, A., 95
Belch, G. E., 63, 67
Belch, M. A., 63
Bell, D. W., 152
Bell, H., 108
Beltramini, R. F., 192
Bem, D. J., 74
Benac, C. N., 146
Bennett, P., 123
Benoit, W. L., 266
Bensley, L. S., 256
Berek, J. S., 110, 128
Berendsen, M., 224, 243
Berent, M. K., 31

Bergin, A. E., 197
Berkanovic, E., 220
Berkel, H. J., 110
Berkowitz, A. D., 108
Berkowitz, N. N., 198, 200–201
Berlo, D. K., 189
Bernard, M. M., 258
Bernhagen, M., 196
Bernstein, G., 110, 128
Berntson, G. G., 74, 225
Berry, M. M., 118, 233
Berry, T. R., 111
Berscheid, E., 201, 204, 211
Betsch, T., 66
Bibby, M. A., 111
Bichard, S. L., 254
Bickel, W. K., 221
Biddle, S. J. H., 112, 116–117
Biek, M., 172
Bieri, J., 25
Biglan, A., 261
Bigman, C. A., 217
Bigsby, E., 237
Bilandzic, H., 218, 220
Bimber, B., 84
Birch, D., 260
Birkimer, J. C., 118, 233
Biswas, A., 207
Biswas, D., 207
Bither, S. W., 258
Bizer, G. Y., 15, 43–44, 54, 168, 169, 171, 174–175
Blake, H., 221
Blanchard, C. M., 60, 116, 123
Blank, D., 244
Blankenship, K. L., 265
Blanton, H., 9
Blau, E. M., 221
Bleakley, A., 117
Bless, H., 150, 152, 158, 255
Block, L. G., 95
Bluemke, M., 9

Blumberg, S. J., 259, 267
Bobocel, D. R., 181
Bochner, S., 197, 211
Bock, D. G., 197
Bodenhausen, G. V., 17, 254
Bodur, H. O., 61, 67
Boen, F., 221
Bogle, T., 38
Bogner, F. X., 103
Bohner, G., 17, 88, 152, 159, 166, 197, 255
Bolan, G., 138
Bolls, P., 233
Bolsen, T., 70, 174
Bond, C. F., Jr., 10, 12
Bond, R. M., 108
Bondarenko, O., 203
Bonetti, D., 104
Boninger, D. S., 31, 119, 219, 222, 257
Bonnes, M., 117
Booth, A. R., 131
Booth-Butterfield, S., 97, 117, 187
Bordley, C., 221
Borenstein, M., 96, 130, 182–183, 242, 248, 250, 264–265, 267
Borgida, E., 30, 38, 171
Borland, R., 133, 139
Bose, K., 232
Boster, F. J., 34–35, 49–50, 55, 212, 230, 249
Botta, R. A., 230
Bouman, M., 220
Bouvy, M. L., 221
Bowers, J. W., 180, 189, 193, 210
Bowman, J. M., 212
Boyson, A. R., 232
Bradac, J. J., 180, 181
Bradley, P. H., 190
Brandberg, Y., 103
Brannick, M. T., 88, 183
Brannon, L. A., 55, 75
Branstrom, R., 103
Brasfield, T. L., 107
Brashers, D., 183, 186–187
Braverman, J., 219
Brechan, I., 84, 96
Breckon, J. D., 139
Brehm, J. W., 78, 80–81, 97, 199–200, 255–256
Brehm, S. S., 255
Breivik, E., 57, 121
Brewer, N. T., 244
Brickell, T. A., 102
Bridle, C., 139
Brinberg, D., 61, 67, 103, 115
Briñol, P., 9, 15, 17–18, 148, 151, 154, 161, 168–169, 173–175, 191,
196–197, 208, 211, 255, 262, 267
Britt, T. W., 38, 44
Brock, T. C., 55, 75, 154–155, 200, 218
Brodsky, S. L., 190
Broemer, P., 10, 226
Brömer, P., 152
Brommel, B. J., 189, 191, 193, 210
Broneck, K., 256
Brotherton, T. P., 119
Brouwers, M. C., 229
Brown, D., 194
Brown, J., 232
Brown, K. M., 157
Brown, S., 260
Brown, S. P., 67, 118, 207
Brown, T. J., 60
Browne, B. A., 42
Brownstein, A. L., 95
Brug, J., 116, 139, 221
Bruner, J. B., 37, 53, 55
Bruning, A. L., 221
Bryan, C., 114
Bryant, J., 194
Bucholtz, D. C., 253
Bucy, E. P., 184, 247
Buday, R., 241
Budd, R. J., 72, 106, 127
Budney, A. J., 221
Bulger, C. A., 111
Buller, D. B., 154
Bullock, J. G., 185
Bundy, C., 256
Bunton, R., 140
Bunyan, D. P., 262
Burack, R. C., 221
Burciaga, C., 261
Burger, J. M., 108, 212, 234–235, 249–250
Burgoon, J. K., 211–212
Burgoon, M., 186, 234, 236, 250, 256
Burgwinkle, P., 241
Burke, C. J., 96
Burkell, J., 211
Burkley, E., 259
Burnell, P., 2
Burney, S., 139
Burnkrant, R. E., 181, 250
Burns, W. J., 213
Bursik, R. J., Jr., 119
Burton, D., 85, 261
Burton, S., 195
Busler, M., 213
Busselle, R., 218, 220
Buswell, B. N., 97
Butera, F., 191, 197, 211
Butler, H. A., 108
Butts, J., 221
Buunk, A. P., 205
Byrne, D., 201
Byrne, S., 158

Cacioppo, J. T., 9, 74, 148–155, 157–161, 163–165, 171–173, 181, 195, 225, 249, 259–260
Cafri, G., 183
Cahill, K., 138–139
Cai, D. A., 26
Cairns, E., 123
Callaghan, R. C., 133
Callendar, A., 196
Calsyn, D. A., 111
Calvi, J., 254
Cameron, K. A., 108
Campbell, D. T., 9
Campbell, G., 147
Campbell, K. E., 25, 215
Campbell, L. A., 25
Campbell, M. C., 192
Campbell, N. A., 91, 106, 127
Campo, S., 108, 232
Cannarile, S., 232
Cao, D. S., 137
Capitanio, J. P., 38, 44
Cappella, J. N., 60, 104, 106, 116–118, 131, 217, 223, 232, 241, 254
Cappon, P., 131
Car, J., 139
Carcioppolo, N., 232
Card, N. A., xviii, 182–183
Cardenas, M. P., 110
Carey, K. B., 107
Carey, R. N., 247
Carlsmith, J. M., 26, 86–89
Carlsmith, K. M., 38
Carlson, L., 155
Carlston, D. E., 74
Carnot, C. G., 31
Carpenter, C. J., 35, 42–43, 54–55, 182, 191, 240, 249, 256
Carpenter, J. M., 219–220
Carpenter, K. M., 240
Carrion, M., 232
Carroll, D., 121
Carroll, M. V., 241
Carron, A. V., 103, 109, 112, 123
Carrus, G., 117
Carter, K. D., 232
Carver, C. S., 227
Case, T., 233
Casey, M., 196
Casey, S., 160
Castle, H., 111
Catalan, J., 235–236
Ceci, S. J., 254
Celuch, K., 42, 50
Cerully, J. L., 217, 262
Cesario, J., 54, 227, 245
Chaiken, S., 4, 11–12, 17, 19, 34–35, 49–50, 55, 61, 66–67, 69, 72–
73, 76, 85, 93, 104, 106, 123, 149, 154, 158–159, 168, 172–174, 181,
190, 192–193, 198–199, 204–205, 226, 262, 267
Chan, C. W., 241
Chan, D. K.-S., 109, 123
Chandran, S., 244
Chang, C., 159, 227, 253
Chang, C.-T., 226
Chang, M. J., 194
Chantala, K., 244
Chapman, G. B., 113
Chapman, J., 115, 130, 262
Chartrand, T., 250
Chase, A., 88
Chatterjee, J. S., 217
Chattopadhyay, A., 212
Chatzisarantis, N. L. D., 102–103, 112–113, 116–117
Chavez, L. R., 111
Chebat, J.-C., 95, 197–198
Chelminski, A., 221
Chen, H. C., 260
Chen, M. F., 97, 120
Chen, M. K., 81, 95
Chen, Q. M., 185
Chen, S., 149, 158
Chen, T., 111
Chen, X., 166
Cheung, C. M.-Y., 208
Cheung, S. F., 109, 123
Chien, Y. H., 226
Childs, K. E., 219–220
Chin, P. P., 118–119
Ching, B. H., 253
Chiou, J. S., 194
Chiou, W. B., 88
Chiu, C., 227
Cho, H., 232–233
Chong, D., 70
Christenfeld, N. J. S., 159
Chu, G. C., 223, 248
Chuang, Y. C., 31
Chuinneagáin, S. N., 123, 223
Chung, A. H., 215, 218–219
Chung, M. H., 221
Chung, S., 26, 66, 154
Churchill, S., 115
Cialdini, R. B., 75, 108, 122, 126–127, 158–159, 173, 235–237, 240,
249, 254
Ciao, A. C., 88
Ciccarello, L., 103, 126
Clack, Z. A., 242
Claiborne, C. B., 38
Clapp, J. D., 108
Clark, E. M., 241, 253
Clark, H. H., 186
Clark, J. K., 26, 161, 197, 208
Clark, J. L., 220
Clark, R. A., 31, 212
Clarkson, J. J., 75
Clary, E. G., 39, 42, 46–47
Clawson, R. A., 70
Claypool, H. M., 154, 168, 175
Cleeland, L., 261
Cobb, M. D., 32
Cody, M. J., 219–220
Cohen, G. L., 93, 262–263
Cohen, J., 219
Cole, C. M., 234, 250
Cole, H. P., 217
Collins, B. E., 17, 19, 24, 30, 34, 37, 93
Collins, W. B., 232
Colon, S. E., 254
Combs, D. J. Y., 192
Compton, J. A., 258, 266
Coney, K. A., 197
Conner, M., 10, 15, 60, 72, 75, 95, 103–104, 106, 109, 113–121, 123,
127–128, 130–131, 143, 174, 220
Considine, J. R., 256
Conville, R. L., 64
Cook, A. J., 131
Cook, F. L., 70
Cook, T. D., 85
Cooke, R., 103–104, 112–113, 116
Cooper, H., 182
Cooper, J., 78, 87, 89, 91, 93, 97, 199–200, 205, 267
Cooper, Y., 115
Copeland, J., 39, 42, 47
Corbin, S. K. T., 261
Corby, N. H., 111
Corfman, K., 89
Corker, K. S., 227, 245
Costiniuk, C., 244
Cote, N. G., 11
Côté, S., 152
Cotte, J., 233
Cotton, J. L., 84–85
Coulter, R. A., 204, 233
Counselman, E., 199
Coupey, E., 61, 67
Courchaine, K., 260
Courneya, K. S., 60, 109, 113, 116
Courtright, J. A., 180
Cousins, S., 111
Coveleski, S., 261
Covello, V. T., 193
Covey, J., 226
Cox, A., 254
Cox, A. B., 221
Cox, A. D., 244
Cox, B. S., 221
Cox, D., 244
Cox, D. J., 221
Cox, E., 221
Craciun, C., 143
Craig, A., 123, 223
Craig, J. T., 256
Craig, S., 261
Craig, T. Y., 265
Crain, A. L., 13, 90–91
Cramer, R. J., 137, 190
Cramp, A., 111
Crandall, C. S., 38, 44
Crane, L. A., 110, 128
Crano, W. D., 10, 186
Crawley, F. E., III, 123
Creek, T. L., 219–220
Crites, S. L., Jr., 10, 67
Crocker, J., 263
Crockett, W. H., 64, 70, 76
Cron, W. L., 118
Cronen, V. E., 64
Crouse, J. C., 241
Crowley, A. E., 193, 223
Croy, G., 116
Cruz, M. G., 240
Cuijpers, P., 204
Cuite, C. L., 138
Cullen, M., 226
Cunningham, W. A., 9
Cushing, C. C., 256
Czasch, C., 115
Czerwinski, A., 196
Czuchry, M., 240

D’Alessio, D., 81
Daamen, D. D. L., 195, 211
Dagot, L., 249
Dahl, D. W., 212, 218, 257
Dahl, J., 194
Dahlstrom, M. F., 219
Dal Cin, S., 217, 219
Dale, A., 221
Daley, A. J., 103
Dancy, B., 256
Danks, J. H., 230
Dansereau, D. F., 240
Darby, B. L., 235–236
Dardis, F. E., 226, 241
Darke, P. R., 159, 168
Darker, C. D., 111, 121, 131
Darley, J. M., 199–200, 205
Darley, S. A., 87
Das, E., 107, 171, 255, 262–263
Das, N., 207
Dashti, A. E., 130
Davenport, J. W., Jr., 200–201
Davidson, D. J., 186
Davies, A., 233
Davis, A. K., 85
Davis, F. D., 121
Davis, J. M., 208
Davis, K. C., 254
Davis, L. L., 42
Davis, M. K., 189
Davis, R. E., 253–254
Davis, R. M., 93
de Bakker, D. H., 221
de Bruijn, G.-J., 116, 244
de Cremer, D., 233
de Graaf, A., 218–219, 241
de Hoog, N., 229–230, 232, 242, 247
de Houwer, J., 8, 17
de Nooijer, J., 139
de Ridder, D. T. D., 114–116
de Ruyter, K., 218–219, 241
de Vet, E., 139
de Vord, R. V., 207
de Vries, H., 115, 118, 122, 226, 233, 244, 254, 261
de Vries, N. K., 14, 57, 62, 70, 72, 118–119
de Vries, P., 116
de Vroome, E. M. M., 115
de Wit, J. B. F., 107, 114–116, 229–230, 232, 242, 247
de Young, R., 97
de Zwart, O., 120
Deale, A., 111
Dean, L., 60
Dean, M., 121
Deatrick, L. M., 256
Deaux, K. K., 25
DeBono, K. G., 37, 39, 42–44, 46, 49, 53, 130, 155, 253
DeCesare, K., 249
Decker, L., 196
DeJong, W., 108, 234
del Prado, A., 212
Delage, G., 95
Delespaul, P. A. E. G., 97
Delia, J. G., 64, 70, 189, 202–203, 210
DelVecchio, D., 89
Demaine, L. J., 108
Dembroski, T. M., 216
Denizeau, M., 78
Dent, C. W., 261
Derose, S. F., 221
Desharnais, R., 109
Detweiler, J. B., 226
Devine, D. J., xv
Dew, D. E., Jr., 204
Dexheimer, J. W., 221
Dholakia, R., 197
Dholakia, R. R., 197, 211–212
Di Dio, P., 114
Di Noia, J., 134–135, 145
Diamond, G. A., 32
Diaz, Y. E., 107
Dibble, T., 114
Dibonaventura, M. D., 113
Dickau, L., 113
Dickel, N., 17
Dickens, C., 256
Dickerson, C. A., 91
DiClemente, C. C., 132–133
Diehl, M., 10, 44, 152, 171, 208
Dijkstra, A., 262
Dijkstra, P., 205
Dillard, A. J., 217
Dillard, J. P., 103–104, 123, 131, 232–234, 236–237, 247, 249–250,
255–256, 264
Dilliplane, S., 233
Dillon, W. R., 211
Dillworth, T., 108
Dingus, T. A., 110
Ditto, P. H., 230
Dittus, P., 72
DiVesta, F. J., 74
Dobre, B., 44
Dolan, R. J., 81
Dolich, I. J., 258
Doll, J., 11, 73, 106
Donaldson, S. I., 261
Donnelly, J. H., Jr., 83
Donohew, L., 254
Donovan, R. J., 230
Doob, A. N., 88–89
Doosje, B. J., 25
Dorian, K., 108
Dormandy, E., 103, 115, 130
Dowd, E. T., 256
Downs, E., 219–220
Doyle, S. R., 111
Drevland, G. C. B., 194
Driver, B. L., 57, 61
Drolet, A., 253–254
Druckman, J. N., 70, 174
Druley, J. A., 230
Drummond, A. J., 28
Dryden, J., 119, 233
Duckworth, K. L., 168
Dugan, E., 241
DuMouchel, W., 221
Duncan, S. C., 111
Duncan, T. E., 111
Dunker, K., 230
Dunlop, S. M., 218–219
Duran, A., 109
Durand, J., 103, 115
Durantini, M. R., 204
Dutta-Bergman, M. J., 42
Dwan, K., xviii, 96

Eadie, D., 104, 117
Eagly, A. H., 4, 11–12, 17, 19, 25, 30, 34–35, 49–50, 55, 61, 64, 66–
67, 69, 72–74, 76, 84–85, 93, 96, 104, 106, 123, 159, 163, 171, 190,
192–193, 198–199, 262, 267
Earl, A. N., 204, 231
Earle, T. C., 190
Earleywine, M., 93
Eckes, T., 10–11, 112
Edell, J. A., 181
Edmunds, J., 111
Edwards, C. S., 222
Edwards, K., 75
Edwards, S. M., 256
Egloff, B., 233
Ehrlich, D., 81
Eibach, R. P., 240
Eid, M., 262
Eilertsen, D. E., 194
Einwiller, S., 159
Eisend, M., 193, 198, 207, 211, 223, 243–244
Eisenstadt, D., 88
Ekstein, G., 25
El-Alayli, A. G., 118–119
Elder, J. P., 261
Elek, E., 261
Elias, L. J., 113
Ellemers, N., 195, 211
Elliott, M. A., 104, 109, 115, 121, 128
Elliott, R., 62, 72
Elliott, S. M., 194
Ellsworth, R., 219–220
Elms, A. C., 87
Emington, J., 95
Enemo, I., 194
Engstrom, E., 191
Enguidanos, S. M., 111
Ennett, S. T., 261
Ennis, R., 38–39, 46, 74
Epp, L. J., 113
Epstein, E., 207
Epton, T., 262–263
Erb, H.-P., 159, 166, 168, 197
Ernst, J. M., 149
Ernsting, A., 143
Erwin, D. O., 241
Escalas, J. E., 218, 240
Esses, V. M., 72, 152, 159
Essex, M., 185
Estabrooks, P., 109, 123
Estambale, B., 221
Evans, A. T., 161, 197, 208
Evans, L. M., 172
Evans, R. I., 216
Evers, K. E., 132, 133, 135, 144–145
Everson, E. S., 103
Eves, F. F., 111, 121, 131
Ewoldsen, D. R., 15
Eyal, K., 208

Fabrigar, L. R., 10, 15, 67, 75, 84, 108, 161, 174, 212
Fagerlin, A., 217
Fairchild, A. J., 185
Fairhurst, A., 42
Falcione, R. L., 189, 193, 210
Faller, C., 261
Falomir-Pichastor, J. M., 191, 197, 211
Fariss, C. J., 108
Farrelly, M. C., 254
Fatoullah, E., 204–205
Fazili, F., 219
Fazio, R. H., 9, 11–12, 17, 57, 70, 198
Feeley, T. H., 236, 250–251
Feiler, D. C., 256
Fein, S., 36
Feinstein, J. A., 154, 172
Feldman, L., 84
Fennis, B. M., 142, 158, 171, 250
Fenson-Hood, K., 230
Ferguson, C. J., xviii
Ferguson, E., 119, 233
Fern, E. F., 234, 236, 250
Fernandez, N. C., 78, 91, 93
Fernandez-Medina, K., 218
Festinger, L., 76–77, 82, 86–87
Fetherstonhaugh, D., 160
Feufel, M. A., 110
Fiedler, K., 9
Field, A. P., 182
Fielding, A., 133
Figgé, M., 237
Filiatrault, P., 197–198
Fine, B. J., 215
Fink, E. L., 26, 34, 66
Finkelstein, B., 221
Finlay, K. A., 103, 123
Finlayson, B. L., 91
Fiore, C., 134
Firestone, I., 199
Fischer, A. R. H., 122
Fischer, D. L., 160
Fischer, J., 81
Fischer, P., 81
Fishbein, M., 4, 6, 10, 17, 26, 56–57, 60, 62, 64–68, 72, 74, 99, 103–
104, 106–107, 113, 116–117, 120, 122–123, 126–127, 130–131, 138,
166, 185, 254
Fisher, D., 261
Fisher, D. G., 139
Fisher, J. D., 111, 246
Fisher, W. A., 111
Fiske, S. T., 67
Fitzsimons, G. J., 95, 249, 256
Flanagin, A. J., 208
Flay, B. R., 85, 127, 261
Fleming, D., 4
Fleming, J. A., 50
Fleming, J. K., 111
Fleming, M. A., 67, 204
Fleming, M. T., 108
Fleming, S. M., 81
Flood, M. G., 115
Flowelling, R. L., 261
Floyd, D. L., 229, 242, 246
Flynn, D., 140
Foerg, F. E., 221
Folda, L., 232
Fong, G. T., 113, 185, 219
Fontaine, K. R., 221
Fontenelle, G. A., 186
Foregger, S., 157, 163
Forehand, M., 159
Fornara, F., 117
Forquer, H., 60
Fowler, J. H., 108
France, C. R., 217
Franckowiak, S. C., 221
Frangos, J. E., 221
Frank, C. A., 256
Frank, L. B., 217, 219
Franks, J., 212
Fraser, S. C., 234, 250
Frazier, B., 219
Freedman, J. L., 26, 88–89, 234, 250, 260
Freijy, T., 91
Freling, T. H., 89
French, D. P., 74, 103–104, 111–112, 116, 121, 131
Freres, D., 60
Frewer, L. J., 190
Frey, D., 81, 84–85
Fried, C. B., 13, 18, 89–91, 97
Friedrich, J., 160
Fry, J. P., 221
Fuchs, R., 142
Fung, H., 253

Gaeth, G. J., 244
Gagné, C., 65, 109, 123, 127–128
Galavotti, C., 219–220
Gallagher, D., 160
Gallagher, K. M., 226, 244–245
Gallagher, S., 118
Gallison, C., 261
Gamble, C., xviii, 96
Gangestad, S. W., 39
Ganster, T., 95
Garcia, C., 207
Garcia, J., 262
Garcia-Marques, T., 154
Gardner, B., 116
Gardner, W. L., 74, 225
Garner, R., 212
Garrett, R. K., 85
Garst, J., 154
Gasco, M., 151
Gass, R. H., 2, 17
Gastil, J., 37, 39, 44–45, 159
Gaston, A., 111
Gaudreau, P., 114
Gawronski, B., 17, 95
Gaylord, G., 25
Gaziano, C., 190
Geers, A. W., 254
Gelmon, L. J., 221
Gerard, H. B., 80
Gerber, A. S., 95
Gerend, M. A., 103, 226
Germain, M., 95
Gerrans, P., 116
Gerrard, M., 108
Geurts, D., 111
Ghadiri, A., 221
Gibbons, F. X., 108
Gibson, L., 60
Gierl, H., 75
Giles, M., 104, 123
Gillet, V., 192
Gillett, R., 182
Gillette, J. C., 204
Gilovich, T., 82
Gimotty, P. A., 221
Ginis, K. A. M., 111, 130
Ginsburg, G. P., 189
Gioioso, C., 40, 42
Girvin, H., 139–140
Gitta, M. Z., 181
Glanz, A., 109
Glasgow, R., 261
Glasman, L. R., 10–12
Glazer, E., 194
Gleicher, F., 119, 219, 222, 257
Glik, D., 220
Glor, J., 38, 44
Glynn, C. J., 108
Glynn, R. J., 192
Göckeritz, S., 122
Godbold, L. C., 266
Godin, G., 65, 95, 103, 109, 112–113, 123, 127–128, 131
Godinez, M., 111
Goei, R., 212, 232
Goethals, G. R., 203
Goldhagen, J., 221
Goldman, M., 237
Goldman, R., 150, 153, 158, 195
Goldsmith, R. E., 190, 207
Goldstein, M. G., 134
Goldstein, N. J., 108, 122, 127, 237
Goldstein, S., 219
Gollust, S. E., 223
Gollwitzer, M., 9
Gollwitzer, P. M., 95, 114, 116, 216
Gosling, P., 78
Gonzalez, J., 217, 255
Good, A., 267
Goodall, C. E., 8
Gorassini, D. R., 235, 250
Gordon, L., 220
Gorely, T., 221
Gorman, D. R., 261
Goudas, M., 103
Gould, S. J., xvii
Gourville, J. T., 244
Goyder, E., 131
Grady, K., 260
Graham, J. W., 261
Graham, S., 118, 120
Granberg, D., 19, 21, 23, 25, 28, 215
Grandpre, J. R., 256
Granić, Ð.-G., 81
Grant, A. M., 256
Grant, H., 54
Grant, N. K., 212
Grasmick, H. G., 119
Green, B. F., 7
Green, D. P., 185
Green, L. G., 69
Green, M. C., 154, 218–220, 241
Green, N., 138–139
Greenbank, S., 103
Greenberg, B. S., 211–212
Greenberg, J., 97, 181
Greene, K. L., 103, 186, 218
Greenslade, J. H., 97
Greenwald, A. G., 9, 46, 55, 95, 130, 181
Greenwood, K. M., 133
Gregory, G. D., 253
Gregory, W. L., 240
Greitemeyer, T., 81
Gremmen, F., 186
Grewal, D., 38
Griffin, M. P., 190
Griffin, S., 74, 121
Griskevicius, V., 75, 108, 122, 127, 249, 255
Grofman, B., 25
Grohmann, B., 95
Grossbard, J. R., 261
Gruber, V. A., 11
Gruner, C. R., 194
Guadagno, R. E., 254
Guéguen, N., 212, 240, 249, 256
Guillory, J. E., 158
Gunnell, J. J., 254
Gunning, J., 110, 128
Gunther, A. C., 211
Guo, B. L., 133
Gurwitz, J. H., 241
Gutkin, T. B., 111
Gutscher, H., 190
Gutteling, J., 203
Guttman, I., 81

Ha, S. E., 185
Haaga, D. A. F., 174
Habashi, M. M., 197
Habyarimana, J., 221
Hackman, C. L., 104
Hackman, J. R., 62
Haddock, G., 9, 35, 67, 72, 74–75, 219
Hagen, K. M., 111
Hagger, M. S., 54, 103, 112–113, 116–117, 222
Hagtvet, K. A., 123
Hahn, K. S., 84
Hahn, U., 66
Hailey, B. J., 181
Hale, J. L., 103, 186
Hale, S. L., 236, 250
Hall, A., 218
Hall, J. R., 154, 186, 256
Hall, K. L., 134–135
Hall, P. A., 113, 115
Hallam, J. S., 260
Ham, S. H., 60
Hamilton, D. L., 74
Hamilton-Barclay, T., 119, 233
Handley, I. M., 254
Hankins, M., 111
Hansen, W. B., 261
Hänze, M., 152
Hardeman, W., 74, 104, 121
Harinck, F., 195, 211
Harkins, S. G., 149
Harland, P., 120, 131
Harlow, L. L., 134
Harmon, R. R., 197
Harmon-Jones, C., 93
Harmon-Jones, E., 78, 93, 97
Harnish, R. J., 44
Harries, C., 190
Harrington, N. G., 221, 242
Harris, A. J. L., 66
Harris, M. S., 146, 267
Harris, P. R., 131, 262–263
Harris, R. J., 145
Harrison, T. R., 221
Hart, P. S., 158
Hart, W., 84, 96
Harte, T., 241
Harterink, P., 120
Hartmann, T., 171, 255
Hartwick, J., 103, 112
Harvey, K., 108
Harvey, N., 190
Harvey, O. J., 20, 26
Hasper, P., 111
Hass, J. W., 69, 74, 166
Hass, R. G., 201, 223, 260
Hassandra, M., 103
Hastings, P., 220
Hatch-Maillette, M. A., 111
Hatzigeorgiadis, A., 103
Hatzios, M., 38
Haugen, J. A., 39, 42, 46–47
Haugtvedt, C. P., 151–152, 154, 163, 255, 265
Hausenblas, H. A., 103, 112
Hauth, A. C., 107
Havermahl, T., 221
Hayes, A. F., 185
Hazlewood, J. D., 159
He, J., 121
Head, K. J., 221, 242
Heald, G. R., 60, 117
Heatherton, T. F., 93
Hecht, M. L., 220, 261
Heckler, S. E., 186
Hedderley, D., 65, 190
Hedeker, D., 127
Hedges, L. V., 96, 130, 182–183, 250
Heene, M., xviii
Heesacker, M., 154, 181
Hefner, D., 9
Heiphetz, L., 5
Heitland, K., 88
Hembroff, L., 29
Herrmann, A., 82
Henard, D. H., 89
Henderson, J. E., 199–200, 205
Henderson, M. D., 149
Hendricks, L. A., 108
Hendricks, P. S., 256
Henley, N., 230
Hennings, S. J., 74, 121
Herek, G. M., 38–40, 44
Herr, P. M., 67, 74, 95
Herrin, J., 244
Herzog, T. A., 140
Heslin, R., 259
Hesse, S. J., 218
Hether, H. J., 220
Hettema, J. E., 256
Hetts, J. J., 119
Hevey, D., 123, 223
Hewgill, M. A., 191
Hewitt, E. C., 181
Hewstone, M., 9
Hia, K., 221
Hibbert, S., 233
Hidalgo, P., 217, 241
Higgins, A. R., 114
Higgins, E. T., 54, 227
Higgins, J. P. T., 96, 130, 182–183, 250
Higgins, S. T., 221
Highhouse, S., 181
Hill, A. L., 220
Hilligoss, B., 190
Hilmert, C. J., 159
Himmelfarb, S., 199
Hinson, R. E., 123
Hinyard, L. J., 220, 241
Hirsch, H. A., 254
Hirsh, J. B., 254
Hitchcock, K., 224, 243
Hitsman, B., 224, 243
Ho, S. K., 261
Hodson, G., 159
Hoegg, J., 212
Hoeken, H., 111, 165, 218, 254
Hoffmann, K., 66
Hogg, M. A., 130
Høie, M., 120
Holbert, R. L., 258, 266
Holbrook, M. B., 61–62, 73, 181
Holland, A., 203
Holmes, G., 207
Holmes, J., 208
Holmes, K., 241, 262
Holzemer, W. L., 212
Homer, P. M., 70
Honeycutt, E. D., 207
Hong, S., 241
Hong, T., 208
Hong, Y.-H., 266
Hopfer, S., 219
Horai, J., 204–205
Horberg, E. J., 114
Horcajo, J., 151
Hornik, R. C., 60, 106, 116–117, 131, 223
Hornikx, J., 222, 253–254
Horswill, M. S., 123
Hospers, H. J., 120
Hossain, S. Z., 44
Hosseinzadeh, H., 44
Householder, B. J., 103
Houston, T., 241
Hove, T., 103, 104
Hovland, C. I., 19–20, 22, 25, 29, 34, 74, 215
Howard, C., 190
Howard, D. J., 181, 208, 212, 240
Howard, G., 196
Howe, B. L., 111
Howe, D., 224, 243
Howell, J. L., 262
Hox, J. J., 114
Hoxworth, T., 138
Hoyer, W. D., 223
Hoyle, R. H., 254
Hoyle, S., 123
Hoyt, W. T., 211
Hsieh, G., 192
Hu, Y. F., 208
Huang, G. C., 219–220
Hubbell, A. P., 230
Hubbell, F. A., 111
Huber, F., 82
Hübner, G., 103, 120
Hudson, S. E., 192
Huettl, V., 75
Huge, M. E., 108
Hughes, M., 60
Huh, J. H., 75
Huhmann, B. A., 119, 174
Hukkelberg, S., 131
Hukkelberg, S. S., 123
Hulbert, J. M., 62
Hullett, C. R., 43, 49–50, 171, 255, 264
Hummer, J. F., 261
Hunn, B. P., 110
Hunt, S. D., 83, 193
Hunter, J. E., 10–11, 112, 234, 236, 250
Hunter, R., 66
Hurling, R., 113
Hurwitz, J., 25
Hurwitz, S. D., 191
Huskinson, T. L. H., 74–75
Hustinx, L., 165, 219
Huston, T. L., 201
Hutchings, V. L., 85
Hutchison, A. J., 139
Hwang, Y., 155
Hyde, J., 111
Hyde, M. K., 103

Iannarino, N. T., 221, 242
Iatesta, M., 138
Ibarra, L., 220
Igartua, J. J., 218
Infante, D. A., 200
Insko, C. A., 197–198, 211, 260
Ioannidis, J. P. A., xviii, 127, 182
Ireland, F., 233
Ito, T. A., 9
Ivancevich, J. M., 83
Ivanov, B., 258
Iyengar, S., 84
Izuma, K., 81

Jaccard, J., 9, 72, 113
Jack, W., 221
Jackson, S., 179, 181, 183, 186–187
Jacobs, S., 179, 181
Jacobson, R. P., 122
Jaffe, A., 139
Jain, P., 215, 218–219
James, R., 10, 109
Janssen, L., 158, 250
Jarvis, B., 138
Jarvis, W. B. G., 154, 172
Jelinek, S., 227, 245
Jemmott, J. B., 104
Jensen, C. D., 256
Jensen, J. D., 186, 194, 225–226, 232, 244–245
Jeong, E. S., 227
Jeong, S.-H., 155
Jerit, J., 69
Jessop, D. C., 115, 262
Jewell, R. D., 123
Jiang, L., 212
Joag, S., 195
Jobber, D., 62, 72
Jochum, R., 114
Joffe, S., 192
John, L. K., 182
Johnson, B. B., 152
Johnson, B. T., 25, 30, 35, 69, 72, 103–104, 106, 116–117, 163, 165–
166, 171–172, 181, 191
Johnson, H. H., 158, 211
Johnson, J., 108
Johnson, M., 104
Johnson, P. J., 115
Johnson, R. W., 97
Johnston, D. W., 104
Johnston, L. H., 139
Johnston, P. L., 118, 233
Johnstone, P. M., 232
Jonas, K., 10, 152
Jones, A., 155
Jones, E. E., 89
Jones, J. J., 108
Jones, J. L., 223
Jones, M. C., 220
Jones, R. A., 199–200, 267
Jones, R. T., 261
Jones, S., 216
Jones, S. P., 186
Joormann, J., 186
Jordan, A., 117
Jordan, B., 196
Jordens, K., 93
Joule, R. V., 89, 93
Joyce, L., 256
Judah, G., 116
Judd, C. M., 5, 28, 186
Julka, D. L., 43
Jun, S. Y., 75
Jung, J. M., 75
Jung, T. J., 60, 117
Juon, H. S., 232

Kahneman, D., 244
Kaigler-Evans, K., 200
Kaiser, F. G., 10, 103, 120
Kaiser, H. F., 96
Kaldenberg, D. O., 42
Kalesan, B., 219
Kalichman, S. C., 107
Kallgren, C. A., 155, 159–160, 172, 181
Kalyanaraman, S., 254
Kamb, K., 138
Kamins, M. A., 181, 213, 258
Kane, R., 262
Kang, S. K., 254
Kang, Y., 254
Kang, Y.-S., 159, 199, 205
Kantola, S. J., 91, 106, 127
Kao, C. F., 151, 172
Kao, D. T., 240
Kaplan, C. R., 110, 128
Kaplowitz, S. A., 26, 34
Karan, D., 207
Karanja, S., 221
Kardes, F. R., 181
Kariri, A., 221
Karlan, D., 221
Kashima, Y., 218–219
Kasmer, J., 28
Kath, L. M., 111
Kattapong, K. R., 204
Katulak, N. A., 254
Katz, D., 35–37, 40, 46, 48–49, 51, 53
Kaufmann, M., 66
Kaur, B., 139
Kay, L. S., 111
Kaysen, D. L., 108
Keane, J., 103
Keaveney, S. M., 82
Keegan, O., 61, 127
Keeling, R. P., 108
Keer, M., 121
Kellar, I., 111
Kellaris, J. M., 75
Keller, P. A., 242
Keller, P. S., 192
Kellermann, A. L., 242
Kelly, B. J., 223
Kelly, J. A., 107
Kelly, K. J., 207
Kelly, M., 25
Kelly, R. J., 97
Kelso, E., 240
Keltner, D., 97
Kendzierski, D., 10, 12, 113
Keng, C. J., 81
Kennedy, C., 212
Kennedy, M. G., 220
Kennedy, N. B., 267
Kenny, D. A., 28, 185–186
Kenrick, D. T., 75
Kenworthy, J. B., 93
Kerin, R. A., 208, 212
Kerr, G. N., 131
Kerr, P. M., 159, 199, 205
Kesek, A., 9
Kesselheim, A. S., 192
Kessels, L. T. E., 231, 233
Kidder, L. H., 9
Kidwell, B., 123
Kiernan, M., 185
Kiesler, C. A., 19, 24, 30, 34, 37
Killeya, L. A., 69, 165–166
Kilmarx, P. H., 220
Kilmer, J. R., 108
Kim, A., 38
Kim, E. S., 217
Kim, H. K., 219
Kim, H. S., 217, 219
Kim, M.-S., 10, 11, 112
Kim, S., 261
Kimani, J., 221
Kimball, A. B., 221
Kimble, C. E., 203
Kimble, D. L., 111
Kinder, D. R., 67
King, A. J., 221
King, B., 241
King, M., 28
King, S. W., 200, 203
King, W. R., 121
Kinmonth, A. L., 74, 104, 121
Kinsey, K. A., 119
Kirkham, J. J., xviii, 96
Kirmani, A., 192
Kiviniemi, M. T., 121
Klahn, J. A., 139
Klass, E. T., 93
Klassen, M., 145
Klein, K. A., 122–123
Klein, W. M. P., 217, 262–263
Kleine, S. S., 67
Klein-Selski, E., 261
Klem, M. L., 241
Klentz, B., 234, 250
Klimmt, C., 9
Klock, S. J., 200
Knapp, T. R., 64
Knäuper, B., 115, 240
Knight, K. M., 256
Knowlden, A. P., 104
Knowles, E. S., 257
Koehler, J. W., 259
Koerner, A. F., 266
Koestner, R., 114
Kok, G., 103, 112, 120, 231–233, 248
Konkel, J., 254
Koob, J. J., 107
Kootsikas, A., 261
Kopperhaver, T., 220
Koring, M., 114–115, 143
Kosmidou, E., 103
Koster, R., 81
Kothe, E. J., 91, 104
Kotowski, M. R., 232
Kotterman, D., 103
Kovac, V. B., 123
Kraemer, H. C., 185
Kraft, J. M., 219–220
Kramer, A. D. I., 108
Krantz, L. H., 115
Kraus, R., 221
Kraus, S. J., 10–11
Kraut, R. E., 192
Krcmar, M., 186
Kreausukon, P., 114
Kremers, S. P. J., 116, 221
Kreuter, M. W., 219–220, 241, 253
Krieger, J. L., 223, 232, 261
Krishnamurthy, P., 257
Kroese, F. M., 116
Kroeze, W., 116
Kromrey, J. D., 183
Krosnick, J. A., 5, 15, 18, 28, 31, 171, 174, 254
Kruger, M. W., 159, 189
Kruglanski, A. W., 72, 82, 166–169, 173, 175
Kuan, K. K. Y., 208
Kuhlmann, A. K. S., 219–220
Kuiper, N. M., 118–119
Kujawski, E., 196
Kulik, J. A., 108, 159
Kumkale, G. T., 158, 196
Kupfer, D. J., 185
Kvedar, J. C., 221
Kwak, L., 221
Kyriakaki, M., 222

LaBrie, J. W., 261
Lac, A., 186
Lacaille, J., 240
Laczniak, R. N., 155
Lafferty, B. A., 207
Lähteenmäki, L., 121
Lai, M. K., 261
Lajunen, T., 103, 118
Lally, P., 116
Lalor, K. M., 181
Lam, S.-P., 103
Lam, T. H., 261
Lam, W. K., 111
Lampton, W. E., 194
Lancaster, T., 138–139
Landauer, T. K., 88–89
Landman, J., 119
Landy, D., 159
Lane, D. M., 186
Lane, K. A., 9
Lane, L. T., 256
Lang, B., 240
Lange, J. E., 108
Lange, R., 26
Langlois, M. A., 260
Langner, T., 198, 207
LaPiere, R. T., 17
Lapinski, M. K., 232
Larimer, M. E., 108
Larkey, L. K., 217, 220, 254
Laroche, M., 197–198
Larsen, E., 120
LaSalvia, C. T., 108
Lasater, T. M., 216
Latimer, A. E., 111, 130, 226–227, 254
Laufer, D., 253
Lauver, D., 64
Lavidge, R. J., 142
Lavin, A., 221
Lavine, H., 30, 42–43, 46, 171
Lavis, C. A., 171, 208, 255
Lawton, R. J., 75, 103, 106, 115–117, 121, 127–128
Le Dreff, G., 212
Leader, A. E., 217, 223
Learmonth, M., 111
Leary, M. R., 223
Leavitt, A., 53
Leavitt, C., 197, 200, 211–212
LeBlanc, B. A., 97
Lechner, L., 115, 118
LeComte, C., 109
Lee, C. M., 108
Lee, E.-J., 157
Lee, G., 256
Lee, J. E., 108
Lee, J. H., 256
Lee, K. H., 130
Lee, M. J., 254
Lee, M. S., 207
Lee, S., 221
Lee, T., 208
Lee, W., 266
Lee, W. J., 256
Leeuwis, C., 219
Lehavot, K., 212
Lehmann, D. R., 242, 256
Leibold, J. M., 118–119
Leippe, M. R., 88, 181
Lemert, J. B., 189
Lemus, D. R., 208
Lennon, S. J., 42
Leone, L., 118
Lepage, L., 109
Lerman, C., 60, 217
Leshner, G., 233
Lester, R. T., 221
Leventhal, H., 143, 216, 248
Levin, I. P., 244
Levin, K. D., 30, 69, 163, 165–166, 171
Levine, T. R., 157, 163, 182, 231
Levinger, G., 201
Levitan, L. C., 157
Levy, B., 199
Levy, D. A., 17
Lewis, C., 108
Lewis, H., 123
Lewis, M. A., 108
Lewis, S. K., 235–236
Li, H. R., 256
Liang, Y. H., 95
Liao, T.-H., 81
Libby, L. K., 240
Liberman, A., 262, 267
Lichtenstein, E., 261
Lieberman, D. A., 241
Likert, R., 7
Lim, H., 212
Lim, S., 208
Lin, H.-Y., 25
Lin, J.-H., 241
Lin, W.-K., 258, 266
Lindberg, M. J., 84, 96
Linder, D. E., 89, 223, 240
Lindow, F., 66
Lindsey, L. L. M., 212, 241
Linn, A. J., 221
Linn, J. A., 257
Linnemeier, G., 232
Lipkus, I. M., 69, 262
Lippke, S., 114–115, 137, 139, 143
Litman, J., 115
Littell, J. H., 139–140
Liu, K., 265
Liu, W.-Y., 230
Livi, S., 97, 131
Loersch, C., 154
Loewenstein, G., 182
Loh, T., 208
Longdon, S., 121, 131
Longoria, Z. N., 114
Lorch, E. P., 254
Lord, C. G., 10, 12, 240
Lord, K. R., 207
Losch, M., 232
Louis, W. R., 103
Love, G. D., 220
Lowe, R., 121
Lowrey, T. M., 43, 53
Lu, A. S., 241
Luce, M. F., 240
Luchok, J. A., 191
Lukwago, S. N., 253
Lundell, H., 219
Lunney, C. A., 108
Lupia, A., 198
Luszczynska, A., 111
Lutz, R. J., 67, 69, 74, 165
Luyster, F. S., 181
Luzzo, D. A., 111
Lycett, D., 243
Lynch, J. G., 185
Lynn, M., 75, 192
Lyon, J. E., 138
Lyon-Callo, S. K., 232
Lyons, E., 97, 103, 131

Mabbott, L., 262
MacDonald, T. K., 15, 174
MacDonnell, R., 257
Machleit, K. A., 67
Mack, D. E., 103, 112
MacKenzie, S. B., 67, 151
Mackie, D. M., 152, 154, 204, 233, 255
MacKinnon, D. P., 185
MacKintosh, A. M., 104, 117
MacNair, R., 260
MacRae, S., 111
Madden, T. J., 73, 109
Maddux, J. E., 205–206
Magana, J. R., 111
Magee, R. G., 254
Maggs, J. L., 260–261
Magnini, V. P., 207
Magnussen, S., 194
Maher, L., 123, 223
Mahler, H. I. M., 108
Mahloch, J., 106
Maio, G. R., 9, 35, 39, 48–51, 152, 159, 258
Majeed, A., 139
Major, A. M., 108
Makredes, M., 221
Malhotra, N. K., 67
Mallach, N., 143
Mallett, J., 104
Malloy, T. E., 111
Mallu, R., 11
Mallya, G., 60, 117
Maloney, E. K., 232
Malotte, C. K., 138
Maltarich, S., 230
Manchanda, R. V., 174
Mangleburg, T. F., 38
Manis, M., 25
Mann, T., 181, 227
Mannetti, L., 82, 97, 131, 166, 168
Manning, M., 103, 112, 116–117, 122–123, 127, 129, 131
Manson-Singer, S., 131
Manstead, A. S. R., 62, 70, 72, 103, 109, 116, 118–120, 257
Manzur, E., 217, 241
Mara, M., 219
Marcus, A. C., 69, 110, 128
Marcus, B. H., 134
Margolis, P., 221
Marin, B. V., 116, 253
Marin, G., 116, 253
Mark, M. M., 171, 208, 255
Marks, L. J., 181
Marlino, D., 181
Marlow, C., 108
Marquart, J., 211
Marra, C. A., 221
Marsh, K. L., 43
Marston, A., 212
Marteau, T. M., 103, 111, 115, 130
Martell, C., 29
Martell, D., 122–123
Marti, C. N., 88
Martin, B. A. S., 240
Martin, J., 114
Martinelli, E. A., Jr., 111
Martinie, M. A., 89
Marttila, J., 139
Maschinski, C., 219–220
Mason, H. R. C., 261
Mason, T. E., 103
Masser, B., 217
Mather, L., 139
Matheson, D. H., 123
Mathios, A. D., 158
Maticka-Tyndale, E., 131, 212
Mattern, J. L., 108
Matterne, U., 103, 126
Matthes, J., 203
Maurissen, K., 221
Maxfield, A. M., 232
May, K., 196
Mayle, K., 262
Mazmanian, L., 159
Mazor, K. M., 241
McAdams, M. J., 194
McBride, J. B., 253
McCallum, D. B., 193
McCann, R. M., 208
McCaslin, M. J., 9, 154
McCaul, K. D., 138
McClenahan, C., 104, 118
McClintock, C., 40
McCollam, A., 240
McConnell, A. R., 118–119
McConnell, M., 221
McCroskey, J. C., 189, 191, 193, 210–211, 259
McCubbins, M. D., 198
McDermott, D. T., 247
McDonald, L., 230
McEachan, R. R. C., 75, 103, 106, 116–117, 121, 127–128
McFadden, H. G., 224, 243
McFall, S., 85
McGee, H., 61, 127
McGilligan, C., 118
McGinnies, E., 197, 210
McGowan, L., 256
McGrath, K., 190
McGuffin, S. A., 241
McGuire, W. J., 180, 215, 258–259, 266
McIntosh, A., 154
McIntyre, P., 145
McKay-Nesbitt, J., 174
McKee, S. A., 123
McKimmie, B. M., 97
McLarney, A. R., 254
McLeod, J. H., 261
McMahon, T. A., 91
McMath, B. F., 137
McMillan, B., 120, 123, 131
McMillion, P. Y., 111
McNamara, M., 241
McNeil, B. J., 244
McQueen, A., 219, 262
McRee, A. L., 244
McSpadden, B., 218
McSweeney, A., 116, 120
McVeigh, J. F., 237
McVey, D., 109
Medders, R. B., 208
Meertens, R. M., 69
Meffert, M. F., 66
Mehdipour, T., 108
Mehrotra, P., 130
Meijinders, A., 203
Mellor, S., 111
Melnyk, V., 122
Menon, G., 157, 244, 255
Mercier, H., 159
Merrill, L., 84, 96
Merrill, S., 25
Mertz, R. J., 189
Merwin, J. C., 74
Messian, N., 212
Messner, C., 9
Metzger, M. J., 208
Metzler, A. E., 260
Meuffels, B., 241
Mevissen, F. E. F., 69
Meyerowitz, B. E., 181, 226
Miarmi, L., 155
Michie, S., 103, 115, 130
Micu, C. C., 204
Midden, C. J. H., 116, 203
Middlestadt, S. E., 57, 60, 67, 74, 121, 126
Miene, P. K., 39, 42, 46–47
Millar, K. U., 11
Millar, M. G., 11
Millar, S., 104
Miller, C., 256
Miller, C. H., 256
Miller, D., 91
Miller, G. R., 180, 191, 211–212
Miller, J. A., 249
Miller, N., 19, 24, 30, 34, 37, 93, 154
Miller, R. L., 95
Miller, W. R., 256
Miller-Day, M., 261
Mills, E. J., 221
Mills, J., 78, 81, 203
Milne, S., 114, 229, 246
Milton, A. C., 117
Minassian, L., 220
Miniard, P. W., 67
Mio, G. R., 75
Miron, A. M., 256
Miron, M. S., 191
Misak, J. E., 93
Mischkowski, D., 263
Mishra, S. I., 111
Misovich, S. J., 111, 246
Misra, S., 213
Mitchell, A. A., 67
Mitchell, A. L., 204
Mitchell, G., 9
Mitchell, J., 74, 121
Mitchell, M. M., 157
Mittal, B., 41
Mittelstaedt, J. D., 213
Mkandawire, G., 232
Mladinic, A., 64, 67, 74
Moan, I. S., 120, 131
Moldovan-Johnson, M., 60
Molyneaux, V., 115
Mondak, J. J., 161
Mongeau, P. A., 229–230, 232–233, 237
Monroe, K. B., 234, 236, 250
Montaño, D. E., 106
Montoya, H. D., 108
Mooki, M., 219–220
Mooney, A., 220
Moons, W. G., 227, 233
Moore, B., 118, 120
Moore, D. J., 260
Moore, K., 131
Moore, K. A., 230
Moore, M., 233
Moore, S. E., 171, 208, 255
Moors, A., 8
Mora, P. A., 143
Morales, A. C., 249
Moran, M. B., 219
Morgan, M. G., 107
Morgan, S. E., 217, 221, 254
Morman, M. T., 218
Morris, B., 75
Morris, K., 121, 131
Morrison, K., 230
Morris-Villagran, M., 157
Mortensen, C. R., 122
Morton, K., 256
Morwitz, V. G., 95
Moser, G., 112
Moss, T. P., 114
Mouttapa, M., 220
Mowad, L., 174, 254
Mowen, J. C., 195
Moyer, A., 262
Moyer, R. J., 198, 200–201
Moyer-Gusé, E., 215, 218–220, 241
Muehling, D. D., 155
Muellerleile, P. A., 72, 103–104, 106, 116–117
Mugny, G., 191, 197, 211
Mullainathan, S., 221
Mullan, B. A., 104, 117
Müller-Riemenschneider, F., 221
Munch, J. M., 181, 253
Munn, W. C., 194
Murayama, K., 81
Murphy, S. T., 217, 219–220
Murray, T. C., 123
Murray-Johnson, L., 230, 232
Muthusamy, N., 231
Mutsaers, K., 220
Mutz, D. C., 85
Myers, J. A., 192
Myers, L. B., 123
Nabi, R. L., 219, 233, 255, 258
Naccache, H., 95
Naccari, N., 204–205
Nahai, A., 227
Nail, P. R., 17, 93, 263
Najafzadeh, M., 221
Nakahiro, R. K., 221
Nan, X., 174, 196, 207, 211, 217, 262
Nanneman, T., 28
Napper, L. E., 139, 262
Nathanson, A., 119
Nava, P., 111
Nayak, S., 241
Nebergall, R. E., 19, 23–25, 30, 33–34, 215
Neff, R. A., 221
Neighbors, C., 108
Neijens, P., 241
Neijens, P. C., 121, 207
Neimeyer, G. J., 260
Nell, E. B., 258
Nelson, D. E., 97
Nelson, K., 212
Nelson, L. D., 262
Nelson, M. R., 35, 46
Nelson, R. E., 203
Nelson, T. E., 70
Nelson, T. F., 108
Nemets, E., 257
Nestler, S., 233
Netemeyer, R. G., 195
Neudeck, E. M., 108
Neufeld, S. L., 255
Newell, S. J., 190, 207
Newman, L. S., 11
Ng, J. Y. Y., 111
Ng, M., 115
Ng, S. H., 44, 50
Ngugi, E., 221
Nichols, A. J., III, 57
Nichols, D. R., 30, 69, 163, 171
Nickerson, D. W., 114
Niederdeppe, J., 60, 218–219, 223, 254
Niedermeier, K. E., 118–119
Nienhuis, A. E., 257
Nigbur, D., 97, 103, 131
Niiya, Y., 263
Niño, N. P., 219–220
Nitzschke, K., 221
Noar, S. M., 130, 146, 221, 242
Nocon, M., 221
Nohlen, H., 15, 174
Nolan, J. M., 127
Norman, P., 95, 109, 113, 115–116, 123, 130–131
Norman, R., 205
Norris, M. E., 84
North, D., 106
Nosek, B. A., 9
Notani, A. S., 67, 103
Ntumy, R., 219–220
Nupponen, R., 139
Nussbaum, A. D., 262

Oaten, M., 233
Oats, R. G., 111
Oberle, D., 78
Obermiller, C., 130
O’Carroll, R. E., 119, 233
Odenthal, G., 114
Oenema, A., 116
Oettingen, G., 95
Offermans, N., 118
Oh, H. J., 103, 104
Ohanian, R., 190, 207
O’Hara, B. S., 195
Ohman, S., 203
Ojala, M., 97
O’Keefe, D. J., xviii, 10, 18, 34, 54, 64, 69–70, 79, 93, 147, 160,
165–166, 184, 186–187, 191, 193, 196, 211, 215–216, 222–227, 233,
236–237, 240–245, 247, 250, 253, 264
O’Keefe, G. J., 211
Okonsky, J., 212
Okun, M. A., 253–254
O’Leary, A., 220
Olofsson, A., 203
Olson, J. M., 35, 39, 48–51, 181, 235, 250, 258
Olson, P., 196
Olson, R., 221
Omoto, A. M., 130
O’Neal, K. K., 261, 267
Opdenacker, J., 221
Orbell, S., 54, 112, 114, 118–119, 130–131, 222, 229, 246
O’Reilly, K., 212
Orfgen, T., 157, 163
Orlich, A., 95
Orrego, V., 232
Orth, B., 106
Osgood, C. E., 5, 76
O’Sullivan, B., 61, 127
Oswald, F. L., 9
Otero-Sabogal, R., 116, 253
Otto, S., 64, 67, 74
Ouellette, J. A., 112, 115
Oxley, Z. M., 70
Oxman, A. D., 244

Packer, D. J., 9
Packer, M., 42, 44, 53
Paek, H.-J., 103–104
Palmgreen, P., 254
Pan, L. Y., 194
Pan, W., 261, 267
Papageorgis, D., 266
Pappas-DeLuca, K. A., 220
Parenteau, A., 196
Park, H. S., 122–123, 157, 163, 241
Parker, D., 109, 118–119
Parker, R., 111
Parrott, R. L., 116
Parschau, L., 114–115, 143
Parson, D. W., 17
Parsons, A., 243
Parvanta, S., 60
Pascual, A., 240, 249, 256
Pasha, N. H., 258, 266
Passafaro, P., 117
Patel, D., 187, 232
Patel, S., 212
Patnoe-Woodley, P., 219
Pattenden, J., 139
Patzer, G. L., 205–206
Pauker, S. G., 244
Paulson, R. M., 10, 12
Paulussen, T., 116
Paunesku, D., 212
Payne, J. W., 181
Pearce, W. B., 189, 191, 193, 210
Peay, M. Y., 61, 74
Pechmann, C., 193, 223
Peck, E., 249
Pedlar, C., 256
Penaloza, L. C., 266
Peng, W., 241
Pennings, J., 72
Perez, M., 88
Perez-Stable, E. J., 116, 253
Perloff, R. M., 237
Perron, J., 109
Pertl, M., 123, 223
Perugini, M., 113, 118
Peters, G.-J. Y., 231, 233, 248
Peters, M. D., 67
Peters, M. J., 221
Peters, R. G., 193
Peterson, M., 253
Petkova, Z., 218
Petosa, R., 260
Petraitis, J., 127
Petrova, P. K., 240
Pettigrew, J., 261
Petty, R., 119
Petty, R. E., 9–10, 15, 17–18, 43–44, 54, 67, 75, 148–155, 157–161,
163–165, 168–169, 171–175, 181, 191, 195–197, 204, 208, 210–211,
255, 259–260, 262, 267
Pfau, M. W., 186, 258, 266
Phillips, A. P., 186
Phillips, C., 260
Phillips, W. A., 189, 193, 210
Piccinin, A. M., 261
Pichot, N., 212
Piercy, L., 217
Pierro, A., 82, 97, 131, 166, 168
Pieters, R., 82
Pietersma, S., 262
Pietrantoni, L., 217
Pinckert, S., 250
Pinkleton, B. E., 207
Piotrowski, J. T., 117
Pitts, S. R., 242
Pizarro, J., 174, 226
Plane, M. B., 221
Plessner, H., 66
Plotnikoff, R. C., 60, 116
Plummer, F. A., 221
Poehlman, T. A., 9
Pollard, K., 220
Polonec, L. D., 108
Polyorat, K., 217
Popova, L., 232, 249
Poran, A., 219, 257
Pornpitakpan, C., 197
Porticella, N., 60, 218
Porzig-Drummond, R., 233
Posavac, E. J., 204
Potts, K. A., 256
Povey, R., 10, 109, 118
Powers, P., 17
Powers, T., 114
Prabhakar, P., 41
Prapavessis, H., 111
Prati, G., 217
Pratkanis, A. R., 46, 55, 181, 237
Praxmarer, S., 205
Preacher, K. J., 185
Preisler, R. M., 155, 159–160, 172, 181
Preiss, R. W., 89, 96, 241
Prelec, D., 182
Prentice, D. A., 38
Prentice-Dunn, S., 137, 228–229, 242, 246
Presnell, K., 88
Press, A. N., 64, 70
Preston, M., 234, 250
Prestwich, A., 111, 113, 115
Pretty, G. M., 103
Price, L. L., 204
Priester, J. R., 67, 148, 161, 172, 210
Primack, B. A., 241
Prince, M. A., 107
Prislin, R., 10, 262
Pritt, E., 232
Prochaska, J. O., 132–135, 144–145
Pronin, E., 226
Prothero, A., 256
Pruyn, A. T. H., 158
Pryor, B., 265
Pryzbylinski, J., 181
Puckett, J. M., 160
Pyszczynski, T., 181

Queller, S., 204
Quick, B. L., 187, 233, 254, 256, 264
Quine, L., 130
Quinlan, K. B., 138
Quinlan, M. R., 233
Quinn, J. M., 157, 259–260, 265–267
Qureshi, S., 139

Raaijmakers, J. G. W., 186
Raats, M. M., 121
Radecki, C. M., 72, 113
Rademaker, A. W., 224, 243
Rae, G., 104
Raghubir, P., 89
Rains, S. A., 256, 258, 265–266
Rakowski, W., 134
Ramirez, A., 88
Randall, D. M., 130
Rasanen, M., 103
Ratchford, B., 41
Ratcliff, C. D., 240
Rauner, S., 249
Rea, C., 260
Read, S. J., 93
Reading, A. E., 110, 128
Real, K., 232
Reardon, R., 260
Redding, C. A., 132–135, 144–145
Reed, M., 249
Reed, M. B., 262
Reeve, A., 2
Regan, D. T., 11
Reger, B., 117
Reichert, T., 186
Reid, A. E., 107
Reid, J. C., 114
Reidy, J. G., 240
Reilly, S., 196
Reimer, T., 210
Reinard, J. C., 191
Reinhart, A. M., 218, 256
Reis, H. T., 97
Reiter, P. L., 244
Remme, L., 114
Rempel, J. K., 67
Rendon, T., 122
Renes, R. J., 219–220
Resnicow, K., 253–254
Reubsaet, A., 115
Reuter, T., 114, 115
Reynolds, G. L., 139
Rhine, R. J., 158
Rhoads, K., 108
Rhodes, N., 15, 172, 254, 264
Rhodes, R. E., 60, 75, 113, 116, 123
Rich, M., 241
Richard, R., 14, 118–119
Richardes, D., 220
Richardson, C. R., 223
Richert, J., 114–115, 139, 143
Richter, T., 217, 219
Richterkessing, J. L., 237
Ricketts, M., 218
Ridge, R. D., 39, 42, 46–47
Rieh, S. Y., 190
Riemsma, R. P., 139
Riesz, P. C., 213
Rietveld, T., 186
Rimal, R. N., 232
Ringwalt, C. L., 261, 267
Rise, J., 120, 130–131
Risen, J. L., 81, 95
Rittle, R. H., 235
Ritvo, P., 221
Rivers, J. A., 88
Rivis, A., 103, 112, 116–117, 127, 129, 131
Robertson, C. T., 192
Robertson, K., 155
Robins, D., 208
Robinson, J. K., 111
Robinson, N. G., 103
Rodewald, L., 221
Rodgers, H. L., Jr., 25
Rodgers, W. M., 123
Rodriguez, R., 151, 172
Roehrig, M., 88
Roels, T. H., 220
Roetzer, L. M., 241
Rogers, R. W., 69, 74, 166, 205–206, 228–229, 242, 246
Rogers, T., 114, 116
Rohde, P., 88
Rokeach, M., 50
Rolfe, T., 103
Rollnick, S., 256
Romero, A. A., 260
Ronis, D. L., 181
Rose, S. L., 192
Roseman, M., 115, 240
Rosen, B., 220
Rosen, C. S., 133
Rosen, J., 174
Rosen, S., 84
Rosenberg, M. J., 72, 74
Rosenbloom, D., 134
Rosenbloom, S. T., 221
Rosen-Brown, A., 240
Rosenzweig, E., 82
Roskos-Ewoldsen, D. R., 12, 198, 262–263
Rosnow, R. L., 258
Ross, K. M., 192
Rossi, J. S., 133–135
Rossi, S. R., 134
Rossiter, J. R., 232
Rothbart, M., 205–206
Rothman, A. J., 140, 226–227
Rothmund, T., 9
Rothstein, H. R., 96, 130, 182–183, 250, 264–265, 267
Rouner, D., 210, 219–220, 241
Royzman, E. B., 74, 225
Rozelle, R. M., 216
Rozin, P., 74, 225
Rozolis, J., 249
Ruberg, J. L., 241
Rubin, D. L., 186
Rubin, Y. S., 221
Ruch, R. S., 204
Rucker, D. D., 75, 174, 255
Ruder, M., 197
Ruiter, R. A. C., 69, 226, 229, 231–233, 244, 248, 254, 267
Ruiz, S., 75
Rusanen, M., 203
Russell, C., 108
Rutherford, J., 26
Rutter, D. R., 61, 72, 127
Ryerson, W. N., 219

Saba, A., 121
Sabogal, F., 116, 253
Sadatsafavi, M., 221
Sagarin, B. J., 81, 108
Sailors, J. J., 42
Saine, T. J., 197
Sakaki, H., 26
Salgueiro, M. F., 122, 261
Sallis, J. F., 261
Salovey, P., 174, 226–227, 254
Salter, N., 114
Sampson, E. E., 198
Sampson, J., 230
Samuelson, B. M., 265
Sanaktekin, O. H., 254
Sandberg, T., 118–119, 127
Sanders, D. L., 221
Sanders, J., 218
Sanders-Thompson, V., 253
Sandfort, T. G. M., 115
Sandman, P. M., 138, 142
Sarge, M. A., 223, 232
Sarma, K. M., 247
Sarnoff, I., 40
Sarup, G., 25
Saucier, D., 254
Sauer, P. L., 207
Saunders, L., 256
Savage, E., 110, 128
Sawyer, A. G., 240
Sayeed, S., 106, 116–117, 131
Scarberry, N. C., 240
Schaalma, H. P., 69, 116
Schepers, J., 112, 121
Scher, S. J., 97
Schertzer, S. M. B., 253
Scheufele, D. A., 70
Schlehofer, M. M., 233
Schmidt, J., 196
Schmitt, B., 54, 222
Schneider, I. K., 15, 174
Schneider, S. L., 244
Schneider, T. R., 110, 174, 226
Schneier, W. L., 152
Scholl, S. M., 262
Schönbach, P., 81
Schoormans, J. P. L., 218
Schott, C., 232
Schreibman, M., 220
Schrijnemakers, J. M. C., 186
Schroeder, D. A., 249
Schulenberg, J., 260–261
Schulman, R. S., 261
Schultz, A., 253–254
Schultz, P. W., 10, 122, 127
Schulz, P. J., 241
Schulz-Hardt, S., 81
Schumann, D. W., 151–153, 157, 159
Schünemann, H., 244
Schüz, B., 139, 143, 262
Schüz, N., 139, 143, 262
Schwartz, S. H., 50
Schwarz, N., 5, 9, 150, 152, 158, 255
Schwarzer, R., 114–115, 137, 139, 142–143
Schweitzer, D., 189
Schwenk, G., 112
Scileppi, J. A., 158, 211
Scott, M. D., 259
Sears, D. O., 84, 153, 260
See, Y. H. M., 172
Seeley, S., 220
Segall, A., 50
Segan, C. J., 133
Segar, M. L., 223
Seibel, C. A., 256
Seibring, M., 108
Seifert, A. L., 121
Seignourel, P. J., 158, 196
Seiter, J. S., 2, 17
Seo, K., 233, 255
Sereno, K. K., 200, 203
Sestir, M., 218, 219
Settle, J. E., 108
Severance, L. J., 158
Severson, H., 261
Sexton, J., 12
Shabalala, A., 219
Shaeffer, E. M., 240
Shaffer, D. R., 42
Shaikh, A. R., 254
Shakarchi, R. J., 255, 265
Shamblen, S. R., 267
Shani, Y., 81
Shanteau, J., 145, 218
Shantzis, C., 261
Shapiro, M. A., 60, 218
Shapiro-Luft, D., 60
Sharan, M., 220
Sharot, T., 81
Sharp, J., 62, 72
Shavitt, S., 35, 37–41, 43, 46, 50, 53–54, 70
Shaw, A. S., 249
Shaw, H., 88
Shea, S., 221
Sheeran, P., 68, 74, 95, 103, 109, 112–119, 123, 127, 129–131, 216,
229, 246, 262
Shelton, M., 108
Shen, F., 226, 241
Shen, L., 233, 237, 247, 255–256, 264
Shepherd, G. J., 11
Shepherd, J. E., 103
Shepherd, R., 10, 65, 109, 121, 190
Sheppard, B. H., 103, 112
Shepperd, J. A., 262
Sherif, C. W., 19–20, 22–25, 30, 33–34, 215
Sherif, M., 19–20, 22–25, 29–30, 33–34, 215
Sherman, D. K., 93, 181, 227, 262–263
Sherman, R. T., 240
Sherrell, D. L., 195, 211
Shi, F., 81, 95
Shi, Y., 227
Shillington, A., 108
Shiota, M. N., 255
Shiv, B., 181
Shongwe, T., 219
Shulman, H., 152, 255
Shuptrine, F. K., 42
Sia, C.-L., 208
Sicilia, M., 75
Siebler, F., 166
Siegel, J. T., 186
Siegrist, M., 190
Siemer, M., 186
Siero, F. W., 25, 111, 246
Sieverding, M., 103, 126
Sigurdsson, S. O., 221
Silberberg, A. R., 194
Silk, K. J., 116
Silva, S. A., 122, 261
Silvera, D. H., 253
Silvera, S. A. N., 174, 254
Silverthorne, C. P., 159
Silvia, P. J., 212, 256, 265
Simmonds, L. V., 262
Simon, L., 97
Simoni, J., 212
Simons, H. W., 198, 200–201
Simons-Morton, B. G., 110
Simpson, P., 220
Sims, L., 226
Simsekoglu, O., 118
Sinclair, R. C., 171, 208, 255
Singh, A., 116
Singhal, A., 219
Sirgy, M. J., 38
Sirsi, A. K., 192
Sivaraman, A., 257
Six, B., 10–11, 112
Sjoberg, L., 11
Skalski, P. D., 194, 212
Sklar, K. B., 181
Skowronski, J. J., 74, 81, 145
Slade, P., 114
Slama, M., 42, 50
Slater, M. D., 30, 163, 171, 186, 207, 210, 219–220, 241
Slaunwhite, J. M., 108
Slemmer, J. A., 240
Slocum, J. W., Jr., 118
Smerecnik, C. M. R., 229
Smit, E. G., 207
Smith, A., 233
Smith, D. C., 261
Smith, J. K., 95
Smith, J. R., 10, 97, 103, 116, 120
Smith, L., 109
Smith, L. M., 88
Smith, M. A., 159
Smith, M. B., 37, 53, 55
Smith, M. C., 174
Smith, M. J., 26
Smith, N., 120
Smith, R. A., 34, 219, 220
Smith, R. E., 11, 193
Smith, R. J., 95
Smith, S., 122–123, 194
Smith, S. M., 10, 84, 108, 152, 154
Smith, S. W., 29, 108, 122, 232
Smith-McLallen, A., 69, 165, 166
Smucker, W. D., 230
Snelders, D., 218
Sniehotta, F. F., 111, 143
Snyder, J., 221
Snyder, M., 10, 12–13, 37, 39, 42–44, 46–47, 49, 53, 205–206, 253
Soldat, A. S., 171, 208, 255
Soley, L. C., 160
Solomon, S., 181
Sorell, D. M., 256
Sorrentino, R. M., 181, 229
Southwell, B. G., 233
Sowden, A. J., 139
Sox, H. C., Jr., 244
Spangenberg, E. R., 95, 130
Sparks, P., 10, 65, 72, 103–104, 106, 109, 262
Spears, R., 257
Speelman, C., 116
Spencer, C. P., 106, 114, 127
Spencer, F., 241
Spencer, S. J., 36, 185
Spencer-Bowdage, S., 256
Sperati, F., 244
Spiegel, S., 166–168
Spoor, S., 88
Spreng, R. A., 151
Spring, B., 224, 243
Sprott, D. E., 95
Spruyt, A., 8
St. Lawrence, J. S., 107, 220
Staats, H., 120, 131
Stagner, B., 255
Stambush, M. A., 88
Stangor, C., 174
Stansbury, M., 208
Stanton, T., 221
Stapel, D. A., 267
Stapleton, J., 111
Stark, E., 38
Stayman, D., 67
Stayman, D. M., 207
Stead, M., 104, 117
Steadman, L., 61, 72, 127, 130
Steblay, N. M., 234, 250
Stebnitz, S., 196
Steele, C. M., 93, 262
Steele, L., 21
Steele, R. G., 256
Steenhaut, S., 118
Steffen, V. J., 11
Steiner, G. A., 142
Steinfatt, T. M., 65, 265
Stephenson, M. T., 187, 233, 254
Sternthal, B., 197, 211, 212
Stevenson, R., 233
Stevenson, Y., 107
Steward, W. T., 174, 226
Stewart, C., 108
Stewart, R., 31, 212
Stewart-Knox, B., 104
Stice, E., 88, 93
Stiff, J. B., 237
Stillwell, A. M., 93
Stoltenberg, C. D., 171
Stone, J., 13, 78, 90–91, 93
Stone, K., 220
Storey, D., 241
Stormer, S., 88
Strack, F., 95, 255
Stradling, S. G., 109, 118–119
Strathman, A. J., 148, 222
Straughan, R. D., 192
Strauss, A., 221
Strecher, V. J., 254
Strickland, B., 159
Stroebe, W., 72, 115, 142, 229–230, 232, 242, 247
Stroud, N. J., 84–85
Struckman-Johnson, C., 215
Struckman-Johnson, D., 215
Struttmann, T., 217
Strutton, D., 207
Studts, J. L., 241
Stukas, A. A., 39, 42, 47
Suchner, R. W., 25
Suci, G. J., 5
Sunar, D., 254
Sundar, S. S., 208
Sundie, J. M., 75
Sunnafrank, M., 201
Supphellen, M., 57, 121
Sussman, S., 261
Sutton, S., 104
Sutton, S. R., 74, 103, 109, 116, 120–121, 133–134, 139–140, 142–143, 230
Swartz, T. A., 202–203
Swasy, J. L., 181
Sweat, M., 212
Swedroe, M., 108
Sweeney, A. M., 262
Swinyard, W. R., 11
Sykes, B., 111
Syme, G. J., 91, 106, 127
Symons, C. S., 25
Szabo, E. A., 258
Szilagyi, P., 221
Szybillo, G. J., 259

Tabb, K. M., 261
Tagg, S., 104, 117
Talbot, T. R., 221
Talibudeen, L., 104, 107
Tal-Or, N., 219, 257
Tam, S. F., 111
Tamborini, R., 194
Tan, A., 60
Tanjasiri, S. P., 220
Tannenbaum, P. H., 5, 76
Tao, C.-C., 184, 247
Taylor, L., 133
Taylor, N., 111
Taylor, N. J., 103, 106, 116–117, 127–128
Taylor, S. E., 227
Taylor, S. F., 123
Taylor, V. M., 106
Teel, J. E., 42
Teffera, N., 219
Teige-Mocigemba, S., 8
Telaak, K., 25
Telesca, C., 42, 44
Tellini, S., 40, 42
Terrenato, I., 244
Terry, D. J., 10, 97, 103, 120, 130–131
Terwel, B. W., 195, 211
Tetlock, P. E., 9
Teufel, J., 260
Tewksbury, D., 70
Thabane, L., 221
Thau, S., 196
Theodorakis, Y., 103, 109
Thibodeau, R., 91
Thimons, E. D., 232
Thomas, E., 233
Thomas, J. C., 240
Thomas, K., 123, 223
Thompson, B., 106
Thompson, D., 241
Thompson, E. P., 166–169, 173, 175
Thompson, J. K., 88
Thompson, R., 219, 261
Thompson, S. C., 233
Thomsen, C. J., 30, 171
Thuen, F., 130
Thurstone, L. L., 7
Thyagaraj, S., 260
Till, B. D., 213
Ting, S., 186
Tittler, B. I., 25
Tobler, N. S., 261
Todorov, A., 149
Tokunaga, R. S., 218–219, 241
Tollefson, M., 196
Tolsma, D., 254
Tom, S., Jr., 88, 89
Tormala, Z. L., 15, 18, 75, 161, 173–174, 191, 196–197, 208, 211–212, 255
Törn, F., 213
Tost, L. P., 256
Towles-Schwen, T., 12
Trafimow, D., 68, 74, 103, 109, 123, 127, 130
Traylor, M. B., 200
Trembly, G., 216
Trompeta, J., 212
Trope, Y., 149
Trost, M. R., 173
Trudeau, L., 218
Trumbo, C. W., 152, 172
Tryburcy, M., 111
Tseng, D. S., 221
Tuah, N. A. A., 139
Tufte, E. R., xvii
Tukachinsky, R., 218–219, 241
Tung, P. T., 97, 120
Tuppen, C. J. S., 189–190
Turner, G. E., 261
Turner, J. A., 26
Turner, M. M., 152, 233, 249, 255–256
Turrisi, R., 111
Tusing, K. J., 237, 266
Tversky, A., 244
Twyman, M., 190
Tykocinski, O. E., 9

Ubel, P. A., 217
Udall, A., 154
Uhlmann, E. L., 9
Ullen, H., 103
Underhill, J. C., 233
Updegraff, J. A., 181, 223, 226–227, 244–245
Uribe, R., 217, 241
Usdin, S., 219
Ussher, M., 103
Uzzell, D., 97, 103, 131

Vakratsas, D., 142
Valdez, R. B., 111
Valente, T. W., 219–220
Valentine, J. C., 182
Valentino, N. A., 85
Valois, P., 109
van ’t Riet, J., 226, 233, 244, 254, 267
van Assema, P., 139
van Baak, M. A., 221
Van Bavel, J. J., 9
van den Berg, P., 88
van den Hende, E. A., 218
van den Putte, B., 116, 121, 241, 244
van der Linde, L. A. J. G., 267
van der Pligt, J., 14–15, 57, 62, 70, 72, 118–119, 174
van Dijk, A., 182
van Dijk, L., 221
van Enschot-van Dijk, R., 165
van Griensven, G. J. P., 115
van Harreveld, F., 15, 62, 70, 72, 174
van Herpen, E., 122
van Hout, R., 186
van Ittersum, K., 72
Van Kenhove, P., 118
Van Koningsbruggen, G. M., 262–263
van Laer, T., 218–219, 241
van Leeuwen, L., 219
Van Loo, M. F., 130
van Mechelen, W., 116
van Meurs, L., 207
Van Osch, L., 115
Van Overwalle, F., 93
van Trijp, H. C. M., 72, 122
van Weert, J. C. M., 221
van Woerkum, C., 220
Vann, J., 221
Vardon, P., 103
Vassallo, M., 121
Vaughn, L. A., 218–220
Vaught, C., 187
Velicer, W. F., 133–134, 139
Venkatesh, V., 121
Venkatraman, M. P., 181
Verplanken, B., 151, 233
Vervloet, M., 221
Vet, R., 107
Viachopoulos, S. P., 103
Villagran, P. D., 157
Vincent, J. E., 235–236
Vincus, A. A., 267
Vinkers, C. D. W., 114
Visconti, L. M., 218–219, 241
Visser, P. S., 15, 157, 171, 174
Vist, G. E., 244
Vitoria, P. D., 122, 261
Voas, R. B., 108
Vohs, K. D., 158, 250
von Hippel, W., 149
Vonkeman, C., 171, 255
Voss-Humke, A. M., 121

Waalen, J., 221
Wachtler, J., 199
Wagner, A. K., 81
Wagner, W., 200–201
Wakefield, M., 218–219
Waks, L., 66
Walker, A., 139
Wall, A.-M., 123
Wallace, D. S., 10, 12
Walster, E., 82, 192, 204
Walters, L. H., 186
Walther, E., 95
Walther, J. B., 95, 208
Wan, C. S., 88
Wang, X., 48, 53, 121
Wang, Z., 208
Wansink, B., 72
Warburton, J., 120, 131
Ward, C. D., 210
Wareham, N. J., 74, 104, 121
Warnecke, R. B., 85
Warner, L., 220
Warren, W. L., 152
Warshaw, P. R., 103, 112, 121
Wasilevich, E., 232
Wathen, C. N., 211
Watson, A. J., 221
Watson, C., 197
Watt, I. S., 139
Watt, S. E., 9, 35
Wearing, A. J., 91
Webb, T. L., 114–115
Webel, A. R., 212
Weber, R., 231
Webster, R., 254
Wechsler, H., 108
Wegener, D. T., 15, 18, 26, 43–44, 150, 152, 157, 160–161, 164, 168,
171–175, 197, 208
Weigel, R. H., 11
Weil, R., 95
Weilbacher, W. M., 142
Weinberger, M. G., 211
Weiner, J., 116
Weiner, J. L., 223
Weinerth, T., 152
Weinstein, N. D., 64, 130, 138, 140, 142
Weisenberg, M., 199
Weissman, W., 261
Wells, G. L., 154–155, 181
Wells, J., 111
Wenzel, M., 108
Werrij, M. Q., 226, 244
Wessel, E., 194
West, M. D., 190
West, R., 140
West, S. K., 261, 267
Westermann, C. Y. K., 157, 163
Westfall, J., 186
Wetzels, M., 112, 121, 218–219, 241
Whately, R., 147
Wheeler, D., 235–236
Wheeler, S. C., 43–44, 54, 168–169, 175
Whitaker, D. J., 113
White, G. L., 80
White, K., 257
White, K. M., 97, 103
White, M., 139
White, R. W., 37, 53, 55
White, T. L., 227
Whitehead, J. L., Jr., 190–191, 210
Whitelaw, S., 140
Whittaker, J. O., 26
Whittier, D. K., 220
Wicherts, J. M., 182
Wicker, A. W., 17
Wicklund, R. A., 81
Widgery, R. N., 204
Wieber, F., 114
Wiedemann, A. U., 114, 143
Wiegand, A. W., 91
Wiener, J. L., 195
Wildey, M. B., 261
Wilke, H. A. M., 120, 131
Wilkin, H. A., 219–220
Williams, E. A., 221
Williams, P., 95, 253–254
Williamson, P. R., xviii, 96
Williams-Piehota, P., 174, 254
Willich, S. N., 221
Willms, D., 131
Wilmot, W. W., 22, 30, 31
Wilson, C. P., 111
Wilson, E. J., 195, 211
Wilson, M., 50
Wilson, S. R., 232
Wilson, T., 72
Windschitl, P. D., 181
Winslow, M. P., 13, 90–91
Winter, P. L., 108
Winterbottom, A., 220
Winzelberg, A., 203
Wise, M. E., 241
Witte, K., 139, 145, 187, 219–220, 229–233, 242, 246–249
Wittenbrink, B., 5, 9
Wogalter, M. S., 69
Wohn, D. Y., 95
Wojcieszak, M. E., 84–85
Wolff, J. A., 130
Wolfs, J., 103
Wong, N. C. H., 232
Wong, S., 240
Wong, Z. S.-Y., 109, 123
Wood, M. L. M., 266
Wood, M. M., 139
Wood, W., 17, 112, 115, 155, 157, 159–160, 172, 181, 190, 192–193,
254–255, 259–260, 262, 264–267
Woodruff, S. I., 261
Woodside, A. G., 95, 200–201
Wooley, S., 241
Worchel, S., 158, 240
Worth, L. T., 255
Wreggit, S. S., 110
Wright, A., 114
Wu, E. C., 249
Wu, R., 256
Wyer, N., 204
Wyer, R. S., Jr., 66, 76, 155, 217, 257
Wynn, S. R., 260–261
Xie, X., 66
Xu, A. J., 257

Yang, V. S.-H., 266
Yard, S., 212
Yarsevich, J., 254
Yates, S., 154, 159
Ybarra, O., 130
Yew, W. W., 111
Yi, M. Y., 208
Yi, Y., 89, 130
Yoo, J., 89
Yoon, J. J., 208
Young, A. M., 256
Young, R. M., 103
Young, T. J., 189, 259
Yu, X., 81
Yzer, M. C., 60, 104, 111, 116–117, 123, 131, 233, 246

Zani, B., 217
Zanna, M. P., 38–39, 46, 67, 72, 74, 89, 91, 115, 185, 219
Zebregs, S., 241
Zeelenberg, M., 81–82
Zehr, C. E., 115
Zenilman, J., 138
Zhang, G., 254
Zhang, X., 253
Zhao, X., 207, 262
Zhao, X. S., 185
Ziegelmann, J. P., 114, 139
Ziegler, R., 44, 152, 171, 208, 255
Ziel, F. H., 221
Zikmund-Fisher, B. J., 217, 223
Zimbardo, P. G., 199
Zimet, G. D., 244
Zindler, D., 196
Zinman, J., 221
Zirbel, C., 196
Ziv, S., 257
Zubric, S. J., 258, 266
Zucker, R. A., 260–261
Zuckerman, C., 187, 232
Zuckerman, M., 40, 42, 254
Zwarun, L., 218

Subject Index

Accessibility of attitude, 10
Advertising, 193, 207, 224
Advocated position
    ambiguity of, 25, 27–28
    counterattitudinal vs. proattitudinal, 156, 197, 215
    discrepancy of, 25–26, 34(n5)
    expected vs. unexpected, 192–193
    influence on credibility, 192–193
    influence on elaboration valence, 156
Affect
    anticipated, 13, 118–120, 233
    as attitude basis, 67–68, 74(n13), 75(n20)
Age, 253
Ambiguity (of position advocated), 25, 27–28
Ambivalence, 10, 174(n16), 226
Anger, 233, 256
Anticipated feelings, 13, 118–120, 233
Appeals
    affective vs. cognitive, 75(n20)
    consequence-based, 165–166, 221–223
    fear, 228–233
    function-matched, 41–44, 49–51, 54(n6)
    gain-framed and loss-framed, 225–228
    normative, 106–108
    one-sided and two-sided, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
    scarcity, 75(n18)
    threat, 228–233
Approach/avoidance motivation (BAS/BIS), 54(n5), 227
Arguments
    consequence-based, 165, 166, 174(n19), 221–223
    discussion of opposing, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
    gain-framed and loss-framed, 225–228
    number of, 159
    strength (quality) of, 156–157, 163–166
Assimilation and contrast effects, 24–25, 27, 33(n3), 34(n4, nn6–7), 215
Attitude
    accessibility, 10
    ambivalence, 10, 174(n16), 226
    bases of, 56–59, 66–68
    certainty (confidence), 174(n16)
    concept of, 4–5
    functions of, 35–38
    measurement, 5–9
    relation to behavior, 9–12
    strength, 15, 18(n12), 171(n3)
    toward behavior. See Attitude toward the behavior
Attitude toward the behavior
    assessment of, 99
    determinants of, 105
    influencing, 105
    relation to norms, 112
Attitude-behavior consistency
    factors affecting, 9–12
    influencing, 12–14
Attitude-toward-the-ad, 67
Attitudinal similarity, 201
Attractiveness, 160, 204–206
Attribute importance, 61–62, 72(n3), 78
Audience. See Receiver
Audience adaptation, xv–xvi
    elaboration likelihood model (ELM) and, 162, 174(n17)
    functional attitude approaches and, 41–44, 49–51
    individual differences and, 252–255
    reasoned action theory (RAT) and, 117
    social judgment theory and, 29
    stage models and, 134–141
    summative model of attitude and, 59–61
Audience reaction (as peripheral cue), 159
Averaging model of attitude, 65–66
Aversive consequences and induced compliance, 97(n16)

Balance theory, 76, 95(n1)
Behavior
    relation to attitude, 9–12
    relation to intention, 112–116, 129(n21), 130(n22)
Belief
    content, 62–63
    evaluation, 57, 69, 75(n18), 104. See also Consequence desirability
    importance, 61–62, 72(n3), 78
    lists, 63–64
    salience, 56, 61, 69–71
    strength (likelihood), 57, 63–64, 68–69, 104, 127(n8). See also Consequence likelihood
Belief-based models of attitude
    description of, 56–59
    implications for persuasion, 59–61, 68–71
    sufficiency of, 66–68
Bias, knowledge and reporting, 190, 192
Bipolar scale scoring, 64, 73(nn8–9)

Central route to persuasion, 150
Character identification, 218
Choice effects in induced compliance, 89, 96(n12), 200
Citation of evidence sources, 191
Cognitive bases of attitude, 56–59
Cognitive dissonance
    and decision making, 78–83
    defined, 77
    factors influencing, 77–78
    and hypocrisy induction, 90–92
    and induced compliance, 85–90, 96(nn12–13), 97(n16), 199
    means of reducing, 78
    and selective exposure, 83–85
Communicator
    credibility, 188–198
    ethnicity, 206–207
    liking, 198–200
    physical attractiveness, 160, 204–206
    self-interest, 192
    similarity to receiver, 200–204
Competence (credibility dimension), 189
Conclusion omission, 214–216
Congruity theory, 76, 95(n1)
Consensus heuristic, 159
Consequence desirability, 165, 174(n19), 221–223. See also Belief, evaluation
Consequence likelihood, 166. See also Belief, strength (likelihood)
Consideration of future consequences (CFC), 54(n5), 222, 264(n1)
Contrast and assimilation effects, 24–25, 27, 33(n3), 34(n4, nn6–7), 215
Correspondence
    of attitude and behavior measures, 10–11, 17
    of intention and behavior measures, 113
Counterarguing, 256, 257, 259–260. See also Elaboration
Counterattitudinal vs. proattitudinal messages, 156, 197, 215
Credibility
    dimensions of, 188–190
    effects of, 194–198
    factors affecting, 190–194
    heuristic, 158
    relationship to similarity, 202–203
Cultural background, 253
Cultural truisms, 266(nn17–18)

DARE, 261
Decisional balance, 134–136
Decision-making and dissonance, 78–83
Defensive avoidance, 261
Delivery
    dialect, 202
    nonfluencies, 191
Descriptive norm (DN), 29, 261
    assessment of, 100
    determinants of, 107–108
    influencing, 108
    relation to attitude, 112
Dialect, 202
Direct experience, 11–12
Discrepancy, 25–26, 34(n5)
Disgust, 233
Dissonance
    and decision making, 78–83
    defined, 77
    factors influencing, 77–78
    and hypocrisy induction, 90–92
    and induced compliance, 85–90, 96(nn12–13), 97(n16), 199
    means of reducing, 78
    and selective exposure, 83–85
Distraction, 154–155
Door-in-the-face (DITF) strategy, 235–237
    explanations, 236–237
    moderating factors, 236
Dual-process models, 149

Ego-involvement
    concept of, 22, 30–31, 153
    confounding in research, 29–30
    measures of, 23–24, 31, 34
    relationship to judgmental latitudes, 22–23
Elaboration
    ability, 154–155
    assessment of, 149
    continuum, 149, 150
    definition of, 149
    factors affecting amount of, 152–156, 197, 255
    factors affecting valence of, 156–157, 197
    motivation, 152–154
Elaboration likelihood model (ELM), 148–175
Emotions
    anger, 233, 256
    anticipated, 13, 118–120, 233
    fear, 229–232
    disgust, 233
    guilt, 93, 97(n16), 233, 237
    regret, 82–83, 118
Entertainment-education (EE), 219–220
Ethnicity of communicator, 206–207
Even-a-penny-helps strategy, 249(n50)
Evidence, citation of sources of, 191
Examples vs. statistics, 241(n9)
Expectancy confirmation and disconfirmation, 192–193, 211(n3)
Expectancy-value models of attitude, 72(n1)
Experimental design, 176–178
Expertise (credibility dimension), 189
Explicit measures of attitude, 5–7
Explicit planning of behavior, 114–115
Explicit vs. implicit conclusions, 214–216
Extended parallel process model (EPPM), 231–232

Fairness norms, 84
Familiarity of topic, 10, 155
Fear, 229–232
Fear appeals, 228–233
Follow-up persuasive efforts, 82
Foot-in-the-door (FITD) strategy, 233–235
    explanation of, 234–235
    moderating factors, 234
Forewarning, 157, 259–260
Formative basis of attitude, 11–12
Framing
    gain vs. loss, 225–228
    issue, 70
Function-matched appeals, 41–44, 49–51, 54(n6)
Functions of attitude
    assessing, 38–40, 46–48
    vs. functions of objects, 45–46
    influences on, 40–41
    matching appeals to, 41–44, 49–51, 54(n6)
    typologies of, 35–38, 44–45, 48–49

Gain-framed vs. loss-framed appeals, 225–228
Games, 241(n14)
General vs. specific recommendations, 216
Group membership, 203
Guilt, 93, 97(n16), 233, 237

Habit, 115–116
Health Action Process Approach (HAPA), 142
Heuristic principles, 157–159
Heuristic-systematic model, 149
Hierarchy-of-effects models, 142
Humor, 194
Hypocrisy, 13, 18(nn9–10)

Ideal credibility ratings, 212(n10)
Identification of source, timing of, 196
Identification with narrative characters, 218
Imagined behavior, 240(n6)
Immersion (in narratives), 218
Implementation intentions, 114, 216
Implicit measures of attitude, 8–9
Implicit vs. explicit conclusions, 214–216
Importance
    of beliefs, 61–62, 72(n3), 78
    of topic, 152–153, 163, 195, 199
Incentive effects in induced compliance, 85–87
Indirect experience, 11–12
Individual differences, 252–255
    approach/avoidance motivation (BAS/BIS), 54(n5), 227
    consideration of future consequences (CFC), 54(n5), 222, 264(n1)
    intelligence, 215, 253
    need for cognition (NFC), 153–154, 174(n17), 226
    regulatory focus, 54(n5), 227
    self-esteem, 253
    self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
    sensation-seeking, 254
Individualism-collectivism, 54, 222
Individualized belief lists, 63
Induced compliance, 85–90, 96(nn12–13), 97(n16), 199
    choice effects in, 89, 96(n12), 200
    incentive effects in, 85–87
Information exposure, influences on, 83–85
Information integration theory, 73(n10)
Information utility, 84, 96(n11)
Injunctive norm (IN)
    assessment of, 100
    determinants of, 105–106
    influencing, 106–107
    relation to attitude, 112
Inoculation, 257–259
Intelligence, 215, 253
Intention, relation to behavior, 112–116, 129(n21), 130(n22)
Involvement
    ego-involvement. See Ego-involvement
    personal relevance, 152–153, 163, 195, 199

Knowledge about topic, 10, 155
Knowledge bias, 190, 192

Latitudes, judgmental, 22
Legitimizing paltry contributions, 249(n50)
Length of message, 159, 160
Likert attitude scales, 7
Liking
    for the communicator, 193, 198–200
    heuristic, 158
    relationship to similarity, 201–202
Loss-framed vs. gain-framed appeals, 225–228
Low price offer, 88
Low-ball strategy, 249(n50)

Message characteristics, 214–251
Message tailoring. See Audience adaptation
Message variable definition, 163–165, 183–184, 187(nn9–10), 247(n39)
Meta-analysis, xviii, 186(n6), 187(n7)
Metacognitive states, 18(n11), 174(n16)
Modal belief lists, 57, 63
Modeling, 111
Mood, 157, 171(n4), 217, 226, 255
Moral norms, 120
Motivation to comply, 105
Motivational interviewing, 256
Multiple roles for variables, 160–162, 173(n16), 208–210, 217, 254
Multiple-act behavioral measures, 10
Multiple-message designs, 181

Narratives, 216–220
Need for cognition (NFC), 153–154, 174(n17), 226
Negativity bias, 74(n11), 225
Noncognitive bases of attitude, 66–68
Nonfluencies in delivery, 191
Nonrefutational two-sided messages, 223, 245(n32)
Normative beliefs, 105
Norms
    descriptive. See Descriptive norm
    fairness, 84
    injunctive. See Injunctive norm
    moral, 120
Nudges, 220–221
Null hypothesis significance testing, xviii
Number of arguments, 159

One-sided vs. two-sided messages, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
Optimal scaling, 73(n8)
Ordered Alternatives questionnaire, 20–21
Own Categories procedure, 23, 33(n2)

Paradigm case, 2
Peer-based interventions, 204
Perceived behavioral control, 216, 221
assessment of, 101
conceptualization of, 100, 123–124, 131(n33)
determinants of, 108–110
influencing, 110–111
relation to attitudes and norms, 102, 122–123, 129(n19)
relation to stages of change, 136–139
Peripheral cues, 150
Peripheral route to persuasion, 150
Persistence of persuasion, 151
Personal (moral) norms, 120
Personal relevance of topic, 152–153, 163, 195, 199
Personality characteristics, 252–255
approach/avoidance motivation (BAS/BIS), 54(n5), 227
consideration of future consequences (CFC), 54(n5), 222,
264(n1)
intelligence, 215, 253
need for cognition (NFC), 153–154, 174(n17), 226
regulatory focus, 54(n5), 227
self-esteem, 253
self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
sensation-seeking, 254
Persuasion, concept of, 2–4
Persuasive effects, assessing, 14–16, 18(n13), 185(n2)
Physical attractiveness, 160, 204–206
Planned behavior, theory of, 126(n1)

Planning of behavior, 114–115
Position advocated. See Advocated position
Postdecisional spreading of alternatives, 80
Prior knowledge (of topic), 10, 155
Proattitudinal vs. counterattitudinal messages, 156, 197, 215
Product trial, 11
Prompts, 220–221
Prospect theory, 244(n28)
Protection motivation theory (PMT), 228–229

Quality (strength) of arguments, 156–157, 163–166
Quantity of arguments, 159
Quasi-explicit measures of attitude, 7–8

Reactance, 255–257
Reasoned action theory (RAT)
determinants of intention, 99–102
influencing attitude toward the behavior, 104–105
influencing descriptive norms, 107–108
influencing injunctive norms, 105–107
influencing perceived behavioral control, 108–111
influencing relative weights, 111–112, 128(nn17–18)
Receiver factors, 252–267
Reciprocal concessions, 236
Recommendation specificity, 216
Refusal skills training, 260–261
Refutational inoculation treatments, 258
Refutational two-sided messages, 223, 265(n10), 266(n16)
Regret
anticipated, 118
postdecisional, 82–83
Regulatory focus, 54(n5), 227
Relevance
of attitude to behavior, 10, 12–13
of topic to receiver, 152–153, 163, 195, 199
Reminders, 220–221
Reporting bias, 190, 192
Resistance to persuasion from different persuasion routes, 151
Role models, 111
Routes to persuasion, 150–152

Salience of beliefs, 56, 61, 69–71
Scale scoring procedures, 64–65, 127(n10), 128(n11)
Scarcity appeals, 75(n18)
Selective exposure, 81, 83–85
Self-affirmation, 261–264
Self-affirmation theory, 93
Self-efficacy, 216, 221, 226
and stage-matching, 136–139
See also Perceived behavioral control
Self-esteem, 253
Self-identity, 131(n30)
Self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
Self-perception, 234
Self-prophecy effects, 95(n3)
Semantic differential evaluative scales, 5
Sensation-seeking, 254
Sequential-request strategies, 233–237
Sidedness (of messages), 79, 193, 211(n4), 223–225, 265(n10, n12),
266(n16)
Similarity of communicator and receiver
relationship to credibility, 202–203
relationship to liking, 201–202
Single-act behavioral measures, 10
Single-item attitude measures, 6
Single-message designs, 178–181
Social judgment theory, 19–34
Source
credibility, 188–198
ethnicity, 206–207
liking, 198–200
physical attractiveness, 160, 204–206
self-interest, 192
similarity to receiver, 200–204
Specific vs. general recommendations, 216
Stage models, 132–147
distinctive claims of, 140–142
stage assessment, 139
transtheoretical model (TTM), 132–140
Stage-matching
and decisional balance, 134–136
and self-efficacy, 136–139

vs. state-matching, 141
Standardized belief lists, 63
Statistical significance testing, xviii
Statistics vs. examples, 241(n9)
Stories, 216–220
Strength
argument, 156–157, 163–166
attitude, 15, 18(n12), 171(n3)
belief, 57, 63–64, 68–69, 104, 127(n8). See also Consequence
likelihood
Subjective norm. See Injunctive norm
Summative model of attitude, 56–59
Supportive treatments (for creating resistance), 258

Tailoring (of messages). See Audience adaptation
That’s-not-all strategy, 249(n50)
Theory of planned behavior, 126(n1)
Threat appeals, 228–233
defining variations of, 247(n39)
effects of, 230–231
explanations of, 231–233
Thurstone attitude scales, 7
Time
decay of persuasive effects, 151
follow-up persuasive efforts, 82
interval between DITF requests, 236
interval between FITD requests, 234
interval between intention and behavior measures, 130(n22)
interval between warning and message, 260
stability of intentions, 113–114
timing of communicator identification, 196
Topic
knowledge of, 10, 155
personal relevance of, 152–153, 163, 195, 199
Transgression-compliance effects, 237
Transportation (into narratives), 218
Transtheoretical model, 132–140
decisional balance, 134–136
processes of change, 133, 144(n2)
self-efficacy, 136–139
stages of change, 132–133

Trustworthiness (credibility dimension), 189
Two-sided vs. one-sided messages, 79, 193, 211(n4), 223–225,
265(n10, n12), 266(n16)

Unexpected position, 192–193
Unimodel of persuasion, 166–169
Unipolar scale scoring, 64, 73(n8)
Utility of information, 84, 96(n11)

Variable definition, 163–165, 183–184, 187(nn9–10), 247(n39)
Video games, 241(n14)

Warning, 157, 259–260
Websites as sources, 207
Weights of reasoned action theory components, 101–102, 117,
127(n6), 130(n26), 131(n28)

About the Author

Daniel J. O’Keefe is the Owen L. Coon Professor in the Department of Communication
Studies at Northwestern University. He received his Ph.D. from the
University of Illinois at Urbana-Champaign and has been a faculty
member at the University of Michigan, Pennsylvania State
University, and the University of Illinois. He has received the
National Communication Association’s Charles Woolbert Research
Award, its Golden Anniversary Monograph Award, its Rhetorical and
Communication Theory Division Distinguished Scholar Award, and
its Health Communication Division Article of the Year Award; the
International Communication Association’s Best Article Award and
its Division 1 John E. Hunter Meta-Analysis Award; the International
Society for the Study of Argumentation’s Distinguished Research
Award; the American Forensic Association’s Daniel Rohrer
Memorial Research Award; and teaching awards from Northwestern
University, the University of Illinois, and the Central States
Communication Association.
