Persuasion Theory and Research 3rd Edition
SAGE was founded in 1965 by Sara Miller McCune to support the
dissemination of usable knowledge by publishing innovative and high-
quality research and teaching content. Today, we publish more than 750
journals, including those of more than 300 learned societies, more than
800 new books per year, and a growing range of library products including
archives, data, case studies, reports, conference highlights, and video.
SAGE remains majority-owned by our founder, and after Sara’s lifetime
will become owned by a charitable trust that secures our continued
independence.
Persuasion
Theory and Research
Third Edition
Daniel J. O’Keefe
Northwestern University
Los Angeles
London
New Delhi
Singapore
Washington DC
Boston
Copyright © 2016 by SAGE Publications, Inc.
BF637.P4054 2016
153.8’52—dc23 2015000192
Brief Contents
Preface
1. Persuasion, Attitudes, and Actions
2. Social Judgment Theory
3. Functional Approaches to Attitude
4. Belief-Based Models of Attitude
5. Cognitive Dissonance Theory
6. Reasoned Action Theory
7. Stage Models
8. Elaboration Likelihood Model
9. The Study of Persuasive Effects
10. Communicator Factors
11. Message Factors
12. Receiver Factors
References
Author Index
Subject Index
About the Author
Detailed Contents
Preface
1 Persuasion, Attitudes, and Actions
The Concept of Persuasion
About Definitions: Fuzzy Edges and Paradigm Cases
Five Common Features of Paradigm Cases of Persuasion
A Definition After All?
The Concept of Attitude
Attitude Measurement Techniques
Explicit Measures
Semantic Differential Evaluative Scales
Single-Item Attitude Measures
Features of Explicit Measures
Quasi-Explicit Measures
Implicit Measures
Summary
Attitudes and Behaviors
The General Relationship
Moderating Factors
Correspondence of Measures
Direct Experience
Summary
Encouraging Attitude-Consistent Behavior
Enhance Perceived Relevance
Induce Feelings of Hypocrisy
Encourage Anticipation of Feelings
Summary
Assessing Persuasive Effects
Attitude Change
Beyond Attitude Change
Conclusion
For Review
Notes
2 Social Judgment Theory
Judgments of Alternative Positions on an Issue
The Ordered Alternatives Questionnaire
The Concept of Ego-Involvement
Ego-Involvement and the Latitudes
Measures of Ego-Involvement
Size of the Ordered Alternatives Latitude of Rejection
Own Categories Procedure
Reactions to Communications
Assimilation and Contrast Effects
Attitude Change Effects
Assimilation and Contrast Effects Reconsidered
The Impact of Assimilation and Contrast Effects on
Persuasion
Ambiguity in Political Campaigns
Adapting Persuasive Messages to Recipients Using Social
Judgment Theory
Critical Assessment
The Confounding of Involvement With Other Variables
The Concept of Ego-Involvement
The Measures of Ego-Involvement
Conclusion
For Review
Notes
3 Functional Approaches to Attitude
A Classic Functional Analysis
Subsequent Developments
Identifying General Functions of Attitude
Assessing the Function of a Given Attitude
Influences on Attitude Function
Individual Differences
Attitude Object
Situational Variations
Multifunctional Attitude Objects Revisited
Adapting Persuasive Messages to Recipients: Function
Matching
The Persuasive Effects of Matched and Mismatched
Appeals
Explaining the Effects of Function Matching
Commentary
Generality and Specificity in Attitude Function Typologies
Functional Confusions
Some Functional Distinctions
Conflating the Functions
Reconsidering the Assessment and Conceptualization of
Attitude Function
Assessment of Attitude Function Reconsidered
Utilitarian and Value-Expressive Functions
Reconsidered
Summary
Persuasion and Function Matching Revisited
Reviving the Idea of Attitude Functions
Conclusion
For Review
Notes
4 Belief-Based Models of Attitude
Summative Model of Attitude
The Model
Adapting Persuasive Messages to Recipients Based on the
Summative Model
Alternative Persuasive Strategies
Identifying Foci for Appeals
Research Evidence and Commentary
General Correlational Evidence
Attribute Importance
Belief Content
Role of Belief Strength
Scoring Procedures
Alternative Integration Schemes
The Sufficiency of Belief-Based Analyses
Persuasive Strategies Reconsidered
Belief Strength as a Persuasion Target
Belief Evaluation as a Persuasion Target
Changing the Set of Salient Beliefs as a Persuasion
Mechanism
Conclusion
For Review
Notes
5 Cognitive Dissonance Theory
General Theoretical Sketch
Elements and Relations
Dissonance
Factors Influencing the Magnitude of Dissonance
Means of Reducing Dissonance
Some Research Applications
Decision Making
Conflict
Decision and Dissonance
Factors Influencing the Degree of Dissonance
Dissonance Reduction
Regret
Selective Exposure to Information
The Dissonance Theory Analysis
The Research Evidence
Summary
Induced Compliance
Incentive and Dissonance in Induced-Compliance
Situations
Counterattitudinal-Advocacy–Based Interventions
The “Low, Low Price” Offer
Limiting Conditions
Summary
Hypocrisy Induction
Hypocrisy as a Means of Influencing Behavior
Hypocrisy Induction Mechanisms
Backfire Effects
Revisions of, and Alternatives to, Dissonance Theory
Conclusion
For Review
Notes
6 Reasoned Action Theory
The Reasoned Action Theory Model
Intention
The Determinants of Intention
Attitude Toward the Behavior
Injunctive Norm
Descriptive Norm
Perceived Behavioral Control
Weighting the Determinants
The Distinctiveness of Perceived Behavioral Control
The Predictability of Intention Using the RAT Model
Influencing Intentions
Influencing Attitude Toward the Behavior
The Determinants of AB
Changing AB
Influencing the Injunctive Norm
The Determinants of IN
Changing IN
Influencing the Descriptive Norm
The Determinants of DN
Changing DN
Influencing Perceived Behavioral Control
The Determinants of PBC
Changing PBC
Altering the Weights
Intentions and Behaviors
Factors Influencing the Intention-Behavior Relationship
Correspondence of Measures
Temporal Stability of Intentions
Explicit Planning
The Sufficiency of Intention
Adapting Persuasive Messages to Recipients Based on Reasoned
Action Theory
Commentary
Additional Possible Predictors
Anticipated Affect
Moral Norms
The Assessment of Potential Additions
Revision of the Attitudinal and Normative Components
The Attitudinal Component
The Normative Components
The Nature of the Perceived Control Component
PBC as a Moderator
Refining the PBC Construct
Conclusion
For Review
Notes
7 Stage Models
The Transtheoretical Model
Decisional Balance and Intervention Design
Decisional Balance
Decisional Balance Asymmetry
Implications of Decisional Balance Asymmetry
Self-Efficacy and Intervention Design
Intervention Stage-Matching
Self-Efficacy Interventions
Broader Concerns About the Transtheoretical Model
The Distinctive Claims of Stage Models
Other Stage Models
Conclusion
For Review
Notes
8 Elaboration Likelihood Model
Variations in the Degree of Elaboration: Central Versus
Peripheral Routes to Persuasion
The Nature of Elaboration
Central and Peripheral Routes to Persuasion
Consequences of Different Routes to Persuasion
Factors Affecting the Degree of Elaboration
Factors Affecting Elaboration Motivation
Personal Relevance (Involvement)
Need for Cognition
Factors Affecting Elaboration Ability
Distraction
Prior Knowledge
Summary
Influences on Persuasive Effects Under Conditions of High
Elaboration: Central Routes to Persuasion
The Critical Role of Elaboration Valence
Influences on Elaboration Valence
Proattitudinal Versus Counterattitudinal Messages
Argument Strength
Other Influences on Elaboration Valence
Summary: Central Routes to Persuasion
Influences on Persuasive Effects Under Conditions of Low
Elaboration: Peripheral Routes to Persuasion
The Critical Role of Heuristic Principles
Varieties of Heuristic Principles
Credibility Heuristic
Liking Heuristic
Consensus Heuristic
Other Heuristics
Summary: Peripheral Routes to Persuasion
Multiple Roles for Persuasion Variables
Adapting Persuasive Messages to Recipients Based on the ELM
Commentary
The Nature of Involvement
Argument Strength
One Persuasion Process?
The Unimodel of Persuasion
Explaining ELM Findings
Comparing the Two Models
Conclusion
For Review
Notes
9 The Study of Persuasive Effects
Experimental Design and Causal Inference
The Basic Design
Variations on the Basic Design
Persuasiveness and Relative Persuasiveness
Two General Challenges in Studying Persuasive Effects
Generalizing About Messages
Ambiguous Causal Attribution
Nonuniform Effects of Message Variables
Designing Future Persuasion Research
Interpreting Past Persuasion Research
Beyond Message Variables
Variable Definition
Message Features Versus Observed Effects
The Importance of the Distinction
Conclusion
For Review
Notes
10 Communicator Factors
Communicator Credibility
The Dimensions of Credibility
Factor-Analytic Research
Expertise and Trustworthiness as Dimensions of
Credibility
Factors Influencing Credibility Judgments
Education, Occupation, and Experience
Nonfluencies in Delivery
Citation of Evidence Sources
Position Advocated
Liking for the Communicator
Humor
Summary
Effects of Credibility
Two Initial Clarifications
Influences on the Magnitude of Effect
Influences on the Direction of Effect
Liking
The General Rule
Some Exceptions and Limiting Conditions
Liking and Credibility
Liking and Topic Relevance
Greater Effectiveness of Disliked Communicators
Other Communicator Factors
Similarity
Similarity and Liking
Similarity and Credibility: Expertise Judgments
Similarity and Credibility: Trustworthiness Judgments
Summary: The Effects of Similarity
Physical Attractiveness
Physical Attractiveness and Liking
Physical Attractiveness and Credibility
Summary
About Additional Communicator Characteristics
Conclusion
The Nature of Communication Sources
Multiple Roles for Communicator Variables
For Review
Notes
11 Message Factors
Message Structure and Format
Conclusion Omission
Recommendation Specificity
Narratives
Complexities in Studying Narrative and Persuasion
The Persuasive Power of Narratives
Factors Influencing Narrative Persuasiveness
Entertainment-Education
Summary
Prompts
Message Content
Consequence Desirability
One-Sided Versus Two-Sided Messages
Gain-Loss Framing
Overall Effects
Disease Prevention Versus Disease Detection
Other Possible Moderating Factors
Summary
Threat Appeals
Protection Motivation Theory
Threat Appeals, Fear Arousal, and Persuasion
The Extended Parallel Process Model
Summary
Beyond Fear Arousal
Sequential Request Strategies
Foot-in-the-Door
The Strategy
The Research Evidence
Explaining FITD Effects
Door-in-the-Face
The Strategy
The Research Evidence
Explaining DITF Effects
Conclusion
For Review
Notes
12 Receiver Factors
Individual Differences
Topic-Specific Differences
General Influences on Persuasion Processes
Summary
Transient Receiver States
Mood
Reactance
Other Transient States
Influencing Susceptibility to Persuasion
Reducing Susceptibility: Inoculation, Warning, Refusal
Skills Training
Inoculation
Warning
Refusal Skills Training
Increasing Susceptibility: Self-Affirmation
Conclusion
For Review
Notes
References
Author Index
Subject Index
About the Author
Preface
Some readers will see the relationship of this theme to concepts such as
“message tailoring” and “message targeting.” In the research literature,
these labels have often been applied quite loosely to any way in
which messages are adapted to (customized for) recipients, although
sometimes there have been efforts to use different labels to describe
different degrees or kinds of message customization (e.g., sometimes
“targeting” is described as adaptation on the basis of group-level
characteristics, whereas “tailoring” is based on individual-level
properties). But no matter the label, there is a common underlying
conceptual thread here, namely, that different kinds of messages are likely
to be persuasive for different recipients—and hence to maximize
persuasiveness, messages should be adapted to their audiences.
As should be apparent, there are quite a few different bases for such
adaptation: messages might be adapted to the audience’s literacy level,
cultural background, values, sex, degree of extroversion, age, regulatory
focus, level of self-monitoring, or race/ethnicity. A message may be
customized to the audience’s current psychological state as described by,
say, reasoned action theory (e.g., is perceived behavioral control low?),
protection motivation theory (is perceived vulnerability sufficiently high?),
or the transtheoretical model (which stage is the recipient in?). It may be
superficially personalized (e.g., by mentioning the recipient’s name in a
direct mail appeal), mention shared attitudes not relevant to the advocacy
subject, and so on.
For this reason, it is not fruitful to pursue questions such as “are tailored
messages more persuasive than non-tailored messages?” because the
answer is virtually certain to be “it depends”—if nothing else, the answer
may vary depending on the basis of tailoring. For example, it might be that
adapting messages through superficial personalization typically makes
very little difference to persuasiveness, whereas adapting messages by
matching their appeals to the audience’s core values could
characteristically enhance persuasiveness substantially.
that in the social and behavioral sciences, findings and theories often seem
to just fade away, not because of any decisive criticisms or
counterarguments but rather because they seem to be “too old to be true.”
This apt observation seems to me to identify one barrier to social-scientific
research synthesis, namely, that useful results and concepts somehow do
not endure but rather disappear—making it impossible for subsequent
work to exploit them.
To sharpen the point here: It has been many years since the islets of
Langerhans (masses of endocrine cells in the pancreas) were first noticed,
but medical textbooks do not ignore this biological structure. Indeed, it
would be inconceivable to discuss (for example) mechanisms of insulin
secretion without mentioning these structures. Now I do not mean to say
that social-scientific phenomena such as assimilation and contrast effects
are on all fours with the islets of Langerhans, but I do want to suggest that
premature disappearance of social-scientific concepts and findings seems
to happen all too easily. Without forgetting how grumpy old researchers
can sometimes view genuinely new developments (“this new phenomenon
is just another name for something that used to be called X”), one can
nevertheless acknowledge the real possibility that “old” knowledge can
somehow be lost, misplaced, insufficiently understood, unappreciated, or
overlooked.
not be lost.
preceding the message. The effect of conclusion type on persuasive
outcome was significant, t(60) = 2.35, p < .05: Messages with a
concluding metaphor were significantly more effective than messages
with an ordinary (nonmetaphorical) conclusion.
Happily, recent years have seen some progress in the diffusion of more
careful understandings of statistical significance, effect sizes, statistical
power, confidence intervals, and related matters. (Some progress—but not
enough. It remains distressingly common that even graduate students with
statistical training can reason badly when faced with a problem such as
that hypothetical example.) With the hope of encouraging greater sensitivity
concerning specifically the magnitude of effects likely to be found in
persuasion research, I have tried to include mention of average effect sizes
where appropriate and available.
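As one illustration of moving from a bare significance test to an effect size, the t value in the hypothetical report above can be converted to a standardized mean difference. This is a sketch under an assumption the hypothetical does not actually state: an independent-groups design with equal group sizes, for which d = 2t/√df.

```python
import math

# Convert an independent-groups t statistic to Cohen's d, assuming
# equal group sizes: d = 2t / sqrt(df). (The equal-n design is an
# assumption; the hypothetical report does not specify it.)
def t_to_d(t, df):
    return 2 * t / math.sqrt(df)

# The hypothetical result above: t(60) = 2.35.
print(round(t_to_d(2.35, 60), 2))  # 0.61
```

A d of about 0.61 would conventionally be described as a medium-sized effect — exactly the magnitude information that a bare "p < .05" conceals.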
But there is at present something of a disjuncture between the available
methods for describing research findings (in terms of effect sizes and
confidence intervals) and our theoretical equipment for generating
predictions. Although research results can be described in specific
quantitative terms (“the correlation was .37”), researchers are currently
prepared to offer only directional predictions (“the correlation will be
positive”). Developing more refined predictive capabilities is very much to
be hoped for, but significant challenges lie ahead (for some discussion, see
O’Keefe, 2011a).
Of course, one cannot hope to survey the range of work covered here
without errors, oversights, and unclarities. These have been reduced by
advice and assistance from a number of quarters. Students in my
persuasion classes have helped make my lectures—and so this book—
clearer than otherwise might have been the case. Many good insights and
suggestions came from the reviewers arranged by Sage Publications:
Jonathan H. Amsbary, William B. Collins, Julia Jahansoozi, Bonnie Kay,
Andrew J. Kirk, Susan L. Kline, Sanja Novitsky, Charles Soukup, Kaja
Tampere, and Beth M. Waggenspack. Jos Hornikx also provided
especially useful commentary on drafts of this edition’s chapters. And I
thank Barbara O’Keefe both for helpful conversation and for an
unceasingly interesting life: “Age cannot wither her, nor custom stale / Her
infinite variety.”
Chapter 1 Persuasion, Attitudes, and
Actions
The Concept of Persuasion
application, without having to draw sharp-edged definitional lines.
definition that we are avoiding.2 It is enough to notice that such cases are
borderline instances of persuasion, precisely because the persuadee’s
freedom is not so clear-cut as in paradigm instances.
Fourth, paradigm cases of persuasion are ones in which the effects are
achieved through communication (and perhaps especially through the
medium of language). My physically lifting you and throwing you off the
roof of a building is something quite different from my talking you into
jumping off the same roof; the latter might possibly be a case of
persuasion (depending on the circumstances, exactly what I have said to
you, and so on), but the former is certainly not. What distinguishes these
two instances is that communication is involved in the latter case but not in
the former.
In persuasion theory and research, the relevant mental state has most
commonly been characterized as an attitude (and thus the concept of
attitude receives direct discussion later in this chapter).3 Even when a
persuader’s ultimate goal is the modification of another’s behavior, that
goal is often seen to be achieved through a process of attitude change—the
presumption being that attitude change is a means of behavioral change.
together into something that looks like a definition of persuasion: a
successful intentional effort at influencing another’s mental state through
communication in a circumstance in which the persuadee has some
measure of freedom. But it should be apparent that constructing such a
definition would not eliminate the fuzzy edges of the concept of
persuasion. Such a definition leaves open to dispute just how much success
is required, just how intentional the effort must be, and so on.
Perhaps it was inevitable, thus, that in the early part of the 20th century,
the emerging field of social psychology should have seized on the concept
of attitude as an important one. Attitude offered to social psychologists a
distinctive psychological mechanism for understanding and explaining
individual variation in social conduct (Allport, 1935). And although for a
time there was considerable discussion of alternative definitions of attitude
(e.g., Audi, 1972; Eagly & Chaiken, 1993, pp. 1–21; McGuire, 1985), a
broad consensus emerged that an attitude is a person’s general evaluation
of an object (where “object” is understood in a broad sense, as
encompassing persons, events, products, policies, institutions, and so on).
Even when conceptual treatments of attitude differ in other ways, a
common theme is that an attitude is an evaluative judgment of (reaction to)
an object (Fishbein & Ajzen, 2010, pp. 75–79).
Explicit Measures
Explicit attitude measurement techniques directly ask the respondent for
an evaluative judgment of the attitude object. Two commonly employed
explicit assessment procedures are semantic differential evaluative scales
and single-item attitude questions.
Semantic Differential Evaluative Scales
Researchers commonly adapt evaluative scales from the semantic
differential scale of Osgood, Suci, and
Tannenbaum (1957). In this procedure, respondents rate the attitude object
on a number of (typically) 7-point bipolar scales that are end-anchored by
evaluative adjective pairs (such as good-bad, desirable-undesirable, and so
forth). An example appears in Figure 1.1. The instructions for this scale
ask the respondent to place a check mark at the point on the scale that best
represents the respondent’s judgment. The investigator can
straightforwardly assign numerical values to the scale points (say, +3 for
the extreme positive point, through 0 for the midpoint, to −3 for the
extreme negative end) and then sum each person’s responses to obtain an
indication of the person’s attitude toward (general evaluative judgment of)
the object.
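The scoring procedure just described can be sketched in a few lines. The ratings below are hypothetical responses on four illustrative scales, not data from the text.

```python
# Semantic differential scoring: each bipolar scale is coded from
# +3 (extreme positive) through 0 (midpoint) to -3 (extreme negative).
# These ratings are hypothetical responses on four evaluative scales.
ratings = {
    "good-bad": 2,
    "desirable-undesirable": 3,
    "favorable-unfavorable": 1,
    "positive-negative": 2,
}

# Summing the scale responses yields an index of the respondent's
# attitude toward (general evaluative judgment of) the object.
attitude_score = sum(ratings.values())
print(attitude_score)  # 8
```

Investigators sometimes average rather than sum the responses, which keeps scores comparable across instruments that use different numbers of scales.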
can be undertaken orally (as in telephone surveys or face-to-face
interviewing); the question is typically straightforward and easily
comprehended by the respondent; the question can be asked (and
answered) in a short time.
Quasi-Explicit Measures
Quasi-explicit attitude measurement techniques assess attitude not by
directly eliciting an evaluative judgment of the attitude object but by
eliciting information that is obviously attitude-relevant and that offers a
straightforward basis for attitude assessment. For example, paired-comparison procedures and ranking techniques do not ask directly for an
evaluation of any single attitude object but ask for comparative judgments
of several objects. In a paired-comparison technique, the respondent is
asked a series of questions about the relative evaluation of each of a
number of pairs of objects (e.g., “Which candidate do you prefer, Archer
or Barker? Archer or Cooper? Barker or Cooper?”); in a ranking
procedure, the respondent ranks a set of attitude objects (e.g., “Rank these
various leisure activities, from your most favorite to your least favorite”).
The obtained responses obviously permit an investigator to draw some
conclusions about the respondent’s evaluation of a given object.
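One simple way an investigator might draw such conclusions is to tally how often each object is preferred across the pairs. The responses below are hypothetical, echoing the candidate example above; the tallying rule is an illustrative choice, not a procedure prescribed by the text.

```python
from collections import Counter

# Hypothetical paired-comparison responses:
# (pair presented, object the respondent preferred).
choices = [
    (("Archer", "Barker"), "Archer"),
    (("Archer", "Cooper"), "Archer"),
    (("Barker", "Cooper"), "Cooper"),
]

# Count how often each object was preferred; more wins indicate a
# relatively more positive evaluation of that object.
wins = Counter(preferred for _, preferred in choices)
objects = {name for pair, _ in choices for name in pair}
order = sorted(objects, key=lambda name: wins[name], reverse=True)
print(order)  # ['Archer', 'Cooper', 'Barker']
```

Note that intransitive responses (Archer over Barker, Barker over Cooper, Cooper over Archer) would produce ties under this rule, which is one reason paired-comparison data are often analyzed with more elaborate scaling models.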
to be included (because knowing whether a respondent agreed or disagreed
with such a statement would not provide information about the
respondent’s attitude).
Implicit Measures
Explicit and quasi-explicit measures are overwhelmingly the most
common ways of measuring attitudes. But a variety of other techniques
have been developed that assess attitude not by directly eliciting an
evaluation of the attitude object or even by eliciting information obviously
relevant to such an overall evaluation but instead by some more
roundabout (implicit, indirect) means.
Summary
As this survey suggests, a variety of attitude measurement techniques are
available. The overwhelmingly most frequently used attitude measurement
procedures are explicit or quasi-explicit techniques; reliability and validity
are more readily established for attitude measures based on these
techniques than for measures derived from implicit procedures. Explicit
procedures are often preferred over quasi-explicit techniques because of
the effort required for constructing Thurstone or Likert scales. But which
specific attitude assessment procedure an investigator employs in a given
instance will depend on the particulars of the situation. Depending on what
the researcher wants to find out, the time available to prepare the attitude
assessment, the time available to question respondents, the sensitivity of
the attitude topic, and so forth, different techniques will recommend
themselves.
Moderating Factors
The degree of attitude-behavior consistency has been found to vary
depending on other “moderating” factors—factors that moderate or
influence the relationship between attitudes and behaviors. A large number
of possible moderating variables have been explored, including the degree
to which the behavior is effortful or difficult (Kaiser & Schultz, 2009;
Wallace, Paulson, Lord, & Bond, 2005); the perceived relevance of the
attitude to the behavior (Snyder, 1982; Snyder & Kendzierski, 1982);
attitude accessibility (Smith & Terry, 2003); attitudinal ambivalence
(Conner et al., 2002; Jonas, Broemer, & Diehl, 2000); having a vested
interest in a position (Crano & Prislin, 1995); the extent of attitude-
relevant knowledge (Fabrigar, Petty, Smith, & Crites, 2006); and many
others. In what follows, two well-studied factors are discussed as
illustrative: the correspondence between the attitudinal and behavioral
measures, and the degree of direct experience with the attitude object.
Correspondence of Measures
One factor that influences the observed consistency between an attitudinal
measure and a behavioral measure is the nature of the measures involved.
Good evidence indicates that substantial attitude-behavior correlations will
be obtained only when the attitudinal measure and the behavioral measure
correspond in specificity (Ajzen & Fishbein, 1977). A general attitude will
probably not be especially strongly correlated with any one particular
specific behavior. A general attitude measure corresponds to a general
behavioral measure, not to a specific one.
attitudinal measures and behavioral measures are likely to be rather more
strongly associated when there is substantial correspondence between the
two measures and underscore the folly of supposing that a single specific
behavior will necessarily or typically be strongly associated with a
person’s general attitude (for some relevant reviews, see Ajzen & Cote,
2008; Eckes & Six, 1994; M.-S. Kim & Hunter, 1993a; Kraus, 1995).
Direct Experience
A second factor influencing attitude-behavior consistency is the degree of
direct experience with the attitude object. Attitudes based on direct
behavioral experience with the attitude object have been found to be more
predictive of later behavior toward the object than are attitudes based on
indirect experience. (For some examples and discussion, see Doll & Ajzen,
1992; Doll & Mallu, 1990; Eagly & Chaiken, 1993, pp. 194–200; Glasman
& Albarracín, 2006; Kraus, 1995; Steffen & Gruber, 1991. For some
complexities, see Millar & Millar, 1998.)
A similar effect was observed in a study comparing attitude-behavior
consistency for product attitudes that were based either on a trial
experience with a sample of the product (direct experience) or on exposure
to advertising messages about the product (indirect experience). Much
greater attitude-behavior consistency was observed for those persons who
had had the opportunity to try the product than for those who had merely
read about it. For example, purchase of the product was more highly
correlated with attitudes based on product trial (.57) than with attitudes
based on product advertising (.18) (R. E. Smith & Swinyard, 1983).
This finding does not mean that product trial influence strategies (e.g.,
providing free samples through the mail, offering grocery store shoppers a
taste of a new food product, etc.) will necessarily be more effective in
producing sales than will advertising strategies: Direct experience
strengthens both positive and negative attitudes. The shopper who has a
negative attitude toward a food product because of having read about it
might still come to purchase the product; the shopper whose negative
attitude is based on tasting the product, however, is much less likely to do
so.
Summary
Research has examined a great many possible moderators of attitude-
behavior consistency (for some general discussions, see Ajzen & Sexton,
1999; Eagly & Chaiken, 1993, pp. 193–215; Fazio & Roskos-Ewoldsen,
2005; Fazio & Towles-Schwen, 1999; Glasman & Albarracín, 2006;
Wallace, Paulson, Lord, & Bond, 2005). The two mentioned here,
although relatively prominent, are only illustrative.
convince people to make attitude-consistent behavioral choices about
exercise, diet, medical care, and the like. Similarly, persons who express
positive attitudes toward energy conservation and environmental
protection may nevertheless need to be induced to act consistently with
those views—to engage in recycling, take packaging into account when
buying products, choose appropriate thermostat settings, and so on.
Thus the question arises of how persuaders might approach such tasks. At
least three related strategies can be identified.
children to have a good education, don’t you? To have an edge in school?
To get ahead in life?” Fundamentally, these questions reflect the seller’s
understanding that enhancing the perceived relevance of an attitude to an
action can be a means of increasing attitude-behavior consistency.
emotions simply by asking about such feelings, with consequent effects on
intention or behavior. For example, Richard, van der Pligt, and de Vries
(1996b) asked people either to indicate how they would expect to feel after
having unprotected sex (by rating the likelihood of experiencing various
positive and negative emotions) or to indicate how they felt about having
unprotected sex (using similar ratings). Those participants whose attention
was drawn to their anticipated feelings were more likely to intend to use
condoms (and subsequently were more consistent condom users) than the
other participants. Such results plainly suggest that making salient the
emotion-related consequences of contemplated attitude-inconsistent
behavior may have the effect of enhancing attitude-behavior consistency.
Summary
These three strategies all seek to tap some general desire for consistency as
a way of influencing behavior in a circumstance in which persons will
have an opportunity to act consistently with some existing attitude. But the
strategies vary in the means of engaging that motivation. The perceived-
relevance strategy amounts to saying, “You might not have realized it, but
this really is an opportunity to act consistently with your attitude.” The
hypocrisy-induction strategy says, in effect, “You haven’t been acting
consistently with your attitude, but here is an opportunity to do so.” The
anticipated-feelings strategy implicitly says, “Here is an opportunity to act
consistently with your attitude—and think how bad you’ll feel if you
don’t.”10
Attitude Change
Attitude measurement procedures obviously provide means of assessing
persuasive effects. To see whether a given message changes attitudes, an
investigator can assess attitudes before and after exposure to the message
(perhaps also collecting parallel attitude assessments from persons not
exposed to the message, as a way of reducing ambiguity about the
potential causes of any observed changes). Indeed, such attitude
assessment procedures are the most common ones used in studies of
persuasive effects. The concrete realizations of attitude assessment may
vary depending on the particulars of the research design; for example, in
an experiment in which participants are randomly assigned to conditions,
one might dispense with the premessage attitude assessment and examine
only postmessage differences, on the assumption that random assignment
makes substantial initial differences unlikely. But effects on attitude are
the effects most frequently considered in persuasion research.
Petty & Krosnick, 1995; Visser, Bizer, & Krosnick, 2006).
Conceptualizations of attitude strength vary, but (as an example) Krosnick
and Petty (1995) proposed that attitude strength is best understood as an
amalgam of persistence (stronger attitudes are more persistent than are
weaker ones), resistance (stronger attitudes are more resistant to change
than are weaker ones), impact on information processing and judgments
(stronger attitudes are more likely to affect such processes than are weaker
attitudes), and impact on behavior (stronger attitudes will have more effect
on behavior than will weaker ones). It should be apparent that persuaders
might have an interest in influencing not merely attitude (valence and
extremity) but attitude strength as well.12
Conclusion
This introductory chapter has elucidated the concepts of persuasion and
attitude, described some common attitude assessment procedures, sketched
the relationship of attitudes and behavior, and discussed the assessment of
persuasive effects. In the following chapters, extant social-scientific theory
and research about persuasion are reviewed. Several theoretical
perspectives that have been prominent in the explanation of persuasive
effects are discussed in Chapters 2 through 8. Research on various factors
influencing persuasive effects is explored in Chapters 9 through 12.
For Review
1. What is a paradigm (exemplary) case? Give examples. Describe how
the shared features of paradigm cases of a concept can provide
clarification of the concept. Explain how the “sharp edges” of a
definition can lead to disputes over borderline cases.
2. What are the shared features of exemplary cases of persuasion?
Explain how a successful attempt to influence is such a feature.
Explain how the persuader’s intending to influence is such a feature.
Explain how some measure of freedom on the persuadee’s part is
such a feature. Explain how having the effects be achieved through
communication is such a feature. Explain how a change in the
persuadee’s mental state is such a feature. Explain how features
present in full-fledged ways in paradigm cases can, when present in
only some diminished fashion, make for borderline cases of a
concept.
3. Identify one important mental state often changed in persuasion.
What is an attitude? Explain why attitudes are a common target for
persuasive messages.
4. What are explicit attitude measurement techniques? What are
semantic differential evaluative scales? Explain how they work. What
are single-item attitude measures? What is the feeling thermometer?
Identify a circumstance in which single-item attitude measures are
especially useful. Identify and explain a weakness of such measures.
5. What are quasi-explicit attitude measurement techniques? Explain
how a respondent’s agreement or disagreement with belief statements
can serve as a measure of attitude. Describe the process of identifying
suitable belief statements for such attitude measurement procedures.
Identify an advantage (and accompanying disadvantage) of using
such attitude measures.
6. What are implicit attitude measurement techniques? Give examples.
How are implicit attitude measures different from explicit and quasi-
explicit measures?
7. Are attitudes and behaviors generally consistent? What factors
influence the degree of attitude-behavior consistency? How does the
correspondence between the attitudinal measure and the behavioral
measure influence attitude-behavior consistency? How does the
degree of direct experience with the attitude object influence attitude-
behavior consistency? Describe three general ways of encouraging
attitude-consistent behavior. Explain how increasing the perceived
relevance of an attitude to a behavior might enhance attitude-behavior
consistency. Explain how inducing feelings of hypocrisy might
enhance attitude-behavior consistency. Explain how encouraging
anticipation of feelings might enhance attitude-behavior consistency.
8. How can persuasion be assessed using attitude measurement
techniques? Explain why other assessments (that is, other than
attitude) may be useful or necessary.
Notes
1. This point can also be expressed by saying that to persuade is a
perlocutionary act, whereas (for example) to urge is an illocutionary act. In
this regard there is a difference between “A persuaded B” and “A
attempted to persuade B” (Gass & Seiter, 2004, p. 27n3).
relationship were overdrawn.
which then motivate attitude-consistent future behavior) and/or to an
increased expectation that negative feelings (guilt, regret, and so forth) will
arise if attitude-inconsistent behavior is undertaken (with attitude-
consistent behavior then motivated by a desire to avoid such negative
feelings).
about relative persuasiveness need not (should not) distinguish studies on
the basis of the outcome measure used; similarly, in formative message
design research (e.g., campaign planning) that tests the relative
persuasiveness of alternative possible messages, assessments of
appropriate intentions will provide a perfectly suitable guide to the relative
behavioral persuasiveness of the messages.
Chapter 2 Social Judgment Theory
to a particular persuasive communication will depend (at least in part) on
what we think of—how favorable we are toward—the point of view that it
advocates. But this suggests that, in order to understand a recipient’s
reaction to a given message, it is important to understand how the receiver
assesses the various positions on that issue (that is, the various different
stands that a message might advocate). Hence the next section discusses
the nature of people’s judgments of the alternative positions on an issue.
Subsequent sections discuss receivers’ reactions to persuasive messages,
how social judgment theory suggests adapting messages to recipients, and
some weaknesses of social judgment theory.
1961, pp. 136–137; for other examples, see Hovland, Harvey, & Sherif,
1957; C. W. Sherif, 1980).
The respondent is asked first to indicate the one statement that he or she
finds most acceptable (for example, by putting a ++ in the corresponding
blank). The respondent is then asked to indicate the other statements he
or she finds acceptable (+), the one statement that is most objectionable
(XX), and the other statements that are unacceptable (X).
The respondent need not mark every statement as acceptable or
unacceptable; that is, some of the positions can be neither accepted nor
rejected by the respondent (and so be left blank, or marked with a zero).
(For procedural details, see Granberg & Steele, 1974.)
These responses are said to form the person’s judgmental latitudes on that
issue. The range of positions that the respondent finds acceptable forms the
respondent’s latitude of acceptance, the positions that the respondent finds
unacceptable constitute the latitude of rejection, and the positions that the
respondent neither accepts nor rejects form the latitude of noncommitment.
The structure of these judgmental latitudes can vary from person to person.
In fact, two people might have the same “most preferred” position on an
issue, but differ in their assessment of the other positions on the issue and
hence have very different latitudes of acceptance, rejection, and
noncommitment. For example, suppose that on the presidential election
issue, Carol and Mary both find statement B most acceptable: their own
most-preferred position is that, on the whole, the interests of the country
will be best served by the election of the Republicans. Mary finds
statements A, C, D, and E also acceptable, is noncommittal toward F, G,
and H, and rejects only the extreme Democratic statement I; Carol, on the
other hand, thinks that A is the only other acceptable statement, is
noncommittal regarding C and D, and rejects E, F, G, H, and I. Mary thus
has a larger latitude of acceptance than Carol (Mary finds five positions
acceptable, Carol only two), a larger latitude of noncommitment (three
positions as opposed to two), and a smaller latitude of rejection (only one
position is objectionable to Mary, whereas five are to Carol). Notice (to
jump ahead for a moment) that even though Carol and Mary have the same
most preferred position, they would presumably react very differently to a
message advocating position E: Mary finds that to be an acceptable
position on the issue, but Carol finds it objectionable. As this example
suggests, from the point of view of social judgment theory, a person’s
stand on an issue involves not merely a most preferred position, but also
assessment of all the other possible positions on the issue—as reflected in
the set of judgmental latitudes (the latitudes of acceptance, rejection, and
noncommitment).
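The Carol/Mary example lends itself to a small illustrative sketch. The dictionaries below simply encode the marks described in the text (++ most acceptable, + acceptable, XX most objectionable, X objectionable, blank for neither); the function name and the dictionary representation are conveniences of the sketch, not part of the Ordered Alternatives procedure itself.

```python
# Deriving judgmental latitudes from Ordered Alternatives responses,
# using the Carol/Mary presidential-election example from the text.

def latitudes(marks):
    """Sort positions into the three judgmental latitudes."""
    acceptance = [p for p, m in marks.items() if m in ('++', '+')]
    rejection = [p for p, m in marks.items() if m in ('XX', 'X')]
    noncommitment = [p for p, m in marks.items() if m == '']
    return acceptance, rejection, noncommitment

# Both find statement B most acceptable, but mark the rest differently.
mary = {'A': '+', 'B': '++', 'C': '+', 'D': '+', 'E': '+',
        'F': '', 'G': '', 'H': '', 'I': 'XX'}
carol = {'A': '+', 'B': '++', 'C': '', 'D': '',
         'E': 'X', 'F': 'X', 'G': 'X', 'H': 'X', 'I': 'XX'}

m_acc, m_rej, m_non = latitudes(mary)
c_acc, c_rej, c_non = latitudes(carol)
print(len(m_acc), len(m_non), len(m_rej))  # Mary:  5 acceptable, 3 noncommittal, 1 rejected
print(len(c_acc), len(c_non), len(c_rej))  # Carol: 2 acceptable, 2 noncommittal, 5 rejected
```

The counts reproduce the comparison in the text: identical most-preferred positions, but very different latitude structures.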
comes to (for discussion, see Wilmot, 1971a). However, very broadly
speaking, what is meant by “ego-involvement” is roughly the same as
would be meant in colloquially referring to someone’s being “involved
with an issue.” Thus a person might be said to be ego-involved when the
issue has personal significance to the individual, when the person’s stand
on the issue is central to his or her sense of self (hence ego-involvement),
when the issue is important to the person, when the person takes a strong
stand on the issue, when the person is strongly committed to the position,
and so forth. Ego-involvement is thus in a sense an omnibus concept,
meant to refer to this constellation of properties.
noncommitment), and will find many positions objectionable (large
latitude of rejection).
To gather evidence bearing on this claim, one needs a way to assess the
relative sizes of the judgmental latitudes (which the Ordered Alternatives
questionnaire provides) and a procedure for assessing ego-involvement.
Two such ego-involvement measurement procedures are described in the
next section.
Measures of Ego-Involvement
Several different techniques have been devised for assessing ego-
involvement. Two particular measures can serve as useful examples.
This regularity has sometimes led to the suggestion that the size of the
latitude of noncommitment might serve as a measure of ego-involvement
(e.g., C. W. Sherif et al., 1965, p. 234), but the size of the latitude of
rejection is the far more frequently studied index.
This result can seem counterintuitive, but it makes good sense from
the perspective of social judgment theory (particularly against the
backdrop of assimilation and contrast effects, to be discussed shortly).
With increasing ego-involvement, increased perceptual distortion is likely.
When involvement is exceptionally high, the individual’s thinking takes on
an absolutist, black-or-white quality; in such a case, only two categories
might be thought necessary (“Here are the few statements representing the
right point of view—the one I hold—and here are all the wrongheaded
ones”).2
Reactions to Communications
Social judgment theory holds that a receiver’s reaction to a given
persuasive communication will depend centrally on how he or she
evaluates the point of view it is advocating. That implies that, in reacting
to a persuasive message, the receiver must initially come to decide just
what position the message is forwarding. Social judgment theory suggests
that, in making this judgment, the receiver may be subject to perceptual
distortions called assimilation and contrast effects.
Democratic respondents, on the other hand, saw the message as being
slightly pro-Republican. Both groups of respondents thus exhibited a
contrast effect, exaggerating the difference between the message and their
own position (M. Sherif & Hovland, 1961, p. 151). (For other research
illustrating assimilation and contrast effects, see Atkins, Deaux, & Bieri,
1967; Hurwitz, 1986; Manis, 1960; Merrill, Grofman, & Adams, 2001; C.
W. Sherif et al., 1965, pp. 149–163.)
message). A number of studies have reported results consistent with this
general principle (Atkins et al., 1967; Eagly & Telaak, 1972; B. T.
Johnson, Lin, Symons, Campbell, & Ekstein, 1995; Sarup, Suchner, &
Gaylord, 1991; C. W. Sherif et al., 1973; Siero & Doosje, 1993).
This principle has important implications for the question of the effects of
discrepancy (the difference between the message’s position and the
receiver’s position) on attitude change. A persuader might advocate a
position very discrepant from (very different from) the receiver’s own
view, thus asking for a great deal of attitude change; or a persuader might
advocate a position only slightly discrepant from the receiver’s, so seeking
only a small amount of change. The question is: What amount of
discrepancy (between the message’s position and the receiver’s position)
will produce the greatest amount of attitude change in the advocated
direction?
almost certainly fall into the (large) latitude of rejection. Thus with any
one influence attempt, a persuader facing a highly involved receiver may
be able to advocate safely only a small change; obtaining substantial
change from the highly involved receiver may require a series of small
steps over time. By contrast, considerable attitude change might be
obtained from the low-involvement receiver rather rapidly, through
advocating a relatively discrepant (but not too discrepant) position (as
suggested by Harvey & Rutherford, 1958).
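One way to see the contrast between the two receivers is with a toy numerical model. Nothing below comes from social judgment theory's formal apparatus; the thresholds and the unit slope are arbitrary numbers chosen only to mimic the qualitative pattern: change grows with discrepancy until the advocated position falls into the latitude of rejection, after which little or no change is obtained.

```python
# Toy model of the discrepancy rule of thumb. Discrepancy and change
# are in arbitrary units; the rejection threshold marks where the
# advocated position enters the latitude of rejection.

def predicted_change(discrepancy, rejection_threshold):
    if discrepancy < rejection_threshold:
        return float(discrepancy)  # more change asked, more obtained
    # In the latitude of rejection: little or no change.
    return max(0.0, rejection_threshold - discrepancy)

low_involvement = 6   # small latitude of rejection, starting far away
high_involvement = 2  # large latitude of rejection, starting close by

# A moderately discrepant message (discrepancy of 4 units):
print(predicted_change(4, low_involvement))   # 4.0
print(predicted_change(4, high_involvement))  # 0.0
```

The same message that yields substantial change from the low-involvement receiver yields none from the highly involved one, who can safely be asked for only small steps.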
message’s stand and the receiver’s position is reduced—and hence the
communicator is seen as asking for less change than he or she actually
seeks.6 Consider the case of a message that advocates a position in the
latitude of acceptance or the latitude of noncommitment; with increasing
perceived discrepancy, the chances of favorable attitude change
presumably increase. But an assimilation effect will reduce the perceived
discrepancy between the message’s view and the receiver’s position, and
so it will reduce the amount of attitude change obtained. Indeed, in the
extreme case of complete assimilation (when the receivers think that the
message is simply saying what they already believe), no attitude change
will occur, because the audience has misperceived the communicator’s
position. That is, when the recipient mistakenly believes (because of the
perceptual distortion of assimilation) that the message advocates the
recipient’s current position, then the recipient’s attitude will not change.7
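The rule of thumb about perceptual distortion (assimilation for positions outside the latitude of rejection, contrast for positions inside it) can likewise be illustrated with a toy sketch. The nine-point continuum and the one-unit shift are arbitrary choices for illustration, not quantities the theory specifies.

```python
# Toy illustration of assimilation and contrast effects on the
# perceived position of a message. Positions run from 1 to 9 along
# the issue continuum.

def perceived_position(message_pos, own_pos, rejection):
    """Return the (distorted) position the receiver perceives."""
    if message_pos == own_pos:
        return own_pos
    if message_pos in rejection:
        # Contrast: the message seems farther from one's own stand.
        shift = 1 if message_pos > own_pos else -1
    else:
        # Assimilation: the message seems closer to one's own stand.
        shift = -1 if message_pos > own_pos else 1
    return message_pos + shift

own = 2                   # receiver's most preferred position
rejection = {6, 7, 8, 9}  # receiver's latitude of rejection

print(perceived_position(4, own, rejection))  # assimilated: seen as 3
print(perceived_position(7, own, rejection))  # contrasted: seen as 8
```

Assimilation thus shrinks the perceived discrepancy (and with it, the change obtained), while contrast exaggerates it.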
Candidates do sometimes adopt ambiguous positions on “campaign issues”
(economic policy, social issues, and so on). If a candidate were trying to
persuade voters that “the right approach to the issue of gun control is thus-
and-so,” then being ambiguous about the candidate’s position on gun
control would reduce the chances of successful persuasion on that topic.
Such ambiguity would encourage assimilation and contrast effects, thereby
impairing the candidate’s chances of changing anyone’s mind about that
issue.
But, ordinarily, candidates don’t seek to persuade voters about the wisdom
of some particular policy on some campaign issue. Usually, the candidate
hopes to encourage voters to believe that the candidate’s view on a given
issue is the same as the voter’s view. That is, candidates hope that with
respect to campaign issues, voters will assimilate the candidate’s views
(overestimate the degree of similarity between the candidate’s views and
their own).
adjusted) to fit the audience. From the perspective of social judgment
theory, this especially means adapting messages to the recipient’s
judgmental latitudes. As mentioned earlier, social judgment theory
emphasizes that a persuader needs to know more than simply the
receiver’s most preferred position; the structure of the judgmental latitudes
—the sizes and locations of the latitudes of acceptance, rejection, and
noncommitment—is also important. Even if two receivers have the same
most preferred position, a given persuasive message might fall in the
latitude of acceptance for one person but in the latitude of rejection for
another, leading to quite different reactions to a given message.
Persuaders are often not in a position to vary their advocated view for
different audiences. (For example, politicians who attempt to do so can
find themselves accused of “flip-flopping” or “talking out of both sides of
the mouth.”) But in some circumstances, persuaders can be free to vary
what they ask of audiences. For example, a charity might vary how large a
donation is requested depending on the recipient’s financial circumstance;
people who are financially better-off may be asked for larger sums. In such
a circumstance, having some sense of the recipient’s judgmental latitudes
—what requested amounts might seem outrageously large to them (latitude
of rejection) and which might seem at least worth considering (latitude
of noncommitment)—can be crucial.
unreasonable.
Social judgment theory also plainly suggests that messages may need to be
adapted to the audience’s level of ego-involvement. Where message
recipients are not very involved in the issue, a persuader might be able to
advocate a relatively discrepant position without encountering the latitude
of rejection; where the audience is highly involved, on the other hand, the
large latitude of rejection is likely to necessitate a smaller discrepancy if
the message is to be effective.
Critical Assessment
Social judgment theory obviously offers a number of concepts and
principles useful for illuminating persuasive effects. But several
weaknesses in social judgment theory and research have become apparent.
and position extremity are distinct concepts. Involvement and extremity
are often correlated (such that higher involvement is characteristically
associated with more extreme views) but nevertheless conceptually
distinct. Hence it is important to be able to distinguish the effects of ego-
involvement from the effects of position extremity. Social judgment theory
claims that larger latitudes of rejection are the result of heightened ego-
involvement, not the result of extreme positions per se (e.g., C. W. Sherif
et al., 1965, p. 233); but because the research evidence in hand confounds
ego-involvement and position extremity, the evidence is insufficient to
support such a claim.
In fact, the groups used in much social judgment research differed not only
in involvement and position extremity but also in age, educational
achievement, and other variables. As a result, one cannot confidently
explain observed differences (e.g., in the size of the latitude of rejection, or
in the number of categories used in the own-categories procedure) as being
the result simply of involvement differences; one of the other factors, or
some combination of other factors, might have been responsible for the
observed effects. (A more general discussion of this problem with social
judgment research has been provided by Kiesler et al., 1969, pp. 254–257.)
But these are distinguishable properties. For instance, I can think an issue
is important without my stand on that issue being central to my self-
concept (e.g., I think the issue of controlling the federal deficit is
important, but my sense of identity isn’t connected to my stand on this
matter). I can hold a given belief intensely, even though the issue isn’t
very important to me (e.g., my belief that the Earth is round). An issue
may not be personally relevant to me (e.g., abortion), but I could
nonetheless be strongly committed to a position on that issue, and my
stand on that issue could be important to my sense of self. I can hold a
belief strongly (e.g., about the superiority of a given basketball team), even
though that belief isn’t central to my self-concept.
First, the measures are not very strongly correlated with each other. Two
instruments that measure the same property ought to be strongly
correlated. For example, in the case of the two common measures of ego-
involvement, the two measures should be strongly negatively correlated: as
the size of the latitude of rejection increases, the number of categories
created should decrease. But research that has examined the correlations
among various involvement measures (including, but not limited to, the
size of the latitude of rejection on the Ordered Alternatives questionnaire
and the number of categories in the Own Categories procedure) has
commonly yielded correlations that are roughly zero (e.g., Wilmot,
1971b). The implication is that the different measures of involvement
cannot all be measuring the same thing. Maybe one of them is measuring
involvement and the others are not, or maybe none of them is measuring
involvement. But plainly these assessments are not all measuring the same
thing.
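The reasoning in this paragraph amounts to a simple statistical check. The sketch below computes a Pearson correlation between two hypothetical sets of involvement scores; the data are invented to mimic the "roughly zero" pattern reported in the literature, not taken from any actual study.

```python
# Checking whether two putative measures of ego-involvement agree.
# If both measured involvement, latitude-of-rejection size and the
# number of Own Categories should correlate strongly and negatively.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical respondents: size of each person's latitude of rejection,
# and the number of categories each created in the Own Categories task.
rejection_size = [5, 6, 3, 7, 2, 4]
num_categories = [4, 2, 5, 6, 3, 4]

r = pearson_r(rejection_size, num_categories)
print(round(r, 2))  # 0.23 -- nowhere near the expected strong negative r
```

A correlation near zero, as here, is the pattern that undermines the claim that both instruments measure the same underlying property.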
in the topic (R. A. Clark & Stewart, 1971; Krosnick, Boninger, Chuang,
Berent, & Carnot, 1993; Wilmot, 1971b).
In short, there are good empirical grounds for concern about the adequacy
and meaning of the common measures of ego-involvement. This is perhaps
to be expected, however, given the lack of clarity of the concept of ego-
involvement; one cannot hope to have a very satisfactory assessment
procedure for a vague and indistinct concept. In any case, the empirical
evidence suggests that the various indices of ego-involvement ought not be
employed unreflectively.
Conclusion
In some ways social judgment theory is too simplified to serve as a
complete account of persuasive effects. From a social judgment theory
point of view, the only features of the message that are relevant to its
impact are (a) the position it advocates and (b) the clarity with which it
identifies its position. It doesn’t matter whether the message contains
sound arguments and good evidence or specious reasoning and poor
evidence; it doesn’t matter what sorts of values the message appeals to,
how the message is organized, or who the communicator is. Everything
turns simply on what position the message is seen to defend. And surely
this is an incomplete account of what underlies persuasive message effects.
But a theory can be useful even when incomplete. Social judgment theory
does draw one’s attention to various important facets of the process of
persuasion. For example, realizing the possibility of assimilation and
contrast effects can be crucially important to persuasive success. A
persuader who is not sufficiently clear about his or her advocated position
may think persuasion has been achieved because the message recipient
professes complete agreement with the message, but if the recipient
misperceived what view was being advocated (an assimilation effect), such
expressions of agreement will be misleading indicators of persuasive
success.
Sometimes it may be enough to get recipients to see that the persuader’s
position falls in their latitude of noncommitment. For example, where
public policy issues are the subject of advocacy, “the battle is not to
convince citizens that one’s policy is right, but simply that it is not
unreasonable” (Diamond & Cobb, 1996, p. 242).
For Review
1. What is the central tenet of social judgment theory? Upon what is the
effect of a persuasive communication said to centrally depend?
According to social judgment theory, what are the two steps involved
in attitude change?
2. Explain the idea that people have judgments of the alternative
positions available on an issue. How can one obtain such judgments?
Describe the Ordered Alternatives questionnaire. What instructions
are respondents given for completing the Ordered Alternatives
questionnaire?
3. What are the judgmental latitudes? What is the latitude of
acceptance? The latitude of rejection? The latitude of
noncommitment? Explain how, for social judgment theory, a person’s
stand on an issue is represented by more than the person’s most-
acceptable position.
4. What is ego-involvement? What is the conceptual relationship of ego-
involvement and position extremity? Is being ego-involved in an issue
the same thing as holding an extreme position on the issue?
According to social judgment theory, what is the empirical
relationship of ego-involvement and position extremity?
5. How is ego-involvement predicted to influence the structure of the
judgmental latitudes? What latitude structure is said to be
characteristic of a person high in ego-involvement? Of a person low
in ego-involvement?
6. Explain how, in social judgment theory research, group membership
was used to validate the use of the size of the latitude of rejection (on
the Ordered Alternatives questionnaire) as a measure of ego-
involvement. What is the Own Categories procedure? Explain how
ego-involvement is thought to influence the number of categories
used in the Own Categories procedure.
7. What are assimilation and contrast effects (broadly speaking)? What
is a contrast effect? What is an assimilation effect? What is the rule of
thumb concerning when each effect will occur? Explain how, because
of assimilation and contrast effects, the perceived position of a
persuasive message may be different for people with different
positions on the issue. What is the relationship between ego-
involvement and assimilation and contrast effects? What kinds of
messages are subject to assimilation and contrast effects? How can a
persuader minimize assimilation and contrast effects?
8. Describe social judgment theory’s rule of thumb concerning attitude
change effects following persuasive communications. What is
“discrepancy”? What is the relationship between discrepancy and
attitude change, according to social judgment theory? Describe how
this analysis suggests different approaches to persuading high- and
low-involvement receivers.
9. Explain how contrast effects reduce the effectiveness of persuasive
messages. Explain how assimilation effects reduce the effectiveness
of persuasive messages. How can political campaigns exploit
assimilation effects concerning positions on policy issues?
10. Explain how persuasive messages might be adapted to the recipient’s
judgmental latitudes. Explain how messages might be adapted to the
recipient’s level of ego-involvement.
11. What does it mean to say that two factors (variables) are confounded?
Describe how extremity and involvement have been confounded in
social judgment research. Explain the implications of this
confounding for interpreting social judgment research. Explain how
the concept of ego-involvement conflates a number of different
concepts.
12. Identify and describe two worrisome findings concerning the
measures of ego-involvement. What sort of correlation is expected
between two instruments that measure the same property? If two
measures of involvement do measure involvement, what correlations
would be expected between them (e.g., between the number of
categories in the Own Categories procedure and the size of the
latitude of rejection on the Ordered Alternatives questionnaire)? What
correlations have been observed? Are the measures of ego-
involvement strongly correlated with each other? Do the measures of
ego-involvement display the expected patterns of association with
other variables? How are measures of ego-involvement related to
assessments of perceived topic importance or commitment to one’s
position?
Notes
1. The anchoring of attitudes in reference groups is emphasized in some
social judgment theory conceptualizations of involvement (e.g., M. Sherif
& Sherif, 1967, pp. 135–136), and hence this was an attractive research
procedure.
factors influencing the perceived position of messages, see, for example,
Kaplowitz and Fink (1997) and R. Smith and Boster (2009).
9. A third troubling finding, not discussed here, is that there is more cross-
issue consistency in an individual’s apparent level of ego-involvement (as
assessed by common measures of ego-involvement) than should be
expected given that ego-involvement is an issue-specific property (that is,
one that varies from issue to issue for a given individual), not a
personality-trait-like disposition; for discussion and references, see
O’Keefe (1990, pp. 42–43).
Chapter 3 Functional Approaches to
Attitude
utilitarian, ego-defensive, value-expressive, and knowledge. The utilitarian
function is represented by attitudes that help people maximize rewards and
minimize punishments. For example, students who experience success
with essay exams are likely to develop favorable attitudes toward such
exams. Attitudes serving a utilitarian function, Katz suggested, will be
susceptible to change when the attitude (and related activities) no longer
effectively maximizes rewards and minimizes punishments. Thus
utilitarian attitudes are probably most effectively changed by either
creating new rewards and punishments (as when, for instance, a company
creates a new incentive program to encourage suggestions by employees)
or by changing what is associated with existing rewards and punishments
(as when a company changes the basis on which salespeople’s bonuses are
calculated).
Middle East) can be to, in effect, identify the “good guys” and the “bad
guys.” That is, attitudes (evaluations) can serve as at least a superficial
mechanism for organizing one’s understandings of such situations.
Attitudes serving a knowledge function, Katz suggested, are especially
susceptible to change through the introduction of ambiguity (as when the
good guys do something bad or the bad guys do something good); such
ambiguity indicates that the attitudes are not functioning well to organize
information, thus making the attitudes more likely to change.
Katz’s analysis did not initially attract much research attention, in good
part because of perceived difficulties in assessing attitude function (for
some discussion, see Kiesler, Collins, & Miller, 1969, pp. 302–330;
Shavitt, 1989). But functional analyses of attitude have subsequently
flowered.
Subsequent Developments
smoothly with their peers” (p. 341); expression of the attitude may elicit
social approval, make it easier to adapt to social situations, and the like.1
Shavitt’s (1990) taxonomy distinguished a utilitarian function, a social-
identity function (understood as including both social-adjustive and value-
expressive functions), and a self-esteem maintenance function (including
ego-defensive purposes). Gastil (1992) proposed six attitude functions:
personal utility, social utility, value expressive, social adjustment (easing
social interaction), social-identity (forging one’s identity), and self-esteem
maintenance.
But there is not yet a consensus on any one functional typology. This
surely reflects the lack of any simple, easily assessed source of evidence
for or against a given function list. An attitude function taxonomy
presumably shows its worth by being broadly useful, across a number of
applications, in illuminating the underlying motivational bases of attitude.
Expressed generally, this illumination consists of showing that the scheme
in question permits one to detect or predict relevant events or relationships,
but this evidence can be quite diverse. A given typology’s value might be
displayed by showing that knowledge of an attitude’s function (as captured
by the typology in question) permits one to predict or detect (for example)
the product features that persons will find most appealing, the relative
effectiveness of various persuasive messages, the connection between
personality traits and attitude functions, and so on. But because for any
given typology there commonly is relatively little research evidence
distinctively bearing on that scheme, there is at present little basis for
supposing that any given specific typology is unquestionably superior to
all others. (There is even less evidence comparing the usefulness of
alternative taxonomies; for an example, see Gastil, 1992.)
This lack of consensus makes for a rather chaotic and unsettled situation,
one in which a genuine accumulation of results (and corresponding
confident generalization) is difficult. If there were one widely agreed-on set
of specific functions, then research could straightforwardly be
accumulated; more could be learned about (say) what personality traits or
situational features incline persons to favor this or that function, what sorts
of messages are best adapted for changing attitudes serving the various
functions, and so forth. Instead, most of the research evidence concerning
functional attitude analyses is of a piecemeal sort: One study compares
personality correlates of social-adjustive and value-expressive functions,
another examines different means of influencing attitudes serving ego-
defensive functions, and so on.
In such a circumstance, one promising approach might be to paint in
broader strokes, deferring matters of detailed functional typologies in favor
of identifying some general functional differences. One broad functional
distinction has been found widely useful and seems contained (implicitly
or explicitly) in a great many attitude function analyses: a distinction
between symbolic and instrumental attitude functions (see Abelson &
Prentice, 1989; Ennis & Zanna, 2000, pp. 396–397). Briefly expressed,
symbolic functions focus on the symbolic associations of the object;
attitudes serving a symbolic function do the jobs of expressing
fundamental moral beliefs, symbolizing significant values, projecting self-
images, and the like (e.g., Katz’s ego-defensive function). Instrumental
functions focus on the intrinsic properties of the object; attitudes serving
instrumental functions do the jobs of summarizing the desirable and
undesirable aspects of the object, appraising the object through specific
intrinsic consequences or attributes, and so forth (e.g., Katz’s utilitarian
function).
For example, concerning stricter gun control laws in the United States, a
supporter’s positive attitudes might have a predominantly symbolic basis
(beliefs such as “It represents progress toward a more civilized world”) or
an instrumental basis (“It will reduce crime because criminals won’t be
able to get guns so easily”); similarly, an opponent’s negative attitudes
might be motivated by largely symbolic considerations (“It represents
impingement on constitutional rights”) or by largely instrumental
considerations (“It will increase crime because criminals will still have
guns, but law-abiding citizens won’t”). Of course, it is possible for a
person’s attitude on a given topic to have a mixture of symbolic and
instrumental underpinnings. And an attitude’s function might change
through time. For instance, an attitude might initially serve a symbolic
function but subsequently come to predominantly serve instrumental ends
(see Mangleburg et al., 1998). But the general distinction between
symbolic and instrumental attitude functions appears to be a broadly useful
one (see, e.g., Crandall, Glor, & Britt, 1997; Herek & Capitanio, 1998; A.
Kim, Stark, & Borgida, 2011; Prentice & Carlsmith, 2000).
One straightforward procedure for assessing the function of a given
attitude involves coding (classifying) relevant free-response data (data
derived from open-ended questions). For example, Shavitt (1990) asked
participants to write down “what your feelings are about the attitude
object, and why you feel the way you do. … Write down all of your
thoughts and feelings that are relevant to your attitude, and try to describe
the reasons for your feelings” (p. 130). Responses were then classified on
the basis of the apparent attitude function. For example, responses
concerning what the attitude communicates to others were coded as
indicating a social-identity function, whereas responses focused on
attributes of the attitude object were classified as reflecting a utilitarian
function.
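The logic of this classification step can be sketched in code. The sketch below is a toy illustration only: the keyword cues are hypothetical stand-ins, since in Shavitt's (1990) study the classification was performed by human coders rather than by automated matching.

```python
# Toy sketch of function-coding free-response data (hypothetical keyword cues;
# in Shavitt (1990) trained human coders made these judgments).
SOCIAL_IDENTITY_CUES = ["others", "people think", "impression", "image"]
UTILITARIAN_CUES = ["works", "useful", "quality", "price", "effective"]

def code_response(text: str) -> str:
    """Classify a free response by which set of cues it mentions more often."""
    t = text.lower()
    social = sum(cue in t for cue in SOCIAL_IDENTITY_CUES)
    utilitarian = sum(cue in t for cue in UTILITARIAN_CUES)
    if social > utilitarian:
        return "social-identity"
    if utilitarian > social:
        return "utilitarian"
    return "uncodable"

print(code_response("It shows others the impression I want to give"))
print(code_response("It works well and the price is fair"))
```

Responses about what the attitude communicates to others fall in the first category; responses about attributes of the object itself fall in the second, mirroring the coding distinction described above.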
whereas “members of a social group to which I belong expect people to
volunteer” was a social-adjustive reason. (For other examples of the use of
these or similar instruments, see Clary et al., 1998; Ennis & Zanna, 1993;
Gastil, 1992; Herek, 2000; Shavitt, 1990.)
For any such proxy measure, of course, the key question will be the degree
to which the proxy is actually related to differences in attitude function, a
question probably best addressed by examining the relationship between
proxy measures and more direct assessments. In the specific case of
personality characteristics such as self-monitoring, presumably such
characteristics merely incline persons (in appropriate circumstances) to be
more likely to favor one or another function. For instance, it is surely not
the case that all the attitudes of high self-monitors (whether toward aspirin
or automobiles or affirmative action) serve social-adjustive functions. (See
Herek, 2000, pp. 332–335, for commentary on the use of such proxy
measures.)
Individual Differences
Different persons can favor different attitude functions, as
straightforwardly illustrated by self-monitoring. As just discussed, high
self-monitors appear to favor social-adjustive functions, whereas low self-
monitors seem more likely to adopt value-expressive functions. Other
personality correlates of attitude function differences have not received so
much recent research attention, although plainly it is possible that other
individual-difference variables might be related to differences in attitude
function (see, e.g., Katz, McClintock, & Sarnoff, 1957; Zuckerman,
Gioioso, & Tellini, 1988). But apart from any underlying personality
differences, people’s motivations can vary. For example, different people
can have different reasons for volunteering, although those differences
might not be systematically related to any general personality disposition.
Attitude Object
The function of an attitude toward an object may also be shaped by the
nature of the object because objects can differentially lend themselves to
attitude functions. For example, air conditioners commonly evoke
predominantly utilitarian thoughts (“keeps the air cool,” “expensive to
run”), whereas wedding rings are more likely to elicit social-identity
thoughts (“represents a sacred vow”; Shavitt, 1990). Similarly, attitudes
toward shampoo are determined more by instrumental attributes (such as
conditioning hair) than by symbolic ones (such as being a high-fashion
brand), whereas for perfume, symbolic attributes are more influential than
instrumental ones (Mittal, Ratchford, & Prabhakar, 1990).
Situational Variations
Different situations can elicit different attitude functions (for a general
discussion, see Shavitt, 1989, pp. 326–332). For example, if the situation
makes salient the intrinsic attributes and outcomes associated with an
object, presumably instrumental (utilitarian) functions will be more likely
to be activated; by contrast, social-identity functions might be engaged by
“situations that involve using or affiliating with an attitude object, or
expressing one’s attitude toward the object, in public or in the presence of
reference group members” (p. 328). Thus attitude functions may vary
depending on features of the immediate situation.
to serve different functions because the attitude object can accommodate
different functions. Similarly, situational factors can influence the salience
of various functions only if the attitude object permits different attitude
functions. The larger point is that the attitude object, individual
differences, and situational factors all intertwine to influence attitude
function.3
for the image-oriented advertisement, the slogan read, “You’re not just
moving in, you’re moving up,” whereas the product quality-oriented
advertisement claimed, “When it comes to great taste, everyone draws the
same conclusion.”
utilitarian product (such as air conditioners) or a social-identity product
(such as greeting cards); brands advertised with function-relevant appeals
were preferred over brands advertised with function-irrelevant appeals (so
that, for example, ads using utilitarian appeals were preferred over ads
using social-identity appeals when air conditioners were advertised, but
this preference was reversed when greeting cards were advertised).
Finally, these same effects have been observed when the situational
salience of attitude functions was varied experimentally. Julka and Marsh
(2005) varied the degree to which a knowledge or value-expressive
function was activated, and then they exposed participants to either a
matched or mismatched persuasive appeal for organ donation; matched
appeals produced more favorable attitude change and led participants to be
more likely to take an organ-donor registration card.
the supporting arguments (strong versus weak arguments). Attitudes were
more strongly influenced by argument quality when the message contained
matched appeals than when it contained mismatched appeals. For instance,
high self-monitors were more influenced by the strength of the arguments
when the appeals were image-based than when the appeals were product
quality-based. This effect suggests that receivers more carefully
scrutinized messages with appeals matching their functional attitude bases
than they did messages with mismatched appeals (see also DeBono, 1987;
Lavine & Snyder, 1996, 2000; Petty, Wheeler, & Bizer, 2000). Several
studies have reported related findings suggesting that high self-monitors
more carefully process messages from attractive than unattractive (or
expert) communicators (DeBono & Harnish, 1988; DeBono & Telesca,
1990), findings that might reflect the propensity for high self-monitors to
favor social-adjustive functions. And this explanation is at least not
inconsistent with evidence suggesting that function-matched messages
may be better remembered than mismatched messages (DeBono & Packer,
1991, Study 3).
Both explanations might turn out to have some merit. For example, it may
be that when message scrutiny is already likely to be high, the different
intrinsic appeal of matched and mismatched arguments will play a key
role, whereas in other circumstances, the functional match or mismatch of
arguments will influence the degree of scrutiny given (Petty et al., 2000;
Ziegler, Dobre, & Diehl, 2007).7 Further research on these questions will
be welcomed.
Commentary
Generality and Specificity in Attitude Function
Typologies
The general enterprise of functional attitude analysis is driven by the
search for a small set of universal, exhaustive, and mutually exclusive
attitude functions that can be used to dependably and perceptively
distinguish any and all attitudes. But as discussed earlier, there is not yet a
consensus on any such set of functions, and perhaps there never will be
such consensus.
consensus on a general attitude function typology, analyses of specific
attitudes will still almost certainly require the typology to be modified
(elaborated, refined, adapted) to provide maximum illumination of the
particular attitude under study.
Functional Confusions
prevent execution of the innocent”), but this does not mean that the
person’s expressing opposition to capital punishment serves an
instrumental function; depending on the circumstances, expression of that
attitude might serve some thoroughly symbolic end (symbolizing one’s
values, for instance). Thus there is a difference between the jobs done by
the attitude object (the product, the policy) and the jobs done by
expressing attitudes concerning that object.9
Lavine & Snyder, 2000; Pratkanis & Greenwald, 1989; Shavitt, 1990).
This state of affairs suggests that it may be useful to reconsider the
assessment and conceptualization of attitude function with the relevant
distinctions in mind.
to indicate differences in attitude functions—might more lucidly be
characterized as simply differences in what people value in (that is, what
they want from) attitude objects.
To put it another way: High and low self-monitors want different things
from their consumer products. But this does not mean that high and low
self-monitors want different things from their attitudes. One might
plausibly say that high and low self-monitors want their attitudes to do the
same job—the job of identifying good and bad products for them.10 Thus,
instead of the supposition that high and low self-monitors have attitudes
that serve different functions, what seems invited is the conclusion that
although high and low self-monitors may sometimes differ in the criteria
they use to appraise objects, the underlying function of the evaluation (the
function of the attitude) is identical.
regular physical activity. Respondents rated the desirability and likelihood
of various possible consequences of regular exercise. These consequences
were then grouped into three attitude functions: a “utilitarian” function
(e.g., “would help me reduce stress,” “would help me feel more
energetic”), a “social-identity” function (e.g., “would provide me with
more opportunities to socialize,” “would help me improve my social
relationships”), and a “self-esteem maintenance” function (e.g., “would
help me lose weight,” “would help me stay in shape”). The importance of
these different functions of exercise (note: not functions of attitudes
toward exercise, but functions of exercise) varied across people. In
particular, the importance of social-identity outcomes (as influences on
intention) varied depending on self-monitoring: high self-monitors placed
more emphasis on social-identity outcomes than did low self-monitors. So
although high and low self-monitors appear to want different things from
exercise (they differently value various outcomes), they would seem to
want the same thing from their attitudes (namely, serving the function of
indicating whether regular exercise would be a good thing for them to do).
In sum, the procedures commonly used for assessing attitude functions can
instead be understood as assessing variations in the perceived value or
importance of attributes (or functions) of the attitude object.11
ideas (e.g., equality, honesty) in terms of their importance as guiding
principles in one’s life” (Maio & Olson, 1995, p. 268). From this point of
view, a person considering whether to make a charitable donation who
thinks about “the importance of helping others” has a value-expressive
attitude, whereas people who think about “whether they can afford to
donate” have utilitarian attitudes (Maio & Olson, 2000a, p. 251).
This sort of reasoning led Maio and Olson (2000a, pp. 258–260) to
introduce the idea of “goal-expressive” attitudes, precisely meant to
“encompass what Katz referred to as value-expressive and utilitarian
functions” (p. 259). By collapsing value-expressive and utilitarian attitudes
into one functional category, this approach abandons the idea that value-
expressive and utilitarian attitudes serve different purposes; it recognizes
their similarity in abstract attitude function (appraisal) while not losing
sight of the variation possible in substantive motivational content (for a
related view, see Eagly & Chaiken, 1998, p. 304; for a similar treatment of
value-expressive and social-adjustive attitudes, see Hullett & Boster,
2001).
Summary
Taken together, these considerations invite a simpler, more straightforward
account of much research on attitude function variation. Specifically, this
work might more perspicaciously be described as work identifying
variation in what people value (their wants, goals, evaluations of various
properties of objects, and so on). As indicated above, both the procedures
commonly used to differentiate attitude functions and the conceptual
treatment of value-expressive and utilitarian functions can be seen to
distinguish cases on the basis of persons’ values, not on the basis of
attitude function.
what they want from those objects) will naturally be likely to enjoy some
persuasive advantage (as observed by Shavitt, 1990). Similarly for
situational variations: When certain values (attributes, outcomes, etc.) are
made more salient, persuasive appeals engaging those wants are likely to
be more successful than appeals engaging nonsalient desires (e.g., Maio &
Olson, 2000a, Study 4).
These two ideas are currently clothed in talk about variation in attitude
function, but such talk is at least misleading and arguably dispensable in
favor of talk about variation in values.12 In the long run, however, clear
treatment of variation in values will require some typology of values, that
is, some systematic analysis of the ways in which values (goals, desired
properties of objects, etc.) can vary (for some classic examples, see
Rokeach, 1973; Schwartz, 1992). The empirical success of research using
attitude function categories suggests that these categories might provide
some leads in this regard (although now the consequences of the lack of
agreement about a functional taxonomy may be more acutely felt). For
example, a carefully formulated version of the symbolic-instrumental
contrast might serve as one way of distinguishing variation in values (see
Allen, Ng, & Wilson, 2002; Eagly & Chaiken, 1998, p. 304). It may be
profitable, however, to consider other sources as well and, in particular, to
consider independent work on typologies of general, abstract values as a
possible source of further insight (as recommended by Maio & Olson,
2000a).
This suggests that there are at least two distinguishable broad functions of
attitude.15 But most of the work on persuasion and attitude functions has
implicitly addressed attitudes serving object appraisal functions and so has
focused on adapting messages to different bases of object appraisal. Scant
work is concerned with (for example) how persuasion might be effected
when attitudes serve ego-defensive ends or with how to influence attitudes
adopted because of the reference group identification purposes served by
holding the attitude.16
Thus there is good reason to want to retain some version of the idea of
different attitude functions, as illustrated by the apparent usefulness of a
contrast between object-appraisal functions and self-maintenance
functions.17 But if the idea of attitude function is to be revived, a
consistent and clear focus on the functions of attitudes (as opposed to the
functions of objects or the functions of attitude expression) will be needed,
accompanied by attention to the continuing challenge of attitude function
assessment.
Conclusion
Despite some conceptual unclarities, work on the functional approach to
attitudes has pointed to some fundamentally important aspects of attitude
and persuasion. In cases in which attitudes are primarily driven by an
interest in object appraisal, persuaders will want to attend closely to the
receiver’s basis for assessing the attitude object. What people value can
vary, and hence the persuasiveness of a message can depend in good
measure on whether the message’s appeals match the receiver’s values.
For Review
1. Explain the general idea behind functional approaches to attitude.
2. In Katz’s classic analysis of attitude function, what four attitude
functions are identified? Explain the utilitarian function. What
techniques are best adapted to changing attitudes serving a utilitarian
function? Explain the ego-defensive function. What techniques are
best adapted to changing attitudes serving an ego-defensive function?
Explain the value-expressive function. Under what conditions are
attitudes serving a value-expressive function likely to be susceptible
to change? Explain the knowledge function. What is the primary
mechanism of change for attitudes that serve a knowledge function?
3. Is there a consensus about a particular typology of attitude functions?
Is there a broad distinction (among functions) that is common to
alternative functional typologies? Explain symbolic functions of
attitude. Explain instrumental functions of attitude.
4. Describe three ways of assessing the function of a given attitude.
What is free-response data? Explain how free-response data can be
analyzed to reveal attitude functions. Explain how standardized
questionnaires can be used to assess attitude functions. Explain how
proxy indices can be used to assess attitude functions; give an
example.
5. Identify three kinds of factors that can influence attitude function.
Describe how individual differences can influence attitude function;
give an example. Explain how the nature of the attitude object can
influence attitude function. Give examples of objects for which
attitudes likely serve a generally instrumental function; give examples
of objects for which attitudes likely serve a generally symbolic
function. What are multifunctional attitude objects? Describe how
situational variations can affect attitude function. For what kinds of
attitude objects are individual differences and situational variations
likely to have the greatest effect on attitude function?
6. Explain how functional approaches provide a basis for adapting
persuasive messages to recipients. What is function matching? Are
function-matched appeals generally more persuasive than mismatched
appeals? Describe image-oriented advertising appeals. Describe
product quality-oriented advertising appeals. Are high self-monitors
generally more persuaded by image-oriented or by product quality-
oriented appeals? Are low self-monitors generally more persuaded by
image-oriented or by product quality-oriented appeals? Describe two
possible explanations of the persuasive advantage of function-
matched appeals over mismatched appeals.
7. Explain how the general idea of attitude functions can be useful even
in the absence of an agreed-upon universal typology of attitude
functions. How might different functional typologies be useful for
different specific attitudes?
8. Explain the distinction between the functions of an attitude and the
functions of expressing an attitude. Explain the distinction between
the functions of an attitude and the functions of an attitude object.
Explain the distinction between the functions of an attitude object and
the functions of expressing an attitude. Describe how these different
functions have been confused in theory and research about attitude
functions.
9. Explain how differences in attitude function as assessed through
open-ended questions reflect differences in what respondents value in
attitude objects. Explain how differences in attitude function as
assessed through self-monitoring reflect differences in what
respondents value in attitude objects. Explain how the distinction
between utilitarian and value-expressive functions reflects differences
in what respondents value in attitude objects.
10. How can the idea of function-matched appeals be redescribed in
terms of matching the audience’s values? Describe the difference
between object appraisal and self-maintenance as two broad attitude
functions. Which has been the focus of most research attention?
Notes
1. Snyder and DeBono’s (1989) description of the social-adjustive function
implicitly focused not on the function of the attitude but on the function of
the attitude expression. By contrast, M. B. Smith et al.’s (1956) discussion
of this function emphasized that “one must take care to distinguish the
functions served by holding an opinion and by expressing it” (p. 41). The
potential social-adjustive function of attitude expression is straightforward
enough (e.g., one can fit into social situations by expressing this or that
opinion). The social-adjustive function of simply holding an attitude, on
the other hand, is “at once more subtle and more complex” (p. 42). At
base, it involves the creation of feelings of identification or similarity
through attitudes; the mere holding of certain attitudes can be “an act of
affiliation with reference groups” (M. B. Smith et al., 1956, p. 42),
independent of any overt expression of the attitude. Unhappily, as
discussed later in this chapter, the distinction between attitude functions
and attitude expression functions has not commonly been closely
observed.
different functions depending on self-monitoring; Wang, 2012).
5. Actually, there are a number of studies that (a) are not commonly
treated as representing research on attitude function-matching and (b) may
not even cite attitude function-matching research but that nevertheless (c)
examine the relative effectiveness of persuasive appeals that have been
designed to match variations in receivers’ psychological needs as
extrapolated from some individual-difference variable (thus paralleling the
research format of much function-matching research). Studies by Cesario,
Grant, and Higgins (2004, Study 2), Orbell and Hagger (2006), and Aaker
and Schmitt (2001)—examining, respectively, regulatory focus
(prevention-focused versus promotion-focused), consideration of future
consequences (temporally distant versus temporally proximate
consequences), and individualism-collectivism (as reflected in cultural
variations)—provide just three examples. For a general discussion of this
point, see O’Keefe (2013a).
application of such adjustments inflates effect sizes relative to those in
other persuasion meta-analyses based on unadjusted effect sizes. Second,
at least some studies in this research area have used designs in which a
given participant saw both a matched and a mismatched appeal, often in
close proximity; as Shavitt (1990, pp. 141–142) pointed out, such designs
might be expected to yield larger effect sizes than would the more usual
between-subjects designs (in which a participant sees only one kind of
appeal).
10. In a sense, of course, the consumer product attitudes of high and low
self-monitors do different jobs, because the attitudes of high self-monitors
focus on one type of product attribute and the attitudes of low self-
monitors focus on another type: High self-monitors want their attitudes to
do the job of identifying objects that satisfy high self-monitor values, and
low self-monitors want their attitudes to do the job of identifying objects
that satisfy low self-monitor values. However, such a way of
differentiating attitude functions could be taken to absurd lengths, in that
whenever two persons differentially valued some attribute of an object,
their attitudes could be said to serve different functions; if Alice values,
but Betty does not, an automobile’s having a built-in navigation system,
then their attitudes toward automobiles serve different functions (in that
only Alice’s attitude would do the job of identifying cars that satisfy
Alice’s valuing of navigation systems). The real question is how to group
different possible attitude jobs (when to lump them together, when to
distinguish them), and the suggestion here is that it will be useful to
recognize that although high and low self-monitors may vary in what they
value, there is a sense in which the fundamental job done by their attitudes
—evaluative appraisal in the service of value satisfaction—is the same. (In
particular, as will be suggested shortly, attitudes driven by this sort of
interest look rather different from attitudes driven by an interest in ego
protection.)
11. As Eagly and Chaiken (1993, p. 490; 1998, p. 308) have stressed, early
functional approaches emphasized latent motivational aspects of attitudes,
aspects not necessarily apparent in manifest belief content or conscious
thought—and hence not necessarily well captured by coding the manifest
content of answers to open-ended questions or by examining responses to
standardized self-report instruments.
maintaining one’s view of oneself.
15. In fact, Pratkanis and Greenwald’s (1989) analysis proposed just these
two functions: “First, an attitude is used to make sense of the world and to
help the organism operate on its environment. … Second, an attitude is …
used to define and maintain self-worth” (p. 249). This latter function
unfortunately elides attitude function and attitude expression function:
“We attach different labels to this self-related function of attitude,
depending on the audience (public, private, or collective) that is observing
the attitude and its expression” (p. 249).
16. Some work exists concerning attitudes about objects that serve self-
related purposes (such as attitudes about class rings), but this is different
from work concerning attitudes serving self-related purposes.
otherwise a similar approach), whereas persuading bigots likely requires
something rather different. For a similar general conclusion, see Carpenter,
Boster, and Andrews (2013).
Chapter 4 Belief-Based Models of Attitude
The Model
The summative model of attitude (Fishbein, 1967a, 1967b) is based on the
claim that one’s attitude toward an object is a function of one’s salient
beliefs about the object. For any given attitude object, a person may have a
large number of beliefs about the object. But at any given time, only some
of these are likely to be salient (prominent)—and it is those that are
claimed to determine one’s attitude. In, say, a public opinion or marketing
questionnaire, one might elicit the respondent’s salient beliefs (e.g., about
a product or a political candidate) by asking the respondent to list the
characteristics, qualities, and attributes of the object. Across a number of
respondents, the most frequently mentioned attributes represent the
modally salient beliefs, which can be used as the basis for a standardized
questionnaire. (For discussion of procedures for identifying salient beliefs,
see Ajzen & Fishbein, 1980, pp. 68–71; Ajzen, Nichols, & Driver, 1995;
Breivik & Supphellen, 2003; Fishbein & Ajzen, 2010, pp. 100–103;
Middlestadt, 2012; van der Pligt & de Vries, 1998b; for some
complexities, see Roskos-Ewoldsen & Fazio, 1997.)
The procedures for assessing the elements of this model are well
established. One’s attitude toward the object (AO) can be obtained by
familiar attitude measurement techniques. The strength with which a belief
is held (bi) can be assessed through scales such as likely–unlikely,
probable–improbable, and true–false. The evaluation of a belief (ei) is
assessed through semantic-differential evaluative scales such as good–bad,
desirable–undesirable, favorable–unfavorable, and the like.
most salient beliefs held about Senator Smith by the senator’s constituents
were that the senator supports defense cuts, is helpful to constituents, is
respected in the Senate, and is unethical. One might assess the strength
with which the first of these beliefs was held by respondents through items
such as those listed in Figure 4.1. The evaluation of that belief can be
assessed with items such as those in Figure 4.2.
Suppose (to simplify matters) that for each belief, belief strength and belief
evaluation were assessed by a single scale (perhaps “likely–unlikely” for
belief strength, “good–bad” for belief evaluation) scored from +3 (likely
or good) to −3 (unlikely or bad). A particular respondent might have the
belief strength and belief evaluation ratings for the four salient beliefs
about Senator Smith shown in Figure 4.3. The respondent in Figure 4.3
believes that it is quite likely that the senator supports defense cuts (belief
strength of +3), and supporting defense cuts is seen as a moderately
negative characteristic (evaluation of −2); the respondent thinks it very
unlikely that the senator is helpful to constituents (helpfulness to
constituents being thought to be a very good quality); the respondent
thinks it moderately likely that Smith is respected in the Senate, and that is
a slightly positive characteristic; and the respondent thinks it rather
unlikely that Smith possesses the highly negative characteristic of being
unethical.
Figure 4.3 Estimating attitude from belief strength (bi) and belief
evaluation (ei).
Because (in this example) each belief strength score (bi) can range from −3
to +3 and each belief evaluation score (ei) can range from −3 to +3, each
product (biei) can range from −9 to +9, and hence the total (across the four
beliefs in this example) can range from −36 to +36. A person who thought
that the qualities of supporting defense cuts, being helpful to constituents,
and being respected in the Senate were all very positive characteristics
(belief evaluations of +3 in each case) and who thought it very likely that
the senator possessed each of these qualities (belief strength of +3 for
each), and who also thought it quite unlikely (−3 belief strength) that the
senator possessed the strongly negative (−3 belief evaluation)
characteristic of being unethical would have a total (Σbiei) of +36,
indicating an extremely positive attitude toward the senator—as befits
such a set of beliefs. By comparison, the hypothetical respondent with a
total of −7 might be said to have a slightly negative attitude toward Senator
Smith.
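As a minimal illustration of the summative formula’s arithmetic, the verbal ratings just described can be given plausible numeric readings (the exact values are assumptions; only the verbal descriptions appear in the text). A short Python sketch:

```python
# Summative model sketch: predicted attitude = sum of belief strength (b_i)
# times belief evaluation (e_i), each rated on a -3 to +3 scale.

def predicted_attitude(beliefs):
    """Return the summative-model estimate, the sum of b_i * e_i."""
    return sum(b * e for b, e in beliefs)

# (belief strength, belief evaluation) pairs for the Senator Smith example;
# the numeric values are plausible readings of the verbal descriptions above.
smith_beliefs = [
    (+3, -2),  # supports defense cuts: quite likely, moderately negative
    (-3, +3),  # helpful to constituents: very unlikely, very good quality
    (+2, +1),  # respected in the Senate: moderately likely, slightly positive
    (-2, -3),  # unethical: rather unlikely, highly negative
]

print(predicted_attitude(smith_beliefs))  # -6 + -9 + 2 + 6 = -7
```

The resulting total of −7 matches the slightly negative attitude attributed to the hypothetical respondent in the text.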
Perhaps it is apparent how this general approach could be used for other
attitude objects (with different salient beliefs, of course). In consumer
marketing, for example, the attitude object of interest is a product or brand,
and the salient beliefs typically concern the attributes of the product or
brand. Thus, for instance, the underlying bases of consumers’ attitudes
toward a given brand of toothpaste might be investigated by examining the
belief strength and belief evaluation associated with consumers’ salient
beliefs about that brand’s attributes: whitening power, taste, ability to
prevent cavities, cost, ability to freshen breath, and so forth.
how negatively valued is that? Is capital punishment applied inequitably,
and how disadvantageous is that? And so forth. Two persons with opposed
attitudes on this issue might equally value crime deterrence—that is, have
the same evaluation of that attribute—but disagree about whether capital
punishment has that attribute. Or two people with opposed attitudes might
agree that capital punishment has the characteristic of satisfying the desire
for vengeance but differ in the evaluation of that characteristic.
changed. For example, to encourage a more positive attitude, a persuader
might try to weaken the strength of an existing negative belief (“It’s not
likely that Senator Smith accepted bribes, because Senator Smith is
already very wealthy”) or to enhance the strength of an existing positive
belief (“You already know it’s true that Senator Smith has worked hard for
the people of this state—but you don’t know just how true that is”).
consequence. That is, non-intenders didn’t need to be convinced about the
outcome of heart disease risk reduction; a persuader would only be
wasting effort to construct persuasive appeals based on that benefit.
But the data did suggest other, more likely targets for persuasive messages.
For example, intenders and non-intenders had equally negative
assessments of “eating boring food,” but intenders thought that to be much
less likely a consequence than did non-intenders; a persuader thus might
try to convince non-intenders that in fact eating a low-fat diet doesn’t
mean having to eat boring food. Similarly, intenders and non-intenders had
equally positive evaluations of “feeling healthier,” but non-intenders were
not as convinced (as were intenders) that eating a low-fat diet would make
them feel healthier. Notably, for beliefs about whether a low-fat diet
“helps to maintain lower weight,” intenders and non-intenders differed
with respect to both belief strength (intenders thought that outcome more
likely than did non-intenders) and belief evaluation (intenders valued that
outcome more than did non-intenders). A persuader who wanted to
emphasize this advantage would face the task of convincing non-intenders
both that this was a desirable outcome and that eating a low-fat diet would
produce the outcome.
General Correlational Evidence
A number of investigations have examined the correlation between a direct
measure of the respondent’s attitude toward the object (AO) and the
predicted attitude based on the summative formula (Σbiei) using modally
salient beliefs. Reasonably strong positive correlations have commonly
been found, ranging roughly from .55 to .80 with a variety of attitude
objects including public policy proposals (e.g., Peay, 1980; Petkova,
Ajzen, & Driver, 1995), political candidates (e.g., M. H. Davis & Runge,
1981; Holbrook & Hulbert, 1975), and consumer products (e.g., Holbrook,
1977; Nakanishi & Bettman, 1974).2 That is, attitude often appears to be
reasonably well predicted by this model.
Attribute Importance
Several investigations have explored the potential role of attribute
importance or relevance in predicting attitude. The summative model, it
will be noticed, uses only belief strength and belief evaluation to predict
attitude; some researchers have thought that the predictability of attitude
might be improved by adding the importance or relevance of the attribute
as a third variable. That is, in addition to assessing belief strength and
belief evaluation, one would also obtain measures of the relevance or
importance of each belief to the respondent; then some three-component
formula such as ΣbieiIi (where Ii refers to the importance of the attribute)
could be used to predict attitude.3
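As a sketch, the three-component variant simply adds an importance weight to each product; the ratings below (including the 1-to-7 importance scale) are hypothetical values for illustration:

```python
# Three-component variant sketch: sum of b_i * e_i * I_i, where I_i is the
# rated importance of attribute i. All values here are hypothetical.

def predicted_attitude_weighted(beliefs):
    """Return the sum of belief strength * belief evaluation * importance."""
    return sum(b * e * imp for b, e, imp in beliefs)

# (belief strength -3..+3, belief evaluation -3..+3, importance 1..7)
beliefs = [
    (+3, -2, 6),  # a belief about an attribute rated as quite important
    (+2, +1, 2),  # a belief about an attribute rated as unimportant
]
print(predicted_attitude_weighted(beliefs))  # (3*-2*6) + (2*1*2) = -32
```

Weighting by importance makes the negative belief dominate the total far more than it would under the two-component sum.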
salient for the respondents. Indeed, even if the belief list has been pretested
to ensure that it contains modally salient beliefs, belief importance ratings
may still give some insight into what underlies attitudes, especially if there
is some reason to think that different respondents (or subgroups of
respondents) may have importantly different sets of salient beliefs. In
short, although belief importance might not add to the predictability of
attitude from Σbiei, belief importance ratings may be crucial in permitting
the identification of those beliefs (on a standardized list) that actually
determine the respondent’s attitude—and hence the beliefs that warrant a
persuader’s attention. (For illustrations of such a role for importance
ratings, see, e.g., Elliott, Jobber, & Sharp, 1995; van der Pligt & de Vries,
1998a, 1998b. For a general discussion, see van der Pligt, de Vries,
Manstead, & van Harreveld, 2000.)
Belief Content
The summative model offers what might be called a content-free analysis
of the underpinnings of attitude. That is, for the summative model’s
analysis, the content of a belief is irrelevant; what matters is simply how
the belief is evaluated and how strongly it is held. Ignoring content may
indeed be appropriate given an interest simply in attitude prediction. But
for other purposes, systematic attention to belief content may be important.
successful.
The point here is not that such considerations cannot be represented within
a summative model framework (see, e.g., Belch & Belch, 1987). For
example, one might say that the “good gas mileage” attribute is more
likely to be salient for (or valued by) one person than another or that that
attribute is more likely to be perceived as associated with the attitude
object by one person than by another. Rather, the point is that the
summative model provides no systematic ways of thinking about belief
content, although such content is manifestly important. In a sense, then,
one might think of these approaches as complementary: Functional
approaches emphasize the (manifest or latent) content of beliefs, whereas
belief-based attitude models (such as the summative model) are aimed at
illuminating how underlying beliefs contribute to an overall attitude.
The most common way of preparing the list of salient beliefs is by eliciting
beliefs from a test sample, identifying the most frequently mentioned
beliefs, and using these on the questionnaire. In this procedure, a
standardized belief list is composed (i.e., every respondent receives the
same set of modally salient beliefs). An alternative procedure is to elicit
salient beliefs from each respondent individually and so have each
respondent provide belief strength and belief evaluation ratings for his or
her unique set of salient beliefs. That is, an individualized belief list can be
constructed for each respondent.
The research evidence indicates that when individualized belief lists are
used, Σei and Σbiei are equally good predictors of attitude; adding belief
strength scores to the formula does not improve the predictability of
attitude. With standardized belief lists, however, Σbiei is a better predictor
than is Σei. That is, belief strength scores significantly improve the
predictability of attitude only when standardized (as opposed to
individualized) belief lists are used (Cronen & Conville, 1975; Delia,
Crockett, Press, & O’Keefe, 1975; Eagly, Mladinic, & Otto, 1994).5 On
reflection, of course, this result makes good sense. With individualized
belief lists, the respondent has just indicated that he or she thinks the
object possesses the attribute; only beliefs that the respondent already
holds are rated for belief strength. By contrast, with standardized belief
lists, belief strength scores distinguish those beliefs the respondent holds
from those the respondent does not hold. The use of standardized lists thus
creates a predictive role for belief strength scores (namely, the role of
differentiating those beliefs the respondent holds from those the
respondent does not hold), but the predictive contribution of belief strength
scores is a methodological artifact, not an indication of any genuine place
for belief strength in the cognitive states underlying attitude.
Scoring Procedures
There has been a fair amount of discussion in the literature concerning
how the belief strength and belief evaluation scales should be scored (e.g.,
Ajzen & Fishbein, 2008; Bagozzi, 1984; Fishbein & Ajzen, 2010, pp. 105–
110; Lauver & Knapp, 1993; J. L. Smith, 1996). By way of illustration,
two common ways of scoring a 7-point scale are from −3 to +3 (bipolar
scoring) and from 1 to 7 (unipolar scoring). (There are possibilities in
addition to −3 to +3 and 1 to 7, but these two provide a useful basis for
discussion.) With belief strength and belief evaluation scales, one might
score both scales −3 to +3, score both scales 1 to 7, or score one scale −3
to +3 and the other 1 to 7. But (because the scales are multiplied) these
different scoring procedures can yield different correlations of Σbiei with
attitude, and hence a question has arisen concerning which scoring
procedures are preferable.
But the main criterion for assessing scoring procedures has been the
predictability of attitude thereby afforded. That is, the criterion has been
the observed correlation between Σbiei and attitude.7 Several studies have
compared the predictability of attitude using different scoring methods.
Although results vary, the most common finding seems to have been that
scoring both scales in a bipolar fashion yields larger correlations (of Σbiei,
with attitude) than do alternative combinations and, in particular, is
superior to the intuitively appealing bipolar evaluation and unipolar
strength combination (e.g., Ajzen, 1991; Gagné & Godin, 2000; Sparks,
Hedderley, & Shepherd, 1991; for discussion, see Fishbein & Ajzen, 2010,
pp. 108–109).8
But now the task becomes explaining why bipolar scoring for both scales
appears to maximize the correlation between Σbiei and attitude. Bipolar
scoring seems to make intuitive psychological sense in the case of belief
evaluation scales, but the general empirical success of bipolar scoring for
belief strength scales may appear puzzling. One possibility is simply this:
When standardized lists of modal salient beliefs are used, bipolar scoring
of belief strength scales may permit participants to remove all effects of
beliefs that they do not have (or beliefs that are not salient for them).
When such beliefs appear on the standardized belief list, a mark at the
midpoint of belief strength scales is a sensible response (the respondent
does not know, or is not sure, whether the object has the attribute, so
marks the midpoint rather than favoring either “likely” or “unlikely”).
With bipolar scoring, such a response is scored as zero—which, when
multiplied by the corresponding belief evaluation, will yield a product of
zero (no matter what the evaluation is); this has the entirely appropriate
effect of removing that belief from having any impact on the respondent’s
predicted attitude.9 In short, the common superiority of bipolar (over
unipolar) scoring of belief strength scales might be a consequence of the
use of standardized lists of beliefs and so may be a methodological artifact
rather than a source of substantive information about how belief strength
perceptions operate.
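A small sketch can show why the scoring choice matters; the raw 7-point responses below are hypothetical:

```python
# Why scoring matters: the same raw 7-point responses yield different
# strength-times-evaluation products under bipolar (-3..+3) vs. unipolar
# (1..7) scoring of the belief strength scale.

def rescore(raw, bipolar):
    """Map a raw 7-point response (1..7) to -3..+3 if bipolar, else keep 1..7."""
    return raw - 4 if bipolar else raw

def sigma_be(raw_pairs, strength_bipolar, eval_bipolar):
    """Sum of rescored belief strength times rescored belief evaluation."""
    return sum(rescore(b, strength_bipolar) * rescore(e, eval_bipolar)
               for b, e in raw_pairs)

# One belief the respondent is unsure about (strength at the midpoint, raw 4),
# paired with a strongly negative evaluation (raw 1).
pairs = [(4, 1)]
print(sigma_be(pairs, strength_bipolar=True, eval_bipolar=True))   # 0 * -3 = 0
print(sigma_be(pairs, strength_bipolar=False, eval_bipolar=True))  # 4 * -3 = -12
```

Under bipolar scoring the uncertain belief drops out of the predicted attitude entirely; under unipolar scoring its evaluation still pulls the total downward.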
+2 (and, to simplify matters, assume equal belief strength weights for each
belief). A summative picture of belief combination expects the additional
belief to make the attitude more positive (because the sum of the
evaluations would be 13 rather than 11), but an averaging model predicts
that the overall attitude would be less positive: The average of the initial
four beliefs is 2.75, but the average of the set of five beliefs is 2.60 (that is,
adding the new attribute lowers the average evaluation).
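The contrast can be worked through directly. The individual evaluation values below are assumptions chosen to sum to 11 across four beliefs, matching the example; equal belief strengths are assumed throughout:

```python
# Summing vs. averaging belief evaluations when a new positive belief is
# added. The specific evaluation values are illustrative assumptions.

initial = [3, 3, 3, 2]   # evaluations of four existing salient beliefs
new_belief = 2           # evaluation of the newly added positive belief

summed_before, summed_after = sum(initial), sum(initial) + new_belief
avg_before = sum(initial) / len(initial)
avg_after = (sum(initial) + new_belief) / (len(initial) + 1)

print(summed_before, summed_after)  # 11 13 -> summing: attitude more positive
print(avg_before, avg_after)        # 2.75 2.6 -> averaging: less positive
```

The same new belief thus moves the predicted attitude in opposite directions under the two combination rules.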
principle might best describe what will occur in a circumstance such as
this, one can hardly give persuaders firm recommendations.
The central research evidence here takes the form of studies investigating
whether a given noncognitive element makes a contribution to the
prediction of attitude beyond that afforded by measures of belief structure
(Σbiei). A convenient illustration is provided by research concerning the
effects of consumer advertising. Advertising presumably attempts to
influence the consumer’s beliefs about the product’s attributes or
characteristics, thereby influencing the consumer’s attitude toward the
product. But evidence suggests that at least under some circumstances, the
influence of advertising on receivers’ attitudes toward a given brand or
product may come about not only through receivers’ beliefs about the
product’s characteristics but also through the receivers’ evaluation of the
advertisement itself (the receivers’ “attitude toward the ad”). As receivers
have more favorable evaluations of the advertising, they come to have
more favorable attitudes toward the product being advertised. And several
studies have reported that this effect occurs over and above the
advertising’s effects on product beliefs—that is, attitude toward the ad and
Σbiei jointly have been found to be more successful in predicting attitude
than is Σbiei alone (e.g., Mitchell, 1986; for related findings, see
MacKenzie, Lutz, & Belch, 1986; for a review, see S. P. Brown &
Stayman, 1992). Such evidence appears to point to some influence on
attitudes beyond beliefs about the object and hence suggests the
insufficiency of a purely belief-based analysis of the determinants of
attitude.
discussion, see, e.g., Fishbein & Middlestadt, 1997; Herr, 1995; Miniard &
Barone, 1997; Priester & Fleming, 1997). One illustration of such flaws is
that if an investigator is not careful to ensure that salient beliefs are being
assessed, then the apparent ability of some noncognitive factor to add to
the predictability of attitude beyond Σbiei might reflect not some genuine
influence of the noncognitive factor but rather a shortcoming in the
assessment of beliefs; the suggestion is that with better belief assessment,
the apparent noncognitive contribution might disappear.12
might underlie attitudes.14 Some attitudes might be primarily based on
affective or experiential considerations, others predominantly on cognitive
or instrumental considerations, and still others on a mixture of these
elements.15 And, of course, understanding the current basis of a person’s
attitude is commonly a first step toward understanding how the attitude
might be changed.
But if belief strength does not actually influence attitude, then such a
strategy is misguided; if a person already has the relevant categorical
judgment in place, trying to influence the degree of association between
the object and the attribute will not influence attitude. Thus if our
hypothetical respondent already believes that Boffo Beer tastes good, there
appears to be little point in seeking changes in the exact degree of the
respondent’s subjective probability judgment that Boffo Beer tastes good.
Of course, if our respondent thinks Boffo Beer does not taste good (or has
no opinion about Boffo’s taste), then in seeking to induce a positive
attitude toward the beer, a persuader may well want to induce the belief
that Boffo does taste good. But this will be a matter of changing the
relevant categorical judgment (e.g., from “Boffo Beer doesn’t taste good”
to “Boffo Beer does taste good”) and need not be approached as though
there is some psychologically real probabilistic degree of perceived
association between object and attribute. That is, the key distinction will
be between whether the person does or does not have the belief, not
between finer gradations of belief strength.
prior experience. For example, the evaluation of the laundry detergent
attribute “gets your clothes clean” might well be expected to be relatively
stable for most people.
Second, nonsummative models predict that adding a new belief may fail to
move the attitude in the desired direction. If beliefs combine in a way that
involves averaging the evaluations of the individual beliefs (rather than
summing them), then it is possible that (for example) adding a new
positive belief may not make the attitude more positive.21
The other broad way of changing the set of salient beliefs is to alter the
relative salience of currently held beliefs. For example, a persuader might
seek to make the audience’s beliefs about positive attributes more salient,
thereby enhancing the attitude. There is little direct evidence about the
effectiveness of implementing this strategy in persuasive messages (see
Batra & Homer, 2004; Delia et al., 1975; Shavitt & Fazio, 1990).
Nevertheless, it is easy to see that (for example) one purpose of point-of
purchase displays (e.g., in grocery stores) can be to influence which of the
product’s attributes are salient.
Conclusion
The general idea that the beliefs one has about an object influence one’s
attitude toward that object is enormously plausible, and, correspondingly,
it seems obvious that one natural avenue to attitude change involves
influencing beliefs. Hence it is not surprising that belief-based models of
attitude have received such attention from students of persuasion. Indeed,
the summative model of attitude obviously offers some straightforward
recommendations to persuaders.22 Still, many particulars of the
relationship of beliefs and attitudes remain elusive, with corresponding
uncertainties for the understanding of persuasion.
For Review
1. Explain the general idea of belief-based approaches to attitude. What
is a salient belief? How can one identify a person’s salient beliefs
about a given object? Explain how, in a survey context, one might
identify the modal (most common) salient beliefs about an object.
2. According to the summative model of attitude, what are the two
determinants of attitude? What is belief strength? Describe
questionnaire items that might be used to assess belief strength. What
is belief evaluation? Describe questionnaire items that might be used
to assess belief evaluation. Explain the summative model’s
description of how belief strength and belief evaluation combine to
produce attitude; that is, describe and explain the summative model’s
formula. Give an example that illustrates the model’s application.
3. Sketch three alternative strategies for attitude change suggested by
the summative model. Explain (and give examples of) the strategy of
changing the evaluation of an existing salient belief, the strategy of
changing the strength of an existing salient belief, and the strategy of
changing the set of salient beliefs. Describe two ways of changing the
set of salient beliefs. Explain how the summative model can be useful
in identifying possible foci for persuasive appeals.
4. What is the general pattern of correlations between the summative
model’s predictions and direct measures of attitude? Explain why
such correlational evidence does not necessarily show that attitude is
determined by salient beliefs. What is attribute importance? Does
adding attribute importance to the summative model’s formula
improve the predictability of attitude? Explain how belief importance
ratings can be useful even if they do not improve the predictability of
attitude.
5. Is the summative model concerned with the content (as opposed to
the evaluation) of beliefs? Explain the complementary relationship of
functional approaches and belief-based models of attitude.
6. Describe the difference between standardized and individualized
belief lists. Do belief strength scores improve the predictability of
attitude when individualized belief lists are used? Do belief strength
scores improve the predictability of attitude when standardized belief
lists are used? Explain.
7. Explain why it matters whether belief strength and belief evaluation
scales are scored in different ways. What is unipolar scoring? What is
bipolar scoring? Which kind of scoring maximizes the correlation
between Σbiei and attitude?
8. Describe an averaging model of how beliefs combine to yield
attitude. What does the research evidence indicate about whether an
averaging model or a summative (adding) model is superior? Explain
how adding models and summative models can have different
implications for persuasive strategy.
9. Explain the idea that non–belief-based (noncognitive) elements might
independently contribute to attitude. What sort of evidence bears on
such claims? Identify a noncognitive element that improves the
predictability of attitude. Explain how such elements might be
redescribed in belief-based terms.
10. Explain how the artifactual role of belief strength scores (in
predicting attitude) has implications for the persuasive strategy of
changing belief strength. Do messages that vary in the depicted
likelihood of consequences also vary correspondingly in
persuasiveness? Describe the challenges of trying to influence attitude
by changing belief evaluations. Explain why the strategy of adding
new beliefs (as a way of changing attitudes) might require attending
to belief content (not just evaluation). Why might adding a new
positive belief not make attitudes more positive? Describe how belief
importance ratings might be useful when trying to change attitudes by
influencing the relative salience of beliefs.
Notes
1. The summative model of attitude is sometimes referred to as an
expectancy-value (EV) model of attitude. An EV model of attitude
represents attitude as a function of the products of the value of a given
attribute (e.g., the attribute’s desirability) and the expectation that the
object has the attribute (e.g., belief strength). The summative model is only
one version of an EV model, however; this basic EV idea has been
formulated in various ways (e.g., Rosenberg, 1956). But the summative
model is the best studied, appears to have been the most successful
empirically, and indeed is the standard against which alternative EV
models have commonly been tested. (For some general discussions of EV
models of attitude, see Bagozzi, 1984, 1985; Eagly & Chaiken, 1993;
Kruglanski & Stroebe, 2005.)
in contributing to attitude might be had by research that used
individualized belief lists but replaced the conventional belief strength
end-anchors (“likely–unlikely”) with ones more appropriate to the desired
judgment, such as “slightly likely” and “very likely.” If, with
individualized belief lists and such end-anchors, Σbiei were found to
generally be superior to Σei as a predictor of attitude, the case for
conceiving of belief strength in continuous probability terms would be
strengthened.
about whether unipolar or bipolar scoring will maximize the correlation.
9. Notice the contrast: With bipolar belief strength scoring, it does not
matter what the respondent’s evaluation is of an attribute for which the
respondent has marked the midpoint of the belief strength scales (because
the Strength × Evaluation product for that attribute will be zero). But with
unipolar belief strength scoring, the Strength × Evaluation product will
vary depending on the respondent’s evaluation of that attribute. Hence
even if a respondent is completely uncertain about whether the object has
the attribute (and so marks the midpoint of the belief strength scales), the
respondent would nevertheless be predicted to have a relatively more
favorable attitude if the attribute were evaluated positively than if the
attribute were evaluated negatively. Scott Moore helped me see this point
clearly.
consistency may suggest the operation of a belief-based attitude process
even where none exists. For example, an attitude might be formed in a
wholly non–belief-based way, then used to guide responses to belief items
in such a way that the attitude appears to be largely determined by those
belief elements (for discussion of such problems, see Fishbein &
Middlestadt, 1997, pp. 112–113; Herr, 1995).
13. The idea that attitudes might have multiple underlying components—
including both affective and cognitive ones—has a long history in the
study of attitudes (e.g., Rosenberg & Hovland, 1960). But (as pointed out
by Eagly et al., 1994) it took quite some time for research to explicitly take
up the question of whether the predictability of attitude can be enhanced
by including non–belief-based considerations. And although
multicomponent views of attitude commonly treat affect and cognition as
representing just two of three attitudinal bases (the third being conation or
behavioral elements, as when one’s past behavior influences one’s
attitudes through self-perception processes; see, e.g., Bem, 1972), research
has come to focus on only the affective and cognitive elements (see, e.g.,
Haddock & Zanna, 1998, p. 328n4).
14. There is good reason to think that the wording of belief elicitation
questionnaires may influence the types of beliefs that people report. For
example, some common procedures may generally elicit predominantly
instrumental-utilitarian beliefs rather than symbolic beliefs (see Ennis &
Zanna, 1993, 2000; Sutton et al., 2003). This suggests the importance of
careful questionnaire design that minimizes the chances of missing some
important class of underlying beliefs. For example, it is possible to ask
different questions to elicit affective considerations and cognitive ones
(e.g., French et al., 2005; Haddock & Zanna, 1998).
1998).
the relevant research is not in hand, but the evidence at least suggests a
rather more complicated picture (see, e.g., Clarkson, Tormala, & Rucker,
2011; Conner, Rhodes, Morris, McEachan, & Lawton, 2011; Edwards,
1990; Fabrigar & Petty, 1999; Haddock, Mio, Arnold, & Huskinson, 2008;
Ruiz & Sicilia, 2004).
21. As a complexity: Adding a new salient belief may cause some existing
belief to become less salient. The number of beliefs that can be salient is
surely limited (given that human information-processing capacity is not
unbounded). If the current set of beliefs has exhausted that capacity, then
the addition of some new salient belief will necessarily mean that some old
belief has to drop from the set of salient beliefs. Presumably in such a
circumstance, a comparison of the evaluations of the two beliefs in
question (the new salient one and the previously salient one) will, ceteris
paribus, indicate the consequences for attitude change.
Chapter 5 Cognitive Dissonance Theory
Elements and Relations
Cognitive dissonance theory is concerned with the relations among
cognitive elements (also called cognitions). An element is any belief,
opinion, attitude, or piece of knowledge about anything—about other
persons, objects, issues, oneself, and so on.
Three possible relations might hold between any two cognitive elements.
They might be irrelevant to each other, have nothing to do with each other.
My belief that university tuition will increase next year and my favorable
opinion of Swiss chocolate are presumably irrelevant to each other. Two
cognitive elements might be consonant (consistent) with each other; they
might hang together, form a package. My belief that the Greater Chicago
Food Depository is a worthy charity and my knowing I donate money to
that organization are presumably consonant cognitions.
Dissonance
When two cognitions are in a dissonant relation, the person with those two
cognitions is said to have dissonance, to experience dissonance, or to be in
a state of dissonance. Dissonance is taken to be an aversive motivational
state; persons will want to avoid experiencing dissonance, and if they do
encounter dissonance, they will attempt to reduce it.
Expressed most broadly, the magnitude of dissonance experienced will be
a function of two factors. One is the relative proportions of consonant and
dissonant elements. Thus far, dissonance has been discussed as a simple
two-element affair, but usually two clusters of elements are involved. A
smoker may believe, on the one hand, that smoking reduces anxiety,
makes one appear sophisticated, and tastes good and, on the other hand,
also believe that smoking causes cancer and is expensive. There are here
two clusters of cognitions, one of elements consonant with smoking and
one of dissonant elements. Just how much dissonance this smoker
experiences will depend on the relative size of these two clusters. As the
proportion of consonant elements (to the total number of elements)
increases, less and less dissonance will be experienced, but as the cluster
of dissonant elements grows (compared with the size of the consonant
cluster), the amount of dissonance will increase.
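A minimal sketch of this proportion idea, using the smoker example: modeling dissonance magnitude as the share of dissonant elements is a common textbook rendering (the ratio form itself is an assumption here, and the importance of the elements is ignored):

```python
# Dissonance magnitude sketched as the proportion of dissonant elements
# among all relevant elements: D / (C + D). This ratio form is an
# illustrative assumption, not a formula stated in the text.

def dissonance_ratio(n_consonant, n_dissonant):
    """Return the share of dissonant elements among all relevant elements."""
    return n_dissonant / (n_consonant + n_dissonant)

# Smoker example: three consonant cognitions (reduces anxiety, appears
# sophisticated, tastes good) vs. two dissonant ones (causes cancer,
# is expensive).
print(dissonance_ratio(3, 2))  # 0.4
```

On this sketch, adding consonant cognitions shrinks the ratio and adding dissonant ones enlarges it, tracking the qualitative claim in the text.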
that important. (For an illustration of this means of dissonance reduction,
see Denizeau, Golsing, & Oberle, 2009.)
Decision Making
One application of dissonance theory concerns decision making (or choice
making). Dissonance is said to be a postdecisional phenomenon;
dissonance arises after a decision or choice has been made. When facing a
decision (in the simplest case, a choice between two alternatives), one is
said to experience conflict. But after making the choice, one will almost
inevitably experience at least some dissonance, and thus one will be faced
with the task of dissonance reduction. So the general sequence is (a)
conflict, (b) decision, (c) dissonance, and (d) dissonance reduction.
Conflict
Virtually every decision a person makes is likely to involve at least some
conflict. Rarely does one face a choice between one perfectly positive
option and one absolutely negative alternative. Usually, one chooses
between two (or more) alternatives that are neither perfectly good nor
perfectly bad—and hence there is at least some conflict, because the
choice is not without some trade-offs. Just how much conflict is
experienced by a person facing a decision will depend (at least in part) on
the initial evaluation of the alternatives. When (to take the simplest two-
option case) the two alternatives are initially evaluated similarly, the
decision maker will experience considerable conflict; two nearly equally
attractive options make for a difficult choice.
This conflict stage is the juncture at which persuasive efforts are most
obviously relevant. Ordinarily, persuasive efforts are aimed at regulating
(either increasing or decreasing) the amount of conflict experienced by
decision makers. If one’s friend is inclined toward seeing the new action-
adventure film, rather than the new romantic comedy, one can attempt to
undermine that preference and so increase the friend’s conflict (by saying
things aimed at getting the friend to have a less positive evaluation of the
action film and a more positive evaluation of the comedy), or one can
attempt to persuade the friend to follow that inclination and so reduce the
friend’s conflict (by saying things aimed at enhancing the evaluation of the
already preferred action film and at reducing further the evaluation of the
comedy).
Consider, for example, a person choosing where to eat lunch. Al’s Fresco
Restaurant offers good food and a pleasant atmosphere but is some
distance away and usually has slow service. The Bistro Cafe has so-so
food and the atmosphere isn’t much, but it’s nearby and has quick service.
No matter which restaurant is chosen, there will be some things dissonant
with the person’s choice. In choosing the Bistro, for instance, the diner
will face certain undesirable aspects of the chosen alternative (e.g., the
poor atmosphere) and certain desirable aspects of the unchosen alternative
(e.g., the good food the diner could have had at Al’s).
Dissonance Reduction
One convenient way in which a decision maker can reduce the dissonance
felt following a choice is by reevaluating the alternatives. By evaluating
the chosen alternative more positively than one did before and by
evaluating the unchosen alternative less positively than before, the amount
of dissonance felt can be reduced. Because this process of re-rating the
alternatives will result in the alternatives being less similarly evaluated
than they were prior to the decision, this effect is sometimes described as
the “postdecisional spreading” of alternatives (in the sense that the
alternatives are spread further apart along the evaluative dimension than
they had been). If (as dissonance theory predicts) people experience
dissonance following decisions, then one should find dissonance reduction
in the form of this postdecisional spreading of the alternatives, and one
should find greater spreading (i.e., greater dissonance reduction) in
circumstances in which dissonance is presumably greater.
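The spreading measure just described reduces to simple arithmetic; the sketch below uses hypothetical ratings on an assumed 1-to-10 evaluative scale:

```python
def spreading(pre_chosen, pre_unchosen, post_chosen, post_unchosen):
    """Postdecisional spreading: how much farther apart the two
    alternatives are rated after the decision than before it.
    Positive values indicate dissonance reduction via re-rating."""
    return (post_chosen - post_unchosen) - (pre_chosen - pre_unchosen)

# A diner rates two restaurants before choosing (nearly tied),
# picks one, and re-rates both afterward (hypothetical numbers).
print(round(spreading(pre_chosen=6.0, pre_unchosen=5.8,
                      post_chosen=7.5, post_unchosen=4.5), 2))  # 2.8
```

Greater spreading would be expected under conditions thought to heighten dissonance, such as initially similar ratings or an important decision.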
The research appears to indicate that one does often find the predicted
changes in evaluations following decisions (e.g., Brehm, 1956; G. L.
White & Gerard, 1981). However, the evidence is weaker for the prediction
that the magnitude of dissonance reduction is greater when the conditions
for heightened dissonance are present (as when the two alternatives are
initially rated similarly, or the decision is important): conflicting
findings have been reported, especially for the effects of decisional
importance (for discussion, see Converse & Cooper, 1979).
weakened relative to those observed in earlier research (Izuma &
Murayama, 2013). In sum, the sorts of postdecisional re-evaluations
expected by dissonance theory appear to be genuine, but the effect is not as
large as one might have supposed on the basis of the initial research
findings.
Regret
Given these postdecisional dissonance-reduction processes, persuaders
might naturally infer that once a persuadee has been induced to decide the
way the persuader wants, then the persuader’s job is done; after all, having
made the choice, the persuadee is likely to become more satisfied with it
through the ordinary processes of dissonance reduction. However, this
inference is unsound; persuaders who reason in this fashion may find their
persuasive efforts failing in the end, in part because of the occurrence of
regret (see Festinger, 1964).
When regret occurs, it arises after the decision has been made but before
dissonance has been reduced (through postdecisional spreading of
alternatives). When regret is happening, the alternatives are temporarily
evaluated more similarly than they were initially. Then, following this
regret phase (during which dissonance presumably increases), the person
moves on to dissonance reduction, with the evaluations of the alternatives
spreading farther apart (see Festinger & Walster, 1964). Regret is not
inevitable, and research is only beginning to explore the factors
influencing the arousal and resolution of regret (e.g., Keaveney, Huber, &
Herrmann, 2007; Mannetti, Pierro, & Kruglanski, 2007; Rosenzweig &
Gilovich, 2012; Zeelenberg & Pieters, 2007), but regret occurs sufficiently
commonly to be quite familiar. (Indeed, some readers will have recognized
“buyer’s remorse” in the preceding description.)
One plausible account of this regret phenomenon is that having made the
choice, the decision maker now faces the task of dissonance reduction.
Naturally, the decision maker’s attention focuses on those cognitions that
are dissonant with his or her choice—on undesirable aspects of the chosen
option and on desirable aspects of the unchosen option—perhaps in the
hope of eventually being able to minimize each. As the decision maker
focuses on undesirable aspects of the chosen alternative, that alternative
may seem (at least temporarily) less attractive than it had before; focusing
on desirable aspects of the unchosen option may make that option seem (at
least temporarily) more attractive than it had before. With the chosen
alternative becoming rated less favorably, and the unchosen alternative
becoming rated more favorably, the two alternatives naturally become
evaluated more similarly than they had been.
During this regret phase, it is even possible that the initial evaluations
become reversed, so that the initially unchosen alternative becomes rated
more favorably than the chosen option. In such a circumstance, the
decision maker may back out of the original choice. This outcome
becomes more likely when the two alternatives are initially evaluated
rather similarly because in such a circumstance, comparatively small
swings in absolute evaluations can make for reversals in the relative
evaluations of the alternatives.
up persuasive efforts. It can be too easy for a persuader to assume that the
job is done when the persuadee has been induced to choose in the way the
persuader wants, but the possibility of regret, and particularly the
possibility that the decision maker’s mind may change, should make the
persuader realize that simply inducing the initial decision may not be
enough.
hypothesis. Broadly put, this hypothesis has it that persons will prefer to be
exposed to information that is supportive of (consonant with) their current
beliefs rather than to nonsupportive information (which presumably could
arouse dissonance).8
Stroud, 2008).
rendered a judgment about the guilt of the defendant. They were
subsequently offered a chance of seeing either confirming or
disconfirming information. Participants showed a general preference for
nonsupportive information, perhaps because the trial setting was one that
made salient the norms of fairness and openness to evidence (Sears, 1965).
Summary
All told, there is a general preference for supportive information. The
strength of this preference may vary, and the preference can be overridden
by other considerations (e.g., information utility). But dissonance theory’s
expectations about general information preferences have certainly been
confirmed. For that reason, persuaders who hope to encourage attention to
their messages will want to be attentive to the factors influencing
information exposure, as these may suggest avenues by which such
attention can be sought (see, e.g., Flay, McFall, Burton, Cook, &
Warnecke, 1993).
There is not yet much research evidence on the matter, but there is some
reason to suspect that selective avoidance effects may be weaker than
selective approach effects (e.g., Garrett, 2009; Garrett & Stroud, 2014; see
also Cotton, 1985, p. 26; Frey, 1986, pp. 69–70). That is, people may
actively look for confirming information, but not necessarily avoid
disconfirming information. In any case, one ought not assume that
selective approach and selective avoidance are equally powerful processes.
So, for example, even though an information environment such as that afforded
by the Internet may enable selective approach, and although persons do
generally have a preference for supportive information, people may
nevertheless not actively avoid discrepant information—and under the
right circumstances (e.g., high perceived information utility, a setting that
prioritizes fairness) might even seek out nonsupportive information (for
some relevant work, see Valentino, Banks, Hutchings, & Davis, 2009;
Wojcieszak & Mutz, 2009).
Induced Compliance
Perhaps the greatest amount of dissonance research concerns what is
commonly called induced compliance. Induced compliance is said to occur
when an individual is induced to act in a way discrepant from his or her
beliefs and attitudes.
But if the incentive had been smaller (less money offered), then the
amount of dissonance experienced would have been greater. The greatest
possible dissonance would occur if the incentive were only just enough to
induce compliance. Suppose that you would not have agreed to engage in
the counterattitudinal advocacy for anything less than $100. In that case,
an offer of exactly $100—the minimum needed to induce compliance—
would have produced the maximum possible dissonance. Any incentive
larger than that minimum would only have reduced the amount of
dissonance experienced.
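The incentive–dissonance relationship can be caricatured in a toy function (the functional form and the numbers are illustrative assumptions, not part of dissonance theory itself):

```python
def dissonance_after_compliance(incentive, minimum_needed):
    """Toy model of induced compliance: below the minimum incentive the
    person refuses, so no dissonance arises; at exactly the minimum,
    dissonance is maximal (1.0); beyond it, the extra justification
    reduces dissonance."""
    if incentive < minimum_needed:
        return 0.0  # no compliance, hence no counterattitudinal act
    return minimum_needed / incentive

print(dissonance_after_compliance(100, 100))  # 1.0 (bare-minimum incentive)
print(dissonance_after_compliance(500, 100))  # 0.2 (ample justification)
print(dissonance_after_compliance(50, 100))   # 0.0 (offer refused)
```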
the position being advocated.
occurred with larger incentives.
Counterattitudinal-Advocacy–Based Interventions
The potential utility of induced-compliance processes as a basis for
attitude change is nicely illustrated by counterattitudinal-advocacy
interventions. In these interventions, participants are led to engage in
counterattitudinal advocacy (under conditions of minimal incentive) as a
means of producing attitude change.
the price of two,” etc.). The central idea is that a lower price is offered to
the consumer, making purchase more likely.
The key insight offered by dissonance theory here is this: The greater the
incentive to comply, the less dissonance created by the purchase—and
hence less chance for favorable attitude change toward the brand. This
consumer might buy Brand A this time (because the price is so low), but
the consumer’s underlying unfavorable attitude toward Brand A is not
likely to change—precisely because the incentive to comply was so great.
So while the “low, low price” offer might boost sales for a while, it can
also undermine the development of more positive attitudes toward the
brand.
One should not conclude from this that the low-price offer is a foolish
marketing stratagem that should never be used. The point is that this
marketing technique can set in motion forces opposed to the development
of positive attitudes toward the brand and that these forces are greater as
the incentive becomes greater (as the deal gets better). But some low-price
offers are better than others (from the view of creating favorable attitude
change): A low-price offer that is only just barely good enough to induce
purchase—an offer that provides just enough incentive to induce
compliance—will create the maximum possible dissonance (and so, a
marketer might hope, maximum favorable attitude change toward the
product). Low-price offers may also be useful as strategies for introducing
new brands; the marketer’s plan is that the low price would induce initial
purchase and that this exposure to the brand’s intrinsic positive
characteristics will create a positive attitude toward the brand. Of course, if
the brand does not have sufficiently great intrinsic appeal (as was likely
with the house brands studied by Doob et al., 1969), then using low
introductory prices to induce trial will not successfully create underlying
positive attitudes toward the brand. (For indications of the complexity of
the effects of price promotion on attitudes, see DelVecchio, Henard, &
Freling, 2006; Raghubir & Corfman, 1999; Yi & Yoo, 2011.)
Limiting Conditions
Researchers have not always obtained the induced compliance effects
predicted by dissonance theory. Two important limiting conditions have
been identified. First, the predicted dissonance effects seem to occur only
when the participants feel that they had a choice about whether to comply
(e.g., about whether to perform the advocacy). That is, freedom of choice
seems to be a necessary condition for the appearance of dissonance effects
(the classic work on this subject is Linder, Cooper, & Jones, 1967; for a
relevant review, see Preiss & Allen, 1998). Thus one can expect that
inducing counterattitudinal action with minimal incentive will produce
substantial dissonance (and corresponding favorable attitude change) only
when the person freely chooses to engage in the counterattitudinal
behavior.12
Second, the predicted dissonance effects are obtained only when there is
no obvious alternative cause to which the feelings of dissonance can be
attributed. Attributional processes are the (often nonconscious) methods by
which people arrive at explanations for their feelings. If people can
attribute their dissonance feelings to some cause other than their
counterattitudinal behavior, the usual dissonance effects will not be
observed. For example, if people take a pill (actually a placebo) before
engaging in counterattitudinal advocacy and are told the pill will probably
make them feel tense or anxious, then counterattitudinal advocacy does not
produce the usual changes in attitude (because people attribute their
discomfort to the pill, not to the counterattitudinal action; Zanna &
Cooper, 1974; for related work, see J. Cooper, 1998, Study 1; Fried &
Aronson, 1995; Joule & Martinie, 2008).
Summary
Dissonance theory’s expectations about the effects of incentive for
counterattitudinal action on attitude change have been confirmed in broad
outline—although not without the discovery of unanticipated limiting
conditions. When a person freely chooses to engage in counterattitudinal
action (without an apparent alternative account of the resulting feelings),
increasing incentive for such action leads to lessened pressure for making
one’s beliefs and attitudes consistent with the counterattitudinal act. Hence
a persuader seeking long-term behavioral change (by means of underlying
attitude change) ought not to create intense pressure to engage in the
counterattitudinal behavior; rather, the persuader should seek to offer only
just enough incentive to induce compliance and let dissonance reduction
processes encourage subsequent attitude change.13
undermine the development of positive attitudes toward homework
(whereas a minimal reward can induce immediate compliance while also
promoting the development of positive attitudes). All these examples
illustrate the potential application of the general principle that smaller
incentives for freely chosen counterattitudinal behavior are more likely
than larger incentives to produce underlying favorable attitudes toward
that behavior.
Hypocrisy Induction
Fernandez, 2011; Stone, Wiegand, Cooper, & Aronson, 1997.)
mechanisms, see Stone, 2012; Stone & Fernandez, 2008b; Stone &
Focella, 2011.)
Backfire Effects
It might appear straightforward enough to use hypocrisy as a means of
inducing behavioral change, but it is important to consider that, faced with
evidence of inconsistency between attitudes and actions, people might
change their attitudes rather than their behaviors. Fried (1998) had
participants engage in public advocacy about the importance of recycling,
under one of three conditions varying the salience of past inconsistent
behavior. Some participants listed their past recycling failures
anonymously (as in previous hypocrisy induction manipulations), some
listed their past failures in ways that permitted them to be personally
identified, and some did not list past failures (the no-salience condition).
Persons in the anonymous-salience condition exhibited the usual
behavioral effects of hypocrisy (e.g., they pledged larger amounts of
money to a recycling fund than did persons in the no-salience condition),
but persons in the identifiable-salience condition did not. These persons,
instead of changing behaviors to become consistent with their prorecycling
attitudes, changed their attitudes to become consistent with their recycling
failures—specifically, they displayed a reduced belief in the importance of
recycling.
It is not yet clear exactly how to explain such reversal of effects, how
general such outcomes are, the conditions under which they are likely to
occur (perhaps, say, with relatively unimportant attitudes), and so forth.
But persuaders will certainly want to take note of the potential dangers of
hypocrisy induction as an influence mechanism. As a means of changing a
person’s behavior, pointing out that the person’s conduct is inconsistent
with the person’s professed beliefs might lead to the desired behavioral
change—or might lead to belief revision (and so backfire on the
persuader).
One factor that might plausibly encourage such backfire effects is self-
efficacy, people’s perceived ability to perform the behavior (perceived
behavioral control, in the terminology of reasoned action theory as
discussed in Chapter 6). If people think that behavior change is unavailable
as a method to reduce dissonance, they may turn to attitude change instead.
Thus one likely limiting condition on the effectiveness of hypocrisy
induction is that the level of self-efficacy (perceived behavioral control) be
sufficiently high. In fact, if this limiting condition is not met, hypocrisy
induction might well produce boomerang attitude change, that is, attitude
change in a direction opposite that wanted by the persuader.15
and morally adequate, that is, as competent, good, coherent, unitary,
stable, capable of free choice, capable of controlling important outcomes,
and so on” (Steele, 1988, p. 262).
It remains to be seen how successful these and other alternatives will prove
to be. (For examples and discussion of various approaches, see Beauvois
& Joule, 1999; J. Cooper, 2007; Eagly & Chaiken, 1993, pp. 505–552;
Harmon-Jones, Amodio, & Harmon-Jones, 2010; Nail, Misak, & Davis,
2004; Stone & Cooper, 2001; Stone & Fernandez, 2008a; Van Overwalle
& Jordens, 2002.) The general question is the degree to which a given
framework can successfully encompass the variety of findings currently
housed within dissonance theory, while also pointing to new phenomena
recommending distinctive explanation. But no matter the particulars of the
resolution of such issues, it is plain that dissonance-related phenomena
continue to provide rich sources of theoretical and empirical development.
For students of persuasion, these various alternatives bear watching
because of the possibility that these new frameworks will shed additional
light on processes of social influence.
Conclusion
Dissonance theory does not offer a systematic theory of persuasion (and
was not intended to). But dissonance theory has served as a fruitful source
of ideas bearing on social influence processes and has stimulated
substantial relevant research. To be sure, unanticipated complexities have
emerged (as in the discovery of limiting conditions on induced compliance
effects or the phenomenon of postdecisional regret). But cognitive
dissonance theory has yielded a number of useful and interesting findings
bearing on processes of persuasion.
For Review
1. Explain the general idea of cognitive consistency. What is a cognitive
element (cognition)? What are the possible relationships between two
cognitions? Explain how two cognitions can be irrelevant to each
other, consistent with each other, or inconsistent with each other.
When are two cognitions said to be in a dissonant relationship?
2. What are the properties of dissonance? What sort of state is it? Can
dissonance vary in magnitude? What factors influence the degree of
dissonance experienced? Explain how the relative proportion of
consonant and dissonant elements influences dissonance. Explain
how the importance of the elements and the issue influence
dissonance. Describe and explain two basic ways of reducing
dissonance.
3. Explain how choice (decision making) inevitably arouses dissonance.
Is dissonance a predecisional or postdecisional state? What state is a
decision maker said to be in before having made the decision? What
state is a decision maker said to be in after having made the decision?
Identify two factors that influence the amount of postdecisional
dissonance. How can dissonance be reduced following a decision?
What is postdecisional spreading of alternatives? Has research
commonly detected postdecisional spreading of alternatives?
Describe how selective information seeking or processing can reduce
postdecisional dissonance. How is regret manifest following a
decision? Does regret precede or follow dissonance reduction?
Explain how regret can lead to a reversal of a decision. Describe the
function of follow-up persuasive efforts in the context of
postdecisional processes.
4. What is the selective exposure hypothesis? Explain how the
hypothesis reflects the main tenets of dissonance theory. Describe the
usual research design for studying selective exposure. In such
designs, what sort of result represents evidence of selective exposure?
Is there evidence of a general preference for supportive information?
Is this a strong preference? Explain how the strength of the preference
for supportive information is related to the relevance of the issue to
one’s core values. What other factors influence information
exposure? Explain how perceived information utility and fairness
norms can influence information exposure. What is the distinction
between selective avoidance effects and selective approach effects?
Which appears to be stronger?
5. What is induced compliance? What is counterattitudinal advocacy?
Explain the dissonance theory view of induced compliance situations.
What is the key influence on the amount of dissonance experienced in
such situations? Describe the relationship between incentive and
dissonance in such situations. Explain how counterattitudinal
advocacy interventions can be a way of changing attitudes. Explain,
from a dissonance perspective, the effects of low-price offers for
consumer goods. From the marketer’s point of view, what is the ideal
amount of incentive to offer? Explain, from a dissonance perspective,
the operation of promotions that invite consumers to send in essays
explaining why they like the product (or to send in advertisements,
etc.), in return for being entered in a prize drawing. Identify two
limiting conditions on the occurrence of the predicted dissonance
effects in induced compliance situations. How is freedom of choice
such a condition? How is the lack of an apparent alternative cause
(for the feelings of dissonance) such a condition?
6. What is hypocrisy induction? Explain how hypocrisy induction can
lead to behavioral change. Identify a common persuasive situation in
which hypocrisy induction might be useful to a persuader. What are
the two key elements of successful hypocrisy inductions? Describe
how and why hypocrisy induction efforts might backfire. Identify a
limiting condition on the success of using hypocrisy induction to
change behavior.
7. Explain the central role of the self in dissonance processes. What does
self-affirmation theory identify as the motivation behind dissonance
phenomena? Explain how dissonance and guilt might be related.
Notes
1. Despite their age and relatively narrowed focus, both balance theory and
congruity theory continue to find useful application (see, e.g., Basil &
Herr, 2006; E. Walther & Weil, 2012; J. B. Walther, Liang, Ganster,
Wohn, & Emington, 2012; Woodside, 2004; Woodside & Chebat, 2001).
More generally, cognitive consistency remains an enduring subject of
research attention (e.g., Gawronski & Strack, 2012).
one belief follows from another—not whether it logically does so follow.
5. So (the argument runs) although it may appear that people became more
positive about an option after choosing it, they might in reality have had
that more positive evaluation even before choosing that option—a more
positive evaluation that went undetected because the initial evaluation
assessment was imperfect. Thus the postdecisional evaluation may appear
to have changed (appear to have become more positive) even though there
has been no underlying change in the actual evaluations. M. K. Chen and
Risen’s (2010) argument is more complex and nuanced than this (and in
particular emphasizes the importance of choosing an appropriate
comparison condition), but this will serve to convey the flavor of the
argument.
way of pursuing specifically postdecisional dissonance reduction.
between saying (for example), “Choice is necessary for the appearance of
dissonance-predicted effects” and “Dissonance-predicted effects are larger
under conditions of choice than under conditions without choice.” The
former depicts choice as a necessary condition, the latter as a moderating
factor; the former thus predicts null (zero) effects in no-choice conditions,
the latter only that the size of the effect will be smaller in no-choice
conditions. Any hypothesis of zero effect, however, is almost certainly
literally false; a more appropriate hypothesis would presumably be that the
effects would be trivially small (or, perhaps, opposite in direction). There
has not been discussion of what “trivially small” might be in this context,
however. The evidence that is usually advanced to support necessary
condition claims about choice commonly takes the form of (a) a finding
that a dissonance-predicted effect is statistically significantly different
from zero under choice conditions but not under no-choice conditions or
(b) a finding that a dissonance-predicted effect is significantly larger under
choice than under no-choice conditions. But neither of these is good
evidence for the necessary condition claim (and only the latter is good
evidence for a moderating factor claim). (For a first pass at a better
approach, see Preiss & Allen, 1998.) The evidentiary situation is more
complicated, but no more satisfactory, in the case of the claim that a
particular combination of conditions is necessary. In short, although it has
become customary to characterize research in this area as having identified
various necessary conditions for the appearance of dissonance-predicted
effects, these characterizations should be seen as deserving further
attention (attention especially focused on matters of effect size and
statistical power).
(in order to keep inducing significant dissonance). (This program is
described in Amazon’s 2013 report to shareholders, widely available
online including here:
http://www.sec.gov/Archives/edgar/data/1018724/000119312514137753/d702518dex9
Thanks to Steve Booth-Butterfield for spotting this ploy.)
either element alone (e.g., R. W. Johnson, Kelly, & LeBlanc, 1995).
Chapter 6 Reasoned Action Theory
The Reasoned Action Theory Model
Reasoned action theory (RAT) is a general model of the determinants of
volitional behavior developed by Martin Fishbein and Icek Ajzen
(Fishbein & Ajzen, 2010). In what follows, the RAT model is described
and the current state of research on the theory is reviewed. Subsequent
sections describe the theory’s implications for influencing intentions,
discuss the relationship of intentions and behaviors, and offer some
commentary on the model.
Intention
RAT is focused on understanding behavioral intentions. A behavioral
intention represents a person’s readiness to perform a specified action.2 In
assessing behavioral intention, a questionnaire item such as shown in
Figure 6.1 is commonly employed.
Injunctive Norm
The injunctive norm (abbreviated IN) is the person’s general perception of
whether “important others” desire the performance or nonperformance of
the behavior. As the injunctive norm becomes more positive, the intention
is expected to become more positive.
Descriptive Norm
The descriptive norm (abbreviated DN) is the person’s perception of
whether other people perform the behavior. The idea is that as people
come to think a given behavior is more widely performed by others, they
themselves may become more likely to intend to perform the action. Thus
as the descriptive norm becomes more positive, intentions are expected to
become more positive.4
Perceived Behavioral Control
Perceived behavioral control (abbreviated PBC) is the person’s perception
of the ease or difficulty of performing the behavior. PBC is similar to the
concept of self-efficacy, which refers to a person’s perceived ability to
perform or control a behavior (see Bandura, 1997). The expectation is that
as PBC becomes more negative, intentions will correspondingly become
more negative.5
There is some reason to think that PBC is not the same sort of influence on
intention that AB, IN, and DN are. It makes sense that everything else
being equal, a more positive AB, IN, or DN should be associated with
more positive intentions. But it does not make sense that everything else
being equal, greater perceived control should be associated with more
positive intentions. There are many actions that I perceive to be entirely
under my control—for instance, setting fire to my office—that I have no
intention of performing. Just because I think I have the capability to
perform an action surely does not mean that I am more likely to intend to
do so.
Various combinations of the four predictors have been explored
empirically in hundreds of research studies. Behavioral intentions have
proved to be rather predictable using the RAT model, across a variety of
behaviors, including exercise (Brickell, Chatzisarantis, & Pretty, 2006;
Everson, Daley, & Ussher, 2007; Paek, Oh, & Hove, 2012; for a review,
see Hausenblas, Carron, & Mack, 1997), conservation (recycling, water
conservation, and the like; Kaiser, Hübner, & Bogner, 2005; Lam, 2006;
Nigbur, Lyons, & Uzzell, 2010), health screening (Mason & White, 2008;
Michie, Dormandy, French, & Marteau, 2004; Sieverding, Matterne, &
Ciccarello, 2010), bicycle helmet use (Lajunen & Rasanen, 2004), voting
(Fishbein & Ajzen, 1981), vaccination (Dillard, 2011; Gerend & Shepherd,
2012), smoking (Hassandra et al., 2011), consumer purchases (Brinberg &
Durand, 1983; Smith et al., 2008), skin cancer prevention (Branstrom,
Ullen, & Brandberg, 2004; K. M. White et al., 2008), and many others.
The multiple correlations (obtained using RAT model variables to predict
intention) in these applications are commonly in the range of .50 to .90,
with an average multiple correlation of between .65 and .70. (For some
review discussions, see Albarracín, Johnson, Fishbein, & Muellerleile,
2001; Armitage & Conner, 2001; Conner & Sparks, 2005; Cooke &
French, 2008; Hagger & Chatzisarantis, 2009; Hale, Householder, &
Greene, 2002; McEachan, Conner, Taylor, & Lawton, 2011; Sutton, 2004;
Trafimow, Sheeran, Conner, & Finlay, 2002.)
This research has progressed in waves. Much early work examined only
two predictors of intention: attitude and injunctive norms (Fishbein &
Ajzen, 1975; for an illustrative review, see Sheppard, Hartwick, &
Warshaw, 1988). A second wave of research added perceived behavioral
control as a predictor (beginning with Ajzen, 1991). More recently,
descriptive norms have been added as a general predictor (Fishbein &
Ajzen, 2010).
The rationale for this succession of additional predictors has been that each
new variable has shown its value in contributing to the prediction of
intentions. That is, predictions based on AB, IN, and PBC are commonly
better than those based on AB and IN alone (for some relevant reviews and
discussions, see Conner & Armitage, 1998; Conner & Sparks, 1996; Godin
& Kok, 1996; Notani, 1998; Sutton, 1998). Similarly, adding DN has often
been found to improve the prediction of intention beyond that based on
AB, IN, and PBC (for reviews, see Manning, 2009; Rivis & Sheeran,
2003).7
Thus the four predictors described here (AB, IN, DN, and PBC) appear to
be predictors of sufficiently common utility to warrant their inclusion in a
single general model. That does not mean that in any given application, all
four will play a significant role in influencing intention, but it does suggest
that the four-predictor model is likely to be a useful starting point in trying
to unravel influences on intention.
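As a sketch of how such multiple correlations are obtained, the following fits an ordinary least-squares model predicting intention from the four components and reports R; all ratings here are invented for illustration:

```python
import numpy as np

# Hypothetical ratings (7-point scales) for six respondents:
# columns are AB, IN, DN, PBC; y is behavioral intention.
X = np.array([[6, 5, 4, 6],
              [2, 3, 2, 5],
              [7, 6, 6, 7],
              [4, 4, 3, 4],
              [5, 6, 5, 6],
              [3, 2, 4, 3]], dtype=float)
y = np.array([6, 2, 7, 4, 6, 3], dtype=float)

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# The multiple correlation R is the correlation between
# predicted and observed intentions.
R = np.corrcoef(y_hat, y)[0, 1]
print(round(R, 2))
```

With real data, of course, R reflects sampling error and measurement quality; the .50 to .90 range reported above comes from the cited reviews, not from toy computations like this one.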
Influencing Intentions
The RAT model identifies five possible avenues for changing a person’s
intention to perform a given behavior: by influencing one of the four
determinants of intention (AB, IN, DN, PBC)—assuming that the
determinant is significantly weighted—or by changing the relative
weighting of the components. (It is presumably apparent why inducing
change by altering one of the four components requires that the component
be significantly weighted. RAT underscores the futility of attempts to
change, say, the injunctive norm in circumstances in which only the
attitudinal component is significantly related to intention.) In what follows,
each of those five avenues is discussed in more detail. (For some general
discussion and reviews concerning RAT-based interventions, see Cappella,
2006; Fishbein & Yzer, 2003; Hackman & Knowlden, 2014; Hardeman et
al., 2002; Sutton, 2002; Yzer, 2012a, 2013. For some illustrative
applications, see Armitage & Talibudeen, 2010; Dillard, 2011; Elliot &
Armitage, 2009; French & Cooke, 2012; Giles et al., 2014; Jemmott, 2012;
Kothe, Mullan, & Amaratunga, 2011; Paek, Oh, & Hove, 2012; Stead,
Tagg, MacKintosh, & Eadie, 2005.)
The Determinants of AB
that the evaluation of each belief (e_i) and the strength with which each
belief is held (b_i) jointly influence one's attitude toward the behavior, as
represented in the following equation:
AB = ∑ b_i e_i
RAT’s claims about the determinants of one’s attitude toward the act have
received rather good empirical support, with correlations between ∑ b_i e_i
and AB commonly averaging more than .50 (for review discussions, see
Albarracín, Johnson, Fishbein, & Muellerleile, 2001; Armitage & Conner,
2001; Conner & Sparks, 1996; Eagly & Chaiken, 1993, p. 176).8
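The expectancy-value computation can be illustrated with a small hypothetical sketch. The belief items, scale ranges (belief strength b_i from −3 to +3, evaluation e_i from −3 to +3), and all numbers here are invented for illustration, not drawn from any study:

```python
# Hypothetical sketch of the expectancy-value index AB = sum(b_i * e_i).
# b_i: belief strength, -3 (unlikely) .. +3 (likely)
# e_i: belief evaluation, -3 (bad) .. +3 (good)
# All items and numbers are invented for illustration.
salient_beliefs = {
    "smoking causes health problems": (3, -3),             # (b_i, e_i)
    "smoking gives me something to do with my hands": (2, 1),
    "smoking helps keep my weight down": (1, 2),
}

ab = sum(b * e for b, e in salient_beliefs.values())
print(ab)  # 3*-3 + 2*1 + 1*2 = -5: a negative attitude toward smoking
```

Note how a single strongly held, strongly negative belief can outweigh several mildly positive ones, which is why the change strategies discussed below target belief strength, belief evaluation, and the composition of the salient-belief set.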
Changing AB
dangerous, …”) or decreasing the favorability of an existing positive belief
(“Maybe smoking does give you something to do with your hands, but
that’s a pretty trivial thing”). Second, the strength (likelihood) of an
existing salient belief might be changed. This might involve attempting to
increase the belief strength of an existing negative belief (“You probably
already realize that smoking can lead to health problems. But maybe you
don’t realize just how likely it is to do so. You really are at risk …”) or to
decrease the belief strength associated with an existing positive belief
(“Actually, smoking won’t help you keep your weight down”). Third, the
set of salient beliefs might be changed. This can be accomplished in two
ways. One is to add a new salient belief (of the appropriate valence)
about the act (“Maybe you didn’t realize that smoking leaves a bad odor
on your clothes”). The other is to change the relative saliency of current
beliefs such that a different set of beliefs is salient (“Have you forgotten
just how expensive cigarettes are nowadays?”). Obviously, these are not
mutually exclusive possibilities; a persuader might implement all these
strategies.
The Determinants of IN
An individual’s injunctive norm is taken to be based on two elements. The
first is the person’s judgment of the normative expectations of specific
important others (what I think my parents want me to do, what I think my
best friend wants me to do, and so on). The second is the individual’s
motivation to comply with each of those referents (how much I want to do
what my parents think I should, etc.). Specifically, a person’s injunctive
norm is suggested to be a joint function of the normative beliefs that one
ascribes to particular salient others (n_i) and one’s motivation to comply
with those others (m_i). This is expressed algebraically as follows:
IN = ∑ n_i m_i
The second is some troubling empirical results concerning the role of the
motivation-to-comply element. Specifically, ∑ n_i has often been found to
be at least as good a predictor of IN as ∑ n_i m_i, and sometimes a better one;
that is, deleting the motivation-to-comply element does not reduce, and
sometimes even improves, the prediction of IN (for some examples, see
Budd, North, & Spencer, 1984; Doll & Orth, 1993; Kantola, Syme, &
Campbell, 1982; Montaño, Thompson, Taylor, & Mahloch, 1997; Sayeed,
Fishbein, Hornik, Cappella, & Ahern, 2005). It may be that, when a
normative referent is salient, motivation to comply with that referent is
already likely to be reasonably high, and hence a measure of motivation-
to-comply does not add useful information (Fishbein & Ajzen, 2010, p.
143).
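That explanation can be illustrated with a small, entirely hypothetical sketch (respondents, referents, and all numbers invented): when motivation to comply (m_i) is uniformly high across salient referents, ∑ n_i and ∑ n_i m_i order respondents the same way, so the m_i term adds little predictive information.

```python
# Hypothetical illustration: if m_i is uniformly high for salient referents,
# sum(n_i) and sum(n_i * m_i) rank respondents identically.
# n_i: normative belief, -3 .. +3; m_i: motivation to comply, 1 .. 7.
# Referents and numbers are invented for illustration.
respondents = [
    {"parents": (3, 6), "best friend": (2, 7)},    # (n_i, m_i)
    {"parents": (-1, 7), "best friend": (1, 6)},
    {"parents": (1, 6), "best friend": (-2, 7)},
]

for r in respondents:
    sum_n = sum(n for n, m in r.values())
    sum_nm = sum(n * m for n, m in r.values())
    print(sum_n, sum_nm)  # the two indices order the respondents alike
```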
Changing IN
From the perspective of the RAT, one would influence the injunctive norm
by influencing n_i and m_i, in ways precisely parallel to the ways in which
AB is influenced through b_i and e_i. For example, one might attempt to
reconfigure the set of salient referents by adding a new referent or by
increasing the relative salience of an existing potential referent: “Have you
considered what your mother would think about your doing this?” Or one
might attempt to change the normative belief attributed to a current
referent: “Oh, no, you’re wrong—I talked to George, and he thinks you
should go ahead and do this.” Or one might try to change the motivation to
comply with a current referent: “You really shouldn’t worry about what he
thinks—he has no sense when it comes to things like this.”
In some circumstances, messages concerning others’ normative beliefs
are likely simply to be implausible (e.g., “your friends would really be
opposed to your doing this”). Yet it is plainly possible to devise
successful interventions based on something like alterations of the
injunctive norm. For example, Kelly et al. (1992) identified “trendsetters”
who subsequently communicated HIV risk reduction information to gay
men in their communities, producing substantial and sustained risk
reduction behavior; one way of understanding such effects is to see them
as reflecting changes in the receivers’ injunctive norms (see, relatedly,
Vet, de Wit, & Das, 2011). As another example, Prince and Carey’s (2010)
alcohol abuse intervention was able to affect college students’ injunctive-
normative perceptions of whether the typical student approved of
excessive drinking, although not parallel perceptions concerning close
friends’ approval (see also Armitage & Talibudeen, 2010; Reid & Aiken,
2013).
The Determinants of DN
RAT does not yet provide an elaborated account of the determinants of the
descriptive norm (DN). One possibility might be to conceive of the DN as
arising from perceptions that parallel those determining IN (see Fishbein &
Ajzen, 2010, pp. 146–148). That is, a given respondent (or set of
respondents) might have a set of salient descriptive-norm referents
(parallel to the salient injunctive-norm referents)—a set of individuals or
groups whose behavior might be seen as a source of guidance. And the
descriptive-normative beliefs about such referents might be weighted in
some way (giving more weight to some referents than to others), thus
yielding the person’s overall perception of the DN. But these ideas have
not received sustained empirical attention.
Changing DN
Even without a fully explicit account of the determinants of the descriptive
norm, however, it is plain that the DN might most straightforwardly be
influenced by messages that convey DN information. Such messages
might influence intentions either by altering the DN (e.g., in cases where
people don’t know, or misperceive, the DN) or by enhancing the salience
of the DN.
relatedly, Cialdini et al., 2006). One hopes that the accumulation of
research evidence about DN-based interventions will eventuate in
guidelines about how to maximize the effectiveness of such interventions
(see DeJong & Smith, 2013).
Figure 6.9 Assessing individual control beliefs (c_i) and the power of each
control factor (p_i).
Relatively little research attention has been given to RAT’s claims about
the determinants of perceived behavioral control. Many RAT studies have
not collected data about c_i, p_i, and PBC (and of those that have, some do
not report the relevant correlation between ∑ c_i p_i and direct measures of
PBC). The few reported results are not especially encouraging, as the
correlations commonly range from roughly .10 to .35 (see, e.g., Cheung,
Chan, & Wong, 1999; Elliott, Armitage, & Baughan, 2005; Parker,
Manstead, & Stradling, 1995; Povey, Conner, Sparks, James, & Shepherd,
2000; Valois, Desharnais, Godin, Perron, & LeComte, 1993).12 However,
stronger relationships have been reported between direct assessments of
PBC and other belief-based measures, including measures based on
questions about only likelihood of occurrence (i.e., ∑ c_i), questions about
only powerfulness (∑ p_i), questions that appear to involve some amalgam
of likelihood of occurrence and powerfulness considerations (e.g., “Which
of the following reasons would be likely to stop you from exercising
regularly?”), and questions about the perceived importance of various
barriers. Using measures such as these, correlations with PBC measures of
between roughly .25 and .60 have been obtained (Ajzen & Madden, 1986;
Courneya, 1995; Elliott et al., 2005; Estabrooks & Carron, 1998; Godin,
Gagné, & Sheeran, 2004; Godin, Valois, & Lepage, 1993; P. Norman &
Smith, 1995; Sutton, McVey, & Glanz, 1999; Theodorakis, 1994;
Trafimow & Duran, 1998).13
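The belief-based index discussed here, ∑ c_i p_i, parallels the expectancy-value computation for AB. A hypothetical sketch (control factors, scale ranges, and numbers all invented for illustration):

```python
# Hypothetical belief-based PBC index, sum(c_i * p_i).
# c_i: perceived likelihood that a control factor is present, 1 .. 7
# p_i: perceived power of that factor to facilitate (+) or impede (-)
#      the behavior, -3 .. +3
# Factors and numbers are invented for illustration (target: regular exercise).
control_factors = {
    "gym is nearby": (6, 2),          # (c_i, p_i)
    "heavy work schedule": (5, -3),
    "exercise partner available": (3, 2),
}

pbc_index = sum(c * p for c, p in control_factors.values())
print(pbc_index)  # 12 - 15 + 6 = 3: weakly facilitating on balance
```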
Changing PBC
Influencing perceived behavioral control involves addressing the perceived
barriers to and resources for behavioral performance. Unfortunately, the
lack of a well-evidenced account of the determinants of PBC means that
there is less guidance than one might like concerning specific means of
influencing PBC. Even so, there appear to be four broad alternative means
by which a persuader might influence PBC. The appropriateness of each
mechanism will vary depending on the particular target behavior, and
combinations of these approaches may prove more effective than any one
individually, but each offers an avenue to influencing perceptions of
behavioral control.
correctly, and the like (e.g., Calsyn et al., 2010; Yzer, Fisher, Bakker,
Siero, & Misovich, 1998). For other suggestions of the effect of successful
performance on self-efficacy, see Duncan, Duncan, Beauchamp, Wells,
and Ary (2000), Latimer and Ginis (2005a), Luzzo, Hasper, Albert, Bibby,
and Martinelli (1999), and Mishra et al. (1998).
under which a given mechanism is most effective (although here, too,
research is developing; e.g., J. K. Fleming & Ginis, 2004; Hoeken &
Geurts, 2005; Luszczynska & Tryburcy, 2008; Mellor, Barclay, Bulger, &
Kath, 2006).
This strategy can succeed in changing intention only when the relevant
components incline the person in opposite directions. For example, if a
person has a positive AB, a positive IN, and a positive DN, then it won’t
matter how the weights are shifted around among those three elements—
the person will still have a positive intention. Intention can be changed by
altering the weights of these three components only when one of those
three components differs in direction from the other two.18
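The arithmetic behind this point can be sketched with invented numbers: if all components are favorable, no nonnegative weighting of them can produce an unfavorable sum, whereas if one component points the other way, the weighting determines the sign.

```python
# Hypothetical sketch: reweighting AB, IN, and DN cannot flip intention
# when all three point the same direction. Values and weights invented.
ab, in_, dn = 2.0, 1.0, 0.5              # all favorable

for w1, w2, w3 in [(0.6, 0.3, 0.1), (0.1, 0.3, 0.6), (0.2, 0.6, 0.2)]:
    print(round(w1 * ab + w2 * in_ + w3 * dn, 2))  # positive every time

# If one component points the other way, the weighting matters:
ab = -2.0                                 # now an unfavorable attitude
for w1, w2, w3 in [(0.6, 0.2, 0.2), (0.1, 0.45, 0.45)]:
    print(round(w1 * ab + w2 * in_ + w3 * dn, 2))  # sign depends on weights
```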
attitude toward the behavior becomes more positive, so do injunctive
norms and descriptive norms; and so on. As a rule, then, AB, IN, and DN
are all likely to point in the same direction.20 The implication
is that the strategy of influencing their relative weights will not find wide
application.
Correspondence of Measures
First, the degree of correspondence between the measure of intention and
the measure of behavior influences the strength of the observed intention-
behavior relationship (see Courneya, 1994; Fishbein & Ajzen, 2010, pp.
44–47). For instance, a questionnaire item asking about my intention to
buy diet cola at the grocery tonight may well be strongly related to
whether I buy diet cola at the grocery tonight—but it will be less strongly
related to whether I buy Diet Coke (specifically) at the grocery tonight or
to whether I buy diet cola at the cafeteria tomorrow. That is, as the degree
of correspondence between the two measures weakens, the intention
becomes a poorer predictor of (less strongly related to) the behavior. This
methodological consideration emphasizes how different means of
assessing intention and behavior can affect the size of the observed
association.
But there is also a substantive point here, because of the possibility that
some intentions (for some people or for some types of behaviors) are
generally more stable than others (see Sheeran & Abraham, 2003). There
is not yet much accumulated research on this matter, but (for example)
some evidence suggests that for behaviors deemed relatively important
(e.g., ones taken to be closely related to one’s self-image), intentions may
be more stable (compared with corresponding intentions for less important
behaviors) and hence more closely related to action (see Kendzierski &
Whitaker, 1997; Radecki & Jaccard, 1999; Sheeran & Orbell, 2000a). In
any case, the general point to notice is that to the degree that persons’
intentions are unstable, to that same degree intentions may not provide a
good basis for predicting subsequent action.
Explicit Planning
Third, explicit planning about behavioral performance can strengthen the
relationship between intentions and actions. In a large number of studies,
participants who specified when and where they would perform the action
were more likely (than control group participants) to subsequently engage
in the behavior. For example, Sheeran and Orbell (2000b) found that
participants who specified when, where, and how they would make an
appointment for a medical screening test were much more likely to
subsequently attend the screening than those in a control condition. Similar
effects of explicit-planning interventions have been reported for a great
variety of behaviors, including exercise (e.g., Andersson & Moss, 2011),
single-occupancy car use (Armitage, Reid, & Spencer, 2011), parent-
teacher communication (Arriaga & Longoria, 2011), smoking prevention
(Conner & Higgins, 2010), contraceptive adherence (Martin, Slade,
Sheeran, Wright, & Dibble, 2011), voting (Nickerson & Rogers, 2010),
and many others (for some reviews, see Adriaanse, Vinkers, de Ridder,
Hox, & De Wit, 2011; Gollwitzer & Sheeran, 2006; Sheeran, Milne,
Webb, & Gollwitzer, 2005).23
suggestion is that thinking through concrete action plans may convince
people of their ability to successfully perform the behavior. But two
considerations incline against this explanation. First, if planning enhances
PBC, then (given PBC’s influence on intention) planning should also have
the indirect effect of making intentions more positive; but, as just
indicated, that effect seems not to occur. Second, several studies have
found that planning enhances intention-behavior consistency only when
PBC is already relatively high (Koring et al., 2012; Lippke, Wiedemann,
Ziegelmann, Reuter, & Schwarzer, 2009; Schwarzer et al., 2010; Wieber,
Odenthal, & Gollwitzer, 2010; see, relatedly, Koestner et al., 2006); that is,
having high PBC appears to be a necessary condition for explicit-planning
interventions to be effective.
Krantz, 2009; Prestwich et al., 2005.)
exert an influence on conduct that is not mediated by intention, and hence
securing changes in intention may not be sufficient to yield changes in
well-established behavioral routines. On the other hand, these findings also
suggest the durability of persuasive effects that involve establishing such
habits. (For examples and discussion concerning establishing or breaking
habitual or routinized behavior, see Aarts, Paulussen, & Schaalma, 1997;
Adriaanse, Gollwitzer, de Ridder, de Wit, & Kroese, 2011; Allcott &
Rogers, 2012; de Vries, Aarts, & Midden, 2011; Judah, Gardner, &
Aunger, 2013; Lally & Gardner, 2013.)
Second, messages should be further adapted by addressing relevant beliefs
underlying the component to be changed. For example, if AB is the target
component, RAT-based questionnaires can be used to identify differences
between those who already intend to perform the persuader’s advocated
action (“intenders”) and those who do not (“nonintenders”)—differences
in the strength and evaluation of salient beliefs about the behavior (see,
e.g., Fishbein et al., 2002; French & Cooke, 2012; Marin, Marin, Perez-
Stable, Sabogal, & Otero-Sabogal, 1990; Rhodes, Blanchard, Courneya, &
Plotnikoff, 2009; Silk, Weiner, & Parrott, 2005; J. R. Smith &
McSweeney, 2007). Such data can then be used as the basis for
constructing persuasive messages—messages focused on changing those
specific elements known to distinguish intenders and nonintenders (for
examples of RAT-based message design, see Booth-Butterfield & Reger,
2004; Chatzisarantis & Hagger, 2005; Jordan, Piotrowski, Bleakley, &
Mallya, 2012; Jung & Heald, 2009; Milton & Mullan, 2012; Stead, Tagg,
MacKintosh, & Eadie, 2005).27
picture of the influences on intention.29
Commentary
Three general aspects of the RAT merit some comment: consideration of
additional possible predictors, some suggested revisions of the attitudinal
and normative components, and the nature of the perceived control
component.
Anticipated Affect
Behaviors sometimes have affective (feeling-related) consequences—they
can arouse regret, happiness, guilt, and so forth—and people can often
foresee these consequences (as when one scans the offerings at the movie
theatre, looking for a mood-brightening comedy). A good deal of research
now indicates that various anticipated emotions are dependably related to
intentions and behavior. A number of studies have reported such effects
specifically for anticipated regret (e.g., McConnell et al., 2000; for a
review, see Sandberg & Conner, 2008); for instance, Lechner, de Vries,
and Offermans (1997) found that among women who had not previously
undergone mammography, the best predictor of participation intentions
was anticipated regret (the greater the regret anticipated from not
undergoing mammography, the greater the intention to do so). Related
effects have been reported for anticipated guilt (e.g., Birkimer, Johnston,
& Berry, 1993; Steenhaut & Van Kenhove, 2006) and other anticipated
emotions (e.g., S. P. Brown, Cron, & Slocum, 1997; Leone, Perugini, &
Bagozzi, 2005).
provide a distinctive persuasive target (see Cappella, 2007). There is good
evidence that the anticipation of emotion can indeed be influenced,
primarily by heightening the salience of such anticipations. Several studies
have apparently influenced the salience of anticipated emotions simply by
asking about such feelings, with consequent effects on intention or
behavior. For example, Sheeran and Orbell (1999a, Study 4) found that
persons who answered a questionnaire item about regretting not playing
the lottery (and so who presumably were induced to anticipate regret)
intended to buy more lottery tickets than persons who did not answer such
a question. (For related manipulations, see Abraham & Sheeran, 2004,
Study 2; Hetts, Boninger, Armor, Gleicher, & Nathanson, 2000; O’Carroll,
Dryden, Hamilton-Barclay, & Ferguson, 2011; Richard et al., 1996b;
Sandberg & Conner, 2009.) Thus it seems that one straightforward
mechanism for engaging anticipated emotions is simply to invite receivers
to consider how they will feel if they follow (or do not follow) a particular
course of action.
advertising often seem to seek to induce thoughts about anticipated
emotions—including not just the potential positive emotional
consequences of winning but also the regret of not playing (“Suppose
you’ve been assigned the winning mega-prize number, but because you
didn’t enter we had to give the 10 million dollars to someone else”; see
Hetts et al., 2000, p. 346; Landman & Petty, 2000).
All the examples thus far are ones in which a persuader seeks to encourage
the anticipation of particular emotions. But sometimes a persuader might
want to prevent the anticipation of certain emotions. In consumer
purchases, one type of possible anticipated regret involves the prospect of
finding a lower price elsewhere (“If I find a lower price at another store,
I’ll regret buying the product now—hence I’ll postpone my purchase”). An
appropriate price guarantee (in which the seller promises that if the buyer
finds the product offered at a lower price elsewhere, the seller will match
that price) can undermine the creation of that anticipated regret (as
observed by McConnell et al., 2000).
Moral Norms
Another possible addition to the RAT is what can be called moral norms
(also sometimes termed personal norms or moral obligation), that is, a
person’s conception of morally correct or required behavior.
Questionnaires for assessing moral norms have included items concerning
perceived obligation (e.g., “I feel a strong personal obligation to use
energy-saving light bulbs,” Harland, Staats, & Wilke, 1999) or perceived
moral propriety (e.g., “It would be morally wrong for me to use
marijuana,” Conner & McMillan, 1999; “Not using condoms would go
against my principles,” Conner, Graham, & Moore, 1999).
Several studies have found that moral norms can enhance the prediction of
intention above and beyond the predictors already contained in the RAT.
Such increased predictability has been found, for example, in studies of
marijuana use (Conner & McMillan, 1999), condom use (Kok, Hospers,
Harterink, & De Zwart, 2007), environmental behaviors (M. F. Chen &
Tung, 2010; Harland et al., 1999), smoking cessation (Høie, Moan, Rise,
& Larsen, 2012; Moan & Rise, 2005), volunteering (Warburton & Terry,
2000), driving behaviors (Conner, Smith, & McMillan, 2003; Moan &
Rise, 2011), and charitable donations (J. R. Smith & McSweeney, 2007).
(For review discussions concerning moral norms, see Conner & Armitage,
1998, pp. 1441–1444; Manstead, 2000.) One supposes that the inclusion of
moral norms will not always contribute to the prediction of intention, but
there is little firm evidence yet concerning relevant moderating factors (see
Hübner & Kaiser, 2006; Manstead, 2000, pp. 27–28).
The second is the breadth of behaviors across which the proposed addition
is useful. In articulating a general model of behavioral intentions, one
wants evidence suggesting that a proposed addition is broadly useful. It
might be the case that improved prediction results from including variable
X when predicting behavioral intention Y but that result (even if frequently
replicated in studies of Y) does not show that X adds to the prediction of
intention sufficiently broadly (i.e., across enough different behaviors) to
merit the creation of a new general model that includes X.
But it is important also to bear in mind that there is a natural tension
between a generally useful model and accurate prediction in a given
application. In studying a particular behavior, an investigator might add
variables that improve the prediction of that intention, never mind whether
those added variables would be helpful in improving prediction in other
applications. Thus when one’s interest concerns some particular behavior
of substantive interest (as opposed to concerning the elaboration of general
models), RAT might be thought of as providing useful general starting
points. In any particular application, there might be additional predictors
(beyond AB, IN, DN, and PBC) that prove to be useful in illuminating the
behavior of interest—even if those additional factors are not generally
useful (that is, even if not useful in studying other behaviors). And any
such additional predictor, whether general or case-specific, is another
distinguishable potential target for persuaders. (An example of a
specialized RAT-like model is provided by the “technology acceptance
model,” which has a distinctive set of predictors of the intention to use a
new technology; for discussion and reviews, see Davis, Bagozzi, &
Warshaw, 1989; King & He, 2006; Schepers & Wetzels, 2007; Venkatesh
& Bala, 2008. Protection motivation theory, discussed in Chapter 11,
although not explicitly conceived of as a specialized RAT model, is
functionally similar in focusing specifically on protective intentions and
behaviors.)
beliefs about exercise might include both (instrumental) beliefs about
health consequences and (experiential) beliefs about how it makes one
feel. So rather than distinguish two attitudes, one might instead distinguish
two possible kinds of belief underpinnings to attitude. Correspondingly,
persuaders will want to be attentive to the potential importance of
addressing each kind of belief (see Kiviniemi, Voss-Humke, & Seifert,
2007; Lawton, Conner, & McEachan, 2009; Wang, 2009).
This does, however, point to the importance of ensuring that one’s belief-
elicitation procedures evoke both kinds of belief (if both kinds are salient).
Some frequently used belief-elicitation procedures may be more likely to
elicit instrumental beliefs than affective ones (e.g., Sutton et al., 2003). To
ensure good representation of salient beliefs, then, researchers need to be
attentive to such issues. (For discussion of belief elicitation procedures, see
Breivik & Supphellen, 2003; Darker, French, Longdon, Morris, & Eves,
2007; Dean et al., 2006; Middlestadt, 2012.)
The first reason is that the injunctive norm and the descriptive norm appear
to operate in substantively different ways. Manning’s (2009) meta-analytic
review pointed to several differences between IN and DN in their
relationships to intentions and behaviors (e.g., the effects of DN may be
affected by the degree of social approval for the behavior in ways that IN
is not). A variety of other studies have pointed to the independent
operation of DN and IN (e.g., Park, Klein, Smith, & Martell, 2009; Park &
Smith, 2007; Vitoria, Salgueiro, Silva, & de Vries, 2009). For example,
there is evidence suggesting that whereas the effects of injunctive norms
characteristically require some degree of systematic thinking, descriptive
norms can operate in ways that require little cognitive effort (e.g.,
Göckeritz et al., 2010; Jacobson, Mortensen, & Cialdini, 2011; Melnyk,
van Herpen, Fischer, & van Trijp, 2011; for a general discussion, see
Goldstein & Mortensen, 2012). Taken together, such findings argue for
separate treatment of these two factors.
PBC as a Moderator
As noted above, PBC does not seem to be quite like the other determinants
of intention. Instead of straightforwardly influencing intention as the
attitudinal and normative components do, PBC can instead plausibly be
thought to moderate the effects of those variables on intention.
Specifically, PBC can be seen as a necessary, but not sufficient, condition
for the formation of intentions, and hence AB, IN, and DN will influence
intention only when PBC is sufficiently high.
If PBC operates in this sort of moderating fashion, then the usual statistical
tests of RAT should reveal an interaction effect such that the relationships
of intention to AB, IN, and DN would vary depending on the level of PBC
(and, specifically, that as PBC increases, there should be stronger relations
of AB, IN, and DN to intention). There is less empirical evidence on this
question than one might like, because researchers have not often
conducted the appropriate analyses. When the analyses have been
reported, some studies have not found the expected interaction (e.g.,
Crawley, 1990; Giles & Cairns, 1995), but an increasing number of studies
have detected it (e.g., Bansal & Taylor, 2002; Dillard, 2011; Hukkelberg,
Hagtvet, & Kovac, 2014; Kidwell & Jewell, 2003; Park, Klein, Smith, &
Martell, 2009; for a review, see Yzer, 2007). There are substantial
challenges to obtaining empirical evidence indicating such interactions
(e.g., considerable statistical power demands; see Manning, 2009, p. 662;
Yzer, 2007), so reports of nonsignificant interactions are not entirely
unexpected.
In any case, that PBC is not quite on all fours with AB, IN, and DN is
perhaps indicated by the finding that PBC can be significantly negatively
related to intentions. Wall, Hinson, and McKee (1998) observed such a
relationship in a study of excessive drinking; the less control that people
thought they had over excessive drinking, the more likely they were to
report intending to drink to excess. (Relatedly, PBC has been found to be
negatively related to binge drinking—that is, frequent binge drinkers were
less likely than others to think that the behavior was under their control; P.
Norman, Bennett, & Lewis, 1998.) In a similar vein, Conner and McMillan
(1999) found PBC to be significantly negatively related to intentions to use
marijuana. Such results are consistent with Eagly and Chaiken’s (1993, p.
189) supposition that increasing PBC might enhance intention only to the
degree that the behavior is positively evaluated—and these results
certainly indicate that PBC is something rather different from the other
three components.
Indeed, such findings invite the conclusion that PBC can be a repository
for rationalization. “Why do I keep doing these bad things that I know I
shouldn’t [drinking to excess, smoking, and so forth]? Because I can’t help
myself; it’s not really under my control. And why do I fail to do these
good things I know I should do [exercising, recycling, and so on]? Gee, I’d
like to do them, I really would—but I just can’t, it’s out of my hands, not
under my control.” Persuaders may need to address such rationalizations in
order to lay the groundwork for subsequent behavioral change.
Several commentators have suggested that it may be useful to distinguish
different facets of perceived behavioral control (e.g., Armitage & Conner,
1999a, 1999b; Cheung et al., 1999; Estabrooks & Carron, 1998; Rhodes,
Blanchard, & Matheson, 2006; Rodgers, Conner, & Murray, 2008). In
good measure this suggestion has been stimulated by findings indicating
that items used to measure PBC often fall into two distinct clusters (e.g.,
Myers & Horswill, 2006; P. Norman & Hoyle, 2004; Pertl et al., 2010;
Trafimow, Sheeran, Conner, & Finlay, 2002). But the nature of these
clusters is not entirely clear, and the labels used to distinguish them exhibit
considerable variety (for some discussion, see Ajzen, 2002; Fishbein &
Ajzen, 2010, pp. 153–178; Gagné & Godin, 2007; Yzer, 2012b).
Conclusion
Reasoned action theory has undergone extensive empirical examination
and development over time. It is unquestionably the most influential
general framework for understanding the determinants of voluntary action.
And in illuminating the underpinnings of behavioral intention, RAT
provides manifestly useful applications to problems of persuasion,
primarily by identifying potential points of focus for persuasive efforts.
For Review
1. What is the most immediate determinant of voluntary action?
According to reasoned action theory (RAT), what are the four
primary determinants of behavioral intention?
2. What is the attitude toward the behavior (AB)? Explain the difference
between attitude toward the behavior and attitude toward the object.
Describe the sorts of questionnaire items commonly used for
assessing the AB. What is the injunctive norm (IN)? Describe the
sorts of questionnaire items commonly used for assessing the IN.
What is the descriptive norm (DN)? Describe the sorts of
questionnaire items commonly used for assessing the DN. Explain the
difference between the injunctive norm and the descriptive norm.
What is perceived behavioral control (PBC)? Describe the sorts of
questionnaire items commonly used for assessing PBC. Give
examples of circumstances in which PBC might plausibly be the key
barrier to behavioral performance.
3. Do the components above influence intention equally? How are the
relative weights of the components assessed? How is PBC different
from the other three components? How predictable are intentions
from the four components?
4. Describe the five possible ways of influencing intention as identified
by RAT. If persuasion is attempted by changing one of the
components, does that component need to be significantly weighted?
Explain.
5. What are the determinants of the attitude toward the behavior (AB)?
What is belief strength, and how is it assessed? What is belief
evaluation, and how is it assessed? Explain how these combine to
yield the AB. What does the research evidence suggest about the
predictability of the AB from its determinants? Describe alternative
means by which the AB might be influenced. Explain (and give
examples of) changing the strength or evaluation of existing salient
beliefs. Explain (and give examples of) reconfiguring the set of
salient beliefs; identify two ways in which such reconfiguration might
be accomplished.
6. What are the determinants of the injunctive norm (IN)? What are
normative beliefs (and how are they assessed)? What is motivation to
comply (and how is it assessed)? Explain how these combine to yield
the IN. What does the research evidence suggest about the
predictability of the IN from its determinants? Identify two concerns
about the motivation-to-comply element. Describe alternative means
by which the IN might be influenced. Explain (and give examples of)
changing the normative belief or motivation to comply that is
associated with an existing salient referent. Explain (and give
examples of) the two ways of reconfiguring the set of salient
referents. Why is it often difficult to change the IN? Explain how
directing messages to salient referents might lead to changes in the
IN.
7. Describe the current state of understanding of the determinants of the
descriptive norm (DN). Explain how the DN might be changed. Give
an example of a message designed to influence the DN. Describe
some potential pitfalls for DN interventions.
8. Describe RAT’s account of the determinants of perceived behavioral
control (PBC). What is a control belief, and how can it be assessed?
What is the perceived power of a control factor, and how can it be
assessed? What is the current state of the research evidence
concerning the determinants of PBC? Describe four means of
influencing PBC. Explain how directly removing an obstacle to
performance can influence PBC. Distinguish (and give examples of)
two kinds of obstacles a persuader might try to remove. Explain how
successful performance of a behavior can influence PBC; give an
example. Explain how modeling can influence PBC; give an example.
Explain how encouragement can influence PBC; give an example.
9. Explain the strategy of influencing intention by changing the relative
weights of the components. To which of the four components does
this strategy potentially apply? In what sort of circumstance can this
strategy succeed in changing intention? What is the usual pattern of
association (correlation) between the AB, the IN, and the DN? What
does this pattern imply about changing the weights as a means of
influencing intention?
10. What does the research evidence suggest about the predictability of
behavior from intention? Identify three factors influencing the
strength of the relationship between measures of intention and
measures of behavior. Explain how the relationship between
measures of intention and measures of behavior is affected by the
degree of correspondence between the two measures. Do more
specific intention measures lead to higher correlations with behavioral
measures than do less specific intention measures? Explain how the
relationship between measures of intention and measures of behavior
is affected by the temporal stability of intentions. Explain how the
relationship between measures of intention and measures of behavior
is affected by explicit planning about behavioral performance. Give
examples of circumstances in which the task facing the persuader is
that of encouraging persons to act on existing intentions; describe
how a persuader might approach such a task. What explains the effect
of explicit-planning interventions on behavior? Does planning make
intentions more positive? Does planning increase perceived
behavioral control (PBC)? What are implementation intentions? Does
planning encourage the development of implementation intentions?
Identify four factors that might influence the effectiveness of explicit-
planning interventions.
11. What factor might improve the prediction of behavior (beyond the
predictability afforded by intention)? Under what conditions does
assessment of prior behavior improve the prediction of behavior?
12. Describe two general ways in which RAT suggests persuasive
messages might be adapted to recipients. Explain how the weights of
the determinants of intention provide a basis for message adaptation.
How can such weights be misleading? Describe how message
adaptation can be guided by consideration of the beliefs that underlie
the component to be changed.
13. Describe the basis on which additional possible predictors of
intention (beyond AB, IN, DN, and PBC) might be considered for
inclusion in the model. Describe two criteria for assessing such
additions. Identify two specific possible additional predictors. What is
anticipated affect? Can anticipated affect improve the prediction of
intentions beyond the predictability afforded by the four RAT
components? Describe how persuaders might try to influence
anticipated affect. What are moral norms? Can moral norms improve
the prediction of intentions beyond the predictability afforded by the
four RAT components? Describe how persuaders might try to
influence moral norms.
14. How might the attitudinal component (AB) be revised? Describe the
distinction between instrumental and experiential attitudes. Explain
how this distinction might reflect differences in the kinds of beliefs
underlying an attitude. Describe how the injunctive norm (IN) and the
descriptive norm (DN) might be revised by being merged into a
single normative factor. Discuss why such a merger might not be
advisable.
15. Explain how perceived behavioral control (PBC) might moderate the
effects of the other three components. Describe the current state of the
research evidence concerning such a moderating role. Explain how
different kinds of PBC questionnaire items might represent different
kinds of underlying control beliefs.
Notes
1. The viewpoint described in this chapter has appeared in a number of
different forms, with a number of different labels including the “theory of
reasoned action” (Ajzen & Fishbein, 1980), the “theory of planned
behavior” (Ajzen, 1991), the “integrative model of behavioral prediction”
(Fishbein, 2008), and the “extended theory of planned behavior”
(Sieverding, Matterne, & Ciccarello, 2010). Following Fishbein and Ajzen
(2010), this presentation identifies four predictors of intention: attitude
toward the behavior, injunctive norms (formerly called the “subjective
norm”), descriptive norms, and perceived behavioral control. However,
whereas Fishbein and Ajzen’s (2010) presentation has one general
“perceived norms” factor that includes both injunctive and descriptive
norms, the presentation here treats those two normative elements as
distinct.
5. RAT also expects that sometimes PBC will appear to have a direct
relationship to behavior. In circumstances in which actual (not perceived)
behavioral control influences performance of the behavior, then to the
extent that persons’ perceptions of behavioral control are accurate (and so
co-vary with actual behavioral control), to that same extent PBC will be
related to behavior.
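The reasoning in this note can be made concrete with a small simulation (an invented sketch, not part of RAT's formal statement): let actual control drive behavior, and let PBC track actual control with varying accuracy. The correlation between PBC and behavior then rises with perceptual accuracy, as the note predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
actual = rng.normal(size=n)  # actual behavioral control

def pbc_behavior_r(accuracy):
    """Correlation of PBC with behavior when PBC tracks actual
    control with the given accuracy (0 = pure guesswork, 1 = exact)."""
    noise = np.sqrt(1 - accuracy**2) * rng.normal(size=n)
    pbc = accuracy * actual + noise          # perceived control
    behavior = actual + rng.normal(size=n)   # actual control drives behavior
    return np.corrcoef(pbc, behavior)[0, 1]

for acc in (0.0, 0.5, 0.9):
    print(f"accuracy={acc}: r(PBC, behavior) = {pbc_behavior_r(acc):.2f}")
```

With wholly inaccurate perceptions the PBC-behavior correlation is near zero; as accuracy grows, so does the correlation.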
8. Many of the issues that have arisen in the context of belief-based models
of attitude (see Chapter 4)—such as the potentially artifactual contribution
of belief strength scores to the prediction of attitude—can naturally arise
here as well, because the same summative model of attitude, with the same
procedures, is involved (e.g., Gagné & Godin, 2000; O’Sullivan, McGee,
& Keegan, 2008; Steadman & Rutter, 2004; Trafimow, 2007). Although
Armitage and Conner (2001) report a mean correlation of .50 between
behavioral beliefs and AB across 42 studies, it is not clear whether this
represents correlations with ∑biei, ∑bi, ∑ei, or some combination of these.
11. Different types of resources and obstacles will call for different phrasings
of questionnaire items, especially with respect to control beliefs
(likelihood or frequency of occurrence). For instance, although the control
belief associated with bad weather could be assessed by asking
respondents how frequently bad weather occurs where they live (with
scales end-anchored by phrases such as “very frequently” and “very
rarely”), the control belief concerning a lack of facilities might better be
assessed by asking a question such as “I have easy access to exercise
facilities” (with end-anchors such as “true” and “false”). An additional
complexity: Because ∑cipi is a multiplicative composite (as are ∑biei and
∑nimi), the same scale-scoring issues (e.g., unipolar vs. bipolar) can arise
(see Gagné & Godin, 2000).
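To make the scale-scoring issue concrete, here is a minimal sketch with invented ratings for a hypothetical respondent. The same raw answers produce quite different sum-of-products composites depending only on whether the component scales are scored unipolarly (1 to 7) or bipolarly (-3 to +3).

```python
# Hypothetical respondent with two control factors, each rated on
# 7-point scales: c = likelihood the factor occurs, p = its perceived
# power to impede or facilitate the behavior.
raw = [(7, 2), (2, 7)]  # (c, p) pairs, raw 1..7 ratings

def composite(pairs, recode_c, recode_p):
    """Sum-of-products composite (the sum of c*p terms) under a scoring scheme."""
    return sum(recode_c(c) * recode_p(p) for c, p in pairs)

unipolar = lambda x: x       # keep the 1..7 coding
bipolar = lambda x: x - 4    # recode to -3..+3

# Identical raw answers, three different composite scores:
print(composite(raw, unipolar, unipolar))  # 28
print(composite(raw, unipolar, bipolar))   # -8
print(composite(raw, bipolar, bipolar))    # -12
```

Because the scoring choice alone can change the composite's sign and the ordering of respondents, conclusions about correlations with PBC can hinge on it.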
12. The challenge here can be illustrated by Elliott et al.’s (2005) research
concerning speed limit compliance. In preliminary research, 12 possible
control beliefs were identified. In the main study, a belief was retained for
analysis if its control belief (ci) or its power belief (pi) or its product (cipi)
was statistically significantly related to PBC. Only four beliefs survived
this winnowing process.
13. Armitage and Conner (2001) reported a mean correlation of .52 (across
18 studies) between control beliefs and PBC, but it’s not clear whether this
represented correlations of PBC with ∑cipi, ∑ci, ∑pi, or some combination
of these. McEachan et al. (2011) reported a mean correlation of .41 across
27 studies examining either the ∑cipi-PBC correlation or the ∑ci-PBC
correlation, but results were not reported separately for these two sets of
correlations.
14. The lack of any standardized item format for assessing control beliefs
and powerfulness—necessitated by variation in the types of factors under
study (see note 11 above)—has produced considerable diversity in the
details of these belief-based assessments, and it is not always clear how
best to characterize the measures employed. For example, P. Norman and
Smith (1995) presented respondents with a list of seven barriers to
physical activity (such as a lack of time or the distance from facilities) and
asked respondents to indicate, “Which of the following reasons would be
likely to stop you from taking regular exercise?” (with responses given on
a 7-point scale anchored by “extremely likely” and “extremely unlikely”).
Although based on likelihood ratings, the resulting index appears not to
assess control beliefs (the perceived frequency or likelihood of occurrence
of a control factor); it might better be seen as amalgamating assessment of
powerfulness and likelihood of occurrence (notice that the question asks
for the likelihood that the factor will prevent the behavior—not the
likelihood that the factor will occur) or perhaps even as assessing simply
the factor’s powerfulness (the question might be taken to mean, “Which of
the following reasons, if they occurred, would be likely to stop you from
taking regular exercise?”).
15. The effect of including the parking permit or bus pass was only partly
attributable to its removal of transportation obstacles. Apparently, the
inclusion of the permit/pass also helped convince recipients of the value of
making a return visit (“This must be important, otherwise they would not
send me a bus pass”), which in turn helped boost return rates (Marcus et
al., 1992, p. 227). Sometimes persuasion happens in unexpected ways.
17. This strategy (of altering the relative weights of the components so as
to influence intention) is probably in general applicable only to AB, IN,
and DN—not PBC. For example, if a person has a negative AB, negative
IN, and negative DN, then emphasizing “you really do have the ability to
do this” is unlikely to be very persuasive. This is related to the earlier point
that PBC is not quite like the other determinants of intention, in that it
seems more a variable that enables AB, IN, and DN (in the sense that those
variables will influence intention only when PBC is sufficiently high).
However, such an image does suggest that if a person has a positive AB,
positive IN, and positive DN, then emphasizing “you really don’t have the
ability to do this” might help to discourage formation of a positive
intention. For example, imagine trying to discourage a friend from an
excessively expensive purchase by saying (though perhaps not in so many
words) “you really can’t afford this.”
19. As another indication that PBC may operate in a different fashion from
the other three variables, the average correlations of PBC with the other
three components appear to be smaller than the correlations among those
three. In Rivis and Sheeran’s (2003) review, the mean correlations of PBC
with AB, IN, and DN were between .05 and .20; in Manning’s (2009)
review, those mean correlations ranged from roughly .20 to .45.
becomes more positive—that is, less negative), although for each
individual, one component is positive and the other is negative. But insofar
as the persuasive strategy of altering the weights is concerned, the
implication is (generally speaking) the same: Altering the weights of the
components is not likely to be a broadly successful way of changing
intention (because of the unusual requirements for the strategy’s working
—e.g., a dramatic change in the weights may be necessary).
corresponding behavior scores (actual days exercised) of 20, 21, 22, … 28,
29, and 30 (mean = 25.0). That is, the participant with an intention score of
30 had a behavior score of 20, the participant with an intention score of 29
had a behavior score of 21, and so on. The intention-behavior correlation is
-1.00.
22. It has sometimes been suggested that the strength of the intention-
behavior relationship will be affected by the time interval between the
assessment of intention and the assessment of behavior (the idea being that
as the time interval increases, the predictability of behavior from intention
will decrease; see, e.g., Ajzen, 1985; Ajzen & Fishbein, 1980, p. 47). The
supposition is that with an increased time interval between intention
assessment and behavioral assessment, there would be increased
opportunity for a change in intention. As it happens, it is not clear that
variations in the size of the time interval have general effects on the
intention-behavior relationship (for a review, see Randall & Wolff, 1994,
but also see Sheeran & Orbell, 1998, pp. 234–235). But of course if (for a
particular behavior) persons’ intentions are relatively stable across time,
then variations in the interval between assessments would not show much
effect on the intention-behavior relationship. The relevant points to notice
here are that (a) time interval variation is a poor proxy measure of
temporal instability in intentions, and (b) an apparent absence of broad
time interval effects on the strength of the intention-behavior relationship
is not necessarily inconsistent with the hypothesis that temporal instability
of intentions influences the strength of the intention-behavior relationship.
26. Research exploring factors that might systematically affect the relative
influence of the components is unfortunately scattered across a variety of
factors, including self-monitoring (DeBono & Omoto, 1993), culture (Al-
Rafee & Dashti, 2012; Bagozzi, Lee, & Van Loo, 2001), mood (Armitage,
Conner, & Norman, 1999), degree of group identification (Terry & Hogg,
1996), state versus action orientation (Bagozzi, Baumgartner, & Yi, 1992),
private versus collective self-concepts (Ybarra & Trafimow, 1998), and
others (Latimer & Ginis, 2005b; Thuen & Rise, 1994). For some
discussion, see Fishbein and Ajzen (2010, pp. 193–201).
29. Some readers will recognize this as simply an example of the general
point that the presence of multicollinearity conditions the interpretation of
partial coefficients.
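This general point can be illustrated with a minimal numeric sketch (invented data, not drawn from any of the cited studies): when two predictors are nearly collinear, each predicts the outcome well on its own, but the partial coefficients in the joint model split the shared influence in a way that resists substantive interpretation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Two nearly collinear predictors, as AB and IN often are in practice.
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # near-duplicate of x1
y = x1 + x2 + rng.normal(size=n)     # both predictors truly matter equally

def slopes(*cols):
    """OLS partial coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print(np.corrcoef(x1, x2)[0, 1])  # predictor intercorrelation near 1
print(slopes(x1))                 # x1 alone: slope near 2
print(slopes(x1, x2))             # jointly: the ~2 splits between the two
                                  # predictors with large standard errors
```

The joint coefficients still sum to roughly 2, but how that total divides between the two predictors is essentially arbitrary, which is why multicollinearity conditions the interpretation of the individual partial weights.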
on intention. For example, a committed environmentalist’s expectations of
guilt feelings from failing to recycle are probably different from a person’s
anticipated regret about not playing the lottery; the former is more closely
bound up with significant personal identity questions, the latter probably
more with potentially forgone monetary gains. The larger point is that
although there are plainly connections to be explored between moral
norms and anticipated affect, one probably should not fuse these into a
single element. Similarly, moral norms may be seen to be related to
injunctive norms (IN). The IN has a particular referent group (“people who
are important to me”) and a particular target person (the respondent):
“Most people who are important to me think I should/should not engage in
behavior X.” This is not far from “Most people who are important to me
think it is wrong to engage in behavior X,” “Most people think it is wrong
to engage in behavior X,” and “(I think) it is wrong to engage in behavior
X.” But, again, running all these together into a single element is probably
not advisable.
32. Rivis and Sheeran (2003) reported a mean correlation of .38 across 14
studies; Manning (2009) reported a mean correlation of .59 across 12
studies.
33. A contrast between internal and external aspects is not the only
possible way of potentially distinguishing elements of PBC. For example,
Fishbein and Ajzen (2010, pp. 168–177) have suggested that the key
distinction (between facets of PBC) is actually that between perceived
capacity (ability, capability) and perceived autonomy (degree of control), a
distinction they argue is conceptually and empirically independent of a
contrast between internal and external factors. For present purposes, the
question of how best to interpret the various observed PBC item
clusterings does not need to be settled (and there may not be only one
appropriate taxonomy). The point here is that any such clusters might be
seen as representing substantively different sorts of beliefs underlying
PBC, with correspondingly distinct targets for persuasion.
Chapter 7 Stage Models
In the precontemplation stage, the person is not considering changing his or her behavior; for
example, a smoker in the precontemplation stage is not even thinking
about giving up smoking. In the contemplation stage, the person is
thinking about the possibility of behavioral change; a smoker in the
contemplation stage is at least considering quitting. In the planning stage,
the person is making preparations for behavior change; a smoker in the
planning stage is making arrangements to quit (choosing a quit date,
purchasing nicotine gum, and the like). In the action stage, the person has
initiated behavioral change; in this stage, the (now ex-) smoker has
stopped smoking. In the maintenance stage, the person sustains that
behavioral change; an ex-smoker in the maintenance stage has managed to
remain an ex-smoker.1
The TTM does not claim that stage movement is always a straightforward
linear process. To the contrary, it acknowledges that people may move
forward, backslide, cycle back and forth between stages, and so on, in a
complex and dynamic way. However, individuals are not expected to skip
any stage (Prochaska, DiClemente, Velicer, & Rossi, 1992, p. 825) and
hence these stages are offered as representing a general sequence through
which people pass in the course of behavior change.
People are said to progress through these stages using various “processes
of change.” TTM presentations often list 10 such processes (e.g.,
Prochaska, Redding, & Evers, 2002): self-reevaluation (reconsideration of
one’s self-image, such as one’s image as a smoker), environmental
reevaluation (assessment of the effects of one’s behavior on others, as
when a smoker considers the effects of secondhand smoke),
counterconditioning (healthier behaviors that can substitute for the
problem behavior, such as the use of nicotine replacement products),
consciousness raising (increased awareness of causes and effects of, and
cures for, the problem behavior), dramatic relief (the arousal and
attenuation of emotion, as through psychodrama), self-liberation
(willpower, a commitment to change), helping relationships (support for
behavioral change), contingency management (creation of consequences
for choices), stimulus control (removing cues that trigger the problem
behavior, adding cues to trigger the new behavior), and social liberation
(external policies and structures such as smoke-free zones).2
guidance about how to construct effective interventions. For example, if
self-reevaluation were known to be a change process distinctly associated
with the movement from precontemplation to contemplation, then
interventions based on that process might be targeted specifically to
individuals in precontemplation. But what evidence is in hand seems to
suggest that the various processes of change can often be useful across a
number of different stages (see, e.g., Callaghan & Taylor, 2007; Guo,
Aveyard, Fielding, & Sutton, 2009; Rosen, 2000; Segan, Borland, &
Greenwood, 2004).
Decisional Balance
One intriguing aspect of TTM research concerns “decisional balance,” the
person’s assessment of the importance of the pros (advantages, gains) and
cons (disadvantages, losses) associated with the behavior in question.4 The
expectation is that as people progress through the stages, the importance of
the pros of behavior change will come to outweigh the importance of the
cons. In the relevant research, respondents are provided with a
standardized list of pros and cons of the new behavior and are asked to rate
the importance of each to the behavioral decision (e.g., on a scale with
end-anchors such as “not important” and “extremely important”;
Prochaska et al., 1994, p. 42). The relative importance of the various pros
and cons can then straightforwardly be assessed.
balance, confirming that these assessments do vary depending on the
person’s stage (for some reviews, see Di Noia & Prochaska, 2010; Hall &
Rossi, 2008; Prochaska et al., 1994; but see also Sutton, 2005b, pp. 228–
233). As summarized by Di Noia and Prochaska (2010, p. 619): “The
balance between the pros and cons varies across stages. Because
individuals in pre-contemplation are not intending to take action to change
a behavior, the cons outweigh the pros in this stage. Pros increase and cons
decrease from earlier to later stages. In action and maintenance stages, the
pros outweigh the cons. A crossover between the pros and cons occurs
between precontemplation and action stages.”
In some ways this may not be too surprising. If people think that the
importance of the advantages of a given behavior is not greater than the
importance of its disadvantages, they may not be especially motivated to
adopt that behavior. People who have adopted the behavior, on the other
hand, are naturally likely to think the advantages are more important than
the disadvantages.
describe the different amounts of change (in the perceived importance of
the pros and the cons) in terms of the standard deviation of each. The
strong principle is that “progress from precontemplation to action involves
approximately one standard deviation increase in the pros of changing.”
The weak principle is that “progress from precontemplation to action
involves approximately .5 SD decrease in the cons of changing”
(Prochaska, Redding, & Evers, 2002, pp. 105, 106). That is, the perceived
importance of the new behavior’s advantages increases by about one
standard deviation between precontemplation and action, while the
perceived importance of the behavior’s disadvantages decreases by about
half that much. (For reviews and discussion, see Di Noia & Prochaska,
2010; Hall & Rossi, 2008; Prochaska, 1994.)7
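The arithmetic of the two principles can be sketched directly (the SD values below are invented for illustration): given the standard deviations of the pros and cons importance ratings, the principles translate into predicted raw-scale shifts.

```python
def predicted_shifts(sd_pros, sd_cons):
    """Strong principle: pros rise by ~1 SD between precontemplation and
    action; weak principle: cons fall by ~0.5 SD. Returns the predicted
    raw-scale changes given each scale's standard deviation."""
    return {"pros": +1.0 * sd_pros, "cons": -0.5 * sd_cons}

# E.g., if pros ratings have SD 0.9 and cons ratings SD 0.8 (invented
# values), the predicted shifts are:
print(predicted_shifts(0.9, 0.8))  # {'pros': 0.9, 'cons': -0.4}
```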
people’s stages and their decisional balance assessments at one point in
time. The evidence concerning decisional balance thus takes the form of a
finding that (for example) for people who are in precontemplation, the
perceived importance of the pros of the new behavior is not greater than
the perceived importance of the cons, whereas for people who are in action
or post-action stages, the perceived importance of the pros is greater than
that of the cons.
The trouble with such data is that one cannot tell whether these decisional
balance shifts caused the movement from one stage to the next, as opposed
to simply being associated with (or being a result of) stage change. For
example, decisional balance might work in the following way. When the
perceived importance of the pros is greater than the perceived importance
of the cons, the person adopts the new behavior. This activates post-choice
dissonance reduction processes (see Chapter 5), in which the perceived
importance of the pros increases further and the perceived importance of
the cons decreases further—and this post-action period can be the time at
which the asymmetry appears (i.e., after behavioral initiation, not before).
The point is: The current evidence is not sufficient to underwrite a design
principle for interventions aimed at influencing decisional balance. One
cannot tell, on the basis of the kinds of studies in hand, whether the
asymmetry in decisional balance changes is a precursor, correlate, or
consequence of such change. For supporting recommendations about
intervention design, better evidence would be provided by experimental
research. Such work could compare the effectiveness (in moving people
from precontemplation to action) of messages that aimed either at
increasing the perceived importance of the advantages of the new behavior
or at reducing the perceived importance of the disadvantages of the new
behavior. Evidence of this sort is not yet in hand, but would plainly be
welcomed.
Intervention Stage-Matching
As briefly discussed above, the general idea of stage-matching of
interventions (messages, treatments) is straightforward: People in different
stages of change presumably need different interventions to encourage
movement to the next stage. This idea is depicted in an abstract way in
Figure 7.1. Intervention A is adapted to (matched to) persons who are in
Stage 1 and is designed to move people from Stage 1 to Stage 2.
Interventions B and C are matched to persons in Stages 2 and 3,
respectively, because those interventions are meant to move people to the
next stage in the sequence. Thus for people in Stage 1, Intervention A
should be more effective than Interventions B or C (more effective in
moving people to Stage 2); for people in Stages 2 or 3, Intervention B or
C, respectively, should be most effective.
In this regard, the TTM is still a work in progress. For example, there is
room for some uncertainty concerning exactly what makes for matched
and mismatched interventions at various specific stages. And the
difficulties in creating reliable stage assessments should not be
underestimated. At the same time, it is possible to illustrate some of the
relevant issues by considering one specific matter: the question of the
point at which interventions should target the receiver’s self-efficacy
concerning the desired behavior (their perceived ability to perform the
behavior, akin to perceived behavioral control as discussed in Chapter 6).
Self-Efficacy Interventions
The TTM suggests that self-efficacy interventions are not well-suited to
people at earlier stages (e.g., precontemplation), because those people have
not yet decided that they want to adopt the new behavior. At early stages,
interventions should presumably focus on developing positive attitudes
toward the new behavior (by influencing decisional balance). Self-efficacy
interventions are expected to be useful only when people have already
made that initial decision (e.g., are in the planning stage). The reasoning is
that until people have become convinced of the desirability of an action,
there is little reason to worry about whether they think they can perform
the behavior.
The results of such studies have pointed to a rather more complex picture
than that suggested by the TTM. Sometimes the expected pattern of results
has obtained, such that interventions were more effective when matched to
participants’ stages than when mismatched—and, specifically, self-
efficacy interventions were effective for those at later stages but not for
those at earlier stages. For example, Prentice-Dunn, McMath, and Cramer
(2009) found that for encouraging sunscreen use, movement from
precontemplation to contemplation was affected by the nature of threat
appraisal information (about the dangers of sun exposure) but not by the
nature of self-efficacy information (about the ease of using sunscreen). For
individuals in contemplation, however, movement to the preparation stage
was influenced by the presentation of self-efficacy information. That is,
self-efficacy information was effective for influencing people in later
stages but not people in earlier stages.
effective for both pre-intenders and intenders. That is, a self-efficacy–
focused intervention was effective even for people in an early (pre-
intention) stage.
However, there is another reason to have doubts that self-efficacy–focused
interventions should be deployed only for persons in later stages of
change: research on threat appeals (discussed more extensively in Chapter
11). A threat appeal message has two components, one designed to arouse
fear or anxiety about possible negative events or consequences associated
with a possible threat, and one that offers a recommended course of action
to avert or reduce those negative outcomes. So, for instance, a message
might depict the dangers of not wearing a seat belt (the threat component)
as a way of encouraging seat belt use (the recommended action).
One continuing concern is the transtheoretical model’s description of, and
procedures for assessing, the various stages. In TTM research, an
individual’s stage is most commonly assessed based on answers to a small
number of yes-no questions about current behavior, intentions to change,
and the like. But some of the resulting stage classifications can appear
artificial; for example, in some classification systems, an individual
planning to stop smoking in the next 30 days is placed in the preparation
stage, but an individual planning to quit in the next 31 days is described as
being in the contemplation stage (see Sutton, 2000, 2005b, pp. 238–242).
Moreover, the questions used (and the criteria used to subsequently
classify respondents) have varied from study to study, making for
difficulties in assessing the validity of such measures (Littell & Girvin,
2002). The challenge of creating reliable and valid stage assessments has
received considerable attention, with a variety of alternative stage-
measurement procedures being explored, but no easy resolution is in hand
(for some illustrative discussions, see Balmford, Borland, & Burney, 2008;
Bamberg, 2007; de Nooijer, van Assema, de Vet, & Brug, 2005; Lippke,
Ziegelmann, Schwarzer, & Velicer, 2009; Marttila & Nupponen, 2003;
Napper et al., 2008; Richert, Schüz, & Schüz, 2013).12
Other questions have been raised about, for example, whether the
transtheoretical model’s stages in fact constitute mutually exclusive
categories, whether there is good evidence of sequential movement
through the stages, whether the model is sufficiently well-specified to
permit useful empirical examination, and so forth. (For a particularly nice
review, see Sutton, 2005b. For other general critical discussions of the
TTM, see Armitage, 2009; Herzog, 2008; Littell & Girvin, 2002; Sutton,
2000, 2005a; R. West, 2005; Whitelaw, Baldwin, Bunton, & Flynn, 2000.)
If one were to arbitrarily divide the intention continuum into distinct parts,
a stage-like classification might seem to result. For example, on an 11-
point intention scale, one could group together persons with scores of 1
and 2, those with scores of 3 and 4, those with scores of 5, 6, and 7, those
with scores of 8 and 9, and those with scores of 10 and 11—and then
conceive of these as five distinct “stages” along the way to action. But
these would more appropriately be called “pseudo-stages” (Weinstein,
Rothman, & Sutton, 1998). After all, there is no reason to suppose that
people need to pass through these various “stages” in succession; a person
might have a weak intention at one point in time (say, an intention score of
2) but subsequently be convinced to have a much stronger intention (say, a
score of 10) without having to pass through the points in between.
Moreover, two people in the same region of the continuum might need
different kinds of treatments; for example, two people might have equally
negative intentions, but (expressed in terms of RAT) one person could
have a negative attitude while the other person had low perceived
behavioral control—and hence those two people would need different
kinds of interventions.
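The arbitrary binning just described can be sketched in code (a hypothetical illustration only; the function name and the five groupings simply follow the example in the text):

```python
def pseudo_stage(intention: int) -> int:
    """Bin an 11-point intention score into one of five "pseudo-stages".

    Hypothetical illustration of the arbitrary grouping described in the
    text (scores 1-2, 3-4, 5-7, 8-9, and 10-11); nothing about these
    boundaries makes the bins genuine stages.
    """
    bins = [(1, 2), (3, 4), (5, 7), (8, 9), (10, 11)]
    for stage, (low, high) in enumerate(bins, start=1):
        if low <= intention <= high:
            return stage
    raise ValueError("intention score must be between 1 and 11")

# A person can move from a weak intention (score 2, bin 1) to a strong
# one (score 10, bin 5) without ever occupying the intermediate bins:
pseudo_stage(2)   # 1
pseudo_stage(10)  # 5
```

Because a score can jump across bins in a single step, the bins describe states, not a sequence that must be traversed in order.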
example, the appropriate empirical evidence for stage models must consist
of something more than showing that state-matched interventions are more
effective than non-matched (mismatched or unmatched) interventions.
After all, any number of continuum (that is, non-stage) approaches also
contain the idea that state-matched interventions will enjoy greater success
than non-matched interventions. A finding that state-matched interventions
were more effective than non-matched interventions does not show that the
states form a sequence, that is, does not show that the states are stages.14
The transtheoretical model (TTM) is the most-studied stage model of
behavior change, but several others have also been developed. For
example, a number of stage models have concerned consumer purchasing
behavior. These models propose that a sequence of distinct stages precedes
product purchase. The description of these stages varies (in number and
composition), but common elements include awareness of the brand,
knowledge about the brand, attitude toward the brand, brand purchase
intention, purchase, and brand loyalty. These models are often described as
“hierarchy of advertising effects” models, because they putatively identify
a sequence of desired effects of advertising—to make the consumer aware
of the brand, to ensure the consumer has knowledge about the brand, and
so forth. (For one classic treatment, see Lavidge & Steiner, 1961.)
Although there is variation in the details of these hierarchy-of-effects
models, they share a common shortcoming, namely, that there is not good
evidence for the expected temporal sequence of advertising effects (Fennis
& Stroebe, 2010, pp. 29–34; Vakratsas & Ambler, 1999; Weilbacher,
2001).15
2008; Conner, 2008; Leventhal & Mora, 2008; Sutton, 2008.)
However, there is room for doubt about whether the HAPA is in fact a
stage model, as opposed to being something more like a continuum model
(see especially Sutton, 2005b). Notice that the HAPA stages correspond to
different portions of an intention continuum: Pre-intenders presumably
have negative (or insufficiently positive) intentions, and intenders and
actors presumably have positive intentions (but are distinguished by
whether they have acted on those intentions). That is, HAPA seems to
divide the propensity-to-act (intention) continuum into two general
categories: those with negative intentions and those with positive
intentions. This differentiation, however, is arguably insufficiently sharp to
create a genuine stage distinction. The intention continuum has no natural
boundary (the mean? the median? the scale midpoint?), but rather offers
only a blurry distinction between intenders and non-intenders (Abraham,
2008). The implication is that the HAPA’s demarcation of intenders and
non-intenders seems an arbitrary division rather than a genuine stage
distinction.
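The three HAPA categories as characterized here amount to a simple classification by intention and action; a minimal sketch, with hypothetical argument names and labels (not the model's own measurement terminology):

```python
def hapa_category(has_positive_intention: bool, has_acted: bool) -> str:
    """Classify a person into the three HAPA categories as described in
    the text: pre-intenders, intenders, and actors. Hypothetical sketch."""
    if not has_positive_intention:
        return "pre-intender"
    return "actor" if has_acted else "intender"
```

The point of the surrounding discussion is that the first branch, the intention test, has no natural cutoff on a continuous intention scale.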
At the same time, some such differentiation does seem potentially useful.
In this context it may be illuminating to consider the HAPA against the
backdrop of reasoned action theory. From the perspective of RAT, one set
of factors underlies the formation of positive intentions (intention
formation is influenced by attitude toward the behavior, injunctive norms,
descriptive norms, and perceived behavioral control), but the realization of
intentions in action involves different processes.17 In this regard, RAT and
the HAPA look rather similar, in that each (implicitly or explicitly)
recognizes a distinction between getting people to have the desired
intention and getting people to act on that intention.18
Conclusion
Stage models of behavioral change are naturally quite appealing. The idea
that behavior change requires movement through a sequence of stages
sounds very plausible, but it turns out to be surprisingly difficult to
redeem that abstract idea in empirically and conceptually sound ways.
For Review
1. How do stage models describe the process of behavioral change? In
the transtheoretical model (TTM), what stages are distinguished?
Describe each stage: precontemplation, contemplation, planning,
action, and maintenance. Is stage movement a linear process? Explain
why different kinds of treatments (messages) might be needed for
people at different stages of change.
2. What is decisional balance? How does decisional balance change as
people progress through the stages? Is the size of the increase in the
perceived importance of the pros the same as the size of the decrease
in the perceived importance of the cons? Which is larger? Describe
the implications of such a difference for the design of effective
interventions. Explain why the existing research evidence about
decisional balance does not necessarily support recommendations
about intervention design.
3. Explain the general idea of stage-matching, and identify challenges in
comparing the effectiveness of stage-matched and stage-mismatched
interventions. According to the TTM, when should self-efficacy
interventions be most effective, at early stages or later ones? What
does the research evidence indicate about the appropriate timing of
self-efficacy interventions?
4. Describe some concerns that have been raised about the TTM’s
procedures for assessing stages. How extensive is the empirical
evidence for the superiority of TTM-based stage-matched
interventions over nonmatched interventions?
5. Describe the distinctive claims of stage models. Explain the
difference between stage models and continuum models of behavior.
How can dividing an intention continuum into segments produce the
appearance of a stage-like set of categories? Why is such a set of
categories not a genuine stage model? Describe the difference
between thinking of recipients as being in different states and
thinking of them as being in different stages. Why would finding that
state-matched interventions are more effective than nonmatched
interventions not necessarily show that the states are stages? Explain
why, according to stage models, some intervention sequences should
be more effective than others.
6. Describe some stage models other than the transtheoretical model (the
TTM). Sketch the kinds of stages included in a hierarchy-of-
advertising-effects model. Describe the stages identified by the
Health Action Process Approach (HAPA). Is the HAPA a
straightforward stage model? Explain.
Notes
1. The number and description of the stages can vary, especially depending
on the application area. For example, some TTM presentations focused on
smoking cessation have described the maintenance stage as one in which
the ex-smoker is actively working to prevent relapse, and have added a
subsequent “termination” stage in which the ex-smoker has achieved
complete self-control (Prochaska, Redding, & Evers, 2002).
5. The TTM’s research evidence focuses exclusively on the perceived
importance of pros and cons, but—as suggested by belief-based models of
attitude (see Chapter 4)—other properties of perceived advantages and
disadvantages (such as evaluation or perceived likelihood) might arguably
be of at least as much interest to persuaders. It should not pass unnoticed
that, as discussed in Chapter 4, when the evaluation and strength
(likelihood) of salient beliefs are used to predict attitude, belief importance
(the property apparently of key interest to the TTM) does not add to the
predictability of attitude.
produced statistically significant differences in perceived self-efficacy); if
studies with “failed” manipulation checks had been included, the observed
mean effect would probably be smaller.
11. One might also suppose that an assumption of some strict temporal
segregation between attitudinal considerations (the person’s evaluation of
the new behavior) and self-efficacy considerations (the person’s
perceptions of their ability to perform the action) is implausible. The
implicit suggestion of the TTM is that if people have a negative attitude
toward the new behavior, then they won’t even think about self-efficacy
considerations; the only time people think about self-efficacy (according to
this view) is when they already have a positive attitude (and so have
reached a stage at which self-efficacy becomes relevant). But if this were
the case, then no one would ever simultaneously express a negative
attitude and self-efficacy doubts—no one would ever say, for example, “I
don’t think exercise is all that valuable, and besides I don’t have time for it
anyway.” And yet this seems like a perfectly natural state of mind. In fact,
in a circumstance in which people have both a negative attitude toward the
desired behavior and doubts about their ability to perform it, persuaders
might well sometimes want to address those self-efficacy concerns first.
For instance, low perceived self-efficacy might sometimes be a
rationalization device, a way of justifying failing to do something that
people know they should do (“the recycling rules are so hard to
understand”). And when low perceived self-efficacy does function this
way, then removal of that rationalization could be crucially important for
encouraging behavioral change.
12. One potential issue with stage models is that stages can be defined in
ways that evade (or submerge) certain empirical questions. As one
illustration, stage definitions can guarantee a certain sequencing of stages.
For example, suppose a “precontemplation” stage is defined in such a way
that once a person has thought about adopting the new behavior, by
definition that person cannot ever be in precontemplation again. In that
case, a “contemplation” stage would by definition follow precontemplation
—which would mean that no empirical evidence would be needed to
confirm that temporal relation. Similarly, stage definitions can guarantee
that a given stage cannot possibly be skipped. For example, if part of the
definition of a “planning” stage is that the person must have already at
least been thinking about adopting the new behavior (i.e., must already
have been in a “contemplation” stage), then it would be conceptually
impossible for a person to reach the “planning” stage without passing
through the “contemplation” stage. The larger point here is that the
development of reliable and valid stage assessments is bound up with
definitional questions—and certain ways of defining stages can transform
what would look to be empirical questions (“Does stage Y always follow
stage X?”) into definitional matters (“Stage Y, by definition, follows stage
X”).
14. As a parallel case: One could imagine devising different messages that
were matched to differing political dispositions (say, Republicans and
Democrats in the U.S.). Finding that Republicans were more persuaded by
messages matched to Republicans than by messages matched to
Democrats (with the reverse true for Democrats) would not show that the
two categories (Democrat and Republican) formed any sort of temporal
sequence. Similarly here: Finding that people in the contemplation
category were more persuaded by interventions matched to contemplation
than by messages matched to planning (with the reverse true for people in
the planning category) would not show that the two categories form any
sort of temporal sequence. Showing that stage-matched interventions are
more successful than nonmatched interventions is a minimum requirement
for a valid stage theory but does not satisfy all the burdens of proof
incurred by stage models.
than if they know it exists but believe it’s not very good.
16. Some presentations of the HAPA have described just two stages—a
motivational stage (in which persons need to develop the appropriate
motivations to change) and a volitional stage (in which people already
have the relevant motivations)—but with the volitional stage further
divided between people who merely intend to adopt the new behavior and
those who have already adopted that behavior. Because such an analysis
eventuates in three relevant categories, it seems better to describe the
HAPA as proposing three (not two) stages. However, the “two-stage”
language does draw attention to the similarity between those in the
intention stage and those in the action stage, namely, that they all have the
requisite intention (which distinguishes them from those in the non-
intention stage).
18. This distinction, or something like it, has a long intellectual history,
dating back at least to 18th-century faculty psychology. A distinction
between “conviction” (influencing belief, associated with the faculty of
understanding) and “persuasion” (influencing action, associated with the
faculty of the will) can be found in George Campbell’s 1776 The
Philosophy of Rhetoric and even more sharply formulated in Bishop
Richard Whately’s 1828 The Elements of Rhetoric. However, this
conviction-persuasion distinction needlessly confused distinctions between
communicative purposes and distinctions between communicative means,
with unhappy consequences (O’Keefe, 2012a).
Chapter 8 Elaboration Likelihood Model
degree of elaboration, two types of persuasion process can be engaged
(one involving systematic thinking and the other involving cognitive
shortcuts)—with different factors influencing persuasive outcomes in each.
In the sections that follow, the nature of variations in the degree of
elaboration is described, factors influencing the degree of elaboration are
discussed, the two persuasion processes are described, and then various
complexities of persuasion processes are considered.
to list the thoughts that occurred to them during the communication (for a
more detailed description, see Cacioppo, Harkins, & Petty, 1981, pp. 38–
47; for a broad review of such techniques, see Cacioppo, von Hippel, &
Ernst, 1997). The number of issue-relevant thoughts reported is
presumably at least a rough index of the amount of issue-relevant
thinking.2 Of course, the reported thoughts can also be classified in any
number of ways (e.g., according to their substantive content or according
to what appeared to provoke them); one classification obviously relevant
to the illumination of persuasive effects is one that categorizes thoughts
according to their favorability to the position being advocated by the
message.
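A tally of thoughts coded this way might be sketched as follows (hypothetical; the +1/0/−1 coding convention is one illustrative scheme, not the specific procedure of the cited studies):

```python
def thought_tally(coded_thoughts):
    """Summarize a thought-listing protocol.

    `coded_thoughts` is a list of codes assigned by a rater: +1 for a
    thought favorable to the advocated position, -1 for an unfavorable
    thought, 0 for a neutral or irrelevant one. Returns the number of
    thoughts (a rough index of the amount of issue-relevant thinking)
    and the net favorability (an index of the valence of elaboration).
    """
    return len(coded_thoughts), sum(coded_thoughts)

count, net = thought_tally([+1, -1, +1, 0, +1])  # 5 thoughts, net +2
```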
they like the communicator or by whether they find the communicator
credible. That is, receivers may rely on various peripheral cues (such as
communicator credibility) as guides to attitude and belief, rather than
engaging in extensive issue-relevant thinking.
of the communicator (high vs. low).
Each of these claims enjoys both some supportive direct research evidence
and some previous research that can be interpreted as indicating such
effects. For example, Petty, Cacioppo, and Schumann (1983) reported that
attitudes were more strongly correlated with intentions when the attitudes
were formed under conditions of high (as opposed to low) personal
relevance of the topic. Cacioppo, Petty, Kao, and Rodriguez (1986) found
that persons high in need for cognition (and so presumably higher in
elaboration motivation) displayed greater attitude-intention and attitude-
behavior consistency than did persons lower in need for cognition.
Verplanken (1991) reported greater persistence of attitudes and greater
attitude-intention consistency under conditions of high (rather than low)
elaboration likelihood (as indicated by topic relevance and need for
cognition). MacKenzie and Spreng (1992) experimentally varied
elaboration motivation and found stronger attitude-intention relationships
under conditions of higher (as opposed to lower) elaboration motivation.
(For other illustrations, see Gasco, Briñol, & Horcajo, 2010; Haugtvedt,
Schumann, Schneier, & Warren, 1994. For some general reviews and
discussions, see Petty & Cacioppo, 1986a, pp. 173–195; Petty, Haugtvedt,
& Smith, 1995; Petty & Wegener, 1999, pp. 61–63.)
These effects may seem intuitively plausible (in the sense that the greater
issue-relevant thinking affiliated with central route processes might well
be expected to yield attitudes that are stronger in these ways), but the
mechanism by which these outcomes arise is not entirely well understood
(for some discussion, see Petty, Haugtvedt, & Smith, 1995, pp. 119–123).3
Nevertheless, there is good reason for persuaders to presume that
persuasion accomplished through high elaboration is likely to be more
enduring (less likely to decay through time, less likely to succumb to
counterpersuasion) and to be more directive of behavior than is persuasion
accomplished through low elaboration.
Given that the underlying persuasion process varies depending on the level
of elaboration, and given that the different routes to persuasion have these
different consequences, it becomes important to consider what factors
influence the degree of elaboration that receivers are likely to undertake.
Factors Affecting Elaboration Motivation
A variety of factors have received research attention as influences on
receivers’ motivation to engage in issue-relevant thinking, including the
receiver’s mood (e.g., Banas, Turner, & Shulman, 2012; Bless, Mackie, &
Schwarz, 1992; Bohner & Weinerth, 2001; Côté, 2005; Ziegler, 2010; but
also see Bless & Schwarz, 1999),4 attitudinal ambivalence (i.e., the degree
to which the attitude is based on a mixture of positive and negative
elements; e.g., Hänze, 2001; Jonas, Diehl, & Brömer, 1997; Maio, Esses,
& Bell, 2000), and perceived information sufficiency (B. B. Johnson,
2005; Trumbo, 1999). Two influences are discussed here as illustrative:
the personal relevance of the topic to the receiver and the receiver’s degree
of need for cognition.
a graduation requirement—either at the receivers’ college (the high-
relevance condition) or at a different, distant college (the low-relevance
condition). With this form of manipulation, receivers in parallel high- and
low-relevance conditions could hear messages identical in every respect
(e.g., with the same arguments and evidence) save for the name of the
college involved, thus simplifying interpretation of experimental findings
(Petty & Cacioppo, 1979b).
As one might suppose, a good deal of research suggests that need for
cognition influences elaboration likelihood. Persons high in NFC are likely
to report a larger number of issue-relevant thoughts (following message
exposure) than are persons low in need for cognition (e.g., S. M. Smith,
Haugtvedt, & Petty, 1994; for a review, see Cacioppo, Petty, et al., 1996,
pp. 230–231). Relatedly, those high in NFC are more influenced by the
quality of the message’s arguments than are those low in need for
cognition (e.g., Axsom, Yates, & Chaiken, 1987; Green, Garst, Brock, &
Chung, 2006; for a review, see Cacioppo, Petty, et al., 1996, pp. 229–
230).7 Such findings, of course, are consistent with the supposition that
persons high in need for cognition have generally greater motivation for
engaging in issue-relevant thinking than do persons low in need for
cognition.8
Distraction
In this context, distraction refers to the presence of some distracting
stimulus or task accompanying a persuasive message. Research concerning
the effects of such distractions has used a variety of forms of distraction,
including having an audio message be accompanied by static or beep
sounds and having receivers monitor a bank of flashing lights, copy a list
of two-digit numbers, or record the location of an X flashing from time to
time on a screen in front of them (for a general discussion of such
manipulations, see Petty & Brock, 1981).
to engage in favorable elaboration (that is, to predominantly have thoughts
favoring the advocated position), then distraction, by interfering with such
elaboration, would presumably reduce persuasive effectiveness. But if a
receiver would ordinarily be inclined to predominantly have thoughts
unfavorable to the position advocated, then distraction should presumably
enhance the success of the message (by interfering with the having of
those unfavorable thoughts).9
Prior Knowledge
A second factor influencing elaboration ability is the receiver’s prior
knowledge about the persuasive topic: The more extensive such prior
knowledge, the better able the receiver is to engage in issue-relevant
thinking. Several studies have indicated that as the extent of receivers’
prior knowledge increases, more issue-relevant thoughts occur, the
influence of argument strength on persuasive effects increases, and the
influence of peripheral cues (such as source likability and message length)
decreases (e.g., Averbeck, Jones, & Robertson, 2011; Laczniak, Muehling,
& Carlson, 1991; W. Wood, 1982; W. Wood & Kallgren, 1988; W. Wood,
Kallgren, & Preisler, 1985).10 As one might expect, this suggests that
when receivers with extensive prior knowledge encounter a
counterattitudinal message, such receivers are better able to generate
counterarguments and hence are less likely to be persuaded (in comparison
with receivers with less extensive topic knowledge). But receivers with
extensive prior knowledge are also more affected by variations in message
argument strength; hence increasing the strength of a counterattitudinal
message’s arguments will presumably enhance persuasion for receivers
with extensive knowledge but will have little effect on receivers with less
extensive knowledge.11
Summary
As should be apparent, a variety of factors can influence the likelihood of
elaboration in a given circumstance by affecting the motivation or the
ability to engage in issue-relevant thinking. With variations in elaboration
likelihood, of course, different sorts of persuasion processes are engaged:
As elaboration increases, peripheral cues have diminished effects on
persuasive outcomes, and central route processes play correspondingly
greater roles. But the factors influencing persuasive effects are different,
depending on whether central or peripheral routes to persuasion are
followed. Thus the next two sections consider what factors influence
persuasive outcomes when elaboration likelihood is relatively high and
when it is relatively low.
about the advocated position, the message will presumably be relatively
successful in eliciting attitude change in the desired direction; but if the
receiver has predominantly unfavorable thoughts, then the message will
presumably be relatively unsuccessful. Thus the question becomes: Given
relatively high elaboration, what influences the predominant valence (the
overall evaluative direction) of elaboration?
Argument Strength
Recall that under conditions of high elaboration, receivers are motivated
(and able) to engage in extensive issue-relevant thinking, including careful
examination of the message’s arguments. Presumably, then, the valence of
receivers’ elaboration will depend (at least in part) on the results of such
scrutiny: The more favorable the reactions evoked by that scrutiny of
message material, the more effective the message should be. If a receiver’s
examination of the message’s arguments reveals shoddy arguments and
bad evidence, one presumably expects little persuasion; but a different
outcome would be expected if the message contains powerful arguments,
sound reasoning, good evidence, and the like.
That is, under conditions of high elaboration, the strength (quality) of the
message’s arguments should influence the evaluative direction of
elaboration (and hence should influence persuasive success). Many
investigations have reported results indicating just such effects (e.g., Lee,
2008; Levitan & Visser, 2008; Petty & Cacioppo, 1979b; Petty, Cacioppo,
& Schumann, 1983; for complexities, see Park, Levine, Westermann,
Orfgen, & Foregger, 2007).
Influences on Persuasive Effects Under
Conditions of Low Elaboration: Peripheral
Routes to Persuasion
heuristics have received relatively more extensive research attention: the
credibility, liking, and consensus heuristics.14
Credibility Heuristic
One heuristic principle is based on the apparent credibility of the
communicator and amounts to a belief that “statements by credible sources
can be trusted” (for alternative expressions of related ideas, see Chaiken,
1987, p. 4; Cialdini, 1987, p. 175). As discussed in Chapter 10, studies
have indicated that as the personal relevance of the topic to the receiver
increases, the effects of communicator credibility diminish (e.g., Byrne,
Guillory, Mathios, Avery, & Hart, 2012; H. H. Johnson & Scileppi, 1969;
Petty, Cacioppo, & Goldman, 1981; Rhine & Severance, 1970). Similar
results have been obtained when elaboration likelihood has been varied in
other ways (e.g., Janssen, Fennis, Pruyn, & Vohs, 2008; Kumkale,
Albarracín, & Seignourel, 2010). Thus, consistent with ELM expectations,
the peripheral cue of credibility has been found to have greater impact on
persuasive outcomes when elaboration likelihood is relatively low.
Moreover, some research suggests that variations in the salience of
credibility cues lead to corresponding variations in credibility’s effects
(e.g., Andreoli & Worchel, 1978). All told, there looks to be good
evidence for the existence of a credibility heuristic in persuasion.
Liking Heuristic
A second heuristic principle is based on how well the receiver likes the
communicator and might be expressed by beliefs such as these: “People
should agree with people they like” and “People I like usually have correct
opinions” (for alternative formulations of this heuristic, see Chaiken, 1987,
p. 4; Cialdini, 1987, p. 178). When this heuristic is invoked, liked sources
should prove more persuasive than disliked sources. As discussed in more
detail in Chapter 10, the research evidence does suggest that the ordinary
advantage of liked communicators over disliked communicators
diminishes as the personal relevance of the topic to the receiver increases
(e.g., Chaiken, 1980, Experiment 1; Petty, Cacioppo, & Schumann, 1983).
Confirming findings have been obtained in studies in which elaboration
likelihood varied in other ways (e.g., Kang & Kerr, 2006; W. Wood &
Kallgren, 1988) and in studies varying the salience of liking cues (e.g.,
Chaiken & Eagly, 1983): As elaboration likelihood declines or cue
saliency increases, the impact of liking cues on persuasion increases.
Taken together, then, these studies point to the operation of a liking
heuristic that can influence persuasive effects.
Consensus Heuristic
A third heuristic principle is based on the reactions of other people to the
message and could be expressed as a belief that “if other people believe it,
then it’s probably true” (for variant phrasings of such a heuristic, see
Chaiken, 1987, p. 4; Cialdini, 1987, p. 174). When this heuristic is
employed, the approving reactions of others should enhance message
effectiveness (and disapproving reactions should impair effectiveness). A
number of studies now indicate the operation of such a consensus heuristic
in persuasion (for a more careful review, see Axsom et al., 1987). For
example, several investigations have found that receivers are less
persuaded when they overhear an audience expressing disapproval (versus
approval) of the communicator’s message (e.g., Landy, 1972; Silverthorne
& Mazmanian, 1975). (For some related work, see Darke et al., 1998. For
complexities, see Beatty & Kruger, 1978; Hilmert, Kulik, & Christenfeld,
2006; Hodson, Maio, & Esses, 2001; Mercier & Strickland, 2012.)
Other Heuristics
Various other principles have been suggested as heuristics that receivers
may employ in reacting to persuasive messages (e.g., Chang, 2004;
Forehand, Gastil, & Smith, 2004). For example, it may be that the number
of arguments in the message (Chaiken, 1980, Experiment 2) or the sheer
length of the message (W. Wood et al., 1985) can serve as cues that
engage corresponding heuristic principles (“the more arguments, the
better” or “the longer the message, the better its position must be”). But for
the most part, relatively little research evidence concerns such heuristics,
and hence confident conclusions are perhaps premature.
principles such as the credibility, liking, and consensus heuristics.
The ELM emphasizes that a given variable need not play only one of these
roles (e.g., Petty & Cacioppo, 1986a, pp. 204–215; Petty & Wegener,
1998a, 1999). In different circumstances, a variable might affect
persuasion through different mechanisms. For example, consider the
variable of message length (the simple length of a written message). This
might serve as a peripheral cue that activates a length-based heuristic (such
as “longer messages probably have lots of good reasons for the advocated
view”; see W. Wood et al., 1985). When message length operates this way,
longer messages will be more persuasive than shorter ones.
but diminish persuasion for recipients who were engaged in close scrutiny
(e.g., Friedrich, Fetherstonhaugh, Casey, & Gallagher, 1996).
Obviously, the key question that arises concerns specifying exactly when a
variable is likely to play one or another role. The ELM offers a general
rule of thumb for anticipating the likely function for a given variable,
based on the overall likelihood of elaboration (Petty, Wegener, Fabrigar,
Priester, & Cacioppo, 1993, p. 354). When elaboration likelihood is low,
then if a variable affects attitude change, it most likely does so by serving
as a peripheral cue. When elaboration likelihood is high, then any effects
of a variable on attitude change probably come about through influencing
elaboration valence. When elaboration likelihood is moderate, then the
effects of a variable on attitude change are likely to arise from affecting
the degree of elaboration (e.g., when some aspect of the persuasive
situation suggests that closer scrutiny of the message will be worthwhile).
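That rule of thumb can be restated as a simple lookup (a hypothetical paraphrase of the text, not an ELM formalism):

```python
def likely_mechanism(elaboration_likelihood: str) -> str:
    """Return the most likely mechanism by which a variable affects
    attitude change, given the overall likelihood of elaboration.
    Hypothetical restatement of the rule of thumb attributed to
    Petty et al. (1993) in the text."""
    mechanisms = {
        "low": "serving as a peripheral cue",
        "moderate": "affecting the degree of elaboration",
        "high": "influencing the valence of elaboration",
    }
    return mechanisms[elaboration_likelihood]
```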
The larger point is that the ELM draws attention to the mistake of thinking
that a given variable can influence persuasive outcomes through only one
pathway. Even in the absence of some well-articulated account of the
circumstances under which a given variable will serve in this or that
persuasion role, persuaders will be well-advised to be alert to such
complexities—and the ELM’s underscoring of these intricacies of
persuasion represents an especially important contribution.
Commentary
The ELM has stimulated a great deal of research. It is noteworthy that the
ELM provides a framework that offers the prospect of reconciling
apparently competing findings about the role played in persuasion by
various factors. For example, why might the receiver’s liking for the
communicator sometimes exert a large influence on persuasive outcomes
and sometimes little? One possibility is simply that as elaboration varies,
so will the impact of a simple decision rule such as the liking heuristic.
Indeed, the ELM’s capacity to account for conflicting findings from earlier
research makes it an especially important theoretical framework and
unquestionably the most influential recent theoretical development in
persuasion research. Even so, several facets of ELM theory and research
require some commentary.
2007).
In the present context, the point to be borne in mind is that the ELM
conception of involvement is a specific one, namely, personal relevance—
and other kinds of “involvement” might not have the same pattern of
effects as is associated with personal relevance.
Argument Strength
In ELM research, argument strength (argument quality) variations have
been defined in an unusual way: in terms of persuasive effects under
conditions of high elaboration. To obtain experimental messages
containing strong or weak arguments, ELM researchers commonly pretest
various messages: A strong-argument message is defined as “one
containing arguments such that when subjects are instructed to think about
the message, the thoughts that they generate are predominantly favorable,”
and a weak-argument message is defined as one in which the arguments
“are such that when subjects are instructed to think about them, the
thoughts that they generate are predominantly unfavorable” (Petty &
Cacioppo, 1986a, p. 32). That is, a high-quality argument is one that, in
pretesting, is relatively more persuasive (compared to a low-quality
argument) under conditions of high elaboration.
Thus to say, “Under conditions of high elaboration, strong arguments have
been found to be more effective than weak arguments” is rather like
saying, “Bachelors have been found to be unmarried.” No empirical
research is needed to confirm this claim (and indeed, there would be
something wrong with any empirical research that seemed to disconfirm
such claims). Notice, thus, how misleading the following statement might
be: “A message with strong arguments should tend to produce more
agreement when it is scrutinized carefully than when scrutiny is low, but a
message with weak arguments should tend to produce less overall
agreement when scrutiny is high rather than low” (Petty & Cacioppo,
1986a, p. 44). Appearances to the contrary, these are not empirical
predictions; these are not expectations that might be disconfirmed by
empirical results. If a message does not produce more agreement when
scrutinized carefully than when scrutiny is low, then (by definition) it
cannot possibly be a message with strong arguments.18
This way of defining argument quality reflects the role that argument
quality has played in ELM research designs. In ELM research, argument
quality variations have been used “primarily as a methodological tool to
examine whether some other variable increases or decreases message
scrutiny, not to examine the determinants of argument cogency per se”
(Petty & Wegener, 1998a, p. 352). When argument quality is
operationalized as the ELM has defined it, argument quality variations
provide simply a means of indirectly assessing the amount of elaboration
that has occurred. Thus to see whether a given factor influences
elaboration, one can examine the difference in the relative persuasiveness
of high- and low-quality arguments as that factor varies: High- and low-
quality arguments will be most different in persuasiveness precisely when
message scrutiny is high, and hence examining the size of the difference in
persuasiveness between high- and low-quality arguments provides a means
of assessing the degree of message scrutiny. For instance, one might detect
the effect of distraction on elaboration by noticing that when distraction is
present, there is relatively little difference in the persuasiveness of high-
quality arguments and low-quality arguments, but that without distraction,
there is a relatively large difference in persuasiveness. Such a pattern of
effects presumably reflects distraction’s effect on elaboration, because—
by definition—high- and low-quality arguments differ in persuasiveness
when elaboration is high.
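This logic can be sketched with invented numbers. In the hypothetical pretest data below, the strong-minus-weak persuasiveness gap serves as the index of elaboration: the gap shrinks under distraction, which is read as evidence that distraction reduced message scrutiny. All condition names and means are illustrative assumptions, not data from any ELM study.

```python
# Hypothetical mean attitude scores (higher = more persuaded);
# all numbers are invented for illustration.
means = {
    ("no distraction", "strong"): 5.8,
    ("no distraction", "weak"):   3.1,
    ("distraction",    "strong"): 4.4,
    ("distraction",    "weak"):   4.1,
}

def elaboration_index(condition):
    """Strong-minus-weak persuasiveness gap: a larger gap indicates
    more message scrutiny (elaboration) in that condition."""
    return means[(condition, "strong")] - means[(condition, "weak")]

for cond in ("no distraction", "distraction"):
    print(f"{cond}: argument-quality effect = {elaboration_index(cond):.1f}")
# A larger gap without distraction is taken to show that distraction
# reduced elaboration, because (by definition) strong and weak arguments
# differ in persuasiveness only when elaboration is high.
```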
But this way of defining argument quality means the ELM has a curious
lacuna. Consider the plight of a persuader who seeks advice about how to
construct an effective counterattitudinal persuasive message under
conditions of high elaboration. Presumably the ELM’s advice would be
“use strong arguments.” But because argument strength has been defined
in terms of effects (a strong argument is one that persuades under
conditions of high elaboration), this advice amounts to saying “to be
persuasive under conditions of high elaboration, use arguments that will be
persuasive”—which is obviously unhelpful (for some elaboration of this
line of reasoning, see O’Keefe, 2003).
The identification of this key ingredient permits a redescription of the
research findings concerning the effects of argument strength in terms of
outcome desirability, as follows: When elaboration is low, the
persuasiveness of a message is relatively unaffected by variation in the
perceived desirability of the outcomes, whereas when elaboration is high,
persuasive success is significantly influenced by the perceived desirability
of the outcomes. That is, under conditions of high elaboration, receivers
are led to have more positive thoughts about the advocated view when the
message’s arguments indicate that the advocated view will have outcomes
that the receivers think are relatively desirable than they do when the
arguments point to outcomes that are not so desirable—but this difference
is muted under conditions of low elaboration.19
What is not yet clear is whether there are other message variations that
might function in a way similar to outcome desirability, that is, ones that
serve as an influence on elaboration valence. Put somewhat differently, the
question is: Are there other quality-related features of persuasive appeals
whose variation makes relatively little difference to persuasive outcomes
under conditions of low elaboration, but whose variation makes a more
substantial difference under conditions of high elaboration?20
consequence-desirability variations to produce differential effects,
consequence-likelihood variations did not produce parallel effects.21
In any case, the general question remains open: There may be other
quality-related message characteristics (in addition to outcome
desirability) that enhance message persuasiveness under conditions of high
elaboration. Identification of such message properties would represent an
important advance in the understanding of persuasion generally and
argument quality specifically.
cues and message arguments simply serve as evidence bearing on the
receiver’s conclusion about whether to accept the advocated view.
One way of expressing this equivalence is to see that both peripheral cues
and message arguments can be understood as supplying premises that
permit the receiver to complete a conditional (“if-then”) form of reasoning.
In the case of peripheral cues, the reasoning can be exemplified by a
receiver who believes that “if a statement comes from an expert, the
statement is correct.” A message from a source that the receiver recognizes
as an expert, then, satisfies the antecedent condition (a statement coming
from an expert), and hence the receiver reasons to the appropriate
conclusion (that the statement is correct). In the case of message
arguments, the reasoning can be exemplified by a receiver who believes
(for instance) that “if a public policy has the effect of reducing crime, it is
a good policy.” Accepting a message argument indicating that current gun
control policies have the effect of reducing crime, then, satisfies the
antecedent condition (that the policy reduces crime), and hence the
receiver reasons to the indicated conclusion (that current gun control
policies are good ones). Thus the unimodel proposes that there is really
only one type of persuasion process, a process that accommodates
different (but functionally equivalent) sources of evidence (viz., cues and
message arguments).
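The unimodel's single-process claim can be sketched in code: both a peripheral cue and a message argument supply the premise for the same conditional (modus ponens) inference. The rules and function below are hypothetical illustrations, not constructs from the unimodel literature.

```python
def accept_conclusion(rule, premise_holds):
    """Modus ponens: if the receiver holds 'if P then Q' and accepts
    that P is satisfied, the receiver reasons to Q; otherwise no
    conclusion is drawn."""
    return rule["consequent"] if premise_holds else None

# Peripheral-cue reasoning: "if a statement comes from an expert,
# the statement is correct."
expert_rule = {"antecedent": "statement comes from an expert",
               "consequent": "the statement is correct"}

# Message-argument reasoning: "if a policy reduces crime, it is good."
policy_rule = {"antecedent": "the policy reduces crime",
               "consequent": "the policy is a good one"}

# The same inference form handles both kinds of evidence.
print(accept_conclusion(expert_rule, premise_holds=True))
print(accept_conclusion(policy_rule, premise_holds=True))
```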
The unimodel’s analysis of such research begins with the point that both
peripheral cues and message arguments can vary in their complexity, ease
of processing, brevity, and so forth. The unimodel acknowledges that
peripheral cues are often the sorts of things that are easily processed (and
message arguments are commonly the sorts of things that require more
processing), but that need not be so: “Cue and heuristic information need
not be briefer, less complex, or easier to process than message
information” (Kruglanski & Thompson, 1999b, p. 96).
But (the unimodel suggests) ELM research has commonly confounded the
cue-versus-message contrast with other contrasts—in particular, with
complexity and temporal location. That is, in ELM research, receivers are
offered a simple source of evidence at the beginning of the message in the
form of a peripheral cue (e.g., information about source credibility) and
then later are given a complex source of evidence (in the form of message
arguments). The unimodel analysis suggests that in such a research design,
under conditions of low personal relevance (low motivation to process),
receivers will naturally be more influenced by the brief, easily processed,
initially presented peripheral cue than by the subsequent difficult-to-
process argumentative material; when the message arguments appear later
in the sequence of information—and require more processing than do the
cues—they will likely affect only those receivers who (by virtue of higher
topic relevance) have greater motivation to process. Thus the unimodel
approach argues that the apparent differences between peripheral cues and
message arguments (in their relative impacts on persuasion as personal
relevance varies) do not reflect some general difference between cues and
arguments (as the ELM is taken to assert) but rather a confounding of
evidence type (peripheral cue vs. message argument) and other features of
evidence (brevity, ease of processing, and temporal location). If (the
unimodel suggests) peripheral cues and message arguments were equalized
on these other dimensions, then the putative dual-process differences
between them would evaporate.
time to sort them out. (For some discussion of these and related issues, see,
e.g., Chaiken, Duckworth, & Darke, 1999; Petty & Briñol, 2006; Petty,
Wheeler, & Bizer, 1999; Wegener & Claypool, 1999.)
Empirically, there is room for some uncertainty about exactly when (or,
indeed, whether) the ELM and the unimodel make genuinely different
predictions. The description given here of the unimodel has stressed its
putative contrasts with the ELM, but those contrasts may be less
substantial than is supposed. For instance, presentations of the unimodel
depict the distinction between cues and arguments as crucially important to
the ELM, in that the ELM is seen to treat these as functionally different
influences on persuasive outcomes (as opposed to the unimodel view, in
which cue and argument are simply two content categories and are not
functionally different as sources of evidence in the receiver’s reasoning
processes). It is certainly true that ELM theorists have sometimes used the
terms cue and argument in ways that make these into opposed categories
(e.g., Petty & Wegener, 1999), which invites some misunderstanding.
However, the key distinction for the ELM is not the contrast between
peripheral cues and message arguments but variation along the elaboration
continuum that yields a general trade-off between peripheral processes
(e.g., as represented by the influence of peripheral cues) and central
processes (as represented by elaboration valence, not message arguments
specifically) as influences on persuasive outcomes.22
The general point is that it is not yet clear whether (or exactly how) the
ELM and the unimodel can be made to offer contrasting empirical
predictions. Research findings indicating that communicator
characteristics and message arguments can both function either as
peripheral cues or as influences on elaboration valence—the kinds of
research findings offered as support for the unimodel—are in fact not
necessarily inconsistent with the ELM.
Thus the unimodel has raised some important questions concerning the
ELM. One may hope that continuing attention to these issues will lead to
more focused empirical predictions and better-articulated conceptual
equipment.
Conclusion
The elaboration likelihood model may be seen to contribute two key
insights about persuasion. One is the recognition of the variable character
of topic-related thinking engaged in by message recipients. Because the
extensiveness of topic-relevant thinking varies (from person to person,
from situation to situation, etc.), the central factors influencing persuasive
success vary: Simple heuristic principles may prevail when little
elaboration occurs, but when extensive elaboration is undertaken, then the
character of the message’s contents takes on greater importance. The
second is the recognition that a given variable may play different roles in
the persuasion process. The same variable (in different circumstances)
might influence the degree of elaboration, might influence the valence of
elaboration, and might serve as a peripheral cue—and so might have
different effects on persuasive outcomes depending on the situation. Taken
together, these two ideas offer the prospect of reconciling apparently
conflicting findings in the research literature concerning the role played by
various factors in influencing persuasive effects and mark the ELM as an
important step forward in the understanding of persuasion.
For Review
1. What is elaboration? How can the degree of elaboration be assessed?
Do variations in the amount of elaboration form a continuum or
discrete categories? Describe the general difference between central
and peripheral routes to persuasion. Explain how persuasion can
occur even under conditions of low elaboration.
2. Are the consequences of central route persuasion and peripheral route
persuasion identical? Identify three differences in the consequences of
persuasion’s being achieved through one or the other route. How is
the persistence of persuasion different? How is the strength of the
relationship of attitudes to intentions and behaviors different? How is
resistance to counterpersuasion different?
3. Identify two broad categories of factors that influence the amount of
elaboration undertaken. What is elaboration motivation? Identify two
factors influencing elaboration motivation. Explain how the personal
relevance of the topic (involvement) influences elaboration
motivation. What is “need for cognition”? Explain how need for
cognition influences elaboration motivation. What is elaboration
ability? Identify two factors influencing elaboration ability. What is
“distraction”? Explain how distraction influences elaboration ability.
Explain how prior knowledge influences elaboration ability.
4. In central route persuasion, what is the key determinant of persuasive
outcomes? Explain. Identify two factors that influence elaboration
direction (valence). Explain how the message’s proattitudinal or
counterattitudinal position influences elaboration direction. What is
argument strength (quality)? Explain how argument strength
influences elaboration direction.
5. In peripheral route persuasion, what influences the outcomes of
persuasive efforts? What is a heuristic principle? What activates
heuristic principles? Give three examples of heuristic principles.
What is the credibility heuristic? Explain how it works. Under what
conditions does credibility have relatively greater influence on
persuasive outcomes? What is the liking heuristic? Explain how it
works. Under what conditions does liking have relatively greater
influence on persuasive outcomes? What is the consensus heuristic?
Explain how it works. Under what conditions will the consensus
heuristic have relatively greater influence on persuasive outcomes?
6. Explain the idea that a given variable might play different roles in
persuasion in different circumstances. Describe three different roles
identified by the ELM. Give examples of how a variable might serve
in different roles. What is the ELM’s rule of thumb for expecting
what role a given variable will play? How useful is that rule of
thumb?
7. Describe how persuaders might adapt messages to recipients using
the ELM. How should messages be adapted to high-elaboration
recipients? How should messages be adapted to low-elaboration
recipients? Why might persuaders want to influence the likely amount
of elaboration?
8. How is involvement defined in ELM research? How is this sense of
involvement different from social judgment theory’s ego-
involvement? What various kinds of involvement might be
distinguished?
9. How is argument strength defined in ELM research? Explain how this
definition does not specify the message features that underlie
argument quality variations. Identify one active argument-quality
ingredient in the messages used in ELM research.
10. What is the unimodel of persuasion? Describe how the unimodel
suggests that one process, not two, underlies persuasion effects. How
does the unimodel attempt to explain ELM findings (about, e.g., the
relative influence of peripheral cues and argument quality)? Do the
unimodel and the ELM make different predictions? Explain how the
unimodel raises questions about the clarity of some ELM concepts.
11. Identify and explain two key insights about persuasion contributed by
the ELM.
Notes
1. There has been some variation in the ELM’s definition of elaboration.
Elaboration has sometimes been conceived in broad terms (as here),
namely, engaging in issue-relevant thinking (e.g., Petty & Wegener, 1999,
p. 46). But elaboration has also been defined more narrowly as issue-
relevant thinking undertaken with the motivation of impartially
determining the merits of the arguments (e.g., Cacioppo, Petty, &
Stoltenberg, 1985, p. 229) or as message scrutiny (e.g., Petty & Cacioppo,
1986a, p. 7). The broadest definition, however, is the most common.
have produced corresponding variations in proposed assessments of
elaboration (see, e.g., Cacioppo et al., 1985, p. 229). But most procedures
for the assessment of elaboration (as discussed by Petty & Cacioppo,
1986a, pp. 35–47) appear to represent indices of the amount of issue-
relevant thinking generally.
6. Other properties captured under the term involvement may not have the
same effects as does personal relevance. As a simple illustration, the
effects on message scrutiny (that is, close attention to the message’s
contents) may not be the same for increasing personal relevance and for
increasing commitment to a position. As personal relevance increases,
message scrutiny increases, but as position commitment increases, one can
imagine message scrutiny either increasing or decreasing (e.g., increasing
when there are cues that message scrutiny will yield position-bolstering
material but decreasing when scrutiny looks to yield position-threatening
material). For some general discussions of involvement, see B. T. Johnson
and Eagly (1989, 1990), K. D. Levin et al. (2000), Petty and Cacioppo
(1990), Slater (2002), and Thomsen, Borgida, and Lavine (1995).
10. The studies by W. Wood (1982), W. Wood and Kallgren (1988), and
W. Wood et al. (1985) all use the same message topic with (it appears)
similar messages, which means that this research evidence does not
underwrite generalizations as confident as one might prefer.
enhance elaboration ability, it could also diminish elaboration motivation
—as might happen if receivers think that they have sufficient information
and so expect that there would be little gained from close processing of the
message (see B. T. Johnson, 1994; Trumbo, 1999). (For another example
of diverse effects of receiver knowledge, see Biek, Wood, & Chaiken,
1996; for a general discussion, see W. Wood, Rhodes, & Biek, 1995.)
13. The ELM suggests that there are other peripheral route processes in
addition to heuristic principles—specifically, “simple affective processes”
(Petty & Cacioppo, 1986a, p. 8) in which attitudes change “as a result of
rather primitive affective and associational processes” (p. 9) such as
classical conditioning. Indeed, this additional element is one important
difference between the ELM and the HSM. The HSM’s systematic
processing mode corresponds to the ELM’s central route, and the HSM’s
heuristic mode refers specifically to the use of heuristic principles of the
sort discussed here.
Although the ELM’s peripheral route is thus broader than is the HSM’s
heuristic mode, here the peripheral route is treated in a way that makes it
look like the heuristic mode. That is, the present treatment focuses on the
simple rules/inferences (the heuristic principles) rather than on the
primitive affective processes that are taken to also represent peripheral
routes to persuasion. There are several reasons for this. First, the
nonheuristic peripheral route processes have not gotten much attention in
ELM research. Second, the ELM could abandon a belief in any particular
nonheuristic peripheral process (say, classical conditioning) with little
consequence for the model, which suggests that the ELM’s commitment to
any specific such process is inessential to the model. Third, it may be
possible to translate some apparently nonheuristic peripheral processes
into heuristic principle form (e.g., mood effects might reflect a tacit
heuristic such as “if it makes me feel good, it must be right”).
15. Presentations of the ELM have expressed this “multiple roles” idea in
various ways (e.g., Petty & Briñol, 2008, 2010), but these have not always
been as clear as one might like. For instance, one formulation is that “the
ELM notes that a variable can influence attitudes in four ways: (1) by
serving as an argument, (2) by serving as a cue, (3) by determining the
extent of elaboration, and (4) by producing a bias in elaboration” (Petty &
Wegener, 1999, p. 51). But the ways in which (what would conventionally
be called) arguments can influence attitudes, from the perspective of the
ELM, seem to be (a) by serving as a cue (e.g., when the number of
arguments activates a heuristic such as “there are many supporting
arguments so the position must be correct”), (b) by influencing the extent
of elaboration (as when a receiver thinks that “there seem to be a lot of
arguments here so maybe it’s worth looking at them closely”), and (c) by
producing a bias in elaboration (i.e., by influencing the evaluative
direction of elaboration). That is, the roles of arguments appear already
subsumed in the other three roles (peripheral cue, influence on degree of
elaboration, and influence on valence of elaboration); it is not clear how
arguments might otherwise function in persuasion within an ELM
framework. Hence the presentation here does not distinguish “serving as
an argument” as a distinct role for a persuasion variable.
At least part of the confusion appears to concern the ELM’s use of the
word argument, about which three points might be noted. First,
“arguments” are sometimes conceived of as “bits of information contained
in a communication that are relevant to a person’s subjective determination
of the true merits of an advocated position” (Petty & Cacioppo, 1986b, p.
133); but taken at face value, such a definition would accommodate at
least some peripheral cues as arguments (after all, from the perspective of
the heuristic processor, a cue is a bit of information relevant to assessing
the true merits of the advocated view—it just happens to provide a shortcut
to such assessment), which seems a unimodel-like view (Kruglanski &
Thompson, 1999b), surely to be resisted by the ELM. Second, “argument”
and “cue” sometimes appear to be used as shorthand to cover anything that
affects, respectively, central route and peripheral route persuasion (e.g.,
Petty & Wegener, 1999, p. 49). But when they are used this way, it is not
clear why argument-based persuasion roles are to be distinguished from
persuasion roles involving influencing elaboration valence (given that
elaboration valence is presumably the engine of persuasion within central
route processes). Third, distinguishing “serving as an argument” does at
least underscore the broad possible application of “argument” within an
ELM perspective. For example, the communicator’s physical
attractiveness is recognized by the ELM as potentially not simply a
peripheral cue but also an argument (as in advertisements for beauty
products). Still, when attractiveness serves this argumentative role, it
presumably influences persuasive effects by influencing elaboration
valence (just as arguments of more conventional form do).
metacognitive properties that influence attitude provide persuaders with
additional avenues to attitude change. However, such metacognitive states
presumably—though this is not entirely clear—influence attitude change
only under conditions of relatively high elaboration (i.e., only under
conditions in which attributes of thoughts, such as their valence or
confidence, influence attitudes). So one might treat the roles of
“influencing elaboration valence” and “influencing metacognitive states
that affect attitudes” as representing two concrete realizations of a single
more abstract role, namely, “influencing attitude-relevant thought
properties” (valence, confidence, likelihood, and so forth). But having a
clear picture of all this will require a careful enumeration of attitude-
relevant thought properties and a description of their relationships and
effects, and this task is not to be underestimated (for one such sketch, see
Petty, Briñol, Tormala, & Wegener, 2007).
message variations, see McKay-Nesbitt, Manchanda, Smith, and Huhmann
(2011), Rosen and Haaga (1998), and Williams-Piehota, Pizarro, Silvera,
Mowad, and Salovey (2006); for an exception, see Bakker (1999).
18. Another example: “Subjects led to believe that the message topic (e.g.,
comprehensive exams) will (vs. won’t) impact on their own lives have also
been shown to be less persuaded by weak messages but more persuaded by
strong ones” (Chaiken & Stangor, 1987, p. 594). Despite the statement’s
appearance, this is not a discovery. It is not an empirical result or finding,
something that research “shows” to be true, or something that could have
been otherwise (given the effect of topic relevance on elaboration). The
described relationship is true by definition.
As a first step in seeing what additional evidence is needed, consider
whether it is possible to have high elaboration without high personal
relevance (high involvement). It is certainly possible to have low
elaboration without low personal relevance (e.g., when personal relevance
—and so elaboration motivation—is high, distraction might interfere with
elaboration ability and so produce low elaboration despite high personal
relevance), but it is difficult to imagine conditions under which elaboration
would be high while personal relevance was low.
If, on the other hand, high personal relevance is not required for high
elaboration, then additional research evidence is needed to fill the gap
between the narrower (and well-supported) claim that “outcome
desirability variations make a larger difference to persuasive outcomes
under conditions of high personal relevance than under conditions of low
personal relevance” and the broader claim that “argument quality
variations make a larger difference to persuasive outcomes under
conditions of high elaboration than under conditions of low elaboration.”
What is needed is evidence that parallel effects (parallel to those observed
with outcome desirability variations and personal relevance variations) can
be obtained when (a) argument quality is varied but outcome desirability is
held constant and (b) elaboration is varied but personal relevance is low. It
is not plain that such evidence is in hand.
21. It may be that outcome likelihood variations just don’t have the same
effects that outcome desirability variations do, or it might be that the
amount of message scrutiny required to yield outcome likelihood effects is
higher than that needed to produce outcome desirability effects (so that a
given level of elaboration might be high enough for recipients to be
affected by the apparent desirability of the depicted outcomes but not yet
high enough for recipients to be affected by the relative likelihood of those
outcomes).
distinction (e.g., Kruglanski & Thompson, 1999b, p. 84). But this also
does not seem to capture the ELM’s assertions. The ELM does not
partition source variables and message variables as having intrinsically
different roles to play in persuasion but, on the contrary, emphasizes that
each category of variable can serve different persuasion roles in different
circumstances (Petty & Briñol, 2006, p. 217; Petty et al., 1999, p. 157;
Wegener & Claypool, 1999, pp. 176–177).
Chapter 9 The Study of Persuasive Effects
at the end of the message (the explicit conclusion message), whereas in the
other message, the persuader’s conclusion is left implicit (the implicit
conclusion message). When participants in this experiment arrive at the
laboratory, their attitudes on the persuasive topic are assessed, and then
they receive one of the two messages; which message a given participant
receives is a matter of chance, perhaps determined by flipping a coin. After
exposure to the persuasive message, receivers’ attitudes are assessed again
to ascertain the degree of attitude change produced by the message.
The obvious explanation for the obtained results, of course, is the presence
or absence of an explicit conclusion. Indeed, because this is the only factor
that varies between the two messages, it presumably must be the locus of
the observed differences. This is the general logic of experimental designs
such as this: These designs are intended to permit unambiguous causal
attribution precisely by virtue of experimental control over factors other
than the independent variable.
The most common and important variation, however, is the inclusion of
more than one independent variable in a single experiment. Thus (for
instance) rather than doing one experiment to study implicit versus explicit
conclusions and a second study to examine high versus moderate versus
low credibility, a researcher could design a single investigation to study
these two variables simultaneously. This would involve creating all six
possible combinations of conclusion type and credibility level (3
credibility conditions × 2 conclusion type conditions = 6 combinations).
Experimental designs with more than one independent variable permit the
detection of interaction effects involving those variables. An interaction
effect is said to occur if the effect of one independent variable depends on
the level of another independent variable; conversely, if the effect of one
variable does not depend on the level of another variable, then no
interaction effect exists. For example, if the effect of having an implicit or
explicit conclusion is constant, no matter what the credibility of the source,
then no interaction effect exists between credibility and conclusion type.
But if the effect of having an implicit or explicit conclusion varies
depending on the credibility of the source (say, if high-credibility sources
are most effective with explicit conclusions, and low-credibility sources
most effective with implicit conclusions), then an interaction effect
(involving credibility and conclusion type) exists; the effect of one
variable (conclusion type) depends on the level of another (credibility).1
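A minimal sketch of this 3 × 2 design, with invented cell means, shows how an interaction is detected: compute the explicit-versus-implicit effect at each credibility level and check whether those effects differ. The cell means below are illustrative assumptions, not results from any study.

```python
from itertools import product

# The factorial design: every combination of the two independent variables.
credibility = ["high", "moderate", "low"]
conclusion = ["explicit", "implicit"]
cells = list(product(credibility, conclusion))
print(len(cells))  # 3 credibility levels x 2 conclusion types = 6 conditions

# Invented cell means (attitude change) in which high-credibility sources
# do best with explicit conclusions and low-credibility sources with
# implicit ones.
means = {
    ("high", "explicit"): 4.0,     ("high", "implicit"): 2.5,
    ("moderate", "explicit"): 3.2, ("moderate", "implicit"): 2.8,
    ("low", "explicit"): 2.0,      ("low", "implicit"): 3.0,
}

# Effect of conclusion type at each credibility level.
conclusion_effects = {
    c: means[(c, "explicit")] - means[(c, "implicit")] for c in credibility
}

# An interaction exists when the conclusion-type effect is not constant
# across credibility levels.
interaction_present = len({round(e, 6) for e in conclusion_effects.values()}) > 1
print(conclusion_effects, interaction_present)
```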
conclusions are generally more persuasive than those with implicit
conclusions. Message designers should not think that having explicit
conclusions will somehow automatically make their messages highly
persuasive. Messages with explicit conclusions may be relatively
persuasive compared with those with implicit conclusions, but that does
not mean that explicit conclusion messages are inevitably highly
persuasive in absolute terms. The larger point is that the research under
discussion here can certainly provide useful information to message
designers about how to enhance message persuasiveness (by creating
messages of one sort rather than another), but it does not offer evidence
bearing on the absolute persuasiveness of any given kind of message.3
that the design does not permit unambiguous causal attribution; the other is
that the design is blind to the possibility that the effects of a given message
factor may not be constant (uniform) across different messages (Jackson &
Jacobs, 1983).
physical consequences of smoking and indeed are generally similar, except
for the following sort of variation. Message A reads, “There can be no
doubt that cigarette smoking produces harmful physical effects,” whereas
Message B reads, “Only an ignorant person would doubt that cigarette
smoking produces harmful physical effects.” The statement in Message A,
“It is therefore readily apparent that the country should pass legislation to
make the sale of cigarettes illegal,” is replaced in Message B by the
statement, “Only the stupid or morally corrupt would oppose passage of
legislation to make the sale of cigarettes illegal” (with four such alterations
in the messages).
What is the independent variable under investigation here? That is, how
should this experimental manipulation be described? Framing some causal
generalization will require that the difference between the two messages be
expressed somehow—but exactly how? Several different answers have
been offered. The original investigators described this manipulation as a
matter of “opinionated” as opposed to “nonopinionated” language (G. R.
Miller & Baseheart, 1969), but others have characterized it as varying
“language intensity” (Bradac, Bowers, & Courtright, 1980, pp. 200–201),
having a “confident style in debating” (Abelson, 1986, p. 227), or using a
“more dynamic” rather than “subdued” style (McGuire, 1985, p. 270). Of
course, not even these exhaust the possibilities. For instance, one could
describe this as a contrast between extreme and mild (or nonexistent)
denigration of those holding opposing views.
study found the explicit conclusion message to be significantly more
persuasive than the implicit conclusion message (and let’s overlook the
problem of deciding that it was conclusion type, not some other factor, that
was responsible for the difference). One should not necessarily conclude
that having explicit conclusions will always improve the effectiveness of a
message. After all, there might have been something peculiar about the
particular message that was studied (remember that only one message was
used). Perhaps something (unnoticed by the researchers) made that
particular message especially hospitable to having an explicit conclusion—
maybe the topic, maybe the way the rest of the message was organized, or
maybe the nature of the arguments that were made. Other messages might
not be so receptive to explicit conclusions.
To put that point more abstractly: The effect of a given message variable
may not be uniform across messages. Some messages might be helped a
lot by having an explicit conclusion, some helped only a little, and some
even hurt by it. But if that is true, then looking at the effects of conclusion
type on a single message does not really provide a good basis for drawing
a general conclusion. So, once again, the typical single-message design
used in persuasion effects research creates an obstacle to dependable
message generalization because it overlooks the possibility of nonuniform
effects across messages.4
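The nonuniformity problem can be made concrete with a small simulation (all numbers hypothetical): if the true effect of adding an explicit conclusion varies from message to message, then the effect observed for one arbitrarily chosen message need not resemble the average effect across messages, which is what a general conclusion would require.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical true effects of adding an explicit conclusion, one per
# message; the effects vary across messages (nonuniform), with some
# messages helped a lot, some a little, and some even hurt.
message_effects = [random.gauss(0.3, 0.5) for _ in range(50)]

# The generalization of interest concerns the average effect...
average_effect = sum(message_effects) / len(message_effects)

# ...but a single-message design observes only one message's effect.
single_message_estimate = message_effects[0]

print(round(average_effect, 2), round(single_message_estimate, 2))
```

A multiple-message design, by sampling several messages per condition, estimates the average effect rather than one message's idiosyncratic effect.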
type has across messages. And the chances for unambiguous causal
attribution are improved by such a design: Given the variation within the
set of messages for a given conclusion type, the researcher can rule out
alternative explanations and be more confident in attributing observed
differences to the sort of conclusion used.5
awfully tempting to employ those materials. Second, in a continuing line
of research, a desire for tight experimental control may suggest the reuse
of earlier messages.
However, two aspects of meta-analytic practice deserve some notice. The
first concerns which studies to include in a review, specifically whether to
attempt to locate unpublished studies (conference papers, dissertations, and
so forth). There are good reasons to want to include unpublished studies
whenever possible, because the published research literature may produce
a misleading picture. For example, studies that find statistically significant
differences may be more likely to be published than those with
nonsignificant results. (For some discussions of publication biases and
related questionable research practices, see Bakker, van Dijk, & Wicherts,
2012; Ioannidis, 2005, 2008; John, Loewenstein, & Prelec, 2012; Levine,
Asada, & Carpenter, 2009.)
Variable Definition
The other noteworthy challenge that arises in studying persuasive effects
concerns how independent variables are defined in research practice.
Because this issue arises most clearly in the context of defining message
variables (i.e., message variations or message types), the following
discussion focuses on such variables; as will be seen, however, the
difficulties that ensue are not limited to message variables.
Obviously, these two definitions will not necessarily correspond. That is, a
message that contains gruesome content (a threat appeal by the first
definition) might not arouse fear or anxiety in message recipients (i.e.,
might not be a threat appeal by the second definition). Similarly, a
message might succeed in arousing fear without containing graphic
message content.
First, generalizations about message types can only cautiously lump
together investigations that employ different ways of defining a given
variable. Two studies might call themselves studies of threat appeals, but if
one of them defines threat appeal by message content whereas the other
defines it by recipient response, it may be difficult to draw reliable
generalizations that encompass the two studies.
Conclusion
Experimental research examining the influence of various factors on
persuasive outcomes offers the prospect of useful insights into persuasive
processes and effects, but the task of creating dependable generalizations
from such research can be more challenging than might appear at first
look.
For Review
1. Describe the simplest experimental design used in the study of
persuasive message effects. What is an independent variable? A
dependent variable? Explain how experimental designs are meant to
permit unambiguous causal attribution for observed effects. Describe
some variations on the basic design. What is an interaction effect?
Explain how experimental designs with more than one independent
variable permit detection of interaction effects. Explain the difference
between a conclusion about the relative persuasiveness of two
messages and a conclusion about the absolute persuasiveness of one
message. Are experimental designs meant to provide evidence about
the absolute persuasiveness of a given message?
2. What is a single-message experimental design? Explain why such
designs do not provide good evidence for generalizations. How do
such designs undermine unambiguous causal attribution for effects?
How do such designs overlook the potential variability
(nonuniformity) of a given message factor across different concrete
messages? What is a multiple-message design? How do multiple-
message designs address some concerns about generalization? Why is
re-using messages (from earlier studies) a generally undesirable
research practice? How can generalizations be obtained from previous
studies using single-message designs? What is meta-analysis?
3. Describe two ways in which a message variable can be defined in
research. Explain the difference between defining a message variable
on the basis of message features and defining it on the basis of
recipient responses (effects). Describe how the criteria for satisfactory
experimental manipulations differ depending on whether a feature-
based or an effect-based definition is used. Explain why feature-based
definitions provide a better basis for advice to message designers.
Notes
1. An interaction effect can also be described as a “moderator” effect, in
the sense that one variable moderates (influences) the effect of another
variable. To continue the example: If the effect of implicit-versus-explicit
conclusions varies depending on the level of credibility, then credibility
would be said to moderate the effect of conclusion type (credibility would
be a moderator variable). Moderator effects, in which variable X
influences the relationship of variables A and B, are different from
mediator effects, in which X mediates the relationship of A and B by being
between them in a causal chain (A influences X, which in turn influences
B). The classic treatment of this distinction is Baron and Kenny (1986).
For some subsequent discussion, see Fairchild and MacKinnon (2009),
Green, Ha, and Bullock (2010), Kraemer, Kiernan, Essex, and Kupfer
(2008), Preacher and Hayes (2008), Spencer, Zanna, and Fong (2005), and
Zhao, Lynch, and Chen (2010).
message variable is larger under condition X than under condition Y, that
does not mean that messages are more persuasive in condition X than in
condition Y; it means that the difference in persuasiveness between the
two messages is larger in condition X than in condition Y. As an example:
O’Keefe and Jensen’s (2011) meta-analysis of gain-loss message framing
effects concerning obesity-related behaviors reported that the mean effect
size (expressed as a correlation, with positive values indicating a
persuasive advantage for gain-framed appeals) was .17 for physical
activity messages and .02 for healthy eating messages, a statistically
significant difference. This does not mean that physical activity messages
were more persuasive than healthy eating messages, or that gain-framed
physical activity messages were more persuasive than gain-framed healthy
eating messages. It means only that the difference in persuasiveness
between gain-framed and loss-framed appeals was larger for physical
activity messages than for healthy eating messages.
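The arithmetic behind this point is simple but easy to misread, so a minimal sketch may help (using the two effect sizes reported in the text):

```python
# Mean effect sizes (correlations) from the O'Keefe and Jensen (2011)
# example: positive values indicate an advantage for gain-framed appeals.
r_physical_activity = 0.17
r_healthy_eating = 0.02

# The statistically significant "difference" is a difference between
# framing effects, i.e., a difference of differences. It says nothing
# about the absolute persuasiveness of either kind of message.
framing_effect_gap = r_physical_activity - r_healthy_eating
print(round(framing_effect_gap, 2))
```

That is, the gain-versus-loss framing difference was larger for physical activity messages (r = .17) than for healthy eating messages (r = .02), which is a claim about the two framing effects, not about the messages' overall persuasiveness.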
than they might be as a means of coping with message generalization
problems. Indeed, where messages have been reused in primary research
and generalizations about message types are wanted, the appropriate meta-
analytic procedure is to collapse the results (across studies) for a given
message pair; that is, the appropriate unit of analysis is the message pair,
not the study. To concretize this: Imagine two data sets in which 20 studies
have provided the experimental contrast of interest (comparing a message
of kind A versus a message of kind B). In one data set, each study used a
different message pair. In the other, 10 studies used one specific message
pair and the other 10 studies used a second message pair. Plainly, one’s
confidence in any generalizations would be greater in the first data set than
in the second, and the meta-analytic procedure should reflect that (making
the number of cases 20 in the first data set and 2 in the second).
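The collapsing procedure described above can be sketched as follows. The study labels and effect sizes are hypothetical; the point is only the change in the unit of analysis from study to message pair.

```python
from statistics import mean

# Hypothetical per-study effect sizes, keyed by the message pair used.
# Several studies reused the same message pair.
studies = [
    ("pair_1", 0.20), ("pair_1", 0.30), ("pair_1", 0.10),
    ("pair_2", -0.05), ("pair_2", 0.15),
    ("pair_3", 0.25),
]

# Collapse: average the results across studies within each message pair,
# so the unit of analysis is the message pair, not the study.
by_pair = {}
for pair, effect in studies:
    by_pair.setdefault(pair, []).append(effect)

cases = {pair: mean(effects) for pair, effects in by_pair.items()}

print(len(cases))  # 3 cases (message pairs), although there were 6 studies
```

In the text's example, the same logic makes the number of cases 20 in the first data set (20 distinct pairs) but only 2 in the second (2 distinct pairs), reflecting the weaker evidential basis for generalization.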
Chapter 10 Communicator Factors
Communicator Credibility
The Dimensions of Credibility
Factors Influencing Credibility Judgments
Effects of Credibility
Liking
The General Rule
Some Exceptions and Limiting Conditions
Other Communicator Factors
Similarity
Physical Attractiveness
About Additional Communicator Characteristics
Conclusion
The Nature of Communication Sources
Multiple Roles for Communicator Variables
For Review
Notes
Communicator Credibility
specification in investigations aimed at identifying the basic underlying
dimensions of credibility.
Factor-Analytic Research
There have been quite a few factor-analytic studies of the dimensions
underlying credibility judgments (e.g., Andersen, 1961; Applbaum &
Anatol, 1972, 1973; Baudhuin & Davis, 1972; Berlo, Lemert, & Mertz,
1969; Bowers & Phillips, 1967; Falcione, 1974; McCroskey, 1966;
Schweitzer & Ginsburg, 1966). In the most common research design in
these investigations, respondents rate communication sources on a large
number of scales. The ratings given of the sources are then submitted to
factor analysis, a statistical procedure that (broadly put) groups the scales
on the basis of their intercorrelations: Scales that are comparatively highly
intercorrelated will be grouped together as indicating some underlying
“factor” or dimension.
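The grouping-by-intercorrelation idea behind factor analysis can be illustrated with a small simulation. The scale names and loadings below are hypothetical, chosen to mimic two latent credibility dimensions; this is a sketch of the logic, not a full factor-analytic procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# Two latent dimensions assumed to drive the ratings.
expertise = rng.normal(size=n)
trustworthiness = rng.normal(size=n)

# Six hypothetical rating scales, each loading mainly on one dimension
# (plus a little measurement noise).
scales = {
    "expert": expertise + 0.3 * rng.normal(size=n),
    "trained": expertise + 0.3 * rng.normal(size=n),
    "informed": expertise + 0.3 * rng.normal(size=n),
    "honest": trustworthiness + 0.3 * rng.normal(size=n),
    "fair": trustworthiness + 0.3 * rng.normal(size=n),
    "sincere": trustworthiness + 0.3 * rng.normal(size=n),
}

data = np.column_stack(list(scales.values()))
corr = np.corrcoef(data, rowvar=False)

# Scales driven by the same latent dimension intercorrelate highly;
# factor analysis groups scales on exactly this basis.
within = corr[0, 1]   # expert vs. trained: same dimension, high correlation
between = corr[0, 3]  # expert vs. honest: different dimensions, near zero
print(round(within, 2), round(between, 2))
```

In real factor-analytic studies, of course, the latent dimensions are not known in advance; the procedure recovers them from the pattern of intercorrelations among the rating scales.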
several such items commonly exhibit high internal reliability (e.g.,
reliability coefficients of .85 or greater have been reported by Beatty &
Behnke, 1980; McCroskey, 1966).
These two dimensions parallel what have been described as the two types
of communicator bias that message recipients might perceive: knowledge
bias and reporting bias. “Knowledge bias refers to a recipient’s belief that
a communicator’s knowledge about external reality is nonveridical, and
reporting bias refers to the belief that a communicator’s willingness to
convey an accurate version of external reality is compromised” (Eagly,
Wood, & Chaiken, 1978, p. 424; see also Eagly, Wood, & Chaiken, 1981).
A communicator perceived as having a knowledge bias will presumably be
viewed as relatively less expert; a communicator viewed as having a
reporting bias will presumably be seen as comparatively less trustworthy.
These two dimensions, however, represent only the most general sorts of
credibility-relevant judgments made by recipients about communicators.
The particular judgments underlying credibility may vary from
circumstance to circumstance, as can the emphasis placed on one or
another dimension of judgment. Thus it may be useful to develop
credibility assessments tailored to particular situations (for some examples,
see Frewer, Howard, Hedderley, & Shepherd, 1996; Gaziano & McGrath,
1986; Hilligoss & Rieh, 2008; Ohanian, 1990; M. D. West, 1994).
Notably, however, even such situation-specific assessments commonly
identify expertise and trustworthiness as key credibility dimensions, as in
studies of expert courtroom witnesses (Brodsky, Griffin, & Cramer, 2010),
risk communication (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey,
& Harries, 2008), and corporations (Newell & Goldsmith, 2001).
receivers to perceive the source as more trustworthy and (particularly)
more expert than do low-credibility introductions. What systematic
research exists on this matter is (perhaps not surprisingly) consistent with
these effects (e.g., Falomir-Pichastor, Butera, & Mugny, 2002; Hurwitz,
Miron, & Johnson, 1992; Tormala, Briñol, & Petty, 2006).
Nonfluencies in Delivery
There have been a number of investigations of how variations in delivery
can influence the credibility judgments made of a speaker. Unfortunately,
several of these studies have investigated conceptions of delivery that
embrace a number of behavioral features (e.g., Pearce & Brommel, 1972).
But one delivery characteristic that has been studied in isolation is the
occurrence of nonfluencies in the delivery of oral communications.
Nonfluencies include vocalized pauses (“uh, uh”), the superfluous
repetition of words or sounds, corrections of slips of the tongue,
articulation difficulties, and the like. Several investigations have found that
with increasing numbers of nonfluencies, speakers are rated significantly
lower on expertise, with judgments of trustworthiness typically unaffected
(e.g., Engstrom, 1994; for a review of effects on expertise judgments, see
Carpenter, 2012a).
high credibility of the cited sources seems to rub off on the
communicator.2
Position Advocated
The position that the communicator advocates on the persuasive issue can
influence perceptions of the communicator’s expertise and trustworthiness.
Specifically, a communicator is likely to be perceived as more expert and
more trustworthy if the advocated position disconfirms the audience’s
expectations about the communicator’s views (when such expectations
derive from knowledge of the source’s characteristics or circumstances),
although certain sorts of trustworthiness judgments (concerning
objectivity, open-mindedness, and unbiasedness) appear to be more
affected than others (such as sincerity and honesty).
So, for example, online contributors who donate their revenue shares to
charity (as compared with those who retain the revenue; Hsieh, Hudson, &
Kraut, 2011), salespeople whose compensation is salary-based (rather than
commission-based; Straughan & Lynn, 2002) or who flatter the customer
after the sale (rather than before; M. C. Campbell & Kirmani, 2000),
prosecutors who argue that prosecutorial powers should be decreased
(rather than increased; Walster, Aronson, & Abrahams, 1966), politicians
who praise their opponents (as opposed to denigrating them; Combs &
Keller, 2010)—all are likely to enjoy relatively enhanced credibility.
Similarly, physicians find their colleagues more believable sources of drug
information than they do advertisements or salespeople (Beltramini &
Sirsi, 1992), and they are more likely to prescribe drugs that have been
studied in government-funded research than in industry-funded research
(Kesselheim et al., 2012).
A related expectancy disconfirmation effect has been observed in studies
of advertisements for consumer products. Ordinarily, consumers expect
advertisements to tout the advertised product or brand as “the best” on
every feature or characteristic that is mentioned. Thus an advertisement for
exterior house paint that claimed that the product was superior to its
competitors on only three mentioned product features (durability, number
of coats needed, and ease of cleanup) while being equal on two others
(number of colors available and nonspill lip on container) would
disconfirm receivers’ expectations about the message’s contents
(particularly by contrast to an advertisement claiming that the product was
superior on each of these five features; R. E. Smith & Hunt, 1978). There
have been several experimental comparisons of these two types of
advertisements—an advertisement suggesting superiority for all the
mentioned features of the product (a one-sided advertisement), and an
advertisement that acknowledges (and does not refute or deny) some ways
in which the product is not superior (a nonrefutational two-sided
advertisement). As one might suppose, when an advertisement
acknowledges ways in which competing products are just as good as the
advertised product (or acknowledges weaknesses of the advertised
product), the ad is commonly perceived as more credible than when the ad
claims superiority on every product feature that is mentioned (e.g., Alden
& Crowley, 1995; Eisend, 2010; Pechmann, 1992; for reviews, see Eisend,
2006; O’Keefe, 1999a).4
judgments about the communicator’s dispositional trustworthiness (the
communicator’s general honesty, fairness, open-mindedness, and the like)
than the communicator’s expertise.5
Humor
Including humor in persuasive messages has been found to have rather
varied effects on perceptions of the communicator. When positive effects
of humor are found, they tend to most directly involve enhancement of the
audience’s liking for the communicator—and thus occasionally the
trustworthiness of the communicator (because liking and trustworthiness
are associated)—but rarely judgments of expertise (e.g., Chang & Gruner,
1981; Gruner, 1967, 1970; Gruner & Lampton, 1972; Skalski, Tamborini,
Glazer, & Smith, 2009). The use of humor, however, can also decrease the
audience’s liking for the communicator, the perceived trustworthiness of
the communicator, and even the perceived expertise of the source (e.g.,
Bryant, Brown, Silberberg, & Elliott, 1981; Munn & Gruner, 1981). These
negative effects seem most likely when the humor is perceived as
excessive or inappropriate for the context. Small amounts of appropriate
humor thus may have small enhancing effects on perceived trustworthiness
but are unlikely to affect assessments of the communicator’s expertise.
Summary
This selective review has touched on several broad factors that can
influence credibility judgments. However, different specific influences
may be at work in different persuasion circumstances. So, for example,
one might expect that different factors will affect the perceived credibility
of courtroom witnesses (e.g., Dahl et al., 2007), blogs (e.g., Armstrong &
McAdams, 2009), journalists (e.g., Jensen, 2008), online reviews (e.g., Pan
& Chiou, 2011), and so forth.6
Effects of Credibility
What effects do variations in communicator credibility have on persuasive
outcomes? It might be thought that the answer to this question is pretty
simple: As one’s credibility increases, so will one’s effectiveness. But the
answer turns out to be much more complicated.
persuasive effectiveness. The effects of credibility on persuasive outcomes
are not completely straightforward but depend centrally on other factors.
These factors can be usefully divided into two general categories: factors
that influence the magnitude of credibility’s effects and factors that
influence the direction of credibility’s effects.
The first is the degree of direct personal relevance that the issue has for the
receiver. As the issue becomes more personally relevant for the receiver,
variations in the source’s credibility make less difference; under conditions
of low personal relevance, the communicator’s credibility may make a
great deal of difference to the outcome, whereas on highly relevant topics,
the source’s credibility may have little impact (for a classic illustration, see
Petty, Cacioppo, & Goldman, 1981; for a review, see E. J. Wilson &
Sherrell, 1993).11
attitudes and little background knowledge—conditions likely to be
conducive to relatively low elaboration.
increases would occur, and sometimes (e.g., when the topic is personally
relevant to the receiver) no increase at all, but at least whenever credibility
had an effect, it would be in a constant direction, with high-credibility
sources being more effective than low-credibility sources.
An entirely clear picture is not yet in hand, but one factor that appears
critical in determining the direction of credibility’s effects is
the nature of the position advocated by the message—specifically, whether
the message advocates a position initially opposed by the receiver (a
counterattitudinal message) or advocates a position toward which the
receiver initially feels at least somewhat favorable (a proattitudinal
message). With a counterattitudinal message, the high-credibility
communicator will tend to have a persuasive advantage over the low-
credibility source; with a proattitudinal message, however, the low-
credibility communicator appears to enjoy greater persuasive success than
the high-credibility source.
Perhaps one way of understanding this effect is to consider the degree to
which, given a proattitudinal message, receivers might be stimulated to
think about arguments and evidence supporting the advocated view. When
receivers hear their views defended by a high-credibility source, they may
well be inclined to presume that the communicator will do a perfectly good
job of advocacy, will defend the viewpoint adequately, will present the
best arguments, and so forth—and so they sit back and let the source do
the work. But when the source is low in credibility, receivers might be
more inclined to help the communicator in defending their common
viewpoint, and hence they might be led to think more extensively about
supporting arguments—thereby ending up being more persuaded than if
they had listened to a higher-credibility source. Expressed in ELM terms, a
proattitudinal message may provoke more elaboration, and more favorable
elaboration, when it comes from a low-credibility communicator than
when it comes from a high-credibility communicator (for some evidence
consistent with such an account, see Clark et al., 2012; Sternthal et al.,
1978).13
Liking
Extant research evidence suggests at least three important caveats
concerning the effects of liking for the communicator on persuasive
outcomes: The effects of liking can apparently be overridden by
credibility, the superiority of liked over disliked communicators is
minimized as the topic becomes more personally relevant to the receiver,
and disliked communicators can at least sometimes be significantly more
effective persuaders than can liked communicators. (For indications of
additional possible limiting conditions, see Chebat, Laroche, Baddoura, &
Filiatrault, 1992; Roskos-Ewoldsen & Fazio, 1992.)
Liking and Topic Relevance
The effects of liking on persuasive outcomes are minimized as the topic
becomes more personally relevant to the receiver. Thus, although better-
liked sources may enjoy some general persuasive advantage, that
advantage is reduced when the issue is personally relevant to the receiver
(Chaiken, 1980; see, relatedly, Kang & Kerr, 2006). This result is, of
course, compatible with the image offered by the ELM (discussed in
Chapter 8). When receivers find the topic personally relevant, they are
more likely to engage in systematic active processing of message contents
and to minimize reliance on peripheral cues such as whether they happen
to like the communication source. But when personal relevance is low,
receivers are more likely to rely on simplifying heuristics emphasizing
cues such as liking (“I like this person, so I’ll agree”).
communicators can indeed potentially be more persuasive than liked
communicators.
Similarity
It seems common and natural to assume that to the degree that receivers
perceive similarities between themselves and a persuader, to that same
degree the persuader’s effectiveness will be enhanced. The belief that
“greater similarity means greater effectiveness” is an attractive one and is
commonly reflected in recommendations that persuaders emphasize
commonalities between themselves and the audience.
multiplicity of effects that depend on both content and context” (Huston &
Levinger, 1978, p. 126). However, the effect on liking of one particular
sort of similarity—attitudinal similarity—has received a good deal of
empirical attention. Attitudinal similarity is having similar attitudes
(similar evaluations of attitude objects), as opposed to, say, having similar
traits, abilities, occupations, or backgrounds.
greater perceived attitudinal similarity comes greater liking, which may or
may not mean greater effectiveness.15
Second, not all relevant similarities will enhance perceived expertise, and
not all relevant dissimilarities will damage perceived expertise. For
example, a perceived similarity in relevant training and experience may
reduce the perceived expertise of a communicator (because the receiver
may be thinking, “I know as much about this topic as the speaker does”).
A perceived dissimilarity in relevant training and experience, on the other
hand, might either enhance or damage perceived expertise, depending on
the direction of the dissimilarity: If the receiver thinks that the
communicator is dissimilar because the communicator has better training
and experience, then presumably enhanced judgments of the
communicator’s expertise will be likely, but if the receiver thinks that the
communicator is dissimilar because the communicator has poorer training
and experience, then most likely the communicator’s perceived expertise
will suffer.
a general American dialect or a Southern dialect; the message concerned a
well-known Southern governor (who enjoyed some popularity in the South
but not elsewhere), with one version offering a favorable view of the
governor and the other an unfavorable view. Regardless of the position
advocated, the speaker with the Southern (dissimilar) speech dialect was
perceived as more expert than the speaker with the general American
(similar) dialect, presumably because the Southern speaker could be
assumed to have better access to relevant information than would the
general American speaker (Delia, 1975).
However, there are intricacies here. In the previously described speech
dialect investigation, greater trustworthiness was ascribed to the
progovernor speaker using the similar (general American) dialect and to
the antigovernor speaker using the dissimilar (Southern) dialect (Delia,
1975). This effect is, of course, readily understandable: The Southern
speaker arguing against the Southern governor and the non-Southern
speaker supporting that governor could each have been seen as offering
views that ran against the tide of regional opinion—and hence seen as
speakers who must be especially sincere and honest in their expressions of
their opinions. But notice the complexity of these results regarding
similarity: Sometimes similarity enhanced perceptions of trustworthiness,
but sometimes it diminished such perceptions, depending on the position
advocated. And (to round things out) other investigators have found that
sometimes similarities have no significant effect on trustworthiness
judgments (e.g., Atkinson et al., 1985).
scrutiny of messages from similar communicators; such closer scrutiny
might enhance or inhibit persuasion, depending on such factors as the
quality of the message’s arguments. (For some relevant empirical work
and general discussions, see M. A. Fleming & Petty, 2000; Mackie &
Queller, 2000; N. Wyer, 2010.)
Such complexities might lead one to wonder about the common practice of
using peers (of the target audience) in health education programs on such
topics as smoking and unsafe sex; this practice can be seen to reflect a
generalized belief in the persuasive power of similarity. Given the
observed complexities of similarity’s roles and effects in persuasion,
however, perhaps it should not be surprising that several reviews have
concluded that peer-based health interventions are not dependably more
successful—and sometimes are significantly less successful—than
programs without such peer bases (Durantini, Albarracín, Mitchell, Earl, &
Gillette, 2006; Posavac, Kattapong, & Dew, 1999; cf. Cuijpers, 2002).16
Physical Attractiveness
The effects of physical attractiveness on persuasive outcomes—like the
effects of similarity—are rather varied. For the most part, “existing
research does indicate that heightened physical attractiveness generally
enhances one’s effectiveness as a social influence agent” (Chaiken, 1986,
p. 150; for some illustrative examples, see Horai, Naccari, & Fatoullah,
1974; Micu, Coulter, & Price, 2009; Widgery & Ruch, 1981). But physical
attractiveness appears to commonly operate in persuasion in a fashion akin
to similarity; that is, physical attractiveness affects persuasive outcomes
indirectly, by means of its influence on the receiver’s liking for the
communicator and the receiver’s assessment of the communicator’s
credibility.
liking (for a review, see Berscheid & Walster, 1974). And, as discussed
previously, there is good evidence for the general proposition that on the
whole, liked communicators will be more effective persuaders than
disliked communicators. Hence the observed effects of physical
attractiveness on persuasive success might straightforwardly be explained
as arising from the recipient’s liking for the communicator (for a careful
elaboration of this idea, see Chaiken, 1986; for some illustrative results,
see Horai et al., 1974; Snyder & Rothbart, 1971).
Patzer, 1983; and Praxmarer, 2011). Thus it is not plausible to suppose that
differential judgments of the communicator’s expertise generally mediate
the effect of communicator physical attractiveness on persuasive
outcomes. To be sure, in certain specific circumstances, the
communicator’s physical attractiveness might influence judgments of
expertise—namely, when the topic of influence is related to physical
attractiveness in relevant ways. For example, physically attractive sources
might enjoy greater perceived expertise in the realm of beauty products.
But generally speaking, the effect of the source’s physical attractiveness
on persuasive outcomes appears not to be achieved through enhanced
perceptions of the source’s expertise.
Summary
Understanding the role that communicator physical attractiveness plays in
influencing persuasive outcomes seems to require that central emphasis be
given to the influence of physical attractiveness on liking. Physical
attractiveness appears to affect persuasive outcomes not directly but rather
indirectly, especially (though not exclusively) by means of its influence on
the receiver’s liking for the communicator.
About Additional Communicator Characteristics
This discussion of the persuasive effects of communicator-receiver
similarity and communicator physical attractiveness has focused on how
those factors might influence credibility and liking, because the research
evidence seems to indicate that similarity and physical attractiveness
influence persuasive outcomes indirectly, through their effects on
credibility and liking. Indeed, in thinking about the effects of any given
additional source characteristic on persuasion, one useful avenue to
illuminating that characteristic’s effects can be a consideration of how that
characteristic might influence credibility or liking (and thereby indirectly
influence persuasive outcomes).
relationship or the role of perceived ethnic similarity.)
Conclusion
As perhaps is apparent, communicator characteristics can have
complicated relationships with each other and can have various direct and
indirect effects on persuasive outcomes. But two further complexities
deserve mention: the nature of communication sources and the multiple
roles that communicator variables might play in persuasion.
uses a celebrity endorser, message recipients might have relevant attitudes
about the endorser (how well liked the endorser is), perceptions of the
endorser’s credibility (expertise, trustworthiness), attitudes toward the ad
as a whole, and perceptions of the ad’s overall credibility. (For some
illustrative research on endorsers, see Amos, Holmes, & Strutton, 2008;
Austin, Van de Vord, Pinkleton, & Epstein, 2008; Biswas, Biswas, & Das,
2006; Eisend & Langner, 2010; Lafferty, Goldsmith, & Newell, 2002;
Magnini, Garcia, & Honeycutt, 2010; Ohanian, 1990.)17
The general point is that message “sources” can take a variety of forms—
people, advertisements, websites, and so forth—and consequently the
nature and operation of source characteristics (such as credibility)
naturally may vary across these different communication formats. Parallel
research questions will arise (about the nature, antecedents, and effects of
source characteristics), but the answers can be expected to differ.
circumstance in which the message had weak arguments), or because the
higher-credibility communicator more or less directly biased (influenced
the evaluative direction of) elaboration in a way favorable to the advocated
view. Moreover—apart from whatever influence credibility might
otherwise have on the persuasiveness of a message—the communicator’s
credibility may affect whether the communicator has access to the
audience (e.g., editors may provide space in the op-ed section of a news
outlet only to persons who appear to have relevant expertise) and whether
the audience pays much attention to the message (i.e., credibility may
influence message exposure or scrutiny). (For some examples of research
illustrating such varied roles for communicator characteristics, see J. K.
Clark, Wegener, & Evans, 2011; Howard & Kerin, 2011; Sinclair, Moore,
Mark, Soldat, & Lavis, 2010; Tormala, Briñol, & Petty, 2007; Ziegler &
Diehl, 2001.)18
For Review
1. What is credibility? What are the primary dimensions of credibility?
What is expertise? Describe the questionnaire items commonly used
to assess expertise. What is trustworthiness? Describe the
questionnaire items commonly used to assess trustworthiness.
Describe the research used to identify the primary dimensions of
credibility. What is factor analysis? What is knowledge bias?
Reporting bias? Explain the relationships of knowledge bias,
reporting bias, expertise, and trustworthiness.
2. Identify factors influencing credibility. Which of these influence
expertise and which trustworthiness? Describe the effect of
knowledge of the communicator’s education, occupation, experience,
and training on expertise and on trustworthiness. Describe the effect
of nonfluencies in delivery on expertise and on trustworthiness.
Describe the effect of citation of evidence sources on expertise and on
trustworthiness. Describe the effect of the advocated position on
expertise and on trustworthiness; explain the roles of knowledge bias
and reporting bias in this phenomenon. Describe the effect of liking
for the communicator on expertise and on trustworthiness. Describe
the effects of humor on expertise and on trustworthiness.
3. In research on the effects of credibility variations, are expertise and
trustworthiness usually manipulated separately? Explain. In this
research, are the low-credibility communicators low in absolute terms
or only relatively low? Explain.
4. Explain the idea that the magnitude of credibility’s effect on
persuasive outcomes might vary. Identify two factors that influence
the magnitude of credibility’s effect. Describe how the personal
relevance of the topic influences the magnitude of credibility’s effect.
Under what sort of relevance condition (high or low) will the effect of
credibility be relatively larger? Describe how the timing of
identification of the communicator influences the magnitude of
credibility’s effect. What timing of identification leads to relatively
larger effects of credibility?
5. Explain the idea that the direction of credibility’s effect on persuasive
outcomes might vary. Identify a factor that influences the direction of
credibility’s effect. Under what conditions will higher-credibility
sources be more persuasive than lower-credibility sources? And under
what conditions will the opposite effect occur? Describe a possible
explanation for the latter effect.
6. What is the general rule of thumb concerning the effect of variations
in liking (of the communicator) on persuasive outcomes? Explain
how that general principle can be misleading (e.g., identify a limiting
condition). Describe the relative strength of the effects of credibility
and the effects of liking (on persuasive outcomes). Describe how
variations in the personal relevance of the topic influence the effects
of liking. What relevance conditions (high or low) lead to relatively
larger effects of liking? Can a disliked communicator be more
persuasive than a liked communicator? Can a disliked communicator
be more persuasive than a liked communicator even when the two
communicators are equivalent with respect to other characteristics
(e.g., credibility)? Identify a necessary condition for an otherwise
equivalent disliked communicator’s being more persuasive than a
liked communicator.
7. Does perceived similarity influence persuasive outcomes directly or
indirectly? Explain. Through what avenues does perceived similarity
influence persuasive outcomes? What is attitudinal similarity? How
does perceived attitudinal similarity influence liking? Can liking be
influenced by perceived similarities that are not relevant to the
message topic? Can perceived similarities influence judgments of
communicator expertise? Identify a necessary condition for a
perceived similarity to influence expertise judgments. Will all
relevant perceived similarities enhance expertise? Will all relevant
perceived dissimilarities diminish expertise? Explain. Can perceived
similarities influence judgments of communicator trustworthiness?
Explain. Why is it misleading to assume that greater perceived
similarity enhances persuasive effectiveness?
8. Does the physical attractiveness of the communicator influence
persuasive outcomes directly or indirectly? Explain. Through what
avenues does physical attractiveness influence persuasive outcomes?
How does physical attractiveness influence liking? Can physical
attractiveness enhance perceived expertise? Give an example. Can
physical attractiveness enhance perceived trustworthiness?
9. Explain how other communicator characteristics (i.e., other than
credibility, liking, similarity, and physical attractiveness) influence
persuasive outcomes indirectly.
10. Explain how a communication “source” might not be an identifiable
individual; give examples. Describe how questions about the nature,
antecedents, and effects of source characteristics can arise concerning
such sources. Explain how communicator variables might play
multiple roles in persuasion; give examples.
Notes
1. Not all the factors that in the research literature have been labeled
“trustworthiness” (or “character,” “safety,” or the like) contain many of the
items that here are identified as assessing trustworthiness (e.g.,
McCroskey, 1966). An important source of confusion is the apparent
empirical association between a receiver’s liking for a communicator and
the receiver’s judgment of the communicator’s trustworthiness; this
covariation is reflected in factor analyses that have found items such as
honest-dishonest, trustworthy-untrustworthy, and fair-unfair to load on the
same factor with items such as friendly-unfriendly, pleasant-unpleasant,
nice–not nice, and valuable-worthless (see, e.g., Applbaum & Anatol,
1972; Bowers & Phillips, 1967; Falcione, 1974; McCroskey, 1966; Pearce
& Brommel, 1972). This pattern can plausibly be interpreted as reflecting
the effects of liking on trustworthiness judgments (receivers being inclined
to ascribe greater trustworthiness to persons they like). But such empirical
association should not obscure the conceptual distinction between
trustworthiness and liking, especially because the empirical association is
imperfect; see Delia’s (1976, pp. 374–375) discussion of Whitehead’s
(1968) results, or consider the stereotypical used car salesman who is
likable but untrustworthy. In this chapter, investigations are treated as
bearing on judgments of trustworthiness only when it appears that
trustworthiness (and not liking) has been assessed.
2. This finding (that citation of evidence sources can enhance perceptions
of the communicator’s expertise and trustworthiness) may be seen to have
implications for the elaboration likelihood model (ELM; see Chapter 8).
Although source and message variables are not partitioned by the ELM as
having intrinsically different roles to play in persuasion, it is clear that
message materials might have implications for perceptions of source
characteristics (as when advocacy of an unexpected position enhances
perceptions of communicator trustworthiness and thereby engenders
reduced message scrutiny; Priester & Petty, 1995). The finding under
discussion points specifically to the possibility that variations in
argumentative message content may alter impressions of the
communicator’s credibility (Slater & Rouner, 1996). (As an aside:
Compared with premessage identification, postmessage identification of
communicators has sometimes been seen to yield more positive
impressions of credibility [Ward & McGinnies, 1973]. This result might
easily be understood as a consequence of participants’ larger reliance on
message materials—which commonly appear to have been of good quality
—as a basis for credibility judgments in conditions in which identification
follows the message as compared with those in which it precedes the
message.) In the context of the ELM, this implies that variations in
argument strength might affect persuasive outcomes by providing what
amounts to credibility-related cue information (for some relevant evidence,
see Reimer, 2003). The existence of such a pathway, in turn, invites
reconsideration of the commonly observed enhanced effect that argument
strength manipulations have (on persuasive outcomes) under conditions of
high personal relevance; that effect could come about through the use of a
credibility-related heuristic (and not through anything such as genuinely
thoughtful consideration of substantive arguments), as long as there was
sufficiently close message scrutiny to permit receivers to notice whatever
message elements are used as a basis for inferences about credibility. The
point here is not that this pathway provides an entirely satisfactory account
of the accumulated findings on this matter but only that the possibility of
this pathway points to some complexities in untangling what lies behind
the effects observed in ELM research.
4. Estimates of the mean effect on credibility perceptions, expressed as a
correlation, range from .16 to .22 (Eisend, 2006; O’Keefe, 1999a).
Notably, the credibility-enhancing effect (of mentioning opposing
considerations without refuting them) that obtains in consumer advertising
messages is not found in other messages (e.g., those concerning public
policy issues; O’Keefe, 1999a). It may simply be that skepticism about
consumer advertising is substantially greater than that about public policy
advocacy—and hence nonrefutational acknowledgment of potential
counterarguments is more surprising when it occurs in consumer
advertisements than when it occurs in other messages.
5. For results consistent with this expectation, see Marquart, O’Keefe, and
Gunther (1995), who found that perceived attitudinal similarity (which can
influence liking; see, e.g., Berscheid, 1985) influenced ratings of sources’
trustworthiness but not expertise.
manipulation and persuasive outcomes fails to provide relevant
information; with such a research question, the relationship between the
perceptual state (e.g., perceived expertise) and persuasive outcomes should
be examined directly.
10. See, for example, Greenberg and Miller (1966) and Sternthal,
Dholakia, and Leavitt (1978). This difficulty is consistent with studies of
the ratings given to “ideal” high- and low-credibility communicators,
which have found that when respondents are asked to indicate where a
perfectly credible and a perfectly noncredible communicator would be
rated on expertise and trustworthiness scales, the ratings are not at the
absolute extremes (R. A. Clark, Stewart, & Marston, 1972; see also J. K.
Burgoon, 1976).
the relationship that receivers have to the message topic and so, in the
interests of clarity, is avoided here.
15. Other kinds of perceived similarities (beyond attitudinal similarities)
might also enhance liking and thereby potentially influence persuasive
outcomes. For example, incidental similarities in first names, birthdays, or
birthplaces appear capable of producing such effects (see, e.g., Burger,
Messian, Patel, del Prado, & Anderson, 2004; Garner, 2005; Guéguen,
Pichot, & Le Dreff, 2005; Jiang, Hoegg, Dahl, & Chattopadhyay, 2010;
Silvia, 2005; for some complexities, see Howard & Kerin, 2011).
17. And this does not exhaust the potentially relevant endorser-related
perceptions. For example, there is reason to think that the effectiveness of
endorser ads can be driven not so much by liking or credibility as by
perceptions of the fit between other attributes of the endorser and attributes
of the product (see, e.g., Kamins, 1990; Misra & Beatty, 1990;
Mittelstaedt, Riesz, & Burns, 2000; Till & Busler, 2000; Törn, 2012).
Chapter 11 Message Factors
This chapter reviews research concerning the effects that selected message
variations have on persuasion. The message factors discussed are grouped
into three broad categories: message structure and format, message
content, and sequential-request strategies.
Conclusion Omission
Obviously, persuasive messages have some point—some opinion or belief
that the communicator hopes the audience will accept, some recommended
action that the communicator wishes to have adopted. But should the
message explicitly make that point—explicitly state the conclusion or
recommendation—or should the message omit the conclusion and so leave
the point unstated?1
Intuitively, there appear to be good reasons for each alternative. For instance,
one might think that making the conclusion explicit would be superior
because receivers would then be less likely to misunderstand the point of
the message. On the other hand, it might be that if the communicator
simply supplies the premises, and the audience reasons its own way to the
conclusion, then perhaps the audience will be more persuaded than if the
communicator had presented the desired conclusion (more persuaded,
because they reached the conclusion on their own).
There has often been speculation that the apparent advantage of explicit
conclusions may be moderated by factors involving the hearer’s ability and
willingness to draw the appropriate conclusion when left unstated. Hence
variables such as the receiver’s intelligence (which bears on ability) and
initial opinion (which bears on willingness) have often been mentioned as
possible moderators (e.g., McGuire, 1985). The expectation has been that
explicit conclusions may not be necessary to, and might even impair,
persuasive success for intellectually more capable audiences and for
audiences initially favorable to the advocated view (because such
audiences should be able and willing to reason to the advocated
conclusion). What little relevant empirical evidence exists, however, gives
no support to these speculations. For example, in several studies, the
audience was comparatively intelligent and well-educated (college
students), and even so, there was a significant advantage for messages with
explicit recommendations or conclusions (e.g., Fine, 1957).
conclusions is that when the conclusion is omitted, assimilation and
contrast effects are encouraged. As discussed in Chapter 2 (concerning
social judgment theory), assimilation and contrast effects are perceptual
distortions concerning what position is being advocated by a message (C.
W. Sherif et al., 1965; M. Sherif & Hovland, 1961): An assimilation effect
occurs when a receiver perceives the message to advocate a view closer to
his or her own than it actually does; a contrast effect occurs when a
receiver perceives the message to advocate a position more discrepant
from his or her own than it actually does. Both assimilation and contrast
effects reduce persuasive effectiveness—contrast effects because they
make the message appear to urge an even more unacceptable viewpoint,
assimilation effects because they reduce the amount of change apparently
sought by the message. Notably, relatively ambiguous messages (i.e.,
messages ambiguous about what position is being advocated) appear
especially susceptible to assimilation and contrast effects (Granberg &
Campbell, 1977). Thus the reduced persuasive success of messages
omitting explicit conclusions may arise because such messages are
relatively more subject to assimilation and contrast effects.
Recommendation Specificity
When a communicator is urging some particular action, the message can
vary in the specificity with which the advocated action is described. The
contrast here is between messages that provide only a general description
of the advocate’s recommended action and messages that provide a more
specific (detailed) recommendation. Both messages contain an explicitly
stated conclusion (in the form of an explicitly identified desired action),
but one conclusion is more detailed than the other. For example,
Leventhal, Jones, and Trembly (1966) compared persuasive messages
recommending that students get tetanus shots at the student health clinic
with messages providing a more detailed description of the recommended
action (e.g., mentioning the location and hours of the clinic—although
students were already familiar with such information). Similarly, Evans,
Rozelle, Lasater, Dembroski, and Allen (1970) compared messages giving
relatively general and unelaborated dental care recommendations with
messages giving more detailed, specific recommendations. Such studies
have commonly found that messages with more specific descriptions of the
recommended action are more persuasive than those providing general,
nonspecific recommendations (for a review, see O’Keefe, 2002b).5
It is not yet clear what might explain this effect. One possibility is that
more specific descriptions of the recommended action enhance the
receiver’s behavioral self-efficacy (perceived behavioral control). As
discussed in Chapter 6, reasoned action theory suggests that one factor
influencing a person’s behavioral intention is the individual’s belief in his
or her ability to engage in the behavior (perceived behavioral control). For
example, people who do not think that they have the ability to engage in a
regular exercise program (because they lack the time, the equipment, and
so forth) are unlikely to undertake such behavior, even if they have
positive attitudes toward exercising. It may be that—akin to the enhanced
self-efficacy that can arise from seeing another person perform the action
—receivers who encounter a detailed description of the recommended
action may become more convinced of their ability to perform the
behavior. A second (not necessarily competing) possible explanation is
that a specific action description encourages people to plan their
behavioral performance and thus develop implementation intentions
(subsidiary intentions related to the concrete realization of a more abstract
intention, as discussed in Chapter 6; see Gollwitzer & Sheeran, 2006),
which in turn make behavioral performance more likely.6
Narratives
Broadly conceived, a narrative is a story, that is, a depiction of a sequence
of related events. Much research attention has recently been given to
studying narrative as a distinctive message format for persuasion. For
example, instead of trying to persuade by making explicit arguments, one
might use a story as the vehicle for persuasive information.
bulk of the message (e.g., when one example serves as a “case study”), or
as an even more extended story (e.g., when a daytime television drama has
a multiepisode story arc concerning some health topic). Narratives might
be fictional or factual, might have a simple natural order or a more
complex structure (e.g., with flashbacks), might be delivered in first person
(“I did X”) or third person (“He did X”) forms, and so forth.
increased interest in colorectal cancer screening. In Murphy, Frank,
Chatterjee, and Baezconde-Garbanati’s (2013) study, a fictional narrative
film was more effective than a nonnarrative message in enhancing
knowledge and intentions concerning cervical cancer. Polyorat, Alden, and
Kim (2007) reported that narrative ads produced more favorable
evaluations of several consumer products than did “factual” ads. Prati et al.
(2011) found that a narrative message was more effective than a didactic
message in influencing various risk and efficacy perceptions concerning
flu shots. (For some other illustrations, see Appel & Richter, 2007; H. S.
Kim, Bigman, Leader, Lerman, & Cappella, 2012; Larkey & Gonzalez,
2007; Masser & France, 2010; Morgan, Cole, Struttmann, & Piercy, 2002;
Morman, 2000; Niederdeppe, Shapiro, & Porticella, 2011; Ricketts,
Shanteau, McSpadden, & Fernandez-Medina, 2010.)
In short, there is little room for doubt that narratives can be more
persuasive than nonnarrative messages. This research evidence does not
show that narratives are generally more persuasive than nonnarrative
messages—only that it is possible for narratives to have a persuasive
advantage. It remains to be seen exactly under what circumstances a given
narrative form will be more or less persuasive than some specific
nonnarrative message form.9
One such factor is the degree to which the recipient identifies with the narrative’s
characters.10 In a number of studies, greater identification with characters
has been found to be associated with greater persuasive effects of
narratives. As examples: In Moyer-Gusé, Chung, and Jain’s (2011) study
of narratives in which safer-sex conversations were modeled, greater
identification with the characters enhanced recipients’ self-efficacy for
having such conversations themselves. Igartua (2010) found that character
identification influenced the degree to which a fictional film affected
story-relevant beliefs and attitudes (see, relatedly, Igartua & Barrios,
2012). De Graaf, Hoeken, Sanders, and Beentjes (2012) manipulated
character identification by varying the perspective from which the story
was told, which yielded corresponding effects on story-consistent attitudes
(recipients were more inclined to have attitudes consistent with the
perspective of the character from whose perspective the story was told).
(For other illustrations, see Sestir & Green, 2010; van den Hende, Dahl,
Schoormans, & Snelders, 2012. For a review, see Tukachinsky &
Tokunaga, 2013).11
And, of course, character identification and transportation are not the only
possible influences on narrative persuasiveness. A variety of other factors
have also been explored, including the nature of the communication source
(e.g., Hopfer, 2012) and various properties of the narrative material (see,
e.g., Appel & Mara, 2013; Dahlstrom, 2010; H. S. Kim et al., 2012;
Moyer-Gusé, Jain, & Chung, 2012; Tal-Or, Boninger, Poran, & Gleicher,
2004). However, obtaining dependable generalizations about any such
factors—and identifying exactly how and why such factors influence
narrative persuasiveness (keeping in mind that the effects might be
obtained by affecting character identification or transportation)—remains
some distance in the future.
Entertainment-Education
One particular application of persuasive narratives is worth mention:
entertainment-education. Entertainment-education (EE) is the purposeful
design of entertainment media specifically as vehicles for educating—and
thereby influencing behavior. A classic example is provided by the South
African dramatic television series Soul City, which was initially created for
the purpose of conveying HIV prevention information. The series proved
both enormously popular and effective in providing the desired
information. In subsequent years the program expanded to address other
subjects (e.g., tobacco control and domestic violence) and to include other
media (e.g., a radio series). (For an overview of Soul City development,
see Usdin, Singhal, Shongwe, Goldstein, & Shabalala, 2004.) Similar
programs have been created in a number of developing countries (see, e.g.,
Abdulla, 2004; Kuhlmann et al., 2008; Ryerson & Teffera, 2004; Smith,
Downs, & Witte, 2007) and, less commonly, in the developed world (e.g.,
van Leeuwen, Renes, & Leeuwis, 2013; Wilkin et al., 2007). The
challenge in creating EE programs is striking the right balance between
entertainment (which attracts the audience) and education (which is the
reason for creating the program in the first place)—and this can be difficult
to manage (for some discussion, see Renes, Mutsaers, & van Woerkum,
2012).
Summary
Narratives can be powerful vehicles for persuasion, but many open
questions remain about how and why narratives persuade. At the moment
not much is securely known about exactly when any given narrative form
will be more persuasive than some specifiable nonnarrative form, what
factors influence the relative persuasiveness of different narrative forms, or
how such moderating factors are related to the mechanisms underlying
narrative effects. Continuing attention to these questions will be
welcomed. (For some general discussions of narrative persuasion, see
Bilandzic & Busselle, 2013; Carpenter & Green, 2012; Green & Clark,
2013; Hinyard & Kreuter, 2007; Larkey & Hecht, 2010; Larkey & Hill,
2012; Moyer-Gusé, 2008; Slater & Rouner, 2002; Vaughn et al., 2010;
Winterbottom, Bekker, Conner, & Mooney, 2008.)
Prompts
A prompt (reminder) is a simple cue that makes behavioral
performance salient and hence can trigger the behavior. Depending on the
context, a prompt might be delivered by a small sign or poster, a text
message, an automated phone call, an email, regular mail, and so on. The
message can be variously phrased—as an explicit reminder (“Don’t forget
to …”), as an invitation (“Have you considered … ?”), as a simple
rationale for the behavior (“Taking the stairs burns calories”), and so on—
but is characteristically relatively brief.
Budney, & Foerg, 1993; Blake, Lee, Stanton, & Gorely, 2008.)16
Message Content
This section reviews research concerning the persuasive effects of certain
variations in the contents of messages. Literally dozens of content
variables have received at least some empirical attention; this review
focuses mainly on selected message content factors for which the
empirical evidence is relatively more extensive.
Consequence Desirability
One common way of trying to persuade people is by appealing to the
consequences of the advocated action. The general abstract form is “If the
advocated action A is undertaken, then desirable consequence D will
occur.” A good deal of research has addressed questions about the relative
persuasiveness of various forms of consequence-based arguments. Of
specific interest here is the comparison of appeals invoking more and less
desirable consequences of compliance with the advocated view. Abstractly
put, the experimental contrast is between arguments of the form “If
advocated action A is undertaken, then very desirable consequence D1 will
occur” and “If advocated action A is undertaken, then slightly desirable
consequence D2 will occur.”
this way. Even so, substantial research evidence, collected in other guises,
has accumulated on this matter. For example, many studies have examined
a question of the form “do people who differ with respect to characteristic
X differ in their responsiveness to corresponding kinds of persuasive
appeals?”—where characteristic X is actually a proxy for variations in
what people value.
watch will help you fit in”), with the reverse being true for Chinese
audiences (e.g., Aaker & Schmitt, 2001; for a review, see Hornikx &
O’Keefe, 2009). Plainly, this effect reflects underlying differences in the
perceived desirability of various product attributes.
discussions, see Allen, 1991, 1993, 1998; Crowley & Hoyer, 1994; Eisend,
2006, 2007; O’Keefe, 1993; Pechmann, 1990.)20
persuasive topics do not produce the same enhancement of credibility
(O’Keefe, 1999a). It may be that receivers’ initial skepticism about
consumer advertising leads receivers to expect that advertisers will provide
a one-sided depiction of the advertised product—and thus when an
advertisement freely acknowledges (and does not refute) opposing
considerations, the advertiser’s credibility is enhanced (akin to the
credibility enhancement effects obtained when communicators advocate
positions opposed to their apparent self-interest, as discussed in Chapter
8).
Gain-Loss Framing
One especially well-studied persuasive message variation is gain-loss
message framing.26 A gain-framed message emphasizes the advantages of
undertaking the advocated action; a loss-framed message emphasizes the
disadvantages of not engaging in the advocated action. So, for example, “If
you wear sunscreen you’ll have attractive skin when you’re older” is a
gain-framed appeal, whereas “If you don’t wear sunscreen you’ll have
unattractive skin when you’re older” is a loss-framed appeal.
Overall Effects
The phenomenon of negativity bias provides a reason for expecting that
loss-framed appeals might have a general persuasive advantage over gain-
framed appeals. Negativity bias refers to the greater sensitivity to, and
impact of, negative information compared with equally extreme positive
information (for a review, see Cacioppo, Gardner, & Berntson, 1997). For
example, negative information has a disproportionate impact on
evaluations or decisions compared with otherwise equivalent positive
information (for a review, see Rozin & Royzman, 2001); learning one new
negative thing about a person often has a much larger effect than learning
one new positive thing. The phenomenon of negativity bias naturally
suggests that loss-framed messages, which emphasize the negative
consequences of noncompliance with the recommended action, should be
more persuasive than gain-framed appeals. However, there is no such
general advantage for loss-framed appeals. Gain-framed and loss-framed
appeals do not generally differ in persuasiveness (for a review, see
O’Keefe & Jensen, 2006).27
complex, but here too there does not appear to be any general persuasive
advantage for gain-framed appeals (Gallagher & Updegraff, 2012;
O’Keefe & Jensen, 2007).30
As an example: One suggested individual-level moderating factor is the
recipient’s approach/avoidance motivation (BAS/BIS; Carver & White,
1994). Individuals vary in their general sensitivity to reward (desirable
outcome) or punishment (undesirable outcome) cues, and the hypothesis
has been that approach-oriented individuals will be more persuaded by
gain-framed appeals than by loss-framed appeals, with the reverse pattern
holding for avoidance-oriented individuals (e.g., Jeong et al., 2011;
Latimer, Salovey, & Rothman, 2007). A related motivational difference is
the recipient’s regulatory focus (Higgins, 1998); regulatory focus
variations reflect a broad motivational difference between a promotion
focus, which emphasizes obtaining desirable outcomes, and a prevention
focus, which emphasizes avoiding undesirable outcomes.
Correspondingly, this hypothesis has sometimes been phrased in terms of
regulatory focus: Promotion-oriented individuals should be more
persuaded by gain-framed appeals than by loss-framed appeals, but
prevention-oriented individuals should be more persuaded by loss-framed
appeals than by gain-framed appeals.
at encouraging flossing in which the gain-framed appeal emphasized
having healthy gums (an approach-oriented consequence) and the loss-
framed appeal emphasized avoiding gum disease (an avoidance-oriented
consequence). The results indicated that the former appeal was more
persuasive than the latter for approach-oriented participants, with the
reverse result for avoidance-oriented participants (Sherman, Mann, &
Updegraff, 2006). But, given the confounding of the type of antecedent
and the type of consequence, such results might more plausibly be said to
reflect differences in the consequences invoked (approach vs. avoidance)
than in the antecedent (compliance vs. noncompliance). (For some relevant
evidence, see Chang, 2010, Experiment 2; for discussion, see Cesario,
Corker, & Jelinek, 2013; O’Keefe, 2013a.)
Summary
Gain-framed and loss-framed appeals do not differ much in
persuasiveness. Research has not yet identified moderating factors that
yield substantial differences in persuasiveness between gain-framed and
loss-framed appeals—and identifying such factors will be challenging.
Threat Appeals
Threat appeals (also called fear appeals) are messages designed to
encourage the adoption of behaviors aimed at protecting against a potential
threat. Threat appeals have two components. One is material depicting the
threatening event or consequences; the other is material describing the
recommended protective action. So, for example, driver education
programs may show films depicting gruesome traffic accidents (in an
effort to reduce dangerous driving practices such as drinking and driving),
antismoking messages may display the horrors of lung cancer (so as to
discourage smoking initiation), and dental hygiene messages may
emphasize the ravages of gum disease (in an effort to encourage regular
flossing).
she is not convinced that exercise will really prevent heart disease (low
perceived recommendation effectiveness), and she does not think that she
has the discipline to stick with an exercise program (low perceived self-
efficacy). Such a person presumably will have relatively low protection
motivation, as reflected in corresponding actions (namely, not exercising)
and intentions (not intending to exercise).
to have parallel (but weaker) effects on relevant persuasive outcome
variables such as attitudes, intentions, and behavior (for reviews, see de
Hoog, Stroebe, & de Wit, 2007; Floyd, Prentice-Dunn, & Rogers, 2000;
Witte & Allen, 2000).38
Second, threat messages with more intense content also are more
persuasive than those with less intense content, although this effect is
smaller than the effect of threat appeal variations on aroused fear
(expressed as correlations, the effects average between .10 and .20; for
reviews, see Boster & Mongeau, 1984; Mongeau, 1998; Sutton, 1982;
Witte & Allen, 2000). This weaker effect on persuasive outcomes is
consistent with the idea that fear (the aroused emotional state) mediates the
effect of threat appeal message manipulations on persuasive outcomes.
That is, the invited image is that varying the message contents produces
variations in aroused fear, which in turn are related to changes in attitudes,
intentions, and actions (and thus the relationship between message
manipulations and persuasive outcomes would be expected to be weaker
than the relationship between message manipulations and fear).42
Third (and a natural corollary of the first two), messages that successfully
arouse greater fear are also generally more persuasive. That is, in studies
with messages that have been shown to arouse dependably different
amounts of fear, the messages that arouse greater fear are more persuasive
than the messages that arouse lesser fear (for reviews, see Sutton, 1982;
Witte & Allen, 2000).43
Fourth, these relationships are roughly linear. That is, generally speaking,
as message content becomes more intense, greater fear is aroused and
greater persuasion occurs. It has sometimes been thought that very intense
message materials will (compared with less intense materials) produce less
persuasion (because recipients tune out the message and so do not come to
accept the recommendations). That is, the thought has been that a
persuader might go “too far” in threat appeal intensity, producing a
curvilinear relationship between intensity and persuasion (and specifically
an inverted-U-shaped relationship, such that the highest levels of message
intensity are associated with relatively lower levels of persuasion). But the
evidence in hand gives little indication of any such curvilinear effects (see
Boster & Mongeau, 1984; Mongeau, 1998; Sutton, 1992; Witte & Allen,
2000).44
Fifth, there are at least two conditions under which more intense threat
appeals are unlikely to be more persuasive than less intense ones. One is
circumstances in which the recipients’ fear level is already relatively high.
If message recipients are already experiencing sufficiently high levels of
concern, it may not be necessary—or even possible—to increase it further.
In such circumstances, messages might more appropriately focus on
barriers to adopting the recommended action—perhaps receivers’ concerns
about whether the action is effective or their doubts about their ability to
perform the action. For example, Earl and Albarracín’s (2007) review of
HIV prevention interventions found that HIV counseling was more
effective than fear-inducing arguments in encouraging condom use,
arguably because HIV-related anxiety was already relatively high, and
counseling provided information about how to address such concerns.
(See, relatedly, Kessels & Ruiter, 2012; Muthusamy, Levine, & Weber,
2009.)
But the EPPM additionally identifies two different (parallel) processes that
can be activated by threat appeals. People may want to control the
apparent danger posed by the threat (danger control processes), and they
may want to control their feelings of fear (fear control processes). The
activation of these processes varies depending on variations in the
combination of perceived threat and perceived efficacy, as follows.46
Briefly, then, from the perspective of the EPPM, the role of threat-related
perceptions (perceived severity and vulnerability) is contingent on
efficacy-related perceptions (perceived recommendation efficacy and self-
efficacy). High perceived threat alone is insufficient to motivate protective
action; only the combination of high perceived threat and high perceived
efficacy activates the danger control processes that encourage protective
behavior.
perceptions are not sufficiently high. Thus the EPPM provides a more
nuanced basis for message design than simply suggesting that persuaders
focus on whichever of the four perceptual determinants of protective
action (threat severity, threat vulnerability, recommendation effectiveness,
self-efficacy) needs attention. (For some examples of EPPM applications,
see Campo, Askelson, Carter, & Losch, 2012; Kotowski, Smith,
Johnstone, & Pritt, 2011; Krieger & Sarge, 2013; Murray-Johnson et al.,
2004. For general discussions of EPPM-based message design, see Basil &
Witte, 2012; Cho & Witte, 2005.)
Summary
Although some aspects of threat appeals have come into focus, many
unanswered questions remain, including exactly how threat-related
perceptions and efficacy-related perceptions combine to influence
protective intentions and actions (see, e.g., Goei et al., 2010), the
appropriate structure of the threat and efficacy components in threat
appeals (e.g., Carcioppolo et al., 2013; Wong & Cappella, 2009), whether
the within-individual dynamics of fear over time conform to theoretical
expectations (e.g., Algie & Rossiter, 2010; Dillard & Anderson, 2004; for
a useful discussion, see Shen & Dillard, 2014), and the potential role of
individual differences in reactions to threat appeals (e.g., Nestler & Egloff,
2012; Ruiter, Verplanken, De Cremer, & Kok, 2004; Schlehofer &
Thompson, 2011; van ’t Riet, Ruiter, & de Vries, 2012; for a general
treatment, see Cho & Witte, 2004). In short, there is much more to be
learned about threat appeals. (For some general discussions of threat
appeal research, see Dilliplane, 2010; Mongeau, 2013; Ruiter, Abraham, &
Kok, 2001; Ruiter, Kessels, Peters, & Kok, 2014; Yzer, Southwell, &
Stephenson, 2013.)
Beyond Fear Arousal
Fear is perhaps the best studied of the various emotions that persuasive
appeals might try to engage, although there has also been some work on
other emotions such as anger (Moons & Mackie, 2007; Quick, Bates, &
Quinlan, 2009), disgust (Leshner, Bolls, & Thomas, 2009; Nabi, 1998;
Porzig-Drummond, Stevenson, Case, & Oaten, 2009), and especially guilt
(e.g., Cotte, Coulter, & Moore, 2005; Hibbert, Smith, Davies, & Ireland,
2007; Turner & Underhill, 2012; for some reviews, see O’Keefe, 2000,
2002a). These lines of work have a common underlying idea, namely, that
one avenue to persuasion involves the arousal of an emotional state (such
as fear or guilt), with the advocated action providing a means for the
receiver to deal with those aroused feelings.49 (For some general
treatments of emotions and persuasion, see Dillard & Nabi, 2006; Dillard
& Seo, 2013; Nabi, 2002, 2007; Turner, 2012.)
Foot-in-the-Door
The Strategy
The FITD strategy consists of initially making a small request of the
receiver, which the receiver grants, and then making the (larger) target
request. The hope is that having gotten one’s foot in the door, the second
(target) request will be looked on more favorably by the receiver. The
question thus is whether receivers will be more likely to grant a second
request if they have already granted an initial, smaller request.51
the receiver), the more successful the FITD strategy (see the review by
Fern, Monroe, & Avila, 1986). Third, the FITD strategy appears to be
more successful if the receiver actually performs the action requested in
the initial request, as opposed to simply agreeing to perform the action (for
reviews, see Beaman, Cole, Preston, Klentz, & Steblay, 1983; Burger,
1999; Fern et al., 1986; cf. Dillard et al., 1984). Fourth, the FITD strategy
is more effective when the requests are prosocial requests (that is, requests
from institutions that might provide some benefit to the community at
large, such as civic or environmental groups) as opposed to nonprosocial
requests (from profit-seeking organizations such as marketing firms;
Dillard et al., 1984).53
Notably, several factors apparently do not affect the success of the FITD
strategy. The time interval between the two requests does not make a
difference (Beaman et al., 1983; Burger, 1999; Dillard et al., 1984; Fern et
al., 1986); for example, Cann, Sherman, and Elkes (1975) obtained
equivalent FITD effects with no delay between the two requests and with a
delay of 7–10 days. Similarly, it does not appear to matter whether the
same person makes the two requests (Fern et al., 1986).54
The observed moderating factors are consistent with this explanation. For
example, the presence of an external justification for initial compliance
obviously undermines enhancement of the relevant self-perceptions: If one
is paid money in exchange for agreeing to the initial request, it is more
difficult to conclude that one is especially cooperative and helpful just
because one agreed. Similarly, the larger the request initially agreed to, the
more one’s self-perceptions of helpfulness and cooperativeness should be
enhanced (“If I’m going along with this big request, without any obvious
external justification, then I must really be a pretty nice person, the kind of
person who does this sort of thing”). And it’s easier to think of oneself as a
helpful, socially minded person when one agrees to requests from civic
groups (as opposed to marketing firms) or when one actually performs the
requested action (as opposed to merely agreeing to perform it).55
Door-in-the-Face
The Strategy
The DITF strategy turns the FITD strategy on its head. The DITF strategy
consists of initially making a large request, which the receiver turns down,
and then making the smaller target request. The question is whether
initially having (metaphorically) closed the door in the requester’s face
will enhance the receiver’s compliance with the second request.
agree to a second smaller request if they have initially turned down a
larger first request. For example, in a study reported by Cialdini et al.
(1975, Experiment 1), individuals on campus sidewalks were approached
by a student who indicated that he or she represented the county youth
counseling program. In the DITF condition, persons were initially asked to
volunteer to spend 2 hours a week for a minimum of 2 years as an unpaid
counselor at a local juvenile detention center; no one agreed to this
request. The requester then asked if the person would volunteer to
chaperone a group of juveniles from the detention center on a 2-hour trip
to the zoo. Among those in the control condition, who received only the
target request, only 17% agreed to chaperone the zoo trip; but among those
in the DITF condition, who initially turned down the large request, 50%
agreed.57
The research evidence also suggests that various factors moderate the
success of the DITF strategy. DITF effects are larger if the two requests
are made by the same person as opposed to by different persons (for
relevant reviews, see Feeley, Anker, & Aloe, 2012; Fern et al., 1986;
O’Keefe & Hale, 1998, 2001), if the two requests have the same
beneficiary as opposed to benefiting different persons (Feeley et al., 2012;
O’Keefe & Hale, 1998, 2001), if there is no delay between the requests
(Dillard et al., 1984; Feeley et al., 2012; Fern et al., 1986; O’Keefe &
Hale, 1998, 2001), and if the requests come from prosocial rather than
nonprosocial organizations (Dillard et al., 1984; Feeley et al., 2012;
O’Keefe & Hale, 1998, 2001).58
But for several reasons, one might doubt whether the reciprocal
concessions explanation is entirely satisfactory. First, some moderator
variable effects are not so obviously accommodated by the explanation.
For example, it is not clear why the strategy should work better for
prosocial requests than for nonprosocial requests.
Second, several meta-analytic reviews have found that DITF effects are
not influenced by the size of the concession made (Fern et al., 1986;
O’Keefe & Hale, 1998, 2001), and this seems inconsistent with the
reciprocal concessions account. The reciprocal concessions account
appears to predict that larger concessions will make the DITF strategy
more effective (by putting greater pressure on the receiver), and hence the
failure to find such an effect seemingly indicates some weakness in the
explanation.59
that feelings of guilt would be better reduced by “making it up to” the
requester (as opposed to agreeing to a request from someone else)—but
that’s not the way guilt works. There is considerable evidence that guilt-reduction behaviors need not involve making amends to the victim of the guilt-inducing behavior. For example, when people are
feeling guilty about having committed a transgression (e.g., telling a lie)
that harmed another person, they are more likely to comply with a
subsequent request (than are people in a no-transgression control
condition)—but this effect is the same no matter whether the request
comes from the injured party or someone else (for a review, see O’Keefe,
2000). The implication of this finding is that DITF compliance cannot be
explained by guilt arousal alone; if DITF compliance arose purely from
guilt, then the strategy’s effectiveness would not be influenced by whether
the same person made both requests.
Conclusion
Researchers have investigated a large number of message characteristics as
possible influences on persuasive effectiveness. These message factors are
varied, ranging from the details of message components (the phrasing of
the message’s conclusion) to the sequencing of multiple messages (as in
the FITD and DITF strategies). Indeed, this discussion can do no more
than provide a sampling of the message features that have been studied.
(For other general discussions of persuasive message variations, see
Perloff, 2014; Pratkanis, 2007; Shen & Bigsby, 2013; Stiff & Mongeau,
2003.)
For Review
1. What does the research evidence suggest about the relative
persuasiveness of stating the message’s conclusion explicitly as
opposed to omitting the conclusion (leaving the conclusion implicit)?
Does this difference vary depending on the audience’s educational
level? Does it vary depending on the audience’s initial favorability
toward the advocated view? Describe a possible explanation for the
observed effect.
2. What does the research evidence suggest about the relative
persuasiveness of providing a general (as opposed to a more specific)
description of the advocated action? Describe two possible
explanations for the observed effect.
3. What is a narrative? Explain why studying the role of narratives in
persuasion can be challenging. Can narratives be more persuasive
than nonnarrative messages? Are narratives generally more
persuasive than nonnarrative messages? Identify two factors that
influence the persuasiveness of narratives. What is character
identification? How does character identification influence narrative
persuasiveness? What is transportation? How does transportation
influence narrative persuasiveness? What is entertainment-education?
Describe two ways in which entertainment-education programs can
produce persuasive effects.
4. What is a prompt? Give examples. Can prompts influence behavior?
Identify two necessary conditions for prompts to be effective in
influencing behavior. Why is an existing positive attitude such a
condition? Why is sufficiently high perceived behavioral control
(PBC, self-efficacy) such a condition?
5. What is a consequence-based argument? How do variations in the
perceived desirability of the consequences affect the persuasiveness
of such arguments? Give examples.
6. What is a one-sided message? A two-sided message? Which is more
persuasive? Distinguish two varieties of two-sided messages. What is
a refutational two-sided message? What is a nonrefutational two-
sided message? Comparing one-sided messages and refutational two-
sided messages, which generally is more persuasive? Identify an
implicit limiting condition on the occurrence of these differences.
What general differences, if any, are there in persuasiveness between
one-sided messages and nonrefutational two-sided messages? In
advertising contexts, how do one-sided messages and nonrefutational
two-sided messages differ in persuasiveness? Outside advertising
contexts (that is, in “nonadvertising” messages), how do one-sided
messages and nonrefutational two-sided messages differ in
persuasiveness? What might explain the observed differences
between advertising messages and other persuasive messages in how
nonrefutational two-sided messages work? Are the effects of
nonrefutational two-sided messages (compared with one-sided
messages) on credibility perceptions the same for advertising
messages and for nonadvertising messages? Explain how skepticism
about advertising might underlie the different effects of
nonrefutational two-sided messages in advertising contexts as
opposed to nonadvertising contexts.
7. What is a gain-framed message? A loss-framed message? Describe a
reason for hypothesizing that loss-framed appeals might generally be
more persuasive than gain-framed appeals. Which kind of appeal is
generally more persuasive? Which kind of appeal is more persuasive
when the message topic concerns disease prevention? Which kind of
appeal is more persuasive when the message topic concerns disease
detection? Explain why it is difficult to identify a factor that
moderates the effects of gain- and loss-framed appeals. Describe the
hypothesis that the relative persuasiveness of gain-framed and loss-
framed appeals will vary depending on whether the recipient is
relatively approach/promotion–oriented or avoidance/prevention–
oriented; explain how such motivational differences are related to
different kinds of behavioral consequences.
8. What is a threat appeal? Describe the two parts of a threat appeal.
What is protection motivation theory (PMT)? What is protection
motivation? Identify the two processes underlying protection
motivation. What is threat appraisal? Identify two factors that
influence threat appraisal. What is perceived threat severity? What is
perceived vulnerability to threat? What is coping appraisal? Identify
two factors that influence coping appraisal. What is perceived
response efficacy? What is perceived self-efficacy? What is the
relationship between the intensity of threat appeal content and the
degree of fear aroused in receivers? Are messages with more intense
content generally more persuasive than those with less intense
content? Are messages that arouse greater fear generally more
persuasive than those that arouse lesser amounts of fear? Does the
relationship between the intensity of message contents and the
amount of aroused fear take the shape of an inverted U? Explain.
Does the relationship between the intensity of message contents and
persuasive outcomes take the shape of an inverted U? Explain.
Identify two conditions under which more intense threat appeals are
unlikely to be more persuasive than less intense appeals. Describe the
extended parallel process model (EPPM). What does the EPPM add
to protection motivation theory (PMT)? What is danger control?
What is fear control? Describe how the activation of fear control and
danger control processes varies as a function of variations in
perceived threat and perceived efficacy. From the perspective of the
EPPM, is high perceived threat sufficient to motivate protective
action? Explain. What emotions other than fear might be involved in
persuasion? Describe how the anticipation of emotional states can
play a role in persuasion.
9. Describe the foot-in-the-door (FITD) strategy. Identify four factors
that influence the success of the FITD strategy (four moderating
factors). How does the presence of an obvious external justification
(for initial-request compliance) influence the effectiveness of the
strategy? How does the size of the initial request influence the
effectiveness of the strategy? How is the strategy’s effectiveness
influenced by whether the initially requested behavior is actually
performed? How is the strategy’s effectiveness influenced by whether
the requests come from prosocial or nonprosocial organizations?
Does the time interval between the two requests influence the
strategy’s success? Is the strategy’s success affected by whether the
same person makes both requests? What is the self-perception
explanation of FITD effects? Describe how that explanation accounts
for the observed moderating factors. Identify a potential problem with
the self-perception explanation.
10. Describe the door-in-the-face (DITF) strategy. Identify four factors
that influence the success of the DITF strategy (four moderating
factors). How is the success of the strategy affected by whether the
same person makes the two requests? How is the success of the
strategy affected by whether the two requests have the same
beneficiary? How is the success of the strategy affected by the
presence of a delay between the requests? How is the strategy’s
effectiveness influenced by whether the requests come from prosocial
or nonprosocial organizations? Describe the reciprocal concessions
explanation of DITF effects. Describe how that explanation accounts
for some of the observed moderating factors; describe how that
explanation has a difficult time accounting for other moderating
factors. Does the size of the concession (the reduction in request size
from the first to the second request) influence the success of the
strategy? Are DITF effects influenced by emphasizing or
deemphasizing the concession? Describe the guilt-based explanation
of DITF effects. Describe how that explanation accounts for some of
the observed moderating factors; describe how that explanation has a
difficult time accounting for other moderating factors. Do guilt-
reduction behaviors necessarily involve making amends to the person
injured by the guilt-producing behavior? Explain how DITF effects
might reflect a combination of reciprocity-based and guilt-based
processes.
Notes
1. This variation (stating or omitting the message’s overall conclusion)
thus is different from varying whether the message states the conclusions
to its individual supporting arguments (e.g., Kao, 2007). Both have been
glossed as “conclusion omission” manipulations but are plainly
distinguishable variations.
7. Taken together, these first two complexities should make plain the
challenges in reaching dependable generalizations about the persuasive
effects of narrative. Given that there are many different narrative forms
(the first point) and many different nonnarrative forms against which a
narrative form might be compared (the second point), quite a diverse set of
message contrasts is naturally possible.
9. One much-studied specific realization of the contrast between narrative
and nonnarrative messages is the contrast between a message that provides
a single example (in a form amounting to a narrative) and a message that
provides corresponding statistical information about many cases
(nonnarrative). In primary research, one can find results indicating a
statistically significant advantage for examples (e.g., Uribe, Manzur, &
Hidalgo, 2013), results indicating a statistically significant advantage for statistical
summaries (e.g., Lindsey & Ah Yun, 2003), and results reporting no
significant difference (e.g., Hong & Park, 2012; Mazor et al., 2007; Schulz
& Meuffels, 2012). A thorough review of the relevant research is not yet in
hand. Allen and Preiss’s (1997) review included studies that did not
compare the persuasive effectiveness of examples and parallel quantitative
summaries (e.g., Harte, 1976). The review by Zebregs, van den Putte,
Neijens, and de Graaf (2015) was careful to avoid such problems but did
not include unpublished studies or several seemingly relevant published
studies (e.g., Dardis & Shen, 2008; Studts, Ruberg, McGuffin, & Roetzer,
2010).
13. In Tukachinsky and Tokunaga’s (2013) random-effects meta-analysis,
greater transportation was significantly associated with story-consistent
attitudes, beliefs, and behaviors; the mean effect was r = .29 (across 31
cases). In van Laer, de Ruyter, Visconti, and Wetzels’s (2014) meta-
analysis, greater transportation was significantly associated with stronger
persuasive effects on beliefs (the random-effects unadjusted mean effect,
across 31 studies, corresponded to a correlation of .23), attitudes (31
studies, mean r = .41), and intentions (nine studies, mean r = .29).
14. Video games may provide another vehicle for EE narrative persuasion.
Games can easily encourage transportation (immersion in the game world)
and character identification (as when the player is a character). When
exposure to a game can be mandated (e.g., when school children are
required to play a health-oriented game as part of their instruction), then
the intrinsic appeal of the game may not matter so much; however, where
voluntary game playing is concerned, then (just as with, say, voluntary
exposure to EE television programming), the challenge arises of making
the game sufficiently entertaining (so people will want to play it) while
also ensuring delivery of the desired persuasive contents. (For some
review discussions of games as persuasive vehicles, see Lieberman, 2012,
2013; Lu, Baranowski, Thompson, & Buday, 2012; Peng, Crouse, & Lin,
2013; Primack et al., 2012.)
reported as more effective than nonpersonalized text messaging) does not
generalize: in a random-effects analysis, the mean effects for personalized
interventions (four cases, mean r = .17, 95% CI limits of .06 and .28) and
for nonpersonalized interventions (15 cases, mean r = .12, 95% CI limits
of .06 and .18) were not significantly different, Q(1) = .625, p = .43.
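As a rough arithmetic illustration (my sketch, not part of the original review), a subgroup Q statistic of this kind can be approximately reconstructed from the reported subgroup means and 95% CI limits: recover each standard error from its CI width, then compare the two means. The sketch below works directly on the correlation scale; the published analysis may have used Fisher's z, so the result only roughly matches the reported figures.

```python
# Rough reconstruction of a subgroup Q test from reported summary statistics.
# Inputs are the reported values for personalized (r = .17, 95% CI .06-.28)
# and nonpersonalized (r = .12, 95% CI .06-.18) text-messaging interventions;
# the SE recovery is an approximation, so Q only roughly matches .625.
from math import erfc, sqrt

def se_from_ci(lo, hi):
    """Recover a standard error from 95% CI limits (width = 2 * 1.96 * SE)."""
    return (hi - lo) / (2 * 1.96)

def subgroup_q(m1, se1, m2, se2):
    """Q statistic (1 df) for the difference between two subgroup mean effects."""
    return (m1 - m2) ** 2 / (se1 ** 2 + se2 ** 2)

q = subgroup_q(0.17, se_from_ci(0.06, 0.28),   # personalized interventions
               0.12, se_from_ci(0.06, 0.18))   # nonpersonalized interventions
p = erfc(sqrt(q / 2))  # upper-tail p-value for chi-square with 1 df

print(f"Q(1) = {q:.2f}, p = {p:.2f}")  # approximately the reported .625 and .43
```

Because the difference between the subgroup means (.05) is small relative to the combined standard error, Q falls well below the critical value and the null hypothesis of no subgroup difference is retained, consistent with the conclusion in the text.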
19. Persuaders might usefully be reminded here that their reasons for
wanting a behavior performed are not necessarily the reasons that will be
most persuasive to message recipients. The public health official may want
to encourage sunscreen use so as to reduce skin cancer, but appeals to
health-related consequences might be less persuasive than appeals to
appearance-related consequences.
which all the messages were two-sided had its results included in the
analysis of the effects of two-sided messages. That is, Keller and
Lehmann’s conclusions about a given message variable were not based
exclusively on experiments (randomized trials) in which levels of that
variable were manipulated. In fact, they reported, “we had relatively few
manipulated levels for many of the variables” whose effects they reviewed
(p. 120). There are, of course, very good and familiar reasons to prefer
conclusions based on randomized trials (“this experiment compared the
effectiveness of one-sided and two-sided messages and found …”) over
those based on observational studies (“in this study all the messages were
two-sided, and people were really persuaded, so therefore …”).
Correspondingly, there are good reasons to prefer meta-analytic
conclusions based exclusively on randomized-trial data over those based
largely on observational studies.
22. Both kinds of two-sided messages are perceived as more credible than
one-sided messages. For refutational two-sided messages, the effect
corresponds to a correlation of .11 (across 20 studies); for nonrefutational
two-sided messages, the correlation is .08 (across 36 cases; O’Keefe,
1999a).
in O’Keefe’s (1999) data set of 35 advertising persuasion effect sizes. For
credibility outcomes, if multiple measures in a study had been treated as
contributing a single effect size, Eisend’s (2006) data set would have
consisted of 10 effect sizes—of which seven were included in O’Keefe’s
(1999) data set of 22 advertising credibility effect sizes. As an indication
of the potential of such differences to influence the results: The 18
advertising persuasion-outcome effect sizes included in O’Keefe’s (1999)
data set but not in Eisend’s (2006) data set had a random-effects mean
effect size, expressed as a correlation, of –.04 (N = 4,148; 95% confidence
interval limits of –.12 and .05). This effect was not different for
refutational (six cases, mean r = .02, 95% CI limits of –.07 and .12) and
nonrefutational (12 cases, mean r = –.08, 95% CI limits of –.19 and .04)
advertising messages, Q(1) = 1.7, p = .19. There may have been good
reasons for the observed procedural variations (e.g., for some cases in the
earlier data set to have been excluded from the later one). On the face of
things, however, one might reasonably be cautious about supposing that
nonrefutational two-sided advertising messages generally enjoy the size of
persuasive advantage over one-sided messages that might be implied by
Eisend’s (2006) results.
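The random-effects mean effect sizes and 95% confidence intervals reported in notes like this one can be illustrated with a small computation. The sketch below is a minimal, stdlib-only implementation of the standard approach (Fisher z-transforming the correlations, estimating between-study variance with a DerSimonian-Laird estimator, and back-transforming); the example correlations and sample sizes are hypothetical, not the cases from the data sets discussed here.

```python
import math

def random_effects_mean_r(rs, ns):
    """DerSimonian-Laird random-effects mean of correlations.

    Correlations are Fisher z-transformed, combined with
    inverse-variance weights, and back-transformed. Returns
    (mean_r, ci_lo, ci_hi) with a 95% confidence interval.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]    # Fisher z
    vs = [1.0 / (n - 3) for n in ns]                        # sampling variance of z
    w = [1.0 / v for v in vs]                               # fixed-effect weights
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)                # between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]                   # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    back = lambda z: (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)
    return back(z_re), back(z_re - 1.96 * se), back(z_re + 1.96 * se)

# Hypothetical study-level correlations and sample sizes:
mean_r, lo, hi = random_effects_mean_r([-0.10, 0.02, -0.06, 0.05],
                                       [200, 150, 300, 120])
print(round(mean_r, 3), round(lo, 3), round(hi, 3))
```

A confidence interval that spans zero (as in the –.12 to .05 interval above) is what licenses the conclusion that a mean effect is "not significantly different from zero."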
26. The phrase message framing has been used to cover a variety of
different message variations. Messages have been described as differently
framed when they have varied in the substantive consequences invoked (as
when HPV vaccine is described either as preventing genital warts or as
preventing both genital warts and cancer; McRee, Reiter, Chantala, &
Brewer, 2010) or in the description of a property of the attitude object (as
when ground beef is described as “75% lean” or “25% fat”; Levin &
Gaeth, 1988). Messages have also been labeled as framed differently when
a given outcome of the advocated action is described in various ways; for
example, the results of a surgical procedure might be described in terms of
the probability of living or the probability of dying (e.g., McNeil, Pauker,
Sox, & Tversky, 1982), or price might be characterized in terms of daily
expense (“pennies a day”) as opposed to aggregate cost (e.g., Gourville,
1999; see, relatedly, Chandran & Menon, 2004). All of these are plainly
distinguishable message variations, and not much is gained by lumping all
of them together as “message framing.” For one effort at sorting out such
matters, see Levin, Schneider, and Gaeth (1998).
27. In O’Keefe and Jensen’s (2006) review, the mean effect size,
expressed as a correlation, was .02 (not significantly different from zero)
across 165 cases; that review covered a variety of advocacy topics and
included both published and unpublished research. O’Keefe’s (2011)
review analyzed O’Keefe and Jensen’s (2006) cases, the studies of disease
prevention topics reviewed by O’Keefe and Jensen (2007), and the studies
of disease detection topics reviewed by O’Keefe and Jensen (2009); it
produced a similar result (mean effect size of r = .01 across 219 cases).
Other meta-analytic reviews of gain-loss message framing studies have
commonly had more limited scope by virtue of examining only certain
kinds of advocacy subjects (say, only health behaviors) or excluding
unpublished studies (e.g., Akl et al., 2011; Gallagher & Updegraff, 2012;
O’Keefe & Jensen, 2011).
29. O’Keefe and Jensen’s (2009) review reported a small but statistically
significant advantage for loss-framed appeals for disease detection topics
(mean r = –.04 across 53 cases), but this effect reflected the results for
breast cancer detection (a statistically significant mean r of –.06 across 17
cases) and did not obtain for other kinds of detection (a nonsignificant
mean r of –.03 across 36 cases). Gallagher and Updegraff’s (2012) review
undertook separate analyses for different persuasion outcomes (attitude,
intention, and behavior); for disease detection topics, they reported no
significant differences between framing conditions (no statistically
significant mean effect size) for attitude outcomes (mean r = –.03 across
16 cases), intention outcomes (mean r = –.03 across 32 cases), or behavior
outcomes (mean r = –.04 across 18 cases).
The population effect (for gain-loss framing for disease detection topics) is
almost certainly not literally zero, but, taken together, these meta-analytic
results suggest that any such effect is likely to be quite small. For example,
O’Keefe and Jensen (2009, p. 306) pointed out that their results were
consistent with a belief that the population effect is –.02 both overall and
for each of the different detection topics they distinguished; that is, that
value falls within the 95% confidence interval around each of the various
mean effects. And that value also falls within the 95% confidence interval
for the three separate effects computed over (a corrected version of)
Gallagher and Updegraff’s (2012) data set (O’Keefe, 2013b). So the gain-
loss message framing population effect for disease detection topics may
not be zero, but it is not very distant from zero.
30. O’Keefe and Jensen’s (2007) review, which included both published
and unpublished studies, reported a small but statistically significant
advantage for gain-framed appeals for disease prevention topics (mean r =
.03 across 93 cases). But this effect reflected the results for dental hygiene
messages (a statistically significant mean r of .15 across nine cases) and
did not obtain for other kinds of prevention topics (a nonsignificant mean r
of .02 across 84 cases). Gallagher and Updegraff’s (2012) review, which
was restricted to published studies, undertook separate analyses for
different persuasion outcomes (attitude, intention, and behavior); for
disease prevention topics, they reported no significant difference between
framing conditions for attitude (mean r = .04 across 45 cases) and
intention (mean r = .03 across 46 cases) outcomes but did find a
significant mean effect size for behavioral outcomes (mean r = .08 across
32 cases). But a closer analysis of (a corrected version of) Gallagher and
Updegraff’s data set indicates that those three mean effect sizes were not
significantly different from each other (for details and discussion, see
O’Keefe, 2013b); that is, for prevention topics, the mean effect size for
behavior outcomes was not significantly larger than the mean effect sizes
for attitude or intention. Expressed differently: There is no evidence that
gain-loss framing effects on prevention topics vary as a consequence of the
kind of outcome examined; the observed effects on the different outcomes
were statistically indistinguishable.
effect may not literally be zero, but it is not very far from that value.
31. It might have been more transparent to have labeled this appeal
variation as the difference between “compliance-focused” and
“noncompliance-focused” appeals. But the terminology of “gain-framed”
and “loss-framed” is too well established to hope for any better labeling to
take hold.
(by having the gain-framed appeal invoke individualist consequences and
the loss-framed appeal invoke collectivistic consequences)—or could
show the exact opposite. In general, any individual-difference variable that
goes proxy for, or straightforwardly represents, value variations makes the
task of experimental message construction especially challenging.
Showing the effect of such an individual-difference variable on the relative
persuasiveness of gain- and loss-framed appeals requires ruling out
variations in consequence desirability as an alternative explanation.
35. PMT is actually a bit more complex than this. Threat appraisal is said
to depend not just on threat severity and threat vulnerability but also on the
rewards of adopting a maladaptive response (e.g., the perceived rewards of
not adopting the protective behavior); coping appraisal is said to depend
not just on response efficacy and self-efficacy but also on response costs
(perceived costs of taking the protective response, such as money, time,
and so forth). But maladaptive rewards and response costs have received
less research attention than the other four elements, and the simpler
version presented here suffices to introduce the relevant general issues.
(Moreover, the relation between self-efficacy and response costs is not
entirely clear. After all, one reason why I might think that I can’t actually carry out
a protective behavior such as an exercise program [low self-efficacy] is
that it takes too much time [high response cost]. But PMT treats these
separately.)
outcomes such as intentions and behaviors, research reports commonly
have not reported such information.
37. For ethical reasons, when the message topic concerns a real (as
opposed to fabricated) threat, the experimental conditions often involve
contrasts between (for instance) a high-vulnerability message and a no-
message control condition (e.g., Yzer et al., 1998).
39. Notice that this way of defining the message variation is based on the
properties of the communication, not the reactions of an audience. By
contrast, sometimes threat appeal variations have been defined on the basis
of evoked reactions (so that a strong threat appeal is one that arouses more
fear than does a weak one). But this latter way of defining message
variations should be dispreferred (for discussion, see O’Keefe, 2003; Tao
& Bucy, 2007).
40. Even these estimates may be misleading. For example, Witte and Allen
(2000) reported a mean correlation of .30 between threat appeal
manipulations and aroused fear (across 51 cases). But this figure was
inflated by (a) the exclusion of studies with a failed “manipulation check”
(studies in which there was not a dependable difference in aroused fear
between message conditions) and (b) the adjustment of individual effect
sizes, before being analyzed, for factors such as range restriction (thereby
increasing the size of the individual effects). An analysis that included all
studies and used unadjusted correlations would presumably yield a smaller
mean effect. (This treatment passes over complexities such as questions
about how to interpret postmessage fear reports and about the potential
role of evoked emotions other than fear. For helpful discussion, see Shen
& Dillard, 2014.)
41. There has been regrettably little attention to describing the particulars
of threat appeal variations. The meta-analytic treatments of this literature
commonly simply rely on the primary research categories (e.g., strong and
weak appeals) and do not consider what specific message features might
have been experimentally varied. The consequence is that we know rather
less than we might about what particular message variations might produce
the observed effects.
42. Unfortunately, threat appeal research results are often reported in ways
that do not permit full examination of the relationships of interest (see note
36 above). For example, it is common that a researcher will create two
message variations (with strong and weak threat appeals), check that they
aroused different levels of fear (in a manipulation check), and then report
the contrast between the two message conditions for the persuasion-
outcome dependent variables (such as attitude and intention)—leaving
unreported the direct relationship between the presumed mediating state
(fear) and the persuasion outcome variables. This has been a rather
widespread problem in persuasion research. A brief way of putting the
problem is to say that assessments of intervening states, rather than
properly being understood (and analyzed) as assessing mediating states,
have instead unfortunately been seen (and analyzed) as providing
manipulation checks for independent variables (for discussion, see
O’Keefe, 2003).
43. Carey, McDermott, and Sarma’s (2013) meta-analytic results are not
obviously an exception to this generalization. That meta-analysis
examined studies that compared messages that addressed road safety using
threat appeals against various control messages. Across four studies, threat
messages aroused greater fear than control messages (mean effect size of r
= .64); across 15 studies, these two kinds of messages were not associated
with differences in driving practices (mean effect size of r = .03). But of
those 15 studies, only two assessed both fear arousal and driving outcomes
(and these two were not separately analyzed), so there is not much direct
evidence in this data set concerning the question of whether messages that
arouse greater fear are also generally more persuasive. Notice that in meta-
analyses of threat appeal research where the experimental contrast of
interest is between high-intensity and low-intensity depictions of negative
consequences in threat appeals (e.g., Witte & Allen, 2000), the meta-
analytic results speak to the question of how message designers should
implement threat appeals. In Carey et al.’s (2013) review, where the
experimental contrast was between threat appeals and nonthreat appeals,
the meta-analytic results speak to the question of whether road safety
message designers should have any reason to prefer threat appeals over
nonthreat (control) appeals.
44. The relevant relationships are almost certainly only roughly linear, not
rectilinear. For instance, in a given persuasive circumstance, as message
intensity increases, there might come a point at which aroused fear
plateaus. That is, at some point further increases in message intensity
might not yield any greater fear (or any greater persuasion).
(mean r = .22, 95% CI limits of –.08 and .48) or when depicted efficacy
was low (mean r = –.05, 95% CI limits of –.24 and .15), with those two
mean effect sizes not significantly different from each other, Q(1) = 2.2, p
= .14. Peters et al. (2012, pp. 11–12) did offer a rationale for the exclusion
of Study 43, but the point here is the substantial effect that a single study
can have on these meta-analytic conclusions. In sum, it may well be the
case that more intense depictions of threat severity (and correspondingly
greater fear arousal) will be associated with greater persuasion only when
a workable, effective solution is perceived to be in hand—but the evidence
to date is not as robust as one might like.
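Subgroup contrasts like the Q(1) statistics reported in this note (and in note 25 above) test whether two mean effect sizes differ more than sampling error would predict. The sketch below is a simplified, stdlib-only version using Fisher z-transformed correlations and fixed-effect (inverse-variance) pooling within each subgroup; the input values in the usage line are hypothetical, not the reviewed studies.

```python
import math

def fisher_z(r):
    return 0.5 * math.log((1 + r) / (1 - r))

def q_between(group_a, group_b):
    """Q(1) test comparing two subgroup mean correlations.

    Each group is a list of (r, n) pairs. Effect sizes are Fisher
    z-transformed and pooled with inverse-variance weights within
    each group. Returns (Q, p), with p from chi-square with 1 df.
    """
    def pooled(group):
        w = [n - 3 for _, n in group]             # 1/variance of Fisher z
        z = sum(wi * fisher_z(r) for (r, _), wi in zip(group, w)) / sum(w)
        return z, sum(w)
    za, wa = pooled(group_a)
    zb, wb = pooled(group_b)
    z_all = (wa * za + wb * zb) / (wa + wb)       # grand weighted mean
    q = wa * (za - z_all) ** 2 + wb * (zb - z_all) ** 2
    p = math.erfc(math.sqrt(q / 2))               # chi-square(1) upper tail
    return q, p

# Hypothetical high- vs. low-efficacy subgroups:
q, p = q_between([(0.22, 120), (0.18, 90)], [(-0.05, 150), (0.01, 80)])
print(round(q, 2), round(p, 3))
```

A nonsignificant Q (e.g., p = .14, as in the note above) means the two subgroup means cannot be distinguished statistically, even when one of them looks substantially larger than the other.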
46. This description of the EPPM is necessarily only a brief gloss (the
most detailed description is Witte, 1998). There is room for some
uncertainty about the EPPM’s predictions, at least in part because
presentations of the EPPM sometimes run together questions about
influences on protective intentions and actions and questions about
message effects. To clarify: Read in one way, the EPPM—and protection
motivation theory (PMT)—offer an account of what influences protective
intentions and actions. There is an obvious parallel here with reasoned
action theory (RAT; Chapter 6). RAT, the EPPM, and PMT each identify a
set of determinants of (influences on, precursors to) intentions and
behavior. The EPPM and PMT are narrower than RAT (because the EPPM
and PMT are focused on protective behaviors specifically) and so naturally
have a distinct set of determinants, but the overall analytical approach is
quite similar. The EPPM goes beyond PMT because it incorporates
Leventhal’s (1970) concepts of fear control and danger control processes
and because it offers an elaborated account of the interplay between these
processes and among the various determinants. Even so, the EPPM can be
seen to have the same central focus as PMT: understanding what factors
affect protective intentions and actions. Notice that one can have such an
account without ever considering questions about persuasive messages or
interventions. Each distinct determinant is of course a potential target for
persuasive messages, but how and why messages influence those
perceptual determinants make for a separate set of research questions. It
would be possible to know that perceptual state X (e.g., perceived self-
efficacy) is strongly related to protective behaviors without knowing what
sorts of messages or interventions influence X. As a parallel case from
RAT: It is possible to know that perceived behavior control (PBC) is
generally correlated with behavioral intentions without knowing how to
influence PBC. But the EPPM wants to offer not only an account of what
influences protective actions and how those influences are interrelated
(e.g., Propositions 6–10 in Witte, 1998) but also an account of what
happens when threat-related persuasive messages are encountered (e.g.,
Propositions 1, 2, and 5). These enterprises are naturally related but
conceptually distinct. Questions about what happens when this or that
perceptual state has a given value (e.g., what happens when perceived
threat severity is high) are different from questions about what happens
when this or that message feature has a given value (e.g., what happens
when depicted threat severity is high). In this research domain,
unfortunately, such differences are not always fully grasped. For example,
as Popova (2012, p. 457) pointed out, “the conceptual difference between
threat as a message characteristic and perceived threat is often overlooked
in practice.” The upshot is that sorting out predictions (and, for that matter,
empirical findings) in this research area can be quite challenging.
47. Notice that where fear control processes are activated, recipients may
end up experiencing relatively little fear and hence exhibit relatively little
persuasion. That is, the EPPM’s picture here is consistent with the
generally positive relationship between fear and persuasion: If people
aren’t experiencing much fear (because they don’t find the message
contents scary, don’t think they’re vulnerable, aren’t thinking about the
threat, or any other reason), then they’re not likely to be especially
motivated to adopt the protective action.
50. Other compliance techniques have also received some research
attention. Notable among these are the “that’s-not-all” technique, in which
before any response is given to the initial request, the communicator
makes the offer more attractive (see, e.g., Banas & Turner, 2011; Burger,
Reed, DeCesare, Rauner, & Rozolis, 1999); the “low-ball” technique, in
which the communicator initially obtains the receiver’s commitment to an
action and then increases the cost of performing the action (see, e.g.,
Cialdini, Cacioppo, Bassett, & Miller, 1978; Guéguen, Pascual, & Dagot,
2002); and the “legitimizing paltry contributions” technique, in which
fundraisers explicitly legitimize small contributions (e.g., by saying “even
a penny helps”; for an illustrative study, see Cialdini & Schroeder, 1976; for
a review, see Andrews, Carpenter, Shaw, & Boster, 2008). Cialdini and
Griskevicius (2010) provide a useful general discussion of compliance
techniques.
condition denominator (or, alternatively expressed, as an artifact of having
dispositionally cooperative persons overrepresented in the FITD condition
by virtue of having passed through what amounted to the screening
procedure of the initial request).
54. Chartrand, Pinckert, and Burger (1999) found that if the same person
makes both requests with no delay between them, the FITD technique may
backfire. But even this effect is apparently not general. Burger’s (1999)
review reported an advantage for FITD conditions over control conditions
when the same person made both requests without a delay (overall effect
corresponding to a correlation of .05 across 24 studies); when the same
person made the requests but with a delay between them (correlation of
.07, seven studies); when different persons made the requests without a
delay (correlation of .11, five studies); and when different persons made
the requests with a delay (correlation of .12, 28 studies). Taken at face
value (see note 51 above), these reported overall effects underwrite a
conclusion that FITD effects are unaffected by delay and a suspicion that
FITD effects perhaps might be larger when different persons make the
requests than when the same person makes them (but in the absence of
appropriate statistical analyses—comparing the differences between the
relevant effects—this can be only a suspicion).
55. The lack of an effect for the time interval between the requests is
sometimes seen as inconsistent with the self-perception explanation (e.g.,
Dillard et al., 1984). But it is not clear what predictions the self-perception
explanation would make here. On the one hand, it might be expected that
with increasing delay between the two requests, the FITD effect would
weaken (because there would be many opportunities, during the interval,
for other events to undermine the self-attributions of helpfulness and
cooperativeness). On the other hand, it might be predicted that with
increasing delay between the requests, the FITD effect would become
stronger (because it takes time for receivers to reflect on the causes of their
behavior and so to make the required self-attributions). Or (as Beaman et
al., 1983, noted) it might be that both these processes are at work and
cancel each other out.
56. A number of other explanations have also been proposed for FITD
effects (e.g., Ahluwalia & Burnkrant, 1993; Gorassini & Olson, 1995), but
at present none seems entirely satisfactory. For many explanations, there is
little direct relevant evidence. And it is not always obvious how the
explanations can accommodate the existing evidence concerning
moderating factors. For example, Fennis, Janssen, and Vohs’s (2009; see
also Fennis & Janssen, 2010) account invokes self-regulatory resource
depletion processes (the effortful character of responding to the first
request makes yielding to the second request more likely); but resource
depletion presumably dissipates relatively quickly whereas FITD effects
have been observed at some temporal remove (e.g., two weeks: Freedman
& Fraser, 1966).
suggesting that any concession merely needs to be large enough to trigger
the reciprocal concessions norm; so long as the concession surpasses this
threshold, the reciprocal concessions mechanism will be engaged (and thus
increasing the size of the concession beyond that threshold would not
affect the strategy’s effectiveness). This defense is certainly adequate as
far as it goes, but consider that if larger concessions had been found to be
associated with greater DITF effectiveness, such a result surely would
have been counted as evidence supporting the reciprocal concessions
explanation. Thus the failure to find such effects requires at a minimum
some revision in the account (such as represented by the articulation of a
threshold-model version of the explanation).
Chapter 12 Receiver Factors
Individual Differences
Topic-Specific Differences
General Influences on Persuasion Processes
Summary
Transient Receiver States
Mood
Reactance
Other Transient States
Influencing Susceptibility to Persuasion
Reducing Susceptibility: Inoculation, Warning, Refusal Skills Training
Increasing Susceptibility: Self-Affirmation
Conclusion
For Review
Notes
This chapter reviews research concerning the effects that various recipient
characteristics have on persuasive outcomes. The discussion is organized
around three main topics: individual differences (such as personality
traits), transient receiver states (such as moods), and means of influencing
receivers’ susceptibility to persuasion.
Individual Differences
Individual differences (ways in which people vary with respect to
relatively stable characteristics, such as personality variations) can
influence persuasion in two broad ways: by virtue of their association with
topic-specific differences or by virtue of their general influence on
persuasion processes.
Topic-Specific Differences
Some individual-difference variables can be associated with topic-specific
differences in attitudes, beliefs, values, or behaviors—and hence (where
relevant to the topic) such individual differences may be related to
persuasive effects. A convenient example is provided by the personality
variable of self-monitoring (the degree to which a person regulates self-
presentation). As discussed in Chapter 3 (concerning functional
approaches to attitude), high and low self-monitors differ in what they tend
to value in certain consumer products; for instance, high self-monitors
especially favor the image projection attributes of automobiles, whereas
low self-monitors are more likely to value characteristics such as
reliability. Hence high and low self-monitors are differentially influenced
by corresponding persuasive appeals; high self-monitors react more
favorably to image-oriented advertisements than to ads focused on product
quality, whereas the opposite tendency is found for low self-monitors (e.g.,
Snyder & DeBono, 1985). To put the relevant point generally, this
personality difference serves as a marker of differences in receiver values
and hence is related to the success of persuasive appeals that vary in the
degree to which those values are engaged.
Thompson, 2003.)
(Stephenson et al., 1999). But high and low sensation seekers also differ in
the kinds of messages to which they are especially susceptible; for
example, the use of rapid edits, intense imagery, and surprise endings can
make for more effective antidrug public service announcements for high
sensation seekers (e.g., Morgan, Palmgreen, Stephenson, Hoyle, & Lorch,
2003; Niederdeppe, Davis, Farrelly, & Yarsevich, 2007; for complexities,
see Kang, Cappella, & Fishbein, 2006). The implication is that sensation
seeking might provide not only a basis for identifying members of the
target audience but also a means of adapting messages to that audience (for
a useful review, see Morgan, 2012).
Summary
A great many individual-difference receiver characteristics have
been examined for their possible relationships to persuasibility. For most
such characteristics, the research evidence is commonly not extensive, and
dependable generalizations seem hard to come by. (For some illustrative
studies, see Geers, Handley, & McLarney, 2003; Guadagno & Cialdini,
2010; Gunnell & Ceci, 2010; Hirsh, Kang, & Bodenhausen, 2012; Lee &
Bichard, 2006; Magee & Kalyanaraman, 2009; Resnicow et al., 2008;
Saucier & Webster, 2009; Stephenson, Quick, & Hirsch, 2010; van ’t Riet,
Ruiter, & de Vries, 2012; Williams-Piehota et al., 2009.) But, as this brief
sketch indicates, individual differences may affect persuasion in a number
of different ways, so perhaps it is unsurprising that research has so often
yielded complex results. A given individual difference such as receiver age
might potentially be related to general persuasibility differences (Krosnick
& Alwin, 1989), to dispositional differences in information-processing
inclinations (e.g., Williams & Drolet, 2005), and to topic-specific
differences in persuasion-relevant beliefs and attitudes (as in the observed
age-related differences in evaluations of volunteering outcomes; Okun &
Schultz, 2003). Similarly, cultural variations may be related not only to
variations in underlying values but also to some information-processing
differences (e.g., Hornikx & Hoeken, 2007; Larkey & Gonzalez, 2007). It
is likely to take some time to sort out completely the different pathways by
which various individual-difference variables exert their influence on
persuasive effects. (For one attempt, see Briñol & Petty, 2005. For other
general discussions of individual differences and persuasion, see Briñol,
Rucker, Tormala, & Petty, 2004; Shakarchi & Haugtvedt, 2004; W. Wood
& Stagner, 1994.)
Transient Receiver States
Whereas the previous section discussed the effects of relatively stable
(individual-difference) receiver characteristics, persuasion can also
potentially be influenced by more transient receiver states. Two such states
are discussed here: mood and reactance.3
Mood
There seems to be a natural appeal to the hypothesis that a receiver’s
preexisting mood (affective state) will influence persuasion quite
straightforwardly, such that positive moods will enhance persuasion and
negative moods will diminish persuasion. And although the research
evidence to date does not yet yield an entirely clear picture, it is
nevertheless plain that this simple hypothesis will not suffice.
Rather, the research in hand appears to suggest that receivers in (at least
some kinds of) negative moods are more likely to engage in close message
processing than are receivers in (at least some kinds of) positive moods.
Expressed in terms of the elaboration likelihood model (ELM; see Chapter
8), mood influences elaboration likelihood. For example, Bless, Bohner,
Schwarz, and Strack (1990) found that sad participants were persuaded by
a counterattitudinal message if the message’s arguments were strong but
not if the arguments were weak (indicating relatively high elaboration); by
contrast, happy participants were equally persuaded by strong and by weak
arguments (suggesting relatively low elaboration). (For similar results, see,
e.g., Mackie & Worth, 1989. For a review, see Hullett, 2005.)4
Research has only begun to explore possible moderating conditions for this
effect (circumstances under which this effect weakens or even reverses),
so conclusions are not yet secure (see, e.g., Banas, Turner, & Shulman,
2012; Das, Vonkeman, & Hartmann, 2012; Shen, 2013; Sinclair, Moore,
Mark, Soldat, & Lavis, 2010; Ziegler, 2013, 2014). However, one notable
theme has been the suggestion that instead of referring generally to
positive and negative moods (affective states), a more differentiated
treatment of affective states will be needed—because, for example,
different positive affective states may have different message-processing
consequences. (For some illustrative studies, see Agrawal, Menon, &
Aaker, 2007; Griskevicius, Shiota, & Neufeld, 2010. For some useful
general discussions of affect and persuasion, see Bless & Schwarz, 1999;
Dillard & Seo, 2013; Nabi, 2007, 2010.)
Reactance
Reactance is a motivational state that is aroused when a person’s freedom
is perceived to be threatened or eliminated (Brehm, 1966; Brehm &
Brehm, 1981). When a person believes that his or her freedom may be
diminished, the person will be motivated to restore (defend, exercise) that
freedom—perhaps by acting counter to the impending pressure. (For some
general treatments of reactance, see Miron & Brehm, 2006; Quick, Shen,
& Dillard, 2013.)
However, a number of studies have found that explicit freedom-
threatening language can evoke reactance and lead to diminished
persuasiveness (compared with parallel messages without such language).7
For example, Quick and Considine (2008) compared exercise messages
with “forceful” language (e.g., “it is impossible to deny all the evidence”
of exercise benefits, “no other conclusion makes sense,” and so forth) and
ones with non-forceful language (e.g., “there is pretty good evidence” of
exercise benefits, “it’s a sensible conclusion,” and so on), finding that
forceful language evoked reactance and reduced perceived message
persuasiveness. (For similar results, see, e.g., Bensley & Wu, 1991;
Burgoon et al., 2002; Dillard & Shen, 2005; Rains & Turner, 2007.)8
Avoiding such directive language is thus one way in which to reduce the
likelihood of the arousal of reactance. Another potential strategy for
minimizing reactance or its effect might be to emphasize the receiver’s
freedom of choice (e.g., Miller, Lane, Deatrick, Young, & Potts, 2007).
(See also the “but you are free to refuse” request strategy, as in Guéguen &
Pascual, 2005; for a review, see Carpenter, 2013.) Indeed, in work on
influencing addictive and related problematic health behaviors, the
approach known as motivational interviewing specifically recommends
that counselors avoid confrontation and instead affirm the client’s
autonomy and capacity for self-direction (for a general treatment of
motivational interviewing, see Miller & Rollnick, 2002; for some reviews
of relevant research, see Hettema & Hendricks, 2010; Jensen et al., 2011;
Knight, McGowan, Dickens, & Bundy, 2006; Morton et al., 2014).
support confident conclusions.
Inoculation
The fundamental ideas of inoculation can be usefully displayed through a
biomedical metaphor. Consider the question of how persons can be made
resistant to a disease virus (such as smallpox). One possibility is what
might be called supportive treatments—making sure that people get
adequate rest, a good diet, sufficient exercise, necessary vitamin
supplements, and so on. The hope, obviously, is that this treatment will
make it less likely that the disease will be contracted. But another
approach to inducing resistance is inoculation (as with smallpox vaccines).
An inoculation treatment consists of exposing persons to small doses of
the disease virus. The dose is small (to avoid bringing on the disease itself)
but is sufficient to stimulate and build the body’s defenses so that any later
massive attack (e.g., a smallpox epidemic) can be defeated.
disease). Research to date suggests that supportive treatments may indeed
confer some resistance to persuasion (compared to no-treatment control
conditions), but the evidence is not yet quite as decisive as one might
want. (For examples of relevant research, see Bernard, Maio, & Olson,
2003; Rosnow, 1968. For a review, see Banas & Rains, 2010.)13
recipient aware of the possibility of opposing arguments or views (because
the recipient sees an opposing argument).18
But the idea of perceived vulnerability has not yet been carefully
specified. For example, does the receiver need to think that an attack
message is actually about to be encountered? Or only that it might
plausibly occur at some time in the future? Or perhaps merely that in the
abstract, somebody somewhere might believe differently (never mind
whether an actual attack is expected)? Is it necessary that the receiver think
that the attack message (or the imagined interlocutor) has good reasons for
the opposing view (reasons that might form the basis of good arguments
against the receiver’s opinion)? Or perhaps is the mere recognition of the
possibility of opposition (whether or not well-founded) sufficient?19
Even given some specification of the idea of vulnerability, the issue
then becomes explaining exactly how and why perceived vulnerability
leads to resistance. Perhaps it stimulates counterarguing, or possibly it
simply inclines the receiver to reject the opposing view out of hand
without thinking about it very much.20 (For some discussion of alternative
means of resistance, see Ahluwalia, 2000; Blumberg, 2000; Burkley,
2008.) In short, much remains to be learned about how inoculation creates
resistance to persuasion.
Warning
If one’s awareness that a belief is vulnerable to attack might be sufficient
to lead one to bolster one’s defense of that belief (and thereby reduce the
effectiveness of attacks on it), then perhaps simply warning a person of an
impending counterattitudinal message will decrease the effectiveness of
the attack once it is presented. A fair amount of research has been
conducted concerning the effects of such warning on resistance to
persuasion.
Two sorts of warnings have been studied. One type simply warns receivers
that they will hear a message intended to persuade them, without providing
any information about the topic of the message, the viewpoint to be
advocated, and so on. The other type of warning tells receivers the topic
and position of the message.
Cacioppo, 1977, 1979a; for a review, see W. Wood & Quinn, 2003).21
Topic-position warnings make it possible for receivers to engage in
anticipatory counterarguing because the audience knows the issue to be
discussed and the view to be advocated. Thus as the time interval between
the topic-position warning and the onset of the message increases (up to a
point, anyway), there is more opportunity for the audience to engage in
counterarguing. For example, in one study, high school students were
shown messages arguing that teenagers should not be allowed to drive.
Students received a warning of the topic and position of the impending
message, but the interval between the warning and the message varied (no
delay between warning and message, a 2-minute delay, or a 10-minute
delay). With increasing delay, there was increasing resistance to
persuasion (Freedman & Sears, 1965).
unable to resist offers of illegal drugs, alcohol, or tobacco and so end up
using these substances—even if they have negative attitudes about such
substances. Hence it has been thought that one avenue to preventing
substance use (or abuse) might be to provide training in how to refuse such
offers.
A good deal of research has explored refusal skills induction in the context
of preventing children and adolescents from using or misusing drugs
(alcohol, tobacco, marijuana, and so on). Three broad conclusions may be
drawn from this research. First, it is possible to teach such refusal skills.
Studies have found that resistance skills training does improve the quality
of role-played refusals, participants’ perceived self-efficacy for refusing
offers, and the like (see, e.g., Brown, Birch, Thyagaraj, Teufel, & Phillips,
2007; Langlois, Petosa, & Hallam, 1999; Wynn, Schulenberg, Maggs, &
Zucker, 2000).
Second, the programs that are most effective at teaching refusal skills
commonly involve rehearsal with directed feedback (i.e., opportunities for
participants to practice their refusal skills and to receive systematic
evaluation of their performance). Simply encouraging participants to
refuse offers or providing information about refusal skills seems less
effective in developing such skills than is providing guided practice (see,
e.g., Corbin, Jones, & Schulman, 1993; Turner et al., 1993).
to encourage substance use—have also been observed (Biglan et al., 1987;
Donaldson et al., 1995; S. Kim, McLeod, & Shantzis, 1989).
Alternatively, one might think that, because some refusal skill programs
appear to have been more successful than others in addressing substance
use, the key to future program development is the identification of the
relevant program ingredients. For some discussion along these lines, see
Krieger et al. (2013) and Miller-Day and Hecht (2013).
active form of resistance that engages the message. Defensive avoidance,
on the other hand, represents a withdrawal from the message, an
unwillingness to engage with it. It’s as though the message is somehow so
threatening that people want to close themselves off from it.24
The question that arises is how, in such circumstances, people can be made
more susceptible to influence, more open to persuasion. The apparent
motivational foundation for avoidance—the desire to maintain a positive
self-image—suggests a possible avenue to minimizing these avoidance
tendencies: self-affirmation. Self-affirmation refers to treatments aimed at
affirming (confirming, supporting) the recipient’s positive characteristics
or important values. Self-affirmation can be accomplished in a variety of
ways, but the most common methods in research studies have had
participants reflect on a core value (e.g., by writing about a value that is
important to them, by describing instances in which they performed
positive actions such as kindness behaviors, and so on). The idea is that
active affirmation of some positive aspect of one’s self-concept will permit
people to be open to information that would otherwise be threatening. (For
some discussion of self-affirmation manipulations, see Armitage, Harris,
& Arden, 2011; McQueen & Klein, 2006; Napper, Harris, & Epton, 2009.)
information about smoking risks (Harris, Mayle, Mabbott, & Napper,
2007), to make opponents and proponents of capital punishment more
open to opposing viewpoints (Cohen, Aronson, & Steele, 2000), and so
forth. (For other examples, see Howell & Shepperd, 2012; Reed &
Aspinwall, 1998; Schüz, Schüz, & Eid, 2013; Sparks, Jessop, Chapman, &
Holmes, 2010; Van Koningsbruggen & Das, 2009. For reviews, see
Epton, Harris, Kane, van Koningsbruggen, & Sheeran, in press; Harris &
Epton, 2009; Sweeney & Moyer, 2015.)
At present, however, little can be confidently said about factors that might
moderate these self-affirmation effects—the circumstances under which
self-affirmation effects are most likely to occur, what sorts of self-
affirmation treatments might be most effective, whether individual
differences affect the success of self-affirmation treatments, and so forth
(for some illustrative studies, see Klein et al., 2010; Nan & Zhao, 2012;
Pietersma & Dijkstra, 2011; Sherman et al., 2009). Similarly, research is
only beginning to explore the mechanisms by which self-affirmation
treatments have their effects (e.g., Crocker, Niiya, & Mischkowski, 2008;
Klein & Harris, 2009; Van Koningsbruggen et al., 2009).26 But the
manifest usefulness of self-affirmation recommends its continued
investigation. (For some general discussions of self-affirmation theory and
research, see J. Aronson, Cohen, & Nail, 1999; Harris, 2011; Harris &
Epton, 2009, 2010; Sherman & Cohen, 2006.)
Conclusion
Researchers have investigated a large number of recipient characteristics
as possible influences on persuasive effectiveness; in particular, a great
many individual-difference variables have received some attention. The
present treatment provides only an overview of several especially
prominent lines of research.
For Review
1. What are individual differences? Explain how individual-difference
variables can be associated with topic-specific differences in
attitudes, beliefs, values, or behavior. Give examples. Explain how
individual-difference variables might be related to general differences
in persuasion processes. Give examples.
2. Are people in positive moods generally more easily persuaded than
people in negative moods? Describe the effect of variation in moods
on the extensiveness of message processing. What is reactance? How
does reactance influence message persuasiveness? Is reactance purely
an affective (emotional) state? Explain. Identify a message feature
that might arouse reactance. Describe how persuaders might
minimize the arousal of reactance.
3. Describe the general idea of resistance to counterpersuasion. Identify
two general ways persons might be made resistant to a disease virus.
Describe supportive medical treatments; describe how inoculation
against disease works. Describe inoculation treatments for inducing
resistance to persuasion; are these treatments effective in creating
resistance? Do inoculation treatments create resistance only to the
particular attack arguments that are refuted, or does the resistance
generalize to other attack arguments? Describe supportive treatments
for inducing resistance to persuasion. Which treatment, supportive or
inoculation, is more effective in creating resistance to persuasion?
Describe one possible explanation for the resistance-creating effects
of inoculation treatments.
4. Can warning a person of an impending counterattitudinal message
create resistance to persuasion? Distinguish two kinds of warnings.
Explain the mechanism by which warning confers resistance to
persuasion. Identify factors that influence the effectiveness of
warnings. How might the effectiveness of warnings be influenced by
the presence of distraction or by the degree of personal relevance of
the topic to the receiver?
5. What is refusal skills training? How is refusal skills training meant to
create resistance to persuasion? Explain how refusal skills training is
different from inoculation and warning as means of creating
resistance to persuasion. Is it possible to teach refusal skills
effectively? What are the most important elements in programs aimed
at teaching refusal skills? What effect do refusal skills training
programs have on substance use/misuse?
6. Describe how persuasive messages might evoke defensive avoidance
reactions from recipients. How are such reactions different from
reactance? Describe one way of minimizing the arousal of defensive
avoidance. What is self-affirmation? Can self-affirmation treatments
enhance acceptance of threatening messages?
Notes
1. As discussed in Chapter 11 (concerning message factors), a number of
individual-difference variables (such as “consideration of future
consequences”) appear to be proxies for value differences (for a general
treatment, see O’Keefe, 2013a).
.418 (based on 11 cases; 95% confidence interval limits of .278 and .541)
and .236 (based on 8 cases; 95% confidence interval limits of .148 and
.321) for negative and positive moods, respectively. (But see also Chapter
8, note 4.)
9. One matter of some delicacy that is not treated here concerns the
definition of resistance to persuasion, which poses more difficulties than
one might initially suppose; a useful (if incomplete) discussion of this
topic has been provided by Pryor and Steinfatt (1978, pp. 220–221).
arguments and refutes opposing arguments. An inoculation treatment thus
functionally consists of the refutational portion of a refutational two-sided
message.
11. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant advantage (in resistance creation) for
inoculation treatments over no-treatment controls, with a mean d
(standardized mean difference) of .43 across 41 cases (an effect that
corresponds to an r of .21). A random-effects analysis (using the methods
of Borenstein & Rothstein, 2005) of the unadjusted effect sizes (converting
the reported ds to rs for the analysis) also yields a significant difference:
mean r = .200 (95% CI limits of .160 and .238).
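The d-to-r correspondences reported in this note (and in notes 13 and 14 below) follow the standard conversion r = d / √(d² + 4), which assumes equal group sizes. A minimal sketch (the check value comes from the figures quoted above; the formula, not the original analysis software, is what is shown):

```python
import math

def d_to_r(d: float) -> float:
    """Convert a standardized mean difference (Cohen's d) to a
    correlation r, using the standard formula for equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r: float) -> float:
    """Inverse conversion, from r back to d."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Banas and Rains's (2010) mean d of .43 corresponds to an r of
# about .21, matching the value reported in this note.
print(round(d_to_r(0.43), 2))  # prints 0.21
```

The same conversion reproduces the other pairs quoted in these notes (d = .34 ↔ r ≈ .17; d = .22 ↔ r ≈ .11).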
13. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant difference (in resistance creation)
between supportive treatments and no-treatment controls, with a mean d
(standardized mean difference) of .34 across 10 cases (an effect that
corresponds to an r of .17). But a random-effects analysis (using the
methods of Borenstein & Rothstein, 2005) of the unadjusted effect sizes
(converting the reported ds to rs for the analysis) yields a nonsignificant (p
= .052) difference: mean r = .130 (95% CI limits of –.001 and .256). Given
the 95% confidence interval around that random-effects mean, however,
smart money will surely bet that the population effect is positive.
14. Banas and Rains’s (2010) fixed-effect analysis of adjusted effect sizes
yielded a statistically significant difference (in resistance creation)
between inoculation treatments and supportive treatments, with a mean d
(standardized mean difference) of .22 across 19 cases (an effect that
corresponds to an r of .11). A random-effects analysis (using the methods
of Borenstein & Rothstein, 2005) of the unadjusted effect sizes (converting
the reported ds to rs for the analysis) also yields a significant difference:
mean r = .099 (95% CI limits of .045 and .153).
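The random-effects analyses cited in notes 11, 13, and 14 can be illustrated in outline. This is a simplified sketch, not a reproduction of Borenstein and Rothstein's (2005) software or of any reported result: it pools study-level correlations on the Fisher-z scale with a DerSimonian-Laird estimate of between-study variance, and the rs and ns below are invented for illustration.

```python
import math

def random_effects_mean_r(rs, ns):
    """Random-effects mean of correlations, pooled on the Fisher-z
    scale with a DerSimonian-Laird between-study variance estimate.
    Assumes each study's sample size n > 3."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z
    vs = [1.0 / (n - 3) for n in ns]      # sampling variance of z
    ws = [1.0 / v for v in vs]            # fixed-effect weights
    z_fe = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_fe) ** 2 for w, z in zip(ws, zs))  # heterogeneity Q
    c = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    ws_re = [1.0 / (v + tau2) for v in vs]    # random-effects weights
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)                    # back-transform to r

# Hypothetical effect sizes from five studies (for illustration only):
rs = [0.12, 0.18, 0.25, 0.09, 0.21]
ns = [120, 80, 60, 150, 90]
print(round(random_effects_mean_r(rs, ns), 3))
```

The pooled estimate necessarily falls between the smallest and largest study-level r; when between-study variance is nonzero, the random-effects weights are more nearly equal than the fixed-effect weights, which is why the two analyses can differ (as in note 13).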
a refutation of beliefs that the audience might hold); for some discussion,
see M. Wood (2007).
17. This reasoning led to the expectation that “cultural truisms” (beliefs
that a person rarely, if ever, hears attacked, such as “it’s a good idea to
brush after every meal if possible”) would be especially vulnerable to
attack, precisely because people were unpracticed at (and had no motivation
to rehearse) defending those beliefs (McGuire, 1964). This reasoning also
suggests that resistance-to-persuasion processes might differ between
cultural truisms and more controversial beliefs. Specifically, (a) supportive
treatments should differentially induce resistance in these two
circumstances (with supportive treatments inducing more resistance on
more controversial topics than on truisms), (b) refutational treatments
should differentially induce resistance in these two circumstances (with
inoculation producing greater resistance for truisms than for more
controversial topics), (c) for truisms, refutational treatments will confer
more resistance than supportive treatments, and (d) the difference in
resistance-induction between the two kinds of treatments will be smaller
for controversial topics than for truisms (such that it might even turn out
that for controversial topics, refutational treatments and supportive
treatments might not differ in resistance induction). However, the reports
of much early inoculation research concerning truisms (e.g., McGuire &
Papageorgis, 1961) do not contain sufficient statistical information to
permit such questions to be addressed meta-analytically (Banas & Rains,
2010, p. 304). This lack is especially keenly felt because the contrast
between truisms and controversial topics affords a natural basis for
examination of the role putatively played by awareness of opposing views
as a stimulus for arousing defenses.
views—which would imply that inoculation ought to have no special powers
compared with, say, supportive treatments. And yet inoculation is
demonstrably more effective than supportive treatments for conferring
resistance on ordinary (nontruism) beliefs (Banas & Rains, 2010).
20. One might think that refutational inoculation treatments would create
resistance by encouraging counterarguing (in response to subsequent
attack messages), but what little evidence exists on this matter appears not
to be encouraging (see, e.g., Benoit, 1991; Pfau et al., 1997, 2000). This is
especially puzzling given that (a) warnings of impending counterattitudinal
messages do stimulate counterarguing (as discussed shortly) and (b) such
warnings may be responsible for the resistance-creating effects of
inoculation treatments (as discussed in note 19 above).
relevant to receivers, warnings sometimes seem to initially produce
opinion change toward the to-be-advocated position (e.g., J. Cooper &
Jones, 1970), but this change is apparently an anticipatory strategic shift
meant to minimize the threat to self of having to change later in response
to the message (and hence this effect evaporates when the expectation of
the impending message is canceled); for discussion, see W. Wood and
Quinn (2003). As another example of complexity, at least some prosocial
solicitations appear to be made more persuasive if preceded by a warning
(Kennedy, 1982); this, however, might reflect processes engaged when
warning of an impending proattitudinal communication (e.g., such a
warning, instead of eliciting the counterarguing engendered by warnings of
counterattitudinal messages, might encourage supportive argumentation).
After all, as the elaboration likelihood model (Chapter 8) suggests, even
when there is little issue-relevant thinking (little elaboration), persuasion
can still come about through the receiver’s use of heuristics. But in the
case of avoidance motivation, recipients don’t want to think about the
issue at all (and so don’t even use the cognitive shortcut of a heuristic).
26. A word of caution: Stapel and van der Linde’s (2011) report on self-
affirmation mechanisms was based on falsified data; see
https://www.commissielevelt.nl/.
References
Aarts, H., Paulussen, T., & Schaalma, H. (1997). Physical exercise habit:
On the conceptualization and formation of habitual health behaviours.
Health Education Research, 12, 363–374.
Abelson, R. P. (1986). Beliefs are like possessions. Journal for the Theory
of Social Behavior, 16, 223–250.
functional perspective. In A. R. Pratkanis, S. J. Breckler, & A. G.
Greenwald (Eds.), Attitude structure and function (pp. 361–381).
Hillsdale, NJ: Lawrence Erlbaum.
Adams, J., & White, M. (2005). Why don’t stage-based activity promotion
interventions work? Health Education Research, 20, 237–243.
Adaval, R., & Wyer, R. S., Jr. (1998). The role of narratives in consumer
information processing. Journal of Consumer Psychology, 7, 207–245.
Adriaanse, M. A., Vinkers, C. D. W., de Ridder, D. T. D., Hox, J. J., & De
Wit, J. B. F. (2011). Do implementation intentions help to eat a healthy
diet? A systematic review and meta-analysis of the empirical evidence.
Appetite, 56, 183–193.
Agarwal, N., Menon, G., & Aaker, J. L. (2007). Getting emotional about
health. Journal of Marketing Research, 44, 100–113.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior
and Human Decision Processes, 50, 179–211.
Ajzen, I., Albarracín, D., & Hornik, R. (Eds.). (2007). Prediction and
change of health behavior: Applying the reasoned action approach.
Mahwah, NJ: Lawrence Erlbaum.
Ajzen, I., & Cote, N. G. (2008). Attitudes and the prediction of behavior.
In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude change (pp.
289–311). New York: Psychology Press.
Ajzen, I., Czasch, C., & Flood, M. G. (2009). From intentions to behavior:
Implementation intention, commitment, and conscientiousness. Journal
of Applied Social Psychology, 39, 1356–1372.
Attitudes, intentions, and perceived behavioral control. Journal of
Experimental Social Psychology, 22, 453–474.
Ajzen, I., Nichols, A. J., III, & Driver, B. L. (1995). Identifying salient
beliefs about leisure activities: Frequency of elicitation versus response
latency. Journal of Applied Social Psychology, 25, 1391–1410.
Ajzen, I., & Sexton, J. (1999). Depth of processing, belief congruence, and
attitude-behavior correspondence. In S. Chaiken & Y. Trope (Eds.),
Dual-process models in social psychology (pp. 117–138). New York:
Guilford.
Akhtar, O., Paunesku, D., & Tormala, Z. L. (2013). Weak > strong: The
ironic effect of argument strength on supportive advocacy. Personality
and Social Psychology Bulletin, 39, 1214–1226.
Akl, E. A., Oxman, A. D., Herrin, J., Vist, G. E., Terrenato, I., Sperati, F.,
… Schünemann, H. (2011). Framing of health information messages.
Cochrane Database of Systematic Reviews, 2011(12), CD006777.
Albarracín, D., & Wyer, R. S., Jr. (2001). Elaborative and nonelaborative
processing of a behavior-related communication. Personality and Social
Psychology Bulletin, 27, 691–705.
condom advertising: A research note. Health Marketing Quarterly,
12(4), 25–38.
Alemi, F., Alemagno, S. A., Goldhagen, J., Ash, L., Finkelstein, B., Lavin,
A., … Ghadiri, A. (1996). Computer reminders improve on-time
immunization rates. Medical Care, 34, OS45–OS51.
Allcott, H., & Rogers, T. (2012). The short-run and long-run effects of
behavioral interventions: Experimental evidence from energy
conservation. NBER Working Paper No. w18492. Retrieved from
SSRN: http://ssrn.com/abstract=2167595.
Allen, M., Adamski, L., Bates, M., Bernhagen, M., Callendar, A., Casey,
M., … Zirbel, C. (2002). Effect of timing of communicator
identification and level of source credibility on attitude. Communication
Research Reports, 19, 46–55.
Allen, M. W., & Ng, S. H. (2003). Human values, utilitarian benefits and
identification: The case of meat. European Journal of Social
Psychology, 33, 37–56.
Alós-Ferrer, C., Granić, Đ.-G., Shi, F., & Wagner, A. K. (2012). Choices
and preferences: Evidence from implicit choices and response times.
Journal of Experimental Social Psychology, 48, 1336–1342.
Amass, L., Bickel, W. K., Higgins, S. T., Budney, A. J., & Foerg, F. E.
(1993). The taking of free condoms in a drug abuse treatment clinic:
The effects of location and posters. American Journal of Public Health,
83, 1466–1468.
Amos, C., Holmes, G., & Strutton, D. (2008). Exploring the relationship
between celebrity endorser effects and advertising effectiveness: A
quantitative synthesis of effect size. International Journal of
Advertising, 27, 209–234.
Andersen, R. E., Franckowiak, S. C., Snyder, J., Bartlett, S. J., & Fontaine,
K. R. (1998). Can inexpensive signs encourage the use of stairs? Results
from a community intervention. Annals of Internal Medicine, 129,
363–369.
Anderson, N. H. (1971). Integration theory and attitude change.
Psychological Review, 78, 171–206.
Andreoli, V., & Worchel, S. (1978). Effects of media, communicator, and
message position on attitude change. Public Opinion Quarterly, 42,
59–70.
Andrews, K. R., Carpenter, C. J., Shaw, A. S., & Boster, F. J. (2008). The
legitimization of paltry favors effect: A review and meta-analysis.
Communication Reports, 21, 59–69.
Appel, M., & Richter, T. (2010). Transportation and need for affect in
narrative persuasion: A mediated moderation model. Media Psychology,
13, 101–135.
Areni, C. S., & Lutz, R. J. (1988). The role of argument quality in the
elaboration likelihood model. Advances in Consumer Research, 15,
197–203.
Armitage, C. J. (2009). Is there utility in the transtheoretical model?
British Journal of Health Psychology, 14, 195–210.
Armitage, C. J., Harris, P. R., & Arden, M. A. (2011). Evidence that self-
affirmation reduces alcohol consumption: Randomized exploratory trial
with a new, brief means of self-affirming. Health Psychology, 30,
633–641.
Armitage, C. J., & Talibudeen, L. (2010). Test of a brief theory of planned
behaviour-based intervention to promote adolescent safe sex intentions.
British Journal of Psychology, 101, 155–172.
Psychology, 66, 584–588.
Aronson, E., Fried, C., & Stone, J. (1991). Overcoming denial and
increasing the intention to use condoms through the induction of
hypocrisy. American Journal of Public Health, 81, 1636–1638.
Ashford, S., Edmunds, J., & French, D. P. (2010). What is the best way to
change self-efficacy to promote lifestyle and recreational physical
activity? A systematic review with meta-analysis. British Journal of
Health Psychology, 15, 265–288.
Astrom, A. N., & Rise, J. (2001). Young adults’ intention to eat healthy
food: Extending the theory of planned behavior. Psychology and Health,
16, 223–237.
Atkins, A. L., Deaux, K. K., & Bieri, J. (1967). Latitude of acceptance and
attitude change: Empirical evidence for a reformulation. Journal of
Personality and Social Psychology, 6, 47–54.
Audi, R. (1972). On the conception and measurement of attitudes in
contemporary Anglo-American psychology. Journal for the Theory of
Social Behavior, 2, 179–203.
Austin, J., Alvero, A. M., & Olson, R. (1998). Prompting patron safety belt
use at a restaurant. Journal of Applied Behavior Analysis, 31, 655–657.
Averbeck, J. M., Jones, A., & Robertson, K. (2011). Prior knowledge and
health messages: An examination of affect as heuristics and information
as systematic processing for fear appeals. Southern Communication
Journal, 76, 35–54.
Bagozzi, R. P. (1985). Expectancy-value attitude models: An analysis of
critical theoretical issues. International Journal of Research in
Marketing, 2, 43–60.
Bagozzi, R. P., Baumgartner, H., & Yi, Y. (1992). State versus action
orientation and the theory of reasoned action: An application to coupon
usage. Journal of Consumer Research, 18, 505–518.
Bagozzi, R. P., Lee, K. H., & Van Loo, M. F. (2001). Decisions to donate
bone marrow: The role of attitudes and subjective norms across cultures.
Psychology and Health, 16, 29–56.
Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game
called psychological science. Perspectives on Psychological Science, 7,
543–554.
Banaji, M. R., & Heiphetz, L. (2010). Attitudes. In S. T. Fiske, D. T.
Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed.,
Vol. 1, pp. 353–393). Hoboken, NJ: Wiley.
Baron, R. S., Baron, P. H., & Miller, N. (1973). The relation between
distraction and persuasion. Psychological Bulletin, 80, 310–323.
Basil, D. Z., & Herr, P. M. (2006). Attitudinal balance and cause-related
marketing: An empirical application of balance theory. Journal of
Consumer Psychology, 16, 391–403.
Basil, M., & Witte, K. (2012). Health risk message design using the
extended parallel process model. In H. Cho (Ed.), Health
communication message design: Theory and practice (pp. 41–58). Los
Angeles: Sage.
Batra, R., & Homer, P. M. (2004). The situational impact of brand image
beliefs. Journal of Consumer Psychology, 14, 318–330.
Beaman, A. L., Cole, C. M., Preston, M., Klentz, B., & Steblay, N. M.
(1983). Fifteen years of foot-in-the-door research: A meta-analysis.
Personality and Social Psychology Bulletin, 9, 181–196.
Beatty, M. J., & Behnke, R. R. (1980). Teacher credibility as a function of
verbal content and paralinguistic cues. Communication Quarterly, 28(1),
55–59.
Journal of Consumer Research, 2, 110–117.
Berscheid, E., & Walster, E. (1974). Physical attractiveness. In L.
Berkowitz (Ed.), Advances in experimental social psychology (Vol. 7,
pp. 157–215). New York: Academic Press.
Betsch, T., Kaufmann, M., Lindow, F., Plessner, H., & Hoffmann, K.
(2006). Different principles of information integration in implicit and
explicit attitude formation. European Journal of Social Psychology, 36,
887–905.
Biglan, A., Glasgow, R., Ary, D., Thompson, R., Severson, H.,
Lichtenstein, E., … Gallison, C. (1987). How generalizable are the
effects of smoking prevention programs? Refusal skills training and
parent messages in a teacher-administered program. Journal of
Behavioral Medicine, 10, 613–628.
Birkimer, J. C., Johnston, P. L., & Berry, M. M. (1993). Guilt and help
from friends: Variables related to healthy behavior. Journal of Social
Psychology, 133, 683–692.
Biswas, D., Biswas, A., & Das, N. (2006). The differential effects of
celebrity and expert endorsements on consumer risk perceptions: The
role of consumer knowledge, perceived congruency, and product
technology orientation. Journal of Advertising, 35(2), 17–31.
Blake, H., Lee, S., Stanton, T., & Gorely, T. (2008). Workplace
intervention to promote stair-use in an NHS setting. International
Journal of Workplace Health Management, 1, 162–175.
Bless, H., Bohner, G., Schwarz, N., & Strack, F. (1990). Mood and
persuasion: A cognitive response analysis. Personality and Social
Psychology Bulletin, 16, 331–345.
Bless, H., Mackie, D. M., & Schwarz, N. (1992). Mood effects on attitude
judgments: Independent effects of mood before and after message
elaboration. Journal of Personality and Social Psychology, 63, 585–595.
Bodur, H. O., Brinberg, D., & Coupey, E. (2000). Belief, affect, and
attitude: Alternative models of the determinants of attitude. Journal of
Consumer Psychology, 9, 17–28.
Boen, F., Maurissen, K., & Opdenacker, J. (2010). A simple health sign
increases stair use in a shopping mall and two train stations in Flanders,
Belgium. Health Promotion International, 25, 183–191.
Bohner, G., & Dickel, N. (2011). Attitudes and attitude change. Annual
Review of Psychology, 62, 391–417.
Bohner, G., Ruder, M., & Erb, H.-P. (2002). When expertise backfires:
Contrast and assimilation effects in persuasion. British Journal of Social
Psychology, 41, 495–519.
Bolsen, T., Druckman, J. N., & Cook, F. L. (2014). How frames can
undermine support for scientific adaptations: Politicization and the
status-quo bias. Public Opinion Quarterly, 78, 1–26.
Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle,
J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social
influence and political mobilization. Nature, 489, 295–298.
Booth, A. R., Norman, P., Harris, P. R., & Goyder, E. (2014). Using the
theory of planned behaviour and self-identity to explain chlamydia
testing intentions in young people living in deprived areas. British
Journal of Health Psychology, 19, 101–112.
Booth-Butterfield, S., & Reger, B. (2004). The message changes belief and
the rest is theory: The “1% or less” milk campaign and reasoned action.
Preventive Medicine, 39, 581–588.
Botta, R. A., Dunker, K., Fenson-Hood, K., Maltarich, S., & McDonald, L.
(2008). Using a relevant threat, EPPM and interpersonal communication
to change hand-washing behaviours on campus. Journal of
Communication in Healthcare, 1, 373–381.
source-credibility scales. Speech Monographs, 34, 185–186.
messages: The moderating effect of delivery mode and personal
involvement. Communication Research, 35, 666–694.
Bridle, C., Riemsma, R. P., Pattenden, J., Sowden, A. J., Mather, L., Watt,
I. S., & Walker, A. (2005). Systematic review of the effectiveness of
health behavior interventions based on the transtheoretical model.
Psychology and Health, 20, 283–301.
In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), Handbook of
attitudes (pp. 575–615). Mahwah, NJ: Lawrence Erlbaum.
Briñol, P., & Petty, R. E. (2009a). Persuasion: Insights from the self-
validation hypothesis. In M. P. Zanna (Ed.), Advances in experimental
social psychology (Vol. 41, pp. 69–118). New York: Academic Press.
Briñol, P., Rucker, D. D., Tormala, Z. L., & Petty, R. E. (2004). Individual
differences in resistance to persuasion: The role of beliefs and meta-
beliefs. In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion
(pp. 83–104). Mahwah, NJ: Lawrence Erlbaum.
Brouwers, M. C., & Sorrentino, R. M. (1993). Uncertainty orientation and
protection motivation theory: The role of individual differences in
health compliance. Journal of Personality and Social Psychology, 65,
102–112.
Brown, S., Birch, D., Thyagaraj, S., Teufel, J., & Phillips, C. (2007).
Effects of a single-lesson tobacco prevention curriculum on knowledge,
skill identification and smoking intention. Journal of Drug Education,
37, 55–69.
Brown, S. P., Cron, W. L., & Slocum, J. W., Jr. (1997). Effects of goal-
directed emotions on salesperson volitions, behavior, and performance:
A longitudinal study. Journal of Marketing, 61(1), 39–50.
Bryant, J., Brown, D., Silberberg, A. R., & Elliott, S. M. (1981). Effects of
humorous illustrations in college textbooks. Human Communication
Research, 8, 43–57.
Applied Social Psychology, 16, 663–685.
Budd, R. J., North, D., & Spencer, C. (1984). Understanding seat-belt use:
A test of Bentler and Speckart’s extension of the theory of reasoned
action. European Journal of Social Psychology, 14, 69–78.
Burger, J. M., Bell, H., Harvey, K., Johnson, J., Stewart, C., Dorian, K., &
Swedroe, M. (2010). Nutritious or delicious? The effect of descriptive
norm information on food choice. Journal of Social and Clinical
Psychology, 29, 228–242.
Burger, J. M., & Guadagno, R. E. (2003). Self-concept clarity and the foot-in-the-door procedure. Basic and Applied Social Psychology, 25, 79–86.
Burger, J. M., Messian, N., Patel, S., del Prado, A., & Anderson, C.
(2004). What a coincidence! The effects of incidental similarity on
compliance. Personality and Social Psychology Bulletin, 30, 35–43.
Burger, J. M., Reed, M., DeCesare, K., Rauner, S., & Rozolis, J. (1999).
The effects of initial request size on compliance: More about the that’s-
not-all technique. Basic and Applied Social Psychology, 21, 243–249.
Burgoon, M., Alvaro, E. M., Broneck, K., Miller, C., Grandpre, J. R., Hall,
J. R., & Frank, C. A. (2002). Using interactive media tools to test
substance abuse prevention messages. In W. D. Crano & M. Burgoon
(Eds.), Mass media and drug prevention: Classic and contemporary
theories and research (pp. 67–87). Mahwah, NJ: Lawrence Erlbaum.
Burgoon, M., Hall, J., & Pfau, M. (1991). A test of the “messages-as-
fixed-effect fallacy” argument: Empirical and theoretical implications of
design choices. Communication Quarterly, 39, 18–34.
Burkley, E. (2008). The role of self-control in resistance to persuasion.
Personality and Social Psychology Bulletin, 34, 419–432.
Byrne, S., Guillory, J. E., Mathios, A. D., Avery, R. J., & Hart, P. S.
(2012). The unintended consequences of disclosure: Effect of
manipulating sponsor identification on the perceived credibility and
effectiveness of smoking cessation advertisements. Journal of Health
Communication, 17, 1119–1137.
R. E. Petty, T. M. Ostrom, & T. C. Brock (Eds.), Cognitive responses in
persuasion (pp. 31–54). Hillsdale, NJ: Lawrence Erlbaum.
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of
Personality and Social Psychology, 42, 116–131.
Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central
and peripheral routes to persuasion: An individual difference
perspective. Journal of Personality and Social Psychology, 51,
1032–1043.
Cacioppo, J. T., von Hippel, W., & Ernst, J. M. (1997). Mapping cognitive
structures and processes through verbal content: The thought-listing
technique. Journal of Consulting and Clinical Psychology, 65, 928–940.
2010(11), CD004492.
Calsyn, D. A., Hatch-Maillette, M. A., Doyle, S. R., Cousins, S., Chen, T.,
& Godinez, M. (2010). Teaching condom use skills: Practice is superior
to observation. Substance Abuse, 31, 231–239.
Cameron, K. A., & Campo, S. (2006). Stepping back from social norms
campaigns: Comparing normative influences to other predictors of
health behaviors. Health Communication, 20, 277–288.
Cann, A., Sherman, S. J., & Elkes, R. (1975). Effects of initial request size
and timing of a second request on compliance: The foot in the door and
the door in the face. Journal of Personality and Social Psychology, 32,
774–782.
Cappella, J. N., Yzer, M., & Fishbein, M. (2003). Using beliefs about
positive and negative consequences as the basis for designing message
interventions for lowering risky behavior. In D. Romer (Ed.), Reducing
adolescent risk (pp. 210–219). Thousand Oaks, CA: Sage.
Carcioppolo, N., Jensen, J. D., Wilson, S. R., Collins, W. B., Carrion, M.,
& Linnemeier, G. (2013). Examining HPV threat-to-efficacy ratios in
the extended parallel process model. Health Communication, 28, 20–28.
Communication, 76, 552–569.
Celuch, K., & Slama, M. (1995). “Getting along” and “getting ahead” as
motives for self-presentation: Their impact on advertising effectiveness.
Journal of Applied Social Psychology, 25, 1700–1713.
framework for message framing. Journal of Experimental Social
Psychology, 49, 238–249.
Cesario, J., Grant, H., & Higgins, E. T. (2004). Regulatory fit and
persuasion: Transfer from “feeling right.” Journal of Personality and
Social Psychology, 86, 388–404.
Chaiken, S., Duckworth, K. L., & Darke, P. (1999). When parsimony fails
… Psychological Inquiry, 10, 118–123.
Chaiken, S., & Stangor, C. (1987). Attitudes and attitude change. Annual
Review of Psychology, 38, 575–630.
Chandran, S., & Menon, G. (2004). When a day means more than a year:
Effects of temporal framing on judgments of health risk. Journal of
Consumer Research, 31, 375–389.
implementation intention interventions in relation to young adults’
intake of fruit and vegetables. Psychology and Health, 24, 317–332.
Chebat, J.-C., Laroche, M., Baddoura, D., & Filiatrault, P. (1992). Effects
of source likability on attitude change through message repetition.
Advances in Consumer Research, 19, 353–358.
Chen, H. C., Reardon, R., Rea, C., & Moore, D. J. (1992). Forewarning of
content and involvement: Consequences for persuasion and resistance to
persuasion. Journal of Experimental Social Psychology, 28, 523–541.
Experimental Social Psychology, 45, 425–427.
Chen, M. K., & Risen, J. L. (2010). How choice affects and reflects
preferences: Revisiting the free-choice paradigm. Journal of Personality
and Social Psychology, 99, 573–594.
Cheung, S. F., Chan, D. K.-S., & Wong, Z. S.-Y. (1999). Reexamining the
theory of planned behavior in understanding wastepaper recycling.
Environment and Behavior, 31, 587–612.
Cho, H., & Witte, K. (2005). Managing fear in public health campaigns: A
theory-based formative evaluation process. Health Promotion Practice,
6, 482–490.
Chu, G. C. (1966). Fear arousal, efficacy, and imminency. Journal of
Personality and Social Psychology, 4, 517–524.
Chung, S., Fink, E. L., & Kaplowitz, S. A. (2008). The comparative statics
and dynamics of beliefs: The effect of message discrepancy and source
credibility. Communication Monographs, 75, 158–189.
Chung, S., Fink, E. L., Waks, L., Meffert, M. F., & Xie, X. (2012).
Sequential information integration and belief trajectories: An
experimental study using candidate evaluations. Communication
Monographs, 79, 160–180.
Cialdini, R. B., Cacioppo, J. T., Bassett, R., & Miller, J. A. (1978). Low-
ball procedure for producing compliance: Commitment then cost.
Journal of Personality and Social Psychology, 36, 463–476.
Cialdini, R. B., Demaine, L. J., Sagarin, B. J., Barrett, D. W., Rhoads, K.,
& Winter, P. L. (2006). Managing social norms for persuasive impact.
Social Influence, 1, 3–15.
Cialdini, R. B., Vincent, J. E., Lewis, S. K., Catalan, J., Wheeler, D., &
Darby, B. L. (1975). Reciprocal concessions procedure for inducing
compliance: The door-in-the-face technique. Journal of Personality and
Social Psychology, 31, 206–215.
Clapp, J. D., Lange, J. E., Russell, C., Shillington, A., & Voas, R. B.
(2003). A failed norms social marketing campaign. Journal of Studies
on Alcohol, 64, 409–414.
Clark, R. A., Stewart, R., & Marston, A. (1972). Scale values for highest
and lowest levels of credibility. Central States Speech Journal, 23,
193–196.
Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A.,
Haugen, J., & Miene, P. (1998). Understanding and assessing the
motivations of volunteers: A functional approach. Journal of Personality
and Social Psychology, 74, 1516–1530.
Clary, E. G., Snyder, M., Ridge, R. D., Miene, P. K., & Haugen, J. A.
(1994). Matching messages to motives in persuasion: A functional
approach to promoting volunteerism. Journal of Applied Social
Psychology, 24, 1129–1149.
Cohen, G. L., Aronson, J., & Steele, C. M. (2000). When beliefs yield to
evidence: Reducing biased evaluation by affirming the self. Personality
and Social Psychology Bulletin, 26, 1151–1164.
New York: Psychology Press.
Conner, M., Godin, G., Norman, P., & Sheeran, P. (2011). Using the
question-behavior effect to promote disease prevention behaviors: Two
randomized controlled trials. Health Psychology, 30, 300–309.
Conner, M., Graham, S., & Moore, B. (1999). Alcohol and intentions to
use condoms: Applying the theory of planned behaviour. Psychology
and Health, 14, 795–812.
Conner, M., Rhodes, R. E., Morris, B., McEachan, R., & Lawton, R.
(2011). Changing exercise through targeting affective or cognitive
attitudes. Psychology & Health, 26, 133–149.
Conner, M., Sheeran, P., Norman, P., & Armitage, C. J. (2000). Temporal
stability as a moderator of relationships in the theory of planned
behaviour. British Journal of Social Psychology, 39, 469–493.
Conner, M., & Sparks, P. (1996). The theory of planned behaviour and
health behaviours. In M. Conner & P. Norman (Eds.), Predicting health
behaviour: Research and practice with social cognition models (pp.
121–162). Buckingham, UK: Open University Press.
Conner, M., & Sparks, P. (2005). Theory of planned behaviour and health
behaviour. In M. Conner & P. Norman (Eds.), Predicting health
behaviour: Research and practice with social cognition models (2nd ed.,
pp. 170–222). Maidenhead, UK: Open University Press.
Conner, M., Sparks, P., Povey, R., James, R., Shepherd, R., & Armitage,
C. J. (2002). Moderator effects of attitudinal ambivalence on attitude-
behaviour relationships. European Journal of Social Psychology, 32,
705–718.
Converse, J., Jr., & Cooper, J. (1979). The importance of decisions and
free-choice attitude change: A curvilinear finding. Journal of
Experimental Social Psychology, 15, 48–61.
Cook, A. J., Kerr, G. N., & Moore, K. (2002). Attitudes and intentions
towards purchasing GM food. Journal of Economic Psychology, 23,
557–572.
Cooke, R., & French, D. P. (2008). How well do the theory of reasoned
action and theory of planned behaviour predict intentions and
attendance at screening programmes? A meta-analysis. Psychology &
Health, 23, 745–765.
Cooper, J. (1998). Unlearning cognitive dissonance: Toward an
understanding of the development of dissonance. Journal of
Experimental Social Psychology, 34, 562–575.
Courneya, K. S. (1994). Predicting repeated behavior from intention: The
issue of scale correspondence. Journal of Applied Social Psychology,
24, 580–594.
Cox, B. S., Cox, A. B., & Cox, D. J. (2000). Motivating signage prompts
safety belt use among drivers exiting senior communities. Journal of
Applied Behavior Analysis, 33, 635–638.
Craciun, C., Schüz, N., Lippke, S., & Schwarzer, R. (2012). A mediator
model of sunscreen use: A longitudinal analysis of social-cognitive
predictors and mediators. International Journal of Behavioral Medicine,
19, 65–72.
Crites, S. L., Jr., Fabrigar, L. R., & Petty, R. E. (1994). Measuring the
affective and cognitive properties of attitudes: Conceptual and
methodological issues. Personality and Social Psychology Bulletin, 20,
619–634.
Crocker, J., Niiya, Y., & Mischkowski, D. (2008). Why does writing about
important values reduce defensiveness? Self-affirmation and the role of
positive other-directed feelings. Psychological Science, 19, 740–747.
Croy, G., Gerrans, P., & Speelman, C. (2010). Injunctive social norms
primacy over descriptive social norms in retirement savings decisions.
International Journal of Aging and Human Development, 71, 259–282.
Cunningham, W. A., Packer, D. J., Kesek, A., & Van Bavel, J. J. (2009).
Implicit measurement of attitudes: A physiological approach. In R. E.
Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes: Insights from the new
implicit measures (pp. 485–512). New York: Psychology Press.
Dahl, J., Enemo, I., Drevland, G. C. B., Wessel, E., Eilertsen, D. E., &
Magnussen, S. (2007). Displayed emotions and witness credibility: A
comparison of judgments by individuals and mock juries. Applied
Cognitive Psychology, 21, 1145–1156.
Dal Cin, S., Zanna, M. P., & Fong, G. T. (2004). Narrative persuasion and
overcoming resistance. In E. S. Knowles & J. A. Linn (Eds.), Resistance
and persuasion (pp. 175–191). Mahwah, NJ: Lawrence Erlbaum.
Dale, A., & Strauss, A. (2009). Don’t forget to vote: Text message
reminders as a mobilization tool. American Journal of Political Science,
53, 787–804.
D’Alessio, D., & Allen, M. (2007). The selective exposure hypothesis and
media choice processes. In R. W. Preiss, B. M. Gayle, N. Burrell, M.
Allen, & J. Bryant (Eds.), Mass media effects research: Advances
through meta-analysis (pp. 103–118). Mahwah, NJ: Lawrence Erlbaum.
Dardis, F. E., & Shen, F. (2008). The influence of evidence type and
product involvement on message-framing effects in advertising. Journal
of Consumer Behaviour, 7, 222–238.
Darke, P. R., Chaiken, S., Bohner, G., Einwiller, S., Erb, H.-P., &
Hazlewood, J. D. (1998). Accuracy motivation, consensus information,
and the law of large numbers: Effects on attitude judgment in the
absence of argumentation. Personality and Social Psychology Bulletin,
24, 1205–1215.
Darker, C. D., French, D. P., Eves, F. F., & Sniehotta, F. F. (2010). An
intervention to promote walking amongst the general population based
on an ‘extended’ theory of planned behaviour: A waiting list
randomised controlled trial. Psychology and Health, 25, 71–88.
Darker, C. D., French, D. P., Longdon, S., Morris, K., & Eves, F. F.
(2007). Are beliefs elicited biased by question order? A theory of
planned behaviour belief elicitation study about walking in the UK
general population. British Journal of Health Psychology, 12, 93–110.
Das, E., & Fennis, B. M. (2008). In the mood to face the facts: When a
positive mood promotes systematic processing of self-threatening
information. Motivation and Emotion, 32, 221–230.
Davis, R. E., & Resnicow, K. (2012). The cultural variance framework for
tailoring health messages. In H. Cho (Ed.), Health communication
message design: Theory and practice (pp. 115–135). Los Angeles: Sage.
Dean, M., Arvola, A., Vassallo, M., Lähteenmäki, L., Raats, M. M., Saba,
A., & Shepherd, R. (2006). Comparison of elicitation methods for moral
and affective beliefs in the theory of planned behaviour. Appetite, 47,
244–252.
DeBono, K. G., Leavitt, A., & Backus, J. (2003). Product packaging and
product evaluation: An individual difference approach. Journal of
Applied Social Psychology, 33, 513–521.
Journal of Applied Social Psychology, 20, 1383–1395.
de Bruijn, G. J., Kremers, S. P. J., Singh, A., van den Putte, B., & van
Mechelen, W. (2009). Adult active transportation: Adding habit strength
to the theory of planned behavior. American Journal of Preventive
Medicine, 36, 189–194.
de Bruijn, G. J., Kroeze, W., Oenema, A., & Brug, J. (2008). Saturated fat
consumption and the theory of planned behaviour: Exploring additive
and interactive effects of habit strength. Appetite, 51, 318–323.
de Hoog, N., Stroebe, W., & de Wit, J. (2007). The impact of vulnerability
to and severity of a health risk on processing and acceptance of fear-
arousing communications: A meta-analysis. Review of General
Psychology, 11, 258–285.
de Hoog, N., Stroebe, W., & de Wit, J. B. F. (2008). The processing of
fear-arousing communications: How biased processing leads to
persuasion. Social Influence, 3, 84–113.
Delia, J. G., Crockett, W. H., Press, A. N., & O’Keefe, D. J. (1975). The
dependency of interpersonal evaluations on context-relevant beliefs
about the other. Speech Monographs, 42, 10–19.
Denizeau, M., Gosling, P., & Oberlé, D. (2009). L’effet de l’ordre et du
délai sur l’usage de trois modes de réduction de la dissonance cognitive:
le changement d’attitude, la trivialisation et le déni de responsabilité
[The effects of order and delay on the use of three modes of dissonance
reduction: Attitude change, trivialization and denial of responsibility].
Année Psychologique, 109, 629–654.
De Nooijer, J., van Assema, P., de Vet, E., & Brug, J. (2005). How stable
are stages of change for nutrition behaviors in the Netherlands? Health
Promotion International, 20, 27–32.
Detweiler, J. B., Bedell, B. T., Salovey, P., Pronin, E., & Rothman, A. J.
(1999). Message framing and sunscreen use: Gain-framed messages
motivate beach-goers. Health Psychology, 18, 189–196.
Devine, D. J. (2012). Jury decision making: The state of the science. New
York: New York University Press.
of the American Medical Informatics Association, 15, 311–318.
Dickerson, C. A., Thibodeau, R., Aronson, E., & Miller, D. (1992). Using
cognitive dissonance to encourage water conservation. Journal of
Applied Social Psychology, 22, 841–854.
Dillard, A. J., Fagerlin, A., Dal Cin, S., Zikmund-Fisher, B. J., & Ubel, P.
A. (2010). Narratives that address affective forecasting errors reduce
perceived barriers to colorectal cancer screening. Social Science &
Medicine, 71, 45–52.
Dillard, J. P., & Anderson, J. W. (2004). The role of fear in persuasion.
Psychology and Marketing, 21, 909–926.
Dillard, J. P., & Peck, E. (2001). Persuasion and the structure of affect:
Dual systems and discrete emotions as complementary models. Human
Communication Research, 27, 38–68.
Dillard, J. P., & Seo, K. (2013). Affect and persuasion. In J. P. Dillard &
L. Shen (Eds.), The SAGE handbook of persuasion: Developments in
theory and practice (2nd ed., pp. 150–166). Thousand Oaks, CA: Sage.
Dillard, J. P., & Shen, L. (2005). On the nature of reactance and its role in
persuasive health communication. Communication Monographs, 72,
144–168.
Dingus, T. A., Hunn, B. P., & Wreggit, S. S. (1991). Two reasons for
providing protective equipment as part of hazardous consumer product
packaging. In Proceedings of the Human Factors Society 35th annual
meeting (pp. 1039–1042). Santa Monica, CA: Human Factors Society.
Ditto, P. H., Druley, J. A., Moore, K. A., Danks, J. H., & Smucker, W. D.
(1996). Fates worse than death: The role of valued life activities in
health-state evaluations. Health Psychology, 15, 332–343.
Doll, J., & Ajzen, I. (1992). Accessibility and stability of predictors in the
theory of planned behavior. Journal of Personality and Social
Psychology, 63, 754–765.
Doll, J., Ajzen, I., & Madden, T. J. (1991). Optimale Skalierung und
Urteilsbildung in unterschiedlichen Einstellungsbereichen: Eine
Reanalyse [Optimal scaling and judgment in different attitude domains:
A reanalysis]. Zeitschrift für Sozialpsychologie, 22, 102–111.
Doll, J., & Orth, B. (1993). The Fishbein and Ajzen theory of reasoned
action applied to contraceptive behavior: Model variants and
meaningfulness. Journal of Applied Social Psychology, 23, 395–415.
Donnelly, J. H., Jr., & Ivancevich, J. M. (1970). Post-purchase
reinforcement and back-out behavior. Journal of Marketing Research, 7,
399–400.
Doob, A. N., Carlsmith, J. M., Freedman, J. L., Landauer, T. K., & Tom,
S., Jr. (1969). Effect of initial selling price on subsequent sales. Journal
of Personality and Social Psychology, 11, 345–350.
Duncan, T. E., Duncan, S. C., Beauchamp, N., Wells, J., & Ary, D. V.
(2000). Development and evaluation of an interactive CD-ROM refusal
skills program to prevent youth substance use: “Refuse to Use.” Journal
of Behavioral Medicine, 23, 59–72.
Durantini, M. R., Albarracín, D., Mitchell, A. L., Earl, A. N., & Gillette, J.
C. (2006). Conceptualizing the influence of social agents of behavior
change: A meta-analysis of the effectiveness of HIV-prevention
interventions for different groups. Psychological Bulletin, 132,
212–248.
Dutta-Bergman, M. J. (2003). The linear interaction model of personality
effects in health communication. Health Communication, 15, 101–116.
Dwan, K., Gamble, C., Williamson, P. R., Kirkham, J. J., & the Reporting
Bias Group. (2013). Systematic review of the empirical evidence of
study publication bias and outcome reporting bias: An updated review.
PLoS ONE, 8(7), e66844.
Eagly, A. H., Mladinic, A., & Otto, S. (1994). Cognitive and affective
bases of attitudes toward social groups and social policies. Journal of
Experimental Social Psychology, 30, 113–137.
determinant of attitude change. Journal of Personality and Social
Psychology, 23, 388–397.
Eagly, A. H., Wood, W., & Chaiken, S. (1978). Causal inferences about
communicators and their effect on opinion change. Journal of
Personality and Social Psychology, 36, 424–435.
Earl, A., & Albarracín, D. (2007). Nature, decay, and spiraling of the
effects of fear-inducing arguments and HIV counseling and testing: A
meta-analysis of the short and long-term outcomes of HIV-prevention
interventions. Health Psychology, 26, 496–506.
Eckes, T., & Six, B. (1994). Fakten und Fiktionen in der Einstellungs-
Verhaltens-Forschung: Eine Meta-Analyse [Fact and fiction in attitude-
behavior research: A meta-analysis]. Zeitschrift für Sozialpsychologie,
25, 253–271.
Edwards, S. M., Li, H. R., & Lee, J. H. (2002). Forced exposure and
psychological reactance: Antecedents and consequences of the
perceived intrusiveness of pop-up ads. Journal of Advertising, 31(3),
83–95.
Ehrlich, D., Guttman, I., Schönbach, P., & Mills, J. (1957). Postdecision
exposure to relevant information. Journal of Abnormal and Social
Psychology, 54, 98–102.
Eisend, M. (2002). Dimensions of credibility in marketing communication.
In R. Zwick & T. Ping (Eds.), Asia Pacific advances in consumer
research (Vol. 5, pp. 366–373). Valdosta, GA: Association for
Consumer Research.
and two-component theories of planned behavior. Addictive Behaviors,
37, 92–101.
Elliott, R., Jobber, D., & Sharp, J. (1995). Using the theory of reasoned
action to understand organizational behaviour: The role of belief
salience. British Journal of Social Psychology, 34, 161–172.
Elms, A. C. (Ed.). (1969). Role playing, reward, and attitude change. New
York: Van Nostrand Reinhold.
Ennis, R., & Zanna, M. P. (2000). Attitude function and the automobile. In
G. R. Maio & J. M. Olson (Eds.), Why we evaluate: Functions of
attitudes (pp. 395–415). Mahwah, NJ: Lawrence Erlbaum.
Epton, T., Harris, P. R., Kane, R., van Koningsbruggen, G. M., & Sheeran,
P. (in press). The impact of self-affirmation on health-behavior change:
A meta-analysis. Health Psychology. doi:10.1037/hea0000116.
Erb, H.-P., Pierro, A., Mannetti, L., Spiegel, S., & Kruglanski, A. W.
(2007). Biased processing of persuasive information: On the functional
equivalence of cues and message arguments. European Journal of Social
Psychology, 37, 1057–1075.
Evans, R. I., Rozelle, R. M., Lasater, T. M., Dembroski, T. M., & Allen, B.
P. (1970). Fear arousal, persuasion, and actual versus implied behavioral
change: New perspective utilizing a real-life dental hygiene program.
Journal of Personality and Social Psychology, 16, 220–227.
Everson, E. S., Daley, A. J., & Ussher, M. (2007). Brief report: The theory
of planned behaviour applied to physical activity in young people who
smoke. Journal of Adolescence, 30, 347–351.
Fabrigar, L. R., & Petty, R. E. (1999). The role of the affective and
cognitive bases of attitudes in susceptibility to affectively and
cognitively based persuasion. Personality and Social Psychology
Bulletin, 25, 363–381.
Fabrigar, L. R., Petty, R. E., Smith, S. M., & Crites, S. L., Jr. (2006).
Understanding knowledge effects on attitude-behavior consistency: The
role of relevance, complexity, and amount of knowledge. Journal of
Personality and Social Psychology, 90, 556–577.
Fazio, R. H., & Towles-Schwen, T. (1999). The MODE model of attitude-
behavior processes. In S. Chaiken & Y. Trope (Eds.), Dual-process
models in social psychology (pp. 97–116). New York: Guilford.
Feiler, D. C., Tost, L. P., & Grant, A. M. (2012). Mixed reasons, missed
givings: The costs of blending egoistic and altruistic reasons in donation
requests. Journal of Experimental Social Psychology, 48, 1322–1328.
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories:
Publication bias and psychological science’s aversion to the null.
Perspectives on Psychological Science, 7, 555–561.
Feufel, M. A., Schneider, T. R., & Berkel, H. J. (2010). A field test of the
effects of instruction design on colorectal cancer self-screening
accuracy. Health Education Research, 25, 709–723.
anxiety as factors in opinion change. Journal of Abnormal and Social
Psychology, 54, 369–374.
Fischer, P., Lea, S., Kastenmüller, A., Greitemeyer, T., Fischer, J., & Frey,
D. (2011). The process of selective exposure: Why confirmatory
information search weakens over time. Organizational Behavior and
Human Decision Processes, 114, 37–48.
Fishbein, M. (2008). A reasoned action approach to health promotion.
Medical Decision Making, 28, 834–844.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior.
Reading, MA: Addison-Wesley.
Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The
reasoned action approach. New York: Psychology Press.
Fishbein, M., Cappella, J., Hornik, R., Sayeed, S., Yzer, M., & Ahern, R.
K. (2002). The role of theory in developing effective anti-drug public
service announcements. In W. D. Crano & M. Burgoon (Eds.), Mass
media and drug prevention: Classic and contemporary theories and
research (pp. 89–117). Mahwah, NJ: Lawrence Erlbaum.
Fishbein, M., & Lange, R. (1990). The effects of crossing the midpoint on
belief change: A replication and extension. Personality and Social
Psychology Bulletin, 16, 189–199.
Fishbein, M., & Middlestadt, S. E. (1997). A striking lack of evidence for
nonbelief-based attitude formation and change: A response to five
commentaries. Journal of Consumer Psychology, 6, 107–115.
Fisher, J. D., Fisher, W. A., Misovich, S. J., Kimble, D. L., & Malloy, T.
E. (1996). Changing AIDS-risk behavior: Effects of an intervention
emphasizing AIDS risk reduction information, motivation, and
behavioral skills in a college student population. Health Psychology, 15,
114–123.
Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user
attributes, and information verification behaviors on the perceived
credibility of web-based information. New Media and Society, 9,
319–342.
Flay, B. R., McFall, S., Burton, D., Cook, T. D., & Warnecke, R. B.
(1993). Health behavior changes through television: The roles of de
facto and motivated selection processes. Journal of Health and Social
Behavior, 34, 322–335.
Fleming, M. A., & Petty, R. E. (2000). Identity and persuasion: An
elaboration likelihood approach. In D. J. Terry & M. A. Hogg (Eds.),
Attitudes, behavior, and social context: The role of norms and group
membership (pp. 171–199). Mahwah, NJ: Lawrence Erlbaum.
Fointiat, V. (2004). “I know what I have to do, but …”: When hypocrisy
leads to behavioral change. Social Behavior and Personality, 32,
741–746.
262–266.
French, D. P., & Cooke, R. (2012). Using the theory of planned behaviour
to understand binge drinking: The importance of beliefs for developing
interventions. British Journal of Health Psychology, 17, 1–17.
French, D. P., Sutton, S., Hennings, S. J., Mitchell, J., Wareham, N. J.,
Griffin, S., … Kinmonth, A. L. (2005). The importance of affective
beliefs and attitudes in the theory of planned behavior: Predicting
intention to increase physical activity. Journal of Applied Social
Psychology, 35, 1824–1848.
Frewer, L. J., Howard, C., Hedderley, D., & Shepherd, R. (1996). What
determines trust in information about food-related risks? Underlying
psychological constructs. Risk Analysis, 16, 473–486.
Personality and Social Psychology Bulletin, 22, 179–191.
Fry, J. P., & Neff, R. A. (2009). Periodic prompts and reminders in health
promotion and health behavior interventions: Systematic review. Journal
of Medical Internet Research, 11(2), e16.
Gagné, C., & Godin, G. (2000). The theory of planned behavior: Some
measurement issues concerning belief-based variables. Journal of
Applied Social Psychology, 30, 2173–2193.
Gagné, C., & Godin, G. (2007). Does the easy-difficult item measure
attitude or perceived behavioural control? British Journal of Health
Psychology, 12, 543–557.
Reframing the selective exposure debate. Journal of Communication,
59, 676–699.
Gasco, M., Briñol, P., & Horcajo, J. (2010). Cambio de actitudes hacia la
imagen corporal: El efecto de la elaboración sobre la fuerza de las
actitudes [Attitude change toward body image: The role of elaboration
on attitude strength]. Psicothema, 22, 71–76.
Geers, A. L., Handley, I. M., & McLarney, A. R. (2003). Discerning the
role of optimism in persuasion: The valence-enhancement hypothesis.
Journal of Personality and Social Psychology, 85, 554–565.
Gierl, H., & Huettl, V. (2010). Are scarce products always more
attractive? The interaction of different types of scarcity signals with
products’ suitability for conspicuous consumption. International Journal
of Research in Marketing, 27, 225–235.
Giles, M., & Cairns, E. (1995). Blood donation and Ajzen’s theory of
planned behaviour: An examination of perceived behavioural control.
British Journal of Social Psychology, 34, 173–188.
Giles, M., McClenahan, C., Armour, C., Millar, S., Rae, G., Mallett, J., &
Stewart-Knox, B. (2014). Evaluation of a theory of planned behaviour–
based breastfeeding intervention in Northern Irish schools using a
randomized cluster design. British Journal of Health Psychology, 19,
16–35.
Glik, D., Berkanovic, E., Stone, K., Ibarra, L., Jones, M. C., Rosen, B., …
Richardes, D. (1998). Health education goes Hollywood: Working with
prime-time and daytime entertainment television for immunization
promotion. Journal of Health Communication, 3, 263–284.
Glynn, C. J., Huge, M. E., & Lunney, C. A. (2009). The influence of
perceived social norms on college students’ intention to vote. Political
Communication, 26, 48–64.
Göckeritz, S., Schultz, P. W., Rendon, T., Cialdini, R. B., Goldstein, N. J.,
& Griskevicius, V. (2010). Descriptive normative beliefs and
conservation behavior: The moderating roles of personal involvement
and injunctive normative beliefs. European Journal of Social
Psychology, 40, 514–523.
Godin, G., Gagné, C., & Sheeran, P. (2004). Does perceived behavioural
control mediate the relationship between power beliefs and intention?
British Journal of Health Psychology, 9, 557–568.
Godin, G., & Kok, G. (1996). The theory of planned behavior: A review of
its applications to health-related behaviors. American Journal of Health
Promotion, 11, 87–98.
Godin, G., Sheeran, P., Conner, M., Delage, G., Germain, M., Bélanger-
Gravel, A., & Naccache, H. (2010). Which survey questions change
behavior? Randomized controlled trial of mere measurement
interventions. Health Psychology, 29, 636–644.
Godin, G., Valois, P., & Lepage, L. (1993). The pattern of influence of
perceived behavioral control upon exercising behavior: An application
of Ajzen’s theory of planned behavior. Journal of Behavioral Medicine,
16, 81–102.
Goei, R., Boyson, A. R., Lyon-Callo, S. K., Schott, C., Wasilevich, E., &
Cannarile, S. (2010). An examination of EPPM predictions when threat
is perceived externally: An asthma intervention with school workers.
Health Communication, 25, 333–344.
Goei, R., Lindsey, L. L. M., Boster, F. J., Skalski, P. D., & Bowman, J. M.
(2003). The mediating roles of liking and obligation on the relationship
between favors and compliance. Communication Research, 30,
178–197.
Zanna (Ed.), Advances in experimental social psychology (Vol. 38, pp.
69–120). San Diego: Elsevier Academic Press.
Granberg, D., Kasmer, J., & Nanneman, T. (1988). An empirical
examination of two theories of political perception. Western Political
Quarterly, 41, 29–46.
Grant, N. K., Fabrigar, L. R., & Lim, H. (2010). Exploring the efficacy of
compliments as a tactic for securing compliance. Basic and Applied
Social Psychology, 32, 226–233.
Grasmick, H. G., Bursik, R. J., Jr., & Kinsey, K. A. (1991). Shame and
embarrassment as deterrents to noncompliance with the law: The case of
an antilittering campaign. Environment and Behavior, 23, 233–251.
Green, D. P., Ha, S. E., & Bullock, J. G. (2010). Enough already about
“black box” experiments: Studying mediation is more difficult than
most scholars suppose. Annals of the American Academy of Political
and Social Science, 628, 200–208.
Green, M. C., Garst, J., Brock, T. C., & Chung, S. (2006). Fact versus
fiction labeling: Persuasion parity despite heightened scrutiny of fact.
Media Psychology, 8, 267–285.
Greene, K., Krcmar, M., Rubin, D. L., Walters, L. H., & Hale, J. L. (2002).
Elaboration in processing adolescent health messages: The impact of
egocentrism and sensation seeking on message processing. Journal of
Communication, 52, 812–831.
informative speeches. Central States Speech Journal, 21, 160–166.
Guéguen, N., & Pascual, A. (2005). Improving the response rate to a street
survey: An evaluation of the “but you are free to accept or to refuse”
technique. Psychological Record, 55, 297–303.
Guéguen, N., Pascual, A., & Dagot, L. (2002). Low-ball and compliance to
a request: An application in a field setting. Psychological Reports, 91,
81–84.
Guéguen, N., Pichot, N., & Le Dreff, G. (2005). Similarity and helping
behavior on the web: The impact of the convergence of surnames
between a solicitor and a subject in a request made by e-mail. Journal of
Applied Social Psychology, 35, 423–429.
Guo, B. L., Aveyard, P., Fielding, A., & Sutton, S. (2009). Do the
transtheoretical model processes of change, decisional balance and
temptation predict stage movement? Evidence from smoking cessation
in adolescents. Addiction, 104, 828–838.
Medicine, and Therapeutics, 5, 101–114.
Haddock, G., Maio, G. R., Arnold, K., & Huskinson, T. (2008). Should
persuasion be affective or cognitive? The moderating effects of need for
affect and need for cognition. Personality and Social Psychology
Bulletin, 34, 769–779.
Hagen, K. M., Gutkin, T. B., Wilson, C. P., & Oats, R. G. (1998). Using
vicarious experience and verbal persuasion to enhance self-efficacy in
pre-service teachers: “Priming the pump” for consultation. School
Psychology Quarterly, 13, 169–178.
Hall, K. L., & Rossi, J. S. (2008). Meta-analytic examination of the strong
and weak principles across 48 health behaviors. Preventive Medicine,
46, 266–274.
Hall, P. A., Fong, G. T., Epp, L. J., & Elias, L. J. (2008). Executive
function moderates the intention-behavior link for physical activity and
dietary behavior. Psychology and Health, 23, 309–326.
Hall, P. A., Zehr, C. E., Ng, M., & Zanna, M. P. (2012). Implementation
intentions for physical activity in supportive and unsupportive
environmental conditions: An experimental examination of intention–
behavior consistency. Journal of Experimental Social Psychology, 48,
432–436.
Harmon-Jones, E. (1999). Toward an understanding of the motivation
underlying dissonance effects: Is the production of aversive
consequences necessary? In E. Harmon-Jones & J. Mills (Eds.),
Cognitive dissonance: Progress on a pivotal theory in social psychology
(pp. 71–99). Washington, DC: American Psychological Association.
Harmon-Jones, E., Brehm, J. W., Greenberg, J., Simon, L., & Nelson, D.
E. (1996). Evidence that the production of aversive consequences is not
necessary to create cognitive dissonance. Journal of Personality and
Social Psychology, 70, 5–16.
narrative review. Social and Personality Psychology Compass, 3,
962–978.
Harris, P. R., Mayle, K., Mabbott, L., & Napper, L. (2007). Self-
affirmation reduces smokers’ defensiveness to graphic on-pack cigarette
warning labels. Health Psychology, 26, 437–446.
Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., &
Merrill, L. (2009). Feeling validated versus being correct: A meta-
analysis of selective exposure to information. Psychological Bulletin,
135, 555–588.
Hass, J. W., Bagley, G. S., & Rogers, R. W. (1975). Coping with the
energy crisis: Effects of fear appeals upon attitudes toward energy
consumption. Journal of Applied Psychology, 60, 754–756.
Hass, R. G., & Linder, D. E. (1972). Counterargument availability and the
effects of message structure on persuasion. Journal of Personality and
Social Psychology, 23, 219–233.
Hecht, M. L., Graham, J. W., & Elek, E. (2006). The drug resistance
strategies intervention: Program effects on substance use. Health
Communication, 20, 267–276.
Hefner, D., Rothmund, T., Klimmt, C., & Gollwitzer, M. (2011). Implicit
measures and media effects research: Challenges and opportunities.
Communication Methods and Measures, 5, 181–202.
Herzog, T. A. (2008). Analyzing the transtheoretical model using the
framework of Weinstein, Rothman, and Sutton (1998): The example of
smoking cessation. Health Psychology, 27, 548–556.
Hether, H. J., Huang, G. C., Beck, V., Murphy, S. T., & Valente, T. W.
(2008). Entertainment-education in a media-saturated environment:
Examining the impact of single and multiple exposures to breast cancer
storylines on two popular medical dramas. Journal of Health
Communication, 13, 808–823.
Hetts, J. J., Boninger, D. S., Armor, D. A., Gleicher, F., & Nathanson, A.
(2000). The influence of anticipated counterfactual regret on behavior.
Psychology and Marketing, 17, 345–368.
Hibbert, S., Smith, A., Davies, A., & Ireland, F. (2007). Guilt appeals:
Persuasion knowledge and charitable giving. Psychology and
Marketing, 24, 723–742.
Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of
credibility assessment: Construct, heuristics, and interaction in context.
Information Processing & Management, 44, 1467–1484.
Hodson, G., Maio, G. R., & Esses, V. M. (2001). The role of attitudinal
ambivalence in susceptibility to consensus information. Basic and
Applied Social Psychology, 23, 197–205.
Høie, M., Moan, I. S., Rise, J., & Larsen, E. (2012). Using an extended
version of the theory of planned behaviour to predict smoking cessation
in two age groups. Addiction Research and Theory, 20, 42–54.
Horai, J., Naccari, N., & Fatoullah, E. (1974). The effects of expertise and
physical attractiveness upon opinion agreement and liking. Sociometry,
37, 601–606.
Hornikx, J., & O’Keefe, D. J. (2009). Adapting consumer advertising
appeals to cultural values: A meta-analytic review of effects on
persuasiveness and ad liking. Communication Yearbook, 33, 39–71.
Hsieh, G., Hudson, S. E., & Kraut, R. E. (2011). Donate for credibility:
How contribution incentives can improve credibility. In Proceedings of
the ACM Conference on Human Factors in Computing Systems (CHI)
(pp. 3435–3438). New York: Association for Computing Machinery.
doi:10.1145/1978942.1979454.
Hübner, G., & Kaiser, F. G. (2006). The moderating role of the attitude-
subjective norms conflict on the link between moral norms and
intention. European Psychologist, 11, 99–109.
application of social judgment theory. American Politics Quarterly, 14,
150–185.
Hustinx, L., van Enschot, R., & Hoeken, H. (2007). Argument quality in
the elaboration likelihood model: An empirical study of strong and weak
arguments in a persuasive message. In F. H. van Eemeren, J. A. Blair,
C. A. Willard, & B. Garssen (Eds.), Proceedings of the Sixth
Conference of the International Society for the Study of Argumentation
(pp. 651–657). Amsterdam: Sic Sat.
Hyde, J., Hankins, M., Deale, A., & Marteau, T. M. (2008). Interventions
to increase self-efficacy in the context of addiction behaviours: A
systematic literature review. Journal of Health Psychology, 13,
607–623.
Igartua, J.-J., & Barrios, I. (2012). Changing real-world beliefs with
controversial movies: Processes and mechanisms of narrative
persuasion. Journal of Communication, 62, 514–531.
Ito, T. A., & Cacioppo, J. T. (2007). Attitudes as mental and neural states
of readiness: Using physiological measures to study implicit attitudes. In
B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp.
125–158). New York: Guilford.
Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of
ideological selectivity in media use. Journal of Communication, 59,
19–39.
Jaccard, J., Radecki, C., Wilson, T., & Dittus, P. (1995). Methods for
identifying consequential beliefs: Implications for understanding
attitude strength. In R. E. Petty & J. A. Krosnick (Eds.), Attitude
strength: Antecedents and consequences (pp. 337–359). Mahwah, NJ:
Lawrence Erlbaum.
Jackson, S. (1992). Message effects research: Principles of design and
analysis. New York: Guilford.
Janssen, L., Fennis, B. M., Pruyn, A. T. H., & Vohs, K. D. (2008). The
path of least resistance: Regulatory resource depletion and the
effectiveness of social influence techniques. Journal of Business
Research, 61, 1041–1046.
and Social Science, 640, 150–172.
Jensen, C. D., Cushing, C. C., Aylward, B. S., Craig, J. T., Sorell, D. M.,
& Steele, R. G. (2011). Effectiveness of motivational interviewing
interventions for adolescent substance use behavior change: A meta-
analytic review. Journal of Consulting and Clinical Psychology, 79,
433–440.
Jeong, E. S., Shi, Y., Baazova, A., Chiu, C., Nahai, A., Moons, W. G., &
Taylor, S. E. (2011). The relation of approach/avoidance motivation and
message framing to the effectiveness of charitable appeals. Social
Influence, 6, 15–21.
Jiang, L., Hoegg, J., Dahl, D. W., & Chattopadhyay, A. (2010). The
persuasive role of incidental similarity on attitudes and purchase
intentions in a sales context. Journal of Consumer Research, 36,
778–791.
telling. Psychological Science, 23, 524–532.
Johnson, B. T., Lin, H.-Y., Symons, C. S., Campbell, L. A., & Ekstein, G.
(1995). Initial beliefs and attitudinal latitudes as factors in persuasion.
Personality and Social Psychology Bulletin, 21, 502–511.
Jonas, K., Broemer, P., & Diehl, M. (2000). Experienced ambivalence as a
moderator of the consistency between attitudes and behaviors.
Zeitschrift für Sozialpsychologie, 31, 153–165.
Judah, G., Gardner, B., & Aunger, R. (2013). Forming a flossing habit: An
exploratory study of the psychological determinants of habit formation.
British Journal of Health Psychology, 18, 338–353.
Judd, C. M., Kenny, D. A., & Krosnick, J. A. (1983). Judging the positions
of political candidates: Models of assimilation and contrast. Journal of
Personality and Social Psychology, 44, 952–963.
Social Psychology, 103, 54–69.
Kaiser, F. G., Hübner, G., & Bogner, F. X. (2005). Contrasting the theory
of planned behavior with the value-belief-norm model in explaining
conservation behavior. Journal of Applied Social Psychology, 35,
2150–2170.
Kamins, M. A., & Assael, H. (1987a). Moderating disconfirmation of
expectations through the use of two-sided appeals: A longitudinal
approach. Journal of Economic Psychology, 8, 237–254.
Kang, Y., Cappella, J., & Fishbein, M. (2006). The attentional mechanism
of message sensation value: Interaction between message sensation
value and argument quality on message effectiveness. Communication
Monographs, 73, 351–378.
Kang, Y.-S., & Herr, P. M. (2006). Beauty and the beholder: Toward an
integrative model of communication source effects. Journal of
Consumer Research, 33, 123–130.
Kaplowitz, S. A., & Fink, E. L. (1997). Message discrepancy and
persuasion. In G. A. Barnett & F. J. Boster (Eds.), Progress in
communication sciences: Vol. 13. Advances in persuasion (pp. 75–106).
Greenwich, CT: Ablex.
Katz, D., McClintock, C., & Sarnoff, I. (1957). The measurement of ego
defense as related to attitude change. Journal of Personality, 25,
465–474.
Keer, M., van den Putte, B., & Neijens, P. (2010). The role of affect and
cognition in health decision making. British Journal of Social
Psychology, 49, 143–153.
Kelly, J. A., St. Lawrence, J. S., Stevenson, Y., Hauth, A. C., Kalichman,
S. C., Diaz, Y. E., … Morgan, M. G. (1992). Community AIDS/HIV
risk reduction: The effects of endorsements by popular people in three
cities. American Journal of Public Health, 82, 1483–1489.
Kennedy, M. G., O’Leary, A., Beck, V., Pollard, K., & Simpson, P.
(2004). Increases in calls to the CDC National STD and AIDS Hotline
following AIDS-related episodes in a soap opera. Journal of
Communication, 54, 287–301.
Journal of Applied Social Psychology, 35, 487–507.
Kenworthy, J. B., Miller, N., Collins, B. E., Read, S. J., & Earleywine, M.
(2011). A trans-paradigm theoretical synthesis of cognitive dissonance
theory: Illuminating the nature of discomfort. European Review of
Social Psychology, 22, 36–113.
Kesselheim, A. S., Robertson, C. T., Myers, J. A., Rose, S. L., Gillet, V.,
Ross, K. M., … Avorn, J. (2012). A randomized study of how
physicians interpret research funding disclosures. New England Journal
of Medicine, 367, 1119–1127.
Kim, A., Stark, E., & Borgida, E. (2011). Symbolic politics and the
prediction of attitudes toward federal regulation of reduced-exposure
tobacco products. Journal of Applied Social Psychology, 41, 381–400.
Kim, H. S., Bigman, C. A., Leader, A. E., Lerman, C., & Cappella, J. N.
(2012). Narrative health communication and behavior change: The
influence of exemplars in the news on intention to quit smoking. Journal
of Communication, 62, 473–492.
Kim, M.-S., & Hunter, J. E. (1993a). Attitude-behavior relations: A meta-
analysis of attitudinal relevance and topic. Journal of Communication,
43(1), 101–142.
King, A. J., Williams, E. A., Harrison, T. R., Morgan, S. E., & Havermahl,
T. (2012). The “Tell Us Now” campaign for organ donation: Using
message immediacy to increase donor registration rates. Journal of
Applied Communication Research, 40, 229–246.
Klein, W. M. P., & Harris, P. R. (2009). Self-affirmation enhances
attentional bias toward threatening components of a persuasive message.
Psychological Science, 20, 1463–1467.
Klein, W. M. P., Lipkus, I. M., Scholl, S. M., McQueen, A., Cerully, J. L.,
& Harris, P. R. (2010). Self-affirmation moderates effects of unrealistic
optimism and pessimism on reactions to tailored risk feedback.
Psychology & Health, 25, 1195–1208.
Knäuper, B., McCollam, A., Rosen-Brown, A., Lacaille, J., Kelso, E., &
Roseman, M. (2011). Fruitful plans: Adding targeted mental imagery to
implementation intentions increases fruit consumption. Psychology and
Health, 26, 601–617.
Knäuper, B., Roseman, M., Johnson, P. J., & Krantz, L. H. (2009). Using
mental imagery to enhance the effectiveness of implementation
intentions. Current Psychology, 28, 181–186.
Koestner, R., Horberg, E. J., Gaudreau, P., Powers, T., Di Dio, P., Bryan,
C., … Salter, N. (2006). Bolstering implementation plans for the long
haul: The benefits of simultaneously boosting self-concordance or self-
efficacy. Personality and Social Psychology Bulletin, 32, 1547–1558.
Kok, G., Hospers, H. J., Harterink, P., & De Zwart, O. (2007). Social-
cognitive determinants of HIV risk-taking intentions among men who
date men through the Internet. AIDS Care, 19, 410–417.
Koring, M., Richert, J., Lippke, S., Parschau, L., Reuter, T., & Schwarzer,
R. (2012). Synergistic effects of planning and self-efficacy on physical
activity. Health Education & Behavior, 39, 152–158.
Kotowski, M. R., Smith, S. W., Johnstone, P. M., & Pritt, E. (2011). Using
the EPPM to create and evaluate the effectiveness of brochures to
reduce the risk for noise-induced hearing loss in college students. Noise
and Health, 13, 261–271.
Kraemer, H. C., Kiernan, M., Essex, M., & Kupfer, D. J. (2008). How and
why criteria defining moderators and mediators differ between the
Baron & Kenny and MacArthur approaches. Health Psychology, 27,
S101–S109.
Kreuter, M. W., Green, M. C., Cappella, J. N., Slater, M. D., Wise, M. E.,
Storey, D., … Wooley, S. (2007). Narrative communication in cancer
prevention and control: A framework to guide research and application.
Annals of Behavioral Medicine, 33, 221–235.
Kreuter, M. W., Lukwago, S. N., Bucholtz, D. C., Clark, E. M., &
Sanders-Thompson, V. (2003). Achieving cultural appropriateness in
health promotion programs: Targeted and tailored approaches. Health
Education and Behavior, 30, 133–146.
Krieger, J. L., Coveleski, S., Hecht, M. L., Miller-Day, M., Graham, J. W.,
Pettigrew, J., & Kootsikas, A. (2013). From kids, through kids, to kids:
Examining the social influence strategies used by adolescents to
promote prevention among peers. Health Communication, 28, 683–695.
Krosnick, J. A., Boninger, D. S., Chuang, Y. C., Berent, M. K., & Carnot,
C. G. (1993). Attitude strength: One construct or many related
constructs? Journal of Personality and Social Psychology, 65,
1132–1151.
Kruglanski, A. W., Chen, X., Pierro, A., Mannetti, L., Erb, H.-P., &
Spiegel, S. (2006). Persuasion according to the unimodel: Implications
for cancer communication. Journal of Communication, 56, S105–S122.
Kuhlmann, A. K. S., Kraft, J. M., Galavotti, C., Creek, T. L., Mooki, M.,
& Ntumy, R. (2008). Radio role models for the prevention of mother-to-
child transmission of HIV and HIV testing among pregnant women in
Botswana. Health Promotion International, 23, 260–268.
Kwak, L., Kremers, S. P. J., van Baak, M. A., & Brug, J. (2007). A poster-
based intervention to promote stair use in blue- and white-collar
worksites. Preventive Medicine, 45, 177–181.
Holman (Ed.), Proceedings of the 1991 Conference of the American
Academy of Advertising (pp. 81–87). New York: D’Arcy Masius
Benton & Bowles.
Lai, M. K., Ho, S. K., & Lam, T. H. (2004). Perceived peer smoking
prevalence and its association with smoking behaviours and intentions
in Hong Kong Chinese adolescents. Addiction, 99, 1195–1205.
Lalor, K. M., & Hailey, B. J. (1990). The effects of message framing and
feelings of susceptibility to breast cancer on reported frequency of
breast self-examination. International Quarterly of Community Health
Education, 10, 183–192.
Landman, J., & Petty, R. (2000). “It could have been you”: How states
exploit counterfactual thought to market lotteries. Psychology and
Marketing, 17, 299–321.
Landy, D. (1972). The effects of an overheard audience’s reaction and
attractiveness on opinion change. Journal of Experimental Social
Psychology, 8, 276–288.
Larimer, M. E., Kaysen, D. L., Lee, C. M., Kilmer, J. R., Lewis, M. A.,
Dillworth, T., … Neighbors, C. (2009). Evaluating level of specificity of
normative referents in relation to personal drinking behavior. Journal of
Studies on Alcohol and Drugs, S16, 115–121.
American Journal of Health Promotion, 20, 135–138.
Lavine, H., & Snyder, M. (1996). Cognitive processing and the functional
matching effect in persuasion: The mediating role of subjective
perceptions of message quality. Journal of Experimental Social
Psychology, 32, 580–604.
Lavine, H., & Snyder, M. (2000). Cognitive processes and the functional
matching effect in persuasion: Studies of personality and political
behavior. In G. R. Maio & J. M. Olson (Eds.), Why we evaluate:
Functions of attitudes (pp. 97–131). Mahwah, NJ: Lawrence Erlbaum.
Leader, A. E., Weiner, J. L., Kelly, B. J., Hornik, R. C., & Cappella, J. N.
(2009). Effects of information framing on human papillomavirus
vaccination. Journal of Women’s Health, 18, 225–233.
Lee, E.-J. (2008). When are strong arguments stronger than weak
arguments? Deindividuation effects on message elaboration in
computer-mediated communication. Communication Research, 35,
646–665.
Leone, L., Perugini, M., & Bagozzi, R. P. (2005). Emotions and decision
making: Regulatory focus moderates the influence of anticipated
emotions on action evaluation. Cognition and Emotion, 19, 1175–1198.
Leshner, G., Bolls, P., & Thomas, E. (2009). Scare’em or disgust’em: The
effects of graphic health promotion messages. Health Communication,
24, 447–451.
Lester, R. T., Ritvo, P., Mills, E. J., Kariri, A., Karanja, S., Chung, M. H., …
Plummer, F. A. (2010). Effects of a mobile phone short message service
on antiretroviral treatment adherence in Kenya (WelTel Kenya1): A
randomised trial. The Lancet, 376, 1838–1845.
Leventhal, H., Jones, S., & Trembly, G. (1966). Sex differences in attitude
and behavior change under conditions of fear and specific instructions.
Journal of Experimental Social Psychology, 2, 387–399.
Levin, I. P., & Gaeth, G. J. (1988). How consumers are affected by the
framing of attribute information before and after consuming the product.
Journal of Consumer Research, 15, 374–378.
Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not
created equal: A typology and critical analysis of framing effects.
Organizational Behavior and Human Decision Processes, 76, 149–188.
Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect
sizes are negatively correlated in meta-analyses: Evidence and
implications of a publication bias against non-significant findings.
Communication Monographs, 76, 286–302.
Levitan, L. C., & Visser, P. S. (2008). The impact of the social context on
resistance to persuasion: Effortful versus effortless responses to counter-
attitudinal information. Journal of Experimental Social Psychology, 44,
640–649.
concerning Wikipedia. Information Processing and Management, 49,
405–419.
Lippke, S., Wiedemann, A. U., Ziegelmann, J. P., Reuter, T., & Schwarzer,
R. (2009). Self-efficacy moderates the mediation of intentions into
behavior via plans. American Journal of Health Behavior, 33, 521–529.
Lord, K. R., Lee, M. S., & Sauer, P. L. (1995). The combined influence
hypothesis: Central and peripheral antecedents of attitude toward the ad.
Journal of Advertising, 24(1), 73–85.
Love, G. D., Mouttapa, M., & Tanjasiri, S. P. (2009). Everybody’s talking:
Using entertainment-education video to reduce barriers to discussion of
cervical cancer screening among Thai women. Health Education
Research, 24, 829–838.
Lowe, R., Eves, F., & Carroll, D. (2002). The influence of affective and
instrumental beliefs on exercise intentions and behavior: A longitudinal
analysis. Journal of Applied Social Psychology, 32, 1241–1252.
Lu, A. S., Baranowski, T., Thompson, D., & Buday, R. (2012). Story
immersion of video games for youth health promotion: A review of
literature. Games for Health Journal, 1, 199–204.
Luzzo, D. A., Hasper, P., Albert, K. A., Bibby, M. A., & Martinelli, E. A.,
Jr. (1999). Effects of self-efficacy-enhancing interventions on the
math/science self-efficacy and career interests, goals, and actions of
career undecided college students. Journal of Counseling Psychology,
46, 233–243.
MacKenzie, S. B., Lutz, R. J., & Belch, G. E. (1986). The role of attitude
toward the ad as a mediator of advertising effectiveness: A test of
competing explanations. Journal of Marketing Research, 23, 130–143.
attributes of an effective restaurant chain endorser. Cornell Hospitality
Quarterly, 51, 238–250.
Mahler, H. I. M., Kulik, J. A., Butler, H. A., Gerrard, M., & Gibbons, F. X.
(2008). Social norms information enhances the efficacy of an
appearance-based sun protection intervention. Social Science &
Medicine, 67, 321–329.
Maio, G. R., Haddock, G., Watt, S. E., & Hewstone, M. (2009). Implicit
measures in applied contexts: An illustrative examination of antiracism
advertising. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.), Attitudes:
Insights from the new implicit measures (pp. 327–357). New York:
Psychology Press.
Maloney, E. K., Lapinski, M. K., & Witte, K. (2011). Fear appeals and
persuasion: A review and update of the extended parallel process model.
Social and Personality Psychology Compass, 5, 206–219.
Malotte, C. K., Jarvis, B., Fishbein, M., Kamb, K., Iatesta, M., Hoxworth,
T., … the Project RESPECT Study Group. (2000). Stage of change
versus an integrated psychosocial theory as a basis for developing
effective behaviour change interventions. AIDS Care, 12, 357–364.
Mangleburg, T. F., Sirgy, M. J., Grewal, D., Axsom, D., Hatzios, M.,
Claiborne, C. B., & Bogle, T. (1998). The moderating effect of prior
experience in consumers’ use of user–image based versus utilitarian
cues in brand attitude. Journal of Business and Psychology, 13,
101–113.
Mannetti, L., Pierro, A., & Kruglanski, A. (2007). Who regrets more after
choosing a non-status-quo option? Post decisional regret under need for
cognitive closure. Journal of Economic Psychology, 28, 186–196.
Mannetti, L., Pierro, A., & Livi, S. (2004). Recycling: Planned and self-
expressive behaviour. Journal of Environmental Psychology, 24,
227–236.
behavior, and social context: The role of norms and group membership
(pp. 11–30). Mahwah, NJ: Lawrence Erlbaum.
Marin, G., Marin, B. V., Perez-Stable, E. J., Sabogal, F., & Otero-Sabogal,
R. (1990). Cultural differences in attitudes and expectancies between
Hispanic and non–Hispanic white smokers. Hispanic Journal of
Behavioral Sciences, 12, 422–436.
Martin, J., Slade, P., Sheeran, P., Wright, A., & Dibble, T. (2011). “If-
then” planning in one-to-one behaviour change counselling is effective
in promoting contraceptive adherence in teenagers. Journal of Family
Planning and Reproductive Health Care, 37, 85–88.
Masser, B., & France, C. R. (2010). An evaluation of a donation coping
brochure with Australian non-donors. Transfusion and Apheresis
Science, 43, 291–297.
Mazor, K. M., Baril, J., Dugan, E., Spencer, F., Burgwinkle, P., &
Gurwitz, J. H. (2007). Patient education about anticoagulant medication:
Is narrative evidence or statistical evidence more effective? Patient
Education and Counseling, 69, 145–157.
Prospective prediction of health-related behaviours with the theory of
planned behaviour: A meta-analysis. Health Psychology Review, 5,
97–144.
McIntyre, P., Barnett, M. A., Harris, R. J., Shanteau, J., Skowronski, J. J.,
& Klassen, M. (1987). Psychological factors influencing decisions to
donate organs. Advances in Consumer Research, 14, 331–334.
McNeil, B. J., Pauker, S. G., Sox, H. C., Jr., & Tversky, A. (1982). On the
elicitation of preferences for alternative therapies. New England Journal
of Medicine, 306, 1259–1262.
McRee, A. L., Reiter, P. L., Chantala, K., & Brewer, N. T. (2010). Does
framing human papillomavirus vaccine as preventing cancer in men
increase vaccine acceptability? Cancer Epidemiology, Biomarkers &
Prevention, 19, 1937–1944.
Meijnders, A., Midden, C., Olofsson, A., Öhman, S., Matthes, J.,
Bondarenko, O., … Rusanen, M. (2009). The role of similarity cues in
the development of trust in sources of information about GM food. Risk
Analysis, 29, 1116–1128.
Melnyk, V., van Herpen, E., Fischer, A. R. H., & van Trijp, H. C. M.
(2011). To think or not to think: The effect of cognitive deliberation on
the influence of injunctive versus descriptive social norms. Psychology
& Marketing, 28, 709–729.
Merrill, S., Grofman, B., & Adams, J. (2001). Assimilation and contrast
effects in voter projections of party locations: Evidence from Norway,
France, and the USA. European Journal of Political Research, 40,
199–221.
Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. M.
(2003). Credibility for the 21st century: Integrating perspectives on
source, message, and media credibility in the contemporary media
environment. Communication Yearbook, 27, 293–335.
Michie, S., Dormandy, E., French, D. P., & Marteau, T. M. (2004). Using
the theory of planned behaviour to predict screening uptake in two
contexts. Psychology and Health, 19, 705–718. [Erratum notice:
Psychology and Health, 20 (2005), 275.]
Micu, C. C., Coulter, R. A., & Price, L. L. (2009). How product trial alters
the effects of model attractiveness. Journal of Advertising, 38(2), 69–81.
Miller, C. H., Lane, L. T., Deatrick, L. M., Young, A. M., & Potts, K. A.
(2007). Psychological reactance and promotional health messages: The
effects of controlling language, lexical concreteness, and the restoration
of freedom. Human Communication Research, 33, 219–240.
people for change (2nd ed.). New York: Guilford.
Milne, S., Orbell, S., & Sheeran, P. (2002). Combining motivational and
volitional interventions to promote exercise participation: Protection
motivation theory and implementation intentions. British Journal of
Health Psychology, 7, 163–184.
Milne, S., Sheeran, P., & Orbell, S. (2000). Prediction and intervention in
health-related behavior: A meta-analytic review of protection
motivation theory. Journal of Applied Social Psychology, 30, 106–143.
Mishra, S. I., Chavez, L. R., Magana, J. R., Nava, P., Valdez, R. B., &
Hubbell, F. A. (1998). Improving breast cancer control among Latinas:
Evaluation of a theory-based educational program. Health Education
and Behavior, 25, 653–670.
Misra, S., & Beatty, S. E. (1990). Celebrity spokesperson and brand
congruence: An assessment of recall and affect. Journal of Business
Research, 21, 159–173.
Moan, I. S., & Rise, J. (2011). Predicting intentions not to “drink and
drive” using an extended version of the theory of planned behaviour.
Accident Analysis & Prevention, 43, 1378–1384.
In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances through meta-
analysis (pp. 53–68). Cresskill, NJ: Hampton.
Morgan, S. E., Cole, H. P., Struttmann, T., & Piercy, L. (2002). Stories or
statistics? Farmers’ attitudes toward messages in an agricultural safety
campaign. Journal of Agricultural Safety and Health, 8, 225–239.
Morgan, S. E., Palmgreen, P., Stephenson, M. T., Hoyle, R. H., & Lorch,
E. P. (2003). Associations between message features and subjective
evaluations of the sensation value of antidrug public service
announcements. Journal of Communication, 53, 512–526.
Journal of Applied Communication Research, 28, 91–116.
Morton, K., Beauchamp, M., Prothero, A., Joyce, L., Saunders, L.,
Spencer-Bowdage, S., … Pedlar, C. (2014). The effectiveness of
motivational interviewing for health behaviour change in primary care
settings: A systematic review. Health Psychology Review. doi:
10.1080/17437199.2014.882006.
Munn, W. C., & Gruner, C. R. (1981). “Sick” jokes, speaker sex, and
informative speech. Southern Speech Communication Journal, 46,
411–418.
(2013). Narrative versus nonnarrative: The role of identification,
transportation, and emotion in reducing health disparities. Journal of
Communication, 63, 116–137.
Murray-Johnson, L., Witte, K., Liu, W.-Y., Hubbell, A. P., Sampson, J., &
Morrison, K. (2001). Addressing cultural orientations in fear appeals:
Promoting AIDS-protective behaviors among Mexican immigrant and
African American adolescents and American and Taiwanese college
students. Journal of Health Communication, 6, 335–358.
Murray-Johnson, L., Witte, K., Patel, D., Orrego, V., Zuckerman, C.,
Maxfield, A. M., & Thimons, E. D. (2004). Using the extended parallel
process model to prevent noise-induced hearing loss among coal miners
in Appalachia. Health Education and Behavior, 31, 741–755.
Muthusamy, N., Levine, T. R., & Weber, R. (2009). Scaring the already
scared: Some problems with HIV/AIDS fear appeals in Namibia.
Journal of Communication, 59, 317–344.
Nabi, R. L. (2003). “Feeling” resistance: Exploring the role of emotionally
evocative visuals in inducing inoculation. Media Psychology, 5,
199–223.
Nan, X., & Zhao, X. (2010). The influence of liking for antismoking PSAs
on adolescents’ smoking-related behavioral intentions. Health
Communication, 25, 459–469.
Nan, X., & Zhao, X. (2012). When does self-affirmation reduce negative
responses to antismoking messages? Communication Studies, 63,
482–497.
Napper, L., Harris, P. R., & Epton, T. (2009). Developing and testing a
self-affirmation manipulation. Self and Identity, 8, 45–62.
Napper, L. E., Wood, M. M., Jaffe, A., Fisher, D. G., Reynolds, G. L., &
Klahn, J. A. (2008). Convergent and discriminant validity of three
measures of stage of change. Psychology of Addictive Behaviors, 22,
362–371.
Ng, J. Y. Y., Tam, S. F., Yew, W. W., & Lam, W. K. (1999). Effects of
video modeling on self-efficacy and exercise performance of COPD
patients. Social Behavior and Personality, 27, 475–486.
Psychological Science, 21, 194–199.
Niederdeppe, J., Kim, H. K., Lundell, H., Fazili, F., & Frazier, B. (2012).
Beyond counterarguing: Simple elaboration, complex integration, and
counterelaboration in response to variations in narrative focus and
sidedness. Journal of Communication, 62, 758–777.
Nigbur, D., Lyons, E., & Uzzell, D. (2010). Attitudes, norms, identity and
environmental behaviour: Using an expanded theory of planned
behaviour to predict participation in a kerbside recycling programme.
British Journal of Social Psychology, 49, 259–284.
Noar, S. M., Benac, C. N., & Harris, M. S. (2007). Does tailoring matter?
Meta-analytic review of tailored print health behavior change
interventions. Psychological Bulletin, 133, 673–693.
Noar, S. M., & Mehrotra, P. (2011). Toward a new methodological
paradigm for testing theories of health behavior and health behavior
change. Patient Education and Counseling, 82, 468–474.
Norman, P., & Conner, M. (2006). The theory of planned behaviour and
binge drinking: Assessing the moderating role of past behaviour within
the theory of planned behaviour. British Journal of Health Psychology,
11, 55–70.
Norman, P., & Cooper, Y. (2011). The theory of planned behaviour and
breast self-examination: Assessing the impact of past behaviour, context
stability and habit strength. Psychology & Health, 26, 1156–1172.
Norman, P., & Hoyle, S. (2004). The theory of planned behavior and
breast self-examination: Distinguishing between perceived control and
self-efficacy. Journal of Applied Social Psychology, 34, 694–708.
Norman, P., & Smith, L. (1995). The theory of planned behaviour and
exercise: An investigation into the role of prior behaviour, behavioural
intentions and attitude variability. European Journal of Social
Psychology, 25, 403–415.
Norman, R. (1976). When what is said is important: A comparison of
expert and attractive sources. Journal of Experimental Social
Psychology, 12, 294–300.
O’Keefe, D. J. (1997). Standpoint explicitness and persuasive effect: A
meta-analytic review of the effects of varying conclusion articulation in
persuasive messages. Argumentation and Advocacy, 34, 1–12.
O’Keefe, D. J. (2011a). The asymmetry of predictive and descriptive
capabilities in quantitative communication research: Implications for
hypothesis development and testing. Communication Methods and
Measures, 5, 113–125.
O’Keefe, D. J., & Figgé, M. (1999). Guilt and expected guilt in the
door-in-the-face technique. Communication Monographs, 66, 312–324.
O’Keefe, D. J., & Hale, S. L. (2001). An odds-ratio-based meta-analysis of
research on the door-in-the-face influence strategy. Communication
Reports, 14, 31–38.
Okun, M. A., & Schultz, A. (2003). Age and motives for volunteering:
Testing hypotheses derived from socioemotional selectivity theory.
Psychology and Aging, 18, 231–239.
Orbell, S., & Hagger, M. (2006). Temporal framing and the decision to
take part in Type 2 diabetes screening: Effects of individual differences
in consideration of future consequences on persuasion. Health
Psychology, 25, 537–548.
Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E.
(2013). Predicting ethnic and racial discrimination: A meta-analysis of
IAT criterion studies. Journal of Personality and Social Psychology,
105, 171–192.
Ouellette, J. A., & Wood, W. (1998). Habit and intention in everyday life:
The multiple processes by which past behavior predicts future behavior.
Psychological Bulletin, 124, 54–74.
Paek, H.-J., Oh, H. J., & Hove, T. (2012). How media campaigns influence
children’s physical activity: Expanding the normative mechanisms of
the theory of planned behavior. Journal of Health Communication, 17,
869–885.
Pan, L. Y., & Chiou, J. S. (2011). How much can you trust online
information? Cues for perceived trustworthiness of consumer-generated
online information. Journal of Interactive Marketing, 25, 67–74.
Pan, W., & Bai, H. (2009). A multivariate approach to a meta-analytic
review of the effectiveness of the D.A.R.E. program. International
Journal of Environmental Research and Public Health, 6, 267–277.
Pappas-DeLuca, K. A., Kraft, J. M., Galavotti, C., Warner, L., Mooki, M.,
Hastings, P., … Kilmarx, P. H. (2008). Entertainment-education radio
serial drama and outcomes related to HIV testing in Botswana. AIDS
Education and Prevention, 20, 486–504.
Park, H. S., Klein, K. A., Smith, S., & Martell, D. (2009). Separating
subjective norms, university descriptive and injunctive norms, and U.S.
descriptive and injunctive norms for drinking behavior intentions.
Health Communication, 24, 746–751.
Park, H. S., Levine, T. R., Westermann, C. Y. K., Orfgen, T., & Foregger,
S. (2007). The effects of argument quality and involvement type on
attitude formation and attitude change: A test of dual-process and social
judgment predictions. Human Communication Research, 33, 81–102.
Parschau, L., Richert, J., Koring, M., Ernsting, A., Lippke, S., &
Schwarzer, R. (2012). Changes in social-cognitive variables are
associated with stage transitions in physical activity. Health Education
Research, 27, 129–140.
Parsons, A., Lycett, D., & Aveyard, P. (2011). Response to Spring et al.:
What is the best method to assess the effect of combined interventions
for smoking cessation and post-cessation weight gain? Addiction, 106,
675–676.
Parvanta, S., Gibson, L., Forquer, H., Shapiro-Luft, D., Dean, L., Freres,
D., … Hornik, R. (2013). Applying quantitative approaches to the
formative evaluation of antismoking campaign messages. Social
Marketing Quarterly, 19, 242–264.
Peng, W., Crouse, J. C., & Lin, J.-H. (2013). Using active video games for
physical activity promotion: A systematic review of the current state of
research. Health Education & Behavior, 40, 171–192.
Perez, M., Becker, C. B., & Ramirez, A. (2010). Transportability of an
empirically supported dissonance-based prevention program for eating
disorders. Body Image, 7, 179–186.
Pertl, M., Hevey, D., Thomas, K., Craig, A., Chuinneagáin, S. N., &
Maher, L. (2010). Differential effects of self-efficacy and perceived
control on intention to perform skin cancer-related health behaviours.
Health Education Research, 25, 769–779.
science (pp. 217–259). Oxford, UK: Oxford University Press.
Petty, R. E., Briñol, P., Loersch, C., & McCaslin, M. J. (2009). The need
for cognition. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of
individual differences in social behavior (pp. 318–329). New York:
Guilford.
Petty, R. E., Briñol, P., Tormala, Z. L., & Wegener, D. T. (2007). The role
of metacognition in social psychology. In A. W. Kruglanski & E. T.
Higgins (Eds.), Social psychology: Handbook of basic principles (2nd
ed., pp. 254–284). New York: Guilford.
decrease persuasion by enhancing message-relevant cognitive
responses. Journal of Personality and Social Psychology, 37,
1915–1926.
& M. C. Green (Eds.), Persuasion: Psychological insights and
perspectives (2nd ed., pp. 81–116). Thousand Oaks, CA: Sage.
Petty, R. E., Fazio, R. H., & Briñol, P. (Eds.). (2009a). Attitudes: Insights
from the new implicit measures. New York: Psychology Press.
Petty, R. E., Fazio, R. H., & Briñol, P. (2009b). The new implicit
measures: An overview. In R. E. Petty, R. H. Fazio, & P. Briñol (Eds.),
Attitudes: Insights from the new implicit measures (pp. 3–18). New
York: Psychology Press.
Petty, R. E., & Wegener, D. T. (1998a). Attitude change: Multiple roles for
persuasion variables. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.),
Handbook of social psychology (4th ed., Vol. 1, pp. 323–390). Boston:
McGraw-Hill.
Petty, R. E., Wegener, D. T., Fabrigar, L. R., Priester, J. R., & Cacioppo, J.
T. (1993). Conceptual and methodological issues in the elaboration
likelihood model: A reply to the Michigan State critics. Communication
Theory, 3, 336–362.
Petty, R. E., Wells, G. L., & Brock, T. C. (1976). Distraction can enhance
or reduce yielding to propaganda: Thought disruption versus effort
justification. Journal of Personality and Social Psychology, 34,
874–884.
Petty, R. E., Wells, G. L., Heesacker, M., Brock, T. C., & Cacioppo, J. T.
(1983). The effects of recipient posture on persuasion: A cognitive
response analysis. Personality and Social Psychology Bulletin, 9,
209–222.
Petty, R. E., Wheeler, S. C., & Bizer, G. Y. (1999). Is there one persuasion
process or more? Lumping versus splitting in attitude change theories.
Psychological Inquiry, 10, 156–163.
Petty, R. E., Wheeler, S. C., & Bizer, G. Y. (2000). Attitude functions and
persuasion: An elaboration likelihood approach to matched versus
mismatched messages. In G. R. Maio & J. M. Olson (Eds.), Why we
evaluate: Functions of attitudes (pp. 133–162). Mahwah, NJ: Lawrence
Erlbaum.
Pfau, M., Holbert, R. L., Zubric, S. J., Pasha, N. H., & Lin, W.-K. (2000).
Role and influence of communication modality in the process of
resistance to persuasion. Media Psychology, 2, 1–33.
Pfau, M., Tusing, K. J., Koerner, A. F., Lee, W., Godbold, L. C., Penaloza,
L. C., … Hong, Y.-H. (1997). Enriching the inoculation construct: The
role of critical components in the process of resistance. Human
Communication Research, 24, 187–215.
Polyorat, K., Alden, D. L., & Kim, E. S. (2007). Impact of narrative versus
factual print ad copy on product evaluation: The mediating role of ad
message involvement. Psychology and Marketing, 24, 539–554.
Porzig-Drummond, R., Stevenson, R., Case, T., & Oaten, M. (2009). Can
the emotion of disgust be harnessed to promote hand hygiene?
Experimental and field-based tests. Social Science & Medicine, 68,
1006–1012.
Posavac, E. J., Kattapong, K. R., & Dew, D. E., Jr. (1999). Peer-based
interventions to influence health-related behaviors and attitudes: A
meta-analysis. Psychological Reports, 85, 1179–1194.
Povey, R., Conner, M., Sparks, P., James, R., & Shepherd, R. (2000).
Application of the theory of planned behaviour to two dietary
behaviours: Roles of perceived control and self-efficacy. British Journal
of Health Psychology, 5, 121–139.
Prati, G., Pietrantoni, L., & Zani, B. (2011). Influenza vaccination: The
persuasiveness of messages among people aged 65 years and older.
Health Communication, 27, 413–420.
Preiss, R. W., & Allen, M. (1998). Performing counterattitudinal
advocacy: The persuasive impact of incentives. In M. Allen & R. W.
Preiss (Eds.), Persuasion: Advances through meta-analysis (pp.
231–242). Cresskill, NJ: Hampton Press.
Prestwich, A., Conner, M., Lawton, R., Bailey, W., Litman, J., &
Molyneaux, V. (2005). Individual and collaborative implementation
intentions and the promotion of breast self-examination. Psychology and
Health, 20, 743–760. [Erratum notice: Psychology & Health, 21 (2006),
143.]
Prestwich, A., Kellar, I., Parker, R., MacRae, S., Learmonth, M., Sykes,
B., … Castle, H. (2014). How can self-efficacy be increased? Meta-
analysis of dietary interventions. Health Psychology Review, 8,
270–285.
Prestwich, A., Perugini, M., & Hurling, R. (2008). Goal desires moderate
intention-behaviour relations. British Journal of Social Psychology, 47,
49–73.
Primack, B. A., Carroll, M. V., McNamara, M., Klem, M. L., King, B.,
Rich, M., … Nayak, S. (2012). Role of video games in improving
health-related outcomes: A systematic review. American Journal of
Preventive Medicine, 42, 630–638.
Prislin, R., & Wood, W. (2005). Social influence in attitudes and attitude
change. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The
handbook of attitudes (pp. 671–706). Mahwah, NJ: Lawrence Erlbaum.
H., Rakowski, W., … Rossi, S. R. (1994). Stages of change and
decisional balance for 12 problem behaviors. Health Psychology, 13,
39–46.
Pryor, B., & Steinfatt, T. M. (1978). The effects of initial belief level on
inoculation theory and its proposed mechanisms. Human
Communication Research, 4, 217–230.
Puckett, J. M., Petty, R. E., Cacioppo, J. T., & Fischer, D. L. (1983). The
relative impact of age and attractiveness stereotypes on persuasion.
Journal of Gerontology, 38, 340–343.
Quick, B. L., Bates, B. R., & Quinlan, M. R. (2009). The utility of anger in
promoting clean indoor air policies. Health Communication, 24,
548–561.
Quick, B. L., Shen, L., & Dillard, J. P. (2013). Reactance theory and
persuasion. In J. P. Dillard & L. Shen (Eds.), The SAGE handbook of
persuasion: Developments in theory and practice (2nd ed., pp.
167–183). Thousand Oaks, CA: Sage.
Radecki, C. M., & Jaccard, J. (1999). Signing an organ donation letter:
The prediction of behavior from behavioral intentions. Journal of
Applied Social Psychology, 29, 1833–1853.
Randall, D. M., & Wolff, J. A. (1994). The time interval in the intention-
behaviour relationship: Meta-analysis. British Journal of Social
Psychology, 33, 405–418.
Reichert, T., Heckler, S. E., & Jackson, S. (2001). The effects of sexual
social marketing appeals on cognitive processing and persuasion.
Journal of Advertising, 30(1), 13–27.
Reid, A. E., & Aiken, L. S. (2013). Correcting injunctive norm
misperceptions motivates behavior change: A randomized controlled
sun protection intervention. Health Psychology, 32, 551–560.
Renes, R. J., Mutsaers, K., & van Woerkum, C. (2012). The difficult
balance between entertainment and education: A qualitative evaluation
of a Dutch health-promoting documentary series. Health Promotion
Practice, 13, 259–264.
Resnicow, K., Davis, R. E., Zhang, G., Konkel, J., Strecher, V. J., Shaikh,
A. R., … Weise, C. (2008). Tailoring a fruit and vegetable intervention
on novel motivational constructs: Results of a randomized study. Annals
of Behavioral Medicine, 35, 159–170.
Rhodes, N., & Wood, W. (1992). Self-esteem and intelligence affect
influenceability: The mediating role of message reception.
Psychological Bulletin, 111, 156–171.
Richard, R., de Vries, N. K., & van der Pligt, J. (1998). Anticipated regret
and precautionary sexual behavior. Journal of Applied Social
Psychology, 28, 1411–1428.
Richard, R., van der Pligt, J., & de Vries, N. (1996a). Anticipated affect
and behavioral choice. Basic and Applied Social Psychology, 18,
111–129.
Richard, R., van der Pligt, J., & de Vries, N. (1996b). Anticipated regret
and time perspective: Changing sexual risk-taking behavior. Journal of
Behavioral Decision Making, 9, 185–199.
Richert, J., Schüz, N., & Schüz, B. (2013). Stages of health behavior
change and mindsets. Health Psychology, 32, 273–282.
Riemsma, R. P., Pattenden, J., Bridle, C., Sowden, A. J., Mather, L., Watt,
I. S., & Walker, A. (2003). Systematic review of the effectiveness of
stage based interventions to promote smoking cessation. BMJ, 326,
1175–1181.
Rietveld, T., & van Hout, R. (2007). Analysis of variance for repeated
measures designs with word materials as a nested random or fixed
factor. Behavior Research Methods, 39, 735–747.
Rimal, R. N., Bose, K., Brown, J., Mkandawire, G., & Folda, L. (2009).
Extending the purview of the risk perception attitude framework:
Findings from HIV/AIDS prevention research in Malawi. Health
Communication, 24, 210–218.
Rimal, R. N., & Juon, H.-S. (2010). Use of the risk perception attitude
framework for promoting breast cancer prevention. Journal of Applied
Social Psychology, 40, 287–310.
Rimal, R. N., & Real, K. (2003). Perceived risk and efficacy beliefs as
motivators of change: Use of the risk perception attitude (RPA)
framework to understand health behaviors. Human Communication
Research, 29, 370–399.
Rise, J., Sheeran, P., & Hukkelberg, S. (2010). The role of self-identity in
the theory of planned behavior: A meta-analysis. Journal of Applied
Social Psychology, 40, 1085–1105.
Rivis, A., & Sheeran, P. (2003). Descriptive norms as an additional
predictor in the theory of planned behaviour: A meta-analysis. Current
Psychology, 22, 218–233.
Roehrig, M., Thompson, J. K., Brannick, M., & van den Berg, P. (2006).
Dissonance-based eating disorder prevention program: A preliminary
dismantling investigation. International Journal of Eating Disorders, 39,
1–10.
Rokeach, M. (1973). The nature of human values. New York: Free Press.
consistent across health problems? A meta-analysis. Health Psychology,
19, 593–604.
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity
dominance, and contagion. Personality and Social Psychology Review,
5, 296–320.
Ruiter, R. A. C., Abraham, C., & Kok, G. (2001). Scary warnings and
rational precautions: A review of the psychology of fear appeals.
Psychology and Health, 16, 613–630.
Ruiter, R. A. C., Kessels, L. T. E., Peters, G.-J. Y., & Kok, G. (2014).
Sixty years of fear appeal research: Current state of the evidence.
International Journal of Psychology, 49, 63–70.
Ruiz, S., & Sicilia, M. (2004). The impact of cognitive and/or affective
processing styles on consumer response to advertising appeals. Journal
of Business Research, 57, 657–664.
Sagarin, B. J., & Skowronski, J. J. (2009b). In pursuit of the proper null:
Reply to Chen and Risen (2009). Journal of Experimental Social
Psychology, 45, 428–430.
Sarup, G., Suchner, R. W., & Gaylord, G. (1991). Contrast effects and
attitude change: A test of the two-stage hypothesis of social judgment
theory. Social Psychology Quarterly, 54, 364–372.
Sayeed, S., Fishbein, M., Hornik, R., Cappella, J., & Ahern, R. K. (2005).
Adolescent marijuana use intentions: Using theory to plan an
intervention. Drugs: Education, Prevention, and Policy, 12, 19–34.
Schüz, B., Sniehotta, F. F., Mallach, N., Wiedemann, A. U., & Schwarzer,
R. (2009). Predicting transitions from preintentional, intentional and
actional stages of change. Health Education Research, 24, 64–75.
Schüz, B., Sniehotta, F. F., & Schwarzer, R. (2007). Stage-specific effects
of an action control intervention on dental flossing. Health Education
Research, 22, 332–341.
Schüz, N., Schüz, B., & Eid, M. (2013). When risk communication
backfires: Randomized controlled trial on self-affirmation and reactance
to personalized risk feedback in high-risk individuals. Health
Psychology, 32, 561–570.
Schwarzer, R., Richert, J., Kreausukon, P., Remme, L., Wiedemann, A. U.,
& Reuter, T. (2010). Translating intentions into nutrition behaviors via
planning requires self-efficacy: Evidence from Thailand and Germany.
International Journal of Psychology, 45, 260–268.
Segan, C. J., Borland, R., & Greenwood, K. M. (2004). What is the right
thing at the right time? Interactions between stages and processes of
change among smokers who make a quit attempt. Health Psychology,
23, 86–93.
Sestir, M., & Green, M. C. (2010). You are who you watch: Identification
and transportation effects on temporary self-concept. Social Influence,
5, 272–288.
Shani, Y., & Zeelenberg, M. (2007). When and why do we want to know?
How experienced regret promotes post-decision information search.
Journal of Behavioral Decision Making, 20, 207–222.
Sharot, T., Fleming, S. M., Yu, X., Koster, R., & Dolan, R. J. (2012). Is
choice-induced preference change long lasting? Psychological Science,
23, 1123–1129.
Shavitt, S., & Nelson, M. R. (2000). The social-identity function in person
perception: Communicated meanings of product preferences. In G. R.
Maio & J. M. Olson (Eds.), Why we evaluate: Functions of attitudes
(pp. 37–57). Mahwah, NJ: Lawrence Erlbaum.
Sheeran, P., & Orbell, S. (1998). Do intentions predict condom use? Meta-
analysis and examination of six moderator variables. British Journal of
Social Psychology, 37, 231–250.
Sheeran, P., & Orbell, S. (1999b). Implementation intentions and repeated
behaviour: Augmenting the predictive validity of the theory of planned
behaviour. European Journal of Social Psychology, 29, 349–369.
Sheeran, P., & Orbell, S. (2000a). Self-schemas and the theory of planned
behaviour. European Journal of Social Psychology, 30, 533–550.
Shen, L., & Bigsby, E. (2013). The effects of message features: Content,
structure, and style. In J. P. Dillard & L. Shen (Eds.), The SAGE
handbook of persuasion: Developments in theory and practice (2nd ed.,
pp. 20–35). Thousand Oaks, CA: Sage.
Shen, L., & Dillard, J. P. (2014). Threat, fear, and persuasion: Review and
critique of questions about functional form. Review of Communication
Research, 2, 94–114.
Nebraska Press.
Sherif, C. W., Kelly, M., Rodgers, H. L., Jr., Sarup, G., & Tittler, B. I.
(1973). Personal involvement, social judgment and action. Journal of
Personality and Social Psychology, 27, 311–328.
Sherif, C. W., Sherif, M., & Nebergall, R. E. (1965). Attitude and attitude
change: The social judgment-involvement approach. Philadelphia: W.
B. Saunders.
Psychology, 5, 298–312.
Shiv, B., Edell, J. A., & Payne, J. W. (1997). Factors affecting the impact
of negatively and positively framed ad messages. Journal of Consumer
Research, 24, 285–294.
Shiv, B., Edell, J. A., & Payne, J. W. (2004). Does elaboration increase or
decrease the effectiveness of negatively versus positively framed
messages? Journal of Consumer Research, 31, 199–208.
Siegel, J. T., Alvaro, E. M., Crano, W. D., Lac, A., Ting, S., & Jones, S. P.
(2008). A quasi-experimental investigation of message appeal variations
on organ donor registration rates. Health Psychology, 27, 170–178.
Siegrist, M., Earle, T. C., & Gutscher, H. (2003). Test of a trust and
confidence model in the applied context of electromagnetic field (EMF)
risks. Risk Analysis, 23, 705–716.
Siemer, M., & Joormann, J. (2003). Power and measure of effect size in
analysis of variance with fixed versus random nested factors.
Psychological Methods, 8, 497–517.
Sieverding, M., Matterne, U., & Ciccarello, L. (2010). What role do social
norms play in the context of men’s cancer screening intention and
behavior? Application of an extended theory of planned behavior.
Health Psychology, 29, 72–81.
Silverthorne, C. P., & Mazmanian, L. (1975). The effects of heckling and
media of presentation on the impact of a persuasive communication.
Journal of Social Psychology, 96, 229–236.
Simoni, J., Nelson, K., Franks, J., Yard, S., & Lehavot, K. (2011). Are
peer interventions for HIV efficacious? A systematic review. AIDS and
Behavior, 15, 1589–1595.
Simsekoglu, O., & Lajunen, T. (2008). Social psychology of seat belt use:
A comparison of theory of planned behavior and health belief model.
Transportation Research Part F: Traffic Psychology and Behaviour, 11,
181–191.
Sinclair, R. C., Moore, S. E., Mark, M. M., Soldat, A. S., & Lavis, C. A.
(2010). Incidental moods, source likeability, and persuasion: Liking
motivates message elaboration in happy people. Cognition & Emotion,
24, 940–961.
283–292.
Skalski, P., Tamborini, R., Glazer, E., & Smith, S. (2009). Effects of
humor on presence and recall of persuasive messages. Communication
Quarterly, 57, 136–153.
Slater, M. D., & Rouner, D. (1996). How message evaluation and source
attributes may influence credibility assessment and belief change.
Journalism and Mass Communication Quarterly, 73, 974–991.
Smidt, K. E., & DeBono, K. G. (2011). On the effects of product name on
product evaluation: An individual difference perspective. Social
Influence, 6, 131–141.
Smit, E. G., van Meurs, L., & Neijens, P. C. (2006). Effects of advertising
likeability: A 10-year perspective. Journal of Advertising Research, 46,
73–83.
Smith, A. J., & Clark, R. D., III. (1973). The relationship between attitudes
and beliefs. Journal of Personality and Social Psychology, 26, 321–326.
Smith, D. C., Tabb, K. M., Fisher, D., & Cleeland, L. (2014). Drug refusal
skills training does not enhance outcomes of African American
adolescents with substance use problems. Journal of Substance Abuse
Treatment, 46, 274–279.
Smith, J. K., Gerber, A. S., & Orlich, A. (2003). Self-prophecy effects and
voter turnout: An experimental replication. Political Psychology, 24,
593–604.
Smith, J. R., Terry, D. J., Manstead, A. S. R., Louis, W. R., Kotterman, D.,
& Wolfs, J. (2008). The attitude-behavior relationship in consumer
conduct: The role of norms, past behaviors, and self-identity. Journal of
Social Psychology, 148, 311–334.
Smith, R. A., Downs, E., & Witte, K. (2007). Drama theory and
entertainment education: Exploring the effects of a radio drama on
behavioral intentions to limit HIV transmissions in Ethiopia.
Communication Monographs, 74, 133–153.
Smith, S. M., Haugtvedt, C. P., & Petty, R. E. (1994). Need for cognition
and the effects of repeated expression on attitude accessibility and
extremity. Advances in Consumer Research, 21, 234–237.
Smith, S. W., Atkin, C. K., Martell, D. C., Allen, R., & Hembroff, L.
(2006). A social judgment theory approach to conducting formative
research in a social norms campaign. Communication Theory, 16,
141–152.
Snyder, M., & DeBono, K. G. (1985). Appeals to image and claims about
quality: Understanding the psychology of advertising. Journal of
Personality and Social Psychology, 49, 586–597.
opinion change. Canadian Journal of Behavioural Science, 3, 377–387.
Solomon, S., Greenberg, J., Pyszczynski, T., & Pryzbylinski, J. (1995). The
effects of mortality salience on personally-relevant persuasive appeals.
Social Behavior and Personality, 23, 177–190.
Sorrentino, R. M., Bobocel, D. R., Gitta, M. Z., Olson, J. M., & Hewitt, E.
C. (1988). Uncertainty orientation and persuasion: Individual
differences in the effects of personal relevance on social judgments.
Journal of Personality and Social Psychology, 55, 357–371.
Sparks, P., Jessop, D. C., Chapman, J., & Holmes, K. (2010). Pro-
environmental actions, climate change, and defensiveness: Do self-
affirmations make a difference to people’s motives and beliefs about
making a difference? British Journal of Social Psychology, 49, 553–568.
Spencer, S. J., Zanna, M. P., & Fong, G. T. (2005). Establishing a causal
chain: Why experiments are often more effective than mediational
analyses in examining psychological processes. Journal of Personality
and Social Psychology, 89, 845–851.
Spring, B., Howe, D., Berendsen, M., McFadden, H. G., Hitchcock, K.,
Rademaker, A. W., & Hitsman, B. (2009). Behavioral intervention to
promote smoking cessation and prevent weight gain: A systematic
review and meta-analysis. Addiction, 104, 1472–1486.
Stapel, D. A., & van der Linde, L. A. J. G. (2011). What drives self-
affirmation effects? On the importance of differentiating value
affirmation and attribute affirmation. Journal of Personality and Social
Psychology, 101, 34–45. See also https://www.commissielevelt.nl/.
Stead, M., Tagg, S., MacKintosh, A. M., & Eadie, D. (2005). Development
and evaluation of a mass media theory of planned behaviour
intervention to reduce speeding. Health Education Research, 20, 36–50.
Steadman, L., & Rutter, D. R. (2004). Belief importance and the theory of
planned behaviour: Comparing modal and ranked modal beliefs in
predicting attendance at breast screening. British Journal of Health
Psychology, 9, 447–463.
Sternthal, B., Dholakia, R., & Leavitt, C. (1978). The persuasive effect of
source credibility: Tests of cognitive response. Journal of Consumer
Research, 4, 252–260.
Steward, W. T., Schneider, T. R., Pizarro, J., & Salovey, P. (2003). Need
for cognition moderates responses to framed smoking-cessation
messages. Journal of Applied Social Psychology, 33, 2439–2464.
Stice, E., Chase, A., Stormer, S., & Appel, A. (2001). A randomized trial
of a dissonance-based eating disorder prevention program. International
Journal of Eating Disorders, 29, 247–262.
Stice, E., Marti, C. N., Spoor, S., Presnell, K., & Shaw, H. (2008).
Dissonance and healthy weight eating disorder prevention programs:
Long-term effects from a randomized efficacy trial. Journal of
Consulting and Clinical Psychology, 76, 329–340.
Stice, E., Shaw, H., Becker, C. B., & Rohde, P. (2008). Dissonance-based
interventions for the prevention of eating disorders: Using persuasion
principles to promote health. Prevention Science, 9, 114–128.
Stone, J., Aronson, E., Crain, A. L., Winslow, M. P., & Fried, C. B.
(1994). Inducing hypocrisy as a means of encouraging young adults to
use condoms. Personality and Social Psychology Bulletin, 20, 116–128.
Stone, J., & Fernandez, N. C. (2008a). How behavior shapes attitudes:
Cognitive dissonance processes. In W. D. Crano & R. Prislin (Eds.),
Attitudes and attitude change (pp. 313–334). New York: Psychology
Press.
Stone, J., & Fernandez, N. C. (2011). When thinking about less failure
causes more dissonance: The effect of elaboration and recall on
behavior change following hypocrisy. Social Influence, 6, 199–211.
Stone, J., & Focella, E. (2011). Hypocrisy, dissonance and the self-
regulation processes that improve health. Self and Identity, 10, 295–303.
Stone, J., Wiegand, A. W., Cooper, J., & Aronson, E. (1997). When
exemplification fails: Hypocrisy and the motive for self-integrity.
Journal of Personality and Social Psychology, 72, 54–65.
Strathman, A., Gleicher, F., Boninger, D. S., & Edwards, C. S. (1994). The
consideration of future consequences: Weighing immediate and distant
outcomes of behavior. Journal of Personality and Social Psychology, 66,
742–752.
1068–1083.
Stukas, A. A., Snyder, M., & Clary, E. G. (2008). The social marketing of
volunteerism: A functional approach. In C. P. Haugtvedt, P. M. Herr, &
F. R. Kardes (Eds.), Handbook of consumer psychology (pp. 959–979).
New York: Lawrence Erlbaum.
Sutton, S. (1992). Shock tactics and the myth of the inverted U. British
Journal of Addiction, 87, 517–519.
(Eds.), The SAGE handbook of health psychology (pp. 94–126).
London: Sage.
Sutton, S., French, D. P., Hennings, S. J., Mitchell, J., Wareham, N. J.,
Griffin, S., … Kinmonth, A. L. (2003). Eliciting salient beliefs in
research on the theory of planned behaviour: The effect of question
wording. Current Psychology, 22, 234–251.
Sutton, S., McVey, D., & Glanz, A. (1999). A comparative test of the
theory of reasoned action and the theory of planned behavior in the
prediction of condom use intentions in a national sample of English
young people. Health Psychology, 18, 72–81.
persuasion. Journal of Consumer Research, 11, 877–886.
Szilagyi, P., Vann, J., Bordley, C., Chelminski, A., Kraus, R., Margolis, P.,
& Rodewald, L. (2002). Interventions aimed at improving immunization
rates. Cochrane Database of Systematic Reviews, 2002(4), CD003941.
Terry, D. J., & Hogg, M. A. (1996). Group norms and the attitude-
behavior relationship: A role for group identification. Personality and
Social Psychology Bulletin, 22, 776–793.
Competence-based and integrity-based trust as predictors of acceptance
of carbon dioxide capture and storage (CCS). Risk Analysis, 29,
1129–1140.
Thomas, K., Hevey, D., Pertl, M., Chuinneagáin, S. N., Craig, A., &
Maher, L. (2011). Appearance matters: The frame and focus of health
messages influences beliefs about skin cancer. British Journal of Health
Psychology, 16, 418–429.
Thompson, R., & Haddock, G. (2012). Sometimes stories sell: When are
narrative appeals most likely to work? European Journal of Social
Psychology, 42, 92–102.
Thomsen, C. J., Borgida, E., & Lavine, H. (1995). The causes and
consequences of personal involvement. In R. E. Petty & J. A. Krosnick
(Eds.), Attitude strength: Antecedents and consequences (pp. 191–214).
Mahwah, NJ: Lawrence Erlbaum.
Thuen, F., & Rise, J. (1994). Young adolescents’ intention to use seat
belts: The role of attitudinal and normative beliefs. Health Education
Research, 9, 215–223.
Thurstone, L. L. (1931). The measurement of social attitudes. Journal of
Abnormal and Social Psychology, 26, 249–269.
Tormala, Z. L., Briñol, P., & Petty, R. E. (2006). When credibility attacks:
The reverse impact of source credibility on persuasion. Journal of
Experimental Social Psychology, 42, 684–691.
Tormala, Z. L., Briñol, P., & Petty, R. E. (2007). Multiple roles for source
credibility under high elaboration: It’s all in the timing. Social
Cognition, 25, 536–552.
action approach (pp. 23–42). Mahwah, NJ: Lawrence Erlbaum.
Trafimow, D., & Duran, A. (1998). Some tests of the distinction between
attitude and perceived behavioural control. British Journal of Social
Psychology, 37, 1–14.
Trafimow, D., & Sheeran, P. (1998). Some tests of the distinction between
cognitive and affective beliefs. Journal of Experimental Social
Psychology, 34, 378–397.
Trafimow, D., Sheeran, P., Conner, M., & Finlay, K. A. (2002). Evidence
that perceived behavioural control is a multidimensional construct:
Perceived control and perceived difficulty. British Journal of Social
Psychology, 41, 101–121.
Tseng, D. S., Cox, E., Plane, M. B., & Hia, K. (2001). Efficacy of patient
letter reminders on cervical cancer screening: A meta-analysis. Journal
of General Internal Medicine, 16, 563–568.
Tuah, N. A. A., Amiel, C., Qureshi, S., Car, J., Kaur, B., & Majeed, A.
(2011). Transtheoretical model for dietary and physical exercise
modification in weight loss management for overweight and obese
adults. Cochrane Database of Systematic Reviews, 2011(10), CD008066.
Tuppen, C. J. S. (1974). Dimensions of communicator credibility: An
oblique solution. Speech Monographs, 41, 253–260.
Turner, G. E., Burciaga, C., Sussman, S., Klein-Selski, E., Craig, S., Dent,
C. W., … Flay, B. (1993). Which lesson components mediate refusal
assertion skill improvement in school-based adolescent tobacco use
prevention? International Journal of the Addictions, 28, 749–766.
Twyman, M., Harvey, N., & Harries, C. (2008). Trust in motives, trust in
competence: Separate factors determining the effectiveness of risk
communication. Judgment and Decision Making Journal, 2, 111–120.
The effects of message quality and congruency on perceptions of
tailored health communications. Journal of Experimental Social
Psychology, 43, 249–257.
Usdin, S., Singhal, A., Shongwe, T., Goldstein, S., & Shabalala, A. (2004).
No short cuts in entertainment-education: Designing Soul City step-by-
step. In A. Singhal, M. J. Cody, E. M. Rogers, & M. Sabido (Eds.),
Entertainment-education and social change: History, research, and
practice (pp. 153–175). Mahwah, NJ: Lawrence Erlbaum.
Valois, P., Desharnais, R., Godin, G., Perron, J., & LeComte, C. (1993).
Psychometric properties of a perceived behavioral control multiplicative
scale developed according to Ajzen’s theory of planned behavior.
Psychological Reports, 72, 1079–1083.
van den Hende, E. A., Dahl, D. W., Schoormans, J. P. L., & Snelders, D.
(2012). Narrative transportation in concept tests for really new products:
The moderating effect of reader-protagonist similarity. Journal of
Product Innovation Management, 29, 157–170.
van der Pligt, J., & de Vries, N. K. (1998b). Expectancy-value models of
health behaviour: The role of salience and anticipated affect.
Psychology and Health, 13, 289–305.
van der Pligt, J., de Vries, N. K., Manstead, A. S. R., & van Harreveld, F.
(2000). The importance of being selective: Weighing the role of
attribute importance in attitudinal judgment. In M. P. Zanna (Ed.),
Advances in experimental social psychology (Vol. 32, pp. 135–200).
San Diego: Academic Press.
van Enschot-van Dijk, R., Hustinx, L., & Hoeken, H. (2003). The concept
of argument quality in the elaboration likelihood model: A normative
and empirical approach to Petty and Cacioppo’s “strong” and “weak”
arguments. In F. H. van Eemeren, J. A. Blair, C. A. Willard, & A. F.
Snoeck Henkemans (Eds.), Anyone who has a view: Theoretical
contributions to the study of argumentation (pp. 319–335). Amsterdam:
Kluwer.
van Harreveld, F., Schneider, I. K., Nohlen, H., & van der Pligt, J. (2012).
The dynamics of ambivalence: Evaluative conflict in attitudes and
decision making. In B. Gawronski & F. Strack (Eds.), Cognitive
consistency: A fundamental principle in social cognition (pp. 267–284).
New York: Guilford.
van Harreveld, F., van der Pligt, J., & de Vries, N. K. (1999). Attitudes
towards smoking and the subjective importance of attributes:
Implications for changing risk-benefit ratios. Swiss Journal of
Psychology, 58, 65–72.
van Ittersum, K., Pennings, J. M. E., Wansink, B., & van Trijp, H. C. M.
(2007). The validity of attribute-importance measurement: A review.
Journal of Business Research, 60, 1177–1190.
Van Koningsbruggen, G. M., Das, E., & Roskos-Ewoldsen, D. R. (2009).
How self-affirmation reduces defensive processing of threatening health
information: Evidence at the implicit level. Health Psychology, 28,
563–568.
van Laer, T., de Ruyter, K., Visconti, L. M., & Wetzels, M. (2014). The
extended transportation-imagery model: A meta-analysis of the
antecedents and consequences of consumers’ narrative transportation.
Journal of Consumer Research, 40, 797–817.
Van Osch, L., Lechner, L., Reubsaet, A., & de Vries, H. (2010). From
theory to practice: An explorative study into the instrumentality and
specificity of implementation intentions. Psychology and Health, 25,
351–364.
van ’t Riet, J., Cox, A. D., Cox, D., Zimet, G. D., De Bruijn, G.-J., van den
Putte, B., … Ruiter, R. A. C. (2014). Does perceived risk influence the
effects of message framing? A new investigation of a widely held
notion. Psychology and Health, 29, 933–949.
van ’t Riet, J., Ruiter, R. A. C., Werrij, M. Q., & de Vries, H. (2010). Self-
efficacy moderates message-framing effects: The case of skin-cancer
detection. Psychology and Health, 25, 339–349.
Vaughn, L. A., Childs, K. E., Maschinski, C., Niño, N. P., & Ellsworth, R.
(2010). Regulatory fit, processing fluency, and narrative persuasion.
Social and Personality Psychology Compass, 4(12), 1181–1192.
Vaughn, L. A., Hesse, S. J., Petkova, Z., & Trudeau, L. (2009). “This story
is right on”: The impact of regulatory fit on narrative engagement and
persuasion. European Journal of Social Psychology, 39, 447–456.
Vervloet, M., Linn, A. J., van Weert, J. C. M., de Bakker, D. H., Bouvy,
M. L., & van Dijk, L. (2012). The effectiveness of interventions using
electronic reminders to improve adherence to chronic medication: A
systematic review of the literature. Journal of the American Medical
Informatics Association, 19, 696–704.
Vet, R., de Wit, J. B. F., & Das, E. (2011). The efficacy of social role
models to increase motivation to obtain vaccination against hepatitis B
among men who have sex with men. Health Education Research, 26,
192–200.
Vincus, A. A., Ringwalt, C., Harris, M. S., & Shamblen, S. R. (2010). A
short-term, quasi-experimental evaluation of D.A.R.E.'s revised
elementary school curriculum. Journal of Drug Education, 40, 37–49.
Visser, P. S., Bizer, G. Y., & Krosnick, J. A. (2006). Exploring the latent
structure of strength-related attitude attributes. In M. P. Zanna (Ed.),
Advances in experimental social psychology (Vol. 38, pp. 1–68). San
Diego: Elsevier Academic Press.
Vitoria, P. D., Salgueiro, M. F., Silva, S. A., & de Vries, H. (2009). The
impact of social influence on adolescent intention to smoke: Combining
types and referents of influence. British Journal of Health Psychology,
14, 681–699.
Wallace, D. S., Paulson, R. M., Lord, C. G., & Bond, C. F., Jr. (2005).
Which behaviors do attitudes predict? Meta-analyzing the effects of
social pressure and perceived difficulty. Review of General Psychology,
9, 214–227.
Walster, E., Aronson, E., & Abrahams, D. (1966). On increasing the
persuasiveness of a low prestige communicator. Journal of Experimental
Social Psychology, 2, 325–342.
Walther, J. B., Liang, Y. H., Ganster, T., Wohn, D. Y., & Emington, J.
(2012). Online reviews, helpfulness ratings, and consumer attitudes: An
extension of congruity theory to multiple sources in web 2.0. Journal of
Computer-Mediated Communication, 18, 97–112.
Walther, J. B., Wang, Z., & Loh, T. (2004). The effect of top-level
domains and advertisements on health web site credibility. Journal of
Medical Internet Research, 6, e24.
Ward, C. D., & McGinnies, E. (1973). Perception of communicator’s
credibility as a function of when he is identified. Psychological Record,
23, 561–562.
Watt, S. E., Maio, G. R., Haddock, G., & Johnson, B. T. (2008). Attitude
functions in persuasion: Matching, involvement, self-affirmation, and
hierarchy. In W. D. Crano & R. Prislin (Eds.), Attitudes and attitude
change (pp. 189–211). New York: Psychology Press.
Wechsler, H., Nelson, T. F., Lee, J. E., Seibring, M., Lewis, C., & Keeling,
R. P. (2003). Perception and reality: A national evaluation of social
norms marketing interventions to reduce college students’ heavy alcohol
use. Journal of Studies on Alcohol, 64, 484–494.
Weigel, R. H., & Newman, L. S. (1976). Increasing attitude-behavior
correspondence by broadening the scope of the behavioral measure.
Journal of Personality and Social Psychology, 33, 793–802.
Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social
psychological experimentation. Personality and Social Psychology
Bulletin, 25, 1115–1125.
White, K., MacDonnell, R., & Dahl, D. W. (2011). It’s the mind-set that
matters: The role of construal level and message framing in influencing
consumer efficacy and conservation behaviors. Journal of Marketing
Research, 48, 472–485.
high risk area. British Journal of Health Psychology, 13, 435–448.
White, K. M., Smith, J. R., Terry, D. J., Greenslade, J. H., & McKimmie,
B. M. (2009). Social influence in the theory of planned behaviour: The
role of descriptive, injunctive, and in-group norms. British Journal of
Social Psychology, 48, 135–158.
Whitelaw, S., Baldwin, S., Bunton, R., & Flynn, D. (2000). The status of
evidence and outcomes in stages of change research. Health Education
Research, 15, 707–718.
Whittier, D. K., Kennedy, M. G., St. Lawrence, J. S., Seeley, S., & Beck,
V. (2005). Embedding health messages into entertainment television:
Effect on gay men’s response to a syphilis outbreak. Journal of Health
Communication, 10, 251–259.
Issues, 25(4), 41–78.
Wilkin, H. A., Valente, T. W., Murphy, S., Cody, M. J., Huang, G., &
Beck, V. (2007). Does entertainment-education work with Latinos in the
United States? Identification and the effects of a telenovela breast
cancer storyline. Journal of Health Communication, 12, 455–470.
Williams-Piehota, P., Pizarro, J., Silvera, S. A. N., Mowad, L., & Salovey,
P. (2006). Need for cognition and message complexity in motivating
fruit and vegetable intake among callers to the Cancer Information
Service. Health Communication, 19, 75–84.
Wilmot, W. W. (1971b). A test of the construct and predictive validity of
three measures of ego involvement. Speech Monographs, 38, 217–227.
Winterbottom, A., Bekker, H. L., Conner, M., & Mooney, A. (2008). Does
narrative information bias individual’s decision making? A systematic
review. Social Science and Medicine, 67, 2079–2088.
Witte, K. (1992). Putting the fear back into fear appeals: The extended
parallel process model. Communication Monographs, 59, 329–349.
Wojcieszak, M. E., & Mutz, D. C. (2009). Online groups and political
discourse: Do online discussion spaces facilitate exposure to political
disagreement? Journal of Communication, 59, 40–56.
Wood, W., & Quinn, J. M. (2003). Forewarned and forearmed? Two meta-
analytic syntheses of forewarnings of influence appeals. Psychological
Bulletin, 129, 119–138.
Wood, W., Rhodes, N., & Biek, M. (1995). Working knowledge and
attitude strength: An information-processing analysis. In R. E. Petty & J.
A. Krosnick (Eds.), Attitude strength: Antecedents and consequences
(pp. 283–313). Mahwah, NJ: Lawrence Erlbaum.
Wood, W., & Stagner, B. (1994). Why are some people easier to influence
than others? In S. Shavitt & T. C. Brock (Eds.), Persuasion:
Psychological insights and perspectives (pp. 149–174). Boston: Allyn
and Bacon.
Woodside, A. G., & Davenport, J. W., Jr. (1974). The effect of salesman
similarity and expertise on consumer purchasing behavior. Journal of
Marketing Research, 11, 198–202.
Xu, A. J., & Wyer, R. S., Jr. (2012). The role of bolstering and
counterarguing mind-sets in persuasion. Journal of Consumer Research,
38, 920–932.
Ybarra, O., & Trafimow, D. (1998). How priming the private self or
collective self affects the relative weights of attitudes and subjective
norms. Personality and Social Psychology Bulletin, 24, 362–370.
Yi, M. Y., Yoon, J. J., Davis, J. M., & Lee, T. (2013). Untangling the
antecedents of initial trust in Web-based health information: The roles
of argument quality, source expertise, and user perceptions of
information quality and risk. Decision Support Systems, 55, 284–295.
Yi, Y., & Yoo, J. (2011). The long-term effects of sales promotions on
brand attitude across monetary and non-monetary promotions.
Psychology and Marketing, 28, 879–896.
Yzer, M. C., Cappella, J. N., Fishbein, M., Hornik, R., Sayeed, S., &
Ahern, R. K. (2004). The role of distal variables in behavior change:
Effects of adolescents’ risk for marijuana use on intention to use
marijuana. Journal of Applied Social Psychology, 34, 1229–1250.
Yzer, M. C., Fisher, J. D., Bakker, A. B., Siero, F. W., & Misovich, S. J.
(1998). The effects of information about AIDS risk and self-efficacy on
women’s intentions to engage in AIDS preventive behavior. Journal of
Applied Social Psychology, 28, 1837–1852.
Zanna, M. P., & Cooper, J. (1974). Dissonance and the pill: An attribution
approach to studying the arousal properties of dissonance. Journal of
Personality and Social Psychology, 29, 703–709.
Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A. (2015). The
differential impact of statistical and narrative evidence on beliefs,
attitude, and intention: A meta-analysis. Health Communication, 30,
282–289.
Zhang, X., Fung, H., & Ching, B. H. (2009). Age differences in goals:
Implications for health promotion. Aging and Mental Health, 13,
336–348.
Zhao, X. S., Lynch, J. G., & Chen, Q. M. (2010). Reconsidering Baron and
Kenny: Myths and truths about mediation analysis. Journal of
Consumer Research, 37, 197–206.
Ziegler, R., & Diehl, M. (2001). The effect of multiple source information
on message scrutiny: The case of source expertise and likability. Swiss
Journal of Psychology, 60, 253–263.
Ziegler, R., & Diehl, M. (2011). Mood and multiple source characteristics:
Mood congruency of source consensus status and source trustworthiness
as determinants of message scrutiny. Personality and Social Psychology
Bulletin, 37, 1016–1030.
Ziegler, R., Dobre, B., & Diehl, M. (2007). Does matching versus
mismatching message content to attitude functions lead to biased
processing? The role of message ambiguity. Basic and Applied Social
Psychology, 29, 268–278.
arousal. Hillsdale, NJ: Lawrence Erlbaum.
Zuckerman, M., Gioioso, C., & Tellini, S. (1988). Control orientation, self-
monitoring, and preference for image versus quality approach to
advertising. Journal of Research in Personality, 22, 89–100.
Author Index
Allcott, H., 116
Allen, B. P., 216
Allen, C. T., 67
Allen, M., 81, 89, 96, 139, 145, 196, 223, 229–230, 241–242, 246–
247
Allen, M. W., 44, 50
Allen, R., 29
Allport, G. W., 4
Aloe, A. M., 236, 250–251
Alós-Ferrer, C., 81, 95
Al-Rafee, S., 130
Alvaro, E. M., 186, 256
Alvero, A. M., 221
Alwin, D. F., 6, 254
Amaratunga, R., 104
Amass, L., 221
Ambler, T., 142
Amiel, C., 139
Amodio, D. M., 93
Amos, C., 207
Anatol, K. W. E., 189, 193, 210
Andersen, K. E., 189
Andersen, R. E., 221
Anderson, C., 212
Anderson, C. A., 240
Anderson, J. P., 254
Anderson, J. W., 233
Anderson, L., 193
Anderson, L. R., 62
Anderson, N. H., 66, 73
Anderson, P. J., 103
Anderson, R. B., 111
Andersson, E. K., 114
Andreoli, V., 158
Andrews, K. R., 35, 55, 249
Anker, A. E., 218, 236, 250–251, 256
Apanovitch, A. M., 226
Appel, A., 88
Appel, M., 217, 219
Applbaum, R. L., 189, 193, 210
Apsler, R., 153
Arazi, D., 199
Arden, M. A., 262
Areni, C. S., 165
Armitage, C. J., 10, 15, 60, 72, 103–104, 106–107, 109, 113–116,
118, 120, 123, 127–128, 130–131, 140, 174, 240, 262
Armor, D. A., 119
Armour, C., 104
Armstrong, A. W., 221
Armstrong, C. L., 194
Armstrong, J. S., xv
Arnold, K., 75
Arnold, W. E., 211
Aronsky, D., 221
Aronson, E., 13, 26, 87, 89–91, 93, 97, 192
Aronson, J., 93, 262–263
Arriaga, X. B., 114
Arvola, A., 121
Ary, D. V., 111, 261
Asada, K. J., 182
Ash, L., 221
Ashford, S., 111
Askelson, N. M., 232
Aspinwall, L. G., 262
Assael, H., 181, 258
Astrom, A. N., 131
Atkin, C. K., 29
Atkins, A. L., 25
Atkinson, D. R., 203
Atwood, L. E., 108
Audi, R., 4
Aunger, R., 116
Austin, E. W., 207
Austin, J., 221
Averbeck, J. M., 155
Avery, R. J., 158
Aveyard, P., 133, 243
Avila, R. A., 234, 236, 250
Avorn, J., 192
Axsom, D., 38, 154, 159
Aylward, B. S., 256
Baayen, R. H., 186
Baazova, A., 227
Babrow, A. S., 10
Backus, J., 53
Baezconde-Garbanati, L., 217
Bagley, G. S., 69, 74, 166
Bagozzi, R. P., 64–65, 72, 118, 121, 130
Bahamonde, L., 221
Bai, H., 261, 267
Bailey, W., 115
Bailis, D. S., 50
Bakker, A. B., 111, 174, 246
Bakker, M., 182
Bala, H., 121
Baldwin, S., 140
Ball, T. B., 221
Balmford, J., 139
Bamberg, S., 11, 139
Banaji, M. R., 5, 9
Banas, J. A., 152, 249, 255, 258, 265–266
Bandura, A., 100, 128
Banerjee, S. C., 218
Banks, A. J., 85
Bansal, H. S., 123
Baranowski, T., 241
Barclay, L. A., 111
Bareket-Bojmel, L., 9
Baril, J., 241
Barlow, T., 69
Barnett, J. P., 212
Barnett, M. A., 145
Baron, P. H., 154
Baron, R. M., 185
Baron, R. S., 154
Barone, M. J., 67
Barrett, D. W., 108
Barrios, I., 218
Barry, C. L., 223
Bartlett, S. J., 221
Baseheart, J., 180
Basil, D. Z., 95
Basil, M., 232
Baskerville, D., 103
Bassett, R., 249
Bates, B. R., 233
Bates, D. M., 186
Bates, M., 196
Batra, R., 70
Baudhuin, E. S., 189
Baughan, C. J., 109, 128
Baumeister, R. F., 93, 97
Baumgardner, M. H., 181
Baumgartner, H., 130
Bazzini, D. G., 42
Beaman, A. L., 234, 250
Bearden, W. O., 42
Beatty, M. J., 159, 189, 258
Beatty, S. E., 213
Beauchamp, M., 256
Beauchamp, N., 111
Beauvois, J.-L., 93
Beck, V., 219, 220
Becker, C. B., 88
Bedell, B. T., 226
Beentjes, J. W. J., 218
Behnke, R. R., 189
Beisecker, T. D., 17
Bekker, H. L., 220
Bélanger-Gravel, A., 95
Belch, G. E., 63, 67
Belch, M. A., 63
Bell, D. W., 152
Bell, H., 108
Beltramini, R. F., 192
Bem, D. J., 74
Benac, C. N., 146
Bennett, P., 123
Benoit, W. L., 266
Bensley, L. S., 256
Berek, J. S., 110, 128
Berendsen, M., 224, 243
Berent, M. K., 31
Bergin, A. E., 197
Berkanovic, E., 220
Berkel, H. J., 110
Berkowitz, A. D., 108
Berkowitz, N. N., 198, 200–201
Berlo, D. K., 189
Bernard, M. M., 258
Bernhagen, M., 196
Bernstein, G., 110, 128
Berntson, G. G., 74, 225
Berry, M. M., 118, 233
Berry, T. R., 111
Berscheid, E., 201, 204, 211
Betsch, T., 66
Bibby, M. A., 111
Bichard, S. L., 254
Bickel, W. K., 221
Biddle, S. J. H., 112, 116–117
Biek, M., 172
Bieri, J., 25
Biglan, A., 261
Bigman, C. A., 217
Bigsby, E., 237
Bilandzic, H., 218, 220
Bimber, B., 84
Birch, D., 260
Birkimer, J. C., 118, 233
Biswas, A., 207
Biswas, D., 207
Bither, S. W., 258
Bizer, G. Y., 15, 43–44, 54, 168, 169, 171, 174–175
Blake, H., 221
Blanchard, C. M., 60, 116, 123
Blank, D., 244
Blankenship, K. L., 265
Blanton, H., 9
Blau, E. M., 221
Bleakley, A., 117
Bless, H., 150, 152, 158, 255
Block, L. G., 95
Bluemke, M., 9
Blumberg, S. J., 259, 267
Bobocel, D. R., 181
Bochner, S., 197, 211
Bock, D. G., 197
Bodenhausen, G. V., 17, 254
Bodur, H. O., 61, 67
Boen, F., 221
Bogle, T., 38
Bogner, F. X., 103
Bohner, G., 17, 88, 152, 159, 166, 197, 255
Bolan, G., 138
Bolls, P., 233
Bolsen, T., 70, 174
Bond, C. F., Jr., 10, 12
Bond, R. M., 108
Bondarenko, O., 203
Bonetti, D., 104
Boninger, D. S., 31, 119, 219, 222, 257
Bonnes, M., 117
Booth, A. R., 131
Booth-Butterfield, S., 97, 117, 187
Bordley, C., 221
Borenstein, M., 96, 130, 182–183, 242, 248, 250, 264–265, 267
Borgida, E., 30, 38, 171
Borland, R., 133, 139
Bose, K., 232
Boster, F. J., 34–35, 49–50, 55, 212, 230, 249
Botta, R. A., 230
Bouman, M., 220
Bouvy, M. L., 221
Bowers, J. W., 180, 189, 193, 210
Bowman, J. M., 212
Boyson, A. R., 232
Bradac, J. J., 180, 181
Bradley, P. H., 190
Brandberg, Y., 103
Brannick, M. T., 88, 183
Brannon, L. A., 55, 75
Branstrom, R., 103
Brasfield, T. L., 107
Brashers, D., 183, 186–187
Braverman, J., 219
Brechan, I., 84, 96
Breckon, J. D., 139
Brehm, J. W., 78, 80–81, 97, 199–200, 255–256
Brehm, S. S., 255
Breivik, E., 57, 121
Brewer, N. T., 244
Brickell, T. A., 102
Bridle, C., 139
Brinberg, D., 61, 67, 103, 115
Briñol, P., 9, 15, 17–18, 148, 151, 154, 161, 168–169, 173–175, 191,
196–197, 208, 211, 255, 262, 267
Britt, T. W., 38, 44
Brock, T. C., 55, 75, 154–155, 200, 218
Brodsky, S. L., 190
Broemer, P., 10, 226
Brömer, P., 152
Brommel, B. J., 189, 191, 193, 210
Broneck, K., 256
Brotherton, T. P., 119
Brouwers, M. C., 229
Brown, D., 194
Brown, J., 232
Brown, K. M., 157
Brown, S., 260
Brown, S. P., 67, 118, 207
Brown, T. J., 60
Browne, B. A., 42
Brownstein, A. L., 95
Brug, J., 116, 139, 221
Bruner, J. B., 37, 53, 55
Bruning, A. L., 221
Bryan, C., 114
Bryant, J., 194
Bucholtz, D. C., 253
Bucy, E. P., 184, 247
Buday, R., 241
Budd, R. J., 72, 106, 127
Budney, A. J., 221
Bulger, C. A., 111
Buller, D. B., 154
Bullock, J. G., 185
Bundy, C., 256
Bunton, R., 140
Bunyan, D. P., 262
Burack, R. C., 221
Burciaga, C., 261
Burger, J. M., 108, 212, 234–235, 249–250
Burgoon, J. K., 211–212
Burgoon, M., 186, 234, 236, 250, 256
Burgwinkle, P., 241
Burke, C. J., 96
Burkell, J., 211
Burkley, E., 259
Burnell, P., 2
Burney, S., 139
Burnkrant, R. E., 181, 250
Burns, W. J., 213
Bursik, R. J., Jr., 119
Burton, D., 85, 261
Burton, S., 195
Busler, M., 213
Busselle, R., 218, 220
Buswell, B. N., 97
Butera, F., 191, 197, 211
Butler, H. A., 108
Butts, J., 221
Buunk, A. P., 205
Byrne, D., 201
Byrne, S., 158
Campbell, D. T., 9
Campbell, G., 147
Campbell, K. E., 25, 215
Campbell, L. A., 25
Campbell, M. C., 192
Campbell, N. A., 91, 106, 127
Campo, S., 108, 232
Cannarile, S., 232
Cao, D. S., 137
Capitanio, J. P., 38, 44
Cappella, J. N., 60, 104, 106, 116–118, 131, 217, 223, 232, 241, 254
Cappon, P., 131
Car, J., 139
Carcioppolo, N., 232
Card, N. A., xviii, 182–183
Cardenas, M. P., 110
Carey, K. B., 107
Carey, R. N., 247
Carlsmith, J. M., 26, 86–89
Carlsmith, K. M., 38
Carlson, L., 155
Carlston, D. E., 74
Carnot, C. G., 31
Carpenter, C. J., 35, 42–43, 54–55, 182, 191, 240, 249, 256
Carpenter, J. M., 219–220
Carpenter, K. M., 240
Carrion, M., 232
Carroll, D., 121
Carroll, M. V., 241
Carron, A. V., 103, 109, 112, 123
Carrus, G., 117
Carter, K. D., 232
Carver, C. S., 227
Case, T., 233
Casey, M., 196
Casey, S., 160
Castle, H., 111
Catalan, J., 235–236
Ceci, S. J., 254
Celuch, K., 42, 50
Cerully, J. L., 217, 262
Cesario, J., 54, 227, 245
Chaiken, S., 4, 11–12, 17, 19, 34–35, 49–50, 55, 61, 66–67, 69, 72–
73, 76, 85, 93, 104, 106, 123, 149, 154, 158–159, 168, 172–174, 181,
190, 192–193, 198–199, 204–205, 226, 262, 267
Chan, C. W., 241
Chan, D. K.-S., 109, 123
Chandran, S., 244
Chang, C., 159, 227, 253
Chang, C.-T., 226
Chang, M. J., 194
Chantala, K., 244
Chapman, G. B., 113
Chapman, J., 115, 130, 262
Chartrand, T., 250
Chase, A., 88
Chatterjee, J. S., 217
Chattopadhyay, A., 212
Chatzisarantis, N. L. D., 102–103, 112–113, 116–117
Chavez, L. R., 111
Chebat, J.-C., 95, 197–198
Chelminski, A., 221
Chen, H. C., 260
Chen, M. F., 97, 120
Chen, M. K., 81, 95
Chen, Q. M., 185
Chen, S., 149, 158
Chen, T., 111
Chen, X., 166
Cheung, C. M.-Y., 208
Cheung, S. F., 109, 123
Chien, Y. H., 226
Childs, K. E., 219–220
Chin, P. P., 118–119
Ching, B. H., 253
Chiou, J. S., 194
Chiou, W. B., 88
Chiu, C., 227
Cho, H., 232–233
Chong, D., 70
Christenfeld, N. J. S., 159
Chu, G. C., 223, 248
Chuang, Y. C., 31
Chuinneagáin, S. N., 123, 223
Chung, A. H., 215, 218–219
Chung, M. H., 221
Chung, S., 26, 66, 154
Churchill, S., 115
Cialdini, R. B., 75, 108, 122, 126–127, 158–159, 173, 235–237, 240,
249, 254
Ciao, A. C., 88
Ciccarello, L., 103, 126
Clack, Z. A., 242
Claiborne, C. B., 38
Clapp, J. D., 108
Clark, E. M., 241, 253
Clark, H. H., 186
Clark, J. K., 26, 161, 197, 208
Clark, J. L., 220
Clark, R. A., 31, 212
Clarkson, J. J., 75
Clary, E. G., 39, 42, 46–47
Clawson, R. A., 70
Claypool, H. M., 154, 168, 175
Cleeland, L., 261
Cobb, M. D., 32
Cody, M. J., 219–220
Cohen, G. L., 93, 262–263
Cohen, J., 219
Cole, C. M., 234, 250
Cole, H. P., 217
Collins, B. E., 17, 19, 24, 30, 34, 37, 93
Collins, W. B., 232
Colon, S. E., 254
Combs, D. J. Y., 192
Compton, J. A., 258, 266
Coney, K. A., 197
Conner, M., 10, 15, 60, 72, 75, 95, 103–104, 106, 109, 113–121, 123,
127–128, 130–131, 143, 174, 220
Considine, J. R., 256
Conville, R. L., 64
Cook, A. J., 131
Cook, F. L., 70
Cook, T. D., 85
Cooke, R., 103–104, 112–113, 116
Cooper, H., 182
Cooper, J., 78, 87, 89, 91, 93, 97, 199–200, 205, 267
Cooper, Y., 115
Copeland, J., 39, 42, 47
Corbin, S. K. T., 261
Corby, N. H., 111
Corfman, K., 89
Corker, K. S., 227, 245
Costiniuk, C., 244
Cote, N. G., 11
Côté, S., 152
Cotte, J., 233
Cotton, J. L., 84–85
Coulter, R. A., 204, 233
Counselman, E., 199
Coupey, E., 61, 67
Courchaine, K., 260
Courneya, K. S., 60, 109, 113, 116
Courtright, J. A., 180
Cousins, S., 111
Coveleski, S., 261
Covello, V. T., 193
Covey, J., 226
Cox, A., 254
Cox, A. B., 221
Cox, A. D., 244
Cox, B. S., 221
Cox, D., 244
Cox, D. J., 221
Cox, E., 221
Craciun, C., 143
Craig, A., 123, 223
Craig, J. T., 256
Craig, S., 261
Craig, T. Y., 265
Crain, A. L., 13, 90–91
Cramer, R. J., 137, 190
Cramp, A., 111
Crandall, C. S., 38, 44
Crane, L. A., 110, 128
Crano, W. D., 10, 186
Crawley, F. E., III, 123
Creek, T. L., 219–220
Crites, S. L., Jr., 10, 67
Crocker, J., 263
Crockett, W. H., 64, 70, 76
Cron, W. L., 118
Cronen, V. E., 64
Crouse, J. C., 241
Crowley, A. E., 193, 223
Croy, G., 116
Cruz, M. G., 240
Cuijpers, P., 204
Cuite, C. L., 138
Cullen, M., 226
Cunningham, W. A., 9
Cushing, C. C., 256
Czasch, C., 115
Czerwinski, A., 196
Czuchry, M., 240
D’Alessio, D., 81
Daamen, D. D. L., 195, 211
Dagot, L., 249
Dahl, D. W., 212, 218, 257
Dahl, J., 194
Dahlstrom, M. F., 219
Dal Cin, S., 217, 219
Dale, A., 221
Daley, A. J., 103
Dancy, B., 256
Danks, J. H., 230
Dansereau, D. F., 240
Darby, B. L., 235–236
Dardis, F. E., 226, 241
Darke, P. R., 159, 168
Darker, C. D., 111, 121, 131
Darley, J. M., 199–200, 205
Darley, S. A., 87
Das, E., 107, 171, 255, 262–263
Das, N., 207
Dashti, A. E., 130
Davenport, J. W., Jr., 200–201
Davidson, D. J., 186
Davies, A., 233
Davis, A. K., 85
Davis, F. D., 121
Davis, J. M., 208
Davis, K. C., 254
Davis, L. L., 42
Davis, M. K., 189
Davis, R. E., 253–254
Davis, R. M., 93
de Bakker, D. H., 221
de Bruijn, G. J., 116
De Bruijn, G.-J., 244
de Cremer, D., 233
de Graaf, A., 218–219, 241
de Hoog, N., 229–230, 232, 242, 247
de Houwer, J., 8, 17
de Nooijer, J., 139
de Ridder, D. T. D., 114–116
de Ruyter, K., 218–219, 241
de Vet, E., 139
de Vord, R. V., 207
de Vries, H., 115, 118, 122, 226, 233, 244, 254, 261
de Vries, N. K., 14, 57, 62, 70, 72, 118–119
de Vries, P., 116
de Vroome, E. M. M., 115
de Wit, J. B. F., 107, 114–116, 229–230, 232, 242, 247
de Young, R., 97
de Zwart, O., 120
Deale, A., 111
Dean, L., 60
Dean, M., 121
Deatrick, L. M., 256
Deaux, K. K., 25
DeBono, K. G., 37, 39, 42–44, 46, 49, 53, 130, 155, 253
DeCesare, K., 249
Decker, L., 196
DeJong, W., 108, 234
del Prado, A., 212
Delage, G., 95
Delespaul, P. A. E. G., 97
Delia, J. G., 64, 70, 189, 202–203, 210
DelVecchio, D., 89
Demaine, L. J., 108
Dembroski, T. M., 216
Denizeau, M., 78
Dent, C. W., 261
Derose, S. F., 221
Desharnais, R., 109
Detweiler, J. B., 226
Devine, D. J., xv
Dew, D. E., Jr., 204
Dexheimer, J. W., 221
Dholakia, R., 197
Dholakia, R. R., 197, 211–212
Di Dio, P., 114
Di Noia, J., 134–135, 145
Diamond, G. A., 32
Diaz, Y. E., 107
Dibble, T., 114
Dibonaventura, M. D., 113
Dickau, L., 113
Dickel, N., 17
Dickens, C., 256
Dickerson, C. A., 91
DiClemente, C. C., 132–133
Diehl, M., 10, 44, 152, 171, 208
Dijkstra, A., 262
Dijkstra, P., 205
Dillard, A. J., 217
Dillard, J. P., 103–104, 123, 131, 232–234, 236–237, 247, 249–250,
255–256, 264
Dilliplane, S., 233
Dillon, W. R., 211
Dillworth, T., 108
Dingus, T. A., 110
Ditto, P. H., 230
Dittus, P., 72
DiVesta, F. J., 74
Dobre, B., 44
Dolan, R. J., 81
Dolich, I. J., 258
Doll, J., 11, 73, 106
Donaldson, S. I., 261
Donnelly, J. H., Jr., 83
Donohew, L., 254
Donovan, R. J., 230
Doob, A. N., 88–89
Doosje, B. J., 25
Dorian, K., 108
Dormandy, E., 103, 115, 130
Dowd, E. T., 256
Downs, E., 219–220
Doyle, S. R., 111
Drevland, G. C. B., 194
Driver, B. L., 57, 61
Drolet, A., 253–254
Druckman, J. N., 70, 174
Druley, J. A., 230
Drummond, A. J., 28
Dryden, J., 119, 233
Duckworth, K. L., 168
Dugan, E., 241
DuMouchel, W., 221
Duncan, S. C., 111
Duncan, T. E., 111
Dunker, K., 230
Dunlop, S. M., 218–219
Duran, A., 109
Durand, J., 103, 115
Durantini, M. R., 204
Dutta-Bergman, M. J., 42
Dwan, K., xviii, 96
Earleywine, M., 93
Eckes, T., 10–11, 112
Edell, J. A., 181
Edmunds, J., 111
Edwards, C. S., 222
Edwards, K., 75
Edwards, S. M., 256
Egloff, B., 233
Ehrlich, D., 81
Eibach, R. P., 240
Eid, M., 262
Eilertsen, D. E., 194
Einwiller, S., 159
Eisend, M., 193, 198, 207, 211, 223, 243–244
Eisenstadt, D., 88
Ekstein, G., 25
El-Alayli, A. G., 118–119
Elder, J. P., 261
Elek, E., 261
Elias, L. J., 113
Ellemers, N., 195, 211
Elliot, M. A., 104
Elliott, M. A., 109, 115, 121, 128
Elliott, R., 62, 72
Elliott, S. M., 194
Ellsworth, R., 219–220
Elms, A. C., 87
Emington, J., 95
Enemo, I., 194
Engstrom, E., 191
Enguidanos, S. M., 111
Ennett, S. T., 261
Ennis, R., 38–39, 46, 74
Epp, L. J., 113
Epstein, E., 207
Epton, T., 262–263
Erb, H.-P., 159, 166, 168, 197
Ernst, J. M., 149
Ernsting, A., 143
Erwin, D. O., 241
Escalas, J. E., 218, 240
Esses, V. M., 72, 152, 159
Essex, M., 185
Estabrooks, P., 109, 123
Estambale, B., 221
Evans, A. T., 161, 197, 208
Evans, L. M., 172
Evans, R. I., 216
Evers, K. E., 132, 133, 135, 144–145
Everson, E. S., 103
Eves, F. F., 111, 121, 131
Ewoldsen, D. R., 15
Eyal, K., 208
Fabrigar, L. R., 10, 15, 67, 75, 84, 108, 161, 174, 212
Fagerlin, A., 217
Fairchild, A. J., 185
Fairhurst, A., 42
Falcione, R. L., 189, 193, 210
Faller, C., 261
Falomir-Pichastor, J. M., 191, 197, 211
Fariss, C. J., 108
Farrelly, M. C., 254
Fatoullah, E., 204–205
Fazili, F., 219
Fazio, R. H., 9, 11–12, 17, 57, 70, 198
Feeley, T. H., 236, 250–251
Feiler, D. C., 256
Fein, S., 36
Feinstein, J. A., 154, 172
Feldman, L., 84
Fennis, B. M., 142, 158, 171, 250
Fenson-Hood, K., 230
Ferguson, C. J., xviii
Ferguson, E., 119, 233
Fern, E. F., 234, 236, 250
Fernandez, N. C., 78, 91, 93
Fernandez-Medina, K., 218
Festinger, L., 76–77, 82, 86–87
Fetherstonhaugh, D., 160
Feufel, M. A., 110
Fiedler, K., 9
Field, A. P., 182
Fielding, A., 133
Figgé, M., 237
Filiatrault, P., 197–198
Fine, B. J., 215
Fink, E. L., 26, 34, 66
Finkelstein, B., 221
Finlay, K. A., 103, 123
Finlayson, B. L., 91
Fiore, C., 134
Firestone, I., 199
Fischer, A. R. H., 122
Fischer, D. L., 160
Fischer, J., 81
Fischer, P., 81
Fishbein, M., 4, 6, 10, 17, 26, 56–57, 60, 62, 64–68, 72, 74, 99, 103–
104, 106–107, 113, 116–117, 120, 122–123, 126–127, 130–131, 138,
166, 185, 254
Fisher, D., 261
Fisher, D. G., 139
Fisher, J. D., 111, 246
Fisher, W. A., 111
Fiske, S. T., 67
Fitzsimons, G. J., 95, 249, 256
Flanagin, A. J., 208
Flay, B. R., 85, 127, 261
Fleming, D., 4
Fleming, J. A., 50
Fleming, J. K., 111
Fleming, M. A., 67, 204
Fleming, M. T., 108
Fleming, S. M., 81
Flood, M. G., 115
Flewelling, R. L., 261
Floyd, D. L., 229, 242, 246
Flynn, D., 140
Foerg, F. E., 221
Folda, L., 232
Fong, G. T., 113, 185, 219
Fontaine, K. R., 221
Fontenelle, G. A., 186
Foregger, S., 157, 163
Forehand, M., 159
Fornara, F., 117
Forquer, H., 60
Fowler, J. H., 108
France, C. R., 217
Franckowiak, S. C., 221
Frangos, J. E., 221
Frank, C. A., 256
Frank, L. B., 217, 219
Franks, J., 212
Fraser, S. C., 234, 250
Frazier, B., 219
Freedman, J. L., 26, 88–89, 234, 250, 260
Freijy, T., 91
Freling, T. H., 89
French, D. P., 74, 103–104, 111–112, 116, 121, 131
Freres, D., 60
Frewer, L. J., 190
Frey, D., 81, 84–85
Fried, C. B., 13, 18, 89–91, 97
Friedrich, J., 160
Fry, J. P., 221
Fuchs, R., 142
Fung, H., 253
Garner, R., 212
Garrett, R. K., 85
Garst, J., 154
Gasco, M., 151
Gass, R. H., 2, 17
Gastil, J., 37, 39, 44–45, 159
Gaston, A., 111
Gaudreau, P., 114
Gawronski, B., 17, 95
Gaylord, G., 25
Gaziano, C., 190
Geers, A. W., 254
Gelmon, L. J., 221
Gerard, H. B., 80
Gerber, A. S., 95
Gerend, M. A., 103, 226
Germain, M., 95
Gerrans, P., 116
Gerrard, M., 108
Geurts, D., 111
Ghadiri, A., 221
Gibbons, F. X., 108
Gibson, L., 60
Gierl, H., 75
Giles, M., 104, 123
Gillet, V., 192
Gillett, R., 182
Gillette, J. C., 204
Gilovich, T., 82
Gimotty, P. A., 221
Ginis, K. A. M., 111, 130
Ginsburg, G. P., 189
Gioioso, C., 40, 42
Girvin, H., 139–140
Gitta, M. Z., 181
Glanz, A., 109
Glasgow, R., 261
Glasman, L. R., 10–12
Glazer, E., 194
Gleicher, F., 119, 219, 222, 257
Glik, D., 220
Glor, J., 38, 44
Glynn, C. J., 108
Glynn, R. J., 192
Göckeritz, S., 122
Godbold, L. C., 266
Godin, G., 65, 95, 103, 109, 112–113, 123, 127–128, 131
Godinez, M., 111
Goei, R., 212, 232
Goethals, G. R., 203
Goldhagen, J., 221
Goldman, M., 237
Goldman, R., 150, 153, 158, 195
Goldsmith, R. E., 190, 207
Goldstein, M. G., 134
Goldstein, N. J., 108, 122, 127, 237
Goldstein, S., 219
Gollust, S. E., 223
Gollwitzer, M., 9
Gollwitzer, P. M., 95, 114, 116, 216
Gosling, P., 78
Gonzalez, J., 217, 255
Good, A., 267
Goodall, C. E., 8
Gorassini, D. R., 235, 250
Gordon, L., 220
Gorely, T., 221
Gorman, D. R., 261
Goudas, M., 103
Gould, S. J., xvii
Gourville, J. T., 244
Goyder, E., 131
Grady, K., 260
Graham, J. W., 261
Graham, S., 118, 120
Granberg, D., 19, 21, 23, 25, 28, 215
Grandpre, J. R., 256
Granić, Đ.-G., 81
Grant, A. M., 256
Grant, H., 54
Grant, N. K., 212
Grasmick, H. G., 119
Green, B. F., 7
Green, D. P., 185
Green, L. G., 69
Green, M. C., 154, 218–220, 241
Green, N., 138–139
Greenbank, S., 103
Greenberg, B. S., 211–212
Greenberg, J., 97, 181
Greene, K. L., 103, 186, 218
Greenslade, J. H., 97
Greenwald, A. G., 9, 46, 55, 95, 130, 181
Greenwood, K. M., 133
Gregory, G. D., 253
Gregory, W. L., 240
Greitemeyer, T., 81
Gremmen, F., 186
Grewal, D., 38
Griffin, M. P., 190
Griffin, S., 74, 121
Griskevicius, V., 75, 108, 122, 127, 249, 255
Grofman, B., 25
Grohmann, B., 95
Grossbard, J. R., 261
Gruber, V. A., 11
Gruner, C. R., 194
Guadagno, R. E., 254
Guéguen, N., 212, 240, 249, 256
Guillory, J. E., 158
Gunnell, J. J., 254
Gunning, J., 110, 128
Gunther, A. C., 211
Guo, B. L., 133
Gurwitz, J. H., 241
Gutkin, T. B., 111
Gutscher, H., 190
Gutteling, J., 203
Guttman, I., 81
Habyarimana, J., 221
Hackman, C. L., 104
Hackman, J. R., 62
Haddock, G., 9, 35, 67, 72, 74–75, 219
Hagen, K. M., 111
Hagger, M. S., 54, 103, 112–113, 116–117, 222
Hagtvet, K. A., 123
Hahn, K. S., 84
Hahn, U., 66
Hailey, B. J., 181
Hale, J. L., 103, 186
Hale, S. L., 236, 250
Hall, A., 218
Hall, J. R., 154, 186, 256
Hall, K. L., 134–135
Hall, P. A., 113, 115
Hallam, J. S., 260
Ham, S. H., 60
Hamilton, D. L., 74
Hamilton-Barclay, T., 119, 233
Handley, I. M., 254
Hankins, M., 111
Hansen, W. B., 261
Hänze, M., 152
Hardeman, W., 74, 104, 121
Harinck, F., 195, 211
Harkins, S. G., 149
Harland, P., 120, 131
Harlow, L. L., 134
Harmon, R. R., 197
Harmon-Jones, C., 93
Harmon-Jones, E., 78, 93, 97
Harnish, R. J., 44
Harries, C., 190
Harrington, N. G., 221, 242
Harris, A. J. L., 66
Harris, M. S., 146, 267
Harris, P. R., 131, 262–263
Harris, R. J., 145
Harrison, T. R., 221
Hart, P. S., 158
Hart, W., 84, 96
Harte, T., 241
Harterink, P., 120
Hartmann, T., 171, 255
Hartwick, J., 103, 112
Harvey, K., 108
Harvey, N., 190
Harvey, O. J., 20, 26
Hasper, P., 111
Hass, J. W., 69, 74, 166
Hass, R. G., 201, 223, 260
Hassandra, M., 103
Hastings, P., 220
Hatch-Maillette, M. A., 111
Hatzigeorgiadis, A., 103
Hatzios, M., 38
Haugen, J. A., 39, 42, 46–47
Haugtvedt, C. P., 151–152, 154, 163, 255, 265
Hausenblas, H. A., 103, 112
Hauth, A. C., 107
Havermahl, T., 221
Hayes, A. F., 185
Hazlewood, J. D., 159
He, J., 121
Head, K. J., 221, 242
Heald, G. R., 60, 117
Heatherton, T. F., 93
Hecht, M. L., 220, 261
Heckler, S. E., 186
Hedderley, D., 65, 190
Hedeker, D., 127
Hedges, L. V., 96, 130, 182–183, 250
Heene, M., xviii
Heesacker, M., 154, 181
Hefner, D., 9
Heiphetz, L., 5
Heitland, K., 88
Hembroff, L., 29
Herrmann, A., 82
Henard, D. H., 89
Henderson, J. E., 199–200, 205
Henderson, M. D., 149
Hendricks, L. A., 108
Hendricks, P. S., 256
Henley, N., 230
Hennings, S. J., 74, 121
Herek, G. M., 38–40, 44
Herr, P. M., 67, 74, 95
Herrin, J., 244
Herzog, T. A., 140
Heslin, R., 259
Hesse, S. J., 218
Hether, H. J., 220
Hettema, J. E., 256
Hetts, J. J., 119
Hevey, D., 123, 223
Hewgill, M. A., 191
Hewitt, E. C., 181
Hewstone, M., 9
Hia, K., 221
Hibbert, S., 233
Hidalgo, P., 217, 241
Higgins, A. R., 114
Higgins, E. T., 54, 227
Higgins, J. P. T., 96, 130, 182–183, 250
Higgins, S. T., 221
Highhouse, S., 181
Hill, A. L., 220
Hilligoss, B., 190
Hilmert, C. J., 159
Himmelfarb, S., 199
Hinson, R. E., 123
Hinyard, L. J., 220, 241
Hirsch, H. A., 254
Hirsh, J. B., 254
Hitchcock, K., 224, 243
Hitsman, B., 224, 243
Ho, S. K., 261
Hodson, G., 159
Hoegg, J., 212
Hoeken, H., 111, 165, 218, 254
Hoffmann, K., 66
Hogg, M. A., 130
Høie, M., 120
Holbert, R. L., 258, 266
Holbrook, M. B., 61–62, 73, 181
Holland, A., 203
Holmes, G., 207
Holmes, J., 208
Holmes, K., 241, 262
Holzemer, W. L., 212
Homer, P. M., 70
Honeycutt, E. D., 207
Hong, S., 241
Hong, T., 208
Hong, Y.-H., 266
Hopfer, S., 219
Horai, J., 204–205
Horberg, E. J., 114
Horcajo, J., 151
Hornik, R. C., 60, 106, 116–117, 131, 223
Hornikx, J., 222, 253–254
Horswill, M. S., 123
Hospers, H. J., 120
Hossain, S. Z., 44
Hosseinzadeh, H., 44
Householder, B. J., 103
Houston, T., 241
Hove, T., 103, 104
Hovland, C. I., 19–20, 22, 25, 29, 34, 74, 215
Howard, C., 190
Howard, D. J., 181, 208, 212, 240
Howard, G., 196
Howe, B. L., 111
Howe, D., 224, 243
Howell, J. L., 262
Hox, J. J., 114
Hoxworth, T., 138
Hoyer, W. D., 223
Hoyle, R. H., 254
Hoyle, S., 123
Hoyt, W. T., 211
Hsieh, G., 192
Hu, Y. F., 208
Huang, G. C., 219–220
Hubbell, A. P., 230
Hubbell, F. A., 111
Huber, F., 82
Hübner, G., 103, 120
Hudson, S. E., 192
Huettl, V., 75
Huge, M. E., 108
Hughes, M., 60
Huh, J. H., 75
Huhmann, B. A., 119, 174
Hukkelberg, S., 131
Hukkelberg, S. S., 123
Hulbert, J. M., 62
Hullett, C. R., 43, 49–50, 171, 255, 264
Hummer, J. F., 261
Hunn, B. P., 110
Hunt, S. D., 83, 193
Hunter, J. E., 10–11, 112, 234, 236, 250
Hunter, R., 66
Hurling, R., 113
Hurwitz, J., 25
Hurwitz, S. D., 191
Huskinson, T. L. H., 74–75
Hustinx, L., 165, 219
Huston, T. L., 201
Hutchings, V. L., 85
Hutchison, A. J., 139
Hwang, Y., 155
Hyde, J., 111
Hyde, M. K., 103
Ito, T. A., 9
Ivancevich, J. M., 83
Ivanov, B., 258
Iyengar, S., 84
Izuma, K., 81
Johnston, L. H., 139
Johnston, P. L., 118, 233
Johnstone, P. M., 232
Jonas, K., 10, 152
Jones, A., 155
Jones, E. E., 89
Jones, J. J., 108
Jones, J. L., 223
Jones, M. C., 220
Jones, R. A., 199–200, 267
Jones, R. T., 261
Jones, S., 216
Jones, S. P., 186
Joormann, J., 186
Jordan, A., 117
Jordan, B., 196
Jordens, K., 93
Joule, R. V., 89, 93
Joyce, L., 256
Judah, G., 116
Judd, C. M., 5, 28, 186
Julka, D. L., 43
Jun, S. Y., 75
Jung, J. M., 75
Jung, T. J., 60, 117
Juon, H. S., 232
Kahneman, D., 244
Kang, Y.-S., 159, 199, 205
Kantola, S. J., 91, 106, 127
Kao, C. F., 151, 172
Kao, D. T., 240
Kaplan, C. R., 110, 128
Kaplowitz, S. A., 26, 34
Karan, D., 207
Karanja, S., 221
Kardes, F. R., 181
Kariri, A., 221
Karlan, D., 221
Kashima, Y., 218–219
Kasmer, J., 28
Kath, L. M., 111
Kattapong, K. R., 204
Katulak, N. A., 254
Katz, D., 35–37, 40, 46, 48–49, 51, 53
Kaufmann, M., 66
Kaur, B., 139
Kay, L. S., 111
Kaysen, D. L., 108
Keane, J., 103
Keaveney, S. M., 82
Keegan, O., 61, 127
Keeling, R. P., 108
Keer, M., 121
Kellar, I., 111
Kellaris, J. M., 75
Keller, P. A., 242
Keller, P. S., 192
Kellermann, A. L., 242
Kelly, B. J., 223
Kelly, J. A., 107
Kelly, K. J., 207
Kelly, M., 25
Kelly, R. J., 97
Kelso, E., 240
Keltner, D., 97
Kendzierski, D., 10, 12, 113
Keng, C. J., 81
Kennedy, C., 212
Kennedy, M. G., 220
Kennedy, N. B., 267
Kenny, D. A., 28, 185–186
Kenrick, D. T., 75
Kenworthy, J. B., 93
Kerin, R. A., 208, 212
Kerr, G. N., 131
Kerr, P. M., 159, 199, 205
Kesek, A., 9
Kesselheim, A. S., 192
Kessels, L. T. E., 231, 233
Kidder, L. H., 9
Kidwell, B., 123
Kiernan, M., 185
Kiesler, C. A., 19, 24, 30, 34, 37
Killeya, L. A., 69, 165–166
Kilmarx, P. H., 220
Kilmer, J. R., 108
Kim, A., 38
Kim, E. S., 217
Kim, H. K., 219
Kim, H. S., 217, 219
Kim, M.-S., 10, 11, 112
Kim, S., 261
Kimani, J., 221
Kimball, A. B., 221
Kimble, C. E., 203
Kimble, D. L., 111
Kinder, D. R., 67
King, A. J., 221
King, B., 241
King, M., 28
King, S. W., 200, 203
King, W. R., 121
Kinmonth, A. L., 74, 104, 121
Kinsey, K. A., 119
Kirkham, J. J., xviii, 96
Kirmani, A., 192
Kiviniemi, M. T., 121
Klahn, J. A., 139
Klass, E. T., 93
Klassen, M., 145
Klein, K. A., 122–123
Klein, W. M. P., 217, 262–263
Kleine, S. S., 67
Klein-Selski, E., 261
Klem, M. L., 241
Klentz, B., 234, 250
Klimmt, C., 9
Klock, S. J., 200
Knapp, T. R., 64
Knäuper, B., 115, 240
Knight, K. M., 256
Knowlden, A. P., 104
Knowles, E. S., 257
Koehler, J. W., 259
Koerner, A. F., 266
Koestner, R., 114
Kok, G., 103, 112, 120, 231–233, 248
Konkel, J., 254
Koob, J. J., 107
Kootsikas, A., 261
Kopperhaver, T., 220
Koring, M., 114–115, 143
Kosmidou, E., 103
Koster, R., 81
Kothe, E. J., 91, 104
Kotowski, M. R., 232
Kotterman, D., 103
Kovac, V. B., 123
Kraemer, H. C., 185
Kraft, J. M., 219–220
Kramer, A. D. I., 108
Krantz, L. H., 115
Kraus, R., 221
Kraus, S. J., 10–11
Kraut, R. E., 192
Krcmar, M., 186
Kreausukon, P., 114
Kremers, S. P. J., 116, 221
Kreuter, M. W., 219–220, 241, 253
Krieger, J. L., 223, 232, 261
Krishnamurthy, P., 257
Kroese, F. M., 116
Kroeze, W., 116
Kromrey, J. D., 183
Krosnick, J. A., 5, 15, 18, 28, 31, 171, 174, 254
Kruger, M. W., 159, 189
Kruglanski, A. W., 72, 82, 166–169, 173, 175
Kuan, K. K. Y., 208
Kuhlmann, A. K. S., 219–220
Kuiper, N. M., 118–119
Kujawski, E., 196
Kulik, J. A., 108, 159
Kumkale, G. T., 158, 196
Kupfer, D. J., 185
Kvedar, J. C., 221
Kwak, L., 221
Kyriakaki, M., 222
Lange, J. E., 108
Lange, R., 26
Langlois, M. A., 260
Langner, T., 198, 207
LaPiere, R. T., 17
Lapinski, M. K., 232
Larimer, M. E., 108
Larkey, L. K., 217, 220, 254
Laroche, M., 197–198
Larsen, E., 120
LaSalvia, C. T., 108
Lasater, T. M., 216
Latimer, A. E., 111, 130, 226–227, 254
Laufer, D., 253
Lauver, D., 64
Lavidge, R. J., 142
Lavin, A., 221
Lavine, H., 30, 42–43, 46, 171
Lavis, C. A., 171, 208, 255
Lawton, R. J., 75, 103, 106, 115–117, 121, 127–128
Le Dreff, G., 212
Leader, A. E., 217, 223
Learmonth, M., 111
Leary, M. R., 223
Leavitt, A., 53
Leavitt, C., 197, 200, 211–212
LeBlanc, B. A., 97
Lechner, L., 115, 118
LeComte, C., 109
Lee, C. M., 108
Lee, E.-J., 157
Lee, G., 256
Lee, J. E., 108
Lee, J. H., 256
Lee, K. H., 130
Lee, M. J., 254
Lee, M. S., 207
Lee, S., 221
Lee, T., 208
Lee, W., 266
Lee, W. J., 256
Leeuwis, C., 219
Lehavot, K., 212
Lehmann, D. R., 242, 256
Leibold, J. M., 118–119
Leippe, M. R., 88, 181
Lemert, J. B., 189
Lemus, D. R., 208
Lennon, S. J., 42
Leone, L., 118
Lepage, L., 109
Lerman, C., 60, 217
Leshner, G., 233
Lester, R. T., 221
Leventhal, H., 143, 216, 248
Levin, I. P., 244
Levin, K. D., 30, 69, 163, 165–166, 171
Levine, T. R., 157, 163, 182, 231
Levinger, G., 201
Levitan, L. C., 157
Levy, B., 199
Levy, D. A., 17
Lewis, C., 108
Lewis, H., 123
Lewis, M. A., 108
Lewis, S. K., 235–236
Li, H. R., 256
Liang, Y. H., 95
Liao, T.-H., 81
Libby, L. K., 240
Liberman, A., 262, 267
Lichtenstein, E., 261
Lieberman, D. A., 241
Likert, R., 7
Lim, H., 212
Lim, S., 208
Lin, H.-Y., 25
Lin, J.-H., 241
Lin, W.-K., 258, 266
Lindberg, M. J., 84, 96
Linder, D. E., 89, 223, 240
Lindow, F., 66
Lindsey, L. L. M., 212, 241
Linn, A. J., 221
Linn, J. A., 257
Linnemeier, G., 232
Lipkus, I. M., 69, 262
Lippke, S., 114–115, 137, 139, 143
Litman, J., 115
Littell, J. H., 139–140
Liu, K., 265
Liu, W.-Y., 230
Livi, S., 97, 131
Loersch, C., 154
Loewenstein, G., 182
Loh, T., 208
Longdon, S., 121, 131
Longoria, Z. N., 114
Lorch, E. P., 254
Lord, C. G., 10, 12, 240
Lord, K. R., 207
Losch, M., 232
Louis, W. R., 103
Love, G. D., 220
Lowe, R., 121
Lowrey, T. M., 43, 53
Lu, A. S., 241
Luce, M. F., 240
Luchok, J. A., 191
Lukwago, S. N., 253
Lundell, H., 219
Lunney, C. A., 108
Lupia, A., 198
Luszczynska, A., 111
Lutz, R. J., 67, 69, 74, 165
Luyster, F. S., 181
Luzzo, D. A., 111
Lycett, D., 243
Lynch, J. G., 185
Lynn, M., 75, 192
Lyon, J. E., 138
Lyon-Callo, S. K., 232
Lyons, E., 97, 103, 131
Mabbott, L., 262
MacDonald, T. K., 15, 174
MacDonnell, R., 257
Machleit, K. A., 67
Mack, D. E., 103, 112
MacKenzie, S. B., 67, 151
Mackie, D. M., 152, 154, 204, 233, 255
MacKinnon, D. P., 185
MacKintosh, A. M., 104, 117
MacNair, R., 260
MacRae, S., 111
Madden, T. J., 73, 109
Maddux, J. E., 205–206
Magana, J. R., 111
Magee, R. G., 254
Maggs, J. L., 260–261
Magnini, V. P., 207
Magnussen, S., 194
Maher, L., 123, 223
Mahler, H. I. M., 108
Mahloch, J., 106
Maio, G. R., 9, 35, 39, 48–51, 152, 159, 258
Majeed, A., 139
Major, A. M., 108
Makredes, M., 221
Malhotra, N. K., 67
Mallach, N., 143
Mallett, J., 104
Malloy, T. E., 111
Mallu, R., 11
Mallya, G., 60, 117
Maloney, E. K., 232
Malotte, C. K., 138
Maltarich, S., 230
Manchanda, R. V., 174
Mangleburg, T. F., 38
Manis, M., 25
Mann, T., 181, 227
Mannetti, L., 82, 97, 131, 166, 168
Manning, M., 103, 112, 116–117, 122–123, 127, 129, 131
Manson-Singer, S., 131
Manstead, A. S. R., 62, 70, 72, 103, 109, 116, 118–120, 257
Manzur, E., 217, 241
Mara, M., 219
Marcus, A. C., 69, 110, 128
Marcus, B. H., 134
Margolis, P., 221
Marin, B. V., 116, 253
Marin, G., 116, 253
Mark, M. M., 171, 208, 255
Marks, L. J., 181
Marlino, D., 181
Marlow, C., 108
Marquart, J., 211
Marra, C. A., 221
Marsh, K. L., 43
Marston, A., 212
Marteau, T. M., 103, 111, 115, 130
Martell, C., 29
Martell, D., 122–123
Marti, C. N., 88
Martin, B. A. S., 240
Martin, J., 114
Martinelli, E. A., Jr., 111
Martinie, M. A., 89
Marttila, J., 139
Maschinski, C., 219–220
Mason, H. R. C., 261
Mason, T. E., 103
Masser, B., 217
Mather, L., 139
Matheson, D. H., 123
Mathios, A. D., 158
Maticka-Tyndale, E., 131, 212
Mattern, J. L., 108
Matterne, U., 103, 126
Matthes, J., 203
Maurissen, K., 221
Maxfield, A. M., 232
May, K., 196
Mayle, K., 262
Mazmanian, L., 159
Mazor, K. M., 241
McAdams, M. J., 194
McBride, J. B., 253
McCallum, D. B., 193
McCann, R. M., 208
McCaslin, M. J., 9, 154
McCaul, K. D., 138
McClenahan, C., 104, 118
McClintock, C., 40
McCollam, A., 240
McConnell, A. R., 118–119
McConnell, M., 221
McCroskey, J. C., 189, 191, 193, 210–211, 259
McCubbins, M. D., 198
McDermott, D. T., 247
McDonald, L., 230
McEachan, R. R. C., 75, 103, 106, 116–117, 121, 127–128
McFadden, H. G., 224, 243
McFall, S., 85
McGee, H., 61, 127
McGilligan, C., 118
McGinnies, E., 197, 210
McGowan, L., 256
McGrath, K., 190
McGuffin, S. A., 241
McGuire, W. J., 180, 215, 258–259, 266
McIntosh, A., 154
McIntyre, P., 145
McKay-Nesbitt, J., 174
McKee, S. A., 123
McKimmie, B. M., 97
McLarney, A. R., 254
McLeod, J. H., 261
McMahon, T. A., 91
McMath, B. F., 137
McMillan, B., 120, 123, 131
McMillion, P. Y., 111
McNamara, M., 241
McNeil, B. J., 244
McQueen, A., 219, 262
McRee, A. L., 244
McSpadden, B., 218
McSweeney, A., 116, 120
McVeigh, J. F., 237
McVey, D., 109
Medders, R. B., 208
Meertens, R. M., 69
Meffert, M. F., 66
Mehdipour, T., 108
Mehrotra, P., 130
Meijinders, A., 203
Mellor, S., 111
Melnyk, V., 122
Menon, G., 157, 244, 255
Mercier, H., 159
Merrill, L., 84, 96
Merrill, S., 25
Mertz, R. J., 189
Merwin, J. C., 74
Messian, N., 212
Messner, C., 9
Metzger, M. J., 208
Metzler, A. E., 260
Meuffels, B., 241
Mevissen, F. E. F., 69
Meyerowitz, B. E., 181, 226
Miarmi, L., 155
Michie, S., 103, 115, 130
Micu, C. C., 204
Midden, C. J. H., 116, 203
Middlestadt, S. E., 57, 60, 67, 74, 121, 126
Miene, P. K., 39, 42, 46–47
Millar, K. U., 11
Millar, M. G., 11
Millar, S., 104
Miller, C., 256
Miller, C. H., 256
Miller, D., 91
Miller, G. R., 180, 191, 211–212
Miller, J. A., 249
Miller, N., 19, 24, 30, 34, 37, 93, 154
Miller, R. L., 95
Miller, W. R., 256
Miller-Day, M., 261
Mills, E. J., 221
Mills, J., 78, 81, 203
Milne, S., 114, 229, 246
Milton, A. C., 117
Minassian, L., 220
Miniard, P. W., 67
Mio, G. R., 75
Miron, A. M., 256
Miron, M. S., 191
Misak, J. E., 93
Mischkowski, D., 263
Mishra, S. I., 111
Misovich, S. J., 111, 246
Misra, S., 213
Mitchell, A. A., 67
Mitchell, A. L., 204
Mitchell, G., 9
Mitchell, J., 74, 121
Mitchell, M. M., 157
Mittal, B., 41
Mittelstaedt, J. D., 213
Mkandawire, G., 232
Mladinic, A., 64, 67, 74
Moan, I. S., 120, 131
Moldovan-Johnson, M., 60
Molyneaux, V., 115
Mondak, J. J., 161
Mongeau, P. A., 229–230, 232–233, 237
Monroe, K. B., 234, 236, 250
Montaño, D. E., 106
Montoya, H. D., 108
Mooki, M., 219–220
Mooney, A., 220
Moons, W. G., 227, 233
Moore, B., 118, 120
Moore, D. J., 260
Moore, K., 131
Moore, K. A., 230
Moore, M., 233
Moore, S. E., 171, 208, 255
Moors, A., 8
Mora, P. A., 143
Morales, A. C., 249
Moran, M. B., 219
Morgan, M. G., 107
Morgan, S. E., 217, 221, 254
Morman, M. T., 218
Morris, B., 75
Morris, K., 121, 131
Morrison, K., 230
Morris-Villagran, M., 157
Mortensen, C. R., 122
Morton, K., 256
Morwitz, V. G., 95
Moser, G., 112
Moss, T. P., 114
Mouttapa, M., 220
Mowad, L., 174, 254
Mowen, J. C., 195
Moyer, A., 262
Moyer, R. J., 198, 200–201
Moyer-Gusé, E., 215, 218–220, 241
Muehling, D. D., 155
Muellerleile, P. A., 72, 103–104, 106, 116–117
Mugny, G., 191, 197, 211
Mullainathan, S., 221
Mullan, B. A., 104, 117
Müller-Riemenschneider, F., 221
Munch, J. M., 181, 253
Munn, W. C., 194
Murayama, K., 81
Murphy, S. T., 217, 219–220
Murray, T. C., 123
Murray-Johnson, L., 230, 232
Muthusamy, N., 231
Mutsaers, K., 220
Mutz, D. C., 85
Myers, J. A., 192
Myers, L. B., 123
Nabi, R. L., 219, 233, 255, 258
Naccache, H., 95
Naccari, N., 204–205
Nahai, A., 227
Nail, P. R., 17, 93, 263
Najafzadeh, M., 221
Nakahiro, R. K., 221
Nan, X., 174, 196, 207, 211, 217, 262
Nanneman, T., 28
Napper, L. E., 139, 262
Nathanson, A., 119
Nava, P., 111
Nayak, S., 241
Nebergall, R. E., 19, 23–25, 30, 33–34, 215
Neff, R. A., 221
Neighbors, C., 108
Neijens, P., 241
Neijens, P. C., 121, 207
Neimeyer, G. J., 260
Nell, E. B., 258
Nelson, D. E., 97
Nelson, K., 212
Nelson, L. D., 262
Nelson, M. R., 35, 46
Nelson, R. E., 203
Nelson, T. E., 70
Nelson, T. F., 108
Nemets, E., 257
Nestler, S., 233
Netemeyer, R. G., 195
Neudeck, E. M., 108
Neufeld, S. L., 255
Newell, S. J., 190, 207
Newman, L. S., 11
Ng, J. Y. Y., 111
Ng, M., 115
Ng, S. H., 44, 50
Ngugi, E., 221
Nichols, A. J., III, 57
Nichols, D. R., 30, 69, 163, 171
Nickerson, D. W., 114
Niederdeppe, J., 60, 218–219, 223, 254
Niedermeier, K. E., 118–119
Nienhuis, A. E., 257
Nigbur, D., 97, 103, 131
Niiya, Y., 263
Niño, N. P., 219–220
Nitzschke, K., 221
Noar, S. M., 130, 146, 221, 242
Nocon, M., 221
Nohlen, H., 15, 174
Nolan, J. M., 127
Norman, P., 95, 109, 113, 115–116, 123, 130–131
Norman, R., 205
Norris, M. E., 84
North, D., 106
Nosek, B. A., 9
Notani, A. S., 67, 103
Ntumy, R., 219–220
Nupponen, R., 139
Nussbaum, A. D., 262
O’Leary, A., 220
Olofsson, A., 203
Olson, J. M., 35, 39, 48–51, 181, 235, 250, 258
Olson, P., 196
Olson, R., 221
Omoto, A. M., 130
O’Neal, K. K., 261, 267
Opdenacker, J., 221
Orbell, S., 54, 112, 114, 118–119, 130–131, 222, 229, 246
O’Reilly, K., 212
Orfgen, T., 157, 163
Orlich, A., 95
Orrego, V., 232
Orth, B., 106
Osgood, C. E., 5, 76
O’Sullivan, B., 61, 127
Oswald, F. L., 9
Otero-Sabogal, R., 116, 253
Otto, S., 64, 67, 74
Ouellette, J. A., 112, 115
Oxley, Z. M., 70
Oxman, A. D., 244
Packer, D. J., 9
Packer, M., 42, 44, 53
Paek, H.-J., 103–104
Palmgreen, P., 254
Pan, L. Y., 194
Pan, W., 261, 267
Papageorgis, D., 266
Pappas-DeLuca, K. A., 220
Parenteau, A., 196
Park, H. S., 122–123, 157, 163, 241
Parker, D., 109, 118–119
Parker, R., 111
Parrott, R. L., 116
Parschau, L., 114–115, 143
Parson, D. W., 17
Parsons, A., 243
Parvanta, S., 60
Pascual, A., 240, 249, 256
Pasha, N. H., 258, 266
Passafaro, P., 117
Patel, D., 187, 232
Patel, S., 212
Patnoe-Woodley, P., 219
Pattenden, J., 139
Patzer, G. L., 205–206
Pauker, S. G., 244
Paulson, R. M., 10, 12
Paulussen, T., 116
Paunesku, D., 212
Payne, J. W., 181
Pearce, W. B., 189, 191, 193, 210
Peay, M. Y., 61, 74
Pechmann, C., 193, 223
Peck, E., 249
Pedlar, C., 256
Penaloza, L. C., 266
Peng, W., 241
Pennings, J., 72
Perez, M., 88
Perez-Stable, E. J., 116, 253
Perloff, R. M., 237
Perron, J., 109
Pertl, M., 123, 223
Perugini, M., 113, 118
Peters, G.-J. Y., 231, 233, 248
Peters, M. D., 67
Peters, M. J., 221
Peters, R. G., 193
Peterson, M., 253
Petkova, Z., 218
Petosa, R., 260
Petraitis, J., 127
Petrova, P. K., 240
Pettigrew, J., 261
Petty, R., 119
Petty, R. E., 9–10, 15, 17–18, 43–44, 54, 67, 75, 148–155, 157–161, 163–165, 168–169, 171–175, 181, 191, 195–197, 204, 208, 210–211, 255, 259–260, 262, 267
Pfau, M. W., 186, 258, 266
Phillips, A. P., 186
Phillips, C., 260
Phillips, W. A., 189, 193, 210
Piccinin, A. M., 261
Pichot, N., 212
Piercy, L., 217
Pierro, A., 82, 97, 131, 166, 168
Pieters, R., 82
Pietersma, S., 262
Pietrantoni, L., 217
Pinckert, S., 250
Pinkleton, B. E., 207
Piotrowski, J. T., 117
Pitts, S. R., 242
Pizarro, J., 174, 226
Plane, M. B., 221
Plessner, H., 66
Plotnikoff, R. C., 60, 116
Plummer, F. A., 221
Poehlman, T. A., 9
Pollard, K., 220
Polonec, L. D., 108
Polyorat, K., 217
Popova, L., 232, 249
Poran, A., 219, 257
Pornpitakpan, C., 197
Porticella, N., 60, 218
Porzig-Drummond, R., 233
Posavac, E. J., 204
Potts, K. A., 256
Povey, R., 10, 109, 118
Powers, P., 17
Powers, T., 114
Prabhakar, P., 41
Prapavessis, H., 111
Prati, G., 217
Pratkanis, A. R., 46, 55, 181, 237
Praxmarer, S., 205
Preacher, K. J., 185
Preisler, R. M., 155, 159–160, 172, 181
Preiss, R. W., 89, 96, 241
Prelec, D., 182
Prentice, D. A., 38
Prentice-Dunn, S., 137, 228–229, 242, 246
Presnell, K., 88
Press, A. N., 64, 70
Preston, M., 234, 250
Prestwich, A., 111, 113, 115
Pretty, G. M., 103
Price, L. L., 204
Priester, J. R., 67, 148, 161, 172, 210
Primack, B. A., 241
Prince, M. A., 107
Prislin, R., 10, 262
Pritt, E., 232
Prochaska, J. O., 132–135, 144–145
Pronin, E., 226
Prothero, A., 256
Pruyn, A. T. H., 158
Pryor, B., 265
Pryzbylinski, J., 181
Pyszczynski, T., 181
Puckett, J. M., 160
Rasanen, M., 103
Ratchford, B., 41
Ratcliff, C. D., 240
Rauner, S., 249
Rea, C., 260
Read, S. J., 93
Reading, A. E., 110, 128
Real, K., 232
Reardon, R., 260
Redding, C. A., 132–135, 144–145
Reed, M., 249
Reed, M. B., 262
Reeve, A., 2
Regan, D. T., 11
Reger, B., 117
Reichert, T., 186
Reid, A. E., 107
Reid, J. C., 114
Reidy, J. G., 240
Reilly, S., 196
Reimer, T., 210
Reinard, J. C., 191
Reinhart, A. M., 218, 256
Reis, H. T., 97
Reiter, P. L., 244
Remme, L., 114
Rempel, J. K., 67
Rendon, T., 122
Renes, R. J., 219–220
Resnicow, K., 253–254
Reubsaet, A., 115
Reuter, T., 114, 115
Reynolds, G. L., 139
Rhine, R. J., 158
Rhoads, K., 108
Rhodes, N., 15, 172, 254, 264
Rhodes, R. E., 60, 75, 113, 116, 123
Rich, M., 241
Richard, R., 14, 118–119
Richardes, D., 220
Richardson, C. R., 223
Richert, J., 114–115, 139, 143
Richter, T., 217, 219
Richterkessing, J. L., 237
Ricketts, M., 218
Ridge, R. D., 39, 42, 46–47
Rieh, S. Y., 190
Riemsma, R. P., 139
Riesz, P. C., 213
Rietveld, T., 186
Rimal, R. N., 232
Ringwalt, C. L., 261, 267
Rise, J., 120, 130–131
Risen, J. L., 81, 95
Rittle, R. H., 235
Ritvo, P., 221
Rivers, J. A., 88
Rivis, A., 103, 112, 116–117, 127, 129, 131
Robertson, C. T., 192
Robertson, K., 155
Robins, D., 208
Robinson, J. K., 111
Robinson, N. G., 103
Rodewald, L., 221
Rodgers, H. L., Jr., 25
Rodgers, W. M., 123
Rodriguez, R., 151, 172
Roehrig, M., 88
Roels, T. H., 220
Roetzer, L. M., 241
Rogers, R. W., 69, 74, 166, 205–206, 228–229, 242, 246
Rogers, T., 114, 116
Rohde, P., 88
Rokeach, M., 50
Rolfe, T., 103
Rollnick, S., 256
Romero, A. A., 260
Ronis, D. L., 181
Rose, S. L., 192
Roseman, M., 115, 240
Rosen, B., 220
Rosen, C. S., 133
Rosen, J., 174
Rosen, S., 84
Rosenberg, M. J., 72, 74
Rosenbloom, D., 134
Rosenbloom, S. T., 221
Rosen-Brown, A., 240
Rosenzweig, E., 82
Roskos-Ewoldsen, D. R., 12, 198, 262–263
Rosnow, R. L., 258
Ross, K. M., 192
Rossi, J. S., 133–135
Rossi, S. R., 134
Rossiter, J. R., 232
Rothbart, M., 205–206
Rothman, A. J., 140, 226–227
Rothmund, T., 9
Rothstein, H. R., 96, 130, 182–183, 250, 264–265, 267
Rouner, D., 210, 219–220, 241
Royzman, E. B., 74, 225
Rozelle, R. M., 216
Rozin, P., 74, 225
Rozolis, J., 249
Ruberg, J. L., 241
Rubin, D. L., 186
Rubin, Y. S., 221
Ruch, R. S., 204
Rucker, D. D., 75, 174, 255
Ruder, M., 197
Ruiter, R. A. C., 69, 226, 229, 231–233, 244, 248, 254, 267
Ruiz, S., 75
Rusanen, M., 203
Russell, C., 108
Rutherford, J., 26
Rutter, D. R., 61, 72, 127
Ryerson, W. N., 219
Saine, T. J., 197
Sakaki, H., 26
Salgueiro, M. F., 122, 261
Sallis, J. F., 261
Salovey, P., 174, 226–227, 254
Salter, N., 114
Sampson, E. E., 198
Sampson, J., 230
Samuelson, B. M., 265
Sanaktekin, O. H., 254
Sandberg, T., 118–119, 127
Sanders, D. L., 221
Sanders, J., 218
Sanders-Thompson, V., 253
Sandfort, T. G. M., 115
Sandman, P. M., 138, 142
Sarge, M. A., 223, 232
Sarma, K. M., 247
Sarnoff, I., 40
Sarup, G., 25
Saucier, D., 254
Sauer, P. L., 207
Saunders, L., 256
Savage, E., 110, 128
Sawyer, A. G., 240
Sayeed, S., 106, 116–117, 131
Scarberry, N. C., 240
Schaalma, H. P., 69, 116
Schepers, J., 112, 121
Scher, S. J., 97
Schertzer, S. M. B., 253
Scheufele, D. A., 70
Schlehofer, M. M., 233
Schmidt, J., 196
Schmitt, B., 54, 222
Schneider, I. K., 15, 174
Schneider, S. L., 244
Schneider, T. R., 110, 174, 226
Schneier, W. L., 152
Scholl, S. M., 262
Schönbach, P., 81
Schoormans, J. P. L., 218
Schott, C., 232
Schreibman, M., 220
Schrijnemakers, J. M. C., 186
Schroeder, D. A., 249
Schulenberg, J., 260–261
Schulman, R. S., 261
Schultz, A., 253–254
Schultz, P. W., 10, 122, 127
Schulz, P. J., 241
Schulz-Hardt, S., 81
Schumann, D. W., 151–153, 157, 159
Schünemann, H., 244
Schüz, B., 139, 143, 262
Schüz, N., 139, 143, 262
Schwartz, S. H., 50
Schwarz, N., 5, 9, 150, 152, 158, 255
Schwarzer, R., 114–115, 137, 139, 142–143
Schweitzer, D., 189
Schwenk, G., 112
Scileppi, J. A., 158, 211
Scott, M. D., 259
Sears, D. O., 84, 153, 260
See, Y. H. M., 172
Seeley, S., 220
Segall, A., 50
Segan, C. J., 133
Segar, M. L., 223
Seibel, C. A., 256
Seibring, M., 108
Seifert, A. L., 121
Seignourel, P. J., 158, 196
Seiter, J. S., 2, 17
Seo, K., 233, 255
Sereno, K. K., 200, 203
Sestir, M., 218, 219
Settle, J. E., 108
Severance, L. J., 158
Severson, H., 261
Sexton, J., 12
Shabalala, A., 219
Shaeffer, E. M., 240
Shaffer, D. R., 42
Shaikh, A. R., 254
Shakarchi, R. J., 255, 265
Shamblen, S. R., 267
Shani, Y., 81
Shanteau, J., 145, 218
Shantzis, C., 261
Shapiro, M. A., 60, 218
Shapiro-Luft, D., 60
Sharan, M., 220
Sharot, T., 81
Sharp, J., 62, 72
Shavitt, S., 35, 37–41, 43, 46, 50, 53–54, 70
Shaw, A. S., 249
Shaw, H., 88
Shea, S., 221
Sheeran, P., 68, 74, 95, 103, 109, 112–119, 123, 127, 129–131, 216, 229, 246, 262
Shelton, M., 108
Shen, F., 226, 241
Shen, L., 233, 237, 247, 255–256, 264
Shepherd, G. J., 11
Shepherd, J. E., 103
Shepherd, R., 10, 65, 109, 121, 190
Sheppard, B. H., 103, 112
Shepperd, J. A., 262
Sherif, C. W., 19–20, 22–25, 30, 33–34, 215
Sherif, M., 19–20, 22–25, 29–30, 33–34, 215
Sherman, D. K., 93, 181, 227, 262–263
Sherman, R. T., 240
Sherrell, D. L., 195, 211
Shi, F., 81, 95
Shi, Y., 227
Shillington, A., 108
Shiota, M. N., 255
Shiv, B., 181
Shongwe, T., 219
Shulman, H., 152, 255
Shuptrine, F. K., 42
Sia, C.-L., 208
Sicilia, M., 75
Siebler, F., 166
Siegel, J. T., 186
Siegrist, M., 190
Siemer, M., 186
Siero, F. W., 25, 111, 246
Sieverding, M., 103, 126
Sigurdsson, S. O., 221
Silberberg, A. R., 194
Silk, K. J., 116
Silva, S. A., 122, 261
Silvera, D. H., 253
Silvera, S. A. N., 174, 254
Silverthorne, C. P., 159
Silvia, P. J., 212, 256, 265
Simmonds, L. V., 262
Simon, L., 97
Simoni, J., 212
Simons, H. W., 198, 200–201
Simons-Morton, B. G., 110
Simpson, P., 220
Sims, L., 226
Simsekoglu, O., 118
Sinclair, R. C., 171, 208, 255
Singh, A., 116
Singhal, A., 219
Sirgy, M. J., 38
Sirsi, A. K., 192
Sivaraman, A., 257
Six, B., 10–11, 112
Sjoberg, L., 11
Skalski, P. D., 194, 212
Sklar, K. B., 181
Skowronski, J. J., 74, 81, 145
Slade, P., 114
Slama, M., 42, 50
Slater, M. D., 30, 163, 171, 186, 207, 210, 219–220, 241
Slaunwhite, J. M., 108
Slemmer, J. A., 240
Slocum, J. W., Jr., 118
Smerecnik, C. M. R., 229
Smit, E. G., 207
Smith, A., 233
Smith, D. C., 261
Smith, J. K., 95
Smith, J. R., 10, 97, 103, 116, 120
Smith, L., 109
Smith, L. M., 88
Smith, M. A., 159
Smith, M. B., 37, 53, 55
Smith, M. C., 174
Smith, M. J., 26
Smith, N., 120
Smith, R. A., 34, 219, 220
Smith, R. E., 11, 193
Smith, R. J., 95
Smith, S., 122–123, 194
Smith, S. M., 10, 84, 108, 152, 154
Smith, S. W., 29, 108, 122, 232
Smith-McLallen, A., 69, 165, 166
Smucker, W. D., 230
Snelders, D., 218
Sniehotta, F. F., 111, 143
Snyder, J., 221
Snyder, M., 10, 12–13, 37, 39, 42–44, 46–47, 49, 53, 205–206, 253
Soldat, A. S., 171, 208, 255
Soley, L. C., 160
Solomon, S., 181
Sorell, D. M., 256
Sorrentino, R. M., 181, 229
Southwell, B. G., 233
Sowden, A. J., 139
Sox, H. C., Jr., 244
Spangenberg, E. R., 95, 130
Sparks, P., 10, 65, 72, 103–104, 106, 109, 262
Spears, R., 257
Speelman, C., 116
Spencer, C. P., 106, 114, 127
Spencer, F., 241
Spencer, S. J., 36, 185
Spencer-Bowdage, S., 256
Sperati, F., 244
Spiegel, S., 166–168
Spoor, S., 88
Spreng, R. A., 151
Spring, B., 224, 243
Sprott, D. E., 95
Spruyt, A., 8
St. Lawrence, J. S., 107, 220
Staats, H., 120, 131
Stagner, B., 255
Stambush, M. A., 88
Stangor, C., 174
Stansbury, M., 208
Stanton, T., 221
Stapel, D. A., 267
Stapleton, J., 111
Stark, E., 38
Stayman, D., 67
Stayman, D. M., 207
Stead, M., 104, 117
Steadman, L., 61, 72, 127, 130
Steblay, N. M., 234, 250
Stebnitz, S., 196
Steele, C. M., 93, 262
Steele, L., 21
Steele, R. G., 256
Steenhaut, S., 118
Steffen, V. J., 11
Steiner, G. A., 142
Steinfatt, T. M., 65, 265
Stephenson, M. T., 187, 233, 254
Sternthal, B., 197, 211, 212
Stevenson, R., 233
Stevenson, Y., 107
Steward, W. T., 174, 226
Stewart, C., 108
Stewart, R., 31, 212
Stewart-Knox, B., 104
Stice, E., 88, 93
Stiff, J. B., 237
Stillwell, A. M., 93
Stoltenberg, C. D., 171
Stone, J., 13, 78, 90–91, 93
Stone, K., 220
Storey, D., 241
Stormer, S., 88
Strack, F., 95, 255
Stradling, S. G., 109, 118–119
Strathman, A. J., 148, 222
Straughan, R. D., 192
Strauss, A., 221
Strecher, V. J., 254
Strickland, B., 159
Stroebe, W., 72, 115, 142, 229–230, 232, 242, 247
Stroud, N. J., 84–85
Struckman-Johnson, C., 215
Struckman-Johnson, D., 215
Struttmann, T., 217
Strutton, D., 207
Studts, J. L., 241
Stukas, A. A., 39, 42, 47
Suchner, R. W., 25
Suci, G. J., 5
Sunar, D., 254
Sundar, S. S., 208
Sundie, J. M., 75
Sunnafrank, M., 201
Supphellen, M., 57, 121
Sussman, S., 261
Sutton, S., 104
Sutton, S. R., 74, 103, 109, 116, 120–121, 133–134, 139–140, 142–143, 230
Swartz, T. A., 202–203
Swasy, J. L., 181
Sweat, M., 212
Swedroe, M., 108
Sweeney, A. M., 262
Swinyard, W. R., 11
Sykes, B., 111
Syme, G. J., 91, 106, 127
Symons, C. S., 25
Szabo, E. A., 258
Szilagyi, P., 221
Szybillo, G. J., 259
Thompson, E. P., 166–169, 173, 175
Thompson, J. K., 88
Thompson, R., 219, 261
Thompson, S. C., 233
Thomsen, C. J., 30, 171
Thuen, F., 130
Thurstone, L. L., 7
Thyagaraj, S., 260
Till, B. D., 213
Ting, S., 186
Tittler, B. I., 25
Tobler, N. S., 261
Todorov, A., 149
Tokunaga, R. S., 218–219, 241
Tollefson, M., 196
Tolsma, D., 254
Tom, S., Jr., 88, 89
Tormala, Z. L., 15, 18, 75, 161, 173–174, 191, 196–197, 208, 211–212, 255
Törn, F., 213
Tost, L. P., 256
Towles-Schwen, T., 12
Trafimow, D., 68, 74, 103, 109, 123, 127, 130
Traylor, M. B., 200
Trembly, G., 216
Trompeta, J., 212
Trope, Y., 149
Trost, M. R., 173
Trudeau, L., 218
Trumbo, C. W., 152, 172
Tryburcy, M., 111
Tseng, D. S., 221
Tuah, N. A. A., 139
Tufte, E. R., xvii
Tukachinsky, R., 218–219, 241
Tung, P. T., 97, 120
Tuppen, C. J. S., 189–190
Turner, G. E., 261
Turner, J. A., 26
Turner, M. M., 152, 233, 249, 255–256
Turrisi, R., 111
Tusing, K. J., 237, 266
Tversky, A., 244
Twyman, M., 190
Tykocinski, O. E., 9
van Laer, T., 218–219, 241
van Leeuwen, L., 219
Van Loo, M. F., 130
van Mechelen, W., 116
van Meurs, L., 207
Van Osch, L., 115
Van Overwalle, F., 93
van Trijp, H. C. M., 72, 122
van Weert, J. C. M., 221
van Woerkum, C., 220
Vann, J., 221
Vardon, P., 103
Vassallo, M., 121
Vaughn, L. A., 218–220
Vaught, C., 187
Velicer, W. F., 133–134, 139
Venkatesh, V., 121
Venkatraman, M. P., 181
Verplanken, B., 151, 233
Vervloet, M., 221
Vet, R., 107
Vlachopoulos, S. P., 103
Villagran, P. D., 157
Vincent, J. E., 235–236
Vincus, A. A., 267
Vinkers, C. D. W., 114
Visconti, L. M., 218–219, 241
Visser, P. S., 15, 157, 171, 174
Vist, G. E., 244
Vitoria, P. D., 122, 261
Voas, R. B., 108
Vohs, K. D., 158, 250
von Hippel, W., 149
Vonkeman, C., 171, 255
Voss-Humke, A. M., 121
Waks, L., 66
Walker, A., 139
Wall, A.-M., 123
Wallace, D. S., 10, 12
Walster, E., 82, 192, 204
Walters, L. H., 186
Walther, E., 95
Walther, J. B., 95, 208
Wan, C. S., 88
Wang, X., 48, 53, 121
Wang, Z., 208
Wansink, B., 72
Warburton, J., 120, 131
Ward, C. D., 210
Wareham, N. J., 74, 104, 121
Warnecke, R. B., 85
Warner, L., 220
Warren, W. L., 152
Warshaw, P. R., 103, 112, 121
Wasilevich, E., 232
Wathen, C. N., 211
Watson, A. J., 221
Watson, C., 197
Watt, I. S., 139
Watt, S. E., 9, 35
Wearing, A. J., 91
Webb, T. L., 114–115
Webel, A. R., 212
Weber, R., 231
Webster, R., 254
Wechsler, H., 108
Wegener, D. T., 15, 18, 26, 43–44, 150, 152, 157, 160–161, 164, 168, 171–175, 197, 208
Weigel, R. H., 11
Weil, R., 95
Weilbacher, W. M., 142
Weinberger, M. G., 211
Weiner, J., 116
Weiner, J. L., 223
Weinerth, T., 152
Weinstein, N. D., 64, 130, 138, 140, 142
Weisenberg, M., 199
Weissman, W., 261
Wells, G. L., 154–155, 181
Wells, J., 111
Wenzel, M., 108
Werrij, M. Q., 226, 244
Wessel, E., 194
West, M. D., 190
West, R., 140
West, S. K., 261, 267
Westermann, C. Y. K., 157, 163
Westfall, J., 186
Wetzels, M., 112, 121, 218–219, 241
Whately, R., 147
Wheeler, D., 235–236
Wheeler, S. C., 43–44, 54, 168–169, 175
Whitaker, D. J., 113
White, G. L., 80
White, K., 257
White, K. M., 97, 103
White, M., 139
White, R. W., 37, 53, 55
White, T. L., 227
Whitehead, J. L., Jr., 190–191, 210
Whitelaw, S., 140
Whittaker, J. O., 26
Whittier, D. K., 220
Wicherts, J. M., 182
Wicker, A. W., 17
Wicklund, R. A., 81
Widgery, R. N., 204
Wieber, F., 114
Wiedemann, A. U., 114, 143
Wiegand, A. W., 91
Wiener, J. L., 195
Wildey, M. B., 261
Wilke, H. A. M., 120, 131
Wilkin, H. A., 219–220
Williams, E. A., 221
Williams, P., 95, 253–254
Williamson, P. R., xviii, 96
Williams-Piehota, P., 174, 254
Willich, S. N., 221
Willms, D., 131
Wilmot, W. W., 22, 30, 31
Wilson, C. P., 111
Wilson, E. J., 195, 211
Wilson, M., 50
Wilson, S. R., 232
Wilson, T., 72
Windschitl, P. D., 181
Winslow, M. P., 13, 90–91
Winter, P. L., 108
Winterbottom, A., 220
Winzelberg, A., 203
Wise, M. E., 241
Witte, K., 139, 145, 187, 219–220, 229–233, 242, 246–249
Wittenbrink, B., 5, 9
Wogalter, M. S., 69
Wohn, D. Y., 95
Wojcieszak, M. E., 84–85
Wolff, J. A., 130
Wolfs, J., 103
Wong, N. C. H., 232
Wong, S., 240
Wong, Z. S.-Y., 109, 123
Wood, M. L. M., 266
Wood, M. M., 139
Wood, W., 17, 112, 115, 155, 157, 159–160, 172, 181, 190, 192–193, 254–255, 259–260, 262, 264–267
Woodruff, S. I., 261
Woodside, A. G., 95, 200–201
Wooley, S., 241
Worchel, S., 158, 240
Worth, L. T., 255
Wreggit, S. S., 110
Wright, A., 114
Wu, E. C., 249
Wu, R., 256
Wyer, N., 204
Wyer, R. S., Jr., 66, 76, 155, 217, 257
Wynn, S. R., 260–261
Xie, X., 66
Xu, A. J., 257
Zuckerman, C., 187, 232
Zuckerman, M., 40, 42, 254
Zwarun, L., 218
Subject Index
Accessibility of attitude, 10
Advertising, 193, 207, 224
Advocated position
ambiguity of, 25, 27–28
counterattitudinal vs. proattitudinal, 156, 197, 215
discrepancy of, 25–26, 34(n5)
expected vs. unexpected, 192–193
influence on credibility, 192–193
influence on elaboration valence, 156
Affect
anticipated, 13, 118–120, 233
as attitude basis, 67–68, 74(n13), 75(n20)
Age, 253
Ambiguity (of position advocated), 25, 27–28
Ambivalence, 10, 174(n16), 226
Anger, 233, 256
Anticipated feelings, 13, 118–120, 233
Appeals
affective vs. cognitive, 75(n20)
consequence-based, 165–166, 221–223
fear, 228–233
function-matched, 41–44, 49–51, 54(n6)
gain-framed and loss-framed, 225–228
normative, 106–108
one-sided and two-sided, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
scarcity, 75(n18)
threat, 228–233
Approach/avoidance motivation (BAS/BIS), 54(n5), 227
Arguments
consequence-based, 165, 166, 174(n19), 221–223
discussion of opposing, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
gain-framed and loss-framed, 225–228
number of, 159
strength (quality) of, 156–157, 163–166
Assimilation and contrast effects, 24–25, 27, 33(n3), 34(n4, nn6–7), 215
Attitude
accessibility, 10
ambivalence, 10, 174(n16), 226
bases of, 56–59, 66–68
certainty (confidence), 174(n16)
concept of, 4–5
functions of, 35–38
measurement, 5–9
relation to behavior, 9–12
strength, 15, 18(n12), 171(n3)
toward behavior. See Attitude toward the behavior
Attitude toward the behavior
assessment of, 99
determinants of, 105
influencing, 105
relation to norms, 112
Attitude-behavior consistency
factors affecting, 9–12
influencing, 12–14
Attitude-toward-the-ad, 67
Attitudinal similarity, 201
Attractiveness, 160, 204–206
Attribute importance, 61–62, 72(n3), 78
Audience. See Receiver
Audience adaptation, xv–xvi
elaboration likelihood model (ELM) and, 162, 174(n17)
functional attitude approaches and, 41–44, 49–51
individual differences and, 252–255
reasoned action theory (RAT) and, 117
social judgment theory and, 29
stage models and, 134–141
summative model of attitude and, 59–61
Audience reaction (as peripheral cue), 159
Averaging model of attitude, 65–66
Aversive consequences and induced compliance, 97(n16)
Belief
content, 62–63
evaluation, 57, 69, 75(n18), 104. See also Consequence desirability
importance, 61–62, 72(n3), 78
lists, 63–64
salience, 56, 61, 69–71
strength (likelihood), 57, 63–64, 68–69, 104, 127(n8). See also Consequence likelihood
Belief-based models of attitude
description of, 56–59
implications for persuasion, 59–61, 68–71
sufficiency of, 66–68
Bias, knowledge and reporting, 190, 192
Bipolar scale scoring, 64, 73(nn8–9)
evaluation
Consequence likelihood, 166. See also Belief, strength (likelihood)
Consideration of future consequences (CFC), 54(n5), 222, 264(n1)
Contrast and assimilation effects, 24–25, 27, 33(n3), 34(n4, nn6–7), 215
Correspondence
of attitude and behavior measures, 10–11, 17
of intention and behavior measures, 113
Counterarguing, 256, 257, 259–260. See also Elaboration
Counterattitudinal vs. proattitudinal messages, 156, 197, 215
Credibility
dimensions of, 188–190
effects of, 194–198
factors affecting, 190–194
heuristic, 158
relationship to similarity, 202–203
Cultural background, 253
Cultural truisms, 266(nn17–18)
DARE, 261
Decisional balance, 134–136
Decision-making and dissonance, 78–83
Defensive avoidance, 261
Delivery
dialect, 202
nonfluencies, 191
Descriptive norm (DN), 29, 261
assessment of, 100
determinants of, 107–108
influencing, 108
relation to attitude, 112
Dialect, 202
Direct experience, 11–12
Discrepancy, 25–26, 34(n5)
Disgust, 233
Dissonance
and decision making, 78–83
defined, 77
factors influencing, 77–78
and hypocrisy induction, 90–92
and induced compliance, 85–90, 96(nn12–13), 97(n16), 199
means of reducing, 78
and selective exposure, 83–85
Distraction, 154–155
Door-in-the-face (DITF) strategy, 235–237
explanations, 236–237
moderating factors, 236
Dual-process models, 149
Ego-involvement
concept of, 22, 30–31, 153
confounding in research, 29–30
measures of, 23–24, 31, 34
relationship to judgmental latitudes, 22–23
Elaboration
ability, 154–155
assessment of, 149
continuum, 149, 150
definition of, 149
factors affecting amount of, 152–156, 197, 255
factors affecting valence of, 156–157, 197
motivation, 152–154
Elaboration likelihood model (ELM), 148–175
Emotions
anger, 233, 256
anticipated, 13, 118–120, 233
fear, 229–232
disgust, 233
guilt, 93, 97(n16), 233, 237
regret, 82–83, 118
Entertainment-education (EE), 219–220
Ethnicity of communicator, 206–207
Even-a-penny-helps strategy, 249(n50)
Evidence, citation of sources of, 191
Examples vs. statistics, 241(n9)
Expectancy confirmation and disconfirmation, 192–193, 211(n3)
Expectancy-value models of attitude, 72(n1)
Experimental design, 176–178
Expertise (credibility dimension), 189
Explicit measures of attitude, 5–7
Explicit planning of behavior, 114–115
Explicit vs. implicit conclusions, 214–216
Extended parallel process model (EPPM), 231–232
Fairness norms, 84
Familiarity of topic, 10, 155
Fear, 229–232
Fear appeals, 228–233
Follow-up persuasive efforts, 82
Foot-in-the-door (FITD) strategy, 233–235
explanation of, 234–235
moderating factors, 234
Forewarning, 157, 259–260
Formative basis of attitude, 11–12
Framing
gain vs. loss, 225–228
issue, 70
Function-matched appeals, 41–44, 49–51, 54(n6)
Functions of attitude
assessing, 38–40, 46–48
vs. functions of objects, 45–46
influences on, 40–41
matching appeals to, 41–44, 49–51, 54(n6)
typologies of, 35–38, 44–45, 48–49
Habit, 115–116
Health Action Process Approach (HAPA), 142
Heuristic principles, 157–159
Heuristic-systematic model, 149
Hierarchy-of-effects models, 142
Humor, 194
Hypocrisy, 13, 18(nn9–10)
Imagined behavior, 240(n6)
Immersion (in narratives), 218
Implementation intentions, 114, 216
Implicit measures of attitude, 8–9
Implicit vs. explicit conclusions, 214–216
Importance
of beliefs, 61–62, 72(n3), 78
of topic, 152–153, 163, 195, 199
Incentive effects in induced compliance, 85–87
Indirect experience, 11–12
Individual differences, 252–255
approach/avoidance motivation (BAS/BIS), 54(n5), 227
consideration of future consequences (CFC), 54(n5), 222, 264(n1)
intelligence, 215, 253
need for cognition (NFC), 153–154, 174(n17), 226
regulatory focus, 54(n5), 227
self-esteem, 253
self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
sensation-seeking, 254
Individualism-collectivism, 54, 222
Individualized belief lists, 63
Induced compliance, 85–90, 96(nn12–13), 97(n16), 199
choice effects in, 89, 96(n12), 200
incentive effects in, 85–87
Information exposure, influences on, 83–85
Information integration theory, 73(n10)
Information utility, 84, 96(n11)
Injunctive norm (IN)
assessment of, 100
determinants of, 105–106
influencing, 106–107
relation to attitude, 112
Inoculation, 257–259
Intelligence, 215, 253
Intention, relation to behavior, 112–116, 129(n21), 130(n22)
Involvement
ego-involvement. See Ego-involvement
personal relevance, 152–153, 163, 195, 199
Knowledge bias, 190, 192
Latitudes, judgmental, 22
Legitimizing paltry contributions, 249(n50)
Length of message, 159, 160
Likert attitude scales, 7
Liking
for the communicator, 193, 198–200
heuristic, 158
relationship to similarity, 201–202
Loss-framed vs. gain-framed appeals, 225–228
Low price offer, 88
Low-ball strategy, 249(n50)
Narratives, 216–220
Need for cognition (NFC), 153–154, 174(n17), 226
Negativity bias, 74(n11), 225
Noncognitive bases of attitude, 66–68
Nonfluencies in delivery, 191
Nonrefutational two-sided messages, 223, 245(n32)
Normative beliefs, 105
Norms
descriptive. See Descriptive norm
fairness, 84
injunctive. See Injunctive norm
moral, 120
Nudges, 220–221
Null hypothesis significance testing, xviii
Number of arguments, 159
Paradigm case, 2
Peer-based interventions, 204
Perceived behavioral control, 216, 221
assessment of, 101
conceptualization of, 100, 123–124, 131(n33)
determinants of, 108–110
influencing, 110–111
relation to attitudes and norms, 102, 122–123, 129(n19)
relation to stages of change, 136–139
Peripheral cues, 150
Peripheral route to persuasion, 150
Persistence of persuasion, 151
Personal (moral) norms, 120
Personal relevance of topic, 152–153, 163, 195, 199
Personality characteristics, 252–255
approach/avoidance motivation (BAS/BIS), 54(n5), 227
consideration of future consequences (CFC), 54(n5), 222, 264(n1)
intelligence, 215, 253
need for cognition (NFC), 153–154, 174(n17), 226
regulatory focus, 54(n5), 227
self-esteem, 253
self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
sensation-seeking, 254
Persuasion, concept of, 2–4
Persuasive effects, assessing, 14–16, 18(n13), 185(n2)
Physical attractiveness, 160, 204–206
Planned behavior, theory of, 126(n1)
Planning of behavior, 114–115
Position advocated. See Advocated position
Postdecisional spreading of alternatives, 80
Prior knowledge (of topic), 10, 155
Proattitudinal vs. counterattitudinal messages, 156, 197, 215
Product trial, 11
Prompts, 220–221
Prospect theory, 244(n28)
Protection motivation theory (PMT), 228–229
Reactance, 255–257
Reasoned action theory (RAT)
determinants of intention, 99–102
influencing attitude toward the behavior, 104–105
influencing descriptive norms, 107–108
influencing injunctive norms, 105–107
influencing perceived behavioral control, 108–111
influencing relative weights, 111–112, 128(nn17–18)
Receiver factors, 252–267
Reciprocal concessions, 236
Recommendation specificity, 216
Refusal skills training, 260–261
Refutational inoculation treatments, 258
Refutational two-sided messages, 223, 265(n10), 266(n16)
Regret
anticipated, 118
postdecisional, 82–83
Regulatory focus, 54(n5), 227
Relevance
of attitude to behavior, 10, 12–13
of topic to receiver, 152–153, 163, 195, 199
Reminders, 220–221
Reporting bias, 190, 192
Resistance to persuasion from different persuasion routes, 151
Role models, 111
Routes to persuasion, 150–152
Salience of beliefs, 56, 61, 69–71
Scale scoring procedures, 64–65, 127(n10), 128(n11)
Scarcity appeals, 75(n18)
Selective exposure, 81, 83–85
Self-affirmation, 261–264
Self-affirmation theory, 93
Self-efficacy, 216, 221, 226. See also Perceived behavioral control
and stage-matching, 136–139
Self-esteem, 253
Self-identity, 131(n30)
Self-monitoring, 39–40, 42, 53(n2, n4), 54–55(n10), 222, 253
Self-perception, 234
Self-prophesy effects, 95(n3)
Semantic differential evaluative scales, 5
Sensation-seeking, 254
Sequential-request strategies, 233–237
Sidedness (of messages), 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
Similarity of communicator and receiver
relationship to credibility, 202–203
relationship to liking, 201–202
Single-act behavioral measures, 10
Single-item attitude measures, 6
Single-message designs, 178–181
Social judgment theory, 19–34
Source
credibility, 188–198
ethnicity, 206–207
liking, 198–200
physical attractiveness, 160, 204–206
self-interest, 192
similarity to receiver, 200–204
Specific vs. general recommendations, 216
Stage models, 132–147
distinctive claims of, 140–142
stage assessment, 139
transtheoretical model (TTM), 132–140
Stage-matching
and decisional balance, 134–136
and self-efficacy, 136–139
vs. state-matching, 141
Standardized belief lists, 63
Statistical significance testing, xviii
Statistics vs. examples, 241(n9)
Stories, 216–220
Strength
argument, 156–157, 163–166
attitude, 15, 18(n12), 171(n3)
belief, 57, 63–64, 68–69, 104, 127(n8). See also Consequence likelihood
Subjective norm. See Injunctive norm
Summative model of attitude, 56–59
Supportive treatments (for creating resistance), 258
Trustworthiness (credibility dimension), 189
Two-sided vs. one-sided messages, 79, 193, 211(n4), 223–225, 265(n10, n12), 266(n16)
About the Author
Daniel J. O’Keefe
is the Owen L. Coon Professor in the Department of Communication
Studies at Northwestern University. He received his Ph.D. from the
University of Illinois at Urbana-Champaign and has been a faculty
member at the University of Michigan, Pennsylvania State
University, and the University of Illinois. He has received the
National Communication Association’s Charles Woolbert Research
Award, its Golden Anniversary Monograph Award, its Rhetorical and
Communication Theory Division Distinguished Scholar Award, and
its Health Communication Division Article of the Year Award; the
International Communication Association’s Best Article Award and
its Division 1 John E. Hunter Meta-Analysis Award; the International
Society for the Study of Argumentation’s Distinguished Research
Award; the American Forensic Association’s Daniel Rohrer
Memorial Research Award; and teaching awards from Northwestern
University, the University of Illinois, and the Central States
Communication Association.