Bad News? Send an AI. Good News? Send a Human

Journal of Marketing
2023, Vol. 87(1) 10-25
© American Marketing Association 2022
Abstract
The present research demonstrates how consumer responses to negative and positive offers are influenced by whether the
administering marketing agent is an artificial intelligence (AI) or a human. In the case of a product or service offer that is
worse than expected, consumers respond better when dealing with an AI agent in the form of increased purchase likelihood
and satisfaction. In contrast, for an offer that is better than expected, consumers respond more positively to a human agent.
The authors demonstrate that AI agents, compared with human agents, are perceived to have weaker intentions when admin-
istering offers, which accounts for this effect. That is, consumers infer that AI agents lack selfish intentions in the case of an offer
that favors the agent and lack benevolent intentions in the case of an offer that favors the customer, thereby dampening the
extremity of consumer responses. Moreover, the authors demonstrate a moderating effect, such that marketers may anthropo-
morphize AI agents to strengthen perceived intentions, providing an avenue to receive due credit from consumers when the
agent provides a better offer and mitigate blame when it provides a worse offer. Potential ethical concerns with the use of AI
to bypass consumer resistance to negative offers are discussed.
Keywords
artificial intelligence, anthropomorphism, algorithm, robots, technology, pricing, intentions, expectations, satisfaction
Online supplement: https://doi.org/10.1177/00222429211066972
Aaron M. Garvey is Associate Professor of Marketing and Ashland Oil Research Professor of Marketing, Gatton College of Business and Economics, University of Kentucky, USA (email: AaronGarvey@uky.edu). TaeWoo Kim is Lecturer of Marketing, University of Technology Sydney, Australia (email: Taewoo.kim@uts.edu.au). Adam Duhachek is Professor of Marketing, University of Illinois at Chicago, USA, and Honorary Professor of Marketing, University of Sydney, Australia (email: adamd@uic.edu).

Marketing managers currently find themselves in a period of technological transition, wherein artificial intelligence (AI) agents are increasingly viable replacement options for human representatives in administering product and service offers directly to customers. AI agents have been adopted across a broad range of consumer domains to handle both face-to-face and remote customer transactions (Davenport et al. 2020; Harris, Kimson, and Schwedel 2018; Huang and Rust 2018; Wirtz et al. 2018), ranging from traditional retail and travel (Mende et al. 2019) to ride and residence sharing (Hughes et al. 2019) and even legal and medical services (Esteva et al. 2017; Turner 2016). Given AI agents' advanced information processing capabilities and labor cost advantages (Kumar et al. 2016), the transition away from human representatives in administering product and service offers seems predominately advantageous for firms. Despite AI's potential, scant research has examined how consumers evaluate AI systems in relation to equivalent offerings delivered by humans.

The increasingly pervasive use of AI raises the possibility that offers administered by AI agents may impact consumer response in novel ways as compared with offers administered by human agents. For example, Uber uses an AI machine learning system to estimate travel elasticities and administer ride price offers (Newcomer 2017). Imagine a customer who pays for an Uber ride downtown but, for the return trip, unexpectedly receives a price offer triple the original. Does administration of this worse-than-expected offer by an AI agent, rather than a human agent, have implications for purchase likelihood, customer satisfaction, or intentions toward the use of Uber in the future? What if the return trip was unexpectedly much cheaper than anticipated, resulting in a much better-than-expected offer for the consumer? The present research theorizes and demonstrates an interaction between the type of agent and outcome expectation discrepancies, such that consumers respond less
negatively to worse-than-expected price offers when transmitted by AI agents (compared with human agents) and more positively to better-than-expected price offers when transmitted by human agents (compared with AI agents). In addition, this interaction is further moderated by anthropomorphic characteristics of the AI. The more humanlike the AI is in terms of its appearance or cognitive functions, the more it reduces the expectations discrepancy gap between human and AI agents. This moderator represents a strategic managerial input that can be used to manage customer satisfaction on the basis of the customer's price expectations.

Our theory and findings have key implications for firms enlisting both human and AI agents who administer outcomes that are discrepant from expectations. These findings are relevant for price offers, in addition to other situations where consumers learn of unexpectedly negative outcomes along dimensions other than price, such as cancellations, delays, negative evaluations, status changes, product defects, rejections, service failures, and stockouts. Our findings are also pertinent to instances where consumers receive unexpectedly positive outcomes such as expedited deliveries, rebates, upgrades, service bundles, exclusive offers, loyalty rewards, and customer promotions. Managers can apply our findings to prioritize (vs. postpone) human-to-AI role transitions in situations where negative (vs. positive) discrepancies are more frequent and impactful. Moreover, our results suggest that even when a role transition is not holistically passed to an AI, the selective recruitment of an AI agent to disclose certain discrepant information can still be advantageous. Firms that have already transitioned to consumer-facing AI agents, including the multitude of online and mobile applications that use AI-based algorithms to create and administer offers, also stand to benefit from our findings. Our research reveals that AI agents should be selectively anthropomorphized (i.e., depicted as either machinelike vs. humanlike) depending on whether an offer will be worse or better than expected.

Our research contributes to the literature in multiple ways. First, we show that AI (vs. human) agents asymmetrically alter the effects of outcome expectation discrepancies on purchase, satisfaction, and reengagement intentions, thereby broadening the associated literature examining human–AI transactions in marketing contexts (e.g., Huang and Rust 2018; Kim and Duhachek 2020; Longoni, Bonezzi, and Morewedge 2019; Mende et al. 2019; Srinivasan and Abi 2021). Whereas the growing literature on technology in marketing has shown that consumer engagement tends to decrease when an AI agent administers a transaction (e.g., through a perceived lack of human mental or emotional attributes on the part of the representative; Longoni, Bonezzi, and Morewedge 2019; see also Dietvorst, Simmons, and Massey 2015), we reveal how transactions with AI can either improve or undermine purchase tendencies and reengagement intentions depending on the valence of deviation from consumer expectations.

Second, whereas previous research has shown largely positive consequences of anthropomorphized products in the form of increased consumer engagement and product liking (Aaker, Vohs, and Mogilner 2010; Aggarwal and McGill 2007; Waytz, Heafner, and Epley 2014), or generally negative compensatory responses (Mende et al. 2019), we show an asymmetric pattern such that consumers transacting with anthropomorphized AIs are more likely to react negatively to worse-than-expected marketing offers but respond more favorably to better-than-expected offers from anthropomorphized AIs owing to the role of perceived intentions. Third, we provide evidence that our effects are not driven by potential alternative explanations, including uncanny valley theory (Mori, MacDorman, and Kageki 2012) or AIs' superior market tracking ability. We note that prior research exploring the anthropomorphism of AI agents has argued for an uncanny valley effect that leads people to avoid agents that appear too humanlike, ostensibly due to eeriness that emerges after a certain inflection point (Kim, Schmitt, and Thalmann 2019; Mende et al. 2019; Mori, MacDorman, and Kageki 2012). Our research demonstrates AI anthropomorphism patterns that are ostensibly incompatible with an uncanniness explanation and that emerge even when empirically controlling for uncanniness perceptions. Instead, we reveal how AI (vs. human) agents are inferred to have weaker selfish and benevolent intentions when developing discrepant offers, with implications for subsequent consumer response. In doing so, we suggest a new and important process driving both aversion to and engagement with anthropomorphized AI agents.

As a fourth contribution, our research reveals that the use of AI agents to administer offers can simultaneously present ethical dilemmas alongside opportunities for marketing firms. On the one hand, our work reveals opportunities for firms to receive due (and perhaps otherwise overlooked) credit for offers that are better than expected, with corresponding improvements in customer response. On the other hand, our work also reveals the possibility of AI misuse as a darker tool through which marketers can increase the acceptance of offers that fall short of expectations while simultaneously increasing intentions to reengage with the offending firm.

Theoretical Framework

Expectations Discrepancy Theory in Marketing Transactions

We examine the impact of AI versus human product and service offer administration through the theoretical lens of discrepant outcome expectations—that is, consumer reactions to offers that are better or worse than expected on a key dimension. Although expectation discrepancies have been examined in a variety of interpersonal sales transaction contexts (e.g., Darke, Ashworth, and Main 2010; Evangelidis and Van Osselaer 2018; Oliver, Balakrishnan, and Barry 1994), to our knowledge, no studies have investigated the implications of AI agents as transaction administrators. Although no research has specifically examined this phenomenon, extant research into robotics and artificial intelligence suggests that AI (vs. human) agents could have either positive or negative effects when administering price offers that are better or worse than expected. We next
consider the extant literature on expectancy theory, followed by a conceptual integration with recent AI research.

The broad body of research that examines expectation discrepancies for product and service offers assumes that a set of expectations exists prior to entering a consumption event, and offers trigger a comparison against expected referents (Evangelidis and Van Osselaer 2018; Oliver and DeSarbo 1988). For example, a buyer entering into a potential Uber transaction could have expectations for a price of $20 based on historical experience but then receive a ride price offer of $30 (worse than expected) or $10 (better than expected). Worse-than-expected offers elicit an adverse reaction and have been demonstrated to lead to lower purchase likelihood, satisfaction, and desire to reengage with specific sales agents, whereas better-than-expected offers produce favorable consumer responses (Oliver, Balakrishnan, and Barry 1994; Spreng, MacKenzie, and Olshavsky 1996). These expectations can relate to many aspects of service and product experiences, including temporal dimensions such as product/service delivery time, attribute performance, price, and so on. Any dimension of consumer experience for which outcomes deviate from expectations can potentially be used to form evaluations of the agent.

The current research aims to conjoin the extant literature on technology and expectations discrepancy theory by highlighting the important role of AI. The previous literature on expectations discrepancy has assumed a human agent during offer administration. AI is perceived differently than human agents in many aspects because it is an artificially created technology (Dietvorst, Simmons, and Massey 2015; Longoni, Bonezzi, and Morewedge 2019; Mende et al. 2019). Moreover, AI technology endowed with machinelike versus humanlike features (e.g., a name or face) appears to trigger different perceptions about its capabilities (MacDorman, Vasudevan, and Ho 2009; Waytz, Heafner, and Epley 2014), thereby potentially altering responses to offers that are discrepant from expectations.

Transactions with AI Versus Human Agents: Inferred Intentions Matter

Previous research has shown that human agents are more liked and trusted than AI agents across a variety of contexts (Dietvorst, Simmons, and Massey 2015; Longoni, Bonezzi, and Morewedge 2019). For example, consumers are more likely to prefer a medical service provided by a human versus an AI agent because AI agents are considered incapable of considering each patient's unique, individual, and situational characteristics (Longoni, Bonezzi, and Morewedge 2019). Thus, one could predict that the administration of an offer, such as offering a price for an Uber trip, by an AI (vs. a human) agent could negatively influence purchase, satisfaction, and reengagement outcomes regardless of how well the offer matches expectations. Moreover, extant research has shown that people have less aversion to inflicting physical punishment on an embodied AI (i.e., a robot) than a human (Złotowski et al. 2015), suggesting in particular that a worse-than-expected offer elicited by an AI could be met with a more adverse customer response. However, extant research has not considered transactional contexts that vary in the valence of outcome expectation discrepancy, for which we theorize that a different pattern of effects will emerge. We extend previous research revealing an AI aversion phenomenon in the contexts of prediction tasks and medical decisions (Dietvorst, Simmons, and Massey 2015; Longoni, Bonezzi, and Morewedge 2019). In doing so, we contrast AI and human agents in the dimension of intentions inferred from their actions and theorize that transactional offers with the same face value could be evaluated as more acceptable when provided by an AI (vs. a human) agent, depending on the valence of the offer.

According to established models of intentionality, an action is perceived to be driven by intentions when it involves three key elements: a desired outcome based on one's own motives, a belief that the action will lead to that outcome, and one's autonomous decision to take the action (Brand 1984; Bratman 1987; Malle, Moses, and Baldwin 2001). Research has shown that individuals are capable of inferring intentions from others' actions, and an action that suffices for all three elements is perceived to have stronger intentions. A growing body of research suggests that AI (vs. human) agents are considered to have a lower capacity for self-motivated decision making and, by extension, lack the capacity to form their own intentions that drive subsequent actions. Because an AI agent is a nonhuman machine made by humans to serve humans (Russell and Norvig 2010), lay beliefs have formed such that AI agents' actions serve human goals and fulfill extrinsic human desires (Huang and Chen 2019; Kim and Duhachek 2020; Russell and Norvig 2010). Put differently using intentionality theory, AI agents lack the primary element of intentionality, namely self-motivated "desired outcomes," and thus, they do not qualify as agents that possess their own intentions (Brand 1984; Bratman 1987; Malle, Moses, and Baldwin 2001).

Social information processing theory has revealed that when approaching an interpersonal engagement with meaningful outcomes—such as a marketing transaction—individuals attempt to judge the other party primarily on inferred intentions (Abele and Wojciszke 2007; Wojciszke, Abele, and Baryla 2009). Of particular importance to determining response is whether the intentions driving the other party's decision are inferred to be selfish (i.e., placing the self first) or benevolent (i.e., considerate of others; Wojciszke, Abele, and Baryla 2009). Inferred selfish (benevolent) intentions decrease (increase) favorable evaluation of an agent's actions. For example, imagine that a human driver of a vehicle swerved into a wall to protect a child who jumped into the street chasing a ball. The benevolence of the action, if conducted by a human driver, would be praised, but the same action would receive less praise when conducted by an autonomous vehicle's AI system (Awad et al. 2018; Gray, Gray, and Wegner 2007; Gray, Young, and Waytz 2012). This explanation is also consistent with research demonstrating weaker perceptions of goodwill from benevolent actions conducted by humans (e.g.,
Results

Pretest. We conducted a pretest of offer type to assess our operationalizations of worse than expected (ticket sold to someone else for $110; participants were offered $140) and expected (ticket sold to someone else for $140; participants were offered $140). One hundred fifty respondents from Amazon Mechanical Turk (MTurk) completed the pretest in return for monetary compensation. After reading the ticket scenario without agent descriptions, respondents answered the two-item expectancy disconfirmation scale from Oliver (1980) adapted to this context: "Think about what your expectations were for the price offer. How does the price that you were offered compare to your expectations?" (1 = "I expected a much lower price," 4 = "Price was as I expected," and 7 = "I expected a much higher price"; 1 = "Much worse offer than I expected," 4 = "Offer was as I expected," 7 = "Much better offer than I expected"). Analysis of the two-item composite (r = .80) revealed that the worse-than-expected offer condition was significantly below both the "as expected" midpoint (i.e., 4) (Mworse = 2.90, SD = 1.36; t(75) = −7.06, p < .001) and the expected condition (Mexpected = 3.98, SD = .96; F(1, 148) = 31.52, p < .001, η2p = .18), indicating a successful manipulation.

Figure 1. Offer Acceptance Rate as a Function of Offer and Agent Type (Study 1a). Notes: Error bars = ±1 SEs.

of results for offers administered by an AI agent versus a human agent. In Study 1b, we examine whether better-than-expected offers elicit a different pattern of response when administered by AI versus a human agent.

Study 1b: Response to a Better-Than-Expected Offer from an AI Agent Versus Human Agent

Study 1b addresses the case of a better-than-expected offer. We posit that offer administration by a human leads to more positive consumer outcomes as compared with AI.
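The dependent measure in Studies 1a and 1b is binary offer acceptance, summarized within each agent × offer cell as in Figure 1. A minimal sketch of that tabulation is shown below; the responses and cell labels are fabricated for illustration, not the study data.

```python
# Illustrative tally of acceptance rates by agent and offer condition,
# mirroring the 2 x 2 summary plotted in Figure 1 (fabricated data).
from collections import defaultdict

# (agent, offer, accepted) triples for hypothetical participants
responses = [
    ("AI", "worse", 1), ("AI", "worse", 1), ("AI", "worse", 0),
    ("human", "worse", 1), ("human", "worse", 0), ("human", "worse", 0),
    ("AI", "better", 1), ("AI", "better", 0), ("AI", "better", 0),
    ("human", "better", 1), ("human", "better", 1), ("human", "better", 0),
]

def acceptance_rates(rows):
    """Proportion of accepted offers within each (agent, offer) cell."""
    tally = defaultdict(lambda: [0, 0])  # cell -> [accepted, total]
    for agent, offer, accepted in rows:
        tally[(agent, offer)][0] += accepted
        tally[(agent, offer)][1] += 1
    return {cell: acc / n for cell, (acc, n) in tally.items()}

rates = acceptance_rates(responses)
```

In this fabricated set, the AI agent's worse-than-expected offers are accepted at a higher rate than the human agent's, echoing the qualitative pattern the studies report.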
Results

Pretests. As in Study 1a, we conducted a separate pretest to validate our offer type manipulation with 150 MTurk participants who received monetary compensation. Using the same two items from Study 1a, we measured the extent to which participants perceived that the offer was better than they expected. Analysis of the two-item composite (r = .72) revealed that the mean score of the composite was higher in the better-than-expected offer condition (Mbetter = 5.83, SD = 1.25) when compared with the expected offer condition (Mexpected = 4.18, SD = 1.00; F(1, 148) = 80.8, p < .001, η2p = .35), indicating a successful manipulation.
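The manipulation checks in both pretests follow one pattern: average the two disconfirmation items into a per-respondent composite, then test that composite against the "as expected" scale midpoint and across conditions. A minimal sketch of the one-sample test against the midpoint follows; all ratings are fabricated, not the pretest responses.

```python
# Illustrative manipulation check: two 7-point disconfirmation items are
# averaged into a composite, then tested against the scale midpoint (4).
# All numbers here are fabricated, not the pretest data.
import math
from statistics import mean, stdev

def composite(item1, item2):
    """Per-respondent average of the two expectancy-disconfirmation items."""
    return [(a + b) / 2 for a, b in zip(item1, item2)]

def one_sample_t(scores, mu=4.0):
    """t-statistic for a one-sample test against the 'as expected' midpoint."""
    n = len(scores)
    return (mean(scores) - mu) / (stdev(scores) / math.sqrt(n))

# fabricated worse-than-expected condition ratings (1-7 scale)
item1 = [2, 3, 2, 4, 3, 2, 3, 2, 4, 3]
item2 = [3, 2, 2, 3, 3, 2, 4, 2, 3, 3]
scores = composite(item1, item2)
t = one_sample_t(scores)  # negative t: composite falls below the midpoint
```

A significantly negative t, as in the Study 1a pretest (t(75) = −7.06), indicates the offer was indeed perceived as worse than expected; the between-condition F-test compares the two condition means in the same way.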
variables, offer acceptance likelihood served as the dependent variable, and agent type (human, AI) served as the moderating variable. This analysis revealed that the index of moderated mediation for both selfish and benevolent intentions did not include 0 (indirect effect for selfish intention = .12; 95% confidence interval: [.05, .21]; indirect effect for benevolent intention = .14; 95% confidence interval: [.07, .23]), indicating that the offer type → inferred intentions → acceptance likelihood path is moderated by agent type.

Alternative mechanisms. The first alternative mechanism assessed was the uncanniness index (three-item average, α = .91). An ANOVA of the uncanniness index revealed that AI was perceived as more uncanny than a human only in the expected offer condition (MAI = 2.98, SD = 1.29; Mhuman = 2.50, SD = 1.35; F(2, 692) = 6.96, p = .01). More germane to our inquiry, the two-way interaction between the agent and offer type was not significant, resulting in a pattern inconsistent with explaining our results. The second alternative mechanism assessed was whether the perception that the original fare (i.e., $20 to get to the restaurant) was overpriced. An ANOVA of the overpricing perception revealed a significant main effect of offer type (Mworse = 4.14, SD = 1.70; Mexpected = 5.10, SD = 1.53; Mbetter = 5.52, SD = 1.39; F(2, 692) = 49.00, p < .001). However, there was no interaction between the agent and the offer (F(2, 692) = 2.19, p = .11), precluding it as a driver of the observed pattern of mediation. Third, we assessed the agent's market-price-tracking capability as an alternative mechanism. An ANOVA of price-tracking capability revealed a significant main effect of agent (F(2, 692) = 22.90, p < .001) and offer type (F(2, 692) = 3.38, p = .04). However, there was no interaction between the agent and the offer (F(2, 692) = .16, p = .86), resulting in a pattern inconsistent with explaining our results. Finally, ANOVA of the perception that the offer is controlled by another person revealed only a significant main effect of agent (F(2, 692) = 6.03, p = .01). The main effect of offer (F(2, 692) = 2.03, p = .13) and the interaction (F(2, 692) = 1.60, p = .20) were not significant, also resulting in a pattern inconsistent with explaining our results. Taken together, these results rule out multiple alternative explanations and lend further confidence to our conceptual model.

Discussion

The results of Study 2 extend our understanding of how offers that are discrepant from expectations systematically elicit different responses between human and AI agents. We replicate the differential human versus AI effects observed in Studies 1a and 1b, supporting H1. Specifically, worse-than-expected offers are more likely to be accepted when administered by an AI, whereas better-than-expected offers are more likely to be accepted when administered by a human. Moreover, our results reveal that inferred intentions mediate the observed effects, in support of H2. Specifically, a worse-than-expected (better-than-expected) offer from a human increases (decreases) inferred selfish intentions and decreases (increases) inferred benevolent intentions, which in turn decreases (increases) offer acceptance. However, an AI agent attenuates these pathways, thereby leading to an acceptance increase in the case of a worse-than-expected offer, but an acceptance decrease in the case of a better-than-expected offer, compared with a human agent. In addition, this study rules out several potential alternative process mechanisms.

Studies 3a and 3b: Anthropomorphism of AI Agents

In our final two studies, Studies 3a and 3b, we extend our work beyond the human–AI dichotomy to consider different forms of AI agents. We propose that AI agents may be anthropomorphized through their presentation to be perceived as more humanlike (vs. machinelike), thereby eliciting patterns more consistent with human agents. In doing so, we provide insights for managers on how to best depict their AI agents (i.e., as more humanlike vs. more machinelike) in situations where consumers will receive better- or worse-than-expected offers.

Research has revealed that endowing an AI with humanlike features typically makes people perceive the AI more positively. For example, consumers assessing an AI agent that controlled an autonomous vehicle became more trusting as that agent was presented as increasingly humanlike, rather than machinelike (Waytz, Heafner, and Epley 2014). Indeed, the marketing literature has shown that anthropomorphized products are predominately liked more and improve consumer engagement. For example, thinking about a product (e.g., a car) through an anthropomorphic perspective increases affection toward that product and leads to less willingness to replace it (Chandler and Schwarz 2010). Research has also shown that a product that resembles a human face in its design could increase perceived friendliness, leading to a positive evaluation of the product (Landwehr, McGill, and Herrmann 2011). As another instance, socially excluded consumers prefer anthropomorphized brands due to their need to establish relationships (Chen, Peng, and Levy 2017).

However, prior research investigating anthropomorphic AI has predominately focused on nonpurchase situations in which AI agents have been in a supporting role to consumers, such as providing safe transportation or providing medical advice. Our present inquiry builds on previous research by examining AI agents in a more adversarial role as administrators of product and service offers—that is, the consumer is sitting across the transaction table from the AI agent. In such transactional contexts, we propose that AI anthropomorphism will not have the generally favorable effect observed in many other contexts but will instead systematically either improve or worsen the impact of offers that differ from expectations.

The influence of AI anthropomorphism on response to discrepant offers will be driven by changes in the extent to which AI is perceived to be capable of intentionality (i.e., as defined by the previously discussed three elements of intentionality models; Brand 1984; Bratman 1987; Malle, Moses, and
Baldwin 2001), thereby altering the strength of inferred AI the role of anthropomorphism specified in H3, this effect will be
intentions. As an AI agent’s humanlikeness increases, so too even stronger for individuals who perceive technological
should its perceived capacity for intentionality. Indeed, a objects to be less humanlike, whereas the effect will attenuate
mind capable of intentionality has been argued to be a defining for consumers who view technological objects as more human-
aspect of what it means to be uniquely human, and items specif- like. We also use a different context to test the robustness of our
ically measuring related constructs are embedded in multiple theory: an ultimatum game setting that examines the likelihood
established anthropomorphism scales (e.g., Kim and McGill to accept both expected and unexpected allocations
2011; Waytz, Cacioppo, and Epley 2010). Extant research has (Morewedge 2009; Thaler 1988). Prior research has demon-
revealed that individuals are surprisingly willing to attribute strated that the ultimatum game is a highly controllable
advanced humanlike capabilities to anthropomorphized AI context in which human offers may be contrasted with offers
agents (Złotowski et al. 2015), potentially due in part to an over- proposed by a nonhuman agent (e.g., Sanfey et al. 2003). We
estimation of AI technical advancement stemming from fic- advance beyond previous research by examining the role of
tional literature and film (Pollack 2006) and to the nature of anthropomorphism of AI agents in the ultimatum context.
human inference that assumes similar internal properties Moreover, prior research has demonstrated that individuals
between entities that share common external appearances have predictable and strong expectations regarding offer terms
(Rozin and Nemeroff 2002). As such, AI agents that are (Suleiman 1996), making this a context to which our theoretical
viewed as increasingly anthropomorphic should be perceived predictions regarding worse-than-expected offers are particu-
to possess a greater capacity for intentionality, thereby generat- larly applicable.
ing stronger inferred intentions that increasingly elicit a
response akin to a human agent.
We thus posit that anthropomorphizing an AI agent through Method
the depiction of humanlike (vs. machinelike) properties will Participants and design. The experiment was a 2 (agent type:
asymmetrically influence responses to discrepant offers. In the human, AI) × 2 (offer type: worse than expected, expected)
case of a better-than-expected offer, an anthropomorphized AI between-subjects design. A total of 403 members of an
agent will lead to more positive downstream consequences MTurk online panel (Mage = 34.9 years; 64% female) partici-
(e.g., offer acceptance, satisfaction). Conversely, we hypothe- pated in this study in return for financial compensation.
size that the response to a worse-than-expected offer will
become more negative when the AI agent is endowed with Procedure. Participants first completed the full Individual
humanlike (vs. machinelike) properties. Stated formally, Differences in Anthropomorphism Questionnaire (IDAQ)
scale for trait anthropomorphism of nonhuman agents (Waytz,
H3: Anthropomorphism of an AI agent moderates the Cacioppo, and Epley 2010), which includes a subscale specific
response to price offers that are better or worse than to technological devices. The five-item technological anthropo-
expected: a humanlike AI agent makes worse-than-expected morphism scale measures the extent to which technology is
offer responses more negative but makes better-than- viewed as having humanlike mental and emotional qualities
expected offer responses more positive. (e.g., “To what extent does the average computer have a mind
of its own?,” “To what extent does a car have free will?”; 0 =
We test H3 in Studies 3a and 3b. In doing so, we provide valuable insights to managers who are faced with decisions regarding customer interactions with AI agents. If H3 is supported, our results suggest that selective anthropomorphism of AI agents could be applied to improve overall customer response to both worse- and better-than-expected offers.

Study 3a: The Moderating Role of Consumer Trait Anthropomorphism

In Study 3a, we extend our findings to consider the implications of anthropomorphism of AI agents for worse-than-expected offers. We do this utilizing an ecologically valid and managerially relevant segmentation measure: technology anthropomorphism (Waytz, Cacioppo, and Epley 2010). Technology anthropomorphism is the individual tendency to view technological products as possessing humanlike mental and emotional qualities. Consistent with our theory and H1, we predict that consumers will be more likely to accept a worse-than-expected offer from an AI than a human. Moreover, in concordance with

“not at all,” and 10 = “very much”; for the full list of IDAQ items, see Web Appendix E). We averaged the five items to form an index of technology anthropomorphism (α = .85). Then, participants were invited to participate in an ostensibly unrelated study on decision making. Participants were introduced to an ultimatum game in which the other player was either a “human” or an “artificially intelligent machine” who would administer an offer determining how $100 should be allocated. Previous research in behavioral economics has shown that an approximately even split (50%/50%) is both the modal offer by allocators and expectation by receivers, with splits even modestly outside of this range falling short of expectations and leading to an increasing tendency to reject the offer (Suleiman 1996). Thus, participants in the worse-than-expected offer condition received an offer of $10 to themselves and $90 to the other player, whereas participants in the expected offer condition received a $50/$50 offer. Participants then indicated their acceptance or rejection of the offer (1 = yes, 0 = no). Finally, participants responded to background questions (e.g., gender).
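The measures and conditions above can be sketched in code. The following Python snippet is purely illustrative: the data are fabricated and it is not the authors' analysis code. It averages five IDAQ-style items (0–10) into a per-participant trait-anthropomorphism index, computes the scale's Cronbach's alpha, and encodes the two ultimatum-game offer conditions over $100.

```python
# Illustrative sketch only -- fabricated data, not the study's materials.
from statistics import mean, pvariance

def cronbach_alpha(items):
    """items: k lists (one per scale item), each with one response per person."""
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    person_totals = [sum(person) for person in zip(*items)]
    return k / (k - 1) * (1 - item_variance_sum / pvariance(person_totals))

# Hypothetical 0-10 responses from four participants on five items
idaq_items = [
    [8, 2, 5, 7],
    [7, 3, 6, 6],
    [9, 1, 5, 8],
    [8, 2, 4, 7],
    [7, 2, 6, 7],
]

# Per-participant trait index: the mean of the five items
trait_index = [mean(person) for person in zip(*idaq_items)]  # [7.8, 2.0, 5.2, 7.0]

# Offer conditions: (amount to the participant, amount to the other player)
OFFERS = {
    "worse_than_expected": (10, 90),  # $10/$90 split
    "expected": (50, 50),             # $50/$50 split
}

print(trait_index, round(cronbach_alpha(idaq_items), 2))
```

With real responses, the α = .85 reported for the study's index would be computed the same way from the five observed items.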
Garvey et al. 19
transition within firms from human agents to AI agents as marketing representatives. First, our findings give insight into how this transition can be systematically and selectively managed to improve customer purchase tendencies, satisfaction, and reengagement. Indeed, whereas extant works regarding customer-facing AI have revealed primarily main effects of human-to-AI transitions on consumer engagement—that is, either humans or AI are generally superior in a given product or service domain (e.g., Longoni, Bonezzi, and Morewedge 2019; Mende et al. 2019)—our work reveals that the nature of the transaction (i.e., offers that are discrepant from expectations) is important. In other words, our findings inform when and where the transition should occur in product and service offer situations involving discrepant expectations. Managers could apply our findings to prioritize (vs. postpone) human-to-AI role transitions in situations where better-than-expected (vs. worse-than-expected) offers are more frequent and impactful. In addition, our results suggest that even when a role transition is not holistically passed to an AI, the selective recruitment of an AI agent to disclose worse-than-expected offers should still be advantageous—that is, our results reveal that an AI “bad cop”/human “good cop” approach to managing discrepant expectations should have beneficial outcomes for the firm. Moreover, our research builds on brand crisis research that explores consumer response to algorithmic errors (Srinivasan and Sarial-Abi 2021); however, our work reveals differences in perceived intentions, outside the context of algorithmic errors, related to whether an offer is administered by a human or an AI.

Second, and perhaps even more impactful, our work has implications for firms that have already transitioned to consumer-facing AI representatives, including the multitude of online and mobile applications that use AI-based algorithms to create and administer offers (e.g., Myer's “6-second sale”; Green 2017). In such posttransition instances where AI is already customer facing, our research reveals that AI may be depicted or otherwise framed as either machinelike or humanlike. Our manipulations show that this can be accomplished through differences in the embodied appearance of the AI (e.g., the robotic form shown in Study 3b). Moreover, the results of Study 3a suggest that the majority of contemporary consumers consider AI to be inherently machinelike (though Study 3a reveals that a minority of consumers do already consider AI to be moderately humanlike) unless humanlike imagery or verbiage is introduced. Thus, marketers should anticipate a default “machinelike” response toward AI agents unless specifically humanlike attributes are depicted. With this understanding, marketers should depict an AI agent as machinelike when disclosing a worse-than-expected offer but should depict more humanlike attributes in the case of a better-than-expected offer.

well-being. On the one hand, our work reveals opportunities for firms to receive due, and perhaps otherwise overlooked, credit for better-than-expected offers and potentially other behaviors that are ostensibly benevolent. On the other hand, regarding threats to consumer well-being, our work does reveal a tool through which marketers can increase the acceptance of worse-than-expected offers made to a customer, while also increasing intentions to reengage with the offending firm. In those instances where the worse-than-expected offer is objectively detrimental to consumers, the use of this approach does raise ethical concerns. By documenting these effects, we intend to benefit consumers and policy makers through strengthening their understanding of the nature of differential consumer responses to discrepant outcomes.

Importantly, we propose that this ethical dilemma predominately emerges in situations where consumers are ostensibly harmed—that is, where natural consumer resistance to unexpectedly high prices is bypassed by the selective use of AI. However, some situations do exist where increased consumer purchase would not result in objective consumer harm, and there are even select contexts where consumers and firms could jointly benefit from this effect. Consumers often hold unrealistic expectations for product and service offers due to self-serving biases or a lack of information: for example, a customer could anchor on a historic promotional price (Lin and Chen 2017) or observe that another customer received a lower price offer without knowledge of that other customer's greater earned loyalty status (Mayser and von Wangenheim 2013). Such misperceptions or biased interpretations could lead consumers to irrationally abandon transactions that are objectively beneficial. In such cases, consumers and firms would benefit from shifting consumer tendencies toward preserving the marketing relationship, thereby minimizing switching costs and maximizing consumer utility. Thus, the current research indicates ways managers can strengthen their customer relationships through AI agents when these findings are applied ethically.

The results of our current studies reveal that most consumers currently tend to see AI as not inherently possessed of benevolent or selfish intentions when administering offers. However, there is the possibility that some consumers infer that humans within companies are manipulating AI offers behind the scenes or otherwise have programmed the AI to indirectly carry out selfish human intentions. In such instances, the observed increase in acceptance of worse-than-expected offers from AI agents could be attenuated. Exploring whether certain segments of consumers hold such skeptical beliefs, or whether similar beliefs may increase over time as AI administration of offers becomes even more common, is a fruitful avenue for future research.
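The attenuation logic discussed in this section can be made concrete with a toy model: consumers who perceive an AI as more humanlike (whether by trait or by suspicion of human intentions behind it) should show a smaller AI acceptance advantage for worse-than-expected offers. The coefficients below are entirely hypothetical and are not fitted to the authors' data; the sketch only illustrates the shape of such an interaction.

```python
# Toy moderation sketch -- hypothetical coefficients, not fitted estimates.
import math

def p_accept(is_ai, trait_anthro, b0=-1.0, b_ai=1.5, b_trait=0.0, b_int=-0.15):
    """Logistic acceptance probability for a worse-than-expected offer.
    is_ai: 1 = AI agent, 0 = human agent; trait_anthro: 0-10 IDAQ-style score.
    The negative interaction (b_int) captures the idea that consumers high in
    anthropomorphism treat the AI more like a human, eroding the AI advantage."""
    z = b0 + b_ai * is_ai + b_trait * trait_anthro + b_int * is_ai * trait_anthro
    return 1 / (1 + math.exp(-z))

low, high = 1.0, 9.0  # low vs. high trait anthropomorphism
ai_advantage_low = p_accept(1, low) - p_accept(0, low)    # ~0.32
ai_advantage_high = p_accept(1, high) - p_accept(0, high)  # ~0.03
print(round(ai_advantage_low, 2), round(ai_advantage_high, 2))
```

Under these made-up numbers, the AI acceptance advantage is sizable for low-anthropomorphism consumers and nearly vanishes for high-anthropomorphism consumers, mirroring the moderation pattern the studies describe.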
AI agents more humanlike—can actually lead to decreased preference and increased disengagement. This is in contrast to previous research, which has shown predominately positive consequences of anthropomorphized products in the form of increased consumer engagement and product liking (Aggarwal and McGill 2007). Whereas extant research suggests that humanlike AI is more comforting and trusted (Longoni, Bonezzi, and Morewedge 2019; Waytz, Heafner, and Epley 2014), our findings reveal situations where humanlike AI may serve as a liability for maintaining the marketing relationship, particularly in contexts of discrepant expectations. Future research should explore other psycholinguistic and personality psychology factors incorporated into AI agent design (e.g., altering the gender or perceived cultural origins of the AI) and resulting changes in consumer perception and reactions.

Beyond contextual effects due to AI design elements, our research also indicates that consumers individually vary in their tendency to anthropomorphize AI agents, with subsequent implications for consumption. Our results reveal that individual differences in perceptions of technology anthropomorphism moderate offer acceptance in cases of worse-than-expected offers (Study 3a). Managers should consider integrating trait measures of technology anthropomorphism when developing consumer marketing segmentations relevant to AI interactions. Moreover, it is possible that other consumer-level personality traits will influence consumer–AI interactions (e.g., big five personality factors)—a potential topic for further research.

Associate Editor
Juliet Zhu

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

References
Aaker, Jennifer, Kathleen D. Vohs, and Cassie Mogilner (2010), “Nonprofits Are Seen as Warm and For-Profits as Competent: Firm Stereotypes Matter,” Journal of Consumer Research, 37 (2), 224–37.
Abele, Andrea E. and Bogdan Wojciszke (2007), “Agency and Communion from the Perspective of Self Versus Others,” Journal of Personality and Social Psychology, 93 (5), 751–63.
Aggarwal, Pankaj and Ann L. McGill (2007), “Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating Anthropomorphized Products,” Journal of Consumer Research, 34 (4), 468–79.
Anderson, Ronald D., Jack L. Engledow, and Helmut Becker (1979), “Evaluating the Relationships Among Attitude Toward Business, Product Satisfaction, Experience, and Search Effort,” Journal of Marketing Research, 16 (3), 394–400.
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, et al. (2018), “The Moral Machine Experiment,” Nature, 563 (7729), 59–64.
Barasch, Alixandra, Emma E. Levine, Jonathan Z. Berman, and Deborah A. Small (2014), “Selfish or Selfless? On the Signal Value of Emotion in Altruistic Behavior,” Journal of Personality and Social Psychology, 107 (3), 393–413.
Bausell, R. Barker and Yu-Fang Li (2002), Power Analysis for Experimental Research: A Practical Guide for the Biological, Medical and Social Sciences. Cambridge, UK: Cambridge University Press.
Bolton, Lisa E. and Anna S. Mattila (2015), “How Does Corporate Social Responsibility Affect Consumer Response to Service Failure in Buyer–Seller Relationships?” Journal of Retailing, 91 (1), 140–53.
Brand, Myles (1984), Intending and Acting: Toward a Naturalized Action Theory. Cambridge, MA: MIT Press.
Bratman, Michael E. (1987), Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Chandler, Jesse and Norbert Schwarz (2010), “Use Does Not Wear Ragged the Fabric of Friendship: Thinking of Objects as Alive Makes People Less Willing to Replace Them,” Journal of Consumer Psychology, 20 (2), 138–45.
Chen, Rocky Peng, Echo Wen Wan, and Eric Levy (2017), “The Effect of Social Exclusion on Consumer Preference for Anthropomorphized Brands,” Journal of Consumer Psychology, 27 (1), 23–34.
Darke, Peter R., Laurence Ashworth, and Kelley J. Main (2010), “Great Expectations and Broken Promises: Misleading Claims, Product Failure, Expectancy Disconfirmation and Consumer Distrust,” Journal of the Academy of Marketing Science, 38 (3), 347–62.
Davenport, Thomas, Abhijit Guha, Dhruv Grewal, and Timna Bressgott (2020), “How Artificial Intelligence Will Change the Future of Marketing,” Journal of the Academy of Marketing Science, 48 (1), 24–42.
Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2015), “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General, 144 (1), 114–26.
Du, Shuili, Chitrabhan B. Bhattacharya, and Sankar Sen (2007), “Reaping Relational Rewards from Corporate Social Responsibility: The Role of Competitive Positioning,” International Journal of Research in Marketing, 24 (3), 224–41.
Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, et al. (2017), “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks,” Nature, 542 (7639), 115–18.
Evangelidis, Ioannis and Stijn M.J. Van Osselaer (2018), “Points of (Dis)Parity: Expectation Disconfirmation from Common Attributes in Consumer Choice,” Journal of Marketing Research, 55 (1), 1–13.
Fisk, Raymond P. and Clifford E. Young (1985), “Disconfirmation of Equity Expectations: Effects on Consumer Satisfaction with Services,” in NA – Advances in Consumer Research, Vol. 12, Elizabeth C. Hirschman and Morris B. Holbrook, eds. Provo, UT: Association for Consumer Research, 340–45.
Gray, Heather M., Kurt Gray, and Daniel M. Wegner (2007), “Dimensions of Mind Perception,” Science, 315 (5812), 619.
Gray, Kurt (2012), “The Power of Good Intentions: Perceived Benevolence Soothes Pain, Increases Pleasure, and Improves Taste,” Social Psychological and Personality Science, 3 (5), 639–45.
Gray, Kurt and Daniel M. Wegner (2008), “The Sting of Intentional Pain,” Psychological Science, 19 (12), 1260–62.
Gray, Kurt, Liane Young, and Adam Waytz (2012), “Mind Perception Is the Essence of Morality,” Psychological Inquiry, 23 (2), 101–24.
Green, Ricki (2017), “Myer Launches the ‘6 Second Sale’ Campaign on YouTube via Clemenger BBDO, Melbourne,” Campaign Brief (June 1), https://campaignbrief.com/myer-launches-the-6-second-sal/.
Güroğlu, Berna, Wouter van den Bos, and Eveline A. Crone (2009), “Fairness Considerations: Increasing Understanding of Intentionality During Adolescence,” Journal of Experimental Child Psychology, 104 (4), 398–409.
Harris, Karen, Austin Kimson, and Andrew Schwedel (2018), “Labor 2030: The Collision of Demographics, Automation and Inequality,” Bain & Company, research report (February 7), https://www.bain.com/insights/labor-2030-the-collision-of-demographics-automation-and-inequality/.
Hayes, Andrew F. (2017), Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York: Guilford Publications.
Hilbig, Benjamin E., Isabel Thielmann, Johanna Hepp, Sina A. Klein, and Ingo Zettler (2015), “From Personality to Altruistic Behavior (and Back): Evidence from a Double-Blind Dictator Game,” Journal of Research in Personality, 55 (April), 46–50.
Huang, Szu-chi and Fangyuan Chen (2019), “When Robots Come to Our Rescue: Why Professional Service Robots Aren’t Inspiring and Can Demotivate Consumers’ Pro-Social Behaviors,” in NA – Advances in Consumer Research, Vol. 47, Rajesh Bagchi, Lauren Block, and Leonard Lee, eds. Duluth, MN: Association for Consumer Research, 93–98.
Huang, Ming-Hui and Roland T. Rust (2018), “Artificial Intelligence in Service,” Journal of Service Research, 21 (2), 155–72.
Hughes, Claretha, Lionel Robert, Kristin Frady, and Adam Arroyos (2019), “Artificial Intelligence, Employee Engagement, Fairness, and Job Outcomes,” in Managing Technology and Middle- and Low-Skilled Employees. Bingley, UK: Emerald Publishing, 61–68.
Kim, TaeWoo and Adam Duhachek (2020), “Artificial Intelligence and Persuasion: A Construal Level Account,” Psychological Science, 31 (4), 364–80.
Kim, Sara and Ann L. McGill (2011), “Gaming with Mr. Slot or Gaming the Slot Machine? Power, Anthropomorphism, and Risk Perception,” Journal of Consumer Research, 38 (1), 94–107.
Kim, Seo Young, Bernd H. Schmitt, and Nadia M. Thalmann (2019), “Eliza in the Uncanny Valley: Anthropomorphizing Consumer Robots Increases Their Perceived Warmth but Decreases Liking,” Marketing Letters, 30 (1), 1–12.
Kumar, V., Ashutosh Dixit, Rajshekar Raj G. Javalgi, and Mayukh Dass (2016), “Research Framework, Strategies, and Applications of Intelligent Agent Technologies (IATs) in Marketing,” Journal of the Academy of Marketing Science, 44 (1), 24–45.
Landwehr, Jan R., Ann L. McGill, and Andreas Herrmann (2011), “It’s Got the Look: The Effect of Friendly and Aggressive ‘Facial’ Expressions on Product Liking and Sales,” Journal of Marketing, 75 (3), 132–46.
Lin, Chien-Huang and Ming Chen (2017), “Follow Your Heart: How Is Willingness to Pay Formed Under Multiple Anchors?” Frontiers in Psychology, 8 (2269), 1–9.
Longoni, Chiara, Andrea Bonezzi, and Carey Morewedge (2019), “Resistance to Medical Artificial Intelligence,” Journal of Consumer Research, 46 (4), 629–50.
MacDorman, Karl F., Sandosh K. Vasudevan, and Chin-Chang Ho (2009), “Does Japan Really Have Robot Mania? Comparing Attitudes by Implicit and Explicit Measures,” AI & Society, 23, 485–510.
Malle, Bertram F., Louis J. Moses, and Dare A. Baldwin (2001), “Introduction: The Significance of Intentionality,” Intentions and Intentionality: Foundations of Social Cognition, 4 (1), 1–26.
Mayser, Sabine and Florian von Wangenheim (2013), “Perceived Fairness of Differential Customer Treatment: Consumers’ Understanding of Distributive Justice Really Matters,” Journal of Service Research, 16 (1), 99–113.
Mende, Martin, Maura L. Scott, Jenny van Doorn, Dhruv Grewal, and Ilana Shanks (2019), “Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses,” Journal of Marketing Research, 56 (4), 535–56.
Morewedge, Carey K. (2009), “Negativity Bias in Attribution of External Agency,” Journal of Experimental Psychology: General, 138 (4), 535–45.
Mori, Masahiro, Karl F. MacDorman, and Norri Kageki (2012), “The Uncanny Valley,” IEEE Robotics & Automation Magazine, 19 (2), 98–100.
Newcomer, Eric (2017), “Uber Starts Charging What It Thinks You’re Willing to Pay,” Bloomberg News (May 19), https://www.bloomberg.com/news/articles/2017-05-19/uber-s-future-may-rely-on-predicting-how-much-you-re-willing-to-pay.
Oliver, Richard L. (1980), “A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions,” Journal of Marketing Research, 17 (4), 460–69.
Oliver, Richard L., P.V. Sundar Balakrishnan, and Bruce Barry (1994), “Outcome Satisfaction in Negotiation: A Test of Expectancy Disconfirmation,” Organizational Behavior and Human Decision Processes, 60 (2), 252–75.
Oliver, Richard L. and Wayne S. DeSarbo (1988), “Response Determinants in Satisfaction Judgments,” Journal of Consumer Research, 14 (4), 495–507.
Pollack, Jordan B. (2006), “Mindless Intelligence,” IEEE Intelligent Systems, 21 (3), 50–56.
Radke, Sina, Berna Güroğlu, and Ellen R.A. de Bruijn (2012), “There’s Something About a Fair Split: Intentionality Moderates Context-Based Fairness Considerations in Social Decision-Making,” PLoS One, 7 (2).
Rand, David G., George E. Newman, and Owen M. Wurzbacher (2015), “Social Context and the Dynamics of Cooperative Choice,” Journal of Behavioral Decision Making, 28 (2), 159–66.
Rozin, Paul and Carol Nemeroff (2002), “Sympathetic Magical Thinking: The Contagion and Similarity ‘Heuristics’,” in Heuristics and Biases: The Psychology of Intuitive Judgment, Thomas Gilovich, Dale Griffin, and Daniel Kahneman, eds. New York: Cambridge University Press, 201–16.
Russell, Stuart J. and Peter Norvig (2010), Artificial Intelligence: A Modern Approach. Hoboken, NJ: Pearson Education.
Sanfey, Alan G., James K. Rilling, Jessica A. Aronson, Leigh E. Nystrom, and Jonathan D. Cohen (2003), “The Neural Basis of Economic Decision-Making in the Ultimatum Game,” Science, 300 (5626), 1755–58.
Spreng, Richard A., Scott B. MacKenzie, and Richard W. Olshavsky (1996), “A Reexamination of the Determinants of Consumer Satisfaction,” Journal of Marketing, 60 (3), 15–32.
Srinivasan, Raji and Gülen Sarial-Abi (2021), “When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors,” Journal of Marketing, 85 (5), 74–91.
Suleiman, Ramzi (1996), “Expectations and Fairness in a Modified Ultimatum Game,” Journal of Economic Psychology, 17 (5), 531–54.
Thaler, Richard H. (1988), “Anomalies: The Ultimatum Game,” Journal of Economic Perspectives, 2 (4), 195–206.
Tsiros, Michael, Vikas Mittal, and William T. Ross Jr. (2004), “The Role of Attributions in Customer Satisfaction: A Reexamination,” Journal of Consumer Research, 31 (2), 476–83.
Turner, Karen (2016), “Meet ‘Ross,’ the Newly Hired Legal Robot,” The Washington Post (May 16), https://www.washingtonpost.com/news/innovations/wp/2016/05/16/meet-ross-the-newly-hired-legal-robot/.
Waytz, Adam, John Cacioppo, and Nicholas Epley (2010), “Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism,” Perspectives on Psychological Science, 5 (3), 219–32.
Waytz, Adam, Joy Heafner, and Nicholas Epley (2014), “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle,” Journal of Experimental Social Psychology, 52 (May), 113–17.
Williams, Paul and Earl Naumann (2011), “Customer Satisfaction and Business Performance: A Firm-Level Analysis,” Journal of Services Marketing, 25 (1), 20–32.
Wirtz, Jochen, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch, et al. (2018), “Brave New World: Service Robots in the Frontline,” Journal of Service Management, 29 (5), 907–31.
Wojciszke, Bogdan, Andrea E. Abele, and Wiesław Baryla (2009), “Two Dimensions of Interpersonal Attitudes: Liking Depends on Communion, Respect Depends on Agency,” European Journal of Social Psychology, 39 (6), 973–90.
Złotowski, Jakub, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck (2015), “Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction,” International Journal of Social Robotics, 7 (3), 347–60.