
Received: 29 April 2020 | Revised: 26 December 2020 | Accepted: 9 April 2021

DOI: 10.1002/mar.21498

RESEARCH ARTICLE

When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations

Jungkeun Kim1 | Marilyn Giroux1 | Jacob C. Lee2,3

1 Department of Marketing, Auckland University of Technology, Auckland, New Zealand
2 Department of Business Administration, Dongguk University, Seoul, Korea
3 Department of Artificial Intelligence, Dongguk University, Seoul, Korea

Correspondence
Jacob C. Lee, Department of Business Administration and Department of Artificial Intelligence, Dongguk University, 30 Pildong‐ro 1‐gil, Seoul 04620, Korea.
Email: lee.jacob.c@gmail.com

Funding information
This study was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT), Grant/Award Number: 2019‐0‐00050

Abstract

When do consumers trust artificial intelligence (AI)? With the rapid adoption of AI technology in the field of marketing, it is crucial to understand how consumer adoption of the information generated by AI can be improved. This study explores a novel relationship between the number presentation details associated with AI and consumers' behavioral and evaluative responses toward AI. We theorized that consumer trust would mediate the preciseness effect on consumer judgment and evaluation of the information provided by AI. The results of five studies demonstrated that the use of a precise (vs. imprecise) information format leads to higher evaluations and behavioral intentions. We also show mediational evidence indicating that the effect of number preciseness is mediated by consumer trust (Studies 2, 4, and 5). We further show that the preciseness effect is moderated by the accuracy of AI‐generated information (Study 3) and the objective product quality of the recommended products (Study 4). This study provides theoretical implications for the AI acceptance, information processing, consumer trust, and decision‐making literatures. Moreover, it offers practical implications for marketers of AI businesses, including those who strategically use AI‐generated information.

KEYWORDS
AI adoption, AI technology, artificial intelligence, marketing, preciseness, recommendation, trust

1 | INTRODUCTION

For years, advances in artificial intelligence (AI) have radically shifted how marketing strategies are implemented in terms of customer service, business relationships, and sales management (Bock et al., 2020; Kaartemo & Helkkula, 2018; Lu et al., 2020; Martínez‐López & Casillas, 2013). Recommendation systems are an efficient way for companies to suggest products and services to customers based on data analysis of their personal preferences and behaviors. The use of AI‐based recommendation agents in the online marketplace is increasingly prevalent (Davenport et al., 2020). For example, Netflix displays a narrower selection of TV shows and movies that people are expected to enjoy rather than presenting its entire catalog to each viewer. This contributes to the overall experience of consumers and results in considerable gains for the company. In addition, companies like Amazon.com integrate recommendations into the purchasing process, and these systems have expanded into several other domains such as retail, travel, and finance.

Jungkeun Kim, Marilyn Giroux, and Jacob C. Lee contributed equally to this study.

1140 | © 2021 Wiley Periodicals LLC wileyonlinelibrary.com/journal/mar Psychol Mark. 2021;38:1140–1155.


15206793, 2021, 7, Downloaded from https://onlinelibrary.wiley.com/doi/10.1002/mar.21498 by Universidad Tecnica Federico, Wiley Online Library on [11/10/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
KIM ET AL. | 1141

Many retailers and service companies are using AI to understand their customers' preferences and desires, with varying levels of success and accuracy. Despite the expanding adoption of AI and increasing use of AI‐based recommendation agents, our understanding of consumers' cognitive reactions toward these emerging technologies is still limited. Prior research has demonstrated consumer resistance toward adopting AI recommendations (e.g., algorithm aversion, see Dietvorst et al., 2015). In addition to privacy and regulatory issues, this stream of research suggests that consumers do not trust AI's ability to make accurate inferences regarding their preferences (Dietvorst et al., 2015; Longoni et al., 2019). A lack of sufficient data and information can lead to inaccurate recommendations from the AI recommendation system or agent. In addition, previous research in healthcare and customer support has shown that individuals may experience a great level of discomfort and perceived risk when dealing with AI robots and recommendation systems (Huang & Rust, 2018). Adoption and acceptance of new technologies can be complex, as those innovations come with many changes that often translate into higher levels of perceived risk and uncertainty for individuals. Thus, a significant research question is how to increase the understanding of when consumers trust AI agents' recommendations. Extant research has found several factors that increase trust toward AI. For example, consumers respond to AI more favorably when it is used to support the analytical (vs. intuitive) aspects of decision‐making (Jarrahi, 2018), but consumers react to AI less favorably when the decisions are more intuitive and of the "common‐sense" type (Guszcza et al., 2017; McAfee et al., 2012).

Meanwhile, other research has suggested the opposite pattern of consumer trust in AI recommendations: algorithm appreciation (Logg et al., 2019). Indeed, the algorithmic judgments of AI systems are often more accurate and precise than human assessments (Dawes et al., 1989). With large amounts of data, these behavioral signals can become insights for companies, and greater personalization of offers and services can be achieved, which are then perceived to be more suited for consumers, ultimately leading to more purchase intentions, engagement, and click‐through conversions (Aguirre et al., 2015). Therefore, variables that can guide the usage of AI recommendations in decision‐making must be determined.

Given the recent substantial developments in the area of AI and AI‐generated recommendations, the objective of this paper is threefold: (a) examine conditions under which individuals are more willing to accept AI suggestions and recommendations, (b) investigate the impact of information preciseness on individuals' reactions to AI, and (c) examine how trust affects this process. Specifically, we focus on the nudging way of improving AI recommendations by examining the preciseness of the recommendation information. The existing literature suggests that people believe that AI currently has good skills for mechanical and analytical tasks as opposed to intuitive and empathetic ones (Huang & Rust, 2018). One salient characteristic of mechanical and analytical tasks could be "precise" or "logical and analytic." Therefore, we can expect that an AI recommendation will be evaluated more highly when the recommendation communication is precise (vs. nonprecise). For example, when forecasting the next day's probability of rain, people might prefer an AI recommendation containing precise information (e.g., 59.85% chance of rain) to one containing imprecise or rounded information (e.g., 60% chance of rain). We further suggest that user trust in AI recommendations drives the positive effect of AI recommendations providing precise information.

Accordingly, this study makes several significant theoretical and managerial contributions. First, it contributes to the literature on marketing and the usage of technologies, specifically the AI literature, by examining a factor that can influence consumer reactions to AI recommendation systems. This is significant because prior research has obtained contradictory findings on preferences and trust regarding AI recommendations. Indeed, by incorporating the notion of information preciseness, we contribute to a better understanding of how cognitive processing contributes to the acceptance of AI recommendations. Second, we advance trust as a crucial construct for understanding people's behaviors toward AI. With new technologies, individuals can experience discomfort and uncertainty, and developing trust toward AI plays a pivotal role in adoption and acceptance. We show that the preciseness of the information provided influences the level of trust and that this trust can impact preferences and decisions related to AI recommendations. This provides an answer about how cognitive trust can be developed between humans and AI. In addition, our research expands the literature in behavioral economics by showing that AI‐generated recommendations represent an anchor in consumer evaluations and decision‐making processes. Finally, the results of this study have substantive implications. Although AI is revolutionizing the business world and consumers are exposed to an increasing amount of AI‐generated content, more research is needed to provide guidance in terms of the best ways to communicate with individuals. This paper contributes to our understanding of how to frame information to increase acceptance of AI recommendations. The findings have straightforward implications for companies to make recommendations successfully and enhance consumer experiences with their organizations.

2 | THEORETICAL FRAMEWORK AND PREDICTIONS

2.1 | AI technologies in business and marketing

AI relates to "programs, algorithms, systems and machines" (Shankar, 2018, p. vi) that manifest and imitate elements of human intelligence and behavior (Huang & Rust, 2018; Syam & Sharma, 2018). AI uses various technologies, including machine learning, natural language processing, deep learning, big data analysis, and physical robots (Davenport et al., 2020; Mariani, 2019; Mariani et al., 2018). AI technologies have developed considerably over the past decades and are becoming an important part of consumers' daily lives.

In business, AI technologies allow for the efficient processing of vast amounts of data and the learning of relationships among the

data, potentially leading to consumer insights and enhanced personalization. The use of AI can significantly enhance various functions of a business, including the automation of business processes; the acquisition of marketing insights from data; engagement and interaction with customers and employees; the development of planning strategies, such as segmentation and analytics, predictions of trends and consumer behavior; and the recommendations made to customers (Davenport et al., 2020). Companies from various industries, including healthcare (Longoni et al., 2019; Zang et al., 2015), military (Amoroso et al., 2018) and hospitality (Adomavicius & Tuzhilin, 2005), have adopted AI technology and used it in various contexts, including voice assistants (Moriuchi, 2019) and companion robots (Davenport et al., 2020).

In the context of marketing, AI is revolutionizing how companies and organizations create content, advertise, and make recommendations (de Ruyter et al., 2018). Marketers must understand how to best integrate AI in businesses and contribute to individuals' higher acceptance to maintain a competitive advantage. Given the arrival of AI‐generated marketing, a critical research question is to understand whether and how consumers accept AI‐generated content and information. In the past decade, researchers have identified potential issues and areas to develop to gain a deeper understanding. For instance, Bock et al. (2020) explored the internal and external values provided by integrating AI technologies in service environments. Technologies, such as AI and robots, are critical agents in value cocreation and technology engagement and experience (Kaartemo & Helkkula, 2018). Technological developments shape individuals' motivations, expectations, and concerns, and marketers must better comprehend how they can influence and alter consumer behaviors and experiences (Lu et al., 2020). Based on the key issues identified around the management of consumers' relationships and engagement (Lu et al., 2020; Martínez‐López & Casillas, 2013), this paper generates a better understanding of how AI can be a critical factor in the transformation of behaviors. Moreover, it examines how cognitive processing influences the acceptance of those technologies by enhancing the general feeling of trust.

2.2 | AI‐generated information acceptance or avoidance

One of the biggest strengths of AI is the ability to acquire information and solve problems based on deep learning and knowledge creation and transfer (De Bruyn et al., 2020). With the capacity of AI to collect large amounts of data from individuals' history and interactions, AI recommendation engines can produce fast and relevant recommendations that reflect each person's preferences and wants. With AI's superb computational abilities, forecasts made by statistical models tend to outperform human instinct. For example, statistical models outperform human intuition in predicting student grades (Dawes, 1979), employee performance (Kuncel et al., 2013), and market demand (Sanders & Manrodt, 2003). With its large capacity for analyzing data and behavioral signals from customers, AI can transform those inputs into insights and improve personalized recommendations (Pardes, 2019). The increasing adoption of AI technology suggests that it provides greater performance and that consumers acknowledge and appreciate its unique capabilities.

Despite the superior performance and accuracy of judgments made by statistical models, research in marketing and psychology documents people's skepticism toward accepting AI recommendations. As is often the case with new and emerging technologies, individuals continue to have important reservations due to the significant levels of risk and uncertainty associated with those novelties (Seegebarth et al., 2019). Resistance, anxiety, and ambivalence toward new technologies represent important aspects of the success and adoption of those technologies (Talwar et al., 2020). Research has shown that some people tend to prefer human recommendations over those made by statistical models (Castelo et al., 2018; Dietvorst et al., 2015; Longoni et al., 2019). Resistance to statistical models' recommendations has been observed in various AI applications, including book suggestions (Yeomans et al., 2019), judgments for recruiters (Highhouse, 2008), predictions of employee performance (Kuncel et al., 2013) and crime (Kleinberg et al., 2017), consumer preferences (Sinha & Swearingen, 2001), medical decision‐making (Longoni et al., 2019), and doctor preferences (Keeffe et al., 2005).

Prior research has investigated various task type factors that cause people to accept or avoid AI‐generated information. One important aspect is the task accomplished by the AI system. People avoid AI when the task is subjective and involves intuition or affect (Castelo, 2019). This is because AI is perceived to lack affective capabilities or empathy (Castelo et al., 2018). In addition, consumers avoid AI when the task domain involves intuitive (vs. analytical) aspects of decision‐making (Jarrahi, 2018) or "common‐sense" decisions (Guszcza et al., 2017; McAfee et al., 2012). This is related to the general belief that although AI shows superb performance and skills in mechanical and analytical tasks (Castelo, 2019), it would be less functional in performing intuitive and empathetic tasks (Huang & Rust, 2018). The key characteristic of AI is its ability to process large amounts of data, identify data‐driven patterns, and thus make appropriate and scientific recommendations (Marwala & Hurwitz, 2015). In the same vein, people tend not to associate AI applications with autonomous goals (Kim & Duhacheck, 2020) and will thus be less responsive to messages that highlight "why" rather than "how." Indeed, people accept AI less for more consequential and risky tasks.

2.3 | Preciseness and information processing

Marketers must understand the psychological factors that influence consumers' acceptance of AI‐generated information and how to induce more favorable consumer responses toward their AI‐generated information and marketing. As one novel way to increase AI technology acceptance among consumers, this study proposes that the

number preciseness of the recommendation information will critically influence consumer responses to AI technology.

Various researchers have suggested that the preciseness of the information presented to someone can significantly influence their evaluation and judgment of that information. First, the impact of preciseness on information processing has been investigated most frequently in the pricing literature. One significant stream of research has involved comparing a price ending in "9" and a price ending in "0," which relates to the preciseness of the information presented. Traditionally, researchers have found that consumers prefer prices with the precise ending (e.g., the "9" ending) to those with the less precise ending (e.g., the "0" ending). This phenomenon is mainly attributable to two explanations: (i) an association‐based explanation (i.e., the tendency to associate a product with a price ending in "9" as the cheaper product) (Naipaul & Parsa, 2001; Stiving, 2000) and (ii) a heuristic process‐based explanation (i.e., the tendency to ignore or drop off the precise component; e.g., ignoring the ".99 cent" component of a $3.99 price and instead considering it as $3) (Basu, 1997; Stiving & Winer, 1997). In addition, the preciseness of the information can influence the meaning of the price: Kim et al. (2020) suggested that people interpret a precise price ($199) as a cost or sacrifice of the product, whereas they interpret an imprecise price ($200) as a benefit or quality of the product.

Another stream of research (Burson et al., 2009; Gourville, 1998; Monga & Bagchi, 2012) has examined the different framing effects regarding the basic unit presented (e.g., 85 cents per day [pennies‐a‐day strategy] vs. $300 per year). Researchers have found that people prefer the product with precise (versus nonprecise) information, mainly due to transaction utility (Gourville, 1998). When comparing two pieces of attribute information, people prefer the option with precise information (e.g., for a movie‐rental plan, number of movies per week: seven and price per month: $10) to the same option with less precise information (e.g., number of movies per year: 364 and price per month: $10) (Burson et al., 2009). In short, the previous literature provides empirical evidence that the preciseness of the price and unit significantly influences our preference and value judgments.

The existing literature suggests two opposing conclusions regarding the impact of preciseness on the evaluation of the product. On the one hand, one stream of research shows a positive effect of precise information on product evaluation. According to this stream, the preciseness of information significantly influences confidence and credibility regarding that information (Jerez‐Fernandez et al., 2014; Zhang & Schwarz, 2013). Indeed, people exhibit greater trust in others' evaluations when the estimation is precise (e.g., the estimated length of the Niger River: 2611 miles) rather than less precise (e.g., 2600 miles) (Jerez‐Fernandez et al., 2014). In addition, Xie and Kronrod (2012) found that people prefer product information using precise numbers (e.g., 19.42%) rather than less precise numbers (e.g., 20%). This effect was explained by the disfluency effect, in that precise information requires elaborate processing, resulting in greater attention and evaluation. They further suggested that the positive impact of precise information held only for people with low advertising skepticism. Put differently, people prefer precise information to aid their decision‐making and judgment.

On the other hand, another research stream suggests a negative effect of precise information on evaluations. For example, a rounded (and less precise) number (e.g., 200 mg) is associated with the concept of stability, as opposed to a precise number (e.g., 203 mg) (Pena‐Marin & Bhargave, 2016). This difference also influences further inferred quality, such as the perception of the duration of the product's benefits. In addition, less precise (vs. precise) numbers are easier to process and have higher processing fluency, resulting in more positive evaluations (King & Janiszewski, 2011; Wadhwa & Zhang, 2015). Furthermore, an imprecise number (e.g., $365,000) compared to a precise number (e.g., $364,578) could generate a feeling of uncertainty in the judgment and decision‐making process. For example, Thomas et al. (2010) argued that a precise (vs. imprecise) number requires much deeper processing, which consequently evokes a feeling of uncertainty and therefore reduces product evaluations. In sum, the previous literature has suggested conflicting patterns regarding the impact of preciseness on evaluation.

2.4 | Hypothesis 1: Precise format increases consumer response

By linking the literature on AI technology acceptance and the psychology of price preciseness, we make a novel prediction that high (vs. low) preciseness of AI‐generated information elicits positive consumer responses toward the entities associated with the AI technology. We make this prediction based on the following. First, the existing literature suggests that people believe current AI performs excellently in automated and analytical tasks as opposed to intuitive and empathetic ones (Huang & Rust, 2018). We view that a salient characteristic of mechanical and analytical tasks is "precise" or "logical and analytic." Similarly, people associate robots with agency‐related capabilities (e.g., self‐control, morality, memory, emotion recognition, planning, communication, and thought), which overlap more closely with "preciseness" than experience‐related capabilities (e.g., pleasure, personality, consciousness, pride, embarrassment, and joy) (Gray et al., 2007). Thus, we view that a precise format of AI‐generated information would signal that the performed task is analytic and automated, thereby resulting in more favorable consumer responses.

Second, given that consumers' confidence level is an essential factor for adopting new technology (Lee, 2004; Li et al., 2008), we argue that the precise format, which is associated with high confidence (Jerez‐Fernandez et al., 2014; Zhang & Schwarz, 2013), will lead to favorable consumer responses. A competing hypothesis might suggest that information preciseness would elicit negative consumer responses because preciseness is associated with low processing fluency (King & Janiszewski, 2011; Wadhwa &

Zhang, 2015). However, we view any negative influence due to low fluency to be minimal because, given that AI is a new technology, consumers will process AI‐generated information thoughtfully. Therefore, the negative underlying mechanism of precise information (i.e., processing fluency) will be minimal for AI‐involved decision‐making and judgment, especially in a new environment. Based on this reasoning, we make the following hypothesis:

H1: Consumers show more favorable (a) behavioral intention and (b) evaluation toward the company when the AI‐generated information is presented in a precise (vs. imprecise) format.

2.5 | Hypothesis 2: Trust mediates the effect of precise format increasing consumer response

As the key mechanism for information preciseness inducing favorable consumer responses, consumers' perceived trust is the focus of this study. Several definitions of trust have emphasized the importance of the perceived confidence, honesty, and reliability of the other entity (Crosby et al., 1990; Morgan & Hunt, 1994). Indeed, Morgan and Hunt (1994) describe trust as the perception of "confidence in the exchange partner's reliability and integrity" (p. 23). Research also suggests that trust has both cognitive and affective dimensions. Cognitive trust relates more to the perception of the ability (e.g., skills, knowledge, and competencies), reliability, and integrity of the trustee, whereas the affective dimension involves the perceived benevolence and disposition to do good of the trustee (Johnson & Grayson, 2005; Mayer et al., 1995). Cognition related to available information and valid logic is the foundation of trust (Ye et al., 2020). Integrating these streams of research, given that crucial elements of trust are reliability and ability, we propose that the preciseness of AI‐generated information will greatly strengthen trust toward the AI's recommendations, thus resulting in more favorable consumer responses toward the AI company.

Consumer trust is viewed as a significant factor in consumers' decision‐making and behaviors, and researchers have put a lot of emphasis on understanding how to build and manage business strategies that contribute to a higher level of trust (Garbarino & Johnson, 1999; Morgan & Hunt, 1994). The importance of trust has been investigated in several domains, such as communications, services marketing, branding, and sales management (Berry & Parasuraman, 1991; Eisingerich & Bell, 2008; Hovland et al., 1953; Sung & Kim, 2010).

Related to our research, past research has suggested that trust is a prominent factor in the development and adoption of new technologies and products. Building trust was a major factor for success in the development of relationships (Bart et al., 2005; Berry, 1995; Slade et al., 2015). Trust has also been a critical concept in academic research related to online and technology‐mediated environments (Dimitriadis & Kyrezis, 2010; Schlosser et al., 2006; Yang et al., 2006). Emerging technologies are often associated with greater complexity and uncertainty, requiring confidence and trust in the systems (Seegebarth et al., 2019). Existing studies on recommendation agents have exposed the importance of trust in acceptance of, and reactance toward, their recommendations (Aljukhadar et al., 2017; Wang & Benbasat, 2005).

We believe that the role of trust is especially strong in consumers' reactions toward AI companies or products. Trust is particularly critical when individuals lack control over the other party (Lynch et al., 2001). Indeed, compared to interpersonal relationships and traditional businesses, human/machine interactions require new means of creating trust, and more research is needed into how individuals place confidence in AI interactions (Bock et al., 2020). Compared to other technologies, AI has many unique and unfamiliar characteristics, and its performance and process need to be better defined. With the vulnerability related to AI, trust has been suggested to be a crucial antecedent of the development and successful adoption of AI (Ostrom et al., 2018).

Integrating the AI and trust literature, we suggest that the preciseness of AI‐generated information will greatly strengthen trust toward the AI's recommendations, which will, in turn, lead to higher evaluations and intentions from individuals. More precise information is believed to be associated with a higher quality of communication and content. Therefore, this study proposes that trust will mediate the relationship between the preciseness of AI‐generated information and consumer evaluations.

H2: The perceived trust for AI mediates the impact of the format preciseness of AI‐generated information on consumers' (a) behavioral intention and (b) evaluation.

FIGURE 1 Overall framework

2.6 | Present studies

Five studies tested our hypotheses in various contexts (see Figure 1 for the overall theoretical framework). Study 1 tested for initial evidence of the preciseness effect on consumers' response in the context of buying a company's stock. Study 2 tested for the mediating role of trust that explains the preciseness effect in an AI‐based online music

recommendation context. Studies 3 and 4 tested the boundary conditions of the preciseness effect; Study 3 tested whether the accuracy of AI‐generated information moderates the impact of preciseness, and Study 4 tested whether objective product quality influences the preciseness effect. Study 5 replicated the results of Study 1 with better‐defined measures (i.e., multiple‐item measures) and different experimental stimuli. See Table 1 for a detailed summary of our empirical studies.

TABLE 1 Summary of empirical studies

Study 1. Context: AI system that detects Coronavirus using chest CT scans. Research design: IV: number presentation detail (precise [74.975%] vs. imprecise [75%]); DV: purchase intention (stock). Sample size (n): 114. Results: H1 supported.

Study 2. Context: AI‐based online music recommendation. Research design: IV: number presentation detail (precise I [80.135%] vs. precise II [80.000%] vs. imprecise [80%]); DVs: attitude, consumer trust, and willingness to recommend. Sample size (n): 155. Results: H1 and H2 supported.

Study 3. Context: AI system that estimates weight using machine learning. Research design: IV: 2 (number presentation detail: precise vs. imprecise) × 2 (estimation direction: low vs. high relative to the actual weight) × 2 (estimation accuracy of AI estimations: similar to vs. different from the actual weight); DV: consumer adoption of AI's estimation. Sample size (n): 436. Results: H3 supported.

Study 4. Context: AI‐based online book recommendation engine. Research design: IV: 2 (number presentation detail: precise [79.865%] vs. imprecise [80%]) × 2 (objective product quality: high vs. low); DVs: consumer trust and purchase intention. Sample size (n): 157. Results: H4 supported.

Study 5. Context: AI system that detects Coronavirus using chest CT scans. Research design: IV: number presentation detail (precise [78.025%] vs. imprecise [78%]); DVs: attitude, consumer trust, and purchase intention (stock). Sample size (n): 131. Results: H1 and H2 supported.

3 | STUDY 1

The purpose of Study 1 was to provide initial evidence of the preciseness effect of AI recommendations on consumers' responses (H1a). Participants evaluated AI‐generated number information about a company's product, presented in either a precise or an imprecise format. We predicted that participants' responses toward the company would be more positive when the information was presented in a precise format.

3.1 | Method

3.1.1 | Pretest

To test the validity of our preciseness manipulation, a pretest was conducted (n = 51, Mage = 38.12, SD = 12.71; 49.1% female) with participants recruited via MTurk. Participants were randomly assigned to evaluate one of the two number information formats (74.975% [precise] vs. 75% [imprecise]) used in the main study (see the section below for details). Participants indicated the perceived preciseness of the accuracy rating of the AI system (i.e., "How detailed is the accuracy rating of the AI system?") along two 7‐point scales (1 = not at all precise/detailed, 7 = very precise/detailed; Pearson correlation r = 0.60, p < 0.001). Supporting the validity of our experimental stimuli, perceived preciseness was significantly higher for the format used in the precise condition (M = 4.84, SD = 1.09) than for the imprecise condition (M = 4.41, SD = 1.40, F(1, 49) = 4.23, p = 0.045, η2 = 0.079).

3.1.2 | Participants, design, and procedure

Participants were 114 (Mage = 37.38, SD = 12.99; 57.0% female) adults recruited via MTurk in exchange for a small monetary payment. The study employed a simple one‐way between‐subjects experimental design with two levels (number presentation detail: precise vs. imprecise). All participants were asked to read the following passage containing information about a company's AI system that detects Coronavirus1:

A tech company has developed an AI (artificial intelligence) system that can detect Coronavirus on chest CT (computed tomography) scans. According to

1 The study was conducted in early April 2020. The contents were modified from an article about the virus.

the researchers who developed the system, the AI system can yield a [74.975% vs. 75%] accuracy rate.

Participants were randomly assigned to either the precise condition, in which they saw the 74.975% accuracy rate, or the imprecise condition, in which they saw the 75% accuracy rate. Then, participants indicated their intention to buy the company's stock (i.e., "If you have money to invest, to what extent are you willing to buy the stocks of the tech company?") by answering on a 7‐point scale (1 = not at all, 7 = very much). We viewed purchase intention for company stock as an indicator of consumers' favorable evaluation of, and behavior toward, the company and its products. Finally, participants provided their age and gender.

3.2 | Results and discussion

We conducted an ANOVA and found a significant main effect of our experimental factor (F(1, 112) = 4.12, p = 0.045, η2 = 0.035). Specifically, participants in the precise condition (M = 4.28, SD = 1.71) showed higher intention to buy the company's stock than those in the imprecise condition (M = 3.63, SD = 1.71). These results support H1.

Study 1 provided initial evidence of the preciseness effect on consumers' responses toward the company associated with AI. As anticipated, participants showed more favorable responses toward the company when the AI‐generated information was presented in a precise (vs. imprecise) format.

4 | STUDY 2

The purpose of Study 2 was threefold. First, we wanted to replicate the preciseness effect in another marketing context: music. Participants indicated their attitudes and behavioral intentions toward AI‐recommended songs, which were presented in either a precise or an imprecise format. We predicted more positive consumer responses when the AI information was given in a precise (vs. imprecise) format (H1a and H1b). Second, we measured consumer trust and tested whether it mediated the preciseness effect on consumer responses toward AI (H2). Third, we evaluated other consumer response measures: attitude toward the recommended songs and intention to recommend them to others. One limitation of Study 1 was that consumers' intention to purchase company stock was the only consumer response measure. Study 2 addresses this limitation by assessing multiple and more direct response measures.

4.1 | Method

4.1.1 | Pretest

A pretest was conducted (n = 82, Mage = 39.07, SD = 13.45; 49.1% female). After participants were randomly exposed to one of three experimental conditions (i.e., precise I [80.135%], precise II [80.000%], or imprecise [80%]; see the section below for details), they were asked to rate the perceived preciseness of the confidence rating of the AI system (i.e., "How detailed is the confidence rating of the AI recommendation system?") along two 7‐point scales (1 = not at all precise/detailed, 7 = very precise/detailed; Pearson correlation r = 0.59, p < 0.001). As expected, perceived preciseness differed significantly between the experimental conditions (F(2, 79) = 6.82, p = 0.002, η2 = 0.147). Specifically, perceived preciseness in the imprecise condition (M = 4.75, SD = 0.94) was lower than in the precise I condition (M = 5.66, SD = 0.84; posthoc p = 0.002) and the precise II condition (M = 5.35, SD = 1.01; posthoc p = 0.046).

4.1.2 | Participants, design, and procedure

Participants were 155 (Mage = 39.59, SD = 12.02; 47.1% female) adults recruited via MTurk. The study employed a one‐way between‐subjects experimental design with three levels (number presentation detail: precise I vs. precise II vs. imprecise). All participants were asked to read the following scenario about online music consumption:

Imagine that you visit an online music streaming site to listen and download some music in a specific genre. On the website, the AI (Artificial Intelligence)‐based recommendation system shows you 3 songs that are "Recommended for You". The AI system came up with the list based on your past activities on the website. (The recommended songs are currently ranked #1, #3, #4, respectively, in the Hot 20 Chart list in the specific genre).

The AI recommendation system says it is [80.135% vs. 80.000% vs. 80%] confident that you will like these recommended songs.

The manipulation of the experimental factor was changed in that participants were randomly assigned to the precise I (80.135%), precise II (80.000%), or imprecise (80%) condition in the scenario above. The purpose of the precise II condition was to test whether the preciseness effect on evaluation arises not from an absolute value difference but from a difference in the specific presentation.

Then, participants indicated their attitudes toward the recommended songs (i.e., "To what extent do you think you will like the songs?") and their intention to recommend them to others (i.e., "To what extent are you willing to recommend the songs that AI recommended to your friend?") by answering a 7‐point scale (1 = not at all, 7 = very much). They were also asked to indicate their level of trust in the AI (i.e., "To what extent do you trust that AI knows your preference?") by answering a 7‐point scale (1 = not at all, 7 = very much). Finally, participants provided their age and gender.
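The mediation test planned here (trust as the mechanism linking number preciseness to behavioral intentions, typically run as Hayes' Process Model 4) estimates the indirect effect as the product of the X→M slope and the M→Y slope controlling for X, with a percentile bootstrap confidence interval. A minimal sketch on simulated data follows; the variable names, effect sizes, and sample are illustrative assumptions, not the study's data or the Process Macro itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 155

# Simulated data (illustrative only, not the study's data):
# X = format (1 = precise, 0 = imprecise), M = trust in the AI,
# Y = recommendation intention.
X = rng.integers(0, 2, n).astype(float)
M = 4.3 + 0.5 * X + rng.normal(0, 1.3, n)            # a-path built in
Y = 1.0 + 0.2 * X + 0.6 * M + rng.normal(0, 1.2, n)  # b-path built in

def ols_slope(y, predictors):
    """Coefficient of the LAST predictor in an OLS fit with intercept."""
    Z = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(Z, y, rcond=None)[0][-1]

def indirect_effect(idx):
    a = ols_slope(M[idx], [X[idx]])           # X -> M
    b = ols_slope(Y[idx], [X[idx], M[idx]])   # M -> Y, controlling for X
    return a * b

# Percentile bootstrap of the indirect effect (5000 resamples).
boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(np.arange(n))
print(f"indirect effect = {point:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

Mediation is inferred when the bootstrap CI for the indirect effect excludes zero, the same decision rule applied to the Process Macro output reported below.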

4.2 | Results and discussion

4.2.1 | Main effects

First, we conducted a one‐way ANOVA on attitudes toward the recommended songs. The results indicated a marginally significant main effect of our experimental factor (F(2, 152) = 2.63, p = 0.075, η2 = 0.033). Planned contrast analysis found a significant difference between the imprecise (M = 4.67, SD = 1.58) and precise I (M = 5.19, SD = 0.98) conditions (p = 0.026). The difference between the imprecise and precise II conditions (M = 5.06, SD = 0.94) was not significant but showed the expected direction (p = 0.104). No difference emerged between the precise I and precise II conditions (p = 0.566) (Figure 2). These results support H1b.

FIGURE 2 Results of study 2

Second, the results for recommendation intention were similar in that the main effect was also significant (F(2, 152) = 3.54, p = 0.032, η2 = 0.044). Planned contrast analysis found significant differences between the imprecise (M = 4.02, SD = 1.52) and precise I (M = 4.72, SD = 1.37) conditions (p = 0.011) and between the imprecise and precise II conditions (M = 4.58, SD = 1.24; p = 0.049). No difference emerged between the precise I and precise II conditions (p = 0.576). These results replicated the results of Study 1 (H1a) in that consumers showed more favorable responses when the information was given in a precise (vs. imprecise) format.

Third, the results for AI trust perception indicated a marginally significant main effect of our experimental factor (F(2, 152) = 2.82, p = 0.063, η2 = 0.036). Planned contrast analysis found a significant difference between the imprecise (M = 4.33, SD = 1.60) and precise I (M = 4.91, SD = 1.08) conditions (p = 0.023), and a difference between the imprecise and precise II conditions (M = 4.79, SD = 1.14) that was marginally significant in the direction consistent with our hypothesis (p = 0.080). No difference emerged between the precise I and precise II conditions (p = 0.606).

4.2.2 | Mediation analysis

For the mediation analysis, we collapsed the two precise conditions into a single precise condition and compared it to the imprecise condition, because the precise I and II conditions did not differ. The results indicated significant differences for attitude (Mimprecise = 4.67, SD = 1.58; Mprecise = 5.13, SD = 0.96; F(1, 153) = 4.93, p = 0.028, η2 = 0.031), recommendation intention (Mimprecise = 4.02, SD = 1.52; Mprecise = 4.65, SD = 1.31; F(1, 153) = 6.79, p = 0.010, η2 = 0.042), and AI trust (Mimprecise = 4.33, SD = 1.60; Mprecise = 4.85, SD = 1.11; F(1, 153) = 5.40, p = 0.021, η2 = 0.034).

Using Hayes' (2017) Process Macro (Model 4 with 5000 bootstrapping), we conducted a simple mediation (i.e., IV: preciseness of information→Mediator: AI trust→DV: recommendation intention). The results indicated that the indirect effect was significant (effect = 0.33, 95% confidence interval [CI]: [0.008, 0.681]), whereas the direct effect was not significant (effect = 0.30, t = 1.51, 95% CI: [−0.094, 0.701]), indicating full mediation (H2a). The additional simple mediation (i.e., IV: preciseness of information→Mediator: AI trust→DV: attitude) was also significant (effect = 0.32, 95% CI: [0.021, 0.661]), whereas the direct effect was not significant (effect = 0.14, t = 0.51, 90% CI: [−0.172, 0.457]), indicating full mediation (H2b).

Based on the results above, we conducted a serial mediation (i.e., IV: preciseness of information [1: imprecise, 2: precise]→Mediator 1: AI trust→Mediator 2: attitude→DV: recommendation intention) using Hayes' (2017) Process Macro (Model 6 with 5000 bootstrapping). The results indicated that the direct effect with mediators became insignificant (effect = 0.25, t = 1.28, 95% CI: [−0.133, 0.624]). By contrast, the overall mediation was significant (effect = 0.39, 95% CI: [0.031, 0.789]). Specifically, the serial mediation (i.e., preciseness of information→AI trust→attitude→recommendation intention) was significant (effect = 0.13, 95% CI: [0.005, 0.326]), as was the simple mediation through trust alone (i.e., preciseness of information→AI trust→recommendation intention) (effect = 0.20, 95% CI: [0.011, 0.453]). However, the simple mediation through attitude alone (i.e., preciseness of information→attitude→recommendation intention) was not significant (effect = 0.06, 95% CI: [−0.066, 0.239]), suggesting that AI trust is the key mediator.

In Study 2, we provided further evidence of the preciseness effect of AI recommendations in another context (i.e., online music services). As expected, participants' attitude toward the products associated with AI and their behavioral intention (i.e., recommendation to others) were more favorable when the information was given in a precise rather than an imprecise format. Second, we provided empirical evidence that consumer trust toward AI plays a critical role, mediating the preciseness effect on consumer responses. Third, we showed that the preciseness effect influenced both consumers' evaluations (i.e., attitude) and behavioral responses (i.e., recommendation to others) regarding the AI company's products.

5 | STUDY 3

The purpose of Study 3 was to test boundary conditions of the preciseness effect, examining further circumstances under which preciseness leads to greater adoption through trust. Research on online

recommendations suggests that accuracy influences the adoption of new technologies (Punj & Moore, 2007; Zhu et al., 2014). In relation to AI, data accuracy is also a critical factor determining its adoption by individuals. The accuracy and quality of the information available to people influence consumers' responses (Kreps & Neuhauser, 2013). Therefore, we predict that the preciseness effect of the information is more pronounced when the accuracy of the information is lower. Indeed, we suggest that when the difference between the estimations of the AI and of individuals is bigger (vs. smaller), the preciseness of the information will have a stronger influence on decisions to adopt the AI estimations.

H3: When the accuracy of the information is low (vs. high), information preciseness has a stronger influence on consumers' responses.

5.1 | Method

5.1.1 | Pretest

A separate pretest was conducted (n = 53, Mage = 38.55, SD = 14.82; 49.1% female). After participants were randomly exposed to one of the eight experimental conditions (see the section below for details), they were asked to rate the perceived preciseness of the estimation of the AI system (i.e., "How detailed is the estimation of the AI system?") along two 7‐point scales (1 = not at all precise/detailed, 7 = very precise/detailed; Pearson correlation r = 0.84, p < 0.001). As expected, the overall effect was significant (F(7, 45) = 2.43, p = 0.034, η2 = 0.274). A further analysis collapsing the conditions into two groups (1: imprecise conditions, 2: precise conditions) also indicated that perceived preciseness was significantly higher for the precise conditions (M = 5.43, SD = 1.41) than for the imprecise conditions (M = 4.02, SD = 1.77, F(1, 51) = 10.28, p = 0.002, η2 = 0.168).

5.1.2 | Participants, design, and procedure

Participants were 436 (Mage = 38.39, SD = 12.96; 46.8% female) adults recruited via MTurk in exchange for a small monetary payment. The study employed a 2 (number presentation detail: precise vs. imprecise) × 2 (estimation direction: low vs. high relative to the actual weight) × 2 (estimation accuracy: similar to the actual weight vs. different from the actual weight) between‐subjects design.

The stimuli were modified from those used by Logg et al. (2019). Specifically, participants were asked to estimate the weight of a person in a photo taken from Logg et al. (2019) (see Appendix A for the photo). Participants were first asked to read the following:

An AI (artificial intelligence) system incorporating machine learning estimated the weight of the person in the photograph to be [130 vs. 130.25 vs. 160 vs. 160.25 vs. 169.75 vs. 170 vs. 189.75 vs. 190] lb.

The manipulation of the experimental factor was changed in that participants were randomly assigned to either the precise conditions (i.e., 130.25, 160.25, 169.75, and 189.75 lb) or the imprecise conditions (i.e., 130, 160, 170, and 190 lb) in the scenario above. The estimation direction was either lower (i.e., 130, 130.25, 160, and 160.25 lb) than the actual weight and the pretested estimated weight (i.e., actual weight = 164 lb; estimated weight from a separate pretest [n = 85] = 162.96 lb, SD = 16.83) or higher (i.e., 169.75, 170, 189.75, and 190 lb) than both of these weights. The estimation accuracy was manipulated by providing numbers either similar to the actual weight (i.e., 160, 160.25, 169.75, and 170 lb) or different from the actual weight (i.e., 130, 130.25, 189.75, and 190 lb).

Participants were then asked to estimate the weight considering the information above; thus, the key dependent variable was the weight estimation. We assumed that if participants believed the AI recommendation, they would generate an estimation biased away from the actual weight or from the estimated weight without any information (i.e., 164–165 lb). Finally, participants were asked to provide their age and gender.

5.2 | Results and discussion

Based on H3, we derived the following predictions for our experiment: estimation accuracy will moderate the joint effect of estimation direction and information preciseness on consumers' response (i.e., the estimated weight of the person). On the one hand, when the AI's estimated weight is substantially different from the actual weight, the interaction between estimation direction and preciseness of the AI information will be significant: participants' estimated weight will be lower when the AI's estimate is precise (vs. imprecise) and below the actual weight, whereas participants' estimated weight will be higher when the AI's estimate is precise (vs. imprecise) and above the actual weight. On the other hand, when the AI's estimated weight is similar to the actual weight, the interaction between estimation direction and preciseness of the AI information on participants' estimated weight will not be significant.

To verify these predictions, we conducted a 2 (number presentation detail) × 2 (estimation direction) × 2 (estimation accuracy) ANOVA on the estimated weight. The results indicated that the three‐way interaction effect was significant (F(1, 427) = 5.78, p = 0.017, η2 = 0.013). To illustrate the three‐way interaction, we conducted two separate 2 (number presentation detail) × 2 (estimation direction) ANOVAs: one for the weights similar to the actual weight and one for the weights different from the actual weight.

First, for the different‐from‐the‐actual‐weight condition (i.e., 130, 130.25, 189.75, and 190 lb), a 2 (number presentation detail) × 2 (estimation direction) ANOVA generated a significant two‐way interaction effect (F(1, 208) = 8.58, p = 0.004, η2 = 0.040). Planned contrast analysis found that when the AI's estimation was lower (i.e., 130 and 130.25 lb) than the actual weight, the estimated weight was lower (showed more biased estimation) for the precise

condition (M = 143.61 lb, SD = 16.92) than for the imprecise condition (M = 150.78 lb, SD = 18.19, F(1, 208) = 5.67, p = 0.018, η2 = 0.027). By contrast, the opposite pattern was found when the AI estimation was higher (i.e., 189.75 and 190 lb) than the actual weight: participants' estimated weights were higher (showed more biased estimation) for the precise condition (M = 178.56 lb, SD = 12.32) than for the imprecise condition (M = 172.94 lb, SD = 15.19, F(1, 208) = 3.16, p = 0.077, η2 = 0.015) (Figure 3). In sum, we found that precise AI recommendations generated more biased estimations by the participants, indicating participants' belief in the AI‐generated information.

FIGURE 3 Results of study 3

Second, for the similar‐to‐the‐actual‐weight condition (i.e., 160, 160.25, 169.75, and 170 lb), the overall two‐way interaction effect was not significant (F(1, 208) = 0.02, p = 0.881, η2 = 0.001). The detailed pattern is presented in Figure 3. In sum, we found a significant three‐way interaction effect revealing important moderators of our main hypotheses. When the AI‐generated information was similar to the layperson's estimation, the preciseness of the AI‐generated information did not matter. However, when the AI‐generated information was substantially different from the layperson's estimation, the preciseness of the AI‐generated information did matter. In this situation, the preciseness of the information helped individuals adopt the AI's recommendation, even in cases of biased estimation, which supports our prediction about the importance of information preciseness when the accuracy of the information is lower. These results support H3.

6 | STUDY 4

The purpose of Study 4 was to test another boundary condition to examine when the preciseness of information is more critical. In this experiment, we investigated whether the number presentation detail associated with AI and the objective product quality of the recommended products influenced consumer purchase intentions. In addition, we tested whether consumer trust toward AI mediated the effects of presentation detail and objective quality on purchase intention in a different product category (i.e., books). Previous research demonstrates that the objective quality of a decision or a product has a significant impact on information processing in the case of recommendation agents (Gupta & Harris, 2010). In addition, price preciseness has been shown to influence the perceived quality of a product and, ultimately, consumers' preferences (Kim, Kim, & Marshall, 2020).

H4: When the product's objective quality is high (vs. low), information preciseness has a stronger influence on consumers' purchase intentions.

6.1 | Method: Participants, design, and procedure

Participants were 157 (Mage = 36.80, SD = 11.60; 46.5% female) adults recruited via MTurk. The study employed a 2 (number presentation detail: precise vs. imprecise) × 2 (objective product quality: high vs. low) between‐subjects design. Participants were randomly assigned to either the precise (79.865%) or the imprecise (80%) condition and to either the high objective quality (ranking #1, #2, and #4) or the low objective quality (ranking #11, #12, and #14) condition. All participants read the following scenario about purchasing books online:

Imagine that you visit an online bookstore to look for some books in a specific category. As you are starting to browse, the AI (Artificial Intelligence)‐based recommendation system shows you 3 books that are "Recommended for You." The AI system came up with the list based on your past activities at the website.

The recommended books, which you have not read, are currently ranked [#1, #2, #4 vs. #11, #12, #14], respectively, in the Top 20 Bestsellers list in the specific category.

The AI recommendation system says it is [79.865% vs. 80%] confident that you will like these recommended books.

Then, participants indicated their purchase intention by answering two questions, "To what extent are you willing to consider buying the books that AI recommended?" and "To what extent are you willing to buy the books that AI recommended?" (1 = not at all, 7 = very much). The responses to these two purchase intention items were averaged (Pearson correlation r = 0.78, p < 0.001).

Participants indicated their levels of AI trust by answering two questions, "To what extent do you trust that AI knows your preference?" and "To what extent do you trust that AI correctly assessed your preference?" (1 = not at all, 7 = very much; Pearson correlation r = 0.83, p < 0.001).

Participants answered the question, "How detailed is the confidence rating of the AI recommendation system?" (1 = not at all detailed, 7 = very detailed), as a manipulation check for the number presentation manipulation. Meanwhile, for the objective quality

manipulation, participants answered the question, "According to the scenario, how high are the recommended books ranked in the Top 20 Bestsellers list?" (1 = not at all high, 7 = very high). Finally, participants were asked to provide their age and gender.

6.2 | Results

6.2.1 | Manipulation checks

A series of 2 × 2 ANOVAs revealed that the experimental manipulations were successful. As predicted, a 2 × 2 ANOVA on perceived detail revealed only a marginal main effect of the number presentation (Mprecise = 5.33, SD = 1.20; Mimprecise = 4.96, SD = 1.49, F(1, 153) = 2.90, p = 0.091, η2 = 0.019), whereas a 2 × 2 ANOVA on perceived product quality revealed only a main effect of the product quality (Mhigh = 6.15, SD = 1.06; Mlow = 4.24, SD = 1.88, F(1, 153) = 61.10, p < 0.001, η2 = 0.285).

6.2.2 | Results and discussion

Next, a 2 × 2 ANOVA on purchase intention revealed only a significant interaction effect (F(1, 153) = 4.24, p = 0.041, η2 = 0.027) (Figure 4). Planned contrast analysis revealed that in the high objective quality condition, the precise presentation led to higher purchase intention (M = 5.18, SD = 1.28) than the imprecise presentation (M = 4.46, SD = 1.73; F(1, 153) = 5.32, p = 0.022, η2 = 0.034). However, in the low objective quality condition, the precise presentation (M = 4.70, SD = 1.17) and the imprecise presentation (M = 4.88, SD = 1.32; F(1, 153) = 0.36, p = 0.550, η2 = 0.002) did not differ in terms of purchase intention. These results support H4.

A 2 × 2 ANOVA on AI trust revealed only a significant interaction effect (F(1, 153) = 5.46, p = 0.021, η2 = 0.034). Planned contrast analysis revealed that in the high objective quality condition, the precise presentation led to higher AI trust (M = 4.88, SD = 1.20) than the imprecise presentation (M = 4.16, SD = 1.66; F(1, 153) = 5.18, p = 0.024, η2 = 0.033). However, in the low objective quality condition, the precise presentation (M = 4.55, SD = 1.19) and the imprecise presentation (M = 4.87, SD = 1.47; F(1, 153) = 1.05, p = 0.308, η2 = 0.007) did not differ in terms of AI trust.

Using Hayes' (2017) Process Macro (Model 8 with 5000 bootstrapping), a moderated mediation analysis was conducted (i.e., IV: preciseness of information [1: imprecise, 2: precise]→Mediator: AI trust→DV: purchase intention; Moderator: objective product quality [1: high, 2: low]). The results indicated that AI trust mediated the effect of Number Presentation × Objective Product Quality on purchase intention (95% CI: [−1.587, −0.126]). Specifically, for the high‐quality recommendation condition, trust significantly mediated the impact of the IV on the DV (95% CI: [0.055, 1.138]), consistent with H2a. By contrast, the corresponding mediation was not significant for the low‐quality recommendation condition (95% CI: [−0.743, 0.224]).

FIGURE 4 Results of study 4

Study 4 tested the effects of the number presentation detail of AI confidence and the objective product quality of the recommended products on consumers' purchase intention. As predicted, we found that the precise presentation led to higher purchase intention than the imprecise presentation when the objective quality of the recommended products was high. However, also as predicted, the number presentation did not influence purchase intention when the objective quality of the recommended products was relatively low. In addition, we showed that consumer trust toward AI mediated the effect of presentation detail on purchase intention.

7 | STUDY 5

The purpose of Study 5 was to address some limitations of the previous studies. First, even though we provided empirical evidence of mediation in the prior studies (i.e., Studies 2 and 4), the key mediator of trust was measured by a single item. In this study, we use a multiple‐item scale based on Kim et al. (2016). Second, in all previous studies, the imprecise numbers were multiples of 5 or 10 (e.g., 75%, 80%, 130 lb, or 190 lb). This study instead uses an imprecise number that is not a multiple of 5 (i.e., 78%) alongside its precise counterpart (i.e., 78.025%). Finally, we address a weakness of Study 1 by measuring both the overall attitude toward the product and the intention to buy the company's stock.

7.1 | Method: Participants, design, and procedure

Participants were 131 (Mage = 41.40, SD = 12.75; 51.9% female) adults recruited via MTurk in exchange for a small monetary payment. The study employed a one‐way between‐subjects experimental design with two levels (number presentation detail: precise vs. imprecise). The overall stimuli were similar to those of Study 1, with a few modifications. First, the accuracy of the CT scan system was either 78% [imprecise condition] or 78.025% [precise condition].
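The reliability of multi‐item scales such as Study 5's trust and attitude measures is conventionally summarized with Cronbach's α, computed as α = k/(k − 1) × (1 − Σ item variances / variance of the item total) for a k‐item scale. A minimal computational sketch follows; the three simulated "trust items" are illustrative only, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated responses: 131 participants x 3 trust items on a 7-point scale.
# A shared latent factor makes the items correlate (illustrative data only).
latent = rng.normal(5.0, 1.0, (131, 1))
items = np.clip(latent + rng.normal(0, 0.6, (131, 3)), 1, 7)

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of item total)."""
    k = x.shape[1]
    sum_item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```

Values near the α = 0.923 and α = 0.949 reported for Study 5 indicate that the items covary strongly relative to their individual variances, justifying averaging them into a single score.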

Then, participants were asked to indicate their perceived trust of


AI‐based CT scan system by three‐item 7‐point scale (1 = not at all
believable/trustworthy/truthful, 7 = very believable/trustworthy/truthful,
Cronbach's α = 0.923, based on Kim et al. (2016)). They also indicated
the overall attitude toward the CT scan system by 3‐item 7‐point
scale (1 = not at all good/favorable/positive, 7 = very good/favorable/
positive, Cronbach's α = 0.949, based on Kim et al. (2016)) and their
intention to buy the company's stock with the same scale of Study 1.
Participants were asked to rate their perceived preciseness of the
accuracy rating of the AI system (i.e., How detailed is the accuracy
rating of the AI system?”) along a 7‐point scale (1 = not at all detailed,
7 = very detailed) for the manipulation check. Finally, participants
were asked to provide their age and gender.
FIGURE 5 Results of study 5

7.2 | Results and discussion

The manipulation was successful in that perceived preciseness was higher in the precise condition (M = 5.09, SD = 1.02) than in the imprecise condition (M = 4.51, SD = 1.32; F(1, 129) = 8.00, p = 0.005, η² = 0.058).

We compared consumers' responses across the two experimental conditions (see Figure 5). First, the intention to buy the company's stock was higher in the precise condition (M = 4.68, SD = 1.82) than in the imprecise condition (M = 3.91, SD = 1.83; F(1, 129) = 5.92, p = 0.016, η² = 0.044). These results support H1a. Next, supporting H1b, attitude toward the scan system showed a similar pattern: attitude was higher in the precise condition (M = 5.31, SD = 1.21) than in the imprecise condition (M = 4.65, SD = 1.41; F(1, 129) = 8.47, p = 0.004, η² = 0.062).

For the mediator, we found that perceived trust in the AI‐based scan system was higher in the precise condition (M = 5.17, SD = 1.16) than in the imprecise condition (M = 4.75, SD = 1.25; F(1, 129) = 3.85, p = 0.052, η² = 0.029).

Finally, we conducted two mediation tests. First, we ran a simple mediation (i.e., IV: preciseness of information [1: imprecise, 2: precise]→Mediator: AI trust→DV: intention to buy the company's stock) using Hayes' (2017) PROCESS macro (Model 4 with 5000 bootstrap samples). The results indicated that the indirect effect was significant (effect = 0.33, 95% CI: [0.013, 0.729]), whereas the direct effect was not (effect = 0.45, t = 1.62, 95% CI: [−0.099, 0.996]), indicating full mediation (H2a). Next, we ran another simple mediation (i.e., IV: preciseness of information [1: imprecise, 2: precise]→Mediator: perceived trust→DV: attitude) using Hayes' (2017) PROCESS macro (Model 4 with 5000 bootstrap samples). The results showed that the mediation was significant (effect = 0.37, 95% CI: [0.014, 0.772]), while the direct effect also remained significant (effect = 0.30, t = 2.24, 95% CI: [0.035, 0.556]). These results support H2b.

The second analysis was a serial mediation (i.e., IV: preciseness of information [1: imprecise, 2: precise]→Mediator 1: perceived trust→Mediator 2: attitude→DV: intention to buy the company's stock) using Hayes' (2017) PROCESS macro (Model 6 with 5000 bootstrap samples). The results indicated that the direct effect with the mediators included became insignificant (effect = 0.22, t = 0.83, 95% CI: [−0.302, 0.739]), but the serial mediation was significant (effect = 0.29, 95% CI: [0.003, 0.621]).

8 | GENERAL DISCUSSION

AI uses various key technologies, including machine learning and deep learning, and it can efficiently process vast amounts of data and learn relationships among the data. AI is becoming an important tool in marketing and business applications. In fact, AI is revolutionizing how companies and organizations operate businesses. With the increasing presence of AI in several aspects of the business environment and our daily lives, organizations and marketers are keen to find ways to improve the successful adoption of AI technologies and AI‐generated recommendations. Despite the superior performance and accuracy of AI, recent evidence has shown that consumers have reservations toward accepting AI‐generated recommendations (Castelo et al., 2018; Dietvorst et al., 2015; Longoni et al., 2019). Therefore, it is critical to understand how to improve consumer adoption of the information generated by AI.

The current research uses five studies to provide insights into how one nudging approach to presenting information (i.e., varying the preciseness of the AI‐generated information) can influence consumer evaluations and intentions. Taken together, the results of the five experiments contribute to the understanding of AI‐generated information and the continuous development of trust in AI technologies. First, we demonstrate a relationship between the preciseness of the information and individuals' evaluations of AI recommendations. Indeed, precise (vs. imprecise) information leads to higher evaluations and behavioral intentions (Studies 1–5). This effect is consistent across different measurements, dependent variables, and recommendation categories (i.e., coronavirus chest CT scans, AI‐based online music recommendation engine, AI‐based weight estimator, AI‐based online book recommendation engine). This study also provides evidence that the impact of information preciseness on consumer evaluations is mediated by trust (Studies 2, 4, and 5).
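As a rough illustration of the logic behind the bootstrap mediation tests reported above (not the authors' code or data), the indirect effect a×b of a PROCESS Model 4–style analysis can be estimated with a percentile bootstrap in Python on simulated data; all variable names and effect sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x, then y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                   # slope of mediator on IV
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of DV on mediator, controlling IV
    return a * b

# Simulated data with the hypothesized structure: preciseness -> trust -> intention
n = 200
precise = rng.integers(0, 2, n).astype(float)       # 0 = imprecise, 1 = precise
trust = 4.5 + 0.5 * precise + rng.normal(0, 1, n)   # trust rises with preciseness
intention = 3.5 + 0.6 * trust + rng.normal(0, 1, n)

# Percentile bootstrap: resample cases, refit, take the 2.5th/97.5th percentiles
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(precise[idx], trust[idx], intention[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
significant = not (ci_low <= 0 <= ci_high)  # CI excluding zero indicates mediation
```

A bootstrap confidence interval that excludes zero, as in the reported CIs (e.g., [0.013, 0.729]), is the criterion for a significant indirect effect in this approach.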
Finally, this study demonstrates the boundary conditions of this relationship. The accuracy of the AI‐generated information influences the impact of the presented information (Study 3). Further, the objective product quality of the recommended products impacts the effect of presentation details (i.e., preciseness) on consumer purchase intentions (Study 4).

8.1 | Theoretical contributions

These findings make several important theoretical contributions to the AI literature. First, this study offers new insights into the AI literature by investigating a new factor that influences consumer attitudes and behaviors regarding the adoption of AI recommendations. Our findings are consistent with prior work on the preciseness of information and price, suggesting that preciseness significantly influences individuals' preferences and value judgments (Gourville, 1998). It may be argued that being exposed to precise information may further affect one's confidence in that information. Our study investigates this concept in the AI environment to build on the aforementioned work by demonstrating that precise information (vs. imprecise information) can lead to more positive evaluations. Regarding the positive effect of precise numbers on quantitative evaluation and judgment, Zhang and Schwarz (2013) suggested that the influence of precise information is greater in that a relevant inference (e.g., the estimated cost of a DVD drive) was higher when the related information was delivered in a precise versus imprecise format (e.g., when the retail price of the DVD drive was $29.75 vs. $40). They further argued that this positive effect is only valid for human communicators and not for non‐human agents such as computer messages. Contrary to this result, we find that the preciseness of information generated by non‐human agents (i.e., AI) significantly influences the evaluation of, and inferences related to, a recommendation. One possible explanation is that people might consider AI to be closer to human‐like agents than to simple computers (see Gray et al., 2007). Future research can investigate this possible moderating factor for the preciseness effect.

Second, prior work suggests that trust is a crucial factor in the adoption of new technologies, thus advancing trust as a critical element for understanding people's behaviors toward AI. Specifically, we add to this literature by showing that precise information increases the level of confidence that individuals place in the relevant information and that this trust leads to positive affective and behavioral outcomes. This finding offers empirical support for arguments suggesting that increased preciseness of information leads to higher reliability and credibility. Our research shows that, in contrast to imprecise information, precise information may lead people to think that the information is more reliable, potentially elevating their trust level.

Finally, this study extends the literature on heuristics by showing that AI‐generated recommendations can be used as an anchor in consumer judgments and decision‐making processes. In the context of the anchoring‐and‐adjustment heuristic (Tversky & Kahneman, 1974), Janiszewski and Uy (2008) found that the adjustment from a given anchor was weaker when the anchoring information was precise (vs. less precise). For example, the price estimation for a type of cheese was lower when the anchor, which was higher than the actual price, was less precise (e.g., "5") rather than precise (e.g., "4.85"). This study extends this pattern to the context of judgments of AI‐generated information. Therefore, even in the context of nonhuman information, the AI recommendation served as anchoring information in the judgment. We also identified boundary conditions, such as the gap between the AI‐generated information and the actual information, in Study 3.

8.2 | Managerial implications

This paper has several practical and managerial implications. First, the results of our experiments provide additional evidence that the presentation of the information provided by AI recommendation agents can strongly impact consumer adoption of AI‐related technologies. The design and delivery of the generated information are of critical importance for businesses using AI. As AI is becoming increasingly prevalent, companies can influence its adoption in our everyday lives. AI recommendations can be useful for consumers by making their lives easier and providing an enhanced experience. Thus, understanding more about the factors that impact individuals' interactions with AI can lead to faster and stronger acceptance of this new technology.

Second, according to the findings of this study, trust is an essential factor in consumer evaluations of AI‐generated information. Indeed, trust mediates the relationship between the preciseness of information and judgments and evaluations of that information. This should be considered by marketers and information system leaders developing and managing AI technologies. Based on the findings of this study, we recommend that businesses consider factors, such as preciseness, in the presentation of information, which could lead to higher credibility. Such nudges should increase the level of trust and thus motivate people to adopt the recommendations.

Finally, the recommendations generated by AI agents and presented to consumers are crucial elements that affect their judgments; therefore, businesses should have valid and accurate models to develop the information they deliver in these recommendations. Thus, the algorithms developed with the data to find patterns must be correct to form strong relationships with consumers, as the validity and reliability of those suggestions will determine the future adoption of AI‐generated recommendations.

8.3 | Limitations and future directions

This study has several limitations, which can offer opportunities for future work. First, this study was administered through online panels providing experimental scenarios. Although hypothetical scenarios have been used in the past in the AI area, further research needs to
be done to replicate these results in real interactions with AI recommendation agents and technologies. Regarding the Amazon MTurk sample, previous research has suggested that the overall quality of MTurk data is as good as that of other sources used in marketing research (e.g., Buhrmester et al., 2011; Crump et al., 2013), even though these participants' understanding of AI and machine learning could differ from that of other populations. Future studies can examine this issue in more depth.

Second, as presentation methods are diverse, we can assume that their impacts on persuading individuals will differ (e.g., Giroux et al., 2021; Kim et al., 2020, 2021). Therefore, future research should employ additional presentation methods and determine their influences on customers' reactions.

Third, even though our paper focused on AI recommendations, it is worthwhile to investigate the impact of the preciseness of human suggestions. We expect that the impact of preciseness would be weaker for humans, but it might depend on the human's area of expertise (e.g., Huang & Rust, 2018). Finally, specific characteristics or factors (e.g., the persuasion knowledge model; Friestad & Wright, 1994; Kim et al., 2016) could influence trust; hence, different antecedents of trust related to AI‐generated content and recommendations represent a promising field for future research.

8.4 | Closing statements

In this paper, we investigated a factor that can facilitate the adoption of AI‐powered technology. Specifically, we suggested that the employment of precise (vs. imprecise) information in AI‐generated messages leads to higher evaluations of and behavioral intentions toward that AI technology. We hope that this study helps extend the understanding of the psychological reasons affecting consumer adoption of AI technology.

ORCID
Jungkeun Kim https://orcid.org/0000-0003-2104-833X
Marilyn Giroux https://orcid.org/0000-0001-5764-0533
Jacob C. Lee https://orcid.org/0000-0002-1410-4711

REFERENCES
Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A survey of the state‐of‐the‐art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734–749.
Aguirre, E., Mahr, D., Grewal, D., de Ruyter, K., & Wetzels, M. (2015). Unraveling the personalization paradox: The effect of information collection and trust‐building strategies on online advertisement effectiveness. Journal of Retailing, 91(1), 34–49.
Aljukhadar, M., Trifts, V., & Senecal, S. (2017). Consumer self‐construal and trust as determinants of the reactance to a recommender advice. Psychology & Marketing, 34(7), 708–719.
Amoroso, D., Sauer, F., Sharkey, N., Suchman, L., & Tamburrini, G. (2018). Autonomy in weapon systems: The military application of artificial intelligence as a litmus test for Germany's new foreign and security policy. Heinrich Böll Foundation.
Bart, Y., Shankar, V., Sultan, F., & Urban, G. L. (2005). Are the drivers and role of online trust the same for all web sites and consumers? A large‐scale exploratory empirical study. Journal of Marketing, 69(4), 133–152.
Basu, K. (1997). Why are so many goods priced to end in nine? And why this practice hurts the producers. Economics Letters, 54(1), 41–44.
Berry, L. L. (1995). Relationship marketing of services—Growing interest, emerging perspectives. Journal of the Academy of Marketing Science, 23(4), 236–245.
Berry, L. L., & Parasuraman, A. (1991). Marketing services: Competing through quality. The Free Press.
Bock, D. E., Wolter, J. S., & Ferrell, O. C. (2020). Artificial intelligence: Disrupting what we know about services. Journal of Services Marketing, 34(3), 317–334.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high‐quality, data? Perspectives on Psychological Science, 6(1), 3–5.
Burson, K. A., Larrick, R. P., & Lynch, J. G., Jr. (2009). Six of one, half dozen of the other: Expanding and contracting numerical dimensions produces preference reversals. Psychological Science, 20(9), 1074–1078.
Castelo, N. (2019). Blurring the line between human and machine: Marketing artificial intelligence (Doctoral dissertation). Columbia University Academic Commons.
Castelo, N., Bos, M., & Lehman, D. (2018). Consumer adoption of algorithms that blur the line between human and machine. Graduate School of Business, Columbia University Working Paper.
Crosby, L. A., Evans, K. R., & Cowles, D. (1990). Relationship quality in services selling: An interpersonal influence perspective. Journal of Marketing, 54(3), 68–81.
Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLOS One, 8(3), e57410.
Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42.
Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7), 571–582.
Dawes, R., Faust, D., & Meehl, P. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674.
De Bruyn, A., Viswanathan, V., Beh, Y. S., Brock, J. K. U., & von Wangenheim, F. (2020). Artificial intelligence and marketing: Pitfalls and opportunities. Journal of Interactive Marketing, 51, 91–105.
de Ruyter, K., Isobel Keeling, D., & Ngo, L. V. (2018). When nothing is what it seems: A digital marketing research agenda. Australasian Marketing Journal, 26(3), 199–203.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.
Dimitriadis, S., & Kyrezis, N. (2010). Linking trust to use intention for technology‐enabled bank channels: The role of trusting intentions. Psychology & Marketing, 27(8), 799–820.
Eisingerich, A. B., & Bell, S. J. (2008). Perceived service quality and customer trust: Does enhancing customers' service knowledge matter? Journal of Service Research, 10(3), 256–268.
Friestad, M., & Wright, P. (1994). The persuasion knowledge model: How people cope with persuasion attempts. Journal of Consumer Research, 21(1), 1–31.
Garbarino, E., & Johnson, M. S. (1999). The different roles of satisfaction, trust, and commitment in customer relationships. Journal of Marketing, 63(2), 70–87.
Giroux, M., Franklin, D., Kim, J., Park, J., & Kwak, K. (2021). The impact of same versus different price presentation on travel choice and the moderating role of childhood socioeconomic status. Journal of Travel Research.
Gourville, J. T. (1998). Pennies‐a‐day: The effect of temporal reframing on transaction evaluation. Journal of Consumer Research, 24(4), 395–408.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Gupta, P., & Harris, J. (2010). How e‐WOM recommendations influence product consideration and quality of choice: A motivation to process information perspective. Journal of Business Research, 63(9‐10), 1041–1049.
Guszcza, J., Lewis, H., & Evans‐Greenwood, P. (2017). Cognitive collaboration: Why humans and computers think better together. Deloitte Review, 1(20), 7–30.
Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression‐based approach. Guilford.
Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1(3), 333–342.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. Yale University Press.
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Janiszewski, C., & Uy, D. (2008). Precision of the anchor influences the amount of adjustment. Psychological Science, 19(2), 121–127.
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human‐AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586.
Jerez‐Fernandez, A., Angulo, A. N., & Oppenheimer, D. M. (2014). Show me the numbers: Precision as a cue to others' confidence. Psychological Science, 25(2), 633–635.
Johnson, D., & Grayson, K. (2005). Cognitive and affective trust in service relationships. Journal of Business Research, 58(4), 500–507.
Kaartemo, V., & Helkkula, A. (2018). A systematic review of artificial intelligence and robots in value co‐creation: Current status and future research avenues. Journal of Creating Value, 4(2), 211–228.
Keeffe, B., Subramanian, U., Tierney, W. M., Udris, E., Willems, J., McDonell, M., & Fihn, S. (2005). Provider response to computer‐based care suggestions for chronic heart failure. Medical Care, 43(5), 461–465.
Kim, J., Cui, Y. G., Choi, C., Lee, S. J., & Marshall, R. (2020). The influence of preciseness of price information on the travel option choice. Tourism Management, 79, 104012.
Kim, J., Jhang, J., Kim, S. S., & Chen, S. C. (2021). Effects of concealing vs. displaying prices on consumer perceptions of hospitality products. International Journal of Hospitality Management, 92, 102708.
Kim, J., Kim, J. E., & Marshall, R. (2016). Are two arguments always better than one? Persuasion knowledge moderating the effect of integrated marketing communications. European Journal of Marketing, 50(7), 1399–1425.
Kim, J., Kim, J. E., & Marshall, R. (2020). Choose quickly! The influence of cognitive resource availability on the preference between the intuitive and externally recommended options. Australasian Marketing Journal, 28(4), 263–272.
Kim, T. W., & Duhachek, A. (2020). Artificial intelligence and persuasion: A construal‐level account. Psychological Science, 31(4), 363–380.
King, D., & Janiszewski, C. (2011). The sources and consequences of the fluent processing of numbers. Journal of Marketing Research, 48(2), 327–341.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237–293.
Kreps, G. L., & Neuhauser, L. (2013). Artificial intelligence and immediacy: Designing health communication to personally engage consumers and providers. Patient Education and Counseling, 92(2), 205–210.
Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical versus clinical data combination in selection and admissions decisions: A meta‐analysis. Journal of Applied Psychology, 98(6), 1060–1072.
Lee, J. (2004). Discriminant analysis of technology adoption behavior: A case of internet technologies in small businesses. Journal of Computer Information Systems, 44(4), 57–66.
Li, S., Glass, R., & Records, H. (2008). The influence of gender on new technology adoption and use–mobile commerce. Journal of Internet Commerce, 7(2), 270–289.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
Longoni, C., Bonezzi, A., & Morewedge, C. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Lu, V. N., Wirtz, J., Kunz, W. H., Paluch, S., Gruber, T., Martins, A., & Patterson, P. G. (2020). Service robots, customers and service employees: What can we learn from the academic literature and where are the gaps? Journal of Service Theory and Practice, 30(3), 361–391.
Lynch, P. D., Kent, R. J., & Srinivasan, S. S. (2001). The global internet shopper: Evidence from shopping tasks in twelve countries. Journal of Advertising Research, 41(3), 15–23.
Mariani, M. (2019). Big data and analytics in tourism and hospitality: A perspective article. Tourism Review, 75(1), 299–303.
Mariani, M. M., Baggio, R., Fuchs, M., & Höepken, W. (2018). Business intelligence and big data in hospitality and tourism: A systematic literature review. International Journal of Contemporary Hospitality Management, 30(12), 3514–3554.
Martínez‐López, F. J., & Casillas, J. (2013). Artificial intelligence‐based systems applied in industrial marketing: An historical overview, current and future insights. Industrial Marketing Management, 42(4), 489–495.
Marwala, T., & Hurwitz, E. (2015). Artificial intelligence and asymmetric information theory (Doctoral dissertation). Johannesburg, South Africa: University of Johannesburg.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
McAfee, A., Brynjolfsson, E., Davenport, T. H., Patil, D. J., & Barton, D. (2012). Big data: The management revolution. Harvard Business Review, 90(10), 60–68.
Monga, A., & Bagchi, R. (2012). Years, months, and days versus 1, 12, and 365: The influence of units versus numbers. Journal of Consumer Research, 39(1), 185–198.
Morgan, R. M., & Hunt, S. D. (1994). The commitment‐trust theory of relationship marketing. Journal of Marketing, 58(3), 20–38.
Moriuchi, E. (2019). Okay, Google!: An empirical study on voice assistants on consumer engagement and loyalty. Psychology & Marketing, 36(5), 489–501.
Naipaul, S. A., & Parsa, H. G. (2001). Menu price endings that communicate value and quality. Cornell Hotel and Restaurant Administration Quarterly, 42(1), 26–37.
Ostrom, A., Fotheringham, D., & Bitner, M. J. (2018). Customer acceptance of AI in service encounters: Understanding antecedents and consequences. In P. P. Maglio, C. A. Kieliszewski, J. C. Spohrer, K. Lyons, L. Patricio, & Y. Swatani (Eds.), Handbook of service science (Vol. 2, pp. 77–103). Springer Nature.
Pardes, A. (2019). Need some fashion advice? Just ask the algorithm. https://www.wired.com/story/stitch-fix-shop-your-looks/
Pena‐Marin, J., & Bhargave, R. (2016). Lasting performance: Round numbers activate associations of stability and increase perceived length of product benefits. Journal of Consumer Psychology, 26(3), 410–416.
Punj, G. N., & Moore, R. (2007). Smart versus knowledgeable online recommendation agents. Journal of Interactive Marketing, 21(4), 46–60.
Sanders, N. R., & Manrodt, K. B. (2003). The efficacy of using judgmental versus quantitative forecasting methods in practice. Omega, 31(6), 511–522.
Schlosser, A. E., White, T. B., & Lloyd, S. M. (2006). Converting web site visitors into buyers: How web site investment increases consumer trusting beliefs and online purchase intentions. Journal of Marketing, 70, 133–148.
Seegebarth, B., Backhaus, C., & Woisetschläger, D. M. (2019). The role of emotions in shaping purchase intentions for innovations using emerging technologies: A scenario‐based investigation in the context of nanotechnology. Psychology & Marketing, 36(9), 844–862.
Shankar, V. (2018). How artificial intelligence (AI) is reshaping retailing. Journal of Retailing, 94(4), 6–11.
Sinha, R., & Swearingen, K. (2001). Comparing recommendations made by online systems and friends. Proceedings of the DELOS‐NSF Workshop on Personalization and Recommender Systems in Digital Libraries. www.onemweb.com
Slade, E. L., Dwivedi, Y. K., Piercy, N. C., & Williams, M. D. (2015). Modeling consumers' adoption intentions of remote mobile payments in the United Kingdom: Extending UTAUT with innovativeness, risk, and trust. Psychology & Marketing, 32(8), 860–873.
Stiving, M. (2000). Price‐endings when prices signal quality. Management Science, 46(12), 1617–1629.
Stiving, M., & Winer, R. S. (1997). An empirical analysis of price endings using scanner data. Journal of Consumer Research, 24(1), 57–67.
Sung, Y., & Kim, J. (2010). Effects of brand personality on brand trust and brand affect. Psychology & Marketing, 27(7), 639–661.
Syam, N., & Sharma, A. (2018). Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and practice. Industrial Marketing Management, 69, 135–146.
Talwar, S., Talwar, M., Kaur, P., & Dhir, A. (2020). Consumers' resistance to digital innovations: A systematic review and framework development. Australasian Marketing Journal, 28(4), 286–299.
Thomas, M., Simon, D. H., & Kadiyali, V. (2010). The price precision effect: Evidence from laboratory and market data. Marketing Science, 29(1), 175–190.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Wadhwa, M., & Zhang, K. (2015). This number just feels right: The impact of roundedness of price numbers on product evaluations. Journal of Consumer Research, 41(5), 1172–1185.
Wang, W., & Benbasat, I. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 72–101.
Xie, G., & Kronrod, A. (2012). Is the devil in the details? The signaling effect of numerical precision in environmental advertising claims. Journal of Advertising, 41(4), 103–117.
Yang, S.‐C., Hung, W.‐C., Sung, K., & Farn, C.‐K. (2006). Investigating initial trust toward e‐tailers from the elaboration likelihood model perspective. Psychology & Marketing, 23(5), 429–445.
Ye, C., Hofacker, C. F., Peloza, J., & Allen, A. (2020). How online trust evolves over time: The role of social perception. Psychology & Marketing, 37(11), 1539–1553.
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414.
Zang, Y., Zhang, F., Di, C., & Zhu, D. (2015). Advances of flexible pressure sensors toward artificial intelligence and health care applications. Materials Horizons, 2(2), 140–156.
Zhang, Y. C., & Schwarz, N. (2013). How and why 1 year differs from 365 days: A conversational logic analysis of inferences from the granularity of quantitative expressions. Journal of Consumer Research, 39(2), 248–259.
Zhu, D. H., Chang, Y. P., Luo, J. J., & Li, X. (2014). Understanding the adoption of location‐based recommendation agents among active users of social networking sites. Information Processing & Management, 50(5), 675–682.

How to cite this article: Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38, 1140–1155. https://doi.org/10.1002/mar.21498

APPENDIX A
Stimulus of Study 3
