Information Processing and Management 60 (2023) 103257

Contents lists available at ScienceDirect

Information Processing and Management


journal homepage: www.elsevier.com/locate/infoproman

Understanding AI-based customer service resistance: A perspective of defective AI features and tri-dimensional distrusting beliefs

Bo Yang a, Yongqiang Sun a,b,*, Xiao-Liang Shen a

a School of Information Management, Wuhan University, Wuhan, China
b Center for Studies of Information Resources, Wuhan University, Wuhan, China

A R T I C L E I N F O

Keywords:
Artificial intelligence
Customer service
Empathy
Emotional intelligence
Customer resistance

A B S T R A C T

Communicating with customers through AI-based chatbots in customer service (AISC) has become increasingly popular for many companies. However, in actual service encounters, AISC seems defective and is not always accepted by customers; occasionally it is even resisted. This study aims to investigate such customer resistance. In addition to two cognition-centered AI features (i.e., irrelevant and biased information) discussed in prior studies, this study proposes that lack of empathy is another key feature of defective AI (i.e., in its emotional dimension) and investigates the underlying mechanism of empathy. Specifically, this study proposes three pathways through which lack of empathy functions. A survey was conducted to test our hypotheses, and the results suggest that lack of empathy has three effects on customer resistance: direct, indirect, and moderating. Finally, theoretical contributions and practical implications are discussed.

1. Introduction

“I am sorry that this happened to you.” This answer was given by an empathic service chatbot that recognizes customer emotions
and responds accordingly based on artificial intelligence (AI). “AI” refers to a series of technologies that enable a computer system to
perceive, understand, react, and learn (Chi et al., 2021). With recent advances in AI technology, communicating with customers
through AI-based chatbots has become an increasingly popular way for many companies to provide real-time customer service
(Østerlund et al., 2021). In this study, we use the term “AI-based service chatbot” (AISC) to refer to conversational system interfaces
designed to communicate with customers in natural language based on AI technology in service settings (Chaves & Gerosa, 2021). The
enormous commercial potential of AISC has drawn the attention of academics and practitioners alike (Diederich et al., 2022).
However, studies and statistics have shown that not all customers enjoy interacting with AI chatbots (Chi et al., 2020; Gursoy et al.,
2019). In a large-scale survey on customer anger, most Americans complained about the AISC experience, including being left
screaming at mechanical algorithms that prevent them from talking to a real employee (Yam et al., 2021). AISC seems to be defective
and not always accepted in actual service encounters (Liu-Thompkins et al., 2022), leading to customer dissatisfaction and, in the worst
case, frustration and service failure (Wieseke et al., 2012).
* Corresponding author at: School of Information Management, Wuhan University, Wuhan, China.
E-mail address: sunyq@whu.edu.cn (Y. Sun).
https://doi.org/10.1016/j.ipm.2022.103257
Received 13 June 2022; Received in revised form 6 December 2022; Accepted 19 December 2022
Available online 26 December 2022
0306-4573/© 2022 Elsevier Ltd. All rights reserved.

The development of service chatbots involves technical and service designs that require extensive interdisciplinary efforts at the research and practice levels (Lin et al., 2014). The capabilities currently assigned to AI in the design of chatbots are divided into cognitive intelligence (CI) and emotional intelligence (EI) (Kaplan & Haenlein, 2019; Zhou et al., 2018). “CI” refers to chatbots
generating a cognitive representation of the world and using learning based on experience to inform future decisions (Kaplan &
Haenlein, 2019). “EI” refers to chatbots understanding human emotions and considering them in their decision-making (Kaplan &
Haenlein, 2019). For example, when a customer enters “Why did I receive a broken item?” into a chatbot interface, a chatbot with CI
matches the customer input to a knowledge base and outputs an appropriate answer (e.g., “Click the link below to enter the return and
exchange process”). A chatbot with EI can rely on its affective computing engine to recognize a user’s emotions and respond
accordingly (e.g., “We are sorry you had this experience, and we will replace it for you as soon as possible”).
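To make this division of labor concrete, the sketch below pairs a keyword-matched knowledge base (a stand-in for CI) with a naive negative-emotion check that prepends an empathic acknowledgment (a stand-in for EI). All rules, keywords, and replies here are hypothetical illustrations rather than any vendor's actual implementation; production AISC would rely on trained language-understanding and affective computing models rather than keyword lists.

```python
# Minimal sketch of a reply pipeline that combines CI (matching customer input
# to a knowledge base) with EI (detecting a likely negative emotion and
# prepending an empathic acknowledgment). All rules and wording are hypothetical.
KNOWLEDGE_BASE = {
    "broken": "Click the link below to enter the return and exchange process.",
    "refund": "Refunds are returned to the original payment method within 3-5 days.",
}
NEGATIVE_CUES = ("broken", "angry", "terrible", "why did")

def ci_answer(message: str) -> str:
    """Cognitive intelligence: look the message up in the knowledge base."""
    text = message.lower()
    for keyword, answer in KNOWLEDGE_BASE.items():
        if keyword in text:
            return answer
    return "Could you tell me a little more about the issue?"

def ei_prefix(message: str) -> str:
    """Emotional intelligence: acknowledge a probable negative emotion."""
    if any(cue in message.lower() for cue in NEGATIVE_CUES):
        return "We are sorry you had this experience. "
    return ""

def reply(message: str) -> str:
    return ei_prefix(message) + ci_answer(message)

print(reply("Why did I receive a broken item?"))
# -> We are sorry you had this experience. Click the link below to enter the
#    return and exchange process.
```

The point of the sketch is only that the two capabilities operate on the same customer input but contribute different parts of the response.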
As supporting technologies for AISC are rapidly evolving, AISC can now focus on the specific needs of consumers, for example by
exhibiting EI. An AISC with EI can better understand customers than one equipped with CI due to its ability to analyze emotional data
(Huang & Rust, 2021a). Although in practice EI is an essential emerging capability of AI that has been shown to be crucial to merchant
profitability, few studies have examined the effects of EI on user perceptions. Specifically, previous studies have focused on user
evaluations and perceptions of CI (Chau et al., 2013; Lee & Benbasat, 2011; Xu et al., 2018), such as relevance and bias. Therefore, this
study attempts to extend the research centered on CI by considering EI. Several studies have argued that EI is particularly essential for
employees (Prentice et al., 2020) and chatbots (Zhou et al., 2020). Moreover, empathy, a key dimension of EI, is the most intuitive
indicator that reflects the EI of employees in service encounters (Law et al., 2021). “Empathy” refers to the capacity to sense and react
to a customer’s thoughts, feelings, and experiences during a service encounter (Wieseke et al., 2012). A chatbot’s empathy is based on
empathy in interpersonal domains, such as how the term is used in clinical psychology, social psychology, and neuroscience (Yalcin &
DiPaola, 2018). An empathic chatbot can understand a situation and leverage empathic attention to recognize the customer’s
emotional state and provide the customer emotional support in a manner similar to that of an effective human employee in such a
situation (Liu-Thompkins et al., 2022).
To advance this stream of research, we draw on relationship marketing theory for three reasons. First, relationship marketing
theory suggests that service quality is a well-established antecedent of positive relational outcomes (Crosby et al., 1990). Moreover,
previous marketing research has revealed that employee CI and EI are positively related to job performance and service quality (Côté &
Miners, 2006; Kidwell et al., 2011). Therefore, we introduce the evaluation of employee CI and EI into the AI chatbot research context.
Second, relationship marketing theory suggests that relationship marketing variables (e.g., empathy) can build quality customer-seller
(e.g., AISC in this study) relationships that drive relational outcomes (Palmatier et al., 2006). Third, the customer-seller relationship
should be mutually beneficial because the relationship is critical to the business’s success (Morgan & Hunt, 1994). Furthermore, the
customer-seller relationship is the core issue of relationship marketing (Möller & Halinen, 2000); therefore, companies should
emphasize relationship marketing activities so as to positively influence the adoption of AISC. As an essential social “glue”, empathy has been shown to be important in maintaining satisfying social relationships (Gremler & Gwinner, 2008; Mangus et al., 2020), enhancing relationship quality (Ndubisi & Nataraajan, 2018; Wieseke et al., 2012), and facilitating the adoption of AISC (de Kervenoael et al., 2020; Luo et al., 2019). Empirical evidence indicates that empathic responses from employees and chatbots not only underlie social
cooperation and prosocial behavior (Leite et al., 2013) but also can positively influence human emotions and behaviors (Cheng et al.,
2022; Liu-Thompkins et al., 2022). Conversely, an inappropriate expression of empathy can produce unfavorable outcomes. For
example, if an AI device’s empathic response is inappropriate for the interaction context, it can increase distrust in the information
provided (Chin & Yi, 2021; Cramer, Evers, et al., 2010). Therefore, it is of great theoretical and practical importance to investigate AI
empathy because a lack of empathy may influence customer attitudes and behaviors otherwise correlated with potential gains for
merchants (Huang & Rust, 2021a). Although in practice AISC that lacks empathy seems to be defective and is not always accepted or
even resisted by customers in service encounters (Yam et al., 2021), leading to customer anger and service failure, few studies have
empirically investigated customer resistance by considering empathy.
This study attempts to answer the following research question to address the research gaps noted above:
RQ: What is the relationship between lack of empathy and customer resistance to AISC?
This study investigates user resistance to AISC from the perspective of distrust to address this question. The study focuses on distrust for the
following three reasons. First, according to relationship marketing theory, trust is a core relationship-related factor, and resistance is a
relationship outcome. An important metric in this theory evaluates relationship-related factors and their relationship outcome
(Morgan & Hunt, 1994; Sundermann, 2018). Furthermore, trust is considered a necessary element for maintaining relationships
because trust shifts the focus of relationships from short- to long-term orientations (Bock et al., 2016). Second, distrust is a core
relational construct that influences AI acceptance (Moussawi et al., 2021; Nasirian et al., 2017; Saffarizadeh et al., 2017) and is
particularly vital in online environments and human-AI interactions (Aljukhadar et al., 2017; Yokoi et al., 2021). Third, according to
the computers as social actors (CASA) framework (Nass et al., 1994), customers can unconsciously conceptualize AISC as a human being
with whom they can develop interpersonal relationships that have a strong association with trust (Nasirian et al., 2017), but lack of
empathy can enhance distrust in such relationships.
Furthermore, this study focuses on the joint role of CI and EI in customer resistance to AISC. The study proposes and tests three
different mechanisms in which EI (i.e., lack of empathy) and CI (i.e., irrelevant and biased information) may play roles. First, both EI
and CI positively impact distrust, which in turn affects customer resistance to AISC. Second, there is a link between EI and CI; i.e., EI
positively affects CI (irrelevant and biased information). Third, EI acts synergistically with CI. Specifically, lack of empathy strengthens
the relationship between CI (irrelevant information and biased information) and distrust (competence and integrity). “Lack of
empathy” refers to AISC failing to sense and react to a customer’s thoughts, feelings, and experiences during a service encounter
(Wieseke et al., 2012). “Irrelevant information” refers to AISC providing customers incorrect or inaccurate information that does not
match customer preferences (Chau et al., 2013). “Biased information” refers to AISC providing information based on the merchant’s
interest (Chau et al., 2013).


This study contributes to the literature in several ways. First, it contributes to research on consumer innovation resistance by
examining the new challenges of AI-based services from the perspective of customer resistance. Second, unlike previous studies
focusing on the CI perspective, this study extends the cognition-centered framework of the AI literature by addressing the emotional
dimension. Third, the study deconstructs the underlying mechanism of empathy in the context of AI chatbot resistance. From a
practical perspective, this study encourages merchants to appropriately adjust their AI chatbot implementation strategies based on the
three dimensions of defective AI features. This study also encourages practitioners to focus on developing more intelligent and
empathic AISC.
The remainder of the article is structured as follows. In the next section, we review the relevant literature. Subsequently, we present
our research model and hypotheses. The research methodology and data analysis results are reported in the subsequent two sections.
Finally, implications for research and practice are discussed in detail.

2. Literature review

2.1. Two-dimensional framework of AI capabilities

The current capabilities of AI chatbots are conceptualized as CI and EI (Kaplan & Haenlein, 2019; Zhou et al., 2018). Early research
on AI chatbots mainly focused on CI. CI provides chatbots the ability to think about or analyze information and situations. For example,
mechanical and analytical AI applications, such as food ordering and delivery and customer service chatbots for simple issues, generate
a cognitive representation and use learning based on past data to make future decisions (Johnson & Verdicchio, 2017; Kaplan &
Haenlein, 2019). With time, CI has become unable to meet the needs of customers in practice. Therefore, practitioners have spent much
effort equipping AI with EI based on CI. Chatbot EI is attracting substantial attention from scholars (Law et al., 2021; Zhou et al., 2018).
EI is a new and critical design factor manipulated in AI applications. Practitioners have incorporated EI into chatbot design to enable
chatbots to emotionally interact with customers based on affective computing engines (Song et al., 2022). EI is actively discussed in
psychology and service fields (Di Fabio & Saklofske, 2014; Prentice et al., 2020). Human beings are considered to be highly
emotionally intelligent because they are aware of their emotions, able to manage their social skills, and aware of the emotional state of
others (Dollmat & Abdullah, 2021). It could be beneficial for chatbots to be perceived as being emotionally intelligent (Law et al.,
2021). EI is a set of social and emotional skills that enables chatbots to recognize, regulate and express emotions, and EI is also crucial
for chatbots to communicate harmoniously (Yalcin & DiPaola, 2018).
Empathy is a key dimension of EI and one of the clearest ways people can signal to others that they understand their emotions (Law
et al., 2021). Empathy is the capability to understand or feel what a user is experiencing from within the user’s frame of reference in
service encounters (Zhou et al., 2020). Furthermore, chatbots require empathy and EI to process emotional information in a service
designed to mimic human-human interaction (Yang et al., 2017). Scholars have examined empathy in customer service based on
AI-based chatbots. Empathy has been shown to play a central role in human-AI interactions (de Kervenoael et al., 2020; Leite et al.,
2013; Liu & Sundar, 2018; Paiva et al., 2017). Moreover, empathic intelligence is the highest level of AI for customer service (Huang &
Rust, 2018), providing customer emotional support (Huang & Rust, 2021b).
Chatbot and customer service areas are inextricably linked. In the literature, empathy is considered a key concept associated with
service quality and a significant prerequisite for successful service encounters (Parasuraman et al., 1988, 1994; Pitt et al., 1995).
Empathy is involved in employees caring for and paying individualized attention to their customers (Pitt et al., 1995). Prior studies
have found that employees’ empathic individual concern leads to highly positive service encounters. Being empathetic to customer
needs is a viable starting point for creating a unique experience (Collier et al., 2018). Researchers in the service field believe that when
employees are strongly customer-oriented, customers will develop a greater emotional commitment to the brand or merchant
(Markovic et al., 2018). Moreover, as a driver of trust, loyalty, and long-term relationships, empathy requires employees to understand
the customer’s position, creating the necessary conditions for connecting service providers and customers (Parasuraman et al., 1988).
Scholars have studied the effects of a lack of empathy on user cognition and behavior (Cramer, Goddijn, et al., 2010; Luo et al.,
2019). For instance, lack of empathy is negatively associated with user call length and purchase behavior (Luo et al., 2019), and
inaccurate empathy leads to user distrust (Cramer, Goddijn, et al., 2010). These studies conducted experiments in the context of robots
and examined the effects of empathy. However, they did not precisely measure empathy or investigate its mechanism. The results show
that participants generally perceived robots that lack empathy as untrustworthy.

2.2. User resistance behavior

In the current business and service environment, services based on information technology (IT) are crucial in improving organizational and operational effectiveness. User resistance is considered a significant constraint on the successful implementation of IT-
based services (Ali et al., 2016) and reduces the potential profits of service providers. User resistance is a complex phenomenon
with different definitions. Kim and Kankanhalli (2009) defined user resistance as the opposition of a user to change associated with
new technology implementation. In addition, Klaus and Blanton (2010) defined user resistance as the behavioral expression of a user’s
opposition to a system implementation during the implementation. In summary, because all implementations of new IT-based services
will face some form of consumer resistance (Laukkanen, 2016), merchants must overcome such resistance if AISC is to be successfully
adopted. In this study, we define AI-based chatbot resistance as customers’ negative cognition, affection, and opposition to AISC
adoption (Yoo et al., 2021).
Most previous studies on user resistance have focused on the organizational enforcement of new systems and working settings


(Bhattacherjee & Hikmet, 2007; Hirschheim & Newman, 1988; Kim & Kankanhalli, 2009; Lapointe & Rivard, 2005). However, in this
study, AISC represents an emerging digital technology used by individuals, and a number of previous researchers have examined user
resistance to such technologies. More specifically, innovation resistance research is rapidly gaining prominence, and there have been
several studies on individual user resistance to digital services (Hsieh, 2016; Kang & Kim, 2009; Nel & Boshoff, 2021). These studies
support our study by demonstrating the impact of IT defects on user perception and behavior. Furthermore, previous research on
customer responses to chatbots mainly focuses on the customer’s acceptance or use intention of the chatbot (Schuetzler et al., 2020;
Sheehan et al., 2020; Wirtz et al., 2018). Although customer resistance has proven to be a significant issue in decreasing merchant
revenue, few studies have investigated customer resistance to AI-based chatbots (Cheng et al., 2022; Longoni et al., 2019; Pizzi et al.,
2021). Findings in the health care context indicate that AI-based chatbots take less account of customers’ unique characteristics and
personal needs, which leads to customer resistance to AI chatbots (Longoni et al., 2019). In the e-commerce context, the anthropomorphic design of chatbots reduces customer resistance to the chatbot (Pizzi et al., 2021), and two attributes (empathy and friendliness) of a chatbot strengthen trust, which in turn reduces customer resistance to the chatbot. These studies provide a specific research
base for investigating the antecedents and related pathways of customer resistance to chatbots.

2.3. Distrusting beliefs

Trust has attracted extensive attention from researchers. Previous studies have shown that the dimensions of trust depend on the
humanization degree of the trusted technology (Lankton et al., 2015). For less human-like technologies, user trust is primarily based on
functionality and usability, whereas for human-like technologies, user trust is based on multidimensional (integrity, benevolence,
competence) evaluation (Benbasat & Wang, 2005; Vance et al., 2008). Research on human-like trusting beliefs has been inspired by
studies on trust in human-human relationships (Aoki, 2020). According to the computers as social actors (CASA) paradigm, the social
norms and rules guiding human-human interaction can be directly applied to human-chatbot interaction because users unconsciously
treat chatbots as independent social entities. Thus, users can build trust with chatbots as they do with humans.
Furthermore, prior studies have used human-like trusting beliefs to study trust in technology because people tend to anthropomorphize technologies, ascribing human attributes to them (Lankton et al., 2015). Thus, the multidimensional structure of human-like
trust is applicable to anthropomorphic technology (e.g., AISC in this study). Consistent with the framework adopted in previous studies
(Höddinghaus et al., 2021; Hu et al., 2021) and those papers that have investigated the phenomenon from a negative perspective
(Charki & Josserand, 2008; Chau et al., 2013), this study applies multidimensional distrust to measure customer perceptions of AI
chatbots to fit the negative research context. Distrust has been shown to have multiple dimensions (McKnight et al., 2017; Moody et al.,
2017). The first dimension of distrusting beliefs is benevolence distrust, a significant variable in this study. “Benevolence” refers to “the
trustee’s act of, or a general inclination toward, kindness and sincerity”, and benevolence distrust is “the trustor’s belief that the trustee
does not care about, and is not motivated to act in, the trustor’s interests” (Chau et al., 2013; McKnight & Chervany, 2001b). More
specifically, “benevolence distrust” refers to the trustor’s belief that the supplier (trustee) will not act in the trustor’s best interest and
that the supplier (trustee) is only interested in its own well-being, not in the trustor’s well-being (McKnight et al., 2017). One study

Fig. 1. Research model.


noted that no research had yet investigated benevolence distrust in conversational agents, arguing that it is difficult for online users to
precisely judge benevolence only by interacting with prior conversational agents (Chau et al., 2013). However, when based on AI
technologies such as affective computing, advanced AISC can now exhibit a rich variety of social cues that can encourage customers to
anthropomorphize it and ascribe human attributes to it (Zhou et al., 2020). Thus, we can increase our understanding of distrust by
examining the benevolence dimension of AISC. The second dimension of distrusting beliefs is competence distrust. “Competence”
refers to the ability of the trustee to honor a promise, and “competence distrust” refers to the trustor’s belief that the trustee cannot
perform the duties and obligations expected by the trustor (Chau et al., 2013; McKnight & Choudhury, 2006). The third dimension of
distrusting beliefs is integrity distrust. Here, “integrity” refers to the honesty with which the trustee acts for the trustor’s benefit, and
“integrity distrust” refers to the trustor’s belief that the trustee has not entered into a good faith agreement (Chau et al., 2013;
McKnight & Choudhury, 2006).
Several studies on trust in AI have examined AI design features (e.g., intelligence, humanness, AI quality characteristics) that
influence user trust, which in turn influence user behavior (Moussawi et al., 2021; Moussawi & Benbunan-Fich, 2021; Nasirian et al.,
2017; van Pinxteren et al., 2019, 2020). Although these studies focused on trust, the antecedents and multidimensionality of distrust
have not been fully investigated (Glikson & Woolley, 2020). Thus, this study attempts to extend the research from the perspective of
tri-dimensional distrusting beliefs.

3. Research model and hypotheses

Incorporating defective AI features, distrusting beliefs, and customer resistance, this study develops a research model as depicted in
Fig. 1. Specifically, defective AI features include lack of empathy (EI) and irrelevant and biased information (CI). In practice, the
chatbot responds to customer questions during the service process and provides relevant information to help the customer complete
the service process efficiently. In addition, the merchant will provide information to the customer that is in the merchant’s interest
with respect to increasing profit (Xiao & Benbasat, 2015, 2018). In addition, the CI in the research model is based on the previous
literature (Chau et al., 2013). In this study, lack of empathy, irrelevant information, and biased information are customer evaluations
of the quality of the information provided by AISC. According to relationship marketing theory, customer distrust in AISC can increase
if the service is unsatisfactory. Such an increase in distrust is found to be more profound for personalized systems (i.e., AISC in this
study) than for generic systems. Moreover, problems between a principal (e.g., customer) and an agent (e.g., AISC) mainly arise under
conditions of incomplete and asymmetric information, unaligned interests between the parties, and difficulty in monitoring and
evaluating the performance of the agent (Zhang & Curley, 2018). Thus, we contend that as a negative relationship-introducing factor,
lack of empathy will lead to customer distrust in AISC, which in turn influences customer resistance. “Distrusting beliefs” refers to
benevolence, competence, and integrity distrust. In addition, lack of empathy is considered a moderating variable. Regarding distrusting beliefs, disposition to distrust and institutional-based distrust are considered control variables. Regarding AISC resistance,
gender, age, education, familiarity with AI, and familiarity with AISC are considered control variables. The following sections discuss
each of the constructs and their relationships in detail.

3.1. EI and distrust

According to the theory of principal-agent relationships (Zhang & Curley, 2018), problems between the principal (e.g., customer)
and the agent (e.g., AISC) mainly concern the misalignment of interests. AISC that lacks empathy fails to sense and react to a customer’s
thoughts, feelings, and experiences (misalignment of interests). Therefore, customers are likely to be concerned that AISC is designed
to benefit the merchant and question whether AISC puts the customer’s interests first (i.e., misalignment of interest concern) and
makes a dispositional attribution of the perceived low benevolence of AISC (Wang & Benbasat, 2016). In contrast, empathic AISC
caring involves individualized attention to customers (alignment of interests). Thus, empathic AISC can mitigate customer concerns
regarding interest incongruence, thereby increasing the customer’s benevolence belief in AISC.
Scholars in the marketing field have provided extensive empirical evidence for the positive impact of empathy on trust (Mangus
et al., 2020; Weißhaar & Huber, 2016). Empirical research on chatbot empathy has shown that empathic AI leads to more trust (Brave
et al., 2005; Leite et al., 2014). If AISC does not reach the level of “empathic intelligence” (Huang & Rust, 2018), customers will not perceive personalized attention and will regard AISC as mechanical and rigid, reducing the perceived benevolence of AISC. Research in the field of social psychology empirically supports this positive linkage by demonstrating that high
empathy from others enables individuals to infer more about the benevolence of others (Carmody & Gordon, 2011; Myyry & Helkama,
2001; Paleari et al., 2005). Therefore, following these research streams, we propose the following:
H1. AISC lack of empathy is positively associated with user benevolence distrust.

3.2. EI, CI, and distrust

Prior studies have examined whether empathy is an essential prerequisite for successful service encounters (Wieseke et al., 2012;
Wilder et al., 2014). Especially in online service encounters, interaction with AISC inevitably involves money, privacy, and other issues
that cause customers anxiety (Marzouk et al., 2022). The role of empathy is even more crucial in this context because empathy is
capable of alleviating negative experiences in social interactions (Wieseke et al., 2012). Therefore, customer awareness that AISC acts
in an empathic manner may contribute to a favorable perception of AISC performance (Wieseke et al., 2012). Thus, lack of empathy
may reinforce the negative perception of AISC (i.e., irrelevant and biased in this study).


Customers’ competence distrust of AISC originates in their evaluation of the relevance of the information provided by AISC
(Glikson & Woolley, 2020). Empirical research has found that relevant information provided by recommendation agents positively
influences competence trust (Komiak & Benbasat, 2006), while irrelevant information positively influences competence distrust (Chau
et al., 2013). If AISC fails to provide relevant information, the customer will worry about the negative consequences caused by the
irrelevant information, thereby increasing his or her distrust of the competence of AISC. Thus, we contend that lack of empathy will
enhance customer competence distrust in AISC through irrelevant information:
H2. Irrelevant information mediates the positive effect of lack of empathy on competence distrust.
According to psychological contract violation theory, biased information challenges the foundation of a customer’s general beliefs
regarding others’ behavioral norms related to trusting relationships, thereby increasing customers’ integrity distrust of AISC (Wang &
Wang, 2019). Customers’ distrust of integrity in AISC originates in their evaluation of the fairness of the information provided by AISC
(Glikson & Woolley, 2020). When customers perceive that such information is biased in favor of the interests of merchants, they will
believe that AISC is not providing fair and reasonable results. Therefore, customers may consider that AISC fails to honor its
commitment to serve customers honestly. Thus, we contend that lack of empathy will enhance customer integrity distrust of AISC
through biased information:
H3. Biased information mediates the positive effect of lack of empathy on integrity distrust.

3.3. Moderating effect of EI

Previous studies have demonstrated that highly empathetic employees are more likely to build high-quality customer relationships
(Weißhaar & Huber, 2016; Wieseke et al., 2012). In this regard, relationship marketing theory further posits that when an employee is
empathetic, customers appreciate the employee’s customer orientation and value the relationship with the employee, thus developing
trust in the employee (Crosby et al., 1990). Accordingly, when customers perceive a high level of lack of empathy in AISC, such as an
impolite tone and machine-like responses, they will have a highly skeptical attitude toward AISC. In this case, irrelevant information
can easily lead to competence distrust in AISC, and biased information can easily cause integrity distrust in AISC. Conversely, customers are more likely to evaluate employee performance positively when they perceive an employee’s behavior as empathic, such as
when an employee uses a gentle tone and exhibits a caring attitude (Wieseke et al., 2012). In this case, even if AISC generates irrelevant
and biased advice due to a lack of competence and a profit-making orientation, customers may have a relatively high tolerance level
and thus be less likely to quickly distrust the competence and integrity of AISC.
Although prior studies have either directly or indirectly associated AI empathy with AI features or user attitudes toward AI (de
Kervenoael et al., 2020; Luo et al., 2019; Pelau et al., 2021), few studies have examined empathy as a moderator of the impacts of
defective AI features on user attitudes. This gap is surprising because when AI behaves in an empathic manner, positive AI features
plausibly lead to better user attitudes toward AI. Furthermore, prior studies have argued that empathy promotes stronger relationships
and collaboration between customers and AISC (Paiva et al., 2017). Hence, we intend to investigate the following two moderating
effects:
H4. Lack of empathy strengthens the relationship between irrelevant information and competence distrust in AISC.
H5. Lack of empathy strengthens the relationship between biased information and integrity distrust in AISC.

3.4. Distrusting beliefs and user resistance to AISC

Prospect theory demonstrates a substantial influence of distrust on customer behavior (Tversky & Kahneman, 1989). Specifically,
user resistance is a distrust-related behavior (McKnight & Chervany, 2001a). Distrusting beliefs should be positively associated with
distrust-related behaviors because users tend to translate their beliefs and intentions into actions (Fishbein & Ajzen, 1975).
Distrusting beliefs reflect the customer’s expectation about AISC’s poor capabilities, negative motives, and harmful behavior
(Dimoka, 2010). Moreover, distrusting beliefs help customers avoid uncertain risks to make decisions since interaction with AISC will
inevitably involve money, privacy, and other issues that customers are concerned about. As a result, given the reasons for the negative
expectation, the intention to engage in related behaviors may not be forthcoming (Ou & Sia, 2010). Distrust has been negatively linked
to certain contexts associated with AISC, such as service acceptance (Ho & Chau, 2013; Mani & Chouk, 2018; Sharma et al., 2020) and
online transactions (Ahmad & Sun, 2018; Cho, 2006; Dimoka, 2010; McKnight & Chervany, 2001a; Moody et al., 2014, 2017; Nel &
Boshoff, 2021). Thus, we propose the following hypotheses:
H6a. Competence distrust in AISC is positively associated with resistance to AISC.
H6b. Benevolence distrust in AISC is positively associated with resistance to AISC.
H6c. Integrity distrust in AISC is positively associated with resistance to AISC.
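Taken together, the hypothesized relationships can be summarized schematically as the following system of structural equations; the coefficient symbols and error terms are our own labels, controls enter as described above, and × denotes the interaction (moderation) terms:

$$
\begin{aligned}
\mathrm{BD} &= \beta_{1}\,\mathrm{LE} + \text{controls} + \varepsilon_{1} && (\mathrm{H1})\\
\mathrm{II} &= a_{1}\,\mathrm{LE} + \varepsilon_{2}, \qquad \mathrm{BI} = a_{2}\,\mathrm{LE} + \varepsilon_{3} && \\
\mathrm{CD} &= b_{1}\,\mathrm{II} + c_{1}\,\mathrm{LE} + m_{1}\,(\mathrm{II}\times\mathrm{LE}) + \text{controls} + \varepsilon_{4} && (\mathrm{H2},\,\mathrm{H4})\\
\mathrm{ID} &= b_{2}\,\mathrm{BI} + c_{2}\,\mathrm{LE} + m_{2}\,(\mathrm{BI}\times\mathrm{LE}) + \text{controls} + \varepsilon_{5} && (\mathrm{H3},\,\mathrm{H5})\\
\mathrm{AICR} &= d_{1}\,\mathrm{CD} + d_{2}\,\mathrm{BD} + d_{3}\,\mathrm{ID} + \text{controls} + \varepsilon_{6} && (\mathrm{H6a\text{--}c})
\end{aligned}
$$

where LE = lack of empathy, II = irrelevant information, BI = biased information, CD/BD/ID = competence/benevolence/integrity distrust, and AICR = AI-based chatbot resistance. H2 and H3 correspond to the indirect effects a1·b1 and a2·b2, with the direct paths c1 and c2 allowing for partial mediation.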

4. Research methodology

4.1. Research setting

AISC is increasingly implemented in customer service. Currently, AISC saves labor costs for merchants and avoids forcing customers
to wait in line. Despite its widespread use, AISC may have drawbacks that lead to customer resistance, such as lack of empathy and
irrelevant or biased answers. AISC seems defective and is not always accepted by customers in actual service encounters. A survey
reported that 90% of participants believed that AISC lacked “intelligence”, while 75% believed that AISC was deficient in


“Conversational Language” (Mindbrowser, 2020). The average user retention rate after 1 month of using AISC is only 20-40%
(Mindbrowser, 2020). Therefore, AISC is an appropriate research context for this study.

4.2. Measures

All constructs in the research model were measured using items adopted from previous studies with slight modifications to fit the
AISC context. Specifically, lack of empathy was measured with items adapted from Iglesias et al. (2019). Irrelevant information, biased
information, disposition to distrust, and institutional-based distrust were measured with items adapted from Chau et al. (2013).
Distrusting beliefs (benevolence, competence, and integrity) were all measured with items adapted from McKnight et al. (2002). AISC
resistance was measured with items adapted from Yoo et al. (2021). All items were rated on a seven-point Likert scale from 1 (strongly
disagree) to 7 (strongly agree). Appendix D presents these constructs and their corresponding items.

4.3. Data collection

Data were collected through an online survey to verify the proposed research model and hypotheses. All construct measures were
adopted from previous studies. The online survey was conducted in May 2021 in China. We used a back-translation method to ensure
consistency between the English and Chinese versions of the questionnaire (Shen & Kuang, 2022). Moreover, the questionnaire was
pretested by experts and AISC users prior to formal data collection to improve survey quality. Suggestions were made on the format,
logic, wording, and other details of the questionnaire.
The target population of this study was AISC users. Questionnaires were collected on the Credamo platform. We asked the respondents to recall the last time they used AISC and to answer all the questions in the questionnaire based on this interactive experience. To ensure that the service scenarios recalled throughout the questionnaire were as unique as possible, we tried to sharpen the user’s impression of the recalled scenarios. At the beginning of the questionnaire, we asked the respondents to describe the specific scenario in which they interacted with AISC. We collected information regarding users’ past use of AISC as well as demographic information. In addition, we employed screening questions to help us automatically reject invalid samples. We leveraged methods on the
Credamo platform to improve questionnaire quality, such as inviting respondents with higher credit scores, limiting respondents with
the same IP, and requesting authorization for geolocation. In the data cleaning, we rejected invalid questionnaires. In the end, we
received 301 valid responses. The demographic characteristics of the respondents are shown in Table 1.

Table 1
Demographic characteristics of respondents (N = 301).
Variables Levels Frequency Percentage

Gender Male 154 51.16%
Female 147 48.84%
Age <=20 16 5.32%
21-25 73 24.25%
26-30 117 38.87%
31-40 87 28.90%
>40 8 2.66%
Education High school or below 11 3.65%
Associate college 26 8.64%
Undergraduate 226 75.08%
Postgraduate 35 11.63%
PhD 3 1.00%
AISC frequency At least once daily 17 5.65%
At least once per week 146 48.50%
At least once per 2 weeks 67 22.26%
At least once per month 57 18.94%
At least once per 3 months 10 3.32%
At least once per 6 months 4 1.33%
Total number of times AISC used 1-2 times 4 1.33%
3-5 times 31 10.30%
6-10 times 49 16.28%
11-20 times 55 18.27%
21-30 times 39 12.96%
More than 30 times 123 40.86%
How long ago was AISC used Less than 3 months ago 7 2.33%
Less than 6 months ago 27 8.97%
Less than 1 year ago 50 16.61%
Less than 2 years ago 84 27.91%
Less than 3 years ago 68 22.59%
More than 3 years ago 65 21.59%


5. Data analysis and results

This study used Smart PLS 3.2.9 in the data analysis to examine and validate the proposed research model and hypotheses. The partial least squares (PLS) technique is a widely used structural equation modeling (SEM) method. PLS can handle latent constructs measured with multiple indicators and can analyze the measurement and structural models simultaneously. Moreover, PLS-SEM can analyze data with a small sample size and non-normal distributions (Hair et al., 2011).

5.1. Measurement model

In this study, all constructs were reflectively measured. The measurement model for the constructs was assessed by examining their
reliabilities, convergent validity, and discriminant validity. The reliability of a construct can be assessed by composite reliability (CR)
and average variance extracted (AVE). Specifically, a CR value greater than 0.7 is often deemed acceptable (Fornell & Larcker, 1981).
As shown in Table 2, the CR values for all the constructs were greater than 0.9, and the AVEs were greater than 0.7, exceeding the
suggested thresholds of 0.7 and 0.5, respectively (Fornell & Larcker, 1981).
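For reference, the usual expressions behind these two statistics, stated here under the assumption of standardized indicator loadings λ_i for a construct with n indicators (Fornell & Larcker, 1981), are:

$$
\mathrm{CR} = \frac{\left(\sum_{i=1}^{n}\lambda_{i}\right)^{2}}{\left(\sum_{i=1}^{n}\lambda_{i}\right)^{2} + \sum_{i=1}^{n}\left(1-\lambda_{i}^{2}\right)},
\qquad
\mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{2},
$$

where 1 − λ_i² is the error variance of indicator i.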
We can examine the item loadings to evaluate convergent validity and discriminant validity. Specifically, convergent validity can
be assessed by checking whether the item loadings of their respective constructs are sufficiently high (Fornell & Larcker, 1981).
Discriminant validity can be assessed by checking whether the item loadings on their respective constructs are higher than the loadings
on other constructs (e.g., cross-loading) (Fornell & Larcker, 1981). As shown in Table 3, the item loadings on their respective constructs
were higher than 0.7. Thus, convergent validity was considered satisfactory. In addition, these loadings were higher than the
cross-loadings, indicating discriminant validity. These results reveal that these constructs exhibited good convergent validity and
discriminant validity. Furthermore, we compared the square root of the AVE of every construct and the correlation coefficients related
to this construct to evaluate discriminant validity. As shown in Table 2, the correlation with other constructs was lower than the square
root of the AVE for each construct. These results reveal that all constructs differed from other constructs, indicating good discriminant
validity.
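As an illustration of how the Fornell-Larcker comparison can be checked programmatically, the sketch below uses the AVEs and inter-construct correlations reported in Table 2 for three of the constructs (LE, II, CD); it is a toy verification of the criterion, not the SmartPLS procedure actually used.

```python
import numpy as np

# AVEs and inter-construct correlations for LE, II, and CD, taken from Table 2.
constructs = ["LE", "II", "CD"]
ave = np.array([0.706, 0.802, 0.802])
corr = np.array([
    [1.000, 0.727, 0.690],
    [0.727, 1.000, 0.795],
    [0.690, 0.795, 1.000],
])

sqrt_ave = np.sqrt(ave)  # the boldfaced diagonal elements in Table 2
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)     # correlations with the other constructs
    ok = sqrt_ave[i] > others.max()    # Fornell-Larcker criterion
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.3f}, "
          f"max correlation = {others.max():.3f}, criterion met: {ok}")
```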

5.2. Common method bias

Common method bias (CMB) is a frequently mentioned concern in empirical research, particularly when all data are self-reported
and collected from the same source simultaneously. Therefore, we compared the variances explained by both trait factors and method
factors to analyze the common method bias (Podsakoff et al., 2003). Following the guidelines of Liang et al. (2007), we found that the
trait factors explained 80.3% of the total variance, the method factors explained only 1.1% of the variance, and the ratio of substantive
variance to method variance was 73, indicating that common method bias was unlikely to be a problem in this study.
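As a quick arithmetic check of the ratio reported above, assuming the two percentages are the averages of the squared trait and method factor loadings across indicators (Liang et al., 2007):

$$
\frac{\bar{R}^{2}_{\text{trait}}}{\bar{R}^{2}_{\text{method}}} = \frac{0.803}{0.011} \approx 73.
$$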

5.3. Structural model

Fig. 2 presents the PLS-SEM results, including path coefficients and explained variances. The results show that lack of empathy
significantly positively affects benevolence distrust (β = 0.461, t = 8.611), supporting H1. Following the approach described by Chin et al. (2003), an interaction term was created to test the moderating effects. The results show that lack of empathy positively moderates the relationship between irrelevant information and competence distrust (β = 0.141, t = 3.829) but not the relationship between biased information and integrity distrust (β = 0.021, t = 0.323). Thus, H4 was supported, while H5 was not supported. Both competence
distrust and benevolence distrust were found to have a significant impact on AI customer service resistance (β = 0.649, t = 14.187; β =
0.268, t = 6.051, respectively), while the effect of integrity distrust on AI customer service resistance was not significant (β = -0.076, t
= 1.575). Therefore, H6a and H6b were supported, while H6c was not supported. Regarding the control variables, the analysis results
are summarized in the table due to space limitations (Table 4). Specifically, institutional-based distrust was found to have a significant
effect on competence distrust (β = 0.097, t = 2.54) and integrity distrust (β = 0.186, t = 3.304). Familiarity with AISC had a significant

Table 2
Descriptive statistics, reliabilities, and correlations.
Mean SD AVE CR LE BI II CD BD ID AICR IBD DD

LE 4.567 1.537 0.706 0.905 0.840
BI 4.306 1.561 0.898 0.964 0.656 0.948
II 4.041 1.582 0.802 0.924 0.727 0.620 0.895
CD 3.929 1.574 0.802 0.942 0.690 0.598 0.795 0.896
BD 4.439 1.478 0.764 0.907 0.704 0.617 0.583 0.635 0.874
ID 3.478 1.501 0.798 0.941 0.463 0.470 0.388 0.554 0.597 0.893
AICR 3.564 1.666 0.796 0.940 0.730 0.626 0.776 0.79 0.652 0.465 0.892
IBD 2.915 1.309 0.807 0.926 0.319 0.194 0.326 0.366 0.270 0.306 0.386 0.898
DD 4.24 1.591 0.848 0.944 0.204 0.193 0.173 0.202 0.181 0.146 0.230 0.327 0.921

Note. AVE = Average variance extracted, CR = Composite reliability, LE = Lack of empathy, BI = Biased information, II = Irrelevant information, CD
= Competence distrust, BD = Benevolence distrust, ID = Integrity distrust, AICR = AI-based chatbot resistance, IBD = Institutional-based distrust, DD
= Disposition to distrust. The boldfaced diagonal elements are the square roots of the AVEs.


Table 3
Loadings and cross-loadings.
Items LE BI II CD BD ID AICR IBD DD

LE1 0.857 0.519 0.629 0.555 0.546 0.337 0.615 0.237 0.184
LE2 0.778 0.402 0.483 0.473 0.476 0.302 0.518 0.262 0.205
LE3 0.873 0.705 0.626 0.615 0.681 0.482 0.633 0.265 0.166
LE4 0.849 0.536 0.682 0.652 0.629 0.405 0.67 0.306 0.144
BI1 0.621 0.945 0.592 0.565 0.586 0.433 0.601 0.157 0.187
BI2 0.607 0.947 0.571 0.556 0.573 0.421 0.567 0.17 0.178
BI3 0.636 0.951 0.598 0.577 0.594 0.477 0.609 0.221 0.184
II1 0.62 0.539 0.905 0.762 0.503 0.323 0.725 0.311 0.157
II2 0.699 0.564 0.861 0.618 0.535 0.343 0.639 0.259 0.147
II3 0.646 0.566 0.919 0.743 0.533 0.377 0.714 0.301 0.161
CD1 0.584 0.537 0.684 0.883 0.506 0.443 0.692 0.316 0.195
CD2 0.624 0.547 0.673 0.893 0.615 0.515 0.687 0.302 0.155
CD3 0.595 0.497 0.707 0.899 0.567 0.494 0.703 0.335 0.202
CD4 0.665 0.561 0.779 0.908 0.583 0.528 0.747 0.357 0.173
BD1 0.663 0.548 0.559 0.568 0.868 0.453 0.589 0.19 0.151
BD2 0.568 0.531 0.507 0.56 0.869 0.575 0.578 0.26 0.182
BD3 0.612 0.538 0.46 0.536 0.886 0.54 0.54 0.261 0.141
ID1 0.381 0.435 0.319 0.479 0.517 0.88 0.389 0.294 0.122
ID2 0.437 0.425 0.392 0.507 0.53 0.865 0.434 0.221 0.123
ID3 0.417 0.418 0.325 0.499 0.574 0.929 0.435 0.282 0.13
ID4 0.416 0.401 0.35 0.494 0.511 0.898 0.401 0.296 0.146
AICR1 0.644 0.557 0.646 0.655 0.619 0.426 0.863 0.236 0.168
AICR2 0.675 0.586 0.742 0.765 0.601 0.418 0.904 0.345 0.235
AICR3 0.662 0.554 0.706 0.714 0.539 0.402 0.907 0.382 0.202
AICR4 0.622 0.535 0.671 0.681 0.567 0.414 0.895 0.415 0.215
IBD1 0.243 0.149 0.266 0.296 0.219 0.26 0.302 0.846 0.282
IBD2 0.306 0.181 0.297 0.348 0.271 0.264 0.379 0.927 0.297
IBD3 0.306 0.19 0.312 0.339 0.237 0.299 0.356 0.919 0.301
DD1 0.222 0.212 0.189 0.219 0.177 0.15 0.233 0.34 0.925
DD2 0.143 0.15 0.134 0.14 0.127 0.134 0.18 0.26 0.896
DD3 0.186 0.164 0.148 0.187 0.187 0.117 0.216 0.292 0.941

Note. LE = Lack of empathy, BI = Biased information, II = Irrelevant information, CD = Competence distrust, BD = Benevolence distrust, ID =
Integrity distrust, AICR = AI-based chatbot resistance, IBD = Institutional-based distrust, DD = Disposition to distrust.

Fig. 2. PLS results.


Table 4
The effects of control variables.
Control variables Dependent variables β t

Disposition to distrust Competence distrust 0.033 0.951
Benevolence distrust 0.022 0.582
Integrity distrust -0.013 0.216
Institutional-based distrust Competence distrust 0.097 2.54***
Benevolence distrust -0.034 0.89
Integrity distrust 0.186 3.304***
Age AISC resistance 0.024 0.813
Gender AISC resistance -0.037 1.097
Education AISC resistance 0.058 1.78*
Familiarity with AI AISC resistance -0.046 1.262
Familiarity with AISC AISC resistance -0.086 2.329***

negative effect on AISC resistance (β = -0.086, t = 2.329).
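The moderating effects reported above were estimated with a product interaction term (Chin et al., 2003) in SmartPLS. The sketch below is only a simplified approximation of that idea, assuming standardized latent variable scores are available and using an ordinary least squares regression with a product term; the scores are simulated and the variable names are our own.

```python
import numpy as np
import statsmodels.api as sm

# Simulated standardized latent scores; a stand-in for the SmartPLS
# product-indicator approach, not a reproduction of the study's analysis.
rng = np.random.default_rng(0)
n = 301
le = rng.standard_normal(n)                    # lack of empathy
ii = 0.7 * le + 0.7 * rng.standard_normal(n)   # irrelevant information
cd = (0.6 * ii + 0.2 * le + 0.15 * ii * le     # competence distrust with a
      + 0.5 * rng.standard_normal(n))          # built-in interaction effect

X = sm.add_constant(np.column_stack([ii, le, ii * le]))
fit = sm.OLS(cd, X).fit()

names = ["const", "II", "LE", "II x LE"]
print(dict(zip(names, fit.params.round(3))))
print(dict(zip(names, fit.pvalues.round(4))))
# A significant positive "II x LE" coefficient mirrors H4: lack of empathy
# strengthens the effect of irrelevant information on competence distrust.
```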


We then tested the mediation roles of irrelevant information (H2) and biased information (H3) for the effect of lack of empathy on
competence and integrity distrust, respectively, through a partial least squares (PLS) analysis using Smart PLS 3.2.9. As expected, lack
of empathy enhanced irrelevant information (β = 0.731, t = 26.084) and biased information (β = 0.664, t = 22.787), which led to
higher competence distrust (β = 0.619, t = 12.852) and integrity distrust (β = 0.285, t = 4.246), respectively. After controlling for
mediation processes, lack of empathy had a direct effect on competence distrust (β = 0.237, t = 4.516) and integrity distrust (β = 0.278, t = 4.528). These results supported H2 and H3 regarding the partial mediation roles of irrelevant and biased information.
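The partial mediation pattern can also be illustrated with a simple bootstrap of the indirect effect a × b. The sketch below uses simulated standardized scores and ordinary least squares paths, so it illustrates the logic only and is not the SmartPLS estimation reported above.

```python
import numpy as np

# Simulated standardized scores for lack of empathy (LE), irrelevant
# information (II), and competence distrust (CD); not the study's data.
rng = np.random.default_rng(1)
n = 301
le = rng.standard_normal(n)
ii = 0.7 * le + 0.7 * rng.standard_normal(n)
cd = 0.6 * ii + 0.25 * le + 0.5 * rng.standard_normal(n)

def ols_slopes(y, X):
    """OLS slopes of y on the columns of X (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

indirect = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                                      # resample cases
    a = ols_slopes(ii[idx], le[idx])[0]                              # LE -> II
    b = ols_slopes(cd[idx], np.column_stack([ii[idx], le[idx]]))[0]  # II -> CD given LE
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect a*b: [{lo:.3f}, {hi:.3f}]")
# A confidence interval excluding zero, together with a significant direct
# LE -> CD path, is consistent with partial mediation (H2).
```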
Furthermore, all the variables jointly explain 68.1% of the variance in AISC resistance. The explained variances for irrelevant information, biased information, competence distrust, and integrity distrust were 53.4%, 43.1%, 60.0%, and 29.2%, respectively.

6. Discussion and conclusion

6.1. Discussion

This study has investigated whether lack of empathy directly or indirectly affects distrusting beliefs, which in turn affects resistance
behavior toward AISC. There are several key findings.
First, regarding the effect of lack of empathy on distrusting beliefs, our study has proposed and empirically verified that 1) lack of
empathy has a significant direct positive effect on benevolence distrust (H1) and that 2) lack of empathy has a significant positive effect
on competence distrust (H2) and integrity distrust (H3). The two effects are partially mediated by irrelevant information and biased
information, respectively. The study results suggest that the EI of AI is likely to give rise to the other process stages (i.e., CI, irrelevant
information, and biased information in this study), verifying the rationality of the mechanism (Huang & Rust, 2021b).
Second, regarding the moderating effect of lack of empathy, our study finds that the more customers believe that AISC lacks
empathy, the more likely the irrelevant information provided by AISC is to lead to competence distrust (H4), indicating that when lack of empathy is high, customers are less willing to trust the capability of AISC. However, a similar effect was not found for biased information (H5). We consider integrity a morally relevant belief that is a key element of trusting beliefs (Mayer et al., 1995). Once a user
begins to focus on the integrity of a trustee, the user is likely to develop a personality attribution involving low integrity belief, which is
usually stable and difficult to change (Kim et al., 2006; Tomiuk & Pinsonneault, 2009). Other trusting beliefs (e.g., competence) are
less stable than integrity and can be revised with effort (Wang & Wang, 2019). Therefore, the stable attribution may lead to this
insignificant moderating effect.
Third, regarding the effect of distrusting beliefs on customer resistance, the study finds that competence distrust (H6a) and
benevolence distrust (H6b) significantly impact AI-based chatbot resistance. However, we found one unexpected result: the effect of
integrity distrust (H6c) on AI customer service resistance is insignificant. Specifically, our PLS results indicate that customer distrust of
integrity in AISC does not significantly affect the customer’s resistance to AISC (β = -0.077). This result is somewhat similar to those of a prior
study (Chau et al., 2013). The values of the path coefficients in this study were also contrary to what was hypothesized. An explanation
could be that the respondents in this study had rich experience in using AISC (Table 1) and may have had prior knowledge that
merchants will behave dishonestly for marketing purposes. However, these factors did not affect their resistance to AI because they
may have been more concerned about AISC’s competence and benevolence. Another possibility is that the respondents’ perception of
integrity distrust in AISC was difficult to change with the interaction scenario because this perception was relatively stable (Wang &
Wang, 2019).

6.2. Implications for research

The theoretical implications of this study are threefold. First, the study contributes to research on consumer innovation resistance
by examining the challenges facing AI-based services from the perspective of customer resistance. This study responds to a call for
research expressed in the consumer innovation resistance literature (Huang et al., 2021). The cited literature review calls for
addressing several research questions (e.g., Why do consumers resist AI-related innovations?) in emerging key research contexts.


Specifically, previous research has mainly examined how positive factors influence user attitudes and behaviors (e.g., de Kervenoael
et al., 2020; Fernandes & Oliveira, 2021; Moussawi et al., 2022). This study investigated the antecedents of consumer resistance to
AISC and how AISC drawbacks (especially lack of empathy) affect customer attitudes and behaviors. Furthermore, most of the
literature on AI chatbots focuses on the antecedents of variables such as intention to adopt, attitude, customer satisfaction, and loyalty
(Castillo et al., 2021; Cha, 2020; Gursoy et al., 2019; Lin et al., 2020; Pillai & Sivathanu, 2020; Prentice & Nguyen, 2020). This study
provides a new research perspective on customer resistance and encourages the academic community to investigate the negative
effects of AI on users.
Second, this study extends the cognition-centered framework in the AI literature by addressing the emotional dimension (Chau
et al., 2013; Komiak & Benbasat, 2006; Wang et al., 2018; Wang & Wang, 2019). With the rapid development of supporting technologies, the AI industry has emphasized the importance of EI in chatbot design (Huang & Rust, 2021b; Yalcin & DiPaola, 2018; Yalcin & DiPaola, 2019), but few studies have empirically examined the role of EI in human-chatbot interaction. Therefore, by comprehensively recognizing EI and CI as key constructs, this study extends research (Chau et al., 2013) centered on CI by considering EI.
Third, this study deconstructs the underlying mechanism of empathy in the context of AISC resistance. Specifically, the study
proposes and tests three different mechanisms by which EI and CI may play roles: 1) the direct effect of lack of empathy on benevolence
distrust, 2) the indirect effect of lack of empathy on competence and integrity distrust, and 3) the moderating effect of lack of empathy.
Several studies have examined the effect of empathy (de Kervenoael et al., 2020; Liu & Sundar, 2018; Luo et al., 2019; Lv et al., 2022;
Pelau et al., 2021), but few have investigated its underlying mechanism. This study has investigated 1) whether EI and CI play different
roles in influencing customer resistance, 2) whether there is a link between EI and CI, and 3) whether EI acts synergistically with CI,
thus increasing our understanding of the role of empathy in human-chatbot interaction.

6.3. Implications for practice

This study has several important implications for practice. First, our results indicate the significance of reducing the possibility of
AISC producing irrelevant information because such information strengthens competence distrust, which in turn influences customer
resistance to AISC. Merchants could collect more text from conversations between customers and AISC and use it to train AISC with the
aim of ensuring that AISC can accurately recognize customer input, match it to information in the database, and provide relevant
information. Second, our results indicate that empathy may be more important in reducing customer resistance to AISC. Thus, before
investing in AISC, merchants should consider whether AISC can sufficiently express empathy in communication with customers, in
addition to considering AISC competence. Merchants should develop more advanced affective computing engines to improve the EI of
AISC. Furthermore, they could employ emotion recognition programs to detect customer emotions and provide human services as a
backup option when customers experience negative emotions. Third, our results indicate that biased information strengthens integrity
distrust. In practice, AISC may provide biased information to customers for profit, and merchants should consider minimizing customers’ integrity distrust of AISC. While reducing the amount of biased information merchants provide may reduce sponsorship
revenue, it may increase user trust in the long term. Therefore, AI practitioners should focus on developing AISC that combines CI and
EI. For instance, practitioners can collect text information that contains the widely varying requirements of users. The iteration of the
knowledge base (CI) and the improvement of the affective computing engine (EI) based on these data could help increase the CI and EI
of AISC.

6.4. Limitations and future research

This research has three limitations. First, according to our survey results, most respondents had substantial experience using AISC.
Therefore, their responses regarding the drawbacks of AISC and resistance behavior may differ from those of individuals unfamiliar
with AISC. Thus, whether the findings can be applied to other populations, such as customers with low prior knowledge of AI and AISC,
must be examined in future research. Second, the cultural characteristics of our Chinese respondents (e.g., a stronger emphasis on harmony, interpersonal communication, and empathy) may have influenced the results, as these respondents were probably more concerned with social relationships. Future research could therefore examine whether the results hold for users in other countries, and including personality-trait moderators may help validate the proposed model. Third, we asked respondents to recall an interaction with AISC, and most of the scenarios they described involved shopping on e-commerce platforms. However, AISC is also widely used in information communication, online education, finance, accommodation, and catering. Thus, to determine whether the findings apply to these fields, future researchers could filter the platforms and scenarios in which respondents use AISC.

CRediT authorship contribution statement

Bo Yang: Conceptualization, Methodology, Investigation, Writing – review & editing. Yongqiang Sun: Conceptualization, Supervision, Methodology, Writing – review & editing. Xiao-Liang Shen: Conceptualization, Investigation, Funding acquisition, Writing
– review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgements

The work described in this paper was partially supported by grants from the National Natural Science Foundation of China
(Project No. 71974148, 71904149, 71921002, 72274144) and the Humanities and Social Sciences Foundation of the Ministry of
Education, China (Project No. 22YJA870013).

Appendix A. An example of the combination of EI and CI in the actual use of AISC (E-commerce platform)

The figure depicts how an AI chatbot's CI and EI capabilities work together to accomplish service tasks on an e-commerce platform. Specifically, after a user asked why he received a damaged item, the AI chatbot, based on its CI, matched the user's input with information from a prebuilt knowledge base and then output an answer with a valid link. Based on its EI, the AI chatbot can also analyze the user's input using an affective computing engine; when it surmises that the user may be angry, the chatbot apologizes in a gentle tone.
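As a purely illustrative companion to this description, the sketch below shows one way the two capabilities could be combined in code: a small knowledge-base lookup stands in for CI and a simple anger check stands in for EI, with the tone of the reply adjusted accordingly. The knowledge base, the anger check, and all names are hypothetical assumptions, not the implementation used by the platform in the figure.

```python
# Illustrative sketch (assumed, not the authors' implementation):
# combine a CI step (knowledge-base matching) with an EI step (tone adjustment).

KNOWLEDGE_BASE = {
    "damaged item": "You can request a replacement here: https://example.com/returns",
    "delivery time": "Standard delivery usually takes 3-5 business days.",
}

def ci_lookup(user_input: str) -> str:
    """CI: match the input against a prebuilt knowledge base."""
    text = user_input.lower()
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in text:
            return answer
    return "Could you tell me a little more about your issue?"

def ei_detects_anger(user_input: str) -> bool:
    """EI: a crude stand-in for an affective computing engine."""
    return any(word in user_input.lower() for word in ("angry", "furious", "unacceptable"))

def reply(user_input: str) -> str:
    """Compose the answer: EI sets the tone, CI supplies the content."""
    answer = ci_lookup(user_input)
    if ei_detects_anger(user_input):
        return "I am sorry that this happened to you. " + answer
    return answer

if __name__ == "__main__":
    print(reply("I am really angry, why did I receive a damaged item?"))
```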


Appendix B. Examples of text conversations

Scenario 1: Lack of empathy
Chatbot: Hello, I am a customer service chatbot. How can I help you today?
Customer: I received the dress I bought last week, but why is the dress package broken? This has happened to me twice now, and I'm really angry.
Chatbot: Please click on the return request button in your order for after-sales service.
(The above response is considered to lack empathy. In contrast, a highly empathetic human employee might reply: "We are genuinely sorry about your experience. We will replace your clothes with new ones for free and compensate you with a coupon.")

Scenario 2: Irrelevant information
Chatbot: Hello, I am a customer service chatbot. How can I help you today?
Customer: Can you tell me which keys I need to press to switch between iOS and Android devices?
(If the chatbot's answer is not a correct answer to the question but an inaccurate one, such as "Hello, this Bluetooth keyboard is also available for iOS devices" or "Before using the keyboard for the first time, you must press the Fn key three times in a row to start the device", the customer will think that the chatbot is unable to provide relevant information. The response can then be considered an example of irrelevant information.)

Scenario 3: Biased information
Chatbot: Hello, I am a customer service chatbot. How can I help you today?
Customer: Can you recommend a cost-effective camera for me? I've been studying photography recently.
Chatbot: I can recommend the following products for you.
(If the customer sees a camera in the list of recommendations that he or she believes is overpriced, the customer will likely think that the chatbot is providing biased information.)

Appendix C. Constructs and definitions

Lack of empathy: AISC fails to sense and react appropriately to a customer's thoughts, feelings, and experiences during a service encounter (Wieseke et al., 2012).
Irrelevant information: AISC provides customers with incorrect or inaccurate information that does not match their preferences (Chau et al., 2013).
Biased information: AISC provides information based on the interests of merchants (Chau et al., 2013).
Benevolence distrust: AISC does not care about or show concern for the genuine welfare of customers (McKnight et al., 2002).
Competence distrust: AISC is unable to provide effective service for customers (McKnight et al., 2002).
Integrity distrust: AISC does not provide service sincerely and honestly (McKnight et al., 2002).
AI customer service resistance: Customer behavior expressing negative cognition, affection, and opposition to AISC (Yoo et al., 2021).

Appendix D. Constructs and items

Irrelevant information (Chau et al., 2013)
II1: I feel that AISC can't provide accurate advice that I like.
II2: I feel that AISC seldom considers my preferences when providing services.
II3: I feel that the services provided by AISC don't meet my preferences.

Biased information (Chau et al., 2013)
BR1: I feel that when AISC provides service, it is biased in favor of merchant interests rather than toward my preferences.
BR2: I feel that AISC inclines more to favoring the merchant's interest than my interest.
BR3: I feel that AISC takes more care to promote merchant interests than to address my preferences.

Lack of empathy (Iglesias et al., 2019)
LE1: AISC provides me less individual attention.
LE2: AISC seldom deals with me in a caring fashion.
LE3: AISC doesn't always have my best interest at heart.
LE4: AISC can't fully understand my needs.

Competence distrust (McKnight et al., 2002)
CD1: I am skeptical whether AISC is competent and effective in providing service.
CD2: I am worried whether AISC will perform well in its role of providing services.
CD3: Overall, it is uncertain whether AISC is capable and proficient in providing service.
CD4: Generally, I feel nervous about how knowledgeable AISC is regarding the service.

Integrity distrust (McKnight et al., 2002)
ID1: I am worried whether AISC will be truthful in its dealings with me.
ID2: It is uncertain whether AISC will honor its commitments.
ID3: I suspect that AISC is not honest.
ID4: I am not sure that AISC is sincere and genuine.

Benevolence distrust (McKnight et al., 2002)
BD1: I am not sure AISC will act in my best interest.
BD2: I am skeptical whether AISC will do its best to help me if I require help.
BD3: I suspect that AISC is not interested in my well-being.

Institutional-based distrust (Chau et al., 2013)
IBD1: Generally, I feel bad about how things go when I ask AISC for advice during service.
IBD2: Generally, I am uncomfortable when I ask AISC for advice during service.
IBD3: Generally, I don't feel at ease when I ask AISC for advice during service.

Disposition to distrust (Chau et al., 2013)
DD1: I usually distrust people until they give me a reason to trust them.
DD2: I am generally wary of people's motives when I first meet them.
DD3: My typical approach is to distrust new acquaintances unless they prove that I should trust them.

AI-based chatbot resistance (Yoo et al., 2021)
AICR1: I have critical thoughts about using AISC.
AICR2: I refuse to use AISC.
AICR3: I oppose using AISC.
AICR4: I am discontented with using AISC.

References

Ahmad, W., & Sun, J. (2018). Modeling consumer distrust of online hotel reviews. International Journal of Hospitality Management, 71, 77–90. https://doi.org/10.1016/
j.ijhm.2017.12.005
Ali, M., Zhou, L., Miller, L., & Ieromonachou, P. (2016). User resistance in IT: A literature review. International Journal of Information Management, 36(1), 35–43.
https://doi.org/10.1016/j.ijinfomgt.2015.09.007
Aljukhadar, M., Trifts, V., & Senecal, S. (2017). Consumer self-construal and trust as determinants of the reactance to a recommender advice. Psychology & Marketing,
34(7), 708–719. https://doi.org/10.1002/mar.21017
Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), Article 101490. https://doi.org/
10.1016/j.giq.2020.101490
Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 4. https://doi.org/
10.17705/1jais.00065
Bhattacherjee, A., & Hikmet, N. (2007). Physicians’ resistance toward healthcare information technology: A theoretical model and empirical test. European Journal of
Information Systems, 16(6), 725–737. https://doi.org/10.1057/palgrave.ejis.3000717
Bock, D. E., Mangus, S. M., & Folse, J. A. G. (2016). The road to customer loyalty paved with service customization. Journal of Business Research, 69(10), 3923–3932.
https://doi.org/10.1016/j.jbusres.2016.06.002
Brave, S., Nass, C., & Hutchinson, K. (2005). Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent.
International Journal of Human-Computer Studies, 62(2), 161–178. https://doi.org/10.1016/j.ijhcs.2004.11.002
Carmody, P., & Gordon, K. (2011). Offender variables: Unique predictors of benevolence, avoidance, and revenge? Personality and Individual Differences, 50(7),
1012–1017. https://doi.org/10.1016/j.paid.2010.12.037
Castillo, D., Canhoto, A. I., & Said, E. (2021). The dark side of AI-powered service interactions: Exploring the process of co-destruction from the customer perspective.
The Service Industries Journal, 41(13–14), 900–925. https://doi.org/10.1080/02642069.2020.1787993
Cha, S. S. (2020). Customers’ intention to use robot-serviced restaurants in Korea: Relationship of coolness and MCI factors. International Journal of Contemporary
Hospitality Management, 32(9), 2947–2968. https://doi.org/10.1108/ijchm-01-2020-0046
Charki, M. H., & Josserand, E. (2008). Online reverse auctions and the dynamics of trust. Journal of Management Information Systems, 24(4), 175–197. https://doi.org/
10.2753/MIS0742-1222240407
Chau, P. Y., Ho, S. Y., Ho, K. K., & Yao, Y. (2013). Examining the effects of malfunctioning personalized services on online users’ distrust and behaviors. Decision
Support Systems, 56, 180–191. https://doi.org/10.1016/j.dss.2013.05.023
Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. International Journal of
Human–Computer Interaction, 37(8), 729–758. https://doi.org/10.1080/10447318.2020.1841438
Cheng, X., Bao, Y., Zarifis, A., Gong, W., & Mou, J. (2022). Exploring consumers’ response to text-based chatbots in e-commerce: The moderating role of task
complexity and chatbot disclosure. Internet Research, 32(2), 496–517. https://doi.org/10.1108/INTR-08-2020-0460
Chi, O. H., Denton, G., & Gursoy, D. (2020). Artificially intelligent device use in service delivery: A systematic review, synthesis, and research agenda. Journal of
Hospitality Marketing & Management, 29(7), 757–786. https://doi.org/10.1080/19368623.2020.1721394
Chi, O. H., Jia, S., Li, Y., & Gursoy, D. (2021). Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social
robots in service delivery. Computers in Human Behavior, 118, Article 106700. https://doi.org/10.1016/j.chb.2021.106700
Chin, H., & Yi, M. Y. (2021). Voices that care differently: Understanding the effectiveness of a conversational agent with an alternative empathy orientation and
emotional expressivity in mitigating verbal abuse. International Journal of Human–Computer Interaction, 1–15. https://doi.org/10.1080/10447318.2021.1987680
Chin, W. W., Marcolin, B. L., & Newsted, P. R. (2003). A partial least squares latent variable modeling approach for measuring interaction effects: Results from a monte
carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research, 14(2), 189–217. https://doi.org/10.1287/
isre.14.2.189.16018
Cho, J. (2006). The mechanism of trust and distrust formation and their relational outcomes. Journal of Retailing, 82(1), 25–35. https://doi.org/10.1016/j.
jretai.2005.11.002
Collier, J. E., Barnes, D. C., Abney, A. K., & Pelletier, M. J. (2018). Idiosyncratic service experiences: When customers desire the extraordinary in a service encounter.
Journal of Business Research, 84, 150–161. https://doi.org/10.1016/j.jbusres.2017.11.016
Côté, S., & Miners, C. T. H. (2006). Emotional intelligence, cognitive intelligence, and job performance. Administrative Science Quarterly, 51(1), 1–28. https://doi.org/
10.2189/asqu.51.1.1
Cramer, H., Evers, V., van Slooten, T., Ghijsen, M., & Wielinga, B. (2010). Trying too hard: Effects of mobile agents’(Inappropriate) social expressiveness on trust,
affect and compliance. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1471–1474). https://doi.org/10.1145/1753326.1753546
Cramer, H., Goddijn, J., Wielinga, B., & Evers, V. (2010). Effects of (in) accurate empathy and situational valence on attitudes towards robots. In 2010 5th ACM/IEEE
international conference on human-robot interaction (HRI) (pp. 141–142). IEEE.
Crosby, L. A., Evans, K. R., & Cowles, D. (1990). Relationship quality in services selling: An interpersonal influence perspective. Journal of Marketing, 54(3), 68–81.
https://doi.org/10.1177/002224299005400306
de Kervenoael, R., Hasan, R., Schwob, A., & Goh, E. (2020). Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value,
empathy, and information sharing into visitors’ intentions to use social robots. Tourism Management, 78, Article 104042. https://doi.org/10.1016/j.
tourman.2019.104042
Di Fabio, A., & Saklofske, D. H. (2014). Comparing ability and self-report trait emotional intelligence, fluid intelligence, and personality traits in career decision.
Personality and Individual Differences, 64, 174–178. https://doi.org/10.1016/j.paid.2014.02.024
Diederich, S., Brendel, A. B., Morana, S., & Kolbe, L. (2022). On the design of and interaction with conversational agents: An organizing and assessing review of
human-computer interaction research. Journal of the Association for Information Systems, 23(1), 96–138. https://doi.org/10.17705/1jais.00724
Dimoka, A. (2010). What does the brain tell us about trust and distrust? Evidence from a functional neuroimaging study. MIS Quarterly, 34(2), 373–396. https://doi.
org/10.2307/20721433
Dollmat, K. S., & Abdullah, N. A. (2021). Machine learning in emotional intelligence studies: A survey. Behaviour & Information Technology, 1–18. https://doi.org/
10.1080/0144929x.2021.1877356
Fernandes, T., & Oliveira, E. (2021). Understanding consumers’ acceptance of automated technologies in service encounters: Drivers of digital voice assistants
adoption. Journal of Business Research, 122, 180–191. https://doi.org/10.1016/j.jbusres.2020.08.058
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.


Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1),
39–50.
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.
org/10.5465/annals.2018.0057
Gremler, D. D., & Gwinner, K. P. (2008). Rapport-building behaviors used by retail employees. Journal of Retailing, 84(3), 308–324. https://doi.org/10.1016/j.
jretai.2008.07.001
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of
Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–152. https://doi.org/10.2753/
mtp1069-6679190202
Hirschheim, R., & Newman, M. (1988). Information systems and user resistance: Theory and practice. The Computer Journal, 31(5), 398–408. https://doi.org/
10.1093/comjnl/31.5.398
Ho, S. Y., & Chau, P. Y. K. (2013). The effects of location personalization on integrity trust and integrity distrust in mobile merchants. International Journal of Electronic
Commerce, 17(4), 39–71. https://doi.org/10.2753/jec1086-4415170402
Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior,
116, Article 106635. https://doi.org/10.1016/j.chb.2020.106635
Hsieh, P.-J. (2016). An empirical investigation of patients’ acceptance and resistance toward the health cloud: The dual factor perspective. Computers in Human
Behavior, 63, 959–969. https://doi.org/10.1016/j.chb.2016.06.029
Hu, P., Lu, Y., & Gong, Y.(Yale) (2021). Dual humanness and trust in conversational AI: A person-centered approach. Computers in Human Behavior, 119, Article
106727. https://doi.org/10.1016/j.chb.2021.106727
Huang, D., Jin, X., & Coghlan, A. (2021). Advances in consumer innovation resistance research: A review and research agenda. Technological Forecasting and Social
Change, 166, Article 120594. https://doi.org/10.1016/j.techfore.2021.120594
Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
Huang, M.-H., & Rust, R. T. (2021a). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50. https://
doi.org/10.1007/s11747-020-00749-9
Huang, M.-H., & Rust, R. T. (2021b). Engaged to a robot? The role of AI in service. Journal of Service Research, 24(1), 30–41. https://doi.org/10.1177/
1094670520902266
Iglesias, O., Markovic, S., & Rialp, J. (2019). How does sensory brand experience influence brand equity? Considering the roles of customer satisfaction, customer
affective commitment, and employee empathy. Journal of Business Research, 96, 343–354. https://doi.org/10.1016/j.jbusres.2018.05.043
Johnson, D. G., & Verdicchio, M. (2017). AI anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267–2270. https://doi.org/10.1002/
asi.23867
Kang, Y., & Kim, S. (2009). Understanding user resistance to participation in multihop communications. Journal of Computer-Mediated Communication, 14(2), 328–351.
https://doi.org/10.1111/j.1083-6101.2009.01443.x
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence.
Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
Kidwell, B., Hardesty, D. M., Murtha, B. R., & Sheng, S. (2011). Emotional intelligence in marketing exchanges. Journal of Marketing, 75(1), 78–95. https://doi.org/
10.1509/jm.75.1.78
Kim, H.-W., & Kankanhalli, A. (2009). Investigating user resistance to information systems implementation: A status quo bias perspective. MIS Quarterly, 33(3), 567. https://
doi.org/10.2307/20650309
Kim, P. H., Dirks, K. T., Cooper, C. D., & Ferrin, D. L. (2006). When more blame is better than less: The implications of internal vs. External attributions for the repair
of trust after a competence-vs. Integrity-based trust violation. Organizational Behavior and Human Decision Processes, 99(1), 49–65. https://doi.org/10.1016/j.
obhdp.2005.07.002
Klaus, T., & Blanton, J. E. (2010). User resistance determinants and the psychological contract in enterprise system implementations. European Journal of Information
Systems, 19(6), 625–636. https://doi.org/10.1057/ejis.2010.39
Komiak, S. Y., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, 30(4), 941–960.
https://doi.org/10.2307/25148760
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information
Systems, 16(10), 880–918. https://doi.org/10.17705/1jais.00411
Lapointe, L., & Rivard, S. (2005). A multilevel model of resistance to information technology implementation. MIS Quarterly, 29(3), 461–491. https://doi.org/10.2307/
25148692
Laukkanen, T. (2016). Consumer adoption versus rejection decisions in seemingly similar service innovations: The case of the Internet and mobile banking. Journal of
Business Research, 69(7), 2432–2439. https://doi.org/10.1016/j.jbusres.2016.01.013
Law, T., Chita-Tegmark, M., & Scheutz, M. (2021). The interplay between emotional intelligence, trust, and gender in human–robot interaction: A vignette-based
study. International Journal of Social Robotics, 13(2), 297–309. https://doi.org/10.1007/s12369-020-00624-1
Lee, Y. E., & Benbasat, I. (2011). The influence of trade-off difficulty caused by preference elicitation methods on user acceptance of recommendation agents across
loss and gain conditions. Information Systems Research, 22(4), 867–884. https://doi.org/10.1287/isre.1100.0334
Leite, I., Castellano, G., Pereira, A., Martinho, C., & Paiva, A. (2014). Empathic robots for long-term interaction: Evaluating social presence, engagement and perceived
support in children. International Journal of Social Robotics, 6(3), 329–341. https://doi.org/10.1007/s12369-014-0227-1
Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., & Paiva, A. (2013). The influence of empathy in human–robot relations. International Journal of Human-
Computer Studies, 71(3), 250–260. https://doi.org/10.1016/j.ijhcs.2012.09.005
Liang, H., Saraf, N., Hu, Q., & Xue, Y. (2007). Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS
Quarterly, 31(1), 59–87. https://doi.org/10.2307/25148781
Lin, H., Chi, O. H., & Gursoy, D. (2020). Antecedents of customers’ acceptance of artificially intelligent robotic device use in hospitality services. Journal of Hospitality
Marketing & Management, 29(5), 530–549. https://doi.org/10.1080/19368623.2020.1685053
Lin, W., Yueh, H.-P., Wu, H.-Y., & Fu, L.-C. (2014). Developing a service robot for a children’s library: A design-based research approach. Journal of the Association for
Information Science and Technology, 65(2), 290–301. https://doi.org/10.1002/asi.22975
Liu, B., & Sundar, S. S. (2018). Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social
Networking, 21(10), 625–636. https://doi.org/10.1089/cyber.2018.0110
Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience.
Journal of the Academy of Marketing Science. https://doi.org/10.1007/s11747-022-00892-5
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/
10.1093/jcr/ucz013
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. Humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing
Science, 38(6), 937–947. https://doi.org/10.1287/mksc.2019.1192
Lv, X., Yang, Y., Qin, D., Cao, X., & Xu, H. (2022). Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage
intention. Computers in Human Behavior, 126, Article 106993. https://doi.org/10.1016/j.chb.2021.106993
Mangus, S. M., Bock, D. E., Jones, E., & Folse, J. A. G. (2020). Examining the effects of mutual information sharing and relationship empathy: A social penetration
theory perspective. Journal of Business Research, 109, 375–384. https://doi.org/10.1016/j.jbusres.2019.12.019


Mani, Z., & Chouk, I. (2018). Consumer resistance to innovation in services: Challenges and barriers in the internet of things era. Journal of Product Innovation
Management, 35(5), 780–807. https://doi.org/10.1111/jpim.12463
Markovic, S., Iglesias, O., Singh, J. J., & Sierra, V. (2018). How does the perceived ethicality of corporate services brands influence loyalty and positive word-of-
mouth? Analyzing the roles of empathy, affective commitment, and perceived quality. Journal of Business Ethics, 148(4), 721–740. https://doi.org/10.1007/
s10551-015-2985-6
Marzouk, O., Salminen, J., Zhang, P., & Jansen, B. J. (2022). Which message? Which channel? Which customer? Exploring response rates in multi-channel marketing
using short-form advertising. Data and Information Management, 6(1), Article 100008. https://doi.org/10.1016/j.dim.2022.100008
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709. https://doi.org/
10.2307/258792
McKnight, D. H., & Chervany, N. (2001a). While trust is cool and collected, distrust is fiery and frenzied: A model of distrust concepts. In AMCIS 2001 proceedings (p.
171).
McKnight, D. H., & Chervany, N. L. (2001b). Trust and distrust definitions: One bite at a time. Trust in cyber-societies (pp. 27–54). Berlin, Heidelberg: Springer.
McKnight, D. H., & Choudhury, V. (2006). Distrust and trust in B2C e-commerce: Do they differ?. In Proceedings of the 8th international conference on electronic
commerce: The new e-commerce: innovations for conquering current barriers, obstacles and limitations to conducting successful business on the internet (pp. 482–491).
ACM.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems
Research, 13(3), 334–359. https://doi.org/10.1287/isre.13.3.334.81
McKnight, D. H., Lankton, N. K., Nicolaou, A., & Price, J. (2017). Distinguishing the effects of B2B information quality, system quality, and service outcome quality on
trust and distrust. The Journal of Strategic Information Systems, 26(2), 118–141. https://doi.org/10.1016/j.jsis.2017.01.001
Mindbrowser (2020). Chatbot Survey 2020. https://www.mindbowser.com/chatbot-market-survey-2022/.
Möller, K., & Halinen, A. (2000). Relationship marketing theory: Its roots and direction. Journal of Marketing Management, 16(1–3), 29–54. https://doi.org/10.1362/
026725700785100460
Moody, G. D., Galletta, D. F., & Lowry, P. B. (2014). When trust and distrust collide online: The engenderment and role of consumer ambivalence in online consumer
behavior. Electronic Commerce Research and Applications, 13(4), 266–282. https://doi.org/10.1016/j.elerap.2014.05.001
Moody, G. D., Lowry, P. B., & Galletta, D. F. (2017). It’s complicated: Explaining the relationship between trust, distrust, and ambivalence in online transaction
relationships using polynomial regression analysis and response surface analysis. European Journal of Information Systems, 26(4), 379–413. https://doi.org/
10.1057/s41303-016-0027-9
Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. Journal of Marketing, 58(3), 20–38. https://doi.org/10.1177/
002224299405800302
Moussawi, S., & Benbunan-Fich, R. (2021). The effect of voice and humour on users’ perceptions of personal intelligent agents. Behaviour & Information Technology, 40
(15), 1603–1626. https://doi.org/10.1080/0144929x.2020.1772368
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents.
Electronic Markets, 31(2), 343–364. https://doi.org/10.1007/s12525-020-00411-w
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2022). The role of user perceptions of intelligence, anthropomorphism, and self-extension on continuance of use of
personal intelligent agents. European Journal of Information Systems, 1–22. https://doi.org/10.1080/0960085x.2021.2018365
Myyry, L., & Helkama, K. (2001). University students’ value priorities and emotional empathy. Educational Psychology, 21(1), 25–40. https://doi.org/10.1080/
01443410123128
Nasirian, F., Ahmadian, M., & Lee, O.-K. (2017). AI-based voice assistant systems: Evaluating from the interaction and trust perspectives. In Proceedings of the twenty-
third Americas conference on information systems (AMCIS).
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 72–78).
Ndubisi, N. O., & Nataraajan, R. (2018). Customer satisfaction, Confucian dynamism, and long-term oriented marketing relationship: A threefold empirical analysis.
Psychology & Marketing, 35(6), 477–487. https://doi.org/10.1002/mar.21100
Nel, J., & Boshoff, C. (2021). Traditional-bank customers’ digital-only bank resistance: Evidence from South Africa. International Journal of Bank Marketing, 39(3),
429–454. https://doi.org/10.1108/ijbm-07-2020-0380
Østerlund, C., Jarrahi, M. H., Willis, M., Boyd, K., & Wolf, C. (2021). Artificial intelligence and the world of work, a co-constitutive relationship. Journal of the
Association for Information Science and Technology, 72(1), 128–135. https://doi.org/10.1002/asi.24388
Ou, C. X., & Sia, C. L. (2010). Consumer trust and distrust: An issue of website design. International Journal of Human-Computer Studies, 68(12), 913–934. https://doi.
org/10.1016/j.ijhcs.2010.08.003
Paiva, A., Leite, I., Boukricha, H., & Wachsmuth, I. (2017). Empathy in virtual agents and robots: A survey. ACM Transactions on Interactive Intelligent Systems, 7(3),
1–40. https://doi.org/10.1145/2912150
Paleari, F. G., Regalia, C., & Fincham, F. (2005). Marital quality, forgiveness, empathy, and rumination: A longitudinal analysis. Personality and Social Psychology
Bulletin, 31(3), 368–378. https://doi.org/10.1177/0146167204271597
Palmatier, R. W., Dant, R. P., Grewal, D., & Evans, K. R. (2006). Factors influencing the effectiveness of relationship marketing: A meta-analysis. Journal of Marketing,
70(4), 136–153. https://doi.org/10.1509/jmkg.70.4.136
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing,
64(1), 12–40.
Parasuraman, Arun, Zeithaml, V. A., & Berry, L. L. (1994). Alternative scales for measuring service quality: A comparative assessment based on psychometric and
diagnostic criteria. Journal of Retailing, 70(3), 201–230. https://doi.org/10.1016/0022-4359(94)90033-7
Pelau, C., Dabija, D.-C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological
anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, Article 106855. https://doi.
org/10.1016/j.chb.2021.106855
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International Journal of Contemporary Hospitality Management, 32(10),
3199–3226. https://doi.org/10.1108/ijchm-04-2020-0259
Pitt, L. F., Watson, R. T., & Kavan, C. B. (1995). Service quality: A measure of information systems effectiveness. MIS Quarterly, 19(2), 173. https://doi.org/10.2307/
249687
Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial intelligence and the new forms of interaction: Who has the control when interacting with a chatbot? Journal of
Business Research, 129, 878–890. https://doi.org/10.1016/j.jbusres.2020.11.006
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and
recommended remedies. Journal of Applied Psychology, 88(5), 879. https://doi.org/10.1037/0021-9010.88.5.879
Prentice, C., Dominique Lopes, S., & Wang, X (2020). Emotional intelligence or artificial intelligence—An employee perspective. Journal of Hospitality Marketing &
Management, 29(4), 377–403. https://doi.org/10.1080/19368623.2019.1647124
Prentice, C., & Nguyen, M. (2020). Engaging and retaining customers with AI and employee service. Journal of Retailing and Consumer Services, 56, Article 102186.
https://doi.org/10.1016/j.jretconser.2020.102186
Saffarizadeh, K., Boodraj, M., & Alashoor, T. (2017). Conversational assistants: Investigating privacy concerns, trust, and self-disclosure. In Proceedings of the
international conference on information systems (ICIS).
Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of
Management Information Systems, 37(3), 875–900. https://doi.org/10.1080/07421222.2020.1790204
Sharma, I., Jain, K., & Behl, A. (2020). Effect of service transgressions on distant third-party customers: The role of moral identity and moral judgment. Journal of
Business Research, 121, 696–712. https://doi.org/10.1016/j.jbusres.2020.02.005


Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. https://doi.org/
10.1016/j.jbusres.2020.04.030
Shen, B., & Kuang, Y. (2022). Assessing the relationship between technostress and knowledge hiding—A moderated mediation model. Data and Information
Management, 6(1), Article 100002. https://doi.org/10.1016/j.dim.2022.100002
Song, X., Xu, B., & Zhao, Z. (2022). Can people experience romantic love for artificial intelligence? An empirical study of intelligent assistants. Information &
Management, 59(2), Article 103595. https://doi.org/10.1016/j.im.2022.103595
Sundermann, L. M. (2018). Share experiences: Receiving word of mouth and its effect on relationships with donors. Journal of Services Marketing, 32(3), 322–333.
https://doi.org/10.1108/JSM-08-2016-0319
Tomiuk, D., & Pinsonneault, A. (2009). Applying relationship theories to web site design: Development and validation of a site-communality scale. Information Systems
Journal, 19(4), 413–435. https://doi.org/10.1111/j.1365-2575.2008.00293.x
Tversky, A., & Kahneman, D. (1989). Rational choice and the framing of decisions. Multiple criteria decision making and risk analysis using microcomputers (pp. 81–126).
Springer.
van Pinxteren, M. M. E., Pluymaekers, M., & Lemmink, J. G. (2020). Human-like communication in conversational agents: A literature review and research agenda.
Journal of Service Management, 31(2), 203–225. https://doi.org/10.1108/josm-06-2019-0175
van Pinxteren, M. M. E., Wetzels, R. W. H., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019). Trust in humanoid robots: Implications for services marketing. Journal of
Services Marketing, 33(4), 507–518. https://doi.org/10.1108/jsm-01-2018-0045
Vance, A., Elie-Dit-Cosaque, C., & Straub, D. W. (2008). Examining trust in information technology artifacts: The effects of system quality and culture. Journal of
Management Information Systems, 24(4), 73–100. https://doi.org/10.2753/MIS0742-1222240403
Wang, W., & Benbasat, I. (2016). Empirical assessment of alternative designs for enhancing different types of trusting beliefs in online recommendation agents. Journal
of Management Information Systems, 33. https://doi.org/10.1080/07421222.2016.1243949
Wang, W., & Wang, M. (2019). Effects of sponsorship disclosure on perceived integrity of biased recommendation agents: Psychological contract violation and
knowledge-based trust perspectives. Information Systems Research, 30(2), 507–522. https://doi.org/10.1287/isre.2018.0811
Wang, W., Xu, J.(David), & Wang, M. (2018). Effects of recommendation neutrality and sponsorship disclosure on trust vs. distrust in online recommendation agents:
Moderating role of explanations for organic recommendations. Management Science, 64(11), 5198–5219. https://doi.org/10.1287/mnsc.2017.2906
Weißhaar, I., & Huber, F. (2016). Empathic relationships in professional services and the moderating role of relationship age. Psychology & Marketing, 33(7), 525–541.
https://doi.org/10.1002/mar.20895
Wieseke, J., Geigenmüller, A., & Kraus, F. (2012). On the role of empathy in customer-employee interactions. Journal of Service Research, 15(3), 316–331. https://doi.
org/10.1177/1094670512439743
Wilder, K. M., Collier, J. E., & Barnes, D. C. (2014). Tailoring to customers’ needs: Understanding how to promote an adaptive service experience with frontline
employees. Journal of Service Research, 17(4), 446–459. https://doi.org/10.1177/1094670514530043
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service
Management, 29(5), 907–931. https://doi.org/10.1108/josm-04-2018-0119
Xiao, B., & Benbasat, I. (2015). Designing warning messages for detecting biased online product recommendations: An empirical investigation. Information Systems
Research, 26(4), 793–811. https://doi.org/10.1287/isre.2015.0592
Xiao, B., & Benbasat, I. (2018). An empirical examination of the influence of biased personalized product recommendations on consumers’ decision making outcomes.
Decision Support Systems, 110, 46–57. https://doi.org/10.1016/j.dss.2018.03.005
Xu, D. J., Benbasat, I., & Cenfetelli, R. T. (2018). The outcomes and the mediating role of the functional triad: The users’ perspective. Information Systems Journal, 28
(5), 956–988. https://doi.org/10.1111/isj.12183
Yalcin, Ö. N., & DiPaola, S. (2018). A computational model of empathy for interactive agents. Biologically Inspired Cognitive Architectures, 26, 20–25. https://doi.
org/10.1016/j.bica.2018.07.010
Yalcin, Ö. N., & DiPaola, S. (2019). Modeling empathy: Building a link between affective and cognitive processes. Artificial Intelligence Review, 53, 2983–3006. https://
doi.org/10.1007/s10462-019-09753-0
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived
feelings. Journal of Applied Psychology, 106(10), 1557–1572. https://doi.org/10.1037/apl0000834
Yang, Y., Ma, X., & Fung, P. (2017). Perceived Emotional Intelligence in Virtual Agents. In Proceedings of the 2017 CHI conference extended abstracts on human factors in
computing systems (pp. 2255–2262). ACM. https://doi.org/10.1145/3027063.3053163.
Yokoi, R., Eguchi, Y., Fujita, T., & Nakayachi, K. (2021). Artificial intelligence is trusted less than a doctor in medical treatment decisions: Influence of perceived care
and value similarity. International Journal of Human–Computer Interaction, 37(10), 981–990. https://doi.org/10.1080/10447318.2020.1861763
Yoo, J., Choi, S., Hwang, Y., & Yi, M. Y. (2021). The role of user resistance and social influences on the adoption of smartphone: Moderating effect of age. Journal of
Organizational and End User Computing, 33(2), 36–58. https://doi.org/10.4018/joeuc.20210301.oa3
Zhang, J., & Curley, S. P. (2018). Exploring explanation effects on consumers’ trust in online recommender agents. International Journal of Human–Computer
Interaction, 34(5), 421–432. https://doi.org/10.1080/10447318.2017.1357904
Zhou, H., Huang, M., Zhang, T., Zhu, X., & Liu, B. (2018). Emotional chatting machine: Emotional conversation generation with internal and external memory. In , 32.
Proceedings of the AAAI conference on artificial intelligence.
Zhou, L., Gao, J., Li, D., & Shum, H.-Y. (2020). The design and implementation of Xiaoice, an empathetic social chatbot. Computational Linguistics, 46(1), 53–93.
https://doi.org/10.1162/coli_a_00368
