

User Interaction with AI-enabled Systems:
A Systematic Review of IS Research
Completed Research Paper

Christine Rzepka, LMU Munich, Ludwigstr. 28, 80539 Munich, Germany, rzepka@bwl.lmu.de
Benedikt Berger, LMU Munich, Ludwigstr. 28, 80539 Munich, Germany, benedikt.berger@bwl.lmu.de

Abstract
The improved performance of technological capabilities in the field of artificial
intelligence (AI), including computer vision and natural language processing, makes it
possible to enhance existing and to develop new types of information systems. We refer
to such systems as AI-enabled systems. User interaction with these systems is an
important topic for information systems (IS) research because they are supposed to bring
about substantial change for individuals, organizations, and society. Despite the recent
public and academic interest in AI, AI-enabled systems are not a new phenomenon.
However, previous research is separated into research streams on different AI-enabled
system types. We conducted a literature review to aggregate the dispersed knowledge
regarding individual user interaction with such systems in IS research. Our results show
common behavioral patterns in interactions between users and various types of AI-
enabled systems and provide a solid foundation for future research on this topic.

Keywords: artificial intelligence, AI-enabled systems, user interaction, literature review

Introduction
The increasing availability of large amounts of data, growing computing power, and the advancement of
learning algorithms have led to major improvements in several information technology (IT) capabilities,
including computer vision, speech recognition, and natural language processing (Anthes 2017). Along with
others, these capabilities are commonly referred to as artificial intelligence (AI) because they enable IT to
perform tasks that arguably require intelligence (Russell and Norvig 2010). IT providers have utilized AI
capabilities to improve existing systems’ performance (e.g. in decision support systems) and to develop new
applications (e.g. voice assistants). In line with previous research, we refer to the former as AI-enhanced
systems (Boland and Lyytinen 2017) and the latter as AI-based systems (Wünderlich and Paluch 2017),
both of which we subsume under the term AI-enabled systems. Recent market research indicates that the
growing diffusion of AI-enabled systems has strong implications for society, organizations, and individuals
within both private and workplace contexts (Baccala et al. 2018). Given that information systems (IS)
research investigates “how individuals, groups, organizations, and markets interact with IT” (Sidorova et
al. 2008, p. 475), interaction with AI-enabled systems is an important subject for the IS discipline.
While AI has received renewed public and academic interest of late (Watson 2017), AI research has a long
history. The term AI dates back to the 1950s, and various AI-enabled systems have been developed since then.
Because AI research pursues particular approaches aimed at realizing distinct capabilities (Russell and Norvig 2010), AI-
enabled systems comprise system types that differ strongly in their purpose and functionality (e.g. expert
systems and autonomous cars). Although there are literature reviews of certain AI-enabled system types,
such as recommender systems (Li and Karahanna 2015), and specific AI capabilities, such as natural
language processing (Liu et al. 2017), knowledge regarding the interaction with these systems as a whole

Thirty Ninth International Conference on Information Systems, San Francisco 2018



remains dispersed. However, a common knowledge base would facilitate future research projects on AI-
enabled systems, which are likely to follow the renewed interest in AI. Therefore, we suggest it is worthwhile
to investigate what the IS discipline has learned thus far about the interaction with AI-enabled systems.
Because the interaction between individuals and IT serves as a foundation for interactions at the group and
organizational level (Sidorova et al. 2008), we narrow the scope of our study to the individuals’ perspective:
RQ: What is the current state of IS research on user interaction with AI-enabled systems?
To answer this research question, we conducted a systematic literature review following the guidelines of
Paré et al. (2015). We structured our review using a framework of human-computer interaction based on
Zhang and Li (2004; 2005). That is, we analyzed the effects that the characteristics of AI-enabled systems,
their users, and the task or context have on the interaction with such systems, as well as the effects that the
interaction itself produces. Within each of these effect categories, we then identified common research
topics across the different AI-enabled system types in our literature sample. Determining which systems
can be regarded as AI-enabled poses a major obstacle to a discussion of AI-enabled systems. This is because
no common understanding of what determines intelligent behavior or thinking has yet been reached
(Russell and Norvig 2010). We address this problem not by providing another definition for AI but by
building an understanding of the aims of AI research and the resulting techniques to build AI capabilities
and by relying on those systems that previous IS research regards as intelligent or AI-enabled.
Our study contributes to IS research by aggregating knowledge from previously separated research streams
and identifying common patterns in the interaction with AI-enabled systems, thus building a foundation for
future AI research in IS. Furthermore, we derive an agenda for further research that can serve
as a starting point for future research projects in this field. For practitioners, our study provides a condensed
account of phenomena previously observed in the interaction with AI-enabled systems (e.g. threat
perceptions by users), which should be considered before employing such systems. In the paper, we
introduce the foundations of AI research and the interaction between users and IT before presenting our
methodology, our results and a discussion of them, and ultimately, the paper’s limitations. The paper closes
with a roadmap for further research.

Conceptual Foundations
AI-enabled Systems
AI is a research discipline covering aspects of philosophy, mathematics, economics, neuroscience,
psychology, computer engineering, cybernetics, and linguistics (Russell and Norvig 2010). Among those
who pioneered AI research is Alan Turing (1950). Turing introduced the question of whether machines can
think and proposed a scenario to test this question, which is widely known as the Turing test. Early AI
research was mostly determined to build a general human-like intelligence, also referred to as strong AI
(Kurzweil 2005). The dominant approach to building strong AI has been symbolic reasoning. The
underlying idea of symbolic reasoning is that human experts can encode knowledge using symbols and that
systems can solve problems by making rule-based inferences on the basis of that knowledge. While the first
steps toward strong AI, such as the General Problem Solver by Newell and Simon (1963), showed promise,
progress soon stalled, and it remains uncertain whether and when strong AI can be realized. Owing to
the lack of progress in mimicking human intelligence, AI researchers turned toward solving more specific
problems. This led to the development of expert (or knowledge-based) systems, which employ symbolic
reasoning to provide decision support in specifically defined areas of expertise (Davern et al. 2012). Another
approach of AI researchers to solve narrowly defined problems is machine learning, which Mitchell (1997,
p. 2) defines as follows: “A computer program is said to learn from experience E with respect to some class
of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with
experience E.” Machine learning comprises a broad class of algorithms and statistical methods. These
techniques can be categorized according to the function used to represent the acquired knowledge (e.g.
decision trees, Bayesian networks, or artificial neural networks), which vary with the task the system is
supposed to perform (Carbonell et al. 1983; Russell and Norvig 2010). Machine learning techniques further
differ in their underlying learning strategy (i.e. the way in which the program gains experience). Recent
advances in machine learning have enabled considerable improvements in various fields of AI research,
such as speech recognition or computer vision (LeCun et al. 2015).


The different approaches to problem solving in AI research raise the question of which systems can be
considered intelligent. The Turing test, for instance, has been regarded as insufficient to determine AI
(Shieber 1994). McCarthy and colleagues offered the first definition of the term AI in a workshop proposal:
“For the present purpose the artificial intelligence problem is taken to be that of making a machine behave
in ways that would be called intelligent if a human were so behaving” (McCarthy et al. 1955, p. 11). Over
time, several similar definitions have been proposed, which Russell and Norvig (2010) group into four
categories: AI as systems that think like humans, AI as systems that act like humans, AI as systems that
think rationally, and AI as systems that act rationally. These definitions explain what AI research seeks to
achieve, but they do not conclusively determine what AI is. To the best of our knowledge, there is not yet a
commonly accepted definition of AI – nor, therefore, of intelligent systems. However, researchers have
come to a consensus that certain capabilities require intelligence. These include problem solving,
knowledge representation, reasoning, planning, learning, perceiving (including computer vision), acting
(robotics), natural language processing, and communicating (Russell and Norvig 2010). Therefore, we refer
to AI-enabled systems as systems that possess at least one of these capabilities. Within IS research, AI-
enabled systems drew considerable attention during the 1980s, when expert systems were of interest
(Gregor and Benbasat 1999). More recently, recommender systems, most of which incorporate machine
learning techniques, have been an important topic in IS research (Li and Karahanna 2015). While previous
literature reviews have aggregated knowledge about specific types of AI-enabled systems, we are not aware
of a consolidated review of IS research regarding the interaction with all kinds of AI-enabled systems. Our
study seeks to fill this gap.

User Interaction
Following the description of the IS discipline by Sidorova et al. (2008), the interaction between IT and
individuals constitutes one of the five core IS research areas and is concerned with the psychological facets
of human-computer interaction (or simply user interaction). Among the research topics within this area are
technology acceptance, user satisfaction, trust, computer self-efficacy, personalization, and privacy. Zhang
and Li (2004; 2005) and Zhang et al. (2009) have reviewed human-computer interaction research within
the IS discipline. They found users’ cognitive evaluations and behavior, attitudes, and performance to be
the most prominent research topics. Apart from these user-related facets, further factors influencing
human-computer interaction are characteristics of the task, the context, and the system (Zhang and Li
2004). Based on these findings, Zhang and Li (2004; 2005) derive a framework of the topics in human-
computer interaction IS research. According to this framework, interaction takes place between a system
and a user seeking to carry out a certain task within a particular context. User, system, task, and context
each exhibit certain characteristics that shape the interaction. Figure 1 depicts our adapted framework.

[Figure: User characteristics, system characteristics, and task & context characteristics shape the Interaction (perceptions, attitudes, intentions, behavior), which in turn leads to Outcomes (perceptions, attitudes, intentions, behavior).]

Figure 1. Framework of Human-Computer Interaction Based on Zhang and Li (2005)


The interaction itself comprises the actual use of the system by the user, as well as the cognitive evaluations
that precede the user’s behavior. Many IS theories explaining user behavior describe these cognitive
processes as a chain of perceptions, attitudes, and intentions, as proposed by the underlying theories of
reasoned action and planned behavior (Ajzen 1991; Fishbein and Ajzen 1975). Use behavior can in turn
affect attitudes and intentions toward the system (e.g. the continuance intention). For the purpose of our
literature review, we incorporate these cognitive processes along with use behavior at the core of the
framework. Furthermore, we follow Gregor and Benbasat (1999) and Xiao and Benbasat (2007) in making
a distinction between the system interaction itself and the outcomes of the system interaction. These
outcomes can again be cognitions (e.g. job satisfaction) or actual behavior (e.g. task performance).

Methodology
To answer our research question, we conducted a systematic literature review. Literature reviews critically
examine the current state of knowledge on a particular topic to reveal knowledge gaps and to provide a
starting point for future research (Rowe 2014). The topic of interest in our study is user interaction with AI-
enabled systems. The scope of our review is limited to the IS discipline. Literature reviews can serve four
major research goals: describing, understanding, aggregating, or explaining prior knowledge (Rowe 2014).
In line with our research goal, we chose an aggregative approach. While various guidelines provide detailed
step-by-step descriptions of how to conduct a literature review (Okoli 2015; vom Brocke et al. 2015; Webster
and Watson 2002), we specifically followed the review plan proposed by Paré et al. (2016), which ensures
a systematic and transparent process. We documented this process in a comprehensive review protocol.
Based on the goal and scope of our review, we developed a twofold search strategy to build a comprehensive
literature sample. First, we conducted a keyword search in the titles, abstracts, and keywords of the top 40
IS journals and four relevant HCI journals within the IS domain, namely Computers in Human Behavior,
Human-Computer Interaction, International Journal of Human-Computer Studies, and International
Journal of Man-Machine Studies (Lowry et al. 2013). To capture contemporary research in greater detail,
we added the proceedings of the ICIS, ECIS, PACIS, AMCIS, and HICSS IS conferences. The search terms
consisted of ‘artificial intelligence’ and all combinations of the terms ‘AI-based / AI-enhanced / AI-enabled
/ intelligent / smart / cognitive’ and ‘information system / system / application / agent.’ This search
yielded 436 search results, which were screened for exclusion. After excluding all editors’ comments, book
reviews, teaching cases, and research-in-progress papers, we selected empirical studies on user interaction
with AI-enabled systems and obtained an initial set of 28 papers. Next, we extended this sample by
conducting a forward and backward search following Webster and Watson (2002), which increased our
sample to a total of 51 papers published in the selected IS journals and conference proceedings. In our first
sample, we found studies of user interaction with a variety of AI-enabled system types, ranging from expert
systems, recommender systems, intelligent tutoring systems, and chatbots to robots and autonomous
vehicles. To broaden our sample further, we started a second set of searches with these AI-enabled system
types and ‘user’ OR ‘interaction’ as search terms in title, keywords, and abstract within the same set of
publications searched before. The screening process followed the same rules as in the first search. Overall,
our final literature sample includes 96 studies.
More than half of these studies were published in human-computer interaction journals, e.g. Computers in
Human Behavior and International Journal of Human-Computer Studies, followed by the journals MIS
Quarterly and Journal of Management Information Systems, and other IS journals and conference
proceedings (see Table 1). While the first empirical studies were already published in 1987 and focused on
expert system use, research interest has increased sharply since 2010 with behavioral studies on
recommender systems, robots, and chatbots. Regarding the system types, the sample includes 27 papers on
robots, 21 papers on expert systems, 20 papers on personalized recommender systems, 17 papers on
chatbots, five papers on intelligent tutoring systems, and six papers on other system types such as
autonomous vehicles and intelligent assistant applications. Experiments and surveys were the predominant
research approach.


Journal/Conference <1999 1999-2008 2009-2018 Total
Americas Conference on Information Systems 0 0 3 3
Computers in Human Behavior 0 3 28 31
Decision Support Systems 1 2 3 6
European Conference on Information Systems 0 0 1 1
Hawaii International Conference on System Sciences 0 0 1 1
Human-Computer Interaction 0 2 1 3
International Conference on Information Systems 0 0 3 3
Information & Management 1 0 4 5
Information Systems Research 0 0 1 1
International Journal of Human-Computer Studies 2 4 17 23
International Journal of Man-Machine Studies 3 0 0 3
Journal of Management Information Systems 2 1 4 7
Journal of the AIS 0 1 0 1
MIS Quarterly 4 2 2 8
Total 13 15 68 96

Table 1. Literature Sample


To analyze the literature, we applied the framework of human-computer interaction as described in the
conceptual foundations. Using a concept matrix, we extracted causal relationships from the studies and
assigned independent and dependent variables to the different parts of the interaction framework.
Subsequently, we clustered the studies into four groups according to their independent variables: effects of
the interaction itself, the system characteristics, the user characteristics, and the task or context
characteristics. We assigned papers to multiple groups if they dealt with more than one of these effects. In
each group, we aggregated the findings from studies investigating related causal relationships. Thereby,
common research topics emerged. For instance, several studies on system characteristics investigated the
effects of systems’ transparency. Thus, transparency became one of the research topics we identified in our
sample. In the following results section, we elaborate on all the identified topics. The structure of the section
reflects this categorization of the papers. We begin with the literature on the effects of the interaction itself
on users’ perceptions, attitudes, intentions, and actual behavior. Next, we analyze the effects of system,
user, and task or context characteristics on the interaction with AI-enabled systems. In our discussion, we
then aggregate the topics from all sections and map them onto our research framework. Owing to space
restrictions, we refrained from citing all references in the text and focused our analysis on the major topics
that emerged. Therefore, isolated research studies on system configurations not related to AI capabilities
(Duffy and Azevedo 2015; Matt et al. 2014; Ochi et al. 2010), specific task related outcomes (Hasan et al.
2018; Hostler et al. 2011), and technology acceptance in general (Burton et al. 1992; Ernst and Reinelt 2017;
Fridin and Belokopytov 2014; Nguyen and Sidorova 2017) are not further cited in the results.

Results
Effects of System Interaction
Given that AI techniques enable systems to incorporate human-like capabilities, interaction with AI-
enabled systems triggers various behavioral responses in users, including perceptions of humanness and of
threat toward the system, as well as actual behavior that differs from behavior in human-human interactions.
Extant research confirms that users perceive humanness and attribute typically human behavioral,
cognitive, and affective characteristics to AI-enabled systems (Beran et al. 2011) and judge the system as
being as credible and communicative as human counterparts (Edwards et al. 2014). As a consequence, users
may even judge the system to be human, with some systems consequently passing the Turing test (Gilbert and
Forney 2015). While Holtgraves et al. (2007) support these results and find that users assign human-like
personalities to chatbots, they also report that this perception depends on the human characteristics
represented by the system since people prefer certain personality factors over others. In line with these
findings, other authors have observed that people adjust their grammatical structure (Cowan et al. 2015)
and verbal utterances (Le Bigot et al. 2007) in a way that is analogous to human-human communication


when interacting with a chatbot. Extant research shows that people also align their opinions to the system,
confirming an “anchoring effect” (Adomavicius et al. 2013; Silverman and Bedewi 1996). However, alongside
the perceptions of humanness, system familiarity, and likability reported in extant literature,
perceptions of threat are also found (Rosenthal-von der Pütten and Krämer 2014). Elkins et al. (2013) find
that when receiving counter-attitudinal advice, expert users perceive the system as a threat and develop
negative attitudes toward it. Analogously, Fernández-Llamas et al. (2018) report that children perceive a
robotic teacher as threatening. Interestingly, these threat perceptions were not apparent for
children who actually interacted with a robot. To address these differences, Komiak and Benbasat (2008)
suggest that users form trust and distrust in different ways, which should be studied as separate processes.
Although various studies confirm that users perceive humanness in AI-enabled systems and behave
accordingly, findings in previous research studies also suggest that users nevertheless interact differently
with AI-enabled systems compared with human counterparts. Several authors note considerable
differences in users’ conversational behavior with chatbots with regard to intersubjective effort (Corti and
Gillespie 2016), self-disclosure (Pickard et al. 2016; Valdez and Ziefle 2018), and message length and
content (Hill et al. 2015; Mou and Xu 2017). They observe more negative connotations and a higher level of
neuroticism in chatbot interaction, while humans usually engage in socially desirable and positive behaviors
with other humans (Hill et al. 2015; Mou and Xu 2017). Apart from users’ conversational behavior, non-
verbal behavior also differs for user-robot and human-human interaction in that it is less expressive during
the former kind of interaction than the latter (Shahid et al. 2014). In a study on users’ affective responses
to observing affectionate and abusive behaviors toward humans and robots, Rosenthal-von der Pütten et
al. (2014) show that users’ emotional responses also differ: while users show similar emotional arousal for
both a human and a robot in the affectionate behavioral condition, participants did not react emotionally
when observing abusive behavior toward robots. These different behaviors are also observable in research
studies that show decreasing interest in interacting with a chatbot after the first use, representing the
occurrence of the so-called ‘novelty effect’ for AI-enabled systems (Fryer et al. 2017). However, other
authors report a ‘mere exposure’ effect, which suggests that, as soon as individuals become used to and
familiar with the technology, or even build a relationship with it, this novelty effect may disappear (de Graaf
et al. 2015; Kanda et al. 2004). While users’ behavior during system interaction may differ negatively from
that in human-human interactions, the extant literature reports rather positive task-related
outcomes for interaction with AI-enabled systems. The aim of capabilities such as reasoning, learning, and
acting is to perform certain tasks better than conventional systems or humans and to provide users with
personalized and experience-based advice. It is shown that the use of an AI-enabled system may indeed
lead to higher accuracy and decision quality (Batra and Antony 2001; Sviokla 1990), fewer error incidents
(Byrd 1992), improved efficiency (Ben Mimoun et al. 2017; Changchit et al. 2001; Vinze 1992), and positive
learning outcomes (Antony and Santhanam 2007; Kurilovas et al. 2015; Xu et al. 2014b). These outcomes
are subject to various system, user, and task characteristics, which are analyzed in detail below.

Effects of System Characteristics


Four major topics with regard to AI-enabled system characteristics emerge from our literature review: AI
capabilities, system transparency, human-like appearance, and gestural and conversational behavior.
Techniques from AI research have enabled systems to incorporate AI capabilities such as acting
autonomously, learning user behaviors, perceiving and reacting to the environment, and understanding
natural language. Extant research has investigated the impact of these system attributes on robots,
intelligent tutoring systems, chatbots, and intelligent agents in general. Intelligent system characteristics
such as learning ability (Chang 2010), autonomy, and reactivity (Chao et al. 2016) significantly promote
users’ adoption intentions. Paradoxically, Chao et al. (2016) also show that system autonomy and
learnability lead to higher risk perceptions, which negatively affect users’ intention to use AI-enabled
systems. The negative impact of autonomy is also confirmed by Złotowski et al. (2017), who find that higher
levels of autonomous behavior trigger users’ perceptions of realistic and identity threats as well as negative
attitudes toward robots. With regard to the system’s reactivity as a typical aspect of human-human
interaction, various authors show positive effects on perceptions of humanness (Schuetzler et al. 2014),
partner value (Birnbaum et al. 2016), and intimacy and interactional enjoyment (Lee and Choi 2017).
Furthermore, system responsiveness is found to influence users’ trust and satisfaction with the interaction
(Lee and Choi 2017). Although the system’s ability to communicate has received less attention in extant
research, advancements in natural language processing and speech recognition have encouraged isolated


projects comparing text- with voice-based interactions. Le Bigot et al. (2007) find that voice-based
interaction promotes longer, more complex, and more personal conversations, while text-based interaction
focuses on short and efficient dialogues. D'Mello et al. (2010) show that voice-based interaction decreases
task completion time, but they report equal learning gains for both input modalities. In addition, D'Mello
et al. (2010) observe a negative impact of speech recognition errors on participants’ system evaluations.
Interestingly, a similar finding was previously reported by Workman (2005), who also identified the
negative impact of system errors on system use. These findings suggest that, apart from system
functionalities, dysfunctionalities may also affect user interaction with AI-enabled systems.
Because expert and recommender systems can make decisions or provide advice tailored to users thanks to
their ability to reason, plan, and learn, how these systems reach their conclusions is another topic of
interest for users. Accordingly, the transparency of a system’s decisions or actions significantly
influences users’ behavior. Xu et al. (2014a) show that increasing the system’s transparency
positively affects users’ perceptions of recommender systems. More specifically, they report higher
perceptions of informativeness and enjoyment thanks to the transparency of the system’s reasoning process
and thereby also better evaluations of decision quality and system acceptance. Higher system transparency
may also affect how users elaborate on expert system recommendations: users who understand the
system’s reasoning differ in their decision-making process (Mak et al. 1997).
this regard, the provision of explanations has also received major attention. Gregor and Benbasat (1999)
address this topic in a comprehensive review on explanations in AI-enabled systems. Overall, the provision
of explanations significantly enhances users’ system perceptions and adherence to the system’s advice
(Arnold et al. 2006; Ye and Johnson 1995), as well as users’ decision-making effectiveness (Gregor 2001;
Lamberti and Wallace 1990). Gedikli et al. (2014) find that explanations promote users’ perceived
transparency of recommender systems and user satisfaction. Explanations help resolve comprehension
difficulties and enhance understanding of the provided advice (Mao and Benbasat 2000), which leads to
higher confidence beliefs in systems that justify their decision (Arnold et al. 2006; Ye and Johnson 1995).
Similarly, other authors show positive effects on trust beliefs, which promote perceptions of usefulness
(Wang et al. 2016) and recommendation quality (Wang and Benbasat 2016).
Besides the effects of advanced and increasingly human-like capabilities, system humanness is further
enhanced through different forms of human-like appearance and physical embodiment. Extant
literature has found significant differences between human-like and machine-like appearances with respect
to both virtual (chatbots, recommender systems, intelligent tutoring systems) and physical systems
(robots). Hinds et al. (2004), for example, find that the more human-like a robotic co-worker looks, the less
responsible humans feel for the task. Similarly, Corti and Gillespie (2016) show that humans put
significantly more intersubjective effort into interactions with a chatbot that has a human embodiment than
with a text-screen interface. In this vein, several studies confirm that the presence of an avatar increases users’
perceptions of social characteristics and humanness or social presence of an AI-enabled system (Looije et
al. 2010; Qiu and Benbasat 2009) as well as users’ engagement during the interaction (Schuetzler et al.
2018). These perceptions may further be influenced by the avatar’s gender or ethnicity (Mara and Appel
2015; Nunamaker et al. 2011; Qiu and Benbasat 2010). Apart from visual appearance, previous research
has shown that, among different output modalities, anthropomorphic voice output promotes users’
perceptions of social presence, trust, and enjoyment, which is in line with the findings above (Qiu and
Benbasat 2009). However, while most studies confirm the positive effect of a human-like appearance,
Rosenthal-von der Pütten and Krämer (2014) show that it increases perceptions of both likeability and
threat, confirming the uncanny valley hypothesis, which states that users’ perceived familiarity with a
human-like system starts to decrease beyond a certain degree of perceived humanness. In addition, different
types of physical embodiments and how they compare with virtual interfaces have received considerable
interest in extant research, especially for chatbots and robots. Physically present systems are shown to engender perceptions of social presence during user-system interaction (Lee et al. 2006), higher perceptions of trust, enjoyment, and future use intentions (Mann et al. 2015), and longer durations of user interaction than virtual screen interfaces do (Rodriguez-Lizundia et al. 2015).
Based on the aforementioned AI capabilities, AI-enabled systems are capable of diverse gestural and
conversational behaviors, ranging from mere demeanor and body language to more complex
communication styles. Several research studies have investigated the effects of different interaction styles
for chatbots, recommender systems, and robots. They show that even small adjustments in avatar
demeanor, head tilt, and conversational styles affect users’ perceptions of the system. For example, a
smiling demeanor (Nunamaker et al. 2011) and speaking in a playful tone (Sundar et al. 2017) are perceived
as more favorable than neutral or serious behaviors. This finding is also confirmed by Mara and Appel
(2015), who observe that a robot tilting its head promotes users’ perceptions of cuteness and human-
likeness. Various studies in this context have examined the effect of social behavior expressed by AI-enabled
systems. According to Looije et al. (2010), robots that engage in social behavior promote perceptions of
social characteristics and stimulate users’ behavior in that they engage in longer interactions. In line with
these findings, Leite et al. (2013) show that systems depicting empathetic facial expressions and verbal
utterances promote a feeling of companionship toward the robot, which may explain the finding above.
Similarly, chatbots and robots that incorporate humor, relationship maintenance, or empathic behavior
during the interaction promote users’ perceived enjoyment (Lee and Choi 2017) and companionship (Leite
et al. 2013), positive perceptions of friendliness and trustworthiness, and users’ friendly behavior (Kim et
al. 2013b; Looije et al. 2010). Ultimately, it is shown that these positive perceptions of social and human-
like system behavior increase overall system acceptance (Cavedon et al. 2015; Holtgraves et al. 2007). In
contrast to social behavior, persuasive-conversational strategies promote users’ perceptions of chatbot
power, trustworthiness, and expertise (Derrick and Ligon 2014). Depending on the task and context they
are used in, robots may represent different roles and power distances, which significantly affect users' perceived social presence and humanness of the system (Kim et al. 2013a) as well as their own sense of responsibility (Hinds et al. 2004). Tay et al. (2014) show that robot behavior and personality should be aligned with this occupational role. The user experience and task performance with supervisor or subordinate robot roles are further mediated by physical distance (Kim and Mutlu 2014).

Effects of User Characteristics


Across all identified AI-enabled system types, user characteristics such as users’ demographic background,
personality, task- or system-related experiences, or the fit between system and user moderate the
interaction. Among demographic factors, age and gender are shown to influence robot use (de Graaf et
al. 2015). Research studies focusing on the effect of different age groups further show that younger children
display higher engagement (Shahid et al. 2014) and more positive attitudes (Martínez-Miranda et al. 2018;
Fernández-Llamas et al. 2018) than older children during robot interactions. Beyond children, older adults appear to be less afraid of robots than their middle-aged counterparts, as indicated by shorter proxemic distances and longer interaction times (Rodriguez-Lizundia et al. 2015). Regarding the effect
of users’ gender, differences are found in their perceptions of system configurations such as avatar
appearances (Qiu and Benbasat 2010) and conversational styles (Derrick and Ligon 2014). Another
investigated demographic factor is users’ cultural background, which significantly affects evaluations of
robots’ conversational styles (Rau et al. 2009) and perceived enjoyment (Shahid et al. 2014). Furthermore,
users’ personal characteristics are shown to influence user interaction. Several studies confirm the
moderating positive effects of users’ personal innovativeness on their intention to use voice assistants
(Nasirian et al. 2017), expert systems (Agarwal and Prasad 1998), and robots (de Graaf et al. 2015), as well
as a direct impact on users’ deliberation of recommender systems’ advice (Wang and Doong 2010). Other
personal characteristics include users’ desire for control (Gaudiello et al. 2016) and robot anxiety
(Rosenthal-von der Pütten and Krämer 2014), both of which promote negative attitudes toward robots.
Apart from those individual characteristics, system use also differs according to different levels of task- or
system-related user experience. Novices with little prior experience with and knowledge about a particular task show higher perceptions of usefulness and use intentions toward expert systems (Will 1992).
Significant differences are particularly evident with regard to users’ performance as an outcome of system
use in the case of expert systems, e-learning systems, and recommender systems. Higher levels of prior
knowledge result in higher user performance with fewer error incidents (Batra and Antony 2001), better
learning gains (Kanda et al. 2004; Xu et al. 2014b), and greater efficiency (Loup-Escande et al. 2017) than
is the case with less experienced users. In addition, contradictory effects are observable with respect to system-specific experience. While familiarity with a recommender system decreases perceived system effort and, thereby, perceived recommendation quality (Tsekouras and Li 2015), familiarity has also been shown to promote recommendation acceptance (Mak et al. 1997).
Moreover, several researchers have explicitly investigated how the actual fit between user and system
affects system use. Consistency between the system's problem-solving process and the user's mental processes affects users' confidence levels and performance outcomes in expert system use (Dalal and Kasper 1994; Jiang et al. 2000). The resulting fit between system and user decision processes and outcomes is
ultimately shown to influence users’ decision making process (Ho and Bodoff 2014) and decision outcome
(Yoon et al. 2013). In addition, explanations that fit users' cognitive style are perceived as being of better quality (Giboney et al. 2015). Similarly, users' perceptions of the system, as well as their own efforts during
the interaction (Tsekouras and Li 2015) and the fit between users’ cognitive processes and the system
presentation (Shmueli et al. 2016), may affect their evaluations of the recommender system’s advice.
Furthermore, Qiu and Benbasat (2010) show that matching system characteristics, such as avatar
appearances, to the users’ gender and ethnicity leads to higher enjoyment and favorable system perceptions.

Effects of Task and Context Characteristics


Although task and context characteristics are rarely studied in the identified literature sample, they do
exhibit moderating effects on the relationship between AI-enabled systems and the user. Differences in user
performance during system use depend especially on the task’s complexity (Dalal and Kasper 1994; Roth et
al. 1987), ambiguity (Nissen and Sengupta 2006), and uncertainty (Lamberti and Wallace 1990).
Furthermore, contextual factors such as stressful conditions also affect users’ emotional perceptions of
robots (Thimmesch-Gill et al. 2017). In addition, Chang (2010) finds that users’ intention to use an AI-
enabled system depends on the fit between technology and task characteristics, and, thus, that task
characteristics may not only moderate system use but also significantly affect users’ adoption decisions.
Other authors report similar results in different use scenarios. Hoffmann and Krämer (2013) find that
robots are perceived more favorably in a task-oriented scenario, while virtual characters are preferred in
persuasive-conversational contexts. Analogously, Gaudiello et al. (2016) find users’ trust toward robots to
be greater for functional than social tasks, which implies that users may perceive a better fit between robot
technology and functional tasks. In addition, the nature of the task change experienced by the user may affect users' intentions to use expert systems (Gill 1996). These results confirm Chang's (2010) findings.

Discussion
Conclusion and Implications
In this study, we set out to aggregate the findings on user interaction with AI-enabled systems within the
IS research domain. In line with our definition, AI-enabled systems possess one or more of the capabilities
that require intelligence according to AI researchers. We consolidated research streams within IS research that had previously been treated separately and aggregated insights regarding the interaction with different types of AI-enabled systems. Figure 2 summarizes the major topics identified.

[Figure 2 depicts the identified topics as a framework: User characteristics (Demographics & Personality; Task & System Experience; Cognitive Fit), System characteristics (AI Capabilities; Transparency; Human-like Appearance; Gestural & Conversational Behavior), and Task & Context characteristics (Task-Technology Fit) shape the Interaction (Humanness & Threat Perceptions; Interaction with Humans vs. AI), which leads to Outcomes.]

Figure 2. Main Research Topics on AI-enabled Systems
Thirty Ninth International Conference on Information Systems, San Francisco 2018
Our results reveal that user interaction with AI-enabled systems is a long-standing topic within IS research
and ranges from supporting individual productivity (e.g. expert systems) to enhancing web use (e.g.
recommender systems) and providing personal assistant functionality (e.g. chatbots). Many topics found in
research on the interaction with conventional systems are also common in research on the interaction with
AI-enabled systems. However, we also revealed patterns that are specific to AI-enabled systems and can be
observed across different AI-enabled system types. Figure 2 therefore depicts both well-known (presented
in gray) and AI-specific patterns. The figure does not depict outcomes of the interaction (e.g. task
performance) as these are very context-specific.
Overall, the literature review revealed that users’ interactions with AI-enabled systems trigger contradictory
behavioral responses. On the one hand, users assign humanness and social characteristics to AI-enabled
systems. On the other hand, AI-enabled systems also trigger perceptions of threat. A possible explanation
for this phenomenon is offered by the uncanny valley hypothesis (Mori 1970), which states that users’
familiarity with a robot drops as the system becomes more human-like but fails to achieve a life-like
appearance. Extant research lacks a comprehensive explanation for determinants and triggers of users’
threat experiences. For example, this literature review shows that AI capabilities such as system autonomy
have both positive and negative effects on users’ evaluation of the system. Therefore, IT providers and
marketers should be careful how they communicate regarding system functionalities. However, our review
of the effects of system characteristics on interaction also shows how a system that engages in human-like
and responsive behavior through virtual or physical embodiment promotes favorable user reactions. For
this reason, depending on the system’s purpose, IT providers should assess different system configurations
and behavioral traits. Another way to increase users’ trust toward and acceptance of AI-enabled systems is
to increase the system’s transparency, for example, by providing explanations. Understanding AI-enabled
systems’ reasoning has been shown to promote users’ acceptance of both these systems and their advice. To
ensure that this effort produces the expected results, IT providers should further tailor these explanations to users' experience level and cognitive style. Apart from users' perceptions, previous research has found that users' behavior during system use is less engaged and less favorable than during human-human interaction. At the same time, users' task-related outcomes such as performance gains in effectiveness,
efficiency, and learning point to positive effects of system use. Hence, system designers and IT providers
should improve the actual interaction with the system by incorporating social behaviors and reducing error
incidents in order to ensure that individuals experience long-term benefits thanks to better performance.

Limitations
This literature review is not without its limitations. First, the scope of the literature searches may be
extended. Our analyses revealed several interchangeable terms describing system types with the same
purpose (e.g. knowledge-based and expert systems). Although our search approach covered the most
commonly used terms and included forward and backward searches, our search terms may not have
captured all the types of AI-enabled systems. Future research may search for additional attributes such as autonomous or learning systems and for applications of AI techniques. In addition, our literature review
was limited to the IS discipline. However, analyses of interdisciplinary sources may provide further insights
regarding user interaction with AI-enabled systems. Especially journals from the AI discipline may serve as
important sources to validate our results. Moreover, we restricted our review to interaction with AI-enabled
systems at the individual level, but interaction at the group or organizational level is also important to AI
research and may be addressed in future reviews. Second, the depth of our analyses could be increased.
This literature review focused on aggregating the most important effects on user interaction with AI-
enabled systems. Analyses of predominant theories, methods, and research models may add more to our
current understanding of this topic. Additionally, further analyses may reveal patterns that could validate
existing or develop new theoretical foundations for AI-enabled systems and may further contribute to the
current state of knowledge regarding the application and use of AI capabilities in IT.

Agenda for Further Research


Our findings may inform future research on less-studied aspects of interaction with existing AI-enabled
systems as well as behavioral research on new AI-enabled systems as they become available thanks to the
ongoing development of AI research. Table 2 summarizes the identified AI-specific topics, the main results
of the literature review, and the questions that further research may address.
Topic | Main Result | Further Research Questions

System Interaction

Humanness & Threat Perceptions | Users perceive both humanness and threat in AI-enabled systems, without consensus on the triggers of these perceptions (e.g. autonomy). | Which degree of anthropomorphism is beneficial for users' interaction with AI-enabled systems? When do users perceive threat?

Interaction with Humans vs. AI | There are analogies (e.g. users' alignment) and differences (e.g. less engagement) between human and AI interactions. Outcomes are mostly positive (e.g. task performance). | What causes users to behave differently with AI-enabled systems? Do these differences disappear when users continually interact with these systems and experience their benefits?

System Characteristics

AI Capabilities | Autonomy, communication, and learning ability show contradictory effects on the user. However, research in this area is still scarce. | What is the effect of different AI capabilities on user interaction, and does it differ according to varying tasks and new system types?

Transparency | A transparent system presentation, including the provision of explanations, positively affects users' decision-making behavior. | How may transparency be ensured for systems that increasingly act autonomously and learn based on machine learning techniques?

Human-Like Appearance | Virtual (e.g. avatars) and physical (e.g. robotic) appearances positively affect users' perceptions of social characteristics in AI-enabled systems. | Which levels of human-likeness still promote users' perceptions of humanness and likeability without risking threat perceptions?

Gestural & Conversational Behavior | Behavioral (e.g. empathic) traits presented through gestural or conversational cues (e.g. playful tone) affect users' relationship building. | Which behaviors might prevent or alleviate negative user responses? When is the use of social cues beneficial (for which tasks or users)?

User Characteristics

Cognitive Fit | The fit between user and system is a key influential factor in users' system acceptance and task performance. | How can the fit between user and system be ensured for different AI-enabled systems, tasks, and users? What is the impact on interactions?

Table 2. AI-specific Topics, Main Results, and Further Research Questions


Although all the identified topics open up multiple avenues for further research, we limit the detailed elaboration of the research agenda to the topics we consider most pressing: perceptions of humanness and threat, AI capabilities, and transparency. First, there is a need for further investigations into users'
perceptions of humanness and threat. It remains unknown which degree of humanness is disadvantageous
for users’ interaction with AI-enabled systems. To address this knowledge gap, future research may study
users’ actual and emotional behaviors during interactions, which have revealed different patterns than
users’ self-reported perceptions may suggest. In this vein, user characteristics should also be taken into
account. Previous research shows that users’ perceptions of threat vary according to age and task
experience. As human-like appearance and behavior have further been shown to influence users' perceptions of social characteristics, future research may study different interface designs and physical embodiments, including avatar appearances and the use of social cues during interaction with AI-enabled systems. Future research questions are: How do users behave and react when they interact with an AI-enabled system? What causes perceptions of threat, and how might they be prevented?
Second, future research should investigate how AI capabilities, including the ability to communicate, learn, and behave autonomously, affect users' acceptance and use. Previous literature suggests that autonomy may
both increase and decrease users’ intention to use an AI-enabled system, which may also depend on certain
task characteristics. The implication that certain AI capabilities may be better suited to particular tasks than
others should be thoroughly investigated. Given the advancements in natural language processing and speech recognition in particular, researchers should study the specific affordances of voice-based interaction and how they differ from conventional input modalities such as text. Within this framework, we
also call for more research on new and emerging AI-based system types that are less represented in our
literature sample, such as voice assistants, autonomous vehicles, or digital investment management
systems commonly referred to as “robo-advisors.” While research on voice assistants promises to shed light on the affordances users perceive in conversational AI-enabled systems, autonomous vehicles provide a new opportunity to close the aforementioned research gaps regarding the impact of system autonomy on the
individual. Moreover, research on digital investment management systems could be a starting point to
investigate the willingness to delegate decisions to AI-enabled systems. Finally, not only may innovative,
new systems incorporate AI capabilities, but conventional systems may also be enhanced by conversational
user interfaces or machine learning techniques. Future research questions include: Are certain AI
capabilities better suited to particular tasks than others? In which cases is voice-based interaction more
beneficial than other input modalities? What are some particular affordances of voice assistants? How does
autonomy affect users’ behavioral responses in the context of autonomous vehicles? Does the degree of
automation of digital investment management systems affect the willingness to delegate decisions to these
systems? Which benefits do users receive from the integration of AI capabilities into conventional systems?
Third, owing to recent advancements in machine learning, transparency is a system characteristic of growing importance, as it is becoming increasingly difficult to follow and understand systems' underlying reasoning processes. Extant research on expert and recommender systems shows that such opacity may inhibit users' trust and competence beliefs and, as a consequence, system adoption and use.
However, transparency has so far received less attention for other AI-enabled systems, such as robots or
autonomous vehicles. For example, users of socially assistive robots that behave autonomously in their
apartment may wish for an application to trace the robot's behavior in real time, especially when they are not
at home. Future research may investigate different transparency features and their impact on users’
interaction behavior with emerging systems. In line with the research topic above, increasing system
transparency could also be one possible solution to address users’ perceptions of threat regarding AI-
enabled systems. Consequently, specific research questions include: How can system transparency be
ensured for new AI-enabled system types? May system transparency prevent users’ perceptions of threat?

References
Adomavicius, G., Bockstedt, J. C., Curley, S. P., and Zhang, J. 2013. "Do Recommender Systems Manipulate Consumer Preferences?," Information Systems Research (24:4), pp. 956-975.
Agarwal, R., and Prasad, J. 1998. "The Antecedents and Consequents of User Perceptions in Information
Technology Adoption," Decision Support Systems (22:1), pp. 15-29.
Ajzen, I. 1991. "The Theory of Planned Behavior," Organizational Behavior and Human Decision Processes
(50:2), pp. 179-211.
Anthes, G. 2017. "Artificial Intelligence Poised to Ride a New Wave," Communications of the ACM (60:7),
pp. 19-21.
Antony, S., and Santhanam, R. 2007. "Could the Use of a Knowledge-Based System Lead to Implicit
Learning?," Decision Support Systems (43:1), pp. 141-151.
Arnold, V., Clark, N., Collier, P. A., Leech, S. A., and Sutton, S. G. 2006. "The Differential Use and Effect of
Knowledge-Based System Explanations in Novice and Expert Judgment Decisions," MIS Quarterly
(30:1), pp. 79-97.
Baccala, M., Curran, C., Garrett, D., Likens, S., Rao, A., Ruggles, A., and Shehab, M. 2018. "2018 AI
Predictions: 8 Insights to Shape Business Strategy," PwC, New York.
Batra, D., and Antony, S. R. 2001. "Consulting Support During Conceptual Database Design in the Presence
of Redundancy in Requirements Specifications: An Empirical Study," International Journal of Human-
Computer Studies (54:1), pp. 25-51.
Ben Mimoun, M. S., Poncin, I., and Garnier, M. 2017. "Animated Conversational Agents and E-Consumer
Productivity: The Roles of Agents and Individual Characteristics," Information & Management (54:5),
pp. 545-559.
Beran, T. N., Ramirez-Serrano, A., Kuzyk, R., Fior, M., and Nugent, S. 2011. "Understanding How Children
Understand Robots: Perceived Animism in Child–Robot Interaction," International Journal of
Human-Computer Studies (69:7–8), pp. 539-550.
Birnbaum, G. E., Mizrahi, M., Hoffman, G., Reis, H. T., Finkel, E. J., and Sass, O. 2016. "What Robots Can
Teach Us About Intimacy: The Reassuring Effects of Robot Responsiveness to Human Disclosure,"
Computers in Human Behavior (63), pp. 416-423.
Boland, R. J., and Lyytinen, K. 2017. "The Limits to Language in Doing Systems Design," European Journal
of Information Systems (26:3), pp. 248-259.
Burton, F. G., Chen, Y.-N., Grover, V., and Stewart, K. A. 1992. "An Application of Expectancy Theory for Assessing User Motivation to Utilize an Expert System," Journal of Management Information Systems (9:3), pp. 183-198.
Byrd, T. A. 1992. "Implementation and Use of Expert Systems in Organizations: Perceptions of Knowledge
Engineers," Journal of Management Information Systems (8:4), pp. 97-116.
Carbonell, J. G., Michalski, R. S., and Mitchell, T. M. 1983. "An Overview of Machine Learning," in Machine
Learning, R.S. Michalski, J.G. Carbonell and T.M. Mitchell (eds.). San Francisco (CA): Morgan
Kaufmann, pp. 3-23.
Cavedon, L., Kroos, C., Herath, D., Burnham, D., Bishop, L., Leung, Y., and Stevens, C. J. 2015. "“C'Mon
Dude!”: Users Adapt Their Behaviour to a Robotic Agent with an Attention Model," International
Journal of Human-Computer Studies (80), pp. 14-23.
Chang, H. H. 2010. "Task-Technology Fit and User Acceptance of Online Auction," International Journal
of Human-Computer Studies (68), pp. 69-89.
Changchit, C., Holsapple, C. W., and Madden, D. L. 2001. "Supporting Managers' Internal Control
Evaluations: An Expert System and Experimental Results," Decision Support Systems (30:4), pp. 437-
449.
Chao, C.-Y., Chang, T.-C., Wu, H.-C., Lin, Y.-S., and Chen, P.-C. 2016. "The Interrelationship between
Intelligent Agents’ Characteristics and Users’ Intention in a Search Engine by Making Beliefs and
Perceived Risks Mediators," Computers in Human Behavior (64), pp. 117-125.
Corti, K., and Gillespie, A. 2016. "Co-Constructing Intersubjectivity with Artificial Conversational Agents:
People Are More Likely to Initiate Repairs of Misunderstandings with Agents Represented as Human,"
Computers in Human Behavior (58), pp. 431-442.
Cowan, B. R., Branigan, H. P., Obregón, M., Bugis, E., and Beale, R. 2015. "Voice Anthropomorphism,
Interlocutor Modelling and Alignment Effects on Syntactic Choices in Human−Computer Dialogue,"
International Journal of Human-Computer Studies (83), pp. 27-42.
D'Mello, S. K., Graesser, A., and King, B. 2010. "Toward Spoken Human-Computer Tutorial Dialogues,"
Human-Computer Interaction (25:4), pp. 289-323.
Dalal, N. P., and Kasper, G. M. 1994. "The Design of Joint Cognitive Systems: The Effect of Cognitive
Coupling on Performance," International Journal of Human-Computer Studies (40:4), pp. 677-702.
Davern, M., Shaft, T., and Te'eni, D. 2012. "Cognition Matters: Enduring Questions in Cognitive IS
Research," Journal of the Association for Information Systems (13:4), pp. 273-314.
de Graaf, M. M. A., Allouch, S. B., and Klamer, T. 2015. "Sharing a Life with Harvey: Exploring the
Acceptance of and Relationship-Building with a Social Robot," Computers in Human Behavior (43),
pp. 1-14.
Derrick, D. C., and Ligon, G. S. 2014. "The Affective Outcomes of Using Influence Tactics in Embodied
Conversational Agents," Computers in Human Behavior (33), pp. 39-48.
Duffy, M. C., and Azevedo, R. 2015. "Motivation Matters: Interactions between Achievement Goals and
Agent Scaffolding for Self-Regulated Learning within an Intelligent Tutoring System," Computers in
Human Behavior (52), pp. 338-348.
Edwards, C., Edwards, A., Spence, P. R., and Shelton, A. K. 2014. "Is That a Bot Running the Social Media
Feed? Testing the Differences in Perceptions of Communication Quality for a Human Agent and a Bot
Agent on Twitter," Computers in Human Behavior (33), pp. 372-376.
Elkins, A. C., Dunbar, N. E., Adame, B., and Nunamaker, J. F. 2013. "Are Users Threatened by Credibility
Assessment Systems?," Journal of Management Information Systems (29:4), pp. 249-262.
Ernst, C.-P. H., and Reinelt, P. 2017. "Autonomous Car Acceptance: Safety Vs. Personal Driving
Enjoyment," Proceedings of the Americas Conference on Information Systems (AMCIS), Boston.
Fernández-Llamas, C., Conde, M. A., Rodríguez-Lera, F. J., Rodríguez-Sedano, F. J., and García, F. 2018.
"May I Teach You? Students' Behavior When Lectured by Robotic Vs. Human Teachers," Computers in
Human Behavior (80), pp. 460-469.
Fishbein, M., and Ajzen, I. 1975. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Reading, Massachusetts: Addison-Wesley Publishing.
Fridin, M., and Belokopytov, M. 2014. "Acceptance of Socially Assistive Humanoid Robot by Preschool and
Elementary School Teachers," Computers in Human Behavior (33), pp. 23-31.
Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., and Sherlock, Z. 2017. "Stimulating and Sustaining
Interest in a Language Course: An Experimental Comparison of Chatbot and Human Task Partners,"
Computers in Human Behavior (75), pp. 461-468.
Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., and Ivaldi, S. 2016. "Trust as Indicator of Robot
Functional and Social Acceptance. An Experimental Study on User Conformation to Icub Answers,"
Computers in Human Behavior (61), pp. 633-655.
Gedikli, F., Jannach, D., and Ge, M. 2014. "How Should I Explain? A Comparison of Different Explanation Types for Recommender Systems," International Journal of Human-Computer Studies (72:4), pp. 367-382.
Giboney, J. S., Brown, S. A., Lowry, P. B., and Nunamaker, J. F., Jr. 2015. "User Acceptance of Knowledge-Based System Recommendations," Decision Support Systems (72), pp. 1-10.
Gilbert, R. L., and Forney, A. 2015. "Can Avatars Pass the Turing Test? Intelligent Agent Perception in a 3D
Virtual Environment," International Journal of Human-Computer Studies (73), pp. 30-36.
Gill, T. 1996. "Expert Systems Usage: Task Change and Intrinsic Motivation," MIS Quarterly (20:3), pp.
301-329.
Gregor, S. 2001. "Explanations from Knowledge-Based Systems and Cooperative Problem Solving: An
Empirical Study," International Journal of Human-Computer Studies (54:1), pp. 81-105.
Gregor, S., and Benbasat, I. 1999. "Explanations from Intelligent Systems: Theoretical Foundations and
Implications for Practice," MIS Quarterly (23:4), pp. 497-530.
Hasan, M. R., Jha, A. K., and Liu, Y. 2018. "Excessive Use of Online Video Streaming Services: Impact of
Recommender System Use, Psychological Factors, and Motives," Computers in Human Behavior (80),
pp. 220-228.
Hill, J., Randolph Ford, W., and Farreras, I. G. 2015. "Real Conversations with Artificial Intelligence: A
Comparison between Human–Human Online Conversations and Human–Chatbot Conversations,"
Computers in Human Behavior (49), pp. 245-250.
Hinds, P. J., Roberts, T. L., and Jones, H. 2004. "Whose Job Is It Anyway? A Study of Human-Robot
Interaction in a Collaborative Task," Human-Computer Interaction (19:1/2), pp. 151-181.
Ho, S. Y., and Bodoff, D. 2014. "The Effects of Web Personalization on User Attitude and Behavior: An
Integration of the Elaboration Likelihood Model and Consumer Research Theory," MIS Quarterly
(38:2), pp. 497-520.
Hoffmann, L., and Krämer, N. C. 2013. "Investigating the Effects of Physical and Virtual Embodiment in
Task-Oriented and Conversational Contexts," International Journal of Human-Computer Studies
(71:7–8), pp. 763-774.
Holtgraves, T., Ross, S. J., Weywadt, C., and Han, T. 2007. "Perceiving Artificial Social Agents," Computers
in Human Behavior (23:5), pp. 2163-2174.
Hostler, R. E., Yoon, V. Y., Guo, Z., Guimaraes, T., and Forgionne, G. 2011. "Assessing the Impact of
Recommender Agents on On-Line Consumer Unplanned Purchase Behavior," Information &
Management (48:8), pp. 336-343.
Jiang, J. J., Klein, G., and Vedder, R. G. 2000. "Persuasive Expert Systems: The Influence of Confidence
and Discrepancy," Computers in Human Behavior (16:2), pp. 99-109.
Kanda, T., Hirano, T., Eaton, D., and Ishiguro, H. 2004. "Interactive Robots as Social Partners and Peer
Tutors for Children: A Field Trial," Human-Computer Interaction (19:1/2), pp. 61-84.
Kim, K. J., Park, E., and Sundar, S. S. 2013a. "Caregiving Role in Human–Robot Interaction: A Study
of the Mediating Effects of Perceived Benefit and Social Presence," Computers in Human Behavior
(29:4), pp. 1799-1806.
Kim, Y., Kwak, S. S., and Kim, M.-S. 2013b. "Am I Acceptable to You? Effect of a Robot’s Verbal Language
Forms on People’s Social Distance from Robots," Computers in Human Behavior (29:3), pp. 1091-1101.
Kim, Y., and Mutlu, B. 2014. "How Social Distance Shapes Human–Robot Interaction," International
Journal of Human-Computer Studies (72:12), pp. 783-795.
Komiak, S. Y. X., and Benbasat, I. 2008. "A Two-Process View of Trust and Distrust Building in
Recommendation Agents: A Process-Tracing Study," Journal of the Association for Information
Systems (9:12), pp. 727-747.
Kurilovas, E., Zilinskiene, I., and Dagiene, V. 2015. "Recommending Suitable Learning Paths According to
Learners’ Preferences," Computers in Human Behavior (51), pp. 945-951.
Kurzweil, R. 2005. The Singularity Is Near. New York: Viking.
Lamberti, D., and Wallace, W. 1990. "Intelligent Interface Design: An Empirical Assessment of Knowledge
Presentation in Expert Systems," MIS Quarterly (14:3), pp. 279-311.
Le Bigot, L., Terrier, P., Amiel, V., Poulain, G., Jamet, E., and Rouet, J.-F. 2007. "Effect of Modality on
Collaboration with a Dialogue System," International Journal of Human-Computer Studies (65:12), pp. 983-991.
LeCun, Y., Bengio, Y., and Hinton, G. 2015. "Deep Learning," Nature (521:7553), pp. 436-444.
Lee, K. M., Jung, Y., Kim, J., and Kim, S. R. 2006. "Are Physically Embodied Social Agents Better Than
Disembodied Social Agents?: The Effects of Physical Embodiment, Tactile Interaction, and People's
Loneliness in Human-Robot Interaction," International Journal of Human-Computer Studies (64:10),
pp. 962-973.


Lee, S., and Choi, J. 2017. "Enhancing User Experience with Conversational Agent for Movie
Recommendation: Effects of Self-Disclosure and Reciprocity," International Journal of Human-
Computer Studies (103), pp. 95-105.
Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., and Paiva, A. 2013. "The Influence of
Empathy in Human-Robot Relations," International Journal of Human-Computer Studies (71:3), pp. 250-260.
Li, S. S., and Karahanna, E. 2015. "Online Recommendation Systems in a B2C E-Commerce Context: A
Review and Future Directions," Journal of the Association for Information Systems (16:2), pp. 72-107.
Liu, D., Li, Y., and Thomas, M. A. 2017. "A Roadmap for Natural Language Processing Research in
Information Systems," Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS), Waikoloa, Hawaii.
Looije, R., Neerincx, M. A., and Cnossen, F. 2010. "Persuasive Robotic Assistant for Health Self-
Management of Older Adults: Design and Evaluation of Social Behaviors," International Journal of
Human-Computer Studies (68:6), pp. 386-397.
Loup-Escande, E., Frenoy, R., Poplimont, G., Thouvenin, I., Gapenne, O., and Megalakaki, O. 2017.
"Contributions of Mixed Reality in a Calligraphy Learning Task: Effects of Supplementary Visual
Feedback and Expertise on Cognitive Load, User Experience and Gestural Performance," Computers in
Human Behavior (75), pp. 42-49.
Lowry, P. B., Moody, G. D., Gaskin, J., Galletta, D. F., Humpherys, S. L., Barlow, J. B., and Wilson, D. W.
2013. "Evaluating Journal Quality and the Association for Information Systems Senior Scholars'
Journal Basket Via Bibliometric Measures: Do Expert Journal Assessments Add Value?," MIS
Quarterly (37:4), pp. 993-1012.
Mak, B., Schmitt, B. H., and Lyytinen, K. 1997. "User Participation in Knowledge Update of Expert
Systems," Information & Management (32:2), pp. 55-63.
Mann, J. A., MacDonald, B. A., Kuo, I. H., Li, X., and Broadbent, E. 2015. "People Respond Better to Robots
Than Computer Tablets Delivering Healthcare Instructions," Computers in Human Behavior (43), pp.
112-117.
Mao, J.-Y., and Benbasat, I. 2000. "The Use of Explanations in Knowledge-Based Systems: Cognitive
Perspectives and a Process-Tracing Analysis," Journal of Management Information Systems (17:2), pp.
153-179.
Mara, M., and Appel, M. 2015. "Effects of Lateral Head Tilt on User Perceptions of Humanoid and Android
Robots," Computers in Human Behavior (44), pp. 326-334.
Martínez-Miranda, J., Pérez-Espinosa, H., Espinosa-Curiel, I., Avila-George, H., and Rodríguez-Jacobo, J.
2018. "Age-Based Differences in Preferences and Affective Reactions Towards a Robot's Personality
During Interaction," Computers in Human Behavior (84), pp. 245-257.
Matt, C., Benlian, A., Hess, T., and Weiß, C. 2014. "Escaping from the Filter Bubble? The Effects of Novelty
and Serendipity on Users’ Evaluations of Online Recommendations," Proceedings of the Thirty Fifth
International Conference on Information Systems (ICIS), Auckland.
McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. 1955. "A Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence."
Mitchell, T. M. 1997. Machine Learning. New York: McGraw-Hill.
Mori, M. 1970. "The Uncanny Valley," Energy (7:4), pp. 33-35.
Mou, Y., and Xu, K. 2017. "The Media Inequality: Comparing the Initial Human-Human and Human-AI
Social Interactions," Computers in Human Behavior (72), pp. 432-440.
Nasirian, F., Ahmadian, M., and Lee, O.-K. 2017. "AI-Based Voice Assistant Systems: Evaluating from the
Interaction and Trust Perspectives," Proceedings of the Twenty-third Americas Conference on
Information Systems (AMCIS), Boston.
Newell, A., and Simon, H. A. 1963. "GPS, a Program That Simulates Human Thought," in Computers and
Thought, E.A. Feigenbaum and J. Feldman (eds.). New York: McGraw-Hill.
Nguyen, Q. N., and Sidorova, A. 2017. "AI Capabilities and User Experiences: A Comparative Study of User
Reviews for Assistant and Non-Assistant Mobile Apps," Proceedings of the Twenty-third Americas
Conference on Information Systems (AMCIS), Boston.
Nissen, M. E., and Sengupta, K. 2006. "Incorporating Software Agents into Supply Chains: Experimental
Investigation with a Procurement Task," MIS Quarterly (30:1), pp. 145-166.
Nunamaker, J. F., Derrick, D. C., Elkins, A. C., Burgoon, J. K., and Patton, M. W. 2011. "Embodied
Conversational Agent-Based Kiosk for Automated Interviewing," Journal of Management Information
Systems (28:1), pp. 17-48.


Ochi, P., Rao, S., Takayama, L., and Nass, C. 2010. "Predictors of User Perceptions of Web Recommender
Systems: How the Basis for Generating Experience and Search Product Recommendations Affects User
Responses," International Journal of Human-Computer Studies (68:8), pp. 472-482.
Okoli, C. 2015. "A Guide to Conducting a Standalone Systematic Literature Review," Communications of
the Association for Information Systems (37), pp. 879-910.
Paré, G., Tate, M., Johnstone, D., and Kitsiou, S. 2016. "Contextualizing the Twin Concepts of Systematicity
and Transparency in Information Systems Literature Reviews," European Journal of Information
Systems (25:6), pp. 493-508.
Paré, G., Trudel, M.-C., Jaana, M., and Kitsiou, S. 2015. "Synthesizing Information Systems Knowledge: A
Typology of Literature Reviews," Information & Management (52:2), pp. 183-199.
Pickard, M. D., Roster, C. A., and Chen, Y. 2016. "Revealing Sensitive Information in Personal Interviews:
Is Self-Disclosure Easier with Humans or Avatars and under What Conditions?," Computers in Human
Behavior (65), pp. 23-30.
Qiu, L., and Benbasat, I. 2009. "Evaluating Anthropomorphic Product Recommendation Agents: A Social
Relationship Perspective to Designing Information Systems," Journal of Management Information
Systems (25:4), pp. 145-181.
Qiu, L., and Benbasat, I. 2010. "A Study of Demographic Embodiments of Product Recommendation Agents
in Electronic Commerce," International Journal of Human-Computer Studies (68:10), pp. 669-688.
Rau, P. L. P., Li, Y., and Li, D. 2009. "Effects of Communication Style and Culture on Ability to Accept
Recommendations from Robots," Computers in Human Behavior (25:2), pp. 587-595.
Rodriguez-Lizundia, E., Marcos, S., Zalama, E., Gómez-García-Bermejo, J., and Gordaliza, A. 2015. "A
Bellboy Robot: Study of the Effects of Robot Behaviour on User Engagement and Comfort,"
International Journal of Human-Computer Studies (82), pp. 83-95.
Rosenthal-von der Pütten, A. M., and Krämer, N. C. 2014. "How Design Characteristics of Robots
Determine Evaluation and Uncanny Valley Related Responses," Computers in Human Behavior (36),
pp. 422-439.
Rosenthal-von der Pütten, A. M., Schulte, F. P., Eimler, S. C., Sobieraj, S., Hoffmann, L., Maderwald, S.,
Brand, M., and Krämer, N. C. 2014. "Investigations on Empathy Towards Humans and Robots Using
fMRI," Computers in Human Behavior (33), pp. 201-212.
Roth, E. M., Bennett, K. B., and Woods, D. D. 1987. "Human Interaction with an “Intelligent” Machine,"
International Journal of Man-Machine Studies (27:5–6), pp. 479-525.
Rowe, F. 2014. "What Literature Review Is Not: Diversity, Boundaries and Recommendations," European
Journal of Information Systems (23:3), pp. 241-255.
Russell, S. J., and Norvig, P. 2010. Artificial Intelligence: A Modern Approach. Upper Saddle River, New
Jersey: Pearson.
Schuetzler, R., Grimes, M., Giboney, J., and Buckman, J. 2014. "Facilitating Natural Conversational Agent
Interactions: Lessons from a Deception Experiment," Proceedings of the Thirty Fifth International
Conference on Information Systems (ICIS), Auckland.
Schuetzler, R. M., Giboney, J. S., Grimes, G. M., and Nunamaker, J. F. 2018. "The Influence of
Conversational Agents on Socially Desirable Responding," Proceedings of the 51st Hawaii
International Conference on System Sciences (HICSS), Hawaii.
Shahid, S., Krahmer, E., and Swerts, M. 2014. "Child–Robot Interaction across Cultures: How Does Playing
a Game with a Social Robot Compare to Playing a Game Alone or with a Friend?," Computers in Human
Behavior (40), pp. 86-100.
Shieber, S. M. 1994. "Lessons from a Restricted Turing Test," Communications of the ACM (37:6), pp. 70-
78.
Shmueli, L., Benbasat, I., and Cenfetelli, R. T. 2016. "A Construal-Level Approach to Persuasion by
Personalization," Proceedings of the International Conference on Information Systems (ICIS), Dublin.
Sidorova, A., Evangelopoulos, N., Valacich, J. S., and Ramakrishnan, T. 2008. "Uncovering the Intellectual
Core of the Information Systems Discipline," MIS Quarterly (32:3), pp. 467-482.
Silverman, B. G., and Bedewi, N. 1996. "Intelligent Multimedia Repositories (IMRs) for Project Estimation
and Management," International Journal of Human-Computer Studies (45:4), pp. 443-482.
Sundar, S. S., Jung, E. H., Waddell, T. F., and Kim, K. J. 2017. "Cheery Companions or Serious Assistants?
Role and Demeanor Congruity as Predictors of Robot Attraction and Use Intentions among Senior
Citizens," International Journal of Human-Computer Studies (97), pp. 88-97.
Sviokla, J. 1990. "An Examination of the Impact of Expert Systems on the Firm: The Case of Xcon," MIS
Quarterly (14:2), pp. 127-140.


Tay, B., Jung, Y., and Park, T. 2014. "When Stereotypes Meet Robots: The Double-Edge Sword of Robot
Gender and Personality in Human-Robot Interaction," Computers in Human Behavior (38), pp. 75-84.
Thimmesch-Gill, Z., Harder, K. A., and Koutstaal, W. 2017. "Perceiving Emotions in Robot Body Language:
Acute Stress Heightens Sensitivity to Negativity While Attenuating Sensitivity to Arousal," Computers
in Human Behavior (76), pp. 59-67.
Tsekouras, D., and Li, T. 2015. "The Dual Role of Perceived Effort in Personalized Recommendations," ECIS
2015 Completed Research Papers, Münster.
Turing, A. M. 1950. "Computing Machinery and Intelligence," Mind (59:236), pp. 433-460.
Valdez, A. C., and Ziefle, M. 2018. "The Users’ Perspective on the Privacy-Utility Trade-Offs in Health
Recommender Systems," International Journal of Human-Computer Studies.
Vinze, A. S. 1992. "Empirical Verification of Effectiveness for a Knowledge-Based System," International
Journal of Man-Machine Studies (37:3), pp. 309-334.
vom Brocke, J., Simons, A., Reimer, K., Niehaves, B., Plattfaut, R., and Cleven, A. 2015. "Standing on the
Shoulders of Giants: Challenges and Recommendations of Literature Search in Information Systems
Research," Communications of the Association for Information Systems (37), pp. 205-224.
Wang, H.-C., and Doong, H.-S. 2010. "Online Customers’ Cognitive Differences and Their Impact on the
Success of Recommendation Agents," Information & Management (47:2), pp. 109-114.
Wang, W., and Benbasat, I. 2016. "Empirical Assessment of Alternative Designs for Enhancing Different
Types of Trusting Beliefs in Online Recommendation Agents," Journal of Management Information
Systems (33:3), pp. 744-775.
Wang, W., Qiu, L., Kim, D., and Benbasat, I. 2016. "Effects of Rational and Social Appeals of Online
Recommendation Agents on Cognition- and Affect-Based Trust," Decision Support Systems (86), pp.
48-60.
Watson, H. J. 2017. "Preparing for the Cognitive Generation of Decision Support," MIS Quarterly
Executive, pp. 153-169.
Webster, J., and Watson, R. T. 2002. "Analyzing the Past to Prepare for the Future: Writing a Literature
Review," MIS Quarterly (26:2), pp. xiii-xxiii.
Will, R. P. 1992. "Individual Differences in the Performance and Use of an Expert System," International
Journal of Man-Machine Studies (37:2), pp. 173-190.
Workman, M. 2005. "Expert Decision Support System Use, Disuse, and Misuse: A Study Using the Theory
of Planned Behavior," Computers in Human Behavior (21:2), pp. 211-231.
Wünderlich, N. V., and Paluch, S. 2017. "A Nice and Friendly Chat with a Bot: User Perceptions of AI-Based
Service Agents," Proceedings of the 38th International Conference on Information Systems (ICIS), Seoul, South Korea.
Xiao, B., and Benbasat, I. 2007. "E-Commerce Product Recommendation Agents: Use, Characteristics, and
Impact," MIS Quarterly (31:1), pp. 137-209.
Xu, D., Benbasat, I., and Cenfetelli, R. T. 2014a. "The Nature and Consequences of Trade-Off Transparency
in the Context of Recommendation Agents," MIS Quarterly (38:2), pp. 379-406.
Xu, D., Huang, W. W., Wang, H., and Heales, J. 2014b. "Enhancing E-Learning Effectiveness Using an
Intelligent Agent-Supported Personalized Virtual Learning Environment: An Empirical Investigation,"
Information & Management (51:4), pp. 430-440.
Ye, L., and Johnson, P. 1995. "The Impact of Explanation Facilities on User Acceptance of Expert Systems
Advice," MIS Quarterly (19:2), pp. 157-172.
Yoon, V. Y., Hostler, R. E., Guo, Z., and Guimaraes, T. 2013. "Assessing the Moderating Effect of Consumer
Product Knowledge and Online Shopping Experience on Using Recommendation Agents for Customer
Loyalty," Decision Support Systems (55:4), pp. 883-893.
Zhang, P., Li, N., Scialdone, M., and Carey, J. 2009. "The Intellectual Advancement of Human-Computer
Interaction Research: A Critical Assessment of the MIS Literature (1990-2008)," AIS Transactions on
Human-Computer Interaction (1:3), pp. 55-107.
Zhang, P., and Li, N. 2004. "An Assessment of Human–Computer Interaction Research in Management
Information Systems: Topics and Methods," Computers in Human Behavior (20:2), pp. 125-147.
Zhang, P., and Li, N. 2005. "The Intellectual Development of Human-Computer Interaction Research: A
Critical Assessment of the MIS Literature (1990-2002)," Journal of the Association for Information
Systems (6:11), pp. 227-291.
Złotowski, J., Yogeeswaran, K., and Bartneck, C. 2017. "Can We Control It? Autonomous Robots Threaten
Human Identity, Uniqueness, Safety, and Resources," International Journal of Human-Computer
Studies (100), pp. 48-54.
