Review
A Literature Review of Human–AI Synergy in Decision Making:
From the Perspective of Affordance Actualization Theory
Ying Bao 1, Wankun Gong 2,* and Kaiwen Yang 3

1 School of Economics and Management, China University of Petroleum, Beijing 102249, China;
baoying@cup.edu.cn
2 School of Government, Beijing Normal University, Beijing 100875, China
3 School of International Studies, University of International Business and Economics, Beijing 100029, China;
202000620114@uibe.edu.cn
* Correspondence: 202231260003@mail.bnu.edu.cn

Abstract: The emergence of artificial-intelligence (AI)-powered information technology, such as deep learning and natural language processing, enables humans to shift their behavioral or working paradigms from human-only to human–AI synergy, especially in the decision-making process. Since AI is multidisciplinary by nature and our understanding of human–AI synergy in decision-making is fragmented, we conducted a literature review to systematically characterize the phenomenon. Adopting the affordance actualization theory, we developed a framework to organize and understand the relationship between AI affordances, the human–AI synergy process, and the outcomes of human–AI synergy. Three themes emerged from the review: the identification of AI affordances in decision-making, human–AI synergy patterns regarding different decision tasks, and outcomes of human–AI synergy in decision-making. For each theme, we provided evidence on the existing research gaps and proposed future research directions. Our findings provide a holistic framework for understanding the human–AI synergy phenomenon in decision-making. This work also offers theoretical contributions and research directions for researchers studying human–AI synergy in decision-making.

Keywords: literature review; human–AI synergy; decision-making; affordance actualization theory

Citation: Bao, Y.; Gong, W.; Yang, K. A Literature Review of Human–AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory. Systems 2023, 11, 442. https://doi.org/10.3390/systems11090442

Academic Editor: Alexander Laszlo

Received: 29 July 2023; Revised: 19 August 2023; Accepted: 22 August 2023; Published: 25 August 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
Over the last decades, artificial intelligence (AI) has penetrated almost all aspects of human life, resulting in a growing trend of human–AI synergy. AI is not merely a set of applications or tools. Instead, it can be broadly defined as “intelligent systems with the ability to think and learn” [1]. It embodies a wide variety of tools, techniques, and algorithms, including robotics, autonomous vehicles, virtual assistants, neural networks, speech/pattern/facial recognition, genetic algorithms, natural language processing, deep learning, and machine learning. AI systems have been widely used to augment or assist humans in the decision-making process by providing recommendations or predictions, which decision makers can choose to either accept or dismiss. AI has been introduced to various domains such as healthcare, finance, human resource management, and criminal justice, and is increasingly being incorporated into organizational management. Some well-known examples of AI applications include IBM’s Watson and Google DeepMind’s AlphaGo. Taking Watson as an example, natural language processing empowers Watson to understand human-composed sentences and assign multiple meanings to concepts and terms. Machine learning affords Watson the ability to learn from previous experiences or interactions, and to formulate solutions or recommendations based on past experiences [2]. In the medical field, Watson can be leveraged to assist doctors in discerning the patterns of cancers based on machine learning techniques. Moreover, AI technologies can provide a variety of other capabilities to enhance decision-making, including performance metrics of the model, as well as the explanation and uncertainty of the prediction [3]. As such, we can expect that humans and AI-enabled machines will play a synergistic role in the decision-making process in the future.
There has been an emergence of studies on human–AI synergy, and the number of
papers on AI adoption in decision-making has also been growing dramatically in the
past five years [4]. Scholars have delved into several aspects, encompassing the roles of humans and AI, users’ perceptions, and the design and configuration of AI systems [5–8].
The existing literature provides valuable insights into the multifaceted aspects of human–
AI synergy. However, the aforementioned research is spread across multiple research
communities and a wide variety of disciplines, resulting in a lack of consensus on the key
findings. As the prevalence of AI in decision-making calls for a deeper and systematic
understanding of human–AI decision-making, we conducted an interdisciplinary literature
review on human–AI synergy in decision-making and systematically characterized the
state of human–AI synergy in decision-making. In the present study, we focus on the
human–AI synergy process in general decision-making contexts and decision tasks across
many fields, such as e-commerce, marketing, daily life, and human resource management.
The remainder of this paper is structured as follows. The paper commences with an
introduction of the overarching framework used to guide this literature review. This is
then followed by a discussion on the scope of the surveyed papers and the methodology
employed to screen and select relevant papers for further study. Subsequently, the paper presents and summarizes the findings, identifying three key themes that emerged from the review process. Then, the study concludes with a research agenda specific to
each theme, accompanied by suggested research opportunities for future investigation
into the synergy between humans and AI in decision-making. This review makes several
significant contributions to the field. It identifies associated technologies and AI affordances
within various decision-making sectors. Moreover, it offers a systematic framework for
the adoption of AI in decision-making, along with the identification of several research
themes from existing studies. Additionally, it provides a holistic understanding of the
subject matter and proposes a research agenda aimed at addressing the identified research
gaps in future studies.

2. Theoretical Background
2.1. Overarching Framework: Affordance Actualization Theory
We developed an overarching theoretical framework and organized the extant papers
by adopting Strong et al.’s affordance actualization theory [9]. Existing research on infor-
mation systems (IS) has attached much importance to IT affordance, which focuses on the
property of the relationship between an actor and an object [10]. The relational property is
essential in information technology (IT) implementation, especially in the case of AI imple-
mentation in the organizational decision-making process, as AI systems and AI artifacts are
inherently complex and their influence should be addressed. IT affordance does not simply
refer to the physical characteristics, but also to the action possibilities empowered by the IT
functionalities. The possibilities and potential embedded in these affordances are realized
in the actualization process. Thus, affordance actualization is defined as “actions taken by actors as they take advantage of one or more affordances through their use of the technology to achieve immediate concrete outcomes in support of organizational goals” [9] (p. 70).
The actualization process comprises actions (e.g., use of technology) and the outcomes of
these actions. The relationship between actions and outcomes is iterative, with the outcome
providing feedback to impact subsequent actions. The immediate outcomes also act as a
mediator between users’ actions and organizations’ ultimate goals. Moreover, there are
also some external actors that will impact the actualization process.
The affordance actualization theory has become popular in the IS literature because it provides a better understanding of how IT affords the reciprocal actions that contribute to achieving final goals [11,12]. Beyond the IS field, this theory has also been adopted in other disciplines, including sociology, computer science, and human–computer interaction [12,13]. We adopted the affordance actualization theory framework
to organize the research on human–AI synergy in organizational decision-making for several reasons. For one thing, the affordance actualization lens has been successfully employed in studying IT use and its impacts [14], which provides a solid foundation for investigating other forms of technology use. For another, the theoretical lens includes the impacts of IT functionalities on both usage and outcomes, which caters to the need to understand the influence of AI systems on both usage and organizational decision-making outcomes in this research context. The overarching framework for affordance actualization is shown in Figure 1.

Figure 1. Overarching framework of affordance actualization (adapted from Strong et al. [9]).

2.2. Human–AI Synergy
The literature on human–AI synergy can be generally divided into the following three streams.
The first stream focuses on the roles of humans and AI and the division of tasks during collaboration. In collaborative decision-making between humans and machines, AI can assume different roles, such as a facilitator, reviewer, expert advisor, or guide [5,15]. With the improvement of AI automation, agents’ roles have evolved from being auxiliary tools to active participants in team decision-making processes [16], leading to a wider array of team collaboration formats. Consequently, scholars have explored various methods of dividing labor and allocating tasks between humans and AI in different collaborative scenarios. For instance, some have considered factors like the respective capabilities of humans and machines, the collaborative environment, the status of team members, and the autonomy and openness of collaborative tasks to appropriately assign responsibilities [17].
The second stream of the literature focuses on the impact of AI systems on users’ experience and perception. Kumar et al. [6] identified the role of AI in practice and investigated the impacts of AI on customers’ experiences. Other scholars have also focused on studying users’ perceptions and acceptance of AI technologies in other fields, including service marketing, learning design, and clinical decision-making [7,18–20]. However, due to variations in the design characteristics of AI agents and individual differences, there are discrepancies in the level of understanding, acceptance, cognitive processing ability, and trust towards AI or AI-generated decisions during human–machine collaboration [21]. These factors subsequently influence the overall outcomes of human–machine collaboration.
The third stream of research concentrates on enhancing the design, configuration, and governance of AI systems to ensure the design quality and trustworthiness of the AI systems, thus facilitating better human–AI interactions. Some studies have delved into optimizing the collaboration effect by discussing the design aspects of human–machine collaboration systems, including AI’s appearance, presentation modes, technology optimization, physical interfaces for human–machine collaboration, and mental interfaces [8]. Moreover, to help users improve their understanding of the inherent risks of AI systems and to mitigate these risks, organizations and scholars have created several tools. For example, the European Commission created the Assessment List for Trustworthy AI (ALTAI) [22,23], which provides self-assessment checklist questions and a set of steps that organizations are encouraged to complete. Each question presents multiple choices to verify the implementation of the guidelines. Scholars have also proposed general processes with concrete steps that can be applied to assess whether an AI system is trustworthy [24].

3. Materials and Methods


In this section, we identify the scope of this literature review in detail and describe the
screening criteria of the papers. Our review specifically focused on empirical studies on
human–AI synergy in organizational decision-making, where the goal was to understand,
evaluate, and improve the experience of human–AI synergy for decision-making tasks.
Therefore, we specified the following scope for our study. First, a selected paper should report an evaluative empirical study on human–AI synergy; we thus excluded studies that merely focus on the design or configuration of the relevant AI systems, as these tend to be predominantly quantitative studies. Second, the paper should target a specific decision-making task; we thus excluded studies involving tasks with other purposes, such as gaming or debugging.
Regarding the search strategy, we followed the three-stage approach by Webster and
Watson [25], as presented in Figure 2. Initially, we selected four digital libraries, including
Web of Science, Scopus, Business Source Complete, and ProQuest. These databases encom-
pass academic studies in the fields of information systems, technology, science, medicine,
social sciences, humanities, business, and interdisciplinary research on behavioral and
social sciences. Our search terms were divided into two categories. The first category
included terms related to human–AI synergy and its associated terms, such as “human-
AI/robot”, “human-AI/robot synergy”, “human-AI/robot interaction”, “human-AI/robot
teaming”, and “human-AI/robot collaboration”. The second category consisted of terms
related to the process, effects or impacts of human–AI synergy, such as “decision-making”,
“productivity”, “efficiency”, “effectiveness”, and “performance”.
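
To make the search strategy concrete, the sketch below assembles a two-category Boolean query of the kind described above. The term lists follow our two categories, but the exact query syntax varies by database, so this is a hypothetical illustration rather than the literal strings submitted to each library.

```python
# Hypothetical reconstruction of the two-category search string; the exact
# syntax accepted by Web of Science, Scopus, etc. differs by database.
synergy_terms = [
    '"human-AI"', '"human-robot"',
    '"human-AI synergy"', '"human-robot synergy"',
    '"human-AI interaction"', '"human-robot interaction"',
    '"human-AI teaming"', '"human-robot teaming"',
    '"human-AI collaboration"', '"human-robot collaboration"',
]
process_terms = ['"decision-making"', '"productivity"', '"efficiency"',
                 '"effectiveness"', '"performance"']

# Combine the categories as (synergy term) AND (process/outcome term).
query = f"({' OR '.join(synergy_terms)}) AND ({' OR '.join(process_terms)})"
print(query)
```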
Inclusion and exclusion criteria were also established to eliminate studies that do not address the research questions. Regarding the initial inclusion criteria, titles and abstracts
in English published in peer-reviewed journals and conferences from 2013 to 2023 were
searched. The literature was searched from March to August 2023, during which we also
included newly published articles. We focused on research published in the last 10 years
due to the substantial growth in the field of human–AI decision-making during this period.
To maintain a comprehensive scope, we did not restrict the concept of AI, encompassing
autonomous agents, algorithms, robots, and automated systems in the reviewing process.
The initial search process identified a total of 2161 journal articles and conference papers
from all disciplines. A total of 998 papers were removed as they were duplicated papers or
the full texts were inaccessible. Each remaining paper’s title and abstract was meticulously
reviewed, with consultation of the full text when necessary, to determine its eligibility for
inclusion. Articles were excluded if they were merely technical papers that focus on design
issues or engineering related to the technologies that we focus on. We further applied
additional exclusion criteria, such as excluding non-empirical articles and those focusing
solely on AI or measurement development. Following the filtering process, we identified 40 papers that qualified for the final review and coding. Additionally, we conducted
a forward and backward searching process, which yielded a further 7 articles meeting
our inclusion criteria. After a final iterative evaluation, using the specified inclusion and
exclusion criteria, a total of 47 papers were selected for the final review and coding. The
specific review protocol is presented in Figure 2.
Figure 2. The review protocol.

In our study, we employed a rigorous coding process based on our overarching theoretical framework. The main research findings of each source were used as the primary principle for in-depth coding. Papers that provided findings related to the research themes were coded. By thoroughly examining these papers, we were able to uncover valuable information regarding human–AI synergy in decision-making. We then mapped the papers from all disciplines to our affordance actualization framework, which enabled us to develop a holistic understanding of what has been studied. The profile of the studies in our sample pool is presented in Table 1. This table provides an overview of the decision tasks, types of AI, AI systems, and organizational outcomes discussed in these studies.

Table 1. Profile of studies.

Author | Decision Tasks | Types of AI and AI Systems | Organizational Outcomes
[1] | hiring and firing employees | algorithm-enabled software system | humans’ acceptance of machine participation
[7] | service encounter | intelligent digital voice assistant | users’ motivations to adopt AI
[15] | human resource management | algorithm-based AI system | human reaction to an AI supervisor
[18] | welcoming visitors and employees and offering directions to specific locations on a campus | humanoid robots | trust, intention to use, and enjoyment
[19] | clinical decision-making on rehabilitation assessment | AI-based decision-support system | usefulness and attitudes toward the system
[20] | socially shared regulation (SSRL) in learning | intelligent agent | learning regulation improvement
[26] | Scout Exploration Game | explainable artificial intelligence (XAI) system | aligned understanding between humans and AI
[27] | reconnaissance missions to gather intelligence in a foreign town | algorithm-based robots | transparency, trust, mission success, and team performance
[28] | table-clearing task | autonomous-system-based robot assistants | human–robot team performance and trust evolution
[29] | real-time human–robot cooking task | algorithm-based XAI | collaboration performance and user perception of the robot
[30] | make a cup of coffee or clean the bathroom | a synthetic robot maid | more explainable robot behavior, team performance
[31] | a human–robot team preparing meals in a kitchen | autonomous-system-based robot assistants | human–robot collaboration performance
[32] | turn a plate of food into a low-carb meal | recommender system (XAI) | team performance
[33] | nutrition-related decision-making task | recommender system (XAI) | objective performance, trust, preference, mental demand, and understanding
[34] | deception-detection task | machine learning models | human performance and human agency
[35] | deception-detection task | machine learning models | human performance
[36] | multi-label image classification | machine learning algorithms | outcome prediction accuracy and confidence
[37] | three high-stakes classification tasks | machine learning algorithm | performance/compatibility tradeoff
[38] | recruitment and staffing, e-commerce, banking | AI-based platform/assistant | degree of fairness, transparent feedback, less-biased decisions
[39] | managerial decision-making such as hiring and firing employees | algorithm-based AI system | acceptance of the decisions
[40] | opening medicine bottles | autonomous-system-based robot | human trust in the robot
[41] | naming and distinguishing species | machine learning classifier | end users’ appropriate trust
[42] | cooking-related tasks in a kitchen | explainable artificial intelligence (XAI) | mental model, task performance, and reliance on the system
[43] | visual estimation task, song forecasting task, romantic attraction forecasting task | algorithm-based AI system | algorithm appreciation (preference between algorithmic and human judgment)
[44] | water pipe failure prediction | machine-learning (ML)-based decision-support systems | user confidence in decision-making
[45] | quality control in a drinking-glass-making factory | automated decision-support systems | human trust, system performance, human perception and decisions
[46] | multi-UxV (unmanned vehicle) planning task | intelligent agent | performance, trust, and perceived usability
[47] | student admission | algorithm-based AI system | trust in algorithmic decisions
[48] | collaborative game | humanoid robot | intention reading and trusting capabilities
[49] | automating the loan underwriting process | explainable AI decision-support system | trade-off between prediction accuracy and explainability
[50] | control of unmanned vehicles | machine-learning-based automated agents | trust in automation, human–systems integration
[51] | diagnosis of pneumonia on chest radiographs | deep-learning model architectures | diagnosis performance
[52] | visual navigation | autonomous robot | human–robot trust and efficiency
[53] | 26 tasks, including predicting stock market outcomes, predicting the weather, analyzing data, and giving directions | algorithms | trust in algorithms
[54] | computer game | humanoid robot | trust and distrust over time
[55] | restaurant and food services | humanoid robot | user discomfort, compensatory consumption
[56] | interacting with preschool-aged children | humanoid robot | acceptance of socially assistive robots (SARs) by preschool and primary school teachers
[57] | inspection task to sort laundered squares of cloth | humanoid robot | people’s rapport-building and rapport-hindering behaviors
[58] | cognitive assessment | humanoid robot | cognitive performance and workload
[59] | interaction on the academy enrolment process | text chatbot | individuals’ psychophysiological indices
[60] | line scenario-like platform | machine-learning-based platform | team performance
[61] | multiplayer games | AI algorithms | human perceptions and expectations of AI teammates
[62] | individual and collaborative learning | AI-based tutoring systems | control, trust, responsibility, efficiency, and accuracy
[63] | recidivism risk assessment | algorithm-based AI system | accuracy, reliance on AI, understanding of AI, decision fairness, willingness to take accountability
[64] | AI-assisted house-price prediction | algorithm-based AI model | people’s integration of the model outputs with information, prediction accuracy
[65] | symptom diagnosis | algorithm-based intelligent online symptom checkers | diagnosis transparency and explainability
[66] | clinical concept identification and classification | natural-language-processing-based clinical annotation system | accuracy and efficiency

4. Findings
In this section, we summarize the findings based on our coding and review of the papers. We synthesized the 47 papers based on the affordance actualization framework. Through iterative analysis, we identified three prominent themes that significantly contribute to our understanding of human–AI synergy in decision-making contexts. These themes include AI affordances in decision-making, patterns of human–AI synergy across different decision tasks, and outcomes of human–AI synergy in collaborative decision-making. The overall results of this high-level synthesis of human–AI synergy in decision-making are visually depicted in Figure 3.

Figure 3. High-level synthesis of research on human–AI synergy in decision-making.
4.1. Theme 1: Identification of AI Affordances in Decision-Making
Theme 1 attempts to describe and categorize the AI affordances embedded in the AI systems in this review and the actions they enable during the decision-making process. Specifically, the AI techniques used in decision-making may encompass a range of methods such as natural language processing, artificial neural networks, regression-based models, genetic algorithms, fuzzy logic, cluster analysis, and humanoid robots. Although the literature acknowledges the presence of these functionalities and techniques in AI, the identification of affordances and functionalities within these studies is scarce. Given the ubiquitousness of AI in human life, it is worth untangling the functionalities and affordances of AI systems. By understanding AI’s functionalities, we can further identify key affordances and how they are delivered in the human–AI decision-making process. After coding the literature, we identified four general categories of AI affordances in the decision-making process.
4.1.1. Automated Information Collecting and Updating Affordance
One of the significant affordances of AI, as highlighted in the literature we surveyed, is its capacity for automated information collection and updating across various fields. AI devices offer novel approaches to collecting data and information via online platforms [38]. For example, on social media platforms, AI systems automatically capture news information on the network, classify and process the information, and realize accurate news releases. Similarly, in the e-commerce context, AI systems automatically capture commodity or customer information and classify the products or customers, facilitating accurate commodity display and promotion. The automated collection of heterogeneous data can improve the accuracy of data analysis. In the clinical diagnosis context, symptom checkers collect the patient’s information and perform a diagnosis accordingly [65].

To illustrate further, in the context of job applications, machine learning models and algorithms provide companies with the opportunity to optimize the scope of job
applicants [67]. Additionally, organizations can collect applicants’ online behavioral information, including user clicks and time spent on a specific job listing, through
collaborative filtering. These data also enable organizations to understand applicants’
preferences, which, in turn, aids in offering tailored job listing recommendations.
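
To illustrate this affordance in miniature, the following sketch implements item-based collaborative filtering over applicant click data of the kind described above. The click matrix and the recommend helper are hypothetical stand-ins for the proprietary systems discussed in the literature, assuming clicks (or dwell time) as the behavioral signal.

```python
import numpy as np

# Hypothetical click data: rows = applicants, columns = job listings,
# entries = number of clicks (a dwell-time matrix would work the same way).
clicks = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 4, 2],
    [0, 1, 3, 1],
], dtype=float)

# Cosine similarity between job listings, based on who clicked them.
norms = np.linalg.norm(clicks, axis=0, keepdims=True)
item_sim = (clicks.T @ clicks) / (norms.T @ norms + 1e-9)

def recommend(applicant: int, top_k: int = 2):
    """Score unseen listings by similarity to listings the applicant clicked."""
    seen = clicks[applicant] > 0
    scores = item_sim @ clicks[applicant]
    scores[seen] = -np.inf          # do not re-recommend already-seen listings
    return np.argsort(scores)[::-1][:top_k]

print(recommend(applicant=0))       # listings similar to applicant 0's clicks
```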

4.1.2. Information Processing and Analyzing Affordance


After collecting data from multiple sources, AI gives organizations the ability to develop new approaches for information processing and analysis [38]. Taking
human resource management as an example, natural language processing (NLP) algorithms can help with the interpretation of text in different formats and reduce staff members’ work stress.
NLP-powered systems can efficiently analyze resumes and extract relevant information
from job applications, freeing up HR professionals to focus on more strategic tasks [67].
Also, after training the AI with specific competence-related questions, organizations can
use robots to conduct automatic interviews with the job candidates. The robots can also be
used to record, transcribe, and analyze the candidates’ responses during the interviews.
AI-powered interviewers ensure a standardized and unbiased evaluation of candidates,
leading to a more efficient and fair hiring process. In an online shopping context, AI helps
with the analysis of customer data, such as body measurements, customer preferences and
feedback, and style trends. This data-driven approach allows organizations to personalize product recommendations, offer tailored styling suggestions, and enhance overall
customer satisfaction [68]. AI’s ability to cater to individual preferences creates a more
engaging and enjoyable shopping journey for customers, boosting customer loyalty and
retention. As AI continues to evolve and advance, organizations can harness its potential
to drive innovation, streamline processes, and deliver more personalized and efficient ser-
vices to their users. Embracing AI technology opens up a world of possibilities, propelling
organizations towards a data-driven and customer-centric future. Machine learning is also
one of the typical representatives of this affordance. Through the utilization of information
acquired from prior experiences, machine-learning-based systems have the capability to
train on data and effectively address analogous problems [69].
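
As a minimal, assumption-laden sketch of this processing-and-analyzing affordance (not the systems used in the reviewed studies), TF-IDF similarity can rank resumes against a job description; the texts below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy inputs standing in for real application documents.
job_description = "data analyst with Python, SQL, and statistics experience"
resumes = [
    "five years of Python and SQL reporting, strong statistics background",
    "front-end developer, JavaScript and CSS, some design work",
    "statistician familiar with Python for data analysis",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + resumes)

# Similarity of each resume to the job description (row 0 of the matrix).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. resume {idx} (score={scores[idx]:.2f})")
```

In practice, an HR pipeline would combine such a ranker with entity extraction and human review; the point here is only the shape of the affordance, not a production design.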

4.1.3. Predicting/Forecasting and Decision-Making-Assistance Affordance


Beyond the information collecting and analyzing affordances, algorithms and historical information provide AI with the ability to assist humans in predicting and
making decisions. AI applications in decision-making can be broadly categorized into
expert systems, optimization techniques, and other methods centered around simulation
and modeling [69]. This affordance is particularly helpful for tasks in need of predicting and
forecasting functions, such as in the supply chain industry, medical diagnosis, product recommendation, house-price prediction, and so on [64,69,70]. For instance, in an online shopping
context, with the combination of human expertise and the efficiency and insight of AI,
human stylists can provide customers with possible recommendations [71]. Another
remarkable example of successful human–AI synergy is found in cancer detection. AI
can assist humans with the analysis of lymph-node-cell images, outperforming both AI-
only and human-only decisions [72]. Unlike human decisions which involve intuition or
subjective reasoning, AI enables organizations to develop unbiased approaches for data
analysis with objective reasoning, thus yielding less-biased decisions.
Furthermore, AI can handle hundreds of thousands of transactions or datasets per
second and process information at speeds far beyond human capacity. This proficiency in
data analysis allows AI to identify patterns, trends, and correlations that may be challenging
for humans to detect [19]. In industries like supply chain management, AI’s predictive
abilities help organizations optimize inventory levels, anticipate demand fluctuations,
and make informed decisions, thereby leading to cost savings and enhanced operational
efficiency and prediction accuracy [64,69].
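
The accept-or-dismiss pattern underlying this affordance can be sketched as follows: a model issues a recommendation together with a confidence score, and low-confidence cases are deferred to the human decision maker. The synthetic data, features, and the 0.8 threshold are our assumptions for illustration, not parameters reported in the reviewed studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data, e.g., loan-applicant features and past outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def ai_assisted_decision(x, threshold=0.8):
    """Return the AI recommendation, or defer when confidence is low."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    confidence = proba.max()
    if confidence >= threshold:
        return int(proba.argmax()), confidence, "AI recommendation"
    return None, confidence, "defer to human decision maker"

print(ai_assisted_decision(np.array([1.2, 0.3, -0.4])))
```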

4.1.4. Explanations Providing Affordance


Interpretable AI, or explainable AI (XAI), enables efficient and transparent interactions between AI and multiple stakeholders by providing information about how decisions and generated content are made. The effectiveness of the human–AI synergy depends not only
on the efficiency of the human team members, but also on how well the human members
interact with their robotic “colleagues” [48]. Especially when AI acts as a “leader”, the
explainability of its decisions directly determines the trust and decision-making of human
team members [49]. The level of cognition alignment between human team members and
robot “colleagues” plays a pivotal role in determining the understanding consistency of
team values, goals, and tasks, as well as the overall value equivalence between humans and
machines [26]. In other words, effective collaboration depends on the ability of intelligent
machines and team members to comprehend each other’s value needs and communicate
their own solutions and needs with clarity and rationality. Successful collaboration hinges
on mutual understanding and the effective communication of value requirements between
human team members and AI systems [30], thus improving the collaboration performance
and user perception of AI systems [29]. When both parties can effectively exchange and
comprehend their respective value needs and rationalize their solutions, the collaboration
becomes more seamless and productive.
The types and criteria of explanations vary greatly [36,73]. Regarding the types of
explanations, studies have proposed that both inductive and deductive reasoning can
provide humans with general rules or conclusions for the decision [33]. Other studies
also designed different explanations (e.g., confidence-level explanation and observation
explanation; symbolic explanation, haptic explanation, and text explanation; example-
based and feature-based explanations) to investigate the impact of explanation-providing
affordances on team performance [27,32,34,40]. Lim et al. also tested users’ understanding
of the AI systems and decisions based on different types of explanations, including What,
Why, Why Not, What If, and How To [74]. The above studies have verified the effectiveness
of providing explanations for AI systems. However, other scholars also proposed that too
much communication between humans and AI may lead to cognitive load, thus degrading
performance. To address this challenge, existing research has also developed a framework
to decide the necessity, timing, and the proper contents to communicate during human–AI
synergy [31].
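
As one deliberately simple instance of a feature-based explanation of the kind tested in these studies, a model’s global feature importances can be reported alongside its recommendation. The dataset and model below are assumptions chosen for reproducibility; the reviewed studies used their own systems and richer explanation formats.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Assumed dataset/model for illustration of a feature-based explanation.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Rank features by the model's impurity-based importance scores.
ranked = sorted(zip(model.feature_importances_, data.feature_names), reverse=True)
print("Top factors behind the model's recommendations:")
for importance, name in ranked[:5]:
    print(f"  {name}: {importance:.3f}")
```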

4.2. Theme 2: Human–AI Synergy Patterns Regarding Different Decision Tasks


In the human–AI synergy decision-making process, the collaboration between humans and algorithms can be considered an organization, that is, a multi-agent, goal-oriented system [75]. The goal of the organization is to produce a final decision, and its
responsibilities include task division, task allocation, and integration of effort. According to
some AI pioneers, the combination of computers and humans can outperform either of them
alone. In the organizational decision-making process, humans and AI also complement
each other [2]. Based on the prevailing state of technology, the relationship between humans
and AI also differs during the human–AI synergy process, in which humans and AI can take on different roles in a multi-agent organization [34].
Studies on human–AI synergy in decision-making span a wide range of fields, in-
cluding human–AI team collaboration [16,76], human–AI relationships, task allocation
between humans and AI [77,78], human trust in AI [21,50], shared mental models [79], and
practice in the human–AI decision-making process. The synergy between humans and AI
has also been demonstrated to bring benefits in many areas, such as medical diagnoses,
automated vehicles, deception detection, bail decisions, learning technology design, and so
on [20,62,65]. More specifically, the decision-making process is a complex cognitive process
and is always susceptible to uncertainty [44]. Due to AI’s automated information collecting
and updating affordance, the machine has the ability to assess uncertainties and convey key
messages to the human, thereby saving the human decision-makers’ cognitive resources [8].
However, owing to the inherent complexity of some tasks, full automation is not always desirable. Therefore, when considering different decision tasks, we can examine
human–AI synergy patterns from three main perspectives: AI-centered patterns, human-
centered patterns, and human–AI synergy-centered patterns. Each of these perspectives
offers valuable insights into how humans and AI can collaborate effectively to achieve
optimal decision-making outcomes.

4.2.1. AI-Centered Patterns


Considering the different levels of uncertainty, variability, and complexity of the
decision tasks that humans and AI address, we can characterize the research opportunities
for human–AI synergy in decision-making by the level of task uncertainty involved [2]. When
the uncertainty of a task is low, AI can be trained properly following the decision rules of
human decision-makers with sufficient data and act as the supporting role [47]. In this case,
studies are mainly algorithm-centered or design-centered, which focuses on the proper
utilization of the machines’ computing power [51]. The algorithms involved include natural
language processing, machine learning, inverse reinforcement learning, artificial neural
networks, and so on. For example, natural language processing algorithms enable the
possibility of communication between customers and intelligent customer service agents
in e-commerce. Also, some studies focus on the design of AI systems or robots that cater
to human needs. In this phase, communication between humans and AI is also unilateral.
Automated decision-making systems can replace human decisions in some cases. However, in most scenarios, humans act as the final decision makers, guided by the systems’ suggestions [47].

4.2.2. Human-Centered Patterns


On the other hand, with the increasing level of task uncertainty, studies put more
emphasis on human-centered algorithms or system design [35]. To address more complex
tasks, algorithms are more sophisticated, which may lead to a lack of understanding and
“black box” issues [80,81]. The lack of humans’ understanding calls for higher transparency
of the algorithm or AI system [26]. In detail, the intelligent agent should be able to explain
its decision-making process and reasons in a trustable and understandable manner to
humans. In this case, studies focus on the design of transparent AI or explainable artificial
intelligence. Specifically, humans and AI should share the same mental model. Studies also
emphasized the design of XAI to build the shared mental model.
Beyond the “black box” issue of AI, the “cognitive bias” of humans remains another issue that needs to be addressed in human-centered patterns. Specifically, human decision-
makers are vulnerable to bias and anchoring effects [82], and their judgements are not
always consistent with the Bayesian rule [83]. Such kinds of biases have also been widely
found in empirical studies, such as in medical diagnoses, investment, and operations
management [84,85]. Additionally, the performance of the decision in the human-centered
pattern also relies largely on the decision-makers’ ability, emotion and pressure [86], and
their framing of different decision tasks. In general, the “black box” of AI and the cognitive
limitations of humans create barriers to the final decision and synergy performance [8].

4.2.3. Human–AI Synergy-Centered Patterns


Further considering tasks with a high uncertainty, studies are mostly human–AI
synergy-centered and put more emphasis on the collaboration between the two agents [37].
Humans possess certain advantages over AI systems, such as intuition, experience-based
decision-making, and the ability to transfer learning from one task to another. However,
AI can handle complex tasks through its automated information collecting and updating
affordances. For example, AI and humans can collaborate on high-uncertainty tasks in many fields, including behavioral science, computer science, and cognitive science. It is
worth noting that in some critical tasks, such as recidivism prediction, medical diagnoses,
and other ethics-related tasks, humans are the final decision-makers [19,63]. Due to legal

and ethical concerns, full automation of these tasks is undesirable and presents challenges
for both humans and AI systems [34].
In summary, the combination of human expertise with AI’s capabilities leads to pow-
erful and synergetic decision-making processes. From product recommendations in online
shopping to cancer detection in the medical field, AI’s assistance enhances decision-making
accuracy and efficiency, while reducing bias and allowing organizations to harness data-
driven insights for better outcomes [37]. Embracing AI as a collaborative partner empowers
humans to tackle complex challenges more effectively and paves the way for a promising
future with human–AI synergy [87]. In particular, human–machine collaboration has been
brought to a new level with the announcement of the next industrial revolution, named
Industry 5.0 [88]. The revolution has shifted manufacturing from machine-centered systems
to human–AI synergy-centered systems. This means that humans and machines can
collaborate to make processes more efficient by integrating human creativity and the power
of intelligent systems [89].

4.3. Theme 3: Outcomes of Human–AI Synergy in Decision-Making


The outcomes of AI affordance actualization are the direct results that users can achieve by adopting AI, including general, psychological, and cognitive outcomes induced by human–AI synergy. The affordance actualization theory also suggests that these outcomes are essential mechanisms that help the human–AI team with their final decision-making. The affordances identified in Theme 1 and the different synergy patterns in Theme 2 yield several findings, contributing to the conclusions on the outcomes. By iterating between the existing literature and affordance actualization theory, we analyzed the outcomes of human–AI synergy in decision-making and arrived at four categories: the general performance of human–AI synergy in decision-making; trust in human–AI synergy in decision-making; transparency and explainability in human–AI synergy; and cognitive perspectives of human–AI synergy.

4.3.1. Outcome 1: General Performance of Human–AI Synergy in Decision-Making


The implementation of AI in the decision-making process has the potential to increase
the general performance of decision-making. Regarding the decision-making assisting
affordance, AI increases the interaction and communication between humans and other
key actors in an organization, such as employees and customers in the commerce setting. The automated information collecting and updating affordance of AI
can also reduce bias in the data collection process and help to develop unbiased approaches
in the data analysis process. Unlike human decisions, which may be influenced by intuition
or subjective reasoning, AI enables organizations to develop unbiased approaches for data
analysis. By eliminating human biases, AI-driven decisions tend to be more objective and
impartial, leading to fairer outcomes [90]. Moreover, when faced with tasks of “unreadable”
or incoherent information, humans tend to “fill in” the information with subjective inputs,
which may be influenced by inherent cognitive biases [91]. However, AI-based systems
will produce a visualization based on relevant data, which is a representation of the data
itself, rather than a cognitive perception [92]. Additionally, information collecting and
processing affordances of AI can also enlarge the data pool that organizations can reach,
collect, and analyze. For instance, in clinical decisions on rehabilitation assessment, AI-
based decision-support-systems can automatically assess and monitor users’ data and
exercise information using sensors. Then, machine learning algorithms can help with the
quantitative analysis [19]. This helps to increase the effectiveness and efficiency of the
decision-making process, which subsequently yields effective decisions.

4.3.2. Outcome 2: Trust in Human–AI Synergy in Decision-Making


Among the many key factors perceived by individuals in the above human–AI syn-
ergy background, some scholars mainly focused on the issue of perceived trust [28,45].
Trust in AI and automation differs from the trust between humans in that AI lacks con-

sciousness [28,93,94]. In detail, Xu and Dudek [52] investigated the robot’s trustworthiness
using an online trust inference model. Heerink et al. discussed the influence of trust in
the elderly’s healthcare context on their willingness to use intelligent machines to make
decisions [95]. Some scholars have discussed the influence of human trust in service robots
on their intention to use under the background of service marketing [18,96]. Davenport
proposed that the prerequisite for AI to be recognized in enterprises and society is complete
information disclosure of AI technology and endorsement by external organizations or insti-
tutions [97]. Castelo et al. explored the impact of objectivity and other factors on users’ trust
and usage of algorithms [53]. Seeber et al. proposed that in the context of the application of
AI in teamwork, the research on trust can take into account different dimensions, such as
trust in robot teammates, trust in suggestions automatically recommended by machines,
and trust in machine algorithms [16]. Trust is not absolutely static, but a dynamic process.
Through a longitudinal study, Jessup et al. evaluated the changes in human trust/distrust towards robot partners in human–AI synergy, and traced how trust levels change over time [54]. Yang et al. investigated the impact of example-based explanations on users’ trust
over time [41]. It can be seen from the above research that trust in human–AI synergy has attracted wide attention in various application scenarios of AI and plays an essential role in the decision-making process.

4.3.3. Outcome 3: Transparency and Explainability between Human and AI Synergy


AI can participate in team collaboration as different roles (leader/participant) and
improve the efficiency of team performance to some extent [15,92,98]. However, in the
practice of human–AI team collaboration, the lack of explainability of AI algorithms or AI systems remains to be addressed, which is mainly caused by the “black box” nature of AI technologies and algorithms. Adding transparency to AI can improve human trust and
performance in a team [27]. Humans lack a sense of understanding, trust, or acceptance
of AI and its generated content, which further hinders the application and promotion of
AI in team cooperation. Specifically, the interpretability of AI systems means ensuring
efficient and transparent interactions between AI and multiple stakeholders by providing
information on how decisions and generated contents are made. Different objects and
stakeholders, such as system developers, domain experts, and system end-users, also have
different requirements for the interpretability of AI systems. To address the explainability
problem during human–AI synergy, studies have investigated several aspects: (1) the
clarification of what to explain (content) and when to explain (timing) that the AI should
constitute and (2) the algorithms and systems designed to meet the explainable standards to
make the decisions/recommendations explainable. By enhancing procedural transparency and explainability, AI-based systems can make decisions less controversial and help mitigate
agency problems [92]. Explanations have also been identified to be able to address users’
bias during human–AI synergy [42].

4.3.4. Outcome 4: Cognitive Perspectives of Human–AI Synergy


Studies have put much emphasis on humans’ cognitive perception of human–AI syn-
ergy, recognizing its potential impact on the final decision-making process. For example,
individuals’ perception after interacting with AI in different scenarios mainly includes the
perceived interactive comfort [99]. A positive experience of interactive comfort fosters trust
and confidence in AI’s abilities, promoting its integration into decision-making processes.
Conversely, interactive discomfort may lead to hesitation and resistance towards AI adop-
tion. In certain contexts, AI interactions may be perceived as entertaining or enjoyable. Such
positive experiences can enhance users’ engagement with AI technologies, making them
more willing to explore and utilize AI-driven decision-making tools [56,95]. Interactions
with AI systems can also evoke feelings of social belonging, as AI technologies are designed
to emulate human-like qualities [99]. Perceptions of AI’s adaptability and social presence
contribute to the human–AI synergy experience [56]. Other studies also investigated hu-
mans’ cognitive perception from the perspective of relationship building [18,57], social

presence [96,99], perceived usefulness, and ease of use of AI/robots/decisions [100]. In addition to the discussion of the above perceptions, scholars also focus on the behavioral
outcomes and responses of humans, such as users’ emotional and physiological reactions
to AI [58], intention to use AI, acceptance of AI-recommended decisions, actual use situa-
tions [56,59], and willingness to cooperate with robot colleagues in the future [101]. Despite
the positive perception of AI, negative perceptions of the automation have also been dis-
cussed, such as individuals’ perceived interactive discomfort [53,55], over-trust in or over-reliance on AI [66], and humans’ resistance to unethical instructions from their AI supervisors [15].

5. Discussion
The findings of this review provide a holistic understanding of research on human–
AI synergy in decision-making, and develop a systematic framework to guide future
investigations. Drawing upon the affordance actualization theory [9], our syntheses yield
three key themes: (1) the identification of AI affordances in decision-making, (2) human–AI
synergy patterns regarding different decision tasks, and (3) the outcomes of human–AI
synergy. The findings provide evidence for the usefulness of AI and its positive effects on
human–AI synergy processes in decision-making, ultimately leading to improved outcomes
of this synergy. Our synthesis also reveals promising opportunities for future research
regarding each theme, which are outlined as follows.
Regarding theme 1, the identified AI affordances afford users the ability to complete
decision-making tasks more efficiently. Ideally, AI can also enable users to complete new
tasks that may be difficult to accomplish through purely human efforts. It should be noted
that some of the identified affordances, such as automated information collecting and
updating, are foundational and essential, while others, like predicting/forecasting and
decision-making assistance, provide added value with complementary functionalities. It
is essential to recognize that as AI technology advances, the functions and delivery of
different AI affordances will undergo significant transformations. However, our current
understanding of how the new functionalities can improve the delivery of AI affordances
and decision-making efficacy is still limited. Accordingly, we propose the following areas
and opportunities for future research. For one, there is a need to delve deeper into the
delivery efficiency of the four AI affordances that we identified during the decision-making
process. Understanding how AI functions are implemented and delivered to users can help
optimize the integration of AI tools into decision-making workflows, ensuring that users
can fully leverage AI’s potential to enhance efficiency and decision outcomes. Against
this background, policy makers and scholars should identify suitable intervention tools for the implementation of AI systems in industry. Specifically, future research can consider the
employment of the ALTAI tool and other similar tools in practice. For another, despite
the usefulness and positive effects of AI technology adoption in decision-making, there
are also some conditions under which AI adoption may yield negative consequences. For
instance, overreliance on AI or biases embedded in AI algorithms may result in suboptimal
decisions or ethical concerns [92]. Future research should, therefore, focus on investigating
both the positive and negative effects of AI affordances in decision-making processes [56].
By understanding the potential drawbacks and challenges associated with AI adoption,
decision-makers can proactively address these issues and harness AI’s benefits more effectively.
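As a concrete illustration of the decision-making assistance affordance discussed under this theme, the sketch below shows the canonical accept-or-dismiss pattern, in which the AI proposes a recommendation with a confidence score and the human retains final authority. All names, values, and the confidence threshold are hypothetical assumptions for illustration only.

```python
from typing import Callable

def assisted_decision(case: dict,
                      model_predict: Callable[[dict], tuple[str, float]],
                      human_review: Callable[[dict, str, float], str]) -> str:
    """Minimal sketch of the decision-making assistance affordance:
    the AI proposes, the human disposes. Names are illustrative only."""
    recommendation, confidence = model_predict(case)        # AI supplies prediction + confidence
    return human_review(case, recommendation, confidence)   # human accepts, overrides, or defers

# Hypothetical usage: a reviewer who only auto-accepts high-confidence advice
decision = assisted_decision(
    {"applicant_income": 52_000},
    model_predict=lambda case: ("approve", 0.62),
    human_review=lambda case, rec, conf: rec if conf >= 0.8 else "manual_review",
)
print(decision)  # -> manual_review: low confidence keeps the human in the loop
```

The design choice of routing low-confidence cases back to the human reflects the complementary division of labor emphasized throughout the reviewed studies.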
Regarding theme 2, patterns of human–AI synergy were summarized regarding the
uncertainty level of different decision tasks. Several challenges and future research oppor-
tunities have also emerged. Firstly, it is important to recognize that different decision tasks
may be better-suited for specific types of synergy between humans and AI. Understanding
these task-specific synergies can lead to more effective and efficient decision-making pro-
cesses, while, from another point, different patterns of human–AI synergy may also affect
the allocation of tasks between humans and AI, thus impacting the overall performance
and outcomes of human–AI synergy in decision-making. Therefore, future research can
focus on the dynamic and flexible development of human–AI team organization that can
adapt to different decision tasks and uncertainty levels. Secondly, one of the primary goals
of human–AI synergy is to leverage AI’s capabilities to overcome human limitations in
decision-making. We have reviewed the characteristics and disadvantages of different
synergy patterns, especially the human-centered pattern. To address these problems, future
research should focus on exploring effective mechanisms for correcting and mitigating
human cognitive biases and limitations. By understanding how AI can complement human
decision-making processes, researchers can develop strategies to improve the efficiency
and accuracy of decision-making with AI assistance.
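One way to make uncertainty-contingent task allocation concrete is a simple routing rule that assigns each decision task to an AI-led, human-led, or joint synergy pattern. The sketch below is a hedged illustration; the thresholds and labels are our own assumptions rather than values reported in the reviewed literature.

```python
def allocate_task(ai_confidence: float, task_uncertainty: float) -> str:
    """Hypothetical routing rule: routine, low-uncertainty tasks go to AI;
    high-uncertainty tasks stay with the human; the middle band is shared.
    Thresholds are illustrative, not drawn from the reviewed studies."""
    if task_uncertainty < 0.3 and ai_confidence > 0.9:
        return "ai_autonomous"   # AI decides; human audits a sample of cases
    if task_uncertainty > 0.7:
        return "human_led"       # human decides; AI supplies supporting evidence
    return "joint_review"        # AI recommends; human approves each case

for uncertainty, confidence in [(0.1, 0.95), (0.5, 0.80), (0.9, 0.99)]:
    print(uncertainty, confidence, "->", allocate_task(confidence, uncertainty))
# 0.1 0.95 -> ai_autonomous; 0.5 0.8 -> joint_review; 0.9 0.99 -> human_led
```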
Regarding theme 3, we identified four general types of human–AI synergy outcomes that may facilitate the ultimate goal of collaborative decision-making. For example, the explanations provided by AI technology offer users new opportunities to generate explainable or transparent decisions, which are beneficial for successful decision-making. However, the role of these outcomes has only been studied in general terms, without rigorous investigation into their effects on the decision-making process. Accordingly, we argue that more
rigorous investigation should be conducted to illustrate their mediating effects on the final
accomplishment of decision tasks. For example, future studies may conduct longitudinal
research to investigate the dynamic mediating effects of the synergy outcomes and their
specific role in improving the decision-making process. Moreover, as the outcomes of
human–AI synergy are often interconnected and interdependent, future research should
explore the interactions between these outcomes. Understanding how these outcomes work
together and influence each other can offer valuable insights into optimizing the human–
AI collaboration process and achieving optimal decision-making outcomes. In addition,
there should be a feedback loop during the human–AI synergy process. As the affordance
actualization theory suggests, there exists a feedback mechanism from actualized outcomes
to the affordance potentials. Therefore, future research can include various feedback mech-
anisms and provide deeper understanding of the theoretical framework during human–AI
synergy in the decision-making process. Understanding humans’ cognitive perception and
behavioral responses to human–AI synergy is crucial for maximizing the potential of AI
technologies in decision-making contexts.
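Returning to the feedback mechanism suggested by affordance actualization theory, the loop from actualized outcomes back to affordance potentials can be approximated, in its simplest form, as an incremental trust update. The following sketch is purely illustrative: the update rule and learning rate are assumptions, not a model taken from the reviewed studies.

```python
def update_trust(trust: float, outcome_success: bool, learning_rate: float = 0.1) -> float:
    """One-step sketch of the feedback loop: each actualized outcome nudges
    the user's trust in the affordance, which conditions future reliance.
    Parameters are illustrative assumptions."""
    target = 1.0 if outcome_success else 0.0
    return trust + learning_rate * (target - trust)  # exponential moving average

trust = 0.5
for success in [True, True, False, True]:  # a hypothetical sequence of decision outcomes
    trust = update_trust(trust, success)
print(round(trust, 3))  # 0.582 -- trust rises with successes and dips after the failure
```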
6. Limitations and Conclusions
This review of human–AI synergy in decision-making has highlighted potential avenues through which future research can gain a deeper and more comprehensive understanding of this phenomenon. However, there are certain limitations that
should be acknowledged and addressed in future research endeavors. Firstly, regarding
the sample pool selection, the inclusion of studies in this review was constrained to those
explicitly incorporating the specified search terms. While this approach kept the pool of selected articles at a manageable size, it may have limited the diversity and breadth
of the sample pool. To address this, future studies could aim to include as many relevant studies as possible, employing a more comprehensive search strategy (see the sketch at the end of this section) to encompass a
wider array of perspectives and findings. Secondly, it becomes increasingly crucial to pay
attention to variations in human–AI synergy patterns and their impact on decision-making
performance. Different fields and industries may experience unique challenges and op-
portunities when integrating AI into decision-making processes. Future reviews could prioritize examining specific decision-making contexts and addressing research gaps in
various domains, such as healthcare, finance, and marketing, among others. This targeted
approach would provide specialized insights that cater to the specific needs and complexi-
ties of different fields, ultimately contributing to more-informed decision-making practices.
Last but not least, given the rapid development of AI technology, a similar literature review should be undertaken in two years’ time to capture frontier changes in the topic and to assess how the pace of AI development shapes human–AI synergy. This
will ensure that this review remains up-to-date and captures the latest advancements in
this evolving field.
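As a final illustration of the more comprehensive search strategy recommended above, the sketch below expands synonym sets into a Boolean query string. The concept groups shown are illustrative assumptions and do not reproduce our actual search protocol.

```python
# Minimal sketch: expand synonym sets into a Boolean database query.
# The synonym sets below are illustrative, not our actual search protocol.
CONCEPTS = [
    ["human-AI", "human-machine", "human-robot"],            # actor pairing
    ["synergy", "collaboration", "teaming", "interaction"],  # relationship
    ["decision making", "decision support", "judgment"],     # task
]

def build_query(concepts: list[list[str]]) -> str:
    """OR within a concept's synonyms, AND across concepts."""
    groups = ["(" + " OR ".join(f'"{term}"' for term in synonyms) + ")"
              for synonyms in concepts]
    return " AND ".join(groups)

print(build_query(CONCEPTS))
# ("human-AI" OR "human-machine" OR "human-robot") AND ("synergy" OR ...) AND ...
```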
Author Contributions: Conceptualization, Y.B. and W.G.; methodology, Y.B. and W.G.; formal
analysis, Y.B., W.G. and K.Y.; resources, Y.B., W.G. and K.Y.; writing—original draft preparation, Y.B.;
writing—review and editing, Y.B., W.G. and K.Y.; funding acquisition, Y.B. and W.G. All authors have
read and agreed to the published version of the manuscript.
Funding: This research was funded by the China Postdoctoral Science Foundation, grant number
2022M723491, the Science Foundation of China University of Petroleum, Beijing, grant number
2462023SZBH006, the Interdisciplinary Research Foundation for Doctoral Candidates of Beijing
Normal University, grant number BNUXKJC2223, and the Postgraduate Innovative Research Fund of
University of International Business and Economics, grant number 202367.
Data Availability Statement: No new data were created or analyzed in this study. Data sharing is
not applicable to this article.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice-Hall: Englewood Cliffs, NJ, USA, 1995.
2. Jarrahi, M.H. Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making. Bus. Horiz.
2018, 61, 577–586. [CrossRef]
3. Lai, V.; Chen, C.; Liao, Q.V.; Smith-Renner, A.; Tan, C. Towards a Science of Human-AI Decision Making: A Survey of Empirical
Studies. arXiv 2021, arXiv:2112.11471.
4. Achmat, L.; Brown, I. Artificial Intelligence Affordances for Business Innovation: A Systematic Review of Literature. In
Proceedings of the 4th International Conference on the Internet, Cyber Security and Information Systems, (ICICIS), Johannesburg,
South Africa, 31 October–1 November 2019; pp. 1–12.
5. Bader, J.; Edwards, J.; Harris-Jones, C.; Hannaford, D. Practical Engineering of Knowledge-Based Systems. Inf. Softw. Technol.
1988, 30, 266–277. [CrossRef]
6. Kumar, V.; Rajan, B.; Venkatesan, R.; Lecinski, J. Understanding the Role of Artificial Intelligence in Personalized Engagement
Marketing. Calif. Manag. Rev. 2019, 61, 135–155. [CrossRef]
7. Fernandes, T.; Oliveira, E. Understanding Consumers’ Acceptance of Automated Technologies in Service Encounters: Drivers of
Digital Voice Assistants Adoption. J. Bus. Res. 2021, 122, 180–191. [CrossRef]
8. Xiong, W.; Fan, H.; Ma, L.; Wang, C. Challenges of Human—Machine Collaboration in Risky Decision-Making. Front. Eng. Manag.
2022, 9, 89–103. [CrossRef]
9. Strong, D.; Volkoff, O.; Johnson, S.; Pelletier, L.; Tulu, B.; Bar-On, I.; Trudel, J.; Garber, L. A Theory of Organization-EHR
Affordance Actualization. J. Assoc. Inf. Syst. 2014, 15, 53–85. [CrossRef]
10. Du, W.; Pan, S.L.; Leidner, D.E.; Ying, W. Affordances, Experimentation and Actualization of FinTech: A Blockchain Implementa-
tion Study. J. Strateg. Inf. Syst. 2019, 28, 50–65. [CrossRef]
11. Zeng, D.; Tim, Y.; Yu, J.; Liu, W. Actualizing Big Data Analytics for Smart Cities: A Cascading Affordance Study. Int. J. Inf. Manag.
2020, 54, 102156. [CrossRef]
12. Lehrer, C.; Wieneke, A.; Vom Brocke, J.; Jung, R.; Seidel, S. How Big Data Analytics Enables Service Innovation: Materiality,
Affordance, and the Individualization of Service. J. Manag. Inf. Syst. 2018, 35, 424–460. [CrossRef]
13. Chatterjee, S.; Moody, G.; Lowry, P.B.; Chakraborty, S.; Hardin, A. Information Technology and Organizational Innovation:
Harmonious Information Technology Affordance and Courage-Based Actualization. J. Strateg. Inf. Syst. 2020, 29, 101596.
[CrossRef]
14. Anderson, C.; Robey, D. Affordance Potency: Explaining the Actualization of Technology Affordances. Inf. Organ. 2017, 27,
100–115. [CrossRef]
15. Lanz, L.; Briker, R.; Gerpott, F.H. Employees Adhere More to Unethical Instructions from Human Than AI Supervisors:
Complementing Experimental Evidence with Machine Learning. J. Bus. Ethics 2023. [CrossRef]
16. Seeber, I.; Bittner, E.; Briggs, R.O.; De Vreede, T.; De Vreede, G.-J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.;
Randrup, N.; et al. Machines as Teammates: A Research Agenda on AI in Team Collaboration. Inf. Manag. 2020, 57, 103174.
[CrossRef]
17. Hancock, P.A.; Kajaks, T.; Caird, J.K.; Chignell, M.H.; Mizobuchi, S.; Burns, P.C.; Feng, J.; Fernie, G.R.; Lavallière, M.; Noy, I.Y.; et al.
Challenges to Human Drivers in Increasingly Automated Vehicles. Hum. Factors J. Hum. Factors Ergon. Soc. 2020, 62, 310–328.
[CrossRef] [PubMed]
18. Van Pinxteren, M.M.E.; Wetzels, R.W.H.; Rüger, J.; Pluymaekers, M.; Wetzels, M. Trust in Humanoid Robots: Implications for
Services Marketing. J. Serv. Mark. 2019, 33, 507–518. [CrossRef]
19. Lee, M.H.; Siewiorek, D.P.P.; Smailagic, A.; Bernardino, A.; Bermúdez i Badia, S.B. A Human-AI Collaborative Approach for
Clinical Decision Making on Rehabilitation Assessment. In Proceedings of the 2021 CHI Conference on Human Factors in
Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–14.
20. Järvelä, S.; Nguyen, A.; Hadwin, A. Human and Artificial Intelligence Collaboration for Socially Shared Regulation in Learning.
Br. J. Educ. Technol. 2023, 54, 1057–1076. [CrossRef]
21. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors J. Hum.
Factors Ergon. Soc. 2015, 57, 407–434. [CrossRef]
22. Radclyffe, C.; Ribeiro, M.; Wortham, R.H. The Assessment List for Trustworthy Artificial Intelligence: A Review and Recommen-
dations. Front. Artif. Intell. 2023, 6, 1020592. [CrossRef]
23. Stahl, B.C.; Leach, T. Assessing the Ethical and Social Concerns of Artificial Intelligence in Neuroinformatics Research: An
Empirical Test of the European Union Assessment List for Trustworthy AI (ALTAI). AI Ethics 2023, 3, 745–767. [CrossRef]
24. Zicari, R.V.; Brodersen, J.; Brusseau, J.; Dudder, B.; Eichhorn, T.; Ivanov, T.; Kararigas, G.; Kringen, P.; McCullough, M.;
Moslein, F.; et al. Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Trans. Technol. Soc. 2021, 2, 83–97. [CrossRef]
25. Webster, J.; Watson, R.T. Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Q. 2002, 26, xiii–xxiii.
26. Yuan, L.; Gao, X.; Zheng, Z.; Edmonds, M.; Wu, Y.N.; Rossano, F.; Lu, H.; Zhu, Y.; Zhu, S.-C. In Situ Bidirectional Human-Robot
Value Alignment. Sci. Robot. 2022, 7, eabm4183. [CrossRef] [PubMed]
27. Wang, N.; Pynadath, D.V.; Hill, S.G. Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Expla-
nations. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch,
New Zealand, 7–10 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 109–116.
28. Chen, M.; Nikolaidis, S.; Soh, H.; Hsu, D.; Srinivasa, S. Planning with Trust for Human-Robot Collaboration. In Proceedings of
the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; ACM: New York,
NY, USA, 2018; pp. 307–315.
29. Gao, X.; Gong, R.; Zhao, Y.; Wang, S.; Shu, T.; Zhu, S.-C. Joint Mind Modeling for Explanation Generation in Complex
Human-Robot Collaborative Tasks. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive
Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020.
30. Gong, Z.; Zhang, Y. Behavior Explanation as Intention Signaling in Human-Robot Teaming. In Proceedings of the 2018 27th IEEE
International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018;
IEEE: Piscataway, NJ, USA, 2018; pp. 1005–1011.
31. Unhelkar, V.V.; Li, S.; Shah, J.A. Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative
Tasks. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26
March 2020; ACM: New York, NY, USA, 2020; pp. 329–341.
32. Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in
AI-Assisted Decision-Making. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–21. [CrossRef]
33. Buçinca, Z.; Lin, P.; Gajos, K.Z.; Glassman, E.L. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable
AI Systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020;
pp. 454–464.
34. Lai, V.; Tan, C. On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on
Deception Detection. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA,
29–31 January 2019; pp. 29–38.
35. Lai, V.; Liu, H.; Tan, C. “Why Is ‘Chicago’ Deceptive?” Towards Building Model-Driven Tutorials for Humans. In Proceedings of
the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY,
USA, 2020; pp. 1–13.
36. Alqaraawi, A.; Schuessler, M.; Weiß, P.; Costanza, E.; Berthouze, N. Evaluating Saliency Map Explanations for Convolutional
Neural Networks: A User Study. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy,
17–20 March 2020.
37. Bansal, G.; Nushi, B.; Kamar, E.; Weld, D.S.; Lasecki, W.S.; Horvitz, E. Updates in Human-AI Teams: Understanding and
Addressing the Performance/Compatibility Tradeoff. Proc. AAAI Conf. Artif. Intell. 2019, 33, 2429–2437. [CrossRef]
38. Trocin, C.; Hovland, I.V.; Mikalef, P.; Dremel, C. How Artificial Intelligence Affords Digital Innovation: A Cross-Case Analysis of
Scandinavian Companies. Technol. Forecast. Soc. Change 2021, 173, 121081. [CrossRef]
39. Haesevoets, T.; De Cremer, D.; Dierckx, K.; Van Hiel, A. Human-Machine Collaboration in Managerial Decision Making. Comput.
Hum. Behav. 2021, 119, 106730. [CrossRef]
40. Edmonds, M.; Gao, F.; Liu, H.; Xie, X.; Qi, S.; Rothrock, B.; Zhu, Y.; Wu, Y.N.; Lu, H.; Zhu, S.-C. A Tale of Two Explanations:
Enhancing Human Trust by Explaining Robot Behavior. Sci. Robot. 2019, 4, eaay4663. [CrossRef]
41. Yang, F.; Huang, Z.; Scholtz, J.; Arendt, D.L. How Do Visual Explanations Foster End Users’ Appropriate Trust in Machine
Learning? In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020;
ACM: New York, NY, USA, 2020; pp. 189–201.
42. Nourani, M.; Roy, C.; Block, J.E.; Honeycutt, D.R.; Rahman, T.; Ragan, E.; Gogate, V. Anchoring Bias Affects Mental Model
Formation and User Reliance in Explainable AI Systems. In Proceedings of the 26th International Conference on Intelligent User
Interfaces, College Station, TX, USA, 14–17 April 2021; ACM: New York, NY, USA, 2021; pp. 340–350.
43. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav.
Hum. Decis. Process. 2019, 151, 90–103. [CrossRef]
44. Arshad, S.Z.; Zhou, J.; Bridon, C.; Chen, F.; Wang, Y. Investigating User Confidence for Uncertainty Presentation in Predictive
Decision Making. In Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction,
Parkville, VIC, Australia, 7–10 December 2015; ACM: New York, NY, USA, 2015; pp. 352–360.
45. Yu, K.; Berkovsky, S.; Taib, R.; Zhou, J.; Chen, F. Do I Trust My Machine Teammate?: An Investigation from Perception to Decision.
In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019;
ACM: New York, NY, USA, 2019; pp. 460–468.
46. Mercado, J.E.; Rupp, M.A.; Chen, J.Y.C.; Barnes, M.J.; Barber, D.; Procci, K. Intelligent Agent Transparency in Human–Agent
Teaming for Multi-UxV Management. Hum. Factors J. Hum. Factors Ergon. Soc. 2016, 58, 401–415. [CrossRef]
47. Cheng, H.-F.; Wang, R.; Zhang, Z.; O’Connell, F.; Gray, T.; Harper, F.M.; Zhu, H. Explaining Decision-Making Algorithms through
UI: Strategies to Help Non-Expert Stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing
Systems, Glasgow, Scotland, 4–9 May 2019; ACM: New York, NY, USA, 2019; pp. 1–12.
48. Vinanzi, S.; Cangelosi, A.; Goerick, C. The Collaborative Mind: Intention Reading and Trust in Human-Robot Interaction. iScience
2021, 24, 102130. [CrossRef] [PubMed]
49. Sachan, S.; Yang, J.-B.; Xu, D.-L.; Benavides, D.E.; Li, Y. An Explainable AI Decision-Support-System to Automate Loan
Underwriting. Expert Syst. Appl. 2020, 144, 113100. [CrossRef]
50. Gutzwiller, R.S.; Reeder, J. Dancing with Algorithms: Interaction Creates Greater Preference and Trust in Machine-Learned
Behavior. Hum. Factors J. Hum. Factors Ergon. Soc. 2021, 63, 854–867. [CrossRef] [PubMed]
51. Patel, B.N.; Rosenberg, L.; Willcox, G.; Baltaxe, D.; Lyons, M.; Irvin, J.; Rajpurkar, P.; Amrhein, T.; Gupta, R.; Halabi, S.; et al.
Human–Machine Partnership with Artificial Intelligence for Chest Radiograph Diagnosis. NPJ Digit. Med. 2019, 2, 111. [CrossRef]
[PubMed]
52. Xu, A.; Dudek, G. OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations. In
Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5
March 2015; ACM: New York, NY, USA, 2015; pp. 221–228.
53. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent Algorithm Aversion. J. Mark. Res. 2019, 56, 809–825. [CrossRef]
54. Jessup, S.; Gibson, A.; Capiola, A.; Alarcon, G.; Borders, M. Investigating the Effect of Trust Manipulations on Affect over Time
in Human-Human versus Human-Robot Interactions. In Proceedings of the 53rd Hawaii International Conference on System
Sciences, Maui, HI, USA, 7–10 January 2020.
55. Mende, M.; Scott, M.L.; Van Doorn, J.; Grewal, D.; Shanks, I. Service Robots Rising: How Humanoid Robots Influence Service
Experiences and Elicit Compensatory Consumer Responses. J. Mark. Res. 2019, 56, 535–556. [CrossRef]
56. Fridin, M.; Belokopytov, M. Acceptance of Socially Assistive Humanoid Robot by Preschool and Elementary School Teachers.
Comput. Hum. Behav. 2014, 33, 23–31. [CrossRef]
57. Seo, S.H.; Griffin, K.; Young, J.E.; Bunt, A.; Prentice, S.; Loureiro-Rodríguez, V. Investigating People’s Rapport Building and
Hindering Behaviors When Working with a Collaborative Robot. Int. J. Soc. Robot. 2018, 10, 147–161. [CrossRef]
58. Desideri, L.; Ottaviani, C.; Malavasi, M.; Di Marzio, R.; Bonifacci, P. Emotional Processes in Human-Robot Interaction during
Brief Cognitive Testing. Comput. Hum. Behav. 2019, 90, 331–342. [CrossRef]
59. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the Shades of the Uncanny Valley: An Experimental Study of
Human–Chatbot Interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [CrossRef]
60. Bansal, G.; Nushi, B.; Kamar, E.; Lasecki, W.; Weld, D.S.; Horvitz, E. Beyond Accuracy: The Role of Mental Models in Human-AI
Team Performance. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP-19),
Stevenson, WA, USA, 28–30 October 2019; p. 10.
61. Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An Ideal Human”: Expectations of AI Teammates in Human-AI Teaming.
Proc. ACM Hum.-Comput. Interact. 2021, 4, 246. [CrossRef]
62. Lawrence, L.; Echeverria, V.; Yang, K.; Aleven, V.; Rummel, N. How Teachers Conceptualise Shared Control with an AI
Co-orchestration Tool: A Multiyear Teacher-centred Design Process. Br. J. Educ. Technol. 2023, bjet.13372. [CrossRef]
63. Chiang, C.-W.; Lu, Z.; Li, Z.; Yin, M. Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior
and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment. In Proceedings of the 2023
CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023; ACM: New York, NY, USA,
2023; pp. 1–18.
64. Holstein, K.; De-Arteaga, M.; Tumati, L.; Cheng, Y. Toward Supporting Perceptual Complementarity in Human-AI Collaboration
via Reflection on Unobservables. Proc. ACM Hum.-Comput. Interact. 2023, 7, 152. [CrossRef]
65. Tsai, C.-H.; You, Y.; Gui, X.; Kou, Y.; Carroll, J.M. Exploring and Promoting Diagnostic Transparency and Explainability in Online
Symptom Checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan,
8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–17.
66. Levy, A.; Agrawal, M.; Satyanarayan, A.; Sontag, D. Assessing the Impact of Automated Suggestions on Decision Making:
Domain Experts Mediate Model Errors but Take Less Initiative. In Proceedings of the 2021 CHI Conference on Human Factors in
Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–13.
67. Vrontis, D.; Christofi, M.; Pereira, V.; Tarba, S.; Makrides, A.; Trichina, E. Artificial Intelligence, Robotics, Advanced Technologies
and Human Resource Management: A Systematic Review. Int. J. Hum. Resour. Manag. 2022, 33, 1237–1266. [CrossRef]
68. Prentice, C.; Dominique Lopes, S.; Wang, X. The Impact of Artificial Intelligence and Employee Service Quality on Customer
Satisfaction and Loyalty. J. Hosp. Mark. Manag. 2020, 29, 739–756. [CrossRef]
69. Pournader, M.; Ghaderi, H.; Hassanzadegan, A.; Fahimnia, B. Artificial Intelligence Applications in Supply Chain Management.
Int. J. Prod. Econ. 2021, 241, 108250. [CrossRef]
70. Wilson, H.J.; Daugherty, P.; Shukla, P. How One Clothing Company Blends AI and Human Expertise. Harv. Bus. Rev. 2016.
71. Marr, B. Stitch Fix: The Amazing Use Case of Using Artificial Intelligence in Fashion Retail. Forbes 2018, 25.
72. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H. Deep Learning for Identifying Metastatic Breast Cancer. arXiv 2016,
arXiv:1606.05718. [CrossRef]
73. Arnold, T.; Kasenberg, D.; Scheutz, M. Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems.
ACM Trans. Hum.-Robot Interact. 2021, 10, 25. [CrossRef]
74. Lim, B.Y.; Dey, A.K.; Avrahami, D. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent
Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009;
ACM: New York, NY, USA, 2009; pp. 2119–2128.
75. Puranam, P. Human–AI Collaborative Decision-Making as an Organization Design Problem. J. Organ. Des. 2021, 10, 75–80.
[CrossRef]
76. Parker, S.K.; Grote, G. Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World.
Appl. Psychol. 2022, 71, 1171–1204. [CrossRef]
77. Roth, E.M.; Sushereba, C.; Militello, L.G.; Diiulio, J.; Ernst, K. Function Allocation Considerations in the Era of Human Autonomy
Teaming. J. Cogn. Eng. Decis. Mak. 2019, 13, 199–220. [CrossRef]
78. Van Maanen, P.P.; van Dongen, K. Towards Task Allocation Decision Support by Means of Cognitive Modeling of Trust.
In Proceedings of the 17th Belgian-Netherlands Artificial Intelligence Conference, Brussels, Belgium, 17–18 October 2005;
pp. 399–400.
79. Flemisch, F.; Heesen, M.; Hesse, T.; Kelsch, J.; Schieben, A.; Beller, J. Towards a Dynamic Balance between Humans and
Automation: Authority, Ability, Responsibility and Control in Shared and Cooperative Control Situations. Cogn. Technol. Work
2012, 14, 3–18. [CrossRef]
80. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56.
[CrossRef]
81. The Precise4Q Consortium; Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for Artificial Intelligence in
Healthcare: A Multidisciplinary Perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [CrossRef]
82. Bier, V. Implications of the Research on Expert Overconfidence and Dependence. Reliab. Eng. Syst. Saf. 2004, 85, 321–329.
[CrossRef]
83. Charness, G.; Karni, E.; Levin, D. Individual and Group Decision Making under Risk: An Experimental Study of Bayesian
Updating and Violations of First-Order Stochastic Dominance. J. Risk Uncertain. 2007, 35, 129–148. [CrossRef]
84. Tong, J.; Feiler, D. A Behavioral Model of Forecasting: Naive Statistics on Mental Samples. Manag. Sci. 2017, 63, 3609–3627.
[CrossRef]
85. Blumenthal-Barby, J.S.; Krieger, H. Cognitive Biases and Heuristics in Medical Decision Making: A Critical Review Using a
Systematic Search Strategy. Med. Decis. Mak. 2015, 35, 539–557. [CrossRef]
86. Zinn, J.O. Heading into the Unknown: Everyday Strategies for Managing Risk and Uncertainty. Health Risk Soc. 2008, 10, 439–450.
[CrossRef]
87. Bayati, M.; Braverman, M.; Gillam, M.; Mack, K.M.; Ruiz, G.; Smith, M.S.; Horvitz, E. Data-Driven Decisions for Reducing
Readmissions for Heart Failure: General Methodology and Case Study. PLoS ONE 2014, 9, e109264. [CrossRef] [PubMed]
88. Pizoń, J.; Gola, A. Human–Machine Relationship—Perspective and Future Roadmap for Industry 5.0 Solutions. Machines 2023,
11, 203. [CrossRef]
89. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [CrossRef]
90. Trocin, C.; Mikalef, P.; Papamitsiou, Z.; Conboy, K. Responsible AI for Digital Health: A Synthesis and a Research Agenda. Inf.
Syst. Front. 2021, 1–19. [CrossRef]
91. McShane, M.; Nirenburg, S.; Jarrell, B. Modeling Decision-Making Biases. Biol. Inspired Cogn. Archit. 2013, 3, 39–50. [CrossRef]
92. Parry, K.; Cohen, M.; Bhattacharya, S. Rise of the Machines: A Critical Consideration of Automated Leadership Decision Making
in Organizations. Group Organ. Manag. 2016, 41, 571–594. [CrossRef]
93. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46,
50–80. [CrossRef]
94. Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic Review: Trust-Building Factors and Implications for Conversational Agent
Design. Int. J. Hum.–Comput. Interact. 2021, 37, 81–96. [CrossRef]
95. Heerink, M. Assessing Acceptance of Assistive Social Robots by Aging Adults. Ph.D. Thesis, Universiteit van Amsterdam,
Amsterdam, The Netherlands, 2010.
96. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave New World: Service Robots in the
Frontline. J. Serv. Manag. 2018, 29, 907–931. [CrossRef]
97. Davenport, T.; Guha, A.; Grewal, D.; Bressgott, T. How Artificial Intelligence Will Change the Future of Marketing. J. Acad. Mark.
Sci. 2020, 48, 24–42. [CrossRef]
98. Mikalef, P.; Gupta, M. Artificial Intelligence Capability: Conceptualization, Measurement Calibration, and Empirical Study on Its
Impact on Organizational Creativity and Firm Performance. Inf. Manag. 2021, 58, 103434. [CrossRef]
99. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo Arigato Mr. Roboto: Emergence
of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences. J. Serv. Res. 2017, 20, 43–58.
[CrossRef]
100. Libert, K.; Mosconi, E.; Cadieux, N. Human-Machine Interaction and Human Resource Management Perspective for Collaborative
Robotics Implementation and Adoption. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui,
HI, USA, 7–10 January 2020; Volume 3, pp. 533–542.
101. Piçarra, N.; Giger, J.-C. Predicting Intention to Work with Social Robots at Anticipation Stage: Assessing the Role of Behavioral
Desire and Anticipated Emotions. Comput. Hum. Behav. 2018, 86, 129–146. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.