The Journal of Medicine and Philosophy, 47: 155–178, 2022

https://doi.org/10.1093/jmp/jhab036

Doctor Ex Machina: A Critical Assessment of the Use of Artificial Intelligence in Health Care

ANNIKA M. SVENSSON*
Länssjukhuset i Kalmar, Kalmar, Sweden

FABRICE JOTTERAND
Medical College of Wisconsin, Milwaukee, Wisconsin, USA
University of Basel, Basel, Switzerland

*Address correspondence to: Annika Svensson, MD, PhD, MA, Länssjukhuset i Kalmar,
Lasarettsv. 8, 392 44 Kalmar, Sweden. E-mail: Annika.Svensson@ymail.com

This article examines the potential implications of the implementation
of artificial intelligence (AI) in health care for both its delivery and the
medical profession. To this end, the first section explores the basic fea-
tures of AI and the as yet theoretical concept of autonomous AI, followed
by an overview of current and developing AI applications. Against
this background, the second section discusses the transforming roles
of physicians and changes in the patient–physician relationship
that could be a consequence of gradual expansion of AI in health
care. Subsequently, an examination of the responsibilities physicians
should assume in this process is explored. The third section describes
conceivable practical and ethical challenges that implementation of a
single all-encompassing AI healthcare system would pose. The fourth
section presents arguments for regulation of AI in health care to en-
sure that these applications do not violate basic ethical principles and
that human control of AI will be preserved in the future. In the final
section, fundamental components of a moral framework from which
such regulation may be derived are brought forward, and some pos-
sible strategies for building a moral framework are discussed.
Keywords: AI governance, artificial intelligence, confidentiality,
deliberative democracy, machine learning, medical practice,
privacy, professionalism
“The rise of powerful AI will be either the best, or the worst thing, ever to happen
to humanity”

—Stephen Hawking1
© The Author(s) 2022. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc.
All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

I. INTRODUCTION

Artificial intelligence (AI) applications are rapidly evolving in many fields,
including health care. AI systems are still inferior to the human brain when
it comes to tasks that require human life experience, including input from
emotions and perceptions by a physical body. However, they are already
widely superior in many respects such as processing speed, capacity to accu-
mulate large amounts of data, ability to recall results of previous calculations,
and to operate without interruption in function or interference in perform-
ance by emotions, bias, irrational impulses, or morality.2
This article discusses possible consequences of implementation of AI ap-
plications in health care. Developing advanced comprehensive AI systems
could potentially assume many of the functions that are currently performed
by physicians. Hypothetically, one such system could achieve dominance on
the global market. Even a system covering select areas of medicine could fun-
damentally transform the way health care is organized and delivered, as well
as how physicians and patients perceive and communicate with each other.
Many aspects of the nature of the patient–physician relationship would un-
doubtedly change if an expanding AI would increasingly provide the basis
for decision-making. If health insurance information and gatekeeping func-
tions3 would be directly incorporated and processed within such a system
without human interference, the AI may at some point provide the final
decision about treatment. We submit that, based on a long tradition of the
fiduciary physician–patient relationship, physicians have a duty to protect
patients throughout the implementation of any major changes in health
care, including the impending implementation of AI. The medical profession
should engage with philosophers, computer scientists, policy-makers, pa-
tient advocacy organizations, and other stakeholders to bring forth for public
discussion ethical issues related to the use of AI in health care, including
beneficence, maleficence, risk/safety, autonomy, privacy, confidentiality,
truthfulness, and justice. Throughout the implementation process physicians
should keep patients’ needs in focus and, importantly, prioritize their wel-
fare above the health-care providers’ self-interest. Of course, the current
ideological and cultural pluralism remains a challenge that is not conducive
to the resolution of the aforementioned ethical, clinical, and practical issues.
However, we believe that a managed agreement can be achieved through
a process grounded on principles of deliberative democracy (Gutmann and
Thompson, 2000). It is, however, necessary to initiate a debate among the
various stakeholders and for this purpose create spaces in the public square
where these issues can be discussed and debated notwithstanding the limi-
tations of public moral discourse for social collaboration.
The first section of this article explores the basic features of AI and the
theoretical concept of autonomous AI followed by an overview of current
and developing AI applications. Against this background, the second section
discusses the transforming roles of physicians and changes in the patient–
physician relationship that could be a consequence of gradual expansion
of AI in health care. This leads to an examination of the responsibilities
physicians should assume in this process. The third section of the article de-
scribes conceivable practical and ethical challenges that implementation of
a single all-encompassing AI health-care system would pose. Subsequently,
the fourth section presents arguments for regulation of AI in health care to
ensure that these applications do not violate basic ethical principles and that
human control of AI will be preserved in the future. In the final section,
fundamental components of a moral framework from which such regula-
tion may be derived are brought forward and some possible strategies for
building a moral framework are discussed.

II. DEVELOPMENT OF ARTIFICIAL INTELLIGENCE RELATED TO HEALTH-CARE APPLICATIONS

Artificial Neural Networks and the Thinking Machine


Machine learning uses algorithms that allow the computer to learn from data
as opposed to operating exclusively according to detailed instructions pro-
vided by a human programmer. Importantly, this process allows computer
systems to improve their own capability to perform by “training” where
the system repeatedly executes tasks and absorbs feedback about the out-
come into the system. Artificial neural networks consist of up to a million
units or nodes called “artificial neurons” that are linked together through
connections termed “synapses.” These structures enable signaling between
the nodes, somewhat similar to the way natural neurons communicate with
each other in the human brain. The artificial neurons are typically organ-
ized in an input layer, several layers that perform sequential calculations,
and an output layer. The neurons may be “on” or “off” and may also de-
velop different “weights” relative to each other. The weight of participating
neurons affects the strength of the signals transferred in different parts of the
network. Following “training” of the network, certain neurons have higher
weight, and pathways involving these neurons will be preferred over other
pathways.
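
To make the layered structure concrete, the following minimal sketch (our illustration; the layer sizes, weights, and input are invented) shows how a signal propagates through weighted layers of the kind described above:

```python
import numpy as np

# A minimal feed-forward network: an input layer (3 features), one
# hidden layer (4 artificial neurons), and an output layer (1 neuron).
# The weight matrices play the role of the "synapse" strengths in the
# text; training would nudge them so that preferred pathways carry
# stronger signals.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))       # input -> hidden connections
W2 = rng.normal(size=(4, 1))       # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes each signal into (0, 1)

def forward(x):
    hidden = sigmoid(x @ W1)       # degree to which each hidden neuron is "on"
    return sigmoid(hidden @ W2)    # strength of the output signal

print(forward(np.array([0.2, 0.7, 0.1])))  # output for one toy input
```

Training, in the sense described above, consists of repeatedly comparing such outputs against known answers and adjusting W1 and W2 to reduce the discrepancy.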
Data can be supplied to the network through a supervised algorithm, in
which case it can be organized and filtered by a programmer. Alternatively,
it can be accessed directly as unstructured information4 by the computer (un-
supervised learning). AI systems can also learn from performing simulations
(reinforcement learning) (Mnih et al., 2015). The Berkeley Robot for the
Elimination of Tedious Tasks (BRETT) represents a development of this con-
cept. The AI learns to perform simple tasks by repeated attempts using its
artificial “limbs” and adjusts its behavior to get increasingly closer to the goal
(Clark, 2015).
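
The reinforcement-learning loop that BRETT exemplifies can be sketched schematically as follows (our illustration, unrelated to BRETT's actual software; the toy task and parameters are invented):

```python
import random

# An agent on positions 0..4 earns a reward only at position 4.
# Repeated attempts, with feedback about each outcome absorbed into a
# table of action values, gradually strengthen the moves that bring the
# agent closer to the goal.

N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)   # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3      # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < epsilon:      # occasionally try something new
            a = random.choice(ACTIONS)
        else:                              # otherwise take the best-known move
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        # absorb feedback about the outcome into the value table
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # learned first move: +1, toward the goal
```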
Currently available AI applications include web search engines
with text recognition; image, voice, and speech recognition systems; soft-
ware for recognition of unexpected patterns in big data sets (data mining);
and “smart robots” including “carebots”5 for use in healthcare settings.
Applications for clinical decision-making are in development. However, be-
fore turning to emerging applications of AI in health care, we offer a his-
torical perspective of the advance of AI and the concept of autonomous AI.
Many authors have entertained the idea of creating an AI that very closely
resembles a human being, even in more subtle aspects of behavior, that is,
beyond the ability to carry out a basic conversation.
However, human expression is among other things dependent on emo-
tional experiences. At this time, although AI could be trained to “react”
to human emotions through applications that use facial recognition and
respond by projecting “emotions” in text or synthetic voice, it cannot truly
experience these emotions. The ability to have human emotional experi-
ences and create memories related to these sentiments requires a physical
body and interactions with the natural world. Complex patterns of human
emotions are determined by feedback loops involving a combination of
impulses from both the brain and the body (hormones, heart rate, vasodila-
tion, muscle tension, etc.).6 To simulate such a system in a computer would
require much more knowledge about human physiology and psychology
than is currently available. Furthermore, human decision-making is at times
seemingly irrational or unexpected in nature. To simulate this aspect of
human expression, patterns of such irrational behavior could theoretic-
ally be introduced to AI if researchers had a better understanding of what
exactly they consist of, and how they are stored and activated.7 However, the
benefits of creating an AI with such attributes are unclear.8 Several authors
have suggested that AI may progress to a point where it becomes autono-
mous.9 While evolution has led to slow progress of the human mind based
on natural selection in nature, AI could make high-speed evolutionary “ex-
periments” in silico, develop superhuman intelligence, and abruptly start
upgrading itself with tremendous speed, possibly out of human control.
This would lead to the so-called technological Singularity, an unfathom-
able paradigm shift for humanity. The theories regarding the technological
Singularity have been criticized by many authors (Allen, 2011; Regalado,
2013). A major limiting factor in the development of AI is the processing
power of existing hardware.
Research in the life sciences faces tremendous challenges as big data10
are generated, while the analysis of this data is not proceeding at a corres-
ponding pace, creating a bottleneck for progress. A comprehensive system
that could analyze text from scientific literature as well as other types of
data of different origins and formats, including laboratory data from patient
charts, information from clinical trials, and so forth, to make novel con-
nections, distinguish patterns, and generate new hypotheses, would be ex-
tremely useful (Chen, Argentinis, and Weber, 2016).
The era of personalized evidence-based medicine has brought about a
tremendous increase in complexity of both diagnostic algorithms and treat-
ment. A crucial component of frontline medical practice is the incorporation
of novel results of clinical investigations as presented in scientific publi-
cations. Indeed, physicians are constantly flooded by increasing amounts
of new and sometimes conflicting information about diagnostic processes
and treatment options. One may envision that future AI applications could
help sift through scientific publications as they become available, perform
meta-analyses, and integrate relevant data into patient records to provide
optimized recommendations and follow-up of treatments at the individual
level. Such a system could merge outcome data from many different inter-
connected health-care centers to reveal patterns in treatment effects or side
effects that would otherwise go unnoticed, due to the small number of pa-
tients with specific rare disease variants seen in each clinic.
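
The statistical gain from such merging can be illustrated with a minimal fixed-effect meta-analysis sketch (ours; every number below is invented, and real pooling across centers would require far more care):

```python
import math

# Each center reports an estimated treatment effect and its standard
# error. Individually the centers are underpowered; pooled by
# inverse-variance weighting, the effect becomes detectable.

centers = [  # (effect estimate, standard error) per center
    (0.30, 0.25), (0.25, 0.30), (0.35, 0.28), (0.28, 0.26),
]

weights = [1 / se ** 2 for _, se in centers]
pooled = sum(w * eff for (eff, _), w in zip(centers, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Each center alone has a 95% interval wider than its effect estimate;
# the pooled interval is roughly half as wide.
print(f"pooled effect {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```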
Medical diagnostics involves complex evaluations of incomplete facts and
likelihoods that certain diagnoses explain available data. The endpoint must
be determined, even if it does not explain all data or if it is clear that more
data is needed to produce a conclusion. Over the past decades, attempts have
been made to construct software programs that include decision-making
under uncertainty that would provide support for clinicians in diagnosing
limited groups of related diseases to increase speed and avoid human error
in this process (Patel et al., 2009). Medical specialties that involve a sub-
stantial portion of routine image analysis that can now be digitalized are in
the forefront of this development (Jha, 2016; Bejnordi, Veta, and Van Diest,
2017; Sharma and Carter, 2017). However, the specific processing required to
produce a complex medical diagnosis from several different types of patient
data would until now have been considered specifically human.
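
The kind of reasoning under uncertainty described here can be caricatured in a few lines (a naive-Bayes sketch of ours; the hypotheses, priors, and likelihoods are invented):

```python
# Rank competing diagnoses by how well they explain the findings.
priors = {"disease A": 0.05, "disease B": 0.01, "neither": 0.94}
likelihoods = {  # P(finding | hypothesis), invented for illustration
    "fever": {"disease A": 0.90, "disease B": 0.40, "neither": 0.05},
    "rash":  {"disease A": 0.20, "disease B": 0.85, "neither": 0.02},
}

def posterior(findings):
    scores = dict(priors)
    for f in findings:
        for h in scores:
            scores[h] *= likelihoods[f][h]   # weigh each hypothesis by the evidence
    total = sum(scores.values())
    return {h: round(s / total, 3) for h, s in scores.items()}

# The best available explanation must be chosen even though no single
# hypothesis explains the data perfectly.
print(posterior(["fever", "rash"]))
```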

Current AI Applications
In 2011, the “supercomputer” Watson11 beat two well-known human cham-
pions in the trivia game show Jeopardy. The machine thus proved that it
could deal with “the ambiguity and contextual nature of language.”12 This
was a significant development from previous AI applications that were de-
signed to play games such as chess, the Chinese game Go, or even games that
involve incomplete information such as poker (Silver et al., 2017; Williams
et al., 2018). The ability to understand a sentence in natural human language
also differentiated Watson from regular text search engines that deliver a list
of results that are related to certain keywords in order of popularity. Watson
now uses not only deductive and inductive, but also abductive reasoning.
This has been employed in applications such as personalized marketing, “in-
telligent tutoring systems” (Straumsheim, 2016), and even dress design.13,14
Experiments exploring the Watson application have demonstrated cre-
ation of novel hypotheses based on mining of large amounts of scientific
literature (Spangler et al., 2014, 1877–86). In other pilot projects within the
field of pharmacological research, new drug targets have been identified,
and suggestions for repurposing of currently used drugs have been gener-
ated by AI.15,16 In this very narrow sense, that is, creation of novel hypoth-
eses based on its ability to analyze big data, Watson would already be more
“creative” than humans.
An example of an application of AI in health care for which some infor-
mation is currently publicly available is IBM Watson Oncology, which is an
ongoing collaboration between IBM and Memorial Sloan Kettering Cancer
Center. This AI system analyzes a patient’s medical record to help identify
evidence-based and personalized treatment options for a limited number of
malignant disorders.17 The currently advertised version claims to data mine
the patient’s entire electronic medical record (EMR), as well as treatment
guidelines provided by physicians at Memorial Sloan Kettering and select
peer-reviewed literature (textbooks and medical journals; a total of 15 mil-
lion pages of text). A patient chart is created by the AI, and the patient’s
physician is asked to verify the data displayed therein. The AI system then
analyzes the data and compiles a prioritized list of treatment options div-
ided into the categories “recommended,” “for consideration,” and “not re-
commended.” The physician can retrieve information about efficacy and
side effects of the suggested treatments, as well as a list of publications that
supports each alternative. The performance of Watson Oncology in terms of
recommending a treatment has been compared with that of hospital clinical
tumor boards, which integrate the knowledge and skills of representatives of
multiple medical disciplines for decision-making in complex patient cases.
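
IBM has not published its ranking logic, but the three-bucket output described above can be mimicked schematically (our sketch; the treatments, scores, and thresholds are all invented):

```python
# Sort candidate treatments and bucket them the way the text describes.
options = [  # (treatment, evidence score 0..1, supporting publications)
    ("regimen X", 0.92, 41),
    ("regimen Y", 0.61, 12),
    ("regimen Z", 0.18, 3),
]

def bucket(score):
    if score >= 0.75:
        return "recommended"
    if score >= 0.40:
        return "for consideration"
    return "not recommended"

for name, score, n_pubs in sorted(options, key=lambda o: -o[1]):
    print(f"{name}: {bucket(score)} ({n_pubs} supporting publications)")
```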
Importantly, it should be noted that the current Watson system is cur-
ated and relies on treatment algorithms by Memorial Sloan Kettering. It is
thus not operating independently, but is heavily dependent on frequent
manual updates regarding developing treatment recommendations. IBM is
also developing AI that integrates clinical data with radiologic images and
selects those with anomalies for further review by the physician (the Medical
Sieve). “Swarm technology” is another AI concept that would enable groups
of physicians to join their thinking into virtual collective “hive minds.” In a
recently presented preliminary study, the diagnosis of pneumonia was ren-
dered by integrating the real-time input of a small group of radiologists with
a trained AI algorithm that weighted not only a yes or no answer from the
members of the group, but also the degree of confidence with which each
group member’s response was given. The resulting diagnostic accuracy by
this hybrid application was above that of an average radiologist and that of
an AI application (CheXNet) by itself.18
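
The confidence weighting at the heart of this hybrid approach is simple to sketch (our illustration; the readers, votes, confidences, and the treatment of the model as one more weighted voter are invented assumptions, not the study's published method):

```python
# Combine yes/no readings, each weighted by the reader's confidence.
votes = [  # (source, vote: +1 pneumonia / -1 no pneumonia, confidence 0..1)
    ("radiologist 1", +1, 0.9),
    ("radiologist 2", +1, 0.6),
    ("radiologist 3", -1, 0.4),
    ("AI model",      +1, 0.7),
]

score = sum(vote * confidence for _, vote, confidence in votes)
print("pneumonia" if score > 0 else "no pneumonia", f"(score {score:+.1f})")
```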
It can be envisaged that making Watson or similar systems available glo-
bally could help mitigate geographical challenges, in that every doctor
connected to the system would have access to cutting-edge diagnostics, re-
gardless of location. Presumably, being included into a comprehensive AI
system through local physicians would also lead to less cost for patients in
remote settings compared to seeking care at a U.S. hospital.
At this time, there is no publicly available data that show how Watson or
any other AI system can work in a more diverse setting, that is, handle more
than one disease group at a time. Also, it has not yet been demonstrated that
information from health-care records can be incorporated and used to build
new decision-making algorithms in the system (i.e., using unsupervised
learning).19 However, if the current system’s limited abilities would be devel-
oped into a new system that would use advanced AI with higher processing
power to integrate many different aspects of health care, one could envisage
that this system could eventually become global and possibly the dominant
(or only) system for AI in health care. Next, we discuss how current as well
as future AI applications could affect the role of physicians in health care.

III. PHYSICIANS’ PROFESSIONAL ROLES AND RESPONSIBILITIES DURING IMPLEMENTATION OF AI IN HEALTH CARE

If a universal AI system, or even a patchwork of partial solutions providing
substantial support in diagnosis and treatment decisions, would
eventually be realized in routine clinical practice, the consequences for the
medical profession would likely be profound. Already in the current version
of Watson for Oncology, the more advanced part of the cognitive processing
is carried out by the AI system, while the tasks of ensuring accuracy of data
and direct communication with the patient are still performed by the local
physician. Giving up the most challenging part of the diagnostic process
while retaining the more mundane tasks, as well as being prompted step-
by-step by the AI system while progressing through the algorithm, will likely
have effects on the physicians’ self-image as a professional. Even the ap-
plication of swarm technology, where the physician joins a group in which
individual contribution may not be possible to define, could change the
way physicians perceive themselves. So far, to our knowledge, no published
studies have addressed this issue or, conversely, how patients with different
cultural backgrounds might perceive the physicians who use this tool.
With wider clinical use of Watson or other similar systems, one may en-
visage patients coming to the doctor’s office to meet with a physician or a
physician’s assistant to undergo a physical exam and confirm data entered into
the system. The physician would initiate the diagnostic algorithm, then help
explain the results, discuss practical issues regarding the treatment plan or
further surveillance, and provide emotional support and counseling. At that
point, depending on the design of the process, it may or may not be possible
for physicians to mitigate the results from the AI in case there are specific
circumstances that are not programmable into the system that need to be
considered for individual patients. The interaction with the system may not
as such present an issue for most patients, since many healthcare providers
already work with the EMR on a screen in the presence of their patients;
however, the fact that the system (not the physician) provides the diagnosis
and preferred treatment option would be a new feature that would be ob-
vious to the patient.
As telemedicine with physician–patient encounters through specific net-
works or social media is becoming more common, most patients would be
expected to accept interactions with a physician on a screen. Furthermore,
if the AI system would improve its interface with human beings to a point
where many interactions could be handled directly between the AI and the
patients, personal interactions between patients and physicians could be-
come difficult to defend economically, and most patients, except those who
cannot handle the technology, may be offered to interface primarily with the
AI. However, even as the physicians’ involvement in the diagnostic process
as well as direct personal interactions with patients may decrease over time,
physicians could continue to work in the virtual interface between AI and
patients as educators, counselors, and promoters of public health initiatives
for prevention of disease. Importantly, physicians would have an essential
role as advocates for their patients while the AI system is phased in.
Although the adaptation process for physicians could be difficult as they
may see themselves losing the advanced cognitive tasks they have been
trained for to advanced AI applications, they should nevertheless honor the
fiduciary relationship with their patients. Physicians have a responsibility to
promote what is best for their patients, regardless of any personal bias. If an
AI system were developed that could significantly increase quality and/or
decrease cost of care, physicians should act in the patient’s best interest by
championing implementation of such a system, even if its application would
profoundly alter the role of the physicians. However, physicians also must
balance such support against their duty to be truthful to their patients, honor
their autonomy, and protect them from harm. Therefore, physicians need to
understand the risks involved with application of AI in patient care, appro-
priately and candidly communicate these risks to patients, and use their influ-
ence to help optimize the design of AI systems to avoid detrimental effects to
patient care. Furthermore, physicians have a deeply rooted (Lasagna, 1964;
Charter on Medical Professionalism, 2002; American Medical Association,
2018) responsibility to serve all patients, including the socioeconomically
deprived, members of historically underprivileged groups, and those who
elude or cannot access health care. They should act specifically to support
patients who do not have a voice of their own during implementation of AI
in health care.
Considering the possibility that many of physicians’ currently established
tasks could be taken over by an AI system, one may question the
future of medical education. In the hypothetical case where the entire corpus
of medical knowledge would be contained and continuously updated in AI
applications and new information be accumulated with a speed impossible
for humans to keep up with, and if physicians’ main responsibilities would
be administration and counseling, would there be any incentive for students
to undertake expensive and time-consuming studies of basic medicine or for
society to support such efforts? It is possible that training of physicians in the
future would focus more on human interactions and teamwork with other
health-care providers than on memorization of technical data and treatment
algorithms.20
However, without elementary medical training, new generations of health-
care providers could become entirely dependent on the AI system, which
would confer substantial vulnerability if the system would fail. Therefore,
some capability to handle essential medical care should be retained outside
of a major AI system for health care.

IV. CHALLENGES IN IMPLEMENTATION OF AI IN HEALTH CARE

Issues With Import of Data Could Lead to Bias


Several issues can be envisaged regarding implementation of AI for handling
analysis of both patient data and published scientific studies. Optimization
of AI systems is greatly dependent on the quantity of data made available
to the AI. Furthermore, input of low-quality, unfiltered, and/or inappropri-
ately formatted patient data that cannot be correctly interpreted and pro-
cessed can result in bias in the algorithms constructed by the AI. Any type
of data import (supervised or unsupervised) may be affected by bias. For
instance, data may have been sampled primarily from patients of certain eth-
nicities, while other parts of the population are not represented. Some types
of data, for instance unusual findings in a physical exam of a child with a
rare genetic disorder, are inherently difficult to format for input because of
subjectivity in the evaluation by the provider. Standardized criteria for pa-
tient data evaluation and categorization prior to entry into charts could be
useful but may be very difficult to implement widely. Historical patient data
may have to go through manual controls as well as formatting processes
prior to entry into the system so as not to be misrepresented and create bias.
Substantial amounts of existing patient data may not be practically possible
to curate and enter into the system. Such loss of data may also create bias.
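
A first-line check for the sampling bias described above is straightforward to sketch (ours; the groups, counts, and reference proportions are invented):

```python
from collections import Counter

# Compare the subgroup make-up of a training set against a reference
# population and flag groups that are badly under-represented.
training_set = ["group A"] * 820 + ["group B"] * 150 + ["group C"] * 30
reference = {"group A": 0.60, "group B": 0.25, "group C": 0.15}

counts = Counter(training_set)
n = sum(counts.values())
for group, expected in reference.items():
    observed = counts[group] / n
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{group}: {observed:.0%} of data vs {expected:.0%} of population{flag}")
```

Such a check catches only the biases one thinks to look for; unmeasured or unrecorded characteristics remain invisible to it.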
The import of results from scientific studies also confers risk for errors or
bias. Considering the amount of data that is made available in the public
domain without traditional peer review, a vetting process must be instituted
so that all data be critically evaluated prior to inclusion into the system. It
is of interest to note that the reproducibility of medical and other scientific
research studies has recently been challenged (Baker, 2016). The problem
of reproducibility may be accentuated in cutting-edge research with small
sample sizes, such as evaluation of novel, extremely expensive treatments
for rare cancer variants which could indeed benefit greatly from analysis by
AI, if adequate data were provided (Begley and Ellis, 2012).21 In addition to
all these issues, there is concern for random errors introduced by human
beings that handle the data.
In summary, it appears as if multiple issues and concerns related to data
curation may delay the development of a universal comprehensive system
that can conduct the entire process from diagnosis to treatment recommen-
dation until such time when universal methods for vetting, data formatting,
and algorithms for reproducibility of research studies have been instituted.

Ethical Challenges: Privacy, Confidentiality, and Informed Consent


The IBM Watson application for value-based care aims to integrate data
from insurance claims, laboratory test results, imaging studies, medical pro-
cedures, and other information from the EMR, as well as “other factors that
influence a person’s health—including socioeconomic status, environment,
social support and access to health care”22 to identify patients that are at
risk for developing a disorder or incurring particularly high cost. Analysis
by such an all-encompassing AI would enable the practice of personalized
medicine, as it may diagnose disorders at higher resolution and identify can-
didates for novel therapies. A comprehensive system would also allow for
faster and more efficient follow-up, as well as coordination of medication
and care provided at different facilities. Calculation of patients’ risk scores
based on genetic or other data would provide an opportunity for early inter-
vention to prevent development of disease. However, this would, at least
in the Western hemisphere, also raise issues regarding patients’ rights to
privacy and confidentiality. The AI application could confer undesirable con-
sequences if certain individuals or population groups would be targeted for
coerced or forced interventions based on risk evaluation. Because the Genetic
Information Nondiscrimination Act of 2008 (GINA) may be partially disman-
tled by H.R.1313 (Preserving Employee Wellness Programs Act),23 patients
with genetic disorders or disease traits and their family members may need
additional attention and protection. It is essential that patients and their
physician advocates be able to review the genetic information as it is en-
tered into the AI system to prevent errors from negatively interfering with
the system’s evaluation of risk and diagnostic decision-making. Furthermore,
patients should not be coerced or forced to have their genetic test results
entered into the system. Individuals or groups should not be discriminated
against, based on analysis of information from the AI. For instance, em-
ployers should not have the right to collect information from the system
about job applicants. Furthermore, it should not be possible for insurance
companies to make it mandatory for patients to be part of the system to ob-
tain insurance, as long as the information needed to make decisions about
coverage can be collected by alternative means.24,25
A comprehensive AI application would use the EMR and other patient
data in what may be characterized as a continuously ongoing research or
quality improvement project. The technical solutions for de-identification of
data for maintenance of privacy and confidentiality are beyond the scope of
this article. However, questions may be raised regarding informed consent
for patients that are part of this system, in particular since it would integrate
research applications with clinical practice. Since there would be limited
knowledge about the underlying developing algorithms and no previous
experience with similar systems, it would be very difficult to provide the
conventional components of informed consent, such as information about
expected benefits and risks and the likelihoods of each. Furthermore, cur-
rent informed consent procedures offer an option to agree or not to agree,
as well as information about any alternatives. If entering into the system
would be the only available option for a patient to receive a diagnosis or
a suggestion for treatment, the patient would in practice have no choice.
This would elicit concerns regarding patients’ autonomy and coercion. An
interesting question in this context is whether patients should have the right
to have certain sensitive data (or any data they do not wish to share) omitted
from their AI charts, although this would not only lead to loss of data from
the individual’s chart, but also potentially to loss of comprehensiveness and
possibly bias of the AI system if many patients choose such options. Finally,
one may ask whether a system that would use potentially fluctuating (see
below) black-box diagnostic algorithms could receive approval for use by
the U.S. Food and Drug Administration (FDA). Such a system would consti-
tute an entirely new paradigm, and the diagnostic function could not readily
be compared to currently approved panels of lab tests; it would have to be
evaluated as the only one of its kind or be waived.

The Perils of Relying on a Single Comprehensive AI System for Health Care
As previously discussed, the Watson AI system has been claimed to be de-
signed with the ultimate goal to comprise many different aspects of health
care, including insurance applications. In the future, such a system could
hypothetically reach global dominance. If so, potentially, individuals not
willing to enter it could be excluded from optimal health care. Furthermore,
with no opportunity to cross-check with another system, any glitches or errors
could go undetected and lead to serious consequences.26 Importantly, there
would be no possibility for patients and their families to request a second
opinion of equal quality, and the patient could be left with no other options
than to accept the conclusion and recommendations given by the “black box”
AI system. If the AI application would make independent decisions based
on peer-reviewed literature without human oversight, publication of a major
research study could result in diverging treatment recommendations for the
same patient from one day to the next. Both physicians and patients would
have to adapt to the fact that diagnostic methods and treatments would be in
constant flux, and entirely dependent on the decisions that come out of the
AI system; these unpredictable changes would make it even more difficult for
providers and patients to question results from the AI system. Both uninten-
tional bias and intentional manipulation of the content of the AI system could
lead to issues with the diagnostic process, as well as systematic misrepresenta-
tion of available treatments,27 forcing patients to accept certain types of treat-
ments and forgo others. Autonomy and freedom of choice would be at risk,
unless an option to access information about alternative treatments could be
incorporated. Maintaining the current mode of output of Watson Oncology,
where the suggested treatment is supplemented by a list of treatments that
could be considered, could enable the patient to choose the therapy that is
most consistent with his or her personal values and preferences but still would
not guarantee that all possible therapies would be included in the list.
On a related note, if the patient’s socioeconomic status would be incorp-
orated, the system could theoretically be designed to avoid recommending
or even displaying treatments and procedures that the patient could not af-
ford. Similarly, risk calculations, genetic test results, or other data integrated
into the system could be used to exclude individuals from certain insurance
options, possibly without notifying the patient that more favorable options
exist. In the interest of transparency and truthfulness, the patient should at
least have a choice to be informed about all possible treatment and insur-
ance options. Furthermore, if the entire population is contained in a single
system, genetic test results from one family member could theoretically be
used to draw conclusions about the genetic status of biologically related in-
dividuals. Such invasion of privacy could lead to discrimination in the form
of coerced treatments or loss of insurance options. If in the future a point
would be reached where a comprehensive globally dominant AI system be-
comes truly autonomous, it would likely assume responsibility for assessing
its own efficiency, if no countermeasures are taken. Thus, it would not be
possible to determine to what extent the AI fulfills its original purpose of
preserving and promoting human health. Finally, disruption or eradication
of the system would have disastrous consequences. All these concerns could
at least to some extent be alleviated by a separate parallel AI system that
could present alternative treatments, insurance options, an opportunity to
obtain a second review, and backup for catastrophic events, although the
creation and maintenance of an additional system could prove costly.

V. THE EXIGENCY OF REGULATION

Development of AI in health care will likely accelerate in the coming years.
Some medical devices with built-in AI algorithms that could be of great
benefit to individuals if correctly applied without violating the integrity of
the patients are already becoming available. For instance, the cloud-based
system developed by Catalia Health uses robots to collect information about
how patients take their medications, but also about “each patient’s person-
ality, interests, and treatment challenges.”28 The manufacturers refer to cur-
rent HIPAA regulation regarding treatment of the collected data. However, in
the future, we may enter into an entirely different phase in the advancement
of AI, where humanity will transfer at least some of the control of develop-
ment and quality assurance of AI technology to the technology itself. It is
crucial that all consequences of each significant step forward in this process
be carefully assessed to ascertain that future advanced AI applications do
not hurt human beings or compromise human values. Furthermore, all AI
algorithms should be readily reversible by human interference, should this
become necessary. To accomplish these goals, we argue that both develop-
ment and implementation of AI in health care need to be tightly regulated.
AI applications must be objectively and independently examined prior to
release, as well as monitored following implementation.29
One may respond that such dystopian views are not justified, based on a
belief that AI is unlikely to reach a state where it can operate independently
of human input within the foreseeable future. Indeed, the speed of progress
of AI is difficult to assess, due to the complexity of the process, and the
fact that much of the development is carried out in nontransparent settings.
However, if one extrapolates from the evolution of AI over the past few dec-
ades, continued rapid development seems likely.
Some may claim that sufficient legislation is already in place to cover
all future AI applications. Thus, specific management of risks associated
with developing AI in health care is redundant, and could in fact impede
evolution of beneficial applications by adding complexity to the process.
Furthermore, regulation could cause the United States to be surpassed by na-
tions that lack systems for control of AI where development may be driven
entirely by narrow and short-sighted economic or political interests.
However, many issues that are specific to AI, such as the legal responsi-
bility of owners, manufacturers, and programmers of AI for damages incurred
by the actions or nonactions of AI applications, remain to be regulated.
Some companies realize the need for increased transparency and regulation.
An example of self-imposed control was the “Independent Review Panel” at
the now Google-owned UK company DeepMind. It should be noted that this
group consisted of members selected by the company and has now been
dismantled (Hern, 2018).
Importantly, the complexity of regulation, including the difficulty to de-
termine which parameters should be assessed regarding developing AI in
health care, should not discourage stakeholders from initiating this process.
Indeed, regulation and surveillance of AI should be applied equally and
worldwide, even though countries with poorly developed legislation may
be particularly difficult to incorporate in such efforts. Finally, future legis-
lation must be proactive, yet written without formu-
lations that could hamper desirable progress. Efforts to regulate should not
be viewed as efforts to discontinue development, but merely to enable ad-
equate control to ensure that applications are beneficial to the users and that
they remain under human control.
The European Union (EU) is currently debating regulation of AI. A 2017
report from the EU Committee on the Environment, Public Health and Food
Safety with recommendations to the Commission on Civil Law Rules on
Robotics suggested that AI be granted status of “electronic personality,” which
would confer legal responsibility similar to what is currently assigned to cor-
porations (European Parliament, 2015). An AI application would be liable for
any damage it may cause and may be mandated to carry insurance to pay
for such damage to humans or property. This proposal was recently criti-
cized by a multidisciplinary group of AI experts for removing responsibility
for the potential actions of AI from designers and manufacturers. According
to this group, current EU legislation is sufficient for contemporary AI appli-
cations; however, a framework for regulation of future applications should
be constructed in consideration of not only legal and economic aspects, but
also societal, psychological, and ethical impacts (Robotics-openletter, 2021).
Several bills regarding regulation of AI were recently introduced to the
U.S. Congress (Fonzone and Heinzelman, 2018). Two of these specifically
address the expected implementation of autonomous vehicles.30 These bills
were introduced to preempt conflicting state laws and to promote invest-
ment and safety in AI applications.
The Future of AI Act (H.R.4625) aims to establish an advisory committee
that studies AI from a general perspective. It was written to promote invest-
ment and competitiveness of the United States, but also in response to con-
cerns regarding societal change precipitated by introduction of AI, as well as
possible bias and issues with data privacy.31,32 However, while surveillance
for development of autonomous AI may be handled by a general regulatory
framework, special regulation should be pursued separately for healthcare
applications. Considering the profound impact that AI applications in health
care will have on individuals, clinical practice, and society, and the fact that
such applications are already in trials or even in practical use, it seems rea-
sonable that the same close attention be paid to these applications as to
autonomous cars.
Given the speed of development, monitoring of developing AI should be
initiated as soon as possible. Oversight should be by entities that are organ-
ized and funded separately from any financial interests behind AI software
development, such as a governmental agency or other entities with trans-
parent organization and funding.
One suggested strategy for surveillance would be to monitor specifically
for breakthroughs that are considered necessary for further development of
AI. However, passive monitoring would not likely be sufficient, unless the
monitoring agency also has the authority to take action if it is determined
that a certain medical device should not be allowed on the market. It is con-
ceivable that the FDA could be expanded to deal with AI as well as other
medical devices, or serve as a model for such an agency.33
When it comes to the continuous monitoring and quality assurance of AI
devices that are already in place, a model such as the one used for quality
control in the field of laboratory medicine in the United States could be en-
visaged. There, the diagnostic processes are regularly challenged by entering
artificial “mock” samples or de-identified patient cases provided by a central
organization34 into the diagnostic system, then proceeding according to the
usual protocols to check that the system is able to provide an optimal result.
Furthermore, one could envisage inspectors (human or independent AI)
with authority to enter each system and challenge its integrity as well as its
ability to preserve patient privacy. As previously pointed out, although some
steps in the process such as data entry and curation could be monitored by
repeated analyses by the same system, access to more than one AI system
would allow for the whole process to be checked by comparing different
systems.
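
A proficiency-testing cycle of this kind can be sketched as follows (our illustration; the mock cases are invented, and diagnose() is a stand-in for whatever AI system is under surveillance):

```python
# Run "mock" cases with known correct answers through the diagnostic
# system and flag any discrepancy for human review.
mock_cases = [  # (de-identified case data, expected result)
    ({"wbc": 18.2, "fever": True},  "bacterial infection"),
    ({"wbc": 6.1,  "fever": False}, "normal"),
]

def diagnose(case):
    # placeholder for the system under test
    return "bacterial infection" if case["wbc"] > 11 and case["fever"] else "normal"

results = [(case, expected, diagnose(case)) for case, expected in mock_cases]
failures = [r for r in results if r[1] != r[2]]
print(f"{len(results) - len(failures)}/{len(results)} mock cases passed")
for case, expected, got in failures:
    print(f"FLAG for review: expected {expected!r}, got {got!r} on {case}")
```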
Given the lack of knowledge of the long-term consequences of introducing
AI in health-care applications and the fact that the outcomes of such imple-
mentation could diverge greatly from what is currently foreseeable, regula-
tory frameworks must be created to be flexible to enable timely responses
to challenges as they emerge.
Regulations regarding technology in development are inherently difficult
to formulate. Models for risk management have been explored in the con-
text of nanotechnology (Jotterand, 2006; Alexander and Jotterand, 2014).
These emphasize incorporation of stakeholders, transparency, and commu-
nication (Jardine et al., 2003). The International Risk Governance Council
(IRGC) created a framework that emphasized a more global approach to
risk management by including all circumstances, including processes, stake-
holders, and institutions (Renn and Roco, 2006). A similar risk governance
framework was developed by Marchant et al. (2008). A high degree of trans-
parency and vigilance is of the essence, so that issues can be instantly de-
tected and addressed as they transpire. The IRGC emphasizes flexibility to
meet unforeseen challenges rather than legal regulations, since the latter
may be too slow to implement. Such unorthodox arrangements could be
challenging from the perspective of justice (ensuring unbiased evaluations).
However, while theranostics are graspable “known unknowns” (Jotterand
and Alexander, 2011), AI in health care possesses the potential of being an
“unknown unknown,” since it is not even possible to define a spectrum of
consequences. Furthermore, a global system would be difficult to monitor
for adverse effects, since such issues could themselves be incorporated into
the system and difficult to isolate for evaluation.
As AI in health care develops from today’s decision support algorithms
based on detailed human input into applications with “black box” algo-
rithms, surveillance may have to utilize equally complex algorithms that are
impossible for human beings to understand. So-called “safe AI” (efforts to
neutralize malicious AI) may be helpful in this regard.35
We believe that the medical profession should drive the development
toward appropriate surveillance and regulation together with other stake-
holders and, importantly, that public input should be sought. Medical pro-
fessional organizations could spearhead the debate by developing ethical
frameworks for the use of AI in health care.

VI. TOWARD FORMULATION OF A MORAL FRAMEWORK FOR AI IN HEALTH CARE

The application of AI presents an unprecedented challenge to health care.
We have argued that global regulation while recognizing local moral iden-
tities is necessary to preserve human values in this process. A laissez-faire
approach without defined goals or oversight cannot be justified, given what
is at stake. Until now, scientific progress has been driven by a combination
of economic, political, and social incentives. However, we are at a point
where technological development could potentially emerge to drive itself,
which means that regulation is urgent. AI is a new paradigm in development
and needs to be addressed as such. As discussed above, regulation could be
developed, based on existing law. However, we do not believe that simple
extrapolation from regulation of older technologies would be a sustainable
strategy. Therefore, we propose that a new moral framework for AI in health
care be developed to provide a foundation for future regulation.
Clearly, developing a substantial and detailed yet universally acceptable
moral framework for AI will be an exceptionally difficult, if not impossible
undertaking. Moral and cultural diversity in modern society and the neces-
sity to include countries and ethnic groups that have not traditionally been
afforded much weight in decision-making on a global scale creates tremen-
dous complexity in reaching consensus. The solution cannot lie in scru-
pulous application of traditional religious or ethical frameworks. The new
framework must deal with unknown entities, that is, technology in devel-
opment where the end result cannot be envisaged. This requires a different
approach.
Recognizing that true and complete moral consensus cannot be practic-
ally accomplished, we propose to use a strategy for consensus building
around a moral framework for novel methodologies based on the concepts
of deliberative democracy (Gutmann and Thompson, 2004): the procedural
integrated model (PIM) previously suggested by Jotterand in the context
of theranostics (Jotterand and Alexander, 2011). The core of this model is
the generation of norms and values by deliberative democratic processes
involving a dynamic dialogue between stakeholders and the public where
all participants are heard and respected.
Creation of a moral framework for AI would be best accomplished by a
transdisciplinary approach bringing together engineers, physicians, and law-
makers with representatives from the humanities, as well as lay people. This
can only be accomplished if there is shared responsibility for the process it-
self, as well as for the outcome of the process, if flow of information is free,
if all stakeholders can participate in the process, if there is fair representation
of various opinions in the fora where debates take place, and if the exchange is
optimized to include as many individuals as possible. Importantly, a delibera-
tive democratic framework requires justification of processes. Decisions may
be made by the majority in the absence of true consensus; however, such de-
cisions may later be challenged and modified through a continued dialogue
(Gutmann and Thompson, 2004). Transparency in the proposed process is
limited by the fact that the technical details of the AI technology are only ac-
cessible to a small group of experts. One may argue that extensive knowledge
about AI is not necessary for a debate about moral values. However, we be-
lieve that manufacturers and lawmakers have a responsibility to provide the
public with timely and comprehensible information about progress in AI to
enable the public to create well-informed opinions about AI in health care.
The internet could provide excellent opportunities to disseminate infor-
mation and expand the debate to involve lay people in a discussion about
ethical concepts. In 2018, more than half of the world population had ac-
cess to the internet, although in some countries such access may be limited
for political or geographical (coverage) reasons. However, an obvious chal-
lenge would be to display ethical problems in such a way that they can be
accessed, understood, and processed by most people. Presenting ethical
dilemmas as “cases” where individuals are allowed to make decisions would
be one way of initiating an ethical discourse that would involve a larger
group of individuals to make object-level judgments.36 The “moral machine,”
an online platform created by researchers at MIT, exemplifies this con-
cept (see moralmachine.mit.edu). It was recently used to explore moral
decisions by over 40 million users worldwide. Specifically, the application
investigated moral issues resembling the trolley problem that arise in the
implementation of autonomous cars. The layout is essentially that of a game
where the decision-maker (player) is presented with a series of catastrophic
scenarios where two alternative solutions are provided. The player makes a
“most acceptable choice” by simply clicking on the graphical representation
of the preferred alternative.37
It is possible that a similar device could help identify differences as well
as overlapping moral grounds between different groups of people also in
the context of AI in health care. Could such surveys even form the basis for
a moral framework?
One could argue against the “moral machine” that it allows for only a very
simplified view of ethical problems and encourages “gamification”38 of ser-
ious issues of life and death.39 However, we believe that it may be possible
to create internet surveys with a somewhat different approach, which could
provide a basis for communication at the level of regulatory concepts, for
instance, to ensure that regulations to be instituted are in agreement with
the opinions of most people. Although problems that require more abstract
thinking do not lend themselves as easily to “gamification,” it may be pos-
sible to construct surveys that could provide valuable input to moral dis-
course with the ultimate goal to develop a moral framework that is not only
based on object-level judgments but also based on reasons and choices.
Interestingly, the “moral machine” survey found that ethnic groups clus-
tered together in terms of preferences. On the other hand, the preferences
of each cluster were not markedly different from those of the other clusters.
Although the responders to this survey represent a selection of individuals
limited by both access to the internet and interest in new technology, these
data lend some support to the idea that it would be possible to build con-
sensus regarding ethical aspects of various technologies.
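
The clustering of survey responses described above can be illustrated with a toy analysis (ours, not the study's actual method; the synthetic respondents and the simple k-means grouping are invented for illustration):

```python
import numpy as np

# Each respondent is a vector of ten binary scenario choices; grouping
# similar vectors reveals clusters of like-minded respondents.
rng = np.random.default_rng(1)
group1 = (rng.random((50, 10)) < 0.8).astype(float)   # tends to choose option 1
group2 = (rng.random((50, 10)) < 0.3).astype(float)   # tends to choose option 0
respondents = np.vstack([group1, group2])

centers = respondents[[0, -1]]                        # two initial cluster centers
for _ in range(10):                                   # a few k-means iterations
    dists = ((respondents[:, None] - centers) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([respondents[labels == k].mean(axis=0) for k in (0, 1)])

print(np.bincount(labels))   # sizes of the two recovered preference clusters
```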

VII. CONCLUSION

Development of AI applications may lead to an unprecedented paradigm
shift in the practice of medicine. The potential short- and long-term impact
both at the individual level and in society as a whole must be elucidated and
debated among all stakeholders, including the public. If a point is reached
where the diagnostic process and treatment decisions are entirely guided
by an AI system, physicians would abdicate their positions as diagnosticians
and decision-makers, while retaining important roles as counselors and ad-
vocates for their patients. The medical profession has a special responsibility
to work to increase beneficence and transparency of AI systems, promote
patient autonomy, and decrease the risk for harm. Ideally, a comprehensive
AI system would become available to patients of all geographical and eco-
nomic strata and transform health care by introducing an entirely new way
of making evidence-based diagnosis and treatment recommendations with
increased accuracy and precision. However, many issues beyond technical
development remain and should be resolved prior to implementation. AI
systems should not be exploited to increase inequality by allowing selection
of certain groups of patients for research, diagnostic screening, or treatments,
while ignoring others, or to discriminate against individuals based on genetic
or other data. All patients who wish to enroll should be included, while those
who wish to remain outside the AI system should still be able to receive
health care. Issues with informed consent, privacy, and confidentiality must
be reconciled. The contexts in which AI applications may be used should
be clearly defined. Importantly, if only one global system were available,
any kind of failure, be it eradication, inclusion of erroneous data, or
failure to protect patients from misuse of the system, would have profound
consequences. Mechanisms for detection of errors and the ability to obtain
a second opinion regarding diagnosis or suggested therapy should be ensured
by promoting development of parallel systems or independent loops within
the system. Construction and implementation of AI should proceed in a
tightly controlled and transparent manner to make certain that AI always
remains under the control of human beings and that potential problems can
be assessed independently. Given that AI applications in health care will
likely have a major influence on human life, it is essential to strive to
base their use on firm moral ground. We believe that respect for the
uniqueness and intrinsic value of humanity and of all human beings, together
with the fundamental concepts of human autonomy and social justice,
constitutes a primary foundation
on which to build a framework. Physicians have a unique opportunity and a
crucial responsibility to monitor and modulate the development and imple-
mentation of AI in health care. However, at this critical time in the
evolution of AI, an open discussion must be initiated that involves ethicists
and humanists, those engaged in the technical development itself, and the
public. It is the intent of this article to invite this critical debate.

NOTES

1. Stephen Hawking speaking at the launch of the Center for the Future of Intelligence at Cambridge
University, October 2016. Available at http://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-
happen-to-humanity-stephen-hawking-launches-center-for-the-future-of (accessed September 16, 2021).
2. Neural networks can now be trained so that they do not “forget” tasks on which they were previously
trained; this makes sequential learning of several tasks possible (Kirkpatrick et al., 2017). Data can be
shared between units and universally (cloud function).
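As a rough, purely illustrative sketch of the mechanism described by Kirkpatrick et al. (2017), elastic
weight consolidation, the following toy Python code anchors parameters that were important to a previously
learned task; the function name and all numerical values are our own assumptions.

```python
# Toy sketch of an elastic weight consolidation (EWC)-style penalty,
# the mechanism by which Kirkpatrick et al. (2017) reduce "forgetting."
# All values and names are invented for illustration.
import numpy as np

def ewc_loss(task_b_loss, theta, theta_star, fisher, lam=1.0):
    # Parameters that mattered to task A (large Fisher value) are anchored
    # near their old values, so training on task B does not erase task A.
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
    return task_b_loss + penalty

print(ewc_loss(0.8,
               theta=np.array([1.2, 0.4]),
               theta_star=np.array([1.0, 0.5]),
               fisher=np.array([5.0, 0.1])))
```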
3. This refers to physicians such as general practitioners that determine which patients can receive
referrals or access certain laboratory tests within the framework of various health plans.
4. A formatted table is an example of “structured information,” while a patient chart or a scientific
article is an example of “unstructured information.”
5. Carebots are robots equipped with AI that are employed in the specialized care of elderly or
disabled patients.
6. This argument contrasts with the concept of Neuroessentialism, which can be defined as
the belief that the brain alone contains and determines all aspects of personhood. For criticism of
neuroessentialism from a perspective of moral philosophy and enhancement, see Jotterand (2016).
7. Mechanisms behind rational and irrational human behavior, including the influence of bias
and emotions are being uncovered by current research in neuroendocrinology, neuroeconomics, and
related fields.
8. A metaphysical analysis of the differences between AI applications and the human mind is not
within the scope of this article.
9. In his book “Mind Children” from 1988, futurist Hans Moravec extended Moore’s law (the observa-
tion made in 1965 by Gordon Moore that the number of transistors that could be placed per square inch
on an integrated circuit had doubled every year since the invention of the integrated circuit, and the
prediction that this trend would continue in the future [later adjusted to every two years]) to other
technologies and proposed that robots may evolve into a new artificial species, creating a “mindfire” of
superintelligence. Vernor Vinge discussed the concept of the technological Singularity in the context of
AI in a 1993 essay (the
concept of machines with superhuman intelligence that could rapidly evolve in an “intelligence explosion”
was first introduced by I. J. Good in 1966 [Good, 1966]). Vinge argued that humanity is at the verge of fun-
damental change, which would be brought about by creation of superhuman artificial intelligence. He sug-
gested several possible components to such a development, including creation of potent AI, giant computer
networks (corresponding to the development of the internet), computer/human interfaces that enhance the
human component (intellectual amplification of humans), and enhancement through “biological science”
(Vinge, 1993). The Transhumanist movement aims to promote the process of transformation of humans
to “Post-humans” (i.e., human beings that exist as entities with capacities beyond what is currently con-
sidered normal for humans). This would be accomplished by application of various artificial enhancements,
including AI, or even by merging of human beings with AI into new types of entities. See also Bostrom
(2003). The law of accelerating returns was proposed by Ray Kurzweil in his 1999 book, The Age of
Spiritual Machines; in this context, “returns” means technological progress. Kurzweil
(2005) predicted that the technological Singularity would occur within a few decades, and furthermore that
it would lead to a merge between biological and nonbiological intelligence.
10. Big data are commonly understood to be high-volume data sets that are so extensive that they
cannot be handled by regular software.
11. Watson was built by IBM within the DeepQA project and named after IBM’s first CEO. The
application can answer questions posed in natural language. Prior to the Jeopardy challenge, Watson
had accessed 200 million pages of structured and unstructured text. It was not connected to the internet
during the game.
12. Watson takes in text, analyzes it, retrieves facts from databases, builds hypotheses, evaluates
level of confidence in its solution of the task, and provides an answer in a synthesized voice. The system
sorts nouns, verbs, and prepositions to comprehend the meaning of the text (Jackson, 2011).
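Schematically, the stages listed in this note can be approximated as a simple pipeline; the sketch below
is our own simplified approximation and does not reproduce IBM’s DeepQA implementation.

```python
# Schematic sketch of the question-answering stages described above:
# parse the text, retrieve candidate facts, and score confidence.
# The toy corpus and fixed confidence are placeholders, not IBM's DeepQA.
from typing import List, Tuple

def parse(question: str) -> List[str]:
    # Crude stand-in for the linguistic analysis of nouns, verbs, etc.
    return question.lower().rstrip("?").split()

def retrieve(terms: List[str]) -> List[str]:
    # Stand-in for fact retrieval from structured/unstructured sources.
    corpus = {"insulin": "Insulin is produced by the pancreas."}
    return [fact for term, fact in corpus.items() if term in terms]

def answer(question: str) -> Tuple[str, float]:
    candidates = retrieve(parse(question))
    if not candidates:
        return "No answer found.", 0.0
    # A fixed confidence is a placeholder; DeepQA scores each hypothesis
    # against the evidence it has gathered.
    return candidates[0], 0.9

print(answer("Which organ produces insulin?"))
```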
13. Watson first analyzed the previous work of the designers, including how many times photo-
graphs of specific dresses had been published, then decided on material and color for the dress to be
designed. Watson made the creation change colors by communicating with a computer built into the
dress. The chosen colors depended on the moods expressed in concurrent Tweets about the dress. See
https://www.ibm.com/blogs/internet-of-things/cognitive-marchesa-dress/.
14. For an overview of current applications of Watson, see the following link. Note that similar ap-
plications may be in development by other companies. https://www.ibm.com/watson/health/.
15. For example, AI can find new indications for drugs already on the market.
16. As an example, Watson was recently shown to be able to classify proteins as likely to be in-
volved in cardiac disease after being trained using annotated material from a large scientific database
(Ruff et al., 2017).
17. The IBM Watson Health website contains information regarding current initiatives within the fields
of oncology and genomics. Available at https://www.ibm.com/watson/health/oncology-and-genomics/.
18. See Liu (2018).
19. A collaboration project between IBM Watson and the University of Texas MD Anderson Cancer
Center (the Oncology Expert Adviser) was suspended in 2017 after an audit showed issues with project
management. See Ackerman (2017).
20. It should be noted that if counseling of patients does not involve decision-making, that is,
does not require the specific competence of a physician, it could also be performed (perhaps in a more
cost-efficient manner) by physician assistants or genetic counselors.
21. A detailed analysis of issues with reproducibility in scientific research is beyond the scope of this article.
22. A more detailed description of IBM Value-based Care applications is available at https://www.
ibm.com/watson/health/value-based-care/.
23. H.R. 1313, the Preserving Employee Wellness Programs Act, “exempts workplace wellness
programs from: (1) limitations under the Americans with Disabilities Act of 1990 on medical examin-
ations and inquiries of employees, (2) the prohibition on collecting genetic information in connection
with issuing health insurance, and (3) limitations under the Genetic Information Nondiscrimination Act of
2008 on collecting the genetic information of employees or family members of employees.…” “Collection
of information about a disease or disorder of a family member as part of a workplace wellness program
is not an unlawful acquisition of genetic information about another family member.” The bill is available
at https://www.congress.gov/bill/115th-congress/house-bill/1313/all-actions?overview=closed#tabs.
24. The latter in turn requires alternative solutions be in place for individuals who do not wish to
be part of the AI system.
25. Detailed legal analysis is beyond the scope of this article.
26. The risks of deficiencies in current algorithm-based software in electronic health record systems
are illustrated by the recent 999 million-dollar lawsuit against eClinicalWorks, a software supplier for
850,000 health-care providers, for breach of fiduciary duty and gross negligence, claiming that glitches
in the software led to multiple issues resulting in misleading records.
27. Currently, discussion of alternative treatments is a required part of the informed consent process.
28. Mabu, “the personal healthcare companion,” tailors its communication with patients according
to their responses, using a “proven model of behavioral psychology to promote behavioral change.”
29. The basic concepts devised by Isaac Asimov in his 1942 short story “Runaround” (later collected
in his 1950 book I, Robot) have been put forward by several authors as a possible starting point for
development of a general regulatory framework for
AI. Asimov defined the “three laws of robotics” as follows: (1) A robot may not injure a human being or,
through inaction, allow a human being to come to harm. (2) A robot must obey the orders given to it by
human beings, except where such orders would conflict with the First Law. (3) A robot must protect its
own existence as long as such protection does not conflict with the First or Second Law. Later, Asimov
devised a fourth or zeroth law that would override the other three: A robot may not harm humanity or, by
inaction, allow humanity to come to harm. Clearly, this loose framework was never meant to be applied
directly to real-life situations. For instance, in many scenarios, the concepts of not causing harm or not
allowing human beings or humanity to be harmed would be open to interpretation.
Oren Etzioni recently suggested further expansion of the Asimov laws to include responsibility for
owners of AI, obligation for AI to disclose it is not human, and a rule against retention or disclosure of
confidential information by AI without explicit permission from the source of the information. See Etzioni
(2017) for more information.
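Purely to illustrate how loose this framework is, the prioritized laws can be encoded as a simple action
filter, as in the sketch below; the boolean predicates are our own placeholders and conceal precisely the
interpretive questions (what counts as harm?) noted above.

```python
# Toy encoding of Asimov's prioritized laws as an action filter.
# The boolean predicates are placeholders: deciding what counts as
# "harm" or an "order" is the interpretive gap discussed above.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # zeroth law (overrides all others)
    harms_human: bool = False      # first law
    disobeys_order: bool = False   # second law
    endangers_robot: bool = False  # third law

def permitted(action: Action) -> bool:
    # Laws are checked in priority order; a lower law applies only
    # insofar as it does not conflict with a higher one.
    if action.harms_humanity or action.harms_human:
        return False
    if action.disobeys_order:
        return False
    return not action.endangers_robot

print(permitted(Action(endangers_robot=True)))  # False: violates third law
```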
30. The SELF DRIVE Act (H.R.3388) contains a federal framework for driverless vehicles. It was passed
by the House, while the corresponding Senate bill, the AV START Act, is still pending. A majority of states
have already enacted legislation specifically for this purpose; see www.ncsl.org.
31. H.R.4829 and H.R.5356, respectively. There is also a bill that deals specifically with the impact
of AI on the job market and one that targets national security issues.
32. A detailed analysis of the legislation pertaining to AI is beyond the scope of this article.
33. The FDA, an agency within the U.S. Department of Health and Human Services, protects the
public health by assuring the “safety, effectiveness, and security of human and veterinary drugs, vaccines
and other biological products for human use, and medical devices. The agency also is responsible for
the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off
electronic radiation, and for regulating tobacco products.” See https://www.fda.gov/ for more.
34. In the case of lab medicine, a professional organization (CAP, College of American Pathologists)
handles these controls.
35. So-called “safe AI” is built to neutralize malicious AI. Efforts toward development of such AI
include the creation of OpenAI, a nonprofit research company, with the mission to build safe artificial
general intelligence (AGI), to be an extension of individual human wills, and ensure AGI benefits are as
widely and evenly distributed as possible. See, for example, https://openai.com.
36. The choice between different options to act in specific cases.
37. There is no minimum time during which the user must ponder the dilemma; the choice could be
made in an instant. The use of practical examples places this game at the level of object-level judgments.
38. “Gamification” is the use of game design and principles in contexts other than games. See, for
example, https://en.wikipedia.org/wiki/Gamification.
39. In essence, the application leads people to weigh different interests against each other and make
simplistic calculations that decide which human lives should be sacrificed or saved, based on scant informa-
tion about individuals such as gender, age, whether they are “athletic” or “homeless,” and so forth. It does
not ask players whether they believe it is appropriate that the AI be enabled to make these decisions at all,
or challenge them to think in more abstract terms. Importantly, there is no option to refuse to make a choice.

REFERENCES

Ackerman, T. 2017, Feb. 17. Touted IBM supercomputer project at MD Anderson on hold
after audit finds spending issues. Houston Chronicle [On-line]. Available: https://www.
houstonchronicle.com/news/houston-texas/houston/article/Touted-IBMsupercomputer-
project-at-MD-Anderson-10941783.php. (accessed September 9, 2021).
Alexander, A. A., and F. Jotterand. 2014. Market considerations for nanomedicines and
theranostic nanomedicines. In Cancer Theranostics, eds. X. Chen and S. Wong, 471–89.
Amsterdam, The Netherlands: Elsevier.
Allen, P. G. 2011. The Singularity is not near. MIT Technology Review [On-line]. Available:
https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/ (accessed
September 9, 2021).
American Medical Association. 2018. AMA Code of Medical Ethics [On-line]. Available: https://
www.ama-assn.org/delivering-care/ama-code-medical-ethics (accessed September 13, 2021).
Baker, M. 2016. 1,500 Scientists lift the lid on reproducibility. Nature 533(7604):452–4.
Begley, C. G., and L. M. Ellis. 2012. Drug development: Raise standards for preclinical cancer
research. Nature 483(7391):531–3.
Bejnordi, B. E., M. Veta, and P. J. van Diest. 2017. Diagnostic assessment of deep learning
algorithms for detection of lymph node metastases in women with breast cancer. JAMA
318(22):2199–210.
Bostrom, N. 2003. The Transhumanist FAQ—A General Introduction [On-line]. Available:
https://nickbostrom.com/views/transhumanist.pdf (accessed September 9, 2021).
Charter on Medical Professionalism. 2002, June 19. Annals of Internal Medicine [On-line].
Available: http://abimfoundation.org/what-we-do/physician-charter (accessed
September 13, 2021).
Chen, Y., E. Argentinis, and G. Weber. 2016. IBM Watson: How cognitive computing can
be applied to big data challenges in life sciences research. Clinical Therapeutics
38(4):688–701.
Clark, J. 2015. This preschool is for robots. Bloomberg.com [On-line]. Available: https://www.
bloomberg.com/features/2015-preschool-for-robots/ (accessed September 9, 2021).
Etzioni, O. 2017, Sept. 1. How to regulate artificial intelligence. The New York Times [On-line].
Available: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-
regulations-rules.html (accessed September 9, 2021).
European Parliament. 2015. Report with Recommendations to the Commission on Civil
Law Rules on Robotics [On-line]. Available: https://www.europarl.europa.eu/doceo/
document/A-8-2017-0005_EN.html (accessed September 13, 2021).
Fonzone, C., and K. Heinzelman. 2018. What Congress’s first steps into AI legislation portend.
Bloomberg Law. Big Law Business [On-line]. Available: https://biglawbusiness.com/
what-congresss-first-steps-into-ai-legislation-portend/ (accessed September 9, 2021).
Good, I. J. 1966. Speculations concerning the first ultraintelligent machine. Advances in
Computers 6(1):31–88.
Gutmann, A., and D. F. Thompson. 2004. Why Deliberative Democracy? Princeton, NJ:
Princeton University Press.
Hern, A. 2018. Google ‘betrays patient trust’ with DeepMind health move. The Guardian
[On-line]. Available: https://www.theguardian.com/technology/2018/nov/14/
googlebetrays-patient-trust-deepmind-healthcare-move (accessed September 13, 2021).
Jackson, J. 2011. IBM Watson vanquishes human Jeopardy foes. PC World [On-line]. Available:
https://www.pcworld.com/article/219893/ibm_watson_vanquishes_human_jeopardy_foes.html
(accessed September 9, 2021).
Jardine, C. G., S. E. Hrudey, J. H. Shortreed, L. Craig, D. Krewski, C. Furgal, and S. McColl.
2003. Risk management frameworks for human health and environmental risks. Journal
of Toxicology and Environmental Health, Part B 6(6):570–718.
Jha, S. 2016. Adapting to artificial intelligence: Radiologists and pathologists as information
specialists. JAMA 316(22):2353–4.
Jotterand, F. 2006. The politization of science and technology: Its implications for nanotech-
nology. Journal of Law, Medicine and Ethics 34(4):658–66.
———. 2016. Moral enhancement, neuroessentialism, and moral content. In Cognitive
Enhancement: Ethical and Policy Implications in International Perspectives, eds.
F. Jotterand and V. Dubljevic, 42–56. New York: Oxford University Press.
Jotterand, F., and A. A. Alexander. 2011. Managing the “known unknowns”: Theranostic
cancer nanomedicine and informed consent. In Biomedical Nanotechnology: Methods
and Protocols Methods in Molecular Biology, vol. 726, ed. S. J. Hurst, 413–29. Dordrecht,
The Netherlands: Springer Science Business Media.
Kirkpatrick, J., R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, et al.
2017. Overcoming catastrophic forgetting in neural networks. PNAS 114(13):3521–6.
Kurzweil, R. 2005. The Singularity Is Near: When Humans Transcend Biology. New York:
Viking Books.
Lasagna, L. 1964. Modern Oath of Physicians [On-line]. Available: http://www.pbs.org/wgbh/
nova/body/hippocratic-oath-today.html (accessed September 13, 2021).
Liu, F. 2018, Sept. 27. Artificial swarm intelligence diagnoses pneumonia better than individual
computer or doctor. The Stanford Daily [On-line]. Available: https://www.stanforddaily.
com/2018/09/27/artificial-swarm-intelligence-diagnoses-pneumonia-better-than-
individual-computer-or-doctor/ (accessed September 14, 2021).
Marchant, G. E., D. J. Sylvester, and K. W. Abbott. 2008. Risk management principles for nano-
technology. Nanoethics 2(1):43–60.
Mnih, V., K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
et al. 2015. Human-level control through deep reinforcement learning. Nature
518(7540):529–33.
Patel, V. L., E. H. Shortliffe, M. Stefanelli, P. Szolovits, M. R. Berthold, R. Bellazzi, and
A. Abu-Hanna. 2009. The coming of age of artificial intelligence in medicine. Artificial
Intelligence in Medicine 46(1):5–17.
Regalado, A. 2013. The Brain is not computable. MIT Technology Review [On-line]. Available:
https://www.technologyreview.com/s/511421/the-brain-is-not-computable (accessed
September 9, 2021).
Renn, O., and M. C. Roco. 2006. Nanotechnology and the need for risk governance. Journal
of Nanoparticle Research 8(2):153–91.
Robotics-openletter. 2021. Open Letter to the European Commission Artificial Intelligence and
Robotics [On-line]. Available: http://www.robotics-openletter.eu/ (accessed September
13, 2021).
Ruff, C. T., A. Lacoste, F. Nordio, C. L. Fanola, M. G. Silverman, E. Argentinis, S. Spangler,
and M. S. Sabatine. 2017. Classification of cardiovascular proteins involved in cor-
onary atherosclerosis and heart failure using Watson’s cognitive computing technology.
Circulation 136(Suppl 1):A16678.
Sharma, G., and A. Carter. 2017. Artificial intelligence and the pathologist: Future frenemies?
Archives of Pathology & Laboratory Medicine 141(5):622–3.
Silver, D., J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert,
et al. 2017. Mastering the game of Go without human knowledge. Nature
550(7676):354–9.
Spangler, S., A. D. Wilkins, B. J. Bachman, M. Nagarajan, T. Dayaram, P. Haas, S. Regenbogen,
et al. 2014. Automated hypothesis generation based on mining scientific literature. In
Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining. New York: Association for Computing Machinery.
Straumsheim, C. 2016. “Augmented intelligence” for higher ed. Inside Higher Ed [On-line].
Available: https://www.insidehighered.com/news/2016/11/16/blackboard-pearson-
joinibms-ecosystem-bring-watson-technology-higher-ed (accessed September 9, 2021).
Vinge, V. 1993. The coming technological singularity: How to survive in the post-human
era. Article for the VISION-21 Symposium sponsored by NASA Lewis Research Center
and the Ohio Aerospace Institute, March 30–31 [On-line]. Available: https://edoras.sdsu.
edu/~vinge/misc/singularity.html (accessed September 9, 2021).
Williams, A. M., Y. Liu, K. R. Regner, F. Jotterand, P. Liu, and M. Liang. 2018. Artificial in-
telligence, physiological genomics, and precision medicine. Physiological Genomics
50(4):237–43.
