Critical Assessment of AI in Health Care
https://doi.org/10.1093/jmp/jhab036
ANNIKA M. SVENSSON*
Länssjukhuset i Kalmar, Kalmar, Sweden
FABRICE JOTTERAND
Medical College of Wisconsin, Milwaukee, Wisconsin, USA
University of Basel, Basel, Switzerland
*Address correspondence to: Annika Svensson, MD, PhD, MA, Länssjukhuset i Kalmar,
Lasarettsv. 8, 392 44 Kalmar, Sweden. E-mail: Annika.Svensson@ymail.com
—Stephen Hawking1
© The Author(s) 2022. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc.
All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
156 Annika M. Svensson and Fabrice Jotterand
I. INTRODUCTION
with text recognition; image, voice, and speech recognition systems; software for recognition of unexpected patterns in big data sets (data mining); and “smart robots,” including “carebots,”5 for use in healthcare settings.
Applications for clinical decision-making are in development. However, before turning to emerging applications of AI in health care, we offer a historical overview.
Current AI Applications
In 2011, the “super computer” Watson11 beat two well-known human champions in the trivia game show Jeopardy. The machine thus proved that it
could deal with “the ambiguity and contextual nature of language.”12 This
was a significant development from previous AI applications that were designed to play games such as chess, the Chinese game Go, or even games that
involve incomplete information such as poker (Silver et al., 2017; Williams
et al., 2018). The ability to understand a sentence in natural human language
also differentiated Watson from regular text search engines that deliver a list
of results that are related to certain keywords in order of popularity. Watson
now uses not only deductive and inductive, but also abductive reasoning.
This has been employed in applications such as personalized marketing, “intelligent tutoring systems” (Straumsheim, 2016), and even dress design.13,14
Experiments exploring the Watson application have demonstrated creation of novel hypotheses based on mining of large amounts of scientific
literature (Spangler et al., 2014, 1877–86). In other pilot projects within the
field of pharmacological research, new drug targets have been identified,
and suggestions for repurposing of currently used drugs have been generated by AI.15,16 In this very narrow sense, that is, creation of novel hypotheses based on its ability to analyze big data, Watson would already be more
system through local physicians would also lead to less cost for patients in
remote settings compared to seeking care at a U.S. hospital.
At this time, there are no publicly available data that show how Watson or
any other AI system can work in a more diverse setting, that is, handle more
than one disease group at a time. Also, it has not yet been demonstrated that
as such present an issue for most patients, since many healthcare providers
already work with the EMR on a screen in the presence of their patients;
however, the fact that the system (not the physician) provides the diagnosis
and preferred treatment option would be a new feature that would be obvious to the patient.
for rare cancer variants which could indeed benefit greatly from analysis by
AI, if adequate data were provided (Begley and Ellis, 2012).21 In addition to
all these issues, there is concern for random errors introduced by human
beings that handle the data.
In summary, it appears as if multiple issues and concerns related to data
same patient from one day to the next. Both physicians and patients would
have to adapt to the fact that diagnostic methods and treatments would be in
constant flux, and entirely dependent on the decisions that come out of the
AI system; these unpredictable changes would make it even more difficult for
providers and patients to question results from the AI system. Both uninten-
AI. However, passive monitoring would not likely be sufficient, unless the
monitoring agency also has the authority to take action if it is determined
that a certain medical device should not be allowed on the market. It is conceivable that the FDA could be expanded to deal with AI as well as other medical devices, or serve as a model for such an agency.33
One could argue against the “moral machine” that it allows for only a very
simplified view of ethical problems and encourages “gamification”38 of serious issues of life and death.39 However, we believe that it may be possible
to create internet surveys with a somewhat different approach, which could
provide a basis for communication at the level of regulatory concepts, for
VII. CONCLUSION
NOTES
1. Stephen Hawking speaking at the launch of the Center for the Future of Intelligence at Cambridge
University, October 2016. Available at http://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-
happen-to-humanity-stephen-hawking-launches-center-for-the-future-of (accessed September 16, 2021).
2. Neural networks can be built so that they do not “forget” tasks on which they were previously trained, which means that sequential learning of several tasks is possible (Kirkpatrick et al., 2017). Data can be shared between units and universally (cloud function).
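The mechanism described in note 2 can be illustrated with a deliberately simplified sketch: training on a second task ordinarily overwrites what was learned on the first, but adding a quadratic penalty that anchors a parameter near its earlier solution yields a compromise between the two tasks. This toy example is merely in the spirit of the approach in Kirkpatrick et al. (2017); the single-parameter losses, learning rate, and penalty weight are all invented for illustration and do not reproduce the published algorithm.

```python
# Toy illustration of sequential learning with an anchoring penalty.
# All constants are illustrative, not from Kirkpatrick et al. (2017).

def train(w, grad_fn, lr=0.1, steps=200):
    """Plain gradient descent on a one-parameter loss."""
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

# Task A: loss_A(w) = (w - 0)^2, so the optimum is w = 0.
w_a = train(0.2, lambda w: 2 * (w - 0.0))

# Task B alone: loss_B(w) = (w - 1)^2; training drives w to 1,
# and the task-A solution is "forgotten."
w_forget = train(w_a, lambda w: 2 * (w - 1.0))

# Task B with a penalty pulling w back toward the task-A solution:
# loss(w) = (w - 1)^2 + lam * (w - w_a)^2,
# whose optimum is the compromise (1 + lam * w_a) / (1 + lam).
lam = 1.0
w_keep = train(w_a, lambda w: 2 * (w - 1.0) + 2 * lam * (w - w_a))

print(round(w_forget, 3), round(w_keep, 3))  # → 1.0 0.5
```

With the penalty in place, the parameter settles midway between the two task optima instead of abandoning the first task entirely, which is the sense in which such networks avoid forgetting.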
3. This refers to physicians, such as general practitioners, who determine which patients can receive referrals or access certain laboratory tests within the framework of various health plans.
4. A formatted table is an example of “structured information,” while a patient chart or a scientific
article is an example of “unstructured information.”
5. Carebots are robots equipped with AI that are employed in the specialized care of elderly or
disabled patients.
6. This argument contrasts with the concept of Neuroessentialism, which can be defined as
the belief that the brain alone contains and determines all aspects of personhood. For criticism of
neuroessentialism from a perspective of moral philosophy and enhancement, see Jotterand (2016).
7. Mechanisms behind rational and irrational human behavior, including the influence of bias and emotions, are being uncovered by current research in neuroendocrinology, neuroeconomics, and
related fields.
8. A metaphysical analysis of the differences between AI applications and the human mind is not
within the scope of this article.
9. In his book “Mind Children” from 1988, futurist Hans Moravec extended Moore’s law (the observa-
tion made in 1965 by Gordon Moore that the number of transistors that could be placed per square inch in
an integrated circuit had doubled every year since the invention of the integrated circuit, and the prediction
that this trend would continue in the future [later adjusted to every two years]) to other technologies and
proposed that robots may evolve into a new artificial species, creating a “mindfire” of superintelligence.
Vernor Vinge discussed the concept of the technological Singularity in the context of AI in a 1993 essay (the
concept of machines with superhuman intelligence that could rapidly evolve in an “intelligence explosion”
was first introduced by I. J. Good in 1966 [Good, 1966]). Vinge argued that humanity is on the verge of fundamental change, which would be brought about by creation of superhuman artificial intelligence. He suggested several possible components to such a development, including creation of potent AI, giant computer
networks (corresponding to the development of the internet), computer/human interfaces that enhance the
human component (intellectual amplification of humans), and enhancement through “biological science”
26. The risks of deficiencies in current algorithm-based software in electronic health record systems are illustrated by the recent $999 million lawsuit against eClinicalWorks, software supplier for 850,000 health-care providers, for breach of fiduciary duty and gross negligence, claiming that glitches in the software led to multiple issues that resulted in misleading records.
27. Currently, discussion of alternative treatments is a required part of the informed consent process.
28. Mabu, “the personal healthcare companion,” tailors its communication with patients according
REFERENCES
Ackerman, T. 2017, Feb. 17. Touted IBM supercomputer project at MD Anderson on hold
after audit finds spending issues. Houston Chronicle [On-line]. Available: https://www.
houstonchronicle.com/news/houston-texas/houston/article/Touted-IBMsupercomputer-
project-at-MD-Anderson-10941783.php. (accessed September 9, 2021).
Alexander, A. A., and F. Jotterand. 2014. Market considerations for nanomedicines and
theranostic nanomedicines. In Cancer Theranostics, eds. X. Chen and S. Wong, 471–89.
Amsterdam, The Netherlands: Elsevier.
Allen, P. G. 2011. The Singularity is not near. MIT Technology Review [On-line]. Available:
https://www.technologyreview.com/s/425733/paul-allen-the-singularity-isnt-near/ (ac-
Jotterand, F. 2006. The politicization of science and technology: Its implications for nanotech-
nology. Journal of Law, Medicine and Ethics 34(4):658–66.
———. 2016. Moral enhancement, neuroessentialism, and moral content. In Cognitive
Enhancement: Ethical and Policy Implications in International Perspectives, eds.
F. Jotterand and V. Dubljevic, 42–56. New York: Oxford University Press.
Straumsheim, C. 2016. “Augmented intelligence” for higher ed. Inside Higher Ed [On-line].
Available: https://www.insidehighered.com/news/2016/11/16/blackboard-pearson-
joinibms-ecosystem-bring-watson-technology-higher-ed (accessed September 9, 2021).
Vinge, V. 1993. The coming technological singularity: How to survive in the post-human
era. Article for the VISION-21 Symposium sponsored by NASA Lewis Research Center