
Opinion

AI IN MEDICINE
VIEWPOINT
The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models

Timo Minssen, LLD, Center for Advanced Studies in Biomedical Innovation Law, University of Copenhagen, Copenhagen, Denmark; and Centre for Law, Medicine, and Life Sciences, University of Cambridge, Cambridge, England.

Effy Vayena, PhD, Institute of Translational Medicine, Swiss Federal Institute of Technology Zurich (ETH Zurich), Zurich, Switzerland.

I. Glenn Cohen, JD, Harvard Law School, Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, Harvard University, Cambridge, Massachusetts.

Corresponding Author: I. Glenn Cohen, JD, Harvard University, 1525 Massachusetts Ave, Cambridge, MA 02138 (igcohen@law.harvard.edu).

The introduction of artificial intelligence (AI) into medical devices, decision support, and clinical practice is not new, with a particular uptick in investment and deployment within the past decade. Regulators (eg, the US Food and Drug Administration, the European Medicines Agency, and the National Medical Products Administration), intergovernmental organizations (eg, the World Health Organization), civil society groups, institutional review boards at hospitals, and others have worked hard to define the scope of what AI applications should require review and approval, implementing rules in a fast-changing terrain with mixed results.

What has changed is the public availability of, and interest in, large language models (LLMs), most notably OpenAI’s ChatGPT and Google’s Bard, with potential applications in medicine. These LLMs are large (1 trillion parameters for GPT-4 and growing), general rather than trained for a specific task (though fine-tuning is possible), autoregressive (using only past data to predict future data), and foundational (likely to be the seed for many further devices and applications).1 There has also been interest in creating LLMs with more curated medical training data, such as Google’s Med-PaLM 2.

The medical world has taken note, with a wide range of uses from generating draft progress notes to answering patient questions via a chatbot.2 In this Viewpoint, we discuss how regulators across the world should approach the legal and ethical challenges raised by medical use of LLMs, including privacy, device regulation, competition, intellectual property rights, cybersecurity, and liability.

Privacy

Large language models are trained on vast amounts of data scraped from the web and other digital sources, which may contain personal data. In the European Union, the General Data Protection Regulation permits access to personal data, including health data, only when there are specific justifications (eg, informed consent or public interest). OpenAI’s ChatGPT is currently under investigation by European authorities to assess compliance with the General Data Protection Regulation.3 A different privacy problem may occur in the US under the Health Insurance Portability and Accountability Act of 1996, or under privacy laws of individual states, if physicians include protected patient data in prompts they submit to LLMs.

The more general ethical questions that regulators need to grapple with are: When is the scraping of patient data from the web or other data sets permissible? Should patients be able to opt out of such data uses? An important first step would be for privacy regulators in the US, the European Union, and elsewhere to issue specific guidance, binding or advisory, on privacy-permissible and impermissible ways to develop and deploy LLMs. Harmonization between these regulators, to the extent possible, is also an important next step.

Device Regulation

Under laws in the US and the European Union, LLM software that is developed only for general purposes, without any claims to its use for medical purposes, would usually not qualify as a medical device. However, LLMs that are developed, modified, or directed toward specific medical purposes, such as through interaction with medical devices or by helping clinicians reach medical decisions and communicating with patients, might be treated as medical devices.4

Policing the line of intended use will be tricky. There may also be particular concerns about open-source LLMs, which may be regarded as so-called software of unknown provenance if a third party adapts or uses them as a component in a medical device; that designation would trigger special compliance standards and require sufficient documentation that may not always be accessible. Device regulators need to clarify when incorporating an LLM into a medical product renders it a medical device under existing regulations, make fundamental decisions about how to manage adaptive learning in LLMs, and consider how existing risk classification approaches apply to the distinctive setting of LLM use in devices.


Competition Law

Now, at the dawn of medical use of LLMs, one could envision 2 very different futures. One future is a vibrant ecosystem of medical LLMs trained on different data sources with very different designs; the other is one in which 2 or 3 large corporations dominate the LLM market and license their products to all medical users. Each future poses real trade-offs. Can a large number of small companies adequately ensure privacy and other ethics by design? Will consumers be able to adequately distinguish high- and low-quality LLMs in a crowded market? Will the dominance of a small number of LLM developers result in model homogeneity? Would a small number of developers exert market power in a way that increases prices for patients and health care systems? Antitrust regulators have a key role to play in determining where the world ends up between these 2 very different futures. Moreover, the requirements set by medical regulators for market entry, and the flexibility of medical regulations, will also have a huge effect on the competitive landscape.

Intellectual Property

The competition policy analysis in turn depends on what intellectual property rights are recognized in LLMs. Recognizing too many intellectual property rights at the computing, data creation, and foundation modeling stages of LLM development could create increased barriers to entering the market.5 Recognizing too few intellectual property rights, though, may deter some market entrants because they may be unable to raise enough capital based on claims of having an exclusive product at the end of development. The fact that LLMs are often trained on enormous amounts of opaque data sources raises the specter that the models themselves may violate existing intellectual property rights. This concern, together with the lack of transparency of current LLMs, recently resulted in changes to the proposed Artificial Intelligence Act in the European Union, which now requires companies deploying generative AI tools to disclose any copyrighted material used to develop their systems.6

Cybersecurity and Liability

Large language models may be a force for good or evil in medical cybersecurity. Generative AI can play a key role in scanning digital logs, finding patterns in vulnerability exploitation, and helping clinicians improve cybersecurity systems. But security vulnerabilities of LLMs can allow their manipulation, turning them into misinformation sources, generators of more sophisticated and automated fraudulent attacks, or spreaders of malware. Hence, minimum security thresholds should be set before rolling out LLM applications in health care and drug development settings. For example, foundational open-source AI tools should be based on a programmatic core that effectively reduces the risk of hacks and manipulation. Regulators must also ensure that physicians and other medical staff are well trained on how LLMs work, their limitations, and cybersecurity risks.7 Lawmakers must consider whether they have adequate structures to hold developers accountable and liable for noncompliance with such standards. It will also be important to clarify when developers can lawfully require that users waive liability claims or indemnify developers against future harms, as OpenAI, for example, purports to do at the moment.

Regulating all of these selected areas is delicate and only a starting point because new risks and benefits will emerge as LLMs evolve and become more prominent for medical uses. The challenge lies not just in the governance of these technologies but in the limits of the current mechanisms of governance. One governance approach is to try to prohibit the use of LLMs in certain spheres. Within the last several months, some countries have announced that they would do so, but most have backed off given the changes that have been made by developers. We are skeptical that this will be feasible in the long term, both because of political pressure and because of the ability to circumvent such restrictions. Moreover, the recently introduced changes to the proposed Artificial Intelligence Act in the European Union (in response to the recent LLM developments) demonstrate that overly detailed and rigid legislation will face severe challenges in keeping up with new technological developments.

Meanwhile, voices in industry are urging governments to hold off on regulating in favor of allowing self-regulation by the LLM industry. A different approach is to try to pass new legislation focused on LLMs. The proposed Artificial Intelligence Act in the European Union could serve as a model. The problem is that such legislation risks rigidly binding medical AI to today’s technical standards, which may be inadequate for tomorrow’s problems. We believe that these forms of governance will be important, but so is more open-ended delegation of authority to regulate medical AI to administrative agencies such as the US Food and Drug Administration and the European Medicines Agency, as well as common law decision-making by courts, especially regarding liability questions.

ARTICLE INFORMATION

Published Online: July 6, 2023. doi:10.1001/jama.2023.9651

Conflict of Interest Disclosures: Dr Minssen reported receiving grants from the European Union; serving as a consultant to Corti; and serving on advisory boards for Christian Hansen Holding A/S and the World Health Organization. Dr Vayena reported serving on ethics advisory panels for IQVIA, Merck KGaA, and the World Health Organization. Mr Cohen reported serving on ethics advisory boards for Illumina and Bayer.

Funding/Support: Dr Minssen and Mr Cohen reported receiving grant NNF17SA0027784 from the Novo Nordisk Foundation.

Role of the Funder/Sponsor: The Novo Nordisk Foundation had no role in the preparation, review, or approval of the manuscript or decision to submit the manuscript for publication.

REFERENCES

1. Moor M, Banerjee O, Abad ZSH, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616(7956):259-265. doi:10.1038/s41586-023-05881-4

2. Haupt CE, Marks M. AI-generated medical advice—GPT and beyond. JAMA. 2023;329(16):1349-1350. doi:10.1001/jama.2023.5321

3. Weatherbed J. OpenAI's regulatory troubles are only just beginning. Published May 5, 2023. Accessed June 29, 2023. https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy

4. Minssen T, Gerke S, Aboy M, Price N, Cohen G. Regulatory responses to medical machine learning. J Law Biosci. 2020;7(1):lsaa002. doi:10.1093/jlb/lsaa002

5. Hoppner T, Streatfeild L. ChatGPT, Bard & Co: an introduction to AI for competition and regulatory lawyers. Published March 3, 2023. Accessed June 29, 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4371681

6. Mukherjee S, Chee FY, Coulter M. EU proposes new copyright rules for generative AI. Published April 28, 2023. Accessed June 29, 2023. https://www.reuters.com/technology/eu-lawmakers-committee-reaches-deal-artificial-intelligence-act-2023-04-27/

7. Chilton J. The new risks ChatGPT poses to cybersecurity. Published April 21, 2023. Accessed June 29, 2023. https://hbr.org/2023/04/the-new-risks-chatgpt-poses-to-cybersecurity
