
Pediatric Radiology (2023) 53:816–817

https://doi.org/10.1007/s00247-023-05668-9

EDITORIAL

ChatGPT: an editor’s perspective


Amaka C. Offiah1,2 · Geetika Khanna3

Published online: 19 April 2023


© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023

There is a first time for everything, and the day finally came (March 7, 2023) when, as an editor, A.C.O. found herself requesting confirmation that an artificial intelligence (AI) was not the author of a manuscript submitted for potential publication in our journal. Her exact words were, "Provide a disclaimer at the end of the manuscript confirming that you and not ChatGPT wrote the article." The corresponding author was Saliba T., and the article in question was "ChatGPT: a radiologist's perspective" [1].

In a thought-provoking Letter to the Editor, the authors summarise the current strengths and weaknesses of the software tool ChatGPT (Chat Generative Pre-trained Transformer).

The editors had of course heard about and even read some articles on ChatGPT (superlative articles bordering on the sensationalistic in the media, more considered pieces in the scientific literature). However, it was not until receiving the paper by Saliba and Boitsios [1] that the editors could no longer resist the temptation and tried ChatGPT themselves. One example is included below:

Question (Prof. A.C. Offiah):
In 250 words, critically review a 2023 journal article in Pediatric Radiology in which the authors, Saliba T. and Boitsios G., provide a radiologist's perspective on ChatGPT. They conclude with the words of Curtis Langlotz that "AI won't replace radiologists, but radiologists who use AI will replace those who don't".

Answer (ChatGPT):
The article by Saliba T. and Boitsios G. titled "A Radiologist's Perspective on ChatGPT" provides an interesting insight into the use of artificial intelligence (AI) in the field of radiology. The authors have presented a comprehensive overview of the current state of AI in radiology and the potential benefits and challenges of integrating AI into clinical practice.

The authors highlight the potential of ChatGPT, a large language model trained by OpenAI, in assisting radiologists with report generation, differential diagnosis, and image interpretation. They also acknowledge the challenges associated with AI, including concerns regarding accuracy, reliability, and ethical issues. The authors rightly suggest that AI should not replace radiologists, but rather complement their skills and expertise.

The authors have effectively presented the views of various experts in the field and have cited relevant studies to support their arguments. However, the article lacks a critical analysis of the limitations of ChatGPT and its potential impact on the role of radiologists. For instance, the authors do not discuss the potential negative consequences of over-reliance on AI and the possibility of radiologists losing their interpretive skills.

Furthermore, while the authors provide some insight into the ethical issues surrounding the use of AI in radiology, they do not delve deeper into the potential impact of AI on patient care, patient privacy, and data security. Given the growing concerns regarding the ethical implications of AI in healthcare, this is a significant limitation of the article.

In conclusion, the article by Saliba T. and Boitsios G. provides a useful overview of the potential benefits and challenges of using AI in radiology. However, the article lacks a critical analysis of the limitations and potential negative consequences of AI, and the potential ethical implications of its use in clinical practice. The authors' conclusion that radiologists who use AI will replace those who don't, while a catchy quote, seems somewhat oversimplistic and lacking in nuance.

* Amaka C. Offiah
a.offiah@sheffield.ac.uk

1 Department of Oncology and Metabolism, University of Sheffield, Sheffield Children's NHS Foundation Trust, Western Bank, Room 3, Damer Street Building, Sheffield S10 2TH, UK
2 Department of Radiology, Sheffield Children's NHS Foundation Trust, Sheffield, UK
3 Department of Radiology & Imaging Sciences, Emory University and Children's Healthcare of Atlanta, Atlanta, GA, USA


The above review by ChatGPT is not at all bad, considering that the response was derived in seconds, without having had sight of the original article. This capability is why many are worried that the introduction of ChatGPT heralds a rapid and significant change in education, science and research as we know them.

From an editor's perspective, there is concern that manuscripts may not have been authored by humans; indeed, one study (in preprint) records that 32% of ChatGPT-generated abstracts (based on completely generated data) misled scientists, who believed them to be original [2]. Important words of caution related to the ethics, legal issues, innovation, accuracy, bias and transparency of ChatGPT are raised by Biswas, S., sections of whose article he has declared were written by ChatGPT and then edited by himself [3]. This last point is the key: it is very difficult to police the use of ChatGPT, so the question is whether authors (and reviewers [4]) should disclose that they have used ChatGPT and take full responsibility for any errors arising therefrom. The editors of the Science group of publications think not, and have updated their editorial policies to specify that no aspect of any manuscript submitted to them (text, figures, images, graphics) can be generated by ChatGPT (unless ChatGPT is intentionally used as part of the research). A violation of this policy will be deemed scientific misconduct [5].

Springer Nature (the publisher of our journal, Pediatric Radiology) has taken a different view. Its recently announced AI-related policy can be summarized as follows: (1) AI writing tools can be used to write manuscripts and to conceptualize research ideas, (2) this contribution must be openly declared, and (3) AI tools cannot be listed as authors [6]. An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to large language models. As the Managing Editors of Pediatric Radiology, we will update our Author Guidelines, henceforth mandating that authors declare the role played by AI tools, not as a disclaimer, but within the Materials and Methods section of their manuscripts. If a Materials and Methods section is not relevant to your manuscript, the Introduction section can be used to document the use of any AI tools.

These large language models can be a useful aid in scientific writing and editing, especially for non-native English speakers, and can even aid the author in adding accurately formatted references. However, authors should limit the use of these tools to topics on which they are subject matter experts, to ensure that the information provided is accurate and up to date. No matter what AI tool is used, the authors remain responsible for the scientific integrity of their publications.

Author contributions A.C.O. wrote the first draft of the manuscript. A.C.O. and G.K. revised the draft and approved the final version of the manuscript.

Declarations

Conflicts of interest A.C.O. and G.K. are Managing Editors of Pediatric Radiology (for outside the Americas and the Americas, respectively).

References

1. Saliba T, Boitsios G (2023) ChatGPT: a radiologist's perspective. Pediatr Radiol. https://doi.org/10.1007/s00247-023-05656-z
2. Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, Pearson AT (2022) Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector and blinded human reviewers. bioRxiv. https://doi.org/10.1101/2022.12.23.521610
3. Biswas S (2023) ChatGPT and the future of medical writing. Radiology. https://doi.org/10.1148/radiol.223312
4. Hosseini M, Horbach SPJM (2023) Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res Sq. https://doi.org/10.21203/rs.3.rs-2587766/v1
5. Thorp HH (2023) ChatGPT is fun, but not an author. Science 379:313. https://doi.org/10.1126/science.adg7879
6. Editorial (2023) Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 613:612. https://www.nature.com/articles/d41586-023-00191-1

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
