

Unveiling Justice through Generative AI: Analysing the Impact of Large Language
Models in Law Enforcement
Tsehaye Haidemariam
Abstract
The release and widespread use of large language models (LLMs), such as
ChatGPT, have generated significant attention owing to their ability to provide quick
and versatile responses across various contexts. This article critically examines the
outcomes of 2023 expert workshops organised by the Europol Innovation Lab to
explore the potential misuse of ChatGPT by criminals and its usefulness in assisting
law enforcement. Throughout the enquiry, reference is made (1) to the workshops'
report and its strength in raising awareness of the impact of LLMs on the law
enforcement community, and (2) to its limitation of not considering other OpenAI
systems, such as DALL·E 2. The exclusion of DALL·E 2 and other image-generating AI
systems, such as Midjourney, from the workshops limits a comprehensive
understanding of the potential misuse of LLMs and the challenges it may pose in
countering other types of crime, such as counterfeiting, identity theft, or even the
production of illicit imagery. Drawing on a viral case study of AI-generated images
of officers arresting Donald Trump, the article demonstrates other generative AI
systems' capacity for misuse. The questions raised are investigated within the
technical, legal and ethical discourses relevant to Science and Technology Studies
(STS). It is argued that any technical prevention of the misuse of LLMs that does not
take into account the difference and the relationship between machine behaviour and
human technical behaviour is inadequate, and that the concept of prompt engineering
currently used by AI researchers is broader than that understood by the Europol
Innovation Lab.
Forthcoming, Nordic Journal of Science and Technology Studies
