EKoM Presentation: AI Law & Ethics (17 November 2021)
This project has received funding from the European Union’s Horizon
2020 research and innovation programme under grant agreement no
101020574.
Outline
1. Ethical and legal issues arising from the use of AI by LEAs
2. Relevant ethical and legal framework
High Level Expert Group on AI's Ethics Guidelines for Trustworthy AI & EU
Commission’s White Paper on AI
AI Regulation Proposal
Data protection framework: Law Enforcement Directive
3. Case study: Risk assessment of violent behavior
Limitations of AI instruments
AI solutions are not equally applicable to all security problems
Right data not always available
Lack of data or data of poor quality
Meaningful patterns can be recognized only for offences that show fixed and repeated
patterns
Historical data and data patterns do not offer straightforward results but only
probabilistic indications of possible outcomes
No causalities or certainties
Source: Dennis Broeders, Erik Schrijvers, Bart van der Sloot, Rosamunde van Brakel, Josta de Hoog, Ernst Hirsch Ballin, Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data
Ethical and legal issues arising from the use of
AI by LEAs
Inherent machine bias and discrimination
Inaccurate predictions
Poor selection and use of data
Opacity
Predictions result from a process hidden in a ‘black box’, difficult to understand for
both citizens and police officers
Unaccountability
Inability to assign responsibility for decisions taken
Interplay between developers of algorithms, managers of databases and police officers
Source: Oskar Josef Gstrein, Anno Bunnik, Andrej Zwitter, Ethical, Legal and Social Challenges of Predictive Policing
Ethical and legal issues arising from the use of AI by LEAs (cont.)
Interferences with individual and collective privacy rights
Individual harm, as well as harm to a collective expression of freedom
Collection and analysis of data on large numbers of people, who are thereby in some way implicated as suspects
Fair trial and due process
Detriment to the presumption of innocence
Shift towards the concept of ‘(predicted) guilty until proven innocent’
Challenges to evidentiary reliability
Inability to adequately examine, explain and evaluate the evidence
Inability for the defendant to adequately challenge the evidence
Sources: Dennis Broeders, Erik Schrijvers, Bart van der Sloot, Rosamunde van Brakel, Josta de Hoog, Ernst Hirsch Ballin, Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data, and Sabine Gless, AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials
Relevant ethical and legal framework
High Level Expert Group on AI's Ethics Guidelines for Trustworthy
AI
Trustworthy AI: lawful, ethical, robust (from both a technical and a social perspective, so as not to cause unintentional harm)
4 ethical principles
1. Respect for human autonomy
2. Prevention of harm
3. Fairness
4. Explicability
Relevant ethical and legal framework
High Level Expert Group on AI's Ethics Guidelines for Trustworthy AI
(cont.)
Respect for human autonomy
• Full & effective self-determination
• AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans
• Human-centric design principles & meaningful opportunity for human choice
Prevention of harm
• Protection of human dignity, mental and physical integrity, the natural environment & all living beings
• Attention to vulnerable persons & asymmetries of power or information (employers vs employees, businesses vs consumers, governments vs citizens)
Fairness
• Equal and just distribution of benefits and costs
• Free from unfair bias, discrimination and stigmatisation
• Never deceive or unjustifiably impair people's freedom of choice
• Ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them
Explicability
• Transparent processes, open communication, explainable decisions
• The degree of required explicability depends on the context and severity of consequences in case of erroneous/inaccurate outputs
Relevant ethical and legal framework
High Level Expert Group on AI's Ethics Guidelines for Trustworthy AI
(cont.)
7 key requirements to be taken into consideration for Trustworthy AI
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability
Ethical and legal framework
EU Commission’s White Paper on AI
Risk-based approach – high risk / low risk
Ethical and legal framework
EU Commission’s White Paper on AI (cont.)
Always high-risk AI, regardless of the criteria:
• AI applications used for recruitment & in situations impacting workers' rights
• AI applications for the purposes of remote biometric identification and other intrusive surveillance
technologies
Ethical and legal framework
AI Regulation Proposal
More granular/nuanced risk-based approach
Unacceptable Risk – Prohibited AI Practices (next slide)
High-risk AI systems
AI systems intended to be used as a safety component of products & subject to third-party ex-ante conformity assessment
Other AI systems with mainly fundamental rights implications, explicitly listed in Annex III
Low or minimal risk
Ethical and legal framework
AI Regulation Proposal (cont.)
Unacceptable Risk – Prohibited AI Practices
• Practices contravening Union values (e.g. violating fundamental rights)
• Practices likely to manipulate persons or exploit vulnerabilities of vulnerable groups (e.g. children, persons with disabilities) to materially distort their behaviour, likely to cause psychological/physical harm
• AI-based social scoring for general purposes done by public authorities
• The use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement
3 exceptions to the latter prohibition, subject to prior authorisation:
• Targeted search for specific potential victims of crime, including missing children
• Prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack
• Detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State
Ethical and legal framework
AI Regulation Proposal (cont.)
High-risk AI systems → Stricter requirements
• Risk management system
• Data and data governance
• Technical documentation and record keeping
• Transparency and provision of information to users
• Human oversight
• Robustness
• Accuracy and security
AI Regulation Proposal Article 6 & Annex III provide a list of the high-risk AI systems used by Law Enforcement
Additional obligations for providers, manufacturers, importers, distributors, users etc. of high-risk AI systems
Self-assessment & conformity assessment by 3rd parties in specific cases (Article 43)
Exceptions for surveillance (next slide)
Ethical and legal framework
AI Regulation Proposal (cont.)
Transparency obligations for certain AI systems
Natural persons shall be informed that they are interacting with an AI system
/!\ This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence
Ethical and legal framework
Data protection framework: the Law Enforcement Directive
Ethical and legal framework
Data protection framework: the Law Enforcement Directive (cont.)
Automated Individual Decision-Making & Profiling (Art. 11)
Case study
Risk assessment of violent behavior
Algorithm that assesses the likelihood of violent behavior of specific people
Analysis of the police reports associated with a specific person, with the aim of producing a score representing the likelihood of this person displaying violent behavior, based on a series of relevant terms (e.g., confused, crisis, psychotic, alcohol), which are weighted differently
Developed by experts without using machine learning to expand the list of terms
Data used: Both structured (e.g., codes for specific types of incidents) and unstructured (e.g.,
notes from police officers)
Source: Marc Steen, Tjerk Timan, Ibo van de Poel, Responsible innovation, anticipation and responsiveness: case studies of algorithms in decision support in justice and security, and an exploration of potential, unintended, undesirable, higher-order effects
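The scoring mechanism described above can be sketched as follows. This is only an illustrative sketch: the terms, weights, and report texts are invented assumptions, not the actual expert-curated list used by the system described in the case study.

```python
# Illustrative sketch of a term-weighted risk score, as described in the
# case study. All terms and weights below are INVENTED for illustration;
# the real system's list was curated by domain experts, not learned.
RISK_TERMS = {
    "confused": 1.0,
    "crisis": 2.0,
    "psychotic": 3.0,
    "alcohol": 1.5,
}

def risk_score(report_text: str) -> float:
    """Sum the weights of the relevant terms found in one police report."""
    words = set(report_text.lower().split())
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def person_score(reports: list[str]) -> float:
    """Aggregate the scores over all reports associated with one person."""
    return sum(risk_score(r) for r in reports)

# Hypothetical reports (both structured codes and officers' free-text notes
# would feed in here; only free text is sketched).
reports = [
    "Subject appeared confused and smelled of alcohol",
    "Responded to a crisis call involving the subject",
]
print(person_score(reports))  # 1.0 + 1.5 + 2.0 = 4.5
```

The resulting score is not a decision: as the next slides note, it is interpreted by police officers exercising professional discretion.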
Case study
Risk assessment of violent behavior (cont.)
Application of the HLEG on AI's Ethics Guidelines for Trustworthy AI
Respect for human autonomy
Risk is assessed by the algorithm and its output is then interpreted by police officers
with professional discretionary competence, based on relevant and reliable information
→ Human in the loop
Prevention of harm
Implementation of specific protocols for police officers to prevent risks of bias,
stigmatization or discrimination
Division of labor between police officers operating on the streets and officers
interpreting the outputs of the algorithm prevents/mitigates the risks of misusing or
overusing the system
Source: Marc Steen, Tjerk Timan, Ibo van de Poel, Responsible innovation, anticipation and responsiveness: case studies of algorithms in decision support in justice and security, and an exploration of potential, unintended, undesirable, higher-order effects
Case study
Risk assessment of violent behavior (cont.)
Application of the HLEG on AI's Ethics Guidelines for Trustworthy AI
Fairness
Data mining process which enables a deep understanding of the context in which the algorithm is deployed and of the data used
Possibility to create feedback loops that can help to identify incorrect or unfair outputs
Explicability
Explicit and limited list of terms and weights that can be inspected
Source: Marc Steen, Tjerk Timan, Ibo van de Poel, Responsible innovation, anticipation and responsiveness: case studies of algorithms in decision support in justice and security, and an exploration of potential, unintended, undesirable, higher-order effects
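Because the term list and weights are explicit and finite, each score can be decomposed into per-term contributions, which is what makes the system inspectable. A minimal sketch of such an explanation, again with purely hypothetical terms and weights:

```python
# Hypothetical explanation output for one report: which terms matched and
# what each contributed to the score. Terms and weights are illustrative
# assumptions, not the actual system's values.
RISK_TERMS = {"confused": 1.0, "crisis": 2.0, "psychotic": 3.0, "alcohol": 1.5}

def explain(report_text: str) -> dict[str, float]:
    """Return the matched terms and their individual weight contributions."""
    words = set(report_text.lower().split())
    return {term: w for term, w in RISK_TERMS.items() if term in words}

# Every contribution is visible, so officers or auditors can see exactly
# why a score was assigned and feed back corrections for terms that
# produce incorrect or unfair outputs.
print(explain("Subject appeared confused and smelled of alcohol"))
```

Such per-term breakdowns are also what a feedback loop would operate on: an output judged unfair can be traced to specific terms, whose weights can then be revised by the experts maintaining the list.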
Sources
Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the
processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal
offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA,
[2016] OJ L119/89 (Law Enforcement Directive), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:02016L0680-20160504&from=EN
European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial
Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”, 21 April 2021,
https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
European Commission, “White Paper on Artificial Intelligence - A European approach to excellence and trust”, 19 February 2020,
https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI”, 8 April 2019,
https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
Dennis Broeders, Erik Schrijvers, Bart van der Sloot, Rosamunde van Brakel, Josta de Hoog, Ernst Hirsch Ballin, “Big Data and security policies:
Towards a framework for regulating the phases of analytics and use of Big Data”, Computer Law & Security Review, Volume 33, Issue 3, 2017, Pages
309-323, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2017.03.002
Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials”, Georgetown Journal of International Law,
Volume 51, Issue 2, 2020, pages 195-253, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3602038
Oskar Josef Gstrein, Anno Bunnik, Andrej Zwitter, “Ethical, Legal and Social Challenges of Predictive Policing”, Católica Law Review, Volume 3, Issue
3, 2019, Pages 77-98, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3447158
Marc Steen, Tjerk Timan, Ibo van de Poel, “Responsible innovation, anticipation and responsiveness: case studies of algorithms in decision support
in justice and security, and an exploration of potential, unintended, undesirable, higher-order effects”, AI and Ethics, Volume 1, Issue 4, 2021, Pages
501–515, https://doi.org/10.1007/s43681-021-00063-2
Thank you!
KU Leuven Centre for IT & IP Law (CiTiP) - imec
Sint-Michielsstraat 6, box 3443
BE-3000 Leuven, Belgium
http://www.law.kuleuven.be/citip
The sole responsibility for the content of this publication lies with the authors. It does not necessarily represent
the opinion of the European Union. Neither the REA nor the European Commission are responsible for any use
that may be made of the information contained therein.