Artificial Intelligence and Lawyers


GUIDO ALPA

The defence of rights and the role of the lawyer in the application of
artificial intelligence systems

1. Foreword

In the period between the last G7 meeting in Japan, the 49th summit held from 19 to 21 May 2023 in the city of Hiroshima, and the one to be held in Italy next June, there have been numerous interventions by public and private institutions to improve knowledge of Artificial Intelligence, to understand its economic potential, to keep pace with its extraordinarily rapid developments, and to attempt to approximate the models of its regulation. The common goal of these interventions has been to make the best use of the advantages of this new, or at least recent, technological revolution and to prevent the risks it may create for the whole of humanity.

Not that measures had not already been taken before the Hiroshima meeting. On the contrary, the European Union started to deal with artificial intelligence as early as 2010,1 but it was mainly concerned with the digital market and, on the sidelines, the protection of personal data, with the approval of the 2016 Regulation (GDPR).2 The interest in AI subsequently matured through an extensive activity of research, consultation and drafting of legal texts, particularly focused on a segment of the relationships between private individuals established through these systems, i.e. civil liability and the consequent compensation for damage caused by self-propelled
machines, machine learning, robots and driverless cars.3 Obviously the legal discourse also involved the administration of justice and in particular predictive justice, but the treatment of that aspect would take us too far.4 It is an aspect to which lawyers are very sensitive, as can be seen from the great work done by the CCBE in this regard.5

1
See the thorough research by Pajno, Donati, Perrucci (eds.), Artificial Intelligence and Law: A Revolution? I. Fundamental Rights, Personal Data and Regulation; II. Administration, Responsibility, Jurisdiction; III. Intellectual Property, Society and Finance, Bologna, 2022; Alpa (ed.), Diritto e intelligenza artificiale. Profili generali, soggetti, contratti, responsabilità civile, diritto bancario e finanziario, processo civile, Pisa, 2020; Alpa, L'intelligenza artificiale. Il contesto giuridico, Modena, 2021.
2
Zorzi Galgano (ed.), Persona e mercato dei dati. Riflessioni sul GDPR, Padova, 2019.
3
See, among the many references, Di Donna, Artificial Intelligence and Remedies, Padua, 2022; Calabresi and Al Mureden, Driverless Cars. Artificial Intelligence and the Future of Mobility, Bologna, 2021.

The representatives of the seven countries drew up a joint declaration in the course of their work, during which they also discussed the rule of law, economic cooperation and the social problems that affect most continents today. The declaration was published on 30 October, and it is worth returning to it to understand its meaning, which is, of course, purely political.6

4
See Colomba, Il futuro delle professioni legali con l'AI: cosa verrà dopo la giustizia predittiva?, in Agenda Digitale, 5 April 2023; Barberis, Giustizia predittiva: ausiliare e sostitutiva. An evolutionary approach, in Milan Law Review, Vol. 3, No. 2, 2022; Carleo (ed.), Legal Calculability, Bologna, 2017, with writings by Guido Alpa, Giovanni Canzio, Alessandra Carleo, Massimo De Felice, Giorgio De Nova, Andrea Di Porto, Natalino Irti, Giovanni Legnini, Franco Moriconi, Carlo Mottura, Mario Nuzzo, Valerio Onida, Filippo Patroni Griffi, Alberto Quadrio Curzio, Pietro Rossi; Zaccaria, The Responsibility of the Judge and the Algorithm, Modena, 2023.
5
See the documentation available on the CCBE website with critical comments on the texts proposed by the
European Union.
6
We, the Leaders of the Group of Seven (G7), stress the innovative opportunities and transformative potential
of advanced Artificial Intelligence (AI) systems, in particular, foundation models and generative AI. We also
recognise the need to manage risks and to protect individuals, society, and our shared principles including the
rule of law and democratic values, keeping humankind at the centre. We affirm that meeting those challenges
requires shaping an inclusive governance for artificial intelligence. Building on the progress made by relevant
ministers on the Hiroshima AI Process, including the G7 Digital & Tech Ministers' Statement issued on
September 7, 2023, we welcome the Hiroshima Process International Guiding Principles for Organizations
Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations
Developing Advanced AI Systems (Link). In order to ensure both documents remain fit for purpose and
responsive to this rapidly evolving technology, they will be reviewed and updated as necessary, including
through ongoing inclusive multistakeholder consultations. We call on organisations developing advanced AI
systems to commit to the application of the International Code of Conduct.

We instruct relevant ministers to accelerate the process toward developing the Hiroshima AI Process
Comprehensive Policy Framework, which includes project based cooperation, by the end of this year, in
cooperation with the Global Partnership for Artificial Intelligence (GPAI) and the Organisation for Economic
Co-operation and Development (OECD), and to conduct multi-stakeholder outreach and consultation, including
with governments, academia, civil society, and the private sector, not only those in the G7 but also in the
economies beyond, including developing and emerging economies. We also ask relevant ministers to develop a
work plan by the end of the year for further advancing the Hiroshima AI Process.

We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment
where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the
benefits of the technology while mitigating its risks, for the common good worldwide, including in developing
and emerging economies with a view to closing digital divides and achieving digital inclusion. We also look
forward to the UK's AI Safety Summit on November 1 and 2.

Among the many initiatives of this period are the resulting Guiding Principles and Code of Conduct on Artificial Intelligence, which were highly appreciated in the European context.7

2. The Initiatives of the G7 Countries

Eleven principles have thus been drawn up to which AI must adhere. These are broad principles that combine rules with a preceptive tone, statements expressing aspirations, and rules of an ethical nature. The recourse to general principles is typical of international understandings and stands as a table of values with an axiological or optative function.8 There is no mention in the principles of the responsibility of operators for the risks introduced into society. It is likely that this issue was considered resolvable within the individual legal systems concerned; in the European case, an ad hoc directive or regulation was probably envisaged. It was indeed debated whether to resort to a supplement to the producer liability regulation or to an ad hoc regulation. The second alternative prevailed, with persuasive reasons, even though the very concepts of risk and of liability for created risk, which recall the German models of the 1940s, are dogmatically questionable. In order to tighten liability for high risks, it was proposed to presume a causal link when the circumstances of the case suggest that the damage most likely resulted from the use of AI.

7
G7 Hiroshima Process on Generative Artificial Intelligence (AI), 7 September 2023, OECD, Brussels, 2023.
8

1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.

This includes employing diverse internal and independent external testing measures, through a combination of methods such as red-teaming, and implementing appropriate mitigation to address identified risks and vulnerabilities. Testing and mitigation measures should, for example, seek to ensure the trustworthiness, safety and security of systems throughout their entire lifecycle so that they do not pose unreasonable risks. In support of such testing, developers should seek to enable traceability, in relation to datasets, processes, and decisions made during system development.

2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.

Organisations should use, as and when appropriate commensurate to the level of risk, AI systems as intended and monitor for vulnerabilities, incidents, emerging risks and misuse after deployment, and take appropriate action to address these. Organisations are encouraged to consider, for example, facilitating third-party and user discovery and reporting of issues and vulnerabilities after deployment. Organisations are further encouraged to maintain appropriate documentation of reported incidents and to mitigate the identified risks and vulnerabilities, in collaboration with other stakeholders. Mechanisms to report vulnerabilities, where appropriate, should be accessible to a diverse set of stakeholders.

3. Publicly report advanced AI systems' capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

This should include publishing transparency reports containing meaningful information for all new significant releases of advanced AI systems. Organisations should make the information in the transparency reports sufficiently clear and understandable to enable deployers and users, as appropriate and relevant, to interpret the model/system's output and to enable users to use it appropriately; transparency reporting should be supported and informed by robust documentation processes.

4. Work towards responsible information sharing and reporting of incidents among organisations developing advanced AI systems, including with industry, governments, civil society, and academia.

This includes responsibly sharing information, as appropriate, including, but not limited to, evaluation reports, information on security and safety risks, dangerous, intended or unintended capabilities, and attempts by AI actors to circumvent safeguards across the AI lifecycle.

5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies and mitigation measures, in particular for organisations developing advanced AI systems.

This includes disclosing, where appropriate, privacy policies, including for personal data, user prompts and advanced AI system outputs. Organisations are expected to establish and disclose their AI governance policies and organisational mechanisms to implement these policies in accordance with a risk-based approach. This should include accountability and governance processes to evaluate and mitigate risks, where feasible throughout the AI lifecycle.

6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

These may include securing model weights and algorithms, servers, and datasets, such as through operational security measures for information security and appropriate cyber/physical access controls.

7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

This includes, where appropriate and technically feasible, content authentication and provenance mechanisms for content created with an organisation's advanced AI system. The provenance data should include an identifier of the service or model that created the content, but need not include user information. Organisations should also endeavour to develop tools or APIs to allow users to determine if particular content was created with their advanced AI system, such as via watermarks. Organisations are further encouraged to implement other mechanisms, such as labeling or disclaimers, to enable users, where possible and appropriate, to know when they are interacting with an AI system.

8. Prioritise research to mitigate societal, safety and security risks and prioritise investment in effective mitigation measures.

This includes conducting, collaborating on and investing in research that supports the advancement of AI safety, security and trust, and addressing key risks, as well as investing in developing appropriate mitigation tools.

9. Prioritise the development of advanced AI systems to address the world's greatest challenges, notably but not limited to the climate crisis, global health and education.

These efforts are undertaken in support of progress on the United Nations Sustainable Development Goals, and to encourage AI development for global benefit. Organisations should prioritise responsible stewardship of trustworthy and human-centric AI and also support digital literacy initiatives.

10. Advance the development of and, where appropriate, adoption of international technical standards.

This includes contributing to the development and, where appropriate, use of international technical standards and best practices, including for watermarking, and working with Standards Development Organisations (SDOs).

11. Implement appropriate data input measures and protections for personal data and intellectual property.

Organisations are encouraged to take appropriate measures to manage data quality, including training data and data collection, to mitigate against harmful biases. Appropriate transparency of training datasets should also be supported, and organisations should comply with applicable legal frameworks.

At the same time, President Joe Biden's executive order9 was published on 30 October 2023. It is a binding regulatory text which, by virtue of the constitutional powers entrusted to the President, does not have to be approved by Congress to come into force. It is a very broad text, of about a hundred pages, in which public interests are balanced against private interests. In particular, its contents concern standards for AI security, protection of privacy, equality of citizens and protection of civil rights, protection of consumers, clinical patients and students, protection of workers, proper use of AI by public institutions, promotion of innovation and competition and, perhaps for electoral reasons, promotion of American leadership abroad.

9
Congressional Research Service, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress, Washington, updated April 3, 2024.

These are not normative statements set out in the form of a command, but recommendations and commitments that the Administration makes to citizens and businesses. This is a significant step, because it was precisely the defence of personal data that was the obstacle to the approval of the Transatlantic Trade and Cooperation Treaty between the US and the EU. It should be welcomed that the executive order does not privilege the market, even though the major global players in artificial intelligence are based in the United States and, in view of behaviour that is not always in line with European rules, have been the subject of substantial sanctions imposed by the Court of Justice, national antitrust authorities and personal data protection authorities. As is well known, the measure that caused the greatest stir, and was followed by the data protection authorities of other European countries, is the temporary suspension of ChatGPT's activity imposed by the Italian Garante (measures of 30 March and 11 April 2023).

On 13 March 2024, the European Parliament approved at first reading the text of the Artificial Intelligence Regulation: precisely because it consists of 113 articles, preceded by 180 recitals and supplemented by 13 annexes, totalling 459 pages, the text was received as the world's first all-encompassing law on artificial intelligence.10 But, as anticipated, and as the CCBE well pointed out when asked to comment on the draft prepared in 2021,11 this is only one segment of a more complex body of legislation that also comprises the Digital Services Act, the Digital Markets Act, the directive updating the producer liability rules, and the draft directive on liability for the use of artificial intelligence systems.

10
Among the many comments see Donati, Fundamental Rights and Algorithms in the Proposal for a Regulation on Artificial Intelligence, in Il diritto dell'Unione europea, 2021, n. 3-4, p. 453 ff.; Finocchiaro, Intelligenza artificiale. Quali regole?, Bologna, 2024, p. 115 ff.; Balaguer Callejón, La costituzione dell'algoritmo, Milano, 2023.
11
See the position paper of 8 October 2021, available on the CCBE website.

The Regulation, its lights and shadows, its shortcomings and the
underlying political choices have been much discussed.

The Regulation, though extensive, did not appear to be a complete or particularly significant text. One must certainly appreciate the choice of instrument: taking into account the sources of the European Union, it did not stop at a simple resolution, as had happened for the regulation of civil liability, nor did it settle on a directive, which would have enunciated principles while leaving the Member States free to adapt them to their internal systems, but preferred to enunciate rules immediately applicable and identical for all, in order to prevent divergent interpretations. It is also appreciable for its balancing of interests: in regulating the market (the preamble dwells on this aspect), it focuses attention on an anthropocentric conception of the rules, thus constructing a line of defence of man against artificial intelligence techniques that could compromise the values of modern civilisation.

The Regulation does not change the rules on personal data. Rather, it is concerned with introducing certain prohibitions on practices that may harm the dignity and rights of the individual, such as the use of subliminal techniques, the manipulation of persons, the exploitation of the impaired capacity of choice of weak or vulnerable persons, facial recognition practices, the inference of emotions, and the use of biometric techniques not required for the investigation of crimes. It then distinguishes between high-risk AI systems, listed in an annex, and moderate-risk systems. For the former, it provides for precautionary principles and appropriate safeguards. It takes care to allow for requests for information to ascertain the transparency of supplier and distributor data, to specify the criteria of conduct for importers, and the compliance obligations of systems, with the relevant certifications. It also provides for the drafting of codes of conduct, measures to support innovation, and a complex system of controls with the
establishment of special authorities.

It is a complex system with a high degree of bureaucratisation, and,
except for the prohibitions, it is not particularly detailed even though
the text is extensive. In addition, these are 'horizontal' rules that only
consider the risk classification of systems, but do not dictate specific
disciplines in the fields of application, such as medicine,
communication, transport, and so on.

But equally important for us Europeans are the determinations of the Council of Europe, the OECD and the CEPEJ, which particularly concern the use of artificial intelligence in the field of justice. The fact that the Regulation does not deal with this topic has been the subject of fierce criticism by the CCBE.

Equally interesting are the initiatives promoted in Canada and Japan, where detailed disciplines are also being studied.

As you will notice, the G7 countries, although they come from different continents, share orientations proper to the Western world. Unfortunately, the centre of gravity of geopolitics has shifted because of the wars in progress and those that loom ahead, although we all hope that general pacification will be achieved as soon as possible. To speak of universal rules for Artificial Intelligence therefore seems unrealistic, since some great powers, such as Russia, China and India, and the other countries that make up the BRICS, are missing from the list. It may be that our models act as a driving force for the others, which are still in their infancy, but it is clear that only an international agency that includes all countries, even the emerging ones, would provide greater assurances on the prevention of the risks connected with the new technologies.12

Just as it has been possible to reach agreements on the globalisation of markets, so it is worth trying to identify, if not a detailed discipline, at least guidelines and general legal, but first and foremost ethical, principles for this new form of globalisation to which we give the name Artificial Intelligence, so that it may be used for beneficial purposes and not for purposes of war and destruction.

12
In this sense see also the considerations of Simoncini, The Global Standard, in Civiltà delle Macchine, 1, 2024, p. 23 ff.

3. The role of lawyers

For jurists, AI is a great challenge, one that began more than twenty years ago but is only now universally perceived, because of the enormous cultural debate taking place, because of the enormous production of books, essays and statements that follow one another day by day, but above all because of the dilemmas that artificial intelligence poses. These are dilemmas of great moment, which cannot be overcome with maximalist resolutions: scientific and technological development cannot be halted except in exceptional cases, as happened with human cloning; the artificial intelligence market is expanding at great speed, producing high profits and creating new job opportunities; the professions, from computer scientists to engineers, physicists, doctors and even lawyers, have followed this evolution from the very beginning; even people's health is benefiting considerably from the use of the new systems.

But what is the role of lawyers at this juncture?

Lawyers are engaged on several fronts. First of all, they must know the rules in order to be able to apply them; indeed, they are often the authors of the rules themselves, whether as members of the Parliaments that approve them, of the governments that propose them, or of the institutions that control them. They also perform consultative functions, and must therefore prepare research and reports that necessarily involve knowledge of other sciences: computer science, engineering, mathematics, the cognitive sciences and others. They must also suggest the choices with which to regulate these phenomena, for which comparative analysis is particularly enlightening. And they participate in supervisory institutions to verify that Artificial Intelligence systems are correctly applied.

But above all, in their capacity as lawyers, they have the task of protecting the person, fulfilling the mission entrusted to them for so many centuries, because, as our forefathers used to say, 'hominum causa omne ius constitutum est'.

Among the many issues that can be considered, and which have already been the subject of extensive analysis, three in particular seem to me to be the tasks to which lawyers are called and which require some further clarification: (i) to identify the models with which to regulate AI, given that the collection and knowledge of rules already constitute an enormous labyrinth, in view of the many international, European and national institutions dealing with the subject; (ii) to understand the way in which lawyers can exploit the benefits of AI in the performance of their work; (iii) and above all to identify, moment by moment, the rights that must be safeguarded. In short, how to organise our work and set up our noble profession so as to facilitate the rational development of AI, and how to protect the individual from the risks that AI may entail.

When Richard Susskind posed the problem of the future of lawyers, having first provocatively concluded that they would come to a certain end (The End of Lawyers? was the title of his 2010 book) and then, retracting a prophecy that had turned out to be far-fetched, set out to paint the future of the lawyer of tomorrow (Tomorrow's Lawyers. An Introduction to Your Future, 2014), he foresaw, in his description of the activity and organisation of professional firms, the use of artificial intelligence; but it was only a hint, concerning problem solving alone (p. 49). Not even he, with his prophetic abilities and great imagination, could have imagined, ten years in advance, the gigantic developments and countless applications that scientists have been able to promote. So it is with great caution that we can say that what we know and discuss today is merely provisional, and will be overtaken in no time.

4. Normative models

For jurists, as already noted, this is a great challenge, only now universally perceived thanks to the enormous cultural debate taking place, to the enormous production of books, essays and statements that follow one another day after day, and above all to the dilemmas that artificial intelligence poses. These are dilemmas of great moment, which cannot be overcome with maximalist resolutions: scientific and technological development cannot be halted except in exceptional cases; the artificial intelligence market is expanding at great speed, producing high profits and creating new job opportunities; and the professions, the legal profession in particular, cannot allow themselves to be overtaken by these phenomena, however complex they may be.

We have to take into account the interests at stake, which, as has emerged in the discussions on the subject, concern the building and development of Artificial Intelligence and its use in all areas of the economy and other human sectors, from communication to transport, medicine and finance. We have to compare these interests and balance them, because each of them is relevant, but none exhausts the horizon we have to consider.

The most elementary model, which nevertheless required considerable diplomatic effort, is the listing of principles mentioned above. Regulation by principles is a soft way of legislating, which leaves ample room for interpretation.

And the principles we speak of are general, comprising aspirations rather than commands, and rules of an ethical nature. The recourse to general principles is typical of international understandings and stands as a table of values with an axiological or optative function.

This fact justifies the apprehension of the CEPEJ, which has drawn up an ethical charter for the use of artificial intelligence in justice administration systems.13

13
Commission Européenne pour l'Efficacité de la Justice (CEPEJ), Charte éthique européenne d'utilisation de l'intelligence artificielle dans les systèmes judiciaires et leur environnement, adopted at the 31st plenary meeting of the CEPEJ (Strasbourg, 3-4 December 2018).

Press reports indicated that the European Data Protection Supervisor
expressed his disappointment with the Artificial Intelligence (AI)
treaty negotiated in Strasbourg, stating that it deviated greatly from its
original purpose. The Council of Europe had originally decided to
develop a legally binding international convention to uphold human
rights standards without harming innovation in the development of
artificial intelligence. But the text was considerably watered down
from its original version during the negotiations. It was 'a missed
opportunity to establish a strong and effective legal framework' for
the protection of human rights, especially because of the debated
limitation of the scope of the convention to public bodies only.

Among the models opting for soft regulation of AI is that of the United Kingdom. Each system follows its own tradition, and the United Kingdom has always shown its aversion to the excessive regulation of the European Union, so much so that, after Brexit, it resumed its line of soft regulation without, however, sacrificing human rights, which are codified in the Human Rights Act 1998. The repeal of that Act, and its replacement with a Bill of Rights, had in fact been proposed, but the proposal was then withdrawn. It is clear that companies that do not comply with the Act may be sanctioned, since, according to the Supreme Court's settled interpretation, the Act also applies horizontally to relations between private individuals, and not only to public bodies.

Commentators have reported that in March 2023 the UK government published the White Paper on AI regulation, which sets out the proposed regulatory framework for AI.

Unlike the EU's AI Act, which will create new compliance obligations for a range of AI actors (such as suppliers, importers, distributors and retailers), the UK government is developing a principles-based framework that existing regulators are to interpret and apply in their specific sectoral areas.

Interestingly, the UK government launched a public consultation for all stakeholders to express their views on the choices to be made. No agreement was reached on the drafting of codes of conduct, especially with regard to the coordination between AI development and copyright protection.

The regulatory principles proposed for consultation roughly correspond to those developed by the Hiroshima process: safety, security and soundness; adequate transparency and explainability; fairness; accountability and governance; contestability and redress. The government remains committed 'to a context-based approach that avoids unnecessary generalised regulation'.

Significantly, the Lords committee report called for a standardised set of powers for key regulators to deal with AI. It recommended
ensuring that regulators are able to investigate effectively and impose
'meaningful sanctions to provide credible deterrents against serious
wrongdoing'. This is an area to be monitored in the medium term with
regard to possible legislation, as enforcement tools and sanctioning
powers differ widely between regulatory regimes.

The draft legislation in Canada is reportedly very close to the European regulation.

Yet another model that emerged in the period between the two G7 meetings is that of the US, where President Joe Biden's executive order, mentioned above, has received wide support on the Democratic side.

Still following the order of topics proposed above, lawyers have a duty to understand exactly the texts they have to interpret and apply. In this area the difficulties are enormous, given the various competences involved, ranging from the scientific to the social. What is more, in our field we are in the presence of techniques, if one does not want to use the term sciences, that evolve so rapidly that they cannot be defined unambiguously.

5. Definitions of AI

Curiously enough, the first question that arises is precisely the definition of the subject matter, i.e. what the AI that is the object of regulation consists of.

The eleven principles drawn up by the Japanese edition of the G7 do not define AI, perhaps so as not to constrain within a concise statement all the possible developments that this complex of techniques may entail. If one disregards descriptive definitions, one encounters several alternatives; indeed, more than alternatives, these are mutually complementary meanings. Among the most significant definitions are those offered by encyclopaedias and the mass media.

The European Parliament has changed definitions several times in successive texts. In 2020, it defined AI as 'the ability of a machine to display human capabilities such as reasoning, learning, planning and creativity', thus giving it a rather circumscribed mechanical content, and explaining that artificial intelligence enables systems to understand their environment, relate to what they perceive, solve problems, and act towards a specific goal. The computer receives data (already prepared or collected by sensors, such as a video camera), processes it and responds.
Now, in the Regulation, the official definition speaks of an 'AI system', i.e. 'a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments'.
This is a very vague definition, approximating that of Biden's executive order, which states: '(b) The term "artificial intelligence" or "AI" has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system capable, for a given set of human-defined objectives, of making predictions, recommendations, or decisions that affect real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments, abstract those perceptions into models through automated analysis, and use model inference to formulate information or action options.'
In European acts, reference is made to the subjects to which AI
applies, and it therefore seems closer, more familiar and less
mysterious:

AI consists of a rapidly evolving family of technologies that contribute to the achievement of a wide range of economic,
environmental and societal benefits across the spectrum of industrial
and social activities. The use of AI, ensuring improved forecasting,
optimised operations and allocation of
resources and the customisation of digital solutions available to
individuals and organisations, can provide key competitive
advantages to businesses and lead to socially and environmentally
beneficial outcomes, e.g. in healthcare, agriculture, food security,
education and training, media, sports, culture, infrastructure
management, energy, transport and logistics, public services, security,
justice, energy and resource efficiency, environmental monitoring,
biodiversity and ecosystem conservation and restoration, and climate
change mitigation and adaptation.
At the same time, AI may, depending on the circumstances of its
application, use and specific level of technological development,
entail risks and prejudice public interests and fundamental rights
protected by Union legislation. Such harm may be both material and
immaterial, including physical, psychological, social or economic
harm.14
Normative definitions are binding on the interpreter, but it is possible
that they may change as AI takes on new fields and evolves with new
techniques.
14
The encyclopaedic definitions are even more precise: the online Encyclopaedia Treccani defines it as follows: 'Discipline that studies whether and how the most complex mental processes can be reproduced using a computer. This research is developed along two complementary paths: on the one hand, artificial intelligence seeks to bring the functioning of computers closer to the capabilities of human intelligence, and on the other it uses computer simulations to make hypotheses about the mechanisms used by the human mind. It is both a science and an engineering. It is a science in that, by emulating certain intelligent behaviours with artificial systems, man achieves the goals of formulating objective and rigorous models, obtaining experimental confirmations and making unquestionable progress in the scientific study of the human intellect. Artificial intelligence is engineering because, when one obtains from machines performances that emulate behaviours erroneously considered inaccessible to the artificial sphere, one provides an objective advancement to the contribution that engineering itself offers to the improvement of human life.'

With respect to the G7 principles, it seems to me that they are all respected.
However, the European model must be considered in its entirety, including the GDPR. We must not forget that the very differences between the regulation of personal data in Europe and in the US were the main cause of the failure of the Transatlantic Trade and Investment Partnership (TTIP).

6. The organisation of professional firms.

This is interesting even though the organisation of practice varies from country to country: in some, solo lawyers prevail; in others, firms organised in the form of a company with hundreds, sometimes thousands, of lawyers who are semi-employees or consultants to a single client, their own firm. And it is clear that artificial intelligence systems will be more developed where the complexity of matters or the turnover requires their use.

There are now a growing number of books explaining how lawyers could make use of AI systems for the best organisation15.

I am not referring to the tools that have become commonplace, which have simplified our research and given us so much information, such as databases, collections of case law, reports of the Courts - in particular the studies of the Superior Courts - and, when it is made available, the database of the courts of merit, before which new and singular cases are often decided that do not reach the Supreme Court. Rather, I am referring to the contractual models, the so-called templates, which create problems of interpretation because, if translated, they imply the transplantation of foreign institutions into our legal system, creating 'alien contracts', and, if not translated, they imply knowledge of the different interpretations given to them by foreign doctrine and jurisprudence and therefore require demanding comparative law skills. Even investigations and enquiries - I am
15
Some significant examples are the book by C. Morelli, Artificial Intelligence. Being Lawyers in the Age of ChatGPT, Santarcangelo di Romagna, 2024; Kaya, AI in Law. How Artificial Intelligence is Transforming the Legal Profession?, s.l., 2023; Gibson, Artificial Intelligence. 3 Books in 1, s.l., 2023.

always talking about civil law matters - can be facilitated by the use of these systems, although a technician is always needed, as is the lawyer's ability to interpret the results of the advice correctly.

I do not envisage trial documents written with the help of ChatGPT, because one thing is certain: artificial intelligence gives us millions of data points, but it can hardly find two identical cases from which one can draw solutions to be mechanically applied to the case under study. What is more, this system is based on the past, on facts that happened in the past, while the cases we have to solve arise in the present, in an environment that may have changed and in a cultural context that may have evolved.

7. Defence of rights and justice

The list of rights to be protected, each of which would require an in-depth study in its own right, is thus as follows:

- Personal data: their collection, processing, handling and dissemination, for which the GDPR provides;
- Privacy, the right to one's image, the right to be oneself;
- Identification data (iris, face, fingerprint);
- Health data;
- Moral integrity and defamation through fake news;
- Profiling and unfair commercial practices;
- The right to correct information;
- Covert influence and orientation of political choices and voting patterns;
- Violations of the right of association and participation in social networks;
- The right to physical integrity impaired by self-propelled or driverless cars;
- Rights derived from smart contracts;
- Copyright and ChatGPT;
- Compensation for damages from robots and the use of artificial intelligence systems involving risks;
- Workers' rights affected by the use of these systems;
- The right to health in the application of AI in medicine;
- The right to security, cybersecurity and defence;
- Damage resulting from financial investments or betting;
- Damages from the Internet of Things.
This opens up new scenarios, new job opportunities, and above all
new aspects of the defence of citizens' rights.
Is it possible to conclude this analysis with a word of hope, given the
difficulties of constructing an appropriate legal framework for a
science that is constantly evolving and in the eyes of most is still
inscrutable? Just last week, the problems of artificial intelligence were
discussed at an assembly of the Accademia dei Lincei, the world's
oldest cultural academy. Here the subject was examined from a
perspective that reflects the principles shared by Western countries:
trying to foresee how 'science for the future' will be shaped. First of
all, the exact sciences have been combined with the social and human
sciences, according to the teaching that scientific progress must
benefit mankind and thus also take into account the social and environmental conditions in which people live; the challenges,
responsibilities and opportunities it offers to the whole of mankind
have been considered, in the conviction that tackling the new science
requires the cooperation of all, institutions, businesses and citizens, to
ensure a sound, transparent and fair use of artificial intelligence. I
believe that lawyers too, as they have demonstrated on this study day,
can subscribe to this declaration and make an important contribution
with their commitment and professionalism to achieving the desired
results.

