EU Consequences Version 1.1


Introduction


Since the 1939 World's Fair, the automotive industry has theorised about autonomous
vehicles and how this kind of technology could be implemented in our lives. Nowadays, such
technology is no longer a dream but a reality. We are facing a period of extremely rapid
technological change, and with it a new framework of regulation arises. The sector of artificial
intelligence (AI) poses a new challenge for regional, national and supra-national governance
schemes. In this research we focus the scope of analysis on a specific application of AI, that
of autonomous vehicles, to try to answer the question: How do the different logics of regulation of
AI affect the level of development of this technology?

Artificial Intelligence (AI) software can be defined as “intelligent human-created systems with the
ability to think and learn”. Its applications are extremely broad, from machines that propose
solutions from observational data to algorithms that recode themselves to adapt to changes in
the data they analyse. The future importance and power of these software solutions is so patent
that the White House, the UK House of Commons, the European Commission and the
government of the People's Republic of China have issued reports on how to prepare for the
deployment of AI and possible ways of regulating it (Reis et al., 2019). From all the possibilities AI
offers, we are especially drawn to the field of Connected and Automated Mobility (CAM) due to
the economic, moral and political challenges it poses for our systems of governance.

Connected and Automated Mobility refers to those vehicles, or self-driving cars, that can guide
themselves without human intervention (European Commission, 2020). This kind of technological
development is expected to have an enormous impact on the economies of countries such as
Germany, the USA and Japan due to the paradigm shift it implies for an industry as
potent as the automotive one (Anderson, J., et al., 2016). Whether this impact is beneficial or detrimental
will depend on the quality and effectiveness of the regulation of the sector. Regulation can be seen as
the employment of legal instruments for achieving specific social-economic policy objectives
(Hertog, 2000). Within the field of regulation we can observe a distinction between economic and
social regulation (Viscusi, Vernon and Harrington, 1996):

- Economic regulation: directly shapes the market and the scheme of supply and demand, both
in the number and behaviour of producers, known as structural regulation, and in
the behaviour of consumers through quality standards, known as conduct regulation.

- Social regulation: addresses the social impact of an economic activity in areas such as labour,
the environment and consumer protection. It mainly aims at reducing discrimination against
and abuse of consumers in the market.
While trying to regulate CAM we have observed that the division between social and economic
regulation becomes diffuse due to the challenges this kind of AI poses for the transparency,
accountability and privacy of consumers (Wirtz et al., 2020). Moreover, the high costs of entry
into the sector, the asymmetric distribution of information regarding the quality of algorithms and the
possible externalities of the implementation of CAM make for a market prone to monopolistic
schemes (Akerlof, 1970; Meade, 1973; Kay and Vickers, 1990). For these reasons countries have
decided to actively participate in the future of this technology, refusing to apply the so-called
“technological-neutrality” approach observed until now in other sectors.

In sectors such as mobile communication or the electrification of the automotive industry,
governments made the choice of letting supply and demand drive the process of technological
development. When no major political decisions are made towards the regulation of a
technological sector, the context is defined as technologically neutral. The laws passed in California,
New York or Arizona among other US states (Anderson, J., et al., 2016) and in countries such as
Sweden, Norway (Hansson, 2020) or China (China Briefing, 2021), together with the proactive approach
implemented by the European Commission via its support for the "Automated Road Transport"
field in the H2020 project (EU Commission, 2019), are clear examples of countries deliberately
deciding not to keep this sector technologically neutral.

This interest in regulating the AI sector derives from the dangers this kind of technology could
present to society and to the economy; at the same time, bad regulation could stifle the development
and implementation of AI solutions, creating a competitive disadvantage vis-à-vis other actors and
countries. On the other hand, effective regulation would improve the perception of safety
regarding AI and would also allow us to perceive that humans remain in control (Reed, 2018). We
consider that this desire for regulating the CAM sector comes from the direct dangers this
technology poses, which is why we will focus on an aspect of AI that has not yet seen extensive
research but is key for its future development. We consider the allocation of AI responsibility a
key aspect for the future development of the field, as sectors as sensitive as nuclear power
are susceptible to the implementation of AI solutions (UK Nuclear Installations Order 562, 2016).

The rapid development and implementation of CAM technology allows us to analyse a non-tech-
neutral field with everyday implications for the citizenry. Most importantly, the fact that there
have already been fatal incidents due to AI malfunction in self-driving systems allows us to
analyse how, and upon whom, different administrations allocate responsibility for algorithms
(Anderson, J., 2016). In 2018 a self-driven Volvo XC90 hit a pedestrian in Arizona; the accident
unfortunately ended the life of the woman hit by the car. Events like this portray how unclear
the liability applicable to this kind of incident is. Classic personal liability cannot be applied, as a
CAM system is not a person that can compensate for an accident; neither can product liability be
applied, as such a technology is a learning product and the error may stem from an
unknown situation (Reed, 2018).

Our research will focus on a specific application of AI technology, that of autonomous vehicles, in
order to identify how the regulation of accountability allocation for CAM technology influences
the development of this kind of system. The fact that this field is a non-tech-neutral context in
three broadly different political systems, namely the USA, the EU and China, together with the
fact that these three actors have had strategic plans for AI development since 2017,
will allow for the identification of either convergence or divergence in the regulatory logic, and
thus in liability allocation.

Literature review

We consider that, depending on which theory of regulation is dominant in a given context, the
political economy of the regulator will change, and this will influence which actor is
given responsibility for AI errors. These different schemes of regulation will have an impact on the
capabilities and restrictions a CAM system can have, shaping the competitiveness of a
given country in the AI sector.

AI regulation aims to minimise the adverse effects of this technology while maximising its
benefits (Smuha, 2021). The ideal regulatory field, following a liberal conception of regulation,
would let the forces of supply and demand define how software should be and which algorithms
are the better ones (Arrow, 1985). But in the majority of cases the government has to introduce
changes to the market in order to make it more efficient, reduce transaction costs and correct other adverse
effects a missing market such as AI can produce (Bator, 1958). In this case, contract law and property
rights over the functioning of the algorithms would decrease transaction and production costs,
aiming to allocate resources in the most efficient way. CAM systems are treated as products within a
market, and thus accountability is allocated to the actor that created the algorithm (EEC, 1985).

An alternative logic of regulation would be that proposed by Stigler (1971), called the Economic
Theory of Regulation. This school of thought holds that regulation is developed by the industry
and is designed to benefit it the most. The regulator can be influenced by interest groups and,
following the theories of Downs (1957) and Olson (1965), the organisational capacity of the
industry is much higher than that of individuals; therefore their interests will be better represented,
resulting in regulation that favours income transfer to the industry in exchange for political support
(Hertog, 2000). Organised AI developers will push for a regulatory system that allocates
responsibility for failure either to the state or to insurance companies, as in the case of the UK (Bill
2017-19 s2). Alternatively, this theory of regulation also allows responsibility to be
allocated to the state if the government has a very clear interest in promoting this kind of
technology, as the possible negative impact of CAM will be dealt with by the state while the
benefits remain in the industry.

Our analysis will try to determine upon whom falls the responsibility of covering for AI malfunctions in
three broadly different contexts: the USA, the EU and China. After describing the theories of
regulation for CAM present in these contexts, we will proceed to determine whether this has a
significant impact on the state of development of self-driving vehicles. Automated driving
systems can be classified into 6 levels according to the National Highway Traffic Safety
Administration (2016):

- Level 0: The human driver does all the driving

- Level 1: An advanced driver assistance system (ADAS) on the vehicle can sometimes assist the
human driver with either steering or braking/accelerating, but not both simultaneously
- Level 2: An advanced driver assistance system (ADAS) on the vehicle can itself actually
control both steering and braking/accelerating simultaneously under some circumstances. The
human driver must continue to pay full attention (“monitor the driving environment”) at all times
and perform the rest of the driving task
- Level 3: An automated driving system (ADS) on the vehicle can itself perform all aspects of the
driving task under some circumstances. In those circumstances, the human driver must be ready
to take back control at any time when the ADS requests the human driver to do so. In all other
circumstances, the human driver performs the driving task
- Level 4: An automated driving system (ADS) on the vehicle can itself perform all driving tasks
and monitor the driving environment – essentially, do all the driving – in certain circumstances.
The human need not pay attention in those circumstances
- Level 5: An automated driving system (ADS) on the vehicle can do all the driving in all
circumstances. The human occupants are just passengers and need never be involved in driving
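The six levels above form a simple ordinal scale. As an illustration only (our own sketch, not an official NHTSA artefact), they can be encoded so that the point where monitoring duties shift from the human to the automated system becomes explicit:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """NHTSA (2016) driving-automation levels summarised above."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # ADAS: steering OR braking/accelerating
    PARTIAL_AUTOMATION = 2      # ADAS: steering AND braking/accelerating
    CONDITIONAL_AUTOMATION = 3  # ADS drives; human must take over on request
    HIGH_AUTOMATION = 4         # ADS drives and monitors in certain circumstances
    FULL_AUTOMATION = 5         # ADS drives in all circumstances

def human_must_monitor(level: AutomationLevel) -> bool:
    """Up to level 2 the human must monitor the driving environment at all
    times; from level 3 upward the ADS takes over that monitoring task."""
    return level <= AutomationLevel.PARTIAL_AUTOMATION
```

This boundary between levels 2 and 3 is also where the liability questions discussed above become most acute, since the human is no longer the monitoring agent.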

Following the rationality of the agent developing the AI, we expect to observe different logics of
regulation depending on the country or institution being analysed; these logics would allocate
responsibility differently, changing the incentives for investing in CAM systems. A context in
which the state is the one to cover the expenses derived from errors in the algorithms is expected to

have developed a much more complex algorithm, as the CAM industry would face no real
consequence if its systems failed. A context in which the insurance sector has to face
these costs would show a considerable level of AI development, but not as high, due to insurance
fees modifying the supply and demand of these vehicles. Finally, if the CAM software is treated as a
product and the developing company is the one to cover for malfunctions of the vehicles, we should
observe a lower level of development, as tech companies would invest much more in the security of
their systems (Hertog, 2000; Kay and Vickers, 1990; Reed, 2018).
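The expectation above amounts to an ordering of development levels by liability regime. A minimal sketch, with purely illustrative ranks rather than measured data:

```python
# Expected ordering of CAM development by who bears liability for failures,
# following the argument above (ranks are illustrative, not measurements).
EXPECTED_DEVELOPMENT_RANK = {
    "state": 3,      # highest: the industry externalises the cost of failures
    "insurance": 2,  # intermediate: premiums feed back into supply and demand
    "developer": 1,  # lowest: product liability diverts investment into safety
}

def expected_more_developed(regime_a: str, regime_b: str) -> bool:
    """Does the argument predict more CAM development under regime_a
    than under regime_b?"""
    return EXPECTED_DEVELOPMENT_RANK[regime_a] > EXPECTED_DEVELOPMENT_RANK[regime_b]
```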

Research design

Research question

Recalling the literature consulted, and aiming towards the identification of different logics of
regulation regarding AI and how these shape the accountability of Connected and
Automated Mobility algorithms, we see fit for the research question to be as follows:

How do the different logics of regulation of AI affect the level of development of this
technology?

In this research we will observe a specific implementation of regulatory theory, namely the
allocation of responsibility in the regulation of CAM systems, in order to explore one of the
youngest fields of technology. The enormous range of possibilities AI offers makes it impossible to
regulate all types of algorithms with a unitary scheme of regulation; therefore, we will focus on one
of the applications that can have the most direct impact on daily life in the short-term future, this being
the field of autonomous mobility.

Hypothesis

The risks of AI, and by extension of CAM, are likely to be imposed on citizens and governments in
an incremental way. Some of them have already been theorised, but as this technology becomes
widely used our governance systems need to be prepared for situations developers did not take into
consideration (Reed, 2018). AI improves on the basis of real-life experiences; therefore, if a situation has
not yet been learned and a vehicle causes harm because of it, the legal system is expected to be prepared for
this situation and capable of solving the issue and designating who should pay for the damages.
But the scheme of incentives of actors can change depending on upon whom the liability is imposed.

The technological industry will not have the same incentives to develop this kind of system
depending on who or what is to be held accountable. This change in the incentives for developing this
industry is expected to have a direct impact on the speed of development of such systems, which is
the main focus of our research. Taking this into consideration, we see fit for our hypothesis to be the
following:

H1: The different logics of regulation allocate responsibility for AI differently, and this has an
influence on the level of development of this technology.

We have decided to subdivide this hypothesis in order to be able to conduct a systematic analysis of
its analytical units; these sub-hypotheses are the following:

H1.1: Different countries present different regulatory logics, which are reflected in the allocation of
responsibility.

H1.2: Depending on upon whom responsibility is allocated, the level of development of CAM
systems will differ.

Following this scheme we will be able to observe a larger part of the reality of AI regulation.
Logically, if we observe the same regulatory theory being applied, responsibility should be allocated
in the same way, and therefore all countries should be at the same level of CAM development.
If this is not the case, and the logic is the same but the allocation is different, with the level of
development also differing across cases, we could consider that the regulatory logic is not
as important as the allocation of responsibility. Depending on the combinations of
convergence or divergence in the logics and in responsibility allocation, we will determine the impact
of these variables on the level of development of AI.

Variables

Our research consists of two main parts, one descriptive and one substantive. The first part of our
analysis will consist of a deep dive into the regulatory policy implemented by the countries or
organisations of interest, with the aim of determining the logic of regulation present in each of the
observed contexts. The next step will consist of the identification of the actor upon whom resides
the responsibility for the harm caused by a malfunction in an autonomous vehicle algorithm.
With this in consideration, we will try to categorise countries into one of the following logics:

- Liberal regulation: countries that treat AI as a product in a market; regulation in this case
aims at reducing transaction costs and preventing missing markets. In this case, the
responsibility would be allocated to the developer of the algorithm.
- Private economic regulation: countries try to favour the industry while developing this
technology and externalise the costs by covering the material costs of errors in the algorithms
through the insurance companies of the malfunctioning vehicles.
- Public economic regulation: the logic is the same as in private economic regulation, but, in
order not to burden the economic capabilities of the insurance industry, the state itself
covers the material costs of accidents caused by AI-driven vehicles.

Additionally, we introduce into the analysis a series of control variables in order to isolate other
possible causal effects on the level of development reached in the field; these will be the
following:

- GDP per capita: we will divide the cases of analysis according to their GDP per capita in order to
discard the possibility that richer countries can simply afford better technological development.
- Level of education: research and development of technology being highly linked with the level of
education present in a context, we want to discard the possibility of this variable having influence
over the field of AI.
- Weight of the automotive sector: measured as the percentage of this economic sector over the total
economy of the country, we will observe the relative importance of the vessel of CAM systems
for the development of this technology.
- Weight of the technological sector: following a similar logic to the previous variable, we will
observe whether the relative importance of the technological sector in the total economy of the
country has influence over our dependent variable.
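For the comparative analysis, each control variable has to be reduced to a dichotomous membership score. A minimal sketch of a median-split dichotomisation, with entirely hypothetical figures (real values would come from official statistical sources):

```python
from statistics import median

# Entirely hypothetical control-variable scores for the cases of analysis;
# real figures would be taken from official statistical sources.
cases = {
    "China": {"gdp_pc": 12_000, "education": 0.55, "auto_share": 0.06, "tech_share": 0.08},
    "USA":   {"gdp_pc": 63_000, "education": 0.88, "auto_share": 0.03, "tech_share": 0.10},
    "EU":    {"gdp_pc": 38_000, "education": 0.80, "auto_share": 0.07, "tech_share": 0.05},
}

def dichotomise(cases, variable):
    """Median-split one variable into the 0/1 membership scores used in
    crisp-set comparative analysis (1 = above the cross-case median)."""
    cut = median(c[variable] for c in cases.values())
    return {name: int(c[variable] > cut) for name, c in cases.items()}
```

A median split is only one possible calibration; substantive thresholds (e.g. a fixed GDP per capita cut-off) could be substituted without changing the structure of the analysis.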

Methodology

Due to the number of possible iterations between the variables we want to observe and the final
impact the logics of responsibility allocation can have on the level of development of AI
technology, we have decided to carry out a Qualitative Comparative Analysis (QCA) for our research.
This approach is highly innovative in the analysis of public policy and regulation and allows us to
check for cross-case causality instead of only comparing policy results in different
scenarios (Mahoney, 2004).

Via the comparison and aggregation of dichotomous variables we will be able to detect patterns in
the effects of the three different logics of regulation. Depending on whether or not our variables of
interest are distinctively present when a certain level of CAM development is achieved, we will be
able to confirm or discard our hypothesis. Additionally, this methodology allows us to partially
accept our main hypothesis in case only one of the sub-hypotheses is confirmed. Finally, this approach
to the analysis of regulation allows for the observation of nuances in the causal mechanism present
in this specific phenomenon, as the variables analysed can be taken into consideration both individually
and in conjunction when affecting the dependent variable.
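As a sketch of how the dichotomous variables feed into a crisp-set QCA, the toy truth table below groups hypothetical cases by their configuration of regulatory-logic conditions and keeps only the configurations whose cases agree on the outcome (all data values are invented for illustration):

```python
# Hypothetical crisp-set observations: 1 = condition present.
# LIB = liberal logic, PRIV = private economic logic, PUB = public economic
# logic; DEV = high level of CAM development (the outcome). Invented data.
observations = [
    {"LIB": 1, "PRIV": 0, "PUB": 0, "DEV": 0},
    {"LIB": 0, "PRIV": 1, "PUB": 0, "DEV": 1},
    {"LIB": 0, "PRIV": 0, "PUB": 1, "DEV": 1},
]

def truth_table(observations, conditions, outcome="DEV"):
    """Group cases by their configuration of conditions and collect the
    outcome values observed for each configuration."""
    table = {}
    for row in observations:
        config = tuple(row[c] for c in conditions)
        table.setdefault(config, set()).add(row[outcome])
    return table

def consistent_configs(table):
    """Configurations whose cases all share one outcome value; contradictory
    configurations (mixed outcomes) are dropped."""
    return {cfg: next(iter(vals)) for cfg, vals in table.items() if len(vals) == 1}
```

In the full analysis the conditions would also include the control variables, and contradictory configurations would prompt a return to the cases rather than automatic exclusion.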

Cases

- China: with the release of the ‘New Generation Artificial Intelligence Development Plan’ in 2017,
the People’s Republic of China made a clear statement to the world of its intention to be a leader
in the artificial intelligence sector and a pioneer in the regulation of these solutions (China
Science and Technology Newsletter, 2017). As this is a major actor in both the development and
implementation of AI and CAM systems (Roberts et al., 2021), we see fit for it to be one of the central
actors of our research.
- The USA: in the US the automotive industry is especially reluctant towards regulatory policy; therefore,
the federal government and the USDOT (US Department of Transportation) have not applied any
kind of enforceable policy, but have designed a series of guidelines for the states to regulate this
technology at the local level (NCSL, 2017). This multilevel approach to regulation offers two
contexts of analysis in one: we will treat the US as a two-sided case, in which we analyse the
impact of the federal guidelines and the effects of specific regulations in the State of California,
one of the areas of the US with the most tech companies developing AI (AI Business, 2019).
- The EU: the intentions of the EU are directed towards increasing trust in AI technology and ensuring
its compliance with human rights while its uses foster economic and social development. The
overall intention is to create synergies between the public and private sectors in regard to AI
(European Commission, 2021). This distinctive approach and the proactive character of the EU
towards CAM are both relevant for the field of study.
Bibliography

- Arrow, Kenneth J. (1985), ‘The Potentials and Limits of the Market in Resource Allocation’, in
Feiwel, G.R. (ed.), Issues in Contemporary Microeconomics and Welfare, London, The Macmillan
Press, 107-124.
- Akerlof, George A. (1970), ‘The Market for ‘Lemons’: Quality Uncertainty
and the Market Mechanism’, 84 Quarterly Journal of Economics, 488-500.
- Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., & Oluwatola, O. (2016).
Autonomous Vehicle Technology: A Guide for Policymakers. In Autonomous Vehicle
Technology: A Guide for Policymakers. https://doi.org/10.7249/rr443-2
- Bator, Francis M. (1958), ‘The Anatomy of Market Failure’, 72 Quarterly Journal of Economics,
351-379
- Downs, Anthony (1957), An Economic Theory of Democracy, New York, Harper and Row.
- European Commission (2019), Connected and Automated Mobility. https://digital-
strategy.ec.europa.eu/en/policies/connected-and-automated-mobility
- Hansson, L. (2020). Regulatory governance in emerging technologies: The case of autonomous
vehicles in Sweden and Norway. Research in Transportation Economics, 83, 100967. https://
doi.org/10.1016/j.retrec.2020.100967
- Hertog, J. (2000), ‘General Theories of Regulation’, in Encyclopedia of Law and Economics, Volume
I: The History and Methodology of Law and Economics, 223–270. http://encyclo.findlaw.com/
index.html
- Kay, John A. and Vickers, John S. (1990), ‘Regulatory Reform: An Appraisal’, in Majone,
Giandomenico (ed.), Deregulation or Re-regulation, London, Pinter Publishers, 223-2
- Meade, James, A. (1973), The Theory of Economic Externalities, Leiden, Sijthoff
- NHTSA (2016), Federal Automated Vehicles Policy Public Meeting.
- Olson, Mancur (1965), The Logic of Collective Action. Public Goods and the Theory of Groups,
Cambridge, MA, Harvard University Press
- Reed, C. (2018). How should we regulate arti cial intelligence? Philosophical Transactions of
the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128). https://
doi.org/10.1098/rsta.2017.036
- Reis, J., Santo, P. E., & Melao, N. (2019). Impacts of arti cial intelligence on public
administration: A systematic literature review. Iberian Conference on Information Systems and
Technologies, CISTI, 2019-June(June), 19–22. https://doi.org/10.23919/CISTI.2019.8760893
- Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for
arti cial intelligence. Law, Innovation and Technology, 13(1), 57–84. https://doi.org/
10.1080/17579961.2021.1898300
- Stigler, George J. (1975a), ‘The Goals of Economic Policy’, 18 Journal of Law and Economics,
283-292
- UK Consumer Protection Act 1987.
- UK Nuclear Installations (Liability for Damage) Order 2016, No. 562.
- UK Automated and Electric Vehicles Bill 2017-19 s2.
- Viscusi, W. Kip, Vernon, John M. and Harrington, Joseph E., Jr (1996), Economics of Regulation
and Antitrust, Cambridge, MA, MIT Press
- Wirtz, B. W., Weyerer, J. C., & Sturm, B. J. (2020). The Dark Sides of Arti cial Intelligence: An
Integrated AI Governance Framework for Public Administration. International Journal of Public
Administration, 43(9), 818–829. https://doi.org/10.1080/01900692.2020.1749851