
Journal of Digital Economy 1 (2022) 44–52

Contents lists available at ScienceDirect

Journal of Digital Economy


journal homepage: www.keaipublishing.com/en/journals/journal-of-digital-economy

Ethical governance of artificial intelligence: An integrated analytical framework

Lan Xue, Zhenjing Pang *
School of Public Policy and Management, Tsinghua University, Beijing, 100084, China

ARTICLE INFO

Keywords:
Artificial intelligence
Ethical governance
Autonomous driving

ABSTRACT

Emerging technologies have faced ethical challenges, and ethical governance has evolved over time to manage these technologies. The governance paradigm has gradually shifted from scientific rationality to social rationality and ultimately toward a higher ethical morality. This trend of seeking higher levels of ethics and morality provides a rich theoretical underpinning for the ethical governance of artificial intelligence (AI), which is a complex and comprehensive project involving problem identification, path selection, and role configuration. Ethical problems in AI can be identified in the technology, value, innovation, and order systems. Across these four systems, the basic patterns of ethical problems manifest as uncontrolled risks, behavioral disorders, and ethical disorders. Regarding path selection, AI governance strategies such as ethical embedding, assessment, adaptation, and construction should be implemented within the technology life cycle at the stages of research and development, design and manufacturing, experimental promotion, and deployment and application, respectively. Regarding role configuration, multiple actors should assume different roles, including providing ethical factual information, expertise, and analysis, as well as expressing ethical emotions or providing ethical regulation tools under different governance strategies. This study provides a comprehensive discussion of the practical applicability of AI ethical governance using the case of autonomous vehicles.

1. Introduction

Since the First Industrial Revolution, disruptive innovations have driven a series of purposeful and irreversible advances that have led us to the Fourth Industrial Revolution, which has introduced a remarkable set of breakthrough innovations such as genetic engineering, new materials, new energy, Internet technology, and artificial intelligence (AI). These breakthroughs mark the advent of an axial era in the human technological revolution. While people embrace the empowerment these emerging technologies offer, they are at the same time actively constructing systems of governance for these technologies to avoid falling into the trap of what Heidegger calls technological Gestell, or fetishism. People have remained vigilant about technological leaps that cross the blurred boundaries between technology, humans, nature, and society.
AI is the most typical of these technologies. People have been exploring AI from conception to application since the term was coined in the 1950s. With increased computing power, the availability of big data, and advances in algorithms, AI has permeated the entire production process of knowledge, technologies, and products. In particular, AI has become the core driving force of digital and economic transformation through the application of ANN-based deep learning, drawing on big data, algorithms, full-coverage computing power, efficient replication, and multi-source heterogeneity. This process opens the gate to the smart era.

* Corresponding author.
E-mail address: pangzhenjing@foxmail.com (Z. Pang).

https://doi.org/10.1016/j.jdec.2022.08.003
Received 27 June 2022; Received in revised form 7 August 2022; Accepted 14 August 2022
2773-0670/© 2022 The Authors. Published by Elsevier B.V. on behalf of KeAi Communications Co., Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
At the level of knowledge, technology, and application, the expansion of AI is characterized by strong penetration, high complexity,
and technological breakthroughs. While AI promotes the convergence of multiple elements, enhances the interaction of multiple
subjects, and facilitates the fusion of multiple states, the progress in AI research and application has also influenced the existing ethical
and moral order. People must walk the fine line between the welfare generated by AI and the ethical risks it brings. Unlike other emerging technologies, AI is characterized by a set of tangled attributes, such as a hidden technical core, an anthropomorphic technical form, opportunities for cross-domain application, intertwined interest subjects, multi-dimensional technical risks, and complex social impacts. These attributes generate many ethical concerns in the development and application of AI technology, including privacy infringement, discrimination, the technological leviathan, the digital divide, information cocoons, the Matthew effect, and other issues.
Historically, discourse on these issues was dominated by advanced countries, where the technologies were first developed and the challenges first encountered. Various ethical initiatives, declarations, and rules were usually launched from these countries. However, with the progress made by China and other emerging countries, the ethical challenges of new technologies are now recognized and encountered by a much wider part of the global community, including China, which can no longer stay behind. China has to join these discussions and contribute to the debates in a constructive way. In this spirit, our study provides an integrated analytical framework for the ethical governance of AI and identifies a practical path for its application. The rest of this paper is structured as follows: Section 2 reviews past studies on ethical governance in emerging technologies and AI; Section 3 presents an integrated analytical framework for the ethical governance of AI; Section 4 discusses the case of autonomous vehicles; and Section 5 concludes the paper.

2. Literature review

2.1. Ethical governance in emerging technologies

Tracing their theoretical origins, ethical concerns in emerging technologies have followed a clear line of progression. In the 1960s, emerging technologies such as nuclear energy and chemical-intensive industries caused prominent environmental problems. In response, governments formulated regulatory policies to safeguard the ethical use of these technologies, and technology assessment became the core approach to technology governance (Baram, 1973). Thereafter, an expert decision-making model (technocracy) was legitimized by the authority of knowledge, in which political experts with policy experience and technical experts with knowledge authority worked collaboratively to make institutional arrangements. These experts chose technology governance tools by assessing impacts along predictable paths of technology development (Sarewitz, 2011). In the 1980s, genetics leaped from theoretical knowledge to technological application, thereby opening a potential Pandora's box of artificial life. This led to discussion of the uncertainty and ethical dilemmas of emerging technologies.
A series of technological disasters, such as the European mad cow disease crisis and the Chernobyl nuclear power plant accident, further dissipated trust in public institutions' technological decision-making, leading to the gradual decline of the regulation- and assessment-based governance model. The precautionary approach emerged as a new method for governing emerging technologies, in which post-normal science was characterized by deficits in knowledge and information as well as burgeoning ethical tensions (Stirling, 2016). This new approach advocates an active precautionary policy framework until the ethical dilemmas of an emerging technology have been opened to discussion from multiple perspectives. With the commercialization of genetically modified crops, the theory and practice of emerging technology governance centered on the precautionary principle. In the 1990s, the ethical, social, and economic implications of technology became the focus of discussion with the implementation of the Human Genome Project. The ethical, legal, and social implications (ELSI) model then emerged as a corrective mechanism for technology governance to fill the humanistic gap, advocating the inclusion of broader ethical values and the legal and socio-economic implications of technology (Michael, 2008). Although the ELSI model moved beyond the initiation stage, it was not widely adopted in the practice of emerging technology governance.
At the beginning of the 21st century, the nanotechnology breakthrough became a precursor of the fourth technological revolution, and the concept of anticipatory governance was proposed based on reflections on ELSI. The core of anticipatory governance is to embed social values, ethics, and public preferences into the scientific research process (Guston, 2010) to shape technologies from an early stage and make them more ethical (Rip et al., 1995; Guston, 2002; Grin, 2000; Wynne, 2002; Friedman et al., 2002). Moreover, anticipatory governance promoted open scientific research and contributed to an ontological understanding of the co-evolution of technology and society. Since nanotechnology failed to facilitate industrial renewal in a revolutionary way, attention turned to the governance of innovation in emerging technologies. As a result, the concept of responsible research and innovation (RRI) emerged in 2010 and was adopted by the European Union (EU) Horizon 2020 Framework Programme. With the application of synthetic biology research, RRI has become the mainstream paradigm for emerging technology governance in the EU (Owen et al., 2012). RRI emphasizes a shift of technology governance away from traditional risk-based paradigms (assessment of technological, ethical, legal, or social impact; the precautionary principle; and anticipatory governance) toward a paradigm of shaping the responsibilities of scientific researchers. The paradigm also extends to science and technology innovation itself, institutional responses to innovation, the reshaping of public responsibility for science development, and the establishment of public participation in science.


2.2. Ethical governance of AI

The ethical concerns in emerging technology governance provide rich theoretical resources for the ethical governance of AI. Current studies on AI ethical governance focus on three specific research areas: concept, framework, and subject. At the conceptual level, different perspectives have been used to determine the types of AI to be developed and to influence the application process through the definition of core concepts, objectives, and values. Although the concept of AI remains controversial, this does not affect its application from a governance perspective. The core conceptions that influence human-technology relations include "Beneficial AI" (Friedman et al., 2002), "Ethical AI" (UK House of Lords, 2017), "Trustworthy AI" (OECD, 2019), and "Responsible AI," proposed by China's New Generation AI Governance Committee. These concepts guide stakeholders in thinking about the direction of AI development, while the interpretation of their specific connotations and the resulting requirements for products and behavioral norms also guide the governance of ethical issues related to AI.
At the framework level, holistic analytical frameworks have been proposed to localize and modularize the complex issues related to the main ethical concerns in AI. Such research focuses on the origins, innovations, improvements, and expansions of emerging technology governance theories. In addition, this area of research embeds AI ethical issues in existing theoretical frameworks or extracts useful elements from existing theories and maps them into the analytical domain of AI ethical governance, providing paths for addressing ethical issues arising from the emergence, application, and development of AI. The goal is to provide a pathway for solving ethical problems and exploring existing governance frameworks. Typical examples include accountable and explainable AI, which focus on issues such as responsibility; discrimination-aware data mining, which focuses on fairness; and privacy by design, which focuses on privacy.
At the subject level, research focuses on practical rationality to delineate the scope of the subject matter, toolsets, and path selection
involved in AI ethical governance. The major focus is on the relationship and structure of responsibility and power distribution among
different stakeholders in the process of AI ethical governance. A holistic AI ethical governance framework should include technical,
organizational, and policy levels.
Recently, many new ideas have emerged in the discussion of governance mechanisms and frameworks for emerging technologies. A more complete theoretical framework has gradually been formulated that provides an in-depth analysis of the power relationships among different actors, such as government regulators, enterprises, and the public, regarding the regulation of emerging technologies. Tentative and adaptive governance models have been empirically adopted in exploring emerging technologies, considering their dynamics, uncertainties, innovations, and potential risks. In addition, constraints and incentives between regulators and the regulated are formed on the basis of the discretionary power of front-line regulators to achieve more effective self-restraint. These discussions have become the theoretical basis for innovation in AI ethical governance mechanisms.
In summary, ethical concerns in the governance of emerging technology have followed a clear trajectory of paradigm change: from technology assessment to the precautionary principle; from ethical, social, and legal assessment to anticipatory governance; and from responsible research and innovation to adaptive or tentative governance. Each paradigm change reflects a trend from scientific rationality to social rationality, with a strong focus on ethics and morality. This provides a rich theoretical underpinning for the ethical governance of AI in the smart era. Existing research has formulated a systematic research outline, which is of great significance in guiding the practice of AI ethical governance. However, this line of research has failed to probe deeply the problematic nature of the ethical governance of AI or to analyze the distinct ethical issues that arise across empirical data, model training, verification, application evaluation, and feedback. In addition, existing studies have not examined these issues across the life cycle of AI development from R&D to application. Moreover, they have not put forward corresponding adaptive governance solutions or portrayed the roles of different actors (e.g., technical actors, policy actors, social actors) in the ethical governance of AI. This study tries to fill these gaps by constructing an integrated framework for AI governance that recognizes problems, application scenarios, and role configurations, and by clarifying the practical path of AI ethical governance for the autonomous driving scenario as a useful exploratory case.

3. An integrated analytical framework for the ethical governance of AI

Recognizing the tremendous potential and the clear risks, more than 70 programs on the ethical principles of AI have been proposed
by different national and regional governments, intergovernmental organizations, scientific research institutions, non-profit organi-
zations, scientific and technological societies, and enterprises globally. These proposals focused on 10 major themes: human-
centeredness, cooperation, sharing, fairness, transparency, privacy, external security, internal security, accountability, and long-term
applications. On November 24, 2021, UNESCO released a Recommendation on the Ethics of Artificial Intelligence, which was the
first global normative framework for the ethical governance of AI. The recommendation identifies 10 principles and 11 areas of action
for regulating AI technologies. The recommendation also proposes that the development and application of AI should reflect four major
values: (1) respect, protect, improve human rights and enhance human dignity, (2) promote the development of the environment and
ecosystems, (3) promote diversity, inclusion, and equity in the workplace, and (4) build a peaceful, just, and interdependent human
society.
However, translating these ethical principles into practice is a complex and challenging endeavor and needs a more systematic approach to identifying problems, selecting solution paths, and assigning roles to relevant stakeholders. In the following discussion, we first identify potential problems associated with AI application from the perspective of the social system in which such application occurs. Second, we delineate paths to the ethical use of the technology based on the technology life cycle. Finally, we construct a role configuration that lays out the relevant responsibilities of the multiple stakeholders involved in the development and application of AI.


3.1. Problem identification

Fremont E. Kast used systems theory to analyze management issues comprehensively, indicating that management problems arise within a goal-oriented system, a psychosocial system, a structural system of integrated activities, and a technical system. According to social systems theory, the ethical governance of artificial intelligence is rooted in four major systems (Fig. 1). The first is the technology system, comprising the knowledge and artifacts of technology, which manifests as AI products, hardware, software, and services; here, ethical governance emphasizes the effective control of technological risks. The second is the value system, focusing on technology assessment and expressed as people's moral rationality regarding value orientation. From the perspective of ethical governance, AI should not disrupt the normal relationships among technology, people, society, and nature, as such disruption affects AI's ethical acceptability. The third is the innovation system, in which the socialization of AI technology manifests as innovative behaviors and activities under different application scenarios. Here, the ethical governance dimension focuses on constraining innovative behavior in AI and emphasizes that the instrumental rationality of behavior cannot exclude considerations of ethical responsibility. The fourth is the order system, the system for distributing rights, powers, interests, and responsibilities under technical conditions, expressed as the social impact brought about by the application of AI. Here, ethical governance emphasizes that the distribution of rights and responsibilities, power structures, and interest patterns among relevant stakeholders be embedded in social scenarios to maintain a stable social order.
In the technological system, the ethical problems of AI are endogenous and manifest as the probability of uncontrolled risk: the inability to predict, explain, calculate, evaluate, and control technical risks scientifically, and the inability to dissipate the negative effects of AI applications. The uncertainty, bias, and vulnerability of AI technology are the root causes of these ethical problems. Ethical problems arise when black-box algorithms, with their self-reinforcing character and rapid proliferation, are fed with biased data. When machine learning relies heavily on code and data samples, stability can hardly be guaranteed. Moreover, repeated runs over sample data can mislead the machine into making wrong ethical decisions due to misleading assumptions. For example, discrimination is often an unpredictable and unconscious by-product of algorithms. Even when algorithm engineers make conscious choices, they face challenges in identifying AI-related ethical problems.
From the perspective of the value system, the ethical problem of AI has a relational character: a disorder in ethical relationships, revealing that as AI develops, the boundaries blur between humans and technology and between technology and society. The ethical themes before the launch of AI differ from those of the modern AI era. Formerly, the scope of discussion was limited to relationships among humans; now it extends to the relationship between humans and machines, leading to the formulation of machine ethics, in which an autonomous system is capable of posing risks to humans. The question is how human beings should view their relationship with intelligent machines. Traditionally, only humans have morals: their capacity for rational thought and communication, grounded in their biological senses, determines their ethical behavior and makes them moral subjects. With the development of AI, the moral status of AI products has become a focus of discussion. The relationship between humans and intelligent machines differs from relationships among humans and may change the position of human morality. Thus, defining the ethical relationship between humans and machines is challenging.
In the innovation system, the potential ethical problem of AI's development is that innovative practices and applications of AI excessively pursue means to an end while ignoring values and beliefs. Thus, AI is used as a vital strategic resource for promoting the development of science and technology, industrial optimization and upgrading, and productivity, without proper consideration of the ethical implications of its use. The comprehensive progress of AI in data, algorithms, and computing power has become the core driving force for the digital transformation of the economy and society. If we pay too much attention to instrumental value and ignore the moral responsibility of AI in the process of technological innovation, the development of AI will be unsustainable.

Fig. 1. Problem identification in the ethical governance of artificial intelligence.
In the order system, the ethical problem of AI is derivative, leading to a disorder of ethical outcomes. In the process of AI socialization, the reconfiguration of rights, power, interests, and responsibilities is driven by differences in cognition, understanding, and usability among groups and individuals, without an institutional mechanism to function as a corrective measure that maintains fairness and justice. The development of human society lies in the pursuit of harmony and the maintenance of social justice, which has been the basic orientation of social ethical values. However, the development of AI has dismantled the social division of labor and overturned traditional labor relations, causing significant structural unemployment and affecting social justice. With the breakthrough of AI applications, privacy infringement, algorithmic discrimination, the digital divide, and blurred responsibility have generated many legal and ethical challenges.

3.2. Path selection

Although AI technology itself does not have an ethical and moral quality, developers can incorporate ethical values into AI through algorithm design, data selection, and model optimization. Thus, a comprehensive understanding of the ethical governance of AI is vital for embedding ethical considerations in each stage of innovation and application, which involves adopting different regulatory mechanisms at different stages of technological development. Corresponding regulatory tools must be integrated at each stage of AI research and development, design and manufacturing, experimental promotion, and deployment and application to prevent the probability of uncontrolled risk and the disorder in ethical relationships, behavior, and outcomes brought about by AI technology (Fig. 2).
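One hypothetical way that "data selection and model optimization" can carry ethical values in practice is to gate a model's release on a quantitative fairness check. The sketch below is illustrative only: the metric (demographic parity difference), the data, and the threshold are all assumptions, not prescriptions from this framework.

```python
# Hypothetical sketch: gate a model's release on a simple fairness metric.
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    lo, hi = min(rates.values()), max(rates.values())
    return hi - lo

# Model outputs (1 = favorable decision) for candidates from groups A and B.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(round(gap, 2))     # 0.5 -- group A favored at 0.75 vs. group B at 0.25
THRESHOLD = 0.1          # illustrative policy threshold, not a standard value
print(gap <= THRESHOLD)  # False -- this model would fail the ethical gate
```

Such a check is only one regulatory tool among many; which metric and threshold are appropriate is itself an ethical judgment that belongs to the embedding and assessment stages described below.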
The first stage is to embed an ethical model in the research and development phase. In this stage, ethical embedding involves integrating AI technologies with ethical norms and comprises four major aspects. First, an analysis of the ethical goals embedded in AI should be conducted through a basic theoretical assessment of the ethics of science and technology; this defines the stakeholders, clarifies value rationality, normative conflicts, and the status of human subjects, and clarifies the risks associated with R&D. Second, the ethical goals of AI are implemented in specific architectures, including physical carriers, algorithms, and interfaces. Third, AI technological development is integrated with application scenarios to predetermine specific value norms. Fourth, the rationality of AI with respect to potential ethical norms is evaluated to determine whether the technology can reasonably meet expectations.
The second stage incorporates ethical assessment in the design and manufacturing phase, which involves using AI theory or technology to implement relevant activities that form systems, products, or services to meet specific needs. At this stage, the assessment is oriented toward the future of the technology from an ethical perspective. Rather than determining whether an AI technology is good or bad and whether to accept or reject it, ethical assessment in the design and manufacturing phase should explore new approaches to the technology and consider its social functioning in the design process. In this phase, the ethical assessment is an analysis of potential AI ethical problems: it entails anticipating and identifying potential risks, analyzing and clarifying ethical issues, and developing solutions through peer review and communication with the government and the public.
The third stage involves ethical adaptation in the experimental promotion phase, entailing the use of AI systems, products, or services within a certain range to observe the process of their integration with social systems; this is the AI socialization stage. At this point, AI technologies have acquired specific social roles and, having passed the ethical assessment, should be integrated with social systems. Considering the influence of social factors and adjusting societal value choices are vital to achieving social embedding and integrating the AI product with the social value system. In the experimental promotion stage, ethical adaptation must be proactive. At the institutional level, the promotion of AI must be guided and amended through laws and regulations to follow the principles of fairness and justice, and AI products with negative potential that run contrary to mainstream social values should be recalled. The establishment of a social ethics education system is vital to addressing the ethical issues of AI, advocating responsible innovation, and providing moral and intellectual underpinning for sustainable AI development.

Fig. 2. Path selection: Ethical governance of artificial intelligence.


The final stage involves ethical construction in the deployment and application phase, in which AI systems, products, or services are implemented at large scale in work and life environments. At this stage, the utilization and routinization of AI technology are realized. In practice, AI products that have passed ethical adaptation realize ethical construction at this stage, which involves users' recognition, acceptance, and compliance with clear ethical and moral values, and the construction of moral ideals and ethical pursuits for AI. The government, social organizations, and academia should re-examine and reflect on the ethical issues of AI in large-scale industrialization. Through positive iterations and multiple cycles between theory and practice, these stakeholders should continuously improve and optimize AI applications and achieve a dynamic balance between economic benefits and the prevention of ethical issues. Each stakeholder should take part in the ethical and moral cognitive initiative, maintain an open mind, and actively construct a novel ethical and moral value system that meets the standards of the smart era.

3.3. Role configuration

As a complex emerging technology, AI creates uncertainties, and no single institution or subject can provide all the knowledge needed to govern AI's potential ethical issues. AI ethical governance must therefore mobilize multiple stakeholders, including technical people (scientists, algorithm engineers), business people (manufacturers, operators, sellers), people in the social sector (NGOs, media), scholars, government officials, and the general public. These actors should assume different responsibilities and form an action coalition to achieve the ethical governance of AI. Here, we classify the role configurations into five categories: providers of (a) factual information, (b) ethical expertise, (c) analytical ideas, and (d) regulatory tools, plus (e) the general public, who share their ethical concerns (Fig. 3).
Providers of factual information fill the ethical cognition gap by supplying on-site evidence and publishing field investigation results, grounding AI ethical governance in more comprehensive information sources. Providers of ethical expertise identify relevant issues through professional ethical analysis and reasoning to give ethical governance a more solid foundation. Providers of analytical ideas can counter the singularity and narrowness of ethical framings and facilitate comprehensive ethical governance through value analysis across many fields. In addition, the general public can express their feelings and emotional responses regarding the ethical issues of AI through the media, promoting collective review of and response to these issues. Providers of ethical regulation tools promote AI development that supports good values and human welfare by constructing ethical initiatives, consensus, declarations, laws, policies, standards, principles, and guidelines.
In the ethical embedding stage, ethicists and technology developers must act as providers of ethical expertise and develop an analytical framework through cross-disciplinary cooperation, so that value orientations and ethical-moral norms are internalized in the structure of AI technology. At this stage, the most important task of ethicists is to clarify and technically interpret the connotations of ethical principles from a theoretical perspective and to translate abstract macro-ethical principles into elements of the concrete micro-environment. Based on the ethicists' interpretations, the task of scientists and algorithm engineers is to reflect ethical principles in the technology they develop.
In the ethical assessment phase, social organizations, ethics committees, market players, and technical peers must cooperate and assume the role of providers of an ethical analytical framework and ethical expertise, setting aside preconceptions about the problems and constructively predicting, identifying, and tracing possible ethical issues to reach a consensus framework through peer review. The fundamental goal of ethical AI assessment is to make systems reason and act ethically, in a manner similar to human moral reasoning; this has been the most important and difficult element of ethics in AI research. At this stage, the active players should refine the ethical principles, construct a comprehensive and systematic ethical assessment methodology, and provide concrete instrumental guidance for design and manufacturing through better guiding principles and ethical standards, effectively translating design principles into ethical engineering practice.

Fig. 3. Role configuration: Ethical governance of artificial intelligence.

L. Xue, Z. Pang Journal of Digital Economy 1 (2022) 44–52

At the ethical adaptation stage, the government, market players, and industry organizations should actively assume the role of
providers of ethical regulation tools. Enterprises should actively assume social responsibility, take effective measures to prevent
ethical risks, and systematically construct an internal ethical culture, ethical leadership and norms, reward policies, and codes of
conduct instead of unilaterally pursuing economic interests. Moreover, industry organizations must refine their ethical principles for AI's
technical fields and application scenarios, and regulate innovation activities through industry-wide ethical standards. The
government must also take legal and policy measures to strengthen the ethical regulation of AI and to coordinate conflicts of interest,
imbalances of responsibility, and uneven rights. Meanwhile, the public and the media should provide information about ethical facts,
express ethical emotions, and judge the relevant ethical issues through multiple channels. Finally, experts and scholars should
consciously assume the role of providing ethical expertise and actively promote responsible AI technology innovation, as well as the
social popularization of related ethical knowledge through the education system. This process accelerates the deep integration of AI
products and services with the social value system.
At the ethical construction stage, government agencies, enterprises, the public, social organizations, and academia should participate
in forming a collaborative co-management action network. They should jointly provide ethics-related factual information, expertise,
analytical frameworks, ethical emotions, and ethical regulation tools to re-examine and reflect on the issues arising from the
industrialization and socialization of AI. Through multiple cycles between problem identification and practice optimization, these
players should continuously improve AI applications. In this process, each actor should act as a facilitator of history, updating cultural
concepts through emancipation and actively constructing ethical and cultural concepts that fit the characteristics of the smart era.
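The stage-strategy-actor mapping described in this section can be condensed into a small lookup structure. The sketch below is our own illustrative summary in Python; the dictionary keys and actor labels are assumptions drawn from this section, not part of any formal specification.

```python
# A condensed sketch of the role-configuration framework: each life-cycle
# stage pairs a governance strategy with the actors who lead it.
ROLE_CONFIGURATION = {
    "research and development": {
        "strategy": "ethical embedding",
        "lead_actors": ["ethicists", "technology developers"],
    },
    "design and manufacturing": {
        "strategy": "ethical assessment",
        "lead_actors": ["social organizations", "ethics committees",
                        "market players", "technical peers"],
    },
    "experimental promotion": {
        "strategy": "ethical adaptation",
        "lead_actors": ["government", "market players",
                        "industry organizations", "public and media",
                        "experts and scholars"],
    },
    "deployment and application": {
        "strategy": "ethical construction",
        "lead_actors": ["government agencies", "enterprises", "the public",
                        "social organizations", "academia"],
    },
}

def strategy_for(stage):
    """Look up the governance strategy paired with a life-cycle stage."""
    return ROLE_CONFIGURATION[stage]["strategy"]

print(strategy_for("experimental promotion"))  # ethical adaptation
```

The point of the table is that no single stage, strategy, or actor acts alone; each key exists only in relation to the others.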

4. The case of autonomous vehicles

In recent years, AI technology has become an important component of autonomous vehicles, and these vehicles are having
disruptive and revolutionary effects on human lifestyles and values. Autonomous driving frees people from the steering wheel
and allows them to travel conveniently. At the same time, it poses challenges and raises many ethical issues for the
legal system, social values, and ethical norms. The ethical issues raised by autonomous driving technology therefore serve as a
good test of whether relevant stakeholders can work together and make actionable recommendations to promote its benign
development.
As a technical artifact, autonomous vehicles raise at least four types of ethical issues from the perspective of problem identification.
First, autonomous driving technology itself is immature in risk control. An autonomous vehicle collects information about its
surroundings through radar and photoelectric detectors; this information is transmitted to the core chip for processing and
instruction planning, after which the vehicle's decisions are refined through many scene simulations and data calculations.
However, the accuracy of an autonomous driving judgment is challenged under complicated road conditions or by blurred signals.
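The sensing-to-decision loop just described can be sketched as a minimal, hypothetical pipeline. Real perception and planning stacks are far more complex; every name, threshold, and confidence value below is an illustrative assumption, and the fallback to a human takeover request mirrors the concern that judgments degrade under blurred signals.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str                # e.g. "radar" or "camera"
    obstacle_distance_m: float
    confidence: float          # signal quality in [0, 1]; low when blurred

def fuse(readings):
    """Confidence-weighted fusion of obstacle distance estimates."""
    total = sum(r.confidence for r in readings)
    if total == 0:
        return None  # no usable signal: the judgment cannot be trusted
    return sum(r.obstacle_distance_m * r.confidence for r in readings) / total

def plan(distance_m, braking_threshold_m=30.0):
    """Toy instruction planning: brake if the fused obstacle is too close,
    and hand control back to the human when the signal is unusable."""
    if distance_m is None:
        return "request human takeover"
    return "brake" if distance_m < braking_threshold_m else "maintain speed"

readings = [SensorReading("radar", 25.0, 0.9),
            SensorReading("camera", 35.0, 0.3)]  # blurred camera signal
print(plan(fuse(readings)))  # brake
```

Even this toy version makes the ethical point visible: the thresholds and fallbacks are design decisions, not neutral facts.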
Second, from the perspective of ethical relationships, the question remains whether autonomous driving is artificial intelligence (AI)
or intelligence augmentation (IA). AI emphasizes the replacement of human intelligence with machines, whereas IA focuses on
human-centered, human-enhancing capabilities. Several challenging issues remain difficult to resolve: the relationship between
autonomous vehicles and human beings, whether autonomous vehicles have moral reasoning capabilities similar to those of humans,
and whether autonomous vehicles can become autonomous agents. At the level of ethical values, the Trolley Problem is a moral
dilemma that cannot be avoided in the autonomous driving framework; it manifests in both moral judgment and actual behavior.
When an autonomous vehicle faces a crash choice, it relies on the pre-programming of algorithms, which is in turn based on our
ethical preconceptions. The following questions arise:

What kind of ethical algorithm is to be embedded? Should it be based on the self-interest theory of optimal self-safety? Should it be
based on the utilitarian theory of minimal loss calculation? Should it be based on the equity theory of random fate, the consensus of
popular preference, a consensus of virtue, or optimal virtue judgment?
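The contrast among these candidate ethical algorithms can be made concrete with a minimal sketch. The crash options, harm values, and policy functions below are purely hypothetical illustrations of how different ethical preconceptions would be pre-programmed; no manufacturer's actual implementation is implied.

```python
import random
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical crash option: expected harm to occupants and to others."""
    name: str
    occupant_harm: float  # expected harm to vehicle occupants, in [0, 1]
    external_harm: float  # expected harm to other road users, in [0, 1]

def self_interest_choice(options):
    """Self-interest theory: minimize harm to the vehicle's own occupants."""
    return min(options, key=lambda o: o.occupant_harm)

def utilitarian_choice(options):
    """Utilitarian theory: minimize total expected harm across everyone."""
    return min(options, key=lambda o: o.occupant_harm + o.external_harm)

def equity_choice(options, rng=random):
    """Equity theory of 'random fate': treat all options as equally permissible."""
    return rng.choice(options)

options = [
    Outcome("swerve into barrier", occupant_harm=0.6, external_harm=0.0),
    Outcome("brake in lane", occupant_harm=0.2, external_harm=0.5),
]

print(self_interest_choice(options).name)  # brake in lane
print(utilitarian_choice(options).name)    # swerve into barrier
```

Note that the self-interest and utilitarian policies select different options for the same scenario, which is precisely the embedding choice the questions above pose.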

In response to the ethical issues raised above, the governance of autonomous driving should be considered at each stage of its
development and commercialization. First, at the research and development stage, ethics scholars and engineers should embed ethical
principles in algorithm design. Solutions should be built into the algorithms to address issues such as risk allocation, right-to-life
conflicts, and life-choice ordering problems. Second, in the design and manufacturing stage, ethics scholars, manufacturers,
industry associations, and social organizations should conduct a comprehensive assessment of the relevant ethical issues and form a
collective intellectual consensus, drawing on international, national, and industrial experience to facilitate the controllability and
ethical justification of the algorithms from these stages onward.
During the experimental promotion stage, the government, scholars and industrial experts, market players, social organizations, and
the public must be involved in the ethical adaptation of autonomous vehicles. The government should formulate road test
application methods; establish a system of self-examination, reporting, and third-party certification; and review the consistency of
products with their concept and R&D stages. In addition, these stakeholders should assess whether enterprises have the qualifications
and capabilities to handle the complicated ethical issues that may arise in road tests. The stakeholders should also review the
functional reliability and ethical appropriateness of different levels of autonomous vehicles. For example, many countries have
successively introduced management requirements and regulatory standards for autonomous driving safety technology. China's
Unmanned Aerial Vehicle System Standard System Construction Guide (2017–2018 Edition) puts forward a framework for an
unmanned aerial vehicle system standard system. The UK's "Key Principles of Vehicle Cyber Security for Connected and Automated
Vehicles" specifies eight basic principles for controlling personal data and remote vehicles. Enterprises should implement
self-monitoring and incorporate ethics into the entire process of corporate governance through personnel and institutional
management. Social organizations must establish ethical norms and standards to achieve industry self-regulation and to encourage
enterprises to innovate responsibly. Experts and scholars should increase their public education efforts, such as science and
technology lectures, case studies, and popular science programs, to promote greater understanding of the ethical issues related to
autonomous driving technology. The public and the media can use multiple channels to evaluate autonomous driving technology and
form a social atmosphere for monitoring ethical issues. The relevant government agencies should help people understand information
about driverless cars and transmit the latest R&D results to society. These agencies should make autonomous vehicles more
transparent and help users better understand them, make rational judgments about their reliability, and broaden their acceptance.
During the deployment and application stage, autonomous vehicle developers, manufacturers, sellers, industry associations, the
government, social organizations, experts, scholars, and the public should form a collaborative governance action network, learn from
the progressive updating of autonomous driving technology, and identify new ethical issues. These stakeholders should maintain
integrity and flexibility and use different tools of ethical governance, such as laws and regulations, insurance systems, divisions of
authority and responsibility, ethical rules, technical standards, industry self-regulation, and social consensus on autonomous driving
technology. Regarding technological innovation, all institutional players should advocate and establish the concepts of green, safe,
convenient, and free travel in the new era, while redefining the ethical coordinates of autonomous driving technology.

5. Discussion and conclusion

A comprehensive system that incorporates the cognition, evaluation, and restraint of the ethics of science and technology has been a
constant research theme since the early days of previous industrial revolutions. From a theoretical perspective, ethical concerns in
the governance of emerging technologies have followed a clear line: the governance paradigm has moved from scientific rationality
and social rationality to a higher ethical and moral quest. At present, AI technology has become the core driving force for digital
economic and societal transformation. While AI can empower various industries, it also affects the existing ethical and moral order.
Thus, clarifying the ethical issues of AI is crucial to proposing practical paths for ethical governance. The existing research on AI
ethical governance focuses on three aspects: concepts, frameworks, and players. This creates a systematic research outline but does
not reflect deeply on the research topic.
In fact, the ethical governance of AI can be understood on two levels: promoting the development of AI ethics through governance,
and ensuring the governance of AI through ethics. The former emphasizes the use of scientific and technological governance
methods, principles, and tools to govern specific issues of AI ethics, such as the issues of human autonomy, privacy, and security
raised by AI and big data technology. The latter emphasizes using the value orientation of science and technology ethics to provide
guarantees for AI governance frameworks and actions. As an integrated concept, the ethical governance of AI has formed only
recently. On the one hand, it is reflected in the governance of AI ethical issues through regulation and legalization; that is, countries
and regions around the world have successively issued laws and regulations to regulate AI. On the other hand, governance
institutions for AI ethics have been established one after another, becoming the main bodies for governing and supervising the
development of AI. As AI matures, a common problem arises: how should the ethical issues brought about by AI be managed?
Traditional scientific and technological governance models and tools show a "tough" side when applied to the ethical problems
caused by AI, because the ethical uncertainty of AI cannot guarantee the effective implementation of existing governance methods.
Conversely, regulating the ethical issues of AI through ethical principles alone shows a "weak" side, because the inherent cultivation
mechanism of ethical principles and methods cannot guarantee the strength and intensity of governance. Therefore, in the face of
ethical disputes arising from AI, we need to develop new concepts, perspectives, and methods for the ethical governance of AI under
the guidance of scientific and technological ethical principles and norms.
The main contribution of this study is to enrich current research on AI ethics and to provide new insights into the analytical
framework of the ethical governance of AI. Unlike many previous studies that examine ethical issues along the dimensions of
technology, society, and economy (Ashok et al., 2022; Siau and Wang, 2020; Ireni-Saban and Sherman, 2021), this study
comprehensively deconstructs the four basic forms of ethical issues in artificial intelligence from the perspective of social systems,
namely the probability of uncontrolled risk and disorders in ethical relationships, behavior, and results, which profoundly reveals
the essence of AI ethical issues. More importantly, we construct the path selection of the ethical governance of AI from the
perspective of the whole life cycle of AI research and development. Unlike scholars who divide the steps of the ethical governance of
science and technology into seven stages (Doridot et al., 2013; Maesschalck, 2017), that is, grasping the cognitive framework,
acquiring actor competencies, identifying cognitive conditions for ethical reflexivity, identifying ethical issues, finding solutions,
identifying and elaborating solutions, and confirming the feasibility of solutions, we designed a strategy of ethical governance that
follows the technology life cycle. At each stage of AI research and development, design and manufacturing, experimental promotion,
and deployment and application, ethical embedding, assessment, adaptation, and construction, respectively, must be implemented
according to the value of AI. Finally, this study creatively links role configuration with ethical governance: we assign collaborative
responsibilities to multiple actors who form a collaborative action network in the ethical governance of AI and assume distinct roles,
such as providing factual information, ethical expertise, and ethical analytical ideas, expressing ethical emotions, and providing
ethical regulation tools under different scenarios.
This study also has some practical implications. First, a collaborative mechanism for the ethical governance of AI needs to be
established, together with mechanisms for the expression, compensation, and coordination of the interests of all parties, so as to
enhance awareness of collaboration and achieve governance goals; the information sharing mechanism should also be improved,
with an information sharing platform and a dynamic information tracking database. Second, a scientific evaluation mechanism for AI
ethical issues should be constructed. We need to make full use of the advantages of various social actors to form a participatory
ethical risk assessment model, for example, combining the technical evaluation of AI experts with the experience evaluation of the
public to comprehensively improve the scientific quality of the ethical governance of AI.
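One way to operationalize such a participatory assessment model, under illustrative assumptions about weights and rating scales, is a simple weighted aggregation of expert technical ratings and public experience ratings. The function name, the 0-to-1 scale, and the default weighting below are all hypothetical choices, not a prescribed methodology.

```python
def participatory_risk_score(expert_scores, public_scores, expert_weight=0.6):
    """Aggregate a participatory ethical risk score in [0, 1] from expert
    technical ratings and public experience ratings (both on a 0..1 scale).
    The weighting between the two groups is an illustrative assumption."""
    if not expert_scores or not public_scores:
        raise ValueError("both expert and public input are required")
    expert_mean = sum(expert_scores) / len(expert_scores)
    public_mean = sum(public_scores) / len(public_scores)
    return expert_weight * expert_mean + (1 - expert_weight) * public_mean

score = participatory_risk_score(
    expert_scores=[0.7, 0.8],   # e.g. technical reliability concerns
    public_scores=[0.4, 0.6],   # e.g. experienced discomfort or distrust
)
print(round(score, 2))  # 0.65
```

Requiring input from both groups before any score is produced encodes the participatory premise itself: neither expert nor public judgment alone suffices.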
This study concludes that emerging technologies have faced ethical challenges and that ethical governance has evolved over time to
manage these technologies. The governance paradigm has gradually changed from scientific rationality to social rationality and
ultimately to a higher ethical morality. The trend of seeking higher levels of ethics and morality provides a rich theoretical underpinning
for the ethical governance of artificial intelligence (AI), which is a complex and comprehensive project that involves problem identi-
fication, path selection, and role configuration. Ethical problems in AI can also be identified in technology, value, innovation, and order
systems. In the four major systems, the basic patterns of ethical problems can become uncontrolled risks, behavioral disorders, and
ethical disorders. When considering the path selection, AI governance strategies such as ethical embedding, assessment, adaptation, and
construction should be implemented within the technology life cycle at the stages of research and development, design and
manufacturing, experimental promotion, and deployment and application, respectively. Looking at role configuration, multiple actors
should assume different roles, including providing ethical factual information, expertise, and analysis, as well as expressing ethical
emotions or providing ethical regulation tools under different governance strategies. This study provides a comprehensive discussion
regarding the practical applicability of AI ethical governance using the case of autonomous vehicles.

Acknowledgement

This research was supported by the National Natural Science Foundation of China (Grant no. 71790611), the National Key R&D
Program of China (Grant no. 2020AAA0105300), and the China Postdoctoral Science Foundation (Grant no. 2022T150363).

References

Ashok, M., Madan, R., Joha, A., Sivarajah, U., 2022. Ethical framework for artificial intelligence and digital technologies. Int. J. Inf. Manag. 62.
Baram, M.S., 1973. Technology assessment and social control. Jurimetrics Journal 14 (2), 79–99.
Doridot, F., Duquenoy, P., Goujon, P., Kurtdickson, A., Lavelle, S., Patrignani, N., et al., 2013. Ethical Governance of Emerging Technologies Development. IGI Global.
Friedman, B., Kahn, P., Borning, A., 2002. Value Sensitive Design: Theory and Methods. University of Washington technical report, pp. 2–12.
Grin, J., Grunwald, A., 2000. Vision Assessment: Shaping Technology in 21st Century Society. Springer Berlin Heidelberg.
Guston, D.H., Sarewitz, D., 2002. Real-time technology assessment. Technol. Soc. 24 (1), 93–109.
Ireni-Saban, L., Sherman, M., 2021. Ethical Governance of Artificial Intelligence in the Public Sector.
Maesschalck, M., 2017. Reflexive Governance for Research and Innovative Knowledge.
Michael, J.D., 2008. What's ELSI got to do with it? Bioethics and the Human Genome Project. New Genet. Soc. 27 (1), 1–6.
Rip, A., Schot, J., Misa, T.J., 1995. Constructive technology assessment: a new paradigm for managing technology in society. In: Managing Technology in Society.
The Approach of Constructive Technology Assessment. Pinter Publishers, pp. 1–12.
Sarewitz, D., 2011. Anticipatory Governance of Emerging Technologies. Springer Netherlands.
Siau, K., Wang, W., 2020. Artificial intelligence (AI) ethics: ethics of AI and ethical AI. J. Database Manag. 31 (2), 74–87.
Stirling, A., 2016. Precaution in the Governance of Technology. SPRU Working Paper Series.
Wynne, B., 2002. Risk and environment as legitimatory discourses of technology: reflexivity inside out? Curr. Sociol. 50 (3), 459–477.

