
AI Ethics: Software Practitioners’ and

Lawmakers’ Points of View


arXiv:2207.01493v1 [cs.CY] 30 Jun 2022

Arif Ali Khan¹, Muhammad Azeem Akbar², Muhammad Waseem*³, Mahdi Fahmideh⁴, Aakash Ahmad⁵, Peng Liang³, Mahmood Niazi⁶, and Pekka Abrahamsson⁷

¹University of Oulu, Finland
²Lappeenranta-Lahti University of Technology, Finland
³Wuhan University, China
⁴University of Southern Queensland, Australia
⁵University of Ha’il, Saudi Arabia
⁶King Fahd University of Petroleum and Minerals, Saudi Arabia
⁷University of Jyvaskyla, Finland

Abstract
Despite their commonly accepted usefulness, Artificial Intelligence (AI) technologies raise concerns about ethical reliability. Various guidelines, principles, and regulatory frameworks have been designed to ensure that AI technologies contribute to ethical well-being. However, the implications of AI ethics principles and guidelines are still being debated. To further explore the significance of AI ethics principles and the relevant challenges, we conducted an empirical survey of 99 AI practitioners and lawmakers from twenty countries across five continents. The study findings confirm that transparency, accountability, and privacy are the most critical AI ethics principles. On the other hand, lack of ethical knowledge, no legal frameworks, and lacking monitoring bodies are found to be the most common AI ethics challenges. The impact analysis of the challenges across AI ethics principles reveals that conflict in practice is a highly severe challenge. Our findings stimulate further research, especially on empowering existing capability maturity models to support the quality assessment of ethics-aware AI systems.

1 Introduction
Artificial Intelligence (AI) has become essential across a vast array of industries, including health, manufacturing, agriculture, and banking [1]. AI technologies have the potential to substantially transform society and offer various societal benefits, which are expected to arise from high-level productivity and efficiency. In line with this, the ethical guidelines presented by the European Commission’s High-Level Expert Group on AI (AI HLEG) highlight that [2]:

“AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.”
However, the promising benefits of AI technologies are tempered by worries that these complex and opaque systems might bring more social harm than good [1]. People have started to think beyond the operational capabilities of AI technologies and to investigate the ethical aspects of developing powerful and potentially life-consequential technologies. The US government and private companies cannot weigh the vital implications of decision-making systems in health, criminal justice, employment, and creditworthiness without ensuring that these systems are not intentionally or unintentionally coded with structural biases [1].
Concomitant with advances in AI systems, we witness the ethical failures of those systems. For example, the high rate of unsuccessful job applications decided by the Amazon recruitment system was later found to be biased against women applicants, which triggered interest in AI ethics [3]. The need for developing policies and principles that address the ethical aspects of AI systems is timely and critical. Ethical harm in AI systems may jeopardize human control, and these potential threats set the stage for the AI ethics discussion. AI systems are not only a technical effort; they also need to consider social, political, legal, and intellectual aspects. However, the current state of AI ethics is broadly unknown to the public, practitioners, policymakers, and lawmakers [4].
Broadly, an ethically aligned AI system should meet the following three components throughout its entire life cycle [2]: 1) the proposed AI system should comply with all applicable laws and regulations, 2) adhere to ethical principles and values, and 3) be technically and socially robust. To the best of our knowledge, no empirical study has been conducted that covers and explores the above three core components based on data collected from industrial practitioners and lawmakers. For instance, Vakkuri et al. [4] conducted a survey study to determine industrial perceptions based on only four AI ethics principles; Lu et al. [5] conducted interviews with researchers and practitioners to understand the implications of AI ethics principles and the motivation for rooting these principles in design practices. Similarly, Leikas et al. [6] mainly focused on AI ethics guidelines.

In this study, we provide insights by encapsulating the views and opinions of practitioners and lawmakers regarding AI ethics principles and challenges, based on survey data collected from 99 respondents across 20 different countries.

2 Background
AI ethics is generally classified under the umbrella of applied ethics, which mainly considers the ethical issues that arise from developing, deploying, and using AI systems. It focuses on how an AI system could raise concerns related to human autonomy, freedom in a democratic society, and quality of life. Ethical reflection across AI technologies could serve multiple societal purposes [2]. For instance, it can stimulate a focus on innovations that foster ethical values and bring collective well-being. Ethically aligned or trustworthy AI technologies can foster sustainable well-being in society by bringing prosperity, wealth maximization, and value creation [2].

It is vital to understand the development, deployment, and use of AI technologies to ensure that everyone can build a better future and live a thriving life in an AI-based world. However, the increasing popularity of AI systems has raised concerns about the reliability and impartiality of decision-making scenarios [2]. We need to ensure that the decision-making support of AI technologies follows an accountable process so that their actions are ethically aligned with human values that must not be compromised [2].
In this regard, different organizations and technology giants have established committees to draft AI ethics guidelines. Google and SAP presented guidelines and policies for developing ethically aligned AI systems [7]. Similarly, the Association for Computing Machinery (ACM), Access Now, and Amnesty International jointly proposed principles and guidelines to develop ethically mature AI systems [7]. In Europe, the independent High-Level Expert Group on Artificial Intelligence (AI HLEG) developed guidelines for promoting trustworthy AI [2]. The Ethically Aligned Design (EAD) guidelines presented by IEEE consist of a set of principles and recommendations that focus on the technical and ethical values of AI systems [8]. In addition, the joint ISO/IEC international standard committee proposed the ISO/IEC JTC 1/SC 42 standard, which covers the entire AI ecosystem, including trustworthiness, computational approaches, governance, standardization, and social concerns [9].
However, various researchers claim that the extant AI ethics guidelines and principles are not effectively adopted in industrial settings. McNamara et al. [10] conducted an empirical study to understand the influence of the ACM code of ethics on the software engineering decision-making process. Surprisingly, the study findings reveal no evidence that the ACM code of ethics regulates decision-making activities. Vakkuri et al. [11] conducted multiple interviews to understand the status of ethical practices in the AI industry. The study findings uncover the fact that various guidelines are available; however, their deployment in industrial domains is far from mature. The gap between AI ethics research and practice remains an ongoing challenge. To bridge this gap, we conducted an empirical study to understand the significance of AI ethics principles and challenges, and their impact, by encapsulating the views of AI practitioners and lawmakers.

3 Setting the Stage


Previously, we conducted a systematic literature review (SLR) to provide an in-depth overview of AI ethics challenges and principles [12]. This study aims to empirically validate the SLR findings based on the insights provided by AI practitioners and lawmakers. AI practitioners have higher ethical responsibilities compared to others: they often make the design decisions of complex autonomous systems with little ethical knowledge, and the magnitude of risks in AI systems makes them responsible for understanding ethical attributes. To achieve reliable outcomes, it is essential to know the practitioners’ understanding of AI ethics principles and challenges. Law resolves everyday conflicts and sustains order in social life; people consider law an information source as it impacts social norms and values [13]. The aim of considering the second population (lawmakers) is to understand the application of the law to AI ethics. The data collected from legislation personnel will help answer whether existing AI ethics principles are sufficient or whether innovative standards are needed [13].
The primary goal of this study is to understand the significance of the AI ethics principles, the challenges, and the severity of the challenges across the principles. We used industrial collaboration contacts to find AI practitioners and sent formal invitations to participate in this study. Moreover, various law forums across the world were contacted and requested to participate in the survey. The targeted populations were approached using social media networks, including LinkedIn, WeChat, ResearchGate, Facebook, and personal email addresses. The survey instrument consists of four core sections: 1) demographics, 2) AI ethics principles, 3) challenges, and 4) the challenges’ impact on the principles. The survey questionnaire also includes open-ended questions to elicit novel principles and challenges that were not identified during the SLR [12]. A Likert scale is used to evaluate the significance of each principle and challenge and to assess the severity level of the challenging factors. The survey instrument is structured in both English and Chinese. The software industry in China is flourishing like never before, with AI taking the front seat, and the country is home to some of the world’s leading technology giants, such as Huawei, Alibaba, Baidu, Tencent, and Xiaomi. However, collecting data from the Chinese industry can be challenging because of language barriers: Mandarin is the national and official language in China, unlike India, where English is commonly used for official purposes. Therefore, a Chinese version of the survey instrument was developed to cover this major portion of the targeted population. Both the English and Chinese versions of the survey instrument are available online for replication [14].

The questionnaire was piloted by inviting three external qualitative software engineering research experts. The experts’ suggestions were mainly related to the overall design and understandability of the survey questions. The suggested changes were incorporated, and the survey instrument was deployed online using Google Forms (English version) and the Tencent questionnaire platform (Chinese version). The data were collected from September 2021 to April 2022, and we received 107 responses. A manual review revealed that eight responses were incomplete, so we considered only 99 responses for the final data analysis.

4 Survey Results
Frequency analysis was performed to organize the descriptive data, as it is well suited to analyzing groups of variables with both numeric and ordinal data. In total, 99 respondents from 20 countries across 5 continents, with 9 roles and 10 different backgrounds, participated in the survey study (see Figure 1(a-c)). The organizational size (number of employees) of the survey participants mostly ranges from 50 to 249, which accounts for 28% of the total responses (see Figure 1(d)). Of all the respondents, the majority (48%) have 3-5 years of experience working on AI-focused projects as practitioners or lawmakers (see Figure 1(e)).
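The frequency analysis described above can be sketched in a few lines; the responses below are fabricated for illustration and are not the study’s dataset.

```python
from collections import Counter

# Hypothetical experience-band responses (not the study's actual data)
experience = ["3-5", "0-2", "3-5", "6-10", "3-5", "0-2", ">10", "3-5"]

# Frequency analysis: counts and percentages per category
counts = Counter(experience)
total = len(experience)
percentages = {band: round(100 * n / total, 1) for band, n in counts.items()}

for band, n in counts.most_common():
    print(f"{band} years: {n} ({percentages[band]}%)")
```

The same counting applies unchanged to ordinal Likert responses, which is why frequency analysis suits both numeric and ordinal survey variables.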
Participants were asked to explain their opinions about the perceived importance of ethics in the AI systems of their organization. The majority of the participants responded positively: 77% mentioned that their respective organizations consider ethical aspects in AI processes or develop policies for AI projects, 12% answered negatively, and 10% were not sure (see Figure 1(f)). We mapped the respondents’ roles across nine different categories using thematic mapping (see Figure 1(b)). The final results show that most of the respondents (29%) are classified in the law practitioner category. Similarly, the domains of the participants’ organizations are conceptually framed in 10 core categories, and the results reveal that most (19%) of the organizations are working on smart systems (see Figure 1(c)).

The survey responses are classified as average agree, neutral, and average disagree (see Figure 2(a-b)). We observed that, on average, ≥60% of the respondents positively confirmed the AI ethics principles and challenges identified in the SLR study [12]. For instance, one survey participant mentioned that:

“The listed AI ethics principles are comprehensive and extensive to cover various aspects of ethics in AI.”
Moreover, we selected the seven most frequently reported challenging factors and six principles discussed in our SLR study [12]. The aim is to investigate the severity impact of the seven challenges (i.e., lack of ethical knowledge, vague principles, highly general principles, conflict in practice, interpret principles differently, lack of technical understanding, and extra constraints) across the six AI ethics principles (i.e., transparency, privacy, accountability, fairness, autonomy, and explainability). The survey participants were asked to rate the severity impact on a Likert scale: short-term (insignificant, minor, moderate) and long-term (major, catastrophic) (see Figure 2(c)).
The results reveal that most challenges have long-term impacts on the principles (major and catastrophic). For example, the interpret principles differently challenge is likely to have a long-term impact (i.e., 50% major and 27% catastrophic) on the transparency principle.
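The short-term/long-term classification of severity ratings can be illustrated with a small sketch; the ratings below are fabricated, not taken from the survey data.

```python
# Hypothetical severity ratings for one challenge-principle pair
ratings = ["major", "catastrophic", "major", "moderate",
           "minor", "major", "insignificant", "catastrophic"]

# Grouping used in the survey: the lower three Likert levels are
# short-term, the upper two are long-term
SHORT_TERM = {"insignificant", "minor", "moderate"}
LONG_TERM = {"major", "catastrophic"}

long_term_pct = 100 * sum(r in LONG_TERM for r in ratings) / len(ratings)
short_term_pct = 100 * sum(r in SHORT_TERM for r in ratings) / len(ratings)
print(f"short-term: {short_term_pct}%  long-term: {long_term_pct}%")
```

Aggregating each challenge-principle pair this way yields the percentages reported in Figure 2(c), e.g. a "50% major, 27% catastrophic" pair gives a 77% long-term impact.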
Further, a non-parametric statistical analysis was performed to examine the significant differences between the two populations (AI practitioners and lawmakers). Because of word limits, the statistical discussion and dataset are provided online [14].
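The paper does not name the specific non-parametric test here; a common choice for comparing two independent groups of ordinal Likert data is the Mann-Whitney U test. The sketch below computes the U statistic from first principles, using fabricated 7-point agreement scores for the two respondent groups.

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: number of (a, b) pairs where a beats b,
    counting ties as 0.5."""
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in group_a for b in group_b)

# Hypothetical 7-point Likert scores (1 = strongly disagree,
# 7 = strongly agree) for one principle, split by respondent group
practitioners = [6, 7, 5, 6, 4, 7, 6, 5, 6, 7]
lawmakers = [5, 4, 6, 3, 5, 4, 5, 6, 4, 5]

u_prac = mann_whitney_u(practitioners, lawmakers)
u_law = mann_whitney_u(lawmakers, practitioners)
# Sanity check: the two U values always sum to len(a) * len(b)
print(u_prac, u_law)
```

In practice one would obtain the p-value from a statistics library rather than hand-rolling it; the point here is only the shape of the comparison between the two populations.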

5 Data Interpretation
The results illustrate that the majority of the survey participants agreed with the identified list of AI ethics principles (see Figure 2(a)). We noticed that 77.8% of survey respondents considered transparency the most significant AI ethics principle. This is an interesting observation, as transparency is equally confirmed as one of the seven essential requirements by AI HLEG [2] for realizing ‘trustworthy AI’. Transparency provides detailed explanations that make the logic of AI models and decision-making structures understandable to system stakeholders. Moreover, it deals with public perceptions and understanding of how AI systems work. Broadly, it is a societal and normative ideal of “openness”.
The second most significant principle for the survey participants was accountability (71.7%). It refers to the expectations or requirements that organizations or individuals need to meet throughout the lifecycle of an AI system. They should be accountable, according to their roles and the applicable regulatory frameworks, for the system’s design, development, deployment, and operation, by providing documentation on the decision-making process or conducting regular auditing with proper justification. Privacy is the third most frequently cited principle, supported by 69.7% of the survey participants. It refers to preventing harm to a fundamental right specifically affected by decision-making systems. Privacy demands data governance throughout the system lifecycle, covering data quality, integrity, application domain, access protocols, and the capability to process data in a way that safeguards privacy. It must be ensured that the data collected and manipulated by an AI system shall not be used unlawfully or to unfairly discriminate against human beings. For example, one of the respondents mentioned that:

“The privacy of hosted data used in AI applications and the risk of data breaches must be considered.”

[Figure 1 panels: a) Geo Distribution of Survey Participants; b) Professional Roles; c) Types of Software/Systems; d) Size of Organisation (number of employees); e) Years of Experience; f) AI Ethics.]

Figure 1: Demographic details
In general, the survey findings on AI ethics principles are in line with the widely adopted accountability, responsibility, and transparency (ART) framework [15] and the findings of an industrial empirical study conducted by Vakkuri et al. [4]. Both studies [4] [15] jointly considered transparency and accountability as core AI ethics principles, which is consistent with the findings of this survey. However, we noticed that privacy is ignored in both of those studies [4] [15], while it is placed as the third most significant principle in this survey. The reason might be that, as more and more AI systems move online, the significance of privacy and data protection is increasingly recognized. Presently, various countries have embarked on legislation to ensure the protection of data and privacy.
Further, the results reveal that the majority of the survey respondents (>60%) confirmed the identified challenging factors [12] (see Figure 2(b)). Lack of ethical knowledge is considered the most significant challenge by 81.8% of the survey participants. This indicates that knowledge of the ethical aspects of AI systems is largely ignored in industrial settings, and that there is a significant gap between research and practice in AI ethics. Various guidelines and policies devised by researchers and regulatory bodies discuss different ethical goals for AI systems. However, these goals have not been widely adopted in the industrial domain because of limited knowledge of how to scale them in practice. The findings agree with the industrial study conducted by Vakkuri et al. [11], which summarises its overall findings by stating that ethical aspects of AI systems are not particularly considered, mainly because of a lack of knowledge, awareness, and personal commitment. We noticed that no legal frameworks (69.7%) is ranked as the second most common challenge for considering ethics in the AI domain. The proliferation of AI technologies in high-risk areas has mounted pressure to design ethical and legal standards and frameworks to govern them. This highlights the nuances of the debate on AI law and lays the groundwork for a more inclusive AI governance framework. Such a framework shall focus on the most pertinent ethical issues raised by AI systems, the use of AI across industry and government organisations, and economic displacement (i.e., the ethical reply to the loss of jobs as a result of AI-based automation). The third most common challenging factor is lacking monitoring bodies, highlighted by 68.7% of the survey participants. Lacking monitoring bodies refers to the lack of regulatory oversight to assess ethics in AI systems. It raises the issue of empowering public bodies to monitor, audit,
and oversee the enforcement of ethical concerns in AI technologies by domain (e.g., health, transport, education). One survey respondent mentioned
that
“I believe it shall be mandatory for the industry to get standard approval from monitoring bodies to consider ethics in the development process of AI systems.”
Monitoring bodies extensively promote and observe ethical values in society and evaluate technology development associated with the ethical aspects of AI [2]. They would be tasked with advocating and defining responsibilities and developing rules, regulations, and practices for situations where a system takes decisions autonomously. The monitoring group should ensure “ethics by, in and for design”, as mentioned in the AI HLEG [2] guidelines.

Additionally, the survey participants elaborated on new challenging factors. For instance, one of the participants mentioned that:

“Implicit biases in AI algorithms, such as data discrimination and cognitive biases, could impact system transparency.”

Similarly, another respondent reported that:

“Biases in the AI system’s design might bring distress to a group of people or individuals.”

Moreover, one survey respondent explicitly considered the lack of tools for ethical transparency and AI biases as significant challenges to AI ethics. We noticed that AI bias is reported as the most common additional challenge. It will be interesting to further explore (i) the types of biases that might be embedded in AI algorithms, (ii) the causes of these biases, and (iii) corresponding countermeasures to minimize their negative impact on AI ethics.
Finally, the severity impact of the seven challenging factors was measured with respect to the six AI ethics principles (see Figure 2(c)). For the transparency principle, we noticed that the challenging factor interpret principles differently has significant long-term impacts, with 77% (i.e., 50% major and 27% catastrophic) of the survey participants agreeing. The interpretation of ethical concepts can vary across groups and individuals. For instance, practitioners might perceive transparency differently (more focused on technical aspects) than law and policymakers, who have broader social concerns. Furthermore, lack of ethical knowledge has a short-term impact on the transparency principle, as evidenced by the survey findings supported by 52% (7% insignificant, 25% minor, and 20% moderate) of responses. A lack of knowledge could be remedied quickly by attaining knowledge, understanding, and awareness of transparency concepts.
Conflict in practice is deemed the most significant challenge to the privacy principle: 74% (i.e., 53% major and 21% catastrophic) of survey
respondents considered it a long-term severe challenge. Various groups, or-
ganizations, and individuals might have opinion conflicts associated with
privacy in AI ethics. It is critical to interpret and understand privacy conflicts in practice. We noticed that 82% of survey participants considered the challenging factor extra constraints the most severe (long-term) challenge for both the accountability and fairness principles. Situational constraints, including organizational politics, lack of information, and management interruption, could interfere with accountability and fairness measures, which could negatively impact employees’ motivation and interest in explicitly considering ethical aspects across AI activities. Interestingly, 82% of the survey respondents considered conflict in practice the most common (long-term) challenge for the autonomy and explainability principles.
Overall, we interpret conflict in practice as the most severe challenge; its average occurrence is >77% across all the principles. This suggests proposing specific solutions that focus on tackling opinion conflicts regarding the real-world implications of AI ethics principles. The results further reveal that lack of ethical knowledge has, on average, a 38% short-term impact across the selected AI ethics principles. This knowledge gap could be addressed by conducting training sessions, workshops, and certification, and by encouraging social awareness of AI ethics. Knowledge increases the likelihood of AI ethics succeeding and being accepted in the best practices of the domain.

[Figure 2 panels: a) AI Ethics Principles; b) AI Ethics Challenges; c) Impacts of AI Ethics Challenges on AI Ethics Principles, rated on a Likert severity scale (insignificant, minor, moderate = short-term; major, catastrophic = long-term).]

Figure 2: AI Ethics Principles and Challenges
6 Lessons Learned
The study findings result in the following lessons learned:
1. Emerging Roles: Besides practitioners, the role of policy- and lawmakers is also important in defining ethical solutions for AI-based systems. To the best of our knowledge, this study is the first effort to encapsulate the views and opinions of both populations.
2. Confirmatory Findings: This study empirically confirms the AI
ethics principles and challenging factors discussed in our published SLR
study [12].
3. Adherence to AI principles: The most common principles (e.g.,
transparency, privacy, accountability) and challenges (e.g., lack of eth-
ical knowledge, no legal frameworks, lacking monitoring bodies) must
be carefully realized in AI ethics.
4. Risk-aware AI ethics principle design: The challenging factors mainly have long-term severity impacts across the AI ethics principles. This opens a new research call to propose solutions for minimizing or mitigating the impacts of the challenging factors.
The given catalogue (see Figure 2) of principles and challenging factors could be used as a guideline for defining ethics in the AI domain. Moreover, the catalogue is a starting point for further research on AI ethics. It is essential to mention that the identified principles and challenging factors only reflect the perceptions of 99 practitioners and lawmakers in 20 countries. Deeper and more comprehensive empirical investigations with wider groups of practitioners would be useful to generalize the study findings globally. This, together with proposing a robust solution for integrating ethical aspects into AI design and process flows, will be part of our future work.

References
[1] C. Pazzanese, “Ethical concerns mount as AI takes
bigger decision-making role in more industries.”
https://news.harvard.edu/gazette/story/2020/10/
ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
accessed on 2022-04-05.

[2] High-Level Expert Group on Artificial Intelligence, “Ethics guidelines
for trustworthy AI.” https://www.aepd.es/sites/default/files/
2019-12/ai-ethics-guidelines.pdf. accessed on 2022-04-06.
[3] J. Dastin, “Amazon scraps secret AI recruiting tool that showed
bias against women.” https://www.reuters.com/article/
-amazon-com-jobs-automation-insight-idUSKCN1MK08G. accessed
on 2022-04-05.
[4] V. Vakkuri, K.-K. Kemell, J. Kultanen, and P. Abrahamsson, “The cur-
rent state of industrial practice in artificial intelligence ethics,” IEEE
Software, vol. 37, no. 4, pp. 50–57, 2020.
[5] Q. Lu, L. Zhu, X. Xu, J. Whittle, D. Douglas, and C. Sanderson,
“Software engineering for responsible AI: An empirical study and op-
erationalised patterns,” in Proc. of the 44th IEEE/ACM International
Conference on Software Engineering: Software Engineering in Practice
(ICSE-SEIP), pp. 241–242, IEEE, 2022.
[6] J. Leikas, R. Koivisto, and N. Gotcheva, “Ethical framework for de-
signing autonomous intelligent systems,” Journal of Open Innovation:
Technology, Market, and Complexity, vol. 5, no. 1, p. 18, 2019.
[7] A. Jobin, M. Ienca, and E. Vayena, “The global landscape of AI ethics
guidelines,” Nature Machine Intelligence, vol. 1, no. 9, pp. 389–399,
2019.
[8] IEEE Global Initiative on Ethics of Autonomous and Intelligent Sys-
tems, “Ethically aligned design: A vision for prioritizing human well-
being with autonomous and intelligent systems.” http://standards.
ieee.org/develop/indconn/ec/autonomous_systems.html, Version
2. IEEE, 2017. accessed on 2022-04-06.
[9] ISO, “ISO/IEC JTC 1/SC 42 artificial intelligence.” https://www.iso.
org/committee/6794475.html, 2017. accessed on 2022-02-26.
[10] A. McNamara, J. Smith, and E. Murphy-Hill, “Does ACM’s code of
ethics change ethical decision making in software development?,” in
Proc. of the 26th ACM Joint Meeting on European Software Engineering
Conference and Symposium on the Foundations of Software Engineering
(ESEC/FSE), pp. 729–733, ACM, 2018.

[11] V. Vakkuri, K.-K. Kemell, M. Jantunen, and P. Abrahamsson, “This is
just a prototype: How ethics are ignored in software startup-like environ-
ments,” in Proc. of the 21st International Conference on Agile Software
Development (XP), pp. 195–210, Springer, 2020.

[12] A. A. Khan, S. Badshah, P. Liang, M. Waseem, B. Khan, A. Ahmad,
M. Fahmideh, M. Niazi, and M. A. Akbar, “Ethics of AI: A system-
atic literature review of principles and challenges,” in Proc. of the 26th
International Conference on Evaluation and Assessment in Software En-
gineering (EASE), pp. 383–392, ACM, 2022.

[13] A. Wilk, “Teaching AI, ethics, law and policy,” arXiv preprint
arXiv:1904.12470, 2019.

[14] A. A. Khan, M. Azeem Akbar, M. Waseem, M. Fahmideh, P. Liang,
A. Ahmad, M. Niazi, and P. Abrahamsson, “Replication pack-
age for the paper: AI ethics: Software practitioners’ and law-
makers’ points of view.” https://drive.google.com/drive/folders/
1f9NX6hCRyUa0l-GL0yHqoNPN9Yl68pXF?usp=sharing. accessed on
2022-06-26.

[15] V. Dignum, “Responsible autonomy,” arXiv preprint arXiv:1706.02513,
2017.
