The Role of Professional Norms in The Governance of Artificial Intelligence
Abstract
The development, deployment, and use of artificial intelligence (AI) systems and AI-based
technologies are governed by an increasingly complex set of legal, ethical, social, and other
types of norms, which surface from various sources and contribute to what might be described as
a patchwork of norms. Among this complicated landscape of modes of governance, this chapter
zeroes in on the extent to which professional norms — and specifically norms in the
development phase as expressed in formal documents such as codes of ethics and ethical
principles — may serve as a reservoir of norms and accountability mechanisms to include within
the existing governance toolbox. It explores the interface between AI and “the profession,” with
an emphasis on new institutional arrangements and sources of norms that arise within the
profession, such as corporate principles and employee demands. This chapter discusses trends of
this fluctuating ecosystem through a suggested analytical framework for thinking about these
professional norms of AI development within the broader context of the AI lifecycle, and for
hypothesizing about the future possibilities of professional norms within discussions of AI
governance.
I. Introduction
The development, deployment, and use of artificial intelligence (AI) systems and AI-
based technologies are governed by an increasingly complex set of legal, ethical, social, and
other types of norms. These norms stem from governments, industry decision-makers, and
professional and trade organizations, and may also arise from the developers of AI-based systems,
among others. The AI governance toolbox is thus compiled of a patchwork of norms and other
modes of governance which have yet to be assembled in the context of the lifecycle of an AI-
professional norms are context-sensitive, and on their own have limited governance effects.
However, professional norms can play a productive role in concert with other AI governance
Here we explore the interface between AI and “the profession,” with an emphasis on new
institutional arrangements and sources of norms that arise within the profession as AI integrates
into many parts of society and challenges traditional conceptions of the profession. We find that
this trend of challenging tradition is mirrored by the professional norms of AI, as we see
emerging trends and norms stemming from new areas outside of professional and trade
organizations. In addition, we suggest that we may be seeing the early stages of AI professions,
broadly defined.
In this chapter we aim to encapsulate the trends of this fluctuating ecosystem through a
suggested analytical framework for thinking about these professional norms of AI development
within the broader context of the AI lifecycle, and for hypothesizing about the future possibilities of professional norms within discussions of AI governance.
Examining professional norms and ethics as a potential source and mode of governance
of AI triggers fundamental questions about definitions and concepts: What is AI? What is a
profession, and what are its norms? Each of these concepts finds itself in a state of flux. The meaning of AI remains amorphous, at least from a multidisciplinary
perspective, as methods and techniques evolve and contexts of application change. The very
notion of what a profession is and what professionalism stands for has shifted dramatically over
time, particularly in knowledge economies. 1 And identifying and understanding the ethical norms of a given profession is itself a challenge.
Layered on top of each other, these three concepts – the profession, professional norms, and AI –
create what one might describe as a perfect definitional storm, with an extraordinarily rich set of open questions.
In light of this complexity and uncertainty, this chapter takes a modest, pragmatic
approach and offers selected observations based on the work of the authors in the context of a
larger research effort on the ethics and governance of AI. The following initial triangulation of
the core elements at play – AI, professions, and professional norms – frames our subsequent
observations.
One of the few areas of consensus among scholars is that there is no universally agreed
upon definition for AI. 2 The working definition we use in this chapter, however, encapsulates the
complexity and contextuality of AI, including its history and future trajectory, and the
interdisciplinary stakeholders and researchers involved in AI. 3 For present purposes, more relevant than a precise definition are two characteristics of AI systems: their
1
Julia Evetts, “The Sociological Analysis of Professionalism: Occupational Change in the
Modern World,” International Sociology 18, no. 2 (June 2003): 395–415,
https://doi.org/10.1177/0268580903018002005.
2
M.C. Elish and Tim Hwang, “Introduction,” in An AI Pattern Language (New York, NY: Data
and Society, 2016), 1–15.
3
We draw our definition of Artificial Intelligence from Stanford’s AI100 report. Peter Stone et
al., “Artificial Intelligence and Life in 2030,” One Hundred Year Study on Artificial Intelligence:
Report of the 2015-2016 Study Panel (Stanford, CA: Stanford University, September 2016),
https://ai100.stanford.edu/2016-report.
increased pervasiveness and impact on human autonomy and agency. Many reports document
how AI-based technologies increasingly penetrate areas such as transportation, health, education,
justice, news and entertainment, and commerce, to name just a few areas of life. The increased
pervasiveness highlights the importance of the context for which AI systems are developed and
in which they are embedded when considering normative questions and the role of professional
norms aimed at addressing them. Perhaps the most fundamental attribute of many AI systems,
however, is the varying degrees to which they affect human autonomy and shift it away from
human beings towards machines, with potentially deep consequences also for concepts such as human responsibility and accountability.
Many of the changes associated with AI that will shape the nature of the profession can
be seen as amplifications of tectonic shifts that have been going on for some time: Over the past
few decades, a large body of multidisciplinary research has documented and analyzed the
transformations of the concepts of the profession and professionalism, including the seemingly
paradoxical erosion of traditional liberal professions (e.g., lawyers and doctors) on the one hand
and the growing appeal of these concepts across many other occupations on the other. 4 The
expanding application of “professions” across occupational groups, contexts, and social systems
has a direct effect on the extent to which the concept of the profession – and its set of norms and
enforcement regimes – has the potential to play a role in the governance of AI.
This expansion suggests that professions and associated norms might be increasingly
relevant as governance mechanisms of AI, particularly when it comes to the use of AI systems
by professionals in different social contexts. A paradigmatic example is a physician who uses an AI-based diagnostic system, or a judge who relies on an AI-based risk assessment tool when
4
Evetts, “Sociological Analysis,” 396.
making decisions about bail or sentencing. In these instances, professional norms and ethics
have the potential to help govern the use of the respective AI-based systems. Perhaps even more
importantly, the conceptual shift away from traditional or “pure” professions towards what one
scholar calls “mixed-up” and other forms of professionalism 5 opens the door to examine how
new and evolving non-traditional occupations existing alongside professions engaged in the
development of AI can help fill the reservoir of norms and enrich the governance toolkit. This
particular perspective, with a focus on professions on the development side of AI, will guide the remainder of this chapter.
Institutional arrangements are shifting, too. In the case of traditional professions, control over
professional practices was exercised through norms created and administered by professional
associations. In recent times, not only have professional associations proliferated in concert with
the expansion of professionalism, but new institutional actors have also emerged that shape the evolutionary path of professional norms and the ways in which they are
enforced among professionals. This latter trend is also evident when looking at professional
norms and ethics in the context of AI, where various non-traditional players – including NGOs
and companies – are engaged in norm creation and, to a lesser extent, administration. Despite
changes in the institutional set-up and governance of professional norms, the types of issues
addressed by professional norms have remained largely stable by focusing on the regulation of
the behavior of the professionals themselves and the impact of their products and services on
society. Similarly, many of the ethical questions addressed by norms of the profession have remained largely the same. Yet AI might also
5
Mirko Noordegraaf, “From ‘Pure’ to ‘Hybrid’ Professionalism,” Administration & Society 39,
no. 6 (October 2007): 761–85, https://doi.org/10.1177/0095399707304434.
be a driver of deeper changes: professional norms and ethics in the age of AI might not only
address the traditional questions, but also the future effects of the increasingly autonomous
systems that the profession creates – including feedback effects on the profession itself.
III. AI as Profession(s)?
AI-based technologies are often highly complex systems that require the collaboration of
people with various types of knowledge, skills, and judgment across the different phases of AI-
system creation – including design, development, and testing. Consider autonomous vehicles. These
cars need designers, computer scientists, engineers, software developers, policymakers, legal
experts, business representatives, among others, who work together to conceptualize and build
self-driving cars and bring them to the street. Depending on geography, culture, and context,
several of these activities involved in the creation of AI systems might be carried out by people
that not only represent a discipline, but also belong to an occupation – some of which (self-)identify as a profession. For instance, many people working on the technical side of AI
development are members of professional organizations such as The Institute of Electrical and
Electronics Engineers (IEEE). 7 However, determining what types of activities fall under a given profession is not straightforward. Debates over the scope of
“engineering” are illustrative: an in-depth analysis suggests the lines between engineering and adjacent occupations are blurred. 8 Moreover,
6
Paula Boddington, Towards a Code of Ethics for Artificial Intelligence, 1st ed., Artificial
Intelligence: Foundations, Theory, and Algorithms (Cham, SUI: Springer International
Publishing, 2017).
7
Boddington, Towards a Code, 59.
individuals outside of established disciplines, occupations, and professions might also be involved
in the development of AI-based technologies. As one scholar notes, AI development work “can be
done by those working entirely outside the framework of any professional accreditation” 9 – and, as accounts of DIY tinkerers suggest, often is. 10
Beyond applying the existing concept of the profession to the development of AI systems, there is also the possibility of the
emergence of what might be labeled “AI professions.” A few developments might be early
indicators of the birth of a new profession, despite the aforementioned definitional
ambiguities and other uncertainties. First, what constitutes a profession may emerge from – and
even be defined in terms of – one’s identity and sense of self, or belonging. 11 Expressions of this
identity include formal memberships in professional organizations and also (in the incubation
phase of a profession) more informal, but in some ways constitutive manifestations such as
annual conferences, meetings, and working groups. Indicators of the emerging identity of people working in AI include long-standing venues such as
the Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence. Academic
8
Michael Davis, “Engineering as Profession: Some Methodological Problems in Its Study,” in
Engineering Identities, Epistemologies and Values, vol. 2, Engineering Education and Practice in
Context (Cham, SUI: Springer International Publishing, 2015), 65–79.
9
Boddington, Towards a Code, 31.
10
Tom Simonite, “The DIY Tinkerers Harnessing the Power of Artificial Intelligence,” Wired,
November 13, 2018, https://www.wired.com/story/diy-tinkerers-artificial-intelligence-smart-
tech/.
11
Brianna B Caza and Stephanie Creary, “The Construction of Professional Identity,” in
Perspectives on Contemporary Professional Work: Challenges and Experiences (Cheltenham,
UK: Edward Elgar, 2016), 259–285.
developments, including rapidly growing professorships with direct reference to AI and its
methods at universities around the world, are another sign of professionalization of AI.
Second, there is a growing movement toward developing principles specifically for ethical AI, both from tech companies and other leading
organizations. From approximately early 2018 until the time of writing, individual and powerful
technology companies have been publishing their own AI principles; Microsoft, for example, published a book that includes its AI principles. 12 It is too early to tell whether
this is a sustained trend, but it is a noteworthy development in the landscape of ethical norms for AI.
Concurrently, initiatives for ethical AI principles are stemming from third party
organizations. Prominent examples include the forthcoming principles from the OECD’s
Committee on Digital Economy Policy, 13 the ethics guidelines from the European Commission’s High-Level Expert Group on Artificial Intelligence, 14 the Toronto Declaration
from Access Now and Amnesty International, 15 and the “Universal Guidelines for Artificial Intelligence” from The Public Voice. 16 Many of these documents are aimed at all stakeholders of
12
Microsoft, The Future Computed (Redmond, WA: Microsoft Corporation, 2018),
https://1gew6o3qn6vx9kp3s42ge0y1-wpengine.netdna-ssl.com/wp-
content/uploads/2018/02/The-Future-Computed_2.8.18.pdf.
13
“OECD Moves Forward on Developing Guidelines for Artificial Intelligence (AI),” OECD,
February 20, 2019, http://www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-
guidelines-for-artificial-intelligence.htm.
14
“Ethics Guidelines for Trustworthy AI.” (European Commission High-Level Expert Group on
Artificial Intelligence, April 8, 2019),
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
15
“The Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in
Machine Learning Systems,” Access Now, May 16, 2018, https://www.accessnow.org/the-
AI, implying a corresponding group of addressees that are defined by their involvement in the
production and use of AI. For example, the report from the European Commission AI HLEG
states that “these guidelines are addressed to all AI stakeholders designing, developing,
deploying, implementing, using or being affected by AI.” 17 Given the expanding dynamics of the profession and professional norms, and given the range of fields with their own ethical codes – as illustrated by the Illinois
Institute of Technology’s Ethics Codes Collection, 18 which spans communications, computer engineering, finance, law, media, and so forth – the emergence of “AI
professions” and associated codes of ethics may involve complex organizational struggles as seen through history: At the birth of modern medical
ethics, which was at the forefront of professional ethics, there was more at stake than merely
individuals and their professional work. The first code of medical ethics was born out of outrage
over a crisis in 1792 in Manchester, England, in which a hospital refused to accept patients
during an epidemic because of disagreement among staff. After the crisis the hospital hired
Thomas Percival, an esteemed doctor and philosopher, to create the code of conduct. 19 Viewed
from this angle, open protests by employees of leading AI companies – such as Microsoft 20 and Google 21 – might
toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-
learning-systems/.
16
“Universal Guidelines for Artificial Intelligence,” The Public Voice, October 23, 2018,
https://thepublicvoice.org/ai-universal-guidelines/.
17
“Ethics Guidelines for Trustworthy AI,” 5.
18
“The Ethics Codes Collection,” The Ethics Codes Collection, accessed March 8, 2019,
http://ethicscodescollection.org/.
19
Robert Baker, “Codes of Ethics: Some History,” Perspectives on the Profession 19, no. 1 (Fall
1999): 3. Accessed March 8, 2019, http://ethics.iit.edu/perspective/v19n1%20perspective.pdf.
20
Sheera Frenkel, “Microsoft Employees Protest Work With ICE, as Tech Industry Mobilizes
Over Immigration,” The New York Times, June 19, 2018,
https://www.nytimes.com/2018/06/19/technology/tech-companies-immigration-border.html.
be precursors of such moments of organizational crisis – which in some cases have already led to the adoption of corporate AI principles. 22 These
instances demonstrate that professional ethics become increasingly important in the presence of
highly complex situations with implications for society and for the public, from the medical field of the eighteenth century to AI development today.
The observations above suggest that the application of the concepts of the profession and
professionalism in the context of AI is still in flux. Further complicating the scope of
professionalism with AI, the professional norms themselves are situated within various
contexts 23 and they interact with other explicit and implicit sources of norms. 24 In corporate
settings, for instance, these norms may also interact with extant frameworks for normative
business ethics. The trends sketched above simultaneously build upon well-established ground
and familiar territory (e.g., the earlier case of engineering), and are also more novel in terms of
the intricate assemblage of activities, disciplines, occupations, and professions involved in the
development of (typically) complex AI systems. The current ambiguity and diversity of possible
perspectives is also reflected when looking for norms that might be relevant at the nexus of AI and the professions. Two observations seem particularly
21
“We Are Google Employees. Google Must Drop Dragonfly,” Medium (blog), November 27,
2018, https://medium.com/@googlersagainstdragonfly/we-are-google-employees-google-must-
drop-dragonfly-4c8a30c5e5eb.
22
Devin Coldewey, “Google’s New ‘AI Principles’ Forbid Its Use in Weapons and Human
Rights Violations,” TechCrunch, June 7, 2018, https://techcrunch.com/2018/06/07/googles-new-
ai-principles-forbid-its-use-in-weapons-and-human-rights-violations/?guccounter=2.
23
Boddington, Towards a Code, 48-53.
24
See, e.g., Andrew Abbott, “Professional Ethics,” American Journal of Sociology 88, no. 5
(March 1983): 855–885. Accessed March 7, 2019, http://www.jstor.org/stable/2779443.
relevant: Given the mélange of (quasi-)professional activities and actors involved, it is not
surprising that the sources of norms transcend the set-up of professional associations that played
the decisive role in the context of the traditional professions. Furthermore, the dynamic nature of
the state of play suggests that relevant norms sometimes come in the gestalt of a “code of ethics,” but in other
cases, depending on context, might be less structured and more informal, at times even
implicit in the form of normative routines of “everyday professional life.” 25 The following
examples provide further illustration of the different types of norms that might be considered.
There is a long tradition of trade organizations and professional associations coming up with their own principles, such as the
Association for Computing Machinery (ACM) and the IEEE. Now, both associations are
responding to the pervasiveness and importance of AI. The IEEE embarked on a Global
Initiative on Ethics of Autonomous and Intelligent Systems 26 and as part of this initiative
published Ethically Aligned Design, a resource for a variety of stakeholders in AI to help ground
the development and deployment of AI-based technologies in ethical principles including human
rights, well-being, transparency, accountability, among others. The report discusses various
challenges and issues that surface at the interface of these principles and the design of AI-based
systems, and offers recommendations for ensuring ethics remain at the core of these issues and at the forefront of development – from drawing on expertise in
25
Abbott, “Professional Ethics,” 856.
26
“The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems,” IEEE
Standards Association, accessed March 7, 2019, https://standards.ieee.org/industry-
connections/ec/autonomous-systems.html.
other fields, to ensuring developers fully understand the ethical implications of the technology. 27
ACM, in turn, revised its Code of Ethics and Professional Conduct in response to
“significant advances in computing technology,” which encompasses AI. 28 The ACM Code is
written for “computing professionals,” a term used in a very broad sense that encompasses
many of the people working on AI-based systems. 29 ACM explicitly links its work to AI
through its updated code of ethics. In addressing the potential risks associated with computing, the
ACM code has an emphasis on machine learning; Principle 2.5 on evaluating risks states that
“extraordinary care should be taken to identify and mitigate potential risks in machine learning
systems.” 30
While the ACM Code of Ethics would cover the technical systems in terms of the
algorithms and software, there is movement towards investigating the potential normative
standards for data and data practices. These conversations are critical, as data is central to the
development of AI. These norms exist within a transitional category of norms shaped by newer actors and evolving data practices.
There are several bodies, including groups within the European Commission 31 and
research organizations such as the Association of Internet Researchers (AoIR), that are coming up with guidance for ethical data practices. The European Commission’s
27
Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and
Intelligent Systems, 1st ed. (IEEE, 2019), https://ethicsinaction.ieee.org/#read.
28
“World’s Largest Computing Association Affirms Obligation of Computing Professionals to
Use Skills for Benefit of Society,” Association for Computing Machinery, July 17, 2018,
https://www.acm.org/media-center/2018/july/acm-updates-code-of-ethics.
29
“ACM Code of Ethics and Professional Conduct,” Association for Computing Machinery, July
17, 2019, https://www.acm.org/code-of-ethics.
30
“ACM Code.”
31
“Ethics and Data Protection” (European Commission, November 14, 2018),
http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/ethics/h2020_hi_ethics
-data-protection_en.pdf.
report is research-focused and is rooted in protecting data subjects and in abiding by laws
regarding the transfer of data. 32 Further, a paper from the AoIR addresses both Internet
researchers and a wider audience, and discusses similar ethical issues through open-ended
questions and considerations rather than formal rules. 33 In contrast, a multidisciplinary group of
influential scholars published “rules for responsible big data research,” which calls for each
company and organization to develop their own code of conduct for data. 34 These publications
demonstrate a source of norms that emerges in response to the need and demand for guidance for
working with big data, though they are not published as ethical codes.
The professional norms discussed above exemplify how the norms of AI are dynamic and
are pieced together from various sources in traditional and transitional ways. We are also seeing
the emergence of new, forward-looking sources of norms. Examples of these emerging sources
are interactive resources from think tanks and articulations of norms from employees themselves.
One such example is Ethical OS, a toolkit with questions and considerations for a broad audience of technology makers, aimed at
thinking about the future and potential unforeseen risks with the deployment of technological systems.
While not aimed specifically at AI, the toolkit frames how one should approach thinking about
new technological systems and their features by offering step-by-step instructions and examples. 35 We are
32
“Ethics and Data Protection,” 10-12, 18-19.
33
Annette Markham and Elizabeth Buchanan, “Ethical Decision-Making and Internet Research:
Recommendations from the AOIR Ethics Committee” (Association of Internet Researchers,
December 2012), https://aoir.org/reports/ethics2.pdf.
34
Matthew Zook et al., “Ten Simple Rules for Responsible Big Data Research,” PLOS
Computational Biology 13, no. 3 (March 30, 2017): 1–10,
https://doi.org/10.1371/journal.pcbi.1005399.
also witnessing norms surfacing from the “bottom up” within the professions themselves:
Employees of tech companies, particularly in the U.S., and specifically at Google and Amazon,
are starting to speak up – in a very public way – about their discontent with technology used for military 36 and surveillance 37 purposes.
We have seen that there are different types of norms that may apply, with varying degrees
of applicability, to professionals in the business of developing AI. Some of them follow tradition
and are enacted by trade organizations and professional associations, while others are formulated
by companies themselves, and still others stem from the bottom up, such as employee
protests. These norms need to be evaluated and analyzed in greater depth, and research is
emerging to fill this gap. 38 Against the backdrop of norms discussed in this section, we offer four
broad observations that surface from our exploration of professional norms, which include
similarities to traditional norms, possibly newer elements, and larger challenges for AI ethics in
the future.
The norms presented in this chapter adhere to various obligations of professions, which
include commitments to society, to the employer, to clients, to colleagues, and to the profession itself. 39 Professional codes have also been categorized by their functions as
35
“Ethical OS: A Guide to Anticipating the Future Impact of Today’s Technology,” Ethical OS,
August 7, 2018, https://ethicalos.org/.
36
Daisuke Wakabayashi and Scott Shane, “Google Will Not Renew Pentagon Contract That
Upset Employees,” The New York Times, June 1, 2018,
https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
37
James Vincent, “Amazon Employees Protest Sale of Facial Recognition Software to Police,”
The Verge, June 22, 2018, https://www.theverge.com/2018/6/22/17492106/amazon-ice-facial-
recognition-internal-letter-protest.
38
Daniel Greene, Anna L Hoffman, and Luke Stark, “Better, Nicer, Clearer, Fairer: A Critical
Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning,” in
Proceedings of Hawaii International Conference on System Sciences (Hawaii International
Conference on System Sciences, Maui, Hawaii, 2019),
https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf.
aspirational, educational, and regulatory. 40 With an eye towards the potential governance effects
of norms, we can add another category, “technical norms,” which focuses on the development
aspect of AI-based systems. These categories are not mutually exclusive and may be merged
together within a professional code. 41 Taken together, they form the normative backbone of what might eventually become the AI professions.
Norms set forth in corporate codes, such as those of Google, 42 Microsoft, 43 and SAP, 44 tend to
emphasize the company’s obligation to society and public well-being and appear to be more aspirational in
nature. Codes from professional associations like the IEEE 45 and ACM, 46 in contrast, also
carry regulatory functions tied to membership in those associations and are focused on development and technical norms. Microsoft, for example,
elucidates its ethical norms in its book by explaining the principles and the goals of its
work, reflecting both educational and aspirational types of codes. 47 Microsoft’s principles refer
also to technical norms, which are further articulated through guidelines for developers working on conversational AI. 48
39
Effy Oz, “Ethical Standards for Computer Professionals: A Comparative Analysis of Four
Major Codes,” Journal of Business Ethics 12, no. 9 (1993): 709–726.
40
Mark S Frankel, “Professional Codes: Why, How, and with What Impact?,” Journal of
Business Ethics 8, no. 2/3 (1989): 109–115. Accessed March 7, 2019,
http://www.jstor.org/stable/25071878.
41
Frankel, "Professional Codes,” 111.
42
Sundar Pichai, “AI at Google: Our Principles,” The Keyword (blog), June 7, 2018,
www.blog.google/technology/ai/ai-principles/.
43
Microsoft, Future Computed, 51-84.
44
Corinna Machmeier, “SAP’s Guiding Principles for Artificial Intelligence,” SAP, September
18, 2018, https://news.sap.com/2018/09/sap-guiding-principles-for-artificial-intelligence/.
45
“IEEE Code of Ethics,” IEEE, accessed April 3, 2019,
https://www.ieee.org/about/corporate/governance/p7-8.html.
46
"ACM Code.”
47
Microsoft, Future Computed, 51-84.
Microsoft explicitly notes that the suggestions are “for the most part not hard-and-fast rules.” 49
This example illustrates how codes of ethics might interact with other manifestations of
professional norms; the resulting governance effects will depend not only on each of these
elements and their nature – an industry-wide technical norm might have more weight than a single company’s aspirational guideline – but also on how these elements interact.
The content of the codes and principles related to AI, and the norms described within
them, resemble codes from other industries, 50 and serve in essence familiar functions of
professional codes. 51 However, the latest generation of professional norms highlights some
interesting non-traditional features. For instance, norm addressees seem to shift gradually from a narrowly defined professional membership towards a more
inclusive, albeit ambiguous, group of professionals involved in the development of AI. The
ACM Code of Ethics exemplifies this change between its 1992 code 52 and its 2018 code. 53
Interestingly, the addressees of both Google’s 54 and SAP’s principles 55 are not stated. Of note,
however, is that both codes include principles that draw attention to the responsibility of
developers specifically. SAP indicatively refers to “our technical teams” within a principle, 56
leaving open the question of who is meant to be held accountable to the set of norms overall.
48
“Responsible Bots: 10 Guidelines for Developers of Conversational AI,” Microsoft,
November 4, 2018, https://www.microsoft.com/en-
us/research/uploads/prod/2018/11/Bot_Guidelines_Nov_2018.pdf.
49
“Responsible Bots,” 1.
50
e.g., Oz, “Ethical Standards,” 720-724.
51
Frankel, "Professional Codes,” 111-112.
52
Ronald E Anderson, “ACM Code of Ethics and Professional Conduct,” Communications of the
ACM, May 1992, 94–99. Accessed April 1, 2019, https://dl.acm.org/citation.cfm?id=129885.
53
"ACM Code.”
54
Pichai, "AI at Google."
55
Machmeier, "SAP’s Guiding Principles.”
56
Machmeier, "SAP’s Guiding Principles.”
Another non-traditional feature is the effort to communicate norms and help translate them into practice. IEEE’s Ethically Aligned Design is illustrative by
making ethical norms accessible for a wide range of stakeholders, including developers and the
public, and providing guidance for how ethics can be embedded into AI-based systems. 57 IEEE
turned the report into an educational course for technical professionals of varying
backgrounds and launched working groups that focus on the challenges of integrating ethics into the design of such systems. 58
The norms in this section continue to struggle with broader social issues and risks
associated with AI, 59 such as the autonomous behavior of the AI systems they create. The norms
focus heavily on the behavior of the professionals who are creating AI-based systems and not on
the potential behavior of autonomous systems. In a few instances, however, some professional
norms come close to addressing these novel types of foreseeable challenges. Microsoft, for
example, asks in describing its principles, “How do we not lose control of our machines as they
become increasingly intelligent and powerful?” 60 ACM’s Code of Ethics similarly states that
when the future risk of the technology is uncertain, there must be “frequent reassessment” of the
system. 61 How to address the behavior of increasingly autonomous systems remains an open question within the professional norms approach to AI governance worthy of future study, and one that early scholarship has begun to take up. 62
V. Governance Effects
57
Ethically Aligned Design.
58
Ethically Aligned Design, 283-284.
59
Boddington, Towards a Code, 49.
60
Microsoft, Future Computed, 56.
61
"ACM Code.”
62
Boddington, Towards a Code, 24.
When evaluating professional norms as a mode of AI governance, it is helpful to distinguish different types of governance effects of such norms. There may be direct effects and indirect
effects; there may be weak effects and strong effects; and there may even be undesirable effects, such as codes serving merely symbolic purposes. 65
The empirical evidence regarding the impact of codes of ethics on behavior is mixed.
Some studies from business ethics argue for an observable effect, 63 while a more recent
study, looking at the ACM Code of Ethics, posits that such norms have little effect. 64
While the governance effects of ethical codes are unclear, there are a few key reasons to
see promise for professional norms, articulated as codes of ethics and similar ethical principles,
as an important addition to the AI governance toolkit. In this section, we draw from pragmatic
examples as demonstrations of such potential effects. While the cynic can argue that these are
merely symbolic gestures to appease stakeholders, 65 or a form of “ethics
washing,” 66 there are practical design choices that can encourage robust accountability systems
and make professional norms more powerful. We thus situate professional norms within a
broader web of governance and accountability mechanisms.
Implementation of Norms
63. Joseph A. McKinney, Tisha L. Emerson, and Mitchell J. Neubert, “The Effects of Ethical Codes on Ethical Perceptions of Actions Toward Stakeholders,” Journal of Business Ethics 97, no. 4 (2010): 505–516. Accessed March 7, 2019, http://www.jstor.org.ezp-prod1.hul.harvard.edu/stable/40929510.
64. Andrew McNamara, Justin Smith, and Emerson Murphy-Hill, “Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development?,” in Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Lake Buena Vista, Florida, 2018), 729–733, https://dl.acm.org/citation.cfm?doid=3236024.3264833.
65. Frankel, “Professional Codes,” 111.
66. James Vincent, “The Problem with AI Ethics,” The Verge, April 3, 2019, https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech.
Professional norms are more likely to have governance effects when they are implemented and
integrated into the development processes, practices, and routines of AI-based
technologies. An important case involves companies that are not only formulating
norms and principles but also creating granular guidelines that specify what the norms actually
mean for engineers in practice. These guidelines are sometimes presented as instructional
resources, such as Google’s “Responsible AI Practices.” 67 Microsoft’s guidelines for developers of
conversational AI, presented as a living document that will evolve with a changing ecosystem,
are illustrative as well. 68
Companies are also drawing on internal review boards for additional guidance on
implementing principles in practice. Six months after publishing its AI guidelines, Google
created an internal review structure for difficult cases that are flagged internally, in which
developers struggled with the application of the company’s principles or faced other barriers. 69
A council of senior executives handles the most
challenging issues, including decisions that affect multiple products and technologies. 70
Overall, the effectiveness of professional norms within the corporate context as a force of
governance will depend, as we have seen in other contexts such as the pharmaceutical industry, 71
on numerous factors, ranging from internal oversight, monitoring, and reinforcement mechanisms
to executive buy-in and organizational culture. 72
67. “Responsible AI Practices,” Google AI, accessed March 11, 2019, https://ai.google/education/responsible-ai-practices.
68. “Responsible Bots,” 1.
69. Kent Walker, “Google AI Principles Updates, Six Months In,” The Keyword (blog), December 18, 2018, https://www.blog.google/technology/ai/google-ai-principles-updates-six-months/.
70. From our experience working with industry at the Berkman Klein Center, we’ve learned that buy-in and awareness of the top executives are key for cultural and behavioral changes in these organizations developing AI technologies.
71. Ruth Chadwick, ed., The Concise Encyclopedia of the Ethics of New Technologies, 1st ed., Bioindustry Ethics (Amsterdam, NL: Elsevier Academic Press, 2005).
Accountability Mechanisms
Norms that apply to the development of AI systems are diverse, and the possible
accountability mechanisms that seek to ensure compliance with these diverse norms across
contexts vary, too. Some of the principles and codes of ethics discussed lack explicit
accountability mechanisms, but still might be enforced informally as a “by-product of other types
of controls that are maintaining everyday professional routines.” 73 Others, such as the ACM
Code of Ethics, are paired with a separate enforcement policy. The ACM
enforcement policy outlines the steps that the organization takes once a complaint is filed,
including investigations and complaint dismissals. The enforcement policy contains potential
disciplinary actions for code violations, such as barring members temporarily from ACM
conferences and publications, and expulsion from the organization. 74 The effects of such
enforcement mechanisms remain largely an open empirical question: while an early study on the
impact of the ACM Code of Ethics claimed that it did not impact
behavior, this study is limited in that it only looked at the exposure of participants to the norms
but not at the impact of enforcement mechanisms based on complaints of members – a form of
internal policing of the profession. 75 Nonetheless, decades of empirical research across a wide
variety of (traditional) professions suggest that formal prosecution is largely a function of the
visibility of the offense. 76
72. Chadwick, Ethics of New Technologies, 391-394.
73. Abbott, “Professional Ethics,” 860.
74. “ACM Code of Ethics Enforcement Procedures,” Association for Computing Machinery, accessed March 8, 2019, https://www.acm.org/code-of-ethics/enforcement-procedures.
75. McNamara, Smith, and Murphy-Hill, “ACM’s Code of Ethics,” 730-731.
76. Abbott, “Professional Ethics,” 859.
Beyond these internal structures, organizations continue to experiment with accountability
schemes. In addition to internal review boards, there are attempts
to establish external review boards, with varying degrees of success. Google, for example,
created a (controversial and short-lived) external council with a specific focus on the
development of Google’s AI-based tools. 77, 78 SAP also created an interdisciplinary External
Ethics Review Board with a focus on AI to guide the implementation of its principles, 79 in
addition to its Internal Steering Committee, which created the AI Principles. 80 These review
panels – and the interest in establishing them – suggest that the responsibility to develop ethical
and responsible AI tools does not fall solely on the developers and engineers, potentially adding
layers of accountability for professional norms. The sobering recent experiences with external
review boards suggest, however, that independence and
authority are among the key factors that determine the effectiveness of such oversight
mechanisms. 81 Overall and beyond the context of AI, the performance of ethical review
processes remains an open empirical question. 82
77. Kent Walker, “An External Advisory Council to Help Advance the Responsible Development of AI,” The Keyword (blog), March 26, 2019, https://www.blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/.
78. Kelsey Piper, “Exclusive: Google Cancels AI Ethics Board in Response to Outcry,” Vox, April 4, 2019, https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board.
79. “SAP Becomes First European Tech Company to Create Ethics Advisory Panel for Artificial Intelligence,” SAP, September 18, 2018, https://news.sap.com/2018/09/sap-first-european-tech-company-ai-ethics-advisory-panel/.
80. Machmeier, “SAP’s Guiding Principles.”
81. Meredith Whittaker et al., “AI Now Report 2018” (AI Now, New York University, December 2018), https://ainowinstitute.org/AI_Now_2018_Report.pdf.
82. Stuart G. Nicholls et al., “A Scoping Review of Empirical Research Relating to Quality and Effectiveness of Research Ethics Review,” PLOS ONE 10, no. 7 (July 30, 2015), https://doi.org/10.1371/journal.pone.0133639.
Codes may serve in part to reassure external stakeholders, 83 as has long been observed in
the context of traditional professional norms and accountability schemes. 84
media environment, the court of public opinion serves as another governance mechanism for
norm enforcement: organizations that stipulate principles and professional norms for AI
development will potentially be held responsible in the court of public opinion if they violate or
infringe on these norms. Recent media examples showcase employees of major technology
companies articulating their values, again from the “bottom up.” For example, Microsoft
employees made headlines for their open letter against the company’s work with U.S.
Immigration and Customs Enforcement. 85 Google employees, objecting to the company’s
planned search engine in China, published an open letter in protest of the project. 86 Following
reports from major news outlets 87 and public threats from an investor, 88 Google reportedly shut
down the project. 89 In other cases, corporate responses to employee activism have reportedly
included retaliation against the organizers. 90
83. Frankel, “Professional Codes,” 111.
84. Abbott, “Professional Ethics,” 859.
85. Frenkel, “Microsoft Employees Protest.”
86. “We Are Google Employees.”
87. Brian Fung, “Google Really Is Trying to Build a Censored Chinese Search Engine, Its CEO Confirms,” The Washington Post, October 16, 2018, https://www.washingtonpost.com/technology/2018/10/16/google-really-is-trying-build-censored-chinese-search-engine-its-ceo-confirms/?utm_term=.84255850a20c.
88. Patrick Temple-West, “Google Shareholder Revolts over ‘Project Dragonfly,’” Politico, December 19, 2018, https://www.politico.com/story/2018/12/19/google-shareholder-revolts-project-dragonfly-1037966.
89. Ryan Gallagher, “Google’s Secret China Project ‘Effectively Ended’ after Internal Confrontation,” The Intercept, December 17, 2018, https://theintercept.com/2018/12/17/google-china-censored-search-engine-2/.
90. Alexia Fernández Campbell, “Google Employees Say the Company Is Punishing Them for Their Activism,” Vox, April 23, 2019, https://www.vox.com/2019/4/23/18512542/google-employee-walkout-organizers-claim-retaliation.
Amazon shareholders are currently demanding that the company stop selling facial recognition
technology in light of potential human rights violations. 91 This particular demand draws on an
earlier suggestion from Microsoft for government regulation of the technology. 92 Scholars have
previously noted that principles and codes may be used to appease stakeholders, 93 but this form
of public accountability may nonetheless carry real consequences,
depending on the context of application, type of norm violation, overall transparency, and other
factors. Based on recent experiences, it is also likely that media outlets rely on the existence of
investigative and translative organizations – current examples include ProPublica, 94 AI Now, and
AlgorithmWatch, among others – that draw attention to ethical issues in AI and possible norm
violations.
Looking ahead and towards more robust accountability schemes for professional norms
governing the development of AI systems, there may be instances where professional norms
make an entry into law. Professional norms can become legally relevant through different
channels: through legislation (e.g., lawmakers delegate standard-setting to professional
associations); through contracts (e.g., two parties incorporate norms of the profession by way of
reference); or through court decisions (e.g., judges interpret open-ended legal standards in
91. Mallory Locklear, “Shareholders Ask Amazon to Halt Sales of Facial Recognition Tech,” Engadget, January 17, 2019, https://www.engadget.com/2019/01/17/shareholders-ask-amazon-halt-sales-facial-recognition-tech/.
92. Brad Smith, “Facial Recognition Technology: The Need for Public Regulation and Corporate Responsibility,” Microsoft (blog), July 13, 2018, https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.
93. Frankel, “Professional Codes,” 111.
94. “Machine Bias: Investigating Algorithmic Injustice,” ProPublica, accessed April 2, 2019, https://www.propublica.org/series/machine-bias.
light of the norms of the profession). Examples of private norms – especially technical ones –
that make an entry into the legal arena are manifold and span diverse contexts such as
accounting, health law, and environmental regulation. 95 The doctrinal questions of the interplay
between norms of the profession and the legal system are complex and remain controversial.
For the context of this chapter, a new mechanism established in Article 40 of the EU
General Data Protection Regulation (GDPR) illustrates how codes of conduct prepared by a
professional association can become legally relevant. According to this provision, associations
and other bodies that represent data controllers or processors may prepare codes of conduct to
specify the regulation with regard to a long list of items, including fair and transparent
processing, the collection of personal data, and so forth. 96 These codes of conduct can be
monitored by accredited bodies, which have to take action in case of infringement of the
approved code of conduct.
The discussion in this chapter suggests that professional norms of different provenance
have the potential to serve as a reservoir for AI governance when contextualized within other
governance mechanisms. However, fundamental conceptual issues such as the notion of what
constitutes “AI professions” as well as a range of empirical questions, including the actual
effects of norms on professionals, remain open for further research and discussion.
95. Felix Uhlmann, ed., Private Normen und Staatliches Recht, vol. 5 (Zurich, SUI: Dike, 2015).
96. “General Data Protection Regulation,” Official Journal of the European Union L119 59 (May 4, 2016), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2016:119:FULL.
The observations presented in this chapter can be assembled into elements of an emerging – and
certainly evolving – framework that might serve as an analytical
tool to map the different types of relevant professional norms and to hypothesize about their
potential governing effects on the professionals involved in developing or deploying AI. The
following schematic captures some of the key dimensions of the proposed analytical framework,
which – like any framework – necessarily reduces the complexity of reality with all its nuances:
By way of summarizing some of the key findings and arguments presented in this
chapter, a number of components of the proposed (tentative) framework are worth highlighting.
First, the framework reflects our finding that the current landscape of professional norms is
largely disjointed and in flux. The body of relevant norms does not present itself in the form of a
coherent normative structure, such as a single code of ethics, but is rather a patchwork consisting
of existing as well as emerging norms. They can be more general or (AI-)specific in nature and
emerge, as in the case of traditional norms of the profession, from membership associations,
companies, and individual professionals. These
norms also interact in various ways with other norm types at different places in the norm
hierarchy.
Second, professional norms in the context of technologies that embrace a rich set of
subdisciplines, methods, and tools under the umbrella of the catchword AI may emerge along the
full lifecycle of AI-based systems, including their design, development, testing, deployment,
support, and phase-out. Future work could analyze the role of professional norms for each phase
of system development, which could be further broken down into more specific activities,
depending on the AI method invoked in these applications – such as data preprocessing,
application of algorithms, and model selection in the case of machine learning techniques.
Third, we have argued that governing effects might stem from professional norms that are
input-oriented, in that they deal with the circumstances under which AI-based technologies
are created, or output-oriented, in that they address the use of such technologies. In some instances,
these two ideal-type categories might overlap or interact with each other. Examples of the former
are general professional norms that apply to computer scientists, engineers, and research and data
ethicists who are involved in the creation of AI-based systems, as well as comprehensive
AI-specific principles and codes of ethics of the kind discussed earlier in this chapter.
Examples of output- or application-oriented norms are norms of the legal, medical,
educational, and other professions that might deploy AI-based systems in their professional
contexts that are governed by professional norms. An example of the interplay between the two
types of norms is a situation in which a professional who is generally bound by one
set of norms (e.g., a physician) enters the “sphere” of the other (e.g., by advising the development
of an AI-based system for clinical use).
Fourth, one can roughly distinguish between direct and indirect governance effects. The main
example of a direct effect is a situation in which a professional developing
or using AI is immediately guided in the intended direction by a particular norm set forth in the
applicable body of professional norms. A case in which a company, whose employees might
have violated norms of the profession, is held liable in a court of law or in the court of public
opinion with reference to professional best practices might serve as an example of an indirect
governing effect of professional norms. Whether a certain norm exhibits direct or indirect effects
depends on various contextual factors, including the specificity of the norm, the professional
context in which it applies, and the enforcement mechanisms attached to it.
In this chapter, we have taken a pragmatic look at the norms of the profession as a source
of AI governance, offering some observations about the fluid state of play and outlining the
contours of a possible analytical framework that might inform future research and discussions.
Perhaps the most interesting questions for further exploration – in addition to important
empirical questions about implementation and accountability schemes – are normative in nature
and go beyond the scope of this chapter. Among them is the question of how AI-based
technologies might fundamentally challenge the notion of the “profession” itself – whether on
a conceptual or a practical level. 97
One dimension is the observation that highly specialized AI applications have the potential to
outperform even the best professionals – often without being able to provide an explanation for
their results. This shift in performance and power from the human to the machine is likely to
affect the identity of the profession itself, for which specialized knowledge and information
asymmetries are constitutive, and will ultimately add another layer of complexity to the
governance discussion.
During this period of rapid change, we also need to challenge the dominant voices, norms,
and power structures within professions (and “AI professions” in particular) to ensure they
become more diverse and inclusive across gender, race, socioeconomic status, and other
dimensions. These concerns about diversity and inclusion are not unique to AI and have also
been observed in other professions such as law, 98, 99 information technology, 100 and
architecture. 101 In the AI context,
the current lack of diversity is striking; a recent report sheds light on the troubling gender and
racial imbalances within “AI professions” and calls for greater diversity within the workplaces
and environments in which AI-based systems are designed and developed, as well as within the
training data for these systems. 102 We anticipate efforts aimed at increasing diversity and
inclusion within these professions to intensify.
97. Boddington, Towards a Code, 59-65.
98. Deborah L. Rhode, “Gender and Professional Roles,” Fordham Law Review 63, no. 1 (1994): 39–72. Accessed April 23, 2019, https://ir.lawnet.fordham.edu/flr/vol63/iss1/5/.
99. Alex M. Johnson Jr., “The Underrepresentation of Minorities in the Legal Profession: A Critical Race Theorist’s Perspective,” Michigan Law Review 95 (1997): 1005–1062. Accessed April 23, 2019, https://heinonline.org/HOL/P?h=hein.journals/mlr95&i=1025.
100. Androniki Panteli, Janet Stack, and Harvie Ramsay, “Gender and Professional Ethics in the IT Industry,” Journal of Business Ethics 22, no. 1 (1999): 51–61. Accessed April 23, 2019, https://www.jstor.org/stable/25074189.
101. Kathryn H. Anthony, Designing for Diversity: Gender, Race, and Ethnicity in the Architectural Profession (Urbana, IL: University of Illinois Press, 2001).
102. Sarah Myers West, Meredith Whittaker, and Kate Crawford, “Discriminating Systems: Gender, Race and Power in AI” (AI Now, New York University, April 2019), https://ainowinstitute.org/discriminatingsystems.pdf.
A related challenge concerns the capacity and legitimacy of norms of the profession to
deal with fundamental challenges brought forth by AI-based technologies. If current trajectories
hold, AI technology is likely to fundamentally shape and in some areas reconfigure social
relations among humans, as well as between humans and artifacts. A growing body of literature
demonstrates that the ethical and governance questions associated with some of these AI-enabled
shifts touch upon some of the most fundamental concepts and values related to humanity and
society. 103 It remains an open question whether and how professional norms can
productively engage with some of these extremely complex societal questions, and how the
profession will position itself in these debates.
A final question for further debate concerns the role of professional norms in concert with
other norms of AI governance. 104 One of the big advantages of professional norms is that they
are more context-sensitive than other types of norms. At the same time, this context-sensitivity,
combined with the fact that professional norms are only one source of constraints within the
broader landscape of meshed AI governance, leads us to question how conflicts among different
(contextual) norms and their interpretation can be resolved across the various norm hierarchies
and norm authorities. How to ensure certain levels of interoperability, 105 along with (normative)
consistency among the various sources of professional norms as applied to AI over time and
vis-à-vis other governance schemes across jurisdictions and cultures, will be an ecosystem-level
challenge.
103. See, e.g., Boddington, Towards a Code, 67-83.
104. Urs Gasser and Virgilio A.F. Almeida, “A Layered Model for AI Governance,” IEEE Internet Computing 21, no. 6 (December 2017): 58–62, https://doi.org/10.1109/MIC.2017.4180835.
105. John Palfrey and Urs Gasser, “Legal Interop,” in Interop: The Promise and Perils of Highly Interconnected Systems, 1st ed. (New York, NY: Basic Books, 2012), 177–192.
Acknowledgments
The authors would like to express gratitude to the members of the Ethics and Governance of
Artificial Intelligence Initiative at the Berkman Klein Center and the MIT Media Lab, Amar
Ashar, John Bowers, Ryan Budish, Mary Gray, Hans-W. Micklitz, Ruth Okediji, Luke Stark,
Sara Watson, Mark Wu, and Jonathan Zittrain for conversations and inputs on this paper.
Bibliography
Abbott, Andrew. "Professional Ethics." American Journal of Sociology 88, no. 5 (1983):
855-885. http://www.jstor.org/stable/2779443.
Boddington, Paula. Towards a Code of Ethics for Artificial Intelligence. 1st ed. Artificial
Intelligence: Foundations, Theory, and Algorithms. Cham, SUI: Springer International
Publishing, 2017.
Bynum, Terrell W., and Simon Rogerson, eds. Computer Ethics and Professional Responsibility.
1st ed. Malden, MA: Blackwell, 2004.
Frankel, Mark S. "Professional Codes: Why, How, and with What Impact?" Journal of Business
Ethics 8, no. 2/3 (1989): 109-115. http://www.jstor.org/stable/25071878.
Greene, Daniel, Anna L. Hoffman, and Luke Stark. “Better, Nicer, Clearer, Fairer: A Critical
Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.”
In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui,
Hawaii, 2019. Accessed April 2, 2019.
https://scholarspace.manoa.hawaii.edu/bitstream/10125/59651/0211.pdf.
Noordegraaf, Mirko. “From ‘Pure’ to ‘Hybrid’ Professionalism.” Administration & Society 39,
no. 6 (October 2007): 761-785. https://doi.org/10.1177/0095399707304434.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned
Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems. 1st ed. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html.