An AI Ethical Framework for Malaysia: A Much-Needed Governance Tool for Reducing Risks and Liabilities Arising from AI Systems (2021) 3 MLJ cdxiv
by
JASPAL KAUR SADHU SINGH AND DARMAIN SEGARAN1
With new technologies, we proceed through a series of milestones in terms of
their lifecycle. There is a trajectory of invention, approval and adoption,
exploitation, and finally, regulation. Black and Murray identify the stages or
lifecycle of development of disruptive technologies touching on points where
regulation becomes part of these stages — either before, at the point of, or after,
commercial exploitation.2 The point in its lifecycle when a particular
technology is legally regulated has differed over time. Historically, there is
evidence of early intervention even before a technology's use became
prevalent, such as in the case of radio communication. Conversely, legal
regulation of technological innovation may take place at a later stage when
there is a proliferation of the invention and when it enters into the public
domain.3 Regulation is justified as proliferation of the use of technological
innovation may present instances of documented risks that require managing.
Black and Murray’s allusion to ethical debates on the development and
deployment of technologies contextualises the debates on the regulation of
artificial intelligence ('AI'), as this paper does. Ethical debates often predate
regulatory initiatives, and the lifecycle of AI is no exception. If the risks and
challenges arising from the design and use of AI require managing,
governance frameworks or processes can be introduced in place of, or prior to,
legal regulation.
1 Convenors of the AI, Law and Ethics Series. Dr Jaspal Kaur Sadhu Singh is Senior Lecturer
at the Faculty of Law, HELP University, and Mr Darmain Segaran is the Founder of
Segaran Law Chambers & Adjunct Fellow, Faculty of Law, HELP University. They are both
founding members of AI Doctrina, a knowledge portal to drive the conversation and create
awareness on the intersection of law and ethics with artificial intelligence (AI) https://www.
ai-doctrina.info/.
2 J Black and A Murray, ‘Regulating AI and Machine Learning: Setting the Regulatory
Agenda’ (2019) 10(3) European Journal of Law and Technology 20 https://ejlt.org/index.
php/ejlt/article/view/722/980 accessed 20 July 2021.
3 Ibid.
Not all countries favour a legal regulatory regime. Some, wishing to be seen as
promoting innovation and growth of the AI sector, opt instead for adopting
policies and national AI ethical frameworks which serve to promote
self-governance.7 The sum of it is that a framework provides the principles we
want to see in AI tools: that they be inherently trustworthy and responsible, and
intrinsically transparent and explainable. Therefore, when developers build
AI systems, they must begin by understanding why ethical considerations
matter and, essentially, the ramifications of omitting these considerations.
The importance of such frameworks is evidenced by the fact that, in the
last ten years, approximately 100 proposals for AI principles have been
published, with studies identifying which principles are most cited.8
Some of the most cited frameworks are referenced in this article, including
frameworks or declarations of principles that govern AI systems (or
'intelligent and autonomous technical systems', the term used by the IEEE)
published by the IEEE, the OECD, the European Commission, the Future of
Life Institute, UNESCO and the WHO.
8 See, Y Zeng, E Lu and C Huangfu, ‘Linking artificial intelligence principles’ (the AAAI
Workshop on Artificial Intelligence Safety, Honolulu, 2019) https://arxiv.org/ftp/arxiv/
papers/1812/1812.04814.pdf accessed 22 August 2021; See also, A Jobin, M Ienca, and E
Vayena, ‘Artificial Intelligence: the global landscape of ethics guidelines’
(arXiv:1906.11668) https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf accessed 22
July 2021.
9 JK Sadhu Singh and D Segaran, ‘AI, Law and Ethics: The Emergence of a New Discourse
in the Legal Thoroughfare’ [2020] 5 MLJ i.
The most prominent risk is that of ‘placing’ decisions that impact people in
black boxes. An AI system processes data through its algorithm which produces
a prediction or outcome. The designers of the algorithm may not understand
how the variables are combined by the algorithm to make that prediction or
produce an outcome. This is because ‘black box predictive models can be such
complicated functions of the variables that no human can understand how the
variables are jointly related to each other to reach a final prediction’.10 Neural
networks may contain many ‘sharp corners’ that may achieve objectives that
the designer did not think about. Kearns and Roth explain that ‘complicated,
automated decision-making that can arise from machine learning has a
character of its own, distinct from that of its designer. The designer may have
had a good understanding of the algorithm that was used to find the
decision-making model, but not the model itself'.11 Without the ability to
understand the process by which the decision was made, the decision lacks
transparency and explainability. Individuals may challenge these decisions on
the basis that they are biased and discriminatory, or inaccurate and flawed, as
they are unable to understand how the algorithm arrived at its decision. The
authors acknowledge the strides
made to make AI more explainable (explainable AI) and the use of ‘white
10 C Rudin and J Radin, ‘Why Are We Using Black Box Models in AI When We Don’t Need
to? A Lesson from an Explainable AI Competition’ (2019) Issue 1.2 Harvard Data Science
Review https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6 accessed 22 August 2021. See
also, Tom Cassauwers, ‘Opening the ‘black box’ of artificial intelligence’ (Horizon, The EU
research & Innovation Magazine, 1 December 2020) https://ec.europa.eu/research-and-
innovation/en/horizon-magazine/opening-black-box-artificial-intelligence# accessed 22
August 2021.
11 M Kearns and A Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design
(OUP 2020) 10–11.
boxes’,12 this may reduce the degree of distrust in AI systems and their uses. It
should be noted that explainable AI systems are still at the early stages of
development. For example, in a white-box model, while a developer may know
the model architecture and parameters, this knowledge alone, will not make
the model explainable.13
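The distinction can be illustrated with a short sketch. The model, weights, and variable names below are purely hypothetical and are not drawn from any system discussed in this article; the point is only that a 'white box', whose every parameter is visible, can still resist explanation, whereas a linear rule wears its reasoning on its face.

```python
import math

def linear_score(income, debts, years_employed):
    # Interpretable model: each variable's contribution to the score
    # can be read directly off its coefficient.
    return 0.5 * income - 0.8 * debts + 0.3 * years_employed

def neural_score(income, debts, years_employed):
    # "White box" neural network: every weight below is visible to the
    # designer, yet the variables interact through non-linearities, so no
    # single coefficient explains how they are *jointly* related.
    x = [income, debts, years_employed]
    w1 = [[0.9, -1.2, 0.4], [-0.7, 0.8, 1.1]]   # hidden-layer weights (hypothetical)
    w2 = [1.5, -0.6]                            # output-layer weights (hypothetical)
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

# In the linear model, one extra unit of income always raises the score by
# exactly 0.5. In the network, the marginal effect of income depends on the
# other variables -- the hallmark of a model that is transparent in its
# parameters but not explainable in its reasoning.
low = neural_score(1.0, 0.0, 0.0) - neural_score(0.0, 0.0, 0.0)
high = neural_score(1.0, 5.0, 0.0) - neural_score(0.0, 5.0, 0.0)
print(low, high)  # the same change in income moves the score very differently
```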
The frameworks and documents espousing AI ethical principles often make
pronounced references to the risks around AI whilst recognising the altruistic
and economic benefits of AI. The OECD’s Recommendation of the Council
on Artificial Intelligence identifies that alongside benefits ‘AI also raises
challenges for our societies and economies, notably regarding economic shifts
and inequalities, competition, transitions in the labour market, and
implications for democracy and human rights'.14 Equally, the IEEE's concerns
about the use of intelligent and autonomous technical systems 'designed to
reduce human intervention in our day-to-day lives…impact on individuals and
societies' include the 'potential harm to privacy, discrimination, loss of skills,
economic impacts, security of critical infrastructure, and the long-term effects
on social well-being'.15 The European Commission's High-Level Expert Group on
Artificial Intelligence highlighted a non-exhaustive list of concerns which
included the identification and tracking of individuals with AI, use of covert AI
systems, use of AI-enabled citizen scoring in violation of fundamental rights,
and the development of lethal autonomous weapon systems.16
The authors highlight instances of these concerns below with reference to
outcomes in court decisions and findings from research studies.
12 C Rudin and J Radin, ‘Why Are We Using Black Box Models in AI When We Don’t Need
to? A Lesson from an Explainable AI Competition’ (2019) Issue 1.2 Harvard Data Science
Review https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6 accessed 22 August 2021.
13 A Das and P Rad, ‘Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey’ (arXiv 2006.11371v2) https://arxiv.org/abs/2006.11371 accessed 23
August 2021.
14 The Organisation for Economic Co-operation and Development, Recommendation of the
Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/
OECD-LEGAL-0449 accessed 22 July 2021.
15 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
16 European Commission, Ethics Guidelines for Trustworthy AI: High-Level Expert Group on
Artificial Intelligence, 33-34 https://digital-strategy.ec.europa.eu/en/library/
ethics-guidelines-trustworthy-ai accessed 22 August 2021.
One major concern arising from the use of AI technology is the invasion and
erosion of privacy. In 2020, the Court of Appeal of England and Wales17 in the
Edward Bridges case had to consider the lawfulness of the use of automated
facial recognition technology (AFR) in a pilot project by the South Wales Police
Force (SWP). Bridges brought a claim for judicial review. He was in the vicinity
of two areas where the technology was deployed. Bridges contended that his
images were recorded, although the images were later deleted and Bridges did
not appear on SWP’s watch list. The Court of Appeal allowed the appeal on
three of the five grounds. The first successful ground was that the legislation
and local policies relied on by SWP provided no clear guidance on the use of
the AFR and as to who could be placed on a watch list. This was in breach of art
8 of the European Convention on Human Rights, which protects the right to
respect for private and family life. The court went further to say, on the second
successful ground, that the lack of guidance afforded the police officers an
excessively broad discretion, failing to meet the standard required by art 8(2),
which sets out the grounds on which interference with the Convention right
may be justified. Resulting from this, the court concluded that SWP's 'data
protection impact assessment' (DPIA), required by s 64 of the UK Data
Protection Act 2018, was deficient as it did not properly address the
infringement of art 8. On the third successful
ground, the Court found that there was non-compliance with the Public Sector
Equality Duty (PSED) under s 149 of the UK Equality Act 2010. The PSED’s
purpose is to ensure that public authorities consider the discriminatory
potential impact of the technology and the court found that the SWP erred by
not taking reasonable steps to make enquiries about whether the AFR software
had bias on racial or sex grounds. This assessment is vital even though the
appellate court acknowledged that there was no clear evidence that AFR was
indeed biased on the grounds of race or sex, or both.
17 R (on the application of Edward Bridges) v The Chief Constable of South Wales Police & Ors
[2020] EWCA Civ 1058.
18 Tom Simonite, ‘The Best Algorithms Struggle to Recognize Black Faces Equally’ (WIRED,
22 July 2019) https://www.wired.com/story/best-algorithms-struggle-recognize-black-
faces-equally/ accessed 22 August 2021.
19 Aaron Klein, ‘Reducing Bias in AI-based Financial Services’ (Brookings Institution, 10 July
2021) https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/
accessed 22 August 2021.
20 BBC Tech, ‘Twitter algorithm prefers slimmer, younger, light-skinned faces’ (BBC News, 11
August 2021) https://www.bbc.com/news/technology-58159723 accessed 22 August
2021.
21 Craig Langran, ‘Why was my tweet about football labelled abusive?’ (BBC News, 18 July
2021) https://www.bbc.com/news/technology-57836409 accessed 22 August 2021.
22 881 N W 2d 749 (Wis 2016)(US).
23 Ibid 754.
24 881 N W 2d 749 (Wis 2016)(US) 765.
25 881 N W 2d 749 (Wis 2016)(US) 774.
record from sentencing courts would have been required on ‘the strengths,
weaknesses, and relevance to the individualized sentence being rendered of the
evidence-based tool'. This case has attracted much criticism26 as it raises
questions about the disregard for the right to due process. The US Supreme
Court has established that due process in a fair sentencing procedure includes
'the right to be sentenced on the basis of accurate information'27 and to be
given the means to ascertain the correctness of the information.28 Due process
is thus linked to the transparency and explainability of the algorithm, that is,
how it arrived at the assessment it did.
26 ‘State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic
Risk Assessments in Sentencing’ (2017) 130 Harvard Law Review 1530 https://
harvardlawreview.org accessed 22 August 2021; See also, L Han-Wei, L Ching-Fu and C
Yu-Jie, ‘Beyond State v Loomis: Artificial Intelligence, Government Algorithmization, and
Accountability’ (2019) 27(2) International Journal of Law and Information Technology
122–141.
27 Townsend v Burke 334 US 736 (1948) (US).
28 State v Skaff 152 Wis 2d 48, 53; 447 N W 2d 84 (Ct.App.1989), 58 (US).
29 R Allen and D Masters, ‘SYRI: Think Twice Before Risk Profiling’ (AI Law Hub, 30 March
2020) https://ai-lawhub.com/2020/03/30/syri-think-twice-before-risk-profiling/ accessed
22 August 2021.
30 R Allen and D Masters, ‘French Parcoursup Decision’ (AI Law Hub, 16 April 2020)
https://ai-lawhub.com/2020/04/16/french-parcoursup-decision/ accessed 22 August
2021; See also ‘Parcoursup: Decision No.2020-834 QPC’ (AI Law Hub) https://ai-lawhub.
com/parcoursup-decision-no-2020-834-qpc/ accessed 22 August 2021.
With the ubiquitous use of AI, the courts will play an increasing role in
handling disputes in areas of liability, reputational harm, due process, harmful
data practices, surveillance and invasion of privacy, and discrimination, to
name a few. In the near future, the design and the functioning of AI will come
under greater scrutiny, and AI developers and deployers will be held
accountable on the questions requiring adjudication in such cases.
To manage the risks and concerns arising from AI and its impact on society,
ethical frameworks have been adopted, or are at a nascent stage of adoption, at
the national, regional, and international levels. These frameworks comprise a
set of values to be embedded in the design and development process and the
deployment stage of an AI system. In a sense, these frameworks provide a
litmus test for assessing the extent to which an AI system is 'responsible' or
'trustworthy' (or some synonym of these adjectives), that is, one that holds up
to the standards of accountability required of it. This accountability is to be
borne by the developer and the deployer. Recurring terms in the titles of these
frameworks include 'ethical guidelines' or 'ethical principles'.
The IEEE in the second iteration of its guidelines emphasises that to attain
the full benefit of AI technologies, AI must be ‘aligned with our defined values
and ethical principles’ and this can be done with ethical frameworks that can
‘guide and inform dialogue and debate around the non-technical implications
31 R Allen and D Masters, 'A Clear Ruling from The Italian Supreme Court: Consent Without
Transparency Is Legally Worthless, Especially Where an AI System Is Used To Assess
Credibility And Reputation' (AI Law Hub, 16 April 2020) https://ai-lawhub.com accessed
22 August 2021.
of these technologies’.32 In a similar tone, the OECD recognises the ‘need for
a stable policy environment that promotes a human-centric approach to
trustworthy AI, that fosters research, preserves economic incentives to
innovate, and that applies to all stakeholders according to their role and the
context’. In another framework, UNESCO recognises that frameworks based
on ethical values and principles can help ‘shape the development and
implementation of rights-based policy measures and legal norms, by providing
guidance where the ambit of norms is unclear or where such norms are not yet
in place due to the fast pace of technological development combined with the
relatively slower pace of policy responses’.33
Most of these frameworks share the aspiration that the application of
ethical values and principles can help overcome the challenges, and reap the
opportunities, linked to AI technologies.
32 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
33 UNESCO, Preliminary report on the first draft of the Recommendation on the Ethics of
Artificial Intelligence https://unesdoc.unesco.org/ark:/48223/pf0000374266 accessed 22
August 2021.
34 Herman T Tavani, Ethics and Technology: Controversies, questions, and strategies for ethical
computing (4th edn, Wiley 2013).
and 'machine ethics'. The role of applied ethics in technology is certainly not a
nascent subject; it is an evolving one, as Tavani has alluded to. Ethical
theories can act as foundational concepts upon which to make decisions about
the risks and impact of technological innovation. In applying the values found
in these theories, innovators are to ask questions about their actions or
inactions and the consequences thereof.
Applied ethics found its way into the professional standards, codes, and
integrity of the IT profession. The requirement for ethical consideration has
formed part of the professional ethics of the IT specialist, traceable to the
adoption of professional codes of ethics by the Association for Computing
Machinery (ACM) and the Institute of Electrical and Electronics
Engineers-Computer Society (IEEE-CS). Early scholars viewed these codes as
regulating professional members35 or as playing an essential function in
inspiring (by identifying values and ideals), educating, guiding, demanding
accountability, and enforcing the values of the code.36
In a similar vein, ethical frameworks aim to 'regulate', or to operate as these
codes do, by ensuring that the professionals who write AI algorithms weave
these values into the design of the algorithm. Applying ethical theories to
dilemmas arising from the use of AI systems provides a systematic approach to
the moral responsibility borne by data scientists writing algorithms and by
the users of those algorithms. Moral responsibility leads to the moral agency
of one's decisions, actions, or even inactions. To fill the policy vacuum,
designers and deployers of AI should assess the design and use of the algorithm
by applying moral values to prevent harmful practices and consequences. This
will involve arguments and disagreements. Irrespective of the outcome, a
meaningful and constructive discourse will contribute to improving the design
and the decision-making process. These debates are rife, as discussed earlier and
later in this paper.
Moral responsibility is inextricably linked to legal liability and the concept
of accountability. Moral irresponsibility may or may not lead to legal liability
depending on whether the action resulting from the moral irresponsibility falls
within the ambit of the law. For instance, the use of surveillance methods using
facial recognition software in the EU must meet the standards of the European
Convention on Human Rights or the General Data Protection Regulation
35 D Gotterbarn and K Miller, ‘The Public is Priority: Making Decisions Using the Software
Code of Ethics’ (2009) 42(6) IEEE Computer 66–73.
36 TW Bynum, T Ward, and S Rogerson (eds), Computer Ethics and Professional Responsibility
(Blackwell 2004).
As stated earlier in the paper, AI ethical frameworks have been drafted at the
national, regional, and international levels, as well as by corporations (mainly
tech corporations) either developing or using AI systems. At the regional or
international level, the organisations developing these frameworks represent
states or interested parties with varied interests such as experts, think tanks,
research organisations, or representatives of industries.
37 Herman T Tavani, Ethics and Technology: Controversies, questions, and strategies for ethical
computing (4th edn, Wiley 2013) 118.
38 Helen Nissenbaum, ‘Computing and Accountability’ in John Weckert (ed), Computer
Ethics (Routledge 2007) 273–280, 274.
39 Ibid.
At the international level, several organisations have taken the lead. The
Institute of Electrical and Electronics Engineers (IEEE) started an initiative
in 2016 through its IEEE Global Initiative for Ethical Considerations in
Artificial Intelligence and Autonomous Systems, which has led to two iterations
of its report titled Ethically Aligned Design: A Vision for Prioritizing Human
Well-being with Autonomous and Intelligent Systems (A/IS)42 containing six
General Principles. The Future of Life Institute's Asilomar AI Principles43
followed in 2017 with an extensive set of 13 'ethics and values'. The OECD's AI
principles were adopted by the OECD Council in May 2019.44 Soon after,
these principles were adopted by the G20.
40 European Commission, The Ethics Guidelines for Trustworthy Artificial Intelligence https://
digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai accessed 20 July
2021.
41 ASEAN, ASEAN Digital Masterplan 2025 16, 20 https://asean.org/wp-content/uploads/
2021/08/ASEAN-Digital-Masterplan-2025.pdf accessed 20 July 2021.
42 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 1, 2016) https://standards.ieee.org accessed 22 August 2021; See also, for
Version 2, IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and
Intelligent Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
43 Future of Life, Asilomar AI Principles https://futureoflife.org/ai-principles/ accessed 22 July
2021.
44 The Organisation for Economic Co-operation and Development, Recommendation of the
Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-
LEGAL-0449 accessed 22 July 2021.
Where ethical frameworks are being developed for sector-specific needs, the
WHO has identified its own six key ethical principles for the use of AI for
health in its recent guidance report titled Ethics and Governance of Artificial
Intelligence for Health, published in June 2021.49 Each key ethical
principle is ‘a statement of a duty or a responsibility in the context of the
development, deployment and continuing assessment of AI technologies for
health’.50 The ethical principles are those considered to be most relevant for the
use of AI for health.
(c) The ethical principles and the most commonly occurring values
The WHO guidelines include the principles ‘to protect human autonomy
in decision making; promote human well-being, human safety and the public
interest in order to prevent harm; to ensure transparency, explainability and
intelligibility; to foster responsibility and accountability; to ensure
inclusiveness and equity to encourage the widest possible appropriate,
equitable use and access; and promote AI that is responsive and sustainable’.53
The IEEE guidelines include ‘Human Rights: Ensure they do not infringe
on internationally recognized human rights; Well-being: Prioritize metrics of
well-being in their design and use; Accountability: Ensure that their designers
The Asilomar principles are probably one of the most extensive with 13
ethical principles. These include ensuring the safety of AI systems; failure
transparency of AI systems when they cause harm; judicial transparency when
AI systems are used in judicial decision-making; responsibility on the part of
designers and builders of AI systems; value alignment in the design of AI
systems with human values; compatibility of AI systems with ideals of human
dignity, rights, freedoms, and cultural diversity; personal privacy when AI
systems analyse and utilise data; liberty and privacy; the shared
benefit of AI technologies; a shared economic prosperity created by AI systems;
human control; non-subversion of social and civic processes upon which
societal health is dependent; and the avoidance of an AI Arms Race.55
The values and principles distilled from these frameworks indicate some
commonly shared values and concerns. There are distinctions, as different
organisations may prioritise certain values and principles and may include
those more specific to their concerns and relevant to mitigating the risks
related to their industry or the unique way in which AI tools are used in
carrying out the operations of their organisations.
54 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
55 Future of Life, Asilomar AI Principles https://futureoflife.org/ai-principles/ accessed 22 July
2021.
56 Microsoft, Microsoft AI principles https://www.microsoft.com/en-us/ai/responsible-ai ac-
cessed 22 July 2021.
research.57 This research is instructive as it recognised, firstly, the most
commonly occurring values as transparency; justice and fairness;
non-maleficence; responsibility; privacy; beneficence; freedom and autonomy;
trust; sustainability; dignity; and solidarity. Secondly, the authors of the
said study observed that not one 'single ethical principle appeared to be
common to the entire corpus of documents', but there was an emerging 'global
convergence' around five values evidenced in more than half of the sources:
transparency, justice and fairness, non-maleficence, responsibility, and privacy.
57 A Jobin, M Ienca, and E Vayena, ‘Artificial Intelligence: the global landscape of ethics
guidelines’ (arXiv:1906.11668) https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf
accessed 22 July 2021.
58 Ibid 6.
59 OECD AI Policy Observatory https://www.oecd.ai/dashboards?selectedTab=countries ac-
cessed 22 July 2021.
60 See Sharmila Nair, ‘Gobind’s ministry working on a national data and AI policy’ (The Star,
12 September 2019) https://www.thestar.com.my/tech/tech-news/2019/09/12/
gobind039s-ministry-working-on-a-national-data-and-ai-policy accessed 22 July 2021;
MDEC, ‘Government, Public Policy, and Sustainable Business’ https://mdec.my/about-
malaysia/government-policies/ accessed 22 July 2021.
61 See OECD AI Policy Observatory https://www.oecd.ai/.
62 INSEAD, the Adecco Group, and Google Inc, The Global Talent Competitiveness Index
2020: Global Talent in the Age of Artificial Intelligence (2020) https://www.insead.edu/sites/
default/files/assets/dept/globalindices/docs/GTCI-2020-report.pdf accessed 22 July 2021.
There are clauses within several of the international frameworks that allow
variation in the adoption of guidelines at the national level. Jobin et al allude to
the 'significant divergences' that emerged from their study of the guidelines. The
authors also mention the need for inter-governmental harmonisation and
cooperation, but emphasise that 'it should not come at the costs of obliterating
cultural and moral pluralism over AI', highlighting the challenge of balancing
harmonisation with 'cultural diversity and moral pluralism'.63
Wright and Schultz64 raise the concern that despite the advancement of AI,
there is little understanding of the ethical issues of business automation and AI,
in particular, who will be affected and how the various parties, such as
labourers and nations, will be affected. Integrating stakeholder theory and
social contracts theory, the authors identify nations as stakeholders in order to
clarify and assess the cultural and ethical implications of business automation
for stakeholders ranging from labourers to nations. This is not, however,
addressed in terms of taking into consideration national values and interests.
63 A Jobin, M Ienca, and E Vayena, ‘Artificial Intelligence: the global landscape of ethics
guidelines’ (arXiv:1906.11668) 16 https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.
pdf accessed 22 July 2021.
64 S A Wright and A E Schultz, ‘The rising tide of artificial intelligence and business
automation: Developing an ethical framework’ (2018) 61(6) Business Horizons 823-832.
In assessing the risks of the use of AI, there is a need to adopt a broader
spectrum of values beyond those based on ethics. Mantelero67 offers a new
perspective with the HRESIA model (an acronym for Human Rights, Ethical
and Social Impact Assessment), which includes fundamental rights. The model
goes beyond other models that focus narrowly on data-processing issues and
concerns. Further, Weber's68 Symbiotic Relationship model builds on the
HRESIA, focusing on the principles of self-determination and
non-discrimination in furtherance of a more inclusive multi-stakeholder
approach. The interests of various stakeholders are prevalent in these models,
which overlap ethical values with legal standards. However, the models
referenced above again make no mention of unique considerations based on
national values. The nation is merely a stakeholder, not the craftsman of the
policy and framework that governs AI.
69 See, Paola Ricaurte, ‘Data Epistemologies, Coloniality of Power, and Resistance’ (2019)
Television & New Media 1-16; P Ricaurte, N Couldry and U Mejias, ‘Data Colonialism:
Rethinking Big Data’s Relation to the Contemporary Subject’ (2018) Television and New
Media 1-14.
70 A Jobin, M Ienca, and E Vayena, ‘Artificial Intelligence: the global landscape of ethics
guidelines’ (arXiv:1906.11668) 17 https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.
pdf accessed 22 July 2021.
Placing national interests and policies as informing and directing the values
to be included in a national AI ethical framework is essential. This
consideration will weigh heavily on policy-makers drafting a framework for
Malaysia: one that will serve as an accepted normative instrument which not
only draws on the values and principles articulated in the ethical frameworks
developed by the various international and regional organisations, including
leading tech companies, but also places a strong emphasis on Malaysian
national values.
71 Roger Clarke, ‘Principles and business processes for responsible AI’ 2019 35(4) Computer
Law & Security Review 410-422. See also, Sharmila Nair, ‘Gobind’s ministry working on
a national data and AI policy’ (The Star, 12 September 2019) https://www.thestar.com.
my/tech/tech-news/2019/09/12/gobind039s-ministry-working-on-a-national-data-and-
ai-policy accessed 22 July 2021; MDEC, ‘Government, Public Policy, and Sustainable
Business’ https://mdec.my/about-malaysia/government-policies/ accessed 22 July 2021.