AN AI ETHICAL FRAMEWORK FOR MALAYSIA: A MUCH-NEEDED GOVERNANCE TOOL FOR REDUCING RISKS AND LIABILITIES ARISING FROM AI SYSTEMS

by

DR JASPAL KAUR SADHU SINGH


and

DARMAIN SEGARAN1
With new technologies, we proceed through a series of milestones in terms of
their lifecycle. There is a trajectory of invention, approval and adoption,
exploitation, and finally, regulation. Black and Murray identify the stages or
lifecycle of development of disruptive technologies touching on points where
regulation becomes part of these stages — either before, at the point of, or after,
commercial exploitation.2 The point in a technology's lifecycle at which it becomes legally regulated has differed over time. Historically, there is evidence of early intervention even before a technology's use became prevalent, as in the case of radio communication. Conversely, legal
regulation of technological innovation may take place at a later stage when
there is a proliferation of the invention and when it enters into the public
domain.3 Regulation is justified as proliferation of the use of technological
innovation may present instances of documented risks that require managing.
Black and Murray’s allusion to ethical debates on the development and
deployment of technologies contextualises the debates on the regulation of
artificial intelligence (‘AI’), as does this paper. Ethical debates often predate
regulatory initiatives and the lifecycle of AI is no exception. If the risks and
challenges arising from the design and use of AI require managing,
governance frameworks or processes can be introduced in place of or prior to

1 Convenors of the AI, Law and Ethics Series. Dr Jaspal Kaur Sadhu Singh is Senior Lecturer
at the Faculty of Law, HELP University, and Mr Darmain Segaran is the Founder of
Segaran Law Chambers & Adjunct Fellow, Faculty of Law, HELP University. They are both
founding members of AI Doctrina, a knowledge portal to drive the conversation and create
awareness on the intersection of law and ethics with artificial intelligence (AI): https://www.ai-doctrina.info/.
2 J Black and A Murray, ‘Regulating AI and Machine Learning: Setting the Regulatory
Agenda’ (2019) 10(3) European Journal of Law and Technology 20 https://ejlt.org/index.
php/ejlt/article/view/722/980 accessed 20 July 2021.
3 Ibid.

legal regulation. AI that is trustworthy and responsible can be developed and deployed both to promote innovation and to serve altruistic ends in societal and economic development. The accelerated use of AI, whilst yielding benefits, must be compatible with the value-based principles within these governance frameworks.
A 'framework' is defined as 'a particular set of rules, ideas, or beliefs' which is used 'in order to deal with problems or to decide what to do'.4 The ethical
framework for AI serves this precise function. The framework applies
principles and values, whether based on ethics and morality or human rights, to
assess and measure the uses of AI and the outcome of these uses when these
result in risks, harmful practices, and negative consequences for individuals
and society. By identifying and mitigating these risks and harms in the AI’s
design and deployment, developers and deployers make decisions for the
design and use of the AI for which they will be responsible and accountable.
In this paper, the authors propose that the adoption of a governance
framework in the development and deployment of AI is essential in managing
risks and liabilities arising from the use of AI. The authors further propose that
the developers and deployers of AI systems adopt self-governance measures
guided by principles espoused in an AI ethical framework, a model
predominantly preferred by countries, as opposed to a legal framework to
regulate AI. The authors are aware that legislative initiatives are being proposed in several jurisdictions, notably by the European Commission, whose proposal has received much attention.5 The European Commission released its proposal for harmonising rules on AI, calling it the 'Artificial Intelligence Act'.
Regulation by law is not the remit of this paper but it will certainly be discussed
in future papers.
An ethical framework serves as a precursor to the emerging area of AI, Law,
and Ethics, helping governments to design national legislation in the future.
Whilst the use of AI improves our lives by creating unprecedented opportunities, it also raises new ethical dilemmas for our society, arising fundamentally from the risks of its use. AI and its capacity for algorithmic regulation, whether reactive or predictive, attracts misgivings

4 COBUILD Advanced English Dictionary (Harper Collins) https://www.collinsdictionary.com/dictionary/english/framework accessed 22 July 2021.
5 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (Document 52021PC0206) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 accessed 21 August 2021.

and manifold concerns. The adoption of an ethical framework is a vital step in committing to core principles by design, where ethical consideration is integrated into the process at the time the algorithm is written and trained. There are use cases evidencing good governance integrated by organisations into their design processes and systems when developing technological solutions that harness AI capabilities for their clients.6 These include essential internal governance structures such as the establishment of oversight committees, the creation of roles, and the drawing up of data governance processes that embed accountability in the algorithm's decision-making. If these governance mechanisms and assessment tools are based on a standard-setting ethical framework, they will build confidence in the use of AI systems by providing assurance that steps have been taken to minimise AI-related concerns. A failure to assess the development and deployment of an AI algorithm against such a framework may result in legal liability and accountability.
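To illustrate, in a minimal and purely hypothetical sketch, how such internal governance structures and assessment tools might be operationalised, the following Python fragment gates a model's deployment on the presence of governance artefacts of the kind described above. All record fields, names and checks are the authors' assumptions, not any published framework's requirements.

```python
# Hypothetical illustration only: fields, names and checks are assumptions,
# not requirements drawn from any published framework or vendor tooling.
from dataclasses import dataclass


@dataclass
class GovernanceRecord:
    """Artefacts an oversight committee might require before deployment."""
    model_name: str
    ethics_committee_signoff: bool = False
    data_governance_review: bool = False
    bias_assessment_done: bool = False
    accountable_owner: str = ""  # named role answerable for outcomes


def release_gate(record):
    """Return unmet governance requirements; an empty list permits deployment."""
    unmet = []
    if not record.ethics_committee_signoff:
        unmet.append("ethics committee sign-off missing")
    if not record.data_governance_review:
        unmet.append("data governance process not reviewed")
    if not record.bias_assessment_done:
        unmet.append("no bias/fairness assessment on record")
    if not record.accountable_owner:
        unmet.append("no accountable owner named")
    return unmet


record = GovernanceRecord(model_name="credit-scoring-v2",
                          ethics_committee_signoff=True)
blockers = release_gate(record)
print("Deploy" if not blockers else f"Blocked: {blockers}")
```

The design point is that the framework's principles become concrete, checkable preconditions for which a named person or committee is answerable.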

Not all countries favour a legal regulatory regime; in order to be seen as promoting innovation and growth of the AI sector, many opt instead for policies and national AI ethical frameworks which serve to promote self-governance.7 In sum, a framework provides the principles we want to see in AI tools: that they be inherently trustworthy and responsible, and intrinsically transparent and explainable. Therefore, when developers build and create AI systems, they must begin by understanding how ethical considerations matter and, essentially, the ramifications of omitting these considerations.

6 See Info-communications Media Development Authority (IMDA) and Personal Data Protection Commission Singapore (PDPC), 'Compendium of Use Cases: Practical Illustrations of the Model AI Governance Framework' (2020) https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgaigovusecases.pdf accessed 21 August 2021.
7 See, for instance, Aimee Chanthadavong, 'Australia's digital minister avoiding legislating AI ethics and will remain voluntary framework' (ZDNet, 28 July 2021) https://www.zdnet.com accessed 23 August 2021. For further variations in AI strategies, see S Fatima, KC Desouza, GS Dawson and JS Denford, 'Analyzing artificial intelligence plans in 34 countries' (TechTank, Brookings Institution, 13 May 2021) https://www.brookings.edu/blog/techtank/2021/05/13/analyzing-artificial-intelligence-plans-in-34-countries/ accessed 23 August 2021.

The importance of such frameworks is evidenced by the fact that, in the last ten years, approximately 100 proposals for AI principles have been published, with studies identifying which principles are most cited.8
Some of the most cited frameworks are referenced in this article, including frameworks or declarations of principles that govern AI systems (or 'intelligent and autonomous technical systems', the term used by the IEEE), published by the IEEE, the OECD, the European Commission, the Future of Life Institute, UNESCO and the WHO.

RISKS AND ISSUES ARISING FROM AI SYSTEMS


The growth of AI is understandable as its use improves lives by creating opportunities, but it raises several questions and challenges which require careful consideration, particularly when the development and deployment of AI raise ethical dilemmas. As AI tools increasingly permeate various aspects of society (inter alia, the insurance sector, immigration, law enforcement, education, and communities through Smart City initiatives), streamlining standards and values will have a direct impact, both positive and negative, on the individuals or communities affected by these tools.
In a previous paper,9 the authors spoke of the concerns around the use of AI,
one of which was that to a large extent the development of AI systems lies in the
hands of tech companies and IT professionals who write algorithms for AI
systems that use volumes of data to train the AI to make decisions. The use of
these algorithms will control the decisions that are made for individuals
resulting in a type of social ordering. This social ordering leads to fears which include: increased surveillance; private sector use of Big Data analytics; erosion of informational privacy; a lack of due process, devoid of respect for fairness and equality principles, in decision-making that impacts individuals; the delegation of decision-making to automation, resulting in biased, non-explainable and non-interpretable decisions; and finally, a lack of democratic accountability of those who possess algorithmic power. These fears lead us to
argue that we need to regulate AI algorithms through the introduction of

8 See Y Zeng, E Lu and C Huangfu, 'Linking artificial intelligence principles' (the AAAI Workshop on Artificial Intelligence Safety, Honolulu, 2019) https://arxiv.org/ftp/arxiv/papers/1812/1812.04814.pdf accessed 22 August 2021; See also A Jobin, M Ienca and E Vayena, 'Artificial Intelligence: the global landscape of ethics guidelines' (arXiv:1906.11668) https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf accessed 22 July 2021.
9 JK Sadhu Singh and D Segaran, ‘AI, Law and Ethics: The Emergence of a New Discourse
in the Legal Thoroughfare’ [2020] 5 MLJ i.

soft-governance measures, beginning with the promotion of a set of value-based principles in an AI National Ethical Framework that can be introduced into the life-cycle of an AI system and by design.

The issue of trust is at the forefront of policy-makers' concerns as they attempt to define and frame national policies. Addressing the lack of trust is essential, as distrust may become an impediment to the use of AI and hence to its altruistic potential and utility. This distrust raises ethical dilemmas when different forms of AI (neural networks and machine learning algorithms) are deployed in critical sectors such as finance, social care, transportation, healthcare, police enforcement, and the justice process.

The most prominent risk is that of 'placing' decisions that impact people in black boxes. An AI system processes data through its algorithm, which produces a prediction or outcome. The designers of the algorithm may not understand how the variables are combined by the algorithm to make that prediction or produce an outcome. This is because 'black box predictive models can be such complicated functions of the variables that no human can understand how the variables are jointly related to each other to reach a final prediction'.10 Neural networks may contain many 'sharp corners' and may achieve objectives that the designer did not think about. Kearns and Roth explain that 'complicated, automated decision-making that can arise from machine learning has a character of its own, distinct from that of its designer. The designer may have had a good understanding of the algorithm that was used to find the decision-making model, but not the model itself '.11 Without the ability to understand the process by which a decision was made, the system lacks transparency and explainability. Individuals may challenge these decisions on the basis that they are biased and discriminatory, or inaccurate and flawed, as they are unable to understand how the algorithm arrived at its decision. The authors acknowledge the strides made to make AI more explainable (explainable AI) and the use of 'white

10 C Rudin and J Radin, 'Why Are We Using Black Box Models in AI When We Don't Need to? A Lesson from an Explainable AI Competition' (2019) Issue 1.2 Harvard Data Science Review https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6 accessed 22 August 2021. See also Tom Cassauwers, 'Opening the 'black box' of artificial intelligence' (Horizon, The EU Research & Innovation Magazine, 1 December 2020) https://ec.europa.eu/research-and-innovation/en/horizon-magazine/opening-black-box-artificial-intelligence# accessed 22 August 2021.
11 M Kearns and A Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design
(OUP 2020) 10–11.

boxes';12 these may reduce the degree of distrust in AI systems and their uses. It should be noted that explainable AI systems are still at an early stage of development. For example, in a white-box model, while a developer may know the model architecture and parameters, this knowledge alone will not make the model explainable.13
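The distinction can be made concrete with a small sketch. Assuming scikit-learn and wholly synthetic data (an illustration only, not any real use case), a logistic regression exposes how each variable moves the prediction through its coefficients, whereas a neural network's prediction is a composition of thousands of weights which, even though fully inspectable, do not by themselves explain how the variables jointly produced a given outcome.

```python
# Illustrative sketch of the black-box problem on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# White-box model: each coefficient states how a variable moves the prediction.
white_box = LogisticRegression().fit(X, y)
print("Inspectable coefficients:", white_box.coef_.round(2))

# Black-box model: architecture and parameters are all available below, yet
# knowing them does not explain how variables combine to reach a prediction.
black_box = MLPClassifier(hidden_layer_sizes=(50, 50),
                          max_iter=1000, random_state=0).fit(X, y)
n_params = sum(w.size for w in black_box.coefs_) + \
           sum(b.size for b in black_box.intercepts_)
print("Number of learned parameters:", n_params)
print("Predictions for the same input:",
      white_box.predict(X[:1]), black_box.predict(X[:1]))
```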
The frameworks and documents espousing AI ethical principles often make
pronounced references to the risks around AI whilst recognising the altruistic
and economic benefits of AI. The OECD’s Recommendation of the Council
on Artificial Intelligence identifies that alongside benefits ‘AI also raises
challenges for our societies and economies, notably regarding economic shifts
and inequalities, competition, transitions in the labour market, and
implications for democracy and human rights'.14 Equally, the IEEE's concerns about intelligent and autonomous technical systems 'designed to reduce human intervention in our day-to-day lives…impact on individuals and societies' include the 'potential harm to privacy, discrimination, loss of skills, economic impacts, security of critical infrastructure, and the long-term effects on social well-being'.15 The European Commission's High-Level Expert Group on Artificial Intelligence highlighted a non-exhaustive list of concerns, which included the identification and tracking of individuals with AI, the use of covert AI systems, the use of AI-enabled citizen scoring in violation of fundamental rights, and the development of lethal autonomous weapon systems.16
The authors highlight instances of these concerns below with reference to
outcomes in court decisions and findings from research studies.

12 C Rudin and J Radin, ‘Why Are We Using Black Box Models in AI When We Don’t Need
to? A Lesson from an Explainable AI Competition’ (2019) Issue 1.2 Harvard Data Science
Review https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6 accessed 22 August 2021.
13 A Das and P Rad, ‘Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey’ (arXiv 2006.11371v2) https://arxiv.org/abs/2006.11371 accessed 23
August 2021.
14 The Organisation for Economic Co-operation and Development, Recommendation of the Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 accessed 22 July 2021.
15 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
16 European Commission, Ethics Guidelines for Trustworthy AI: High-Level Expert Group on Artificial Intelligence, 33-34 https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai accessed 22 August 2021.

(a) Invasion of privacy

One major concern arising from the use of AI technology is the invasion and
erosion of privacy. In 2020, the Court of Appeal of England and Wales17 in the
Edward Bridges case had to consider the lawfulness of the use of automated
facial recognition technology (AFR) in a pilot project by the South Wales Police
Force (SWP). Bridges brought a claim for judicial review. He was in the vicinity
of two areas where the technology was deployed. Bridges contended that his
images were recorded, although the images were later deleted and Bridges did
not appear on SWP’s watch list. The Court of Appeal allowed the appeal on
three of the five grounds. The first successful ground was that the legislation
and local policies relied on by SWP provided no clear guidance on the use of
the AFR and as to who could be placed on a watch list. This was in breach of art 8 of the European Convention on Human Rights, which protects the right to respect for private and family life. The court went further to say, on the second successful
ground, that the lack of guidance afforded excessively broad discretion in the
determination of the police officers to meet the standard required by art 8(2)
which sets out the grounds of any justified interference with the Convention
right. Consequently, the court concluded that the SWP had provided a deficient 'data protection impact assessment' (DPIA), as required by s 64 of the UK Data Protection Act 2018, because the deployment infringed art 8. On the third successful ground, the Court found that there was non-compliance with the Public Sector Equality Duty (PSED) under s 149 of the UK Equality Act 2010. The PSED's purpose is to ensure that public authorities consider the potentially discriminatory impact of a technology, and the court found that the SWP erred by not taking reasonable steps to enquire whether the AFR software was biased on racial or sex grounds. This assessment is vital even though the appellate court acknowledged that there was no clear evidence that the AFR was in fact biased on the grounds of race or sex, or both.

(b) Bias and discrimination

Concerns around biased and discriminatory AI have been raised in innumerable areas. Regulators and enforcement agencies are most conflicted on the use of facial recognition software that is biased and erroneous in identifying perpetrators. In studies undertaken, research findings have concluded that, in facial recognition technology, darker-skinned females are less likely to be

17 R (on the application of Edward Bridges) v The Chief Constable of South Wales Police & Ors
[2020] EWCA Civ 1058.

accurately identified.18 In the financial sector, credit scoring based on a predictive AI algorithm may reduce a large number of low-income wage earners to a black box of negative scoring.19 The use of AI by social media companies has proven to be biased20 or erroneous.21
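A minimal sketch of the kind of audit behind such findings: measure the system's accuracy separately for each demographic group. The data and error rates below are synthetic, and the disparity is built in purely to demonstrate the measurement, not to report any finding.

```python
# Synthetic subgroup-accuracy audit; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["lighter-skinned", "darker-skinned"], size=n, p=[0.7, 0.3])
truth = rng.integers(0, 2, size=n)  # 1 = a correct identity match exists

# Simulate a recognition system whose error rate differs by group.
error_rate = np.where(group == "lighter-skinned", 0.05, 0.20)
flipped = rng.random(n) < error_rate
prediction = np.where(flipped, 1 - truth, truth)

for g in ["lighter-skinned", "darker-skinned"]:
    mask = group == g
    accuracy = (prediction[mask] == truth[mask]).mean()
    print(f"{g}: accuracy {accuracy:.1%} over {mask.sum()} samples")
# A material gap between the two accuracies is the kind of evidence that
# grounds claims of biased facial recognition.
```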

The most troubling use of AI, raising bias and discrimination as well as the problem of the opacity of AI algorithms, is in the area of the administration of justice. In the US, an AI algorithm called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is used for predicting recidivism; it came under scrutiny in the decision of the Wisconsin Supreme Court in State v Loomis.22 It is used by US criminal
judges in some states when assessing the recidivism risk of defendants or
convicted persons, in decisions on pre-trial detention, sentencing, or early
release. COMPAS estimates the risk of recidivism based on both an interview
with the offender and information from the offender’s criminal history.23 The
flaw with COMPAS is that by utilising data from the past, it systematically
discriminates by overestimating recidivism among African American
defendants compared to Caucasian Americans. The court in Loomis found that the trial court's use of COMPAS in sentencing did not violate the defendant's due process rights even though the methodology that produced the assessment was disclosed neither to the court nor to the defendant. The methodology of the software is a trade secret and is essentially not subject to the scrutiny of the courts. This raises issues of the explainability and transparency of the algorithm and doubts as to how it arrives at its assessment of the risk of recidivism. Justice Bradley,24 however, cautioned that judges relying on the algorithm's risk assessments must explain the other factors, considered over and above the assessment, that support the sentence meted out by the court. Justice Abrahamson25 raised similar concerns when she added that a more extensive

18 Tom Simonite, 'The Best Algorithms Struggle to Recognize Black Faces Equally' (WIRED, 22 July 2019) https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/ accessed 22 August 2021.
19 Aaron Klein, ‘Reducing Bias in AI-based Financial Services’ (Brookings Institution, 10 July
2021) https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/
accessed 22 August 2021.
20 BBC Tech, ‘Twitter algorithm prefers slimmer, younger, light-skinned faces’ (BBC News, 11
August 2021) https://www.bbc.com/news/technology-58159723 accessed 22 August
2021.
21 Craig Langran, ‘Why was my tweet about football labelled abusive?’ (BBC News, 18 July
2021) https://www.bbc.com/news/technology-57836409 accessed 22 August 2021.
22 881 N W 2d 749 (Wis 2016)(US).
23 Ibid 754.
24 881 N W 2d 749 (Wis 2016)(US) 765.
25 881 N W 2d 749 (Wis 2016)(US) 774.

record from sentencing courts would have been required on ‘the strengths,
weaknesses, and relevance to the individualized sentence being rendered of the
evidence-based tool'. This case has attracted much criticism26 as it raises questions about the disregard for the right to due process. The US Supreme
Court has established that the due process related to a fair sentencing procedure
is ‘the right to be sentenced on the basis of accurate information’27 and to be
given the means to ascertain the correctness of the information.28 Due process
is linked to the issue of transparency and explainability of the algorithm as to
how it arrived at the assessment it did.
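The 'overestimation' criticism can be expressed as a simple error-rate comparison: among people who did not in fact reoffend, how often does the tool label each group high risk? The sketch below uses entirely synthetic numbers, not COMPAS data, purely to show the computation.

```python
# Synthetic false-positive-rate comparison; all figures are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["A", "B"], size=n)
reoffended = rng.random(n) < 0.35

# A hypothetical tool whose high-risk label fires more readily for group B.
p_high_risk = np.where(reoffended, 0.65,
                       np.where(group == "B", 0.40, 0.20))
high_risk = rng.random(n) < p_high_risk

for g in ["A", "B"]:
    mask = (group == g) & ~reoffended      # people who did NOT reoffend...
    fpr = high_risk[mask].mean()           # ...but were labelled high risk
    print(f"group {g}: false positive rate {fpr:.1%}")
```

An unequal false positive rate across groups is precisely the kind of disparity that a due-process challenge would seek to probe, and that trade-secret opacity prevents the defence from examining.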

(c) Opacity and automated decision-making


Concerns around the opacity of algorithms and the need for interpretability, explainability, and transparency in the workings of an AI algorithm in arriving at decisions continue to be raised for the courts' determination. Last year, two decisions that ran contrary to each other fuelled discussions around explainability and transparency. The Court of The Hague in the Netherlands, in the Systeem Risico Indicatie (SyRI) case, held that the Government's use of SyRI's algorithm lacked transparency in ensuring that there was no bias or discrimination.29 On the contrary, the French Constitutional Court in the Parcoursup decision was reluctant to decide that there was a duty of explainability as to how an algorithm arrived at its decision.30 Commentators have distinguished these two cases on several factors
which include the impact on rights of citizens, the extent of human review, the
wealth of data analysed, and the incompatibility of the algorithm with
constitutional values. In Parcoursup’s case, the algorithm was used nationally to
place students in educational institutions for undergraduate courses whereas,
in the Systeem Risico Indicatie case, the algorithm was used to identify individuals

26 'State v Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing' (2017) 130 Harvard Law Review 1530 https://harvardlawreview.org accessed 22 August 2021; See also L Han-Wei, L Ching-Fu and C Yu-Jie, 'Beyond State v Loomis: Artificial Intelligence, Government Algorithmization, and Accountability' (2019) 27(2) International Journal of Law and Information Technology 122–141.
27 Townsend v Burke 334 US 736 (1948) (US).
28 State v Skaff 152 Wis 2d 48, 53; 447 N W 2d 84 (Ct.App.1989), 58 (US).
29 R Allen and D Masters, ‘SYRI: Think Twice Before Risk Profiling’ (AI Law Hub, 30 March
2020) https://ai-lawhub.com/2020/03/30/syri-think-twice-before-risk-profiling/ accessed
22 August 2021.
30 R Allen and D Masters, 'French Parcoursup Decision' (AI Law Hub, 16 April 2020) https://ai-lawhub.com/2020/04/16/french-parcoursup-decision/ accessed 22 August 2021; See also 'Parcoursup: Decision No 2020-834 QPC' (AI Law Hub) https://ai-lawhub.com/parcoursup-decision-no-2020-834-qpc/ accessed 22 August 2021.

who were at high risk of committing fraud related to social security, employment, and taxes. Some may argue that the ramifications and the impact on the individual resulting from the use of the algorithm, and the predicted outcome that followed, may be another consideration bearing on the extent of the obligation to explain the algorithm's decision-making process. If so, this argument becomes an implausible reconciliation of the two decisions, given that the Italian Supreme Court decided earlier this year that the opacity of an algorithm used to produce 'reputational ratings' impacted the validity of the alleged consent to the processing of personal data.31

With the ubiquitous use of AI, the courts will play an increasing role in handling disputes in areas of liability, reputational harm, due process, harmful data practices, surveillance and invasion of privacy, and discrimination, to name a few. In the near future, the design and functioning of AI will come under greater scrutiny, and AI developers and deployers will be held accountable on the questions that require adjudication in such cases.

DEVELOPMENT AND ADOPTION OF ETHICAL FRAMEWORKS

To manage the risks and concerns arising from AI and its impact on society, ethical frameworks have been adopted, or are in the nascent stages of adoption, at the national, regional, and international levels. These frameworks comprise a set of values to be embedded in the design and development process and the deployment stage of the AI system. In a sense, these frameworks provide a litmus test for assessing the extent to which an AI is 'responsible' or 'trustworthy' (or synonyms of these adjectives), describing an AI that holds up to the standards of accountability required of it. This accountability is to be borne by the developer and the deployer. Recurring phrases in the titles of these frameworks include 'ethical guidelines' and 'ethical principles'.

The IEEE in the second iteration of its guidelines emphasises that to attain
the full benefit of AI technologies, AI must be ‘aligned with our defined values
and ethical principles’ and this can be done with ethical frameworks that can
‘guide and inform dialogue and debate around the non-technical implications

31 R Allen and D Masters, 'A Clear Ruling from the Italian Supreme Court: Consent Without Transparency Is Legally Worthless, Especially Where an AI System Is Used to Assess Credibility and Reputation' (AI Law Hub, 16 April 2020) https://ai-lawhub.com accessed 22 August 2021.

of these technologies’.32 In a similar tone, the OECD recognises the ‘need for
a stable policy environment that promotes a human-centric approach to
trustworthy AI, that fosters research, preserves economic incentives to
innovate, and that applies to all stakeholders according to their role and the
context’. In another framework, UNESCO recognises that frameworks based
on ethical values and principles can help ‘shape the development and
implementation of rights-based policy measures and legal norms, by providing
guidance where the ambit of norms is unclear or where such norms are not yet
in place due to the fast pace of technological development combined with the
relatively slower pace of policy responses’.33
Most of these frameworks are aspirational, holding that the application of ethical values and principles can help overcome the challenges and reap the opportunities linked to AI technologies.

(a) Why ethics? Why not?


The misplaced belief that philosophy and ethics have fallen into obsolescence is disproved by the flourishing discourse of applied ethics and technology.
By way of background, the authors felt it essential to provide historical context on the intersection between ethics and technology. The field of cyberethics, a branch of applied ethics, emerged with the writings of Herman Tavani,34 which formed the authors' foundational knowledge on the subject.
Tavani defined it as ‘the study of moral, legal, and social issues involving
cybertechnology’ which ‘examines the impact of cybertechnology on our
social, legal, and moral systems, and it evaluates the social policies and laws that
have been framed in response to issues generated by its development and use’.
Tavani also offers a definition of cybertechnology as a ‘wide range of computing
and communication devices, from stand-alone computers to connected, or
networked, computing and communication technologies'. Other related fields of technology and ethics, such as 'computer ethics' and 'information ethics', predate 'cyberethics', and Tavani recognised the emerging fields of 'robot ethics'

32 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
33 UNESCO, Preliminary report on the first draft of the Recommendation on the Ethics of
Artificial Intelligence https://unesdoc.unesco.org/ark:/48223/pf0000374266 accessed 22
August 2021.
34 Herman T Tavani, Ethics and Technology: Controversies, questions, and strategies for ethical
computing (4th edn, Wiley 2013).

and 'machine ethics'. The role of applied ethics in technology is certainly not a nascent subject; it is indeed an evolving one, as Tavani has alluded to. Ethical theories can act as foundational concepts upon which to make decisions about the risks and impact of technological innovation. In applying the values found in these theories, innovators are to ask questions about their actions or inactions and the consequences thereof.

Applied ethics found its way into the professional standards, codes, and integrity of the IT profession. The requirement for ethical consideration has formed part of the professional ethics of the IT specialist, traceable to the adoption of professional codes of ethics by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers Computer Society (IEEE-CS). Early scholars viewed these codes as regulating professional members35 or as playing an essential function in inspiring (by identifying values and ideals), educating, guiding, demanding accountability, and enforcing the values of the code.36
In a similar vein, ethical frameworks aim to 'regulate', or to operate as these codes do, in ensuring that the professionals who write AI algorithms weave these values into the design of the algorithm. Applying ethical theories to dilemmas arising from the use of AI systems provides a systematic approach to the moral responsibility to be borne by the data scientists writing algorithms and the users of those algorithms. Moral responsibility leads to the moral agency
of one’s decisions and actions or even inactions. To fill the policy vacuum,
designers and deployers of AI should assess the design and use of the algorithm
by applying moral values to prevent harmful practices and consequences. This
will involve arguments and disagreements. Irrespective of the outcome, a
meaningful and constructive discourse will contribute to improving the design
and the decision-making process. These debates are rife as discussed earlier and
later in this paper.
Moral responsibility is inextricably linked to legal liability and the concept
of accountability. Moral irresponsibility may or may not lead to legal liability
depending on whether the action resulting from the moral irresponsibility falls
within the ambit of the law. For instance, the use of surveillance methods using facial recognition software in the EU must meet the standards of the European Convention on Human Rights or the General Data Protection Regulation

35 D Gotterbarn and K Miller, ‘The Public is Priority: Making Decisions Using the Software
Code of Ethics’ (2009) 42(6) IEEE Computer 66–73.
36 TW Bynum, T Ward, and S Rogerson (eds), Computer Ethics and Professional Responsibility
(Blackwell 2004).

(GDPR). Hence, any moral decision taken when using self-governance measures such as an AI Ethical Framework must include an assessment of
potential legal liability if there is the risk of contravening the provisions of these
laws. However, in a legal vacuum, AI ethical frameworks may be the sole metric
in determining whether the development and deployment of AI tools have any
deleterious effect.

AI Ethical Frameworks, when adopted by organisations that design and use AI tools, operationalise the notions of responsibility and accountability. Accountability is a concept that is viewed more broadly than responsibility. Tavani37 refers to Nissenbaum, who distinguishes the two. Where
responsibility constitutes part of the coverage of a ‘robust and intuitive notion
of accountability’,38 answerability lies with a person, a group, or an entire
organisation for ‘not only malfunctions in life-critical systems that cause or risk
grave injuries and cause infrastructure and large monetary losses, but even for
the malfunctions that cause individual losses of time, convenience, and
contentment’.39 Where individual AI professionals may be responsible for the
design of the algorithm, organisations that decide to make available the AI
tools to deployers, or organisations that then deploy these tools in their
operations, will be held accountable. To ensure that principles of accountability are upheld, organisations have to establish self-governance measures supported by additional structures and mechanisms, such as the creation of an Ethics Committee as a governance body, assessing these tools by employing an AI Ethical Framework as a yardstick.

(b) The developers of frameworks

As stated earlier in the paper, AI Ethical Frameworks have been drafted at the national, regional, and international levels, as well as by corporations (mainly tech corporations) either developing or using AI systems. At the regional or
international level, the organisations developing these frameworks represent
states or interested parties with varied interests such as experts, think tanks,
research organisations, or representatives of industries.

37 Herman T Tavani, Ethics and Technology: Controversies, questions, and strategies for ethical
computing (4th edn, Wiley 2013) 118.
38 Helen Nissenbaum, ‘Computing and Accountability’ in John Weckert (ed), Computer
Ethics (Routledge 2007) 273–280, 274.
39 Ibid.

At the regional level, the European Commission has responded by developing The Ethics Guidelines for Trustworthy Artificial Intelligence (AI),40 which was presented to the Commission by the High-Level Expert Group on AI on 8 April 2019, after an extensive consultation stage. Several other organisations representing states at the regional level have made the development of such a framework part of an aspirational AI innovation plan. For example, the ASEAN Digital Masterplan 2025 sets out a list of 'desired outcomes'; as part of the second desired outcome, an 'enabling action', EO2.7, calls for the adoption of a 'regional policy to deliver best practice guidance on AI governance and ethics, IoT spectrum and technology', to promote a trusted ecosystem by developing a 'regional guide that helps address key governance and ethical issues when deploying AI solution…to promote understanding and trust'.41

At the international level, several organisations have taken the lead. The
Institute of Electrical and Electronics Engineers (the IEEE) started an initiative
in 2016 through its IEEE Global Initiative for Ethical Considerations in
Artificial Intelligence and Autonomous Systems which has led to two iterations
of its report titled Ethically Aligned Design: A Vision for Prioritising Human
Well-being with Autonomous and Intelligent Systems (A/IS)42 containing six
General Principles. The Future of Life Institute's Asilomar AI Principles43 followed in 2017 with an extensive set of 13 'ethics and values' principles. The OECD AI Principles were set out in the Recommendation adopted by the OECD Council in May 2019.44 Soon after, these principles were adopted by the G20.

40 European Commission, The Ethics Guidelines for Trustworthy Artificial Intelligence https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai accessed 20 July 2021.
41 ASEAN, ASEAN Digital Masterplan 2025 16, 20 https://asean.org/wp-content/uploads/2021/08/ASEAN-Digital-Masterplan-2025.pdf accessed 20 July 2021.
42 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 1, 2016) https://standards.ieee.org accessed 22 August 2021; See also, for
Version 2, IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and
Intelligent Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
43 Future of Life, Asilomar AI Principles https://futureoflife.org/ai-principles/ accessed 22 July
2021.
44 The Organisation for Economic Co-operation and Development, Recommendation of the Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 accessed 22 July 2021.

At the industry level, companies such as Microsoft45 and Google46 have developed their own frameworks. Industry- or profession-specific frameworks have also been established; for instance, in judicial decision-making utilising AI in predictive justice, judges look to the European Commission for the Efficiency of Justice (CEPEJ)'s European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment.47

The most recent of these frameworks is UNESCO's First Draft of the Recommendation on the Ethics of Artificial Intelligence, published in July of this year.48 The framework sets out overarching values that must be demonstrable in the life cycle of AI systems, supported by a set of principles.

Where ethical frameworks are being developed for sector-specific needs, the WHO has identified its own six key ethical principles for the use of AI in health in its most recent guidance report, Ethics and Governance of Artificial Intelligence for Health, published in June of this year.49 Each key ethical
principle is ‘a statement of a duty or a responsibility in the context of the
development, deployment and continuing assessment of AI technologies for
health’.50 The ethical principles are those considered to be most relevant for the
use of AI for health.

(c) The ethical principles and the most commonly occurring values

The frameworks comprise general overarching principles. There are variations between frameworks, with certain values identified as important and elevated in priority. To demonstrate the principles that often serve as benchmarks in these frameworks, a selection of guidelines and frameworks will be referenced.

45 Microsoft, Microsoft AI principles https://www.microsoft.com/en-us/ai/responsible-ai accessed 22 July 2021.
46 Google, Artificial Intelligence at Google: Our Principles https://ai.google/principles/ accessed
22 July 2021.
47 European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c accessed 22 July 2021.
48 UNESCO, Preliminary report on the first draft of the Recommendation on the Ethics of
Artificial Intelligence https://unesdoc.unesco.org/ark:/48223/pf0000374266 accessed 22
August 2021.
49 World Health Organisation, Ethics and governance of artificial intelligence for health https://www.who.int/publications/i/item/9789240029200 accessed 22 July 2021.
50 Ibid 22.

As a first example, according to the European Commission's The Ethics Guidelines for Trustworthy Artificial Intelligence (AI), a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG),51 Trustworthy AI should be, firstly, lawful (respecting all applicable laws and regulations); secondly, ethical (respecting ethical principles and values); and finally, robust (from a technical perspective while taking into
account its social environment). The Commission’s Guidelines provide a
scheme that comprises several layers — we will reference the first two —
‘Foundations of Trustworthy AI’ and ‘Realisation of Trustworthy AI’. The
foundations of Trustworthy AI identify and describe four ethical principles
premised on fundamental rights — respect for human autonomy; prevention
of harm; fairness; and, explicability. In the realisation of the Trustworthy AI
stage, these ethical principles are translated into seven key requirements which
the AI system must be assessed against throughout its life cycle. These are
human agency and oversight; technical robustness and safety; privacy and data
governance; transparency; diversity, non-discrimination and fairness; societal
and environmental wellbeing; and, accountability.
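As a hedged illustration of what being 'assessed against throughout its life cycle' could look like operationally, the sketch below records evidence against each of the seven requirements and flags gaps. The structure and evidence fields are the authors' own illustration, not the Commission's assessment list.

```python
# Minimal life-cycle self-assessment sketch; structure is illustrative only.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental wellbeing",
    "accountability",
]


def assess(evidence):
    """Map every requirement to recorded evidence, flagging gaps explicitly."""
    return {r: evidence.get(r, "NO EVIDENCE RECORDED") for r in REQUIREMENTS}


report = assess({
    "transparency": "model cards published; decision logs retained",
    "accountability": "named system owner; audit trail enabled",
})
for requirement, status in report.items():
    print(f"{requirement:45s} -> {status}")
```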

The OECD recommendations identify five complementary values-based 'principles for the responsible stewardship of trustworthy AI and calls on AI
actors to promote and implement them: inclusive growth, sustainable
development and well-being; human-centred values and fairness; transparency
and explainability; robustness, security and safety; and accountability’.52

The WHO guidelines include the principles ‘to protect human autonomy
in decision making; promote human well-being, human safety and the public
interest in order to prevent harm; to ensure transparency, explainability and
intelligibility; to foster responsibility and accountability; to ensure
inclusiveness and equity to encourage the widest possible appropriate,
equitable use and access; and promote AI that is responsive and sustainable’.53

The IEEE guidelines include ‘Human Rights: Ensure they do not infringe
on internationally recognized human rights; Well-being: Prioritize metrics of
well-being in their design and use; Accountability: Ensure that their designers

51 European Commission, Ethics guidelines for trustworthy AI https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai accessed 22 July 2021.
52 The Organisation for Economic Co-operation and Development, Recommendation of the Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 accessed 22 July 2021.
53 World Health Organisation, Ethics and governance of artificial intelligence for health 23–30
https://www.who.int/publications/i/item/9789240029200 accessed 22 July 2021.

and operators are responsible and accountable; Transparency: Ensure they operate in a transparent manner; and, Awareness of misuse: Minimize the risks of their misuse'.54

The Asilomar principles are probably among the most extensive, with 13 ethical principles. These include ensuring the safety of AI systems; failure transparency of AI systems when they cause harm; judicial transparency when AI systems are used in judicial decision-making; responsibility on the part of designers and builders of AI systems; value alignment in the design of AI systems with human values; compatibility of AI systems with human dignity, rights, freedoms, and cultural diversity; personal privacy when AI systems analyse and utilise data; liberty and privacy; the shared benefit of AI technologies; a shared economic prosperity created by AI systems; human control; non-subversion of social and civic processes upon which societal health depends; and the avoidance of an AI arms race.55

In an example of a corporate framework, Microsoft's Responsible AI56 refers to fairness (AI systems should treat all people fairly); inclusiveness (AI systems should empower everyone and engage people); reliability and safety (AI
should empower everyone and engage people); reliability and safety (AI
systems should perform reliably and safely); transparency (AI systems should
be understandable); privacy and security (AI systems should be secure and
respect privacy); and finally, accountability (AI systems should have
algorithmic accountability).

The values and principles distilled from these frameworks indicate some commonly shared values and concerns. There are distinctions, as different organisations may prioritise certain values and principles and may include those that are more specific to their concerns and relevant to mitigating the risks related to the industry or to the unique way in which AI tools are used in carrying out the organisation's operations.

Analysing the emerging themes of these frameworks, a convergence of values can be identified. A 2019 research paper published by researchers at ETH Zurich's Health Ethics & Policy Lab found that over 80 distinct AI ethical frameworks had been proposed in the five years preceding the

54 IEEE, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically
Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems (Version 2, 2017), 6 https://standards.ieee.org accessed 22 August 2021.
55 Future of Life, Asilomar AI Principles https://futureoflife.org/ai-principles/ accessed 22 July
2021.
56 Microsoft, Microsoft AI principles https://www.microsoft.com/en-us/ai/responsible-ai accessed 22 July 2021.

research.57 This research was very helpful as it recognised, firstly, the most commonly occurring values: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; sustainability; dignity; and solidarity. Secondly, the authors of the said study observed that no 'single ethical principle appeared to be common to the entire corpus of documents', but there was an emerging 'global convergence' around five values evidenced in more than half of the sources. These were transparency, justice and fairness, non-maleficence, responsibility, and privacy.

The use of the phrase 'global convergence' in the findings is misleading. Referring to Figure 1,58 a map of the world indicating the geographic distribution of issuers of ethical AI guidelines by the number of documents released, it is evident that there is no global convergence, as a large number of countries in the global south have yet to adopt any position on the ethical standards to be imposed or considered when developing and deploying AI systems. Several countries have published national AI policies and strategies;59 however, the drawing up of AI ethical frameworks has yet to be embarked upon, although there is an overall increase in the crafting and adoption of these frameworks.

AN AI ETHICAL FRAMEWORK FOR MALAYSIA


Hence, with issues of distrust rising comes the determination on the part of policy-makers to make AI a tool for societal good, as demonstrated by the efforts made in developing ethical frameworks. An AI Ethical National Framework will assist in guiding individuals, corporations, and organisations to embrace the consideration of ethics in the development and deployment of AI. It will set the tone for a more collaborative approach with the private sector in developing AI capabilities and will contribute to the dynamism, innovativeness, and competitiveness of the AI ecosystem.
Despite the initial announcements in 2017, and further announcements in 2019, of the plan to adopt a National AI Framework for Malaysia, there has been little, if any, progress ensuing from the said announcements, but there is

57 A Jobin, M Ienca, and E Vayena, ‘Artificial Intelligence: the global landscape of ethics
guidelines’ (arXiv:1906.11668) https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf
accessed 22 July 2021.
58 Ibid 6.
59 OECD AI Policy Observatory https://www.oecd.ai/dashboards?selectedTab=countries accessed 22 July 2021.

indeed a commitment to developing one.60 If such a framework is indeed in the pipeline, one of its flagship initiatives will be AI governance through the adoption of a National AI Ethical Framework. Owing to the policy vacuum resulting from Malaysia not having a national AI framework, there is no detailed and readily implementable guidance for organisations on addressing key ethical and governance issues when developing and deploying AI system solutions. A national AI framework is typically a policy document that aims to promote public understanding of and trust in AI systems, as discussed in the previous section of this article. Such frameworks provide the overarching policy and direction for positioning a nation to benefit from the AI revolution by fostering understanding of and confidence in AI systems. Within these frameworks is a policy position on AI governance.

To realise Malaysia's aspiration to attain a position as a globally recognised AI-ready nation, it must commit itself to adhering to international standards in AI, namely the ISO AI standards of ISO/IEC JTC 1/SC 42, a committee established in 2017 which enumerates six published ISO standards. The said committee has 30 members, categorised as participating and observing members respectively. The only ASEAN nation listed as a participating member is Singapore, which was ranked first in the AI Readiness Index conducted by Oxford Insights and the International Development Research Centre (IDRC) in 2019, and which published its National AI Strategy in 2017 with its second iteration in 2020. Indonesia and the Philippines are listed as observing members. Malaysia is not listed as a member.
The OECD AI Policy Observatory61 indicates Malaysia as having zero AI initiatives, in comparison to other ASEAN countries such as Singapore, which is listed as having 20 initiatives, Thailand with four, Vietnam with five, and Indonesia with one.
Although Malaysia is ranked 26th in the Global Talent Competitiveness Index 2020,62 to advance itself as a nation best positioned to benefit from the AI revolution, it has to be equally competitive in

60 See Sharmila Nair, 'Gobind's ministry working on a national data and AI policy' (The Star, 12 September 2019) https://www.thestar.com.my/tech/tech-news/2019/09/12/gobind039s-ministry-working-on-a-national-data-and-ai-policy accessed 22 July 2021; MDEC, 'Government, Public Policy, and Sustainable Business' https://mdec.my/about-malaysia/government-policies/ accessed 22 July 2021.
61 See OECD AI Policy Observatory https://www.oecd.ai/.
62 INSEAD, the Adecco Group, and Google Inc, The Global Talent Competitiveness Index 2020: Global Talent in the Age of Artificial Intelligence (2020) https://www.insead.edu/sites/default/files/assets/dept/globalindices/docs/GTCI-2020-report.pdf accessed 22 July 2021.

policy-making and establishing institutional frameworks to guide AI design and use, as AI reshapes economies and societies across the world.

Drawing up an AI Ethical Framework would realise one AI initiative and mark Malaysia's achievement of one vertical in its AI maturity.

OBSERVATIONS AND CONSIDERATIONS IN DRAWING UP A NATIONAL ETHICAL FRAMEWORK

Several observations can be drawn from these frameworks.

The first is the importance of accommodating cultural and national values.

There are clauses within several of the international frameworks that allow variation in the adoption of guidelines at the national level. Jobin et al allude to the 'significant divergences' that emerged from their study of the guidelines. The authors also mention the need for inter-governmental harmonisation and cooperation, while emphasising that 'it should not come at the costs of obliterating cultural and moral pluralism over AI', and highlight the challenge of achieving a balance between harmonisation and 'cultural diversity and moral pluralism'.63

Wright and Schultz64 raise the concern that, despite the advancement of AI, there is little understanding of the ethical issues of business automation and AI, in particular of who will be affected and how it will affect the various parties, such as labourers and nations. Integrating stakeholder theory and social contracts theory, the authors identify nations as stakeholders in order to clarify and assess the cultural and ethical implications of business automation for stakeholders ranging from labourers to nations. Even so, national values and interests are not taken into consideration.

The literature on AI places the effects of AI at the centre of the discussion and hence proposes forms of regulation adopting different

63 A Jobin, M Ienca and E Vayena, 'Artificial Intelligence: the global landscape of ethics guidelines' (arXiv:1906.11668) 16 https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf accessed 22 July 2021.
64 S A Wright and A E Schultz, ‘The rising tide of artificial intelligence and business
automation: Developing an ethical framework’ (2018) 61(6) Business Horizons 823-832.

regulatory models.65 Each nation's experience with the development and deployment of AI differs. For instance, whilst facial recognition software may
be utilised in police monitoring and criminal justice in the UK, the rest of
Europe is being extremely cautious.66 National interests and attitudes play a
fundamental role in crafting the national ethical framework. It is not a case of
one-size-fits-all.

The second is in relation to the adoption of human rights criteria in frameworks.

In assessing the risks of the use of AI, there is a need to adopt a broader spectrum of values aside from those based on ethics. Mantelero67 offers a new perspective with the HRESIA model (an acronym for Human Rights, Ethical and Social Impact Assessment), which includes fundamental rights. The model goes beyond other models that focus narrowly on data processing issues and concerns. Further, Weber's68 symbiotic relationship model builds on the HRESIA, focussing on the principles of self-determination and non-discrimination in furtherance of a more inclusive multi-stakeholder approach. The interests of various stakeholders are prevalent in these models, which appear to overlap ethical values with legal standards. However, the models referenced above again make no mention of unique considerations based on national values. The nation is merely a stakeholder, not the craftsman of the policy and framework that governs AI.

The third is recognising the lack of adoption in developing and under-developed countries — in particular, the global south.

65 M Büchi, E Fosch-Villaronga, C Lutz, A Tamò-Larrieux, S Velidi and S Viljoen, 'The chilling effects of algorithmic profiling: Mapping the issues' (2020) 36 Computer Law & Security Review 105367; Trevor JM Bench-Capon, 'Ethical approaches and autonomous systems' (2020) 281 Artificial Intelligence 103239; M J Neubert and G D Montañez, 'Virtue as a framework for the design and use of artificial intelligence' (Business Horizons, 16 December 2019).
66 See, Jennifer Rankin, 'European parliament says it will not use facial recognition tech' (The Guardian, 5 February 2020) https://www.theguardian.com/technology/2020/feb/05/european-parliament-insists-it-will-not-use-facial-recognition-tech accessed 22 August 2021; See also, Foo Yun Chee, 'EU privacy watchdogs call for ban on facial recognition in public spaces' (Reuters, 21 June 2021) https://www.reuters.com/technology/eu-privacy-watchdogs-call-ban-facial-recognition-public-spaces-2021-06-21/ accessed 22 August 2021.
67 Alessandro Mantelero, 'AI and Big Data: A blueprint for a human rights, social and ethical impact assessment' (2018) 34(4) Computer Law & Security Review 754-772.
68 Rolf H Weber, ‘Socio-ethical values and legal rules on automated platforms: The quest for
a symbiotic relationship’ (2020) 36 Computer Law & Security Review 105380.

Frameworks proposed by international organisations may not guarantee protection from the use of AI tools in Southern countries. The adoption of national-level frameworks is weak, as is apparent from the OECD AI Policy Observatory. Could it be that the established regional and international frameworks are suited to countries with a higher AI Maturity? Or perhaps the values espoused in these frameworks do not align with national aspirations or, to couch it in stronger terms, are repugnant to the normative values within a particular nation? This is not to say that there are no risks resulting from the use of AI tools in the global south. While the deployment of AI may not be as rampant as in Northern countries, Southern states have their own vulnerabilities. Global technology companies with investments, products, and services in Southern countries may contract with governments for the delivery of public services using citizens' data.69 This commodification of citizens' data in Southern countries has been referred to as 'data colonialism'.

Indolence in drawing up any form of guidance on the use of AI ethical frameworks may even exacerbate the harm caused by AI tools in legal systems that lack entrenched laws, such as data privacy protection, or safeguards that shield citizens from potential abuse arising from the use of such tools.

The fourth is the development of risk assessment approaches in business organisations.

We are already seeing the establishment of internal governance structures and measures to assess the development and deployment of AI systems, ensuring that ethical values and principles are incorporated into these structures and measures. Jobin et al refer to these as 'soft governance mechanisms such as Independent Review Boards (IRBs)' used to 'assess the ethical validity of AI applications'.70 A model of risk assessment and risk management should be adopted that ensures responsible AI by considering not only an organisation's interest but also that of prospective stakeholders.71
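
Purely as an illustration of this kind of internal risk assessment, the sketch below shows how an organisation might record AI risks against affected stakeholders and tier them for escalation to a review board. The stakeholder categories, 1-5 scoring scale, and tier thresholds are assumptions made for exposition; they are not elements prescribed by any of the frameworks discussed here.

```python
# A minimal, hypothetical sketch of a stakeholder-aware AI risk register.
# The stakeholder labels, scoring scale, and thresholds are illustrative
# assumptions, not requirements of any framework cited in this article.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str   # a harmful practice identified during design review
    stakeholder: str   # affected party: customers, employees, the wider public
    likelihood: int    # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int        # 1 (negligible) to 5 (severe) -- assumed scale

    def score(self) -> int:
        # A simple likelihood x impact product, a common risk-matrix heuristic.
        return self.likelihood * self.impact

    def tier(self) -> str:
        # Thresholds are placeholders; an organisation would calibrate its own.
        if self.score() >= 15:
            return "high: escalate to the internal review board before deployment"
        if self.score() >= 8:
            return "medium: mitigate, document, and monitor"
        return "low: record and monitor"

register = [
    RiskEntry("discriminatory outcomes in automated hiring", "job applicants", 3, 5),
    RiskEntry("re-identification from released training data", "data subjects", 2, 4),
]
for entry in register:
    print(f"{entry.stakeholder}: {entry.description} -> {entry.tier()}")
```

A register of this kind makes the organisation's accountability decisions auditable: each deployment decision can be traced back to a recorded risk, its affected stakeholders, and the mitigation chosen.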

69 See, Paola Ricaurte, 'Data Epistemologies, Coloniality of Power, and Resistance' (2019) Television & New Media 1-16; N Couldry and U Mejias, 'Data Colonialism: Rethinking Big Data's Relation to the Contemporary Subject' (2018) Television and New Media 1-14.
70 A Jobin, M Ienca, and E Vayena, 'Artificial Intelligence: the global landscape of ethics guidelines' (arXiv:1906.11668) 17 https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf accessed 22 July 2021.


THE NEXT STEP

Whilst it is proffered that the application of an ethics framework assists in the design and use of AI systems by minimising the negative effects and risks of their use, thereby ensuring trustworthy and responsible AI, existing AI ethics frameworks lack a directed and measured emphasis on aligning the value-based principles they espouse with national values that fit a nation's economic and AI maturity.

It is essential that national interests and policies inform and direct the values to be included in a national AI Ethical Framework. This consideration will weigh heavily on policy-makers in drafting a framework for Malaysia: an accepted normative instrument that not only draws on the values and principles articulated in the ethical frameworks developed by the various international and regional organisations, including leading tech companies, but also places a strong emphasis on Malaysia's national values.

The literature on AI Ethical Frameworks, firstly, focuses on the use of AI within organisations and, secondly, reduces the nation-state to a stakeholder. The authors propose to flip this model so that the nation-state takes the lead, as several countries have already done; the difference lies in drawing up a national framework that gives special consideration to any additional principle(s) unique to the nation's values and aspirations or, alternatively, deviates from already prescribed frameworks. The authors are in the midst of research to develop a theoretical model for the drawing up and adoption of a national AI Ethical Framework in which national values are matched against universal values, utilising the Rukun Negara and the Federal Constitution as the matrix against which to measure these values, so as to design a framework that is unique and aligned with national interests.
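
To make the contemplated value-matching exercise concrete, the sketch below cross-tabulates a small set of national values against principles that recur in international AI ethics guidelines. The pairings shown are hypothetical placeholders for exposition only: they are not outputs of the research described above, and in the theoretical model the mappings would be derived from the Rukun Negara and the Federal Constitution rather than assumed.

```python
# An illustrative sketch (not the authors' completed model) of a matrix
# matching national values against principles recurring in international
# AI ethics guidelines. The pairings marked below are assumed placeholders.

national_values = [
    "Rule of Law",            # a Rukun Negara principle
    "Courtesy and Morality",  # a Rukun Negara principle
]
universal_principles = ["fairness", "accountability", "transparency", "privacy"]

# Pairs recorded as candidate alignments meriting closer doctrinal analysis;
# the selection here is purely for demonstration.
alignments = {
    ("Rule of Law", "accountability"),
    ("Rule of Law", "transparency"),
    ("Courtesy and Morality", "fairness"),
}

for value in national_values:
    matched = [p for p in universal_principles if (value, p) in alignments]
    gaps = [p for p in universal_principles if (value, p) not in alignments]
    print(f"{value}: aligns with {', '.join(matched) or 'none recorded'}; "
          f"to be examined against {', '.join(gaps)}")
```

The point of such a matrix is diagnostic: it surfaces both the overlaps between national and universal values and the gaps where a national framework may need an additional principle or a justified deviation.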

An ethical AI framework is the basal step in ensuring that AI tools are assessed against a barometer of values and principles. There is already progress towards the adoption of more advanced assessment tools, in particular when dealing with 'high-risk' AI and 'high-risk' AI practices; in the case of the latter, these are taking the form of legal frameworks.

71 Roger Clarke, 'Principles and business processes for responsible AI' (2019) 35(4) Computer Law & Security Review 410-422. See also, Sharmila Nair, 'Gobind's ministry working on a national data and AI policy' (The Star, 12 September 2019) https://www.thestar.com.my/tech/tech-news/2019/09/12/gobind039s-ministry-working-on-a-national-data-and-ai-policy accessed 22 July 2021; MDEC, 'Government, Public Policy, and Sustainable Business' https://mdec.my/about-malaysia/government-policies/ accessed 22 July 2021.

