Final Digital Paper


Surname 1

Student Name

Professor

Assignment

Date

Introduction

Over the last 18 months, the adoption of Artificial Intelligence (AI) technology has trended sharply upward across corporations and businesses worldwide (Herath). This rapid adoption has heightened AI's ability to solve business problems through cognitive functions that resemble those of the human brain. Organizations have incorporated the new technology into their processes to enhance their operational capabilities. The AI capabilities most commonly deployed in organizations are machine learning, computer vision, and robotic process automation (Fountaine et al. 60). AI adoption is rising rapidly because of the time and money it saves organizations on human resources.

Amid the many benefits of AI technology, its development and deployment raise several ethical challenges. The ethical issues around AI center on the systems' bias, transparency, and accountability (Borenstein et al. 22). The technology has been at the heart of recent data breaches in organizations with weak privacy protections and poor oversight of their data. Research shows that AI decisions are, in most cases, not transparent, since they are rarely comprehensible to human intelligence.

Overcoming bias in the technology has ignited debates on its legality worldwide. AI's algorithmic decisions are driven by the data the systems are trained on, so errors and failures in that data may be replicated in the outcomes (Ntoutsi et al.). Similarly, establishing accountability for AI technology has become a challenge for many developers. In process automation and machine learning, for instance, the technology may not follow all the legal procedures set for a process, yet the outcome is difficult to challenge because no human is involved in producing it (Felzmann 69). This paper aims to develop and outline the ethical guidelines and standards that should be incorporated into AI technology for it to meet the ethical threshold. Structurally, the paper addresses the ethical issues of transparency, bias, and accountability.

PART I

AI has become the closest route to efficiency for most businesses and corporations worldwide. The technology provides advanced analytics and machine learning capabilities for solving societal challenges and complexities (Di Vaio 177). However, the ethical implications of using the technology must be considered if it is to remain sustainable and efficient for coming generations. Ethics ensures that the technology serves its purpose rather than becoming a threat to humans (Mogaji et al.), setting the boundaries for how far the technology can advance before it does so. Ethics plays a role in AI by ensuring that algorithmic initiatives maintain human dignity and do not harm human coexistence, through fairness and the prevention of harm.

AI's societal impact lies in improving efficiency and automating processes in industries such as education and healthcare. AI technology has immensely improved educational outcomes by providing personalized teaching, managing student data, and adjusting teaching strategies for better results (Di Vaio 100). The technology has also driven broad societal change by raising productivity through algorithmic solutions to complex environmental challenges.

In healthcare systems, the technology has improved the quality of outcomes by minimizing errors in health records and X-ray reports. It analyzes such data professionally and produces outcomes that match the principles of report development in healthcare (Secinaro 4). Additionally, algorithms able to detect stroke in its early stages, before other health risks develop, have improved patient outcomes (Rajpurkar 25). Even though AI has improved efficiency in several sectors of the economy, the technology may pose serious dangers to society and the global economy. With the automation of industrial processes, many production roles would become obsolete, driving up unemployment.

Meeting ethical standards during the development and deployment of AI is central to its efficiency. The ethical requirements for AI must incorporate fairness, transparency, accountability, and algorithmic ethics before development and deployment (Ryan et al.). Developers must also act in good faith while aligning with the technical norms of AI's operational cycle before deployment. Fairness, in particular, is critical during AI development, as it ensures that the technology operates without discrimination or bias.

The increase in research and innovation malpractice creates the need for ethics to guide AI research and innovation. Ethics provides criteria for innovation by balancing innovation against accountability (Roselli et al.). Through ethics, stakeholders can engage in research activities without compromising AI standards of transparency and fairness (Haefner). Additionally, the cognitive bias that can result from traditional research methods is avoidable when the underlying standards of ethical research are adhered to.

Generally, for research and innovation to materialize within the ethical bounds of AI, the principles of privacy, accountability, and social benefit must be upheld without compromise (Ryan et al.). These ethical standards matter in research and innovation because they form the basis of the responsible use of artificial intelligence amid potential philosophical challenges and research complexities (Haefner). The efficiency of research and innovation is therefore embedded in the ethical standards regulating the development and deployment of AI.

PART II

Bias in AI greatly concerns developers because of its consequences for the outputs AI produces. Bias in AI development is recognizable when the data embedded in a system cannot be generalized to a wider scope (Ryan). Biases are introduced into AI through the training data before testing and depend heavily on the method used to obtain that data (Roselli et al.). The data used to develop algorithms may contain errors, and if those errors are not detected early, they may translate into biased data. Similarly, data bias may occur when specific data points are underrepresented in a data set.

Biased AI has severe consequences for organizations and institutions because of the systemic unfairness it produces. For businesses and profit-making corporations, AI bias may cost customers' trust, translating into a decline in revenue (Enholm 1100). The loss arises when customers detect glaring inequality in customer service. Additionally, when AI is used for employee rating but the data is biased, the rating outcome may be discriminatory, pushing an organization toward losing its employees because of the demotivating biased data.

AI automates organizations' decision-making processes, using policy engines and algorithms to analyze a situation and provide a decision for an institution. Understanding AI's decision-making process is crucial for the management of any organization, as it gives management direction in choosing the strategic alternatives that suit a particular situation (Duan et al. 24). Appreciating how decisions are generated also helps in prioritizing options by sustainability, feasibility, and acceptability, and makes it easier to align automated decisions with organizational values.

Transparent AI faces several challenges that prevent a system from explaining how it arrives at decisions. These challenges arise when the input data contains uncorrected errors. The model's variables may also produce poor outcomes, especially when hyperparameters are misconfigured (Felzmann 3333). In other cases, the explanation may fail when a selected subset of data is unavailable, resulting in biased results from the model.

Accountability in AI systems encompasses all the compliance principles that make AI trustworthy and reliable. Determining responsibility for AI actions and decisions plays a significant role in the deployment and development of AI (Miguel et al.). Determining the liability of an AI system establishes the legal certainty that promotes public confidence in such systems. Because of bias, AI technology may in some instances fail, or cause harm and destruction of property (Ryan). In such scenarios, dispute-resolution remedies would have to take effect to determine who was at fault for the damage or harm.

Product liability mechanisms are used to address the related harm or dispute. These mechanisms rest on contract law and the law of torts (Mogaji), which would address possible claims such as failure to warn, negligence, manufacturing defects, and design defects (Cabral 620). For instance, if a self-driving electric car causes an avoidable accident, the manufacturer or developer becomes liable for the resulting harm and disputes, having supplied a hazardous product to a third party. The mechanisms for addressing AI-related harms and disputes would therefore trace back to the developers or manufacturers accused of design defects and manufacturing negligence.

PART III

Interdisciplinary collaboration on ethical guidelines for the use of AI has contributed tremendously to the protection and sustainability of AI systems by providing an inclusive, ethical framework for the fair use of AI data without prejudice. The collaboration advocates an AI system that observes data privacy rights and generates constructive data models without compromising the privacy and transparency of the data (Padilla). It also advocates an accountable AI system that supports governance through explainable results capable of earning users' confidence.

The existing ethical guidelines and principles adopted by international organizations and states to guide the deployment and development of AI systems include the principles of privacy, transparency, accountability, justice and fairness, non-maleficence, and responsibility (Ryan). Transparency in generating AI algorithms and models calls for the explainability of the algorithms and their results. The principles of accountability and non-maleficence, in turn, call for the prevention of harm and for clarifying legal liability when harm occurs.

The principle of justice and fairness prevents inequality and bias, reinforcing fairness within the global community. Privacy, for its part, is a fundamental component of AI ethics: the principle assures data protection and security while strengthening the moral obligation of privacy during research and innovation with AI technology (Madaio 20). The key components of ethical guidelines in AI development and deployment are embedded in three main concepts of artificial intelligence: neural networks, machine learning, and robotic automation. The guidelines prevent and reduce data bias in terms of gender, race, and nationality, and the ethical framework AI adopts strengthens every organization's data management system.

PART IV

Algorithmic bias is common in Artificial Intelligence, especially in machine learning, and its implications must be mitigated for the technology to run efficiently. Several strategies can be employed to mitigate bias in AI data. First, it is essential to keep a human in the loop, so that when the machines fail, a person can intervene for better, unbiased results (Panch). Second, users should monitor the data continuously to identify any feedback signals. Additionally, using user-generated algorithms and data would reduce algorithmic bias, since the human data provides a feedback loop.

Implementing fairness by design means making fairness a part of machine learning itself. The concept of fairness is tested by checking the decision-making system for bias; the principle is violated when there is discrimination in the outcomes the system generates (Madaio 22). For instance, in hiring and promotion decisions, an automated selection process running on inaccurate data may exhibit bias and discriminate against potential candidates.

Monitoring and evaluating AI systems for bias can rely on Kullback-Leibler (KL) divergence techniques, which are used to create explainable data models from machine learning processes. From such a model, bias can be traced to the training data or to the statistical prior (Fountaine 70). Bias can also be evaluated by examining the sources of the data: since the data used in machine learning is generated and collected from people's experiences, it usually exhibits considerable bias (Magrabi 130). Bias can therefore be evaluated from the metrics of data collection and from the class labels used in data classification, while for machine learning data the KL divergence technique helps identify assumptions introduced during data testing.
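The evaluation described above can be sketched as a comparison between the class distribution of the training data and that of a reference population, scored with KL divergence. This is a minimal stdlib-only illustration; the distributions and group labels are hypothetical:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as {outcome: probability}."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

# Hypothetical class proportions: a reference population vs. the training set
population = {"group_a": 0.5, "group_b": 0.5}
training = {"group_a": 0.8, "group_b": 0.2}

# Zero means the training data mirrors the population; larger values flag skew
print(kl_divergence(training, population))
```

A divergence of zero indicates the training data matches the reference distribution; a monitoring pipeline would alert when the value crosses a chosen threshold.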

PART V

Explainable AI (XAI) is a form of artificial intelligence that provides an understanding of the processes behind an AI system's decisions, and it plays a significant role in making AI outcomes transparent. To improve transparency and trust, the model gives the rationale for a decision by explaining the controls behind it (Angelov). A system's explainability helps boost confidence in the accuracy of outcomes, improving customers' trust and loyalty. XAI approaches and techniques include Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and SHapley Additive exPlanations (SHAP) (Angelov). High-performing machine learning models tend to have complex algorithms and lack explainability, and the reverse is also true, denoting a tradeoff between performance and explainability.
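LIME and SHAP differ in their details, but both rest on perturbing a model's inputs and observing how its output shifts. The sketch below uses a simpler occlusion-style attribution to illustrate that shared idea; the stand-in model, weights, and feature names are invented for the example and do not come from the cited works:

```python
def model_score(features):
    """Stand-in 'black box': a weighted sum the explainer treats as opaque."""
    weights = {"income": 0.6, "age": 0.1, "debt": -0.5}
    return sum(weights[name] * value for name, value in features.items())

def occlusion_attributions(score_fn, features, baseline=0.0):
    """Credit each feature with the score change when it is set to a baseline."""
    full_score = score_fn(features)
    return {
        name: full_score - score_fn({**features, name: baseline})
        for name in features
    }

applicant = {"income": 1.0, "age": 0.5, "debt": 0.8}
# Positive attributions raised the score; negative ones lowered it
print(occlusion_attributions(model_score, applicant))
```

Real LIME fits a local surrogate model over many perturbed samples, and SHAP averages contributions over feature coalitions; both refine this occlusion idea with stronger theoretical guarantees.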

The transparency-by-design principle requires that the results of an AI system be explainable. The principle uses XAI techniques to generate the rationale behind every system decision and thereby improve outcomes; it is satisfied whenever users of an automated decision can understand the processes that informed it (Zuo). Explainability through the various XAI approaches has made it possible to interpret and communicate AI decisions to end users in an accessible way. After a decision is made, the tool gives a human-friendly rationale covering the formulae and paths used to generate the outcome (Miguel et al.). XAI tools have also helped users of AI systems detect malicious information in a given piece of data, and the traceability of complex data earns stakeholders' confidence in the decisions made.

PART VI

As one of the fundamental ethical principles of AI, accountability ensures that an AI system operates within acceptable operational frameworks, with responsibility taken in case of a dispute or harm to a third party. The opacity of AI systems requires developers to ensure that a system functions properly without hazardous effects on the people around it (Angelov); the developer is therefore responsible for any harm the system causes beyond its intended operation. The operator, in turn, is responsible for demonstrating fairness and transparency in the data delivered to end users, while users are responsible for regulating the system's impact on the world. For instance, if users commit fraud with the system, they, and not the system, remain guilty of fraud.

Product liability again supplies the mechanism for addressing AI-related harms and disputes: contract law and the law of torts cover claims such as failure to warn, negligence, manufacturing defects, and design defects (Cabral 620), so a developer whose self-driving car causes an avoidable accident is liable for having supplied a hazardous product to a third party.

Audit and certification procedures analyze large data volumes to identify anomalies that may have produced bias in AI. AI auditing and certification help trace errors and identify malicious data when assessing the risk of data outcomes (Madaio 18). To meet AI's ethical obligations, adopting a culture of responsibility would serve as the foundation for ethical leadership, the prioritization of learning, and the exploration of diverse research perspectives. A culture of accountability and responsibility would minimize negative data outcomes by promoting human-AI collaboration and the examination of ethical concerns in the use of artificial intelligence for surveillance.

PART VII

Like developers and users, the government has a role in promoting ethics in artificial intelligence. The government establishes rules and regulations that guide and promote ethical AI, protecting citizens from irresponsible and misleading AI outcomes. The role of business, in turn, is to eliminate bias in AI outcomes and to advocate non-manipulative, non-discriminatory business results (Zuo). In academia, researchers must uphold AI's ethical standards, which form the basis of the responsible use of artificial intelligence amid potential philosophical challenges and research complexities.

Public awareness of and engagement with AI ethics are essential for accelerating and improving decision-making in societal processes, boosting efficiency in business and enhancing productivity (Herath). Similarly, advocacy around AI would help humanity identify trends and operations in the corporate world. International cooperation likewise has an immense role in harmonizing ethical guidelines for AI.

The role of international cooperation in promoting ethical technologies is embedded in principles that outlaw the misuse of AI for manipulating and surveilling international technologies. International bodies also work to identify normative boundaries for codifying socio-economic human rights around the globe (Ryan). This role gives them an upper hand in detecting, monitoring, and evaluating the ethical capabilities of global technologies, so global minimum ethical standards equally influence ethical practice within AI and related technological advancements.

Conclusion

In conclusion, the future of AI depends on how the technology will affect human lives. The interests of AI's developers, operators, and users need to shift toward ensuring that the technology does not pose threats to humanity. AI has become the closest route to efficiency for most businesses and corporations worldwide through advanced analytics and machine learning capabilities that address societal challenges and complexities; its ethical implications must therefore be considered for the system to remain sustainable. Ethics ensures that the technology serves its purpose, setting appropriate boundaries so that it does not become a threat to humans.

Similarly, interdisciplinary collaborative research on and adaptation to the use of AI must be upheld to enhance the protection and sustainability of AI systems through an inclusive, ethical framework. Such collaboration would provide for the unbiased use of AI data, without prejudice and with observance of data privacy rights, generating constructive data models without compromising the privacy and transparency of the data. Its advocacy of an accountable AI system would therefore form a pillar for the support and governance of AI systems across the globe. To mitigate the ethical challenges of bias, transparency, and accountability, stakeholders must adopt a culture of responsibility that lays a foundation of ethical leadership for Artificial Intelligence.



Works Cited

Angelov, Plamen P., et al. "Explainable artificial intelligence: an analytical review." Wiley

Interdisciplinary Reviews: Data Mining and Knowledge Discovery 11.5 (2021):

e1424.

Cabral, Tiago Sérgio. "Liability and artificial intelligence in the EU: Assessing the adequacy

of the current Product Liability Directive." Maastricht Journal of European and

Comparative Law 27.5 (2020): 615-635.

Di Vaio, Assunta, et al. "Artificial intelligence and business models in the sustainable

development goals perspective: A systematic literature review." Journal of Business

Research 121 (2020): 283-314.

Felzmann, Heike, et al. "Towards transparency by design for artificial intelligence." Science

and Engineering Ethics 26.6 (2020): 3333-3361.

Fountaine, Tim, Brian McCarthy, and Tamim Saleh. "Building the AI-powered

organization." Harvard Business Review 97.4 (2019): 62-73.

Haefner, Naomi, et al. "Artificial intelligence and innovation management: A review, framework, and research agenda." Technological Forecasting and Social Change 162 (2021): 120392.

Herath, H. M. K. K. M. B., and Mamta Mittal. "Adoption of artificial intelligence in smart

cities: A comprehensive review." International Journal of Information Management

Data Insights 2.1 (2022): 100076.

Madaio, Michael, et al. "Assessing the Fairness of AI Systems: AI Practitioners' Processes,

Challenges, and Needs for Support." Proceedings of the ACM on Human-Computer

Interaction 6.CSCW1 (2022): 1-26.



Magrabi, Farah, et al. "Artificial intelligence in clinical decision support: challenges for

evaluating AI and practical implications." Yearbook of medical informatics 28.01

(2019): 128-134.

Miguel, Beatriz San, Aisha Naseer, and Hiroya Inakoshi. "Putting accountability of AI

systems into practice." Proceedings of the Twenty-Ninth International Conference on

International Joint Conferences on Artificial Intelligence. 2021.

Mogaji, Emmanuel, Taiwo O. Soetan, and Tai Anh Kieu. "The implications of artificial

intelligence on the digital marketing of financial services to vulnerable

customers." Australasian Marketing Journal (2020): j-ausmj.

Padilla, Thomas. Responsible Operations: Data Science, Machine Learning, and AI in

Libraries. OCLC Research Position Paper. OCLC Online Computer Library Center,

Inc. 6565 Kilgour Place, Dublin, OH 43017, 2019.

Panch, Trishan, Heather Mattie, and Rifat Atun. "Artificial intelligence and algorithmic bias:

implications for health systems." Journal of global health 9.2 (2019).

Rajpurkar, Pranav, et al. "AI in health and medicine." Nature medicine 28.1 (2022): 31-38.

Roselli, Drew, Jeanna Matthews, and Nisha Talagala. "Managing bias in AI." Companion

Proceedings of The 2019 World Wide Web Conference. 2019.

Ryan, Mark, and Bernd Carsten Stahl. "Artificial intelligence ethics guidelines for developers

and users: clarifying their content and normative implications." Journal of

Information, Communication and Ethics in Society (2020).

Secinaro, Silvana, et al. "The role of artificial intelligence in healthcare: a structured literature

review." BMC medical informatics and decision making 21 (2021): 1-23.

Zuo, Jiankai, et al. "Artificial Intelligence Prediction and Decision Evaluation Model Based

on Deep Learning." 2019 International Conference on Electronic Engineering and

Informatics (EEI). IEEE, 2019.
