A Qualitative Exploration of R
Artificial Intelligence
by
Jamie L. Lehn
August 2023
Committee Members
Abstract

This study explored responses from over 100 individuals from both the technology and non-technology sectors regarding the risks, benefits, and governance of artificial intelligence, to determine how these factors impact an individual's ability to trust artificial intelligence. It further explored how to address, and potentially eliminate, society's perception that there is an existential crisis in humanity regarding the research, development, and utilization of artificial intelligence, a perception culminating from numerous issues concerning public safety, privacy, and other societal implications, and whether this perception affects a person's ability to trust and accept the technology. Through a qualitative exploratory approach, the study examined the use of the technology, its actual versus promised benefits, and ethical and moral decision-making to address the existential crisis perception and the lack of trust in artificial intelligence. Initial findings from an analysis of survey respondents from both inside and outside the technology industry identified a common theme: what is currently being done regarding governance and standardized controls is not working.
Respondents agreed that there is an ongoing ethical and moral concern regarding decision-
making, research, and utilization of artificial intelligence. These concerns continue to impact an
individual’s ability to trust and feel safe regarding artificial intelligence technology.
Keywords: Artificial Intelligence, Trust, Ethics and Morals, Governance, Risk and Benefits
Acknowledgments
First and foremost, I would like to acknowledge the Heavenly Father and Great Spirit for
the ability to take on this journey. I want to thank my family, my sons Zachary Lehn and Charles
Austin, my life partner Tamara Austin, and stepfather Ricky Wells, along with my father, Frank
Lehn, and my mother, Lucinda Wells (Lone), who are with me in spirit for the ongoing support
they provided for me to fulfill a promise I made nearly 22 years ago when Zachary was born. I
want to acknowledge and thank the “Men in the Blue House” Dr. Mitch Ogden (UW-Stout), Dr.
Blair Bateman (BYU), Dr. Reed Reichwald, Dr. Geoffrey Archibald, Dr. Kent Archibald, Dr. Peter
Gessel, Patrick Crompton, and my best friend Scott Vaneps who provided the example of how
to be studious in an educational journey through observation and conversation as they pursued their own studies.
I must thank the Colorado Technical University professors Dr. Abdullah Alshboul, Dr.
Phyllis Parise, Dr. Cynthia Calongne, Dr. Scott Wood, and my research supervisor Dr. Alexa
Table of Contents
Study Problem
Trust
Governance
Conclusions
Research Methodology and Design
Trustworthiness
Ethical Assurances
Results
Conclusion
References
List of Tables
List of Figures
Chapter 1: Introduction
This qualitative exploratory study explores the interrelationship of risks, benefits, and governance of artificial intelligence learning agents. The National Institute of Standards and Technology (NIST) has shown that implementing information assurance controls reduces the risk to information systems (NIST, 2012, 2020b). However, there is a need to understand whether reducing risk and implementing governance controls and governance models is enough to positively affect how society perceives, accepts, and places its trust in artificial intelligence, or whether more is required. Many researchers have previously identified a perception in society that artificial intelligence represents an existential crisis for humanity (BigThink, 2020; Gibbs, 2017; Roberts, 2021), culminating in numerous issues concerning public safety, privacy, and other societal implications.
The scope of this dissertation will cover a holistic look into multiple areas within
computer science and outside of the computer science field of study. These include trust,
governance, risks, and actual versus promised benefits. Within the scope of trust, the study will
look at human-computer interactions, morals, ethics, psychology, trust contexts, and the
components required to build trust. In terms of governance, the study will explore legal
components, industry standards, controls, and responsible versus ethical governance models.
The study also contains an exploration into risk. With risk in mind, the study will review the impacts of human rights issues, data accuracy and cleanliness, autonomy, publicized issues and failures resulting from artificial intelligence, and the accuracy and effectiveness of artificial intelligence development code. Lastly, the study will examine the impact of actual versus promised benefits on trust.
The current theory is that applying standards and controls to artificial intelligence will
not be as beneficial as in other information technology areas. Artificial intelligence and its
associated learning agents and their development are unlike any other technology. Artificial
intelligence and learning agents' primary purpose is to think and act like humans, performing
human functions like cognitive logic, knowledge discovery, analytical interpretation, and
adaptation (Sharma, 2019a). Unsupervised learning agents are self-learning, meaning they seek a resolution even if one has not been preprogrammed into their development code or processes (Sharma, 2019b). As a result, there is no constant that can be audited; outcomes depend on the environment in which the agent works, the number of datasets it draws on, and the knowledge it can access.
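The non-auditability argument can be illustrated with a minimal, hypothetical sketch (invented for illustration, not part of the study): two agents built from identical code and identical starting states, working in the same environment, diverge once their sampling of that environment differs.

```python
import random

class LearningAgent:
    """A toy self-learning agent that adapts an internal estimate
    to whatever data its environment happens to provide."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.estimate = 0.0

    def observe(self, value, rate=0.1):
        # Online update: the agent adapts toward incoming data.
        self.estimate += rate * (value - self.estimate)

    def explore(self, environment):
        # The agent samples its environment rather than following a
        # preprogrammed path, so identical agents can still diverge.
        self.observe(self.rng.choice(environment))

# Two agents built from identical code with identical starting states...
a = LearningAgent(seed=1)
b = LearningAgent(seed=2)

# ...working in the same environment, but experiencing it differently.
environment = [1.0, 5.0, 9.0]
for _ in range(100):
    a.explore(environment)
    b.explore(environment)

print(a.estimate != b.estimate)  # identical code, different experiences
```

Because each agent's final state depends on its individual history of observations, there is no single expected value an auditor could check the agents against.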
Study Problem
The problem is the need to understand whether improving governance oversight, reducing and understanding risk, and the potential or promised benefits of artificial intelligence impact the public's trust in artificial intelligence, as inferred by previous research (ISO, 2020; NIST, 2020a; WhiteHouse, 2019). This problem has been identified and supported by standards organizations such as ISO (ISO, 2020) and NIST (NIST, 2020a), along with the Trump administration
(WhiteHouse, 2019). Artificial intelligence has several benefits when used correctly, though it
could also be society’s downfall if appropriate controls and governance are not in place.
Scholars and technology industry leaders have warned about advanced artificial intelligence
risks (Yampolskiy, 2016). These risks include unsafe medical recommendations for alleviating
illness (Chen, 2018; Yigitcanlar et al., 2020) and inadequate controls that lead to deadly
automated or self-driving car accidents (NTSB, 2019; Yigitcanlar et al., 2020). Industry
innovators Stephen Hawking, Elon Musk, and Bill Gates suggest that artificial intelligence can be
dangerous, and that caution is needed (Marr, 2018). Musk commented that a catastrophic artificial intelligence incident could occur in the next five years (Marr, 2018). Speaking at the US National Governors Association in 2017, Musk told those in attendance that artificial intelligence is an exceptional case that calls for proactive regulation instead of reactive regulation; by the time regulators react to artificial intelligence, Musk argued, it will be too late (Gibbs, 2017). When asked by those in attendance how best to regulate artificial intelligence development, Musk replied that the first stage was to "learn as much as possible" (Gibbs, 2017). Musk's remark echoes Buchanan's advice that success in artificial intelligence brings expanded responsibilities: the need to consider the societal repercussions and consequences of technological success and to educate decision-makers (Buchanan, 2005, p. 59). The outcome of this study is to understand how trust in artificial intelligence is impacted, what steps can be taken to increase confidence, and how the existential crisis perception might be changed.
Study Purpose
The purpose of this qualitative exploratory study was to explore the interrelationship of
risks, benefits, and governance of artificial intelligence and its associated learning agents to
determine how they impact an individual's ability to trust artificial intelligence. The perception explored is society's belief that there is an existential crisis in humanity (BigThink, 2020; Gibbs, 2017; Roberts, 2021), culminating from numerous issues concerning public safety, privacy, and other societal implications, and whether this perception affects a person's ability to trust and accept the technology.
The qualitative method was chosen over the quantitative because the qualitative approach provides the framework needed for an exploratory design. As Romero (2020) identified, the exploratory route suits research where the problem has not been studied in depth, the literature is limited, and the intention is the discovery of new information rather than a definitive solution (Dudovskiy,
2018; Given, 2008; Jupp, 2006; Romero, 2020). Utilizing an exploratory design for a study on
artificial intelligence meets these criteria. Artificial intelligence is relatively new compared to
other areas of computer science research. Thus, the amount of literature on the topic is
reduced; it is further reduced when the focus on artificial intelligence is narrowed down to a
particular component, such as artificial intelligence learning agents and the utilization of these
agents.
A humanistic approach to computer science examines how and why humans interact with technology in a certain way (Hazzan et al., 2006). The qualitative methodology allows this humanistic approach to be applied, which in turn enables the research to explore the phenomenon from different outlooks. Demeyer (2011) states that computer science is a culmination of multiple fields (Demeyer, 2011). However, there is more to computer science than technology, metrics, and numerics. The human perspective is the perspective that this research aims to fulfill. Axiology is a considerable component that will aid in the validity and shape the ethical context of the research (Given, 2008) and dissertation through ethics, judgments, values, and aesthetics, which construct the research approach and work (GuhaThakurta & Chetty, 2015; Thornhill,
Lewis, & Saunders, 2019). The study problem encompasses human emotions regarding artificial
intelligence. People's sentiment toward artificial intelligence informs whether the governance of artificial intelligence agents will alleviate these emotions and ensure a high level of privacy and safety. The research must therefore add an axiological component to provide research ethics and value.
The target population for the study was split between members of general society and technology professionals in fields such as artificial intelligence research, development, and innovation. Using a cross between purposive and convenience sampling methods, a sample of 112 respondents was selected from the previously mentioned technology-related fields and members of general society. Data collection consisted of qualitative surveys conducted using the SurveyMonkey survey tool. The sample of technology professionals came from professional groups within LinkedIn; non-technology professionals were reached through SurveyMonkey's surveying platform. Data collected from the qualitative surveys were analyzed for themes using a CAQDA (computer-assisted qualitative data analysis) tool, MaxQDA. The underlying proposition is that proper and adequate governance of artificial intelligence learning agents will improve trust.
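The thematic analysis step can be sketched in miniature. This is a hypothetical illustration only: the study itself used MaxQDA, and the codebook, keywords, and responses below are invented for demonstration, not taken from the study's data.

```python
from collections import Counter

# Hypothetical theme codebook: keywords mapped to themes. The themes
# echo those discussed in the study (governance, ethics, safety), but
# the keyword lists are illustrative assumptions.
CODEBOOK = {
    "governance": ["regulation", "oversight", "standards", "controls"],
    "ethics": ["ethical", "moral", "bias"],
    "safety": ["safe", "danger", "risk", "privacy"],
}

def code_response(text):
    """Return the set of themes whose keywords appear in a response."""
    lowered = text.lower()
    return {theme for theme, words in CODEBOOK.items()
            if any(word in lowered for word in words)}

def theme_frequencies(responses):
    """Tally how many responses touch each theme."""
    counts = Counter()
    for response in responses:
        counts.update(code_response(response))
    return counts

# Invented example responses, not actual survey data.
responses = [
    "Current regulation and standards are not working.",
    "I worry about bias and whether the systems are ethical.",
    "Privacy risks make it hard to feel safe.",
]
print(theme_frequencies(responses))
```

A CAQDA tool such as MaxQDA supports far richer workflows (manual coding, memoing, co-occurrence analysis), but the core loop of assigning codes to responses and tallying theme frequencies follows this shape.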
Research Question
How does improving governance oversight, reducing and understanding risk, and the
potential or promised benefits of artificial intelligence impact the public's trust in artificial
intelligence?
Conceptual Framework
The conceptual framework for this study is depicted below in Figure 1. The flow of my thought process starts with trust: trust in a technology that has provided numerous benefits and risks to humankind. Research points to society's inability to trust artificial intelligence,
whether due to science fiction movies or their fear of the unknown (BigThink, 2020; Gibbs,
2017; Roberts, 2021). Trusting this technology holds an elevated interest to many individuals
and governments. The National Institute of Standards and Technology and the International Organization for Standardization each wrote a paper on trusting artificial intelligence, and the White House issued an executive order on the same topic. The study will examine trust, what
impacts an individual's ability to trust artificial intelligence, and what mechanisms within trust
could be triggers or catalysts. Additionally, research has shown that there is an ability to
transfer trust or distrust on a topic from one individual to another (Burt & Knez, 1996; Doney,
Cannon, & Mullen, 1998; Kramer, 1999; Lee & See, 2004). Recently developed and published
governance models using new standards and controls aim to positively impact and elevate an
individual’s ability to trust artificial intelligence and learning agents (ISO, 2020; NIST, 2020a).
Due to the newness of these governance models, adequate time has not yet passed to
determine if they will mitigate or eliminate risks brought on by artificial intelligence and
learning agents.
Taking it one step further is the philosophical component, in which it is unknown whether the benefits promised or achieved by artificial intelligence and its associated learning agents offset the amount of risk accepted by individuals and organizations. It is also unknown whether benefits achieved alone affect the level of trust, with an increase in positive outcomes perpetuating a higher level of trust, and whether there is a point where individuals discard the notion of risk and safety for the probability of receiving a benefit.
Figure 1
Conceptual Framework.
The significance of the study culminates from numerous incidents involving artificial intelligence learning agents that have occurred during research, development, and deployment. Society has seen a stock market crash brought on by trading tools integrated with artificial intelligence learning agents (Kirilenko, 2017), automated medical prescription tools providing the wrong dosage to patients (Chen, 2018; Yigitcanlar et al., 2020), and incidents involving autonomous vehicles placing passengers and pedestrians at risk (NTSB, 2019; Yigitcanlar et al., 2020). Several other incidents have raised concerns regarding privacy,
public safety, and trust, pointing to specific societal implications. Industry leaders Stephen Hawking, Bill Gates, and Elon Musk, along with academic researchers, have been warning the nation's leaders for more than a decade that caution is needed and that artificial intelligence development can be dangerous (Gibbs, 2017; Marr, 2018; Yampolskiy, 2016). Elon Musk stated in an interview in 2018 that he believes a catastrophic incident could occur within the next five years.
This study hopes to identify what professionals within the industry must do to improve technology acceptance. I hope to do so by focusing on the issues that affect how people within and outside the technology sector accept and trust artificial intelligence, and by understanding the contexts, components, and requirements needed to build that trust. This acceptance and improved trust could come through improved stability of artificial intelligence agents embedded in current and new technology, reducing or eliminating issues that hurt individuals directly or indirectly. This study looks at areas of governance in terms of individual, organizational, and applied policies. Additionally, it examines standardized controls, risk, and human rights issues.
Positionality and Reflexivity

Reflexivity is an individual's ability and commitment to look inward and reflect on their
feelings, reactions, and motives to understand how they could potentially impact how they act
and think in a particular instance. Reflexivity is also a standpoint or perspective on where the
researcher stands on a topic, where they start, and why they feel a certain way (O'Brien, 2021).
It is stated that ethical tensions arise with the everyday practice of conducting research
(Guillemin & Gillam, 2004). These tensions provide the researcher with critical ethical moments
to respond to, making a decisive conscious or unconscious choice of which path to follow.
Reflexivity is a mental tool, an active process that assists us in handling ethical dilemmas that
may occur while conducting research, thus ensuring rigor and aiding in achieving the correct
choice. As a reflexive researcher, the concentration is not merely on the facts of the presented
data but also on constructing data interpretations while simultaneously questioning how those
interpretations came into existence (Guillemin & Gillam, 2004). Reflexivity is more than just
reflection; it involves recognizing and being sensitive to the multiple ethical dimensions that arise while conducting research.

In terms of the research study, what interests me is the public's perception of artificial intelligence: what it is, what it can achieve, and what the future holds. Governance of the technology and an understanding of its knowledge base also interest me. There are few external motivating factors; however, there are plenty of personal motivation factors. One such factor is the
expansion of knowledge. I am motivated to learn more about what artificial intelligence is and
what it is not. Another way of stating this is by exploring its history and current innovations,
which demystifies artificial intelligence. As the veil is pulled away, a new set of interests comes
into existence due to the complexity of artificial intelligence. I am motivated to see how
security controls and governance models influence artificial intelligence. Will these reduce
incidents sensationalized in the news and journal articles, and will the number of related
benefits increase? Moreover, do these influencers also affect people's ability to trust, and if so,
why is that?
Positionality and reflexivity cannot be fully identified without identifying any potential agendas for conducting this study. My agenda is to author a study that provides a clear picture of how individualized trust in artificial intelligence is impacted and to develop theories and practical measures to mitigate any adverse impacts or highlight positive ones. The study also incorporates axiology, that is, the utilization of internal values developed through morals and ethics.
Delimitations and Limitations

This study has both delimitations and limitations (Miles, 2019). To that extent, a definition of each is included as a prelude to the paragraphs outlining them. Delimitations are defined mainly as the study's scope, describing how the study relates to the entire population; they act as denotations, implying limitations on the research population. Delimitations are self-imposed restrictions placed on the study by the researcher. At first glance, placing boundaries on researching artificial intelligence and trust seems foolish. However, there are natural boundaries that arise during the research and the preparation of a plan for the study: the geography, participant recruitment, and the participant's background. The geographical boundary is the study location, and participant recruitment is limited to the United States. Participant recruitment has a further boundary of being limited to utilizing technology-focused professional groups within the LinkedIn social media platform for part of the participant pool. This pool is bound by the understanding that each participant was required to show at least ten years of experience in technology-related fields or membership in artificial intelligence projects or teams. The other segment of the population is restricted to members of general society outside the technology sector.
Limitations are specific features, qualities, or attributes that affect or influence the understanding of the findings. These limitations constrain the ability to generalize from the results or to explain additional applications, and they concern the usefulness and efficacy of discoveries; they are the outcome of the study's initial design, the method chosen, the determination of internal and external validity, or unforeseen and unimagined questions that materialized in the course of the research (Miles, 2019; Price & Murnan, 2004). One limitation is using LinkedIn professional groups to gather research data: utilization of LinkedIn may exclude an element or population segment that could change the course of the study's findings. To offset that limitation, I added SurveyMonkey to gather research data from individuals not
working in the technology sector. However, this creates another limitation, as there may be a question about the quality of the responses. The rationale for limiting the population pool is to ensure participant results are founded on experience in those fields. Additionally, this research acknowledges the premise that numerous studies have been conducted on trust within corporations and between corporations. However, this study will focus on trust factors that aid individuals in adjusting and adapting to the intellectual complexities and concepts found in artificial intelligence. A limitation of the methodology is that it does not account for the potential of
Definition of Terms

The following terms are essential to understanding the study. Understanding these terms will bring clarity to the reader.
Governance
Governance is the process of guiding and governing events to ensure efficient growth and development, integrating processes, customs, and policies so that they synchronize properly (Brundage, 2019; Lo Piano, 2020; Mannes, 2020; Mitre, 2013; Perry & Uuk, 2019).
Governance Framework
The governance framework accounts for legal, ethical, and moral discipline across several distinct notions: being a responsible subject, the idea of subjectivity, the nature of being, the philosophical design that form must adapt to utilization, structure, and material, and social unity based on the theory that the moral worth of an action should be evaluated by its ramifications, fallouts, and consequences (Fragile to Agile, n.d.).
Moral Imagination
Moral imagination is the ability that facilitates or enables individuals to pay attention to
or have a moral awareness of ethical concerns and discernments exhibited in a vast assortment
of circumstances to discern if actions are ethical or unethical (Costa Ames, 2022).
Perspective Taking

Perspective taking is the ability to comprehend another person's discernable and expressive appraisal of a phenomenon and their perception of it.
Situational Awareness Transparency

Situational awareness transparency is a transparency model that has three levels. The
first level provides basic information about the current state, goals, intentions, and action plan.
The second level offers information about the reasoning process behind the action plan,
including rationale, capabilities, limitations, and trade-offs between different options. The third
level provides information regarding predicted consequences and the likelihood of a plan's success.
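The three-level model described above could be encoded roughly as follows. This is a hypothetical sketch: the field and method names are illustrative assumptions, not taken from any published specification of the model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SATReport:
    """Hypothetical encoding of a three-level situational awareness
    transparency (SAT) disclosure."""
    # Level 1: current state, goals, and action plan.
    state: str
    goals: List[str]
    action_plan: str
    # Level 2: the reasoning behind the plan.
    rationale: str = ""
    limitations: List[str] = field(default_factory=list)
    # Level 3: projections about the plan's outcome.
    predicted_consequences: List[str] = field(default_factory=list)

    def disclose(self, level: int) -> dict:
        """Return only the information appropriate to a SAT level,
        keeping each disclosure concise rather than exhaustive."""
        report = {"state": self.state, "goals": self.goals,
                  "action_plan": self.action_plan}
        if level >= 2:
            report.update(rationale=self.rationale,
                          limitations=self.limitations)
        if level >= 3:
            report["predicted_consequences"] = self.predicted_consequences
        return report

# Invented example values for illustration only.
plan = SATReport(state="navigating", goals=["reach waypoint"],
                 action_plan="follow route A",
                 rationale="route A avoids congestion",
                 limitations=["no off-road capability"],
                 predicted_consequences=["arrival in 12 minutes"])
level1 = plan.disclose(1)  # omits reasoning and projections
```

The layered `disclose` method mirrors the model's intent: an observer can request more detail without the system dumping every rationale and behavior at once.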
Transparency
Transparency is the ability of a system to convey an understanding of its operations. Transparency does not require a rundown of all abilities, rationales, or behaviors.
The ability to be concise, clear, and efficient should take precedence (Lo Piano, 2020; Michaels,
2021).
Trust
Trust is a social psychological concept that can be defined as the attitude that an agent, whether human or technology, will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability (Burt & Knez, 1996; Deutsch, 1958; Guba, 1981; Johns, 1996; Kramer, 1999; Rotter,
1967).
Chapter Summary
This chapter identified the problem: the need to understand whether improving governance oversight, reducing and understanding risk, and the potential or promised benefits of artificial intelligence impact the public's trust in artificial intelligence, as inferred by previous
research. It aims to explore the interrelationship of risks, benefits, and governance of artificial
intelligence and its associated learning agents to determine how they impact an individual's
ability to trust artificial intelligence. The significance of this study culminates from numerous
incidents with artificial intelligence utilizing learning agents that have affected how individuals
see and trust in the technology. It covered the conceptual framework that guided the study. The qualitative methodology provides flexibility to study the humanistic approach within computer science and how humans interact with technology in specific ways. This chapter also presented my statements on positionality and reflexivity and the terminology utilized throughout the study.
Chapter 2: Literature Review

The problem is the need to understand whether improving governance oversight, reducing and understanding risk, and the potential or promised benefits of artificial intelligence impact the public's trust in artificial intelligence, as inferred by previous research (ISO, 2020; NIST, 2020a;
WhiteHouse, 2019). The purpose of this qualitative exploratory study was to explore the
interrelationship of risks, benefits, and governance of artificial intelligence and its associated
learning agents to determine how they impact an individual's ability to trust artificial
intelligence. The conceptual framework that guided this study details the interrelationships of the moral and ethical components affecting trust, possibly through governance, risks, and benefits.
This study's importance results from numerous incidents involving artificial intelligence learning agents developed with no or limited security controls. Artificial intelligence is still maturing; the conceptualization of artificial intelligence and its ethical concerns and significance needs to take into consideration its current use along with socio-technical and socio-economic factors (Stahl et al., 2021).
This researcher's position is that the model for applying the newly developed standards and controls (NIST, 2020a) will not work, because, as the conceptual framework indicates, there is more to determining trust than controls. Learning agents and their development are unlike any other technology in existence. Artificial intelligence and learning agents' primary purpose is to think and act like humans, performing human functions like cognitive logic, knowledge discovery, analytical interpretation, and adaptation (Sharma, 2019a). Unsupervised learning agents are self-learning, seeking to discover a resolution even if one has not been preprogrammed (Sharma, 2019b). This means that there is no constant that can be audited. Based on the environment in which the learning agent works, the results of two exact agents will differ even if the starting point is the same. Another way to think of this phenomenon is with the scenario of identical twin babies growing up; they may have started out identical, though through their environment and experiences, they are no longer identical. It is these environmental differences that undermine purely control-based governance.
The literature search strategies employed included academic databases such as EBSCO, ProQuest, ACM, IEEE, Elsevier, and SAGE. External databases included arXiv, Taylor and Francis, ResearchGate, and Google Scholar. Numerous search terms and keywords were employed, such as trust in artificial intelligence, benefits of artificial intelligence, and artificial intelligence governance.
The search followed the conceptual framework that mapped out the flow of
components. The utilization of the conceptual framework in the literature search aided in
identifying areas of concern when looking to address how governance, ethics, risks, and
benefits affect the societal perception of trusting artificial intelligence. The literature review that follows explores artificial intelligence, its origins, associated risks, and its benefits. John McCarthy coined the term artificial intelligence in the 1950s while working at Dartmouth. McCarthy stated that intelligence consists of two components, the epistemological and the heuristic. Epistemology is the interpretation and depiction of the world, from which explanations of concerns and problems follow based on the data and information denoted in the interpretation. Heuristic is the mechanism that, based on the data and information presented, resolves the concerns and proposes a solution, deciding what to do. In principle, all elements of knowledge or any other aspect of intelligence can be accurately defined and expressed so that a machine can be made to simulate them.
David Marr (1977) stated that artificial intelligence is an example of a human endeavor
that started with a commitment of faith rather than on the results. Marr further noted that
artificial intelligence examines and analyzes multifaceted and intricate data processing rooted
in some facet of natural information processing methods to identify useful information (Marr,
1977). This analysis aids in understanding intelligence. Within humans' ability to understand different forms of intelligence reside at least one, if not many, principles guiding the organization and representation of knowledge that point to and capture the universal nature of intelligence.
Trust
A human's ability to understand, reason, and trust begins with a seed that needs to
grow and mature. Philosophy and reasoning began with an elusive but inadequate seed of strategies, designs, and impressions; the maturity of being and thought replaces these inadequacies with improved versions, though they remain insufficient concepts (Minsky,
1974). It is these insufficiencies that require humans to trust. Trust is a social psychological
concept that can be defined as the attitude that an agent, whether human or technology, will help achieve an individual's goals in a situation characterized by ambiguity, insecurity, indecision, doubt, and vulnerability (Lee & See, 2004). Additionally, trust fulfills four functions. First, it substitutes for guidance when direct examination turns out to be infeasible. Secondly, trust
enables choice in the presence of indecision and insecurity by representing itself as a common
resolution heuristic. Third, trust minimizes doubt and hesitation when assessing the reactions
of others, in so doing directing adequate reliance and producing collaboration. Lastly, trust
enables and expedites delegation and adaptive conduct by replacing inflexible etiquettes,
practices, hierarchies, and processes with objective expectancies concerning the capacities of
others (Baba, 2007; Kramer, 1999; Lee & See, 2004; Ostrom, 1998).
Like many concepts, trust has multiple levels, contexts, and influencers, as it does not
exist in a vacuum. The processes and interactions that influence trust are logical, systematic,
effective, and partly based on analogy (Lee & See, 2004). Three levels or contexts of trust aid in
the creation and evolution of trust: individual, organizational, and cultural. Individual context is
the measure of an individual’s traits, such as disposition to trust, adaptability to change, and
acceptance of new information or ideas, as well as their personal history with trust, whether
positive or negative (Lee & See, 2004). Organizational context looks at the connections,
exchanges, and communications between individuals that advise them on the credibility and
fidelity of others, which could incorporate character, status, and rumors. It also sways and
impacts the trust of individuals who did not come into contact with the trustee (Burt & Knez,
1996; Kramer, 1999). The characteristics of motive, intention, and action held by the trustee are
important constructs or building blocks of trust (Bhattacharya, Devinney, & Pillutla, 1998; Mayer,
Davis, & Schoorman, 1995). Organizational context can be affected by the solidarity and
robustness of the social network. It is also the ability of the user to apply their situation to the
experiences of their peers (Doney, Cannon, & Mullen, 1998). Cultural context is the measure of
trust using cultural expectations. Culture has many definitions and meanings. The research
defines culture as a body of rules that govern the interactions of groups and societies,
otherwise known as social norms. Culture exhibits mutual understanding, practices, and
capabilities relating to those groups and societies. Variances in culture and related
interpretations may lead to insecurity, doubt, and ambiguity. Additionally, culture at the
individual level or as a group can form attitudes and opinions that sway and shape the formation of trust.
Trust in technology is like the trust placed in human counterparts. Accordingly, several definitions of trust apply. The first is the inclination to enter a bonded association that creates or elevates vulnerability when relying on another individual or object, under the supposition that they will perform as expected. The second is a willingness to rely on a partner or system in whom one has confidence (Moorman, Deshpandé, & Zaltman, 1993). The last is a willingness to be vulnerable to the activities of another party regardless of having the capability to oversee or manage them (Mayer, Davis, & Schoorman, 1995).
The impact of trust on human-computer interaction, and on how humans interact with technology, is influenced by individual human differences, even when a condition is ambiguous and ruled by indiscriminate probabilities (Lee & See, 2004). A human's ability to trust depends on the past behavior of the individual or system (Deutsch, 1958). This can be correlated through multiple examples where technology enhanced by artificial intelligence learning agents has failed, producing a lack of trust and perceived ethical concerns (Miller, 2019). Some researchers have indicated
that programming or teaching artificial intelligence agents to be moral and have ethics may
offset the perceived existential crisis with this technology (Butkus, 2020; Farisco, Evers, &
Salles, 2020; Miller, 2019). Artificial intelligence ethics are considered to be in their infancy, and how to attain ethical disciplines within the development phase still needs to be conceptualized (Lo Piano, 2020). How, though, are morality and ethics to be programmed into artificial intelligence learning agents when moral virtue is derived from customs resulting from preceding deeds and undertakings (Aristotle, 1834)? Moral agency also requires exposure to a vast assortment of circumstances to discern if actions are ethical or unethical (Clara F.D. Costa Ames, 2022).
From an anthropocentric view, there are differentiations between human and artificial
intelligence reasoning and awareness that raises hypothetical obstacles and impediments for
artificial moral agency. As a result, there is uncertainty as to whether these artificial moral
agents should mimic and model human reasoning and awareness, leading to decision-making
(Butkus, 2020). Concerns exist with both preprogrammed and naturally learning artificial agents. Farisco et al. (2020) identified that essential components of human cognition are absent in current artificial intelligence agents, which could attest to the probable impossibility of translating them into specific artificial experiences, such as unsound reasoning and emotional experience. These absent components and challenges may provide roadblocks in ethical contexts (Farisco, Evers, & Salles, 2020). If artificial intelligence agents contain insufficiencies or discrepancies in emotional experience and processing, there is a potential that they may produce flawed decisions, endorsements, or advice, as there are challenges in optimizing for utilitarian logic, developing coded ethical controls, or allowing the agents to learn and self-actualize. One of these concerns is that, by trusting artificial intelligence learning agents to produce moral guidelines through observation, the agents may end up demonstrating the worst moral and ethical behavior of humankind (Butkus, 2020). This is currently seen in artificial intelligence learning agents embedded into various other technologies that exhibit accidental biases. These biases result from learning agents utilizing datasets containing bias, producing output that is unjustifiably biased and discriminating (Lloyd, 2018). Negligent and irresponsible inaccuracies and poor oversight in data collection, such as data mislabeling or
atypical representation of a group of individuals within the general population, have the potential to develop into the biases previously mentioned (Lloyd, 2018), along with other risks and concerns that are addressed later in this review of the literature.
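As a hypothetical toy illustration (not drawn from the cited studies), the mechanism described above can be sketched in a few lines: a trivial model trained on historically biased labels faithfully reproduces that bias in its predictions. The dataset, groups, and "model" below are invented for the sketch.

```python
# Hypothetical toy example: a label-frequency "model" trained on
# historically biased hiring outcomes reproduces that bias.
from collections import defaultdict

def train(records):
    """Learn P(hire) per group from (group, hired) training records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

def predict(model, group):
    """Predict 'hire' when the learned group rate exceeds 0.5."""
    return model[group] > 0.5

# Historical data: group A was hired 80% of the time, group B only 20%,
# even though applicants were equally qualified (mislabeled outcomes).
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = train(history)

print(predict(model, "A"))  # True  - the bias in the data is now in the model
print(predict(model, "B"))  # False
```

Nothing in the training step is malicious; the skewed labels alone are enough to produce systematically discriminatory output, which is the point Lloyd (2018) raises about data collection oversight.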
Butkus (2020), quoting Wallach and Allen (2008), stated that humans take a rare and distinctive approach to making decisions involving moral choices. These choices are refined and enhanced through time and adjusted or transformed by individualized experiences (Butkus, 2020; Wallach & Allen, 2008); this statement can be traced back to Aristotle's view in "The Nicomachean Ethics." Moral decision-making can be realized in part through the human cognitive framework, which was cultivated and grew slowly through socialization rather than coding. This process exposes humans to emotional prompts and the capacity to participate in an ethical process known as perspective-taking, a capacity not clearly available to artificial agents (Butkus, 2020). Perspective-taking is the capability relating to multiple aspects, including comprehending another person's discernible and expressive appraisal of a phenomenon.
There are vulnerabilities with both socialization and the cognitive framework. These include the desire for approval from a group, a substantial detestation of disagreements and tensions within a group, conformity to group values and ethics, and other group-related insufficiencies (Butkus, 2020). Morals are
essential for purposes and efforts that prompt or provoke ethical or social anxieties relating to,
but not limited to, defense, safety, or discernment about individuals (Miller, 2019). Moral
obligations must be isolated and extracted from all external factors methodically, so that those factors hold no influence or power over the individual's will and reason is the dominating force. Such purely rational moral agents are not achievable by humans. Paradoxically, it is more uncomplicated and
straightforward for technology such as artificial intelligence and learning agents to come close
to this seemingly unrealistic and venerated mechanism and medium than humans (Brożek &
Janik, 2019). Human history has shown growth in humanity's ability to reason and think using intellectual rationalization. This is validated by how humans perceive the world through their own phenomenological filters, deliberating and assessing whatever they observe, identify, and comprehend. They employ interpretations, conclusions, assumptions, and judgments, along with other rational and analytical associations, acting upon scenarios and situations based on those experiences and observations. This framework's apparent truth and genuineness identify why it, or something similar, should be used in artificial moral agents.
At what point in trying to achieve increased moral virtue and agency is the line crossed
into unethical behavior? Advancements in neuroscience and biotechnology have allowed for the manipulation of moral thoughts, reflection, and conduct in humans by changing
or overriding their biology using prescription medicine or medical techniques to encourage and
foster trust (Lara & Deckers, 2020). There is an explicit endeavor to distort and obscure the boundary between biology and the artificial, creating objects and personas with will and desire similar to humans' and artificial biological elements nearly indistinguishable from their own. While these creations may suggest a recognition of morals, they also raise the probability of moral conflict, even if a parallel path may yet be possible.
Several barriers exist to creating moral agency in artificial intelligence agents. These include limited empathy, the difficulty of building in or teaching inductive and abductive reasoning, the worry that artificial intelligence is becoming too human through self-evolving moral and ethical reasoning, and human ontological concerns arising from the perception that machines are capable of participating in moral reasoning (Butkus, 2020). However, it
is easy to romanticize artificial intelligence's dystopian and idyllic future when there is no
examination and consideration of the theoretical and real-world concerns they cultivate and
develop (Butkus, 2020). The changes and replacements required to bring about this dystopian or idyllic future touch a profound psychological aspect of the human condition instilled within everyone's core, whether recognized and acted upon or disregarded: meaning and existence. Humans have been convinced that their meaning of life, or their sole existence, is
based on and defined by their work (Lee, 2018). A change in perspective can overcome the fear of nonexistence or loss of meaning: a philosophy in which the existence and meaning of the human condition are love, compassion, creativity, empathy, and charity. These are the differentiators between humans and artificial intelligence, qualities that artificial intelligence cannot replace.
Governance
There are many reasons to regulate and govern artificial intelligence, among them the commitments, obligations, security, intellectual property, and privacy concerns related to the various robotics systems, unpiloted aircraft or drones, and driverless or autonomous automobiles. Governmental agencies have recently created controls (NIST, 2020a) or are currently attempting
to develop regulations (ISO, 2020; Lilkov, 2021). Machine learning has been coupled with game
theory to portray the extent and magnitude of risk-related deliberation. Developers utilize
game theory to teach learning agents strategic defense, in which more robust and more intelligent algorithms learn to defeat and eliminate weaker ones. Developing algorithms in this fashion reinforces the belief that autonomous systems will unavoidably and involuntarily encounter a
scenario where they must make a complicated and intricate ethical decision regarding whether
to obey or disobey a specific rule (Almeida, Santos, & Farias, 2020). Many different frameworks
govern and guide the actions and activities of all individuals and entities in society. The
discussion on controlling and managing artificial intelligence agents and the end product
enhanced with that technology is still open and heated. One of the reasons for the continual
discussion is that artificial intelligence agents that enhance technology are believed to be so technically evolved and cutting-edge that they must be considered liable
and accountable for their activities, behaviors, and deeds as an alternative to the individual,
organization, or entity that either designed or operated them (Bertolini & Episcopo, 2022).
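The game-theoretic, survival-of-the-fittest training described above can be sketched as a minimal self-play tournament. The game (rock-paper-scissors), the candidate strategies, and the culling rule below are hypothetical illustrations invented for this sketch, not a framework from the cited studies.

```python
# Hypothetical sketch of game-theoretic self-play: candidate strategies play
# each other repeatedly, and the weakest performer is culled each generation.
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play(move_a, move_b):
    """Return 1 if A wins, -1 if B wins, 0 on a draw."""
    if move_a == move_b:
        return 0
    return 1 if BEATS[move_a] == move_b else -1

# Each "algorithm" is just a move distribution; stronger ones are harder to exploit.
strategies = {
    "always_rock": lambda rng: "rock",
    "uniform_random": lambda rng: rng.choice(["rock", "paper", "scissors"]),
    "mostly_paper": lambda rng: rng.choices(
        ["paper", "rock", "scissors"], weights=[8, 1, 1])[0],
}

def tournament(strategies, rounds=2000, seed=0):
    """Round-robin scores; a higher score means more robust against the field."""
    rng = random.Random(seed)
    scores = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for _ in range(rounds):
                result = play(strategies[a](rng), strategies[b](rng))
                scores[a] += result
                scores[b] -= result
    return scores

scores = tournament(strategies)
weakest = min(scores, key=scores.get)  # culled, mimicking "defeat the weaker algorithm"
```

The predictable, exploitable strategy is eliminated first, which is the sense in which such training produces ever more robust algorithms at the expense of weaker ones.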
Discussions and subsequent debates centered on subjectivity and agency have dynamically
depicted the social, philosophical, and legal elements of artificial intelligence being human as
early as articles and discussions facilitated by Alan Turing in the 1950s. Numerous public and private actors have created declarations delineating influential high-level fundamentals, virtues, and other precepts meant to shepherd the ethical growth, placement, and governance of artificial intelligence (Mittelstadt, 2019). These debates and discussions intensify with events such as artificial lifeforms like Sophia receiving citizenship (Almeida, Santos, & Farias, 2020; Bertolini &
Episcopo, 2022; Gellers, 2020; Gunkel & Wales, 2021; Nyholm, 2018; Powell, 2020).
A framework based on unified regulations does not currently exist. However, should one be created, the governance framework will need to account for legal, ethical, and moral discipline among the separate and distinguishable notions of being a responsible subject. The idea of subjectivity spans the nature of being; the philosophical design that form must adapt to utilization, structure, and material, which stresses the interdependence of the patterns of society and their interrelationship in maintaining culture and social unity; and the theory that value, especially the moral worth of an action, should be based on an evaluation of its ramifications and consequences. Comprehending the differences between legal concerns and morality educates individuals conceptually on their interrelationships with other domains (Bertolini & Episcopo,
2022; Fossa, 2021). A thorough and universal perspective of the discussion on the personhood of artificial agents reveals an abundance of concerns about its status within society. There are three intersecting dimensions to that discussion. The first arises where disciplines such as engineering, computer science, law, philosophy, and sociology each hold their own abundance of perspectives and ideas concerning personhood; if these are to be reconciled, caveats must be identified and agreed upon to avoid potentially disadvantageous complications. Second, across fields such as history and criminology, to name a few, the discussion on subjectivity is structured over complementary and adverse entitlements concerning the moral and legal privileges that artificial intelligence agents could potentially be given (Bertolini & Episcopo, 2022). Third, the discussion is differentiated and noteworthy, predicated on the worthiness of the perspective and conviction regarding elevating technology to the standing of subjects as an approach to determining what humans deem or regard as a moral or legal issue. Collectively, these differences are significant because they function as indispensable mechanisms for the dissection and complete understanding of what demands are brought forth, for what intention, and upon what basis, building a rational, coherent framework (Bertolini & Episcopo, 2022). Effective governance also requires the proper number of individuals and uniformity of procedures (Lo Piano, 2020). Utilizing
example-based explanations in machine learning may assist with effective and efficient communication, reducing divergence among developers, subject matter experts in other domains, and dilettantes or individuals in society without specialized training or knowledge (Lo Piano, 2020; Molnar, 2020).
Another governance model, presented as an answer to providing controls and oversight for exploratory technologies, was put forward for further discussion. This framework style has origins in a previous endeavor to encompass and incorporate all approaches into homogenized, assimilated networks of guidance, mentoring, and dealings spanning all domains, individuals, and stakeholders (Almeida, Santos, & Farias, 2020; Lo Piano, 2020).
Another model, the Interactive Governance Model for technology development and legal formulation, highlights the characteristics of individuals and stakeholders, along with the need for continual education and learning. Additionally, there is a requirement for a slow, incremental evolution of a lawful, moral, and ethical framework. Among the primary advantages of this hybridized artificial intelligence governance model is that it integrates top-down with bottom-up regulatory activities in a gradual manner, consequently reducing the risk of regulations remaining in a perpetual state of modification (Almeida, Santos, & Farias, 2020). Another governance model is
"society-in-loop." The flexibility and efficacy of this model come from individual user
29
intelligence goods and services, "society-in-loop" has the potential to become a governance
tool for technology consumers within society to regulate and be proactive in identifying those
components. A benefit of the model is a potential reduction in clashes and discordances among
safety, security, privacy, and equity-related notions, generalizations, and conceptions (Almeida,
has the ability to enrich individual experiences for humankind. However, there are risks as it will
break down and bring about physical and non-physical harm. Non-physical harms include monetary loss, the expression of human bias, and the deterioration of human dignity. These shortcomings can create
unequal and disparate influence and effectiveness due to novel, unconventional, fluctuating, or
erratic dangers, which could potentially lead to demoralization and, ultimately, humanity's
rejection of the technology. There are two paths to mitigating the risks: the first through complex regulation and the other through communicating and being transparent about the workings of artificial intelligence and learning agents. Risks, threats, and dangers can come in many forms, such as
privacy violations, bias resulting from data accuracy and cleanliness, societal implications,
inequality, lack of transparency, operational safety, job loss, stock market volatility, autonomy,
agent learning, bias in employment, artificial intelligence algorithms used to map out police
coverage not correctly optimized, enhanced military weaponry, and many more (Baryannis et
al., 2019; Eliacik, 2022; Hasan, Shams, & Rahman, 2021; Mannes, 2020; Neri & Cozman, 2020).
Over the last several years, multiple incidents involving technology enhanced by
artificial intelligence learning agents have occurred. Some of those incidents are the stock
market crash brought on by trading tools integrated with artificial intelligence learning agents
(Kirilenko, 2017), automated medical prescription tools providing the wrong dosage to patients
(Chen, 2018; Yigitcanlar et al., 2020), incidents involving autonomous vehicles placing
passengers and pedestrians at risk (NTSB, 2019; Yigitcanlar et al., 2020), along with several
incidents that have raised concerns regarding privacy, public safety, and trust. Additionally,
societal implications have been identified by leaders such as Stephen Hawking, Bill Gates,
Elon Musk, and several academic researchers who have been warning the nation's leaders for
more than a decade that caution is needed and that artificial intelligence development can be
dangerous (Eliacik, 2022; Gibbs, 2017; Marr, 2018; Thomas, 2022; Yampolskiy, 2016). Elon Musk
stated in an interview in 2018 that he believes a catastrophic incident could occur within the
next five years (Marr, 2018). Just a year earlier, Musk stated at the US National Governors Association that "artificial intelligence is a rare case where I think we need to be proactive in regulation instead of reactive," because, in his view, by the time regulation is reactive, it is too late.
Risk comes in many forms regarding artificial intelligence and learning agents, though the question remains as to why these incidents keep occurring. One school of thought is that incidents continue to occur due in part to the lack of transparency of machine learning agents and algorithms. The opaqueness of these agents and algorithms makes it unfeasible and unmanageable to establish whether their outputs are influenced or flawed (Majumder & Dey, 2022; Pancake & ACM US Public Policy Council, 2017; Tjoa &
Guan, 2021). The Association for Computing Machinery created seven principles for
transparency and accountability that align with their code of ethics and provide guidance during every phase of system development (Pancake & ACM US Public Policy Council, 2017).
Explainability can further reduce risk and add clarity. Explainable artificial intelligence, or XAI, is a collection of techniques, approaches, and methodologies that give users, engineers, and data scientists the ability to decipher and understand the output and results generated by artificial intelligence agents built with machine learning algorithms, in hopes of building and expanding trust (Bunn, 2020; Gieling, 2022; Gunning & Aha, 2019; Langer et al., 2021; Ridley, 2022).
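As a hypothetical illustration of one common XAI technique (not drawn from the cited works), permutation importance scores a feature by how much shuffling it degrades a model's accuracy. The toy model and data below are invented for the sketch.

```python
# Hypothetical XAI sketch: permutation importance on a toy black-box model.
# A feature the model truly relies on loses accuracy when shuffled.
import random

def model(x):
    """Toy 'black box': predicts 1 when feature 0 exceeds 0.5; ignores feature 1."""
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, seed=0):
    """Drop in accuracy after shuffling one feature column across rows."""
    rng = random.Random(seed)
    column = [row[feature] for row in data]
    rng.shuffle(column)
    shuffled = [row[:feature] + [v] + row[feature + 1:]
                for row, v in zip(data, column)]
    return accuracy(data, labels) - accuracy(shuffled, labels)

rng = random.Random(42)
data = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if x[0] > 0.5 else 0 for x in data]  # labels depend only on feature 0

imp0 = permutation_importance(data, labels, 0)  # large: the model depends on it
imp1 = permutation_importance(data, labels, 1)  # zero: the model ignores it
```

Even when the model itself is opaque, such scores let a user see which inputs actually drive its outputs, which is the kind of clarity the XAI literature cited above aims for.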
Despite all the risks, dangers, and fears of artificial intelligence, many individuals have
faith and confidence in their belief that as a technology, artificial intelligence and its numerous
agents and algorithms could potentially aid in solving many of society's most tenacious and persistent concerns (Brooks, 2019; Eliacik, 2022) across many industry sectors, such as marketing, among numerous others (Pedamkar, 2019). Benefits from artificial intelligence
and its agents can be divided into two groups: technical and financial. These groups are offset
and balanced by legal, social, and ethical concerns (Stahl et al., 2021).
Many wondrous benefits can be attained by utilizing artificial intelligence and its
associated agents and algorithms. Many of the risks previously discussed were at one time expected to benefit humankind (Baryannis et al., 2019; Eliacik, 2022; Gibbs, 2017;
Hasan, Shams, & Rahman, 2021; Lee, 2018; Mannes, 2020; Marr, 2018; Neri & Cozman, 2020;
Starck, Bierbrauer, & Maxwell, 2022; Thomas, 2022; Yampolskiy, 2016; Yigitcanlar et al., 2020).
To provide guidance, the discussion on the benefits of artificial intelligence is divided into several areas.
The artificial intelligence revolution will bring enormous, extraordinary wealth and will replace much human labor with robotics (Lee, 2018). This replacement will occur in factory work, truck driving, customer service, telesales, radiology, and hematology over the next ten to fifteen years (Lee, 2018).
Artificial intelligence contributed more than two trillion dollars to the economy in 2018, and PwC predicted that its contribution would grow to 15.7 trillion dollars by 2030 as mainstream adoption continues (Pedamkar, 2019). The growth in fiscal contribution may result from the overall cost of artificial intelligence becoming less expensive over time. One example is technology enhanced by artificial intelligence in the healthcare industry (Wolff et al., 2020). Once the case for adoption is made and approved, the impact of artificial intelligence on the healthcare system is
enormous. One impact is bionic technology enhanced by artificial intelligence agents, also
known as smart bionics (Joshi, 2020). Artificial intelligence-enhanced bionics can fulfill the
medical needs of millions of individuals suffering from cardiac disease, diabetes, pancreatic and
other cancers, spinal cord injuries, blindness, and traumatic brain injuries (Giles, 2014; Houser,
2021; Joshi, 2020; Powell, 2018; Stokes, 2019). These medical benefits, however, come with a dark concern regarding the social and ethical implication that the availability and affordability of these devices could be seen as discriminatory and socially irresponsible (Salleh, 2008), as only the wealthy have access and the means to pay for them. This may never change, as organizations and their leadership are not altruistic in doing business. It has also been stated that true altruism is non-existent, as
psychological egoism says all human action is motivated and guided by self-interest. This means that those who assist others do so to fulfill their own psychological needs, so long as voluntarily doing good for others does not come at a personal sacrifice or perceived hardship (Kraut, 2020). Artificial intelligence offers additional medical benefits as well. Beyond healthcare, organizations face a high failure rate in today's society, and technology is an integral component essential for supporting, growing, and sustaining the organization (Farhanghi, Abbaspour, & Ghassemi, 2013). To
support growth and sustainment, organizations are investing in tools and platforms based on
organizational strategy (DeLoatch, 2018). Platforms include big data analytics, cloud
technologies, and robotics (Foote, 2021; IBM, 2021; NIST, 2019, 2020a; Oracle, n.d.; Paiva,
2020; Pallis, 2010; SAS Institute, n.d.). An essential reason for investing in big data analytics is
to achieve more accurate, predictive, and real-time possibilities (Hiter, 2021). However, data
analytics brings to light a societal risk in that data sets could potentially contain inaccurate data
and have programmed and learned biases, along with the protection of private information
(Hillier, 2021; Lloyd, 2018). Robotics brings about several social advantages and benefits, such as preventing violent acts through facial recognition and reducing job hazards associated with dangerous work.
The reviewed literature addresses each component of the conceptual framework individually, though not as a whole. Upon conclusion of the literature review, gaps exist in how governance, ethics, morals, and risks impact an individual's ability to trust in artificial
intelligence and its associated agents. Researchers like Dr. John D. Lee and Dr. Katrina See
started the conversation in 2004 when discussing trust in association with automation. Lee and See's research, however, is nearly two decades old and has not kept pace with the technology. Additionally, a gap exists as to whether the numerous issues in the nightly news associated with artificial intelligence overshadow the potential benefits the technology brings to society, or whether, conversely, the benefits surpass the total risk. Furthermore, there was a gap in the literature indicating whether current governance efforts are effective.
Conclusions
The amount of literature on artificial intelligence and its agents is immense. However, it
is fragmented and siloed (Clay, 2018; Cox, 2021). The research in this study attempts to
defragment and remove the literature silos, creating a holistic view of artificial intelligence. As
outlined, literature falls into silos that prevent identifying, interconnecting, and relating
information. One of the reasons identified for this fragmentation and siloed literature is that
artificial intelligence is classified into sixteen categories (Oke, 2008). Interconnecting areas or
components will determine how much or how little governance, ethics, morals, and risks impact an individual's ability to trust in artificial intelligence.
Chapter Summary
This chapter examined the interrelationship between trust, morality and ethics, governance, risks, and benefits of artificial intelligence agents that comprise the body of literature, asking whether governance and the overall benefits of this technology outweigh its risks and, if so, what the impact is on trust. Previous research has shown that trust consists of multiple layers: individual, cultural, and organizational, which are influenced through interactions that are logical, systematic, effective, and analogous. Utilizing the contextual layers of trust and their influencers aids in understanding the layer in which mistrust of this technology develops and is then proselytized to others.
As noted earlier, David Marr (1977) stated that artificial intelligence is an example of a human endeavor that began with a commitment of faith rather than results. In the forty-six years since Marr made that statement, there has been a plethora of results. Results
that are both positive and negative, as previously identified. In this chapter, numerous
examples have shown how principles guiding the organization and representation of knowledge
point to and capture the universal disposition of humanity’s intellectual capabilities (Marr,
1977).
Trust is impacted and influenced by the creation and evolution of individual, organizational, and cultural contexts (Lee & See, 2004). It is possible that trust is lost by violating these contexts. Violation could occur through perceived impacts on social well-being as witnessed through the media, impacts like those previously identified: limited or absent ethical or moral decision-making or actions (Butkus, 2020; Clara F.D. Costa Ames, 2022; Farisco, Evers, & Salles, 2020), governance issues (Almeida, Santos, & Farias, 2020; Bertolini & Episcopo, 2022; Mittelstadt, 2019), and risk and social harm issues identified by previous researchers (Baryannis et al., 2019; Eliacik, 2022; Hasan, Shams, & Rahman, 2021; Mannes, 2020; Neri & Cozman, 2020; Starck, Bierbrauer, & Maxwell, 2022; Thomas, 2022).
The problem is the need to understand whether improving governance oversight, reducing and understanding risk, and the potential or promised benefits of artificial intelligence impact the
public’s trust in artificial intelligence as inferred by previous research (ISO, 2020; NIST, 2020a;
WhiteHouse, 2019). The purpose of this qualitative exploratory study was to explore the
interrelationship of risks, benefits, and governance of artificial intelligence and its associated
learning agents to determine how they impact an individual's ability to trust artificial
intelligence. The conceptual framework that guides this study details the interrelationships of
the moral and ethical components affecting trust, possibly through governance, risks, benefits, and societal views on artificial intelligence. As previously explained, the study explores the interrelationship and correlation of risks, benefits, and governance of artificial intelligence learning agents. Implementing information assurance controls should reduce the risks posed by artificial intelligence learning agents. Does improving governance oversight, reducing and understanding risk, along with the potential or promised benefits of artificial intelligence, impact the public's trust in artificial intelligence?
This chapter will cover the chosen methodology of the study, along with the design and the methods for data collection. It will introduce and explain why I decided on the qualitative research tradition and an exploratory design. It will also discuss population selection, sampling, and data analysis.
The research tradition I chose for my dissertation study is qualitative; the initial reason for selecting this tradition was the belief that the literature on the study topic was limited due to a perceived notion that the technology was newer than it is. The qualitative methodology allows this viewpoint through a descriptive, open-ended survey questionnaire approach. This, in turn, enables me to explore the phenomenon from different outlooks. How decisions were made in this dissertation study is based on the philosophy of axiology, which includes
ethics and moral principles (Given, 2008) through ethics, judgments, values, and aesthetics,
which construct the research approach (GuhaThakurta & Chetty, 2015; Thornhill, Lewis, & Saunders). My philosophical worldview also draws on constructivism, as there are elements in it that resonate with me based on the definitions (Marleneanu, 2013).
Alignment ties together essential components from the introduction, problem and purpose statements, research question, hypothesis, and methodology (Boitnott, 2022), using the same planned claims, lexicon of jargon, and expressions, along with theories, beliefs, and expectations (Price, 2016). Through alignment, consistency is established between theory, actions, and measurements, which provides validity to the research or study
(Hoadley, 2004). Since design-based research is an approach and not a method (Herrington et
al., 2007), it can be utilized for quantitative or qualitative research, the two prevalent research
methods in computer science (Hazzan et al., 2006). The rationale for alignment is that the
research process becomes narrowly focused and clear; it eliminates needless work outside the
topic while producing clarity and understanding (Gavin, 2016; Jones, 2020). Alignment to a
particular degree program ensures that, as doctoral candidates, we can conduct discipline-specific research in the given field within an appropriate amount of time, providing a timely completion.
The chosen design for my dissertation study is exploratory. The exploratory design
supports examining various aspects of a problem (Romero, 2020). It suits cases where the problem has not been previously studied in depth, literature is limited, new information is being discovered, or the intention is not to provide a definitive solution (Dudovskiy, 2018; Given, 2008; Jupp, 2006; Romero, 2020). Exploratory design works well across the continuum of research traditions spanning old and new paradigms while utilizing a hybrid approach for collecting data (Edmondson &
Mcmanus, 2007).
The conceptual framework's purpose and function support the methodology governing the dissertation (Osanloo & Grant, 2016). As my dissertation study looks to determine, in part, the how and why, the plan follows methods suitable for a computer science dissertation (Hazzan et al., 2006). The data sampling was established on non-probability selection based on a fixed process to limit sample bias and increase the quality of the data (QuestionPro, 2021). Also, sampling will employ a convenience technique, as participants are easy to access (StatisticsSolutions, 2021). As far as data analysis methods,
there is no single perfect analysis method; therefore, the plan is to utilize triangulation through
content analysis of participants’ responses from multiple groups (Warren, 2020). Triangulation
was selected because it uses two or more QDA methodologies (Warren, 2020), allowing me to corroborate themes across participant groups. The study of the risks, benefits, and governance of artificial intelligence and its agents falls directly in line with the Doctor of Computer Science with a concentration in cybersecurity and information assurance. As laid out previously, artificial intelligence agents are a part of many applications and tools used in everyday life. As the adoption of these learning agents has increased, so has the
perception that artificial intelligence is the killer of humanity. This “killer of humanity”
perception comes partly from a weakened level of trust resulting from numerous incidents
elevating the risk of this technology. The security of the public’s private information is a critical concern.
The target population for the study was split between members of general society and technology professionals, including those in artificial intelligence development, and artificial intelligence innovators. Further, the target population of technology professionals was accessed through LinkedIn. Given this additional constraint, the accessible population was approximately 100,000.
Using a cross between purposive and convenience sampling methods, a sample of 112 respondents, dispersed equally across the previously mentioned technology-related fields and members of general society, was selected. However, data saturation was the ultimate endpoint assessment for the sample size. The sampling of data was established on non-probability selection based on a fixed process to limit sample bias and increase the quality of the data (QuestionPro, 2021). Sampling will employ a convenience technique because participants are easy to access.
Participant recruitment will occur after attaining approval from Colorado Technical
University’s (CTU) Institutional Review Board (IRB). CTU’s IRB is a committee that reviews and
monitors research involving human subjects to ensure it is ethical. Upon receiving this approval,
recruitment will occur through professional groups within the LinkedIn social media platform
using a recruitment flyer to target the technology professionals and through SurveyMonkey
using their participant pool and process to target the general members of society.
As my dissertation study is looking to determine, in part, the how and why, the plan for
data collection is to use a qualitative survey to collect participant feedback. Qualitative surveys are well suited to a computer science dissertation (Hazzan et al., 2006). The instruments I will use for data collection are the
qualitative survey questions outlined in Appendix A. The survey questions were designed
following the conceptual framework and identified literature outlined in chapter two.
Additionally, I reviewed several other dissertations to determine the proper formation of the
questions. All survey questions will undergo an expert review process where the subject matter
expert will critique them to ensure that the concepts are utilized correctly and the language is clear. Recruitment will occur through LinkedIn professional groups and a private pool of survey participants from SurveyMonkey that meet the survey
criteria. Upon receipt of interest from respondents of the survey flyer, an email invite with the
URL was sent to the prospective participant. The email invite was the only way to authenticate
a participant and access the survey. Participants must agree to the informed consent
agreement before seeing the survey questions. Certain logic is embedded into the platform that
will hide questions based on answers received from survey criteria questions. The data
collected is personal and reveals the respondent's feelings, values, and beliefs. The design of
open-ended questions allows the respondent to provide more than a yes or no response.
All participants were given a pseudonym to protect their identity. Data was stored
securely by the researcher offline, as discussed in the following ethical assurances section. The
following list is a numerical process flow of what was needed for data collection:
B. IRB approval
D. Posting the recruitment flyer on LinkedIn professional interest groups per the LinkedIn
User Agreement. Interested members will contact me, at which time I will provide them
with the survey link through SurveyMonkey. Interested individuals will agree to the
informed consent form as the first step of the survey process before collecting data.
members will provide consent to the study before beginning the qualitative survey.
F. Refreshing the recruitment flyer daily to ensure it stays at the top of the discussion
feeds.
raw data.
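The pseudonym assignment described above can be sketched as a small routine. The ID format ("P-001"), field names, and records below are hypothetical illustrations, not the study's actual data-handling code:

```python
# Sketch of the pseudonymization step: participant names are replaced with
# sequential IDs, and the name-to-ID key is kept separately and securely.
# The "P-001" format and field names are assumed for illustration.

def pseudonymize(records):
    """Return (de-identified records, key table mapping ID -> name)."""
    key_table = {}
    deidentified = []
    for i, rec in enumerate(records, start=1):
        pid = f"P-{i:03d}"
        key_table[pid] = rec["name"]
        clean = {k: v for k, v in rec.items() if k != "name"}
        clean["participant_id"] = pid
        deidentified.append(clean)
    return deidentified, key_table

participants = [
    {"name": "Alice Example", "group": "technical", "q1": "Yes"},
    {"name": "Bob Example", "group": "non-technical", "q1": "No"},
]
data, key = pseudonymize(participants)
print(data)  # names removed, sequential IDs attached
```

In practice, only the de-identified records would be analyzed; the key table would live in the locked offline storage described in the ethical assurances section.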
Two data coding methods were selected: open and axial. Open coding is the initial phase of Qualitative Data Analysis (QDA), creating the categories. Axial coding then follows, making the linkages and further expounding on the data using inductive and deductive theory until a pattern is identified (Sage, 2021).
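As a rough illustration of this two-stage process, the sketch below performs a keyword-based open-coding pass and then an axial pass that links the open categories into broader themes. The keywords, category names, and theme links are invented for illustration and are not the study's actual codebook:

```python
# Sketch: open coding assigns initial category labels to responses;
# axial coding then links related categories into broader themes.
# Keyword stems and theme names are illustrative only.

OPEN_CODES = {
    "trust": ["trust", "reliab", "confiden"],
    "risk": ["risk", "danger", "harm", "unsafe"],
    "governance": ["regulat", "oversight", "governance"],
    "ethics": ["ethic", "moral", "fairness"],
}

AXIAL_LINKS = {  # axial step: group open codes into themes
    "trust_and_safety": {"trust", "risk"},
    "control_and_accountability": {"governance", "ethics"},
}

def open_code(response):
    """Assign initial open codes by simple keyword matching."""
    text = response.lower()
    return {code for code, stems in OPEN_CODES.items()
            if any(s in text for s in stems)}

def axial_code(codes):
    """Promote a set of open codes to the linked axial themes."""
    return {theme for theme, members in AXIAL_LINKS.items()
            if codes & members}

for r in ["I don't trust AI because the risks are unclear.",
          "We need stronger regulation and ethical oversight."]:
    codes = open_code(r)
    print(codes, "->", axial_code(codes))
```

A tool such as MAXQDA performs this far more flexibly, with human judgment at each step; the sketch only shows how open categories feed the axial linking stage.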
The utilization of automated analysis tools provides numerous benefits. For this dissertation, MAXQDA’s features were used to aid in the literature review process.
The following is a numerical process flow of what was used for the data analysis.
A. The types chosen for this study are open coding and axial coding. Open coding was the initial type, used to familiarize me with the data, categorize it, and make initial interpretations. Axial coding allowed me to refine and reduce the categories, identifying data
C. I read the collected data using the open and axial coding types and color-coded the
D. I determined the coding categories and associated color schema for the open type.
E. I converted open coding to axial coding, creating the linkages and further expounding on the data.
G. I reevaluated the collected data using the processes provided within the MAXQDA tool.
Trustworthiness
Rigor and trustworthiness are provided through a scientific methodology that aims to give a strict purview of the research question. This scientific methodology means that the researcher explores, chooses, and controls the best evidence for the research, fitting the evidence to a particular set of rules that uses a precise approach to minimize bias and produce reliable findings (University of Limerick, 2018). A critical factor in reducing bias is rigor; more rigorous studies result in less bias than an observational study, where the estimate of effects is more substantial, though heavily biased (Bruce & Mollison, 2004). By employing a rigorous scientific and systematic research study and review of the evidence, the study establishes four criteria of trustworthiness: credibility, dependability, confirmability, and transferability (Guba, 1981). Each of these terms
aligns with four aspects and scientific representations of trustworthiness. However, sometimes a fifth criterion, reflexivity, is utilized (Stenfors, Kajamaa, & Bennett, 2020). The following subsection headers use the mapping of Naturalistic – Aspect – Scientific Term as identified by Guba (1981).
Credibility is recognized through the researcher's ability to align theory, the research
question, collected data, analysis, and the results (Stenfors, Kajamaa, & Bennett, 2020). Within
the alignment framework, the researcher must ensure a proper sampling strategy and the broadness and depth of the literature, and systematically apply an analytical mindset that
continually critiques the data while looking for gaps and applicability.
Credibility demands that the chosen methodology be well explained and supported, and that the collection methods and the amount of information be appropriate for that methodology. The confirmability audit can be conducted with the dependability
audit; the researcher validates that the data and interpretations are supported by coherent
material in the audit trail representing more than “figments of the researcher’s imagination”
(Guba & Lincoln, 1989). Credibility will also be obtained by asking the same questions of each participant consistently. The purpose of ensuring data availability and consistency is so that another
researcher following the same procedural steps can reproduce and replicate the research. This
is achieved through carefully monitoring the design and maintaining a thorough audit trail. The
audit trail requires a comprehensive account of research events and processes, influences on
data collection and analysis, and developing ideas, classifications, prototypes, and analytical
patterns (Universal Teacher, n.d.). However, replicability is impossible as we cannot assess the
same data twice. By definition, if we assess twice, we have evaluated two different data sets.
Thus, to approximate reliability, scholars must assemble a variety of theoretical concepts to get
Confirmability implies the extent to which the results can be confirmed and
corroborated (Trochim, 2020). It is often recognized through the researcher's ability to link and
clearly show how the collected data relates to the findings. The participants shape those
findings more than the researcher (Statistics Solutions, 2017). This can be accomplished through
descriptive details and quotations from the authors of the collected literature outlined in an
audit trail documentation. Along with audit documentation, reflexivity is required. To ensure
confirmability, another researcher can review the audit trail and play the “Devil’s Advocate”
role, contradicting the findings and recording these contradictions in the audit trail (Trochim,
2020). From there, the original researcher can build the rebuttals and additional research into
the study. However, since I am the only researcher, I will play both roles.
Transferability – Applicability
Transferability is often established through “Thick Description” (Geertz, 1973); otherwise stated, the descriptive detail of the context in which the research process was developed and how this process shaped the study's overall findings. This detail allows the removal of contextual variations irrelevant to the results, allowing relevance in any setting (Guba, 1981). Removing these
variations and knowing a great deal about the transferring and receiving conditions provides a
fittingness that will enable the study’s findings to be transferred to another setting, group, or
context.
Reflexivity
Reflexivity is sometimes treated as a fifth criterion; however, it is just as important. Reflexivity allows the researcher to provide continual reflection
(Dixon, 2018) through introspective thought and contextually interrelating the participants,
data, and themselves. Reflexivity increases credibility and deepens the reader's understanding
of the study (Dodgson, 2019). Reflexivity can be separated into dualistic encampments;
prospective and retrospective (Attia & Edge, 2017; Edge, 2011). Prospective signifies the impact
the scholar has on the research. Retrospective represents the research's impact on the scholar
(Attia & Edge, 2017; Dixon, 2018). As briefly mentioned, reflexivity aids in increasing credibility
through the reduction of bias. Due to the subjective nature of qualitative research, reflexivity is
crucial as a scholar’s bias could influence the study in various ways. Bias can seep into the
study, starting with creating data gathering tools, collecting data, then analyzing and reporting
it (Dixon, 2018).
Ethical Assurances
The most popular way to define ethics is the standards for behavior and conduct that
differentiate between appropriate and deplorable conduct (Resnik, 2011). Ethics and morals
are not straightforward as one may think, as personal values and life experiences add to the
interpretation of ethics and morals. To further muddy the waters, ethics may focus on a
research discipline and then be defined based on a particular method, perspective, or procedure for analyzing complex issues. Individuals will conclude that their perspective is the most ethical and
moral (Resnik, 2011). Ethics promotes truth, knowledge, and error avoidance; falsifying data or actions is deplorable and leads to a lack of trust and integrity. Trust, ethics, morals, and
integrity are essential for collaborative research (Resnik, 2011). Ethical dilemmas may occur,
and a novice or even a seasoned researcher may encounter them during data collection or
analysis. Some of these encounters have severe ethical and legal implications. Handling these
ethically important moments is known as ethics in practice (Guillemin & Gillam, 2004).
The study includes trust, ethics, and morals, extending beyond practice-based issues. It
touches on the core of societal values; therefore, it is a must to ensure that the assurances
discussed include a synopsis of what ethics and morals mean. Additionally, it ensures that
future researchers will understand the value this study has on the technology industry and
society.
Title 45, part 46, is the policy for protecting human subjects, otherwise known as the
‘Common Rule.’ This 1991 policy was based on the 1979 Belmont Report. This policy provides
the necessary provisions for schools, informed consent of subjects, and compliance assurance
(Health and Human Services, 2016). As stated by the National Science Foundation (2020), this
rule's importance is that people should not be participating without knowledgeable permission
and that subjects should not sustain an elevated risk of mental or physical impairment from participation. The importance of human subjects protection (HSP) is that by using it, I, as the researcher, minimize the likelihood of any
potential risk and harm to the participants (White, 2020). Including HSP in research aids in
ethical research by providing a framework that benefits both the researcher and the
participants, shows respect, and provides justice (NCBI.gov et al., 2016). Using the definition of
“No More Than Minimal Risk” as defined by the National Institute of Mental Health (N.D.),
which states minimal risk means that the likelihood and extent of damage, distress, or anxiety
are not greater than those encountered in their everyday routines in terms of physical and psychological well-being. The “No More Than Minimal Risk” approach is vital as it protects the participant, the researcher,
and the school. Not following this approach places the participant in undue distress and
discomfort and could have long-term physical and mental effects extending past the research's
initial timeframe.
To accomplish the points documented here, I will provide and collect informed consent
from participants. Participants will be apprised of the potential risk to their privacy and that I will mitigate that risk by assigning each participant an ID instead of associating the data with their names. Data is secured and stored for seven years in a locked safe that only I can access. After seven years, the data will be destroyed. Participants were informed that there is
likely no direct benefit to them for participating in the study but that the findings may add to
the body of knowledge. The Colorado Technical University Institutional Review Board will
Chapter Summary
To summarize Chapter Three, information was provided on the methods of data collection and analysis. Identification was made of automation tools, such as MAXQDA, used to
analyze survey questions outlined in Appendix A. An additional discussion was held on rigor,
trustworthiness, ethics, and ethical dilemmas. The outcome of this discussion is that rigor and
trustworthiness are attained through reflexivity and other methodology that provides a strict
perspective of my research question. Ethics and ethical dilemmas were addressed through axiology and reinforced through the golden or common rule outlined within the IRB process.
Chapter 4: Findings
In this chapter, the findings from this qualitative study are outlined for review. The
purpose of the qualitative survey was to aid in understanding the purpose of this study, which
is to explore the interrelationship of risks, benefits, and governance of artificial intelligence and
its associated learning agents to determine how they impact an individual's ability to trust
artificial intelligence. The perception being explored is society's belief that there is an
existential crisis in humanity (BigThink, 2020; Gibbs, 2017; Roberts, 2021), culminating from
numerous issues concerning public safety, privacy, and other societal implications and whether
this perception affects a person’s ability to trust and accept the technology. This purpose aids in
furthering the grasp of the underlying problem, that is, the need to know if improving
governance oversight, reducing and understanding risk, and the potential or promised benefits
of artificial intelligence impact the public's trust in artificial intelligence as inferred by previous
research (ISO, 2020; NIST, 2020a; WhiteHouse, 2019). This problem has been identified and
supported by standards organizations such as ISO (ISO, 2020) and NIST (NIST, 2020), along with
The target population for the study was split between members of general society and technology professionals, including those in artificial intelligence development, and artificial intelligence innovators. Using a cross between purposive and convenience sampling methods, 112 respondents were selected.
Table 1 presents data collected from the completed survey by study participants. The
data is presented in aggregate form for the entire sample and then further split into groups of
technical and nontechnical respondents. Demographics of gender, age group, and regions of
the continental United States are provided in Table 1. The most common age range selected
was greater than 60 years of age. The gender distribution varied by less than 4%. However, in an effort to collect as much data as possible, the results for some themes include responses from partially completed surveys where the responses provided valuable and usable data. Incorporating data from incomplete surveys provides an inclusive and axiological sample. With partially completed surveys included, the total sample is 212, mixed between technical and non-technical groups. In the spirit of exploratory qualitative studies, numerical values are not
the focal point. However, the numerical figures provided are to illustrate the perspectives of all
participants.
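The percentage figures reported throughout this chapter (e.g., "82.4% (N=14)") can be reproduced from raw response counts with a small routine like the one below. The counts shown are placeholders, not the study's data:

```python
# Sketch: format per-group response counts as "xx.x% (N=k)" strings,
# matching the reporting style used in this chapter. Counts are placeholders.

def report(counts):
    """Return each response option as a percentage of the group total."""
    total = sum(counts.values())
    return {option: f"{100 * n / total:.1f}% (N={n})"
            for option, n in counts.items()}

technical = {"understands_ai": 14, "does_not": 1, "no_response": 2}
print(report(technical))
```

Computing the narrative and table figures from one set of raw counts helps keep the two consistent.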
Table 1
Results
E. Word of mouth and media – its influence and impact on artificial intelligence,
Results are presented to contrast varying perspectives. Not all participants responded to
all questions. Those who did not respond are not noted in the narrative that follows. As stated
above, in some instances, more respondents answered questions, thus showing a difference in totals. Participants were asked about their understanding of artificial intelligence technology and their beliefs on what societal and human rights issues may
arise from it. In the technical group, 82.4% (N=14) indicated that they understood what artificial
intelligence is, while 5.9% (N=1) did not, and one participant did not respond. In comparison,
59.6% (N=56) of non-technical workers also understand what artificial intelligence is, 28.7%
(N=27) did not, and the remaining twenty-seven did not respond.
To further explore their understanding of artificial intelligence, participants were asked if they knew the difference between artificial
intelligence in research and implemented into everyday products versus artificial intelligence
represented on television and in the movies. Interestingly, technical workers were split evenly between knowing there is a difference and believing there is no difference, with 11.8% (N=2) each, while 17.6%
(N=3) of technical workers indicate that fictional artificial intelligence is a prelude to future
development. In comparison, those who do not work in technology showed that 26.5% (N=25)
understand there is a difference, 9.6% (N=9) indicate there is no difference between the two,
and 23.4% (N=22) believe that fictional artificial intelligence is a prelude to what is to come with
the remaining fifty-four participants had no thoughts on the subject. To support this further,
respondent 27 indicated that fictional artificial intelligence is more intelligent than real artificial
intelligence, where unreal or fictional artificial intelligence provides the impression that it
knows something. In contrast, real artificial intelligence only predicts events based on coded or
developed algorithms and datasets provided as training data. However, many respondents see
fictional artificial intelligence as a prelude and foreshadowing of what will come from the
industry.
When asked whether artificial intelligence will cause some form of social issue or crisis for humanity, 11.8% (N=2) of participants who work in the
technology industry indicated that they do not foresee any human rights issues in comparison
to 16% (N=15) of those who do not work in the technology industry. Additionally, 51.1% (N=48)
of non-technical workers indicate some type of social implication or human rights issues, in
contrast to 47.1% (N=8) of technology workers. Many respondents identified that they are
concerned with the number of negative issues that have been reported resulting from artificial
intelligence. Several of these respondents indicated that the technology would continue to
grow unchecked and become too powerful to control by the time the need for control is realized, because artificial intelligence will not care about who should be protected and lacks the forethought to consider the potential outcomes of its decisions.
Ethics and morals in artificial intelligence research and development provided varied responses. Respondents noted that artificial intelligence can only be as moral and ethical as those who research, develop, control, and utilize the technology. In the technical group, 35.3% (N=6) believe ethics and morals are essential in researching and developing artificial intelligence; however, 23.5% (N=4) believe that the utilization of ethics and morals should be increased.
Interestingly, only 5.9% (N=1) of technology workers believe that ethics and morals are
currently being used, while 23.5% (N=4) believe that it is not used.
In comparison, 34% (N=32) of respondents who do not work in the technology industry
indicated that ethics and morals are essential to research and development, while 33% showed
that the utilization of ethics and morals should be increased, though one respondent (1.1%) said ethics and morals are unimportant. Interestingly, only 2.1% (N=2) of participants who do not work in the technology industry believe that ethics and morals are currently being used.
When questioned about potential ethical issues caused by artificial intelligence, the most common concerns involved proprietary data and ownership infringement, as shown below (see Table 2).
Table 2
Though when looking at just human rights issues and society within the uncoded raw
survey results where more respondents provided feedback, 11.76% (N=2) of technology
respondents indicated that artificial intelligence does not or will not cause an adverse or
negative impact, while 52.94% (N=9) stated that it will; additionally, 35.29% (N=6) indicated that they
believe specific human rights campaigns and societal beliefs influence the research and
development of artificial intelligence, while no one from the group believes that it negatively
affects the research and development of artificial intelligence. In comparison, 13.92% (N=11) of
nontechnology respondents indicated that artificial intelligence does not or will not cause
adverse or negative impact, while 31.65% (N=25) stated that it will; additionally, 36.71% (N=29)
indicated that they believe specific human rights campaigns and societal beliefs influence the
research and development of artificial intelligence, while 17.72% (N=14) from the group
believes that it negatively affects the research and development of artificial intelligence.
workers said that artificial intelligence should not be trusted; 51.1% (N=48) of non-technical
workers agreed with technology workers. In comparison, 11.8% (N=2) of technology workers
and 6.4% (N=6) of non-technical workers indicated that it can be trusted. Of those who
indicated that artificial intelligence could be trusted, one participant noted that artificial
intelligence could be helpful to low-income and low-education individuals.
Additionally, 11.8% (N=2) of technology workers indicated that artificial intelligence should
undergo further research before the technology is trusted. In comparison, 18.1% (N=17) stated
When looking at trust, respondents were asked about risks, governance, benefits, and
transparency and how they impacted their ability to trust artificial intelligence. Regarding
transparency, 52.9% (N=9) of technology workers believe the lack of transparency affects their
ability to trust artificial intelligence. In comparison, 17.6% (N=3) said transparency does not
impact trusting the technology. In contrast, 50% (N=47) of respondents who do not work in the
technology industry indicated that the lack of transparency affects their ability to trust artificial intelligence.
When asked whether risk concerns, governance frameworks, potential issues, and
actual or perceived benefits of artificial intelligence affect individuals’ ability to trust, the
technology workers indicated that they impacted their ability to trust. In the technology group,
53% (N=9) of workers said that it impacted their ability to trust, while 5.9% (N=1) said it had no
impact. In comparison, 60.6% (N=57) of those working outside the technology industry said that
it impacted their ability to trust, while 8.5% (N=8) said it had no impact.
When asked whether artificial intelligence's actual or promised benefits affected how
the participants felt about the technology, 54.4% (N=54) of those not employed in the technology industry indicated that benefits affect their thoughts about the technology. In comparison,
When asked whether benefits offset any current or potential risk from artificial
intelligence, 11.8% (N=2) of technology workers stated that any realized or promised benefits
from artificial intelligence offset any existing or potential risk. In comparison, 52.9% (N=9) said
that benefits do not offset the risks. In contrast, 8.5% (N=8) of those not employed in the
technology industry indicated that benefits offset risks. In comparison, 55.3% (N=52) stated
that benefits do not offset any current or potential risk associated with artificial intelligence.
When asked if actual or promised benefits encourage the participant to trust artificial
intelligence, 35.3% (N=6) of technology workers said it did not, while 29.4% (N=5) said they did.
In comparison, 44.7% (N=42) of participants who do not work within the technology industry
indicated that any actual or promised benefits do not encourage them to trust artificial intelligence.
Word of Mouth and Media - its Influence and Impact on Artificial Intelligence
To explore how participants encounter artificial intelligence and whether it is an influencer in their perception of the technology, the
participants were asked whether television, media, movies, and word of mouth influenced their
emotions, the effects and impacts, the safety, overall opinion, and trust of artificial intelligence.
Feedback resulted in the theme of word of mouth and media – its influence and impact on
artificial intelligence. They were asked to rate their responses in the following ranges: great
When asked, “How does exposure to media, movies, television shows, and word of
mouth about artificial intelligence influence your emotions about the topic?” In the technical
group, 73.69% (N=14) of workers indicated that it influenced their emotions at least a little,
while 26.32% (N=5) stated that it did not. In comparison, 75.72% (N=78) of nontechnology
workers indicated that it influenced their emotions at least a little, while 24.27% (N=25) stated
that it did not. To fully understand these results, a breakdown of the 19 technical workers' and the 103 non-technical workers' responses is outlined in the following table (see Table 3):
Table 3
Influences Emotions
When asked, “How does exposure to media, movies, television shows, and word of
mouth about artificial intelligence influence your perceptions regarding the effects of artificial
intelligence?” In the technical group, 73.68% (N=14) of workers indicated that it influenced
their perception of effects and impacts at least a little, while 26.32% (N=5) stated that it did not. In comparison, 75.25% (N=76) of nontechnology workers indicated that it influenced their perception of effects and impacts at least a little, while 24.75% (N=25) stated that it did not. To
fully understand these results, a breakdown of the 19 technical workers' and the 101 non-technical workers' responses is outlined in the following table (see Table 4):
Table 4
When asked, “How does exposure to media, movies, television shows, and word of
mouth about artificial intelligence influence your perceptions regarding the safety of artificial
intelligence?” In the technical group, 73.68% (N=14) of workers indicated that it influenced
their perceptions regarding safety at least a little, while 26.32% (N=5) stated that it did not. In comparison, 72.28% (N=73) of nontechnology workers indicated that it influenced their perceptions regarding safety at least a little, while 27.72% (N=28) stated that it did not. To fully
understand these results, a breakdown of the 19 technical workers' and the 101 non-technical workers' responses is outlined in the following table (see Table 5):
Table 5
When asked, “How does exposure to media, movies, television shows, and word of
mouth about artificial intelligence influence your opinion of artificial intelligence?” In the
technical group, 73.68% (N=14) of technology workers indicated that it influenced their opinion
at least a little, while 26.32% (N=5) stated that it did not. In comparison, 76.24% (N=77) of
nontechnology workers indicated that it influenced their opinion at least a little, while 23.76%
(N=24) stated that it did not. To fully understand these results, a breakdown of the 19 technical workers' and the 101 non-technical workers' responses is outlined in the following table
Table 6
When asked, “How does exposure to media, movies, television shows, and word of
mouth about artificial intelligence influence your perceptions regarding trust in artificial
intelligence?” In the technical group, 68.42% (N=13) of workers indicated that it influenced
their perceptions regarding trust at least a little, while 31.58% (N=6) stated that it did not. In comparison, 70.30% (N=71) of nontechnology workers indicated that it influenced their perceptions regarding trust at least a little, while 29.70% (N=30) stated that it did not. To fully understand these results, a breakdown of the 19 technical workers' and the 101 non-technical workers' responses is outlined in the following table (see Table 7):
Table 7
Governance includes many areas. Participants were asked about general governance
and oversight. They were asked whether governance impacts how they trust artificial
intelligence and whether governance frameworks with multiple mechanisms are needed to oversee the technology.
When investigating whether governance and oversight are needed for artificial intelligence, those not employed within the technology industry had a similar response, with 61.7% (N=58)
showing a need for it. Additionally, 11.8% (N=2) of technology workers indicated that
governance and oversight of artificial intelligence are beneficial, compared to 9.6% (N=9) of
nontechnology workers. However, 35.3% (N=6) of technology workers believe that further
research and exploration of governance and oversight models are needed, compared to 34%
While both groups saw a need for governance, they had mixed beliefs on its impact. In
the technical group, 4.58% (N=6) of workers indicated that increased governance would reduce
potential negative issues of artificial intelligence, while 4.58% (N=6) indicated that it will not. In comparison, nontechnology workers indicated that increased governance decreases the likelihood of potential negative impact, while 12.21% (N=16) stated that it will
not. Furthermore, 2.29% (N=3) of technology workers indicated that increased governance
would stifle and restrict artificial intelligence innovation, compared to 9.92% (N=37) of nontechnology workers.
Regarding morals and ethical decision-making within artificial intelligence research and
development, 3.82% (N=5) of technology workers indicated that increased governance would
lead to more ethical and moral decision-making by researchers and developers. However, no
participants in that group stated that it would not improve ethics and moral decision-making. In comparison, some nontechnology participants indicated that increased governance would lead to more ethical and moral decision-making by researchers and developers.
Meanwhile, 12.21% (N=16) of participants said it would not improve ethics and moral decision-
making.
When asked about the relationship between governance and standards and their impact
on trusting artificial intelligence, 29.4% (N=5) of technology workers indicated that having a
working governance model and standards impacts their ability to trust artificial intelligence,
while 5.9% (N=1) disagreed. In contrast, 13.8% (N=13) of participants not working in
technology indicated that having a working governance model and standards impacts their
ability to trust artificial intelligence, while 5.3% (N=5) disagreed. An additional 11.8% (N=2) of
technology workers commented on the role of regulators and those creating governance
models and standards.
When asked whether governance with multiple mechanisms that oversee items such as
activities, behaviors, and deeds is needed as an alternative to governing the individual,
organization, or entity, 47.1% (N=8) of technology workers indicated that a governance model
containing multiple mechanisms is needed, while 17.6% (N=3) indicated that it is not. In the
non-technical group, some participants indicated that a governance model containing multiple
mechanisms is needed, while 7.4% (N=7) indicated it is unnecessary.
The risk management section covers many aspects of risk. It includes the participants'
understanding of the concept and of managing the impacts and causal effects involved when
risk occurs. Regarding understanding the concept of risk and risk management, 75% (N=15) of
technology workers had at least some understanding of risk, while 25% (N=5) did not. In
comparison, 21.01% (N=29) of participants who do not work within the technology industry had
at least a minimum understanding of the concept of risk, while 78.99% (N=109) did not. To
break this down further across the 20 technology workers and 138 non-technology workers, the
following table outlines each group's years of experience with risk management (see Table 8):
Table 8

Group                     None             1-3 years       4-6 years      7-9 years     10+ years     Total Knowledge Level
Technology Workers        25% (N=5)        25% (N=5)       20% (N=4)      10% (N=2)     20% (N=4)     75% (N=15)
Non-Technology Workers    78.99% (N=109)   10.14% (N=14)   4.35% (N=6)    0.72% (N=1)   5.80% (N=8)   21.01% (N=29)
Participants were asked how risk could be reduced or eliminated. Their responses are
outlined in the following table (see Table 9):
Table 9

Suggested Method                                                Technology Workers   Non-Technology Workers
Increase education and training                                 23.50% (N=4)         9.60% (N=9)
Increase communication                                          17.60% (N=3)         11.70% (N=11)
Apply oversight/governance                                      41.20% (N=7)         38.30% (N=36)
Add transparency                                                35.30% (N=6)         13.80% (N=13)
Reduce/limit capabilities                                       5.90% (N=1)          8.50% (N=8)
Create additional testing methods                               11.80% (N=2)         9.60% (N=9)
Create markers identifying content created by AI                0% (N=0)             6.40% (N=6)
No unsupervised artificial intelligence                         0% (N=0)             4.30% (N=4)
Add authentication/security                                     0% (N=0)             6.40% (N=6)
These findings connect to the underlying problem, that is, the need to know if improving
governance oversight, reducing and understanding risk, and the potential or promised benefits
of artificial intelligence impact the public's trust in artificial intelligence as inferred by previous
research (ISO, 2020; NIST, 2020a; WhiteHouse, 2019). Previous researchers, scholars, and
industry leaders have stated that there are numerous risks associated with artificial intelligence
(Gibbs, 2017; Marr, 2018; NTSB, 2019; Yampolskiy, 2016; Yigitcanlar et al., 2020). Along with
the risks, it has been noted that decision-makers need to be educated about the risks,
ramifications, and repercussions of artificial intelligence (Buchanan, 2005). The need to educate
decision-makers was identified and supported by 28.8% of the participants of this study.
The study aimed to determine how the mechanisms of risks, benefits, and governance
impact a person’s ability to trust artificial intelligence. Previous researchers have noted that
artificial intelligence poses an existential crisis in humanity (BigThink, 2020; Gibbs, 2017;
Roberts, 2021), culminating from numerous issues concerning public safety, privacy, and other
societal implications. The findings of this study give light to and support that perception, with
an overwhelming majority (96%) of respondents agreeing that artificial intelligence has the
potential to cause moral and ethical issues, safety concerns, employment/job loss, infringement
on human rights, and infringement on proprietary data and ownership that could lead to an
existential crisis.
The study also explored the interrelationship and connection between this perception
and an individual’s ability to trust and accept artificial intelligence. However, this perception
alone is insufficient to fully understand what affects a person’s ability to trust artificial
intelligence. Trust is a complex social psychological construct with numerous mechanisms that
can be triggers or catalysts. In addition, trust or distrust on a topic can be transferred from one
individual to another (Burt & Knez, 1996; Doney, Cannon, & Mullen, 1998; Kramer, 1999;
Lee & See, 2004) through multiple communication channels such as media, television, and
movies. Adding to this, trust is an attitude that an agent, human or technology-based, will aid in
achieving an individual's goals in situations characterized by ambiguity, insecurity, indecision,
doubt, and vulnerability (Burt & Knez, 1996; Deutsch, 1958; Guba, 1981; Johns, 1996; Kramer,
1999; Rotter, 1967). To that end, 18.2% of respondents
indicated that artificial intelligence could not be trusted, with an additional 29.9% indicating
that further research is required before they can trust the technology. One respondent
indicated that we can only trust artificial intelligence as much as we trust the researchers and
the developers, as there is a potential for them to include unconscious bias (Respondent 3).
Another respondent stated that they "would be more likely to trust it, though it appears that
artificial intelligence creators are barreling along with little regard for what society or
individuals may want, the potential risks, and costs" (Respondent 99).
Ethics and morals play a large part in the trustworthiness of artificial intelligence.
However, there are two paths here. The first is the path of making artificial intelligence itself
ethical and moral. The second is the ethics and morals of the researchers and developers of
artificial intelligence.
Previous researchers have indicated that programming or teaching artificial intelligence agents
to be moral and ethical may offset the existential crisis with artificial intelligence (Butkus, 2020;
Farisco, Evers, & Salles, 2020; Miller, 2019). However, as Participant 99 said, “Artificial
intelligence as a tool is neither ethical nor unethical. It is the users, researchers, and developers
who need to be ethical and moral.” To that end, a tool is only worthy of trust if it is well-made
and designed with multiple viewpoints that could limit bias. However, as a consumer, there is
little visibility into that design process. As one respondent observed, "Ethical issues reside with
the development community and not the technology." Additionally, Participant 27 said:
Artificial intelligence will only ever be as ethical as the people using it. The moral
obligation of the designer is to create a system that is difficult to be used for bad purposes.
However, the end users of artificial intelligence are equally morally responsible for what
happens. For artificial intelligence to be safe for widespread use, people must hold
everyone involved accountable.
Several respondents questioned who or what gets to decide what is ethical and moral
versus what is not. Ethics and morals are uniquely different for each individual based on
upbringing, culture, and experiences, through the choices made and refined over time (Butkus,
2020).
In the upcoming chapter, I will go deeper into the interpretations of these findings.
Additionally, the implications of the findings on the cybersecurity and computer science sectors
are discussed.
Chapter Summary
This chapter comprised the findings from the qualitative survey conducted for this
study. The chapter started with a quick review of the study problem and purpose. It then
described the population of the study participants, divided into two primary groups: those who
work within the technology sector and those who do not. A significant part of the chapter
discussed the survey findings based on themes identified from the conceptual framework and
the survey results. These themes are artificial intelligence, ethics, morals, trust, benefits of
artificial intelligence, word of mouth and media, governance, and risk management.
The first theme or subsection discussed participants' understanding of artificial intelligence,
including the difference between fictional and real artificial intelligence. The second theme or
subsection discussed ethics and morals, their importance to research and development, their
utilization, and potential ethical issues. The third theme or subsection discussed the findings
relating to trust. Additionally, this section discussed how governance, benefits, and
transparency play a role in trusting artificial intelligence. The fourth theme or subsection
discussed the findings associated with actual, promised, or perceived benefits of artificial
intelligence. It covered whether knowing or receiving benefits from artificial intelligence offsets
any potential risk arising from the technology. Furthermore, it addressed whether knowing or
receiving benefits encourages individuals to trust artificial intelligence. The fifth theme or
subsection discussed the findings associated with word of mouth and media; this includes
television, movies, and social media. It presented the results on whether or not word of mouth
and media influenced the population’s emotions, perception of impact, perception regarding
safety, overall opinion, and perception of trust. The sixth theme or subsection discussed the
findings of governance and oversight, whether it is needed, its impact, its relation to ethics and
morals, and its effect on trust. The seventh and final theme or subsection discussed the findings
of risk management, the population’s understanding of the concept, their level of knowledge,
and what they believed were potential ways to limit, reduce, or eliminate risk regarding
artificial intelligence. The second part of the chapter further discussed the findings, including
connections to previous literature discussed in Chapter Two, and provided some initial
interpretations.
This chapter presents the interpretations of the findings from the qualitative survey. It
will also link those results to the previous research identified in Chapter Two. The purpose of
the qualitative survey was to aid in understanding the purpose of this study, which is to explore
the interrelationship of risks, benefits, and governance of artificial intelligence and its
associated learning agents to determine how they impact an individual's ability to trust artificial
intelligence. The perception being explored is society's belief that there is an existential crisis in
humanity (BigThink, 2020; Gibbs, 2017; Roberts, 2021), culminating from numerous issues
concerning public safety, privacy, and other societal implications and whether this perception
affects a person’s ability to trust and accept the technology. This purpose aids in furthering the
grasp of the underlying problem, that is, the need to know if improving governance oversight,
reducing and understanding risk, and the potential or promised benefits of artificial intelligence
impact the public's trust in artificial intelligence as inferred by previous research (ISO, 2020;
NIST, 2020a; WhiteHouse, 2019). This problem has been identified and supported by standards
organizations such as ISO (ISO, 2020) and NIST (NIST, 2020a), along with the Trump
administration (WhiteHouse, 2019).
As with all studies and research, there are limitations, which differ from delimitations, as
the sample size and the data quality did not meet the previously identified delimitations outlined
in Chapter One. One delimitation was to collect an equal number of technical and non-technical
participants. However, the population division became lopsided, with more non-technical
participants. Another limitation is the quality of the responses garnered from SurveyMonkey's
participant pool. In addition, the delimitation that technical participants have a
minimum of 10 years of experience within the industry was not fully met. Most of the technical
participants' experience ranged from 1 to 6 years, followed by those with more than 10 years of
experience. It was decided to utilize the data from those with less than ten years of experience,
as eliminating them would have removed a perspective needed for this study. It is
important to note that while these limitations are present within the study, I believe the data
captured still represents a significant understanding of the research question as it lays the
groundwork for future research by this or other researchers, thus meeting the guidelines of an
exploratory study.
A further limitation was added by not addressing the financial impact or its relationship
with ethical and moral decision-making within the artificial intelligence sector. The financial
perspective is instead identified as an area for future research.
The interpretations were broken down into the themes identified in Chapter Four. However,
there was some overlap between the themes listed below, as the findings often spanned
multiple themes.
Ethics and Morals
New scientific discoveries are exciting, as is the desire to push boundaries, whether
moral or not. The utilization of morals and ethics in artificial intelligence is an important topic
and something that many technical and non-technical survey respondents believe is not
currently being utilized enough and should be increased. However, technology is only as ethical
as the individuals responsible for its utilization. Individuals should hold themselves and
coworkers to the highest level of morals. Morals and ethics include but are not limited to
honesty, integrity, diligence, and respect for self and others. Respect for others needs to
eliminate biases based on race, gender, sexual orientation, sexuality, disability, income level, or
political beliefs, to identify a few. However, numerous respondents asked questions such as,
"Who is responsible for outlining and upholding what beliefs and biases to limit?" and "Who
gets to decide what is moral and ethical for all of society?" It has been identified that morals
and ethics are inherently different for each individual based on their upbringing and culture
(Helzer et al., 2023; Schwartz, 2012; Wey Smola & Sutton, 2002). This is a crucial question and
lays the foundation for a philosophical study, which should be researched, though it is not
included in the scope of this study.
Part of the concern arises from the potential of researchers and developers introducing
their unconscious bias into their work, yielding a final product that is just as flawed as the
researcher or developer. For many of the respondents, both inside and outside the technology
industry, this poses the dilemma of trust, in that how can individuals trust a flawed technology?
History has proven that we as a society do not trust and accept flawed technology (Eggers,
2012; Geels & Smit, 2000; Godulla et al., 2021). That is what this exploratory study aimed to
identify, to understand the mechanisms that could impact a person's ability to trust in artificial
intelligence. The many concerns regarding artificial intelligence identified by the respondents,
which affect their ability to trust, support earlier research indicating that trust cannot be
achieved if a person or technology does not aid in achieving a person's goals or objectives; in
certain circumstances, the focal point can be described or depicted by ambiguity, insecurity,
indecision, doubt, and vulnerability (Burt & Knez, 1996; Deutsch, 1958; Guba, 1981; Johns,
1996; Kramer, 1999; Rotter, 1967). These circumstances lead to the spread of distrust. In
addition, trust or distrust on the topic can be transferred from one individual to another
(Burt & Knez, 1996; Doney, Cannon, & Mullen, 1998; Kramer, 1999; Lee & See, 2004). This issue
supports the belief that there is minimal adoption of artificial intelligence due to a lack of trust
and acceptance.
Governance
The findings identified interesting patterns and thoughts, not limited to those outside
the technology industry but also including those within it. The majority of respondents
identified that artificial intelligence carries a significant amount of risk and that the lack of
transparency impacts their ability to trust artificial intelligence. They indicated that the current
governance frameworks and control standards do not go far enough to offset or protect users
from this risk, and both groups indicated that governance models should be researched further.
However, technology workers were split on what impact further governance would have on
the technology, and more non-technology workers than those in the technology sector
indicated that increased governance would stifle innovation. This finding supports the view
that governance is a double-edged sword: while increased governance is required to help
offset potential risks and negative impacts, it could also hinder forward progress and
innovation. To help offset these impacts, the study identifies that governance frameworks and
control standards need to include multiple transparent mechanisms that address activities,
behaviors, and deeds, rather than only the individual, organization, or entity, in addition to
those identified by previous researchers (Almeida, Santos, & Farias, 2020; Bertolini & Episcopo,
2022; Brundage, 2019; Fossa, 2021; Hassan, 2021; Lilkov, 2021; Lo Piano, 2020; Mannes, 2020;
McMenemy, 2019; Mitre, 2013; Mittelstadt, 2019; Molnar, 2020; Perry & Uuk, 2019; Rousseau,
1920; Shiff, 2021).
Benefits
The majority of respondents identified that the technology's actualized and promised
future benefits (Brooks, 2019; Eliacik, 2022; Giles, 2014; Houser, 2021; Joshi, 2020; Pedamkar,
2019; Powell, 2018; Stokes, 2019) have no impact on how they feel about artificial intelligence.
They also indicated that the type and amount of benefits the technology provides, or has been
promised to supply, does not sway, deflect, or negate the risk resulting from the technology.
However, it is essential to reiterate that previous research has identified that many of the risks
recognized today were at one time looked upon as a benefit to humankind (Baryannis et al.,
2019; Eliacik, 2022; Gibbs, 2017; Hasan, Shams, & Rahman, 2021; Lee, 2018; Mannes, 2020;
Marr, 2018; Neri & Cozman, 2020; Starck, Bierbrauer, & Maxwell, 2022; Thomas, 2022;
Yampolskiy, 2016; Yigitcanlar et al., 2020). A widely held belief among respondents is that the
promised benefits do not outweigh the risks of the technology.
Word of Mouth and Media
As previously identified, trust, distrust, and personal opinion can be transferred from
one individual to another through many communication channels. To fully understand this
phenomenon, participants were asked whether word of mouth, television, media, and movies
influenced their emotions, perceptions of effects and impacts, sense of safety, overall opinion,
and trust in artificial intelligence. Respondents indicated that word of mouth and the media
influence their emotions, opinions, and perceptions regarding the technology. They also
indicated that exposure to media, movies, television shows, and word of mouth about artificial
intelligence influences their perceptions regarding the technology's safety. Therefore, the
transferability of trust, distrust, and personal opinion regarding the technology also impacts the
respondents' perceptions.
Risk Management
Risks, threats, and dangers can come in many forms, such as privacy violations, bias
resulting from data accuracy and cleanliness, societal implications and inequality, along with
the lack of transparency, operational safety, job loss, stock market volatility, autonomy, bias in
employment, enhanced military weaponry, and many more (Baryannis et al., 2019; Eliacik,
2022; Hasan, Shams, & Rahman, 2021; Mannes, 2020; Neri & Cozman, 2020; Starck, Bierbrauer,
& Maxwell, 2022; Thomas, 2022). As previously noted, some risks identified with artificial
intelligence were once considered a benefit. Many respondents recognized and indicated that
risk and societal harm are systemic issues due to a lack of morals and ethics within the artificial
intelligence industry. Respondents provided many suggestions to offset some of those
concerns; the top three recommendations are to add oversight and governance, increase
education and training, and add transparency.
Trust
Trust is a complex social psychological construct with numerous mechanisms that can be
triggers or catalysts. Trust cannot be achieved if a person or technology does not aid in
achieving a person's goals or objectives in certain circumstances; the focal point can be
described or depicted by ambiguity, insecurity, indecision, doubt, and vulnerability (Burt &
Knez, 1996; Deutsch, 1958; Guba, 1981; Johns, 1996; Kramer, 1999; Rotter, 1967). That said, it
is no surprise that most technical and non-technical respondents stated that artificial
intelligence cannot be trusted in its current state. The amount of risk and the lack of
transparency continue to impact their ability to trust the technology.
Practice Implications
Implications for cybersecurity and the technology industry as a whole can be summed
up in a single word: trustworthiness. Technology must aid in the achievement of an individual's
goals or objectives where ambiguity, insecurity, indecision, doubt, and vulnerability are present
(Lee & See, 2004). Trust displaces guidance when direct examination is unfeasible, enables
choice in the presence of indecision and insecurity, minimizes doubt and hesitation when
assessing the reactions of others, and enables and expedites delegation and adaptive conduct
by replacing inflexible etiquettes, practices, hierarchies, and processes with objective
expectancies concerning the capacities of others and technology (Baba, 2007; Kramer, 1999;
Lee & See, 2004; Ostrom, 1998). This level of trust enables the goal of safety, which is key for
any security industry, including technology-based fields such as cybersecurity. This study has
demonstrated that people do not trust or feel safe with artificial intelligence technology.
Many of the respondents indicated that there is a lack of ethics and morals in the
decision-making processes. The importance of ethical decision-making is a long-standing
requirement for society, as identified by Aristotle (Aristotle, 1834), and its absence has a
profound effect on a person. Limited ethics and morals have led to a lack of transparency in
the research, development, and operation of artificial intelligence, which supports previous
researchers' observations and findings (Butkus, 2020; Farisco, Evers, & Salles, 2020; Miller,
2019). Additionally, questionable ethics have put forth a limited governance framework with
controls focusing on only a part of artificial intelligence and not the whole of the technology
and its associated research and development. Many of the respondents of this study called for
increased use of ethical decision-making, transparency, and standards for the industry that
would not suffocate the innovation needed for further growth, but would also provide the
components allowing individuals inside and outside of the technology industry to trust the
technology. Until this is done, there will always be many individuals who will not trust or accept
artificial intelligence.
Researcher Reflections
As I reflect on the learnings from this study, I am amazed at the swath of new
information and understanding it has brought me. Primarily, it gave me a new level of
confidence in my ability to complete a project of this magnitude. The subject matter and
research question were formed as an exploratory review of why there was, and still is, a push
to trust this technology unlike any innovative technology in the past, and why such a hard sell
has been needed.
Like many participants, I had opinions on artificial intelligence before starting this study.
These opinions and biases are shared in the Researcher Positionality and Reflexivity section in
Chapter One, along with the processes used to reduce bias in the study. Those opinions were
formed from word of mouth, the media, television, and movies, which, as I discovered through
this study, did not provide a logical and fundamental foundation for formulating an opinion.
The journey to completing this project allowed me to investigate further and learn more about
artificial intelligence. While researching artificial intelligence technology did not make me an
expert in the field, it changed me from an outside spectator into an engaged and enthusiastic
researcher.
When I started this study, I wondered why so many people thought there was an
existential crisis in humanity when it came to artificial intelligence (BigThink, 2020; Gibbs,
2017; Roberts, 2021) and why there was such a hard push to trust technology that brought fear
and panic to many in society. As a self-proclaimed techie and a sci-fi fan, I thought that artificial
intelligence was incredible and the achievements that came from its utilization were
remarkable. Therefore, I initially thought the survey's outcome and corresponding research
would not support these claims. However, as research started, I quickly found that an argument
could be made for both sides of the discussion. There are examples and research that support
and argue against the position that there is an existential crisis in humanity resulting from
artificial intelligence. To that end, my initial thought on the topic has changed slightly: while I
support the growth of artificial intelligence, I also acknowledge that the industry needs to
address the perceived existential crisis in humanity identified by this study and by previous
researchers (BigThink, 2020; Gibbs, 2017; Roberts, 2021).
In conducting this study, there were many areas or rabbit holes that I would have loved
to chase down. To be true to the boundaries of my research, I have instead outlined them as
recommendations for future research.
The first area would be a qualitative exploratory study on whether a marketing
approach is used to sell a false narrative and hope to society and, furthermore, who approves
that approach. In line with the benefits of artificial intelligence, another area is how the
financial perspective plays into ethical and moral choices. Research by an outside entity
identified previously shows the industry will explode in size and capital. This study could be an
exploratory qualitative review of the relationship between financial incentives and ethical and
moral decision-making.
The second area for additional research came from research participants, as highlighted
previously: Who is responsible for outlining and upholding what beliefs to support and biases to
limit? What metaphysical questions should be asked and investigated? What governmental or
industry policies, organizational controls, or individual needs are more important than others,
and how should an individual feel about the responses being put forth? These questions are
difficult to answer, as it would be nearly impossible to identify a single person or committee to
handle such a task and for that person or committee to be entirely accepted by all of society.
Conclusion
This chapter presented the interpretations of the findings from the qualitative survey. It
also linked those results to the previous research identified in Chapter Two. The purpose of the
qualitative survey was to aid in understanding the purpose of this study, which was to explore
the interrelationship of risks, benefits, and governance of artificial intelligence and its
associated learning agents to determine how they impact an individual's ability to trust artificial
intelligence. The perception being explored is society's belief that there is an existential crisis in
humanity (BigThink, 2020; Gibbs, 2017; Roberts, 2021), culminating from numerous issues
concerning public safety, privacy, and other societal implications and whether this perception
affects a person’s ability to trust and accept the technology. This purpose aids in furthering the
grasp of the underlying problem, that is, the need to know if improving governance oversight,
reducing and understanding risk, and the potential or promised benefits of artificial intelligence
impact the public's trust in artificial intelligence as inferred by previous research (ISO, 2020;
NIST, 2020a; WhiteHouse, 2019). This problem has been identified and supported by standards
organizations such as ISO (ISO, 2020) and NIST (NIST, 2020a), along with the Trump
administration (WhiteHouse, 2019). As I indicated earlier,
the implications for the technology industry as a whole can be summed up in a single word:
trustworthiness. The goal for any security industry, including technology-based fields such as
cybersecurity, is to make individuals feel safe. Society needs and wants to believe they can
trust the technology they use daily. Many people no longer have this intrinsic sense of safety
when it comes to artificial intelligence; therefore, without that feeling of safety and the sense
that technology leaders are moral and ethical, they have no reason to trust artificial
intelligence. As indicated, neither the number of promised future benefits nor the hard sell of
trust by government agencies will change the level of trust and safety most of the survey
population identified. While this industry is expanding at lightning speed, it is doing a disservice
to itself by not addressing the shortage of ethics and morals, which impacts the level of trust
and safety individuals feel.
References
Almeida, P., Santos, C., & Farias, J. S. (2020). Artificial intelligence regulation: A meta-framework
53/os/ai_and_data_management/2/
Aristotle. (1834). The Nicomachean ethics of Aristotle (T. W. Lancaster, Ed.). Queen's College,
Attia, M., & Edge, J. (2017). Be(com)ing a reflexive researcher: A developmental approach to
https://doi.org/10.1080/23265507.2017.1300068
Baba, M. (2007). Dangerous liaisons: Trust, distrust, and information technology in American
https://doi.org/10.17730/humo.58.3.ht622pk6l41l35m1
Baryannis, G., Validi, S., Dani, S., & Antoniou, G. (2019). Supply chain risk management and
artificial intelligence: State of the art and future research directions. International
https://doi.org/10.1080/00207543.2018.1530476
Bertolini, A., & Episcopo, F. (2022). Robots and ai as legal subjects? Disentangling the
ontological and functional perspective [Original Research]. Frontiers in Robotics and AI,
9. https://doi.org/10.3389/frobt.2022.842213
Bhattacharya, R., Devinney, T. M., & Pillutla, M. M. (1998). A formal model of trust based on
https://doi.org/10.5465/amr.1998.926621
https://bigthink.com/videos/will-evil-ai-kill-humanity
Boitnott, J. (2022). Five examples of business goals and how to set them. Jotform.
https://www.jotform.com/blog/business-goals/
Brooks, A. (2019). The benefits of ai: 6 societal advantages of automation. General Technology.
https://www.rasmussen.edu/degrees/technology/blog/benefits-of-ai/
Brożek, B., & Janik, B. (2019). Can artificial intelligences be moral agents? New Ideas in
Bruce, J., & Mollison, J. (2004). Reviewing the literature: Adopting a systematic approach.
https://doi.org/10.1783/147118904322701901
https://keep.lib.asu.edu/_flysystem/fedora/c7/220491/Brundage_asu_0010E_19562.pd
Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53-60.
Bunn, J. (2020). Working in contexts for which transparency is important: A recordkeeping view
https://doi.org/10.1108/RMJ-08-2019-0038
Burt, R. S., & Knez, M. (1996). Trust and third-party gossip. In R. M. Kramer & T. Tyler (Eds.),
Trust in organizations: Frontiers of theory and research (pp. 68-89). Sage Publications.
Butkus, M. A. (2020). The human side of artificial intelligence. Science and Engineering Ethics,
Chen, A. (2018). IBM's Watson gave unsafe recommendations for treating cancer.
https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-
science
Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation
https://apps.dtic.mil/sti/citations/ADA600351
Clara F.D. Costa Ames, M. (2022). How is your moral imagination going. ADM Ethics.
https://www.admethics.com/como-vai-a-sua-imaginacao-moral/
Clay, J. (2018). The challenge of the intelligent library. Keynote at What does your eResources
Cox, A. M. (2021). Exploring the impact of artificial intelligence and robots on higher education
https://www.hrdive.com/news/can-technology-drive-organizational-culture/518822/
Demeyer, S. (2011, August). Research methods in computer science. IEEE 27th
https://www.researchgate.net/publication/221307308_Research_methods_in_comput
er_science
https://www.toptal.com/artificial-intelligence/economics-benefits-of-artificial-
intelligence
Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2(4), 265-279.
https://doi.org/10.1177/002200275800200401
education.com/ibpsych/2018/03/16/what-is-reflexivity/
Dodgson, J. E. (2019). Reflexivity in qualitative research. Journal of Human Lactation, 35(2), 220-
222. https://doi.org/10.1177/0890334419830990
Doney, P. M., Cannon, J. P., & Mullen, M. R. (1998). Understanding the influence of national culture on the development of trust. Academy of Management Review. https://doi.org/10.5465/amr.1998.926629
Dudovskiy, J. (2018). The ultimate guide to writing a dissertation in business studies: A step-by-
https://www.psychologytoday.com/us/blog/the-power-personal-narrative/201906/the-power-perspective-taking
Edge, J. (2011). The reflexive teacher educator. Roots and Wings, 1-196.
https://doi.org/10.4324/9780203832899
Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. Academy of Management Review. https://doi.org/10.5465/amr.2007.26586086
Eliacik, E. (2022). Pros and cons of ai: Is artificial intelligence suitable for you? Dataconomy.
https://dataconomy.com/2022/04/risks-and-benefits-of-artificial-intelligence/
Farhanghi, A. A., Abbaspour, A., & Ghassemi, R. A. (2013). The effect of information technology on organizational structure and firm performance: An analysis of consultant engineers firms (CEF) in Iran. Procedia - Social and Behavioral Sciences, 81, 644-649. https://doi.org/10.1016/j.sbspro.2013.06.490
Farisco, M., Evers, K., & Salles, A. (2020). Towards establishing criteria for the ethical analysis of artificial intelligence. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00238-w
https://www.dataversity.net/brief-history-data-science/#
Fossa, F. (2021). Artificial agency and the game of semantic extension. Interdisciplinary Science Reviews.
Fragile to Agile. (n.d.). Architecture governance framework - keeping everyone honest. Fragile to Agile. https://www.fragiletoagile.com.au/approach/architecture-governance-framework/
Gavin, D. (2016). Constructing a study design: Aligning research question with methodology, design, and degree program. University of Phoenix. https://research.phoenix.edu/blog/constructing-study-design-aligning-research-question-methodology-design-and-degree-program
Gellers, J. C. (2020). Rights for robots: Artificial intelligence, animal and environmental law (First ed.). Routledge.
Gibbs, S. (2017). Elon Musk: Regulate AI to combat 'existential threat' before it's too late. The Guardian. https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo
Gieling, R. (2022). Explainable ai explained: What is it, benefits, why does it matter, and
https://nymag.com/health/bestdoctors/2014/artificial-body-parts-2014-6/
https://doi.org/10.4135/9781412963909.n31
https://doi.org/10.1007/BF02766777
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation (E. G. Guba, Ed.). SAGE
Publications.
GuhaThakurta, S., & Chetty, P. (2015). Understanding research philosophy. Project Guru.
https://www.projectguru.in/research-philosophy/
Guillemin, M., & Gillam, L. (2004). Ethics, reflexivity, and “ethically important moments” in
https://doi.org/10.1177/1077800403262360
Gunkel, D. J., & Wales, J. J. (2021). Debate: What is personhood in the age of ai? AI & SOCIETY,
Gunning, D., & Aha, D. W. (2019). DARPA's explainable artificial intelligence program. AI Magazine, 40(2), 44-58.
Hasan, R., Shams, R., & Rahman, M. (2021). Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. Journal of Business Research, 131, 591-597. https://doi.org/10.1016/j.jbusres.2020.12.012
3-it-governance-frameworks/
Hazzan, O., Dubinsky, Y., Eidelman, L., Sakhnini, V., & Teif, M. (2006, March). Qualitative research in computer science education. https://www.researchgate.net/publication/221538862_Qualitative_research_in_computer_science_education
Health and Human Services. (2016). Federal policy for the protection of human subjects ('Common Rule'). https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html
Herrington, J., McKenney, S., Reeves, T., & Oliver, R. (2007). Design-based research and doctoral students: Guidelines for preparing a dissertation proposal.
Hillier, W. (2021). Structured vs. Unstructured data: What's the difference? CareerFoundry.
https://careerfoundry.com/en/blog/data-analytics/structured-vs-unstructured-data/
Hiter, S. (2021). Big data trends in 2022 and the future of big data. Datamation.
https://www.datamation.com/featured/big-data-trends/
Houser, K. (2021). MIT's new bionics center may usher in our cyborg future. FreeThink.
https://www.freethink.com/health/bionics
analytics
ISO. (2020). Standards by ISO/IEC: JTC 1/SC 42 - Artificial intelligence. International Organization
for Standardization.
https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0
Johns, J. L. (1996). A concept analysis of trust. Journal of Advanced Nursing, 24(1), 76-83.
https://doi.org/10.1046/j.1365-2648.1996.16310.x
Jones, R. (2020). Dissertation writing: The importance of alignment. The Lentz Leadership Institute.
Joshi, N. (2020). Can ai lend a helping hand to smart bionics. BBN Times.
https://www.bbntimes.com/technology/can-ai-lend-a-helping-hand-to-smart-bionics
https://doi.org/10.4135/9780857020116
Kant, I. (1785). Groundwork of the metaphysics of morals (J. F. Hartknoch, Ed.). Royal University
of Konigsberg. http://www.earlymoderntexts.com/assets/pdfs/kant1785.pdf
Kirilenko, A., Kyle, A. S., Samadi, M., & Tuzun, T. (2017). The flash crash: High-frequency trading in an electronic market. The Journal of Finance, 72(3), 967-998. https://doi.org/10.1111/jofi.12498
https://doi.org/10.1146/annurev.psych.50.1.569
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021, February 15). What do we want from explainable artificial intelligence (XAI)?
Lara, F., & Deckers, J. (2020). Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
Lee, K.-F. (2018, April). How AI can save our humanity. TED Talks.
https://www.ted.com/talks/kai_fu_lee_how_ai_can_save_our_humanity
Lilkov, D. (2021). Regulating artificial intelligence in the EU: A risky game. European View, 20(2),
166-174. https://doi.org/10.1177/17816858211059248
Lloyd, K. (2018, September 20). Bias amplification in artificial intelligence systems. AAAI
FSS-18: Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA.
Lo Piano, S. (2020). Ethical principles in machine learning and artificial intelligence: Cases from the field and possible ways forward. Humanities and Social Sciences Communications. https://doi.org/10.1057/s41599-020-0501-9
Majumder, S., & Dey, N. (2022). Explainable artificial intelligence (xai) for knowledge
0316-8_6
Mannes, A. (2020). Governance, risk, and artificial intelligence. AI Magazine, 41(1), 61-69.
https://dl.icdst.org/pdfs/files1/3686f481d8734b233303363baea5dd68.pdf
Marr, B. (2018, November 19). Is artificial intelligence dangerous? 6 AI risks everyone should know about. Forbes. artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/#d3ec11d24040
Marr, D. (1977). Artificial intelligence—a personal view. Artificial Intelligence, 9(1), 37-48.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.5465/amr.1995.9508080335
McGann, C. (2018, August 5). Let's talk AI ethics: The social benefits of artificial intelligence. Genesys. https://www.genesys.com/blog/post/lets-talk-ai-ethics-the-social-benefits-of-artificial-intelligence
https://www.diligent.com/insights/entity-governance/what-is-governance-framework/
https://blog.trello.com/leaders-guide-to-organizational-transparency
Miles, A. D. (2019). Let's stop the madness part 2: Understanding the difference between limitations vs. delimitations. 5th Annual 2017 Black Doctoral Network Conference. https://www.researchgate.net/publication/334279571_ARTICLE_Research_Methods_and_Strategies_Let%27s_Stop_the_Madness_Part_2_Understanding_the_Difference_Between_Limitations_vs_Delimitations#fullTextFileContent
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. https://doi.org/10.1016/j.artint.2018.07.007
Technology. https://dspace.mit.edu/bitstream/handle/1721.1/6089/AIM-306.pdf?sequence=2
engineering-guide/enterprise-engineering/enterprise-governance
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence.
Molnar, C. (n.d.). Interpretable machine learning. https://christophm.github.io/interpretable-ml-book/
Moorman, C., Deshpandé, R., & Zaltman, G. (1993). Factors affecting trust in market research relationships. Journal of Marketing, 57(1), 81-101. https://doi.org/10.1177/002224299305700106
https://www.nsf.gov/bfa/dias/policy/human.jsp
National Academies of Sciences, Engineering, and Medicine. (2016). Optimizing the nation's investment in academic research: A new regulatory framework for the 21st century. The National Academies Press. https://doi.org/10.17226/21824
Neri, H., & Cozman, F. (2020). The role of experts in the public perception of risk of artificial intelligence. AI & SOCIETY.
https://www.nimh.nih.gov/funding/clinical-research/nimh-guidance-on-risk-based-monitoring.shtml
NIST. (2012). SP 800-30 Rev. 1: Guide for conducting risk assessments. U.S. Department of Commerce, National Institute of Standards and Technology.
https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-30r1.pdf
NIST. (2019). SP 1500-4r2: Big data interoperability framework: Volume 4, security and privacy. U.S. Department of Commerce, National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1500-4r2.pdf
NIST. (2020b). SP 800-53 Rev. 5: Security and privacy controls for information systems and organizations. U.S. Department of Commerce, National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf
NTSB. (2019, November 19). 'Inadequate safety culture' contributed to Uber automated test vehicle crash; NTSB calls for federal review process for automated vehicle testing on public roads.
https://doi.org/10.1007/s11948-017-9943-x
O'Brien, N. (2021). Reflexivity: What is it, and why is it important in your community? University of Minnesota. https://extension.umn.edu/community-news-and-insights/reflexivity-what-it-and-why-it-important-your-community
https://www.researchgate.net/publication/228618921_A_literature_review_on_artificial_intelligence
Osanloo, A., & Grant, C. (2016). Understanding, selecting, and integrating a theoretical framework in dissertation research. https://doi.org/10.5929/2014.4.2.9
Ostrom, E. (1998). A behavioral approach to the rational choice theory of collective action: Presidential address. American Political Science Review, 92(1), 1-22.
Paiva, A. (2020). Humans and robots together: Engineering sociality and collaboration
Italy. https://doi.org/10.1145/3377325.3380624
Pallis, G. (2010). Cloud computing: The new frontier of internet computing. IEEE Internet Computing.
Pancake, C. M., & ACM US Public Policy Council. (2017). Statement on algorithmic transparency
https://www.acm.org/binaries/content/assets/public-policy/acm-pres-ltr-un-re-weapons-systems.pdf
https://www.educba.com/benefits-of-artificial-intelligence/?source=leftnav
Perry, B., & Uuk, R. (2019). Ai governance and the policymaking process: Key considerations for
https://doi.org/10.3390/bdcc3020026
Powell, C. (2018). Memory-boosting brain implants are in the works. Would you get one?
EpilepsyU. https://epilepsyu.com/memory-boosting-brain-implants-works-get-one/
https://heinonline.org/HOL/LandingPage?handle=hein.journals/dltr18&div=5&id=&page=
Price, C. (2016). Alignment: Driving your research study on a straight path. University of Phoenix. https://research.phoenix.edu/blog/alignment-driving-your-research-study-straight-path
Price, J. H., & Murnan, J. (2004). Research limitations and the necessity of reporting them. American Journal of Health Education. https://doi.org/10.1080/19325037.2004.10603611
QuestionPro. (2021). Types of sampling: Sampling methods with examples. QuestionPro Survey
Software. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
Resnik, D. B. (2011). What is ethics in research & why is it important? National Institute of Environmental Health Sciences. https://www.veronaschools.org/cms/lib02/NJ01001379/Centricity/Domain/588/What%20is%20Ethics%20in%20Research%20Why%20is%20it%20Important.pdf
Ridley, M. (2022). Explainable artificial intelligence (xai): Adoption and advocacy. Information
https://www.paulcraigroberts.org/2021/01/06/leading-scientist-of-artificial-intelligence-says-ai-is-an-existential-threat-to-humanity/
Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35(4), 651-665.
Rousseau, J.-J. (1920). The social contract: & discourses (E. Rhys, Ed., Reprint ed.). J. M. Dent & Sons.
https://www.abc.net.au/science/articles/2008/02/29/2176665.htm
SAS Institute. (n.d.). Big data: What it is and why it matters. SAS Institute.
https://www.sas.com/en_us/insights/big-data/what-is-big-data.html
https://www.includehelp.com/ml-ai/introduction-to-artificial-intelligence.aspx
https://www.includehelp.com/ml-ai/types-of-learning-in-agents-in-artificial-intelligence.aspx
Shiff, L. (2021). COBIT vs. ITIL: Comparing IT governance frameworks. BMC Software Inc.
https://www.bmc.com/blogs/cobit-vs-itil-understanding-governance-frameworks/
Stahl, B. C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Laulhé Shaelou, S.,
Patel, A., Ryan, M., & Wright, D. (2021). Artificial intelligence for human flourishing –
beyond principles for machine learning. Journal of Business Research, 124, 374-388.
https://doi.org/10.1016/j.jbusres.2020.11.030
Starck, N., Bierbrauer, D., & Maxwell, P. (2022). Artificial intelligence, real risks: Understanding and mitigating vulnerabilities in the military use of AI. Modern War Institute at West Point. https://mwi.usma.edu/artificial-intelligence-real-risks-understanding-and-mitigating-vulnerabilities-in-the-military-use-of-ai/
Statistics Solutions. (2017). What is confirmability in qualitative research and how do we establish it?
https://www.statisticssolutions.com/qualitative-sampling-techniques/
Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to … assess the quality of qualitative research. The Clinical Teacher.
Stokes, R. (2019). Artificial and human intelligence: Partners in bionic healthcare innovation. intelligence-partners-in-bionic-healthcare-innovation
https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
Thornhill, A., Lewis, P., & Saunders, M. N. K. (2019). Research methods for business students (8th ed.). Pearson.
Tietz, T. (2020). John McCarthy and the rise of artificial intelligence. SciHi Blog.
Tjoa, E., & Guan, C. (2021). A survey on explainable artificial intelligence (xai): Toward medical
xai. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793-4813.
https://doi.org/10.1109/TNNLS.2020.3027314
https://conjointly.com/kb/
https://scholarworks.uni.edu/cgi/viewcontent.cgi?article=1496&context=hpt
https://universalteacher.com/1/trustworthiness-in-qualitative-research/
systematic-approach-your-research-question
University of Southern California. (2021). Organizing your social sciences research paper.
https://libguides.usc.edu/c.php?g=235034&p=1559822
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford
University Press.
Warren, K. (2020). Qualitative data analysis methods 101: The "big 6" methods + examples. Grad Coach.
White, M. G. (2020). Why human subjects research protection is important. Ochsner Journal,
White House. (2019). Executive order on maintaining American leadership in artificial intelligence. actions/executive-order-maintaining-american-leadership-artificial-intelligence/
Wolff, J., Pauling, J., Keck, A., & Baumbach, J. (2020). The economic impact of artificial
intelligence in health care: Systematic review. J Med Internet Res, 22(2), e16866.
https://doi.org/10.2196/16866
Yigitcanlar, T., Desouza, K. C., Butler, L., & Roozkhosh, F. (2020). Contributions and risks of artificial intelligence (AI) in building smarter cities: Insights from a systematic review of the literature.
Appendix: Survey Questions

This survey is intended for respondents both with and without a technical background. If you are a layperson without technical knowledge, feel free to skip the questions
a. Yes, I have read and agree to the terms identified in the Informed consent
b. No, I have read and do not agree to the terms identified in the Informed consent
2. Do you currently work in the technology industry? If yes, what field and how many
years?
assurance?
a. Please answer with the number of years of hands-on experience: None, 1-3, 4-6, 7-9, or 10+.
a. Please answer with the number of years of hands-on experience: None, 1-3, 4-6, 7-9, or 10+.
6. What are your perceptions of the use of ethics and morals in developing and utilizing
artificial intelligence?
intelligence?
a. Increasing governance will reduce the risk and the potential negative impacts of
artificial intelligence.
b. Increasing governance will not reduce the risk and the potential negative
safety.
e. Increasing governance will not lead to improved moral and ethical decision-
9. How could risk be lowered in artificial intelligence development to lessen any negative
10. What are your perceptions and beliefs of risk versus reward regarding artificial
intelligence?
11. What are your perceptions and beliefs regarding artificial intelligence's realized and
future benefits?
12. What are your perceptions and beliefs when it comes to trusting artificial intelligence?
13. What are your perceptions and beliefs of industry standards and governance impacting the
14. What is your perception and belief that the risk brought on by artificial intelligence
15. What is your perception and belief that the number of benefits provided by artificial
intelligence agents is enough for you to overlook the risks of the technology?
16. What are your perceptions and beliefs regarding the relationship between the benefits of
17. What are your perceptions and beliefs regarding injecting transparency and interpretability
into the development of artificial intelligence and associated learning agents that impact
trust levels?
18. In what ways does exposure to media framing of artificial intelligence influence your
19. In what ways does exposure to media framing of artificial intelligence influence your
20. In what ways does exposure to media framing of artificial intelligence influence your
21. In what ways does exposure to media framing of artificial intelligence influence your
22. How does exposure to media framing of artificial intelligence influence your perceptions
23. What is your perception of the differences between fictional artificial intelligence on TV
24. What are your perceptions and beliefs regarding artificial intelligence's potential
a. I believe that artificial intelligence does not cause negative impacts on society or
human rights.
b. I believe artificial intelligence does cause negative impacts on society or human rights.
c. I believe specific human rights campaigns and societal beliefs influence the
25. What are your perceptions and beliefs regarding using a governance model that uses
26. How do your perceptions and beliefs regarding artificial intelligence's risks, governance,
This work may be used in accordance with the terms of the Creative Commons license
or other rights statement, as indicated in the copyright statement or in the metadata
associated with this work. Unless otherwise specified in the copyright statement
or the metadata, all rights are reserved by the copyright holder.