Title:

Comprehension, Apprehension, and Acceptance:
Understanding the Influence of Literacy and Anxiety on Acceptance of Artificial Intelligence
Authors:
Gianluca Schiavo, Fondazione Bruno Kessler (FBK), Trento (Italy) - gschiavo@fbk.eu (ORCID 0000-0003-3529-3889)

Stefano Businaro, University of Trento (Italy)

Massimo Zancanaro, University of Trento and Fondazione Bruno Kessler, Trento (Italy) - massimo.zancanaro@unitn.it (ORCID 0000-0002-1554-5703)

Corresponding author:
Gianluca Schiavo, Fondazione Bruno Kessler (FBK), Via Sommarive 18, 38123 Trento (Italy) - gschiavo@fbk.eu

Authors' note

Gianluca Schiavo is a researcher at the Fondazione Bruno Kessler (FBK) in Trento

Stefano Businaro is a master’s graduate at the University of Trento (Italy)



Massimo Zancanaro is a professor of Computer Science/Human-Computer Interaction at the University of Trento and an affiliated senior researcher at Fondazione Bruno Kessler (FBK) in Trento

Declaration of Conflicting Interests


The Authors declare that there is no conflict of interest. The results presented in this paper have not been submitted or published elsewhere.

Acknowledgements

This research received partial support from the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by NextGenerationEU.

Abstract

In this paper, we discuss how laypeople's attitudes toward accepting Artificial Intelligence (AI) technologies are influenced by their self-reported literacy (i.e., their understanding of and ability to use AI technology) and anxiety (i.e., their fear of the AI technology's impact on society). We conducted an anonymous survey, gathering 313 valid responses, and used Structural Equation Modelling to examine the relationships between these factors, with the Technology Acceptance Model (TAM) as our base model. Our findings indicate that while literacy fosters a positive attitude towards acceptance, anxiety has a small, although significant, direct negative effect and plays a significant role in mediating the influence of literacy. Specifically, anxiety serves as a complementary partial mediator between literacy and acceptance, meaning that a portion of the effect of literacy on acceptance is mediated through anxiety, while literacy still explains a portion of acceptance independent of anxiety. Our study also confirms that the dimensions of technology acceptance suggested by TAM significantly shape individuals' attitudes towards AI, as they do for other digital technologies. In this respect, AI literacy positively influences the perception of ease of use and usefulness, thereby contributing to the overall acceptance of AI-based technology. We also discuss the implications of these findings for the development of critical digital literacy in the context of AI-based technology.

Keywords: Artificial Intelligence (AI), AI Literacy, AI Anxiety, Technology Acceptance Model



1. Introduction
Over the past few years, the discussion around Artificial Intelligence (AI) has increasingly expanded among the general public, largely driven by the growing awareness of its potential and the increasing availability of AI-based technologies (European Commission, 2023; World Economic Forum, 2023).

The growing ubiquity of AI is impacting society, particularly as digital technology becomes more intricately woven into everyday life. Indeed, the capacity to engage and interact with AI systems is swiftly evolving into an essential competency in today's world, compelling individuals to enhance their abilities or pivot their professions (Carolus et al., 2023; Long & Magerko, 2020; Ng et al., 2021; Pinski & Benlian, 2023; Wang et al., 2022).

Furthermore, AI technology can bring significant societal and economic benefits, but it might also have undesirable consequences (Stahl & Wright, 2018). Indeed, considerable attention is starting to be devoted to ethical issues raised by AI, such as transparency, privacy, responsibility, and the impact on employment and professional competences, among many others, to ensure that the benefits of this technology outweigh its risks (Jobin et al., 2019; Stahl & Wright, 2018). Although some of the risks of AI might seem to belong to an indefinite future, the novelty and complexity of this technology, and the new challenges it poses to society, are already sources of anxiety for laypeople (Li & Huang, 2020; Wang & Wang, 2022; Zhan et al., 2023).

In this paper, we try to articulate laypeople's acceptance of AI technology, specifically concerning the relationship between the anxiety or fear that this technology can induce and the perceived understanding, in terms of literacy, that people may have of it.
We took the stance of the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) and specifically the dimensions also described in the Technology Acceptance Model (TAM) (Davis, 1989). This approach follows the theory of Planned Behaviour (Ajzen, 1991) and posits that acceptance, defined as intention to use, drives actual use or future adoption. In its original version (Davis, 1989; Davis et al., 1989), TAM relied on just two constructs, ease of use and usefulness: both impact attitude toward use, while only usefulness affects intention to use, with ease of use directly affecting usefulness. Extensions of this model investigated the impact of different subjective and contextual constructs, as well as applications in specific contexts of use (Marangunić & Granić, 2015; Taherdoost, 2018). Our study falls into the first category, as it incorporates two new subjective constructs, AI Anxiety and AI Literacy, into the model. In this respect, it contributes to the existing literature on AI acceptance and to the application of TAM as a model to understand the acceptance and intended use of AI technology.
To explore the impact of literacy and anxiety in the broadest manner, we opted not to limit our investigation to a particular AI-based system. Instead, in our empirical study we referred to AI as a generic technology. While we recognize that referring to AI technology in a generic sense might introduce variability in the data due to the diverse interpretations that participants may have of AI, we believe that this approach allows us to better focus on our primary interest of understanding the relationship between these constructs. Furthermore, by measuring the correlation between the variables, we can potentially mitigate the impact of any such variability. Indeed, other TAM studies have investigated the impact of contextual dimensions on acceptance without focusing on a specific technology: among others, Gruzd et al. (2012) discuss social media use in research practices without focusing on a specific platform; similarly, Venkatesh et al. (2012) investigate mobile internet use among Hong Kong residents without specifying the purpose or motivation for the use.

2. Related works
Recent years have seen a growing interest in exploring interactions between people and AI, as well as AI education, focusing on the factors that shape people's attitudes towards AI-based technology. While acceptance of AI technology is a well-explored topic, particularly in relation to the TAM literature, there are also studies delving into the anxiety or fear induced by this technology. Other research has started to investigate people's perceived understanding, or literacy, in terms of the competencies and skills required for engaging with AI-based systems. However, comprehensive studies that examine the interrelationships among AI Acceptance, AI Anxiety, and AI Literacy, along with their potential implications, are still scarce. Although these dimensions have been studied individually (Johnson & Verdicchio, 2017; Kelly et al., 2023; Li & Huang, 2020; Long & Magerko, 2020; Sohn & Kwon, 2020; Wang et al., 2022), and there are some studies on the influence of AI Anxiety on the acceptance of AI technologies (Kaya et al., 2022; Kelly et al., 2023), there remains a noticeable gap in linking these dimensions with AI Literacy. Our work seeks to bridge this gap by presenting a study that explores their interrelationships.

In the remainder of this section, we review the main literature on each dimension, discussing
potential relationships and formulating the research hypotheses investigated in our study.

2.1 AI Acceptance

Numerous models and frameworks have been developed to explain user adoption of new technologies (Taherdoost, 2018), including the Unified Theory of Acceptance and Use of Technology (UTAUT, Venkatesh et al., 2003) and the Technology Acceptance Model (TAM, Davis, 1989). Generally, acceptance is defined as the behavioral intention or willingness to use, buy, or try a good or service, and user acceptance of technology has been found to be fundamental to the successful uptake of devices (Davis, 1989; Taherdoost, 2018).
In the original TAM, perceived usefulness and perceived ease of use serve as precursors to attitudes toward computer use (Davis, 1989). The model posits that intention to use a technology is influenced by one's attitude toward its use, as well as the direct and indirect impacts of perceived usefulness and ease of use. These two factors together impact the attitude toward usage, with perceived ease of use also directly affecting perceived usefulness. Thus, perceived usefulness (PU) and perceived ease of use (PEU) have long been identified as variables that can help in understanding attitudes towards the adoption of a new technology (Davis, 1989; Morosan, 2011; Taherdoost, 2018). When a new technology is perceived to be highly useful, it increases the likelihood of its adoption (Morosan, 2011). Moreover, Davis et al. (1989) further emphasized that through the examination of external factors closely linked to ease of use and usefulness, researchers can formulate more effective strategies for fostering technology acceptance.
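
Schematically, these base TAM relations can be summarized as a set of linear structural equations (our notational rendering, with ATT denoting attitude toward use, BI behavioral intention to use, the betas standardized path coefficients, and the epsilons residuals):

    \begin{aligned}
    PU  &= \beta_1\, PEU + \varepsilon_1 \\
    ATT &= \beta_2\, PU + \beta_3\, PEU + \varepsilon_2 \\
    BI  &= \beta_4\, ATT + \beta_5\, PU + \varepsilon_3
    \end{aligned}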

Recently, there has been a surge of interest and research toward user acceptance of AI technology. However, the existing research appears dispersed and lacks systematic synthesis (Kelly et al., 2023). Studies have investigated the factors affecting the acceptance of AI-based systems across different application domains, such as autonomous cars (Cugurullo & Acheampong, 2023), emotion-sensing devices (Ho et al., 2022), facial recognition systems (Zhong et al., 2021), and others (Ismatullaev & Kim, 2022). Review studies highlight a strong interconnectedness between technological, human, and behavioral factors, with attitude and perceived usefulness consistently emerging as the most influential determinants of adoption intention (Ismatullaev & Kim, 2022; Kelly et al., 2023).

In the context of AI technology acceptance, several studies have incorporated TAM antecedents into their investigations, structuring their surveys around the TAM constructs (Kelly et al., 2023; Lin & Xu, 2022; Xu & Wang, 2021). For instance, studies that adopted the TAM showed that ease of use and usefulness positively influenced customers' attitudes towards different AI applications such as facial recognition payment systems (Zhong et al., 2021), AI robotic architects for design (Lin & Xu, 2022), and robot lawyer technology (Xu & Wang, 2021).

In addition to traditional acceptance theories, other dimensions such as trust (Choung et al., 2023), perceived risk, transparency (Wanner et al., 2022), reliability, enjoyment (Sohn & Kwon, 2020), and explainability and causability (Shin, 2021) have also been investigated for their potential roles in the acceptance of AI systems. Other studies highlighted significant effects of technological factors on user behavior and the dependence of technology adoption on user characteristics such as age, gender, and education (Ismatullaev & Kim, 2022). Perceived usefulness, performance expectancy, attitudes, trust, and effort expectancy significantly and positively predicted intention, willingness, and use behavior of AI across multiple contexts (Kelly et al., 2023).

Building on this foundation, this study aims to explore the relationship between acceptance of AI-based systems and two pivotal dimensions identified in the AI adoption literature: AI literacy and AI anxiety. Simultaneously, we employ well-established constructs from the acceptance model to evaluate the impact of perceived usefulness, ease of use, and prior experience on the adoption of AI technology.

2.2 AI Literacy

The concept of literacy has evolved to encompass different facets of modern society, and various measurement frameworks have been devised accordingly. For example, scales to assess information literacy, computer literacy, data literacy, media literacy, privacy literacy, and similar domains have been developed and widely employed in numerous studies (Carolus et al., 2023; Lund & Agbaji, 2023; Ng et al., 2021; Wang et al., 2022). As AI technology becomes increasingly integrated into our daily lives, it becomes paramount to understand users' competencies in using it effectively (Long & Magerko, 2020; Ng et al., 2021). AI competence is emerging as a crucial skill, gaining ever greater importance for future employment prospects (Ng et al., 2021; Pinski & Benlian, 2023; Wang et al., 2022). However, users may find themselves interacting with AI-embedded applications without a reliable mental model of the underlying algorithms, introducing potential challenges to their comprehension (Kaya et al., 2022; Long & Magerko, 2020; Ng et al., 2021). AI Literacy can be defined as 'the capability to accurately identify, effectively utilize, and critically assess AI-related products while adhering to ethical standards' (Wang et al., 2022). Ng et al.'s (2021) review of AI Literacy definitions suggests different facets for this construct, specifically: strategies for making proficient use of AI (Use and Apply), a proper mental model and robust understanding of what AI is (Know and Understand), the ability to recognize when AI is used in a system (Detect AI), and awareness of the ethical concerns that this technology can raise (AI Ethics). Similarly, Wang et al. (2022) proposed a theoretical framework to conceptualize AI Literacy, comprising four constructs: Usage, defined as the ability to exploit AI technology to proficiently perform tasks; Awareness, the ability to identify and comprehend AI systems during interactions; Evaluation, the capacity to critically analyze and assess AI applications and their outcomes; and Ethics, which involves recognizing responsibilities and risks associated with using AI systems.
Several studies have delved into AI Literacy, but the majority of them have utilized qualitative research methods, primarily conducting exploratory research for initial investigations (Ng et al., 2021). These studies have predominantly concentrated on strategies to improve AI Literacy rather than quantifying it in connection with other variables (for example, Almaiah et al., 2022; Druga et al., 2019; Su et al., 2023). In particular, the influence of digital literacy on digital technology acceptance has not been extensively explored. Some studies within the educational domain have expanded TAM to encompass digital literacy as a factor facilitating acceptability (Kabakus et al., 2023). However, the literature on AI literacy is still evolving, with more exploration required. While research examining the relationship between AI Anxiety and AI Acceptance is available (Cugurullo & Acheampong, 2023; Kaya et al., 2022), there is a lack of literature exploring the interplay between AI Literacy and other dimensions related to AI acceptance. Our work aims to fill this gap by examining the relationship between AI Literacy and AI Acceptance, investigating its connections with TAM antecedents and the role of AI Anxiety.

2.3 AI Anxiety

AI is permeating our lives in various ways, and fear of technological change and demands for government regulation of new technologies are not new phenomena. In the extant literature, technophobia, or the exaggerated fear of advanced technology caused by its potential side effects, is often investigated (Khasawneh, 2018).
Previous studies have argued that automated technologies such as AI are likely to disrupt society as we know it today, with significant implications for the workforce (Scherer, 2015; Wang & Wang, 2022; Zhan et al., 2023) and concerns related to safety, privacy, surveillance, misinformation, ethical decision-making, transparency, and accountability (Hagendorff, 2020; Leavy, 2018; Long & Magerko, 2020; Scherer, 2015; Stahl & Wright, 2018). AI Anxiety is worthy of attention also because the literature suggests a relation between anxiety and technological acceptance (Almaiah et al., 2022; Kaya et al., 2022; Wang & Wang, 2022). Given the rapid development of AI, there is a tendency toward increasing anxiety about AI, and as this technology advances, AI Anxiety is likely to become more widespread among the public (Kaya et al., 2022; Li & Huang, 2020). Only recently has the concept of AI Anxiety started to receive attention from scholars, who have tried to characterize its dimensional composition. In this respect, AI Anxiety can be defined as the "fear and worry about loss of control over AI". Another definition is that proposed by Johnson and Verdicchio (2017), who defined AI anxiety as an affective response of anxiety or fear that inhibits an individual from interacting with AI; they also consider the feeling of fear about losing control of this technology.

So far, AI Anxiety models have been developed from anxiety models of previous technological advancements, such as computer anxiety (Li & Huang, 2020). However, AI substantially differs from earlier technologies due to a range of intrinsic peculiarities (Li & Huang, 2020; Pinski & Benlian, 2023; Scherer, 2015). Therefore, AI Anxiety might be a particular phenomenon that is likely to generate a wider range of anxieties than, for example, computer anxiety (Li & Huang, 2020). Li and Huang (2020) drew on the "integrated fear acquisition" theory (Menzies & Clarke, 1995; Rachman, 1977) to explain anxiety in the context of AI, as both emotions share similar origins and consequences. This approach considers anxiety as a "recycled" form of fear, protecting individuals from potential future threats.

The literature suggests a relationship between anxiety and the acceptance of new technology (Dönmez-Turan & Kır, 2019; Khasawneh, 2018; Taherdoost, 2018; Torkzadeh & Angulo, 1992). Specifically, higher levels of computer anxiety are often associated with lower levels of acceptance and use of technology (Taherdoost, 2018; Torkzadeh & Angulo, 1992). Furthermore, computer anxiety is negatively correlated with perceived usefulness and ease of use of the technology (Dönmez-Turan & Kır, 2019).

In line with these findings, studies have been conducted on AI Anxiety and AI Acceptance, employing various methodologies and exploring the phenomenon from different perspectives (Almaiah et al., 2022; Cugurullo & Acheampong, 2023; Kaya et al., 2022). For example, Cugurullo and Acheampong (2023) studied anxiety related to autonomous vehicles, finding that people substantially fear cars driven by AI. Despite this, fear of autonomous cars did not influence the intention to adopt them; rather, what influenced people's intention to use AI-driven cars were the perceived individual, urban, and global benefits that AI technology may generate. Almaiah et al. (2022) investigated the impact of AI, social anxiety, and computer anxiety in e-learning settings, finding, with relevance for AI developers and practitioners interested in applying AI in the educational sector, that "cooperative learning environments have a remarkable impact on anxiety", as sharing information collaboratively helped learners feel more comfortable, and that computer anxiety is related to individual anxiety about using the technology, while social anxiety plays no role in it. Moreover, Kaya et al. (2022) found that personality traits, AI anxiety, and demographics play significant roles in shaping attitudes toward AI. However, they found that not all the subconstructs of AI Anxiety have a significant predictive impact on attitudes towards AI. While learning anxiety and AI configuration anxiety (fear of humanoid AI) were associated with less forgiving attitudes towards the drawbacks of AI, job replacement anxiety and sociotechnical blindness (failure to recognize that AI systems work in combination with people) did not significantly predict positive or negative attitudes toward AI (Kaya et al., 2022).

Additionally, some studies suggest that technology anxiety may not only have a direct impact on acceptance or intention to use but also (or instead) a moderating effect altering the strength of other variables' effects (Jeng et al., 2022; Yang & Forney, 2013). For instance, Yang and Forney (2013) found that in the adoption of mobile shopping, facilitating conditions had a stronger effect on consumers with low technology anxiety compared to those with high anxiety. Moreover, Jeng et al. (2022) found that older individuals with higher anxiety levels experienced a negative impact on their attitude towards the intention to use a technology. Our study seeks to investigate whether this moderating effect also applies to AI anxiety.
p

3. Study
The objective of the study was to evaluate the roles of AI literacy and AI anxiety in understanding the acceptance of AI technology.

The research is guided by the following questions:



● To what extent do AI literacy and AI anxiety influence AI acceptance, and how does AI anxiety mediate this relationship?
● How do the factors from the Technology Acceptance Model (TAM) explain AI acceptance, and what is the relationship between AI literacy and these factors, specifically perceived usefulness and ease of use?

To address these questions, we formulated several hypotheses and tested them through a
survey study.

3.1 Research hypotheses


Based on the literature reviewed previously, the following hypotheses were tested.
H1. AI-based technology acceptance adheres to the TAM model. According to the Technology Acceptance Model, users' perceived usefulness (PU), perceived ease of use (PEU), and past experiences (EXP) with new technologies can shape their attitudes towards AI Acceptance. This, in turn, can influence their intention to adopt the technology (INT).

A second set of hypotheses explores the potential impact of AI Literacy and AI Anxiety on AI
Acceptance, while also examining their interrelationships.

H2. AI Literacy influences AI Acceptance. Considering that AI literacy can make AI technology seem less intimidating and more accessible (Long & Magerko, 2020; Ng et al., 2021), we anticipate that higher levels of AI literacy will foster more positive attitudes towards the acceptance of AI-based technologies.
H3. AI Literacy influences AI Anxiety. We posit that a comprehensive understanding and knowledge of AI technologies can demystify them, thereby rendering them less intimidating and mitigating anxiety or apprehension. Consequently, we anticipate that increased AI Literacy will be correlated with reduced levels of AI Anxiety.
H4. AI Anxiety influences AI Acceptance. As established in the literature, elevated levels of AI Anxiety tend to diminish the overall acceptability of AI-based technologies (Almaiah et al., 2022; Cugurullo & Acheampong, 2023; Kaya et al., 2022).
H5. AI Anxiety impacts the relation between AI Literacy and AI Acceptance. We propose, with
an exploratory aim, that AI Literacy and AI Anxiety are interrelated in their impact on

acceptability. Specifically:
H5.1: We hypothesize that AI Anxiety serves as a mediator in the relationship between AI
Literacy and AI Acceptance. It affects the extent to which AI Literacy contributes to a
favourable acceptance of AI technology.
H5.2: We also hypothesize that AI Anxiety acts as a moderator in the relationship between AI Literacy and AI Acceptance. Although this aspect has not been previously discussed in the literature, the strength of the relationship between AI Literacy and AI Acceptance may be greater at lower levels of anxiety than at higher levels. In other words, higher levels of AI Anxiety may diminish the positive effect of AI Literacy on AI Acceptance.
The last two hypotheses investigate the effect of AI Anxiety and AI Literacy on TAM constructs.
H6. AI Anxiety moderates the relation between Acceptance and Intention to Use. As observed in the literature (Jeng et al., 2022; Vahedi & Saiphoo, 2018; Yang & Forney, 2013), anxiety might moderate the effects of attitude and intention to use digital technology. This hypothesis investigates whether the effect also holds for AI anxiety.


H7. AI Literacy influences Perceived Usefulness and Perceived Ease of Use. When individuals are equipped with the knowledge and skills to interact with AI, they are more likely to acknowledge the potential advantages and contributions AI can make in their personal or professional lives. This acknowledgment may result in increased perceived usefulness and enhanced perceived ease of use of the technology. The relation between AI Literacy and precursors to attitudes toward technology use is thus tested with this hypothesis.
3.2 Conceptual model

Figure 1 shows the conceptual model tested in this study and the related research hypotheses.
Figure 1. Conceptual model.
3.3 Study Methodology and Sample

This study employed a quantitative survey methodology. The survey was prepared in both English and Italian and, to ensure linguistic and conceptual parity, a back-translation method was utilized for the translation of items between the two languages.

From June 2023 to August 2023, the online questionnaire was disseminated using snowball sampling. Respondents who willingly agreed to participate were required to confirm this by clicking the "Continue" button on a starting page describing the organization and the purpose of the study. After confirmation, the participants were guided to complete a self-administered questionnaire. To ensure data completeness, participants were required to provide responses to all questions before submitting the survey. In total, the study obtained a valid sample of 313 responses. Among them, 53.6% (n = 168) were female. The mean age of all participants was 36.8 years (SD = 15.03); the youngest respondent was 18 and the oldest 77. The majority of participants (94.2%) held European citizenship, while a small proportion (5.8%) were nationals of countries outside Europe. The majority of participants had a bachelor's or master's degree (34.5% and 38.7%, respectively) and spent multiple hours per day on the internet (68.1%). Detailed statistics are reported in Table 1.

Table 1. Demographics of the main study sample

Characteristic    Attribute                    Frequency    Percentage (%)
Gender            Female                       168          53.6
                  Male                         139          44.4
                  Other                        6            2.0
Age               18-20                        10           3.2
                  21-30                        150          47.9
                  31-40                        44           14.1
                  41-50                        40           12.8
                  >50                          69           22.0
Nationality       Italian                      235          75.1
                  Dutch                        36           11.5
                  Other EU countries           24           7.6
                  Non-EU countries             18           5.8
Education         High school or less          74           23.6
                  Bachelor's degree            108          34.5
                  Master's degree              121          38.7
                  Doctoral degree              9            2.9
                  Prefer not to say            1            0.3
Occupation        Worker                       190          60.7
                  Student                      88           28.1
                  Unemployed                   3            1.0
                  Retired                      23           7.3
                  Other                        8            2.6
                  Prefer not to say            1            0.3
Internet usage    One hour or less per week    1            0.3
                  Few hours per week           26           8.3
                  One hour or less per day     73           23.3
                  Multiple hours per day       213          68.1

3.4 Measures

The questionnaire was designed to collect empirical data and test the proposed hypotheses. AI Literacy, AI Anxiety, and AI Acceptance were measured separately. The survey was divided into five sections. The first section collected demographic data about respondents, including information about their occupations and their use of the internet. The second section contained questions about AI Acceptance, including Perceived Ease of Use (PEU), Perceived Usefulness (PU), Previous Experience with AI technology (EXP), and intention to use it in the future (INT), according to the base TAM (Davis, 1989; Venkatesh et al., 2003). The third part of the survey measured respondents' self-assessed AI Literacy using the AI Literacy Scale (AILS) developed by Carolus et al. (2023). The last section of the questionnaire measured AI Anxiety using the AI Anxiety Scale (AIAS) developed by Wang and Wang (2022). The items are scored on a Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). For Actual Use, we employed a single-item nominal scale with five options measuring frequency of use; this item was then transformed into a numerical variable from 1 to 5 (1=never, 2=rarely, 3=sometimes, 4=often, 5=very often). The complete list of items is reported in Appendix A.
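
As an illustration of the last step, the frequency options can be recoded into the 1-5 numerical variable as follows (a minimal R sketch; the data frame survey and the column actual_use are hypothetical names, not the actual survey variables):

    # Recode the single-item Actual Use measure into a numeric 1-5 variable
    freq_levels <- c("never", "rarely", "sometimes", "often", "very often")
    survey$actual_use_num <- as.integer(factor(survey$actual_use, levels = freq_levels))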

3.4.1 Perceived Ease of Use (PEU), Perceived Usefulness (PU), Prior Experience (EXP) and Intention to Use (INT). PEU and PU were each measured with one item, respectively "I think AI systems would be easy to use for people I know" and "I think that AI systems are, in general, a useful technology". Similarly to Lund and colleagues (2023), EXP was measured using the items "I have used AI systems such as ChatGPT before" and "I have used AI systems such as ChatGPT in the last month", while INT was measured with the items "I am interested in using AI systems" and "I am likely to recommend AI systems to people I know".

3.4.2 AI Literacy (AILS). Consistent with Wang et al.'s (2022) conceptual framework and the exploratory review of Ng et al. (2021), AI Literacy was measured using the AI Literacy Scale (AILS) developed by Carolus and colleagues (2023). The AILS consists of 18 items divided into four constructs aimed at measuring the different aspects of AI Literacy: Use & Apply AI, Know & Understand AI, Detect AI, and AI Ethics. We opted for a scale that measures self-perception rather than evaluating individuals' actual proficiency or knowledge because we aimed to measure the effect on an attitude rather than on the proper use of systems based on this technology.

3.4.3 AI Anxiety (AIAS). To measure AI Anxiety, we employed the AI Anxiety Scale (AIAS) developed by Wang and Wang (2022), which is consistent with the four paths for AI Anxiety identified by Li and Huang (2020). The main difference between Li and Huang's (2020) integrated fear acquisition theory and the AIAS is the absence of privacy violation anxiety and lack of transparency anxiety in Wang and Wang's (2022) model. All the remaining aspects of AI Anxiety identified by Li and Huang are included in Wang and Wang's model. The AIAS comprises 21 items grouped into four constructs, designed to assess the various dimensions of AI Anxiety: Learning, AI Configuration, Job Replacement, and Sociotechnical Blindness.

3.4.4 AI Acceptance. Consistent with the Technology Acceptance Model (TAM), AI Acceptance is used to refer to people's intention to use AI technology (Davis et al., 1989; Venkatesh et al., 2003; Zhong et al., 2021). To measure AI Acceptance, we followed TAM definitions. Therefore, the scale for AI Acceptance includes two items about participants' attitude toward use: "I would be willing to help promote AI systems." and "I am comfortable with the idea of using AI systems in general."

4. Analysis
The measurement model was assessed using partial least squares structural equation modeling (PLS-SEM), which is suitable for evaluating composite and exploratory models with less established theoretical underpinnings. PLS-SEM was selected for its proficiency in exploratory research and its capability to perform confirmatory model testing (Hair et al., 2021). The data analysis was conducted in two distinct steps. In the first step, the measurement model was assessed, followed in a second step by an evaluation of the structural model and the tests of mediation and moderation.

Measurement Model Assessment.

The psychometric properties of constructs adapted from past studies are reviewed before
proceeding with the main analysis. Cronbach's alpha (α), composite reliability (rhoC), and
average variance extracted (AVE) were calculated to measure the validity and reliability of
constructs. Descriptive statistics of each item are reported in Appendix B.

The constructs exhibited Cronbach's alpha (α) and composite reliability (rhoC) values greater than 0.7 and average variance extracted (AVE) values greater than 0.5, except for AI Anxiety, as shown in Table 2. Therefore, the variables indicated satisfactory convergent validity and reliability (Hair et al., 2021).
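
For reference, both statistics can be written in terms of the standardized outer loadings \lambda_i of a construct's n indicators (standard definitions, as used in Hair et al., 2021):

    \rho_C = \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^2}{\left(\sum_{i=1}^{n}\lambda_i\right)^2 + \sum_{i=1}^{n}\left(1-\lambda_i^2\right)},
    \qquad
    \mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n}\lambda_i^2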

Table 2. Convergent validity and reliability for multi-item constructs

Constructs       Items       Cronbach's α    rhoC     AVE
EXP              2 items     0.827           0.920    0.852
AI Literacy      18 items    0.960           0.963    0.596
AI Anxiety       21 items    0.936           0.942    0.443
AI Acceptance    2 items     0.796           0.907    0.830
INT              2 items     0.884           0.945    0.896

rhoC = composite reliability; AVE = average variance extracted

In this study, discriminant validity was assessed using the Heterotrait-Monotrait Ratio of Correlations (HTMT), a method deemed accurate for ascertaining the distinctiveness of constructs, as suggested by Henseler et al. (2015). The HTMT results shown in Table 3 indicate that all the values are below the stringent criterion of 0.85, thereby confirming that the requirement for discriminant validity has been satisfied.
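
For two constructs i and j measured with K_i and K_j indicators, HTMT is the mean heterotrait-heteromethod correlation divided by the geometric mean of the two mean monotrait-heteromethod correlations (following Henseler et al., 2015):

    \mathrm{HTMT}_{ij} = \frac{\frac{1}{K_i K_j}\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{ig,jh}}{\sqrt{\frac{2}{K_i(K_i-1)}\sum_{g<h} r_{ig,ih}\cdot\frac{2}{K_j(K_j-1)}\sum_{g<h} r_{jg,jh}}}

where r denotes the correlation between the indicated items.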

Table 3. Heterotrait-Monotrait Ratio of Correlations results

Constructs       AI Literacy    AI Anxiety    AI Acceptance    PEU      PU       EXP
AI Literacy      .              .             .                .        .        .
AI Anxiety       0.356          .             .                .        .        .
AI Acceptance    0.663          0.509         .                .        .        .
PEU              0.391          0.192         0.541            .        .        .
PU               0.492          0.351         0.733            0.412    .        .
EXP              0.688          0.335         0.643            0.384    0.467    .
INT              0.618          0.372         0.847            0.549    0.686    0.690

Structural Model.

A PLS-SEM was run using the R library SEMinR (Ray et al., 2022). The structural model is shown in Figure 2. It includes the standardized path coefficients, R² values, and the test results supporting the appropriateness of the structural model. The R² for the dependent variable was 0.601, indicating that the whole model could explain 60% of the variance in AI acceptance in the sample of this study.


Figure 2. Results of the test of the structural model. *p < 0.05; **p < 0.01.
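
For transparency, the following is a minimal sketch of how such a model can be specified and estimated with SEMinR; construct and item names (e.g., AIL1-AIL18) are illustrative placeholders, not the actual survey column names.

    library(seminr)

    # Measurement model: one composite per construct (item names are placeholders)
    measurements <- constructs(
      composite("AI_Literacy",   multi_items("AIL", 1:18)),
      composite("AI_Anxiety",    multi_items("AIA", 1:21)),
      composite("PEU",           single_item("PEU1")),
      composite("PU",            single_item("PU1")),
      composite("EXP",           multi_items("EXP", 1:2)),
      composite("AI_Acceptance", multi_items("ACC", 1:2)),
      composite("INT",           multi_items("INT", 1:2))
    )

    # Structural model: the hypothesized paths (H1-H4, H7)
    structure <- relationships(
      paths(from = c("PEU", "PU", "EXP", "AI_Literacy", "AI_Anxiety"),
            to = "AI_Acceptance"),
      paths(from = "PEU",                    to = "PU"),
      paths(from = "AI_Literacy",            to = c("AI_Anxiety", "PU", "PEU")),
      paths(from = c("PU", "AI_Acceptance"), to = "INT")
    )

    model <- estimate_pls(data = survey, measurement_model = measurements,
                          structural_model = structure)
    boot  <- bootstrap_model(model, nboot = 5000, seed = 42)
    summary(boot)  # path estimates, t-statistics, and bootstrap confidence intervals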

The path coefficient results are presented in Table 4. They show that perceived ease of use (PEU) and perceived usefulness (PU) have a significant positive relationship with AI Acceptance (β=.17, p<.001 and β=.36, p<.001, respectively). As expected, PEU has a significant positive relation with PU (β=.41, p<.001), and PU has a positive relation with intention to use (INT) (β=.29, p<.001). Previous experience (EXP) has a positive relationship with AI Acceptance (β=.13, p<.001), and AI Acceptance has a positive relationship with INT (β=.73, p<.001). These results were expected from the base TAM.

AI Literacy has a significant positive relationship with AI Acceptance (β=.20, p<.001), and AI Anxiety has a significant negative relationship with AI Acceptance (β=–.19, p<.001). AI Literacy also has a significant positive relationship with PU (β=.41, p<.001) and with PEU (β=.40, p<.001).

Table 4. Structural model results.

H     Path                           Path estimation (β)    t-stat.    CI [2.5%, 97.5%]
H1    PEU → AI Acceptance            0.168                  3.75       [0.077, 0.260]
H1    PU → AI Acceptance             0.362                  6.29       [0.274, 0.477]
H1    EXP → AI Acceptance            0.128                  2.94       [0.037, 0.195]
H1    PEU → PU                       0.411                  8.09       [0.310, 0.501]
H1    PU → INT                       0.294                  4.83       [0.188, 0.412]
H1    AI Acceptance → INT            0.533                  10.11      [0.444, 0.638]
H2    AI Literacy → AI Acceptance    0.202                  3.94       [0.107, 0.309]
H3    AI Literacy → AI Anxiety       -0.370                 -6.62      [-0.476, -0.277]
H4    AI Anxiety → AI Acceptance     -0.193                 -4.68      [-0.269, -0.114]
H7    AI Literacy → PU               0.406                  7.16       [0.28, 0.51]
H7    AI Literacy → PEU              0.405                  8.71       [0.32, 0.50]


Test of Mediation.

Mediation effects were tested following Hair et al. (2021) and Nitzl et al. (2016). Table 5 shows the total, direct, and indirect effects of the hypothesized paths. The results show that the total indirect effect of AI Literacy on AI Acceptance is 0.073. The direct effect from AI Literacy to AI Acceptance is 0.636 with a 95% confidence interval [0.577, 0.699]. As this confidence interval does not include zero, we conclude that AI Anxiety partially mediates the effect of AI Literacy on AI Acceptance. The product of the three paths is positive (0.048). We therefore conclude that AI Anxiety acts as a complementary partial mediator in the relationship between AI Literacy and AI Acceptance. A complementary partial mediation indicates that a portion of the effect of AI Literacy on AI Acceptance is mediated through AI Anxiety, whereas AI Literacy still explains a portion of AI Acceptance that is independent of AI Anxiety. Complementary partial mediation is often called a "positive confounding" or a "consistent" model (Zhao et al., 2010).

Table 5. Mediation result.

H      Path                          Direct effect                   Indirect effect                 Mediation
H5.1   AI Literacy → AI Acceptance   0.202                           0.073                           Complementary
       through AI Anxiety            (t = 3.9, CI [0.108, 0.309])    (t = 3.9, CI [0.041, 0.114])    partial mediation
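
As a consistency check, the indirect effect equals the product of its two component paths, (-0.370) × (-0.193) ≈ 0.071, in line with the bootstrapped estimate of 0.073 reported above. In SEMinR, the bootstrap confidence interval for this specific indirect path can be obtained along the following lines (a sketch, reusing the construct names from the illustrative specification above):

    # Bootstrap CI for the specific indirect effect
    # AI_Literacy -> AI_Anxiety -> AI_Acceptance
    specific_effect_significance(boot, from = "AI_Literacy", through = "AI_Anxiety",
                                 to = "AI_Acceptance", alpha = 0.05)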

Test of Moderation.

Moderation effects were evaluated by constructing interaction terms between constructs within the model, utilizing specific functions included in the seminr package (Hair et al., 2021; Ray et al., 2022). Analysis of the bootstrapped paths revealed that the moderating influence of AI Anxiety on the relationship between AI Literacy and AI Acceptance was not statistically significant. Similarly, the moderating effect of AI Anxiety on the relationship between AI Acceptance and Intention to Use (INT) was not statistically significant. These results are detailed in Table 6.

Table 6. Moderation result.

H      Path                                        Path estimation (β)    t-stat.    CI [2.5%, 97.5%]
H5.2   AI Literacy * AI Anxiety → AI Acceptance    -0.007                 -0.265     [-0.073, 0.042]
H6     AI Acceptance * AI Anxiety → INT            0.011                  0.312      [-0.057, 0.079]
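
For completeness, the sketch below shows one way such interaction terms can be declared in SEMinR; the two-stage approach shown is one of the methods the package supports, and the construct and item names again follow the illustrative specification above.

    # Interaction constructs for the moderation tests (two-stage approach)
    measurements_mod <- constructs(
      composite("AI_Literacy",   multi_items("AIL", 1:18)),
      composite("AI_Anxiety",    multi_items("AIA", 1:21)),
      composite("AI_Acceptance", multi_items("ACC", 1:2)),
      composite("INT",           multi_items("INT", 1:2)),
      interaction_term(iv = "AI_Literacy",   moderator = "AI_Anxiety", method = two_stage),
      interaction_term(iv = "AI_Acceptance", moderator = "AI_Anxiety", method = two_stage)
    )

    # Interaction terms are referenced as "IV*Moderator" in the structural paths
    structure_mod <- relationships(
      paths(from = c("AI_Literacy", "AI_Anxiety", "AI_Literacy*AI_Anxiety"),
            to = "AI_Acceptance"),
      paths(from = c("AI_Acceptance", "AI_Acceptance*AI_Anxiety"), to = "INT")
    )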

5. Discussion

The variables derived from the TAM literature proved to be significant in this research, aligning with previous studies and therefore reinforcing this model as a reliable approach to understanding acceptance and adoption of technology (Davis, 1989; Venkatesh et al., 2003). Our study emphasizes the importance of perceived usefulness and ease of use in influencing an individual's decision to accept and utilize AI, as with other technologies. Indeed, as argued by Davis et al. (1989), (perceived) usefulness is positively impacted by (perceived) ease of use for AI technology too, and both these dimensions positively influence acceptance. Since ease of use reflects, at least partially, the importance of the interaction, whereas usefulness pertains more to the task at hand, our results may suggest that AI is not perceived as a "natural and intuitive" form of interaction per se, as marketing hype might suggest. In the same way, prior experience with AI technology (EXP) increases the acceptance of AI technology, suggesting that our participants had, in general, positive experiences with the AI-based systems to which they were exposed. Finally, in general, AI Acceptance improves the intention to use (INT) AI-based technology. These results confirm our hypothesis H1.

This study confirms a positive relationship between AI Literacy and AI Acceptance. As noted by Carolus and colleagues (2023) in a correlation study, competences in AI and the willingness to use AI technology are related. Our findings support this statement by positioning it within the general framework of TAM as a significant positive relationship between AI Literacy and AI Acceptance: as AI Literacy increases, so does AI Acceptance, confirming our second hypothesis (H2).

Our results also show a statistically significant relationship between the dimensions of AI Literacy and AI Anxiety. Specifically, the results indicate that an increase in AI Literacy corresponds to a decrease in AI Anxiety. This suggests that as individuals gain more knowledge and competence in AI, their anxiety and apprehension towards it diminish. Therefore, promoting a deeper understanding of and proficiency in AI could mitigate the growth of negative sentiments surrounding its emergence. These results confirm hypothesis H3. This is an original and meaningful finding, since anxiety can impact one's willingness to adopt new technology, as suggested by other studies (Cugurullo & Acheampong, 2023; Zhong et al., 2021; Lund & Agbaji, 2023) and also confirmed by our model in the statistically significant negative association between AI Anxiety and AI Acceptance, thus confirming our fourth hypothesis (H4).

In examining the link between AI Literacy and AI Acceptance, the mediation analysis revealed a complementary partial mediation effect of AI Anxiety on the relationship between AI Literacy and Acceptance, confirming H5.1. This indicates that a portion of AI Literacy's effect on AI Acceptance is explained by the dimension of AI Anxiety. Therefore, the presence of a mediation effect provides a more nuanced understanding of how AI Literacy and AI Acceptance are related by showing that the relationship is not just direct but is also influenced by the reduction of AI Anxiety. Nevertheless, the relationship between AI Anxiety and AI Literacy is a true mediation, since there is still a component of anxiety that impacts directly on AI Acceptance. As far as we can tell, this is an insight not previously discussed in the literature.

The analysis did not reveal any statistically significant moderation effects; thus H5.2, which posited that AI anxiety would moderate the relationship between literacy and acceptance, was not confirmed. This means that, regardless of the level of anxiety, the impact that literacy has on acceptance does not change.

Similarly, the potential moderation effect of AI Anxiety on the relationship between AI


Acceptance and Intention to Use was not found as statistically significant, thereby not
ee
confirming H6. These findings do not align with the moderating effects of anxiety in technology
acceptance documented in prior research (Jeng et al., 2022; Vahedi & Saiphoo, 2018; Yang
& Forney, 2013). This discrepancy may be attributed to the distinct demographic profile of our
study's participants, which skewed towards a younger population or to the unique
characteristics of AI technology itself. This aspect should be better investigated in further
p

research.

Lastly, our findings indicate that AI Literacy positively influences Perceived Usefulness (PU) and Perceived Ease of Use (PEU), thereby confirming H7. This relationship underscores the importance of a better understanding of this technology, since this alone can enhance one's perception of its usefulness and ease of use, subsequently facilitating its acceptance.

Overall, our study confirms almost all of its hypotheses. AI Literacy and AI Anxiety affect AI Acceptance, and these three dimensions of AI are interrelated such that as AI Literacy increases, AI Acceptance also increases while AI Anxiety decreases, and vice versa. The results also confirm the mediating role of AI Anxiety between AI Literacy and AI Acceptance, and thus the interaction between these three variables. Lastly, the TAM proved valid also in the context of AI-based technologies, with AI Literacy positively influencing the antecedents of perceived ease of use and usefulness. The lack of a moderation effect of AI Anxiety on the relation between literacy and acceptance (our disconfirmed H5.2) was for us an explorative question, and it may suggest that literacy is equally important for users with or without anxiety toward AI. On the other hand, the disconfirmed H6 on the moderation by anxiety of the relation between acceptance and intention to use was not expected from the extant literature and has to be further investigated in future work.

The significant impact of AI Literacy on AI Acceptance underscores the importance of AI knowledge as a key factor in fostering positive attitudes and intentions towards using AI-based technology. AI Literacy not only modulates AI Anxiety but also enhances the perception of the relevance, usefulness, and ease of interaction with such technology. It is also crucial to understand that AI Anxiety is not entirely separate from literacy. In fact, AI Anxiety mediates the relation between literacy and acceptance. This evidence highlights the need to promote AI Literacy among the general population. Enhancing AI Literacy might empower individuals with the necessary skills and knowledge to critically assess and engage with AI technologies, thereby fostering acceptance and ultimately facilitating the adoption of this important new technology. Furthermore, this informed understanding may also serve to alleviate anxiety related to AI.

The negative effect that anxiety has on acceptance suggests the need to directly tackle AI Anxiety in order to facilitate a more positive reception of AI in society. This could involve openly addressing the ethical concerns posed by AI technology. Furthermore, the recent push toward Human-Centric AI (Shneiderman, 2020a, 2020b), with its emphasis on reliability, safety, and trustworthiness, may help reduce anxiety toward this technology.

Indeed, these observations are in line with recent literature that highlights the positive role of AI Literacy as a crucial aspect of digital competencies (Carolus et al., 2023; Long & Magerko, 2020; Ng et al., 2021), especially in the educational domain (Druga et al., 2019; Gibellini et al., 2023; Polak et al., 2022; Su et al., 2023). This perspective also aligns with the initiatives of various policymakers who are advocating for the development of a more conscious and informed use of AI technology (European Commission Joint Research Centre, 2018; Miao et al., 2021; Vincent-Lancrin & van der Vlies, 2020), emphasizing its importance in safeguarding the well-being of future generations.

While we believe that this research offers valuable theoretical and practical insights, it is important to recognize certain limitations. The sample is relatively homogeneous, primarily due to the snowball sampling approach used for recruitment, which could potentially limit the generalizability of our findings. Moreover, recent research suggests that fear, and by extension AI Anxiety, is a context-dependent emotion that can be influenced by geographic and cultural differences (Cugurullo & Acheampong, 2023). Since the majority of our participants were from Italy (75.1%) and the Netherlands (11.5%), the results regarding AI anxiety should also be assessed in different contexts.

Similarly, although the overall age range of our participants is relatively large, almost half of the participants are between 21 and 30 years old. This might have impacted the results; for example, it may partially explain the absence of the expected moderation effect of anxiety on the relation between acceptance and intention to use.

Finally, the AI Anxiety Scale (AIAS) developed and validated by Wang and Wang (2022) presented an Average Variance Extracted (AVE) of 0.443, which is marginally below the recommended threshold of 0.5. This discrepancy could stem from potential redundancy among the scale's items or from nuances introduced during the Italian translation process, suggesting that some refinements may be beneficial.

6. Conclusion
In this paper, we adapt the Technology Acceptance Model (TAM) in its basic form (Davis, 1989) and apply it to generic applications of AI-based technologies. We decided not to focus on any specific AI application in order to investigate in general the relation between the dimensions of literacy and anxiety and the acceptance of this type of technology and, as a consequence, its adoption. A structural equation approach was used to model the relationships between those variables and the core dimensions of TAM.

Our results confirm that the Technology Acceptance Model applies to AI-based technology
similarly to other digital technologies. We found that Perceived Ease of Use and Perceived
Usefulness significantly shape individuals' attitudes towards AI, thereby fostering AI
acceptance.

The findings also highlight the interconnected relationship between AI Acceptance, AI Literacy, and AI Anxiety. The results indicate that AI Acceptance is positively influenced by AI Literacy and negatively impacted by AI Anxiety. Furthermore, we observed that enhancing AI Literacy alleviates anxiety towards AI, subsequently increasing its acceptance. Moreover, AI Literacy had a positive effect on the perception of ease of use and usefulness, thus contributing to the overall acceptance of AI-based technology.

Lastly, AI Anxiety also serves as a complementary partial mediator between AI Literacy and AI Acceptance, meaning that a portion of the effect of literacy on acceptance is mediated through anxiety. Our study identified AI Anxiety as a mediator, but not as a moderating factor, between these two dimensions. Indeed, we did not observe the moderating effect between acceptance and intention to use that has been reported in previous studies. This could indicate that AI anxiety operates differently from other forms of technology-related anxiety discussed in the literature. However, it is important to note that these findings should be further investigated, as the limited age range of our sample may have affected our ability to fully recognize these effects.

As a practical lesson learned, our results suggest the importance of cultivating AI-related competencies for increasing acceptance of this type of technology, both directly and in several indirect ways: enhancing the perceived usefulness and ease of use of the technology, as well as reducing anxiety. Of course, the public discourse surrounding the definition of AI literacy is ongoing, with general guidelines still in development (Ng et al., 2021). The scientific community, however, appears to be converging on the understanding that AI Literacy should encompass not only the technical aspects of AI's functioning but also considerations of its ethical implications (Long & Magerko, 2020; Ng et al., 2021; Wang et al., 2022). Again, this approach is supported by our findings on the direct and indirect relation between anxiety and acceptance.

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used Grammarly to detect grammar, punctuation, and spelling errors. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

References

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T

Almaiah, M. A., Alfaisal, R., Salloum, S. A., Hajjej, F., Thabit, S., El-Qirem, F. A., Lutfi, A., Alrawad, M., Al Mulhem, A., Alkhdour, T., Awad, A. B., & Al-Maroof, R. S. (2022). Examining the Impact of Artificial Intelligence and Social and Computer Anxiety in E-Learning Settings: Students' Perceptions at the University Level. Electronics, 11(22), Article 22. https://doi.org/10.3390/electronics11223662

Carolus, A., Koch, M., Straka, S., Latoschik, M. E., & Wienrich, C. (2023). MAILS – Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies (arXiv:2302.09319). arXiv. https://doi.org/10.48550/arXiv.2302.09319

Choung, H., David, P., & Ross, A. (2023). Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543

Cugurullo, F., & Acheampong, R. A. (2023). Fear of AI: An inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance. AI & SOCIETY. https://doi.org/10.1007/s00146-022-01598-6

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35(8), 982–1003. https://doi.org/10.1287/mnsc.35.8.982

Dönmez-Turan, A., & Kır, M. (2019). User anxiety as an external variable of technology acceptance model: A meta-analytic study. Procedia Computer Science, 158, 715–724. https://doi.org/10.1016/j.procs.2019.09.107

Druga, S., Vu, S. T., Likhith, E., & Qiu, T. (2019). Inclusive AI literacy for kids around the world. Proceedings of FabLearn 2019, 104–111. https://doi.org/10.1145/3311890.3311904

European Commission. (2023, October 13). A European approach to artificial intelligence | Shaping Europe's digital future. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

European Commission Joint Research Centre. (2018). The impact of Artificial Intelligence on learning, teaching, and education. Publications Office. https://data.europa.eu/doi/10.2760/12297

Gibellini, G., Fabretti, V., & Schiavo, G. (2023). AI Education from the Educator's Perspective: Best Practices for an Inclusive AI Curriculum for Middle School. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3544549.3585747

Gruzd, A., Staves, K., & Wilk, A. (2012). Connected scholars: Examining the role of social media in research practices of faculty using the UTAUT model. Computers in Human Behavior, 28(6), 2340–2350. https://doi.org/10.1016/j.chb.2012.07.004

Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8

Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook. Springer International Publishing. https://doi.org/10.1007/978-3-030-80519-7

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8

Ho, M.-T., Mantello, P., Ghotbi, N., Nguyen, M.-H., Nguyen, H.-K. T., & Vuong, Q.-H. (2022). Rethinking technological acceptance in the age of emotional AI: Surveying Gen Z (Zoomer) attitudes toward non-conscious data collection. Technology in Society, 70, 102011. https://doi.org/10.1016/j.techsoc.2022.102011

Ismatullaev, U. V. U., & Kim, S.-H. (2022). Review of the Factors Affecting Acceptance of AI-Infused Systems. Human Factors, 00187208211064707. https://doi.org/10.1177/00187208211064707

Jeng, M.-Y., Pai, F.-Y., & Yeh, T.-M. (2022). Antecedents for Older Adults' Intention to Use Smart Health Wearable Devices: Technology Anxiety as a Moderator. Behavioral Sciences, 12(4), 114. https://doi.org/10.3390/bs12040114

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), Article 9. https://doi.org/10.1038/s42256-019-0088-2

Johnson, D. G., & Verdicchio, M. (2017). AI Anxiety. Journal of the Association for Information Science and Technology, 68(9), 2267–2270. https://doi.org/10.1002/asi.23867

Kabakus, A. K., Bahcekapili, E., & Ayaz, A. (2023). The effect of digital literacy on technology acceptance: An evaluation on administrative staff in higher education. Journal of Information Science, 01655515231160028. https://doi.org/10.1177/01655515231160028

Kaya, F., Aydin, F., Schepman, A., Rodway, P., Yetişensoy, O., & Demir Kaya, M. (2022). The Roles of Personality Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. International Journal of Human–Computer Interaction, 0(0), 1–18. https://doi.org/10.1080/10447318.2022.2151730

Kelly, S., Kaye, S.-A., & Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 101925. https://doi.org/10.1016/j.tele.2022.101925

Khasawneh, O. Y. (2018). Technophobia: Examining its hidden factors and defining it. Technology in Society, 54, 93–100. https://doi.org/10.1016/j.techsoc.2018.03.008

Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16. https://doi.org/10.1145/3195570.3195580

Li, J., & Huang, J.-S. (2020). Dimensions of artificial intelligence anxiety based on the
Pr

integrated fear acquisition theory. Technology in Society, 63, 101410.


https://doi.org/10.1016/j.techsoc.2020.101410

Lin, C.-Y., & Xu, N. (2022). Extended TAM model to explore the factors that affect intention to

19

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4668256
use AI robotic architects for architectural design. Technology Analysis & Strategic
Management, 34(3), 349–362. https://doi.org/10.1080/09537325.2021.1900808

d
Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design
Considerations. Proceedings of the 2020 CHI Conference on Human Factors in

we
Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727

Lund, B., & Agbaji, D. (2023). Information Literacy, Data Literacy, Privacy Literacy, and
ChatGPT: Technology Literacies Align with Perspectives on Emerging Technology
Adoption within Communities. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.4324580

ie
Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from
1986 to 2013. Universal Access in the Information Society, 14(1), 81–95.
https://doi.org/10.1007/s10209-014-0348-1

ev
Menzies, R. G., & Clarke, J. C. (1995). The etiology of phobias: A nonassociative account.
Clinical Psychology Review, 15(1), 23–48. https://doi.org/10.1016/0272-
7358(94)00039-5

rr
Miao, F., Holmes, W., Huang, R., Zhang, H., & Unesco. (2021). AI and education: Guidance
for policymakers.

Morosan, C. (2011). Customers’ Adoption of Biometric Systems in Restaurants: An Extension


ee
of the Technology Acceptance Model. Journal of Hospitality Marketing & Management,
20(6), 661–690. https://doi.org/10.1080/19368623.2011.570645

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy:
An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.
https://doi.org/10.1016/j.caeai.2021.100041
p

Pinski, M., & Benlian, A. (2023). AI Literacy—Towards Measuring Human Competency in


Artificial Intelligence: Vol. Proceedings of the 56th Hawaii International Conference on
System Sciences. IEEE Computer Society Press. https://hdl.handle.net/10125/102649
ot

Polak, S., Schiavo, G., & Zancanaro, M. (2022). Teachers’ Perspective on Artificial Intelligence
Education: An Initial Investigation. Extended Abstracts of the 2022 CHI Conference on
Human Factors in Computing Systems, 1–7.
tn

https://doi.org/10.1145/3491101.3519866

Rachman, S. (1977). The conditioning theory of fearacquisition: A critical examination.


Behaviour Research and Therapy, 15(5), 375–387. https://doi.org/10.1016/0005-
7967(77)90041-9
rin

Ray, S., Danks, N. P., & Valdez, A. C. (2022). seminr: Building and estimating structural
equation models. R package version.

Scherer, M. U. (2015). Regulating Artificial Intelligence Systems: Risks, Challenges,


ep

Competencies, and Strategies (SSRN Scholarly Paper 2609777).


https://doi.org/10.2139/ssrn.2609777

Shin, D. (2021). The effects of explainability and causability on perception, trust, and
acceptance: Implications for explainable AI. International Journal of Human-Computer
Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Pr

Shneiderman, B. (2020a). Bridging the Gap Between Ethics and Practice: Guidelines for
Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on
Interactive Intelligent Systems, 10(4), 26:1-26:31. https://doi.org/10.1145/3419764

20

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4668256
Shneiderman, B. (2020b). Human-Centered Artificial Intelligence: Reliable, Safe &
Trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.

d
https://doi.org/10.1080/10447318.2020.1741118

Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial

we
Intelligence-based intelligent products. Telematics and Informatics, 47, 101324.
https://doi.org/10.1016/j.tele.2019.101324

Stahl, B. C., & Wright, D. (2018). Ethics and Privacy in AI and Big Data: Implementing
Responsible Research and Innovation. IEEE Security & Privacy, 16(3), 26–33.
https://doi.org/10.1109/MSP.2018.2701164

ie
Su, J., Ng, D. T. K., & Chu, S. K. W. (2023). Artificial Intelligence (AI) Literacy in Early
Childhood Education: The Challenges and Opportunities. Computers and Education:
Artificial Intelligence, 4, 100124. https://doi.org/10.1016/j.caeai.2023.100124

ev
Taherdoost, H. (2018). A review of technology acceptance and adoption models and theories.
Procedia Manufacturing, 22, 960–967. https://doi.org/10.1016/j.promfg.2018.03.137

Torkzadeh, G., & Angulo, I. (1992). The concept and correlates of computer anxiety.

rr
Behaviour & Information Technology, 11(2), 99–108.
https://doi.org/10.1080/01449299208924324

Vahedi, Z., & Saiphoo, A. (2018). The association between smartphone use, stress, and
ee
anxiety: A meta-analytic review. Stress and Health, 34(3), 347–358.
https://doi.org/10.1002/smi.2805

Venkatesh, Thong, & Xu. (2012). Consumer Acceptance and Use of Information Technology:
Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly,
36(1), 157. https://doi.org/10.2307/41410412
p

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of
Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425–478.
ot

Vincent-Lancrin, S., & van der Vlies, R. (2020). Trustworthy artificial intelligence (AI) in
education: Promises and challenges (OECD Education Working Papers 218; OECD
Education Working Papers, Vol. 218). https://doi.org/10.1787/a6c90fa9-en
tn

Wang, B., Rau, P.-L. P., & Tianyi Yuan. (2022). Measuring user competence in using artificial
intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour &
Information Technology, 1–14. https://doi.org/10.1080/0144929x.2022.2072768

Wang, Y.-Y., & Wang, Y.-S. (2022). Development and validation of an artificial intelligence
rin

anxiety scale: An initial application in predicting motivated learning behavior.


Interactive Learning Environments, 30(4), 619–634.
https://doi.org/10.1080/10494820.2019.1674887

Wanner, J., Herm, L.-V., Heinrich, K., & Janiesch, C. (2022). The effect of transparency and
ep

trust on intelligent system acceptance: Evidence from a user-based study. Electronic


Markets, 32(4), 2079–2102. https://doi.org/10.1007/s12525-022-00593-5

World Economic Forum. (2023). Agenda articles on Artificial Intelligence. World Economic
Forum. https://www.weforum.org/agenda/artificial-intelligence-and-robotics/
Pr

Xu, N., & Wang, K.-J. (2021). Adopting robot lawyer? The extending artificial intelligence robot
lawyer technology acceptance model for legal industry by an exploratory study. Journal
of Management & Organization, 27(5), 867–885. https://doi.org/10.1017/jmo.2018.81

21

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4668256
Yang, K., & Forney, J. C. (2013). The moderating role of consumer technology anxiety in
mobile shopping adoption: Differential effects of facilitating conditions and social

d
influences. Journal of Electronic Commerce Research, 14(4), 334.

Zhan, E. S., Molina, M. D., Rheu, M., & Peng, W. (2023). What is There to Fear?

we
Understanding Multi-Dimensional Fear of AI from a Technological Affordance
Perspective. International Journal of Human–Computer Interaction, 0(0), 1–18.
https://doi.org/10.1080/10447318.2023.2261731

Zhao, X., Lynch, J. G., Jr., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and
Truths about Mediation Analysis. Journal of Consumer Research, 37(2), 197–206.

ie
https://doi.org/10.1086/651257

Zhong, Y., Oh, S., & Moon, H. C. (2021). Service transformation under industry 4.0:
Investigating acceptance of facial recognition payment through an extended

ev
technology acceptance model. Technology in Society, 64, 101515.
https://doi.org/10.1016/j.techsoc.2020.101515

rr
p ee
ot
tn
rin
ep
Pr

22

This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4668256
APPENDIX A. List of constructs and corresponding items

| Code | Construct | Item | Supporting literature |
|------|-----------|------|-----------------------|
| EXP1 | Prior experience (EXP) | I have used AI systems such as ChatGPT before. | Lund and Agbaji, 2023 |
| EXP2 | Prior experience (EXP) | I have used AI systems such as ChatGPT in the last month. | Lund and Agbaji, 2023 |
| INT1 | Intention to Use (INT) | I am interested in using AI systems. | Lund and Agbaji, 2023 |
| INT2 | Intention to Use (INT) | I am likely to recommend AI systems to people I know. | Lund and Agbaji, 2023 |
| ACC1 | AI Acceptance | I would be willing to help promote AI systems. | Lund and Agbaji, 2023 |
| ACC2 | AI Acceptance | I am comfortable with the idea of using AI systems in general. | Lund and Agbaji, 2023 |
| PEU | Perceived Ease of Use (PEU) | I think AI systems would be easy to use for people I know. | Lund and Agbaji, 2023; Davis, 1989 |
| PU | Perceived Usefulness (PU) | I think that AI systems are, in general, a useful technology. | Davis, 1989 |
| Lit1 | AI Literacy | I know the most important concepts of the topic "Artificial Intelligence". | Carolus et al., 2023 |
| Lit2 | AI Literacy | I know the definitions of AI. | Carolus et al., 2023 |
| Lit3 | AI Literacy | I can assess what the limitations and opportunities of using AI are. | Carolus et al., 2023 |
| Lit4 | AI Literacy | I can assess what advantages and disadvantages the use of an AI entails. | Carolus et al., 2023 |
| Lit5 | AI Literacy | I can imagine possible future uses for AI. | Carolus et al., 2023 |
| Lit6 | AI Literacy | I can think of new uses for AI. | Carolus et al., 2023 |
| Lit7 | AI Literacy | I can operate AI applications in everyday life. | Carolus et al., 2023 |
| Lit8 | AI Literacy | I can use AI applications to make my everyday life easier. | Carolus et al., 2023 |
| Lit9 | AI Literacy | I can use AI meaningfully to achieve my everyday goals. | Carolus et al., 2023 |
| Lit10 | AI Literacy | In everyday life, I can interact with AI in a way that makes my tasks easier. | Carolus et al., 2023 |
| Lit11 | AI Literacy | In everyday life, I can work together gainfully with an artificial intelligence. | Carolus et al., 2023 |
| Lit12 | AI Literacy | I can communicate gainfully with AI in everyday life. | Carolus et al., 2023 |
| Lit13 | AI Literacy | I can tell if I am dealing with an application based on Artificial Intelligence. | Carolus et al., 2023 |
| Lit14 | AI Literacy | I can distinguish devices that use AI from devices that do not. | Carolus et al., 2023 |
| Lit15 | AI Literacy | I can distinguish if I interact with an AI or a "real human". | Carolus et al., 2023 |
| Lit16 | AI Literacy | I can weigh the consequences of using AI for society. | Carolus et al., 2023 |
| Lit17 | AI Literacy | I can incorporate ethical considerations when deciding whether to use data provided by AI. | Carolus et al., 2023 |
| Lit18 | AI Literacy | I can analyze AI-based applications for ethical implications. | Carolus et al., 2023 |
| Anx1 | AI Anxiety | Learning to understand all of the special functions associated with an AI system makes me anxious. | Wang and Wang, 2022 |
| Anx2 | AI Anxiety | Learning to interact with an AI system makes me anxious. | Wang and Wang, 2022 |
| Anx3 | AI Anxiety | Reading an AI system manual makes me anxious. | Wang and Wang, 2022 |
| Anx4 | AI Anxiety | Taking a class about the development of AI systems makes me anxious. | Wang and Wang, 2022 |
| Anx5 | AI Anxiety | Being unable to keep up with the advances associated with AI systems makes me anxious. | Wang and Wang, 2022 |
| Anx6 | AI Anxiety | Learning to use AI systems makes me anxious. | Wang and Wang, 2022 |
| Anx7 | AI Anxiety | Learning how an AI system works makes me anxious. | Wang and Wang, 2022 |
| Anx8 | AI Anxiety | Learning to use specific functions of an AI system makes me anxious. | Wang and Wang, 2022 |
| Anx9 | AI Anxiety | I am afraid that AI systems may make us dependent. | Wang and Wang, 2022 |
| Anx10 | AI Anxiety | I am afraid that AI systems may make us even lazier. | Wang and Wang, 2022 |
| Anx11 | AI Anxiety | I am afraid that AI systems may replace humans. | Wang and Wang, 2022 |
| Anx12 | AI Anxiety | I am afraid that widespread use of humanoid robots will take jobs away from people. | Wang and Wang, 2022 |
| Anx13 | AI Anxiety | I am afraid that if I begin to use AI systems I will become dependent upon them and lose some of my reasoning skills. | Wang and Wang, 2022 |
| Anx14 | AI Anxiety | I am afraid that AI systems will replace someone's job. | Wang and Wang, 2022 |
| Anx15 | AI Anxiety | I am afraid that an AI system may get out of control and malfunction. | Wang and Wang, 2022 |
| Anx16 | AI Anxiety | I am afraid that AI systems may be misused. | Wang and Wang, 2022 |
| Anx17 | AI Anxiety | I am afraid of various problems potentially associated with an AI system. | Wang and Wang, 2022 |
| Anx18 | AI Anxiety | I am afraid that AI systems may lead to robot autonomy. | Wang and Wang, 2022 |
| Anx19 | AI Anxiety | I find humanoid AI products (e.g., humanoid robots) scary. | Wang and Wang, 2022 |
| Anx20 | AI Anxiety | I find humanoid AI systems (e.g., humanoid robots) intimidating. | Wang and Wang, 2022 |
| Anx21 | AI Anxiety | I don't know why, but humanoid AI products (e.g., humanoid robots) scare me. | Wang and Wang, 2022 |
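
The constructs and items above map directly onto a PLS-SEM specification. The sketch below shows one way such a model could be set up with the seminr R package (Ray et al., 2022); it is a minimal illustration rather than the authors' actual analysis script: the data frame survey_data is hypothetical (one numeric column per item code), and the structural paths are an assumed TAM-style layout that may differ from the estimated model reported in the paper.

```r
# Minimal sketch of a PLS-SEM specification with the seminr package.
# 'survey_data' is a hypothetical data frame with one column per item code;
# the structural paths below are illustrative, not the paper's exact model.
library(seminr)

measurement_model <- constructs(
  composite("Literacy",   multi_items("Lit", 1:18)),  # AI Literacy (Lit1-Lit18)
  composite("Anxiety",    multi_items("Anx", 1:21)),  # AI Anxiety (Anx1-Anx21)
  composite("EaseOfUse",  single_item("PEU")),        # Perceived Ease of Use
  composite("Usefulness", single_item("PU")),         # Perceived Usefulness
  composite("Acceptance", multi_items("ACC", 1:2))    # AI Acceptance (ACC1-ACC2)
)

structural_model <- relationships(
  paths(from = "Literacy", to = c("Anxiety", "EaseOfUse", "Usefulness", "Acceptance")),
  paths(from = c("Anxiety", "EaseOfUse", "Usefulness"), to = "Acceptance")
)

model <- estimate_pls(data = survey_data,
                      measurement_model = measurement_model,
                      structural_model  = structural_model)
summary(model)

# Bootstrapping yields confidence intervals for the direct and indirect paths.
boot <- bootstrap_model(model, nboot = 2000)
summary(boot)
```

The remaining constructs (EXP, INT) follow the same composite/multi_items pattern and are omitted here for brevity.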
APPENDIX B. Descriptive statistics of the items.

| Construct | Code | Mean | Standard deviation | Skewness | Kurtosis |
|-----------|------|------|--------------------|----------|----------|
| Prior experience | EXP1 | 0.60 | 0.49 | -0.40 | 1.16 |
| Prior experience | EXP2 | 1.16 | 1.32 | 0.83 | 2.43 |
| Intention to Use | INT1 | 5.23 | 1.53 | -0.74 | 3.11 |
| Intention to Use | INT2 | 4.62 | 1.68 | -0.42 | 2.45 |
| AI Acceptance | ACC1 | 3.89 | 1.70 | -0.01 | 2.24 |
| AI Acceptance | ACC2 | 4.35 | 1.84 | -0.24 | 2.04 |
| Perceived Ease of Use | PEU | 4.57 | 1.49 | -0.40 | 2.68 |
| Perceived Usefulness | PU | 5.35 | 1.41 | -0.75 | 3.29 |
| AI Literacy | Lit1 | 3.79 | 1.83 | 0.07 | 2.00 |
| AI Literacy | Lit2 | 4.42 | 1.94 | -0.36 | 1.96 |
| AI Literacy | Lit3 | 4.19 | 1.68 | -0.37 | 2.20 |
| AI Literacy | Lit4 | 4.45 | 1.61 | -0.39 | 2.33 |
| AI Literacy | Lit5 | 5.09 | 1.50 | -0.77 | 3.21 |
| AI Literacy | Lit6 | 4.38 | 1.77 | -0.39 | 2.13 |
| AI Literacy | Lit7 | 3.99 | 1.95 | -0.08 | 1.86 |
| AI Literacy | Lit8 | 4.22 | 1.77 | -0.28 | 2.10 |
| AI Literacy | Lit9 | 3.86 | 1.74 | -0.06 | 2.03 |
| AI Literacy | Lit10 | 4.18 | 1.79 | -0.24 | 2.12 |
| AI Literacy | Lit11 | 3.92 | 1.88 | -0.09 | 1.90 |
| AI Literacy | Lit12 | 3.90 | 1.75 | -0.08 | 2.07 |
| AI Literacy | Lit13 | 3.78 | 1.80 | -0.02 | 2.00 |
| AI Literacy | Lit14 | 3.54 | 1.66 | 0.05 | 2.09 |
| AI Literacy | Lit15 | 4.35 | 1.75 | -0.45 | 2.16 |
| AI Literacy | Lit16 | 3.98 | 1.69 | -0.19 | 2.03 |
| AI Literacy | Lit17 | 4.30 | 1.74 | -0.36 | 2.22 |
| AI Literacy | Lit18 | 3.94 | 1.67 | -0.13 | 2.09 |
| AI Anxiety | Anx1 | 3.12 | 1.64 | 0.49 | 2.44 |
| AI Anxiety | Anx2 | 2.89 | 1.66 | 0.70 | 2.61 |
| AI Anxiety | Anx3 | 3.34 | 1.77 | 0.46 | 2.29 |
| AI Anxiety | Anx4 | 2.52 | 1.52 | 1.00 | 3.42 |
| AI Anxiety | Anx5 | 3.35 | 1.68 | 0.18 | 2.09 |
| AI Anxiety | Anx6 | 2.66 | 1.57 | 0.81 | 2.86 |
| AI Anxiety | Anx7 | 2.64 | 1.56 | 0.84 | 2.92 |
| AI Anxiety | Anx8 | 2.68 | 1.61 | 0.89 | 3.03 |
| AI Anxiety | Anx9 | 4.55 | 1.83 | -0.35 | 2.06 |
| AI Anxiety | Anx10 | 5.31 | 1.65 | -0.95 | 3.07 |
| AI Anxiety | Anx11 | 3.89 | 1.94 | -0.03 | 1.80 |
| AI Anxiety | Anx12 | 4.30 | 1.83 | -0.13 | 1.93 |
| AI Anxiety | Anx13 | 4.02 | 1.89 | 0.02 | 1.89 |
| AI Anxiety | Anx14 | 4.86 | 1.70 | -0.51 | 2.34 |
| AI Anxiety | Anx15 | 4.55 | 1.80 | -0.33 | 2.07 |
| AI Anxiety | Anx16 | 5.92 | 1.32 | -1.59 | 5.66 |
| AI Anxiety | Anx17 | 4.75 | 1.65 | -0.42 | 2.39 |
| AI Anxiety | Anx18 | 3.64 | 1.83 | 0.11 | 1.98 |
| AI Anxiety | Anx19 | 4.02 | 1.92 | -0.03 | 1.86 |
| AI Anxiety | Anx20 | 3.77 | 1.81 | 0.12 | 1.99 |
| AI Anxiety | Anx21 | 3.48 | 1.92 | 0.28 | 1.90 |
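
For completeness, per-item statistics of this kind can be reproduced with a few lines of R. The snippet below is a sketch under the assumption that responses are stored in a hypothetical data frame survey_data with one numeric column per item code; the moments package is used here because its kurtosis() reports raw (not excess) kurtosis, which matches the magnitudes in the table, although the authors' actual tooling is not specified.

```r
# Sketch (assumed tooling): computing Appendix B-style descriptive statistics.
# 'survey_data' is a hypothetical data frame with one numeric column per item.
library(moments)  # skewness() and kurtosis(); this kurtosis() is raw, not excess

describe_items <- function(df) {
  data.frame(
    Code     = names(df),
    Mean     = round(sapply(df, mean,     na.rm = TRUE), 2),
    SD       = round(sapply(df, sd,       na.rm = TRUE), 2),
    Skewness = round(sapply(df, skewness, na.rm = TRUE), 2),
    Kurtosis = round(sapply(df, kurtosis, na.rm = TRUE), 2),
    row.names = NULL
  )
}

describe_items(survey_data[, paste0("Anx", 1:21)])  # e.g., the AI Anxiety items
```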
