

Education and Information Technologies
https://doi.org/10.1007/s10639-023-12076-x

Investigating student acceptance of an academic advising chatbot in higher education institutions

Ghazala Bilquise1 · Samar Ibrahim2 · Sa’Ed M. Salhieh3

Received: 2 May 2023 / Accepted: 21 July 2023


© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature
2023

Abstract
The study explores factors affecting university students’ behavioural intentions in
adopting an academic advising chatbot. The study focuses on functional, socio-
emotional, and relational factors affecting students’ acceptance of an AI-driven aca-
demic advising chatbot. The research is based on a conceptual model derived from
several constructs of the traditional technology acceptance models (TAM and UTAUT), the Service Robot Acceptance (sRAM) model for AI-driven self-service technologies, and the Self-Determination Theory (SDT) model of intrinsic motivation. The proposed conceptual model has been tailored to an educational context. A questionnaire survey using a purposive, non-probability sampling technique was applied to collect data from 207 university students at two major universities in the UAE. Subsequently, PLS-SEM causal modelling was applied for hypothesis testing. The results revealed that the functional element of perceived ease of use, together with social influence, significantly affects behavioural intention to accept the chatbot. However, perceived usefulness, autonomy, and trust did not show significant evidence of influence on the acceptance of an advising chatbot. The study reviews chatbot
literature and presents recommendations for educational institutions to implement
AI-driven chatbots effectively for academic advising. It is one of the first studies
that assesses and examines factors that impact the willingness of higher education
students to accept AI-driven academic advising chatbots. This study presents sev-
eral theoretical contributions and practical implications for successful deployment
of service-oriented chatbots for academic advising in the educational sector.

Keywords Academic advising · Conversational Agent · Higher Education Institution · Artificial Intelligence · Technology Acceptance

Extended author information available on the last page of the article


1 Introduction

The academic advising process plays a key role in the student’s scholastic achieve-
ment, as universities offer a wide range of courses, majors, and academic opportu-
nities. Essentially, the quality of advising determines a student’s academic success
(Assiri, Al-Ghamdi & Brdesee, 2020). Academic advising has become an indispens-
able part of Higher Education Institutions (HEIs) to support students' academic
achievement and contribute towards institutional goals of maximizing student reten-
tion and persistence (Campbell et al., 2008; Drake, 2011; Fricker, 2015), thereby
leading to overall academic excellence.
As part of the advising process, the advisor plays a multifaceted role to provide
the direction and assistance needed by students at vital stages of their academic ten-
ure (Iatrellis, Kameas & Fitsilis, 2017; Chan et al., 2019). An advisor’s role encom-
passes several functions some of which include developing personalized study plans
to fit students’ specific needs and directing students to available resources to manage
their academic standing (Campbell et al., 2008). In addition, advisors also support
students’ inquiries on institutional and academic policies and procedures, academic
progress, activities, and more. Thus, the responsibilities of advisors are overwhelm-
ing and often fail to meet the student and institutional expectations of high-quality
interaction and support (Iatrellis, Kameas & Fitsilis, 2017).
In order to assist the complex and time-consuming functions of academic advis-
ing, machine learning recommendation engines, and rule-based expert systems are
increasingly being proposed as technology-based alternatives (Iatrellis, Kameas
& Fitsilis, 2017). Essentially, the goal of such systems is to facilitate prescriptive
advising; by assisting students in selecting appropriate courses, classes, or majors
with minimal direct interaction (Noaman & Ahmed, 2015). However, such systems
tackle only one aspect of advising and fail to provide channels for interaction with
advisors, which is much needed by students to integrate with their environments,
feel connected, and achieve higher levels of satisfaction essential for student suc-
cess (Crookston, 1994; Campbell et al., 2008). A key component of developmen-
tal advising is the immediate engagement and exchange of personalized interaction,
which allows students to contact their advisor at any time (Noaman & Ahmed, 2015).
Advisor-student interaction has been established as a significant factor of student
success (Young-Jones et al., 2013), irrespective of the interaction modality (Junco et
al., 2016).
In this digital era, students are constantly in need of information in order to keep
up with their daily tasks and progress. The expectations of interaction and engage-
ment are being reshaped as today’s digital natives are continuously connected and
prefer immediate support for their academic queries (Robbins, 2020). On the other
hand, it has also become increasingly vital to involve all students in the advising
process to create an advising environment that provides inclusivity to all (Hu, 2020)
and not just those who do not actively seek help. To this end, technology is seen as a
positive channel for active advising and continuous engagement (Junco et al., 2016).
Moreover, during the COVID-19 era, it is expected that students have become accus-
tomed to utilizing technology for their learning and support needs (Liu & Ammigan,
2022).


Recent technological advancements in Artificial Intelligence (AI) and, in particular, Natural Language Processing (NLP) have led to the use of conversational
technologies to engage end users in natural human-like conversations. Chatbots,
also known as conversational agents, are pervasive in various domains (Grudin &
Jacques, 2019) and are utilized for offering personalized, autonomous, and flexible
self-service platforms to end users (Mohamad Suhaili, Salim & Jambli, 2021). Chat-
bots are appealing as they offer efficient services by eliminating repetitive and time-
consuming human-agent communication, thus cutting operational costs by up to 30%
(Adam, Wessel & Benlian, 2021).
Integration of AI-based technology in the advising process can complement a
human advisor, thereby providing immediate and accurate assistance as and when
needed, allowing the advisors to focus and prioritize their advising tasks (Noaman &
Ahmed, 2015). Foregoing the limitations of human advisors, advising chatbots pro-
vide continuous service for enhanced institutional engagement and improved service.
The AI-driven advising chatbot technology has the potential to address the communi-
cation and interaction expectations of students and the institution as a whole. Kuhail
et al. (2022) developed an advising chatbot to assist college students by answering questions related to course selection, academic policies, and more. Lim et al. (2021), on the
other hand developed a pre-emptive advising chatbot that initiates communication
with at-risk students as an early intervention measure. The chatbot predicts the low
performing students based on attendance and performance records and sends alerts
and reminders. Ho et al. (2018) developed a chatbot that advises college students in choosing elective courses. Students can consult with the advising chatbot about course information, senior students' opinions, and possible course selection outcomes. Thus, an advising chatbot has the capability of providing the same support and advice as a human advisor.
Nevertheless, despite the numerous benefits of chatbots, their use for academic
advising is still limited (Okonkwo & Ade-Ibijola, 2021). Studies on chatbot statistics
reveal that nearly 40% of users still prefer human interaction over chatbots (Moran,
2022). Go and Sundar (2019) and Araujo (2018) both argue that users are skeptical
about conversational agents’ abilities and hesitant to rely on them for vital support,
information, and guidance. Rapp et al. (2021) attribute the lack of socio-emotional
intelligence and robotic conversations as a major concern for resistance to embrace
chatbot technology. Moreover, a lack of trust in the chatbot’s abilities is also cited as
another cause of users’ hesitance (Adam, Wessel & Benlian, 2021).
Although the understanding and knowledge of chatbots have increased in recent
years, only a few studies have examined the behavioural intentions, beliefs, and
motivations towards adopting them (De Keyser et al., 2019). Furthermore, several
behavioural studies on chatbot adoption are based on the service industry for service-
oriented chatbots. In the education sector, only a handful of studies have researched
student or faculty perceptions of using chatbots for learning (Sandu & Gide, 2019;
Almahri et al., 2020; Chocarro et al., 2021). It is imperative to thoroughly under-
stand the factors that guide students’ behavioural intentions towards chatbot usage
for advising needs in order to successfully deploy them. Therefore, this study aims to
examine the factors that lead to users’ behavioural intentions to adopt a chatbot for
academic advising. We build a conceptual model by extending constructs from the UTAUT/TAM models for predicting users' behavioural intentions to utilize a chatbot for advising.
While there are several studies on technology adoption intention in HEI (Meet,
Kala & Al-Adwan 2022) and in particular adoption of chatbots for teaching and
learning purposes (Al Shamsi, Al-Emran & Shaalan 2022), to the best of our knowl-
edge, no existing literature investigates student willingness to use a chatbot for advis-
ing purposes. Therefore, this study aims to investigate the behavioural factors that
determine student acceptance of advising chatbots in the higher education sector in
the UAE. It is important to note that no chatbot was developed or adopted in this
study. Our paper builds a conceptual model to study students’ behavioural intention
to adopt a chatbot for academic advising based on their prior knowledge and/or use
of chatbots, in contrast to other studies that have investigated chatbot adoption for teaching and learning purposes.
In this study, we examine the functional, socio-emotional, and relational constructs
of technology acceptance derived from various theoretical models. The results of our
study will provide developers and researchers with an understanding of how users’
perceptions dictate their intention to adopt a chatbot for advising. These findings are
imperative to successfully design and deploy advising chatbots by understanding and
integrating users in new systems and to bridge the gap between behavioural studies
and systems design (Hevner et al., 2004). The novelty of our research is that it is one
of the first studies that presents insights into the acceptance of service-oriented chat-
bots in the educational sector.
The rest of the paper is organized as follows. Section 2 presents the literature
review with an overview of chatbots, theoretical framework, and hypothesis develop-
ment. Section 3 explains the methodology of our work. Section 4 presents a discus-
sion of the results, practical implications, and limitations. Finally, section 5 ends the
study with a conclusion.

2 Literature review

2.1 Chatbot background

Artificial intelligence has the potential to be applied in education to improve learners' experiences in response to anticipated future needs (Moraes, 2021). Therefore, education can benefit significantly from the application of artificial intelligence. Among the most notable examples is Georgia Tech's AI teaching assistant chatbot, Jill Watson, which allows teachers to spend more time on in-depth discussions with students.
converses with humans in natural conversational language; it serves as a virtual per-
sonal assistant, offering assistance with multiple tasks such as seeking information,
searching, and establishing relationships (Sheehan, Jin & Gottlieb, 2020). Chatbots
can significantly enhance the emotional and communication aspects of the learning
process, delivering a more personalised learning experience for each student (Moraes,
2021). In addition, many studies have examined how chatbots can support e-learning.


Chatbot capabilities for supporting academic advising needs have also been high-
lighted in extant literature (Ho et al., 2018; Bilquise & Shaalan, 2022; Kuhail et
al., 2022). An AI-based conversational agent is effective and efficient in advising
interactions as it overcomes human limitations by being available 24/7 to respond to
users’ queries (Bilquise & Shaalan, 2022). Moreover, it is anticipated to reduce the
advisors’ load on more quality interactions with their advisees (Bilquise & Shaalan,
2022). An AI-driven chatbot powered by natural language processing (NLP) tech-
nologies can possess anthropomorphic emotionally intelligent (Bilquise, Ibrahim
& Shaalan, 2022b;) and therefore emulate human like conversations with the advi-
sees (Kuhail et al., 2022). Moreover, an advising chatbot is more likely to provide
inclusivity to all students and also support students’ queries in their native language
(Bilquise, Ibrahim & Shaalan, 2022a). An AI-driven chatbot also has the potential to
intervene at an early stage and support students at risk by using predictive technolo-
gies (Lim, Ho & Chai, 2021).

2.2 Theoretical framework

The Technology Acceptance Model (TAM), proposed by Davis (1989), is a popular framework for assessing technology adoption in an organizational context. This
framework stipulates that a user’s adoption of new technology is driven by two main
factors - its perceived usefulness and perceived ease of use. Sánchez-Prieto et al.
(2020) applied the TAM to study students’ acceptance of AI-based assessment tools.
In another study, Gupta & Yadav (2022) applied the TAM to study the determinants
of faculty adoption of AI and other emerging technologies for teaching. Kim et al.
(2020) focused on perceived ease of communication and perceived usefulness in the
TAM model to determine students’ perceptions of an AI-based teaching assistant.
However, Bagozzi (2007) argues that with the advancement of technology, the TAM
framework is very simplistic as it considers only the functional aspects of technol-
ogy acceptance. Newer models, such as the Unified Theory of Acceptance and Use
of Technology (UTAUT), extend the TAM by including functional aspects and other
technology acceptance constructs, such as facilitating conditions, social influence,
and personal characteristics (Venkatesh et al., 2003). The model was later extended
to UTAUT2, which includes hedonic motivation (Venkatesh, Thong & Xu 2012).
Pedrotti and Nistor (2016) argue that intrinsic motivations play a vital role in deter-
mining behavioural intentions in an educational environment.
Several studies have also extended the TAM with more relevant constructs to gain
deeper insights into perceptions of AI-driven technologies. Choi (2022) extended the
TAM with the perceived trust construct to study teachers’ intention to adopt educa-
tional AI tools. Al Shamsi, Al-Emran & Shaalan (2022) expanded TAM by incorpo-
rating the constructs of subjective norm, enjoyment, facilitating conditions, trust, and
security to explore how students utilize AI-driven conversational agents for educa-
tional purposes. Ragheb et al. (2022) extended TAM with the social influence con-
struct to study student acceptance of chatbot technology. Raffaghelli et al. (2022)
added trust as a dimension to UTAUT to study students’ acceptance of AI-driven
early warning systems. Therefore, it is evident from extant literature that as user
expectations of technology have changed, TAM and UTAUT models lack the capability of providing deeper insights into behavioural intentions within an educational context. Hence, it is crucial to provide a more accurate representation of how users engage with an autonomous AI-driven chatbot by considering socio-emotional and relational factors in addition to functional factors.
The Self Determination Theory (SDT) proposed by Deci and Ryan (2012) relates
intrinsic motivation to behavioural intention stating that the level of autonomy in
decision-making influences intrinsically motivated actions. SDT constructs have
been used to study the willingness of teachers to continue using e-learning technol-
ogy (Sorebo et al., 2009). Nikou and Economides (2017) combined SDT and TAM
to integrate acceptance and motivational factors in a conceptual model, showing a
relationship between motivational factors and technology acceptance.
However, both of these extended models cannot provide a comprehensive frame-
work for new AI-driven technologies that are personalized and offer intelligent ser-
vices to empower users with automated self-service assistance (Gummerus et al.,
2019). Wirtz et al. (2018) overcome this limitation in their proposed Service Robot
Acceptance (sRAM) model. The model examines users’ perceptions of adopting
autonomous service-oriented systems. In addition to the functional aspects, the model
includes the social, emotional, and relational aspects of interaction with technology.
Wirtz et al. (2018) incorporate factors of trust and rapport to explain the relational
dimension of autonomous technology acceptance, while the socio-emotional dimen-
sions are presented by the factors of chatbot features such as human-like character-
istics (anthropomorphism) and social interactions. The sRAM model integrates the
Role theory (Solomon et al., 1985) and the Stereotype Content Model (SCM) (Fiske,
Cuddy & Glick, 2007). The role theory stipulates that the user’s interaction with a
service provider is driven by functional, social, and cultural norms (Solomon et al.,
1985). The SCM theory includes the factor of warmth as an addition to functional
competence, stipulating that user-friendliness and helpfulness are essential for accep-
tance (Fiske, Cuddy & Glick, 2007).
Our proposed conceptual model adopts key constructs from the aforementioned
theories to fit our educational context. This study considers functional, socio-emo-
tional, and relational factors influencing students’ adoption of advising chatbots. Six
main factors are analyzed for their impact on students’ behavioural intention to use an
advising chatbot: Perceived Ease of Use (PEU), Perceived Usefulness (PU), Social
Influence (SI), Perceived Trust (PT), Perceived Autonomy (PA), and Anthropomor-
phism (AN). The factors were selected to encompass the student’s beliefs and expec-
tations of the advising chatbot on adoption. While it is possible that some constructs
may play a mediating role, the focus of this research is to investigate the direct
relationship of the defined constructs on the behavioural intention to adopt the advis-
ing chatbot. Mediating roles of the constructs may be pursued in a further study as
discussed in Sect. 5.3.

2.3 Hypotheses development

Perceived Ease of Use (PEU). According to Davis et al. (1989), perceived ease of use
is the extent to which a particular system may be used with minimal effort. Hamidi
and Chavoshi (2018) claim that new technologies are typically developed for ease of use. They also ascertain that perceived ease of use is an antecedent of behavioural
intention to use technology. Specifically, Dwivedi et al. (2019), as cited in Brachten et al. (2021), recommend enhancing the ease of use of a system to boost the usage intention.
Pillai and Sivathanu (2020) showed that PEU influences the adoption intention of
AI-powered chatbots for travel planning, which aligns with other tourism technology
adoption studies. Furthermore, the study by Ragheb et al. (2022) revealed significant
effects of perceived ease of use on students’ behavioural intention to accept chatbot
technology for learning in a higher education institution in Egypt. In addition, Fer-
nandes and Oliveira (2021) showed that easy-to-use digital voice assistants promote
users’ acceptance of automated services.
In the context of academic advising chatbots, ease of use relates to the chatbot system's features that provide a set of easy-to-use settings and a simple interface that supports seamless interaction and prevents users from putting additional cognitive effort and time into accomplishing a task (Ashfaq et al., 2020).
can complete their tasks efficiently, they are more likely to favor technology. On the
other hand, complicated steps and conversations would lead to frustration and aban-
donment of the technology (Araujo, 2018; Go & Sundar, 2019) if the effort required
to converse with the chatbot is more than reaching out to a human advisor. Thus,
we believe that students are more likely to use the chatbot if they perceive it to be
easy and effortless to accomplish their advising-related tasks. Therefore, we posit the
hypothesis:
H1: Perceived ease of use positively impacts the behavioural intention to adopt
the advising chatbot.

Perceived Usefulness (PU). The perceived usefulness construct is the degree to which
users believe adopting technology will contribute to their performance (Venkatesh, Thong & Xu, 2012). It is primarily related to the system's performance, quality, and
effectiveness (Davis, 1989). Venkatesh et al. (2012) deemed this construct the most
significant when deciding whether or not to adopt a technology. In addition, they
ascertain this construct as a strong predictor of behavioural intention. Several stud-
ies on the adoption of AI-driven chatbots in domains such as tourism (Pillai & Siv-
athanu, 2020) and customer service (Fernandes & Oliveira, 2021; Aslam et al., 2022)
have shown that perceived usefulness has a direct and positive impact on the inten-
tion to use service-oriented chatbots.

In the context of using chatbots for academic advising, the term “performance”
should concern some benefits of chatbots, such as problem-solving and time sav-
ings through real-time information (Ashfaq et al., 2020). Chocarro et al. (2021)
have shown that teachers’ intentions to use technology are positively impacted by
the perceived usefulness of the chatbot. Furthermore, performance expectation has
also been shown to motivate learners to use and accept new technology (Almaiah,
Alamri & Al-Rahmi, 2019). Therefore, we believe that students are more likely to use
a chatbot for advising if they perceive it to be useful in the advising process. Hence
the perceived usefulness of an advising chatbot is a critical factor in determining its
acceptance. We posit the hypothesis:


H2: Perceived usefulness positively impacts the behavioural intention to adopt the
advising chatbot.

Social Influence (SI). It is the degree to which an individual seeks approval of their
social circle in the acceptance of technology (Aslam et al., 2022). It essentially sug-
gests that users’ behavioural intentions are guided by their peers, and social groups
(Sawang, Sun & Salim, 2014) as individuals often tend to comply with others, espe-
cially for short-term decisions such as technology acceptance (Brachten, Kissmer &
Stieglitz, 2021). The underlying rationale behind this construct is that even if people
do not intend to adopt a system, their belief that the significant people in their life
think they should is enough to persuade a change in behaviour (Lorenz & Buhtz,
2017).

Brachten et al. (2021) showed that peer influence has a more substantial impact
than managerial influence in adopting a chatbot within an organizational context.
Consistent with these findings, Fernandes and Oliveira (2021) reported that custom-
ers are influenced by other consumers who believe in the benefits of a service-ori-
ented chatbot. Gursoy et al. (2019) further showed that a strong social influence leads
to the willingness to use AI technology as consumers are persuaded by their social
groups’ belief in the usefulness and ease of the system.
In an educational context, students’ decisions are often influenced by society and
are likely to adopt behavoiurs of their peers, teachers, friends, and family (Martin
et al., 2002). Ragheb et al. (2022) reported a significant impact of social influence
on students’ behavioural intention to use a chatbot for teaching and learning. Based
on these findings, we assume that a student would be positively influenced to use a
chatbot based on the belief of friends, family members, or colleagues. Thus, we posit
the hypothesis:
H3: Social influence positively impacts the behavioural intention to use an advis-
ing chatbot.

Perceived Trust (PT). Trust may be defined as the extent to which an individual
believes the technology is credible, reliable, and secure. Trust is a crucial construct in
a personalized automated system since a user’s belief that the chatbot is efficient and
dependable provides confidence, which signifies trust in the chatbot’s capabilities
(Wirtz et al., 2018). Furthermore, users are more optimistic when they have faith in
the technology and, thus, are more likely to accept it (Fernandes & Oliveira, 2021).
Aslam (2022) showed that perceived trust plays a critical role in service chatbot
acceptance, where a higher sense of trust increases consumers’ willingness to use the
chatbot. Similarly, in the tourism domain, Pillai and Sivathanu (2020) showed that
travelers’ perceived trust leads them to use the travel planning chatbot and share per-
sonal information. Trust has also been identified as a significant factor in the intention
to use movie recommendation chatbots, with user satisfaction playing a mediating
role (Lee & Choi, 2017).

In the educational environment, trust in an advising chatbot refers to the belief that
the chatbot provides accurate information required for advising. Trust in the chatbot could be fostered by the ability of the chatbot to comprehend the student's query and
respond effectively and accurately. Hamidi and Chavoshi (2018) show that students
are more likely to adopt mobile technologies if they perceive them as reliable and
credible. Thus, the students’ behavioural intention to use an advising chatbot is deter-
mined by how trustworthy they perceive it to be, as they would be sharing personal
information about their academic standing, grades, and GPA and relying on accurate
responses for their advising plans. Hence, we propose the hypothesis:
H4: Perceived trust has a positive impact on advising chatbot acceptance.

Perceived Autonomy (PA). According to Nikou and Economides (2017), perceived autonomy refers to the degree to which a person believes he or she can control how
a task is accomplished within a system. It is one of the basic constructs in the Self
Determination Theory (SDT), where decision-making efficiency by a technology
user is influenced by the perception of autonomy (de Vreede, Raghavan & de Vreede,
2021). As a concept, autonomy refers to being in control of one’s behaviour such that
the self makes the decisions rather than external circumstances. De Vreede, Ragha-
van & De Vreede (2021) showed that autonomy is crucial in determining satisfaction
with chatbots, which ultimately influences technology acceptance, engagement, and
adoption.

In the context of academic advising, perceived autonomy refers to the ability of the
students to seek guidance and perform advising tasks independently without the need
for a human advisor. According to Moraes (2021), autonomy matters when independent students seek guidance without compromising their sense of control, thereby indirectly influencing students' intentions and actions toward chatbot adoption. In
addition, Nguyen et al. (2022) based their study on the Self-Determination Theory of
motivation to show that perceived autonomy is a significant predictor of behavioural
intention to adopt and interact with a chatbot interface. Therefore, we believe that the
ability of the students to request advising-related information and make decisions on
their own would positively influence the willingness to use the chatbot. Hence, we
propose the hypothesis:
H5: Perceived autonomy has a positive impact on chatbot acceptance.

Anthropomorphism (AN). Anthropomorphism refers to assigning human characteristics, in terms of form, language, or voice, to non-human objects. Since the advising
chatbot is not multi-modal, we consider language and communication style as the
main anthropomorphic characteristic of the chatbot in terms of expressing emotions
and exhibiting socio-emotional intelligence. Anthropomorphic features are crucial in
determining AI device usage intentions (Lu, Cai & Gursoy, 2019). Social and emo-
tional cues in chatbot interaction make the conversation enjoyable and establish emo-
tional and psychological bonding (de Visser et al., 2016). Using social and emotional
language is more likely to lead consumers to accept the advice of a robotic agent
(Adam, Wessel & Benlian, 2021). Moreover, the human-like features of the chat-
bot tend to increase customer satisfaction and engagement with the conversational
agent (Araujo, 2018). Sheehan et al. (2020) revealed that anthropomorphic features
positively impact the willingness of consumers to use a customer service chatbot. Similarly, the human-like characteristics of a chatbot have been shown to positively influence usage intention in the tourism domain (Cai et al., 2022) and the public transport domain (Kuberkar & Singhal, 2020).

While there are no studies that investigate anthropomorphism in an educational context, we believe that the student-advisor relationship is inherently personal and human. In the context of academic advising, perceived anthropomorphism in a chatbot refers to the student's belief that the chatbot's language and interaction are as empathetic as those of a human advisor. We believe that students are more likely to accept advice from a
chatbot that responds empathetically with appropriate emotional cues. Thus, we posit
the hypothesis:
H6: Anthropomorphism has a positive impact on advising chatbot acceptance.
Our proposed model is shown below (see Fig. 1).

3 Methodology

3.1 Measurement and scale

In this study, a survey questionnaire was designed to collect quantitative data on the
participant’s demographic profiles and responses to construct items adapted from
existing established scales. A pilot study was conducted with five to six respondents
to test the delivery medium and language clarity of the adapted items. A description
of the constructs, their sources, and their corresponding items are presented in the
appendix. The questionnaire statements were rated on a 5-point Likert scale, with
values ranging from (strongly disagree = 1) to (strongly agree = 5).

Fig. 1 Proposed Model


Participants of the study were selected using a purposive sampling technique, which falls under the category of non-probability sampling. This type of sampling
was deemed necessary as our study required specific participants (HEI students) who
possess the characteristics necessary to address our research objectives effectively.
Therefore, by carefully choosing these individuals, we can evaluate our research
questions more precisely and gather targeted information relevant to our study. The
target population of the study is explained in the next sub Sect. 3.2.

3.2 Data collection

The target population of our study is students in UAE Higher Education Institutions,
including both private and public sectors. We targeted one major institution within
each sector to study students’ perceptions and willingness to adopt a chatbot for aca-
demic advising. An online survey was administered to the students to get maximum
responses. The participants were provided with a brief description of a chatbot with
images of a natural conversation with an online agent to ensure awareness of the
concept of an advising chatbot. The textual descriptions and images avoided any
potential bias and were intended to ensure that all respondents were familiar with
the subject of the study. Furthermore, careful consideration and steps were taken
to minimize non-response bias. Nonresponse bias refers to the situation where par-
ticipants chosen for a sample fail to respond, either due to refusal to participate or
inability to access the survey. This bias can lead to inconclusive research findings due
to increased variability in estimates and the sample no longer accurately representing
the entire population. To minimize nonresponse bias, we took several measures. First,
we excluded collection of sensitive information in the survey, such as names and
contact details and ensured anonymity. Second, we used a mobile friendly medium to
deliver the survey to increase response rates. Third, we also ensured the survey was
not too lengthy and would take no more than 7–8 min to complete. Finally, we com-
municated the objective of the survey and estimated time required for completion for
transparency. Moreover, to improve response rates, we sent multiple reminders to
potential respondents.
The sample size was determined based on the recommendation of Hair (2009)
for a level of five to ten observations per parameter. Thus, the final data set of 207
responses is considered to be adequate for this study.
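As a quick arithmetic check of this rule of thumb, a minimal sketch in Python (assuming, for illustration, that the 23 retained scale items are taken as the parameter count):

```python
# Hair (2009): five to ten observations per estimated parameter.
# Assumption for illustration: the 23 retained scale items as parameters.
n_items = 23
n_responses = 207

lower, upper = 5 * n_items, 10 * n_items  # 115 to 230 recommended
print(f"Recommended sample range: {lower}-{upper}; collected: {n_responses}")
print("Meets the 5x floor:", n_responses >= lower)  # True
```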

3.3 Technique

Our research explores and validates the conceptual model constructs through an
Exploratory Factor Analysis (EFA) using the IBM-SPSS tool. Furthermore, the
hypotheses are evaluated using a variance-based causal model. The model was used
to perform the Confirmatory Factor Analysis (CFA), test the validity and the reliabil-
ity of the latent constructs, and evaluate the causal relationships using the Structural
Equation Modelling (SEM) (Hair Jr., Gabriel & Patel 2014). SEM is a statistical
technique that has a set of relations between one or more independent variables (IVs)
and one or more dependent variables (DVs); both are either continuous or discrete.
It provides a means for discovering and verifying relationships between these variables. This model conveys causal processes described by a series of structural or regression equations, where these structural relations are modelled visually to enable
a better conceptualization of the theory behind the study. One of the most significant
advantages of SEM is that it allows the relationships between latent variables to be
examined to reduce errors (Hair et al., 2016).
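The EFA here was run in IBM SPSS. As a rough open-source analogue, a sketch using the Python factor_analyzer package is shown below; the file name survey_items.csv and the 23-column DataFrame of Likert scores are illustrative assumptions, not artifacts of the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical item-level data: 207 rows x 23 Likert-scored columns.
responses = pd.read_csv("survey_items.csv")

# Extract seven fixed factors, mirroring the seven model constructs.
fa = FactorAnalyzer(n_factors=7, rotation="varimax")
fa.fit(responses)

# Item-factor loadings; weak or cross-loading items would be removal
# candidates, as done during item purification in this study.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(3))

# Cumulative variance explained (the paper reports 78% for 23 items).
_, _, cumulative = fa.get_factor_variance()
print("Cumulative variance explained:", cumulative.round(3))
```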
The statistical package, Smart PLS, was adopted to evaluate the causal relation-
ships. SmartPLS uses the Partial Least Squares (PLS) algorithm for SEM modelling
and provides the capability to assess the measurement and structural models simul-
taneously. PLS-SEM is a regression-based method accompanied by factor analy-
sis. Compared to SEM based on covariance, PLS-SEM focuses on maximizing the
variance of the dependent variables explained by the independent variables rather
than reproducing the actual covariance matrix (Aslam et al., 2022). In recent years,
PLS-SEM has been gaining popularity because it has been shown to model latent
constructs even under conditions of non-normality and uses small to medium-sized
samples (Hair et al., 2011).
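The PLS-SEM estimation itself was performed in SmartPLS. As a conceptual stand-in only (sum-score path analysis, not PLS-SEM proper), the structural part can be approximated by forming equally weighted composite scores per construct and regressing BI on the six exogenous composites; all item names and the data file below are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical mapping of retained items to their constructs.
constructs = {
    "PEU": ["PEU01", "PEU02"],
    "PU":  ["PU01", "PU02", "PU03", "PU04"],
    "PA":  ["PA01", "PA02", "PA05"],
    "PT":  ["PT01", "PT02", "PT03", "PT04"],
    "AN":  ["AN01", "AN02", "AN03"],
    "SI":  ["SI01", "SI02", "SI03", "SI04"],
    "BI":  ["BI01", "BI02", "BI03"],
}

responses = pd.read_csv("survey_items.csv")  # assumed item-level data

# Equally weighted composites stand in for PLS latent variable scores.
scores = pd.DataFrame({c: responses[items].mean(axis=1)
                       for c, items in constructs.items()})

# Structural model: BI regressed on the six exogenous constructs (H1-H6).
X = sm.add_constant(scores[["PEU", "PU", "SI", "PT", "PA", "AN"]])
print(sm.OLS(scores["BI"], X).fit().summary())
```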

4 Results

The demographic profile of the participants, illustrated in Table 1, consists of 207 responses, of which 61.4% are male and 38.6% are female. Most
respondents belong to the age group 17–20 (58%) and are non-working students
(72.9%). Furthermore, most respondents also study in a private institution (66.2%).
UAE is a multicultural nation, and the respondents represent various nationalities,
where 44.4% are Emiratis and 55.6% are expatriates of various nationalities such as
British, American, Lebanese, Indian, and more. All the respondents have indicated
an average to a good experience with technology, which is expected as most of the
younger generation would be digitally savvy. In addition, 79.7% have reported hav-
ing experienced using a chatbot.
The instrument used in the study is derived from existing scales and consists of seven fixed factors. As previously mentioned, seven variables were discarded due to poor loading (n = 2) and cross-loading (n = 5) issues. Specifically, two items were removed from each of the constructs PEU, PA, and AN, and one from the PT construct. The
results are illustrated in the Appendix. The resulting 23 items with seven factors have
reached 78% of the total variance, which surpasses the recommended criteria of 60%
(Hair, 2009). The internal consistency of each variable was determined by the Cron-
bach alpha, which ranged from 0.729 to 0.896.
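For reference, Cronbach's alpha follows directly from the item and total-score variances: alpha = k/(k - 1) x (1 - sum of item variances / variance of the summed score). A minimal sketch, assuming the same hypothetical item-level DataFrame as above:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct's items (rows = respondents)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed score
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = pd.read_csv("survey_items.csv")      # assumed data file
print(round(cronbach_alpha(responses[["PT01", "PT02", "PT03", "PT04"]]), 3))
```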
The confirmatory factor analysis (CFA) results indicate an acceptable model fit using
multiple fitness algorithm results - CMIN = 619.772, SRMR = 0.06, rms Theta = 0.167,
and NFI = 0.819. Furthermore, the predictive relevance of the model was measured
by the Q2 (0.445 > 0), which establishes that the proposed model is relevant in pre-
dicting the intention of adoption. In addition, the R2 value of our model is 0.59, which
moderately explains the variance in the dependent (endogenous) constructs (Hair,
Ringle & Sarstedt, 2011). We further determine our model’s construct reliability and
validity using various measures explained in the following sections.


Table 1 Demographic profile of participants

                              N = 207      %
Gender
Male 127 61.35%
Female 80 38.65%
Age
17–20 120 58%
21–24 77 37.2%
25 and above 10 4.8%
Nationality
Emirati 92 44.4%
Expatriate 115 55.6%
Institution of Study
Private 137 66.2%
Public (Government) 70 33.8%
Working Student
Yes 56 27.1%
No 151 72.9%
Experience with Technology
Very Experienced 67 32.4%
Experienced 96 46.4%
Average 44 21.3%
No Experience 0 0%
Prior Chatbot Experience
Yes 165 79.7%
No 42 20.3%

Construct reliability is demonstrated by the measures of Composite Reliability (CR). As shown in the appendix, all factors were above the required threshold of
0.7 for CR, ranging from 0.879 to 0.920, thus demonstrating that our scale items
are internally consistent. Construct validity was determined using both convergent
and discriminant validity measures. Convergent validity ensures that the items that are indicators of a construct are correlated with each other and measure the same construct consistently. As per the recommendation of Hair et al. (2016), standardized factor loadings greater than 0.7 (here ranging between 0.762 and 0.916) and Average Variance Extracted (AVE) greater than 0.5 are adequate measures to assess convergent validity. Table 2 shows the AVE and factor loading results, demonstrating that our constructs meet the minimum threshold requirement.
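Both measures can be reproduced directly from the standardized loadings: CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of (1 - loading^2)], and AVE = mean of squared loadings. A short sketch using the Perceived Trust loadings reported in Table 2:

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR from standardized loadings and their implied error variances."""
    lam = np.asarray(loadings)
    errors = (1 - lam ** 2).sum()
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors)

def average_variance_extracted(loadings) -> float:
    """AVE: mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

pt = [0.869, 0.856, 0.888, 0.881]                 # PT loadings from Table 2
print(round(composite_reliability(pt), 3))        # ~0.928, matches Table 2
print(round(average_variance_extracted(pt), 3))   # ~0.763, matches Table 2
```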
On the other hand, discriminant validity ensures that items of the scale strongly
load on their own construct and are differentiated from the items of the other con-
structs (Hair et al., 2016). Three measures were used to assess discriminant valid-
ity – cross-loadings, the Heterotrait-Monotrait Ratio (HTMT), and the Fornell and Larcker
criteria. Table 3 demonstrates the cross-loadings, showing that each item is strongly
loaded on its own construct and weakly loaded on the other constructs. Furthermore,
Table 4 verifies the Fornell and Larcker criteria of discriminant validity by showing
that the square root of the construct’s AVE score is greater than the inter-construct
correlation (Fornell & Larcker, 1981). Finally, Table 5 shows that the HTMT values
are below the recommended value of 0.9 (Henseler et al., 2014). All three measures


Table 2 Construct and item details
(Factor loadings in parentheses; "Deleted" marks items removed during item purification.)

Perceived Ease of Use (PEU) (CR = 0.879, AVE = 0.785, VIF = 1.749)
Source: Venkatesh, Thong & Xu, 2012; Ashfaq et al., 2020
  PEU01  I think I can use the chatbot for my advising queries without any help  (0.916)
  PEU02  I think that the advising chatbot would not require a lot of mental effort  (0.854)
  PEU03  I think it would be easier to use the advising chatbot to find the information I need on my own  (Deleted)
  PEU04  I think learning to use the advising chatbot would be easy for me  (Deleted)

Perceived Usefulness (PU) (CR = 0.92, AVE = 0.743, VIF = 2.478)
Source: Venkatesh, Thong & Xu, 2012
  PU01  I think using the advising chatbot is useful for getting advising-related information.  (0.863)
  PU02  I think using the advising chatbot would help me accomplish my advising requests more quickly.  (0.852)
  PU03  I think using the advising chatbot would ease in getting my advising information.  (0.884)
  PU04  I think using the advising chatbot would help me with many things  (0.848)

Perceived Autonomy (PA) (CR = 0.906, AVE = 0.762, VIF = 2.334)
Source: Jiménez-Barreto, Rubio & Molinillo, 2021; Nguyen et al., 2022
  PA01  I think that using an advising chatbot would allow me to control how I receive advising information.  (0.878)
  PA02  I think that I could express my true self when requesting advising information.  (0.874)
  PA03  I think that using the advising chatbot would provide me the flexibility to decide when and how to get advising information  (Deleted)
  PA04  I think that the advising chatbot would allow me to request information based on my interests.  (Deleted)
  PA05  I think that using the advising chatbot would allow me to access advising information my way  (0.866)

Perceived Trust (PT) (CR = 0.928, AVE = 0.763, VIF = 2.376)
Source: Lee & Choi, 2017
  PT01  I think I would have faith in the information provided by the advising chatbot.  (0.869)
  PT02  I think that the advising chatbot would provide unbiased and accurate information and recommendations  (0.856)
  PT03  I think that the advising chatbot would be honest and trustworthy  (0.888)
  PT04  I think that the advising chatbot would provide a reliable service.  (0.881)
  PT05  I think that I would trust the advising chatbot with my personal information.  (Deleted)

Anthropomorphism (AN) (CR = 0.915, AVE = 0.783, VIF = 1.705)
Source: Fernandes & Oliveira, 2021; Svikhnushina & Sciences, 2022
  AN01  I would like the advising chatbot to be pleasant to interact with.  (0.908)
  AN02  I would like the advising chatbot to easily understand me.  (0.882)
  AN03  I would like the advising chatbot interaction to be human-like (similar to communicating with a real person)  (0.864)
  AN04  I would like the advising chatbot to be able to express emotions.  (Deleted)
  AN05  I would like the advising chatbot to be empathetic.  (Deleted)

Social Influence (SI) (CR = 0.911, AVE = 0.72, VIF = 1.584)
Source: Lu, Cai & Gursoy, 2019
  SI01  I would use an advising chatbot if many of my classmates and friends will use it  (0.762)
  SI02  Using an advising chatbot will be a status symbol in my social networks (e.g., friends, family, and co-workers)  (0.814)
  SI03  People whose opinions I value would prefer that I use an advising chatbot for advising-related queries.  (0.904)
  SI04  People who are important to me would encourage me to use an advising chatbot  (0.905)

Behavioural Intention to Adopt (BI) (CR = 0.924, AVE = 0.803)
Source: Venkatesh, Thong & Xu, 2012
  BI01  I intend to use the advising chatbot in the future  (0.900)
  BI02  I would always try to use the advising chatbot for my advising needs  (0.877)
  BI03  I plan to use the advising chatbot frequently  (0.911)

verify the discriminant validity of the constructs in our measurement model. More-
over, the Variance Inflation Factor (VIF) values for all the constructs are below 3,
indicating no multicollinearity between constructs (See Table 2).
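For replication, both the Fornell-Larcker matrix and the HTMT ratio can be computed from construct scores and item correlations. A sketch under the same illustrative assumptions as earlier (a construct-score DataFrame and per-construct item blocks):

```python
import numpy as np
import pandas as pd

def fornell_larcker(scores: pd.DataFrame, ave: dict) -> pd.DataFrame:
    """Diagonal = sqrt(AVE); off-diagonal = construct correlations.
    Validity holds when each diagonal value exceeds its row/column entries."""
    m = scores.corr()
    for c in m.columns:
        m.loc[c, c] = np.sqrt(ave[c])
    return m

def htmt(items_i: pd.DataFrame, items_j: pd.DataFrame) -> float:
    """Heterotrait-Monotrait ratio for two constructs' item blocks (< 0.9)."""
    corr = pd.concat([items_i, items_j], axis=1).corr().abs().to_numpy()
    ki, kj = items_i.shape[1], items_j.shape[1]
    hetero = corr[:ki, ki:].mean()                          # between blocks
    mono_i = corr[:ki, :ki][np.triu_indices(ki, 1)].mean()  # within block i
    mono_j = corr[ki:, ki:][np.triu_indices(kj, 1)].mean()  # within block j
    return hetero / np.sqrt(mono_i * mono_j)
```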
The six hypotheses were tested using PLS-SEM with bootstrapping of 10,000 samples. The results of the structural model, shown in Fig. 2, revealed that Social Influence (SI), with a t-value of 5.961 (p < 0.001), is a key factor in determining the intentions of students to accept advising chatbots, thus supporting hypothesis 3. Perceived Ease of Use (PEU), with a t-value of 2.378 (p < 0.05), is another key factor in accepting advising chatbots, supporting hypothesis 1. Furthermore, the results show that Anthropomorphism (AN), with a t-value of 1.61 (p = 0.107), approaches significance only when alpha is set at 0.10; hence, we report hypothesis 6 as marginally supported at best. Lastly, Perceived Usefulness (t = 1.193, p > 0.05), Perceived Autonomy (t = 0.976, p > 0.05), and Perceived Trust (t = 1.119, p > 0.05) are insignificant in determining the acceptance of an advising chatbot, thus rejecting hypotheses 2, 5, and 4, respectively. The results of the structural model are presented in Table 6.
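The t-values in Table 6 come from SmartPLS's bootstrap routine. The same idea in miniature, reusing the hypothetical composite-score regression sketched in Sect. 3.3 (point estimate divided by the bootstrap standard error):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_t_values(scores: pd.DataFrame, n_boot: int = 10_000,
                       seed: int = 0) -> pd.Series:
    """Bootstrap t-values for the six BI paths: point estimate divided by
    the standard deviation of the resampled coefficients."""
    rng = np.random.default_rng(seed)
    exog = ["PEU", "PU", "SI", "PT", "PA", "AN"]
    X = sm.add_constant(scores[exog])
    point = sm.OLS(scores["BI"], X).fit().params[exog]

    draws = np.empty((n_boot, len(exog)))
    for b in range(n_boot):
        sample = scores.iloc[rng.integers(0, len(scores), len(scores))]
        Xb = sm.add_constant(sample[exog])
        draws[b] = sm.OLS(sample["BI"], Xb).fit().params[exog]

    return point / draws.std(axis=0, ddof=1)
```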
The results reveal highly significant evidence that perceived ease of use impacts
the willingness of students to accept advising chatbots. Hence, advising chatbots
that allow users to accomplish their tasks effortlessly will impact the behavioural
intention to adopt the chatbot. Our results align with the findings of several studies
that have shown that ease of use is an essential factor in the technology acceptance
of AI-driven chatbots (Ashfaq et al., 2020; Pillai & Sivathanu, 2020; Aslam et al.,


Table 3 Item cross-loadings


AN BI PA PEU PT PU SI
AN01 0.908 0.48 0.464 0.455 0.563 0.482 0.414
AN02 0.882 0.446 0.353 0.434 0.519 0.448 0.365
AN03 0.864 0.455 0.392 0.375 0.465 0.373 0.437
BI01 0.527 0.900 0.508 0.551 0.579 0.562 0.597
BI02 0.421 0.877 0.516 0.371 0.476 0.454 0.577
BI03 0.445 0.911 0.536 0.414 0.488 0.480 0.631
PA01 0.423 0.476 0.878 0.437 0.569 0.556 0.476
PA02 0.289 0.526 0.874 0.372 0.502 0.544 0.521
PA05 0.488 0.514 0.866 0.431 0.617 0.645 0.457
PEU01 0.436 0.496 0.457 0.916 0.512 0.603 0.332
PEU02 0.408 0.382 0.374 0.854 0.432 0.470 0.248
PT01 0.527 0.466 0.543 0.477 0.869 0.594 0.441
PT02 0.441 0.516 0.483 0.447 0.856 0.511 0.376
PT03 0.519 0.493 0.588 0.460 0.888 0.57 0.420
PT04 0.552 0.535 0.633 0.490 0.881 0.611 0.464
PU01 0.381 0.423 0.498 0.546 0.530 0.863 0.319
PU02 0.386 0.485 0.603 0.503 0.548 0.852 0.394
PU03 0.463 0.498 0.604 0.535 0.580 0.884 0.418
PU04 0.456 0.513 0.583 0.53 0.589 0.848 0.354
SI01 0.477 0.56 0.433 0.354 0.425 0.377 0.762
SI02 0.239 0.475 0.427 0.171 0.302 0.295 0.814
SI03 0.471 0.632 0.527 0.301 0.474 0.420 0.904
SI04 0.343 0.593 0.489 0.286 0.430 0.364 0.905

Table 4 Fornell and Larcker criteria


AN BI PA PEU PT PU SI
AN 0.885
BI 0.521 0.896
PA 0.457 0.58 0.873
PEU 0.477 0.502 0.473 0.886
PT 0.583 0.577 0.644 0.536 0.874
PU 0.492 0.56 0.667 0.613 0.654 0.862
SI 0.459 0.672 0.556 0.333 0.487 0.433 0.849
Note: The bold values in the diagonals are the square root of AVE of each factor; the remaining values
represent the correlations (p < 0.1)

Table 5 Heterotrait-Monotrait Ratio of Correlation (HTMT)


AN BI PA PEU PT PU SI
AN
BI 0.596
PA 0.536 0.673
PEU 0.600 0.612 0.598
PT 0.663 0.645 0.740 0.657
PU 0.560 0.628 0.768 0.753 0.732
SI 0.522 0.765 0.646 0.406 0.545 0.488


Fig. 2 Structural model results (* Supported, ** Not Supported)

Table 6 Hypotheses testing results

Hypothesis       t-value   P value   Result
H1: PEU -> BI    2.378     0.017     Supported
H2: PU -> BI     1.193     0.233     Not Supported
H3: SI -> BI     5.961     0.000     Supported
H4: PT -> BI     1.119     0.263     Not Supported
H5: PA -> BI     0.976     0.329     Not Supported
H6: AN -> BI     1.61      0.107     Not Supported

2022). Therefore, the advising chatbot must provide effortless interaction features
that motivate students to use it. Since a text-based chatbot does not have complex
interaction features, ease of use in the AI-driven conversational agent may be defined
as the ease of communication in natural language. Students would utilize the chatbot
to communicate easily and enquire about their advising needs without communica-
tion complexity. Furthermore, students may expect that the advising chatbots can
understand their requests and respond to them in Arabic or English. In other words,
students’ behaviour to accept advising chatbot and replace the in-person advisor
interaction may only happen if the chatbot is user-friendly and communicates the
information more accessible and with no additional complexity. The student will be
willing to adopt advising chatbots if these chatbots provide accurate responses to
their requests. Therefore, the advising chatbot must permit the ease of use of com-
munications with students, allowing efficient and hassle-free communications with
much less complexity.
Our findings also reveal significant evidence that social influence strongly impacts
behavioural intention to accept an advising chatbot. Our findings are supported by
several studies demonstrating social influence as a critical factor in the adoption of
chatbots in various domains (Arif, Aslam & Ali, 2016; Patil and Kulkarni, 2019;
Chin-Yuan et al., 2022). The acceptance of new technologies is strongly influenced by norms, as people often tend to accept a technology if they know it is accepted by society and peers (Fernandes & Oliveira, 2021). In the age of AI technology,
which is becoming more widely available and trendy, individuals may adopt it to
enhance their social status (Fernandes & Oliveira, 2021). According to Venkatesh
et al. (2003), social influence can affect consumers’ decisions propelled by several
factors, such as beliefs, compliances, and contextual drive. In the university context,
students’ behaviours are often affected by their peers’ behaviour as most of the time,
advising tasks are performed in groups with their classmates. According to Martin
et al. (2002) in Aslam 2021, humans are likely to adopt common behaviours among
their peers.
In addition, our results indicate evidence, at an alpha of 10%, that perceived anthropomorphism impacts behavioural intention to adopt advising chatbots. Although the significance level is weaker than for the other supported factors, the results imply that students would like to communicate with a chatbot that possesses some socio-emotional intelligence and is not robotic. They would prefer a
chatbot that communicates like a human advisor and understands their requests with-
out many iterations and confusion. According to Lu et al. (2019), a product with
anthropomorphized attributes can boost positive affect and behaviors by activating
the schema of the human image in the memory. In addition, the findings of Kuberkar
and Singhal (2020) support our results to show that anthropomorphism or human-like
conversation could influence the adoption of the chatbot system for public transport
commuters. Further, Rietz et al. (2019) emphasize anthropomorphism’s importance
in improving chatbot acceptance. Therefore, the socio-emotional intelligence of a
chatbot, with the ability to understand and communicate with appropriate emotional cues, can increase the positive impact of adopting an advising chatbot.
Past research suggests that users are motivated by the functional performance
of the technology. However, Wirtz et al. (2018) claim that a service that fulfils utilitarian needs is mainly driven by functional requirements. To this effect, several
studies in chatbot acceptance have shown the importance of performance efficacy in
adopting chatbots (Pillai & Sivathanu, 2020; Aslam et al., 2022). However, there is
not enough evidence in our results to support that perceived usefulness is significant
in determining student adoption of an advising chatbot. Although students were not
surveyed on the purpose of using the chatbot, in the context of higher education, it
is expected that the need for advising is driven mainly by emotional complexities
rather than functional requirements. This may explain our findings that have failed to
show significant evidence for the perceived usefulness factor to impact the students’
willingness to utilize a chatbot for advising. Moreover, researchers have claimed
that perceived usefulness strongly impacts the behaviour of individuals who have prior experience with the technology (Dwivedi et al., 2019). While 79% of our respondents have claimed interactions with some form of conversational agent, the extent
and satisfaction level of the interaction is unknown. Furthermore, students may have
used several chatbots for non-functional purposes. These findings may explain why
perceived usefulness is an insignificant factor for our data sample.
Although previous studies have shown that perceived trust is essential in user
acceptance of chatbots (Lee & Choi, 2017; Hamidi & Chavoshi, 2018; Pillai &
Sivathanu, 2020; Aslam et al., 2022), our results do not find significant evidence to support these findings. When individuals rely on automated self-service technology, confidence largely depends on the technology's ability to perform the task
precisely (Fernandes & Pedroso, 2021). This is especially true for service-oriented
chatbots when consumers have prior experience with the chatbot performing service tasks where trust in the information and privacy is critical, such as banking, shopping, and travel arrangements. Van Pinxteren et al. (2019) claim
that trust is a multidimensional phenomenon explained by functional competence,
anthropomorphism, and relatedness. We believe that further research is required to
examine the mediating effect of these factors on trust. Moreover, AI-driven technol-
ogy, like chatbots, is relatively new and has not been integrated into the education
sector. Since students do not have hands-on experience with chatbots for service-
related tasks, there is not enough data and evidence to prove that trust would contrib-
ute to the willingness to accept chatbots for advising.
In addition, our findings show that perceived autonomy is an insignificant factor in determining the willingness to adopt an advising chatbot, which contradicts the findings of Jiménez-Barreto et al. (2021) on acceptance of a tourism chatbot. Our
findings may be explained by the age and maturity level of the chatbot users. The
hospitality and tourism sector consumers are expected to be more mature individuals
accustomed to self-service technologies. Within the University context, the advising
process often encompasses pastoral care and guidance rather than just meeting the
functional requirements. Students, especially in their first or second year, may feel
more comfortable with personal contact with a human advisor for guidance and sup-
port rather than using self-service technology. They may enjoy the care and personal
attention their advisors provide and prefer the traditional form of communication.
Therefore, further research on the moderating effect of age and stickiness on the
human advisor may further explain our findings.
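
One way to probe such a moderating effect is an interaction term in a regression on behavioural intention. The sketch below is illustrative only, assuming respondent-level scores in a CSV file with hypothetical column names.

```python
# Hypothetical moderation probe: does age moderate the
# autonomy -> intention relationship?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # assumed respondent-level scores

# Mean-centre the predictors so the main effects remain interpretable
for col in ("autonomy", "age"):
    df[col] = df[col] - df[col].mean()

# "autonomy * age" expands to autonomy + age + autonomy:age
model = smf.ols("intention ~ autonomy * age", data=df).fit()
print(model.summary())  # a significant autonomy:age term signals moderation
```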

4.1 Theoretical contributions

The study makes multiple theoretical contributions. Firstly, it extends the UTAUT
model to be more appropriate for AI-based technologies. The new model integrates
constructs from the SDT and sRAM models to investigate how students adopt ser-
vice-oriented chatbots for academic advising in an educational setting. Secondly, the
study adds to the growing literature on chatbot adoption by identifying factors that
influence students’ behavioral intentions to use a chatbot for advising services. The
study introduces a new construct, “anthropomorphism,” which has not been thoroughly researched in an educational setting.
Additionally, the study addresses the lack of research on the use of service-oriented
chatbots in higher education. It tests a new research model in a multicultural environ-
ment where design constraints are significant factors in the decision-making process that shapes satisfaction and continued use of a new technology. Finally, the study
contributes not only to the IS literature but also to the student behavior literature by
exploring student perceptions of academic advising and the use of AI technologies to
support the advising process.


4.2 Practical implications

Designing a chatbot that incorporates user needs and expectations can significantly reduce barriers to chatbot acceptance (Saner, 2018). Therefore, to successfully integrate and exploit chatbot technology in HEIs, it is imperative to study the needs and expectations of the stakeholders and examine the factors that influence the adoption of an intelligent conversational agent. The willingness of students to adopt an advising chatbot is crucial to its future implementation. Unfortunately, despite the numerous organizational benefits of automated AI-driven technologies that provide self-service functionalities, user acceptance remains limited. The findings of our study therefore allow decision-makers to understand the factors that motivate students to use a chatbot for their advising needs. Our study revealed that social influence is the most significant factor in advising chatbot adoption. As a direct implication, universities must promote the adoption of advising chatbots as a social norm. Decision-makers can promote the chatbot through students' social groups and student peers, since students are more likely to engage with it when they see their peers actively using and endorsing it.

Perceived ease of use and anthropomorphism were also identified as significant factors in chatbot acceptance. Decision-makers should therefore focus on the chatbot's interaction and anthropomorphic features in terms of language, communication style, responsiveness, and accuracy when designing the chatbot. Our results show that students would be willing to accept the chatbot if they find it effortless, which indicates that natural language communication should be seamless. Acceptance may be enhanced when students can communicate in their own language and dialect and when the chatbot interprets and responds accurately. At the same time, the textual interactions should include socio-emotional cues so the chatbot appears to “understand” the student, making them feel comfortable using the technology. Using emojis and an appropriate language style, as illustrated below, could further enhance the anthropomorphic capabilities of the chatbot.
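
As a toy illustration of such cues (not the authors' implementation), a chatbot's factual answer could be wrapped with an empathetic prefix and a light emoji. The cue lexicon and sentiment labels below are hypothetical placeholders.

```python
# Illustrative sketch: decorating a factual advising answer with
# simple socio-emotional cues. Lexicon and labels are hypothetical.
EMPATHY_PREFIX = {
    "anxious": "I understand this can feel stressful. ",
    "positive": "Great to hear! ",
    "neutral": "",
}

def add_socioemotional_cues(answer: str, student_sentiment: str) -> str:
    """Prepend an empathetic cue and append a friendly emoji."""
    prefix = EMPATHY_PREFIX.get(student_sentiment, "")
    return f"{prefix}{answer} 🙂"

print(add_socioemotional_cues(
    "You need 12 more credits before you can declare your major.",
    "anxious"))
```

In production, the sentiment label would come from an emotion-detection component, and the cue style would be tuned to the institution's voice.
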
The insignificant impact of perceived trust, autonomy, and usefulness on academic advising chatbot acceptance highlights the unique characteristics of students compared to other consumers of self-service technologies. While customers engaging in e-commerce transactions, customer service, and online bookings may value trust, usefulness, and autonomy, students' behaviour may be explained by their lack of experience and engagement with self-service technologies. Decision-makers should therefore promote the functionalities of advising chatbots and make students aware of the chatbot's 24/7 availability. In addition, universities must assure students that, despite the self-service technology, human advisors will always be available to support them when needed. Furthermore, additional data may be collected using a prototype to examine students' confidence in the chatbot's communication.

4.3 Limitations and future studies

Despite the study's contributions to the education advising sector, a few limitations encountered during the research should be acknowledged; these limitations create opportunities for further research. First, the data was gathered through an online survey and only 207 data points were collected; a larger sample would make the contributions more valuable and more generalizable. Several construct items were eliminated from the model due to insufficient loadings in the exploratory factor analysis, and more data points are needed to re-examine the value of these items (an illustrative item-screening step is sketched at the end of this section). Finally, the study examined only six constructs that can influence students' willingness to accept advising chatbots. As the model presented in Fig. 1 shows, the relationships between the constructs and the hypotheses are direct; the absence of mediating constructs that could also affect these hypotheses is a further limitation, and there may be other constructs that significantly impact students' desire to adopt advising chatbots. Moreover, the moderating effect of variables such as age, culture, and stickiness to human advisors may help explain why perceived autonomy and trust were insignificant. In addition, to improve the model's predictive ability, future studies should investigate other antecedents and mediating constructs that impact the willingness of students to accept advising chatbots. Further research might also identify additional factors, such as hedonic motivation, that may impact advising chatbot adoption.
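
The item-screening step mentioned above might look like the following sketch, which drops survey items whose exploratory-factor loadings fall below a common 0.60 cut-off. The data file, factor count, and threshold are illustrative assumptions, and the third-party factor_analyzer package is required.

```python
# Hedged sketch of exploratory-factor-analysis item screening.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("likert_items.csv")  # assumed item-level responses
fa = FactorAnalyzer(n_factors=6, rotation="varimax")  # six constructs assumed
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
max_loading = loadings.abs().max(axis=1)  # each item's strongest loading
retained = max_loading[max_loading >= 0.60].index.tolist()
print("Retained items:", retained)
```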

5 Conclusion

Using AI-based technology in the advising process can complement the human advisor. Rather than being constrained by the limitations of human advisors, chatbots can provide continuous service for a more positive higher education experience. In addition to improving communication and interaction with students, AI-driven chatbots can enhance institutional performance. This research is a novel study that examines the acceptance of academic advising chatbots in higher education. Samples from two major universities were collected through a survey for the purpose of this study. A conceptual model was created by extending constructs from the UTAUT/TAM models to predict users' behavioural intention to utilize an advising chatbot. The study adopted functional, socio-emotional, and relational constructs of technology acceptance derived from various theoretical models and tested the hypotheses using PLS-SEM. The results demonstrated that social influence and perceived ease of use are the socio-functional elements that significantly influenced behavioural intention for chatbot acceptance. In contrast, perceived usefulness, autonomy, and trust were not observed to influence the acceptance of advising chatbots, while anthropomorphism was marginally significant when the p-value threshold is set at 0.1. A major contribution of this study is that it presents various recommendations for educational institutions to implement AI-driven chatbots effectively for academic advising.

Author contribution Ghazala Bilquise: Conceptualization, Methodology, Investigation, Validation, Formal analysis, Data curation, Writing - review and editing. Samar Ibrahim: Conceptualization, Methodology, Investigation, Validation, Formal analysis, Data curation, Writing - review and editing. Sa'Ed M. Salhieh: Validation, Formal analysis, Review and editing.


Funding This research did not receive any specific grant from funding agencies in the public, commercial,
or not-for-profit sectors.

Data Availability The datasets used and analysed during the current study are available from the corre-
sponding author on reasonable request.

Declarations

Conflict of interest The authors declare that they have no conflicting interests.

References
Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445.
Almahri, F. A. J., Bell, D., & Merhi, M. (2020). Understanding Student Acceptance and Use of Chatbots
in the United Kingdom Universities: A Structural Equation Modelling Approach. 2020 6th IEEE
International Conference on Information Management, ICIM 2020, pp. 284–288.
Almaiah, M. A., Alamri, M. M., & Al-Rahmi, W. (2019). Applying the UTAUT model to explain the students' acceptance of mobile learning system in higher education. IEEE Access, 7, 174673–174686.
Al Shamsi, J. H., Al-Emran, M., & Shaalan, K. (2022). Understanding key drivers affecting students’
use of artificial intelligence-based voice assistants. Education and Information Technologies, 27(6),
8071–8091. https://doi.org/10.1007/s10639-022-10947-3
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189.
Arif, I., Aslam, W., & Ali, M. (2016). Students’ dependence on smartphones and its effect on purchasing
behavior. South Asian Journal of Global Business Research, 5(2), 285–302.
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users' satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, 101473.
Aslam, W., Siddiqui, D. A., Arif, I., & Farhat, K. (2022). Chatbots in the frontline: drivers of acceptance.
Kybernetes.
Assiri, A., Al-Ghamdi, A. A. M., & Brdesee, H. (2020). From traditional to intelligent academic advising:
A systematic literature review of e-academic advising. International Journal of Advanced Computer
Science and Applications, 11(4), 507–517.
Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift.
Journal of the association for information systems, 8(4), 3.
Bilquise, G., Ibrahim, S., & Shaalan, K. (2022a). Bilingual AI-Driven Chatbot for Academic Advising.
International Journal of Advanced Computer Science and Applications, vol. 13(8).
Bilquise, G., Ibrahim, S., & Shaalan, K. (2022b). Emotionally Intelligent Chatbots: A Systematic Litera-
ture Review. Human Behavior and Emerging Technologies. Hindawi, vol. 2022.
Bilquise, G., & Shaalan, K. (2022). AI-based Academic Advising Framework: A Knowledge Management
Perspective. International Journal of Advanced Computer Science and Applications, vol. 13(8).
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an enterprise context – A
survey study. International Journal of Information Management. Elsevier Ltd, vol. 60(May 2020),
p. 102375.
Cai, D., Li, H., Law, R., & Law, R. (2022). Anthropomorphism and OTA chatbot adoption: a mixed meth-
ods study. Journal of Travel & Tourism Marketing. Routledge, vol. 39(2), pp. 228–255.
Campbell, S. M., & Nutt, C. L. (2008). Academic advising in the new global century: Supporting student engagement and learning outcomes achievement. Peer Review, 10(1), 4–7.


Chan, Z. C. Y., Chan, H. Y., Chow, H. C. J., Choy, S. N., Ng, K. Y., Wong, K. Y., & Yu, P. K. (2019). Academic advising in undergraduate education: A systematic review. Nurse Education Today, 75, 58–74.
Chin-Yuan, H., Ming-Chin, Y., I-Ming, C., & Wen-Chang, H. (2022). Modeling consumer adoption inten-
tion of an AI-Powered Health Chatbot in Taiwan: An empirical perspective. International Journal of
Performability Engineering, 18(5), 338.
Chocarro, R., Cortiñas, M., & Marcos-Matás, G. (2021). Teachers' attitudes towards chatbots in education: A technology acceptance model approach considering the effect of social language, bot proactiveness, and users' characteristics. Educational Studies, 1–19.
Crookston, B. B. (1994). A Developmental View of Academic Advising as Teaching. NACADA Journal,
14(2), 5–9.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Deci, E. L., & Ryan, R. M. (2012). Self-determination theory. Sage publications ltd.
De Keyser, A., Köcher, S., Alkire, L., Verbeeck, C., & Kandampully, J. (2019). Frontline service technol-
ogy infusion: conceptual archetypes and future research directions. Journal of Service Management.
Emerald Publishing Limited.
de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Para-
suraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents.
Journal of Experimental Psychology: Applied, 22(3), 331–349.
de Vreede, T., Raghavan, M., & de Vreede, G. J. (2021). Design foundations for AI assisted decision-making: A self determination theory approach. Proceedings of the Annual Hawaii International Conference on System Sciences, 166–175.
Drake, J. K. (2011). The role of academic advising in Student Retention and Persistence. About Campus:
Enriching the Student Learning Experience, 16(3), 8–12.
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the Unified Theory of Acceptance and Use of Technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734.
Fernandes, T., & Oliveira, E. (2021). Understanding consumers’ acceptance of automated technologies in
service encounters: Drivers of digital voice assistants adoption. Journal of Business Research, 122,
180–191. https://doi.org/10.1016/j.jbusres.2020.08.058.
Fiske, S. T., Cuddy, A. J. C., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Fricker, T. (2015). The relationship between academic advising and student success in Canadian colleges: A review of the literature. College Quarterly, 18(4).
Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316.
Grudin, J., & Jacques, R. (2019). Chatbots, humbots, and the quest for artificial general intelligence. Con-
ference on Human Factors in Computing Systems - Proceedings, pp. 1–11.
Gummerus, J., Lipkin, M., Dube, A., & Heinonen, K. (2019). Technology in use – characterizing customer
self-service devices (SSDS). Journal of Services Marketing, 33(1), 44–56.
Gupta, P., Yadav, S. (2022). A TAM-based Study on the ICT Usage by the Academicians in Higher Edu-
cational Institutions of Delhi NCR. In Congress on Intelligent Systems: Proceedings of CIS 2021,
Volume 2, 329–353.
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent
(AI) device use in service delivery. International Journal of Information Management, 49, 157–169.
Hair, J. F. (2009). Multivariate data analysis.
Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2016). A primer on partial least squares struc-
tural equation modeling (2nd ed.).). Sage publications.
Hair, J. F. Jr., Gabriel, M. L. D. da S., & Patel, V. K. (2014). AMOS covariance-based structural equation modeling (CB-SEM): Guidelines on its application as a marketing research tool. Revista Brasileira de Marketing, 13(2), 44–55.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a Silver Bullet. Journal of Marketing
Theory and Practice. Routledge, vol. 19(2), pp. 139–152.


Hamidi, H., & Chavoshi, A. (2018). Analysis of the essential factors for the adoption of mobile learning in higher education: A case study of students of the University of Technology. Telematics and Informatics, 35(4), 1053–1070.
Henseler, J., Dijkstra, T. K., Sarstedt, M., Ringle, C. M., Diamantopoulos, A., Straub, D. W., Ketchen,
D. J., Hair, J. F., Hult, G. T. M., & Calantone, R. J. (2014). Common Beliefs and Reality About
PLS: Comments on Rönkkö and Evermann (2013). Organizational Research Methods, vol. 17(2),
pp. 182–209.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Information Systems. MIS
Quarterly, 28(1), 75–105.
Ho, C. C., Lee, H. L., Lo, W. K., & Lui, K. F. A. (2018). Developing a chatbot for college student pro-
gramme advisement. 2018 International Symposium on Educational Technology (ISET). IEEE,
pp. 52–56.
Hu, X. (2020). Building an Equalized Technology-Mediated Advising Structure: Academic Advising at
Community Colleges in the Post-COVID-19 Era. Community College Journal of Research and Prac-
tice. Routledge, vol. 44(10–12), pp. 914–920.
Iatrellis, O., Kameas, A., & Fitsilis, P. (2017). Academic advising systems: A systematic literature review
of empirical evidence. Education Sciences, vol. 7(4).
Jiménez-Barreto, J., Rubio, N., & Molinillo, S. (2021). “Find a flight for me, Oscar!” Motivational customer experiences with chatbots. International Journal of Contemporary Hospitality Management, 33(11), 3860–3882.
Junco, R., Mastrodicasa, J. M., Aguiar, A. V., Longnecker, E. M., & Rokkum, J. N. (2016). Impact of
technology-mediated communication on student evaluations of advising. NACADA Journal, 36(2),
54–66.
Kim, J., Merrill, K., Xu, K., & Sellnow, D. D. (2020). My teacher is a machine: Understanding students’
perceptions of AI teaching assistants in online education. International Journal of Human–Computer
Interaction, 36(20), 1902–1911. https://doi.org/10.1080/10447318.2020.1801227.
Kuberkar, S., & Singhal, T. K. (2020). Factors influencing adoption intention of ai powered chatbot for
public transport services within a smart city. International Journal on Emerging Technologies, 11(3),
948–958.
Kuhail, M. A., Katheeri, A., Negreiros, H., Seffah, J., A., & Alfandi, O. (2022). Engaging students with
a Chatbot-Based academic advising system. International Journal of Human–Computer Interaction
(pp. 1–27). Taylor & Francis.
Lee, S. Y., & Choi, J. (2017). Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies, 103, 95–105.
Lim, M. S., Ho, S. B., & Chai, I. (2021). Design and functionality of a university academic advisor chatbot
as an early intervention to improve students’ academic performance. Computational science and
technology (pp. 167–178). Springer.
Liu, C., & Ammigan, R. (2022). Humanizing the academic advising experience with technology: An inte-
grative review. STAR Scholar Book Series, pp. 185–202.
Lorenz, G. V., & Buhtz, K. (2017). Social influence in technology adoption research. a literature review
and research agenda.
Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service robot integration willingness
scale. International Journal of Hospitality Management, 80(January), 36–51.
Meet, R. K., Kala, D., & Al-Adwan, A. S. (2022). Exploring factors affecting the adoption of MOOC
in Generation Z using extended UTAUT2 model. Education and Information Technologies, 27(7),
10261–10283. https://doi.org/10.1007/s10639-022-11052-1
Mohamad Suhaili, S., Salim, N., & Jambli, M. N. (2021). Service chatbots: A systematic review. Expert
Systems with Applications. Elsevier Ltd, vol. 184(July 2020), p. 115461.
Moraes, C. L. (2021). Chatbot as a Learning Assistant: Factors influencing adoption and recommenda-
tion. Information Management School.
Moran, M. (2022). Chatbot statistics. Startup Bonsai [online]. [Accessed 29 October 2022]. Available at: https://startupbonsai.com/chatbot-statistics/.
Nguyen, Q. N., Sidorova, A., & Torres, R. (2022). User interactions with chatbot interfaces vs. Menu-
based interfaces: An empirical study. Computers in Human Behavior. Elsevier Ltd, vol. 128(Novem-
ber 2021), p. 107093.


Nikou, S. A., & Economides, A. A. (2017). Mobile-Based Assessment: Integrating acceptance and moti-
vational factors into a combined model of Self-Determination Theory and Technology Acceptance.
Computers in Human Behavior, vol. 68, pp. 83–95.
Noaman, A. Y., & Ahmed, F. F. (2015). A new framework for e-academic advising. Procedia Computer Science, 65, 358–367.
Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Com-
puters and Education: Artificial Intelligence, 2, 100033. https://doi.org/10.1016/j.caeai.2021.100033.
Patil, K., & Kulkarni, M. S. (2019). Artificial intelligence in financial services: Customer chatbot advisor
adoption. Int J Innov Technol Explor Eng, 9(1), 4296–4303.
Pedrotti, M., & Nistor, N. (2016). In K. Verbert, M. Sharples, & T. Klobučar (Eds.), User motivation and
Technology Acceptance in Online Learning Environments BT - Adaptive and Adaptable Learning
(pp. 472–477). Cham: Springer International Publishing.
Pillai, R., & Sivathanu, B. (2020). Adoption of AI-based chatbots for hospitality and tourism. International
Journal of Contemporary Hospitality Management, 32(10), 3199–3226.
Ragheb, M. A., Tantawi, P., Farouk, N., & Hatata, A. (2022). Investigating the acceptance of applying
chat-bot (Artificial intelligence) technology among higher education students in Egypt. International
Journal of Higher Education Management, 08(02), 1–14.
Rapp, A., Curti, L., & Boldi, A. (2021). The human side of human-chatbot interaction: A systematic lit-
erature review of ten years of research on text-based chatbots. International Journal of Human Com-
puter Studies. Elsevier Ltd, vol. 151(March), p. 102630.
Rietz, T., Benke, I., & Maedche, A. (2019). The impact of anthropomorphic and functional Chatbot Design
features in enterprise collaboration Systems on user Acceptance. Wirtschaftsinformatik, (February),
pp. 1642–1656.
Robbins, R. (2020). Engaging gen zers through academic advising. Academic Advising Today, vol. 43(2).
Sandu, N., & Gide, E. (2019). Adoption of AI-chatbots to enhance student learning experience in higher
education in india. 2019 18th International Conference on Information Technology Based Higher
Education and Training, ITHET 2019. IEEE, pp. 1–5.
Saner, R. (2018). Chatbots:Changing User Needs and Motivations. The Expert Negotiator, pp. 69–84.
Sawang, S., Sun, Y., & Salim, S. A. (2014). It's not only what I think but what they think! The moderating effect of social norms. Computers & Education, 76, 182–189.
Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adop-
tion. Journal of Business Research, vol. 115(February 2019), pp. 14–24.
Sánchez-Prieto, J. C., Cruz-Benito, J., Therón Sánchez, R., & García-Peñalvo, F. J. (2020). Assessed by
machines: Development of a TAM-based tool to measure AI-based assessment acceptance among
students. International Journal of Interactive Multimedia and Artificial Intelligence, 6(4), 80. https://
doi.org/10.9781/ijimai.2020.11.009
Solomon, M. R., Surprenant, C., Czepiel, J. A., & Gutman, E. G. (1985). A role theory perspective on dyadic interactions: The service encounter. Journal of Marketing, 49(1), 99–111.
Sorebo, O., Halvari, H., Gulli, V. F., & Kristiansen, R. (2009). The role of self-determination theory in
explaining teachers’ motivation to continue to use e-learning technology. Computers & Education,
53(4), 1177–1187.
Svikhnushina, E., & Pu, P. (2022). PEACE: A model of key social and emotional qualities of conversational chatbots. ACM Transactions on Interactive Intelligent Systems.
van Pinxteren, M. M. E., Wetzels, R. W. H., Rüger, J., Pluymaekers, M., & Wetzels, M. (2019). Trust
in humanoid robots: Implications for services marketing. Journal of Services Marketing, 33(4),
507–518.
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new
world: Service robots in the frontline. Journal of Service Management, 29(5) 907–931. https://doi.
org/10.1108/JOSM-04-2018-0119.
Young-Jones, A. D., Burt, T. D., Dixon, S., & Hawthorne, M. J. (2013). Academic advising: Does it really impact student success? Quality Assurance in Education, 21(1), 7–19.


Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and appli-
cable law.

Authors and Affiliations

Ghazala Bilquise1 · Samar Ibrahim2 · Sa’Ed M. Salhieh3

Ghazala Bilquise
gbilquise@hct.ac.ae
Samar Ibrahim
sibrahim@adjunct.aud.edu
Sa’Ed M. Salhieh
saed.salhieh@buid.ac.ae

1 Department of Computer Information Science, Higher Colleges of Technology, Dubai, UAE
2 School of Arts and Science, American University in Dubai, Dubai, UAE
3 College of Engineering and IT, The British University in Dubai, Dubai, UAE
