European Journal of Information Systems

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/tjis20

The role of user perceptions of intelligence, anthropomorphism, and self-extension on continuance of use of personal intelligent agents

Sara Moussawi, Marios Koufaris & Raquel Benbunan-Fich

To cite this article: Sara Moussawi, Marios Koufaris & Raquel Benbunan-Fich (2022): The role of user perceptions of intelligence, anthropomorphism, and self-extension on continuance of use of personal intelligent agents, European Journal of Information Systems, DOI: 10.1080/0960085X.2021.2018365

To link to this article: https://doi.org/10.1080/0960085X.2021.2018365

Published online: 26 Jan 2022.


Full Terms & Conditions of access and use can be found at https://www.tandfonline.com/action/journalInformation?journalCode=tjis20

EMPIRICAL RESEARCH

The role of user perceptions of intelligence, anthropomorphism, and self-extension on continuance of use of personal intelligent agents

Sara Moussawi (a), Marios Koufaris (b) and Raquel Benbunan-Fich (b)

(a) Dietrich College of Humanities and Social Sciences, Heinz College of Information Systems and Public Policy, Information Systems, Carnegie Mellon University, Pittsburgh, PA, USA; (b) Paul H. Chook Department of Information Systems and Statistics, Zicklin School of Business, Baruch College, One Bernard Baruch Way, New York, NY, USA

ABSTRACT
Personal Intelligent Agents (PIAs), such as Apple’s Siri and Amazon’s Alexa, are different from traditional information systems. They possess unique design features and are increasingly available through various technological devices. Due to PIAs’ relative novelty, little is known about the determinants of their continued use. An investigation into PIAs’ characteristics and their impact on users’ post-adoption evaluations is expected to have theoretical and practical implications for PIA design and sustained usage. Our research model integrates perceptions of intelligence, anthropomorphism, and self-extension into the unified model of information technology continuance. Our findings show the key role of perceived intelligence of the PIA on continuance intention and indicate that hedonic perceptions of the agent become less important during post-adoption. Our results also highlight the role of perceived ownership and personalisation as antecedents of perceived self-extension.

ARTICLE HISTORY
Received 27 January 2020; Accepted 7 December 2021

KEYWORDS
Intelligent agents; conversational intelligent agents; personal intelligent agents; continued use; perceived intelligence; perceived anthropomorphism; self-extension; the unified model of IT continuance; mastery; personalization; ownership

1. Introduction

Personal Intelligent Agents (PIAs) have quickly become very popular with users (Liao et al., 2019; Welch, 2018) and very important for tech companies’ strategic plans. These systems – which include Apple’s Siri, Google’s Assistant, and Amazon’s Alexa – are backed by advances in artificial intelligence. PIAs are designed to exhibit general intelligence and to think and act as much as possible like a human. As a result, they have assumed the role of user companions and act on the users’ behalf to help manage everyday activities (Olsen & Malizia, 2011). Such extensive interaction with PIAs will result in novel effects on users, ranging from impacting their general perceptions of the technology to influencing their sense of self.

In this study, we focus on the factors that influence the users’ decision to continue to use PIAs after initial adoption. We specifically aim to answer the following research question: How do perceptions of a PIA’s characteristics influence the user’s intention to continue using the agent? Examining continuance of use of PIAs is valuable for several reasons. First, these agents are the first widely used human-like intelligent systems, and their rates of use, as well as the types of tasks for which they are employed, will only continue to expand. They herald a new era where users will employ different intelligent applications to perform various tasks with different levels of complexity. These agents are highly customisable and use natural language to interact with the user. They also possess novel sensory and feedback mechanisms and advanced learning algorithms. Second, the use of intelligent agents is rapidly expanding from the personal to the organisational level (e.g., Alexa for Business). Third, prior research shows that continuance of use increases loyalty and lifetime value for both the technology user and the technology provider organisation (Bhattacherjee, 2001b). Understanding the factors that increase continued use of PIAs therefore has important implications for individual and institutional users, as well as for technology companies.

PIAs are worth investigating because they possess inherently unique characteristics. These systems are aware of, and responsive to, their environment and are sociable with users. They are proactive and goal- and self-directed. Moreover, they can have human-like characteristics, such as a name, a specific accent, and a sense of humour (Knote et al., 2019; Moussawi et al., 2021). PIAs are commonly accessed through a device (e.g., smartphone or home device) and are not equipped with an animated avatar or physical body. In addition, we believe that the PIAs’ existence within the user’s immediate personal space and their ability to extend the user’s capabilities may lead users to incorporate these agents within their sense of self.

For this study, we integrate PIAs’ unique characteristics and corresponding user perceptions into the unified model of information technology (IT) continuance to understand user continuance intentions
CONTACT Sara Moussawi smoussaw@andrew.cmu.edu


© Operational Research Society 2022.
(Bhattacherjee & Lin, 2015). We specifically investigate the effect of perceived intelligence, perceived anthropomorphism, and perceived self-extension on PIA users’ continuance of use decisions. We propose and test three antecedents for perceived self-extension: perceived ownership, perceived mastery, and perceived personalisation. Using data from a cross-sectional study of PIA users, we empirically test and verify our expanded model of usage continuance.

2. Theoretical foundations

2.1. Personal intelligent agents

Software agents are designed to act on behalf of the user to find and filter information, automate complex tasks, or collaborate with other agents to solve problems (Czibula et al., 2009). They are considered intelligent when they have built-in knowledge and a capability to learn; these features enable them to be autonomous and continuously aware of, and adaptable to, their dynamic environment. PIAs (or personal assistant agents, smart assistants, personal assistants, and conversational agents, as they are referred to in the literature) learn and adapt to the user’s preferences (Czibula et al., 2009). We define a PIA as software that employs intelligent behaviour and uses natural language processing and production functionalities to assist the user (March et al., 2000; Moussawi et al., 2021; Russell & Norvig, 2010). Based on the literature and results of a PwC Consumer Intelligence Voice Assistant Survey (McCaffrey et al., 2018), we classify PIA uses into three broad categories: informational (e.g., answering a question based on information available online), interpersonal (e.g., contacting someone via a phone call or text message), and transactional (e.g., launching an app, playing a song, etc.).

Relevant work has explored various dimensions related to PIAs’ use, including personification, appropriateness of responses, and implications for security and privacy. In an adoption context, recent studies found that performance and effort expectancies, hedonic motivation, habit, likeability, perceived anthropomorphism, and perceived intelligence positively influence users’ intention to adopt PIAs (Moussawi et al., 2021; Wagner et al., 2019). Prior research found that anthropomorphism led to more favourable user perceptions of a PIA, including an attenuating effect for its potentially intrusive features (Benlian et al., 2019; Purington et al., 2017). In Appendix A, we present a review of recent research on PIAs and intelligent conversational agents (see Table A1).

2.2. The unified model of IT continuance

The unified model of IT continuance (Bhattacherjee & Lin, 2015) proposes a set of factors that predict continuance of use. Those factors include perceptions of usefulness, satisfaction, positive disconfirmation of expectations, and subjective norms. The model assumes that pre-usage expectations change after adopting and using the system. The dissonance between pre-usage expectations and post-usage observed performance is captured by the disconfirmation construct. Specifically, positive disconfirmation of expectations occurs when technology performance exceeds initial expectations, and it is positively associated with satisfaction and perceived usefulness. Positive disconfirmation is also associated with positive attitudes and perceptions, and it is a relevant construct in a post-adoption context.

Additionally, the unified model of IT continuance proposes that perceived usefulness, satisfaction with use, and subjective norms positively influence the users’ continuance intention (Bhattacherjee, 2001b; Venkatesh et al., 2011). Perceived usefulness is a behavioural belief based on the utilitarian side of system use. Satisfaction is an attitude that captures the affective side of system use. Subjective norms, also an attitude, represent the social normative influences by peers, colleagues, or friends that shape the user’s intentions to use the system. For PIAs, the unified model of IT continuance (Bhattacherjee & Lin, 2015) can help explain what determines a user’s decision to continue using a PIA after initial adoption and use. Notably, this model treats all technologies alike, regardless of specific design characteristics. As a result, the model may not capture the extent to which the defining features of the technology under consideration influence the user’s decision to continue using it. Therefore, to adapt the model to PIAs, we include user perceptions of its two unique characteristics – namely, perceived intelligence and perceived anthropomorphism.

Additionally, the unified model of IT continuance has been characterised as mostly extrinsic because it overlooks the role of intrinsic motivations. In individual technologies whose use is discretionary, such as PIAs, the model needs to include a hedonic antecedent (perceived enjoyment) as an additional predictor of continuance intention. The model should also incorporate a construct that captures the extent to which technology design contributes to fulfilling its main function (Lowry et al., 2015). Given the function of PIAs to work in the user’s personal space and assist with his/her personal tasks, we conceptualise this construct as perceived self-extension. The next sections define the new constructs used to extend the model.

2.3. Perceived intelligence

March et al. (2000) defined an intelligent agent as “a piece of software that acts intelligently and in the place of a human to perform a given task” that is characterised by autonomy, adaptability, mobility, and communication. Additionally, an agent is personalised if it
operates within a specific user’s context and can formulate more precise queries when interacting with the user (March et al., 2000). Intelligence has been defined in objective and subjective ways in the fields of artificial intelligence, psychology, and human-robot interaction (Krening & Feigh, 2018; Legg & Hutter, 2007). Objectively, a robot can be considered intelligent if it is capable of learning, adapting, solving problems, and achieving its goal while being aware of its environment (Legg & Hutter, 2007). Subjectively, users may perceive an agent as intelligent when it demonstrates characteristics of compliance, responsiveness, effort, ability to learn from feedback, transparency, and robustness (Krening & Feigh, 2018). Other dimensions used for measuring perceived intelligence include knowledge, competence, intelligence, foolishness, and responsibility (Bartneck et al., 2009; Kiesler et al., 1996).

For PIAs, our definition of intelligence aligns with the IS and AI literatures’ view and encompasses dimensions of autonomy; problem solving; task completion; awareness of the environment; learning and adapting to change; speed and flexibility; and communication capabilities (March et al., 2000; Moussawi & Benbunan-Fich, 2021; Moussawi et al., 2021; Russell & Norvig, 2010). More specifically, the user will perceive the agent as intelligent when it is aware of its surroundings; is able to respond to the user’s requests without continuous intervention; is capable of adapting to and learning from every interaction or new piece of knowledge; and responds to requests effectively and efficiently. Therefore, we define perceived intelligence as the perception that the PIA is able to operate in an autonomous and goal-oriented way, adapt to its environment, communicate with the human user using natural language, and deliver efficient output (Moussawi et al., 2021).

2.4. Perceived anthropomorphism

The concept of anthropomorphism has been widely investigated in IS and other fields (Epley et al., 2007; Moussawi et al., 2021; Pfeuffer et al., 2019). People can anthropomorphise – i.e., attribute distinctly human capacities to – non-human entities and objects possessing human-like intention, cognition, emotions, or features (Duffy, 2003; Epley et al., 2008, 2007; Fournier & Alvarez, 2012). Any object can be anthropomorphised, including invisible entities and symbolic concepts (Aaker, 1997). Research has explored various characteristics that impact anthropomorphic perceptions, including facial expressions, speech parameters, personality dimensions, and vividness (Hess et al., 2009; Link et al., 2001; Nass & Steuer, 1993). When interacting with computer applications, users were found to attribute gender and ethnicity stereotypes and exhibit social behaviours (Nass & Moon, 2000). Additionally, interactive avatars, humanoid embodiment, and voice-based communication were found to positively influence the user’s experience when interacting with a system (Qiu & Benbasat, 2009; Wang et al., 2007).

Perceived anthropomorphism itself has been defined and measured in various ways (Bartneck et al., 2009; Kiesler et al., 2008; Moussawi & Benbunan-Fich, 2021; Moussawi et al., 2021). Research on human-robot interaction has focused on the robot’s appearance and movements – i.e., the robot’s fakeness, artificialness, consciousness, and movement description (e.g., fake, machinelike, or elegant) (Bartneck et al., 2009). Other work has investigated robots’ human- and machine-likeness as well as sociability (Kiesler et al., 2008, 1996). In social psychology, Waytz et al. (2014) contended that the presence of mental functions in an object (e.g., a car) is an indispensable and satisfactory condition for humanness. Haslam et al.’s (2008) work viewed humanness as a set of attributes that are uniquely or typically human. Features rated high in human uniqueness include civility, openness, and agreeableness. Features ranking high on human nature include agency, warmth, emotionality, and extraversion. We adopt Haslam et al.’s (2008) view and postulate that a user will perceive the PIA based on uniquely human attributes (such as funny, respectful, or fluent) or based on human nature attributes (such as caring, happy, or friendly). For instance, PIAs that communicate with the user by listening and talking back with varying intonations and pitches are likely to appear more human-like.

2.5. Perceived self-extension

Given that the main purpose of a PIA is to act as a user’s personal assistant, its continued use will depend upon its ability to extend the user’s capabilities. We conceptualise the continuous reliance of the user on the PIA via self-extension, i.e., attributing a meaning associated with the self or self-identity to possessed objects (R. Belk, 1988; Sivadas & Machleit, 1994). People who consider objects to be an extension of themselves may use these entities to define themselves; to build a sense of identity; to serve as a reminder of their self-identity and aspirations; or to preserve and refine their self-concept (Mittal, 2006; Schifferstein & Zwartkruis-Pelgrim, 2008). Extant research on self-extension in IS has explored the concept of extended self directly (Clayton et al., 2015) or through (1) closely related concepts, such as emotional attachment and identification (You & Robert, 2017), and embodied perceptions of avatars (You & Sundar, 2013); (2) multidimensional scaling to map users’ extensional associations with technology (Vishwanath & Chen, 2008); and (3) avatar designs
and their relationship with the self as well as forms of attachment to possessions among teens (Kafai et al., 2007; Odom et al., 2011). Findings suggest that extending oneself to include the artefact – such as an embodied robot in collaborative teams or a personal technological artefact like a mobile phone – promotes emotional attachment towards the artefact (Carter et al., 2013; R. W. Belk, 2013; You & Robert, 2017).

Mittal (2006) identifies several mechanisms whereby products become part of oneself: at the product selection stage (through choice and acquisition), during use (by investing resources in learning how to use the product), and through bonding post-acquisition. First, objects are considered part of the self when users choose them and exercise control over them (Kiesler & Kiesler, 2004). For instance, users select objects they want to buy, and through this selection they shape their extended selves (R. Belk, 1988). In a technology context, prior research explored the perceptions of virtual possessions among teenagers by focusing on associations between the self and personal communication technologies such as mobile phones (Kafai et al., 2007; Odom et al., 2011).

Second, objects become part of the self when users learn how to use them. Users are more likely to view products as extensions of themselves when they invest resources (time, money, and effort) in their use. Through this mastery, the owner reinforces the process of constructing the extended self (R. Belk, 1988; Mittal, 2006).

Third, personalisation of a product – by embellishing it with personal symbols or messages or using it in personal spaces – can increase the degree to which users consider it part of their extended selves (Kiesler & Kiesler, 2004). By tailoring products to their personal preferences, users develop what Mittal (2006) calls post-acquisition attachment bonds.

In this study, perceived self-extension captures the level to which users attribute meanings to the PIA that are related to their self or self-identity. Consistent with prior research, we conceptualise the antecedents of perceived self-extension as (a) possession/ownership, with self-based choice and resource investment in acquisition; (b) mastery, with investment in learning how to use the agent; and (c) personalisation, as an indicator of bonding post-acquisition.

3. Hypotheses development

We present the complete research model of our study in Figure 1. In a continuance of use context, the disconfirmation of expectations construct captures the cognitive aspect, while the satisfaction construct captures the affective one. Satisfaction is the summary psychological state that results from the emotions surrounding the disconfirmed expectations and prior IT use experience (Oliver, 1980; Venkatesh et al., 2011). Prior research in IS supports the positive relationship between positive disconfirmation of expectations and satisfaction with the experience (Bhattacherjee, 2001a, 2001b; Bhattacherjee & Lin, 2015; Bhattacherjee & Premkumar, 2004). We believe that following the adoption and use of the PIA, the user will continuously re-evaluate her initial expectations considering the observed performance of the PIA. This re-evaluation will influence the levels of positive disconfirmation regarding the PIA’s performance. Consistent with prior research, we expect that the higher the positive disconfirmation, the higher the satisfaction level.

Figure 1. Research model. Habit, use frequency, and use tenure are controlled for but not shown.

H1: Positive disconfirmation of expectations is positively associated with satisfaction for experienced PIA users.

Prior research has shown that pre-usage usefulness expectations, which are based on second-hand information, tend to be weaker than post-usage usefulness expectations, which are based on the user’s first-hand interaction with the system (Bhattacherjee & Lin,
2015; Bhattacherjee & Premkumar, 2004). In the context of post-adoption, prior work has supported a positive association between positive disconfirmation and perceived usefulness (Bhattacherjee, 2001a, 2001b; Bhattacherjee & Lin, 2015). We expect that following the adoption and use of the PIA, the user will perform new tasks and evaluate her expectations in light of the observed performance of the PIA, which will influence the level of positive disconfirmation. For example, if the user’s expectations of Alexa are exceeded by Alexa’s help with various tasks (i.e., positively disconfirmed), the user’s perception of her own performance (aided by Alexa) is likely to increase. We expect that the more positive the disconfirmation result, the higher the perceptions of usefulness of the PIA.

H2: Positive disconfirmation of expectations is positively associated with perceived usefulness for experienced PIA users.

The relationship between perceived intelligence and usefulness of a PIA has been supported in a pre-adoption context (Moussawi et al., 2021). More generally, with decision-assistive technologies, a user perceives a system to be more useful when it reduces the user’s cognitive load and increases task efficiency (Kamis et al., 2008). Additionally, a system’s output quality is a significant determinant of perceived usefulness of various types of systems (Chismar & Wiley-Patton, 2003; Davis et al., 1992; Hart & Porter, 2004). Therefore, an agent that acts independently and learns from the user’s behaviour while delivering an effectual and goal-directed output is expected to increase the user’s perceived usefulness of it. That is because the system will reduce the user’s mental load and increase her efficiency and effectiveness. We expect that this will be true post-adoption as well, and that users’ perceptions of intelligence will increase their perceptions of the PIA’s usefulness.

H3: Perceived intelligence is positively associated with perceived usefulness for experienced PIA users.

If the user’s perceived performance of the PIA exceeds her expectations, she will experience a positive disconfirmation of expectations (Bhattacherjee & Lin, 2015). Perceived intelligence is expected to increase perceptions of PIAs’ performance following continued use. First, the PIA is expected to anticipate and autonomously carry out tasks. Second, the PIA’s ability to produce the desired result and its adaptive behaviour will ensure satisfactory output quality and efficiency. For two users with a similar level of pre-usage expectations, the higher the perceptions of intelligence, the higher the perceptions of performance during post-adoption, and hence the higher the positive disconfirmation of expectations is expected to be.

H4: Perceived intelligence is positively associated with positive disconfirmation of expectations for experienced PIA users.

Prior research has shown that perceived enjoyment is an important factor shaping users’ behaviour with various systems (Kamis et al., 2008; Koufaris, 2002). When users anthropomorphise an entity, such as a technological application, a relationship forms, which can elevate the quality of and hedonic gains from the experience (Chandler & Schwarz, 2010). For example, prior research has shown that the integration of social cues into a website improved users’ flow, gratification, and stimulation, which boosted hedonic benefits (Wang et al., 2007). We expect a similar relationship between anthropomorphism and enjoyment for PIA users.

H5: Perceived anthropomorphism is positively associated with perceived enjoyment for experienced PIA users.

Alan Turing famously stated that a computer can be considered to have a mind if its interrogator cannot tell that it is not human (Gray et al., 2007; Turing, 1950). In other words, a system’s intelligence and anthropomorphism are tightly linked. Accordingly, researchers have developed anthropomorphism scales partly based on mentalistic notions (Waytz et al., 2014). In our study, however, we separate perceptions of intelligence (i.e., associated with possessing a mind) from perceptions of human-likeness, because we believe that with PIAs, perceptions of one do not always associate with perceptions of the other (Moussawi et al., 2021). We include perceptions of possessing a mind in the perceived intelligence construct and conceptualise anthropomorphism based on typically and uniquely human-like features of a PIA. For example, if Google Assistant presents the user with relevant information based on a deep analysis of the user’s preferences, history, and activity, but does so in a list of links in the Google app, it may be perceived as intelligent but not necessarily anthropomorphic.

H6: Perceived intelligence is positively associated with perceived anthropomorphism for experienced PIA users.

We propose three antecedents for perceived self-extension in a PIA context: perceived ownership, perceived mastery, and perceived personalisation. Since the PIA assists the user in her everyday tasks, it is always available to her within close proximity,
addresses her by name or preferred nickname, and serves her unequivocally. It has access to the user’s information, and over time it can learn her needs and preferences and detect her behaviour patterns. The relationship is reciprocal, however. After adopting the PIA and after a period of repeated use, the user can also learn how to interact with the PIA better and gain mastery over it. Over time, the user can have better control over the agent as she learns through trial and error how to properly interact with the agent and make best use of its capabilities. Additionally, since the PIA is available through a device or an account that the user owns, the user has sole control over use of the PIA.

We expect a perception of ownership of the PIA to develop in the post-adoption phase, and we define it as the degree to which the user perceives that she owns and can control the agent. Relevant research states that the more we believe we possess something, the more it becomes a part of the self (R. Belk, 1988; Kiesler & Kiesler, 2004; Prelinger, 1959). The level of control a user has over her PIA is akin to the control one has over a body part to perform a function (e.g., to move one’s legs to walk). The user can ask the PIA to perform any of its possible tasks (informational, interpersonal, or transactional) at any time. She can launch it or stop it from performing an action (e.g., playing a game, searching the internet, reading the news or a book) at any time.

H7: Perceived ownership is positively associated with perceived self-extension for experienced PIA users.

Another process through which objects can become a part of the self is appropriation, or mastering the use of an object (R. Belk, 1988; Mittal, 2006). In a PIA context, we define perceived mastery as the degree to which the user perceives herself to be proficient in using the PIA. Once the user masters the PIA, and the PIA remains in close proximity, its use can become natural and automatic. In that way, when perceived mastery is high, the PIA begins to resemble a physical appendage.

H8: Perceived mastery is positively associated with perceived self-extension for experienced PIA users.

A third process through which objects can become a part of the self is personalisation. We define perceived personalisation as the level to which a user believes she is able to tailor the PIA to her own preferences. We propose that a period of repeated use of a PIA will lead the user to feel that she is able to personalise the agent due to the agent’s ability to customise its behaviour and learn from user interactions. Research has found empirical support for the positive causal association between personalisation and self-extension. When an object is personalised, it starts to symbolise, and represent relevant aspects of, the user (Burris & Rempel, 2004; Kiesler & Kiesler, 2004). For instance, a Siri user can customise the PIA by changing its gender, language, and dialect, capturing relevant aspects of herself.

H9: Perceived personalisation is positively associated with perceived self-extension for experienced PIA users.

The self-extension literature indicates that when users perceive an object as an extension of themselves, they also perceive it as extending their capabilities (R. Belk, 1988; Kiesler & Kiesler, 2004; Mittal, 2006). Since PIAs assist users with various tasks, perceptions of extended self can allow the user to be more efficient and in control and can impact the user’s perception of her performance. For instance, a PIA that assists a parent with setting timers or reminders to ease the burden of juggling multiple household tasks provides the user with a functional extension of herself. These perceptions of extended self are shaped by increased mastery and control over the agent over time. For experienced users, PIAs may provide functional extensions that amplify the self. Since the user will have increased capabilities (those of the PIA that are associated with the self now) and will need less time to complete her everyday tasks, she will perceive the PIA as more useful.

H10: Perceived self-extension is positively associated with perceived usefulness for experienced PIA users.

Research exploring the association between the consumer’s self and product meaning has proposed an association between the diffuse self (i.e., the part of the self that strives for hedonic satisfaction) and enjoyment that is achieved through product familiarity and sensory and aesthetic pleasures (Lapsley & Power, 2012; Schifferstein & Zwartkruis-Pelgrim, 2008). We expect that when the user considers the PIA to be a part of her extended self, she will associate it with her diffuse and private selves (i.e., the parts of the self that strive for individual achievement). The PIA’s connection with the user’s diffuse self is primarily based on the PIA’s familiarity to the user as well as sensory pleasures, such as the PIA’s ability to sense the user’s presence and detect her voice despite ambient noise; an understanding of multi-finger gestures or specific commands; and an appealing design with 3D effects or a soothing voice. The association with the private self is due to the agent’s ability to extend the user’s abilities (see H10). Therefore, we propose that when users interact with an agent to which they are
EUROPEAN JOURNAL OF INFORMATION SYSTEMS 7
already accustomed and that they consider a part of themselves, they will have a pleasurable and enjoyable usage experience.

H11: Perceived self-extension is positively associated with perceived enjoyment for experienced PIA users.

Since satisfaction is based on prior and current use of the PIA, it is an important factor in determining continuance of use intention. Prior research studies supported the positive association between satisfaction with use and continuance intention (Bhattacherjee, 2001a, 2001b; Bhattacherjee & Lin, 2015; Bhattacherjee & Premkumar, 2004). In line with the unified model of IT continuance and prior research findings, we propose that when the satisfaction level increases, so does the intention to continue to use the PIA.

H12: Satisfaction with the PIA is positively associated with the intention to continue to use for experienced PIA users.

Prior research empirically supported the relationship between perceived usefulness and continuance intention (Bhattacherjee & Lin, 2015; Limayem et al., 2007; Venkatesh et al., 2011). Consistent with the unified model of IT continuance, we expect that if a PIA is perceived as helping the user conduct her daily activities (e.g., keeping up with a schedule, completing tasks, looking up information, playing music) in a more efficient and effective manner, this perception will increase continuance of use intention.

H13: Perceived usefulness of the PIA is positively associated with the intention to continue to use for experienced PIA users.

Prior research in IS found empirical support for the relationship between enjoyment and intention to use in various contexts during pre- (Kamis et al., 2008; Koufaris, 2002; Van der Heijden, 2004) and post-adoption (Thong et al., 2006). We expect that PIA users will enjoy interacting with an intelligent and human-like PIA, and that this pleasurable experience will be a strong determinant of their intention to continue to use the system.

H14: Perceived enjoyment is positively related to the intention to continue to use for experienced PIA users.

The positive association between subjective norms and behavioural intention has been supported by several studies on IS use (Bhattacherjee & Lin, 2015; Karahanna et al., 1999). In relation to relevant others, we expect that the user will associate the PIA with her public self, i.e., the facet of the self that strives for social accreditation and self-definition (Lapsley & Power, 2012). The connection with the public self is mainly due to normative pressures. In our study, this implies that PIA users are likely to develop positive intentions towards continuance if they believe that their relevant others (e.g., friends, colleagues, family members) approve of this behaviour.

H15: Subjective norms are positively associated with the intention to continue to use for experienced PIA users.

In line with prior research, we control for habit, frequency of use, and use tenure, since these factors are directly relevant for continuing users (Limayem et al., 2007).

4. Research design

To test our research model, we designed a cross-sectional study with experienced PIA users who have adopted the PIA at some point in the past. Participants were college students from a student pool at a major university in the Northeastern United States. Students are an appropriate sample of experienced PIA users since they are adopting voice assistants at a faster rate than other age groups (McCaffrey et al., 2018). The subject pool consisted of students in several sections of an introductory information systems course, who had the option to participate in research projects to earn course credit (3% of the course grade). We asked participants about their prior experience with PIAs. Participants who used a PIA more than two times in the previous month were directed to our survey. We adapted scales from the literature, as explained in Appendix B, Table B1.

4.1. Sample

252 subjects qualified to participate in this study. Subjects voluntarily chose to participate and received the same amount of course credit upon completion of the questionnaire. We excluded 20 data records due to incomplete responses and used 232 valid and complete records in our data analysis. Participants reported the most frequently used PIAs as Apple's Siri (78%), Google Assistant (16%), Microsoft's Cortana (4%), and Amazon's Echo (2%) (Appendix C, Table C1). About 60% of users were using their agent for more than a year. About 51% of the subjects were 18 to 20 years old, 27% were 21 to 23 years old, and 57% were female.

5. Data analysis

We used SmartPLS (Ringle et al., 2015) to analyse the measurement and structural model. PLS-SEM analysis was more suitable than CB-SEM because this research is an extension of an existing structural theory (Hair et al., 2011).
8 S. MOUSSAWI ET AL.
5.1. Measurement model evaluation

We confirmed convergent validity, discriminant validity, and reliability of all constructs in the model using loadings, cross-loadings, composite reliability, average variance extracted values, and the Fornell-Larcker criterion (Fornell & Larcker, 1981; Hair et al., 2013). We also confirmed the lack of common method variance using the marker variable and Harman's single factor techniques (Lindell & Whitney, 2001; Podsakoff et al., 2003). For more details, please see Appendix C.

5.1.1. Structural model and hypotheses testing

The sample size (232 records) was appropriate for PLS-SEM testing since it was greater than ten times the largest number of structural paths directed at a construct in the model (Hair et al., 2011). The Goodness of Fit (GoF = 0.48 > GoF_large = 0.36) indicated that the model performed well. The cut-off for large effect sizes was 0.36 (Wetzels et al., 2009).

We used the bootstrapping technique to estimate the statistical significance of the path coefficients (Chin, 2010). Figure 2 shows the results of our hypothesis testing. The paths from positive disconfirmation of expectations to satisfaction and to perceived usefulness were supported: H1 (β = 0.73; p = 0.000) and H2 (β = 0.66; p = 0.000). The paths from perceived intelligence to usefulness, positive disconfirmation, and perceived anthropomorphism were also supported: H3 (β = 0.14; p = 0.004), H4 (β = 0.53; p = 0.000), and H6 (β = 0.35; p = 0.000). H5, which hypothesised the relationship between perceived anthropomorphism and enjoyment, was supported (β = 0.34; p = 0.000). Among the factors that shape perceived self-extension, the effects of perceived ownership and personalisation were supported, but the effect of mastery was not supported: H7 (β = 0.43; p = 0.000) and H9 (β = 0.23; p = 0.001). Perceived self-extension was found to increase perceptions of usefulness and enjoyment: H10 (β = 0.12; p = 0.007) and H11 (β = 0.21; p = 0.000). Among the factors that shape continuance intention, the effect of satisfaction and usefulness was supported, but not that of subjective norms or enjoyment: H12 (β = 0.42; p = 0.000) and H13 (β = 0.33; p = 0.000).

Inner variance inflation factor (VIF) values were below 5, indicating no potential collinearity issues among the constructs. As for predictive power, most variables explained moderate (>0.20) to substantial (>0.50) variance in the model with two exceptions: perceived intelligence explained 13% of the variance in anthropomorphism, and perceived anthropomorphism and self-extension explained 20% of the variance in enjoyment. All other R² values ranged from 29% to 63% (Figure 2) (Hair et al., 2016).

Finally, to better understand the relationships in our model, we calculated total effects (see Appendix C, Table C5). All total effects were high and statistically significant, ranging from 0.349 to 0.736.

6. Discussion

Our results supported the hypothesised positive effect of perceived intelligence of a PIA on positive disconfirmation of expectations, perceived usefulness, and perceived anthropomorphism. We also found that perceptions of anthropomorphism can increase the user's intrinsic enjoyment, though enjoyment itself did not significantly impact continuance of use. Additionally, we found support for positive relationships between a user's perception of a PIA as an extension of her self-identity (perceived self-extension) and perceptions of usefulness and enjoyment. Furthermore, perceived self-extension was found to be positively impacted by the user's perceived ownership and personalisation of the PIA, but not by perceived mastery. Our results also showed that post initial adoption, PIA users' satisfaction and usefulness

Figure 2. Results of PLS analysis (*** p < 0.001, ** p < 0.01, * p < 0.05). Control variables: Habit (0.08, ns); Use Frequency (−0.03, ns); Use Tenure (0.06, ns); ns = not significant
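The convergent and discriminant validity checks reported in 5.1 follow standard PLS-SEM practice. As a minimal, illustrative sketch (the loadings, the inter-construct correlation, and the mean R² below are assumed values, not the study's data), the snippet computes average variance extracted (AVE), applies the Fornell-Larcker criterion (the square root of each construct's AVE should exceed its correlations with the other constructs), and evaluates the global Goodness of Fit of Wetzels et al. (2009), GoF = sqrt(mean AVE × mean R²):

```python
import numpy as np

# Hypothetical standardised indicator loadings for two constructs
# (illustrative values only, not the study's measurement data).
loadings = {
    "perceived_intelligence": np.array([0.82, 0.79, 0.85]),
    "anthropomorphism":       np.array([0.88, 0.81, 0.76]),
}

# AVE = mean of the squared standardised loadings of a construct.
ave = {name: float(np.mean(l ** 2)) for name, l in loadings.items()}

# Fornell-Larcker: sqrt(AVE) must exceed the construct's correlations
# with all other constructs (a single assumed correlation here).
r = 0.36  # assumed inter-construct correlation
passes = all(np.sqrt(a) > abs(r) for a in ave.values())

# Global Goodness of Fit (Wetzels et al., 2009): sqrt(mean AVE * mean R^2).
mean_r2 = 0.40  # assumed average R-squared across endogenous constructs
gof = float(np.sqrt(np.mean(list(ave.values())) * mean_r2))
print(ave, passes, round(gof, 2))
```

With these assumed numbers, both constructs satisfy the criterion and the GoF exceeds the 0.36 cut-off for large effect sizes used in the paper.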
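The bootstrapping procedure used for hypothesis testing resamples cases with replacement and re-estimates the model on each resample; the spread of the resampled path coefficients yields standard errors and t-statistics. A toy single-path illustration on simulated data (the true path of 0.5, the seed, and the resample count are assumptions, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated standardised scores: y depends on x with a true path of 0.5.
n = 232  # matches the study's sample size; the data itself is simulated
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)

def path(x, y):
    # OLS slope for a single standardised predictor: cov(x, y) / var(x).
    return float(np.cov(x, y, bias=True)[0, 1] / np.var(x))

# Resample cases with replacement and re-estimate the path each time.
boot = np.array([
    path(x[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])

# Bootstrap t-statistic: original estimate over the bootstrap standard error.
t_stat = path(x, y) / boot.std(ddof=1)
print(round(path(x, y), 3), round(t_stat, 2))
```

A |t| above roughly 1.96 corresponds to p < 0.05 (two-tailed), which is the basis for the significance levels reported in Figure 2.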
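The inner-VIF collinearity check can be reproduced for any set of predictor constructs by regressing each construct on the remaining ones: VIF_j = 1 / (1 − R²_j). A self-contained sketch on simulated construct scores (the correlation structure is assumed, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated standardised scores for three predictor constructs, with a
# moderate correlation (0.5) built in between the first two (assumed).
n = 232
z = rng.standard_normal((n, 3))
z[:, 1] = 0.5 * z[:, 0] + np.sqrt(0.75) * z[:, 1]

def vif(X, j):
    # Regress column j on the other columns; VIF_j = 1 / (1 - R^2_j).
    y = X[:, j]
    A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - (y - A @ beta).var() / y.var()
    return 1.0 / (1.0 - r2)

vifs = [vif(z, j) for j in range(z.shape[1])]
print([round(v, 2) for v in vifs])
```

All three values land well below the cut-off of 5 used in the paper; a construct correlated at 0.5 with another yields a VIF near 1/(1 − 0.25) ≈ 1.33.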
are important predictors of continued use, while subjective norms and enjoyment are not. Finally, we found support for the effect of positive disconfirmation of expectations on both perceived usefulness and satisfaction.

6.1. Theoretical implications

PIAs possess unique characteristics that enable them to interact with users, react to the environment, be proactive in completing tasks, possess personality traits, and communicate using natural language. This study's findings show that in a post-adoption context, users' perceptions of a PIA as an extension of their self-identity play an important role in continuance of use. PIAs exist in the users' personal space and users access them through a non-embodied device. Our findings show that a user can perceive the agent as part of her self-identity and that these perceptions of self-extension are positively related with how much a user enjoys using a PIA and whether she believes the agent makes her work more efficient.

To better understand the factors that shape users' perceptions of self-extension in this context, we investigated three factors: control, personalisation, and mastery. We found that control over or owning the PIA increases users' perceptions of self-extension. One implication of this result is that for a technology to become part of the user's self-identity, it must achieve some of the qualities of a user's physical self (i.e., to be perceived as under the control of and adaptable by the user), much like she can control a limb or modify her voice or external appearance. Our findings also show that the more users can personalise the PIA (e.g., by changing its gender, language, or dialect), the higher their perceptions of self-extension. Though we focused on user-directed customisation in our study, automated personalisation (through continuous adaptation of the PIA) may also impact perceptions of self-extension. This could be examined in future research.

Interestingly, we did not find support for the hypothesis that perceived mastery can increase perceptions of self-extension. One possible explanation is that if the PIA has a low complexity level, users do not need to invest substantial time and effort to master its use for the interaction to be successful. In their post-survey comments, some of our subjects indicated how easy their PIAs were to use, providing some support for this explanation. Alternatively, the pace at which the underlying algorithms learn and change, and possibly the resulting increase in the predetermined set of commands, could make using the PIA automatic and frictionless, thereby reducing the users' perception that they have mastered the use of the PIA.

Prior literature on self-extension in IS indicated the presence of an extended self with one's mobile phone, personal email, avatars, or Facebook posts (Clayton et al., 2015; Kafai et al., 2007; Odom et al., 2011; Vishwanath & Chen, 2008). In line with these findings, our results show that users can indeed report high levels of perceived self-extension when it comes to the PIAs they are using. However, PIAs are significantly different from the technologies explored in prior literature because they are more complex and more proactive and interactive (as opposed to the more passive nature of something like an email or an avatar). The fact that PIAs exhibit significant anthropomorphic characteristics, as well, means that their relationship to a user's self-identity may be different than what has been found in prior research for other technological artefacts. This is an important finding given the unique nature of PIAs.

Our study also found that, in a post-adoption context, perceptions of intelligence and anthropomorphism are related but can exist independently for PIAs and similar agents. More specifically, in technological artefacts, perceptions of mind and perceptions of anthropomorphism (i.e., human-likeness) do not always overlap. An intelligent agent, for example, can be perceived as intelligent but not as anthropomorphic (e.g., a very capable and non-personalised chatbot agent) and vice versa (e.g., an interactive voice assistant with a personality and a pre-defined set of questions). This is a different approach than the one adopted in prior research in the psychology literature, which conflated the two constructs by attributing anthropomorphic notions to objects with cognition and a human-like mind (Waytz et al., 2014). For PIAs, the separate but related nature of the two constructs was supported in a prior study in a pre-adoption context (Moussawi et al., 2021) and was found to hold in the present study in a post-adoption context.

Another key aspect of our findings is that when it comes to continuance of use of PIAs, the effect of utilitarian perceptions is significant, while that of hedonic perceptions seems to wear off after initial adoption. First, our findings show that perceptions of intelligence not only shape perceptions of anthropomorphism, but also impact the user's cognitive attitudes and her evaluation of the benefits achieved from using the system. Specifically, we find that positive disconfirmation of expectations appears to be an important factor for continuance of use in a post-adoption context through its positive effect on user satisfaction and perceived usefulness. In other words, our results imply that users engage in a continuous evaluation of the PIA, using their prior expectations as a benchmark, and that the mechanism through which this evaluation affects their usage decisions is utilitarian and concerns their overall satisfaction and their general sense of efficiency. Second, we find that perceived anthropomorphism can increase perceptions of enjoyment, but enjoyment itself does not appear to
affect users' continuance of use intentions. Research has found that perceived anthropomorphism plays an important role in users' initial acceptance of PIAs (Moussawi et al., 2021; Wagner et al., 2019). Prior work has also found that perceptions of anthropomorphism can help reduce the negative impact of certain technology features (Benlian et al., 2019), while positively impacting the user's interaction with the system by influencing their affect (Pfeuffer et al., 2019). However, our results seem to indicate that what is important to PIA users can shift as they move from initial adoption to continued use. After the initial novelty effect wears off, they may still enjoy using a PIA (our data did not show a reduction in perceived enjoyment overall), but enjoyment is no longer an important factor in their decision to continue using it. Instead, users shift to a utilitarian evaluation model, where satisfaction with use and perceived usefulness are the only significant factors.

An alternative explanation for the lack of significance in the relationship between perceived enjoyment and continuance of use intention may be attributed to our conceptualisation of enjoyment in terms of a pleasurable interaction. It is plausible that the satisfaction with use construct captured a hedonic aspect of the experience (with the two items containing the words "pleased" and "delighted") and weakened the potential effect of perceived enjoyment, rendering it not significant. This could explain the discrepancy between our results and those of prior studies on other systems, different from PIAs, that found a role for perceived enjoyment in shaping continuance of use intentions (e.g., Lee & Tsai, 2010; Thong et al., 2006). However, given the unique nature of PIAs, a more likely explanation relates to the shift to a utilitarian evaluation, though more research is necessary to confirm this explanation.

Additionally, the relationship between subjective norms and intention to continue to use was not supported. In a work setting (Bhattacherjee & Lin, 2015), influence from colleagues, managers, and various other normative pressures is strong. Since the PIA is mostly used to complete personal tasks in a voluntary and individual context, the normative pressures can be lower.

6.2. Practical implications

Although perceived anthropomorphism and perceived enjoyment may affect users during pre-adoption, our results indicate that these factors have a limited effect in a post-adoption context. Since the effect of perceived intelligence and usefulness seems to be consistent in pre- and post-adoption, designers need to focus on system features that increase those perceptions. PIAs' features should ensure that they exhibit high autonomy and pro-activeness; possess very good natural language production and processing; and use effective knowledge models.

Our results also indicate that users will consider the PIA as a part of their self-identity when they feel they have ownership over it and can customise it. This suggests that PIA design should enable user customisation (e.g., relating to the PIA's name, voice, gender) as well as a user interface that affords the user ownership over the agent's reactions and behaviour (e.g., relating to specific buttons or commands that control access to the agent or the agent's reaction to different task requests). As PIAs and the devices through which users access them evolve and acquire new smart features or increase their utilisation of user tracking, such evolution could lead to continuous re-evaluations, positive or negative, of PIAs by their users. We believe that this is an interesting question for future research.

7. Limitations

One threat to external validity relates to our choice of participants. We recruited student participants since college-age individuals are integrating PIAs into their life at a faster pace than other age groups (McCaffrey et al., 2018). While targeting this group allowed us to easily recruit experienced users, we caution that other user groups may provide different results. Another threat to external validity is that about 78% of respondents were users of one specific PIA, Apple's Siri. However, we note that 22% of subjects were users of other agents such as Microsoft's Cortana, Amazon's Alexa, and Google's Assistant, which extends the results to various PIAs.

Our research design choices could be a source of limitations. While we chose the cross-sectional approach and focused on intention to continue to use as the dependent variable, a longitudinal approach could shed a different light on the dynamics of PIA use continuance. Despite its limitations, a cross-sectional methodology was adopted in many continuance studies (e.g., Bhattacherjee, 2001a; Ding & Chai, 2015; Kim et al., 2007). Finally, our study investigates users' perceived PIA characteristics rather than specific design features. Future research could investigate which design features are the strongest contributors to perceived intelligence and anthropomorphism.

8. Conclusion

Our research highlights the important role that perceived intelligence plays in a post-adoption context, as well as the fading effect of enjoyment over time when compared to an initial adoption context. It also provides a better understanding of perceptions of PIAs as part of the users' self-identities. Future research is
needed to continue investigating human-like intelligent applications and users' continued interactions with these applications in various contexts.

Acknowledgments

Open Access funding provided by the Qatar National Library.

Disclosure statement

No potential conflict of interest was reported by the author(s).

ORCID

Raquel Benbunan-Fich http://orcid.org/0000-0002-6358-9161

References

Aaker, J. L. (1997). Dimensions of brand personality. Journal of Marketing Research, 34(3), 347–356. https://doi.org/10.1177/002224379703400304
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81. https://doi.org/10.1007/s12369-008-0001-3
Belk, R. W. (2013). Extended self in a digital world. Journal of Consumer Research, 40(3), 477–500. https://doi.org/10.1086/671052
Belk, R. (1988). Possessions and the extended self. Journal of Consumer Research, 15(2), 139. https://doi.org/10.1086/209154
Benlian, A., Klumpe, J., & Hinz, O. (2019). Mitigating the intrusive effects of smart home assistants by using anthropomorphic design features: A multimethod investigation. Information Systems Journal, 30(6), 1010–1042. https://doi.org/10.1111/isj.12243
Bhattacherjee, A., & Lin, C.-P. (2015). A unified model of IT continuance: Three complementary perspectives and crossover effects. European Journal of Information Systems, 24(4), 364–373. https://doi.org/10.1057/ejis.2013.36
Bhattacherjee, A., & Premkumar, P. (2004). Understanding changes in belief and attitude toward information technology usage: A theoretical model and longitudinal test. MIS Quarterly, 28(2), 229–254. https://doi.org/10.2307/25148634
Bhattacherjee, A. (2001a). An empirical analysis of the antecedents of electronic commerce service continuance. Decision Support Systems, 32(2), 201–214. https://doi.org/10.1016/S0167-9236(01)00111-7
Bhattacherjee, A. (2001b). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351–370. https://doi.org/10.2307/3250921
Burris, C. T., & Rempel, J. K. (2004). "It's the end of the world as we know it": Threat and the spatial-symbolic self. Journal of Personality and Social Psychology, 86(1), 19. https://doi.org/10.1037/0022-3514.86.1.19
Carter, M., Grover, V., & Thatcher, J. B. (2013). Mobile devices and the self: Developing the concept of mobile phone identity. In Strategy, adoption, and competitive advantage of mobile services in the global economy (pp. 150–164). IGI Global. https://doi.org/10.4018/978-1-4666-1939-5.ch008
Chandler, J., & Schwarz, N. (2010). Use does not wear ragged the fabric of friendship: Thinking of objects as alive makes people less willing to replace them. Journal of Consumer Psychology, 20(2), 138–145. https://doi.org/10.1016/j.jcps.2009.12.008
Chin, W. W. (2010). Bootstrap cross-validation indices for PLS path model assessment. In Handbook of partial least squares (pp. 83–97). Springer.
Chismar, W. G., & Wiley-Patton, S. (2003). Does the extended technology acceptance model apply to physicians? Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Hawaii, USA: IEEE, 8.
Clayton, R. B., Leshner, G., & Almond, A. (2015). The extended iSelf: The impact of iPhone separation on cognition, emotion, and physiology. Journal of Computer-Mediated Communication, 20(2), 119–135. https://doi.org/10.1111/jcc4.12109
Czibula, G., Guran, A.-M., Czibula, I. G., & Cojocar, G. S. (2009). IPA: An intelligent personal assistant agent for task performance support. 2009 IEEE 5th International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, Romania: IEEE, 31–34.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111–1132. https://doi.org/10.1111/j.1559-1816.1992.tb00945.x
Ding, Y., & Chai, K. H. (2015). Emotions and continued usage of mobile applications. Industrial Management & Data Systems, 115(5), 833–852. https://doi.org/10.1108/IMDS-11-2014-0338
Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3–4), 177–190. https://doi.org/10.1016/S0921-8890(02)00374-3
Epley, N., Waytz, A., Akalis, S., & Cacioppo, J. T. (2008). When we need a human: Motivational determinants of anthropomorphism. Social Cognition, 26(2), 143–155. https://doi.org/10.1521/soco.2008.26.2.143
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864. https://doi.org/10.1037/0033-295X.114.4.864
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Fournier, S., & Alvarez, C. (2012). Brands as relationship partners: Warmth, competence, and in-between. Journal of Consumer Psychology, 22(2), 177–185. https://doi.org/10.1016/j.jcps.2011.10.003
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2013). A primer on partial least squares structural equation modeling (PLS-SEM). Sage.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Sage Publications.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing Theory and Practice, 19(2), 139–152. https://doi.org/10.2753/MTP1069-6679190202
Hart, M., & Porter, G. (2004). The impact of cognitive and other factors on the perceived usefulness of OLAP. The Journal of Computer Information Systems, 45(1), 47.
Haslam, N., Loughnan, S., Kashima, Y., & Bain, P. (2008). Attributing and denying humanness to others. European Review of Social Psychology, 19(1), 55–85. https://doi.org/10.1080/10463280801981645
Hess, T. J., Fuller, M., & Campbell, D. E. (2009). Designing interfaces with social presence: Using vividness and extraversion to create social recommendation agents. Journal of the Association for Information Systems, 10(12), 1. https://doi.org/10.17705/1jais.00216
Kafai, Y. B., Fields, D. A., & Cook, M. (2007). Your second selves: Avatar designs and identity play in a teen virtual world. Proceedings of DiGRA, West London, UK.
Kamis, A., Koufaris, M., & Stern, T. (2008). Using an attribute-based decision support system for user-customized products online: An experimental investigation. MIS Quarterly, 32(1), 159–177. https://doi.org/10.2307/25148832
Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information technology adoption across time: A cross-sectional comparison of pre-adoption and post-adoption beliefs. MIS Quarterly, 23(2), 183–213. https://doi.org/10.2307/249751
Kiesler, S., Powers, A., Fussell, S. R., & Torrey, C. (2008). Anthropomorphic interactions with a robot and robot-like agent. Social Cognition, 26(2), 169–181. https://doi.org/10.1521/soco.2008.26.2.169
Kiesler, S., Sproull, L., & Waters, K. (1996). A prisoner's dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology, 70(1), 47.
Kiesler, T., & Kiesler, S. (2004). My pet rock and me: An experimental exploration of the self extension concept. Advances in Consumer Research, 32.
Kim, H. W., Chan, H. C., & Chan, Y. P. (2007). A balanced thinking–feelings model of information systems continuance. International Journal of Human-Computer Studies, 65(6), 511–525. https://doi.org/10.1016/j.ijhcs.2006.11.009
Knote, R., Janson, A., Söllner, M., & Leimeister, J. M. (2019). Classifying smart personal assistants: An empirical cluster analysis. Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, Hawaii, USA.
Koufaris, M. (2002). Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research, 13(2), 205–223. https://doi.org/10.1287/isre.13.2.205.83
Krening, S., & Feigh, K. M. (2018). Characteristics that influence perceived intelligence in AI design. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 1637–1641. Sage Publications.
Lapsley, D. K., & Power, F. C. (2012). Self, ego, and identity: Integrative approaches. Springer Science & Business Media.
Lee, M. C., & Tsai, T.-R. (2010). What drives people to continue to play online games? An extension of technology model and theory of planned behavior. International Journal of Human–Computer Interaction, 26(6), 601–620. https://doi.org/10.1080/10447311003781318
Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24.
Liao, Y., Vitak, J., Kumar, P., Zimmer, M., & Kritikos, K. (2019). Understanding the role of privacy and trust in intelligent personal assistant adoption. International Conference on Information, Washington, DC: Springer, 102–113.
Limayem, M., Hirt, S. G., & Cheung, C. M. (2007). How habit limits the predictive power of intention: The case of information systems continuance. MIS Quarterly, 31(4), 705–737. https://doi.org/10.2307/25148817
Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.
Link, K. E., Kreuz, R. J., Graesser, A. C., & Group, T. R. (2001). Factors that influence the perception of feedback delivered by a pedagogical agent. International Journal of Speech Technology, 4(2), 145–153. https://doi.org/10.1023/A:1017383528041
Lowry, P. B., Gaskin, J., & Moody, G. D. (2015). Proposing the multi-motive information systems continuance model (MISC) to better explain end-user system evaluations and continuance intentions. Journal of the Association for Information Systems, 16(7), 515–579. https://doi.org/10.17705/1jais.00403
March, S., Hevner, A., & Ram, S. (2000). Research commentary: An agenda for information technology research in heterogeneous and distributed environments. Information Systems Research, 11(4), 327–341. https://doi.org/10.1287/isre.11.4.327.11873
McCaffrey, M. H. P., Hobbs, M., & Wagner, J. (2018). Prepare for the voice revolution. Consumer Intelligence Series, PwC. https://www.pwc.com/us/en/services/consulting/library/consumer-intelligence-series/voice-assistants.html
Mittal, B. (2006). I, me, and mine—How products become consumers' extended selves. Journal of Consumer Behaviour: An International Research Review, 5(6), 550–562. https://doi.org/10.1002/cb.202
Moussawi, S., & Benbunan-Fich, R. (2021). The effect of voice and humour on users' perceptions of personal intelligent agents. Behaviour & Information Technology, 40(15), 1603–1626. https://doi.org/10.1080/0144929X.2020.1772368
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31, 343–364. https://doi.org/10.1007/s12525-020-00411-w
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Nass, C., & Steuer, J. (1993). Voices, boxes, and sources of messages. Human Communication Research, 19(4), 504–527. https://doi.org/10.1111/j.1468-2958.1993.tb00311.x
Odom, W., Zimmerman, J., & Forlizzi, J. (2011). Teenagers and their virtual possessions: Design opportunities and issues. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada: ACM, 1491–1500.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460–469. https://doi.org/10.1177/002224378001700405
Olsen, K. A., & Malizia, A. (2011). Automated personal assistants. Computer, 44(11), 11. https://doi.org/10.1109/MC.2011.329
Pfeuffer, N., Benlian, A., Gimpel, H., & Hinz, O. (2019). Anthropomorphic information systems. Business & Information Systems Engineering, 61(4), 523–533. https://aisel.aisnet.org/bise/vol61/iss4/10
EUROPEAN JOURNAL OF INFORMATION SYSTEMS 13

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879.
Prelinger, E. (1959). Extension and structure of the self. The Journal of Psychology, 47(1), 13–23. https://doi.org/10.1080/00223980.1959.9916303
Purington, A., Taft, J. G., Sannon, S., Bazarova, N. N., & Taylor, S. H. (2017). "Alexa is my new BFF": Social roles, user satisfaction, and personification of the Amazon Echo. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2853–2859. ACM.
Qiu, L., & Benbasat, I. (2009). Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. Journal of Management Information Systems, 25(4), 145–182. https://doi.org/10.2753/MIS0742-1222250405
Ringle, C. M., Wende, S., & Becker, J. M. (2015). SmartPLS 3. Boenningstedt: SmartPLS. http://www.smartpls.com
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
Schifferstein, H. N., & Zwartkruis-Pelgrim, E. P. (2008). Consumer-product attachment: Measurement and design implications. International Journal of Design, 2(3), 1–14.
Sivadas, E., & Machleit, K. A. (1994). A scale to determine the extent of object incorporation in the extended self. Marketing Theory and Applications, 5(1), 143–149.
Thong, J. Y., Hong, S.-J., & Tam, K. Y. (2006). The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance. International Journal of Human-Computer Studies, 64(9), 799–810. https://doi.org/10.1016/j.ijhcs.2006.05.001
Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Van der Heijden, H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28(4), 695–704. https://doi.org/10.2307/25148660
Venkatesh, V., Thong, J. Y., Chan, F. K., Hu, P. J. H., & Brown, S. A. (2011). Extending the two-stage information systems continuance model: Incorporating UTAUT predictors and the role of context. Information Systems Journal, 21(6), 527–555. https://doi.org/10.1111/j.1365-2575.2011.00373.x
Vishwanath, A., & Chen, H. (2008). Personal communication technologies as an extension of the self: A cross-cultural comparison of people's associations with technology and their symbolic proximity with others. Journal of the American Society for Information Science and Technology, 59(11), 1761–1775. https://doi.org/10.1002/asi.20892
Wagner, K., Nimmermann, F., & Schramm-Klein, H. (2019). Is it human? The role of anthropomorphism as a driver for the successful acceptance of digital voice assistants. Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, USA.
Wang, L. C., Baker, J., Wagner, J. A., & Wakefield, K. (2007). Can a retail website be social? Journal of Marketing, 71(3), 143–157. https://doi.org/10.1509/jmkg.71.3.143
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
Welch, C. (2018). Amazon made a special version of Alexa for hotels that put Echo speakers in their rooms. The Verge. https://www.theverge.com/2018/6/19/17476688/amazon-alexa-for-hospitality-announced-hotels-echo
Wetzels, M., Odekerken-Schröder, G., & Van Oppen, C. (2009). Using PLS path modeling for assessing hierarchical construct models: Guidelines and empirical illustration. MIS Quarterly, 33(1), 177–195. https://doi.org/10.2307/20650284
You, S., & Robert, L. (2017). Emotional attachment, performance, and viability in teams collaborating with embodied physical action (EPA) robots. Journal of the Association for Information Systems, 19(5), 377–407. https://doi.org/10.17705/1jais.00496
You, S., & Sundar, S. S. (2013). I feel for my avatar: Embodied perception in VEs. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 3135–3138.
14 S. MOUSSAWI ET AL.

Appendix A

A recent classification study identified several attributes relevant to PIAs, including communication via natural language and sensors; collective intelligence; and embodiment reflected through anthropomorphism and a clear presence of an identifiable entity (Knote et al., 2019). Implementations of PIAs range from general purpose – such as Apple's Siri, Google's Assistant, Amazon's Alexa – to specific purpose, which are tailored to certain tasks or contexts (Mahmood et al., 2018; Santos et al., 2018). Task- and context-based PIAs have been designed for, and are used in, different fields, including sports (iAPERAS), healthcare and mental health (HealthPal, BeWell, StudentLife, PPCare, EMMA), education (Adele), manufacturing, personal communication, driving, nutrition, and the classroom, among others (Ghandeharioun et al., 2019; Mahmood et al., 2018; Tang et al., 2012; Santos et al., 2018; Shaw et al., 1999; Verlic et al., 2005).

A content analysis of Amazon.com reviews for the Echo device and Alexa agent revealed that more personification, i.e., the degree to which the technology is perceived as a person, led to higher star ratings regardless of functionality issues or appropriateness of responses (Purington et al., 2017). Users' personification of their PIA was characterised as mindless politeness (Lopatovska and Williams, 2018), a view in line with the media equation theory (Reeves and Nass, 1996), which states that people unconsciously treat computers as they treat other people. Research has also looked at other PIA issues, including children's interactions with PIAs, smart home integration, and the various implications for security and privacy (Druga et al., 2017; Pradhan et al., 2018; Zeng et al., 2017).

Several recent studies have focused on users' adoption of agents. Prior research using the latest Unified Theory of Acceptance and Use of Technology (UTAUT2) found that performance and effort expectancies, hedonic motivation, habit, and likeability (shaped by perceived sociability, animacy, and humanlike-fit) positively impact users' intention to adopt PIAs (Wagner et al., 2019). Along these lines, another study proposed that performance and effort expectancies, social influence, price value, habit, privacy risk expectancy, trust, experience, and access to a health system shape users' adoption intention of a conversational agent for healthcare services (Laumer et al., 2019). Additionally, perceptions of intelligence and anthropomorphism of the agent were found to influence new users' beliefs and attitudes as well as trust in a PIA (Moussawi et al., 2021). Using parasocial theory to explain adoption, research revealed interpersonal attraction as well as security and privacy risks as important factors impacting PIA adoption (Han and Yang, 2018). In the context of robo-advisory for investment decisions, perceptions of anthropomorphism, triggered by visual and verbal cues as well as personalised anchors, were found to positively influence social presence, which in turn increases investment volumes (Adam et al., 2019). Other studies found that perceived anthropomorphism increases when an agent exhibits socially refined behaviour and has a clear role, increased independence, and a personality (Wagner et al., 2019). Additionally, smart home assistants' anthropomorphic features can diminish the negative impact that intrusive technology features have on users' stress by shaping their feelings of privacy invasion (Benlian et al., 2019). Prior literature has also explored the role of agent design. Features such as preset answers, automatic sentiment analysis, and adaptive responses were found to impact perceptions of empathy, social presence, and satisfaction in a service encounter (Diederich et al., 2019a, 2019b). Research has also explored the role of chatbots in a digital work environment (Lechler et al., 2019) and user interactions with smart personal assistants in a group context (Winkler et al., 2019). Finally, in a judge-advisor system context, conversational agents with female traits appeared to increase users' perceived competence of the agent (Pfeuffer et al., 2019).

Table A1. Relevant research on PIAs and intelligent conversational agents.

Study | Objective | Technology | Research Design
Adam et al. (2019) | Investigated how anthropomorphism and personalised anchors can influence investment decisions for robo-advisors. | Digital automated service | Experiment
Benlian et al. (2019) | Investigated how smart home assistants' anthropomorphic features can attenuate the harmful effects of intrusive technology features on strain by shaping users' feelings of privacy invasion. | Smart home assistants | Mixed methods: online experiment and survey
de Barcelos Silva et al. (2020) | Conducted a systematic literature review of intelligent personal assistants. | Intelligent personal assistants | Literature review
Diederich et al. (2019a) | Explored if preset answer options of text-based conversational agents influence perceived humanness, social presence, and service satisfaction. | Text-based conversational agents | Experiment
Diederich et al. (2019b) | Proposed and evaluated a design of a conversational agent with automatic sentiment analysis and adaptive responses that could emulate empathy in a service encounter. | Conversational agent | Experiment
Druga et al. (2017) | Explored children's perceptions of Amazon Alexa, Google Home, Cozmo and Julie Chatbot in the context of child-agent interaction. | Amazon Alexa, Google Home, Cozmo and Julie Chatbot | Interviews
Han and Yang (2018) | Explored users' adoption intention of intelligent personal assistants drawing on the parasocial relationship theory. | Intelligent personal assistants | Survey
Kiseleva et al. (2016) | Investigated user satisfaction, task completion and effort variations in a number of use scenarios in various contexts. | Voice-controlled intelligent personal assistants | User study with simulated tasks
Knote et al. (2019) | Conducted an empirical cluster analysis to classify smart personal assistants. The study built on a systematic literature review, specified a number of design characteristics and revealed five clusters of assistants. | Smart personal assistants | Literature review and cluster analysis
Kuzminykh et al. (2020) | Identified categories in behavioural and visual perceptions of agents in relation to the anthropomorphism of conversational agents. | Conversational agents | Qualitative multi-phase study
(Continued)
Table A1. (Continued).

Study | Objective | Technology | Research Design
Laumer et al. (2019) | Built on UTAUT2 to investigate users' adoption of conversational agents for healthcare services, specifically for disease diagnosis. | Conversational agents | Semi-structured interviews with 35 potential users
Lechler et al. (2019) | Explored the roles that chatbots play in the exchange of feedback in digital work environments. | Chatbots | Design research
Liao et al. (2019) | Explored the motivations for voice-controlled intelligent personal assistants' adoption and the role that trust and privacy concerns play in this context. | Voice-controlled intelligent personal assistants | Survey
Liu et al. (2017) | Devised a roadmap for natural language processing research in IS based on a literature review and the identification of 12 commonly researched tasks. | Various IT artefacts | Literature review
Lopatovska and Williams (2018) | Investigated users' personification, i.e., anthropomorphism, behaviours with intelligent personal assistants. The study aimed to categorise these behaviours with Amazon's Alexa users. | Amazon Alexa | Qualitative study with users' diaries
Luger and Sellen (2016) | Used the lens of Norman's gulfs of execution and evaluation to explore users' experiences with conversational agents. | Conversational agents | Qualitative/Interviews
Pfeuffer et al. (2019) | Explored gender stereotyping and egocentric bias with cooperative conversational agents, specifically, with judge-advisor systems. | Conversational agents | Experiment
Porcheron et al. (2017) | Examined how conversational agents are integrated and used in casual group settings between friends. | Intelligent personal assistants | Participant-observer/Ethnomethodology
Pradhan et al. (2018) | Examined the accessibility of intelligent personal assistants and how users with disabilities are using these agents. | Amazon Echo | User reviews and interviews
Purington et al. (2017) | Explored users' personification of intelligent agents, specifically Amazon Echo, and its connection to social interactions and user satisfaction. | Amazon Echo | User reviews
Rzepka (2019) | Built on the values and means-end chain theory to better understand users' evaluations of virtual assistants and develop a means-end objectives network for using these agents. | Virtual assistants | Qualitative/Interviews
Sannon et al. (2018) | Examined whether conversational agent's type and user personification influenced users' tendency to disclose sensitive information to the agent. | Conversational agents | Interactive study
Wagner and Schramm-Klein (2019) | Drew on the three-factor theory of anthropomorphism, media equation and uncanny valley to better understand users' relationship with digital voice assistants as well as assistants' functionality, anthropomorphic attributes, and behaviour. | Digital voice assistants | Qualitative/Interviews
Wagner et al. (2019) | Revealed that performance and effort expectancy, hedonic motivation, habit and likeability (shaped by perceived sociability, animacy and humanlike-fit) positively impact users' intention to adopt agents like Siri and Alexa. | Digital voice assistants | Survey
Winkler et al. (2019) | Explored how interactions with smart personal assistants vs. a scripted human facilitator in a complex problem-solving scenario improve group performance and collaboration quality. | Smart personal assistants | Mixed method approach
Yang et al. (2019) | Found that perceived pragmatic and hedonic agent qualities influence users' affective responses to them. | Conversational agents (e.g., Siri, Google Assistant, Alexa and Cortana) | Survey
Zeng et al. (2017) | Explored users' experiences with smart homes and the emerging security and privacy concerns. | Smart home | Semi-structured interviews

References

Adam, M., Toutaoui, J., Pfeuffer, N., & Hinz, O. (2019). Investment decisions with robo-advisors: The role of anthropomorphism and personalized anchors in recommendations.
de Barcelos Silva, A., Gomes, M. M., da Costa, C. A., da Rosa Righi, R., Barbosa, J. L. V., Pessin, G., De Doncker, G., & Federizzi, G. (2020). Intelligent personal assistants: A systematic literature review. Expert Systems with Applications, 147.
Diederich, S., Brendel, A. B., Lichtenberg, S., & Kolbe, L. (2019a). Design for fast request fulfillment or natural interaction? Insights from an experiment with a conversational agent.
Diederich, S., Janssen-Müller, M., Brendel, A. B., & Morana, S. (2019b). Emulating empathetic behavior in online service encounters with sentiment-adaptive responses: Insights from an experiment with a conversational agent.
Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017). "Hey Google is it OK if I eat you?": Initial explorations in child-agent interaction. Proceedings of the 2017 Conference on Interaction Design and Children, 595–600.
Ghandeharioun, A., McDuff, D., Czerwinski, M., & Rowan, K. (2019). EMMA: An emotion-aware wellbeing chatbot. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), IEEE, 1–7.
Han, S., & Yang, H. (2018). Understanding adoption of intelligent personal assistants. Industrial Management & Data Systems.
Kiseleva, J., Williams, K., Jiang, J., Hassan Awadallah, A., Crook, A. C., Zitouni, I., & Anastasakos, T. (2016). Understanding user satisfaction with intelligent assistants. Proceedings of the 2016 ACM Conference on Human Information Interaction and Retrieval, 121–130.
Kuzminykh, A., Sun, J., Govindaraju, N., Avery, J., & Lank, E. (2020). Genie in the bottle: Anthropomorphized perceptions of conversational agents. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.
Laumer, S., Maier, C., & Gubler, F. T. (2019). Chatbot acceptance in healthcare: Explaining user adoption of conversational agents for disease diagnosis.
Lechler, R., Stöckli, E., Rietsche, R., & Uebernickel, F. (2019). Looking beneath the tip of the iceberg: The two-sided nature of chatbots and their roles for digital feedback exchange.
Liu, D., Li, Y., & Thomas, M. A. (2017). A roadmap for natural language processing research in information systems. Proceedings of the 50th Hawaii International Conference on System Sciences.
Lopatovska, I., & Williams, H. (2018). Personification of the Amazon Alexa: BFF or a mindless companion. Proceedings of the 2018 Conference on Human Information Interaction & Retrieval, 265–268.
Luger, E., & Sellen, A. (2016). "Like having a really bad PA": The gulf between user expectation and experience of conversational agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286–5297.
Mahmood, K., Rana, T., & Raza, A. R. (2018). Singular adaptive multi-role intelligent personal assistant (SAM-IPA) for human computer interaction. 2018 12th International Conference on Open Source Systems and Technologies (ICOSST), IEEE, 35–41.
Porcheron, M., Fischer, J. E., & Sharples, S. (2017). "Do animals have accents?": Talking with agents in multi-party conversation. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 207–219.
Pradhan, A., Mehta, K., & Findlater, L. (2018). "Accessibility came by accident": Use of voice-controlled intelligent personal assistants by people with disabilities. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13.
Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
Rzepka, C. (2019). Examining the use of voice assistants: A value-focused thinking approach.
Sannon, S., Stoll, B., DiFranzo, D., Jung, M., & Bazarova, N. N. (2018). How personification and interactivity influence stress-related disclosures to conversational agents. Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing, 285–288.
Santos, J., Rodrigues, J. J., Casal, J., Saleem, K., & Denisov, V. (2018). Intelligent personal assistants based on internet of things approaches. IEEE Systems Journal, 12(2).
Shaw, E., Johnson, W. L., & Ganeshan, R. (1999). Pedagogical agents on the web. Proceedings of the Third Annual Conference on Autonomous Agents, 283–290.
Tang, Y., Wang, S., Chen, Y., & Chen, Z. (2012). PPCare: A personal and pervasive health care system for the elderly. 2012 9th International Conference on Ubiquitous Intelligence and Computing and 9th International Conference on Autonomic and Trusted Computing, IEEE, 935–939.
Verlic, M., Zorman, M., & Mertik, M. (2005). iAPERAS: Intelligent athlete's personal assistant. 18th IEEE Symposium on Computer-Based Medical Systems, IEEE.
Wagner, K., & Schramm-Klein, H. (2019). Alexa, are you human? Investigating anthropomorphism of digital voice assistants: A qualitative approach.
Winkler, R., Neuweiler, M. L., Bittner, E., & Söllner, M. (2019). Hey Alexa, please help us solve this problem! How interactions with smart personal assistants improve group performance.
Yang, X., Aurisicchio, M., & Baxter, W. (2019). Understanding affective experiences with conversational agents. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12.
APPENDIX B

Table B1. Instruments' items.

Perceived intelligence (adapted from Moussawi and Koufaris, 2019)
PInt1: The PIA can complete tasks quickly.
PInt2: The PIA can understand my commands.
PInt3: The PIA can communicate with me in an understandable manner.
PInt4: The PIA can find and process the necessary information for completing its tasks.
PInt5: The PIA is able to provide me with a useful answer.

Perceived anthropomorphism (adapted from Moussawi and Koufaris, 2019)
PAnt1: The PIA is able to speak like a human.
PAnt2: The PIA can be happy.
PAnt3: The PIA is friendly.
PAnt4: The PIA is respectful.
PAnt5: The PIA is funny.
PAnt6: The PIA is caring.

Positive disconfirmation of expectations (adapted from Bhattacherjee and Lin, 2015)
Dis1: Compared to my initial expectations, the ability of the PIA to improve my performance was much better than expected.
Dis2: Compared to my initial expectations, the ability of the PIA to increase my productivity was much better than expected.
Dis3: Compared to my initial expectations, the ability of the PIA to enhance my effectiveness was much better than expected.
Dis4: Compared to my initial expectations, the ability of the PIA to be useful for my everyday life tasks was much better than expected.

Perceived usefulness (adapted from Bhattacherjee, 2001b, and Davis et al., 1989)
PU1: Using the personal intelligent agent improves my daily performance.
PU2: Using the personal intelligent agent increases my productivity.
PU3: Using the personal intelligent agent enhances my daily effectiveness.
PU4: Overall, the personal intelligent agent is useful.
PU5: Using the personal intelligent agent would enable me to complete tasks more quickly.

Satisfaction with use (adapted from Bhattacherjee and Lin, 2015)
Sat1: I feel very satisfied about my overall experience with the personal intelligent agent.
Sat2: I feel very pleased about my overall experience with the personal intelligent agent.
Sat3: I feel very content about my overall experience with the personal intelligent agent.
Sat4: I feel absolutely delighted about my overall experience with the personal intelligent agent.

Continuance intention (adapted from Bhattacherjee and Lin, 2015)
Cont1: I intend to continue using the personal intelligent agent rather than discontinue its use.
Cont2: My intentions are to continue using the personal intelligent agent.
Cont3: I plan to continue using the personal intelligent agent.

Perceived mastery (adapted from Morrison, 2002)
PM1: I have learned how to successfully interact with the personal intelligent agent in an efficient manner.
PM2: I have mastered the use of the personal intelligent agent.
PM3: I have fully developed the appropriate skills and abilities to successfully interact with the personal intelligent agent.

Perceived ownership (adapted from Lee and Chen, 2011)
PO1: This is my personal intelligent agent.
PO2: I feel a very high degree of personal ownership of this personal intelligent agent.
PO3: I sense that I own this personal intelligent agent.
PO4: It is easy for me to think about this personal intelligent agent as mine.

Perceived self-extension (adapted from Sivadas and Machleit, 1994)
Ext1: The personal intelligent agent helps me achieve the identity I want to have.
Ext2: The personal intelligent agent helps me narrow the gap between what I am and what I try to be.
Ext3: The personal intelligent agent is central to my self-identity.
Ext4: The personal intelligent agent is part of who I am.
Ext5: If the personal intelligent agent is taken from me I will feel as if part of myself has been snatched from me.
Ext6: I derive some of my self-identity from the personal intelligent agent.

Perceived personalisation (adapted from Kim and Son, 2009, and Zhou et al., 2012)
PP1: I set up the PIA to use it the way I want to.
PP2: I have put effort into adapting the PIA to meet my needs.
PP3: I have chosen features offered by the PIA to suit my style of use.
PP4: The PIA is personalised in some way.

Perceived enjoyment (adapted from Kamis et al., 2008)
PE1: While using the PIA, I find the interaction enjoyable.
PE2: While using the PIA, I find the interaction interesting.
PE3: While using the PIA, I find the interaction to be fun.

Subjective norms (adapted from Bhattacherjee and Lin, 2015)
SN1: People who influence my behaviour think that I should use the PIA.
SN2: People who are important to me think that I should use the PIA.
SN3: People who influence my behaviour would welcome my use of the IS in my everyday life.

Habit (adapted from Limayem et al., 2007)
Hb1: Using the PIA has become automatic to me.
Hb2: Using the PIA comes naturally to me.
Hb3: When faced with a particular task, using the PIA is an obvious choice for me.
Hb4: I have a habit of using the PIA.
References

Kim, S. S., & Son, J. Y. (2009). Out of dedication or constraint? A dual model of post-adoption phenomena and its empirical test in the context of online services. MIS Quarterly, 49–70.
Lee, Y., & Chen, A. N. (2011). Usability design and psychological ownership of a virtual world. Journal of Management Information Systems, 28(3), 269–308.
Morrison, E. W. (2002). Newcomers' relationships: The role of social network ties during socialization. Academy of Management Journal, 45(6), 1149–1160.
Moussawi, S., & Koufaris, M. (2019). Perceived intelligence and perceived anthropomorphism of personal intelligent agents: Scale development and validation. Proceedings of the 52nd Hawaii International Conference on System Sciences.
Zhou, Z., Fang, Y., Vogel, D. R., Jin, X.-L., & Zhang, X. (2012). Attracted to or locked in? Predicting continuance intention in social virtual world services. Journal of Management Information Systems, 29(1), 273–306.
APPENDIX C

C-1. Sample Statistics

Table C1. Subject demographics. Each entry gives the frequency and percentage of the sample.

Age: 18–20, 119 (51.3%); 21–23, 62 (26.7%); 24–26, 26 (11.2%); 27–29, 6 (2.6%); 30–32, 9 (3.9%); 33–35, 3 (1.3%); 36 or above, 7 (3%).
Gender: Male, 132 (56.9%); Female, 98 (42.2%).
Year: Freshman, 19 (8.2%); Sophomore, 102 (44%); Junior, 94 (40.5%); Senior, 17 (7.3%).
PIA type: Apple's Siri, 180 (77.6%); Microsoft's Cortana, 9 (3.9%); Google Assistant, 38 (16.4%); Amazon's Echo/Alexa, 4 (1.7%); Other, 1 (0.4%).
Times of use on average: Once a month, 16 (6.9%); 2–3 times a month, 64 (27.6%); Once a week, 26 (11.2%); 2–3 times a week, 58 (25%); Once a day, 26 (11.2%); Twice a day, 10 (4.3%); More than twice a day, 32 (13.8%).
Period of use: Less than a month, 4 (1.7%); 1–2 months, 10 (4.3%); 3–4 months, 22 (9.5%); 5–6 months, 21 (9.1%); 7–8 months, 8 (3.4%); 9–10 months, 15 (6.5%); 11–12 months, 12 (5.2%); More than a year, 140 (60.3%).

C-2. Model Evaluation

We first evaluated the relationships between the indicators and reflective variables. All loadings were satisfactory, i.e., above or closely below the 0.70 threshold (between 0.60 and 0.70). The only exception was one item for perceived personalisation, which had a value of 0.57. We retained the item since the construct's composite reliability and AVE values are satisfactory. We kept all items. The loadings and cross-loadings are shown in Appendix C, Table C3. We then evaluated the measures' composite reliabilities, which were between 0.83 and 0.94, thus demonstrating high composite reliability for all constructs (Appendix C, Table C2).

We used the average variance extracted (AVE) values to assess the convergent validity of the constructs. The AVEs were all above 0.5 (Appendix C, Table C2). The only exception was the AVE of 0.47 for perceived anthropomorphism, which was very close to 0.5. We kept all the items for perceived anthropomorphism since they cover the entire scope of the construct. We then used the Fornell-Larcker criterion to evaluate discriminant validity (Fornell and Larcker 1981; Hair et al. 2013). For each construct, the square root of the AVE was higher than the construct's individual correlation with other constructs, thus demonstrating satisfactory discriminant validity (Appendix C, Table C2).

We used two techniques to check for the presence of common method variance: the marker variable method (Lindell and Whitney 2001; Podsakoff et al. 2003) and Harman's single factor test (Podsakoff et al. 2003). The marker variable, belongingness to school, was theoretically unrelated to the variables in our study and identified before the start of data collection (Malhotra et al., 2006). Following the marker variable approach outlined by Rönkkö and Ylitalo (2011), who adapted the Lindell and Whitney (2001) method for PLS analyses, we ran two models (one with the marker variable and one without). The results indicate that all path coefficients in both models are similar, suggesting the absence of common method variance (see Appendix C, Table C4; Rönkkö and Ylitalo (2011)). In addition, we used Harman's single factor test, which revealed the lack of one individual factor accounting for most of the covariance among the measures. The unrotated factor solution shows that a single factor accounted for 29% of the variance in the variables. This evidence indicates that common method bias is not a threat in this study.
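The two common method variance checks used here can be sketched in a few lines of code. The snippet below is an illustrative sketch only, not the study's analysis script: the item responses are randomly generated (using n = 232, the sample size, and 12 constructs as dimensions), and the correlations passed to the Lindell and Whitney (2001) adjustment (0.40 and 0.05) are made-up example values.

```python
import numpy as np

def harman_single_factor_share(X):
    # Harman's single factor test: share of total variance captured by the
    # first unrotated factor. A dominant first factor (well above 0.5)
    # would suggest common method bias.
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
    return float(eigvals[0] / eigvals.sum())

def marker_adjusted_correlation(r_xy, r_marker):
    # Lindell & Whitney (2001): partial out the method variance estimated
    # from a theoretically unrelated marker variable.
    return (r_xy - r_marker) / (1.0 - r_marker)

# Illustrative data only (not the study's responses).
rng = np.random.default_rng(42)
X = rng.normal(size=(232, 12))
share = harman_single_factor_share(X)   # low for uncorrelated random items

# Hypothetical correlations: 0.40 between two constructs, 0.05 with the marker.
adjusted = marker_adjusted_correlation(r_xy=0.40, r_marker=0.05)
```

In the study's own setting, X would be the matrix of item responses and r_marker the correlation involving the belongingness-to-school marker variable.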

Table C2. Cronbach’s alpha, composite reliability, average variance extracted, latent variable correlations, and square root of the
AVE.
Correlations
CA CR AVE P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12
P1 0.84 0.90 0.75 0.87
P2 0.85 0.90 0.70 0.63 0.83
P3 0.78 0.84 0.47 0.26 0.37 0.69
P4 0.84 0.91 0.76 0.34 0.43 0.40 0.87
P5 0.80 0.86 0.56 0.48 0.53 0.35 0.41 0.75
P6 0.86 0.91 0.77 0.58 0.59 0.34 0.29 0.46 0.88
P7 0.87 0.91 0.72 0.30 0.36 0.40 0.56 0.25 0.36 0.85
P8 0.74 0.83 0.56 0.41 0.37 0.33 0.57 0.27 0.37 0.65 0.75
P9 0.93 0.94 0.73 0.06 0.38 0.31 0.32 0.04 0.25 0.58 0.52 0.86
P10 0.86 0.90 0.64 0.67 0.78 0.27 0.36 0.50 0.58 0.35 0.39 0.38 0.80
P11 0.88 0.92 0.74 0.69 0.73 0.38 0.35 0.57 0.64 0.37 0.36 0.27 0.70 0.86
P12 0.84 0.90 0.76 0.26 0.41 0.34 0.46 0.14 0.25 0.59 0.56 0.67 0.44 0.33 0.87
CA: Cronbach’s Alpha, CR: Composite Reliability, AVE: Average Variance Extracted.
P1: Continuance intention, P2: Positive disconfirmation, P3: Perceived anthropomorphism, P4: Perceived enjoyment, P5: Perceived intelligence, P6:
Perceived mastery, P7: Perceived ownership, P8: Perceived personalisation, P9: Perceived self-extension, P10: Perceived usefulness, P11: Satisfaction, P12:
Subjective norms.
The square root of the AVE for each construct is presented in bold in the correlations section of the table.
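The composite reliability and AVE values reported in Table C2 follow directly from the standardised loadings in Table C3. As a sketch (in Python, assuming numpy is available), the functions below reproduce the values for continuance intention (P1) from its three loadings and illustrate the Fornell-Larcker check using the P1/P2 figures from Table C2:

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2.
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return float(num / (num + (1.0 - l ** 2).sum()))

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardised loadings.
    l = np.asarray(loadings, dtype=float)
    return float((l ** 2).mean())

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    # Discriminant validity holds if the square root of each construct's AVE
    # exceeds the correlation between the two constructs.
    return min(ave_a, ave_b) ** 0.5 > abs(corr_ab)

# Continuance intention (P1) loadings from Table C3 (Cont1-Cont3).
cont_loadings = [0.86, 0.89, 0.85]
cr = composite_reliability(cont_loadings)         # ~0.90, as in Table C2
ave = average_variance_extracted(cont_loadings)   # ~0.75, as in Table C2

# P1 vs. P2 (positive disconfirmation): AVEs 0.75 and 0.70, correlation 0.63.
ok = fornell_larcker_ok(0.75, 0.70, 0.63)         # True, since 0.84 > 0.63
```

The same functions applied to each column of Table C3 recover the CR and AVE columns of Table C2 up to rounding.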
Table C3. Loadings and cross-loadings for reflective constructs.


P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12
Cont1 0.86 0.52 0.23 0.33 0.36 0.46 0.29 0.40 0.08 0.54 0.56 0.27
Cont2 0.89 0.57 0.20 0.23 0.37 0.51 0.22 0.34 0.07 0.61 0.59 0.24
Cont3 0.85 0.54 0.26 0.32 0.52 0.53 0.27 0.33 0.01 0.58 0.65 0.18
PDis1 0.49 0.84 0.31 0.39 0.41 0.46 0.32 0.35 0.36 0.64 0.61 0.32
PDis2 0.50 0.84 0.28 0.37 0.41 0.46 0.28 0.30 0.36 0.74 0.62 0.39
PDis3 0.54 0.83 0.31 0.32 0.44 0.47 0.28 0.29 0.28 0.58 0.59 0.29
PDis4 0.56 0.81 0.33 0.36 0.51 0.58 0.33 0.30 0.29 0.63 0.62 0.36
PE1 0.32 0.41 0.32 0.85 0.38 0.28 0.50 0.47 0.26 0.32 0.33 0.36
PE2 0.29 0.33 0.33 0.88 0.37 0.21 0.46 0.47 0.25 0.31 0.28 0.42
PE3 0.29 0.39 0.40 0.88 0.33 0.27 0.51 0.55 0.31 0.33 0.32 0.43
PM1 0.59 0.59 0.29 0.24 0.51 0.84 0.24 0.26 0.11 0.53 0.59 0.17
PM2 0.48 0.48 0.30 0.24 0.41 0.90 0.38 0.36 0.27 0.52 0.57 0.25
PM3 0.51 0.55 0.31 0.29 0.36 0.90 0.29 0.32 0.23 0.49 0.56 0.22
PO1 0.22 0.29 0.34 0.51 0.16 0.29 0.84 0.58 0.51 0.29 0.35 0.54
PO2 0.23 0.30 0.31 0.45 0.22 0.30 0.85 0.53 0.55 0.33 0.33 0.51
PO3 0.25 0.28 0.34 0.47 0.24 0.27 0.84 0.50 0.44 0.25 0.27 0.44
PO4 0.32 0.35 0.39 0.48 0.24 0.38 0.86 0.59 0.45 0.32 0.31 0.49
PAnt1 0.25 0.32 0.60 0.26 0.43 0.28 0.21 0.18 0.15 0.25 0.29 0.19
PAnt2 0.09 0.21 0.64 0.20 0.08 0.22 0.30 0.19 0.27 0.12 0.25 0.22
PAnt3 0.17 0.22 0.79 0.31 0.23 0.19 0.32 0.22 0.16 0.13 0.23 0.21
PAnt4 0.20 0.23 0.70 0.21 0.26 0.21 0.23 0.18 0.14 0.15 0.26 0.17
PAnt5 0.17 0.25 0.68 0.32 0.20 0.23 0.28 0.28 0.25 0.22 0.31 0.28
PAnt6 0.13 0.23 0.69 0.30 0.09 0.23 0.35 0.33 0.41 0.21 0.20 0.38
PInt1 0.38 0.38 0.22 0.25 0.68 0.37 0.20 0.19 0.00 0.34 0.41 0.09
PInt2 0.30 0.45 0.23 0.30 0.77 0.37 0.17 0.18 0.05 0.32 0.44 0.09
PInt3 0.37 0.39 0.34 0.36 0.71 0.37 0.21 0.22 0.02 0.39 0.41 0.14
PInt4 0.35 0.37 0.31 0.36 0.80 0.26 0.24 0.27 0.02 0.36 0.42 0.16
PInt5 0.39 0.40 0.21 0.23 0.77 0.34 0.12 0.17 0.06 0.42 0.44 0.05
PU1 0.45 0.66 0.25 0.24 0.33 0.45 0.31 0.36 0.47 0.85 0.52 0.41
PU2 0.48 0.69 0.18 0.25 0.34 0.43 0.24 0.29 0.38 0.85 0.57 0.42
PU3 0.46 0.65 0.20 0.22 0.31 0.40 0.29 0.35 0.44 0.84 0.53 0.46
PU4 0.70 0.55 0.25 0.37 0.58 0.53 0.25 0.27 0.01 0.68 0.61 0.17
PU5 0.56 0.58 0.21 0.38 0.41 0.48 0.32 0.30 0.25 0.77 0.55 0.34
PP1 0.26 0.22 0.24 0.45 0.26 0.25 0.50 0.77 0.36 0.23 0.28 0.39
PP2 0.36 0.35 0.24 0.45 0.19 0.31 0.54 0.83 0.48 0.35 0.26 0.47
PP3 0.30 0.28 0.26 0.43 0.21 0.30 0.51 0.79 0.42 0.32 0.30 0.48
PP4 0.31 0.27 0.32 0.43 0.19 0.25 0.40 0.57 0.21 0.27 0.25 0.31
Sat1 0.62 0.66 0.28 0.31 0.52 0.55 0.29 0.30 0.23 0.61 0.88 0.30
Sat2 0.65 0.60 0.26 0.31 0.43 0.58 0.35 0.30 0.20 0.64 0.85 0.27
Sat3 0.60 0.61 0.38 0.33 0.53 0.57 0.32 0.34 0.19 0.56 0.86 0.28
Sat4 0.52 0.66 0.38 0.28 0.46 0.51 0.33 0.29 0.33 0.60 0.85 0.28
Ext1 0.07 0.37 0.26 0.29 0.14 0.24 0.50 0.44 0.84 0.38 0.24 0.55
Ext2 0.08 0.34 0.27 0.24 0.05 0.23 0.46 0.43 0.86 0.33 0.25 0.61
Ext3 0.01 0.29 0.27 0.25 0.00 0.20 0.51 0.41 0.84 0.27 0.21 0.52
Ext4 0.05 0.32 0.27 0.26 −0.03 0.16 0.51 0.45 0.86 0.31 0.19 0.59
Ext5 0.05 0.33 0.29 0.29 0.00 0.24 0.54 0.46 0.84 0.32 0.24 0.58
Ext6 0.05 0.33 0.25 0.28 0.05 0.24 0.47 0.44 0.89 0.35 0.26 0.58
SN1 0.25 0.38 0.35 0.44 0.14 0.22 0.55 0.51 0.61 0.40 0.31 0.91
SN2 0.20 0.35 0.22 0.34 0.11 0.24 0.49 0.46 0.61 0.41 0.26 0.87
SN3 0.22 0.33 0.32 0.41 0.12 0.21 0.48 0.49 0.53 0.35 0.28 0.83
P1: Continuance intention, P2: Positive disconfirmation, P3: Perceived anthropomorphism, P4: Perceived enjoyment, P5: Perceived intelligence, P6:
Perceived mastery, P7: Perceived ownership, P8: Perceived personalisation, P9: Perceived self-extension, P10: Perceived usefulness, P11: Satisfaction, P12:
Subjective norms
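The composite reliability and AVE figures in Table C2 follow directly from standardised loadings such as those above. As an illustration, assuming the usual formulas AVE = mean(λ²) and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), the continuance-intention loadings Cont1–Cont3 reproduce the P1 row of Table C2:

```python
# Standardised loadings of the continuance-intention items (Cont1-Cont3, Table C3)
loadings = [0.86, 0.89, 0.85]

# AVE: average squared standardised loading
ave = sum(l ** 2 for l in loadings) / len(loadings)

# Composite reliability: squared sum of loadings over itself plus summed
# item error variances (1 - loading^2)
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - l ** 2 for l in loadings))

print(f"{ave:.2f} {cr:.2f}")  # 0.75 0.90, matching P1 in Table C2
```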
EUROPEAN JOURNAL OF INFORMATION SYSTEMS 21

Table C4. Marker variable analysis (***p < 0.001; **p < 0.01).
Path/Model Baseline Coefficient With-Marker-Variable Coefficient
Positive disconfirmation → Perceived usefulness 0.664*** 0.671***
Positive disconfirmation → Satisfaction 0.733*** 0.729***
Frequency → Continuance intention −0.03 −0.03
Habit → Continuance intention 0.083 0.082
Perceived anthropomorphism → Perceived enjoyment 0.335*** 0.315***
Perceived enjoyment → Continuance intention 0.089 0.081
Perceived intelligence → Positive disconfirmation 0.533*** 0.511***
Perceived intelligence → Perceived anthropomorphism 0.353*** 0.336***
Perceived intelligence → Perceived usefulness 0.135** 0.145**
Perceived mastery → Perceived self-extension 0.013 0.021
Perceived ownership → Perceived self-extension 0.426*** 0.431***
Perceived personalisation → Perceived self-extension 0.234** 0.235**
Perceived self-extension → Perceived enjoyment 0.209*** 0.202***
Perceived self-extension → Perceived usefulness 0.118** 0.122**
Perceived usefulness → Continuance intention 0.334*** 0.336***
Satisfaction → Continuance intention 0.418*** 0.412***
Subjective norms → Continuance intention −0.095 −0.097
Tenure → Continuance intention 0.056 0.059
Variable Baseline R2 With-Marker-Variable R2
Continuance intention 0.559 0.562
Positive disconfirmation 0.285 0.294
Perceived anthropomorphism 0.125 0.125
Perceived enjoyment 0.200 0.220
Perceived self-extension 0.371 0.374
Perceived usefulness 0.631 0.635
Satisfaction 0.537 0.537
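One way to read Table C4 is to check how much each structural path shifts once the marker variable is added; a maximal shift of roughly 0.02 suggests common method variance is not driving the results. A small sketch over the significant paths (the construct abbreviations are ours, not the authors'):

```python
# (baseline, with-marker) coefficients for the significant paths in Table C4
paths = {
    "PDis -> PU":   (0.664, 0.671),
    "PDis -> Sat":  (0.733, 0.729),
    "PAnt -> PEnj": (0.335, 0.315),
    "PInt -> PDis": (0.533, 0.511),
    "PInt -> PAnt": (0.353, 0.336),
    "PInt -> PU":   (0.135, 0.145),
    "POwn -> PExt": (0.426, 0.431),
    "PPer -> PExt": (0.234, 0.235),
    "PExt -> PEnj": (0.209, 0.202),
    "PExt -> PU":   (0.118, 0.122),
    "PU -> Cont":   (0.334, 0.336),
    "Sat -> Cont":  (0.418, 0.412),
}

# Largest absolute change in any path coefficient after adding the marker
max_shift = max(abs(base - marker) for base, marker in paths.values())
print(round(max_shift, 3))  # 0.022
```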

Table C5. Total effects (listed in descending order by size).
Dependent variable Predictor Total effect
Continuance intention Positive disconfirmation of expectations 0.521***
Satisfaction 0.408***
Perceived intelligence 0.340***
Perceived usefulness 0.336***
Habit 0.085
Perceived enjoyment 0.075
Belongingness 0.064
Tenure 0.061
Perceived self-extension 0.054*
Perceived anthropomorphism 0.027
Perceived ownership 0.023*
Perceived personalisation 0.013
Perceived mastery 0.001
Frequency −0.026
Subjective norms −0.093
Perceived usefulness Positive disconfirmation of expectations 0.660***
Perceived intelligence 0.496***
Perceived self-extension 0.118**
Perceived ownership 0.049*
Perceived personalisation 0.029
Perceived mastery 0.002
Satisfaction Positive disconfirmation of expectations 0.736***
Perceived intelligence 0.399***
Perceived enjoyment Perceived anthropomorphism 0.349***
Perceived self-extension 0.206***
Perceived intelligence 0.128**
Perceived ownership 0.088**
Perceived personalisation 0.049*
Perceived mastery 0.004
Positive disconfirmation of expectations Perceived intelligence 0.541***
Perceived anthropomorphism Perceived intelligence 0.365***
Perceived self-extension Perceived ownership 0.422***
Perceived personalisation 0.239**
Perceived mastery 0.018
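The total effects in Table C5 are sums, over all directed paths in the structural model, of the products of the direct path coefficients along each path. Using the baseline coefficients from Table C4 (and our own construct abbreviations), a short recursive sketch approximately reproduces the reported values; small discrepancies are expected because the published coefficients are rounded to three decimals.

```python
# Direct (baseline) path coefficients from Table C4
direct = {
    "PInt": {"PDis": 0.533, "PAnt": 0.353, "PU": 0.135},
    "PDis": {"PU": 0.664, "Sat": 0.733},
    "PAnt": {"PEnj": 0.335},
    "POwn": {"PExt": 0.426},
    "PPer": {"PExt": 0.234},
    "PMas": {"PExt": 0.013},
    "PExt": {"PEnj": 0.209, "PU": 0.118},
    "PU":   {"Cont": 0.334},
    "Sat":  {"Cont": 0.418},
    "PEnj": {"Cont": 0.089},
}

def total_effect(src, dst):
    """Sum of path-coefficient products over every directed path src -> dst."""
    if src == dst:
        return 1.0
    return sum(w * total_effect(nxt, dst) for nxt, w in direct.get(src, {}).items())

print(round(total_effect("PInt", "Cont"), 3))  # ~0.337; Table C5 reports 0.340
print(round(total_effect("PInt", "PU"), 3))    # ~0.489; Table C5 reports 0.496
```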

References

Fornell, C., and Larcker, D. F. 1981. "Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics," Journal of Marketing Research (18:3), 382–388.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., and Sarstedt, M. 2013. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), (First ed.). Sage Publications.
Lindell, M. K., and Whitney, D. J. 2001. "Accounting for Common Method Variance in Cross-Sectional Research Designs," Journal of Applied Psychology (86:1), 114–121.
Malhotra, N. K., Kim, S. S., and Patil, A. 2006. "Accounting for Common Method Variance in IS Research: Reanalysis of Past Studies Using a Marker-Variable Technique," Management Science (52:12), 1865–1883.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., and Podsakoff, N. P. 2003. "Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies," Journal of Applied Psychology (88:5), 879–903.
Rönkkö, M., and Ylitalo, J. 2011. "PLS Marker Variable Approach to Diagnosing and Controlling for Method Variance."
