
International Journal of Human–Computer Interaction

ISSN: 1044-7318 (Print) 1532-7590 (Online) Journal homepage: https://www.tandfonline.com/loi/hihc20

Understanding Continuance Intention toward Crowdsourcing Games: A Longitudinal Investigation

Xiaohui Wang, Dion Hoe-Lian Goh & Ee-Peng Lim

To cite this article: Xiaohui Wang, Dion Hoe-Lian Goh & Ee-Peng Lim (2020): Understanding
Continuance Intention toward Crowdsourcing Games: A Longitudinal Investigation, International
Journal of Human–Computer Interaction, DOI: 10.1080/10447318.2020.1724010

To link to this article: https://doi.org/10.1080/10447318.2020.1724010

Published online: 12 Feb 2020.


Full Terms & Conditions of access and use can be found at
https://www.tandfonline.com/action/journalInformation?journalCode=hihc20
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION
https://doi.org/10.1080/10447318.2020.1724010

SURVEY ARTICLE

Understanding Continuance Intention toward Crowdsourcing Games: A Longitudinal Investigation

Xiaohui Wang^a, Dion Hoe-Lian Goh^b, and Ee-Peng Lim^c

^a Department of Journalism, Hong Kong Baptist University, Hong Kong; ^b Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore; ^c School of Information Systems, Singapore Management University, Singapore

ABSTRACT
Given the increasing popularity of gamified crowdsourcing, the study reported here involved examining
determinants of users' continuance intention toward crowdsourcing games, both with longitudinal data
and reference to a revised unified theory of acceptance and use of technology (UTAUT). At three time
points, data were collected from an online survey about playing crowdsourcing games. Time-lagged
regression, cross-temporal correlation, and structural equation modeling were performed to examine
determinants of the acceptance of crowdsourcing games. Results indicate that the revised UTAUT2 is
applicable to explaining the acceptance of crowdsourcing games. Not only did effort expectancy,
hedonic motivation, and social influence directly affect users’ continuance intention toward
crowdsourcing games, but time-based variations also emerged in users’ perceptions and acceptance
of the games and in how their perceptions affect their acceptance. The findings answer the call for
a context-specific acceptance model and for the identification of factors driving the adoption of
gamification.

In recent years, video games have experienced consistently accelerated growth (Entertainment Software Association, 2017). Part of the reason why games are popular could be that video gaming brings together innovative and creative technologies to create engaging, immersive, and even breathtaking experiences (Kahn & Williams, 2015). As a consequence, researchers have placed a significant amount of effort into studying gamification – that is, the use of game-like experiences to motivate individuals in less engaging environments (Deterding, Dixon, Khaled, & Nacke, 2011; Huotari & Hamari, 2017; Wang, Goh, Lim, Vu, & Chua, 2017). A major area in which gamification has been used is crowdsourcing, which many organizations have employed in outsourced tasks (Morschheuser, Hamari, Koivisto, & Maedche, 2017). In a similar way, gamification could be an effective channel to mobilize large numbers of volunteers, since users of gamification are motivated not by altruism or financial incentives but by their intrinsic desire to be entertained (Law & Von Ahn, 2011).

In research responding to the increasing popularity of gamification, Von Ahn and Dabbish (2008) have posited that the most important aspect of gamified crowdsourcing systems is that outputs are produced in enjoyable ways. More recently, Sørensen et al. (2016) have discussed the possibility of applying crowdsourcing and gamification to solve optimization problems in quantum physics. In that study, examining the online video game Quantum Moves, the authors showed that even in quantum physics, players’ motivation can lead to new scientific insights. Unsurprisingly, most studies on gamification have reported the positive results of their implementations (Morschheuser et al., 2017).

Despite the popularity of gamification, few studies have been conducted to explore the dynamic behaviors of users of gamified crowdsourcing systems. In response, the study presented here contributes to knowledge about the continuance of gamification. The chief contributions of the study are twofold. First, we investigate the direct and indirect determinants of behavioral intention using integrated models, particularly by following the framework of the extended unified theory of acceptance and use of technology (UTAUT2; Venkatesh, Thong, & Xu, 2012) and the extended technology acceptance model (TAM; Davis, Bagozzi, & Warshaw, 1992). Second, we explore the dynamic nature of those determinants and their effects on continuance intention – that is, an individual’s decision to continue using new technology beyond its first usage (Bhattacherjee, 2001; Sun & Jeyaraj, 2013). In studying users’ behavioral intentions toward using a technology, it is important to examine the phenomenon over an extended period instead of within a mere cross-sectional snapshot (Sonderegger, Zbinden, Uebelbacher, & Sauer, 2012). When confronting new technologies for the first time, users make decisions about their acceptance of those technologies that systematically differ from their subsequent decisions (Venkatesh, Morris, & Ackerman, 2000). In that light, the goal of the study was to examine how predictors of acceptance influence users’ continued use of gamified crowdsourcing systems over time.

In what follows, the next section of the paper introduces the concepts of gamification, crowdsourcing, and the acceptance of technology, followed by an overview of two models of such acceptance, as well as their extensions into the field of gamification. After the Methods section presents the

CONTACT Xiaohui Wang vincentwx@hkbu.edu.hk Department of Journalism, Hong Kong Baptist University, Kowloon Tong, Hong Kong.
© 2020 Taylor & Francis Group, LLC

instruments developed for the study and the procedures of data collection used, the Results section presents the findings of the data analyses. Major findings are elaborated upon in the Discussion section, after which the paper concludes by addressing the implications of the findings and the limitations of the study.

1. Research background

1.1. Crowdsourcing games

Gamification refers to the process of enhancing a service with affordances for gameful experiences (Deterding et al., 2015). The approach aims at evoking users’ intrinsic motivations by designing systems reminiscent of games in order to support overall value creation for users (Hamari & Koivisto, 2015). Principles and features of games can be used to attract, motivate, and engage users, which not only reduces perceived barriers to using systems, including low incentive and poor rates of adoption (Baptista & Oliveira, 2017), but also transforms game-based interactions into desirable outputs (Zichermann & Linder, 2010). Gamification continues to experience rapid growth and has spawned numerous applications for health, business, education, and sustainability, as well as persuasive technology (Deterding et al., 2015).

Meanwhile, crowdsourcing refers to the use of contributions, monetary or otherwise, from internet users in order to offer desirable services or ideas (Howe, 2006). The primary goal of crowdsourcing is either to reduce costs by distributing the work required by immense tasks to a crowd or to solve complex problems with crowd intelligence. Since crowdsourcing generally relies on the participation of large numbers of volunteers, gamification is an effective way to attract and engage potential users. By transforming crowdsourcing tasks into game-like activities, gamification provides users with motives other than monetary compensation (Morschheuser et al., 2017). Developed for crowdsourcing tasks such as multimedia annotation, geolocation tagging, natural-language processing, scientific puzzle solving, and knowledge creation, gamification systems include Google Image Labeler for labeling images (Google Image Labeler, 2007), the online video game Foldit for identifying well-folded protein structures (Cooper et al., 2010), the online game OnToGalaxy for mining semantic relations (Carranza & Krause, 2012), and the aforementioned Quantum Moves for solving optimization problems in quantum physics (Sørensen et al., 2016), to name a few.

Parallel to the increasing popularity of gamification in crowdsourcing, academic research on the topic has steadily increased and can be expected to continue increasing in the future (Morschheuser et al., 2017). To date, studies examining gamification have mainly focused on three aspects: (1) the effectiveness of individual gamification affordances, such as points and leaderboards (Prestopnik & Tang, 2015; Talasila, Curtmola, & Borcea, 2016), storytelling and avatars (Prandi, Nisi, Salomoni, & Nunes, 2015; Sakamoto & Nakajima, 2014), and competitive or cooperative gaming design (Morschheuser et al., 2017; Pe-Than, Goh, & Lee, 2017); (2) the efficiency of gamified crowdsourcing tasks in terms of participation (Eickhoff, Harris, de Vries, & Srinivasan, 2012; Kawajiri, Shimosaka, & Kahima, 2014), output engagement (Goncalves, Hosio, Ferreira, & Kostakos, 2014), and psychological outcomes (e.g., enjoyment; Altmeyer, Lessel, & Krüger, 2016); and (3) the motivations for using gamified crowdsourcing, such as social influence (Hamari & Koivisto, 2015), intrinsic motivations (Mekler, Brühlmann, Tuch, & Opwis, 2017), and esthetic experience (Wang, Goh, Lim, & Vu, 2016; Wang et al., 2017). Such results provide empirical evidence for gamification development, providing designers with guidelines for encouraging and engaging users in crowdsourcing tasks.

Largely missing from the literature, however, are investigations into individuals’ long-term interactions with gamified systems (Rapp, Hopfgartner, Hamari, Linehan, & Cena, 2019). Furthermore, very little empirical work has yet systematically examined the antecedents of gamification acceptance based on nuanced theories (Morschheuser et al., 2017). Seaborn and Fels (2015) noticed that a major issue of gamification studies was the disconnection between theoretical and practical work. Gamification studies would benefit from a wider use of theories to account for the complexity of human behavior (Rapp et al., 2019). In this light, this study examines the predictors of individuals’ acceptance and continuance of crowdsourcing games based on an integrated theoretical framework.

1.2. UTAUT2 and extended TAM

Despite the success stories of implementing gamification, most developed systems struggle with increasing their user base (Deterding, 2015; Doan, Ramakrishnan, & Halevy, 2011). Indeed, many innovations do not survive initial deployment due to having too few installations (Siu & Riedl, 2016), and only when significant numbers of users participate in the crowdsourcing process can large-scale problems be solved. However, such problems could be largely overcome by clarifying the driving factors behind users’ intention to use such systems.

Proposed by Venkatesh, Morris, Davis, and Davis (2003), the unified theory of acceptance and use of technology (UTAUT) was synthesized from eight prominent theories from psychology and sociology in order to elucidate why people intend to use information systems. UTAUT proposed four predictors of behavioral intentions toward and the use of technology: performance expectancy, effort expectancy, social influence, and facilitating conditions. Whereas performance expectancy is defined as the degree to which using a technology will provide individuals with performance-related benefits, effort expectancy is the degree of ease in using a technology. By contrast, social influence refers to the extent to which users perceive that people important to them think they should use a technology. Last, facilitating conditions are individuals’ perceptions of the necessary resources and support available to perform certain behaviors (Venkatesh et al., 2003).

More recently, Venkatesh et al. (2012) also proposed UTAUT2 by extending UTAUT with three additional constructs – hedonic motivation, price value, and habit – and thereby tailoring the theory to the context of individual consumers’ acceptance of technology. In particular, hedonic
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION 3

motivation, also called perceived enjoyment, is defined as the fun or pleasure experienced by using a technology, and UTAUT2 has indeed been examined in hedonic contexts (Baptista & Oliveira, 2017; Chang, Liu, & Chen, 2014; Oh & Yoon, 2014). For instance, Baptista and Oliveira (2017) utilized UTAUT2 to explain the acceptance of a gamified banking service and found that all of the proposed factors except facilitating conditions directly affected behavioral intentions.

A limitation of UTAUT2, however, is the underdevelopment of secondary factors of acceptance. Although performance expectancy, effort expectancy, social influence, and hedonic motivation are prominent predictors of behavioral intentions, UTAUT2 does not address the inter-relationships among their antecedents. By contrast, the extended TAM specifies the hierarchical relationships among the antecedents of behavioral intentions. Originally proposed to identify determinants of the use of information systems in the workplace (Davis, 1989), TAM hypothesizes that initial perceived ease of use, which is conceptually similar to effort expectancy in UTAUT, and perceived usefulness, which is conceptually similar to performance expectancy (Martins, Oliveira, & Popovič, 2014), affect behavioral intentions by way of attitude. To accommodate TAM in contexts of gamification, researchers have sought to extend the model with hedonic constructs. For instance, Shin and Shin (2011) have proposed that perceived enjoyment, which is conceptually similar to hedonic motivation, affects behavioral intentions to play games in social networks via attitude.

In the light of the interactions among TAM’s constructs, we propose a revised UTAUT that integrates the hierarchical relationships in the extended TAM, as shown in Figure 1. Specifically, we propose that attitude, social influence, and hedonic motivation form the first layer of antecedents of behavioral intentions, whereas performance expectancy, effort expectancy, and hedonic motivation form a second layer of antecedents mediated by attitude. We excluded facilitating conditions from our model since it has been characterized as a nonsignificant predictor of behavioral intention in recent studies (Hoque & Sorwar, 2017; Martins et al., 2014).

1.3. Longitudinal investigation

An overlooked consideration in the evaluation of the acceptance of technology is the temporal nature of the beliefs, intentions, and, in turn, behaviors related to using technology (Sonderegger et al., 2012). Most research examining users’ behavioral intentions has focused on one-time snapshots instead of whether players’ perceptions and intentions change over time. In cross-sectional studies of users, participants are typically unfamiliar with the system being evaluated, and the results of such work provide information about the initial phase of human–computer interaction. By contrast, a longitudinal study involving multiple testing sessions is expected to represent a more valid modeling of actual usage. It is also suggested that, in the earliest stages of the introduction of new technology, users make decisions about their acceptance of the technology, which have been shown to systematically differ from their continuance-related decisions as their experience with the technology accumulates (Venkatesh et al., 2000). Therefore, to gain a thorough understanding of the underlying mechanisms of the continued acceptance of technology, we investigated the role of perceptions in users’ initial acceptance-related decisions as well as their continuance-related ones.

Hanus and Fox (2015) conducted one of the few studies investigating the temporal nature of gaming experiences. Assessing longitudinal data representing students’ perceptions of gamification in a classroom setting, they found that users’ satisfaction with the gamified curriculum was significantly lower in the later stages of using it than during the initial encounter, although significant differences in other perceptions over time did not emerge. Beyond such findings, longitudinal studies on information technology may also clarify the continued intention to use gamification systems. For instance, Venkatesh and Davis (2000) tested the extended TAM in four organizational systems with longitudinal data and measured their model’s constructs at three different points in time over a period of four months. They found that perceived usefulness was the most stable construct across the three time points, whereas the stability of perceived ease of use was systematically lower. In another longitudinal study, Luse, Mennecke, and Triplett (2013) examined changes in users’ intention to play virtual-world technology over time, with participants recruited to play a game and finish a survey three times in two weeks. Their results demonstrated that a user’s intention to utilize virtual-world technologies in his or her future work decreased significantly over time. Most recently, Mou, Shin, and Cohen (2017) tested the extended TAM in the online health-information context with data collected in two phases during a five-week period. Among

Figure 1. The integrated model of gamification acceptance. Hedonic motivation, effort expectancy, and performance expectancy are modeled as antecedents of attitude; attitude, hedonic motivation, and social influence predict behavioral intention.
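For concreteness, the hypothesized paths in Figure 1 can be written out as a small data structure. This is an illustrative sketch only; the construct identifiers are ours, not taken from the paper's materials:

```python
# Hypothesized paths of the integrated model (Figure 1).
# "attitude" has second-layer antecedents (mediated paths);
# "behavioral_intention" has first-layer (direct) antecedents.
MODEL_PATHS = {
    "attitude": ["hedonic_motivation", "effort_expectancy",
                 "performance_expectancy"],
    "behavioral_intention": ["attitude", "hedonic_motivation",
                             "social_influence"],
}

def direct_predictors(outcome):
    """Return the constructs hypothesized to directly predict `outcome`."""
    return MODEL_PATHS[outcome]
```

Note that facilitating conditions appears nowhere in the structure, reflecting its exclusion from the revised model.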



their findings, perceived ease of use became less important to the formation of positive behavioral intentions in the second phase.

2. Methodology

2.1. Game development

To execute the study, a mobile crowdsourcing game called KpopRally was developed as a question-and-answer app designed to collect tags for music videos. Because annotating music videos with tags – that is, brief, text-based tokens describing certain aspects of a video – enables the retrieval of music videos, having appropriate tags associated with each music video was expected to allow more accurate search results and recommendations (Dulačka & Bieliková, 2012). The collected tags could also function as metadata for machine learning geared toward automatic annotation in the future (Michelucci & Dickinson, 2016).

The mechanism of the crowdsourcing system is simple: users contribute tags by answering questions related to the music videos. During the crowdsourcing session, a music video clip played along with the question displayed at the bottom of the screen. Users were required to choose an answer from a list of options, and as incentives, they earned points as well as reputation within the system. Points for questions were awarded based on the percentage of other users who agreed with the chosen answer. In that way, the answer with the highest percentage of agreement was deemed the correct answer for the corresponding question and saved as metadata.

The main activity of KpopRally is that users earn points in the task session. Game elements such as points, a leaderboard, achievements, challenges, and social features are embedded in the system. The usage scenario of KpopRally is as follows. First, users are required to log in with a Facebook account or register with a unique username. The first page of the system presents users with a usage tutorial, the user profile, and the navigation to the task session (Figure 2a). Users can choose between two play modes, solo play and competing with others. The tasks in the different modes are the same, that is, five rounds of watching a music video and answering questions (Figure 2b). At the end of the task session, the total score achieved is presented to users as feedback on their progress. The difference between the two modes is that users who choose the competition mode will be matched with another user – a competitor – and gain extra points if they beat the competitor. Those who choose the solo mode gain points based on answers to the questions only. After the task session, scores are accumulated and updated on users’ profiles. Scores are also associated with users’ rankings in the leaderboard (Figure 2c) and achievements (Figure 2d) in the system. Through game elements such as gaining points and ranking in the leaderboard, collecting achievements, challenging others, and social interactions, users of KpopRally may gain enjoyment during the gameplay.

2.2. Data collection

In our online survey study, participants were recruited via e-mail invitation, poster advertisements, and oral invitations. Criteria for inclusion were having an Android smartphone with a screen size larger than 3.5 inches and agreeing to install the app on that smartphone. Participants were asked to register themselves on the study’s website (Figure 3) and required to test the app and complete a questionnaire three times: at first-time use, a week later, and two weeks after the first use. The approach was consistent with the procedures used in previous longitudinal studies (Luse et al., 2013; Vigo & Harper, 2017). During the study, participants were first briefed about the concept and purpose of the crowdsourcing game, the procedures of the study, and the required tasks. Next, they were instructed to download and install the app on their smartphones and required to test the app according to a usage scenario. After completing the tasks with the app, participants were directed to the online questionnaire.

Upon finishing the first evaluation, participants were encouraged to continue using the app for at least two more weeks and reminded that they needed to test the app and complete the questionnaire at two other time points: one week and two weeks after the initial use. Via e-mail and in-app notifications, participants were notified to complete the evaluations at the corresponding time points, and those who did

Figure 2. Screenshots of the crowdsourcing game: (a) main page, (b) gameplay, (c) leaderboard, (d) achievements.
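The scoring rule described in Section 2.1 – points reflect the percentage of other users who agreed with the chosen answer, and the majority answer is saved as the tag – can be sketched in a few lines. The function name and the rounding to whole points are our assumptions, not details from the paper:

```python
from collections import Counter

def score_and_tag(answers):
    """Score one question under the agreement-based rule described for
    KpopRally: each answer option earns points equal to the percentage
    of users who chose it, and the modal answer becomes the stored tag."""
    counts = Counter(answers)
    total = len(answers)
    points = {option: round(100 * n / total) for option, n in counts.items()}
    tag = counts.most_common(1)[0][0]  # majority answer saved as metadata
    return points, tag
```

For example, four users answering "pop", "pop", "rock", "pop" would yield 75 points for "pop", 25 for "rock", and "pop" as the tag kept for the video.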



Figure 3. Website of the online study.

not complete the survey at the required time point received a follow-up notification. If participants did not complete the questionnaires within 10 days after the first use and within 17 days after the second use, then their data were considered invalid. At the end of the study, the promised incentives were given to participants as a reward for their participation.

2.3. Measurements

The questionnaire survey (see Appendix I for questionnaire items) was designed to gather individuals’ perceptions and acceptance of crowdsourcing. The constructs measured in the study were behavioral intention, attitude, hedonic motivation, effort expectancy, performance expectancy, and social influence. The operationalization of those constructs was obtained from measures reported in previous research on UTAUT2 and TAM (Davis, Bagozzi, & Warshaw, 1989; Venkatesh et al., 2012), and items corresponding to the constructs were rated on a scale of 1 (strongly disagree) to 7 (strongly agree). The questionnaire also included two open-ended items to collect participants’ reflections on the system: “What do you like about the system and its features?” and “What could be improved in this system?” A pilot test was conducted to ensure the feasibility of the study design, as well as the content validity and reliability of the questionnaire. Six participants invited to review the study procedures provided feedback that guided modifications of the wording of items. To ensure that the items appropriately measured the constructs, factor analysis was also conducted to test the reliability and validity of the scale.

3. Results

3.1. Sample and demographic features

The survey was administered to 110 participants at the first-time evaluation of the game (T1), to 82 participants at the second stage of evaluation (T2), and to 72 participants at the third stage (T3). Attrition rates at T2 and T3 were 25.5% and 12.2%, respectively, which were considered acceptable for longitudinal studies (Gustavson, von Soest, Karevold, & Røysamb, 2012). Data with missing values exceeding 5% were excluded from analysis, and the multiple imputation technique was used to manage all other missing values in the data. Of the valid respondents at T1, 44 were male and 66 were female, with ages ranging from 20 to 40 and an average of 23.21 years. The majority of the participants watched music videos online (60%) and played mobile games (61%) frequently (once a week or more).

3.2. Psychometric properties of measures

Confirmatory factor analysis was conducted to test the reliability and validity of the measurement. Because data were gathered at three time points, three confirmatory factor analyses corresponding to those time points were conducted. Specifically, convergent validity, discriminant validity, and composite reliability (CR) tests were employed to evaluate the psychometric properties of the measurements. Consistent with the benchmarks recommended by Fornell and Larcker (1981), all factor loadings of items on their corresponding constructs exceeded the acceptable value of 0.4, all CR values exceeded the cutoff value of 0.7, and the average variance extracted was greater than the recommended 0.5 threshold. Such results suggest good convergent validity and reliability of the measurement. The correlation matrix in Table 4 presents the discriminant validity of the measurement, the results for which show that inter-construct correlations were less than the square roots of the average variance extracted by the constructs (i.e., the values on the diagonal). Overall, it can be concluded that the measurement was reliable and valid at all three points in time.

3.3. Explaining continuance intention

The study’s design avoided a major limitation associated with self-reported data: common-method variance, which results from measuring all concepts with a single questionnaire. Specifically, the study involved examining the relationship between perceptions measured at one time point and self-reported acceptance measured at the subsequent time point, such that the measurement of intention and its determinants were separated by one week. To the extent that measurements were taken at different

times in different places, they were able to reduce systematic covariation and thus minimize common-method variance (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003).

Table 1 shows the effects of hedonic motivation, performance expectancy, effort expectancy, and social influence on continuance intention. In particular, hedonic motivation was a strong determinant of attitude and intention to play at T2 and T3, whereas social influence was a strong determinant of intention to play at both times. Effort expectancy was not a significant predictor at T2 but became a primary significant determinant of attitude and intention at T3. However, performance expectancy remained consistently nonsignificant across all time points. The regression models explained between 33% and 68% of the variance in attitude and between 39% and 56% of the variance in behavioral intention.

Table 1. TAM regression results predicting continuance intention.

                                     Attitude          Continuance Intention
  Time of Measurement                R2      β         R2      β
  First period (T1~T2, N = 82)       0.33              0.39
    Hedonic Motivation                       0.54***           0.32**
    Effort Expectancy                        0.02              0.12
    Performance Expectancy                   0.10              0.18
    Social Influence                                           0.29**
  Second period (T2~T3, N = 72)      0.68              0.56
    Hedonic Motivation                       0.55***           0.44**
    Effort Expectancy                        0.38***           0.21*
    Performance Expectancy                   0.10              0.04
    Social Influence                                           0.25*
  *p < 0.05, **p < 0.01, ***p < 0.001

3.4. Cross-temporal correlations

The cross-temporal correlations of acceptance-related constructs were also examined to better understand the inherent changes of each construct. That is, the analysis involved examining the stability of users’ perceptions and acceptance of the technology at different time points, particularly by investigating the within-construct correlations measured at different points in time (Venkatesh & Davis, 2000). Most of the constructs showed strong, stable correlations across time, except for social influence, correlations for which were lower than the others (Table 2). Interestingly, correlations in the later period (range: 0.58–0.81) were systematically higher than correlations in the earlier period (range: 0.47–0.67), which indicates that users’ perceptions during later usage were more stable than those from initial use. As shown by the results of the cross-temporal correlation analysis, perceptions and acceptance among users varied across different points in time representing different stages of use.

Table 2. Cross-temporal correlations.

  Constructs               T1~T2      T2~T3
  Hedonic Motivation       0.67***    0.81***
  Effort Expectancy        0.60***    0.68***
  Performance Expectancy   0.55***    0.80***
  Social Influence         0.47***    0.58***
  Attitude                 0.56***    0.73***
  Continuance Intention    0.66***    0.75***
  ***p < 0.001

3.5. Summary of results pooled across periods

A pooled data analysis was performed to examine the revised UTAUT in the context of smartphones by following the method of Venkatesh and Davis (2000). Specifically, data were pooled across periods, which yielded a sample of 264 questionnaires used for estimating the proposed research model. Structural equation modeling (SEM) was conducted to test the hypotheses; as shown by the actual and recommended values of the model fit indices in Table 3, all the model’s indices were better than the recommended thresholds (Schreiber, Nora, Stage, Barlow, & King, 2006). Such results demonstrated a good fit between the model and the data. As shown by the results of SEM in Figure 4, all the proposed relationships were supported except for performance expectancy. Effort expectancy and hedonic motivation had a significant effect on individuals’ attitudes toward the crowdsourcing game, although no significant relationship emerged between performance expectancy and attitude. At the same time, attitude, hedonic motivation, and social influence directly affected behavioral intention.

4. Discussion

The study was conducted to examine constructs of UTAUT in gamified crowdsourcing and the varying influences of different factors over time. The model was empirically validated using data gathered at three points in time from an online study evaluating a mobile crowdsourcing game. To answer the research questions, time-lagged regression, cross-temporal correlation, and SEM were performed, the results of which are discussed in what follows.

First, an important contribution to the literature concerns the temporal dynamics of the determinants of users’ acceptance of technology. The findings indicate time-based changes in terms of users’ perceptions and acceptance, as well as the effect of their perceptions on acceptance. Results of the cross-

Table 3. The recommended and actual values of model fit indices.

  Fit index           χ2 (df)         χ2/df    CFI     TLI     RMSEA    SRMR
  Recommended value                   < 3      > .95   > .95   < .08    < .05
  Model fit           336.80 (157)    2.22     .96     .95     .07      .05
  Note: χ2/df is the ratio between the chi-square and the degrees of freedom; CFI is the Comparative Fit Index; TLI is the Tucker–Lewis coefficient; RMSEA is the Root Mean Square Error of Approximation; SRMR is the Standardized Root Mean Square Residual.

Table 4. Inter-correlations between constructs (validity and reliability).

  Construct                   M      SD     CR     AVE    1      2      3      4      5      6
  1. Continuance Intention    4.25   1.36   0.93   0.81   0.90
  2. Effort Expectancy        5.23   1.13   0.84   0.64   0.68   0.80
  3. Performance Expectancy   4.83   1.12   0.89   0.72   0.64   0.78   0.85
  4. Social Influence         4.46   1.41   0.95   0.87   0.69   0.61   0.61   0.93
  5. Hedonic Motivation       4.44   1.41   0.96   0.89   0.79   0.72   0.65   0.58   0.94
  6. Attitude                 4.46   1.32   0.94   0.83   0.86   0.76   0.68   0.61   0.82   0.91
  M = Mean; SD = Standard Deviation; CR = Composite Reliability; AVE = Average Variance Extracted. Diagonal values in the numbered columns are the square roots of the AVE.
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION 7

.17*
Hedonic Motivation
.60***
.31*** .60***
Effect Expectancy Attitude Continuance Intention

Performance
Expectancy
.22***
Social Influence

The Integrated Model of Gamification Acceptance

Figure 4. The proposed integtated model. R2(attitude) = .79; R2(intention) = .80.

temporal correlations show that the within-construct correla- (Sonderegger et al., 2012). That finding emphasizes the evolu-
tions among users’ perceptions and acceptance in the first tionary nature of the belief of usability.
stage were lower than those in the second stage, which sug- Fourth, performance expectancy of the app did not signifi-
gests that their perceptions and acceptance in later periods cantly relate to intention to play. It seems that players paid
differed starkly from that at the first use of the technology. more attention to their subjective experiences than the sys-
Such results are consistent with the notion found in prior tem’s practicality, which contrasts the major finding in the
studies that individuals’ perceptions and decisions about literature that usability significantly affects the acceptance of
acceptance vary according to the time of use (Hassenzahl & technology (e.g., Morschheuser et al., 2017; Sun & Jeyaraj,
Tractinsky, 2006; Sun & Jeyaraj, 2013; Venkatesh et al., 2000). 2013). Such results suggest that crowdsourcing games differ
Possibly, users’ perceptions and decisions about acceptance from utilitarian systems, although they could also reflect that
depend upon their familiarity with the system and would the game used in the study was framed as a game. After all,
change as they gain more experience (Sonderegger et al., participants were mainly motivated by a desire to be enter-
2012). The study thus confirms a limitation of cross- tained instead of concerns with the efficiency of crowdsour-
sectional studies, in which participants are typically unfamiliar cing, which de-emphasizes the usefulness of the system.
with the system being evaluated, for results that capture only Overall, the results offer considerable support for the proposi-
the first phase of human–computer interaction but not the tion that predictors of acceptance differ in different contexts.
temporal aspects of users’ perceptions and acceptance. In Last, the revised UTAUT found support from the results of
response, it is important to trace users’ experiences and beha- the study, which suggest that UTAUT can be generalized to
viors over time, as measured in the study presented here. the context of crowdsourcing games. The revised UTAUT
Second, regression analysis revealed that users’ hedonic provides a detailed account of the key factors underlying
motivation and perceptions of social influence significantly players’ acceptance of crowdsourcing games, and the research
affected their acceptance at all points in time, which under- model explained up to 80% of the variance in continuous
scores the important role of enjoyable experiences and social intention. Therein, hedonic motivation had the dominant
elements in the continuous use of a technology. That finding total effect on behavioral intention among all predictors, and
confirms the hypotheses that hedonic and social factors are the hierarchical relationships between the antecedents were
strong determinants of the sound acceptance of gamification examined.
(e.g., Gerow, Ayyagari, Thatcher, & Roth, 2013; Hamari &
Koivisto, 2015; Morschheuser et al., 2017). Put differently, the
use of crowdsourcing games is primarily motivated by the 5. Conclusion
individual’s desire to be entertained. Accordingly, it is valu-
5.1. Implications
able to account for how engaging the experience is as well as
social elements in designing crowdsourcing games. The results of the study have several important implications
Third, determinants of intention to play the crowdsourcing for future research on crowdsourcing games. First, we validate
game in the second stage also included effort expectancy, UTAUT in the context of gamification on smartphone apps.
which was not a significant factor in the first stage. That result Although the acceptance of technology has received consider-
shows that as users gained more hands-on experience with the able attention in different types of computer systems, they
app, they relied more on the ease of use in forming their have seldom been investigated in the context of gamification.
attitudes and intentions to play the game. A possible explana- The findings of the study thus indicate a new context for the
tion could be that initially using the system is part of a routine application of UTAUT2. Second, the results imply that indi-
of trial and practice, in which error and failure can be viduals are subject to different influences at their respective
expected and thus tolerated by novices (Mendoza & Novick, times of first using a system and deciding to continue to use
2005). However, issues with usability become problems as it. Individuals are subject to hedonic and social factors during
users develop from novices into experienced users and, in early stages of use, and hedonic, social, and usability-related
turn, judge the system based on their experience factors during later stages. Subjective and social feelings are
8 X. WANG ET AL.

always the focus of players’ experiences, whereas usability (Lee, 2009), motivation theory (Ryan, Rigby, & Przybylski,
plays an important role only in later stages of acceptance. In 2006), and theory about use and gratification (Sherry, Lucas,
future research on crowdsourcing games, acceptance at dif- Greenberg, & Lachlan, 2006). Future research could be devel-
ference stages should thus be predicted with different factors. oped to investigate whether constructs in other theoretical
Third, the study poses methodological implications for models show similar variety over time.
designing future studies on gamification, notably by suggest-
ing the need for longitudinal approaches.
For practitioners, the findings of the study first suggest that ORCID
game developers should emphasize an engaging experience, Xiaohui Wang http://orcid.org/0000-0003-2126-1712
which exerts a sustained impact on individuals’ acceptance of
crowdsourcing games. Qualitative feedback from participants
may provide recommendations for system development. References
When asked about their favorite aspects of the system, parti- Altmeyer, M., Lessel, P., & Krüger, A. (2016). Expense control:
cipants responded with comments such as “Imaginative,” A gamified, semi-automated, crowd-based approach for receipt cap-
“Relaxing and makes me feel at ease,” and “Appears to be turing. In Proceedings of the 21st International Conference on
interactive, I enjoy using this system.” The results provide Intelligent User Interfaces (pp. 31–42). New York, USA: ACM Press.
empirical evidence of how casual funs, such as the inspired Baptista, G., & Oliveira, T. (2017). Why so serious? Gamification impact
in the acceptance of mobile banking services. Internet Research, 27(1),
curiosity, relaxation, and capacity to kill time with less invol- 118–139. doi:10.1108/IntR-10-2015-0295
vement (Kultima, 2009; Lazzaro, 2008), facilitate gamification Bhattacherjee, A. (2001). Understanding information systems continu-
acceptance. Future development of gamification systems ance: An expectation-confirmation model. MIS Quarterly, 25,
could therefore address these elements. Alternatively, the 351–370. doi:10.2307/3250921
study also points out that underemphasizing usability, as Carranza, J., & Krause, M. (2012). Evaluation of game designs for human
computation. Paper presented at the Eighth Artificial Intelligence and
many studies have, is not advisable. Results of regression Interactive Digital Entertainment Conference. doi:10.1094/PDIS-11-
analysis show that whereas usability might not affect intention 11-0999-PDN
at the initial use of a technology, it plays an important role in Chang, I.-C., Liu, -C.-C., & Chen, K. (2014). The effects of hedonic/
attitudes and intentions during later use. As users continue to utilitarian expectations and social influence on continuance intention
use the system, usability-related issues become important and to play online games. Internet Research, 24(1), 21–45. doi:10.1108/
IntR-02-2012-0025
may hinder the system’s acceptance. We attribute that Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., …
dynamic to the known role of direct hands-on experience in Players, F. (2010). Predicting protein structures with a multiplayer
forming such beliefs (Venkatesh & Davis, 1996). Designing online game. Nature, 466(7307), 756–760. doi:10.1038/nature09304
crowdsourcing games with short learning curves and intuitive Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user
interfaces is thus important, since it can reduce users’ cogni- acceptance of information technology. MIS Quarterly, 16, 319–340.
doi:10.2307/249008
tive efforts in their continued use of a technology. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of
computer technology: A comparison of two theoretical models.
Management Science, 35(8), 982–1003. doi:10.1287/mnsc.35.8.982
5.2. Limitations and future work Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and
intrinsic motivation to use computers in the workplace. Journal of
The findings of the study should be interpreted in light of their Applied Social Psychology, 22(14), 1111–1132. doi:10.1111/
limitations. First, the research was conducted on one game jasp.1992.22.issue-14
employed on a single smartphone app, which may limit the Deterding, S. (2015). The lens of intrinsic skill atoms: A method for
gameful design. Human–Computer Interaction, 30(3–4), 294–335.
generalizability of the findings. Second, the study involved col- doi:10.1080/07370024.2014.993471
lecting data over a two-week period, which might have reflected Deterding, S., Canossa, A., Harteveld, C., Cooper, S., Nacke, L. E., &
only the initial stage of the implementation of crowdsourcing Whitson, J. R. (2015). Gamifying research: Strategies, opportunities,
games in reality. By contrast, future research could involve challenges, ethics. In Proceedings of the 33rd Annual ACM Conference
collecting data during a longer time span in order to gain Extended Abstracts on Human Factors in Computing Systems, Seoul,
Republic of Korea (pp. 2421–2424).
greater insights into the later stages of game implementation. Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design
Third, the sample for the third wave of data collection included elements to gamefulness: Defining “gamification”. In Proceedings of the
only 72 participants, which might jeopardize the power of the 15th International Academic MindTrek Conference: Envisioning Future
statistical tests. Moreover, some nonsignificant findings from Media Environments, Tampere, Finland (pp. 9–15).
regression analyses might have been due to the insufficient Doan, A., Ramakrishnan, R., & Halevy, A. Y. (2011). Crowdsourcing
systems on the world-wide web. Communications of the ACM, 54(4),
power of the data. Future studies with larger sample sizes and 86–96. doi:10.1145/1924421
more diverse groups of users could increase the significance of Dulačka, P., & Bieliková, M. (2012). Validation of music metadata via
the results. At the same time, findings from those analyses were game with a purpose. In Proceedings of the 8th International
consistent with the results of previous studies, which reduced Conference on Semantic Systems, Graz, Austria (pp. 177–180).
the likelihood of spurious findings. doi:10.1177/1753193411427832
Eickhoff, C., Harris, C. G., de Vries, A. P., & Srinivasan, P. (2012).
Last, future research should seek to further examine the Quality through flow and immersion. In Proceedings of the 35th
dynamic attributes of other theoretical constructs beyond International ACM SIGIR Conference (pp. 871–880). Portland,
UTAUT, including those of the theory of planned behavior Oregon, USA: ACM. doi:10.1177/1753193412441761
Entertainment Software Association. (2017). Essential facts about the computer and video game industry. Retrieved from http://www.theesa.com/about-esa/essential-facts-computer-video-game-industry/
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. doi:10.1177/002224378101800104
Gerow, J. E., Ayyagari, R., Thatcher, J. B., & Roth, P. L. (2013). Can we have fun@work? The role of intrinsic motivation for utilitarian systems. European Journal of Information Systems, 22, 360–380. doi:10.1057/ejis.2012.25
Goncalves, J., Hosio, S., Ferreira, D., & Kostakos, V. (2014). Game of words: Tagging places through crowdsourcing on public displays. In Proceedings of the 2014 Conference on Designing Interactive Systems (pp. 705–714). Vancouver, BC, Canada: ACM.
Google Image Labeler. (2007). Available at http://images.google.com/imagelabeler/
Gustavson, K., von Soest, T., Karevold, E., & Røysamb, E. (2012). Attrition and generalizability in longitudinal studies: Findings from a 15-year population-based study and a Monte Carlo simulation study. BMC Public Health, 12(1), 918. doi:10.1186/1471-2458-12-918
Hamari, J., & Koivisto, J. (2015). Why do people use gamification services? International Journal of Information Management, 35(4), 419–431. doi:10.1016/j.ijinfomgt.2015.04.006
Hanus, M. D., & Fox, J. (2015). Assessing the effects of gamification in the classroom: A longitudinal study on intrinsic motivation, social comparison, satisfaction, effort, and academic performance. Computers & Education, 80, 152–161. doi:10.1016/j.compedu.2014.08.019
Hassenzahl, M., & Tractinsky, N. (2006). User experience - a research agenda. Behaviour & Information Technology, 25, 91–97. doi:10.1080/01449290500330331
Hoque, R., & Sorwar, G. (2017). Understanding factors influencing the adoption of mHealth by the elderly: An extension of the UTAUT model. International Journal of Medical Informatics, 101, 75–84. doi:10.1016/j.ijmedinf.2017.02.002
Howe, J. (2006). The rise of crowdsourcing. Wired Magazine, 14(6), 1–4.
Huotari, K., & Hamari, J. (2017). A definition for gamification: Anchoring gamification in the service marketing literature. Electronic Markets, 27(1), 21–31. doi:10.1007/s12525-015-0212-z
Kahn, A. S., & Williams, D. (2015). We're all in this (game) together. Communication Research, 43(4), 487–517. doi:10.1177/0093650215617504
Kawajiri, R., Shimosaka, M., & Kahima, H. (2014). Steered crowdsensing: Incentive design towards quality-oriented place-centric crowdsensing. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (pp. 691–701). Seattle, Washington, USA: ACM.
Kultima, A. (2009). Casual game design values. In Proceedings of the 13th International MindTrek Conference: Everyday Life in the Ubiquitous Era, Tampere, Finland (pp. 58–65).
Law, E., & Von Ahn, L. (2011). Human computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(3), 1–121. doi:10.2200/S00371ED1V01Y201107AIM013
Lazzaro, N. (2008). The four fun keys. In K. Isbister & N. Schaffer (Eds.), Game usability: Advancing the player experience (pp. 315–344). Boca Raton, FL: CRC Press.
Lee, M.-C. (2009). Understanding the behavioural intention to play online games: An extension of the theory of planned behaviour. Online Information Review, 33, 849–872. doi:10.1108/14684520911001873
Luse, A., Mennecke, B., & Triplett, J. (2013). The changing nature of user attitudes toward virtual world technology: A longitudinal study. Computers in Human Behavior, 29(3), 1122–1132. doi:10.1016/j.chb.2012.10.004
Martins, C., Oliveira, T., & Popovič, A. (2014). Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. International Journal of Information Management, 34(1), 1–13. doi:10.1016/j.ijinfomgt.2013.06.002
Mekler, E. D., Brühlmann, F., Tuch, A. N., & Opwis, K. (2017). Towards understanding the effects of individual gamification elements on intrinsic motivation and performance. Computers in Human Behavior, 71, 525–534. doi:10.1016/j.chb.2015.08.048
Mendoza, V., & Novick, D. G. (2005). Usability over time. In Proceedings of the 23rd Annual International Conference on Design of Communication: Documenting & Designing for Pervasive Information (pp. 151–158). Coventry, UK: ACM.
Michelucci, P., & Dickinson, J. L. (2016). The power of crowds. Science, 351(6268), 32–33. doi:10.1126/science.aad6499
Morschheuser, B., Hamari, J., Koivisto, J., & Maedche, A. (2017). Gamified crowdsourcing: Conceptualization, literature review, and future agenda. International Journal of Human-Computer Studies, 106, 26–43. doi:10.1016/j.ijhcs.2017.04.005
Mou, J., Shin, D. H., & Cohen, J. (2017). Understanding trust and perceived usefulness in the consumer acceptance of an e-service: A longitudinal investigation. Behaviour & Information Technology, 36(2), 125–139. doi:10.1080/0144929X.2016.1203024
Oh, J.-C., & Yoon, S.-J. (2014). Predicting the use of online information services based on a modified UTAUT model. Behaviour & Information Technology, 33(7), 716–729. doi:10.1080/0144929X.2013.872187
Pe-Than, E. P. P., Goh, D. H. L., & Lee, C. S. (2017). Does it matter how you play? The effects of collaboration and competition among players of human computation games. Journal of the Association for Information Science and Technology, 68, 1823–1835. doi:10.1002/asi.23863
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. doi:10.1037/0021-9010.88.5.879
Prandi, C., Nisi, V., Salomoni, P., & Nunes, N. J. (2015). From gamification to pervasive game in mapping urban accessibility. In Proceedings of the 11th CHItaly (pp. 126–129). Rome, Italy: ACM.
Prestopnik, N. R., & Tang, J. (2015). Points, stories, worlds, and diegesis: Comparing player experiences in two citizen science games. Computers in Human Behavior, 52, 492–506. doi:10.1016/j.chb.2015.05.051
Rapp, A., Hopfgartner, F., Hamari, J., Linehan, C., & Cena, F. (2019). Strengthening gamification studies: Current trends and future opportunities of gamification research. International Journal of Human-Computer Studies, 127, 1–6. doi:10.1016/j.ijhcs.2018.11.007
Ryan, R. M., Rigby, C. S., & Przybylski, A. (2006). The motivational pull of video games: A self-determination theory approach. Motivation and Emotion, 30, 344–360. doi:10.1007/s11031-006-9051-8
Sakamoto, M., & Nakajima, T. (2014). Gamifying social media to encourage social activities with digital-physical hybrid role-playing. In Proceedings of the 2014 International Conference on Social Computing and Social Media (pp. 581–591). Heraklion, Crete, Greece: Springer.
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. doi:10.3200/JOER.99.6.323-338
Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: A survey. International Journal of Human-Computer Studies, 74, 14–31. doi:10.1016/j.ijhcs.2014.09.006
Sherry, J. L., Lucas, K., Greenberg, B. S., & Lachlan, K. (2006). Video game uses and gratifications as predictors of use and game preference. In P. Vorderer & J. Bryant (Eds.), Playing video games: Motives, responses, and consequences (pp. 213–224). Abingdon, UK: Routledge.
Shin, D.-H., & Shin, Y.-J. (2011). Why do people play social network games? Computers in Human Behavior, 27(2), 852–861. doi:10.1016/j.chb.2010.11.010
Siu, K., & Riedl, M. O. (2016). Reward systems in human computation games. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, Austin, Texas, USA (pp. 266–275).
Sonderegger, A., Zbinden, G., Uebelbacher, A., & Sauer, J. (2012). The influence of product aesthetics and usability over the course of time: A longitudinal field experiment. Ergonomics, 55, 713–730. doi:10.1080/00140139.2012.672658
Sørensen, J. J. W. H., Pedersen, M. K., Munch, M., Haikka, P., Jensen, J. H., Planke, T., … Sherson, J. F. (2016). Exploring the quantum speed limit with computer games. Nature, 532(7598), 210–213. doi:10.1038/nature17620
Sun, Y., & Jeyaraj, A. (2013). Information technology adoption and continuance: A longitudinal study of individuals' behavioral intentions. Information & Management, 50, 457–465. doi:10.1016/j.im.2013.07.005
Talasila, M., Curtmola, R., & Borcea, C. (2016). Crowdsensing in the wild with aliens and micropayments. IEEE Pervasive Computing, 15, 68–77. doi:10.1109/MPRV.2016.18
Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451–481. doi:10.1111/deci.1996.27.issue-3
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46, 186–204. doi:10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., & Ackerman, P. L. (2000). A longitudinal field investigation of gender differences in individual technology adoption decision-making processes. Organizational Behavior and Human Decision Processes, 83, 33–60. doi:10.1006/obhd.2000.2896
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. doi:10.2307/30036540
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. doi:10.2307/41410412
Vigo, M., & Harper, S. (2017). Real-time detection of navigation problems on the world 'wild' web. International Journal of Human-Computer Studies, 101, 1–9. doi:10.1016/j.ijhcs.2016.12.002
Von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51(8), 58–67. doi:10.1145/1378704
Wang, X., Goh, D. H.-L., Lim, E.-P., & Vu, A. W. L. (2016). Understanding the determinants of human computation game acceptance: The effects of aesthetic experience and output quality. Online Information Review, 40(4), 481–496. doi:10.1108/OIR-06-2015-0203
Wang, X., Goh, D. H.-L., Lim, E.-P., Vu, A. W. L., & Chua, A. Y. K. (2017). Examining the effectiveness of gamification in human computation. International Journal of Human–Computer Interaction, 33(10), 813–821. doi:10.1080/10447318.2017.1287458
Zichermann, G., & Linder, J. (2010). Game-based marketing: Inspire customer loyalty through rewards, challenges, and contests. NJ, USA: John Wiley & Sons.

About the Authors

Xiaohui Wang is a Research Assistant Professor at the Department of Journalism, Hong Kong Baptist University. His research focuses on user experience, social media, health informatics, and computational social science.

Dion Hoe-Lian Goh is an Associate Professor at the Wee Kim Wee School of Communication and Information, Nanyang Technological University. His research areas and specialties are social media practices and perceptions, gamification techniques for shaping user perceptions and motivating behavior, and mobile information sharing and seeking.

Ee-Peng Lim is a Lee Kong Chian Professor of Information Systems at the School of Information Systems, Singapore Management University. His main research interests are data mining and analytics, social media content mining and analysis, social network mining and analysis, and digital libraries.

Appendix I

Theories, constructs, items, and measurement sources

Continuance Intention:
  It is worthwhile to use this system.
  I am more likely to use this system in the future.
  I would like to use this system if it is available.

Attitude:
  I feel that using this system is a good way to kill time.
  I feel I was absorbed in using this system.
  I was so involved in this system that I lost track of time.

Effort Expectancy:
  My interaction with the system is clear and understandable.
  It is easy for me to become skillful at using the system.
  I find the system easy to use.
  Learning to operate the system is easy for me.

Performance Expectancy:
  I find the system is useful for creating new keywords for videos.
  I find the system encourages me to create new keywords for videos.
  I find the system produces keywords that are useful for describing videos.

Social Influence:
  People who influence my behavior think that I should use the system.
  People who are important to me think that I should use the system.
  People whose opinion I value prefer that I use the system.

Hedonic Motivation:
  Using this system is enjoyable.
  While using this system, I experienced pleasure.
  Overall, I believe that this system is enjoyable.
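The reliability and validity statistics of the kind reported in Table 4 follow standard formulas: composite reliability CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and average variance extracted AVE = Σλ² / k for k standardized loadings λ, with discriminant validity assessed by comparing √AVE against the inter-construct correlations (Fornell & Larcker, 1981). A minimal sketch of these computations (the loadings below are illustrative placeholders, not the study's estimates):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where, for standardized loadings, each error variance is 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1 - lam ** 2))

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """Discriminant validity holds when sqrt(AVE) of each construct
    exceeds the correlation between the two constructs."""
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

# Illustrative standardized loadings for two constructs (placeholders,
# chosen only to yield values in the range reported in Table 4).
hedonic = [0.95, 0.94, 0.94]
effort = [0.82, 0.79, 0.78]

cr_h = composite_reliability(hedonic)
ave_h = average_variance_extracted(hedonic)
ave_e = average_variance_extracted(effort)
print(round(cr_h, 2), round(ave_h, 2))
print(fornell_larcker_ok(ave_h, ave_e, corr_ab=0.72))
```

This also explains the diagonal of Table 4: for example, √0.89 ≈ 0.94 for hedonic motivation, which exceeds its correlations with the other constructs.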

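The cross-temporal (test-retest) correlations of the kind reported in Table 2 are Pearson correlations between the same construct's composite scores at adjacent waves. A sketch with synthetic panel data (column names such as hedonic_t1 are assumptions for illustration, not the study's variable names):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic panel: composite construct scores for the same respondents
# at three waves (T1-T3), on a 7-point scale; a shared "base" score
# induces stability across waves.
n = 200
base = rng.normal(4.5, 1.2, n)
df = pd.DataFrame({
    "hedonic_t1": base + rng.normal(0, 0.9, n),
    "hedonic_t2": base + rng.normal(0, 0.7, n),
    "hedonic_t3": base + rng.normal(0, 0.7, n),
})

# Within-construct correlations across adjacent waves, analogous to the
# T1~T2 and T2~T3 columns of Table 2.
r_12 = df["hedonic_t1"].corr(df["hedonic_t2"])
r_23 = df["hedonic_t2"].corr(df["hedonic_t3"])
print(f"T1~T2: {r_12:.2f}, T2~T3: {r_23:.2f}")
```

Because the later waves carry less noise around the shared base score here, the T2~T3 correlation tends to exceed the T1~T2 correlation, mirroring the pattern of growing stability discussed above.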