Uppal et al. (2017). British Journal of Educational Technology.
Authors include Muhammad Amaad Uppal and Stephen R. Gulliver (University of Reading).
Abstract
e-Learning courses are fast becoming commonplace, yet the success of these online
courses varies considerably. Since limited research addresses the issue of e-learning
quality (ELQ) of service in higher education environments, there is an increasing need
to assess ELQ effectively. In this paper, we argue that to obtain a satisfactory e-learning
student experience, we must offer more than access to learning material. The research
proposes an extended SERVQUAL model, the ELQ model, which, in addition to key
service constructs, facilitates consideration of both information and system quality
factors. Exploratory Factor Analysis is conducted to investigate the reliability and
validity of the measurement model, and multiple regression analysis is used to test the
proposed hypotheses.
© 2017 British Educational Research Association
Practitioner Notes
What is already known about this topic
• e-Learning is expanding rapidly. As a result of increased online enrolment, traditional
universities and colleges are offering an increased number of online courses at all
education levels. Online platforms are also being used more widely to augment or replace
traditional class-based educational offerings (Wang, Agrawala, & Cohen, 2007).
• The success of e-learning depends on the satisfaction of learners and their intention
to reuse. Many models have been used to assess the quality of e-learning programs.
The SERVQUAL model, which has its roots in Expectation-Confirmation Theory
(Oliver, 1980), was proposed by Parasuraman, Zeithaml, and Berry (1988).
SERVQUAL has become a dependable customer-driven scale, utilized to gauge service
quality delivery in a range of different industries, from hospitality, telecommunication
and retail to consulting, and has been utilized to assess the quality of service delivery
in e-learning (Udo, Bagchi, & Kirs, 2011).
What this paper adds
• In this paper, we propose an e-learning quality model, which is an extension of the
SERVQUAL model. This model comprises three dimensions: (1) a Service dimension,
consisting of five independent variables (“Responsiveness,” “Reliability,”
“Tangibility,” “Assurance” and “Empathy”); (2) an Information dimension, comprising
“Learning Content”; and (3) a System dimension, comprising “Course Website.”
• This research proposes that, in addition to “service” quality, consideration of
“information” and “system” quality is vital to achieving an overall perception of
quality for e-learning systems.
• The development of the e-learning quality model highlights a critical need to
consider both tangible and intangible e-learning quality dimensions; a fact that
challenges much of the current research in this field.
Implications for practice and/or policy
• To ensure that delivery of e-learning material is perceived as being of quality, it is
vital that e-learning practitioners understand and define how learning content
should be developed; a factor of particular importance in areas impacted by poor
infrastructure and bandwidth.
• For an e-learning system to be successful, all three dimensions have to be considered;
if any dimension is ignored, the overall quality perception may suffer.
• Despite its virtual nature, e-learning provision, if it is to be perceived as being of
quality, must ensure that it ignores neither physical needs (ie, the appearance of
learning resources, personnel and communication materials) nor temporal student
needs (ie, a willingness to help learners and provide prompt service).
Introduction
A technological revolution is occurring across Higher Education Institutions, which is
disrupting traditional approaches to teaching and learning. Teaching and learning are no
longer confined to the classroom, since the physical presence of a teacher is no longer
required for learning to take place (Zhang & Nunamaker, 2003). e-Learning solutions
provide flexible access to course materials (Levy, 2007) and consistent delivery (Cantoni,
2004), and are not restricted by spatial and/or temporal dimensions (Raab, Ellis, & Abdon,
2002). Accordingly, distance- and/or online-learning options are being widely adopted by
education providers as the teaching platform of the future.
As a result of exponential growth in online enrollment, traditional universities and
colleges are offering an increased number of online courses at all education levels. Online
platforms are also being used more widely to augment or replace traditional class-based
educational offerings, with predictions suggesting that, especially in high-population and
developing countries, the number of online courses offered will surpass the number of
onsite courses (Bolliger & Martindale, 2004).
Research implies that online students are much more goal-oriented and pro-active, ie,
appreciating the added value attained through this learning medium (Levy, 2007).
Although students perceive the benefits of e-learning, and are more goal-oriented,
research also shows that a higher percentage of online students fail to finish (Diaz &
Cartnal, 1999), a trend increasingly amplified by those undertaking Massive Open Online
Courses (MOOCs). According to Diaz and Cartnal (1999), student dropout in e-learning
courses is on average 10–20% higher than in traditional courses, and online courses are
widely perceived as being of lower quality (Levy, 2007), ie, as having a lower achievement
value to graduates (Chiu & Wang, 2008). Online programs facilitate ubiquitous learning
while limiting the interaction of learners (Swan, 2001), which in turn restricts the
potential for temporal and physical interaction and support between students and/or
academic staff; raising significant questions concerning the online learning experience.
To improve student dropout rates, research has focused on reactive forecasting/controlling
of online student behavior, instead of understanding e-learning course content and delivery
quality (Bouhnik & Marcus, 2006). To manage e-learning delivery, it is important that the
quality of e-learning is assessed accurately (Gress, Fior, Hadwin, & Winne, 2010), to
facilitate e-learning providers in the customization of their products to meet learner needs.
Understanding quality
Quality is a subjective term, which means different things to different stakeholders. Early
literature defined quality as something being “fit for use” (Juran, 1981), or being in
“conformance to requirements” (Crosby, 1979). Yang and Liu (2007) stated that, in addition
to a lack of deficiencies, “quality” must consider, and must satisfy, both stated and
implied needs. Ehlers (2004), when considering e-learning quality (ELQ), defined 30
dimensions, which were subsequently categorized into seven concept fields, ie, Tutor
Support, Cooperation, Technology, Costs, Information Transparency, Course Structure and
Didactics. Ehlers (2004) emphasized the importance of course content (“Didactics” and
“Course Structure”), and highlighted the importance of interaction (“Tutor Support” and
“Cooperation”); thus supporting, in the context of e-learning, the generic claim of Yang
and Liu (2007).
The SERVQUAL model was proposed by Parasuraman et al. (1988) in an attempt to explain
quality in the service sector. SERVQUAL has become a dependable customer-driven scale,
utilized to gauge service quality delivery in a range of different industries. SERVQUAL
aims to measure the gap between customer expectation and customer experience, ie, a
perception of satisfaction concerning the services provided, and relies on the essential
supposition that clients can assess an organization’s service quality by contrasting
expectations and experiences. If experiences are
below expectation, then the customer will perceive quality as being low. If experiences
meet or exceed expectation, then the customer will perceive quality as being high. The
SERVQUAL model applies a useful acronym, “RATER,” which refers to: Reliability, Assurance,
Tangibles, Empathy and Responsiveness (see Table 1 for RATER construct definitions).
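The gap computation at the heart of SERVQUAL can be sketched in a few lines. This is an illustrative sketch only, not the authors' instrument: the dimension names and 5-point Likert responses below are invented, and a real SERVQUAL administration uses paired expectation/perception item batteries per RATER dimension.

```python
from statistics import mean

def gap_scores(expectations, perceptions):
    """Per-dimension gap = mean(perception) - mean(expectation).
    A negative gap means experience fell below expectation (low perceived
    quality); zero or positive means expectations were met or exceeded."""
    return {dim: mean(perceptions[dim]) - mean(expectations[dim])
            for dim in expectations}

# Invented Likert responses for two RATER dimensions
expect = {"Reliability": [5, 4, 5], "Empathy": [4, 4, 3]}
perceive = {"Reliability": [4, 4, 4], "Empathy": [4, 5, 4]}
print(gap_scores(expect, perceive))
```

Here "Reliability" yields a negative gap (experience below expectation) and "Empathy" a positive one, matching the low/high quality reading described above.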
The main benefit of using SERVQUAL as a measuring tool is its application in a range of
domains. SERVQUAL, which has been used to examine numerous service industries, stands out
from other instruments due to its common application in both theoretical and operational
domains (Asubonteng, McCleary, & Swan, 1996). Asubonteng et al. stated that “until a
superior, equally straightforward, model rises, SERVQUAL will prevail as a leading service
quality instrument.” Twenty years later, the SERVQUAL model continues to be used as a
reliable tool for assessing service quality across a range of service industries, including
the education sector.
In 1992, DeLone and McLean developed the IS Success model, which considered System and
Information quality dimensions, in order to understand system use (objective) and user
satisfaction (subjective). Following validation, DeLone and McLean (2002) revised the
model by incorporating SERVQUAL measurements, adding a third, service quality, dimension.
Numerous quality models have been developed in the literature, either directly
incorporating SERVQUAL (eg, Stodnick & Rogers, 2008; Udo et al., 2011), or indirectly
considering SERVQUAL by developing the work of DeLone and McLean (eg, Acton, Halonen,
Golden, & Conboy, 2009; Roca, Chiu, & Martínez, 2006). Stodnick and Rogers (2008), for
instance, utilized SERVQUAL to examine how students perceive quality in traditional
classroom education. Udo et al. (2011) propose a modified SERVQUAL instrument for
assessing ELQ, which consists of five dimensions (Assurance, Empathy, Responsiveness,
Reliability and Website Content); this considers the service dimension, yet fails to
effectively consider both system and information quality dimensions.
“Learning Content” quality factors identified in the literature were thematically grouped,
using hermeneutic analysis, into the following concept groups: presentation style (eg,
Schluep, Ravasio, & Schär, 2003), content structure (eg, Teo & Gay, 2006), level and type
of interactivity (eg, Siau, Sheng, & Nah, 2006), language and communication (eg, Akinyemi,
2002; Hollins & Foley, 2013) and delivery mode (eg, Gulliver & Kent, 2013). In our
research, “Course Website” relates to the system used to present and lay out information,
and the inclusion of technical functions that affect student perception of web platform
quality. Significant factors impacting perceived website quality were grouped as relating
to: interface design (eg, Cho, Cheng, & Lai, 2009), navigation (eg, Volery & Lord, 2000),
attractiveness (eg, Lin, 2010) and ease of use (eg, Selim, 2007).
In this paper, we aim to explore whether the ELQ model, as shown in Figure 1, is suitable
to assess ELQ. Independent and dependent variables are drawn in Figure 2. Accordingly,
this research will determine:
1. Is the proposed ELQ model suitable to assess the quality of e-learning?
2. Does inclusion of “Learning Content” or “Course Website” have a significant impact on
perception of ELQ?
To consider the component aspects of quality, our research hypotheses state that, in the
context of e-learning:
H1: “Reliability” is positively associated with students’ perception of e-learning quality.
H2: “Assurance” is positively associated with students’ perception of e-learning quality.
H3: “Tangibility” is positively associated with students’ perception of e-learning quality.
H4: “Empathy” is positively associated with students’ perception of e-learning quality.
H5: “Responsiveness” is positively associated with students’ perception of e-learning quality.
H6: “Learning Content” significantly impacts students’ perception of e-learning quality.
H7: “Course Website” significantly impacts students’ perception of e-learning quality.
thoroughness of subject material, and the quality and suitability of learning material.
Two questions related to general learning material quality (ie, LC_1/LC_4). Questions were
mapped to the five “Learning Content” factors defined in our ELQ model, ie, Presentation
(LC_8), Structure (LC_3), Interactivity (LC_4/5), Language (LC_7) and Delivery Modes
(LC_2/10/11). Questions relating to the course website were taken from Udo and Marquis
(2002), measuring: general quality perception (CW_4/6); Interface Design (CW_5);
Navigation (CW_1); Attractiveness (CW_2); and Ease of Use (CW_3). ELQ was captured using
general questions LQ_1–LQ_4. A small number of questions were repeated, eg, RA_6 and CW_4,
to measure feedback concerning different factors. A 5-point Likert scale was used for all
questions in section two.
After conducting a short pilot test, to test the reliability of questions, the
questionnaire was distributed to students in different classes at two leading public
universities in Lahore, Pakistan. University students (undergraduates, postgraduates and
executives) are used in numerous studies covering quality perception (Van Iwaarden, Van
Der Wiele, Ball, & Millen, 2004) and were relevant in the context of the research scope.
These students were enrolled on BSc Applied Management, BBA Honours, MBA, EMBA, BSc
Sciences and BSc Engineering courses. A total of 490 students participated in the survey,
most of whom previously had exposure to a range of e-learning solutions (ie, both
computer-aided learning and computer-supported collaborative learning). A total of 421
questionnaires were considered usable (see Table 2).
Construct         Items  Cronbach’s alpha
Assurance           6        0.799
Reliability         4        0.845
Responsiveness      4        0.824
Empathy             4        0.916
Tangibility         4        0.895
Learning content    8        0.825
Learning quality    4        0.865
Course website      6        0.825
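The per-construct reliability coefficients reported above are Cronbach's alpha values. As a hedged illustration of how such a coefficient is computed (the responses below are invented, not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one response list per questionnaire item, all covering the
    same respondents in the same order. Returns Cronbach's alpha:
    k/(k-1) * (1 - sum of item variances / variance of the scale total)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]   # scale score per respondent
    item_var = sum(pvariance(v) for v in items)    # summed item variances
    return k * (1 - item_var / pvariance(totals)) / (k - 1)

# Four perfectly correlated items give alpha ≈ 1.0
responses = [[1, 2, 3, 4, 5]] * 4
print(cronbach_alpha(responses))
```

Values above roughly 0.7, as in the table, are conventionally read as acceptable internal consistency.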
Pattern matrix: factor loadings (eight components)

Item                  Loading
LQ1_Learnpercep        0.950
LQ2_Website            0.980
LQ3_InstMatClear       0.981
LQ4_uptodate           0.984
LC2_DiffFormats        0.953
LC3_VideoLec           0.511
LC5_Percept            0.971
LC6_Interesting        0.915
LC7_LecUrdu            0.907
EM1_Concerned          0.842
EM2_IndvNeeds          0.924
EM3_StudInterest       0.895
EM4_StudMotivation     0.916
TA1_ReqUni             0.850
TA2_ExpTeacher         0.887
TA3_PhyCampus          0.876
TA4_DegreeRecog        0.865
AS1_InstKnow           0.890
AS2_Fair               0.865
AS4_InstAns            0.881
AS6_TeamKnow           0.866
CW1_RelvInfo           0.610
CW3_Easy               0.789
CW4_Update             0.841
CW5_MM                 0.771
CW6_HQ                 0.727
RA1_ConsGood           0.973
RA3_CorrectsInfo       0.953
RA4_TeamHelp           0.935
RS1_QckResp            0.674
RS2_TeamHelp           0.906
RS3_TeamGuides         0.683
RS4_InstSupp           0.897
Rotation method: Promax with Kaiser normalization. Rotation converged in six iterations.
According to Hair, Anderson, Babin, and Black (2010), the minimum threshold value
recommended for a sample size of 421 is 0.350. Table 4 presents composite reliability (CR),
average variance extracted (AVE), maximum shared variance (MSV) and average shared variance
(ASV) values. Since CR values are greater than 0.7, AVE values are greater than 0.5, and
MSV and ASV are less than AVE, we claim, respectively, reliability, convergent validity and
discriminant validity. All loaded values were above 0.50, which confirms that factors have
sufficient discriminant validity and that no unexpected cross-loadings occurred.
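The CR and AVE figures referenced above follow directly from standardized factor loadings, using the standard formulas (eg, as given in Hair et al., 2010). A minimal sketch, with illustrative loadings rather than the paper's values:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2 (standardized items)."""
    s = sum(loadings)
    err = sum(1 - l * l for l in loadings)
    return s * s / (s * s + err)

def ave(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Illustrative standardized loadings for a four-item construct
lam = [0.85, 0.88, 0.87, 0.86]
print(round(composite_reliability(lam), 3), round(ave(lam), 3))
```

With these loadings, CR exceeds 0.7 and AVE exceeds 0.5, the thresholds the text applies for reliability and convergent validity.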
Fitness of results
The ELQ model, to the best of our knowledge, is the first to measure the perception of ELQ
including “Learning Content” (information) and “Course Website” (system) dimensions. The
seven hypotheses were tested as independent variables. At the p < .05 level, five factors
were identified as significant to the student’s perception of quality, ie, Assurance,
Responsiveness, Tangibility, Course Website and Learning Content; Empathy and Reliability
were not significant (see Table 7). Table 8 presents the correlation matrix between
coefficient paths.
This research confirms hypotheses H2, H3, H5, H6 and H7, ie, that Assurance,
Responsiveness, Tangibility, Course Website and Learning Content are positively associated
with the perception of ELQ (Figure 3).
All fitness values are within acceptable criteria limits depending on the test, hence
implying a good model fit (see Table 9). The chi-square/df value is 2.89, where a value
between 2.0 and 5.0 is considered acceptable. Our RMSEA value is 0.069, and our CFI and
NFI values are 0.990 and 0.986 respectively; demonstrating goodness of fit, thus
supporting the results and validating the proposed model.

Table 8: correlation matrix between coefficient paths

                      (1)      (2)      (3)      (4)      (5)      (6)      (7)     (8)
(1) Assurance         1
(2) Empathy           0.036    1
(3) Responsiveness    0.050    0.322**  1
(4) Reliability       0.063    0.118*   0.278**  1
(5) Learning content  0.041    0.239**  0.308**  0.337**  1
(6) Tangibles        −0.019    0.046    0.219**  0.201**  0.313**  1
(7) Course website    0.331**  0.099*   0.109*   0.046    0.186**  0.112*   1
(8) Learning quality  0.201**  0.147**  0.292**  0.170**  0.357**  0.243**  0.292**  1
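The acceptance criteria quoted above can be collected into a simple rule-of-thumb check. The thresholds below mirror the conventions the text relies on (chi-square/df between 2.0 and 5.0, RMSEA under 0.08, CFI and NFI above 0.95); they are heuristics from the fit-index literature, not a definitive test, and the function name is illustrative:

```python
def fit_ok(chi2_df, rmsea, cfi, nfi):
    """Apply conventional cutoffs to common structural-model fit indices.
    Returns (overall verdict, per-index detail)."""
    checks = {
        "chi2/df": 2.0 <= chi2_df <= 5.0,  # acceptable range quoted in the text
        "RMSEA": rmsea < 0.08,             # conventional close-fit cutoff
        "CFI": cfi > 0.95,                 # comparative fit index
        "NFI": nfi > 0.95,                 # normed fit index
    }
    return all(checks.values()), checks

# The paper's reported values: 2.89, 0.069, 0.990, 0.986
ok, detail = fit_ok(2.89, 0.069, 0.990, 0.986)
print(ok, detail)
```

All four reported indices pass these cutoffs, which is the basis for the good-fit claim above.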
Conclusion
Although the authors were only able to capture data from one specific teaching domain (ie,
business) within one geographic location (ie, Pakistan), our work clearly demonstrates that
perception of ELQ must consider SERVQUAL (service), Course Website (system) and Learning
Content (information) dimensions. This paper proposes an extended SERVQUAL model, ie, the
ELQ model, for measuring ELQ, comprising these three dimensions. Our findings support
existing literature (Yang & Liu, 2007) and highlight a growing need to understand, and
explicitly consider, both tangible and intangible education needs.
Results confirmed hypotheses H2, H3, H5, H6 and H7; ie, that Assurance, Responsiveness,
Tangibility, Course Website and Learning Content have a positive correlation with the
student perception of ELQ. Accordingly, students seemingly value a stable, and easy to
use, e-learning environment. Interestingly, results also imply that online students have a
reduced expectation concerning e-learning interaction and/or dependence on others, since
Empathy and Reliability do not significantly influence student perception of ELQ. Online
students do not seemingly expect high levels of empathy as part of the e-learning
experience; practically recognizing the limitations of e-learning courses. Since online
students do not expect close personal support, it is not surprising that “Empathy” was
found not to be significant to student perception of e-learning.
Due to the expense of producing and delivering online course material, e-learning content
is commonly developed over time from piecemeal linked resources (Gibbs & Gosper, 2012).
The risk of such content is that resources are taken from a range of media files, which
arguably leads to variation in the quality of delivery within and between modules (McKimm,
Jollie, & Cantillon, 2003). Although such content may be inconsistent, the inconsistencies
are identical and repeatable for all students. Unlike the classroom, where content
delivery, support, assessment and feedback can vary significantly between groups, and
between students within groups, online content delivery, module assessment and student
feedback are near perfectly repeatable for all. Online material is available (on demand)
24/7, and can be reviewed by the student multiple times to reinforce learning. The level
of communication will be identical for all students, and, due to automation of tests,
assessment processes and feedback mechanisms are also consistent for all. As online
students do not need to rely on others, instead seeking help and support from online peers
(Alzahrani, 2015), “Reliability” is unsurprisingly not seen as significant to ELQ
perception.
This study discusses high-level concepts, yet fails to consider how low-level e-learning
success factors influence student perception of ELQ, eg, student motivation and experience
(Gutierrez-Santiuste & Gallego-Arrufat, 2016), sense of isolation (Muhammad, Ahamd, &
Shah, 2015), pedagogical model (Govender & Chitanana, 2016), self-efficacy (Ozudogru &
Hismanoglu, 2016), localization of content (Andersson, 2008) etc. We strongly encourage
additional studies to consider the mapping that exists between low-level e-learning
success factors identified in the literature and student perception of ELQ.
Although additional research is required to develop contextual “quality” guidelines,
development of the ELQ model is a critical step in the consideration of both tangible and
intangible quality dimensions for e-learning; a fact that challenges much of the current
research in this field.
Acknowledgements
We would like to thank GC University, Lahore, and the University of Engineering and
Technology, Lahore, for allowing us to collect data from their students. We would also
like to thank all the students who participated in this survey and made this study
possible.
Statements on open data, ethics and conflict of interest
All data created during this research are openly available from the research page of GC
University, Lahore at http://www.gcuktp.info/research/elq.
Research was systematically piloted, checked and carried out in line with University of
Reading ethical rules. Before participating in the study, all participants were required
to read an information sheet describing the purpose of the study. Participation in the
study was voluntary, and the information sheet clearly described the participant’s right
to withdraw from the study at any time. Data collected as part of this study were
anonymously stored and analyzed. Participants were required to sign a consent form, to
show (1) that they had read the information sheet, (2) that they were willing to
participate in the study, and (3) that they understood that data would be stored
anonymously. Reimbursement of expenses was considered, yet no payment was given to
participants, to avoid financially motivated participation. Participants will be informed
of any published work that results from analysis of data collected as part of this study.
To the best of our knowledge, no potential conflict exists in the presentation of our work in the
British Journal of Educational Technology.
References
Acton, T., Halonen, R., Golden, W., & Conboy, K. (2009). DeLone & McLean success model as a descriptive
tool in evaluating the use of a virtual learning environment. Paper presented at International Confer-
ence on Organizational Learning, Knowledge and Capabilities (OLKC 2009), Amsterdam, the
Netherlands.
Alla, M. M. S. O., & Faryadi, Q. (2013). The effect of information quality in e-learning system. International
Journal of Applied Science and Technology, 3(6), 24–33.
Alzahrani, J. (2015). Investigating role of interactivity in effectiveness of e-learning (Doctoral dissertation,
Brunel University London).
Akinyemi, A. (2002). Effect of language on e-learning in the Arab world. In G. Richards (Ed.), Proceedings
of World Conference on e-Learning in Corporate, Government, Healthcare, and Higher Education 2002 (pp.
1109–1112). Chesapeake, VA: AACE.
Andersson, A. (2008). Seven major challenges for e-learning in developing countries: Case study eBIT, Sri
Lanka. International Journal of Education and Development using ICT, 4(3), 45–62.
Asubonteng, P., McCleary, K. J., & Swan, J. E. (1996). SERVQUAL revisited: A critical review of service
quality. Journal of Services Marketing, 10, 62–81.
Bolliger, D. U., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses.
International Journal on E-Learning, 3, 61–67.
Bouhnik, D., & Marcus, T. (2006). Interaction in distance-learning courses. Journal of the American Society
Information Science and Technology, 57, 299–305.
Cantoni, V. (2004). Perspectives and challenges in elearning: Towards natural interaction paradigms.
Journal of Visual Languages and Computing, 15, 333–345.
Cao, M., & Zhang, Q. (2005). B2C e-commerce web site quality: An empirical examination. Industrial Man-
agement & Data Systems, 105, 645–661.
Chiu, C. M., & Wang, E. T. G. (2008). Understanding web-based learning continuance: The role of subjec-
tive task value. Information and Management, 45, 194–201.
Cho, V., Cheng, T. E., & Lai, W. J. (2009). The role of perceived user-interface design in continued usage
intention of self-paced e-learning tools. Computers & Education, 53, 216–227.
Crosby, P. B. (1979). Quality is free. The art of making quality certain (p. 17). New York: McGraw-Hill.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: the quest for the dependent variable.
Information Systems Research, 3, 60–95.
DeLone, W. H., & McLean, E. R. (2002, January). Information systems success revisited. Proceedings
of the 35th Annual Hawaii International Conference on System Sciences, 2002 (HICSS)
(pp. 2966–2976). IEEE, Hawaii, USA.
Diaz, D. P., & Cartnal, R. B. (1999). Students’ learning styles in two classes: Online distance learning and
equivalent on-campus. College Teaching, 47, 130–135.
Ehlers, U. D. (2004). Quality in e-learning from a learner’s perspective. European Journal of Open, Distance
and E-learning, 7(1).
Gibbs, D., & Gosper, M. (2012). The upside-down-world of e-learning. Journal of Learning Design, 1(2), 46–
54.
Govender, D. W., & Chitanana, L. (2016). Perception of information and communications technology
(ICT) for instructional delivery at a university: From technophobic to technologically savvy. African Jour-
nal of Information Systems, 8, 70–85.
Gress, C. L., Fior, M., Hadwin, A. F., & Winne, P. H. (2010). Measurement and assessment in computer-
supported collaborative learning. Computers in Human Behavior, 26, 806–814.
Gulliver, S. R., & Kent, S. (2013, June). Higher education: Understanding the impact of distance learning
mode on user information assimilation and satisfaction. In Proceedings of the ITI 2013 35th Interna-
tional Conference on Information Technology Interfaces (ITI) (pp. 199–204). IEEE, Cavtat / Dubrovnik,
Croatia.
Gutierrez-Santiuste, E., & Gallego-Arrufat, M. J. (2016). Barriers in computer-mediated communication:
Typology and evolution over time. Journal of e-Learning and Knowledge Society, 12, 107–119.
Hair, J. F., Anderson, R. E., Babin, B. J., & Black, W. C. (2010). Multivariate data analysis: A global perspec-
tive (Vol. 7). Upper Saddle River, NJ: Pearson.
Hein, K. K. (2014). Creating and using interactive presentations in distance education courses: a view
from the instructor’s chair. Theses, Student Research, and Creative Activity: Department of Teaching, Learning
and Teacher Education. Paper 43.
Hollins, N., & Foley, A. R. (2013). The experiences of students with learning disabilities in a higher educa-
tion virtual campus. Educational Technology Research and Development, 61, 607–624.
Juran, J. M. (1981). Product quality: a prescription for the west. NYC, USA: AMACOM.
Lin, H. F. (2010). An application of fuzzy AHP for evaluating course website quality. Computers & Educa-
tion, 54, 877–888.
Levy, L. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48,
185–204.
McKimm, J., Jollie, C., & Cantillon, P. (2003). Web based learning. British Medical Journal, 326(7394),
870–873.
Muhammad, A., Ahamd, F., & Shah, A. (2015). Resolving ethical dilemma in technology enhanced edu-
cation through smart mobile devices. International Arab Journal of e-Technology, 4, 25–31.
Ozudogru, F., & Hismanoglu, M. (2016). Views of freshmen students on foreign language courses deliv-
ered via E-learning. Turkish Online Journal of Distance Education, 17, 31–47.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale
for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Raab, R. T., Ellis, W. W., & Abdon, B. R. (2002). Multisectoral partnerships in e-learning: A potential force
for improved human capital development in the Asia Pacific. Internet and Higher Education, 4, 217–229.
Roca, J. C., Chiu, C. M., & Martínez, F. J. (2006). Understanding e-learning continuance
intention: An extension of the Technology Acceptance Model. International Journal of
Human-Computer Studies, 64, 683–696.
Schluep, S., Ravasio, P., & Schär, S. G. (2003). Implementing learning content management.
In M. Rauterberg, M. Menozzi, & J. Wesson (Eds.), Human-Computer Interaction: INTERACT’03
(pp. 884–887). Amsterdam: IOS Press.
Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Com-
puters & Education, 49, 396–413.
Siau, K., Sheng, H., & Nah, F. F. H. (2006). Use of a classroom response system to enhance classroom
interactivity. IEEE Transactions on Education, 49, 398–403.
Stodnick, M., & Rogers, P. (2008). Using SERVQUAL to measure the quality of the classroom experience.
Decision Sciences Journal of Innovative Education, 6, 115–133.
Swan, K. (2001). Virtual interaction: design factors affecting student satisfaction and perceived learning
in asynchronous online courses. Distance Education, 22, 306–331.
Teo, C. B., & Gay, R. K. L. (2006). A knowledge-driven model to personalize e-learning. Journal on Educa-
tional Resources in Computing (JERIC), 6, 3.
Udo, G. J., Bagchi, K. K., & Kirs, P. J. (2011). Using SERVQUAL to assess the quality of e-learning experi-
ence. Computers in Human Behavior, 27, 1272–1283.
Udo, G. J., & Marquis, G. P. (2002). Factors affecting e-commerce web site effectiveness. Journal of Computer
Information Systems, 42, 10–16.
Van Iwaarden, J., Van Der Wiele, T., Ball, L., & Millen, R. (2004). Perceptions about the quality of web
sites: a survey amongst students at Northeastern University and Erasmus University. Information & Man-
agement, 41, 947–959.
Volery, T., & Lord, D. (2000). Critical success factors in online education. The International Journal of Educa-
tional Management, 14, 216–223.
Wang, J., Agrawala, M., & Cohen, M. F. (2007, July). Soft scissors: An interactive tool for realtime high
quality matting. ACM Transactions on Graphics, 26(3), Article 9. DOI: https://doi.org/10.1145/
1276377.1276389.
Wannatawee, P., Alhammad, M., & Gulliver, S. R. (2013). Technology acceptance and care
self-management: Consideration in context of chronic care management. Handbook of Research
on Patient Safety and Quality Care through Health Informatics, 295.
Yang, Z., & Liu, Q. (2007). Research and development of web-based virtual online classroom. Computers
and Education, 48, 171–184.
Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-
learning and enabling technology. Information Systems Frontiers, 5, 207–218.
Zhang, X., & Prybutok, V. R. (2005). A consumer perspective of e-service quality. IEEE Transactions on
Engineering Management, 52, 461–477.