

British Journal of Educational Technology Vol 00 No 00 2017 00–00
doi:10.1111/bjet.12552

Factors determining e-learning service quality

Muhammad Amaad Uppal, Samnan Ali and Stephen R. Gulliver


Muhammad Amaad Uppal received a BSc Engineering degree in Metallurgy & Material Sciences from Punjab University, Lahore, Pakistan, and an MBA (Marketing & International Business) degree from Oklahoma City University, OKC, USA, in 1993 and 1995 respectively. He is currently working towards a PhD in Business Informatics at Henley Business School, University of Reading, Reading, UK. He worked for 4 years at a leading engineering company in Pakistan (Descon Engineering), where he headed the MIS department, before joining the business and IT faculty at Sharjah Higher Colleges, Sharjah, UAE. After almost eight years at the Sharjah colleges, Amaad returned to Pakistan to join the Management Studies Department, GC University, Lahore, where he works as an Assistant Professor. Amaad is an experienced educationist with over 15 years of teaching experience in business and information technology, and has taught at leading higher education institutions in the UAE and Pakistan. He has a special interest in using information technology for teaching and learning: he successfully set up a CISCO local academy at Sharjah Men's College, delivered courses there for many years, and used WebCT, a leading online learning management system (LMS), to teach many IT and business courses. Amaad's research interests are in technology-mediated learning and development.

Samnan Ali received a BSc Engineering degree in Mechatronics and Control Engineering from the University of Engineering and Technology, Lahore, Pakistan, and an MBA degree from Lahore University of Management Sciences, in 2007 and 2010 respectively. Samnan is currently enrolled in the PhD program in Business Informatics, Systems and Accounting (BISA) at Henley Business School, University of Reading, UK. He has 8 years of teaching and training experience at prestigious universities and institutes, including the Pakistan Institute of Management (Ministry of Industries & Production, Govt. of Pakistan), the National School of Public Policy, NSPP (Senior Management Wing), Govt. of Pakistan, the Institute of Business and Management (UET Lahore) and the Management Studies Department, MSD (GCU Lahore). He is currently serving as an Assistant Professor at MSD (GCU Lahore). His professional experience includes roles as Chief Operating Officer of the Lahore Chapter of the Project Management Institute (PA, USA), Senior Training Manager with Microsoft Corporation's Gold certified partner "Expert Systems," District Sales Manager at Fauji Fertilizer Company and Business Development Manager at EBR Energy Pakistan (EBR Energy Corporation, NJ, USA). His research interests are technology-based learning and culture.

Stephen R. Gulliver received a BEng (Hons) degree in Microelectronics, an MSc degree (Distributed Information Systems) and a PhD, in 1999, 2001 and 2004 respectively. Stephen worked within the Human Factors Integration Defence Technology Centre (HFI DTC) before becoming a lecturer at Brunel University (2005–2008). Now, as an Associate Professor within the Informatics Research Centre (IRC), a core part of Henley Business School (University of Reading), his personal research relates to the area of user and pervasive informatics. Dr Gulliver has published in a number of related fields, including multimedia and information assimilation, usability, key performance indicators and user acceptance. He supervises research relating to topics including VR information acquisition, extensible modeling frameworks, CRM and ERP acceptance, intelligent building systems, eye-tracking technologies and multimedia content personalization.

Address for correspondence: Muhammad Amaad Uppal, Henley Business School, University of Reading, UK. Email: amaad@gcumsd.edu.pk

Abstract
e-Learning courses are fast becoming commonplace, yet the success of these online courses varies considerably. Since limited research addresses the issue of e-learning quality (ELQ) of service in higher education environments, there is an increasing need to effectively assess ELQ. In this paper, we argue that to obtain a satisfactory e-learning student experience, we must offer more than access to learning material. The research proposes an extended SERVQUAL model, the ELQ model, which in addition to key service constructs facilitates consideration of both information and system quality factors. Exploratory factor analysis is conducted to investigate the reliability and validity of the measurement model, and multiple regression analysis is used to test the research model. Data analysis reveals that Assurance, Responsiveness, Tangibility, Course Website and Learning Content have a positive correlation with the perception of ELQ. e-Learning students value a stable and easy-to-use e-learning environment, yet do not perceive empathy and reliability as significant to student perception of ELQ.

Practitioner Notes
What is already known about this topic
• e-Learning is expanding rapidly. As a result of increased online enrolment, traditional universities and colleges are offering an increased number of online courses at all education levels. Online platforms are also being used more widely to augment or replace traditional class-based educational offerings (Wang, Agrawala, & Cohen, 2007).
• The success of e-learning depends on the satisfaction of learners and their intention to reuse. Many models have been used to assess the quality of e-learning programs. The SERVQUAL model, which has its roots in Expectation-Confirmation Theory (Oliver, 1980), was proposed by Parasuraman, Zeithaml, and Berry (1988). SERVQUAL has become a dependable customer-driven scale, utilized to gauge service quality delivery in a range of different industries, from hospitality, telecommunication and retail to consulting, and has been utilized to assess the quality of service delivery in e-learning (Udo, Bagchi, & Kirs, 2011).
What this paper adds
• In this paper, we propose an e-learning quality model, which is an extension of the SERVQUAL model. This model comprises three dimensions: (1) a Service dimension, consisting of five independent variables ("Responsiveness," "Reliability," "Tangibility," "Assurance" and "Empathy"), (2) an Information dimension, comprising "Learning Content" and (3) a System dimension, comprising "Course Website."
• This research proposes that, in addition to "service," consideration of "information" and "system" quality is vital to the overall perception of quality for e-learning systems.
• The development of the e-learning quality model highlights a critical need to consider both tangible and nontangible quality e-learning dimensions; a fact that challenges much of the current research in this field.
Implications for practice and/or policy
• To ensure that delivery of e-learning material is perceived as being of quality, it is vital that e-learning practitioners understand and define how learning content should be developed; a factor of particular importance in areas impacted by poor infrastructure and bandwidth.
• For an e-learning system to be successful, all three dimensions have to be considered; if any dimension is ignored, the overall quality perception may suffer.
• Despite its virtual nature, e-learning provision, if it is to be perceived as being of quality, must ensure that it ignores neither physical (ie, the appearance of learning resources, personnel and communication materials) nor temporal student needs (ie, a willingness to help learners and provide prompt service).


Introduction
A technological revolution is occurring across Higher Education Institutions, disrupting traditional approaches to teaching and learning. Teaching and learning are no longer confined to the classroom, since the physical presence of a teacher is no longer required for learning to take place (Zhang & Nunamaker, 2003). e-Learning solutions provide flexible access to course materials (Levy, 2007), deliver content consistently (Cantoni, 2004) and are not restricted by spatial and/or temporal dimensions (Raab, Ellis, & Abdon, 2002). Accordingly, distance- and/or online-learning options are being widely adopted by education providers as the teaching platform of the future.
As a result of exponential growth in online enrolment, traditional universities and colleges are offering an increased number of online courses at all education levels. Online platforms are also being used more widely to augment or replace traditional class-based educational offerings, with predictions suggesting that, especially in high-population and developing countries, the number of online courses offered will surpass the number of onsite courses (Bolliger & Martindale, 2004).
Research implies that online students are much more goal-oriented and proactive, ie, they appreciate the added value attained through this learning medium (Levy, 2007). Although students perceive the benefits of e-learning, and are more goal-oriented, research also shows that a higher percentage of online students fail to finish (Diaz & Cartnal, 1999), a trend increasingly amplified by those undertaking Massive Open Online Courses (MOOCs). According to Diaz and Cartnal (1999), student dropout in e-learning courses is on average 10–20% higher than in traditional courses, and online courses are widely perceived as being of lower quality (Levy, 2007), ie, as having a lower achievement value to graduates (Chiu & Wang, 2008). Online programs facilitate ubiquitous learning, yet limit the interaction of learners (Swan, 2001), which in turn restricts the potential for temporal and physical interaction and support between students and/or academic staff; raising significant questions concerning the online learning experience. To reduce student dropout rates, research has focused on reactive forecasting/controlling of online student behavior, instead of understanding e-learning course content and delivery quality (Bouhnik & Marcus, 2006). To manage e-learning delivery, it is important that the quality of e-learning is assessed accurately (Gress, Fior, Hadwin, & Winne, 2010), to facilitate e-learning providers in the customization of their products to meet learner needs.

Understanding quality
Quality is a subjective term, which means different things to different stakeholders. Early literature defined quality as something being "fit for use" (Juran, 1981), or being in "conformance to requirements" (Crosby, 1979). Yang and Liu (2007) stated that, in addition to a lack of deficiencies, "quality" must consider, and must satisfy, both stated and implied needs. Ehlers (2004), when considering e-learning quality (ELQ), defined 30 dimensions, which were subsequently categorized into seven concept fields, ie, Tutor Support, Cooperation, Technology, Costs, Information Transparency, Course Structure and Didactics. Ehlers (2004) emphasized the importance of course content ("Didactics" and "Course Structure"), and highlighted the importance of interaction ("Tutor Support" and "Cooperation"); thus supporting, in the context of e-learning, the generic claim of Yang and Liu (2007).
The SERVQUAL model was proposed by Parasuraman et al. (1988) in an attempt to explain quality in the service sector. SERVQUAL has become a dependable customer-driven scale, utilized to gauge service quality delivery in a range of different industries. SERVQUAL aims to measure the gap between customer expectation and customer experience concerning the services provided, ie, a perception of satisfaction. It relies on the essential supposition that clients can assess an organization's service quality by contrasting expectations and experiences. If experiences are below expectation, then the customer will perceive quality as being low; if experiences meet or exceed expectation, then the customer will perceive quality as being high. The SERVQUAL model applies a useful acronym, "RATER," which refers to: Reliability, Assurance, Tangibles, Empathy and Responsiveness (see Table 1 for RATER construct definitions).

Table 1: SERVQUAL RATER definitions

Construct | Description
Reliability | Capacity to perform the guaranteed service constantly and precisely.
Assurance | Knowledge and politeness of workers and their capacity to inspire trust and certainty.
Tangibles | The presence of physical offices, equipment, personnel and communication materials.
Empathy | Caring, individualized consideration the service firm gives to its clients.
Responsiveness | Readiness to help clients and give timely service.

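To make the gap logic concrete, the following minimal sketch scores the five RATER dimensions for a single respondent. The paired expectation/experience ratings are illustrative assumptions for runnability, not data from this study (which captured perception items only).

```python
# A minimal sketch of SERVQUAL gap scoring, assuming hypothetical mean
# 5-point Likert ratings per RATER dimension for one respondent.
import numpy as np

RATER = ["Reliability", "Assurance", "Tangibles", "Empathy", "Responsiveness"]

expectation = np.array([4.5, 4.2, 3.8, 4.0, 4.4])  # illustrative values
experience = np.array([4.1, 4.3, 3.5, 3.2, 4.6])   # illustrative values

gap = experience - expectation  # negative gap -> perceived low quality
for dim, g in zip(RATER, gap):
    verdict = "meets/exceeds expectation" if g >= 0 else "below expectation"
    print(f"{dim:15s} gap = {g:+.1f} ({verdict})")
```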
The main benefit of using SERVQUAL as a measuring tool is its applicability in a range of domains. SERVQUAL, which has been used to examine numerous service industries, stands out from other instruments due to its common application in both theoretical and operational domains (Asubonteng, McCleary, & Swan, 1996). Asubonteng et al. stated that "until a superior, equally straightforward, model rises, SERVQUAL will prevail as a leading service quality instrument." Twenty years later, the SERVQUAL model continues to be used as a reliable tool for assessing service quality across a range of service industries, including the education sector.
In 1992, DeLone and McLean developed the IS Success model, which considered System and Information quality dimensions, in order to understand system use (objective) and user satisfaction (subjective). Following validation, DeLone and McLean (2002) revised the model by incorporating SERVQUAL measurements, adding a third, service quality, dimension. Numerous quality models have been developed in literature, either directly incorporating SERVQUAL (eg, Stodnick & Rogers, 2008; Udo et al., 2011), or indirectly considering SERVQUAL by developing the work of DeLone and McLean (eg, Acton, Halonen, Golden, & Conboy, 2009; Roca, Chiu, & Martínez, 2006). Stodnick and Rogers (2008), for instance, utilized SERVQUAL to see how students perceive quality in traditional classroom education. Udo et al. (2011) proposed a modified SERVQUAL instrument for assessing ELQ, consisting of five dimensions (Assurance, Empathy, Responsiveness, Reliability and Website Content); this considers the service dimension, yet fails to effectively consider both the system and information quality dimensions.

The importance of systems and information quality


DeLone and McLean (2002) highlighted that consideration of service, information and system quality constructs is crucial to system use and user satisfaction. Learning content (ie, information), delivered via a website platform (ie, system), is available to learners at any time and could be perceived as a nontemporal, nonperishable product. Similarly, online e-learning providers supply students with an education (ie, service). Although many have evaluated e-learning using TAM, existing model constructs (see Wannatawee, Alhammad, & Gulliver, 2013) fail to support consideration of all service, information and system quality dimensions. Accordingly, within the proposed model, ie, the ELQ model, all dimensions must be considered.
To extend service factors, we introduce "Learning Content" (information) and "Course Website" (system) constructs. In the current work, "Learning Content" refers to accessible and accurate learning material provided to students in a concise and timely fashion. "Learning Content" factors were taken from previous work, primarily Alla and Faryadi (2013) and Hein (2014). "Learning Content" quality factors identified in literature were thematically grouped, using hermeneutic analysis, into the following concept groups: presentation style (eg, Schluep, Ravasio, & Schär, 2003), content structure (eg, Teo & Gay, 2006), level and type of interactivity (eg, Siau, Sheng, & Nah, 2006), language and communication (eg, Akinyemi, 2002; Hollins & Foley, 2013) and delivery mode (eg, Gulliver & Kent, 2013). In our research, "Course Website" relates to the system used to present and lay out information, and the inclusion of technical functions that affect student perception of web platform quality. Significant factors impacting perceived website quality were grouped as relating to: interface design (eg, Cho, Cheng, & Lai, 2009), navigation (eg, Volery & Lord, 2000), attractiveness (eg, Lin, 2010) and ease of use (eg, Selim, 2007).
In this paper, we aim to explore whether the proposed ELQ model, shown in Figure 1, is suitable for assessing ELQ. Independent and dependent variables are shown in Figure 2. Accordingly, this research will determine:
1. Is the proposed ELQ model suitable to assess the quality of e-learning?
2. Does inclusion of "Learning Content" or "Course Website" have a significant impact on perception of ELQ?

Figure 1: Proposed ELQ model



Figure 2: ELQ validation model

To consider the component aspects of quality, our research hypotheses state that, in the context of e-learning:
H1: “Reliability” is positively associated with students’ perception of e-learning quality.
H2: “Assurance” is positively associated with students’ perception of e-learning quality.
H3: “Tangibility” is positively associated with students’ perception of e-learning quality.
H4: “Empathy” is positively associated with students’ perception of e-learning quality.
H5: “Responsiveness” is positively associated with students’ perception of e-learning quality.
H6: “Learning Content” significantly impacts students’ perception of e-learning quality.
H7: “Course Website” significantly impacts students’ perception of e-learning quality.

Data collection and instrument design


A questionnaire was used to collect participant data, which consisted of two sections; the full questionnaire can be downloaded from www.gcuktp.info/research/elq-questionnaire.pdf. There were 51 questions in total: 5 questions relating to demographic factors (ie, section one) and 46 questions relating to SERVQUAL and the extended dimensions (ie, section two). Demographic questions captured gender, occurrence of schooling, type of schooling (private/public), current degree program and current income.
In section two, where possible, we used previously validated survey questions. RATER and Learning Content questions were adapted from Udo and Marquis (2002), as required, for use in the context of e-learning. The original instrument used to capture SERVQUAL factors comprised 18 questions, and has been utilized widely in previous studies (Stodnick & Rogers, 2008). Questions were contextually altered to ensure suitability in the context of internet learning. Questions were adapted and added (ie, AS_5–AS_7; EM_4/5; RS_2–4; RA_4) to ensure consideration of the impact of online team teaching. Team teaching questions, previously intended for traditional classroom environments (eg, Stodnick & Rogers, 2008), were adjusted to make the new questions appropriate to the e-learning environment. Questions were also added to ensure consideration of lecture content (ie, RA_2), relating to the quality of lecture content delivery.
"Learning Content" questions were taken from Cao and Zhang (2005), who constructed and tested a scale for measuring B2C (Business to Customer) website quality fulfilment, and Zhang and Prybutok (2005), who measured client reactions concerning design, sound/visual impact, precision, thoroughness of subject material, and quality and suitability of learning material. Two questions were asked relating to general learning material quality (ie, LC_1/LC_4). Five questions were mapped to the five "Learning Content" factors defined in our ELQ model, ie, Presentation (LC_8), Structure (LC_3), Interactivity (LC_4/5), Language (LC_7) and Delivery Modes (LC_2/10/11). Questions relating to the course website were taken from Udo and Marquis (2002), measuring: general quality perception (CW_4/6); interface design (CW_5); navigation (CW_1); attractiveness (CW_2); and ease of use (CW_3). ELQ was captured using general questions LQ_1–LQ_4. A small number of questions were repeated (eg, RA_6 and CW_4) to measure feedback concerning different factors. A 5-point Likert scale was used for all questions in section two.
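As an illustration of how such an instrument can be scored, the sketch below maps the retained question codes (those that later load in Table 5) to their constructs and averages them into composite scores. The mapping and function name are our own illustrative assumptions, not part of the published instrument.

```python
# A sketch of construct scoring, assuming `responses` is a pandas DataFrame
# of 5-point Likert answers keyed by the question codes used in this paper.
# The mapping reflects the items retained after factor analysis (Table 5);
# it is illustrative, not the authors' published scoring key.
import pandas as pd

CONSTRUCT_ITEMS = {
    "Assurance":       ["AS1", "AS2", "AS4", "AS6"],
    "Empathy":         ["EM1", "EM2", "EM3", "EM4"],
    "Tangibility":     ["TA1", "TA2", "TA3", "TA4"],
    "Reliability":     ["RA1", "RA3", "RA4"],
    "Responsiveness":  ["RS1", "RS2", "RS3", "RS4"],
    "LearningContent": ["LC2", "LC3", "LC5", "LC6", "LC7"],
    "CourseWebsite":   ["CW1", "CW3", "CW4", "CW5", "CW6"],
    "LearningQuality": ["LQ1", "LQ2", "LQ3", "LQ4"],
}

def composite_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each respondent's items per construct (1-5 Likert scale)."""
    return pd.DataFrame({
        construct: responses[items].mean(axis=1)
        for construct, items in CONSTRUCT_ITEMS.items()
    })
```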
After conducting a short pilot test to check the reliability of the questions, the questionnaire was distributed to students in different classes at two leading public universities in Lahore, Pakistan. University students (undergraduates, postgraduates and executives) are used in numerous studies covering quality perception (Van Iwaarden, Van Der Wiele, Ball, & Millen, 2004) and were relevant in the context of the research scope. These students were enrolled on BSc Applied Management, BBA Honours, MBA, EMBA, BSc Sciences and BSc Engineering courses. A total of 490 students participated in the survey, most of whom had previous exposure to a range of e-learning solutions (ie, both computer-aided learning and computer-supported collaborative learning). A total of 421 questionnaires were considered usable (see Table 2).

Table 2: Respondent demographic data

Gender | Male | 63.7% (268)
 | Female | 36.3% (153)
Program of Study | BSc/BBA Honours | 63.5% (267)
 | MBA | 16.2% (68)
 | EMBA | 6.9% (29)
 | BSc Engineering | 8.6% (36)
 | BSc Sciences | 5% (21)
Household Income (Monthly) | Below Rs. 20 000 | 9.7% (41)
 | Rs. 21 000 to 50 000 | 27.8% (117)
 | Rs. 51 000 to 100 000 | 37.3% (157)
 | Above Rs. 100 000 | 24.9% (105)
Schooling | Public | 31.6% (133)
 | Private | 68.2% (287)

Table 3: Scale reliability

Factor label | Number of items | Cronbach's alpha (α)
Assurance | 6 | 0.799
Reliability | 4 | 0.845
Responsiveness | 4 | 0.824
Empathy | 4 | 0.916
Tangibility | 4 | 0.895
Learning content | 8 | 0.825
Learning quality | 4 | 0.865
Course website | 6 | 0.825

Table 4: Discriminant and convergent validity

Constructs | CR | AVE | MSV | ASV | LQ | ASU | EMP | REP | REL | LC | TAN | CW
Learning Quality (LQ) | 0.991 | 0.964 | 0.102 | 0.049 | 0.982 | | | | | | |
Assurance (ASU) | 0.929 | 0.724 | 0.011 | 0.002 | -0.025 | 0.851 | | | | | |
Empathy (EMP) | 0.919 | 0.741 | 0.104 | 0.029 | 0.179 | -0.005 | 0.861 | | | | |
Responsiveness (REP) | 0.794 | 0.542 | 0.104 | 0.050 | 0.320 | -0.005 | 0.323 | 0.737 | | | |
Reliability (REL) | 0.952 | 0.869 | 0.055 | 0.025 | 0.156 | 0.104 | 0.115 | 0.234 | 0.932 | | |
Learning Content (LC) | 0.953 | 0.837 | 0.095 | 0.045 | 0.309 | -0.010 | 0.226 | 0.230 | 0.223 | 0.915 | |
Tangibles (TAN) | 0.895 | 0.681 | 0.063 | 0.026 | 0.246 | -0.014 | 0.065 | 0.189 | 0.137 | 0.250 | 0.825 |
Course Website (CW) | 0.912 | 0.638 | 0.030 | 0.006 | 0.173 | 0.012 | -0.005 | 0.042 | 0.073 | 0.053 | 0.058 | 0.799

The diagonal values (bold in the original) are the square root of each construct's AVE, and exceed that construct's correlations with all other constructs.

Data analysis and results


SPSS v19 and AMOS 22 were used to facilitate data analysis, with SPSS used for basic statistics, and AMOS supporting regression (ie, Structural Equation Modeling) and model testing. Results are presented in three subsections: (1) reliability and validity, (2) exploratory factor analysis and (3) fitness of results.

Reliability and validity


To check the reliability of the scale, we computed Cronbach's alpha to measure internal consistency. Cronbach's alpha across all questionnaire items is 0.879. Cronbach's alpha values for the extracted quality factors are shown in Table 3. All alpha (α) values are greater than 0.70, which implies that the items within each factor are highly correlated and interchangeable.
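For readers replicating this check outside SPSS, the following minimal sketch computes Cronbach's alpha from an item-response matrix using the standard formula; the random placeholder data is an assumption for runnability, not data from this study.

```python
# A sketch of the internal-consistency check reported in Table 3, assuming
# `items` is an (n_respondents x k_items) array of Likert scores for one
# factor (eg, the four Empathy items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(421, 4)).astype(float)  # placeholder data
print(f"alpha = {cronbach_alpha(demo):.3f}")  # ~0 for random data; > 0.70 desired
```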

Table 5: Pattern matrix (extraction method: principal component analysis)

Item | Loading
LQ1_Learnpercep | 0.950
LQ2_Website | 0.980
LQ3_InstMatClear | 0.981
LQ4_uptodate | 0.984
LC2_DiffFormats | 0.953
LC3_VideoLec | 0.511
LC5_Percept | 0.971
LC6_Interesting | 0.915
LC7_LecUrdu | 0.907
EM1_Concerned | 0.842
EM2_IndvNeeds | 0.924
EM3_StudInterest | 0.895
EM4_StudMotivation | 0.916
TA1_ReqUni | 0.850
TA2_ExpTeacher | 0.887
TA3_PhyCampus | 0.876
TA4_DegreeRecog | 0.865
AS1_InstKnow | 0.890
AS2_Fair | 0.865
AS4_InstAns | 0.881
AS6_TeamKnow | 0.866
CW1_RelvInfo | 0.610
CW3_Easy | 0.789
CW4_Update | 0.841
CW5_MM | 0.771
CW6_HQ | 0.727
RA1_ConsGood | 0.973
RA3_CorrectsInfo | 0.953
RA4_TeamHelp | 0.935
RS1_QckResp | 0.674
RS2_TeamHelp | 0.906
RS3_TeamGuides | 0.683
RS4_InstSupp | 0.897

Each item loads on one of eight components, corresponding to its construct prefix (LQ, LC, EM, TA, AS, CW, RA, RS); only the dominant loadings are shown. Rotation method: Promax with Kaiser normalization. Rotation converged in six iterations.

According to Hair, Anderson, Babin, and Black (2010), the minimum factor loading recommended for a sample size of 421 is 0.350. Table 4 presents composite reliability (CR), average variance extracted (AVE), maximum shared variance (MSV) and average shared variance (ASV) values. Since CR values are greater than 0.7, AVE values are greater than 0.5, and MSV and ASV are less than AVE, we claim, respectively, reliability, convergent validity and discriminant validity. All loaded values were above 0.50, which confirms that factors have sufficient discriminant validity, and that no unexpected cross-loading occurred.
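The arithmetic behind the CR and AVE columns of Table 4 is straightforward to reproduce; the sketch below does so for one construct, using the Empathy loadings from Table 5 as example input. Values computed from pattern-matrix loadings will differ slightly from Table 4, which derives from the confirmatory solution in AMOS.

```python
# A sketch of the convergent-validity arithmetic behind Table 4, assuming
# `loadings` holds standardized factor loadings for one construct.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    errors = 1 - loadings**2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardized loadings
    return (loadings**2).mean()

empathy = np.array([0.842, 0.924, 0.895, 0.916])  # Empathy loadings, Table 5
print(f"CR  = {composite_reliability(empathy):.3f}")       # reliable if > 0.7
print(f"AVE = {average_variance_extracted(empathy):.3f}")  # convergent if > 0.5
```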

Exploratory factor analysis


To check whether the observed variables were adequately correlated, we conducted Exploratory Factor Analysis using Principal Component Analysis with Promax rotation (see Table 5). Promax was selected for two reasons: the sample size was adequately large (n = 421), and Promax is suitable when multiple factors are correlated. Some questions needed to be dropped, as they did not load well. Interestingly, when considering Learning Content questions, the generic (ie, LC_1/4), presentation (LC_8/9) and delivery mode questions relating to technology/device (LC_10/11) failed to load. It is believed that quality perception concerning presentation and device preference is not explicitly related to learning content. Structure, interactivity, language and content delivery (ie, audio, video, text etc) loaded reliably. When considering Course Website questions, only "attractiveness" failed to load. It is believed that the attractiveness of the website is not perceived as essential to e-learning content delivery; interface design, navigation, ease of use and information quality are all seen as critical. The eight factors extracted in the pattern matrix (Table 5) were used for further analysis. The cumulative variance of the eight factors was 77.68%, and all extracted factors had eigenvalues above 1.0. The communalities for each variable were high; ie, all were above 0.300, with most being above 0.800. The Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity indicated sampling adequacy, showing that the chosen variables were sufficiently correlated (Table 6).

Table 6: KMO and Bartlett's test

Kaiser-Meyer-Olkin measure of sampling adequacy | 0.800
Bartlett's test of sphericity | Approx. chi-square | 16011.022
 | df | 528
 | Sig. | 0.000
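An equivalent open-source workflow is sketched below using the factor_analyzer package; note that its default minres extraction stands in for SPSS's principal-component extraction, so loadings will differ slightly. The DataFrame `df` of retained Likert items is an assumed input.

```python
# A sketch of the EFA workflow described above, assuming `df` is a
# (421 x 33) pandas DataFrame of the retained Likert items.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(df: pd.DataFrame, n_factors: int = 8) -> pd.DataFrame:
    # Sampling adequacy checks of the kind reported in Table 6.
    chi2, p = calculate_bartlett_sphericity(df)
    _, kmo_total = calculate_kmo(df)
    print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.4f}); KMO = {kmo_total:.3f}")

    # Eight correlated factors with an oblique (Promax) rotation.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(df)
    pattern = pd.DataFrame(fa.loadings_, index=df.columns)
    return pattern.where(pattern.abs() > 0.5).round(3)  # suppress weak loadings
```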

Fitness of results
The ELQ model, to the best of our knowledge, is the first to measure the perception of ELQ including "Learning Content" (information) and "Course Website" (system) dimensions. The seven hypotheses were tested as independent variables. At the p < .05 level, five factors were identified as significant to the student's perception of quality, ie, Assurance, Responsiveness, Tangibility, Course Website and Learning Content; Empathy and Reliability were not significant (see Table 7). Table 8 presents the correlation matrix between coefficient paths.

Table 7: Regression weights

Path | Estimate | S.E. | C.R. | p
Learning Quality <- Assurance | 0.156 | 0.056 | 2.791 | .005
Learning Quality <- Empathy | 0.013 | 0.049 | 0.259 | .795
Learning Quality <- Responsiveness | 0.225 | 0.065 | 3.454 | ***
Learning Quality <- Reliability | 0.009 | 0.064 | 0.138 | .891
Learning Quality <- Learning Content | 0.265 | 0.058 | 4.609 | ***
Learning Quality <- Tangibles | 0.126 | 0.049 | 2.583 | .010
Learning Quality <- Course Website | 0.253 | 0.066 | 3.817 | ***

***Level of significance at < .0001.


This research confirms hypotheses H2, H3, H5, H6 and H7; ie, Assurance, Responsiveness, Tangibility, Course Website and Learning Content are positively associated with the perception of ELQ (Figure 3).
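The path test in Table 7 can be approximated outside AMOS as an ordinary multiple regression over composite construct scores; the sketch below is such an approximation on placeholder data. The real estimates come from the SEM fitted in AMOS, so coefficients will not match Table 7.

```python
# A sketch of the seven-predictor test behind Table 7, assuming composite
# construct scores per respondent; the uniform random data is a placeholder.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
cols = ["Assurance", "Empathy", "Responsiveness", "Reliability",
        "LearningContent", "Tangibles", "CourseWebsite", "LearningQuality"]
scores = pd.DataFrame(rng.uniform(1, 5, size=(421, 8)), columns=cols)

X = sm.add_constant(scores[cols[:-1]])            # seven predictors + intercept
fit = sm.OLS(scores["LearningQuality"], X).fit()
print(fit.params.round(3))
print(fit.pvalues.round(3))                       # hypothesis supported where p < .05
```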
Table 8: Correlation matrix

 | Assurance | Empathy | Responsiveness | Reliability | Learning content | Tangibles | Course website | Learning quality
Assurance | 1 | | | | | | |
Empathy | 0.036 | 1 | | | | | |
Responsiveness | 0.050 | 0.322** | 1 | | | | |
Reliability | 0.063 | 0.118* | 0.278** | 1 | | | |
Learning content | 0.041 | 0.239** | 0.308** | 0.337** | 1 | | |
Tangibles | -0.019 | 0.046 | 0.219** | 0.201** | 0.313** | 1 | |
Course website | 0.331** | 0.099* | 0.109* | 0.046 | 0.186** | 0.112* | 1 |
Learning quality | 0.201** | 0.147** | 0.292** | 0.170** | 0.357** | 0.243** | 0.292** | 1

**Correlation is significant at the .01 level (2-tailed). *Correlation is significant at the .05 level (2-tailed).

Figure 3: ELQ model with path coefficients

All fitness values are within the acceptable criterion limits for their respective tests, implying a good model fit (see Table 9). The chi-square/df value is 2.89, where a value between 2.0 and 5.0 is considered acceptable. Our RMSEA value is 0.067, and our CFI and NFI values are 0.990 and 0.986 respectively; demonstrating goodness of fit, thus supporting the results and validating the proposed model.

Table 9: Goodness of fit statistics

Index | Value | Criterion
Chi-square/df | 2.89 | 2.0–5.0
RMSEA | 0.067 | 0–0.1
CFI | 0.990 | 0–1
NFI | 0.986 | 0–1
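As a compact restatement of the acceptance rule applied above, the helper below checks the reported indices against the quoted chi-square/df and RMSEA criteria, plus a conventional cut-off of at least 0.90 for CFI and NFI (an assumption on our part; Table 9 only quotes their 0–1 range).

```python
# A sketch of the model-fit acceptance check, using the criteria quoted in
# the text plus a conventional >= 0.90 cut-off for CFI/NFI (our assumption).
def acceptable_fit(chi2_df: float, rmsea: float, cfi: float, nfi: float) -> bool:
    return (2.0 <= chi2_df <= 5.0) and (rmsea <= 0.10) \
        and (cfi >= 0.90) and (nfi >= 0.90)

# Values reported in Table 9.
print(acceptable_fit(chi2_df=2.89, rmsea=0.067, cfi=0.990, nfi=0.986))  # True
```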

Conclusion
Although the authors were only able to capture data from one specific teaching domain (ie, business) within one geographic location (ie, Pakistan), our work clearly demonstrates that perception of ELQ must consider SERVQUAL (service), Course Website (system) and Learning Content (information) dimensions. This paper proposes an extended SERVQUAL model, ie, the ELQ model, for measuring ELQ, comprising these three dimensions. Our findings support existing literature (Yang & Liu, 2007) and highlight a growing need to understand, and explicitly consider, both tangible and intangible education needs.
Results confirmed hypotheses H2, H3, H5, H6 and H7; ie, that Assurance, Responsiveness, Tangibility, Course Website and Learning Content have a positive correlation with student perception of ELQ. Accordingly, students seemingly value a stable, and easy to use, e-learning environment. Interestingly, results also imply that online students have a reduced expectation concerning e-learning interaction and/or dependence on others, since empathy and reliability do not significantly influence student perception of ELQ. Online students do not seemingly expect high levels of empathy as part of the e-learning experience; practically recognizing the limitations of e-learning courses. Since online students do not expect close personal support, it is not surprising that "Empathy" was found not to be significant to student perception of e-learning.
Due to the expense of producing and delivering online course material, e-learning content is commonly developed over time from piecemeal linked resources (Gibbs & Gosper, 2012). The risk of such content is that resources are taken from a range of media files, which arguably leads to variation in the quality of delivery within and between modules (McKimm, Jollie, & Cantillon, 2003). Although such content may be inconsistent, the inconsistencies are repeatable and identical for all students. Unlike the classroom, where content delivery, support, assessment and feedback can vary significantly between groups, and between students within groups, online content delivery, module assessment and student feedback are near-perfectly repeatable for all. Online material is available (on demand) 24/7, and can be reviewed by the student multiple times to reinforce learning. The level of communication will be identical for all students and, due to automation of tests, assessment processes and feedback mechanisms are also consistent for all. As online students do not need to rely on others, instead seeking help and support from online peers (Alzahrani, 2015), "Reliability" is unsurprisingly not seen as significant to ELQ perception.
This study discusses high-level concepts, yet fails to consider how low-level e-learning success factors influence student perception of ELQ, eg, student motivation and experience (Gutierrez-Santiuste & Gallego-Arrufat, 2016), sense of isolation (Muhammad, Ahamd, & Shah, 2015), pedagogical model (Govender & Chitanana, 2016), self-efficacy (Ozudogru & Hismanoglu, 2016), localization of content (Andersson, 2008) etc. We strongly encourage additional studies to consider the mapping between low-level e-learning success factors identified in literature and student perception of ELQ.
Although additional research is required to develop contextual "quality" guidelines, development of the ELQ model is a critical step in the consideration of both tangible and intangible quality dimensions for e-learning; a fact that challenges much of the current research in this field.

Acknowledgements
We would like to thank GC University, Lahore and the University of Engineering and Technology, Lahore, for allowing us to collect data from their students. We would also like to thank all the students who participated in this survey and made this study possible.
Statements on open data, ethics and conflict of interest
All data created during this research are openly available from the research page of GC University, Lahore at http://www.gcuktp.info/research/elq.
Research was systematically piloted, checked and carried out in line with University of Reading ethical rules. Before participating in the study, all participants were required to read an information sheet describing the purpose of the study. Participation in the study was voluntary, and the information sheet clearly described the participant's right to withdraw from the study at any time. Data collected as part of this study were stored and analyzed anonymously. Participants were required to sign a consent form, to show (1) that they had read the information sheet, (2) that they were willing to participate in the study, and (3) that they understood that data would be stored anonymously. Arrangements for expenses were considered, yet no payment was given to participants, to avoid financially motivated participation. Participants will be informed of any published work that results from analysis of data collected as part of this study.
To the best of our knowledge, no potential conflict exists in the presentation of our work in the British Journal of Educational Technology.

References
Acton, T., Halonen, R., Golden, W., & Conboy, K. (2009). DeLone & McLean success model as a descriptive tool in evaluating the use of a virtual learning environment. Paper presented at the International Conference on Organizational Learning, Knowledge and Capabilities (OLKC 2009), Amsterdam, the Netherlands.
Akinyemi, A. (2002). Effect of language on e-learning in the Arab world. In G. Richards (Ed.), Proceedings of World Conference on e-Learning in Corporate, Government, Healthcare, and Higher Education 2002 (pp. 1109–1112). Chesapeake, VA: AACE.
Alla, M. M. S. O., & Faryadi, Q. (2013). The effect of information quality in e-learning system. International Journal of Applied Science and Technology, 3(6), 24–33.
Alzahrani, J. (2015). Investigating role of interactivity in effectiveness of e-learning (Doctoral dissertation, Brunel University London).
Andersson, A. (2008). Seven major challenges for e-learning in developing countries: Case study eBIT, Sri Lanka. International Journal of Education and Development using ICT, 4(3), 45–62.
Asubonteng, P., McCleary, K. J., & Swan, J. E. (1996). SERVQUAL revisited: A critical review of service quality. Journal of Services Marketing, 10, 62–81.
Bolliger, D. U., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning, 3, 61–67.
Bouhnik, D., & Marcus, T. (2006). Interaction in distance-learning courses. Journal of the American Society for Information Science and Technology, 57, 299–305.
Cantoni, V. (2004). Perspectives and challenges in e-learning: Towards natural interaction paradigms. Journal of Visual Languages and Computing, 15, 333–345.
Cao, M., & Zhang, Q. (2005). B2C e-commerce web site quality: An empirical examination. Industrial Management & Data Systems, 105, 645–661.
Chiu, C. M., & Wang, E. T. G. (2008). Understanding web-based learning continuance: The role of subjective task value. Information and Management, 45, 194–201.
Cho, V., Cheng, T. E., & Lai, W. J. (2009). The role of perceived user-interface design in continued usage intention of self-paced e-learning tools. Computers & Education, 53, 216–227.
Crosby, P. B. (1979). Quality is free: The art of making quality certain (p. 17). New York: McGraw-Hill.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3, 60–95.
DeLone, W. H., & McLean, E. R. (2002, January). Information systems success revisited. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS) (pp. 2966–2976). IEEE, Hawaii, USA.
Diaz, D. P., & Cartnal, R. B. (1999). Students' learning styles in two classes: Online distance learning and equivalent on-campus. College Teaching, 47, 130–135.
Ehlers, U. D. (2004). Quality in e-learning from a learner's perspective. European Journal of Open, Distance and E-learning, 7(1).
Gibbs, D., & Gosper, M. (2012). The upside-down-world of e-learning. Journal of Learning Design, 1(2), 46–54.
Govender, D. W., & Chitanana, L. (2016). Perception of information and communications technology (ICT) for instructional delivery at a university: From technophobic to technologically savvy. African Journal of Information Systems, 8, 70–85.
Gress, C. L., Fior, M., Hadwin, A. F., & Winne, P. H. (2010). Measurement and assessment in computer-supported collaborative learning. Computers in Human Behavior, 26, 806–814.
Gulliver, S. R., & Kent, S. (2013, June). Higher education: Understanding the impact of distance learning mode on user information assimilation and satisfaction. In Proceedings of the ITI 2013 35th International Conference on Information Technology Interfaces (ITI) (pp. 199–204). IEEE, Cavtat/Dubrovnik, Croatia.
Gutierrez-Santiuste, E., & Gallego-Arrufat, M. J. (2016). Barriers in computer-mediated communication: Typology and evolution over time. Journal of e-Learning and Knowledge Society, 12, 107–119.
Hair, J. F., Anderson, R. E., Babin, B. J., & Black, W. C. (2010). Multivariate data analysis: A global perspective (Vol. 7). Upper Saddle River, NJ: Pearson.
Hein, K. K. (2014). Creating and using interactive presentations in distance education courses: A view from the instructor's chair. Theses, Student Research, and Creative Activity: Department of Teaching, Learning and Teacher Education. Paper 43.
Hollins, N., & Foley, A. R. (2013). The experiences of students with learning disabilities in a higher education virtual campus. Educational Technology Research and Development, 61, 607–624.
Juran, J. M. (1981). Product quality: A prescription for the West. New York: AMACOM.
Levy, Y. (2007). Comparing dropouts and persistence in e-learning courses. Computers & Education, 48, 185–204.
Lin, H. F. (2010). An application of fuzzy AHP for evaluating course website quality. Computers & Education, 54, 877–888.
McKimm, J., Jollie, C., & Cantillon, P. (2003). Web based learning. British Medical Journal, 326(7394), 870–873.
Muhammad, A., Ahamd, F., & Shah, A. (2015). Resolving ethical dilemma in technology enhanced education through smart mobile devices. International Arab Journal of e-Technology, 4, 25–31.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17, 460–469.
Ozudogru, F., & Hismanoglu, M. (2016). Views of freshmen students on foreign language courses delivered via e-learning. Turkish Online Journal of Distance Education, 17, 31–47.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40.
Raab, R. T., Ellis, W. W., & Abdon, B. R. (2002). Multisectoral partnerships in e-learning: A potential force for improved human capital development in the Asia Pacific. Internet and Higher Education, 4, 217–229.
Roca, J. C., Chiu, C. M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the Technology Acceptance Model. International Journal of Human-Computer Studies, 64, 683–696.
Schluep, S., Ravasio, P., & Schär, S. G. (2003). Implementing learning content management. In M. Rauterberg, M. Menozzi, & J. Wesson (Eds.), Human-Computer Interaction: INTERACT'03 (pp. 884–887). Amsterdam: IOS Press.
Selim, H. M. (2007). Critical success factors for e-learning acceptance: Confirmatory factor models. Computers & Education, 49, 396–413.
Siau, K., Sheng, H., & Nah, F. F. H. (2006). Use of a classroom response system to enhance classroom interactivity. IEEE Transactions on Education, 49, 398–403.
Stodnick, M., & Rogers, P. (2008). Using SERVQUAL to measure the quality of the classroom experience. Decision Sciences Journal of Innovative Education, 6, 115–133.
Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22, 306–331.
Teo, C. B., & Gay, R. K. L. (2006). A knowledge-driven model to personalize e-learning. Journal on Educational Resources in Computing (JERIC), 6, 3.
Udo, G. J., Bagchi, K. K., & Kirs, P. J. (2011). Using SERVQUAL to assess the quality of e-learning experience. Computers in Human Behavior, 27, 1272–1283.
Udo, G. J., & Marquis, G. P. (2002). Factors affecting e-commerce web site effectiveness. Journal of Computer Information Systems, 42, 10–16.
Van Iwaarden, J., Van Der Wiele, T., Ball, L., & Millen, R. (2004). Perceptions about the quality of web sites: A survey amongst students at Northeastern University and Erasmus University. Information & Management, 41, 947–959.
Volery, T., & Lord, D. (2000). Critical success factors in online education. The International Journal of Educational Management, 14, 216–223.
Wang, J., Agrawala, M., & Cohen, M. F. (2007, July). Soft scissors: An interactive tool for realtime high quality matting. ACM Transactions on Graphics, 26(3), Article 9. https://doi.org/10.1145/1276377.1276389
Wannatawee, P., Alhammad, M., & Gulliver, S. R. (2013). Technology acceptance and care self-management: Consideration in context of chronic care management. In Handbook of Research on Patient Safety and Quality Care through Health Informatics (p. 295).
Yang, Z., & Liu, Q. (2007). Research and development of web-based virtual online classroom. Computers and Education, 48, 171–184.
Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-learning and enabling technology. Information Systems Frontiers, 5, 207–218.
Zhang, X., & Prybutok, V. R. (2005). A consumer perspective of e-service quality. IEEE Transactions on Engineering Management, 52, 461–477.
