The study reported here concerns the development and predictive validation of an instrument
intended to predict the achievement outcomes of DE/online learning. A 38-item
questionnaire was developed and administered to 167 students who were about to embark
on an online course. Factor analysis indicated a four-factor solution, interpreted as “general
beliefs about DE,” “confidence in prerequisite skills,” “self-direction and initiative” and
“desire for interaction.” Using multiple regression we found that two of these factors
predicted achievement performance (i.e., Cumulative Course Grade). Comparisons of
pretest and posttest administrations of the questionnaire revealed that some changes in
opinion occurred between the beginning and the end of the course. Also, categories of
demographic characteristics were compared on the four factors. The overall results suggest
that this instrument has some predictive validity in terms of achievement, but that
Cumulative Grade Point Average (i.e., the university’s record of overall achievement) is a
much better predictor.
Introduction
In the educational community, there has been a longstanding and persistent
impression that distance education (DE) is not of the same quality as tra-
ditional classroom-based education. While some have pronounced this on
scant evidence (e.g., Phipps & Merisotis, 1999; Russell, 1999), it is certainly
true that from the days of correspondence education onward, attrition has been
a problem, and a great deal of research (e.g., Bernard & Amundsen, 1989;
Morgan & Tam, 1999; Sweet, 1986) has been conducted to discover its
underlying causes. It has been suggested, among other things, that differences
in learning style and personality characteristics, the isolation felt by distance
*Corresponding author: Centre for the Study of Learning and Performance, Concordia
University, 1455 de Maisonneuve Blvd W., Montreal, Quebec H3G 1M8, Canada. Email:
bernard@education.concordia.ca
ISSN 0158-7919 (print); 1475-0198 (online)/04/010031-17
2004 Open and Distance Learning Association of Australia, Inc.
DOI: 10.1080/0158791042000212440
32 R. M. Bernard et al.
Predictive Validity
Predictive validity is the correlation of the test with an external criterion sepa-
rated by a time lapse between measurements. For example, the Scholastic
Aptitude Test (SAT) or the Graduate Record Exam (GRE) have predictive
validity to the extent they predict future student performance in terms of grade
point average at university (undergraduate and graduate, respectively). (Abrami,
Cholmsky, & Gordon, 2001, p. 41)
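The time-lagged correlation described in this definition can be sketched in a few lines of Python. The scores below are invented for illustration (hypothetical admissions-test scores and GPAs observed later), not data from this study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical data: an aptitude test administered first, and the grade
# point average observed after a time lapse. The predictive-validity
# coefficient is simply the correlation between the two.
test_scores = [520, 610, 450, 700, 580, 630, 490, 560]
later_gpa   = [2.9, 3.4, 2.5, 3.9, 3.1, 3.6, 2.7, 3.0]

r = pearson_r(test_scores, later_gpa)
print(f"predictive-validity coefficient r = {r:.3f}")
```

The closer |r| is to 1, the better the earlier measure forecasts the later criterion; squaring it gives the proportion of shared variance.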
It is nearly axiomatic that the worth of an instrument of the kind suggested here
can be judged only in relation to the accuracy of the predictions it provides
about the future behaviors of online learners. Without this link we are in the
dark as to what counts most and what counts least in becoming a successful
online learner. So the most fundamental question being asked in the current
study is: “Do items cluster around the four areas of competence previously
described, and if they do, what can they tell us about how students will perform
in an online course?”
Research Questions
The following research questions guided the stages of the current study:
Question 1. Is there a coherent factor structure that underlies the items in the
questionnaire?
Question 2. Does the factor structure predict achievement performance (Course
Grade) in the online course?
Question 3. Do online students say the same things about online learning after
the course (posttest) that they said before the course began (pretest)?
Question 4. Do the factors identified differ across the demographic characteris-
tics of the sample? (i.e., gender, prior experience with online learning, number
of hours spent on educational applications of computing).
Method
Participants and Research Site
Students who were enrolled in a Web-based undergraduate course at Concor-
dia University in Montreal, Canada, participated in the study. The course was
titled “Problem Solving and Academic Strategies” and was originally designed
for students who had been in academic jeopardy (“fail standing”), and had
been readmitted, but was later opened to any student as a three-credit elective.
This sample contains a mix of “fail standing” students and “regular” students.
Procedure
Students enrolled in the fall 2002, winter 2003, and summer 2003 semesters
were briefed by the instructor about the study during their orientation session
and were asked to participate by completing a paper-based questionnaire and giving
their consent by signing an officially designated consent form. The
consent form described the purpose of the study, explained the procedures that
would be followed, and provided space for the student to print and sign his/her
name. Moreover, students who gave their consent also agreed to allow their
Grade Point Average (GPA) and course scores to be used by the investigators.
Students were told that their participation was completely voluntary and
confidential, and that their decision to participate or not would have no effect
on their grade in the course. After describing the purpose of the study, the
instructor left the room to avoid bias, while teaching assistants distributed
consent forms and questionnaires. Those students who chose to participate
completed the consent form and questionnaire and identified themselves by
including the last four digits of their student identification number, which was
used to match pretest and posttest data. Completed consent forms and ques-
tionnaires were collected by the teaching assistants and placed in sealed
envelopes and were not opened until after the course was completed and the
Predicting Online Learning Achievement 35

grades assigned. Students who chose not to participate were free to leave the
room while the instrument was being administered. When all of the consent
forms and questionnaires had been collected, the instructor returned and
continued with the orientation session.
Toward the end of the fall 2002, winter 2003, and summer 2003 semesters,
students were invited to complete an online posttest questionnaire and again
identified themselves by including the last four digits of their student
identification number. Only 63 of the original group of 167 students chose to
submit a posttest. The breakdown of participant demographics is shown in
Table 1.

Table 1. Demographic characteristics of the sample

Characteristic                               n      %
Gender
  Females                                   83   49.7
  Males                                     84   50.3
Age (in years)^a
  18–22                                    103   62.0
  23–27                                     53   31.7
  28–32                                      4    2.4
  ≥ 33                                       6    3.6
Number of DE courses taken previously
  0                                         84   50.6
  1                                         47   28.0
  2                                         25   14.9
  ≥ 3                                       11    6.5
Hours of educational computing per week
  < 1                                       32   19.2
  2–5                                       79   47.3
  6–10                                      39   23.4
  > 10                                      17   10.2

^a Variable totals are the result of missing demographic data.
Research Design
The study was a one-group correlational pretest–posttest design (Campbell &
Stanley, 1963). This design is inappropriate for drawing causal inferences from
the data because of the lack of a comparison condition, but perfectly appropri-
ate for investigating relationships.
Research Instrument
In the course of a previous research study, a large volume of the DE research
literature was reviewed. The needs of the DE learner emerged as one of many
central themes, and in more recent works the changing character of these needs
for successful online learning became apparent.
The themes we explored were study practices (Bernt & Bugbee, 1993);
learning styles (Coggins, 1988); need for interaction (e.g., Abrami & Bures,
1996; Fulford & Zhang, 1993); motivation (e.g., Bures et al., 2000; Bures,
Amundsen, & Abrami, 2002); and preferred instructional methods (e.g.,
Bernard & Naidu, 1992). McVay (2001) developed and pilot-tested a 13-item
instrument called the “Readiness for Online Learning Questionnaire”. We felt
that this instrument did not represent a broad enough spectrum of themes from
the DE literature (e.g., general beliefs about DE) and so we developed an
additional 25 items, written in the same style as the McVay items. In total, the
questionnaire we developed contained 38 items including the 13 McVay items.
The items were in the form of a 4-point Likert scale ranging from “strongly
agree” to “strongly disagree.” Demographic information was collected on the
pretest only.
Other Measures
Two additional measures were used in this study: (a) Cumulative GPA
(CGPA) and (b) Cumulative Course Grade (CCG). CGPA is on a 0–4.3 scale
and CCG was represented as a percentage.
Statistical Analysis
Factor analysis, multiple regression, Pearson Product Moment Correlations
and t tests were used to evaluate the four research questions. The statistical
procedures employed in each case are explained in the next section.
Results
The results of this study are presented by research question.
Question 1. Is there a coherent factor structure that underlies the items in the
questionnaire?
In order to answer this question the 38 scale statements were factor analyzed
using varimax rotation. Since this question was clearly exploratory, a reliability
coefficient was not calculated for the entire set of questions. Reliability
estimates were calculated after factor analyzing the questionnaire. Several
indicators were considered in order to judge the factorability of the matrix.
The case-by-item ratio was 4.77, a little under the standard of 5 suggested by
Tabachnick and Fidell (1996). However, other indices proved satisfactory. The
Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was 0.70, and
Bartlett's test of sphericity was significant at p < 0.001 (Tabachnick & Fidell,
1996). Based on these tests, we proceeded with the factor analysis.
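As a rough sketch of the factorability checks just described, the following Python computes both the KMO measure and Bartlett's test of sphericity from a raw score matrix using only NumPy/SciPy. The simulated Likert-style responses are hypothetical stand-ins, not the study's data; packages such as `factor_analyzer` provide comparable routines:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test that the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations come from the inverse correlation matrix.
    A = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    off = ~np.eye(R.shape[0], dtype=bool)
    return (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (A[off] ** 2).sum())

# Hypothetical responses: 200 cases x 6 items sharing one latent component,
# so the matrix should be clearly factorable.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 6))

chi2, p_value = bartlett_sphericity(items)
print(f"KMO = {kmo(items):.2f}, Bartlett chi2 = {chi2:.1f}, p = {p_value:.3g}")
```

A KMO near or above 0.70 and a significant Bartlett result, as reported above, indicate enough shared variance among items to justify factoring.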
The rotated factor matrix yielded four factors with eigenvalues greater than
2.0 (ranging from 5.34 to 2.15). These four factors accounted for 48.88% of
the overall variance in the measure. We examined factor loadings on each of
these four factors that exceeded 0.40 and, based on the statements related to
each of the highest loading items, named the factors. No items loaded above
0.40 on more than one factor. On two items the differential loading across
factors was less than 0.20. Factor 1 comprised items related to “confidence in
prerequisite skills.” Factor 2 comprised items related to “general beliefs about
online learning.” Factor 3 comprised items related to “self-direction and
initiative” or “self-management of learning.” Factor 4 comprised items related
to “desire for interaction with others.” Table 2 provides a complete listing of
the item loadings on each factor (high loading items are in bold). As we
expected, these groups of items represent major recurring themes in the DE
and online learning literature.
Reliability coefficients were then calculated for each set of items. Factor 1
had a Cronbach’s alpha of 0.79 (8 items) with Corrected Item-Total Correla-
tions ranging from 0.35 to 0.63. Factor 2 had an alpha of 0.82 (8 items) with
Corrected Item-Total Correlations ranging from 0.41 to 0.65. Factor 3 had an
alpha of 0.81 (4 items) with Corrected Item-Total Correlations ranging from
0.53 to 0.70. Factor 4 had an alpha of 0.67 (5 items), slightly under the
standard of 0.70 suggested as indicating a reliable set of sub-measures with
Corrected Item-Total Correlations ranging from 0.26 to 0.53. Overall, these
subscales were considered to be reliable sets of items.
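The two reliability statistics reported for each factor can be computed as follows. This is a minimal sketch over a hypothetical five-item subscale (simulated 1–4 Likert scores with a common component), not the study's items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n cases x k items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the *remaining* items."""
    items = np.asarray(items, float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical subscale: 150 cases x 5 Likert items (1-4) sharing a common core.
rng = np.random.default_rng(1)
core = rng.normal(size=(150, 1))
subscale = np.clip(np.round(2.5 + core + rng.normal(scale=0.7, size=(150, 5))), 1, 4)

print(f"alpha = {cronbach_alpha(subscale):.2f}")
print("corrected item-total r:", np.round(corrected_item_total(subscale), 2))
```

The "corrected" in corrected item-total correlation refers to excluding the item itself from the total, which avoids inflating the coefficient.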
Question 2. Does the factor structure predict achievement performance (Course
Grade) in the online course?
Having established identifiable groupings of items, we proceeded to attempt to
determine whether these groupings predicted achievement outcomes as mea-
sured by the CCG. This grade is a composite of seven assignments that were
accomplished during the term, ranging from the simple creation of a study
schedule to a more complex essay writing assignment.
The items related to each factor were summed and entered as predictors into
hierarchical multiple regression, with Course Grade serving as the dependent
variable. The results were significant, F(4, 132) = 4.01, p < 0.01. In total, the
predictors accounted for 8.0% of the variance in Course Grade. For the sake
of brevity, the factors will be called “beliefs,” “skills,” “self-direction,” and
“interaction.” Table 3 shows the results of the multiple regression. Two of the
factors were significant predictors of course achievement: “self-direction” and
“beliefs.” “Interaction” came close to significance. “Self-direction” and
“beliefs” were positive predictors and “interaction” was a negative predictor.
The factor “skills” was not included in the final model, possibly because
deficits in these basic skills were only relevant as students were becoming
accustomed to working and interacting online. By the time the final Course
Grades were compiled, these skills were mastered and therefore no longer
relevant.
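The regression reported here can be sketched in NumPy/SciPy. For simplicity this example fits all four factor-sum predictors in a single step (rather than the hierarchical entry used in the study) and recovers the R² and overall F statistic; the factor scores and grades below are simulated, hypothetical data:

```python
import numpy as np
from scipy import stats

def ols_fit(X, y):
    """Ordinary least squares: coefficients, R^2, and the overall F test."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    k = X.shape[1] - 1                          # number of predictors
    df_res = len(y) - k - 1
    f = (r2 / k) / ((1 - r2) / df_res)
    return beta, r2, f, stats.f.sf(f, k, df_res)

# Hypothetical factor-sum scores predicting a course grade (percentage):
# small positive weights on "beliefs" and "self-direction", a small negative
# weight on "interaction", and large unexplained noise.
rng = np.random.default_rng(2)
factors = rng.normal(size=(137, 4))
grade = 70 + 1.5 * factors[:, 0] + 2 * factors[:, 2] - factors[:, 3] \
        + rng.normal(scale=8, size=137)

beta, r2, f, p = ols_fit(factors, grade)
print(f"R^2 = {r2:.3f}, F(4, {137 - 5}) = {f:.2f}, p = {p:.3g}")
```

With weights of this size against noise this large, R² lands in the single digits of percent, mirroring the modest 8% of variance explained in the study.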
Though it was not a significant predictor, why was the factor labeled "interaction" a negative predictor?
Table 2 (excerpt). Item loadings on the four factors

Item                                                           F1       F2       F3       F4
22. I feel that face-to-face contact with my instructor
    is necessary for learning to occur                       −0.153    0.343    0.362    0.433
23. I can discuss with other students during Internet
    activities outside of class                               0.359    0.120   −0.214    0.663
24. I can work in a group during Internet activities
    outside of class                                          0.220    0.148    0.000    0.767
25. I can collaborate with other students during Internet
    activities outside of class                               0.156    0.108   −0.205    0.795

Table 4. t tests between "fail standing" students and "regular" students on three factors
Pearson Product Moment Correlations between the four factors and the two
achievement measures (Course Grade and GPA) were calculated and are
presented in Table 5. As expected, the largest correlation was between Course
Grade and GPA (i.e., they share 46% of their variance). It is because of this
anticipated high correlation that GPA was not included in the predictive
multiple regression model in the first place. It is likely that all of the factors
would have been eclipsed by GPA.
The pattern of correlations between Course Grade and the factors follows
that of the multiple regression. Notice that the GPA by factor correlations
mirror this pattern, although only “self-direction” and GPA are significantly
correlated. This finding suggests that the factor “self-direction” applies more
broadly to other kinds of courses in various subject areas, since GPA is a
general rather than specific measure of achievement. Two interesting
significant inter-factor correlations emerged. “Beliefs” and “skills” were
significantly negatively correlated, as were “skills” and “interaction.” Taken
together, these findings may suggest that those who perceive their prerequisite
skills to be higher may be less concerned with the nature of online learning and
with the aspects of interactivity that were measured by the "interaction" factor.
Overall, this study of the predictive validity of the questionnaire suggests the
following: (a) three of the four factors do predict learning achievement as
measured by Course Grades, but together they account for a small amount of
variance (< 9%) in Course Grades; (b) CGPA, the cumulative measure of
university performance, is the best "predictor" of Course Grade, although it
was not included in the multiple regression model because of its large
correlation with the dependent measure; and (c) the factor "self-direction" is
positively correlated with both Course Grade and GPA, and emerged as the
best predictor of the three significant factors; it is also uncorrelated with the
other factors.

Table 5 notes: *p < 0.05 (two-tailed). ^a N values ranged from 147 to 167,
depending on the amount of missing data.

Table 6. Difference between "pretest" and "posttest" scores across the factors.
^a Negative values here reflect a change from a higher mean on the pretest to
a lower mean on the posttest; positive values are the reverse.

This final point suggests that the prior opinions of students as to
their self-management, self-direction and initiative as learners are the best set
of items for predicting academic success in an online course. But this applies
to achievement in general (i.e., GPA) as well as achievement in this online
course (i.e., Course Grade).
Question 3. Do online students say the same things about online learning after
the course that they said before the course began?
We examined the four factors as they changed from pretest to posttest. Since
students in this course only meet face to face in an orientation session at the
beginning of the term, the posttest had to be administered online. Because of
this, only 63 students responded to it. Dependent t tests were conducted
between the pretest and posttest factors. Table 6 shows the results of this
analysis.
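The dependent (paired) t tests used here compare each student's pretest and posttest scores on the same factor. A minimal sketch, using simulated pretest/posttest factor sums for 63 hypothetical students and cross-checked against SciPy's built-in routine:

```python
import numpy as np
from scipy import stats

def dependent_t(pre, post):
    """Paired t test computed directly from the pretest-posttest differences."""
    diff = np.asarray(post, float) - np.asarray(pre, float)
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
    p = 2 * stats.t.sf(abs(t), len(diff) - 1)
    return t, p

# Hypothetical factor-sum scores for 63 students who completed both
# administrations, with a modest systematic shift from pretest to posttest.
rng = np.random.default_rng(3)
pre = rng.normal(14.0, 2.5, size=63)
post = pre - 0.8 + rng.normal(scale=1.5, size=63)

t, p = dependent_t(pre, post)
print(f"t(62) = {t:.2f}, p = {p:.3g}")

# Cross-check against SciPy's implementation of the same test.
t_ref, p_ref = stats.ttest_rel(post, pre)
```

Pairing on the same students removes between-student variability from the error term, which is why only the difference scores enter the statistic.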
It is clear from this analysis that “beliefs about DE” did not change from
pretest to posttest. All of the other factors did, but in different directions.
Surprisingly, students at the end of term were significantly less positive about
their “comfort with basic skills” than at the beginning of term. It is possible
that students became less positive about the requirements of online learning as
difficulties in actually applying these skills in an online learning setting became
evident, although apparently this did not conflict with the Course Grade they
received (see Table 2). Students’ responses to the “self-direction” items, on the
other hand, changed for the better between the pretest and the posttest. Is it
possible that DE/online learning contributes to one’s perception of oneself as
a more independent and self-directed learner? That is certainly one interpret-
ation of these results. For the factor “interaction,” the means between pretest
and posttest changed in a negative direction (although the sign is positive).
Two interpretations of this occurred to us: (a) the lower posttest mean might
reflect disappointment at the level of interactivity that can be achieved online;
or (b) the lower posttest mean may indicate that the students’ need for
interactivity was not as critical to their learning as they had first estimated.
Question 4. Do the identified factors differ across the demographic characteristics
of the sample (i.e., gender, prior experience with online learning, number of
hours spent on educational applications of computing)?

Table 7 note: *p < 0.05.
Having gathered demographic information on the students, we sought to
determine whether differences in: (a) gender, (b) number of DE/online courses
taken previously, and (c) number of hours spent per week in educational
computing affected the means on the four factors. Gender was already coded
1 and 2, but the two other variables were open-ended ordinal scales. They were
collapsed into dichotomous levels, 1 and 2. So, for number of DE/online
courses, 1 = 0 and 2 = one or more. For number of hours, 1 = less than 1 hr
and 2 = 1 or more hours. The t tests between these dichotomous levels are
shown in Table 7.
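The recoding and group comparison described above can be sketched as follows. The prior-course counts and factor scores are simulated, hypothetical values, and SciPy's independent-samples t test stands in for the comparisons reported in Table 7:

```python
import numpy as np
from scipy import stats

def dichotomize(values, threshold):
    """Collapse an ordinal variable into levels 1 (below) and 2 (at/above)."""
    return np.where(np.asarray(values) >= threshold, 2, 1)

# Hypothetical data: number of prior DE courses taken and a factor-sum score.
rng = np.random.default_rng(4)
courses_taken = rng.integers(0, 5, size=150)      # 0..4 prior courses
beliefs_score = rng.normal(19.0, 3.0, size=150)

group = dichotomize(courses_taken, 1)             # 1 = none, 2 = one or more
t, p = stats.ttest_ind(beliefs_score[group == 1], beliefs_score[group == 2])
print(f"t = {t:.2f}, p = {p:.3f}")
```

Dichotomizing sacrifices ordinal information but yields two groups large enough for a simple t test, which is presumably why the open-ended scales were collapsed.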
For gender, only the factor "interaction" was significant, with males (n = 78,
M = 12.14) indicating a greater desire for interactivity than females (n = 76,
M = 13.15). Students who had previously taken at least one online DE course
(n = 76, M = 18.20) had more positive "beliefs" about DE than students who
had never taken an online DE course (n = 70, M = 19.46). Students who used
computers in educational endeavors more frequently were more positive in
terms of both "beliefs" and "skills" (n = 51, M = 17.65; n = 55, M = 11.25,
respectively) than students who used computers less frequently (n = 96,
M = 19.44; n = 110, M = 12.73, respectively). Interestingly, "self-direction and
initiative” did not figure into any of these comparisons.
Summary of Findings
The following four summary conclusions seem warranted:
(1) There is a coherent four-factor structure that underlies the questionnaire
items: (a) “beliefs about DE”; (b) “confidence about basic prerequisite
skills”; (c) “self-direction and initiative”; and (d) “desire for interaction
with the instructor and other students.”
(2) Two of the four factors significantly predict achievement (Course Grade):
"beliefs about DE" and "self-direction and initiative" are both positive predictors.
Discussion
Distance education has changed remarkably over the last century. Taylor
(2001) characterizes five generations of DE, largely defined with regard to the
media, and thereby the range of instructional options, available at the time of
their prevalence. The progression that Taylor describes moves along a rough
continuum of increased flexibility, interactivity, materials delivery and access,
beginning in the early years of DE when it was called correspondence edu-
cation (i.e., the media were print and the post office), through broadcast radio
and television and on to current manifestations of interactive multi-media, the
Internet, access to Web-based resources, computer-mediated communication,
and most recently campus portals providing access to the complete range of
university services and facilities at a distance. While most see this as
"progress" toward the original raison d'être of DE, namely "anytime and
anywhere" access for those who would otherwise not have access to education,
each iteration has demanded more of students in terms of prerequisite skills,
abilities and attitudes, to the point that Smith (1999) states: "With an
increasingly diverse range of pedagogical methods being employed by
academics, little that students have previously learned in traditional
classrooms has prepared them for the era of online learning" (p. 1).
This study deals with the prerequisites of online learning in an attempt to
develop a questionnaire, following from the work of McVay (2001) and Smith
et al. (2003), that predicts achievement success in online learning. There are
several major findings.
The first is that the original literature-based dimensions of DE were vali-
dated through factor analysis of the 38 items on the questionnaire. This does
not say, of course, that these are the only dimensions that might be identified.
Four predominant factors were found within the questionnaire: Factor 1
comprised items related to “general beliefs about online learning.” Factor 2
comprised items related to “confidence in prerequisite skills.” Factor 3 com-
prised items related to “self-direction and initiative” or “self-management of
learning.” Factor 4 comprised items related to “desire for interaction with
others.” Interestingly, not all of the McVay questions ended up in the four-
factor solution in this study. Items 4 (I am willing to devote 8–10 hours per
week for my studies), 5 (I feel that online learning is of at least equal quality
Why would men express a greater desire for interaction with others than women? This is not an artifact of sampling because there was
an almost equal balance of men and women. It is, however, likely that these
results are an artifact of the effects of a large sample size on t test results; even
very small differences can result in significant findings with large samples.
This research study suggests that it is possible to develop a questionnaire
that predicts how students will perform in an online course. However, CGPA
remains the best predictor of Course Grade, the measure of achievement in this
study. Interestingly, the factor “self-direction and initiative” was significantly
correlated with CGPA, suggesting that this factor measures desirable charac-
teristics of achievement success more broadly than just this online course.
Contrary to the DE literature, “desire for interaction”—long thought to be a
facilitative characteristic of modern DE applications—predicted negative rather
than positive achievement. This is, indeed, a puzzling outcome of this study
which warrants further research attention.
Notes on Contributors
Robert M. Bernard is a Professor of Education at Concordia University and a
member of the Centre for the Study of Learning and Performance
(CSLP), specializing in instructional technology, distance education and
online teaching and learning, electronic and online publishing, research
design, statistical analysis and research synthesis (meta-analysis).
Aaron Brauer is an Extended Term Appointment (ETA) in the Academic
Technology Group of the Faculty of Arts and Science at Concordia
University. He is also a doctoral student in Educational Technology. His
areas of expertise are educational computing, distance education and
Internet-based applications of online learning.
Philip C. Abrami is a Professor of Education at Concordia University and the
Director of the Centre for the Study of Learning and Performance
(CSLP). His areas of expertise include instructional technology, social
psychology of education, postsecondary instruction and research synthesis
(meta-analysis).
Mike Surkes is a doctoral student in Educational Technology and a Research
Assistant in the Centre for the Study of Learning and Performance. His
area of interest is investigation of the characteristics of successful team-
work, leadership of teams and the process of collaboration.
Acknowledgements
This study was supported by a grant to Abrami and Bernard from the Social
Sciences and Humanities Research Council of Canada.
References
Abrami, P. C., & Bures, E. M. (1996). Computer-supported collaborative learning and
distance education. American Journal of Distance Education, 10(2), 37–42.
Abrami, P. C., Cholmsky, P., & Gordon, R. (2001). Statistical analysis for the social sciences:
An interactive approach. Boston, MA: Allyn & Bacon (includes CD-ROM).