Distance Education, Vol. 25, No. 1, May 2004

The Development of a Questionnaire for Predicting Online Learning Achievement

Robert M. Bernard*, Aaron Brauer, Philip C. Abrami and Mike Surkes
Concordia University, Canada

*Corresponding author: Centre for the Study of Learning and Performance, Concordia University, 1455 de Maisonneuve Blvd W., Montreal, Quebec H3G 1M8, Canada. Email: bernard@education.concordia.ca
ISSN 0158-7919 (print); 1475-0198 (online)/04/010031-17
© 2004 Open and Distance Learning Association of Australia, Inc.
DOI: 10.1080/0158791042000212440

The study reported here concerns the development and predictive validation of an instrument intended to predict achievement in DE/online learning. A 38-item questionnaire was developed and administered to 167 students who were about to embark on an online course. Factor analysis indicated a four-factor solution, interpreted as "general beliefs about DE," "confidence in prerequisite skills," "self-direction and initiative," and "desire for interaction." Using multiple regression, we found that two of these factors predicted achievement performance (i.e., Cumulative Course Grade). Comparisons of pretest and posttest administrations of the questionnaire revealed that some changes in opinion occurred between the beginning and the end of the course. In addition, categories of demographic characteristics were compared on the four factors. The overall results suggest that the instrument has some predictive validity with respect to achievement, but that Cumulative Grade Point Average (i.e., the university's record of overall achievement) is a much better predictor.

Introduction
In the educational community, there has been a longstanding and persistent impression that distance education (DE) is not of the same quality as traditional classroom-based education. While some have pronounced this on scant evidence (e.g., Phipps & Merisotis, 1999; Russell, 1999), it is certainly true that from the days of correspondence education onward attrition has been a problem, and a great deal of research (e.g., Bernard & Amundsen, 1989; Morgan & Tam, 1999; Sweet, 1986) has been conducted to discover its underlying causes. It has been suggested, among other things, that differences in learning style and personality characteristics, the isolation felt by distance learners, and a lack of self-management and independent learning skills may account for higher attrition in DE than is characteristic of traditional classrooms. Kember (1995) proposed and tested a causal model of student progress (i.e., persistence) based on four primary constructs: social integration, academic integration, external attribution, and academic incompatibility. His results appeared to provide empirical support for the model, but a subsequent replication by Woodley, de Lange, and Tanewski (2001) casts doubt on the model's ability to successfully predict student success.
Recently, attempts have been made to determine whether there is a research-based answer to the question of DE and classroom differences. Bernard et al. (2003) conducted a meta-analysis of the DE literature since 1985 which addressed the following questions: (a) overall, is interactive DE as effective, in terms of student achievement, student attitudes and retention (i.e., the opposite of attrition), as its classroom-based counterparts, and (b) what conditions contribute to more effective DE as compared to classroom instruction? In total, 232 studies containing 599 independent findings were analyzed. The results indicated a small, near zero, but significantly positive mean effect size for interactive DE over traditional classroom instruction on achievement, and a small significant negative effect on combined attitude outcomes. A significantly negative effect was also found for retention. All findings were significantly heterogeneous, and the analysis of retention data failed to uncover methodological, pedagogical or media study features that could help in the search for answers about student dropout in DE. Differences were found for "synchronous" and "asynchronous" DE patterns.
In 2004, most educational institutions (and even some corporations) are making Internet-based courses, and even whole degree programs, available entirely online. Can the same claims be made for this new type of DE course as were established in the meta-analysis just described? Only time will tell—after a sufficient number of studies of online learning have been conducted. But it seems likely that the skills and attitudes that are necessary for success in online courses have changed, and are sufficiently different from those of classroom-based students to warrant research interest (Smith, 1999). Recognizing this, McVay (2001) developed and validated a 13-item questionnaire to assess students' "readiness for online learning." The items were designed to address student comfort with some of the basic skills and components of online learning and to assess their independence as learners. In a factor analytic study of this instrument, Smith, Murphy, and Mahoney (2003) confirmed these two clusters of items (i.e., "comfort with e-learning" and "self-management of learning"). However, their study did not involve actual online learners (i.e., the students were in classroom-based courses in the United States and Australia), and the researchers did not relate these factors to any outcome measure such as course achievement or course evaluation data, so the question remains as to whether this instrument, or any instrument, can predict online learning success. As a final note, Smith et al. called for studies of the instrument's predictive validity.
Other Potential Factors
The research reported here is a study of the predictive validity of an instrument that was designed to assess readiness for online learning, similar to the one McVay developed. However, rather than simply using the factorized McVay instrument, we began with the assumption that there were additional measurable characteristics of novice online learners that should be considered in such an instrument. In addition to "comfort with e-learning" and "self-management of learning," we suggest that initial beliefs about the nature and effectiveness of online learning (Bures, Abrami, & Amundsen, 2000) and the desire for interactivity with an instructor and other students could (and should) be measured by an instrument of this sort. The literature on student attrition suggests that interaction will reduce the "feelings of isolation" that DE students have so often reported (Moore & Thompson, 1990). In a more general sense, we posit that "frame of mind," the beliefs about any important endeavor that one is about to undertake, can significantly influence the outcomes of that experience. With that in mind, we developed a bank of 38 items (including McVay's original 13 items) designed to tap into four dimensions of readiness: (a) "readiness of online skills" (e.g., computing, Internet, communication, written); (b) "readiness of self-management of learning and learning initiative" (e.g., organizational, time-management); (c) "readiness of beliefs about DE/online learning" (e.g., DE compared to classroom instruction, effectiveness of DE as a means of achieving certain goals); and (d) "desire for interaction with an instructor and/or other students" (e.g., timely feedback on assignments, support from the instructor and classmates, collaboration on projects). The first part of this study, then, is to determine whether these four hypothetical factors emerge from the items on the questionnaire.

Predictive Validity
Predictive validity is the correlation of the test with an external criterion separated by a time lapse between measurements. For example, the Scholastic Aptitude Test (SAT) or the Graduate Record Exam (GRE) have predictive validity to the extent they predict future student performance in terms of grade point average at university (undergraduate and graduate, respectively). (Abrami, Cholmsky, & Gordon, 2001, p. 41)

It is nearly axiomatic that the worth of an instrument of the kind suggested here can be judged only in relation to the accuracy of the predictions it provides about the future behaviors of online learners. Without this link we are in the dark as to what counts most and what counts least in becoming a successful online learner. So the most fundamental question being asked in the current study is: "Do items cluster around the four areas of competence previously described, and if they do, what can they tell us about how students will perform in an online course?"
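In operational terms, the predictive validity coefficient is simply the correlation between the earlier measure and the later criterion. A minimal sketch in Python, with made-up illustrative numbers (not study data):

    # Predictive validity as the Pearson correlation between a pretest
    # questionnaire score and a criterion measured later (illustrative data).
    from scipy.stats import pearsonr

    pretest_scores = [14, 18, 11, 20, 16, 13, 19, 15]  # e.g., summed factor scores
    later_grades = [72, 81, 65, 88, 74, 70, 85, 77]    # e.g., final course grades (%)

    r, p = pearsonr(pretest_scores, later_grades)
    print(f"predictive validity: r = {r:.2f}, p = {p:.3f}")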

Research Questions
The following research questions guided the stages of the current study:
Question 1. Is there a coherent factor structure that underlies the items in the
questionnaire?
Question 2. Does the factor structure predict achievement performance (Course
Grade) in the online course?
Question 3. Do online students say the same things about online learning after
the course (posttest) that they said before the course began (pretest)?
Question 4. Do the factors identified differ across the demographic characteristics of the sample (i.e., gender, prior experience with online learning, number of hours spent on educational applications of computing)?

Method
Participants and Research Site
Students who were enrolled in a Web-based undergraduate course at Concordia University in Montreal, Canada, participated in the study. The course was titled "Problem Solving and Academic Strategies" and was originally designed for students who had been in academic jeopardy ("fail standing") and had been readmitted, but was later opened to any student as a three-credit elective. The sample therefore contains a mix of "fail standing" and "regular" students.

Procedure
Students enrolled in the fall 2002, winter 2003, and summer 2003 semesters were briefed by the instructor about the study during their orientation session and were asked to participate by completing a paper-based questionnaire; those who agreed gave their consent by signing an officially designated consent form. The consent form described the purpose of the study, explained the procedures that would be followed, and provided space for the student to print and sign his/her name. Students who gave their consent also agreed to allow their Grade Point Average (GPA) and course scores to be used by the investigators. Students were told that their participation was completely voluntary and confidential, and that their decision to participate or not would have no effect on their grade in the course. After describing the purpose of the study, the instructor left the room to avoid bias, while teaching assistants distributed consent forms and questionnaires. Those students who chose to participate completed the consent form and questionnaire and identified themselves by including the last four digits of their student identification number, which was used to match pretest and posttest data.
Table 1. Frequencies and relative percentages of demographic characteristics

Demographic characteristic                  Frequency    Relative %
Gender
  Females                                   83           49.7
  Males                                     84           50.3
Age (in years) (a)
  18–22                                     103          62.0
  23–27                                     53           31.7
  28–32                                     4            2.4
  33 or older                               6            3.6
Number of DE courses taken previously
  0                                         84           50.6
  1                                         47           28.0
  2                                         25           14.9
  3 or more                                 11           6.5
Hours of educational computing per week
  Less than 1                               32           19.2
  2–5                                       79           47.3
  6–10                                      39           23.4
  More than 10                              17           10.2

(a) Variable totals differ because of missing demographic data.

Completed consent forms and questionnaires were collected by the teaching assistants, placed in sealed envelopes, and not opened until after the course was completed and the grades assigned. Students who chose not to participate were free to leave the room while the instrument was being administered. When all of the consent forms and questionnaires had been collected, the instructor returned and continued with the orientation session.
Toward the end of the fall 2002, winter 2003, and summer 2003 semesters,
students were invited to complete an online posttest questionnaire and again
identified themselves by including the last four digits of their student
identification number. Only 63 of the original group of 167 students chose to
submit a posttest. The breakdown of participant demographics is shown in
Table 1.

Research Design
The study was a one-group correlational pretest–posttest design (Campbell & Stanley, 1963). This design is inappropriate for drawing causal inferences from the data because of the lack of a comparison condition, but perfectly appropriate for investigating relationships.

Research Instrument
In the course of a previous research study, a large volume of the DE research literature was reviewed. The needs of the DE learner emerged as one of many central themes, and in more recent works the changing character of these needs for successful online learning became apparent.
The themes we explored were study practices (Bernt & Bugbee, 1993);
learning styles (Coggins, 1988); need for interaction (e.g., Abrami & Bures,
1996; Fulford & Zhang, 1993); motivation (e.g., Bures et al., 2000; Bures,
Amundsen, & Abrami, 2002); and preferred instructional methods (e.g.,
Bernard & Naidu, 1992). McVay (2001) developed and pilot-tested a 13-item
instrument called the “Readiness for Online Learning Questionnaire”. We felt
that this instrument did not represent a broad enough spectrum of themes from
the DE literature (e.g., general beliefs about DE) and so we developed an
additional 25 items, written in the same style as the McVay items. In total, the
questionnaire we developed contained 38 items including the 13 McVay items.
The items were in the form of a 4-point Likert scale ranging from “strongly
agree” to “strongly disagree.” Demographic information was collected on the
pretest only.

Other Measures
Two additional measures were used in this study: (a) Cumulative GPA (CGPA) and (b) Cumulative Course Grade (CCG). CGPA is measured on a 0–4.3 scale; CCG is expressed as a percentage.

Statistical Analysis
Factor analysis, multiple regression, Pearson Product Moment Correlations
and t tests were used to evaluate the four research questions. The statistical
procedures employed in each case are explained in the next section.

Results
The results of this study are presented by research question.
Question 1. Is there a coherent factor structure that underlies the items in the questionnaire?
In order to answer this question, the 38 scale statements were factor analyzed using varimax rotation. Since this question was clearly exploratory, a reliability coefficient was not calculated for the entire set of questions; reliability estimates were calculated after factor analyzing the questionnaire. Several indicators were considered in order to judge the factorability of the matrix. The case-to-item ratio was 4.77, a little under the standard of 5 suggested by Tabachnick and Fidell (1996). However, other indices proved to be satisfactory. The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was 0.70 and Bartlett's test of sphericity was significant at p < 0.001 (Tabachnick & Fidell, 1996). Based on these tests, we proceeded with the factor analysis.
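The paper does not name the software used for these checks. As a minimal sketch of the factorability diagnostics, assuming the third-party Python package factor_analyzer and a hypothetical CSV of the 38 pretest item responses:

    # Factorability checks: case-to-item ratio, Bartlett's test, and KMO.
    # File and variable names are illustrative assumptions.
    import pandas as pd
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity, calculate_kmo)

    items = pd.read_csv("questionnaire_pretest.csv")  # 167 rows x 38 items

    n_cases, n_items = items.shape
    print(f"case-to-item ratio: {n_cases / n_items:.2f}")  # 4.77 in the study

    chi2, p = calculate_bartlett_sphericity(items)      # are correlations non-zero?
    kmo_per_item, kmo_overall = calculate_kmo(items)    # sampling adequacy
    print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.4g}); overall KMO = {kmo_overall:.2f}")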
The rotated factor matrix yielded four factors with eigenvalues greater than 2.0 (ranging from 5.34 to 2.15). These four factors accounted for 48.88% of the overall variance in the measure. We examined factor loadings on each of these four factors that exceeded 0.40 and, based on the statements related to each of the highest loading items, named the factors. No items loaded above 0.40 on more than one factor. On two items the differential loading across factors was less than 0.20. Factor 1 comprised items related to "confidence in prerequisite skills." Factor 2 comprised items related to "general beliefs about online learning." Factor 3 comprised items related to "self-direction and initiative" or "self-management of learning." Factor 4 comprised items related to "desire for interaction with others." Table 2 provides a complete listing of the item loadings on each factor (high loading items are in bold). As we expected, these groups of items represent major recurring themes in the DE and online learning literature.
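Continuing the sketch above (same package and `items` DataFrame, both assumptions), the extraction and rotation step might look like this; the printed loading matrix corresponds to the layout of Table 2:

    # Varimax-rotated factor analysis; keep four factors and flag
    # loadings above 0.40, as described in the text.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    fa = FactorAnalyzer(n_factors=4, rotation="varimax")
    fa.fit(items)

    eigenvalues, _ = fa.get_eigenvalues()
    print("first four eigenvalues:", eigenvalues[:4].round(2))  # 5.34 ... 2.15

    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=["F1", "F2", "F3", "F4"])
    print(loadings.where(loadings.abs() > 0.40).round(3))  # the Table 2 view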
Reliability coefficients were then calculated for each set of items. Factor 1 had a Cronbach's alpha of 0.79 (8 items) with Corrected Item-Total Correlations ranging from 0.35 to 0.63. Factor 2 had an alpha of 0.82 (8 items) with Corrected Item-Total Correlations ranging from 0.41 to 0.65. Factor 3 had an alpha of 0.81 (4 items) with Corrected Item-Total Correlations ranging from 0.53 to 0.70. Factor 4 had an alpha of 0.67 (5 items), slightly under the standard of 0.70 suggested as indicating a reliable set of sub-measures, with Corrected Item-Total Correlations ranging from 0.26 to 0.53. Overall, these subscales were considered to be reliable sets of items.
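Both statistics are straightforward to compute directly. A minimal sketch, with hypothetical item column names and the `items` DataFrame assumed above:

    # Cronbach's alpha and corrected item-total correlations for a subscale.
    import pandas as pd

    def cronbach_alpha(subscale: pd.DataFrame) -> float:
        """Alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = subscale.shape[1]
        return k / (k - 1) * (1 - subscale.var(ddof=1).sum()
                              / subscale.sum(axis=1).var(ddof=1))

    def corrected_item_total(subscale: pd.DataFrame) -> pd.Series:
        """Correlation of each item with the sum of the remaining items."""
        return pd.Series({c: subscale[c].corr(subscale.drop(columns=c).sum(axis=1))
                          for c in subscale.columns})

    factor3 = items[["q17", "q18", "q19", "q20"]]  # hypothetical Factor 3 items
    print(round(cronbach_alpha(factor3), 2))       # 0.81 reported for Factor 3
    print(corrected_item_total(factor3).round(2))  # reported range: 0.53 to 0.70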
Question 2. Does the factor structure predict achievement performance (Course
Grade) in the online course?
Having established identifiable groupings of items, we proceeded to attempt to determine whether these groupings predicted achievement outcomes as measured by the CCG. This grade is a composite of seven assignments that were accomplished during the term, ranging from the simple creation of a study schedule to a more complex essay writing assignment.
The items related to each factor were summed and entered as predictors into a hierarchical multiple regression, with Course Grade serving as the dependent variable. The results were significant, F(4, 132) = 4.01, p < 0.01. In total, the predictors accounted for 8.0% of the variance in Course Grade. For the sake of brevity, the factors will be called "beliefs," "skills," "self-direction," and "interaction." Table 3 shows the results of the multiple regression. Two of the factors were significant predictors of course achievement: "self-direction" and "beliefs." "Interaction" came close to significance. "Self-direction" and "beliefs" were positive predictors and "interaction" was a negative predictor. The factor "skills" did not contribute to the final model, possibly because deficits in these basic skills were only relevant while students were becoming accustomed to working and interacting online. By the time the final Course Grades were compiled, these skills had been mastered and were therefore no longer relevant.
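The paper does not name its analysis software. A minimal sketch of this regression step with Python's statsmodels (the DataFrame and column names are hypothetical); z-scoring the variables first makes the fitted coefficients comparable to the standardized betas reported in Table 3:

    # Regress Course Grade on the four summed factor scores.
    # `data` is an assumed DataFrame of numeric columns, one row per student.
    import statsmodels.formula.api as smf

    z = (data - data.mean()) / data.std()  # standardize, so coefficients are betas

    model = smf.ols("course_grade ~ beliefs + skills + self_direction + interaction",
                    data=z).fit()
    print(model.params.round(3))                 # compare with the Table 3 betas
    print(f"R-squared = {model.rsquared:.3f}")   # about 0.08 in the study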

Table 2. Questionnaire items with factor loadings of 0.40 and above (loadings listed for Factors 1–4; high loading items appear in bold in the original)

Eigenvalues: 5.34, 2.95, 2.75, 2.15. Percentage of variance: 19.79, 10.93, 10.18, 7.97.

1. *I am able to easily access the Internet as needed for my studies (0.458, 0.114, -0.102, 0.050)
2. *I am comfortable communicating electronically (0.709, 0.186, 0.008, 0.101)
3. *I am willing to actively communicate with my classmates and instructors electronically (0.572, 0.084, 0.00, 0.030)
4. *I feel that my background and experience will be beneficial to my studies (0.505, 0.067, 0.298, 0.184)
5. *I am comfortable with written communication (0.681, 0.141, 0.011, 0.110)
6. I possess sufficient computer keyboarding skills for doing online work (0.821, -0.121, 0.154, 0.00)
7. I feel comfortable composing text on a computer in an online learning environment (0.762, 0.00, 0.105, 0.00)
8. I feel comfortable communicating online in English (0.735, 0.092, 0.009, 0.106)
9. I am motivated by the material in an Internet activity outside of class (0.349, 0.525, 0.091, 0.275)
10. Learning is the same in class and at home on the Internet (0.040, 0.771, 0.103, 0.00)
11. I can practice English grammar during Internet activities outside of class (0.159, 0.421, 0.024, 0.165)
12. I feel that I can improve my listening skills the same using the Internet and in class (-0.018, 0.694, 0.068, 0.042)
13. I believe that learning on the Internet outside of class is more motivating than a regular course (0.043, 0.691, 0.00, 0.00)
14. I believe a complete course can be given by the Internet without difficulty (0.209, 0.702, -0.112, 0.067)
15. I could pass a course on the Internet without any teacher assistance (0.319, 0.497, 0.149, 0.00)
16. I believe that material in an Internet course is better prepared than a traditional class (0.00, 0.613, 0.014, 0.00)
17. *When it comes to learning and studying, I am a self-directed person (0.164, 0.013, 0.649, 0.00)
18. *In my studies, I am self-disciplined and find it easy to set aside reading and homework time (0.054, 0.121, 0.837, 0.00)
19. *I am able to manage my study time effectively and easily complete assignments on time (0.028, 0.181, 0.782, 0.00)
20. *In my studies, I set goals and have a high degree of initiative (0.00, 0.00, 0.694, 0.072)
21. As a student, I enjoy working with other students in groups (0.00, 0.037, 0.119, 0.522)
22. I feel that face-to-face contact with my instructor is necessary for learning to occur (-0.153, 0.343, 0.362, 0.433)
23. I can discuss with other students during Internet activities outside of class (0.359, 0.120, -0.214, 0.663)
24. I can work in a group during Internet activities outside of class (0.220, 0.148, 0.00, 0.767)
25. I can collaborate with other students during Internet activities outside of class (0.156, 0.108, -0.205, 0.795)

*Items from the original McVay inventory.

Though it was not a significant predictor, why was the factor labeled "interaction" negatively related to course achievement? After all, the DE literature predicts that increased interactivity between students and instructor, and among students, should lead to an improvement in learning. We wondered whether this might be an artifact of the two sub-samples that comprised the larger sample, since the average Course Grade was lower for "fail standing" students than for "regular" students. "Fail standing" students, under pressure to achieve a satisfactory Course Grade, might have indicated, initially, that they needed more help and feedback from the instructor and more contact with other students. To investigate this further, we conducted an independent t test between these two sub-samples to determine whether the groups differed on the "interaction" factor. The results were not significant, t(151) = 0.97, p > 0.05, indicating that, on average, "fail standing" students expressed the same desire for interaction with others as "regular" students.
Does this mean that the "fail standing" students felt the same as "regular" students on the other three measures? The answer is no. Consistently, their scores on the other three factors, "skills," "beliefs," and "self-direction," were lower than those of the "regular" students. Table 4 shows the results of independent t tests run on these other factors.
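A minimal sketch of these sub-sample comparisons using scipy's independent-samples t test (the frame and column names are assumptions):

    # Compare "fail standing" and "regular" students on each factor score.
    from scipy.stats import ttest_ind

    for factor in ["skills", "beliefs", "self_direction", "interaction"]:
        fail = df.loc[df["standing"] == "fail", factor].dropna()
        regular = df.loc[df["standing"] == "regular", factor].dropna()
        t, p = ttest_ind(fail, regular)  # pooled-variance t test
        print(f"{factor}: t({len(fail) + len(regular) - 2}) = {t:+.2f}, p = {p:.3f}")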
Table 3. Results of multiple regression analysis of Course Grade on the four factors (N = 137)

Name             Standardized β    t Ratio    Significance
Constant                           +6.57      p < 0.001
Skills           +0.035            +0.40      p = 0.689
Beliefs          +0.188            +2.15      p = 0.034
Self-Direction   +0.231            +2.78      p = 0.006
Interaction      -0.170            -1.92      p = 0.057
Table 4. t tests between "fail standing" students and "regular" students on three factors

Name             Mean difference    t Ratio    df     Significance
Skills           +0.96              +1.99      165    p < 0.05
Beliefs          +1.81              +3.00      145    p < 0.01
Self-Direction   +0.95              +3.04      160    p < 0.01

In order to further investigate the relationships among the two measures (Grade and GPA) and the four factors, Pearson Product Moment Correlations between the four factors and GPA and Course Grade were calculated; they are presented in Table 5. As expected, the largest correlation was between Course Grade and GPA (i.e., they share 46% of their variance). It is because of this anticipated high correlation that GPA was not included in the predictive multiple regression model in the first place; it is likely that all of the factors would have been eclipsed by GPA.
The pattern of correlations between Course Grade and the factors follows that of the multiple regression. Notice that the GPA by factor correlations mirror this pattern, although only "self-direction" and GPA are significantly correlated. This finding suggests that the factor "self-direction" applies more broadly to other kinds of courses in various subject areas, since GPA is a general rather than specific measure of achievement. Two interesting significant inter-factor correlations emerged: "beliefs" and "skills" were significantly negatively correlated, as were "skills" and "interaction." Taken together, these findings may suggest that those who perceive their prerequisite skills to be higher may be less concerned with the nature of online learning and the aspects of interactivity that were measured by the "interaction" factor.
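A minimal sketch of the Table 5 computation with pandas (the DataFrame and column names are assumptions). Note how the shared-variance figure follows directly from the correlation:

    # Pairwise Pearson correlations among the measures and factor scores.
    cols = ["course_grade", "gpa", "skills", "beliefs", "self_direction", "interaction"]
    print(df[cols].corr().round(2))  # pandas drops missing values pair by pair

    # Shared variance between Course Grade and GPA: r = 0.68 gives r**2 = 0.46,
    # i.e., the 46% noted above.
    r = df["course_grade"].corr(df["gpa"])
    print(f"r = {r:.2f}, shared variance = {r ** 2:.0%}")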
Overall, this study of the predictive validity of the questionnaire suggests the following: (a) three of the four factors do predict learning achievement as measured by Course Grades, but together they account for a small amount of variance (< 9%) in Course Grades; (b) CGPA, the cumulative measure of university performance, is the best "predictor" of Course Grade, although it was not included in the multiple regression model because of its large correlation with the dependent measure; and (c) the factor "self-direction" is positively correlated with both Course Grade and GPA, and emerged as the best predictor of the three significant factors; it is also uncorrelated with the other factors. This final point suggests that students' prior opinions as to their self-management, self-direction and initiative as learners are the best set of items for predicting academic success in an online course. But this applies to achievement in general (i.e., GPA) as well as achievement in this online course (i.e., Course Grade).

Table 5. Correlation matrix of measures and factors (a)

                 Course Grade    GPA       Skills    Beliefs    Self-Direction    Interaction
Course Grade     1.00
GPA              +0.68*          1.00
Skills           +0.04           +0.02     1.00
Beliefs          +0.19*          -0.14     -0.26*    1.00
Self-Direction   +0.23*          +0.31*    -0.13     -0.10      1.00
Interaction      -0.10*          -0.10     -0.32*    -0.28*     -0.05             1.00

* p < 0.05 (two-tailed).
(a) N values ranged from 147 to 167, depending on the amount of missing data.
Question 3. Do online students say the same things about online learning after the course that they said before the course began?
We examined the four factors as they changed from pretest to posttest. Since students in this course meet face to face only in an orientation session at the beginning of the term, the posttest had to be administered online. Because of this, only 63 students responded to it. Dependent t tests were conducted between the pretest and posttest factors. Table 6 shows the results of this analysis.

Table 6. Difference between "pretest" and "posttest" scores across the factors (a)

Factor           Mean difference    SE of mean    t Ratio    df    Significance
Skills           +2.75              0.48          +5.79      60    p < 0.001
Beliefs          -0.64              0.56          -1.15      54    p = 0.26
Self-Direction   -1.42              0.33          -4.26      58    p < 0.001
Interaction      +1.98              0.45          +4.41      59    p < 0.001

(a) Negative values here reflect a change from a higher mean on the pretest to a lower mean on the posttest; positive values are the reverse.
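A minimal sketch of this step with scipy (the frames, column names, and matching on the 4-digit ID from the Procedure are assumptions for illustration; the table above follows its own sign convention, given in its note):

    # Match pretest and posttest administrations on the ID fragment,
    # then run a dependent (paired) t test for each factor.
    from scipy.stats import ttest_rel

    paired = pretest.merge(posttest, on="id4", suffixes=("_pre", "_post"))
    for f in ["skills", "beliefs", "self_direction", "interaction"]:
        pair = paired[[f + "_pre", f + "_post"]].dropna()
        t, p = ttest_rel(pair[f + "_post"], pair[f + "_pre"])  # sign follows post - pre
        print(f"{f}: n = {len(pair)}, t = {t:+.2f}, p = {p:.3g}")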
It is clear from this analysis that "beliefs about DE" did not change from pretest to posttest. All of the other factors did, but in different directions. Surprisingly, students at the end of term were significantly less positive about their "comfort with basic skills" than at the beginning of term. It is possible that students became less positive about the requirements of online learning as difficulties in actually applying these skills in an online learning setting became evident, although apparently this did not conflict with the Course Grade they received (see Table 2). Students' responses to the "self-direction" items, on the other hand, changed for the better between the pretest and the posttest. Is it possible that DE/online learning contributes to one's perception of oneself as a more independent and self-directed learner? That is certainly one interpretation of these results. For the factor "interaction," the means between pretest and posttest changed in a negative direction (although the sign is positive). Two interpretations of this occurred to us: (a) the lower posttest mean might reflect disappointment at the level of interactivity that can be achieved online; or (b) the lower posttest mean may indicate that the students' need for interactivity was not as critical to their learning as they had first estimated.

Table 7. Comparison of three demographic variables across the four factors

                 Gender              DE courses taken      Hours per week in
                 (1 = F, 2 = M)      previously            educ. computing
                                     (1 = 0, 2 = 1+)       (1 = < 1, 2 = 1+)
Factor           df     t Ratio      df     t Ratio        df     t Ratio
Skills           165    +0.108       164    +0.77          163    -2.98*
Beliefs          145    -1.06        144    -2.12*         145    -2.93*
Self-Direction   159    +0.84        158    -1.52          157    -1.33
Interaction      152    -2.68*       151    +0.27          151    -0.82

* p < 0.05.

Question 4. Do the identified factors differ across the demographic characteristics of the sample (i.e., gender, prior experience with online learning, number of hours spent on educational applications of computing)?
Having gathered demographic information on the students, we sought to determine whether differences in: (a) gender, (b) number of DE/online courses taken previously, and (c) number of hours spent per week in educational computing affected the means on the four factors. Gender was already coded 1 and 2, but the two other variables were open-ended ordinal scales, so they were collapsed into dichotomous levels, 1 and 2. For number of DE/online courses, 1 = none and 2 = one or more; for number of hours, 1 = less than 1 hour and 2 = 1 or more hours. The t tests between these dichotomous levels are shown in Table 7.
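A minimal sketch of this recoding and the resulting comparisons (the column names are hypothetical):

    # Collapse the open-ended ordinal demographics into two levels each,
    # then compare the levels on each factor with independent t tests.
    import numpy as np
    from scipy.stats import ttest_ind

    df["prior_de"] = np.where(df["n_de_courses"] >= 1, 2, 1)     # 1 = none, 2 = 1+
    df["computing"] = np.where(df["hours_per_week"] >= 1, 2, 1)  # 1 = < 1 hr, 2 = 1+

    for group in ["gender", "prior_de", "computing"]:
        for factor in ["skills", "beliefs", "self_direction", "interaction"]:
            g1 = df.loc[df[group] == 1, factor].dropna()
            g2 = df.loc[df[group] == 2, factor].dropna()
            t, p = ttest_ind(g1, g2)
            print(f"{group} x {factor}: t = {t:+.2f}, p = {p:.3f}")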
For gender, only the factor "interaction" was significant, with males (n = 78, M = 12.14) indicating a greater desire for interactivity than females (n = 76, M = 13.15). Students who had previously taken at least one online DE course (n = 76, M = 18.20) had more positive "beliefs" about DE than students who had never taken an online DE course (n = 70, M = 19.46). Students who used computers in educational endeavors more frequently were more positive in terms of both "beliefs" and "skills" (n = 51, M = 17.65; n = 55, M = 11.25, respectively) than students who used computers less frequently (n = 96, M = 19.44; n = 110, M = 12.73, respectively). Interestingly, "self-direction and initiative" did not figure into any of these comparisons.

Summary of Findings
The following four summary conclusions seem warranted:
(1) There is a coherent four-factor structure that underlies the questionnaire
items: (a) “beliefs about DE”; (b) “confidence about basic prerequisite
skills”; (c) “self-direction and initiative”; and (d) “desire for interaction
with the instructor and other students.”
(2) Two of the four factors significantly predict achievement (Course Grade): "beliefs about DE" and "self-direction and initiative" are both positive predictors; "skills" was not significant. "Interaction with the instructor and other students" is a negative predictor, but it is not significant.
(3) Three of the factors changed from the beginning of the course (pretest) to
the end (posttest): students became more positive about their “self-
direction and initiative,” but they became more negative about their
“confidence in their basic skills” and their “desire for interaction.”
(4) Across the three demographics of the sample, differences were found for
three of the four factors. Only “self-direction and initiative” did not differ.

Discussion
Distance education has changed remarkably over the last century. Taylor (2001) characterizes five generations of DE, largely defined with regard to the media, and thereby the range of instructional options, available at the time of their prevalence. The progression that Taylor describes moves along a rough continuum of increased flexibility, interactivity, materials delivery and access, beginning in the early years of DE when it was called correspondence education (i.e., the media were print and the post office), through broadcast radio and television, and on to current manifestations of interactive multi-media, the Internet, access to Web-based resources, computer-mediated communication, and most recently campus portals providing access to the complete range of university services and facilities at a distance. While most see this as "progress" and movement towards the original raison d'être of DE (accessibility, "anytime and anywhere," for those who would otherwise not have access to education), each iteration has demanded more of students in terms of prerequisite skills, abilities and attitudes, to the point that Smith (1999) states: "With an increasingly diverse range of pedagogical methods being employed by academics, little that students have previously learned in traditional classrooms has prepared them for the era of online learning" (p. 1).
This study deals with the prerequisites of online learning in an attempt to develop a questionnaire, following from the work of McVay (2001) and Smith et al. (2003), that predicts achievement in online learning. There are several major findings.
The first is that the original literature-based dimensions of DE were validated through factor analysis of the 38 items on the questionnaire. This does not say, of course, that these are the only dimensions that might be identified. Four predominant factors were found within the questionnaire: Factor 1 comprised items related to "confidence in prerequisite skills." Factor 2 comprised items related to "general beliefs about online learning." Factor 3 comprised items related to "self-direction and initiative" or "self-management of learning." Factor 4 comprised items related to "desire for interaction with others." Interestingly, not all of the McVay questions ended up in the four-factor solution in this study. Items 4 (I am willing to devote 8–10 hours per week for my studies), 5 (I feel that online learning is of at least equal quality to traditional classroom learning), 8 (I have technical difficulties accessing the Internet as needed for my studies) and 13 (As a student I enjoy working independently) were not part of the final four-factor solution. This does not say that these are not good items, just that they were eclipsed by other items that expressed similar sentiments. Smith et al. (2003) also found that Item 4 was a problem.
Second, the results of multiple regression of achievement on the four factors revealed that three of them significantly predicted Course Grade, two in a positive direction and one in a negative direction. The negative prediction of "desire for interaction" is interesting because it seemingly contradicts the predictions that increased interactivity will make learning easier, resulting in higher achievement. It is possible that in this particular course the need for interaction was, in fact, a minimal condition of success.
The following caveat, however, must be attached to this finding. Technically, two factors are positive predictors of achievement, with "self-direction" being the most important. But their overall importance pales in light of the high correlation between GPA and Course Grade. Put succinctly, prior achievement is still the best predictor of future achievement. It is interesting to note the significant but modest correlation between "self-direction" and GPA, suggesting that this would be a reasonably good predictor of "achievement in general," which has little to do with whether a course is taught online or in a traditional classroom.
The third finding (i.e., the difference between pretest and posttest) indicates that initial reactions are not necessarily borne out over time, as the actual conditions of online learning reveal themselves. Two factors ("skills" and "interaction") moved in a more negative direction over time, and one ("self-direction") moved in a positive direction. It is possible that the two negative results are indicative of over-estimation and, in the case of the positive shift, that online learning might actually have contributed to better organizational and self-management behaviors. For example, the use of computer-mediated communication in an online setting means that students must use written forms of expression to communicate with one another in articulating and developing ideas, in arguing contrasting viewpoints, in refining opinions, in settling disputes, and so on (Abrami & Bures, 1996). This use of written language and peer interaction allows for the possibility of increased reflection (Hawkes, 2001), the development of writing skills (Winkelmann, 1995), and higher quality performance through peer modeling and mentoring (Lou, Dedic, & Rosenfield, 2003). The critical thinking literature goes so far as to suggest that activity of this sort promotes the development of higher-order thinking skills (Garrison, Anderson, & Archer, 2001; McKnight, 2001). Investigation of the effects that distance and online courses have on the development of learning and study behaviors is an interesting future line of inquiry.
The fourth finding, the four factors compared across levels of selected demographic features, produced inconclusive and in some cases counter-intuitive results. For instance, why did men express more interest in interaction with others than women? This is not an artifact of sampling, because there was an almost equal balance of men and women. It is, however, likely that these results are an artifact of the effects of a large sample size on t test results; even very small differences can result in significant findings with large samples.
This research study suggests that it is possible to develop a questionnaire that predicts how students will perform in an online course. However, CGPA remains the best predictor of Course Grade, the measure of achievement in this study. Interestingly, the factor "self-direction and initiative" was significantly correlated with CGPA, suggesting that this factor measures desirable characteristics of achievement success more broadly than just this online course. Contrary to the DE literature, "desire for interaction"—long thought to be a facilitative characteristic of modern DE applications—was a negative rather than a positive predictor of achievement. This is, indeed, a puzzling outcome of this study which warrants further research attention.

Notes on Contributors
Robert M. Bernard is a Professor of Education at Concordia University and a
member of the Centre for the Study of Learning and Performance
(CSLP), specializing in instructional technology, distance education and
online teaching and learning, electronic and online publishing, research
design, statistical analysis and research synthesis (meta-analysis).
Aaron Brauer holds an Extended Term Appointment (ETA) in the Academic Technology Group of the Faculty of Arts and Science at Concordia University. He is also a doctoral student in Educational Technology. His areas of expertise are educational computing, distance education and Internet-based applications of online learning.
Philip C. Abrami is a Professor of Education at Concordia University and the
Director of the Centre for the Study of Learning and Performance
(CSLP). His areas of expertise include instructional technology, social
psychology of education, postsecondary instruction and research synthesis
(meta-analysis).
Mike Surkes is a doctoral student in Educational Technology and a Research Assistant in the Centre for the Study of Learning and Performance. His area of interest is the investigation of the characteristics of successful teamwork, leadership of teams and the process of collaboration.

Acknowledgements
This study was supported by a grant to Abrami and Bernard from the Social
Sciences and Humanities Research Council of Canada.

References
Abrami, P. C., & Bures, E. M. (1996). Computer-supported collaborative learning and
distance education. American Journal of Distance Education, 10(2), 37–42.
Abrami, P. C., Cholmsky, P., & Gordon, R. (2001). Statistical analysis for the social sciences:
An interactive approach. Boston, MA: Allyn & Bacon (includes CD-ROM).
Bernard, R. M., & Amundsen, C. L. (1989). Antecedents to dropout in distance education: Does one model fit all? Journal of Distance Education, 4(2), 25–46.
Bernard, R. M., Lou, Y., Abrami, P. C., Wozney, L., Borokhovski, E., Wallet, P. A., et al. (2003, October). How does distance education compare to classroom instruction? A meta-analysis of the empirical literature. Paper presented at the annual meeting of the Association for Educational Communication and Technology, Anaheim, CA (final results).
Bernard, R. M., & Naidu, S. (1992). Concept mapping, post-questioning and feedback: A
distance education field experiment. British Journal of Educational Technology, 23(1),
48–60.
Bernt, F. L., & Bugbee, A. C. (1993). Study practices and attitudes related to academic
success in a distance learning programme. Distance Education, 14(1), 97–112.
Bures, E., Abrami, P. C., & Amundsen, C. (2000). Student motivation to learn via
computer-conferencing. Research in Higher Education, 41(5), 593–621.
Bures, E., Amundsen, C., & Abrami, P. C. (2002). Motivation to learn via computer
conferencing: Exploring how task-specific motivation and CC expectations are related
to student acceptance of learning via CC. Journal of Educational Computing Research,
27(3), 249–264.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research.
New York: Houghton Mifflin.
Coggins, C. C. (1988). Preferred learning styles and their impact on completion of external
degree programs. American Journal of Distance Education, 2(1), 25–37.
Fulford, C. P., & Zhang, S. (1993). Perceptions of interaction: The critical predictor in
distance education. American Journal of Distance Education, 7(3), 8–21.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence,
and computer conferencing in distance education. American Journal of Distance
Education, 15(1), 7–23.
Hawkes, M. (2001). Variables of interest in exploring the reflective outcomes of network-
based communication. Journal of Research on Computing in Education, 33(3), 299–315.
Kember, D. (1995). Open learning courses for adults: A model of student progress. Englewood
Cliffs, NJ: Educational Technology Publications.
Lou, Y., Dedic, H., & Rosenfield, S. (2003). Feedback model and successful e-learning. In
S. Naidu (Ed.), Learning and teaching with technology: Principles and practice (pp. 249–
260). London and Sterling, VA: Kogan Page.
McKnight, C. B. (2001). Supporting critical thinking in interactive learning environments.
Computers in the Schools, 17(3–4), 17–32.
McVay, M. (2001). How to be a successful distance education student: Learning on the Internet.
New York, NY: Prentice Hall.
Moore, M. G., & Thompson, M. M. (1990). The effects of distance learning: A summary of the literature [Research Monograph]. University Park, PA: Pennsylvania State University.
Morgan, C., & Tam, M. (1999). Unraveling the complexities of distance education student attrition. Distance Education, 20(1), 96–108.
Phipps, R., & Merisotis, J. (1999). What’s the difference? A review of contemporary research on
the effectiveness of distance learning in higher education. Washington, DC: Institute for
Higher Education Policy.
Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of
Instructional Telecommunications, North Carolina State University.
Smith, E. (1999). Learning to learn online. Retrieved July 24, 2003, from http://
www.csu.edu.au
Smith, P. J., Murphy, L., & Mahoney, E. (2003). Towards identifying factors underlying readiness for online learning: An exploratory study. Distance Education, 24(1), 57–67.
Sweet, R. (1986). Student dropout in distance education: An application of Tinto's model. Distance Education, 7(2), 201–213.
Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics. New York: HarperCollins.
Taylor, J. C. (2001, April). Fifth generation distance education. Keynote address delivered at
the ICDE 20th World Conference, Dusseldorf, Germany, April 1–5. Retrieved
September 8, 2002, from http://www.usq.edu.au/users/taylorj/conferences.html
Winkelmann, C. L. (1995). Electronic literacy, critical pedagogy, and collaboration: A case
for cyborg writing. Computers and the Humanities, 29(6), 431–448.
Woodley, A., de Lange, P., & Tanewski, G. (2001). Student progress in distance education:
Kember’s model re-visited. Open Learning, 16(2), 113–131.