
Computers & Education 150 (2020) 103851


The impact of learner-, instructor-, and course-level factors on online learning

Binbin Zheng a,*, Chin-Hsi Lin b, Jemma Bae Kwon c

a Michigan State University, A214B, 965 Wilson Road, East Lansing, MI, USA
b The University of Hong Kong, Room 615, Meng Wah Building, Pokfulam Road, Hong Kong
c California State University, Sacramento, Eureka Hall 322, Sacramento, CA, USA

ARTICLE INFO

Keywords: K-12 online education; Online course design; English language and literature; Higher-level knowledge activities

ABSTRACT

The number of K-12 students taking online courses has been increasing tremendously over the past few years. However, most research on online learning either compares its overall effectiveness to that of traditional learning, or examines perceptions and interactions using self-reported data; very few studies have looked into the relationships between the elements of K-12 online courses and their students’ learning outcomes. Based on student-, instructor-, and course-level data from 919 students enrolled in eight online high-school English language and literature courses, hierarchical linear modeling and content analysis showed that project-based assignments and high-level knowledge activities were beneficial to learning outcomes – though not necessarily among students who took these courses for credit-recovery purposes. The paper also discusses implications for both online course-design practices and future research on predictors of online-learning success.

1. Introduction

Enrollment in K-12 distance-education courses in the United States increased dramatically over the past decade, more than doubling between the 2009-10 school year, when it stood at 1.8 million (Zandberg & Lewis, 2008), and 2014-15, when it reached 3.8 million (Watson, Pape, Murin, Gemin, & Vashaw, 2015). Among all subject areas, languages present the biggest challenge in K-12 online education, with studies reporting negative effects relative to similar face-to-face courses (see Cavanaugh, 2001, for example), as well as negative feelings (e.g., Oliver & Kellogg, 2015).
Prior studies in K-12 settings have examined online learning in terms of student-level factors (e.g., behavioral data; see Liu & Cavanaugh, 2011; Pazzaglia, Clements, Lavigne, & Stafford, 2016), instructor-level factors (e.g., educational level; see Lin & Zheng, 2015), and course-level factors (e.g., class size and grading approaches; see Lin, Kwon, & Zhang, 2019; Lin, 2019). However, most extant research has focused on just one of these three sets of factors – i.e., the student, instructor, or course level – and only a few studies (e.g., Zhang & Lin, 2019; Zhang, 2017) have included more than one set in multi-level statistical analyses; as such, the existing results seem unlikely to provide a complete picture of online learning in K-12 settings. In addition, research that uses objective measures to assess course design, such as the number of course elements, remains scarce, as researchers typically struggle to access online course data and materials in K-12 settings (Barbour, 2017).
In addition, credit recovery is one major reason that K-12 students take online courses (Queen & Lewis, 2011; Viano, 2018).

* Corresponding author. A214B, 965 Wilson Road, East Lansing, MI, 48824, USA.
E-mail addresses: binbinz@msu.edu (B. Zheng), chinhsi@hku.hk (C.-H. Lin), kwon@csus.edu (J.B. Kwon).

https://doi.org/10.1016/j.compedu.2020.103851
Received 10 October 2019; Received in revised form 13 January 2020; Accepted 11 February 2020
Available online 14 February 2020
0360-1315/© 2020 Elsevier Ltd. All rights reserved.

Yet how well credit-recovery students perform in online courses remains unknown (Powell, Roberts, & Patrick, 2015; Taylor et al., 2016; U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse, 2015). Viano (2018) observed that credit-recovery students differ from their counterparts in three ways: 1) they have already studied all or part of the learning materials; 2) they are likely to have lower technological skills; and 3) they are more likely to have skill deficits for learning. Based on these characteristics, Viano called for more research on credit-recovery learners. Our study therefore included credit-recovery status as a student-level variable, to examine its effect on students’ online-learning outcomes.
Accordingly, the purpose of the current study is to investigate how the combination of student-, instructor-, and course-level factors affects high schoolers’ online-learning success in English language and literature courses, using data on their actual learning activity and course-design elements.

2. Literature review

2.1. Effectiveness of online English Language and Literature learning

A closer investigation of the factors contributing to success in K-12 online language courses is particularly urgent, given that previous studies have yielded mixed findings regarding the effectiveness of such courses as compared to their face-to-face counterparts, with some reporting lower achievement in the former (e.g., Cavanaugh, 2001) and others reporting no significant achievement differences between online and offline settings (e.g., Freidhoff, 2017).
Cavanaugh’s (2001) meta-analysis of 19 studies comparing the effects of distance education and traditional methods on K-12 academic achievement reported positive effect sizes for all subject areas except foreign languages. Specifically, Cavanaugh identified a small positive effect size of 0.43 in online English-language courses, but a somewhat larger negative effect size of 0.8 in online foreign-language courses, as compared to their respective face-to-face counterparts. A more recent meta-analysis of 45 studies comparing the effectiveness of fully online, blended, and purely face-to-face instruction in K-12 and higher-education settings (Means, Toyama, Murphy, & Baki, 2013) found a positive effect of online instruction, especially when it took place as part of blended learning. Its authors did not specify effect sizes for each content area, but did mention that no differences among subject areas were detected. It should be noted, however, that more than ninety percent of the studies Means et al. (2013) analyzed were conducted in higher education.
In contrast to studies reporting findings in favour of online courses, Hung, Hsu, and Rice (2012) examined predictors of K-12 online-learning outcomes using data mining, and found that students in English-language courses clicked on course content less often than students of other subjects did, and as a result had lower grades. More recently, Freidhoff (2017) reported that in 2015-16, Michigan K-12 students’ pass rate in virtual English Language and Literature (ELL) courses was 54%: far lower than the same individuals’ 77% pass rate in their non-virtual ELL courses. Moreover, Freidhoff did not find parallel differences between virtual and non-virtual courses in other subject areas.
All that being said, the prior literature’s mixed findings on the effectiveness of K-12 online learning could also be a function of the
choice of student-, teacher-, and course-level variables. Because studies on teacher-level variables in such contexts are rare to
nonexistent, the following two sections will discuss the effects of student- and course-level factors only.

2.2. Effects of student-level variables

The student-level variables utilized in prior studies of K-12 online learning have mostly been of two types: credit-recovery status,
and data from online learning-management systems (LMSs).

2.2.1. Credit recovery


If online teachers and course designers are to serve credit-recovery students effectively, it is desirable for them to understand why these students fail their face-to-face classes. Based on high-school students’ self-reported data, Oliver and Kellogg (2015) identified two general reasons for such failure – a lack of self-discipline, and inefficient student-teacher communication – with their participants stating that online courses helped them address both issues, since learning was more self-paced and more one-on-one teacher support was provided.
Prior scholarship has also recommended treating credit-recovery students as a separate category in data analysis (e.g., Viano, 2018). Oviatt (2017), for example, highlighted differences in the amounts of help that credit-recovery and non-credit-recovery students received, as well as the former group’s limited number of peer interactions; and Ingerham (2012) found that credit-recovery students spent a significant amount of time off-task due to a range of distractions. However, studies examining such students’ outcomes remain rare (Viano, 2018), and the effectiveness or ineffectiveness of credit-recovery courses remains undetermined (U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse, 2015).

2.2.2. Behavioral data

The other student-level variable used to predict online-learning success, LMS-derived behavioral data, has commonly included numbers of logins and their durations, but findings about how these two factors predict learning outcomes have been mixed. In 20% of the online courses studied by Liu and Cavanaugh (2011) at a virtual high school in the Midwestern U.S. during the 2007-08 school year, a student’s number of logins was negatively correlated with his/her final grade. The same study found that login duration had a positive impact on final grades in 11 of the same 15 courses, but English language was not among them.

Hung et al. (2012) examined almost 24 million online records generated by more than 7500 students in 883 courses at a K-12
online institution in the northwestern U.S., and found that students enrolled in English language courses clicked on course material
less often, and received lower final grades, than those enrolled in foreign-language or health courses. However, Hung et al. neither
examined login durations nor made causal inferences between logins and final grades.
Rather than aggregated data, some studies have used time-series data to examine students’ online behavior. For example, Pazzaglia, Clements, Lavigne, and Stafford (2016) categorized students in 109 courses held over a 21-week semester at the Wisconsin Virtual School based on their weekly login durations, among other LMS data. Those who logged in for 2 hours or more every week achieved higher final scores than those who logged in for shorter periods. However, Pazzaglia et al. did not include any credit-recovery courses in their sample, or take account of certain other potentially relevant factors such as instructor rank and course design.
In sum, most studies that have examined students’ behavioral data in online K-12 settings have failed to consider either the complex structures of online courses or the influence of teachers. It is therefore possible that the apparent relationship between student login behavior and learning outcomes is merely a function of how their courses are designed and implemented.

2.3. Effects of course-level factors

Prior studies of online-course design in higher education have generally used course-quality rubrics as predictors of students’ final grades (e.g., Jaggars & Xu, 2016; Margaryan, Bianco, & Littlejohn, 2015). Jaggars and Xu closely examined the impact of four aspects of online course design on student performance in two community colleges: 1) course organization and presentation, 2) learning objectives and assessments, 3) interpersonal interaction, and 4) technology. They found that interpersonal interaction had a significant positive effect on students’ final grades, and explained 23% of grade variance, whereas the other three design aspects had no significant impact on learning outcomes. Margaryan, Bianco, and Littlejohn (2015) used instructional-design principles from the literature, including but not limited to First Principles of Instruction (Merrill, 2002), to evaluate the quality of Massive Open Online Courses (MOOCs). The principles they selected included 1) problem-centeredness, 2) activation, 3) demonstration, 4) application, 5) integration, 6) collective knowledge, 7) collaboration, 8) differentiation, 9) authentic resources, and 10) feedback. Most of the MOOCs they examined were found to accord poorly with most of these principles, but unfortunately, the authors did not examine the relationship between specific MOOCs’ adherence to the 10 principles and their students’ learning outcomes.
Regarding the effect of online course design on K-12 learning, only a few studies have specifically examined this topic. Adelstein and Barbour (2016) assessed the validity and reliability of the National Standards for Quality Online Courses (International Association for K-12 Online Learning, 2011) and showed that there is still room to increase the clarity of these standards; whether courses complying with the standards had a positive impact on learning outcomes remains unclear.
In any case, research that simply rates courses for their adherence to course-design principles does not provide instructors or
instructional designers with information about actual course content, and is thus of limited utility. At the time of writing, no study
appears to have specifically examined online course design using actual K-12 course data, let alone the relationship between
course design and learning outcomes.

2.4. Effects of instructor-level factors

There are three main strands of research on K-12 online instructors. The first has focused on pre-service teacher education and teacher professional development (e.g., Quiroz, Ritter, Li, Newton, & Palkar, 2016). The second has focused on teacher satisfaction and retention (e.g., Borup & Stevens, 2016; Larkin, Lokey-Vega, & Brantley-Dias, 2018), and the third has focused on teaching practices (Borup & Stevens, 2017; DiPietro, Ferdig, Black, & Presto, 2010; Lin & Zheng, 2015). Yet, as mentioned earlier, very few studies have specifically examined the effect of teacher characteristics on student learning outcomes (see Zhang & Lin, 2019).

2.4.1. Educational level


Regarding instructors’ educational level, Zhang and Lin (2019) found that instructors’ education level did not affect student satisfaction in online language courses. Findings on this issue in K-12 face-to-face settings are inconclusive, with studies reporting both positive and negative effects (for a review, see Wayne & Youngs, 2003), while others have found educational level to be unrelated to student learning outcomes (Croninger, Rice, Rathbun, & Nishio, 2007).

2.4.2. Rank
As for rank, to our knowledge, no study has looked at this topic specifically in K-12 online settings. The closest work is our prior study (Zhang & Lin, 2019), which used both years of teaching and years of online-teaching experience as teacher-level predictors of student satisfaction, but detected no association.
To sum up, the existing literature has tended to measure online-learning success either by comparing it against learning success in traditional contexts, or by examining students’ perceptions, behaviors, or learning outcomes (Young, 2006). Few studies have looked into the relations between online courses’ design elements and their learning outcomes (e.g., Jaggars & Xu, 2016); and even fewer – if indeed any – have focused on how the combination of student-, instructor-, and course-level factors contributes to students’ online-learning success.
The current study therefore uses actual course elements to examine how course design, together with student- and instructor-level factors, predicts learning outcomes in a K-12 online-learning setting. It is guided by two research questions:
1. How do student-, instructor-, and course-level factors affect students’ learning outcomes in online ELL courses?
2. Do course-level factors interact with students’ reasons for enrollment (i.e., credit recovery vs. non-credit recovery)?

3. Methods

3.1. Context

This study was conducted in a large Midwestern state-wide virtual school. The initial pool of participants comprised all 1026 students who were enrolled in any of the eight online English language and literature courses at the eighth- to 12th-grade levels offered during the 2015-16 school year. Two exclusion criteria were applied. First, the researchers excluded 47 students who had taken more than one English course in any semester, because including them would have violated the independence assumption of hierarchical linear modeling (HLM). Second, the 60 students with incomplete login information were excluded. Thus, the final sample consisted of 919 students in eight courses taught by 12 instructors.
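As an illustration only, these two exclusion steps could be implemented as in the following pandas sketch; the file name (enrollments.csv) and column names (student_id, semester, course_id, login_count, login_minutes) are hypothetical stand-ins, not the school’s actual dataset:

```python
import pandas as pd

# One row per student-course enrollment (hypothetical file and column names).
enroll = pd.read_csv("enrollments.csv")

# 1) Exclude students who took more than one English course in any semester,
#    since repeated students would violate HLM's independence assumption.
multi = enroll.groupby(["student_id", "semester"])["course_id"].transform("nunique") > 1
repeat_students = enroll.loc[multi, "student_id"].unique()
enroll = enroll[~enroll["student_id"].isin(repeat_students)]

# 2) Exclude students with incomplete login information.
enroll = enroll.dropna(subset=["login_count", "login_minutes"])
```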

3.2. Data sources

First, students’ demographic information, final scores, and login activities on the school’s Blackboard LMS, as well as instructor information, were obtained directly from an LMS-derived dataset provided by the school. Second, two authors were granted full access to the completed online courses and hand-coded course-level information.

3.3. Measurement

3.3.1. Student-level variables


The self-reported student-level variables of interest included gender, special-education status, and reasons for taking online courses. The students’ numbers of logins, total login durations, and final grades were also collected. Descriptive statistics of these student characteristics are presented in Table 1. When a student enrolled in an online course through the virtual school, s/he was asked to provide other demographic information – age and grade level – but answers to those questions were not required, and less than 20% of the students answered both of them.
Reasons for taking online courses. This variable, collected from the school, included five categories: 1) non-availability of the course at the student’s local school (6.9%); 2) credit recovery (12.6%); 3) learning preferences (33.0%); 4) scheduling conflict (22.3%); and 5) other (25.2%). To better link our analyses to the prior literature, however, all four of the non-credit-recovery options were combined into a single category.
Number of logins. This variable comprised the number of times a participant logged in to the LMS during the semester, regardless of how long s/he remained logged in. The average number of logins was 79.18 (SD = 40.86), with the highest number recorded for an individual student being 294 and the lowest, one. The average number of logins per week was just under four.
Login duration. This variable consisted of the total time a student remained logged in to the LMS during his/her semester of enrollment. In the original LMS dataset, this variable was reported in minutes, but for ease of analysis, it was recoded into hours. Students’ average time on the LMS was 81.56 hours (SD = 68.20), with a high of 707 hours and a low of less than one. The participants’ average time logged in per week was 5.1 hours.
Final grade. The learning outcome used in this research comprised the final grades reported by the virtual school to the students’ own schools at the end of each semester. All courses with the same name (e.g., English Language and Literature) shared the same assessment regime across different sections (e.g., English Language and Literature 9A, English Language and Literature 9B), regardless of instructors. All course content and assessments were certified internally and externally (i.e., by Quality Matters). As such, final grades should be a reasonably accurate reflection of students’ learning in these courses.
Final grades consisted of a mixture of the participants’ scores on auto-graded and instructor-graded assignments, and the proportion of instructor-graded assignments was roughly similar in each course. Students’ final grades were the sum of the points they earned on each assignment divided by the total possible points, transformed into a points-out-of-100 format. The average final grade obtained was 73.4 (SD = 29.4).

Table 1
Descriptive statistics of student-level variables.
Student-level variables (n = 919) M SD Min Max

Final grades 73.44 29.37 0 100


Login times 69.18 40.86 1 294
Login duration 81.56 68.20 0 707
Male 0.43 0.50 0 1
Special Education 0.08 0.28 0 1
Enrollment reason
Learning preference of the student 0.33 0.47 0 1
Scheduling conflict 0.22 0.42 0 1
Credit recovery 0.13 0.33 0 1
Course unavailable at local school 0.07 0.25 0 1
Other 0.25 0.43 0 1
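As a minimal sketch, the points-out-of-100 transformation described above amounts to the following (the assignment scores are illustrative, not from the study):

```python
def final_grade(points_earned, points_possible):
    """Sum of points earned over total points possible, on a 0-100 scale."""
    return 100 * sum(points_earned) / sum(points_possible)

# e.g., three assignments worth 20, 50, and 30 points:
print(final_grade([18, 42, 25], [20, 50, 30]))  # 85.0
```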

3.3.2. Instructor-level variables


The two instructor-level variables of interest were the teachers’ education levels and ranks, as explained below (see Table 2).
Education level. This was a binary variable, with “1” referring to instructors with a master’s degree (n = 9, 75.0%) and “0” to those with a bachelor’s degree (n = 3, 25.0%).
Rank. This variable had four categories: full-time lead instructor (n = 2, 16.7%), full-time non-lead instructor (n = 3, 25.0%), part-time instructor (n = 5, 41.7%), and iEducator (n = 2, 16.7%), with the last referring to novice online instructors who had recently completed teacher-preparation programs and were working in the virtual school while continuing their training in how to teach in a digital environment. Three binary variables were generated, using full-time non-lead instructor as the reference category.

3.3.3. Course-level variables


Courses were coded according to their assignment types, resource types, and knowledge taxonomy. It should be noted that the assignment and resource types were consistent across courses, per the virtual school’s course requirements, although their numbers varied among courses. Descriptive statistics for these three course-level characteristics are presented in Table 3.
Assignment types and resource types were coded using a bottom-up strategy. Each assignment was placed in one of five categories: text-based (M = 19.13, SD = 6.38), multimedia (e.g., audio and/or video; M = 0.25, SD = 0.46), quiz (M = 9.13, SD = 2.36), discussion (M = 17.13, SD = 8.94), and project-based (e.g., creating a PowerPoint on a specific research topic; M = 0.38, SD = 0.52). Learning resources were coded into two categories: text-based (i.e., text readings, course guides; M = 21.50, SD = 4.84) and multimedia (i.e., website links, infographics, audio files, and videos; M = 29.13, SD = 8.44).
Knowledge taxonomy was coded using a top-down strategy, based on the revised Bloom’s taxonomy (Anderson et al., 2001), into the following six categories: remember (M = 2.13, SD = 3.09), understand (M = 11.50, SD = 7.25), apply (M = 1.50, SD = 1.41), analyze (M = 14.63, SD = 5.68), evaluate (M = 9.25, SD = 4.13), and create (M = 13.63, SD = 5.29).

3.4. Data analysis

To answer our first research question, a cross-classified two-level random-effects model (Raudenbush & Bryk, 2002) was built to determine the impacts of student-, instructor-, and course-level factors on students’ online-learning outcomes, with student-level data nested in the combination of instructor- and course-level data. Multilevel modeling accounts for the influences of contexts, but not for the fact that individuals may belong to multiple contexts at the same time. In our case, students were nested within both courses and instructors, but these two levels did not have a clear hierarchical relationship. Cross-classified multilevel models allow researchers to disentangle such phenomena.
To answer the second research question, the model described above was modified to include the interactions between students’ credit-recovery status and three course-level factors (i.e., assignment types, resource types, and knowledge taxonomy). Following the quantitative analysis, for the purpose of triangulation, the course content was subjected to conventional content analysis (Hsieh & Shannon, 2005), in which coding categories are derived directly from the data. All statistical analyses were performed in Stata 13.

4. Findings

4.1. Impact on learning outcomes

Model 1 was the unconditional model, which estimated overall attainment across courses and instructors as 71.78 points out of 100 (see Table 4). Between-instructor variance was 13.80, and between-course variance was 29.76. The intraclass correlation coefficient (ICC) for instructors was 0.016 (i.e., 13.80 divided by the sum of 13.80, 29.76, and the residual of 818.32), indicating that instructors explained 1.6% of the variance in students’ final grades. The ICC for courses was 0.034 (i.e., 29.76 divided by the same sum), indicating that courses explained an additional 3.4% of such variance.
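The ICC arithmetic, using the Model 1 variance estimates reported in Table 4:

```python
# Variance estimates from the unconditional model (Model 1, Table 4).
var_instructor, var_course, var_residual = 13.80, 29.76, 818.32
total = var_instructor + var_course + var_residual  # 861.88

icc_instructor = var_instructor / total  # ~0.016: instructors explain 1.6%
icc_course = var_course / total          # ~0.034: courses explain a further 3.4%
```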
Model 2, which added student-level variables, showed that students who logged in more times (B = 0.30, p < .001) and stayed on the LMS for longer (B = 0.10, p < .001) had significantly higher final grades than others. In addition, students who received special education (B = -7.10, p = .009) or who enrolled for credit recovery (B = -13.46, p < .001) had significantly lower final grades than their fellow students. Gender did not significantly predict final grades (B = -0.22, p = .867).

Table 2
Descriptive statistics of instructor-level variables.
Master’s degree Bachelor’s degree Total

Full-time lead instructor 1 1 2


Full-time non-lead instructor 3 0 3
Part-time instructor 4 1 5
iEducator 1 1 2
Total 9 3 12

Table 3
Descriptive statistics of course-level variables.
Course-level variables (n = 8) M SD Min Max

Assignment types
Text assignment 19.13 6.38 10 28
Multimedia assignment 0.25 0.46 0 1
Project-based assignment 0.38 0.52 0 1
Quiz 9.13 2.36 6 13
Discussion 17.13 8.94 6 34
Resource types
Text resources 21.50 4.84 16 31
Multimedia resources 29.13 8.44 18 40
Knowledge taxonomy
Remember 2.13 3.09 0 9
Understand 11.50 7.25 3 27
Apply 1.50 1.41 0 4
Analyze 14.63 5.68 7 22
Evaluate 9.25 4.13 4 15
Create 13.63 5.29 4 21

Model 3 added instructor-level variables, and showed that students whose instructors held master’s degrees (B = -10.29, p = .002) received significantly lower final grades than those whose instructors had only bachelor’s degrees. The participants who were taught by full-time lead instructors or part-time instructors had statistically comparable grades to those in courses taught by full-time non-lead instructors. However, students in courses taught by iEducators had significantly lower final grades than those in courses taught by full-time non-lead instructors (B = -5.65, p = .037).
Model 4 added all course-level variables. To avoid potential confounding effects caused by uneven numbers of assignments and quantities of resources, all course-level variables were first converted to percentages of their categories, and centered using grand means. As compared to text-based assignments, project-based ones had a significantly positive impact on the participants’ final grades (B = 3.21, p = .021), indicating that if the proportion of project-based assignments increased by one percentage point, the average final grade would increase by 3.21 points. However, higher proportions of other types of assignments (i.e., multimedia-based work, quizzes, and discussions) were not significantly associated with final grades. And having more multimedia-based resources, relative to text-based ones, was associated with significantly lower final grades (B = -775.80, p < .001). Grades were not significantly affected by the balance of higher-level knowledge activities to lower-level ones.
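A sketch of this conversion-and-centering step, using hypothetical counts shaped like Table 3:

```python
import pandas as pd

# Assignment counts per course (hypothetical values in the shape of Table 3).
counts = pd.DataFrame(
    {"text": [19, 24, 15], "multimedia": [0, 1, 0], "project": [0, 1, 1],
     "quiz": [9, 10, 7], "discussion": [17, 20, 12]},
    index=["course_A", "course_B", "course_C"],
)

# Convert raw counts to within-course percentages, so that courses with more
# assignments overall are not confounded with their assignment mix...
pct = counts.div(counts.sum(axis=1), axis=0) * 100

# ...then center each percentage on its grand mean across courses.
centered = pct - pct.mean()
```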
Model 5 added interactions between the participants’ credit-recovery status and the course-level variables. Among the variables with significant direct effects in Model 4, all remained significant except credit recovery (B = -14.97, p = .570), while high-level knowledge activities became significant (B = 0.15, p = .019). The only significant interaction was between credit-recovery status and high-level knowledge activities (B = -0.32, p = .025). Taken as a whole, the Model 5 results may imply that credit-recovery status, in and of itself, was not a reason for observed differences in final grades. Rather, credit-recovery students achieved significantly lower grades than their non-credit-recovery counterparts only when, and perhaps because, the proportion of high-level knowledge activities in their online courses was high.
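Extending the earlier statsmodels sketch, Model 5’s interactions can be expressed by crossing credit-recovery status with the centered course-level percentages; the variable names remain hypothetical:

```python
import statsmodels.formula.api as smf

# df (with df["group"] = 1) and vc are as in the earlier sketch.
vc = {"course": "0 + C(course_id)", "instructor": "0 + C(instructor_id)"}
formula = (
    "final_grade ~ logins + duration_hours + male + special_ed"
    " + credit_recovery * (multimedia_pct + project_pct + quiz_pct"
    " + discussion_pct + multimedia_res_pct + higher_level_pct)"
)
model = smf.mixedlm(formula, data=df, groups="group", vc_formula=vc, re_formula="0")
print(model.fit().summary())
```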
Content analysis was employed to examine how project-based assignments and resources affected students’ learning outcomes. These assignments offered clear and meaningful goals and instructions. In one typical project-based assignment, students were asked to create a multimedia presentation of their own research related to the lesson, using tools of their choice, not limited to the nine recommended by the teacher. The detailed instructions were as follows:
You have the freedom to be creative with this project, but still include all the parts of the research necessary. Use either Glogster or Prezi for
your presentation. For your attention grabber, use a type of media within your presentation such as Penzu, Bubbl.us, Vimeo, Storybird,
Xtranormal, Voki, Tiki Toki, etc.
This project-based assignment was used as a summative assessment for the lesson it dealt with. Very similar designs were
used consistently across all the sampled ELL courses.
We also used content analysis in an effort to understand why text-based resources worked better in online ELL courses than multimedia-based resources did. This led us to identify two general types of text-based resources: course guides and text-based readings, with the former making up 87% of all such resources. These guides were provided in various formats, including rubrics, text-maps, flow charts, and sets of bibliography cards, to serve various pedagogical purposes. One course guide, for instance, provided an organizer chart for students reading Erich Maria Remarque’s All Quiet on the Western Front, and asked them to use it to list the traits of the characters, provide evidence from the novel that revealed these traits, and explain how it revealed them. Another purpose of the guides was to provide students with examples of how to finish particular assignments. For example, one instructor who asked students to create a goal map using a graphic organizer provided a sample goal map that included a comprehensive picture of achieving a goal, including “why that is your goal,” “timeline for reaching the goal,” “actions you need to take to reach the main goal,” “people you need to reach your main goal,” and so forth.

Unlike text-based resources, which (as noted above) normally expressed clear pedagogical values, the analyzed multimedia-based resources tended to be offered principally as means of increasing student engagement and interest; typical examples included YouTube videos, TED talks, songs, and sound poems. However, our study did not identify even one example among such resources that was unambiguously linked to the learning objectives of a particular lesson.

Table 4
A two-level hierarchical linear model predicting learning outcomes from student-, instructor-, and course-level factors (n = 919). The dependent variable is final achievement; cells show coefficients, with t statistics in parentheses. Model 1 is the unconditional model; Model 2 adds student-level factors; Model 3 adds instructor-level factors; Model 4 adds course-level factors; Model 5 adds interactions.

                                             Model 1    Model 2    Model 3    Model 4     Model 5
Number of logins                                        0.30***    0.29***    0.29***     0.29***
                                                        (13.91)    (13.75)    (13.75)     (13.91)
Login duration                                          0.10***    0.10***    0.10***     0.10***
                                                        (7.83)     (7.81)     (7.84)      (7.89)
Male                                                    0.26       0.29       0.38        0.77
                                                        (0.17)     (0.18)     (0.24)      (0.49)
Special education                                       -7.10**    -7.54**    -7.28**     -7.29**
                                                        (-2.59)    (-2.75)    (-2.65)     (-2.68)
Credit recovery                                         -13.46***  -13.05***  -12.19***   -14.97***
                                                        (-5.49)    (-5.34)    (-4.97)     (-5.70)
Instructor’s master’s degree                                       -10.29**   -7.65*      -6.43*
                                                                   (-3.15)    (-2.46)     (-1.99)
Lead instructor                                                    -0.72      -0.98       0.36
                                                                   (-0.20)    (-0.30)     (0.11)
Part-time instructor                                               0.80       0.27        1.06
                                                                   (0.30)     (0.11)      (0.41)
iEducator                                                          -5.65*     -5.72*      -5.51*
                                                                   (-2.09)    (-2.51)     (-2.36)
Multimedia assignments                                                        -0.18       -0.61
                                                                              (-0.13)     (-0.40)
Project-based assignments                                                     3.21*       3.70*
                                                                              (2.31)      (2.54)
Quiz                                                                          -0.15       -0.47
                                                                              (-0.46)     (-1.30)
Discussions                                                                   -0.03       -0.08
                                                                              (-0.33)     (-0.84)
Multimedia resources                                                          -775.80***  -827.36***
                                                                              (-3.54)     (-3.51)
Higher-level knowledge                                                        0.09        0.15*
                                                                              (1.47)      (2.35)
Credit recovery × Multimedia assignments                                                  -0.27
                                                                                          (-0.07)
Credit recovery × Project-based assignments                                               -1.55
                                                                                          (-0.56)
Credit recovery × Quiz                                                                    1.16
                                                                                          (1.62)
Credit recovery × Discussion                                                              0.28
                                                                                          (1.10)
Credit recovery × Multimedia resources                                                    375.29
                                                                                          (0.54)
Credit recovery × Higher-level knowledge                                                  -0.32*
                                                                                          (-2.24)
Constant                                     71.78***   45.97***   55.54***   54.07***    52.32***
                                             (28.54)    (16.96)    (13.74)    (15.70)     (14.89)
Between-instructor variance                  13.80**    22.97***   1.93       0.00        0.00***
                                             (3.19)     (4.61)     (0.13)     (-0.03)     (-4.79)
Between-course variance                      29.76***   12.73**    11.72**    0.00        0.00*
                                             (4.74)     (3.12)     (3.07)     (-1.90)     (-2.29)
Residual                                     818.32***  513.44***  515.06***  512.84***   505.87***
                                             (142.44)   (132.66)   (130.54)   (133.76)    (133.47)

Reference categories: female; non-special-education student; non-credit recovery; instructor’s bachelor’s degree; non-lead instructor; full-time instructor; text-based assignments; text-based resources. t statistics in parentheses. *p < .05, **p < .01, ***p < .001.



Lastly, we wanted to explore why discussion-based assignments did not work better than text-based ones. Here, content analysis revealed that most of the so-called discussion-based assignments that had been placed on discussion boards were merely platforms for the submission of student writing or other types of assignments, and not conduits for giving or receiving feedback, let alone wider-ranging discussions of course content and its implications.

5. Discussion

5.1. Student-level factors

While credit recovery is a major motivation for K-12 schools’ provision of online courses (Queen & Lewis, 2011), hardly any empirical studies have examined credit-recovery students’ performance in online courses (Viano, 2018). Indeed, the current study is believed to be the first to comprehensively examine the relationship between individuals’ credit-recovery status and their learning outcomes after controlling for student-, instructor-, and course-level factors. Its finding that such status had a significantly negative impact on learning outcomes represents important empirical support for Viano’s (2018) assertion that practitioners’ equal treatment of credit-recovery and non-credit-recovery students is unhelpful.
The two other major student-level predictors of our participants’ online ELL-course performance – their numbers of logins and login durations – both had significantly positive effects on final grades. The former finding is at odds with research by Liu and Cavanaugh (2011), which reported that students’ numbers of logins had a negative effect on final grades in 20% of the online courses they examined (all in math), and no significant effect in the other 80%. It is possible that other factors, such as the number of credit-recovery students, confounded the results that Liu and Cavanaugh reported. However, the positive relationship we identified between login duration and final grades was consistent with Liu and Cavanaugh’s (2011) finding of a similar relationship in 11 out of 15 online courses, two of which were ELL courses.

5.2. Instructor-level factors

A surprising finding of our study was that the students of teachers with master’s degrees achieved significantly lower final grades than those of teachers with only bachelor’s degrees. A review by Wayne and Youngs (2003) found that, in face-to-face settings, the relationships between teachers’ educational levels and student learning outcomes were unclear, with various studies reporting positive and negative effects; and using a longitudinal U.S. dataset, Croninger et al. (2007) reported that teachers’ degree levels were unrelated to students’ academic achievement. One possible explanation for our finding is that teachers with advanced degrees may have higher expectations when grading. Alternatively, as Croninger et al. argued, degree subjects (e.g., education vs. non-education) may be more relevant to student achievement than degree levels. Unfortunately, in the absence of any degree-subject data or other pertinent variables such as years of teaching experience, it would be premature to speculate too freely on this issue based on data from such a small number of instructors.
The students of iEducators tended to achieve lower final scores than others, perhaps because the sampled iEducators had the least teaching experience of the target school’s four instructor types. As such, our data broadly support Kemp’s (2011) finding that instructors’ online-teaching experience positively predicted students’ learning outcomes.

5.3. Course-level factors

At the course level, our finding that project-based assignments benefited online ELL students’ performance more than text-based assignments did is consistent with Merrill’s (2002) First Principles of Instruction, which from a psychological perspective highlight the importance of problem-centeredness in education. Moreover, prior studies have mostly used course-quality rubrics to examine the relationships between course design and learning outcomes (e.g., Jaggars & Xu, 2016): an approach that does not yield specific guidelines for online-course designers or instructors.
Our results also showed that the provision of multimedia-based resources had a negative impact on final grades. One possible explanation for this involves the relevance of such resources. Though we did not specifically code their relevance to course objectives, it was apparent that many of these multimedia resources were used to engage students, rather than to facilitate their learning. As such, it is worth wondering whether the sampled students found it difficult to juggle a combination of learning and non-learning materials – especially since research conducted in higher-education settings has reported that online students can be distracted by non-learning activities or materials (Winter, Cotton, Gavin, & Yorke, 2010). Therefore, future research should try to determine the optimal relative proportions of engagement-enhancing materials and learning materials per se in various types of courses.
Our quantitative analysis of the participants’ online discussions indicated that these did not significantly predict their final grades: a finding at odds with those of many prior studies that have highlighted such discussions’ achievement benefits (for a review, see Ding, Kim, & Orey, 2017). As noted above, however, our content analysis revealed that much of the activity initially classified as “online discussion” did not actually require students to read or respond to others’ work or comments – violating Hew, Cheung, and Ng’s (2010) principle that the benefits of online discussion are most likely to occur when students are engaged in meaningful social interactions. In combination with Oliver and Kellogg’s (2015) finding that K-12 online courses tend to focus on independent work, our results imply that more advantage should be taken of the learning affordances of online discussion forums.
Though Oliver and Kellogg (2015) speculated that higher-level knowledge activities might help credit-recovery students, our study found the opposite. Once we controlled for the interactions between course-design features and credit-recovery status, the impact of credit recovery was no longer significant, implying that students’ reasons for taking online courses did not, in and of themselves, demonstrably affect their learning outcomes, and that the real driver of the apparent relationship between credit-recovery status and achievement lay elsewhere. Designing online ELL courses with more high-level knowledge activities might encourage students’ engagement and in-depth thinking, which could ultimately promote their learning (see Garrison, Anderson, & Archer, 2001; Grisham & Wolsey, 2006). However, our study showed that credit-recovery students received lower grades when their courses included more high-level knowledge activities, whereas non-credit-recovery students’ achievement benefited from such activities.

6. Implications

This study’s results have three important theoretical and practical implications. First, they suggest that if an online course’s design includes project-based assignments and higher-level knowledge activities, it will tend to improve students’ learning outcomes. Importantly, however, higher-level knowledge activities may not be helpful to credit-recovery students, and may even affect their learning negatively, especially if they lack foundational knowledge to build upon.
Second, this study implies that the target school, and quite possibly other virtual schools, need to pay much closer attention to the design of online discussion assignments. Our finding that online discussion did not benefit students’ learning outcomes should not be used as a basis for prescriptive guidelines for online-course design, let alone for arguments that the use of discussions should be curtailed. Rather, our qualitative analysis revealed that the great majority of “discussion” assignments in the eight sampled ELL courses barely scratched the surface of the affordances of discussion as that term is normally understood. Unless course designs demand and facilitate meaningful social interaction, online discussion may continue to exist in name only.
Third, although course guides made up the largest proportion of the text-based resources used in the sampled ELL courses, our findings suggest that online instructors and course designers should make more use of text-mapping and organizers, as these usually embody clearer pedagogical purposes than multimedia-based resources do. This finding, however, should not be taken as a reason to eliminate or decrease the number of multimedia-based resources in online ELL courses. Instead, we suggest that when selecting multimedia-based resources, instructors and course designers should tie them closely to pedagogical goals rather than using them only as a way to engage students.

7. Conclusions

The present study fills a major gap in the literature on K-12 online ELL courses by using actual LMS data and final scores, together with instructor- and course-level factors, to investigate the relationships among student activity, instructor characteristics, course-design elements, and students’ online-learning outcomes. At the student level, it found that better learning outcomes occurred when students took courses for non-credit-recovery reasons, logged in more times, and stayed logged in longer. At the instructor level, the students of teachers with master’s degrees had lower final grades than the students of teachers with bachelor’s degrees, and the students of iEducators had lower final grades than those of full-time non-lead instructors. Finally, from a course-design perspective, the inclusion of more project-based assignments and more higher-level knowledge activities benefited most students’ learning.
Several limitations of this study should be noted. First, our data lacked 1) key aspects of students’ non-educational background, such as ethnicity and socio-economic status, any of which might have mediated the effects of the student-, teacher-, and course-level factors that we observed; and 2) certain additional student-level factors, including the number of courses taken per semester and preexisting technical skills, that have previously been reported as influencing online-course outcomes (Oliver & Kellogg, 2015). Second, our sample was drawn from one subject area in a single virtual school, and additional studies will therefore be needed to determine whether our findings hold true for other subject areas or in other schools. Third, we were unable to obtain other sources of data to further explain our findings. Qualitative data such as teacher interviews might help to better explain why project-based and high-level knowledge activities benefited credit- and non-credit-recovery students differently.
Another promising avenue for future research would be a close qualitative examination of course-design elements, e.g., via interviews with the instructors and/or course designers, to triangulate how course elements were designed and how they might be modified to enhance students’ learning outcomes. In addition, we hope to answer Viano’s (2018) call for more research on credit-recovery courses. Specifically, further examination of the interaction between course designs and students’ credit-recovery objectives will help improve our theoretical and practical understanding of how the different needs of credit-recovery and non-credit-recovery students in K-12 online courses can be met.

CRediT authorship contribution statement

Binbin Zheng: Conceptualization, Methodology, Investigation, Formal analysis, Writing - original draft, Writing - review & editing. Chin-Hsi Lin: Investigation, Formal analysis, Writing - original draft, Writing - review & editing. Jemma Bae Kwon: Resources, Writing - original draft.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.compedu.2020.103851.



References

Adelstein, D., & Barbour, M. (2016). Building better courses: Examining the content validity of the iNACOL National Standards for quality online courses. Journal of Online
Learning Research, 2(1), 41–73.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A
revision of Bloom’s taxonomy of educational objectives (Complete edition). New York: Longman.
Barbour, M. K. (2017). K-12 online learning and school choice: Growth and expansion in the absence of evidence. In R. A. Fox, & N. K. Buchanan (Eds.), The wiley
handbook of school choice (pp. 421–440). Hoboken, NJ: Wiley Blackwell.
Borup, J., & Stevens, M. (2016). Factors influencing teacher satisfaction at an online charter school. Journal of Online Learning Research, 2(1), 3–22.
Borup, J., & Stevens, M. (2017). Using student voice to examine teacher practices at a cyber charter high school. British Journal of Educational Technology, 48(5),
1119–1130. https://doi.org/10.1111/bjet.12541.
Cavanaugh, C. (2001). The effectiveness of interactive distance education technologies in K-12 learning: A meta-analysis. International Journal of Educational
Telecommunications, 7(1), 73.
Croninger, R. G., Rice, J. K., Rathbun, A., & Nishio, M. (2007). Teacher qualifications and early learning: Effects of certification, degree, and experience on first-grade student achievement. Economics of Education Review, 26(3), 312–324. https://doi.org/10.1016/j.econedurev.2005.05.008.
Ding, L., Kim, C., & Orey, M. (2017). Studies of student engagement in gamified online discussions. Computers & Education, 115, 126–142. https://doi.org/10.1016/j.
compedu.2017.06.016.
DiPietro, M., Ferdig, R., Black, E., & Presto, M. (2010). Best practices in teaching K-12 online: Lessons learned from Michigan virtual school teachers. The Journal of Interactive
Online Learning, 9(3), 10.
Freidhoff, J. (2017). Michigan’s K-12 virtual learning effectiveness report 2015-2016. Michigan Virtual University. Retrieved from https://mvlri.org/research/publications/michigans-k-12-virtual-learning-effectiveness-report-2015-16/.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal
of Distance Education, 15(1), 7–23. https://doi.org/10.1080/08923640109527071.
Grisham, D. L., & Wolsey, T. D. (2006). Recentering the middle school classroom as a vibrant learning community: Students, literacy, and technology intersect. Journal
of Adolescent & Adult Literacy, 49, 648–660. https://doi.org/10.1598/JAAL.49.8.2.
Hew, K. F., Cheung, W. S., & Ng, C. S. L. (2010). Student contribution in asynchronous online discussion: A review of the research and empirical exploration.
Instructional Science, 38, 571–606. https://doi.org/10.1007/s11251-008-9087-0.
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/
1049732305276687.
Hung, J.-L., Hsu, Y.-C., & Rice, K. (2012). Integrating data mining in program evaluation of K–12 online education. Educational Technology & Society, 15(3), 27–41.
Ingerham, L. (2012). Interactivity in the online learning environment: A study of users of the North Carolina virtual public school. Quarterly Review of Distance Education, 13(2), 65–75.
International Association for K-12 Online Learning. (2011). National standards for quality online courses version 2. Vienna, VA: Author. Retrieved from https://www.
inacol.org/wp-content/uploads/2015/02/national-standards-for-quality-online-courses-v2.pdf.
Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers & Education, 95, 270–284. https://doi.org/10.1016/j.
compedu.2016.01.014.
Kemp, M. S. (2011). Student and teacher predictors on credit recovery program outcomes: A national analysis. Gainesville, FL: University of North Florida. Available from ProQuest. (UMI No. 3458924).
Larkin, I., Lokey-Vega, A., & Brantley-Dias, L. (2018). Retaining K-12 online teachers: A predictive model for K-12 online teacher turnover. Journal of Online Learning
Research, 4(1), 53–85.
Lin, C.-H. (2019). Auto-grading versus instructor-grading in online English courses. Lansing, MI: Michigan Virtual Learning Research Institute.
Lin, C.-H., Kwon, J., & Zhang, Y. (2019). Online self-paced high-school class size and student achievement. Educational Technology Research and Development,
67, 317–336. https://doi.org/10.1007/s11423-018-9614-x.
Lin, C.-H., & Zheng, B. (2015). Teaching practices and teacher perceptions in online world language courses. Journal of Online Learning Research, 1(3), 275–303.
Liu, F., & Cavanaugh, C. (2011). High enrollment course success factors in virtual school: Factors influencing student academic achievement. International Journal on E-Learning, 10, 393–418.
Margaryan, A., Bianco, M., & Littlejohn, A. (2015). Instructional quality of massive open online courses (MOOCs). Computers & Education, 80, 77–83. https://doi.org/10.1016/j.compedu.2014.08.005.
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1–47.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research & Development, 50(3), 43–59. https://doi.org/10.1007/BF02505024.
Oliver, K., & Kellogg, S. (2015). Credit recovery in a virtual school: Affordances of online learning for the at-risk student. Journal of Online Learning Research, 1(2),
191–218.
Oviatt, D. R. (2017). Online students’ perceptions and utilization of a proximate community of engagement at an online independent study program (Doctoral dissertation). Brigham Young University. Available from ProQuest. (UMI No. 10268381).
Pazzaglia, A. M., Clements, M., Lavigne, H. J., & Stafford, E. T. (2016). An analysis of student engagement patterns and online course outcomes in Wisconsin (REL
2016–147). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional
Assistance, Regional Educational Laboratory Midwest. Retrieved from http://ies.ed.gov/ncee/edlabs.
Powell, A., Roberts, V., & Patrick, S. (2015). Using online learning for credit recovery: Getting back on track to graduation. Vienna, VA: iNACOL, The International Association for K-12 Online Learning. Retrieved from http://www.inacol.org/wp-content/uploads/2015/09/iNACOL_UsingOnlineLearningForCreditRecovery.pdf.
Queen, B., & Lewis, L. (2011). Distance education courses for public elementary and secondary school students: 2009-10 (No. NCES 2012-008). Washington, DC: U.S.
Department of Education, National Center for Education Statistics.
Quiroz, R. E., Ritter, N. L., Li, Y., Newton, R. C., & Palkar, T. (2016). Standards based design: Teaching K-12 educators to build quality online courses. Journal of Online
Learning Research, 2(2), 123–144. Waynesville, NC USA: Association for the Advancement of Computing in Education (AACE).
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Taylor, S., Clements, P., Heppen, J., Rickles, J., Sorensen, N., Walters, K., et al. (2016). Getting back on track: The role of in-person instructional support for students taking online credit recovery. Washington, DC: American Institutes for Research. Retrieved from http://www.air.org/system/files/downloads/report/In-Person-Support-Credit-Recovery.pdf.
U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse. (2015). Dropout prevention intervention report: Credit recovery programs. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_credit_050515.pdf.
Viano, S. L. (2018). At-risk high school students recovering course credits online: What we know and need to know. American Journal of Distance Education, 32(1),
16–26. https://doi.org/10.1080/08923647.2018.1412554.
Watson, J., Pape, L., Murin, A., Gemin, B., & Vashaw, L. (2015). Keeping pace with K-12 digital learning: An annual review of policy and practice. Retrieved from http://
www.kpk12.com/wp-content/uploads/Evergreen_KeepingPace_2015.pdf.
Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73(1), 89–122. https://doi.org/10.2307/3516044.
Winter, J., Cotton, D., Gavin, J., & Yorke, J. D. (2010). Effective e-learning? Multi-tasking, distractions and boundary management by graduate students in an
online environment. ALT-J, 18(1), 71–83. https://doi.org/10.1080/09687761003657598.


Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education, 20(2), 65–77. https://doi.org/10.1207/
s15389286ajde2002_2.
Zandberg, I., & Lewis, L. (2008). Technology-based distance education courses for public elementary and secondary school students: 2002-03 and 2004-05. Statistical analysis report. NCES 2008-008. National Center for Education Statistics. Retrieved from http://eric.ed.gov/?id=ED501788.
Zhang, Y. (2017). The effects of community of inquiry, learning presence, and mentor presence on learning outcomes: A revised community of inquiry model for K-12 online
learners (Doctoral dissertation). Available from: ProQuest. (UMI No. 10810605).
Zhang, Y., & Lin, C.-H. (2019). Student interaction and the role of the instructor in a virtual high school: What predicts online learning satisfaction? Technology, Pedagogy and Education, 1–20. Advance online publication. https://doi.org/10.1080/1475939X.2019.1694061.

