
Computers & Education 152 (2020) 103890


Elements of Success: Supporting at-risk student resilience through learning analytics

Jae-Eun Russell a,*, Anna Smith a, Russell Larsen b,c

a The Office of Teaching, Learning & Technology, University of Iowa, USA
b Department of Chemistry, University of Iowa, USA
c Department of Chemistry, University of Houston, USA

* Corresponding author. The Office of Teaching, Learning & Technology, 2800 UCC, University of Iowa, Iowa City, IA, 52242, USA. E-mail address: jae-russell@uiowa.edu (J.-E. Russell).

https://doi.org/10.1016/j.compedu.2020.103890
Received 9 January 2019; Received in revised form 25 March 2020; Accepted 27 March 2020; Available online 30 March 2020

Keywords: Learning analytics platform; Resilience; Performance feedback; At-risk students; Self-regulated learning

Abstract

Strategies that incorporate learning analytics are continuing to advance as higher education institutions work to promote student success. Although many of the early learning analytics applications were intended to help teaching professionals to identify at-risk students, some learning analytics applications display information on course progress and performance directly to students. While positive associations between student use of learning analytics and achievement have been reported, some have expressed concern that for at-risk students, low estimated grades might induce negative emotions, which could lead to disengagement or even withdrawal. However, few studies have examined the effects of such applications on at-risk students. Elements of Success is a learning analytics platform that provides students with weekly performance feedback, including a current estimated grade. This study examined the relationship between student use of Elements of Success and academic performance among at-risk students in an introductory chemistry course. Specifically, we compared final grade outcomes and the risk of withdrawal among students who received a low estimated grade after the first midterm. Results indicated that viewing performance feedback, including a low estimated grade, was not associated with withdrawal from the course for at-risk students. Furthermore, at-risk students who used Elements of Success were found to be resilient. After controlling for prior learning outcomes, demographics, and self-reported study skills, it was found that they were more likely to earn a final passing grade (C- or above) than at-risk students who never used it. The results and limitations are further discussed.

1. Introduction

As institutions work to promote student success, strategies that use methods of learning analytics to inform decisions and trigger
actions are increasingly being applied. The pervasive use of digital learning materials and academic technology (e.g., Learning
Management Systems) has resulted in the availability of large, real-time learning data that record student paths through the digital
landscape. The detailed nature of raw interaction data leads to the challenges associated with large data sets but holds the promise of
providing the details necessary to capture the complexities of the learning processes. Analytics methods are being developed and
applied to mine such data to gain insight and to prompt action toward achieving desired outcomes, such as lowering course nonsuccess rates, increasing retention rates, and decreasing the time to degree (Campbell, DeBlois, & Oblinger, 2007; Hu, Lo, & Shih, 2014).
Many early learning analytics applications were designed as early alert systems, intended to help teaching professionals (instructors, academic advisors, or academic success counselors) direct their efforts toward students who are likely to benefit (Cai, Lewis,
& Higdon, 2015; Campbell et al., 2007; Macfadyen & Dawson, 2010; Tampke, 2013). Alternatively, some applications display
feedback on course progress directly to the learner, aiming to inform student decisions, facilitate reflection (Daley, Hillaire, &
Sutherland, 2016), and promote self-regulation of learning efforts (e.g., Arnold & Pistilli, 2012; Van Horne et al., 2018; Corrin & de
Barba, 2014; Huberth, Chen, Tritz, & McKay, 2015). Multiple studies have reported effective learning analytics-based interventions
with associated positive student perception of their use (e.g., Roberts, Howell, & Seaman, 2017; Tan, Koh, Jonathan, & Yang, 2017).
Further, recent studies have shown positive associations between the use of learning analytics dashboards and student engagement and
learning outcomes (e.g., Aljohani et al., 2019; Van Horne et al., 2018; Huberth et al., 2015; Kim, Jo, & Park, 2016). Dashboards can
provide students with efficient ways to monitor their learning, motivate them to create realistic goals, and empower them by providing
evidence of the central role of the learner in working toward goal attainment (Bodily & Verbert, 2017). However, little is known about
the impact of learning analytics dashboards containing performance feedback, such as an estimated course grade, for at-risk students in
traditional courses. Indeed, educators and researchers have expressed concern that exposure to performance feedback such as an
estimated course grade could discourage low-performing students and lead them to give up or withdraw from the course (Teasley,
2017). Previous research has linked emotions and achievement and reported that negative emotions negatively influence achievement
(e.g., Pekrun, Lichtenfeld, Marsh, Murayama, & Goetz, 2017; Valiente, Swanson, & Eisenberg, 2012). Similarly, motivational research
has reported that negative feedback on performance is associated with discouragement (Anderson & Rodin, 1989), decreased motivation (Deci & Cascio, 1972; Vallerand & Reid, 1984), and can contribute to the development of anxiety and hopelessness (e.g.,
Pekrun, 2006). Such results prompt the concern that at-risk students could develop negative feelings from viewing a low estimated
grade when such performance information is directly delivered to them via learning analytics applications.
This study investigated the relationship between students’ use of a learning analytics platform, Elements of Success (EoS), and their
performance outcomes in a traditional course. In particular, we focused on the students who received low estimated grades after the
first midterm exam. Low estimated grades (e.g., D, F) could be seen as negative feedback that might induce negative emotions (e.g.,
frustration, anxiety, hopelessness), which could lead to disengagement or even withdrawal.
Although scores for assignments and exams are available to students via the Learning Management System (LMS) gradebook, the
EoS graphical display provides a broader context, which may help students derive meaning beyond the point value of their scores.
Hurwitz and Lee (2018) indicated that the cumulative grade point average (GPA) among high school graduates is 2.94 and that the
percentage of seniors who have an A average is 47 percent. In other words, many first-year students who are unfamiliar with college
grading norms may have unrealistic expectations of final grades, even with evidence of poor performance in a course (Jensen & Moore,
2008). Therefore, a dashboard may contextualize the scores to better enable students to set realistic goals, monitor their progress, and
adjust their behaviors toward goal attainment. To further assist students who are transitioning into a new learning environment, the
dashboard can also be used to deliver relevant information, directing students to campus learning resources or specific study strategies
that enhance learning in the course. In small courses, such timely, personalized advice might be provided by an instructor. However, in
large courses, learning analytics can offer an alternative method with which to deliver tailored, timely advice to students based on
course data. In this way, instructors and course designers can use the tools provided by a learning analytics platform to build feedback
that is supportive and promotes reflection on meaningful and contextualized information and data. In the current implementation of
EoS, the goal is to enable students to better monitor their progression and help them to make informed choices so that they can redirect
their efforts to improve their trajectory (Jayaprakash, Moody, Lauría, Regan, & Baron, 2014) through the course at a pace that will lead
to successful outcomes. This goal prompts the following two research questions that are considered in this work:

1. What is the association between students viewing performance feedback in EoS, including a current estimated grade, and the
likelihood that a student with a low estimated grade will withdraw from the course?
2. What is the association between students viewing performance feedback in EoS, including a current estimated grade, and the
likelihood that a student with a low estimated grade will increase their course performance?

2. Review of literature

2.1. Academic risk factors

Student retention is a challenge in higher education. First-year students, in particular, have a high attrition rate (Credé & Niehorster, 2012; Willcoxson, Cotter, & Joy, 2011; Zanden, Denessen, Cillessen, & Meijer, 2018). First-year students encounter multiple challenges as they transition into higher education, including greater academic demands and the adoption of a new social environment with a lower level of academic structure (Credé & Niehorster, 2012; Respondek, Seufert, Stupnisky, & Nett, 2017). The obstacles to
student success are multifaceted and complex; however, failing a course is a major contributor to student attrition (Stewart et al., 2011)
and is related to both student characteristics and instructional practices (Sarra, Fontanella, & Di Zio, 2019). Over the previous several
decades, educators and researchers have investigated factors associated with student academic success and failure and have provided
insight on how to lower barriers in order to improve outcomes.
Student academic competence, with high school grades serving as the most common measure, has been identified as one of the
strongest predictors of first-year academic achievement by multiple research studies (e.g., Hamm, Perry, Chipperfield, Parker, &
Heckhausen, 2018; Mathiasen, 1984). In addition, student resilience, along with academic competence, is associated with academic success (Sarra et al., 2019). Resilience refers to the process or outcome of successful adaptation despite challenging circumstances
(Howard & Johnson, 2010). Resilient students tend to bounce back from a poor result, are able to deal with setbacks and challenges,
and do not let an undesirable result affect their confidence (Sarra et al., 2019). Sarra et al. (2019) found that the group with the highest risk
for academic failure was associated with low resilience. Martin and Marsh (2006) proposed multiple constructs related to resilience,
including self-efficacy, planning, and control. Within these constructs, the academic resilience of students tends to improve as the
students’ ability to plan and manage their studies develops. To foster growth, instructors need to define clear expectations, encourage
students to set meaningful goals, and provide mechanisms for students to use to monitor progress toward their achievement (Martin &
Marsh, 2006).
Significant differences in self-regulated learning skills between high- and low-achieving students have been identified in multiple studies (e.g., DiFrancesca, Nietfeld, & Cao, 2016; Doğanay & Demir, 2011; Ruban & Reis, 2006; VanZile-Tamsen & Livingston, 1999).
For example, DiFrancesca et al. (2016) found that high-achieving college students have more accurate metacognitive monitoring skills
when compared to low-achieving students. That is, even when high- and low-achieving students performed similarly, low-achieving
students tended to have overconfidence at the beginning of the course (DiFrancesca et al., 2016; Elder, Jacobs, & Fast, 2015). Given the
tendency of low-achieving students to inaccurately monitor their actual progress of learning, a likely result is that such students miss
opportunities to improve their performance due to misdirected adjustments in their goal-striving efforts (Perry, Hladkyj, Pekrun, &
Pelletier, 2001). Further, perceived academic control was found to be an important predictor of academic achievement, and
low-achieving students often perceive a lack of control over their learning (Perry et al., 2001). Perceived academic control is a person’s
belief in his or her influence over the success or failure of achievement outcomes (Perry et al., 2001; Respondek et al., 2017). Perry
et al. (2001) reported that students with high academic control put in more effort, were more motivated, and used self-monitoring strategies more often than students with low academic control.
The effectiveness of interventions promoting self-regulated learning has been widely documented by multiple research studies (e.
g., DiFrancesca et al., 2016; Hamm et al., 2018; Ruban & Reis, 2006). In their study, Hamm et al. (2018) analyzed the effectiveness of
increasing goal engagement on student performance in online learning environments. They found that enhancing goal engagement
improved academic outcomes for college students with a low high school GPA. In addition, they noted that low- and high-achieving
students have been observed to use different study strategies. Low-achieving students typically engage in low-level learning strategies
such as reviewing notes, creating flashcards, and engaging in routine memorization of the material (DiFrancesca et al., 2016). In contrast, high-achieving students use more sophisticated strategies to study and process material at a deeper level, such as concentrating on main ideas and relationships using concept maps, seeing patterns and the big picture in a large amount of written material,
and creating topic summaries based on the course content (Ruban & Reis, 2006). Interestingly, some research has shown that between
groups with no initial differences in prior knowledge, general ability, or self-efficacy (DiFrancesca et al., 2016), over the duration of the
course, significant differences in resilience, academic control, monitoring accuracy, and specific self-regulated learning skills emerge
as differentiating factors between high- and low-achieving students.
Additional factors have been identified by research on achievement emotions, which indicates that emotions such as joy and hopelessness are related to success or failure (Pekrun, 2006). Positive emotions positively predict achievement, and negative emotions
negatively predict achievement (Pekrun et al., 2017). However, Pekrun, Goetz, Titz, and Perry (2002) indicated that the effects of these
emotions on performance can be mediated by other factors, such as effort. Moreover, a study conducted by Artino and Jones (2012)
found that negative emotions (e.g., frustration) emerged as a positive predictor of metacognition. Students who were more frustrated
put more effort into metacognitive strategies during the course (Artino & Jones, 2012).

2.2. Learning analytics for student success

As the use of digital learning materials and academic technologies has rapidly increased in classrooms, learning analytics and its
applications have been implemented to improve student success in higher education (Campbell et al., 2007). The primary goal of
learning analytics has been to understand the relationship between student engagement and performance and to identify the students
who are likely to benefit from defined interventions (Baneres, Rodríguez-Gonzalez, & Serra, 2019; Cohen, 2017; Costa, Fonseca,
Santana, De Araujo, & Rego, 2017; Lu, Huang, Huang, & Yang, 2017; Saqr, Fors, & Tedre, 2017). The early focus of learning analytics
applications was on identifying significant variables that contribute to learning outcome predictions to provide information to academic staff in order to improve decision-making and implement timely interventions for at-risk students (Lonn, Aguilar, & Teasley,
2015; Lu et al., 2017). Previous work has revealed a significant relationship between early grades and final grades (e.g., Dambolena,
2000; Jensen & Barron, 2014; Ramanathan & Fernandez, 2017). For example, Jensen and Barron’s (2014) study indicated that early
grades are strong predictors of course grades. The study reported that 67% of first-year students’ grades did not change from the
midterm, and 43.7% of them did not change from the first exam to the end of the semester. Further, several studies concluded that
students’ final academic performance can be accurately predicted early in the semester (e.g., Lu et al., 2018; Macfadyen & Dawson,
2010) and that these predictions can be coupled with early interventions to prompt underperforming students to use the remaining
time in the course to increase their performance. For example, Lu et al.’s (2017) study reported that students who received interventions from an instructor based on the result of learning analytics (e.g., a list of students who are at risk and in need of intervention) improved their learning outcomes in blended learning courses. A more recent study (Lu et al., 2018) reported that students’
final academic performance could be predicted in the first third of the semester, enabling action.
Macfadyen and Dawson’s (2010) study indicated that their prediction model correctly identified 81% of the students who received
a failing grade. Most grade prediction models include student background data, interactional data from the Learning Management
System (LMS), and performance scores of formative and summative assessments. Among multiple variables, performance scores of

3
J.-E. Russell et al. Computers & Education 152 (2020) 103890

formative assessments seem to be the best predictor for detecting underperforming students, whereas basic LMS data alone do not
predict learning outcomes (Tempelaar, Rienties, & Giesbers, 2015).
Lately, learning analytics applications have been extended to provide performance feedback directly to students (e.g., Arnold & Pistilli, 2012; Van Horne et al., 2018; Corrin & de Barba, 2014; Duval, 2011; Kim et al., 2016) to increase student self-awareness and promote student engagement in self-regulated learning. A number of studies have indicated that the majority of students are interested
in learning analytics and welcome alert systems (e.g., Atif, Bilgin, & Richards, 2015; Corrin & de Barba, 2014; Roberts et al., 2017). Atif
et al.’s (2015) study reported that 90% of students are interested in receiving alerts if their performance drops. Interestingly, Roberts
et al. (2017) reported that students prefer automated alerts over alerts generated by teaching staff.
Previous studies have reported the positive effect of an early alert system on student academic achievement. Arnold and Pistilli
(2012) discussed Course Signals at Purdue, which helped students better understand where they stood; their work showed that the use of simple notification interventions can have a significant effect on student behavior. Dwyer,
Williams, and Pribesh (2019) also found a positive impact of an early alert system on student persistence. Students who received an
early warning flag were more likely to persist (Dwyer et al., 2019). In their study, Cai et al. (2015) reported a significant association
between the alert messages students received and their visits to the tutoring center.
A number of student-facing dashboards present learning progress information, including a visual overview of a student’s activities
and how they relate to those of their peers, to support self-monitoring for students (Van Horne et al., 2018; Duval, 2011). The main
purpose of student-facing learning analytics dashboards is to allow users to act on the information in ways that benefit students, either directly or indirectly (Lonn et al., 2015). However, research on the impact of student use of learning analytics dashboards has produced mixed findings.
Several studies have examined students’ perceptions of learning analytics dashboards (e.g., Corrin & de Barba, 2014; Tan et al.,
2017). In one such study, Tan et al. (2017) used a combination of questionnaires and focus group interviews of student dashboard
users. From these interviews, the investigators concluded that fostering self-reflection, prompting intentional goal-setting, enhancing
learning motivation, and sustaining engagement were perceived as the benefits of learning analytics dashboard use. However, these
same investigators warned that such norm-referenced feedback should be used cautiously since it can cause low-achieving students to
become less motivated. In addition, Corrin and de Barba (2014) found that students utilized dashboard information to reflect on their
learning progress and make plans, but the extent to which these actions directly impacted engagement and performance was not clear.
The positive association between students’ use of learning analytics dashboards and their course performance, motivation, and
satisfaction has been documented (e.g., Van Horne et al., 2018; Huberth et al., 2015; Kim et al., 2016; Saul & Wuttke, 2014). Huberth
et al.’s (2015) study examined the effectiveness of a dashboard, E2Coach, by comparing learning outcomes for students who used it to
those who did not, and found that its use was associated with improved performance, particularly for frequent users. E2Coach provides
an interactive grade prediction tool that shows each student their predicted grade based on their current performance, along with
normative information about the study habits and recommendations of prior students. Kim et al. (2016) also found that the students
who used the dashboard achieved higher scores than those who did not, but Kim et al. (2016) indicated that the frequency of its usage
did not have a significant impact on learning achievement. On the other hand, a few studies have reported inconclusive effects or
nonsignificant effects of student use of learning analytics on their achievement (e.g., Baneres et al., 2019; Park & Jo, 2015).

2.3. Elements of Success

Elements of Success (EoS) is a learning analytics platform that provides students with timely performance feedback and data vi­
sualizations. EoS has been implemented in large courses in five departments (Chemistry, Biology, Mathematics, Economics, and
Human Health & Physiology) at a large public Midwestern university. Students can access EoS from the course site of the LMS, and it
consists of three components. First, it provides a visual chart for all grade categories that compares the student’s category scores to the
class median and category total (Fig. 1). At a glance, students can understand their performance in each grade category on relevant
absolute and normative course scales. Second, it shows a student’s weekly overall progress compared with A, B, and C grade cut-points
(Fig. 2), so students can judge any gap that exists between their current grade and their grade goal. Lastly, after each major benchmark
(e.g., exam), it provides a current estimated grade using a textbox to supplement the graphical display (Fig. 3). Unlike grade prediction
models, EoS’s grade estimation is calculated solely based on students’ up-to-date accumulated points from all graded items. It does not
include any predictive variables to predict the final grade (e.g., high school GPA). It is only predictive to the extent that prior outcomes predict future performance.

Fig. 1. Graphs for each grade item category. Exact scores are shown when a user moves the pointer over each grade category.


Fig. 2. Weekly Progress. It updates every week. Exact total scores are shown when a user moves the pointer over each point on the line.

Fig. 3. Estimated Current Grade. It updates after each major benchmark.

Students typically receive the first estimated grade in the fifth week of the semester, depending on the
course’s first exam schedule. This grade estimate provides a snapshot of each student’s current status at each time point. Through EoS,
students understand how they are doing early in the semester, with fewer than 25% of the course points completed, so students still
have a window of opportunity to improve their performance before the end of the course (Fig. 4).

Fig. 4. Window of opportunity for improving performance.
EoS uses data from the LMS gradebook; its information is updated as scores are updated within the LMS through synchronization
with the publisher-based homework system and by external exam and participation scores input by the instructors. EoS is generally
available to students from the third week of the semester until the last week of the semester. When EoS opens to the students, in­
structors are encouraged to communicate with their students regarding what EoS is and how to use it effectively. Many instructors also
send their students an email reminder to visit EoS after each major benchmark. Student adoption rates of EoS vary across courses,
ranging between 65% and 95%, with an adoption rate of 93.4% in the courses used in this study.

3. Data and methods

3.1. Data description

3.1.1. Course performance scores and EoS visits


We examined the student data of the General Chemistry I course in Fall 2016 and Fall 2017. The following were the primary reasons
to examine this course: 1) the large enrollment limits the instructor’s ability to make personalized interventions with at-risk students,
2) the majority of the students are first-year students, transitioning from high school and likely unfamiliar with a large course setting
and different grading systems, 3) this course provides frequent formative assessments, so students can monitor their progress between
exams, and 4) the LMS gradebook is maintained and updated so that student course data is reliable.
All data were de-identified, and the research was reviewed and approved by the institutional research review board. General
Chemistry I is an introductory, non-lab chemistry course. This course meets a general education requirement for students who do not
plan on taking advanced chemistry and also serves those who do not have prior experience with chemistry or who require additional
preparation before taking the lab-based chemistry sequence. Around 1400 students take this lecture-based course each fall. The vast
majority (~82%) of enrolled students in Fall 2016 and Fall 2017 were first-year students. In addition, 69.4% of the students were
female, and 17.8% were underrepresented minority students (Table 1). Based on first-year survey responses, many students report that
this course is one of the most challenging of their first semester courses.
The course has three midterms (47% of the final grade) and a final exam (21% of the final grade). Low-stakes assignments
composed of online homework assignments, quizzes, and participation points make up the rest of the final grade (32%). Students who
did not take the first exam or who withdrew prior to the first exam were excluded from our analysis (~2% of students).
Our analyses focused on students who were considered at risk of receiving a low final grade in the course based on their estimated
grade after the first midterm exam. Estimated grades were calculated by summing the accumulated points in each category and
calculating a weighted score based on the end-of-semester category weights (exams, 68%; homework, 20%; quizzes, 8%; and activity/
discussion, 4%). This is equivalent to making the assumption that the prior student work in each category is a reasonable predictor of
future work in the same category. The weighted scores were then used to calculate the corresponding grade by applying the course
grading scale. Overall, the agreement between the estimated grade after the first midterm and final grade was moderate (weighted
Kappa = .67), and after the second midterm, the weighted Kappa was .78.
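To make the weighting step concrete, the following minimal Python sketch reproduces the estimation logic described above under stated assumptions: the category weights are those reported in this section, while the letter-grade cut-offs, function name, and input format are illustrative placeholders rather than the course's actual values.

```python
# Minimal sketch of the estimated-grade calculation described above.
# Category weights come from the text (exams 68%, homework 20%, quizzes 8%,
# activity/discussion 4%); the letter-grade cut-offs below are hypothetical
# placeholders, not the course's actual grading scale.

CATEGORY_WEIGHTS = {"exams": 0.68, "homework": 0.20, "quizzes": 0.08, "activity": 0.04}
GRADE_SCALE = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]  # illustrative only

def estimated_grade(points_by_category):
    """points_by_category maps each category to (points_earned, points_possible)
    for the work graded so far; each category percentage is assumed to stand in
    for future work in that same category."""
    weighted_pct = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        earned, possible = points_by_category[category]
        weighted_pct += weight * 100.0 * earned / possible
    for cutoff, letter in GRADE_SCALE:
        if weighted_pct >= cutoff:
            return weighted_pct, letter

# Example: a hypothetical student shortly after the first midterm.
print(estimated_grade({"exams": (48, 100), "homework": (55, 60),
                       "quizzes": (18, 25), "activity": (10, 10)}))
```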
In the analyses, we identified two levels of risk: high and moderate. The high-risk group was composed of students who received an
estimated grade of a D+ or lower after the first midterm. Three hundred and ninety-four students (13.8% of the class) received an estimated grade of a D+ or lower, and of these students, 29% withdrew from the course, 42% earned a final grade of a D+ or lower, and 29% earned a final grade of C- or above. The moderate-risk group was composed of students who received an estimated grade that was either a C or C-, which was below average for the class. The average final grade was between a B- and a C+. Among the 554 students (19.3% of the class) who received an estimated grade of a C or C- after the first midterm, 7% withdrew from the course, 12% earned a final grade of a D+ or lower, and just under half (49%) earned a final grade of a C or C-. In contrast, 89% of the students in the low-risk group earned a C+ or higher in the course. Although EoS does not use a predictive model, using the first-grade estimate to predict DFW status (students who receive a final grade of D or F, or who withdraw), the area under the curve (AUC) was .92. In addition, the R2 for the final grade was 0.73. Thus, we feel that the estimates provided accurate information to the students.
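As an illustration of how diagnostics of this kind can be computed, the short sketch below uses scikit-learn's roc_auc_score together with a squared correlation for R2; the arrays are hypothetical stand-ins for the de-identified course data, not the study's values.

```python
# Sketch: evaluating how well an early estimated score separates DFW students
# (AUC) and tracks the final grade (R^2). The data below are made-up placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

estimated_score = np.array([45.0, 78.5, 62.0, 91.0, 55.5, 70.0])  # weighted % after midterm 1
dfw_status = np.array([1, 0, 1, 0, 1, 0])                          # 1 = final D, F, or withdrawal
final_grade = np.array([0.7, 3.0, 1.3, 3.7, 1.0, 2.3])             # final grade on a 4-point scale

# Higher estimated scores should mean lower DFW risk, so score with the negative.
auc = roc_auc_score(dfw_status, -estimated_score)

# The R^2 of a simple linear fit of final grade on the estimated score equals the
# squared Pearson correlation between the two.
r2 = np.corrcoef(estimated_score, final_grade)[0, 1] ** 2

print(f"AUC = {auc:.2f}, R^2 = {r2:.2f}")
```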
Using the LMS trace data, we collected the date and time of each visit to EoS. Our analysis focused on the EoS visits that occurred
between the first and second midterm exams. All students in this course could view their current estimated grade at week 6 through
EoS after the first midterm exam. Scores of completed course work and student views of the course gradebook were pulled from the
LMS and standardized within each semester.

3.2. Self-reported study skills and grit scale

To control students’ general motivation and study skills in our analyses, we included student response data from a subset of survey
questions drawn from the university-wide survey of first-year students. We included the following: Consistency of Interest (grit),
Perseverance of Effort (grit), Study Skills, and Anticipated End-of-Semester GPA. All first-year undergraduate students complete the
survey in the first three to five weeks of their first semester at the university.
The Grit Scale included 8 questions and offered response options on a 5-point Likert-type scale ranging from 1, indicating “Not like
me at all,” to 5, indicating “Very much like me.” Grit entails “working strenuously toward challenges, maintaining effort and interest
despite failure” and measures “perseverance and passion for long-term goals” (Duckworth, Peterson, Matthews, & Kelly, 2007, p.
1087). The Grit Scale measures two latent constructs: consistency of interest (e.g., “New ideas and projects sometimes distract me from
previous ones”) and perseverance of effort (e.g., “Setbacks don’t discourage me”). Grit plays an important role in academic
achievement and likely impacts whether or not a student gives up (withdraws) or perseveres in the course. There were 7 items
concerning student study skills (e.g., “I review my notes before my next class session”). Students rated their agreement with each item using a 7-point Likert-type response scale with 1 indicating “Strongly Disagree” and 7 indicating “Strongly Agree.” For the GPA expectation, students were asked to choose one of nine options in half-unit increments, 1 = 0.0–0.5 to 9 = 4.0 or higher.

Table 1
Student demographics.
                   High-risk Group (n = 394)   Moderate-risk Group (n = 554)   Full Class (n = 2864)

                   n     %                     n     %                         n      %

Female             282   71.57%                387   69.98%                    1985   69.36%
Male               112   28.5%                 167   30.02%                    879    30.64%
First Generation   159   41.30%                201   36.88%                    908    32.89%
International      7     1.78%                 4     0.72%                     85     2.97%
URM                117   29.70%                121   21.84%                    509    17.77%
White              248   62.94%                402   72.56%                    2121   74.06%
Asian              21    5.33%                 22    3.97%                     130    4.54%

In addition, we included institutional data for each student, including gender, underrepresented minority (URM) status, and prior
learning outcomes (high school GPA), as well as course data of initial performance scores (first three homework scores before the first
midterm and first midterm score), the second midterm scores, and final course grades, in our analyses. For students receiving a final
grade of W for withdrawing after the third week of the course, the date of withdrawal was also recorded.

3.3. Data analysis

3.3.1. Survival analysis


In order to examine the risk of withdrawal of students, we utilized a Cox proportional hazards model. The Cox proportional hazards
model is a semi-parametric survival model used to model time-to-event data. This model was developed for clinical applications,
modeling time until the event of interest (such as death or disease progression) has occurred (Therneau & Grambsch, 2000). The Cox
proportional hazards model has two major components: the baseline hazard function (λ0), which models the likelihood of an event
occurring at time t across the time period of interest, and model coefficients depicting how covariates either increase or decrease the
probability of an event. This model is considered semi-parametric because no assumptions about the shape of the baseline hazard
function are made.
The Cox proportional hazards regression model has an important advantage over traditional logistic regression models because it
can handle time-dependent covariates (Therneau & Grambsch, 2000). Given that students can withdraw at any point after the first
midterm, not accounting for the time-dependent nature of exposure to the “treatment” (i.e., viewing EoS) can lead to an immortal time
bias (Jones & Fowler, 2016). This bias refers to the fact that a subject must survive long enough to be exposed to the treatment. In other
words, students who withdraw shortly after the first midterm are unlikely to view EoS. If a logistic regression model were used, these
students would be placed in the non-users group, inflating the relationship between EoS use and persistence in the course. The Cox
proportional hazards model can mitigate this bias by placing individuals in the “treatment” or “control” groups based on whether they
had been exposed to the treatment at each time point, meaning that an individual can be classified as an EoS user and non-user at
different time points.
To avoid the potential for bias, EoS use was defined by the time-dependent function Z = Z(t) as
Z(t) = 0 if a student had not visited Elements of Success by day t
Z(t) = 1 if the student had visited Elements of Success by day t
The date of the first exam was day 0.
Students still enrolled by the final day of class (day 87) were considered censored. The following model was used to estimate the
association between viewing an undesirable estimated final grade and the risk of withdrawing from the course:

$$\lambda(t, Z) = \lambda_0(t)\exp\big(\beta_1\,\mathrm{EoS\ Status}(t) + \beta_2\,\mathrm{URM} + \beta_3\,\mathrm{Gender} + \beta_4\,\mathrm{Grit}_1 + \beta_5\,\mathrm{Grit}_2 + \beta_6\,\mathrm{GPA\ Expectation} + \beta_7\,\mathrm{Study\ Skills} + \beta_8\,\mathrm{Exam\ 1} + \beta_9\,\mathrm{Avg\ Homework}\big)$$

For each group, this model compares the chance of withdrawal between a student who visited EoS and a student who did not visit at
each event time (day of withdrawal), but which group a student belongs to depends on whether or not they had visited EoS at that point
in time (between the first and second midterm). The risk group is summarized in Table 2. We included covariates that we considered
likely to be associated with withdrawal as control variables in the model. The following covariates were considered in our model:
demographic variables (gender and URM status), prior learning outcomes (high school GPA), performance scores at the beginning of
the semester (homework scores prior to the first midterm and the first midterm scores), and self-reported study skills, grit, and anticipated end-of-semester GPA. To increase the power of our analyses and avoid overfitting, control variables with p-values > .15 were removed sequentially from the model.
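This section does not specify the software used; as one possible illustration, the sketch below fits a Cox model with a time-dependent EoS indicator using the Python lifelines package. Each student is split into start-stop intervals so that the indicator switches from 0 to 1 on the day of the first EoS visit; the column names and the toy records are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): a time-dependent Cox model in
# which eos_visited flips from 0 to 1 on the day of a student's first EoS visit.
# Day 0 is the first midterm; students still enrolled at day 87 are censored.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame([
    # id, start, stop, withdrew (event on this interval), eos_visited, midterm1_z
    {"id": 1, "start": 0,  "stop": 12, "withdrew": 0, "eos_visited": 0, "midterm1_z": -1.2},
    {"id": 1, "start": 12, "stop": 40, "withdrew": 1, "eos_visited": 1, "midterm1_z": -1.2},
    {"id": 2, "start": 0,  "stop": 87, "withdrew": 0, "eos_visited": 0, "midterm1_z": -0.9},
    {"id": 3, "start": 0,  "stop": 5,  "withdrew": 0, "eos_visited": 0, "midterm1_z": -1.5},
    {"id": 3, "start": 5,  "stop": 87, "withdrew": 0, "eos_visited": 1, "midterm1_z": -1.5},
    {"id": 4, "start": 0,  "stop": 20, "withdrew": 1, "eos_visited": 0, "midterm1_z": -1.8},
    {"id": 5, "start": 0,  "stop": 8,  "withdrew": 0, "eos_visited": 0, "midterm1_z": -0.5},
    {"id": 5, "start": 8,  "stop": 60, "withdrew": 1, "eos_visited": 1, "midterm1_z": -0.5},
    {"id": 6, "start": 0,  "stop": 3,  "withdrew": 0, "eos_visited": 0, "midterm1_z": -1.0},
    {"id": 6, "start": 3,  "stop": 87, "withdrew": 0, "eos_visited": 1, "midterm1_z": -1.0},
])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="withdrew", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratios are exp(coef); the full model adds the other covariates
```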

Table 2
Mean and standard deviations of key variables by risk level.
Variable                              High-risk Group (N = 394)   Moderate-risk Group (N = 554)   Full Class (N = 2864)

n M SD n M SD n M SD

High School GPA 351 3.35 0.39 511 3.51 0.36 2650 3.67 0.40
Avg. of the first 3 homework scores 394 64.67 30.26 554 91.79 15.50 2864 93.94 19.09
Midterm 1 394 40.58 10.74 554 53.80 8.50 2864 68.88 18.25
Midterm 2 394 36.79 19.04 554 53.37 15.85 2864 66.06 21.83
Consistency of Interest (grit)a 298 3.13 0.87 471 3.23 0.84 2491 3.28 0.82
Perseverance of Effort (grit)a 298 3.66 0.67 471 3.78 0.64 2491 3.85 0.63
GPA Expectationb 298 6.66 0.89 471 6.95 0.82 2491 7.28 0.83
Study Skillsc 298 5.03 0.91 471 5.28 0.91 2491 5.36 0.87
Avg. Course Final Grade 281 1.22 0.70 517 1.97 0.50 2692 2.64 0.90

Note. High-risk group = Students who received a D+ or lower predicted grade after the first midterm. Moderate-risk group = Students who received a C or C- predicted grade after the first midterm.
a Grit is measured on a 5-point scale.
b GPA Expectation has 9 options, 1 = 0.0–0.5, 2 = 0.51–1.0, 3 = 1.01–1.5, 4 = 1.51–2.0, 5 = 2.01–2.5, 6 = 2.51–3.0, 7 = 3.01–3.5, 8 = 3.51–4.0, 9 = 4.0 or higher.
c Study skills are measured on a 7-point scale.


3.3.2. Logistic regression


We employed the logistic regression model to examine whether or not visiting EoS was associated with achieving a final grade that
was better than the first estimated grade. For the high-risk group, the outcome of interest was whether or not a student earned a passing
grade (i.e., C- or above). For the moderate-risk group, we were interested in whether a student earned an average or above average
final grade (i.e., Cþ or above). Given that our dependent variable was binary for both risk groups, a logistic regression model was well
suited to answer our second research question. The model is shown below.
$$\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1\,\mathrm{URM} + \beta_2\,\mathrm{Gender} + \beta_3\,\mathrm{High\ School\ GPA} + \beta_4\,\mathrm{Grit}_1 + \beta_5\,\mathrm{Grit}_2 + \beta_6\,\mathrm{GPA\ Expectation} + \beta_7\,\mathrm{Study\ Skills} + \beta_8\,\mathrm{Midterm\ 1} + \beta_9\,\mathrm{Avg\ Homework} + \beta_{10}\,\mathrm{Gradebook\ Visits} + \beta_{11}\,\mathrm{EoS\ Visits}$$

This model compares the likelihood of achieving a higher final grade than the estimated grade that was provided after the first
midterm between a student who visited EoS and a student who did not visit during the timeframe of interest. We included covariates
that we considered likely to be associated with performance outcomes as control variables in the model. These covariates were demographic variables, prior learning outcomes, performance scores at the beginning of the semester, self-reported study skills, grit,
anticipated end-of-semester GPA, and the student’s LMS gradebook visits. To increase the power of our analyses and avoid overfitting,
control variables with p-values > .15 were removed sequentially from the model.

4. Results

4.1. Descriptive statistics for at-risk students

The high-risk group had some indication prior to the first midterm exam that they might not be as academically successful as the
other students in the course (Table 2). The high-risk group, on average, had a slightly lower high school GPA than the class average.
They also had lower expectations for their first semester GPA and lower average study skills and grit, compared to the class average.
The moderate-risk group was fairly similar to the class average on most measures. In particular, this group already had fairly high
homework scores, earning 91.79% on average before the first midterm. The percentage of female students in both risk groups was
similar to the full class. However, the percentage of URM students in the risk groups was higher than the full class (Table 1).
On the other hand, there were no significant differences in self-reported study skills, either grit construct (consistency of interest and perseverance of effort), or GPA expectation between EoS users and non-EoS users in either risk group (Table 3). However, EoS users showed significantly higher homework points than non-EoS users before the first midterm in both risk groups (M = 72.48 vs. 50.81, M = 93.04 vs. 87.69, p < .001). In the high-risk group, EoS users had a higher high school GPA than non-EoS users, but their first midterm scores were very similar to non-EoS users’ scores (M = 40.36 vs. 40.97). In the moderate-risk group, there was no significant difference in high school GPA between EoS users and non-EoS users, and non-EoS users achieved significantly higher scores on the first midterm than EoS users (M = 53.19 vs. 55.81, p < .01).
EoS users in both risk groups achieved significantly higher scores on the second midterm (M = 40.32 vs. 30.53, p < .001, M = 54.53 vs. 49.56, p < .01) and higher course grades (M = 1.37 vs. 0.86, p < .001, M = 2.00 vs. 1.85, p < .01) than non-EoS users. In the high-
risk group, the percentages of female and URM students were similar between EoS and non-EoS users. In the moderate-risk group, the percentage of female students was higher among EoS users than among non-EoS users, whereas the percentage of URM students was the same (Table 4).

Table 3
Means and standard deviations of key variables by risk level and EoS use.
High-risk Group Moderate-risk Group

EoS Users (N = 252)   Non-Users (N = 142)   EoS Users (N = 425)   Non-Users (N = 129)

n M SD n M SD n M SD n M SD

High School GPA 222 3.39** 0.39 129 3.27 0.38 396 3.52 0.36 115 3.49 0.35
Avg. of the first 3 homework scores 252 72.48*** 25.45 142 50.81 33.11 425 93.04*** 13.97 129 87.69 19.23
Midterm 1 252 40.36 9.15 142 40.97 13.12 425 53.19** 7.98 129 55.81 9.79
Midterm 2 252 40.32*** 16.83 142 30.53 21.08 425 54.53** 14.95 129 49.56 18.05
Consistency of Interest (grit) 209 3.11 0.89 89 3.20 0.83 361 3.23 0.82 110 3.23 0.90
Perseverance of Effort (grit) 209 3.67 0.70 89 3.63 0.62 361 3.80 0.60 110 3.73 0.75
GPA Expectation 209 6.64 0.87 89 6.72 0.94 361 6.95 0.80 110 6.95 0.90
Study Skills 209 5.01 0.90 89 5.06 0.95 361 5.32 0.84 110 5.15 1.10
Avg. Course Final Grade 200 1.37*** 0.64 81 0.86 0.72 406 2.00** 0.49 111 1.85 0.52

Note. EoS users = Students who visited EoS between midterm 1 and midterm 2. ***p < .001, **p < .01.


4.2. Risk of withdrawal and EoS use

At week 15 of the course, among the 394 high-risk group students who had received an estimated grade of a D+ or lower after the
first midterm, 113 (28.7%) students had withdrawn, and 281 (71.3%) students remained enrolled. High school GPA, study skills, and
perseverance of effort (grit) were removed from the model based on our model selection criterion. The final model included 6 control
variables (URM status, gender, first 3 homework scores, first midterm scores, consistency of interest, and GPA expectation) and EoS
visits (Table 5). URM students were significantly less likely to withdraw after controlling for the other covariates in the model (Hazard
Ratio (HR) = 0.36, b = -1.01, SE = 0.32, p = .0013). Students with a higher anticipated end-of-semester GPA were less likely to withdraw (HR = 0.81, b = -0.22, SE = 0.10, p = .0308). Not surprisingly, students with higher midterm 1 and homework scores were less likely to withdraw (p = .0018, p < .0001). Visiting EoS was not related to the risk of withdrawal after controlling for URM status, gender, consistency of interest (grit), GPA expectation, midterm 1, and homework scores. Although it was not significant, the risk of withdrawal was lower among students who used EoS (HR = 0.74, b = -0.30, SE = 0.27, p = .27).
In the moderate-risk group, fewer students withdrew than in the high-risk group. By week 15, 37 students had withdrawn from the
course, and 517 remained. High school GPA, gender, consistency of interest (grit), and URM status were removed from the model
sequentially based on our model selection criterion. Perseverance of effort (grit), study skills, GPA expectation, midterm 1 scores,
homework scores, and EoS visits were included in the final model, and perseverance of effort (grit), homework scores, and EoS visits
were significant (Table 6). Students in this group who did not visit EoS had 2.66 times the risk of withdrawal at each event time (i.e., 1/
hazard ratio shown in Table 6) compared to students who did visit EoS after controlling for other variables in the model (HR = 0.38, b = -0.98, SE = 0.40, p = .0134). In addition, a one standard deviation decrease in perseverance of effort (grit) was associated with 1.63
times the risk of withdrawal at each event time.
The survival analysis results indicated that EoS use (viewing the performance feedback, including a current estimated grade) was
not related to the risk of withdrawal for the high-risk group. For the moderate-risk group, EoS use was associated with decreased risk of
withdrawal after controlling for perseverance of effort (grit), study skills, GPA expectation, first midterm scores, and homework scores.
The probability of remaining in the course by groups is shown in Fig. 5.

4.3. Course achievement and EoS use

For the students who remained in the course, we examined whether or not visiting EoS regularly during the period of interest
(between the first and second midterm) was associated with achieving higher than estimated grades after the first midterm. Overall, more students achieved a final grade higher than their estimated grade after the first midterm than after the second (Table 7). To account for how
invested a student was in their grade in the course, a covariate, the number of times during this period (between the first and second
midterm) that a student checked the gradebook, was added. Due to the highly skewed nature of both EoS and gradebook visits,
gradebook visits were capped at 30 and EoS visits were capped at 10.
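For readers reproducing this preprocessing step, capping (winsorizing) the skewed visit counts is a one-line operation in pandas; the column names and values below are illustrative assumptions rather than the study's data.

```python
# Illustrative: cap the highly skewed visit counts as described above.
import pandas as pd

visits = pd.DataFrame({"gradebook_visits": [3, 17, 55, 102],
                       "eos_visits": [0, 4, 12, 31]})
visits["gradebook_visits"] = visits["gradebook_visits"].clip(upper=30)
visits["eos_visits"] = visits["eos_visits"].clip(upper=10)
print(visits)
```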
For the class as a whole, the median number of LMS gradebook visits was 17 and EoS visits was 4. In the high-risk group, EoS users
visited the LMS gradebook more frequently than non-EoS users, with median numbers of 15 and 9, respectively, and the median number of EoS visits for users in this category was the same as for the class as a whole, 4 visits. In the moderate-risk group, the median number of gradebook visits was 18 for EoS users and 12 for non-EoS users, and the median number of EoS visits was 5 among users (Table 8).
For the high-risk group, both grit constructs, study skills, GPA expectation, and URM status were removed from the model based on
our model selection criterion, and the final model is shown in Table 9. High school GPA, homework scores, midterm 1 scores, LMS
gradebook visits, and EoS visits were all significantly and positively associated with achieving a grade of a C- or higher. The area under
the curve for the model was 0.847, indicating a relatively good model fit. Each EoS visit was associated with 1.13 times the odds of
receiving a grade above a D+, controlling for high school GPA, gender, homework scores, midterm 1 scores, and the number of times a student visited the LMS gradebook (OR = 1.13, p = .0126).
In the moderate-risk group, our success level was defined by whether or not a student achieved a C+ or higher grade, which would
be better than their estimated grade after the first midterm. URM status, GPA expectation, study skills, and perseverance of effort (grit)
were removed from the model based on our selection criterion. Similar to the high-risk group, none of the self-reported variables were
significant. In the final model, only high school GPA, homework scores, midterm 1 scores, and LMS gradebook visits were significantly associated with achieving an average or above average grade (Table 10).

Table 4
Student Demographics Between EoS Group vs. Non-EoS Group.
                   High-risk Group (n = 394)                 Moderate-risk Group (n = 554)

                   EoS users (n = 252)   Non EoS (n = 142)   EoS users (n = 425)   Non EoS (n = 129)

                   n     %               n     %             n     %               n     %

Female             180   71.43%          102   71.83%        304   71.70%          83    64.34%
Male               72    28.57%          41    28.14%        122   28.3%           46    35.7%
First Generation   106   43.27%          53    37.86%        165   39.57%          36    28.13%
International      6     2.38%           1     0.70%         3     0.71%           1     0.18%
URM                77    30.56%          40    28.17%        93    21.88%          28    21.71%
White              155   61.51%          93    65.49%        311   73.18%          91    70.54%
Asian              13    5.16%           8     5.63%         15    3.53%           7     5.43%

9
J.-E. Russell et al. Computers & Education 152 (2020) 103890

Table 5
Final survival model for high-risk students.
Parameter                              Estimate   SE     Chi-Square   p        Hazard Ratio   95% CI for Hazard Ratio

URM                                    -1.01      0.32   10.28        0.0013   0.364          0.20–0.68
Female                                 0.49       0.30   2.74         0.0976   1.634          0.91–2.92
Consistency of Interest (grit)         -0.18      0.11   2.65         0.1037   0.832          0.67–1.04
GPA Expectation                        -0.22      0.10   4.66         0.0308   0.805          0.66–0.98
Midterm 1                              -0.63      0.20   9.77         0.0018   0.531          0.36–0.79
Avg. of the first 3 homework scores    -0.37      0.08   24.08        <.0001   0.691          0.60–0.80
EoS Visits                             -0.30      0.27   1.21         0.2714   0.744          0.44–1.26

Note. URM (Underrepresented Minority) included African American, Latino, and Native American. All continuous variables were standardized. STEP 1. HS GPA was removed, p = 0.7457; STEP 2. Study skills were removed, p = 0.4566; STEP 3. Perseverance of Effort (grit) was removed, p = 0.517; STEP 4. Final Model.

Table 6
The results of Survival Analysis for Moderate-risk Students.
Parameter                              Estimate   SE     Chi-Square   p        Hazard Ratio   95% CI for Hazard Ratio

Perseverance of Effort (grit)          -0.49      0.23   4.46         0.0348   0.614          0.39–0.97
Study Skills                           0.36       0.23   2.33         0.1268   1.427          0.90–2.25
GPA Expectation                        0.29       0.19   2.33         0.1273   1.335          0.92–1.93
Midterm 1                              -0.76      0.49   2.38         0.1229   0.467          0.18–1.23
Avg. of the first 3 homework scores    -0.74      0.30   6.18         0.0129   0.477          0.27–0.86
EoS visits                             -0.98      0.40   6.12         0.0134   0.376          0.17–0.82

Note. URM (Underrepresented Minority) included African American, Latino, and Native American. All continuous variables were standardized. STEP 1. HS GPA was removed, p = 0.5667; STEP 2. Gender was removed, p = 0.7091; STEP 3. Consistency of Interest (grit) was removed, p = 0.3865; STEP 4. URM removed, p = 0.17; STEP 5. Final Model.

Fig. 5. Survival estimates.

Table 7
Students who Achieved Better than Estimated grades after Midterm 1 & 2.
High-risk Group Moderate-risk Group

Exam 1    30.96%    36.56%
Exam 2    23%       28.2%

The area under the curve for the model was 0.752, indicating a moderate model fit. In contrast to the high-risk group, EoS visits were not associated with achieving an average or above average grade among the moderate-risk students (p = .366).
The results indicate that for the high-risk group, EoS visits were associated with a greater likelihood of achieving a higher final
grade than the estimated grade as compared to non-EoS users after controlling for high school GPA, gender, homework scores, midterm
1 scores, and the number of grade book visits. For the moderate-risk group, EoS visits were not significantly associated with achieving a
higher grade than the estimated grade.


Table 8
EoS and gradebook visits by risk level and EoS status.
High-risk Group Moderate-risk Group Full Class

                  EoS Users (N = 200)    Non-Users (N = 81)     EoS Users (N = 406)     Non-Users (N = 129)     N = 2692

                  M (SD)       Median    M (SD)        Median   M (SD)         Median   M (SD)         Median   M (SD)       Median

EoS visits        6.2 (9.64)   4         NA            NA       7.34 (8.72)    5        NA             NA       6.10 (7.22)  4
LMS Grade visits  17.8 (11.9)  15        11.58 (9.61)  9        21.82 (15.74)  18       15.90 (12.63)  12       19.5 (13.09) 17

Note. Gradebook visits were capped at 30 and EoS visits were capped at 10 due to the highly skewed nature of both visits.

Table 9
Results of logistic regression on the high-risk students.
Parameter                        Estimate   SE     Chi-Square   Odds Ratio   p

Intercept                        -4.74      1.63   8.43                      0.0037
HS GPA                           1.53       0.43   12.44        4.623        0.0004
Female                           -0.47      0.19   6.05         0.392        0.0139
Avg. of the first 3 homework     0.69       0.16   18.34        1.994        <.0001
Midterm 1                        1.04       0.36   8.57         2.827        0.0034
LMS Gradebook Visits             0.08       0.02   14.43        1.085        0.0001
EoS Visits                       0.12       0.05   6.22         1.127        0.0126

Note. STEP 1. Perseverance of Effort (grit) was removed, p = .929; STEP 2. Consistency of Interest (grit) was removed, p = .778; STEP 3. GPA expectation was removed, p = .3341; STEP 4. Study skills were removed, p = .1867; STEP 5. URM was removed, p = .1506.

5. Discussion

This study examined student use of the learning analytics platform, Elements of Success (EoS). EoS presents students with up-to-
date, personalized course performance information, including estimated grades, in a large introductory course. In examining student EoS use in a large chemistry course, we focused on the risk of withdrawal and the likelihood of achieving a higher final grade for at-risk students
who received a lower estimated grade than the class average after the first midterm. The findings were promising. At-risk student EoS
use did have a positive effect on perseverance, as measured by withdrawals. Further, logistic regression indicated that EoS use by students in the at-risk category was associated with more favorable odds of improving their grade status as compared to those students who
did not use EoS.
Student status in the at-risk group appears to reflect a combination of prior learning and course progress, based on the observation that
higher risk students had lower high school GPAs than high-achieving students. In addition, at-risk students were typically less engaged
in the course (as measured by homework scores, see Table 2), compared to high-achieving students. Although prior studies (e.g.,
Howard & Johnson, 2010) indicated that low resilience is associated with at-risk groups, the current study showed that the at-risk
students who used EoS were resilient. High-risk students’ EoS use was not associated with an increased risk of withdrawal
compared to students in the same risk category who did not use EoS. In other words, knowing an undesired grade estimate did not
impact high-risk students’ withdrawal from a class; rather, it helped them to take action in order to persist in the course.
Previous research has indicated that low-achieving students tend to have lower metacognitive monitoring accuracy compared to
high-achieving students, so low-achieving students might not accurately monitor their actual progress (DiFrancesca et al., 2016). In
particular, a different academic structure and the greater academic demands of college courses may overwhelm first-year students
(Credé & Niehorster, 2012). The unfamiliar environment of a large course with a different grading system is among the challenges that
first-year students must learn to navigate in response to a different academic environment (Hurwitz & Lee, 2018). In addition, due to
the lack of relevant experience with their new environment, many first-year students have unrealistic expectations of final grades
(Jensen & Moore, 2008). Previous studies have indicated positive effects of early alert systems in helping students recognize the
change in academic demands relative to their performance (e.g., Arnold & Pistilli, 2012; Cai et al., 2015; Moore, 2007). In particular,
Moore’s (2007) study found positive effects of reality checks on at-risk students’ performance. The timely performance information in
EoS is designed to give students context and encourage them to take prompt action to achieve their desired outcomes before the
window of opportunity in the course narrows. By referencing course components, EoS provides detailed performance feedback on all
grade items (homework, quizzes, and exams) to facilitate goal-setting in specific areas, along with overall performance such as total
scores, and estimated grades to establish the idea that long-term goals (e.g., course grades) are composed of smaller short-term goals (e.
g., assignment scores). Therefore, students can identify which areas they are not performing well in and how their performance in those
areas is impacting their overall performance. With this information, students can focus their efforts on target areas.
Goal engagement, self-regulated learning (DiFrancesca et al., 2016; Hamm et al., 2018; Ruban & Reis, 2006), and managing one's studies (Martin & Marsh, 2006) have been identified as critical skills for promoting academic resilience. Monitoring progress and reflecting on current performance are central to achieving desired goals in self-regulated learning, and easy access to timely, reliable performance feedback is critical to fostering it. Corrin and de Barba's (2014) study indicated that for students who reflect and plan, viewing their performance feedback improved their motivation and guided their progress. In this context, any potential adverse effects on achievement of negative emotions from viewing undesired performance feedback might be mediated by motivating action (Pekrun et al., 2002). Graphical summaries of each performance category and timely grade estimates should facilitate planning and reflection on the part of the student and urge at-risk students toward prompt, appropriate action. Artino and Jones (2012) found that higher levels of frustration were associated with more adaptive behavioral outcomes, suggesting that negative emotions can potentially produce positive outcomes (p. 173).

Table 10
Results of logistic regression on the moderate-risk students.

Parameter                         Estimate   SE     Chi-Square   Odds Ratio   p
Intercept                         6.16       1.31   22.23                     <.0001
HS GPA                            1.64       0.36   20.31        5.154        <.0001
Female                            0.24       0.13   3.38         0.615        0.0662
Consistency of Interest (grit)    0.17       0.12   2.20         0.843        0.1378
Avg. of the first 3 homework      1.25       0.24   26.36        3.488        <.0001
Midterm 1                         1.16       0.34   11.54        3.192        0.0007
LMS Gradebook Visits              0.03       0.01   6.17         1.035        0.013
EoS Visits                        0.03       0.03   0.82         1.029        0.3661

Note. STEP 1: URM was removed, p = .5020; STEP 2: GPA expectation was removed, p = .4501; STEP 3: Study skills were removed, p = .3268; STEP 4: Perseverance of Effort (grit) was removed.
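The note to Table 10 describes a stepwise procedure in which non-significant predictors were removed one at a time. As a rough illustration only (not the authors' code), the sketch below fits a comparable backward-elimination logistic regression; the data frame, column names (e.g., 'improved', 'hs_gpa', 'eos_visits'), and the retention threshold are assumptions, and odds ratios are obtained by exponentiating the coefficients (e.g., exp(1.64) is approximately 5.15 for HS GPA).

```python
# Minimal sketch (not the authors' code) of a backward-elimination logistic
# regression comparable to the one summarized in Table 10. Column names and
# the retention threshold alpha are hypothetical.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(df, outcome, predictors, alpha=0.15):
    """Drop the least significant predictor each step until all p-values <= alpha."""
    kept = list(predictors)
    while True:
        X = sm.add_constant(df[kept])
        model = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha or len(kept) == 1:
            return model, kept
        kept.remove(worst)  # e.g., STEP 1 removed URM, STEP 2 GPA expectation, ...

def odds_ratios(model):
    """Exponentiate coefficients to obtain odds ratios (e.g., exp(1.64) ~ 5.15)."""
    return np.exp(model.params)

# Hypothetical usage with one row per moderate-risk student:
# model, kept = backward_eliminate(df, "improved",
#     ["hs_gpa", "female", "urm", "consistency_of_interest", "perseverance_of_effort",
#      "study_skills", "gpa_expectation", "avg_first3_homework", "midterm1",
#      "lms_gradebook_visits", "eos_visits"])
# print(odds_ratios(model))
```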
For the moderate-risk students who received an estimated grade of C or C- after the first midterm, this study found that EoS use was associated with a decreased risk of withdrawal. Some students who thought they were not doing well may have felt relieved when the results they viewed were better than anticipated and, as a result, may have decided to remain in the course rather than withdraw. However, this research also found that EoS use was not associated with achieving grades better than the estimated grades among the moderate-risk students, which may indicate that the complicated nature of such courses requires interventions beyond those used in this study.
Although student EoS use was high (>90%) in this course, this study identified students who never accessed EoS even though it was available. It is therefore interesting to note the differences and similarities between EoS users and non-users in each risk group. In both groups, there were no differences in self-reported grit, study skills, or GPA expectation. Further, there was no difference in midterm 1 scores between EoS users and non-users in the high-risk group, and in the moderate-risk group EoS users' midterm 1 scores were actually significantly lower than non-users' (Table 3). However, EoS users' early homework scores were significantly higher than non-users', and consequently their second midterm scores and final grades were significantly higher in both risk groups. This may imply that EoS users were more motivated to engage with the course than non-users; at the same time, timely performance information from EoS might have helped maintain their motivation and effort toward desired outcomes.
Interestingly, the results indicated that the factors associated with the risk of withdrawal might be different from the factors related
to the likelihood of performing better for students who remained in the course. High school GPA was one of the significant factors
related to the likelihood of achieving a higher grade but was not related to the risk of withdrawal. Also, some of the self-reported
constructs were related to the risk of withdrawal, whereas none of them were associated with the likelihood of performing better.
Not surprisingly, scores on formative assessments at the beginning of the course and first midterm scores were the most important factors associated with both the risk of withdrawal and the likelihood of performing better. This indicates that student engagement early in the semester is critical both to prevent students from withdrawing and to allow them to excel in the course. Fig. 4 illustrates that the window of opportunity to improve outcomes narrows as time passes and is widest early in the semester. Timing therefore matters: accurate knowledge of one's current standing at each point in time can influence the decision to take action (e.g., seeking help), and the end outcome is the cumulative result of the decisions and actions that precede it.
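As a simple illustration of this narrowing window (using hypothetical point totals, not data from the course studied here), the best achievable final percentage can be computed from the points already earned and the points still available:

```python
# Illustrative arithmetic for the narrowing window of opportunity.
# Point totals are hypothetical, not taken from the course in this study.

def best_case_percent(earned, graded_so_far, total_points):
    """Ceiling on the final percentage if every remaining point is earned."""
    remaining = total_points - graded_so_far
    return 100 * (earned + remaining) / total_points

# Earning 60% of the points graded so far early in the term barely lowers the ceiling...
print(best_case_percent(earned=60, graded_so_far=100, total_points=1000))   # 96.0
# ...but the same 60% pace with most of the course graded leaves little room to recover.
print(best_case_percent(earned=480, graded_so_far=800, total_points=1000))  # 68.0
```

The same pace of earning points leaves far more room to recover early in the term than it does late, which is why early, accurate feedback matters.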
Another interesting finding was that the factors associated with withdrawal might differ depending on the risk level. In particular, GPA expectation was related to withdrawal among the high-risk group but not among the moderate-risk group, whereas perseverance of effort was inversely related to withdrawal only among the moderate-risk students. In addition, URM status was associated with a lower likelihood of withdrawal only among the high-risk students.
Overall, these findings are consistent with previous studies indicating a positive relationship between students' use of a learning analytics dashboard and their achievement (e.g., Huberth et al., 2015; Kim et al., 2016; Van Horne et al., 2018). More importantly, this study has provided valuable insight into how performance feedback and estimated grades delivered through a learning analytics dashboard affect at-risk students' withdrawal from a course and their course performance. Providing timely performance information through learning analytics early in the semester did not affect high-risk students' withdrawal decisions and, for moderate-risk students, supported the decision to remain in the course. Further, EoS use was significantly associated with achieving higher grades among the high-risk students.

6. Limitations and conclusions

One limitation of this study is that student use of EoS was voluntary. EoS use was high, but it still depended on each student's interest in using the dashboard. Although EoS was available to all students and the instructor told the whole class what it was and how to use it, the non-EoS group never accessed it, and it is not clear what factors led to that behavior. We did include students' consistency of interest, perseverance of effort, study skills, and anticipated GPA as
covariates to control for students' motivation in our analyses; however, other factors may have affected students' decisions about accessing EoS.
Another limitation of this study is a lack of data on the specific actions of students that resulted from viewing the EoS information.
Our analyses included student performance data, clicks on the EoS site, and views of the LMS gradebook, but many other actions
currently provide no digital marker. Therefore, we do not know what other actions students took to improve their performance (e.g., attending instructional staff's office hours, getting help from the tutoring center, adopting different study strategies).
Nonetheless, this paper has made a case for the importance of at-risk students' use of a learning analytics platform for their withdrawal decisions and course performance. The findings support making timely performance feedback available early in the semester and encouraging students to use it, particularly in the large introductory courses in which many first-year students enroll. Timely performance information should also be presented in a context that lets students see how their performance in specific areas affects their overall performance, so that they can connect their efforts to their outcomes and increase control over their learning. Making time to address concerns and providing opportunities to improve performance is also important: awareness of and reflection on current performance is the first step, and it must be followed by a corresponding change in academic behavior (Moore, 2007) to change outcomes. Most of all, we recommend providing regular, frequent formative assessments that students can engage with early and use to monitor their performance. Paying close attention to students' engagement with formative assessments at the beginning of the course, and encouraging those who do not engage with them, is critical for at-risk student success regardless of students' characteristics or aptitude (e.g., high school GPA).

Credit author statement

Jae-Eun Russell: Conceptualization, Investigation, Methodology, Writing. Anna Smith: Data curation, Formal analysis, Visualization. Russell Larsen: Writing - Reviewing and Editing.

Acknowledgments

The authors would like to thank Matthew Anson from the Office of Assessment and Ross Miller from ITS for their assistance in coordinating de-identified data for this study.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.compedu.2020.103890.

References

Aljohani, N. R., Daud, A., Abbasi, R. A., Alowibdi, J. S., Basheri, M., & Aslam, M. A. (2019). An integrated framework for course adapted student learning analytics
dashboard. Computers in Human Behavior, 92, 679–690. https://doi.org/10.1016/j.chb.2018.03.035.
Anderson, S., & Rodin, J. (1989). Is bad news always bad? Cue and feedback effects on intrinsic motivation. Journal of Applied Social Psychology, 19, 449–467. https://
doi.org/10.1111/j.1559-1816.1989.tb00067.x.
Arnold, K., & Pistilli, M. (2012). Course Signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd international conference on
learning analytics and knowledge (pp. 267–270). New York: ACM. https://doi.org/10.1145/2330601.2330666.
Artino, A., & Jones, K. (2012). Exploring the complex relations between achievement emotions and self-regulated learning behaviors in online learning. Internet and
Higher Education, 15, 170–175. https://doi.org/10.1016/j.iheduc.2012.01.006.
Atif, A., Bilgin, A., & Richards, D. (2015). Student preferences and attitudes to the use of early alerts. AMCIS.
Baneres, D., Rodríguez-Gonzalez, M. E., & Serra, M. (2019). An early feedback prediction system for learners at-risk within a first-year higher education course. IEEE
Transactions on Learning Technologies, 12(2).
Bodily, R., & Verbert, K. (2017). Review of research on student-facing learning analytics dashboards and educational recommender systems. IEEE Transactions on
Learning Technologies, 10(4), 405–418. https://doi.org/10.1109/TLT.2017.2740172.
Cai, Q., Lewis, C. L., & Higdon, J. (2015). Developing an early-alert system to promote student visits to tutor center. Learning Assistance Review, 20(1), 61–72.
Campbell, J., DeBlois, P., & Oblinger, D. (2007). Academic analytics: A new tool for a new era. Educause Review, 42(4), 42–57.
Cohen, A. (2017). Analysis of student activity in web-supported courses as a tool for predicting dropout. Educational Technology Research & Development, 65(5),
1285–1304. https://doi.org/10.1007/s11423-017-9524-3.
Corrin, L., & de Barba, P. (2014). Exploring students’ interpretation of feedback delivered through learning analytics dashboards. In B. Hegarty, J. McDonald, &
S. K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 629–633).
Costa, E. B., Fonseca, B., Santana, M., De Araujo, F., & Rego, J. (2017). Evaluating the effectiveness of educational data mining techniques for early prediction of
students’ academic failure in introductory programming courses. Computers in Human Behavior, 73, 247–256.
Credé, M., & Niehorster, S. (2012). Adjustment to college as measured by the student adaptation to college questionnaire: A quantitative review of its structure and
relationships with correlates and consequences. Educational Psychology Review, 24, 133–165. https://doi.org/10.1007/s10648-011-9184-5.
Daley, S. G., Hillaire, G., & Sutherland, L. M. (2016). Beyond performance data: Improving student help seeking by collecting and displaying influential data in an
online middle-school science curriculum. British Journal of Educational Technology, 47(1), 121–134. https://doi.org/10.1111/bjet.12221.
Dambolena, I. G. (2000). A regression exercise: Predicting final course grades from midterm results. Problems, Resources, and Issues in Mathematics Undergraduate
Studies, 10, 351–358.
Deci, E. L., & Cascio, W. F. (1972). Changes in intrinsic motivation as a function of negative feedback and threats. Paper presented at the meeting of the Eastern
Psychological Association, Boston.
DiFrancesca, D., Nietfeld, J., & Cao, L. (2016). A comparison of high and low achieving students on self-regulated learning variables. Learning and Individual
Differences, 45, 228–236. https://doi.org/10.1016/j.lindif.2015.11.010.

Doğanay, A., & Demir, Ö. (2011). Comparison of the level of using metacognitive strategies during study between high achieving and low achieving prospective
teachers. Educational Sciences: Theory and Practice, 11(4), 2036–2043.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology,
92, 1087–1101.
Duval, E. (2011). Attention please!: Learning analytics for visualization and recommendation. In Proceedings of the 1st international conference on learning analytics and
knowledge (pp. 9–17). New York NY: ACM.
Dwyer, L. J., Williams, M. R., & Pribesh, S. (2019). Impact of early alert on community college student persistence in Virginia. Community College Journal of Research
and Practice, 43(3), 228–331. https://doi.org/10.1080/10668926.2018.1449034.
Elder, B., Jacobs, P., & Fast, Y. (2015). Identification and support of at-risk students using a case management model. Journal of Professional Nursing, 31(3), 247–253.
Hamm, J. M., Perry, R. P., Chipperfield, J. G., Parker, P. C., & Heckhausen, J. (2018). A motivation treatment to enhance goal engagement in online learning environments:
Assisting failure-prone college students with low optimism. Motivation Science. Advance online publication. https://doi.org/10.1037/mot0000107.
Howard, S., & Johnson, B. (2010). What makes the difference? Children and teachers talk about resilient outcomes for children "at risk". Educational Studies, 26,
321–337. https://doi.org/10.1080/03055690050137132.
Huberth, M., Chen, P., Tritz, J., & McKay, T. A. (2015). Computer-tailored student support in introductory physics. PLoS One, 10(9), e0137001. https://doi.org/
10.1371/journal.pone.0137001.
Hu, Y., Lo, C., & Shih, S. (2014). Developing early warning systems to predict students’ online learning performance. Computers in Human Behavior, 36, 469–478.
Hurwitz, M., & Lee, J. (2018). Grade inflation and the role of standardized testing. In J. Buckley, L. Letukas, & B. Wildavsky (Eds.), Measuring success: Testing, grades,
and the future of college admissions (pp. 64–93). Baltimore: Johns Hopkins University Press.
Jayaprakash, S. M., Moody, E. W., Lauría, E., Regan, J. R., & Baron, J. D. (2014). Early alert of academically at-risk students: An open source analytics initiative.
Journal of Learning Analytics, 1(1), 6–47.
Jensen, P. A., & Barron. (2014). Midterm and first-exam grades predict final grades in biology courses. Journal of College Science Teaching, 44(2), 82–89.
Jensen, P. A., & Moore, R. (2008). Students’ behaviors, grades, and perceptions in an introductory biology course. The American Biology Teacher, 70, 483–487.
Jones, M., & Fowler, R. (2016). Immortal time bias in observational studies of time-to-event outcomes. Journal of Critical Care, 36, 195–199. https://doi.org/10.1016/
j.jcrc.2016.07.017.
Kim, J., Jo, I., & Park, Y. (2016). Effects of learning analytics dashboard: Analyzing the relations among dashboard utilization, satisfaction, and learning achievement.
Asia Pacific Education Review, 17, 13–24. https://doi.org/10.1007/s12564-015-9403-8.
Lonn, S., Aguilar, S., & Teasley, S. (2015). Investigating student motivation in the context of a learning analytics intervention during a summer bridge program.
Computers in Human Behavior, 47, 90–97.
Lu, O., Huang, A., Huang, J., Lin, A., Ogata, H., & Yang, S. (2018). Applying learning analytics for the early prediction of students’ academic performance in blended
learning. Educational Technology & Society, 21(2), 220–232.
Lu, O., Huang, J., Huang, A., & Yang, S. (2017). Applying learning analytics for improving students engagement and learning outcomes in an MOOCs enabled
collaborative programming course. Interactive Learning Environments, 25(2), 220–234. https://doi.org/10.1080/10494820.2016.1278391.
Macfadyen, L., & Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof of concept. Computers & Education, 54, 588–599.
Martin, A., & Marsh, H. (2006). Academic resilience and its psychological and educational correlates: A construct validity approach. Psychology in the Schools, 43(3).
https://doi.org/10.1002/pits.20149.
Mathiasen, R. E. (1984). Predicting college academic achievement. College Student Journal, 18, 380–386.
Moore, R. (2007). Can a “reality check” improve the academic performances of at-risk students in introductory biology courses? Bioscene, 33(3), 6–12.
Park, Y., & Jo, I. (2015). Development of the learning analytics dashboard to support students’ learning performance. Journal of Universal Computer Science, 21(1),
110–133. https://doi.org/10.3217/jucs-021-01-0110.
Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational
Psychology Review, 18, 315–341. https://doi.org/10.1007/s10648-006-9029-9.
Pekrun, R., Goetz, T., Titz, W., & Perry, R. (2002). Academic emotions in students’ self-regulated learning and achievement: A program of qualitative and quantitative
research. Educational Psychologist, 37(2), 91–105. https://doi.org/10.1207/S15326985EP3702_4.
Pekrun, R., Lichtenfeld, S., Marsh, H. W., Murayama, K., & Goetz, T. (2017). Achievement emotions and academic performance: A longitudinal model of reciprocal
effects. Child Development, 88(5), 1653–1670. https://doi.org/10.1111/cdev.12704.
Perry, R., Hladkyj, S., Pekrun, R., & Pelletier, S. (2001). Academic control and action control in the achievement of college students: A longitudinal field study. Journal
of Educational Psychology, 93(4), 776–789. https://doi.org/10.1037/0022-0663.93.4.776.
Ramanathan, P., & Fernandez, E. (2017). Can early-assignment grades predict final grades in IT courses? ASEE.
Respondek, L., Seufert, T., Stupnisky, R., & Nett, U. (2017). Perceived academic control and academic emotions predict undergraduate university student success:
Examining effects on dropout intention and achievement. Frontiers in Psychology, 8, 243. https://doi.org/10.3389/fpsyg.2017.00243.
Roberts, L. D., Howell, J. A., & Seaman, K. (2017). Give me a customizable dashboard: Personalized learning analytics dashboards in higher education. Technology,
Knowledge and Learning, 22(3), 317–333. https://doi.org/10.1007/s10758-017-9316-1.
Ruban, L., & Reis, S. (2006). Patterns of self-regulatory strategy use among low-achieving and high-achieving university students. Roeper Review, 28(3), 148–156.
https://doi.org/10.1080/02783190609554354.
Saqr, M., Fors, U., & Tedre, M. (2017). How learning analytics can early predict under-achieving students in a blended medical education course. Medical Teacher, 39
(7), 757–767. https://doi.org/10.1080/0142159X.2017.1309376.
Sarra, A., Fontanella, L., & Di Zio, S. (2019). Identifying students at risk of academic failure within the educational data mining framework. Social Indicators Research
(No pagination specified).
Saul, C., & Wuttke, H. (2014). Turning learners into effective better learners: The use of the askMe! System for learning analytics. CEUR Workshop Proceedings, 1181,
57–60.
Stewart, T., Clifton, R., Daniels, L., Perry, R., Chipperfield, J., & Ruthig, J. (2011). Attributional retraining: Reducing the likelihood of failure. Social Psychology of
Education, 14, 75–92. https://doi.org/10.1007/s11218-010-9130-2.
Tampke, D. R. (2013). Developing, implementing, and assessing an early alert system. Journal of College Student Retention, 14(4), 523–532.
Tan, J. P., Koh, E., Jonathan, C., & Yang, S. (2017). Learner dashboards a double-edged sword? Students’ sense-making of a collaborative critical reading and learning
analytics. Journal of Learning Analytics, 4(1), 117–140. https://doi.org/10.18608/jla.2017.41.7.
Teasley, S. D. (2017). Student-facing dashboards: One size fits all? Technology, Knowledge and Learning, 22, 377–384. https://doi.org/10.1007/s10758-017-9314-3.
Tempelaar, D., Rienties, B., & Giesbers, B. (2015). In search for the most informative data for feedback generation: Learning analytics in a data-rich context. Computers
in Human Behavior, 47, 157–167.
Therneau, T. M., & Grambsch, P. M. (2000). Modeling survival data: Extending the cox model. New York, New York: Springer-Verlag.
Valiente, C., Swanson, J., & Eisenberg, N. (2012). Linking students’ emotions and academic achievement: When and why emotions matter. Child Development
Perspectives, 6(2), 129–135. https://doi.org/10.1111/j.1750-8606.2011.00192.x.
Vallerand, R. J., & Reid, G. (1984). On the causal effects of perceived competence on intrinsic motivation: A test of cognitive evaluation theory. Journal of Sport
Psychology, 6(1), 94–102.
Van Horne, S., Curran, M., Smith, A., VanBuren, J., Zahrieh, D., Larsen, R., & Miller, R. (2018). Facilitating student success in introductory Chemistry with feedback in
an online platform. Technology, Knowledge and Learning, 23(1), 21–40. https://doi.org/10.1007/s10758-017-9341-0.

VanZile-Tamsen, C., & Livingston, J. (1999). The differential impact of motivation on the self-regulated strategy use of high and low-achieving college students. Journal
of College Student Development, 40(1), 54–60.
Willcoxson, L., Cotter, J., & Joy, S. (2011). Beyond the first-year experience: The impact on attrition of student experiences throughout undergraduate degree studies
in six diverse universities. Studies in Higher Education, 36, 1–22. https://doi.org/10.1080/03075070903581533.
Zanden, P., Denessen, E., Cillessen, A., & Meijer, P. (2018). Patterns of success: First-year student success in multiple domains. Studies in Higher Education. https://doi.
org/10.1080/03075079.2018.1493097.
