
Exploring How Online University Feedback Reflection Influences Teaching

Submitted by

Mary Beth Nipp

A Dissertation Presented in Partial Fulfillment

of the Requirements for the Degree

Doctorate of Education

Grand Canyon University

Phoenix, Arizona

September 28, 2018






© by Mary Beth Nipp, 2018

All rights reserved.


GRAND CANYON UNIVERSITY

Exploring How Online University Feedback Reflection Influences Teaching

I verify that my dissertation represents original research, is not falsified or plagiarized,

and that I accurately reported, cited, and referenced all sources within this manuscript in

strict compliance with APA and Grand Canyon University (GCU) guidelines. I also

verify my dissertation complies with the approval(s) granted for this research

investigation by the GCU Institutional Review Board (IRB).


Abstract

The purpose of this qualitative exploratory single case study was to examine how online

undergraduate full-time faculty perceived the influence of their instructional feedback

practices on their reflective thinking, and hence, their instructional strategy at a

southwestern higher educational institution. The theory that provided the foundation for

this study was Vygotsky’s learning theory and the Zone of Proximal Development. The

research questions focused on connections among faculty feedback practices and

reflection, feedback methods, and how feedback informed instruction. The data sources

included a faculty questionnaire, classroom feedback and a focus group while the data

analysis employed coding, thematic analysis, and triangulation of the data. From the

questionnaire responses, themes emerged; seven themes demonstrated common paper issues, and a second-level analysis of feedback revealed the

level of feedback relative to its formative value. Feedback was reviewed according to its

delivery method and written and video feedback were disaggregated. The focus group then

served to triangulate the data. This study’s major outcomes revealed that instructors who

gave text feedback provided more critical thinking course-related remarks while instructors

who provided video feedback focused more on surface-level issues. Most faculty feedback

focused on processes and regulation, prompting scaffolding towards self-regulation,

consistent with the study’s theoretical foundation. The implications for this study include

institutional leadership supporting various feedback methods or training in reflection so

that online instructors improve in the formative delivery of feedback.

Keywords: Asynchronous feedback, formative feedback, reflection


Table of Contents

List of Tables .......................................................................................................................x

List of Figures .................................................................................................................... xi

Chapter 1: Introduction to the Study....................................................................................1

Introduction ....................................................................................................................1

Background of the Study ...............................................................................................3

Problem Statement .........................................................................................................5

Purpose of the Study ......................................................................................................7

Research Questions ........................................................................................................9

Advancing Scientific Knowledge and Significance of the Study ................................12

Rationale for Methodology ..........................................................................................18

Nature of the Research Design for the Study...............................................................19

Definition of Terms......................................................................................................23

Assumptions, Limitations, Delimitations ....................................................................24

Assumptions........................................................................................................24

Limitations and delimitations. ............................................................................25

Summary and Organization of the Remainder of the Study ........................................26

Chapter 2: Literature Review .............................................................................................28

Introduction to the Chapter and Background to the Problem ......................................28

Background. ........................................................................................................29

Identification of the Gap ..............................................................................................33

Formative Feedback. ...........................................................................................34

Population. ..........................................................................................................35

Feedback methods...............................................................................................36
Theoretical Foundations and/or Conceptual Framework .............................................37

Review of the Literature ..............................................................................................42

Introduction. ........................................................................................................42

Historical perspective of online learning. ...........................................................42

Feedback in online education. ............................................................................45

Formative feedback.............................................................................................49

Feedback and social presence. ............................................................................60

Social presence and learning outcomes ..............................................................62

Feedback methods...............................................................................................65

Feedback and learner populations.......................................................................71

Reflection ............................................................................................................76

Methodology .......................................................................................................82

Instrumentation ...................................................................................................82

Summary ......................................................................................................................84

Chapter 3: Methodology ....................................................................................................88

Introduction ..................................................................................................................88

Statement of the Problem .............................................................................................89

Research Questions ......................................................................................................91

Research Methodology ................................................................................................94

Research Design...........................................................................................................95

Population and Sample Selection.................................................................................98

Sources of Data ..........................................................................................................101

Trustworthiness ..........................................................................................................105

Reliability...................................................................................................................107
Data Collection and Management ..............................................................................107

Data Analysis Procedures ..........................................................................................111

Ethical Considerations ...............................................................................................113

Limitations and Delimitations....................................................................................117

Summary ....................................................................................................................119

Chapter 4: Data Analysis and Results ..............................................................................121

Introduction ................................................................................................................121

Descriptive Findings ..................................................................................................124

Data Analysis Procedures ..........................................................................................137

Research question 1: Familiarization with the data. .........................................139

Research question 1: Coding the data. ..............................................................142

Research question 1: Establishing themes. .......................................................151

Research question 1: Summary of coding and themes .....................................156

Research question 2: Familiarization with the data. .........................................157

Research question 2: Coding the data. ..............................................................159

Research question 2: Establishing themes. .......................................................164

Research question 3: Familiarization with the data. .........................................175

Research question 3: Coding the data. ..............................................................179

Research question 3: Establishing themes ........................................................182

Results ........................................................................................................................187

Faculty questionnaires ......................................................................................189

Classroom papers ..............................................................................................192

Classroom paper feedback ................................................................................199

Focus groups .....................................................................................................208


Summary ....................................................................................................................211

Chapter 5: Summary, Conclusions, and Recommendations ............................................214

Introduction and Summary of Study ..........................................................................214

Summary of findings and conclusions .......................................................................217

RQ 1 ..................................................................................................................218

RQ 2 ..................................................................................................................223

RQ 3. .................................................................................................................226

Implications................................................................................................................227

Theoretical implications. ..................................................................................228

Practical implications ........................................................................................230

Future implications ...........................................................................................233

Strengths and weaknesses of the study .............................................................233

Recommendations ......................................................................................................236

Summary ....................................................................................................................238

References ........................................................................................................................240

Appendix A. Site Authorization Letter(s) ........................................................................263

Appendix B. IRB Approval Letter ...................................................................................265

Appendix C. Informed Consent .......................................................................................266

Appendix D. Instrument Validation .................................................................................270

Appendix E. Online Undergraduate Full-time Faculty Questionnaire Participant
Invitation ........................................................................................................273

Appendix F. Faculty Questionnaire .................................................................................274

Appendix G. Focus Group Interview Protocol ................................................................276

Appendix H. Questionnaire Responses............................................................................279

Appendix I. Template for Identifying Feedback Levels ..................................................280


List of Tables

Table 1. Data Sources Aligned with RQs ..................................................................... 122

Table 2. Undergraduate Full-time Faculty Demographics ............................................ 127

Table 3. Questionnaire Responses by Research Question for RQ 1 ............................. 129

Table 4. Questionnaire Responses by Research Question for RQ2 .............................. 129

Table 5. Questionnaire Responses by Research Question for RQ3 .............................. 129

Table 6. Questionnaire: Response to Paper Issues Total ............................................... 130

Table 7. Papers Examined for Feedback........................................................................ 132

Table 8. Participants, Methods, and Courses ................................................................. 135

Table 9. Paper Concerns from Questionnaire responses: Question 1 ........................... 145

Table 10. RQ1 Themes from Questionnaire .................................................................. 148

Table 11. Average Participant Word Count by Assignment.......................................... 158

Table 13. Paper Feedback According to Hattie & Timperley’s Levels ......................... 163

Table 14. Questionnaire Responses by Research Question: Research Question 2 ........ 169
List of Figures

Figure 1. Feedback model progression relative to learning. ............................................. 52

Figure 2. Text feedback levels with paper elements ....................................................... 170

Figure 3. Audio feedback levels with paper elements .................................................... 171


Chapter 1: Introduction to the Study

Introduction

Both feedback and reflection are essential to good teaching, whether that

teaching is online or in a traditional face-to-face setting. Given its rapid growth and improvement, online learning is no longer considered an alternate modality

(Caruth & Caruth, 2013). Indeed, with its terrain of possibilities for learners, this

platform is poised to meet global challenges in education through its flexibility and

access (Gargano & Throop, 2017). This study’s discussion of feedback and instructor

reflection was focused on the online modality, because of its unique potential for varied

delivery methods which might influence how faculty reflect through their feedback

process. This qualitative exploratory single case study explored the perceived influence

of feedback practices on teacher reflection and the subsequent influence of reflection on

their instructional strategies. The current gap in reflective practices in education was

addressed as reflection was discussed in this new context (Wilson, 2013) and with a

diverse population (Borup, West, & Thomas, 2015).

Since the inception of online learning, course designers, administrators, and instructors have examined ways to improve the modality. These improvements include lessening

the distance and maintaining academic rigor, student satisfaction, and persistence

(Pattison, 2017). Technology offers numerous ways to lessen this distance. Advanced

online delivery methods, such as audio or video feedback, provide a broader range of

engagement between instructors and students, reducing isolation and contributing to a

sense of community (Gillett-Swan, 2017). In addition to increasing the socialization

factor of online learning, these methods have been shown to improve efficiencies in
feedback (Borup et al., 2015; Sims, 2016; Tunks, 2012). Therefore, a closer look at how teacher feedback practices shape teacher reflection, and how that reflection in turn informs instruction, could ultimately affect student outcomes.

The practice of reflection is applicable across most industries. In the context of

this study, reflection can be considered deep thought, or a complex process (Roberts,

2016) that connects new information or theory with individual meaning or experience

(Gardner, 2001). Reflection’s value to education is documented in recent literature

(Janssen, de Hullu, & Tigelaar, 2009; Nguyen, Fernandez, Karsenti, & Charlin, 2014;

Roberts, 2016) for its capacity to influence teaching practices (Wlodarsky & Walters,

2010). This act of engaging in mindful self-monitoring has been reviewed in higher

education (Bennett, Power, Thomson, Mason, & Bartleet, 2016) using models such as

Ryan and Ryan’s 4R’s of Reflection (2013).

For this study, reflection is a purposeful, rather than intuitive act towards

transformation (Ryan & Ryan, 2013). Reflection will be examined according to the

levels of reporting and responding, relating, reasoning, and reconstructing, which align

with the selected model designed for higher education use (Ryan & Ryan, 2013). For

online instructors, reflection might include analyzing their own learning process and

applying necessary changes accordingly (Wilson, 2013). These changes, in turn, could

serve to inform teaching strategies. More studies in this area are needed in higher

education (Hall, 2018). As more institutions launch online courses as part of their

strategic planning (Kilburn, Kilburn, & Hammond, 2016), this study contributed by examining how reflection during feedback influences instruction.


Chapter one will provide the foundation of the research by first discussing

relevant literature that supports the background of feedback and reflection for online

instruction. Next, the research questions will be presented, followed by an explanation

of how this study will advance the scientific knowledge in the field and the significance

of the study. A rationale for the methodology selection and the nature of the design will

be presented, along with a definition of terms, assumptions, limitations, and

delimitations for the study. Finally, a summary and discussion of organization for the

remainder of the study will conclude the chapter.

Background of the Study

Reflection and feedback are two critical elements that have both been rigorously

explored in traditional education, but more research is needed on how the two might

intersect in the online modality, where these phenomena take on new nuances,

emphases, and populations. The gap in this field was expressed in three recent empirical

studies, showing the need for expanding populations in studies of online feedback. In a

qualitative case study, Atwater, Borup, Baker, and West (2017) examined student

perceptions of video communication in an online course. Participants given both text

and video feedback stated that despite the convenience of text feedback, video feedback

provided more advantages. These researchers stated that more studies are needed to

explore and compare different feedback methods and their effectiveness, and that a

diverse student population might benefit from the nonverbal communication that video

feedback provides (Atwater et al., 2017). In a similar study, Borup et al. (2015)

examined instructor and student perceptions of text and video feedback and found that

both populations valued the efficiency of text over the affective benefits of video and
recommended exploring asynchronous video feedback among a more diverse population

(Borup et al., 2015). Wade’s (2016) design-based qualitative study echoed similar

recommendations, calling for more studies on video feedback, specifically with

populations in courses with more than 25 students. Given online education’s momentum

(Caruth & Caruth, 2013), these prior studies demonstrated the present need for more

research in asynchronous online feedback, with attention to new delivery methods and

populations, as well as expanding the field for how these feedback methods might lead

to increased instructor reflection.

Feedback’s formative value has been well documented. In their seminal work, Black and Wiliam (1998) demonstrated a strong association between formative

assessment and student learning gains. Feedback, therefore, came to be viewed as a mechanism for student learning when instructor support, scaffolding, and alignment are pitched at the student’s developmental level (Eraut, 2006; McNiff & Aicher,

2017; Wass & Golding, 2014). Although online instructors may understand what

constitutes effective feedback, the increasing volume of online courses has prompted

institutional concerns about faculty efficiencies and effectiveness (Planar & Moya,

2016). Instructors are turning to audio or video feedback methods (Borup et al., 2015;

Sims, 2016; Wade, 2016) as approaches to manage the volume of students in online

education and to increase personalization (Borup et al., 2015; Verpoorten, Westera, &

Specht, 2012; Wade, 2016); these efficiencies may result in more meaningful instructor

reflection. In this study, reflection is a complex process requiring deep thought (Roberts,

2016). Reflection connects theory to practice (Bennett et al., 2016) and is cyclic (Roberts, 2016); reflective practice, then, concerns how and when to apply these processes to a unique case (Bennett et al., 2016; Schon, 1983).

Reflection should be examined in asynchronous online environments.

Verpoorten et al. (2012) posited that asynchronous online learning represents potentially

greater opportunities for instructor reflection, which could enhance both instructor and

learner experience. Reflection causes one to justify their beliefs and to reassess the

efficacy of problem-solving strategies (Musolino & Mostroni, 2005) and is a rigorous,

purposeful act requiring intellectual and emotional intentions (Ryan & Ryan, 2013). For

this study, reflection will be examined based on Ryan and Ryan’s (2013) four levels.

Academic reflection requires the agent to play an active role in the transformation (Ryan

& Ryan, 2013). Given this context of reflection, it was worthwhile exploring the

perceived influence of feedback practices on teacher reflection and the subsequent

influence of reflection on instructional strategies.

Problem Statement

It was not known how online undergraduate full-time faculty perceived the

influence of their instructional feedback practices on their reflective thinking, and hence,

their instructional strategy at a southwestern higher education institution. Online

learning among postsecondary institutions is increasing in enrollment and

popularity (Allen & Seaman, 2016; Elison-Bowers & Snelson, 2012; Forte, Schwandt,

Swayze, Butler, & Ashcraft, 2016; Mehta, Makani-Lim, Rajan, & Easter, 2017; Wright,

2014). This modality is evolving, resulting in a learning environment as robust and

academically challenging as its face-to-face counterparts (Mehta et al., 2017). This

evolution has prompted attention to improve online courses in their curriculum,


instruction, and communication but more should be explored specifically in faculty

feedback and reflection practices.

Through this study, faculty feedback and reflective practices were explored. This

study filled a gap in literature on asynchronous instructor feedback on new populations

(Atwater et al., 2017; Borup et al., 2015; Wade, 2016) and amalgamated the instructor

feedback methods with their reflective practices. Personalized instructor feedback in

online courses remains the center of student learning and primary mode of

communication (Planar & Moya, 2016), but despite feedback’s prominence in the online

setting, students report less satisfaction with feedback than any other component of the

course (Nicol, Thomson, & Breslin, 2014). All too often, as Borup et al. (2015) have

noted, in this modality, “teachers and students are separated in space and time for all or

most of the course” (p. 162). This physical and psychological distance has prompted

instructors to find new ways of creating personal and improved feedback (Borup et al.,

2015; Pattison, 2017) and the specific methodologies instructors choose may have a

bearing on how they reflect on their feedback. For example, Atwater et al. (2017) noted

that for students outside the dominant culture group, video feedback was often more

effective than text feedback, resulting in fewer non-verbal misunderstandings. By

contrast, Borup et al.’s (2015) study found that students prized the efficiency of text over

video feedback; however, such findings may soon become obsolete as technology advances. As this study focused on the instructor’s perceived influence of

feedback practices and reflection, greater insights into how to best serve expanding

student populations emerged. The general population affected by this study included
higher education institutions with online offerings, their faculty, and student

populations.

For this study, online undergraduate full-time university faculty served as the

unit of analysis. The host institution was a private university in the southwestern United

States where there are 60,000 online students and a traditional campus population of

17,000. This institution has a contingent of full-time online faculty working at campus

facilities who teach courses online exclusively and maintain an average load of 80 to

100 students at a time. This consistent course volume necessitates time management

strategies for instructors to provide personalized and effective feedback to students.

Thus, instructors may use text, audio, or video methods to deliver student feedback.

Purpose of the Study

The purpose of this qualitative exploratory single case study was to examine how

online undergraduate full-time faculty perceived the influence of their instructional

feedback practices on their reflective thinking, and hence, their instructional strategy at a

southwestern higher education institution. A purposive sample of ten to fifteen full-time

online undergraduate faculty at this institution served as the population. Because the

online modality is offered at numerous higher education institutions, the findings from

this study have significance for similar bodies. The value of reflection and feedback is

recognized in education. Feedback is viewed as a path to instruction (Black & Wiliam,

1998; Wiliam, 2007) and reflection is a purposeful act that has the potential to provide

self-assessment for faculty (Roberts, 2016) which could, in turn, influence their

teaching. What is missing are studies that intersect the two in an online setting. By
studying these aspects of teaching among full time online faculty, the findings can

provide valuable data for similar institutions.

Towards a better understanding of the perceived influence of feedback practices

on teacher reflection (and subsequently, the influence of reflection on instructional

strategy), the faculty served as the unit of analysis and the dimensions of phenomena

were faculty questionnaire results, instructor feedback from current and recent classes,

and the confirmation or challenging of emerging themes that came from focus group

discussions on the results of the data sources. Since classroom feedback can be viewed

as instructional (Hattie & Timperley, 2007), and because feedback increases the online instructor’s social presence (Pattison, 2017), the review of classroom feedback provided critical data for analysis. This study involved 12 participants who responded to

a 12-item questionnaire which served as a rich context for analysis. These resulting

themes addressed the research question about how faculty perceive the influence of their

instructional feedback practices on their own reflective thinking, thereby contributing to

the field. In addition, classroom assignments were reviewed in multiple active courses

yielding meaningful trends about faculty feedback methods. The third dimension of

phenomena was a focus group consisting of the same participants, which allowed a

qualitative exploration of how faculty perceive the influence of reflective thinking on

their current and future instructional approaches.

This was an exploratory single case study. A qualitative method and a case study

design are selected when the researcher wants to explore the “how” of a phenomenon

(Yin, 2014). In addition, qualitative research emphasizes exploring and leads to

understanding (Almalki, 2016) so this qualitative method aligns with the problem
statement. In this study, the “how” to be addressed involved how faculty perceived the influence of their feedback practices on their reflection and how that

reflective thinking informed teaching. This was explored through coding and analyzing

questionnaire responses about feedback, reviewing faculty courses to determine their

feedback methods, and reviewing discussions from the focus groups.
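
To make the coding step concrete, the sketch below shows how open codes assigned to questionnaire responses might be tallied so that recurring codes surface as candidate themes. This is illustrative only; the participant labels, codes, and tallying approach are hypothetical and are not drawn from the study’s data.

```python
from collections import Counter

# Hypothetical first-cycle codes assigned to questionnaire responses;
# each response may carry several codes.
coded_responses = [
    {"participant": "P01", "codes": ["thesis clarity", "citation format"]},
    {"participant": "P02", "codes": ["thesis clarity", "paragraph focus"]},
    {"participant": "P03", "codes": ["citation format", "thesis clarity"]},
]

# Tally how often each code occurs across responses; codes that recur
# across participants become candidates for themes in thematic analysis.
code_counts = Counter(code for r in coded_responses for code in r["codes"])

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```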

Research Questions

It was not known how online undergraduate full-time faculty perceived the

influence of their instructional feedback practices on their reflective thinking, and hence,

their instructional strategy at a southwestern higher education institution. Feedback is an

integral component of online facilitation that has been shown to contribute to learning

(Mirzaee & Hasrati, 2014). With technology offering an array of methods for delivering

feedback (Ali, 2016; Atwater et al., 2017; Borup et al., 2015), these methods may be

used to increase personalization or efficiencies (Ali, 2016). These efficiencies and

personalization could then change or improve the potential of feedback for learning. In

addition, these feedback methods may change how or if instructors reflect on their

feedback. A greater understanding of how these processes occur has implications for

online facilitation and student outcomes. The research questions for this study were:

RQ1: How do online undergraduate full-time faculty perceive the influence of their

instructional feedback practices on their own reflective thinking?

RQ2: What methods of feedback do online undergraduate full-time faculty use at a

southwestern higher education institution?

RQ3: How do online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches?


This study examined faculty as a single case study, bounded by the

commonalities of delivering feedback in online courses and how those feedback

practices influenced reflective thinking. The three dimensions of phenomena explored

were faculty questionnaire results, classroom feedback, and emerging themes from the

focus groups on reflective thinking and how reflection can inform teaching.

A critical focus of this study was the instructor’s classroom feedback. Feedback

is closely linked to student success (Abaci, 2014; Ali, 2016; Gredler, 2016; Hattie &

Timperley, 2007; Nicol & MacFarlane-Dick, 2006). Moreover, feedback is regarded as a

teaching mechanism (Bonnel, 2008; Hattie & Timperley, 2007; Quinton & Smallbone,

2010). Formative feedback has the potential to go beyond the “atta-boy” comments

providing guidance and encouragement for assignments as well as orienting students

towards the learning goal and providing an understanding of their learning gaps. (Hattie

& Timperley, 2007).

This study examined online undergraduate full-time faculty feedback practices,

using Vygotsky’s (1978) ZPD as a framework to determine formative feedback’s

relationship to learning. Sadler’s (1989) seminal work established the role of feedback

as formative by identifying essential conditions that are consistent with the Vygotskyan

movement into the proximal zone. Sadler’s (1989) conditions state that the learner must

know the standard, how their own work compares to the standard, and how to close the

gap between the standard and the student’s actual performance level. Building on

Sadler’s work, Hattie and Timperley (2007) constructed a framework which also

corresponds with facilitating student learning through feedback in a four-level taxonomy

which includes: task, process, self-regulation, and praise (Hattie & Timperley, 2007).
This framework provided the foundation for the analysis and data examination for this

study.
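
As a concrete illustration of how feedback comments might be categorized against this four-level framework, the sketch below tallies comments that have already been hand-coded to a level. The comments and level assignments shown are hypothetical; the study’s actual categorization was done qualitatively, not programmatically.

```python
from collections import Counter

# Hattie and Timperley's (2007) feedback levels as used in this study.
LEVELS = ("task", "process", "self-regulation", "praise")

# Hypothetical instructor comments, each hand-coded to one level.
coded_feedback = [
    ("Your introduction is missing a thesis statement.", "task"),
    ("Try outlining each paragraph's claim before drafting.", "process"),
    ("Check your draft against the rubric before resubmitting.", "self-regulation"),
    ("Great work this week!", "praise"),
]

# Count comments per level to gauge the feedback's formative emphasis;
# process- and self-regulation-level comments are the more formative kinds.
level_counts = Counter(level for _, level in coded_feedback)
for level in LEVELS:
    print(f"{level}: {level_counts[level]}")
```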

The first phenomenal dimension of this study consisted of questionnaires on instructor

reflection. Reflection in teaching is a valued practice that may prompt a greater

understanding of an instructor’s own teaching strategies (Wilson, 2013), and is regarded

as a learning mechanism (Borup et al., 2015; Tunks, 2012; Wade, 2016). Moreover, an

instructor’s specific method of feedback, whether video or text, could have bearing on

the quality and complexity of their feedback; a qualitative analysis of faculty

questionnaires could reveal faculty’s meta-cognitive processes during feedback and the

extent that these reflections inform their teaching. An analysis of these concepts

provided rich insights into faculty reflective practices, which were examined through the

lens of Ryan and Ryan’s 4Rs of reflection (Ryan & Ryan, 2013). This model proposes

that there are four levels of reflection: reporting and responding, relating, reasoning,

and reconstructing, and each level increases in its complexity. Faculty were given three

questions at each level and these open-ended questions were addressed while faculty

delivered feedback, which provided rich data on how faculty reflected during their

feedback process, and ultimately, how or if these reflections served to influence their

teaching.
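
A minimal sketch of how such a questionnaire might be organized appears below, assuming a simple structure of three open-ended prompts per 4R level. The prompt wording and the sample response are hypothetical placeholders, not the study’s instrument.

```python
# Ryan and Ryan's (2013) 4Rs, ordered from least to most complex.
FOUR_RS = ("reporting and responding", "relating", "reasoning", "reconstructing")

# Hypothetical instrument: three open-ended prompts per reflection level.
questionnaire = {
    level: [f"Prompt {i + 1} for the '{level}' level" for i in range(3)]
    for level in FOUR_RS
}

# Free-text answers keyed by (participant, level, prompt index); these
# would be coded qualitatively, not scored numerically.
responses = {
    ("P01", "relating", 0): "Grading this paper reminded me of my own early writing...",
}

for level, prompts in questionnaire.items():
    print(f"{level}: {len(prompts)} prompts")
```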

Next, the classroom data were examined to determine if faculty used video or

text feedback and for which assignments these methods corresponded. Categorizing the

feedback according to Hattie and Timperley’s (2007) levels demonstrated the feedback’s

formative value, or if instructors regarded feedback as a method of scaffolding. This

scaffolding would be consistent with Vygotsky’s (1978) ZPD theory. Moreover, these classroom data served to triangulate the findings, as the feedback was examined alongside the questionnaire responses.

The third phenomenal dimension to explore in this study was the use of an online

undergraduate faculty focus group among participants to determine the perceived

influence of feedback practices on teacher reflection, and subsequently, the influence of

reflection on instructional strategy. Focus groups provide rich insight into the world of

the participant (Greenwood, Kendrick, Davies, & Gill, 2017). A direct analysis of the

recorded focus group session addressed the research question of how full-time online

undergraduate faculty perceive the influence of reflective thinking on their current and

future instructional strategies.

Advancing Scientific Knowledge and Significance of the Study

In asynchronous learning environments, instructor feedback serves many roles.

Through feedback, instructors encourage students, provide resources, and direct instruction geared

towards a student’s future success (Hattie & Timperley, 2007). Within the proper

context, feedback can maximize a student’s learning effects (Hattie & Timperley, 2007;

Sadler, 1989). However, studies expressed a gap in how faculty provide asynchronous

feedback to different populations (Atwater et al., 2017; Borup et al., 2015; Wade, 2016).

This study addressed that population gap through focusing specifically on feedback from

first-year sequence courses, expanding the research on this topic.

This study’s gap came from three studies. Atwater et al. (2017), Borup et al.

(2015) and Wade (2016) studied asynchronous feedback and recommended additional

studies on different populations. Atwater et al. (2017) responded to the need for studies

in online feedback in their qualitative case study that examined student perceptions of
video feedback in an online graduate course. Their findings showed that students found

advantages with video feedback, including detail and personalization, but nevertheless,

preferred text feedback for its efficiency. Borup et al. (2015) showed similar findings

when they studied asynchronous feedback with both instructors and graduate students in

their complementary mixed methods design. Both instructors and students found that

text feedback was valued over video. These researchers recommended additional studies

in this area and on different populations. Finally, Wade (2016) studied learner

perspectives and worked with teacher practitioners to investigate the impact of

asynchronous video feedback through personal reflections. These personal reflections

from instructors served to describe the phenomenon of asynchronous video feedback.

Wade (2016) noted that prior studies focused on learner perspectives of different

feedback modalities and recommended more studies on video feedback, specifically

with populations in courses with more than 25 students.

This study addressed the gaps presented in these studies. First, both Borup et al.

(2015) and Atwater et al. (2017) recommended additional research in the area of

asynchronous feedback on diverse populations. Both studies examined perceptions of

graduate students and faculty. This study extended that population recommendation by

exploring how online undergraduate full-time faculty reflected during feedback. Thus,

the field of study on feedback was extended, and to a different population, but with new

insights from how faculty reflect. Both studies (Atwater et al., 2017; Borup et al., 2015)

found that graduate students preferred text feedback over audio or video, while still acknowledging the personalization that audio and video feedback offer. However, these preferences

may not necessarily be replicated among an undergraduate population. First, a student’s


digital skill level may correspond with their age (Vanslambrouck, Chang, Tondeur,

Phillipsen, & Lombaerts, 2016). Therefore, while graduate students may view

technology as an inconvenience or challenge, undergraduate students may be more adept

at newer technological methods.

In addition, working adults in school may value efficiency (Vanslambrouck et

al., 2016) over personalization. A graduate population may represent more working

adults than the undergraduate population. Finally, technology advances exponentially

faster than research, meaning that emerging applications may make viewing video

feedback as convenient as text. For this study, the population of online undergraduate

full-time faculty used new applications that students can access on smart phones, giving

rise to the need for studies on asynchronous online feedback methods and various

populations.

This study has many practical applications that richly contributed to online

learning in higher education and filled a need by contributing to the body of knowledge

on reflection and feedback within online instruction from the instructor perspective.

Borup et al. (2015) noted a gap in literature was that past empirical studies explored both

feedback and reflection from student perspectives. This study shifted that discussion and

expanded this area of study to explore faculty feedback reflections, and their subsequent

reflection on instructional strategies. In addition, this study could yield valuable data on

how instructors view their own feedback methods and practices, and how these

reflections might influence their instruction. As faculty continue to explore newer

feedback methods in higher education (Atwater et al., 2017; Borup et al., 2015), a richer
understanding of the reflective component could affect professional development or

improvement of grading practices or course facilitation.

Examining instructor perceptions of feedback modalities and their influence on

instruction is an important component of this study. For example, some instructors

delivering video feedback may have found it a more effective method of scaffolding for

the student. This scaffolding may have taken on a different form through text feedback,

where scaffolding might have consisted of posting a link to students. Current studies

exploring feedback’s formative value call for a re-examination of both student and

teacher roles, giving students a more active role in the process (Planar & Moya, 2016).

Therefore, exploring how online undergraduate full-time faculty used feedback revealed

more about their reflective practices and how teaching is informed.

Feedback for this study was analyzed according to its potential for student

learning. Vygotsky’s (1978) Zone of Proximal Development (ZPD) underpinned the

concept of learning as discussed in the context of instructor feedback. Operationalizing

this theory, the instructor had a role in student learning by moving students from their actual developmental level to their potential developmental level. Between the two lies the ZPD, where this developmental transition occurs, either through increasing the challenge of the task or providing scaffolding (Vygotsky, 1978). This theory posits

that instruction must be slightly more challenging than the student’s developmental level

and that when the instructor provides appropriate scaffolding within this optimal zone,

learning and self-regulation ensue (Smit, van Eerde, & Bakker, 2013). Because this

study focused on instructor feedback practices, Vygotsky’s (1978) ZPD formed an

anchor to understand how instructors used formative or scaffolding strategies in their


feedback and to what extent their reflections informed these practices. During feedback

delivery, an instructor’s reflections may guide them to provide either support or to

increase the challenge in order to move the student to the next developmental level. This

study extended the body of knowledge on ZPD as a learning theory, showing that

instructor reflections actually influenced their teaching; therefore, this study’s

implications could ultimately lead to practices to improve student learning.

Recent literature on feedback emphasizes its formative value. This kind of

feedback provides guidance on both student understanding and performance of the

assignment (Mirzaee & Hasrati, 2014). This study examined faculty’s formative

feedback, as it aligned with the Vygotsky (1978) theory that grounded the study. Wass

and Golding (2014) stated that Vygotsky’s (1978) ZPD has implications for teachers that

are still unexplored. With this theory as the study’s support, deeper connections between

how faculty provide feedback and its relation to student learning were uncovered.

This study had practical implications. Institutions can improve their online

instruction by selecting appropriate delivery methods that increase grading efficiencies

and improve feedback’s formative value. If instructor reflections do indeed influence

their teaching, there are implications for training and professional development. For

example, institutions may give greater attention to development in reflections or may see

the value in adapting new technologies, which may improve the capacity to reflect.

Nevertheless, knowing that video methods can improve feedback or personalization

does not mean that more faculty necessarily use these methods. Harrison et al. (2017)

reported that fewer online faculty expressed interest in using new technologies in their

classes, despite instructors’ expressed confidence with their institutions’ learning


platforms. Therefore, if such technologies do improve reflection, which, in turn,

influences instruction, then training and development can be directed to making these

methods available and salient.

Reflection literature on teaching has largely focused on face-to-face

environments. For example, Bond (2011) used Schon’s reflection-in-action theory to

demonstrate the importance of school administrators thinking and reacting in

spontaneous situations. However, more recent studies recognize the value of reflection

in online courses. Wade (2016) used reflection journals as a method to understand how

reflection can influence feedback. The outcomes of this study could prompt more

reflective practices for online educators. This increased attention to reflection has many

positive outcomes.

For the purposes of this study, reflection will be addressed according to Ryan

and Ryan’s (2013) model, based on four levels. The reporting and responding level

includes observations or evaluations, the relating level includes a connection between

the agent reflecting and the action, the reasoning level focuses on how theory or new

perspectives might support the reason, and the last level is reconstructing, which allows

the reflecting agent to consider future changes (Ryan & Ryan, 2013). Online

undergraduate full time faculty may develop patterns or routines to their grading; these

practices may not be the most effective nor the most satisfying for students. Reflection

could prompt newer strategies that could improve feedback for its formative value and

student satisfaction.

The outcomes of this study potentially affected the broader community of higher

education. Yin (2014) noted that a common concern with qualitative studies is that
researchers analyze at the individual level and neglect the more global implications of

the study. As more is known about instructor feedback relative to Vygotsky’s ZPD

(1978), an increase in self-regulation may be a result (Wass & Golding, 2014). As this

paradigm shift occurs, both student and instructor roles are subject to change. As

Paulson Gjerde, Padgett, and Skinner (2017) recommended, greater attention might be

focused on helping students understand and analyze feedback and to show their active

role in the process (Planar & Moya, 2016), possibly with a movement towards having

students respond or reflect on their feedback (Denton & Rowe, 2015). These

philosophies are all consistent with the increased movement of self-regulated learning

(Nicol & Macfarlane-Dick, 2006) and would represent the goals of formative feedback.

Rationale for Methodology

For this study, a qualitative methodology was used as the most logical choice for

conducting research to explore the perceived influence of feedback practices on teacher

reflection and subsequently, the influence of reflection on instructional strategy. This

qualitative single case study has constructivist elements, since it is based on the

viewpoint that reality is constructed by individuals interacting (Almalki, 2016). This

qualitative framework allowed the researcher to capture nuances that were a part of

instructor reflection and feedback practices. The phenomena of faculty questionnaire

results, classroom feedback analysis, and focus group findings required non-numerical

data for richly coded themes to emerge. Teacher reflections could not have been

measured in a quantitative fashion and the feedback was examined by analysis and

coding to infer relevant conclusions that could contribute to the field. According to
Almalki (2016), qualitative research emphasizes exploring and understanding, so this

qualitative method was aligned with the problem statement.

A qualitative method aligned with the study’s phenomena of feedback and

reflection. Qualitative research allows a researcher to explain phenomena in their natural

setting (Yin, 2014), whereas a quantitative method requires larger numbers (Merriam &

Tisdell, 2016). For this study, a natural setting was required for authentic faculty

feedback and to determine their reflective practices. Therefore, a qualitative approach is

foundational for a greater understanding of the mechanisms that influence faculty

feedback reflections and how they inform their teaching. In this study, online

undergraduate full-time faculty feedback practices were explored in their daily setting of

instruction. Based on Yin’s (2014) tenets, this approach allowed the researcher to focus

on the natural setting of online undergraduate full-time faculty and how they selected

their feedback methods and how their reflections influenced their instruction.

Nature of the Research Design for the Study

A single case study was selected for the research design. A single case study is

recommended when the researcher wants to explain how a social phenomenon works

(Yin, 2014). When an investigation can focus on a single case, a holistic and real-world

perspective on group behavior can be revealed (Yin, 2014). The case study approach

allowed a deep exploration of the population of online undergraduate full-time faculty

feedback practices, what feedback methods were used, and how their feedback practices

informed their instruction.

When conducting a single case study, there are several qualitative designs that

were examined. First, a phenomenological approach was considered. A


phenomenological design is suitable when a researcher wants to explore emotions or

affective states about lived experiences (Merriam & Tisdell, 2016). However, the

primary focus for this study was the perceived influence of feedback practices on teacher

reflection and subsequently, the influence of reflection on instructional strategy, so

emotions or affective states were not emphasized. Grounded theory is a second design

within qualitative methods and its focus is contributing to new theories (Merriam &

Tisdell, 2016). However, per Yin’s (2014) recommendations, this study utilized theory

as an underpinning, and did not seek to establish or formulate a new theory. Vygotsky’s

(1978) Zone of Proximal Development was the theory that supported this study, since it

established a foundation to determine the perceived influence of feedback practices on

teacher reflection and thus considered how students might progress through the ZPD as

formative feedback was delivered. A third design to consider was narrative inquiry, in

which the story text forms the data set (Merriam & Tisdell, 2016), which would not be

sufficient for the proposed study, which relied on the triangulated data from faculty

questionnaires, instructor feedback on papers, and a focus group.

The target population was online undergraduate full-time faculty who work in a

collaborative environment and teach first-year sequence courses. Unlike adjunct

instructors who may work remotely, the sample of the population work collaboratively

with other online instructors, which allows them to regularly share best practices. The

results of this study, however, could also have meaningful implications to adjunct

instructors who are remote, as reflection is an individual process. The sample size of 10

faculty was large enough to produce meaningful results in this qualitative method and

design. Baxter and Jack (2008) stated that in a case study, researchers must provide
enough detail to assess the validity or credibility of the work and that purposeful

sampling strategies should be applied. Due to the volume of coursework of this faculty

population, sufficient detail was provided to assure the study’s credibility.

The unit of observation was online undergraduate full-time faculty at a

southwestern higher educational institution in the United States. This group of faculty

teach online exclusively from a campus location. The volume of courses provided

substantive data, and demonstrated a variety of feedback methods. Instructors used

either text or video methods for feedback.

A single case study framework was the design selected for this study due to

many conditions. First, the case is related to the theory of Vygotsky’s ZPD (1978) with

instructor feedback serving as scaffolding to move students to their proximal learning

zone. A single case study is the most appropriate for confirming or expanding on a

theory (Yin, 2014). Analysis of themes from these three sources of data converged to

explore the phenomena of the perceived influence of feedback practices on teacher

reflection (and subsequently, the influence of reflection on instructional strategy). As

faculty responded to the questionnaire in the natural setting of their grading, these data

aligned with the review of their actual classroom feedback. Finally, the focus group

provided the participants the opportunity to confirm or challenge the findings of the

other two data sources, resulting in the synthesis of data, making this an appropriate

single case study design.

A single case study was the best choice for this study’s research questions, due to

the kind of data necessary to address the questions. The data in qualitative studies must

allow for affective nuances. Byrne and Ragin (2009) justified a case study when
statistics are not capable of revealing results with a degree of satisfaction, or if there are

different outcomes or conditions that are more subjective. For this study, instructor

reflections and their insights on how they informed instruction needed to be analyzed

qualitatively. To understand the perceived influence of feedback practices on teacher

reflection (and subsequently, the influence of reflection on instructional strategy), an

understanding deeper than a statistical analysis was needed. The data collection

procedures were as follows:

Steps:

1. Obtained site authorization to conduct study at the institution.

2. Obtained Institutional Review Board (IRB) approval.

3. Field-tested three faculty members as validation of questionnaire.

4. Worked through the Director of Online Instruction to send an email recruiting volunteer participants.

5. Ensured privacy and confidentiality of participant responses.

6. Received Informed Consent forms from participating faculty.

7. Informed study participants that confidentiality would be maintained.

8. Coded, themed, and analyzed current courses of participating instructors.

9. Where appropriate, data were triangulated.

10. Planned and scheduled questionnaire time period for faculty to complete
questionnaires.

11. Planned and scheduled focus group in order to moderate the discussion on how
reflection informs instruction (Yin, 2014). The focus group consisted of 10 study
participants.

12. Collected responses from the prepared, validated questionnaire. The researcher prepared the questionnaire to determine faculty reflections during feedback. Questions
were based on the supporting theory of Vygotsky’s Zone of Proximal
Development (Vygotsky, 1978) and guided by Ryan and Ryan’s 4Rs model of
reflection. Faculty were asked to respond to the questionnaire in the daily course of providing feedback on papers.
13. Reviewed courses to determine what kind of feedback faculty used. Faculty at
this institution used either text feedback embedded on student papers or video
feedback.

14. Facilitated a focus group of participants who responded to the questionnaire. Discussions centered on the data presented from the questionnaire responses and the feedback analysis. Participants either confirmed or challenged the data.
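
To make the a priori coding in step 8 concrete, the sketch below illustrates, in Python, how feedback comments could be tagged against Hattie and Timperley's (2007) four levels and tallied by delivery method. The keyword lists, function names, and sample comments are hypothetical stand-ins; in this study the coding was performed manually by the researcher, so this is only an illustration of the logic, not the actual procedure.

from collections import Counter

# A priori levels from Hattie and Timperley (2007); the keyword lists
# are invented stand-ins for the researcher's manual judgments.
LEVEL_KEYWORDS = {
    "task": ["thesis", "citation", "requirement", "grammar"],
    "process": ["organize", "revise", "strategy", "outline"],
    "self-regulation": ["reflect", "check your own", "next time you"],
    "self": ["great job", "well done", "good effort"],
}

def code_comment(comment):
    """Assign the first matching a priori level; default to 'task'."""
    text = comment.lower()
    for level, keywords in LEVEL_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return level
    return "task"

def tally_by_method(comments):
    """comments: iterable of (delivery_method, comment_text) pairs."""
    counts = {}
    for method, text in comments:
        counts.setdefault(method, Counter())[code_comment(text)] += 1
    return counts

sample = [
    ("text", "Your thesis needs a clearer claim."),
    ("video", "Great job on this draft!"),
    ("text", "Next time you submit, check your own work first."),
]
print(tally_by_method(sample))

Running the sketch prints a count of coded levels per delivery method, mirroring the disaggregation of written and video feedback described for this study.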

Definition of Terms

This study explored to what extent online undergraduate full-time faculty

feedback practices influenced their reflective thinking, and if that reflective thinking had

an impact on instructional practices. The phenomenal dimensions were the faculty questionnaire results, feedback from participants' classes, and focus group discussions.

The specialized terms relevant to this study are listed below.

Asynchronous learning. The online learning environment where students and

instructors operate separately and independently in terms of time and space,

communicating through the learning management system (Atwater et al., 2017).

Feedback. For the purposes of this study, feedback is defined as communication

of information to the student that directs reflection, assists in constructing self-

knowledge as it relates to learning, and prompts new learning goals (Bonnel, 2008).

Formative feedback. Distinguished from summative feedback, formative

feedback is designed to guide student learning and can also extend to promoting self-

regulation and self-esteem (Sims, 2016).

Online education. For the purposes of this study, online education is a course in higher education in which at least 80% of the content is delivered online in an asynchronous environment (Allen & Seaman, 2016).


Reflection. The complex process that requires deep thought (Roberts, 2016), and

connects theory to practice (Bennett et al., 2016); a cyclic process that determines how

and when to apply processes to a unique case (Bennett et al., 2016; Schon, 1983).

Scaffolding. Scaffolding involves instructor support that enables students to

reach a level of autonomously completing a task (Wass & Golding, 2014). Scaffolding is

part of Vygotsky’s Zone of Proximal Development theory (Vygotsky, 1978).

Social presence. For the purposes of this study, social presence is defined as

participants projecting their personal characteristics into the community, resulting in

presenting themselves as real people (Garrison, Anderson, & Archer, 2000).

Summative feedback. This includes the student’s grade and instructor comments

to justify the grade (Webb & Moallem, 2016).

Zone of Proximal Development (ZPD). This component of Vygotsky's (1978) sociocultural theory of learning asserts that learners have a distance between their actual developmental

level, identified by independent problem solving, and their level of potential, where

tasks can be completed with appropriate instructor guidance. Learning takes place in this

zone.

Assumptions, Limitations, Delimitations

Assumptions. An assumption refers to a belief that is considered true without

evidence or justification (Denzin & Lincoln, 2011). This qualitative study explored how online undergraduate full-time faculty feedback practices influenced their reflective thinking and how those reflections informed their teaching. The following assumptions

were present in this study:

1. It was assumed that the faculty provided honest and authentic responses on the questionnaire. The instrument used was Ryan and Ryan's (2013) 4Rs model of
reflective thinking. It is possible that faculty could have provided less than
transparent responses to their feedback. Training on reflective strategies might
have been necessary for more insightful responses.

2. It was assumed that the reviewed classroom feedback was valid data that
represented authentic faculty responses. Forte et al. (2016) stated that feedback
carries the role of personal interaction between the student and the instructor and
can influence the degree of social distance. Therefore, it was assumed that the
classroom data represented student and instructor interaction.

Limitations and delimitations. Limitations are factors the researcher has no control over, such as bias. In contrast, delimitations are factors over which the researcher has control, such as the location of the study.

1. This was a single case study, which is a potential limitation, as there may
have been challenges associated with the generalizability of the findings.

2. The population could have been considered another limitation, as all


participants were full-time faculty who work on the host campus. Because
these faculty members teach full-time, their practices may be different from
adjunct faculty members. Moreover, these faculty members collaborate,
resulting in possibly less diversity than what might result from institutions
where contingent faculty are more prevalent or work remotely.

3. Potential bias from the researcher’s perspective was another limitation of the
study. Because this researcher is employed at the host institution, there is a
keen understanding and familiarity of the expectations and standards of
feedback. The data collection and analysis process were designed to mitigate
bias. Participation was voluntary, so the researcher did not select the
participants. Next, the paper selection process was based on assignments that
met the selection criteria of 300 words or more in length. The researcher also
selected papers from courses that were current, yet completed, so that there
would be sufficient data for review. These criteria limited the availability of the selection, which minimized the potential for bias in this selection process. The
data analysis was structured around a priori themes, meaning that the codes
for evaluating feedback were already determined, per Hattie and Timperley’s
(2007) framework. Finally, as this study was not about the actual quality of
feedback, the researcher was not taking instructor feedback standards into
consideration during the collection or the analysis.

4. From the participant’s perspective, the researcher’s supervisory role at the


host institution could have potentially presented bias or possible unconscious
or conscious coercion, as participants may have felt pressure to provide data
based on the relationship. However, these participants did not fall under the
direct supervision of the researcher and there were no ties to performance
evaluations from the researcher. Moreover, there was no promise of
monetary or other reward, improved evaluation scores, access to future
resources or support systems for participating. The participants agreed to
allow the researcher to review their own papers, as part of a data source,
which gave the participants no control over that source. The participants were
also aware that the study was not about the quality of the feedback, so there
was no motive to consciously or unconsciously respond to the questionnaire,
relative to the researcher’s supervisory status.

5. The researcher examined the classroom artifacts through her supervisory


access. Matched convenience sampling created an alignment between the
volunteer participants and their concomitant feedback on assignments.
Selecting assignments of more than 300 words from first-year-sequence
courses ensured alignment among the data sources.

6. A delimitation included the location of the study. Funding restrictions


required the researcher to draw participants from a local institution and this
limited the scope of the study.

Summary and Organization of the Remainder of the Study

This study was organized as follows: Chapter 1 included the introduction of the

study, the statement of the problem, the purpose of the study, the research questions, the

advancing of scientific knowledge, the significance of the study, the definition of terms,

the rationale for methodology, the nature of the research design, the assumptions,

limitations and delimitations, and the summary of the study. Current perspectives in the

literature on the topics of feedback and reflection were presented in Chapter 2. Current

literature about these topics forms an important rationale for the study. Chapter 3 explained and rationalized the qualitative methodology and the case study design, the population selection, and the choice of measures, including the questionnaire, the frameworks to analyze the archived course feedback, the framework for preparing the focus group discussions, and the measures taken to ensure validity and reliability for their use. That chapter also covered the data collection procedures, data analysis procedures, and ethical considerations for this study. Chapter 4 presented the data analysis and findings. Chapter 5 summarized and interpreted the data for the study. That chapter also discussed implications for practice and recommendations based on the results.

For online instructors, feedback is a critical practice for communication and

formative instruction (Abaci, 2014; Wade, 2016; Webb & Moallem, 2016). In an effort

to increase efficiencies and to lessen the distance in this modality, instructors are turning

to different methods of feedback, including audio, video, and text (Borup et al., 2015;

Gredler, 2016; Hall, 2018; Sims, 2016). These choices could influence how instructors

reflect on their feedback, as audio or video methods could lessen the cognitive load

(Borup et al., 2015; Wade, 2016) which could allow deeper reflection during the

feedback process. Feedback is often measured in terms of student outcomes, but there is

less attention to the thought processes of faculty and how those reflective practices

might influence teaching. Reflection is valued as a teaching strategy (Martin & Double,

1998; Nguyen et al., 2014). Exploring the perceived influence of feedback practices on

teacher reflection (and subsequently, the influence of reflection on instructional strategy)

has valuable implications for awareness of and improvement in teaching, and

implications for professional development. This study filled a gap in the literature on

feedback and reflection in online learning by examining teacher perceptions of feedback (Wade, 2016), and expanding on Borup et al.'s (2015) findings, which recommended

exploring feedback among a more diverse student population. Finally, because Ali (2016) identified teacher perceptions of feedback as a gap in the literature, this study addressed that gap as well.


Chapter 2: Literature Review

Introduction to the Chapter and Background to the Problem

The literature on both feedback and reflection is reviewed in this study. First, a

background discussion will be developed surrounding the need for extended studies in

faculty reflection and feedback. Next, an identification of the gap for this study will be

presented, followed by a discussion of the theories and conceptual framework for the

study. Finally, a thorough review of current literature on asynchronous feedback, faculty

reflection, and research that lays a foundation for the context of online higher education

will be presented. This review will also include seminal researchers and their

contributions to these areas. The literature review will conclude with a discussion of the

methodology planned for this study and how it relates to the methods used in literature

on this topic.

The chapter is organized according to major and minor sections. The

introduction to the chapter and background of the problem will be the first section; the

identification of the gap will follow; the next major section will discuss the theoretical

foundations and conceptual framework. The Review of the Literature is the next section. Within that review are the following sub-sections: Historical perspective of online learning and Feedback in online education, which includes a section on formative feedback discussing three different models. The next major sections are Feedback and social presence, followed by Social presence and learning outcomes, Feedback methods, and Feedback and learner populations. The next major section is Reflection, which has sub-sections on Reflection and teaching, Models of reflection, and Outcomes of reflection. Major sections about the case study design and sources of data

conclude the themes in the literature review.

Research for this study came from peer-reviewed journal articles, published

dissertations and studies from the Grand Canyon University library's ProQuest and ERIC databases, and Google Scholar. The foundation of previous researchers and their

conclusions served to inform the new findings of this study (Hart, 1998). The search

categories were online feedback, social presence, reflection, reflective teaching,

feedback methods, and audio and video feedback. In the following background section,

literature related to the problems that are addressed by this research project is presented.

Background. Feedback’s critical role in online education can be viewed

according to its numerous elements and complexities. The heart of feedback is

conveying student progress (Hattie & Timperley, 2007). Through conveying this

progress, feedback itself can be viewed as a learning mechanism (Abaci, 2014; Gredler,

2016; Nicol et al., 2014). This evolving view of feedback began with Scriven (1967),

who first distinguished summative from formative feedback within the context of

curriculum evaluation, followed by Bloom (1969) who extended this concept to

classroom teaching, promoting formative feedback at various stages, and recommending

summative feedback only at the conclusion of an assignment (Ninomiya, 2016). Sadler

(1989), Hattie and Timperley (2007), and Black and Wiliam (1998) have all refined the

framework of formative feedback for classroom settings. The common element among

these researchers is that effective formative feedback has a strong relationship to

learning and that instructors must facilitate feedback that identifies the assignment goal,
clearly shows students where they fell short of the goal, and offers specific guidance for

future success.

When feedback is associated with learning, it is considered formative.

Scaffolding is an example of a tool to move students towards Vygotsky's (1978) zone of proximal development, but despite feedback's learning potential, instructors face challenges in

providing optimal formative feedback. First, the online environment poses particular

challenges for instructors to deliver corrective feedback while maintaining a positive

rapport with students (Wolsey, 2008). Next, feedback’s timeliness is attached to its

effectiveness (Sadler, 2010; Webb & Moallem, 2016). Therefore, if there is institutional

pressure for university faculty to provide timely feedback, the quality may be compromised, diminishing learning outcomes, instructor rapport, and student satisfaction. Understanding more about how faculty select feedback methods and how

they reflect on their feedback can provide valuable data about its formative value and

student learning. In addition, understanding how these feedback practices influence

teacher reflection and subsequently, the influence of reflection on instructional

strategies, has significance for faculty and students. While much is known about the

elements of feedback, research is needed on how online instructors reflect on their

feedback.

Instructors approach these varied elements of feedback on a spectrum of

consciousness. An instructor’s focus on feedback could be divided or shifted according

to instructor expertise, institutional demands, or how aspects of feedback are allocated.

Hattie and Timperley (2007) identified three questions that represent phases of feedback

as they relate to learning: Where am I going? How am I going? Where to next? (Hattie
& Timperley, 2007). Yet in one study, when these feedback elements were examined,

teacher candidates disproportionately focused on “How am I going?” with limited

attention to the other questions (Ellis & Loughland, 2017). Arguably, feedback is neither

complete nor formative if only this element is addressed (Black & Wiliam, 1998; Hattie

& Timperley, 2007; Sadler, 1989).

Other feedback challenges include increasing personalization. Encouragement

and support are more challenging in an asynchronous environment (Atwater et al.,

2017). These diverse and sometimes competing elements are at work when instructors

approach a student paper to grade. Understanding more about how instructors reflect

could provide rich insights into how they approach these challenges, including how they

might prepare feedback to guide students to Vygotsky’s (1978) ZPD. Like feedback,

reflection is also a complex construct and more studies are needed (Roberts, 2016).

Although studies have been conducted on instructor reflections (Nguyen et al., 2014;

Roberts, 2016; Pawan, 2017; Ryan & Ryan, 2013) there is not a study about how

instructors reflect while delivering their feedback or the subsequent influence of that

reflection on teaching.

This study was about undergraduate faculty feedback exclusively in the online

classroom in higher education. Online learning has rapidly become a preferred method

of delivery for its flexibility and convenience to students, and its affordability to

institutions (Caruth & Caruth, 2013; Gredler, 2016; Rockinson-Szapkiw, 2012). Allen

and Seaman (2016) reported 5.8 million distance education students, including 2.85 million taking all of their courses at a distance. Although there are advantages in this

modality, there are also challenges, including the inherent social distance and the
difficulties of delivering personalized meaningful feedback (Atwater et al., 2017; Borup

et al., 2015; Frisby, Limperos, Record, Downs, & Kercsmar, 2013; Pattison, 2017). As

technology advances, instructors are turning to new feedback methods, such as video

applications, which provide greater personalization in their communication and feedback

delivery (Atwater et al., 2017; Ellis & Loughland, 2017; Frisby et al., 2013; Wade,

2016). In addition to feedback, reflection is also a critical attribute of good teaching

(Falender, Shafranske, & Falicov, 2014; Roberts, 2016). Researchers have focused on

how students might reflect for deeper learning, or how instructors reflect (Brubaker,

2016). However, there are not studies about how the actual practice of feedback, if

reflected upon, could inform teaching for online faculty in higher education.

In online courses, feedback serves critical functions. First, it has the potential to

lessen the social distance (Pattison, 2017). Secondly, feedback is a learning mechanism.

Formative feedback serves as the learning process in online courses (Abaci, 2014;

Gredler, 2016; Sims, 2016; Wade, 2016). With feedback playing such critical roles, a

broader understanding of how it can lead to learning could be discovered through examining instructor feedback reflections. Accordingly, how instructors reflected while giving feedback was also examined.

Within the context of this study, reflection is a purposeful act. The reflector plays

an active role in their own reflection, through the model of reporting and responding,

relating, reasoning, and reconstructing (Ryan & Ryan, 2013). Reflection is associated

with learning (Harvey, Coulson, & McMaugh, 2016). Instructors who reflect on their

feedback may learn from that reflective process (Farrell & Jacobs, 2016), thereby

informing teaching strategies. For example, if an instructor is reflecting during feedback,


and it becomes evident that a student needs help with organization, through reflection,

the instructor might provide a resource to the student, or even to the whole class. The

instructor may find that, through video feedback, showing organizational strategies might be more effective than explaining through written text.

Knowing more about how instructors reflect could provide insight into these

methods and choices, but there were no studies that specifically addressed the connection between instructor feedback reflections and teaching. Moreover, this

study addressed a gap as researchers have recommended extending the population on

studies about feedback (Borup et al., 2015; Wade, 2016). As these researchers

recommended, new and diverse populations and larger class sizes were addressed in this

study to further knowledge in the field of how reflection and feedback inform instruction

in the online classroom. This study could result in valuable insight into the perceived influence of feedback practices on teacher reflection and, subsequently, the influence of reflection on instructional strategy.

Identification of the Gap

Online learning remains a popular modality in higher education. Most higher

education institutions offer online courses as a critical component of their long-term

sustainability strategy (Allen & Seaman, 2016). Since 2002, its swift growth has

prompted ensuing research, comparing its andragogy to face-to-face settings. While the

accelerated growth of online education is a certainty, recent findings in the areas of

feedback and reflection are less consistent. For example, there are concerns with the

physical and psychological distance, retention, student satisfaction, and motivation

(Forte et al., 2016). These factors are addressed in many ways in a face-to-face
environment, but in the online environment, instructors may approach them solely

through faculty feedback. Recently, researchers also examined new technology methods

designed to improve feedback, but there are mixed findings in terms of instructor and

student preferences (Ali, 2016; Atwater et al., 2017; Borup et al., 2015). The research

gaps in feedback that led to this study will be presented in the context of recent research

and trends, revealing how this study expanded research in these areas.

Formative Feedback. This study filled a gap through addressing specific

recommendations from recent studies. Borup et al. (2015) stated that a gap in past

empirical studies is that feedback is largely explored from the student’s perspective and

that greater attention should be given to the instructor perspective. Through

incorporating a reflective component, this study is about feedback from the instructor’s

viewpoint, which provided insight into the connection between the instructor’s feedback

delivery and their own teaching. Recent research in feedback confirms its formative

value. Mirzaee and Hasrati (2014) found in their qualitative stimulated recall

methodology, that instructor formative written feedback could contribute to student’s

nonformal learning, as it encourages specific action and raises the likelihood of student-

to-student engagement, as well as student-content engagement. These findings are

consistent with Webb and Moallem’s (2016) quantitative and qualitative data that

indicated that when feedback is specific, motivating, and informative, student

achievement improves.

Both studies showed documentation of the importance of formative feedback that

follows a structure or model for meaningful feedback. Mirzaee and Hasrati (2014) used

Vygotskian framework as an antecedent to feedback’s formative potential, and Webb


and Moallem (2016) posited that feedback is formative when it follows Hattie and

Timperley’s (2007) framework. More recently, Ellis and Loughland (2017) confirmed

the importance of a formative model, as their findings indicated that teachers were not

addressing all elements of this model, resulting in less effective student outcomes.

Therefore, trends in feedback have addressed its formative capacity, but online instructor

feedback had not been examined within the context of faculty reflections and how those

reflections might influence teaching.

Population. Recent research has focused on feedback outcomes on specific

populations, but a gap remained, revealing the need to explore more diverse populations.

Atwater et al. (2017) studied graduate student perceptions of video feedback in their

qualitative case study and found that although students appreciated the deeper student-

to- teacher relationship that resulted from the video communication, students preferred

written text for its convenience. These researchers called for future studies on different

populations. In a blended class, Borup et al. (2015) studied student and instructor

perceptions of feedback in different methods and found that, like Atwater et al.’s (2017)

findings, both students and instructors found value in video comments, which can be

more detailed, but overall preferred text for its convenience and accessibility. In Borup et al.'s (2015) study, the population was student teacher candidates; although these were undergraduate students, they were in their final semester, so there was still a great distance from entry-level undergraduate students. These researchers made the same recommendation of

examining feedback methods with different populations and contexts, including courses

that are exclusively online (Borup et al., 2015). This study on instructor feedback and
reflection addressed these gaps as the focus was on first-year undergraduate students in

online classes.

Feedback methods. Recent research has focused on new ways to deliver

feedback. Various applications, including Screencast and Loom, allow instructors to

provide video or audio feedback. Students may view the paper while the instructor

annotates comments and provides resources; in addition, some applications show the

instructor’s face while delivering assignment comments. Using Screen-cast-o-matic, Ali

(2016) conducted a mixed methods study of two experimental groups and while one

group received written assignment feedback, the experimental group was given video

feedback on higher order writing criteria, such as content and organization, with written

feedback on the lower order criteria, such as grammar and mechanics. The findings

showed that the experimental group outperformed the control group, meaning that the

Screencast video feedback was viewed as supportive and formative. However, students

reported technical challenges of delayed downloading and anxiety of accessing the

videos (Ali, 2016).

The above studies illustrated recent trends and studies focused on online faculty

feedback. However, as technology evolves, even current studies can be rapidly outdated.

For example, students in some studies reported preferring text feedback over video or

audio (Ali, 2016; Borup et al., 2015) for its convenience. Yet, technology’s rapid

advancements could lessen the downloading time or could make the feedback viewing

accessible on later model smartphones. Thus, it was significant to determine how, with

technology delays and concerns alleviated through newer methods, faculty chose their
particular feedback methods, and if their reflection during feedback informed their

instruction.

Theoretical Foundations and/or Conceptual Framework

Numerous students are influenced through online feedback. A majority of higher

educational institutions report that online delivery is critical to their long-term strategy

(McNiff & Aicher, 2017). With this modality showing sustenance, leadership in higher

education must view student learning as a meaningful benchmark for comparing

modalities, methods, and strategies in education. Learning theories are useful for

understanding the teacher’s role and prompt effective strategies for student learning.

Good teaching sets the conditions for the student to be in charge of their own learning.

Thus, teachers must understand the dynamics that influence learning, and are charged

with setting those conditions (Roberson, 2017). Vygotsky’s (1978) Zone of Proximal

Development (ZPD) is the theory that supported this study; its tenets demonstrate that

the teacher can set these conditions in any delivery modality.

Instructor strategies can affect student learning. Vygotsky’s (1978) ZPD was

developed in response to systematic education, which presumed that whatever the instruction delivered would result in learning. This systematic method of instruction did

not address the fact that a new learning concept would affect the child’s development.

As part of the sociocultural theory of learning, Vygotsky’s (1978) ZPD component

proposed that development must occur prior to learning; social and cultural influences

contribute to the learner’s developmental level (McNiff & Aicher, 2017). Within this

theory, learning takes place between where a student can independently complete a task

and where the task is beyond their grasp. When an instructor provides the appropriate guidance, the student can complete the more challenging task, thus working within the zone of proximal development.

Instructor assistance is often in the form of scaffolding. Per Vygotsky’s (1978)

theory, scaffolding includes resources, explanations, prompts, directions, advice, or

background information (Armstrong, 2015; Wass & Golding, 2014). This study

examined how instructor reflections might lead them to move students to this proximal

development zone. For example, when instructors reported that they were giving

particular comments to provide support for future assignments, that would be an

example of moving students into this zone and providing the support needed.

Feedback, therefore, has the potential to affect learning. Within this learning

theory, feedback might be viewed as an actual pedagogical process, with Vygotsky’s

(1978) ZPD theory forming a suitable framework for assessing that process. Knowing that not all students are at the same level, instructors may

prepare individual feedback specifically to move students into the proximal zone. One

student may need scaffolding support or resources, while another may need to be

challenged in order to move to the ZPD that will activate development. Thus, the second

student referenced may have been at their completed developmental level at the time of the assignment submission, so instructors might reflect and use their feedback to appropriately challenge one student while giving scaffolding to others. This is

Vygotsky’s (1978) theory operationalized in higher educational online settings.

This study explored how online feedback practices influence reflective thinking.

Online instructors view feedback as a method of instruction (Abaci, 2014) and a

scaffolding tool (Gredler, 2016). Hattie and Timperley (2007) posited that feedback has
the potential to lead to learning, since it encourages students to act upon the given

advice. As a scaffolding mechanism, feedback can show students the gap between their

current and target performance (Mirzaee & Hasrati, 2014). If instructors are mindful as

they deliver feedback, these reflections could be a catalyst for instructors to more

appropriately consider if their scaffolding strategies are effective in moving students

towards the ZPD.

Different methods faculty used in online instruction were also explored. Online

instructors are turning to new methods to deliver feedback, including screencast tools (Ali, 2016), video (Borup et al., 2015; Sims, 2016), and auditory methods (Borup et al., 2015; Frisby et al., 2013), which capture the instructor's voice and

provide a view of the student’s paper. Video feedback has advantages; instructors can

increase their social presence through natural emotional reactions and nuances (Borup et

al., 2015). Because social presence is a direct indicator related to student outcomes

(Tunks, 2012), instructors may find these methods are more effective for scaffolding

strategies. Moreover, if an instructor’s voice explains a concept through feedback, the

paralinguistic features may allow a clearer explanation. This increased clarity could

contribute to the student’s understanding. Therefore, the instructor choice of feedback

delivery could influence student learning, relative to Vygotsky’s (1978) ZPD theory.

Finally, how online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches was also

explored. Because reflection is a complex process that requires depth of thought and

ongoing improvement and focus (Gasparič & Pečar, 2016; Roberts, 2016), a deeper

understanding of these thoughts can uncover to what degree instructors are preparing
formative feedback to maximize learning. Appropriate scaffolding or challenges form

the premise for achieving the ZPD (Wass & Golding, 2014). For example, if an

instructor is reading a paper and mentally preparing their feedback, their reflective

thinking may prompt them to consider the kind of tools the student will need. If the

student has a repeated grammatical error, then the teacher might provide a resource for

the student. This is an example of using scaffolding to move the student into the ZPD. In

addition, through the act of reflecting, the teacher might make a mental note that several

students struggle in this particular area, so they may extend their action to prepare a resource

for the entire class. Thus, if instructors consciously reflect on their feedback, they may

have a greater awareness of providing formative feedback that optimizes Vygotsky’s

(1978) theory.
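
A minimal sketch of the scaffolding decision described above appears below, assuming a simple threshold for when an issue becomes class-wide; the names, resource list, and cutoff are illustrative assumptions, not elements of the study.

from collections import defaultdict

CLASS_WIDE_THRESHOLD = 3  # assumed cutoff for "several students"
issue_counts = defaultdict(set)  # issue -> set of students showing it

def note_issue(student, issue, resources):
    """Record an issue noticed while grading; return scaffolding actions."""
    issue_counts[issue].add(student)
    actions = [f"Send {student} the resource on {issue}: {resources[issue]}"]
    if len(issue_counts[issue]) >= CLASS_WIDE_THRESHOLD:
        # Several students share the issue, so scaffold the whole class.
        actions.append(f"Post a class-wide resource about {issue}")
    return actions

resources = {"comma splices": "handout on comma splices"}  # hypothetical
print(note_issue("Student A", "comma splices", resources))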

Instructor reflections were examined according to a model. Ryan and Ryan’s

(2013) 4Rs model of reflection served as the framework to explore instructor reflection

on feedback. This model was influenced by Schon’s approach, and it provides a

workable framework that moves his theories into practice (Ryan & Ryan, 2013). Ryan and Ryan adapted Bain, Ballantyne, Mills, and Lester's (2002) model of five components, and their 4Rs model in turn informed the questionnaire developed for instructors in this study. Ryan

and Ryan’s four levels of reflection are: (1) Reporting and responding (2) Relating (3)

Reasoning (4) Reconstructing. The level one questions are the least complex, focusing

on “What” questions, such as, “What method of feedback is being used?” while the most

complex question is, “What might be another way to deliver this feedback?” This

framework formed the faculty questionnaire and guided participants into recognizing

their own reflections while they provided assignment feedback.
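
As a minimal illustration of how the 4Rs levels could scaffold questionnaire prompts from least to most complex, the sketch below pairs each level with a sample question. Only the two quoted prompts come from this chapter; the prompts for Relating and Reasoning are hypothetical placeholders, not items from the actual instrument.

# Ryan and Ryan's (2013) four levels, ordered least to most complex.
FOUR_RS_PROMPTS = {
    "Reporting and responding": "What method of feedback is being used?",
    "Relating": "How does this feedback relate to your past practice?",  # hypothetical
    "Reasoning": "Why did you choose this comment for this student?",    # hypothetical
    "Reconstructing": "What might be another way to deliver this feedback?",
}

for level, prompt in FOUR_RS_PROMPTS.items():
    print(f"{level}: {prompt}")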


Instructors may consider many methods of moving students to their proximal

zone. Vygotsky (1978) posited that learners have an actual developmental level, which

is based on their previous level, identified as the completed developmental level. This

completed level is mostly determined through assessments or tasks (Vygotsky, 1978).

The ZPD theory has been widely applied in primary and secondary environments

(Wass & Golding, 2014); however, in more recent literature, researchers have

recognized its value in higher education (Armstrong, 2015; McNiff & Aicher, 2017;

Roberts, 2016). In higher education, instructors operate with certain assumptions of the

learners’ completed level. That is, a learner in an ENG 102 class should have a

completed developmental level that is reflective of a successful conclusion from their

ENG 101 class.

Operationalizing Vygotsky’s theory may mean providing resources for a

learner’s new task or providing paper feedback that gives the student a tool or

explanation for the task. Through effective feedback, the instructor could provide

scaffolding to move the learner from the actual to the completed developmental level for

a particular objective. How instructors reflect can provide insight into how or if this

intentionality is taking place. This study served to advance the research on the ZPD

theory as it applied to feedback as a formative instructional mechanism. As further

exploration into instructor feedback determined their scaffolding strategies, a greater

understanding of how their methods, tone, and personalization contributed to learning

within this theory provided broader relevance to online learning in higher education.
Review of the Literature

Introduction. This section reviews topics relevant to the study's research questions and purpose. As online instruction continues to be an offering for

many higher educational institutions, the role of feedback has become formative,

standing in for traditional face-to-face instruction (Ali, 2016), but it is not known how

online undergraduate full-time faculty feedback practices influence their reflective

thinking or how this reflective thinking informs instruction at a southwestern higher

educational institution. This section will begin with the historical perspective of online

learning and how this modality has emerged. Because the current study is about

feedback in an online environment, the topic of feedback in online education will

follow. The next topics are seminal feedback models that provide a foundation on how

feedback can be instructional, or formative. Topics related to social presence, its

relationship to feedback and learning outcomes will follow. Finally, topics of reflection

will be presented relative to teaching, followed by reflection models, and outcomes of

reflection.

Historical perspective of online learning. The purpose of this qualitative

exploratory single case study was to examine how online undergraduate full-time faculty

perceived the influence of their instructional feedback practices on their reflective

thinking, and hence, their instructional strategy at a southwestern higher education

institution. The following section is about the historical implications that affect this

topic. Distance learning has taken on various forms to fill gaps for traditional education.

Correspondence courses began in the 1800s. In the 1920s and 1930s radio courses were

created to reach children in the inner cities. Distance courses were so popular in the 1920s that enrollment in them was four times greater than in all other higher education courses combined (Caruth & Caruth, 2013).

Distance learning has seen many changes. As correspondence courses gave way

to early computer-mediated courses, the functionality and availability spread (Pearcy,

2014). For higher education, offering online courses became a cost-cutting measure, as

costs per student were rising faster than inflation (Deming, Goldin, Katz, & Yuchtman,

2015). By the beginning of the twenty-first century, online education was accelerating rapidly, but the growing pains were evident. When acceleration

becomes disproportionate, it should be examined (Nash, 2015).

With the rapid expansion of online education, the concern was that higher

educational institutions might offer these courses strictly as an opportunity for revenue.

Nash (2015) posited that institutions should be cautious about these motives. Such

changes could stem from a lack of preparation on the part of the institution and if the

quality of course design and instructor training is lacking, the ethics of the program

could be jeopardized. As an example, if online courses are shorter in length than the

face-to-face counterparts, this could reflect less academic rigor (Nash, 2015). Another

effect of rapid enrollment was a corresponding increase in student attrition, which was

higher in online than in face-to-face courses (Forte et al., 2016; Smits & Voogt, 2017).

Institutional leadership embraced the online modality. However, instructors

reported a loss of autonomy, outdated hard copy course materials, and job insecurities

which fueled skepticism about the effectiveness of this platform (Caruth & Caruth,

2013). In addition, the transactional distance of online learning posed challenges for

instructors to lessen this distance (Atwater et al., 2017; Borup et al., 2015).
Transactional distance is the psychological and pedagogical separation between the

learner and the instructor (Moore, 2012). If communication gaps stem from the inherent

separation that characterizes online learning, then learning is potentially disrupted

(Huang, Chandra, DePaolo, & Simmons, 2016). Although many faculty appreciated the

convenience, flexibility, and diverse population of online learners, they also expressed

concerns about preserving academic integrity, the quality of online instruction, and the

time demands with facilitating volumes of students (Borup et al., 2015; Wright, 2014).

In many ways, the concerns of online learning did not differ from the concerns of

traditional teaching. Leaders in higher education called for highly qualified teachers

(Allen & Seaman, 2016) and creative ideas to foster critical thinking and deep student

engagement were favored over objective quizzes (Wright, 2014). Finding ways to offset and avoid plagiarism was an ongoing concern (Elison-Bowers & Snelson, 2012;

Nash, 2015; Wright, 2014); however, as technology and practices improved, the face of

online education changed.

Online learning experienced swift and positive changes. Despite challenges that

plagued online learning in its early years, this approach of learning and teaching has

found favor with higher education. Offerings have expanded and enrollment for online

learning has increased (Allen & Seaman, 2016; Elison-Bowers & Snelson, 2012; Forte

et al., 2016; Mehta et al., 2017; Nash, 2015). This modality reflects creative delivery and

robust academics (Mehta et al., 2017). A plethora of research and emphasis on the

socialization components of online education have resulted in innovative methods

towards greater personalization (Borup et al., 2015; Hostetter & Busch, 2013; Pattison, 2017). No longer considered the pariah of modalities, online education, instead, is


respected for its unique advantages compared to face-to-face environments, including

social egalitarianism, and a greater emphasis on writing and communication

(Arasaratnam-Smith & Northcote, 2017). In addition, rich pedagogical theories can be

actualized through course design and creating spaces as catalysts that contribute to

personal learning (Mehta et al., 2017). Attitudes are shifting and there is increasing

confidence from students and faculty in mastering learning management systems and the

communication within the asynchronous environment (Harrison et al., 2017).

A greater understanding of how online learning has evolved is needed as a

foundation for this study. This section addressed the rapid growth of online learning

(Deming et al., 2015), as well as its challenges, expressed by Caruth and Caruth (2013),

Nash (2015), and others. In addition, Atwater et al. (2017) and Borup et al. (2015)

studied the importance of social presence and how newer technologies may lessen that

distance for the online environment. This delivery platform has seemingly limitless

ability to expand and adapt, and is well-positioned to accommodate emerging societal

and industry demands (Gargano & Throop, 2017). Therefore, within this platform, a

closer look at feedback, which is at the core of online instructional practices, could add

to the field of studies in online higher education.

Feedback in online education. Feedback is regarded as the communication of

information to the student to help them reflect, construct self-knowledge as it relates to

learning, and set new learning goals (Bonnel, 2008). An abundance of current studies

reflect feedback’s connection to learning, but feedback’s breadth and depth demand a

deeper understanding of its potential in different conditions. Conflicting views on

feedback’s value provide additional justification for research.


Ample literature notes that formative feedback contributes to learning. Mirzaee

and Hasrati (2014) studied the role of formative learning through written feedback in an

English as a Foreign Language setting. The researchers aimed to extend the studies in

the nature and impact of written feedback and those connections to nonformal learning.

Participants received written feedback, and qualitative interviews using stimulated recall methodology showed that formative feedback promoted learning, since it encouraged

students to act on it. Using Vygotsky’s (1978) ZPD as their theory, the researchers

considered the extent of the scaffolding process in written feedback. Using grade point

averages as a criterion for selecting participating students, they hypothesized that students

with higher GPAs would be more likely to act on the received feedback. The transcribed

interviews were coded, categorized, and reviewed with a 5% point of difference in the

comments. The results showed that for highly motivated students, the written feedback

resulted in the following types of nonformal learning: reactive, deliberate, and implicit.

The researchers concluded that formative feedback leads to both teacher-student and

student-to-student engagement, because it promotes interaction, thereby leading to

greater learning opportunities.

In addition to learning, the literature supports feedback’s impact on student

performance. Abaci (2014) studied both the direct and indirect effects of feedback on

student learning and performance in relation to personality characteristics in online

learning environments. The gap that led to the study was that there is partial evidence of

how academic performance is influenced by personality variables, but the cumulative

effects are untested. In a quantitative cross-sectional survey design, the researcher

addressed academic motivation of feedback, how a learner’s goal orientation interacts


with instructor feedback and the learner’s feedback orientation to the instructor feedback

and how it aligns with academic performance. Due to the large sample size, the data were analyzed through structural equation modeling. For this study, 399 online

undergraduate college students were selected and surveyed according to the variables

presented in the research questions. The results indicated that more elaborate student

feedback was tied to higher performance, but there was not supporting evidence for the

mediating roles of academic motivation or goal orientation. The findings confirm the

value of detailed and elaborate feedback in online classes. The current study provided

valuable insights on how instructors reflect during their feedback, which could in turn,

help students refine or increase their awareness of goal orientation.

Instructor feedback methods for online instructors may contribute to student

outcomes. In a qualitative study, Sims (2016) used Media Naturalness Theory to frame a phenomenological and discourse analysis of instructor interviews and

feedback samples. The results showed that when instructors provided feedback, they

considered the educational purpose, the degree of interpersonal relationships with

students, and efficiency. The collected data also showed instructor methods, whether

audio, visual, written, or a combination. Instructor feedback samples were reviewed for

their complexity, tone, and word choice. Feedback in the form of audio and video had

higher word counts and greater complexity. A limitation of this study was that the

participants selected their own feedback samples, which may not have been typical of

their feedback to students. This study has implications for choosing methods that

support the most useful feedback for students, and further research is recommended for

whether these technologies better support summative or formative feedback.


Instructor feedback that is well-intended may not necessarily be effective in terms of student perception. While many studies do confirm feedback's positive

outcomes for students (Abaci, 2014; Mirzaee & Hasrati, 2014; Sims, 2016),

controversies in the literature also exist concerning instructor feedback. In a qualitative

multi-case study that explored differences that affect second language learner responses

to written corrective feedback, Li and Li (2012) analyzed interviews and student

responses to instructor’s written feedback. They found that when addressing errors,

instructors should be mindful of the students’ present zone of proximal development

(Vygotsky, 1978) and that for more complex corrections, such as sentence structure, the

feedback was less meaningful, since the students were aware of the grading criteria.

When errors addressed in written feedback were beyond the students’ ZPD (Vygotsky,

1978), the written corrective feedback was ineffective. The researchers recommend

more studies concerning individual differences to written feedback, and how the

feedback informs writing instruction.

Nevertheless, feedback remains an important part of online classroom instruction

and interaction. Studies confirm that instructor feedback has the potential to be the most

critical course element, and is the common denominator in student success (Abaci, 2014;

Ali, 2016; Gredler, 2016; Hattie & Timperley, 2007; Nicol & MacFarlane-Dick, 2006).

Yet, feedback is largely misunderstood and remains an under-researched concept

(Abaci, 2014; Borup et al., 2015; Gredler, 2016; Hattie & Timperley, 2007). Li and Li’s

(2012) findings uncover the importance of understanding the student’s proximal zone

while giving feedback. This focus on matching the feedback to the student population

was also present in Mirzaee and Hasrati’s (2014) study, as there was a difference in
reaction to feedback when grade point average was a factor. Despite the evidence of

positive outcomes of feedback, it is not effective in a vacuum (Hattie & Timperley,

2007). This study extended the literature on instructor feedback in online courses and

how that feedback influences teaching. As Li and Li (2012) suggested, more attention is

needed on feedback’s potential to inform instruction, so examining online undergraduate

full-time faculty feedback reflections provided greater insight into how feedback might

inform instruction.

Formative feedback. The current study is about exploring how online

undergraduate full-time faculty feedback practices influenced their reflective thinking or

how this reflective thinking informed instruction at a southwestern higher educational

institution. To explore this question, a deep understanding of feedback principles in

higher education was necessary. The topics in this section are seminal models of

formative feedback that are relevant to higher education and the online delivery

platform.

What constitutes formative assessment is not clear. The literature is not

consistent on an actual term for formative, in the context of assessment and feedback

(Price, 2015; Wiliam, 2014), but it is understood as a process that helps improve student

knowledge (Price, 2015) and is often applied when the assessment is tied to the

instruction (Wiliam, 2014). Some cite the primary characteristic as a tool delivered

midstream, for students to become agents of their learning at this juncture (Wiliam,

2014). Formative assessment is considered one of the most effective strategies to

promote student learning (Black & Wiliam, 1998), but in order for feedback to be

formative, it must be specific, meaning it references the work appraised, and general,
identifying broader principles that can apply to later works (Sadler, 1989). Formative

feedback has, therefore, become recognized as instructional, which is viewed as a

contrast to summative feedback, which is linked to an objective measurement, such as a

letter grade or rubric (Webb & Moallem, 2016).

Summative feedback is not merely a different type of feedback and is often

identified as inferior to formative assessments. Nicol and Macfarlane-Dick (2006) stated

that motivation and self-esteem are more likely to be ignited when the stakes are lower,

in a formative context, as contrasted to summative assessments, which are usually

associated with more standardized tasks, where information is limited to the success or

failure of the task (Nicol & Macfarlane-Dick, 2006; Price, 2015). Wiliam (2014) distinguished summative from formative, stating that summative feedback functions as

evidence of the assessment, while formative actually generates that evidence. Although

formative assessment is often associated with learning (Hattie & Timperley, 2007),

when the focus of the goal shifts to a standardized outcome, the assignment feedback

strategy may be misaligned, that is, aimed at the success or failure of achieving the

objective goal of a score or letter grade, rather than learning (Ninomiya, 2016; Price,

2015). Thus, intended formative assessments can potentially be distilled to summative

ones (Ninomiya, 2016). The ideal conception of formative assessments is to empower

students to become self-regulated learners (Carless, 2006).

Another distortion of formative feedback is the notion that feedback is actively

given by the teacher and received by the student. If feedback is truly formative, this

mindset is erroneous and distracts from the positive outcomes of formative assessments

(Charteris, 2015). The main goal of formative feedback is to promote student regulation,
giving students a more proactive, rather than reactive role (Nicol & Macfarlane-Dick,

2006). When students receive effective and individual feedback, they are more likely to

independently solve tasks, contributing to self-regulation (van Kol & Rietz, 2016). Black

and Wiliam (1998), who are considered the primary researchers in this area (Ninomiya, 2016), extended their ideas of formative assessments to include self-regulation, which

has overlapping elements (Black, Harrison, Lee, Marshall, & Wiliam, 2004).

Assessment for Learning (AfL) expands the idea of formative assessment to include the

practice of students and teachers, using peers to seek, reflect, and respond to the

assignment as a way of ongoing learning (Charteris, 2015). This recent attention to

student roles in feedback has significant implications for how instructors view their own

facilitation, and the possibility of their roles being re-negotiated (Renner, 2017).

Three seminal feedback models provide theory and principles that identify and

categorize feedback’s stages. The three models discussed will be Hattie and Timperley’s

(2007) Feedup, Feedback, Feedforward, Black and Wiliam’s (1998) Formative

Assessment Model, and Nicol and MacFarlane-Dick’s (2006) Seven Principles of Good

Feedback Practice. Despite their differences, there are overlapping principles, and these

models corresponded with Vygotsky’s (1978) Zone of Proximal Development, the

theory that underpinned this study. In the figure below, Nicol and Macfarlane-Dick’s

(2006) seventh principle is not included, as it is a process that follows assessment. These

models have common principles that can be divided according to three stages of the

feedback process. Below, the illustration shows the different stages, and how they

correspond with a student learning via the ZPD.


Figure 1. Feedback model progression relative to learning. The original figure aligned Hattie and Timperley's (2007) stages with Vygotsky's (1978) Zone of Proximal Development and with elements of Black and Wiliam's (1998) model and Nicol and Macfarlane-Dick's (2006) principles:

Feed up: the student's performance level and ability to work independently; clarify performance; facilitate self-assessment.

Feed back: the gap, where the student can achieve with scaffolding (the ZPD); deliver high quality feedback; encourage positive motivation and self-esteem.

Feed forward: the teacher's expectations, where the task is too difficult to achieve independently; encourage teacher and peer dialogue; provide opportunities to close the gap.

Hattie and Timperley’s Feed up, Feedback, Feed forward model. Feedback should

reduce the gap between a student’s current and desired understandings to enhance

learning (Hattie & Timperley, 2007). Hattie and Timperley’s (2007) Feed up,

Feedback, and Feed forward model focuses on questions: “Where am I going? How am

I going? Where to next?” as a means of reducing the gap between performance and the

student goal. These researchers found that when feedback is associated with student goal

setting, there is instruction and enhanced confidence in the task and self-regulation.

Hattie and Timperley’s (2007) model includes four levels for each feedback question:

The task level, the process level, the self-regulation, and self-level (Hattie & Timperley,

2007).
The “Where am I going?” stage is designed to convey information about the

goal. This part of feedback provides goal orientation, and when students understand these goals, they are more likely to attain them (Hattie & Timperley, 2007). Feedback must be

related to the desired goal. For example, on an academic essay, students may be asked to

write a definition essay, but if the feedback is directed only towards grammar and

mechanics, the feedback is misaligned.

Following goal orientation are elements that address student performance. “How

am I going?” involves delivering direct feedback relative to student performance (Hattie

& Timperley, 2007). Thus, the feedback at this point must address the performance

relative to the goal. “Where to next?” is a feed forward question that challenges

students and promotes self-regulation (Hattie & Timperley, 2007). When all elements of

feedback are successful, this stage has the greatest potential for learning and changing

behaviors. If these stages are requisite to feedback’s formative value, then instructors

need greater opportunities for crafting feedback (Kastberg, Lischka, & Hillman, 2016).

Although feedback models exist, it is not clear how faculty might implement the

recommended practices. To determine which questions from Hattie and Timperley’s

(2007) model were addressed in feedback, Ellis and Loughland (2017) conducted a

qualitative study and interviewed nine teacher education students in their third year of

study. Thematic analysis identified meta-categories and categories. One meta-category

was the nature of the feedback message, and the categories within this were (1) the provision of clear and specific direction, (2) the detail and quality of feedback, (3) the frequency of feedback, (4) the provision of focused feedback and feed forward, and (5) the consistency of feedback and inter-rater reliability of supervising teachers. The findings


indicated that more participants were provided with “Where am I going?” feedback

compared to the other formative questions. These findings suggest that instructor cognition does not always reach all three forms of feedback suggested through Hattie and Timperley’s model (Ellis & Loughland, 2017).

If instructors are aware of feedback models, they may be more likely to give

feedback that is formative. Using Hattie and Timperley’s (2007) model as a framework,

Kastberg et al. (2016) conducted a cross-case analysis that reviewed activities between

prospective teachers and students. The problem expressed in the study was that

prospective teachers were not aware of giving constructive feedback that might improve

learning (Kastberg et al., 2016). Building on Wiliam’s (2007) finding that comments-only feedback showed positive impacts on student learning of mathematical

concepts, Kastberg et al. (2016) used Hattie and Timperley’s four levels of feedback to

uncover themes in the prospective teacher feedback. The findings suggested that

mathematics teachers should be familiar with a model that addresses feedback levels,

and should craft feedback with attention to praising the student, identifying correct or

incorrect responses, and focusing on future success. While Ellis and Loughland (2017)

found that one question was addressed more than others, Kastberg et al. (2016) found

that there was little focus on the level of self-regulation. More research on how

instructors reflect on their own feedback is needed, and how instructors might

accomplish all that is required of effective feedback.

A third study that used Hattie and Timperley’s framework justified the need for a

fifth category for their feedback stages. Chen (2014) studied the receptivity of instructor

feedback in a peer-collaborative writing environment in two colleges. Teachers provided


feedback on student blogs; feedback was examined according to Hattie and Timperley’s

(2007) model and its four levels of feedback: task, process, self-regulation, and superficial praise. The researcher determined that a fifth category, mediative feedback, was needed to address the reciprocity between teacher and student. Thirty-four students

from two colleges were assigned e-pals and blogged and discussed current event topics.

The researcher collected qualitative and quantitative data to examine instructor

feedback, including student surveys, interviews with students and instructors, coded

feedback, and student perceptions of the five feedback types.

The findings were triangulated with the interview data. These findings

demonstrated that interactive web tools necessitate a fifth level in this model (Chen, 2014). In reviewing the findings of other studies where Hattie and

Timperley’s framework is integrated, both Ellis and Loughland (2017) and Kastberg et

al. (2016) found that not all of Hattie and Timperley’s stages were addressed. While

adding a fifth level has important theoretical implications, without implementation, it is

doubtful that it would contribute to the existing framework.

Black and Wiliam’s assessment and classroom learning. At its core, feedback

must address student success. The higher the quality of the feedback, the more likely the

learning gains (Black & Wiliam, 1998). Like Hattie and Timperley (2007), these

researchers discuss a gap between the learning goal and the student’s present state. The

idea that there is a gap is not new. In prior decades, Ramaprasad (1983) defined

feedback as the information an instructor provided about the gap between the actual

level and the referential level of a system parameter used to alter the gap (Black &

Wiliam, 1998). The similarities between Ramaprasad’s definition and Black and Wiliam’s model are evident in these
reflections: “Feedback to any pupil should be about the particular qualities of his or her

work, with advice on what he or she can do to improve, and should avoid comparisons

with other pupils” (p. 84). Although Black and Wiliam later expanded their concepts in

formative assessment, researchers continue to reference this seminal work (1998), which grounds the discussion in this section.

If feedback is formative, then instructors must fully grasp the most effective

method of implementation. Black and Wiliam (1998) asserted that unless this gap is

bridged, an instructor has not delivered meaningful feedback. In a study that examined

how formative feedback is implemented, Sardareh (2016) conducted focus groups

among three ESL teachers who had recently attended training in formative assessment in

Malaysia. The findings showed that instructors were unaware of how to implement

feedback to close this gap. For example, the participants were not aware that feedback

should match the student’s level of achievement, and they confused praise with target feedback (Sardareh, 2016).

The perspective of feedback’s role continues to evolve. Black and Wiliam’s

(1998) early findings identified the elements and roots of formative assessment and were later expanded into the concept of Assessment for Learning (AfL; Ninomiya, 2016). Named

in literature as early as 1986 and later popularized in the United States as well as the

United Kingdom, AfL includes the practice of students and teachers using peers to seek,

reflect, and respond to information through dialogue, demonstration, and observing as a

way of ongoing learning (Charteris, 2015). An inquiry approach was used to explore

teacher candidates using AfL and the role of dialogic feedback. Through reflective dialogue interviews, the researcher used student voice data to measure learner perceptions of their classroom learning, feedback, feed forward, and self-assessment

practices.

Nicol and Macfarlane-Dick’s model. Two components presented by Nicol and

Macfarlane-Dick (2006) are clarifying performance and facilitating self-assessment. This kind

of feedback would point students to the goals, criteria, or standards (Nicol &

Macfarlane-Dick, 2006). Students frequently perform poorly due to a misunderstanding

between their own conceptions of goals and the actual assessment criteria (Nicol &

Macfarlane-Dick, 2006). The more students understand the criteria, the greater

ownership they have and therefore, they are more capable of self-assessment (Nicol &

Macfarlane-Dick, 2006). Paulson Gjerde et al. (2017) stated that only when the gap is

successfully identified is the student able to either close or reduce it. Black and Wiliam

(1998) refer to this stage as providing recognition of the desired goal.

Feedback has evolved into a more student-centered endeavor. Nicol and

Macfarlane-Dick’s (2006) model is designed to operationalize student-centered learning.

While the last few decades have shown a shift in how education views learning (as a process that is constructed by the student), formative assessment practices have not kept

pace with this philosophy. In higher education, formative assessment and feedback are

largely considered instructor responsibilities, while feedback retains a transmission

process of messages that convey what is right or what is wrong (Nicol & Macfarlane-

Dick, 2006). This perspective is problematic because it puts formative assessment

exclusively in the hands of the instructor, who may assume that feedback messages are

understood, thereby disallowing a cognitive process for students. In addition, if the

transmission process rests completely upon the teacher, then teachers are cognitively and
mechanically overloaded with the task of delivering meaningful, motivating, and

appropriately complex feedback (Nicol & Macfarlane-Dick, 2006).

The seven principles of good feedback reflect a student-centered approach to

learning, and are designed to promote learner self-regulation. At the center of feedback,

Black and Wiliam (1998) address the gap. In similar fashion, Nicol and Macfarlane-

Dick’s (2006) model also promotes high quality feedback that emphasizes self-regulation. An additional feature of this model is that

feedback should encourage positive motivation and self-esteem (Nicol & Macfarlane-

Dick, 2006).

This student-centered focus on feedback resulted in self-regulation being

feedback’s ultimate goal. Understanding which formative assessment practices

contributed to self-regulation was the goal of Jing’s (2017) study. Focusing on Nicol and

Macfarlane-Dick’s (2006) principles, the qualitative exploratory study examined EFL

writing teacher feedback to investigate formative assessment practices that supported

learner self-regulation and student perceptions of the teacher practices. Jing (2017) cited

a gap in higher education formative feedback practices.

The researcher divided Nicol and Macfarlane-Dick’s (2006) seven principles

into two categories. The first three are teacher-student directed, meaning that there is a

reliance on teacher input for self-regulation to develop, while the fourth and fifth

principles are student-directed. The researcher noted the important connection between

Nicol and Macfarlane-Dick’s principles and the Vygotskian goals of self-regulation;

both require teacher assistance, with similar goals for students (Jing, 2017). The sources

of data were lesson observations, recorded teacher interviews, and student


questionnaires, which took place at the end of the course. Additional data came from focus groups held during and at the end of the semester. The results showed that instructors

implemented Nicol and Macfarlane-Dick’s (2006) five principles and that formative feedback facilitates self-regulation when both teachers and students are involved. This

study’s focus on formative feedback elements and its alignment to Vygotsky’s principles

underscores the need for more studies, as Jing (2017) recommended similar studies in

different contexts.

Thus, for feedback to be truly formative, there are specific conditions. It must be both specific to the assignment appraised and general, giving orientation to a broader concept

that is applicable to future work (Webb & Moallem, 2016). The feedback models

explored share the common components of the instructor providing an understanding of

the orientation of the assignment, an identification of and attention to a gap between

student performance and the criteria, and emphasis on learning and the application

towards future assignments. Understanding more about formative feedback models is

necessary for exploring how faculty reflect on feedback, and its contribution to their

teaching.

This section was designed to define feedback and the important distinctions

between summative and formative feedback as well as the philosophical evolutions

surrounding formative feedback. The current study was about how online undergraduate

full-time faculty feedback practices influence their reflective thinking or how this

reflective thinking informs instruction at a southwestern higher educational institution; for a meaningful analysis, an understanding of feedback’s formative value was

necessary. In addition, Vygotsky’s (1978) Zone of Proximal Development was presented


along with a synthesis of how three recognized models are aligned with this theory. The

ZPD theory formed the foundation for exploring how online undergraduate full-time

faculty practices influenced reflective thinking and how this reflective thinking informed

instruction and the feedback models were used to analyze faculty feedback in their

classes.

Feedback and social presence. As this study was about online university

faculty feedback reflective practices, it was important to closely examine social

presence. Social presence is established through faculty personality and psychological

dimensions, allowing the instructor to build social capital with the student (Juvova,

Chudy, Neumeister, Plischke, & Kvintova, 2015). Social presence is one of three

components of the Community of Inquiry, which is the outcome, or artifact, that represents collective knowledge (Peacock & Cowan, 2016). Recent literature reports findings on social presence as a means of measuring student learning outcomes in online

courses. It is believed that when instructors have an awareness of and operationalize

these factors, a community of learning can exist in online learning (Martinez & Barnhill,

2017). As online learning became a more accepted modality, the Community of Inquiry

has served as a response to the skepticism of online learning and its social presence (Garrison et al., 2000).

Social presence plays a role in the success of an online program. Social presence

is the degree to which a person is perceived as authentic in a mediated environment

(Gunawardena & Zittle, 1997). Greater attention is given to social presence to measure

online learning outcomes and dynamics compared to face-to-face environments. Thus,

measuring social presence within an asynchronous environment can contribute to a


greater understanding of how students and instructors connect (Frisby et al., 2013).

Social presence is operationalized through social cues, and the more frequent use of social

and verbal cues typically results in increased social presence (Atwater et al., 2017). With

online enrollment increasing, a better understanding of the strategies that contribute to

social presence has implications for online instruction (Frisby et al., 2013). Research

indicates that an instructor’s role in social presence is a significant factor in the success

of online education (Lowe-Madkins, 2016; Pattison, 2017).

On one hand, online learning is prized for its flexibility, freedom, and

convenience. Potentially, students can interact one-on-one with instructors more than in

a face-to-face setting (Mehta et al., 2017). However, since its inception, concerns about

online education focused on matching the effectiveness of face-to-face instruction. One

concern is that when student-teacher interpersonal connections are lacking, students may

lack self-discipline to learn on their own, and may not meet objectives (Frisby et al.,

2013).

Feedback’s effectiveness may be linked to the instructor’s level of social

presence. In a quantitative study that investigated the influence of written and audio

feedback, undergraduate students in a control group received written feedback while

undergraduates in an experimental group received audio feedback. These researchers

(Portolese Dias & Trumpy, 2014) hypothesized that audio feedback would be more

significant than written, in terms of student perceptions of learning effectiveness. Data

were in the form of student satisfaction surveys that were analyzed using a one-tailed t-

test. The findings showed that students reported greater instructor concern through audio

feedback, although responses to the other three of the four survey questions were not significant (Portolese Dias & Trumpy, 2014).

additional studies on audio feedback’s contribution to social presence, as these web tools

are advancing rapidly and those advances could reveal greater contributions in this area.

Feedback is considered a priority in improving the social presence of

asynchronous online instruction. In a qualitative case study, McGuire (2016) examined

five principles to improve student and teacher interaction and social presence in an

online classroom. These five practices were establishing a comfortable online

community, humanizing the course, prioritizing feedback, establishing clear

expectations, monitoring discussions, and creating relevancy (McGuire, 2016). Of these

five practices, the data showed that feedback played the most important role in

contributing to social presence. Instructors used strategies including videos to cover

common concepts, and prepared comments that allowed cutting and pasting to facilitate

timely feedback. The findings suggested that feedback is a priority as a means of

improving social presence in an asynchronous online space and the researcher

recommended that instructors employ audio, visual, and written text to connect with

students through their feedback (McGuire, 2016). If instructors are conscious of how

various technologies and methods might contribute to better feedback, then a greater

understanding of how teachers reflect while giving feedback or what methods they select

could reveal their rationale for their choices and if their rationale constitutes a best

practice.

Social presence and learning outcomes. This study explored the reflective

feedback practices of online faculty in higher education. In addition, the study examined

the methods of feedback instructors used, which ultimately affected their teaching
practices. There is widespread empirical evidence of the relationship between social

presence and learning outcomes (Frisby et al., 2013). When social presence can be

increased through audio or video feedback, students demonstrate better performance

(Frisby et al., 2013). Numerous studies address how social presence can serve as a way to measure classroom effectiveness, and as new technologies emerge, researchers seek to

confirm the effectiveness of these tools. Atwater et al. (2017) performed a qualitative

case study to examine student perceptions of synchronous video communication in

online courses. Instructors participated in Skype calls and provided both synchronous

video and text feedback for assignments.

The results showed that though some students experienced minor frustrations

with scheduling and technical concerns, students reported that the Skype calls motivated

them. In addition, participants reported that the calls contributed to relationship

development, improved confidence, and improved learning and understanding (Atwater

et al., 2017). Through video feedback, instructors were able to efficiently and effectively

establish their social presence through emotions, visual and vocal cues, and facial

expressions; these cues contributed to a greater understanding of feedback (Atwater et

al., 2017).

Social presence is related to learning outcomes in online learning. Hostetter and

Busch (2013) explored the relationship between social presence and student learning

outcomes. The research questions, explored in a mixed methods study, were about the pedagogical methods that affect student perceptions of social presence and the relationship between social presence and student outcomes. Both qualitative and quantitative methods

were used. Course discussions were examined through content analysis and social
presence was determined by the use of first names, expressions of feelings or humor,

explicit references to others’ messages, questions, compliments, or agreement. Student

learning was measured by CATs, or Classroom Assessment Techniques (Angelo, 1995), and student responses were evaluated on a rubric (Hostetter & Busch, 2013).

These researchers concluded that social presence may be a critical element to

successful online instruction and student performance increased correspondingly with

social presence. Both Hostetter and Busch’s (2013) and Atwater et al.’s (2017) studies

demonstrated important outcomes of social presence in an online environment. This

current study explored the perceived influence of feedback practices on teacher

reflection and subsequently, the influence of reflection on instructional strategy. If

faculty consider social presence as part of their feedback, this could inform their

instruction with meaningful implications.

Thus, as instructor social presence improves, so might student performance.

Frisby et al. (2013) found that when social presence was higher, students performed

better and communication improved. Using rhetorical and relational goals theory of

instruction, these researchers found that the medium of information has an impact on

how students learn. This is based on dual coding theory, which means that when

information is received through a single format, it is processed in an associative manner,

but when two formats are received, the processing is referential. Referential processing

is superior to associative (Frisby et al., 2013). Students reported that rhetorical and

relational goals were more likely to be met in classes with higher social presence, which

occurred when auditory and visual components provided the presence (Frisby et al.,

2013). Martinez and Barnhill (2017) also found that higher levels of learning occur
when social presence is greater. Thus, the positive effects of social presence, particularly

in online environments, are well documented, yet Frisby et al. (2013) noted that it is not

clear how these formatting differences specifically impact the student-instructor

relationship, or how social presence influences goals of students and instructors.

To summarize, social presence is widely studied as a way to gauge

personalization in the online classroom. The studies reviewed used qualitative or mixed

methods to determine various aspects of social presence, including student performance

and communication, learning outcomes, and student perceptions. Social presence relates

to instructor feedback, because well-formed feedback may include social presence cues

(Pattison, 2017). As instructors reflect on their feedback, they may be mindful of how

their own social presence influences the quality or perception of feedback. For example,

an instructor may find it challenging to provide corrective feedback without losing social

capital. This reflection may influence word choice, or it may prompt the instructor to

post additional resources in class. To fully explore how online undergraduate full-time

faculty feedback practices might influence reflective thinking or how this reflective

thinking informs instruction, understanding the affective nature of social presence was

necessary. This study expanded on studies in the field of social presence as it examined

how online undergraduate full-time faculty feedback practices influenced their reflective

thinking or how this reflective thinking informed instruction at a southwestern higher

educational institution.

Feedback methods. Instructors now have various methods of delivering

feedback in an asynchronous environment. Borup et al. (2015) noted that online

instructors provide feedback amidst the explosive growth of online learning. In order to
ameliorate the distance in online education, institutions have turned to audio and video

as methods of feedback. These methods are considered for their capacity to improve

feedback content, quality, and learning potential through formative feedback delivery.

These delivery methods have advantages from both instructor and student perspectives.

As the study is about qualitatively examining the methods that instructors used at a

southwestern university, a greater understanding of the findings in this area contributed

to the study and the value of instructor feedback.

Online education inherently has a greater psychological distance than face-to-

face classrooms. According to Wade (2016), this distance can result in lower completion

rates and disengaged students. Text-based feedback often limits the nuances that

increase communication; moreover, text-based feedback is labor-intensive (Borup et al.,

2015; Sims, 2016; Wade, 2016). These factors formed the purpose of Wade’s (2016)

investigation of instructor perceptions of video feedback in this qualitative inquiry study.

Using Social Cognitive Learning Theory (Bandura, 1977), the researcher examined

instructors’ personal reflections to describe and interpret feedback delivered through

video.

The video feedback used in this study was delivered through Screen-cast-o-matic, which

allowed instructors to provide a video of the student paper while providing auditory

feedback. Wade (2016) found that using video feedback, instructors can place greater

emphasis on key points in assignments, due to the allowance of greater complexity and

word count. These findings are consistent with recent studies that cite efficiency as an

advantage of audio and video feedback (Borup et al., 2015; McCarthy, 2015). Because
instructors value these efficiencies not for efficiencies alone, but to afford richer and

more complex feedback, the implications are significant.

A second advantage to video feedback was providing richer communication

nuances that are not afforded in text-only feedback. Instructors found video a softer way

to deliver necessary instruction and to guide improvements (Wade, 2016). Tunks (2012)

found that improved technology increased instructional presence. Several studies

connected social presence to greater personalization through video and audio feedback

(Frisby et al., 2013); however, among doctoral students, Rockinson-Szapkiw (2012)

found no difference in social presence between written and audio feedback. McCarthy

(2015) also found that despite the advantages of video feedback, there were instances

where students preferred text, such as when students resided in developing countries or

circumstances where technology was limited. Finally, Borup et al. (2015) found that

students preferred video over text for its conversational properties, but text over video

for its efficiencies and professionalism.

These competing views underscore the importance of more study in this area.

Additional studies would codify the specific advantages of audio and video feedback

over text. A third finding is that when instructors used asynchronous video feedback,

they were more likely to provide teaching supplements for their own courses (Wade,

2016). This finding relates to the current study, which contributed to the literature on

feedback methods by exploring the perceived influence of feedback practices on teacher

reflection and its subsequent influence of reflection on instructional strategies.

Instructors may select feedback methods based on their ability to produce more

complex feedback. Sims (2016) also investigated university instructors’ perceptions


associated with written, audio and video feedback in a phenomenological and discourse

analysis of interviews and feedback samples. The considerations were the educational

purpose of the feedback, the relationship factors between the instructor and students, and

the feedback delivery relative to efficiency of time and effort. Shades of difference

within these themes influenced the instructor’s choice of feedback method. Similar to

Wade’s (2016) findings, this study’s data confirmed that an advantage of audio and

video feedback is that with less effort, it afforded the instructor more complex feedback

with more word count.

Different delivery methods may facilitate the formative qualities of feedback.

Sims (2016) made distinctions between summative and formative feedback and included

the importance of the social dimensions of feedback. Whereas Wade (2016) found that

audio and video feedback allowed instructors to provide more guided instruction in a

more affective way, Sims (2016) expanded on this point, showing that narrated video

feedback is faster and therefore, less fatiguing. The research questions for this study

centered on instructor perceptions or attitudes towards audio and video feedback,

instructor perceived value of audio and video feedback, how feedback differs in these

different forms, and if instructor perceptions of these types of feedback are accurate.

Thus, instructor feedback methods may be influenced by the assignment. That is,

instructors were more likely to use audio or video feedback for a lengthier assignment,

such as an essay, but would use text feedback for shorter assignments. Some instructors

did not find the newer technologies more efficient, because extra steps were required. A

key finding is that audio feedback proved more useful in terms of promoting student

cognition and further research regarding audio and video technologies and the formative
feedback effects of these choices is suggested (Sims, 2016). Due to its many similarities,

these findings demonstrated relevance to the current study on exploring instructor

feedback: Both studies focused on instructor feedback and instructor perspectives and

both studies explored different methods of feedback. The current study expanded in this

area by adding the reflective component, to determine if instructor reflections influence

feedback practices.

Studies on feedback have the common element of learning theories as their

theoretical underpinnings. Wade (2016) undergirded her study in constructionism, which

is the lens by which learners construct personal meaning as they engage in their world.

Sims’ (2016) study was rooted in Natural Media Theory as the premise for interpreting

and analyzing instructor feedback. This theory is an extension of Darwin’s theory of

evolutionary biological adaptations that mimic the nuances of communication in a face-to-face model (Sims, 2016). A third study of interest in instructor feedback methods is
to- face model (Sims, 2016). A third study of interest in instructor feedback methods is

undergirded by Vygotsky’s social-constructivist theory, which is also the theory of the

current study.

Student preferences may also affect instructor feedback practices. Gredler (2016)

used Vygotsky’s social-constructivist theory in a mixed methods study on postsecondary

student preferences for audio, written, and video feedback and hypothesized that student

preferences should be the criterion that determines effective instructor feedback.

Qualitative data in the form of interviews were analyzed. The findings showed that

students preferred proximal, detailed, and supportive feedback. Gredler’s (2016)

population consisted of mostly native English-speaking graduate students who had

experience in online learning. Online education also attracts non-traditional students, including those who delay enrollment years after high school, full-time employees, parents, or those who may not have a high school

diploma (Vanslambrouck et al., 2016). It is important to have an understanding of online

student preferences from a more diverse population. The current study examined a

diverse adult university population, so Gredler’s findings formed a relevant foundation.

However, the current study, unlike Gredler’s, examined feedback from the instructor’s

perspective.

Emerging technologies present a spectrum of opportunities for online instructors,

yet, as more studies uncover the potential available through audio and video feedback,

some rough edges remain and practitioners continue to rely on text feedback for many

reasons. Written feedback allows numerous options, including marginal comments or

directly embedded feedback through a track changes feature. Electronic sticky notes,

different colors of text, and highlighting tools all offer understanding beyond mere

words. Borup et al.’s (2015) study found that for some purposes, written feedback was

preferred, as students found text efficient and organized. In this study, some students

also complained of the inconveniences of opening a video link, which was more

complicated than viewing text comments. However, Sims (2016) found that students

perceived text as overwhelming and that they did not always read written feedback.

Additional research in audio and video technologies was recommended (Sims, 2016).

This study addressed that recommendation through exploring how instructors select

feedback methods.

Thus, there are contradictions in the literature on feedback methods and their

contributions for online feedback. Borup et al. (2015) found positive attributes for both
audio and video feedback, but also found that students prefer written feedback. These

findings that demonstrated the convenience of written text must be juxtaposed with the

weight of the affective gains of personalizing feedback through audio and video

technologies. Even with the known advantages of audio and video feedback, online

instructors, though comfortable with their learning management systems, are not always

eager to try new web tools (Harrison et al., 2017).

Online education may retain the perception of being less personal than a face-to-

face venue. If this is true, then it is ironic that technological advances through audio and

video feedback applications also serve to humanize this modality (Dixon, 2015). These

advantages may qualify video and audio feedback as pedagogical, rather than

technological advances (Dixon, 2015). This current study built on the findings of the

studies in this section, as the researcher examined the various methods of feedback

among faculty in higher education, but in addition, considered how or if these methods

influenced reflective practices during feedback.

Feedback and learner populations. The literature that follows is a review of

feedback according to learner populations. Learner populations may influence faculty

feedback reflections and how these reflections inform their instruction. As Wade (2016)

noted, literature on feedback is fragmented. One area of fragmentation is that instructors

may select feedback methods based on the population. Online education was originally

targeted towards adult non-traditional learners due to its flexibility (Vanslambrouck et

al., 2016). At the same time, a student’s level of digital skills may depend on their age

(Vanslambrouck et al., 2016). Vanslambrouck et al. (2016) focused on adult learners in

a blended environment in a qualitative content analysis study. Examining their motives


to enroll and persist, these researchers found that adult learner motives include staying

salient in their current careers, gaining the possibility of an alternative job, or the desire

for a new job.

Understanding what these adult learners appreciated about the face-to-face

segments of the course has implications for online instructors concerning their choices

of video or audio methods for feedback delivery. Vanslambrouck et al. (2016) found that

while adult participants stated they valued face-to-face interactions with the instructor,

some considered them a waste of time. The study found that time constraints created a

greater appreciation for the online components. If adult learners value efficiency, even if

it means losing personalization, these are broader indicators for selecting appropriate

feedback methods.

Student population may determine the instructor’s choice for feedback methods.

Three studies were about different populations and feedback deliveries: Borup et al.

(2015) examined instructor feedback to teacher candidate student populations;

McCarthy (2015) studied first year college students; Rockinson-Szapkiw (2012) studied

audio feedback with doctoral learners. Examining these studies against the backdrop of

the population may lend insight into instructor methods for choosing technologies. For

example, if non-traditional adult learners demonstrate frustration with technology, audio

or video feedback versus written may compound that frustration.

Audio or video feedback may be useful in contributing to classroom community.

Rockinson-Szapkiw (2012) qualitatively examined how combined audio and text feedback compared to written feedback alone, relative to the sense of community and learning, in a population of 125 doctoral learners. The problem that led to this study was that the online environment was devoid of verbal cues and paralinguistic nuances that develop

social presence (Gunawardena & Zittle, 1997; Martinez & Barnhill, 2017). Using

Garrison et al.’s (2000) Community of Inquiry (CoI) as a framework, its three

constructs (social presence, cognitive presence, and teaching presence) formed the

backdrop to determine the extent of the learner’s sense of community.

For this study, the researchers examined two different groups of doctoral-level

students. One group received text feedback and the other received audio feedback in

addition to comments through Microsoft Word with its track changes feature. Both

groups had similar word counts for comments, but there was sufficient evidence that the

audio feedback provided a positive enhancement to online doctoral courses, because it

allowed the instructor’s paralinguistic cues and non-verbal emotions to come through.

They concluded that because audio feedback increases instructor presence, learning

outcomes may be stronger. The caution is that this is a particular population and doctoral

students could possess more persistence than their undergraduate counterparts

(Rockinson-Szapkiw, 2012).

Although instructors may select feedback methods depending on student

preferences, those preferences vary. McCarthy (2015) examined written, audio and

video feedback as summative strategies among first year students in Australia.

Summative assessment is considered the assessment of learning, compared to formative

feedback, which can be viewed as the assessment for learning (McCarthy, 2015).

Examples of summative assessments include rubrics and letter grades; summative feedback comments justify those objective measures. McCarthy’s (2015) stated problem for the study was that this kind of feedback is largely misunderstood in a written format.
The researcher’s goals were to explore the three feedback models. The models

were examined in terms of which were most engaging, easiest to comprehend, easiest to

access, and easiest to produce and distribute, and those which would reduce the

workload for instructors (McCarthy, 2015). Student participants used Likert scales to

assess the feedback models and to rank which kind of summative feedback they found

most beneficial. The majority of student participants preferred video feedback, followed

by written feedback. Audio feedback was the least preferred (McCarthy, 2015). These

findings suggest that although the personalized aspect of audio feedback was

appreciated, students preferred it less because it lacked a visual component. Because

McCarthy’s population was first-year university students, the preferences for visual

feedback may be related to the age group. According to Brumberger (2011), digital

natives have greater exposure to visual technologies, which relates to their preferred

learning style.

Student preferences for feedback may be related to feedback’s accessibility.

Exploring a different population, Borup et al. (2015) examined student and instructor

perceptions comparing text and asynchronous video feedback in blended courses. With

Daft and Lengel’s (1986) Media Richness Theory for support, these researchers posited

that information with depth, such as complex feedback, is best delivered face-to-face.

Because it lacks visual cues, written communication is lowest on the scale of

communication richness. Asynchronous video feedback lacks the immediacy of face-to-face

feedback, but based on the Media Richness Theory, it falls higher on the continuum than

written feedback (Rice, 1992).


The research questions explored in this complementary mixed-method design were aimed at blended learning students’ preferences regarding the levels and quality of text feedback compared to asynchronous video feedback, and the advantages and disadvantages of both forms of feedback. The findings indicated that

students preferred text feedback over video, due to its accessibility and the fact that

listening to video feedback tended to distract other students. Many reported the

additional disadvantage of not being able to view the video feedback on their phones,

which made the text options more desirable. Thus, even among traditional college-aged

students, as in McCarthy’s (2015) findings, the visual element was a greater priority

than the personalization of video feedback. The current study was about exploring the

methods that online undergraduate full-time faculty use when delivering feedback.

Those choices could be influenced by the instructors themselves, such as their comfort

levels with technologies or their level of motivation to try a new technology. Instructors

may also choose or avoid a method based on the efficiencies. In addition, student factors

may influence instructor choices.

As shown in the studies examined, traditional college students tend to value text

feedback over video or audio (Borup et al., 2015; McCarthy, 2015) due to the visual

element and the efficiency. However, Rockinson-Szapkiw’s study (2012) revealed that

doctoral learners did value technology that increased the instructor presence. It should

be noted that technology advances swiftly, and even current studies cannot keep pace

with improvements. For example, Borup et al.’s (2015) student population cited video

feedback as an inconvenience, since they could only access it from their computers;
however, to date, there are applications that would allow phone viewing, which would

shape these outcomes.

Reflection. This study examined how online undergraduate full-time faculty

feedback practices influenced their reflective thinking or how this reflective thinking

informed instruction at a southwestern higher educational institution. Thus, a deep

understanding of reflection was necessary as a foundation for the study. In addition to

feedback, reflection is a valued criterion of professional standards and objectives for

numerous disciplines (Ryan & Ryan, 2013; Winchester & Winchester, 2014). Because

reflection is recognized as a critical concept in education (Farrell, 2004; Rogers, 2001;

Rolfe, 2014) it would seem to be regularly practiced and understood. However, there is a

lack of clarity in what constitutes reflection (Nguyen et al., 2014; Rogers, 2001) and a

paucity of literature that defines a systematic developmental approach to reflection

(Nguyen et al., 2014; Ryan & Ryan, 2013).

Literature lacks a common understanding of reflection. This inconsistency in its

definition has served as an impediment to its full potential and development for learning

and teaching (Nguyen et al., 2014). What is known is that reflection is a complex and

rigorous process (Rogers, 2001) that requires active, careful, and persistent consideration

that leads to further conclusions (Wilson, 2013). When reflection happens, it is an

intellectual stance that takes a daily practice to a state where change can take place both

in both a personal and a broader context (Ryan, 2013), with critical reflection resulting in self-

appraisal (Le Cornu, 2009). Within this broad framework, numerous models, layers,

and constructs have been created to identify and practice reflection. A greater

understanding of reflection and how its practices can be operationalized is critical since
it is regarded as a powerful mechanism for developing thinking and change (Dimova &

Kamarska, 2015).

An element of discomfort is a necessary component of reflection. Schon (1983)

described reflection as a process to address uncertainty, instability, uniqueness, and

conflict, while Dewey included hesitation, perplexity, and mental difficulty as elements

that conceptualize reflection (Camacho Rico et al., 2012). This discomfort stems from

the act of wrestling the perplexity against a set of moral or ethical issues, which takes

place at the highest level of reflection (Camacho Rico et al., 2012). The discomfort,

puzzle, or perplexities inherent in reflection align with Vygotsky’s (1978) ZPD.

According to Wass and Golding (2014), for learning to occur, the instructor must

“problematize” (p. 677) for the student. That is, the assigned task must be appropriately

challenging. Thus, for this current study which used the ZPD as its theoretical

framework, this same aspect of reflection, where the unresolved begs to be resolved,

aligned with this theory of learning.

Reflection and teaching. Researchers in current literature focus on the praxis of

reflection and its contribution to improved teaching, and therefore, student outcomes.

The current study, through addressing faculty reflection, contributed to a greater

understanding of feedback’s formative role and ultimately, teaching influences. For

students, feedback on written work can be viewed as a vehicle for reflection (Quinton &

Smallbone, 2010). Camacho Rico et al. (2012) explored how the reflection process

enhanced the first-year teaching experience. Data were classroom observations,

reflective journals, lesson plans, and interviews and these were analyzed according to

Schon’s (1983) reflection-in-action and reflection-on-action theory. Reflection provided


the participants the opportunity to analyze how and why they behaved as they did and

how new changes or methods might affect teaching. While reflecting, these teachers

adapted their actions during teaching. These researchers found that reflective practices

resulted in solving difficulties, restructuring performances and viewing problems

differently, and that reflection has a useful place for first year teachers.

Reflection is regarded as a continual process of effective teaching. Without

reflection, there is a danger of teachers going on “auto-pilot” in their classes during

critical decisions (Wilson, 2013). This phenomenological study explored how teachers

reflected and the impact of that reflection on future instruction, as a deeper understanding of

instructor reflective thoughts can provide insight into student achievement. This study

included an analysis of teacher reflective journaling of 12 middle school teachers. The

four themes that emerged were student learning, relationships, curricular planning and

lesson delivery. Like Camacho Rico et al.’s (2012) study, this was qualitative, but

differed because of its phenomenological design which allowed the researcher to

observe, record, and interpret these experiences (Wilson, 2013). The discussion of

Dewey’s philosophy identified the stages of reflection and Wilson (2013) noted that the

participants moved through each stage. The qualitative design did allow the researcher

to identify emerging themes; however, in the findings, the researcher claimed that the

reflective journals impacted student learning (Wilson, 2013), but did not show data to

confirm the learning.

Models of reflection. Most reflection models in education are rooted in Dewey’s

work. Schon’s (1983) The Reflective Practitioner built on Dewey’s concepts of

reflective thinking (Rogers, 2001; Wilson, 2013). With Dewey’s work as the foundation
of reflective thinking, Schon’s extension focused more closely on reflection in teaching

(Jaeger, 2013; Wilson, 2013). Schon defined a reflective practitioner as one who plans

before taking action, reviews past events to consider alternative choices, but also

reconsiders a course of action midstream. These tenets form his theory of reflection- in -

action and reflection- on -action (Schon, 1983). Reflection- in -action happens during an

action without the action being interrupted (Camacho Rico et al., 2012; Rogers, 2001)

and connects the uncertainties of teaching with the demands of a problem solving

environment (Schon, 1983).

Reflection is typically viewed as taking place following an event. What

differentiates Schon’s model from others is that the reflection-in-action component

happens during teaching, but many practitioners are pre-occupied with the latter stage,

reflection-on-action (Rolfe, 2014). Jaeger (2013) concluded that reflection-in-action

practices have merit, yet the tangible outcomes of reflection-in-action remain largely

untested. In a teaching situation, reflection-in-action occurs when such knowledge is

used to shift courses and to re-frame an event (Jaeger, 2013; Rogers, 2001; Rolfe, 2014).

In addition, even when reflective behavior is emphasized to teacher candidates, these practices are not found to transfer to teachers in their own classrooms.

Reflection models have evolved in recent decades. Like Schon’s framework,

some reflection models provide clear steps or sequential stages, such as Ryan and

Ryan’s (2013) Teaching and Assessing Reflective Learning (TARL) model and Rodgers

(2002) six phases of spontaneous interpretation which include: naming the problem,

generating explanations, ramifying the explanations into hypotheses, and experimenting

and testing the hypotheses (Wilson, 2013). Yet other models do not have a sequential
order, but are characterized by recursive properties and processes. Dewey’s five states,

for example, should not be viewed as tasks to accomplish, but rather as an interactive

process (Wilson, 2013). Farrell’s model of reflective practice (2004) forms a loop,

emphasizing the process and is focused on the questions: What am I doing in the

classroom? Why am I doing this? What is the result? Will I change anything based on

the previous information?

Despite their differences, these reflective models each have value according to

their purpose. For this study, the selected model used to analyze data was Ryan and

Ryan’s (2013) 4R model of reflective thinking. This model addressed the extent of

teacher reflection that occurred during feedback for online instructors. The model

includes reporting and responding, relating, reasoning, and reconstructing. The critical

reconstructing stage prompts instructors to look towards future practices and is intended

to lead to deep examination (Ryan & Ryan, 2013). This hierarchy progresses from the

most fundamental reflective stage to the more complex (Bennett et al., 2016).

The current study used Ryan and Ryan’s (2013) 4R model to develop the questionnaire for faculty feedback reflection. Bennett et al. (2016) used Ryan and Ryan’s (2013) 4R model to explore how students and teacher candidates engaged in

critical self-monitoring as they attended their learning experiences. The questions were

structured so that researchers could analyze responses from students, academic team

members, community members, and stakeholders (Bennett et al., 2016). Stories, films,

audio files, and journals allowed interviewers to gauge expectations and explore teacher

candidate fears about working with the Aboriginal people. Data were coded into

categories and themed to create the final data set and showed that the participants moved
through three stages of reflection. The results of these reflections served to inform the

instructional approach for the Aboriginal people. The current study also used

Ryan and Ryan’s (2013) 4R model as a framework for exploring how online

undergraduate full-time faculty feedback practices influenced their reflective thinking or

how this reflective thinking informed instruction at a southwestern higher educational

institution.

Outcomes of reflection. Reflection is regarded as a conscious process. Dewey

believed that reflection should have a conscious aim (Rogers, 2001) and that this

conscious process should then show tangible results. Schon (1983) stated that proper

reflection can result in a new understanding of situations of uncertainty, professional

knowledge, or broader outcomes, such as a new theory or framework. Reconstructing

future actions that enable change in society is an outcome at the highest level of

reflective processes (Bennett et al., 2016). When a learner is reflective, s/he will

demonstrate self-knowledge that is transferable in order to create a new framework

(Quinton & Smallbone, 2010).

In addition to being a conscious act, reflection should also promote change.

Effective reflection can result in changes in behavior (Rogers, 2001; Schon, 1983).

Ultimately, learning should be an outcome of reflection. Although the identifying

features and measures of learning vary, certainly the antecedents to learning are

outcomes of reflection; these include freedom of action, capacity for change, and a

greater awareness of context (Langer, 2000). Ryan and Ryan’s (2013) reconstructing

segment of the reflective scale prompts the learner to codify reflection’s outcomes with

questions such as “What might work and why? Are my ideas supported by theory? Can I
make changes to benefit others?” (Ryan & Ryan, 2013). Perhaps by being mindful about

reflection’s outcomes, practitioners can know the result of its conscious aim. In this

study, those reflective practices and how they influenced learning were explored.

Methodology. On the topic of feedback are numerous studies with a variety of

designs, including quantitative, qualitative, and mixed methods. Because overlapping

elements were measured, Borup et al. (2015) conducted a mixed methods study, with quantitative

information collected from surveys. Atwater et al. (2017) conducted a qualitative case

study to explore how students perceived advantages of video feedback, using video

recordings for data. Finally, Wade’s (2016) study was qualitative, as the study’s purpose

was to examine asynchronous video feedback in higher educational classrooms. The

data collected were self-reported evidence on feedback from participants. However, most

studies on feedback used qualitative methodology (Atwater et al., 2017; Charteris, 2015;

Ellis & Loughland, 2017; Jing, 2017; Kastberg et al., 2016; McGuire, 2016). As well,

nearly all reflection studies are qualitative and focused on establishing a framework for

reflection, including Camacho Rico et al.’s (2012) and Wilson’s (2013) qualitative

studies. Therefore, a qualitative methodology approach was most appropriate for this

study on feedback and reflection.

Instrumentation. Prior empirical research on feedback and reflection used

multiple methods to address the research questions. However, most studies reviewed on

feedback and reflection relied on open-ended responses or self-reported data, which is

consistent with a qualitative approach. A qualitative approach would yield richer data in

the themes of reflection and feedback. This study employed three forms of data: faculty

questionnaires, classroom feedback, and focus groups.


Questionnaires and interviews are common data sources in qualitative case

studies. To avoid bias, they must be constructed carefully (Yin, 2014). Ellis and

Loughland (2017) used questionnaires to derive thematic categories to analyze the data

of student perceptions on formative feedback. Borup et al. (2015) used interviews with

instructors to address the perceived advantages and disadvantages of audio and video

feedback. However, an interview has greater potential for bias. Yin (2014) discusses

how participants may echo one another’s responses. A written questionnaire, based on

Ryan and Ryan’s (2013) reflection model, allowed faculty a chance to respond individually, yielding more individual data. For this current study, the researcher developed

careful questions and responses were coded and themed to build a case for the study.

Some studies from this literature review used classroom data. For example,

Portolese Dias and Trumpy (2014) reviewed online instructor feedback to compare

audio’s effectiveness with written text. This form of data can be useful in addressing a

“what” question in research (Yin, 2014). In this study, the question “What kind of

feedback methods do faculty use?” is best answered through reviewing classroom data.

First, this will provide abundant current data, since multiple assignments and courses

were examined. Next, the classroom data served to confirm the responses from the

faculty questionnaire. This limitation of faculty commenting on their own feedback was

confirmed by Sims (2016) because faculty provided their own feedback, which may

limit the data’s authenticity. Hattie and Timperley’s (2007) framework of three

questions (Where am I going? How am I going?) and the four levels for each feedback

question: Task level, Process level, Self-regulation level, Self- level (Hattie &

Timperley, 2007), guided the feedback analysis.
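To illustrate how such a framework can structure an analysis, the following minimal Python sketch tallies feedback remarks that have already been hand-coded by level. The level labels come from Hattie and Timperley (2007); the sample remarks, variable names, and counting function are hypothetical illustrations, not the instrument used in this study.

    from collections import Counter

    # The four feedback levels from Hattie and Timperley (2007).
    LEVELS = ["task", "process", "self-regulation", "self"]

    # Hypothetical hand-coded remarks: (remark text, assigned level).
    coded_remarks = [
        ("Your thesis statement needs a clear claim.", "task"),
        ("Try outlining before drafting your next essay.", "process"),
        ("How might you check your own citations next time?", "self-regulation"),
        ("Great effort this week!", "self"),
    ]

    def tally_levels(remarks):
        """Count how many remarks fall at each feedback level."""
        counts = Counter(level for _, level in remarks)
        # Report every level, including those with zero remarks.
        return {level: counts.get(level, 0) for level in LEVELS}

    print(tally_levels(coded_remarks))

A tally of this kind parallels the template used in this study to distribute participants’ feedback across the four levels.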


Focus groups, which are often conducted in case studies, are a way to obtain the viewpoints of a group (Yin, 2014). For example, Ellis and Loughland (2017)

used focus groups to gather perceptions of a broad group as Hattie and Timperley’s

(2007) feedback questions were addressed. This study used focus groups to address the

research question about how online undergraduate full-time faculty feedback practices

influenced their reflective thinking or how this reflective thinking informed instruction

at a southwestern higher educational institution. The group dynamics that are associated

with focus groups prompted discussion of the data among participants.

Summary

Chapter 2 began with identifying Vygotsky’s Zone of Proximal Development as

the theory that undergirded the current study. The ZPD posits that instructors consider the learner’s actual developmental level, then strategize either to support the learner through scaffolding or to challenge them in this proximal zone, with the goal of the student reaching their potential developmental level. This theory is relevant to understanding

how feedback practices might inform learning, as instructors may reflect on what

strategies will operationalize learning. The next section provided a historical perspective

of online education. To fully address the research questions about online feedback, a

broad perspective of the history and challenges of online education is necessary.

Next, the chapter provided a broad and scholarly review of the current literature related to the study’s phenomena: how online faculty feedback practices influence reflective thinking at a southwestern higher educational institution.

The chapter then reviewed seminal literature on feedback for a foundation on the

principles that constitute effective feedback. The critical differences between summative
and formative feedback were discussed, as formative feedback is linked to student

outcomes (Hattie & Timperley, 2007; Mirzaee & Hasrati, 2014; Nicol & Macfarlane-

Dick, 2006) and recognized models were illustrated, reflecting their commonalities and

their alignment with the theory. To address the research questions appropriately, this

fundamental understanding of the interaction of feedback between the learner and

instructor was needed.

This chapter also included a discussion of social presence. For an online

instructor, the theory of social presence may factor into their choices for feedback

delivery methods. Their tone in writing and the social presence cues may be part of their

reflections on their feedback, which could inform teaching practices. Thus, this

foundation of social presence was critical to understanding the phenomena of feedback

and reflective practices.

A discussion and review of various feedback methods was also included in

Chapter 2. One research question in the study was: “What methods of feedback do

faculty use?” The reviews of the literature on this topic aimed to address that

phenomenon. Although there was an abundance of current studies on feedback, the

literature suggested that more variations in this area needed to be tested. Sims (2016)

acknowledged this need, stating that studies on different feedback formats within the

same course would reveal greater insights into instructor methods and purposes. Thus,

future research is encouraged in determining the different methods of feedback that

faculty use.

This chapter’s last theme was reflection, which included several sub-topics to

address the element of reflective thinking. Theories and studies selected were either
current, such as Wilson’s (2013), or older seminal literature from theorists and researchers such as Schön. This section acknowledged the pioneers of

reflection theory, but also provided a rationale for choosing Ryan and Ryan’s (2013)

model that supported this study as the foundation for examining faculty reflective

practices. The literature was selected with the study’s phenomenon in mind: teacher

reflective practices. Wilson (2013) concluded her study by stating that reflective practice education is lacking and suggested that studies could also focus on teachers reflecting.

This study flipped the paradigm, making the teacher the learner, as they examined their

own reflective practices.

Both feedback and reflection have histories of deep relevance in education. With

online education, these critical characteristics become more pronounced as institutions

aim to maximize learning and satisfaction through this modality. To fully meet online student needs in higher education, a deeper exploration of how faculty reflected on their own feedback, how different feedback methods were selected, and whether reflection informed teaching contributed to the body of knowledge on feedback and reflection in online learning. Such findings would benefit stakeholders of the institution and could potentially generalize to higher education.

The findings on feedback and reflection showed that, given the magnitude of feedback (its learning potential, its ability to increase social presence, and the ways technology can improve its delivery and efficiency), higher educational institutions can better accommodate the conditions for instructors to focus on and improve feedback methods. Moreover, the literature reflects the student perspective as well. A movement towards greater student involvement in feedback is evident in studies that recommend that students should have a more active role in the feedback process, such as providing structured written responses to instructor feedback (Denton & Rowe, 2015; Geitz, Brinke, & Kirschner, 2015). These are recommended practices to further increase self-regulation.
Chapter 3: Methodology

Introduction

The purpose of this qualitative exploratory single case study was to examine how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategy at a southwestern higher educational institution. In this chapter, the selected methodology

that best addressed the research questions will be discussed. This chapter will include: a

statement of the problem, the research questions, selected research methodology,

research design, selected population and sample, instruments, validity and reliability of

the data, the data collection process, management, analysis procedures, ethical

considerations, limitations, and delimitations.

The data for this study came from responses from the faculty questionnaire, from

reviewing faculty feedback, and from focus group findings. These faculty member

participants teach undergraduate courses at a higher educational institution in the

southwestern United States. This was a qualitative exploratory single case study that

included data gathered through reviewing feedback from the participating faculty’s

online courses, questionnaire responses, and identified themes from a focus group. The

questionnaire was prepared by the researcher, based on Ryan and Ryan’s (2013)

reflection model and allowed faculty to reflect in the course of their regular daily

feedback processes. The data from the faculty sample population were analyzed based

on Hattie and Timperley’s (2007) framework and reviewed to determine the methods of

feedback given. A focus group was used to explore how these reflections influenced

instructor practices or methods. Emerging and identifiable themes provided data that
addressed this research question. This helped determine to what extent faculty

reflections enhanced or solidified their teaching strategies.

Each type of data was necessary to address the research questions of this

qualitative case study that addressed the perceived influence of feedback practices on

teacher reflection (and subsequently, the influence of reflection on instructional strategy)

at a southwestern higher education institution. The research questions deeply explored

the intersections of feedback and reflection in an online environment. In a largely

faceless environment, online instructors seek out new methods of delivering feedback

(Borup et al., 2015). Understanding the methods selected and how reflection influenced

feedback provided valuable information for faculty and administration in higher

education.

Statement of the Problem

It was not known how online undergraduate full-time faculty perceived the

influence of their instructional feedback practices on their reflective thinking, and hence,

their instructional strategy at a southwestern higher education institution. In the online

classroom, feedback is a critical component to instruction and student communication

(Borup et al., 2015; Pattison, 2017; Quinton & Smallbone, 2010). Additionally,

feedback can be a way to increase instructor presence (Ali, 2016; Sims, 2016). A greater

understanding of how these processes intersect can create a greater awareness of

teaching. Reflection is a recognized teaching practice and feedback itself can be a

method or conduit for reflecting (Quinton & Smallbone, 2010).

The participants for this study were online undergraduate full-time faculty who

teach exclusively in this modality. The university selected for this study has a physical campus serving 15,500 students and an online enrollment of 60,000. Attention to the online

population and a closer examination of faculty practices involving both feedback and

reflection can potentially improve student satisfaction and learning outcomes. If the

factors that influence feedback reflection and how that reflection informs teaching are

identified and validated, focusing on reflection or sharing the outcomes of particular

feedback methods could improve instruction and therefore, student learning outcomes.

For this study, the participants were full-time online undergraduate faculty who meet and collaborate at a campus facility. The institution where this study takes place

also employs adjunct instructors, but the online undergraduate students are assigned full-

time faculty for first-year sequence courses. This model is unusual, since higher

education institutions more often employ adjunct faculty, particularly for online courses,

which can be facilitated remotely (Schieffer, 2016). Online undergraduate full-time faculty teach 40 hours per week, year-round, and work at campus facilities three to four

days a week, with the option of working remotely one to two days per week.

At the university selected for this study, the online population exceeds the

campus face-to-face population. Yet, there are more studies on the effectiveness of face-

to-face environments than there are studies that document effective practices in online

courses (Forte et al., 2016). Two highly studied forces, feedback and reflection, were examined in a new way in the online environment to provide data that may shape

administrative decisions to potentially improve instruction for this population and

extended stakeholders. Online faculty desire to continue to grow in their instruction

practices (DeCosta, Bergquist, Holbeck, & Greenberger, 2015). A greater

consciousness of how they reflect on their feedback could prompt improved practices.
Seeing connections between their reflections and practices may prompt greater attention

to reflection through professional development. As higher education institutions aim to

improve the online experience for learners, understanding which methods might prompt

more attention to faculty reflection could influence practices. For example, if faculty find that video or audio feedback provides richer feedback and prompts a higher level of reflection towards instruction, then institutions may make decisions to extend these practices to more faculty. In addition, given the vital role that feedback plays in online learning, giving instructors more time to prepare feedback or to explore new web tools may contribute to positive student outcomes.

Research Questions

The research questions posed in this study were derived from teaching practices that are well-documented in face-to-face teaching; however, there are fewer studies that document these practices in asynchronous environments (Forte et al., 2016). Vygotsky’s

Zone of Proximal Development is a learning theory that undergirds the study. Learning

takes place when students move from their zone of independence into the ZPD, where the task is too challenging to accomplish on their own but can be achieved through scaffolding (Roberson, 2017). In the absence of in-person teaching, feedback stands in for formative instruction for online instructors (Quinton & Smallbone, 2010; Sims, 2016; Webb & Moallem, 2016). It is, therefore, important to understand the thought

process of online faculty as they choose feedback delivery methods and as they

construct their feedback. Their reflections could point to future improved practices to

enhance student learning. Like face-to-face instruction, no two online faculty members
grade the same, but all could benefit from a greater understanding of how they conduct

their own feedback, what methods are most effective, and to what extent their

instruction is influenced by that feedback.

This study provided a greater understanding of how instructor feedback

reflections influence their teaching. Factors that determined online faculty feedback

practices could include addressing the efficiencies necessary for grading volume or

moving students to their Zone of Proximal Development (Vygotsky, 1978) and the

specific resources or remarks given that are designed to achieve that. Understanding

these reflective practices that take place during feedback can provide rich insights into

how faculty might approach feedback and implications for professional development or

individual growth. The primary research question for this study was derived from the

problem statement:

It was not known how online undergraduate full-time faculty perceived the

influence of their instructional feedback practices on their reflective thinking, and hence,

their instructional strategy at a southwestern higher educational institution.

RQ1: How do online undergraduate full-time faculty perceive the influence of their

instructional feedback practices on their own reflective thinking?

RQ2: What methods of feedback do online undergraduate full-time faculty use at a

southwestern higher education institution?

RQ3: How do online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches?

To address these research questions, there were three different data collection

approaches. The first form of data collection was a questionnaire, prepared by the
researcher. This 12-item instrument addressed all three research questions, with four questions aligned to each research question. The prepared

questionnaire was guided by Ryan and Ryan’s 4Rs model (2013). The purpose of using

this model was to address the pedagogical landscape for reflection so that there was a

greater awareness of reflection practices (Ryan & Ryan, 2013).

Most literature on reflection focuses on the specific level of learner reflection, whereas this model offers a more systematic approach to reflection (Ryan & Ryan, 2013). Reflections were addressed on four different levels: reporting and responding, relating, reasoning, and reconstructing. Ryan and Ryan (2013) condensed this model from Bain et al.’s (2002) five-level model. The researcher prepared questions

specific to instructor feedback based on these four levels. This questionnaire was

designed to provide qualitative data on how faculty participants reflected while giving

feedback.

Next, the researcher reviewed feedback in the classes of online undergraduate

full-time faculty participants. This review of classes addressed RQ2: What methods of

feedback do online undergraduate full-time faculty use at a southwestern higher

educational institution? The classroom review identified the participants’ feedback methods, whether text or video. In addition, this review of

classroom feedback addressed RQ1 and RQ3, as Hattie and Timperley’s (2007) levels of

feedback formed the framework for the specific levels of feedback that faculty use,

which connects to their reflection. As formative feedback is more frequently connected

with learning (Black & Wiliam, 1998; Geitz et al., 2015; Hattie & Timperley, 2007),

courses were selected that demonstrated an instructor’s capacity to provide feedback at


all levels as expressed by Hattie and Timperley (2007). To maintain the focus of

reflective feedback, instructor feedback on longer subjective assignments, such as

essays, was examined. For these assignments, the feedback methods selected related to how reflection influenced instruction.

The third form of data collection was the focus group, which consisted of 10 of the 12 faculty participants. These were the same participants who participated in the

questionnaire. This data addressed RQ 3: How do online undergraduate full-time faculty

perceive the influence of reflective thinking on their current and future instructional

approaches? Using Zoom computer software allowed remote participation. The focus

group allowed the participants to review the data, which was either confirmed or

challenged. The participants provided a broad spectrum of information in keeping with a case study, which is a holistic endeavor (Merriam & Tisdell, 2016), and the data collection approaches were aligned with that aim.

Research Methodology

The qualitative methodology was selected for this study. The two primary methodologies are quantitative and qualitative, and they may be distinguished according to their data: in quantitative research, numbers are the prevalent form of data, whereas in qualitative research, words are (Merriam & Tisdell, 2016). When a qualitative approach is selected, triangulation can be one way to substantiate the data (Merriam & Tisdell, 2016). This qualitative study focused on faculty feedback, which is best measured through their words. These words were analyzed and coded to yield the rich, nuanced data that this type of methodology can provide.
A qualitative method was the best choice for addressing the questions for this

study. A qualitative method captures the phenomenon in its natural setting (Yin, 2014).

For this study, online university instructors were studied in their setting of instruction,

where daily, they facilitate courses, which includes providing student feedback.

Conducting the study within this natural setting resulted in less bias or changes that

might have distorted the instructors’ thoughts. Since qualitative research, by nature, does

not lead to the specific conclusions of quantitative data (Merriam & Tisdell, 2016), this

natural setting yielded less bias as the investigator explored themes and patterns of

instructor feedback reflections and methods.

A third option is a mixed methods approach, using both qualitative and quantitative data. Case studies often employ a mixed methods approach, which has the

advantages of integrated methods in a single study (Yin, 2014). This integration results

in richer data that is stronger and able to address more complex research questions (Yin,

2014). The phenomenal dimensions of this study were best explored through the

qualitative process, where faculty reflections, feedback from courses, and focus group

findings were coded and themed to provide three forms of data. These data were triangulated, contributing to consistency in the findings. Therefore, qualitative data collection processes were used, rather than a quantitative or mixed methods approach.

Research Design

A qualitative exploratory single case study was selected as the best design for

this qualitative study. A case study is selected when the researcher wants to provide an

in-depth description of the phenomenon, when the research questions address “How” or

“Why” and when the researcher has no control over the behavioral events (Yin, 2014).
According to Yin (2014), a single case can represent a contribution to knowledge in a

particular field by confirming or challenging a theory. In this study, Vygotsky’s (1978)

ZPD served as the theoretical underpinning to ascertain whether faculty’s reflection on

feedback influenced instruction. This study conformed to those criteria, making it the best

choice for this study. The themes from the three data sources converged to fully explore

the phenomenon of faculty feedback and reflective practices.

The online undergraduate full-time university faculty served as the unit of

analysis in this single case study, which was considered bounded, since the sample

participants teach at the same higher education institution. The present study explored

online undergraduate full-time faculty feedback reflective practices in depth. This ability

to go into depth is a characteristic of a case study, which can offer nuanced data in the

form of coded themes. Other qualitative methods could not accomplish this as

successfully. In a case study, the researcher is viewed as the sensing instrument who observes and interprets (Merriam & Tisdell, 2016).

This study’s research questions addressed how online undergraduate full-time faculty perceive the influence of their instructional feedback practices, their reflective thinking’s effect on their instructional approaches, and how this population conducted their feedback, whether through video or text methods. These types of

questions were best addressed through a case study. If the study question involved

“Who” might benefit from feedback, or “Who” provided formative feedback, the

resulting data would be less useful. If the study were about “How much” feedback is

appropriate, a survey might be more appropriate. Determining “how much” requires

quantitative measurement, so that goal is not consistent with the research questions
addressed in this study. Because “How” and “Why” questions are more exploratory in

nature, the case study emerged as the best choice for this qualitative study (Yin, 2014).

For this study, a single case design was the best approach because the researcher

did not have control over the behavioral events. Instead, studying the phenomenon in the

natural setting offered usable and unbiased data (Yin, 2014). One method that relies on

changing behavior is an experiment, where the variables are manipulated by the

researcher (Yin, 2014). However, for the phenomenon described, this manipulation is

not required to properly address the research questions. Therefore, a qualitative

exploratory single case study stands as the best choice of design.

There are disadvantages with any design, but a researcher must anticipate potential flaws and mitigate them to ensure the results are salient. Traditional concerns include the attention to rigor, the generalizability of the study, and the level of

effort involved in this design (Yin, 2014). A single case study may be viewed as less

rigorous than other designs; however, this criticism is more likely applicable in a study

where the procedures were incorrect or haphazard (Yin, 2014). Therefore, careful

attention to the procedures and the research collection process were followed so the

rigor matched that of other designs. When researchers provide sufficient detail, the

reliability and credibility of the study improve (Baxter & Jack, 2008).

Case studies have been criticized because they are only a single case, which may

not generalize to a broader field. However, when grounded in theory, the results of a

single study can generalize to a particular population (Yin, 2014). This study revealed

the reflective thinking of feedback practices of online undergraduate full-time faculty at

a southwestern higher educational institution, but the findings, grounded in Vygotsky’s (1978) ZPD theory, can generalize to other online faculty and their grading and teaching

practices. Finally, a third concern with a case study is the volume of effort. A researcher

working within this design must gather large amounts of transcribed data, but careful

management of the data can offset this concern.

The multiple considerations led this researcher to select the case design as the

best choice for this study. This qualitative approach allowed a close look at an empirical

topic in its natural setting and properly addressed the exploratory questions of the study

on how online undergraduate full-time faculty feedback practices influenced their

teaching. Despite the challenges of collecting data, proper procedures were followed so

that this study demonstrated the rigor that characterizes other designs. Finally, as the

research questions were properly addressed through the study and grounded in ZPD theory (Vygotsky, 1978), there was potential for the results to generalize to institutions of higher education.

Population and Sample Selection

The population of interest for this study is online faculty in higher education.

This modality’s rapid growth (Deming et al., 2015) has prompted institutional leadership

to examine the aspects of asynchronous learning, including instructor feedback (Atwater

et al., 2017; Borup et al., 2015). Feedback is central to the instruction of the online

modality (Borup et al., 2015). The findings of this study could support the attention to

reflection and feedback or how different delivery methods can enhance online instructor

feedback.

The target population was online undergraduate full-time faculty at an institution

in the southwestern United States. At this institution, there are currently 150 full-time
faculty in the Division of Online Instruction. These instructors work in a collaborative

environment at a campus facility. This campus presence is designed to promote shared

practices and collaboration. These faculty teach courses in their approved disciplines,

such as Math, Science, or English, but may additionally be assigned first-year sequence

courses, based on rolling enrollment demands. This group of full-time faculty teach 80

to 100 students, maintaining an average load of four courses at one time.

The study sample was drawn from this target population of 150, with the

exception of the 14 English instructors, who are directly supervised by the researcher.

One hundred thirty-six faculty were sent an email requesting participation in the study.

The email contained exclusion criteria, listing the selected first-year sequence courses

that contain assignments of 300 words or more. To qualify for the study, faculty needed

to have at least one of the first-year sequence courses listed at the time of receiving the

email, so that one or more of those courses could be used during the questionnaire.

Thirteen faculty responded and were qualified to participate; however, one did not

successfully complete the questionnaire, so the final number for the data source was 12.

This was consistent with the prediction of 10 to 15 respondents.

Site authorization was obtained on August 3, 2017, and following an approved

proposal by the university’s academic quality review process, the approval from the

Institutional Review Board (IRB) was received. The site authorization letter was valid

for one year, so it was anticipated that the data collection activities would be conducted

in November of 2017. The researcher examined the classroom artifacts through her

supervisory access, using matched convenience sampling to ensure an alignment

between the volunteer participants and their concomitant feedback on assignments. The
actual data collection time frame was January through April 2018. The researcher’s committee reviewed the data collection process throughout, with meetings during this time frame on February 22, 2018, and March 16, 2018.

The sample sizes for the three forms of data (the questionnaire, the classroom feedback, and the focus group) are outlined below. The first data source was a 12-item questionnaire

administered to the 13 participating faculty, but 12 were returned completed, yielding 90

pages of data for analysis. The classroom feedback drew from four courses per

participant, with up to 4 assignments in each course. The final source of data was a focus

group of the participants. The transcript from the recorded session yielded 6 single-

spaced pages. These different data sources and methods of collection served to increase

the validity and reliability of the study through triangulation (Golafshani, 2003).

The sampling procedures varied according to the sources. Purposive sampling

was the procedure for obtaining the target sample. Purposive sampling is a method that

allows the researcher to select willing participants (Yin, 2014) and is a form of non-probability sampling that is common to case studies. This particular institution employs

full-time online faculty, who facilitate undergraduate courses with greater regularity and

volume than their adjunct counterparts; a student load between 80 to 100 students is

maintained year-round, yielding robust data. Moreover, this researcher has personal

knowledge about the faculty practices and understands that the online undergraduate

full-time faculty may choose their delivery methods. Because there is a variety of

feedback methods, this purposive sampling method assisted in exploring the study’s

questions.
This study sample of full-time online undergraduate faculty was used for the

questionnaires and the focus groups. For the second data source of classroom papers, the

researcher reviewed four courses, with up to four assignments from each course. A

convenience sampling method was used for classroom papers, as units selected for

inclusion in the sample were the easiest to access. This kind of sampling was necessary,

since the goal was to examine classroom papers from recent first-year-sequence courses.

Since faculty in this division teach different courses at different times, the researcher

selected available current first-year sequence courses from each participant.

Sources of Data

This study triangulated data from three sources. The first source of data was the

12-item faculty questionnaire, which the participants completed while providing

feedback in their courses. Next, the researcher reviewed participants’ first-year sequence

courses to determine their methods of feedback using a prepared chart to identify the

methods of feedback faculty use (see Appendix J). The researcher reviewed four classes

per participant, and three to four assignments from those classes, yielding up to 192

artifacts for review. The third source of data was a focus group for the participants. This

session was recorded through Zoom, an application that allows internet video

conferencing, and faculty discussed the data findings from the questionnaires and the

feedback results from reviewed papers. During the focus group session, participants

confirmed and challenged the presented data.

In a qualitative study, three data sources allow triangulation. This triangulation

provides convergence and a corroboration of the results from these different methods

(Almalki, 2016). The convergence then serves to determine the consistency of the
study’s findings (Yin, 2014). These three data sources were the most effective tools to

explore the perceived influence of feedback practices on teacher reflection and the

subsequent influence of reflection on instructional strategies.

Qualitative sources of data are typically more complex than those used in

alternate research methods. The goal of the single case study is to select data that reflect validity and reliability, creating data that are worthy of analysis (Yin, 2014). The

principles of data collection include multiple sources of data, a created database, a chain

of evidence, and care with data (Yin, 2014). Although an interview is a common data

source for qualitative case studies (Yin, 2014), for this study, the interview was not the

best way to collect data about actual events or behaviors of faculty feedback. Instead, the

questionnaire was selected as a means of meeting the principles and capturing how

faculty think about their own feedback while applying it.

The first source of data in this study was a 12-item questionnaire designed to

capture reflective feedback practices among online undergraduate full-time faculty. This

questionnaire was designed to collect data on the human event of faculty feedback

practices and reflections that took place during their feedback delivery. This

questionnaire was triangulated with the focus group discussion and the observation of

the classroom artifacts (otherwise known as assignment feedback), to converge on

findings of the study’s phenomena. Questions prompted faculty to consider the kinds of

classroom interventions they provided as a result of these feedback reflections.

The questionnaire was sent via email as an attachment and included three demographic questions in addition to the 12 open-ended items, which focused on how faculty reflected while delivering feedback. Ryan and Ryan’s 4Rs model (Ryan & Ryan, 2010; Ryan & Ryan, 2013) served as the foundation for the questionnaire.

This model has four levels, and each level represents a different depth in reflection.

Level 1 is reporting and responding, level 2 is relating, level 3 is reasoning, and level 4 is

reconstructing. The creators of this model note that it is flexible and can be customized

to different disciplines and purposes (Ryan & Ryan, 2013). The faculty questionnaire for this study included questions at each of the four levels, and these questions were aligned with the three research questions.
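To illustrate the structure of a 4Rs-based instrument, the following minimal Python sketch maps each level to one sample reflective prompt about feedback. The prompts are hypothetical illustrations written for this sketch, not items from the study’s questionnaire.

    # Hypothetical mapping of Ryan and Ryan's (2013) 4Rs levels to sample
    # reflective prompts about feedback; illustration only, not the study's instrument.
    FOUR_RS = {
        "reporting and responding": "What did you notice while giving this feedback?",
        "relating": "How does this feedback connect to your past teaching experience?",
        "reasoning": "Why did you choose this feedback method for this assignment?",
        "reconstructing": "What might you change in your future instruction?",
    }

    for level, prompt in FOUR_RS.items():
        print(f"{level}: {prompt}")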

The second source of data in this study was faculty feedback from online

classrooms. The Site Authorization approval granted the researcher permission to review

the courses, and the researcher had access to the classes through her supervisory role.

The researcher examined the classroom documents through a matched convenience

sampling procedure. This was necessary to maintain consistency between the faculty

who volunteered to complete the questionnaire and their classroom papers. The

dissertation committee provided oversight during the data collection process. Reviewing

these classes addressed the research question: What methods of feedback do faculty use?

These data were useful for many reasons.

First, full-time online faculty used different feedback methods, including text or

video tools for feedback. Understanding the specific delivery method yielded deeper

meaning as faculty completed the questionnaires and participated in the focus group

discussions. Instructors may find that using audio or video feedback enhances the

metalinguistic properties of their remarks, resulting in greater student understanding and

satisfaction. Misinterpretation of instructor comments is a common concern in an

asynchronous classroom (Ali, 2016), so this may drive the instructor’s choice in this
kind of feedback. The data from observing the online classes did not yield the faculty

participants’ rationale for their choices, but as other forms of data surfaced, these

choices became useful in triangulating the results.

In accessing the classroom feedback, the researcher used care and discretion during the review, making every effort to exclude any student information or identifiers: video feedback was transcribed, and names were removed from text feedback. Only qualifying courses were examined and the most

recently completed first-year-sequence courses were selected for feedback review.

The theory that grounded the feedback analysis was Vygotsky’s (1978) ZPD, which

determined if the feedback promoted self-regulation. Hattie and Timperley’s (2007)

framework supported the feedback analysis. The principles of this theory determined the

coding mechanisms to identify specific scaffolding or other strategies that might be

formative methods of moving students into their proximal zone.

As noted, the researcher prepared the faculty questionnaire, and it was validated through field testing to ensure the study was as accurate as possible. Three experienced full-time faculty members evaluated the questions as they related to the study’s purpose. Based on these evaluations and suggestions, the researcher made changes to the questionnaire, completing this validation process.

The third data source was the focus group. The participants who completed the

questionnaires also participated in the focus group, but two could not attend, resulting in

10 participants. According to Yin (2014), the purpose of a focus group is to moderate a

discussion about some aspect of the case study, with the aim of surfacing the views of each person in the group. The focus group discussions allowed faculty

participants to connect their responses from the questionnaire and to share and glean insights in a

peer-centered discussion. The focus group session was virtual and utilized Zoom, an

internet conferencing application, which allowed recording and convenience for the

participants, and allowed transcription for analysis.

Trustworthiness

The four elements that constitute trustworthiness in a study are credibility, transferability, dependability, and confirmability. Each was addressed in this study to ensure its trustworthiness. In contrast to a quantitative study, where the research instrument determines the credibility of the study, a qualitative study’s credibility depends upon the researcher and how they conduct the study. Trustworthiness encompasses all elements of research credibility (Golafshani, 2003).

Credibility is critical to establish the overall validity of the study. One way to

establish credibility is to demonstrate that the study’s sources of data were derived from

literature in the field or existing models or studies (Golafshani, 2003). For this study, the

existing framework of Ryan and Ryan’s 4R’s model was adapted. This model is

recognized in the literature and has been used in studies in the field of education, and the

authors state it is adaptable to other disciplines and areas of study (Ryan & Ryan, 2013).

Credibility is also heightened when multiple data sources are used (Baxter & Jack,

2008). For this study, three data sources were triangulated: faculty questionnaire responses, classroom feedback, and focus group responses.

Transferability refers to the study’s findings being generalized or applicable to

either future research, policy, or practice (Lincoln & Guba, 1985). This study’s findings
have potential for transferability. For example, the study showed that some faculty stated

they would use video feedback with institutional support. This means that leadership

may support multi-modal feedback through training or funding, so that instructors can

deliver feedback in a method that is most effective. It is also possible that more attention

might be given to the very act of reflection and how reflecting during feedback can

provide rich material for classroom instruction. Therefore, reflection might be a focus of

future professional development. In this study, detailed description was necessary so that

there was sufficient background and the phenomena were understood.

Dependability was achieved through evidence, records of the data analysis, and a

clear alignment of the study’s phases (Lincoln & Guba, 1985). In a qualitative study,

evidence can come from the data sources and from the researcher’s field notes. The

researcher maintained raw data in Word files that are labeled and organized. This

electronic method of note taking ensured that the data were not lost, damaged, or

misread, and that it could be reviewed by another party. Records of the established coding method were documented carefully, which contributed to the

study’s dependability. Templates were used to establish themes and a coding process.

Finally, this study’s gap, problem statement, research question, and methodology were

aligned, which is an additional requirement for dependability (Lincoln & Guba, 1985).

Finally, this study demonstrated confirmability. The questionnaire was reviewed

and evaluated by experienced online full-time faculty who provided feedback on the

questions and to what extent they addressed the research questions. Acknowledging the

study’s shortcomings and their potential effects is another way to ensure confirmability, and these potential shortcomings and their effects were discussed transparently within the study. In a qualitative study, triangulation demonstrates corroboration across the multiple data sources, and this triangulation contributes to the study’s confirmability (Golafshani, 2003; Yin, 2014). Since a study’s conclusions are more accurate with analyses from different data sources (Yin, 2014), this convergence (Baxter & Jack, 2008) of the questionnaire responses, paper feedback data, and focus group discussions supported the confirmability.

Reliability

A study has reliability when steps are taken to ensure consistency and care in

data management and in protocols. When the collection and management process is

well-documented, the study can then be replicated (Yin, 2014). Following the guidelines

set forth by the dissertation committee, the researcher provided documentation for protocols, data collection, and data management.

First, a protocol for conducting the focus group was prepared for the purposes of

replication. Next, the raw data were managed through creating anchor papers for

consolidating faculty feedback. The coding systems were documented so that another

researcher could replicate the study. Finally, a template was used to distribute the

participants’ feedback according to Hattie and Timperley’s (2007) themes. These

documents are provided in the study’s appendices.

Data Collection and Management

For this study on the perceived influence of feedback practices on

teacher reflection (and subsequently, the influence of reflection on instructional

strategy), the data were collected as follows: Following site authorization and IRB
approval, an email was sent through the director’s office inviting qualifying online

undergraduate full-time faculty to participate in the study. The email went to

approximately 136 faculty; there are 150 full-time faculty in this division, and the 14 who are directly supervised by the researcher were excluded. The email specifically requested

participation from faculty teaching first-year-sequence courses with written assignments.

These qualifying courses were listed within the email. First-year sequence courses with

writing assignments were selected because, despite the growing literature on the

advantages of peer feedback, Planar and Moya (2016) argued that early courses in

higher education are best served by direct instructor feedback.

Thirteen faculty agreed to participate in the study. The 12-item questionnaire was completed as participants graded, so that their responses best reflected the phenomena in their

natural setting (Yin, 2014). Participants completed the questionnaire either at the

campus host site, or remotely, as long as the participants were providing feedback in

their daily routine as the questionnaire was conducted. One participant did not complete

the questionnaire, so the study results reflect responses and classroom data from 12

participants. The responses from the questionnaire were then analyzed for coding and

themes.

Classroom analysis was a second source of data for this study. This analysis was

necessary so that actual faculty feedback could be examined according to how faculty

may reflect. For each participant, four first-year sequence courses were reviewed and

from these courses, four papers from assignments that required 300 words or more were

reviewed. The analysis showed what kinds of feedback methods faculty used and served

as a sufficient source of data.


In addition, examining this feedback demonstrated the connection between

feedback and learning, per Vygotsky’s ZPD (1978). As stated, the courses were selected

so that the formative elements of feedback could be analyzed. As part of site

authorization, permission was confirmed to review the classes, but the researcher did not

engage with the university’s administrative staff or research department in obtaining the

feedback. However, the dissertation committee was informed of the data collection and

analysis process. To ensure confidentiality, all student names were removed from

faculty feedback and video feedback was manually transcribed for the purpose of

creating data without student names as identifiers. For this feedback analysis phase of

the research, no additional efforts were required on the part of the participants. The

researcher examined classroom feedback from the 12 participants who successfully

completed the questionnaire.

The third source of data was the focus group which took place March 27, 2018.

The same participants who completed the questionnaire agreed to participate in the focus

group; however, two could not attend. The researcher facilitated the session with 10 of the 12 online undergraduate full-time faculty participants. The researcher reserved a

conference room in the same building where full-time online faculty work, but

participants attended the session remotely, through Zoom, an internet conferencing

application. The session was then manually transcribed, yielding six single-spaced pages

of responses.

Confidentiality was maintained during each phase of the data collection and

analysis. The collected data were free from student names or identifiers and will be

stored within the institution’s research department. In addition, data were stored on a
flash drive and kept locked in the researcher’s desk drawer. These measures were taken to protect the participants’ confidentiality and to maintain a high level of data management.

To ensure validation, the data were triangulated. Triangulation improves the

validity and reliability of findings (Golafshani, 2003). Triangulation was evident through

the questionnaire that faculty completed, the reviewed classroom feedback data, and the

faculty focus group recorded sessions. Comparing these three forms of data demonstrated the value of triangulation. The data were also summarized to further support reliability and validity: the researcher presented the research questions and how the synthesized data addressed those questions. This summary provided a transparent process, demonstrating the aim to arrive at the truth (Yin, 2014).

The raw data were prepared for the analysis in various ways, depending on the

data source. The questionnaires were reviewed for completion, then saved in files. The classroom feedback was copied and pasted into Word documents, one document per class, then consolidated into one document per participant; repeated feedback remarks were deleted during consolidation. The video feedback was transcribed onto documents. The focus group was recorded, then manually transcribed, resulting in six pages of single-spaced text. These procedures prepared the data for analysis.
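To make the consolidation step concrete, the following minimal Python sketch removes repeated remarks while preserving their first-seen order. The remark strings and the function name are hypothetical illustrations; the actual consolidation was performed manually in Word documents.

    def consolidate(remarks):
        """Merge one participant's remarks from multiple class documents,
        deleting repeats while preserving first-seen order."""
        seen = set()
        consolidated = []
        for remark in remarks:
            key = remark.strip().lower()  # treat case/whitespace variants as repeats
            if key not in seen:
                seen.add(key)
                consolidated.append(remark)
        return consolidated

    # Hypothetical remarks gathered from four classes for one participant.
    all_remarks = [
        "Cite your sources in APA format.",
        "Add a topic sentence to each body paragraph.",
        "Cite your sources in APA format.",  # repeated across classes
    ]
    print(consolidate(all_remarks))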

Once the researcher received Institutional Review Board approval, the informed

consent letters were emailed to participants, and will be maintained for 5 years, and then

destroyed, consistent with standard institutional research guidelines (CITI). The

researcher obtained site authorization in August 2017 and Institutional Review Board approval on January 9, 2018. The data collection process began in January 2018,

following these approvals.

Data Analysis Procedures

It was not known how online undergraduate full-time faculty perceived the

influence of their instructional feedback practices on their reflective thinking, and hence,

their instructional strategy at a southwestern higher educational institution.

For this study, the research questions were:

RQ1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their own reflective thinking?

RQ2: What methods of feedback do online undergraduate full-time faculty use at a southwestern higher educational institution?

RQ3: How do online undergraduate full-time faculty perceive the influence of reflective thinking on their current and future instructional approaches?

Case studies represent particular challenges in the analysis phase of research.

The data themselves must be examined for patterns and meaning. Numerous computer programs can assist with the manual labor involved in managing the data (Yin, 2014), but it takes a human touch to effectively interpret the data and to deduce meaningful

patterns. Whereas quantitative research objectively separates the researcher from the

data, researchers themselves have a direct role in qualitative data (Golafshani, 2003). For

this study on the perceived influence of feedback practices on teacher reflection (and

subsequently, the influence of reflection on instructional strategy), the data were

presented in thematic analysis and triangulated.


For this study, there were three sources of data to address the research questions.

The first source of data was the faculty questionnaire, which participants completed

while grading three assignments. The online classroom served as the artifact to address

the second research question, and the focus group was the third source of data. The

faculty questionnaires were completed by the participants, and these data were already in

text format. The video classroom feedback was transcribed manually to allow for the

removal of confidential student identifiers, and to analyze the feedback remarks for

coding. Both of these sources of data were thematically analyzed, with codes assigned

and major and minor themes attributed to the data. The focus group of 10 of the 12

participants was recorded through Zoom, an internet conferencing tool, and the meeting

contributions were transcribed to again allow clarity and accuracy in deriving common

themes. These sources of data served to effectively address the study’s three research

questions.

The data sets were analyzed using open coding and the thematic analysis

process. Coding is a method of reduction that allows the researcher to establish priorities

and to focus among the data (Vaughn & Turner, 2016). Yin (2014) identifies this

analytical strategy as a method that allows the researcher to examine themes that emerge

when working with data. This method allowed the researcher to examine data trends

from different feedback methods, including video.

Transcribing the focus group discussions and classroom feedback into text

allowed for a careful and consistent analysis. As the codes emerged, the researcher

attributed themes and prepared tables to show the descriptive data. Codes then became

themes when they appeared frequently. For example, if more than one questionnaire
response stated that time management was a factor in their feedback reflections, that

initial code emerged as a theme. There was extraneous data that did not address the

specific elements of the study. The researcher interpreted the data and, in accordance

with Ganapathy (2016), remained flexible enough to change course when the data dictated that. When clear patterns emerged, these were displayed in a table, so the assigned codes and themes were attributed to the data.
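The frequency rule described above can be expressed as a small Python sketch. The code labels, list contents, and threshold below are hypothetical illustrations of the rule, not the study’s actual codes or analysis software; in the study itself, the rule was applied by hand as the descriptive tables were built.

    from collections import Counter

    # Hypothetical codes assigned to questionnaire responses during open coding.
    assigned_codes = [
        "time management", "student tone", "time management",
        "rubric alignment", "time management", "student tone",
    ]

    def promote_to_themes(codes, min_frequency=2):
        """A code becomes a theme once it appears at least min_frequency times."""
        counts = Counter(codes)
        return sorted(code for code, n in counts.items() if n >= min_frequency)

    print(promote_to_themes(assigned_codes))
    # ['student tone', 'time management']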

The researcher expected both the quantity and quality of data to be sufficient for

the study. First, more than ten participants were recruited to allow for any attrition. Next,

the faculty questionnaire comprised 12 questions, and faculty were asked to

complete this questionnaire while completing three of their assignments. This yielded a

maximum of 120 responses about their reflective practices and their feedback. The

classroom analysis included examining four classes, and four assignments within those

classes for each instructor. Finally, the focus group of ten participants was

approximately one hour in length, which resulted in multiple-page transcripts for

analysis.

Finally, the three sources of data were considered and compared. Baxter and Jack

(2008) note that a common error in data analysis is to treat the sources of data

separately; instead, in a case study, these data may converge, so the researcher must

remain open to how the coding might dictate the themes. The focus group, as a third

component of the data analysis, provided triangulation, which assisted in controlling

bias and improving the validity and reliability of the findings (Golafshani, 2003).
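As a simple illustration of how such convergence can be pictured, the Python sketch below intersects theme sets from the three sources. The theme labels are invented for this sketch and are not the study’s findings.

    # Hypothetical theme sets derived from each of the three data sources.
    questionnaire_themes = {"time management", "scaffolding", "student tone"}
    classroom_themes = {"scaffolding", "surface errors", "student tone"}
    focus_group_themes = {"scaffolding", "time management", "student tone"}

    # Themes corroborated by all three sources converge through triangulation.
    convergent = questionnaire_themes & classroom_themes & focus_group_themes
    print(sorted(convergent))  # ['scaffolding', 'student tone']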

Ethical Considerations
The general nature of this study did not present specific ethical concerns.

Because online full-time university faculty were not evaluated on their feedback quality,

there were no ethical concerns of job security or negative consequences associated with

their feedback or reflective practices being evaluated. Instead, the researcher made it

clear that the study’s purpose was to examine how faculty perceived the influence of

their instructional feedback practices on their reflective thinking, and hence, their

instructional strategy at a southwestern higher education institution, and the exploratory

nature did not result in negative consequences or responses. As with any study, any

careless management of data or insufficient amounts of data could jeopardize the results.

Therefore, this researcher made every effort to carefully manage the data, to follow the

protocols established, and to ensure that the data were significant enough to reveal

meaningful results.

For this study, IRB approval and site authorization were obtained prior to any

phase of data collection. An informed consent document was also provided for the

participants. Yin (2014) stated that informed consent is a critical step to formally solicit

volunteers and to alert participants to the nature of the study. This informed consent

document provided disclosure that participation in the study was voluntary and that

participants could have withdrawn at any time. These documents are included in the

appendices of the study.

In addition to notifying participants through informed consent, participants must

be protected in terms of their privacy and confidentiality. Yin (2014) stated that

participants should not be put in an undesirable situation, such as being added to a list

for future studies. To ensure this protection, no participant names were attached to any
codes or themes. Instead, participant responses were identified by numbers. In addition

to protective measures of participants, the researcher also ensured the confidentiality of

students. When classroom feedback was gathered and transcribed for video feedback,

student names were removed to avoid any student identifiers. All collected data will be

stored at the higher educational institution and will be destroyed after 5 years.

As with any study, there are potential conflicts of interests among participants.

For example, because this study explored the perceived influence of feedback practices

on teacher reflection and its subsequent influence of reflection on instructional

strategies, it is possible that those who volunteered may have had a special interest in the

subject or may have conducted their own study. When informed of the study’s outcomes, participants might have gained applicable insights to fortify, maintain, or improve

their own connections between feedback reflections and instruction. Every effort was

made to avoid bias in the study, including triangulating the data and rigorous measures

to effectively code, analyze, and report data.

One ethical consideration is that the researcher is in a supervisory role among

full-time online faculty. The researcher is the faculty chair for full-time online English

faculty at the host institution, so those faculty members were not invited to participate.

Participating faculty did not directly report to the researcher, but they knew the

researcher in a professional setting and were aware of the supervisory rank, which

presented potential for the participating faculty to present biased responses.

Another limitation is that this supervisory position could generate the possibility

of conscious or unconscious coercion, in which participants feel pressure to provide data

in a certain way, based on that affiliation. Although the researcher knew the target
population in a professional capacity and they were aware of the supervisory rank, the

sample was made up of faculty who responded at random and met the criteria, and at no

time during the study or in the past did the researcher influence the promotion,

termination, or pay of the participants.

The researcher used a matched convenience sampling of classroom papers to

review. This was necessary because there had to be an alignment between the faculty

who volunteered to participate and their concomitant assignments that were graded.

Inherent in this procedure was the selection of assignments of 300 or more words in

length and these assignments aligned with questionnaire data, classroom data, and focus

group data.

This researcher attempted to offset potential bias. First, any outlier comments in the

data were carefully considered. It is critical to be aware of any sensitive contrary

information (Yin, 2014). Secondly, a case researcher must keep an open mind. Being

deeply familiar with the subject matter can potentially create bias (Yin, 2014). In this

study, the researcher is familiar with feedback and its formative value, yet every effort

was made to remain open and adaptable to other perspectives and methods. A third

intention to offset bias lies in the researcher’s role with the institution. Because the

researcher supervises faculty, feedback is regularly reviewed and evaluated. However,

the researcher is aware that for this study, the objective was not to evaluate feedback

based on the same criteria, but to explore it relative to how faculty reflect and to

understand how and if selected methods contributed to reflections and ultimately, how

instruction is shaped.
Following the principles set forth in the Belmont Report (1978), the data from this study will be saved on a flash drive and will be locked

in the researcher’s desk. The research data will be kept for 5 years and will be

inaccessible to the public.

Limitations and Delimitations

Qualitative case studies have potential limitations that, if not properly addressed,

could jeopardize the integrity and accuracy of the study. Limitations common to case

studies include methodology, instrumentation, sample size, and time limitations (Yin,

2014). This was a qualitative study and, by nature, had limitations owing to its non-quantitative

method. However, a qualitative method remains the most logical way to address the

research questions of this study and to fully explore the contemporary phenomena of

instructor feedback reflections and how they influence teaching.

In this study, the sample group could have posed a possible limitation, because

this study was conducted at a private higher educational institution in the southwestern

United States. This institution has a forward-thinking model that employs full-time

faculty at the campus facilities for online instruction. Instructors facilitate four courses at

a time, hold office hours, and are responsible for student communication outside the

classroom, including student calls and emails (DeCosta et al., 2015). The online full-

time faculty also interface with the traditional campus faculty, participate in curriculum

revisions, collaborative research, and campus functions. These attributes provided rich

data sources for faculty feedback practices; however, many institutions employ adjunct

faculty for their online delivery (Schieffer, 2016). Therefore, because the participants are

full-time faculty, the results may be less generalizable to higher education models where
this delivery is primarily taught by adjuncts. Because the online undergraduate full-time

faculty work together, there may be less diversity in their responses and feedback or

reflection philosophies. To curtail these limitations, there was careful attention to the

data analysis details and the triangulation of data enhanced the trustworthiness of the

study.

Another limitation is the researcher’s potential personal biases. The researcher

has been a part of the online full-time undergraduate faculty for more than seven years

and is therefore acquainted with their practices, methods, and philosophies. This

knowledge could lead the researcher to make basic assumptions about how faculty

conduct feedback. However, the data were collected from first-year-sequence courses with assignments of more than 300 words in length. In addition, the

researcher reviewed courses that were closed, yet current, so that the classroom artifacts

would most closely resemble the data based on the questionnaires. These parameters did

not allow the researcher to select papers based on personal bias during the data collection.

For the data analysis, the researcher examined the feedback with the a priori themes of

Hattie and Timperley (2007) as a guide. Having a theoretical model, such as Vygotsky’s

(1978) ZPD, kept the data analysis anchored to the research questions, rather than

the researcher’s preferences towards feedback practices.

Another limitation is that the researcher is in a supervisory role in the online

division. This supervisor role could have led to conscious or unconscious coercion, as

participants may have felt pressure to provide less authentic data, due to their knowledge

of the researcher’s role. However, faculty participating in this study did not directly

report to the researcher, nor did the researcher at any time influence their employment
status in any way. In addition, there was no promise of monetary or other reward,

improved evaluation scores, or access to future resources or support systems for

participation.

The researcher accessed the classroom artifacts through her supervisory access,

and did not use institutional oversight in gathering this data. Convenience sampling was

used to create an alignment between the volunteer participants and their concomitant

feedback on assignments. Selecting assignments of more than 300 words ensured

alignment among the data sources.

Because the research questions had boundaries, there are delimitations to the

study. One delimiter was that the population was online undergraduate full-time faculty

at the southwestern higher educational institution. It is possible that an adjunct

population would provide results that might be more diverse, and therefore, more

meaningful. Thus, the lack of input from this broader population restricted the data.

However, the researcher applied rigor in establishing codes and themes, and the data

were triangulated to lessen the bias of the results (Golafshani, 2003).

Summary

Chapter 3 described the design of the study about online undergraduate faculty

feedback reflective practices at a higher educational institution. For this study, the

problem was that it was not known how online undergraduate full-time faculty perceived

the influence of their instructional feedback practices on their reflective thinking, and

hence, their instructional strategy at a southwestern higher education institution.

There are studies on faculty feedback methods and their effectiveness from

student perspectives (Ali, 2016; Atwater et al., 2017; Borup et al., 2015) and studies that
focus on feedback’s effectiveness (Ellis & Loughland, 2017; McCarthy, 2015; Mirzaee & Hasrati, 2014; Nicol et al., 2014; Nicol & Macfarlane-Dick, 2006) as well as

studies on the importance of reflection for faculty and its learning outcomes (Bennett et

al., 2016; Bond, 2011; Falender et al., 2014). However, research was needed to

understand more about the perceived influence of feedback practices on teacher

reflection and the subsequent influence of that reflection on instructional strategies. A

greater understanding of these reflections and practices could improve student learning

and satisfaction.

The data collection began by sending an email requesting voluntary participation

to the online undergraduate full-time faculty at the higher education institution in the

southwestern United States. The researcher worked with the Director of Online

Instruction, who sent out the recruiting email requesting participation from those

currently teaching first-year courses; those courses were listed and were chosen for their assignments of 300 or more words. Due to the researcher’s supervisory role, the 14

English instructors were not included on this distribution list. Participants then

completed a questionnaire on their feedback, and classroom feedback was reviewed for

methods, and the coded and themed data were then presented to a focus group, which

represented the final stage of data collection. Additional data obtained from the recorded

focus group were added as a data source, thus creating the triangulation of the data.

Chapter 4 will include the results from the data collection and analysis. The

themes will be presented in visual and descriptive terms so that the results can be appreciated and understood by other institutions that may want to replicate this study or

conduct additional studies on these topics.


Chapter 4: Data Analysis and Results

Introduction

The purpose of this qualitative exploratory single case study was to examine how

online undergraduate full-time faculty perceived the influence of their instructional

feedback practices on their reflective thinking, and hence, their instructional strategy at a

southwestern higher education institution. This study explored the intersection between

faculty feedback and their reflections to determine the impact on teaching. Feedback can

serve as formative instruction (Hattie & Timperley, 2007). Online instructors provide

feedback through text, audio, or video methods (Borup et al., 2015). These delivery

choices may be influenced by several factors, including efficiencies (Planar & Moya,

2016), but it is not known how these delivery methods might influence reflection or how

these reflective practices affect instruction. The data addressed the following research

questions:

RQ1: How do online undergraduate full-time faculty perceive the influence

of their instructional feedback practices on their own reflective thinking?

RQ2: What methods of feedback do online undergraduate full-time faculty use at a

southwestern higher educational institution?

RQ3: How do online full-time faculty perceive the influence of reflective thinking

on their current and future instructional approaches?

This study was a qualitative method with a single case study design. The

research design connects the data and its conclusions to the study’s questions (Yin,

2014). For this study, the single case design was selected in order to fully describe and

explain the phenomenon (Yin, 2014). The researcher collected data from three sources
and reviewed, analyzed, and coded the data to produce resulting themes. The data came from faculty questionnaire results, feedback from classroom papers, and focus group transcripts, and the triangulation of these data supported the development and salience of the case. Each research question was supported by one or more of these sources, as the table below shows:

Table 1.

Data Sources Aligned with RQs

Research Question | Faculty Questionnaire | Classroom Papers | Focus Groups
RQ1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their own reflective thinking? | Questions 1-4 | To analyze feedback and fortify common paper concerns | Confirming or denying data
RQ2: What methods of feedback do undergraduate full-time online faculty use at a southwestern higher education institution? | Questions 5-8 | Analyzed for theory and paper elements; video and text feedback compared | Confirming or denying data
RQ3: How do online undergraduate full-time faculty perceive the influence of reflective thinking on their current and future instructional approaches? | Questions 9-12 | Not used | Confirming or denying data

Thus, different elements of the data sources supported the research questions. The

questionnaire was designed to address all research questions, and the paper feedback
served as an artifact for analysis for two of the research questions. The data from the focus group either sustained or refuted these findings.

For this study, the sample population was full-time online undergraduate faculty

at a southwestern higher educational institution. There were originally 13 full-time

online undergraduate faculty who agreed to participate in the study, with 12 who fully

participated, providing evidence from all data sources. Ten of the 12 participants

attended the focus group session. The data sources included a faculty questionnaire

about their reflections during grading (see Appendix F), feedback from faculty papers,

and a focus group transcript. Each data source addressed the research questions resulting

in themes that connected back to the research questions, consistent with the case study

design (Yin, 2014). The coding process will be explained later in this chapter.

The faculty questionnaire was emailed to participants who were instructed to

complete the questionnaire as part of their grading, focusing on four student papers for

each question. Thirteen participants sent in the questionnaires, but one was not

completed. Thus, 12 questionnaire responses were analyzed and codes were developed,

which resulted in themes. These themes, together with the classroom feedback analysis, were then presented to the faculty participants in a focus group, the final stage of data collection, where faculty confirmed and expanded upon the themes.

The descriptive findings will be discussed in the following section. These

findings will include data on the demographics of the faculty participants. In addition,

the chapter will show explanations of data gathered from the questionnaires, classroom
feedback, and focus groups. This section will also present the researcher’s data analysis

procedures and resulting themes.

Descriptive Findings

The descriptive findings of this study will be presented in the following section.

This study explored the perceived influence of feedback practices on teacher reflection

(and subsequently, the influence of reflection on instructional strategy). The data sources

for this study were faculty questionnaires, classroom feedback, and focus groups, which

addressed the three research questions in the study. The population studied was online

undergraduate full-time faculty at a southwestern higher education institution where 150

full-time online faculty teach both graduate and undergraduate courses and work in a

collaborative environment at a campus facility. At this institution, online students are

assigned full-time instructors for their first-year sequence courses, and may then be

assigned courses with either adjunct or full-time faculty for subsequent courses. The

institution is intentional about providing first-year students with full-time, rather than

adjunct instructors. The online full-time faculty report to campus offices, collaborate and

share practices and uniform expectations. Because of this collaboration and attention to

first-year students, the researcher wanted to explore this particular population with regard to feedback reflection and its influence on instruction.

The characteristics of the full-time online faculty make them a significant

population for this study. The participants in the target population work together in a

collaborative space at the host campus. The full-time faculty teaching load is

approximately 80-100 students, or four classes at a time. Substituting, student attrition,

or shifting enrollment can cause these numbers to vary. These faculty teach courses in
specialized subject areas, but may also be assigned first-year sequence courses. Of the

150 full-time online faculty, 27 are tier instructors, meaning that they teach higher

volumes and are dedicated to one first-year-sequence course. Tier instructors’ student

loads are between 120 and 150 and these instructors are provided instructional assistants

to help with objective grading. Unlike the non-tiered instructors, the tiers are not assigned

to substitute for other instructors, in order to keep their teaching load manageable. Five

of the twelve study participants were tiered instructors.

Participants were recruited through an email requesting participation. The

researcher obtained permission from the director of the online division, who distributed

the email to the online full-time faculty division. To avoid study bias, fourteen English

instructors were excluded, because they work closely with the researcher. The letter

specifically requested participation from undergraduate instructors teaching first-year-

sequence courses in critical thinking, philosophy, religion, and introductory education

courses. These courses were selected because the study was limited to exploring the

particular feedback among first-year-sequence instruction, and because these courses

contain written assignments of 300 words or more, which would allow analyses of rich

data for the study.

The primary method of communication with participants was through email. The

researcher worked with the director of online full-time faculty who sent the first letter

asking for participation on January 12, 2018. Four participants responded on the same

day, then the same message was sent a week later, which resulted in seven additional

participants. An additional participant volunteered via a face-to-face conversation with

the researcher. By February 7, 2018, questionnaires were returned from 12 participants.


On February 7, another face-to-face conversation resulted in an additional volunteer.

This participant returned the questionnaire by February 21, 2018, resulting in a total of

13 questionnaires. Altogether, 13 qualifying faculty responded, however, one did not

complete the survey, so the data analyzed and presented is from 12 participants.

The participants cooperated at all stages of the study. All 13 participants

provided informed consent letters to demonstrate their participation in the study, twelve participants completed the questionnaires, and ten attended the focus group

session. The questionnaires were emailed as attachments to the faculty participants

immediately after they agreed to participate. Participants gave their consent to

participate via email, except one who verbally agreed, but then responded to a follow-up

email. Participants were emailed instructions to complete the questionnaire as a part of

their grading, since the questionnaire was designed to probe their reflective thoughts

while grading in their daily routine. Requesting that the questionnaire be completed

during the work hours ensured that the phenomenon was in its natural setting (Yin,

2014). A requested completion date was provided, but prior to the requested deadline,

the researcher sent follow-up emails to check in on participants and offered to answer

any questions. Two check-in emails were sent to one participant. All questionnaires

were returned, with one incomplete, resulting in 12 full-time online undergraduate

faculty participants.

As part of the questionnaire, three questions provided demographic information. The

demographic responses provided information about the full-time-online undergraduate

faculty, including their years in teaching higher education and if the course they used for

the questionnaire was included as part of their regular course load. For all participants,
the courses addressed on the questionnaire reflected what they were most often assigned.

Undergraduate online faculty are expected to teach first-year sequence courses, as those

courses have the highest enrollment and faculty benefit from understanding the first-

year classes. Many online full-time faculty members have specialties in other

disciplines, such as math or psychology and faculty loads may be a combination of the

first-year-sequence classes and their specialty area.

The questionnaire responses provided relevant data on participants. There were

five male participants and seven females. The average online teaching experience in

higher education for male participants was 6.3 years and was 5.7 years for participating

females. For the purpose of maintaining the institution’s anonymity, these first-year

sequence courses were assigned pseudonyms so the course titles represent the general

subjects, but are not official course names. All participating males taught a first-year

critical thinking course. Females taught courses in critical thinking, philosophy, religion,

and an introductory success course for education majors. The table below shows the

participants, their experience and courses taught.

Table 2.

Undergraduate Full-time Faculty Demographics


Gender | Number | Average Online Teaching Experience (Years) | Primary Courses Taught by Participants
Male | 5 | 6.3 | Critical Thinking (5)
Female | 7 | 5.7 | Philosophy (2), Critical Thinking (2), Intro to Education (2), Religion (1)

The table shows that the majority of participants were female, with the critical thinking

class taught the most among the participants. All male participants taught the critical
thinking course, and two were tier instructors. Two females taught critical thinking, with

one instructor being tiered. Two female instructors taught the education class; one of

them was tiered and a fifth tiered female instructor taught the course in philosophy. The

remaining female participant taught a course in religion. The survey showed that the

instructors responded to the questionnaire based on the class they teach most frequently.

Thus, these instructors, including those who are tiered, have depth in their knowledge of the course

and feedback methods. The host institution implemented the practice of hiring full-time

faculty for the first-year course sequence in 2010, so the teaching experience average

indicates that some participants have been part of this model for most of its duration.

However, this also suggests that the participants did not have online teaching experience prior to their current positions.

The questionnaire was designed to capture reflective thoughts that take place

during faculty grading and was created based on Ryan and Ryan’s (2013) reflection

levels. As a result, the questions were structured within these four levels: Reporting and

Responding (1), Relating (2), Reasoning (3), and Reconstructing (4) (Ryan & Ryan,

2013). Faculty participants were instructed to respond to these questions on four

different student papers. Thus, 12 participants responded to four questions for each of the three research questions, and this was done for four papers they graded. Because each participant responded to all 12 questions across four papers, the questionnaire responses yielded faculty reflections on 48 student assignments (12 participants reviewing four papers each) and a total of 576 individual question responses. Since the questionnaire was open-ended, the page length varied, but

the average completed questionnaire length was 8 pages. Thus, approximately 96 pages

of text were reviewed to determine the codes and themes of the questionnaires. The table
below shows resulting themes related to paper concerns, how instructors respond to

concerns and courses of action based on these concerns.

Table 3.

Questionnaire Responses by Research Question for RQ 1

Q1: Paper Concerns | Q2: Perceived Frequency of Concern | Q3: Resources Used | Q4: Practice Changes
Development | High | Instructor-added | Instructor-added
Formatting | High | Instructor Feedback | Existing Resources
Mechanics | High | Existing Resources | No Changes
Thesis | Low | Other Instructor | Student Contact
3rd Person | High | Co-workers | —
Below Word Count | Low | Past Student Contact | —

Table 4.

Questionnaire Responses by Research Question for RQ2

Q5: Methods Used | Q6: Factors of Influence | Q7: Expectations and Practices Themes | Q8: Plans to Continue Themes
Text | Time/Efficiency | Time-Efficiency | Yes
Video | Familiarity | Time-Efficiency | Yes + may try
Both | Student Needs | Student Needs | Yes + efficiency
— | Co-workers | Co-workers | Yes + adding other tools

Table 5.

Questionnaire Responses by Research Question for RQ3


Q9: Methods Used | Q10: Factors of Influence | Q11: Trends Affecting Practices | Q12: Trends Applied to Broader Context
None | DQ’s | Instructor-added | Instructor-added
Instructor-added | Assignments | Feedback | Co-workers
Tech tools | Existing resources | Increasing engagement | Does not see
Feedback | Instructor-added | Phone | Tech tools
Co-workers | Engaging in forum | — | —
Student needs | — | — | —

The questionnaire revealed common themes based on instructor concerns.

Participants were given instructions to complete the questionnaire while grading


assignments of 300 words or more in order to capture a robust spectrum of feedback

themes. The Level 1 question related to the first research question revealed that the most

prominent concern was development, followed by format, mechanics, thesis statements,

word count, and deviation from the third person narrative. Similar themes emerged from

text and video responses. The instructors who graded using video comments expressed

concerns that eventually became themes which included: Development; Mechanics;

Formatting; Thesis Statements; Word Count; Third Person Narrative.

Table 6.

Questionnaire: Response to Paper Issues Total

Theme | Total per Modality (Text/Video)
Development | 34/6
Formatting | 27/4
Mechanics | 15/3
Thesis | 7/2
3rd Person | 3/2
Word Count | 4/2

The Level 2 question was designed to capture faculty reflections on the

frequency of the paper’s concern. There was a relationship between the paper concerns

cited and the frequency of their concern; that is, if faculty perceived development as a

concern with the paper, they also reported that this was common across student papers.

Level 3 and 4 questions showed that faculty addressed these concerns most often by

creating their own resources and that they would continue with this practice. Both video

and text participants reported differences in how they responded to these concerns and

how they would respond in the future. More details will be examined later in this

chapter.
The second research question was addressed in the next four questionnaire

responses (questions 5 through 8). Addressing the Level 1 question, two of the 12

participants stated that they used video feedback on the longer assignments and the

remaining 10 used text feedback. Text participants’ responses indicated that time and

efficiency were the most important factors for their selection, while video participants

first cited student needs, followed by responses indicating time or efficiency. When

responding to the Level 3 question, those who gave feedback through text cited student

need as the third consideration for this choice, indicating stronger preferences for using

text delivery based on familiarity and time management. This differs from the two

participants who provided video feedback, as they cited student needs first, followed by

time and efficiency as their secondary factor. Both video and text groups indicated they

would continue their practices of either text or video. These responses and their

significance to the study will be examined in greater detail later in this chapter.

The last set of questionnaire responses (9-12) addressed the third research

question. The first reflection level was designed to determine if faculty would change

their teaching practices based on their reflections while grading. For text participants,

“None” was the most common response, followed by instructor-added resources. Video

participants indicated that they would consider adding additional technical tools to other

course areas. Those giving text feedback also stated they might try technology in other

areas, but this was the third most frequent response for them. The Level 2 question was

about how the paper concern might have connected to a classroom element. The

instructors who provided video feedback reported that their own feedback covered the
concerns while instructors who gave text feedback noted that the student concern was

more often covered in class discussions, assignments, or resources already in the class.

The researcher examined classroom papers as the second data source to address

the research questions. For each class, assignments requiring 300 or more words were

examined. Papers of this length were selected because they generally yield more robust

feedback and instructors opted to provide feedback directly on the paper, which resulted

in a greater range of feedback comments and communication. All classes examined were

seven weeks long with two to four assignments that met this 300-word assignment

requirement. For each qualifying assignment within the class, four papers were

examined. The table below shows the number of qualifying papers by participant.

Table 7.

Papers Examined for Feedback


Class | Number of Participants (12 Total) | Number of Qualifying Assignments | Papers Examined
Philosophy | 4 | 3 | 48
Religion | 1 | 4 | 16
Critical Thinking | 5 | 2 | 40
Intro to Ed | 2 | 3 | 24

In total, 128 assignment artifacts were examined (32 qualifying assignments across all participants, with four papers examined for each). Text feedback was given either

through a comment box provided by the learning management system, or attached with

embedded or sidebar comments utilizing Microsoft Word’s Track Changes feature. The

host institution’s learning management system will not process a grade unless instructors

provide an accompanying remark in this comment box that is part of the gradebook;

however, many instructors opted to put in a prepared statement that is copied and pasted,

while individual and formative feedback was provided by comments on the attached
student papers. Thus, for this study, the feedback analysis focused on sidebar comments

on attached student papers.

The process of collecting classroom data was thorough and systematic. First,

four classes from each participant were chosen. Within these four classes, the researcher

observed four papers from qualifying assignments. Qualifying assignments were those

with papers 300 or more words in length. For most classes, between two to four

assignments qualified for this analysis.

The data were managed by creating one consolidated document for each

participant. Originally, there was one Word document for each class examined, resulting

in four documents per participant. On average, these documents were seven pages in

length, before they were consolidated. Reviewed assignments ranged from top scoring

to failing papers. Managing the data included consolidating instructor feedback from four separate documents into one. This

process necessitated establishing an “anchor” paper. Using four computer monitors, the researcher captured and tallied common comments from the papers and consolidated them on the

anchor paper. This process allowed the data from each instructor to be captured on a

single document. This process also revealed which comments were repeated.
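The tallying step in this consolidation lends itself to a programmatic illustration. The following is a minimal sketch only, not the study’s actual procedure; it assumes hypothetical file names and one instructor comment per line of a plain-text export:

    from collections import Counter

    # Hypothetical file names: one plain-text export of sidebar comments per
    # class, with each instructor comment on its own line.
    class_files = ["class1_comments.txt", "class2_comments.txt",
                   "class3_comments.txt", "class4_comments.txt"]

    comment_counts = Counter()
    for path in class_files:
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                comment = line.strip().lower()
                if comment:
                    comment_counts[comment] += 1

    # Comments appearing more than once are candidates for the consolidated
    # "anchor" document, listed from most to least frequent.
    for comment, count in comment_counts.most_common():
        if count > 1:
            print(f"{count}x: {comment}")

A tally of this kind surfaces repeated remarks but cannot judge context, which is why the consolidation in this study was performed by hand.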

Video feedback data were also reviewed and analyzed. Two participants

provided video feedback on their student papers. For the purposes of this study, video

feedback refers to the instructor’s voice narrative and a live view of the instructor using

the cursor and track changes feature to either edit, teach, or comment on the student

work. Several web tools offer methods for faculty to give feedback. One tool is Screen-

cast-o-matic, which allows the student paper to be captured on the screen, while the
instructor delivers narrative feedback. This tool allows the student to view the instructor

performing live editing through video while listening to the instructor narrative. Loom, a

newer technology, offers the additional feature of a small video window of the

instructor during the narration. However, in this study, the instructors who selected the

video grading option did not utilize the video window. The researcher reviewed and

manually transcribed the feedback from these papers.

The data from the video feedback were transposed to text for comparison

purposes. The video feedback was manually transcribed, so that the feedback and

instructor remarks could be examined in the proper context. The data from reviewing

four student papers from three different assignments yielded approximately 28 pages per

instructor, but the page amount was not a valid measure of volume, since feedback was

pasted line by line to assist with the analysis. In addition, transcribed audio feedback

was single-spaced from the manual transcription, so its page length was not comparable to that of text feedback. Therefore, the average instructor word count per assignment

emerged as a meaningful way of collecting data. These data show the participants, their

methods, and courses taught.


Table 8.

Participants, Methods, and Courses


Participant Number | Method | Course
1* | Text | Philosophy
2 | Text | Religion
4* | Text | Critical Thinking
5 | Text | Critical Thinking
6* | Text | Philosophy
7 | Text | Critical Thinking
8 | Text | Philosophy
9 | Video | Critical Thinking
10 | Text | Intro to Ed
11* | Text | Intro to Ed
12* | Text | Critical Thinking
13 | Video | Philosophy
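The word-count measure itself is straightforward to compute. As a minimal sketch, assuming an instructor’s transcribed remarks are grouped by assignment (the variable names and remarks below are hypothetical, and the study’s counting was performed manually):

    # Hypothetical transcripts of one instructor's feedback, grouped by
    # assignment; each string holds the remarks left on one student paper.
    feedback_by_assignment = {
        "Assignment 1": ["Strong thesis, but develop the second paragraph further.",
                         "Check the spacing on your reference page."],
        "Assignment 2": ["Avoid second-person address in academic writing.",
                         "Cite a credible source to support this claim."],
    }

    for assignment, remarks in feedback_by_assignment.items():
        total_words = sum(len(remark.split()) for remark in remarks)
        average = total_words / len(remarks)
        print(f"{assignment}: {average:.1f} words of feedback per paper")

Averaging by word count, rather than by page, allows the single-spaced video transcriptions and the text feedback to be compared on equal terms.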

A focus group formed the third data source. This focus group was held on March

27, 2018, at 3:00 p.m. Participants were sent invitations through Microsoft Outlook’s

calendar feature. The session was conducted through Zoom, which allowed remote

attendance and video recording for future transcription. The discussion began at 3:00

and concluded at 3:48. The transcript of the session was 3424 words. The original

recruitment letter requested participation in the focus group; therefore, all participants

were invited to attend, but one could not attend, and one had to leave early. Participants

were assured that their involvement was confidential and that their contributions would

be shared only with the research committee. However, as the participants work together

and use Zoom in other capacities, there was no assurance that they would not recognize

each other’s voices or logins. Participants were also asked permission to record the

session and all verbally agreed. Since these participants work together as faculty, they

were already acquainted with each other before the session. However, their anonymity

was maintained through the presentation of descriptive data with no
names associated with the results. The session was recorded on Zoom and transcribed

for analysis purposes and for the triangulation of the data (Yin, 2014). The results will

be presented later in this chapter.

Through a PowerPoint presentation, the focus group was presented with the two

sources of data, which were the questionnaire and feedback themes. The feedback

themes were analyzed in two ways: according to classroom paper issues and according to Hattie and Timperley’s (2007) four levels of feedback. The participants

generally agreed with the first data set from the questionnaires that showed the six themes of paper

concern. Three participants expressed disagreement when viewing the results, stating

surprise that instructor concern about the thesis statement was of low frequency. However,

another participant pointed out that not all classes emphasize the thesis statement and

several participants expressed agreement. The participants agreed with the remaining

two questionnaire responses that addressed the first research question.

The focus group participants confirmed most findings from the questionnaire

responses. Participants agreed with the data results for the second and third set of

questionnaire responses. These responses were aligned with the second and third

research questions, respectively. For the second research question, nine themes were

presented over four different questionnaire responses, and for the third research

question, 12 themes were presented on how the practices inform instruction. All

participants confirmed the findings and did not disagree with these results.

The last data presented to the focus group were the classroom data that were

analyzed according to Hattie and Timperley’s (2007) levels. The researcher explained

that the feedback was analyzed for both the paper issues and for how the feedback
corresponded with Hattie and Timperley’s (2007) levels. The color-coded display on two bar charts provided a visual, and the theme colors were explained (see Table 15). The

participants first observed that the theme of critical thinking was present only in the text

feedback. Another participant noticed that formatting seemed to be more prominent in

the video feedback. Some participants then offered thoughts about why formatting might

have more prominence in video rather than in text feedback. These implications and

findings will be discussed in greater detail in chapter 5.

Data Analysis Procedures

This qualitative exploratory single case study examined instructor

feedback reflections. Qualitative analysis can be difficult because its techniques are not

well defined (Yin, 2014). Nevertheless, qualitative research is valuable in its ability to

provide a rich means of capturing the complexities of human behavior and descriptive

data can provide a deeper understanding of the phenomenon (Vaughn & Turner, 2016).

Moreover, qualitative research analysis can exhibit high quality if the researcher attends

to all the evidence, addresses rival interpretations and the most significant aspects of the

case, and uses expert knowledge from the researcher’s prior experience (Yin, 2014). The

following data presented demonstrates these marks of quality through a careful and

systematic thematic analysis.

The thematic analysis was the best procedure for analyzing this study’s data. Yin

(2014) posited that analyzing case study evidence must demonstrate rigor and a

systematic description of how the codes and themes were developed for each research

question. Thematic analysis has been defined as the search for emerging themes that

become important to the phenomenon’s description (Fereday & Muir-Cochrane, 2006).


In thematic analysis, raw data are eventually re-categorized into themes that most

appropriately capture the social phenomenon (Fereday & Muir-Cochrane, 2006).

For a qualitative case study, there were numerous advantages to a thematic

analysis procedure. According to Clarke and Braun (2013), thematic analysis offers a

“theoretical flexibility” (p. 120) that is advantageous to connecting the data to the theory

associated with the phenomenon. This flexibility allows the researcher to focus on

examining patterns without a strict adherence to pre-determined vocabulary, theory, or

framework (Clarke & Braun, 2013). Thematic analysis also allows the researcher to

effectively examine different types of data (Clarke & Braun, 2013). Finally, thematic

analysis is a viable procedure for what Clarke and Braun (2013) called the “messy

realities” (p. 122) of qualitative data. For example, the questionnaire instructions stated that participants should respond to four different papers for each question. Some

participants provided four different responses for each question, while others responded

to only one, or copied and pasted their responses. Thus, among the 12 participants, the

number of different responses varied for each question.

Within the thematic analysis framework, a hybrid inductive and deductive coding process was employed. This approach allowed codes obtained from the data to be integrated with an existing theoretical framework (Fereday & Muir-Cochrane, 2006).

Some of the study’s themes were established a priori, meaning that existing theory pre-

determined the themes. For example, to address the second research question, a template

based on Hattie and Timperley’s (2007) feedback levels served to predefine the themes,

which allowed the data to integrate with theory (Fereday & Muir-Cochrane, 2006). This

process, consistent within thematic analysis, provided a meaningful framework for


analyzing data in accordance with the study’s theories and will be discussed in greater

detail later in the chapter.
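To make the a priori step concrete, the sketch below shows one way a feedback comment could be matched against a predefined template of Hattie and Timperley’s (2007) four feedback levels (task, process, self-regulation, and self). The keyword cues are invented for illustration only; the study’s coding relied on manual, context-sensitive judgment rather than automated matching:

    # A priori template: Hattie and Timperley's (2007) four feedback levels
    # paired with invented keyword cues (illustrative only; the study's
    # coding was done by hand, in context).
    a_priori_levels = {
        "task": ["word count", "citation", "incorrect"],
        "process": ["develop", "evidence", "strategy"],
        "self-regulation": ["check your", "review your", "next time"],
        "self": ["great job", "well done"],
    }

    def code_comment(comment: str) -> str:
        """Return the first a priori level whose cue appears in the comment."""
        text = comment.lower()
        for level, cues in a_priori_levels.items():
            if any(cue in text for cue in cues):
                return level
        return "uncoded"  # left for inductive, data-driven coding

    print(code_comment("Review your evidence before submitting."))
    # Prints "process": "evidence" is the first cue matched.

A sketch like this also makes the limitation plain: a comment can cue several levels at once, which is why the actual coding weighed each remark in context rather than stopping at the first match.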

The next section will discuss the coding process and how themes were

developed. Clarke and Braun (2013) identified six stages for effective thematic analysis: becoming familiar with the data, coding, searching for themes, reviewing themes, defining and naming themes, and writing up. For each research question, the

sections below will discuss the data analysis within this framework.

Research question 1: Familiarization with the data. The first research

question was: How do online undergraduate full-time faculty perceive the influence of

their instructional feedback practices on their own reflective thinking? The first data

source that supported this question was the group of responses from the faculty

questionnaire’s first four questions. Based on Ryan and Ryan’s (2013) reflective model,

faculty participants were asked four questions pertaining to each research question.

These four questions were structured according to a spectrum of reflective levels. Prior to its use, this questionnaire was

validated by three faculty members with three or more years of full-time teaching

experience who provided feedback on the initial questionnaire (see Appendix D). The

input from these faculty members served to clarify the instructions on the questionnaire and refine the instrument.

Prior to determining codes, the researcher spent time becoming familiar with the

data. Clarke and Braun (2013) note that this critical process is often skipped in the

analysis of data. Rather than examining the questionnaires one at a time, the initial

familiarization began with viewing all questionnaires. This process was consistent with
Yin’s (2014) “ground up” strategy of poring through the data as a way to begin to

identify patterns. Because the questionnaire was open-ended, responses varied in length

and complexity, so the awareness of the variety of responses surfaced through this

familiarization process. Participants were asked to review four papers while responding

to the questionnaire, and this yielded robust data from participants.

Faculty participants were first asked about the concerns with the paper.

Becoming familiar with this data set showed that the open-ended nature resulted in a

variety of responses, from short lists to entire paragraphs. The examples here show the

different styles of responses regarding paper concerns. Participant 6 stated, “Citing,

grammar, content.” While participant 2 stated:

Overall content was very good, but there was minimal scholarly support. Only

used two of the required four sources; no library research was done. Consistently

used masculine pronouns/words (e.g., man, mankind) instead of gender inclusive

words as previous coached multiple times. Many missing apostrophes to show

possession. The errors in writing mechanics are unusual for this student – the

paper was late and it seems he was hurried. (Participant 2)

This initial examination of all responses, rather than examining each response

individually, showed that a kind of coding system was necessary as a way to capture

common words and to extract main ideas that would eventually create themes. The

actual coding process following this stage of familiarization will be discussed later in

this chapter.
The opportunity for open-ended responses showed similar variations in the

responses to the second question, as shown in these examples. Participant 7 stated, “Not

common usually 1-2 papers.” While participant 1 stated:

Common. The primary goal of the course is to have students working to apply

what they have learned regarding critical thinking skills into an objective,

persuasive essay in which they work to persuade others toward the most logical

stance on a debated topic/argument. They must also use their information literacy

skills learned in (Class Name) to help support their statements within the essay. (Participant 1)

In reviewing responses to this second question, the familiarization process demonstrated

that although the responses varied in length, only one word needed to be identified as

part of the coding process. This review of the questionnaire responses served to gauge the

variety in length and complexity of the instructor replies, and so served to prepare for

the more formal data analysis process.

The third and fourth questions from this instrument rounded out the alignment to

the first research question. Because participants were asked to review four papers for

each question, some prepared duplicate responses for the third and fourth questions,

indicating that an issue with one particular paper could generalize to other students.

Becoming familiar with this data showed that the third and fourth responses would share

similarities, so the same color coding system could be used to determine the themes.

A second data source needed to determine faculty reflective practices was

classroom paper feedback. Reviewing this feedback formed a necessary foundation for

identifying areas of emphases, interventions, and a way to determine faculty reflections.


Classroom papers served as a data source to address both the first and the second

research questions through a separate analysis process. The classroom papers addressed

the first research question as they were examined for thematic elements.

Familiarization requires an in-depth look at the material. Clarke and Braun

(2013) describe this as an immersion or an intimate familiarization of the data. To

achieve this familiarization process, the papers were examined several times, and the

video feedback was manually transcribed. This manual transcription provided a deeper

acquaintance with this data and created a sense of what was necessary for coding and

was the best way to manage that process. This preview also helped define important

boundaries. For example, it was clear that faculty provided feedback directly on papers, but also in the learning management system’s summary box; the familiarization

process led the researcher to determine that the sidebar feedback would be the focus of

the analysis as that is where the most formative feedback takes place.

The focus group was the third source of data for the study. The familiarization

process initially took place during the focus group through careful listening and

prompting during the session. During the focus group, when one participant spoke on a

certain subject, other participants were prompted to share on the same subject. Thus,

careful listening was the first step in becoming familiar with the data. Next, several

reviews of the transcript and different speaker contributions served to prepare for

identifying the themes.

Research question 1: Coding the data. After becoming familiar with the data

from the questionnaires, the coding process began. Questionnaire responses 1-4 aligned

with the first research question. The first question was about identifying the greatest
concerns with the papers. Using different colors in Microsoft Word’s highlighter feature,

the process included assigning colors to common elements in these responses. It was

important to see that instructors worded their concerns differently, while identifying the

shared core concern. The researcher selected pink to represent concerns about content or

development, yellow for anything related to APA formatting, blue for concerns with

word count, green for anything related to the thesis statement, grey for grammar and

mechanics and olive green for responses that identified voice or the improper

grammatical perspective. Below are two participant responses to demonstrate the initial

process of assigning color codes. Participant 3 stated, “Alignment of argument to

thesis statement, tone of essay, grammatical errors, citation/reference style.”

The first essay was quite strong. I had very little concerns as this was only the

first draft. I noted a couple of spots within the essay where the student could

strengthen their argument with the inclusion of an outside, credible resource

(Participant 1).

The remarks from the first participant were coded pink as they related to content

concerns, while the remarks from the third participant necessitated gray, green, and

yellow coding to address the three issues. As the questionnaire responses were

examined and color-coded, the parameters of the codes came into focus and paper

concerns were consolidated to establish broader categories resulting in six themes which

were examined separately according to whether participants used text or video (see Table 6).

Nearly every participant comment on this first questionnaire response fit into a

code which was associated with a theme; however, there were some outliers. First,
although participants were asked about concerns with the paper, some instructors

nevertheless provided responses that indicated paper strengths. For example, participant

2 stated, “For this student, the essay was a significant improvement in quality over

previous assignment submissions.” This participant seemed to want to frame the

response to encompass multiple perspectives on the paper. Comments like these were

infrequent and were not included in the coding system.

Another important outlier was a concern with the similarity index. Most higher

education institutions utilize a software or service that checks for plagiarism. A

similarity index refers to a comparison of the student paper to existing publications,

allowing a screening for potential plagiarism. Although a student’s similarity index is a

legitimate concern on a student paper, per the university policy, the instructor would

treat such an issue separately and thus, it is considered separate from other elements

involving feedback. The table below shows how key words formed codes that eventually

became themes.
Table 9.

Paper Concerns from Questionnaire responses: Question 1

Codes | Themes (Listed in order of frequency, most to least)
Support; Development; Evidence; Data; Reasoning; Organization; Topic | Development – comments related to evidence or paragraph development
Spacing; Quotes; Paraphrasing; Contractions; Hyperlinks; Citations; Reference page; Page; Author’s name; Order of references | Formatting – comments related to maintaining required style and paper formatting
Wording; Review; Punctuation; Editing; Capitalization; Confusing/Unclear | Mechanics – comments regarding punctuation, usage, and grammar
Word count; Under/Falls below; Over | Word Count – comments about the paper word count requirement, being below or too high
Narrative; 1st, 2nd, 3rd person; I, you, we, us, they, our; Direct address; Audience questions | Third Person Narrative – comments about maintaining a scholarly grammatical perspective in third person narrative
Important; Future; Environment; Legalization (abortion, marijuana) | Critical Thinking – comments for students that promote critical thinking relative to the course content; conversational notes that connect student work back to content
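A keyword-to-theme tally of this kind can also be illustrated briefly. The sketch below uses a deliberately abbreviated keyword map drawn from Table 9; crude substring matching of this sort would miss the rephrasings and context that the manual coding captured, so it is offered only as an illustration:

    # Abbreviated keyword-to-theme map based on Table 9 (illustrative, not
    # exhaustive; the study's coding was manual and context-sensitive).
    theme_keywords = {
        "Development": ["support", "evidence", "organization"],
        "Formatting": ["citation", "reference", "spacing"],
        "Mechanics": ["punctuation", "capitalization", "editing"],
        "Word Count": ["word count"],
        "Third Person Narrative": ["first person", "second person", "direct address"],
    }

    def tally_themes(responses):
        """Count how many responses touch each theme."""
        counts = {theme: 0 for theme in theme_keywords}
        for response in responses:
            text = response.lower()
            for theme, keywords in theme_keywords.items():
                if any(keyword in text for keyword in keywords):
                    counts[theme] += 1
        return counts

    sample = ["Missing citation and reference page.",
              "Add supporting evidence for this claim."]
    print(tally_themes(sample))
    # {'Development': 1, 'Formatting': 1, 'Mechanics': 0, 'Word Count': 0,
    #  'Third Person Narrative': 0}
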

The second question was designed to capture instructor perceptions of how frequently the concerns occurred, so responses were tallied and examined alongside the first question. Because this question captured perception only, responses like “Common” or “Frequent” were reframed as “High.”

The third question also supported the first research question. Responses gave

insight into what resources faculty use to alleviate the paper concerns. This question was

designed to capture faculty reflections about resources that might be used in their

teaching. To generate initial codes for this data set, the same colored highlighted

features and colors in Microsoft Word were used.

Participants sometimes provided the same response for four different papers,

which demonstrated their intent to show how a concern in one paper generalized to their

practices overall. Participants reported that several resources were used to assist

students. For example, Participant 4 wrote, “I refer students to the (host institution’s)

Style Guide and Purdue Owl, as well as various other resources within the classroom. I

also rely on feedback that I provided to them on their outline.” This participant cited a

combination of resources, including the host institution’s writing resource center as well

as other university resources, including the OWL at Purdue University, a web resource.

This participant also indicated that his own feedback from a previous assignment was a

salient resource for student improvement.

Other participants stated that they created their own class resources to assist

students, or that they contacted students as a method of remediation. For example,

Participant 7 responded with, “Created a video going over the expectations that is posted

in Week 1.” While Participant 13 stated, “Video Feedback through Loom, and 1:1

appointment with student. These responses indicated that instructors either created their

own resources, contacted students, relied on existing resources, or co-workers in order.


The researcher analyzed this data using a template which allowed the participant

responses categorized in columns with general headings that later became themes. In

total, there were 20 codes that were later reframed into themes.

The fourth question showed responses similar to the third question. This level 4

question prompted instructors to consider what practices they might change, given the

concerns. The same themes established in the third question responses surfaced, with

the additional reply of “No changes.” Due to the similarity in responses, the same

template was used to assist in categorizing the responses. Participants were asked to

address each question on four different papers. Therefore, for one response, participants

may have four responses that are different, four that were the same, or any other

combination between one and four. Some responses were also left blank. The table

below represents the number of times words within the designated categories were

mentioned for that particular question and their frequencies and the frequency that these

terms were mentioned.

One outlier was present among the responses to this question. Outlier responses

must be carefully considered in thematic analysis coding. Clarke and Braun (2013)

posited that themed data should tell a story; therefore, outlier responses may not

contribute to the consistency of this story. The outlier response, from Participant 5, stated, “Theories and best practices are not important to me.” Because this statement was framed negatively, its key words were not meaningful and therefore could not be part of the coding process.
Table 10.

RQ1 Themes from Questionnaire

RQ 1 | L1: What are the concerns with the paper? | L2: How common are these concerns? | L3: What resources will assist you? | L4: What practices might you change, given the concerns?
How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their own reflective thinking? | Development (41); Format (31); Mechanics (20); Thesis (8); 3rd person (6) | High; L (4), H (10); L (3), M (1), H (5); L (3), M (1), H (2); L (3), H (2); H (4) | Instructor-added (20); Refined FB (17); Existing resources (15); Co-workers (5); Past (5); Student contact (1) | Instructor-added (39); Existing resources (18); Student contact (3); No changes (8)

The classroom papers served as the second data source to address the first

research question. Moving from the familiarization of the data to the coding of the

second data source of classroom papers was similar to the process addressed in the first

questionnaire response, as both reviewed paper issues. The student concerns expressed

on the questionnaire also surfaced on the classroom papers examined and the codes were

matched with the themes. For example, instructors used words like “support, evidence,

data, reasoning,” which were associated with the paper’s development. An example of a

paper comment is below:

You had a good start to the essay, but this was more of a compare/contrast

analysis. Additionally, I do not see any concrete evidence that can substantiate

the argument. Remember, the goal was to present a rational and logical argument

based on evidence (Participant 6).

Since the color-coding schema was already established with the questionnaire, this

comment was marked in pink, since it referred to the student’s development. Another
comment from this participant references the thesis, so it was marked in green. It stated,

“Instead of pros/cons, this paragraph should be an argument on one of the reasons

presented in the thesis statement and validated with concrete evidence.” This paper

feedback was manually reviewed and coded. Although there is software designed to

assist in a coding process, the researcher opted to manually examine all paper feedback.

In feedback, instructors used various nomenclatures that eventually pointed to these

categories, and it was necessary to examine all words and phrases within the context.

A careful interpretation and understanding were needed to properly code and

categorize the phrases that would eventually become themes. For example, the following

comments were coded the same: “Avoid using ‘you’ in the essay” (Participant 6); “In an

APA essay you should always write in the third person. This means using he, she, they”

(Participant 6); and “Using ‘I’ makes the content less credible and using ‘you’ appears as

though you are assuming elements about the reader” (Participant 4). Although these

comments are stated differently, they all reference the same writing issue of maintaining

the academic third person voice in writing, which was identified by a tan color. This

need for individual interpretation led to the researcher’s decision to avoid software that

may have helped with some mechanical processes, but may have missed some nuances

or rephrased comments that shared common characteristics.

The coding process revealed a category of feedback that was unique to text

feedback. Some paper comments went beyond the student’s work and served as dialogue

between the learner and the instructor. Although general praise was attached to this type of

comment, its primary purpose was to promote thinking on the part of the student,

distinct from any specific element of the student’s paper. For example, “Nice! Evaluation is
an important part of critical thinking. This is what keeps us from assuming something is

true and forces us to consider all angles” (Participant 1). These comments were

identified using red. Unlike the other paper themes, these comments did not represent

paper concerns and were a theme that did not surface on the questionnaire. Participants

using text feedback used these comments as dialogue with the capacity to promote

critical thinking.

Coding for audio participants required careful consideration. The unscripted

nature of the feedback resulted in paper concerns that were repeated as a result of the

speaker’s narrative style; therefore, using software that might have tabulated key words

would not have sufficiently captured the nuance of the feedback. In audio feedback,

some of the narrative was superfluous, so for those phrases, the strike-through function

was used.

Um you do a better job in the actual paragraphs explaining it, so you wanna

make sure that you’re clearly illustrating your thesis statement. Okay? And then

finally I come into your conclusion, are you bringing everything together, are

you tying things together, right? Um, you don’t want these two words here. And

you know, you’re bringing it together here and you know, it’s, it’s solid.

References are not formatted properly. References need to be double-spaced so

you can go into here and then this, so it should look like that, okay? So,

realistically, the biggest thing you have to work on is that thesis, you’re gonna

have to develop a clear thesis that ties directly to those supporting paragraphs

(Participant 9)
This passage shows that during the coding process, some of the narrative did not warrant

a code, so it was crossed out, but still visible. Colors were also used to identify the a

priori themes that were established from the questionnaire.
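
As a rough illustration of this strike-through step, the Python sketch below marks common filler phrases in a transcribed narrative so that only codable content remains prominent. The filler list and function name are assumptions for demonstration; in the study, this step was performed by hand.

import re

# Assumed filler patterns; the researcher identified superfluous narrative by hand.
FILLERS = [r"\bum+\b", r"\byou know\b", r"\bokay\?", r"\bright\?"]

def mark_fillers(transcript):
    """Wrap filler phrases in ~tildes~ to mimic a visible strike-through."""
    marked = transcript
    for pattern in FILLERS:
        marked = re.sub(pattern, lambda m: "~" + m.group(0) + "~",
                        marked, flags=re.IGNORECASE)
    return marked

print(mark_fillers("Um you do a better job in the actual paragraphs, okay?"))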

For this first research question, the third data source reviewed was the focus

group transcript. In determining codes for the focus group, the transcripts were reviewed

for common language that identified or pointed to the data. The focus group participants

agreed with most of the data and themes presented; some results were affirmed while

others were challenged.

The first set of questionnaire responses was aligned to the first research

question. The participants were shown the themes about paper concerns. To facilitate

coding, the same color scheme that was already established for the paper issues was

used. When a participant made a comment about thesis statements, those comments on

the transcript were marked in green. At this point in the analysis, the researcher noted

that participants expressed divergent or slightly different views on the same theme, so

the objective was to first capture all comments expressed about the different themes,

then to examine them analytically.

Research question 1: Establishing themes. The themes from the first

questionnaire responses resulted from a careful and systematic process. The first stage of

moving from codes to themes was in reviewing the questionnaire responses about paper

concerns. The initial codes were condensed or consolidated. For example, terms such as

“APA style” and “Citing” were initial codes that later became the theme of

“Formatting.” Similarly, terms like “Illogical argument” and “Lack of Support” were

initial codes that were eventually assigned to the theme that was named “Development.”
The assigned colors assisted in seeing commonalities among the codes and finding a

broader category to name the theme. All codes were reviewed, consolidated, and

reframed into remaining themes. The finalized themes were: Development, Formatting,

Mechanics, Thesis Statement, Word Count, and 3rd Person Narrative.

The third and fourth responses on the questionnaire formed the second data set

that addressed this first research question. For these responses, codes were reviewed and

commonalities were consolidated into a theme. For example, both questionnaire

responses below were reframed into a theme called “Instructor Feedback”, such as, “I

will embed feedback in hopes the student will review and revise” (Participant 4) and

“Typeitin with prompts to encourage the students to go into more details in certain areas

to help them expand their writing” (Participant 10).

Typeitin is a software application that allows instructors to prepare comments and to assign each a

one-key code, allowing efficient copying and pasting of frequently used instructor

comments. Therefore, this participant indicated that the paper’s concerns may prompt

revisions of Typeitin comments. The researcher used a template to consolidate remarks

that eventually became codes, then themes.

Comments about “Instructor Resources” were listed in a column; this included

anything the instructor created for their course in an effort to remediate the student

writing concern. These resources could include documents, created websites,

announcements, or information posted in the class. Any remarks about “Class

Resources” were listed in a separate column and they were distinguished from the first

category, because they refer to resources that are built into the class. These might

include electronic textbooks, articles, or lectures posted. Remarks associated with


“Instructor Feedback” were listed in another column, meaning that instructors were relying on their own

feedback as a way to meet student needs. Any response associated with “Co-

workers” was also listed under its own column. Words like “peers, fellow instructors, other

teachers, colleagues” were included in this column and later consolidated into this

theme. Because the full-time online undergraduate faculty work in a collaborative

environment, this response may be more common among this population than

populations in other higher education institutions.

The themes derived from the paper feedback were created and named to reflect

balance in scope and a working neutral framework that would serve future study

replication (Yin, 2014). The words chosen for the themes reflect common criteria used

to evaluate writing in higher education. Rubrics generally contain these themes, but

often list “Organization” as evaluation criteria; however, the data for this study revealed

that any comments germane to organization were specifically tied to the thesis

statement, which is why “Thesis Statement” was selected as a theme for these data.

Similarly, “Word Count” could be considered part of the paper’s development,

but the data showed that instructors were specific in mentioning word count, frequently

enough to warrant its own theme. The participants seemed to separate development from

word count, as they would also comment on word count when it was excessive. The

theme of “3rd Person Narrative” could also arguably be considered in the categories of

grammar or voice, but participants frequently commented on this writing error and the

data were not sufficient to show other elements that could be captured in the voice of the

paper. Moreover, the category of “Mechanics” encompassed several writing elements,

including spelling, subject/verb agreement, and any punctuation or capitalization errors.


To balance the array of writing issues, “3rd Person” was considered and named a

distinct theme.

Another consideration in naming the themes was to intentionally identify them as

elements of writing, rather than problems with writing. Thus, the theme names were

intentionally neutral so that all comments could be captured and appropriately

categorized. The comments here demonstrate how both a positive and a negative

comment would fall under the same theme. Participant 1 stated, “This is a biased

statement without credible evidence” and Participant 9 wrote, “So when I go into your

supporting paragraphs, the first thing I’m looking for, do you have an outside source?

You do, so that’s good. Every supporting paragraph should have an outside source.”

These remarks demonstrate different tones of feedback. The first participant uses text to

provide a direct evaluation of where the content is lacking, while Participant 9 uses

video feedback, giving it a narrative quality; both address the development of the paper.

These six themes served two data sets: the responses to the first

questionnaire and the comments from the classroom papers.

Similarly, the data from the third and fourth questionnaire responses were named

as broad and recognizable categories that demonstrated how instructors might approach

student writing concerns. Themes were named broadly enough to reflect approaches that

instructors at other higher education institutions might take. For example, the daily

collaboration among the population of online undergraduate full-time faculty may have

led to sharing practices such as using Typeitin for feedback, but even though this

application was mentioned frequently, this was considered within the broader category

of improving or revising instructor feedback as a way to remediate student paper


concerns. Thus, the themes were named for study replication or a recognition beyond

this specific population.

Next, the focus group participants shared their perspectives on the findings.

Participants were presented data on a PowerPoint slide. They first viewed data from the

first questionnaire response, which showed themes of faculty concerns with papers. The

data themes were already established, so the participants were asked to confirm,

disagree, or to expand on the paper issue themes which were: Content, Formatting,

Mechanics, Thesis Statements, Third Person Narrative, and Word Count. Participants

agreed that these themes were common among undergraduate first year papers.

However, the second questionnaire responses showed how frequently the participants

saw these issues. The participants agreed with the first responses, but some were

surprised that the theme “Thesis Statements” had a “low” frequency response in the

second questionnaire responses. Some of the comments included: “I was a little

surprised just to see the perception of concern for thesis being so low listed there, just

because from my experience” (Participant 9), “I have the same kind of automatic – oh,

wow, low for thesis?” (Participant 1), and “I would agree with both of what they said as

you’d think that’d be a pretty big concern” (Participant 13).

Two participants offered possible justification for the response that stated thesis

statements were of “low” frequency of concern among instructors.

For the two essays in (religion class) where they write thesis statements, I try to

get them to practice their thesis statements in the discussion leading up to the

paper. And so we go back and forth and I fix them before they submit the papers.
So then they’re better in the actual essay. Because I try to head it off at the pass

to teach them before they do it (Participant 4)

I think a lot of it depends on the level you’re teaching at too, because I know

with some of the papers I chose, I teach (Introduction to Ed), so the thesis wasn’t

as important as teaching them formatting and developing the paper. So I think

that’s part of why there was a mix of ideas there too (Participant 10).

This focus group transcript as a third data source served to confirm the data

themes that aligned with the first research question. The discussion of the frequency of

concern of the thesis statement provided valuable insights into the significance of the

themes and how the paper themes may depend on the class.

Research question 1: Summary of coding and themes. Three data sources

were used to address this first research question. The first four questionnaire responses

served as data sets to address the first research question. Coding was achieved through

working with a template with broad topics listed at the top of the page, and adding

questionnaire responses to the most appropriate columns. The second questionnaire

responses were closed-ended, so to maintain consistency, these responses were reframed

to indicate whether the concerns were high, medium, or low. Finally, responses from

questions three and four were added to the template in the same fashion as the first

question.

The second data source that addressed this first research question was the review

of the classroom papers. These papers were reviewed for their classroom feedback. For

each participant, four classes were reviewed, with the feedback themes identified. These

feedback themes were similar to the concerns expressed from the first questionnaire
results about paper concerns. Video feedback was also reviewed, transcribed, and

disaggregated to show how the themes compared.

The third data source for the first research question was the focus group

transcript, which was manually transcribed. For the data related to this first research

question, the group confirmed the findings, but had questions about the thesis statement

and its mixed responses. The discussion that ensued focused on how different courses

emphasize different elements.

Research question 2: Familiarization with the data. The study’s second

research question was: What methods of feedback do online undergraduate full-time

faculty use at a southwestern higher education institution? The data sources that

addressed this question were the questionnaire responses 5-8 and the classroom papers

which were analyzed according to Hattie and Timperley’s (2007) levels of feedback.

The third source was the focus group.

The next stage of familiarization was reviewing responses 5-8 of

the faculty questionnaire. Some of these questions were closed-ended, so it was clear

that the remarks would need to be tallied; for the open-ended portions, attention to

synonyms was needed to make sure every response was counted. A cursory review also revealed that

because the last question in this series had two parts, the responses would need to be

divided, and capturing what followed the “yes” or “no” responses would be important.

Becoming familiar with the data was an essential first step towards the resulting

themes. For each participant, four papers from three to four assignments were examined,

resulting in approximately 128 pages of feedback to review. Thus, the volume of data

dictated how to approach the coding and how to manage the data. In addition, instructors
provided feedback in either text or video format. Reviewing the differences in these

platforms was another step in establishing the coding and theming process. For example,

an initial review demonstrated that the video feedback word count was much higher than

the average text feedback response. The familiarization showed that the video feedback

consisted of narrative text that was not necessarily part of the feedback. Table 11

shows the average word count among participants.

Table 11.

Average Participant Word Count by Assignment

Questionnaire Response    Method    Course              Average Word Count per Assignment
8 Text Philosophy 47
3 Text Religion 55
11* Text Intro to Ed 88
5 Text Critical Thinking 113
6* Text Philosophy 137
1* Text Philosophy 145
2 Text Religion 145
10 Text Intro to Ed 155
7 Text Critical Thinking 164
12* Text Critical Thinking 400
4* Text Critical Thinking 426
9 Video Critical Thinking 952
13 Video Philosophy 2473

The length of this video text meant that coding required attention to the paper issues that

were part of the narrative. An excerpt below demonstrates the volume of words used by

participants using video feedback.

Extracurriculars, you’d wanna, activities, um, you want it to be activities such as

band and sports but you wanna make sure you’re saying that they’re getting

eliminated or reduced, not just saying that, that they’re just there. You’re gonna

want to make this a little more clear, so realistically, the thesis is gonna be a big

area that you will want to work on just to ensure that it’s very clear, so I can see
you have sub-topics there, but as written, it’s not nearly as clear as it could be

(Participant 9).

Although some specific paper elements are mentioned, there is superfluous narrative that

would not qualify as part of the established coding. This data familiarization aided in

establishing the framework for the next step.
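
The word counts in Table 11 reflect simple arithmetic: total feedback words per assignment, averaged across the assignments reviewed for each participant. A minimal sketch of that computation appears below; the data structure and sample comments are hypothetical.

from statistics import mean

def average_word_count(feedback_by_assignment):
    """Average the total feedback words per assignment for one participant."""
    totals = [sum(len(comment.split()) for comment in comments)
              for comments in feedback_by_assignment.values()]
    return mean(totals)

# Hypothetical excerpt of one participant's comments, keyed by assignment.
participant = {
    "Essay 1": ["Nice job with your thesis statement.",
                "Use the double-space feature in Microsoft Word."],
    "Essay 2": ["References are not formatted properly."],
}
print(round(average_word_count(participant)))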

To address the focus group data for the second research question, the recorded

session was reviewed. The participants agreed with the methods used, the factors that

influenced those methods, the expectations, practices, and the responses about the plans

to continue. The researcher told the participants that these consolidated responses were

listed from most to least frequent. Because these themes were already established from

the questionnaire, the familiarization process had already been completed.

Research question 2: Coding the data. A second data set that addressed this

research question came from questionnaire responses 5-8. The first question was:

“Do you use audio, video, or text to deliver feedback?” Because this was a closed-ended

response, there was no coding necessary, but a computation of the responses showed

that two out of the 12 participants used video applications for their feedback. These

participants used a program called Loom, which allows both video and audio platforms.

Both instructors who used video captured the student’s paper in the video window and

used their cursor and typing to annotate the paper while narrating the feedback. Loom

offers users the opportunity for a small video window as the instructor narrates the

feedback, but in this study, neither participant utilized that particular feature. However,

both audio and video functions were used, as the students could view the active cursor

and annotations while listening to the narrative. Thus, this practice is considered video,
because the student’s paper was captured in the student’s view, with editing motions to

accompany the narrative.

The next question was: “What factors influence that decision?” The codes were

captured by distilling the main ideas from these responses. Most responses were brief, so

this task was manageable. For example, one participant said:

I have found that students prefer Loom videos and I have received extremely

positive remarks about this form of feedback. In addition it saves me time over

typing the feedback and I am able to provide more feedback that makes sense to

the student (Participant 9).

The key words from this response indicated the factors of influence were “Student

Preferences,” which resulted in the theme “Student Needs.” Since this participant

mentioned saving time, responses that were similar were consolidated into the theme of

“Time/Efficiency.”

The third question aligned with RQ2 was viewed by participants as either similar

or identical to the second question. The responses were similar enough to compress and

to treat these as one theme. Vaughn and Turner (2016) discuss how merging can help

analyze common data. Table 12 shows these similarities and the revised coding that

followed.
Table 12.

RQ 2 Questionnaire response themes 5-8


RQ2: What methods of feedback do undergraduate full-time faculty use at a southwestern higher education institution?

L1: Do you use audio, video, or text to deliver feedback?
    Text (36)

L2 & L3: What factors influence that decision? Why did you choose these methods?
    Time/Efficient (24); Familiarity (16); Student needs (12); Co-workers (4)

L4: Do you plan to continue with this method? What are other ways you might use this method?
    Yes (18); Yes + may try (16); Yes + Efficiency (5); Yes + adding (3)

The last questionnaire response aligned to this research question was about

whether participants would continue with their method or if they might implement new

methods. The first part of this question yielded closed-ended responses that did not

require special coding. However, the second part of this question necessitated showing

how participants’ initial response may have been shaped by the latter part of the

question. Therefore, there was not a clear-cut “yes” or “no” for some participants. Most

wanted to elaborate on their affirmative response, or to express conditions that might

prompt change. Responses from all participants were reviewed and displayed to

show the breadth of the responses. For example, one code was “Yes, with text” meaning

that this participant intended to remain with text feedback and did not plan to implement

video feedback.

The paper feedback was the source of data that addressed this second research

question. However, this analysis involved examining the remarks according to a

theoretical framework established by Hattie and Timperley (2007). For this phase of the

study, the themes were determined a priori, meaning that the data integrated with the

pre-determined themes that were linked to theory (Fereday & Muir-Cochrane, 2006).
The first set of questionnaire responses aligned with Research Question 1 and those data

set themes were: Development, Formatting, Thesis statements, Mechanics, Word Count,

and 3rd Person Narrative. These themes were maintained, but were then attributed to one

of the four levels from Hattie and Timperley’s (2007) framework.

The paper feedback analysis also assisted in linking the data to the theory of this

study. The study’s feedback was examined using Vygotsky’s (1978) Zone of Proximal

Development as the framework of formative feedback. Thus, the goal was to categorize

the feedback data to determine if, by instructors reflecting on their own feedback, if

Vygotsky’s ZPD was operationalized through instructor feedback. Vygotsky (1978)

posited that human developmental processes are not always aligned with learning, and

that instruction must be structured to challenge the student into that proximal zone,

where learning takes place.

In the online classroom, paper feedback does serve the role of instruction. Sadler

(1989) posited that formative assessment takes place when feedback on student work

improves and accelerates learning. Viewing “instruction” as movement to the ZPD,

instructor feedback must meet several criteria. Nicol and Macfarlane-Dick (2006) point

to self-regulation as the evidence of learning. This paradigm of learning promotes the

learners’ active self-monitoring in different learning processes, including using

resources, understanding goals, and acting on external feedback (Nicol & Macfarlane-

Dick, 2006).

Thus, the goal was to examine the instructor feedback to determine if these

theories were operationalized. The selected theoretical framework was Hattie and

Timperley’s (2007) four levels of feedback. Both the text and video feedback were
examined and categorized according to these four levels, which are: Feedback associated

with the task (FT); Feedback associated with

the process (FP); Feedback associated with regulation (FR); and Feedback associated

with self (FS). These levels served as themes for the second research question and the

codes were determined by the sentence structure and usage of the comments.

Exclamations and adjectives indicated FT; command verbs were associated with FP;

declarative statements or content-area questions were FR; first and second pronoun use

correlated with FS.

Table 13.

Paper Feedback According to Hattie & Timperley’s Levels

Research Question: RQ2: What methods of feedback do online undergraduate full-time faculty use at a southwestern higher education institution?

Codes: Exclamations and adjectives, such as “Good,” “Great,” “Well done!”
Theme: FT – Feedback associated with the task that is limited to a direct evaluation. For example, “Good job.”

Codes: Command verbs, such as “Consider,” “Add,” “Revise,” “Remove,” “Give”
Theme: FP – Feedback that gives specific corrections or directions. For example, “Add a comma here.”

Codes: Declarative statements or questions specific to content or area where revision is needed.
Theme: FR – Feedback designed to give students independent revisions and to promote self-regulation. For example, “There should only be one long quote per paper.”

Codes: First and second person pronouns.
Theme: FS – Feedback related to student character or instructor relations. For example, “I am proud of you!”
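
The syntax cues in Table 13 lend themselves to a rough, rule-based classifier. The Python sketch below is a simplifying assumption, not the study's procedure: the actual categorization was manual and weighed each comment's context, which no keyword rule fully captures.

import re

# Assumed surface cues drawn from Table 13; the rule ordering is a simplification.
COMMAND_START = r"^(consider|add|revise|remove|give|use|avoid)\b"
PRAISE_START = r"^(good|great|nice|well done|excellent)\b"
SELF_PRONOUNS = r"\b(i|me|you|your)\b"

def classify(comment):
    """Assign one of Hattie and Timperley's (2007) levels to a comment."""
    text = comment.strip().lower()
    if re.match(COMMAND_START, text):
        return "FP"  # command verbs direct the student's process
    if re.match(PRAISE_START, text):
        return "FT"  # exclamations and adjectives appraise the task
    if re.search(SELF_PRONOUNS, text):
        return "FS"  # first- and second-person pronouns address the self
    return "FR"      # declaratives and questions prompt self-regulation

for c in ["Good job.", "Add a comma here.",
          "There should only be one long quote per paper.",
          "I am proud of you!"]:
    print(c, "->", classify(c))

Even so, as the discussion below notes, the same words could warrant different levels depending on where the comment appeared.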

These syntax patterns served as a guide to ensure that all comments were

categorized into one of Hattie and Timperley’s (2007) levels. It was also necessary to

review all feedback within its context. For example, if an instructor said, “I’m proud of

you” in the gradebook box, that comment was not directly tied to a specific paper

element and would be considered FS; however, if this same comment was presented in
the sidebar and was associated with the thesis statement, then that indicated that the

comment’s purpose was to reinforce a classroom discussion or suggestions from a

previous assignment. Thus, this same comment within this context would be considered

FT, as it is more of an evaluation on a specific element of writing. The a priori themes

integrated the learning theories with the data.

Research question 2: Establishing themes. For this research question, the

themes were established a priori according to the four levels established through Hattie

and Timperley’s (2007) framework. The questionnaire responses that aligned with this

research question were narrow in their content and some were closed-ended, so the key

words became the themes. The themes will be discussed in greater detail in the

following section.

The first questionnaire response themes were “Text, Video, or All.” The

participants who responded that they used video feedback stated that they also used text

feedback. There are two reasons these participants would respond saying they used text

in addition to video. First, an instructor cannot submit a grade without an accompanying

comment; this is by design of the learning management system. So, while these

participants provided a video link to students that opened up their audio and video

feedback, these instructors also provided general comments in this learning management

system’s grading box. These participants also stated that they do not use video feedback

on all assignments, but reserve it for longer ones. This study was bounded by

assignments of 300 words or more in length, in order to capture more complex feedback

and different instructor methods, so only their video feedback was reviewed.
Text feedback can be given through the grading comment box or on the attached

papers. On student papers, instructors may embed text through the Track Changes

feature in Microsoft Word, through sidebar comments, or through adding text at the top

or the bottom of the page, or a combination of these. A review of the data revealed that

how instructors treat these different areas of feedback is not consistent; that is, some

provide general comments that are not assignment-specific in the grading comment box,

while others may provide this kind of feedback at the top of the page. Still others may

provide these general comments through the sidebar function. Thus, all sidebar feedback

data were consolidated, regardless of whether it seemed to be formative or summative.

The theme of “Text Feedback” refers to any written form of feedback, which is

distinguished from a video method.

The themes addressed in factors of influence are: Time/Efficiency, Familiarity,

Student Needs, and Co-workers, as these were the most frequently cited as the factors

that contributed to the participant’s choice of feedback delivery. Online instructors are

indeed faced with grading volumes that prompt attention to efficiencies. Institutions are

focused on providing personalized feedback in the midst of the increasing enrollment of

online learners (Planar & Moya, 2016). This study’s particular population teaches a

full-time load of online courses; included in this population are tiered instructors,

managing volumes of up to 150 students at a time. Because time is regarded as an

influence on their delivery method, these participants may be selecting the method based

on its efficiency.

Participants who used text feedback as well as those who used video feedback

both reported time as an influencing factor. This suggests that there is not one method
that improves efficiencies for grading; however, it may suggest that certain methods

could improve efficiencies for some instructors. Borup et al. (2015) and Wright (2014)

stated that in addition to increasing personalization, newer technologies may serve

to manage volumes of grading. There were participants in this study who responded with

time as an influencing factor of using text feedback, but also stated that they would not

try video feedback. For example, when answering what kind of feedback was used, this

participant said, “I currently deliver my initial feedback via text, specifically through the

use of embedded comments. I then remind students that they can call me to go over their

feedback together” (Participant 4).

When asked what factors influenced that decision, this participant said:

I model my feedback off of what I have received in my master’s program, along

with what other faculty have shared with me. Embedded feedback is one of the

quickest ways to get valuable and specific feedback to students that they can

access regardless of their level of competency with technology (Participant 4).

The second theme that emerged from this set of questionnaire responses was

“Familiarity,” which meant that participant feedback options were influenced by their

own and their institutional conventions. Since video is a newer way to provide feedback,

the responses to this question reflected participants’ comfort levels with different

methods. One participant stated, “This is what I was told to do per university policy”

(Participant 4).

The third questionnaire response that was aligned with research question 2 also

showed results in the theme of “Familiarity.” This question was closely tied with the

previous one, prompting some participants to provide a duplicate response. However,


this question also captured new responses, including “Self-challenge” from one

participant who used video feedback. The “Factors of Influence” and the “Expectations

and Practices” showed slight differences. Participants cited “Familiarity” most

frequently, with “Time” named next, yet the data showed that “Time” and “Efficiency”

were most frequently cited as the factors that determined their use of feedback. Finally, the

data from the last question in this set showed that most responded with “Yes,” but there

were caveats or additional remarks provided. These responses were captured in the

“Plans to Continue” column of responses.

Following “Time/Efficiency,” the next resulting theme under “Factors of

Influence” was named “Student Needs.” This theme was selected based on key words

from questionnaire responses that pointed to anything that benefitted the students. These

included citing student preferences from institutional surveys, or from individual student

messages that indicated these preferences. For example, one participant responded:

I have found that students prefer Loom videos and I have received extremely

positive remarks about this form of feedback. In addition it saves me time over

typing the feedback and I am able to provide more feedback that makes sense to

the student (Participant 9).

This participant indicated that student feedback prompted the continued use of Loom for

video feedback.

The themes of “Time/Efficiency” and “Student Needs” must be examined

aggregately, as they have overlapping elements. For example, an instructor may believe

that increasing efficiency allows more time for higher quality feedback or possibly

classroom interventions for improving student writing. They may also believe that
despite the advanced practices that technology affords, not all students appreciate

these advancements. Indeed, Borup et al. (2015) found that despite the efficiencies and

personalization of video feedback, students reported preferring text feedback, citing its

conveniences.

The last question in this data set was designed to further prompt connections

between feedback and reflections, as it was about how faculty might continue with the

method or might try a new method. This same participant said:

I plan on continuing this manner of feedback for the time being as it has proven

successful for me. I am interested in using some Web 2.0 tools to provide

feedback – like Loom – but remain hesitant due to time constraints. I am unsure

whether or not I could provide enough feedback and still meet departmental

goals for grading (Participant 4).

Thus, this participant stated that text feedback was quicker, which was categorized as

“Time/Efficiency” as a theme, but also stated that they were hesitant to try Loom, the

program that allows video grading, due to its time constraints. Loom’s efficiency could

vary among individuals, but the perception of its efficiency may be just as critical in

reflection or choosing feedback delivery methods. To conclude, participants indicated

that time influenced their delivery method without having tried both types of feedback

modalities.
Table 14.

Questionnaire Responses by Research Question: Research Question 2


Methods Used: Text (36); Video; All

Factors of Influence: Time/Efficiency; Student Needs; Tradition/Norm; Data

Expectations and Practices: Familiarity; Time; Student Needs; Self-challenge

Plans to Continue: Yes, with text; Yes, may consider video; Yes, with text, may add video to other areas of course; May try with assistance

These themes provided insight into the second research question. The data

showed responses from both video and text participant users, but for both modalities, the

responses indicated that the chosen method reflects time efficiencies and what is best for

students. An exception to this is that those who used text feedback cited “Familiarity” as

being an expectation and continued practice. Moreover, some text participants expressed

interest in trying video feedback, but stated their hesitancy to try it in other responses.

This demonstrated that there are either technical challenges or mental obstacles to

overcome so that faculty members can try both kinds of delivery to determine which

kind of feedback is truly efficient for them, or what their students prefer.

In this study, feedback was regarded as the mechanism for learning theories.

Examining instructor feedback within Hattie and Timperley’s (2007) framework merges

the data with the underpinning theory. Defined as information relative to one’s

performance, feedback can be corrective and can clarify, encourage, or prompt action on the

part of the performer (Hattie & Timperley, 2007). The four-level model is designed to

provide the ideal conditions for feedback (Hattie & Timperley, 2007).

Feedback can form a path to learning. Feedback is considered effective when

students increase their efforts, based on a firm understanding of the goal (Hattie &
Timperley, 2007). If this goal is achieved, feedback becomes a powerful mechanism for

proficiency in self-regulation, which is consistent with moving students to their ZPD,

thus acting on Vygotsky’s (1978) learning theory.

Feedback’s effectiveness can be examined within a framework. Complete

feedback should answer three questions: Where am I going? How am I going? Where to

next? (Hattie & Timperley, 2007). These feedback questions operate on four different

levels: Task level, Process level, Self-regulation level, and Self-level (Hattie &

Timperley, 2007). These levels were the pre-determined themes that structured the data

analysis and the feedback remarks served as codes to support the themes.

The data were examined according to these four levels and were then integrated

with the previously identified themes of Development, Thesis, Formatting, Mechanics,

3rd Person Narrative, and Word Count. The figures below display these themes

according to either text or audio feedback:

[Figure 2 is a bar chart titled “Text Feedback according to Hattie & Timperley’s Levels,” plotting comment counts (scale 0-250) for the paper elements Development, C. Thinking, Thesis, Format, Mechanics, 3rd Person, and Word Ct.]

Figure 2. Text feedback levels with paper elements


[Figure 3 is a bar chart titled “Audio Feedback according to Hattie & Timperley’s Levels,” plotting comment counts (scale 0-70) for the paper elements Content, C. Thinking, Thesis, Format, Mechanics, 3rd Person, and Word Ct across the levels FT (Task), FP (Process), and FR (Regulation).]

Figure 3. Audio feedback levels with paper elements
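
Counts like those charted in Figures 2 and 3 amount to a cross-tabulation of coded comments by feedback level and paper element. The brief Python sketch below, with invented sample data, shows one way such a tally could be produced.

from collections import Counter

# Invented sample of coded comments as (level, paper element) pairs.
coded_comments = [
    ("FT", "Development"), ("FP", "Format"),
    ("FR", "C. Thinking"), ("FR", "Development"),
]

crosstab = Counter(coded_comments)
for (level, element), n in sorted(crosstab.items()):
    print(f"{level:>2} x {element:<12} {n}")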

At the task level is feedback that is specifically tied to the student’s task. For this

analysis, the researcher examined the feedback for appraisals of performance. For

example, Participant 4 stated, “Nice job with your thesis statement. Your topic and sub-

topics are clearly introduced.” This feedback was categorized as FT (Feedback to the

Task) because it appraises or evaluates the writing element. The student task was the

thesis or topic and sub-topics and the instructor noted the evaluation of them. Not all

feedback at this level is positive. Here is another example from this same participant: “I

appreciate the effort, but you did not properly format your reference page per

(institutional name) guidelines.” Both comments are at this first level (FT), appraising a

specific writing task. The data showed that while instructors addressed all writing

elements at this level, “Development” was the most commonly addressed element.

The next level that dictated the analysis was FP or feedback that was specific to

the student process. According to Hattie and Timperley (2007), feedback at this level
may point students to try different strategies or may act as a cueing mechanism. The

analysis involved reviewing comments for tone and language; if instructors were

specifically telling the students what to do, that was a distinguishing characteristic of

this level. For example, Participant 8 stated, “Use the double-space feature in Microsoft

Word and type in double-space.” This example shows that rather than evaluating the

writing issue, the participant, instead, is showing the student how to modify their writing

processes for future submissions and is being direct in the language. At this process

level, participants who used text feedback most often addressed formatting issues.

Feedback about self-regulation (FR) directs students towards the learning goal

through developing autonomy. Hattie and Timperley (2007) posited that feedback at this

level can promote internal dialogue and assessment for the learner. Feedback that

supported this theme contained comments framed as questions to the student.

Instructors may have also commented on the paper’s subject matter, often affirming the

student’s thought process or probing the student into deeper thought on the same subject.

For example, Participant 10 wrote, “Yes, those in line with communication with their

body etc…can help them understand what is happening around them and how they may

be handling it.” Participant 1 stated, “Right! What roles does objectivity, perception, and

memory play within this process?” These remarks qualify as FR because instructors are

addressing external factors that promote internal processes.

Among the participants who used text feedback, within the FR category, the

theme of “Critical Thinking” prominently emerged, which was not evident among video

feedback at any level. For participants who used text feedback, this theme was the

second most frequently used, following “Development.” By comparison, participants


who used video feedback also formulated FR comments when addressing content, but

did not use comments within the “Critical Thinking” theme. Unlike other comments,

instructor remarks in this theme did not address any particular writing issue, but instead,

served as dialogue on the course content. Here are some examples:

One quick side comment is in reference to the quote pulled from our e-book,

(gives title). I actually disagree with the author and find that thinking about

thinking describes metacognition rather than critical thinking which can be

considered a helpful skill in the process of thinking critically about what we

think or how we feel about a topic (Participant 1).

This comment promoted self-regulation, as it is not specifically tied to any problem or

concern with the paper. This “Critical Thinking” theme demonstrates how an instructor

may take the opportunity to converse on the class subject through paper feedback.

The last level described from Hattie and Timperley’s (2007) levels is FS, or

feedback about the self. These researchers acknowledged that this feedback is not

effective in isolation, but is nevertheless present. FS points to the learner’s personal

traits or character, and might include remarks such as, “Good effort!” (Hattie &

Timperley, 2007). While some FS was present in this study’s data, it was not included in

the final results, because it was not tied to a specific subject area; if praise-associated

remarks are tied to a specific writing element, then the feedback is categorized

differently. For example, Participant 11 stated, “Nice job explaining how learning styles

affect collaboration!” In this remark, the participant is not praising the student, but the

“Development” element of writing. Thus, this comment would be considered FT about

“Development.” Although this category of FS was reviewed and tabulated, it was not
included, as it did not overlap with the other themes. General praise was found in the

gradebox comments by many instructors, but as this study was limited to reviewing how

feedback was specifically tied to reflection, this category of feedback was not reviewed.

Analyzing instructor feedback within Hattie and Timperley’s (2007) framework

was an effective approach, but may have possibly resulted in missing data. Generally,

instructors used the sidebar function for formative feedback and this study was bounded

by that type of feedback. Placing boundaries in a case study keeps it reasonable in scope

(Baxter & Jack, 2008). It is possible that some instructors may have provided summative

comments through the sidebar feature, meaning that such feedback was not intended to

be formative. Summative feedback is generally provided at the end of course work and

is not designed to engage or mobilize students in the same way formative feedback is

(Mirzaee & Hasrati, 2014). It is also possible that instructors may have given formative

feedback directly in the gradebox, as opposed to on the student’s paper. Both

possibilities created potential for missing data; however, the researcher attempted to

compensate for this through the volume of papers reviewed.

When the focus group reviewed the data that supported the second research

question, no participants disagreed with the data. The data were explained to them, and

when prompted, there was not additional discussion nor were there challenges about the

responses. This confirmation served to triangulate the other data sources for this

research question.

In summary, the data analysis intersected the six themes of paper concerns:

Development, Formatting, Thesis, Mechanics, Word Count, and 3rd Person Narrative, with

Hattie and Timperley’s (2007) levels of feedback: Feedback to the Task (FT), Feedback
about the Process (FP), and Feedback about Regulation (FR). However, it should be

noted that instructor comments most often included more than one element. For

example, if an instructor provided FT about a writing element, it was often followed by

FP or FR remarks to re-direct the student. For the sake of analysis, these comments were

broken up to categorize them according to these themes. Finally, the data were

disaggregated to show how these feedback remarks might be compared with attention to

content areas and feedback levels.

Research question 3: Familiarization with the data. The third research

question of this study was: How do online undergraduate full-time faculty perceive the

influence of reflective thinking on their current and future instructional approaches? The

data sources reviewed were questionnaire responses 9-12 and the focus group input.

Below is the process of familiarization, coding, and theming that aligned with this

research question.

The last section (9-12) of the faculty questionnaire responses addressed this

research question. The first question was about what changes faculty would make to

their instructional practices, based on their own feedback reflections. Like other

questionnaire responses, because the questionnaire template allowed the participants to

respond to four different student papers for each question, participants duplicated

responses. Understanding how participants approached these questions demonstrated

that the responses would need to be checked for duplications or words that shared

common meanings.

The classroom papers served as an important artifact for this study. The papers

were analyzed in two ways: First, they were reviewed for feedback themes, and
secondly, they were reviewed according to how those themes applied to Hattie and

Timperley’s (2007) levels of feedback. Thus, the classroom paper artifacts were

foundational for all research questions, but were not a direct source of data that aligned

with this research question. Therefore, separate familiarization was not necessary to address this

research question.

As with the other research questions, the familiarization stage involved a review

of the transcript. According to Yin (2014), triangulation involves the opportunity for

alternate perspectives, so the focus group offered that opportunity. Becoming familiar

with the transcript was a necessary step in this process.

Like other questionnaire responses, the data were coded by a careful review of

the responses. Although many responses fell into a priori themes established from

previous answers, other responses demanded new themes. The ninth question asked what

changes faculty would make to future instruction, based on their feedback reflections. Most participants

responded with “None,” indicating contentment with their current approach to giving

feedback. Here are two participant responses: “Based on this paper and its feedback, I

will continue to use the same type of feedback for future instruction” (Participant 11)

and “There are a variety of methods to change or alter future instructions, feedback,

learning opportunities; however, I have no intention of changing anything as I worry

about not meeting university requirements” (Participant 5). These responses show

shades of difference, but could be distilled to reflect “None” to address changes faculty

would implement.

The next most frequently named response for this question centered on

instructor-added resources, followed by “Tech tools,” which meant that participants


would be likely to try other methods of technology in different areas of the class.

Feedback was a theme that followed, meaning that instructors believed that focusing on

their own feedback was an effective method of addressing student issues. Next,

participants responded that they would seek assistance from leadership or co-workers.

These responses were consolidated and themed “Co-workers.” The last set of codes

related to how students themselves influence instruction practices. Responses included

linking back to data from student surveys, notes in the class from students, or phone

discussions. These codes were then attributed to the theme “Student Needs” and were

identified in the same fashion as previous codes, through finding similar strains of

thought in the participant responses.

Question 10 linked back to the feedback reflection and captured how a student

concern might relate to a particular classroom element. Most participants indicated that

the course discussion questions were the primary factor that related to the student

concern. The other themes were: Assignment, Existing Resources, and Instructor-added.

Because these were established themes from other questions, the coding process

involved attributing the responses to these established themes. The participant remark

here shows that the instructor sees a connection between the paper issue and the

assignment: “This concern is only related to the assignment itself. There are no other

classroom elements that are related or that specifically address these concerns”

(Participant 11).

Other participants found that concerns in the paper were associated with classroom

resources. For example,


The issue the student has – formatting – is something that came up in Topic 2,

and they have continued to use the classroom resources, to show considerable

improvement. While they are not formatting everything correctly, the issues are

very minor. It is clear that they get the general idea (Participant 4).

This participant was focused as much on student success as on student concerns.

Their reflection indicated that they were assessing resources based on their

effectiveness.

The eleventh question asked faculty participants how trends

affect their practices and which specific trends would do so, thereby prompting

them to predict or reflect on these influences. Most responses here

fell into themes determined a priori, though some necessitated additional coding. Participants

sometimes pointed to institutional requirements as a factor affecting their practices, but

this was listed the least frequently. Here is how one participant stated this:

I have been told that student success rates and ‘checking the box’ are the most

important aspects of my job. Or that student learning as specifically measured by

an outside part is the most important. My desire to actually help a student learn

or grow is secondary to these pressures (Participant 5).

The last question on the instrument was designed to explore the extent of faculty

practices based on their own reflections. Participants had responses similar to previous

questions, so similar codes were captured to support themes of: Instructor-added; Co-

workers; Does not see; Tech Tools; Engaging in Forum. Table 5 shows the frequency of

these themes and how Instructor-added is the most dominant, followed by Co-workers.
This specific population may have contributed to the theme of co-workers, since

participants work face-to-face with fellow faculty members.

Research question 3: Coding the data. The second data

source to address the third research question was the focus group session. This session

was held on March 27, 2018, at 3:00 PM, Arizona time. Ten of the 12 participants

attended via Zoom, which allowed remote attendance and the opportunity to record the

session, which lasted 48 minutes. The participants verbally consented to the session

being recorded, and the data were manually transcribed, resulting in a word count of

3,424. A careful review of listening to the session and reading the transcript were the

first steps in coding.

The participants agreed with most of the data presented, but did challenge some

findings. The first discussion was the second questionnaire response, which was a

follow-up to the paper concerns. After listing the paper concerns, faculty were asked to

follow-up with their perceptions of concerns and for most paper issues, they indicated

that they were frequent. The focus group participants questioned the response indicating

that the thesis statement was considered “Low.” Some challenged this, as they

believed that thesis statements play a prominent role in their grading. For example,

Participant 9 stated, “I was a little surprised just to see the perception of concern for

thesis being so low listed there, just because from my experience.” Participant 1 wrote,

“I have the same kind of automatic – oh, wow, low for thesis?” Participant 13 replied,

“Uh, I would agree with both of what they said as you’d think that’d be a pretty big

concern.”
These three participants expressed surprise that responses would indicate that

this element would be of low frequency. As this discussion ensued and was moderated,

another participant brought in a new perspective: “I think a lot of it depends on the level

you’re teaching at too, because I know with some of the papers I chose…the thesis

wasn’t as important as teaching them formatting and developing the paper” (Participant

10). This participant’s remarks demonstrated how the data aligned with the first research

question about paper concerns. Although there was a theme of thesis statements, its

prominence was in paper feedback more than in the questionnaire response when

instructors were asked about paper concerns. The participants’ remarks provided insight

on the gaps and disparate emphases on the thesis statement in this study.

The focus group discussion also demonstrated that participants may focus in on

some issues, relative to what they emphasized in class. For some, there may be raised

expectations of particular issues, resulting in focusing on different student concerns. For

example, here is how one participant expressed this:

For the essays…where they write thesis statements, I try to get them to practice

their thesis statements in the discussion leading up to the paper. And so we go

back and forth and I fix them before they submit the papers. So then they’re

better in the actual essay because I try to head it off at the pass (Participant 2)

Thus, it became clear that the discussion of the thesis statement was an example of an

issue where divergent instructor views illustrated different reflections and actions.

Because Participant 2 stated that she creates her own materials to offset concerns with

the thesis statements, she sees these less and perhaps shifts her thoughts during feedback

to focus on other areas that may help students.


Another key finding that emerged from the focus group discussions centered on

the emphasis on content. When the paper issues were presented on a chart, the findings

showed that faculty gave most attention to content in their feedback. The codes that

determined the theme of “Content” included any feedback or questionnaire remarks that

referenced support, evidence, development, expansion, logical or illogical arguments, lack of resources, or content generally. "Content" was frequently discussed in both text and video

feedback, but with a greater emphasis in text feedback. The significance of these

differences and this attention to content will be discussed in greater detail in chapter 5.

The focus group observed the key differences in the theme of critical thinking.

Two charts (see Figures 3 and 4) were shown that illustrated the paper issues as well as

where the issues fell according to Hattie and Timperley’s (2007) levels. The focus group

participants observed that an additional theme of critical thinking was present only in

text feedback. One participant noted, “Would this indicate that there’s less critical

thinking with the video feedback? Uh, that’s the wrong way to say it, but there’s less

critical thinking feedback” (Participant 5).

The discussion that followed was about reasons this theme might be less prominent in

video feedback. Two participants offered their views:

So that’s kind of interesting to note the difference to the text commentary. It

might be that for us, quite a few of us, video feedback is still new and we haven’t

found a way to integrate those critical thinking comments in there in a positive

way for the students. It was just interesting to note (Participant 5).

I feel like, like we try to be more positive maybe because on a video, you don’t

want to have on video, well, this was, completely off topic and you know, you
messed this up somehow – I feel like we stay a little bit more positive in those

videos and it’s more to show them that formatting process, in my experience

(Participant 10).

The conclusions in this discussion showed that the substance of feedback may vary according to the delivery method. The comment above indicated that feedback through video may keep

instructors from addressing larger issues; instructors giving video feedback may remain

more focused on surface issues, such as formatting and mechanics. The significance of

this will be discussed in later sections.

Research question 3: Establishing themes. The themes for this last research

question were a result of the codes generated from the questionnaire responses. The

ninth question was: What, if any, changes will you make to future instruction, based on

your feedback reflections? As with previous questions, similar responses were

consolidated into the theme of “None” to indicate that the instructors would remain with

their current methods. This does not mean that they were not reflecting on their feedback; rather, it could indicate that they are satisfied with their current methods. This

theme was also named to be consistent with similar questions asking participants about

methods or planned changes.

A second theme that addressed the ninth question was “Instructor-added

resources.” Responses were attributed to this theme if they indicated that the participants

had specific plans to develop new resources. In an online classroom, instructors have

options of adding their own resources to a general resource section of the class, or to the

discussion area. Creating websites, creating videos, or adding written material are examples of these. Here are examples of how two participant responses were assigned to this theme:
“Perhaps if I can find a couple of grammar/English quick lesson video snippets. These

have been helpful with the thesis and reference page. They may work with grammar-

related issues” (Participant 1) and “I would add a video on citing within DQ 1 (the

class’s first discussion question) to be proactive instead of reactive” (Participant 6).

The next two themes were "Tech tools" and "Feedback," which drew from codes

from the questionnaire responses. Whether they used text or video feedback, participants

stated they would consider technology in other classroom areas. These included

instructional videos posted in the classroom, using a tool to text students, or video-

conferencing. Thus, these codes resulted in the theme of “Tech tools.” The theme of

“Feedback” surfaced as participants indicated that through their changing feedback

comments, they could more effectively influence student paper concerns. This theme

was not present among video participants, but was the fourth most frequent theme

among text feedback participants.

The next theme demonstrated the specific culture of the population of online full-

time faculty. The theme was named “Co-workers,” which indicated that faculty were

influenced by their peers and leadership in their practices. Because faculty work together

at a campus facility, they collaborate, share, and discuss teaching methods. One outlier

response demonstrated that these influences were not always positive. This participant

responded, “I have management that does not care about methods or theories other than

what they feel is correct so disagreeing or trying something new has been shown to be a

negative and could impact my employment” (Participant 5). Although the negative

implication of this response gave it outlier properties, it was nevertheless assigned to the theme of "Co-workers," since management influenced the practice.


The final theme from this questionnaire response was “Student needs,” which

indicated that any kind of student response or data influenced faculty practices. Students

have opportunities to contact instructors in the classroom or through email. Finally,

students complete surveys as they end their courses, so instructors may review these as a

source of data. Thus, student praise, concerns, or questions influence instruction for

some.

The tenth question asked how student concerns might broadly shape instructor practices. This question was designed to probe how faculty participants might connect

the paper issues to materials or concepts available in the course. The themes were:

Discussion questions, Assignments, Existing resources, and Instructor-added. Each theme was derived from codes that were eventually consolidated into it. For

example, one participant said, “Some stuff we have taken a proactive approach such as

asking students to post a reference as part of their DQ (discussion question) response to

help get them comfortable with citing and researching in the library” (Participant 6).

These raw data were considered part of the theme "Discussion questions,"

since the participant was stating that the student issue was most closely associated with

this intervention.

The eleventh question specifically asked faculty how trends or individual

concerns might affect their practices. The themes from these responses repeated those from other questions, as instructors discussed student needs and instructor-added resources. A theme not seen in the tenth question was calling students. In the online full-time faculty

setting, calling students is a practice that is encouraged, as faculty are assigned phones

and numbers from the institution. Among remote adjunct faculty, this practice may not
be promoted or monitored. The most prominent theme among those giving text feedback

was “Instructor-added,” followed by Feedback, Increasing engagement, and Student

contact.

In response to this question, the theme of “Student contact” was more prominent

among participants who gave video feedback. Phrases like “contacting students” and

“set up appointments” were codes that were attributed to this theme. At the host

institution, full-time online faculty are encouraged to call students, both to welcome

them to class and to help during the class. For example, one participant stated, “I also

schedule 1:1 phone appointments to review all work and what students can do to make

things better” (Participant 6). This theme associated with contacting students is

significant, because other data indicated that both text and video participants believed

that these methods are best for students. Nevertheless, feedback methods alone are not

sufficient for addressing student needs, and through reflecting on feedback, instructors

are directly prompted to call students.

The final question on the instrument was about how faculty reflection might

influence them in a broader context. This level 4 question was designed to capture how

faculty might re-shape their practices, focusing on the bigger picture. The online full-

time faculty at this institution work at a campus facility and engage in decisions

regarding curriculum and teaching practices. Thus, some of the responses in earlier questions related to possible quick fixes on the part of faculty, whereas this question was

designed for participants to consider the student issue in a broader context. This might

mean improving resources campus-wide for students, having a part in curriculum


revision, or joining a group or committee that might address issues of concern for

students.

The data from the responses underscored the value of using outside resources

beyond the instructors themselves, as many mentioned discussing concerns with co-

workers and management. Some themes were repeated, including Student needs and Instructor-added, while two other themes specific to this response were created: Tech Tools and Co-workers. The last two themes are significant, since they relate

directly to how this particular population, in a collaborative environment, might reflect

on feedback to shape practices. For example, “I will also share with my coworkers any

issues or trends that I see and also bring these up to my manager during one-on-ones. By

doing this I can obtain advise [sic] and new perspectives” (Participant 8).

The last set of questionnaire responses demonstrated that the particular culture of online full-time faculty at this institution influenced practices and instruction.

The data sources for this research question were the questionnaire responses and the focus group discussions; the classroom papers were not considered a direct data source. The classroom papers instead served as data sources for two types of analysis: identifying the paper concerns was necessary for the first research question, and the feedback was analyzed according to Hattie and Timperley's (2007) four levels for the second. Those foundations formed a platform for addressing this third research question, because examining how feedback practices inform reflection and what methods are used helps in understanding how faculty apply that reflection. Thus, no new themes from classroom papers will be discussed for this third research question.
The focus group reviewed the themes from the questionnaire responses and the

classroom papers. The focus group was asked to confirm, challenge, or add to the data.

No new themes were added as a result of the focus group discussions. That is, the group

appeared satisfied with the results from the questionnaires and the papers. However,

some participants were surprised at the frequency ratings in the second questionnaire response with respect to the thesis statement. These participants stated that in their grading

experience, students often needed help with thesis statements. However, other

participants stated that the emphasis on the thesis statement could be dependent upon the

classes taught. The participants also observed that the critical thinking theme surfaced

among those who gave text feedback, but was not present among the video feedback

data. Triangulation served as a validation of the data (Yin, 2014). With the focus group’s

interest and attention on particular areas, and general agreement on the remaining data, the process of triangulation was complete, and these remarks contributed to the

significance of the themes.

Results

This section will present the analysis results and how these results align with the

study’s three research questions. The study’s data consisted of reviewing questionnaire

responses, classroom paper feedback, video feedback that was transcribed, and a focus

group that was recorded and transcribed. All data were coded with the patterns resulting

into study themes. This section below will identify the study’s themes, how they were

developed from the codes and how those themes were viewed by the focus groups,

producing the final themes and results. The focus group completed the triangulation of

the data and provided emphasis on certain themes, which reflect data from each source.
As this is a qualitative case study, the salience of the results is dependent on the rigor and care of the coding and theming process. Data for a qualitative study must show

a systematic process for meaning to emerge (Vaughn & Turner, 2016). In addition to the

systematic process of coding the data, the sources were triangulated to address the

study’s questions about faculty feedback reflection. The purpose of this qualitative

exploratory single case study was to examine how online undergraduate faculty

perceived the influence of their instructional feedback practices on their reflective

thinking, and hence, their instructional strategy at a southwestern higher education

institution. To address that topic, the data sources will be discussed below, showing how

the coding and theme analysis contributed to the outcomes and how triangulation served

to produce those outcomes.

This study focused on online undergraduate full-time faculty at a southwestern

higher education institution. From the division, 136 out of the 150 online full-time

faculty were sent emails inviting their participation in the study. Initially, 13 volunteered to participate, but one did not successfully complete the questionnaire, so the researcher continued to work with 12 active participants who completed the questionnaire and agreed to a review of their classroom feedback; 10 of these participants attended a recorded focus group session. All participants provided informed

consent forms.

The participants first completed a faculty questionnaire of 12 questions. There

were four questions aligned with each of the three research questions. Faculty completed

this questionnaire as they graded and examined four papers for each question. Next, the

participant’s classroom paper feedback was analyzed for common themes and how the
feedback fit into Hattie and Timperley’s (2007) levels of feedback. Finally, for the focus

group, the participants were shown the data from these two sources. They confirmed the data and highlighted and challenged certain themes that will be discussed in greater detail in this section. The confirmation and emphases from the focus group triangulated the data, strengthening confirmation of existing themes.

Faculty questionnaires. Research question 1 was: How do online undergraduate

full-time faculty perceive the influence of their instructional feedback practices on their

own reflective thinking? This research question was addressed through the faculty

questionnaire responses 1-4. The themes of paper issues that surfaced were

Development, Formatting, Mechanics, Thesis statements, Third person narrative, and

Word count. These themes were derived from 37 codes (see Table 9), which were words participants used to address student issues that were later consolidated into the six broader themes.

The second question was closed-ended, so remarks were tallied as either high,

low, or mixed. For the third questionnaire response, there were six themes, listed here with corresponding code counts: Instructor-added (20), Existing Resources (15), Feedback (17), Co-workers (5), Past (5), and Student Contact (5). The fourth questionnaire

responses addressed whether participants would implement changes to their practices.

The themes and numbers of codes were: Instructor-added (39), Existing Resources (18),

None (8), Student Contact (3).
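To illustrate how such codes can be consolidated and tallied, the following is a minimal sketch in Python (not the study's actual tooling; the qualitative coding in this study was done manually) of mapping raw codes to broader themes and counting theme frequencies. The code labels and mapping shown are hypothetical examples drawn loosely from the labels reported above.

    from collections import Counter

    # Hypothetical mapping of raw codes to consolidated themes
    CODE_TO_THEME = {
        "sample papers": "Instructor-added",
        "creating videos": "Instructor-added",
        "classroom materials": "Existing Resources",
        "comment revisions": "Feedback",
    }

    def tally_themes(codes):
        """Map each raw code to its theme and count theme frequencies."""
        return Counter(CODE_TO_THEME.get(code, "Uncoded") for code in codes)

    # Example: three raw codes collapse into two themes
    print(tally_themes(["sample papers", "creating videos", "classroom materials"]))
    # Counter({'Instructor-added': 2, 'Existing Resources': 1})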

The themes surfaced as a result of the codes and the frequency of participant

responses. Therefore, the frequency of the codes demonstrated what was significant to

the participants. When asked about the paper issues on the questionnaire, issues related
to “Development” surfaced the most. Comments that related to evidence or paper

substance seemed important, due to the frequency of words like evidence, support, and

reasoning. The next theme mentioned frequently was "Formatting." Participants alluded

to this theme through words like APA style, citing, or discussing the reference page.

These were distinguished from comments that would be categorized under “Mechanics”

which included spelling, punctuation, and grammar outside of formatting. A third

prominent theme was “Thesis statements.” While not all participants expressed concern

about thesis statements, those who did ranked this concern high and noted it frequently in the questionnaire, in the paper feedback, and later in the focus group. This will

be discussed in greater detail in those sections.

The remaining themes were “Third Person Narrative” and “Word Count.” First-

year sequence courses emphasize writing that is from a scholarly viewpoint and students

frequently need to present research-based papers. Beginning students may lapse into

using the first or second person, and instructors may provide instructional commentary

or may identify this error in the paper feedback. Participant responses were mixed

concerning the frequency of this issue. For word count, instructors relied on assignment

guidelines to determine if students were below or if they exceeded word count, and the

paper comments indicated this.

Participants' responses to the third and fourth questions yielded codes with "Instructor-added" as the most dominant theme, which indicated that participants

believed they could best address the paper issues by creating their own materials. Codes

that supported this theme were examples, sample papers, creating videos, and preparing

commentaries. For the last questionnaire response, there were 11 codes that fell under
the theme of “Instructor-Created Resources.” Participants mentioned adding to class

discussions, posting new materials, creating FAQs, giving amplified instructions, and

posting Classroom Assessment Techniques or CATS (Angelo, 1995). Participants

regularly use, share, and create CATS, as they are part of ongoing institutional

professional development and full-time faculty expectations. Three responses indicated

that participants did not plan to make any changes regarding the paper concerns. Thus, responses of "No changes" and "None" were themed as "None." To address the

question of how online full-time undergraduate faculty feedback practices influence

their thinking, there were six themes of paper concerns: Development, Formatting,

Mechanics, Thesis Statement, Word Count, and Third Person Narrative.

Participant responses showed that they mostly use their own created resources to remediate these student issues and that this same practice will likely continue among the participants. These results offered insight into the first research question, showing how faculty reflection on feedback shapes how they address paper concerns. One participant

said:

To help students stay on topic and answer all prompts, I have created videos

walking through the assignments. The video also explains the requirements for

paragraph length. I also use a Frequently Asked Questions list in my weekly

announcements to help students identify potential problems so they don’t make

the same mistakes (Participant 11)

This participant, like many, relied on her own creation of resources as a way to

remediate and to shape instruction.


The participants who either responded with “None” or “No changes” were not

indicating the absence of a strategy; rather, they were secure with their current methods

of addressing student issues. One participant said:

I will not be making any changes to my practices, as I believe the use of Loom

videos clearly explains the issues with any given essay to the student. I feel

creating video feedback is a best practice and I will continue to do so (Participant

9).

This participant equated not changing practices with continuing to provide robust video feedback.

Classroom papers. A second data source that addressed this research question

was an examination of the classroom papers. The review of these artifacts served as a source of triangulation of the data, because the actual classroom feedback could be

compared to the participant responses. When participants were asked about what

concerned them about student papers, the same themes were addressed in their actual

classroom feedback. Thus, although the researcher approached the paper feedback

openly, the a priori themes from the questionnaire served as themes for the classroom

feedback as well.

The theme of "Development" showed significant prevalence in instructor feedback. To arrive at this theme, any words that related to the paper's

development were coded by marking in pink and consolidated as “Development.” Here

are two examples of statements that depicted that theme: “This is a strong paragraph;

delving into the impact that bullying has on a victim’s schooling is a great way to
appropriately develop your topic" (Participant 3) and "Please work on providing more

information in each supporting paragraph to make a strong argument” (Participant 6).

Although “Development” was a strong theme in both video and text feedback, it was

more prevalent in text feedback, while the themes of "Formatting" and "Mechanics"

were stronger among the video feedback participants.

To address the first research question about how faculty reflect on their feedback, two data sources were used: the first set of questionnaire responses, designed to capture how faculty reflect in their grading and on what specific issues, and the classroom paper feedback. The resulting data from these sources shared the same themes, with one exception. A new

theme emerged from the paper feedback review. The theme “Critical thinking” was

attributed to participant comments that were detached from the students' writing skills but showed that there are times when instructors want to relate the assignment to the course

content, dialogue with students, or promote deeper thinking on a subject. Here is one

example:

I grew up in Kansas. My entire family on my mom’s side are various farmers.

One has two turkey farms, one is a dairy farmer, one is a pork farmer, one is a

wheat, soy and corn feed farmer, one does pumpkins and watermelon

(Participant 1).

Another example states, “It is important knowing what good collaboration is and

how effective communication can affect this” (Participant 10). While this theme of

“Critical Thinking” was prevalent among participants who gave text feedback, it was not

present among participants who gave video feedback. The significance of this difference

will be discussed in chapter 5.


The remaining themes shared between the response to the first questionnaire and

the themes extracted from the classroom feedback review are Mechanics, Thesis Statements, Third Person Narrative, and Word Count. The theme of "Mechanics" encompassed

several codes which were derived from participants noting areas where students needed

attention in grammar, usage, punctuation, capitalization, or word choice. Here are some

examples of comments that fell into the “Mechanics” theme: “Relive or relieve, make

sure to read through before submitting” (Participant 8) and “Only capitalize when

starting a sentence” (Participant 2). Both comments illustrated feedback that is focused

on student mechanics and the instructors were identifying and providing suggestions for

improving the writing issue.

The theme of “Third Person Narrative” surfaced often in both the questionnaire

responses and in the paper feedback of both text and video feedback. Instructors

addressed this issue in different ways. Participants may have advised students to avoid

“you,” or may have advised against using the “second person.” Similarly, participant

feedback may mention the error of using first person, often following up with

implications, explanations, or examples. An example is, “First person can be seen as a

hasty generalization” (Participant 6). Here, the participant is providing the rationale

behind avoiding the first person. It was common for participants to not only identify a

writing issue, but to extend their comments towards formative instruction. This trend

will be discussed in greater detail where the classroom papers are analyzed according to Hattie and Timperley's (2007) levels of feedback.

A prominent theme that surfaced for participants teaching certain classes was "Thesis Statement." Some participants indicated both on the questionnaires and in the
classroom papers that their concerns with thesis statements surfaced frequently. First-

year sequence courses focus on the framework of a five-paragraph essay with a thesis

statement with three points. Therefore, these comments were not just on the thesis

statements themselves, but may have been evident in the body paragraphs. One

participant who provided video feedback mentioned “thesis” four times in the video

transcript of 898 words. The participant responded, “…are you giving me some lead-in

information to your thesis, most importantly, what does your thesis look like, is it

appropriate, is it clear and concise?” (Participant 9).

A participant who provided text feedback mentioned the term “thesis” two times

in a paper with a word count of 344 words. He addressed it as, "Your thesis statement is

well-structured…A concluding paragraph re-states the thesis, re-caps the key points, and

ties everything together in a satisfying way” (Participant 3). This comment shows that

although this student did not have trouble with the thesis statement, the instructor still

provided commentary, emphasizing its role in the essay. Thus, there is a trend of

addressing topics not necessarily for correction, but for reinforcement or formative

instruction.

The data from reviewing classroom feedback as well as the first set of responses

from the questionnaires revealed the prominent themes that highlighted instructor

concerns with papers. From the responses, the themes showed how instructors viewed

papers and how those concerns are reflected in their teaching practices. A common theme was

that participants did not plan on making changes, or they would create their own

resources to address student needs. The themes for the paper concerns (from Question 1)

were also used to examine the papers. The most prominent theme was "Development," followed by "Formatting," and an additional theme of "Critical Thinking" emerged from

the instructors who used text feedback. Finally, the “Thesis Statement” theme was

prominent because in some classes, it was a foundational concept and was often

addressed in feedback either to reinforce its correctness or to remediate its development

and presence in the paper. The themes from the second research question will be

addressed in the next section.

Research question 2 was: What methods of feedback do undergraduate full-time

online faculty use at a southwestern higher education institution? The purpose of the

second research question was to determine if different kinds of reflection take place

depending on the feedback methods. Full-time faculty use either text feedback, utilizing Microsoft Word's sidebar comments or Track Changes embedded feedback, or video feedback through an application called Loom. The video allows audible

narration while the instructor can use the moving cursor and live typing to amplify the

feedback. The questionnaire responses 5-8 and the classroom papers were the two data

sources that addressed this question. However, for this research question, the classroom

feedback was examined a priori, according to Hattie and Timperley’s (2007) four levels.

The data were disaggregated to show the differences between the kind of feedback given

and areas of emphasis.
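As a brief illustration only, the following sketch in Python (hypothetical, not the study's actual tooling) shows one way coded feedback comments might be disaggregated by delivery method so that theme emphases can be compared between text and video feedback; the record values are invented for the example.

    from collections import defaultdict

    def disaggregate(records):
        """Tally theme counts separately for each feedback method.

        records: iterable of (method, theme) pairs, e.g., ("text", "Content").
        """
        table = defaultdict(lambda: defaultdict(int))
        for method, theme in records:
            table[method][theme] += 1
        return table

    # Hypothetical coded comments tagged by delivery method
    counts = disaggregate([
        ("text", "Content"),
        ("text", "Critical Thinking"),
        ("video", "Mechanics"),
    ])
    print(counts["text"]["Content"])  # 1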

Questionnaire responses 5-8 will be discussed first. Question 5 asked which method of feedback faculty used; as this was a closed-ended question, only two main responses surfaced, and these also served as themes: Text and Video.

Eight of the twelve participants stated that they used text feedback. Two stated that they

used both text and video. One said the method depended upon the assignment, and one
participant stated that video was used; however, in reviewing the classroom data, text

was the primary method for this participant. It should be noted that the researcher

intentionally limited the classroom feedback review to lengthy assignments, and an in-depth classroom analysis revealed that instructors who used video feedback used text

feedback on shorter assignments.

Participants viewed questions six and seven as asking the same thing. Therefore,

the responses were compressed into four themes: Time/Efficiency (24), Familiarity (16),

Student needs (12), and Co-workers (4). For the video participants, Student needs (8)

was the most frequent theme, with Time/Efficiency (4) being the other theme. There

were no codes among the video participants that were attributed to other themes.

A prevalent theme among both types of participants was “Student Needs.” Here

are examples of codes that were attributed to this theme: “Anything to help students”

(Participant 7) and “Based on student needs” (Participant 9). Participant 9 used video

and Participant 7 used text, yet both believed that their chosen method was the best for

the student and this was listed as the dominant factor for selecting their particular

feedback method.

Another theme from this section of the questionnaire response was

“Time/Efficiency.” Full-time online faculty daily face loads of an average of 100

students; the tiered instructors typically carry loads of 150 students at a time. Thus,

participants stated that they considered efficiency an important factor in determining this

feedback. Like the previous theme, the video and text instructor responses both indicated

that these methods played a role in efficiency; that is, that instructors who used video
felt it to be more efficient than text, and those who used text believed that text was more

efficient than video. There were six comments that pointed to this theme.

A third theme was “Familiarity” which meant that participants were practicing

what they believed was expected or within their comfort zone. Responses with words

like “experience” or “expectations” were re-framed into this theme. This theme only

surfaced among participants giving text feedback. Since video feedback is relatively new, even participants who use it are more familiar with text feedback, which provides a strong rationale for why this theme did not surface among those participants who gave video feedback.

The last question that addressed this study’s second research question was about

the faculty’s plans to continue with the current method. Nearly all responses included a

“Yes,” and the questionnaire’s open-ended format revealed faculty attitudes towards

their choices. The “Yes” responses were further delineated and categorized as either an

affirmative response only, or an affirmative response coupled with an additional factor

based on the responses. Here are examples:

I plan on continuing this manner of feedback for the time being as it has proven

successful for me. I am interested in using some Web 2.0 tools to provide

feedback – like Loom – but remain hesitant due to time constraints. I am unsure

whether or not I could provide enough feedback and still meet departmental

goals for grading (Participant 4).

Participant 8 stated, “I am open to including additional assistance or methods

into my current plan but I do plan on continuing the same route to helping students.”

These participants provided affirmation of continuing their same methods, but


Participant 4 was compelled to share a specific reason for avoiding video feedback,

citing perceived time constraints. Like others, Participant 8 plans to continue with the

same method, but is open to change. The two participants who did use video feedback

did not express the desire to change their practice.

Classroom paper feedback. The second data source that addressed this second

research question was the classroom paper feedback that was analyzed according to

Hattie and Timperley’s (2007) four levels of feedback. This framework posits that

instructor feedback falls into the task, process, self-regulation, or self (praise) levels (Hattie & Timperley, 2007). For this analysis, the instructor feedback was examined according to

these levels. The theory that undergirded this study was Vygotsky's (1978) Zone of Proximal Development, which correlates with a learner's self-regulation. Therefore, it was important to see whether the selected method shaped the kind of feedback given and whether, in turn, reflection on that feedback influenced teaching practices. For this data set, the themes were a

priori, because Hattie and Timperley’s (2007) levels were already established as the

structure. This means that the researcher examined the feedback for characteristics that

matched these levels.
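To make this a priori matching concrete, here is a minimal, hypothetical sketch in Python (not the instrument actually used; the actual analysis was a manual, interpretive reading of each comment) of sorting feedback comments into the FT, FP, and FR levels by surface cues. The cue phrases are illustrative assumptions only.

    # Hypothetical cue phrases; the real analysis was interpretive, not automated.
    FR_CUES = ("how do you", "what are your", "consider", "?")
    FP_CUES = ("work on", "be sure to", "make sure", "revise")

    def classify(comment):
        """Assign a feedback comment to an approximate Hattie and Timperley level."""
        text = comment.lower()
        if any(cue in text for cue in FR_CUES):
            return "FR"  # self-regulation: prompts the student toward self-discovery
        if any(cue in text for cue in FP_CUES):
            return "FP"  # process: directs remediation of a specific writing issue
        return "FT"      # task: brief evaluation tied to a specific graded element

    print(classify("Work on removing 'that' from your paper."))  # FP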

The results of this analysis showed that among participants who used text

feedback, most comments fell into the self-regulation category of feedback (see Figures

3 and 4). These comments can be characterized as leading students into self-discovery, rather than providing explicit instructions or comments that praise or condemn. Within the self-

regulation level of feedback (FR), instructors might explain concepts, as seen below:

The thesis statement is the last sentence within the introduction and provides a

quick summarizing statement of the supporting paragraph topics within the rest
of the essay. What are your three supporting paragraph sub-topics? How do you

integrate those into a one-sentence thesis statement? (Participant 1).

This comment serves to remind the student of the concepts learned in class and prompts

the student towards self-regulation with the questions that follow. Comments from participants who used text feedback were mostly directed towards development, followed by formatting, critical thinking, mechanics, thesis statements, third person narrative, and word count. Although self-regulation comments took place mostly within content and critical thinking, all of the paper issue themes appeared at this level of feedback.

Among the participants who provided video feedback, the highest concentration

was in the themes of Content, Mechanics, Formatting, Thesis Statements, and Third Person Narrative (see Table 10). Here are a few examples of self-regulation comments provided

through video feedback: “When I go into your supporting paragraphs, I’m looking, do

you have an outside source? Every paragraph should have an outside source”

(Participant 9) and “Here you’re talking about relational awareness and here you’re

talking about the negative side of social media, so it doesn’t go well together”

(Participant 12).

This theme of self-regulation is evident, as both text and video comments had their highest concentration at this level.

The second a priori theme established from Hattie and Timperley’s (2007)

framework is feedback about the process, or FP. Feedback fit into this category if the

comments specifically addressed how a student might remediate a paper issue. For

example, Participant 8 stated, “Work on removing ‘that’ from your paper, as it


constitutes passive voice" and Participant 11 wrote, "Be sure to address at least one of

the objectives in the 4th paragraph.” Feedback at this level is direct and associated with a

particular writing issue. Longer assignments in first-year sequence courses are often first

drafts, so instructors may include feedback to facilitate the revision process. Of the

participants who used text feedback, most FP comments addressed formatting, followed

by content, mechanics, thesis statements, critical thinking, third person narrative, and

word count. For participants who used video feedback, mechanics was the highest

category, followed by content, formatting, thesis statements, then third person narrative.

Within this FP level, there were no comments on critical thinking or word count for

participants using video feedback.

Here are examples of FP feedback from the participants who used video

feedback: “You need to write in third person narrative” (Participant 9) and “Make sure

you’re not giving me any personal feelings” (Participant 12). Both text and video

feedback comments touched on many areas, focusing mostly on mechanics and

formatting. The significance of this will be discussed in chapter 5.

A third level from Hattie and Timperley (2007) served as an additional theme for reviewing feedback for this second research question about methods of feedback. Task feedback (FT) refers to feedback that assesses or evaluates student work

and is associated with a specific issue. This reference to a specific issue separates the FT

feedback from praise, which is limited to addressing the learners’ character traits. Thus,

when instructors used the sidebar feature to give feedback, such as, “Good job!” it was

considered FT, since the instructor was intentional about this association.
In examining the feedback remarks at this FT level, a noticeable result was the

difference in comments regarding the “Content” theme. For both text and video

feedback, content was commented upon the most at this level; however, among

participants using text feedback, comments about content were provided at this level

more than any other; whereas among participants who provided video feedback, most

comments about content appeared in the FR (self-regulation) level. Overall, among the

video participants, comments were not prevalent at this level. An example of text feedback at the FT level is "Great example" (Participant 5). At this level,

comments are typically brief, since the lengthier comments fall into different levels,

such as FP or FR feedback. In addition, the student receives context through the

instructor’s use of the sidebar comments and its identifier.

The last category in Hattie and Timperley's (2007) framework is feedback that addresses the self, or FS. Although these comments were present in both

text and video feedback, for the purposes of this study, they were not tabulated or

analyzed. Instructors typically limited their FS feedback to general overall comments that the whole class may have received or to conclusive or summative remarks. FS comments are not tied to a specific paper issue, and since this study was about how instructors reflect on their feedback, such comments would not address remediation that might result from feedback reflection. Thus, the data (see

Table 10) show only the three levels, as they would serve the purpose of linking

feedback to future instruction.

Research question 3 was: How do online undergraduate full-time faculty

perceive the influence of reflective thinking on their current and future instructional
approaches? The data sources that supported the third research question were the third

set of questionnaire responses and the focus group results. The ninth question was

designed to capture whether faculty participants would make any changes to their teaching based on their feedback. The most common response was "None," and to arrive at this

theme, all responses were reviewed and any comments that alluded to staying the same

were categorized under this theme. As an example,

I absolutely will continue to use Loom for outline and first drafts. As previously

stated, I use this technology to create Welcome to Class videos in which the

students are able to put a face and voice to my name which students have

overwhelmingly approved of. I also make tutorial videos (Participant 9).

Since this participant justified the use of video feedback, this was themed under “None.”

Five participant responses were associated with this theme.

A second prominent theme among participant responses was “Instructor-added.”

Participants expressed this by sharing that they continually develop resources based on

student needs. One participant said, “Will keep working on developing videos that help

model the week’s objectives and assignments” (Participant 7). This participant is saying

that they are using videos in other areas and that, although they will not change their feedback methods, they do respond to student needs by creating videos based on the issues seen in papers.

A third theme from this ninth questionnaire response was “Tech Tools.”

Although similar to the other theme of developing new resources, this more specifically

refers to participants trying new web tools, such as Remind, which serves as a host site

that allows instructors to send text messages to students. One participant said:
I may start to use the Remind app. I may also strategize the tools that need to be

used to assist with each week’s assignment to build up to the rough draft, in hope

of better overall quality and connection to past assignments. I may over a live

Zoom conference to students one day a week for questions and to have open

ended assistance for one hour with help on their paper (Participant 13).

Faculty participants who use text may use video or other technology applications in

other areas of the course. Their feedback reflections point to those influences.

A fourth theme present in the responses to this question was "Co-workers," which also included institutional leadership. This theme may indicate that faculty expected either support or training from leadership. For example, one participant

responded, “I will continue to seek out assistance from my peers and manager as well as

my students” (Participant 8). This response aligns with the culture of the full-time online

faculty participants, who collaborate and share practices by meeting on the campus site

during the week. Also, the response alludes to an expectation of leadership support for

changing practices. These findings may point to implications for training or leadership

development if faculty perceive that institutional support would prompt trying a new

method of feedback.

The last theme that emerged from this ninth questionnaire response was

“Feedback.” Most institutions allow students to evaluate faculty near or after the course

ending. In addition, students may take the initiative to communicate on their own to the

instructor about their thoughts on the feedback methods. For some instructors, the kind

of feedback received from students determines or fortifies the instructor’s current

feedback method. For example, “I will always be open to trying new strategies and
adapting to better assist my students. This means being open to feedback and ideas of

others whether that be peers or even students” (Participant 12).

Knowing student preferences from these data helped determine faculty methods.

The tenth question presented to faculty was designed to capture the factors that

connect a student concern to a classroom element. Four themes emerged from these

responses: Discussion Questions (16), Assignments (14), Existing resources (13), and

Instructor-added (7). Video participants responded with Feedback (1) and Existing

Resources (1). A prominent theme was Discussion Questions, or DQs. Faculty remarks that alluded to the discussion question were assigned to this theme. One participant said

this about the paper being reviewed:

The issue the student has – formatting – is something that came up in Topic 2,

and they have continued to use the classroom resources, to show considerable

improvement. While they are not formatting everything correctly the issues are

very minor. It is clear that they get the general idea (Participant 4).

This instructor is validating the regular class discussion and how that can be a platform

for future instruction. This was the most common theme among the responses to this

question.

Following discussion questions, the second theme that addressed this tenth

questionnaire response was “Assignment,” which means that the participants expressed

that a particular student concern was limited to the assignment or its boundaries. This

meant that any concern with the assignment was not directly related to a

misunderstanding of a course concept, but may have surfaced as an isolated error. One

participant’s response illustrates this, “This concern is only related to the assignment
itself. There are no other classroom elements that are related or that specifically address

these concerns” (Participant 11). Other instructors did not concur that an issue was

specific to an assignment, because they chose to address some common mishaps prior to

the students’ attempting the assignment. For example, “I picked up on this potential

concern – supporting paragraphs – in the topic 3 DQ 1, and have provided the students

with resources to help ensure they are working on developing their topic” (Participant

4).

The last two themes that surfaced from this response were "Existing Resources" and "Instructor-added," listed in order of frequency. Thus, for this specific question, instructors more often believed the existing class resources were sufficient, compared to adding their own resources. One participant specified how they added course materials to address

common student issues:

The (religion) course is a content-only course. Skills are not part of the

curriculum. It is assumed that by the time they get to this class, they will have

learned thesis and essay writing skills. However, because students still struggle

with thesis statements, I have added skills training to the discussion forum to

help students work on thesis writing (Participant 2).

Although participants viewed instructor-added materials and classroom materials as

connections to student success, the greatest factor of influence was the Discussion

Forum, where instructors could post resources and engage with students.

The eleventh question was about trends that affected faculty practices. Faculty

did agree that trends affected their practices, citing how the volume of a particular

student issue dictated their actions. Like other responses, “Instructor-added” was a
prominent theme, meaning that what instructors added to the class was swayed by these trends. Faculty cited volume to indicate their attention to these trends.

One participant expressed it this way:

Since I only teach one course at a high volume, trends are very important and

affect my practices. Individual concerns can often turn in to trends, so since I

have a concern come up more than once, I tend to change a practice or add a

practice to assist with that concern (Participant 10).

Participants did not specifically state that they chose a particular feedback method due to high volume, but rather that, because of the large amount of student work they see, volume has a greater influence on their remediation and classroom practices.

Participants reported the importance of student contact as a result of trends.

When juxtaposing this response with the earlier repeated theme of "Student Needs," this suggests that whether faculty use video or text feedback, both are doing what they believe is best for students. More discussion is needed to understand how these factors intersect with the different methods, and this will be addressed in chapter 5.

The final questionnaire response showed how faculty might apply these trends to

a broader context. The themes that emerged have been expressed in other questionnaire

responses, but in a different sequence. The most frequent theme was “Instructor-added”

resources, followed by Co-workers, Does Not Apply, Tech Tools, and Engaging in Forum.

Thus, faculty participants most commonly stated that their own prepared resources were

the most valuable practice for extending their remediation for students. One participant

stated:
Typically, if I see a student concern multiple times, I try to provide resources

ahead of time to help future students to not have the same concerns. These might

include video tutorials, a weekly checklist for assignments, a Frequently Asked

Questions section in my announcements, etc. (Participant 11).

An example of a broader context would be institutional changes, such as revising the

course curriculum, or shaping practices for the division of online full-time faculty.

Focus groups. The focus group served as the third data source for this study.

The focus group took place via Zoom on March 27, 2018. Zoom is an internet

conferencing tool that allows remote participation and the session to be recorded. Ten

participants were presented with consolidated themes and results from the questionnaire

responses and disaggregated data from the paper feedback. This allowed the participants

to view the paper feedback trends according to the different methods of either text or

video feedback. The focus group discussions resulted in new emphases on particular areas of data, including the emphasis on thesis statements, the theme of critical thinking that existed only in the text feedback, and the differences in feedback according to Hattie and Timperley's (2007) levels.

The focus group participants agreed with the data, but some were surprised that

the thesis statements received less focus. The second question on the instrument asked

the instructors how frequently each paper issue occurred, and for "Thesis Statements," one of the paper themes, this response was not high. Participants 7, 1, and 12 expressed surprise that faculty participants rated the concern with "Thesis Statements" as low; however, Participant 11 indicated that this mixed perception could stem from the different demands of the courses taught. She said,
I think a lot of it depends on the level you’re teaching at too, because I know

with some of the papers I chose, I teach (intro to education), so the thesis wasn’t

as important as teaching them formatting and developing the paper. So, I think

that’s part of why there was a mix of ideas there too (Participant 11).

Another participant stated that when she responded to that question, she was aware of

how much planning and preparation she did prior to the student assignment and therefore did not have concerns regarding students' proficiency in writing the thesis

statement. She said,

For the two essays in (religion class), where they write thesis statements, I try to

get them to practice their thesis statements in the discussion leading up to the

paper. And so we go back and forth and I fix them before they submit the papers.

So then they’re better in the actual essay (Participant 2).

This participant prepares students for the thesis statement prior to the assignment,

whereas others view the draft feedback itself as a mechanism for instruction. A second

surprising element among the participants was the theme of “Critical Thinking.”

The “Critical Thinking” theme emerged after careful analysis of both the video

and text feedback. The disaggregated data showed significant differences in the

proportions of feedback and the attention to the themes. Among those who gave text

feedback, the second most prevalent theme was identified as “Critical Thinking,” which

reflected comments that were not tied to a specific graded element in the paper. These

comments demonstrated that instructors interacting with students on the basis of the

subject or class content, showing that the instructors are using feedback as a stand-in for

what might take place in face-to-face instruction. For example, “The scenario about Joni
is a true story. You might enjoy checking out her website and seeing how she has made

an impact on the world, particularly for others with disabilities.

http://www.joniandfriends.org/” (Participant 2).

For this assignment, the students were assigned to write about an ethical dilemma and were

given various scenarios. This comment was not directed towards the student’s success or

failure on any rubric element, but was crafted to promote enthusiasm for the topic. All

participants were surprised that a theme would exist in text feedback, but not in video.

The implications of this will be discussed in greater detail in chapter 5.

The participants had different reactions to how the paper feedback comments

were categorized according to Hattie and Timperley’s (2007) levels and the proportions

of attention to the various themes. For all levels, the participants who gave text feedback

focused more on content than on any other theme. For the participants who gave video

feedback, the focus on content was greatest in FR and in FT, while the comments on

mechanics were the most prevalent at the FP level. Overall, instructors who provided

text feedback focused more on paper content. Participant 10 offered her perspective on

this difference:

When I think about when I use video feedback, I can tell, like, just as it shows

here in the chart, that it is often formatting because I’m trying to show them,

because maybe they’ve displayed that they don’t know how to do it and I’ve

tried to tell them through text, I’m showing them, this is how we indent, this is

how we double space (Participant 10).


It is possible that the affordances of video feedback provide greater direct instructional opportunities than text, which could explain why there is less emphasis on content or critical thinking remarks.

Summary

This study explored faculty feedback reflections and how those reflections might

influence their teaching. Three research questions were posed with three data sources

supporting these questions. First, the 12-question faculty questionnaire was divided into

three sections, with four questions each targeted to specific research questions. Next, the

classroom feedback was analyzed in two different ways, which meant the data supported

both the first and second research questions. The classroom feedback was analyzed

according to themes that surfaced from instructor comments. The classroom feedback that supported the second research question was analyzed a priori, according to Hattie and Timperley's (2007) levels of feedback. The third research question was supported by the questionnaire responses and the focus group. These three

sources served to triangulate the data, as the focus group discussions reinforced and

emphasized themes and important trends.

For this study, several notable trends and themes surfaced. First, the paper

feedback analysis revealed that among all themes and trends, faculty participants

focused the most on content, or the student’s paper development. The participants who

provided feedback in text form showed content as the most prevalent theme in Hattie and Timperley's (2007) FT and FR categories, while the participants who gave video

feedback showed content at the highest concentration in the FR level. The paper

feedback themes also revealed that critical thinking comments were present only from
participants who used text comments and were almost as frequent as content at the FR

level of feedback. This theme was not present among video feedback instructors. These

were the dominant emerging themes that surfaced from a review of the classroom papers; the focus group data and questionnaire responses served to triangulate the data.

Important themes that surfaced from the questionnaires related to participants’

perceptions of how they respond to student issues and how their feedback reflection

helps them address the needs in the classroom. First, the theme of “Instructor-created

resources” had the greatest prevalence and demonstrated that most participants

remediated the paper concerns by creating new resources for students. This addresses

the study’s overarching theme of connecting feedback reflections to classroom practices.

Through these responses, the participants indicated that their classroom resources are prepared as a direct result of their reflection on the feedback they give students.

An interesting and recurring theme in the study also related to the methods

instructors use. As this study explores how faculty feedback methods might influence

their instruction, the data showed that whether participants provided text or video

feedback, both groups believed that their particular feedback method was best for students and was the most efficient in terms of managing paper volume. These seemingly conflicting responses

point to numerous implications that will be addressed in chapter 5.

Finally, as this study was grounded in Vygotsky’s (1978) Zone of Proximal

Development, Hattie and Timperley's (2007) framework served as a measure of feedback's formative value and the goal of learner self-regulation. Because most instructor comments fell within the FP and FR levels, the ZPD theory (Vygotsky, 1978) seems to be actualized through instructor feedback. The data's results were examined in this chapter, and those results necessitate greater exploration in

terms of the implications, conclusions, and recommendations for future studies. These

topics will be discussed in chapter 5.


Chapter 5: Summary, Conclusions, and Recommendations

Introduction and Summary of Study

This chapter will begin with a summary of the study’s ten strategic points. The

study’s broad topic examines how instructors reflect on their feedback and how those

feedback practices influence their instruction. Supporting literature revealed gaps in

feedback studies, including how feedback is largely explored from the student’s

perspective (Borup et al., 2015); population gaps were cited by Atwater et al. (2017), who studied graduate student perceptions of video feedback and recommended future

studies on different populations. The theoretical foundation was Vygotsky’s (1978)

Zone of Proximal Development, and the study used instructor feedback as a mechanism

to examine its scaffolding and formative properties. Literature themes included:

Historical perspective of online learning, Feedback in online education, Formative

feedback, Feedback and social presence, Social presence and learning outcomes,

Feedback methods, Feedback and learner populations, and Reflection. This was a

qualitative case study design, which was the best approach for understanding the

complexities of the phenomenon of instructor feedback reflection and its influence on instruction.

The study’s problem statement was: It was not known how online undergraduate full-

time faculty perceived the influence of their instructional feedback practices on their

reflective thinking, and hence, their instructional strategy at a southwestern higher

education institution.

The population was full-time online undergraduate faculty at a southwestern

higher education institution. The study’s three research questions were: (1) How do

online undergraduate full-time faculty perceive the influence of their instructional


feedback practices on their own reflective thinking? (2) What methods of feedback do

undergraduate full-time online faculty use at a southwestern higher education

institution? (3) How do online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches? The phenomenon explored was the perceived influence of feedback practices on teacher reflection and

subsequently, the influence of reflection on instructional strategy. This was a single case

study design. The study’s purpose statement was: The purpose of this qualitative

exploratory single case study was to examine how online undergraduate full-time faculty

perceived the influence of their instructional feedback practices on their reflective

thinking, and hence, their instructional strategy at a southwestern higher education

institution. Data collected included questionnaire responses from faculty participants,

classroom feedback, and a transcript from the focus group session. The data were

analyzed according to the thematic approach.

The purpose of this qualitative exploratory single case study was to examine how

full-time online undergraduate faculty perceived the influence of their instructional

feedback practices on their reflective thinking, and hence, their instructional strategy at a

southwestern higher education institution. Studies related to online feedback have

focused on instructor presence and student preferences (Tunks, 2012), but this study

adds the dimension of instructor reflection as a lens to how their own feedback informs

instruction. Reflection allows instructors to analyze their own learning process, applying

changes to their instruction (Wilson, 2013). Moreover, as technology provides new

methods and online enrollment expands in higher education (Pattison, 2017), newer

technologies provide options of video feedback; the kinds of feedback methods used could also play a role in faculty instruction and, therefore, carry implications for classroom instruction.

This study focused on the seminal theory of Vygotsky’s (1978) Zone of Proximal

Development (ZPD), which posits that instructors provide scaffolding to advance

students to this zone where independent learning takes place. To operationalize this

theory, the participants’ classroom feedback was analyzed within the framework of a

supporting theory, based on Hattie and Timperley’s (2007) four levels of feedback. This

framework allows feedback to be examined within the context of its contribution to self-

regulation, which is consistent with Vygotsky’s theory. If instructors endeavor to lead

students to this zone, then how they reflect on their feedback and how their feedback

methods influence instruction are valid explorations for higher educational institutions.

The following research questions guided this study:

RQ1: How do online undergraduate full-time faculty perceive the influence of their

instructional feedback practices on their own reflective thinking?

RQ2: What methods of feedback do online undergraduate full-time faculty

use at a southwestern higher education institution?

RQ3: How do online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches?

This study is significant because it explores how faculty reflection might

influence instruction. As more higher education institutions expand their online

offerings (Allen & Seaman, 2016; Tunks, 2012), instructor feedback has become

increasingly prominent. As instructors reflect on these practices, this reflection has the

capacity to shape thinking patterns and future actions (Bennett et al., 2016). In this
particular population of online full-time faculty, the face-to-face daily interaction at a

campus facility could foster peer collaboration around these influences. Thus, this study

could provide valuable implications for institutional faculty development in either

feedback delivery methods or reflection as a way of improving the formative value of

instruction.

The case study summary below will discuss the findings and conclusions of the

study’s significant themes as they apply to the study’s research questions. This section

will also discuss how these themes and findings relate to the study’s theories of ZPD and

Hattie and Timperley’s (2007) levels of feedback. In this study, chapters 1, 2, and 3

focused on the study’s theoretical foundations and the background to the problem that

the study addressed, along with the proposed methodology. The following section

will illustrate how these findings relate to the propositions expressed in these chapters

and the significance of the findings and their capacity to advance knowledge in the areas

of instructor feedback.

Summary of Findings and Conclusions

This section will detail the significant and non-significant findings of the study

as they relate to the undergirding theories and how the gaps are addressed. This study

was about how online full-time undergraduate faculty feedback reflections influence

their teaching practices. In addition, the study explored two different feedback methods,

as participants used either text or video feedback. The conclusions of how these methods

intersected with faculty reflections and to what extent they influenced their practices will

be shared as each research question is presented.


RQ 1. How do online undergraduate full-time faculty perceive the influence of

their instructional feedback practices on their own reflective thinking? To address this

question, three data sources were analyzed. Faculty participants were instructed to

complete a questionnaire during the course of their daily feedback while reviewing four

different papers. The first four questions were aligned to this first research question. In

addition, an extensive review of paper feedback was conducted to examine grading trends, to compare the questionnaire responses against the actual instructor feedback, and to align the kinds of feedback given with faculty reflections and their influence on teaching. Focus group discussions

served to confirm or challenge the data presented that aligned to this research question.

The themes from the questionnaire, papers, and focus groups will follow.

Shared themes emerged from both the first questionnaire responses, which identified paper concerns, and the paper feedback review. These themes were Development,

Formatting, Mechanics, Thesis Statements, Third person, and Word Count. An

additional theme that came from the paper review was Critical Thinking. These themes

support the study’s foundations by showing greater insights on faculty reflection, how

faculty social presence is promoted, and how reflection varies and leads to problem

solving.

Development was the dominant theme in both the paper feedback review and the questionnaire responses. For the purposes of this study, development refers to the

paper’s general content, how a student might support their claims through evidence from

sources, examples, reasons, or details. The thesis statement theme also offers instructors

more opportunities for formative remarks. Formative feedback has greater potential to
lead to student learning whereas feedback on the other more objective themes would

only give students a “quick fix.” By contrast, comments in the development or thesis

statement category provide more opportunities for students to learn the concept in a

variety of ways, that is, a feedback comment related to development or the thesis

statement does not correct a wrong, but rather, provides scaffolding for students since

there is not just one way of addressing these subjective issues. Therefore, instructors

focused more on instruction and guidance in their feedback comments. This

scaffolding is consistent with what Vygotsky (1978) proposed in the ZPD theory. This

attention to scaffolding supports feedback’s formative value, as Mirzaee and Hasrati

(2014) posited that formative feedback provides guidance for students.

If attention to higher order paper issues like development and the thesis

statement is significant, then of equal significance is the lesser proportion of attention to

the other paper issues. When participants were considering paper issues, the themes of

formatting, mechanics, third person, and word count were included, but with less

attention than the higher order theme of development. These remaining themes have

objective measures and focus on surface-level errors. If a student is missing a comma in

an in-text citation or misuses a capital letter, the feedback only moves the student from

incorrect to correct. Thus, less learning is taking place according to Vygotsky’s (1978)

theory of scaffolding, where feedback would serve as scaffolding towards self-

regulation.

Yet theorists and prior studies are mixed on feedback’s outcomes based on these

themes. Feedback largely has positive outcomes for students (Abaci, 2014; Mirzaee &

Hasrati, 2014; Sims, 2016), but instructors must be mindful of the individual’s level in
order for successful movement to the ZPD. Per Li and Li (2012), instructors must be mindful to give feedback at the right level; for some students, feedback that is too complex is less meaningful. Hattie and Timperley's (2007) theory also supports the more powerful effect of feedback aimed at processes compared to surface task information. For some students, feedback that addresses surface-level errors can improve task confidence and, therefore, student efficacy, and lower performers reported satisfaction in identifying and correcting errors (Lawanto, Santoso, Lawanto, & Goodridge, 2017). However, this contrasts with Ellis and Loughland (2017), whose

study found that feedback cannot be formative if it is disproportionately focused merely

on how a paper issue might be fixed. The foundations of this study focused on

feedback’s formative value and its theories grounded the study. Sadler (1989) states that

feedback is formative when the work is appraised and broader, general principles are expressed that can apply to future works. The participant themes, with the most attention to development but with the more objective themes not neglected, align with these

seminal theorists presented in chapter 2.

The reflection component of this first research question was addressed through

the alignment between what faculty participants expressed as concerns and their actual

feedback on their papers. According to Musolino and Mostroni (2005), reflection

prompts instructors to evaluate the efficacy of problem solving strategies. This kind of

reflection is seen in how instructors view paper issues and shape their instruction. For

example, participants expressed different ways to deal with one theme, showing

reflection taking place, in accordance with this theory. Prior studies, including Wade

(2016), used journals as a way to capture reflection during feedback. This
study extends those findings to show outcomes of reflective practices for online

educators.

The theory of social presence plays a role in this study, and its tenets are

foundational to examining feedback in an asynchronous environment. Social presence

can be defined as the degree to which one is perceived as authentic in a mediated

environment (Gunawardena & Zittle, 1997). It is possible that the instructors’ chosen

methods can enhance social presence. In this study, participants who used video

feedback bridged what Borup et al. (2015) refer to as the online environment’s inherent

separation between student and instructor.

One finding demonstrated that faculty who give text feedback may also reflect

on social presence. The theme of critical thinking was seen only in text feedback.

Because these comments were unrelated to a paper issue, these may demonstrate

strategies to improve social presence for those who give text feedback. As noted in this

study’s earlier chapters, social presence is critical to student success in an online

platform (Hostetter & Busch, 2013). Thus, the emergence of the critical thinking theme

could demonstrate a connection between instructor reflection and enhancing social

presence. A review of classroom papers revealed that these critical thinking comments were repeated across papers, indicating a consistent trend between the

questionnaire papers and the additional papers reviewed as artifacts. Because they were

not specific to a particular paper issue or the student’s success or failure, they were

appropriate to duplicate as a way to connect with students.

The theories of reflection grounded this study, and the data that were aligned with

the first research question demonstrated the results of reflection. According to Farrell
(2004), reflection can be described as a loop between the process and the questions.

Reflection happens when one is asked: What am I doing in the classroom? Why am I

doing this? What is the result? Will I change anything based on the previous

information? In the online classroom, the feedback does serve as the instruction, so it is

fitting that these kinds of questions were directly posed to the faculty as they were

reviewing their feedback.

The final themes with significance to this research question include instructor-prepared resources, class resources, instructor feedback, other instructors, and

student advisors. In addition, the theme of “None” was prominent, meaning that

instructors did not plan to change their current feedback practices. To address these

questions, instructors did reflect on their feedback during grading, so connections were

forged between their grading and classroom practices. A gap presented in this study was

the need to examine feedback from the instructor's perspective (Borup et al., 2015). This study's

particular population was online full-time undergraduate faculty, who are focused on

first-year-sequence courses. Several themes were unique to that population and their

collaborative environment. For example, when participants noted that they receive

influence from either other instructors or student advisors, these factors could be less

prevalent among remote or adjunct instructors, since these online faculty work on campus

alongside university counselors. While face-to-face instructors collaborate, share and

influence one another, this element is missing in many asynchronous online programs;

therefore, these themes have significance as they are directly applicable to this

population.
RQ 2. What methods of feedback do undergraduate full-time faculty use at a

southwestern higher education institution? The significant themes that surfaced to

address this question will be discussed in this section. Questions 5-8 on the

questionnaire, paper feedback analysis, and the focus group transcript served as the data

sources to support this research question. Faculty participants either used text or video,

which addressed the fifth question. The sixth and seventh questions shared themes of

time and efficiency, what is best for students, and traditions and norms. The paper

feedback analysis themes were a priori, based on Hattie and Timperley’s (2007)

feedback model.

Faculty participants reported that time and efficiency was the highest determiner

of their method of feedback selection. Yet, this theme is inconclusive, since it came

from faculty who provided feedback through text and through video. Prior studies show

conflicting evidence on the efficiencies of video or text feedback. Video can be faster

and therefore less fatiguing for instructors (Ali, 2016; Atwater et al., 2017; Sims, 2016).

The problem that grounded this study focused on how online faculty in higher education

strive to meet the demands of increasing enrollment (Borup et al., 2015; Planar & Moya, 2016; Wright, 2014), so understanding which method is more efficient would lend insight into that problem. However, few online faculty have expressed interest in new

technology (Harrison et al., 2017), and in this study, most participants adhered to text

feedback, citing familiarity as their rationale. These findings could point to the need for

institutional training in new technologies.

As the second most prominent theme, faculty participants responded that they

selected their feedback methods based on what is best for students. The evidence in this
study showed mixed findings on which method is best for students. Video feedback

increases personalization between the student and the instructor (Atwater et al., 2017;

Ellis & Loughland, 2017; Frisby et al., 2013; Wade, 2016) and is viewed as supportive

and formative (Ali, 2016). In addition, video feedback results in fewer

misunderstandings (Atwater et al., 2017). Nevertheless, other studies show that students

prefer text feedback over video (Borup et al., 2015; McCarthy, 2015). Video and text

are merely methods of feedback; feedback’s positive outcomes can occur through either

practice. Thus, if feedback is formative, personal, and addresses student gaps, and if

instructors find their preferred method leads them to these outcomes, then the

significance is based on individual preference. The remaining significant questionnaire

theme was identified as tradition or the norm of the institution. This means that faculty

are giving feedback based on what is expected and on what is comfortable for them.

This theme is significant in terms of how instructors may view the institutional support

of funding or training for newer methods.

The a priori themes from the paper analysis demonstrated how faculty

reflections supported Vygotsky’s (1978) Zone of Proximal Development theory, which

grounded this study. Through formative feedback, instructor remarks were analyzed

according to Hattie and Timperley’s (2007) four levels of feedback, a model designed to

demonstrate feedback’s effectiveness. Faculty remarks were at the task level (FT),

process level (FP), or self-regulation level (FR). In addition, there is a self-praise level

(FS), but this study focused on the first three, as explained in earlier chapters. Feedback

at the self-regulation level creates the capacity to increase confidence and to prompt

internal assessment for learners. According to Hattie and Timperley (2007), more
effective learners possess stronger FR strategies, whereas less effective learners seek out

feedback at the surface task level. This theory aligns with the Vygotskian movement to

the ZPD, which results in self-regulated learning.

At the task level, the highest concentration of feedback focused on development.

This means that faculty appraised student content for its effectiveness. This is interesting

because paper issues related to development are generally higher order skills; however,

instructors may have wanted to briefly touch on successful elements to serve as

reinforcement. For example, an instructor might glance over a quote in a paragraph and

quickly let the student know they did a “good job” including their quote. Of the three

levels analyzed, both text and video participants used the FT level the least. This lesser

use is consistent with the theories that a greater concentration on FR promotes

movement for student learning.

Data on both text and video feedback show a greater concentration of FP and the highest concentration in FR. There is an inverse relationship in the concentration of formatting comments: for text feedback, most formatting comments are

addressed at FP and for video feedback, formatting is addressed more often in FR. Thus,

text feedback participants may have posted a prepared example or link to show students

how to address a formatting issue, whereas the video medium may have allowed

participants to challenge the student at a deeper level. FR comments can be questions or

statements about an issue. They are designed to let students generalize the remarks to

solve the problem on their own. Because video feedback is more personal (Atwater et

al., 2017), giving feedback at this level may be more effective through video rather than

through text.
Feedback at the process level focuses on “how” something should be done. Prior

research has noted the disproportionate attention to this level and that feedback cannot

be formative when limited to the process level (Ellis & Loughland, 2017). For video

feedback, comments in mechanics were highest in this area, whereas for those who gave

text feedback, the remarks about development remained the highest. The visual video

format allows a mini lesson on corrections in mechanics, making quick work of showing

how to address a mechanical error. Since feedback in FP is not the highest for either set

of participants, Ellis and Loughland’s (2017) concerns about the disproportionate

attention to process are not evident among this data.

The last a priori theme is FR, which suggests that feedback is focused on self-

regulation (Hattie & Timperley, 2007). Sadler (1989) notes that successful feedback

must identify broader principles that can apply to later works for its formative value to

be operationalized. With the concentration of most remarks in this level for both text and

video feedback, the findings show this emphasis on self-regulation and therefore,

formative value. These data are also consistent with the findings of Nicol and Macfarlane-

Dick (2006) that formative feedback is connected to self-regulated learning. Because

both text and video feedback show the highest concentration at this level, it is not

conclusive which method is more conducive to FR, but the significance of the findings

may link back to the individual instructor’s preferences.

RQ 3. How do online undergraduate full-time faculty perceive the influence of

reflective thinking on their current and future instructional approaches? The last research

question is an amalgamation of how the faculty feedback methods might influence their

instruction, and the data that support this question were the last four responses from the
questionnaire and the focus group transcripts. For this discussion, the responses with the

most frequency will be synthesized for a sharper discussion of the data’s magnitude,

consistent with thematic coding (Clarke & Braun, 2013). Most faculty participants stated

that they would not change their feedback methods, and that they felt that the course

discussion questions were the greatest factors influencing instruction. Faculty

participants also believed that student need was the primary driving force of their

decision. In addition, they cited instructor-prepared resources as an effective practice

that could apply to a broader context. There is a correspondence between instructors' unwillingness to change their methods and their basing those choices on what is best for students. This theme rose above the challenge of managing paper volume,

showing that instructors first want to apply what they believe is best for students.

To conclude, this study focused on how faculty reflect on feedback, which is an

alternate paradigm to the plethora of research on student perceptions and preferences of

feedback. As Borup et al. (2015) expressed a gap in the literature about how feedback

should be examined from the instructor’s perspective and through reflection as the

catalyst, this study served that purpose. The problems presented in the study’s proposal

addressed the need for instructor reflection and whether instructor feedback methods influence

their instruction. These propositions were based on theories of social presence,

Vygotsky’s learning theories and theories associated with feedback. The data findings

were consistent with many of these theories.

Implications

The topic of this study was the perceived influence of feedback practices on

teacher reflection (and subsequently, the influence of reflection on instructional


strategy). This study focused on the unique population of online undergraduate full-time

faculty at a southwestern higher education institution. Although online instructors in

higher education often work remotely, at the host institution, faculty meet and work at a

campus facility, collaborating and sharing strategies. For this study, questionnaire

responses, classroom feedback and focus group discussions were reviewed. The findings

offer meaningful theoretical, practical and future implications. Despite some

weaknesses, the study had many strengths.

Theoretical implications. This study’s data were triangulated. The faculty

questionnaire was designed to probe participants’ reflections and perceptions on their

own feedback, while the classroom data served as a critical artifact to compare against

those perceptions. Next, the focus group confirmed or questioned the findings,

completing the triangulation process. Theoretical implications of the study include the

value of reflection, social presence, how self-regulation feedback supports Vygotsky

(1978), and how Hattie and Timperley’s (2007) framework was confirmed as an

effective way to assess feedback and its formative value.

This study flipped a common paradigm of research on feedback that focuses on

student preferences by putting the focus on faculty and how they reflect on their own

feedback, which, in the online classroom, serves as instruction. Ryan and Ryan (2013)

describe reflection as an intellectual stance that moves a daily practice to a state where

change can take place in a broader context. In this study, feedback serves as a daily

practice for this online population and the mindful step of reflecting on their feedback

could result in practices that could lead to student improvement. For example, through

faculty reflecting on the kind of feedback they provided, some were surprised to see
gaps or areas of emphasis. The faculty participants demonstrated that when their

reflection is actualized, they have a greater awareness of their practices, which could

either be fine-tuned or re-shaped towards a more successful practice.

The theory of social presence is foundational in studies of online learning.

Understanding social presence is critical due to what Moore (2012) refers to as the

transactional distance of online education. Given the empirical research that supports the

connection between feedback and social presence (Frisby et al., 2013), the findings from

this study could be interpreted to reflect differences in how instructors approach social

presence. In other words, social presence could best be promoted according to the

method that is most comfortable for the instructor. This was demonstrated through the

theme of critical thinking. Even with this theme not evident among video feedback

participants, those who gave video feedback had much higher word counts in their

feedback. This is consistent with Sims (2016) who reported that consistent with the

Media Naturalness Theory, that feedback with greater complexity increases the social

presence of the instructor and in this study, those who gave video feedback had higher

word counts than those who gave text; however, those who gave text comments had a

tool box of prepared comments that extended beyond actual paper issues, which allowed

instructors to share intellectual dialogue through their feedback. McGuire (2016) stated

that feedback can be the most significant contributor to developing social presence, and

faculty in both modalities seemed to be mindful of that goal.
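As a rough illustration of the word-count comparison noted above, a sketch like the following could contrast transcribed video feedback with text comments. The sample strings are invented, and the assumption that video feedback has already been transcribed to plain text is hypothetical; this is not the procedure used in the study.

```python
from statistics import mean

def word_count(comment: str) -> int:
    """Count whitespace-delimited words in a single feedback comment."""
    return len(comment.split())

# Invented examples; in practice, video feedback would first be transcribed.
video_transcripts = [
    "Okay, so looking at your thesis here, what I want you to think about is...",
]
text_comments = [
    "Revise the thesis to state one clear, arguable claim.",
]

print("Video feedback, mean words:", mean(word_count(c) for c in video_transcripts))
print("Text feedback, mean words:", mean(word_count(c) for c in text_comments))
```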

In addition to feedback’s prominent role in developing social presence, multiple

studies and theories support feedback’s potential for learning. The learning theory for

this study was Vygotsky’s (1978) Zone of Proximal Development. Although Vygotsky
developed this theory in accordance with children’s developmental levels, the ZPD is

receiving more attention in studies of higher education (Armstrong, 2015; McNiff &

Aicher, 2017; Roberts, 2016). Because the findings demonstrated that instructors

focused primarily at the FR level, which is feedback that promotes self-regulation, it

seems that these instructors are successful in scaffolding and moving students to this

zone. The consistency between the instructors’ perceptions of paper issues and their

actual feedback remarks supports their mindfulness in providing feedback that is

formative. Nicol and Macfarlane-Dick (2006) define high quality feedback as the kind

that emphasizes self-regulation for students.

These theories that support feedback’s self-regulation value must be considered

within the broader context of feedback’s effectiveness. Li and Li (2012) caution that

when instructors aim to go too far beyond the student’s proximal zone, feedback is less

meaningful. Feedback is only effective if the gap is also clearly defined (Paulson Gjerde

et al., 2017), which means that self-regulation comments must accompany feedback at

the task level, where the gap is defined. Other feedback theories posit that students find

success when feedback is given at the task and process levels, because these corrections

can also serve to increase task confidence (Hattie & Timperley, 2007). For either text or

video feedback, faculty participants demonstrated that most of their feedback

concentration was in the process and self-regulation levels, consistent with addressing

this gap. Yet there was also attention to surface level errors, demonstrating attention to

all levels of feedback, consistent with Hattie and Timperley’s theories.

Practical implications. This study’s findings reflected numerous practical

implications for higher education institutions. Because the population of full-time


faculty teach higher volumes of classes, awareness of their classroom practices could

result in improved practices where there are gaps, or sustained practices that are

successful. The implications from this study will benefit other institutions of similar

models. Any institution offering online courses must be concerned with the quality and

efficiency of its delivery (Planar & Moya, 2016). The practical implications will be

discussed in terms of training and institutional support of different methods.

As online education expands as a practical and cost-effective modality,

instructors may be burdened with heavier volumes of papers (Planar & Moya, 2016).

At the host institution, the full-time faculty typically teach four courses at one time. A

typical course volume is between 20 and 40 students. Tier instructors carry loads

between 100 and 120 students at one time. This population, therefore, represents a group

with a need for efficiency in grading without sacrificing quality in their attention to

students. In addition, because this population does not work remotely, they collaborate,

sharing strategies informally and in formal settings of professional development. In this

culture of innovation, several instructors use various methods to improve teaching and to

promote social presence. Instructors employ personalization through programs for video

discussion, use video conferencing, and use an application called Loom for video

feedback.

Two of the twelve participants used video feedback and those two participants

stated that they would continue with this method based on its efficiencies, quality of

feedback and its personalization, while other participants stated they would try video

feedback with institutional support or if it were required. The implication is that faculty

may try new methods with institutional support, which is consistent with Harrison et
al.’s (2017) findings that online faculty may be hesitant to learn new technologies. The

practical implication is that leadership should consider incorporating training in video

feedback and, if necessary, funding for any technological tools.

Another practical implication that emerged from these findings is the importance

of reflection. This study’s data showed that when faculty are tuned in to the kind of

feedback they are giving, they give attention to specific practices in their classrooms.

Although this faculty population works in a collaborative environment, faculty tend to

operate in isolation, often not aware of the kind of feedback their peers give. The

participants in this study were able to see their own feedback analyzed and categorized

and expressed surprise at some findings. For some, a visual analysis of their feedback

and where it is concentrated could help faculty effectively self-assess. Professional

development could include faculty reviewing their own or a peer’s feedback in the

Hattie and Timperley (2007) levels to understand the concentration of their own

feedback.

Faculty who provide video feedback could benefit from transcribing their

feedback and self- or peer-reviewing the content to ensure that all areas of student papers

are developed. When feedback is unscripted, the natural nuances increase the social

presence, but the disadvantage is that the instructors tended to include superfluous

narrative and students may miss the instructor’s key points. With the critical thinking

component missing from the video feedback, instructors giving feedback through this

method might benefit from departing from student paper issues to discuss the content, as

the instructors delivering text feedback do. Thus, even with all its advantages,
instructors who give video feedback could evaluate their own feedback for a greater

understanding of what is truly best for students.

Future implications. This study was about exploring how faculty feedback

practices might inform teaching and whether the methods contributed to that. There are future

implications for this study, as the area of online feedback has numerous possibilities.

The feedback theories of Hattie and Timperley (2007) and the learning theories of

Vygotsky (1978) grounded this study; however, to understand the necessary components of feedback through other theories, future studies or trainings could focus on

other elements of feedback. In this study, the participants’ feedback was reviewed

according to their method of text or video, but it would be useful to examine how the

same participant would give feedback in text or in video format. Since faculty

participants stated that their methods were best, it would have been worth exploring

specifics of what makes their methods more efficient, including timing their video

feedback or text feedback.

Strengths and weaknesses of the study. This study explored the connection

between faculty’s feedback reflection and their classroom practices. As with any study,

there were strengths and weaknesses which shall be presented here. One strength of the

study was that a particular population was examined. Although several institutions offer

courses in online modalities (Caruth & Caruth, 2013), this unique model of online full-

time faculty drew robust data, owing to their consistent course loads of up to 150

students. In addition, faculty reflections could reflect the influence of peers, due to their

face-to-face interactions. Not only did this population address the gap in literature, but
the model of full-time online faculty could be replicated among higher education

institutions. Thus, learning more about feedback in this population was meaningful.

A second strength was using Hattie and Timperley’s (2007) levels as a way to

operationalize instructor scaffolding. When instructors could see their own feedback and

its content in certain levels, they had a clear understanding of the kind of feedback they

give and how it was focused at the process and regulation levels. The cross analysis gave

a broad picture of feedback and this same method would be useful for future

professional development or assessment of feedback. This framework could be

employed in a similar study to explore or evaluate faculty feedback.

A third strength is that this study focused on faculty thoughts, rather than students', thus filling a second gap in the literature and shifting the paradigm (Borup et al.,

2015). Measuring student preferences can be challenging, because what they prefer may

not align with theory and preferences can change with technology. Research findings

may not keep pace with advances in technology, so any student preferences cited in

research could possibly change. As an example, Ali (2016) noted that students found

retrieving video feedback challenging, but advances are making this easier. Therefore, a

researcher must examine these student preferences for text over video feedback with

these rapid technological advances in mind. On the other hand, to view instructor

thoughts about their feedback reaches deep into andragogy and examining this mindful

practice reveals how instructors might fine tune their practices.

A final strength of this study is its rigor. A case study can be viewed as weaker,

due to its limited scope (Yin, 2014). However, several components of this study

increased the rigor. First, the faculty selected teach high volumes of courses. Five of the
twelve participants are tier instructors, facilitating six to eight classes at a time. Many

institutions rely on adjunct instructors, whose teaching loads would be less significant.

Thus, as faculty responded to the questionnaire, their volume and experience yielded

more meaning. A second component of rigor was built into the questionnaire, which had

twelve questions. Each of the twelve questions was built around four different papers.

This means that faculty had the opportunity to respond to four different papers, which

expanded the data that established codes and themes.

As with any study, there were weaknesses. First, the study was conducted

at one higher education institution, so its findings may not be reflective of the results

that would surface at other institutions. The unique culture of the full-time online faculty

may not be representative of the operations of other campuses. A second weakness was

that the researcher works among the full-time online faculty, serving as a faculty chair.

The relationship the researcher has with the faculty could potentially have biased their

responses. Understanding this, the researcher ensured that the dissertation committee

served as an oversight and that the data collection and analysis were rigorous enough to

achieve meaningful results, despite the potential bias.

To conclude, the careful systematic process and guidance from the dissertation

committee resulted in a valid and reliable study. The weaknesses were inherent in

studies where the researcher works among the participants, and the awareness and

attention to this resulted in strengths that outweighed the weaknesses, such as the rigor

of the analysis. All strengths and weaknesses should be considered for future studies.
Recommendations

This study was about the perceived influence of feedback practices on teacher

reflection (and subsequently, the influence of reflection on instructional strategy). There

were significant findings expressed through disaggregated data that showed themes and

an analysis of the feedback levels according to Hattie and Timperley’s (2007)

framework. With the foundation of Vygotsky’s (1978) ZPD learning theory, data

showed connections between faculty’s feedback and their teaching. The study’s findings

offered significance, but also revealed new areas of study.

In this study, the data reflected text and video feedback for the purposes of

exploring how these differences might impact faculty reflection. Future studies could

focus on instructors using both forms of feedback to compare the differences in student

preferences or how feedback addresses Hattie and Timperley’s (2007) levels according

to video or text. If the goal of a future study was to explore which method of feedback is

successful according to Hattie and Timperley’s (2007) framework, finding participants

who use both would accomplish this, without the mediating factors of instructor

personality.

Hattie and Timperley’s (2007) levels of feedback served as the framework of this

study. These levels are a component of a larger framework, which is Hattie and

Timperley’s (2007) stages of feedback. This theory posits that effective feedback must

include goal orientation, acknowledgement of the student’s gap, and future guidance

(Hattie & Timperley, 2007). A similar framework and analysis could be established to

determine if faculty feedback addresses all these elements and if these elements are more

often met through video or through text feedback.


Another recommendation would be a study comparing the themes between text and video feedback. The participants expressed surprise at the findings that some themes were more dominant according to the delivery method. For example, the theme of critical thinking was not present among the video participants. A study that

explores this or other themes and their dominance according to the delivery method

would be a logical extension.

A fourth recommendation for a related study would be to examine how faculty

use video in other areas of the class and if those videos influence their text feedback.

Such a study could more deeply probe faculty practices to determine the extent of how

an instructor video in one area of the class, such as in the discussion forum, could

influence how their paper feedback is perceived. Because social presence results from

faculty personality and psychological dimensions (Juvova et al., 2015), it is possible that if an instructor posts a video in one area of the class, their social capital increases to the extent that text feedback feels less distant.

A fifth recommendation emerged through the focus group discussions on the

levels of feedback. The participants viewed the data and noted that when giving video

feedback, it is possible that this format is more conducive to addressing surface-level errors. A

study that focuses specifically on the differences between video and text feedback with

respect to surface levels would yield interesting findings. While some research confirms

the effectiveness of identifying surface level feedback (Li & Li, 2012), feedback that

focuses on broader issues is considered more formative (Mirzaee & Hasrati, 2014).

A final recommendation would be to study the placement of comments. As noted

in this study, feedback is presented to students through a central comment box and on
attached essays. Due to the analytical framework selected, this study was limited to

sidebar feedback, given through Microsoft Word comments on student papers. But it is

possible that students do not open their papers and may only view the feedback

displayed next to their grade. Regardless of the quality of feedback or its modality, if

students do not review their paper feedback, these merits are lost. Most learning

management systems offer these two display options of feedback and it would be

valuable to understand the student perspective and to limit a study to comments in this

area of the classroom.

Summary

This study explored how online full-time undergraduate faculty feedback

reflection practices influence their instruction, along with their methods of feedback. The

population of this study was full-time online undergraduate faculty at a southwestern

institution, where faculty work in a face-to-face environment at a campus facility; this

faculty population teaches course loads between 80 and 150 students at a time. A

qualitative method was used and a case study was selected as the design. The researcher

used the thematic analysis approach to review, examine, and synthesize the data’s

findings. The three data sources were faculty questionnaire responses, actual feedback from

classes, and transcripts from focus group sessions. The theory that undergirded this

study was Vygotsky’s (1978) Zone of Proximal Development and Hattie and

Timperley’s (2007) framework on feedback levels served as the measure of how

feedback might fulfill that theory of learning.

The data showed that indeed, faculty do reflect on their own feedback and those

reflections contribute to their classroom practices. This was demonstrated through the
variety of themes that resulted. There was consistency between the concerns instructors

expressed and how these concerns led them to continually produce their own materials,

consider student needs, and refine their own feedback, considering it a part of their

instruction. Another major outcome is that whether instructors use video or text

feedback, both believe they are doing what is efficient and what works best for their

students. These findings have value for future related studies or replications.
References

Abaci, S. (2014). Direct and indirect effects of feedback, feedback orientation, and goal

orientations on students' academic performance in online learning (Order No.

3670284). Available from ProQuest Dissertations & Theses Global.

(1650593276).

Ali, A. D. (2016). Effectiveness of using screencast feedback on EFL students' writing

and perception. English Language Teaching, 9(8), 106-121.

https://doi.org/10.5539/elt.v9n8p106

Allen, I. E., & Seaman, J. (2016). Online report card: Tracking online education in the United States.

Almalki, S. (2016). Integrating quantitative and qualitative data in mixed methods

research--Challenges and benefits. Journal of Education and Learning, 5(3),

288-296. https://doi.org/10.5539/jel.v5n3p288

Angelo, T. A. (1995). Classroom assessment for critical thinking. Teaching of

Psychology, 22(1), 6-7. https://doi.org/10.1207/s15328023top2201_1

Arasaratnam-Smith, L. A., & Northcote, M. (2017). Community in online higher

education: Challenges and opportunities. Electronic Journal of E-

Learning, 15(2), 188.

Armstrong, C. (2015). In the zone: Vygotskian-inspired pedagogy for sustainability.

Journal of Classroom Interaction, 50(2), 133-144.

Atwater, C., Borup, J., Baker, R., & West, R. E. (2017). Student perceptions of video

communication in an online sport and recreation studies graduate course. Sport

Management Education Journal (Human Kinetics), 11(1), 3.

doi:10.1123/smej.2016-0002

Bain, J. D., Ballantyne, R., Mills, C., & Lester, N. C. (2002). Reflecting on practice:

Student teachers' perspectives. Flaxton, QLD: Post Pressed.

Bandura, A. (1977). Social learning theory. New York: General Learning Press.

Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and

implementation for novice researchers. The Qualitative Report, 13(4), 544-559. Retrieved from http://www.nova.edu/ssss/QR/QR13-4/baxter.pdf

Bennett, D., Power, A., Thomson, C., Mason, B., & Bartleet, B. (2016). Reflection for

learning, learning for reflection: Developing indigenous competencies in higher

education. Journal of University Teaching and Learning Practice, 13(2).

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the

black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1),

9-21. https://doi.org/10.1177/003172170408600105
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.

https://doi.org/10.1177/003172171009200119

Bloom, B. S. (1969). Some theoretical issues relating to educational evaluation. In R. W.

Tyler (Ed.), Educational evaluation: new roles, new means: the 68th yearbook of

the National Society for the Study of Education (part II) 68(2), 26-50. Chicago,

IL: University of Chicago Press.

Bond, J. (2011). Thinking on your feet: Principals' reflection-in-action. International

Journal of Educational Leadership Preparation, 6(4).

Bonnel, W. (2008). Improving feedback to students in online courses. Nursing

Education Perspectives (National League for Nursing), 29(5), 290-294.

https://doi.org/10.3928/00220124-20110715-02

Borup, J., West, R. E., & Thomas, R. (2015). The impact of text versus video

communication on instructor feedback in blended courses. Educational

Technology Research and Development, 63(2), 161-184. doi:10.1007/s11423-

015-9367-8

Brubaker, W. M. (2016). A teacher's journey: A phenomenological analysis of the lived

experience of beginning teachers (Order No. 10249178). Available from

ProQuest Dissertations & Theses Global. (1855473703).


Brumberger, E. (2011). Visual literacy and the digital native: An examination of the

millennial learner. Journal of Visual Literacy, 30(1), 19-47.

https://doi.org/10.1080/23796529.2011.11674683

Byrne, D., & Ragin, C. (2009). The Sage handbook of case-based methods. https://doi.org/10.4135/9781446249413

Camacho Rico, D. Z., Durán Becerra, L., Albarracin Trujillo, J. C., Arciniegas Vera, M.

V., Martínez Cáceres, M., & Cote Parra, G. E. (2012). How can a process of

reflection enhance teacher-trainees' practicum experience? How, 19(1), 48-60.

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher

Education, 31, 219-233. https://doi.org/10.1080/03075070600572132

Caruth, G., & Caruth, D. (2013). The impact of distance education on higher education:

A case study of the United States. The Turkish Online Journal of Distance

Education, 14(4), 121-131.

Charteris, J. (2015). Learner agency and assessment for learning in a regional New

Zealand high school. Australian and International Journal of Rural Education,

(2), 2.

Chen, W. (2014). Actual and preferred teacher feedback on student blog

writing. Australasian Journal of Educational Technology, 30(4), 402.

https://doi.org/10.14742/ajet.635

Clarke, V., & Braun, V. (2013). Teaching thematic analysis. Psychologist, 26(2), 120-

123.

DeCosta, M., Bergquist, E., Holbeck, R., & Greenberger, S. (2015). A desire for growth:

Online full-time faculty's perceptions of evaluation processes. Journal of

Educators Online, 12(2).

Deming, D. J., Goldin, C., Katz, L. F., & Yuchtman, N. (2015). Can online learning

bend the higher education cost curve? American Economic Review, 105(5), 496.

doi:10.1257/aer.p20151024

Denton, P., & Rowe, P. (2015). Using statement banks to return online feedback:

limitations of the transmission approach in a credit-bearing


assessment. Assessment & Evaluation in Higher Education, 40(8), 1095-1103.

doi:10.1080/02602938.2014.970124

Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE handbook of qualitative research.

Thousand Oaks, CA: SAGE Publications.

Dimova, Y., & Kamarska, K. (2015). Rediscovering John Dewey’s model of learning

through reflective inquiry. Problems of Education in the 21st Century, 63, 29-39.

Dixon, S. (2015). The pastoral potential of audio feedback: a review of the

literature. Pastoral Care in Education, 33(2), 96-104.

doi:10.1080/02643944.2015.1035317

Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media

richness and structural design. Management Science, 32(5), 554–571.

https://doi.org/10.1287/mnsc.32.5.554

Elison-Bowers, P. R., & Snelson, C. (2012). Ethical challenges of online teaching. In Teaching ethically: Challenges and opportunities (pp. 55-65).

http://dx.doi.org/10.1037/13496-005

Ellis, N. J., & Loughland, T. (2017). 'Where to next?' Examining feedback received by

teacher education students. Issues in Educational Research, 27, 51-63.

Eraut, M. (2006). Feedback. Learning in Health and Social Care, 5, 111–118.

doi:10.1111/j.1473-6861.2006.00129.x.

Falender, C. A., Shafranske, E. P., & Falicov, C. J. (2014). Reflective practice: Culture

in self and other. In C. A. Falender, E. P. Shafranske, & C. J. Falicov (Eds.), Multiculturalism and diversity

in clinical supervision: A competency-based approach (pp. 273-281).

Washington, DC, US: American Psychological Association. doi:10.1037/14370-

012

Farrell, T. S. (2004). Reflective practice in action: 80 reflection breaks for busy

teachers. Thousand Oaks, CA: Corwin Press.

Farrell, T. S., & Jacobs, G. M. (2016). Practicing what we preach: Teacher reflection

groups on cooperative learning. TESL-EJ, 19(4), 1.

Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis:

A hybrid approach of inductive and deductive coding and theme development.

International Journal of Qualitative Methods, 5(1), 80-92.

Forte, G. J., Schwandt, D., Swayze, S., Butler, J., & Ashcraft, M. (2016). Distance

education in the US: A paradox. Turkish Online Journal of Distance Education

(TOJDE), 17(3), 16. https://doi.org/10.17718/tojde.95102

Frisby, B. N., Limperos, A. M., Record, R. A., Downs, E., & Kercsmar, S. E. (2013).

Students' perceptions of social presence: Rhetorical and relational goals across

three mediated instructional designs. Journal of Online Learning &

Teaching, 9(4), 468.

Ganapathy, M. (2016). Qualitative data analysis: Making it easy for nurse researcher. International Journal of Nursing Education. doi:10.5958/0974-9357.2016.00057.X

Gardner, F. (2001). Social work students and self-awareness: How does it happen? Reflective Practice, 2(1), 27-40. doi:10.1080/14623940120035505

Gasparič, R. P., & Pečar, M. (2016). Analysis of an asynchronous online discussion as a supportive model for peer collaboration and reflection in teacher education. Journal of Information Technology Education, 15, 369.

Gargano, T., & Throop, J. (2017). Logging on: Using online learning to support the

academic nomad. Journal of International Students, 7(3), 918-924.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based

environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

Geitz, G., Brinke, D. J., & Kirschner, P. A. (2015). Goal orientation, deep learning, and

sustainable feedback in higher business education. Journal of Teaching in

International Business, 26(4), 273-292. doi:10.1080/08975930.2015.1128375

Gillett-Swan, J. (2017). The challenges of online learning: Supporting and engaging the isolated learner. Journal of Learning Design, 10(1), 20-30. https://doi.org/10.5204/jld.v9i3.293

Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597-607.
Gredler, J. J. (2016). Postsecondary online students' preferences for instructor

feedback (Order No. 10132030). Available from ProQuest Dissertations &

Theses Global. (1810181501).

Greenwood, M., Kendrick, T., Davies, H., & Gill, F. J. (2017). Hearing voices:

Comparing two methods for analysis of focus group data. Applied Nursing Research, 35, 90-93. doi:10.1016/j.apnr.2017.02.024

Gunawardena, C., & Zittle, F. (1997). Social presence as a predictor of satisfaction

within a computer-mediated conferencing environment. The American Journal of

Distance Education, 11(3), 8-26. https://doi.org/10.1080/08923649709526970

Hall, D. M. (2018). The power of feedback: An indicator of mentor effectiveness during

student teaching (Order No. 10274286). Available from ProQuest Dissertations

& Theses Global. (1904974572).

Harrison, R., Hutt, I., Thomas-Varcoe, C., Motteram, G., Else, K., Rawlings, B., & Gemmell, I. (2017). A cross-sectional study to describe academics’ confidence, attitudes, and experience of online distance learning in higher education. Journal of Educators Online, 14(2), 74.

Hart, C. (1998). Doing a literature review: Releasing the social science research imagination. Thousand Oaks, CA: Sage.

Harvey, M., Coulson, D., & McMaugh, A. (2016). Towards a theory of the ecology of reflection: Reflective practice for experiential learning in higher education. Journal of University Teaching & Learning Practice, 13(2), 1-20.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. http://dx.doi.org/10.3102/003465430298487

Hostetter, C., & Busch, M. (2013). Community matters: Social presence and learning

outcomes. Journal of the Scholarship of Teaching & Learning, 13(1), 77-86.


Huang, X., Chandra, A., DePaolo, C. A., & Simmons, L. L. (2016). Understanding

transactional distance in web-based learning environments: An empirical

study. British Journal of Educational Technology, 47(4), 734. https://doi.org/10.1111/bjet.12263

Jaeger, E. L. (2013). Teacher reflection: Supports, barriers, and results. Issues in

Teacher Education, 22(1), 89-104.

Janssen, F., de Hullu, E., & Tigelaar, D. (2009). Using a domain-specific model to

improve student teachers' reflections on positive teaching experiences. Action in

Teacher Education, 31(2), 86-98.

https://doi.org/10.1080/01626620.2009.10463520

Jing, M. (2017). Using formative assessment to facilitate learner self-regulation: A case

study of assessment practices and student perceptions in Hong Kong. Taiwan


Journal of TESOL, 14(1), 87-118.

Juvova, A., Chudy, S., Neumeister, P., Plischke, J., & Kvintova, J. (2015). Reflection of

constructivist theories in current educational practice. Universal Journal of

Educational Research, 3(5), 345-349. https://doi.org/10.13189/ujer.2015.030506

Kastberg, S. E., Lischka, A. E., & Hillman, S. L. (2016). Exploring prospective teachers'

written feedback on mathematics tasks. Conference Papers -- Psychology of

Mathematics & Education of North America, 783-790.

Kilburn, A., Kilburn, B., & Hammond, K. (2016). Capturing the quality of online higher

education using E-S-QUAL. Proceedings of The Marketing Management

Association, 72-73.

Langer, E. J. (2000). Mindful learning. Current Directions in Psychological

Science, 9(6), 220-223. https://doi.org/10.1111/1467-8721.00099

Lawanto, O., Santoso, H. B., Lawanto, K. N., & Goodridge, W. (2017). Self-regulated

learning skills and online activities between higher and lower performers on a

web-intensive undergraduate engineering course. Journal of Educators Online, 11(3). https://doi.org/10.9743/jeo.2014.3.2


Le Cornu, A. (2009). Meaning, internalization, and externalization: Toward a fuller

understanding of the process of reflection and its role in the construction of the

self. Adult Education Quarterly: A Journal of Research and Theory, 59(4), 279-

297. https://doi.org/10.1177/0741713609331478

Li, S., & Li, P. (2012). Individual differences in written corrective feedback: A multi-

case study. English Language Teaching, 5(11). https://doi.org/10.5539/elt.v5n11p38

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Lowe-Madkins, M. (2016). The influence of building social presence and sense of

community in online learning: A meta-analysis on student satisfaction and retention

(Order No. 10158984). Available from ProQuest Dissertations & Theses Global.

(1824361664).

Martin, G. A., & Double, J. M. (1998). Developing higher education teaching skills through peer observation and collaborative reflection. Innovations in Education and Training International, 35(2), 161-170. https://doi.org/10.1080/1355800980350210

Martinez, J. M., & Barnhill, C. R. (2017). Enhancing the student experience in online sport management programs: A review of the community of inquiry framework. Sport Management Education Journal (Human Kinetics), 11(1), 24-33.

McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, (2), 153.

McGuire, B. (2016). Integrating the intangibles into asynchronous online instruction: Strategies for improving interaction and social presence. Journal of Effective Teaching, 16(3), 62-75.

McNiff, J., & Aicher, T. J. (2017). Understanding the challenges and opportunities associated with online learning: A scaffolding theory approach. Sport Management Education Journal (Human Kinetics), 11(1), 13-23. https://doi.org/10.1123/smej.2016-0007

Mehta, R., Makani-Lim, B., Rajan, M. N., & Easter, M. K. (2017). Creating online learning spaces for emerging markets: An investigation of the link between course design and student engagement. Journal of Business & Behavioral Sciences, 29(1), 116-133.

Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and

implementation. San Francisco, CA: Jossey-Bass.

Mirzaee, A., & Hasrati, M. (2014). The role of written formative feedback in inducing

non-formal learning among masters students. Teaching in Higher Education,

19(5), 555-564. doi:10.1080/13562517.2014.880683

Moore, M. G. (2012). The theory of transactional distance. In Handbook of distance education (pp. 1-25). New York, NY: Routledge. https://doi.org/10.4324/9780203803738.ch5

Musolino, G. M., & Mostroni, E. (2005). Reflection and the scholarship of teaching,

learning, and assessment. Journal of Physical Therapy Education (American

Physical Therapy Association, Education Section), 19(3), 52.

Nash, J. A. (2015). Future of online education in crisis: A call to action. Turkish Online

Journal of Educational Technology - TOJET, 14(2), 80-88.

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978). The Belmont report. Retrieved from http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html
Nguyen, Q. D., Fernandez, N., Karsenti, T., & Charlin, B. (2014). What is reflection? A

conceptual analysis of major definitions and a proposal of a five‐component model.

Medical Education, 48(12), 1176-1189. doi:10.1111/medu.12583


Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. doi:10.1080/03075070600572090

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher

education: A peer review perspective. Assessment & Evaluation in Higher

Education, 39(1), 102-122. doi:10.1080/02602938.2013.795518

Ninomiya, S. (2016). The possibilities and limitations of assessment for learning: Exploring the theory of formative assessment and the notion of "Closing the Learning Gap". Educational Studies in Japan: International Yearbook, (10), 79-91. https://doi.org/10.7571/esjkyoiku.10.79

Pattison, A. B. (2017). An exploratory study of the relationship between faculty social

presence and online graduate student achievement, satisfaction, and persistence

(Order No. 10259040). Available from ProQuest Dissertations & Theses Global.

(1874562951).

Paulson Gjerde, K., Padgett, M. Y., & Skinner, D. (2017). The impact of process vs.

outcome feedback on student performance and perceptions. Journal of Learning in

Higher Education, 13(1), 73-82.
Pawan, F. (2017). Reflective teaching online. TechTrends, 47(4), 30-34.

Peacock, S., & Cowan, J. (2016). From presences to linked influences within

communities of inquiry. International Review of Research in Open and

Distributed Learning, 17(5), 267-283.
Pearcy, M. (2014). Student, teacher, professor: Three perspectives on online

education. The History Teacher, (2), 169.

Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and

formative feedback provided by instructor in an online setting: Some unresolved

issues. Electronic Journal of E-Learning, 14(3), 196.

Portolese Dias, L., & Trumpy, R. (2014). Online instructor's use of audio feedback to

increase social presence and student satisfaction. Journal of Educators Online, 11(2). https://doi.org/10.9743/jeo.2014.2.5

Price, V. (2015). Exploring effectiveness and rationale of different assessment types. Journal of Initial Teacher Inquiry, (1). http://hdl.handle.net/10092/11438


Quinton, S., & Smallbone, T. (2010). Feeding forward: Using feedback to promote

student reflection and learning--A teaching model. Innovations in Education and

Teaching International, 47(1), 125-135.

https://doi.org/10.1080/14703290903525911

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4-13. https://doi.org/10.1002/bs.3830280103

Renner, J. (2017). Engaging TBR faculty in online research communities and emerging

technologies. Journal of Learning in Higher Education, 13(1), 33-44.


Rice, R. E. (1992). Task analyzability, use of new media, and effectiveness: A multi-site

exploration of media richness. Organization Science, 3(4), 475.

https://doi.org/10.1287/orsc.3.4.475

Roberson, S. (2017). Learning for maximum impact: Four critical but overlooked

ideas. Education, 137(3), 283-296.

Roberts, P. (2016). Reflection: A renewed and practical focus for an existing problem in

teacher education. Australian Journal of Teacher Education, 41(7).

https://doi.org/10.14221/ajte.2016v41n7.2

Rockinson-Szapkiw, A. (2012). Investigating uses and perceptions of an online

collaborative workspace for the dissertation process. Research in Learning Technology, 20, 1-16. doi:10.3402/rlt.v20i0.18192


Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective

thinking. Teachers College Record, 104, 842-866.

Rogers, R. R. (2001). Reflection in higher education: A concept analysis. Innovative Higher Education, 26(1), 37-57.

Rolfe, G. (2014). Rethinking reflective education: What would Dewey have done?

Nurse Education Today, 34(8), 1179-1183. https://doi.org/10.1016/j.nedt.2014.03.006

Ryan, M. (2013). The pedagogical balancing act: Teaching reflection in higher education. Teaching in Higher Education, 18(2), 144-155.

doi:10.1080/13562517.2012.694104

Ryan, M., & Ryan, M. (2010). The 4Rs model of reflective thinking. QUT Draw Project,

Queensland University of Technology. Retrieved from http://citewrite.qut.edu.au/write/4Rs-for-students-page1-v1.5.pdf

Sadler, D. R. (1989). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-84.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex

appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550.

doi:10.1080/02602930903541015

Sardareh, S. A. (2016). Formative feedback in a Malaysian primary school ESL context. Malaysian Online Journal of Educational Sciences, 4(1), 1-8.

Schieffer, L. (2016). The benefits and barriers of virtual collaboration among online adjuncts. Journal of Institutional Research, 5. https://doi.org/10.9743/jir.2016.11

Schön, D. A. (1983). The reflective practitioner: How professionals think in action. New York, NY: Basic Books.

Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné & M.

Scriven (Eds.), Perspectives of curriculum evaluation (Vol. 1, pp. 39-83).

Chicago, IL: Rand McNally.

Sims, J. M. (2016). Instructors' perspectives of giving audio and video feedback: Can

you hear me now? (Order No. 10163637). Available from ProQuest Dissertations

& Theses Global. (1830772364).

Smit, J., van Eerde, H., & Bakker, A. (2013). A conceptualisation of whole-class

scaffolding. British Educational Research Journal, 39(5), 817-834.

https://doi.org/10.1002/berj.3007
Smits, A., & Voogt, J. (2017). Elements of satisfactory online asynchronous teacher

behaviour in higher education. Australasian Journal of Educational

Technology, 33(2), 97. https://doi.org/10.14742/ajet.2929

Tunks, K. W. (2012). An introduction and guide to enhancing online instruction with

web 2.0 tools. Journal of Educators Online, 9(2). https://doi.org/10.9743/jeo.2012.2.1

van Kol, S., & Rietz, C. (2016). Effects of web-based feedback on students'

learning. International Journal of Teaching and Learning in Higher

Education, 28(3), 385-394.

Vanslambrouck, S., Chang, Z., Tondeur, J., Phillipsen, B., & Lombaerts, K. (2016).

Adult learners' motivation to participate and perception of online and blended

environments. Proceedings of the European Conference on E-Learning, 750.

Vaughn, P., & Turner, C. (2016). Decoding via coding: Analyzing qualitative text data through thematic coding and survey methodologies. Journal of Library Administration, 56(1), 41-51. doi:10.1080/01930826.2015.1105035

Verpoorten, D., Westera, W., & Specht, M. (2012). Using reflection triggers while

learning in an online course. British Journal of Educational Technology, 43(6),

1030-1040. doi:10.1111/j.1467-8535.2011.01257.x

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wade, N. N. (2016). The face of feedback: Exploring the use of asynchronous video to

deliver instructor feedback in multidisciplinary online courses (Order No.

10105000). Available from ProQuest Dissertations & Theses Global.

(1790629793).

Wass, R., & Golding, C. (2014). Sharpening a tool for teaching: The zone of proximal

development. Teaching in Higher Education, 19(6), 671-684.

https://doi.org/10.1080/13562517.2014.901958

Webb, A., & Moallem, M. (2016). Feedback and feed-forward for promoting problem-based learning in online learning environments. Malaysian Journal of Learning and Instruction, 13(2), 1-41.

Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation of learning. In F. Lester (Ed.), Second handbook of research on mathematics teaching and learning (pp. 1051-1098). Charlotte, NC: InfoAge.

Wiliam, D. (2014). Formative assessment and contingency in the regulation of learning processes. Paper presented in a symposium entitled Toward a Theory of Classroom Assessment as the Regulation of Learning at the annual meeting of the American Educational Research Association, Philadelphia, PA, April 2014.

Wilson, R. E. (2013). Teacher reflection: A phenomenological study (Order No.

3609076). Available from ProQuest Dissertations & Theses Global.

(1496772435).
Winchester, T. M., & Winchester, M. K. (2014). A longitudinal investigation of the

impact of faculty reflective practices on students' evaluations of teaching. British

Journal of Educational Technology, 45(1), 112-124. doi:10.1111/bjet.12019

Wlodarsky, R., & Walters, H. (2010). Use of the reflective judgment model as a

reference tool for assessing the reflective capacity of teacher educators in a

college setting. Journal on Educational Psychology, 4(1), 13-20.

Wolsey, T. D. (2008). Efficacy of instructor feedback on written work in an online

program. International Journal on E-Learning, 7(2), 311-329.

Wright, J. M. (2014). Planning to meet the expanding volume of online learners: An

examination of faculty motivation to teach online. Educational Planning, 21(4),

35-49.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks,

CA: Sage Publications.


Appendix A.

Site Authorization Letter(s)

ID: 305

Dear Marybeth,

Please be advised that you have been granted site authorization for your research

request, “Exploring how online university faculty feedback reflection influences teaching.” This

research protocol has been approved by the Site Authorization Committee.

Upon IRB approval by Grand Canyon University Institutional Review Board, the

research activities will be allowed as defined in your Site Authorization Application.

Please adhere to the following:

1. The principal investigator(s) will only use the data collection method outlined in the site authorization request.

2. All participant responses must be anonymous.

Please be advised the scope of the research is limited to the above activities. If you

change your methodology or data collection plan, you will need to resubmit a new site

authorization request. Upon completion of the study you must include a summary of the study

results in the IRB study closeout package. Copies of journal publications related to this research

must also be submitted to the Office of Academic Research and IRB as appropriate.

Your next step is to complete your IRB application. See link below. Please contact

irb@gcu.edu if you have any questions about the IRB application.


https://cirt.gcu.edu/research/gcuirb

DATA RETRIEVAL: If this request includes access to GCU survey or archival data,
once your IRB is complete, please contact CIRT@gcu.edu, so we may facilitate the
retrieval of requested data.

Please upload a copy of this letter into your IRB application package as verification of
GCU site authorization, and please contact me if you have additional questions regarding your
site authorization or coordinating the logistics of your study.

Kind Regards,

Scott W. Greenberger, Ed.D.

Manager of Research and Assessment

Center for Innovation in Research and Teaching (CIRT)

Grand Canyon University

3300 W. Camelback Road

Phoenix, AZ 85017

Phone: 602.639.7704

Email: scott.greenberger2@gcu.edu
Appendix B.

IRB Approval Letter

Please note that Grand Canyon University Institutional Review Board has taken the following
action on IRBNet:

Project Title: [1163434-1] Exploring How Online University Faculty Feedback Reflection Influences Teaching
Principal Investigator: Mary Beth Nipp, EdD

Submission Type: New Project


Date Submitted: December 7, 2017

Action: APPROVED
Effective Date: January 9, 2018
Review Type: Expedited Review

Should you have any questions you may contact Connor Low at connor.low@gcu.edu.

Thank you,
The IRBNet Support Team
Appendix C.

Informed Consent

Mary Beth Nipp

Grand Canyon University

Exploring How Online University Faculty Feedback Reflection Influences Teaching

Introduction

Thank you for agreeing to participate in this study. This form’s purpose is to provide

information that may affect your decision as to whether to participate in this research

and also to record the consent of those who agree to participate in the study. In addition,

this form outlines your right to withdraw from the study. If you do not sign this form, you will not participate in the study.

Purpose of the Study

This study will explore how online undergraduate full-time faculty feedback reflections influence their teaching and will examine their feedback methods. A greater

understanding of how feedback reflections and teaching practices intersect has practical

implications for student outcomes and institutional professional development.

Procedures

The purpose of this letter is to request your voluntary participation in a study that will

explore faculty feedback reflections and how they might influence teaching. If you agree

to participate, you will consent to your classroom feedback being analyzed and will be

given a questionnaire that will allow you to share your reflections during the feedback

process. In addition, you will be asked to respond to questions in a focus group of

approximately 5 participants. No time is required on your part for the classroom


feedback analysis. The questionnaire will be conducted while you provide feedback; you

can expect this process to take between 30 and 45 minutes. You may skip questions on

either the questionnaire or during the focus group. Finally, the focus group will last

approximately 45 to 60 minutes. All will take place in building 71 at the GCU campus.

Although the focus group sessions will be recorded, no personal identification will be

revealed in the study and the data will be secured during and after the data collection

process and analysis stages.

Risks

This study poses no psychological or physical risks of harm. There is no known risk

associated with participating in this study.

Benefits

There is no known benefit to participants for participating in the study, focus groups,

classroom analysis, or questionnaire completion. The study’s benefits, however, will

contribute to knowledge on how faculty feedback reflections might inform teaching.

Payment to Participants

For this study, there is no monetary cost to participants and no compensation is awarded.

Participant Confidentiality

The results of this study will be kept confidential. Your identity will not be revealed

through the data analysis or collection process. Access to the data will be limited to the

dissertation committee and the researcher.

Refusal to Sign Consent and Authorization


Your participation in this study is voluntary and if you choose to withdraw, that decision

will not be held against you by the institution. If you do not sign this consent form, you

will not be able to participate in the study.

Cancelling This Consent and Authorization

At any time during the study, you may cancel your participation by notifying the

researcher in writing by emailing marybeth.nipp@gcu.edu.

Questions about Participation

Any questions about participation in this study should be directed to the names listed

below.

Investigator’s Statement

"I certify that I have explained to the above individual the nature and purpose,

the

potential benefits and possible risks associated with participation in this

research

study, have answered any questions that have been raised, and have witnessed

the

above signature. These elements of Informed Consent conform to the Assurance

given by Grand Canyon University to the Office for Human Research

Protections to protect the rights of human subjects. I have provided (offered) the

subject/participant a copy of this signed consent document.”

Mary Beth Nipp, August 24, 2017

Participation Certification
I understand that this agreement states that I have received a copy of this completed

Consent Form. By signing below, I indicate my understanding of my rights as a participant in this study. I further understand that if I have any questions about

participating in this study, I may contact the Grand Canyon University Institutional

Review Board (IRB) at (602) 639-7804. By signing this form, I affirm that I am at least

18 years of age and that I have received a copy of this consent form.

_________________________________________Participant’s Name

_________________________________________Participant’s Signature

_________________________________________Date

Researcher’s Contact Information:

Mary Beth Nipp, Researcher

Grand Canyon University

3300 West Camelback Road

Phoenix, Arizona, 85017

(602) 639-7332

Dr. Patricia D’Urso, Dissertation Committee Chair & Advisor

Pat.Durso@my.gcu.edu
Appendix D.

Instrument Validation

From: MaryBeth Nipp

Sent: Friday, October 13, 2017 7:14 PM

To: Christal Abron <Christal.Abron@gcu.edu>; Amanda Laster-Loftus

<Amanda.Laster-Loftus@gcu.edu>; John Steele <John.Steele@gcu.edu>

Subject: Field Testing

Hello!

Thank you for agreeing to help with my field test for my study, Exploring How

Online University Faculty Feedback Reflection Influences Teaching. You were asked to

help with this because you teach one or more of the following classes: PSY 105, UNV

104, CWV 101, PHI 105, and because you are experienced in your full-time faculty

capacity.

This study is about the perceived influence of feedback practices on teacher

reflection and subsequently, the influence of reflection on instructional strategy. For this

source of data, I prepared a questionnaire (see attached) that is based on Ryan and Ryan's (2013) 4Rs model of reflection. I ask that you not actually respond to the questionnaire, as that would generate data, and I cannot collect data prior to IRB approval. Your task is simply to give me feedback on how well you think this questionnaire will accomplish this purpose for my study. I welcome and appreciate your feedback!


Reference

Ryan, M. (2013). The pedagogical balancing act: Teaching reflection in higher education. Teaching in Higher Education, 18(2), 144-155. doi:10.1080/13562517.2012.694104

Field Expert Replies:

Hi MaryBeth,

This looks like such an intriguing study- I am so excited to see how this all

comes together at the end of your study :) These all seem quite applicable to the scope

of what your research questions are asking. The only comments I had for things to

consider included a couple of areas that the wording might expand to be a bit clearer for

participants. If you are providing a specific artifact for the context of the study, or

having faculty answer the questions based on a specific artifact that they have in mind,

perhaps also consider including that in your wording. Such as- “Considering a specific

piece of student work…” or “In the context of the assignment that has been provided…”

Just a thought- I am sure this was included in your participant instructions, but consider

adding specifics to the questionnaire too, so you don’t get out of context responses for

those who did not read instructions thoroughly. I do hope this helps!

Amanda (PSY 100 faculty)

Hi MaryBeth,

I hope you are having a great night! My feedback is below.

R1 It seems like there may be a need for a little more explanation for this

question as it may be confusing to participants. Are the participants reviewing a paper


with that question or is that a general question regarding paper feedback? I do find this

set of questions a little confusing. However, the other two questions are clear and

straightforward. Please let me know if you have any questions.

R2 Great as is!

R3 Great as is!

Have a great day!

John Steele, M.Ed. & MS/P

Hi MaryBeth,

Sorry it took me so long to get back to you on this!

I think you have included a great set of RQ’s that address the different angles of FTF

feedback and have kept your prompts neutral to prevent leading your participants. I can

see how this questionnaire will provide you with great data to analyze ☺

Christal Abron, MSP


Appendix E.

Online Undergraduate Full-time Faculty Questionnaire Participant Invitation

The following email will be sent to undergraduate full-time online faculty who teach

courses with written assignments requiring feedback.

My name is Marybeth Nipp and I am a doctoral candidate at Grand Canyon

University working under the Dissertation Committee chaired by Dr. Pat D’Urso. I am

inviting you to participate in my research study, which is about faculty reflection during

feedback the connections to our teaching. Your participation is valuable is an

opportunity to learn more about your own feedback and how your methods influence

your feedback and teaching. I am inviting full-time faculty who teach the following:

PSY 105, UNV 104, CWV 101 and PHI 105. Your participation will include completing

a questionnaire about your feedback and participation in a focus group.

If you would like to participate in the study, please contact me at mb@nipp.us or

at 602-639-7332. Additionally, you may contact the university CIRT office with any

questions: cirt@gcu.edu. The informed consent form is attached. Thank you!

Marybeth Nipp
Appendix F.

Faculty Questionnaire

(Note: Research questions reflected in this questionnaire are those approved at

the proposal stage of the dissertation, but are not the final versions. Please note changes

in RQs used throughout this document.)

Demographic Information

1. Name, gender, years in current position

2. Degrees held, years in higher education

3. Subject primarily taught

Research Question 1 – How do online undergraduate full-time faculty feedback

practices influence reflective thinking at a southwestern higher educational institution?

A. What are the concerns with the paper? (Examples: Assignment is off topic,

basic skills are lacking)(Level 1)

B. How common are these concerns? What conditions might prompt the

concern? (Conditions might include student being unsuccessful on previous

assignment, failing class previously) (Level 2)

C. What resources will assist you? (Level 3)

D. What practices might you change, given the concerns? What theories or best

practices might support your change of instruction? (Level 4)

Research Question 2 – What methods of feedback do online undergraduate full-time

faculty use at a southwestern higher educational institution?

A. Do you use audio, video, or text to deliver feedback? (Level 1)

B. What factors influence that decision? (Level 2)


C. Why did you choose these methods? What influences (peers, theory, personal

experience, time management) prompted these methods? (Level 3)

D. Do you plan to continue with this method? What are other ways you might

use this method or to implement new ones? (Level 4)

Research Question 3 – How do these feedback practices inform future teaching

strategies for online undergraduate full-time faculty at a southwestern higher educational

institution?

A. What, if any, changes do you typically make to future instruction, based on

your feedback reflections? (Level 1)

B. How does the student concern relate to a classroom element, such as a

discussion question or course materials? (Level 2)

C. Do trends or individual concerns affect your practices? If so, how? (Level 3)

D. How do you apply these student concerns to a broader context? (Level 4)


Appendix G.

Focus Group Interview Protocol

Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:
Participant:

Moderator: _______________________________________________________
Date: _________________________ Time: _____________________________
Location: ________________________________________________________
Consent Form Signed: All participants provided signed consent forms during the
questionnaire process.
Introductory Protocol
To facilitate the note-taking, I will be recording our session today. Please note
that only the researchers will have access to the tapes, which will eventually be destroyed
after they are transcribed. Please plan on this session lasting approximately 30 – 60
minutes. During this time, we have several questions that we would like to cover. I will
be asking several questions related to my research questions. If you need me to stop the
discussion at any time or repeat a question, please don’t hesitate to ask. Because we are
recording, please speak loudly and clearly so we can capture everyone’s comments on
tape.
I will endeavor to maintain your confidentiality at all times; nothing in this discussion will be reported to your supervisor, nor will the information be shared outside of the study.
I will share information with you about the study and the data that has already
been collected. Then I will ask for your thoughts on the information, but I will refrain
from participating in the discussion. Please feel free to respond to each other and speak
directly to others in the group.
We want to hear from all of you. And I know that many may have similar
experiences or attitudes on this topic, so please share all thoughts and ideas.
I will turn the recorder on now:
I understand some of you know each other, but others may not. Please share your
first name and the subject you primarily teach.
Do you have any questions before we begin?
Thank you for sharing. Each of you has completed an online questionnaire and
has allowed me to review your classroom feedback. The questions were all centered
on how you view your classroom feedback, what you reflect on while grading, and how
those reflections actually influence your teaching. When I reviewed your classroom
feedback, I looked for patterns and some common themes emerged. Today I will share
these themes that emerged during this data analysis. To ensure confidentiality, no names
will be visible, but you will be able to view the general ideas.
These themes are tied to one of the three research questions I have used for this
study. As I present each set of themes for these research questions, please share how you
feel about the accuracy of what I am presenting. In other words, tell me whether you can confirm the findings or whether the ideas are contrary to your reflections during feedback. Share
your thoughts and talk to each other about your experiences with each topic.
Before we begin, do you have any questions?
RQ1: How do online undergraduate full-time faculty feedback practices influence

reflective thinking at a southwestern higher educational institution?

• Based on the summary presented, are there points you disagree with? If so, what
are they and why?

• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?

RQ2: What methods of feedback do online undergraduate full-time faculty use at a

southwestern higher educational institution?

• Based on the summary presented, are there points you disagree with? If so, what
are they and why?

• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?

RQ3: How do these feedback practices inform future teaching strategies for online

undergraduate full-time faculty at a southwestern higher educational institution?

• Based on the summary presented, are there points you disagree with? If so, what
are they and why?

• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?
FINAL QUESTION: Are there any points or ideas you would like to add that we may have missed?
Post Interview Comments and/or Observations:
Appendix H.

Questionnaire Responses

Themes from Questionnaire Responses

RQ 1

Text:
L1 (concerns with the paper): Development (14), Format (14), Mechanics (9), Word Count (6), Thesis (5), 3rd person (4)
L2 (frequency of concern): Development: High; Format: Low (4), High (10); Mechanics: Low (3), Medium (1), High (5); Word Count: Low (3), Medium (1), High (2); Thesis: Low (3), High (2); 3rd person: High (4)
L3 (resources): Instructor-added (20), Existing Resources (15), Refined FB (17), Co-workers (5), Past (5), Student contact (1), Existing R's (5), Instructor-added (1), Past R's (1)
L4 (changes to practice): Instructor-added (39), Student Contact (3), Existing R's (18), No changes (8)

Video:
L1 (concerns with the paper): Development (3), Mechanics (3), Format (2), Thesis (2), 3rd Person (2)
L2 (frequency of concern): High for all concerns: Development (3), Mechanics (3), Format (2), Thesis (2), 3rd Person (2)
L3 (resources): Instructor-added (4), Existing resources (4)
L4 (changes to practice): No changes (1)

RQ 2

Text:
L1 (method used): Text (36), Assessment-dependent (8)
L2 (factors influencing the decision): Student needs (12), Time/Efficient (24), Familiarity (16), Co-workers (4)
L3 (reasons for the choice): Time/Efficiency (24), Familiarity (16), Student needs (12), Co-workers (4)
L4 (plans to continue): Yes (18), Yes + may try (16), Yes + adding other tools (3), Yes + efficiency (5)

Video:
L2 (factors influencing the decision): Student needs (8), Time/Efficiency (4)
L3 (reasons for the choice): Student needs (8), Time/Efficiency (4)
L4 (plans to continue): Yes (4), Yes + student needs (4)

RQ 3

Text:
L1 (changes to future instruction): None (20), Instructor-added (9), Tech tools (5), Feedback (4), Co-workers (4), Student needs (4)
L2 (related classroom elements): Discussion in class (16), Assignments (14), Existing resources (13), Instructor-added (7)
L3 (trends/individual concerns): Instructor-added (16), Student contact (5), Feedback (4), Increasing engagement (4)
L4 (broader application): Instructor-added (24), Tech tools (4), Co-workers (4), Does not (4), Engaging in forum (1)

Video:
L1 (changes to future instruction): None (4), Tech tools (4)
L2 (related classroom elements): Feedback (2), Existing resources (1)
L3 (trends/individual concerns): Student needs (4), Instructor-added (4), Calling students (4)
L4 (broader application): Feedback (4), Instructor-added (4)
Appendix I.

Template for Identifying Feedback Levels

FT (Task): How did I do?

FP (Processing the task): What do I need to do?

FR (Self-regulation): What will help me? What information is needed?

FS (Self): What do you think of me?
