Exploring How Online University Feedback
Submitted by
Doctor of Education
Phoenix, Arizona
ProQuest 10974098
Published by ProQuest LLC (2018). Copyright of the Dissertation is held by the Author.
All rights reserved.
This work is protected against unauthorized copying under Title 17, United States Code.
Microform Edition © ProQuest LLC.
ProQuest LLC.
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106-1346
© by Mary Beth Nipp, 2018
and that I accurately reported, cited, and referenced all sources within this manuscript in
strict compliance with APA and Grand Canyon University (GCU) guidelines. I also
verify that my dissertation complies with the approval(s) granted for this research.
The purpose of this qualitative exploratory single case study was to examine how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategies, at a southwestern higher educational institution. The theory that provided the foundation for
this study was Vygotsky’s learning theory and the Zone of Proximal Development. The research questions addressed faculty reflection, feedback methods, and how feedback informed instruction. The data sources included a faculty questionnaire, classroom feedback, and a focus group, while the data analysis employed coding, thematic analysis, and triangulation of the data. From the classroom data, a first-level analysis demonstrated common paper issues, and a second-level analysis of feedback revealed the level of feedback relative to its formative value. Feedback was reviewed according to its delivery method, and written and video feedback were disaggregated. The focus group then
served to triangulate the data. This study’s major outcomes revealed that instructors who
gave text feedback provided more critical thinking course-related remarks while instructors
who provided video feedback focused more on surface-level issues. Most faculty feedback was consistent with the study’s theoretical foundation. The implications of this study include recommendations for faculty training and professional development.
Introduction
Definition of Terms
Assumptions
Background
Population
Feedback methods
Theoretical Foundations and/or Conceptual Framework
Introduction
Formative feedback
Feedback methods
Reflection
Methodology
Instrumentation
Summary
Introduction
Research Design
Trustworthiness
Reliability
Data Collection and Management
Summary
Introduction
Results
RQ 1
RQ 2
RQ 3
Implications
Recommendations
Summary
References
Table 13. Paper Feedback According to Hattie and Timperley’s Levels
Table 14. Questionnaire Responses by Research Question: Research Question 2
Introduction
Both feedback and reflection are essential to good teaching, whether that teaching occurs face-to-face or online (Caruth & Caruth, 2013). Indeed, with its terrain of possibilities for learners, this
platform is poised to meet global challenges in education through its flexibility and
access (Gargano & Throop, 2017). This study’s discussion of feedback and instructor
reflection focused on the online modality because of its unique potential for varied delivery methods, which might influence how faculty reflect through their feedback
process. This qualitative exploratory single case study explored the perceived influence of faculty feedback practices on their reflective thinking and, in turn, their instructional strategies. The current gap in reflective practices in education was
addressed as reflection was discussed in this new context (Wilson, 2013) and with a new population. Recent studies have examined ways to improve the online modality. These improvements include lessening
the distance and maintaining academic rigor, student satisfaction, and persistence
(Pattison, 2017). Technology offers numerous ways to lessen this distance. Advanced
online delivery methods, such as audio or video feedback, provide a broader range of options for instructors. Given the distance factor of online learning, these methods have been shown to improve efficiencies in feedback (Borup et al., 2015; Sims, 2016; Tunks, 2012). Therefore, this closer look at the influence of teacher feedback practices on teacher reflection, and its potential to shape their own instruction, was warranted. For this study, reflection can be considered deep thought, or a complex process (Roberts,
2016) that connects new information or theory with individual meaning or experience
(Janssen, de Hullu, & Tigelaar, 2009; Nguyen, Fernandez, Karsenti, & Charlin, 2014;
Roberts, 2016) for its capacity to influence teaching practices (Wlodarsky & Walters,
2010). This act of engaging in mindful self-monitoring has been reviewed in higher
education (Bennett, Power, Thomson, Mason, & Bartleet, 2016) using various reflective models. For this study, reflection is a purposeful, rather than intuitive, act towards
transformation (Ryan & Ryan, 2013). Reflection will be examined according to the
levels of reporting and responding, relating, reasoning, and reconstructing, which align with the selected model designed for higher education use (Ryan & Ryan, 2013). For
online instructors, reflection might include analyzing their own learning process and
applying necessary changes accordingly (Wilson, 2013). These changes, in turn, could
serve to inform teaching strategies. More studies in this area are needed in higher
education (Hall, 2018). As more institutions launch online courses as part of their
strategic planning (Kilburn, Kilburn, & Hammond, 2016), this proposed study formed a timely contribution. This chapter presents the relevant literature that supports the background of feedback and reflection for online instruction, a discussion of how this study will advance the scientific knowledge in the field, and the significance of the study. A rationale for the methodology selection and the nature of the design will be presented, along with the assumptions, limitations, and delimitations for the study. Finally, a summary and discussion of organization for the remainder of the study conclude the chapter.
Reflection and feedback are two critical elements that have both been rigorously
explored in traditional education, but more research is needed on how the two might
intersect in the online modality, where these phenomena take on new nuances,
emphases, and populations. The gap in this field was expressed in three recent empirical
studies, showing the need for expanding populations in studies of online feedback. In a
qualitative case study, Atwater, Borup, Baker, and West (2017) examined student perceptions of text and video feedback and found that despite the convenience of text feedback, video feedback provided more advantages. These researchers stated that more studies are needed to
explore and compare different feedback methods and their effectiveness, and that a
diverse student population might benefit from the nonverbal communication that video
feedback provides (Atwater et al., 2017). In a similar study, Borup et al. (2015)
examined instructor and student perceptions of text and video feedback and found that
both populations valued the efficiency of text over the affective benefits of video and
recommended exploring asynchronous video feedback among a more diverse population
(Borup et al., 2015). Wade’s (2016) design-based qualitative study echoed similar recommendations, calling for research on populations in courses with more than 25 students. Given online education’s momentum
(Caruth & Caruth, 2013), these prior studies demonstrated the present need for more
research in asynchronous online feedback, with attention to new delivery methods and
populations, as well as expanding the field for how these feedback methods might lead to instructor reflection.
Feedback’s formative value has been documented. Seminal researchers Black and Wiliam (1998) established feedback as a mechanism for student learning when effective instructor support, scaffolding, and alignment are at the student’s developmental level (Eraut, 2006; McNiff & Aicher,
2017; Wass & Golding, 2014). Although online instructors may understand what
constitutes effective feedback, the increasing volume of online courses has prompted
institutional concerns about faculty efficiencies and effectiveness (Planar & Moya,
2016). Instructors are turning to audio or video feedback methods (Borup et al., 2015;
Sims, 2016; Wade, 2016) as approaches to manage the volume of students in online
education and to increase personalization (Borup et al., 2015; Verpoorten, Westera, &
Specht, 2012; Wade, 2016); these efficiencies may result in more meaningful instructor
reflection. In this study, reflection is a complex process requiring deep thought (Roberts,
2016). Reflection’s potential to connect theory to practice (Bennett et al., 2016) makes it a cyclic process (Roberts, 2016), making reflective practice a matter of how and when to apply these processes to a unique case (Bennett et al., 2016; Schon, 1983).
Verpoorten et al. (2012) posited that asynchronous online learning represents potentially
greater opportunities for instructor reflection, which could enhance both instructor and
learner experience. Reflection causes practitioners to justify their beliefs and to reassess the efficacy of problem-solving strategies (Musolino & Mostroni, 2005) and is a rigorous,
purposeful act requiring intellectual and emotional intentions (Ryan & Ryan, 2013). For
this study, reflection will be examined based on Ryan and Ryan’s (2013) four levels.
Academic reflection requires the agent to play an active role in the transformation (Ryan & Ryan, 2013). Given this context of reflection, it was worthwhile to explore how online faculty reflect through their feedback.
Problem Statement
It was not known how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategies, even as online education continues to grow in popularity (Allen & Seaman, 2016; Elison-Bowers & Snelson, 2012; Forte, Schwandt,
Swayze, Butler, & Ashcraft, 2016; Mehta, Makani-Lim, Rajan, & Easter, 2017; Wright,
Through this study, faculty feedback and reflective practices were explored. This study responded to calls for research on expanded populations (Atwater et al., 2017; Borup et al., 2015; Wade, 2016) and amalgamated the instructor perspective on feedback and reflection. Feedback in online courses remains the center of student learning and the primary mode of
communication (Planar & Moya, 2016), but despite feedback’s prominence in the online
setting, students report less satisfaction with feedback than any other component of the
course (Nicol, Thomson, & Breslin, 2014). All too often, as Borup et al. (2015) have
noted, in this modality, “teachers and students are separated in space and time for all or most of the course” (p. 162). This physical and psychological distance has prompted
instructors to find new ways of creating personal and improved feedback (Borup et al.,
2015; Pattison, 2017) and the specific methodologies instructors choose may have a
bearing on how they reflect on their feedback. For example, Atwater et al. (2017) noted
that for students outside the dominant culture group, video feedback was often more beneficial. In contrast, Borup et al.’s (2015) study found that students prized the efficiency of text over
video feedback; however, as technology improves, such findings may soon be obsolete. By exploring the intersection of feedback practices and reflection, greater insights into how to best serve expanding student populations emerged. The general population affected by this study included
higher education institutions with online offerings, their faculty, and student
populations.
For this study, online undergraduate full-time university faculty served as the
unit of analysis. The host institution was a private university in the southwestern United
States where there are 60,000 online students and a traditional campus population of
17,000. This institution has a contingent of full-time online faculty working at campus
facilities who teach courses online exclusively and maintain an average load of 80 to
100 students at a time. This consistent course volume necessitates time management strategies. Thus, instructors may use text, audio, or video methods to deliver student feedback.
The purpose of this qualitative exploratory single case study was to examine how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategy at a southwestern higher education institution. A purposive sample of ten to fifteen full-time
online undergraduate faculty at this institution served as the population. Because the
online modality is offered at numerous higher education institutions, the findings from
this study have significance for similar bodies. The value of reflection and feedback is well established (Black & Wiliam, 1998; Wiliam, 2007), and reflection is a purposeful act that has the potential to provide self-assessment for faculty (Roberts, 2016), which could, in turn, influence their
teaching. What is missing are studies that intersect the two in an online setting. By
studying these aspects of teaching among full-time online faculty, the findings can inform the field. For this exploratory single case study (the design strategy), the faculty served as the unit of analysis and the dimensions of phenomena
were faculty questionnaire results, instructor feedback from current and recent classes,
and the confirmation or challenging of emerging themes that came from focus group
discussions on the results of the data sources. Since classroom feedback can be viewed
as instructional (Hattie & Timperley, 2007) and through feedback, the online instructor
increases the social presence (Pattison, 2017), the review of classroom feedback
provided critical data for analysis. This study involved 12 participants who responded to
a 12-item questionnaire which served as a rich context for analysis. These resulting
themes addressed the research question about how faculty perceive the influence of their feedback practices on their reflective thinking and contributed to the field. In addition, classroom assignments were reviewed in multiple active courses
yielding meaningful trends about faculty feedback methods. The third dimension of
phenomena was a focus group consisting of the same participants, which allowed participants to confirm or challenge the emerging themes.
This was an exploratory single case study. A qualitative method and a case study design are selected when the researcher wants to explore the “how” of a phenomenon and when the research emphasizes exploring and understanding (Almalki, 2016), so this qualitative method aligns with the problem
statement. In this study, the “how” to be addressed involved how faculty perceived the
influence of their feedback practices on their reflective thinking and how that reflective thinking informed teaching. This was explored through coding and analyzing the collected data.
Research Questions
It was not known how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategies. Feedback is an integral component of online facilitation that has been shown to contribute to learning
(Mirzaee & Hasrati, 2014). With technology offering an array of methods for delivering
feedback (Ali, 2016; Atwater et al., 2017; Borup et al., 2015), these methods may be more efficient or more personalized; this personalization could then change or improve the potential of feedback for learning. In
addition, these feedback methods may change how or if instructors reflect on their
feedback. A greater understanding of how these processes occur has implications for
online facilitation and student outcomes. The research questions for this study were:
RQ1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their reflective thinking? The dimensions of phenomena were faculty questionnaire results, classroom feedback, and emerging themes from the focus groups on reflective thinking and how reflection can inform teaching.
A critical focus of this study was the instructor’s classroom feedback. Feedback
is closely linked to student success (Abaci, 2014; Ali, 2016; Gredler, 2016; Hattie & Timperley, 2007) and serves as a teaching mechanism (Bonnel, 2008; Hattie & Timperley, 2007; Quinton & Smallbone, 2010). Formative feedback has the potential to go beyond “atta-boy” comments by moving students towards the learning goal and providing an understanding of their learning gaps (Hattie & Timperley, 2007), underscoring feedback’s relationship to learning. Sadler’s (1989) seminal work established the role of feedback
as formative by identifying essential conditions that are consistent with the Vygotskyan
movement into the proximal zone. Sadler’s (1989) conditions state that the learner must
know the standard, how their own work compares to the standard, and how to close the
gap between the standard and the student’s actual performance level. Building on
Sadler’s work, Hattie and Timperley (2007) constructed a framework that categorizes feedback into four levels: task, process, self-regulation, and praise (Hattie & Timperley, 2007).
This framework provided the foundation for the analysis and data examination for this
study.
Feedback has also been studied as a learning mechanism (Borup et al., 2015; Tunks, 2012; Wade, 2016). Moreover, an instructor’s specific method of feedback, whether video or text, could have bearing on their reflection. The questionnaires could reveal faculty’s metacognitive processes during feedback and the extent to which these reflections inform their teaching. An analysis of these concepts
provided rich insights into faculty reflective practices, which were examined through the
lens of Ryan and Ryan’s 4Rs of reflection (Ryan & Ryan, 2013). This model proposes
that there are four levels of reflection: Reporting and responding, relating, reasoning,
and reconstructing, and each level increases in its complexity. Faculty were given three
questions at each level and these open-ended questions were addressed while faculty
delivered feedback, which provided rich data on how faculty reflected during their
feedback process, and ultimately, how or if these reflections served to influence their
teaching.
Next, the classroom data were examined to determine if faculty used video or
text feedback and for which assignments these methods corresponded. Categorizing the
feedback according to Hattie and Timperley’s (2007) levels demonstrated the feedback’s formative value, and this scaffolding would be consistent with Vygotsky’s (1978) ZPD theory. Moreover, this classroom data served to triangulate the data, as the feedback was examined juxtaposed with the questionnaire responses.
The third phenomenal dimension to explore in this study was the use of an online focus group, which examined the influence of reflection on instructional strategy. Focus groups provide rich insight into the world of
the participant (Greenwood, Kendrick, Davies, & Gill, 2017). A direct analysis of the
recorded focus group session addressed the research question of how full-time online
undergraduate faculty perceive the influence of reflective thinking on their current and future instructional strategies.
Through feedback, instructors encourage, provide resources, and direct instruction geared
towards a student’s future success (Hattie & Timperley, 2007). Within the proper
context, feedback can maximize a student’s learning effects (Hattie & Timperley, 2007;
Sadler, 1989). However, studies expressed a gap in how faculty provide asynchronous
feedback to different populations (Atwater et al., 2017; Borup et al., 2015; Wade, 2016).
This study addressed that population gap through focusing specifically on feedback from online undergraduate full-time faculty.
This study’s gap came from three studies. Atwater et al. (2017), Borup et al.
(2015) and Wade (2016) studied asynchronous feedback and recommended additional
studies on different populations. Atwater et al. (2017) responded to the need for studies
in online feedback in their qualitative case study that examined student perceptions of
video feedback in an online graduate course. Their findings showed that students found
advantages with video feedback, including detail and personalization, but nevertheless,
preferred text feedback for its efficiency. Borup et al. (2015) showed similar findings
when they studied asynchronous feedback with both instructors and graduate students in
their complementary mixed methods design. Both instructors and students found that
text feedback was valued over video. These researchers recommended additional studies
in this area and on different populations. Finally, Wade (2016) studied learner perceptions of asynchronous feedback and noted that prior studies focused on learner perspectives of different feedback methods.
This study addressed the gaps presented in these studies. First, both Borup et al.
(2015) and Atwater et al. (2017) recommended additional research in the area of
graduate students and faculty. This study extended that population recommendation by
exploring how online undergraduate full-time faculty reflected during feedback. Thus,
the field of study on feedback was extended, and to a different population, but with new
insights from how faculty reflect. Both studies (Atwater et al., 2017; Borup et al., 2015)
found that graduate students preferred text feedback over audio or video, while not discounting the value of personalization of audio and video feedback. However, these preferences may be particular to graduate populations (Phillipsen, & Lombaerts, 2016). Therefore, while graduate students may view efficiency as paramount over personalization, a graduate population may represent more working adults. Moreover, technology evolves faster than research, meaning that emerging applications may make viewing video
feedback as convenient as text. For this study, the population of online undergraduate
full-time faculty used new applications that students can access on smartphones, giving rise to the need for studies on asynchronous online feedback methods and various
populations.
This study has many practical applications that richly contributed to online
learning in higher education and filled a need by contributing to the body of knowledge
on reflection and feedback within online instruction from the instructor perspective.
Borup et al. (2015) noted a gap in the literature: past empirical studies explored both feedback and reflection from student perspectives. This study shifted that discussion and
expanded this area of study to explore faculty feedback reflections, and their subsequent
reflection on instructional strategies. In addition, this study could yield valuable data on
how instructors view their own feedback methods and practices, and how these practices shape reflection. Given the growth of new feedback methods in higher education (Atwater et al., 2017; Borup et al., 2015), a richer understanding of the reflective component could affect professional development or training. For example, instructors delivering video feedback may have found it a more effective method of scaffolding for the student. This scaffolding may have taken on a different form through text feedback,
where scaffolding might have consisted of posting a link to students. Current studies
exploring feedback’s formative value call for a re-examination of both student and
teacher roles, giving students a more active role in the process (Planar & Moya, 2016).
Therefore, exploring how online undergraduate full-time faculty used feedback revealed new insights.
Feedback for this study was analyzed according to its potential for student learning, grounded in Vygotsky’s (1978) Zone of Proximal Development (ZPD). According to this theory, the instructor had a role in student learning by moving them from their
actual developmental level to the completed developmental level. Between the two is the
ZPD, where this developmental transition occurs, either through increasing the challenge of the task or providing scaffolding (Vygotsky, 1978). This theory surmises
that instruction must be slightly more challenging than the student’s developmental level
and that when the instructor provides appropriate scaffolding within this optimal zone,
learning and self-regulation ensue (Smit, van Eerde, & Bakker, 2013). Because this zone shifts as the student develops, the instructor must increase the challenge in order to move the student to the next developmental level. This
study extended the body of knowledge on ZPD as a learning theory, showing that formative feedback can move students through the zone on a given assignment (Mirzaee & Hasrati, 2014). This study examined faculty’s formative
feedback, as it aligned with the Vygotsky (1978) theory that grounded the study. Wass
and Golding (2014) stated that Vygotsky’s (1978) ZPD has implications for teachers that
are still unexplored. With this theory as the study’s support, deeper connections between
how faculty provide feedback and its relation to student learning were uncovered.
This study had practical implications. Institutions can improve their online offerings; if faculty reflection informs their teaching, there are implications for training and professional development. For
example, institutions may give greater attention to development in reflections or may see
the value in adapting new technologies, which may improve the capacity to reflect. The availability of new methods, however, does not mean that more faculty necessarily use them. Harrison et al. (2017) reported that fewer online faculty expressed interest in using new technologies in their courses. If reflection influences instruction, then training and development can be directed to making these practices effective in online
environments. For example, Bond (2011) used Schon’s reflection-in-action theory to describe how teachers reflect in spontaneous situations. However, more recent studies recognize the value of reflection
in online courses. Wade (2016) used reflection journals as a method to understand how
reflection can influence feedback. The outcomes of this study could prompt more
reflective practices for online educators. This increased attention to reflection has many
positive outcomes.
For the purposes of this study, reflection will be addressed according to Ryan
and Ryan’s (2013) model, based on four levels. The reporting and responding level describes the incident, the relating level draws a connection between the agent reflecting and the action, the reasoning level focuses on how theory or new
perspectives might support the reason, and the last level is reconstructing, which allows
the reflecting agent to consider future changes (Ryan & Ryan, 2013). Online
undergraduate full time faculty may develop patterns or routines to their grading; these
practices may not be the most effective nor the most satisfying for students. Reflection
could prompt newer strategies that could improve feedback for its formative value and
student satisfaction.
The outcomes of this study potentially affected the broader community of higher
education. Yin (2014) cited a common concern with qualitative studies: that researchers analyze at the individual level and neglect the more global implications of
the study. As more is known about instructor feedback relative to Vygotsky’s ZPD
(1978), an increase in self-regulation may be a result (Wass & Golding, 2014). As this
paradigm shift occurs, both student and instructor roles are subject to change. As
Paulson Gjerde, Padgett, and Skinner (2017) recommended, greater attention might be
focused on helping students understand and analyze feedback and to show their active
role in the process (Planar & Moya, 2016), possibly with a movement towards having
students respond or reflect on their feedback (Denton & Rowe, 2015). These
philosophies are all consistent with the increased movement of self-regulated learning
(Nicol & Macfarlane-Dick, 2006) and would represent the goals of formative feedback.
For this study, a qualitative methodology was used as the most logical choice for the research questions. A qualitative single case study has constructivist elements, since it is based on the participants’ perspectives. This qualitative framework allowed the researcher to capture nuances that were a part of faculty reflections. The questionnaire results, classroom feedback analysis, and focus group findings required non-numerical
data for richly coded themes to emerge. Teacher reflections could not have been
measured in a quantitative fashion and the feedback was examined by analysis and
coding to infer relevant conclusions that could contribute to the field. According to
Almalki (2016), qualitative research emphasizes exploring and understanding, so this method supports exploration in a natural setting (Yin, 2014), whereas a quantitative method requires larger numbers (Merriam &
Tisdell, 2016). For this study, a natural setting was required for authentic faculty
feedback reflections and how they inform their teaching. In this study, online
undergraduate full-time faculty feedback practices were explored in their daily setting of
instruction. Based on Yin’s (2014) tenets, this approach allowed the researcher to focus
on the natural setting of online undergraduate full-time faculty and how they selected
their feedback methods and how their reflections influenced their instruction.
A single case study was selected for the research design. A single case study is
recommended when the researcher wants to explain how a social phenomenon works
(Yin, 2014). When an investigation can focus on a single case, a holistic and real-world
perspective on group behavior can be revealed (Yin, 2014). The case study approach allowed exploration of how faculty perceived their feedback practices, what feedback methods were used, and how their feedback practices informed instruction.
When conducting a single case study, there are several qualitative designs that could be considered. Phenomenology, for example, focuses on emotions or affective states about lived experiences (Merriam & Tisdell, 2016). However, the primary focus for this study was the perceived influence of feedback practices on teacher reflection, so emotions or affective states were not emphasized. Grounded theory is a second design
within qualitative methods and its focus is contributing to new theories (Merriam &
Tisdell, 2016). However, per Yin’s (2014) recommendations, this study utilized theory
as an underpinning, and did not seek to establish or formulate a new theory. Vygotsky’s
(1978) Zone of Proximal Development was the theory that supported this study, since it grounded the examination of formative feedback and teacher reflection and thus considered how students might progress within the ZPD as
formative feedback was delivered. A third design to consider was narrative inquiry, in
which the story text forms the data set (Merriam & Tisdell, 2016), which would not be
sufficient for the proposed study, which relied on the triangulated data from faculty questionnaires, classroom feedback, and a focus group.
The target population was online undergraduate full-time faculty who work in a campus setting. Unlike adjunct instructors who may work remotely, the sample of the population works collaboratively
with other online instructors, which allows them to regularly share best practices. The
results of this study, however, could also have meaningful implications to adjunct
instructors who are remote, as reflection is an individual process. The sample size of 10
faculty was large enough to produce meaningful results in this qualitative method and
design. Baxter and Jack (2008) stated that in a case study, researchers must provide
enough detail for readers to assess the validity or credibility of the work and that purposeful
sampling strategies should be applied. Due to the volume of coursework of this faculty
southwestern higher educational institution in the United States. This group of faculty
teach online exclusively from a campus location. The volume of courses provided
A single case study framework was the design selected for this study for several reasons. First, the case is related to Vygotsky’s (1978) ZPD theory with
zone. A single case study is the most appropriate design for confirming or expanding on a
theory (Yin, 2014). Analysis of themes from these three sources of data converged to
faculty responded to the questionnaire in the natural setting of their grading, these data
aligned with the review of their actual classroom feedback. Finally, the focus group
provided the participants the opportunity to confirm or challenge the findings of the
other two data sources, resulting in the synthesis of data, making this an appropriate
A single case study was the best choice for this study’s research questions, due to
the kind of data necessary to address the questions. The data in qualitative studies must
allow for affective nuances. Byrne and Ragin (2009) justified a case study when
statistics alone cannot satisfactorily reveal results, or if there are
different outcomes or conditions that are more subjective. For this study, instructor
reflections and their insights on how they informed instruction needed to be analyzed, so an understanding deeper than a statistical analysis was needed. The data collection steps were as follows:
10. Planned and scheduled questionnaire time period for faculty to complete
questionnaires.
11. Planned and scheduled focus group in order to moderate the discussion on how
reflection informs instruction (Yin, 2014). The focus group consisted of 10 study
participants.
14. Facilitated focus group of participants who responded to the questionnaire. Focus
group discussions focused on data presented on questionnaire responses and
feedback analysis. Participants either confirmed or challenged the data.
Definition of Terms
feedback practices influenced their reflective thinking, and if that reflective thinking had
questionnaire results, feedback from participants’ classes, and focus groups discussions.
knowledge as it relates to learning, and prompts new learning goals (Bonnel, 2008).
feedback is designed to guide student learning and can also extend to promoting self-
Online education. For the purposes of this study, online education is a course in
connects theory to practice (Bennett et al., 2016); a cyclic process that determines how
and when to apply processes to a unique case (Bennett et al., 2016; Schön, 1983).
reach a level of autonomously completing a task (Wass & Golding, 2014). Scaffolding is
Social presence. For the purposes of this study, social presence is defined as
Summative feedback. This includes the student’s grade and instructor comments
theory of learning asserts that learners have a distance between their actual development
level, identified by independent problem solving, and their level of potential, where
tasks can be completed with appropriate instructor guidance. Learning takes place in this
zone.
evidence or justification (Denzin & Lincoln, 2011). This qualitative study explored
thinking and how those reflections informed their teaching. The following assumptions
1. It was assumed that the faculty provided honest and authentic responses on the
questionnaire. The instrument used was Ryan and Ryan’s (2013) 4R model of
reflective thinking. It is possible that faculty could have provided less than
transparent responses to their feedback. Training on reflective strategies might
have been necessary for more insightful responses.
2. It was assumed that the reviewed classroom feedback was valid data that
represented authentic faculty responses. Forte et al. (2016) stated that feedback
carries the role of personal interaction between the student and the instructor and
can influence the degree of social distance. Therefore, it was assumed that the
classroom data represented student and instructor interaction.
Limitations and delimitations. Limitations are factors beyond the researcher’s control, such as bias. In contrast, delimitations are factors over which the researcher
1. This was a single case study, which is a potential limitation, as there may
have been challenges associated with the generalizability of the findings.
3. Potential bias from the researcher’s perspective was another limitation of the
study. Because this researcher is employed at the host institution, there is a
keen understanding of, and familiarity with, the expectations and standards of feedback. The data collection and analysis processes were designed to mitigate
bias. Participation was voluntary, so the researcher did not select the
participants. Next, the paper selection process was based on assignments that
met the selection criteria of 300 words or more in length. The researcher also
selected papers from courses that were current, yet completed, so that there
would be sufficient data for review. These criteria limited the availability of
the selection, which reduced the potential for researcher bias in the selection process. The
data analysis was structured around a priori themes, meaning that the codes
for evaluating feedback were already determined, per Hattie and Timperley’s
(2007) framework. Finally, as this study was not about the actual quality of
feedback, the researcher was not taking instructor feedback standards into
consideration during the collection or the analysis.
This study is organized as follows: Chapter 1 includes the introduction of the
study, the statement of the problem, the purpose of the study, the research questions, the
advancing of scientific knowledge, the significance of the study, the definition of terms,
the rationale for methodology, the nature of the research design, the assumptions,
limitations and delimitations, and the summary of the study. Current perspectives in the
literature on the topics of feedback and reflection are presented in Chapter 2. Current
literature about these topics forms an important rationale for the study. Chapter 3
explains and rationalizes the qualitative methodology and the case study design,
analyze the archived course feedback, and the framework for preparing the focus group
discussions and measures taken to ensure validity and reliability for their use. This
chapter also covers the data collection procedures, data analysis procedures, and ethical
considerations for this study. Chapter 4 presents the data analysis and findings. Chapter 5 summarizes and interprets the data for the study. This chapter also discusses
formative instruction (Abaci, 2014; Wade, 2016; Webb & Moallem, 2016). In an effort
to increase efficiencies and to lessen the distance in this modality, instructors are turning
to different methods of feedback, including audio, video, and text (Borup et al., 2015;
Gredler, 2016; Hall, 2018; Sims, 2016). These choices could influence how instructors
reflect on their feedback, as audio or video methods could lessen the cognitive load
(Borup et al., 2015; Wade, 2016), which could allow deeper reflection during the
feedback process. Feedback is often measured in terms of student outcomes, but there is
less attention to the thought processes of faculty and how those reflective practices
might influence teaching. Reflection is valued as a teaching strategy (Martin & Double,
1998; Nguyen et al., 2014). Exploring the perceived influence of feedback practices on
implications for professional development. This study filled a gap in the literature on
(Wade, 2016), and expanding on Borup et al.’s (2015) findings, which recommended
exploring feedback among a more diverse student population. Finally, because Ali
The literature on both feedback and reflection is reviewed in this study. First, a
background discussion will address the need for extended studies of
faculty reflection and feedback. Next, an identification of the gap for this study will be
presented, followed by a discussion of the theories and conceptual framework for the
reflection, and research that lays a foundation for the context of online higher education
will be presented. This review will also include seminal researchers and their
contributions to these areas. The literature review will conclude with a discussion of the
methodology planned for this study and how it relates to the methods used in literature
on this topic.
introduction to the chapter and background of the problem will be the first section; the
identification of the gap will follow; the next major section will discuss the theoretical
foundations and conceptual framework. The Review of the Literature is the next section.
Within that review are the following sub-sections: Historical perspective of online
that discusses minor sections of three different models. The next major sections are
Feedback and social presence, followed by Social presence and learning outcomes, Feedback methods, and Feedback and learner populations. The next major section is on
Reflection, which has sub-sections of Reflection and teaching, models of reflection, and
outcomes of reflection. Major sections about the case study design and sources of data
Research for this study came from peer-reviewed journal articles, published
dissertations and studies from the Grand Canyon University library’s ProQuest and ERIC databases, and Google Scholar. The foundation of previous researchers and their
conclusions served to inform the new findings of this study (Hart, 1998). The search
feedback methods, audio and video feedback. In the following background section,
literature related to the problems that are addressed by this research project is presented.
conveying student progress (Hattie & Timperley, 2007). Through conveying this
progress, feedback itself can be viewed as a learning mechanism (Abaci, 2014; Gredler,
2016; Nicol et al., 2014). This evolving view of feedback began with Scriven (1967),
who first distinguished summative from formative feedback within the context of
(1989), Hattie and Timperley (2007), and Black and Wiliam (1998) have all refined the
framework of formative feedback for classroom settings. The common element among
learning and that instructors must facilitate feedback that identifies the assignment goal,
clearly shows students where they fell short of the goal, and offers specific guidance for
future success.
of development (1978), but despite its learning potential, instructors face challenges in
providing optimal formative feedback. First, the online environment poses particular
rapport with students (Wolsey, 2008). Next, feedback’s timeliness is tied to its
effectiveness (Sadler, 2010; Webb & Moallem, 2016). Therefore, if there is institutional
pressure for university faculty to provide timely feedback, the quality may be
compromised. Understanding more about how faculty select feedback methods and how
they reflect on their feedback can provide valuable data about its formative value and
strategies, has significance for faculty and students. While much is known about the
feedback.
Hattie and Timperley (2007) identified three questions that represent phases of feedback
as they relate to learning: Where am I going? How am I going? Where to next? Yet in one study, when these feedback elements were examined,
attention to the other questions (Ellis & Loughland, 2017). Arguably, feedback is neither
complete nor formative if only this element is addressed (Black & Wiliam, 1998; Hattie
2017). These diverse and sometimes competing elements are at work when instructors
approach a student paper to grade. Understanding more about how instructors reflect
could provide rich insights into how they approach these challenges, including how they
might prepare feedback to guide students to Vygotsky’s (1978) ZPD. Like feedback,
reflection is also a complex construct and more studies are needed (Roberts, 2016).
Although studies have been conducted on instructor reflections (Nguyen et al., 2014; Pawan, 2017; Roberts, 2016; Ryan & Ryan, 2013), there is no study about how
instructors reflect while delivering their feedback or the subsequent influence of that
reflection on teaching.
This study was about undergraduate faculty feedback exclusively in the online
classroom in higher education. Online learning has rapidly become a preferred method
of delivery for its flexibility and convenience to students, and its affordability to
institutions (Caruth & Caruth, 2013; Gredler, 2016; Rockinson-Szapkiw, 2012). Allen
and Seaman (2016) reported 5.8 million distance education students, including 2.85 million taking all of their courses at a distance. Although there are advantages in this
modality, there are also challenges, including the inherent social distance and the
difficulties of delivering personalized meaningful feedback (Atwater et al., 2017; Borup
et al., 2015; Frisby, Limperos, Record, Downs, & Kercsmar, 2013; Pattison, 2017). As
technology advances, instructors are turning to new feedback methods, such as video
delivery (Atwater et al., 2017; Ellis & Loughland, 2017; Frisby et al., 2013; Wade,
(Falender, Shafranske, & Falicov, 2014; Roberts, 2016). Researchers have focused on
how students might reflect for deeper learning, or how instructors reflect (Brubaker,
2016). However, there are no studies about how the actual practice of feedback, if
reflected upon, could inform teaching for online faculty in higher education.
In online courses, feedback serves critical functions. First, it has the potential to lessen the social distance (Pattison, 2017). Second, feedback is a learning mechanism.
Formative feedback serves as the learning process in online courses (Abaci, 2014;
Gredler, 2016; Sims, 2016; Wade, 2016). With feedback playing such critical roles, a
Within the context of this study, reflection is a purposeful act. The reflector plays
an active role in their own reflection, through the model of reporting and responding,
relating, reasoning, and reconstructing (Ryan & Ryan, 2013). Reflection is associated
with learning (Harvey, Coulson, & McMaugh, 2016). Instructors who reflect on their
feedback may learn from that reflective process (Farrell & Jacobs, 2016), thereby
the instructor might provide a resource to the student, or even to the whole class. The
instructor may find, through video feedback, that showing organizational strategies
Knowing more about how instructors reflect could provide insight into these
methods and choices, but there were no studies that specifically addressed the
studies about feedback (Borup et al., 2015; Wade, 2016). As these researchers
recommended, new and diverse populations and larger class sizes were addressed in this
study to further knowledge in the field of how reflection and feedback inform instruction
in the online classroom. This study could result in valuable insight, shining light on the
sustainability strategy (Allen & Seaman, 2016). Since 2002, its swift growth has
prompted ensuing research comparing its andragogy to face-to-face settings. While the
feedback and reflection are less consistent. For example, there are concerns with the
(Forte et al., 2016). These factors are addressed in many ways in a face-to-face
environment, but in the online environment, instructors may approach them solely
through faculty feedback. Recently, researchers also examined new technology methods
designed to improve feedback, but there are mixed findings in terms of instructor and
student preferences (Ali, 2016; Atwater et al., 2017; Borup et al., 2015). The research
gaps in feedback that led to this study will be presented in the context of recent research
and trends, revealing how this proposed study will expand research in these areas.
recommendations from recent studies. Borup et al. (2015) stated that a gap in past
empirical studies is that feedback is largely explored from the student’s perspective and
incorporating a reflective component, this study is about feedback from the instructor’s
viewpoint, which provided insight into the connection between the instructor’s feedback
delivery and their own teaching. Recent research in feedback confirms its formative
value. Mirzaee and Hasrati (2014) found in their qualitative stimulated recall
nonformal learning, as it encourages specific action and raises the likelihood of student-
consistent with Webb and Moallem’s (2016) quantitative and qualitative data that
achievement improves.
follows a structure or model for meaningful feedback. Mirzaee and Hasrati (2014) used
Timperley’s (2007) framework. More recently, Ellis and Loughland (2017) confirmed
the importance of a formative model, as their findings indicated that teachers were not
addressing all elements of this model, resulting in less effective student outcomes.
Therefore, trends in feedback have addressed its formative capacity, but online instructor
feedback had not been examined within the context of faculty reflections and how those
populations, but a gap remained, revealing the need to explore more diverse populations.
Atwater et al. (2017) studied graduate student perceptions of video feedback in their
qualitative case study and found that although students appreciated the deeper student-to-teacher relationship that resulted from the video communication, students preferred
written text for its convenience. These researchers called for future studies on different
populations. In a blended class, Borup et al. (2015) studied student and instructor
perceptions of feedback in different methods and found that, like Atwater et al.’s (2017)
findings, both students and instructors found value in video comments, which can be
more detailed, but overall preferred text for its convenience and accessibility. In Borup et al.’s study, the population was student teacher candidates; although these were undergraduate
students, they were in their final semester, so there is still a great distance from entry
examining feedback methods with different populations and contexts, including courses
that are exclusively online (Borup et al., 2015). This study on instructor feedback and
reflection addressed these gaps as the focus was on first-year undergraduate students in
online classes.
provide video or audio feedback. Students may view the paper while the instructor
annotates comments and provides resources; in addition, some applications show the
(2016) conducted a mixed methods study of two groups: while the control group received written assignment feedback, the experimental group was given video
feedback on higher order writing criteria, such as content and organization, with written
feedback on the lower order criteria, such as grammar and mechanics. The findings
showed that the experimental group outperformed the control group, meaning that the
Screencast video feedback was viewed as supportive and formative. However, students
The studies above illustrated recent trends in research focused on online faculty feedback. However, as technology evolves, even current studies can be rapidly outdated.
For example, students in some studies reported preferring text feedback over video or
audio (Ali, 2016; Borup et al., 2015) for its convenience. Yet, technology’s rapid
advancements could lessen the downloading time or could make the feedback viewing
accessible on later model smartphones. Thus, it was important to determine how, with
technology delays and concerns alleviated through newer methods, faculty chose their
particular feedback methods, and if their reflection during feedback informed their
instruction.
educational institutions report that online delivery is critical to their long-term strategy
(McNiff & Aicher, 2017). With this modality proving sustainable, leadership in higher
modalities, methods, and strategies in education. Learning theories are useful for
understanding the teacher’s role and prompt effective strategies for student learning.
Good teaching sets the conditions for the student to be in charge of their own learning.
Thus, teachers must understand the dynamics that influence learning, and are charged
with setting those conditions (Roberson, 2017). Vygotsky’s (1978) Zone of Proximal
Development (ZPD) is the theory that supported this study; its tenets demonstrate that
Instructor strategies can affect student learning. Vygotsky’s (1978) ZPD was
developed in response to education that was systematic, presuming that whatever the instruction delivered would result in learning. This systematic method of instruction did
not address the fact that a new learning concept would affect the child’s development.
proposed that development must occur prior to learning; social and cultural influences
contribute to the learner’s developmental level (McNiff & Aicher, 2017). Within this
theory, learning takes place between where a student can independently complete a task
and where the task is beyond their grasp. When an instructor provides the appropriate
guidance, this allows the student to complete the more challenging task, thus, in the zone
of proximal development.
background information (Armstrong, 2015; Wass & Golding, 2014). This study
examined how instructor reflections might lead them to move students into this proximal
development zone. For example, when instructors reported that they were giving
example of moving students into this zone and providing the support needed.
Feedback, therefore, has the potential to affect learning. Within this learning
(1978) ZPD theory forming a suitable framework for the assessment of that
determination. Knowing that not all students are at the same level, instructors may
prepare individual feedback specifically to move students into the proximal zone. One
student may need scaffolding support or resources, while another may need to be
challenged in order to move to the ZPD that will activate development. Thus, the second
student referenced may have been at their completed developmental level at the time of
the assignment submission, so instructors might reflect and use their feedback on how
This study explored how online feedback practices influence reflective thinking.
scaffolding tool (Gredler, 2016). Hattie and Timperley (2007) posited that feedback has
the potential to lead to learning, since it encourages students to act upon the given
advice. As a scaffolding mechanism, feedback can show students the gap between their
current and target performance (Mirzaee & Hasrati, 2014). If instructors are mindful as
they deliver feedback, these reflections could be a catalyst for instructors to more
Different methods faculty used in online instruction were also explored. Online
instructors are turning to new methods to deliver feedback, including screencast tools (Ali, 2016), video (Borup et al., 2015; Sims, 2016), and auditory methods (Borup et al., 2015; Frisby et al., 2013), which capture the instructor’s voice and
provide a view of the student’s paper. Video feedback has advantages; instructors can
increase their social presence through natural emotional reactions and nuances (Borup et
al., 2015). Because social presence is directly related to student outcomes (Tunks, 2012), instructors may find these methods are more effective for scaffolding
paralinguistic features may allow a clearer explanation. This increased clarity could
delivery could influence student learning, relative to Vygotsky’s (1978) ZPD theory.
reflective thinking on their current and future instructional approaches was also
explored. Because reflection is a complex process that requires depth of thought and
ongoing improvement and focus (Gasparič & Pečar, 2016; Roberts, 2016), a deeper
understanding of these thoughts can uncover to what degree instructors are preparing
formative feedback to maximize learning. Appropriate scaffolding or challenges form
the premise for achieving the ZPD (Wass & Golding, 2014). For example, if an
instructor is reading a paper and mentally preparing their feedback, their reflective
thinking may prompt them to consider the kind of tools the student will need. If the
student has a repeated grammatical error, then the teacher might provide a resource for
the student. This is an example of using scaffolding to move the student into the ZPD. In
addition, through the act of reflecting, the teacher might make a mental note that several
students struggle in this particular area, so may extend their action to prepare a resource
for the entire class. Thus, if instructors consciously reflect on their feedback, they may
(1978) theory.
(2013) 4Rs model of reflection served as the framework to explore instructor reflection
workable framework that moves his theories into practice (Ryan & Ryan, 2013). Ryan
and Ryan adapted Bain, Ballantyne, Mills, and Lester’s (2002) five-component model, which was in turn adapted to develop a questionnaire for instructors in this study. Ryan
and Ryan’s four levels of reflection are: (1) Reporting and responding (2) Relating (3)
Reasoning (4) Reconstructing. The level one questions are the least complex, focusing
on “What” questions, such as “What method of feedback is being used?” while the most
complex question is, “What might be another way to deliver this feedback?” This
framework formed the faculty questionnaire and guided participants into recognizing
zone. Vygotsky (1978) posited that learners have an actual developmental level, which
is based on their previous level, identified as the completed developmental level. This
The ZPD theory has been widely recognized in primary and secondary environments
(Wass & Golding, 2014); however, in more recent literature, researchers have
recognized its value in higher education (Armstrong, 2015; McNiff & Aicher, 2017;
Roberts, 2016). In higher education, instructors operate with certain assumptions of the
learners’ completed level. That is, a learner in an ENG 102 class should have a
learner’s new task or providing paper feedback that gives the student a tool or
explanation for the task. Through effective feedback, the instructor could provide
scaffolding to move the learner from the actual to the completed developmental level for
a particular objective. How instructors reflect can provide insight into how or if this
intentionality is taking place. This study served to advance the research on the ZPD
within this theory provided broader relevance to online learning in higher education.
Review of the Literature
many higher educational institutions, the role of feedback has become formative,
standing in for traditional face-to-face instruction (Ali, 2016), but it is not known how
educational institution. This section will begin with the historical perspective of online
learning and how this modality has emerged. Because the current study is about
follow. The next topics are seminal feedback models that provide a foundation on how
relationship to feedback and learning outcomes will follow. Finally, topics of reflection
reflection.
exploratory single case study was to examine how online undergraduate full-time faculty
institution. The following section is about the historical implications that affect this
topic. Distance learning has taken on various forms to fill gaps for traditional education.
Correspondence courses began in the 1800s. In the 1920s and 1930s, radio courses were
created to reach children in the inner cities. Distance courses were so popular in the 1920s that four times more students enrolled in distance courses than in other forms of higher education.
Distance learning has seen many changes. As correspondence courses gave way
2014). For higher education, offering online courses became a cost-cutting measure, as
costs per student were rising faster than inflation (Deming, Goldin, Katz, & Yuchtman,
2015). By the beginning of the twenty-first century, online education was accelerating rapidly, but the growing pains were evident. When acceleration
With the rapid expansion of online education, the concern was that higher
educational institutions might offer these courses strictly as an opportunity for revenue.
Nash (2015) posited that institutions should be cautious about these motives. Such
changes could stem from a lack of preparation on the part of the institution and if the
quality of course design and instructor training is lacking, the ethics of the program
could be jeopardized. As an example, if online courses are shorter in length than their
face-to-face counterparts, this could reflect less academic rigor (Nash, 2015). Another
effect of rapid enrollment was a corresponding increase in student attrition, which was
higher in online than in face-to-face courses (Forte et al., 2016; Smits & Voogt, 2017).
reported a loss of autonomy, outdated hard copy course materials, and job insecurities
which fueled skepticism about the effectiveness of this platform (Caruth & Caruth,
2013). In addition, the transactional distance of online learning posed challenges for
instructors to lessen this distance (Atwater et al., 2017; Borup et al., 2015).
Transactional distance is the psychological and pedagogical separation between the
learner and the instructor (Moore, 2012). If communication gaps stem from the inherent
(Huang, Chandra, DePaolo, & Simmons, 2016). Although many faculty appreciated the
convenience, flexibility, and diverse population of online learners, they also expressed
concerns about preserving academic integrity, the quality of online instruction, and the
time demands with facilitating volumes of students (Borup et al., 2015; Wright, 2014).
In many ways, the concerns of online learning did not differ from the concerns of
traditional teaching. Leaders in higher education called for highly qualified teachers
(Allen & Seaman, 2016) and creative ideas to foster critical thinking and deep student
engagement were favored over objective quizzes (Wright, 2014). Finding ways to offset and avoid plagiarism was an ongoing concern (Elison-Bowers & Snelson, 2012;
Nash, 2015; Wright, 2014); however, as technology and practices improved, the face of
Online learning experienced swift and positive changes. Despite challenges that
plagued online learning in its early years, this approach to learning and teaching has
found favor with higher education. Offerings have expanded and enrollment for online
learning has increased (Allen & Seaman, 2016; Elison-Bowers & Snelson, 2012; Forte
et al., 2016; Mehta et al., 2017; Nash, 2015). This modality reflects creative delivery and
robust academics (Mehta et al., 2017). A plethora of research and emphasis on the
towards greater personalization (Borup et al., 2015; Hostetter & Busch, 2013; Pattison,
actualized through course design and creating spaces as catalysts that contribute to
personal learning (Mehta et al., 2017). Attitudes are shifting and there is increasing
confidence from students and faculty in mastering learning management systems and the
foundation for this study. This section addressed the rapid growth of online learning
(Deming et al., 2015) as well as its challenges, expressed by Caruth and Caruth (2013),
Nash (2015), and others. In addition, Atwater et al. (2017) and Borup et al. (2015)
studied the importance of social presence and how newer technologies may lessen that
distance for the online environment. This delivery platform has seemingly limitless
and industry demands (Gargano & Throop, 2017). Therefore, within this platform, a
closer look at feedback, which is at the core of online instructional practices, could add
learning, and to set new learning goals (Bonnel, 2008). An abundance of current studies
reflect feedback’s connection to learning, but feedback’s breadth and depth demand a
and Hasrati (2014) studied the role of formative learning through written feedback in an
English as a Foreign Language setting. The researchers aimed to extend the studies in
the nature and impact of written feedback and those connections to nonformal learning.
Participants received written feedback and qualitative interviews, and stimulated recall
students to act on it. Using Vygotsky’s (1978) ZPD as their theory, the researchers
considered the extent of the scaffolding process in written feedback. Using grade point
averages as the criterion for selecting participating students, they hypothesized that students
with higher GPAs would be more likely to act on the received feedback. The transcribed
interviews were coded, categorized, and reviewed with a 5% point of difference in the
comments. The results showed that for highly motivated students, the written feedback
resulted in the following types of nonformal learning: reactive, deliberate, and implicit.
The researchers concluded that formative feedback leads to both teacher-student and
performance. Abaci (2014) studied both the direct and indirect effects of feedback on
learning environments. The gap that led to the study was that there is partial evidence of
and how it aligns with academic performance. Due to the large sample size, the data
were analyzed through structural equation modeling. For this study, 399 online
undergraduate college students were selected and surveyed according to the variables
presented in the research questions. The results indicated that more elaborate student
feedback was tied to higher performance, but there was not supporting evidence for the
mediating roles of academic motivation or goal orientation. The findings confirm the
value of detailed and elaborate feedback in online classes. The current study provided
valuable insights on how instructors reflect during their feedback, which could in turn,
outcomes. In a qualitative study, Sims (2016) used the Media Naturalness Theory to
feedback samples. The results showed that when instructors provided feedback, they
students, and efficiency. The collected data also showed instructor methods, whether
audio, visual, written, or a combination. Instructor feedback samples were reviewed for
their complexity, tone, and word choice. Feedback in the form of audio and video had
higher word counts and greater complexity. A limitation of this study was that the
participants selected their own feedback samples, which may not have been typical of
their feedback to students. This study has implications for choosing methods that
support the most useful feedback for students, and further research is recommended for
outcomes for students (Abaci, 2014; Mirzaee & Hasrati, 2014; Sims, 2016),
multi-case study that explored differences that affect second language learner
responses to instructors’ written feedback. They found that when addressing errors,
(Vygotsky, 1978) and that for more complex corrections, such as sentence structure, the
feedback was less meaningful, since the students were aware of the grading criteria.
When errors addressed in written feedback were beyond the students’ ZPD (Vygotsky,
1978), the written corrective feedback was ineffective. The researchers recommend
more studies concerning individual differences to written feedback, and how the
and interaction. Studies confirm that instructor feedback has the potential to be the most
critical course element, and is the common denominator in student success (Abaci, 2014;
Ali, 2016; Gredler, 2016; Hattie & Timperley, 2007; Nicol & Macfarlane-Dick, 2006).
(Abaci, 2014; Borup et al., 2015; Gredler, 2016; Hattie & Timperley, 2007). Li and Li’s
(2012) findings uncover the importance of understanding the student’s zone of proximal development
while giving feedback. This focus on matching the feedback to the student population
was also present in Mirzaee and Hasrati’s (2014) study, as there was a difference in
reaction to feedback when grade point average was a factor. Despite the evidence of
2007). This study extended the literature on instructor feedback in online courses and
how that feedback influences teaching. As Li and Li (2012) suggested, more attention is
full-time faculty feedback reflections provided greater insight into how feedback might
inform instruction.
higher education was necessary. The topics in this section are seminal models of
formative feedback that are relevant to higher education and the online delivery
platform.
consistent on an actual term for formative, in the context of assessment and feedback
(Price, 2015; Wiliam, 2014), but it is understood as a process that helps improve student
knowledge (Price, 2015) and is often applied when the assessment is tied to the
instruction (Wiliam, 2014). Some cite the primary characteristic as a tool delivered
midstream, for students to become agents of their learning at this juncture (Wiliam,
promote student learning (Black & Wiliam, 1998), but in order for feedback to be
formative, it must be specific, meaning it references the work appraised, and general,
identifying broader principles that can apply to later works (Sadler, 1989). Formative
that motivation and self-esteem are more likely to be ignited when the stakes are lower,
associated with more standardized tasks, where information is limited to the success or
failure of the task (Nicol & Macfarlane-Dick, 2006; Price, 2015). Wiliam (2014)
evidence of the assessment, while formative actually generates that evidence. Although
formative assessment is often associated with learning (Hattie & Timperley, 2007),
when the focus of the goal shifts to a standardized outcome, the assignment feedback
strategy may be misaligned, that is, aimed at the success or failure of achieving the
objective goal of a score or letter grade, rather than learning (Ninomiya, 2016; Price,
given by the teacher and received by the student. If feedback is truly formative, this
mindset is erroneous and distracts from the positive outcomes of formative assessments
(Charteris, 2015). The main goal of formative feedback is to promote student regulation,
giving students a more proactive, rather than reactive role (Nicol & Macfarlane-Dick,
2006). When students receive effective and individual feedback, they are more likely to
independently solve tasks, contributing to self-regulation (van Kol & Rietz, 2016). Black
and Wiliam (1998), who are considered the primary researchers in this area (Ninomiya,
has overlapping elements (Black, Harrison, Lee, Marshall, & Wiliam, 2004).
Assessment for Learning (AfL) expands the idea of formative assessment to include the
practice of students and teachers, using peers to seek, reflect, and respond to the
student roles in feedback has significant implications for how instructors view their own
facilitation, and the possibility of their roles being re-negotiated (Renner, 2017).
Three seminal feedback models provide theory and principles that identify and
categorize feedback’s stages. The three models discussed will be Hattie and Timperley’s
(2007) Feed up, Feedback, Feed forward model; Black and Wiliam’s (1998) Formative
Assessment Model; and Nicol and Macfarlane-Dick’s (2006) Seven Principles of Good
Feedback Practice. Despite their differences, there are overlapping principles, and these
models align with the theory that underpinned this study. In the figure below, Nicol and Macfarlane-Dick’s
(2006) seventh principle is not included, as it is a process that follows assessment. These
models have common principles that can be divided according to three stages of the
feedback process. Below, the illustration shows the different stages, and how they
Figure 1. Feedback model progression relative to learning. Upper-case font represents
Hattie and Timperley (2007); Times New Roman 12-point font without special
capitalization represents Black and Wiliam’s (1998) model; Vygotsky’s (1978) Zone of
Proximal Development is in red; and the underlined passages represent Nicol and
Macfarlane-Dick (2006).
Hattie and Timperley’s Feed up, Feedback, Feed forward model. Feedback should
reduce the gap between a student’s current and desired understandings to enhance
learning (Hattie & Timperley, 2007). Hattie and Timperley’s (2007) Feed up,
Feedback, and Feed forward model focuses on questions: “Where am I going? How am
I going? Where to next?” as a means of reducing the gap between performance and the
student goal. These researchers found that when feedback is associated with student goal
setting, there is enhanced confidence in the task and in self-regulation.
Hattie and Timperley’s (2007) model includes four levels for each feedback question:
the task level, the process level, the self-regulation level, and the self level (Hattie & Timperley,
2007).
The “Where am I going?” stage is designed to convey information about the
goal. This part of feedback provides goal orientation, and when students understand these
goals, they are more likely to attain them (Hattie & Timperley, 2007). Feedback must be
related to the desired goal. For example, on an academic essay, students may be asked to
write a definition essay, but if the feedback is directed only towards grammar and
Following goal orientation are elements that address student performance. “How
& Timperley, 2007). Thus, the feedback at this point must address the performance
relative to the goal. “Where to next?” is a feed-forward question that challenges
students and promotes self-regulation (Hattie & Timperley, 2007). When all elements of
feedback are successful, this stage has the greatest potential for learning and changing
behaviors. If these stages are requisite to feedback’s formative value, then instructors
need greater opportunities for crafting feedback (Kastberg, Lischka, & Hillman, 2016).
Although feedback models exist, it is not clear how faculty might implement them. To
determine whether the elements of Hattie and Timperley’s (2007) model were addressed
in feedback, Ellis and Loughland (2017) conducted a
qualitative study and interviewed nine teacher education students in their third year of
was on the nature of the feedback message and the categories within this were: (1) The
provision of clear and specific direction (2) The detail and quality of feedback (3) The
frequency of feedback (4) The provision of focused feedback and feed forward (5) The
compared to the other formative questions. These implications suggest that instructor
cognition does not always reach all three forms of feedback, as suggested through Hattie
and Timperley’s (2007) model.
If instructors are aware of feedback models, they may be more likely to give
feedback that is formative. Using Hattie and Timperley’s (2007) model as a framework,
Kastberg et al. (2016) conducted a cross-case analysis that reviewed activities between
prospective teachers and students. The problem expressed in the study was that
prospective teachers were not aware of giving constructive feedback that might improve
learning (Kastberg et al., 2016). Building on Wiliam’s (2007) study that found feedback
concepts, Kastberg et al. (2016) used Hattie and Timperley’s four levels of feedback to
uncover themes in the prospective teacher feedback. The findings suggested that
mathematics teachers should be familiar with a model that addresses feedback levels,
and should craft feedback with attention to praising the student, identifying correct or
incorrect responses, and focusing on future success. While Ellis and Loughland (2017)
found that one question was addressed more than others, Kastberg et al. (2016) found
that there was little focus on the level of self-regulation. More research on how
instructors reflect on their own feedback is needed, and how instructors might
A third study that used Hattie and Timperley’s framework justified the need for a
fifth category for their feedback stages. Chen (2014) studied the receptivity of instructor
(2007) model and its four levels of feedback: task, process, self-regulation, and
superficial praise. The researcher determined that a fifth category, mediative feedback,
was needed to capture the reciprocity between teacher and student. Thirty-four students
from two colleges were assigned e-pals and blogged and discussed current event topics.
feedback, including student surveys, interviews with students and instructors, coded
The findings were triangulated with the interview data. These findings
demonstrated that interactive web tools necessitate the fifth level that should be added to
this model (Chen, 2014). In reviewing the findings of other studies where Hattie and
Timperley’s framework is integrated, both Ellis and Loughland (2017) and Kastberg et
al. (2016) found that not all of Hattie and Timperley’s stages were addressed. While
Black and Wiliam’s assessment and classroom learning. At its core, feedback
must address student success. The higher quality the feedback, the more likely the
learning gains (Black & Wiliam, 1998). Like Hattie and Timperley (2007), these
researchers discuss a gap between the learning goal and the student’s present state. The
idea that there is a gap is not new. In prior decades, Ramaprasad (1983) defined
feedback as information about the gap between the actual level and the reference
level of a system parameter, which is used to alter the gap (Black &
Wiliam, 1998). The similarities between Ramaprasad’s definition and Black and Wiliam’s model are evident in these
reflections: “Feedback to any pupil should be about the particular qualities of his or her
work, with advice on what he or she can do to improve, and should avoid comparisons
with other pupils” (p. 84). Although Black and Wiliam later expanded their concepts in
If feedback is formative, then instructors must fully grasp the most effective
method of implementation. Black and Wiliam (1998) asserted that unless this gap is
bridged, an instructor has not delivered meaningful feedback. In a study that examined
among three ESL teachers who had recently attended training in formative assessment in
Malaysia. The findings showed that instructors were unaware of how to implement
feedback to close this gap. For example, the participants were not aware that feedback
should match the student’s level of achievement and that they confused praise with
feedback. Black and Wiliam’s (1998) early findings identified the elements and roots of formative assessment and later
expanded to the concept of Assessment for Learning (AfL) (Ninomiya, 2016). Named
in literature as early as 1986 and later popularized in the United States as well as the
United Kingdom, AfL includes the practice of students and teachers using peers to seek,
way of ongoing learning (Charteris, 2015). An inquiry approach was used to explore
teacher candidates using AfL, and the role of dialogic feedback. Using reflective
dialogue interviews, the researcher used student voice data to measure learner
perceptions of their classroom learning, feedback, feed forward and self-assessment
practices.
Macfarlane-Dick’s (2006) are to clarify performance and facilitate assessment. This kind
of feedback would point students to the goals, criteria, or standards (Nicol &
between their own conceptions of goals and the actual assessment criteria (Nicol &
Macfarlane-Dick, 2006). The more students understand the criteria, the greater
ownership they have and therefore, they are more capable of self-assessment (Nicol &
Macfarlane-Dick, 2006). Paulson Gjerde et al. (2017) stated that only when the gap is
successfully identified is the student able to either close or reduce it. Black and Wiliam
While the last few decades have shown a shift in how education views learning (as a
process that is constructed by the student), formative assessment practices have not kept
pace with this philosophy. In higher education, formative assessment and feedback are
process of messages that convey what is right or what is wrong (Nicol & Macfarlane-Dick, 2006),
exclusively in the hands of the instructor, who may assume that feedback messages are
transmission process rests completely upon the teacher, then they are cognitively and
mechanically overloaded with the task of delivering meaningful, motivating, and
learning, and are designed to promote learner self-regulation. At the center of feedback,
Black and Wiliam (1998) address the gap. In similar fashion, Nicol and Macfarlane-
Dick’s (2006) model also promotes high quality feedback that emphasizes self-
regulation (Nicol & Macfarlane-Dick, 2006). An additional feature to this model is that
feedback should encourage positive motivation and self-esteem (Nicol & Macfarlane-
Dick, 2006).
contributed to self-regulation was the goal of Jing’s (2017) study. Focusing on Nicol and
learner self-regulation and student perceptions of the teacher practices. Jing (2017) cited
into two categories. The first three are teacher-student directed, meaning that there is a
reliance on teacher input for self-regulation to develop, while the fourth and fifth
principles are student-directed. The researcher noted the important connection between
both require teacher assistance, with similar goals for students (Jing, 2017). The sources
focus groups during and at the end of the semester. The results showed that instructor
feedback facilitates self-regulation when both teachers and students are involved. This
study’s focus on formative feedback elements and its alignment to Vygotsky’s principles
underscores the need for more studies, as Jing (2017) recommended similar studies in
different contexts.
Thus, for feedback to be truly formative, there are specific conditions. It must be
specific to the assignment appraised, and general, giving orientation to a broader concept
that is applicable to future work (Webb & Moallem, 2016). The feedback models
student performance and the criteria, and emphasis on learning and the application
necessary for exploring how faculty reflect on feedback, and its contribution to their
teaching.
This section was designed to define feedback and the important distinctions
surrounding formative feedback. The current study was about how online undergraduate
full-time faculty feedback practices influence their reflective thinking or how this
ZPD theory formed the foundation for exploring how online undergraduate full-time
faculty practices influenced reflective thinking and how this reflective thinking informed
instruction, and the feedback models were used to analyze faculty feedback in their
classes.
Feedback and social presence. As this study was about online university
dimensions, allowing the instructor to build social capital with the student (Juvova,
Chudy, Neumeister, Plischke, & Kvintova, 2015). Social presence is one of three
represents collective knowledge (Peacock & Cowan, 2016). Recent literature has
these factors, a community of learning can exist in online learning (Martinez & Barnhill,
2017). As online learning became a more accepted modality, the Community of Inquiry
has served as a response to the skepticism of online learning and its social presence.
Social presence plays a role in the success of an online program. Social presence
(Gunawardena & Zittle, 1997). Greater attention is given to social presence to measure
Social presence is conveyed through social cues: more frequent use of social
and verbal cues typically results in increased social presence (Atwater et al., 2017). With
social presence has implications for online instruction (Frisby et al., 2013). Research
indicates that an instructor’s role in social presence is a significant factor in the success
On one hand, online learning is prized for its flexibility, freedom, and
convenience. Potentially, students can interact one-on-one with instructors more than in
a face-to-face setting (Mehta et al., 2017). However, since its inception, concerns about
concern is that when student-teacher interpersonal connections are lacking, students may
lack self-discipline to learn on their own, and may not meet objectives (Frisby et al.,
2013).
presence. In a quantitative study that investigated the influence of written and audio
Portolese Dias and Trumpy (2014) hypothesized that audio feedback would be more
were in the form of student satisfaction surveys that were analyzed using a one-tailed t-
test. The findings showed that students reported greater instructor concern through audio
feedback, although the other three out of four survey question responses were not
significant (Portolese Dias & Trumpy, 2014). These researchers recommended
additional studies on audio feedback’s contribution to social presence, as these web tools
are advancing rapidly and those advances could reveal greater contributions in this area.
five principles to improve student and teacher interaction and social presence in an
five practices, the data showed that feedback played the most important role in
common concepts, and prepared comments that allowed cutting and pasting to facilitate
recommended that instructors employ audio, visual, and written text to connect with
students through their feedback (McGuire, 2016). If instructors are conscious of how
various technology and methods might contribute to better feedback, then a greater
understanding of how teachers reflect while giving feedback or what methods they select
could reveal their rationale for their choices and if their rationale constitutes a best
practice.
Social presence and learning outcomes. This study explored the reflective
feedback practices of online faculty in higher education. In addition, the study examined
the methods of feedback instructors used, which ultimately affected their teaching
practices. There is widespread empirical evidence of the relationship between social
presence and learning outcomes (Frisby et al., 2013). When social presence can be
(Frisby et al., 2013). Numerous studies have been written about how social presence is a way to
confirm the effectiveness of these tools. Atwater et al. (2017) performed a qualitative
online courses. Instructors participated in Skype calls and provided both synchronous
The results showed that though some students experienced minor frustrations
with scheduling and technical concerns, students reported that the Skype calls motivated
them (Atwater et al., 2017). Through video feedback, instructors were able to efficiently and effectively
establish their social presence through emotions, visual and vocal cues, and facial
expressions (Atwater et al., 2017).
Hostetter and Busch (2013) explored the relationship between social presence and student learning
outcomes. The research questions were about the pedagogical methods that affect
student perceptions of social presence, the relationship between social presence and
student outcomes in a mixed-methods study. Both qualitative and quantitative methods
were used. Course discussions were examined through content analysis and social
presence was determined by the use of first names, expressions of feelings or humor,
1995), and student responses were evaluated on a rubric (Hostetter & Busch, 2013).
social presence. Both Hostetter and Busch’s (2013) and Atwater et al.’s (2017) studies
faculty consider social presence as part of their feedback, this could inform their
Frisby et al. (2013) found that when social presence was higher, students performed
better and communication improved. Using rhetorical and relational goals theory of
instruction, these researchers found that the medium of information has an impact on
how students learn. This is based on dual coding theory, which holds that when
information is received in a single format, the processing is associative,
but when two formats are received, the processing is referential. Referential processing
is superior to associative processing (Frisby et al., 2013). Students reported that rhetorical and
relational goals were more likely to be met in classes with higher social presence, which
occurred when auditory and visual components provided the presence (Frisby et al.,
2013). Martinez and Barnhill (2017) also found that higher levels of learning occur
when social presence is greater. Thus, the positive effects of social presence, particularly
in online environments, are well documented, yet Frisby et al. (2013) noted that it is not
personalization in the online classroom. The studies reviewed used qualitative or mixed
and communication, learning outcomes, and student perceptions. Social presence relates
to instructor feedback, because well-formed feedback may include social presence cues
(Pattison, 2017). As instructors reflect on their feedback, they may be mindful of how
their own social presence influences the quality or perception of feedback. For example,
an instructor may find it challenging to provide corrective feedback without losing social
capital. This reflection may influence word choice, or it may prompt the instructor to
post additional resources in class. To fully explore how online undergraduate full-time
faculty feedback practices might influence reflective thinking or how this reflective
thinking informs instruction, understanding the affective nature of social presence was
necessary. This study expanded on studies in the field of social presence as it examined
how online undergraduate full-time faculty feedback practices influenced their reflective
educational institution.
instructors provide feedback amidst the explosive growth of online learning. In order to
bridge the distance in online education, institutions have turned to audio and video
as methods of feedback. These methods are considered for their capacity to improve
feedback content, quality, and learning potential through formative feedback delivery.
These delivery methods have advantages from both instructor and student perspectives.
As the study is about qualitatively examining the methods that instructors used at a
face classrooms. According to Wade (2016), this distance can result in lower completion
rates and disengaged students. Text-based feedback often limits the nuances that
2015; Sims, 2016; Wade, 2016). These factors formed the purpose of Wade’s (2016) study.
Using Social Cognitive Learning Theory (Bandura, 1977), the researcher examined
video.
The video feedback used in this study was through Screencast-O-Matic, which
allowed instructors to provide a video of the student paper while providing auditory
feedback. Wade (2016) found that using video feedback, instructors can place greater
emphasis on key points in assignments, due to the allowance of greater complexity and
word count. These findings are consistent with recent studies that cite efficiency as an
advantage of audio and video feedback (Borup et al., 2015; McCarthy, 2015).
Instructors value these efficiencies not for efficiency alone, but because they afford richer
nuances that are not available in text-only feedback. Instructors found video a softer way
to deliver necessary instruction and to guide improvements (Wade, 2016). Tunks (2012)
connected social presence to greater personalization through video and audio feedback, but
found no difference in social presence between written and audio feedback. McCarthy
(2015) also found that despite the advantages of video feedback, there were instances
where students preferred text, such as when students resided in developing countries or
circumstances where technology was limited. Finally, Borup et al. (2015) found that
students preferred video over text for its conversational properties, but text over video for its efficiency.
These competing views underscore the importance of more study in this area.
Additional studies would codify the specific advantages of audio and video feedback
over text. A third finding is that when instructors used asynchronous video feedback,
they were more likely to provide teaching supplements for their own courses (Wade,
2016). This finding relates to the current study, which contributed to the literature on
Instructors may select feedback methods based on their ability to produce more
analysis of interviews and feedback samples. The considerations were the educational
purpose of the feedback, the relationship factors between the instructor and students, and
the feedback delivery relative to efficiency of time and effort. Shades of difference
within these themes influenced the instructor’s choice of feedback method. Similar to
Wade’s (2016) findings, this study’s data confirmed that an advantage of audio and
video feedback is that with less effort, it afforded the instructor more complex feedback
Sims (2016) made distinctions between summative and formative feedback and included
the importance of the social dimensions of feedback. Whereas Wade (2016) found that
audio and video feedback allowed instructors to provide more guided instruction in a
more affective way, Sims (2016) expanded on this point, showing that narrated video
feedback is faster and therefore less fatiguing. The research questions for this study
addressed the instructor-perceived value of audio and video feedback, how feedback differs in these
different forms, and whether instructor perceptions of these types of feedback are accurate.
Thus, instructor feedback methods may be influenced by the assignment. That is,
instructors were more likely to use audio or video feedback for a lengthier assignment,
such as an essay, but would use text feedback for shorter assignments. Some instructors
did not find the newer technologies more efficient, because extra steps were required. A
key finding is that audio feedback proved more useful in terms of promoting student
cognition, and further research regarding audio and video technologies and the formative
feedback effects of these choices is suggested (Sims, 2016). Due to its many similarities,
feedback: Both studies focused on instructor feedback and instructor perspectives and
both studies explored different methods of feedback. The current study expanded in this
feedback practices.
is the lens by which learners construct personal meaning as they engage in their world.
Sims’ (2016) study was rooted in Media Naturalness Theory as the premise for interpreting
to- face model (Sims, 2016). A third study of interest in instructor feedback methods is
proposed study.
Student preferences may also affect instructor feedback practices. Gredler (2016)
examined student preferences for audio, written, and video feedback and hypothesized that student
Qualitative data in the form of interviews were analyzed. The findings showed that
school, are full-time employees, parents, or those who may not have a high school
student preferences from a more diverse population. The current study examined a
However, the current study, unlike Gredler’s, examined feedback from the instructor’s
perspective.
yet, as more studies uncover the potential available through audio and video feedback,
some rough edges remain and practitioners continue to rely on text feedback for many
directly embedded feedback through a track changes feature. Electronic sticky notes,
different colors of text, and highlighting tools all offer understanding beyond mere
words. Borup et al.’s (2015) study found that for some purposes, written feedback was
preferred, as students found text efficient and organized. In this study, some students
also complained of the inconveniences of opening a video link, which was more
complicated than viewing text comments. However, Sims (2016) found that students
perceived text as overwhelming and that they did not always read written feedback.
Additional research in audio and video technologies was recommended (Sims, 2016).
This study addressed that recommendation through exploring how instructors select
feedback methods.
Thus, there are contradictions in the literature on feedback methods and their
contributions for online feedback. Borup et al. (2015) found positive attributes for both
audio and video feedback, but also found that students prefer written feedback. These
findings that demonstrated the convenience of written text must be juxtaposed with the
weight of the affective gains of personalizing feedback through audio and video
technologies. Even with the known advantages of audio and video feedback, online
instructors, though comfortable with their learning management systems, are not always
Online education may retain the perception of being less personal than a face-to-face
venue. If this is true, then it is ironic that technological advances through audio and
video feedback applications also serve to humanize this modality (Dixon, 2015). These
advantages may qualify video and audio feedback as pedagogical, rather than
technological advances (Dixon, 2015). This current study built on the findings of the
studies in this section, as the researcher examined the various methods of feedback
among faculty in higher education, but in addition, considered how or if these methods
feedback reflections and how these reflections inform their instruction. As Wade (2016)
noted, instructors may select feedback methods based on the population. Online education was originally
al., 2016). At the same time, a student’s level of digital skills may depend on their age
salient in their current careers, gaining the possibility of an alternative job, or the desire
segments of the course has implications for online instructors concerning their choices
of video or audio methods for feedback delivery. Vanslambrouck et al. (2016) found that
while adult participants stated they valued face-to-face interactions with the instructor,
some considered them a waste of time. The study found that time constraints created a
greater appreciation for the online components. If adult learners value efficiency, even if
it means losing personalization, these are broader indicators for selecting appropriate
feedback methods.
Student population may determine the instructor’s choice for feedback methods.
Three studies examined different populations and feedback deliveries: Borup et al.
(2015) examined students in blended courses; McCarthy (2015) studied first-year college students; and Rockinson-Szapkiw (2012) studied
audio feedback with doctoral learners. Examining these studies against the backdrop of
the population may lend insight into instructor methods for choosing technologies. For
example, Rockinson-Szapkiw (2012) examined a population of 125 doctoral learners. The problem that led to this study was that the
online environment was void of verbal cues and paralinguistic nuances that develop
social presence (Gunawardena & Zittle, 1997; Martinez & Barnhill, 2017). The three
constructs of social presence, cognitive presence, and teaching presence formed the framework.
For this study, the researchers examined two different groups of doctoral-level
students. One group received text feedback and the other received audio feedback in
addition to comments through Microsoft Word with its track changes feature. Both
groups had similar word counts for comments, but there was sufficient evidence that the
audio feedback allowed the instructor's paralinguistic cues and non-verbal emotions to come through.
They concluded that because audio feedback increases instructor presence, learning
outcomes may be stronger. The caution is that this was a particular population, and doctoral
learners may not represent other student groups (Rockinson-Szapkiw, 2012).
preferences, those preferences vary. McCarthy (2015) examined written, audio, and video
feedback, which can be viewed as the assessment for learning (McCarthy, 2015).
feedback comments justify those objective measures. McCarthy’s (2015) stated problem
for the study is that this kind of feedback is largely misunderstood in a written format.
The researcher’s goals were to explore the three feedback models. The models
were examined in terms of which were most engaging, easiest to comprehend, easiest to
access, and easiest to produce and distribute, and those which would reduce the
workload for instructors (McCarthy, 2015). Student participants used Likert scales to
assess the feedback models and to rank which kind of summative feedback they found
most beneficial. The majority of student participants preferred video feedback, followed
by written feedback. Audio feedback was the least preferred (McCarthy, 2015). These
findings suggest that although the personalized aspect of audio feedback was
noted, it was the least preferred. Since McCarthy's population was first-year university students, the preferences for visual
feedback may be related to the age group. According to Brumberger (2011), digital
natives have greater exposure to visual technologies, which relates to their preferred
learning style.
Exploring a different population, Borup et al. (2015) examined student and instructor
perceptions comparing text and asynchronous video feedback in blended courses. With
Draft and Lengel’s (1986) Media Richness Theory for support, these researchers posited
that information with depth, such as complex feedback, is best delivered face- to -face.
Video is less rich than face-to-face feedback, but based on the Media Richness Theory, it falls higher on the continuum than
text. The research questions were aimed at blended learning students' preferences regarding levels of feedback
quality compared to text only, text feedback versus asynchronous video feedback, and the
advantages and disadvantages of both forms of feedback. The findings indicated that
students preferred text feedback over video, due to its accessibility and the fact that
listening to video feedback tended to distract other students. Many reported the
additional disadvantage of not being able to view the video feedback on their phones,
which made the text options more desirable. Thus, even among traditional college-aged
students, as in McCarthy’s (2015) findings, the visual element was a greater priority
than the personalization of video feedback. The current study was about exploring the
methods that online undergraduate full-time faculty use when delivering feedback.
Those choices could be influenced by the instructors themselves, such as their comfort
levels with technologies or their level of motivation to try a new technology. Instructors
may also choose or avoid a method based on the efficiencies. In addition, student factors
As shown in the studies examined, traditional college students tend to value text
feedback over video or audio (Borup et al., 2015; McCarthy, 2015) due to the visual
element and the efficiency. However, Rockinson-Szapkiw’s study (2012) revealed that
doctoral learners did value technology that increased the instructor presence. It should
be noted that technology advances swiftly, and even current studies cannot keep pace
with improvements. For example, Borup et al.’s (2015) student population cited video
feedback as an inconvenience, since they could only access it from their computers;
however, to date, there are applications that would allow phone viewing, which would
feedback practices influenced their reflective thinking or how this reflective thinking
numerous disciplines (Ryan & Ryan, 2013; Winchester & Winchester, 2014). Because
Rolfe, 2014), it would seem to be regularly practiced and understood. However, there is a
lack of clarity in what constitutes reflection (Nguyen et al., 2014; Rogers, 2001), and this lack of a clear
definition has served as an impediment to its full potential and development for learning
and teaching (Nguyen et al., 2014). What is known is that reflection is a complex and
rigorous process (Rogers, 2001) that requires active, careful, and persistent consideration. It is an
intellectual stance that takes a daily practice to a state where change can take place both
at a personal and broader context (Ryan, 2013), with critical reflection resulting in self-
appraisal (Le Cornu, 2009). Within this broad framework, numerous models, layers,
and constructs have been created to identify and practice reflection. A greater
understanding of reflection and how its practices can be operationalized is critical since
it is regarded as a powerful mechanism for developing thinking and change (Dimova &
Kamarska, 2015).
conflict, while Dewey included hesitation, perplexity, and mental difficulty as elements
that conceptualize reflection (Camacho Rico et al., 2012). This discomfort stems from
the act of wrestling the perplexity against a set of moral or ethical issues, which takes
place at the highest level of reflection (Camacho Rico et al., 2012). The discomfort,
According to Wass and Golding (2014) for learning to occur, the instructor must
“problematize” (p. 677) for the student. That is, the assigned task must be appropriately
challenging. Thus, for this current study which used the ZPD as its theoretical
framework, this same aspect of reflection, where the unresolved begs to be resolved,
reflection and its contribution to improved teaching, and therefore, student outcomes.
For students, feedback on written work can be viewed as a vehicle for reflection (Quinton &
Smallbone, 2010). Camacho Rico et al. (2012) explored how the reflection process
enhanced the first-year teaching experience. Data were classroom observations,
reflective journals, lesson plans, and interviews and these were analyzed according to
how new changes or methods might affect teaching. While reflecting, these teachers
adapted their actions during teaching. These researchers found that reflective practices
led teachers to act differently, and that reflection has a useful place for first-year teachers.
critical decisions (Wilson, 2013). This phenomenological study explored how teachers
instructor reflective thoughts can provide insight into student achievement. In this study, the
four themes that emerged were student learning, relationships, curricular planning, and
lesson delivery. Like Camacho Rico et al.'s (2012) study, this was qualitative, but
phenomenological in design, allowing the researcher to observe, record, and interpret these experiences (Wilson, 2013). The discussion of
Dewey’s philosophy identified the stages of reflection and Wilson (2013) noted that the
participants moved through each stage. The qualitative design did allow the researcher
to identify emerging themes; however, in the findings, the researcher claimed that the
reflective journals impacted student learning (Wilson, 2013), but did not show data to
support this claim of improved reflective thinking (Rogers, 2001; Wilson, 2013). With Dewey's work as the foundation
of reflective thinking, Schon’s extension focused more closely on reflection in teaching
(Jaeger, 2013; Wilson, 2013). Schon defined a reflective practitioner as one who plans
before taking action, reviews past events to consider alternative choices, but also
reconsiders a course of action midstream. These tenets form his theory of reflection-in-action
and reflection-on-action (Schon, 1983). Reflection-in-action happens during an
action without the action being interrupted (Camacho Rico et al., 2012; Rogers, 2001)
and connects the uncertainties of teaching with the demands of problem solving. What
differentiates Schon's model from others is that the reflection-in-action component
happens during teaching, but many practitioners are pre-occupied with the latter stage,
reflection-on-action. Both practices have merit, yet the tangible outcomes of reflection-in-action remain largely
unexamined. Reflection can be used to shift courses and to re-frame an event (Jaeger, 2013; Rogers, 2001; Rolfe, 2014).
Some reflection models provide clear steps or sequential stages, such as Ryan and
Ryan’s (2013) Teaching and Assessing Reflective Learning (TARL) model and Rodgers
(2002) six phases of spontaneous interpretation which include: naming the problem,
and testing the hypotheses (Wilson, 2013). Yet other models do not have a sequential
order, but are characterized by recursive properties and processes. Dewey’s five states,
for example, should not be viewed as tasks to accomplish, but rather as an interactive
process (Wilson, 2013). Farrell’s model of reflective practice (2004) forms a loop,
emphasizing the process and is focused on the questions: What am I doing in the
classroom? Why am I doing this? What is the result? Will I change anything based on this result?
Despite their differences, these reflective models each have value according to
their purpose. For this study, the selected model used to analyze data was Ryan and
Ryan’s (2013) 4R model of reflective thinking. This model addressed the extent of
teacher reflection that occurred during feedback for online instructors. The model
includes reporting and responding, relating, reasoning, and reconstructing. The critical
reconstructing stage prompts instructors to look towards future practices and is intended
to lead to deep examination (Ryan & Ryan, 2013). This hierarchy progresses from the
most fundamental reflective stage to the more complex (Bennett et al., 2016).
The current study used Ryan and Ryan's (2013) 4Rs model to develop the
questionnaire for faculty feedback reflection. Bennett et al. (2016) used Ryan and
Ryan's (2013) 4Rs model to explore how students and teacher candidates engaged in
critical self-monitoring as they attended their learning experiences. The questions were
structured so that researchers could analyze responses from students, academic team
members, community members, and stakeholders (Bennett et al., 2016). Stories, films,
audio files, and journals allowed interviewers to gauge expectations and explore teacher
candidate fears about working with the Aboriginal people. Data were coded into
categories and themed to create the final data set and showed that the participants moved
through three stages of reflection. The results of these reflections served to inform the
instructional approach for the Aboriginal people. The current study also used
Ryan and Ryan's (2013) 4Rs model as a framework for exploring how online
undergraduate full-time faculty reflect on their feedback at a southwestern higher educational institution.
Dewey believed that reflection should have a conscious aim (Rogers, 2001) and that this
conscious process should then show tangible results. Schon (1983) stated that proper reflection leads to action, and identifying
future actions that enable change in society is an outcome at the highest level of
reflective processes (Bennett et al., 2016). When a learner is reflective, change follows:
effective reflection can result in changes in behavior (Rogers, 2001; Schon, 1983).
While the features and measures of learning vary, certainly the antecedents to learning are
outcomes of reflection; these include freedom of action, capacity for change, and a
greater awareness of context (Langer, 2000). Ryan and Ryan’s (2013) reconstructing
segment of the reflective scale prompts the learner to codify reflection’s outcomes with
questions such as “What might work and why? Are my ideas supported by theory? Can I
make changes to benefit others?” (Ryan & Ryan, 2013). Perhaps by being mindful about
reflection’s outcomes, practitioners can know the result of its conscious aim. In this
study, those reflective practices and how they influenced learning were explored.
elements were measured, Borup et al. (2015) conducted a mixed methods study, with quantitative
information collected from surveys. Atwater et al. (2017) conducted a qualitative case
study to explore how students perceived advantages of video feedback, using video
recordings for data. Finally, Wade's (2016) study was qualitative, as the study's purpose
was exploratory; the data collected were self-reported evidence on feedback from participants. However, most
studies on feedback used qualitative methodology (Atwater et al., 2017; Charteris, 2015;
Ellis & Loughland, 2017; Jing, 2017; Kastberg et al., 2016; McGuire, 2016). As well,
nearly all reflection studies are qualitative and focused on establishing a framework for
reflection, including Camacho Rico et al.’s (2012) and Wilson’s (2013) qualitative
studies. Therefore, a qualitative methodology approach was most appropriate for this
study. Some studies used multiple methods to address the research questions. However, most studies reviewed on
feedback and reflection were consistent with a qualitative approach. A qualitative approach would yield richer data in
the themes of reflection and feedback. This study employed three forms of data: a faculty questionnaire, classroom feedback, and a focus group. Questionnaires are common in qualitative
studies. To avoid bias, they must be constructed carefully (Yin, 2014). Ellis and
Loughland (2017) used questionnaires to derive thematic categories to analyze the data
of student perceptions on formative feedback. Borup et al. (2015) used interviews with
instructors to address the perceived advantages and disadvantages of audio and video
feedback. However, an interview has greater potential for bias. Yin (2014) discusses
how participants may echo one another's responses. A written questionnaire, based on
Ryan and Ryan's (2013) reflection model, allowed faculty to
respond individually, yielding more independent data. For this current study, the researcher developed
careful questions and responses were coded and themed to build a case for the study.
Some studies from this literature review used classroom data. For example,
Portolese Dias and Trumpy (2014) reviewed online instructor feedback to compare
audio’s effectiveness with written text. This form of data can be useful in addressing a
“what” question in research (Yin, 2014). In this study, the question: “What kind of
feedback methods do faculty use?” is best answered through reviewing classroom data.
First, this provided abundant current data, since multiple assignments and courses
were examined. Next, the classroom data served to confirm the responses from the
faculty questionnaire. Sims (2016) noted the limitation of faculty commenting on their own
feedback, which may limit the data's authenticity. Hattie and Timperley's (2007) framework of three
questions (Where am I going? How am I going? Where to next?) and the four levels for each feedback
question (task level, process level, self-regulation level, and self level) guided the analysis (Hattie &
Timperley, 2007). Focus groups obtain the viewpoint of a group (Yin, 2014). For example, Ellis and Loughland (2017)
used focus groups to gather perceptions of a broad group as Hattie and Timperley’s
(2007) feedback questions were addressed. This study used focus groups to address the
research question about how online undergraduate full-time faculty feedback practices
influenced their reflective thinking or how this reflective thinking informed instruction
at a southwestern higher educational institution. The group dynamics associated with focus groups supported gathering these collective perceptions.
Summary
the theory that undergirded the current study. The ZPD posits that instructors consider
the learner’s actual developmental level, then strategize to either support through
scaffolding or challenge them in this proximal zone, with the goal of the student
how feedback practices might inform learning, as instructors may reflect on what
strategies will operationalize learning. The next section provided a historical perspective
of online education. To fully address the research questions about online feedback, a
Next, the chapter provided a broad and scholarly review of the current literature
related to the phenomena of the proposed study topic of how online faculty feedback
The chapter then reviewed seminal literature on feedback for a foundation on the
principles that constitute effective feedback. The critical differences between summative
and formative feedback were discussed, as formative feedback is linked to student
outcomes (Hattie & Timperley, 2007; Mirzaee & Hasrati, 2014; Nicol & Macfarlane-
Dick, 2006) and recognized models were illustrated, reflecting their commonalities and
their alignment with the theory. To address the research questions appropriately, this
chapter also reviewed social presence. For the online instructor, the theory of social presence may factor into their choices for feedback
delivery methods. Their tone in writing and the social presence cues may be part of their
reflections on their feedback, which could inform teaching practices. Thus, this
Chapter 2. One research question in the study was: “What methods of feedback do
faculty use?” The reviews of the literature on this topic aimed to address that
literature suggested that more variations in this area needed to be tested. Sims (2016)
acknowledged this need stating that studies on different feedback formats within the
same course would reveal greater insights into instructor methods and purposes. Thus,
faculty use.
This chapter’s last theme was reflection, which included several sub-topics to
address the element of reflective thinking. Theories and studies selected were either
current, such as Wilson's (2013), or older, seminal literature from
theorists and researchers like Schon. This section acknowledged the pioneers of
reflection theory, but also provided a rationale for choosing Ryan and Ryan’s (2013)
model that supported this study as the foundation for examining faculty reflective
practices. The literature was selected with the study’s phenomenon in mind: teacher
reflective practices. Wilson (2013) concluded her study stating that reflective practice in
education is lacking and suggested that studies could also focus on teachers reflecting.
This study flipped the paradigm, making the teacher the learner, as they examined their
own feedback practices. Both feedback and reflection have histories of deep relevance in education. With
the growth of online education, institutions aim to maximize learning and satisfaction through this modality. To fully meet online
learners' needs, this study, by exploring how feedback methods are selected and whether reflection informs teaching, contributed to the body of
knowledge on feedback and reflection in online learning. Such findings would benefit online institutions and their instructors.
The findings on feedback and reflection showed that, given the magnitude of
feedback (its learning potential, its ability to increase social presence, and the ways technology
can improve its delivery and efficiency), higher educational institutions can better
accommodate the conditions for instructors to focus on and improve feedback methods.
Moreover, this study’s findings show the student perspective as well. A movement
structured written responses to instructor feedback (Denton & Rowe, 2015; Geitz,
Brinke, & Kirschner, 2015). These are recommended practices to further increase self-
regulation.
Chapter 3: Methodology
Introduction
The purpose of this qualitative exploratory single case study was to examine how
online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking. The methodology and design
that best addressed the research questions will be discussed. This chapter will include: the
research design, selected population and sample, instruments, validity and reliability of
the data, the data collection process, management, analysis procedures, and ethical considerations.
The data for this study came from responses from the faculty questionnaire, from
reviewing faculty feedback, and from focus group findings. These faculty members
teach at a university in the southwestern United States. This was a qualitative exploratory single case study that
included data gathered through reviewing feedback from the participating faculty’s
online courses, questionnaire responses, and identified themes from a focus group. The
questionnaire was prepared by the researcher, based on Ryan and Ryan’s (2013)
reflection model and allowed faculty to reflect in the course of their regular daily
feedback processes. The data from the faculty sample population were analyzed based
on Hattie and Timperley’s (2007) framework and reviewed to determine the methods of
feedback given. A focus group was used to explore how these reflections influenced
instructor practices or methods. Emerging and identifiable themes provided data that
addressed this research question. This helped determine to what extent faculty
Each type of data was necessary to address the research questions of this
qualitative case study that addressed the perceived influence of feedback practices on
reflective thinking. In a faceless environment, online instructors seek out new methods of delivering feedback
(Borup et al., 2015). Understanding the methods selected and how reflection influenced
instruction could benefit online education.
It was not known how online undergraduate full-time faculty perceived the
influence of their instructional feedback practices on their reflective thinking, and hence,
their instructional approaches (Borup et al., 2015; Pattison, 2017; Quinton & Smallbone, 2010). Additionally,
feedback can be a way to increase instructor presence (Ali, 2016; Sims, 2016). A greater
understanding of these practices could inform online instruction.
The participants for this study were online undergraduate full-time faculty who
teach exclusively in this modality. The university selected for this study has a physical
campus of 15,500 students and 60,000 online students. Attention to the online
population and a closer examination of faculty practices involving both feedback and
reflection can potentially improve student satisfaction and learning outcomes. If the
factors that influence feedback reflection and how that reflection informs teaching are
feedback methods could improve instruction and therefore, student learning outcomes.
For this study, the participants were full-time online undergraduate faculty who
meet and collaborate at a campus facility. The institution where this study took place
also employs adjunct instructors, but the online undergraduate students are assigned full-
time faculty for first-year sequence courses. This model is unusual, since higher
education institutions more often employ adjunct faculty, particularly for online courses,
which can be facilitated remotely (Schieffer, 2016). Online undergraduate full-time
faculty teach 40 hours per week, year-round and work at campus facilities three to four
days a week, with the option of working remotely one to two days per week.
At the university selected for this study, the online population exceeds the
campus face-to-face population. Yet, there are more studies on the effectiveness of face-
to-face environments than there are studies that document effective practices in online
courses (Forte et al., 2016). Two highly studied forces, feedback and reflection, were
examined in a new way in the online environment to provide data that may shape
practice. Faculty consciousness of how they reflect on their feedback could prompt improved practices.
Seeing connections between their reflections and practices may prompt greater attention
to feedback quality. To improve the online experience for learners, understanding which methods might prompt
more attention to faculty reflection could influence practices. For example, if faculty
find that video or audio feedback provides richer feedback, and prompts a higher level
of reflection towards instruction, then institutions may make decisions to extend these
practices to more faculty. In addition, given the vital role that feedback plays in online
learning, giving instructors more time to prepare feedback or to explore new web tools
may contribute to improved practices.
Research Questions
The research questions posed in this study were derived from teaching practices
that are well-documented in face-to-face teaching; however, there are fewer studies that
examine these practices in online settings. Vygotsky's Zone of Proximal Development is a learning theory that undergirds the study. Learning
takes place when students move from their zone of independence, into the ZPD, where
their task is too challenging to accomplish on their own, but can be achieved through
support (Vygotsky, 1978). Feedback serves as a vehicle for formative instruction for online instructors (Quinton & Smallbone, 2010; Sims,
2016; Webb & Moallem, 2016). It is, therefore, important to understand the thought
process of online faculty as they choose feedback delivery methods and as they
construct their feedback. Their reflections could point to future improved practices to
enhance student learning. Like face-to-face instruction, no two online faculty members
grade the same, but all could benefit from a greater understanding of how they conduct
their own feedback, what methods are most effective, and to what extent their
reflections influence their teaching. Factors that determined online faculty feedback
practices could include addressing the efficiencies necessary for grading volume or
moving students to their Zone of Proximal Development (Vygotsky, 1978) and the
specific resources or remarks given that are designed to achieve that. Understanding
these reflective practices that take place during feedback can provide rich insights into
how faculty might approach feedback and implications for professional development or
individual growth. The primary research question for this study was derived from the
problem statement:
It was not known how online undergraduate full-time faculty perceived the
influence of their instructional feedback practices on their reflective thinking, and hence,
their instructional approaches.
RQ1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their reflective thinking?
To address these research questions, there were three different data collection
approaches. The first form of data collection was a questionnaire, prepared by the
researcher. This 12-item instrument addressed all three research questions, with
four questions created for each research question. The prepared
questionnaire was guided by Ryan and Ryan’s 4Rs model (2013). The purpose of using
this model was to address the pedagogical landscape for reflection so that there was a
structured basis; other models offer less structured
reflection, whereas this model is more of a systematic approach to reflection (Ryan &
Ryan, 2013). Reflections were addressed on four different levels: Reporting and
responding, relating, reasoning, and reconstructing. Ryan and Ryan (2013) condensed this
model from Bain et al.'s (2002) five-level model. The researcher prepared questions
specific to instructor feedback based on these four levels. This questionnaire was
designed to provide qualitative data on how faculty participants reflected while giving
feedback.
The second form of data collection was a review of classroom feedback from the online undergraduate full-time faculty participants. This review of classes addressed RQ2: What methods of
feedback do faculty use, whether text or video. In addition, this review of
classroom feedback addressed RQ1 and RQ3, as Hattie and Timperley’s (2007) levels of
feedback formed the framework for the specific levels of feedback that faculty used,
levels that are associated with learning (Black & Wiliam, 1998; Geitz et al., 2015; Hattie & Timperley, 2007).
Feedback on written assignments, such as essays, was examined. For these assignments, the feedback methods selected had a central role in the analysis.
The third form of data collection was the focus group, which consisted of ten of
the 12 faculty participants. These were the same participants who participated in the
questionnaire. The focus group addressed RQ3: How do online undergraduate full-time faculty
perceive the influence of reflective thinking on their current and future instructional
approaches? Using Zoom computer software allowed remote participation. The focus
group allowed the participants to review the data, which was either confirmed or
disconfirmed. This triangulation supported the case study, which is a holistic endeavor (Merriam & Tisdell, 2016) in which the data converge.
Research Methodology
The qualitative methodology was selected for this study. Two types of
methodology, quantitative and qualitative, are distinguished according to their data. In quantitative data, numbers are the prevalent
form of data, whereas in qualitative data, words are (Merriam & Tisdell, 2016). When a
study's central data are words, a qualitative methodology best suits this form of data (Merriam & Tisdell, 2016). This qualitative study focused on faculty
feedback, which is best measured through their words. These words were analyzed and
coded to yield rich, nuanced data that can come from this type of methodology.
A qualitative method was the best choice for addressing the questions for this
study. A qualitative method captures the phenomenon in its natural setting (Yin, 2014).
For this study, online university instructors were studied in their setting of instruction,
where daily, they facilitate courses, which includes providing student feedback.
Conducting the study within this natural setting resulted in less bias or changes that
might have distorted the instructors’ thoughts. Since qualitative research, by nature, does
not lead to the specific conclusions of quantitative data (Merriam & Tisdell, 2016), this
natural setting yielded less bias as the investigator explored themes and patterns of
the data. A third approach is a mixed methods design, using both qualitative and
quantitative data. Case studies often employ a mixed methods approach, which has the
advantages of integrated methods in a single study (Yin, 2014). This integration results
in richer data that is stronger and able to address more complex research questions (Yin,
2014). The phenomenon of this study was best explored through the
qualitative process, where faculty reflections, feedback from courses, and focus group
findings were coded and themed to provide three forms of data, which were triangulated,
contributing to consistency across responses. Therefore, qualitative data
collection processes were used, rather than a quantitative or mixed methods approach.
Research Design
A qualitative exploratory single case study was selected as the best design for
this qualitative study. A case study is selected when the researcher wants to provide an
in-depth description of the phenomenon, when the research questions address “How” or
“Why” and when the researcher has no control over the behavioral events (Yin, 2014).
According to Yin (2014), a single case can represent a contribution to knowledge in a
feedback influenced instruction. This study conformed to that criteria, making it the best
choice for this study. The themes from the three data sources converged to fully explore the phenomenon under analysis in this single case study, which was considered bounded, since the sample
participants teach at the same higher education institution. The present study explored
online undergraduate full-time faculty feedback reflective practices in depth. This ability
to go into depth is a characteristic of a case study, which can offer nuanced data in the
form of coded themes. Other qualitative methods could not accomplish this as
successfully. In a case study, the researchers themselves are viewed as the sensing instrument (Yin, 2014).
The proposed study research questions were about how online undergraduate
full-time faculty perceive the influence of their instructional feedback practices and their
reflective thinking’s effect on their instructional approaches and how this population
conducted their feedback, whether through video or text methods. These types of
questions were best addressed through a case study. If the study question involved
“Who” might benefit from feedback, or “Who” provided formative feedback, the
resulting data would be less useful. If the study were about “How much” feedback is provided, the goal would be quantitative measurement, so that goal is not consistent with the research questions
addressed in this study. Because “How” and “Why” questions are more exploratory in
nature, the case study emerged as the best choice for this qualitative study (Yin, 2014).
For this study, a single case design was the best approach because the researcher
did not have control over the behavioral events. Instead, studying the phenomenon in the
natural setting offered usable and unbiased data (Yin, 2014). One alternative method, the experiment, relies on direct manipulation of behaviors by the researcher (Yin, 2014). However, for the phenomenon described, this manipulation was neither possible nor desirable.
There are disadvantages with any design, but a researcher must anticipate the
potential flaws and mediate the process to ensure the results are salient. Traditional
concerns include the attention to rigor, the generalizability of the study, and the level of
effort involved in this design (Yin, 2014). A single case study may be viewed as less
rigorous than other designs; however, this criticism is more likely applicable in a study
where the procedures were incorrect or haphazard (Yin, 2014). Therefore, careful attention was paid to the procedures and the data collection process so that the rigor matched that of other designs. When researchers provide sufficient detail, the
reliability and credibility of the study improve (Baxter & Jack, 2008).
Case studies have been criticized because they are only a single case, which may
not generalize to a broader field. However, when grounded in theory, the results of a
single study can generalize to a particular population (Yin, 2014). This study revealed online faculty feedback and reflective practices. Finally, a third concern with a case study is the volume of effort. A researcher
working within this design must gather large amounts of transcribed data, but careful planning and organization can mitigate this burden.
The multiple considerations led this researcher to select the case study design as the
best choice for this study. This qualitative approach allowed a close look at an empirical
topic in its natural setting and properly addressed the study’s exploratory questions about online teaching. Despite the challenges of collecting data, proper procedures were followed so
that this study demonstrated the rigor that characterizes other designs. Finally, as the
research questions were properly addressed through the study, through the ZPD theory
(Vygotsky, 1978), there was potential for the results to generalize to institutions of
higher education.
The population of interest for this study was online faculty in higher education.
This modality’s rapid growth (Deming et al., 2015) has prompted institutional leadership to focus on the quality of online instruction (et al., 2017; Borup et al., 2015). Feedback is central to the instruction of the online modality (Borup et al., 2015). The findings of this study could support the attention to
reflection and feedback or how different delivery methods can enhance online instructor
feedback.
The target population was employed at a private higher educational institution in the southwestern United States. At this institution, there are currently 150 full-time
faculty in the Division of Online Instruction. These instructors work in a collaborative environment that encourages shared practices. These faculty teach courses in their approved disciplines,
such as Math, Science, or English, but may additionally be assigned first-year sequence
courses, based on rolling enrollment demands. This group of full-time faculty teach 80 to 100 students year-round.
The study sample was drawn from this target population of 150, with the
exception of the 14 English instructors, who are directly supervised by the researcher.
One hundred thirty-six faculty were sent an email requesting participation in the study.
The email contained exclusion criteria, listing the selected first-year sequence courses
that contain assignments of 300 words or more. To qualify for the study, faculty needed
to have at least one of the first-year sequence courses listed at the time of receiving the
email, so that one or more of those courses could be used during the questionnaire.
Thirteen faculty responded and were qualified to participate; however, one did not
successfully complete the questionnaire, so the final number for the data source was 12.
Following approval of the proposal by the university’s academic quality review process, the approval from the
Institutional Review Board (IRB) was received. The site authorization letter was valid
for one year, so it was anticipated that the data collection activities would be conducted
in November of 2017. The researcher examined the classroom artifacts through her supervisory access, maintaining alignment between the volunteer participants and their concomitant feedback on assignments. The
actual data collection time frame was January through April, 2018 and the data
collection process was reviewed and discussed. The researcher’s committee reviewed
the data collection process throughout, as meetings during this time frame were held regularly.
The sample sizes for the three forms of data (the questionnaires, the classroom feedback, and the focus group) are outlined below. The first data source was a 12-item questionnaire, which produced pages of data for analysis. The classroom feedback drew from four courses per
participant, with up to four assignments in each course. The final source of data was a focus group of the participants. The transcript from the recorded session yielded six single-
spaced pages. These different data sources and methods of collection served to increase
the validity and reliability of the study through triangulation (Golafshani, 2003).
Purposive sampling was the procedure for obtaining the target sample. This is a method that
allows the researcher to select willing participants (Yin, 2014) and is a form of non-
probability sampling that is common to case studies. This particular institution employs
full-time online faculty, who facilitate undergraduate courses with greater regularity and
volume than their adjunct counterparts; a student load between 80 to 100 students is
maintained year-round, yielding robust data. Moreover, this researcher has personal
knowledge about the faculty practices and understands that the online undergraduate
full-time faculty may choose their delivery methods. Because there is a variety of
feedback methods, this purposive sample method assisted in exploring the study’s
questions.
This study sample of full-time online undergraduate faculty was used for the
questionnaires and the focus groups. For the second data source of classroom papers, the
researcher reviewed four courses, with up to four assignments from each course. A
convenience sampling method was used for classroom papers, as units selected for
inclusion in the sample were the easiest to access. This kind of sampling was necessary,
since the goal was to examine classroom papers from recent first-year-sequence courses.
Since faculty in this division teach different courses at different times, the researcher selected the most recently closed qualifying courses.
Sources of Data
This study triangulated data from three sources. The first source of data was the faculty questionnaire, completed as participants provided feedback in their courses. Next, the researcher reviewed participants’ first-year sequence
courses to determine their methods of feedback using a prepared chart to identify the
methods of feedback faculty use (see Appendix J). The researcher reviewed four classes
per participant, and three to four assignments from those classes, yielding nearly 192
artifacts for review. The third source of data was a focus group for the participants. This
session was recorded through Zoom, an application that allows internet video
conferencing, and faculty discussed the data findings from the questionnaires and the
feedback results from reviewed papers. During the focus group session, participants discussed and clarified these findings. Triangulation provides convergence and corroboration of the results from these different methods
(Almalki, 2016). The convergence then serves to determine the consistency of the
study’s findings (Yin, 2014). These three data sources were the most effective tools to
explore the perceived influence of feedback practices on teacher reflection and the resulting instruction.
Qualitative sources of data are typically more complex than those used in
alternate research methods. The goal of the single case study is to select data that reflect validity and reliability, resulting in data worthy of analysis (Yin, 2014). The
principles of data collection include multiple sources of data, a created database, a chain
of evidence, and care with data (Yin, 2014). Although an interview is a common data
source for qualitative case studies (Yin, 2014), for this study, the interview was not the
best way to collect data about actual events or behaviors of faculty feedback. Instead, the
questionnaire was selected as a means of meeting these principles and capturing how faculty reflect while delivering feedback.
The first source of data in this study was a 12-item questionnaire designed to
capture reflective feedback practices among online undergraduate full-time faculty. This
questionnaire was designed to collect data on the human event of faculty feedback
practices and reflections that took place during their feedback delivery. This
questionnaire was triangulated with the focus group discussion and the observation of classroom feedback to corroborate the findings of the study’s phenomena. Questions prompted faculty to consider the kinds of feedback they provided.
The questionnaire was sent via email as an attachment and was completed while faculty graded three assignments; its items focused on how faculty reflected while delivering feedback. Ryan and Ryan’s 4R’s
(Ryan & Ryan, 2010; Ryan & Ryan, 2013) served as the model for the questionnaire.
This model has four levels, and each level represents a different depth in reflection.
Level 1 is reporting and responding, level 2 is relating, level 3 is reasoning and level 4 is
reconstructing. The creators of this model note that it is flexible and can be customized
to different disciplines and purposes (Ryan & Ryan, 2013). The faculty questionnaire for
this study had four questions at each level, and these four questions were aligned with the corresponding depth of reflection.
The second source of data in this study was faculty feedback from online
classrooms. The Site Authorization approval granted the researcher permission to review
the courses, and the researcher had access to the classes through her supervisory role.
The courses were selected through a convenience sampling procedure. This was necessary to maintain consistency between the faculty
who volunteered to complete the questionnaire and their classroom papers. The
dissertation committee provided oversight during the data collection process. Reviewing
these classes addressed the research question: What methods of feedback do faculty use?
First, full-time online faculty used different feedback methods, including text or
video tools for feedback. Understanding the specific delivery method yielded deeper
meaning as faculty completed the questionnaires and participated in the focus group
discussions. Instructors may find that using audio or video feedback enhances the
asynchronous classroom (Ali, 2016), so this may drive the instructor’s choice in this
kind of feedback. The data from observing the online classes did not yield the faculty participants’ rationale for their choices, but as other forms of data surfaced, these rationales emerged.
In accessing the classroom feedback, the researcher used care and discretion
during the review, making every effort to exclude any student information or
identification that might be referenced as video feedback was transcribed and names
were removed from text feedback. Only qualifying courses were examined, and the most recent qualifying assignments were reviewed.
The theory that grounded the feedback analysis was Vygotsky’s (1978) ZPD; this framework supported the feedback analysis. The principles of this theory determined how the feedback was categorized relative to its formative value.
The faculty questionnaire was another data source. The researcher prepared the questionnaire, and it was validated through field testing to ensure the study was as valid as possible. Field testers evaluated the questions as they related to the study’s purpose. Based on these evaluations and suggestions, the researcher made changes to the questionnaire, completing this validation process.
The third data source was the focus group. The participants who completed the questionnaires also participated in the focus group, but two could not attend, resulting in a group of ten. A focus group is a guided discussion about some aspect of the case study, with the purpose of surfacing the views of each person in the group. The focus group discussions allowed faculty
participants to connect their responses from the questionnaire and to share and glean in a
peer-centered discussion. The focus group session was virtual and utilized Zoom, an
internet conferencing application, which allowed recording and convenience for the participants.
Trustworthiness
Several measures were taken to ensure this study’s trustworthiness. In contrast to a quantitative study, where the research instrument itself determines the credibility of the study, a qualitative study’s credibility is dependent upon the researcher and how they conduct the study. Trustworthiness encompasses credibility, transferability, dependability, and confirmability (Lincoln & Guba, 1985).
Credibility is critical to establish the overall validity of the study. One way to
establish credibility is to demonstrate that the study’s sources of data were derived from
literature in the field or existing models or studies (Golafshani, 2003). For this study, the
existing framework of Ryan and Ryan’s 4R’s model was adapted. This model is
recognized in the literature and has been used in studies in the field of education, and the
authors state it is adaptable to other disciplines and areas of study (Ryan & Ryan, 2013).
Credibility is also heightened when multiple data sources are used (Baxter & Jack,
2008). For this study, three data sources were triangulated: the faculty questionnaire, classroom feedback, and the focus group. Transferability is established when a study’s findings can inform either future research, policy, or practice (Lincoln & Guba, 1985). This study’s findings
have potential for transferability. For example, the study showed that some faculty stated
they would use video feedback with institutional support. This means that leadership
may support multi-modal feedback through training or funding, so that instructors can
deliver feedback in a method that is most effective. It is also possible that more attention
might be given to the very act of reflection and how reflecting during feedback can
provide rich material for classroom instruction. Therefore, reflection might be a focus of
future professional development. In this study, detailed description was necessary so that readers could determine the transferability of the findings to their own contexts.
Dependability was achieved through evidence, records of the data analysis, and a
clear alignment of the study’s phases (Lincoln & Guba, 1985). In a qualitative study,
evidence can come from the data sources and from the researcher’s field notes. The
researcher maintained raw data in Word files that are labeled and organized. This
electronic method of note taking ensured that the data were not lost, damaged, or
misread, and that it could be reviewed by another party. Careful records of the
established coding method were also documented carefully, which contributed to the
study’s dependability. Templates were used to establish themes and a coding process.
Finally, this study’s gap, problem statement, research question, and methodology were
aligned, which is an additional requirement for dependability (Lincoln & Guba, 1985).
Confirmability was supported in several ways. The questionnaire was field tested and evaluated by experienced online full-time faculty who provided feedback on the
questions and to what extent they addressed the research questions. Acknowledging the
study’s shortcomings and their potential effects is another way to ensure confirmability.
By transparently discussing these potential shortcomings and their effects in the study, confirmability was supported. In a qualitative study, triangulation demonstrates corroboration across the multiple data sources, which contributes to the study’s confirmability (Golafshani, 2003; Yin, 2014).
Since the study’s conclusions are more accurate with analyses from different data sources (Yin, 2014), this convergence (Baxter & Jack, 2008) of the questionnaire responses, paper feedback data, and focus group discussions supported the study’s confirmability.
Reliability
A study has reliability when steps are taken to ensure consistency and care in
data management and in protocols. When the collection and management process is
well-documented, the study can then be replicated (Yin, 2014). Following the guidelines set forth by the dissertation committee, the researcher provided documentation throughout this study to support its reliability.
First, a protocol for conducting the focus group was prepared for the purposes of
replication. Next, the raw data were managed through creating anchor papers for
consolidating faculty feedback. The coding systems were documented so that another
researcher could replicate the study. Finally, a template was used to distribute the themes.
Consistent with the purposive sampling strategy, the data were collected as follows: Following site authorization and IRB approval, an email was sent through the director’s office inviting qualifying online
faculty to participate. The invitation reached approximately 136 faculty, as there are 150 full-time faculty in this division, and the 14 who are directly supervised by the researcher were excluded. The email specifically requested participants who were teaching qualifying first-year sequence courses. These qualifying courses were listed within the email. First-year sequence courses with
writing assignments were selected because, despite the growing literature on the
advantages of peer feedback, Planar and Moya (2016) argued that early courses in particular benefit from instructor feedback.
Thirteen faculty agreed to participate in the study. The 12-item questionnaire was completed as participants graded, so that their responses best reflected the phenomena in their
natural setting (Yin, 2014). Participants completed the questionnaire either at the
campus host site, or remotely, as long as the participants were providing feedback in
their daily routine as the questionnaire was conducted. One participant did not complete
the questionnaire, so the study results reflect responses and classroom data from 12
participants. The responses from the questionnaire were then analyzed for coding and
themes.
Classroom analysis was a second source of data for this study. This analysis was
necessary so that actual faculty feedback could be examined according to how faculty
may reflect. For each participant, four first-year sequence courses were reviewed and
from these courses, four papers from assignments that required 300 words or more were
reviewed. The analysis showed what kind of feedback methods faculty used and served to illuminate the relationship between feedback and learning, per Vygotsky’s (1978) ZPD. As stated, the courses were selected through convenience sampling. Through the site authorization, permission was confirmed to review the classes, but the researcher did not
engage with the university’s administrative staff or research department in obtaining the
feedback. However, the dissertation committee was informed of the data collection and
analysis process. To ensure confidentiality, all student names were removed from
faculty feedback and video feedback was manually transcribed for the purpose of
creating data without student names as identifiers. For this feedback analysis phase of
the research, no additional efforts were required on the part of the participants. The data were then prepared for coding.
The third source of data was the focus group, which took place on March 27, 2018.
The same participants who completed the questionnaire agreed to participate in the focus
group; however, two could not attend. The researcher facilitated the session with ten of the twelve participants. The session was hosted from a conference room in the same building where full-time online faculty work, but participants joined remotely through the Zoom application. The session was then manually transcribed, yielding six single-spaced pages
of responses.
Confidentiality was maintained during each phase of the data collection and
analysis. The collected data were free from student names or identifiers and will be
stored within the institution’s research department. In addition, data were stored on a flash drive and kept locked in the researcher’s desk drawer. These measures serve to
protect the participants’ confidentiality and to maintain a high level of data management.
Triangulation enhances the validity and reliability of findings (Golafshani, 2003). Triangulation was evident through
the questionnaire that faculty completed, the reviewed classroom feedback data, and the
faculty focus group recorded sessions. Comparing these three forms of data confirmed
the value of triangulation. The data were summarized to further obtain reliability and
validity. The researcher presented the research questions and how the synthesized data addressed them.
The raw data were prepared for the analysis in various ways, depending on the
data source. The questionnaires were reviewed for completion, then saved in files. The
classroom feedback was copied and pasted into Word documents, one document per class, then eventually consolidated into one document per participant, with repeated feedback remarks deleted. The video feedback was transcribed into documents. The focus group was recorded, then manually transcribed, resulting in six pages of single-spaced text. These procedures prepared the data for
analysis.
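The consolidation step described above, merging per-class feedback documents into one list per participant while deleting repeated remarks, can be sketched in code. This is a hypothetical illustration, not the researcher's actual procedure; the function name and the normalization rule (trimming whitespace and ignoring case when detecting repeats) are assumptions.

```python
# Hypothetical sketch of the consolidation step: feedback remarks from each
# of a participant's classes are merged into one de-duplicated list.
def consolidate_feedback(class_documents):
    """Merge per-class remark lists into one list per participant,
    dropping verbatim repeats (whitespace- and case-insensitive)."""
    seen = set()
    consolidated = []
    for remarks in class_documents:       # one list of remarks per class
        for remark in remarks:
            key = remark.strip().lower()  # treat trivially different copies as repeats
            if key not in seen:
                seen.add(key)
                consolidated.append(remark.strip())
    return consolidated

classes = [
    ["Cite your sources in APA format.", "Strong thesis statement."],
    ["Cite your sources in APA format.", "Expand your analysis here."],
]
print(consolidate_feedback(classes))
```

A pass like this preserves each distinct remark once, in the order first encountered, which mirrors producing a single consolidated document per participant.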
Once the researcher received Institutional Review Board approval, the informed consent letters were emailed to participants; these will be maintained for 5 years and then destroyed. The researcher obtained site authorization in August 2017 and Institutional Review Board approval on January 9, 2018. The data collection process began in January 2018, following these approvals.
It was not known how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instruction. The first research question was:
R1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their reflective thinking?
The data themselves must be examined for patterns and meaning. Numerous computer
programs can assist with the manual labor involved in managing the data (Yin, 2014)
but it takes a human touch to effectively interpret the data and to deduce meaningful
patterns. Whereas quantitative research objectively separates the researcher from the
data, researchers themselves have a direct role in qualitative data (Golafshani, 2003). For
this study on the perceived influence of feedback practices on teacher reflection (and, ultimately, instruction), the researcher engaged directly with the data.
The first source of data was the faculty questionnaire, which participants completed
while grading three assignments. The online classroom served as the artifact to address
the second research question, and the focus group was the third source of data. The
faculty questionnaires were completed by the participants, and these data were already in
text format. The video classroom feedback was transcribed manually to allow for the
removal of confidential student identifiers, and to analyze the feedback remarks for
coding. Both of these sources of data were thematically analyzed, with codes assigned
and major and minor themes attributed to the data. The focus group of 10 of the 12
participants was recorded through Zoom, an internet conferencing tool, and the meeting
contributions were transcribed to again allow clarity and accuracy in deriving common
themes. These sources of data served to effectively address the study’s three research
questions.
The data sets were analyzed using open coding and the thematic analysis
process. Coding is a method of reduction that allows the researcher to establish priorities
and to focus among the data (Vaughn & Turner, 2016). Yin (2014) identifies this
analytical strategy as a method that allows the researcher to examine themes that emerge
when working with data. This method allowed the researcher to examine data trends across the three sources.
Transcribing the focus group discussions and classroom feedback into text
allowed for a careful and consistent analysis. As the codes emerged, the researcher
attributed themes and prepared tables to show the descriptive data. Codes then became
themes when they appeared frequently. For example, if more than one questionnaire
response stated that time management was a factor in their feedback reflections, that
initial code emerged as a theme. There were extraneous data that did not address the specific elements of the study. The researcher interpreted the data and, in accordance with Ganapathy (2016), remained flexible enough to change course when the data dictated it. When clear patterns emerged, these were displayed in a table so the themes could be reviewed clearly.
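The rule by which codes were promoted to themes (a code recurring across responses, as in the time-management example) can be sketched as a simple tally. This is an illustrative sketch only; the function name and the threshold of two occurrences are assumptions drawn from the "more than one response" example, not the study's documented coding protocol.

```python
from collections import Counter

# Hypothetical sketch of the open-coding step: each coded segment is
# tallied, and codes that recur across responses are promoted to themes.
def promote_themes(coded_segments, threshold=2):
    """Return the codes appearing at least `threshold` times,
    in order of first appearance."""
    counts = Counter(coded_segments)
    return [code for code, n in counts.items() if n >= threshold]

codes = ["time management", "student tone", "time management",
         "rubric alignment", "time management", "student tone"]
print(promote_themes(codes))  # codes appearing at least twice
```

In practice a human coder applies judgment beyond raw frequency, but the tally captures the reduction step: many coded segments collapse into a short list of candidate themes.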
The researcher expected both the quantity and quality of data to be sufficient for
the study. First, more than ten participants were recruited to allow for any attrition. Next,
the faculty questionnaire comprised 12 questions, and faculty were asked to
complete this questionnaire while completing three of their assignments. This yielded a
maximum of 120 responses about their reflective practices and their feedback. The
classroom analysis included examining four classes, and four assignments within those
classes for each instructor. Finally, the focus group of ten participants was recorded and transcribed, providing a third data set for analysis.
Finally, the three sources of data were considered and compared. Baxter and Jack
(2008) note that a common error in data analysis is to treat the sources of data
separately; instead, in a case study, these data may converge, so the researcher must
remain open to how the coding might dictate the themes. The focus group, as a third data source, served to triangulate the findings, reducing bias and improving the validity and reliability of the findings (Golafshani, 2003).
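The convergence of the three data sources can be pictured as checking which themes are corroborated by more than one source. This sketch is purely illustrative; the function name, the sample themes, and the rule that two of three sources suffice for corroboration are assumptions, not the study's stated criterion.

```python
# Hypothetical sketch of triangulation: a theme is treated as corroborated
# when it appears in at least `min_sources` of the three data sources.
def corroborated_themes(questionnaire, classroom, focus_group, min_sources=2):
    sources = [set(questionnaire), set(classroom), set(focus_group)]
    all_themes = set().union(*sources)
    return sorted(t for t in all_themes
                  if sum(t in s for s in sources) >= min_sources)

q = {"time management", "surface-level remarks"}
c = {"surface-level remarks", "critical thinking"}
f = {"time management", "critical thinking"}
print(corroborated_themes(q, c, f))
```

The actual comparison was interpretive rather than mechanical, but the set view conveys why multiple sources reduce the chance that a theme reflects a single participant's idiosyncrasy.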
Ethical Considerations
The general nature of this study did not present specific ethical concerns.
Because online full-time university faculty were not evaluated on their feedback quality,
there were not ethical concerns of job security or negative consequences associated with
their feedback or reflective practices being evaluated. Instead, the researcher made it
clear that the study’s purpose was to examine how faculty perceived the influence of
their instructional feedback practices on their reflective thinking, and hence, their instruction. The study’s exploratory nature did not result in negative consequences or responses. As with any study, any
careless management of data or insufficient amounts of data could jeopardize the results.
Therefore, this researcher made every effort to carefully manage the data, to follow the
protocols established, and to ensure that the data were significant enough to reveal
meaningful results.
For this study, IRB approval and site authorization was obtained prior to any
phase of data collection. An informed consent document was also provided for the
participants. Yin (2014) stated that informed consent is a critical step to formally solicit
volunteers and to alert participants to the nature of the study. This informed consent
document provided disclosure that participation in the study was voluntary and that
participants could have withdrawn at any time. These documents are included in the appendices. Participants must also be protected in terms of their privacy and confidentiality. Yin (2014) stated that
participants should not be put in an undesirable situation, such as being added to a list
for future studies. To ensure this protection, no participant names were attached to any
codes or themes. Instead, participant responses were identified by numbers. In addition to protecting participants, care was taken to protect students. When classroom feedback was gathered and video feedback was transcribed,
student names were removed to avoid any student identifiers. All collected data will be
stored at the higher educational institution and will be destroyed after 5 years.
As with any study, there are potential conflicts of interests among participants.
For example, because this study explored the perceived influence of feedback practices on reflection and instructional strategies, it is possible that those who volunteered may have had a special interest in the
subject or may have conducted their own study. By informing participants of the study’s
outcomes, they might have gained applicable insights to fortify, maintain, or improve
their own connections between feedback reflections and instruction. Every effort was made to avoid bias in the study, including triangulating the data and taking rigorous measures in data collection. A further consideration involves the researcher’s relationship with the full-time online faculty. The researcher is the faculty chair for full-time online English
faculty at the host institution, so those faculty members were not invited to participate.
Participating faculty did not directly report to the researcher, but they knew the
researcher in a professional setting and were aware of the supervisory rank, which is acknowledged as a limitation.
Another limitation is that this supervisory position could generate the possibility that participants responded in a certain way, based on that affiliation. Although the researcher knew the target
population in a professional capacity and they were aware of the supervisory rank, the
sample was made up of faculty who responded at random and met the criteria, and at no
time during the study or in the past did the researcher influence the promotion, evaluation, or employment status of the participants. The researcher selected the classroom papers for review. This was necessary because there had to be an alignment between the faculty
who volunteered to participate and their concomitant assignments that were graded.
Inherent in this procedure was the selection of assignments of 300 or more words in
length and these assignments aligned with questionnaire data, classroom data, and focus
group data.
This researcher attempted to offset potential bias. First, any outlier comments in the data were reported as contrary information (Yin, 2014). Secondly, a case researcher must keep an open mind. Being
deeply familiar with the subject matter can potentially create bias (Yin, 2014). In this
study, the researcher is familiar with feedback and its formative value, yet every effort
was made to remain open and adaptable to other perspectives and methods. A third
intention to offset bias lies in the researcher’s role with the institution. Because of that supervisory role, the researcher is aware that for this study, the objective was not to evaluate feedback
based on the same criteria, but to explore it relative to how faculty reflect and to
understand how and if selected methods contributed to reflections and ultimately, how
instruction is shaped.
Following the principles set forth in the Belmont Report (1978), the data from this study will be saved on a flash drive and will be locked in the researcher’s desk. The research data will be kept for 5 years and will then be destroyed.
Qualitative case studies have potential limitations that if not properly addressed,
could jeopardize the integrity and accuracy of the study. Limitations common to case
studies include methodology, instrumentation, sample size, and time limitations (Yin,
2014). This was a qualitative study and, by nature, had limitations inherent in its non-quantitative method. However, a qualitative method remains the most logical way to address the research questions of this study and to fully explore the contemporary phenomena of online faculty feedback and reflection.
In this study, the sample group could have posed a possible limitation, because
this study was conducted at a private higher educational institution in the southwestern
United States. This institution has a forward-thinking model that employs full-time
faculty at the campus facilities for online instruction. Instructors facilitate four courses at
a time, hold office hours, and are responsible for student communication outside the
classroom, including student calls and emails (DeCosta et al., 2015). The online full-
time faculty also interface with the traditional campus faculty, participate in curriculum
revisions, collaborative research, and campus functions. These attributes provided rich
data sources for faculty feedback practices; however, many institutions employ adjunct
faculty for their online delivery (Schieffer, 2016). Therefore, because the participants are full-time faculty, the results may be less generalizable to higher education models where online courses are taught primarily by adjuncts. Because the online undergraduate full-time faculty work together, there may be less diversity in their responses and in their feedback or reflection philosophies. To mitigate these limitations, careful attention was paid to the details of the data analysis, and the triangulation of data enhanced the trustworthiness of the study.
The researcher has been a part of the online full-time undergraduate faculty for more than seven years
and is therefore acquainted with their practices, methods, and philosophies. This
knowledge could lead the researcher to make basic assumptions about how faculty
conduct feedback. However, data collection was limited to first-year-sequence courses with assignments of more than 300 words in length. In addition, the researcher reviewed courses that were closed, yet current, so that the classroom artifacts would most closely resemble the data from the questionnaires. These parameters did not allow the researcher to select papers based on personal bias during the data collection.
For the data analysis, the researcher examined the feedback with the a priori themes of
Hattie and Timperley (2007) as a guide. Having a theoretical model, such as Vygotsky’s (1978) ZPD, kept the data analysis anchored to the research questions rather than to researcher assumptions. Another consideration was the researcher’s supervisory role within the division. This supervisor role could have led to conscious or unconscious coercion, as participants may have felt pressure to provide less authentic data, due to their knowledge of the researcher’s role. However, faculty participating in this study did not directly report to the researcher, nor did the researcher at any time influence their employment status in any way. In addition, there was no promise of monetary or other reward for participation.
The researcher accessed the classroom artifacts through her supervisory access and did not use institutional oversight in gathering these data. Convenience sampling was used to create an alignment between the volunteer participants and their corresponding classroom artifacts.
Because the research questions had boundaries, there were delimitations to the study. One delimitation was that the population was the online undergraduate full-time faculty at a single institution; a broader population would provide results that might be more diverse and, therefore, more meaningful. Thus, the lack of input from this broader population restricts the data. However, the researcher applied rigor in establishing codes and themes, and triangulated the data to strengthen the findings.
Summary
Chapter 3 described the design of the study about online undergraduate faculty
feedback reflective practices at a higher educational institution. For this study, the
problem was that it was not known how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategy.
There are studies on faculty feedback methods and their effectiveness from
student perspectives (Ali, 2016; Atwater et al., 2017; Borup et al., 2015), studies that focus on feedback’s effectiveness (Ellis & Loughland, 2017; McCarthy, 2015; Mirzaee & Hasrati, 2014; Nicol et al., 2014; Nicol & Macfarlane-Dick, 2006), and studies on the importance of reflection for faculty and its learning outcomes (Bennett et al., 2016; Bond, 2011; Falender et al., 2014). However, research was needed to explore how faculty themselves perceive their feedback practices and reflections; a greater understanding of these reflections and practices could improve student learning and satisfaction.
Recruitment was limited to the online undergraduate full-time faculty at the higher education institution in the southwestern United States. The researcher worked with the Director of Online Instruction, who sent out the recruiting email requesting participation from those currently teaching first-year courses; the eligible courses, chosen for their assignments of 300 or more words, were listed. Due to the researcher’s supervisory role, the 14
English instructors were not included on this distribution list. Participants then
completed a questionnaire on their feedback, and classroom feedback was reviewed for delivery methods; the coded and themed data were then presented to a focus group, which represented the final stage of data collection. Additional data obtained from the recorded focus group were added as a data source, thus creating the triangulation of data.
Chapter 4 will include the results from the data collection and analysis. The themes will be presented in visual and descriptive terms so that the results can be appreciated and understood by other institutions that may want to replicate this study or build upon its findings.
Introduction
The purpose of this qualitative exploratory single case study was to examine how online undergraduate full-time faculty perceived the influence of their instructional feedback practices on their reflective thinking, and hence, their instructional strategy at a
southwestern higher education institution. This study explored the intersection between
faculty feedback and their reflections to determine the impact on teaching. Feedback can
serve as formative instruction (Hattie & Timperley, 2007). Online instructors provide
feedback through text, audio, or video methods (Borup et al., 2015). These delivery
choices may be influenced by several factors, including efficiencies (Planar & Moya,
2016), but it is not known how these delivery methods might influence reflection or how
these reflective practices affect instruction. The data addressed the following research
questions:
RQ1: How do online undergraduate full-time faculty perceive the influence of their instructional feedback practices on their own reflective thinking?
RQ2: What methods of feedback do undergraduate full-time online faculty use at a southwestern higher education institution?
RQ3: How do online full-time faculty perceive the influence of reflective thinking on their instructional strategies?
This study used a qualitative method with a single case study design. The
research design connects the data and its conclusions to the study’s questions (Yin,
2014). For this study, the single case design was selected in order to fully describe and
explain the phenomenon (Yin, 2014). The researcher collected data from three sources
and reviewed, analyzed, and coded the data to produce resulting themes. The three data sources were faculty questionnaire responses, feedback from classroom papers, and focus group transcripts. The triangulation of these data supported the development and salience of the case. Each of the three research questions was supported by one or more of these sources, as the table below shows:
Table 1.

RQ2: What methods of feedback do undergraduate full-time online faculty use at a southwestern higher education institution?
    Questionnaire: Questions 5-8
    Classroom feedback: Analyzed for theory and paper elements; video and text feedback compared.
    Focus group: Confirming or denying data
Thus, different elements of the data sources supported the research questions. The questionnaire was designed to address all research questions, and the paper feedback served as an artifact for analysis for two of the research questions. The data from the focus group served to confirm or deny the emerging themes.
For this study, the sample population was the full-time online undergraduate faculty at the institution. Thirteen online undergraduate faculty agreed to participate in the study, with 12 who fully participated, providing evidence from all data sources. Ten of the 12 participants
attended the focus group session. The data sources included a faculty questionnaire
about their reflections during grading (see Appendix F), feedback from faculty papers,
and a focus group transcript. Each data source addressed the research questions, resulting in themes that connected back to those questions, consistent with the case study design (Yin, 2014). The coding process will be explained later in this chapter.
Participants were asked to complete the questionnaire as part of their grading, focusing on four student papers for each question. Thirteen participants sent in questionnaires, but one was not completed. Thus, 12 questionnaire responses were analyzed and codes were developed,
which resulted in themes. These themes were then presented to the faculty participants
in a focus group, where faculty confirmed and expanded upon the themes. The last data collected came from this focus group, where participants reviewed the themes from the other data sources: the questionnaire responses and the classroom feedback analysis.
The chapter’s findings will include data on the demographics of the faculty participants. In addition, the chapter will show explanations of data gathered from the questionnaires, classroom feedback, and focus groups. This section will also present the researcher’s data analysis procedures.
Descriptive Findings
The descriptive findings of this study will be presented in the following section.
This study explored the perceived influence of feedback practices on teacher reflection
(and subsequently, the influence of reflection on instructional strategy). The data sources
for this study were faculty questionnaires, classroom feedback, and focus groups, which
addressed the three research questions in the study. The population studied was online full-time undergraduate faculty. At the host institution, full-time online faculty teach both graduate and undergraduate courses and work in a collaborative space on campus. Students are assigned full-time instructors for their first-year sequence courses, and may then be
assigned courses with either adjunct or full-time faculty for subsequent courses. The
institution is intentional about providing first-year students with full-time, rather than
adjunct instructors. The online full-time faculty report to campus offices, collaborate, and share practices and uniform expectations. Because of this collaboration and attention to first-year students, the researcher wanted to explore this particular population, which served as the target population for this study. The participants in the target population work together in a collaborative space at the host campus. The full-time faculty teaching load is typically four courses at a time, although substitutions or shifting enrollment can cause these numbers to vary. These faculty teach courses in
specialized subject areas, but may also be assigned first-year sequence courses. Of the
150 full-time online faculty, 27 are tier instructors, meaning that they teach higher
volumes and are dedicated to one first-year-sequence course. Tier instructors’ student
loads are between 120 and 150, and these instructors are provided instructional assistants who help with objective grading. Unlike the non-tiered instructors, the tiers are not assigned to substitute for other instructors, in order to keep their teaching load manageable. Five of the study’s participants were tier instructors.
The researcher obtained permission from the director of the online division, who distributed the recruitment email to the online full-time faculty division. To avoid study bias, fourteen English instructors were excluded, because they work closely with the researcher. The letter listed the eligible first-year sequence courses. These courses were selected because the study was limited to exploring the first-year sequence, whose courses contain written assignments of 300 words or more, which would allow analyses of rich feedback data.
The primary method of communication with participants was through email. The
researcher worked with the director of online full-time faculty who sent the first letter
asking for participation on January 12, 2018. Four participants responded on the same day; the same message was then sent a week later, which resulted in seven additional responses. The last participant returned the questionnaire by February 21, 2018. In total, 13 questionnaires were received, but one participant did not complete the survey, so the data analyzed and presented are from 12 participants.
Participants provided informed consent letters to demonstrate their participation in the study; twelve participants completed the questionnaires and ten attended the focus group session. All agreed to participate via email, except one who verbally agreed but then confirmed in a reply to a follow-up email. Participants were asked to complete the questionnaire during their grading, since the questionnaire was designed to probe their reflective thoughts while grading in their daily routine. Requesting that the questionnaire be completed
during the work hours ensured that the phenomenon was in its natural setting (Yin,
2014). A requested completion date was provided, but prior to the requested deadline,
the researcher sent follow-up emails to check in on participants and offered to answer
any questions. Two check-in emails were sent to one participant. All questionnaires were returned by the faculty participants.
The questionnaire also gathered background information about the faculty, including their years teaching in higher education and whether the course they used for
the questionnaire was included as part of their regular course load. For all participants,
the courses addressed on the questionnaire reflected what they were most often assigned.
Undergraduate online faculty are expected to teach first-year sequence courses, as those
courses have the highest enrollment and faculty benefit from understanding the first-
year classes. Many online full-time faculty members have specialties in other
disciplines, such as math or psychology, and faculty loads may be a combination of these subjects and first-year courses. The participant group comprised five male participants and seven females. The average online teaching experience in higher education for male participants was 6.3 years and was 5.7 years for participating
females. For the purpose of maintaining the institution’s anonymity, these first-year
sequence courses were assigned pseudonyms so the course titles represent the general
subjects, but are not official course names. All participating males taught a first-year
critical thinking course. Females taught courses in critical thinking, philosophy, religion,
and an introductory success course for education majors. The table below shows the participant demographics.
Table 2.
The table shows that the majority of participants were female, with the critical thinking
class taught the most among the participants. All male participants taught the critical
thinking course, and two were tier instructors. Two females taught critical thinking, with
one instructor being tiered. Two female instructors taught the education class; one of
them was tiered and a fifth tiered female instructor taught the course in philosophy. The
remaining female participant taught a course in religion. The survey showed that the
instructors responded to the questionnaire based on the class they teach most frequently.
Thus, these instructors, with tiers included, have depth in their knowledge of the course
and feedback methods. The host institution implemented the practice of hiring full-time
faculty for first year course sequence in 2010, so the teaching experience average
indicates that some participants have been part of this model for most of its duration.
However, this also shows that the participants’ online teaching experience largely coincided with the lifespan of this model.
The questionnaire was designed to capture reflective thoughts that take place
during faculty grading and was created based on Ryan and Ryan’s (2013) reflection
levels. As a result, the questions were structured within these four levels: Reporting and
Responding (1), Relating (2), Reasoning (3), and Reconstructing (4) (Ryan & Ryan, 2013). Faculty answered four questions for each of the three research questions, and this was done for four papers they graded. Since the 12 faculty participants were asked to respond to four papers for each question, the questionnaire responses yielded faculty reflections on 48 student assignments (12 participants x 4 papers), with raw data on a total of 576 paper responses (48 assignments x 12 questions). Because the questionnaire was open-ended, the page length varied, but
the average completed questionnaire length was 8 pages. Thus, approximately 96 pages
of text were reviewed to determine the codes and themes of the questionnaires. The table
below shows resulting themes related to paper concerns and how instructors respond to them.
Table 3.
Table 4.
Table 5.
themes. The Level 1 question related to the first research question revealed that the most common concerns were development, formatting, mechanics, the thesis statement, word count, and deviation from the third-person narrative. Similar themes emerged from text and video responses, and the instructors who graded using video comments expressed comparable concerns.
Table 6.

Development   34/6
Formatting    27/4
Mechanics     15/3
Thesis        7/2
3rd Person    3/2
Word count    4/2
Each theme above is paired with values reflecting the frequency of the paper’s concern. There was a relationship between the paper concerns cited and the frequency of their concern; that is, if faculty perceived development as a concern with one paper, they also reported that this concern was common across student papers.
Level 3 and 4 questions showed that faculty addressed these concerns most often by
creating their own resources and that they would continue with this practice. Both video
and text participants reported differences in how they responded to these concerns and
how they would respond in the future. More details will be examined later in this
chapter.
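The tallying behind theme counts like those in Table 6 can be sketched in a few lines. The participant codes below are invented for illustration and are not the study’s actual data; the sketch simply shows how total mentions and the number of participants citing each concern can be counted.

```python
from collections import Counter

# Hypothetical coded responses: each participant's list of paper-concern
# codes extracted from questionnaire answers (illustrative data only).
coded_responses = {
    "P1": ["development", "formatting", "mechanics"],
    "P2": ["development", "formatting"],
    "P3": ["development", "thesis"],
}

# Tally total mentions of each concern across all participants.
mentions = Counter(code for codes in coded_responses.values() for code in codes)

# Count how many distinct participants cited each concern.
participants_citing = Counter(
    code for codes in coded_responses.values() for code in set(codes)
)

for theme in sorted(mentions, key=mentions.get, reverse=True):
    print(f"{theme}: {mentions[theme]}/{participants_citing[theme]}")
```

A pairing such as "development 3/3" in this toy data mirrors the mentions/participants shape of the tabled values.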
The second research question was addressed in the next four questionnaire
responses (questions 5 through 8). Addressing the Level 1 question, two of the 12
participants stated that they used video feedback on the longer assignments and the
remaining 10 used text feedback. Text participants’ responses indicated that time and
efficiency were the most important factors for their selection, while video participants
first cited student needs, followed by responses indicating time or efficiency. When
responding to the Level 3 question, those who gave feedback through text cited student
need as the third consideration for this choice, indicating stronger preferences for using
text delivery based on familiarity and time management. This differs from the two
participants who provided video feedback, as they cited student needs first, followed by
time and efficiency as their secondary factor. Both video and text groups indicated they
would continue their practices of either text or video. These responses and their
significance to the study will be examined in greater detail later in this chapter.
The last set of questionnaire responses (9-12) addressed the third research
question. The first reflection level was designed to determine if faculty would change
their teaching practices based on their reflections while grading. For text participants,
“None” was the most common response, followed by instructor-added resources. Video
participants indicated that they would consider adding additional technical tools to other
course areas. Those giving text feedback also stated they might try technology in other
areas, but this was the third most frequent response for them. The Level 2 question was
about how the paper concern might have connected to a classroom element. The
instructors who provided video feedback reported that their own feedback covered the
concerns while instructors who gave text feedback noted that the student concern was
more often covered in class discussions, assignments, or resources already in the class.
The researcher examined classroom papers as the second data source to address
the research questions. For each class, assignments requiring 300 or more words were
examined. Papers of this length were selected because they generally yield more robust
feedback and instructors opted to provide feedback directly on the paper, which resulted
in a greater range of feedback comments and communication. All classes examined were
seven weeks long with two to four assignments that met this 300-word assignment
requirement. For each qualifying assignment within the class, four papers were
examined. The table below shows the number of qualifying papers by participant.
Table 7.
Philosophy          4   3   48
Religion            1   4   16
Critical Thinking   5   2   40
Intro to Ed         2   3   24
In total, 128 assignment artifacts were examined. Text feedback was given either
through a comment box provided by the learning management system, or attached with
embedded or sidebar comments utilizing Microsoft Word’s Track Changes feature. The
host institution’s learning management system will not process a grade unless instructors
provide an accompanying remark in this comment box that is part of the gradebook;
however, many instructors opted to put in a prepared statement that is copied and pasted,
while individual and formative feedback was provided by comments on the attached
student papers. Thus, for this study, the feedback analysis focused on the sidebar and embedded comments on the papers rather than on the gradebook comment box.
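Assuming four papers were examined per qualifying assignment, as the surrounding text describes, the two leading values in each row of Table 7 multiply out to that row’s paper total; a quick sketch verifies this and the 128-artifact sum. The row data are copied from the table, but the interpretation of the first two columns is an inference.

```python
# Each Table 7 row lists two counts and a paper total. Assuming four papers
# were examined per qualifying assignment, the product of the two counts
# times four reproduces each row's total.
table_rows = {
    "Philosophy":        (4, 3, 48),
    "Religion":          (1, 4, 16),
    "Critical Thinking": (5, 2, 40),
    "Intro to Ed":       (2, 3, 24),
}

PAPERS_PER_ASSIGNMENT = 4

for course, (count_a, count_b, paper_total) in table_rows.items():
    assert count_a * count_b * PAPERS_PER_ASSIGNMENT == paper_total

# The row totals sum to the 128 assignment artifacts reported in the text.
total_papers = sum(total for _, _, total in table_rows.values())
print(total_papers)
```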
The process of collecting classroom data was thorough and systematic. First,
four classes from each participant were chosen. Within these four classes, the researcher
observed four papers from qualifying assignments. Qualifying assignments were those
with papers 300 or more words in length. For most classes, between two and four assignments qualified.
The data were managed by creating one consolidated document for each
participant. Originally, there was one Word document for each class examined, resulting
in four documents per participant. On average, these documents were seven pages in
length, before they were consolidated. Reviewed assignments ranged from top scoring
to failing papers. Managing the data involved consolidating the instructor feedback from the four separate documents into one. This
process necessitated establishing an “anchor” paper. Using four computer monitors, the researcher captured and tallied common comments from the papers and then consolidated them on the anchor paper. This allowed the data from each instructor to be captured on a single document and also revealed which comments were repeated.
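The anchor-paper consolidation described above amounts to tallying repeated comments across one participant’s class documents. A minimal sketch, with invented comment strings standing in for actual instructor feedback, might look like this:

```python
from collections import Counter

# Hypothetical sidebar comments pulled from four class documents for one
# participant (illustrative strings, not actual study feedback).
class_documents = [
    ["Add a thesis statement.", "Cite your sources."],
    ["Cite your sources.", "Avoid first person."],
    ["Add a thesis statement.", "Cite your sources."],
    ["Expand this paragraph."],
]

# Consolidate all feedback into a single tally, mirroring the anchor-paper
# approach: each distinct comment appears once with a repetition count.
anchor = Counter(comment for doc in class_documents for comment in doc)

for comment, count in anchor.most_common():
    print(f"{count}x {comment}")
```

The `most_common()` ordering surfaces the most frequently repeated comments first, which is the information the anchor paper was built to reveal.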
Video feedback data were also reviewed and analyzed. Two participants
provided video feedback on their student papers. For the purposes of this study, video
feedback refers to the instructor’s voice narrative and a live view of the instructor using
the cursor and track changes feature to either edit, teach, or comment on the student
work. Several web tools offer methods for faculty to give feedback. One tool is Screencast-O-Matic, which allows the student paper to be captured on the screen, while the
instructor delivers narrative feedback. This tool allows the student to view the instructor
performing live editing through video while listening to the instructor narrative. Loom, a
newer technology, offers the additional feature of a small video window of the
instructors during the narration. However, in this study, the instructors who selected the
video grading option did not utilize the video window. The researcher reviewed and transcribed each video recording.
The data from the video feedback were transposed to text for comparison
purposes. The video feedback was manually transcribed, so that the feedback and
instructor remarks could be examined in the proper context. The data from reviewing
four student papers from three different assignments yielded approximately 28 pages per
instructor, but the page amount was not a valid measure of volume, since feedback was
pasted line by line to assist with the analysis. In addition, transcribed audio feedback
was single-spaced from the manual transcription, so its page length compared to text
feedback was not valid. Therefore, the average instructor word count per assignment emerged as a meaningful way of collecting data. These data show the participants, their courses, and their average word counts per assignment.
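Computing an average word count per assignment, the measure adopted here, is straightforward. The feedback strings below are hypothetical stand-ins for the transcribed comments, not feedback from the study.

```python
# Hypothetical feedback samples for one instructor, one string per assignment.
# Word count is used instead of page length because transcription spacing
# made page counts incomparable across text and video delivery methods.
assignment_feedback = [
    "Strong opening but the thesis needs a clearer claim.",
    "Good sources; watch comma splices in the second paragraph.",
    "Formatting follows APA, though the conclusion repeats the intro.",
]

word_counts = [len(feedback.split()) for feedback in assignment_feedback]
average_words = sum(word_counts) / len(word_counts)
print(round(average_words, 1))
```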
A focus group formed the third data source. This focus group was held on March
27, 2018, at 3:00 PM. Participants were sent invitations through Microsoft Outlook’s
calendar feature. The session was conducted through Zoom, which allowed remote
attendance and video recording for future transcription. The discussion began at 3:00
and concluded at 3:48. The transcript of the session was 3424 words. The original
recruitment letter requested participation in the focus group; therefore, all participants
were invited to attend, but one could not attend, and one had to leave early. Participants
were assured that their involvement was confidential and that their contributions would
be shared only with the research committee. However, as the participants work together
and use Zoom in other capacities, there was no assurance that they would not recognize
each other’s voices or logins. Participants were also asked permission to record the
session and all verbally agreed. Since these participants work together as faculty, they
were already acquainted with each other before the session. However, their anonymity
was maintained through the presentation of descriptive data with no names associated with the results. The session was recorded on Zoom and transcribed
for analysis purposes and for the triangulation of the data (Yin, 2014). The results will be discussed later in this chapter.
Through a PowerPoint presentation, the focus group was presented with the two
sources of data, which were the questionnaire and feedback themes. The feedback
themes were analyzed two ways, according to classroom paper issues and levels
according to Hattie and Timperley’s (2007) four levels of feedback. The participants
largely agreed with the first data set from the questionnaires, which showed the six themes of paper concern. Three participants expressed surprise when viewing the results, stating that they expected instructor concern over the thesis statement to be more frequent. However, another participant pointed out that not all classes emphasize the thesis statement, and several participants expressed agreement. The participants agreed with the remaining themes.
The focus group participants confirmed most findings from the questionnaire
responses. Participants agreed with the data results for the second and third set of
questionnaire responses. These responses were aligned with the second and third
research questions, respectively. For the second research question, nine themes were
presented over four different questionnaire responses, and for the third research
question, 12 themes were presented on how the practices inform instruction. All
participants confirmed the findings and did not disagree with these results.
The last data presented to the focus group were the classroom data that were analyzed according to Hattie and Timperley’s (2007) levels. The researcher explained
that the feedback was analyzed for both the paper issues and for how the feedback
corresponded with Hattie and Timperley’s (2007) levels. The color coded display on two
bar charts provided a visual and the theme colors were explained (see Table 15). The
participants first observed that the theme of critical thinking was present only in the text feedback, while formatting figured more prominently in the video feedback. Some participants then offered thoughts about why formatting might have more prominence in video rather than in text feedback. These implications will be examined later in this chapter.
The following sections describe the analysis of the faculty feedback reflections. Qualitative analysis can be difficult because its techniques are not
well defined (Yin, 2014). Nevertheless, qualitative research is valuable in its ability to
provide a rich means of capturing the complexities of human behavior and descriptive
data can provide a deeper understanding of the phenomenon (Vaughn & Turner, 2016).
Moreover, qualitative research analysis can exhibit high quality if the researcher attends
to all the evidence, addresses rival interpretations and the most significant aspects of the
case, and uses expert knowledge from the researcher’s prior experience (Yin, 2014). The
following data presented demonstrate these marks of quality through a careful and systematic analysis.
Thematic analysis was the best procedure for analyzing this study’s data. Yin
(2014) posited that analyzing case study evidence must demonstrate rigor and a
systematic description of how the codes and themes were developed for each research
question. Thematic analysis has been defined as the search for emerging themes that become the categories of the analysis procedure. According to Clarke and Braun (2013), thematic analysis offers a “theoretical flexibility” (p. 120) that is advantageous to connecting the data to the theory associated with the phenomenon. This flexibility allows the researcher to focus on the data within the study’s theoretical framework (Clarke & Braun, 2013). Thematic analysis also allows the researcher to
effectively examine different types of data (Clarke & Braun, 2013). Finally, thematic
analysis is a viable procedure for what Clarke and Braun (2013) called the “messy
realities” (p. 122) of qualitative data. For example, the questionnaire instructions stated that the participants should respond to four different papers for each question. Some participants provided four different responses for each question, while others responded to only one, or copied and pasted their responses. Thus, because responses varied among the 12 participants, a combined inductive and deductive framework was employed. This approach allowed codes obtained from the data to be integrated with themes derived from theory.
Some of the study’s themes were established a priori, meaning that existing theory pre-
determined the themes. For example, to address the second research question, a template
based on Hattie and Timperley’s (2007) feedback levels served to predefine the themes,
which allowed the data to integrate with theory (Fereday & Muir-Cochrane, 2006). This template approach anchored the coding in the study’s theoretical foundation.
The next section will discuss the coding process and how themes were
developed. Clarke and Braun (2013) identified six stages for effective thematic analysis
and they include becoming familiar with the data, coding, searching for themes, reviewing themes, defining and naming themes, and writing up. For each research question, the
sections below will discuss the data analysis within this framework.
The first research question was: How do online undergraduate full-time faculty perceive the influence of
their instructional feedback practices on their own reflective thinking? The first data
source that supported this question was the group of responses from the faculty
questionnaire’s first four questions. Faculty participants were asked four questions pertaining to each research question, structured according to a spectrum of reflective levels based on Ryan and Ryan’s (2013) reflective model. Prior to its use, this questionnaire was
validated by three faculty members with three or more years of full-time teaching
experience who provided feedback on the initial questionnaire (see Appendix D). The
input from the faculty who validated the instrument served to clarify the instructions on the questionnaire.
Prior to determining codes, the researcher spent time becoming familiar with the
data. Clarke and Braun (2013) note that this critical process is often skipped in the
analysis of data. Rather than examining the questionnaires one at a time, the initial
familiarization began with viewing all questionnaires. This process was consistent with
Yin’s (2014) “ground up” strategies of poring through data as a way to begin to
identify patterns. Because the questionnaire was open-ended, responses varied in length
and complexity, so the awareness of the variety of responses surfaced through this
familiarization process. Participants were asked to review four papers while responding to each question.
Faculty participants were first asked about the concerns with the paper. Becoming familiar with this data set showed that the open-ended nature resulted in a variety of responses, from short lists to entire paragraphs. The examples here show this range:
Overall content was very good, but there was minimal scholarly support. Only
used two of the required four sources; no library research was done. Consistently
possession. The errors in writing mechanics are unusual for this student – the
This initial examination of all responses, rather than examining each response
individually, showed that a kind of coding system was necessary as a way to capture
common words and to extract main ideas that would eventually create themes. The
actual coding process following this stage of familiarization will be discussed later in
this chapter.
The opportunity for open-ended responses showed similar variations in the
responses to the second question, as shown in these examples. Participant 7 stated, “Not common.” A longer response read:
The primary goal of the course is to have students working to apply
what they have learned regarding critical thinking skills into an objective,
persuasive essay in which they work to persuade others toward the most logical
stance on a debated topic/argument. They must also use their information literacy
skills learned in (Class Name) to help support their statements within the essay
(Participant 1)
These examples showed that although the responses varied in length, only one word needed to be identified as part of the coding process. This review served to gauge the variety in length and complexity of the instructor replies, and so prepared for the coding stage.
The third and fourth questions from this instrument rounded out the alignment to
the first research question. Because participants were asked to review four papers for
each question, some prepared duplicate responses for the third and fourth questions,
indicating that an issue with one particular paper could generalize to other students.
Becoming familiar with this data showed that the third and fourth responses would share
similarities, so the same color coding system could be used to determine the themes.
The second data source was the classroom paper feedback. Reviewing this feedback
formed a necessary foundation for addressing the research questions through a separate
analysis process. The classroom papers addressed
the first research question as they were examined for thematic elements.
To achieve this familiarization process, the papers were examined several times, and the
video feedback was manually transcribed. This manual transcription provided a deeper
acquaintance with this data and created a sense of what was necessary for coding and
what was the best way to manage that process. This preview also helped define important
boundaries. For example, it was clear that faculty provided feedback directly on papers
as well as in the learning management system’s summary box, but the familiarization
process led the researcher to determine that the sidebar feedback would be the focus of
the analysis as that is where the most formative feedback takes place.
The focus group was the third source of data for the study. The familiarization
process initially took place during the focus group through careful listening and
prompting during the session. During the focus group, when one participant spoke on a
certain subject, other participants were prompted to share on the same subject. Careful
listening was thus the first step in becoming familiar with the data. Next, several
reviews of the transcript and different speaker contributions served to prepare for coding.
Research question 1: Coding the data. After becoming familiar with the data
from the questionnaires, the coding process began. Questionnaire responses 1-4 aligned
with the first research question. The first question was about identifying the greatest
concerns with the papers. Using different colors in Microsoft Word’s highlighter feature,
the process included assigning colors to common elements in these responses. It was
important to see that instructors worded their concerns differently, while identifying the
shared core concern. The researcher selected pink to represent concerns about content or
development, yellow for anything related to APA formatting, blue for concerns with
word count, green for anything related to the thesis statement, grey for grammar and
mechanics and olive green for responses that identified voice or the improper
grammatical perspective. Below are two participant responses to demonstrate the initial coding:
The first essay was quite strong. I had very little concerns as this was only the
first draft. I noted a couple of spots within the essay where the student could …
(Participant 1).
The remarks from the first participant were coded pink as they related to content
concerns, while the remarks from the third participant necessitated gray, green, and
yellow coding to address the three issues. As the questionnaire responses were
examined and color-coded, the parameters of the codes came into focus, and paper
concerns were consolidated to establish broader categories, resulting in six themes which
were examined separately according to whether participants used text or video (see Table
7).
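The color-coding procedure described above is, in effect, a mapping from key words to codes. Purely as an illustration (the study coded manually in Microsoft Word, and the keyword lists below are hypothetical stand-ins, not the study’s actual code book), the step can be sketched in Python:

```python
# Illustrative sketch of the color-coding step described above.
# The keyword lists are hypothetical examples, not the study's full code book.
CODE_COLORS = {
    "content/development": ("pink", ["support", "evidence", "development", "reasoning"]),
    "apa_formatting": ("yellow", ["apa", "citation", "format"]),
    "word_count": ("blue", ["word count", "under", "over"]),
    "thesis": ("green", ["thesis"]),
    "grammar_mechanics": ("grey", ["grammar", "comma", "spelling"]),
    "voice": ("olive green", ["first person", "you", "third person"]),
}

def assign_codes(response: str) -> list[str]:
    """Return the codes whose key words appear in a questionnaire response."""
    text = response.lower()
    return [code for code, (_color, keywords) in CODE_COLORS.items()
            if any(kw in text for kw in keywords)]

print(assign_codes("The thesis is unclear and the APA citations are missing."))
```

A response can receive several codes, just as a single questionnaire response could be highlighted in more than one color.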
Nearly every participant comment on this first questionnaire response fit into a
code which was associated with a theme; however, there were some outliers. First,
although participants were asked about concerns with the paper, some instructors
nevertheless provided responses that indicated paper strengths. For example, Participant
2 stated, “For this student, the essay was a significant improvement in quality over …”
and broadened the response to encompass multiple perspectives on the paper. Comments
like these were treated as outliers.
Another important outlier was a concern with the similarity index. Most higher
education institutions use plagiarism-detection software, and the similarity index refers
to a comparison between the student paper and existing publications. When such a
concern is legitimate on a student paper, per the university policy, the instructor would
treat the issue separately, and thus it is considered separate from other elements
involving feedback.
involving feedback. The table below shows how key words formed codes that eventually
became themes.
Table 9.
Key Words That Formed Codes and Became Themes

Key words                            Theme
Word count; Under; Over              Falls Below Word Count – comments about the paper
                                     word count requirement being below or too high.
Narrative; 1st, 2nd, 3rd person;     Third Person Narrative – comments about maintaining
I, you, we, us, they, our;           a scholarly grammatical perspective in third person
Direct address; Audience questions   narrative.
Important; Future; Environment;      Critical Thinking – comments for students that
Legalization (abortion, marijuana)   promote critical thinking relative to the course
                                     content; conversational notes that connect student
                                     work back to content.
The second question responses were tallied and examined juxtaposed to the first
question. This question was designed only to capture instructor perception of papers, so
responses were simply tallied rather than coded.
The third question also supported the first research question. Responses gave
insight into what resources faculty use to alleviate the paper concerns. This question was
designed to capture faculty reflections about resources that might be used in their
teaching. To generate initial codes for this data set, the same color-highlighting process
was used.
Participants sometimes provided the same response for four different papers,
which demonstrated their intent to show how a concern in one paper generalized to their
practices overall. Participants reported that several resources were used to assist
students. For example, Participant 4 wrote, “I refer students to the (host institution’s)
Style Guide and Purdue Owl, as well as various other resources within the classroom. I
also rely on feedback that I provided to them on their outline.” This participant cited a
combination of resources, including the host institution’s writing resource center as well
as other university resources, including the OWL at Purdue University, a web resource.
This participant also indicated that his own feedback from a previous assignment was a
Other participants stated that they created their own class resources to assist
Participant 7 responded with, “Created a video going over the expectations that is posted
in Week 1,” while Participant 13 stated, “Video Feedback through Loom, and 1:1
appointment with student.” These responses indicated that instructors either created their
own resources or drew on existing ones. The responses were categorized in columns with
general headings that later became themes. In
total, there were 20 codes that were later reframed into themes.
The fourth question showed responses similar to the third question. This fourth
question prompted instructors to consider what practices they might change, given the
concerns. The same themes established in the third question responses surfaced, with
the additional reply of “No changes.” Due to the similarity in responses, the same
template was used to assist in categorizing the responses. Participants were asked to
address each question on four different papers. Therefore, for one response, participants
may have four responses that are different, four that were the same, or any other
combination between one and four. Some responses were also left blank. The table
below represents the number of times words within the designated categories were
mentioned for that particular question.
One outlier was present among the responses to this question. Outlier responses
must be carefully considered in thematic analysis coding. Clarke and Braun (2013)
posited that themed data should tell a story; therefore, outlier responses may not
contribute to the consistency of this story. The outlier response is from Participant 5,
who stated, “Theories and best practices are not important to me.” By stating this negatively,
the key words are not meaningful and can therefore not be part of the coding process.
Table 10.
RQ 1: How do online undergraduate full-time faculty perceive the influence of their
instructional feedback practices on their own reflective thinking?

L1: What are the concerns with the paper?
    Development (41), Format (31), Mechanics (20), Thesis (8), 3rd person (6)
L2: How common are these concerns?
    High; L (4), Hi (10); L (3), m (1), h (5); L (3), m (1), h (2); L (3), h (2); H (4)
L3: What resources will assist you?
    Instructor-added (20), Refined FB (17), Existing resources (15), Co-workers (5),
    Past (5), Student contact (1)
L4: What practices might you change, given the concerns?
    Instructor-added (39), Existing resources (18), Student contact (3), No changes (8)
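The tallying behind Table 10 amounts to counting, per theme, how many responses mention that theme’s key words. The following is a minimal sketch of that bookkeeping, assuming hypothetical responses and keyword lists that stand in for (and are not) the study’s data:

```python
from collections import Counter

# Hypothetical per-paper responses; in the study each participant answered
# the same question for four papers.
responses = [
    "Development was weak and APA formatting was off.",
    "Development again; the thesis was missing.",
    "Mostly formatting concerns.",
    "",  # blank responses were permitted and simply not counted
]

# Illustrative keyword lists, not the study's code book.
THEME_KEYWORDS = {
    "Development": ["development", "support", "evidence"],
    "Formatting": ["apa", "formatting", "citation"],
    "Thesis": ["thesis"],
}

tally = Counter()
for response in responses:
    text = response.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        # Count at most one mention per response, mirroring a per-response tally sheet.
        if any(kw in text for kw in keywords):
            tally[theme] += 1

print(dict(tally))
```

The resulting counts correspond to the frequency figures reported in parentheses in Table 10.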
The classroom papers served as the second data source to address the first
research question. Moving from the familiarization of the data to the coding of the
second data source of classroom papers was similar to the process addressed in the first
questionnaire response, as both reviewed paper issues. The student concerns expressed
on the questionnaire also surfaced on the classroom papers examined and the codes were
matched with the themes. For example, instructors used words like “support, evidence,
data, reasoning,” which were associated with the paper’s development. An example of a
development-related comment follows:
You had a good start to the essay, but this was more of a compare/contrast
analysis. Additionally, I do not see any concrete evidence that can substantiate
the argument. Remember, the goal was to present a rational and logical argument
Since the color-coding schema was already established with the questionnaire, this
comment was marked in pink, since it referred to the student’s development. Another
comment from this participant referenced the thesis, so it was marked in green. It stated,
“… presented in the thesis statement and validated with concrete evidence.” This paper
feedback was manually reviewed and coded. Although there is software designed to
assist in a coding process, the researcher opted to manually examine all paper feedback.
Comments did not always fall neatly into categories, and it was necessary to examine all
words and phrases within their context to categorize the phrases that would eventually
become themes. For example, the following
comments were coded the same: “Avoid using ‘you’ in the essay” (Participant 6), “In an
APA essay you should always write in the third person. This means using he, she, they”
(Participant 6), and “Using ‘I’ makes the content less credible and using ‘you’ appears as
though you are assuming elements about the reader” (Participant 4). Although these
comments are stated differently, they all reference the same writing issue of maintaining
the academic third person voice in writing, which was identified by a tan color. This
need for individual interpretation led to the researcher’s decision to avoid software that
may have helped with some mechanical processes but may have missed some nuances.
The coding process revealed a category of feedback that was unique to text
feedback. Some paper comments went beyond the student’s work and served as dialogue
between the learner and the instructor. Although there was general praise attached to this
type of comment, its primary purpose was to promote thinking on the part of the student,
distinct from any of the student’s paper content. For example, “Nice! Evaluation is
an important part of critical thinking. This is what keeps us from assuming something is
true and forces us to consider all angles” (Participant 1). These comments were
identified using red. Unlike the other paper themes, these comments did not represent
paper concerns and were a theme that did not surface on the questionnaire. Participants
using text feedback used these comments as dialogue with the capacity to promote
critical thinking.
The conversational nature of the feedback resulted in paper concerns that were repeated
as a result of the speaker’s narrative style; therefore, using software that might have
tabulated key words would not have sufficiently captured the nuance of the feedback. In
audio feedback,
some of the narrative was superfluous, so for those phrases, the strike-through function
was used.
Um you do a better job in the actual paragraphs explaining it, so you wanna
make sure that you’re clearly illustrating your thesis statement. Okay? And then
finally I come into your conclusion, are you bringing everything together, are
you tying things together, right? Um, you don’t want these two words here. And
you know, you’re bringing it together here and you know, it’s, it’s solid.
you can go into here and then this, so it should look like that, okay? So,
realistically, the biggest thing you have to work on is that thesis, you’re gonna
have to develop a clear thesis that ties directly to those supporting paragraphs
(Participant 9)
This passage shows that during the coding process, some of the narrative did not warrant
a code, so it was crossed out but still visible. Colors were also used to identify the
applicable themes.
For this first research question, the third data source reviewed was the focus
group transcript. In determining codes for the focus group, the transcripts were reviewed
for common language that identified or pointed to the data. The focus group participants
agreed with most of the data presented as well as the themes. Some results, however,
prompted further discussion.
The first set of questionnaire responses was aligned to the first research
question. The participants were shown the themes about paper concerns. To facilitate
coding, the same color scheme that was already established for the paper issues was
used. When a participant made a comment about thesis statements, those comments on
the transcript were marked in green. At this point in the analysis, the researcher noted
that participants expressed divergent or slightly different views on the same theme, so
the objective was to first capture all comments expressed about the different themes.
The themes derived from the questionnaire responses resulted from a careful and
systematic process. The first stage of
moving from codes to themes was in reviewing the questionnaire responses about paper
concerns. The initial codes were condensed or consolidated. For example, terms such as
“APA style” and “Citing” were initial codes that later became the theme of
“Formatting.” Similarly, terms like “Illogical argument” and “Lack of Support” were
initial codes that were eventually assigned to the theme that was named “Development.”
The assigned colors assisted in seeing commonalities among the codes and finding a
broader category to name the theme. All codes were reviewed, consolidated, and
reframed into the remaining themes. The finalized themes were: Development,
Formatting, Mechanics, Thesis Statements, Word Count, and 3rd Person Narrative.
The third and fourth responses on the questionnaire formed the second data set
that addressed this first research question. For these responses, codes were reviewed and
consolidated. Responses such as those below were reframed into a theme called
“Instructor Feedback”: “I
will embed feedback in hopes the student will review and revise” (Participant 4) and
“Typeitin with prompts to encourage the students to go into more details in certain
areas.” Therefore, this participant indicated that the paper’s concerns may prompt
refined feedback. The “Instructor-added” category captured anything the instructor
created for their course in an effort to remediate the student concern. “Existing
Resources” were listed in a separate column and were distinguished from the first
category, because they refer to resources that are built into the class. These might
be paired with feedback as a way to meet the student needs. Any response associated with “Co-
workers” was also listed under a column. Words like “peers, fellow instructors, other
teachers, colleagues” were included in this column and later consolidated into this
theme. Because these full-time faculty work in a shared, collaborative environment, this
response may be more common among this population than among faculty elsewhere.
The themes derived from the paper feedback were created and named to reflect
balance in scope and a working neutral framework that would serve future study
replication (Yin, 2014). The words chosen for the themes reflect common criteria used
to evaluate writing in higher education. Rubrics generally contain these themes, but
often list “Organization” as an evaluation criterion; however, the data for this study
revealed that any comments germane to organization were specifically tied to the thesis
statement, which is why, among these data, “Thesis Statement” was selected as a
theme. Similarly, “Word Count” could be considered part of the paper’s development,
but the data showed that instructors were specific in mentioning word count, frequently
enough to warrant its own theme. The participants seemed to separate development from
word count, as they would also comment on word count when it was excessive. The
theme of “3rd Person Narrative” could also arguably be considered in the categories of
grammar or voice, but participants frequently commented on this writing error, and the
data were not sufficient to show other elements that could be captured under a voice
category, so “3rd Person Narrative” remained a distinct theme.
The themes were named to reflect elements of writing, rather than problems with
writing; thus, both positive and negative comments could be categorized under the theme
names. The comments here demonstrate how both a positive and a negative
comment would fall under the same theme. Participant 1 stated, “This is a biased
statement without credible evidence” and Participant 9 wrote, “So when I go into your
supporting paragraphs, the first thing I’m looking for, do you have an outside source?
You do, so that’s good. Every supporting paragraph should have an outside source.”
These remarks demonstrate different tones of feedback. The first participant uses text to
provide a direct evaluation of where the content is lacking, while Participant 9 uses
video feedback, giving it a narrative quality; both address the development of the paper.
These six themes served two data sets: the responses to the first questionnaire question
and the classroom paper feedback.
Similarly, the data from the third and fourth questionnaire responses were named
as broad and recognizable categories that demonstrated how instructors might approach
student writing concerns. Themes were named broadly enough to reflect approaches that
instructors at other higher education institutions might take. For example, the daily
collaboration among the population of online undergraduate full-time faculty may have
led to sharing practices such as using Typeitin for feedback, but even though this
application was mentioned frequently, this was considered within the broader category
of instructor feedback.
Next, the focus group participants shared their perspectives on the findings.
Participants were presented data on a PowerPoint slide. They first viewed data from the
first questionnaire response, which showed themes of faculty concerns with papers. The
data themes were already established, so the participants were asked to confirm,
disagree, or to expand on the paper issue themes which were: Content, Formatting,
Mechanics, Thesis Statements, Third Person Narrative, and Word Count. Participants
agreed that these themes were common among undergraduate first year papers.
However, the second questionnaire responses showed how frequently the participants
saw these issues. The participants agreed with the first responses, but some were
surprised that the theme “Thesis Statements” had a “low” frequency response in the
data, remarking, “… surprised just to see the perception of concern for thesis being so low listed there, just
because from my experience” (Participant 9), “I have the same kind of automatic – oh,
wow, low for thesis?” (Participant 1), and “I would agree with both of what they said as
well.” Two participants offered possible justification for the low frequency of thesis
concerns:
For the two essays in (religion class) where they write thesis statements, I try to
get them to practice their thesis statements in the discussion leading up to the
paper. And so we go back and forth and I fix them before they submit the papers.
So then they’re better in the actual essay. Because I try to head it off at the pass
I think a lot of it depends on the level you’re teaching at too, because I know
with some of the papers I chose, I teach (Introduction to Ed), so the thesis wasn’t …
that’s part of why there was a mix of ideas there too (Participant 10).
This focus group transcript as a third data source served to confirm the data
themes that aligned with the first research question. The discussion of the frequency of
concern of the thesis statement provided valuable insights into the significance of the
themes and how the paper themes may depend on the class.
In summary, three data sources were used to address this first research question. The
first four questionnaire responses
served as data sets to address the first research question. Coding was achieved through
working with a template with broad topics listed at the top of the page, and adding tallies
to indicate whether the concerns were high, medium, or low. Finally, responses from
questions three and four were added to the template in the same fashion as the first
question.
The second data source that addressed this first research question was the review
of the classroom papers. These papers were reviewed for their classroom feedback. For
each participant, four classes were reviewed, with the feedback themes identified. These
feedback themes were similar to the concerns expressed from the first questionnaire
results about paper concerns. Video feedback was also reviewed, transcribed, and coded.
The third data source for the first research question was the focus group
transcript, which was manually transcribed. For the data related to this first research
question, the group confirmed the findings, but had questions about the thesis statement
and its mixed responses. The discussion that ensued focused on how different courses
may shape these paper concerns.
The second research question asked: What instructional feedback methods do
online undergraduate full-time faculty use at a southwestern higher education
institution? The data sources that addressed this question were the questionnaire
responses 5-8 and the classroom papers, which were analyzed according to Hattie and
Timperley’s (2007) levels of feedback.
The next stage of familiarization was reviewing question responses from 5-8 of
the faculty questionnaire. As some of these questions were closed-ended, it was clear
that those remarks would need to be tallied; for the open-ended items, it was important to
be mindful of synonyms to make sure everything was counted. A cursory review also
revealed that, because the last question in this series had two parts, the responses would
be divided, and capturing what followed the “yes” or “no” responses would be important.
Becoming familiar with the data was an essential first step towards the resulting
themes. For each participant, four papers from three to four assignments were examined,
resulting in approximately 128 pages of feedback to review. Thus, the volume of data
dictated how to approach the coding and how to manage the data. In addition, instructors
provided feedback in either text or video format. Reviewing the differences in these
platforms was another step in establishing the coding and theming process. For example,
an initial review demonstrated that the video feedback word count was much higher than
the average text feedback response. The familiarization showed that the video feedback
consisted of narrative text that was not necessarily part of the feedback, as Table 11
shows.
Table 11.
The length of this video text meant that coding required attention to the paper issues that
were part of the narrative. An excerpt below demonstrates the volume of words used by
a participant delivering video feedback:
“… band and sports but you wanna make sure you’re saying that they’re getting
eliminated or reduced, not just saying that, that they’re just there. You’re gonna
want to make this a little more clear, so realistically, the thesis is gonna be a big
area that you will want to work on just to ensure that it’s very clear, so I can see
you have sub-topics there, but as written, it’s not nearly as clear as it could be”
(Participant 9).
Although some specific paper elements are mentioned, there is superfluous narrative that
would not qualify as part of the established coding. This data familiarization aided in the
coding that followed.
To address the focus group data for the second research question, the recorded
session was reviewed. The participants agreed with the methods used, the factors that
influenced those methods, the expectations, practices, and the responses about the plans
to continue. The researcher told the participants that these consolidated responses were
listed from most to least frequent. Because these themes were already established from
the questionnaire data, the focus group served to confirm them.
Research question 2: Coding the data. A second data set that addressed this
research question came from questionnaire responses 5-9. The first question was:
“Do you use audio, video, or text to deliver feedback?” Because this was a closed-ended
response, there was no coding necessary, but a computation of the responses showed
that two out of the 12 participants used video applications for their feedback. These
participants used a program called Loom, which allows both video and audio platforms.
Both instructors who used video captured the student’s paper in the video window and
used their cursor and typing to annotate the paper while narrating the feedback. Loom
offers users the opportunity for a small video window as the instructor narrates the
feedback, but in this study, neither participant utilized that particular feature. However,
both audio and video functions were used, as the students could view the active cursor
and annotations while listening to the narrative. Thus, this practice is considered video,
because the student’s paper was captured in the student’s view, with editing motions
visible.
The next question was: “What factors influence that decision?” The codes were
captured by distilling the main ideas from these responses. Most responses were brief, so
key words were readily identified. For example:
I have found that students prefer Loom videos and I have received extremely
positive remarks about this form of feedback. In addition it saves me time over
typing the feedback and I am able to provide more feedback that makes sense to …”
The key words from this response indicated the factors of influence were “Student
Preferences,” which resulted in the theme “Student Needs.” Since this participant
mentioned saving time, responses that were similar were consolidated into the theme of
“Time/Efficiency.”
The third question aligned with RQ2 was viewed by participants as either similar
or identical to the second question. The responses were similar enough to compress and
treat as one theme. Vaughn and Turner (2016) discussed how merging can help
analyze common data. Table 12 shows these similarities and the revised coding that
followed.
Table 12.
The last questionnaire response aligned to this research question was about
whether participants would continue with their method or if they might implement new
methods. The first part of this question yielded closed-ended responses that did not
require special coding. However, the second part of this question necessitated showing
how participants’ initial response may have been shaped by the latter part of the
question. Therefore, there was not a clear-cut “yes” or “no” for some participants. Most
participants indicated the circumstances that might prompt change. The responses were
viewed from all participants and were displayed to
show the breadth of the responses. For example, one code was “Yes, with text” meaning
that this participant intended to remain with text feedback and did not plan to implement
video feedback.
The paper feedback was the source of data that addressed this second research
question through the theoretical framework established by Hattie and Timperley (2007).
For this phase of the
study, the themes were determined a priori, meaning that the data integrated with the
pre-determined themes that were linked to theory (Fereday & Muir-Cochrane, 2006).
The first set of questionnaire responses aligned with Research Question 1 and those data
set themes were: Development, Formatting, Thesis statements, Mechanics, Word Count,
and 3rd Person Narrative. These themes were maintained, but were then attributed to one
of Hattie and Timperley’s (2007) four levels of feedback.
The paper feedback analysis also assisted in linking the data to the theory of this
study. The study’s feedback was examined using Vygotsky’s (1978) Zone of Proximal
Development as the framework of formative feedback. Thus, the goal was to categorize
the feedback data to determine whether, by reflecting on their own feedback, instructors
operated within this zone. Vygotsky (1978) posited that human developmental processes
are not always aligned with learning, and that instruction must be structured to challenge
the student into that proximal zone.
In the online classroom, paper feedback does serve the role of instruction. Sadler
(1989) posited that formative assessment takes place when feedback on student work is
used to close the gap between current and desired performance. To be formative,
instructor feedback must meet several criteria. Nicol and Macfarlane-Dick (2006)
pointed to criteria such as using resources, understanding goals, and reacting to external
feedback.
Thus, the goal was to examine the instructor feedback to determine if these
theories were operationalized. The selected theoretical framework was Hattie and
Timperley’s (2007) four levels of feedback. Both the text and video feedback were
examined and categorized according to Hattie and Timperley’s (2007) four levels of
feedback, which are: Feedback associated with the task (FT); Feedback associated with
the process (FP); Feedback associated with regulation (FR); and Feedback associated
with self (FS). These levels served as themes for the second research question and the
codes were determined by the sentence structure and usage of the comments.
Exclamations and adjectives indicated FT; command verbs were associated with FP;
declarative statements or content-area questions were FR; first- and second-person
pronoun use indicated FS.
Table 13.
These syntax patterns served as a guide to ensure that all comments were
categorized into one of Hattie and Timperley’s (2007) levels. It was also necessary to
review all feedback within its context. For example, if an instructor said, “I’m proud of
you” in the gradebook box, that comment was not directly tied to a specific paper
element and would be considered FS; however, if this same comment was presented in
the sidebar and was associated with the thesis statement, then that indicated that the
comment referred to a specific paper element, perhaps improved from a previous
assignment. Thus, this same comment within this context would be considered task-level
feedback. For the paper feedback, themes were established a priori according to the four
levels established through Hattie
and Timperley’s (2007) framework. The questionnaire responses that aligned with this
research question were narrow in their content and some were closed-ended, so the key
words became the themes. The themes will be discussed in greater detail in the
following section.
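The syntax patterns above suggest a rough rule-of-thumb classifier. The sketch below is illustrative only: the command-verb and pronoun lists are assumptions introduced for the example, and, as the chapter notes, the study’s coding also required reading each comment in context rather than relying on surface patterns alone.

```python
import re

# Rough, illustrative classifier for the syntax patterns described above.
# The verb and pronoun lists are assumptions for this sketch; in the study,
# every comment was also read in context before a level was assigned.
COMMAND_VERBS = {"avoid", "remember", "add", "revise", "cite", "use"}
PRONOUNS = {"i", "you", "we", "your", "my"}

def level_of(comment: str) -> str:
    words = re.findall(r"[a-z']+", comment.lower())
    if comment.rstrip().endswith("!"):
        return "FT"  # exclamations indicated task-level feedback
    if words and words[0] in COMMAND_VERBS:
        return "FP"  # comments opening with a command verb: process level
    if any(w in PRONOUNS for w in words):
        return "FS"  # first/second-person pronouns: self level
    return "FR"      # declaratives and content questions: regulation level

print(level_of("Nice work!"))                       # FT
print(level_of("Avoid using 'you' in the essay."))  # FP
print(level_of("I'm proud of you."))                # FS
```

The rule order matters: a comment such as “Avoid using ‘you’ in the essay” contains a second-person pronoun but opens with a command verb, so the surface pattern assigns it to the process level, which is why the contextual review described above remained essential.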
The first questionnaire response themes were “Text, Video, or All.” The
participants who responded that they used video feedback stated that they also used text
feedback. There are two reasons these participants would respond saying they used text
feedback as well. First, every graded assignment includes a general
comment; this is by design of the learning management system. So, while these
participants provided a video link to students that opened up their audio and video
feedback, these instructors also provided general comments in this learning management
system’s grading box. These participants also stated that they do not use video feedback
on all assignments, but reserve it for longer ones. This study was bounded by
assignments of 300 words or more in length, in order to capture more complex feedback
and different instructor methods, so only their video feedback was reviewed.
Text feedback can be given through the grading comment box or on the attached
papers. On student papers, instructors may embed text through the Track Changes
feature in Microsoft Word, through sidebar comments, or through adding text at the top
or the bottom of the page, or a combination of these. A review of the data revealed that
how instructors treat these different areas of feedback is not consistent; that is, some
provide general comments that are not assignment-specific in the grading comment box,
while others may provide this kind of feedback at the top of the page. Still others may
provide these general comments through the sidebar function. Thus, all sidebar feedback was reviewed for this study.
The theme of “Text Feedback” refers to any written form of feedback. The themes that emerged as factors of influence included Time/Efficiency, Familiarity, Student Needs, and Co-workers, as these were most frequently cited as the factors that contributed to the participants' choice of feedback delivery. Online instructors are
indeed faced with grading volumes that prompt attention to efficiencies. Institutions are serving growing numbers of online learners (Planar & Moya, 2016). This study's particular population teaches a full-time load of online courses; included in this population are tiered instructors. Because participants cited time as an influence on their delivery method, they may be selecting the method based on its efficiency.
Participants who used text feedback as well as those who used video feedback
both reported time as an influencing factor. This suggests that there is not one method
that improves efficiencies for grading; however, it may suggest that certain methods
could improve efficiencies for some instructors. Borup et al. (2015) and Wright (2014) stated that, in addition to increasing personalization, newer technologies may serve
to manage volumes of grading. There were participants in this study who responded with
time as an influencing factor of using text feedback, but also stated that they would not
try video feedback. For example, when answering what kind of feedback was used, this
participant said, “I currently deliver my initial feedback via text, specifically through the
use of embedded comments. I then remind students that they can call me to go over their
When asked what factors influenced that decision, this participant said:
with what other faculty have shared with me. Embedded feedback is one of the
quickest ways to get valuable and specific feedback to students that they can
The second theme that emerged from this set of questionnaire responses was
“Familiarity,” which meant that participant feedback options were influenced by their
own and their institutional conventions. Since video is a newer way to provide feedback,
the responses to this question reflected participants' comfort levels with different
methods. One participant stated, “This is what I was told to do per university policy”
(Participant 4).
The third questionnaire response that was aligned with research question 2 also showed results in the theme of “Familiarity.” This question was closely tied with the participant who used video feedback. The “Factors of Influence” and the “Expectations” themes appeared frequently, with “Time” named next, yet the data showed that “Time” and “Efficiency” were most frequently cited as the factors that determined their use of feedback. Finally, the data from the last question in this set showed that most responded with “Yes,” but there were caveats or additional remarks provided. These responses were captured in the data. Another theme under “Factors of Influence” was named “Student Needs.” This theme was selected based on key words
from questionnaire responses that pointed to anything that benefitted the students. These
included citing student preferences from institutional surveys, or from individual student
messages that indicated these preferences. For example, one participant responded:
I have found that students prefer Loom videos and I have received extremely
positive remarks about this form of feedback. In addition it saves me time over
typing the feedback and I am able to provide more feedback that makes sense to
This participant indicated that student feedback prompted the continued use of Loom for
video feedback.
These themes may be considered aggregately, as they have overlapping elements. For example, an instructor may believe
that increasing efficiency allows more time for higher quality feedback or possibly
classroom interventions for improving student writing. They may also believe that
despite the advanced practices that technology affords, not all students appreciate
these advancements. Indeed, Borup et al. (2015) found that despite the efficiencies and
personalization of video feedback, students reported preferring text feedback, citing its
conveniences.
The last question in this data set was designed to further prompt connections
between feedback and reflections, as it was about how faculty might continue with their feedback practices. One participant responded:
I plan on continuing this manner of feedback for the time being as it has proven
successful for me. I am interested in using some Web 2.0 tools to provide
feedback – like Loom – but remain hesitant due to time constraints. I am unsure
whether or not I could provide enough feedback and still meet departmental
Thus, this participant stated that text feedback was quicker, which was categorized as
“Time/Efficiency” as a theme, but also stated that they were hesitant to try Loom, the
program that allows video grading, due to its time constraints. Loom’s efficiency could
vary among individuals, but the perception of its efficiency may be just as critical in the decision; this participant stated that time influenced their delivery method without having tried both types of feedback modalities.
Table 14.
These themes provided insight into the second research question. The data
showed responses from both video and text participant users, but for both modalities, the
responses indicated that the chosen method reflects time efficiencies and what is best for
students. An exception to this is that those who used text feedback cited “Familiarity” as
being an expectation and continued practice. Moreover, some text participants expressed
interest in trying video feedback, but stated their hesitancy to try it in other responses.
This demonstrated that there are either technical challenges or mental obstacles to
overcome so that faculty members can try both kinds of delivery to determine which
kind of feedback is truly efficient for them, or what their students prefer.
In this study, feedback was regarded as the mechanism through which learning theory operates.
Examining instructor feedback within Hattie and Timperley’s (2007) framework merges
the data with the underpinning theory. Defined as information relative to one’s
performance, feedback can correct, clarify, encourage, or prompt action on the
part of the performer (Hattie & Timperley, 2007). The four-level model is designed to
provide the ideal conditions for feedback (Hattie & Timperley, 2007).
Effective feedback helps students increase their efforts, based on a firm understanding of the goal (Hattie & Timperley, 2007). If this goal is achieved, feedback becomes a powerful mechanism for learning. To be effective, feedback should answer three questions: Where am I going? How am I going? Where to
next? (Hattie & Timperley, 2007). These feedback questions operate on four different
levels: Task level, Process level, Self-regulation level, and Self-level (Hattie &
Timperley, 2007). These levels were the pre-determined themes that structured the data
analysis and the feedback remarks served as codes to support the themes.
The data were examined according to these four levels and were then integrated with the paper themes of Development, Formatting, Thesis, Mechanics, 3rd Person Narrative, and Word Count. The graphs below show how the two sets of themes intersect for text and video feedback.
[Figures 3 and 4: frequency of feedback remarks at each level: FT (Task), FP (Process), and FR (Regulation).]
At the task level is feedback that is specifically tied to the student’s task. For this
analysis, the researcher examined the feedback for appraisals of performance. For
example, Participant 4 stated, “Nice job with your thesis statement. Your topic and sub-
topics are clearly introduced.” This feedback was categorized as FT (Feedback to the
Task) because it appraises or evaluates the writing element. The student task was the
thesis or topic and sub-topics and the instructor noted the evaluation of them. Not all
feedback at this level is positive. Here is another example from this same participant: “I
appreciate the effort, but you did not properly format your reference page per
(institutional name) guidelines.” Both comments are at this first level (FT), appraising a
specific writing task. The data showed that while instructors addressed all writing
elements at this level, “Development” was the most commonly addressed element.
The next level that dictated the analysis was FP or feedback that was specific to
the student process. According to Hattie and Timperley (2007), feedback at this level
may point students to try different strategies or may act as a cueing mechanism. The
analysis involved reviewing comments for tone and language; if instructors were
specifically telling the students what to do, that was a distinguishing characteristic of
this level. For example, Participant 8 stated, “Use the double-space feature in Microsoft
Word and type in double-space.” This example shows that rather than evaluating the
writing issue, the participant, instead, is showing the student how to modify their writing
processes for future submissions and is being direct in the language. At this process
level, participants who used text feedback most often addressed formatting issues.
Feedback about self-regulation (FR) directs students towards the learning goal
through developing autonomy. Hattie and Timperley (2007) posited that feedback at this
level can promote internal dialogue and assessment for the learner. Feedback that prompted this kind of self-assessment was categorized at this level.
Instructors may have also commented on the paper’s subject matter, often affirming the
student’s thought process or probing the student into deeper thought on the same subject.
For example, Participant 10 wrote, “Yes, those in line with communication with their
body etc…can help them understand what is happening around them and how they may
be handling it.” Participant 1 stated, “Right! What roles does objectivity, perception, and
memory play within this process?” These remarks qualify as FR because instructors are prompting students toward deeper, self-directed thinking.
Among the participants who used text feedback, within the FR category, the
theme of “Critical Thinking” prominently emerged, which was not evident among video
feedback at any level. For participants who used text feedback, this theme was prominent, while participants who used video feedback did not use comments within the “Critical Thinking” theme. Unlike other comments,
instructor remarks in this theme did not address any particular writing issue, but instead engaged with the paper's ideas. For example, one participant wrote:
One quick side comment is in reference to the quote pulled from our e-book,
(gives title). I actually disagree with the author and find that thinking about
This remark did not address any concern with the paper. This “Critical Thinking” theme demonstrates how an instructor
may take the opportunity to converse on the class subject through paper feedback.
The last level described from Hattie and Timperley’s (2007) levels is FS, or
feedback about the self. These researchers acknowledged that this feedback is the least effective, as it typically addresses personal
traits or character, and might include remarks such as, “Good effort!” (Hattie &
Timperley, 2007). While some FS was present in this study’s data, it was not included in
the final results, because it was not tied to a specific subject area; if praise-associated
remarks are tied to a specific writing element, then the feedback is categorized
differently. For example, Participant 11 stated, “Nice job explaining how learning styles
affect collaboration!” In this remark, the participant is not praising the student personally, but the writing element, so it was categorized under “Development.” Although this category of FS was reviewed and tabulated, it was not
included, as it did not overlap with the other themes. General praise was found in the
gradebox comments by many instructors, but as this study was limited to reviewing how
feedback was specifically tied to reflection, this category of feedback was not reviewed.
Bounding the study in this way was an effective approach, but may have resulted in missing data. Generally,
instructors used the sidebar function for formative feedback and this study was bounded
by that type of feedback. Placing boundaries in a case study keeps it reasonable in scope
(Baxter & Jack, 2008). It is possible that some instructors may have provided summative
comments through the sidebar feature, meaning that such feedback was not intended to
be formative. Summative feedback is generally provided at the end of course work and
is not designed to engage or mobilize students in the same way formative feedback is
(Mirzaee & Hasrati, 2014). It is also possible that instructors may have given formative feedback outside the sidebar. These possibilities created potential for missing data; however, the researcher attempted to account for them when interpreting the results.
When the focus group reviewed the data that supported the second research
question, no participants disagreed with the data. The data were explained to them, and
when prompted, there was no additional discussion, nor were there challenges about the
responses. This confirmation served to triangulate the other data sources for this
research question.
In summary, the data analysis intersected the six themes of paper concerns (Development, Formatting, Thesis, Mechanics, Word Count, and 3rd Person Narrative) with Hattie and Timperley’s (2007) levels of feedback: Feedback to the Task (FT), Feedback about the Process (FP), and Feedback about Regulation (FR). However, it should be noted that instructor comments most often included more than one element. For example, a single comment might combine an FT appraisal with FP or FR remarks to re-direct the student. For the sake of analysis, these comments were
broken up to categorize them according to these themes. Finally, the data were
disaggregated to show how these feedback remarks might be compared with attention to delivery method. The third research question of this study was: How do online undergraduate full-time faculty perceive the
influence of reflective thinking on their current and future instructional approaches? The
data sources reviewed were questionnaire responses 9-12 and the focus group input.
Below is the process of familiarization, coding, and theming that aligned with this
research question.
The last section (9-12) of the faculty questionnaire responses addressed this
research question. The first question was about what changes faculty would make to
their instructional practices, based on their own feedback reflections. Like other questionnaire items that asked participants to respond to four different student papers for each question, participants duplicated many responses. This meant that the responses needed to be checked for duplications or words that shared
common meanings.
The classroom papers served as an important artifact for this study. The papers
were analyzed in two ways: First, they were reviewed for feedback themes, and
secondly, they were reviewed according to how those themes applied to Hattie and
Timperley’s (2007) levels of feedback. Thus, the classroom paper artifacts were
foundational for all research questions, but were not a direct source of data that aligned
with this research question. Therefore, familiarization with the papers was not necessary to address this
research question.
As with the other research questions, the familiarization stage involved a review
of the transcript. According to Yin (2014), triangulation involves the opportunity for
alternate perspectives, so the focus group offered that opportunity. Becoming familiar with the focus group transcript prepared the data for coding.
Like other questionnaire responses, the data were coded by a careful review of
the responses. Although many responses fell into a priori themes established from
previous answers, other responses demanded new themes. The ninth question was about changes participants would make to future instruction; most participants
responded with “None,” indicating contentment with their current approach to giving
feedback. Here are two participant responses: “Based on this paper and its feedback, I
will continue to use the same type of feedback for future instruction” (Participant 11)
and “There are a variety of methods to change or alter future instructions, feedback,
about not meeting university requirements” (Participant 5). These responses show
shades of difference, but could be distilled to reflect “None” to address changes faculty
would implement.
The next most frequently named response for this question centered on adding instructor resources. “Feedback” was a theme that followed, meaning that instructors believed that focusing on
their own feedback was an effective method of addressing student issues. Next,
participants responded that they would seek assistance from leadership or co-workers.
These responses were consolidated and themed “Co-workers.” The last set of codes included responses
linking back to data from student surveys, notes in the class from students, or phone
discussions. These codes were then attributed to the theme “Student Needs” and were
identified in the same fashion as previous codes, through finding similar strains of meaning.
Question 10 linked back to the feedback reflection and captured how a student
concern might relate to a particular classroom element. Most participants indicated that
the course discussion questions were the primary factor that related to the student
concern. The other themes were: Assignment, Existing Resources, and Instructor-added.
Because these were established themes from other questions, the coding process
involved attributing the responses to these established themes. The participant remark
here shows that the instructor sees a connection between the paper issue and the
assignment: “This concern is only related to the assignment itself. There are no other
classroom elements that are related or that specifically address these concerns”
(Participant 11).
Other participants found that concerns in the paper were associated with classroom resources. One responded:
and they have continued to use the classroom resources, to show considerable
improvement. While they are not formatting everything correctly, the issues are
very minor. It is clear that they get the general idea (Participant 4).
This participant was focused as much on student success as they were student concerns.
Their reflection indicated that they were assessing resources based on their
effectiveness.
The eleventh question asked how student concerns might affect their practices and elaborated on the specific trends that would affect practices,
thereby asking participants to predict or reflect on these influences. Most responses here
this was listed the least frequently. Here is how one participant stated this:
I have been told that student success rates and ‘checking the box’ are the most
an outside part is the most important. My desire to actually help a student learn
The last question on the instrument was designed to explore the extent to which faculty might re-shape their
practices based on their own reflections. Participants had responses similar to previous
questions, so similar codes were captured to support themes of: Instructor-added; Co-
workers; Does not see; Tech Tools; Engaging in Forum. Table 5 shows the frequency of
these themes and how Instructor-added is the most dominant, followed by Co-workers.
This specific population may have contributed to the theme of co-workers, since these faculty work together at a campus facility.
Research question 3: Coding the data. The second data source to address the third research question was the focus group session. This session
was held on March 27, 2018, at 3:00 PM, Arizona time. Ten of the 12 participants
attended via Zoom, which allowed remote attendance and the opportunity to record the
session, which lasted 48 minutes. The participants verbally consented to the session
being recorded, and the data were manually transcribed, resulting in a word count of
3,424. Careful listening to the session and reading of the transcript were the first steps in becoming familiar with the data.
The participants agreed with most of the data presented, but did challenge some
findings. The first discussion was the second questionnaire response, which was a
follow-up to the paper concerns. After listing the paper concerns, faculty were asked to
follow-up with their perceptions of concerns and for most paper issues, they indicated
that they were frequent. The focus group participants questioned the response indicating
that thesis statement was considered “Low” as a response. Some challenged this, as they
believed that thesis statements play a prominent role in their grading. For example,
Participant 9 stated, “I was a little surprised just to see the perception of concern for
thesis being so low listed there, just because from my experience.” Participant 1 wrote,
“I have the same kind of automatic – oh, wow, low for thesis?” Participant 13 replied,
“Uh, I would agree with both of what they said as you’d think that’d be a pretty big
concern.”
These three participants expressed surprise that responses would indicate that
this element would be of low frequency. As this discussion ensued and was moderated,
another participant brought in a new perspective: “I think a lot of it depends on the level
you’re teaching at too, because I know with some of the papers I chose…the thesis
wasn’t as important as teaching them formatting and developing the paper” (Participant
10). This participant’s remarks demonstrated how the data aligned with the first research
question about paper concerns. Although there was a theme of thesis statements, its
prominence was in paper feedback more than in the questionnaire response when
instructors were asked about paper concerns. The participants’ remarks provided insight
on the gaps and disparate emphases on the thesis statement in this study.
The focus group discussion also demonstrated that participants may focus on some issues, relative to what they emphasized in class. For some, there may be raised awareness of particular issues. One participant explained:
For the essays…where they write thesis statements, I try to get them to practice
back and forth and I fix them before they submit the papers. So then they’re
better in the actual essay because I try to head it off at the pass (Participant 2).
Thus, it became clear that the discussion of the thesis statement was an example of an
issue where divergent instructor views illustrated different reflections and actions.
Because Participant 2 stated that she creates her own materials to offset concerns with thesis statements, she sees these less and perhaps shifts her thoughts during feedback to other issues. The focus group also discussed the emphasis on content. When the paper issues were presented on a chart, the findings
showed that faculty gave most attention to content in their feedback. The codes that
determined the theme of “Content” included any feedback or questionnaire remarks that addressed the paper's subject matter, such as a lack of resources or content issues. “Content” was frequently discussed in both text and video
feedback, but with a greater emphasis in text feedback. The significance of these
differences and this attention to content will be discussed in greater detail in chapter 5.
The focus group observed the key differences in the theme of critical thinking.
Two charts (see Figures 3 and 4) were shown that illustrated the paper issues as well as
where the issues fell according to Hattie and Timperley’s (2007) levels. The focus group
participants observed that an additional theme of critical thinking was present only in
text feedback. One participant noted, “Would this indicate that there’s less critical
thinking with the video feedback? Uh, that’s the wrong way to say it, but there’s less
The discussion that followed was about reasons this theme might be less prominent in video feedback. One participant offered:
might be that for us, quite a few of us, video feedback is still new and we haven’t
way for the students. It was just interesting to note (Participant 5).
I feel like, like we try to be more positive maybe because on a video, you don’t
want to have on video, well, this was, completely off topic and you know, you
messed this up somehow – I feel like we stay a little bit more positive in those
videos and it’s more to show them that formatting process, in my experience
(Participant 10).
The conclusions in this discussion showed that feedback may vary according to the
feedback method. The comment above indicated that feedback through video may keep
instructors from addressing larger issues; instructors giving video feedback may remain
more focused on surface issues, such as formatting and mechanics. The significance of these differences will be discussed in greater detail in Chapter 5.
Research question 3: Establishing themes. The themes for this last research
question were a result of the codes generated from the questionnaire responses. The
ninth question was: What, if any, changes will you make to future instruction, based on your feedback reflections? Most responses were consolidated into the theme of “None” to indicate that the instructors would remain with
their current methods. This does not indicate that they are not actually reflecting on their
feedback, but rather may indicate that they are satisfied with their current methods. This theme was also named to be consistent with similar questions. The next theme was “Instructor-added resources.” Responses were attributed to this theme if they indicated that the participants
had specific plans to develop new resources. In an online classroom, instructors have
options of adding their own resources to a general resource section of the class, or to the
discussion area. Creating websites, videos, or adding written material are examples of
these. Here are examples of how two participant responses were assigned to this theme:
“Perhaps if I can find a couple of grammar/English quick lesson video snippets. These
have been helpful with the thesis and reference page. They may work with grammar-
related issues” (Participant 1) and “I would add a video on citing within DQ 1 (the
The next two themes were “Tech tools” and “Feedback” which drew from codes
from the questionnaire responses. Whether they used text or video feedback, participants
stated they would consider technology in other classroom areas. These included
instructional videos posted in the classroom, using a tool to text students, or video-
conferencing. Thus, these codes resulted in the theme of “Tech tools.” The theme of “Feedback” indicated that participants believed that, through their own comments, they could more effectively influence student paper concerns. This theme was not present among video participants, but was the fourth most frequent theme among participants who used text feedback.
The next theme demonstrated the specific culture of the population of online full-
time faculty. The theme was named “Co-workers,” which indicated that faculty were
influenced by their peers and leadership in their practices. Because faculty work together
at a campus facility, they collaborate, share, and discuss teaching methods. One outlier
response demonstrated that these influences were not always positive. This participant
responded, “I have management that does not care about methods or theories other than
what they feel is correct so disagreeing or trying something new has been shown to be a
negative and could impact my employment” (Participant 5). Although the negative
implication of this response gave it outlier properties, it was nevertheless considered part of the “Co-workers” theme. The next theme, “Student Needs,” indicated that any kind of student response or data influenced faculty practices. Students complete surveys as they end their courses, so instructors may review these as a
source of data. Thus, student praises, concerns, or questions influence instruction for
some.
The tenth question was about the student concern broadly shaping instructor
practices. This question was designed to probe how faculty participants might connect
the paper issues to materials or concepts available in the course. The themes were: Discussion questions, Assignment, Existing Resources, and Instructor-added; these themes came from codes that were eventually categorized accordingly. For
example, one participant said, “Some stuff we have taken a proactive approach such as
help get them comfortable with citing and researching in the library” (Participant 6).
These raw data were considered as part of the theme “Discussion questions,”
since the participant was stating that the student issue was most closely associated with
this intervention.
The eleventh question asked how student concerns might affect their practices. The themes from these responses are repeated from previous questions; one theme not seen in the tenth question was to call students. In the online full-time faculty
setting, calling students is a practice that is encouraged, as faculty are assigned phones
and numbers from the institution. Among remote adjunct faculty, this practice may not
be promoted or monitored. The most prominent theme among those giving text feedback was not student contact.
In response to this question, the theme of “Student contact” was more prominent
among participants who gave video feedback. Phrases like “contacting students” and
“set up appointments” were codes that were attributed to this theme. At the host
institution, full-time online faculty are encouraged to call students, both to welcome
them to class and to help during the class. For example, one participant stated, “I also
schedule 1:1 phone appointments to review all work and what students can do to make
things better” (Participant 6). This theme associated with contacting students is
significant, because other data indicated that both text and video participants believed
that these methods are best for students. Nevertheless, feedback methods alone are not
sufficient for addressing student needs; through reflecting on feedback, instructors may identify additional ways to support students.
The final question on the instrument was about how faculty reflection might
influence them in a broader context. This level 4 question was designed to capture how
faculty might re-shape their practices, focusing on the bigger picture. The online full-
time faculty at this institution work at a campus facility and engage in decisions
regarding curriculum and teaching practices. Thus, some of the responses in earlier
questions related to quick fixes on the part of faculty, whereas this question was
designed for participants to consider the student issue in a broader context. This might include broader changes that benefit all students.
The data from the responses underscored the value of using outside resources
beyond the instructors themselves, as many mentioned discussing concerns with co-
workers and management. Some themes were repeated, including Student needs and
Instructor-added, while two other themes were created specific to this response, and they
included: Tech Tools and Co-workers. The last two themes are significant, since they relate to how faculty draw on feedback to shape practices. For example, “I will also share with my coworkers any
issues or trends that I see and also bring these up to my manager during one-on-ones. By
doing this I can obtain advise [sic] and new perspectives” (Participant 8).
The last set of questionnaire responses demonstrated that the particular culture of online full-time faculty shaped how they reflect on and adjust their practices.
The data sources for this research question were the questionnaire responses and the focus group discussions; the classroom papers were not considered a direct data
source. The classroom papers served as data sources for two types of analysis.
Identifying the paper concerns was necessary for the first research question, and the feedback was analyzed according to Hattie and Timperley’s (2007) four levels for the second.
Those foundations formed a platform for addressing the third research question: examining how feedback practices inform reflection, what methods are used, and how faculty use that reflection. Thus,
no new themes from classroom papers will be discussed for this third research question.
The focus group reviewed the themes from the questionnaire responses and the
classroom papers. The focus group was asked to confirm, challenge, or add to the data.
No new themes were added as a result of the focus group discussions. That is, the group
appeared satisfied with the results from the questionnaires and the papers. However,
some participants were surprised at the frequency of the second questionnaire response,
with respect to the thesis statement. These participants stated that in their grading
experience, students often needed help with thesis statements. However, other
participants stated that the emphasis on the thesis statement could be dependent upon the
classes taught. The participants also observed that the critical thinking theme surfaced
among those who gave text feedback, but was not present among the video feedback
data. Triangulation served as a validation of the data (Yin, 2014). With the focus group’s
interest and attention on particular areas, and a general agreement of the remaining data,
the process of triangulation was complete, and these remarks contributed to the final results.
Results
This section will present the analysis results and how these results align with the
study’s three research questions. The study’s data consisted of reviewing questionnaire
responses, classroom paper feedback, video feedback that was transcribed, and a focus
group that was recorded and transcribed. All data were coded, with the patterns resulting in study themes. This section will identify the study’s themes, how they were
developed from the codes and how those themes were viewed by the focus groups,
producing the final themes and results. The focus group completed the triangulation of
the data and provided emphasis on certain themes, which reflect data from each source.
As this is a qualitative case study, the salience of the results is dependent on the
rigor and care of the coding and theming process. Data for a qualitative study must show
a systematic process for meaning to emerge (Vaughn & Turner, 2016). In addition to the
systematic process of coding the data, the sources were triangulated to address the
study’s questions about faculty feedback reflection. The purpose of this qualitative
single case exploratory case study was to examine how online undergraduate faculty
institution. To address that topic, the data sources will be discussed below, showing how
the coding and theme analysis contributed to the outcomes and how triangulation served
higher education institution. From the division, 136 out of the 150 online full-time
faculty were sent emails inviting their participation in the study. Initially, 13 volunteered
to participate in the study, but one did not successfully complete the questionnaire, so
the researcher continued to work with 12 active participants who completed the
questionnaire. Ten participants attended a recorded focus group session. All participants
provided informed consent forms.
The questionnaire contained twelve questions; there were four questions to align with
each of the three research questions. Faculty completed
this questionnaire as they graded and examined four papers for each question. Next, the
participant’s classroom paper feedback was analyzed for common themes and how the
feedback fit into Hattie and Timperley’s (2007) levels of feedback. Finally, for the focus
group, the participants were shown the data from these two sources. They confirmed the
data and highlighted or challenged certain themes, which will be discussed in greater detail
in this section. Finally, the confirmation and emphases from the focus group resulted in
the final themes. The first research question asked how online full-time faculty perceive
the influence of their instructional feedback practices on their own reflective thinking.
This research question was addressed through the faculty questionnaire responses 1-4.
The themes of paper issues that surfaced were Development, Formatting, Mechanics,
Thesis Statements, Third Person Narrative, and Word Count. These themes were derived
from 37 codes (see Table 9), which were words participants used to address student
issues and were later consolidated into the six broader themes.
The second question was closed-ended, so remarks were tallied as either high,
low, or mixed. For the third questionnaire response, there were seven themes, listed here
with their code frequencies, including Instructor-added (17), Co-workers (5), Past (5),
and Student Contact (5). For the fourth questionnaire response, the themes and numbers
of codes included Instructor-added (39) and Existing Resources (18).
The themes surfaced as a result of the codes and the frequency of participant
responses. Therefore, the frequency of the codes demonstrated what was significant to
the participants. When asked about the paper issues on the questionnaire, issues related
to “Development” surfaced the most. Comments that related to evidence or paper
substance seemed important, due to the frequency of words like evidence, support, and
reasoning. The next theme mentioned frequently was "Formatting." Participants alluded
to this theme through words like APA style, citing, or discussing the reference page.
These were distinguished from comments that would be categorized under "Mechanics."
Another prominent theme was "Thesis Statements." While not all participants expressed
concern about thesis statements, those who did ranked this concern high and noted it
frequently in the questionnaire, in the paper feedback, and later in the focus group. This
will be discussed in greater detail below.
The remaining themes were “Third Person Narrative” and “Word Count.” First-
year sequence courses emphasize writing that is from a scholarly viewpoint and students
frequently need to present research-based papers. Beginning students may lapse into
using the first or second person, and instructors may provide instructional commentary
or may identify this error in the paper feedback. Participant responses were mixed
concerning the frequency of this issue. For word count, instructors relied on assignment
guidelines to determine whether students fell below or exceeded the required word count.
Participants responded to the third and the fourth questions with seven codes
with “Instructor-added” as the most dominant theme, which indicated that participants
believed they could best address the paper issues by creating their own materials. Codes
that supported this theme were examples, sample papers, creating videos, and preparing
commentaries. For the last questionnaire response, there were 11 codes that fell under
the theme of “Instructor-Created Resources.” Participants mentioned adding to class
discussions, posting new materials, creating FAQs, giving amplified instructions, and
regularly using, sharing, and creating CATs, which are part of ongoing institutional
practice. Another theme indicated that participants did not plan to make any changes
regarding the paper concerns. Thus, responses of "No changes" and "None" became
themed as "None." To address the first research question about how feedback practices
influence their thinking, there were six themes of paper concerns: Development,
Formatting, Mechanics, Thesis Statements, Third Person Narrative, and Word Count.
Participant responses showed that they mostly use their own created resources to
remediate these student issues and that this practice will likely continue among the
participants. These results offered insight into the first research question, showing how
faculty reflection on feedback shapes how they address paper concerns. One participant
said:
To help students stay on topic and answer all prompts, I have created videos
walking through the assignments. The video also explains the requirements for the
assignment.
This participant, like many, relied on her own creation of resources as a way to
remediate student issues. Responses indicating no planned changes were not
indicating the absence of a strategy; rather, these participants were secure with their
current methods:
I will not be making any changes to my practices, as I believe the use of Loom
videos clearly explains the issues with any given essay to the student. I feel …
(Participant 9).
This participant specified that not changing practices was akin to providing robust video
feedback.
Classroom papers. A second data source that addressed this research question
was an examination of the classroom papers. The review of these artifacts served as a
source of triangulation of the data, because the actual classroom feedback could be
compared to the participant responses. When participants were asked about what
concerned them about student papers, the same themes were addressed in their actual
classroom feedback. Thus, although the researcher approached the paper feedback
openly, the a priori themes from the questionnaire served as themes for the classroom
feedback as well.
instructor feedback. To arrive at this theme, any words that related to the paper's
development or substance were grouped under it. Here are two examples of statements
that depicted that theme: "This is a strong paragraph; delving into the impact that
bullying has on a victim's schooling is a great way to appropriately develop your topic"
(Participant 3) and "Please work on providing more …"
Although "Development" was a strong theme in both video and text feedback, it was
more prevalent in text feedback, while the themes of "Formatting" and "Mechanics"
surfaced more often in video feedback.
To address the first research question about how faculty reflect on their
feedback, two data sources were used: the first set of questionnaire responses, which was
designed to capture how faculty reflect in their grading and on what specific issues, and
the classroom paper feedback. The
resulting data from these sources shared the same themes, with one exception. A new
theme emerged from the paper feedback review. The theme “Critical thinking” was
attributed to participant comments that were detached from the students' writing skills but
showed that there are times when instructors want to relate the assignment to the course
content, dialogue with students, or promote deeper thinking on a subject. Here is one
example:
One has two turkey farms, one is a dairy farmer, one is a pork farmer, one is a
wheat, soy and corn feed farmer, one does pumpkins and watermelon
(Participant 1).
Another example states, “It is important knowing what good collaboration is and
how effective communication can affect this” (Participant 10). While this theme of
“Critical Thinking” was prevalent among participants who gave text feedback, it was not
present among participants who gave video feedback. The significance of this difference
will be discussed further below. The remaining themes extracted from the classroom
feedback review were: Mechanics, Thesis Statements, Third Person Narrative, and Word
Count. The theme of "Mechanics" encompassed
several codes which were derived from participants noting areas where students needed
attention in grammar, usage, punctuation, capitalization, or word choice. Here are some
examples of comments that fell into the “Mechanics” theme: “Relive or relieve, make
sure to read through before submitting” (Participant 8) and “Only capitalize when
starting a sentence” (Participant 2). Both comments illustrated feedback that is focused
on student mechanics, with the instructors identifying errors and providing suggestions for correction.
The theme of “Third Person Narrative” surfaced often in both the questionnaire
responses and in the paper feedback of both text and video feedback. Instructors
addressed this issue in different ways. Participants may have advised students to avoid
“you,” or may have advised against using the “second person.” Similarly, participant
feedback may mention the error of using first person, often following up with a
rationale, such as "… hasty generalization" (Participant 6). Here, the participant is providing the rationale
behind avoiding the first person. It was common for participants to not only identify a
writing issue, but to extend their comments towards formative instruction. This trend
will be discussed in greater detail as the classroom papers were analyzed according to
A prominent theme that surfaced for participants of certain classes was themed
“Thesis Statement.” Some participants indicated both on the questionnaires and in the
classroom papers that their concerns with thesis statements surfaced frequently. First-
year sequence courses focus on the framework of a five-paragraph essay with a thesis
statement containing three points. Therefore, these comments were not just on the thesis
statements themselves, but may have been evident in the body paragraphs. One
participant who provided video feedback mentioned “thesis” four times in the video
transcript of 898 words. The participant responded, "…are you giving me some lead-in
information to your thesis, most importantly, what does your thesis look like, is it …"
A participant who provided text feedback mentioned the term "thesis" two times
in a paper with a word count of 344 words. He addressed it as, "Your thesis statement is
well-structured…A concluding paragraph re-states the thesis, re-caps the key points, and
ties everything together in a satisfying way” (Participant 3). This comment shows that
although this student did not have trouble with the thesis statement, the instructor still
provided commentary, emphasizing its role in the essay. Thus, there is a trend of
addressing topics not necessarily for correction, but for reinforcement or formative
instruction.
The data from reviewing classroom feedback as well as the first set of responses
from the questionnaires revealed the prominent themes that highlighted instructor
concerns with papers. From the responses, the themes showed how instructors viewed
papers and how those concerns reflect in their teaching practices. A common theme was
that participants did not plan on making changes, or they would create their own
resources to address student needs. The themes for the paper concerns (from Question 1)
were also used to examine the papers. The most prominent theme was “Development”
followed by “Formatting” and an additional theme of “Critical Thinking” emerged from
the instructors who used text feedback. Finally, the “Thesis Statement” theme was
prominent because in some classes, it was a foundational concept and was often
addressed regardless of its quality and presence in the paper. The themes from the
second research question will be discussed next. The second research question asked
what feedback methods online faculty use at a southwestern higher education
institution. The purpose of the
second research question was to determine if different kinds of reflection take place
depending on the feedback methods. Full-time faculty use either text feedback or video
feedback; for the latter, instructors use an application called Loom. The video allows audible
narration while the instructor can use the moving cursor and live typing to amplify the
feedback. The questionnaire responses 5-8 and the classroom papers were the two data
sources that addressed this question. However, for this research question, the classroom
feedback was examined a priori, according to Hattie and Timperley’s (2007) four levels.
The data were disaggregated to show the differences between the kinds of feedback given
through each method.
The questionnaire responses from 5-8 will be discussed first. Question 5 was
about which method of feedback faculty used and as this was a closed-ended question,
only two main responses surfaced and these also served as themes: Text and Video.
Eight of the twelve participants stated that they used text feedback. Two stated that they
used both text and video. One said the method depended upon the assignment, and one
participant stated that video was used; however, in reviewing the classroom data, text
was the primary method for this participant. It should be noted that the researcher
intentionally limited the classroom feedback review to lengthy assignments, and an in-depth
classroom analysis revealed that instructors who used video feedback also used some text
feedback.
Participants viewed questions six and seven as asking the same thing. Therefore,
the responses were compressed into four themes: Time/Efficiency (24), Familiarity (16),
Student needs (12), and Co-workers (4). For the video participants, Student needs (8)
was the most frequent theme, with Time/Efficiency (4) being the other theme. There
were no codes among the video participants that were attributed to other themes.
A prevalent theme among both types of participants was “Student Needs.” Here
are examples of codes that were attributed to this theme: “Anything to help students”
(Participant 7) and “Based on student needs” (Participant 9). Participant 9 used video
and Participant 7 used text, yet both believed that their chosen method was the best for
the student and this was listed as the dominant factor for selecting their particular
feedback method.
Another theme was "Time/Efficiency." Full-time faculty carry high volumes of
students; the tiered instructors typically carry loads of 150 students at a time. Thus,
participants stated that they considered efficiency an important factor in determining their
feedback method. Like the previous theme, the video and text instructor responses both indicated
that these methods played a role in efficiency; that is, that instructors who used video
felt it to be more efficient than text, and those who used text believed that text was more
efficient than video. There were six comments that pointed to this theme.
A third theme was “Familiarity” which meant that participants were practicing
what they believed was expected or within their comfort zone. Responses with words
like “experience” or “expectations” were re-framed into this theme. This theme only
surfaced among participants giving text feedback. Since video feedback is relatively
new, even participants who use it are more familiar with text feedback, providing a strong
rationale as to why this theme did not surface among those participants who gave video
feedback.
The last question that addressed this study’s second research question was about
the faculty’s plans to continue with the current method. Nearly all responses included a
“Yes,” and the questionnaire’s open-ended format revealed faculty attitudes towards
their choices. The "Yes" responses were further delineated and categorized. One
participant responded:
I plan on continuing this manner of feedback for the time being as it has proven
successful for me. I am interested in using some Web 2.0 tools to provide
feedback – like Loom – but remain hesitant due to time constraints. I am unsure
whether or not I could provide enough feedback and still meet departmental
expectations (Participant 8).
Another participant stated, "… into my current plan but I do plan on continuing the same
route to helping students." Some participants were hesitant to adopt video feedback,
citing perceived time constraints. Like others, Participant 8 plans to continue with the
same method, but is open to change. The two participants who did use video feedback
also planned to continue with their method.
Classroom paper feedback. The second data source that addressed this second
research question was the classroom paper feedback that was analyzed according to
Hattie and Timperley’s (2007) four levels of feedback. This framework posits that
instructor feedback falls into either praise, task, process, or regulation levels (Hattie &
Timperley, 2007). For this analysis, the instructor feedback was examined according to
these levels. The theory that undergirded this study was Vygotsky's (1978) Zone of
Proximal Development; therefore, it was important to see if the selected method reflected
the kind of feedback and if, in turn, that
feedback reflection influenced teaching practices. For this data set, the themes were a
priori, because Hattie and Timperley’s (2007) levels were already established as the
structure. This means that the researcher examined the feedback for characteristics that
aligned with each of the levels.
The results of this analysis showed that among participants who used text
feedback, most comments fell into the self-regulation category of feedback (see Figures
3 and 4). These comments can be characterized as leading students into a self-discovery,
rather than explicit instructions or comments that praise or condemn. Within the self-
regulation level of feedback (FR), instructors might explain concepts, as seen below:
The thesis statement is the last sentence within the introduction and provides a
quick summarizing statement of the supporting paragraph topics within the rest
of the essay. What are your three supporting paragraph sub-topics? How do you …
This comment serves to remind the student of the concepts learned in class and prompts
the student towards self-regulation with the questions that follow. Comments from
participants who used text feedback were directed mostly towards development,
narrative, and word count. Although self-regulation comments took place mostly within
content and critical thinking, all of the paper issue themes appeared at this level of feedback.
Among the participants who provided video feedback, the highest concentration
was in the themes of: Content; Mechanics; Formatting; Thesis Statements; Third Person
Narrative (see Table 10). Here are a few examples of self-regulation comments provided
through video feedback: “When I go into your supporting paragraphs, I’m looking, do
you have an outside source? Every paragraph should have an outside source”
(Participant 9) and “Here you’re talking about relational awareness and here you’re
talking about the negative side of social media, so it doesn’t go well together”
(Participant 12).
This theme of self-regulation is evident, as both text and video comments share the aim
of prompting students towards self-discovery.
The second a priori theme established from Hattie and Timperley’s (2007)
framework is feedback about the process, or FP. Feedback fit into this category if the
comments specifically addressed how a student might remediate a paper issue. For
example, one participant directed a student to address "… the objectives in the 4th
paragraph." Feedback at this level is direct and associated with a
particular writing issue. Longer assignments in first-year sequence courses are often first
drafts, so instructors may include feedback to facilitate the revision process. Of the
participants who used text feedback, most FP comments addressed formatting, followed
by content, mechanics, thesis statements, critical thinking, third person narrative, and
word count. For participants who used video feedback, mechanics was the highest
category, followed by content, formatting, thesis statements, then third person narrative.
Within this FP level, there were no comments on critical thinking or word count among
the video participants.
Here are examples of FP feedback from the participants who used video
feedback: “You need to write in third person narrative” (Participant 9) and “Make sure
you’re not giving me any personal feelings” (Participant 12). Both text and video
participants provided direct, issue-specific comments at this level.
A third level from Hattie and Timperley (2007) served as an additional theme for
feedback. Task feedback (FT) refers to feedback that assesses or evaluates student work
and is associated with a specific issue. This reference to a specific issue separates the FT
feedback from praise, which is limited to addressing the learners’ character traits. Thus,
when instructors used the sidebar feature to give feedback, such as, “Good job!” it was
considered FT, since the instructor was intentional about this association.
In examining the feedback remarks at this FT level, a noticeable result was the
difference in comments regarding the “Content” theme. For both text and video
feedback, content was commented upon the most at this level; however, among
participants using text feedback, comments about content were provided at this level
more than any other; whereas among participants who provided video feedback, most
comments about content appeared in the FR (self-regulation) level. Overall, among the
video participants, comments were not prevalent at this level. An example of feedback
from text feedback at the FT level states, “Great example” (Participant 5). At this level,
comments are typically brief, since the lengthier comments fall into different levels.
The last category of Hattie and Timperley's (2007) framework is feedback that addresses
the self, or FS. Although these comments were present in both text and video feedback,
for the purposes of this study, they were not tabulated or analyzed; such comments often
appeared as general remarks that the whole class may have received or as conclusive or
summative remarks. FS
comments are not tied to a specific paper issue, and since this study was about how
instructors reflect on their feedback, these comments were excluded, as they would
not address remediation that might result from feedback reflection. Thus, the data (see
Table 10) show only the three levels that would serve the purpose of linking feedback to
reflection. The third research question asked how online full-time faculty perceive the
influence of reflective thinking on their current and future instructional approaches. The
data sources that supported the third research question were the third
set of questionnaire responses and the focus group results. The ninth question was
designed to capture if faculty participants would make any changes to their teaching,
based on their feedback. The most common response was “None” and to arrive at this
theme, all responses were reviewed, and any comments that alluded to staying the same
were coded accordingly. One participant stated:
I absolutely will continue to use Loom for outline and first drafts. As previously
stated, I use this technology to create Welcome to Class videos in which the
students are able to put a face and voice to my name which students have …
Since this participant justified the use of video feedback, this was themed under "None."
A second theme was developing new resources; participants stated this through sharing
that they continually develop resources based on
student needs. One participant said, “Will keep working on developing videos that help
model the week’s objectives and assignments” (Participant 7). This participant is saying
that they are using videos in other areas and that, although they will not change their
feedback methods, they do react to student needs by creating videos based on the
needs in papers.
A third theme from this ninth questionnaire response was “Tech Tools.”
Although similar to the other theme of developing new resources, this more specifically
refers to participants trying new web tools, such as Remind, which serves as a host site
that allows instructors to send text messages to students. One participant said:
I may start to use the Remind app. I may also strategize the tools that need to be
used to assist with each week’s assignment to build up to the rough draft, in hope
of better overall quality and connection to past assignments. I may [offer] a live
Zoom conference to students one day a week for questions and to have open
ended assistance for one hour with help on their paper (Participant 13).
Faculty participants who use text may use video or other technology applications in
other areas of the course. Their feedback reflections point to those influences.
Another theme present in the responses to this question was "Co-workers,"
which also included institutional leadership. This theme may indicate that
faculty expected either support or training from leadership. For example, one participant
responded, “I will continue to seek out assistance from my peers and manager as well as
my students” (Participant 8). This response aligns with the culture of the full-time online
faculty participants, who collaborate and share practices by meeting on the campus site
during the week. Also, the response alludes to an expectation of leadership support for
changing practices. These findings may point to implications for training or leadership
development if faculty perceive that institutional support would prompt trying a new
method of feedback.
The last theme that emerged from this ninth questionnaire response was
“Feedback.” Most institutions allow students to evaluate faculty near or after the course
ending. In addition, students may take the initiative to communicate on their own to the
instructor about their thoughts on the feedback methods. For some instructors, this kind
of student input influenced their feedback method. For example, "I will always be open
to trying new strategies and adapting to better assist my students. This means being open
to feedback and ideas of …"
Knowing student preferences from this data helped determine faculty methods.
The tenth question presented to faculty was designed to capture the factors that
connect a student concern to a classroom element. Four themes emerged from these
responses: Discussion Questions (16), Assignments (14), Existing resources (13), and
Instructor-added (7). Video participants responded with Feedback (1) and Existing
Resources (1). A prominent theme was Discussion Questions, or DQ’s. Faculty remarks
that alluded to the discussion question were assigned to this theme. One participant said:
The issue the student has – formatting – is something that came up in Topic 2,
and they have continued to use the classroom resources, to show considerable
improvement. While they are not formatting everything correctly the issues are
very minor. It is clear that they get the general idea (Participant 4).
This instructor is validating the regular class discussion and how that can be a platform
for future instruction. This was the most common theme among the responses to this
question.
Following discussion questions, the second theme that addressed this tenth
questionnaire response was “Assignment,” which means that the participants expressed
that a particular student concern was limited to the assignment or its boundaries. This
meant that any concern with the assignment was not directly related to a
misunderstanding of a course concept, but may have surfaced as an isolated error. One
participant’s response illustrates this, “This concern is only related to the assignment
itself. There are no other classroom elements that are related or that specifically address
these concerns” (Participant 11). Other instructors did not concur that an issue was
specific to an assignment, because they chose to address some common mishaps prior to
the students’ attempting the assignment. For example, “I picked up on this potential
concern – supporting paragraphs – in the topic 3 DQ 1, and have provided the students
with resources to help ensure they are working on developing their topic” (Participant
4).
The last two themes that surfaced from this response were "Existing Resources" and
"Instructor-added," listed in order of frequency. Thus, for this specific question,
instructors more often believed the existing class resources were sufficient, compared to
adding their own resources. One participant, however, specified how they added course
materials to address student needs:
The (religion) course is a content-only course. Skills are not part of the
curriculum. It is assumed that by the time they get to this class, they will have
learned thesis and essay writing skills. However, because students still struggle
with thesis statements, I have added skills training to the discussion forum to …
Regarding connections to student success, the greatest factor of influence was the Discussion
Forum, where instructors could post resources and engage with students.
The eleventh question was about trends that affected faculty practices. Faculty
did agree that trends affected their practices, citing how the volume of a particular
student issue dictated their actions. Like other responses, “Instructor-added” was a
prominent theme, meaning that what instructors added to the class was swayed
according to these trends. Faculty cited volume to indicate their attention to these trends.
Since I only teach one course at a high volume, trends are very important. [When I]
have a concern come up more than once, I tend to change a practice or add a [resource].
Participants did not specifically state that they chose a particular feedback method due to
high volume, but were saying that because of the large amount of student work they see,
that volume has a greater influence on their remediation and classroom practices.
When juxtaposing this response with the earlier repeated theme of "Student Needs," this
suggests that whether faculty use video or text feedback, both are doing what they believe
is best for students. When reviewing these factors within the framework of the different
methods, more discussion is needed to understand how these factors intersect with the
feedback methods.
The final questionnaire response showed how faculty might apply these trends to
a broader context. The themes that emerged have been expressed in other questionnaire
responses, but in a different sequence. The most frequent theme was “Instructor-added”
resources, followed by: Co-workers; Does Not Apply; Tech Tools; Engaging in Forum.
Thus, faculty participants most commonly stated that their own prepared resources were
the most valuable practice for extending their remediation for students. One participant
stated:
Typically, if I see a student concern multiple times, I try to provide resources
ahead of time to help future students to not have the same concerns. These might …
Such reflections may extend to a broader context, such as informing course curriculum
or shaping practices for the division of online full-time faculty.
Focus group. The focus group served as the third data source for this study.
The focus group took place via Zoom on March 27, 2018. Zoom is an internet
conferencing tool that allows remote participation and the session to be recorded. Ten
participants were presented with consolidated themes and results from the questionnaire
responses and disaggregated data from the paper feedback. This allowed the participants
to view the paper feedback trends according to the different methods of either text or
video feedback. The focus group discussions resulted in new emphases to particular
areas of data, including: The emphasis on thesis statements, the theme of critical
thinking that only existed in the text feedback, and the differences in feedback between
the two methods.
The focus group participants agreed with the data, but some were surprised that
the thesis statements received less focus. The second question on the instrument asked
the instructors how frequent the particular paper issue was, and for “Thesis Statements,”
one of the paper themes, this response was not high. Participants 7, 1, and 12 stated they
were surprised that not all of the faculty participants rated the frequency of paper
feedback addressing "Thesis Statements" as high; however, Participant 11 indicated that
this mixed perception could be because the courses taught have different demands. She said,
I think a lot of it depends on the level you’re teaching at too, because I know
with some of the papers I chose, I teach (intro to education), so the thesis wasn’t
as important as teaching them formatting and developing the paper. So, I think
that’s part of why there was a mix of ideas there too (Participant 11).
Another participant stated that when she responded to that question, she was aware of
how much planning and preparation she did prior to the student assignment and,
therefore, did not have concerns regarding the students’ proficiency in writing the thesis
statement:
For the two essays in (religion class), where they write thesis statements, I try to
get them to practice their thesis statements in the discussion leading up to the
paper. And so we go back and forth and I fix them before they submit the papers.
This participant prepares students for the thesis statement prior to the assignment,
whereas others view the draft feedback itself as a mechanism for instruction. A second
surprising element among the participants was the theme of “Critical Thinking.”
The “Critical Thinking” theme emerged after careful analysis of both the video
and text feedback. The disaggregated data showed significant differences in the
proportions of feedback and the attention to the themes. Among those who gave text
feedback, the second most prevalent theme was identified as “Critical Thinking,” which
reflected comments that were not tied to a specific graded element in the paper. These
comments demonstrated that instructors were interacting with students on the basis of the
subject or class content, showing that the instructors used feedback as a stand-in for
what might take place in face-to-face instruction. For example, “The scenario about Joni
is a true story. You might enjoy checking out her website and seeing how she has made…”
For this assignment, the students were assigned to write about an ethical dilemma and were
given various scenarios. This comment was not directed towards the student’s success or
failure on any rubric element, but was crafted to promote enthusiasm for the topic. All
participants were surprised that a theme would exist in text feedback, but not in video.
The participants had different reactions to how the paper feedback comments
were categorized according to Hattie and Timperley’s (2007) levels and the proportions
of attention to the various themes. For all levels, the participants who gave text feedback
focused more on content than on any other theme. For the participants who gave video
feedback, the focus on content was greatest in FR and in FT, while the comments on
mechanics were the most prevalent at the FP level. Overall, instructors who provided
text feedback focused more on paper content. Participant 10 offered her perspective on
this difference:
When I think about when I use video feedback, I can tell, like, just as it shows
here in the chart, that it is often formatting because I’m trying to show them,
because maybe they’ve displayed that they don’t know how to do it and I’ve
tried to tell them through text, I’m showing them, this is how we indent, this is…
Video may thus provide more demonstration opportunities than text, which could offer an
explanation as to why there is less of an emphasis on paper content in the video feedback.
Summary
This study explored faculty feedback reflections and how those reflections might
influence their teaching. Three research questions were posed with three data sources
supporting these questions. First, the 12-question faculty questionnaire was divided into
three sections, with four questions in each section targeting a specific research question.
Next, the
classroom feedback was analyzed in two different ways, which meant the data supported
both the first and second research questions. The classroom feedback was analyzed
thematically to support the first research question, while the feedback that supported the
second research question was analyzed a priori, according to Hattie and Timperley’s
(2007) levels of feedback. The third research question was
supported by the questionnaire responses and the focus group. These three
sources served to triangulate the data, as the focus group discussions reinforced and,
at times, challenged the findings from the other sources.
For this study, several notable trends and themes surfaced. First, the paper
feedback analysis revealed that among all themes and trends, faculty participants
focused the most on content, or the student’s paper development. The participants who
provided feedback in text form showed content as the most prevalent theme in the FT
and FR Hattie and Timperley’s (2007) categories, while the participants who gave video
feedback showed content at the highest concentration in the FR level. The paper
feedback themes also revealed that critical thinking comments were present only from
participants who used text comments and were almost as frequent as content at the FR
level of feedback. This theme was not present among video feedback instructors. These
were the dominant emerging themes that surfaced from a review of the classroom papers
and the focus group data and questionnaire responses served to triangulate the data.
The questionnaire responses also revealed the participants’
perceptions of how they respond to student issues and how their feedback reflection
helps them address the needs in the classroom. First, the theme of “Instructor-created
resources” had the greatest prevalence and demonstrated that most participants
remediated the paper concerns by creating new resources for students. This addresses
how feedback reflection can translate into instructional change.
Through these responses, the participants indicated that their classroom resources are
shaped by their reflection on recurring student concerns.
An interesting and recurring theme in the study also related to the methods
instructors use. As this study explores how faculty feedback methods might influence
their instruction, the data showed that whether participants provided text or video
feedback, both believed that their particular feedback method was best for students and
was the most efficient in terms of the volume. These seemingly conflicting responses
are examined further in the implications. In relation to Vygotsky’s (1978) Zone of
Proximal Development, the data from Hattie and Timperley’s (2007) framework served as a measure of
feedback’s formative value and the goal towards learner self-regulation. Because most
instructor comments were addressed in both the FP and FR levels, this ZPD (Vygotsky,
1978) theory seems to be actualized through instructor feedback. The data’s conclusive
results were examined in this chapter, and those results necessitate greater exploration in
terms of the implications, conclusions, and recommendations for future studies. These
are discussed in the sections that follow.
This chapter will begin with a summary of the study’s ten strategic points. The
study’s broad topic examines how instructors reflect on their feedback and how those
reflections might influence their instruction. Gaps were identified in prior
feedback studies, including how feedback is largely explored from the student’s
perspective (Borup et al., 2015), and population gaps were cited by Atwater et al. (2017),
who studied graduate student perceptions of video feedback and recommended future
research with other populations. The theoretical foundation was Vygotsky’s (1978)
Zone of Proximal Development, and the study used instructor feedback as a mechanism
for scaffolding. The literature review addressed the topics of instructor
feedback, Feedback and social presence, Social presence and learning outcomes,
Feedback methods, Feedback and learner populations, and Reflection. This was a
qualitative case study design, which was the best approach for understanding the
phenomenon within its context.
The study’s problem statement was: It was not known how online undergraduate full-
time faculty perceived the influence of their instructional feedback practices on their
own reflective thinking, and hence, their instructional strategy at a southwestern
higher education institution. The study’s three research questions were: (1) How do
online undergraduate full-time faculty perceive the influence of their instructional
feedback practices on their own reflective thinking? (2) What methods of feedback do
undergraduate full-time faculty use at a southwestern higher education
institution? (3) How do online undergraduate full-time faculty perceive the influence of
reflective thinking on their current and future instructional approaches? The phenomenon
explored was the perceived influence of feedback practices on teacher reflection and,
subsequently, the influence of reflection on instructional strategy. This was a single case
study design. The study’s purpose statement was: The purpose of this qualitative
exploratory single case study was to examine how online undergraduate full-time faculty
perceived the influence of their instructional feedback practices on their reflective
thinking, and hence, their instructional strategy at a southwestern higher education
institution. The data sources were a faculty questionnaire,
classroom feedback, and a transcript from the focus group session. The data were
analyzed through coding, thematic analysis, and triangulation.
The purpose of this qualitative exploratory single case study was to examine how
online undergraduate full-time faculty perceived the influence of their instructional
feedback practices on their reflective thinking, and hence, their instructional strategy at a
southwestern higher education institution. Prior studies have
focused on instructor presence and student preferences (Tunks, 2012), but this study
adds the dimension of instructor reflection as a lens to how their own feedback informs
instruction. Reflection allows instructors to analyze their own learning process, applying
what they learn to future practice. As feedback
methods evolve and online enrollment expands in higher education (Pattison, 2017), newer
technologies provide options for video feedback; the kinds of feedback methods used
could also play a role in faculty instruction, and therefore carry classroom instruction
implications.
This study focused on the seminal theory of Vygotsky’s (1978) Zone of Proximal
Development, in which instructors scaffold learning to move
students to this zone where independent learning takes place. To operationalize this
theory, the participants’ classroom feedback was analyzed within the framework of a
supporting theory, based on Hattie and Timperley’s (2007) four levels of feedback. This
framework allows feedback to be examined within the context of its contribution to self-
regulation. If instructor feedback serves to move
students to this zone, then how instructors reflect on their feedback and how their feedback
methods influence instruction are valid explorations for higher educational institutions.
RQ1: How do online undergraduate full-time faculty perceive the influence of their
instructional feedback practices on their own reflective thinking? As institutions of
higher education expand their online
offerings (Allen & Seaman, 2016; Tunks, 2012), instructor feedback has become
increasingly prominent. As instructors reflect on these practices, this reflection has the
capacity to shape thinking patterns and future actions (Bennett et al., 2016). In this
particular population of online full-time faculty, the face-to-face daily interaction at a
campus facility could result in peer collaboration around these influences. Thus, this study
explored how such reflections and collaborations might shape
instruction.
The case study summary below will discuss the findings and conclusions of the
study’s significant themes as they apply to the study’s research questions. This section
will also discuss how these themes and findings relate to the study’s theories of ZPD and
Hattie and Timperley’s (2007) levels of feedback. In this study, chapters 1, 2, and 3
focused on the study’s theoretical foundations and the background to the problem that
the study will address, along with the proposed methodology. The following section
will illustrate how these findings relate to the propositions expressed in these chapters
and the significance of the findings and their capacity to advance knowledge in the areas
of instructor feedback.
This section will detail the significant and non-significant findings of the study
as they relate to the undergirding theories and how the gaps are addressed. This study
was about how online full-time undergraduate faculty feedback reflections influence
their teaching practices. In addition, the study explored two different feedback methods,
as participants used either text or video feedback. The conclusions of how these methods
intersected with faculty reflections and to what extent they influence their practices will
be discussed below. The first research question asked: How do online undergraduate
full-time faculty perceive the influence of
their instructional feedback practices on their own reflective thinking? To address this
question, three data sources were analyzed. Faculty participants were instructed to
complete a questionnaire during the course of their daily feedback while reviewing four
different papers. The first four questions were aligned to this first research question. In
addition, an extensive review of paper feedback was conducted to examine
grading trends, to compare the questionnaire responses against the
actual instructor feedback, and to characterize the feedback itself in relation to
faculty reflections and their influence on teaching. Focus group discussions
served to confirm or challenge the data presented that aligned to this research question.
The themes from the questionnaire, papers, and focus groups will follow.
There were shared themes from both the first questionnaire response that showed
paper concerns and from the paper feedback review. These themes were Development,
Thesis Statements, Formatting, Mechanics, Third Person, and Word Count; an
additional theme that came from the paper review was Critical Thinking. These themes
support the study’s foundations by showing greater insights on faculty reflection, how
faculty social presence is promoted, and how reflection varies and leads to problem
solving.
Development was the dominant theme in both the paper concerns and in the
questionnaire responses. For the purposes of this study, development refers to the
paper’s general content, how a student might support their claims through evidence from
sources, examples, reasons, or details. The thesis statement theme also offers instructors
more opportunities for formative remarks. Formative feedback has greater potential to
lead to student learning whereas feedback on the other more objective themes would
only give students a “quick fix.” By contrast, comments in the development or thesis
statement category provide more opportunities for students to learn the concept in a
variety of ways, that is, a feedback comment related to development or the thesis
statement does not correct a wrong, but rather, provides scaffolding for students since
there is not just one way of addressing these subjective issues. Therefore, instructors
focused more on instruction and guidance in their feedback comments. This
scaffolding is consistent with what Vygotsky (1978) proposed in the ZPD theory.
While attention to higher order paper issues like development and the thesis
statement dominated, participants did not neglect
the other paper issues. When participants were considering paper issues, the themes of
formatting, mechanics, third person, and word count were included, but with less
attention than the higher order theme of development. These remaining themes have
a more objective, corrective quality: when a student omits
an in-text citation or misuses a capital letter, the feedback only moves the student from
incorrect to correct. Thus, less learning is taking place according to Vygotsky’s (1978)
theory, and such comments offer fewer opportunities for self-
regulation.
Yet theorists and prior studies are mixed on feedback’s outcomes based on these
themes. Feedback largely has positive outcomes for students (Abaci, 2014; Mirzaee &
Hasrati, 2014; Sims, 2016), but instructors must be mindful of the individual’s level in
order for students to move successfully into the ZPD. Per Li and Li (2012), instructors must be
mindful to give feedback that is at the right level; for some students, feedback that is
too complex is less meaningful. Hattie and Timperley’s (2007) theory also supports the
more powerful effect of feedback aimed at processes over feedback limited to surface task
information. For some students, feedback that addresses surface level errors can improve
task confidence and, therefore, student efficacy, and lower performers reported
benefits from such feedback (Goodridge, 2017). However, this contrasts with Ellis and
Loughland (2017), whose work cautions against feedback that focuses narrowly
on how a paper issue might be fixed. The foundations of this study focused on
feedback’s formative value and its theories grounded the study. Sadler (1989) states that
feedback is formative when the work is appraised, general, and broader principles are
expressed that can apply to future works. The participant themes, with the most attention
to development, but with the more objective themes not neglected, align with these
principles.
The reflection component of this first research question was addressed through
the alignment between what faculty participants expressed as concerns and their actual
feedback. Reflection
prompts instructors to evaluate the efficacy of problem solving strategies. This kind of
reflection is seen in how instructors view paper issues and shape their instruction. For
example, participants expressed different ways to deal with one theme, showing
reflection taking place, in accordance with this theory. Prior studies, including Wade
(2016) used journals as a way to attempt to capture reflection during feedback. This
study extends those findings to show outcomes of reflective practices for online
educators.
The theory of social presence plays a role in this study, and its tenets concern
the degree to which a person is perceived as real in a mediated
environment (Gunawardena & Zittle, 1997). It is possible that the instructors’ chosen
methods can enhance social presence. In this study, participants who used video
feedback bridged what Borup et al. (2015) refer to as the online environment’s inherent
distance.
One finding demonstrated that faculty who give text feedback may also reflect
on social presence. The theme of critical thinking was seen only in text feedback.
Because these comments were unrelated to a paper issue, these may demonstrate
strategies to improve social presence for those who give text feedback. As noted in this
study, social presence can be fostered even through a text-based
platform (Hostetter & Busch, 2013). Thus, the emergence of the critical thinking theme
may reflect deliberate efforts to build social
presence. A review of classroom papers revealed that these critical thinking comments
appeared in both the
questionnaire papers and the additional papers reviewed as artifacts. Because they were
not specific to a particular paper issue or the student’s success or failure, they were
crafted to promote engagement with the course content.
The theories of reflection grounded this study, and the data that were aligned with
the first research question demonstrated the results of reflection. According to Farrell
(2004), reflection can be described as a loop between the process and the questions.
Reflection happens when one is asked: What am I doing in the classroom? Why am I
doing this? What is the result? Will I change anything based on the previous
information? In the online classroom, the feedback does serve as the instruction, so it is
fitting that these kinds of questions were directly posed to the faculty as they were
providing feedback.
The final themes with significance to this research question include
instructor-prepared resources, class resources, instructor feedback, other instructors, and
student advisors. In addition, the theme of “None” was prominent, meaning that
instructors did not plan to change their current feedback practices. To address these
questions, instructors did reflect on their feedback during grading, so connections were
forged between their grading and classroom practices. A gap addressed in this study was
the need to focus on feedback from the instructor’s perspective (Borup et al., 2015). This study’s
particular population was online full-time undergraduate faculty, who are focused on
first-year-sequence courses. Several themes were unique to that population and their
collaborative environment. For example, when participants noted that they receive
influence from either other instructors or student advisors, these factors could be less
prevalent among remote or adjunct instructors, since these online faculty work on campus.
Because colleagues who share a campus readily
influence one another, this element is missing in many asynchronous online programs;
therefore, these themes have significance as they are directly applicable to this
population.
RQ 2. What methods of feedback do undergraduate full-time faculty use at a
southwestern higher education institution? The data that
address this question will be discussed in this section. Questions 5-8 on the
questionnaire, paper feedback analysis, and the focus group transcript served as the data
sources to support this research question. Faculty participants either used text or video,
which addressed the fifth question. The sixth and seventh questions shared themes of
time and efficiency, what is best for students, and traditions and norms. The paper
feedback analysis themes were a priori, based on Hattie and Timperley’s (2007)
feedback model.
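The a priori tally described above can be pictured as a simple frequency count of coded comments per delivery method. The sketch below is illustrative only: the comment data and level labels are hypothetical, not the study’s actual coded instrument.

```python
from collections import Counter

# Hypothetical coded comments as (method, level) pairs; FT = task,
# FP = process, FR = self-regulation (Hattie & Timperley, 2007)
coded_comments = [
    ("text", "FR"), ("text", "FP"), ("text", "FR"),
    ("video", "FP"), ("video", "FR"), ("video", "FP"),
]

def level_proportions(comments, method):
    """Return the share of each feedback level within one delivery method."""
    levels = [level for m, level in comments if m == method]
    counts = Counter(levels)
    return {level: n / len(levels) for level, n in counts.items()}

# e.g. for "text", FR accounts for two of three coded comments
text_shares = level_proportions(coded_comments, "text")
```

A tally of this kind would let the concentration of FT, FP, and FR remarks be compared across text and video participants, as in the discussion above.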
Faculty participants reported that time and efficiency was the strongest determiner
of their feedback method selection. Yet, this theme is inconclusive, since it came
from faculty who provided feedback through text and through video. Prior studies show
conflicting evidence on the efficiencies of video or text feedback. Video can be faster
and therefore less fatiguing for instructors (Ali, 2016; Atwater et al., 2017; Sims, 2016).
The problem that grounded this study focused on how online faculty in higher education
do strive to meet the demands of increasing enrollment (Borup et al., 2015; Planar &
Moya, 2016; Wright, 2014) so understanding which method is more efficient would lend
insight into that problem. But fewer online faculty have expressed interest in new
technology (Harrison et al., 2017), and in this study, most participants adhered to text
feedback, citing familiarity as their rationale. These findings could point to the need for
institutional support when new feedback technologies are introduced.
As the second most prominent theme, faculty participants responded that they
selected their feedback methods based on what is best for students. The evidence in this
study showed mixed findings on which method is best for students. Video feedback
increases personalization between the student and the instructor (Atwater et al., 2017;
Frisby et al., 2013; Ellis & Loughland, 2017; Wade, 2016) and is viewed as supportive,
with the potential to reduce
misunderstandings (Atwater et al., 2017). Nevertheless, other studies show that students
do prefer text feedback over video (Borup et al., 2015; McCarthy, 2015). Video and text
are merely methods of feedback; feedback’s positive outcomes can occur through either
practice. Thus, if feedback is formative, personal, and addresses student gaps, and if
instructors find their preferred method leads them to these outcomes, then the
choice of method itself may be secondary. A third
theme was identified as tradition or the norm of the institution. This means that faculty
are giving feedback based on what is expected and on what is comfortable for them.
This theme is significant in terms of how instructors may view the institutional support
for their feedback practices.
The a priori themes from the paper analysis demonstrated how faculty
feedback aligned with the theories that
grounded this study. To examine formative value, instructor remarks were analyzed
according to Hattie and Timperley’s (2007) four levels of feedback, a model designed to
demonstrate feedback’s effectiveness. Faculty remarks were at the task level (FT),
process level (FP), or self-regulation level (FR). In addition, there is a self-praise level
(FS), but this study focused on the first three, as explained in earlier chapters. Feedback
at the self-regulation level creates the capacity to increase confidence and to prompt
internal assessment for learners. According to Hattie and Timperley (2007), more
effective learners possess stronger FR strategies, whereas less effective learners seek out
feedback at the surface task level. This theory aligns with the Vygotskian movement
toward independent learning. For both groups, content remarks appeared at every level.
This means that faculty appraised student content for its effectiveness. This is interesting
because paper issues related to development are generally higher order skills; however,
some content comments at the task level served as simple
reinforcement. For example, an instructor might glance over a quote in a paragraph and
quickly let the student know they did a “good job” including their quote. Of the three
levels analyzed, both text and video participants used the FT level the least. This lesser
use of FT suggests that participants moved beyond surface task remarks.
Data on both text and video feedback shows a greater concentration of FP and
FR remarks, though the methods differed in their
concentration of formatting comments; for text feedback, most formatting comments are
addressed at FP, and for video feedback, formatting is addressed more often in FR. Thus,
text feedback participants may have posted a prepared example or link to show students
how to address a formatting issue, whereas the video medium may have allowed
instructors to make broader, more general
statements about an issue. FR comments are designed to let students generalize the remarks to
solve the problem on their own. Because video feedback is more personal (Atwater et
al., 2017), giving feedback at this level may be more effective through video rather than
through text.
Feedback at the process level focuses on “how” something should be done. Prior
research has noted the disproportionate attention to this level and that feedback cannot
be formative when limited to the process level (Ellis & Loughland, 2017). For video
feedback, comments in mechanics were highest in this area, whereas for those who gave
text feedback, the remarks about development remained the highest. The visual video
format allows a mini lesson on corrections in mechanics, making quick work of showing
how to address a mechanical error. Since feedback in FP is not the highest for either set
of participants, neither group limited its feedback to process-level remarks.
The last a priori theme is FR, which suggests that feedback is focused on self-
regulation (Hattie & Timperley, 2007). Sadler (1989) notes that successful feedback
must identify broader principles that can apply to later works for its formative value to
be operationalized. With the concentration of most remarks in this level for both text and
video feedback, the findings show this emphasis on self-regulation and therefore,
formative value. This data is also consistent with the findings of Nicol and Macfarlane-
Dick (2006) on feedback that develops self-regulation. Because
both text and video feedback show the highest concentration at this level, it is not
conclusive which method is more conducive to FR, but the significance of the findings
lies in both methods’ emphasis on self-regulation.
RQ 3. How do online undergraduate full-time faculty perceive the influence of
reflective thinking on their current and future instructional approaches? The last research
question is an amalgamation of how the faculty feedback methods might influence their
instruction, and the data that support this question were the last four responses from the
questionnaire and the focus group transcripts. For this discussion, the responses with the
most frequency will be synthesized for a sharper discussion of the data’s magnitude,
consistent with thematic coding (Clarke & Braun, 2013). Most faculty participants stated
that they would not change their feedback methods, and that they felt that the course
discussion questions were the greatest factors of influence towards instruction. Faculty
participants also believed that student need was the primary driving force of their
feedback practices. This was consistent among those
who would not change their methods and who base those choices on what is
best for students. This theme rose above the challenge of managing paper volume,
showing that instructors first want to apply what they believe is best for students.
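The frequency synthesis described above, ranking the most common responses during thematic coding, can be sketched as a simple count. The response list below is hypothetical and only mirrors the study’s reported theme labels; the counts are illustrative, not the study’s data.

```python
from collections import Counter

# Hypothetical coded questionnaire responses; labels echo the study's
# reported themes, but the counts here are invented for illustration
responses = [
    "Instructor-added resources", "Co-workers", "Instructor-added resources",
    "Tech Tools", "Instructor-added resources", "Does Not Apply",
]

# Rank themes by frequency so the most common response surfaces first,
# a simple synthesis step compatible with thematic coding
theme_frequency = Counter(responses).most_common()
```

With such a ranking, the first element of `theme_frequency` holds the most frequent theme, allowing the discussion to focus on the responses with the greatest magnitude.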
As Borup et al. (2015) expressed a gap in the literature about how feedback
should be examined from the instructor’s perspective and through reflection as the
catalyst, this study served that purpose. The problems presented in the study’s proposal
addressed the need for instructor reflection and whether instructor feedback methods influence
instruction. The study was grounded in
Vygotsky’s learning theories and theories associated with feedback, and the data findings
supported these foundations.
Implications
The topic of this study was the perceived influence of feedback practices on
teacher reflection and instructional strategy. Although online faculty in
higher education often work remotely, at the host institution faculty meet and work at a
campus facility, collaborating and sharing strategies. For this study, questionnaire
responses, classroom feedback, and focus group discussions were reviewed. The findings
revealed the participants’ perceptions of their
own feedback, while the classroom data served as a critical artifact to compare against
those perceptions. Next, the focus group confirmed or questioned the findings,
completing the triangulation process. Theoretical implications of the study include the
extension of the Zone of Proximal Development to higher education, per Vygotsky
(1978), and how Hattie and Timperley’s (2007) framework was confirmed as an
effective measure of feedback’s formative value. This study departs from prior attention to
student preferences by putting the focus on faculty and how they reflect on their own
feedback, which, in the online classroom, serves as instruction. Ryan and Ryan (2013)
describe reflection as an intellectual stance that moves a daily practice to a state where
change can take place in a broader context. In this study, feedback serves as a daily
practice for this online population and the mindful step of reflecting on their feedback
could result in practices that could lead to student improvement. For example, through
faculty reflecting on the kind of feedback they provided, some were surprised to see
gaps or areas of emphasis. The faculty participants demonstrated that when their
reflection is actualized, they have a greater awareness of their practices, which could
lead to improved instruction.
Understanding social presence is critical due to what Moore (2012) refers to as the
transactional distance of online education. Given the empirical research that supports the
connection between feedback and social presence (Frisby et al., 2013), the findings from
this study could be interpreted to reflect differences in how instructors approach social
presence. In other words, social presence could best be promoted according to the
method that is most comfortable for the instructor. This was demonstrated through the
theme of critical thinking. Even with this theme not evident among video feedback
participants, those who gave video feedback had much higher word counts in their
feedback. This is consistent with Sims (2016), who reported that, consistent with
Media Naturalness Theory, feedback with greater complexity increases the social
presence of the instructor; in this study, those who gave video feedback had higher
word counts than those who gave text. However, those who gave text comments had a
tool box of prepared comments that extended beyond actual paper issues, which allowed
instructors to share intellectual dialogue through their feedback. McGuire (2016) stated
that feedback can be the most significant contributor to developing social presence, and
numerous studies and theories support feedback’s potential for learning. The learning theory for
this study was Vygotsky’s (1978) Zone of Proximal Development. Although Vygotsky
developed this theory in accordance with children’s developmental levels, the ZPD is
receiving more attention in studies of higher education (Armstrong, 2015; McNiff &
Aicher, 2017; Roberts, 2016). Because the findings demonstrated that instructors
concentrated their remarks at the self-regulation level, it
seems that these instructors are successful in scaffolding and moving students to this
zone. The consistency between the instructors’ perceptions of paper issues and their
actual feedback suggests that the feedback is
formative. Nicol and Macfarlane-Dick (2006) define high quality feedback as the kind
that helps students self-regulate. These findings must also be interpreted
within the broader context of feedback’s effectiveness. Li and Li (2012) caution that
when instructors aim to go too far beyond the student’s proximal zone, feedback is less
meaningful. Feedback is only effective if the gap is also clearly defined (Paulson Gjerde
et al., 2017), which means that self-regulation comments must accompany feedback at
the task level, where the gap is defined. Other feedback theories posit that students find
success when feedback is given at the task and process levels, because these corrections
can also serve to increase task confidence (Hattie & Timperley, 2007). For either text or
video feedback, the concentration of self-regulation comments suggests attention to
this gap. Yet there was also attention to surface level errors, demonstrating attention to
task confidence. Reflection on these patterns could
result in improved practices where there are gaps, or sustained practices that are
successful. The implications from this study will benefit other institutions of similar
models. Any institution offering online courses must be concerned with the quality and
efficiency of its delivery (Planar & Moya, 2016). The practical implications will be
especially relevant as
instructors may be burdened with heavier volumes of papers (Planar & Moya, 2016).
At the host institution, the full-time faculty typically teach four courses at one time. A
typical course volume is between 20 and 40 students. Tier instructors carry loads
between 100 and 120 students at one time. This population, therefore, represents a group
with a need for efficiency in grading without sacrificing quality in their attention to
students. In addition, because this population does not work remotely, they collaborate,
share strategies, and foster a
culture of innovation; several instructors use various methods to improve teaching and to
promote social presence. Instructors employ personalization through programs for video
discussion, use video conferencing, and use an application called Loom for video
feedback.
Two of the twelve participants used video feedback and those two participants
stated that they would continue with this method based on its efficiencies, quality of
feedback and its personalization, while other participants stated they would try video
feedback with institutional support or if it were required. The implication is that faculty
may try new methods with institutional support, which is consistent with Harrison et
al.’s (2017) findings that online faculty may be hesitant to learn new technologies.
Another practical implication that emerged from these findings is the importance
of reflection. This study’s data showed that when faculty are tuned in to the kind of
feedback they are giving, they give attention to specific practices in their classrooms.
Instructors often operate in isolation, unaware of the kind of feedback their peers give. The
participants in this study were able to see their own feedback analyzed and categorized
and expressed surprise at some findings. For some, a visual analysis of their feedback
development could include faculty reviewing their own or a peer’s feedback in the
Hattie and Timperley (2007) levels to understand the concentration of their own
feedback.
Faculty who provide video feedback could benefit from transcribing their
feedback and self- or peer-reviewing the content to ensure that all areas of student papers
are addressed. When feedback is unscripted, the natural nuances increase social
presence, but the disadvantage is that instructors tend to include superfluous
narrative, and students may miss the instructor’s key points. With the critical thinking
component missing from the video feedback, instructors giving feedback through this
method might benefit from departing from student paper issues to discuss the content, as
the instructors delivering text feedback do. Thus, even with all its advantages,
instructors who give video feedback could evaluate their own feedback for a greater
Future implications. This study explored how faculty feedback
practices might inform teaching and whether the delivery methods contributed to that process. There are future
implications for this study, as the area of online feedback has numerous possibilities.
The feedback theories of Hattie and Timperley (2007) and the learning theories of
other elements of feedback. In this study, the participants’ feedback was reviewed
according to their method of text or video, but it would be useful to examine how the
same participant would give feedback in text or in video format. Since faculty
participants stated that their methods were best, it would have been worth exploring
specifics of what makes their methods more efficient, including timing their video
Strengths and weaknesses of the study. This study explored the connection
between faculty’s feedback reflection and their classroom practices. As with any study,
there were strengths and weaknesses, which are presented here. One strength of the
study was that a particular population was examined. Although several institutions offer
courses in online modalities (Caruth & Caruth, 2013), this unique model of online full-
time faculty drew robust data, owing to their consistent course loads of up to 150
students. In addition, faculty reflections could be influenced by peers, given their
face-to-face interactions. Not only did this population address the gap in the literature, but
the model of full-time online faculty could be replicated among higher education
institutions. Thus, learning more about feedback in this population was meaningful.
A second strength was using Hattie and Timperley’s (2007) levels as a way to
operationalize instructor scaffolding. When instructors could see their own feedback and
its content in certain levels, they had a clear understanding of the kind of feedback they
give and how it was focused at the process and regulation levels. The cross-analysis gave
a broad picture of feedback and this same method would be useful for future
A fourth strength is that this study focused on faculty thoughts rather than
students’, thus filling a second gap in the literature and shifting the paradigm (Borup et al.,
2015). Measuring student preferences can be challenging, because what they prefer may
not align with theory and preferences can change with technology. Research findings
may not keep pace with advances in technology, so any student preferences cited in
research could possibly change. As an example, Ali (2016) noted that students found
retrieving video feedback challenging, but advances are making this easier. Therefore, a
researcher must examine these student preferences for text over video feedback with
these rapid technological advances in mind. On the other hand, viewing instructors’
thoughts about their feedback reaches deep into andragogy, and examining this mindful
A final strength of this study is its rigor. A case study can be viewed as weaker,
due to its limited scope (Yin, 2014). However, several components of this study
increased the rigor. First, the faculty selected teach high volumes of courses. Five of the
twelve participants are tier instructors, facilitating six to eight classes at a time. Many
institutions rely on adjunct instructors, whose teaching loads would be less significant.
Thus, as faculty responded to the questionnaire, their volume and experience yielded
more meaning. A second component of rigor was built into the questionnaire, which had
twelve questions. Each of the twelve questions was built around four different papers.
This means that faculty had the opportunity to respond to four different papers, which
As with any study, there were weaknesses found. First, the study was conducted
at one higher education institution, so its findings may not be reflective of the results
that would surface at other institutions. The unique culture of the full-time online faculty
may not be representative of the operations of other campuses. A second weakness was
that the researcher works among the full-time online faculty, serving as a faculty chair.
The relationship the researcher has with the faculty could potentially have biased their
responses. Understanding this, the researcher ensured that the dissertation committee
served as an oversight and that the data collection and analysis were rigorous enough to
To conclude, the careful systematic process and guidance from the dissertation
committee resulted in a valid and reliable study. The weaknesses were inherent in
studies where the researcher works among the participants, and the awareness and
attention to this resulted in strengths that outweighed the weaknesses, such as the rigor
of the analysis. All strengths and weaknesses should be considered for future studies.
Recommendations
This study was about the perceived influence of feedback practices on teacher
reflection and, subsequently, the influence of reflection on instructional strategy. There
were significant findings expressed through disaggregated data that showed themes and
framework. With the foundation of Vygotsky’s (1978) ZPD learning theory, data
showed connections between faculty’s feedback and their teaching. The study’s findings
In this study, the data reflected text and video feedback for the purposes of
exploring how these differences might impact faculty reflection. Future studies could
focus on instructors using both forms of feedback to compare the differences in student
preferences or how feedback addresses Hattie and Timperley’s (2007) levels according
to video or text. If the goal of a future study were to explore which method of feedback is
more effective, examining instructors who use both would accomplish this, without the mediating factors of instructor
personality.
Hattie and Timperley’s (2007) levels of feedback served as the framework of this
study. These levels are a component of a larger framework, which is Hattie and
Timperley’s (2007) stages of feedback. This theory posits that effective feedback must
include goal orientation, acknowledgement of the student’s gap, and future guidance
(Hattie & Timperley, 2007). A similar framework and analysis could be established to
determine if faculty feedback addresses all these elements and if these elements are more
the text and video feedback. The participants expressed surprise at the findings that
some themes were more dominant according to the delivery method. For example, the
theme of critical thinking was not present among the video participants. A study that
explores this or other themes and their dominance according to the delivery method
use video in other areas of the class and if those videos influence their text feedback.
Such a study could more deeply probe faculty practices to determine the extent of how
an instructor video in one area of the class, such as in the discussion forum, could
influence how their paper feedback is perceived. Because social presence results from
that if an instructor posts a video in one area of the class, that their social capital has
levels of feedback. The participants viewed the data and noted that when giving video
study that focuses specifically on the differences between video and text feedback with
respect to surface levels would yield interesting findings. While some research confirms
the effectiveness of identifying surface level feedback (Li & Li, 2012), feedback that
focuses on broader issues is considered more formative (Mirzaee & Hasrati, 2014).
At the institution in this study, feedback is presented to students through a central comment box and on
attached essays. Due to the analytical framework selected, this study was limited to
sidebar feedback, given through Microsoft Word comments on student papers. But it is
possible that students do not open their papers and may only view the feedback
displayed next to their grade. Regardless of the quality of feedback or its modality, if
students do not review their paper feedback, these merits are lost. Most learning
management systems offer these two display options of feedback and it would be
valuable to understand the student perspective and to limit a study to comments in this
Summary
This study explored how online faculty feedback reflection practices influence their instruction, as well as their methods of feedback. The
qualitative method was used and a case study was selected as the design. The researcher
used the thematic analysis approach to review, examine, and synthesize the data’s
findings. Three data sources were faculty questionnaire responses, actual feedback from
classes, and transcripts from focus group sessions. The theory that undergirded this
study was Vygotsky’s (1978) Zone of Proximal Development, together with Hattie and Timperley’s (2007) levels of feedback.
The data showed that indeed, faculty do reflect on their own feedback and those
reflections contribute to their classroom practices. This was demonstrated through the
variety of themes that resulted. There was consistency between the concerns instructors
expressed and how these concerns led them to continually produce their own materials,
consider student needs, and refine their own feedback, considering it a part of their
instruction. Another major outcome is that whether instructors use video or text
feedback, both believe they are doing what is efficient and what works best for their
students. These findings have value for future related studies or replications.
References
Abaci, S. (2014). Direct and indirect effects of feedback, feedback orientation, and goal
(1650593276).
https://doi.org/10.5539/elt.v9n8p106
Allen, I. E., & Seaman, J. (2016). Online report card: Tracking online education in the
United States.
288-296. https://doi.org/10.5539/jel.v5n3p288
Arasaratnam-Smith, L. A., & Northcote, M. (2017). Community in online higher
education: Challenges and opportunities.
Atwater, C., Borup, J., Baker, R., & West, R. E. (2017). Student perceptions of video
doi:10.1123/smej.2016-0002
Bain, J. D., Ballantyne, R., Mills, C., & Lester, N. C. (2002). Reflecting on practice:
Bandura, A. (1977). Social learning theory. New York: General Learning Press.
Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and
http://www.nova.edu/ssss/QR/QR13-4/baxter.pdf
Bennett, D., Power, A., Thomson, C., Mason, B., & Bartleet, B. (2016). Reflection for
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the
black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1),
9-21. https://doi.org/10.1177/003172170408600105
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through
https://doi.org/10.1177/003172171009200119
Tyler (Ed.), Educational evaluation: new roles, new means: the 68th yearbook of
the National Society for the Study of Education (part II) 68(2), 26-50. Chicago,
Bond, J. (2011). Thinking on your feet: Principals’ reflection-in-action.
https://doi.org/10.3928/00220124-20110715-02
Borup, J., West, R. E., & Thomas, R. (2015). The impact of text versus video
015-9367-8
https://doi.org/10.1080/23796529.2011.11674683
Byrne, D., & Ragin, C. (2009) The Sage handbook of case-based methods.
https://doi.org/10.4135/9781446249413
Camacho Rico, D. Z., Durán Becerra, L., Albarracin Trujillo, J. C., Arciniegas Vera, M.
V., Martínez Cáceres, M., & Cote Parra, G. E. (2012). How can a process of
Caruth, G., & Caruth, D. (2013). The impact of distance education on higher education:
A case study of the United States. Turkish Online Journal of Distance
Education, 14(4), 121-131.
Charteris, J. (2015). Learner agency and assessment for learning in a regional New
https://doi.org/10.14742/ajet.635
Clarke, V., & Braun, V. (2013). Teaching thematic analysis. Psychologist, 26(2), 120-
DeCosta, M., Bergquist, E., Holbeck, R., & Greenberger, S. (2015). A desire for growth:
Deming, D. J., Goldin, C., Katz, L. F., & Yuchtman, N. (2015). Can online learning
bend the higher education cost curve? American Economic Review, 105(5), 496.
doi:10.1257/aer.p20151024
Denton, P., & Rowe, P. (2015). Using statement banks to return online feedback:
doi:10.1080/02602938.2014.970124
Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE handbook of qualitative research.
Dimova, Y., & Kamarska, K. (2015). Rediscovering John Dewey’s model of learning
doi:10.1080/02643944.2015.1035317
https://doi.org/10.1287/mnsc.32.5.554
http://dx.doi.org/10.1037/13496-005
Ellis, N. J., & Loughland, T. (2017). 'Where to next?' Examining feedback received by
doi:10.1111/j.1473-6861.2006.00129.x.
Falender, C. A., Shafranske, E. P., & Falicov, C. J. (2014). Reflective practice: Culture
012
teachers.
Farrell, T. S., & Jacobs, G. M. (2016). Practicing what we preach: Teacher reflection
Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis:
Forte, G. J., Schwandt, D., Swayze, S., Butler, J., & Ashcraft, M. (2016). Distance
Frisby, B. N., Limperos, A. M., Record, R. A., Downs, E., & Kercsmar, S. E. (2013).
Ganapathy, M. (2016). Qualitative data analysis: Making it easy for nurse researcher.
Gasparič, R. P., & Pečar, M. (2016). Analysis of an asynchronous online discussion as a
supportive model for peer collaboration and reflection in teacher education.
Gardner, F. (2001). Social work students and self-awareness: How does it happen?
Gargano, T., & Throop, J. (2017). Logging on: Using online learning to support the
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based
Geitz, G., Brinke, D. J., & Kirschner, P. A. (2015). Goal orientation, deep learning, and
Gillett-Swan, J. (2017). The challenges of online learning supporting and engaging the
https://doi.org/10.5204/jld.v9i3.293
Golafshani, N. (2003). Understanding reliability and validity in qualitative research.
Gredler, J. J. (2016). Postsecondary online students' preferences for instructor
Greenwood, M., Kendrick, T., Davies, H., & Gill, F. J. (2017). Hearing voices:
Comparing two methods for analysis of focus group data. Applied Nursing
Hart, C. (1998). Doing a literature review: Releasing the social science research
Harvey, M., Coulson, D., & McMaugh, A. (2016). Towards a theory of the ecology of
Harrison, R., Hutt, I., Thomas-Varcoe, C., Motteram, G., Else, K., Rawlings, B., &
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of educational
Hostetter, C., & Busch, M. (2013). Community matters: Social presence and learning
Huang, X., Chandra, A., DePaolo, C. A., & Simmons, L. L. (2016). Understanding
https://doi.org/10.1111/bjet.12263
Jaeger, E. L. (2013). Teacher reflection: Supports, barriers, and results.
Janssen, F., de Hullu, E., & Tigelaar, D. (2009). Using a domain-specific model to
https://doi.org/10.1080/01626620.2009.10463520
Jing, M. (2017). Using formative assessment to facilitate learner self-regulation: A case
study of assessment practices and student perceptions in Hong Kong.
Juvova, A., Chudy, S., Neumeister, P., Plischke, J., & Kvintova, J. (2015). Reflection of
Kastberg, S. E., Lischka, A. E., & Hillman, S. L. (2016). Exploring prospective teachers'
Kilburn, A., Kilburn, B., & Hammond, K. (2016). Capturing the quality of online higher
Association, 72-73.
Lawanto, O., Santoso, H. B., Lawanto, K. N., & Goodridge, W. (2017). Self-regulated
learning skills and online activities between higher and lower performers on a
understanding of the process of reflection and its role in the construction of the
self. Adult Education Quarterly: A Journal of Research and Theory, 59(4), 279-
297. https://doi.org/10.1177/0741713609331478
Li, S., & Li, P. (2012). Individual differences in written corrective feedback: A multi-
https://doi.org/10.5539/elt.v5n11p38
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
(Order No. 10158984). Available from ProQuest Dissertations & Theses Global.
(1824361664).
McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education
McGuire, B. (2016). Integrating the intangibles into asynchronous online instruction:
Strategies for improving interaction and social presence.
McNiff, J., & Aicher, T. J. (2017). Understanding the challenges and opportunities
https://doi.org/10.1123/smej.2016-0007
Martin, G. A., & Double, J. M. (1998). Developing higher education teaching skills
https://doi.org/10.1080/1355800980350210
Martinez, J. M., & Barnhill, C. R. (2017). Enhancing the Student Experience in Online
Mehta, R., Makani-Lim, B., Rajan, M. N., & Easter, M. K. (2017). Creating online
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and
Mirzaee, A., & Hasrati, M. (2014). The role of written formative feedback in inducing
Moore, M. G. (2012). The theory of transactional distance (pp. 1-25). University Park,
PA. https://doi.org/10.4324/9780203803738.ch5
Musolino, G. M., & Mostroni, E. (2005). Reflection and the scholarship of teaching,
Nash, J. A. (2015). Future of online education in crisis: A call to action. Turkish Online
The Belmont Report. (1978). The National Commission for the Protection of Human
http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html
Nguyen, Q. D., Fernandez, N., Karsenti, T., & Charlin, B. (2014). What is reflection? A
https://doi.org/10.1111/medu.12583
learning: a model and seven principles of good feedback practice. Studies in Higher
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher
exploring the theory of formative assessment and the notion of "Closing the
https://doi.org/10.7571/esjkyoiku.10.79
(Order No. 10259040). Available from ProQuest Dissertations & Theses Global.
(1874562951).
Paulson Gjerde, K., Padgett, M. Y., & Skinner, D. (2017). The impact of process vs.
Pawan, F. (2017). Reflective teaching online. TechTrends, 47(4), 30-34.
Peacock, S., & Cowan, J. (2016). From presences to linked influences within
Pearcy, M. (2014). Student, teacher, professor: Three perspectives on online education.
Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and
formative feedback provided by instructor in an online setting: Some unresolved
issues.
Portolese Dias, L., & Trumpy, R. (2014). Online instructor's use of audio feedback to
11(2), https://doi.org/10.9743/jeo.2014.2.5
https://doi.org/10.1080/14703290903525911
https://doi.org/10.1002/bs.3830280103
Renner, J. (2017). Engaging TBR faculty in online research communities and emerging
https://doi.org/10.4018/978-1-5225-2548-6.ch004
Rice, R. E. (1992). Task analyzability, use of new media, and effectiveness: A multi-site
https://doi.org/10.1287/orsc.3.4.475
Roberson, S. (2017). Learning for maximum impact: Four critical but overlooked
Roberts, P. (2016). Reflection: A renewed and practical focus for an existing problem in
https://doi.org/10.14221/ajte.2016v41n7.2
Rodgers, C. (2002). Defining reflection: Another look at John Dewey and reflective
thinking.
Rogers, R. R. (2001). Reflection in higher education: A concept analysis.
Rolfe, G. (2014). Rethinking reflective education: What would Dewey have done?
https://doi.org/10.1016/j.nedt.2014.03.006
doi:10.1080/13562517.2012.694104
Ryan, M., & Ryan, M. (2010). The 4Rs model of reflective thinking. QUT Draw Project,
http://citewrite.qut.edu.au/write/4Rs-for-students-page1- v1.5.pdf.
Sadler, D. R. (1989). Formative assessment: Revisiting the territory. Assessment in
Education, 5(1), 77-84.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex
doi:10.1080/02602930903541015
Sardareh, S. A. (2016). Formative feedback in a Malaysian primary school ESL context.
Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New
Schieffer, L. (2016). The benefits and barriers of virtual collaboration among online
Sims, J. M. (2016). Instructors' perspectives of giving audio and video feedback: Can
you hear me now? (Order No. 10163637). Available from ProQuest Dissertations
Smit, J., van Eerde, H., & Bakker, A. (2013). A conceptualisation of whole-class
https://doi.org/10.1002/berj.3007
Smits, A., & Voogt, J. (2017). Elements of satisfactory online asynchronous teacher
https://doi.org/10.9743/jeo.2012.2.1
van Kol, S., & Rietz, C. (2016). Effects of web-based feedback on students'
Vanslambrouck, S., Chang, Z., Tondeur, J., Phillipsen, B., & Lombaerts, K. (2016).
Vaughn, P., & Turner, C. (2016). Decoding via Coding: Analyzing Qualitative Text
Verpoorten, D., Westera, W., & Specht, M. (2012). Using reflection triggers while
1030-1040. doi:10.1111/j.1467-8535.2011.01257.x
https://doi.org/10.2307/1421493
Wade, N. N. (2016). The face of feedback: Exploring the use of asynchronous video to
(1790629793).
Wass, R., & Golding, C. (2014). Sharpening a tool for teaching: the zone of proximal
https://doi.org/10.1080/13562517.2014.901958
Webb, A., & Moallem, M. (2016). Feedback and Feed-Forward for Promoting Problem-
Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation
(1496772435).
Winchester, T. M., & Winchester, M. K. (2014). A longitudinal investigation of the
Wlodarsky, R., & Walters, H. (2010). Use of the reflective judgment model as a
Wolsey, T. D. (2008). Efficacy of instructor feedback on written work in an online
program.
Wright, J. M. (2014). Planning to meet the expanding volume of online learners: An
examination of faculty motivation to teach online.
Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks,
ID: 305
Dear Marybeth,
Please be advised that you have been granted site authorization for your research
request “Exploring how online university faculty feedback reflection influences teaching.” This
Upon IRB approval by Grand Canyon University Institutional Review Board, the
1. The principal investigator(s) will only use the data collection method outlined in the
site authorization request.
Please be advised the scope of the research is limited to the above activities. If you
change your methodology or data collection plan, you will need to resubmit a new site
authorization request. Upon completion of the study you must include a summary of the study
results in the IRB study closeout package. Copies of journal publications related to this research
must also be submitted to the Office of Academic Research and IRB as appropriate.
Your next step is to complete your IRB application. See link below. Please contact
DATA RETRIEVAL: If this request includes access to GCU survey or archival data,
once your IRB is complete, please contact CIRT@gcu.edu, so we may facilitate the
retrieval of requested data.
Please upload a copy of this letter into your IRB application package as verification of
GCU site authorization, and please contact me if you have additional questions regarding your
site authorization or coordinating the logistics of your study.
Kind Regards,
Phoenix, AZ 85017
Phone: 602.639.7704
Email: scott.greenberger2@gcu.edu
Appendix B.
Please note that Grand Canyon University Institutional Review Board has taken the following
action on IRBNet:
Project Title: [1163434-1] Exploring How Online University Faculty Feedback Reflection
Influences Teaching Principal Investigator: Mary Beth Nipp, EdD
Action: APPROVED
Effective Date: January 9, 2018
Review Type: Expedited Review
Should you have any questions you may contact Connor Low at connor.low@gcu.edu.
Thank you,
The IRBNet Support Team
Appendix C.
Informed Consent
Introduction
Thank you for agreeing to participate in this study. This form’s purpose is to provide
information that may affect your decision as to whether to participate in this research
and also to record the consent of those who agree to participate in the study. In addition,
this form outlines your right to withdraw from the study. By refusing to sign this form,
This study will explore how online undergraduate full-time faculty instructors’ feedback
reflections influence their teaching and will examine their feedback methods. A greater
understanding of how feedback reflections and teaching practices intersect has practical
Procedures
The purpose of this letter is to request your volunteer participation in a study that will
explore faculty feedback reflections and how they might influence teaching. If you agree
to participate, you will consent to your classroom feedback being analyzed and will be
given a questionnaire that will allow you to share your reflections during the feedback
process. You can expect this process to take between 30 and 45 minutes. You may skip questions on
either the questionnaire or during the focus group. Finally, the focus group will last
approximately 45 to 60 minutes. All will take place in building 71 at the GCU campus.
Although the focus group sessions will be recorded, no personal identification will be
revealed in the study, and the data will be secured during and after the data collection process.
Risks
This study poses no psychological or physical risks of harm. There is no known risk
Benefits
There is no known benefit to participants for participating in the study, focus groups,
Payment to Participants
For this study, there is no monetary cost to participants and no compensation is awarded.
Participant Confidentiality
The results of this study will be kept confidential. Your identity will not be revealed
through the data analysis or collection process. Access to the data will be limited to the
researcher. Refusing to participate will not be held against you by the institution. By
refusing to sign this consent form, you decline participation in the study. At any time
during the study, you may cancel your participation by notifying the researcher.
Any questions about participation in this study should be directed to the names listed
below.
Investigator’s Statement
"I certify that I have explained to the above individual the nature and purpose of the
research study, have answered any questions that have been raised, and have witnessed
the above signature. These elements of informed consent conform to the Assurance given
by Grand Canyon University to the Office for Human Research Protections to protect the
rights of human subjects. I have provided (offered) the participant a copy of this signed
consent document."
Participation Certification
I understand that this agreement states my rights as a participant in this study and that
I have received a copy of this completed consent form. I further understand that if I have
any questions about participating in this study, I may contact the Grand Canyon University
Institutional Review Board (IRB) at (602) 639-7804. By signing this form, I affirm that I am at least
18 years of age and that I have received a copy of this consent form.
_________________________________________Participant’s Name
_________________________________________Participant’s Signature
_________________________________________Date
(602) 639-7332
Pat.Durso@my.gcu.edu
Appendix D.
Instrument Validation
Hello!
Thank you for agreeing to help with the field test for my study, Exploring How
Online University Faculty Feedback Reflection Influences Teaching. You were asked to
help with this because you teach one or more of the following classes: PSY 105, UNV
104, CWV 101, or PHI 105, and because you are experienced in your full-time faculty
capacity.
This study is about the perceived influence of feedback practices on teacher
reflection and, subsequently, the influence of reflection on instructional strategy. For this
source of data, I prepared a questionnaire (see attached) based on Ryan and
Ryan’s (2013) 4Rs Model of reflection. I ask that you do not actually respond to the
questionnaire, as that would provide data, and at this point in my qualitative study I
cannot collect data prior to IRB approval. So, your job is just to give me feedback
about how you think this questionnaire will help accomplish this purpose.
doi:10.1080/13562517.2012.694104
Hi MaryBeth,
This looks like such an intriguing study! I am so excited to see how this all
comes together at the end of your study :) These all seem quite applicable to the scope
of what your research questions are asking. The only comments I had concerned a
couple of areas where the wording might be expanded to be a bit clearer for
participants. If you are providing a specific artifact for the context of the study, or
having faculty answer the questions based on a specific artifact that they have in mind,
perhaps also consider including that in your wording, such as “Considering a specific
piece of student work…” or “In the context of the assignment that has been provided…”
Just a thought: I am sure this was included in your participant instructions, but consider
adding specifics to the questionnaire too, so you don’t get out-of-context responses from
those who did not read the instructions thoroughly. I do hope this helps!
Hi MaryBeth,
R1: It seems like there may be a need for a little more explanation for this one, as this
set of questions is a little confusing. However, the other two questions are clear and
easy to follow.
R2: Great as is!
R3: Great as is!
Hi MaryBeth,
I think you have included a great set of RQs that address the different angles of FTF
feedback and have kept your prompts neutral to prevent leading your participants. I can
see how this questionnaire will provide you with great data to analyze ☺
Appendix E.
Recruitment Email
The following email will be sent to undergraduate full-time online faculty who teach
the courses listed below. I am a doctoral learner at Grand Canyon University working
under the Dissertation Committee chaired by Dr. Pat D’Urso. I am inviting you to
participate in my research study, which is about faculty reflection during the feedback
process. Participating is an opportunity to learn more about your own feedback and
how your methods influence your feedback and teaching. I am inviting full-time faculty
who teach the following: PSY 105, UNV 104, CWV 101, and PHI 105. Your participation
will include completing a questionnaire and taking part in a focus group. If you have
questions, you may contact me at 602-639-7332. Additionally, you may contact the
university CIRT office with any questions or concerns.
Marybeth Nipp
Appendix F.
Faculty Questionnaire
The following questions reflect the proposal stage of the dissertation but are not the
final versions. Please note changes may occur in the final version.
Demographic Information
A. What are the concerns with the paper? (Examples: the assignment is off topic.)
B. How common are these concerns? What conditions might prompt them?
C. What practices might you change, given the concerns? What theories or best
practices might inform these changes?
D. Do you plan to continue with this method? What are other ways you might
deliver feedback at your institution?
Appendix G.
Focus Group Protocol
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Participant: _______________________________________________________
Moderator: _______________________________________________________
Date: _________________________ Time: _____________________________
Location: ________________________________________________________
Consent Form Signed: All participants provided signed consent forms during the
questionnaire process.
Introductory Protocol
To facilitate note-taking, I will be recording our session today. Please note
that only the researchers will have access to the recordings, which will eventually be
destroyed after they are transcribed. Please plan on this session lasting approximately
30 to 60 minutes. During this time, we have several questions related to my research
questions that we would like to cover. If you need me to stop the discussion at any time
or to repeat a question, please don’t hesitate to ask. Because we are recording, please
speak loudly and clearly so we can capture everyone’s comments on tape.
I will endeavor to maintain your confidentiality at all times; nothing in this
discussion will be shared with your supervisor, nor will the information be shared
outside of the study.
I will share information with you about the study and the data that has already
been collected. Then I will ask for your thoughts on the information, but I will refrain
from participating in the discussion. Please feel free to respond to each other and speak
directly to others in the group.
We want to hear from all of you. And I know that many may have similar
experiences or attitudes on this topic, so please share all thoughts and ideas.
I will turn the recorder on now:
I understand some of you know each other, but others may not. Please share your
first name and the subject you primarily teach.
Do you have any questions before we begin?
Thank you for sharing. Each of you has completed an online questionnaire and
has allowed me to review your classroom feedback. The questions all centered on how
you view your classroom feedback, what you reflect on while grading, and how those
reflections influence your teaching. When I reviewed your classroom feedback, I looked
for patterns, and some common themes emerged. Today I will share the themes that
emerged during this data analysis. To ensure confidentiality, no names will be visible,
but you will be able to view the general ideas.
Each of these themes is tied to one of the three research questions used for this
study. As I present each set of themes, please share how you feel about the accuracy of
what I am presenting. In other words, tell me whether you can confirm the findings or
whether the ideas run contrary to your reflections during feedback. Share your thoughts
and talk to each other about your experiences with each topic.
Before we begin, do you have any questions?
RQ1: How do online undergraduate full-time faculty feedback practices influence
teacher reflection?
• Based on the summary presented, are there points you disagree with? If so, what
are they and why?
• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?
• Based on the summary presented, are there points you disagree with? If so, what
are they and why?
• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?
RQ3: How do these feedback practices inform future teaching strategies for online
undergraduate full-time faculty?
• Based on the summary presented, are there points you disagree with? If so, what
are they and why?
• Are there points you agree with? If so, what are they and why?
• Are there points you feel need to be added for greater clarification? If so, what
are they and why?
FINAL QUESTION: Are there any points or ideas you would like to add that we may
have missed?
Post Interview Comments and/or Observations:
Appendix H.
Questionnaire Responses
RQ1 responses by feedback type:

| RQ1   | L1               | L2                  | L3                      | L4                    |
| Text  | Development (14) | High                | Instructor-added (20)   | Instructor-added (39) |
|       | Format (14)      | L (4), H (10)       | Existing resources (15) | Student contact (3)   |
|       | Mechanics (9)    | L (3), M (1), H (5) |                         | Existing R's (18)     |
|       | Word Count (6)   | L (3), M (1), H (2) | Refined FB (17)         | No changes (8)        |
|       | Thesis (5)       | L (3), H (2)        | Co-workers (5)          |                       |
|       | 3rd person (4)   | H (4)               | Past (5)                |                       |
|       |                  |                     | Student contact (1)     |                       |
|       |                  |                     | Existing R's (5)        |                       |
|       |                  |                     | Instructor-added (1)    |                       |
|       |                  |                     | Past R's (1)            |                       |
| Video | Development (3)  | High (3)            | Instructor-added (4)    |                       |
|       | Mechanics (3)    | High (3)            | Existing resources (4)  |                       |
|       | Format (2)       | High (2)            | No changes (1)          |                       |
|       | Thesis (2)       | High (2)            |                         |                       |
|       | 3rd Person (2)   | High (2)            |                         |                       |
RQ2 responses by feedback type:

| RQ2   | L1                       | L2                     | L3                   | L4                           |
| Text  | Text (36)                | Student needs (12)     | Time/Efficiency (24) | Yes (18)                     |
|       |                          | Time/Efficient (24)    | Familiarity (16)     | Yes + may try (16)           |
|       |                          | Familiarity (16)       | Student needs (12)   | Yes + adding other tools (3) |
|       | Assessment-dependent (8) | Co-workers (4)         | Co-workers (4)       | Yes + efficiency (5)         |
| Video | None (4)                 | Feedback (2)           | Student needs (4)    | Feedback (4)                 |
|       | Tech tools (4)           | Existing resources (1) | Instructor-added (4) | Instructor-added (4)         |
|       | Calling students (4)     |                        |                      |                              |
Appendix I.
Reflection prompts: What do I do? (task) What do I need? What will help me? What do you think of me?