
J. EDUCATIONAL COMPUTING RESEARCH, Vol. 45(1) 49-73, 2011

THE IMPACT OF TECHNOLOGY-ENHANCED STUDENT TEACHER SUPERVISION ON STUDENT TEACHER KNOWLEDGE, PERFORMANCE, AND SELF-EFFICACY DURING THE FIELD EXPERIENCE

THEODORE J. KOPCHA
University of Georgia

CHRISTIANNA ALGER
San Diego State University

ABSTRACT

The eSupervision instructional program is a series of five online modules housed in a content management system that support triad members (student teachers, cooperating teachers, university supervisors) during the field experience. The program was designed on a cognitive apprenticeship framework and uses a variety of technology to support both novice and expert activities, including online discussion forums, guided observations, video reflection, and a lesson plan performance support system. Forty-one student teachers (19 eSupervision, 22 non-eSupervision) completed a statewide assessment of knowledge and performance at the conclusion of their student teaching. Seventeen (8 eSupervision, 9 non-eSupervision) also completed a pre- and post-efficacy measure. After accounting for their experiences with the cooperating teacher, the results indicate a non-significant difference in teaching knowledge and performance and a statistically significant interaction in self-efficacy favoring eSupervision students. Implications for teacher education and the practices that improve the supervision of student teachers are discussed.


© 2011, Baywood Publishing Co., Inc.


doi: 10.2190/EC.45.1.c
http://baywood.com

Policy makers and teacher educators alike have advocated for reform in the
preparation of teachers, calling for improvement in the quality of the supervision
experience (Feiman-Nemser, 2001; Paris & Gespass, 2001), its connection to the
university (Darling-Hammond, 2006; Zeichner, 2002), and the integration of
technology into the coursework of student teachers (Hew & Knapczyk, 2007;
Lieberman & Pointer Mace, 2010; Otero, Peressini, Meymaris, Ford, Garvin,
Harlow, et al., 2005). Many researchers are concerned that the methods used
to prepare teachers, which have remained fairly static over the past 70 years,
no longer adequately prepare beginning teachers for the demands of today’s
classrooms (Darling-Hammond, 2006; Hess, 2009). Darling-Hammond (2006)
noted that traditional methods of preparation are insufficient because teachers are
held to higher standards than ever before—they are simultaneously expected to
teach complex skills to a broader and increasingly diverse range of learners,
effectively integrate technology into their lessons, and continue to examine and
improve their own teaching. Some have suggested that such high demands, in
conjunction with outdated methods of preparation, explain the issues of attrition
among teachers within the first few years of their careers (Darling-Hammond,
2010; Hess, 2009).
In response to these calls for reform, the teacher education and educational
technology departments at our institution collaborated to create a new program
of student teacher supervision. The program, called eSupervision, is a course
management system designed using a cognitive apprenticeship framework to
support the performance of student teachers, cooperating teachers, and university
supervisors with a variety of technology during the field experience. Partici-
pants in eSupervision engage in a number of technology-enhanced supervision
activities, such as video reflection with an expert, online discussion of classroom
management strategies with peers and experts alike, and building lesson plans
with a performance support system. Those activities were purposefully included
in the program due to their rich history in the literature. Researchers have
reported positive changes in the performance and/or attitudes of novice teachers
as a result of connecting more frequently with experts through telecommuni-
cations technology (Liu, 2005; Wu & Lee, 2004), observing one’s teaching
performance on video (Harford & MacRuairc, 2008; Lee & Wu, 2006), and
learning to plan instruction using electronic performance support systems (Hacker
& Sova, 1998; Wild, 1998).
Researchers have theorized that novice teachers who engage in technology-
supported learning environments designed from socio-cognitive theory would
have more opportunity to inform their practice and their efficacy (Joia, 2001;
Wang & Bonk, 2001). However, prior studies on the use of technology with
novice teachers focus more on participant perceptions of the tool and less on
student outcomes such as performance and efficacy (Gentry, Denton, & Kurz,
2008). The purpose of this study was to compare the knowledge, performance, and
self-efficacy of two cohorts of student teachers engaged in the field experience,
where one cohort participated in eSupervision and the other did not. We begin
the article by discussing the literature regarding the assessment of teacher
knowledge and performance in teacher education, teacher self-efficacy, and the
technology used to enhance the supervision of student teachers.

THEORETICAL PERSPECTIVES

Teacher Knowledge and Performance


Over the past 2 decades, teacher educators have been increasingly interested
in measuring teacher knowledge and performance in authentic ways. The
primary reason for this is accreditation—institutions that seek accreditation
must provide evidence of successfully training preservice teachers (Britten,
Mullen, & Stuve, 2003). More recently, there has been a focus on implementing
a standardized performance assessment in preservice teacher education. Darling-
Hammond (2010) suggested that such an assessment would introduce greater
accountability to institutions that credential teachers. This, in turn, would
push credentialing institutions to align their practices more closely with the
best practices for training teachers. Galluzzo (2005) suggested that adopting a
standardized performance assessment that takes a holistic look at a preservice
teacher would be a powerful way to help them develop professionally while
accounting for their ability to teach.
The Performance Assessment for California Teachers (PACT) is a statewide
assessment that all preservice teachers in California must pass before receiving
their teaching credential. PACT is an authentic assessment of both the teaching
knowledge and classroom performance of student teachers (Larsen & Calfee,
2005). Student teachers complete PACT by planning a series of lessons that
match specific teaching standards, videotaping themselves delivering those
lessons, and submitting both the lessons and video with a reflection on their
practice. PACT has been established as a valid and reliable measure of a preservice
teacher’s knowledge of and ability to plan, instruct, assess, and reflect (PIAR)
on teaching (Pecheone & Chung, 2006). The recent adoption of PACT at our
university created an opportunity for us to examine the impact of eSupervision
on teacher knowledge and performance.

Teacher Self-Efficacy
Teacher self-efficacy—that is, a teacher’s belief in his or her ability to achieve
specific results in a given context (Tschannen-Moran, Hoy, & Hoy, 1998)—
plays a role in a teacher’s performance. A teacher’s efficacy beliefs affect what
they are willing to attempt in the classroom and how persistent they will be at
succeeding as a teacher (Mulholland & Wallace, 2001; Tait, 2008; Tschannen-
Moran & Hoy, 2001). Tschannen-Moran et al. (1998) describe the interplay
between self-efficacy and performance as cyclical. For example, a positive
teaching experience can help improve a teacher’s beliefs in his or her abilities.
This, in turn, increases the likelihood of more positive experiences in the future.
Similarly, a negative experience might lower a teacher’s beliefs in his or her
abilities, which would increase the likelihood of more negative experiences
in the future.
The connection between teacher self-efficacy and performance has been evi-
denced in several ways. The students of highly efficacious teachers have
performed better and exhibited higher levels of motivation than students of
teachers with lower levels of self-efficacy (Caprara, Barbaranelli, Steca, &
Malone, 2006). There is also evidence that teachers with high self-efficacy are
less likely to burn out of the profession (Brouwers & Tomic, 2000; Fives,
Hamman, & Olivarez, 2007). Fives et al. (2007) found that 49 student teachers
engaged in the field experience experienced less severe feelings of burnout
when they had higher levels of self-efficacy. Given the relationship between
teacher self-efficacy and their performance as a teacher, an important outcome
for any program of teacher preparation would be to cultivate teachers with a
strong sense of efficacy.
Bandura (1997) identified four sources of efficacy knowledge: mastery experi-
ences, vicarious experiences, verbal persuasion, and physiological arousal. In
the context of student teaching, the first three sources have been thoroughly
established by several researchers. For example, mastery experiences are
described as teaching tasks that inform a teacher’s knowledge of his/her own
abilities, such as regularly and successfully planning and implementing instruc-
tion (Knoblauch & Hoy, 2008; Tschannen-Moran & Hoy, 2007). Vicarious
experiences are described as the observation of modeled instruction, including
observing one’s self, or self-modeling, and learning about the experiences of
others (Knoblauch & Hoy, 2008; Mulholland & Wallace, 2001). Verbal per-
suasion entails receiving feedback about one’s performance or emotional support
from colleagues, administrators, students, or the greater community (Liaw, 2009;
Tschannen-Moran & Hoy, 2007). Mastery experiences that are informed by
the other two sources of efficacy are suggested to be the most likely to shape a
teacher’s efficacy beliefs (Bandura, 1997; Knoblauch & Hoy, 2008; Mulholland
& Wallace, 2001). This makes sense—if a student teacher feels a lesson was
delivered successfully and an expert provides feedback confirming this, the
experience is likely to contribute to or improve his/her self-efficacy. Conversely, if the student teacher feels that the lesson was delivered successfully and an expert contradicts this, the feedback may reduce his/her self-efficacy.
The field experience is an opportune time to positively influence and shape
novice teacher efficacy (Liaw, 2009; Mulholland & Wallace, 2001). Student
teachers frequently engage in activities that are believed to improve their self-
efficacy, such as observing a variety of competent models (Knoblauch & Hoy,
2008), sharing experiences with others (Liaw, 2009; Main & Hammond, 2008),
and receiving recognition for their successes (Mulholland & Wallace, 2001;
Tschannen-Moran & Hoy, 2007). In other words, they regularly engage in
activities that expose them to all three sources of efficacy knowledge—mastery
experiences, vicarious experiences, and verbal persuasion—at a time when they
are most open to them. Tschannen-Moran and Hoy (2007) found evidence of
this in a regression analysis that included contextual factors, verbal persuasion,
and mastery experience as predictors of self-efficacy. In that study, both mastery
experiences and verbal persuasion made a significant contribution to the efficacy
beliefs of 75 novice teachers, whereas only mastery experiences made a significant
contribution to the efficacy beliefs of the remaining 180 experienced teachers.

Technology-Enhanced Student Teacher Supervision


In teacher education, there is a rich history of evidence suggesting that student
teacher performance and self-efficacy can be improved using technology. Novice
teachers who have used telecommunication technologies to connect with experts
have engaged in deep levels of reflection (Barnett, Keating, Harwood, & Saam,
2002; Blanton, Moorman, & Trathen, 1998; Bodzin & Park, 2002) and created
highly complex instruction (Barnett et al., 2002). When novice teachers viewed
videos of themselves teaching, they engaged in deep levels of reflection (Crawford
& Patterson, 2004; Romano & Schwartz, 2005; Sherin & van Es, 2005) and had
opportunities to examine their own teaching beliefs (Yerrick, Ross, & Molebash,
2005). Novice teachers who have used Electronic Performance Support Systems
(EPSS)—that is, electronic systems that support people in everyday tasks without
the intervention of another human (Gery, 1991)—have demonstrated higher
levels of confidence and ability with regard to planning (Liu, 2005; Wild, 1998).
The eSupervision instructional program is a series of five online learning
modules that incorporate telecommunication technologies, video, and EPSS
during the field experience. In order of completion, the five modules were
Analyzing the Teaching Context, Classroom Management, Planning Instruction,
Engaging the Learner, and Assessing the Learner. The objectives of the modules
mirror the focus of PACT and the instructional activities in those modules
were designed around the elements of cognitive apprenticeship (Collins, Brown,
& Holum, 1991). Alger and Kopcha (2009, 2011) have previously described
the instructional activities within each module in great detail; we summarize
them here:
• Asynchronous discussion. Triad members (student teachers, cooperating
teachers, and university supervisors) use the online discussion boards to share
experiences and/or receive feedback and advice on relevant classroom issues,
such as classroom management, instructional strategies, and assessment
methods. Student teachers are asked to post bi-weekly to the discussion
boards and are encouraged to write about topics they find relevant. When
a comment is posted, every member receives an update via e-mail. Experts
in eSupervision regularly read and often reply to the postings of student
teachers other than their own.
• Guided observation including video reflection. Student teachers in
eSupervision are observed a total of six times per semester; three times by
the supervisor and three times by the cooperating teacher. When they are
observed, the observers use a downloadable form that is completed elec-
tronically and then shared with the others in the triad. The form contains
a variety of Likert-type and open-ended items that align with PACT and
focus the observation on the planning, instruction, assessment, and reflec-
tion (PIAR) of student teachers. In addition, student teachers videotape and
observe themselves delivering a lesson once per semester. The self-obser-
vation is completed using a guided reflection form and both the video and
reflection are reviewed with an expert for feedback.
• Lesson planning. Student teachers in eSupervision regularly create lesson
plans using an Electronic Performance Support System (EPSS). Student
teachers using the EPSS complete an online form that contains key reminders
of how to plan effective lessons. Then, they submit lessons electronically to
the cooperating teacher and supervisor for feedback. Each lesson is stored and
published to an online searchable database, and all members in eSupervision
can use the database to search for and view student lessons at any time.

Student teachers who participate in eSupervision are likely to perform differently than those who receive a traditional approach to supervision for several
reasons. First, interviews with student teachers who participated in the initial
pilot (see Alger & Kopcha, 2009, 2011) suggested that the technologies used can
enhance learning outcomes. Those student teachers reported that the dialogue
with peers and experts on the discussion boards helped inform their teaching
practices, and that the Lesson Plan EPSS helped them internalize the process
of planning. Second, eSupervision was designed to address the call in teacher
education to increase the intensity of the supervision process and improve
the consistency of novice-expert interaction throughout the learning process
(Darling-Hammond, 2006; Lieberman & Pointer Mace, 2010). Researchers have
suggested that doing both would improve the quality of teachers entering the
classroom (Caillier & Riordan, 2009; Darling-Hammond, 2010; Hess, 2009).
Student teachers who participate in eSupervision are also likely to develop
stronger efficacy beliefs than those in traditional forms of supervision. The
combination of discussion boards, video reflection, and EPSS provide student
teachers with more opportunities to acquire efficacy knowledge from a variety
of sources (mastery experiences, vicarious experiences, and verbal persua-
sion). Researchers have noted that novice teacher efficacy improves through
frequent and purposeful interaction between a wide variety of experts and
novices (Tschannen-Moran & Hoy, 2007) and through access to ample support
throughout the supervision process (Capa & Woolfolk-Hoy, 2005). While these
efficacy-building activities are present in traditional approaches to supervision,
student teachers in eSupervision engage in those activities more frequently
and with a greater variety of experts.

Purpose of the Study


While the previous studies on eSupervision indicated that it was a promising
alternative to traditional forms of supervision, they lacked a focus on specific
student teacher outcomes. The purpose of this article is to compare the knowledge,
performance, and teacher efficacy over time of two cohorts of student teachers,
one that participated in eSupervision as part of their student teaching field
experience and one that did not. Using a mixed methods approach, we examine
student teacher knowledge and performance in the form of PIAR scores from the
PACT assessment, teaching self-efficacy using Tschannen-Moran and Hoy's (2001) Teachers' Sense of Efficacy Scale (TSES), and the role that technology played in knowledge, performance, and efficacy through interviews with
student teachers from both groups. Two measures of control were included in this
study to account for the level of support student teachers experienced at their
placement sites and better isolate the impact of eSupervision on their PIAR
and TSES scores. The research questions guiding this study were:
1. How do the knowledge, performance, and self-efficacy of student teachers
who receive eSupervision during their field experience compare to students
who receive traditional supervision?
2. What relationships exist among teacher knowledge and performance, self-
efficacy, and use of eSupervision technology?
3. In what ways does eSupervision technology play a role in the development
of student teachers’ knowledge, performance, and self-efficacy?
The study of these factors in one setting is unique. Researchers have noted a
need to examine the relationship between teacher performance and self-efficacy,
both in a single setting and in ways that go beyond self-report (Labone, 2004;
Wheatley, 2005). Moreover, there is a lack of empirical studies examining the
role that technology can play in developing both performance and efficacy
(Gentry et al., 2008). Using both quantitative and qualitative methods, this paper
reports on student teacher knowledge and performance, self-efficacy, and the
impact of the technologies found in eSupervision on both of those constructs.

METHOD
Participants
A total of 41 student teachers (19 eSupervision, 22 non-eSupervision) from a
large university in the Southwest participated in the study. The student teachers
at this university engaged in a year-long student teaching field placement as part
of a post-baccalaureate secondary-level teaching credential program; they had
little to no prior experience with teaching in the classroom. Participants were
recruited from two of four secondary single subject credential cohorts that began
in Fall 2009. eSupervision student teachers were trained to use the eSupervision
technologies early in Fall 2009; all participants received technical support
regarding the use of video.
Cohorts were “matched” in that they shared similar school contexts—urban
districts with similar demographics that included a large proportion of English
Language Learners, ethnic and racial diversity, and high numbers of students
eligible for free or reduced lunch. However, they did differ in one respect.
Student teachers in eSupervision switched school placements at the beginning
of the second semester and non-eSupervision students did not.
Instructional Program
The eSupervision instructional program used in this study is identical in content
to the program by Alger and Kopcha (2009, 2011) described earlier; the only
differences were associated with improved usability and functioning based on
the results of the pilot study of eSupervision. Over the course of a year, the student
teachers in eSupervision completed five online instructional modules. As part
of their coursework, they were required to post to and/or respond to posts on the
discussion board on a biweekly basis. They were also required to create daily
lesson plans using the lesson plan EPSS and complete a video reflection once
near the middle of each semester.
The manner of supervision by experts differed between the eSupervision and non-eSupervision groups. Each semester, non-eSupervision students were
formally observed six times by a university supervisor. They also completed a
video reflection, which they shared with peers for feedback. eSupervision stu-
dents were also formally observed six times. However, eSupervision supervisors
conducted only three formal observations of the student teacher per semester.
The eSupervision cooperating teachers conducted the remaining three formal
observations and, though not required, were encouraged to participate in the
online discussion. Because eSupervision supervisors had three fewer observations
than non-eSupervision supervisors, they were required to view and provide
feedback to student teachers on the video reflection. They were also required
to participate in online discussions and provide feedback on lesson plans. All
observations of eSupervision students were conducted with a standard observation
protocol made available in electronic format on the course management system.
Criterion Measures
Knowledge and Performance

The primary measures of teacher knowledge and performance were the planning, instruction, assessment, and reflection (PIAR) scores associated with the Performance Assessment for California Teachers (PACT). PACT provided a
robust illustration of teacher knowledge and performance at the end of the
credential program. As part of PACT, student teachers submitted a portfolio
that consists of a 3- to 5-day series of lessons, a video recording of themselves
delivering one of those lessons, and a reflection on their performance.
Validity and reliability were established in the following manner: three trained
evaluators assessed the quality of each student teacher’s submitted artifacts
across 13 different state standards called Teacher Performance Expectations
(TPE). Each TPE was evaluated on a 4-point scale using a detailed scoring
rubric and the average score of the three evaluators was reported. The scores on
the TPEs were then combined in specific ways to reflect the knowledge and
performance of student teachers with regard to PIAR. The scores on PIAR were
analyzed in this study.
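To illustrate the scoring logic, the sketch below averages three evaluators' 4-point ratings for each TPE and then rolls those averages up into PIAR area scores. The grouping of TPEs into PIAR areas shown here is hypothetical, as are the ratings; the article does not specify the official PACT mapping.

```python
# A minimal sketch of the PACT scoring logic described above. The grouping of
# TPEs into PIAR areas is hypothetical (the article does not give the official
# mapping), and the ratings are invented for illustration.
from statistics import mean

# ratings[tpe] = the 4-point scores given by the three trained evaluators
ratings = {
    "TPE1": [3, 3, 2],
    "TPE2": [2, 3, 3],
    "TPE3": [3, 2, 2],
    "TPE4": [2, 2, 3],
    # remaining TPEs omitted for brevity
}

# Hypothetical grouping of TPEs into PIAR areas
piar_map = {
    "planning": ["TPE1", "TPE4"],
    "instruction": ["TPE2"],
    "assessment": ["TPE3"],
}

def piar_scores(ratings, piar_map):
    """Average the three evaluators per TPE, then average the TPEs in each PIAR area."""
    tpe_means = {tpe: mean(scores) for tpe, scores in ratings.items()}
    return {area: mean(tpe_means[t] for t in tpes) for area, tpes in piar_map.items()}

print(piar_scores(ratings, piar_map))
```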

Teacher Self-Efficacy

The primary measure of teacher self-efficacy was the short form (12 items)
of the Teachers’ Sense of Efficacy Scale (TSES) (Tschannen-Moran & Hoy,
2001). Student teachers reported on how much they felt they could do with
regard to a number of common teaching tasks (class management, student moti-
vation, delivery of content) on a 9-point scale with descriptors at every odd
number, from 1 (nothing) to 3 (very little) to 5 (some influence) to 7 (quite a bit)
to 9 (a great deal). A sample item from the measure is, “How much can you
do to control disruptive behavior in the classroom?”
The TSES was selected for this study because it has been established as
valid and reliable when used with preservice teachers. Fives and Buehl (2010)
found that the TSES was a valid and reliable indicator of the efficacy beliefs
of 102 experienced and 270 preservice teachers. While the measure reflected
efficacy beliefs within three specific factors (student engagement, instructional
practice, and classroom management) among the experienced teachers, they found
that the TSES only reflected a general sense of efficacy—that is, a one-factor
solution—among the preservice teachers. The reliability of the instrument in
this study was .87 and has been reported at .90 or higher in prior studies (Fives
& Buehl, 2010; Tschannen-Moran & Hoy, 2001).
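To make the scoring concrete, the sketch below shows how 12-item TSES responses of the kind described above might be scored (as the mean of the items, consistent with a one-factor solution for preservice teachers) and how an internal-consistency estimate such as the .87 reported here can be computed with Cronbach's alpha. The response matrix is fabricated for illustration only.

```python
# Hedged sketch: scoring the 12-item TSES short form and estimating its internal
# consistency with Cronbach's alpha. The response matrix is fabricated; it is
# not the study's data.
import numpy as np

# rows = respondents, columns = the 12 TSES items, each rated 1-9
responses = np.array([
    [7, 8, 6, 7, 7, 8, 6, 7, 7, 8, 7, 6],
    [5, 6, 5, 6, 7, 6, 5, 6, 6, 5, 6, 5],
    [8, 9, 8, 8, 7, 9, 8, 8, 9, 8, 8, 7],
    [6, 6, 7, 5, 6, 7, 6, 6, 5, 6, 7, 6],
])

# Each respondent's TSES score is the mean of the 12 items (a one-factor
# solution, consistent with Fives & Buehl, 2010, for preservice teachers)
tses_scores = responses.mean(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(tses_scores, round(cronbach_alpha(responses), 2))
```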

Cooperating Teacher Quality

The cooperating teacher plays an influential role in developing student
teacher knowledge, performance, and self-efficacy during the field experience
(Knoblauch & Hoy, 2008). Novice teachers, however, may not consider the
level of support received from their cooperating teacher when reporting efficacy
beliefs. Tschannen-Moran and Hoy (2007) noted that novice teachers base their
efficacy beliefs more on an analysis of specific instructional tasks and less
on available support and resources than experienced teachers. Because of this, we
included two additional measures to examine whether each group of student
teachers in this study experienced a similar quality of relationship with their
cooperating teacher and support within their learning environment. The two
measures were the Learning to Teach Scale (Hamman & Olivarez, 2005) and the
Learning Climate Questionnaire (Williams & Deci, 1996). These are described
below.

• Learning to Teach Scale. The original survey, which contained 10 items,
measured both the level of guidance from the cooperating teacher and the
amount of imitation the student teacher engaged in. Student teachers in this
study reported on the five items associated with the level of guidance they
received from their cooperating teacher. Responses were given on a 6-point
scale with the following descriptors: never (1), almost never (2), sometimes
(3), often (4), almost always (5), and always (6). A sample item from the
measure is, “My cooperating teacher offers me guidance to improve my
teaching.” The scale has been established as both valid and reliable. The
reliability of this measure in this study was .93.
• Learning Climate Questionnaire. Student teachers reported on the level of
autonomy and support they experienced while learning to teach at their
placement site using a modified form of this questionnaire on a 7-point scale
with the following descriptors: strongly disagree (1), neutral (4), and strongly
agree (7). The original scale contained 15 items that referred to the level
of support provided by an “instructor.” The short form (six items) was used in
this study and modified so that the items referred to the cooperating teacher,
who was the instructor of interest in this context. A sample item from the
modified scale is, “I feel that my cooperating teacher provides me choices
and options.” The short form has been established as valid and reliable. The
reliability in this study was .88.

Student Teacher Interview Protocol

We constructed a semi-structured protocol containing 10 interview questions
designed to explore the impact of technology with regard to PIAR—that is, on
their ability to plan, instruct, assess, and reflect on instruction. The interview items
were written in such a way that student teachers from either group could report
on the specific technologies that they personally used while learning to teach.
As part of the interview, they described the technologies they personally used
to support their field experience. Sample items from the protocol are, “In what
ways did technology have an impact on your ability to: Plan lessons? Implement
them? Assess them? Reflect on your teaching?” and “Do you think that the
technology you used as part of learning to teach helped or hindered your growth
this semester? Why/why not?” This information helped us gain a richer descrip-
tion of the differences in technology use between groups.

Procedures
One cohort of 19 student teachers received the intervention of eSupervision
and the other cohort of 22 teachers did not. The cohort that received eSupervision
was selected because the instructor of the seminar for that cohort’s field experi-
ence volunteered to use eSupervision. Students, however, were assigned to
eSupervision in a random fashion. All students in the entire secondary single
subjects credential program at this university were placed randomly into one
of four cohorts at the beginning of the year. In this way, students were equally
likely to be placed in the cohort that received eSupervision or in the com-
parison group.
Student teachers in the eSupervision group engaged in the instructional pro-
gram over the course of two semesters. Because they participated in eSupervision
as part of an on-campus seminar course that met once per week, the seminar
leader set the pace for completing the activities in eSupervision. The seminar
leader also participated in online discussions and was responsible for monitoring
and promoting expert participation in eSupervision. Non-eSupervision students
participated in a similar seminar course for their own cohort; however, this
seminar had no instructional program like eSupervision as part of the coursework.
Instead, they participated in a series of face-to-face, in-class discussions that
focused on resolving issues that were specific to the cohort.
Although all student teachers completed the PACT, they had to be recruited
to participate in our additional surveys and interviews. Student teachers were
solicited for participation at the beginning of their second semester and 17
(8 eSup, 9 non-eSup) completed the TSES at that time (mid-year TSES). One
week prior to the end of the second semester, they once again completed the
TSES (post TSES), and also completed the Learning to Teach Scale and
the Learning Climate Questionnaire. Nine (5 eSup, 4 non-eSup) participated
in telephone interviews shortly after completing the surveys; the interviews
were recorded and transcribed for analysis.

Design and Analysis


A mixed-methods quasi-experimental research design was used in this
study. The treatment (eSupervision) and control (non-eSupervision) groups
were “matched” in that all participants were teachers in secondary schools with
similar demographics and ethnic diversity. A posttest-only design was used to
examine knowledge and performance (PIAR scores) between groups; a repeated
measures design was used to examine changes in self-efficacy (TSES scores)
over time between groups. Follow-up interviews were conducted to examine the
role technology played in developing student teacher knowledge, performance,
and self-efficacy.
Our literature review suggested that level of support from the cooperating
teacher within the placement site might influence student teacher efficacy beliefs
and, in turn, performance. We therefore included measures of those factors in our
design in an effort to isolate the impact of eSupervision on student teacher efficacy
and performance. To assess the role of these factors in our design, we conducted a
correlation analysis to examine the relationship between the mean scores for the
Learn to Teach Scale and Learning Climate Questionnaire and the post-TSES, and
between all three of these measures and PIAR mean scores. This analysis helped
us determine which, if any, measures should be treated as covariates in our
analysis of PIAR and post-TSES mean scores (Owen & Froman, 1998). We also
performed one-way analysis of variance (ANOVA) on each of the potential
covariates to examine whether there were any prior differences between groups
that might impact our analysis of TSES and PIAR scores.
Based on those results, we chose to exclude covariates from our analysis.
We conducted multivariate analysis of variance (MANOVA) on PIAR mean
scores to examine differences in knowledge and performance between groups.
We conducted repeated-measures analysis of variance (ANOVA) to examine
the change in TSES scores between groups over two points in time (mid-year
and post).
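The sketch below illustrates this analysis plan in Python. It runs on a hypothetical, randomly generated data frame whose column names (group, planning, tses_mid, and so on) are our own labels rather than anything from the study; it is meant only to show the shape of the covariate screening, the MANOVA on PIAR means, and the two-time-point repeated-measures comparison, which with only two time points is equivalent to a one-way ANOVA on gain scores.

```python
# Illustrative sketch of the analysis plan, run on a hypothetical, randomly
# generated data set; column names and values are assumptions, not study data.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 10  # hypothetical participants per group
df = pd.DataFrame({
    "group": ["eSup"] * n + ["nonEsup"] * n,
    # PIAR means on the 1-4 PACT scale
    "planning": rng.uniform(2.0, 3.0, 2 * n),
    "instruction": rng.uniform(2.0, 3.0, 2 * n),
    "assessment": rng.uniform(2.0, 3.0, 2 * n),
    "reflection": rng.uniform(2.0, 3.0, 2 * n),
    # TSES means (1-9 scale) at the two time points
    "tses_mid": rng.uniform(6.0, 7.5, 2 * n),
    "tses_post": rng.uniform(6.0, 7.5, 2 * n),
    # potential covariates
    "learn_teach": rng.uniform(4.0, 5.0, 2 * n),
    "climate": rng.uniform(5.5, 6.5, 2 * n),
})

# 1. Covariate screening: correlate each potential covariate with post-TSES
r, p = stats.pearsonr(df["learn_teach"], df["tses_post"])

# 2. Knowledge and performance: MANOVA on the four PIAR means by group
manova = MANOVA.from_formula(
    "planning + instruction + assessment + reflection ~ group", data=df)
print(manova.mv_test())

# 3. Self-efficacy: with only two time points, the time-by-treatment interaction
#    of a repeated-measures ANOVA is equivalent to a one-way ANOVA on gain scores
gain = df["tses_post"] - df["tses_mid"]
f_int, p_int = stats.f_oneway(gain[df["group"] == "eSup"],
                              gain[df["group"] == "nonEsup"])
print(r, p, f_int, p_int)
```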
Interview responses were first analyzed by technology type to determine how
the technologies that were used contributed to changes in the participant’s
ability with regard to PIAR, as well as with their self-efficacy. The methods for
analysis were drawn from the recommendations of several researchers (Miles
& Huberman, 1984; Seidman, 1998). Specifically, both researchers read through
the interviews separately and identified initial themes and patterns. After
collaborating to refine those themes and categories, the interview data were
separately analyzed a second time. In those cases where the researchers' coding
was inconsistent, the two researchers met to discuss the inconsistencies until
they reached agreement.
We also conducted a deductive analysis (Patton, 2002)—that is, we used an
a priori theory to guide our analysis—to examine the manner in which technology
helped inform student teacher confidence. For this second analysis, we used
Bandura’s (1997) theoretical typology of the sources of efficacy (mastery experi-
ences, vicarious experiences, and verbal persuasion) to examine the manner in
which each technology acted as a source of efficacy knowledge and/or contributed
to the development of self-efficacy among our student teachers. We used the
specific instances of those sources within the field experience that have been
thoroughly described by several researchers (Knoblauch & Hoy, 2008; Labone,
2004; Tschannen-Moran & Hoy, 2007) to guide our analysis.

RESULTS
The correlation analysis revealed that post-TSES scores were not significantly
correlated with Learn to Teach (r = .16) or Learning Climate scores (r = .22). None
of the PIAR mean scores were significantly correlated with post-TSES; these
correlations were relatively low and non-significant, ranging from –.23 to .27
with a mean of –.01. Correlations between PIAR mean scores and the Learning
Climate and Learn to Teach scores were similarly low and non-significant. This
suggested that PIAR and TSES mean scores could be analyzed without including
covariates in the model.
One-way analysis of variance (ANOVA) on the 17 mean scores for each
potential covariate revealed that both groups (eSup/non-eSup) experienced similar
and relatively high levels of post-efficacy and guidance from their cooperating
teacher. No statistically significant differences were found between groups on
either post-TSES scores [M(eSup) = 7.06 and M(non-eSup) = 6.77], Learning to Teach scores [M(eSup) = 4.70 and M(non-eSup) = 4.29], or Learning Climate scores [M(eSup) = 6.39 and M(non-eSup) = 5.97]. The lack of statistically significant differences on
these measures suggested that both PIAR and TSES scores could be analyzed
without including covariates.

Analysis of Knowledge and Performance

Multivariate analysis of variance on the 41 PACT mean scores for planning, instruction, assessment, and reflection (PIAR) indicated that students in the eSupervision group scored slightly higher overall than students in the non-eSupervision group, M = 2.42 and 2.31, a non-significant difference, F(4, 36) = 224.18, p > .05, η² = .06. Students in eSupervision scored higher than non-eSupervision students in three of the four areas of PIAR—planning (M = 2.53 and 2.45), instruction (M = 2.32 and 2.13), and assessment (M = 2.51 and 2.33). None of these differences were statistically significant. Table 1 contains the mean PACT scores by area (PIAR) for both groups.

Table 1. Mean PACT Scores by Area (PIAR) for eSupervision and non-eSupervision Groups

                     Planning   Instruction   Assessment   Reflection   Total
eSupervision           2.53        2.32          2.51         2.31       2.42
Non-eSupervision       2.45        2.13          2.33         2.31       2.31
Total                  2.49        2.22          2.42         2.32       2.36

Note: eSupervision group size = 19; non-eSupervision group size = 22.

Analysis of Self-Efficacy

A total of 17 student teachers completed both the mid-year TSES and the post TSES (8 eSupervision and 9 non-eSupervision). Repeated measures ANOVA on TSES scores at two points in time (mid-year and post) revealed a statistically significant time by treatment interaction, F(1, 15) = 9.23, p < .01, η² = .38. Students in the eSupervision group had lower TSES scores than non-eSupervision students at the mid-year, M = 6.59 and 7.00, but higher post-TSES scores, M = 7.06 and 6.77. The main effect for time—that is, mid-year and post TSES scores (M = 6.81 and 6.91)—was not statistically significant.
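As a quick check, the reported interaction effect size can be recovered from the F statistic and its degrees of freedom, assuming it is the usual partial eta squared (the article does not state the formula); the short Python sketch below illustrates the calculation.

```python
# Quick check of the reported effect size, assuming the eta-squared values are
# partial eta squared computed from F and its degrees of freedom.
def partial_eta_squared(f: float, df_num: int, df_den: int) -> float:
    return (f * df_num) / (f * df_num + df_den)

# Time-by-treatment interaction on TSES: F(1, 15) = 9.23
print(round(partial_eta_squared(9.23, 1, 15), 2))  # -> 0.38, matching the value above
```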

Student Teacher Interviews


A total of 9 student teachers (5 eSupervision, 4 non-eSupervision) were inter-
viewed to give them the opportunity to describe the role that technology played
in their student teaching field experience and in developing their teaching
skills. Students from both groups were interviewed to determine the types of technologies each group used and to identify commonalities in how those technologies were used. The interviews were semi-structured and lasted an
average of 20 minutes.
Both groups reported using a variety of technologies to help them learn to
teach during their field experience. While students from both groups reported
using e-mail and video reflection, eSupervision participants also reported using
discussion boards, the online lesson plan EPSS, and observation templates.
eSupervision students also reported using online instructional modules, but saw
these as a foundation for the activities completed using the other technologies in
the program. Therefore, the results associated with those other technologies—
that is, e-mail, video reflection, the lesson plan EPSS, and the observation forms—are presented in detail below and are summarized in Table 2.

Table 2. Interviewed Students' Use and Impact of Technology on PIAR and Self-Efficacy by Group

Electronic mail
  eSupervision: Likely to have had similar impact as non-eSupervision due to similar frequency and purpose of use
  Non-eSupervision: —

Video reflection
  eSupervision: Received feedback from supervisor; provided a vicarious experience of self-modeling that informed level of mastery; verbal persuasion regarding performance was a powerful source of efficacy information
  Non-eSupervision: Received feedback from peers; provided a vicarious experience of self-modeling that helped identify issues with teaching; student teachers felt the verbal persuasion they received was superficial and uninformative at times

Lesson Plan EPSS
  eSupervision: Repeated use of the tool acted as a mastery experience; EPSS less useful as ability to plan improved
  Non-eSupervision: Lacked any formal instruction with regard to planning

Discussion boards
  eSupervision: Expert comments prompted reflective practice; experiences shared online provided vicarious experiences that informed own level of teaching mastery
  Non-eSupervision: No discussion boards used to support the field experience

Cooperating teacher observation forms
  eSupervision: Cooperating teacher formally observed the student teacher three times a semester with an observation form; provided useful and specific feedback; created opportunity to receive verbal persuasion and inform mastery about PIAR
  Non-eSupervision: No formal observation by cooperating teacher

Electronic Mail

It is likely that e-mail had similar impact on both groups in this study. Students from each group used e-mail with similar frequency and for similar purposes. All stated that they infrequently used e-mail with their supervisor, primarily to arrange site visits and triad meetings. With regard to the cooperating teacher, e-mail contact varied in frequency from daily (1 eSup, 1 non-eSup) to weekly (2 eSup, 3 non-eSup) to only several times a semester (2 eSup, 1 non-eSup). These exchanges varied in nature from planning to classroom management issues to the organization of the school. The two participants who used e-mail daily with the cooperating teacher were the only two who reported using it to receive advice about lesson plans and share classroom experiences with the cooperating teacher.

Video Reflection

Students in both groups used video technology to record themselves teaching a lesson and review the videotaped lesson. Students in the eSupervision group used a standard protocol to reflect on the lesson, and then received feedback on the videotaped lesson from their supervisor. All eSupervision students found that
the feedback they received was a powerful source of information about their
ability to implement instruction.
The video experience afforded eSupervision students with two opportunities
to improve their efficacy. First, the act of observing themselves functioned as a
vicarious experience through self-modeling, which created an opportunity to
challenge their own thinking about their level of mastery with regard to teaching.
One described: “I was like, ‘Wow! My kids are being really well-behaved
and that’s something really smart that this kid just said, and I don’t even know
if I acknowledged that he said it.’” Another student noted that the observation
template helped guide this self-modeling in a purposeful manner, stating, “I tend
to be hard on myself. So having specific areas [of PIAR] to watch for, and
thinking, ‘Oh, I think I’m actually ok with doing this’ makes me more confident
as a teacher.”
The second opportunity to improve their efficacy came from the feedback the
supervisor provided student teachers, which acted as a form of verbal persuasion
that informed their level of confidence in their ability to teach. One student
explained, “[My supervisor] pointed out things that I missed even watching it.
He saw a different perspective but he came back with a few things and I was
like, ‘Wow! I didn’t notice this before.’”
All non-eSupervision students videotaped themselves teaching. Similar to the
eSupervision group, watching the video provided them with an opportunity to
engage in a vicarious experience of self-modeling. They were able to observe their
own performance and gauge their success with their own teaching ability. One
student explained,
[The video reflection] gave me an opportunity to see how I was delivering
the information, and to critique myself, whether it be a minor thing like
saying “ok” and things like that. Also, I was able to see if some of the students
were either engaged more or less than how I thought they were in the lesson.

Another similarly noted, “I could see the strategies [I was using] and see
that they were working.” Unlike the eSupervision group, however, they did not
have an opportunity to receive feedback, or verbal persuasion, from an expert
but rather from peers in a large-group setting. They reported that the feedback
helped them identify and change some issues in their own teaching, but felt
that overall it was superficial and uninformative.

Lesson Plan EPSS

eSupervision students were required to post daily lesson plans using the Lesson
Plan EPSS. This appears to have acted as a mastery experience for them—all
reported that the repeated use of this tool helped them internalize and master the
process of planning. One stated, “While I hated [planning daily lessons], it turned
out to be a good thing because I got better and better at it as I did it more and more.”

Two factors appear to have had a negative effect on the value of this tool for students. First, the quality of the experience appears to have been diminished by the limited frequency and quality of feedback provided by experts. Several noted
that the feedback they received was often not timely enough to be useful; all
wanted to receive more feedback on their posted lessons. Second, it was clear
that as mastery levels increased, students felt the need to use the tool less.
One student summarized this trend:

It helped me to really get the hang of the whole “planning every day”
and everything. By second semester, I had already gotten the idea of how
it flowed and it was just an extra thing that I didn’t participate in much.

In contrast, the non-eSupervision group reported that they lacked any formal
mechanism for learning to plan lessons and relied heavily on the cooperating
teacher for assistance with this task. Several noted that this was due to a lack of
instruction at the university level regarding the planning of lessons.

Discussion Boards

eSupervision students reported using the discussion boards to support their
learning to teach during the field experience; in contrast, non-eSupervision
students did not report using any type of discussion board related to the field
experience. In particular, the use of this tool influenced their ability to reflect on
their instruction. All the students reported that the discussion questions and
comments, posted by experts and novices alike, prompted reflection from the
participants. One student stated,

It was helpful—whether it was a cooperating teacher, colleague, or myself
just being asked questions—being asked to think about things. Sometimes
you get caught up in planning and being a teacher, so that when someone
comes along and asks you a question, you are like, “Oh, hmm.” It makes
you think about the “big picture.”

For eSupervision students, the discussion board also functioned as an opportunity to use vicarious experiences to judge their own level of mastery with
regard to teaching. Student teachers reported that they used the discussion boards
to read about the experiences of others and gauge their own level of success as
compared to their peers. One student stated, “I really wanted to know if other
people were having the same issues that I was, and [the discussion board] was
a huge way to communicate that.” Another similarly noted, “I appreciated seeing
things that were going on with other student teachers and seeing they were in the
same place as me.” Similar to the lesson plan EPSS, eSupervision students
reported using the discussion boards as a tool for measuring their own abilities
more when they were less confident, and less as they became more confident.

Cooperating Teacher Observation Forms

Cooperating teachers in eSupervision were asked to complete three formal
observations of their student teacher using an observation form; non-eSupervision
students did not engage in formal observations by their cooperating teacher. The
form was available online and completed forms were shared with supervisors
and student teachers electronically. Four of the five eSupervision students were
observed all three times and debriefed by their cooperating teacher using the
completed observation form; one was observed and debriefed only once.
All eSupervision students reported that the experience was useful; three noted
that it provided them with useful and specific feedback, while another reported
that it created an opportunity to reflect with the cooperating teacher on his/her own
practice. The form acted as a catalyst for a dialogue with the cooperating teacher
that focused on specific teaching strategies related to PIAR. In the area of
Instruction, one student noted,
The [individual rating] was very broad and could mean many different
things. The [cooperating] teacher’s comments, they told me more specif-
ically, “Spend a little time, but less time than you are, on explaining. Just
let [the students] do it and answer questions as they go.”

The use of the observation form also created a formal and guided opportunity
for student teachers to improve their self-efficacy through verbal persuasion.
Discussion guided by the observation form provided valuable information
that informed their level of mastery with a specific focus on PIAR and helped
them discern the positive attributes of their performance from the negative ones.
One student concisely summarized this, stating, “I tend to be hard on myself,
so having specific areas where I know I’m actually doing ok with something
makes me more confident.”

DISCUSSION

The purpose of this article was to examine the difference in knowledge,
performance, and teacher self-efficacy between student teachers who received
eSupervision as part of their field experience and those who did not. The results
indicate that the electronic supervision methods used in eSupervision may be
an effective alternative to traditional approaches to student teacher supervision
during the field experience. Although the PIAR scores for eSupervision students
were not significantly higher than for non-eSupervision students, eSupervision
students scored higher in three of the four areas. This is an encouraging result—
eSupervision student teachers received fewer site visits by their supervisor, but
had greater access to supervisory experiences mediated by technology (video
reflection, online discussion, lesson plan EPSS, observation forms). The fact that
they performed as well as their non-eSupervision peers on PIAR suggests that
traditional supervision (i.e., a series of observations from a supervisor) may
not be the only way to effectively supervise student teachers during the field
experience. This supports researchers who have suggested that technology-
enhanced alternatives to traditional supervision can be an effective approach to
teacher preparation (Levin & Waugh, 1998; Liu, 2005).
eSupervision students scored higher than non-eSupervision students on the
TSES at the conclusion of their field experience despite having lower scores at
the mid-year. This interaction may have occurred because eSupervision students
had more access to feedback due to technology. To make an efficacy judgment, a
student teacher must consider his or her competence in the context of a specific
task (Tschannen-Moran et al., 1998; Tschannen-Moran & Hoy, 2007). Unlike
experienced teachers, novice teachers tend to base those judgments more heavily
on feedback from others (Tschannen-Moran & Hoy, 2007). eSupervision students
had more opportunity to receive both positive and negative feedback through
the discussion boards from a variety of sources, often from a supervisor that
was not their own or from peers who were experiencing similar issues in the
classroom. They were also formally observed by and received feedback from their
cooperating teachers in addition to their supervisors. At the mid-year TSES,
student teachers were just learning to teach. It is likely that receiving feedback
from a wide variety of sources may have helped eSupervision students see
their own shortcomings more clearly and report lower levels of efficacy. At the
conclusion of their field experience, however, they had more opportunity than
non-eSupervision students to develop their teaching practice based on that
feedback and receive additional feedback about their efforts. Others (Bates
& Khasawneh, 2007; Wu & Lee, 2004) have similarly found that feedback from
experts played a role in shaping the efficacy beliefs of preservice teachers in online
settings and improving their ability to assess the quality of their own teaching.
Another reason for this interaction may have been that eSupervision students
had more opportunity to assess their level of mastery with teaching due to
technology. eSupervision students reported that their ability to plan lessons
became both internalized and routine due to the Lesson Plan EPSS, indicating
a high level of mastery associated with the skill of planning. They noted that
expert feedback on their videotaped lessons informed their perception of mastery
associated with implementing instruction. In addition, they reported that reading
about the experiences of others on the discussion boards, a form of vicarious
experience, provided opportunities to judge their own level of success and reflect on
the “big picture” of teaching. Combined, these experiences were likely to inform
their efficacy beliefs. Mastery experiences that are informed by other sources of
self-efficacy are noted as one of the most powerful influences on a teacher’s
level of efficacy (Bandura, 1997; Mulholland & Wallace, 2001; Tschannen-
Moran et al., 1998). In addition, past studies have similarly found that student
teachers use feedback from others on discussion boards to evaluate their own
ability (Liu, 2005), feel more confident with regard to planning tasks due to using
EPSS (Hacker & Sova, 1998; Wild, 1998), and are better able to identify and
change issues with their own teaching as a result of video reflection paired
with expert feedback (Lee & Wu, 2006).
Scores from both the TSES and PACT were higher for the eSupervision group
at the conclusion of the field experience. This supports the idea that teacher
performance and self-efficacy influence and are related to each other (Tschannen-
Moran et al., 1998), and that the students in this study reported efficacy beliefs
that aligned with their own ability. We were surprised, however, that there was
no significant correlation between post-TSES and PIAR scores and no significant
difference in PIAR scores between groups. This is likely because the scale used
to evaluate PIAR has a somewhat small range (from 1 to 4), and may not have
been sensitive enough to indicate statistical significance given the small sample
sizes associated with our measures.
The small sample size associated with both measures and the lack of a pre-treatment efficacy measure limit the generalizability of the findings. However, there are several reasons to believe that similar results would occur with a different group of students. First, the interviews suggest that eSupervision students had greater opportunity to inform their efficacy knowledge throughout the year, which corresponds with the efficacy interaction favoring eSupervision students.
In addition, students were likely to have similar levels of efficacy prior to
treatment as they had little to no experience in actual classrooms before their
field experience. This supports the idea that eSupervision students experienced
something that influenced their efficacy beliefs differently than non-eSupervision
students. Finally, the results of this study are consistent with other researchers
who found that technology improved the performance and attitudes of
teachers learning to teach (Liu, 2005; Pianta, Mashburn, Downer, Hamre, &
Justice, 2008). Together, these factors support the validity of the findings in
this study.
While the effect size associated with the impact of eSupervision on PIAR scores (η² = .06, about 6% of variance explained) may seem small (see Cohen, 1988), we feel it is promising when con-
sidered within our context. Hill, Bloom, Black, and Lipsey (2008) suggest
judging an effect size by comparing it to the results of other similar studies, as
well as by how well it addresses gaps noted in related policies and literature.
eSupervision was designed to improve longstanding issues with student teacher
supervision and augment rather than replace the mentoring that occurs during
the field experience. Our effect size indicates that we did augment that mentor-
ing, and this study is an important initial step toward generating an empirical
body of evidence that examines the impact of technology-enhanced super-
vision methods on student teacher efficacy beliefs and actual classroom practice
during the field experience. As noted by Gentry et al. (2008), most studies on
technology-supported mentoring in teacher education in general or during the
field experience in particular are not quantitative and lack a connection to student
performance outcomes.

The technology activities that student teachers completed in this study
appear to be effective ways to support the triad during the field experience.
Teacher educators interested in using technology-enhanced activities to support
the supervision of student teachers during the field experience should note that
the primary purpose of eSupervision is to support a cognitive apprenticeship—
technology is merely the vehicle for accomplishing this task. This supports
others (Bates & Khasawneh, 2007; Joia, 2001; O’Neill & Harris, 2004; Owston,
Wideman, Murphy, & Lupshenyuk, 2008; Wang & Bonk, 2001) who have noted
that learning outcomes are likely to improve for students in online settings that
are built on a cognitive apprenticeship framework and use technology to connect
them more purposefully with experts.
Future research should begin with a replication study that not only increases
the sample size but also includes a pre-treatment measure of efficacy. Doing so
would add confidence to the findings by improving both the statistical power of
the analysis and our understanding of changes in efficacy scores
over time. Other research could include examining the individual influences on
teacher performance and self-efficacy within a technology-rich learning environ-
ment such as eSupervision. Both technological and non-technological influences
should be examined to determine which effectively shape teacher performance
and self-efficacy, and, more importantly, to what degree. Such studies would
help address the existing need to examine the connection between efficacy and
performance (Labone, 2004; Wheatley, 2005) and the impact of technology on
both (Gentry et al., 2008).
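As a rough planning aid for such a replication, the sketch below (our own, written in Python with the statsmodels library; the effect size of f = .25 is an illustrative assumption tied to the 6% variance estimate discussed above, not an established population value) estimates the total sample a two-group comparison would need in order to detect an effect of comparable size with conventional power:

    from statsmodels.stats.power import FTestAnovaPower

    # Illustrative only: total N needed to detect a two-group effect of Cohen's f = .25
    # (roughly 6% of variance explained) at alpha = .05 with 80% power.
    analysis = FTestAnovaPower()
    n_total = analysis.solve_power(effect_size=0.25, k_groups=2, alpha=0.05, power=0.80)
    print(round(n_total))  # prints a total of roughly 128 participants

A requirement of well over 100 participants helps explain why single-program studies such as ours rarely reach conventional power, and why replication across multiple programs is the more realistic path forward.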

CONCLUSION
The preparation of teachers is more challenging than ever. As the demands
of today’s classrooms continue to increase, teacher educators are faced with
accomplishing more with their student teachers in less time, with fewer resources,
and with increasing levels of accountability. One likely response to such chal-
lenges will be to seek alternatives to the traditional approach to supervising
student teachers. eSupervision was designed to do just that—that is, to use
technology to do more with the resources available for supervising student
teachers while responding to increased pressures and demands for account-
ability and effectiveness. While this study is limited in several ways, its results
suggest that we were successful in improving the supervision of student
teachers. In addition, the results indicate that there are ways in which teacher
educators can use technology to accomplish these aims without detriment,
and potentially with improvement, to the knowledge, performance, and efficacy
beliefs of preservice teachers. Such evidence serves to inform our knowledge
of what effective supervision looks like and how to build effective supervisory
experiences with technology to better support all student teachers during their
field experience.

REFERENCES
Alger, C., & Kopcha, T. J. (2009). eSupervision: A technology framework for the 21st
century field experience in teacher education. Issues in Teacher Education, 18(2),
31-46.
Alger, C., & Kopcha, T. J. (2011). Technology supported cognitive apprenticeship trans-
forms the student teaching field experience. The Teacher Educator, 46(1), 71-88.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman
& Company.
Barnett, M., Keating, T., Harwood, W., & Saam, J. (2002). Using emerging technologies
to help bridge the gap between university theory and classroom practice: Challenges
and successes. School Science and Mathematics, 102(6), 299-313.
Bates, R., & Khasawneh, S. (2007). Self-efficacy and college students’ perceptions and
use of online learning systems. Computers in Human Behavior, 23(1), 175-191.
Blanton, W. E., Moorman, G., & Trathen, W. (1998). Telecommunications and teacher
education: A social constructivist view. Review of Research in Education, 23, 235-275.
Bodzin, A. C., & Park, J. P. (2002). Using a nonrestrictive web-based forum to promote
reflective discourse with preservice science teachers. Contemporary Issues in Tech-
nology and Teacher Education, 2(3), 267-289.
Britten, J., Mullen, L., & Stuve, M. (2003). Program reflections on the role of longi-
tudinal digital portfolios in the development of technology competence. The Teacher
Educator, 39(2), 79-94.
Brouwers, A., & Tomic, W. (2000). A longitudinal study of teacher burnout and perceived
self-efficacy in classroom management. Teaching and Teacher Education, 16(2),
239-253.
Caillier, S. L., & Riordan, R. C. (2009). Teacher education for the schools we need.
Journal of Teacher Education, 60(5), 489-496.
Capa, Y., & Woolfolk-Hoy, A. (2005). What predicts student teacher self-efficacy?
Academic Exchange Quarterly, 10(4), 123-127.
Caprara, G. V., Barbaranelli, C., Steca, P., & Malone, P. S. (2006). Teachers’ self-efficacy
beliefs as determinants of job satisfaction and students’ academic achievement:
A study at the school level. Journal of School Psychology, 44(6), 473-490.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Erlbaum.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking
visible. American Educator, 15(3), 6-11, 38-46.
Crawford, B. A., & Patterson, B. (2004, January). Scaffolding videotape reflections to
enhance teacher candidates’ practice. Paper presented at the International Conference
of the Association for the Education of Teachers in Science, Nashville, TN.
Darling-Hammond, L. (2006). Constructing 21st-century teacher education. Journal of
Teacher Education, 57(3), 300-314.
Darling-Hammond, L. (2010). Teacher education and the American future. Journal of
Teacher Education, 61(1-2), 35-47.
Feiman-Nemser, S. (2001). From preparation to practice: Designing a continuum to
strengthen and sustain teaching. Teachers College Record, 103(6), 1013-1055.
Fives, H., & Buehl, M. M. (2010). Examining the factor structure of the Teachers’ Sense
of Efficacy Scale. Journal of Experimental Education, 78(1), 118-134.
Fives, H., Hamman, D., & Olivarez, A. (2007). Does burnout begin with student-teaching?
Analyzing efficacy, burnout, and support during the student-teaching semester.
Teaching and Teacher Education, 23(6), 916-934.
Galluzzo, G. (2005). Performance assessment and renewing teacher education: The pos-
sibilities of the NBPTS standards. The Clearing House, 78(4), 142-145.
Gentry, L. B., Denton, C. A., & Kurz, T. (2008). Technologically-based mentoring
provided to teachers: A synthesis of the literature. Journal of Technology and Teacher
Education, 16(3), 339-373.
Gery, G. (1991). Electronic performance support systems. Cambridge, MA: Ziff Institute.
Hacker, R., & Sova, B. (1998). Initial teacher education: A study of the efficacy of
computer mediated courseware delivery in a partnership context. British Journal of
Educational Technology, 29(4), 333-341.
Hamman, D., & Olivarez, J. A. (2005). Learning to Teach Questionnaire: A measure of
the interaction between cooperating and student teachers. Paper presented at the
annual meeting of the American Educational Research Association.
Harford, J., & MacRuairc, G. (2008). Engaging student teachers in meaningful reflective
practice. Teaching and Teacher Education, 24(7), 1884-1892.
Hess, F. M. (2009). Revitalizing teacher education by revisiting our assumptions about
teaching. Journal of Teacher Education, 60(5), 450-457.
Hew, K. F., & Knapczyk, D. (2007). Analysis of ill-structured problem solving, mentoring
functions, and perceptions of practicum teachers and mentors toward online mentoring
in a field-based practicum. Instructional Science, 35(1), 1-40.
Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for
interpreting effect sizes in research. Child Development Perspectives, 2(3), 172-177.
Joia, L. A. (2001). Evaluation of a hybrid socio-constructivist model for teacher training.
Journal of Technology and Teacher Education, 9(4), 519-549.
Knoblauch, D., & Hoy, A. W. (2008). “Maybe I can teach those kids.” The influence
of contextual factors on student teachers’ efficacy beliefs. Teaching and Teacher
Education, 24(1), 166-179.
Labone, E. (2004). Teacher efficacy: maturing the construct through research in alterna-
tive paradigms. Teaching and Teacher Education, 20(4), 341-359.
Larsen, L. R., & Calfee, R. C. (2005). Assessing teacher candidate growth over time.
The Clearing House, 78(4), 151-157.
Lee, G. C., & Wu, C.-C. (2006). Enhancing the teaching experience of pre-service teachers
through the use of videos in web-based computer-mediated communication (CMC).
Innovations in Education and Teaching International, 43(4), 369-380.
Levin, J., & Waugh, M. (1998). Teaching teleapprenticeships: Electronic network-
based educational frameworks for improving teacher education. Interactive Learning
Environments, 6(1-2), 39-58.
Liaw, E.-C. (2009). Teacher efficacy of pre-service teachers in Taiwan: The influence of
classroom teaching and group discussions. Teaching and Teacher Education, 25(1),
176-180.
Lieberman, A., & Pointer Mace, D. (2010). Making practice public: Teacher learning in
the 21st century. Journal of Teacher Education, 61(1-2), 77-88.
Liu, T.-C. (2005). Web-based cognitive apprenticeship model for improving pre-service
teachers’ performances and attitudes towards instructional planning: Design and
field experiment. Educational Technology & Society, 8(2), 136-149.
Main, S., & Hammond, L. (2008). Best practice or most practiced? Pre-service teachers’
beliefs about effective behaviour management strategies and reported self-efficacy.
Australian Journal of Teacher Education, 33(4), 28-39.
Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of
new methods. Newbury Park, CA: Sage.
Mulholland, J., & Wallace, J. (2001). Teacher induction and elementary science teaching:
Enhancing self-efficacy. Teaching and Teacher Education, 17(2), 243-261.
O’Neill, D. K., & Harris, J. B. (2004). Bridging the perspectives and developmental
needs of all participants in curriculum-based telementoring programs. Journal of
Research on Technology in Education, 37(2), 111-128.
Otero, V., Peressini, D., Meymaris, K. A., Ford, P., Garvin, T., Harlow, D., et al. (2005).
Integrating technology into teacher education: A critical framework for implementing
reform. Journal of Teacher Education, 56(1), 8-23.
Owen, S. V., & Froman, R. D. (1998). Uses and abuses of the analysis of covariance.
Research in Nursing & Health, 21, 557-562.
Owston, R., Wideman, H., Murphy, J., & Lupshenyuk, D. (2008). Blended teacher pro-
fessional development: A synthesis of three program evaluations. The Internet and
Higher Education, 11(3-4), 201-210.
Paris, C., & Gespass, S. (2001). Examining the mismatch between learner-centered
teaching and teacher-centered supervision. Journal of Teacher Education, 52(5),
398-412.
Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand
Oaks, CA: Sage.
Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The Performance
Assessment for California Teachers (PACT). Journal of Teacher Education, 57(1),
22-36.
Pianta, R. C., Mashburn, A. J., Downer, J. T., Hamre, B. K., & Justice, L. (2008).
Effects of web-mediated professional development resources on teacher-child inter-
actions in pre-kindergarten classrooms. Early Childhood Research Quarterly, 23(4),
431-451.
Romano, M., & Schwartz, J. (2005). Exploring technology as a tool for eliciting and
encouraging beginning teacher reflection. Contemporary Issues in Technology and
Teacher Education [Online serial], 5(2). Retrieved from http://www.citejournal.org/
vol5/iss2/general/article1.cfm
Seidman, I. E. (1998). Interviewing as qualitative research: A guide for researchers in
education and the social sciences. New York: Teachers College Press.
Sherin, M. G., & van Es, E. A. (2005). Using video to support teachers’ ability to notice
classroom interactions. Journal of Technology and Teacher Education, 13(3), 475-491.
Tait, M. (2008). Resilience as a contributor to novice teacher success, commitment, and
retention. Teacher Education Quarterly, 35(4), 57-75.
Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive
construct. Teaching and Teacher Education, 17(7), 783-805.
Tschannen-Moran, M., & Hoy, A. W. (2007). The differential antecedents of self-efficacy
beliefs of novice and experienced teachers. Teaching and Teacher Education, 23(6),
944-956.
Tschannen-Moran, M., Hoy, A. W., & Hoy, W. K. (1998). Teacher efficacy: Its meaning
and measure. Review of Educational Research, 68(2), 202-248.
Wang, F.-K., & Bonk, C. J. (2001). A design framework for electronic cognitive appren-
ticeship. Journal of Asynchronous Learning Networks, 5(2), 131-151.
Wheatley, K. F. (2005). The case for reconceptualizing teacher efficacy research.
Teaching and Teacher Education, 21(7), 747-766.
Wild, M. (1998). Creating a role for performance support systems in teacher education.
Technology, Pedagogy and Education, 7(2), 269-295.
Williams, G. C., & Deci, E. L. (1996). Internalization of biopsychosocial values by
medical students: A test of self-determination theory. Journal of Personality and Social
Psychology, 70, 767-779.
Wu, C.-C., & Lee, G. C. (2004). Use of computer-mediated communication in a teaching
practicum course. International Journal of Science and Mathematics Education, 2(4),
511-528.
Yerrick, R., Ross, D., & Molebash, P. (2005). Too close for comfort: Real-time science
teaching reflections via digital video editing. Journal of Science Teacher Education,
16(4), 351-375.
Zeichner, K. (2002). Beyond traditional structures of teacher education. Teacher Education
Quarterly, 29(2), 59-64.

Direct reprint requests to:


Dr. Theodore J. Kopcha
University of Georgia
Department of Educational Psychology and
Instructional Technology
604 Aderhold Hall
Athens, GA 30602
e-mail: tjkopcha@uga.edu