Technology-Enhanced Student Teacher Supervision
THEODORE J. KOPCHA
University of Georgia
CHRISTIANNA ALGER
San Diego State University
ABSTRACT
Policy makers and teacher educators alike have advocated for reform in the
preparation of teachers, calling for improvement in the quality of the supervision
experience (Feiman-Nemser, 2001; Paris & Gespass, 2001), its connection to the
university (Darling-Hammond, 2006; Zeichner, 2002), and the integration of
technology into the coursework of student teachers (Hew & Knapczyk, 2007;
Lieberman & Pointer Mace, 2010; Otero, Peressini, Meymaris, Ford, Garvin,
Harlow, et al., 2005). Many researchers are concerned that the methods used
to prepare teachers, which have remained fairly static over the past 70 years,
no longer adequately prepare beginning teachers for the demands of today’s
classrooms (Darling-Hammond, 2006; Hess, 2009). Darling-Hammond (2006)
noted that traditional methods of preparation are insufficient because teachers are
held to higher standards than ever before—they are simultaneously expected to
teach complex skills to a broader and increasingly diverse range of learners,
effectively integrate technology into their lessons, and continue to examine and
improve their own teaching. Some have suggested that such high demands, in
conjunction with outdated methods of preparation, explain the issues of attrition
among teachers within the first few years of their careers (Darling-Hammond,
2010; Hess, 2009).
In response to these calls for reform, the teacher education and educational
technology departments at our institution collaborated to create a new program
of student teacher supervision. The program, called eSupervision, is a course
management system designed using a cognitive apprenticeship framework to
support the performance of student teachers, cooperating teachers, and university
supervisors with a variety of technologies during the field experience. Participants
in eSupervision engage in a number of technology-enhanced supervision
activities, such as video reflection with an expert, online discussion of classroom
management strategies with peers and experts alike, and building lesson plans
with a performance support system. Those activities were purposefully included
in the program due to their rich history in the literature. Researchers have
reported positive changes in the performance and/or attitudes of novice teachers
as a result of connecting more frequently with experts through telecommuni-
cations technology (Liu, 2005; Wu & Lee, 2004), observing one’s teaching
performance on video (Harford & MacRuairc, 2008; Lee & Wu, 2006), and
learning to plan instruction using electronic performance support systems (Hacker
& Sova, 1998; Wild, 1998).
Researchers have theorized that novice teachers who engage in technology-
supported learning environments designed from socio-cognitive theory would
have more opportunity to inform their practice and their efficacy (Joia, 2001;
Wang & Bonk, 2001). However, prior studies on the use of technology with
novice teachers focus more on participant perceptions of the tool and less on
student outcomes such as performance and efficacy (Gentry, Denton, & Kurz,
2008). The purpose of this study was to compare the knowledge, performance, and
self-efficacy of two cohorts of student teachers engaged in the field experience,
THE IMPACT OF TECHNOLOGY-ENHANCED SUPERVISION / 51
where one cohort participated in eSupervision and the other did not. We begin
the article by discussing the literature regarding the assessment of teacher
knowledge and performance in teacher education, teacher self-efficacy, and the
technology used to enhance the supervision of student teachers.
THEORETICAL PERSPECTIVES
Teacher Self-Efficacy
Teacher self-efficacy—that is, a teacher’s belief in his or her ability to achieve
specific results in a given context (Tschannen-Moran, Hoy, & Hoy, 1998)—
plays a role in a teacher’s performance. Teachers’ efficacy beliefs affect what
they are willing to attempt in the classroom and how persistent they will be at
succeeding as teachers (Mulholland & Wallace, 2001; Tait, 2008; Tschannen-
Moran & Hoy, 2001). Tschannen-Moran et al. (1998) describe the interplay
52 / KOPCHA AND ALGER
and receiving recognition for their successes (Mulholland & Wallace, 2001;
Tschannen-Moran & Hoy, 2007). In other words, they regularly engage in
activities that expose them to all three sources of efficacy knowledge—mastery
experiences, vicarious experiences, and verbal persuasion—at a time when they
are most open to them. Tschannen-Moran and Hoy (2007) found evidence of
this in a regression analysis that included contextual factors, verbal persuasion,
and mastery experience as predictors of self-efficacy. In that study, both mastery
experiences and verbal persuasion made a significant contribution to the efficacy
beliefs of 75 novice teachers, whereas only mastery experiences made a significant
contribution to the efficacy beliefs of the remaining 180 experienced teachers.
METHOD
Participants
A total of 41 student teachers (19 eSupervision, 22 non-eSupervision) from a
large university in the Southwest participated in the study. The student teachers
at this university engaged in a year-long student teaching field placement as part
Teacher Self-Efficacy
The primary measure of teacher self-efficacy was the short form (12 items)
of the Teachers’ Sense of Efficacy Scale (TSES) (Tschannen-Moran & Hoy,
2001). Student teachers reported on how much they felt they could do with
regard to a number of common teaching tasks (class management, student moti-
vation, delivery of content) on a 9-point scale with descriptors at every odd
number, from 1 (nothing) to 3 (very little) to 5 (some influence) to 7 (quite a bit)
to 9 (a great deal). A sample item from the measure is, “How much can you
do to control disruptive behavior in the classroom?”
The TSES was selected for this study because it has been established as
valid and reliable when used with preservice teachers. Fives and Buehl (2010)
found that the TSES was a valid and reliable indicator of the efficacy beliefs
of 102 experienced and 270 preservice teachers. While the measure reflected
efficacy beliefs within three specific factors (student engagement, instructional
practice, and classroom management) among the experienced teachers, they found
that the TSES only reflected a general sense of efficacy—that is, a one-factor
solution—among the preservice teachers. The reliability of the instrument in
this study was .87 and has been reported at .90 or higher in prior studies (Fives
& Buehl, 2010; Tschannen-Moran & Hoy, 2001).
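The reliability coefficient reported here is Cronbach’s alpha, which can be computed directly from a respondents-by-items score matrix. The sketch below uses a small, hypothetical set of 9-point responses (not the study’s data) to illustrate the calculation:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 9-point responses: 4 respondents x 3 items
responses = np.array([
    [4, 5, 4],
    [6, 7, 6],
    [5, 6, 5],
    [7, 8, 8],
])
print(round(cronbach_alpha(responses), 3))  # 0.986 for these made-up items
```

An alpha of .87, as reported for the TSES in this study, indicates that the 12 items hang together well enough to be averaged into a single efficacy score.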
Procedures
One cohort of 19 student teachers received the intervention of eSupervision
and the other cohort of 22 teachers did not. The cohort that received eSupervision
was selected because the instructor of the seminar for that cohort’s field experi-
ence volunteered to use eSupervision. Students, however, were assigned to
eSupervision in a random fashion. All students in the entire secondary single
subjects credential program at this university were placed randomly into one
of four cohorts at the beginning of the year. In this way, students were equally
likely to be placed in the cohort that received eSupervision or in the com-
parison group.
Student teachers in the eSupervision group engaged in the instructional pro-
gram over the course of two semesters. Because they participated in eSupervision
as part of an on-campus seminar course that met once per week, the seminar
leader set the pace for completing the activities in eSupervision. The seminar
leader also participated in online discussions and was responsible for monitoring
and promoting expert participation in eSupervision. Non-eSupervision students
participated in a similar seminar course for their own cohort; however, this
seminar had no instructional program like eSupervision as part of the coursework.
Instead, they participated in a series of face-to-face, in-class discussions that
focused on resolving issues that were specific to the cohort.
Although all student teachers completed the PACT, they had to be recruited
to participate in our additional surveys and interviews. Student teachers were
solicited for participation at the beginning of their second semester and 17
(8 eSup, 9 non-eSup) completed the TSES at that time (mid-year TSES). One
week prior to the end of the second semester, they once again completed the
TSES (post TSES), and also completed the Learning to Teach Scale and
the Learning Climate Questionnaire. Nine (5 eSup, 4 non-eSup) participated
in telephone interviews shortly after completing the surveys; the interviews
were recorded and transcribed for analysis.
RESULTS
The correlation analysis revealed that post-TSES scores were not significantly
correlated with Learn to Teach (r = .16) or Learning Climate scores (r = .22). None
of the PIAR mean scores were significantly correlated with post-TSES; these
correlations were relatively low and non-significant, ranging from –.23 to .27
with a mean of –.01. Correlations between PIAR mean scores and the Learning
Climate and Learn to Teach scores were similarly low and non-significant. This
suggested that PIAR and TSES mean scores could be analyzed without including
covariates in the model.
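This covariate screen rests on simple bivariate correlations. As a minimal sketch (with hypothetical scores, not the study’s data), a Pearson coefficient between an outcome and a candidate covariate can be computed as:

```python
import numpy as np

# Hypothetical post-TSES and Learning Climate scores for 17 student teachers
post_tses = np.array([6.2, 7.1, 6.8, 7.4, 6.5, 7.0, 6.9, 7.3, 6.4,
                      7.2, 6.7, 7.5, 6.6, 7.1, 6.8, 7.0, 6.3])
climate = np.array([5.9, 6.4, 6.1, 5.8, 6.6, 6.0, 6.3, 6.5, 5.7,
                    6.2, 6.8, 6.1, 5.9, 6.4, 6.0, 6.7, 6.2])

# Pearson r between the outcome and a candidate covariate; a small,
# non-significant r justifies leaving the covariate out of the model
r = np.corrcoef(post_tses, climate)[0, 1]
print(round(r, 2))
```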
One-way analysis of variance (ANOVA) on the 17 mean scores for each
potential covariate revealed that both groups (eSup/non-eSup) experienced similar
and relatively high levels of post-efficacy and guidance from their cooperating
teacher. No statistically significant differences were found between groups on
either Post-TSES scores [M(eSup) = 7.06, M(non-eSup) = 6.77], Learning to Teach
scores [M(eSup) = 4.70, M(non-eSup) = 4.29], or Learning Climate scores [M(eSup) =
6.39, M(non-eSup) = 5.97]. The lack of statistically significant differences on
these measures suggested that both PIAR and TSES scores could be analyzed
without including covariates.
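The group comparison described above is a one-way ANOVA with two levels. A minimal, self-contained sketch (hypothetical scores, not the study’s data) computes the F statistic directly from sums of squares:

```python
import numpy as np

def one_way_f(group_a, group_b):
    """F statistic for a one-way ANOVA comparing two independent groups."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    grand = np.concatenate([a, b]).mean()
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical Learning Climate scores for the two cohorts; the group
# means and spreads overlap, so F stays small and non-significant
esup = [7.0, 6.0, 6.8, 5.9, 6.5, 6.2, 7.1, 5.8]
non_esup = [6.1, 5.5, 6.9, 5.8, 6.4, 5.6, 6.8, 5.7, 5.9]
print(round(one_way_f(esup, non_esup), 2))
```

With two groups this F equals the square of the independent-samples t statistic, so the two tests are interchangeable here.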
Analysis of Self-Efficacy
A total of 17 student teachers completed both the mid-year TSES and the post
TSES (8 eSupervision and 9 non-eSupervision). Repeated measures ANOVA on
TSES scores at two points in time (mid-year and post) revealed a statistically
significant time by treatment interaction, F(1, 15) = 9.23, p < .01, η² = .38.
Students in the eSupervision group had lower TSES scores than non-eSupervision
students at the mid-year, M = 6.59 and 7.00, but higher post-TSES scores,
M = 7.06 and 6.77. The main effect for time—that is, mid-year and post TSES
scores (M = 6.81 and 6.91)—was not statistically significant.
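Two small checks follow from the statistics reported above. In a 2 (group) × 2 (time) mixed design, the time-by-treatment interaction is equivalent to a between-groups test on gain scores (post minus mid-year), and the partial eta-squared can be recovered from the F statistic and its error degrees of freedom. The arithmetic below verifies the reported effect size:

```python
# Recover partial eta-squared from the reported interaction F and error df:
# eta^2 = F / (F + df_error)
F, df_error = 9.23, 15
eta_sq = F / (F + df_error)
print(round(eta_sq, 2))  # 0.38, matching the reported effect size
```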
Electronic Mail
It is likely that e-mail had a similar impact on both groups in this study. Students
from each group used e-mail with similar frequency and for similar purposes. All
stated that they infrequently used e-mail with their supervisor, primarily to arrange
site visits and triad meetings. With regard to the cooperating teacher, e-mail
contact varied in frequency from daily (1 eSup, 1 non-eSup) to weekly (2 eSup,
3 non-eSup) to only several times a semester (2 eSup, 1 non-eSup). Topics ranged
from planning to classroom management issues to the organization of
the school. The two participants who used e-mail daily with the cooperating
teacher were the only two who reported using it to receive advice about lesson
plans and share classroom experiences with the cooperating teacher.
Video Reflection
the feedback they received was a powerful source of information about their
ability to implement instruction.
The video experience afforded eSupervision students two opportunities
to improve their efficacy. First, the act of observing themselves functioned as a
vicarious experience through self-modeling, which created an opportunity to
challenge their own thinking about their level of mastery with regard to teaching.
One described: “I was like, ‘Wow! My kids are being really well-behaved
and that’s something really smart that this kid just said, and I don’t even know
if I acknowledged that he said it.’” Another student noted that the observation
template helped guide this self-modeling in a purposeful manner, stating, “I tend
to be hard on myself. So having specific areas [of PIAR] to watch for, and
thinking, ‘Oh, I think I’m actually ok with doing this’ makes me more confident
as a teacher.”
The second opportunity to improve their efficacy came from the feedback the
supervisor provided student teachers, which acted as a form of verbal persuasion
that informed their level of confidence in their ability to teach. One student
explained, “[My supervisor] pointed out things that I missed even watching it.
He saw a different perspective but he came back with a few things and I was
like, ‘Wow! I didn’t notice this before.’”
All non-eSupervision students videotaped themselves teaching. Similar to the
eSupervision group, watching the video provided them with an opportunity to
engage in a vicarious experience of self-modeling. They were able to observe their
own performance and gauge their success with their own teaching ability. One
student explained,
[The video reflection] gave me an opportunity to see how I was delivering
the information, and to critique myself, whether it be a minor thing like
saying “ok” and things like that. Also, I was able to see if some of the students
were either engaged more or less than how I thought they were in the lesson.
Another similarly noted, “I could see the strategies [I was using] and see
that they were working.” Unlike the eSupervision group, however, they did not
receive feedback, or verbal persuasion, from an expert; instead, feedback came
from peers in a large-group setting. They reported that the feedback
helped them identify and change some issues in their own teaching, but felt
that overall it was superficial and uninformative.
eSupervision students were required to post daily lesson plans using the Lesson
Plan EPSS. This appears to have acted as a mastery experience for them—all
reported that the repeated use of this tool helped them internalize and master the
process of planning. One stated, “While I hated [planning daily lessons], it turned
out to be a good thing because I got better and better at it as I did it more and more.”
Two factors appear to have had a negative effect on the value of this tool for
students. First, the quality of the experience appears to have been diminished
by the frequency and quality of feedback provided by experts. Several noted
that the feedback they received was often not timely enough to be useful; all
wanted to receive more feedback on their posted lessons. Second, it was clear
that as mastery levels increased, students felt the need to use the tool less.
One student summarized this trend:
It helped me to really get the hang of the whole “planning every day”
and everything. By second semester, I had already gotten the idea of how
it flowed and it was just an extra thing that I didn’t participate in much.
In contrast, the non-eSupervision group reported that they lacked any formal
mechanism for learning to plan lessons and relied heavily on the cooperating
teacher for assistance with this task. Several noted that this was due to a lack of
instruction at the university level regarding the planning of lessons.
Discussion Boards
The use of the observation form also created a formal and guided opportunity
for student teachers to improve their self-efficacy through verbal persuasion.
Discussion guided by the observation form provided valuable information
that informed their level of mastery with a specific focus on PIAR and helped
them discern the positive attributes of their performance from the negative ones.
One student concisely summarized this, stating, “I tend to be hard on myself,
so having specific areas where I know I’m actually doing ok with something
makes me more confident.”
DISCUSSION
EPSS (Hacker & Sova, 1998; Wild, 1998), and are better able to identify and
change issues with their own teaching as a result of video reflection paired
with expert feedback (Lee & Wu, 2006).
Scores from both the TSES and PACT were higher for the eSupervision group
at the conclusion of the field experience. This supports the idea that teacher
performance and self-efficacy influence and are related to each other (Tschannen-
Moran et al., 1998), and that the students in this study reported efficacy beliefs
that aligned with their own ability. We were surprised, however, that there was
no significant correlation between post-TSES and PIAR scores and no significant
difference in PIAR scores between groups. This is likely because the scale used
to evaluate PIAR has a relatively narrow range (from 1 to 4) and may not have
been sensitive enough to detect differences given the small sample
sizes associated with our measures.
The small sample size associated with both measures and the lack of a pre-
treatment efficacy measure limit the generalizability of the findings. However,
there are several reasons to believe that similar results would occur with a
different group of students. First, the interviews suggest that eSupervision students
had greater opportunity to inform their efficacy knowledge throughout the year,
which corresponds with the efficacy interaction favoring eSupervision students.
In addition, students were likely to have similar levels of efficacy prior to
treatment as they had little to no experience in actual classrooms before their
field experience. This supports the idea that eSupervision students experienced
something that influenced their efficacy beliefs differently than non-eSupervision
students. Finally, the results of this study are consistent with those of other
researchers, who found that technology improved the performance and attitudes of
teachers learning to teach (Liu, 2005; Pianta, Mashburn, Downer, Hamre, &
Justice, 2008). Together, these factors support the validity of the findings in
this study.
While the 6% effect size associated with the impact of eSupervision on PIAR
scores may seem small (see Cohen, 1988), we feel it is promising when con-
sidered within our context. Hill, Bloom, Black, and Lipsey (2008) suggest
judging an effect size by comparing it to the results of other similar studies, as
well as by how well it addresses gaps noted in related policies and literature.
eSupervision was designed to improve longstanding issues with student teacher
supervision and augment rather than replace the mentoring that occurs during
the field experience. Our effect size indicates that we did augment that mentor-
ing, and this study is an important initial step toward generating an empirical
body of evidence that examines the impact of technology-enhanced super-
vision methods on student teacher efficacy beliefs and actual classroom practice
during the field experience. As noted by Gentry et al. (2008), most studies on
technology-supported mentoring in teacher education in general or during the
field experience in particular are not quantitative and lack a connection to student
performance outcomes.
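To relate the variance-explained figure above to the more familiar standardized-mean-difference scale, eta-squared can be converted to Cohen’s f and, for two groups, to Cohen’s d. This is a standard conversion, though it assumes the 6% figure is a partial eta-squared (as the Cohen, 1988 reference suggests) and that group sizes are roughly equal:

```python
import math

def eta_sq_to_d(eta_sq: float) -> float:
    """Convert eta-squared to Cohen's d for a two-group comparison:
    Cohen's f = sqrt(eta^2 / (1 - eta^2)), and with two equal groups d = 2f."""
    return 2 * math.sqrt(eta_sq / (1 - eta_sq))

# A 6% (eta^2 = .06) effect corresponds to d of about 0.5,
# conventionally a medium rather than small effect
print(round(eta_sq_to_d(0.06), 2))
```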
CONCLUSION
The preparation of teachers is more challenging than ever. As the demands
of today’s classrooms continue to increase, teacher educators are faced with
accomplishing more with their student teachers in less time, with fewer resources,
and with increasing levels of accountability. One likely response to such chal-
lenges will be to seek alternatives to the traditional approach to supervising
student teachers. eSupervision was designed to do just that—that is, to use
technology to do more with the resources available for supervising student
teachers while responding to increased pressures and demands for account-
ability and effectiveness. While limited in several ways, the results of this
study suggest that we were successful in improving the supervision of student
teachers. In addition, the results indicate that there are ways in which teacher
educators can use technology to accomplish these aims without detriment,
and potentially with improvement, to the knowledge, performance, and efficacy
beliefs of preservice teachers. Such evidence serves to inform our knowledge
of what effective supervision looks like and how to build effective supervisory
experiences with technology to better support all student teachers during their
field experience.
REFERENCES
Alger, C., & Kopcha, T. J. (2009). eSupervision: A technology framework for the 21st
century field experience in teacher education. Issues in Teacher Education, 18(2),
31-46.
Alger, C., & Kopcha, T. J. (2011). Technology supported cognitive apprenticeship trans-
forms the student teaching field experience. The Teacher Educator, 46(1), 71-88.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman
& Company.
Barnett, M., Keating, T., Harwood, W., & Saam, J. (2002). Using emerging technologies
to help bridge the gap between university theory and classroom practice: Challenges
and successes. School Science and Mathematics, 102(6), 299-313.
Bates, R., & Khasawneh, S. (2007). Self-efficacy and college students’ perceptions and
use of online learning systems. Computers in Human Behavior, 23(1), 175-191.
Blanton, W. E., Moorman, G., & Trathen, W. (1998). Telecommunications and teacher
education: A social constructivist view. Review of Research in Education, 23, 235-275.
Bodzin, A. C., & Park, J. P. (2002). Using a nonrestrictive web-based forum to promote
reflective discourse with preservice science teachers. Contemporary Issues in Tech-
nology and Teacher Education, 2(3), 267-289.
Britten, J., Mullen, L., & Stuve, M. (2003). Program reflections on the role of longi-
tudinal digital portfolios in the development of technology competence. The Teacher
Educator, 39(2), 79-94.
Brouwers, A., & Tomic, W. (2000). A longitudinal study of teacher burnout and perceived
self-efficacy in classroom management. Teaching and Teacher Education, 16(2),
239-253.
Caillier, S. L., & Riordan, R. C. (2009). Teacher education for the schools we need.
Journal of Teacher Education, 60(5), 489-496.
Capa, Y., & Woolfolk-Hoy, A. (2005). What predicts student teacher self-efficacy?
Academic Exchange Quarterly, 10(4), 123-127.
Caprara, G. V., Barbaranelli, C., Steca, P., & Malone, P. S. (2006). Teachers’ self-efficacy
beliefs as determinants of job satisfaction and students’ academic achievement:
A study at the school level. Journal of School Psychology, 44(6), 473-490.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Erlbaum.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking
visible. American Educator, 15(3), 6-11, 38-46.
Crawford, B. A., & Patterson, B. (2004, January). Scaffolding videotape reflections to
enhance teacher candidates’ practice. Paper presented at the International Conference
of the Association for the Education of Teachers in Science, Nashville, TN.
Darling-Hammond, L. (2006). Constructing 21st-century teacher education. Journal of
Teacher Education, 57(3), 300-314.
Darling-Hammond, L. (2010). Teacher education and the American future. Journal of
Teacher Education, 61(1-2), 35-47.
Feiman-Nemser, S. (2001). From preparation to practice: Designing a continuum to
strengthen and sustain teaching. Teachers College Record, 103(6), 1013-1055.
Fives, H., & Buehl, M. M. (2010). Examining the factor structure of the Teachers’ Sense
of Efficacy Scale. Journal of Experimental Education, 78(1), 118-134.
Fives, H., Hamman, D., & Olivarez, A. (2007). Does burnout begin with student-teaching?
Analyzing efficacy, burnout, and support during the student-teaching semester.
Teaching and Teacher Education, 23(6), 916-934.
Galluzzo, G. (2005). Performance assessment and renewing teacher education: The pos-
sibilities of the NBPTS standards. The Clearing House, 78(4), 142-145.
Gentry, L. B., Denton, C. A., & Kurz, T. (2008). Technologically-based mentoring
provided to teachers: A synthesis of the literature. Journal of Technology and Teacher
Education, 16(3), 339-373.
Gery, G. (1991). Electronic performance support systems. Cambridge, MA: Ziff Institute.
Hacker, R., & Sova, B. (1998). Initial teacher education: A study of the efficacy of
computer mediated courseware delivery in a partnership context. British Journal of
Educational Technology, 29(4), 333-341.
Hamman, D., & Olivarez, J. A. (2005). Learning to Teach Questionnaire: A measure of
the interaction between cooperating and student teachers. Paper presented at the
American Educational Research Association.
Harford, J., & MacRuairc, G. (2008). Engaging student teachers in meaningful reflective
practice. Teaching and Teacher Education, 24(7), 1884-1892.
Hess, F. M. (2009). Revitalizing teacher education by revisiting our assumptions about
teaching. Journal of Teacher Education, 60(5), 450-457.
Hew, K. F., & Knapczyk, D. (2007). Analysis of ill-structured problem solving, mentoring
functions, and perceptions of practicum teachers and mentors toward online mentoring
in a field-based practicum. Instructional Science, 35(1), 1-40.
Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for
interpreting effect sizes in research. Child Development Perspectives, 2(3), 172-177.
Joia, L. A. (2001). Evaluation of a hybrid socio-constructivist model for teacher training.
Journal of Technology and Teacher Education, 9(4), 519-549.
Knoblauch, D., & Hoy, A. W. (2008). “Maybe I can teach those kids.” The influence
of contextual factors on student teachers’ efficacy beliefs. Teaching and Teacher
Education, 24(1), 166-179.
Labone, E. (2004). Teacher efficacy: maturing the construct through research in alterna-
tive paradigms. Teaching and Teacher Education, 20(4), 341-359.
Larsen, L. R., & Calfee, R. C. (2005). Assessing teacher candidate growth over time.
Clearing House, 78(4), 151-157.
Lee, G. C., & Wu, C.-C. (2006). Enhancing the teaching experience of pre-service teachers
through the use of videos in web-based computer-mediated communication (CMC).
Innovations in Education and Teaching International, 43(4), 369-380.
Levin, J., & Waugh, M. (1998). Teaching teleapprenticeships: Electronic network-
based educational frameworks for improving teacher education. Interactive Learning
Environments, 6(1-2), 39-58.
Liaw, E.-C. (2009). Teacher efficacy of pre-service teachers in Taiwan: The influence of
classroom teaching and group discussions. Teaching and Teacher Education, 25(1),
176-180.
Lieberman, A., & Pointer Mace, D. (2010). Making practice public: Teacher learning in
the 21st century. Journal of Teacher Education, 61(1-2), 77-88.
Liu, T.-C. (2005). Web-based cognitive apprenticeship model for improving pre-service
teachers’ performances and attitudes towards instructional planning: Design and
field experiment. Educational Technology & Society, 8(2), 136-149.
Main, S., & Hammond, L. (2008). Best practice or most practiced? Pre-service teachers’
beliefs about effective behaviour management strategies and reported self-efficacy.
Australian Journal of Teacher Education, 33(4), 28-39.
Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of
new methods. Newbury Park, CA: Sage.
Mulholland, J., & Wallace, J. (2001). Teacher induction and elementary science teaching:
Enhancing self-efficacy. Teaching and Teacher Education, 17(2), 243-261.
O’Neill, D. K., & Harris, J. B. (2004). Bridging the perspectives and developmental
needs of all participants in curriculum-based telementoring programs. Journal of
Research on Technology in Education, 37(2), 111-128.
Otero, V., Peressini, D., Meymaris, K. A., Ford, P., Garvin, T., Harlow, D., et al. (2005).
Integrating technology into teacher education: A critical framework for implementing
reform. Journal of Teacher Education, 56(1), 8-23.
Owen, S. V., & Froman, R. D. (1998). Uses and abuses of the analysis of covariance.
Research in Nursing & Health, 21, 557-562.
Owston, R., Wideman, H., Murphy, J., & Lupshenyuk, D. (2008). Blended teacher pro-
fessional development: A synthesis of three program evaluations. The Internet and
Higher Education, 11(3-4), 201-210.
Paris, C., & Gespass, S. (2001). Examining the mismatch between learner-centered
teaching and teacher-centered supervision. Journal of Teacher Education, 52(5),
398-412.
Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand
Oaks, CA: Sage.
Pecheone, R. L., & Chung, R. R. (2006). Evidence in teacher education: The Performance
Assessment for California Teachers (PACT). Journal of Teacher Education, 57(1),
22-36.
Pianta, R. C., Mashburn, A. J., Downer, J. T., Hamre, B. K., & Justice, L. (2008).
Effects of web-mediated professional development resources on teacher-child inter-
actions in pre-kindergarten classrooms. Early Childhood Research Quarterly, 23(4),
431-451.
Romano, M., & Schwartz, J. (2005). Exploring technology as a tool for eliciting and
encouraging beginning teacher reflection. Contemporary Issues in Technology and
Teacher Education [Online serial], 5(2). Retrieved from http://www.citejournal.org/
vol5/iss2/general/article1.cfm
Seidman, I. E. (1998). Interviewing as qualitative research: A guide for researchers in
education and the social sciences. New York: Teachers College Press.
Sherin, M. G., & van Es, E. A. (2005). Using video to support teachers’ ability to notice
classroom interactions. Journal of Technology and Teacher Education, 13(3), 475-491.
Tait, M. (2008). Resilience as a contributor to novice teacher success, commitment, and
retention. Teacher Education Quarterly, 35(4), 57-75.
Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive
construct. Teaching and Teacher Education, 17(7), 783-805.
Tschannen-Moran, M., & Hoy, A. W. (2007). The differential antecedents of self-efficacy
beliefs of novice and experienced teachers. Teaching and Teacher Education, 23(6),
944-956.
Tschannen-Moran, M., Hoy, A. W., & Hoy, W. K. (1998). Teacher efficacy: Its meaning
and measure. Review of Educational Research, 68(2), 202-248.
Wang, F.-K., & Bonk, C. J. (2001). A design framework for electronic cognitive appren-
ticeship. Journal of Asynchronous Learning Networks, 5(2), 131-151.
Wheatley, K. F. (2005). The case for reconceptualizing teacher efficacy research.
Teaching and Teacher Education, 21(7), 747-766.
Wild, M. (1998). Creating a role for performance support systems in teacher education.
Technology, Pedagogy and Education, 7(2), 269-295.
Williams, G. C., & Deci, E. L. (1996). Internalization of biopsychosocial values by
medical students: A test of self-determination theory. Journal of Personality and Social
Psychology, 70, 767-779.
Wu, C.-C., & Lee, G. L. (2004). Use of computer-mediated communication in a teaching
practicum course. International Journal of Science and Mathematics Education, 2(4),
511-528.
Yerrick, R., Ross, D., & Molebash, P. (2005). Too close for comfort: Real-time science
teaching reflections via digital video editing. Journal of Science Teacher Education,
16(4), 351-375.
Zeichner, K. (2002). Beyond traditional structures of teacher education. Teacher Education
Quarterly, 29(2), 59-64.