
A THEORETICAL AND EMPIRICAL ANALYSIS OF THE MEASUREMENT OF COLLECTIVE EFFICACY: THE DEVELOPMENT OF A SHORT FORM

ROGER GODDARD
University of Michigan

The present study reports on the development of a 12-item Likert-type measure of collec-
tive efficacy in schools. Designed to assess the extent to which a faculty believes in its
conjoint capability to positively influence student learning, the scale is based on a social
cognitive model that posits perceptions of collective efficacy develop from the cognitive
processing of group members. Faculty perceptions of group competence and the level of
difficulty inherent in the educational task faced by the school are tapped by the scale. The
12-item scale is more theoretically pure than the earlier 21-item scale to which it is
compared. The internal consistency of scores on the 12-item scale is tested
with Cronbach’s alpha, and a test of predictive validity using multilevel modeling is
reported.

Building on Bandura’s (1997) social cognitive theory and in response to
his repeated calls (Bandura, 1982, 1993, 1995, 1997) for systematic study of
the measurement of collective efficacy, a team of researchers at the Univer-
sity of Michigan and The Ohio State University recently conducted a study in
which they developed a 21-item scale to measure collective efficacy
(Goddard, Hoy, & Woolfolk Hoy, 2000). The purpose of the present study
was to reexamine the theoretical underpinnings and the psychometric proper-
ties of that 21-item Collective Efficacy Scale and improve its measurement
by constructing a more conceptually pure and parsimonious version of the
scale.
The present discussion begins with an examination of the unit of analysis
problem in the context of developing scales to measure organizational char-
acteristics such as collective efficacy. The purpose of this discussion is to
explain the analytic choices made in the treatment of the Collective Efficacy
Scale as a school-level variable. Next, the theoretical underpinnings of collective
efficacy are reviewed, and a conceptual model supporting its measure
is presented. Finally, the results of the scale development procedures are pre-
sented and discussed.

Self-Efficacy, Teacher Efficacy, and Collective Efficacy: Unit of Analysis Distinctions
As with self-efficacy and teacher efficacy, the measure of collective efficacy
is based on social cognitive theory. In addition, just as student self-efficacy
and teacher efficacy are important predictors of the success of students (e.g.,
Multon, Brown, & Lent, 1991; Pajares & Graham, 1999) and teachers (e.g.,
Allinder, 1994; Ashton & Webb, 1986; Czerniak & Schriver, 1994; Enochs,
Scharmann, & Riggs, 1995; Gibson & Dembo, 1984), so too is collective effi-
cacy an important predictor of the differences between schools in student
achievement (Goddard, 2000; Goddard et al., in press). However, although
collective efficacy is similar to student self-efficacy and teacher efficacy in
these regards, an important distinction involving the unit of analysis must be
drawn. Social cognitive theory postulates that efficacy beliefs are formed
based on the cognitive processing of individuals (Bandura, 1997). In the case
of student self-efficacy and teacher efficacy, the individual is the unit of anal-
ysis. Hence, it is of little surprise that researchers interested in the effects of
student self-efficacy and teacher efficacy have developed and employed
measures that reflect the individual perceptions of students (e.g., Pajares &
Graham 1999) and teachers (e.g., Gibson & Dembo, 1984); student- and
teacher-level efficacy measures are conceptually consistent with research
questions that investigate the effects of these constructs. Analogously, when
researchers are interested in the differential performances of groups, the unit
of analysis is the group.
The unit of analysis is straightforward for some organizational character-
istics because they occur naturally at the group level (e.g., school size). How-
ever, because collective efficacy reflects the perceptions of group members
about a faculty’s conjoint capability to successfully educate students, its
assessment necessarily involves the combination of individual-level percep-
tual measures. Hence, researchers interested in collective efficacy have
addressed the nested nature of group-perceptual data by aggregating individ-
ual perceptions of collective efficacy to the group level (Bandura, 1993,
1997; Goddard et al., in press; Sampson, Raudenbush, & Earls, 1997). In
each of these studies, group-level aggregates (i.e., mean collective efficacy
scores for each group) were positive predictors of the dependent variables
examined.
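
The aggregation step described above can be illustrated with a brief sketch. The code below is an illustration only, not drawn from the studies cited; the column names (school_id, item_1, item_2) are hypothetical, and a real analysis would include all scale items.

```python
# A minimal sketch (not from the cited studies) of the aggregation step:
# individual teachers' collective efficacy ratings are averaged within each
# school so that the school, not the teacher, becomes the unit of analysis.
# Column names are hypothetical.
import pandas as pd

teacher_responses = pd.DataFrame({
    "school_id": [1, 1, 1, 2, 2],
    "item_1":    [5, 4, 6, 3, 2],
    "item_2":    [4, 4, 5, 2, 3],
})

# One mean score per school for each item; these school-level aggregates,
# not the individual responses, enter subsequent analyses.
school_means = teacher_responses.groupby("school_id").mean()
print(school_means)
```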
Goddard (2001) recently examined the group aggregate approach to mea-
suring collective efficacy by comparing it to a measure of efficacy consensus
that was based on within-school variability in teachers’ collective efficacy
perceptions. Goddard showed that although within-school variability in
teachers’ collective efficacy perceptions does exist, variability is not a good
predictor of student achievement differences among schools. Goddard’s
results supported the use of central tendency measures in the empirical analy-
sis of collective efficacy. Although this finding may seem counterintuitive to
some, it is important to recognize that other disciplines have drawn similar
conclusions about measures of central tendency as predictors of group
behavior. For example, economists argue that public choice (e.g., in political
elections) most often satisfies the median voter because the person whose
preferences are in the middle is the one whose position can gain majority sup-
port (Hyman, 1995). From this perspective, variability in the perceptions of
group members is not a predictor of collective action. Applied to schools,
such reasoning suggests that the group mean effectively captures the behav-
ioral and normative influence that collective efficacy exerts.

The Unit of Analysis Issue in Scale Development

Because the present study examined the development of a scale to measure
collective efficacy, it is important to consider the unit of analysis issue
from a psychometric perspective. When examining the psychometric proper-
ties of a group measure, Sirotnik (1980) argued that researchers too often
make the mistake of analyzing data at the individual level. Unfortunately,
such individual-level analyses ignore the effects of group membership. As
Bryk and Raudenbush (1992) explained, group-level aggregates should be
interpreted differently than the individual-level measures from which they
are constructed. Simply put, if a group attribute is what a researcher seeks to
measure, then such a measure should be analyzed at the group level.
Sirotnik (1980) also argued that the “selection of appropriate levels of
analysis during psychometric studies rests primarily upon what is being
operationalized at the item level” (p. 259). All items in the Collective Effi-
cacy Scale are directed at the group, not the individual level. A collective effi-
cacy item (e.g., “Teachers in this school have what it takes to get the children
to learn”) requires a judgment about the whole faculty. Collective efficacy
items are distinct from teacher efficacy items (e.g., “I have what it takes to get
my students to learn”) that require teachers to make individual-level judg-
ments. Therefore, psychometric analysis of the Collective Efficacy Scale
should be conducted using school-level aggregates of teachers’ responses to
the scale items. Sirotnik observed that to do otherwise (i.e., to analyze the
psychometric properties of a group measure at the individual level) results in
“coefficients . . . representing a mixture of between and within group item
covariation with questionable interpretable value” (p. 255).

Theoretical Foundations of Collective Efficacy
Bandura developed social cognitive theory to explain that the control that
individuals and groups exercise through agentive actions is powerfully influ-
enced by the strength of their efficacy perceptions. As a unified theory of
behavioral change, social cognitive theory specifies that efficacy beliefs are
developed through individual cognitive processing that uniquely weighs the
influence of efficacy-shaping information obtained through mastery experi-
ence, vicarious experience, social persuasion, and affective states (Bandura,
1997). The role of cognitive processing in the interpretation of efficacy infor-
mation is pivotal—the same experiences may lead to different efficacy
beliefs in different individuals, depending on the individuals’ interpretations
(Bandura, 1982, 1997). For schools, collective efficacy refers to the percep-
tions of teachers in a school that the efforts of the faculty as a whole will have
positive effects on students.

A Collective Efficacy Model

The model of collective efficacy employed in the present study is based on
the Tschannen-Moran, Woolfolk Hoy, and Hoy (1998) model of teacher effi-
cacy. Based on their review of more than 20 years of teacher efficacy
research, Tschannen-Moran and her colleagues postulated that teachers
weigh their perceptions of personal competence in relation to the demands of
the task when assessing their efficacy for a given situation. Hence, their
model is consistent with the notion that efficacy perceptions are unique
among other self-regarding constructs because they are both “task- and situation-
specific” (Pajares, 1996, p. 1). The model acknowledges that expectations for
attainment depend both on perceived competence to perform a given task and
the context in which the task will take place. In other words, collective effi-
cacy depends on the interaction of these two factors.
Group teaching competence consists of judgments about the capabilities
that a faculty brings to a given teaching situation. These judgments include
inferences about the faculty’s teaching methods, skills, training, and exper-
tise. Task analysis (TA) refers to perceptions of the constraints and opportuni-
ties inherent in the task at hand. In addition to the abilities and motivations of
students, TA includes teachers’ beliefs about the level of support provided by
the students’ home and the community. A graphic representation of this
model appears in Figure 1.
The 21 items in the Collective Efficacy Scale under investigation in the
present study appear in Table 1. Each item is identified as being directed at
the assessment of either group competence (GC) or TA. Furthermore, the
items are identified as either positively (+) or negatively (–) worded. Validity
and reliability evidence for the scores on the 21-item scale is reported else-
where (Goddard et al., in press).

Figure 1. A simplified model of collective teacher efficacy. [Figure not reproduced: sources of collective efficacy (mastery experience, vicarious experience, social persuasion, emotional state) are weighed through analysis and interpretation, which informs both the analysis of the teaching task and the assessment of teaching competence; these combine in the estimation of collective teacher efficacy, whose consequences (e.g., goals, effort, persistence) feed back as new sources of efficacy information.]

Purpose
Teacher responses to the 21-item Collective Efficacy Scale (Goddard
et al., in press) were reanalyzed to develop an improved measure of collective
efficacy. The present study was prompted by two concerns. First, in the 21-
item scale, the GC and TA elements of collective efficacy were not weighted
equally: 13 items reflected GC, whereas only 8 (less than 40%) reflected the
group task. Because nothing in the conceptual model guiding the measure of
collective efficacy suggests that GC and TA should be unevenly weighted in a
school’s collective efficacy score, it seemed desirable to seek a balance across
categories. Second, a shorter scale would be more parsimonious. The present
study therefore addressed whether it is possible to construct a more balanced
and more parsimonious operational measure of collective efficacy, one that
does not weight one half of the model more heavily than the other.

Method

Sample

The population of interest for the present study was the elementary
schools within one large urban midwestern school district.

Table 1
Original Goddard, Hoy, and Woolfolk Hoy 21-Item Collective Efficacy Scale

Number  Category  Item
CTE1    GC+       Teachers in this school have what it takes to get the children to learn.
CTE2    GC+       Teachers in this school are able to get through to difficult students.
CTE3    GC+       If a child doesn’t learn something the first time teachers will try another way.
CTE4    GC+       Teachers here are confident they will be able to motivate their students.
CTE5    GC+       Teachers in this school really believe every child can learn.
CTE6    GC–       If a child doesn’t want to learn teachers here give up.
CTE7    GC–       Teachers here need more training to know how to deal with these students.
CTE8    GC–       Teachers in this school think there are some students that no one can reach.
CTE9    GC–       Teachers here don’t have the skills needed to produce meaningful student learning.
CTE10   GC–       Teachers here fail to reach some students because of poor teaching methods.
CTE11   TA+       These students come to school ready to learn.
CTE12   TA+       Homelife provides so many advantages the students here are bound to learn.
CTE13   TA–       The lack of instructional materials and supplies makes teaching very difficult.
CTE14   TA–       Students here just aren’t motivated to learn.
CTE15   TA+       The quality of school facilities here really facilitates the teaching and learning process.
CTE16   TA+       The opportunities in this community help ensure that these students will learn.
CTE17   GC+       Teachers here are well-prepared to teach the subjects they are assigned to teach.
CTE18   GC+       Teachers in this school are skilled in various methods of teaching.
CTE19   TA–       Learning is more difficult at this school because students are worried about their safety.
CTE20   TA–       Drug and alcohol abuse in the community make learning difficult for students here.
CTE21   GC–       Teachers in this school do not have the skills to deal with student disciplinary problems.

Note. GC = group competence; TA = task analysis; + = positively worded; – = negatively worded.

To establish a measure of collective efficacy for each school, the principal from each of 50
randomly selected schools was contacted to schedule a time for the adminis-
tration of surveys to school faculty. One principal declined to participate.
Schools with fewer than 5 faculty members responding to the Collective Effi-
cacy Scale were not included in the final sample (Halpin, 1959). Of the 49
participating schools, 2 provided fewer than 5 faculty respondents and conse-
quently were dropped from the sample, leaving 47 schools or 94% of the 50
schools randomly selected for inclusion. Teacher surveys were researcher-
administered to faculty groups during regularly scheduled afternoon faculty
meetings. During these meetings, other data beyond the scope of the present
study were also collected from teachers. For this reason, half of the teachers
in the room received a survey containing the collective efficacy items,
whereas other teachers received different surveys. Within each faculty, sur-
vey distribution was randomized. A total of 452 teachers in 47 different
schools completed surveys, and more than 99% of the returned forms were
usable.
Student achievement and demographic data for all schools in the final
sample were obtained from the central administrative office of the district.
The student-level data were employed to conduct a test of predictive validity
for the scores on the short form of the Collective Efficacy Scale.

Student-Level Measures

Student gender, race/ethnicity, free and reduced-price lunch status (a
proxy for socioeconomic status [SES]), and longitudinal student achieve-
ment data were provided by the school district for 2,536 fourth-grade stu-
dents. The mathematics achievement scores were obtained on a mandatory
statewide assessment. The state department of education provided test docu-
mentation that indicated adequate internal consistency for scores on the
dependent measure. Furthermore, content validity was deemed adequate
because the sampled school district followed the state model fourth-grade
curriculum for which the assessments were developed.
Approximately 1 year before the statewide assessment, the school district
administered the seventh edition of the Metropolitan Achievement Test to
more than 86% of the sampled fourth-grade students. Students’ normal curve
equivalent mathematics total scores from the Metropolitan Achievement Test
served as prior achievement controls
for the test of predictive validity. In a review of the Metropolitan Achieve-
ment Test, Finley (1995) reported that KR-20 reliability coefficients for the
standardization sample were adequate.

Collective Efficacy Scale and Analysis

Faculty members in the sampled schools responded to the original 21-
item Collective Efficacy Scale. Based on the model in Figure 1, the scale
included items reflecting GC and TA, and within each of these categories,
both positively and negatively worded items appeared. The result was a scale
with four types of items but an unbalanced representation across the types (7
GC+, 6 GC–, 4 TA+, and 4 TA–).
Teacher responses to the 21-item Collective Efficacy Scale were aggre-
gated to the school level and subjected to a principal axis factor analysis. The
factor structure coefficients were then used to guide analytic decisions about which
items to include in a shortened scale. Structure coefficients and substantive concerns
were weighed together to determine final item selection for a more parsimonious
and theoretically balanced scale. Twelve items were selected, and a second
principal axis factor analysis was performed with these items. Internal con-
sistency of scores from the scales was measured with Cronbach’s alpha.
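
The scale analyses described in this paragraph (together with the reverse coding of negatively worded items noted in the Results) can be sketched as follows. This is a minimal illustration rather than the study’s actual code: the input array is simulated, the 1-to-6 response metric is an assumption, and the helper functions are defined here for clarity. The indices of negatively worded items follow Table 1.

```python
# A minimal sketch, under assumptions: reverse-code negatively worded items,
# check internal consistency with Cronbach's alpha, and extract one factor by
# iterated principal axis factoring on the schools-by-items matrix of means.
import numpy as np

rng = np.random.default_rng(0)
school_item_means = rng.uniform(1, 6, size=(47, 21))   # placeholder data
# 0-based indices of the negatively worded items in Table 1 (CTE6-10, 13, 14, 19-21)
negative_items = [5, 6, 7, 8, 9, 12, 13, 18, 19, 20]

def reverse_code(X, cols, low=1, high=6):
    """Reflect negatively worded items so a high score always means high efficacy."""
    X = X.copy()
    X[:, cols] = (low + high) - X[:, cols]
    return X

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def principal_axis_one_factor(X, n_iter=50):
    """One-factor principal axis extraction on the correlation matrix,
    iterating communality estimates (squared multiple correlations to start)."""
    R = np.corrcoef(X, rowvar=False)
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))          # initial communalities
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        loading = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0))
        h2 = loading ** 2                           # update communalities
    return loading, eigvals[-1]

X = reverse_code(school_item_means, negative_items)
print("alpha =", round(cronbach_alpha(X), 2))
loadings, eigenvalue = principal_axis_one_factor(X)
print("eigenvalue of extracted factor =", round(eigenvalue, 2))
```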

Validity Tests for Scores on the 12-Item Scale

As a test of criterion-related validity, the relationship between scores on
the original Collective Efficacy Scale and the short form was tested with a
Pearson product-moment correlation. As a test of predictive validity, scores
from the short form of the Collective Efficacy Scale were entered in a multi-
level model as a predictor of between-school differences in student mathe-
matics achievement. Hierarchical linear modeling (HLM; Bryk &
Raudenbush, 1992) was used to analyze the data for the multilevel test.
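
The criterion-related validity check amounts to a product-moment correlation between the two sets of school-level scale scores. A minimal sketch follows; the arrays are simulated stand-ins for the actual school scores.

```python
# A minimal sketch of the criterion-related validity test described above:
# a Pearson product-moment correlation between school-level scores on the
# original 21-item scale and the 12-item short form (arrays are simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
scores_21 = rng.normal(size=47)
scores_12 = scores_21 * 0.98 + rng.normal(scale=0.1, size=47)  # placeholder

r, p = pearsonr(scores_21, scores_12)
print(f"r = {r:.3f}, p = {p:.3g}")
```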
The within-school HLM model included student-level controls for SES,
race, gender, and prior achievement. The effect of student demographic vari-
ables was modeled using dummy variables, whereas the Metropolitan
Achievement Test measures of prior achievement were standardized to a
mean of zero and a standard deviation of one. SES was operationalized as a
dichotomous variable such that students receiving a free or reduced-price
lunch were coded 1, whereas all others were coded 0 for the variable SES.
Similarly, African American students were coded 1 for AFAM, and female
students were coded 1 for FEMALE. All student-level variables were grand-
mean centered. These covariates were employed to adjust school means for
the effects of student demographic characteristics. At the school level, the 12-
item Collective Efficacy Scale scores were grand-mean centered. To facili-
tate interpretation of results, both the student-level variables and the school
scores on the short form of the Collective Efficacy Scale were standardized to a
mean of zero and a standard deviation of one. The multilevel structural equa-
tions employed in the full model are given below.

1. Student level:
   Y_{ij} = B_{0j} + B_{PASTMATH,j} X_{PASTMATH,ij} + B_{PASTREAD,j} X_{PASTREAD,ij} + B_{SES,j} X_{SES,ij}
            + B_{AFAM,j} X_{AFAM,ij} + B_{FEMALE,j} X_{FEMALE,ij} + r_{ij}
2. School level:
   B_{0j} = γ_{00} + γ_{0CE} W_{CE,j} + u_{0j}

At the school level, the coefficient γ_{0CE} estimates the average effect of collec-
tive efficacy, measured by the short form, on differences between schools in
mean student achievement.
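
Substituting the school-level equation into the student-level equation yields a single mixed model in which the school mean collective efficacy score enters as a fixed effect alongside the student-level covariates, with a random intercept for each school. The sketch below shows one way such a model could be estimated with general-purpose software; it is an illustration under assumptions (simulated data, hypothetical column names, grand-mean centering omitted), not the HLM specification used in the study.

```python
# A minimal sketch of a random-intercept model corresponding to equations 1-2:
# math_ij = g00 + g0CE * CE_j + fixed student-level effects + u_0j + r_ij.
# Data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_students = 47, 2536
school = rng.integers(0, n_schools, size=n_students)
ce = rng.normal(size=n_schools)                      # standardized school CE score
df = pd.DataFrame({
    "school": school,
    "ce": ce[school],
    "pastmath": rng.normal(size=n_students),
    "pastread": rng.normal(size=n_students),
    "ses": rng.integers(0, 2, size=n_students),      # 1 = free/reduced-price lunch
    "afam": rng.integers(0, 2, size=n_students),     # 1 = African American
    "female": rng.integers(0, 2, size=n_students),   # 1 = female
})
df["math"] = 0.1 * df["ce"] + 0.55 * df["pastmath"] + rng.normal(size=n_students)

# Random intercept for school; the coefficient on `ce` plays the role of gamma_0CE.
model = smf.mixedlm("math ~ pastmath + pastread + ses + afam + female + ce",
                    data=df, groups=df["school"])
result = model.fit()
print(result.summary())
```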

Results
Consistent with the unit of analysis discussion presented earlier, the anal-
ysis began with the aggregation of teacher responses to each of the 21 items
in the original Collective Efficacy Scale to the school level. The result
was a mean score for each school for each of the 21 items in the scale. Next,
the 21 items were submitted to a principal axis factor analysis at the school
level. Prior to this analysis, negatively worded items were reverse coded.
Based on the theoretical model on which the scale is premised and consistent
with the approach of Goddard et al. (in press), a one-factor solution was
extracted. With an eigenvalue of 7.53, the extracted factor explained 57.89%
of the variance in the original 21 items.
In Table 2, the factor structure coefficients for the 21-item scale are ranked
within each of the four categories (GC+, GC–, TA+, TA–). To maintain ade-
quate coverage across categories, I elected to create a 12-item scale with 3
items representing each of the four categories. One approach to selecting
items for inclusion is to choose those items with the largest structure coeffi-
cients from each of the four categories. This approach yielded only 1 item,
CTE12 (“Homelife provides so many advantages the students here are bound
to learn”), that correlated less than .72 with the extracted factor. The inclusion
of CTE12 was, however, not problematic because its factor structure coeffi-
cient (.65) was deemed adequate, and moreover, CTE12 was an adaptation of
one of the original RAND teacher efficacy items (Armor et al., 1976).
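
The selection rule applied here (retain the three items with the largest structure coefficients within each category, subject to substantive considerations) can be expressed compactly. The sketch below uses an illustrative subset of the Table 2 values.

```python
# A minimal sketch of the selection rule described above: within each of the
# four categories, keep the three items with the largest structure coefficients.
# (Illustrative subset of Table 2; the study also weighed substantive concerns,
# e.g., retaining CTE2 in place of CTE1.)
import pandas as pd

table2 = pd.DataFrame({
    "item":        ["CTE4", "CTE1", "CTE5", "CTE2", "CTE11", "CTE16", "CTE12", "CTE15"],
    "category":    ["GC+",  "GC+",  "GC+",  "GC+",  "TA+",   "TA+",   "TA+",   "TA+"],
    "coefficient": [0.93,   0.84,   0.84,   0.83,   0.82,    0.73,    0.65,    0.61],
})

top3 = (table2.sort_values("coefficient", ascending=False)
              .groupby("category", sort=False)
              .head(3)
              .sort_values(["category", "coefficient"], ascending=[True, False]))
print(top3)
```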
Item selection should also reflect substantive concerns. Thus, it is worth
noting that CTE2 (“Teachers in this school are able to get through to difficult
students”) was retained to represent the GC+ category in lieu of CTE1, even
though the factor structure coefficient for CTE1 was marginally larger (.84
vs. .83). The decision to include CTE2 rather than CTE1 was made because CTE2
is an adaptation of the second original RAND teacher efficacy item. More-
over, inclusion of both items developed from the original RAND items was
deemed to be both historically significant and consistent with more than two
decades of teacher efficacy research.
With the above selections, the 12 items in the short form reflected all
dimensions of the original Collective Efficacy Scale (Goddard et al., in press)
but in equal proportion (i.e., 3 GC+, 3 GC–, 3 TA+, 3 TA–). The 12 items
selected for the short form were then submitted to a principal axis factor anal-
ysis. A one-factor solution was extracted, and the results are displayed in
Table 3. With all but 1 item correlated .73 or above, a single factor having an
eigenvalue of 7.69 and explaining 64.10% of the variance was extracted.

Table 2
Collective Efficacy Scale Factor Matrix With Items Ranked Within Categories

Number  Category  Structure Coefficient  Item
CTE4    GC+       .93                    Teachers here are confident they will be able to motivate their students.
CTE1    GC+       .84                    Teachers in this school have what it takes to get the children to learn.
CTE5    GC+       .84                    Teachers in this school really believe every child can learn.
CTE2    GC+       .83                    Teachers in this school are able to get through to difficult students.
CTE3    GC+       .72                    If a child doesn’t learn something the first time teachers will try another way.
CTE17   GC+       .72                    Teachers here are well-prepared to teach the subjects they are assigned to teach.
CTE18   GC+       .64                    Teachers in this school are skilled in various methods of teaching.
CTE9    GC–       .79                    Teachers here don’t have the skills needed to produce meaningful student learning.
CTE6    GC–       .77                    If a child doesn’t want to learn teachers here give up.
CTE21   GC–       .77                    Teachers in this school do not have the skills to deal with student disciplinary problems.
CTE10   GC–       .76                    Teachers here fail to reach some students because of poor teaching methods.
CTE7    GC–       .74                    Teachers here need more training to know how to deal with these students.
CTE8    GC–       .69                    Teachers in this school think there are some students that no one can reach.
CTE11   TA+       .82                    These students come to school ready to learn.
CTE16   TA+       .73                    The opportunities in this community help ensure that these students will learn.
CTE12   TA+       .65                    Homelife provides so many advantages the students here are bound to learn.
CTE15   TA+       .61                    The quality of school facilities here really facilitates the teaching and learning process.
CTE19   TA–       .80                    Learning is more difficult at this school because students are worried about their safety.
CTE14   TA–       .79                    Students here just aren’t motivated to learn.
CTE20   TA–       .72                    Drug and alcohol abuse in the community make learning difficult for students here.
CTE13   TA–       .62                    The lack of instructional materials and supplies makes teaching very difficult.

Note. GC = group competence; TA = task analysis.

This compares favorably to the single factor obtained from the 21-item scale that
explained 57.89% of the variance. In addition, the 12-item scale yielded
scores with high internal consistency (alpha = .94).

Table 3
Factor Matrix for the 12-Item Collective Efficacy Scale

Number  Category  Structure Coefficient  Item
CTE2    GC+       .79                    Teachers in this school are able to get through to difficult students.
CTE4    GC+       .91                    Teachers here are confident they will be able to motivate their students.
CTE5    GC+       .76                    Teachers in this school really believe every child can learn.
CTE6    GC–       .67                    If a child doesn’t want to learn teachers here give up.
CTE9    GC–       .73                    Teachers here don’t have the skills needed to produce meaningful student learning.
CTE11   TA+       .91                    These students come to school ready to learn.
CTE12   TA+       .75                    Homelife provides so many advantages the students here are bound to learn.
CTE14   TA–       .84                    Students here just aren’t motivated to learn.
CTE16   TA+       .80                    The opportunities in this community help ensure that these students will learn.
CTE19   TA–       .86                    Learning is more difficult at this school because students are worried about their safety.
CTE20   TA–       .82                    Drug and alcohol abuse in the community make learning difficult for students here.
CTE21   GC–       .73                    Teachers in this school do not have the skills to deal with student disciplinary problems.

Note. GC = group competence; TA = task analysis.

Validity Tests

Scores from the 12-item scale and the 21-item scale were highly corre-
lated (r = .983), suggesting that little was lost by omitting almost 43% of the
items (from 21 to 12). The strength of this correlation is important: a low
correlation would have suggested that the 12-item short form was measuring
something different from the original scale.
The multilevel predictive validity model identified between-school vari-
ance in the level-one intercepts (the B_{0j}s) as the school-level dependent variable
(Bryk & Raudenbush, 1992). Thus, the intercepts for each of the 47 sampled
schools served as the operational measure of between-school differences in
student mathematics achievement, adjusted for student demographic charac-
teristics. The results of the predictive validity tests for mathematics achieve-
ment are shown in Table 4. As expected, the short form of the Collective Effi-
cacy Scale was a significant predictor of between-school differences in
student mathematics achievement.
For comparative purposes, selected properties of the original and short
forms of the Collective Efficacy Scale are presented in Table 5.

Table 4
Multilevel Predictive Validity Test: 12-Item Collective Efficacy Scale as a Predictor of
Variation Among Schools in Mathematics Achievement

Variable              Coefficient   Standard Error   t Ratio   p Value
Intercept               .025          .036             0.69    < .001
Collective efficacy     .106          .030             3.61    < .001
PASTMATH                .553          .024            23.23    < .001
SES                    –.068          .015            –4.57    < .001
AFAM                   –.103          .016            –6.60    < .001
FEMALE                  .021          .011             1.905     .06

Table 5
Comparison of the Original and Short Collective Efficacy Scales

Attribute Short Form Original

Number of items 12 21
Internal consistency (alpha) .94 .96
Eigenvalue from principal axis factor analysis 7.69 7.53
Proportion of variance explained with single factor .6410 .5789

Discussion
The results of the present study provide new knowledge about the mea-
surement of collective efficacy in schools. The findings provide initial evi-
dence that a 12-item scale that balances the relative weights given to the two
elements of collective efficacy (assessment of GC and analysis of the teaching
task) is as effective as the original 21-item scale. Indeed, when the scale was
balanced across 12 items rather than unbalanced across 21, the salient factor
structure coefficients were higher, and a single factor explained more of the
total item variation. This improvement may reflect the benefit of attending to
the match between the collective efficacy measure and the conceptual model.
In addition to providing a theoretically balanced measure, the 12-item scale is
more parsimonious, using 43% fewer items than the original.
Although the short form contains substantially fewer items than the origi-
nal, the high correlation between the two forms (r = .983) suggests that they
measure essentially the same construct. Finally, the multilevel tests of
predictive validity indicated that the short form is a positive predictor of
between-school variability in student mathematics achievement.

References
Allinder, R. M. (1994). The relationship between efficacy and the instructional practices of special
education teachers and consultants. Teacher Education and Special Education, 17, 86-95.
Armor, D., Conroy-Oseguera, P., Cox, M., King, N., McDonnell, L., Pascal, A., et al. (1976).
Analysis of the school preferred reading program in selected Los Angeles minority schools
(Report No. R-2007-LAUSD). Santa Monica, CA: RAND. (ERIC Document Reproduction
Service No. ED 130 243)
Ashton, P. T., & Webb, R. B. (1986). Making a difference: Teachers’sense of efficacy and student
achievement. New York: Longman.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37,
122-147.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educa-
tional Psychologist, 28(2), 117-148.
Bandura, A. (1995). Exercise of personal and collective efficacy in changing societies. In A. Bandura
(Ed.), Self-efficacy in changing societies. Cambridge, UK: Cambridge University Press.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data
analysis methods. Newbury Park, CA: Sage.
Czerniak, C. M., & Schriver, M. L. (1994). An examination of preservice science teachers’ be-
liefs and behaviors as related to self-efficacy. Journal of Science Teacher Education, 5(3),
77-86.
Enochs, L. G., Scharmann, L. C., & Riggs, I. M. (1995). The relationship of pupil control to
preservice elementary science teacher self-efficacy and outcome expectancy. Science Edu-
cation, 79(1), 63-75.
Finley, C. J. (1995). Review of the Metropolitan Achievement Test, Seventh Edition. In J. C.
Conoley & J. C. Impara (Eds.), The twelfth mental measurements yearbook (pp. 603-606).
Lincoln: University of Nebraska Press.
Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educa-
tional Psychology, 76(4), 569-582.
Goddard, R. D. (2000, April). Collective efficacy and student achievement. Paper presented
at the annual meeting of the American Educational Research Association, New Orleans,
Louisiana.
Goddard, R. D. (2001). Collective efficacy: A neglected construct in the study of schools and stu-
dent achievement. Journal of Educational Psychology, 93(3), 467-476.
Goddard, R. D., Hoy, W. K., & Woolfolk Hoy, A. (2000). Collective teacher efficacy: Its mean-
ing, measure, and effect on student achievement. American Educational Research Journal,
37(2), 479-507.
Halpin, A. W. (1959). The leader behavior of school superintendents. Chicago: Midwest Ad-
ministrative Center.
Hyman, D. N. (1995). Public finance: A contemporary application of theory to policy. New
York: Harcourt Brace College.
Multon, K. D., Brown, S. D., & Lent, R. W. (1991). Relation of self-efficacy beliefs to academic
outcomes: A meta-analytic investigation. Journal of Counseling Psychology, 38, 30-38.
Pajares, F. (1996, April). Current directions in self research: Self-efficacy. Paper presented at the
annual meeting of the American Educational Research Association, New York.
Pajares, F., & Graham, L. (1999). Self-efficacy, motivation constructs, and mathematics perfor-
mance of entering middle school students. Contemporary Educational Psychology,
24(2), 124-139.
Sampson, R. J., Raudenbush, S. W., & Earls, F. (1997). Neighborhoods and violent crime: A
multilevel study of collective efficacy. Science, 277, 918-924.
Sirotnik, K. A. (1980). Psychometric implications of the unit-of-analysis problem (with exam-
ples from the measurement of organizational climate). Journal of Educational Measure-
ment, 17, 245-282.
Tschannen-Moran, M., Woolfolk Hoy, A., & Hoy, W. K. (1998). Teacher efficacy: Its meaning
and measure. Review of Educational Research, 68(2), 202-248.
