ROGER GODDARD
University of Michigan
The present study reports on the development of a 12-item Likert-type measure of collec-
tive efficacy in schools. Designed to assess the extent to which a faculty believes in its
conjoint capability to positively influence student learning, the scale is based on a social
cognitive model that posits perceptions of collective efficacy develop from the cognitive
processing of group members. Faculty perceptions of group competence and the level of
difficulty inherent in the educational task faced by the school are tapped by the scale. The
item scale is more theoretically pure than the earlier 21-item scale with
which it is compared. The internal consistency of scores on the 12-item scale is tested
with Cronbach’s alpha, and a test of predictive validity using multilevel modeling is
reported.
Theoretical Foundations
of Collective Efficacy
Bandura developed social cognitive theory to explain that the control that
individuals and groups exercise through agentive actions is powerfully influ-
enced by the strength of their efficacy perceptions. As a unified theory of
behavioral change, social cognitive theory specifies that efficacy beliefs are
developed through individual cognitive processing that uniquely weighs the
influence of efficacy-shaping information obtained through mastery experi-
ence, vicarious experience, social persuasion, and affective states (Bandura,
1997). The role of cognitive processing in the interpretation of efficacy infor-
mation is pivotal—the same experiences may lead to different efficacy
beliefs in different individuals, depending on the individuals’ interpretations
(Bandura, 1982, 1997). For schools, collective efficacy refers to the percep-
tions of teachers in a school that the efforts of the faculty as a whole will have
positive effects on students.
[Figure: Sources of collective efficacy. Mastery experience, vicarious
experience, social persuasion, and emotional state are weighed through
analysis and interpretation by teachers, informing both an analysis of the
teaching task and an assessment of teaching competence; these combine in an
estimation of collective teacher efficacy, whose consequences (e.g., goals,
effort, persistence) feed back into the sources of efficacy information.]
Validity and reliability evidence for the scores on the 21-item scale is
reported elsewhere (Goddard et al., in press).
Purpose
Teacher responses to the 21-item Collective Efficacy Scale (Goddard
et al., in press) were reanalyzed to develop an improved measure of collective
efficacy. The present study was undertaken for several reasons. First, in the
21-item scale, the group competence (GC) and task analysis (TA) elements of
collective efficacy were not weighted equally: review of the scale shows that
13 items reflected GC, whereas only 8 (less than 40%) reflected the group
task. Because nothing in the conceptual model guiding the measure of
collective efficacy suggests that GC and TA should be unevenly weighted in a
school’s collective efficacy score, it seemed desirable to seek a balance
across categories. A second, related goal was parsimony. The present study
therefore addressed whether it is possible to construct a shorter operational
measure of collective efficacy that weighs the two halves of the model
equally.
Method
Sample
The population of interest for the present study was the elementary
schools within one large urban midwestern school district. To establish a
measure of collective efficacy for each school, the principal from each of 50
randomly selected schools was contacted to schedule a time for the
administration of surveys to school faculty. One principal declined to
participate.

EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT

Table 1
Original Goddard, Hoy, and Woolfolk Hoy 21-Item Collective Efficacy Scale
Schools with fewer than 5 faculty members responding to the Collective Effi-
cacy Scale were not included in the final sample (Halpin, 1959). Of the 49
participating schools, 2 provided fewer than 5 faculty respondents and conse-
quently were dropped from the sample, leaving 47 schools or 94% of the 50
schools randomly selected for inclusion. Teacher surveys were researcher
administered to faculty groups during regularly scheduled afternoon faculty
meetings. During these meetings, other data beyond the scope of the present
study were also collected from teachers. For this reason, half of the teachers
in the room received a survey containing the collective efficacy items,
whereas other teachers received different surveys. Within each faculty, sur-
vey distribution was randomized. A total of 452 teachers in 47 different
schools completed surveys, and more than 99% of the forms returned were
usable.
Student achievement and demographic data for all schools in the final
sample were obtained from the central administrative office of the district.
The student-level data were employed to conduct a test of predictive validity
for the scores on the short form of the Collective Efficacy Scale.
Student-Level Measures
The original 21-item scale included items reflecting GC and TA, and within each of these categories,
both positively and negatively worded items appeared. The result was a scale
with four types of items but an unbalanced representation across the types (7
GC+, 6 GC–, 4 TA+, and 4 TA–).
Teacher responses to the 21-item Collective Efficacy Scale were aggre-
gated to the school level and subjected to a principal axis factor analysis. The
factor scores were then used to guide analytic decisions about which items to
include in a shortened scale. Factor scores and substantive concerns were
weighed together to determine final item selection for a more parsimonious
and theoretically balanced scale. Twelve items were selected, and a second
principal axis factor analysis was performed with these items. Internal con-
sistency of scores from the scales was measured with Cronbach’s alpha.
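The internal-consistency computation named here is straightforward to sketch. The function name and the simulated data below are illustrative, not drawn from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

A higher alpha indicates that the items covary strongly relative to the variance of the total score, which is the sense in which the reported scale scores are internally consistent.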
At the school level, the coefficient γ0CE estimates the average effect of collec-
tive efficacy, measured by the short form, on differences between schools in
mean student achievement.
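In the notation of Bryk and Raudenbush (1992), the model described above can be written as follows. This is a reconstruction from the text: the student-level demographic controls are denoted generically as X_q, and γ_0CE is the level-2 coefficient on collective efficacy referred to in the text.

```latex
% Level 1: student i in school j, with demographic controls X_q
Y_{ij} = \beta_{0j} + \sum_{q} \beta_{qj} X_{qij} + r_{ij}

% Level 2: school-level model for the adjusted intercepts
\beta_{0j} = \gamma_{00} + \gamma_{0\mathrm{CE}}\,(\mathrm{CE})_j + u_{0j}
```

Under this formulation, γ_0CE estimates the average effect of a school’s collective efficacy score (short form) on its demographically adjusted mean achievement.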
Results
Consistent with the unit of analysis discussion presented earlier, the anal-
ysis began with the aggregation of teacher responses to each of the 21 items
in the original collective teacher efficacy scale to the school level. The result
was a mean score for each school for each of the 21 items in the scale. Next,
the 21 items were submitted to a principal axis factor analysis at the school
level. Prior to this analysis, negatively worded items were reverse coded.
Based on the theoretical model on which the scale is premised and consistent
with the approach of Goddard et al. (in press), a one-factor solution was
extracted. With an eigenvalue of 7.53, the extracted factor explained 57.89%
of the variance in the original 21 items.
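The factor-analytic step can be sketched as follows. This is a simplified iterated principal axis procedure written for illustration only; the function name and simulated data are hypothetical, and implementations in standard statistical packages differ in details:

```python
import numpy as np

def principal_axis_factor(data: np.ndarray, n_iter: int = 50):
    """One-factor principal axis factoring of an (n_obs x n_items) matrix.

    Returns (loadings, eigenvalue) for the first common factor.
    """
    R = np.corrcoef(data, rowvar=False)
    # Initial communality estimates: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)          # replace 1s with communalities
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        lam, vec = eigvals[-1], eigvecs[:, -1]   # largest eigenvalue and its vector
        loadings = np.sqrt(max(lam, 0.0)) * vec
        h2 = loadings ** 2                       # updated communalities
    if loadings.sum() < 0:                       # eigenvector sign is arbitrary
        loadings = -loadings
    return loadings, lam
```

The proportion of variance a single factor explains is its eigenvalue divided by the number of items; for the 12-item scale reported later, 7.69 / 12 ≈ .64, matching the 64.10% figure.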
In Table 2, the factor structure coefficients for the 21-item scale are ranked
within each of the four categories (GC+, GC–, TA+, TA–). To maintain ade-
quate coverage across categories, I elected to create a 12-item scale with 3
items representing each of the four categories. One approach to selecting
items for inclusion is to choose those items with the largest structure coeffi-
cients from each of the four categories. This approach yielded only 1 item,
CTE12 (“Homelife provides so many advantages the students here are bound
to learn”), that correlated less than .72 with the extracted factor. The inclusion
of CTE12 was, however, not problematic because its factor structure coeffi-
cient (.65) was deemed adequate, and moreover, CTE12 was an adaptation of
one of the original RAND teacher efficacy items (Armor et al., 1976).
Item selection should also reflect substantive concerns. Thus, it is worth
noting that CTE2 (“Teachers in this school are able to get through to difficult
students”) was retained to represent the GC+ category in lieu of CTE1, even
though the factor structure coefficient for CTE1 was marginally larger (.84
vs. .83). The decision to include CTE2 rather than CTE1 was made because CTE2
is an adaptation of the second original RAND teacher efficacy item. More-
over, inclusion of both items developed from the original RAND items was
deemed to be both historically significant and consistent with more than two
decades of teacher efficacy research.
With the above selections, the 12 items in the short form reflected all
dimensions of the original Collective Efficacy Scale (Goddard et al., in press)
but in equal proportion (i.e., 3 GC+, 3 GC–, 3 TA+, 3 TA–). The 12 items
selected for the short form were then submitted to a principal axis factor anal-
ysis. A one-factor solution was extracted, and the results are displayed in
Table 3. With all but 1 item correlated .73 or above, a single factor having an
eigenvalue of 7.69 and explaining 64.10% of the variance was extracted. This
compares favorably to the single factor obtained from the 21-item scale,
which explained 57.89% of the variance. In addition, the 12-item scale
yielded scores with high internal consistency (alpha = .94).

Table 2
Collective Efficacy Scale Factor Matrix With Items Ranked Within Categories
(Columns: item number, item, structure coefficient, category.)
Table 3
Factor Matrix for the 12-Item Collective Efficacy Scale
(Columns: item number, item, category [GC+, GC–, TA+, TA–], structure coefficient.)
Validity Tests
Scores from the 12-item scale and the 21-item scale were highly correlated
(r = .983), suggesting that little change resulted from the omission of
almost 43% of the items (from 21 to 12). A low correlation, by contrast,
would have suggested that the 12-item short form was measuring something
different from the original scale.
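The comparison itself is a Pearson correlation between the two sets of 47 school-level scale scores. A minimal sketch with simulated (not actual) scores:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical school-level scale scores, one value per school (47 schools).
scores_21 = rng.normal(loc=4.0, scale=0.5, size=47)      # original-form scores
scores_12 = scores_21 + rng.normal(scale=0.05, size=47)  # near-identical short-form scores

r = np.corrcoef(scores_21, scores_12)[0, 1]
print(round(r, 3))
```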
The multilevel predictive validity model identified between-school vari-
ance in the level-one intercepts (β0j) as the school-level dependent variable
(Bryk & Raudenbush, 1992). Thus, the intercepts for each of the 47 sampled
schools served as the operational measure of between-school differences in
student mathematics achievement, adjusted for student demographic charac-
teristics. The results of the predictive validity tests for mathematics achieve-
ment are shown in Table 4. As expected, the short form of the Collective Effi-
cacy Scale was a significant predictor of between-school differences in
student mathematics achievement.
For comparative purposes, selected properties of the original and short
forms of the Collective Efficacy Scale are presented in Table 5.
Table 4
Multilevel Predictive Validity Test: 12-Item Collective Efficacy Scale as a Predictor of
Variation Among Schools in Mathematics Achievement
Table 5
Comparison of the Original and Short Collective Efficacy Scales

                                                      12-Item Scale   21-Item Scale
Number of items                                                  12              21
Internal consistency (alpha)                                    .94             .96
Eigenvalue from principal axis factor analysis                 7.69            7.53
Proportion of variance explained by a single factor           .6410           .5789
Discussion
The results of the present study provide new knowledge about the mea-
surement of collective efficacy in schools. The findings provide initial evi-
dence that using a 12-item scale that balances the relative weights given to the
elements of collective efficacy—assessment of GC and the analysis of the
teaching task—is equally as effective as using the original 21-item scale.
Indeed, when the scale was balanced across 12 items rather than unbalanced
across 21, the salient factor structure coefficients were higher, and a single
factor explained more of the total item variation for the 12-item scale. This
could be an advantage of attending to the match between the collective effi-
cacy measure and the conceptual model. In addition to providing a theoreti-
cally balanced measure, the 12-item scale is more parsimonious, using 43%
fewer items than the original.
Although the short form is substantially shortened compared to the origi-
nal, the correlation between these scales (r = .983) suggests that the 12-item
scale is quite strongly related to the original scale. Finally, the multilevel tests
of predictive validity indicated that the short form is a positive predictor of
between-school variability in student mathematics achievement.
References
Allinder, R. M. (1994). The relationship between efficacy and the instructional practices of special
education teachers and consultants. Teacher Education and Special Education, 17, 86-95.
Armor, D., Conroy-Oseguera, P., Cox, M., King, N., McDonnell, L., Pascal, A., et al. (1976).
Analysis of the school preferred reading program in selected Los Angeles minority schools
(Report No. R-2007-LAUSD). Santa Monica, CA: RAND. (ERIC Document Reproduction
No. 130 243)
Ashton, P. T., & Webb, R. B. (1986). Making a difference: Teachers’ sense of efficacy and student
achievement. New York: Longman.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37,
122-147.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educa-
tional Psychologist, 28(2), 117-148.
Bandura, A. (1995). Exercise of personal and collective efficacy in changing societies. In A. Bandura
(Ed.), Self-efficacy in changing societies. Cambridge, UK: Cambridge University Press.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data
analysis methods. Newbury Park, CA: Sage.
Czerniak, C. M., & Schriver, M. L. (1994). An examination of preservice science teachers’ be-
liefs and behaviors as related to self-efficacy. Journal of Science Teacher Education, 5(3),
77-86.
Enochs, L. G., Scharmann, L. C., & Riggs, I. M. (1995). The relationship of pupil control to
preservice elementary science teacher self-efficacy and outcome expectancy. Science Edu-
cation, 79(1), 63-75.
Finley, C. J. (1995). Review of the Metropolitan Achievement Test, Seventh Edition. In J. C.
Conoley & J. C. Impara (Eds.), The twelfth mental measurements yearbook (pp. 603-606).
Lincoln: University of Nebraska Press.
Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educa-
tional Psychology, 76(4), 569-582.
Goddard, R. D. (2000, April). Collective efficacy and student achievement. Paper presented
at the annual meeting of the American Educational Research Association, New Orleans,
Louisiana.
Goddard, R. D. (2001). Collective efficacy: A neglected construct in the study of schools and stu-
dent achievement. Journal of Educational Psychology, 93(3), 467-476.
Goddard, R. D., Hoy, W. K., & Woolfolk Hoy, A. (2000). Collective teacher efficacy: Its mean-
ing, measure, and effect on student achievement. American Educational Research Journal,
37(2), 479-507.
Halpin, A. W. (1959). The leader behavior of school superintendents. Chicago: Midwest Ad-
ministrative Center.
Hyman, D. N. (1995). Public finance: A contemporary application of theory to policy. New
York: Harcourt Brace College.
Multon, K. D., Brown, S. D., & Lent, R. W. (1991). Relation of self-efficacy beliefs to academic
outcomes: A meta-analytic investigation. Journal of Counseling Psychology, 38, 30-38.
Pajares, F. (1996, April). Current directions in self research: Self-efficacy. Paper presented at the
annual meeting of the American Educational Research Association, New York.
Pajares, F., & Graham, L. (1999). Self-efficacy, motivation constructs, and mathematics perfor-
mance of entering middle school students. Contemporary Educational Psychology,
24(2), 124-139.
Sampson, R. J., Raudenbush, S. W., & Earls, F. (1997). Neighborhoods and violent crime: A
multilevel study of collective efficacy. Science, 277, 918-924.