
Validity Study

Student Personal Perception of Classroom Climate: Exploratory and Confirmatory Factor Analyses

Educational and Psychological Measurement 70(5) 858–879
© The Author(s) 2010
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0013164410378085
http://epm.sagepub.com

Ellen W. Rowe1, Sangwon Kim2, Jean A. Baker3, Randy W. Kamphaus4, and Arthur M. Horne5

Abstract
The purpose of this study was to examine the factor structure of an instrument devel-
oped to assess elementary students’ individual perceptions of their classroom environ-
ments. The Student Personal Perception of Classroom Climate (SPPCC) originally
consisted of six subscales adapted from previously published scales. Exploratory factor
analysis identified the underlying dimensions of the SPPCC. The authors subsequently
tested the four-factor model against the six-factor model using confirmatory factor
analyses with an independent sample of students. The four-factor model appeared to
be a more tenable solution because of its equally adequate fit indices, parsimony,
exploratory factor analytic support, and the high correlations between some factors.
Future research and potential limitations of the study are discussed.

Keywords
classroom climate, exploratory factor analysis, confirmatory factor analysis

The classroom learning environment or classroom climate is the context in which
much of the academic and social learning takes place for children and adolescents
in schools. Although research on classroom climate has appeared in the literature

1 George Mason University, Fairfax, VA, USA
2 Ewha University, Seoul, Korea
3 (deceased) Michigan State University, East Lansing, MI, USA
4 Georgia State University, Atlanta, GA, USA
5 The University of Georgia, Athens, GA, USA

Corresponding Author:
Ellen W. Rowe, Department of Psychology, George Mason University, 10340 Democracy
Lane, Suite 202, Fairfax, VA 22030, USA
Email: erowe@gmu.edu

Downloaded from epm.sagepub.com by guest on March 4, 2015



for decades (Fraser, 1989, 1998), definitions of the construct vary, and there is no con-
sensus on a definitive set of dimensions (Sink & Spencer, 2005). Generally, though,
classroom climate has been described as the classroom social atmosphere (B. Johnson
& McClure, 2004) or the social–psychological environment for learning (Fraser, 1994).
Research in this area has documented associations between classroom climate and var-
ious student characteristics and outcomes (Fraser, 1994). Classroom climate or dimen-
sions of classroom climate have been linked to student motivation (A. Anderson,
Hamilton, & Hattie, 2004; Church, Elliot, & Gable, 2001; Urdan & Schoenfelder,
2006; Wentzel, 1997), student engagement in class activities (Furrer & Skinner,
2003; National Institute of Child Health and Human Development, Early Child Care
Research Network [NICHD ECCRN], 2002, 2005), goal orientation (Ames, 1992;
Church et al., 2001), academic values (Roeser, Eccles, & Sameroff, 2000), social skills
and competence (Baker, 2006; Brophy-Herb, Lee, Nievar, & Stollak, 2007), and aca-
demic achievement (Baker, 2006; Goh, Young, & Fraser, 1995; Hamre & Pianta,
2005; Urdan & Schoenfelder, 2006). As Moos (1979) suggested, then, measures of
classroom climate can serve as tools not only for evaluating educational environments
but also for improving educational settings to enhance children’s learning and develop-
ment. With federal legislation such as the No Child Left Behind Act (2001) mandating
increased accountability for all public schools, the connections between classroom cli-
mate and student outcomes ensure that classroom climate will continue to be a germane
topic for educational researchers, professionals, and policy makers in the years to come.
Researchers have measured classroom climate and its components using a variety
of methods. In numerous studies, trained observers have conducted direct observa-
tions of classroom environments (e.g., Brophy-Herb et al., 2007; La Paro, Pianta, &
Stuhlman, 2004; NICHD ECCRN, 2002, 2005). Researchers have also used inter-
views with and information from teachers (e.g., Fletcher, Bos, & Johnson, 1999;
Mucherah, 2003). As valuable as these methods are for providing insights into class-
room context, Fraser (1987, 1998, 2001) has argued that students’ own perceptions of
their classroom environment offer valuable information about the classroom experi-
ence. After all, children’s perceptions of their class are based on knowledge of the par-
ticipants themselves and are the result of a relatively long period of exposure to the
environment (Fraser, 1998, 2001). Using students’ perceptions of the class environ-
ment is a natural extension of the student cognitive or student-mediating paradigm
(Waxman, 1991). According to this paradigm, students’ perceptions and interpreta-
tions of their own learning context can influence outcomes such as achievement
and social-emotional development.

Personal Versus Class Orientation


Most of the extant measures of classroom climate that assess students’ perceptions
consider students’ views of the class as a whole. For example, a class oriented item
might ask if ‘‘students in the class help one another.’’ As Fraser (1998) pointed out,
a class orientation is different from a personal orientation. An item oriented toward
a more personal perspective might be worded, ‘‘students in the class help me.’’ Fraser,


Giddings, and McRobbie (1995) demonstrated that differences exist between stu-
dents’ perceptions of the class experience and their perceptions of their own personal
experience. Furthermore, Fraser and McRobbie (1995) found that the different per-
spectives accounted for separate and independent amounts of outcome variance. It
may also be that information about personal experience in the classroom provides
more useful information to school personnel charged with designing classroom interventions for individual students. Currently, however, there are few existing measures
of classroom climate designed for use with elementary school students whose items
assess students’ perceptions of their own experience in the class.
It should be noted that in research, classroom climate is often considered a group-
level construct. In other words, there is a single class environment that is experienced
by all individuals or students in a particular class. Certainly measures that are worded
with a class orientation would tend to be conceptualized in this manner. We would
argue, however, that instruments assessing a personal orientation are better conceptu-
alized as an individual-level construct. In other words, with a personal orientation,
researchers or educational professionals are attempting to understand students’ indi-
vidual experiences in the class. For example, two students in the same class may
have very different relationships with their teacher. As a result, their personal experi-
ence of teacher support in the class could be different. With a personal orientation,
researchers still aggregate data for analytic purposes, but there is an understanding
that the data are a collection of individual experiences.

Existing Classroom Climate Instruments and the Constructs They Measure


A review of the literature on classroom climate reveals a number of instruments devel-
oped to measure student perceptions of classroom climate (Fraser, 1998). Among
these measures are the Learning Environment Inventory (LEI; Fraser, Anderson, &
Walberg, 1982; Walberg & Anderson, 1968), the Classroom Environment Scale
(CES; Moos & Trickett, 1987), the Constructivist Learning Environment Survey
(CLES; Taylor, Fraser, & Fisher, 1997), and the Classroom Life Instrument (CLI;
D. Johnson, 1974; D. Johnson, Johnson, & Anderson, 1983). The LEI, the CES,
and CLES were intended to be used with secondary school classes. However, Fraser
et al. (1982) published a measure, My Class Inventory (MCI), based on the LEI for use
with elementary school children from ages 8 to 12. The CLI has been used with chil-
dren as young as fourth grade. Although most of these instruments are worded with
a class (as opposed to personal) orientation and are based on different theoretical mod-
els, these instruments have a number of scales measuring similar constructs.
Among the constructs common to several measures of classroom climate are students’ perceptions of teacher support. Teachers’ influences on students have been
investigated using different labels such as teacher support, teacher–student relation-
ships, teachers’ expectations of students, and caring (Pianta, Hamre, & Stuhlman,
2003). Teacher support has been found to be related to students’ academic achieve-
ment through its positive influences on students’ motivations including educational
aspirations and values, prosocial behaviors, and self-concept (Wentzel, 2003). The


lack of teacher support undermines students’ motivation, which in turn leads to disen-
gagement and withdrawal (Roeser & Eccles, 1998). Students who are well liked by
their teachers tend to receive better grades than do those who are not as well liked
(Wentzel, 2003), and students with closer teacher–child relationships tend to have bet-
ter academic achievement (Baker, 2006). Moreover, positive teacher–child relation-
ships can serve as a protective factor for children with behavior problems (Baker,
2006).
In addition to students’ views of teacher support, most measures of classroom cli-
mate assess peer support in the classroom. Several lines of research on peers have
demonstrated that peers play an important role in children’s schooling processes
(Hinshaw, 1992; Wentzel, 1994, 1998). In one study with middle school students,
Wentzel (1994) found that perceived academic support from peers was positively
related to students’ pursuit of academic and social goals (e.g., helping classmates with
social problems or schoolwork; following classroom rules). A. Anderson et al.
(2004) observed a significant relationship between student affiliation and student par-
ticipation/engagement in high school English classes. Research on peer acceptance or
rejection has consistently shown that students who are accepted by peers are more
likely to succeed academically, and those who are rejected by peers are more likely
to have academic difficulties. Peer acceptance also is related to other outcomes includ-
ing academic and prosocial motivations, academic competence, and satisfaction (Hin-
shaw, 1992; Wentzel, 1998).
Along with the effects of teachers and peers on education, a student’s perception of
his or her own competence can be an important indicator of academic achievement
(Bandura, 1994). Roeser et al. (2000) found that middle school students’ beliefs about
their academic competence were a significant positive predictor of grades. At the
same time, they reported that beliefs about academic competence were negatively
related to problem behaviors at school. Lorsbach and Jinks (1999) pointed out that stu-
dents’ beliefs about their own academic competency could have significant implica-
tions for improving learning environments and suggested several ways in which the
concept of self-efficacy could be incorporated into research on classroom climate.
Among the scales of the MCI (Fraser et al., 1982), the Difficulty scale addresses stu-
dents’ perception of the degree of difficulty of classroom work, and some of the items
on this scale approach the construct of academic self-efficacy (‘‘In my class the work
is hard’’). Moreover, Lorsbach and Jinks’s contention that students’ perceptions of academic self-efficacy could provide important feedback to teachers interested in improving classroom climate seems reasonable.
Another scale on the MCI is the Satisfaction scale. The items on this scale address
students’ satisfaction, or the degree to which they like their classroom. Allodi (2002)
also identified a set of items measuring satisfaction in her investigation of classroom
climate among Swedish students. The scales from both the MCI and Allodi’s study
included items addressing satisfaction at the school level. An example of a school-level item is ‘‘Students in my class like being in school.’’ Thus, there is precedent for
including items measuring satisfaction on instruments of students’ perception of class-
room climate. Although a number of these items are directed toward school


satisfaction instead of classroom satisfaction, it can be argued that for children in ele-
mentary school, the classroom experience is the school experience (Battistich, Solo-
mon, Watson, & Schaps, 1997). Moreover, as Huebner (1994) noted, children’s
satisfaction, or their liking of school, is another aspect of children’s cognitive appraisals of school that seems related to important school outcomes.

Confirmatory Factor Analysis With Classroom Climate Instruments


Although a number of classroom climate measures have existed for decades and common constructs have been identified across measures, little
work has been done to evaluate the measurement models and the degree to which
the models fit the data on students’ actual perceptions of their classrooms. The use
of instruments with supportive evidence of validity is clearly a critical feature of
meaningful research (Dorman, 2003). In 1991, Waxman proposed that future research
in the area of classroom climate make use of newer and more sophisticated analytic
techniques such as confirmatory factor analysis, structural equation modeling, and
hierarchical linear modeling. Waxman went on to say that use of confirmatory factor
analysis would allow researchers to better address issues of measurement and to exam-
ine the relationships among variables.
Since Waxman’s (1991) suggestions, a few articles have appeared using confirma-
tory factor analysis (CFA) to provide validity evidence for classroom climate meas-
ures. Dorman (2003) used CFA to support the seven-scale structure of the What Is
Happening In this Class? (WIHIC) questionnaire among high school students from
Australia, the United Kingdom, and Canada. The results of this study supported the
use of this instrument with high school students in those countries (Dorman, 2003).
Two additional studies have used CFA to examine measures of classroom climate
among elementary school children. In the first study, Allodi (2002) borrowed items
from measures such as the MCI and the Individualized Classroom Environment Ques-
tionnaire (ICEQ; Fraser, 1990) and adapted them to the Swedish language and Swed-
ish conditions. She administered a measure that initially included six scales to
Swedish children ages 8 to 12 years. The initial scales included on Allodi’s questionnaire were Satisfaction, Friction, Competition, Cohesiveness, Personalization, and
Differentiation. At the outset, items on the Differentiation scale yielded low internal
consistency values, so this scale was subsequently excluded from the analysis. A
model with the five remaining factors produced fit indices suggesting a satisfactory fit.
The second study examined the factor structure of the My Class Inventory–Short
Form (MCI-SF; Fraser, 1982; Fraser & Fisher, 1986) with 2,800 fourth-, fifth-, and
sixth-grade elementary students in Washington state (Sink & Spencer, 2005). Among
the classroom climate measures most frequently cited in the literature, the original
MCI is one of the few measures that was developed specifically for use with elemen-
tary age students. The five scales of the MCI-SF are Satisfaction, Friction, Competi-
tiveness, Difficulty, and Cohesiveness. Sink and Spencer conducted both exploratory
and confirmatory factor analyses to examine the factor structure of the MCI-SF. They
began by identifying five- and six-factor solutions using principal component analyses


(PCAs). The five-factor model was consistent with Fraser’s (1982) original conceptu-
alization of the scale. The authors next tested the models with CFA, but the fit indices
did not indicate a good fit between either of the models and the data. The lack of fit for
the five-factor model is consistent with previous research using the MCI with 600
third-grade students in Australia (Schibeci & Fraser, 1985). Based on the results of
their analyses, Sink and Spencer (2005) modified the MCI-SF. They deleted the Dif-
ficulty scale and two items on the Friction scale. The authors then conducted another
round of PCA and CFA. Although the goodness of fit indices indicated that the four-
factor solution fit the data well, this should not be surprising, given that the authors used
the same data to create and modify the final model. Thus, the degree to which the four-
factor model is valid with other groups of children remains in question.

Goals of the Current Study


A primary goal of the present study was to develop a personal measure of classroom
climate for use with elementary school students. To this end, we borrowed and mod-
ified several existing scales of teacher support, peer cohesion or support, academic
self-competence, and satisfaction to determine if these scales could be considered
together as a personal measure of classroom climate. We included six scales (Teacher
Academic Support, Teacher Personal Support, Peer Academic Support, Peer Personal
Support, Academic Competence, and Satisfaction) for a total of 26 items. The items
are listed in Table 1 by their scale of origin. We elected to include these six dimen-
sions of classroom climate because of their prevalence in the literature on classroom
climate and their seeming applicability to elementary classrooms. Although these
dimensions are common in the literature on classroom climate, their combination
into a single measure with a personal orientation is unique. The MCI, for instance,
does not include a scale that addresses teacher support. Another consideration in
our decision was whether the constructs appeared to have potential utility
in designing classroom interventions for individual students. A second goal of the
study was to provide preliminary validity support for use of this measure through
both exploratory factor analysis (EFA) and CFA. Because we conducted the EFA
and CFA with different samples, the CFA results can be viewed as further support
for the structure of the measure.
In considering the measurement model, a particular question of interest was
whether a four-factor or six-factor model best fits the data. As originally conceptual-
ized, the items belong to six scales. In the six-factor model, Teacher Academic Sup-
port was conceived as separate from Teacher Personal Support. Likewise, Peer
Academic Support was separate from Peer Personal Support. However, based on con-
siderations of environmental characteristics and cognitive development in elementary
school, we hypothesized that elementary-aged children might not differentiate
between personal and academic support from their teachers and peers. For instance,
elementary school teachers often play several roles in classrooms such as instructing,
promoting prosocial behavior, and providing emotional support. If this were the case,

Table 1. Items of the Student Personal Perception of Classroom Climate by Their Original Scales

Teacher Academic Support (4 items; from the Teacher Academic Support scale of the Classroom Life Instrument [CLI; D. Johnson et al., 1983]):
(TS1) My teacher cares about how much I learn
(TS2) My teacher likes to see my work
(TS3) My teacher likes to help me learn
(TS4) My teacher wants me to do my best school work

Teacher Personal Support (4 items; from the Teacher Personal Support scale of the CLI):
(TS5) My teacher really cares about me
(TS6) My teacher thinks it is important to be my friend
(TS7) My teacher likes me as much as he/she likes other students
(TS8) My teacher cares about my feelings

Peer Academic Support (4 items; from the Student Academic Support scale of the CLI):
(PS1) The kids in my class want me to do my best schoolwork
(PS2) The kids in my class like to help me learn
(PS3) The kids in this class care about how much I learn
(PS4) The kids in this class want me to come to class everyday

Peer Personal Support (4 items; from the Student Personal Support scale of the CLI):
(PS5) In this class, other students think it is important to be my friend
(PS6) In this class, other students like me the way I am
(PS7) In this class, other students care about my feelings
(PS8) In this class, other students really care about me

Academic Competence (4 items; from the Scholastic Competence scale of the Self-Perception Profile for Children [SPPC; Harter, 1985]):
(AC1) I am very good at my school work
(AC2) I am smart enough to do my school work
(AC3) I do very well at my school work
(AC4) I can figure out the answers to school work

Satisfaction (6 items; from the Satisfaction scale of the Multidimensional Students’ Life Satisfaction Scale [MSLSS; Huebner, 1994]):
(SA1) I look forward to going to school
(SA2) I like being in school
(SA3) School is interesting
(SA4) I wish I didn’t have to go to school
(SA5) There are many things about school that I like
(SA6) I enjoy school activities

Teacher Academic and Personal Support items could be combined for a single
Teacher Support scale. The same would be true for Peer Support items.

Study 1: Exploratory Factor Analysis


We began our study by using principal-axis factor analysis to confirm the assignment
of items to scales and to identify the underlying dimensions of the Student Personal
Perception of Classroom Climate (SPPCC).

Method
Participants. We used a subsample of students from a multiyear research project
which was designed to examine individual, interactional, and ecological contributors
to risk and adaptation of school-aged children (Project ACT Early). Participants
attended four public elementary schools in an urban school district in the southeastern United States. The school district was considered ‘‘at risk’’ based on several factors. For example, approximately 60% of the students were eligible for free or reduced-price lunch, and 23% of adult residents in the area had not completed high school
(Boatright & Bachtel, 2002). Parental permission was obtained via mail. The sample
for Study 1 consisted of 267 regular classroom students. Boys and girls were almost
equally represented in the sample (47% and 53%, respectively). The students repre-
sented third, fourth, and fifth grades proportionally (35%, 32%, and 33%, respec-
tively). Also, 46% of the students were African American, 34% were Caucasian,
7% were Hispanic, 2% were Asian/Pacific, and 2% were multiracial; the remaining
9% were missing information on race or ethnicity. The sample is thought to be repre-
sentative of the school district.
Materials. For the ACT Early project, three researchers whose expertise is in edu-
cational psychology or a related field created a questionnaire to assess a variety of
aspects of life for elementary school students. The items were borrowed or modified
from several previously published scales within the public domain. Students were
asked to respond on a 4-point Likert-type scale, including 0 = never, 1 = sometimes, 2 = often, and 3 = almost always. For the present study, the six scales that represented our conceptualization of classroom climate were selected for the SPPCC
(see Table 1).
Teacher Academic Support and Teacher Personal Support. In our study, scores on the Teacher Academic Support and Teacher Personal Support scales had internal consistency reliabilities of .61 and .73, respectively, compared with .78 and .80, respectively, for scores from the original measure, the Classroom Life Instrument (CLI; D. Johnson et al., 1983). The CLI was developed with a sample of fourth to ninth graders.
Although the authors of the instrument claimed theoretical and factor analytic support
for the structure of the measure, we had difficulty accessing the factor analytic evi-
dence. While making no change in the wording of items, we adopted a 4-point scale
instead of the 5-point scale originally used on the CLI.
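The internal consistency reliabilities reported here and for the scales below are Cronbach's alpha coefficients. As a minimal sketch of how such values are computed (an illustration, not the original analysis code; the score matrix and function name are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

For example, four items scored 0 to 3 that covary perfectly yield an alpha of 1, whereas unrelated items drive alpha toward 0.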


Peer Academic Support and Peer Personal Support. The Peer Academic Support and Peer Personal Support scale scores had internal consistency reliabilities of .78 and .72, respectively, for our sample, and of .67 and .78, respectively, for scores from the original measure, the CLI (D. Johnson et al., 1983). As was the case with
the teacher support scales, factor analytic evidence to support the distinction between
Student Academic Support and Student Personal Support in the CLI was unavailable.
No wording change was made for our study, but a 4-point scale was adopted instead of
the original 5-point scale.
Academic Competence. The Academic Competence scale measures children’s per-
ception of their competence or ability within the realm of academic performance.
The internal consistency reliability of the scale scores was .71 in our sample. The orig-
inal scale called the Scholastic Competence scale from the Self-Perception Profile for
Children (SPPC; Harter, 1985) was developed with third to eighth graders and yielded
internal consistency reliability ranging from .80 to .85. In the transition from Scholastic Competence to Academic Competence, we reworded the alternative forced-choice format of the original items into a personal form (e.g., ‘‘Some kids feel like they
are just as smart as other kids their age BUT Other kids aren’t sure and wonder if
they are as smart’’ became ‘‘I am smart enough to do my school work’’) and adopted
a 4-point scale.
Satisfaction. The Satisfaction scale assessing children’s life satisfaction in school environments had an internal consistency reliability of .82 in the current sample
and of .83 in the original study (Huebner, 1994). The original Satisfaction scale comes
from the Multidimensional Students’ Life Satisfaction Scale (MSLSS; Huebner,
1994). The MSLSS was developed with regular classroom students from third to
eighth grades. In our study, the wording and response format of the original items
remained the same; however, the number of items was reduced from eight to six
for the sake of brevity. Items with the highest factor loadings were included.
Procedures. The survey was administered in the spring of 2001 by graduate students
during regular classroom sessions. The questionnaires were projected via an overhead
projector. To control for reading differences among the students, graduate students
also read instructions and survey questions aloud. The questionnaires were adminis-
tered in a random order between classrooms to control for order effects. Teachers
either remained in their classrooms working at their desks or left the room.

Results
Descriptive statistics for all items in the SPPCC are presented in Table 2. Most items
were normally distributed except for one item (‘‘My teacher wants me to do my best
schoolwork’’). We used DeCarlo’s (1997) macro, which tests for multivariate skew-
ness and kurtosis, to identify outliers with the largest Mahalanobis distances. We
found six outliers statistically significant at the .05 level, F(26, 240) = 55.92; however, the inspection of item-level responses of these outliers revealed that they were
within the possible range of responses and no coding errors were involved. Analyses
were run with and without the outliers to determine if the outliers influenced the

Table 2. Descriptive Statistics for the Items of the Student Personal Perception of Classroom Climate in Principal-Axis and Confirmatory Factor Analyses

Principal-axis factor analysis Confirmatory factor analysis

Item M SD Skewness Kurtosis M SD Skewness Kurtosis

1. I am very good at my school work 2.38 0.86 −1.04 −0.22 2.42 0.81 −1.08 −0.05
2. I look forward to going to school 2.00 1.07 −0.60 −0.99 1.98 1.10 −0.59 −1.08
3. I am smart enough to do my school work 2.61 0.72 −1.66 1.49 2.59 0.74 −1.73 2.01
4. I like being in school 1.90 1.05 −0.33 −1.25 1.92 1.09 −0.43 −1.22
5. School is interesting 1.87 0.99 −0.23 −1.17 1.83 1.06 −0.30 −1.22
6. I wish I didn’t have to go to school 1.85 1.06 −0.53 −0.92 1.86 1.14 −0.59 −1.09
7. I do very well at my school work 2.38 0.88 −1.16 0.14 2.44 0.82 −1.15 0.01
8. There are many things about school that I like 2.06 0.88 −0.35 −1.08 2.11 0.91 −0.48 −1.03
9. I can figure out the answers to school work 1.90 1.29 −0.45 −1.59 2.16 0.86 −0.43 −1.19
10. I enjoy school activities 2.26 0.87 −0.73 −0.77 2.35 0.88 −1.02 −0.20
11. My teacher cares about how much I learn 2.35 1.12 −1.30 −0.06 2.49 0.84 −1.58 1.52
12. The kids in my class want me to do my best schoolwork 1.28 1.06 0.36 −1.08 1.34 1.09 0.31 −1.19
13. My teacher really cares about me 2.36 0.91 −1.21 0.34 2.42 0.89 −1.35 0.68
14. In this class, other students think it is important to be my friend 1.39 1.04 0.28 −1.09 1.35 1.05 0.29 −1.10
15. My teacher likes to see my work 2.41 0.83 −1.17 0.30 2.36 0.86 −1.05 −0.08
16. The kids in my class like to help me learn 1.18 0.98 0.46 −0.79 1.23 1.05 0.46 −0.95

17. My teacher thinks it is important to be my friend 1.88 1.04 −0.38 −1.14 2.01 1.09 −0.63 −1.01
18. In this class, other students like me the way I am 1.97 1.02 −0.53 −0.94 1.89 1.00 −0.31 −1.15
19. My teacher likes to help me learn 2.43 0.84 −1.22 0.35 2.47 0.85 −1.40 0.80
20. The kids in this class care about how much I learn 1.13 1.05 0.47 −0.99 1.19 1.06 0.48 −0.97
21. My teacher likes me as much as he/she likes other students 2.06 1.08 −0.67 −1.00 2.00 1.16 −0.65 −1.14
22. In this class, other students care about my feelings 1.36 0.98 0.28 −0.91 1.46 1.09 0.19 −1.27
23. My teacher wants me to do my best schoolwork 2.77 0.58 −2.64 6.41 2.70 0.65 −2.21 4.20
24. The kids in this class want me to come to class everyday 1.77 1.06 −0.23 −1.23 1.76 1.07 −0.25 −1.22
25. My teacher cares about my feelings 2.38 0.92 −1.23 0.24 2.28 0.98 −1.07 −0.17
26. In this class other students really care about me 1.38 0.98 0.25 −0.94 1.53 1.05 0.07 −1.19

results. Both analyses resulted in the same factor structure, so outliers were included in
analyses. Missing values were treated by the listwise deletion method, and a total of 24
cases were lost.
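The outlier screen described above turns on each case's squared Mahalanobis distance from the sample centroid. The analysis used DeCarlo's (1997) SPSS macro; the following Python sketch illustrates the same idea using a chi-square criterion (an approximation rather than the macro's exact significance test; the function and variable names are illustrative):

```python
import numpy as np
from scipy import stats

def mahalanobis_screen(X, alpha=0.05):
    """Return squared Mahalanobis distances and an outlier flag for
    each row of a complete-case (n, p) data matrix X."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared distances
    cutoff = stats.chi2.ppf(1 - alpha, df=p)      # chi-square criterion
    return d2, d2 > cutoff
```

With 26 items, flagged cases are those whose squared distance exceeds the chi-square cutoff with 26 degrees of freedom; as in the study, flagged cases should be inspected item by item before deciding whether to retain them.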
Principal-axis factor analysis was run to identify dimensions of the SPPCC using
SPSS 11.0. Because the variables were thought to be correlated with one another,
we rotated the results with an oblique rotation (direct oblimin with δ = 0). As
Gorsuch (1983) recommended, multiple methods were used to determine the number
of factors to retain, including the eigenvalue >1 rule (EV > 1; Kaiser, 1960), scree test
(Cattell, 1966), parallel analysis (PA; Horn, 1965), minimum average partial proce-
dure (MAP; Velicer, 1976), and theory. The results varied across methods. The
EV > 1 rule and PA suggested five and six factors, respectively, whereas the scree
test and MAP suggested four factors. Consequently, we evaluated three different sol-
utions (four, five, and six factors).
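Of the retention criteria listed, parallel analysis is the most computation-heavy: factors are retained only while the observed eigenvalues exceed the mean eigenvalues obtained from random data of the same dimensions. A compact sketch of Horn's procedure (an illustration, not the code used in the study):

```python
import numpy as np

def parallel_analysis(X, n_sim=100, seed=0):
    """Horn's parallel analysis on an (n, p) data matrix: count how many
    leading eigenvalues of the observed correlation matrix exceed the
    mean eigenvalues of simulated random-normal data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        sims[i] = np.sort(np.linalg.eigvalsh(R))[::-1]
    ref = sims.mean(axis=0)
    keep = 0
    for o, r in zip(obs, ref):  # stop at the first non-exceeding factor
        if o > r:
            keep += 1
        else:
            break
    return keep
```

Because sampling error alone inflates the leading eigenvalues of a correlation matrix, this criterion is generally less prone to overfactoring than the eigenvalue > 1 rule.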
Among the three solutions, the four-factor solution was best supported in that it
yielded a simple, interpretable factor structure. As shown in Table 3, the pattern matrix
produced a clear-cut pattern in that each item loaded on only one factor at the cutoff
value of .30. Although the structure matrix yielded quite a few items with substantial
coefficients on nonprimary factors, it still appeared similar to the pattern matrix in that the
primary coefficients followed the same pattern. In contrast, the five- and six-factor solutions appeared
to be overfactored (i.e., no item loaded on the fifth factor in the five-factor solution; only
one item loaded on each of the fifth and sixth factors in the six-factor solution). The total
variance accounted for by the four factors was 43.63%. Extracted communalities of the
variables ranged from .18 to .68, and the average communality was .44.
The four factors were labeled as follows. Factor 1 was called Teacher Support, and
Factor 2 was named Academic Competence. Factor 3 was labeled Satisfaction,
whereas Factor 4 was named Peer Support. Table 4 presents the score reliabilities
of and correlations among the four factors. The internal consistency estimates of
the factor scores, as measured by Cronbach’s alpha, were adequate, ranging from
.71 to .85 (Henson, 2001). The factor correlations were low to moderate, ranging
from .03 to .43. The low to moderate correlations provide partial support for the
discriminant validity of scores on the factors.
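Cronbach's alpha, used here as the internal consistency estimate, can be computed directly from an items-by-respondents score matrix. The sketch below is a generic implementation under the standard formula; the toy data are hypothetical, not the SPPCC responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy check: four parallel items generated from one latent score
rng = np.random.default_rng(0)
latent = rng.standard_normal(300)
scale = np.column_stack([latent + 0.8 * rng.standard_normal(300)
                         for _ in range(4)])
alpha = cronbach_alpha(scale)  # high, since all items share one latent source
```

Alpha rises with the number of items and with the average inter-item correlation, which is why the four-item Academic Competence factor can show adequate but lower reliability than the eight-item support factors.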
In sum, the principal-axis factor analysis identified a four-factor structure for the
SPPCC, including Teacher Support, Peer Support, Academic Competence, and Satisfaction.
The findings suggested that children's responses on the SPPCC did not differentiate
between academic and personal support from teachers and peers.

Study 2: Confirmatory Factor Analysis


We used confirmatory factor analysis to cross-validate the findings from Study 1 and to
test competing models (i.e., the four-factor model vs. the six-factor model). The two
models are nested: The Teacher Support factor in the four-factor model is a combination
of the Teacher Academic Support and Teacher Personal Support factors in the six-factor
model, and the Peer Support factor in the four-factor model is a combination of
the Peer Academic Support and Peer Personal Support factors in the six-factor model.

Downloaded from epm.sagepub.com by guest on March 4, 2015


Table 3. Pattern/Structure Matrix for the Four-Factor Principal-Axis Factor Analysis of the Student Personal Perception of Classroom Climate

Factor 1: Teacher support Factor 2: Academic competence Factor 3: Satisfaction Factor 4: Peer support

Item Pattern Structure Pattern Structure Pattern Structure Pattern Structure

TS1 .433 .465 .005 .116 .020 .208 .056 .243
TS2 .307 .389 .035 .119 .023 .213 .157 .294
TS3 .677 .739 −.118 .075 .137 .411 .094 .429
TS4 .692 .620 .002 .162 −.027 .169 −.152 .121
TS5 .835 .819 −.055 .148 −.008 .297 .003 .341
TS6 .480 .588 .045 .186 .108 .355 .136 .381
TS7 .466 .497 .135 .250 .008 .203 −.011 .188
TS8 .756 .774 −.005 .182 .003 .307 .046 .358
AC1 .079 .229 .723 .738 −.016 .123 −.050 −.006
AC2 .041 .221 .677 .690 .017 .163 .018 .060
AC3 .076 .255 .716 .731 −.025 .148 .032 .071
AC4 −.106 .022 .476 .457 .037 .081 −.006 −.021
SA1 .043 .276 −.126 .011 .691 .688 .008 .322
SA2 −.110 .205 −.039 .077 .778 .763 .078 .368
SA3 .055 .292 .075 .199 .618 .638 −.033 .259
SA4 −.018 .209 .029 .145 .674 .638 −.081 .204
SA5 .121 .404 .184 .315 .550 .661 .075 .368

SA6 .169 .407 .031 .155 .434 .573 .163 .421
PS1 .012 .281 .036 .063 .052 .316 .585 .613
PS2 .044 .330 −.061 −.032 −.007 .318 .738 .752
PS3 −.046 .249 −.033 −.021 .022 .309 .715 .705
PS4 −.070 .203 .024 .021 −.019 .248 .668 .632
PS5 −.030 .247 −.007 .023 .127 .358 .563 .605
PS6 .123 .330 .098 .132 −.045 .229 .485 .519
PS7 −.007 .265 .011 .020 −.036 .261 .688 .670
PS8 .095 .332 −.100 −.060 .003 .294 .634 .671

Table 4. Correlations, Reliabilities, Means, and Standard Deviations for Scores on the
Four Factors of the Student Personal Perception of Classroom Climate

                              Principal-axis factor analysis    Confirmatory factor analysis
                              TS     PS     AC     SA           TS     PS     AC     SA
Teacher Support (TS)          —                                 —
Peer Support (PS)             .41    —                          .60    —
Academic Competence (AC)      .25    .03    —                   .32    .27    —
Satisfaction (SA)             .38    .43    .18    —            .58    .57    .32    —
Reliability                   .81    .85    .71    .82          .87    .91    .79    .86
M                             18.64  11.46  9.27   11.93        18.73  11.74  9.61   12.04
SD                            0.30   0.35   0.17   0.26         5.35   6.58   2.53   4.73

Method
Participants. Participants in Study 2 were independent of participants in Study 1.
Although they attended the same schools, participants in Study 2 responded to the survey
in the spring of 2002 or 2003. A total of 322 regular classroom students participated
in Study 2 (49% boys and 51% girls). The students included 35% third, 32% fourth, and
33% fifth graders. In terms of race and ethnicity, 49% of the students were African
American, 24% were Caucasian, 9% were Hispanic, 2% were Asian/Pacific, and 2%
were multiracial; the remaining 14% of the students were missing information on
race or ethnicity. The materials and procedures were the same as in Study 1.

Results
Descriptive statistics are presented in Table 2 for all items of the SPPCC. Whereas
most items were normally distributed, two items ("My teacher wants me to do my
best schoolwork"; "I am smart enough to do my schoolwork") showed nonnormality
as indicated by the significant values for skewness or kurtosis. Relative multivariate
kurtosis was 1.19, which was less than |2.0|, so a transformation was not needed.
Five outliers were found to be statistically significant at the .01 level, F(26, 295) =
61.39, using DeCarlo's (1997) macro. The inspection of item-level responses of these
outliers showed that the responses were within the possible range, and no coding
errors were involved. CFA was conducted with and without the outliers and resulted in
the same fit indices. Thus, the results obtained with the outliers were reported. Missing
values in the current data were treated by the listwise deletion method, and 35
cases were lost as a result.
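A common way to screen for multivariate outliers of the kind described above is the squared Mahalanobis distance of each case from the sample centroid. The sketch below uses a chi-square critical value, a standard approximation; note that the DeCarlo (1997) macro the authors actually used applies an F-based criterion, and the data here are simulated.

```python
import numpy as np
from scipy import stats

def mahalanobis_outliers(data, alpha=0.01):
    """Flag cases whose squared Mahalanobis distance from the centroid
    exceeds the chi-square critical value with df = number of variables."""
    X = np.asarray(data, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    # d2[i] = centered[i] @ inv_cov @ centered[i]
    d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
    return np.where(d2 > stats.chi2.ppf(1 - alpha, df=p))[0]

# Demonstration: plant one extreme case in otherwise normal data
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
X[0] = 8.0
flagged = mahalanobis_outliers(X)  # includes case 0
```

As in the study, flagged cases warrant inspection of their raw responses (range checks, coding errors) before deciding whether to retain them.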
CFA was run with Amos 7.0. Prior to running the CFAs, the four- and six-factor models
were hypothesized such that items were specified to load on specific latent variables
(factors). Factors were allowed to correlate with one another, but measurement errors were
not allowed to correlate. The variances of the factors were set to equal 1.0 by default in
Amos. Several fit indices were used in evaluating the adequacy of the models, including


a χ² statistic, the comparative fit index (CFI), the root mean square error of approximation
(RMSEA), and the standardized root mean square residual (SRMR). The χ² test is
the only statistical significance test reported in our study. The cutoff values indicating
adequate fit for the other indices used in this study are as follows: >.95 for the CFI, <.06 for
the RMSEA, and <.08 for the SRMR (Hu & Bentler, 1998, 1999).
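The RMSEA point estimate can be recovered from a model's χ², its degrees of freedom, and the sample size. The sketch below uses the common n − 1 formulation with the Study 2 effective N of 287 (322 participants minus 35 listwise deletions); the results come out close to, though not identical with, the values Amos reports, since software implementations differ in small details.

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Uses n - 1; exact software conventions vary slightly."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Study 2 values reported in the text, with N = 287 after listwise deletion
four_factor = rmsea(524.47, 293, 287)  # close to the reported .051
six_factor = rmsea(466.20, 284, 287)   # close to the reported .046
```

Because the numerator is χ² minus df, RMSEA rewards parsimony: a model whose χ² barely exceeds its degrees of freedom yields an RMSEA near zero.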
The four-factor model yielded a statistically significant χ² value (524.47, degrees
of freedom [df] = 293), but the other fit indices indicated a good fit (CFI = .980,
RMSEA = .051, SRMR = .044). Table 5 displays the standardized factor pattern
coefficients and structure coefficients for each item, with the latter presented in parentheses.
The factor loadings of the items ranged from .49 to .84 with an average of .71.
All factor loadings were significant and in the expected direction. The modification
indices suggested that changes to the model would not improve the fit substantially.
The six-factor model also yielded a statistically significant χ² value (466.20, df =
284), yet the other fit indices indicated a good fit (CFI = .980, RMSEA = .046,
SRMR = .042). The factor loadings of the items ranged from .41 to .91 with an average
of .70. All factor loadings were significant. The modification indices indicated
that no substantial changes were needed. Because we put more weight on the other fit
indices, neither model was rejected despite the statistically significant χ² values.
When researchers have competing models with differing numbers of nested factors,
the models can be compared. One method of comparing nested models is the χ² difference test.
Therefore, a χ² difference test was used to test whether a statistically significant difference
existed between the fit of the two models. The results indicated that the six-factor
model fit significantly better than the four-factor model (Δχ² = 58.27, Δdf = 9).
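The nested-model comparison can be reproduced from the two χ² values reported in the text: the difference in χ² is itself χ²-distributed with df equal to the difference in degrees of freedom. A minimal sketch:

```python
from scipy import stats

# chi-square difference test for the nested four- vs. six-factor models,
# using the fit statistics reported in the text
delta_chi2 = 524.47 - 466.20   # = 58.27
delta_df = 293 - 284           # = 9
p_value = stats.chi2.sf(delta_chi2, delta_df)  # well below .001
```

With large samples, even substantively trivial differences between nested models can produce a significant Δχ², which is one reason researchers also weigh practical fit indices, as discussed next.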
At the same time, the issue of whether to rely on statistical tests such as the χ²
difference test or on practical fit indices for model comparisons has been debated. Recently,
researchers have tended to favor practical fit indices over statistical tests (D. B.
Bandalos, personal communication, April 17, 2006). For example, Parker, Baltes,
and Christiansen (1997) selected a best model based on practical fit indices rather
than a χ² test because the changes in their practical fit indices were small. Widaman
(1985) provided a criterion regarding changes in fit indices that should be considered
"small," suggesting that improvements of less than .01 generally indicate
functionally equivalent models. The comparison of the practical fit indices between
the four- and six-factor models revealed no difference in the CFI and only small
changes in the RMSEA and the SRMR (.005 and .002, respectively). Thus, the four-factor
model is functionally equivalent to the six-factor model. According to Kline
(2005), when fit indices suggest that two models provide a similar fit to the same
data, the more parsimonious model is preferred owing to its greater explanatory
power. Moreover, the four-factor solution was supported by the EFA results from
Study 1. Finally, the correlations between the academic and personal support factors
for both teachers and peers were so high (.87 for teachers and .95 for peers) that it is
questionable, from a measurement perspective, to consider them separate factors.
With these considerations in mind, we elected to interpret the four-factor model.
Psychometric properties of the SPPCC are provided in Table 4. Reliability estimates
of scores on the four factors, as measured by Cronbach's alpha, were adequate,


Table 5. Standardized Factor Pattern and Structure Coefficients from the Confirmatory Factor
Analysis of the Four-Factor Model

Latent variable

Item   Teacher Support (TS)   Peer Support (PS)   Academic Competence (AC)   Satisfaction (SA)
TS1 .77 (.42) .00 (.32) .00 (.11) .00 (.34)
TS2 .57 (.32) .00 (.24) .00 (.09) .00 (.26)
TS3 .74 (.40) .00 (.31) .00 (.11) .00 (.33)
TS4 .58 (.24) .00 (.19) .00 (.07) .00 (.20)
TS5 .82 (.47) .00 (.36) .00 (.13) .00 (.38)
TS6 .77 (.54) .00 (.41) .00 (.15) .00 (.44)
TS7 .49 (.37) .00 (.28) .00 (.10) .00 (.30)
TS8 .77 (.49) .00 (.37) .00 (.13) .00 (.40)
PS1 .00 (.32) .76 (.68) .00 (.12) .00 (.43)
PS2 .00 (.31) .78 (.67) .00 (.12) .00 (.42)
PS3 .00 (.34) .84 (.73) .00 (.13) .00 (.45)
PS4 .00 (.30) .72 (.63) .00 (.12) .00 (.40)
PS5 .00 (.29) .73 (.63) .00 (.11) .00 (.39)
PS6 .00 (.24) .61 (.50) .00 (.09) .00 (.31)
PS7 .00 (.32) .77 (.69) .00 (.13) .00 (.43)
PS8 .00 (.30) .74 (.64) .00 (.12) .00 (.40)
AC1 .00 (.11) .00 (.12) .68 (.30) .00 (.16)
AC2 .00 (.11) .00 (.12) .71 (.29) .00 (.15)
AC3 .00 (.14) .00 (.15) .82 (.37) .00 (.20)
AC4 .00 (.10) .00 (.11) .58 (.27) .00 (.14)
SA1 .00 (.32) .00 (.40) .00 (.15) .77 (.77)
SA2 .00 (.34) .00 (.43) .00 (.16) .84 (.83)
SA3 .00 (.28) .00 (.34) .00 (.13) .69 (.67)
SA4 .00 (.27) .00 (.34) .00 (.13) .63 (.65)
SA5 .00 (.25) .00 (.32) .00 (.12) .75 (.62)
SA6 .00 (.19) .00 (.23) .00 (.09) .57 (.45)

ranging from .79 to .91 (Henson, 2001). The correlations among the factors were low
to moderate, ranging from .27 to .60. The range of these correlations provided partial
support for the discriminant validity of scores on the factors. Teacher Support and
Peer Support showed the highest correlation (γ = .60), and Peer Support and Academic
Competence showed the lowest (γ = .27). Also, Satisfaction was almost equally
correlated with Teacher Support (γ = .58) and Peer Support (γ = .57).

Discussion
This study presents an instrument for the assessment of elementary students' personal
perceptions of their classroom climate. The present study also offers preliminary support
for the structure of the measure via the results of EFA and CFA. Taken together, the
EFA and CFA provide support for the use of a four-factor model with dimensions of


Teacher Support, Peer Support, Academic Competence, and Satisfaction. A single
scale for Teacher Support is consistent with existing classroom climate measures,
such as the CES (Moos & Trickett, 1987) and the WIHIC (Dorman, 2003; Fraser,
1998), that have only one scale measuring teacher support. The high correlation
between academic support and personal support is additional evidence that these
constructs are best represented as a single dimension (Wentzel, 1994). Finally, some
researchers postulate a broad concept of support to represent the diverse nature of
social support (e.g., Barrera, Sandler, & Ramsay, 1981). The convention in the
literature, empirical findings, and unitary conceptualizations of social support all point
toward combining Teacher Academic and Teacher Personal Support into a single
dimension for children this age. For the same reasons, we combined Peer Academic
and Peer Personal Support items into a single factor of Peer Support.
Although Sink and Spencer (2005) identified psychometric concerns with items
from the Difficulty scale on the MCI, our analyses provide preliminary support for
the four items from the Academic Competency scale on the current measure. The
four items measuring Academic Competence demonstrated adequate score reliability,
and factor analytic results supported retention of the items. Use of the Satisfaction
items was also supported in this study.
The low to moderate correlations among the four factors of the SPPCC suggest that
these factors tapped distinct aspects of the same construct. In addition, the magnitudes
and directions of the correlations were consistent with previous findings in the
literature. For example, in this study, Teacher Support was more strongly related to
Academic Competence than was Peer Support.
Initial future research with the SPPCC will seek to provide further evidence for the
validity of the measure. In subsequent research, we plan to use longitudinal designs to
study the relationships between the SPPCC and significant student outcomes. We also
hope to investigate the use of this measure with younger elementary school children
and middle school students in order to better understand the potential role of development
in students' own perceptions of their classroom climate. Given continued validity
findings that support use of the instrument, we envision a number of ways in
which the SPPCC could be used in research and practice by various educational
professionals, including teachers, administrators, and school psychologists. The potential
value of a measure such as this one seems particularly salient in the current educational
environment, in which federal laws mandate the use of evidence-based and
best practices in classrooms.
Several limitations of this study should be noted. First, because of the manner in
which the survey was administered, a possible bias may exist in students' responses.
Data were collected in classroom settings, and children's responses may have
been influenced by the presence of their teachers and/or peers. The students may
have responded more positively than they actually perceived their classroom experience
to be. One way of dealing with the issue of social desirability would be to use an
individualized, computer-based survey system, which is more confidential.
Second, factor analysis is generally sample dependent. Therefore, one should be cautious
in applying the current findings to other samples with different characteristics, such
as age, culture, socioeconomic status, and so on, without validating the use. For instance,
the degree to which this measure is appropriate for obtaining perceptions of classroom
climate from early elementary or middle school students remains unclear.
A final caveat should be made regarding the SPPCC and the construct of classroom
climate. In 1982, C. S. Anderson remarked on the heterogeneity of school climate
measures. Much of the diversity in school climate measures can be explained by
differences across studies in the variables of interest to researchers, researchers'
theoretical orientations, and measurement. A similar phenomenon exists in the field of
classroom climate in that there is considerable variability across measures. For this
study, we used several social and psychological variables that are associated with
student outcomes, that seemed appropriate and meaningful for use in elementary school
classrooms, and that recur in the literature on classroom climate. However, we
do not assume that our four factors represent the definitive set of variables for
researching classroom climate.
Although research on classroom climate has been published for decades, little work
has been done with measures of students' perceptions using statistical techniques such
as confirmatory factor analysis. In this study, we provide a measure of students' personal
experience in their classrooms, and we offer preliminary evidence for the structure of the
measure with elementary school students. As Baker (2006) pointed out, there is
increasing recognition of the importance of classroom climate and its dimensions
and their influence on important student outcomes. It is our hope that the
SPPCC can be used to further the valuable applied work and research on this topic.

Authors’ Note
Jean A. Baker was included as an author posthumously in recognition of her contributions to
this article.

Declaration of Conflicting Interests


The authors declared no potential conflicts of interest with respect to the authorship and/or
publication of this article.

Funding
Data for this study were collected through grant R306F60158 from the Institute for At-Risk
Children of the Office of Educational Research and Improvement, United States Department
of Education.

References
Allodi, M. W. (2002). A two-level analysis of classroom climate in relation to social context,
group composition, and organization of special support. Learning Environments Research,
5, 253-274.


Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational
Psychology, 84, 261-271.
Anderson, A., Hamilton, R. J., & Hattie, J. (2004). Classroom climate and motivated behavior in
secondary schools. Learning Environments Research, 7, 211-225.
Anderson, C. S. (1982). The search for school climate: A review of the research. Review of Edu-
cational Research, 52, 368-420.
Baker, J. A. (2006). Contributions of teacher-child relationships to positive school adjustment
during elementary school. Journal of School Psychology, 44, 211-229.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman.
Barrera, M., Sandler, I. N., & Ramsay, T. B. (1981). Preliminary development of a scale of
social support: Studies on college students. American Journal of Community Psychology,
9, 435-447.
Battistich, V., Solomon, D., Watson, M., & Schaps, E. (1997). Caring school communities.
Educational Psychologist, 32, 137-151.
Boatright, S. R., & Bachtel, D. C. (2002). The Georgia county guide (21st ed.). Athens: The
Georgia County Guide Office.
Brophy-Herb, H. E., Lee, R. E., Nievar, M. A., & Stollak, G. (2007). Preschoolers’ social com-
petence: Relations to family characteristics, teacher behaviors, and classroom climate. Jour-
nal of Applied Developmental Psychology, 28, 134-148.
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral
Research, 1, 245-276.
Church, M. A., Elliot, A. J., & Gable, S. L. (2001). Perception of classroom environment,
achievement goals, and achievement outcomes. Journal of Educational Psychology, 93,
43-54.
DeCarlo, L. T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2, 292-307.
Dorman, J. P. (2003). Cross-national validation of the What Is Happening in This Class?
(WIHIC) questionnaire using confirmatory factor analysis. Learning Environments
Research, 6, 231-245.
Fletcher, T. V., Bos, C. S., & Johnson, L. M. (1999). Accommodating English language learners
with language and learning disabilities in bilingual education classrooms. Learning Disabil-
ities Research & Practice, 14, 80-91.
Fraser, B. J. (1982). Development of short forms of several classroom environment scales. Journal
of Educational Measurement, 19, 221-227.
Fraser, B. J. (1987). Use of classroom environment assessments in school psychology. School
Psychology International, 8, 205-219.
Fraser, B. J. (1989). Twenty years of classroom climate work: Progress and prospect. Journal of
Curriculum Studies, 21, 307-327.
Fraser, B. J. (1990). Individualized Classroom Environment Questionnaire. Melbourne, Victo-
ria: Australian Council for Educational Research.
Fraser, B. J. (1994). Research on classroom and school climate. In D. Gabel (Ed.), Handbook of
research on science teaching and learning (pp. 493-541). New York, NY: Macmillan.
Fraser, B. J. (1998). Classroom environment instruments: Development, validity, and applica-
tions. Learning Environments Research, 1, 7-33.
Fraser, B. J. (2001). Twenty thousand hours: Editor’s introduction. Learning Environments
Research, 4, 1-5.


Fraser, B. J., Anderson, G. J., & Walberg, H. J. (1982). Assessment of learning environments:
Manual for Learning Environment Inventory (LEI) and My Class Inventory (MCI). Perth:
Western Australian Institute of Technology.
Fraser, B. J., & Fisher, D. L. (1986). Using short forms of classroom climate instruments to
assess and improve classroom psychological environments. Journal of Research in Science
Teaching, 23, 387-413.
Fraser, B. J., Giddings, G. J., & McRobbie, C. J. (1995). Evolution and validation of a personal
form of an instrument for assessing science laboratory classroom environments. Journal of
Research in Science Teaching, 32, 399-422.
Fraser, B. J., & McRobbie, C. J. (1995). Science laboratory classroom environments at schools
and universities: A cross-national study. Educational Research and Evaluation, 1, 289-317.
Furrer, C., & Skinner, E. (2003). Sense of relatedness as a factor in children’s academic engage-
ment and performance. Journal of Educational Psychology, 95, 148-162.
Goh, S. C., Young, D. J., & Fraser, B. J. (1995). Psychosocial climate and student outcomes in
elementary mathematics classrooms: A multilevel analysis. Journal of Experimental Educa-
tion, 64, 29-40.
Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Hamre, B. K., & Pianta, R. C. (2005). Can instructional and emotional support in the first-grade
classroom make a difference for children at risk of school failure? Child Development, 76,
949-967.
Harter, S. (1985). Manual for the Self-Perception Profile for Children. Denver, CO: University
of Denver.
Henson, R. K. (2001). Understanding internal consistency reliability estimates: A conceptual
primer on coefficient alpha. Measurement and Evaluation in Counseling and Development,
34, 177-189.
Hinshaw, S. P. (1992). Externalizing behavior problems and academic underachievement in
childhood and adolescence: Causal relationships and underlying mechanisms. Psychologi-
cal Bulletin, 111, 127-155.
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychome-
trika, 30, 179-185.
Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to
underparameterized model misspecification. Psychological Methods, 3, 424-453.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.
Huebner, E. S. (1994). Preliminary development and validation of a multidimensional life
satisfaction scale for children. Psychological Assessment, 6, 149-158.
Johnson, B., & McClure, R. (2004). Validity and reliability of a shortened, revised version of
the Constructivist Learning Environment Survey (CLES). Learning Environments Research,
7, 65-80.
Johnson, D. W. (1974). Evaluating affective outcomes of schools. In W. Walberg (Ed.), Eval-
uating school performance (pp. 99-112). Berkeley, CA: McCutchan.
Johnson, D. W., Johnson, R. T., & Anderson, D. (1983). Social interdependence and classroom
climate. Journal of Psychology, 114, 135-142.
Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational
and Psychological Measurement, 20, 141-151.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New
York, NY: Guilford Press.


La Paro, K. M., Pianta, R. C., & Stuhlman, M. (2004). The classroom assessment scoring sys-
tem: Findings from the prekindergarten year. Elementary School Journal, 104, 409-426.
Lorsbach, A. W., & Jinks, J. L. (1999). Self-efficacy theory and learning environment research.
Learning Environments Research, 2, 157-167.
Moos, R. H. (1979). Evaluating educational environments. San Francisco, CA: Jossey-Bass.
Moos, R. H., & Trickett, E. J. (1987). Classroom Environment Scale manual (2nd ed.). Palo
Alto, CA: Consulting Psychologists Press.
Mucherah, W. M. (2003). The influence of technology on the classroom climate of social stud-
ies classrooms: A multidimensional approach. Learning Environments Research, 6, 37-57.
National Institute of Child Health and Human Development, Early Child Care Research Net-
work. (2002). The relation of global first-grade classroom environment to structural class-
room features and teacher student behaviors. Elementary School Journal, 102, 367-387.
National Institute of Child Health and Human Development, Early Child Care Research Net-
work. (2005). A day in third grade: A large-scale study of classroom quality and teacher
and student behavior. Elementary School Journal, 105, 305-323.
No Child Left Behind Act of 2001, 20 U.S.C. § 6301 (2003).
Parker, C. P., Baltes, B. B., & Christiansen, N. D. (1997). Support for affirmative action, justice
perceptions, and work attitudes: A study of gender and racial-ethnic group differences. Jour-
nal of Applied Psychology, 82, 376-389.
Pianta, R. C., Hamre, B., & Stuhlman, M. (2003). Relationships between teachers and children.
In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology (Vol. 7, pp. 199-234).
New York, NY: Wiley.
Roeser, R. W., & Eccles, J. S. (1998). Adolescents' perceptions of middle school: Relation to
longitudinal changes in academic and psychological adjustment. Journal of Research on
Adolescence, 8, 123-158.
Roeser, R. W., Eccles, J. S., & Sameroff, A. J. (2000). School as a context of early adolescents’
academic and social emotional development: A summary of research findings. Elementary
School Journal, 100, 443-471.
Schibeci, R. A., & Fraser, B. J. (1985). Use of Lisrel in empirical test validation: An illustration
using a classroom environment instrument. Psychological Reports, 57, 139-142.
Sink, C. A., & Spencer, L. R. (2005). My Class Inventory–Short Form as an accountability tool
for elementary school counselors to measure classroom climate. Professional School Coun-
seling, 9, 37-48.
Taylor, P., Fraser, B., & Fisher, D. (1997). Monitoring constructivist classroom learning envi-
ronments. International Journal of Educational Research, 27, 293-302.
Urdan, T., & Schoenfelder, E. (2006). Classroom effects on student motivation: Goal structures,
social relationships, and competence beliefs. Journal of School Psychology, 44, 331-349.
Velicer, W. F. (1976). The relation between factor score estimates, image scores and principal
component scores. Educational and Psychological Measurement, 36, 149-159.
Walberg, H. J., & Anderson, G. J. (1968). Classroom climate and individual learning. Journal of
Educational Psychology, 59, 414-419.
Waxman, H. C. (1991). Investigating classroom and school learning environments: A review of
recent research and developments in the field. Journal of Classroom Interaction, 26, 1-4.
Wentzel, K. R. (1994). Relations of social goal pursuit to social acceptance, classroom behav-
ior, and perceived social support. Journal of Educational Psychology, 86, 173-182.
Wentzel, K. R. (1997). Student motivation in middle school: The role of perceived pedagogical
caring. Journal of Educational Psychology, 89, 411-419.


Wentzel, K. R. (1998). Social support and adjustment in middle school: The role of parents,
teachers, and peers. Journal of Educational Psychology, 90, 202-209.
Wentzel, K. R. (2003). School adjustment. In W. M. Reynolds & G. E. Miller (Eds.), Handbook
of psychology (Vol. 7, pp. 235-258). New York, NY: Wiley.
Widaman, K. F. (1985). Hierarchical nested covariance models for multitrait-multimethod data.
Applied Psychological Measurement, 9, 1-26.
