School Effectiveness Research From A Review of The

Hans Luyten, University of Twente
School effectiveness research (SER) has flourished since the 1980s. In recent years, however,
various authors have criticised several aspects of SER. A thorough review of recent criticism can
serve as a good starting point for addressing the flaws of SER, where appropriate, thereby
supporting its further development. This article begins by reviewing the criticism from different
perspectives by discussing the political-ideological nature of SER, its theoretical limitations and the
research methodology it applies. The review of each type of criticism is accompanied by a review of
the recommendations that the critics propose for improving SER. We then proceed to present our
views on each line of criticism and propose five avenues that we consider promising for the further
development of SER.
Introduction
This article refers to school effectiveness research (SER) as the line of research that
investigates performance differences between and within schools, as well as the
malleable factors that enhance school performance (usually using student achieve-
ment scores to measure the latter). The studies by Edmonds (1979) and by Rutter,
Maughan, Mortimore, and Ouston (1979) are generally considered the starting point
of SER. Much of the early work within the SER tradition had the explicit goal of
refuting the ‘‘schools-don’t-make-a-difference’’ interpretation that had been attrib-
uted to the research outcomes presented by Coleman et al. (1966) and by Jencks et al.
(1972).
School effectiveness research has flourished since 1979, and it has attracted
considerable political support in several countries. It has become sophisticated in
both data collection and data analysis, and some authors, including Stringfield (1995)
and Scheerens (1997), have sought to connect its empirical findings to economic and
social scientific theory. Publications by Scheerens and Bosker (1997) and by Teddlie
and Reynolds (2000) present the knowledge base of school effectiveness research.
Despite the progress that has been made, some aspects of SER have been criticised
for a number of years. Although much of that criticism has been aired by authors
belonging to the SER community, external critics have raised objections that are even
more fundamental. In our view, the criticism calls for a thorough review. As
Goldstein and Woodhouse (2000, p. 13) state,
. . .if many of its [SER’s] proponents remain superficially defensive and it [SER] ignores
or fails to understand the warnings of its critics, we have very little optimism that it will
survive its present state of adolescent turmoil to emerge into full maturity.
A thorough review can serve as a starting point for defining strategies to address the
valid elements of this criticism, thereby supporting the further development of SER.
This article starts by reviewing recent criticisms of SER expressed by various
authors and by considering their recommendations for improvement. Our goal is not
to present a comprehensive overview of every critique that has been written about
SER since its inception as a line of scientific research. We focus instead on criticism
that has been published more recently (from the 1990s to date). An extensive
discussion of earlier criticism (e.g., Cuban, 1983; Ralph & Fennessey, 1983; Rowan,
Bossert, & Dwyer, 1983) thus exceeds the scope of this article.
We used a ‘‘snowball’’ method to locate relevant literature for our review. Our
starting point was a discussion concerning 20 years of SER (Townsend, 2001), which
was presented in a special issue of this journal. References in this discussion led us to
other publications criticising SER, and these publications led us to yet others, and so
forth. In addition to presenting the critiques, we offer our own views on each of three
elements of criticism and suggest a number of new avenues for SER.
Objectivity. The SER tradition assumes that research will generate objective
knowledge through the application of rigorous quantitative methodologies. Many
critics from outside the SER community (Ball, 1998; Grace, 1998; Hamilton, 1998;
Lingard, Ladwig, & Luke, 1998; Rea & Weiner, 1998) consider this assumption
naïve. Rejecting the belief in true objectivity, they argue that all research is
contaminated, at least to some extent, by the personal, political, and ideological
sympathies of the researcher. In their view, the processes of formulating research
questions, collecting information, and reporting findings always involve ideological
and political choices, which (might) serve particular interests. According to the
critics, this is especially apparent when research is dominated and funded by
governmental or government-related agencies. Rea and Weiner (1998) even suggest
that university departments or centres advocating SER should be formally recognised
as ‘‘think-tanks’’ for policy-makers rather than as centres for independent research.
Further, some critics argue that the tendency of SER to focus on the relationship
between school factors and student achievement while failing to address the limits of
what can be achieved through schooling has led to a culture of blame (Rea & Weiner,
1998) and guilt (Hargreaves, 1994).
The view of teaching and learning. Angus (1993), Elliott (1996) and other authors also
raise fundamental objections to the view of teaching and learning on which SER is
(implicitly) based. Elliott (1996) contrasts the SER perception of education as a
‘‘coercive process of social induction’’ (p. 209) with the perception of education as a
process that is ‘‘shaped by a concern to respect pupils’ capacities for constructing
personal meanings, for critical and imaginative thinking and, self-directing and self-
evaluating their learning’’ (p. 209). He explicitly denounces the idea that the quality
of the teaching-learning process should be judged according to its results. Elliott
argues that learning is an unpredictable process; a teacher’s responsibility is to create
conditions ‘‘which enable pupils to generate personally significant and meaningful
outcomes for themselves’’ (p. 221). In other words, the quality of education lies not in
its results but in the teaching-learning process itself.
These external critics essentially argue that SER should be abandoned altogether.
Some external critics (e.g., Thrupp, 2001), however, do not suggest abandoning SER;
they ask instead that SER researchers (e.g., in England) recognise the political
implications of their research and avoid compromising research and writing wherever
possible.
Our position regarding the political-ideological aspects of SER. In our opinion, objectivity
is a worthy ideal for scientific research, even though its full realisation is unlikely.
Although the critics argue justifiably that every study is ideologically biased to some
extent, this does not mean that striving for objectivity is useless. On the contrary,
giving up the ideal of objectivity would render any research activity futile. The
fundamental reason for the existence of scientific research is its capacity for
generating information and knowledge that is valid regardless of ideological
preferences. Approaching this ideal as closely as possible requires minimising
ideological biases as much as possible.
The goal of objectivity is often most accessible in one major aspect of scientific
research: data analysis. Many of the quantitative methods used in SER involve
nothing more than using computer algorithms to process data. Although the data
processing itself is a completely objective procedure, the use of quantitative methods
certainly does not guarantee unbiased conclusions, as the output of the analyses
usually requires some interpretation, and ideological biases might again play a role at
this point. Given sufficient information about the data, analysis, and outcomes,
however, careful readers should be able to detect such biases. Research findings
derived from datasets that are open for secondary analysis further increase the
probability of revealing such biases.
The impact of ideological bias is probably much stronger in the agenda-setting phase
of research. There are few generally accepted standards concerning the propriety and
legitimacy of research questions. The clearest examples of the ideological bias of SER
are evident in the choice of research questions, as many aspects of the research agenda
are based on governmental rather than scientific concerns.
The extent to which SER can be accused of lacking ideological independence is
clearly reflected in the research questions it addresses and even more clearly in those
that it does not address. For example, SER hardly ever reflects on the appropriateness
of officially stated educational goals. The correspondence between these goals and the
standardised tests that are typically used to assess educational effectiveness is another
question that SER tends to ignore. The tests that are commonly used address only the
cognitive development of students, whereas the officially stated goals in most
countries are much broader, extending to such aspects as personal development and
citizenship. Furthermore, SER pays little attention to the limits of what can be
achieved through schooling. For example, the SER literature has yet to address the
question of whether it is fair to hold schools accountable for the persistent language
disadvantages experienced by minority students who mainly speak their native
languages outside of school.
There is no doubt that school effectiveness researchers in many countries have
strong ties to the policy community. Some authors (e.g., Slee & Weiner, 2001;
Thrupp, 2001) raise the problem that SER findings are inherently influenced by the
fact that they are funded primarily by the government or by government-related
agencies. This sceptical view of the relationship between the scientific community and
the world of policy is far too simplistic, however, particularly given the fact that the
work of these critics is usually funded by governments as well.
The second point of criticism relates to the fundamental view of teaching and
learning in SER. In our view, there is no alternative to viewing education as a
goal-oriented activity. Given the enormous amount of resources (taxpayers’
money) invested in education each year, it would be unethical not to consider
its effects. Education should prepare students for life. This includes stimulating
their personal development, citizenship and readiness for the labour market.
Assessing the extent to which these goals are accomplished is essential. We feel
that critics who argue that the teaching-learning process is unpredictable provide
an excellent argument for not investing in education—if the outcomes of
education are completely unpredictable, it makes no sense to invest any money
in it at all. Educational results must therefore be a major factor in assessing the
quality of education. This is not to say that the process of teaching is irrelevant.
Some teaching methods may be successful in certain aspects but also repellent
(e.g., because they are merely based on punishing students for failing to meet
targets set by the teacher). Teaching methods that fail to promote learning
progress satisfactorily must be rejected, regardless of any other ‘‘qualities’’ they
may have. It may be possible to choose between two or more methods that lead to
similar results. In that case, it makes sense for process characteristics to be the
decisive factor.
We must also acknowledge that, in practice, results may provide an incomplete or
biased picture. When teachers ‘‘teach to the test’’, the results may provide no
indication of any real interest, learning or understanding on the part of the students.
In our view, however, this implies no fundamental flaw in the use of results to assess
quality. It shows only that the application of the principle entails considerable
practical difficulties. Addressing these difficulties requires better instruments for
measuring the extent to which the intended results of teaching and learning have been
realised. The need for valid and reliable instruments for investigating the (presumed)
non-cognitive effects of education is especially acute.
Another serious problem is that educational goals are often formulated in abstract
or even vague terms. Governments frequently allocate large amounts of money and
other resources for educational activities without clearly specifying what these
activities are supposed to accomplish. To our knowledge, policy-makers hardly ever
identify a particular educational programme as a goal in and of itself. On the
contrary, they are usually quite eager to mention a wide range of effects that the
programme is supposed to have and goals that it is supposed to serve. If an
educational programme is to be seen as a goal in itself, that fact should be stated
explicitly right from the start. We feel, however, that this view is frequently
advanced only after the outcomes of an educational programme have turned out to
be less positive than originally hoped.
The theoretical basis of SER. According to Coe and Fitz-Gibbon (1998), the definitions
of SER variables lack theoretical grounding (their inclusion is justified on statistical
rather than theoretical grounds) and precision, and they lead to common-sense
operationalisations that vary considerably across studies. The same authors accuse
SER of ‘‘‘fishing’ for correlations’’ between particular indicators of school
effectiveness and particular school features, without clarifying why specific
characteristics of students, classes and schools are expected (or appear) to accompany
higher scores on school effectiveness indices. The correlational studies that pervade
SER are based on simple linear logic (‘‘more of this is associated with more of that’’)
and allow conclusions concerning neither the causal relations between variables nor
the mechanisms behind those relationships. According to Coe and Fitz-Gibbon
(1998), these flaws, combined with the ‘‘vote counting’’ (comparing apples to
oranges) that is common in SER reviews, imply that conclusions about how specific
phenomena influence school performance are incorrect, thereby jeopardising theory
development.
Coe and Fitz-Gibbon (1998) also argue that the perception of consensus
concerning the correlates of effective schools (e.g., Edmonds’ (1979) five-factor
model, or the nine-factor model presented by Reynolds & Teddlie, 2000) is partly
the product of the vague formulation and sloppy measurement (e.g., through
self-reports and/or unstandardised instruments) of such factors. They contend that review
studies are likely to take the least well-defined concepts as the strongest confirmation
of general results, as these concepts tend to be measured in such a wide variety of
ways. In addition, studies rarely provide information about the reliability of such
measurements. Moreover, reviews that use vote counting to summarise research tend
to ignore the most important information about effectiveness factors—the size of their
effects. Another problem relates to the risk of chance capitalisation. The repeated
reports of significant correlations between effectiveness and ‘‘educational leadership’’
are due, at least in part, to its inclusion in so many studies. We could expect to find a
statistically significant result from time to time through chance alone. Finally,
researchers often tend to report only the significant findings, ignoring those that are
not significant.
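The chance-capitalisation argument can be made concrete with a small simulation (our illustration; the sample sizes and the ‘‘leadership’’ label are hypothetical, not taken from the studies cited). When a factor with no true effect is tested in many independent studies, roughly 5% of them will still yield a significant correlation at the conventional .05 level; if only those studies are reported, the literature appears to contain a replicated finding.

```python
import random

def pearson_r(x, y):
    """Pearson correlation, computed from sums of squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
n_studies, n_schools = 200, 50
# Critical |r| for p < .05 (two-sided) with n = 50 is about 0.28.
critical_r = 0.279

significant = 0
for _ in range(n_studies):
    # "Leadership" and achievement are drawn independently: no true effect.
    leadership = [random.gauss(0, 1) for _ in range(n_schools)]
    achievement = [random.gauss(0, 1) for _ in range(n_schools)]
    if abs(pearson_r(leadership, achievement)) > critical_r:
        significant += 1

# Around 5% of the 200 null studies come out "significant" by chance alone;
# reporting only these creates the illusion of a robust finding.
print(significant, "of", n_studies, "null studies significant")
```

The simulation illustrates why repeated significant correlations for a frequently studied factor, taken on their own, are weak evidence.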
Thrupp (2001) argues that SER studies continue to be undertheorised. He further
asserts that school effectiveness researchers have failed to embrace the detailed
microlevel research that could build a body of data that would be suitable for
generating theory, and that they have not tapped into the wealth of sociological
theories of education.
Blind spots in the research. In its most comprehensive form, SER can be conceptualised
as the study of relationships between school input, school context, the schooling
process (at both school and classroom levels), and school performance. School
performance is usually expressed in terms of average student achievement by school.
These measures ideally include adjustments for such student characteristics as
entry-level achievement and socioeconomic status (SES), in order to determine the
‘‘net’’ value added by a school. The main goal of such studies is to identify the
factors that lead to the best results; as a consequence, the most malleable factors
receive the most emphasis (Scheerens & Bosker, 1997).
A number of conceptual models exist to represent the lines of thinking within SER.
The model developed by Scheerens (1997), shown in Figure 1, is a typical example.
Figure 1 illustrates the relationships that SER assumes to exist among the factors it
addresses. For example, a school’s financial resources and the professional experience
of its teachers are two factors included within the category of school inputs. These
factors are assumed to have a direct impact on processes within the school as well as
on the general performance of the school. The nature of school leadership, teacher
cooperation within schools, and similar school-level characteristics are thought to
affect student achievement directly and indirectly, through processes occurring at the
classroom level (e.g., the quality of instruction). Other models used within SER have
expressed similar thoughts (Creemers, 1994; Hallinger & Heck, 1996; Stringfield &
Slavin, 1992). Although such models take the complex direct and indirect
relationships between factors into account conceptually, critics of SER argue that
empirical analyses are limited to the estimation of direct effects. We discuss this issue
in more detail in the section on criticism regarding research methodology.
To date, SER has concentrated especially on investigating relationships between
the nature of school-level processes and school performance. According to critics of
SER, some of the variation in school performance may be explained by the
relationship between school context and processes occurring within schools (2→3 in
Figure 1), as well as by the relationships between school context and school
performance (2→4). Thrupp (2001) argues that the background characteristics of
students, the composition of student populations within schools, and the curricula
used by the schools are often overlooked in the search for explanatory variables,
because they are erroneously taken as given. Slee and Weiner (1998) adopt an even
wider perspective, focusing on how schooling is influenced by the social, cultural, and
economic contexts of schools, as represented by the neighbourhoods, governments
and societies within which schools are embedded (cf. Lingard et al., 1998, and
Lauder, Jamieson, & Wikeley, 1998, as cited in our discussion of the political-
ideological criticism).
The student populations of schools can differ considerably in the proportion of
students coming from homes with particular characteristics. The extent to which and
how the home situation affects the schooling process (2→3) and how differences in
student populations promote or block student achievement (2→4) have yet to receive
much attention in SER. Authors have similarly ignored the question of which school
and classroom structures and strategies are most profitable for specific types of
student groups.
Thrupp (2001) refers to Hatcher (1998), who argues that school culture is a
product of the interaction between the culture of pupils and the formal school culture
(2↔3). Further, according to Slee and Weiner (1998), the student population of a
school both responds to and influences its structure and culture. Grace (1998) argues
that SER is overly reductionistic in its attempts to explain differences in school
effectiveness. Referring to the successes of Catholic schools, he stresses that school
effectiveness is not the product of separate, individual factors but results instead from
strongly interrelated factors that find their expression in such concepts as ‘‘school
communities’’ and ‘‘school mission’’. For this reason, Grace argues that future
research should be more sensitive to the complexity of school effects and that its
analysis and methodology should be more innovative.
In addition to calling for more attention to the impact of variations in school
context on school processes and school outputs, other scholars recommend ‘‘opening
the black box’’ of the schooling process. According to Hill (1998), ‘‘. . .most school
effectiveness research has been top-down, it has been driven by the agenda of the
researchers, and it has failed to make meaningful connections with the place where
most school learning takes place, namely the classroom. . .’’ (p. 427).
Lingard et al. (1998) state that the use of only a small set of indicators to assess
school quality has led SER to be excessively oriented toward accountability. These
authors do not feel that surveys are adequate for opening the black box of schooling;
they therefore call for more in-depth, qualitative analyses of the processes that
actually take place within schools.
Narrow indicators of school effectiveness. Coe and Fitz-Gibbon (1998) label the effect
criteria used in SER as ‘‘a very narrow range of measures’’ (p. 423), which they
perceive to be due to the fact that researchers use data that are already available (e.g.,
student scores at national examinations) or easy to measure, among other factors.
Publications by Slee and Weiner (1998) and by Bosker and Visscher (1999) also point
to the fact that, in most SER studies, student achievement in the basic skills is
unfortunately the only criterion for judging school performance.
A fruitful road toward school improvement? Although studies of the extent to which
schools vary in effectiveness and of the factors that seem to promote effectiveness
are academically interesting, few researchers see them as goals in themselves. They hope,
and probably expect, that the investigations will eventually help to promote school
effectiveness. Several scholars doubt that the approach followed in SER can ever
produce a theory of school improvement, however, or that it can be helpful in
improving the quality of schools. Lauder et al. (1998) wonder whether school
improvement based on SER will ever be possible. They doubt that universal critical
success factors can be found across schools, given the extent to which they differ in
context, organisational structure, culture, and other aspects. Coe and Fitz-Gibbon
(1998) state that the poor theoretical base of SER and the scarcity of evidence
concerning the relationships among variables prevent this line of investigation from
ever providing a solid foundation for school improvement. Moreover, the research
strategy of using information obtained from analysing features common to effective
schools as a means of improving underperforming schools ignores the fact that
correlation provides no proof of causation (Coe & Fitz-Gibbon, 1998).
Recommendations for upgrading the theoretical calibre of SER. Given the criticisms
described above, it is not surprising that critics argue for more theory-driven SER
(Coe & Fitz-Gibbon, 1998; Thrupp, 2001). Their general message is that the results
of previous research should be used more carefully. This applies to the results of
earlier SER in general, and especially to the concepts and findings from other fields of
educational and non-educational research that are relevant to SER. A closer
connection with earlier scientific work should be accompanied by formulations and
operationalisation of variables that are as precise as possible, as well as by hypotheses
about the relationships among variables. Critics urge researchers to use theory rather
than common sense to explain the relationships they study, in order to allow progress
in theoretical development.
Research should develop convincing explanations of how and why variables are
related. Studies by Thrupp (2001) and by Scheerens and Bosker (1997) argue that
qualitative studies into processes that occur within schools can provide a valuable
basis for developing hypotheses to be tested in large-scale research into the factors
that make some schools more effective than others. Scheerens and Bosker (1997)
recommend focusing on such school processes as teacher selection and recruitment,
the allocation of teachers to student groups, and the extent and nature of evaluation.
In their view, SER should also allow more differentiation in school effects across
grades, subjects and teachers, as well as among students of varying ages and with
varying abilities.
Several authors argue for including outcomes that reflect the actual (instead of the
assumed) educational objectives of the schools in question (Coe & Fitz-Gibbon,
1998) and metacognition (e.g., learning to learn, solve problems, and reflect; Bosker
& Visscher, 1999) along with the usual school effectiveness criteria (e.g., drop-out
rates and the percentage of grade repeaters; Scheerens & Bosker, 1997). Moreover,
the use of curriculum-embedded and criterion-referenced tests is considered to
improve the fairness of school effectiveness evaluations: testing what should have
been taught as well as studying the extent to which schools meet the (minimum) goals
that have been set for them (Coe & Fitz-Gibbon, 1998; Scheerens & Bosker, 1997).
Several critics recommend that SER focus more on features of the school
environment (Lauder et al., 1998; Thrupp, 2001), the classroom (Hill, 1998), the
composition of the student population, and other potentially explanatory variables
(Thrupp, 2001). Thrupp is especially interested in the measures that are needed in
order to serve student populations with specific characteristics appropriately. This
interest probably stems from the belief that the concept of a single universal best way
to treat and teach any student group under any circumstance is unrealistic. Such
research requires intervention into educational practice and the subsequent
evaluation of the effectiveness of those interventions. This is consistent with Hill
(1998), who ultimately expects ‘‘hit-and-miss’’ approaches to school reform to have
more impact than the ‘‘combing-through-natural-variation’’ approach of SER. Hill
thinks that the old paradigm of SER as specific research in itself has reached the end
of its ‘‘use-by-date’’. In the new paradigm, SER should be undertaken as a critical
component of continuous school improvement and accountability.
Our position regarding the theoretical calibre of SER. With some exceptions, we feel that
the theoretical basis for selecting and operationalising the variables studied in SER is
often quite weak; it seldom constitutes an elaborated theory. The concepts
investigated are often too vague, and the operationalisations vary greatly across
studies. Although this is not true of all research, too many studies fish for
correlations because they do not use theory to derive hypotheses concerning the
interrelationships among variables. Progress in theoretical development is slow
because the results of SER are often quite disappointing; correlations are
not found, or they are low and inconsistent across studies. If correlations are found,
we know that there must be a reason that the variables are correlated. We cannot be
certain, however, whether the correlation is due to coincidence or whether it is
meaningful, nor can we identify the mechanisms underlying the correlations.
Criticism regarding the research methodology of SER. Unlike the political-ideological
criticism, much of the methodological criticism originates from researchers within
the field of SER itself. It centres on three broad issues in mainstream SER studies.
Does the school effect exist in the real world? Coe and Fitz-Gibbon (1998) point out that
the result that authors usually report as the ‘‘school effect’’ is actually the between-
school variance that cannot be explained by the school intake characteristics that are
controlled in the data analyses. School effectiveness studies typically find that a range
of control variables—primarily such student background characteristics as prior
achievement, SES, ethnicity and gender—account for a considerable proportion of
the variation between schools. They subsequently assume that the remaining variance
between schools is caused by certain school characteristics. A rigorous meta-analysis
of school effectiveness studies (Scheerens & Bosker, 1997; Witziers & Bosker, 1997),
however, revealed weak effects for each of the effectiveness factors that had been
investigated (cooperation, school climate, monitoring, opportunity to learn, parental
involvement, pressure to achieve, and school leadership). These school character-
istics, which are often said to enhance effectiveness, account for a small proportion of
the variation between schools; when authors report a ‘‘school effect’’, they are
generally referring to this proportion of the variation.
For the reasons described above, SER studies that have been conducted to date
suggest that schools can have only a limited impact on the ‘‘school effect’’. In that
sense, the term is misleading (Coe & Fitz-Gibbon, 1998). Moreover, as Goldstein
(1997) observes, estimates of a school’s effectiveness are always based on its relative
position in comparison with other schools. As a result, such studies always identify
‘‘effective’’ and ‘‘ineffective’’ schools, even when they all accomplish acceptable
results according to some absolute standard. We know only that student achievement
differs by school and that the student background variables included in the analysis
cannot account for these differences. Although the differences may be due to
unmeasured aspects of the school organisation, they may just as well be caused by
factors beyond a school’s control (e.g., unmeasured student background character-
istics or variables that relate to the school’s context). As stated by Thrupp (1999, p.
5), ‘‘. . .they may be school-based, they may nevertheless not be school-caused’’.
Because residual variance between schools is assumed to express a school effect, its
size is strongly dependent on the control variables that are included in the data
analysis, and especially on their explanatory power with regard to the variance in
student achievement between schools. If the control variables account for much of
this variation, the school effect is small. In other cases, the school effect may be
spuriously large. The choice of control variables is therefore crucial (Coe & Fitz-
Gibbon, 1998; Goldstein, 1997; Thrupp, 1999, 2001). In most cases, pragmatic
considerations (e.g., the accessibility of information) are at least as important as
theoretical ones. Most researchers acknowledge that measures of prior achievement
are indispensable in effectiveness studies, however, and gender, ethnicity, and SES
are often taken into account as well. Coe and Fitz-Gibbon (1998) note that many
studies take no account of what has actually been taught in the school (curriculum
alignment).
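The dependence of the estimated ‘‘school effect’’ on the chosen control variables can be sketched with a toy simulation (all parameters are hypothetical; this illustrates the logic, not any study’s data). The between-school share of raw achievement variance conflates intake differences with genuine school contributions; adjusting for prior achievement shrinks the residual share, and a different set of controls would yield a different ‘‘school effect’’.

```python
import random

random.seed(2)
n_schools, n_per = 100, 30

# Hypothetical data-generating model: achievement depends mainly on prior
# achievement (which itself differs between school intakes), plus a much
# smaller genuine school contribution and individual noise.
rows = []  # (school, prior, achievement)
for s in range(n_schools):
    school_effect = random.gauss(0, 0.3)  # genuine school contribution
    intake_mean = random.gauss(0, 0.7)    # clustered intake differences
    for _ in range(n_per):
        prior = intake_mean + random.gauss(0, 1.0)
        ach = 0.8 * prior + school_effect + random.gauss(0, 1.0)
        rows.append((s, prior, ach))

def between_share(rows, value):
    """Between-school variance as a share of total variance."""
    by_school = {}
    for r in rows:
        by_school.setdefault(r[0], []).append(value(r))
    vals = [v for vs in by_school.values() for v in vs]
    grand = sum(vals) / len(vals)
    total = sum((v - grand) ** 2 for v in vals)
    between = sum(len(vs) * (sum(vs) / len(vs) - grand) ** 2
                  for vs in by_school.values())
    return between / total

# Ordinary least squares of achievement on the control variable (prior).
xs = [p for _, p, _ in rows]
ys = [a for _, _, a in rows]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a0 = my - b * mx

raw_share = between_share(rows, lambda r: r[2])                    # intake + school
adj_share = between_share(rows, lambda r: r[2] - (a0 + b * r[1]))  # "school effect"
print(round(raw_share, 2), round(adj_share, 2))
```

Under these assumed parameters the adjusted share comes out far smaller than the raw share: the residual ‘‘school effect’’ is whatever variance the chosen controls happen to leave unexplained, which is precisely the critics’ point.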
Number crunchers. The strong reliance of SER on quantitative research methods has
met with harsh criticism from scholars working outside of the effectiveness
tradition (Angus, 1993; Slee & Weiner, 2001; Thrupp, 1999, 2001). They
contend that the application of scientific standards, objective measurement, and
sophisticated, rigorous data analysis, with findings expressed as figures (e.g.,
regression coefficients, variance components, levels of significance), amounts to
nothing more than the objectification of teachers and pupils. In their view, the
prevailing empirical-analytical approach in the field of SER ignores the values and
life experiences of research participants and pays no attention to the meanings that
they give to events.
Authors within the school effectiveness tradition (Coe & Fitz-Gibbon, 1998;
Goldstein, 1997; Goldstein & Woodhouse, 2000; Reynolds, Hopkins, & Stoll, 1993;
Scheerens & Bosker, 1997) have also expressed concerns regarding the use of
predominantly quantitative methods. Their main concern is that effectiveness
research draws primarily on large-scale datasets containing relatively superficial
information, and that data analysis usually stops after estimating direct linear
relationships between one dependent variable (student achievement) and several
independent variables. They observe a preference for cross-sectional studies that are
biased towards variables (both dependent and independent) that are easy to measure.
Research questions are often addressed through standard, quantitative methodolo-
gies, even though the methodology should be tailored to the research questions.
These authors also emphasise that investigation into relationships that are more
complex (e.g., indirect, reciprocal, and curvilinear relationships, interaction and
differential effects, thresholds; cf. Coe & Fitz-Gibbon, 1998; Goldstein, 1997;
Scheerens & Bosker, 1997) remains scarce. Contrary to the ‘‘external critics’’, who
raise fundamental objections to quantitative methodology, the critics who are part of
the SER community call for a more sophisticated use of the available research
methods (qualitative and quantitative) in order to arrive at a closer match between
method and research questions. Finally, Goldstein (1997) points to the largely
ignored problem of measurement error; the low reliability of the variables under
analysis may produce seriously biased research findings.
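The measurement-error problem Goldstein raises can be made concrete with a classic attenuation example (our own sketch, with assumed parameters): regressing an outcome on an unreliably measured predictor biases the slope towards zero by roughly the reliability of the measure.

```python
import numpy as np

# Sketch of attenuation bias: with reliability 0.5, the estimated slope
# is roughly half the true slope.  All numbers are illustrative assumptions.
rng = np.random.default_rng(1)
n = 100_000

true_x = rng.normal(0, 1, n)
y = 1.0 * true_x + rng.normal(0, 1, n)      # true slope = 1.0

noisy_x = true_x + rng.normal(0, 1, n)      # reliability = 1 / (1 + 1) = 0.5

slope_true = np.polyfit(true_x, y, 1)[0]    # close to 1.0
slope_noisy = np.polyfit(noisy_x, y, 1)[0]  # close to 0.5

print(slope_true, slope_noisy)
```

A low-reliability covariate therefore not only adds noise but systematically distorts the adjusted school comparisons built on it.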
Snapshot research. School effectiveness research has thus far focused strongly on
observational research, and studies based on (quasi-)experimental research are
relatively rare. Several authors within the SER community (Coe & Fitz-Gibbon,
1998; Hill, 1998; Scheerens & Bosker, 1997) refer to the limited value of studies that
mainly explore the natural variation between schools. This type of study basically
yields statistically sophisticated descriptions of existing situations, but it provides little
insight into how processes may differ in radically different situations.
The ultimate goal of effectiveness research is to find out ‘‘what works’’ and to
discover ways to improve education. Cross-sectional studies have been and continue
to be the prevailing research design in the SER tradition, essentially amounting to
comparisons of successful schools with ‘‘failing’’ counterparts. As Reynolds et al.
(1993, p. 51) noted about 10 years ago, school effectiveness researchers have, over
the past decade, cooperated more with
their colleagues from the tradition of school improvement research, which tends to
focus—almost by definition—more on development over time. The success of the
journal School Effectiveness and School Improvement and the annual International
Conference for School Effectiveness and Improvement are clear indications of this
cooperation, but actual long-term studies remain scarce. Most ‘‘longitudinal’’ school
improvement studies that have been conducted thus far, however, probably have very
little to say about the development of schools over the long term. Fullan (1991)
observes that institutional reform takes 5 years or more. The typical ‘‘longitudinal’’
school improvement research project comes to a halt before that time (Gray,
Goldstein, & Jesson, 1996). The ‘‘Louisiana School Effectiveness Study’’ (Teddlie &
Stringfield, 1993), which covers more than a decade, is probably the most notable
exception.
School effectiveness research has also focused more on successful schools than on
their less well-functioning counterparts (Reynolds & Teddlie, 2001). The factors that
enhance effectiveness may be quite different from those that lead to ineffectiveness. It
may simply be an unreachable goal for ineffective schools to adopt the policies and
practices that exist in well-performing schools (Slavin, 1998). While a school that is
already performing well may be able to increase its effectiveness by adopting a strong
focus on higher order thinking skills, the ineffectiveness of another school may be due
to such factors as an undisciplined school climate or insufficient attention to basic
skills. The need for studies that cover a wide time-span is generally acknowledged
(Coe & Fitz-Gibbon, 1998; Gray et al., 1996; Hill, 1998; Reynolds et al., 1993; Slater
& Teddlie, 1992). Cross-sectional studies essentially provide snapshots of successful
schools. Being effective and becoming effective are two different things, however, and
being effective is not the same as staying effective. While this statement may sound like
a truism, hardly any studies on school effectiveness have taken this distinction into
account. On the contrary, most SER studies implicitly assume that school
characteristics that correlate with being effective must also be related to becoming
and staying effective.
Recommendations for improving the methodological quality of SER. Coe and Fitz-Gibbon
(1998) call for a more appropriate description of the (adjusted) differences between
schools, which are usually referred to as ‘‘school effects’’. Their suggested alternative
‘‘adjusted academic performance of specific groups’’ has yet to receive much support.
Goldstein (1997) suggests qualifying the descriptions ‘‘effective’’ and ‘‘ineffective’’
with the term ‘‘relative’’. Several authors (Goldstein, 1997; Scheerens & Bosker,
1997; Thrupp, 1999, 2001) have argued that the selection of covariates in SER
should be based on theoretical relevance, although pragmatic considerations seem to
prevail in practice. Thrupp stresses the relevance of school population composition
and asserts that many school processes are deeply influenced by student intake
characteristics. Goldstein (1997) asserts that adjusting test scores for prior
achievement ignores the possibility that students may develop at different rates.
Measuring student development requires a series of test scores over an extended
period.
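Goldstein's point can be illustrated with a minimal hypothetical example: two students with identical prior scores are treated alike by a single prior-achievement adjustment, yet a series of test scores reveals clearly different growth rates. The numbers below are invented for illustration only.

```python
import numpy as np

# Two hypothetical students with the same starting score but different
# growth rates; fitting a line per student over several waves recovers
# the individual slope that a one-off prior-score adjustment misses.
waves = np.array([0, 1, 2, 3])           # measurement occasions
student_a = np.array([50, 52, 54, 56])   # slow growth
student_b = np.array([50, 55, 60, 65])   # fast growth, same starting point

slope_a = np.polyfit(waves, student_a, 1)[0]  # 2.0 points per wave
slope_b = np.polyfit(waves, student_b, 1)[0]  # 5.0 points per wave
print(slope_a, slope_b)
```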
The need for a theoretical basis for variable selection applies also to the selection of
the dependent variables (measures of effectiveness). In this respect, Bosker and
Visscher (1999) propose ‘‘authentic testing’’ (e.g., teacher ratings of academic
performance based on observations, written material, etc.) as a means of obtaining
more valid information on student and school performance. Several authors call for
studies that pay more attention to teachers and departments as further sources of
variance, in addition to schools and students (Goldstein, 1997; Hill & Rowe, 1996;
Luyten & Snijders, 1996; Scheerens & Bosker, 1997). Responding to this call would
require multilevel analysis techniques that are somewhat more advanced than those
usually employed.
A widespread call for more longitudinal research on effectiveness and
improvement also exists within the SER community (Coe & Fitz-Gibbon, 1998;
Gray et al., 1996; Reynolds et al., 1993; Slater & Teddlie, 1992). Several authors
(Coe & Fitz-Gibbon, 1998; Hill, 1998; Scheerens & Bosker, 1997) argue that
future school effectiveness and school improvement research projects should be set
up as integrated components of educational innovation experiments or reform
initiatives. Scheerens and Bosker (1997) argue for studies that investigate the
processes by which ineffective schools improve and effective schools deteriorate. It
is not clear whether they also consider the possibility of retrospective research on
deterioration and improvement. They further recommend the use of large-scale
national datasets (involving cohorts of schools and students) to study the
development of schools in combination with additional, more in-depth data
collection.
Thrupp (1999, 2001), Slee and Weiner (2001), and other ‘‘external critics’’
criticise SER for its tendency to oversimplify the complex realities of education. In
their view, this tendency is due largely to the predominant use of large-scale
quantitative methodologies. These critics call for more small-scale, qualitative, and
detailed microlevel research. They see little use in striving for objective measurement
and applying rigorous data analysis, as ‘‘all forms of research are ideological’’ (Slee &
Weiner, 2001, p. 94). Researchers within the school effectiveness tradition (e.g.,
Teddlie & Stringfield, 1993) have also argued for the use of qualitative methods.
While internal critics consider qualitative approaches potentially useful supplements
to quantitative research methods, external critics tend to favour the replacement of
quantitative with qualitative methodologies. In the 1990s, cooperation increased
between researchers from the quantitative school effectiveness tradition and their
colleagues from the school improvement field, which began as a qualitatively oriented
research tradition (Reynolds et al., 1993).
In addition to the call for more qualitative methods, authors working within the
field of SER recommend the adoption of even more advanced quantitative methods
of data analysis (Coe & Fitz-Gibbon, 1998; Goldstein, 1997; Scheerens & Bosker,
1997; Scheerens, Bosker, & Creemers, 2001). These authors note that, while data
analysis in most studies stops at estimating direct linear relationships, analyses should
also explore relationships that are more complex. According to Goldstein and
Woodhouse (2000), the quantitative analysis in many SER studies oversimplifies
reality to the point of distortion. They note that the results of analysis can be only as
good as the data they concern. In their view, the main problem lies in the availability
of suitable data and not in any technical flaws in the analytical procedures.
Reynolds and Teddlie (2001, pp. 108 – 109) make the radical recommendation that
researchers focus on failure and dysfunctionality:
. . .the dominant paradigm has been to study those schools already effective or ‘‘well’’
and to simply propose the adoption of the characteristics of the former organisations as
the goal for the less effective. In medicine, by contrast, research and study focuses upon
the sick person and their symptoms, the causes of their sickness and on the needed
interventions that may be appropriate to generate health. The study of medicine does not
attempt to combat illness through the study of good health, as does school effectiveness:
it studies illness to combat illness through promoting ‘‘wellness’’.
According to these authors, one of the main reasons why so little is known about the
road from ineffectiveness to effectiveness is the fact that prior research has tended to
focus on normal or average schools.
Our position regarding the methodological calibre of SER. We agree with Goldstein
(1997) and Coe and Fitz-Gibbon (1998) that the term ‘‘school effect’’ is hardly
appropriate, as SER has thus far failed to show conclusively that schools are able to
influence that which is commonly known as the ‘‘school effect’’. In our opinion,
however, efforts to replace this term with another are unlikely to result in any
substantial improvement in the quality of SER. Considering alternative methods of
Thus far, SER has placed too much emphasis on learning that takes place inside the
school and on academic goals that have traditionally received the most attention from
educational authorities. A genuine school effect would express the difference between
attending school and receiving no education in school. To date, however, only
relative performance differences between schools have been investigated. Moreover,
SER has been strongly observational and has made little use of existing theories.
Finally, the goal of developing strategies for school improvement based on insights
into the features of effective schools does not sufficiently consider the extent of
variation between schools.
also reduce existing social inequalities. While we do not consider this a primary goal
of education, it is nonetheless a likely consequence of optimal education.
The ability to evaluate many valuable outcomes of schooling adequately requires
the development or drastic improvement of assessment instruments. Outcomes in the
cognitive domain (especially lower order cognitive skills) form an obvious exception,
although the number of instruments for other skills (e.g., problem-solving, learning to
learn, and cross-curricular competencies) is growing. Although cognitive develop-
ment is highly relevant to preparation for the labour market, other characteristics
(e.g., communication skills, persistence, and self-esteem) are also relevant.
Instruments used in educational research to measure such personal and social
competencies rely almost exclusively on teacher ratings or on self-reports by students.
Little is known about the validity of these instruments, and some of the findings based
on the use of such instruments certainly call for explanation. For example, a number
of studies (e.g., Fend, 1981; Martinot, Kuhlemeier, & Feenstra, 1988) have identified
a tendency for the self-confidence of students to decrease over the course of their
school careers. The percentage of students who believe themselves to be good at
mathematics decreases between the 1st and 2nd years of secondary education.
Student perceptions apparently do not reflect the progress they actually make. Yet, it
seems hardly conceivable that students in their 2nd year are not aware that they
perform at a higher level than do their peers in the lower grade. This is just one of the
problems associated with measuring the personal and social aspects of student
development. One problem with teacher ratings is that they may not reveal
differences between schools and classrooms, as teachers tend to use the typical
student in their classroom—rather than some (unknown) national average—as the
reference point for assigning scores. This tendency may result in the underestimation
of variation between schools and classrooms.
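This reference-point problem can be demonstrated with a small simulation of our own construction (all parameters assumed): if each teacher rates pupils relative to the typical pupil in the class, the between-school component of variance disappears from the ratings entirely.

```python
import numpy as np

# Illustration of the reference-point problem: ratings expressed relative
# to the class mean carry no between-school variance, even when true
# ability differs substantially across schools.
rng = np.random.default_rng(2)
n_schools, n_students = 20, 25
school_id = np.repeat(np.arange(n_schools), n_students)

school_level = np.repeat(rng.normal(0, 1, n_schools), n_students)
ability = school_level + rng.normal(0, 1, n_schools * n_students)

# Teacher rating = pupil's standing relative to the own-class mean
class_means = np.array([ability[school_id == s].mean() for s in range(n_schools)])
rating = ability - class_means[school_id]

def var_between(values):
    """Variance of the school means."""
    return np.array([values[school_id == s].mean() for s in range(n_schools)]).var()

print(var_between(ability), var_between(rating))  # second is (numerically) zero
```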
number of complex issues, including how to determine the extent to which the
demands that are placed on schools are realistic. For example, is it realistic to expect
schools to compensate for the language disadvantages of minority students when
these students continue to speak other languages as soon as they leave the classroom?
The need to understand factors outside the school is likely to be even stronger for
non-cognitive outcomes (social and personal competencies) than it is for traditional,
cognitive goals.
Micro-economic theory (Scheerens & Van Praag, 1998) may also provide a fruitful
theoretical basis for more cause-effect oriented studies in schools. From this
perspective, actors within educational organisations operate as rational decision-
makers, who allocate their time and efforts to activities in such a way as to maximise
their personal utility. Formal analysis can lead to hypotheses concerning what
happens to the allocation of time and effort when conditions within schools change.
These hypotheses can subsequently be verified through research and statistical
analyses. In this way, micro-economic theory may help to clarify why actors within
educational organisations act as they do, how they respond to changes in the school
setting and how their behaviour can be changed such that their own productivity—as
well as that of the organisation—will increase.
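The formal analysis referred to above can be sketched in miniature. The model below is our own hedged construction, not taken from Scheerens and Van Praag (1998): a teacher divides a fixed time budget T between two activities to maximise U = a·ln(t1) + b·ln(t2), which has the closed-form optimum t1 = T·a/(a + b). A policy change that raises the payoff weight of one activity then shifts time towards it in a predictable direction.

```python
# Minimal rational-actor sketch (assumed utility form and weights): how a
# change in conditions shifts the optimal allocation of a fixed time budget.

def optimal_allocation(T, a, b):
    """Closed-form maximiser of a*ln(t1) + b*ln(t2) subject to t1 + t2 = T."""
    t1 = T * a / (a + b)
    return t1, T - t1

# Baseline: equal weights -> time split evenly
print(optimal_allocation(40, 1.0, 1.0))   # (20.0, 20.0)

# A change that doubles the payoff weight of activity 1 shifts time to it
print(optimal_allocation(40, 2.0, 1.0))   # roughly (26.7, 13.3)
```

Hypotheses derived in this way (e.g., about how teachers reallocate effort after a policy change) can then be confronted with data, as the text describes.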
Micro-economic theory calls one key idea of SER into question: the expectation
that the manipulation of the right (school-level, classroom-level or both) variables
automatically produces improvements in student achievement levels. Research into
educational leadership can be used to illustrate this way of reasoning in much of the
SER. This type of research assumes that principals have central roles within schools
and can thus exercise a certain amount of control over processes within their
organisations. It is assumed that, should a principal initiate activities aimed at
improving academic achievement, both teachers and students will react, thereby
improving achievement. However, although the position of the principal within a
school is formally central and powerful, many studies in the field of educational
administration (e.g., Ingersoll, 1993; Witziers, 1992) have shown this assumption to
be highly questionable. Micro-economists argue that educational organisations
should be considered as multiplayer organisations in which power is spread among
school staff. Students and teachers respond to school policies by withdrawing the
resources that are under their control (i.e., their effort), with the consequence that
policies initiated by the principal may have no impact at all.
1. How can the feedback be made accessible to as many members of the target
group as possible?
2. Which characteristics of the feedback content are most appreciated and
considered most credible by school staff?
3. Which type of feedback is most accessible and easy to understand?
4. What is involved in using the feedback to detect problems, find remedies and
implement them successfully, thereby making schools more effective?
5. Which change strategies make schools more effective, given specific
school and context characteristics?
This type of research calls for longitudinal studies, as the process of change and
improvement takes years. Answering several of the research questions requires
qualitative investigations to explore how schools act in response to the feedback, what
problems occur, and how they can be addressed. In addition, more improvement-
oriented research, along with more traditional SER, could prove valuable for
portraying ‘‘the effectiveness history’’ of the schools under scrutiny (question 5): has
the effectiveness of the schools in question improved and, if it has, how is their
effectiveness related to the other features of the schools?
There is no guarantee that the full implementation of all of the suggestions we have
made will resolve all of the flaws of SER as they have been described. Further, the
new approaches are likely to generate new problems as well. These potential
improvements are worthy of exploration, however, as they have the potential to bring
us closer to understanding what makes schools effective and how they can become
(even) more effective.
References
Abell, P. (1991). Rational choice theory. Aldershot: Elgar.
Anderson, L. W. (2000). Why should reduced class size lead to increased student achievement? In
M. C. Wang & J. D. Finn (Eds.), How small classes help teachers do their best (pp. 3 – 24).
Philadelphia: Temple University Centre for Research in Human Development and Education.
Angus, L. (1993). The sociology of school effectiveness. British Journal of Sociology of Education, 14,
333 – 345.
Annevelink, E. (2004). Class size: Linking teaching and learning. Enschede, The Netherlands: Print
Partners Ipskamp.
Ball, S. (1998). Educational studies, policy entrepreneurship and social theory. In R. Slee & G.
Weiner (with S. Tomlinson) (Eds.), School effectiveness for whom? Challenges to the school
effectiveness and school improvement movements (pp. 70 – 83). London: Falmer Press.
Bosker, R. J., & Visscher, A. J. (1999). Linking school management theory to school effectiveness
research. In A. J. Visscher (Ed.), Managing schools towards high performance (pp. 291 – 322).
Lisse, The Netherlands: Swets & Zeitlinger.
Coe, R., & Fitz-Gibbon, C. T. (1998). School effectiveness research: Criticisms and
recommendations. Oxford Review of Education, 24(4), 421 – 438.
Coleman, J. S., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F., & York, R.
(1966). Equality of educational opportunity. Washington, DC: Government Printing Office.
Creemers, B. P. M. (1994). The effective classroom. London: Cassell.
Cuban, L. (1983). Effective schools: A friendly but cautionary note. Phi Delta Kappan, 64, 695 –
696.
Dahl, R., & Lindblom, C. (1963). Politics, economics, and welfare: Planning and politico-economic
systems resolved into basic social processes. New York: Harper.
Dalin, P. (1998). Developing the twenty-first century school, a challenge to reformers. In A.
Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of
educational change (Vol. 5, pp. 1059 – 1073). Dordrecht/Boston/London: Kluwer Academic
Publishers.
De Vos, H. (1989). A rational-choice explanation of compositional effects in educational research.
Rationality and Society, 1, 220 – 239.
Edmonds, R. R. (1979). Effective schools for the urban poor. Educational Leadership, 37, 15 – 27.
Elliot, J. (1996). School effectiveness research and its critics. Alternative visions of schooling.
Cambridge Journal of Education, 26(2), 199 – 224.
Fend, H. (1981). Theorie der Schule, 2. durchgesehene Auflage [Theory of the school, second revised
edition]. München, Germany: Urban & Schwarzenberg.
Fullan, M. (1991). The new meaning of educational change. London: Cassell.
Fullan, M. (1993). Change forces: Probing the depths of educational reform. London: Falmer Press.
Glass, G. V. (1979). Policy for the unpredictable (Uncertainty research and policy). Educational
Researcher, 8(9), 12 – 14.
Goldstein, H. (1997). Methods in school effectiveness research. School Effectiveness and School
Improvement, 8, 369 – 395.
Goldstein, H., & Woodhouse, G. (2000). School effectiveness research and educational policy.
Oxford Review of Education, 26, 353 – 363.
Grace, G. (1998). Realizing the mission: Catholic approaches to school effectiveness. In R. Slee &
G. Weiner (with S. Tomlinson) (Eds.), School effectiveness for whom? Challenges to the school
effectiveness and school improvement movements (pp. 117 – 127). London: Falmer Press.
Gray, J., Goldstein, H., & Jesson, D. (1996). Changes and improvements in schools’ effectiveness:
Trends over five years. Research Papers in Education, 11, 35 – 51.
Gray, J., Goldstein, H., & Thomas, S. (2001). Predicting the future: The role of past performance
in determining trends in institutional effectiveness at A-level. British Educational Research
Journal, 27, 391 – 406.
Hallinger, P., & Heck, R. H. (1996). The principal’s role in school effectiveness: An assessment of
methodological progress, 1980 – 1995. Paper presented at the Annual Meeting of the American
Educational Research Association, New York.
Hamilton, D. (1998). The idols of the market place. In R. Slee & G. Weiner (with S. Tomlinson)
(Eds.), School effectiveness for whom? Challenges to the school effectiveness and school improvement
movements (pp. 13 – 20). London: Falmer Press.
Reynolds, D., Hopkins, D., & Stoll, L. (1993). Linking school effectiveness knowledge and school
improvement practice: Towards a synergy. School Effectiveness and School Improvement, 4, 37 –
58.
Reynolds, D., & Teddlie, C. (2000). The processes of school effectiveness. In C. Teddlie & D.
Reynolds (Eds.), The international handbook of school effectiveness research (pp. 134 – 159).
London/New York: Falmer Press.
Reynolds, D., & Teddlie, C. (2001). Reflections on the critics and beyond them. School Effectiveness
and School Improvement, 12, 99 – 113.
Rowan, B., Bossert, S. T., & Dwyer, D. C. (1983). Research on effective schools: A cautionary
note. Educational Researcher, 12, 24 – 31.
Rowan, B., Raudenbush, S. W., & Cheong, Y. F. (1993). Teaching as a nonroutine task:
Implications for the management of schools. Educational Administration Quarterly, 29(4), 479 –
500.
Rutter, M., Maughan, B., Mortimore, P., & Ouston, J. (1979). Fifteen thousand hours: Secondary
schools and effects on children. Boston: Harvard University Press.
Scheerens, J. (1997). Conceptual models and theory-embedded principles on effective schooling.
School Effectiveness and School Improvement, 8, 269 – 310.
Scheerens, J., & Bosker, R. J. (1997). The foundations of educational effectiveness. Oxford/New York/
Tokyo: Pergamon.
Scheerens, J., Bosker, R. J., & Creemers, B. P. M. (2001). Time for self-criticism: On the viability
of school effectiveness research. School Effectiveness and School Improvement, 12, 131 – 157.
Scheerens, J., & Van Praag, B. M. S. (Eds.). (1998). Micro-economic theory and educational
effectiveness. Enschede: Print Partners Ipskamp.
Silins, H., & Mulford, B. (2003). Schools as learning organisations. The case for system, teacher
and student learning. Journal of Educational Administration, 40(5), 425 – 446.
Silins, H. C., Mulford, W. R., & Zarins, S. (2002). Organizational learning and school change.
Educational Administration Quarterly, 38(5), 613 – 642.
Slater, R. O., & Teddlie, C. (1992). Toward a theory of school effectiveness and leadership. School
Effectiveness and School Improvement, 3, 242 – 257.
Slavin, R. (1998). Sands, bricks and seeds: School change strategies and readiness for reform. In A.
Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of
educational change (Vol. 5, pp. 1299 – 1313). Dordrecht/Boston/London: Kluwer Academic
Publishers.
Slee, R., & Weiner, G. (2001). Education reform and reconstruction as a challenge to research
genres: Reconsidering school effectiveness research and inclusive schooling. School Effective-
ness and School Improvement, 12, 83 – 98.
Slee, R., & Weiner, G. (with Tomlinson, S.) (Eds.). (1998). School effectiveness for whom? Challenges to
the school effectiveness and school improvement movements. London: Falmer Press.
Smith, L. M. (1998). A kind of educational idealism: Integrating realism and reform. In A.
Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of
educational change (Vol. 5, pp. 100 – 120). Dordrecht/Boston/London: Kluwer Academic
Publishers.
Stringfield, S. (1995). Attempting to enhance students’ learning through innovative programs: The
case for schools evolving into high reliability organizations. School Effectiveness and School
Improvement, 6, 67 – 96.
Stringfield, S., Millsap, M. A., & Herman, R. (1998). Using ‘‘Promising Programs’’ to improve
educational processes and student outcomes. In A. Hargreaves, A. Lieberman, M. Fullan, &
D. Hopkins (Eds.), International handbook of educational change (Vol. 5, pp. 1314 – 1338).
Dordrecht/Boston/London: Kluwer Academic Publishers.
Stringfield, S., & Slavin, R. E. (1992). A hierarchical longitudinal model for elementary school
effects. In B. P. M. Creemers & G. J. Reezigt (Eds.), Evaluation of effectiveness (ICO
publication 2; pp. 35 – 68). Groningen, The Netherlands: ICO.
Teddlie, C., & Reynolds, D. (2000). The international handbook of school effectiveness research.
London/New York: Falmer Press.
Teddlie, C., & Stringfield, S. (1993). Schools make a difference: Lessons learned from a 10-year study of
school effects. New York: Teachers College Press.
Teddlie, C., Stringfield, S., & Reynolds, D. (2000). Context issues within school effectiveness
research. In C. Teddlie & D. Reynolds (Eds.), The international handbook of school effectiveness
research (pp. 160 – 186). London/New York: Falmer Press.
Thomas, S., Smees, R., MacBeath, J., Robertson, P., & Boyd, B. (2000). Valuing pupils’ views in
Scottish schools. Educational Research and Evaluation, 6, 281 – 316.
Thompson, J. D. (1967). Organizations in action. New York: McGraw-Hill.
Thrupp, M. (1999). Schools making a difference, let’s be realistic! Buckingham/Philadelphia: Open
University Press.
Thrupp, M. (2001). Sociological and political concerns about school effectiveness research: Time
for a new research agenda. School Effectiveness and School Improvement, 12, 7 – 40.
Townsend, T. (Ed.). (2001). Twenty years of school effectiveness research: Critique and response.
School Effectiveness and School Improvement, 12, 1 – 157.
Van der Wal, M., & Rijken, S. (2002). Cross Curriculaire Competenties: De samenhang tussen cognitieve
schoolprestaties, CCC en schoolkenmerken [Cross curricular competences: the relationship
between cognitive achievement, CCC and school characteristics]. Paper presented at the
Onderwijs Research Dagen, 29 – 31 May 2002, Antwerp, Belgium.
Visscher, A. J. (Ed.). (1999). Managing schools towards high performance. Lisse, The Netherlands:
Swets & Zeitlinger.
Visscher, A. J., & Coe, R. (2002). School improvement through performance feedback. Lisse, The
Netherlands: Swets & Zeitlinger.
Weiss, C. H. (1998). Improving the use of evaluations: Whose job is it anyway? In A. J. Reynolds &
H. J. Walberg (Eds.), Advances in educational productivity (Vol. 7, pp. 263 – 276). Greenwich/
London: JAI Press.
Witziers, B. (1992). Coördinatie binnen scholen voor voortgezet onderwijs [Coordination within
secondary schools]. Enschede, The Netherlands: Department of Education, University of
Twente.
Witziers, B., & Bosker, R. J. (1997, January). A meta-analysis on the effects of presumed school
effectiveness enhancing factors. Paper presented at the International Congress for School
Effectiveness and Improvement, Memphis, TN.
Witziers, B., Bosker, R. J., & Krüger, M. L. (2003). Educational leadership and student
achievement: The elusive search for an association. Educational Administration Quarterly,
39(3), 398 – 425.