

How Does School Bureaucracy Affect Student Performance?
A Case of New Jersey School Districts

Weerasak KRUEATHEP, PhD


(Department of Public Administration, Chulalongkorn University, Bangkok, Thailand 10330. E-mail: weerasak.k@chula.ac.th)

Abstract: There exist extensive debates on the relationship between school bureaucracy and student performance. Chubb and Moe (1988, 1990) contend that bureaucracy causes poorer student performance. On the contrary, Smith and Meier (1994, 1995) argue that school bureaucracy is a result of poorly performing schools, not vice versa. This essay reexamines the relationship through new evidence, using panel data from over 600 school districts in New Jersey covering the academic years 2001/02 through 2005/06. The findings from fixed-effect models support both Chubb and Moe's (1988, 1990) and Smith and Meier's (1994, 1995) views. At the 8th and 11th grade levels, school bureaucracy is associated with poorer student performance as measured by literacy and mathematics tests. At the 4th grade level, by contrast, there is insufficient evidence to conclude that school bureaucracy is harmful to students' learning. This research suggests that we have to be very specific when articulating school administration reforms. An across-the-board solution of eliminating all school bureaucratic institutions and replacing them with market-like school operations may not bring about the positive outcomes that some advocates have argued for.
Key words: school bureaucracy, student performance, New Jersey school districts

1. Background

One of the contemporary education policy debates is over school choice. Public choice advocates argue that market-based school systems can provide a better education to students than public schools (e.g., Chubb and Moe 1988, 1990; Friedman 1962). These authors explain that the bureaucratic features of public school systems render them ineffective organizations that lower student performance. Public schools are less effective since their quasi-monopoly power directs expenditures not to improving student achievement or the quality of education, but to internal surpluses and excessive bureaucracy (Friedman 1962). Hence, their suggestion is to convert public schools into market-based institutions with less bureaucratic control and to encourage more competition in school systems.
However, opponents of school choice argue that the effects of school choice on student achievement, as once promoted in the works of Chubb and Moe (1988, 1990), are marginal and small enough that they may be irrelevant for public policy purposes (Witte 1992, 1996; Smith and Meier 1994). In fact, school bureaucracy exists to prevent discrimination, favoritism, or fraud, and to maintain minimum academic standards (Rothstein 1993). As a result, the proposal to liberalize the public school system fails to consider the multiple goals of public education, and leads to reforms that generate more social costs than benefits (Witte 1992; Smith and Meier 1994).
Hence, the absence of a consensus on the benefits or drawbacks of school bureaucracy for student performance calls for further empirical evidence from a different sample in order to derive more solid conclusions. Building upon past empirical studies by Chubb and Moe (1988), Smith and Larimer (2004), Bohte (2001), and Meier, Polinard, and Wrinkle (2000), this research addresses a question central to the existing debates: How does school bureaucracy affect student performance? This study aims to provide additional evidence from New Jersey school districts. The research question is meaningful, since it is not only a policy question of importance to education reform in the United States but also an issue with theoretical implications that further our understanding of public bureaucracy in school systems. Before major school reform options are undertaken in the US or elsewhere, the relationship between bureaucracy and student performance should be clearly understood.
The research will address the link between school bureaucracy and performance in four major steps.
First, it outlines theoretical arguments for and against school bureaucracy. Second, it discusses the data and
methods used in the analysis. Third, it provides empirical results based on a panel dataset of over 600 school
districts in New Jersey for a five-year period. Finally, it discusses the research findings and notes the
implications for educational reform policy in particular, and for public administration theories in general.

2. Debates over School Bureaucracy and Student Performance


Advocates of school choice and similar market-based reforms in education (e.g., Chubb and Moe
1988, 1990; Friedman 1962) argue that public schools perform poorly because they are subordinates in a
hierarchical system of democratic politics and unresponsive to task environments. Public choice theorists
have developed this conception based on the view that public bureaucracies are monopoly producers that
ignore the demands of their clientele, who have no other choice but to consume the services provided by
bureaucracies. In market-based institutions, by contrast, 'exit and voice' options (Hirschman 1970) often force private schools to pursue what students and parents really want. Unlike private schools, public schools have little incentive to worry about their effectiveness and responsiveness to the demands of parents and students for quality education.
The only control mechanism over public schools is the democratic process (Chubb and Moe 1988). Notwithstanding, as Chubb and Moe (1988) argue, this mode of institutional control eventually results in weak organizational performance. Administrators and higher-level authorities directing school operations may have agendas of their own that do not fulfill parental and student needs. Additionally, "public schools are products of democratic institutional arrangements" (Chubb and Moe 1988, 1070). All organized interests have a stake in getting legislatures and school boards to institutionalize their preferences through hundreds of mandates, rules, procedures, programs, and other social values. Unfortunately, parents and students are less powerful as organized groups than others, e.g., teacher unions. In effect, this democratic process results in a more bureaucratized school system that serves internal needs and the most powerful electoral constituencies, rather than the needs of its real clientele--parents and students. Hierarchical control mechanisms are then institutionalized to ensure that all electoral preferences are carried out. However, these mechanisms inevitably lead to more stringent rules and procedures that limit the autonomy of street-level bureaucrats (teachers) to propose and implement innovative pedagogical improvements, thus preventing the school system from responding quickly and effectively to the demands of parents and students.
Chubb and Moe (1988) also argue that, unlike the public school system, private schools operate
in entirely different institutional environments. They are leaner in their responses to demands for quality
education and act in their own best interests when making adaptive adjustments. In this view, private schools consistently out-perform over-bureaucratized public schools, since market incentives foster competition and direct responsiveness to clientele groups, which in turn generate better educational performance.
Counterarguments to the school choice paradigm are made by Smith and Meier (1994, 1995).
They contend that bureaucracy is created to respond to low school performance and to address
environmental demand issues. Public school systems are more bureaucratic since they are faced with
more complex task environments (Meier, Polinard, and Wrinkle 2000). As argued by Meier, Polinard, and Wrinkle (2000), poorly performing schools are overwhelmed with multifaceted problems: higher poverty, a higher proportion of disadvantaged students, teen pregnancy, and immigrant students. All of these issues create a demand for bureaucracy to respond to the problems faced by schools' clienteles. School bureaucracy is essential to handle these delicate tasks, and to free teachers from administrative burdens so that they can spend their time on teaching. These institutional responses inevitably create higher administrative overhead, leading to growing school bureaucracies.
Past empirical studies tend to support both claims. Chubb and Moe (1988, 1990) conducted comparative analyses of public and private schools using national survey data. They found that private schools were more likely to have the characteristics--namely, methods of institutional control and responsiveness to environments--widely believed to produce educational effectiveness. Similarly, Bohte (2001) found negative relationships between school bureaucracy and student performance based on an analysis of Texas school districts from 1991 to 1996.
On the contrary, Smith and Meier (1994, 1995) argued that the studies of Chubb and Moe (1988, 1990) used vague notions of school bureaucracy and relied solely on subjective assessments based on teachers' perceptions of how bureaucratic the school system was. Smith and Meier (1994) provided empirical evidence against Chubb and Moe (1988, 1990) based on a 50-state dataset, and developed more precise measures of school bureaucracy. They found that bureaucracy had only marginal negative predictive power for student performance. Later, Smith and Meier (1995) and Meier, Polinard, and Wrinkle (2000) showed that the negative relationship between bureaucracy and school performance should be viewed in terms of its causal logic: large bureaucracies do not cause poorer school performance; rather, they are evidence of a response to lower-performing schools.
Similarly, Smith (1994) doubted the public choice premise on competition among public schools. Based on a Florida school district dataset, he found that competition among public schools would result in a 'creaming' effect, further exacerbating inequities among school districts. He concluded that public choice policy options may not always be as impressive as widely argued. In contrast, Wenglinsky (1997) found that spending on central administration at the district level had an indirect, statistically significant positive effect on national mathematics achievement in 1992 (grades 4, 8, and 12), because school administration helped improve learning environments and more administrators could help advance educational improvement strategies. Smith and Larimer (2004), meanwhile, took the middle ground, stating that the nature of the relationship between school bureaucracy and performance depends on the particular performance measures being examined: both negative and positive consequences of school bureaucracy can be observed in different circumstances.
In sum, what is needed is evidence from a different dataset in order to provide a validity test of the existing findings. This study tests the relationship between school bureaucracy and student performance by following the models originally employed in previous research (e.g., Smith and Larimer 2004; Bohte 2001; Meier, Polinard, and Wrinkle 2000). By taking this approach, this research engages in replication with different samples and more refined regression models based on the education and public administration literature. It is expected to make a substantive contribution to scientific progress, as suggested by King, Keohane, and Verba (1994) and King (1995).

3. Data and Methods


This study is based on data from school districts in New Jersey. New Jersey school districts vary substantially in terms of student enrollment and demographic status, as well as school district administration (see Appendix 1). The data for each school district come from two major sources: the New Jersey School Report Card compiled by the State of New Jersey's Department of Education, and the Common Core of Data (CCD) provided by the National Center for Education Statistics (NCES) [1]. The data cover the academic years 2001-02 through 2005-06. The pooled cross-sectional data yield a total of 3,334 observations. However, since not all districts have the same student grade structure, and since some data on specific items were missing, the actual number of observations in the analysis is smaller.
It should be noted up front that, in the case of New Jersey, some school districts do not operate any public schools and thus report zero student enrollment. These districts send the children living within their boundaries to be educated in other nearby districts. Unfortunately, the available data do not indicate how many students are educated under this arrangement. Therefore, I have dropped those districts from the analysis. In effect, the subsequent analysis may not truly represent the benefits of school bureaucracy in this respect, which are presumably expected to be positive.

[1] The data are available at http://www.state.nj.us/education/data/ and at http://nces.ed.gov/ccd/index.asp respectively.
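Although the paper does not document its data-processing steps, a panel of this kind can be assembled along the following lines. This is only an illustrative pandas sketch: the file names and column names (district_id, year, enrollment) are hypothetical placeholders, not the actual layouts of the NJ Report Card or NCES CCD files.

```python
import pandas as pd

# Hypothetical file and column names; the actual NJ DOE and NCES CCD layouts differ.
report_card = pd.read_csv("nj_report_card_2002_2006.csv")  # NJ School Report Card
ccd = pd.read_csv("nces_ccd_nj_2002_2006.csv")             # NCES Common Core of Data

# Merge the two sources on district identifier and academic year
# to form the pooled cross-sectional (panel) dataset.
panel = report_card.merge(ccd, on=["district_id", "year"], how="inner")

# Drop non-operating districts (zero student enrollment), as described above.
panel = panel[panel["enrollment"] > 0]

# Index by entity (district) and time (year) for the panel estimators used below.
panel = panel.set_index(["district_id", "year"]).sort_index()
print(panel.shape)
```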
Panel-data structures are frequently affected by problems of serial correlation and heteroscedasticity. Average district scores on SATs in one year can easily be a function of average scores from previous years. Likewise, errors may well have unequal variances between large and small school districts. In the present case, autocorrelation and heteroscedasticity were detected in all models after fitting them by pooled cross-sectional OLS regressions [2]. Clearly, these violate some Gauss-Markov assumptions of OLS. Additionally, school districts may have strong, unobserved effects on the academic achievement of students attending them. Since these factors are unobserved, they are not included in the regression model, and they would bias the estimates if estimation were done by OLS. To control for the possibility that school districts have different unobserved effects, I assume in this study that these unobserved factors are fixed, at least during the period of study, and estimate the regression models by the fixed-effect (FE) method.

[2] Autocorrelation in all six regression models was detected by Wooldridge's (2000) autocorrelation test. Heteroscedasticity was detected by White's heteroscedasticity test.

Like OLS, the FE estimator still suffers from heteroscedasticity and serial correlation threats, which bias the standard errors of the regression estimates [3]. Thus, to handle both problems, I use the FE estimator with clustered robust standard errors, which have been shown elsewhere to be effective against joint heteroscedasticity and serial correlation (e.g., Bertrand, Duflo, and Mullainathan 2004; Petersen 2007). In this study, the standard errors are clustered on the panel identifier--the school districts.

[3] Wooldridge's (2000) autocorrelation test showed statistically significant p-values for the lagged residuals in all six models. Heteroscedasticity was detected by White's heteroscedasticity test.
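The paper does not say which software was used; the sketch below shows how such a fixed-effect model with district-clustered standard errors could be estimated in Python with the linearmodels package. The variable names, and the `panel` DataFrame indexed by (district, year) from the sketch above, are illustrative assumptions rather than the study's actual code.

```python
from linearmodels.panel import PanelOLS

# Fixed-effect (within) estimator; `EntityEffects` absorbs the time-invariant
# unobserved district factors discussed above.
formula = (
    "hspa_math ~ 1 + bureaucracy + pct_native + pct_asian + pct_black"
    " + pct_hispanic + pct_lunch + pct_lep + pct_sped + pupil_teacher"
    " + exp_per_pupil + pct_external + faculty_exp + enrollment"
    " + EntityEffects"
)
mod = PanelOLS.from_formula(formula, data=panel)

# Clustered robust standard errors, clustered on the panel identifier
# (the school district), to guard against heteroscedasticity and
# within-district serial correlation.
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```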
Although the dependent variables used in this research are censored (percentages of students passing the proficient level range between zero and one hundred), I utilize the FE regression model rather than a maximum likelihood model such as Tobit. There are three reasons for this choice. First, it is reasonable to assume that the dependent variables are continuous in nature. Second, the FE model for measuring student performance in a panel data set is very common in education research (e.g., Bettinger 2005; Bifulco and Ladd 2006a, 2006b; Booker et al. 2007; Sass 2006). Finally, the problem of estimation bias from unobserved, time-invariant factors is more crucial than bias from data censoring. For instance, for some unobservable socio-economic-political reasons, students in particular school districts might be poorer performers than the average district. The FE model is more advantageous in this respect.
Additionally, one might argue for the first-differencing (FD) method as a way to remove the unobserved time-invariant effects and minimize serial correlation. However, after estimating all models by the FD technique, the results still show moderate serial correlation in the error terms (ρ ranges between -.0991 and -.1820), which makes the FD model inappropriate. As discussed by Wooldridge (2000), the FD estimator is less efficient when its differenced errors remain autocorrelated.
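Footnotes [2] and [3] refer to Wooldridge's (2000) test for serial correlation in panel errors. A minimal sketch of the idea, under the same illustrative variable names as above: estimate the model in first differences, then regress the FD residuals on their own lag; under the null of serially uncorrelated level errors, the slope should equal -0.5.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Step 1: first-difference within each district and run the FD regression
# (right-hand side abbreviated to one regressor for clarity).
fd = panel[["hspa_math", "bureaucracy"]].groupby(level="district_id").diff()
fd.columns = ["d_y", "d_x"]
fd = fd.dropna()
step1 = smf.ols("d_y ~ d_x - 1", data=fd).fit()  # FD regression, no constant

# Step 2: regress the FD residuals on their within-district lag and test
# whether the slope equals -0.5 (Wooldridge's null of no serial correlation).
e = step1.resid.rename("e")
e_lag = e.groupby(level="district_id").shift(1).rename("e_lag")
both = pd.concat([e, e_lag], axis=1).dropna()
step2 = smf.ols("e ~ e_lag", data=both).fit(
    cov_type="cluster",
    cov_kwds={"groups": both.index.get_level_values("district_id")},
)
print(step2.params["e_lag"])         # rho of the FD residuals
print(step2.t_test("e_lag = -0.5"))  # rejection => serial correlation in levels
```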
Furthermore, the choice between the fixed-effects and random-effects models is worth discussing briefly. First, there is no compelling reason to believe that the unobserved factors are uncorrelated with the other explanatory variables. For instance, some unobserved school district characteristics are plausibly correlated with student demographics and school resources. Thus, the assumption required for the random-effects model--that the unobserved effects be uncorrelated with the regressors--is clearly violated. Second, Hausman tests between the fixed-effects and random-effects models show insufficient evidence that the random-effects estimates are superior to the fixed-effects results for this dataset (see the results in Appendix 2).
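The classic Hausman statistic can be computed by fitting both estimators and comparing them. A minimal sketch with one illustrative regressor follows (conventional, non-robust covariances are required for the textbook form of the test).

```python
import numpy as np
from linearmodels.panel import PanelOLS, RandomEffects

fe = PanelOLS.from_formula(
    "hspa_math ~ 1 + bureaucracy + EntityEffects", data=panel).fit()
re = RandomEffects.from_formula(
    "hspa_math ~ 1 + bureaucracy", data=panel).fit()

# H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE), asymptotically
# chi-squared with df equal to the number of compared coefficients.
common = [c for c in fe.params.index if c in re.params.index]
b_diff = (fe.params[common] - re.params[common]).to_numpy()
v_diff = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
H = float(b_diff @ np.linalg.inv(v_diff) @ b_diff)
print(H)

# When V_FE - V_RE is not positive definite, H can come out negative; this is
# the usual cause of the "n.a." entries reported in Appendix 2.
```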

Multicollinearity diagnostics were also examined for all models (using the pooled cross-sectional OLS specifications). Variance inflation factors (VIFs) were within acceptable ranges, except for the student diversity and socioeconomic status variables (for the percentages of Hispanic, Black, and low-income students, VIFs ranged between 6.4 and 9.7). There is a strong theoretical reason for their inclusion, since they are key factors influencing student performance in the U.S. education context, and none of them were dropped from the analysis.
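VIFs of this kind can be computed with statsmodels; a sketch, again with illustrative column names:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Pooled cross-sectional design matrix (illustrative column names);
# a constant is added so the VIFs are computed against a fitted intercept.
X = panel.reset_index()[["bureaucracy", "pct_hispanic", "pct_black",
                         "pct_lunch", "pupil_teacher", "exp_per_pupil"]].dropna()
X = sm.add_constant(X)

for i, name in enumerate(X.columns):
    if name != "const":
        print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.1f}")
```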

Dependent Variables: Student Performance


New Jersey's assessment system comprises state-required tests designed to measure student progress toward the core curriculum content standards. The tests are the High School Proficiency Assessment (HSPA), the Grade Eight Proficiency Assessment (GEPA), and the New Jersey Assessment of Skills and Knowledge (NJASK) for grades 3 through 7. Test results classify students' learning at three levels: (1) partially proficient, (2) proficient, and (3) advanced proficient, where the proficient level or above is considered a "pass". Therefore, in this study, the percentage of students who demonstrate knowledge at the proficient level or above is used as the key performance measure. To keep this research manageable, it focuses specifically on the performance of students in grades 4, 8, and 11 in mathematics and language arts, since these are common measures of student performance widely employed in empirical studies (e.g., Ferguson and Ladd 1996; Alston et al. 1994; Burnham 1995). This results in six key dependent variables.
It should be noted that this study does not employ SAT (Scholastic Assessment Test) scores as performance measures for high-school students, since such measures are biased in favor of students from higher socioeconomic backgrounds (Bohte 2001; Rothstein 1993). Using them as dependent variables may not truly reveal the impact of school bureaucracy on the performance of students in general.
Note also that this study utilizes only performance measures that are quantifiable and tangible, as is common in empirical educational research. It does not attempt to capture the 'soft' or 'tacit' skills of students (e.g., communication, leadership, teamwork) as suggested by Rothstein (1997), students' ability to find jobs in labor markets (Picus 1995), or other social and nonmarket benefits of schooling (Wolfe and Haveman 2002). In this respect, the subsequent analysis may not truly represent the overall impact of school administration on student performance.

Independent Variables
A. Measurement of Bureaucracy
In general, bureaucracy is often used as a label for public administration or any large-scale formal organization (Olsen 2005). However, bureaucracy is a multidimensional concept whose fundamental attributes include large size, a graded hierarchy, formal rules, the performance of specified tasks, a salaried and technically trained staff, meritocratic appointment, predictable long-term growth, and responsibility for jobs that require expert knowledge (Weber 1968; Downs 1965; Evans and Rauch 1999; Goodsell 2004). Without question, this makes the selection of appropriate measures of school bureaucracy difficult. This study follows the concept of bureaucracy described by Meier, Polinard, and Wrinkle (2000) and Chubb and Moe (1990): bureaucracy is defined by red tape, restrictive rules, and unresponsiveness to task environments.
Previous studies have developed several measures of school bureaucracy. Bohte (2001) and Smith and Larimer (2004) operationalized school bureaucracy as (i) the percentage of central administrators among total full-time district employees, and (ii) the percentage of campus administrators among total full-time district employees. These percentage-of-administrators measures are less valid, however, because they fail to take the task size (student enrollment) into consideration. For instance, suppose a district hires more administrators and overall staff in the same proportion, while total student enrollment does not increase. Common sense suggests that this proportional increase in school administrators and total district staff should result in more red tape and bureaucratic regulation. Unfortunately, the bureaucracy measures employed by Bohte (2001) and Smith and Larimer (2004) do not capture this: the measured magnitude of bureaucracy remains the same as long as the number of school administrators and the number of district staff vary at a similar pace.
Alternatively, Smith (1994) defined bureaucracy as the total number of school officials per student. Compared to the measures discussed thus far, this measure better captures the size of bureaucracy relative to district responsibilities. However, the total number of school officials is too broad to capture the essence of bureaucratic control and unresponsiveness as argued in the public choice literature (e.g., Chubb and Moe 1990). Since not all staff are responsible for formulating district policies and regulations, more staff per student may in fact respond better to student and parental needs, and may therefore indicate a lesser, not greater, degree of bureaucracy.
This study follows the operational definition of bureaucracy used by Meier, Polinard, and Wrinkle (2000): the total number of school district administrators per 100 students. School administrators are full-time (or equivalent) principals and other staff concerned with directing and managing the operations of a particular school [4]. The underlying assumption is that more administrators per student result in more restrictive bureaucratic control and, accordingly, less responsiveness to environmental demands. This measure adequately adjusts for school district size by scaling bureaucracy to student enrollment. Without question, this single definition may not capture all dimensions of school bureaucracy. Notwithstanding, such operationalization is required for the term to have value in empirical research, and the selected measure is believed to have a substantial degree of construct validity.

[4] This definition is provided by the Department of Education, available at http://nces.ed.gov/ccd
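Constructing the measure is straightforward. In the sketch below, admin_fte and staff_fte are hypothetical column names standing in for the CCD staffing counts.

```python
# Meier, Polinard, and Wrinkle (2000) style measure used in this study:
# full-time-equivalent district administrators per 100 students.
panel["bureaucracy"] = 100 * panel["admin_fte"] / panel["enrollment"]

# For contrast, the Bohte (2001) / Smith and Larimer (2004) style measure,
# which normalizes by total district staff rather than by enrollment and
# therefore misses proportional growth in administrators and staff alike.
panel["admin_share"] = 100 * panel["admin_fte"] / panel["staff_fte"]
```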
If the argument made by public choice scholars (e.g., Chubb and Moe 1990) is correct, then we should observe a negative relationship between the bureaucracy indicator and student performance. By contrast, if bureaucracy proponents (e.g., Smith and Meier 1994, 1995) are correct, then a positive relationship between bureaucracy and student performance should be evident.

B. Other Control Variables


This section discusses the control covariates in the regression models. Earlier works suggest that the most important influence on students' performance is their socioeconomic status before they attend school (e.g., Coleman et al. 1966; Payne and Biddle 1999). Experts hypothesize that the more affluent the background from which a student comes, the better he or she will perform in school. These factors must therefore be controlled for in order to obtain unbiased estimates of the impacts of school bureaucracy on student performance. This study includes a proxy for students' family income: the percentage of students eligible for free or reduced-price lunches. It is expected that students from less-well-to-do families will perform more poorly in school.
Furthermore, scholars contend that school bureaucracy develops in response to environmental demands or the need for remedial education, particularly in districts with heterogeneous student populations (Smith and Meier 1994). Empirical studies often show that a greater percentage of nonwhite students in a district is associated with lower learning achievement (Smith and Larimer 2004; Bohte 2001; Payne and Biddle 1999). Six variables are thus included to control for environmental diversity: the percentages of African-American, Hispanic, Asian, and Native-American students in a district; the percentage of limited English proficiency (LEP) students in a district; and the percentage of students enrolled in special education programs in a district.
The literature also suggests that student performance is a function of school resources (Ferguson and Ladd 1996; Picus 1995; Wenglinsky 1997; Hedges, Laine, and Greenwald 1994; Evans, Murray, and Schwab 1997), although there is a great deal of debate as to whether school resources improve student achievement. Evans, Murray, and Schwab (1997) found that districts that increased expenditures subsequently improved student performance. Likewise, Ferguson and Ladd (1996) found that teacher quality and class size did affect student test scores in Alabama public schools. However, Hanushek (1989, 1996) argued that public school inputs exert no significant effect on student achievement. While there is no consensus about the impact of expenditures on student performance, it is important to control for the effects of school inputs, if any. This study uses four measures of school resources: the pupil-teacher ratio, per-pupil expenditures for instruction, the percentage of money each district receives from external sources (state, federal, or other), and the average number of years of faculty experience.
Higher levels of funding normally allow schools to hire more teachers and reduce pupil-teacher ratios. Previous studies show that class size is significantly and negatively related to student performance (Meier, Polinard, and Wrinkle 2000; Ferguson and Ladd 1996; Wenglinsky 1997). Instructional expenditures per pupil are used to focus on classroom education, as argued by Rothstein (1993); they are adjusted by price deflator indices to reflect real purchasing power [5]. Additionally, some districts are worse off in terms of accumulated wealth, so the percentage of external resources is included to reflect the possibility that more external funds relative to total per-pupil revenues could improve student performance. The average number of years of faculty experience is a proxy for teacher quality, which has previously been found to have significant positive relationships with student performance (e.g., Ferguson and Ladd 1996). Finally, diseconomies of scale may hinder school administration and student learning (Odden and Picus 2007; Andrews, Duncombe, and Yinger 2002), so total student enrollment is also included in the analysis.

[5] The year 2000 is set equal to 100. Indices are obtained from the Bureau of Economic Analysis (BEA), the Department of Commerce.
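The school-resource controls can be constructed along the following lines; the column names are illustrative, and `deflator` is assumed to be a BEA price index (2000 = 100) merged in by year, per footnote [5].

```python
# Real instructional expenditure per pupil, in thousands of year-2000 dollars.
panel["exp_per_pupil"] = (
    panel["instr_expenditure"] / panel["enrollment"]
    / (panel["deflator"] / 100) / 1000
)

# Pupil-teacher ratio and share of revenues from external (state/federal) sources.
panel["pupil_teacher"] = panel["enrollment"] / panel["teacher_fte"]
panel["pct_external"] = 100 * panel["external_revenue"] / panel["total_revenue"]
```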

4. Findings
Tables 1 and 2 report the fixed-effect results for the percentages of students passing at the proficient level or above on the State of New Jersey's tests. Descriptive statistics are presented in Appendix 1. Generally, all regression models did a fair job of explaining student performance, except models 3 and 5. The independent variables of interest together explain about 3% to 22% of the variation in the dependent variables.
Student performance as measured by the High School Proficiency Assessment (HSPA) and the Grade Eight Proficiency Assessment (GEPA) (columns 1, 2, and 4 of Table 1) supports the public choice claim that school bureaucracy has a negative, statistically significant impact on student learning after all important factors have been accounted for. Put simply, a higher number of school district administrators relative to student enrollment leads to lower student achievement at grades 8 and 11. Specifically, for every increase of one district administrator per 100 students, the percentage of grade-11 students passing at the proficient level or above drops by about 3.5 and 8.4 percentage points in the literacy and mathematics tests, respectively, ceteris paribus. Similarly, every increase of one district administrator per 100 students is associated with about a 4.4 percentage-point drop in the share of students passing the 8th grade mathematics test, holding other factors constant.
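To put these magnitudes in context, note from the percentiles reported in Appendix 1 that a full one-administrator-per-100-students increase is a very large change for most districts. As an illustrative calculation: moving from the median of about 0.33 administrators per 100 students to the 75th percentile of about 0.43 implies a predicted drop of roughly 0.106 × 8.48 ≈ 0.9 percentage points in the grade-11 mathematics passing rate.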
Notwithstanding, the impact of school bureaucracy on children's learning in the 4th grade is not statistically significant, ceteris paribus, as revealed by the Assessment of Skills and Knowledge (ASK4) passing rates (see columns (5) and (6) of Table 2). These findings suggest that the impact of school bureaucracy on student performance varies significantly across grade levels, as Bohte (2001) suggested. The results also support the finding of Smith and Larimer (2004) that bureaucracy's relationship with student performance depends on how performance is measured.

Table 1: Bureaucracy & Student Performance at the 11th and 8th Grades, Fixed-Effect Estimations

                                          HSPA          HSPA           GEPA          GEPA
Dependent variable =                      (Literacy)    (Mathematics)  (Literacy)    (Mathematics)
Independent variables                     (1)           (2)            (3)           (4)

School Bureaucracy                        -3.5094 ***   -8.4845 ***    -0.5077       -4.3601 **
  (Administrators per 100 students)       (1.1077)      (2.0584)       (1.5661)      (1.7984)

Demographic variables
Percent Native                             1.9155 **     3.1468 ***    -0.1561       -1.8224 **
                                          (0.8010)      (1.1962)       (0.5956)      (0.7499)
Percent Asian                              0.1097        0.4698 **      0.0633        0.3605 **
                                          (0.1035)      (0.2032)       (0.1111)      (0.1666)
Percent Black                              0.0292        0.0931        -0.0626 *      0.0069
                                          (0.0381)      (0.0903)       (0.0345)      (0.0677)
Percent Hispanic                           0.0899 **     0.2899 ***    -0.0838        0.1541 *
                                          (0.0445)      (0.0857)       (0.0752)      (0.0934)
Percent students with free or              0.0715        0.0682         0.0643        0.1000
  reduced-price lunch                     (0.0510)      (0.0732)       (0.0582)      (0.0851)
Percent students with limited              0.0152       -0.4448 ***     0.0236        0.0418
  English proficiency (LEP)               (0.0650)      (0.0834)       (0.0453)      (0.0539)
Percent students in special                0.0086        0.0427 ***     0.0067        0.0178 **
  education programs                      (0.0101)      (0.0158)       (0.0069)      (0.0083)

School resources variables
Pupil/teacher ratio                       -0.0021       -0.0070        -0.0150       -0.0181
                                          (0.0126)      (0.0292)       (0.0152)      (0.0274)
Expenditure per pupil ($000)              -0.5422 ***   -0.3058        -1.0837 ***   -0.3170
                                          (0.1846)      (0.2625)       (0.2784)      (0.3388)
Percent funding from external             -0.0152       -0.0255         0.0300        0.0069
  sources                                 (0.0134)      (0.0221)       (0.0190)      (0.0309)
Years of faculty experience               -0.2868 ***   -0.7955 ***     0.1256       -0.5609 ***
                                          (0.0651)      (0.1186)       (0.0861)      (0.1158)
Total student enrollment                  -0.0001       -0.0009 **      0.0002       -0.0008 *
                                          (0.0002)      (0.0006)       (0.0002)      (0.0004)
Constant                                  87.8874 ***   80.3082 ***    81.4431 ***   70.3450 ***
                                          (1.8366)      (3.3555)       (2.6023)      (3.3713)

Total observations                               1,405                        2,338
Total school districts                             290                          485
R-squared (within)                         0.1061        0.2344         0.0297        0.0647
Rho                                        0.9419        0.9365         0.8847        0.8866
F-statistic                                6.74          22.39          2.92          9.46
Probability > F-statistic                  0.0000        0.0000         0.0004        0.0000

Standard errors are given in parentheses. All are clustered robust. *, **, *** indicate significance at the .10, .05, and .01 levels, respectively.

Table 2: Bureaucracy & Student Performance at the 4th Grade, Fixed-Effect Estimations

                                          ASK4          ASK4
Dependent variable =                      (Literacy)    (Mathematics)
Independent variables                     (5)           (6)

School Bureaucracy                        -0.4963       -3.9270
  (Administrators per 100 students)       (1.7755)      (2.8019)

Demographic variables
Percent Native                             0.0622       -0.0003
                                          (0.7841)      (0.9706)
Percent Asian                              0.1222        1.0210 ***
                                          (0.1108)      (0.2388)
Percent Black                             -0.0987 **    -0.0821
                                          (0.0496)      (0.1051)
Percent Hispanic                          -0.1203 **     0.2914 ***
                                          (0.0604)      (0.1106)
Percent students with free or              0.2020 ***    0.1463
  reduced-price lunch                     (0.0709)      (0.1018)
Percent students with limited              0.0135        0.2247 ***
  English proficiency (LEP)               (0.0672)      (0.0830)
Percent students in special               -0.0045       -0.0152
  education programs                      (0.0091)      (0.0107)

School resources variables
Pupil/teacher ratio                        0.3659 *     -2.4393 ***
                                          (0.2162)      (0.3440)
Expenditure per pupil ($000)               1.0911 ***   -2.4900 ***
                                          (0.3001)      (0.3917)
Percent funding from external              0.0291        0.0169
  sources                                 (0.0199)      (0.0392)
Years of faculty experience                0.1163       -0.3591 **
                                          (0.1127)      (0.1693)
Total student enrollment                  -0.0004       -0.0020 **
                                          (0.0003)      (0.0008)
Constant                                  67.2457 ***  122.0921 ***
                                          (4.0253)      (6.2330)

Total observations                                   2,054
Total school districts                                 524
R-squared (within)                         0.0307        0.1601
Rho                                        0.8194        0.8119
F-statistic                                2.46          16.81
Probability > F-statistic                  0.0030        0.0000

Standard errors are given in parentheses. All are clustered robust. *, **, *** indicate significance at the .10, .05, and .01 levels, respectively.

As claimed by Smith and Meier (1994) and Bohte (2001), educational bureaucracies exist to deal with students' immediate problems and with the need for remedial education. Based on this logic, students at different ages may develop different sets of problems that require different actions from school administrators. Therefore, school bureaucracy may be more essential to one group of students than to another. Our results support this interpretation. Bureaucracy shows no significant harm to student learning at the fourth grade level, as reflected in the ASK4 tests. This implies that students in their early school years may have difficulties in learning that school administration is, to some extent, essential in addressing. In other words, our findings indicate that 4th grade students are neutral to bureaucratic remedies. Things are different for 8th and 11th graders, who tend to be more adversely affected by bureaucratic control than their counterparts at the lower grade level. The implications of these findings are discussed in the next section.
Other findings deserve brief discussion. From the results in Tables 1 and 2, it is not clear whether the level of school resources matters in enhancing student learning. In general, every one-thousand-dollar increase in education expenditures is associated with a reduction in the percentage of students passing the exams of about 0.5 to 2.4 percentage points for HSPA (literacy), GEPA (literacy), and ASK4 (mathematics), other things remaining constant. Additionally, there is only marginal evidence that smaller pupil-teacher ratios promote students' test achievement, as revealed by ASK4 (mathematics), at least in the case of the New Jersey school districts.
Likewise, there is no evidence that more experienced teachers perform better in their jobs. Indeed, somewhat embarrassingly, the results show that teachers perform less effectively the longer they stay in their posts. Based on this finding, it is highly recommended that plans for professional development, as previously suggested by some education experts (e.g., Odden and Picus 2007; Odden 2007; Odden, Borman, and Fermanich 2004), be institutionalized in order to further enhance educational achievement.
The significance levels of the demographic variables vary considerably and in inconsistent ways. At the high school level, Native American and Hispanic students generally perform slightly better than average. However, at the 8th and 4th grade levels, African-American students generally perform lower than their peers, holding other factors constant.
The conclusion of this research is that the relationship between school bureaucracy and student performance is mixed, supporting the recent findings of Smith and Larimer (2004). At the 8th and 11th grade levels, school bureaucracy is associated with poorer student performance most of the time. The story differs for 4th grade students, where children might actually need remedial education; in this respect, school bureaucracy appears at least not harmful, and perhaps helpful, to them. In sum, school bureaucracy may be either positive or negative depending on the task environments within which it operates.

5. Discussion
This research examines the relationship between school bureaucracy and student performance using new evidence from New Jersey school districts. Regression models were carefully constructed and tested to explain student performance, with school bureaucracy as the key independent variable of interest alongside control variables suggested in the education literature. In general, the results support both Chubb and Moe's (1988, 1990) and Smith and Meier's (1994, 1995) views: school bureaucracy has mixed impacts on student performance, depending on the task environments. At the 8th and 11th grade levels, school bureaucracy is associated with poorer student performance. For 4th grade students, on the other hand, school bureaucracy causes no significant harm to student learning; for these younger children, the findings suggest it is at best beneficial and at worst not harmful. In this respect, school bureaucracy may play an indispensable role in child schooling.
Smith and Larimer (2004) were right to point this out earlier. School bureaucracy is not all evil, but it is not all good either. We have to be very specific when assessing school bureaucracy and the responsibilities assigned to it. Slashing bureaucracy in public schools would likely bring about declines in school performance, since teachers would have to assume administrative duties originally assigned to school administrators. Notwithstanding, putting too much faith in administrators and strengthening school bureaucracy to buffer all task environments is not always a good policy option either. Bureaucratic organizations still possess monopoly power and restrictive regulations that are, to some extent, undesirable. In short, care should be taken when reforming school institutions and their control.
This research suggests convincingly that, at the elementary level, school administration at the district level still matters in a public school system. Reform schemes that aim to deregulate bureaucratic control and monitoring at the lower grade levels might not be appropriate. On the other hand, at the middle and high school levels, public school reform policies that promote school autonomy or school-based administration might lead to effective, desirable outcomes.
Readers should keep in mind that most of the arguments about school bureaucracy made in this research focus mainly on the hierarchical control perspective, that is, the size of school district administration relative to the task size (student enrollment). This research falls short of operationalizing other facets of school bureaucracy, particularly those concerning bureaucratic red tape and responsiveness to external environments. Moreover, the notion of school bureaucracy examined thus far is measured specifically at the district level, which may differ from what goes on at the lower, operating level. Future research, whether qualitative or quantitative, should focus on the consequences of these unexplored aspects of bureaucracy.
Equally important, detailed qualitative analysis of school bureaucracy at the district level is still needed to figure out how different bureaucratic configurations affect student performance differently. This would help reveal what actually goes on inside the black box of school bureaucracy. An understanding of the internal causal relationships between school district administration and student performance would make us more confident about the present research findings. Only then can we be more certain in articulating proposals for school reform.
It should also be noted that, although this study utilizes data from a single state and may not provide definitive answers to the current controversies over bureaucracy and school performance, it contributes to the issue in significant ways. It employs a different dataset, from the State of New Jersey, a state highly regarded for providing quality public education in the U.S. Examining school bureaucracy through the experience of New Jersey's school districts may shed light on our understanding of the role of bureaucracy in student performance, and may offer some guidance on how to pursue public education reform policies with respect to school bureaucracy. Though the New Jersey public school system is not typical, the findings here might be generally applicable to public school systems elsewhere.
Finally, this research contributes to the public administration literature by showing that school bureaucracy, as part of the public sector, exhibits both positive and negative associations with student performance. The one-sided perspective of attacking school bureaucracy, as taken by Chubb and Moe (1988, 1990), is incomplete. More often, bureaucracy is a popular scapegoat for perceived poor student performance and the ineffectiveness of schools (Goodsell 2004). Though this skepticism is partly justified in our rapidly changing society, it is not the case that all school administration is bad. As Goodsell (2004) and Olsen (2005) reason, school bureaucracy still exhibits some positive elements. It helps free street-level bureaucrats from dealing with complex external environments (e.g., administrative responsibility for remedial education) so that they can spend time improving outputs (teaching). Though bureaucracy sometimes produces negative and undesirable consequences, we can solve bureaucratic problems in a more specific, intelligible way. An across-the-board solution of eliminating all school bureaucratic institutions and replacing them with market-like school operations may not always bring about the positive outcomes that some advocates have argued for. At the very least, the present findings provide evidence to counterbalance those claims. What we need is to define the appropriate scope and role of school administration within specific circumstances so as to enhance public education, not to get rid of it. Either explicitly or implicitly, we still benefit from the existence of public schools.

6. References
Alston, Ethel, and others. 1994. The Kentucky Education Reform Act. Frankfort, KY: Legislative
Research Commission.
Andrews, Matthew, William Duncombe, and John Yinger. 2002. "Revisiting Economies of Size in
American Education: Are We Any Closer to a Consensus?" Economics of Education Review. 21
(3): pp.245-262.
Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan. 2004. “How Much Should We Trust
Differences-in-Differences Estimates?” The Quarterly Journal of Economics 119 (1): pp.249-275.
Bettinger, Eric P. 2005. “The Effect of Charter Schools on Charter Students and Public Schools,”
Economics of Education Review. 24: pp.133-147.
Bifulco, Robert, and Helen F. Ladd. 2006a. “The Impacts of Charter Schools on Student Achievement:
Evidence from North Carolina,” Education Finance and Policy. 1 (1): pp.50-90.
Bifulco, Robert, and Helen F. Ladd. 2006b. “School Choice, Racial Segregation, and Test-Score Gaps:
Evidence From North Carolina‟s Charter School Program,” Journal of Policy Analysis and
Management. 26 (1): pp.31-56.
Bohte, John. 2001. “School Bureaucracy and Student Performance at the Local Level,” Public
Administration Review. 61 (1): pp.92-99.
Booker, Kevin, Scott M. Gilpatric, Timothy Gronberg, and Dennis Jansen. 2007. “The Impact of Charter
School Attendance on Student Performance,” Journal of Public Economics. 91: pp.849-876.
Burnham, Tom. 1995. Mississippi Accreditation Requirements of the State Board of Education. Bulletin
171. 12th edition. Jackson, Mississippi.
Chubb, John, and Terry Moe. 1988. “Politics, Markets and the Organization of Schools,” American
Political Science Review. 82 (4): pp.1065-1087.
Chubb, John, and Terry Moe. 1990. Politics, Markets and America’s Schools. Washington, D.C.: The
Brookings Institution Press.
Coleman, James S., Ernest Q. Campbell, Carol J. Hobson, James McPartland, Alexander M. Mood,
Frederic D. Weinfeld, and Robert L. York. 1966. Equality of Educational Opportunity.
Washington, D.C.: US Government Printing Office.
Downs, Anthony. 1965. “A Theory of Bureaucracy,” American Economic Review. 55 (1/2): pp.439-446.
Evans, Peter, and James E. Rauch. 1999. “Bureaucracy and Growth: A Cross-National Analysis of the
Effects of 'Weberian' State Structures on Economic Growth," American Sociological Review. 64
(5): pp.748-764.
Evans, William N., Shelia E. Murray, and Robert M. Schwab. 1997. “Schoolhouses, Courthouses, and
Statehouses after Serrano,” Journal of Policy Analysis and Management. 16 (1): pp.10-31.
Friedman, Milton. 1962. Capitalism and Freedom. University of Chicago Press.
Ferguson, Ronald F., and Helen F. Ladd. 1996. “How and Why Money Matters: An Analysis of Alabama
Schools,” in Helen F. Ladd (ed). Holding Schools Accountable. Washington, D.C.: The Brookings
Institution. pp.265-298.
Goodsell, Charles T. 2004. The Case for Bureaucracy: A Public Administration Polemic. Washington, D.C.: CQ Press.
Hanushek, Eric A. 1989. “The Impact of Differential Expenditures on School Performance,” Educational
Researcher. 18 (May): pp.45-51.
Hanushek, Eric A. 1996. “School Resources and Student Performance,” in Gary Burtless (ed). Does
Money Matter? The Link between School, Student Achievement, and Adult Success. Washington,
D.C.: The Brookings Institution Press: pp.43-73.
Hedges, Larry V., Richard D. Laine, and Rob Greenwald. 1994. “Does Money Matter? A Meta-Analysis
of Studies of the Effects of Differential School Inputs on Student Outcomes,” Educational
Researcher. 23 (3): pp.5-14.
Hirschman, Albert O. 1970. Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and
States. Cambridge, M.A.: Harvard University Press.
King, Gary. 1995. “Replication, Replication,” PS: Political Science and Politics. 28 (3): pp.444-454.

King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific
Inference in Qualitative Research. Princeton, N.J.: Princeton University Press.
Meier, Kenneth J., J. L. Polinard, and Robert D. Wrinkle. 2000. "Bureaucracy and Organizational
Performance: Causality Arguments about Public Schools,” American Journal of Political Science.
44 (3): pp.590-602.
Odden, Allan. R. 2007. “Redesigning School Finance Systems: Lessons from CPRE Research,” CPRE
Policy Brief RB-50 (February). (available at
http://www.cpre.org/images/stories/cpre_pdfs/rb50.pdf)
Odden, Allan R., and Lawrence O. Picus. 2007. School Finance: A Policy Perspective. New York, N.Y.:
McGraw-Hill.
Odden, Allan R., Geoffrey Borman, and Mark Fermanich. 2004. “Assessing Teacher, Classroom, and
School Effects, Including Fiscal Effects,” Peabody Journal of Education. 79 (4): pp.4-32.
Olsen, Johan P. 2005. “Maybe It Is Time to Rediscover Bureaucracy,” Journal of Public Administration
Research and Theory. 16 (1): pp.1-24.
Payne, Kevin J., and Bruce J. Biddle. 1999. “Poor School Funding, Child Poverty, and Mathematics
Achievement,” Educational Researcher. 28 (6): pp.4-13.
Petersen, Mitchell A. 2007. Estimating Standard Errors in Finance Panel Data Sets: Comparing
Approaches. Northwestern University Kellogg School of Management, Finance Department
Working Paper No.329 (April).
Picus, Lawrence. 1995. “Does Money Matter in Education? A Policymaker‟s Guide,” in William Fowler,
Jr. (ed) Selected Papers in School Finance 1995. Washington, D.C.: National Center for Education.
pp.15-33.
Rothstein, Richard. 1997. Where's the Money Going?: Changes in the Level and Composition of
Education Spending, 1991-96. Washington, D.C.: Economic Policy Institute.
Rothstein, Richard. 1993. “The Myth of Public School Failure,” The American Prospect. 13 (Spring):
pp.20-34.
Sass, Tim R. 2006. “Charter Schools and Student Achievement in Florida,” Education Finance and
Policy. 1 (1): pp.91-122.
Smith, Kevin B. 1994. “Policy, Markets, and Bureaucracy: Reexamining School Choice,” Journal of
Politics. 56 (2): pp.475-491.
Smith, Kevin B., and Christopher W. Larimer. 2004. “A Mixed Relationship: Bureaucracy and School
Performance,” Public Administration Review. 64 (6): pp.728-736.
Smith, Kevin B., and Kenneth J. Meier. 1994. “Politics, Bureaucrats and Schools,” Public Administration
Review. 54 (5): pp.551-558.
Smith, Kevin B., and Kenneth J. Meier. 1995. “Public Choice in Education: Markets and the Demand for
Quality Education,” Political Research Quarterly. 48 (3): pp.461-478.
Weber, Max. [1922] 1968. Economy and Society. Edited by G. Roth and C. Wittich. Berkeley: University
of California Press.
Wenglinsky, Harold. 1997. “How Money Matters: The Effect of School District Spending on Academic
Achievement,” Sociology of Education. 70 (3): pp.221-237.
Witte, John F. 1996. “School Choice and Student Performance,” in Helen F. Ladd (ed). Holding Schools
Accountable. Washington, D.C.: The Brookings Institution. pp.149-176.
Witte, John F. 1992. “Private School Versus Public School Achievement: Are There Findings That
Should Affect the Educational Choice Debate?” Economics of Education Review. 11 (December):
pp.371-394.
Wolfe, Barbara L., and Robert H. Haveman. 2002. “Social and Nonmarket Benefits from Education in an
Advanced Economy,” in Yolanda K. Kodrzycki (ed). Education in the 21st Century: Meeting the
Challenges of a Changing World. Federal Reserve Bank of Boston, 47th Economic Conference, pp.
97-131.
Wooldridge, Jeffrey M. 2000. Introductory Econometrics: A Modern Approach. South-Western College
Publishing.

Appendix 1: Descriptive Statistics of Key Variables

Key Variables                                                        N        Min      Max       Mean      S.D.

Student Performance Variables
Pct. of grade-11 students at proficient or higher, HSPA literacy     1,438      6.3    100.0       83.3     12.4
Pct. of grade-11 students at proficient or higher, HSPA math         1,438      6.3    100.0       72.2     17.6
Pct. of grade-8 students at proficient or higher, GEPA literacy      2,389      5.6    100.0       76.7     16.8
Pct. of grade-8 students at proficient or higher, GEPA math          2,388      0.0    100.0       64.8     19.5
Pct. of grade-4 students at proficient or higher, ASK4 literacy      2,086     15.8    100.0       82.2     13.0
Pct. of grade-4 students at proficient or higher, ASK4 math          2,086      9.1    100.0       77.6     15.7

School Bureaucracy Indicator
Number of district administrators per 100 students a/                3,138      0.0    250.0        0.6      4.8

Covariates: Student SES, School Resources
Pct. of Native-American students to total district enrollment        3,202      0.0      6.8        0.2      0.4
Pct. of Asian students to total district enrollment                  3,202      0.0     48.4        5.2      6.9
Pct. of African-American students to total district enrollment       3,202      0.0    100.0       14.8     23.1
Pct. of Hispanic students to total district enrollment               3,202      0.0     95.5       11.1     15.0
Pct. of students eligible for free or reduced-price lunch            3,226      0.0     98.9       21.9     23.3
Pct. of students with limited English proficiency (LEP)              3,226      0.0    214.3        3.7     10.6
Pct. of students enrolled in special education programs              3,226      0.0  1,653.1       25.7     72.7
Pupil per teacher ratio                                              3,186      0.0    300.0       13.5     13.3
Instruction expenditures per student (thousand $)                    3,423      0.0     25.3        6.0      2.7
Pct. of external money to total revenues per pupil                   3,091      0.0    100.0       39.1     24.3
Average years of faculty experience in a district                    3,136      0.0     28.0       10.7      4.2
Total student enrollment in a district                               3,298      0.0 35,232.0    1,805.4  2,852.0

Note: a/ The school bureaucracy indicator has extreme values that might mislead as to the magnitude of bureaucratic control. Percentiles are therefore informative: P50 = .327, P75 = .433, P90 = .584, P95 = .800, and P99 = 4.762.

Appendix 2: Results of the Hausman Test between the Fixed-Effect and the Random-Effect Estimations

Model (dependent variable)        Chi-square    p-value

1. HSPA, literacy                    183.89      .0000
2. HSPA, mathematics                 291.26      .0000
3. GEPA, literacy                  -1316.78      n.a.
4. GEPA, mathematics                 -26.54      n.a.
5. ASK4, literacy                   -559.40      n.a.
6. ASK4, mathematics                 -45.70      n.a.

Notes: For the specifications of each model, see Tables 1 and 2 in the text.
"n.a." means the Hausman test for that model fails to meet its asymptotic assumptions.
