The Effect of School Improvement Planning on Student Achievement: A Goal Theory Perspective
David Huber
A Dissertation
Submitted in Partial Fulfillment of the Requirements
for the Degree of Doctor of Education
Department of Educational Leadership
February 7, 2014
Dissertation Advisor
Dissertation Committee
This quantitative study was designed to evaluate the effect of school improvement
planning as it relates to goal theory (Locke & Latham, 2002, 2006). This study tested the
hypothesis that schools creating quality school improvement plans (SIPs) consistent with
goal theory principles will have higher levels of student achievement. Achievement was
measured by a school’s School Performance Index score from the 2012-13 administration
of the Connecticut Mastery Test. The first research question asked whether components
of a plan consistent with goal theory (e.g., specific goals) are associated with increased
achievement in schools within Connecticut's newly formed
Alliance Districts. The second research question evaluated whether SIPs predicted
achievement of particular subgroups. Data collected for this study included individual
school improvement plans (N = 108), each school’s achievement score, and school-level
demographic data.
The regression analysis for the main hypothesis showed a relationship between
2012-13 student achievement and a combination of the predictor variables with the
proportion of shared variance being R² = .861. The SIP score had a marginally
significant regression coefficient, and none of the goal theory variables had significant regression
coefficients even though three of the goal theory variables (directing attention, r = .270, p
< .01, strategies, r = .226, p < .05, and feedback, r = .256, p < .01) had significant
positive correlations with student achievement. The main analysis was then replicated
using subgroup achievement as the outcome variable for the following three groups:
Hispanic or Latino (n = 96), Free/Reduced Lunch Eligible (n = 108), and High Needs
(n = 108). None of the subgroups had significant regression coefficients even though all
three subgroups had small, yet significant correlations with SIP total (Hispanic, r = .299,
p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r = .288, p < .01).
Recommendations from this study include schools developing high-quality SIPs which
are consistent with goal theory principles, and focusing SIP efforts to directly address
student achievement.
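The analyses summarized above can be illustrated with a brief sketch. The data below are synthetic and the variable names are illustrative assumptions; this is not the study's dataset or its exact model, only the general form of an ordinary least squares regression with an R² and zero-order correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data for N = 108 schools; variable names are
# illustrative assumptions, NOT the study's actual CSDE dataset.
n = 108
directing_attention = rng.normal(size=n)
strategies = rng.normal(size=n)
feedback = rng.normal(size=n)
prior_spi = rng.normal(size=n)
# Achievement driven mostly by prior achievement, plus small SIP-component
# effects and noise (an assumed data-generating process for illustration).
spi = (0.9 * prior_spi + 0.1 * directing_attention + 0.05 * feedback
       + rng.normal(scale=0.3, size=n))

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), prior_spi, directing_attention,
                     strategies, feedback])
beta, *_ = np.linalg.lstsq(X, spi, rcond=None)

# Proportion of shared variance (R squared)
pred = X @ beta
ss_res = np.sum((spi - pred) ** 2)
ss_tot = np.sum((spi - spi.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Zero-order correlation of one predictor with the outcome
r_feedback = np.corrcoef(feedback, spi)[0, 1]
print(round(r_squared, 3), round(r_feedback, 3))
```

A predictor can show a significant zero-order correlation (as the goal theory variables did) while contributing no significant unique variance once a strong covariate such as prior achievement is in the model.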
Table of Contents
List of Tables v
List of Figures vi
Acknowledgements vii
CHAPTER 1: INTRODUCTION
CHAPTER 3: METHODOLOGY
Purpose 34
Research Design 35
Population and Sample 35
Data Sources and Instrumentation 38
SIP quality based on PIM Scoring Rubric 38
Student Achievement: SPI school score, 2011-12 and 2012-13 43
CHAPTER 4: RESULTS
Descriptive Statistics 56
Test of Hypothesis – SIP Quality and Student Achievement 60
Research Question – Goal Theory 62
Research Question – Subgroup Analysis 66
Summary 69
CHAPTER 5: DISCUSSION
Summary of Findings 71
Main Analysis 71
Goal Theory Analysis 71
Subgroup Analysis 72
Consistency of Results with Previous Research 72
Theoretical Implications 74
Practical Implications 74
Limitations and Suggestions for Future Research 80
Summary 82
References 84
List of Tables
List of Figures
ACKNOWLEDGEMENTS
I would like to thank the faculty at
Central Connecticut State University, and all the colleagues I have had the privilege to
work with along the way. I am greatly appreciative of the support of the Bristol Board of Education through this multi-
year process. I hope my new insight on school improvement will positively impact the
students of our city. Thank you to my friends and family who have been understanding
and encouraging while I completed this important step in my professional career. I would
also like to thank everyone, including former students, parents, and colleagues, who helped me
understand the importance of school improvement. The results of this study go far
beyond any statistical association and will hopefully, in some small way, impact many
students for years to come. In order for me to have finished this study, I am greatly
appreciative of Dr. Jim Conway, my advisor, who laid the foundation for me to finish this
work. Without his help and support, this study could not have been completed.
Lastly, I would like to thank my parents for their support, understanding, and
encouragement. I can never repay you for all you have done. Thank you.
CHAPTER 1
INTRODUCTION
This study examines the effectiveness of school improvement
planning. Specifically, this study will evaluate school improvement plans (SIPs) to
identify what, if any, components of a well-written plan are associated with higher levels
of student achievement in schools within Connecticut's newly
formed Alliance Districts. For this study, student achievement will be measured by
changes in the School Performance Index (SPI), a recently employed metric by the state (Connecticut
State Department of Education [CSDE], 2012b). This study attempts to evaluate the
SIPs of schools in Connecticut that, depending on their school classification, are required to submit annual
plans. SIPs are required by all states as a form of school accountability and improvement under the U.S.
Department of Education's Elementary and Secondary Education Act (ESEA) (1965) for schools
not performing at expected levels of achievement. Knowing the value of creating such
plans is therefore essential.
Since SIPs are now required and associated with the rating and categorization of
Alliance District schools (CSDE, 2012a), it is crucial to examine whether the plans do in
fact lead to improved outcomes. It is possible that, rather than being helpful, required
plans may become an unnecessary burden, something passed down by those above, and
only filed and stored until the next school year rather than carried out (Heroux, 1981).
The present study hopes to identify particular characteristics of SIPs for schools located
in Connecticut's Alliance Districts that are associated with student achievement. If such
plans are identified as being beneficial to student achievement for a whole school or for
specific groups of students, these practices can be replicated across all Alliance schools
for focused and systemic improvement. Through this focused school improvement,
schools would then be able to meet legislative mandates while moving “Connecticut
closer to the goal of achieving better results for all students and ambitious levels of
growth for the state’s lowest performing students” (CSDE, 2012a, p. 27). By meeting the
state’s standards, schools would then be free from labels such as In Need of Improvement.
State and national context. SIPs, as defined by the State of Connecticut, are
considered a roadmap or vehicle to drive educational change. They are written documents
intended to provide carefully thought out and detailed strategies, which are continually
monitored for optimal student achievement (CSDE Bureau of School and District
Improvement, 2007). The Elementary and Secondary Education Act (ESEA)
is a federal statute passed in 1965 representing the most far-reaching federal legislation
affecting education (2010). ESEA has evolved over time to include the No Child Left Behind Act (NCLB) of
2001, which requires schools not achieving at expected levels to submit formal SIPs.
as such and were required to operate under SIPs through June 30, 2004. These plans
served as a way of documenting compliance with the federal requirements for improving schools. It was
part of a consequential accountability system that
rewards and punishes schools for student achievement scores (Hanushek & Raymond,
2004). SIPs were seen as one tool to improve educational quality for all students while
working to reduce current gaps in education. Additionally, the development of SIPs was
intended “to ensure that all requirements of Section 1116 of the NCLB Act are met”
(CSDE Bureau of School and District Improvement, 2007, p. 63) and that all students
receive a high-quality education.
Under No Child Left Behind, schools not making Adequate Yearly Progress
(AYP) would be identified as in need of
improvement and be required to submit a SIP to outline goals and strategies aimed at
improving student achievement. These two year plans for improvement must “embody a
design that is comprehensive, specific and focused primarily on the school’s instructional
program” (CSDE Bureau of School and District Improvement, 2007, p. 8). In alignment
with NCLB, a plan’s design must include “strategies based on scientifically based
research that will strengthen the core academic subject, ongoing professional
development, and measurable goals and intended outcomes that target the school’s greatest areas of
need.” Schools are thus
required to plan continually for improving the quality of instruction. The focus of this
planning is to increase the achievement for all students while reducing current gaps in
achievement between all subgroups. Connecticut is among the nation’s most affluent
states and has achievement gaps between socioeconomic subgroups that are larger than
those of any other state in the nation (CSDE, 2012a). In 2012, CSDE crafted a proposal which included
the identification of the 30 most persistently low performing school districts as members
of Alliance Districts, with targeted funding to improve student
achievement. Membership also set forth requirements such as submission of SIPs for
member schools.
Connecticut has determined that the SIP will serve as the roadmap to increased
student achievement for schools in Alliance Districts, while being the vehicle to drive and
monitor change during a school year (CSDE Bureau of School and District Improvement,
2007). With the passing of this legislation in May 2012, more than $90 million in new
funding was allocated to support several new initiatives, including increasing early
childhood education opportunities, supporting low-performing
schools and districts, expanding the availability of high-quality school models, and improving
teaching and learning
(CSDE, 2012a). The money, awarded conditionally, requires that it be used only for
actions specified on the district improvement plan, which is the overarching framework
for school-level SIPs. This study is relevant
to Connecticut in that SIPs are required by legislation for Alliance District schools and it
is essential that SIPs are effectively written. When effective SIPs are carried out, the
result should be increased student performance, and with that, an increased student
achievement score. The outcome measure for this study,
explained in more detail in the methods section, is the increase in an elementary school’s
School Performance Index (SPI), which is a single number to rate an entire school’s
achievement.
Figure 1. Assumed causal chain: legislation requiring SIPs leads to the creation of effective SIPs, which leads to increased student performance and an increased school achievement score.
Schools, under
NCLB, must submit SIPs, yet very little empirical data exist on the effectiveness of this
planning for improving
schools and student learning; however, it remains unclear if improved learning actually
occurs. Federal legislation, which establishes an
important role for SIPs, is designed to support schools most in need and provide
corrective action for schools that do not improve the learning experience for all students.
SIPs are a primary vehicle used to document and plan the improvement process. If they are to serve this role, it is essential to
evaluate the effectiveness of this planning as a means for improving student achievement.
This study examines school improvement planning through the lens of goal
theory. Goal theory was used to derive hypotheses about characteristics of SIPs that are
most likely to be associated with increased student achievement.
Theoretical Framework
This study examines school improvement planning through the lens of goal theory
(Locke, 1968; Locke & Latham, 2002). As defined by Locke and Latham (2002), a goal
is the object or aim of an action to be obtained within a specific amount of time. Through
four decades of research, specific attributes of goals have been determined to be most
effective in achieving one’s goal. Goal theory has been selected for the current study
because it was formulated inductively within industrial/organizational
psychology over a 25-year period.
’open’ theory in that new elements are added as new discoveries are made” (pp. 265-
266). Their theory states that challenging and specific goals lead to higher performance
than less challenging or more abstract goals such as doing one’s best. Locke and Latham
(2006) described mechanisms, through which goals lead to performance, and moderators,
which are characteristics that change the strength of the relationship between two
variables. The mechanisms include directing attention and effort, energizing performance, increasing persistence, and impacting
actions leading to discovery and use of current knowledge and strategies. Moderators
applied to this theory include goal commitment, feedback, and task complexity. Goal
theory states that higher commitment and increased feedback strengthen the goal-performance
relationship.
Goal theory can be an excellent framework for understanding if or why SIPs work
in school settings. SIPs are linked to goal theory because the process requires setting
goals, and goal theory says that goals should be challenging and specific. Additionally,
when creating SIPs focusing on both the mechanisms and moderators of the theory, the
relationship between planning and achievement should be maximized. This study aims to
measure specific mechanisms and moderators found within SIPs in an effort to determine
if they are associated with increased student performance. This theory was used because
the process of completing a school improvement plan requires the setting of goals for a
school’s success.
Goal theory has substantial
empirical support at the individual level, and this study extends its principles to the
school level. Goal theory may help explain the mixed results of SIP effectiveness.
Following the assumptions of goal theory (Locke, 1968), planning for student success
should lead to increases in student achievement; however, very little empirical data exist
to support this claim (Fernandez, 2009). Mintrop, MacLellan, and Quintero (2001) found
that plans often reflected “espoused views” of teachers, school staff, and the community,
as opposed to the actions and interventions typically observed taking place within a
school. White (2009) found the process of creating SIPs to be more associated with
compliance and stakeholder participation than with the leadership necessary to improve
student learning. It has therefore been noted that SIPs may often take a non-optimal
focus; according to the theoretical framework for this study, SIPs should be framed upon
goal theory principles. These mixed findings call
into question the current use and effectiveness of improvement planning. As Figure 1
indicates, legislation requiring SIPs assumes plans will be well written, carefully
implemented, and result in increases in student learning while reducing current gaps in
achievement. If goal theory can be applied to the process of creating SIPs, then schools
should see increased learning resulting in an increase in their SPI. These two outcomes
would then allow schools to meet all legislative requirements and be labeled as an
effective school.
This study is framed by one hypothesis derived from goal theory and a complete
review of the literature. The hypothesis is that schools creating quality SIPs consistent
with goal theory principles will have students who achieve at higher levels. For the
purpose of this study, achievement will be measured by the change in a school’s SPI
score (see “Definition of Terms” section) from the 2011-12 administration of the
Connecticut Mastery Test to the 2012-13 administration. This study has two research
questions. The first question is whether components of a plan consistent with goal theory
(e.g., specific goals) are associated with increased achievement. The second question is whether SIPs predict the achievement of particular subgroups.
Definition of Terms
For the purpose of this study, the following terms have been defined.
Alliance District: This designation was created as part of
the 2012 ESEA flexibility waiver. Districts identified as Alliance members are those with
the lowest achievement as measured by the newly created district performance index
scores (CSDE, 2012b). The intent of the Alliance District designation was to allocate
$39.5 million of increased education cost sharing (ECS) funding for the 2012-13
school year to districts most in need and to continue that funding for a minimum of five
years. With this funding, it is
assumed that student achievement will improve and educational achievement gaps will be
reduced.
CMT: This is the acronym for the Connecticut Mastery Test, Connecticut’s
statewide assessment of math, reading, science, and writing skills for Grade 3-8 students.
The test is given annually in March and scores are reported to districts in July.
Results are reported at five levels of
performance: (a) Below Basic, (b) Basic, (c) Proficient, (d) Goal, and (e) Advanced.
Students are then assigned a classification based on their performance on the CMT. The
state has designated ‘Goal’ as a target for all students. Students who perform at the goal
level or above are considered to be meeting the state’s standard.
Goal theory: Defined by Locke (1968) and Locke and Latham (2002, 2006), goal
theory is a theory of motivation
stating that specific and challenging goals lead to higher levels of performance in
comparison to more abstract and less-challenging goals. Implied within the theory is the
assumption that goals are attainable, persons are committed, and there are no conflicting
goals.
School Improvement Plan (SIP): Under the No Child Left Behind [NCLB] Act of
2001, schools identified as not making adequate progress are to submit a school
improvement plan. SIPs as used in education are written documents intended to provide a
roadmap of the strategies
most likely to lead to increased student achievement, while identifying specific areas of
need.
School Performance Index (SPI): The measure of school achievement used in Connecticut
since the approval of the flexibility waiver in May 2012 is now the School Performance
Index (SPI). The SPI is designed to measure the achievement of an entire school
(aggregated across different grade levels) using a single number. Unlike requirements
under NCLB, the SPI allows partial credit for students scoring below proficient on a
given state assessment (Erpenbach, 2009). The SPI is designed to be a status measure,
describing “the performance of a school (a
collection of students) at a single point in time” (Castellano & Ho, 2013, p. 12).
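The partial-credit idea behind an index of this kind can be sketched in a few lines. The band weights and counts below are invented assumptions for illustration only; this excerpt does not specify Connecticut's actual SPI formula.

```python
# Performance-index-style school score granting partial credit below the
# top bands. Weights are hypothetical, NOT the actual SPI formula.
BAND_WEIGHTS = {
    "Below Basic": 0.0,
    "Basic": 0.25,      # partial credit (assumed weight)
    "Proficient": 0.5,  # partial credit (assumed weight)
    "Goal": 1.0,
    "Advanced": 1.0,
}

def index_score(band_counts: dict) -> float:
    """Aggregate all tested students into a single 0-100 number."""
    total = sum(band_counts.values())
    points = sum(BAND_WEIGHTS[band] * n for band, n in band_counts.items())
    return 100.0 * points / total

# Hypothetical school: counts of students per CMT-style performance band
school = {"Below Basic": 10, "Basic": 20, "Proficient": 30,
          "Goal": 30, "Advanced": 10}
print(round(index_score(school), 1))
```

The key contrast with an NCLB-style percent-proficient metric is that students below Proficient still contribute fractional points, so growth anywhere in the distribution moves the single school-level number.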
Planning: Cook (2004) defined planning as a system in which the
locus of control for change resides inside an organization and serves as an “irrevocable
commitment to purpose beyond the ordinary” (p. 74). Planning is a process that “involves
explicit systematic procedures used to gain the involvement and commitment of those
principal stakeholders affected by the plan” (Falshaw, Glaister, & Tatoglu, 2006, p. 10).
Limitations
This study was not conducted using a random sample and therefore the
findings may not generalize beyond the sample. The sampling frame
included elementary schools which were part of the newly formed Connecticut Alliance
Districts, and the actual sample consisted of those Alliance District schools that provided
this researcher with a copy of their SIP. Because not every Alliance District elementary
school provided its SIP for this study, external validity, or the “extent to which
findings in one study can be applied to another situation” (Mertens, 2010, p. 129), cannot
be guaranteed.
This population, or group about which I intend to draw conclusions, includes low-
performing school districts that are required to submit SIPs. The sampling frame included
elementary and K-8 schools within Connecticut’s Alliance Districts and the sample will
be a subset of these schools that provided this researcher with their SIP. This sampling
frame was selected because the schools that are part of the Alliance Districts are the
lowest performing in the state and share many demographic characteristics. Although
the towns are not identical, there are more similarities among the districts than there are
differences.
Internal validity can also be considered a limitation because this study was not
experimental; therefore, drawing causal conclusions about school
improvement planning is not possible. There are many potential extraneous or lurking
variables, the most important of which I attempted to identify and account for.
Another limitation of this study is that it attempted to measure the intended, rather
than actual, implementation and monitoring of the SIPs. The current study’s methods did
not capture the degree to which plans were actually carried out.
Lastly, each SIP was scored by a human reader, thus reliability of scoring can be a
limitation. Reliability, or the degree to which measurements are free from error (Mertens,
2010), will be evaluated for intra-rater consistency. To increase the intra-rater reliability,
pilot scoring was conducted as a method of training and, once all plans were scored, a representative
sample of plans was randomly selected for re-scoring. A full description of the
scoring procedures is provided in Chapter 3.
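The re-scoring check can be illustrated with a short sketch: a random subsample of plans is scored a second time and the two passes compared. The rubric totals, subsample size, and drift pattern below are hypothetical, used only to show the mechanics of exact agreement and a correlation coefficient between passes.

```python
import random

random.seed(1)

# Hypothetical first-pass rubric totals for 108 plans (invented values,
# not the study's actual PIM scores)
first_pass = {f"plan_{i}": random.randint(10, 30) for i in range(108)}

# Randomly select a representative subsample for re-scoring
resample = random.sample(sorted(first_pass), k=20)

# Second pass by the same rater; small drift simulates human scoring noise
second_pass = {p: first_pass[p] + random.choice([-1, 0, 0, 0, 1])
               for p in resample}

xs = [first_pass[p] for p in resample]
ys = [second_pass[p] for p in resample]

# Intra-rater consistency: exact agreement rate and Pearson correlation
agree = sum(x == y for x, y in zip(xs, ys)) / len(xs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / (sum((x - mx) ** 2 for x in xs) ** 0.5
           * sum((y - my) ** 2 for y in ys) ** 0.5)
print(f"exact agreement: {agree:.2f}, r = {r:.2f}")
```

A high correlation with a lower exact-agreement rate would indicate consistent rank ordering with small absolute drift, which is usually acceptable for a rubric-total measure.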
Chapter 1 establishes the significance
of the study within the current educational context while justifying the
importance of this research. Chapter 1 also introduces goal theory as the theoretical
framework, defines terms, and describes study limitations. Chapter 2 includes a review
of the literature which informs this study. The review begins with an outline of goal
theory and then provides the context for required planning. The review then delves into
an educational setting. The review ends with a focus on school improvement planning as
it relates to improved student learning, and connects the literature with the current study’s
hypothesis and research questions. Chapter 3 outlines the methods used for this study
including the research design, population and sample, data sources, and procedures.
Chapter 4 discusses the results for the hypothesis and both research questions.
Chapter 5 interprets the results and makes recommendations for further study.
CHAPTER 2
This chapter includes a review of the literature which informs this study and looks
at the process of school improvement planning through the lens of goal theory. The
review starts with an outline of the theory and examines the role of goals in the planning
process. The review then delves into the effectiveness of planning. Due to the lack of
research on planning within education, it was necessary to expand
the search beyond education and look into planning across other sectors, such as business.
The review then examines planning within education’s
accountability-driven context. Finally, the review ends with a focus on school
improvement planning as it relates to improved student learning, and reviews the study’s
hypothesis and research questions.
Search Strategy
The search strategy for this study consisted of thorough examination, beginning in
July 2012, of databases, current literature, published dissertations, Internet searches, and
review of reference lists for articles and documents read. Because of the limited number
of studies on school improvement planning, a snowball search strategy was employed
(Mertens, 2010), using each source’s reference list to continually build my list of relevant
sources. The databases used for this search included ProQUEST Dissertation and Theses
Full Text, Education Full Text, ERIC, PsycARTICLES, PsycINFO, and Social Sciences
Citation Index. Information was also obtained through conversation and telephone calls
with staff members of the Connecticut State Department of Education which led to
studies related to school improvement planning and student learning. I began to narrow
the scope of my review as themes emerged. As I continued the
search and became saturated in the literature, my focus was on the role of goal setting and
planning in improving student achievement.
Goal Theory
Goal theory provides a useful theoretical context for the present study because
SIPs involve setting goals at the school level. A goal, as defined by Locke and Latham
(2002), is the object or aim of one’s actions, such as attaining a certain level of
proficiency within a specified timeframe. Goals are the internal representations of desired
states and can be constructed as outcomes, events, or processes (Austin & Vancouver,
1996). They can be internal such as one’s ideal body temperature or “complex cognitive
depictions of desired outcomes (e.g., career success)” (Austin & Vancouver, 1996, p.
338). Goal setting as a motivational technique has its origins in two lines of inquiry: (a)
research in academic psychology and (b) applied management research (Mento, Steel, &
Karren, 1987). Based on the work of Locke (1968), the study of goal theory also includes
a focus on goal setting as it relates to task performance, thus the connection to the present
study. Goal theory,
developed inductively over four decades of research, is largely based on Ryan’s (1970)
belief that conscious goals impact one’s output. Researchers have found increased
motivation when individuals perceive a discrepancy between their current state and a
desired state (Higgins, 1987; Locke & Latham, 2006). This belief connects directly to the
process of school improvement planning which requires setting the plan, implementing
the plan, and seeing change systemically across an organization to lead to true
improvement.
Core findings. Locke and Latham (2002) identified goal setting to be most
effective when goals were challenging and specific. They found the most difficult goals
were associated with the highest levels of effort and performance. Specific goals are
motivating because they contain an external referent, and clearly define the optimal
outcome so one knows whether or not the goal has been reached. This is in contrast to “doing
one’s best” which allows for a range of acceptable levels of performance (Locke &
Latham, 2006).
Setting challenging goals has been shown to be more effective than setting less
challenging goals as long as the person attempting to accomplish the goal has the ability
to accomplish the task (Locke, 1968). Locke (1968) found that an individual’s conscious
ideas can regulate his or her actions; thus, setting harder goals has been found to
produce higher performance than easier goals. His findings also suggest that setting
unrealistically high goals may not necessarily lead to goal accomplishment, but that
individuals who set more challenging goals performed at a higher level than those setting
lower levels of goals. Other researchers, such as Atkinson (1958) have corroborated the
findings about unrealistic goals with similar research showing people assigned to an
overly challenging task (1:20 chance of completing the task) simply did not try to win.
A second core finding concerns the
specificity of goals. By creating specific goals, individuals have a clearer picture of the
desired outcome; therefore less variability in final acceptable outcome is observed. Locke
and Latham (2002) found that goal specificity alone does not lead to a higher level of
performance; however,
specificity does reduce variation in performance by reducing the ambiguity about what is
to be attained.
Mechanisms. Locke and Latham
(2002, 2006) identified four mechanisms by which goals impact performance. The
mechanisms include directing attention
and effort towards a specific activity, having an energizing function directly related to
effort, positively impacting persistence, and impacting action leading to discovery or use
of current knowledge and strategies. For the purpose of school improvement planning,
the two most important of these mechanisms are directing attention towards a specific
instructional activity and the creation of appropriate strategies to accomplish the goals.
It is
essential that goals are designed to direct attention and effort to a stated task. Planning for
goals refers to the development of specific alternate behavioral paths by which a goal can
be attained (i.e., a strategy) which allows individuals to direct attention towards a specific
outcome (Austin & Vancouver, 1996). Planning links goals to specific measurable
actions and serves to prioritize decisions among goals. Tubbs and Ekeberg (1991)
emphasize the importance of planning and its connection to goals and action. Planning
goals has also been associated with monitoring goals (Austin & Vancouver, 1996);
In addition to planning to direct one’s attention, goals can indirectly impact action
by leading to the discovery and use of task-relevant knowledge and strategies. Locke and
Latham (2002) have found people automatically access the knowledge and skills they
currently have when attempting to accomplish a goal and apply skills and strategies from
similar situations to complete a given task. When unsure of how to complete a task,
people will attempt to accomplish the task based on what they have done in the past to be
successful. Their research also found that when presented with a new goal or task, people will
engage in deliberate planning to develop strategies enabling them to obtain their goal, and when they do so,
they are more likely to use the strategies to accomplish the stated goal.
Moderators. Goal commitment, feedback, and task complexity moderate the goal-performance relationship
(Locke & Latham, 2006). Of the three moderators, the current study focuses primarily on
feedback and commitment.
Feedback serves as a tool to monitor progress towards stated goals, while commitment
sustains effort over time. By setting goals, being
committed to them, and frequently monitoring progress toward them, an organization can
make more frequent corrections, and thus work to accomplish the established goals
(Reeves, 2004; Schmoker, 2004; White, 2009). In contrast, those not monitoring progress
would lack continued feedback and be less likely to make mid-course changes to
Goal acceptance is closely tied to
commitment, a moderator in goal theory. Commitment occurs immediately after goal acceptance
(Austin & Vancouver, 1996). Acceptance ranges from compliance, to identification, to
internalization. Commitment is directly relevant to schools
and the process of creating SIPs.
school staff, goals will not lead to higher performance. It is therefore necessary that the
process of writing SIPs fosters commitment. Goal commitment has been found to
increase when goals are made public, as in the case of SIPs. Some feel this is because
public commitments create a sense of accountability.
Feedback, as a moderator, is also central to the
present study. Meeting or not meeting stated goals serves as summary feedback because
not knowing how one is doing makes it “difficult or impossible for them to adjust the
level or direction of the effort to adjust their performance strategies to match what the
goal requires” (Locke & Latham, 2002, p. 708). Additionally, feedback is essential in the
process of goal setting because it allows adjustment in the level or direction of one’s
effort or strategies. For example, when a
person is below target, effort may be increased or a new strategy may be attempted to get
closer to the desired state. Combining both feedback and goals has been shown to be
more effective than simply assigning a goal (Locke & Latham, 2002); feedback is also
relevant to SIPs because it directly connects to the process of effectively monitoring the
plan’s goals.
Ethical considerations. One concern with SIPs is the potential for drawing
attention to quantifiable goals while drawing attention away from other goals which are
harder to quantify. A majority of the research examines the positive benefits of goal
setting, but there have been unintended and unethical behaviors associated with the
practice. Kelley and Protsik (1997) found, in a qualitative study on performance based
incentives, that setting goals and attaching incentives often led to teaching to individual
items and ignoring instructional areas which are not tested. This practice has been
observed in many SIPs that focus solely on standardized test topics and ignore non-tested
instructional areas such as science, social studies, and the arts. Barsky (2008) found that
performance goals can also interfere with ethical decision making by “directing
employees’ attention to achieving the goal, thereby occupying the cognitive resources
that might otherwise have been used to evaluate the morality of work-related behaviors”
(p. 65). In the case of teaching, this can come at the expense of the need to educate the whole child.
Planning
Cook (2004), who focused specifically on strategic planning (one type of planning),
defined planning as a system in which the locus of control for change resides inside of an
organization and serves as an “irrevocable commitment to purpose beyond the ordinary” (p. 74). Planning is a process that “involves
explicit systematic procedures used to
gain the involvement and commitment of those principal stakeholders affected by the
plan” (Falshaw et al., 2006, p. 10). Planning places heavy emphasis on understanding the
environment in which planning takes place and adjusting one’s actions to that
environment (Beach & Lindahl, 2004a). The process of planning is to serve as a tool to
ensure the fullest use of an organization’s resources (Bloom, 1986). Identified using
many different names, such as long-range planning, corporate planning, and strategic
planning, the process has been studied widely across the business
field. As defined by Heroux (1981), planning includes accepting a task from a boss,
determining the best way to carry it out, and continually evaluating one’s work. Falshaw
et al. (2006) described planning as setting a goal and determining the
path to achieving that goal. Beach and Lindahl (2004a) describe the process as a disciplined
method for identifying an organization’s long-term goals, a systemic way to evaluate the
agreed upon strategies, and a precise system for monitoring the results. Several
components of goal theory stand out in these definitions, including the importance of
having a specific goal, strategies to achieve the goal, and monitoring or feedback
mechanisms.
Phillips and Moutinho (2000) have linked the concept of strategy to a pattern of
decisions and actions that adapt to
the changing conditions, with a clear outcome of improving the long-term performance of
the organization.
According to Falshaw et al. (2006), planning has been considered through two
lenses: content (what is included within a plan) and process (how one goes about
planning). The process of planning
focuses on analysis, which includes breaking down a goal into actionable steps such that
implementation can happen automatically. The actionable steps represent the mechanism
Locke and Latham (2002) identified as developing strategies. Planning can be seen as
strategic if it identifies areas for improvement and suggests interventions through a new
and innovative perspective and pushes an organization towards that perspective (Cook,
2004).
Planning and goal theory are connected in many ways. In order to engage in
planning, specific and challenging goals must be set along with the creation of
strategies to meet them. Both planning and goal setting require directing
attention towards the identified strategies with persistence in accomplishing stated tasks.
For planning to be successful, there must be commitment to the identified goals, feedback
on progress, and a clear identification of necessary steps to accomplish the defined goal.
One main difference between the two is that planning focuses on an organization with
multiple goals, whereas goal theory was intended to explain individual increases in performance.
Therefore, while goal theory can be a useful guide to hypothesizing about important
characteristics of SIPs, certain elements of a good SIP may go beyond goal theory in
as Mintzberg, have questioned the applicability of planning to the public sector (Bloom,
1986). Mintzberg (1994) suggested that strategic planning in particular may not fulfill its
promise.
When strategic planning arrived on the scene in the mid-1960s, corporate leaders
embraced it as “the one best way” to devise and implement strategies that would
enhance the competitiveness of each business unit. True to the scientific
management pioneered by Frederick Taylor, this one best way involved
separating thinking from doing and creating a new function staffed by
specialists: strategic planners. Planning systems were expected to produce the
best strategies as well as step-by-step instructions for carrying out those
strategies so that the doers, the managers of business, could not get them wrong.
As we now know, planning has not exactly worked out that way. (p. 107)
“circumstances are generally too complex to allow accurate accounting of all elements of
a system that may impact future events” (Beach & Lindahl, 2004b, p. 12). In addition to
public and private sectors are great enough to require substantial adaptation to make the
planning’s effectiveness (Bell, 2002; Cook, 2004; Falshaw et al., 2006; Heroux, 1981).
Several empirical studies have outlined the relationship between planning and corporate
their study the authors focused on assessing the formality of the planning process using
performance over the past three years. The authors found these variables to impact the
relationship across studies (r = .0830) between planning and performance. This meta-
analysis was conducted on 88 studies seeking to examine such factors as time period and
contained both quantitative measures (market and accounting based improvements) and
qualitative measures (perception based improvements) and included many studies not
previously included in other analyses. This work found planning does have a positive
effect on corporate performance, especially when disaggregating the dataset; however, the
effect was smaller than originally believed. Results for some individual studies showed a
much stronger effect size than the average, but other studies showed a weaker than
Another study by Malik and Karger (1975) evaluated the financial performance of
businesses within three different industries: electronics, machinery, and chemical and
performance variables, including sales volume, sales and shares, cash flow, net income,
and overall earnings. Additionally, although the electronics companies showed the largest
Similarly, Wood and LaForge (1979) found in a mixed methods study that banks
serving as the study’s control. For this study, financial performance over a five-year
period was analyzed using growth in net income and return on owner’s investment as the
two variables. This study was limited in that, overall, there was a relatively small sample size.
It should also be noted that the researchers cautioned against inferring that
comprehensive planning was the only reason for a bank’s success. Rather, they suggest
that the managers of these banks were more progressive in their overall management
techniques.
While the reviewed research is mixed regarding the
effectiveness of planning, and while the studies have relatively small sample sizes, there
is at least some evidence that planning does in fact improve performance (Malik &
Karger, 1975; McIlquham-Schmidt, 2010; Phillips & Moutinho, 2000; Wood & LaForge,
1979). Therefore, it would be useful to continue to evaluate planning in both industry and
education.
companies that are not required to plan nonetheless decide to continue the process of
planning. Heroux (1981) found that businesses adopt planning as a necessity for survival;
however, they soon come to realize that planning does not lead to the level of gains they
had hoped. Additionally, many researchers have identified selecting the specific area of
performance measures to be problematic. Companies can opt for sales, profit, revenue,
dividends, stock price, or returns on assets. The challenge is that some of the performance
measures may be more influenced by planning than others (Falshaw et al., 2006).
associated with increased productivity, goal theory could lead to the identification of such
components. Phillips and Moutinho (2000) created a diagnostic tool used to evaluate
planning effectiveness within hotels. Their work found the most effective plans set
explicit goals, establish clear responsibilities for implementation, have high levels of
commitment, and use modern analytic techniques required for effectiveness. These
findings are consistent with the core findings of goal theory, in that specific and
challenging goals, commitment, and feedback are important. Additionally, their findings
align with the idea that goal commitment moderates the effectiveness of goal setting.
These findings are also consistent with literature on effective school planning (Beach &
Lindahl, 2007; Connecticut State Department of Education Bureau of School and District
Improvement, 2007).
schools. School leaders are often not trained to implement true strategic planning;
components of planning are out of the realm of change for public schools, such as
identifying one’s mission. As stated by Beach and Lindahl (2004a), “The mission of all
but the highly specialized K-12 public schools (e.g. schools for the visually impaired or
vocational schools) is virtually identical across the United States” (p. 221). Cook (2004),
however, feels that there are enough similarities to invest in this type of planning for
school improvement.
As this review will continue to show, the lack of attention to principles of goal
theory carries over into the educational sector. This is problematic because without
identify components of SIPs correlated with increased performance, then schools as well
as businesses can design strategic plans to maintain their competitive edge while meeting
Under NCLB (2001), schools identified as not making adequate progress submit
required for all schools, Fernandez (2009) found that by 2000, most schools were writing
formal plans for improvement. SIPs were designed to close achievement gaps and raise
levels of student achievement (White, 2009). Additionally, 48 states had begun some
2006). States have recognized the importance of school accountability and are using
distinguishing effective and ineffective schools (Phelps & Addonizio, 2006). Many feel
this movement began with the National Commission on Excellence in Education’s 1983
report, A Nation at Risk. This report, although not the
first publicized criticism of America’s system of education, did serve as a catalyst for
years of political pressure and large-scale school reform (Beach & Lindahl, 2004b).
SIPs often contain many of the same characteristics as plans discussed earlier. A
review of the academic literature on characteristics of effective school plans indicates the
for implementation of each strategy (Fernandez, 2009; Reeves, 2004; White, 2009).
These areas, as noted earlier, are consistent with goal theory concepts. Other areas
necessary for systemic improvement, yet often missing from SIPs, include leadership
school’s readiness to change along with the process for improvement (Beach & Lindahl,
2004b; Hall & Hord, 2011; Reeves, 2004; White, 2009). Without the integration of these
SIPs as used in education are intended to provide a focus to schools on their path
consistent with goal theory’s focus on identifying strategies for goal achievement. Similar
organization to have a shared focus on improvement while setting and monitoring the
implementation of specific and challenging goals and using the results to determine an
organization’s next steps. The initial concept of creating an effective school plan may
appear quite simple; in reality, however, “the complexities of each school’s changing
environment, internal strengths and weaknesses, readiness for change, culture, needs, and
stakeholders make this a vastly intricate process” (Beach & Lindahl, 2007, p. 23).
One study conducted by McInerney and Leach (1992) evaluated Indiana’s Performance-
Based Accreditation at the high school level. Their work found that although schools
were required to submit SIPs, a majority focused more on implementing new
programs than on monitoring and evaluating current programs and student progress,
which is a primary aim of the SIP process. Consistent with Reeves’ focus on
identification of best practices, “if there is a theme to the research on leadership impact, it
is that practices, not programs are the key to developing and sustaining a high level of
impact” (Reeves, 2011, p. 25). McInerney and Leach (1992) did find positive correlations
between the planning process and the development of a more unified staff (building a
awareness of a school’s strengths and areas of concern. The last correlation, the
McInerney and Leach’s (1992) study also identified, which could be relevant to
Connecticut’s current legislation, that oftentimes the objectives on the SIP were not
related directly to the state’s performance indicators. Further, the authors recommend
Leach (1992) and Webb (2007) have both found that within the process of planning there
is always the chance that schools will pick inappropriate goals, thus making the process
of planning less beneficial. To prevent this from happening, a majority of the research
reviewed, consistent with the beliefs outlined in goal theory, supports the need for a
comprehensive needs assessment to ensure accurate and frequent monitoring of the goals
(Beach & Lindahl, 2004b; Connecticut State Department of Education Bureau of School
and District Improvement, 2007; Fernandez, 2009; Mourshed, Chijioke, & Barber, 2010;
As previously stated, little empirical data has been reported on the effectiveness
characteristics of SIPs with increased student achievement. Three studies have provided
this type of evidence. Curry (2007) examined the role of SIPs in student achievement but
in contrast with the current study, her research focused on the work of School Advisory
Councils which served as the mechanism for creating the SIPs. The study evaluated 67
middle and high schools including a survey of relevant stakeholders as well as content
analysis of the SIPs. In this study, the researcher coded the number of math and writing
strategies used in each SIP. The results showed a significant negative correlation between
the number of math strategies found in plans and student achievement. In addition, there
was a negative correlation between writing what she termed operational action steps and
student achievement. Operational action steps were defined as “any activity that is
comparison, programs, analyzing data, analyzing reports etc.” (Curry, 2007, p. 100). Her
findings are consistent with Reeves’ (2004) and White’s (2009) recommendations to limit
Two additional studies were more closely related to the current study. Reeves’
(2011) Planning, Implementation, and Monitoring (PIM) study and Fernandez’s (2009)
study on effectiveness of school improvement plans provide a framework for the current
study. Both studies used similar rubrics to examine specific characteristics of SIPs (as in
The PIM study (Reeves, 2011) included 2,000 schools in the United States and
Canada using achievement data for more than 1.5 million students. The participants in
this study represented a very diverse group including both urban and rural districts
spanning levels from elementary to high school. The study included double-blind reviews
included on the rubric were consistent with goal theory including extent to which goals
are specific, extent to which strategies are linked to specific student needs, and frequency
of monitoring goal progress. Reeves (2011) defined achievement as the change in percent
in student achievement. He did not, however, provide a full presentation of results for all
variables and only reported a small subset of all results, so significant findings need to be
interpreted with caution. One finding he did report indicated negative relationships
between inquiry process and specific goals and changes in student achievement (i.e., SIPs
with higher scores on inquiry process and specific goals showed less change, which is not
consistent with goal theory). In the PIM study, inquiry process was defined as evaluating
student achievement data and associating that data with the practices of teachers and
and school performance using data from Clark County School District in Nevada, the 5th
largest school district in the nation and home to 300,000 students. The data came from
three sources: data on SIP were obtained from a content analysis (using a rubric) of
individual schools’ SIPs, student achievement data came from standardized assessments,
and data on school demographics and school resources were also obtained. Similar to the
PIM study, Fernandez identified 17 SIP indicators including specific goals, measurable
goals, use of research-based strategies, and frequency of monitoring. Fernandez
combined scores on all 17 indicators to come up with one overall quality score for each
spending, Fernandez (2009) found that quality planning showed an association with
student math and reading performance. Fernandez’s work, consistent with goal theory
(Locke, 1968) found “schools that developed plans with goals with specific time frames
and that specified more frequent monitoring of school performance tend to have higher
Additionally, consistent with views stated previously regarding multiple factors necessary
for improvement, the additive index measure of the plans (combining multiple factors)
These two studies (Fernandez, 2009; Reeves, 2011), like the present study,
examined the relationship between the quality of SIPs and student achievement. One
difference between the two studies is that the PIM study included schools from across the
United States and Canada while Fernandez only studied schools from one very large
district. Collectively, they provide mixed evidence on the effectiveness of SIPs. It is not
clear why the two studies showed different results. It might be due to the fact that
Fernandez’s SIPs all used the same format (i.e., the extraneous variable of SIP format is
controlled for) and came from only one school district. The different results could also be
An important gap in the current literature is that while SIPs have been mandated
for Alliance Districts in Connecticut, there is a lack of evidence supporting this mandate.
Additionally, there has been no attempt to apply goal theory principles to SIPs to
determine if characteristics consistent with goal theory predict achievement. The present
study is intended to build upon the findings from Fernandez (2009) and Reeves (2011) in
an effort to evaluate the effectiveness of school improvement planning. Unlike these two
studies, the current study will evaluate characteristics of an SIP as they relate to goal
theory (Locke, 1968; Locke & Latham, 2002, 2006). The present study, in essence a
replication of Fernandez’s work with an additional focus on goal theory, aims to evaluate
if school improvement plans, when created with components of goal theory, can be
The current study is framed by one hypothesis derived from goal theory and a
complete review of the literature. The hypothesis is that schools creating quality SIPs
consistent with goal theory principles will have higher levels of student achievement
than schools with lower quality SIPs. There are two research questions for this study. The
first asked whether components of a plan consistent with the goal theory principles of
specific and challenging goals, or any of the mechanisms (directing attention, having an
increased student achievement. The second research question evaluated whether school
CHAPTER 3
METHODOLOGY
The purpose of this study was to examine the effect of school improvement
planning on student achievement. This study aimed to replicate Fernandez’s (2009) study
with an additional focus on goal theory and on low-performing districts. Main variables used in
the analyses included scores for 2012-13 SIPs based on the PIM School Improvement
Audit (Reeves, 2011), academic achievement as measured by the change in school scores
on the newly created School Performance Index (SPI), and school-level demographic
information such as percentage of students eligible for free and reduced price meals and
The current study was framed by a hypothesis derived from goal theory and a
complete review of the literature. The hypothesis was that schools creating quality SIPs
consistent with goal theory principles would have higher student achievement than
schools with lower quality SIPs. The first research question asked whether components of
a plan consistent with the goal theory principles of specific and challenging goals, or any
demographic variability and the concern for the widening achievement gap between
subgroups, the second research question evaluated whether school improvement planning
Research Design
research design. This study was a prediction study in that “the researcher is interested in
using one or more variables (the predictor variables) to project performance on one or
more other variables (the criterion variables)” (Mertens, 2010, p. 161). The goal of the
analysis was to reveal the degree to which the highest scoring SIPs were associated with
the greatest levels of student achievement. It is important to note that just because a
(Pyrczak, 2010). As with any prediction study it is also important to consider other
performing school districts that are part of Connecticut’s newly formed Alliance
Districts. The schools located within these districts are required to submit SIPs. The
District and the actual sample was a subset of these schools that were willing to provide
their plan for inclusion in this study. Table 1 presents descriptions of the schools in the
sample. In an effort to increase the external validity of the sample, all Alliance District
schools were invited to participate. My goal was to obtain SIPs from 150 schools with
representation from each of the 30 Alliance Districts. I received plans from 108 schools
Table 1
As Table 1 indicates, there was not equal representation across all Alliance
Districts. Four of the largest districts in the state (Bridgeport, Hartford, New Haven, and
Waterbury), considered to be the poorest in the state as referenced by their DRG status,
submitted very few plans. Despite multiple efforts, I was unable to obtain any plans from
Waterbury or New Haven and received only one plan from Bridgeport. The implications
of this will impact the generalizability of the study results and will be discussed further in
Chapter 5.
because many of the larger urban districts have a majority of K-8 schools, they too were
included in the sample. This decision was made because K-8 schools are also required to
create a SIP and use the CMT to determine their yearly SPI. Additionally, the focus of
The Alliance District sampling frame was selected for three reasons. First,
Alliance District schools are required to submit SIPs to document their plan to improve
school who is required to submit an SIP designed to support my school’s growth on our
SPI. Lastly, a majority of the Alliance District schools have the lowest SPIs in the state,
It was my goal to obtain at least 150 SIPs because that number would provide
optimal power for the multiple regression analysis used to test my hypothesis. With any
statistical analysis, the larger the sample, the more precise the results are. As defined by
statisticians, precision is “the extent to which the same results would be obtained if
another random sample were drawn from the same population” (Pyrczak, 2010, p. 95).
Although 150 plans were not collected, I am confident that the current sample size of 108
was also a goal to obtain SIPs from each of the 30 Alliance Districts. The last reason for
this group to be included within my sampling frame is that these districts are defined as the
lowest performing districts in the state, required to adhere to the same state mandates, and
For this study, three forms of school-level data were collected. The first was
whole school scores on 2012-13 SIPs using the PIM School Improvement Audit (Center
for Performance Assessment, 2005), a rubric used in past research (Reeves, 2011) to
evaluate the quality of SIPs. A copy of this instrument can be found in Appendix A. It
should be noted that, although a majority of plans encompassed only the 2012-13 school
year, there were some plans that were multiple-year documents. The second type of data
was individual school SPI scores for the 2011-12 and 2012-13 school years, which are
based on the March administrations of the CMT. For this study, SPI scores will be
demographic information such as percentages of students eligible for free and reduced
price meals and percentages of ELL students. Because state-level data are historically
released up to a year after submission, the demographic data used for this study consisted
of 2011-12 data, as opposed to the plans themselves, which represented the school’s
SIP quality based on PIM scoring rubric. The PIM scoring rubric was created
by the Leadership and Learning Center (Center for Performance Assessment, 2005). The
diagnostic tool consists of 30 performance dimensions found within plans that have been
empirically linked to student achievement. The rubric was designed to evaluate a school’s
planning, implementation, and monitoring of their improvement efforts. This was done
because of “clear evidence that when it comes to achievement and equity, planning and
processes are less important than implementation, execution, and monitoring” (Reeves,
2006, p. 62).
The items on the rubric are 30 leadership performance dimensions across five
leadership), inquiry process (e.g., possible cause-effect correlations and analysis of adult
actions), SMART goals (specific, measurable, attainable, realistic, timely), design (e.g.,
multiple assessments documented and strategies linked to specific student needs), and
evaluation (e.g., summary data provided and compared and next steps outlined in
evaluation).
The rubric rating scale contains three rating categories for the extent to which
improvement (see Appendix A). Using the described performance criteria, a rater
evaluates an SIP in the following way: exemplary performance on the dimension was
The PIM rubric was created by the Leadership and Learning Center using a
double-blind check for consistency. This was done to increase inter-rater reliability
and also to increase content validity. The original rubric was piloted on more than 100
plans by separate raters without knowledge of the other’s scores (Reeves, 2006).
Consistency in SIP ratings was achieved over 80% of the time. On areas that had poor
reliability (at times less than 36%), the rubric was revised and subsequent tests were
administered. The decision to use the PIM School Improvement Audit for this study was
made because it has been field tested and found to be both a reliable and valid measure of
To address the current study’s focus on goal theory, it was necessary to take the
rubric and attempt to align it with concepts from goal theory (i.e., the core findings,
mechanisms through which goals affect performance, and moderators of the goal-
performance relationship). The PIM rubric was not designed according to the components
of goal theory so this process was necessary to see if the PIM rubric would be an
appropriate tool for this study. Additionally, aligning the rubric components to goal
theory characteristics would allow for the assessment of the research question.
To assess alignment of the rubric with goal theory, a rational process was used
whereby the author and the dissertation advisor independently connected each of the
performance dimensions of the rubric with one of the major goal theory concepts. Goal
theory concepts included core findings (specific goals, challenging goals), mechanisms
increasing persistence, use of task relevant knowledge to develop new strategies), and
The first attempt to align the performance dimensions with goal theory led to only
dimensions were not originally connected to goal theory by Judge 2: #1, strengths, and
#22, documented coaching and mentoring. Appendix B shows the judgments of each rater.
At that time both raters met, discussed the items on which they disagreed, and
resolved those disagreements. Final connections are shown in Appendix B.
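The consistency check between the two judges amounts to simple percent agreement, which can be sketched as follows. The concept labels below are hypothetical stand-ins, not the actual judgments (those appear in Appendix B):

```python
# Percent agreement between two judges' concept assignments.
# These labels are illustrative only, not the study's actual data.
judge1 = ["specific goals", "feedback", "strategies", "commitment", "feedback"]
judge2 = ["specific goals", "feedback", "attention", "commitment", "feedback"]

# Count dimensions where both judges assigned the same concept.
matches = sum(a == b for a, b in zip(judge1, judge2))
agreement = matches / len(judge1)
print(agreement)  # 0.8: the judges agreed on 4 of 5 dimensions
```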
performance dimensions on the PIM rubric; the list of PIM items for each goal theory
concept is shown in Table 2. Of the five that were not categorized, three performance
dimensions were determined to connect with more than one goal theory concept: #9,
achievement results linked to causes, was categorized in both directing attention and
feedback; #20, results indicators aligned to goals, was categorized in both directing
attention and feedback; and #30, results disseminated and transparent, was categorized
in both goal commitment and feedback. Two characteristics could not be categorized:
#27, knowledge and skills, and #29, next steps.
For this study, the scores from the rubric were analyzed in two ways. The first
analysis was on a whole plan score. This score was compiled by adding all 30 of the
performance dimension ratings for each plan and coming up with one total whole plan
score. The second analysis focused on combining scores from the set of performance
dimensions connected to each of the goal theory concepts as represented in Table 2. For
example, to complete an analysis on goal theory’s feedback concept, I added up the rubric
leadership; #9, achievement linked to causes; #16, multiple assessments; #18, frequent
monitoring; #20, aligned results indicators; #26, summary data; and #28, evidence, to get
an average sub-score for feedback. That sub-score average (and sub-score averages for
other goal theory concepts) were then analyzed for correlations with student achievement.
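The two scoring approaches just described can be sketched as follows, with hypothetical 0–2 ratings standing in for actual rubric scores (the feedback item numbers below are the subset named above; a real plan has ratings on all 30 dimensions):

```python
# Hypothetical ratings (0-2) for a few PIM dimensions of one plan.
plan_ratings = {9: 2, 16: 1, 18: 2, 20: 0, 26: 1, 28: 2}

# Whole-plan score: the sum of every dimension rating
# (here only six of the 30 dimensions are shown).
whole_plan_score = sum(plan_ratings.values())

# Concept sub-score: the average rating across the dimensions mapped
# to one goal theory concept (here, part of the feedback set).
feedback_items = [9, 16, 18, 20, 26, 28]
feedback_subscore = sum(plan_ratings[i] for i in feedback_items) / len(feedback_items)
```

Sub-score averages computed this way for each school can then be correlated with that school's achievement outcome.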
Because the goal theory core finding challenging goals had only one performance
dimension (#12, achievable goals), it was combined with specific to create a new
category referred to as core findings – challenging and specific goals. Directing attention
strategies had nine; commitment had two; and feedback had seven.
Table 2

(Table body not recoverable from extraction; legible fragments include #27, Knowledge and skills, and #29, Next steps.)
Student achievement – SPI school score, 2011-12 and 2012-13. The second
source of data was school SPI scores from both 2011-12 (the baseline score) and 2012-
13. The standardized assessment tool used to calculate a school’s SPI is the CMT. The
SPI has been designed as a single index reflecting student achievement across all tested
subject areas. As stated in Connecticut’s ESEA flexibility waiver, the state’s target is for
schools to achieve 88 of 100 possible performance index points. Schools scoring at this
level would have performed at the goal level or above on a majority of the standardized
tests taken. A school’s baseline score (their 2011-12 score) was computed by averaging
three years of CMT assessment data. The state then measures progress against that
original 3-year baseline using a single year of data annually beginning with the spring
2013 data (CSDE, 2012a). Unlike requirements under NCLB, the SPI allows for partial
credit for students scoring below proficient on a given state assessment (Erpenbach,
2009). The SPI is designed to be a status measure for schools, where status is defined as
SPI scores from 2011-12 and 2012-13 were selected because the SPI is a new
measure identified by the CSDE and should be an appropriate tool to identify the
achievement. The SIPs evaluated for this study are the ones currently being used to guide
improvement and hold individual schools accountable, meaning that the content of the
plan (strategies and actions for school improvement) needed to be relevant for the 2012-
13 school year. Because schools have implemented their plans (either as developed for a
one year plan or by making yearly adjustments for a multi-year plan) it would be
appropriate to expect increased student achievement from spring 2012 to spring 2013.
continual growth from their baseline SPI (2011-12), moving closer each year to the state
goal of 88.
Connecticut Mastery Test. The CMT is important for this study because, as
mentioned earlier, it is used as the basis for creating an individual school’s SPI. For
through 8 assessing students in the content areas of reading, mathematics, and writing;
science is also assessed in Grades 5 and 8 (CSDE, 2012a). The CMT currently designates
five levels of performance: (a) Below Basic (BB), (b) Basic (B), (c) Proficient (P), (d)
Goal (G), and (e) Advanced (A). For purposes of calculating the SPI, students are
credited by subject based on their performance on the CMT in the following ways: (a)
BB = 0.0 points, (b) B = 0.33 points, (c) P = 0.67 points, (d) G or A = 1.0 points. To
calculate a subject-specific SPI, the following equation is used: “SPIsubject = (%BB * 0.0)
+ (%B * 0.33) + (%P * 0.67) + (G or A * 1.0)” (CSDE, 2012a, p. 87). The subject-
specific SPIs are then averaged to create an SPI for each school, district and subgroup
based on all of the students who were tested. More detailed information on SPI
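The quoted formula can be illustrated with a small sketch. The percentages below are hypothetical, not actual CMT results:

```python
# Illustrative sketch of the subject-level SPI formula quoted above
# (CSDE, 2012a). All percentages are hypothetical.

def spi_subject(pct_bb, pct_b, pct_p, pct_goal_or_adv):
    """Weighted sum of performance-level percentages for one subject."""
    return pct_bb * 0.0 + pct_b * 0.33 + pct_p * 0.67 + pct_goal_or_adv * 1.0

# Hypothetical school: 10% Below Basic, 20% Basic, 30% Proficient,
# 40% Goal or Advanced in math; a stronger profile in reading.
math_spi = spi_subject(10, 20, 30, 40)    # ≈ 66.7
reading_spi = spi_subject(5, 15, 30, 50)  # ≈ 75.05

# Subject-specific SPIs are then averaged into a single school-level SPI,
# which the state compares against its target of 88 points.
school_spi = (math_spi + reading_spi) / 2  # ≈ 70.9, below the target
```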
The content of the CMT is expected to represent the most important skills
assessing what students know, the CMT also employs performance tasks to measure what
students can do with the information they have learned (Hendrawan & Wibowo, 2012).
The mathematics section contains multiple-choice items, grid-in response items, and
open-ended responses that are scored using a 0-1, 0-2, or 0-3 scale. The reading section
consists of two subtests, the Degrees of Reading Power (DRP) and Reading
Comprehension. The DRP is a single-session multiple-choice test containing seven
passages, each of increasing difficulty. The Reading Comprehension test consists of two
sections containing both multiple-choice and open-ended items. The open-ended items are
scored using a 0-2 scale. The writing section contains two subtests, Editing and Revision
and the Direct Assessment of Writing (DAW). The Editing and Revision is completed in
one session containing multiple-choice questions, and the DAW is a single 45-minute
prompt scored using a 2-12 scoring scale. Lastly, science is assessed in Grades 5 and 8
and contains multiple-choice and open-ended items. The open-ended items are scored
using a 0-2 scoring scale and the test is completed in one session (Hendrawan &
Wibowo, 2012).
Testing items were created to reflect content standards using the Connecticut
and special education teachers, and CSDE curriculum and assessment specialists), and a
fairness committee before being piloted with students in Connecticut in grades 3 through
complete a comprehensive analysis of test items to determine the match between content
items and respective content strands. Hendrawan and Wibowo (2012) report that “AEC
concluded that CSDE has done a solid, quality job in matching the test items included on
the CMT4 with the relevant content strands and standards of the Language Arts and
correlate scores from new tests with scores from other tests designed to measure similar
content (Hendrawan & Wibowo, 2012). This has been done with each version of the
CMT. Additionally, correlations between the writing portions of the CMT and other
language arts tests (Degrees of Reading Power, Reading Comprehension, Editing &
(Hendrawan & Wibowo, 2012). Validity, as defined by Mertens (2010), “is a unified
concept and that multiple sources of evidence are needed to support the meaning of
scores” (p. 384).
The CMT has also undergone multiple checks on reliability, or the consistency of
test scores (Hendrawan & Wibowo, 2012, p. 19). Because the CMT was piloted in a
single administration, internal consistency has been evaluated using Cronbach’s alpha.
Hendrawan and Wibowo (2012) reported alpha only for the writing portion of the CMT.
For the 2011 administration, the CMT Writing Cronbach’s alpha was 0.89 for Grade 3, 0.86 for Grade 4, and 0.88 for
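Cronbach's alpha, the internal-consistency statistic reported here, can be sketched in a few lines of Python. The score matrix below is illustrative only; the CMT item-level data are not reproduced in this study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_examinees x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: two perfectly consistent items yield alpha = 1.0.
print(cronbach_alpha(np.array([[1, 1], [2, 2], [3, 3]])))
```

Values near 0.9, like those reported for the CMT Writing test, indicate that examinees respond consistently across items within a single administration.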
as control variables because they have been found in previous research (Fernandez, 2009;
Hayes, Christie, Mills, & Lingard, 2004; Phelps & Addonizio, 2006) to relate to student
achievement and are publicly available data. In order to rule out the possibility
that a relationship found between SIP quality and student achievement is due to other
factors, it was necessary to control for factors which may impact student achievement.
This study is similar to Fernandez’s (2009) study which controlled for ten factors
eligible for free and reduced price lunch, percentage of students with Limited English
(elementary, middle, high school), size of school (enrollment), transiency rate, and
percentage of highly qualified teachers. To the extent possible using publicly available
data, the current study replicated Fernandez’s (2009) control variables. The current study
eligible for free and reduced price lunch, per pupil spending (reported only at the district
students with Limited English Proficiency and school size (enrollment). The choice of
control variables was also consistent with the classifications in Connecticut’s District
Reference Groups (DRGs) where more affluent towns are assigned to higher DRG levels
status and need are grouped together. DRGs are classified on seven criteria including
income, education, occupation, family structure, poverty, home language, and district
enrollment. There are nine DRGs ranging from DRG A representing very affluent low-
need suburban towns to DRG I representing high-need and low socioeconomic areas.
Charter schools, technical schools, and Regional Education Service Centers are not given
a DRG classification. DRGs are not intended to assess the quality of instruction of
schools within a district. They were created to describe the characteristics of the families
with children attending Connecticut Public Schools (CSDE, 2006). DRG assignment of
Alliance Districts (27/30) are drawn from the lowest three DRG classifications.
Table 3
Number of Connecticut Alliance Districts Included within each DRG
DRG    Alliance Districts / Total Districts
A      0/9
B      0/21
C      0/30
D      1/24
E      0/35
F      2/17
G      11/17
H      9/9
I      7/7
Procedure
Obtaining SIPs. Because this study included data that are publicly available,
attempts to collect individual SIPs began in March 2013 while the dissertation proposal
was being written. All 30 Alliance District superintendents were originally contacted by
mail requesting permission to obtain a copy of individual school plans. A copy of the
Districts that did not respond after 2 weeks were then contacted again both by phone and
email. District and school websites were also viewed in an attempt to obtain individual
school plans. In the initial attempt, 54 individual school plans from 19 districts were
offices during June 2013. Ongoing calls and emails were sent and this process concluded
in October 2013 with a final set of 108 usable school improvement plans representing 25
Alliance Districts. Two additional plans were received, for a total of 110 plans; however,
neither one could be included in the analysis. One of the plans was from a school that was
closed last year and the other plan was from the district improvement document, and thus
Scoring plans. As the researcher, I scored all plans using the PIM rubric
described earlier and located in Appendix A. It was essential that plans were scored
consistently so that the resulting data were reliable. To evaluate inter-rater reliability,
Michael Wasta, Ph.D., a retired superintendent of schools who currently consults with
Connecticut school districts on school improvement planning, and I coded the first 20 plans.
To begin this process, both scorers met in August 2013 and discussed each of the
performance dimension. The researcher and Dr. Wasta then scored two plans together
and discussed their ratings and rationale for each rating. By discussing the characteristics
the scorers developed an operational definition of each performance dimension. Over the
next two weeks, the same 20 plans were scored independently with a ‘total plan’
reliability rating (correlation coefficient between the two sets of ratings) of .75. Because
that was very close to our goal of .80, I continued scoring the plans independently. Table
4 shows the initial reliability for the whole school plan ratings as well as the subscore
Table 4
Inter-Rater and Intra-Rater Reliability for SIP Quality Ratings
twice after the inter-rater reliability check. The first check took place after the next 40
plans (plans 21-60) were scored as I re-scored 10 randomly selected plans. To determine
which plans would be re-scored, all plans were entered into an SPSS file which created a
random variable for each plan. The ten lowest numbers were selected to be re-scored.
This allowed each plan to have an equal chance for being rescored.
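The SPSS step described above, attaching a random number to each plan and re-scoring the ten plans with the lowest values, can be mimicked in Python. The plan identifiers below are hypothetical stand-ins for plans 21-60:

```python
import random

# Hypothetical identifiers for the 40 newly scored plans (21-60).
plan_ids = list(range(21, 61))

rng = random.Random(2013)  # fixed seed so the selection is reproducible
random_values = {p: rng.random() for p in plan_ids}  # one random number per plan

# Select the ten plans with the lowest random numbers for re-scoring;
# every plan has the same probability of being chosen.
rescore = sorted(plan_ids, key=random_values.get)[:10]
print(len(rescore))  # 10
```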
for these 10 plans, which I refer to as ‘intra-rater reliability.’ The total plan score
reliability was extremely high (.995); however, two of the goal theory concepts,
strategies and commitment, were well below the total score reliability. To address this, I
looked at each performance dimension for the 10 plans that were randomly sampled and
identified eight performance dimensions that showed the greatest variation in my scoring:
#3, teacher practices; #5, engaged stakeholders; #6, possible cause-effect corrections;
#8, analysis of adult actions; #9, achievement results linked to causes; #15, purposeful,
focused action steps; #21, adult learning and change process considered; and #23,
strategies linked to specific student needs. Although there was never more than a one
point discrepancy in scoring (meaning a score of a 1 on the first scoring and a 3 on the
continued to score the remaining plans. I again met with Dr. Wasta to review the scoring
rubric for these eight performance dimensions and elaborated on the criteria needed in an
effort to increase my scoring consistency. After this process, I rescored these eight
performance dimensions on all plans and the scores obtained from this process were used
for final analysis. A copy of the clarified eight performance dimensions, which were used
Finally, for the last set of 50 plans the same process for randomly selecting 10
plans was used to identify a set of plans to be rescored. This second set of scores again
showed a high whole plan intra-rater reliability (.986), but this time I substantially
increased the reliability for the strategies and commitment goal-theory concepts. As Table
4 shows, all categories achieved an intra-rater reliability of .778 or higher, and the
Data Analysis
This study used multiple regression analysis to assess the relationship between
SIP quality (the predictor variable) and student achievement (the outcome variable).
Multiple regression was the selected analysis because it allows assessment of the
relationship of SIP quality with student achievement while controlling for other school
level factors discussed earlier. The regression coefficient for SIP quality was used to test
the hypothesis about the relationship between SIP score and student achievement. I did,
however, compute correlations between all variables using the Pearson product-moment
correlation coefficient.
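A minimal numpy sketch of this regression setup follows. All variables are simulated stand-ins for the study's data (the coefficients and noise level are illustrative assumptions, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 108

# Simulated stand-ins for the study's variables (illustrative only):
prior = rng.normal(70, 10, n)     # 2011-12 achievement (control)
pct_fr = rng.uniform(10, 90, n)   # % free/reduced lunch (control)
sip = rng.normal(44.6, 11.1, n)   # SIP total score (predictor of interest)
outcome = 0.9 * prior - 0.05 * pct_fr + 0.085 * sip + rng.normal(0, 2, n)

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(n), prior, pct_fr, sip])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# R^2: proportion of outcome variance explained by the combined predictors.
resid = outcome - X @ beta
r2 = 1 - resid.var() / outcome.var()
print(round(r2, 3), round(beta[3], 3))  # shared variance; SIP coefficient
```

The SIP coefficient (`beta[3]`) plays the role of the regression coefficient used to test the hypothesis, holding the control variables constant.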
Ethical Considerations
This research was submitted to the CCSU Human Studies Council and deemed
individual SIP. Lastly, all SIPs and SPI data are publicly available, and no data deal with
individuals. Because of this there is little chance that anyone could have been harmed by
Research Rigor
The student achievement data (CMT) collected for this study have been shown to be
valid and reliable. Additionally, the PIM scoring rubric has been evaluated, adjusted, and
implemented for consistent results in determining the quality of a school plan. That along
with the research design described allowed for data analysis which is representative of
the 30 districts that make up Connecticut Alliance Districts. Unlike other studies, the
available population was relatively small, so to achieve high statistical power I worked to
obtain as many SIPs as possible, ending up with a total of 108 School Improvement Plans
for my analysis.
Limitations
This study was not conducted using a random sample and therefore the
District elementary school did not provide their SIP for this study, external validity, or
the “extent to which findings in one study can be applied to another situation” (Mertens,
Internal validity can also be considered a limitation because this study was not
conclusions about school improvement planning were not possible. There are many
potential extraneous or lurking variables that might have impacted the relationship
between SIP quality and student achievement, the most important of which I discuss in
Chapter 4.
Another limitation of this study is that it dealt with the intended, rather than
actual, implementation and monitoring of the SIPs. The PIM rubric is used to measure the
quality of three phases of school improvement: the planning phase (the focus of this
study), the implementation phase, and the monitoring phase. As the literature has
indicated (Fernandez, 2009; Heroux, 1981; Reeves, 2004; White, 2009), much of what is
written in plans is for documentation and compliance purposes and may not accurately
describe adult actions within a school. The present study did not assess whether the
steps on a school’s plan were actually implemented or the degree to which they were
truly monitored.
Each SIP was scored by a human reader, thus reliability of scoring can be a
limitation. Reliability, or the degree to which measurements are free from error (Mertens,
2010), was evaluated at multiple points during the study and adjustments, as described
The last limitation is that the PIM School Improvement Audit was not created
using the principles of goal theory. The document was created using current literature on
best practice. For this study, it was necessary to align the performance dimensions of the
rubric to goal theory; therefore, the match between the rubric and theory was not exact.
achievement results impact many people. Those impacted by the process include
empirically link goal theory to student achievement, then I fill a current gap in
educational research and can use that information to improve my school leadership and
ability to plan for school improvement. More importantly, this information can be used
by other similar schools to create a document that is associated with increased student
and no risks are associated with the results, the likelihood of researcher bias is minimal.
CHAPTER 4
RESULTS
This chapter reports the findings regarding the effect of school improvement
with the addition of a focus on goal theory (Locke & Latham, 2002, 2006) and on low-
performing districts. For this study, three forms of school-level data were analyzed. The
first was whole school scores on 2012-13 SIPs using the PIM School Improvement Audit
(Center for Performance Assessment, 2005). The second included individual school SPI
scores for the 2011-12 and 2012-13 school years. The third was seven forms of school-
level demographic information including percentages of students eligible for free and
reduced price meals, special education population, per pupil spending, and percentages of
ELL students. For this study, achievement was measured by SPI scores.
The hypothesis for this study was that schools creating quality SIPs consistent
with goal theory principles would have higher levels of student achievement than schools
with lower quality SIPs. The first research question asked whether components of plans
consistent with the goal theory principles of specific and challenging goals, or any of the
feedback) identified by Locke and Latham (2002, 2006) would be associated with
increases in student achievement. The second research question evaluated whether school
Before analysis was completed, descriptive statistics and skewness were analyzed
on all variables. This was done to ensure the analysis would not be distorted by
substantially skewed data. The only variable showing high skewness was percentage of
greater can be considered high; values closer to zero indicate less skewness). To account
for this I conducted three transformations in an attempt to reduce the skewness (log base
10, square root, and inverse). The log special education variable produced the smallest
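The comparison of the three transformations could be carried out as below. The right-skewed percentages are simulated here, not the study's special education data:

```python
import numpy as np

def skewness(x):
    """Sample skewness (Fisher-Pearson): third standardized moment."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

# Hypothetical right-skewed percentages (illustrative stand-in):
rng = np.random.default_rng(1)
pct = rng.lognormal(mean=2.3, sigma=0.6, size=108)

candidates = {
    "raw": pct,
    "log10": np.log10(pct),
    "sqrt": np.sqrt(pct),
    "inverse": 1.0 / pct,
}
for name, values in candidates.items():
    print(name, round(skewness(values), 2))
# The transform with skewness closest to zero would be retained.
```

For lognormal-like data such as these, the log transform brings skewness closest to zero, which parallels the study's choice of the log special education variable.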
Descriptive Statistics
Tables 5 and 6 provide correlations and descriptive statistics for each of the
predictor variables used in this study as well as for SIP scores (N = 108). School-level
Learners, enrollment, per pupil spending, and the percentage of students qualifying for
free and reduced price lunch. Achievement scores from 2012-13 serve as the outcome
variable.
The average SIP score, obtained using the PIM School Improvement Audit, was
44.64 (SD = 11.11) out of a possible range of 30.0 to 90.0 points. Of the 108 plans
scored, 46% of the plans (n = 50) scored in the 30 - 39 point range, 21% of the plans (n =
23) scored in the 40 - 49 point range, 21% of the plans (n = 23) scored in the 50 - 59
point range, and only 11% (n = 12) plans scored higher than 60. One plan had a total
were 71.71 for 2011-12 and 69.67 for 2012-13. As stated in Connecticut’s ESEA
flexibility waiver, the state’s target is for all schools and subgroups within a school to
achieve 88 of 100 possible performance index points. My results indicate that most
Alliance Districts have not met this standard. Schools obtaining an SPI of 88 or above
would have performed at the goal level or above on a majority of the standardized tests
taken. An SPI of 67, which is very close to the 2012-13 mean achievement scores for this
study, would indicate a majority of schools were scoring at the proficient level (CSDE,
2012a). Table 5 summarizes the breakdown by achievement scores for the 108 schools
within this study. Of the 108 schools within this study, only two met or exceeded the
target of 88.
Table 5
Distribution of 2012-13 Achievement (SPI) scores
2012-13 SPI Ranges Number of Schools Percentage
30.0 - 39.9 0 0%
40.0 - 49.9 3 3%
relationship between SIP score and the outcome variable, 2012-13 student achievement, r
= .237, p < .05. According to Cohen (1988), r values less than .24, while significant, are
considered small. There was a very strong positive correlation between 2011-12
with 2012-13 achievement data include the following control variables: percent minority
r = -.541, p < .01, percent special education, r = -.306, p < .01, percent ELL, r = -.405, p <
Table 7 displays the data from the top 20 schools (as noted by their 2012-13 SPI
score) and their total plan scores in comparison with the lowest 20 schools. There was, on
average, a 6-point differential in total plan score between these two groups.
Table 6
Bivariate Correlations Among SIP Total Score, Student Achievement, and Control Variables (N = 108)
Variable Mean SD 1 2 3 4 5 6 7 8
7. Enrollment 417.96 143.57 .219 * -.125 -.083 .158 .032 .317 ** 1.00
8. PP Spending 14,129.13 1830.91 .312 ** -.009 .002 .348 ** -.123 .294 ** .202 * 1.00
9. % FR 56.03 21.43 .000 -.752 ** -.719 ** .749 ** .275 ** .473 ** .203 * .208 *
*p < .05. **p < .01.
Note. % Minority = percentage of students identified as minority; % SpEd = percentage of students identified as special education; %
ELL = percent of students identified as English Language Learners; PP Spending = per pupil spending; % FR = percentage of
students identified as eligible for free and reduced price lunch.
Table 7
Distribution of SIP Scores for 20 Highest and 20 Lowest Achieving Schools
20 Highest School SPIs (88.9 - 78.7):  47    9  3  5  1  2
20 Lowest School SPIs (29.9 - 62.1):  40.9   12  4  2  2  0
The main hypothesis for this study was that schools creating quality SIPs
consistent with goal theory principles would have higher student achievement than
schools with lower quality SIPs. The test of the hypothesis was done utilizing multiple
regression. The outcome variable was student achievement scores for 2012-13. A
school’s SIP score was included as a predictor variable to test the hypothesis that quality
achievement score was also used as a predictor. Additionally, the demographic factors
shown in Table 6, a majority of which were used in Fernandez’s (2009) study, were
raw change in student achievement between two years. Raw change scores (a difference
between achievement scores from 2012-13 and 2011-12) may have low reliability and do
not take into account regression to the mean; both present potential problems that the
The regression results for the main hypothesis (shown in Table 8) show a high
proportion of shared variance between the predictor and outcome variables, with R2 =
.861, meaning 86% of the variance in scores can be accounted for by the combination of
predictor variables. This R-squared was statistically significant at the .001 level.
My hypothesis was tested by the regression coefficient for the SIP score. The SIP
score did have a marginally statistically significant regression coefficient, p = .052, with a
even though four of the variables (percentage of minority students, percentage of special
education students, percentage of English Language Learners, and percentage eligible for
free and reduced lunch) all had significant correlations with student achievement. A
high degrees of correlation between predictor variables (shown in Table 6). For example,
high correlations with percentage of free and reduced price lunch were found for
achievement 2011-12, r = -.752, p < .01 and percentage of minority students, r = .749, p
<.01. When multicollinearity arises in a data set, it can be hard to find significant
regression coefficients. When predictors are very strongly related to each other, it makes
it much more challenging to separate the variables, thus effectively holding one variable
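The degree of multicollinearity described here is commonly quantified with variance inflation factors (VIFs), where each predictor is regressed on the others. The sketch below uses simulated predictors, not the study's data, to show how two strongly related variables (like percentage of minority students and percentage of free and reduced price lunch) inflate each other's VIF:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # intercept + others
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()     # how well the others predict col j
        out.append(1.0 / (1.0 - r2))       # VIF = 1 / (1 - R^2)
    return out

# Two nearly collinear predictors plus one independent predictor:
rng = np.random.default_rng(2)
a = rng.normal(size=108)
b = 0.9 * a + 0.3 * rng.normal(size=108)   # strongly related to a
c = rng.normal(size=108)
print([round(v, 1) for v in vif(np.column_stack([a, b, c]))])
```

VIFs well above 1 for the first two columns (and near 1 for the third) reflect exactly the difficulty described: when predictors overlap, little unique variance is left for each coefficient, so individual coefficients can fail to reach significance even when the overall model fits well.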
Table 8
Multiple Regression Results for SIP Score Predicting Student Achievement
Following the main hypothesis, the first research question for this study was
whether components of a SIP consistent with goal theory (e.g. challenging and specific
increased student achievement. As Table 9 indicates, three of the goal theory variables
(directing attention, r = .270, p < .01, strategies, r = .226, p < .05, and feedback, r = .256,
p < .01) had significant positive correlations with 2012-13 student achievement.
Additionally, there were very strong and significant correlations between the SIP
total score and four of the five goal theory principles (challenging and specific goals, r =
.827, p < .01, directing attention, r = .947, p < .01, strategies, r = .892, p < .01, and
The test of this research question was done utilizing multiple regression. For this
change in student achievement between two years. The outcome variable was student
achievement scores for 2012-13. For this analysis, predictor variables included scores on
individual SIPs from the five goal theory components: challenging and specific goals,
directing attention, strategies, commitment, and feedback scores. This analysis did not
use the total SIP score as a predictor. The only control variable in this analysis was the
The regression analysis for the research question shows a strong multiple
correlation between the outcome variable and the combination of the predictor variables,
with the proportion of shared variance being R2 = .86, meaning 86% of the variance in
achievement scores can be accounted for by a combination of the goal theory principles.
This test was statistically significant at the .001 level. However, none of the goal theory
control variables had significant regression coefficients. Similar to the main analysis, the
goal theory data presented high degrees of correlation between predictor variables
(shown in Table 6). When looking at the strength of correlations between predictor
variables, the issue of multicollinearity appears to have again arisen due to highly
correlated predictor variables. This issue will be discussed further in the last chapter.
Table 9
Bivariate Correlations Among SIP Total Score, Student Achievement, and Goal Theory Concepts (N = 108)
Variable Mean SD 1 2 3 4 5 6 7
1.SIP Total 44.64 11.11 1.00
7. Commitment 1.18 .26 .097 -.131 -.082 .051 -.012 .075 1.00
8. Feedback 1.40 .30 .941 ** .201 * .256 ** .790 ** .916 ** .774 ** .109
*p < .05. **p < .01.
Table 10
Multiple Regression Results for Goal Theory Concepts Predicting Student Achievement
Unstandardized Coefficient    Standard Error    Standardized Coefficient    t    Sig.
between subgroups, the second research question evaluated whether school improvement
Connecticut receives an individual school SPI, as well as subgroup specific SPIs. SPIs for
the following subgroups are reported: Black or African American, Hispanic or Latino,
English Language Learners, Free/Reduced Lunch Eligible, Students with Disabilities, and
a subgroup labeled High Needs. High needs is defined as “an unduplicated count of
students in the English Language Learners, Free/Reduced Lunch Eligible, and Students
that particular school defined by the particular subgroup. Of the 108 schools within this
study, the following numbers of subgroups were reported: Black or African American (n
Lunch Eligible (n = 108), Students with Disabilities (n = 85), and High Needs (n = 108).
state documents and throughout this study. Due to the smaller sample size of some of the
subgroups within this study, regression results were reported for full data (N = 108) or
nearly complete data (i.e., Hispanic or Latino) subgroups. Black or African American,
English Language Learners, and students with disabilities were not included in the
regression analysis as their sample size did not represent full or nearly complete data.
Again using multiple regression with a separate analysis for each subgroup, the
outcome variable was the subgroup’s student achievement score for 2012-13. A school’s
SIP score was included as a predictor variable to test the hypothesis that the quality of
achievement score was also used as a control variable as in the main hypothesis test. As
with the main hypothesis test, the demographic factors shown in Table 6 were included as
Table 11 shows the correlations between subgroup SIP scores and student
achievement. All three subgroups had small, yet significant correlations with SIP total
(Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r = .288, p < .01).
Table 11
Bivariate Correlations Among SIP Total Score, Student Achievement, and Subgroups
Variable Mean SD 1 2 3 4 5 6
1.SIP Total 44.64 11.11 1.00
6. High Needs 61.09 8.30 .288 ** .676 ** .810 ** .843 ** .962 ** 1.00
(n = 108)
Table 12 shows the standardized regression coefficients for each of the subgroups.
Table 12
Multiple Regression Standardized Coefficients for Subgroup Exploratory Analysis
Table 12 also indicates that some of the demographic control variables did have
significant regression coefficients for some of the subgroups, yet not for others. There
does not appear to be any consistent interpretable pattern to the significant results for the
demographics. The implication of this will be discussed further in the final chapter.
Summary
As the results of this study indicate, there is evidence that SIP quality predicts
student achievement when controlling for demographic factors. The SIP score had a
standardized coefficient of .085, consistent with the research hypothesis. Although none
of the goal theory concepts or specific subgroups had significant regression coefficients,
a majority of these predictors were strongly correlated with student achievement and the
CHAPTER 5
DISCUSSION
This chapter discusses the study findings examining the effect of school
student achievement, with an additional focus on goal theory (Locke & Latham, 2002,
2006) and low-performing districts. Three forms of school-level data were analyzed. The
first was whole school scores on 2012-13 SIPs, the second was individual school
achievement scores for the 2011-12 and 2012-13 school years, and the third was school-
level demographic information including percentages of students eligible for free and
reduced price meals, percentage of special education students, enrollment, per pupil spending,
and percentage of ELL students. This study examined the quality of the plan as written
and did not attempt to evaluate the quality of implementation or monitoring, a component
many (Reeves, 2006; White, 2009; White & Smith, 2010) feel is equally important to the
Connecticut’s newly formed Alliance Districts (CSDE, 2012a). The schools located
within these districts are required to submit SIPs. The sampling frame included all
elementary or K-8 schools within Connecticut’s Alliance Districts. The actual sample,
though not fully representative of all Alliance Districts, was 108 schools that provided
The hypothesis for this study was that schools creating quality School
Improvement Plans (SIPs) consistent with goal theory principles would have higher
student achievement than schools with lower quality SIPs. The first research question
asked whether components of plans consistent with the goal theory principles of specific
moderators (goal commitment, feedback) identified by Locke and Latham (2002, 2006)
would be associated with increases in student achievement. The second research question
subgroups. The hypothesis and research questions were tested using multiple regression.
Summary of Findings
Main analysis. Utilizing multiple regression analysis, this study found there was
a small, yet marginally statistically significant positive relationship between the quality of
a SIP and student achievement, controlling for previous achievement and demographic
factors. The correlation between a plan’s score and 2012-13 achievement was r =.237, p
< .05. This was consistent with the research hypothesis. Findings suggest one could
expect that for every point increase in SIP total score, achievement would increase by
.085 points. From an educational leader’s standpoint, this is an area which schools can
control and improve upon, while working to close current gaps in achievement. The study
findings are consistent with, but because of the correlational design cannot definitively
Goal theory analysis. The first research question examined whether components
of a plan consistent with goal theory (e.g. challenging and specific goals; directing
attention) would be associated with increased student achievement. Although there was
not a significant regression coefficient for any of the goal theory principles, three of the
variables (directing attention r = .270, p < .01, strategies r =.226, p < .05, and feedback r
= .256, p < .01) had significant positive correlations with 2012-13 student achievement.
variability and the concern for the widening achievement gap between subgroups.
between SIP total scores and achievement for the following three subgroups of students:
Hispanic or Latino (n = 96), Free/Reduced Lunch Eligible (n = 108), and High Needs (n
regression coefficient for SIP total scores, but as indicated in Table 11, all three
subgroups’ achievement scores had significant correlations with the SIP total scores
(Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r = .288, p < .01).
The results from this study are consistent with Fernandez (2009) in that both
studies found evidence that when controlling for multiple demographic factors, the total
plan score predicted change in student achievement. Consistent with Reeves (2011),
Another similarity with previous research is that not every component of a SIP
can be equally associated with significant gains in student achievement. As was the case
with the PIM study (Reeves, 2011), certain components of planning were shown to be
performance dimensions from this study showed that the following five dimensions had
the largest statistical correlations to 2012-13 student achievement: #6, possible cause-
73
effect correlations, (r = .298, p <. 01); #16, multiple assessments documented, (r = .284,
p < .01); #2, assessment results, (r = .247, p < .01); #8, analysis of adult actions, (r =
.234, p < .05); and #13, relevant goals (r = .227 p < .05). That being said, the review of
the literature and findings from this study support past writings of Elmore (2000), Reeves
(2006), and White (2009), that leadership, teaching, and adult actions matter. If schools
are going to use their SIPs to guide their improvement process, characteristics such as
these should be included within plans which are implemented with fidelity, and
This study contributes to the existing literature with additional support for the idea
that school improvement planning does matter. It is the first to examine goal theory
concepts as predictors of achievement. Although the evidence that goal theory concepts
predict achievement was not strong (no significant regression coefficients), there were
significant correlations in three areas (directing attention r = .270, p < .01, strategies r
=.226, p < .05, and feedback r = .256, p < .01) which support the idea that goal theory
Additionally, this is the first study to look at whether SIP quality is related to
subgroup achievement. As with the goal theory analyses, there were no significant
regression coefficients for individual subgroups, but there were significant correlations
(Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r =
.288, p < .01). This suggests that SIPs as currently constructed in Connecticut Alliance
Districts have not been useful for reducing current gaps in achievement, and might need
Theoretical Implications
The results of this study support one theoretical implication. Based on this study’s
findings, goal theory provides a potentially useful framework for thinking about school
improvement planning. Defined by Locke and Latham (2002), a goal is the object or aim
of one’s action. Goals are internal representations of desired states and can be, as in
Vancouver, 1996). Setting goals and planning are now required in schools as a method of
Goal theory supports the belief that conscious goals impact one’s performance, thus
setting goals on a SIP should benefit students. Researchers have found increasing
important for school improvement planning. Schools should therefore consider goals as
takes place and adjusting one’s actions to that environment (Beach & Lindahl, 2004a).
Setting goals will help to focus school efforts to improve student outcomes.
Practical Implications
Based on the results of this study, three practical implications can be drawn. A
first implication is that schools should prioritize developing high-quality SIPs. The
quality of SIPs in this study varied considerably, with a mean SIP score of 44.64 (SD =
11.11). Of the 108 school improvement plans within this study, 46% of plans scored
between 30 and 39 points and 67% of the plans scored below 50 points. This indicates
that a majority of schools would benefit from an individualized assessment of their ability
achievement. It also highlights the fact, consistent with current research, that the process
of school improvement matters and is the only attribute which a school staff can control
Elements identified on the PIM scoring rubric that should be included on SIPs include leadership
strategies (one of the lowest rated PIM areas in this study), data analysis techniques,
decision-making practices, and an evaluation of a school's readiness to change (Beach &
Lindahl, 2004b; Hall & Hord, 2011; Reeves, 2004; Schmoker, 1999; White, 2009). As noted by
Beach and Lindahl (2004a), many educators responsible for planning may be unfamiliar
with the knowledge base for effective planning. If legislation requires schools to
continually improve achievement, one way to accomplish that would be to increase the
knowledge base for those who create such plans, so they can then increase the quality of
their plans.
One way to utilize the results from this study to increase plan quality would be for
a school to use the PIM School Improvement Audit to score its own plan to determine specific areas
for improvement. Such improvement is something that the current study’s results indicate
is needed.
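The kind of descriptive summary reported above (mean, standard deviation, and the share of plans below a cutoff) can be reproduced with a short script. The scores below are invented placeholders for illustration, not the study's actual 108 SIP scores.

```python
from statistics import mean, stdev

# Hypothetical PIM total scores for ten plans; the study's actual
# 108 Alliance District scores are not reproduced here.
sip_scores = [32, 35, 38, 41, 44, 47, 52, 58, 61, 36]

m = mean(sip_scores)                      # sample mean
sd = stdev(sip_scores)                    # sample standard deviation
pct_below_50 = 100 * sum(s < 50 for s in sip_scores) / len(sip_scores)
print(f"M = {m:.2f}, SD = {sd:.2f}, {pct_below_50:.0f}% of plans below 50")
```

A school or district could apply the same three summary statistics to its own audit scores to see how it compares with the distribution reported in this study.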
A second practical implication is that schools should incorporate goal theory concepts into their
school plans. A benefit of using goal theory as part of a school improvement plan is that it
prioritizes what matters most. Even though the evidence that goal theory concepts predict
achievement was not strong, the theory provides a good conceptual framework for
thinking about how to motivate people to achieve desired outcomes, and the significant
positive correlations between several goal theory variables and achievement lend it some support.
The mechanism or action leading towards goal accomplishment with the strongest
correlation to student achievement was directing attention (r = .270, p < .01). Schools can
improve upon their ability to direct attention by focusing on assessment result distinctions
by subgroup rather than general standardized assessment results. Lastly, to direct attention to priority areas for
utilized in plans. The strategies selected should be directly related to the individual
school's needs (as opposed to a general new initiative), with an understanding from all
staff of why, how, and in what context the strategies should be implemented (Center for
Performance Assessment, 2005). The mechanism with the next strongest correlation
to achievement was feedback (r = .256, p < .01). Feedback serves as a tool to continually
monitor progress towards stated goals and make frequent adjustments (Reeves, 2004;
Schmoker, 2004; White, 2009). Consistent with Hattie (2012), feedback is an essential
ingredient of learning and this would seem to be true for both student and adult learners.
Without frequent and accurate feedback it is “difficult or impossible for them to adjust
the level or direction of their effort or to adjust their performance strategies to match what the
goal requires” (Locke & Latham, 2002, p. 708). Schools can improve the quality of their
feedback by articulating the degree to which leaders monitor a school’s performance, set
is typically done during the data team process; however, the frequency of team meetings
Feedback is an important component of the learning process, and data from this
study suggest that schools can improve how they incorporate it into their school improvement
process. When looking at individual performance dimensions from SIPs for this study
and their mean score (out of a possible score of 3), some of the lowest rated performance
dimensions were connected to feedback. For example, #26, summary data provided and
compared, M = 1.21 (SD = .41); #27, anticipated knowledge and skills, M = 1.13 (SD =
.34); and #30, results disseminated and transparent, M = 1.03 (SD = .17) all address
feedback. As stated on the PIM rubric, a score of 1 indicates a low level of planning,
implementation, and monitoring, thus needing improvement. The plan scores therefore
suggest that feedback is an area in which most plans need improvement.
A third practical implication would be for schools to focus SIP efforts to directly
address subgroup performance. Even high-quality SIPs within this study did not fully
address this area, even though examining subgroup performance is a necessary step to
(82%) come from the lowest three DRGs in the state and often have multiple subgroups
for which they are held accountable. As stated in the literature review, states have
recognized the importance of school accountability and are using student achievement
data to distinguish between effective and ineffective schools (Phelps & Addonizio, 2006). One determining factor in
In this study, SIP total score did not have significant regression coefficients for
any of the subgroups. A likely explanation is that not all of the plans were written to
address academic disparities between subgroups and, surprisingly, many of the plans did
not include any subgroup goals or strategies. If the focus of the plan was not on reducing
current achievement gaps, it would make sense that a statistical relationship would not be found.
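The statistical pattern underlying this finding, where a predictor correlates with an outcome yet receives a near-zero regression coefficient once overlapping predictors enter the model, can be illustrated with a small ordinary-least-squares sketch. The data and variable names below are invented for illustration and have no connection to the study's dataset.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y.

    X is a list of rows (each with a leading 1.0 for the intercept);
    the system is solved with Gaussian elimination and partial pivoting.
    """
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    v = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda i: abs(A[i][c]))  # pivot row
        A[c], A[p] = A[p], A[c]
        v[c], v[p] = v[p], v[c]
        for i in range(c + 1, k):
            f = A[i][c] / A[c][c]
            for j in range(c, k):
                A[i][j] -= f * A[c][j]
            v[i] -= f * v[c]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Invented toy data: x1 is nearly a copy of x2, and y depends only on x2.
x2 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
x1 = [v + d for v, d in zip(x2, [0.1, -0.1] * 4)]
y = [2.0 * v for v in x2]

beta0, b1, b2 = ols([[1.0, a, b] for a, b in zip(x1, x2)], y)
# x1 correlates almost perfectly with y, yet its regression
# coefficient is ~0 because x2 carries the same information.
```

The same logic applies to the SIP predictors: a variable such as total SIP score can correlate with achievement while contributing little unique variance once demographic predictors are in the model.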
If schools are going to use their SIP to reduce current gaps in achievement,
including specific components of the PIM School Improvement Audit could be beneficial
in achieving these goals. Of the 30 performance dimensions on the rubric, six are written
specifically to address subgroup performance: #2, assessment results, which describes subscale
distinctions by subgroups; #10, specific goals, which targets specific groups of students;
#12, achievable goals, which are designed to close subgroups’ learning gaps in three to
five years; #23, strategies linked to specific student needs, which could be written to
address subgroup needs; #24, professional development driven by student needs,
which is designed to meet specific subgroup needs; and #28, required evidence for
evaluation, which focuses on students whose performance puts them at risk of opening
achievement gaps. Prioritizing these dimensions would
increase the degree to which subgroup performance was planned for and monitored.
This study also examined individual items to see if they correlated with subgroup performance. Achievement for all
three subgroups had small, yet significant correlations with performance dimension #2,
assessment results (Hispanic or Latino, r = .376, p < .01; Free/Reduced Lunch Eligible, r = .197, p < .05; and
High Needs, r = .215, p < .05). This performance dimension was also statistically
correlated with overall student achievement across all 108 plans (r = .247, p < .01).
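Pearson correlations of this kind are computed from paired school-level scores. The sketch below uses a minimal pure-Python implementation with invented example data, not the study's dataset.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: per-school rubric scores on performance dimension #2
# paired with a school achievement index (placeholder values only).
dim2_scores = [1, 1, 2, 2, 3, 1, 2, 3]
achievement = [55, 60, 64, 70, 78, 58, 69, 80]
r = pearson_r(dim2_scores, achievement)
```

Significance testing of r (the p-values reported in the text) would additionally require the t distribution with n - 2 degrees of freedom, which is omitted here for brevity.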
and this study provides data showing that SIPs do little to address this. Specific
performance dimensions on the PIM scoring rubric have been written to address
achievement gaps and, therefore, should be included within plans. When looking at the
highest and lowest rated performance dimensions across all 108 plans that specifically
addressed subgroup performance, the highest rated dimension, #12, achievable goals, M
= 1.88 (SD = .84), was still below a score of 2, indicating a less-than-proficient
level. The lowest rated performance dimension associated with subgroup performance
was #28, required evidence for evaluation, M = 1.26 (SD = .46), which specifically
identifies students at risk of opening achievement gaps (as opposed to closing them) and
should be evident across all plans. Taken collectively, these are important findings for
school leaders and schools should prioritize the following six performance dimensions
for specific subgroups when crafting SIPs: #2, assessment results, #10, specific goals,
#12, achievable goals, #23, strategies linked to specific student needs, #24, professional
development driven by student needs, and #28, required evidence for evaluation.
As with any study, there are limitations. One limitation concerns external validity.
This study was not conducted using a random sample representative of the population.
As noted earlier, one particular segment of the population was not
well represented in this study. Four of the largest districts in the state (Bridgeport,
Hartford, New Haven, and Waterbury), considered to be the poorest in the state (as
referenced by their DRG status), submitted very few plans. These factors limit the
generalizability of the findings, which may apply well to low-income districts in
Connecticut with the exception of the very largest and poorest. While the results from
this study are consistent with the idea that highly scored plans lead to higher
achievement, other possible explanations cannot be ruled out. To increase
generalizability of results, a similar study across different subsets of the population, such
as districts in other states, would be beneficial.
Another limitation is that this study used data spanning only two years. A suggestion
for additional research would be conducting a longitudinal study using multiple years of
data. The planning literature places a heavy emphasis on understanding the environment
in which planning takes place and adjusting one's actions to that environment (Beach &
Lindahl, 2004a), and a longitudinal design could capture those adjustments over time.
An additional limitation to this study could be the overall poor quality of plans
across the 108 Alliance District schools (M = 44.64, SD = 11.11). Because a majority of
the plans (67%) scored lower than 50, this could indicate a low commitment to the
process of planning, which could possibly suppress the relationship between plan quality
and achievement, as well as the quality of implementation. While the quality of the plans
in this study was empirically linked to student achievement, that impact would be limited
if districts and schools were not committed to the planning process.
Last, a limitation is that this study focused on the process of school improvement
planning without taking into consideration two other important factors: the
implementation and monitoring of such plans. These two critical areas are what Hopkins
(2001) referred to as strengthening a school's ability for managing change. The data
obtained from this research show a statistically significant correlation between the
process of planning and student achievement. What cannot be inferred is the degree to
which all three aspects of school improvement collectively impact student achievement.
Several performance dimensions were written to address a school's ability to monitor the steps written in its
plan, and it is recommended that schools prioritize their implementation and monitoring
of plans by focusing on the following: #4, acts of leadership, which articulates the
degree to which leaders monitor performance and set direction; #17, demonstrated improvement cycles,
which looks for explicit evidence of improvement cycles for every improvement
initiative; #18, frequent monitoring of student achievement, which quantifies the degree to
which teams meet to monitor performance; #26, summary data provided and compared,
which measures the degree to which planned initiatives are evaluated; #28, required
evidence for evaluation, which articulates specificity of data needed to monitor progress;
and #29, next steps outlined in evaluation, which specifies the degree to which changes
in practice will continue to move the improvement process forward. By prioritizing these
PIM performance dimensions, a school would be better able to evaluate the degree to
which all three aspects of school improvement collectively impact student achievement.
Summary
This study helps to fill a gap in the existing research on the impact of school
improvement planning, the benefit of utilizing goal theory concepts for planning, and
planning for subgroup achievement. A
relationship was found between the quality of a school improvement plan and student
achievement, but we know there are other factors that impact student achievement.
Unfortunately, this study, like others conducted before, has yet to find a simple fix to
guarantee success for all students. In an era where accountability and legislation demand
increased performance, there are still unanswered questions as to how best to implement
and improve a school's ability to plan, implement, and monitor school improvement efforts.
As the literature suggests (Beach & Lindahl, 2004a; Hall & Hord, 2011; Reeves, 2004;
What is still unknown is how and why “some students and teachers defy the odds
and perform at an exceptionally high level despite the prevalence of poverty, special
education, second languages, or other factors that in statistical terms are associated with
low student achievement” (Reeves, 2006, p. 14). This study did find that the achievement
scores of all three subgroups were significantly correlated with the SIP total scores.
Further study on subgroup achievement would add to the existing literature while helping
schools address persistent achievement gaps.
Finally, this study presented correlational data providing support that goal theory
can be a potentially useful framework for thinking about school improvement planning.
This information can benefit educators by motivating them to meet the academic needs
of all their students, while improving the overall quality of their SIPs.
References
Anfara, Jr., V. A., Patterson, F., Buehler, A., & Gearity, B. (2006). School improvement
planning in East Tennessee middle schools: A content analysis and perceptions
study. NASSP Bulletin, 90, 277-300. doi: 10.1177/0192636506294848
Armstrong, J. S. (1982). The value of formal planning for strategic decisions: Review of
empirical research. Strategic Management Journal, 3, 197-211. Retrieved from
http://smj.strategicmanagement.net/
Atkinson, J.W. (1958). Motives in fantasy, action, and society: A method of assessment
and study. New York, NY: Van Nostrand.
Beach, R. H., & Lindahl, R. (2004a). A critical review of strategic planning: Panacea for
public education? Journal of School Leadership, 14, 211-234. Retrieved from
https://rowman.com/page/JSL
Beach, R. H., & Lindahl, R. (2004b). Identifying the knowledge base for school
improvement. Planning and Changing, 35, 2-32. Retrieved from
http://planningandchanging.illinoisstate.edu/
Beach, R. H., & Lindahl, R. A. (2007). The role of planning in the school improvement
process. Educational Planning, 16(2), 19-43. Retrieved from
www.isep.info/publications.html
Bell, L. (2002). Strategic planning and school management: Full of sound and fury,
signifying nothing? Journal of Educational Administration, 40, 407-424. doi:
10.1108/09578230210440276
Bloom, C. (1986). Strategic planning in the public sector. Journal of Planning Literature,
1, 253-259. doi: 10.1177/088541228600100205
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Lawrence Erlbaum Associates.
Cook, Jr., W. J. (1990). Bill Cook’s strategic planning for America’s schools (rev. ed.).
Arlington, VA: The American Association of School Administrators.
Cook, Jr., W. J. (2004). When the smoke clears. Phi Delta Kappan, 86(1), 73-75, 83.
Retrieved from http://pdkintl.org/publications/kappan/
Elementary and Secondary Education Act (1965). Pub. L. 89-10, as added Pub. L. 103-
382, title I, Sec. 101, Oct. 20, 1994, 108 Stat. 3519, 20 U.S.C. § 6301.
Elmore, R. (2000). Building a new structure for school leadership. Washington, DC:
Albert Shanker Institute.
Falshaw, J. R., Glaister, K. W., & Tatoglu, E. (2006). Evidence on formal strategic
planning and company performance. Management Decision, 44, 9-30. doi:
10.1108/00251740610641436
Hall, G. E., & Hord, S. M. (2011). Implementing change: Patterns, principles and
potholes (3rd ed.). Boston, MA: Pearson.
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New
York, NY: Routledge.
Hayes, D., Christie, P., Mills, M., & Lingard, B. (2004). Productive leaders and
productive leadership: Schools as learning organisations. Journal of Educational
Administration, 42, 520-538. doi: 10.1108/09578230410554043
Hendrawan, I., & Wibowo, A. (2012). The Connecticut Mastery Test: Technical report.
Retrieved from Connecticut State Department of Education website:
http://www.sde.ct.gov/sde/lib/sde/pdf/student_assessment/research_and_technical/public_2011_cmt_tech_report.pdf
Heroux, R. L. (1981). How effective is your planning? Managerial Planning, 30(2), 3-16.
Hopkins, D. (2001). School improvement for real. New York, NY: Routledge Falmer.
Kelley, C., & Protsik, J. (1997). Risk and reward: Perspectives on the implementation of
Kentucky’s school-based performance award program. Educational Administration
Quarterly, 33, 474-505. doi: 10.1177/0013161X97033004004
Klein, H. J., Wesson, M., Hollenbeck, J., & Alge, Jr., B. (1999). Goal commitment and
the goal-setting process: Conceptual clarification and empirical synthesis. Journal
of Applied Psychology, 84, 885-896. doi: 10.1037/0021-9010.84.6.885
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting
and task motivation: A 35-year odyssey. American Psychologist, 57, 705-717.
doi: 10.1037//0003-066X.57.9.705
Locke, E. A., & Latham, G. P. (2006). New directions in goal-setting theory. Current
Directions in Psychological Science, 15, 265-268. doi: 10.1111/j.1467-
8721.2006.00449.x
Malik, Z. A., & Karger, D. W. (1975). Does long-range planning improve company
performance? Management Review, 64, 27-31. Retrieved from
http://sloanreview.mit.edu/
Menard, S. (2002). Applied logistic regression analysis (2nd ed.). Thousand Oaks, CA:
Sage.
Mento, A. J., Steel, R. P., & Karren, R. J. (1987). A meta-analytic study of the effects of
goal setting on task performance: 1966-1984. Organizational Behavior and
Human Decision Processes, 39, 52-83. Retrieved from http://www.journals.
elsevier.com/organizational-behavior-and-human-decision-processes/
Mertens, D. M. (2010). Research and evaluation in education and psychology (3rd ed.).
Thousand Oaks, CA: Sage.
Mintrop, H., MacLellan, A. M., & Quintero, M. F. (2001). School improvement plans in
schools on probation: A comparative content analysis across three accountability
systems. Educational Administration Quarterly, 37, 197-218. doi:10.1177
/00131610121969299
Mintzberg, H. (1994). The fall and rise of strategic planning. Harvard Business Review,
107-114. Retrieved from http://hbr.org
Mourshed, M., Chijioke, C., & Barber, M. (2010). How the world’s most improved
school systems keep getting better. Retrieved from McKinsey on Society website:
http://www.mckinsey.com /client_service/social_sector/latest_thinking/worlds
_most_improved_schools
Phelps, J. L., & Addonizio, M. F. (2006). How much do schools and districts matter? A
production function approach to school accountability. Educational
Considerations, 33(2), 51-62. Retrieved from http://www.coe.ksu.edu
/EdConsiderations/
Phillips, P. A., & Moutinho, L. (2000). The strategic planning index: A tool for
measuring strategic planning effectiveness. Journal of Travel Research, 38, 369-
379. doi: 10.1177/004728750003800405
Pyrczak, F. (2010). Making sense of statistics: A conceptual overview (5th ed.). Glendale,
CA: Pyrczak Publishing.
Reeves, D. B. (2004). Accountability for learning: How teachers and school leaders can
take charge. Alexandria, VA: ASCD.
Reeves, D. B. (2006). The learning leader: How to focus school improvement for better
results. Alexandria, VA: ASCD.
Reeves, D. B. (2011). Finding your leadership focus. New York, NY: Teachers College
Press.
Schmoker, M. (1999). Results: The key to continuous improvement (2nd ed.). Alexandria,
VA: Association for Supervision and Curriculum Development.
Tubbs, M. E., & Ekeberg, S. E. (1991). The role of intentions in work motivation:
Implications for goal-setting theory and research. Academy of Management Review, 16,
180-199. doi: 10.2307/258611
U.S. Department of Education. (2010). ESEA Blueprint for reform. Retrieved from
http://www2.ed.gov/policy/elsec/leg/blueprint/blueprint.pdf
White, S., & Smith, R .L. (2010). School improvement for the next generation.
Bloomington, IN: Solution Tree Press.
Appendix A
PIM School Improvement Audit
Scoring Guide for Section A: Comprehensive Needs
Scoring levels for each performance dimension:
Exemplary (3 points): Meets all criteria for the Proficient level and provides specific evidence to meet the criteria below.
Proficient (2 points): Provides specific evidence to meet the criteria below.
Needs Improvement (1 point): Provides evidence that meets the criteria below.
1. Strengths
Exemplary: Strengths are described specifically for student achievement, teaching practices, and leadership actions.
Proficient: Strengths are specified beyond the student achievement area; they include specific strengths of staff and school.
Needs Improvement: Strengths are limited to student achievement, and mentions of staff strengths are nonspecific or vague.

2. Assessment results
Exemplary: Student achievement is described in terms of state or district assessments, school-based assessments that describe subscale distinctions by subgroups, and classroom or contextual data that describe patterns and trends down to the skill level.
Proficient: Student achievement data include some evidence of school-level achievement data, narrative, and school/classroom data to support district or state assessment data.
Needs Improvement: Student achievement data are primarily described in terms of standardized test scores or state-level assessments of student achievement, attendance, and demographics.

3. Teacher practices
Exemplary: Teacher practices are supported by research, describe whether specific professional development or repeated practice is needed, and describe how monitoring of those practices will be used to improve instruction.
Proficient: Teacher practices are supported by research, and professional development needs are identified.
Needs Improvement: Teacher practices are generic statements that may identify strategies supported by research but don't link to a specific need for professional development.

4. Acts of leadership
Exemplary: Leadership actions describe the degree to which leaders monitor performance, set direction, provide feedback, or communicate values.
Proficient: Leadership actions describe the degree to which leaders specifically monitor performance or set direction.
Needs Improvement: Leadership actions are not specifically distinguished from actions of other staff, or plans lack clear description of leadership actions.

5. Engaged stakeholders
Exemplary: Evidence of frequent communication with parents regarding standards (beyond traditional grading
Proficient: Evidence of one or more instances of engaging parents in improving student achievement (e.g., online student monitoring,
Needs Improvement: Evidence of involvement with parents tends to be in areas other than teaching and learning (e.g., percentage of participation in conferences,
6. Possible cause-effect correlations
Exemplary: Inquiry routinely examines cause and effect correlations from needs assessment data before selecting ANY strategies or program solutions. Positive correlations at desired levels represent a quantifiable vision of the future.
Proficient: Inquiry has identified some correlations from needs assessment data to select specific strategies or program solutions planned. Positive correlations at desired levels represent a quantifiable vision of the future.
Needs Improvement: Effects (results targeted) may or may not align to urgent needs assessed or represent a quantifiable vision of the future. Plan tends to address broad content as improvement needs, without identified correlations between needs and strategies.

7. Strategies driven by specific needs
Exemplary: ALL selected classroom-level research-based programs or instructional strategies are identified for a stated purpose, and ALL standards-based research strategies are designed to address specific needs in student achievement.
Proficient: Most selected classroom-level research-based programs or instructional strategies are identified for a stated purpose. Most schoolwide programs or strategies (e.g., NCLB research-based programs, collaborative scoring, dual-block algebra, tailored summer school) specify the
Needs Improvement: Few (≤50%) classroom-level research-based instructional strategies or programmatic and structural antecedents are identified based on data that support the need for a specific program or strategy.
Scoring Guide for Section C: S.M.A.R.T. (Specific, Measurable, Accomplishable, Relevant, Timely)
Goals
Scoring levels for each performance dimension:
Exemplary (3 points): Meets all criteria for the Proficient level and provides specific evidence to meet the criteria below.
Proficient (2 points): Provides specific evidence to meet the criteria below.
Needs Improvement (1 point): Provides evidence that meets the criteria below.
10. Specific goals
Exemplary: ALL goals and supporting targets specify targeted student groups; grade level; standard or content area and subskills delineated within that content area; and assessments specified to address subgroup needs.
Proficient: More than one goal and supporting target specifies targeted student groups; grade level; standard or content area and subskills delineated within that content area; and assessments specified to address subgroup needs.
Needs Improvement: Most goals and supporting targets describe in general rather than specific terms targeted student groups; grade level; and standard or content area and subskills delineated within that content area.

11. Measurable
Exemplary: ALL goals and targets describe quantifiable
Proficient: ALL goals and targets describe quantifiable measures
Needs Improvement: Few goals or targets describe quantifiable
15. Purposeful, focused action steps
Exemplary: Plan describes WHY some and HOW action steps will be implemented, when, in what settings, and by whom.
Proficient: Plan describes WHY each focus area or major action step is being implemented.
Needs Improvement: Plan describes when action steps will be implemented and by whom, but not why or how.

16. Multiple assessments documented
Exemplary: There are multiple forms of student assessment data, including formative, as well as multiple measures of teacher practices and leader actions.
Proficient: There are multiple forms of student assessment data and some data for teacher practices.
Needs Improvement: Assessments are more often used to comply with directives than to serve as indicators of change or improved student achievement.

17. Demonstrated improvement cycles
Exemplary: There is explicit evidence of improvement cycles for every school improvement initiative.
Proficient: There is explicit evidence of improvement cycles for some school improvement initiatives.
Needs Improvement: Evidence of improvement cycles for school wide initiatives is unclear.

18. Frequent monitoring of student achievement
Exemplary: Monitoring schedule (≥ monthly) that reviews both student performance and adult teaching practices.
Proficient: Monitoring schedule (≥ monthly) to review student performance.
Needs Improvement: Monitoring for student performance or teaching practices is infrequent.

19. Ability to rapidly implement and sustain reform
Exemplary: Capacity for rapid rollout in team responses to data, professional development, and coaching; time allotted for adjustments and opportunities in response to student needs.
Proficient: Some midcourse corrections are delineated and anticipated in design of improvement plan.
Needs Improvement: No description of midcourse corrections observed in improvement plan.

20. Results indicators aligned to goals
Exemplary: All results indicators serve as interim progress probes for each S.M.A.R.T. goal.
Proficient: Some results indicators serve as interim progress probes for S.M.A.R.T. goals.
Needs Improvement: Results indicators are vague, hard to describe, or difficult to measure.

21. Adult learning and change process considered
Exemplary: Consideration of adult learning issues and the change process is evident in time, programs, and resources.
Proficient: Some attention to adult learning issues and change process is evident in plan (e.g., limited initiatives, integrated planning, and related support structures).
Needs Improvement: Evidence provided of adult learning or change process considered in planning. Plan tends to be fragmented with multiple initiatives, little attention to time requirements for implementation.

22. Documented coaching and mentoring
Exemplary: Coaching/mentoring system creates a coaching or mentoring cadre by building capacity and application.
Proficient: Coaching or mentoring is planned and systemic.
Needs Improvement: Coaching or mentoring is incidental, viewed as sole responsibility of coach instead of school wide effort.

23. Strategies linked to specific student needs
Exemplary: Research-based instructional strategies, programs, and structures selected to impact specified student needs at school. ALL design activities and innovations are strongly correlated to student
Proficient: Most research-based instructional strategies, programs, and structures are linked to specified student needs at school (school, subgroup, or individual).
Needs Improvement: Selected strategies, programs, and structures are not clearly linked to student needs as identified in the data. School may lack support in research
26. Summary data provided and compared
Exemplary: Evaluation compares planned initiatives with actual results from the prior year, examines achievement results based on safety-net power standards by grade, and compares those results to district performance. Student performance is augmented by a specific review of curriculum impact, time/opportunity for students, or the effect of teaching practices on achievement.
Proficient: Evaluation summarizes data and evidence that examine student performance in multiple content areas; it describes students in need of intervention whose performance puts them at risk of opening learning gaps.
Needs Improvement: Evaluation tends to limit data summaries to student achievement analyses. Plans tend to examine student performance without specifying students in need of intervention whose performance puts them at risk of opening learning gaps.

27. Anticipated knowledge and skills
Exemplary: Evaluation plan describes explicit new knowledge, specific skills, and attitudes that will result from professional development associated with each goal for students, staff, AND stakeholders.
Proficient: Evaluation plan describes new knowledge and specific skills or attitudes that will result from professional development associated with most goals for students and staff.
Needs Improvement: Evaluation plan tends to describe new knowledge, skills, and attitudes in general terms and perceptions rather than specific knowledge or skills.

28. Required evidence for evaluation
Exemplary: Evaluation specifies data and evidence needed to evaluate progress to meet all stated goals, including formative, school-based Tier 2 data explicitly aligned to address those students whose performance puts them at risk of opening rather than closing learning gaps.
Proficient: Evaluation specifies data and evidence needed to evaluate progress to meet all stated goals, including formative, school-based Tier 2 data and their frequency.
Needs Improvement: Evaluation tends to use identical generalities for each goal rather than to specify data and evidence needed to evaluate progress toward goals.

29. Next steps outlined in evaluation
Exemplary: Documented next steps outline how changes in teaching and learning will occur, how the leadership team analyzes data, and how evidence was collected and submitted to colleagues and peers for review. The evaluation plan recommends changes from a list of alternatives and delineates a process to secure resources, implement changes, and evaluate them.
Proficient: Next steps to improve teaching and learning are delineated and supported by a clearly defined improvement cycle in the plan.
Needs Improvement: Next steps rarely address changes in how teaching and learning will occur. Next steps, if specified, tend to describe future outcome targets (goals) rather than next steps in terms of adult actions.

30. Results disseminated and transparent
Exemplary: Evaluation plan is transparent in describing how results (positive or negative), conclusions, lessons learned, and next steps will be communicated and disseminated to all primary stakeholders (families, educators, staff, patrons, partners, and the public).
Proficient: Evaluation plan describes how the compared results (positive or negative) are communicated to improve goal setting and ensure that lessons are learned.
Needs Improvement: Evaluation plan may describe process for communicating results, but seldom specifies next steps or how results will be explained to stakeholders.
Copyright © 2005. Center for Performance Assessment. All rights reserved.
Appendix B
Judgments of Connection Between PIM Rubric and Goal Theory Concepts
Core Findings:
Challenging: difficulty level of goals.
Specific: specific goals clearly define an external referent so it is clear exactly what needs to be accomplished.

Mechanisms (the actions leading towards goal accomplishment):
Directing attention and effort toward goal-relevant activities and away from goal-irrelevant activities; this effect occurs both cognitively and behaviorally.
Having an energizing function: higher goals lead to higher efforts than do lower level goals.
Goals increase persistence, prolonging effort.
Goals lead to arousal, discovery, and/or use of task-relevant knowledge and strategies (e.g., using existing knowledge and skills or developing new strategies).

Moderators (characteristics that change the strength of the goal and performance relationship):
Goal commitment: determination to achieve a goal; increased when goals are made public, relates to professional integrity; important for goals to be attainable.
Feedback: reveals progress in relation to goals.

Performance Dimension | Core Findings | Mechanisms | Moderators
Comprehensive Needs Assessment
1. Strengths – Judge 1
2. Assessment Results – Agreement
3. Teacher Practices – Agreement
4. Acts of Leadership – Agreement
5. Engaged stakeholders – Judge 2, Judge 1

Inquiry Process
6. Possible cause-effect correlations – Agreement
7. Strategies driven by specific needs – Agreement
8. Analysis of adult actions – Judge 1, Judge 1, Judge 2
9. Achievement results – Judge 1, Judge 2
Appendix C
Letter to District Requesting Permission to Obtain School Improvement Plan
I request your permission to contact the person within your district who can provide me a
copy of your elementary school improvement plans for each K-5 or K-6 school within
your district.
Over the next year, I plan to examine characteristics of school improvement plans to determine whether any components of a plan have a statistically significant effect on the newly created School Performance Index (SPI). All data will remain anonymous, and the data collection and analysis are for doctoral research only.
I appreciate your support of my doctoral studies. If you have any questions, or if you can direct me to the person who can make individual elementary school improvement plans available, I would greatly appreciate it. All findings will be shared with districts that request such information. I can be reached via email: davidhuber@ci.bristol.ct.us.
Sincerely,
Appendix D
Revised Rubric: Elaboration of 8 Performance Dimensions
6. Possible cause-effect correlations
Column 1: Inquiry routinely examines cause-and-effect correlations from needs assessment data before selecting ANY strategies or program solutions. Positive correlations at desired levels represent a quantifiable vision of the future. It is clear the team identified the problem …
Column 2: Inquiry has identified some correlations from needs assessment data to select specific strategies or program solutions planned. Positive correlations at desired levels represent a quantifiable vision of the future.
Column 3: Effects (results targeted) may or may not align to urgent needs assessed or represent a quantifiable vision of the future. Plan tends to address broad content as improvement needs, without identified correlations between needs and strategies. There is no needs …
15. Purposeful, focused action steps
Column 1: Plan describes WHY some action steps are implemented and HOW action steps will be implemented, when, in what settings, and by whom.
Column 2: Plan describes WHY each focus area or major action step is being implemented.
Column 3: Plan describes WHEN action steps will be implemented and by whom, but not why or how.
21. Adult learning and change process considered
Column 1: Consideration of adult learning issues and the change process is evident in time, programs, and resources. You would see multiple PD structures on the SAME topic so the teachers can grasp it, and COACHING would be observed.
Column 2: Some attention to adult learning issues and change process is evident in plan (e.g., limited initiatives, integrated planning, and related support for multiple PD structures). You would see multiple PD on the SAME topic so the …
Column 3: Evidence provided of adult learning or change process considered in planning. Plan tends to be fragmented with multiple initiatives, little attention to time requirements for implementation. There is no way the plan can be successfully implemented – there is just too much.
Appendix E
Human Studies Council Letter of Exemption