
The Effect of School Improvement Planning on Student Achievement:

A Goal Theory Perspective

David Huber

A Dissertation
Submitted in Partial Fulfillment of the
Requirements for the Degree of
Doctor of Education
Department of Educational Leadership

Central Connecticut State University


New Britain, Connecticut

February 7, 2014

Dissertation Advisor

Dr. James M. Conway


Professor of Psychological Science

Dissertation Committee

Dr. Karen C. Beyard


Professor of Educational Leadership and Ed.D. Program Director

Dr. Anthony Rigazio-Digilio


Professor of Educational Leadership

Dr. Susan Kalt Moreau


Deputy Superintendent, Bristol Public Schools
ABSTRACT

This quantitative study was designed to evaluate the effect of school improvement

planning as it relates to goal theory (Locke & Latham, 2002, 2006). This study tested the

hypothesis that schools creating quality school improvement plans (SIPs) consistent with

goal theory principles will have higher levels of student achievement. Achievement was

measured by a school’s School Performance Index score from the 2012-13 administration

of the Connecticut Mastery Test. The first research question asked whether components

of a plan consistent with goal theory (e.g. specific goals) are associated with increased

student achievement across elementary schools within Connecticut’s newly formed

Alliance Districts. The second research question evaluated whether SIPs predicted

achievement of particular subgroups. Data collected for this study included individual

school improvement plans (N = 108), each school’s achievement score, and school-level

demographic data.

The regression analysis for the main hypothesis showed a relationship between

2012-13 student achievement and a combination of the predictor variables with the

proportion of shared variance being R² = .861. The SIP score had a marginally

statistically significant regression coefficient, p = .052, with a positive standardized

coefficient of .085, consistent with the research hypothesis.

None of the individual goal theory components had significant regression

coefficients even though three of the goal theory variables (directing attention, r = .270, p

< .01, strategies, r = .226, p < .05, and feedback, r = .256, p < .01) had significant

positive correlations with student achievement. The main analysis was then replicated

using subgroup achievement as the outcome variable for the following three groups:

Hispanic or Latino (n = 96), Free/Reduced Lunch Eligible (n = 108), and High Needs (n

= 108). None of the subgroups had significant regression coefficients even though all

three subgroups had small yet significant correlations with SIP total (Hispanic, r = .299,

p < .01; Free and Reduced, r = .229, p < .05; and High Needs, r = .288, p < .01).

Recommendations from this study include that schools develop high-quality SIPs

consistent with goal theory principles and focus SIP efforts to directly address

subgroup performance as a means of closing current gaps in achievement.

Key Words: school improvement plan, student achievement, planning effectiveness



Table of Contents

List of Tables v

List of Figures vi

Acknowledgements vii

CHAPTER 1: INTRODUCTION

Introduction and Purpose of the Study 1


Background of the Study 2
State and National Context 2
Connecticut Alliance Districts and SIPs 4
Statement of the Problem 5
Theoretical Framework 6
Hypothesis and Research Questions 8
Definition of Terms 8
Limitations 11
Organization of the Dissertation 12

CHAPTER 2: REVIEW OF THE LITERATURE

Review of the Literature 13


Search Strategy 13
Goal Theory 14
Core Findings 15
Mechanisms 16
Moderators 17
Ethical Considerations 19
Planning 19
Effectiveness of Planning 22
Planning in a School Setting 26
Evidence of Effectiveness of SIPs 29
The Present Study 32

CHAPTER 3: METHODOLOGY

Purpose 34
Research Design 35
Population and Sample 35
Data Sources and Instrumentation 38
SIP quality based on PIM Scoring Rubric 38
Student Achievement - SPI School Score, 2011-12 and 2012-13 43

Connecticut Mastery Test 44


Student Demographic Data 46
Procedure 48
Obtaining SIPs 48
Scoring Plans 49
Data Analysis 51
Ethical Considerations 52
Research Rigor 52
Limitations 53
Role of Researcher 54

CHAPTER 4: RESULTS

Descriptive Statistics 56
Test of Hypothesis – SIP Quality and Student Achievement 60
Research Question – Goal Theory 62
Research Question – Subgroup Analysis 66
Summary 69

CHAPTER 5: DISCUSSION

Summary of Findings 71
Main Analysis 71
Goal Theory Analysis 71
Subgroup Analysis 72
Consistency of Results with Previous Research 72
Theoretical Implications 74
Practical Implications 74
Limitations and Suggestions for Future Research 80
Summary 82

References 84

Appendix A: PIM School Improvement Audit 90

Appendix B: Judgments of Connection Between PIM Rubric and Goal Theory 97

Appendix C: Letter to District Requesting Permission to Obtain SIP 100

Appendix D: Revised Rubric Elaboration of 8 Performance Dimensions 101

Appendix E: Human Studies Council Letter of Exemption 104

List of Tables

Table 1: Description of Alliance Districts and Schools Included in Study 36

Table 2: Rubric Performance Dimensions and their Connection to Goal Theory 42

Table 3: Number of Connecticut Alliance Districts Included within Each DRG 48

Table 4: Inter-rater and Intra-rater Reliability for SIP Quality Ratings 50

Table 5: Distribution of 2012-13 Achievement (SPI) Scores 57

Table 6: Bivariate Correlations Among SIP Total Score, Student Achievement, and Control Variables 59

Table 7: Distribution of SIP Scores for 20 Highest and 20 Lowest Achieving Schools 60

Table 8: Multiple Regression Results for SIP Score Predicting Student Achievement 62

Table 9: Bivariate Correlations Among SIP Total Score, Student Achievement, and Goal Theory Concepts 64

Table 10: Multiple Regression Results for Goal Theory Concepts Predicting Student Achievement 65

Table 11: Bivariate Correlations Among SIP Total Score, Student Achievement, and Subgroups 67

Table 12: Multiple Regression Standardized Coefficients for Subgroup Exploratory Analysis 68

List of Figures

Figure 1: Conceptual Representation of this Study 5



ACKNOWLEDGEMENTS

I would like to thank the members of my dissertation committee, the professors at

Central Connecticut State University, and all the colleagues I have had the privilege to

learn with for supporting me in so many ways as I completed this dissertation. I am

greatly appreciative of the support of the Bristol Board of Education through this multi-

year process. I hope my new insight on school improvement will positively impact the

students of our city. Thank you to my friends and family who have been understanding

and encouraging while I completed this important step in my professional career. I would

also like to thank everyone: former students, parents, and colleagues who helped me

understand the importance of school improvement. The results of this study go far

beyond any statistical association and will hopefully, in some small way, impact many

students for years to come. I am greatly appreciative of Dr. Jim Conway, my advisor,

who laid the foundation for me to finish this work. Without his help and support, this

study could not have been completed.

Lastly, I would like to thank my parents for their support, understanding, and

encouragement in every decision I have made leading me to this dissertation completion.

I can never repay you for all you have done – Thank you.

CHAPTER 1

INTRODUCTION AND PURPOSE OF THE STUDY

The present study is designed to evaluate the effectiveness of school improvement

planning. Specifically, this study will evaluate school improvement plans (SIPs) to

identify what, if any, components of a well-written plan are associated with higher levels

of student achievement across elementary schools located within Connecticut’s newly

formed Alliance Districts. For this study, student achievement will be measured by

changes in the School Performance Index (SPI), a metric recently adopted by the state

to measure the achievement of an entire school using a single number (Connecticut

State Department of Education [CSDE], 2012b). This study attempts to evaluate the

intended implementation and monitoring of SIPs.

Districts designated as Alliance members are the 30 lowest-performing districts in

Connecticut and, depending on their school classification, are required to submit annual

SIPs. The literature is mixed in terms of a plan’s actual effectiveness in improving

student achievement (Fernandez, 2009). Nonetheless, this process of planning is now

required in all states as a form of school accountability and improvement under the

federal Elementary and Secondary Education Act (ESEA) of 1965 for schools

not performing at expected levels of achievement. Knowing the value of creating such

plans is essential for schools and public policy.

Since SIPs are now required and associated with the rating and categorization of

Alliance District schools (CSDE, 2012a), it is crucial to examine whether the plans do in

fact lead to improved outcomes. It is possible that, rather than being helpful, required

plans may become an unnecessary burden, something passed down by those above, and

only filed and stored until the next school year rather than carried out (Heroux, 1981).

The present study hopes to identify particular characteristics of SIPs for schools located

within the newly formed Connecticut Alliance Districts. If specific characteristics of

plans are identified as being beneficial to student achievement for a whole school or for

specific groups of students, these practices can be replicated across all Alliance schools

for focused and systemic improvement. Through this focused school improvement,

schools would then be able to meet legislative mandates while moving “Connecticut

closer to the goal of achieving better results for all students and ambitious levels of

growth for the state’s lowest performing students” (CSDE, 2012a, p. 27). By meeting the

state’s standards, schools would then be free from labels such as In Need of Improvement

and negative school classifications.

Background of the Study

State and national context. SIPs, as defined by the State of Connecticut, are

considered a roadmap or vehicle to drive educational change. They are written documents

intended to provide carefully thought out and detailed strategies, which are continually

monitored for optimal student achievement (CSDE Bureau of School and District

Improvement, 2007). School improvement planning originated because of ESEA, which

is a federal statute passed in 1965 representing the most far-reaching federal legislation

enacted by Congress directly impacting public education (U.S. Department of Education,

2010). ESEA has evolved over time to include the No Child Left Behind Act (NCLB) of

2001, which requires schools performing at certain achievement levels to submit formal SIPs.

According to section 10-223b of the Connecticut General State Statutes,

beginning in 2001, schools identified as in need of improvement were to remain labeled

as such and were required to operate under SIPs through June 30, 2004. These plans

became commonplace across the country as a result of NCLB, as a method of

documenting compliance with the federal requirements for improving schools. It was

because of NCLB that a system of consequential accountability was established, which

rewards and punishes schools for student achievement scores (Hanushek & Raymond,

2004). SIPs were seen as one tool to improve educational quality for all students while

working to reduce current gaps in education. Additionally, the development of SIPs was

intended “to ensure that all requirements of Section 1116 of the NCLB Act are met”

(CSDE Bureau of School and District Improvement, 2007, p. 63) and that all students

achieve at high levels.

Under No Child Left Behind, schools not making Adequate Yearly Progress

(AYP) on state assessments for two consecutive years would be labeled in need of

improvement and be required to submit a SIP to outline goals and strategies aimed at

improving student achievement. These two-year plans for improvement must “embody a

design that is comprehensive, specific and focused primarily on the school’s instructional

program” (CSDE Bureau of School and District Improvement, 2007, p. 8). In alignment

with NCLB, a plan’s design must include “strategies based on scientifically based

research that will strengthen the core academic subject, ongoing professional

development, technical assistance, strategies to promote parent/guardian involvement,

and measurable goals and intended outcomes that target the school’s greatest areas of

need” (p. 8).



Connecticut Alliance Districts and SIPs. Connecticut administrators are

required to plan continually for improving the quality of instruction. The focus of this

planning is to increase the achievement for all students while reducing current gaps in

achievement between all subgroups. Connecticut is among the nation’s most affluent

states and has achievement gaps between socioeconomic subgroups that are larger than

those of any other state in the nation (CSDE, 2012a). In 2012, CSDE crafted a proposal which included

the identification of the 30 most persistently low performing school districts as members

of the newly created Alliance District program. Identification as an Alliance District

ensured additional funding to increase student achievement while reducing gaps in

achievement. Membership also set forth requirements such as submission of SIPs for

schools achieving at certain levels.

Connecticut has determined that the SIP will serve as the roadmap to increased

student achievement for schools in Alliance Districts, while being the vehicle to drive and

monitor change during a school year (CSDE Bureau of School and District Improvement,

2007). With the passing of this legislation in May 2012, more than $90 million in new

funding was allocated to support several new initiatives including increasing early

childhood opportunities, providing intensive interventions to the lowest-performing

schools and districts, expanding the availability of high-quality school models, improving

upon teacher and administrative talent development, and increasing accountability

(CSDE, 2012a). The money is awarded conditionally and may be used only for

actions specified in the district improvement plan, which is the overarching framework

for each school’s plan.



The context of this study, as depicted in Figure 1, focuses on SIPs. It is relevant

to Connecticut in that SIPs are required by legislation for Alliance District schools and it

is essential that SIPs are effectively written. When effective SIPs are carried out, the

result should be increased student performance, and with that increased student

performance comes a variety of positive outcomes. For example, beginning in 2013-14,

student achievement results represent 45% of teacher and administrator effectiveness ratings, a

result of Connecticut’s flexibility waiver (CSDE, 2012a). Another outcome, which is

explained in more detail in the methods section, is the increase in an elementary school’s

School Performance Index (SPI), which is a single number to rate an entire school’s

achievement.

Legislation Requiring SIPs → Creating Effective SIPs → Increased Student Performance → Increased School Achievement Score

Figure 1. Conceptual representation of this study.

Statement of the Problem

Beginning in 2001, schools not making adequate yearly progress, as required by

NCLB, must submit SIPs, yet very little empirical data exists on the effectiveness of this

process (Fernandez, 2009). The well-intentioned legislation was created to support

schools and student learning; however, it remains unclear if improved learning actually

occurs. Connecticut’s ESEA Flexibility Request (CSDE, 2012a), which includes an

important role for SIPs, is designed to support schools most in need and provide

corrective action for schools that do not improve the learning experience for all students.

SIPs are a primary vehicle used to document and plan the improvement process. If the

process of planning is a requirement in an evidence-based reform, it is essential to

evaluate the effectiveness of this planning as a means for improving student achievement.

This study examines school improvement planning through the lens of goal

theory. Goal theory was used to derive hypotheses about characteristics of SIPs that are

related to school achievement.

Theoretical Framework

This study examines school improvement planning through the lens of goal theory

(Locke, 1968; Locke & Latham, 2002). As defined by Locke and Latham (2002), a goal

is the object or aim of an action to be obtained within a specific amount of time. Through

four decades of research, specific attributes of goals have been determined to be most

effective in achieving one’s goal. Goal theory has been selected for the current study

because setting and striving for goals is a crucial component of SIPs.

Goal theory was developed inductively within industrial and organizational

psychology over a 25-year period. Locke and Latham (2006) define goal theory as “an

’open’ theory in that new elements are added as new discoveries are made” (pp. 265-

266). Their theory states that challenging and specific goals lead to higher performance

than less challenging or more abstract goals such as doing one’s best. Locke and Latham

(2006) described mechanisms, through which goals lead to performance, and moderators,

which are characteristics that change the strength of the relationship between two

variables. Mechanisms include directing one’s attention toward a specific activity,

serving an energizing function, positively impacting one’s persistence, and impacting

actions leading to discovery and use of current knowledge and strategies. Moderators

applied to this theory include goal commitment, feedback, and task complexity. Goal

theory states that higher commitment and increased feedback strengthen the goal-performance

relationship, and greater complexity weakens the relationship.
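
For readers who want to see what testing a moderator looks like in practice, the sketch below fits a regression with an interaction term, which is the standard statistical form of moderation. All variable names and data are invented for illustration; this is not the analysis used in the present study.

```python
# Hedged sketch: a moderator corresponds to an interaction term in regression.
# Data and variable names here are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "goal_difficulty": rng.normal(size=n),
    "commitment": rng.normal(size=n),
})
# Simulate performance in which commitment strengthens the goal effect.
df["performance"] = (
    0.4 * df["goal_difficulty"]
    + 0.2 * df["commitment"]
    + 0.3 * df["goal_difficulty"] * df["commitment"]
    + rng.normal(size=n)
)

# "a * b" expands to a + b + a:b; the a:b coefficient tests moderation.
fit = smf.ols("performance ~ goal_difficulty * commitment", data=df).fit()
print(fit.params["goal_difficulty:commitment"])
```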

Goal theory can be an excellent framework for understanding whether and why SIPs work

in school settings. SIPs are linked to goal theory because the process requires setting

goals, and goal theory says that goals should be challenging and specific. Additionally,

when SIPs are created with attention to both the mechanisms and moderators of the theory, the

relationship between planning and achievement should be maximized. This study aims to

measure specific mechanisms and moderators found within SIPs in an effort to determine

if they are associated with increased student performance. This theory was used because

the process of completing a school improvement plan requires the setting of goals for a

school’s success.

Unlike strategic or school improvement planning, goal theory is intended to

explain individual-level behavior. Goal theory has gathered a substantial amount of

empirical support at the individual level, and this study extends its principles to the

school level. Goal theory may help explain the mixed results of SIP effectiveness.

Following the assumptions of goal theory (Locke, 1968), planning for student success

should lead to increases in student achievement; however, very little empirical data exist

to support this claim (Fernandez, 2009). Mintrop, MacLellan, and Quintero (2001) found

that plans often reflected “espoused views” of teachers, school staff, and the community,

as opposed to the actions and interventions typically observed taking place within a

school. White (2009) found the process of creating SIPs to be more associated with

compliance and stakeholder participation than with the leadership necessary to improve

student learning. It has therefore been noted that SIPs may often take a non-optimal

focus; according to the theoretical framework for this study, SIPs should be framed upon

the principles of goal theory.

In Connecticut, student achievement gaps are continuing to grow, which brings

into question the current use and effectiveness of improvement planning. As Figure 1

indicates, legislation requiring SIPs assumes plans will be well written, carefully

implemented, and result in increases in student learning while reducing current gaps in

achievement. If goal theory can be applied to the process of creating SIPs, then schools

should see increased learning resulting in an increase in their SPI. These two outcomes

would then allow schools to meet all legislative requirements and be labeled as

effective schools.

Hypothesis and Research Questions

This study is framed by one hypothesis derived from goal theory and a complete

review of the literature. The hypothesis is that schools creating quality SIPs consistent

with goal theory principles will have students who achieve at higher levels. For the

purpose of this study, achievement will be measured by the change in a school’s SPI

score (see “Definition of Terms” section) from the 2011-12 administration of the

Connecticut Mastery Test to the 2012-13 administration. This study has two research

questions. The first question is whether components of a plan consistent with goal theory

(e.g. specific goals, feedback/monitoring) are associated with increased student

achievement. The second research question evaluated whether school improvement

planning predicted achievement of particular subgroups.
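
To make the planned test concrete, the following is a minimal sketch of how the hypothesis could be evaluated with ordinary least squares regression. The file and column names are hypothetical illustrations, not the study's actual variables; the measures and controls are detailed in Chapter 3.

```python
# Minimal sketch of the hypothesis test; all file and column names are
# hypothetical illustrations, not the study's actual variable names.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("alliance_schools.csv")  # hypothetical data file

# Regress 2012-13 SPI on SIP quality; including the 2011-12 SPI as a
# covariate makes the SIP coefficient a test of change in achievement.
model = smf.ols(
    "spi_2013 ~ sip_total + spi_2012 + pct_free_reduced + pct_high_needs",
    data=schools,
).fit()

print(model.rsquared)              # proportion of shared variance (R-squared)
print(model.pvalues["sip_total"])  # significance of the SIP quality score
```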

Definition of Terms

For the purposes of this study, the following terms have been defined.

Alliance District: The Alliance District program, aimed at identifying and

supporting school districts most in need of improvement, was created as a byproduct of

the 2012 ESEA flexibility waiver. Districts identified as Alliance members are those with

the lowest achievement as measured by the newly created district performance index

scores (CSDE, 2012b). The intent of the Alliance District designation was to allocate

$39.5 million of increased education cost sharing (ECS) funding for the 2012-13

school year to districts most in need and to continue that funding for a minimum of five

years. By increasing funding, focusing supports, and outlining a strategic plan, it is

assumed that student achievement will improve and educational achievement gaps will be

reduced (CSDE, 2012a).

CMT: This is the acronym for the Connecticut Mastery Test, Connecticut’s

statewide assessment of math, reading, science, and writing skills for students in Grades 3-8.

The test is given annually in March and scores are reported to districts in July.

CMT Performance Levels: The CMT currently designates five levels of

performance: (a) Below Basic, (b) Basic, (c) Proficient, (d) Goal, and (e) Advanced.

Students are then assigned a classification based on their performance on the CMT. The

state has designated ‘Goal’ as a target for all students. Students who perform at the goal

level demonstrate extensive knowledge of grade-level concepts and can complete a

majority of grade-level tasks with minimal assistance (CSDE, 2011).

Goal theory: Defined by Locke (1968) and Locke and Latham (2002, 2006), goal

theory is an inductively developed theory within industrial/organizational psychology

stating that specific and challenging goals lead to higher levels of performance in

comparison to more abstract and less-challenging goals. Implied within the theory is the

assumption that goals are attainable, persons are committed, and there are no conflicting

goals.

School Improvement Plan (SIP): Under the No Child Left Behind [NCLB] Act of

2001, schools identified as not making adequate progress are to submit a school

improvement plan. SIPs as used in education are written documents intended to provide a

focus to schools on their path to improvement by identification of practices which are

most likely to lead to increased student achievement, while identifying specific areas of

concern (CSDE Bureau of School and District Improvement, 2007).

School Performance Index (SPI): The primary measure of school accountability

since the approval of the flexibility waiver in May 2012 is now the School Performance

Index (SPI). The SPI is designed to measure the achievement of an entire school

(aggregated across different grade levels) using a single number. Unlike requirements

under NCLB, the SPI allows partial credit for students scoring below proficient on a

given state assessment (Erpenbach, 2009). The SPI is designed to be a status measure for

schools, where status is defined as “the academic performance of a student or group (a

collection of students) at a single point in time” (Castellano & Ho, 2013, p. 12).
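
To illustrate how an index of this kind can award partial credit, the sketch below averages assumed point values over counts of students in each CMT performance band. The band weights are illustrative assumptions only; the CSDE's published SPI rules define the actual point values.

```python
# Illustrative sketch of a status index that awards partial credit below
# Proficient, in the spirit of the SPI. Band weights are assumptions only.
BAND_POINTS = {
    "Below Basic": 0,
    "Basic": 25,
    "Proficient": 50,
    "Goal": 75,
    "Advanced": 100,
}

def performance_index(band_counts):
    """Return average points per tested student for one school."""
    total_students = sum(band_counts.values())
    total_points = sum(BAND_POINTS[band] * n for band, n in band_counts.items())
    return total_points / total_students

# Example: 100 tested students spread across the five CMT bands.
print(performance_index(
    {"Below Basic": 10, "Basic": 20, "Proficient": 40, "Goal": 25, "Advanced": 5}
))  # -> 48.75 under these assumed weights
```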

Planning: Cook (2004) defines planning as a system in which the

locus of control for change resides inside an organization and serves as an “irrevocable

commitment to purpose beyond the ordinary” (p. 74). Planning is a process that “involves

explicit systematic procedures used to gain the involvement and commitment of those

principal stakeholders affected by the plan” (Falshaw, Glaister, & Tatoglu, 2006, p. 10).

Limitations

This study was not conducted using a random sample and therefore the

generalizability of results to the total population is uncertain. The sampling frame

included elementary schools which were part of the newly formed Connecticut Alliance

Districts, and the actual sample consisted of those Alliance District schools that provided

this researcher with a copy of their SIP. Because not every Alliance District elementary

school provided its SIP for this study, external validity, or the “extent to which

findings in one study can be applied to another situation” (Mertens, 2010, p. 129), cannot

be guaranteed.

This population, or group about which I intend to draw conclusions, includes low-

performing school districts that are required to submit SIPs. The sampling frame included

elementary and K-8 schools within Connecticut’s Alliance Districts and the sample will

be a subset of these schools that provided this researcher with their SIP. This sampling

frame was selected because the schools that are part of the Alliance Districts belong to the

lowest-performing districts in the state, and thus are most likely to be assigned to a classification

requiring that a plan be created. Additionally, although socioeconomic demographics of

the towns are not identical, there are more similarities among the districts than there are

differences.

Internal validity can also be considered a limitation because this study was not

conducted as a true experiment, so drawing firm cause-effect conclusions about school

improvement planning is not possible. There are many potential extraneous or lurking

variables, the most important of which I attempted to identify and account for.

Another limitation of this study is that it attempted to measure the intended, rather

than actual, implementation and monitoring of the SIPs. The current study’s methods did

not allow assessment of how plans are actually implemented.

Lastly, each SIP was scored by a human reader, thus reliability of scoring can be a

limitation. Reliability, or the degree to which measurements are free from error (Mertens,

2010), will be evaluated for intra-rater consistency. To increase the intra-rater reliability,

pilot scoring was conducted as a method of training and, once scoring was complete, a representative

sample of plans was randomly selected for re-scoring. A full description of the

procedures for scoring will be discussed in Chapter 3.
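
As a minimal illustration of the re-scoring check just described, the sketch below correlates first-pass and second-pass ratings for a handful of hypothetical plans. The Pearson correlation is shown here as one common consistency index; the study's actual reliability procedures and figures appear in Chapter 3 and Table 4.

```python
# Minimal sketch of an intra-rater consistency check: correlate original
# scores with re-scores for randomly re-scored plans. Values are invented.
import numpy as np
from scipy.stats import pearsonr

original_scores = np.array([78, 85, 62, 91, 70, 88, 59, 74])  # first pass
rescored_scores = np.array([80, 83, 65, 90, 72, 86, 61, 73])  # second pass

r, p = pearsonr(original_scores, rescored_scores)
print(f"intra-rater r = {r:.2f} (p = {p:.3f})")
```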

Organization of the Dissertation

This dissertation is organized into five chapters. Chapter 1 provides an overview

of the study within the current educational context in Connecticut while justifying the

importance of this research. Chapter 1 also introduces goal theory as the theoretical

framework, defines terms, and describes study limitations. Chapter 2 includes a review

of the literature which informs this study. The review begins with an outline of goal

theory and then provides the context for required planning. The review then delves into

the effectiveness of planning in general, and begins to narrow specifically to planning in

an educational setting. The review ends with a focus on school improvement planning as

it relates to improved student learning, and connects the literature with the current study’s

hypothesis and research questions. Chapter 3 outlines the methods used for this study

including the research design, population and sample, data sources, and procedures.

Chapter 4 discusses the results for the hypothesis test and both research questions.

Chapter 5 interprets the results and makes recommendations for further study.

CHAPTER 2

REVIEW OF THE LITERATURE

This chapter includes a review of the literature which informs this study and looks

at the process of school improvement planning through the lens of goal theory. The

review starts with an outline of the theory and examines the role of goals in the planning

process. The review then delves into the effectiveness of planning. Due to the lack of

empirical data on school improvement planning effectiveness, it was necessary to expand

the search beyond education and look into planning across other sectors, such as business

and travel. The review then narrows to planning as it relates to schools in an

accountability driven context. Finally, the review ends with a focus on school

improvement planning as it relates to improved student learning, and reviews the study

hypothesis and both research questions.

Search Strategy

The search strategy for this study consisted of thorough examination, beginning in

July 2012, of databases, current literature, published dissertations, Internet searches, and

review of reference lists for articles and documents read. Because of the limited number

of studies on school improvement planning, I relied heavily on a “snowball” approach

(Mertens, 2010), using each source’s reference list to continually build my list of relevant

sources. The databases used for this search included ProQuest Dissertations and Theses

Full Text, Education Full Text, ERIC, PsycARTICLES, PsycINFO, and Social Sciences

Citation Index. Information was also obtained through conversation and telephone calls

with staff members of the Connecticut State Department of Education which led to

further review of legislative documents and updated policies.



Search terms included: school improvement, school improvement planning,

planning effectiveness, strategic planning, strategic leadership, impact of planning,

student achievement, goal theory, goal-setting theory, accountability, and school

accountability. The inclusion criteria originally included any articles, documents, or

studies related to school improvement planning and student learning. I began to narrow

my search to required planning, legislative documents pertaining to school improvement

planning, and strategic planning as it relates to student achievement. As I continued my

search and became saturated in the literature, my focus was on the role of goal setting and

its connection to planning, school accountability, and student achievement.

Goal Theory

Goal theory provides a useful theoretical context for the present study because

SIPs involve setting goals at the school level. A goal, as defined by Locke and Latham

(2002), is the object or aim of one’s actions, such as attaining a certain level of

proficiency within a specified timeframe. Goals are the internal representations of desired

states and can be constructed as outcomes, events, or processes (Austin & Vancouver,

1996). They can be internal such as one’s ideal body temperature or “complex cognitive

depictions of desired outcomes (e.g., career success)” (Austin & Vancouver, 1996, p.

338). Goal setting as a motivational technique has its origins in two lines of inquiry: (a)

research in academic psychology and (b) applied management research (Mento, Steel, &

Karren, 1987). Based on the work of Locke (1968), the study of goal theory also includes

a focus on goal setting as it relates to task performance; hence the connection to the present

study, given the use of goals in SIPs.



Setting goals has become commonplace in work settings as a method of

increasing productivity and achieving highly competitive standards. Goal theory,

developed inductively over four decades of research, is largely based on Ryan’s (1970)

belief that conscious goals impact one’s output. Researchers have found that

performance is a function of ability and motivation; thus, setting goals is an

important first step in achieving the necessary motivation required to accomplish a

desired state (Higgins, 1987; Locke & Latham, 2006). This belief connects directly to the

process of school improvement planning which requires setting the plan, implementing

the plan, and seeing change systemically across an organization to lead to true

improvement.

Core findings. Locke and Latham (2002) identified goal setting to be most

effective when goals were challenging and specific. They found the most difficult goals

were associated with the highest levels of effort and performance. Specific goals are

motivating because they contain an external referent, and clearly define the optimal

outcome so one knows whether the goal has been reached. This is in contrast to “doing

one’s best” which allows for a range of acceptable levels of performance (Locke &

Latham, 2006).

Setting challenging goals has been shown to be more effective than setting less

challenging goals as long as the person attempting to accomplish the goal has the ability

to accomplish the task (Locke, 1968). Locke (1968) found an individual’s conscious

ideas have the ability to regulate their actions; thus, setting harder goals has been found to

produce higher performance than easier goals. His findings also suggest that setting

unrealistically high goals may not necessarily lead to goal accomplishment, but that

individuals who set more challenging goals performed at a higher level than those setting

lower levels of goals. Other researchers, such as Atkinson (1958), have corroborated the

findings about unrealistic goals with similar research showing people assigned to an

overly challenging task (1:20 chance of completing the task) simply did not try to win

because the odds of their success were so small.

In addition to suggesting challenging goals, Locke and Latham (2002) address

specificity of goals. By creating specific goals, individuals have a clearer picture of the

desired outcome; therefore, less variability in the final outcome is observed. Locke

and Latham (2002) found that goal specificity alone does not lead to a higher level of

performance because goals can vary substantially in difficulty. However, “goal

specificity does reduce variation in performance by reducing the ambiguity about what is

to be attained” (Locke & Latham, 2002, p. 706).

Mechanisms. In addition to challenging and specific goals, Locke and Latham

(2002, 2006) identified four mechanisms by which goals impact performance. The

mechanisms, or actions leading towards goal accomplishment, include directing attention

and effort towards a specific activity, having an energizing function directly related to

effort, positively impacting persistence, and impacting action leading to discovery or use

of current knowledge and strategies. For the purpose of school improvement planning,

the two most important of these mechanisms are directing attention towards a specific

instructional activity and the creation of appropriate strategies to accomplish the goals.

With regard to directing attention and efforts toward a specific activity, it is

essential that goals are designed to direct attention and effort to a stated task. Planning for

goals refers to the development of specific alternate behavioral paths by which a goal can

be attained (i.e., a strategy) which allows individuals to direct attention towards a specific

outcome (Austin & Vancouver, 1996). Planning links goals to specific measurable

actions and serves to prioritize decisions among goals. Tubbs and Ekeberg (1991)

emphasize the importance of planning and its connection to goals and action. Planning

goals has also been associated with monitoring goals (Austin & Vancouver, 1996);

planning is a necessary tool for putting a feedback mechanism in place to progress

towards one’s goal (Locke & Latham, 2002).

In addition to planning to direct one’s attention, goals can indirectly impact action

by leading to the discovery and use of task-relevant knowledge and strategies. Locke and

Latham (2002) have found people automatically access the knowledge and skills they

currently have when attempting to accomplish a goal and apply skills and strategies from

similar situations to complete a given task. When unsure of how to complete a task,

people will attempt to accomplish the task based on what they have done in the past to be

successful. Their research also found that people, when presented with a new goal or task, will

engage in deliberate planning to develop strategies enabling them to obtain their goal,

which is a necessity in school improvement planning. Last, when people are

appropriately trained in implementing strategies to accomplish high-performance goals,

they are more likely to use the strategies to accomplish the stated goal.

Moderators. Moderators, or characteristics that change the strength of the goal-

performance relationship, include goal commitment, feedback, and task complexity

(Locke & Latham, 2006). Of the three moderators, the current study focuses primarily on

goal commitment and feedback as critical functions of school improvement plans.

Feedback serves as a tool to monitor progress towards stated goals, while commitment

leads to motivation necessary to complete an established goal. By setting goals, staying

committed to them, and frequently monitoring progress toward them, an organization can

make more frequent corrections, and thus work to accomplish the established goals

(Reeves, 2004; Schmoker, 2004; White, 2009). In contrast, those not monitoring progress

would lack continued feedback and be less likely to make mid-course changes to

accomplish goals (Beach & Lindahl, 2007).

To maintain attention towards a goal and foster persistence requires goal

commitment, a moderator in goal theory. This occurs immediately after goal acceptance

(Austin & Vancouver, 1996). Acceptance ranges from compliance, to identification, and

most importantly to internalization. Goal commitment is relevant to goal achievement

and the process of creating SIPs. In the absence of commitment to goals by teachers and

school staff, goals will not lead to higher performance. It is therefore necessary that the

process of writing SIPs foster commitment. Goal commitment has been found to

increase when goals are made public, as in the case of SIPs. Some feel this is because

accomplishing public goals becomes a matter of professional integrity (Klein, Wesson,

Hollenbeck, & Alge, 1999).

In addition to goal commitment, feedback is an important moderator for the

present study. Knowing whether stated goals are being met serves as summary feedback;

when people do not know how they are doing, it is “difficult or impossible for them to adjust the

level or direction of the effort to adjust their performance strategies to match what the

goal requires” (Locke & Latham, 2002, p. 708). Additionally, feedback is essential in the

process of goal setting because it allows adjustment in the level or direction of one’s

efforts, enabling continual progress towards goal completion. If feedback indicates a



person is below target, effort may be increased or a new strategy may be attempted to get

closer to the desired state. Combining both feedback and goals has been shown to be

more effective than simply assigning a goal (Locke & Latham, 2002); feedback is also

relevant to SIPs because it directly connects to the process of effectively monitoring the

process of school improvement planning.

Ethical considerations. One concern with SIPs is the potential for drawing

attention to quantifiable goals while drawing attention away from other goals which are

harder to quantify. A majority of the research examines the positive benefits of goal

setting, but there have been unintended and unethical behaviors associated with the

practice. Kelley and Protsik (1997) found, in a qualitative study on performance based

incentives, that setting goals and attaching incentives often led to teaching to individual

items and ignoring instructional areas which are not tested. This practice has been

observed on many SIPs that focus solely on standardized test topics and ignore non-tested

instructional areas such as science, social studies, and the arts. Barsky (2008) found that

performance goals can also interfere with ethical decision making by “directing

employees’ attention to achieving the goal, thereby occupying the cognitive resources

that might otherwise have been used to evaluate the morality of work-related behaviors”

(p. 65), in the case of teaching, the need to educate the whole child.

Planning

Goal theory, while intended to focus on individual performance, is relevant to

school or organization-level performance because it plays a critical role in planning.

Cook (2004), who focused specifically on strategic planning (one type of planning),

defined planning as a system in which the locus of control for change resides inside of an

organization and serves as an “irrevocable commitment to purpose beyond the ordinary”

(p. 74). It is defined as a process that “involves explicit systematic procedures used to

gain the involvement and commitment of those principal stakeholders affected by the

plan” (Falshaw et al., 2006, p. 10). Planning places heavy emphasis on understanding the

environment in which planning takes place and adjusting one’s actions to that

environment (Beach & Lindahl, 2004a). The process of planning is to serve as a tool to

ensure the fullest use of an organization’s resources (Bloom, 1986). Although planning

goes by many different names, such as long range planning, corporate planning, strategic

management, and strategic planning, the evidence on the relationship between planning

and performance has been inconclusive (Falshaw et al., 2006).

The components of planning are fairly similar regardless of one’s professional

field. As defined by Heroux (1981), planning includes accepting a task from a boss,

determining the best way to carry it out, and continually evaluating one’s work. Falshaw

et al. (2006) describe formulation, implementation, and control as the necessary

components of planning, with a clear and specific goal to be accomplished as well as a

path to achieving that goal. Beach and Lindahl (2004a) describe the process as

identifying beliefs, mission, strengths, weaknesses, competition, objectives, and

necessary strategies. According to Armstrong (1982) planning requires an explicit

method for identifying an organization’s long-term goals, a systemic way to evaluate the

agreed upon strategies, and a precise system for monitoring the results. Several

components of goal theory stand out in these definitions, including the importance of

having a specific goal, strategies to achieve the goal, and monitoring or feedback

regarding goal progress.



Phillips and Moutinho (2000) have linked the concept of strategy to a pattern of

decisions involving alignment of practices which impact and improve organizational

performance. This method of planning is said to result in a better match of variables to

the changing conditions, with a clear outcome of improving the long-term performance of

a company (McIlquham-Schmidt, 2010).

According to Falshaw et al. (2006) planning has been considered through two

lenses, content (what is included within a plan) and process (how one goes about

accomplishing a plan), both of which need to be considered in creating SIPs. Planning

focuses on analysis which includes breaking down a goal into actionable steps such that

implementation can happen automatically. The actionable steps represent the mechanism

Locke and Latham (2002) identified as developing strategies. Planning can be seen as

strategic if it identifies areas for improvement and suggested intervention through a new

and innovative perspective and pushes an organization towards that perspective (Cook,

2004).

Planning and goal theory are connected in many ways. In order to engage in

planning, specific and challenging goals must be set along with the creation of

appropriate strategies to address areas of concern. In addition, the goals must be

measurable to allow monitoring. Achieving challenging goals requires the direction of

attention towards the identified strategies with persistence in accomplishing stated tasks.

For planning to be successful, there must be commitment to the identified goals, feedback

on progress, and a clear identification of necessary steps to accomplish the defined goal.

One main difference between the two is that planning focuses on an organization with

multiple goals whereas goal theory was intended for individual increases in performance.

Therefore, while goal theory can be a useful guide to hypothesizing about important

characteristics of SIPs, certain elements of a good SIP may go beyond goal theory in

dealing with organizational processes.

Effectiveness of planning. It is reasonable to believe that the process of planning

would improve an organization’s (e.g., a school’s) performance. Some researchers, such

as Mintzberg, have questioned the applicability of planning to the public sector (Bloom,

1986). Mintzberg (1994) suggested that strategic planning in particular may not fulfill its

promise.

When strategic planning arrived on the scene in the mid-1960s, corporate leaders
embraced it as “the one best way” to devise and implement strategies that would
enhance the competitiveness of each business unit. True to the scientific
management pioneered by Frederick Taylor, this one best way involved
separating thinking from doing and creating a new function staffed by
specialists: strategic planners. Planning systems were expected to produce the
best strategies as well as step-by-step instructions for carrying out those
strategies so that the doers, the managers of business, could not get them wrong.
As we now know, planning has not exactly worked out that way. (p. 107)

Planning is initiated as a tool to improve outcomes or productivity. In any attempt

at improvement, as Mintzberg (1994) has shown, multiple factors go into successfully

planning and implementing interventions to increase production. The idea of multiple

factors impacting improvement is also supported in the educational field because

“circumstances are generally too complex to allow accurate accounting of all elements of

a system that may impact future events” (Beach & Lindahl, 2004b, p. 12). In addition to

the complexity of circumstances, researchers have suggested the differences between

public and private sectors are great enough to require substantial adaptation to make the

planning successful (Bloom, 1986).



The case for planning relies on the continual examination of evidence of

planning’s effectiveness (Bell, 2002; Cook, 2004; Falshaw et al., 2006; Heroux, 1981).

Several empirical studies have outlined the relationship between planning and corporate

outcomes. In a study of 113 UK companies, Falshaw et al. (2006) found an inconsistent

relationship between a company’s long range planning and financial performance. In

their study the authors focused on assessing the formality of the planning process using

such indicators as size of the organization, amount of environmental turbulence within an

organization, and subjective measures of performance to self-evaluate the company’s

performance over the past three years. The authors found these variables to impact the

style of planning the organization undertook; however, no relationship between the

planning process and financial performance was found.

In contrast, McIlquham-Schmidt (2010) found a very small yet positive

relationship across studies (r = .0830) between planning and performance. This meta-

analysis was conducted on 88 studies seeking to examine such factors as time period and

journal quality on the planning and performance relationship. This meta-analysis

contained both quantitative measures (market and accounting based improvements) and

qualitative measures (perception based improvements) and included many studies not

previously included in other analyses. This work found planning does have a positive

effect on corporate performance, especially when disaggregating the dataset; however, the

effect was smaller than originally believed. Results for some individual studies showed a

much stronger effect size than the average, but other studies showed a weaker than

average effect size.
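
For readers unfamiliar with how a pooled correlation such as r = .083 can be obtained, the sketch below shows one standard approach: Fisher z-transform each study's correlation, weight by sample size, average, and back-transform. The study values are invented for illustration, and McIlquham-Schmidt's (2010) exact procedure may differ.

```python
# One standard way to pool correlations across studies: Fisher z-transform,
# weight each study by (n - 3), average, then back-transform to r.
# The (r, n) pairs below are invented for illustration only.
import math

studies = [(0.25, 113), (-0.05, 88), (0.10, 41), (0.02, 200)]

z_sum = sum((n - 3) * math.atanh(r) for r, n in studies)
weight_sum = sum(n - 3 for _, n in studies)
mean_r = math.tanh(z_sum / weight_sum)  # back-transform the weighted mean z
print(f"weighted mean r = {mean_r:.3f}")
```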



Another study by Malik and Karger (1975) evaluated the financial performance of

businesses within three different industries: electronics, machinery, and chemical and

drug companies. Their questions attempted to identify the effectiveness of formal

integrated long-range planners in comparison to non-planners. Their study, although

small in scale, showed formal planners financially outperformed non-planners on 9 of 13

performance variables, including sales volume, sales and shares, cash flow, net income,

and overall earnings. Additionally, although the electronics companies showed the largest

gains, all groups made significant profit gains.

Similarly, Wood and LaForge (1979) found in a mixed methods study that banks

classified as comprehensive planners performed significantly better than partial or non-

planners. These comprehensive planners also outperformed a randomly selected group

serving as the study’s control. For this study, financial performance over a five-year

period was analyzed using growth in net income and return on owner’s investment as the

two variables. This study was limited in that, overall, there was a relatively small sample

size: comprehensive planners (n = 26), partial planners (n = 6), and non-planners (n = 9).

It should also be noted that the researchers cautioned against inferring that

comprehensive planning was the only reason for a bank’s success. Rather, they suggest

that the managers of these banks were more progressive in their overall management

techniques.

While the research reviewed is mixed regarding the effectiveness of planning,

and while the studies have relatively small sample sizes, there

is at least some evidence that planning does in fact improve performance (Malik &

Karger, 1975; McIlquham-Schmidt, 2010; Phillips & Moutinho, 2000; Wood & LaForge,

1979). Therefore it would be useful to continue to evaluate planning in both industry and

education.

The lack of overwhelmingly positive evidence raises the question of why

companies that are not required to plan nonetheless decide to continue the process of

planning. Heroux (1981) found that businesses adopt planning as a necessity for survival;

however, they soon come to realize that planning does not lead to the level of gains they

had hoped for. Additionally, many researchers have identified the selection of specific

performance measures to be problematic. Companies can opt for sales, profit, revenue,

dividends, stock price, or returns on assets. The challenge is that some of the performance

measures may be more influenced by planning than others (Falshaw et al., 2006).

Because most existing research fails to examine specific components of planning

associated with increased productivity, goal theory could lead to the identification of such

components. Phillips and Moutinho (2000) created a diagnostic tool used to evaluate

planning effectiveness within hotels. Their work found the most effective plans set

explicit goals, establish clear responsibilities for implementation, have high levels of

commitment, and use modern analytic techniques required for effectiveness. These

findings are consistent with the core findings of goal theory, in that specific and

challenging goals, commitment, and feedback are important. Additionally their findings

align with the idea that goal commitment moderates the effectiveness of goal setting.

These findings are also consistent with literature on effective school planning (Beach &

Lindahl, 2007; Connecticut State Department of Education Bureau of School and District

Improvement, 2007).

At this point, it is important to consider whether planning is appropriate in

schools. School leaders are often not trained to implement true strategic planning;

therefore the fidelity of implementation may be questionable. Another concern is that many

components of planning are beyond the control of public schools, such as

identifying one’s mission. As stated by Beach and Lindahl (2004a), “The mission of all

but the highly specialized K-12 public schools (e.g. schools for the visually impaired or

vocational schools) is virtually identical across the United States” (p. 221). Cook (2004),

however, feels that there are enough similarities to invest in this type of planning for

school improvement.

As this review will continue to show, the lack of attention to principles of goal

theory carries over into the educational sector. This is problematic because without

knowing what characteristics of plans lead to improved performance, companies and

schools continue to participate in a strategic planning process without a clear

understanding of what is required to increase productivity. If goal theory can help

identify components of SIPs correlated with increased performance, then schools as well

as businesses can design strategic plans to maintain their competitive edge while meeting

the high expectations placed upon them by shareholders and legislators.

Planning in a School Setting

Under NCLB (2001), schools identified as not making adequate progress must submit

a SIP. This is depicted in Figure 1 as it applies to Connecticut’s context. Although not

required for all schools, Fernandez (2009) found that by 2000, most schools were writing

formal plans for improvement. SIPs were designed to close achievement gaps and raise

levels of student achievement (White, 2009). Additionally, 48 states had begun some

form of standardized testing as a form of statewide accountability (Phelps & Addonizio,

2006). States have recognized the importance of school accountability and are using

student achievement and the process of school improvement planning as a method of

distinguishing effective and ineffective schools (Phelps & Addonizio, 2006). Many feel

widespread public accountability occurred as a result of the National Commission on

Excellence in Education’s 1983 report, A Nation at Risk. This report, although not the

first publicized criticism of America’s system of education, did serve as a catalyst for

years of political pressure and large-scale school reform (Beach & Lindahl, 2004b).

SIPs often contain many of the same characteristics as plans discussed earlier. A

review of the academic literature on characteristics of effective school plans indicates the

importance of targeted areas for improvement, integration of specific intervention

strategies, frequent monitoring of student data, and identification of persons responsible

for implementation of each strategy (Fernandez, 2009; Reeves, 2004; White, 2009).

These areas, as noted earlier, are consistent with goal theory concepts. Other areas

necessary for systemic improvement, yet often missing from SIPs, include leadership

strategies, data analysis techniques, decision making practices, and an evaluation of a

school’s readiness to change along with the process for improvement (Beach & Lindahl,

2004b; Hall & Hord, 2011; Reeves, 2004; White, 2009;). Without the integration of these

steps and a formal evaluation of the improvement process, sustained improvement is

unlikely (Webb, 2007).

SIPs as used in education are intended to provide a focus to schools on their path

to improvement by identification of practices which are most likely to lead to increased

student achievement, while identifying specific areas of concern. This practice is



consistent with goal theory’s focus on identifying strategies for goal achievement. Similar

to strategic planning in general, creating a school improvement plan requires an

organization to have a shared focus on improvement while setting and monitoring the

implementation of specific and challenging goals and using the results to determine an

organization’s next steps. The initial concept of creating an effective school plan may

appear quite simple; however, in reality “the complexities of each school’s changing

environment, internal strengths and weaknesses, readiness for change, culture, needs, and

stakeholders make this a vastly intricate process” (Beach & Lindahl, 2007, p. 23).

Several studies have examined the implementation of plans in school settings.

One study conducted by McInerney and Leach (1992) evaluated Indiana’s Performance-

Based Accreditation at the high school level. Their work found that although schools

were required to submit SIPs, a majority focused more on the implementation of new programs than on monitoring and evaluating current programs and student progress, which is a primary aim of the SIP process. Consistent with Reeves’ focus on

identification of best practices, “if there is a theme to the research on leadership impact, it

is that practices, not programs are the key to developing and sustaining a high level of

impact” (Reeves, 2011, p. 25). McInerney and Leach (1992) did find positive correlations

between the planning process and the development of a more unified staff (building a

positive culture), an increase in parent and community involvement, and a greater

awareness of a school’s strengths and areas of concern. The last correlation, the

identification of strengths and areas of concern, represents an essential component of goal

theory’s focus on obtaining commitment.



McInerney and Leach’s (1992) study also identified a finding that could be relevant to Connecticut’s current legislation: oftentimes the objectives on the SIPs were not related directly to the state’s performance indicators. Further, the authors recommend

If improved standardized test scores, improved graduation rates, and improved attendance are the goals, it would seem reasonable to require school personnel to develop plans addressed toward those goals rather than encouraging school personnel to use local discretion in developing goals of their own. (p. 26)

As research indicates, there is no required format for an SIP. McInerney and

Leach (1992) and Webb (2007) have both found that within the process of planning there

is always the chance that schools will pick inappropriate goals, thus making the process

of planning less beneficial. To prevent this from happening, a majority of the research

reviewed, consistent with the beliefs outlined in goal theory, supports the need for a

comprehensive needs assessment to ensure accurate and frequent monitoring of the goals

(Beach & Lindahl, 2004b; Connecticut State Department of Education Bureau of School

and District Improvement, 2007; Fernandez, 2009; Mourshed, Chijioke, & Barber, 2010;

Reeves, 2006, 2011).

Evidence of Effectiveness of SIPs

As previously stated, little empirical data has been reported on the effectiveness

of school improvement planning. The focus of this section is on research relating

characteristics of SIPs with increased student achievement. Three studies have provided

this type of evidence. Curry (2007) examined the role of SIPs in student achievement but

in contrast with the current study, her research focused on the work of School Advisory

Councils which served as the mechanism for creating the SIPs. The study evaluated 67

middle and high schools including a survey of relevant stakeholders as well as content

analysis of the SIPs. In this study, the researcher coded the number of math and writing

strategies used in each SIP. The results showed a significant negative correlation between

the number of math strategies found in plans and student achievement. In addition, there

was a negative correlation between the number of what she termed operational action steps and

student achievement. Operational action steps were defined as “any activity that is

required to complete an objective such as purchasing, creating, scheduling, reporting,

comparison, programs, analyzing data, analyzing reports etc.” (Curry, 2007, p. 100). Her

findings are consistent with Reeves’ (2004) and White’s (2009) recommendations to limit

the number of goals and strategies.

Two additional studies were more closely related to the current study. Reeves’

(2011) Planning, Implementation, and Monitoring (PIM) study and Fernandez’s (2009)

study on effectiveness of school improvement plans provide a framework for the current

study. Both studies used similar rubrics to examine specific characteristics of SIPs (as in

the present study) in an effort to quantify SIP effectiveness.

The PIM study (Reeves, 2011) included 2,000 schools in the United States and

Canada using achievement data for more than 1.5 million students. The participants in

this study represented a very diverse group including both urban and rural districts

spanning levels from elementary to high school. The study included double-blind reviews

of SIPs in an attempt to see what components of a plan, with a focus on leadership practices,

could be associated with increases in student achievement. Many SIP characteristics

included on the rubric were consistent with goal theory including extent to which goals

are specific, extent to which strategies are linked to specific student needs, and frequency

of monitoring goal progress. Reeves (2011) defined achievement as the change in percent

proficient over a three year period.



Reeves (2011) found multiple factors to be statistically associated with increases

in student achievement. He did not, however, provide a full presentation of results for all

variables and only reported a small subset of all results, so significant findings need to be

interpreted with caution. One finding he did report indicated negative relationships

between inquiry process and specific goals and changes in student achievement (i.e., SIPs

with higher scores on inquiry process and specific goals showed less change, which is not

consistent with goal theory). In the PIM study, inquiry process was defined as evaluating

student achievement data and associating that data with the practices of teachers and

leaders significantly tied to the student achievement results.

In a similar study, Fernandez (2009) explored a relationship between SIP quality

and school performance using data from Clark County School District in Nevada, the 5th

largest school district in the nation and home to 300,000 students. The data came from

three sources: data on SIP were obtained from a content analysis (using a rubric) of

individual schools’ SIPs, student achievement data came from standardized assessments,

and data on school demographics and school resources were also obtained. Similar to the

PIM study, Fernandez identified 17 SIP indicators including specific goals, measurable

goals, use of research-based strategies, and frequency of monitoring. Fernandez

combined scores on all 17 indicators to come up with one overall quality score for each

SIP, which he used to complete his analysis.

After controlling for a variety of factors such as school demographics and

spending, Fernandez (2009) found that quality planning showed an association with

student math and reading performance. Fernandez’s work, consistent with goal theory

(Locke, 1968) found “schools that developed plans with goals with specific time frames

and that specified more frequent monitoring of school performance tend to have higher

levels of improvement in students standardized test scores” (Fernandez, 2009, p. 357).

Additionally, consistent with views stated previously regarding multiple factors necessary

for improvement, the additive index measure of the plans (combining multiple factors)

“outperformed any individual dimension alone” (p. 357).

These two studies (Fernandez, 2009; Reeves, 2011), like the present study,

examined the relationship between the quality of SIPs and student achievement. One

difference between the two studies is that the PIM study included schools from across the

United States and Canada while Fernandez only studied schools from one very large

district. Collectively, they provide mixed evidence on the effectiveness of SIPs. It is not

clear why the two studies showed different results. It might be due to the fact that

Fernandez’s SIPs all used the same format (i.e., the extraneous variable of SIP format is

controlled for) and came from only one school district. The different results could also be

attributed to some unidentified variable or variables.

The Present Study

An important gap in the current literature is that while SIPs have been mandated

for Alliance Districts in Connecticut, there is a lack of evidence supporting this mandate.

Only a few studies have examined the SIP-student achievement relationship.

Additionally, there has been no attempt to apply goal theory principles to SIPs to

determine if characteristics consistent with goal theory predict achievement. The present

study is intended to build upon the findings from Fernandez (2009) and Reeves (2011) in

an effort to evaluate the effectiveness of school improvement planning. Unlike these two

studies, the current study will evaluate characteristics of an SIP as they relate to goal

theory (Locke, 1968; Locke & Latham, 2002, 2006). The present study, in essence a

replication of Fernandez’s work with an additional focus on goal theory, aims to evaluate

if school improvement plans, when created with components of goal theory, can be

associated with higher levels of achievement in Connecticut’s Alliance District schools.

The current study is framed by one hypothesis derived from goal theory and a

complete review of the literature. The hypothesis is that schools creating quality SIPs

consistent with goal theory principles will have higher levels of student achievement

than schools with lower quality SIPs. There are two research questions for this study. The

first asked whether components of a plan consistent with the goal theory principles of

specific and challenging goals, or any of the mechanisms (directing attention, having an

energizing function, increasing persistence, leading to an appropriate strategy) or

moderators (supporting goal commitment or providing feedback) are associated with

increased student achievement. The second research question evaluated whether school

improvement planning predicted achievement of particular subgroups.



CHAPTER 3

METHODOLOGY

The purpose of this study was to examine the effect of school improvement

planning on student achievement. This study aimed to replicate Fernandez’s (2009) study

quantifying components of SIPs associated with increased student achievement, with an

additional focus on goal theory and on low-performing districts. Main variables used in

the analyses included scores for 2012-13 SIPs based on the PIM School Improvement

Audit (Reeves, 2011), academic achievement as measured by the change in school scores

on the newly created School Performance Index (SPI), and school-level demographic

information such as percentage of students eligible for free and reduced price meals and

percentage of English Language Learners (ELLs).

The current study was framed by a hypothesis derived from goal theory and a

complete review of the literature. The hypothesis was that schools creating quality SIPs

consistent with goal theory principles would have higher student achievement than

schools with lower quality SIPs. The first research question asked whether components of

a plan consistent with the goal theory principles of specific and challenging goals, or any

of the mechanisms (directing attention, serving as an energizing function, increasing

persistence, leading to development of a strategy) or moderators (goal commitment,

feedback) could be associated with increased student achievement. To account for

demographic variability and the concern for the widening achievement gap between

subgroups, the second research question evaluated whether school improvement planning

predicted achievement of particular subgroups.



Research Design

This quantitative study used what Mertens (2010) refers to as a correlational

research design. This study was a prediction study in that “the researcher is interested in

using one or more variables (the predictor variables) to project performance on one or

more other variables (the criterion variables)” (Mertens, 2010, p. 161). The goal of the

analysis was to reveal the degree to which the highest scoring SIPs were associated with

the greatest levels of student achievement. It is important to note that just because a

correlation between two variables is observed, a causal relationship is not indicated

(Pyrczak, 2010). As with any prediction study it is also important to consider other

variables which may impact the outcome variable.

Population and Sample

The population, or group about which I drew conclusions, includes low-

performing school districts that are part of Connecticut’s newly formed Alliance

Districts. The schools located within these districts are required to submit SIPs. The

sampling frame includes elementary or K-8 schools within Connecticut’s Alliance Districts, and the actual sample was a subset of these schools that were willing to provide

their plan for inclusion in this study. Table 1 presents descriptions of the schools in the

sample. In an effort to increase the external validity of the sample, all Alliance District

schools were invited to participate. My goal was to obtain SIPs from 150 schools with representation from each of the 30 Alliance Districts. I received plans from 108 schools

(40%) across 25 Alliance Districts (83%).



Table 1

Description of Alliance Districts and Schools Included in This Study

District    Total Enrollment    Plans Obtained    Plans Available    Per Pupil Spending    DRG
Ansonia 2,514 2 2 11,314.81 H
Bloomfield 2,151 2 2 17,342.73 G
Bridgeport 20,031 1 26 12,977.24 I
Bristol 8,410 7 7 12,635.14 G
Danbury 10,488 5 10 11,724.70 H
Derby 1,440 1 2 12,576.84 H
East Hartford 6,969 7 10 11,771.25 H
East Haven 3,234 4 4 13,524.66 G
East Windsor 1,284 0 1 14,920.39 F
Hamden 5,817 8 8 15,200.16 G
Hartford 20,853 8 32 17,989.85 I
Killingly 2,676 1 3 13,996.70 G
Manchester 6,303 9 9 14,390.73 G
Meriden 8,227 3 8 12,526.80 H
Middletown 5,026 7 8 13,413.37 G
Naugatuck 4,514 5 6 13,435.22 G
New Britain 10,044 3 10 11,630.12 I
New Haven 20,401 0 31 17,486.41 I
New London 2,939 3 3 13,756.60 I
Norwalk 11,111 12 12 15,665.50 H
Norwich 3,756 7 7 13,408.56 H
Putnam 1,253 1 1 14,375.93 G
Stamford 15,471 5 12 16,331.01 H
Vernon 3,554 5 5 12,960.73 G
Waterbury 17,814 0 20 14,718.06 I
West Haven 6,059 0 7 11,772.89 H
Winchester 685 1 1 16,037.75 G
Windham 3,242 1 4 15,918.66 I
Windsor 3,490 0 2 15,472.89 D
Windsor Locks 1,746 1 1 15,270.41 F
Notes: Data retrieved from: http://sdeportal.ct.gov/Cedar/WEB/ct_report/DTHome.aspx. Per Pupil
Spending retrieved from: http://www.sde.ct.gov/sde/lib/sde/PDF/dgm/report1/basiccon.pdf

As Table 1 indicates, there was not equal representation across all Alliance

Districts. Four of the largest districts in the state (Bridgeport, Hartford, New Haven, and

Waterbury), considered to be the poorest in the state as referenced by their DRG status,

submitted very few plans. Despite multiple efforts, I was unable to obtain any plans from

Waterbury or New Haven and received only one plan from Bridgeport. The implications

of this will impact the generalizability of the study results and will be discussed further in

Chapter 5.

My original plan was to include kindergarten through grade 5 schools; however,

because many of the larger urban districts have a majority of K-8 schools, they too were

included in the sample. This decision was made because K-8 schools are also required to

create a SIP and use the CMT to determine their yearly SPI. Additionally, the focus of

this study is on an SIP’s intended effectiveness as opposed to the individual grade or

level of the students.

The Alliance District sampling frame was selected for three reasons. First,

Alliance District schools are required to submit SIPs to document their plan to improve

student achievement. Second, as a researcher, I have a particular interest in this

population because I am currently an administrator in an Alliance District elementary

school who is required to submit an SIP designed to support my school’s growth on our

SPI. Lastly, a majority of the Alliance District schools have the lowest SPIs in the state,

thus creating a substantial sense of urgency to improve.

It was my goal to obtain at least 150 SIPs because that number would provide

optimal power for the multiple regression analysis used to test my hypothesis. With any

statistical analysis, the larger the sample, the more precise the results. As defined by

statisticians, precision is “the extent to which the same results would be obtained if

another random sample were drawn from the same population” (Pyrczak, 2010, p. 95).

Although 150 plans were not collected, I am confident that the current sample size of 108

provided enough power to support this study. In an effort to increase generalizability, it

was also a goal to obtain SIPs from each of the 30 Alliance Districts. The last reason for

this group to be included within my sampling frame is that these districts are defined as the lowest performing districts in the state, are required to adhere to the same state mandates, and are provided additional funding to increase student achievement. This is in comparison to other non-Alliance Districts, which might not be required to submit a SIP.

Data Sources and Instrumentation

For this study, three forms of school-level data were collected. The first was

whole school scores on 2012-13 SIPs using the PIM School Improvement Audit (Center

for Performance Assessment, 2005), a rubric used in past research (Reeves, 2011) to

evaluate the quality of SIPs. A copy of this instrument can be found in Appendix A. It

should be noted that, although a majority of plans encompassed only the 2012-13 school

year, there were some plans that were multiple year documents. The second type of data

was individual school SPI scores for the 2011-12 and 2012-13 school years, which are

based on the March administrations of the CMT. For this study, SPI scores will be

referred to as measures of achievement. The third source of data included school-level

demographic information such as percentages of students eligible for free and reduced

price meals and percentages of ELL students. Because state level data is historically

released up to 1 year after submission, the demographic data used for this study consisted

of 2011-12 data, as opposed to the plans themselves, which represented the school’s

focus of improvement for 2012-13.

SIP quality based on PIM scoring rubric. The PIM scoring rubric was created

by the Leadership and Learning Center (Center for Performance Assessment, 2005). The

diagnostic tool consists of 30 performance dimensions found within plans that have been

empirically linked to student achievement. The rubric was designed to evaluate a school’s

planning, implementation, and monitoring of their improvement efforts. This was done

because of “clear evidence that when it comes to achievement and equity, planning and

processes are less important than implementation, execution, and monitoring” (Reeves,

2006, p. 62).

The items on the rubric are 30 leadership performance dimensions across five areas: comprehensive needs assessment (e.g., assessment results and acts of

leadership), inquiry process (e.g., possible cause-effect correlations and analysis of adult

actions), SMART goals (specific, measurable, attainable, realistic, timely), design (e.g.,

multiple assessments documented and strategies linked to specific student needs), and

evaluation (e.g., summary data provided and compared and next steps outlined in

evaluation).

The rubric rating scale contains three rating categories for the extent to which

each performance dimension is described in a SIP: exemplary, proficient, and needs

improvement (see Appendix A). Using the described performance criteria, a rater evaluates each dimension of an SIP as follows: exemplary performance receives a score of 3 (high PIM), proficient performance a score of 2 (medium PIM), and needs improvement a score of 1 (low PIM).

The PIM rubric was created by the Leadership and Learning Center using a

double-blind check for consistency. This was done to increase inter-rater reliability and content validity. The original rubric was piloted on more than 100

plans by separate raters without knowledge of the other’s scores (Reeves, 2006).

Consistency in SIP ratings was achieved over 80% of the time. On areas that had poor

reliability (at times less than 36%), the rubric was revised and subsequent tests

administered. The decision to use the PIM School Improvement Audit for this study was

made because it has been field tested and found to be both a reliable and valid measure of

effective school improvement planning and student achievement.

To address the current study’s focus on goal theory, it was necessary to take the

rubric and attempt to align it with concepts from goal theory (i.e., the core findings,

mechanisms through which goals affect performance, and moderators of the goal-

performance relationship). The PIM rubric was not designed according to the components

of goal theory so this process was necessary to see if the PIM rubric would be an

appropriate tool for this study. Additionally, aligning the rubric components to goal

theory characteristics would allow for the assessment of the research question.

To assess alignment of the rubric with goal theory, a rational process was used

whereby the author and the dissertation advisor independently connected each of the

performance dimensions of the rubric with one of the major goal theory concepts. Goal

theory concepts included core findings (specific goals, challenging goals), mechanisms

(directing attention towards goal relevant activities, having an energizing function,

increasing persistence, use of task relevant knowledge to develop new strategies), and

moderators of the goal-performance relationship (goal commitment, feedback). The table

used for making these judgments is included in Appendix B.

The first attempt to align the performance dimensions with goal theory led to only

53% agreement (16 of 30 performance dimensions) between scorers. Two performance

dimensions were not originally connected to goal theory by Judge 2: # 1, strengths, and #

22, documented coaching and mentoring. Appendix B shows the judgments of each rater.

At that time both raters met, discussed the items on which they disagreed, and resolved the disagreements. Final connections are shown in Appendix B.

After thorough discussion, connections were arrived at for 25 of the 30

performance dimensions on the PIM rubric; the list of PIM items for each goal theory

concept is shown in Table 2. Of the five remaining, three performance

dimensions were determined to connect with more than one goal theory concept: #9,

achievement results linked to causes, was categorized in both directing attention and

feedback; #20, results indicators aligned to goals, was categorized in both directing

attention and feedback; and #30, results disseminated and transparent, was categorized

in both goal commitment and feedback. Two characteristics could not be categorized:

#17, demonstrated improvement cycles, and #25, professional development supported

and integrated into key processes and operations.

For this study, the scores from the rubric were analyzed in two ways. The first

analysis was on a whole plan score. This score was compiled by adding all 30 of the

performance dimension ratings for each plan and coming up with one total whole plan

score. The second analysis focused on combining scores from the set of performance

dimensions connected to each of the goal theory concepts as represented in Table 2. For

example, to complete an analysis on goal theory’s feedback concept, I added up the rubric

scoring of the following performance dimensions connected to feedback: #4, acts of

leadership; #9, achievement linked to causes; #16, multiple assessments; #18, frequent

monitoring; #20, aligned results indicators; #26, summary data; and #28, evidence, to get

an average sub-score for feedback. That sub-score average (along with the sub-score averages for the other goal theory concepts) was then analyzed for correlations with student achievement.
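To make the scoring arithmetic concrete, the following is a minimal sketch in Python (with hypothetical names; the study’s own scoring was done by hand and in SPSS) of how a whole plan score and the feedback sub-score could be computed from a plan’s 30 dimension ratings:

    # Minimal sketch of the scoring arithmetic described above (hypothetical names).
    FEEDBACK_DIMS = [4, 9, 16, 18, 20, 26, 28]  # dimensions connected to feedback

    def whole_plan_score(ratings):
        """Sum all 30 dimension ratings (each 1-3), yielding a 30-90 total."""
        return sum(ratings.values())

    def concept_subscore(ratings, dims):
        """Average the ratings for the dimensions tied to one goal theory concept."""
        return sum(ratings[d] for d in dims) / len(dims)

    ratings = {dim: 2 for dim in range(1, 31)}       # placeholder plan rated proficient throughout
    print(whole_plan_score(ratings))                 # 60
    print(concept_subscore(ratings, FEEDBACK_DIMS))  # 2.0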

Because the goal theory core finding challenging goals had only one performance

dimension (#12, achievable goals), it was combined with specific to create a new

category referred to as core findings – challenging and specific goals. Directing attention had seven performance dimensions; serving an energizing function and increasing persistence both had zero; use of knowledge in developing strategies had nine; commitment had two; and feedback had seven.

Table 2

Rubric Performance Dimensions and their Connections to Goal Theory

Challenging: #12 Achievable Goals
Specific: #1 Strengths; #10 Specific Goals; #11 Measurable Goals; #14 Timely Goals
Directing Attention: #2 Assessment Results; #8 Adult Actions; #9 Achievement Linked to Causes; #13 Relevant Goals; #20 Aligned Results Indicators; #21 Adult Learning; #24 PD Driven by Student Needs
Energizing Function: (no performance dimensions)
Increase Persistence: (no performance dimensions)
Strategy: #3 Teacher Practice; #6 Cause-Effect Correlations; #7 Specific Need Strategies; #15 Focused Action Steps; #19 Rapidly Implement and Sustain; #22 Coaching and Mentoring; #23 Strategies Linked to Student Needs; #27 Knowledge and Skills; #29 Next Steps
Commitment: #5 Engaged Stakeholders; #30 Results Disseminated
Feedback: #4 Acts of Leadership; #9 Achievement Linked to Causes; #16 Multiple Assessments; #18 Frequent Monitoring; #20 Aligned Results Indicators; #26 Summary Data; #28 Evidence; #30 Results Disseminated

Student achievement – SPI school score, 2011-12 and 2012-13. The second

source of data was school SPI scores from both 2011-12 (the baseline score) and 2012-

13. The standardized assessment tool used to calculate a school’s SPI is the CMT. The

SPI has been designed as a single index reflecting student achievement across all tested

subject areas. As stated in Connecticut’s ESEA flexibility waiver, the state’s target is for

schools to achieve 88 of 100 possible performance index points. Schools scoring at this

level would have performed at the goal level or above on a majority of the standardized

tests taken. A school’s baseline score (its 2011-12 score) was computed by averaging

three years of CMT assessment data. The state then measures progress against that

original 3-year baseline using a single year of data annually beginning with the spring

2013 data (CSDE, 2012a). Unlike requirements under NCLB, the SPI allows for partial

credit for students scoring below proficient on a given state assessment (Erpenbach,

2009). The SPI is designed to be a status measure for schools, where status is defined as

“the academic performance of a student or group (a collection of students) at a single

point in time” (Castellano & Ho, 2013, p. 12).

SPI scores from 2011-12 and 2012-13 were selected because the SPI is a new

measure identified by the CSDE and should be an appropriate tool to identify the

temporal relationship between effective school planning and increased student

achievement. The SIPs evaluated for this study are the ones currently being used to guide

improvement and hold individual schools accountable, meaning that the content of the

plan (strategies and actions for school improvement) needed to be relevant for the 2012-

13 school year. Because schools have implemented their plans (either as developed for a

one year plan or by making yearly adjustments for a multi-year plan) it would be

appropriate to expect increased student achievement from spring 2012 to spring 2013.

Additionally, under Connecticut’s flexibility waiver, schools are expected to make

continual growth from their baseline SPI (2011-12), moving closer each year to the state

goal of 88.

Connecticut Mastery Test. The CMT is important for this study because, as

mentioned earlier, it is used as the basis for creating an individual school’s SPI. For

Connecticut elementary schools, the CMT is administered to students in Grades 3

through 8 assessing students in the content areas of reading, mathematics, and writing;

science is also assessed in Grades 5 and 8 (CSDE, 2012a). The CMT currently designates

five levels of performance: (a) Below Basic (BB), (b) Basic (B), (c) Proficient (P), (d)

Goal (G), and (e) Advanced (A). For purposes of calculating the SPI, students are

credited by subject based on their performance on the CMT in the following ways: (a)

BB = 0.0 points, (b) B = 0.33 points, (c) P = 0.67 points, (d) G or A = 1.0 points. To

calculate a subject-specific SPI, the following equation is used (CSDE, 2012a, p. 87): SPI_subject = (%BB * 0.0) + (%B * 0.33) + (%P * 0.67) + (%G or A * 1.0). The subject-

specific SPIs are then averaged to create an SPI for each school, district and subgroup

based on all of the students who were tested. More detailed information on SPI

calculations is available in Connecticut’s ESEA flexibility waiver (CSDE, 2012a).
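As a worked illustration of this calculation (the percentages below are invented for the example, not actual CMT results), a subject-specific SPI and a school-level SPI could be computed as follows:

    # Worked illustration of the SPI formula quoted above; percentages are invented.
    def subject_spi(pct_bb, pct_b, pct_p, pct_g_or_a):
        """Subject-specific SPI from the percentage of students at each CMT level."""
        return pct_bb * 0.0 + pct_b * 0.33 + pct_p * 0.67 + pct_g_or_a * 1.0

    reading = subject_spi(5, 15, 30, 50)   # 75.05
    math = subject_spi(10, 20, 30, 40)     # 66.7
    school_spi = (reading + math) / 2      # subject SPIs are averaged for the school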

The content of the CMT is expected to represent the most important skills

reflected in the standards of Connecticut’s Curriculum Framework. In addition to

assessing what students know, the CMT also employs performance tasks to measure what

students can do with the information they have learned (Hendrawan & Wibowo, 2012).

The mathematics section contains multiple-choice items, grid-in response items, and

open-ended responses that are scored using a 0-1, 0-2, or 0-3 scale. The reading section

consists of two subtests, the Degrees of Reading Power (DRP) and Reading

Comprehension. The DRP is a single session multiple choice test containing seven

passages, each of increasing difficulty. The Reading Comprehension test consists of two

sections containing both multiple choice and open ended items. The open ended items are

scored using a 0-2 scale. The writing section contains two subtests, Editing and Revision

and the Direct Assessment of Writing (DAW). The Editing and Revision is completed in

one session containing multiple choice questions, and the DAW is a single 45 minute

prompt scored using a 2-12 scoring scale. Lastly, science is assessed in grades 5 and 8

and contains multiple choice and open ended items. The open ended items are scored

using a 0-2 scoring scale and the test is completed in one session (Hendrawan &

Wibowo, 2012).

Testing items were created to reflect content standards using the Connecticut

Curriculum Frameworks and underwent reviews by the testing company, Measurement

Incorporated, a CMT content advisory committee (made up of content experts, regular

and special education teachers, and CSDE curriculum and assessment specialists), and a

fairness committee before being piloted with students in Connecticut in grades 3 through

8. Additionally, CSDE appointed Assessment and Evaluation Concepts, Inc. (AEC) to

complete a comprehensive analysis of test items to determine the match between content

items and respective content strands. Hendrawan and Wibowo (2012) report that “AEC

concluded that CSDE has done a solid, quality job in matching the test items included on

the CMT4 with the relevant content strands and standards of the Language Arts and

Mathematics Curriculum Framework” (p. 12).



In order to establish validity for a newly developed assessment, it is common to

correlate scores from new tests with scores from other tests designed to measure similar

content (Hendrawan & Wibowo, 2012). This has been done with each version of the

CMT. Additionally, correlations between the writing portions of the CMT and other

language arts tests (Degrees of Reading Power, Reading Comprehension, Editing &

Revision) were also calculated in an effort to increase evidence of construct validity

(Hendrawan & Wibowo , 2012). Validity, as defined by Mertens (2010), “is a unified

concept and that multiple sources of evidence are needed to support the meaning of

scores” ( p. 384).

The CMT has also undergone multiple checks on reliability, or the consistency of

test scores (Hendrawan & Wibowo, 2012, p. 19). Because the CMT was piloted in a

single administration, internal consistency has been evaluated using Cronbach’s alpha. Hendrawan and Wibowo (2012) reported alpha only for the writing portion of the CMT. For the 2011 administration, the CMT Writing Cronbach’s alpha was 0.89 for Grade 3, 0.86 for Grade 4, and 0.88 for Grade 5, indicating good reliability.
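For reference, a minimal sketch of the standard Cronbach’s alpha formula (the general statistic, not the CMT developers’ exact computation):

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (examinees x items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)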

Student demographic data. School-level demographics from 2011-12 were used

as control variables because they have been found in previous research (Fernandez, 2009;

Hayes, Christie, Mills, & Lingard, 2004; Phelps & Addonizio, 2006) to relate to student

achievement and are publicly available. In order to try to rule out the possibility

that a relationship found between SIP quality and student achievement is due to other

factors, it was necessary to control for factors which may impact student achievement.

This study is similar to Fernandez’s (2009) study which controlled for ten factors

including per pupil spending, percentage of minority students, percentage of students



eligible for free and reduced price lunch, percentage of students with Limited English

Proficiency, students who have Individualized Education Plans, type of schools

(elementary, middle, high school), size of school (enrollment), transiency rate, and

percentage of highly qualified teachers. To the extent possible using publicly available

data, the current study replicated Fernandez’s (2009) control variables. The current study

controlled for the following: percentage of minority students, percentage of students

eligible for free and reduced price lunch, per pupil spending (reported only at the district

level), percentage of students who have Individualized Educational Plans, percentage of

students with Limited English Proficiency and school size (enrollment). The choice of

control variables was also consistent with the classifications in Connecticut’s District

Reference Groups (DRGs) where more affluent towns are assigned to higher DRG levels

and less affluent towns are clustered in the lower DRGs.

DRGs are a classification system where districts with similar socioeconomic

status and need are grouped together. DRGs are classified on seven criteria including

income, education, occupation, family structure, poverty, home language, and district

enrollment. There are nine DRGs ranging from DRG A representing very affluent low-

need suburban towns to DRG I representing high-need and low socioeconomic areas.

Charter schools, technical schools, and Regional Education Service Centers are not given

a DRG classification. DRGs are not intended to assess the quality of instruction of

schools within a district. They were created to describe the characteristics of the families

with children attending Connecticut Public Schools (CSDE, 2006). DRG assignments of Connecticut Alliance Districts are summarized in Table 3 and show that a majority of

Alliance Districts (27/30) are drawn from the lowest three DRG classifications.

Table 3
Number of Connecticut Alliance Districts Included within each DRG

DRG    Number of Alliance Districts / Total Districts per DRG

A 0/9
B 0/21
C 0/30
D 1/24
E 0/35
F 2/17
G 11/17
H 9/9
I 7/7

Procedure

Obtaining SIPs. Because this study included data that is publicly available,

attempts to collect individual SIPs began in March 2013 while the dissertation proposal

was being written. All 30 Alliance District superintendents were originally contacted by

mail requesting permission to obtain a copy of individual school plans. A copy of the

letter, which was co-signed by my district leadership team, is included in Appendix C.

Districts that did not respond after 2 weeks were then contacted again both by phone and

email. District and school websites were also viewed in an attempt to obtain individual

school plans. In the initial attempt, 54 individual school plans from 19 districts were

received. A second attempt was made by again calling non-responding superintendents’

offices during June 2013. Ongoing calls and emails were sent and this process concluded

in October 2013 with a final set of 108 usable school improvement plans representing 25

Alliance Districts. Two additional plans were received totaling 110 plans, however,

neither one could be included in the analysis. One of the plans was from a school that had closed the previous year, and the other was a district improvement document and thus did not address an individual school’s improvement.

Scoring plans. As the researcher, I scored all plans using the PIM rubric

described earlier and located in Appendix A. It was essential that plans were scored

consistently so the resulting data were reliable. To evaluate inter-rater reliability, Dr. Michael Wasta, a retired superintendent of schools who currently consults with Connecticut school districts on school improvement planning, and I coded the first 20 plans separately. The goal was to achieve an inter-rater correlation of .80.

To begin this process, both scorers met in August 2013 and discussed each of the

30 PIM rubric performance dimensions to develop a general understanding of each

performance dimension. The researcher and Dr. Wasta then scored two plans together

and discussed their ratings and rationale for each rating. By discussing the characteristics

of each performance dimension and the criteria needed to obtain a score of a 3, 2, or 1,

the scorers developed an operational definition of each performance dimension. Over the

next two weeks, the same 20 plans were scored independently with a ‘total plan’

reliability rating (correlation coefficient between the two sets of ratings) of .75. Because

that was very close to our goal of .80, I continued scoring the plans independently. Table

4 shows the initial reliability for the whole school plan ratings as well as the subscore

reliability at each of the three phases of scoring.



Table 4
Inter-Rater and Intra-Rater Reliability for SIP Quality Ratings

Sample                                            Total Plan  Challenging and Specific  Directing Attention  Strategies  Commitment  Feedback
20 plans (inter-rater reliability)                .750        .780                      .800                 .690        .230        .740
Random sample of 10 from first 40 (intra-rater)   .995        .980                      .950                 .500        .100        .860
Random sample of 10 from last 50 (intra-rater)    .986        .995                      .973                 .968        .778        .982

To continually check my scoring consistency, I analyzed reliability twice after the inter-rater reliability check. The first check took place after the next 40

plans (plans 21-60) were scored as I re-scored 10 randomly selected plans. To determine

which plans would be re-scored, all plans were entered into an SPSS file which created a

random variable for each plan. The ten lowest numbers were selected to be re-scored.

This allowed each plan to have an equal chance of being rescored.
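A minimal sketch of this selection step (in Python rather than SPSS; the plan identifiers are hypothetical) might look like:

    import random

    plan_ids = list(range(21, 61))                      # the 40 plans scored in this phase
    rand_keys = {p: random.random() for p in plan_ids}  # a random value attached to each plan
    rescore_sample = sorted(plan_ids, key=rand_keys.get)[:10]  # ten lowest values get re-scored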

I computed the correlation coefficient between my initial scores and my re-scores

for these 10 plans, which I refer to as ‘intra-rater reliability.’ The total plan score

reliability was extremely high (.995), however, two of the goal theory concepts,

strategies and commitment were well below the total score reliability. To address this, I

looked at each performance dimension for the 10 plans that were randomly sampled and

identified eight performance dimensions that showed the greatest variation in my scoring:

#3, teacher practices; #5, engaged stakeholders; #6, possible cause-effect correlations;

#8, analysis of adult actions; #9, achievement results linked to causes; #15, purposeful,

focused action steps; #21, adult learning and change process considered; and #23,

strategies linked to specific student needs. Although there was never more than a one

point discrepancy in scoring (meaning a score of a 1 on the first scoring and a 3 on the

second), these eight performance dimensions needed to be further clarified before I

continued to score the remaining plans. I again met with Dr. Wasta to review the scoring

rubric for these eight performance dimensions and elaborated on the criteria needed in an

effort to increase my scoring consistency. After this process, I rescored these eight

performance dimensions on all plans and the scores obtained from this process were used

for final analysis. A copy of the clarified eight performance dimensions, which were used

to re-score all plans, can be found in Appendix D.

Finally, for the last set of 50 plans the same process for randomly selecting 10

plans was used to identify a set of plans to be rescored. This second set of scores again

showed a high whole plan intra-rater reliability (.986), but this time I substantially

increased the reliability for the strategies and commitment goal-theory concepts. As Table

4 shows, all categories achieved an intra-rater reliability of .778 or higher, and the

majority of categories were above .973.

Data Analysis

This study used multiple regression analysis to assess the relationship between

SIP quality (the predictor variable) and student achievement (the outcome variable).

Multiple regression was the selected analysis because it allows assessment of the

relationship of SIP quality with student achievement while controlling for other school

level factors discussed earlier. The regression coefficient for SIP quality was used to test

the hypothesis about the relationship between SIP score and student achievement. I did,

however, compute correlations between all variables using the Pearson product-moment

correlation coefficient as a preliminary step before completing the regression analysis.

This information can be found in Table 6.
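A sketch of this analysis plan in Python, using pandas and statsmodels in place of SPSS (the file name and column names are hypothetical), could look like:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("alliance_schools.csv")  # one row per school; hypothetical file

    print(df.corr())  # preliminary Pearson correlations among all variables (cf. Table 6)

    model = smf.ols(
        "spi_1213 ~ spi_1112 + sip_total + pct_minority + pct_ell"
        " + enrollment + pp_spending + pct_frl + log_sped",
        data=df,
    ).fit()
    print(model.summary())  # the sip_total coefficient carries the hypothesis test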

Ethical Considerations

This research was submitted to the CCSU Human Studies Council and deemed

exempt from committee review. A letter of exemption from review is included in

Appendix E. Additionally, all Alliance District superintendents were first contacted

(Appendix C) to obtain permission before contacting any schools in an effort to obtain an

individual SIP. Lastly, all SIPs and SPI data are publicly available, and no data deal with

individuals. Because of this there is little chance that anyone could have been harmed by

the conduct of this research.

Research Rigor

The student achievement measure (the CMT) used for this study has been shown to be valid and reliable. Additionally, the PIM scoring rubric has been evaluated, adjusted, and implemented for consistent results in determining the quality of a school plan. That, along with the research design described, allowed for a data analysis broadly representative of the districts that make up Connecticut’s Alliance Districts. Unlike other studies, the

available population was relatively small, so to achieve high statistical power I worked to

obtain as many SIPs as possible, ending up with a total of 108 School Improvement Plans

for my analysis.

Limitations

This study was not conducted using a random sample and therefore the

generalizability of results to the total population is uncertain. Because every Alliance

District elementary school did not provide their SIP for this study, external validity, or

the “extent to which findings in one study can be applied to another situation” (Mertens,

2010, p. 129), cannot be guaranteed.

Internal validity can also be considered a limitation because this study was not

conducted as an experiment or quasi-experiment, so drawing firm cause-effect

conclusions about school improvement planning was not possible. There are many

potential extraneous or lurking variables that might have impacted the relationship

between SIP quality and student achievement, the most important of which I discuss in

Chapter 4.

Another limitation of this study is that it dealt with the intended, rather than

actual, implementation and monitoring of the SIPs. The PIM rubric is used to measure the

quality of three phases of school improvement: the planning phase (the focus of this

study), the implementation phase, and the monitoring phase. As the literature has

indicated (Fernandez, 2009; Heroux, 1981; Reeves, 2004; White, 2009), much of what is

written in plans is for documentation and compliance purposes and may not accurately

describe adult actions within a school. The present study did not assess whether each of

the steps on a school’s plan were actually implemented or the degree to which they were

truly monitored.

Each SIP was scored by a human reader, thus reliability of scoring can be a

limitation. Reliability, or the degree to which measurements are free from error (Mertens,

2010), was evaluated at multiple points during the study and adjustments, as described

earlier, were made to increase consistency.

The last limitation is that the PIM School Improvement Audit was not created

using the principles of goal theory. The document was created using current literature on

best practice. For this study, it was necessary to align the performance dimensions of the

rubric to goal theory; therefore, the match between the rubric and theory was not exact.

Role of the Researcher

As an elementary school principal in an Alliance District, it is essential that I

know if certain components of school improvement planning are empirically linked to

student achievement. As Figure 1 indicated, planning of an effective SIP and student

achievement results impact many people. Those impacted by the process include

students, teachers, the community, and me as an administrator. If this study can

empirically link goal theory to student achievement, then I fill a current gap in

educational research and can use that information to improve my school leadership and

ability to plan for school improvement. More importantly, this information can be used

by other similar schools to create a document that is associated with increased student

achievement. Because the study is designed to improve upon my professional practice

and no risks are associated with the results, the likelihood of researcher bias is minimal.

CHAPTER 4

RESULTS

This chapter reports the findings regarding the effect of school improvement

planning on student achievement. This study was a replication of Fernandez’s (2009)

study quantifying components of SIPs associated with increased student achievement

with the addition of a focus on goal theory (Locke & Latham, 2002, 2006) and on low-

performing districts. For this study, three forms of school-level data were analyzed. The

first was whole school scores on 2012-13 SIPs using the PIM School Improvement Audit

(Center for Performance Assessment, 2005). The second included individual school SPI

scores for the 2011-12 and 2012-13 school years. The third was seven forms of school-

level demographic information including percentages of students eligible for free and

reduced price meals, special education population, per pupil spending, and percentages of

ELL students. For this study, achievement was measured by SPI scores.

The hypothesis for this study was that schools creating quality SIPs consistent

with goal theory principles would have higher levels of student achievement than schools

with lower quality SIPs. The first research question asked whether components of plans

consistent with the goal theory principles of specific and challenging goals, or any of the

mechanisms (directing attention, serving as an energizing function, increasing

persistence, leading to development of a strategy) or moderators (goal commitment,

feedback) identified by Locke and Latham (2002, 2006) would be associated with

increases in student achievement. The second research question evaluated whether school

improvement planning predicted achievement of particular subgroups. The hypothesis

and research question were tested using multiple regression.



Before analysis was completed, descriptive statistics and skewness were analyzed

on all variables. This was done to ensure the analysis would not be distorted by

substantially skewed data. The only variable showing high skewness was percentage of

students identified as special education, with a skewness statistic of 5.573 (a value of 2 or

greater can be considered high; values closer to zero indicate less skewness). To account

for this I conducted three transformations in an attempt to reduce the skewness (log base

10, square root, and inverse). The log special education variable produced the smallest

skewness (.472) and was used for all analyses.
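A brief sketch of this screening step (the values below are illustrative; the study’s own variable had a raw skewness of 5.573):

    import numpy as np
    from scipy.stats import skew

    sped = np.array([0.9, 1.0, 1.1, 0.8, 1.2, 0.95, 1.05, 4.5])  # illustrative positive values
    transforms = {"raw": sped, "log10": np.log10(sped),
                  "sqrt": np.sqrt(sped), "inverse": 1.0 / sped}
    for name, values in transforms.items():
        print(name, round(skew(values), 3))  # keep the transform with skewness nearest zero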

Descriptive Statistics

Tables 5 and 6 provide correlations and descriptive statistics for each of the

predictor variables used in this study as well as for SIP scores (N = 108). School-level

predictor variables include achievement scores for 2011-12, percentage of minority

students, percentage of special education students, percentage of English Language

Learners, enrollment, per pupil spending, and the percentage of students qualifying for

free and reduced price lunch. Achievement scores from 2012-13 serve as the outcome

variable.

The average SIP score, obtained using the PIM School Improvement Audit, was

44.64 (SD = 11.11) out of a possible range of 30.0 to 90.0 points. Of the 108 plans

scored, 46% of the plans (n = 50) scored in the 30 - 39 point range, 21% of the plans (n =

23) scored in the 40 - 49 point range, 21% of the plans (n = 23) scored in the 50 - 59

point range, and only 11% (n = 12) plans scored higher than 60. One plan had a total

score of 30 points representing the lowest level of planning, implementation, and

monitoring; the most highly scored plan received 75 points.



Mean achievement scores, as represented by the School Performance Index (SPI), were 71.71 for 2011-12 and 69.67 for 2012-13. As stated in Connecticut’s ESEA

flexibility waiver, the state’s target is for all schools and subgroups within a school to

achieve 88 of 100 possible performance index points. My results indicate that most Alliance District schools have not met this standard. Schools obtaining an SPI of 88 or above

would have performed at the goal level or above on a majority of the standardized tests

taken. An SPI of 67, which is very close to the 2012-13 mean achievement scores for this

study, would indicate that schools were scoring, on average, at the proficient level (CSDE, 2012a). Table 5 summarizes the breakdown of achievement scores for the 108 schools

within this study. Of the 108 schools within this study, only two met or exceeded the

target of 88.

Table 5
Distribution of 2012-13 Achievement (SPI) scores
2012-13 SPI Ranges Number of Schools Percentage

20.0 - 29.9 1 .9%

30.0 - 39.9 0 0%

40.0 - 49.9 3 3%

50.0 - 59.9 13 12%

60.0 - 69.9 28 26%

70.0 - 79.9 47 44%

80.0 - 89.9 16 15%



As Table 6 indicates, there was a small but statistically significant positive

relationship between SIP score and the outcome variable, 2012-13 student achievement, r

= .237, p <.05. According to Cohen (1988), r values less than .24, while significant, are

considered small. There was a very strong positive correlation between 2011-12

achievement and 2012-13 achievement, r = .920, p <.01. Other significant correlations

with 2012-13 achievement data include the following control variables: percent minority

r = -.541, p <.01, percent special education, r = -.306, p <.01, percent ELL, r = -.405, p <

.01, and percent free and reduced lunch, r = -.719, p <.01.

Table 7 displays the data from the top 20 schools (as noted by their 2012-13 SPI

score) and their total plan scores in comparison with the lowest 20 schools. There was, on

average, a 6 point differential in total plan score between these two groups.

Table 6
Bivariate Correlations Among SIP Total Score, Student Achievement, and Control Variables (N = 108)
Variable Mean SD 1 2 3 4 5 6 7 8

1. SIP Total 44.64 11.11 1.00

2. Achievement 11/12 71.71 10.47 .178 1.00
3. Achievement 12/13 69.67 10.54 .237* .920** 1.00
4. %Minority 57.89 22.84 .164 -.570 ** -.541 ** 1.00

5. % Sp Ed 1.03 .18 -.253 ** -.288 ** -.306 ** .142 1.00

6. % ELL 11.36 9.86 .167 -.396 ** -.405 ** .550 ** -.057 1.00

7. Enrollment 417.96 143.57 .219 * -.125 -.083 .158 .032 .317 ** 1.00

8. PP Spending 14,129.13 1830.91 .312 ** -.009 .002 .348 ** -.123 .294 ** .202 * 1.00

9. % FR 56.03 21.43 .000 -.752 ** -.719 ** .749 ** .275 ** .473 ** .203 * .208 *
*p < .05. **p < .01.

Note. % Minority = percentage of students identified as minority; % SpEd = percentage of students identified as special education; %
ELL = percent of students identified as English Language Learners; PP Spending = per pupil spending; % FR = percentage of
students identified as eligible for free and reduced price lunch.

Table 7
Distribution of SIP Scores for 20 Highest and 20 Lowest Achieving Schools

Group                                   Mean Plan Score   30-39   40-49   50-59   60-69   70-79
20 Highest School SPIs (88.9 - 78.7)    47                9       3       5       1       2
20 Lowest School SPIs (29.9 - 62.1)     40.9              12      4       2       2       0

Test of Hypothesis – SIP Quality and Student Achievement

The main hypothesis for this study was that schools creating quality SIPs

consistent with goal theory principles would have higher student achievement than

schools with lower quality SIPs. The test of the hypothesis was done utilizing multiple

regression. The outcome variable was student achievement scores for 2012-13. A

school’s SIP score was included as a predictor variable to test the hypothesis that quality

of school improvement planning does impact student achievement. The 2011-12

achievement score was also used as a predictor. Additionally, the demographic factors

shown in Table 6, a majority of which were used in Fernandez’s (2009) study, were

included as predictor or control variables.

For this study, I utilized a residualized change approach as opposed to analyzing

raw change in student achievement between two years. Raw change scores (a difference

between achievement scores from 2012-13 and 2011-12) may have low reliability and do

not take into account regression to the mean; both present potential problems that the

residualized change approach avoids. The residualized change approach involves



including 2011-12 achievement scores as a predictor variable, removing from the

outcome variable any variance shared between the 2 years of scores.
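In model form (my notation, not the study’s), the residualized change specification can be written as:

    \mathrm{SPI}^{2012\text{-}13}_{i} = \beta_0 + \beta_1\,\mathrm{SPI}^{2011\text{-}12}_{i} + \beta_2\,\mathrm{SIP}_{i} + \sum_{k}\gamma_k X_{ik} + \varepsilon_i

where the X_{ik} are the demographic controls and the coefficient on SIP quality, beta_2, carries the hypothesis test; including the 2011-12 score as a predictor is what removes the variance shared between the two years.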

The regression results for the main hypothesis (shown in Table 8) show a high

proportion of shared variance between the predictor and outcome variables, with R2 =

.861, meaning 86% of the variance in scores can be accounted for by the combination of

predictor variables. This R-squared was statistically significant at the .001 level.

My hypothesis was tested by the regression coefficient for the SIP score. The SIP

score did have a marginally statistically significant regression coefficient, p = .052, with a

positive standardized coefficient of .085, consistent with the research hypothesis.

None of the demographic control variables had significant regression coefficients even though four of the variables (percentage of minority students, percentage of special education students, percentage of English Language Learners, and percentage eligible for free and reduced lunch) had significant correlations with student achievement. A potential explanation for the absence of significant regression coefficients is multicollinearity, or high degrees of correlation between predictor variables (shown in Table 6). For example, high correlations with percentage of free and reduced price lunch were found for 2011-12 achievement, r = -.752, p < .01, and percentage of minority students, r = .749, p < .01. When multicollinearity arises in a data set, it can be hard to find significant regression coefficients: when predictors are very strongly related to each other, it becomes much more challenging to isolate the unique contribution of one variable while effectively holding the others constant (Menard, 2002).
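One conventional way to quantify this concern is the variance inflation factor (VIF). The sketch below, again with hypothetical column names, shows how such a check could be run with statsmodels; it is illustrative only, not an analysis reported in this study.

    # Sketch: variance inflation factors as a multicollinearity check.
    # Column names are hypothetical stand-ins for the study's predictors.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    df = pd.read_csv("alliance_sips.csv")
    X = sm.add_constant(df[["spi_2012", "sip_total", "pct_minority",
                            "pct_ell", "enrollment", "pp_spending",
                            "pct_frl", "log_pct_sped"]])

    # A VIF well above roughly 10 is a common warning sign that a predictor
    # is nearly a linear combination of the others, which inflates its
    # standard error and makes a significant coefficient hard to detect.
    for i, name in enumerate(X.columns):
        if name != "const":
            print(f"{name}: VIF = {variance_inflation_factor(X.values, i):.1f}")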



Table 8
Multiple Regression Results for SIP Score Predicting Student Achievement

                     Unstandardized   Standard   Standardized        t     Sig.
                     Coefficient      Error      Coefficient
Y-Intercept              10.245        6.476                      1.582    .117
Achievement 11/12          .823         .061         .818        13.398    .000*
SIP Total                  .080         .041         .085         1.966    .052†
% Minority                 .002         .029         .003          .052    .959
% ELL                     -.088         .052        -.083        -1.706    .091
Enrollment                 .003         .003         .040          .976    .332
PP Spending               4.209         .000         .007          .169    .866
% FR                      -.033         .036        -.067         -.934    .352
% SpEd (Log)             -2.118        2.389        -.037         -.886    .378

Note. % Minority = percentage of students identified as minority; % ELL = percentage of students identified as English Language Learners; PP Spending = per pupil spending; % FR = percentage of students identified as eligible for free and reduced price lunch; % SpEd (Log) = logged (base 10) percentage of students identified as special education.
*p < .05. †p = .052

Research Question – Goal Theory

Following the main hypothesis, the first research question for this study was whether components of a SIP consistent with goal theory (e.g., challenging and specific goals; directing attention; strategies; commitment; feedback) would be associated with increased student achievement. As Table 9 indicates, three of the goal theory variables (directing attention, r = .270, p < .01, strategies, r = .226, p < .05, and feedback, r = .256, p < .01) had significant positive correlations with 2012-13 student achievement. There were also very strong and significant correlations between the SIP total score and four of the five goal theory principles (challenging and specific goals, r = .827, p < .01, directing attention, r = .947, p < .01, strategies, r = .892, p < .01, and feedback, r = .941, p < .01).

The test of this research question was also done using multiple regression, again with a residualized change approach rather than an analysis of raw change in student achievement between the two years. The outcome variable was student achievement scores for 2012-13. For this analysis, the predictor variables were the scores individual SIPs received on the five goal theory components: challenging and specific goals, directing attention, strategies, commitment, and feedback. This analysis did not use the total SIP score as a predictor. The only control variable in this analysis was the 2011-12 achievement score.

The regression analysis for the research question shows a strong multiple correlation between the outcome variable and the combination of the predictor variables, with the proportion of shared variance being R2 = .86, meaning 86% of the variance in achievement scores can be accounted for by the combination of prior achievement and the goal theory components. This test was statistically significant at the .001 level. However, none of the goal theory predictors had significant regression coefficients. Similar to the main analysis, the goal theory data presented high degrees of correlation between predictor variables (shown in Table 9), so the issue of multicollinearity appears to have arisen again. This issue will be discussed further in the last chapter.
Table 9
Bivariate Correlations Among SIP Total Score, Student Achievement, and Goal Theory Concepts (N = 108)

Variable                     Mean     SD       1        2        3        4        5        6       7
1. SIP Total                44.64   11.11    1.00
2. Achievement 11-12        71.71   10.47     .178    1.00
3. Achievement 12-13        69.67   10.54     .237*    .920**  1.00
4. Challenging & Specific    1.68     .66     .827**   .099     .150    1.00
5. Directing Attention       1.61     .49     .947**   .237*    .270**   .749**  1.00
6. Strategies                1.41     .40     .892**   .165     .226*    .530**   .833**  1.00
7. Commitment                1.18     .26     .097    -.131    -.082     .051    -.012     .075   1.00
8. Feedback                  1.40     .30     .941**   .201*    .256**   .790**   .916**   .774**  .109

*p < .05. **p < .01.
Table 10
Multiple Regression Results for Goal Theory Concepts Predicting Student Achievement

                           Unstandardized   Standard   Standardized        t     Sig.
                           Coefficient      Error      Coefficient
Y-Intercept                    -1.721        3.778                      -.456    .650
Achievement 11/12                .924         .040         .918        23.172    .000**
Challenging and Specific         .715        1.043         .045          .685    .495
Directing Attention            -3.504        2.537        -.163        -1.381    .170
Strategies                      2.810        1.898         .107         1.480    .142
Commitment                       .602        1.633         .015          .369    .713
Feedback                        3.534        3.809         .100          .928    .356

**p < .001.



Research Question – Subgroup Analysis

To address the issue of demographic variability and current achievement gaps

between subgroups, the second research question evaluated whether school improvement

planning predicted achievement of particular subgroups of students. Each school in

Connecticut receives an individual school SPI, as well as subgroup specific SPIs. SPIs for

the following subgroups are reported: Black or African American, Hispanic or Latino,

English Language Learners, Free/Reduced Lunch Eligible, Students with Disabilities, and

a subgroup labeled High Needs. High needs is defined as “an unduplicated count of

students in the English Language Learners, Free/Reduced Lunch Eligible, and Students

with Disabilities subgroups” (CSDE, 2013).

In order for a subgroup to be reported, a school must have a minimum of 20 students in that particular subgroup. Of the 108 schools within this study, the following numbers of subgroups were reported: Black or African American (n = 77), Hispanic or Latino (n = 96), English Language Learners (n = 41), Free/Reduced Lunch Eligible (n = 108), Students with Disabilities (n = 85), and High Needs (n = 108). The category students with disabilities is commonly referred to as special education on state documents and throughout this study. Due to the smaller sample sizes of some of the subgroups, regression results were reported only for subgroups with full data (N = 108) or nearly complete data (i.e., Hispanic or Latino). Black or African American, English Language Learners, and students with disabilities were not included in the regression analysis because their sample sizes did not represent full or nearly complete data.

Again using multiple regression with a separate analysis for each subgroup, the outcome variable was the subgroup's student achievement score for 2012-13. A school's SIP score was included as a predictor variable to test the hypothesis that the quality of school improvement planning impacts subgroup student achievement. The 2011-12 achievement score was also used as a control variable, as in the main hypothesis test. As with the main hypothesis test, the demographic factors shown in Table 6 were included as predictors (control variables).
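Operationally, this amounts to refitting the same model once per subgroup outcome. A brief sketch follows, with hypothetical column names; each subgroup SPI column is assumed to be missing where the subgroup was not reported.

    # Sketch: one regression per subgroup outcome, with the same predictors
    # as the main model. Column and file names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("alliance_sips.csv")
    controls = ("spi_2012 + sip_total + pct_minority + pct_ell"
                " + enrollment + pp_spending + pct_frl + log_pct_sped")

    for outcome in ["spi_hispanic", "spi_frl", "spi_high_needs"]:
        # missing="drop" removes schools that did not report the subgroup,
        # so each model's n matches the subgroup's reporting counts.
        fit = smf.ols(f"{outcome} ~ {controls}", data=df, missing="drop").fit()
        print(f"{outcome}: n = {int(fit.nobs)}, "
              f"b(SIP) = {fit.params['sip_total']:.3f}, "
              f"p = {fit.pvalues['sip_total']:.3f}")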

Table 11 shows the correlations between the SIP total score and subgroup achievement. All three subgroups had small, yet significant correlations with SIP total (Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r = .288, p < .01).

Table 11
Bivariate Correlations Among SIP Total Score, Student Achievement, and Subgroups

Variable                        Mean     SD       1        2        3        4        5       6
1. SIP Total                   44.64   11.11    1.00
2. Achievement 11-12           71.71   10.47     .178    1.00
3. Achievement 12-13           69.67   10.54     .237*    .920**  1.00
4. Hispanic (n = 96)           62.30    9.85     .299**   .692**   .805**  1.00
5. Free and Reduced (n = 108)  62.68    9.17     .229*    .723**   .842**   .872**  1.00
6. High Needs (n = 108)        61.09    8.30     .288**   .676**   .810**   .843**   .962** 1.00

*p < .05. **p < .01.



Table 12 shows the standardized regression coefficients for each of the subgroups.

As Table 12 indicates, none of the subgroups showed a statistically significant regression

coefficient for SIP total scores.

Table 12
Multiple Regression Standardized Coefficients for Subgroup Exploratory Analysis

                     Hispanic   Free and Reduced   High Needs   Special Education
Achievement 11-12     .723**        .949**          1.024**         .936**
SIP Total             .092          .073             .119           .010
% Minority            .254*         .48              .055           .290*
% ELL               -1.94*         -.130            -.130           .102
Enrollment            .029         -.006            -.008           .019
PP Spending           .065         -.016            -.031           .049
% FR                  .007          .367**           .537**         .167
% SpEd (Log)         -.117         -.034            -.046           .051

Note. % Minority = percentage of students identified as minority; % ELL = percentage of students identified as English Language Learners; PP Spending = per pupil spending; % FR = percentage of students identified as eligible for free and reduced price lunch; % SpEd (Log) = logged (base 10) percentage of students identified as special education.
*p < .05. **p < .01.

Table 12 also indicates that some of the demographic control variables had significant regression coefficients for some of the subgroups, yet not for others. There does not appear to be any consistent, interpretable pattern to the significant results for the demographics. The implications will be discussed further in the final chapter.

Summary

As the results of this study indicate, there is evidence that SIP quality predicts student achievement when controlling for demographic factors. The SIP score had a marginally statistically significant regression coefficient, p = .052, with a positive standardized coefficient of .085, consistent with the research hypothesis. Although none of the goal theory components or specific subgroups had significant regression coefficients, a majority of these predictors were significantly correlated with student achievement and strongly correlated with the overall SIP score. The implications of these findings will be discussed further in the final chapter.



CHAPTER 5

DISCUSSION

This chapter discusses the study findings examining the effect of school improvement planning on student achievement. This study was a replication of Fernandez's (2009) study quantifying components of SIPs associated with increased student achievement, with an additional focus on goal theory (Locke & Latham, 2002, 2006) and low-performing districts. Three forms of school-level data were analyzed. The first was whole-school scores on 2012-13 SIPs, the second was individual school achievement scores for the 2011-12 and 2012-13 school years, and the third was school-level demographic information including percentage of students eligible for free and reduced price lunch, percentage of special education students, enrollment, per pupil spending, and percentage of ELL students. This study examined the quality of the plan as written and did not attempt to evaluate the quality of implementation or monitoring, a component many (Reeves, 2006; White, 2009; White & Smith, 2010) feel is equally important to the process of school improvement.

The population included low-performing school districts that are part of

Connecticut’s newly formed Alliance Districts (CSDE, 2012a). The schools located

within these districts are required to submit SIPs. The sampling frame included all

elementary or K-8 schools within Connecticut’s Alliance Districts. The actual sample,

though not fully representative of all Alliance Districts, was 108 schools that provided

plans for inclusion in this study.

The hypothesis for this study was that schools creating quality School

Improvement Plans (SIPs) consistent with goal theory principles would have higher

student achievement than schools with lower quality SIPs. The first research question

asked whether components of plans consistent with the goal theory principles of specific

and challenging goals, or any of the mechanisms (directing attention, serving as an

energizing function, increasing persistence, leading to development of a strategy) or

moderators (goal commitment, feedback) identified by Locke and Latham (2002, 2006)

would be associated with increases in student achievement. The second research question

examined whether school improvement planning predicted achievement within individual

subgroups. The hypothesis and research questions were tested using multiple regression.

Summary of Findings

Main analysis. Utilizing multiple regression analysis, this study found a small, yet marginally statistically significant positive relationship between the quality of a SIP and student achievement, controlling for previous achievement and demographic factors. The correlation between a plan's score and 2012-13 achievement was r = .237, p < .05. This was consistent with the research hypothesis. The unstandardized regression coefficient suggests that for every one-point increase in SIP total score, predicted achievement would increase by .080 points, holding the other predictors constant; a 10-point improvement in plan quality would thus correspond to a predicted gain of roughly 0.8 SPI points. From an educational leader's standpoint, this is an area which schools can control and improve upon while working to close current gaps in achievement. The study findings are consistent with, but because of the correlational design cannot definitively demonstrate, the idea that SIP quality has an effect on achievement.

Goal theory analysis. The first research question examined whether components of a plan consistent with goal theory (e.g., challenging and specific goals; directing attention) would be associated with increased student achievement. Although none of the goal theory principles had a significant regression coefficient, three of the variables (directing attention, r = .270, p < .01, strategies, r = .226, p < .05, and feedback, r = .256, p < .01) had significant positive correlations with 2012-13 student achievement.

Subgroup analysis. The second research question accounted for demographic

variability and the concern for the widening achievement gap between subgroups.

Multiple regression analyses were conducted to observe any statistical association

between SIP total scores and achievement for the following three subgroups of students:

Hispanic or Latino (n = 96), Free/Reduced Lunch Eligible (n = 108), and High Needs (n

= 108). As Table 12 indicates, none of the subgroups had a statistically significant

regression coefficient for SIP total scores, but as indicated in Table 11, all three

subgroups’ achievement scores had significant correlations with the SIP total scores

(Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r =

.288, p < .01).

Consistency of Results with Previous Research

The results from this study are consistent with Fernandez (2009) in that both

studies found evidence that when controlling for multiple demographic factors, the total

plan score predicted change in student achievement. Consistent with Reeves (2011),

individual aspects of SIPs have been shown to be related to achievement.

Another similarity with previous research is that not every component of a SIP can be equally associated with significant gains in student achievement. As was the case with the PIM study (Reeves, 2011), certain components of planning were shown to be more strongly correlated with achievement. A supplemental analysis of individual performance dimensions from this study showed that the following five dimensions had the largest statistical correlations with 2012-13 student achievement: #6, possible cause-effect correlations (r = .298, p < .01); #16, multiple assessments documented (r = .284, p < .01); #2, assessment results (r = .247, p < .01); #8, analysis of adult actions (r = .234, p < .05); and #13, relevant goals (r = .227, p < .05). That being said, the review of the literature and findings from this study support past writings of Elmore (2000), Reeves (2006), and White (2009), that leadership, teaching, and adult actions matter. If schools are going to use their SIPs to guide their improvement process, characteristics such as these should be included within plans, which should then be implemented with fidelity and continually monitored for optimal success.

This study contributes to the existing literature with additional support for the idea

that school improvement planning does matter. It is the first to examine goal theory

concepts as predictors of achievement. Although the evidence that goal theory concepts

predict achievement was not strong (no significant regression coefficients), there were

significant correlations in three areas (directing attention r = .270, p < .01, strategies r

=.226, p < .05, and feedback r = .256, p < .01) which support the idea that goal theory

should be further studied as part of the school improvement process.

Additionally, this is the first study to look at whether SIP quality is related to

subgroup achievement. As with the goal theory analyses, there were no significant

regression coefficients for individual subgroups, but there were significant correlations

(Hispanic, r = .299, p < .01, Free and Reduced, r = .229, p < .05, and High Needs, r =

.288, p < .01). This suggests that SIPs as currently constructed in Connecticut Alliance

Districts have not been useful for reducing current gaps in achievement, and might need

to be constructed in different ways to do so.



Theoretical Implications

The results of this study support one theoretical implication. Based on this study’s

findings, goal theory provides a potentially useful framework for thinking about school

improvement planning. Defined by Locke and Latham (2002), a goal is the object or aim

of one’s action. Goals are internal representations of desired states and can be, as in

school improvement planning, constructed as outcomes, events, or processes (Austin &

Vancouver, 1996). Setting goals and planning are now required in schools as a method of increasing productivity, increasing student learning, and ensuring teacher accountability. Goal theory supports the belief that conscious goals impact one's performance; thus, setting goals on a SIP should benefit students. Researchers have found that performance is a function of ability and motivation, so setting goals is an important first step in building the motivation required to accomplish a desired state (Higgins, 1987; Locke & Latham, 2006).

Goal theory, while intended to focus on individual performance, is relevant to school performance because it plays a critical role in thinking strategically, which is important for school improvement planning. Schools should therefore consider goals as they participate in planning. Strategic planning, like school improvement planning, places heavy emphasis on understanding the environment in which planning takes place and adjusting one's actions to that environment (Beach & Lindahl, 2004a). Setting goals will help to focus school efforts to improve student outcomes.

Practical Implications

Based on the results of this study, three practical implications can be drawn. A

first implication is that schools should prioritize developing high-quality SIPs. The

quality of SIPs in this study varied considerably, with a mean SIP score of 44.64 (SD =

11.11). Of the 108 school improvement plans within this study, 46% of plans scored

between 30 and 39 points and 67% of the plans scored below 50 points. This indicates

that a majority of schools would benefit from an individualized assessment of their ability

to plan and specific professional development around multiple areas of school

improvement which have been shown to be associated with increasing student

achievement. It also highlights the fact, consistent with current research, that the process

of school improvement matters and is the only attribute which a school staff can control

(Fernandez, 2009), thus it is an important and worthwhile factor in improving schools.

Areas necessary for systemic improvement (included as performance dimensions

on the PIM scoring rubric), which should be included on SIPs, include leadership

strategies (one of the lowest rated PIM areas in this study), data analysis techniques,

decision making practices, and an evaluation of a school’s readiness to change (Beach &

Lindahl, 2004b; Hall & Hord, 2011; Reeves, 2004; Schmoker, 1999; White, 2009). To

integrate current research with the findings from this study, and to monitor a school’s

progress towards improvement, it is recommended that goal theory concepts also be

deliberately integrated when creating high-quality SIPs. Additionally, consistent with

Beach and Lindahl (2004a), many educators responsible for planning may be unfamiliar

with the knowledge base for effective planning. If legislation requires schools to

continually improve achievement, one way to accomplish that would be to increase the

knowledge base for those who create such plans, so they can then increase the quality of

their plan.

One way to utilize the results from this study to increase plan quality would be for

a school to use the PIM Scoring Audit to score its own plan to determine specific areas

for improvement. Such improvement is something that the current study’s results indicate

is needed.
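As one illustration of what such a self-audit might look like in practice, here is a small sketch; the dimension labels follow the PIM rubric described in this study, while the scores shown are invented for the example.

    # Sketch: tallying a self-scored SIP against PIM performance dimensions.
    # Each dimension is rated 1 (needs improvement), 2 (proficient), or
    # 3 (exemplary); the scores below are hypothetical.
    rubric_scores = {
        "1. Strengths": 2,
        "2. Assessment results": 1,
        "4. Acts of leadership": 1,
        "13. Relevant goals": 3,
        # ...remaining dimensions would be scored the same way
    }

    total = sum(rubric_scores.values())
    priorities = [dim for dim, score in rubric_scores.items() if score == 1]

    print(f"SIP total so far: {total}")
    print("Dimensions rated 1 (priority areas):", "; ".join(priorities))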

A second recommendation would be to incorporate components of goal theory into school plans. A benefit of using goal theory as part of a school improvement plan is prioritizing what matters most. Even though the evidence that goal theory concepts predict achievement was not strong, the theory provides a good conceptual framework for thinking about how to motivate people to achieve desired outcomes, and the significant correlations do provide some evidence of the usefulness of goal theory concepts.

The mechanism, or action leading toward goal accomplishment, with the strongest correlation to student achievement was directing attention (r = .270, p < .01). Schools can improve their ability to direct attention by reporting assessment results with distinctions by subgroup, classroom, pattern, and trend, as opposed to the typical reporting of general standardized assessment results. To direct attention to priority areas for improvement, schools should also concentrate professional development on current student needs, making the best use of limited time and resources.

Another mechanism that was statistically correlated with student achievement was strategies (r = .226, p < .05). I therefore recommend that research-based strategies be utilized in plans. The strategies selected should be directly related to the individual school's needs (as opposed to a generic new initiative), with an understanding among all staff of why, how, and in what context the strategies should be implemented (Center for Performance Assessment, 2005).



The moderator of the goal-performance relationship with the strongest correlation to achievement was feedback (r = .256, p < .01). Feedback serves as a tool to continually monitor progress toward stated goals and to make frequent adjustments (Reeves, 2004; Schmoker, 2004; White, 2009). Consistent with Hattie (2012), feedback is an essential ingredient of learning, and this would seem to be true for both student and adult learners. Without frequent and accurate feedback, it is "difficult or impossible for them to adjust the level or direction of their effort or to adjust their performance strategies to match what the goal requires" (Locke & Latham, 2002, p. 708). Schools can improve the quality of their feedback by articulating the degree to which leaders monitor a school's performance, set direction, and communicate progress to community stakeholders. This is something that is typically done during the data team process; however, the frequency of team meetings varied, as referenced by individual school plans.

Feedback is an important component of the learning process, and data from this study suggest that schools can improve how they incorporate it into their school improvement process. When looking at individual performance dimensions from the SIPs in this study and their mean scores (out of a possible 3), some of the lowest rated performance dimensions were connected to feedback. For example, #26, summary data provided and compared, M = 1.21 (SD = .41); #27, anticipated knowledge and skills, M = 1.13 (SD = .34); and #30, results disseminated and transparent, M = 1.03 (SD = .17) all address feedback. As stated on the PIM rubric, a score of 1 indicates a low level of planning, implementation, and monitoring, thus needing improvement. The plan scores therefore indicate there is ample room for improvement on feedback.



A third practical implication would be for schools to focus SIP efforts to directly address subgroup performance. Even high-quality SIPs within this study did not fully address this area, even though examining subgroup performance is a necessary step to continually close gaps in achievement. Table 3 indicates that 27 of 33 Alliance Districts (82%) come from the lowest three DRGs in the state and often have multiple subgroups for which they are held accountable. As stated in the literature review, states have recognized the importance of school accountability and are using student achievement and the process of school improvement planning as a method of distinguishing effective and ineffective schools (Phelps & Addonizio, 2006). One determining factor in classifying a school as effective is the ability to reduce current gaps in achievement.

In this study, SIP total score did not have significant regression coefficients for any of the subgroups. A likely explanation is that not all of the plans were written to address academic disparities between subgroups and, surprisingly, many of the plans did not include any subgroup goals or strategies. If the focus of a plan was not on reducing current achievement gaps, it would make sense that no statistical relationship was found.

If schools are going to use their SIP to reduce current gaps in achievement,

including specific components of the PIM School Improvement Audit could be beneficial

in achieving these goals. Of the 30 performance dimensions on the rubric, six are written

to address subgroup achievement: #2, assessment results, which includes making

distinctions by subgroups; #10, specific goals, which targets specific groups of students;

#12, achievable goals, which are designed to close subgroups’ learning gaps in three to

five years; #23, strategies linked to specific student needs, which could be written to

address individual subgroups; #24, professional development driven by student needs,



which is designed to meet specific subgroup needs; and #28, required evidence for

evaluation, which focuses on students whose performance puts them at risk of opening

additional achievement gaps. Including characteristics such as these in a plan would

increase the degree to which subgroup performance was planned for and monitored.

It should be noted that I conducted a follow-up analysis of rubric scores on these individual items to see if they correlated with subgroup performance. Achievement for all three subgroups had small, yet significant correlations with performance dimension #2, assessment results (Hispanic, r = .376, p < .01, Free and Reduced, r = .197, p < .05, and High Needs, r = .215, p < .05). This performance dimension was also statistically correlated with overall student achievement across all 108 plans (r = .247, p < .01).
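For readers who wish to replicate this kind of item-level follow-up, a minimal sketch follows; the column and file names are hypothetical, and pearsonr returns the correlation coefficient along with its two-sided p value.

    # Sketch: correlating one rubric dimension with subgroup achievement.
    # Column and file names are hypothetical stand-ins for the study's data.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("alliance_sips.csv")

    for outcome in ["spi_hispanic", "spi_frl", "spi_high_needs"]:
        # Drop schools where the subgroup SPI was not reported before
        # computing the pairwise correlation.
        pair = df[["dim02_assessment_results", outcome]].dropna()
        r, p = pearsonr(pair["dim02_assessment_results"], pair[outcome])
        print(f"{outcome}: r = {r:.3f}, p = {p:.3f}")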

Educational leaders understand the importance of closing gaps in achievement, and this study provides data showing that SIPs currently do little to address this. Specific performance dimensions on the PIM scoring rubric were written to address achievement gaps and, therefore, should be included within plans. Looking across all 108 plans at the performance dimensions that specifically addressed subgroup performance, the highest rated dimension, #12, achievable goals, M = 1.88 (SD = .84), was still below a score of 2, indicating a less-than-proficient level. The lowest rated performance dimension associated with subgroup performance was #28, required evidence for evaluation, M = 1.26 (SD = .46), which specifically identifies students at risk of opening (as opposed to closing) achievement gaps and should be evident across all plans. Taken collectively, these are important findings for school leaders, and schools should prioritize the following six performance dimensions for specific subgroups when crafting SIPs: #2, assessment results; #10, specific goals; #12, achievable goals; #23, strategies linked to specific student needs; #24, professional development driven by student needs; and #28, required evidence for evaluation.

Limitations and Suggestions for Future Research

As with any study, there are limitations. One limitation concerns external validity. This study was not conducted using a random sample that would be representative of the population. As noted earlier, one particular segment of the population was not well represented in this study: four of the largest districts in the state (Bridgeport, Hartford, New Haven, and Waterbury), considered to be the poorest in the state (as referenced by their DRG status), submitted very few plans. These factors limit the generalizability of results to the total population; however, I believe the results generalize well to low-income districts in Connecticut, with the exception of the very largest and poorest. While the results from this study are consistent with the idea that highly scored plans lead to achievement, other possible explanations cannot be ruled out. To increase generalizability of results, a similar study across different subsets of the population, such as schools in large high-poverty cities, would provide additional insight.

Another limitation is that this study used data spanning only 2 years. A suggestion for additional research would be a longitudinal study using multiple years of data to continually evaluate school improvement strategies. Strategic planning places heavy emphasis on understanding the environment in which planning takes place and adjusting one's actions to that environment (Beach & Lindahl, 2004a), and a longitudinal study would allow for this.

An additional limitation to this study could be the overall poor quality of plans across the 108 Alliance District schools (M = 44.64, SD = 11.11). Because a majority of the plans (67%) scored lower than 50, this could indicate a low commitment to the process of planning, which could possibly suppress the relationship between plan quality and student achievement. Directly linked to the quality of a plan is a school's implementation. While the quality of the plans in this study was empirically linked to student achievement, that impact would be limited if districts and schools were not working to implement the written plans.

Last, a limitation is that this study focused on the process of school improvement planning without taking into consideration two other important factors: the implementation and monitoring of such plans. These two critical areas are what Hopkins (2001) referred to as strengthening a school's ability to manage change. The data obtained from this research show a statistically significant correlation between the process of planning and student achievement. What cannot be inferred is the degree to which all three aspects of school improvement collectively impact student achievement. To better understand the planning and implementation process, six performance dimensions were written to address a school's ability to monitor the steps written in its plan, and it is recommended that schools prioritize their implementation and monitoring of plans by focusing on the following: #4, acts of leadership, which articulates the degree to which leaders monitor performance; #17, demonstrated improvement cycles, which looks for explicit evidence of improvement cycles for every improvement initiative; #18, frequent monitoring of student achievement, which quantifies the degree to which teams meet to monitor performance; #26, summary data provided and compared, which measures the degree to which planned initiatives are evaluated; #28, required evidence for evaluation, which articulates the specificity of data needed to monitor progress; and #29, next steps outlined in evaluation, which specifies the degree to which changes in practice will continue to move the improvement process forward. By prioritizing these PIM performance dimensions, a school would be better able to evaluate the degree to which all three aspects of school improvement collectively impact student achievement.

Summary

This study helps to fill a gap in the existing research on the impact of school improvement planning, the benefit of utilizing goal theory concepts for planning, and their relationship to student achievement. A small yet marginally statistically significant relationship was found between the quality of a school improvement plan and student achievement, but we know there are other factors that impact student achievement. Unfortunately, this study, like others conducted before it, has yet to find a simple fix to guarantee success for all students. In an era where accountability and legislation demand increased performance, there are still unanswered questions as to how best to implement school improvement. This study provides specific recommendations which should improve a school's ability to plan, implement, and monitor school improvement efforts. As the literature suggests (Beach & Lindahl, 2004a; Hall & Hord, 2011; Reeves, 2004; White, 2009), the process of school improvement is extremely complex.

What is still unknown is how and why "some students and teachers defy the odds and perform at an exceptionally high level despite the prevalence of poverty, special education, second languages, or other factors that in statistical terms are associated with low student achievement" (Reeves, 2006, p. 14). This study did find that the achievement scores of all three subgroups were significantly correlated with the SIP total scores. Further study on subgroup achievement would add to the existing literature while helping schools in their efforts to reduce current gaps in achievement.

Finally, this study presented correlational data supporting goal theory as a potentially useful framework for thinking about school improvement planning. This information can benefit educators by motivating them to meet the academic needs of all their students, while improving the overall quality of their SIPs.

References

Anfara, Jr., V. A., Patterson, F., Buehler, A., & Gearity, B. (2006). School improvement
planning in East Tennessee middle schools: A content analysis and perceptions
study. NASSP Bulletin, 90, 277-300. doi: 10.1177/0192636506294848

Armstrong, J. S. (1982). The value of formal planning for strategic decisions: Review of
empirical research. Strategic Management Journal, 3, 197-211. Retrieved from
http://smj.strategicmanagement.net/

Atkinson, J.W. (1958). Motives in fantasy, action, and society: A method of assessment
and study. New York, NY: Van Nostrand.

Austin, J. T., & Vancouver, J. B. (1996). Goal constructs in psychology: Structure,
process, and content. Psychological Bulletin, 120, 338-375. doi: 10.1037/0033-2909.120.3.338

Barsky, A. (2008). Understanding the ethical cost of organizational goal-setting: A
review of theory and development. Journal of Business Ethics, 81, 63-81. doi: 10.1007/s10551-007-9481-6

Beach, R. H., & Lindahl, R. (2004a). A critical review of strategic planning: Panacea for
public education? Journal of School Leadership, 14, 211-234. Retrieved from
https://rowman.com/page/JSL

Beach, R. H., & Lindahl, R. (2004b). Identifying the knowledge base for school
improvement. Planning and Changing, 35, 2-32. Retrieved from
http://planningandchanging.illinoisstate.edu/

Beach, R. H., & Lindahl, R. A. (2007). The role of planning in the school improvement
process. Educational Planning, 16(2), 19-43. Retrieved from
www.isep.info/publications.html

Bell, L. (2002). Strategic planning and school management: Full of sound and fury,
signifying nothing? Journal of Educational Administration, 40, 407-424. doi:
10.1108/09578230210440276

Bloom, C. (1986). Strategic planning in the public sector. Journal of Planning Literature,
1, 253-259. doi: 10.1177/088541228600100205
Castellano, K. E., & Ho, A. D. (2013). A practitioner's guide to growth models.
Retrieved from http://scholar.harvard.edu/files/andrewho/files/a_pracitioners_guide_to_growth_models.pdf

Center for Performance Assessment. (2005). Planning, implementation, and monitoring
(PIM) school improvement audit and scoring guide. Retrieved from http://www.tcpress.com/pdfs/9780807751701_Ex.pdf

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.).
Hillsdale, NJ: Lawrence Erlbaum Associates.

Connecticut State Department of Education (2006). Research bulletin: District Reference
Groups 2006. Retrieved from http://sdeportal.ct.gov/Cedar/Files/Pdf/Reports/db_drg_06_2006.pdf

Connecticut State Department of Education (2009). Connecticut Mastery Test fourth
generation data analysis guide. Retrieved from http://www.csde.state.ct.us/public/cedar/assessment/cmt/resources/misc_cmt/VSInterpretativeGuide.pdf

Connecticut State Department of Education (2011). Connecticut Mastery Test fourth
generation interpretive guide. Retrieved from http://www.csde.state.ct.us/public/cedar/assessment/cmt/resources/misc_cmt/CMT%20Interpretive%20Guide%202011.pdf

Connecticut State Department of Education (2012a). ESEA flexibility request. Retrieved
from http://www.sde.ct.gov/sde/lib/sde/pdf/nclb/waiver/esea_flexibility_request_052412.pdf

Connecticut State Department of Education (2013). School performance reports.
Retrieved from http://www.sde.ct.gov/sde/cwp/view.asp?a=2683&Q=334346

Connecticut State Department of Education Bureau of Accountability and Improvement
(2012b). Alliance district application for state education cost sharing funds 2012-13. Retrieved from http://www.sde.ct.gov/sde/lib/sde/pdf/alliance_districts/alliance_district_application.pdf

Connecticut State Department of Education Bureau of School and District Improvement
(2007). School and district improvement guide. Retrieved from https://www.google.com/#q=Connecticut+schol+and+district+improvement+guide

Cook, Jr., W. J. (1990). Bill Cook’s strategic planning for America’s schools (rev.ed).
Arlington, VA: The American Association of School Administrators.

Cook, Jr., W. J. (2004). When the smoke clears. Phi Delta Kappan, 86(1), 73-75,83.
Retrieved from http://pdkintl.org/publications/kappan/
Curry, M. M. (2007). A study of school improvement plans, school decision-making and
advocacy, and their correlation to student academic achievement (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3253726)

Davies, B. (2003). Rethinking strategy and strategic leadership in schools. Educational
Management Administration & Leadership, 31, 295-312. doi: 10.1177/0263211X03031003006

Elementary and Secondary Education Act (1965). Pub. L. 89-10, as added Pub. L. 103-
382, title I, Sec. 101, Oct. 20, 1994, 108 Stat. 3519, 20 U.S.C. § 6301.

Elmore, R. (2000). Building a new structure for school leadership. Washington, DC:
Albert Shanker Institute.

Erpenbach, W. J. (2009). Determining adequate yearly progress in a state performance
or proficiency index model. Retrieved from The Council of Chief State School Officers website: http://www.ccsso.org/Documents/2009/Determining_Adequate_Yearly_2009.pdf

Falshaw, J. R., Glaister, K. W., & Tatoglu, E. (2006). Evidence on formal strategic
planning and company performance. Management Decision, 44, 9-30. doi:
10.1108/00251740610641436

Fernandez, K. E. (2009). Evaluating school improvement plans and their affect on
academic performance. Educational Policy, 25, 338-367. doi: 10.1177/0895904809351693

Hall, G. E., & Hord, S. M. (2011). Implementing change: Patterns, principles and
potholes (3rd ed.). Boston, MA: Pearson.

Hanushek, E. A., & Raymond, M. E. (2004). Does school accountability lead to
improved student performance? (Working Paper 10591). Retrieved from National Bureau of Economic Research website: http://www.nber.org/papers/w10591.pdf?new_window=1

Hattie, J. (2012) Visible learning for teachers: Maximizing impact on learning. New
York, NY: Routledge.

Hayes, D., Christie, P., Mills, M., & Lingard, B. (2004). Productive leaders and
productive leadership: Schools as learning organisations. Journal of Educational
Administration, 42, 520-538. doi: 10.1108/09578230410554043

Hendrawan, I., & Wibowo, A. (2012). The Connecticut Mastery Test: Technical report.
Retrieved from Connecticut State Department of Education website: http://www.sde.ct.gov/sde/lib/sde/pdf/student_assessment/research_and_technical/public_2011_cmt_tech_report.pdf

Heroux, R. L. (1981). How effective is your planning? Managerial Planning, 30(2), 3-16.

Higgins, E. T. (1987). Self-discrepancy: A theory relating self and affect. Psychological
Review, 94, 319-340. doi: 10.1037//0033-295X.94.3.319

Hopkins, D. (2001). School improvement for real. New York, NY: Routledge Falmer.

Kelley, C., & Protsik, J. (1997). Risk and reward: Perspectives on the implementation of
Kentucky’s school-based performance award program. Educational
Administration, 33, 474-505. doi: 10.1177/0013161X97033004004

Klein, H. J., Wesson, M., Hollenbeck, J., & Alge, Jr., B. (1999). Goal commitment and
the goal-setting process: Conceptual clarification and empirical synthesis. Journal
of Applied Psychology, 84, 885-896. doi: 10.1037/0021-9010.84.6.885

Locke, E. A. (1968). Toward a theory of task motivation and incentives. Organizational
Behavior and Human Performance, 3, 157-189. Retrieved from http://www.journals.elsevier.com/organizational-behavior-and-human-decision-processes/

Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting
and task motivation: A 35-year odyssey. American Psychologist, 57, 705-717.
doi: 10.1037//0003-066X.57.9.705

Locke, E. A., & Latham, G. P. (2006). New directions in goal-setting theory. Current
Directions in Psychological Science, 15, 265-268. doi: 10.1111/j.1467-
8721.2006.00449.x

Malik, Z. A., & Karger, D. W. (1975). Does long-range planning improve company
performance? Management Review, 64, 27-31. Retrieved from
http://sloanreview.mit.edu/

McIlquham-Schmidt, A. (2010). Strategic planning and corporate performance. What is
the relationship? (Working Paper 2010-02). Retrieved from Aarhus School of Business website: http://old-hha.asb.dk/man/cmsdocs/WP/2010/wp2010_02.pdf

McInerney, W. D., & Leach, J. A. (1992). School improvement planning: Evidence of
impact. Planning and Changing, 23(1), 15-28. Retrieved from http://planningandchanging.illinoisstate.edu/

Menard, S. (2002). Applied logistical regression analysis (2nd ed.). Thousand Oaks, CA:
Sage.

Mento, A. J., Steel, R. P., & Karren, R. J. (1987). A meta-analytic study of the effects of
goal setting on task performance: 1966-1984. Organizational Behavior and
Human Decision Processes, 39, 52-83. Retrieved from http://www.journals.
elsevier.com/organizational-behavior-and-human-decision-processes/

Mertens, D. M. (2010). Research and evaluation in education and psychology (3rd ed.).
Thousand Oaks, CA: Sage.

Mintrop, H., MacLellan, A. M., & Quintero, M. F. (2001). School improvement plans in
schools on probation: A comparative content analysis across three accountability
systems. Educational Administration Quarterly, 37, 197-218. doi:10.1177
/00131610121969299

Mintzberg, H. (1994). The fall and rise of strategic planning. Harvard Business Review,
72(1), 107-114. Retrieved from http://hbr.org

Mourshed, M., Chijioke, C., & Barber, M. (2010). How the world’s most improved
school systems keep getting better. Retrieved from McKinsey on Society website:
http://www.mckinsey.com /client_service/social_sector/latest_thinking/worlds
_most_improved_schools

National Commission on Excellence in Education. (1983). A nation at risk: The
imperative for educational reform. Retrieved from IDEA website: http://democraticeducation.org/index.php/library/resource/a_nation_at_risk/

No Child Left Behind Act (2001). Pub. L. 107-110, §1116(b)(3)(A). §200.41.

Phelps, J. L., & Addonizio, M. F. (2006). How much do schools and districts matter? A
production function approach to school accountability. Educational
Considerations, 33(2), 51-62. Retrieved from http://www.coe.ksu.edu
/EdConsiderations/

Phillips, P. A., & Moutinho, L. (2000). The strategic planning index: A tool for
measuring strategic planning effectiveness. Journal of Travel Research, 38, 369-
379. doi: 10.1177/004728750003800405

Pyrczak, F. (2010). Making sense of statistics: A conceptual overview (5th ed.). Glendale,
CA: Pyrczak Publishing.

Reeves, D. B. (2004). Accountability for learning: How teachers and school leaders can
take charge. Alexandria, VA: ASCD.

Reeves, D. B. (2006). The learning leader: How to focus school improvement for better
results. Alexandria, VA: ASCD.

Reeves, D. B. (2011). Finding your leadership focus. New York, NY: Teachers College
Press.

Ryan, T. A. (1970). Intentional behavior. New York, NY: Ronald Press.

Schmoker, M. (1999). Results: The key to continuous improvement (2nd ed.) Alexandria,
VA: Association for Supervision and Curriculum Development.

Schmoker, M. (2004). Tipping point: From feckless reform to substantive instructional
improvement. Phi Delta Kappan, 85, 424-432. Retrieved from http://www.kappanmagazine.org

Tubbs, M. E., & Ekeberg, S. E. (1991). The role of intentions in work motivation:
Implications for goal-setting theory and research. Academy of Management Review, 16, 180-199. doi: 10.2307/258611

U.S. Department of Education. (2010). ESEA Blueprint for reform. Retrieved from
http://www2.ed.gov/policy/elsec/leg/blueprint/blueprint.pdf

Webb, D. L. (2007). Using school improvement planning to improve teacher practice: A
qualitative case study in evaluating school improvement planning in one elementary school (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3243589)

White, S. H. (2009). Leadership Maps. Englewood, CO: Lead + Learn Press.

White, S., & Smith, R .L. (2010). School improvement for the next generation.
Bloomington, IN: Solution Tree Press.

Wood, D. R., & LaForge, R. L. (1979). The impact of comprehensive planning on
financial performance. Academy of Management Journal, 22, 516-526. Retrieved from http://aom.org/amj/

Appendix A
PIM School Improvement Audit

Scoring Guide for Section A: Comprehensive Needs

Rating scale: Exemplary (3 points) meets all criteria for the Proficient level and provides specific evidence to meet the criteria below; Proficient (2 points) provides specific evidence to meet the criteria below; Needs Improvement (1 point) provides evidence that meets the criteria below.

1. Strengths
Exemplary (3): Strengths are described specifically for student achievement, teaching practices, and leadership actions.
Proficient (2): Strengths are specified beyond the student achievement area; they include specific strengths of staff and school.
Needs Improvement (1): Strengths are limited to student achievement, and mentions of staff strengths are nonspecific or vague.

2. Assessment results
Exemplary (3): Student achievement is described in terms of state or district assessments, school-based assessments that describe subscale distinctions by subgroups, and classroom or contextual data that describe patterns and trends down to the skill level.
Proficient (2): Student achievement data include some evidence of school-level achievement data, narrative, and school/classroom data to support district or state assessment data.
Needs Improvement (1): Student achievement data are primarily described in terms of standardized test scores or state-level assessments of student achievement, attendance, and demographics.

3. Teacher practices
Exemplary (3): Teacher practices are supported by research, describe whether specific professional development or repeated practice is needed, and describe how monitoring of those practices will be used to improve instruction.
Proficient (2): Teacher practices are supported by research, and professional development needs are identified.
Needs Improvement (1): Teacher practices are generic statements that may identify strategies supported by research but don't link to a specific need for professional development.

4. Acts of leadership
Exemplary (3): Leadership actions describe the degree to which leaders monitor performance, set direction, provide feedback, or communicate values.
Proficient (2): Leadership actions describe the degree to which leaders specifically monitor performance or set direction.
Needs Improvement (1): Leadership actions are not specifically distinguished from actions of other staff, or plans lack clear description of leadership actions.

5. Engaged stakeholders
Exemplary (3): Evidence of frequent communication with parents regarding standards (beyond traditional grading periods), best practices, and grading (e.g., standards-based report card or nonfiction writing). Evidence of engaging parents, patrons, and partner businesses or organizations is clearly described. Web site includes links to various data warehouses for demographic and student achievement assessment.
Proficient (2): Evidence of one or more instances of engaging parents in improving student achievement (e.g., online student monitoring, participation in curriculum design, methods to support learning at home).
Needs Improvement (1): Evidence of involvement with parents tends to be in areas other than teaching and learning (e.g., percentage of participation in conferences, attendance at school events, newsletters, etc.); complies with minimum state standards for communication with parents.

Copyright © 2005. Center for Performance Assessment. All rights reserved.

Scoring Guide for Section B: Inquiry Process

Rating scale: Exemplary (3 points) meets all criteria for the Proficient level and provides specific evidence to meet the criteria below; Proficient (2 points) provides specific evidence to meet the criteria below; Needs Improvement (1 point) provides evidence that meets the criteria below.

6. Possible cause-effect correlations
Exemplary (3): Inquiry routinely examines cause and effect correlations from needs assessment data before selecting ANY strategies or program solutions. Positive correlations at desired levels represent a quantifiable vision of the future.
Proficient (2): Inquiry has identified some correlations from needs assessment data to select specific strategies or program solutions planned. Positive correlations at desired levels represent a quantifiable vision of the future.
Needs Improvement (1): Effects (results targeted) may or may not align to urgent needs assessed or represent a quantifiable vision of the future. Plan tends to address broad content as improvement needs, without identified correlations between needs and strategies.

7. Strategies driven by specific needs
Exemplary (3): ALL selected classroom-level research-based programs or instructional strategies are identified for a stated purpose, and ALL standards-based research strategies are designed to address specific needs in student achievement.
Proficient (2): Most selected classroom-level research-based programs or instructional strategies are identified for a stated purpose. Most schoolwide programs or strategies (e.g., NCLB research-based programs, collaborative scoring, dual-block algebra, tailored summer school) specify the student needs being addressed.
Needs Improvement (1): Few (≤50%) classroom-level research-based instructional strategies or programmatic and structural antecedents are identified based on data that support the need for a specific program or strategy.

8. Analysis of adult actions
Exemplary (3): Explicit evidence indicates routine data analysis to identify cause and effect correlations. ALL causes are adult actions or the result of adult decisions rather than demographic student or family factors outside of the instructional control of educators.
Proficient (2): Most described causes are adult actions or the result of adult decisions rather than demographic student or family factors outside of the instructional control of educators. Plan describes some links between causes (antecedents) and desired results (effects).
Needs Improvement (1): Evidence of analysis of cause and effect correlations is not described in the plan. Causes either are absent or tend to be demographic factors outside of the instructional control rather than adult actions and strategies. Plan rarely inquires into cause-effect relationships.

9. Achievement results (effects) linked to causes
Exemplary (3): ALL effects (desired results or goals) are specifically linked to cause behaviors or antecedent conditions for learning or administrative structures (e.g., time and opportunity, resources, etc.).
Proficient (2): Most effects (desired results or goals) are explicitly linked to identified causes, strategies, conditions for learning, or administrative structures.
Needs Improvement (1): Few (≤50%) effects are explicitly linked to identified causes, strategies, conditions for learning, or administrative conditions.

Copyright © 2005. Center for Performance Assessment. All rights reserved.

Scoring Guide for Section C: S.M.A.R.T. (Specific, Measurable, Accomplishable, Relevant, Timely) Goals

Rating scale: Exemplary (3 points) meets all criteria for the Proficient level and provides specific evidence to meet the criteria below; Proficient (2 points) provides specific evidence to meet the criteria below; Needs Improvement (1 point) provides evidence that meets the criteria below.

10. Specific goals
Exemplary (3): ALL goals and supporting targets specify targeted student groups; grade level; standard or content area and subskills delineated within that content area; and assessments specified to address subgroup needs.
Proficient (2): More than one goal and supporting target specifies targeted student groups; grade level; standard or content area and subskills delineated within that content area; and assessments specified to address subgroup needs.
Needs Improvement (1): Most goals and supporting targets describe in general rather than specific terms targeted student groups, grade level, and standard or content area and subskills delineated within that content area.

11. Measurable goals
Exemplary (3): ALL goals and targets describe quantifiable measures of performance. Baseline data are always provided for each goal or objective.
Proficient (2): ALL goals and targets describe quantifiable measures of performance with specific assessments.
Needs Improvement (1): Few goals or targets describe quantifiable measures of performance. Stated goals seldom reference student needs or growth targets or specific assessment tools.

12. Achievable goals
Exemplary (3): ALL goals and targets are sufficiently challenging to close learning gaps in three to five years for targeted subgroups.
Proficient (2): At least one goal or target is sufficiently challenging to close learning gaps in three to five years for targeted subgroups. Learning gaps are specified.
Needs Improvement (1): Goal targets are set so low that achievement will not close learning gaps in the foreseeable future, or there are insufficient data to determine whether any learning gaps will be closed by achieving goal targets.

13. Relevant goals
Exemplary (3): ALL goals and targets align with urgent student needs. ALL goals can be explicitly linked to the mission and beliefs of the school or district.
Proficient (2): ALL goals and targets align with urgent student needs identified in comprehensive needs assessment (subgroups specified). Some goals are explicitly linked to the mission or beliefs of the school or district.
Needs Improvement (1): Few goals and targets describe urgent student needs identified in comprehensive needs assessment. Links to mission or beliefs are vague or absent.

14. Timely goals
Exemplary (3): Each goal and target describes a fixed date in time when it will be achieved.
Proficient (2): Some goals and targets describe a fixed date in time when they will be achieved, but all goals or objectives specify a specific window of time (within 30 days).
Needs Improvement (1): Goals and targets rarely describe a fixed date in time when they will be achieved, and they describe only broad windows of time for any goals.

Copyright © 2005. Center for Performance Assessment. All rights reserved.

Scoring Guide for Section D: Design

Exemplary (3 points): Meets all criteria for Proficient level and provides specific evidence to meet the criteria below. Proficient (2 points): Provides specific evidence to meet the criteria below. Needs Improvement (1 point): Provides evidence that meets the criteria below.

15. Purposeful, focused action steps
Exemplary (3 points): Plan describes WHY some action steps are implemented and HOW action steps will be implemented, when, in what settings, and by whom.
Proficient (2 points): Plan describes WHY each focus area or major action step is being implemented.
Needs Improvement (1 point): Plan describes WHEN action steps will be implemented and by whom, but not why or how.

16. Multiple assessments documented
Exemplary (3 points): There are multiple forms of student assessment data, including formative data, as well as multiple measures of teacher practices and leader actions.
Proficient (2 points): There are multiple forms of student assessment data and some data on teacher practices.
Needs Improvement (1 point): Assessments are more often used to comply with directives than to serve as indicators of change or improved student achievement.

17. Demonstrated improvement cycles
Exemplary (3 points): There is explicit evidence of improvement cycles for every school improvement initiative.
Proficient (2 points): There is explicit evidence of improvement cycles for some school improvement initiatives.
Needs Improvement (1 point): Evidence of improvement cycles for schoolwide initiatives is unclear.

18. Frequent monitoring of student achievement
Exemplary (3 points): A monitoring schedule (at least monthly) reviews both student performance and adult teaching practices.
Proficient (2 points): A monitoring schedule (at least monthly) reviews student performance.
Needs Improvement (1 point): Monitoring of student performance or teaching practices is infrequent.

19. Ability to rapidly implement and sustain reform
Exemplary (3 points): Capacity for rapid rollout is evident in team responses to data, professional development, and coaching; time is allotted for adjustments and opportunities in response to student needs.
Proficient (2 points): Some midcourse corrections are delineated and anticipated in the design of the improvement plan.
Needs Improvement (1 point): No description of midcourse corrections is observed in the improvement plan.

20. Results indicators aligned to goals
Exemplary (3 points): All results indicators serve as interim progress probes for each S.M.A.R.T. goal.
Proficient (2 points): Some results indicators serve as interim progress probes for S.M.A.R.T. goals.
Needs Improvement (1 point): Results indicators are vague, hard to describe, or difficult to measure.

21. Adult learning and change process considered
Exemplary (3 points): Consideration of adult learning issues and the change process is evident in time, programs, and resources.
Proficient (2 points): Some attention to adult learning issues and the change process is evident in the plan (e.g., limited initiatives, integrated planning, and related support structures).
Needs Improvement (1 point): Little evidence is provided that adult learning or the change process was considered in planning. The plan tends to be fragmented, with multiple initiatives and little attention to the time requirements for implementation.

22. Documented coaching and mentoring
Exemplary (3 points): The coaching/mentoring system creates a coaching or mentoring cadre by building capacity and application.
Proficient (2 points): Coaching or mentoring is planned and systemic.
Needs Improvement (1 point): Coaching or mentoring is incidental and viewed as the sole responsibility of the coach instead of a schoolwide effort.

23. Strategies linked to specific student needs
Exemplary (3 points): Research-based instructional strategies, programs, and structures are selected to impact specified student needs at the school. ALL design activities and innovations are strongly correlated to student achievement gains.
Proficient (2 points): Most research-based instructional strategies, programs, and structures are linked to specified student needs at the school (school, subgroup, or individual).
Needs Improvement (1 point): Selected strategies, programs, and structures are not clearly linked to student needs as identified in the data. The school may lack support in research or best practice.

24. Professional development driven by student needs
Exemplary (3 points): Professional development is linked to meeting specific subgroup needs, addresses underlying causes of any substandard performance, is limited to three major initiatives per goal, and prepares educators to improve decision making through planned reflection or analysis.
Proficient (2 points): Professional development is explicitly collaborative, selected to meet identified student needs (school, subgroup, or individual), embedded in functioning school processes, limited to three major initiatives per goal, and scheduled within normal school functions at least monthly.
Needs Improvement (1 point): Professional development is fragmented and may or may not address student needs at the school. It is rarely limited to three major initiatives per goal; activities tend to be overly ambitious in number or scope.

25. Professional development supported and integrated into key processes and operations
Exemplary (3 points): Support for professional development is provided for all initiatives in multiple ways (e.g., change procedures, cross-curricular applications, integration, subtracting obsolete practices, collaboration, and modeling).
Proficient (2 points): Support for professional development is provided in more than one way.
Needs Improvement (1 point): The design has few systems to support professional development efforts.
Copyright © 2005. Center for Performance Assessment. All rights reserved.

Scoring Guide for Section E: Evaluation

Exemplary (3 points): Meets all criteria for Proficient level and provides specific evidence to meet the criteria below. Proficient (2 points): Provides specific evidence to meet the criteria below. Needs Improvement (1 point): Provides evidence that meets the criteria below.

26. Summary data provided and compared
Exemplary (3 points): Evaluation compares planned initiatives with actual results from the prior year, examines achievement results based on safety-net power standards by grade, and compares those results to district performance. Student performance is augmented by a specific review of curriculum impact, time/opportunity for students, or the effect of teaching practices on achievement.
Proficient (2 points): Evaluation summarizes data and evidence that examine student performance in multiple content areas; it describes students in need of intervention whose performance puts them at risk of opening learning gaps.
Needs Improvement (1 point): Evaluation tends to limit data summaries to student achievement analyses. Plans tend to examine student performance without specifying students in need of intervention whose performance puts them at risk of opening learning gaps.

27. Anticipated knowledge and skills
Exemplary (3 points): Evaluation plan describes explicit new knowledge, specific skills, and attitudes that will result from professional development associated with each goal for students, staff, AND stakeholders.
Proficient (2 points): Evaluation plan describes new knowledge and specific skills or attitudes that will result from professional development associated with most goals for students and staff.
Needs Improvement (1 point): Evaluation plan tends to describe new knowledge, skills, and attitudes in general terms and perceptions rather than specific knowledge or skills.

28. Required evidence for evaluation
Exemplary (3 points): Evaluation specifies data and evidence needed to evaluate progress toward all stated goals, including formative, school-based Tier 2 data explicitly aligned to address those students whose performance puts them at risk of opening rather than closing learning gaps.
Proficient (2 points): Evaluation specifies data and evidence needed to evaluate progress toward all stated goals, including formative, school-based Tier 2 data and their frequency.
Needs Improvement (1 point): Evaluation tends to use identical generalities for each goal rather than to specify data and evidence needed to evaluate progress toward goals.

29. Next steps outlined in evaluation
Exemplary (3 points): Documented next steps outline how changes in teaching and learning will occur, how the leadership team analyzes data, and how evidence was collected and submitted to colleagues and peers for review. The evaluation plan recommends changes from a list of alternatives and delineates a process to secure resources, implement changes, and evaluate them.
Proficient (2 points): Next steps to improve teaching and learning are delineated and supported by a clearly defined improvement cycle in the plan.
Needs Improvement (1 point): Next steps rarely address changes in how teaching and learning will occur. Next steps, if specified, tend to describe future outcome targets (goals) rather than next steps in terms of adult actions.

30. Results disseminated and transparent
Exemplary (3 points): Evaluation plan is transparent in describing how results (positive or negative), conclusions, lessons learned, and next steps will be communicated and disseminated to all primary stakeholders (families, educators, staff, patrons, partners, and the public).
Proficient (2 points): Evaluation plan describes how the compared results (positive or negative) are communicated to improve goal setting and ensure that lessons are learned.
Needs Improvement (1 point): Evaluation plan may describe a process for communicating results, but seldom specifies next steps or how results will be explained to stakeholders.
Copyright © 2005. Center for Performance Assessment. All rights reserved.
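
Since each of the 30 performance dimensions above is scored on the same three-point scale, the arithmetic of a plan's total rubric score can be illustrated with a short sketch. The sketch assumes, for illustration only, that the total is the unweighted sum of the 30 dimension ratings; the section groupings follow the scoring guides above, while the function and variable names are assumptions, not the study's actual procedure.

    # Minimal sketch: totaling a plan's rubric ratings. Assumes each of
    # the 30 dimensions is rated 1 (Needs Improvement), 2 (Proficient),
    # or 3 (Exemplary), and that the total is a simple unweighted sum.
    SECTIONS = {
        "Comprehensive Needs Assessment": range(1, 6),   # dimensions 1-5
        "Inquiry Process": range(6, 10),                 # dimensions 6-9
        "SMART Goals": range(10, 15),                    # dimensions 10-14
        "Design": range(15, 26),                         # dimensions 15-25
        "Evaluation": range(26, 31),                     # dimensions 26-30
    }

    def score_plan(ratings):
        """Return section subtotals and the overall total (range 30-90)."""
        if sorted(ratings) != list(range(1, 31)):
            raise ValueError("ratings must cover all 30 dimensions")
        if any(r not in (1, 2, 3) for r in ratings.values()):
            raise ValueError("each rating must be 1, 2, or 3")
        totals = {name: sum(ratings[d] for d in dims)
                  for name, dims in SECTIONS.items()}
        totals["Total"] = sum(ratings.values())
        return totals

    # A plan rated Proficient (2) on every dimension totals 60.
    print(score_plan({d: 2 for d in range(1, 31)}))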
Appendix B
Judgments of Connection Between PIM Rubric and Goal Theory Concepts

Goal theory categories used by the judges:

Core Findings
- Challenging: the difficulty level of goals.
- Specific: specific goals clearly define an external referent so it is clear exactly what needs to be accomplished.

Mechanisms (the actions leading toward goal accomplishment)
- Directing attention and effort toward goal-relevant activities and away from goal-irrelevant activities; this effect occurs both cognitively and behaviorally.
- Having an energizing function: higher goals lead to higher efforts than do lower-level goals.
- Goals increase persistence, prolonging effort.
- Goals lead to the arousal, discovery, and/or use of task-relevant knowledge and strategies (e.g., using existing knowledge and skills or developing new strategies).

Moderators (characteristics that change the strength of the goal-performance relationship)
- Goal commitment: determination to achieve a goal; increased when goals are made public; relates to professional integrity; important for goals to be attainable.
- Feedback: reveals progress in relation to goals.

Judgments by performance dimension (entries are listed in the order they appeared across the Core Findings, Mechanisms, and Moderators columns of the original table):

Comprehensive Needs Assessment
1. Strengths (a): Judge 1
2. Assessment results: Agreement
3. Teacher practices: Agreement
4. Acts of leadership: Agreement
5. Engaged stakeholders: Judge 2; Judge 1

Inquiry Process
6. Possible cause-effect correlations: Agreement
7. Strategies driven by specific needs: Agreement
8. Analysis of adult actions: Judge 1; Judge 1; Judge 2
9. Achievement results (effects) linked to causes: Judge 1; Judge 2

SMART Goals
10. Specific goals: Agreement
11. Measurable goals: Agreement
12. Achievable goals: Agreement
13. Relevant goals: Agreement
14. Timely goals: Judge 2; Judge 1; Judge 1

Design
15. Purposeful, focused action steps: Judge 1; Judge 2
16. Multiple assessments documented: Agreement
17. Demonstrated improvement cycles (b): We are not confident about categorizing this; Judge 1; Judge 2
18. Frequent monitoring of student achievement: Agreement
19. Ability to rapidly implement and sustain reform: Agreement
20. Results indicators aligned to goals: Judge 1; Judge 2
21. Adult learning and change process considered: Judge 1; Judge 2
22. Documented coaching and mentoring (a): Judge 1
23. Strategies linked to specific student needs: Agreement
24. PD driven by student needs: Agreement
25. PD supported and integrated into key processes and operations (b): Judge 1; We are not confident about categorizing this; Judge 2

Evaluation
26. Summary data provided and compared: Judge 1; Judge 2
27. Anticipated knowledge and skills: Agreement
28. Required evidence for evaluation: Judge 1; Judge 2
29. Next steps outlined in evaluation: Agreement
30. Results disseminated and transparent: Judge 2; Judge 1

Notes: In the original table, shaded cells represented the final connections, decided either by initial agreement between the judges or by discussion of disagreement.
(a) Performance dimensions 1 and 22 were not originally categorized by Judge 2.
(b) Performance dimensions 17 and 25 could not be categorized.
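
As a rough illustration of what the table implies, the sketch below tallies the judges' initial agreement rate over the dimensions that both judges categorized. This is an illustrative computation only: it transcribes the "Agreement" rows above, treats the remaining categorized rows as initial disagreements later resolved by discussion, and sets aside dimensions 1, 17, 22, and 25 per notes (a) and (b).

    # Illustrative tally of initial inter-judge agreement, transcribed
    # from the judgments table above; groupings follow notes (a) and (b).
    agreement = {2, 3, 4, 6, 7, 10, 11, 12, 13, 16, 18, 19, 23, 24, 27, 29}
    disagreement = {5, 8, 9, 14, 15, 20, 21, 26, 28, 30}
    set_aside = {1, 17, 22, 25}  # not categorized by both judges

    # Sanity check: every one of the 30 dimensions is accounted for.
    assert agreement | disagreement | set_aside == set(range(1, 31))

    categorized = len(agreement) + len(disagreement)
    rate = len(agreement) / categorized
    print(f"Initial agreement on {len(agreement)} of {categorized} "
          f"dimensions ({rate:.0%}).")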

Appendix C
Letter to District Requesting Permission to Obtain School Improvement Plan

March 18, 2013

Dear Dr. Solek,

My name is David Huber, and I am planning to conduct my dissertation research on
school improvement planning effectiveness. As an administrator in an Alliance District
school, I am interested in seeing what impact school improvement planning has on
student achievement.

I request your permission to contact the person in your district who can provide me with
a copy of the school improvement plan for each K-5 or K-6 elementary school in your
district.

Over the next year, I plan to examine characteristics of school improvement plans to
determine whether any components of a plan have a statistically significant impact on the
newly created School Performance Index (SPI). All data will remain anonymous, and the
data collection and analysis are for doctoral research only.

I appreciate your support of my doctoral studies. If you have any questions, or if you can
direct me to the person who can make individual elementary school improvement plans
available, I would greatly appreciate it. All findings will be shared with districts that
request them. I can be reached via email at davidhuber@ci.bristol.ct.us.

This research is fully supported by my district-level administrative team.

Sincerely,

David Huber
Doctoral Candidate

Susan Kalt Moreau, Ph.D.
Deputy Superintendent
Bristol Public Schools

Appendix D
Revised Rubric Elaboration of 8 Performance Dimensions

3. Teacher practices
Exemplary (3 points): MAJORITY of teacher practices are supported by research, describe whether professional development or repeated practice is needed, and describe how monitoring of those practices will be used to improve instruction.
Proficient (2 points): MAJORITY of teacher practices are supported by research, and professional development needs are identified. Elaboration: If I know it is research based…I will count it. Must have BOTH research and PD.
Needs Improvement (1 point): Teacher practices are generic statements that may identify specific strategies supported by research but don't link to a specific need for professional development.
5. Engaged stakeholders
Exemplary (3 points): Evidence of frequent communication with parents regarding standards (beyond traditional grading periods), best practices, and grading (e.g., standards-based report card or nonfiction writing). Evidence of engaging parents, patrons, and partner businesses or organizations is clearly described. Web site includes links to various data warehouses for demographic and student achievement assessment.
Proficient (2 points): Evidence of one or more instances of engaging parents in improving student achievement (e.g., online monitoring, participation in curriculum design, methods to support learning at home). Elaboration: Something instructional done for families to engage parents to help the school achieve the school's goals.
Needs Improvement (1 point): Evidence of involvement with parents tends to be in areas other than teaching and learning (e.g., percentage of student participation in conferences, attendance at school events, newsletters, etc.); complies with minimum state standards for communication with parents.

6. Possible cause-effect correlations
Exemplary (3 points): Inquiry routinely examines cause and effect correlations from needs assessment data before selecting ANY strategies or program solutions. Positive correlations at desired levels represent a quantifiable vision of the future. Elaboration: It is clear the team identified the problem themselves and knows WHY they are focusing on it.
Proficient (2 points): Inquiry has identified some correlations from needs assessment data to select the specific strategies or program solutions planned. Positive correlations at desired levels represent a quantifiable vision of the future.
Needs Improvement (1 point): Effects (results targeted) may or may not align to urgent needs assessed or represent a quantifiable vision of the future. Plan tends to address broad content as improvement needs, without identified correlations between needs and strategies. Elaboration: There is no needs assessment or data to support having these strategies in the plan.
8. Analysis of adult actions
Exemplary (3 points): Explicit evidence indicates routine data analysis to identify cause and effect correlations. ALL causes are adult actions or the result of adult decisions rather than demographic student or family factors outside of the instructional control of educators. Elaboration: Lists exactly what the adults need to do/change and recognizes that their behavior is key to improvement.
Proficient (2 points): Most described causes are adult actions or the result of adult decisions rather than demographic student or family factors outside of the instructional control of educators. Plan describes some links between causes (antecedents) and desired results (effects).
Needs Improvement (1 point): Evidence of analysis of cause and effect correlations is not described in the plan. Causes either are absent or tend to be demographic factors outside of instructional control rather than adult actions and strategies. Plan rarely inquires into cause-effect relationships. Elaboration: A District Theory of Action does NOT count.

9. Achievement results (effects) linked to causes
Exemplary (3 points): ALL effects (desired results or goals) are specifically linked to cause behaviors or antecedent conditions for learning or administrative structures (e.g., time and opportunity, resources, etc.). Elaboration: Same criteria as 8.
Proficient (2 points): Most effects (desired results or goals) are explicitly linked to identified causes, strategies, conditions for learning, or administrative structures. Elaboration: Same criteria as 8.
Needs Improvement (1 point): Few (≤50%) effects are explicitly linked to identified causes, strategies, conditions for learning, or administrative conditions.

15. Purposeful, focused action steps
Exemplary (3 points): Plan describes WHY some action steps are implemented and HOW action steps will be implemented, when, in what settings, and by whom.
Proficient (2 points): Plan describes WHY each focus area or major action step is being implemented.
Needs Improvement (1 point): Plan describes WHEN action steps will be implemented and by whom, but not why or how.

21. Adult learning and change process considered
Exemplary (3 points): Consideration of adult learning issues and the change process is evident in time, programs, and resources. Elaboration: You would see multiple PD sessions on the SAME topic so the teachers can grasp it, and COACHING would be observed.
Proficient (2 points): Some attention to adult learning issues and the change process is evident in the plan (e.g., limited initiatives, integrated planning, and related support structures). Elaboration: You would see multiple PD sessions on the SAME topic so the teachers can grasp it.
Needs Improvement (1 point): Little evidence is provided that adult learning or the change process was considered in planning. The plan tends to be fragmented, with multiple initiatives and little attention to the time requirements for implementation. Elaboration: There is no way the plan can be successfully implemented; there is just too much.

23. Strategies linked to specific student needs
Exemplary (3 points): Research-based instructional strategies, programs, and structures are selected to impact specified student needs at the school. ALL design activities and innovations are strongly correlated to student achievement gains.
Proficient (2 points): Most research-based instructional strategies, programs, and structures are linked to specified student needs at the school (school, subgroup, or individual). Elaboration: This can be a whole-school strategy as long as it is connected to the data.
Needs Improvement (1 point): Selected strategies, programs, and structures are not clearly linked to student needs as identified in the data. School may lack support in research or best practice. Elaboration: If NO DATA are provided, the score must be a 1.

Appendix E
Human Studies Council Letter of Exemption
