Ohio School District Report Card Demystified


Prepared Notes for Joint Education Oversight Committee

April 25, 2019


Marc A. Schare
614 791-0067
marc9@aol.com

Chairman Cupp, members of the committee, thank you for looking into some of
the problems endemic to the state report card. My name is Marc Schare. I served
on the Board of Education in Worthington for 12 years, two as its President, and I
worked with former State Rep. Mike Duffey to help him understand some of
Worthington’s frustrations with the report card. I was very happy to receive a call
from Director Jones asking me to come and speak to some of these issues.

Before getting started, let me share why I’m so passionate about this issue. I’m an
accountability hawk. When I was first elected in 2006, Worthington prided itself
on its performance on the State Report Card. We used to hold community
celebrations on the Village Green complete with cake and ice cream to highlight
our status of “Excellent with Distinction”. Any deviation suggesting anything other
than excellence was met with calls and emails from concerned constituents. That
was then. In just 12 short years, the district’s performance on the state report card
is barely noticed and never discussed. During my last re-election campaign in
2017, an issue-oriented campaign to be sure, not a single constituent, not one, even
mentioned Worthington’s performance on the report card. In 2018, in the all-
important achievement section of the card, Worthington passed 9 out of 23 state
standards, an “F”, yet my former colleagues on the Board of Education tell me that
no one cares. No concern whatsoever. Our director of academic achievement took
fewer than a handful of calls on any aspect of the report card, a remarkable statistic
for a school district of 50,000 residents. This dismissive attitude toward
standardized testing extends beyond Ohio. In a recent Gallup poll of 1,892
superintendents, only 9% said that standardized testing is “very important” and
only 60% said that it was even “somewhat important”. [1] Moreover, the number is
trending quickly in the wrong direction.

[1] https://www.gallup.com/education/243224/superintendents-say-engagement-hope-best-measures-success.aspx
If superintendents are not concerned with test scores, how do they internally
measure the quality of their districts? The survey provides some insight into this.

The highest-scoring items are subjective measurements, hope and engagement,
with the remaining items focusing on high school. I’d argue that if a district waits
until high school to objectively measure its outcomes, it has waited too long.

Many educators are satisfied with subjective measurements and, in fact, have
created local report cards called “Quality Profiles” [2], which at least in my district
are mailed to every resident. They believe this creates a fairer, more balanced
approach, but I respectfully disagree. Highlighting the success of individual
students or athletic teams, while interesting, does not provide guidance as to the
academic work going on in school buildings every day. In addition, Ohio has a
quantitative comparison of subjective vs. objective evaluation courtesy of the
teacher evaluation model: it is inexplicable that even in districts with very low
grades on standardized tests, teachers overwhelmingly receive the two highest
ratings of “Accomplished” and “Skilled”. The difference is that of subjective vs.
objective measurement.

[2] https://www.worthington.k12.oh.us/cms/lib/OH01001900/Centricity/Domain/973/Worthington%20QP%202018%20vDigital4.pdf

As an accountability hawk who believes strongly in objective measurements and
benchmarking against other districts as the only way to know how well your
district is truly performing, I believe the State of Ohio and its leaders must do
whatever they can to reverse the trends against standardized, objective
measurements.

Dr. Jones asked me to focus on the achievement component of the state report card
which is good news for me because it’s the easiest component to explain, although
quite controversial from an “is it fair” perspective. The achievement component
consists of two parts, a “performance index” weighted at 75% and an “indicators
met” component weighted at 25%. These components combine to form a single
letter grade.

The documentation for the performance index can be found at this link:

https://education.ohio.gov/getattachment/Topics/Data/Report-Card-Resources/Achievement-Measure/Technical-Documentation-PI-Score.pdf.aspx

This is an example of the performance index from Worthington’s 2018 Report
Card: 94.5 out of a possible 120 is 78.8%, which works out to a letter grade of “C”.
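For anyone who wants to see the mechanics, here is a minimal sketch of the arithmetic in Python. The level weights and the hypothetical student counts are my reading of the technical documentation linked above, so treat them as illustrative rather than authoritative:

```python
# A minimal sketch of the performance index arithmetic. The level weights
# below are my reading of the ODE technical documentation linked above;
# treat them as illustrative and verify against the current publication.
LEVEL_WEIGHTS = {
    "advanced":    1.2,   # 120 points possible if every student scores advanced
    "accelerated": 1.1,
    "proficient":  1.0,
    "basic":       0.6,
    "limited":     0.3,
    "untested":    0.0,   # untested students stay in the denominator at zero weight
}

def performance_index(level_counts: dict) -> float:
    """Weighted average of achievement levels, scaled so the maximum is 120."""
    total_students = sum(level_counts.values())
    weighted = sum(LEVEL_WEIGHTS[level] * n for level, n in level_counts.items())
    return weighted / total_students * 100

# Hypothetical building of 1,000 students; the mix is invented so the index
# lands near Worthington's reported 94.5.
counts = {"advanced": 255, "accelerated": 205, "proficient": 305,
          "basic": 145, "limited": 70, "untested": 20}
pi = performance_index(counts)
print(f"PI = {pi:.2f} of a possible 120 = {pi / 120:.1%}")
```

Note that untested students remain in the denominator at zero weight, which is one reason the handling of untested students (question 3 below) matters.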

Easy enough, right? Not really. There are philosophical and practical questions that
must be addressed when dealing with the performance index and standardized
testing in general.

First, let’s talk philosophy. We must address the question of whether raw
performance data has probative value. In other words, is it of use to school
districts, principals at the building level, and the general public? Many educators
will claim that the performance index component simply measures the socio-
economic demographics of the district and to be sure, there is an undeniable
correlation between socio-economic status and the performance index as Ohio
education luminaries such as Howard Fleeter point out year after year after year.
Most educators will tell you they would prefer a growth metric, if they are to be
evaluated at all.

I disagree. I believe the PI has value despite the correlation between socio-
economic status and test scores, and here’s why. The students in low-SES schools
will be competing with students in high-SES schools their entire lives. We live in a
capitalist society and to excuse a low-PI district because of the very real challenges
of educating poor and minority students is doing those students a disservice.

Consider this. If you conclude that a low-SES building or district is going to do
poorly on standardized testing anyway, so why bother with a performance index,
then you are essentially saying that no combination of strategy, tactics, curriculum,
teaching methods, parental outreach, wraparound services or anything else is going to make
a difference. I don’t believe that and I hope none of you do either. This country
must solve the riddle of educating poor and minority populations. There simply is
no alternative, and grading on the curve, which is what you do when you solely
measure growth, is not a solution. The performance index continues to highlight
this societal challenge and I believe it’s better to acknowledge it than to bury it.

Here are some practical questions surrounding the performance index.

1) The scale puts all of the incentives on moving students from limited to basic and
from basic to proficient, and gives only small credit for moving students from
proficient to accelerated or advanced (see the sketch after this list). Is this really
what you want?

2) Are the cut scores used by the American Institutes for Research (AIR)
appropriate? In other words, does Ohio agree with AIR’s definition of “proficient”?
AIR believes that a score of “accelerated” means that a student is on track to be
college- or career-ready and that “proficient”, based on the cut scores, merely
means a passing grade. [3] Are the test and its cut scores calibrated correctly?

3) Is there a better way to handle untested students? Standardized testing is going
to be a more and more common political issue. The impact of untested students on
the overall performance index could be meaningful, especially now with the use of
letter grades.

4) In Ohio, standardized tests are given about ¾ of the way through the year.
Many educators have complained that the tests include items that have not yet
been taught. While no one wants to teach to the test, should Ohio provide
sequencing guidance as to which standards should be taught sooner rather than
later?

[3] https://oh.portal.airast.org/core/fileparse.php/3094/urlt/Understanding_State_Tests_Reports_2017-2018.pdf

5) Does the use of a single letter grade detract from the data? If so, should Ohio
consider simply providing the data on report cards and leaving the interpretation to
local school districts, press reports and so forth?

6) Finally, what should be done about the following paradigm: with the exception
of 3rd grade and high school, the tests are high stakes for the district, high stakes
for the building, high stakes for the teacher, maybe even high stakes for the
community in terms of school district quality perception, but otherwise
meaningless to the students. Should the paradigm be modified so that students
have some skin in the game?
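To put question 1 in concrete terms, here is the sketch promised above. It computes the marginal PI credit for moving one student up one achievement level, using the same illustrative weights as the earlier sketch (again, verify against the ODE documentation):

```python
# Question 1 in numbers: marginal PI credit for each one-level move,
# under the illustrative weights from the earlier sketch.
LEVEL_WEIGHTS = {"untested": 0.0, "limited": 0.3, "basic": 0.6,
                 "proficient": 1.0, "accelerated": 1.1, "advanced": 1.2}

levels = list(LEVEL_WEIGHTS)  # insertion order: untested .. advanced
for lower, higher in zip(levels, levels[1:]):
    gain = LEVEL_WEIGHTS[higher] - LEVEL_WEIGHTS[lower]
    print(f"{lower:>10} -> {higher:<11}: +{gain:.1f} weight per student")

# Output shows +0.4 for basic -> proficient, but only +0.1 for moving a
# student from proficient to accelerated or from accelerated to advanced.
```

Under these weights, a district earns four times as much credit for lifting a student from basic to proficient as for lifting one from proficient to accelerated, which is exactly the incentive structure question 1 asks about.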
Next, we move on to the “indicators met” portion of the achievement component.
The calculation is simple: the number of indicators met divided by the total number
of indicators, resulting in a percentage that is translated into a letter grade. The
passage rate for each academic indicator is 80% proficient. This was
Worthington’s latest effort for indicators met at the elementary grades.

I think we can all agree that 3 out of 9 is not a very good showing, but there is a
question that needs to be asked here. Was the decision to move the bar from
75% to 80% the correct one? Why 80% as opposed to 70% or 90%? If the 2013-
2014 number of 75% were used, Worthington would have met 8 out of 9 indicators
at the elementary level and 17 out of 24 total, raising its percentage to 73.9 and its
letter grade from an “F” to a “C”. Obviously, to the casual observer looking just at
the number of indicators met, there is a huge difference between 9 of 24 and 17 of
24, even though the difference in terms of kids scoring proficient was relatively
small. The point I’m trying to make is that the result is somewhat arbitrary, and
arbitrary results have little probative value. Honestly, other than the non-academic
indicators (absenteeism, end-of-course improvement), the “indicators met”
component pretty much measures what the PI measures, albeit less precisely. The
way the data is presented on the website is useful in that it is broken down by
grade and subject with trends provided, but as far as contributing to the overall
grade of the achievement component, it’s two sides of the same coin. The questions
I believe should be considered regarding the indicators met component are:

1) Given that it measures the same things the PI measures, do we need to include it
in the report card at all, or should we simply make the data available?

2) Should we have non-academic criteria such as the “gifted” indicator and the
absenteeism indicator, or should that data be elsewhere in the report card?

3) Is 80% the right number? (See the sketch after this list.)

4) Should we be assigning a letter grade to this component, or let “9 out of 23”
speak for itself?
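Since questions 3 and 4 both turn on the mechanics, here is a short sketch of the indicators-met arithmetic. The 80% bar and the 2013-2014 figure of 75% come from the discussion above; the percentage-to-grade bands are my reading of the scale and worth verifying against ODE documentation; the per-indicator proficiency rates are invented to show how sensitive the result is to the bar:

```python
# Indicators-met mechanics behind questions 3 and 4. The 80% pass bar is the
# current rule cited above (75% in 2013-2014); the grade bands are my reading
# of the scale and should be verified against ODE documentation.
def indicators_met(proficiency_rates, bar=0.80):
    """Count indicators whose proficiency rate clears the bar; return (met, pct)."""
    met = sum(rate >= bar for rate in proficiency_rates)
    return met, met / len(proficiency_rates) * 100

def letter_grade(pct):
    for floor, grade in ((90, "A"), (80, "B"), (70, "C"), (50, "D")):
        if pct >= floor:
            return grade
    return "F"

# Hypothetical district with several indicators hovering just under 80%
# proficient -- the situation the 75% vs. 80% question is really about.
rates = [0.79, 0.77, 0.82, 0.76, 0.91, 0.78, 0.74, 0.85, 0.79]
for bar in (0.80, 0.75):
    met, pct = indicators_met(rates, bar)
    print(f"bar {bar:.0%}: {met}/{len(rates)} met = {pct:.1f}% -> {letter_grade(pct)}")
```

With these invented rates, the same district swings from 3 of 9 (an “F”) to 8 of 9 (a “B”) on a five-point move of the bar, which is the arbitrariness argument in miniature.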

Speaking of the overall grade for achievement, it consists of 75% Performance
Index and 25% Indicators Met, but 75% of precisely what number and 25% of
precisely what number is not as easy a question to answer as it should be. In
Worthington’s case, the calculation looks like this:

Component           Score    Points   Multiplier   Result
Performance Index   78.8     2.75     0.75         2.063
Indicators Met      37.5     0.75     0.25         0.188
Total                                              2.250

Figure 1: Worthington Achievement Grade Calculation

2.25 is a low “C”, and I guarantee you I am the only person outside of our
curriculum team in Worthington who ran those numbers.
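The blend itself is trivial once each component’s percentage has been converted to points; that conversion uses a published ODE lookup table that I won’t reproduce here, so the point values below are read straight off the report card. A minimal sketch:

```python
# The 75/25 blend, once each component's percentage has been converted to
# points (the percent-to-points conversion uses a published ODE lookup table
# not reproduced here; the points below are taken from the report card).
def achievement_score(pi_points, indicator_points):
    """Weighted blend: 75% performance index points, 25% indicators-met points."""
    return 0.75 * pi_points + 0.25 * indicator_points

print(achievement_score(2.75, 0.75))  # Worthington: 2.25, a low "C"
print(achievement_score(4.00, 4.25))  # Bluffsview (discussed below): 4.0625
```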

I want to highlight one difficulty with the letter grade. Consider the case of
Bluffsview Elementary School. They received a performance index of 105.4, with
over 87.3% of the school scoring Proficient or above; 105.4/120 yields a
percentage of 87.833, a very high “B”. Bluffsview also received 11 out of 12 on
the indicators met portion, or 91.7%, missing only the “gifted” indicator. Running
the calculation, we have:

Component           Score     Points   Multiplier   Result
Performance Index   87.833    4.00     0.75         3.000
Indicators Met      91.7      4.25     0.25         1.063
Total                                               4.063

Figure 2: Calculation of Achievement Grade for Bluffsview Elementary

The total score is 4.0625, which falls slightly short of an “A” for the overall
achievement indicator. However, the failure lies in the “gifted” achievement
indicator, two-thirds of which doesn’t measure achievement but rather the services
offered to gifted students and gifted students’ value-add. In Bluffsview’s case, the
Gifted Performance Index met
the state criteria but because of the way the district chooses to offer gifted services,
it failed the “gifted inputs” component and also failed the gifted “value add”
component. Neither of these components measures raw student achievement and
therefore, I would argue, shouldn’t be part of the achievement score. I mention this
because it highlights the danger of boiling down complex calculations to a single
letter grade. I had to dig for this data. No constituent is going to do that. All they
know is that Bluffsview received a “B” for achievement, but they have no idea
why and that’s doing our constituents a disservice.

So that’s achievement in a nutshell.

In closing, I consider the performance index to be the single most important
number in the entire document, but I did want to comment very briefly on some of
the other components.

The growth component’s original intent is to factor out those things that schools
have no control over: socio-economic factors, two-parent households, parents
reading to kids, food security, medical and dental care, and so forth. The question
for Ohio’s implementation of value-add is whether it actually does that. First,
Ohio’s value-add system doesn’t measure growth in absolute terms; it rank-orders
growth among public school districts and assigns a grade based on where you wind
up in the list. In other words, it measures relative placement among districts.
Every district could grow and exceed expectations, yet a district that didn’t grow
as much as the rest of the state would still do poorly on the growth portion of the
report card. Second, there are competing models for value-add. Worthington
participated in Project SOAR from Battelle for Kids, which factored out non-school
characteristics by comparing you against students, buildings and districts with
similar characteristics. Florida does the same thing. Ohio simply compares you
against you from last year and the year before. The point is that growth is
important enough that Ohio should consider whether to adopt alternative models.
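To make the relative-placement point concrete, here is a toy sketch with invented numbers of what grading growth by rank does. It is emphatically not Ohio’s actual value-add computation, only an illustration of the paradigm I just described:

```python
# Toy illustration of the relative-placement concern -- NOT Ohio's actual
# value-add formula. Every invented district below shows positive growth,
# yet grading by rank alone still places someone at the bottom.
districts = {"A": 4.0, "B": 3.5, "C": 3.0, "D": 2.5, "E": 2.0}  # growth, all positive

ranked = sorted(districts, key=districts.get, reverse=True)
for i, name in enumerate(ranked):
    standing = 1 - i / (len(ranked) - 1)          # 1.0 at the top, 0.0 at the bottom
    grade = ("A" if standing >= 0.8 else "B" if standing >= 0.6 else
             "C" if standing >= 0.4 else "D" if standing >= 0.2 else "F")
    print(f"district {name}: grew {districts[name]:+.1f} -> {grade}")
# All five districts grew, but relative grading still hands out a D and an F.
```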

I haven’t met anyone who can adequately explain the gap closing calculation, so
I’m not even going to try. The online version of the State Report Card offers no
clue as to how it is calculated. If we believe that gaps are important enough to be
measured and reported, we should be able to find a way to explain them to non-
educators.
The letter grade associated with the “Improving at-risk K-3 readers” category
makes no sense. This was evident in Worthington’s latest report. It is simply
impossible to explain to constituents how, if 99.4% of Worthington’s third graders
met the reading requirement for promotion to fourth grade, that somehow
translates to a “C” on a report card. Clearly, Worthington is doing something right
with struggling readers to get them over the 3GRG (Third Grade Reading
Guarantee) bar. I understand how the grade is derived, but its use detracts from
the overall credibility of the report card.

In summary, I believe the achievement component of the state report card is the
best indicator of what happens in the schools, tempered by the reality that socio-
economic and other conditions play an important part. I also believe that the
complexity of the current report card, and results that are inconsistent with
constituents’ views of reality, detract from the perceived importance of an
objective measurement. It’s my hope that this project of the JEOC leads to a
revamping of the report card that will once again make it meaningful for school
districts, buildings, teachers, parents and the public. Thank you for your attention.
