Ohio’s “value-added” metric not sufficient to count for half of a school’s rating
There is little dispute that information about the academic gains students make (or don’t) is a valuable addition to pure student proficiency data. But there is little agreement about how best to calculate growth and how to use it to inform things like teacher evaluations and school rating systems. The latter was the focus of much testimony last week in the Senate education committee over Gov. Kasich’s plan to overhaul how Ohio’s districts are graded. Local educators believe the governor’s plan gives too little weight to academic progress (and too much to achievement). But the limits of our current value-added system seem to indicate that the governor’s formula is just right, for now.
Under Senate Bill 316, Ohio would move to an A to F school rating system with ratings calculated based on four factors: 1) student achievement on state tests and graduation rates, 2) a school performance index based on state test results, 3) student academic progress, and 4) the performance of student subgroups.
Matt Cohen, chief researcher for the state education department, testified that feedback from the field indicates that educators want growth (aka “value-added” in Ohio) to count more heavily than 25 percent. Bill Sims, CEO of the Ohio Alliance for Public Charter Schools, suggested that value-added data account for half of a school’s rating – or that ratings be “bumped up” one level if a school exceeds the state’s value-added expectations. Columbus City Schools Superintendent Gene Harris made a similar suggestion during her testimony.
But, considering how few students the state actually has value-added data for, counting that data as half of a district’s rating is heavy-handed.
Ohio’s current value-added system measures student progress in grades four through eight. Just 36 percent of Ohio public school students are enrolled in grades four, five, six, seven, or eight – meaning that the state has value-added data for a bit more than one-third of all students. At best. Student mobility among districts impedes the ability to calculate gains; it’s quite plausible that the state doesn’t have growth data for even one-third of schoolkids. Further, value-added in Ohio only measures progress in reading and math, not any other subjects. (The state has achievement test data for more than half of all students and across more subjects.)
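The one-third coverage figure can be sanity-checked with back-of-the-envelope arithmetic. Value-added covers five of the thirteen grades from kindergarten through twelve; assuming roughly equal enrollment in each grade (an assumption for illustration – actual Ohio enrollment varies by grade), the covered share works out close to the 36 percent cited above:

```python
# Rough check of the coverage share: value-added covers grades 4-8,
# five of the thirteen grades K-12. Equal enrollment per grade is
# assumed here for illustration only.
covered_grades = 5    # grades 4 through 8
total_grades = 13     # kindergarten through grade 12
share = covered_grades / total_grades
print(f"{share:.0%}")  # about 38%, in line with the cited 36 percent
```

Mobility and missing test records would only push the real figure lower, as the paragraph above notes.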
The progress of roughly one-third of students in two subjects shouldn’t make up half of a district’s rating; counting it for 25 percent of the overall grade sounds about right. Down the road, however, a fair argument could be made to weigh growth more heavily in the ratings equation.
After the transition to the Common Core academic standards and tests in 2014, Ohio should be able to calculate value-added data for high school students. And as the collection of education data continues to improve, we ought to be able to calculate gains for even the most highly mobile of students.
Ohio could also consider other ways to use growth to inform ratings. For example, Florida (which has this data through tenth grade) weighs growth as half of a school’s rating, but not in the same simple fashion Ohio educators are suggesting. One-quarter of a Sunshine State school’s rating is based on overall student progress, and one-quarter is based on the progress made by the bottom 25 percent of students – meaning even the highest performing district can’t afford not to focus on its lowest performers. This approach makes limited progress data more meaningful.
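The Florida-style weighting described above can be sketched in a few lines. The 25/25 growth split comes from the text; treating the remaining half as achievement-based, and scoring each component on a 0–100 scale, are assumptions for illustration, not Florida’s actual formula:

```python
def school_rating_points(achievement: float,
                         overall_growth: float,
                         bottom_quartile_growth: float) -> float:
    """Combine component scores (each assumed to be on a 0-100 scale).

    Half the score comes from growth: one quarter from all students'
    progress and one quarter from the progress of the lowest-performing
    25 percent, so even a high-flying school can't ignore its strugglers.
    """
    return (0.50 * achievement
            + 0.25 * overall_growth
            + 0.25 * bottom_quartile_growth)

# A school with strong scores overall but weak growth among its bottom
# quartile still sees its rating pulled down:
print(school_rating_points(achievement=90,
                           overall_growth=85,
                           bottom_quartile_growth=40))  # 76.25
```

The design point is the separate bottom-quartile term: under a single overall-growth average, strong gains by top students could mask stagnation at the bottom.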
Student academic progress is important, and Ohio has been a leader in calculating and reporting progress data. But our growth measure, as it looks today, isn’t of the scope and scale needed to account for half of a rating.