Not so fast: real research must supplant shell-game analyses
The debate over charter schools in the Buckeye State continued last week when the Coalition for Public Education (CPE), a group that has filed numerous lawsuits against charters and the charter school program over the years, held a news conference to unveil its analysis of Ohio's school report-card data. CPE called Ohio's charter school program a failure, boasting that 10 years into the "experiment," district schools are far outperforming charters on the state's achievement tests.
Gadfly readers will remember that the Thomas B. Fordham Institute's own analysis of these data showed that, by and large, charter schools and district schools perform relatively evenly on state achievement tests. Both types of schools in urban areas struggle to help children meet the state proficiency standards. Unlike CPE, Fordham weighted its comparisons, matching charter schools' performance against that of district schools in the same cities, and against that of schools in the Big Eight districts, where the vast majority of charters operate.
While one might debate the methodologies used in these studies, the truth is that both analyses (and most other charter school "studies" available in Ohio) are of limited usefulness to policymakers because they lack any information on student growth over time; they are merely "snapshots" of student performance at a single point.
Unfortunately, really good studies of school performance, charter or district, are hard to come by. Fortunately, we are learning a lot more about what constitutes quality research and how to generate it.
In 2006, the research group Public Impact evaluated 58 comparative analyses of charter schools and district schools and found that while there is no perfect research methodology, there are some basic criteria to look for in a high-quality study:
- Value-added Analysis. Quality research looks at the growth of individual students over time, giving a true indication of whether a student is better off for having attended that school.
- An Adequate Sample Size. The study should include a sufficient sample of schools/students to allow for generalization.
- A Sound Comparison. It is very important that the study compare charter schools'/students' performance to that of a relevant group of district schools/students to minimize the chance that charter students are somehow different from district students in ways that influence achievement, such as poverty, special needs, motivation, and previous academic performance. This gives a real "apples-to-apples" comparison.
The current "gold standard" in student achievement research is reflected in an ongoing evaluation of charter schools in New York City, funded by the federal government and led by Harvard economist Caroline Hoxby. This five-year evaluation of nearly 50 New York City charter schools tracks only students whose names were entered in a school lottery, then compares the educational achievements of those students randomly picked to attend a charter school with those of students who stayed in the district schools. Findings from the first-year cohort reveal that charter students are, on average, posting larger gains in reading and math than they would have had they attended the city's district schools.
No such longitudinal research has been conducted in Ohio; it is costly and time-consuming work to get right. But there is a great need for this sort of research if we are to help Ohio improve its charter school program and, ultimately, determine which schools work and which don't.
The Ohio Department of Education (ODE) has taken several steps in the right direction. First, it has implemented a student identifier system that allows student achievement to be tracked over time and across schools. Second, ODE has added a value-added component to the state's assessment system that will, in the coming years, yield the growth data needed to measure student performance longitudinally and gauge each school's impact.
The General Assembly has also shown interest in learning more about the state's charter school program before deeming it a "failure" or allowing it to expand further. In lame-duck legislation in late 2006, lawmakers charged the Partnership for Continued Learning (PCL) with studying the operation and oversight of community schools and producing a series of legislative recommendations for improving the program. Equally important to understanding which schools meet or surpass state standards is identifying and highlighting the schools that help students make significant academic gains over time. Unfortunately, the legislature has not provided the PCL with the time, money, or expertise it needs to produce a high-quality student achievement report. Without such research, the charter-school debate will continue to be based more on opinion than fact.