Standards, Testing & Accountability

  • The heroic journalism of the Boston Globe in exposing pedophilia enabled by the Catholic Church was the focus of last year’s Oscar-winning Spotlight. Now the paper has trained its attention on New England preparatory schools, where some allegations of misconduct date back a half-century or more. Its survey of the claims is penetrating and comprehensive: Nearly seventy such schools have faced complaints of sexual harassment or abuse in the last twenty-five years, with accusations lodged by two hundred alleged victims. And we have no reason to believe that the exploitation is limited to private schools; as a 2004 literature synthesis undertaken by the Department of Education makes clear, sexual misconduct plagues schools across the country and in every sector.
  • At one point, forty-four states were affiliated with one of the two next-generation testing consortia (PARCC and Smarter Balanced) that arose with the widespread adoption of the Common Core. This spring, just twenty-one of those states will be administering the tests. Chalkbeat has published a thorough account of the political machinations that overtook the assessments, as well as the efforts of legislators to pull away from them. In dozens of states, what followed was chaos. Curricular experts were
  • ...

The school choice tent is much bigger than it used to be. Politicians and policy wonks across the ideological spectrum have embraced the principle that parents should get to choose their children’s schools and local districts should not have a monopoly on school supply.

But within this big tent, there are big arguments about the best way to promote school quality. Some want all schools to take the same tough tests and all low-performing schools (those that fail to show individual student growth over time) to be shut down (or, in a voucher system, to be kicked out of the program). Others want to let the market work to promote quality and resist policies that amount to second-guessing parents.

In the following debate, Jay Greene of the University of Arkansas’s Department of Education Reform and Mike Petrilli of the Thomas B. Fordham Institute explore areas of agreement and disagreement around this issue of school choice and school quality. In particular, they address the question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with...

Editor's note: This post is the sixth and final entry in an ongoing discussion between Fordham's Michael Petrilli and the University of Arkansas's Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? Prior entries can be found here, here, here, here, and here.

Shoot, Jay, maybe I should have quit while we were ahead—or at least while we were closer to rapprochement.

Let me admit to being perplexed by your latest post, which has an Alice in Wonderland aspect to it—a suggestion that down is up and up is down. “Short-term changes in test scores are not very good predictors of success,” you write. But that’s not at all what the research I’ve pointed to shows.

Start with the David Deming study of Texas’s 1990s-era accountability system. Low-performing Lone Star State schools faced low ratings and responded by doing something to boost the achievement of their low-performing students. That yielded short-term test-score gains, which were related to positive long-term outcomes. This is the sort of thing we’d...

Editor's note: This post is the fifth in an ongoing discussion between Fordham's Michael Petrilli and the University of Arkansas's Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? Prior entries can be found here, here, here, and here.

Mike, you say that we agree on the limitations of using test results for judging school quality, but I’m not sure how true that is. In order not to get too bogged down in the details of that question, I’ll try to keep this reply as brief as possible.

First, the evidence you’re citing actually supports the opposite of what you are arguing. You mention the Project STAR study showing that test scores in kindergarten correlated with later life outcomes as proof that test scores are reliable indicators of school or program quality. But you don’t emphasize an important point: whatever kindergarten benefits produced those higher test scores did not produce higher test scores in later grades, even though they produced better later-life outcomes....

Editor's note: This post is the fourth in an ongoing discussion between Fordham's Michael Petrilli and the University of Arkansas's Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? Prior entries can be found here, here, and here.

I think we’re approaching the outline of a consensus, Jay—at least regarding the most common situations in the charter quality debate. We both agree that closing low-performing schools is something to be done with great care, and with broad deference to parents. Neither of us wants “distant regulators” to pull the trigger based on test scores alone. And we both find it unacceptable that some states still use test score levels as measures of school quality.

I think you’re right that in the vast majority of cases, charter schools that are closed by their authorizers are weak academically and financially. Parents have already started to “vote with their feet,” leaving the schools under-enrolled and financially unsustainable. Closures, then, are akin to euthanasia. That’s certainly been our experience at...

Previous research has found that oversubscribed urban charter schools produce large academic gains for their students. But are these results related to test score inflation (defined by one assessment expert as increases in scores that do not signal a commensurate increase in proficiency in the domain of interest)? In other words, do these schools merely figure out how to prepare their students to do well on the high-stakes exam, or are they contributing to real learning writ large?

To explore this question, a recent study examines state testing data from 2006 to 2011 at nine Boston middle school charters with lottery-based admissions. By exploiting the random nature of the lottery system, prior studies have found that these schools produce substantial learning gains on the Massachusetts Comprehensive Assessment System (MCAS).

To carry out the analysis, author Sarah Cohodes breaks down the learning gains by the various components of the state assessment—akin to how one might disaggregate overall gains by student subgroup. A math assessment might contain several different testing domains (e.g., geometry versus statistics), with some topics being tested more frequently than others. Cohodes’s hypothesis is as follows: If the gains are attributable to score inflation, we might expect to see stronger results on...

This new research report from Educational Testing Service is a solid contribution to the evidence base—rather than the opinion base—about the so-called “opt-out” movement. Author Randy E. Bennett finds that parents’ refusal to let their children sit for standardized tests is “a complicated, politically charged issue made more so by its social class and racial/ethnic associations. It is also an issue that appears to be as much about test use as about tests themselves.”

Opt-out true believers will likely dismiss out of hand anything coming from a testing outfit, but they ought to take a long look. The report does a good job synthesizing data from national and state education departments, published surveys, and other sources to put between two covers exactly what is known—and can be sensibly divined—about who is opting out and why. “Parents who opt their children out appear to represent a distinct subpopulation,” the report notes. In New York, for example, “opt-outs were more likely to be white and not to have achieved proficiency on the previous year’s state examinations.” Test refusers are also less likely to be poor or to attend a school district serving large numbers of low-income families or ELL students. None...

A sixth grader in Mountain Brook, Alabama, can be considered one of the luckiest in the country, enrolled in a district where he and his classmates read and do math three grade levels above the average American student. But a child of similar age in Birmingham, just five miles north on Route 280, would be in considerably worse shape; there, kids perform 1.8 grade levels below average. So how could a ten-minute drive transport students to a different educational galaxy? Well, look at some numbers compiled by a team of Stanford researchers: Mountain Brook is 98 percent white, with a median household income of $170,000. Birmingham is 96 percent black, with a median household income of $30,000. Sometimes the figures speak for themselves.

John Bel Edwards, the recently elected Democratic governor of Louisiana, has had an eventful few months. After being inaugurated in January, he’s wrangled with state lawmakers over their leadership selection process and hustled to patch a huge crater in the budget. But his education agenda, largely aimed at curbing the growth of the state’s charter sector and cutting funding for voucher students, has run aground over the last few weeks. After the state’s newly...

Dave Yost

I am a conflicted man.

Professionally, I lead Ohio’s auditing staff, a team of financial experts whose job it is to verify that tax dollars are being properly spent and to root out any misuse or theft of public money. That includes charter school spending.

Yet personally, I’m a strong proponent of the charter school movement. I believe in the lifetime benefits of school choice and affording all parents the ability to choose the school that will best serve their children.

My friends sometimes question how I can be so tough on charters when I personally support them. The answer, I tell them, is simple: We don’t play favorites. We can’t. We shouldn’t. Doing so would erode the public’s trust in our office, which we must faithfully and ardently protect. To ignore the misdeeds of the few problem charters would stain the great work of many. Turning a blind eye to the problems in a charter school, or any school, would mean that we failed our children, which is never an option.

It’s a conflict that public officials often face when their official duties require them to make decisions running counter to their personal beliefs.

The mission of the auditor...

Editor's note: This post is the third in an ongoing discussion between Fordham's Michael Petrilli and the University of Arkansas's Jay Greene that seeks to answer this question: Are math and reading test results strong enough indicators of school quality that regulators can rely on them to determine which schools should be closed and which should be expanded—even if parental demand is inconsistent with test results? Prior entries can be found here and here.

It’s always nice to find areas of agreement, but I want to be sure that we really do agree as much as you suggest, Mike. I emphasized that it should take “a lot more than ‘bad’ test scores” to justify overriding parental preferences. You say that you agree. But at the end, you add that we may have no choice but to rely primarily on test scores to close schools and shutter programs—or else “succumb to ‘analysis paralysis’ and do nothing.”

This is a false dichotomy. If all we have are unreliable test scores, we don’t have to make decisions based on them or “do nothing.” Instead, we could rely on local actors who have more contextual knowledge about school or program quality. So if the charter board, local...
