Stuck in the PARCCing lot
This week, the Partnership for Assessment of Readiness for College and Careers (PARCC) released its latest cost estimates, which are coming in significantly higher than the costs of the Smarter Balanced assessments. Almost immediately following the announcement, Georgia dropped out of the federally funded assessment consortium. This after Utah, Alabama, and Oklahoma dropped out of both consortia and after North Dakota switched from PARCC to Smarter Balanced.
Around the blogosphere, speculation—and occasional high-fiving—erupted. My friend and colleague, Andy Smarick, jumped on the announcement declaring it a “disaster” on Twitter and hinting that the PARCC defections might be signaling the beginning of the end of the Common Core. On Twitter, Rick Hess lamented, “If only Common Core'ites had been warned to take political, policy concerns seriously...” And Mike Petrilli lambasted Georgia officials on Twitter, chiding, “Shame on Georgia. You really can’t afford to spend 1/3 of 1% of your per pupil funding on tests?”
Of course, Common Core supporters have lots of reasons to worry about the growing cracks in the CCSS coalitions: Petrilli is right to hold officials’ feet to the fire when it comes to making tough implementation decisions, and Smarick might be right that these PARCC defections are signaling the beginning of the end of the federally funded assessment consortia. (Though, to be clear, we are still nowhere near that. Support for both PARCC and SBAC in some circles remains quite strong. And pressure on PARCC from states might be exactly what the consortium needs to turn things around.)
That said, whether or not the consortia succeed, I think it is wrong to say the fate of these two “common assessments” is—or needs to be—inextricably linked to the overall promise of the Common Core. Here’s why:
First, I have no reason to believe in the divinity of either PARCC or Smarter Balanced (SBAC) (or of any test developer, for that matter). While PARCC generally seems to get it right when it comes to alignment to the CCSS, they are rumored to be plagued by management woes and strategic indecision that threaten the whole operation. Leadership matters—not only when it comes to the viability of the consortium, but also when it comes to decisions about length, cost, and so on.
SBAC, on the other hand, seems to be running a tighter ship, but there is increasing evidence that their assessment is not well aligned to the Common Core. Back when SBAC released its first ELA content frameworks, I worried that the test looked eerily similar to the state tests of the past, with a focus on reading skills over evidence-based reading and writing and with little indication that the consortium prioritized using the kinds of authentic, complex, and content-rich texts that the CCSS demands. On Tuesday, Rhode Island education blogger Jason Becker rightly noted, “While the rest of the internet seems to be obsessed over Georgia leaving [PARCC], the real concern should be over the quality of [the SBAC] test items.” And, when it comes to quality, Becker is unimpressed:
The problem with the SBAC items is they represent the worst of computerized assessment. Rather than demonstrating more authentic and complex tasks, they present convoluted scenarios and even more convoluted input methods…What I see here is not worth the investment in time and equipment that states are being asked to make, and it is hardly a "next generation" set of items that will allow us to attain more accurate measures of achievement.
(It’s also worth noting that the assessment consortia are the only federally controlled aspect of the Common Core; perhaps this is where we should heed the warnings of our conservative brethren and let the market push excellence?)
On the other hand, Smarick is right to be skeptical of states that opt to go it alone, because they will have a steep hill to climb. They’ve got less than two years to build a better, cheaper test than what either PARCC or SBAC has to offer. And if history is any judge, a marketplace of state-by-state assessment development has yet to bring us excellence.
Second, commonness is only a good thing if we have excellence first. The reason we at Fordham are supportive of the Common Core standards is because they are clearer and more rigorous than the vast majority of state standards they’ve replaced. This is a point that even James Milgram and Sandra Stotsky—two of Common Core’s more vocal opponents—acknowledge. It’s also why we’re more bullish on the CCSS than we are on the NGSS.
On the assessment side, because we don’t know that either consortium has cornered the market on quality, I’m not ready to make “commonness” our primary goal. When Utah dropped out of both consortia last year, I argued that, while assessments are critical to CCSS implementation, we should be worried about quality first. Indeed, I remain convinced that the quality of these tests is far more important than their commonness. Yes, if we have more tests we will have less comparability between and among states. But I’m not sure we’ve reached an inflection point where fewer tests would yield better results for our kids.
Third, while the policy and political debate over Common Core is heating up, there are some very real and critically important conversations and shifts that are happening at the classroom level—and that is where change needs to happen if Common Core is going to live up to its promise.
In New York City, for example, thanks to the Common Core focus on text complexity and content-rich curricula, Lucy Calkins’s Reading and Writing Workshop—which has for years been a foundation of reading instruction in the city’s schools—doesn’t even appear on the list of “recommended” Common Core–aligned programs. And the cities that are part of the Council for Great City Schools have committed to using tools like the Student Achievement Partners “Publisher’s Criteria” to guide decisions about curriculum, instruction, and assessment.
Perhaps even more critically, just this week, Student Achievement Partners released an “Assessment Evaluation Tool” that is designed to
…evaluate the alignment of grade or course-level assessment materials for alignment with the CCSS, including interim or benchmark assessments and classroom assessments.
For both ELA and math, the tool presents both “non-negotiables” and “indicators of quality” that help clarify what the Common Core standards demand in each grade band and what educators and leaders should look for when evaluating a test for its alignment to the standards.
Given the focus in standards- and accountability-driven reform on “outcomes” over inputs, these are exactly the kinds of tools and criteria that should be driving our discussion about standards implementation—and certainly about assessment alignment and quality. And in the end, if we wound up in the early years with two dozen assessments that all met the indicators on this tool, we’d have tests that provided clear signals of how instruction and curriculum need to shift under the Common Core, and we’d have better information about whether our students were on track towards college and career readiness. It’s hard to see how that portends the end of the Common Core.
None of this is to say that we should not think critically about what the demise of either or both assessment consortia might mean for the CCSS, nor should we pretend like this is the only speed bump states will hit as CCSS implementation continues to ramp up. But let’s also remember that, to date, forty-five states and D.C. have adopted the Common Core as their own. And educators and leaders in many of those states continue to align curriculum, instruction, assessment, and professional development to the expectations. Those of us who support the content and rigor changes the Common Core can bring about owe it to them to acknowledge that the fate of these standards lies not in the offices of Achieve or SBAC but, rather, in the schools and classrooms where teachers are changing what they do every day to meet the new content and rigor demands of the Core.