Standards-Based Reforms

Nationally and in Ohio, we press for the full suite of standards-based reforms across the academic curriculum and throughout the K–12 system, including (but not limited to) careful implementation of the Common Core standards (CCSS) for English language arts (ELA) and mathematics as well as rigorous, aligned state assessments and forceful accountability mechanisms at every level.

Resources:

Our many standards-based blog posts are listed below.


Editor’s note: This is the last in a series of blog posts that takes a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The prior five posts can be read here, here, here, here, and here.

It’s hard to believe that it’s been twenty-two months (!) since I first talked with folks at Fordham about doing a study of several new “Common Core-aligned” assessments. I believed then, and I still believe now, that this is incredibly important work. State policy makers need good evidence about the content and quality of these new tests, and to date, that evidence has been lacking. While our study is not perfect, it provides the most comprehensive look yet available. It is my fervent hope that policy makers will heed these results. My ideal would be for states to simply adopt multi-state tests that save them effort (and probably money) and promote higher-quality standards implementation. The alternative, which many states have chosen, is to go it alone. Regardless of the approach, states should at least use the results of this study and other recent and forthcoming investigations of test quality...

Editor’s note: This is the fifth in a series of blog posts that takes a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The prior four posts can be read here, here, here, and here.

When one of us was enrolled in a teacher education program umpteen years ago, one of the first things we were taught was how to use Bloom’s taxonomy. Originally developed in 1956, it is a well-known framework that delineates six increasingly complex levels of understanding: knowledge, comprehension, application, analysis, synthesis, and evaluation. More recently—and to the consternation of some—Bloom’s taxonomy has been updated. But the idea that suitably addressing various queries and tasks requires more or less brainpower is an enduring truth (well, sort of).

So it is no surprise that educators care about the “depth of knowledge” (DOK) (also called “cognitive demand”) required of students. Commonly defined as the “type of thinking required by students to solve a task,” DOK has become a proxy for rigor even though it concerns content complexity rather than difficulty. A clarifying example: A student may not have seen a...

On the campaign trail, Senator Ted Cruz reliably wins applause with a call to "repeal every word of Common Core." It's a promise he will be hard-pressed to keep should he find himself in the White House next January. Aside from the bizarre impracticality of that comment as phrased (which words shall we repeal first? "Phonics"? "Multiplication"? Or "Gettysburg Address"?), the endlessly debated, frequently pilloried standards are now a deeply entrenched feature of America's K–12 education landscape—love 'em or hate 'em.

Common Core has achieved "phenomenal success in statehouses across the country," notes Education Next. In a study published last month, the periodical found that "thirty-six states strengthened their proficiency standards between 2013 and 2015, while just five states weakened them." That's almost entirely a function of Common Core. 

Education Next began grading individual states’ standards in 1995, comparing the extent to which their state tests’ definition of proficiency aligned with that of the gold-standard National Assessment of Educational Progress (often referred to as “the nation’s report card”). That year, six states received an A grade. As recently as four years ago, only Massachusetts earned that distinction. Today, nearly half of all states, including the District of Columbia, have earned A ratings....
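To make that comparison concrete, here is a minimal sketch of the benchmarking idea: map a state’s reported proficiency rate onto a NAEP score distribution to see what NAEP-scale cut score the state’s “proficient” label implies. The scores, rates, and cutoff below are illustrative assumptions, not Education Next’s actual data or grading rubric.

```python
# A minimal sketch of the benchmarking idea, with made-up numbers: map a
# state's reported proficiency rate onto a NAEP score distribution to see
# what NAEP-scale cut score the state's "proficient" label implies.
import numpy as np

rng = np.random.default_rng(0)
naep_scores = rng.normal(240, 35, 5000)   # stand-in for a state's NAEP score distribution
state_proficiency_rate = 0.55             # state test reports 55 percent "proficient"

# If 55 percent of students clear the state's bar, the implied cut score sits
# at the 45th percentile of the NAEP distribution.
implied_cut = np.quantile(naep_scores, 1 - state_proficiency_rate)

NAEP_PROFICIENT_CUT = 249                 # illustrative NAEP "proficient" cutoff
gap = NAEP_PROFICIENT_CUT - implied_cut
print(f"Implied state cut score: {implied_cut:.0f} (gap to NAEP proficient: {gap:.0f})")
# A grading scheme could then award an A for small gaps and an F for large ones.
```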

Leading up to this year’s report card release, some school districts expressed concern about the negative impact of students opting out of state assessments on their report card grades. In response, lawmakers proposed a well-intentioned but shortsighted bill attempting to mitigate the impact of opt-outs—first by erasing non-test-takers from their schools’ performance grades and then (after being amended) by reporting two separate Performance Index grades. The Ohio Department of Education devised a temporary reporting solution: Performance Index scores would be reported as normal (including the impact of non-test-takers, as per current law), but a “modified achievement measure” would be made available to illustrate how districts would have scored if non-test-takers didn’t count.
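For readers who want to see the arithmetic, here is a minimal sketch of how such a “modified” measure can differ from the reported one. The performance levels and point weights below are illustrative assumptions, not Ohio’s official Performance Index table.

```python
# A minimal sketch of how a Performance Index shifts when non-test-takers are
# excluded. Levels and point weights are illustrative, not Ohio's official table.
ILLUSTRATIVE_WEIGHTS = {
    "advanced": 1.2,
    "accelerated": 1.1,
    "proficient": 1.0,
    "basic": 0.6,
    "limited": 0.3,
    "untested": 0.0,   # non-test-takers earn zero points under current law
}

def performance_index(counts, include_untested=True):
    """Average points per student, scaled by 100, for a district's level counts."""
    if not include_untested:
        counts = {level: n for level, n in counts.items() if level != "untested"}
    total_students = sum(counts.values())
    total_points = sum(ILLUSTRATIVE_WEIGHTS[level] * n for level, n in counts.items())
    return 100 * total_points / total_students

district = {"advanced": 150, "accelerated": 200, "proficient": 300,
            "basic": 100, "limited": 50, "untested": 40}

print(performance_index(district))                          # reported measure: ~92.3
print(performance_index(district, include_untested=False))  # "modified" measure: ~96.9
```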

A quick look at the data shows that the impact of opt-outs last year (2014–15) was minimal for the vast majority of Ohio school districts. As depicted in Table 1, fifty-two districts (8.5 percent) experienced a letter grade change because of their non-participation rates (shaded in green). This was most likely driven by the opt-out movement. It’s hard to say for sure, though, because Ohio only captures test participation rates and not the reasons for non-participation—which might include excused or unexcused absences, truancy, or opting...

Editor's note: This letter appeared in the 2015 Thomas B. Fordham Institute Annual Report. To learn more, download the report.

Dear Fordham Friends,

Think tanks and advocacy groups engage in many activities whose impact is notoriously difficult to gauge: things like “thought leadership,” “fighting the war of ideas,” and “coalition building.” We can look at—and tabulate—various short-term indicators of success, but more often than not, we’re left hoping that these equate to positive outcomes in the real world. That’s why I’m excited this year to be able to point to two hugely important, concrete legislative accomplishments and declare confidently, “We had something to do with that.”

Namely: Ohio’s House Bill 2, which brought historic reforms to the Buckeye State’s beleaguered charter school system, and the Every Student Succeeds Act, the long-overdue update to No Child Left Behind.

In neither case can we claim anything close to full credit. On the Washington front especially, our contributions came mostly pre-2015, in the form of writing, speaking, and networking about the flaws of NCLB and outlining a smaller, smarter federal role. We were far from alone; figures...

A new Harvard University study examines the link between Common Core implementation efforts and changes in student achievement.

Analysts surveyed randomly selected teachers of grades 4–8 (about 1,600 in Delaware, Maryland, Massachusetts, New Mexico, and Nevada), asking them a number of questions about professional development they’ve received, materials they’ve used, teaching strategies they’ve employed, and more. They then used those responses to create twelve composite indices of various facets of Common Core implementation (such as “principal is leading CCSS implementation”) and analyzed the link between each index and students’ performance on the Common Core-aligned PARCC and SBAC assessments. In other words, they sought to link teacher survey responses to their students’ test scores on the 2014–15 PARCC and SBAC assessments, while controlling for students’ baseline scores and characteristics (along with those of their classroom peers) and teachers’ value-added scores in the prior school year.
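As a rough illustration of that analytic setup, here is a minimal sketch of one such regression, assuming a hypothetical flat file in which each row is a student linked to his or her teacher’s survey-based implementation index. All column and file names are invented for illustration; this is not the authors’ actual code.

```python
# A minimal sketch of one implementation-index regression: current scores on
# the index, controlling for baseline scores, student/peer characteristics,
# and prior teacher value-added. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students_linked_to_teacher_surveys.csv")  # hypothetical file

model = smf.ols(
    "score_2015 ~ implementation_index + baseline_score + peer_mean_baseline"
    " + frl + iep + ell + teacher_value_added_prior",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["teacher_id"]})  # cluster by teacher

print(model.summary())  # the coefficient on implementation_index is the quantity of interest
```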

The bottom line is that this correlational study finds more statistically significant relationships for math than for English. Specifically, three indices were related to student achievement in math: the frequency and specificity of feedback from classroom observations, the number of days of professional development, and the inclusion of student performance on CCSS-aligned assessments in teacher evaluations....

Editor’s note: This is the fourth in a series of blog posts taking a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first three posts can be read here, here, and here.

It’s historically been one of the most common complaints about state tests: They are of low quality and rely almost entirely on multiple choice items. 

It’s true that item type has sometimes been a proxy, like it or not, for test quality. Yet there is nothing magical about any item type if the item itself is poorly designed. Multiple choice items can be entirely appropriate to assess certain constructs and reflect the requisite rigor. Or they can be junk. The same can be said of constructed response items, which require students to produce an answer rather than choose it from a list of possibilities. Designed well, constructed response items can suitably evaluate what students know and are able to do. Designed poorly, they are a waste of time.

Many assessment experts will tell you that one of the best ways to assess the skills, knowledge, and competencies that we expect students to demonstrate is through...

Fordham’s latest blockbuster report digs deep into three new, multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions about the next-generation tests: Do these tests reflect strong college- and career-ready content? Are they of rigorous quality? Broadly, what are their strengths and areas for improvement?

Over the last two years, principal investigators Nancy Doorey and Morgan Polikoff led a team of nearly forty reviewers to find answers to those questions. Here’s a quick sampling of the findings:

  • Overall, PARCC and Smarter Balanced assessments had the strongest matches to college- and career-ready standards, as defined by the Council of Chief State School Officers.
  • ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
  • Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.

As might be expected, the report has garnered national interest. Check out coverage from The 74 Million, U.S. News, and Education Week just for a start.

Or better...

Editor’s note: This is the second in a series of blog posts that will take a closer look at the findings and implications of Evaluating the Content and Quality of Next Generation Assessments, Fordham’s new first-of-its-kind report. The first post can be read here.

Few policy issues over the past several years have been as contentious as the rollout of new assessments aligned to the Common Core State Standards (CCSS). What began with more than forty states working together to develop the next generation of assessments has devolved into a political mess. Fewer than thirty states remain in one of the two federally funded consortia (PARCC and Smarter Balanced), and that number continues to dwindle. Nevertheless, millions of children have begun taking new tests—whether those developed by the consortia, ACT’s Aspire, or state-specific assessments constructed to measure student performance against the CCSS or other college- and career-ready standards.

A key hope for these new tests was that they would overcome the weaknesses of the previous generation of state assessments. Among those weaknesses were poor alignment with the standards they were designed to assess and low overall levels of cognitive demand (i.e., most items required simple recall or...

The Thomas B. Fordham Institute has been evaluating the quality of state academic standards for nearly twenty years. Our very first study, published in the summer of 1997, was an appraisal of state English standards by Sandra Stotsky. Over the last two decades, we’ve regularly reviewed and reported on the quality of state K–12 standards for mathematics, science, U.S. history, world history, English language arts, and geography, as well as the Common Core, International Baccalaureate, Advanced Placement, and other influential standards and frameworks (such as those used by PISA, TIMSS, and NAEP). In fact, evaluating academic standards is probably what we’re best known for.

For most of the last two decades, we’ve also dreamed of evaluating the tests linked to those standards—mindful, of course, that in most places, the tests are the real standards. They’re what schools (and sometimes teachers and students) are held accountable for, and they tend to drive curricula and instruction. (That’s probably the reason why we and other analysts have never been able to demonstrate a close relationship between the quality of standards per se and changes in student achievement.) We wanted to know how well matched the assessments were to the standards, whether they were of high...
