The four stages of educator-evaluation evolution
We’re about to enter the all-important fourth phase of the evolution of educator evaluations.
Phase 1 was the national advocacy that spurred action. This is best represented by the now-famous TNTP report “The Widget Effect,” which convinced so many that one of the tallest obstacles to improving student learning was the low-quality evaluation systems that seemed to exist in just about every school district in the country.
Phase 2 was the Race to the Top era, during which the federal government, through the enticement of huge grants, compelled states to enact far-reaching reforms that affected policy and practice. (I wrote about Uncle Sam’s carrot and how states responded in this article for Education Next).
We’re currently in Phase 3, during which state and local leaders try desperately to implement the big-promise policies that, by any fair accounting, got way out ahead of practice. As a recovering state policymaker, I can attest to the yawning gap between policy and practice and how challenging it is to bridge.
Phase 3, however, has also been marked by a number of smart people in the research and policy-analysis world trying to figure out and describe what this flurry of activity amounts to and then make recommendations for future action.
The first contribution in this area was TNTP’s short, straightforward “Teacher Evaluation 2.0.” If you’re new to the subject or want to get re-grounded, give this brief a look. Its “six design standards” are spot-on, and the New Haven evaluation matrix will be valuable to state- and district-level policymakers wrestling with developing summative ratings from multiple measures.
In recent weeks, two fuller analyses have been released, and both deserve attention. The first was penned by my Bellwether colleagues Sara Mead, Andrew Rotherham, and Rachael Brown under the aegis of the AEI Teacher Quality 2.0 project.
“The Hangover: Thinking About the Unintended Consequences of the Nation's Teacher Evaluation Binge” discusses the complicated “tensions” that have surfaced during the implementation process. The report’s title is not only catchy—it is also apt. Passing legislation felt good, but now policymakers and practitioners are paying the price.
The most widely discussed tension relates to the level of flexibility granted to districts: too much, and states risk wide variation and possible bad behavior at the local level; too little, and districts find themselves with hands tied.
The authors rightfully point out the dissonance between reformers’ views on evaluation policy and charter schools. We simultaneously want state mandates on the former but bristle when they are applied to the latter.
The most troubling tension, however, relates to what the authors describe as “the evolving” system of education delivery. In short, the evaluation rules we’ve created were a response to yesterday’s system; but these rules could very well hinder the development of tomorrow’s system of personalized learning and blended instruction. It’s my view that the seemingly unavoidable collision of fluid, individualized learning and hulking, expensive, common end-of-year assessments ought to cause us sleepless nights.
[For those interested in the history of the teacher-evaluation-policy journey, the report makes reference to a paper by Kevin Carey in 2004 that argued for using value-added models in evaluations. I didn’t know about this paper and had to go back and give it a look. Carey, once again, was ahead of his time, seeing things more clearly than many of us and arguing convincingly for a new path.]
Finally, two easily overlooked sentences in the report nicely summarize its fundamental point: “By nature, education policymaking tends to lurch from inattention to overreach,” and “Do not expect legislation to do regulation’s job.”
In combination, they tell us that evaluation-reform laws may have been passed too quickly and that those laws, so certain of their own righteousness and unaware of the countless complications lurking around every corner, left too little discretion to policy administrators.
The other report is the excellent “State of Teacher Evaluation Reform,” written for the Center for American Progress by Patrick McGuinn, a Drew University professor.
This paper should have a reserved spot on the desk of every state policymaker working on educator evaluations. It tackles its subject through six case studies of “early-adopter” states.
Though it does a very good job of pulling out commonalities and lessons learned, its real value is in its careful study and explanation of what these states actually did. The paper has a real-time feel to it; the reader gets an appreciation for how well-intentioned policymakers in different places struggled to translate legislative and regulatory language into action. It reveals numerous little-known approaches and analyzes their upsides and downsides.
For example, Tennessee decided that the state would handle the training of its more than 5,000 observers. When political opposition to the new system grew, the Governor asked a nonprofit to assess the first year of implementation.
In Colorado, a principal-evaluation system was implemented before the teacher-evaluation system; state leaders drastically underestimated the SEA capacity needed to implement the new policies; and an outside organization (the Colorado Legacy Foundation) played a large role throughout the process.
The stories from other states are equally edifying. I know from first-hand experience that McGuinn’s description of New Jersey’s saga is accurate, so I have every reason to believe the other case studies are equally fair and instructive.
His conclusions are terrific, and a number of them could only have materialized through case studies. For example, some of the trouble with state-level implementation was the result of SEAs trying to build new offices dedicated to educator effectiveness at the same time as they were launching massive new programs. Also, because of limited internal capacity, SEAs are relying on external consultants and foundations; that could have detrimental implications over time.
If you’re interested in making education policy succeed in the field, you really should read both of these papers. You’ll be a smarter ed reformer as a result—but you’ll also be able to contribute to the extraordinarily important fourth phase of educator evaluation: adjusting the current policies and practices to ensure they are sustainable and lead to better results for kids.
About the Editor
Michael J. Petrilli
Executive Vice President
Mike Petrilli is one of the nation's foremost education analysts. As executive vice president of the Thomas B. Fordham Institute, he oversees the organization's research projects and publications and contributes to the Flypaper blog and weekly Education Gadfly newsletter.
May 23, 2013