Measuring Teacher Effectiveness: A Look “Under the Hood” of Teacher Evaluations in 10 States
Historically, teacher evaluations have been “nothing burgers”: nearly 100 percent of educators are rated “satisfactory” or better, often on the basis of a single classroom observation, if that. These empty-calorie appraisals of educator effectiveness keep the teaching profession plump, but they don’t provide the regimen needed to keep it healthy. Recently, however, some states, districts, and education organizations have begun changing their diets. This report from Public Impact, 50CAN, and ConnCAN offers detailed profiles of ten meaty programs: Delaware, Rhode Island, Tennessee, Hillsborough County (Tampa), Houston, New Haven, Pittsburgh, D.C., Achievement First, and Relay Graduate School of Education. (Why Colorado’s Harrison School District 2 wasn’t included, we’re unsure.) It explains how each handles every aspect of teacher assessment: student-achievement measures, classroom observations, nonacademic measures, data accuracy, validity, and reliability, and the reporting of evaluation results. Tennessee, for example, convened teams of educators in each subject not covered by state tests (and therefore unavailable for value-added analysis) to develop growth measures for those courses. And to handle the sticky situation of team teaching, Rhode Island weights value-added results to reflect the amount of time each teacher spends with a student. Those hungry for healthier teacher-evaluation recipes, or simply in need of a beginner’s cookbook, should nibble on this report.
SOURCE: Daniela Doyle and Jiye Grace Han, Measuring Teacher Effectiveness: A Look “Under the Hood” of Teacher Evaluations in 10 States (New York, NY: 50CAN; Hartford, CT: ConnCAN; and Chapel Hill, NC: Public Impact, 2012).