For those of you following the interesting and ever-changing world of educator evaluations, a few recent happenings may be worth a look.
The Delaware Department of Education recently published a report, written by its internal Teacher and Leader Effectiveness Unit, on the implementation of its revised educator-evaluation system, DPAS-II. As part of the state’s Race to the Top grant, the DDE incorporated “robust measures of student achievement” into its existing system. That model had been structured around four components of effective teaching measured through classroom observations, and it had produced almost no variation among the state’s educators. (In the new system, those observations still make up 80 percent of the final rating.)
Most components of DPAS-II receive a binary rating, but a new component, based on multiple measures of student growth, is scored “Exceeds,” “Satisfactory,” or “Unsatisfactory.” The summative evaluation combines the components, yielding final ratings of “Highly Effective,” “Effective,” “Needs Improvement,” and “Ineffective.”
The state’s educators were divided into three groups based on the availability of student-performance measures; these measures include state tests, external and internal assessments in subjects outside of math/reading, and “growth goals” based on professional standards and position responsibilities.
Importantly, teachers whose scores were determined (at least in part) by empirical measures of student growth showed more score variation (54 percent receiving “Exceeds”) than those assessed via growth goals based on professional standards (69 percent receiving “Exceeds”). Moreover, the distribution of scores associated with student-performance measures varied widely among districts, making inter-district comparisons difficult...