The ruling would allow the New York City school district to release performance rankings for more than 12,000 of its teachers to the media outlets that have requested the information.
The local teachers union, which sought to block release of the records, is appealing the decision, but if the information is released, it would be the largest disclosure of its kind in the United States.
The performance ratings are tied to “value added” data that link individual teachers with their students’ improvement on tests.
The method is lauded by some as the most objective measure for determining teacher effectiveness and decried by others who see it as riddled with errors. It is at the center of a growing debate about the merit of putting such data in the hands of the public – set off by the Los Angeles Times’s decision last summer to publish a database of individual teachers based on similar rankings.
“This sort of public naming and shaming is thought of by some as a good way to help parents understand the quality of teachers in their schools,” says Kevin Welner, co-director of the National Education Policy Center at the University of Colorado in Boulder, which on Monday released a policy brief calling into question the degree to which teacher quality can be determined by value-added data. “But unless you make a lot of assumptions and just look at top rank versus bottom rank, [using value-added models to evaluate teachers] is almost like throwing a dart at a dartboard. That’s really troubling if you’re talking about a high-profile public announcement of where teachers rank.”
Still, while few believe value-added data should be the only way teachers are evaluated, many researchers consider the information the most accurate available gauge of a teacher's effect on his or her students. Increasingly, it is becoming part of official teacher evaluations.
In its Race to the Top contest, the Obama administration offered incentives for districts and states to begin tying teacher evaluations to student test scores. In Washington, D.C., the district fired several hundred teachers last year for poor performance, based in part on their students’ value-added test scores. In New York, the city doesn’t currently use the data in teacher evaluations, but beginning in 2013, they will count for 25 percent of a teacher’s performance evaluation.
The rankings at issue in this court case – called the teacher data reports (TDRs) – were developed four years ago for more than 12,000 of the city’s 80,000 public-school teachers as part of a pilot project.
At the time, the district promised to keep those ratings private, but in her ruling, New York State Supreme Court Justice Cynthia Kern said that promise does not bar their release.
She emphasized that she was not ruling on whether the teacher data reports should be released, just whether the Department of Education’s decision to do so was “arbitrary and capricious.”
“The unredacted TDRs may be released regardless of whether or to what extent they may be unreliable or otherwise flawed,” Justice Kern wrote.
Indeed, flaws exist with value-added measures, most experts agree – especially when the measures are tied to individual names or not averaged out over several years.
They can change drastically for any given teacher from year to year; they don't take into account things like chronic absenteeism, learning gains due to summer school, or supplemental teachers; and in schools with high student mobility, a teacher may even be graded on the performance of students he or she never taught, if those students changed schools after being linked to the teacher in the fall.
But even so, say advocates of the data, they often offer the best available picture of how effectiveness varies from teacher to teacher. And depending on how the data are released, making them public may serve some goals.
“More-effective teachers are on average about three times more effective than the least effective teachers,” says Jane Hannaway, director of the Education Policy Center at the Urban Institute in Washington. “There are valid issues about how these effective and ineffective teachers are distributed across schools.”
Ms. Hannaway says that she has concerns with the sort of database of individual teachers that the L.A. Times produced, since the data can be misleading at the individual level and the public may not have the tools to understand the context. Also, the ratings constitute just one piece of what a teacher is doing. But comparing school to school, or looking at average scores and their distribution within districts, could provide valuable information, she says, adding, “I don’t think there’s going to be any way to suppress this information.”
Since the L.A. Times released its database, media outlets have asked several other districts around the country for such data, with mixed results. This court case applies only to New York City, but it may serve as a model when similar battles arise elsewhere.
New York’s teachers union, the United Federation of Teachers, has promised to appeal the decision, and the district won’t release the rankings until after the appeal. UFT president Michael Mulgrew said in a statement that he was “disappointed” in the ruling. “The reports, which are largely based on discredited state tests, have huge margins of error and are filled with inaccuracies, will only serve to mislead parents looking for real information,” he said.