Aftermath of bad test results
HENDERSON, N.C. — It was one of the most awful days in sixth-grade teacher Heddie Alston's life, that day in fall 1997 when new test results tarred Pinkston Street Elementary School as the state's worst.
In days, a state-mandated "assistance team" arrived. The principal was fired. Three-quarters of Pinkston Street teachers resigned. Students hung their heads when asked where they went to school. "I was so embarrassed," Ms. Alston says.
At every grade level and by nearly every test measure, Pinkston Street students were at the bottom. On the fifth-grade writing test, for instance, not a single student among more than 400 rated "at mastery."
Yet today, in a startling switch, pride and achievement are surging at this urban school in North Carolina.
The reason? The same tests that hammered Pinkston Street in 1997, taken one year later, showed 43 percent of students performing at grade level compared with just 23 percent a year earlier. Though still well below the state average, that leap earned the school an "exemplary growth" rating.
As the results were announced, teachers and students shouted and hugged each other. After a year of remedial slogging, an influx of dollars and new teachers, Pinkston Street students were clawing toward grade-level performance.
"We're on our way," says Principal Beverly Joseph, "and the kids and teachers know it. We couldn't have done it without those first test results, as painful as they were."
Like Texas, North Carolina has long ranked near the bottom on the National Assessment of Educational Progress (NAEP) tests. But student achievement on NAEP and other tests has risen dramatically in recent years - along with broad public support.
North Carolina was one of only five states to have significant gains in NAEP fourth-grade reading skills between 1992 and 1998. But not everyone buys the idea that results like these are evidence of solid learning. Critics say high-stakes testing:
* Results in "teaching to the test" - narrowing the scope of classroom content to fit what's on the test.
* Artificially inflates a school's or district's test scores over time.
* Leads to federal or state rather than local control.
* Measures economic and demographic differences, not ability.
* Is often unfair, because the tests used to judge ability - and hold kids back - are often designed for other uses, like monitoring curriculum.
Daniel Koretz, a senior social scientist at RAND Corp. in Washington and a professor of education at Boston College, says, "Tests used for accountability are not necessarily meaningful at all - because people can tailor their teaching too closely to the test. So states are likely to get large increases in scores that don't generalize to anything else."
He is hardly alone. "You should not make high-stakes decisions on tests unless you're confident all students have had a reasonable opportunity to learn the content covered by the tests," says Walter Haney, a researcher at the Center for the Study of Testing, Evaluation, and Educational Policy in Boston.
Haney, who has been called as an expert witness in North Carolina and Texas lawsuits filed by parents of children who were held back, says his research has turned up at least a dozen instances of test cheating in Texas in the last decade. And that leads him to wonder about the reliability of test results.
But Mike Ward, North Carolina's superintendent of schools, says high-stakes testing has helped boost low-performing schools. "The tests have everyone working hard. It hasn't been easy - but we have to be accountable."
NEW JERSEY'S 'POOR' ENGLISH STANDARD: By high school graduation, students should be able to "understand the study of literature and theories of literary criticism."