Professors in the United States, it seems, need a crash course in a key scholarly concept: evaluation.
To evaluate is to judge. But a new paper published by the American Academy of Arts and Sciences in Cambridge, Mass., says that's not what's happening on campuses - particularly not on Ivy League ones. How else to explain the fact that at Harvard, for instance, A's were awarded to 46 percent of students in 1996 (versus 22 percent in 1966), and 82 percent of graduates received honors? Or why only 10 to 20 percent of students at all types of colleges now receive less than a B minus? Not to mention that the number of A's across higher education jumped to 26 percent in 1993 from 7 percent in 1969.
For those puzzled by the stream of superlatives pouring forth from the ivory tower, this paper (http://www.amacad.org) reminds readers that appearances can be deceiving. Its purpose, though, is to serve warning. The effect of ceding the high ground on evaluation, the authors say, is to fail those who truly deserve A's, corrode employers' faith in profiles of graduates, and force reliance on an "old boy and girl network" that can offer accurate assessment - but only to members of the club.
The authors date the problem to the Vietnam era, when a bad grade could end a draft deferment, and follow it to a growing consumer mentality that equates paying lots of money with getting good results. They target attitudes that link low performance not to ability or effort but to poor self-esteem that can be remedied by easy grading.
So, do some extra A's really matter? To the authors, a "system that fears candor is demoralizing"; it undermines the "central values of academic life." Grade inflation, which fails to tell it like it is, also fails to prepare students for the future - and the cost, they argue, is high.