College rankings don't tell how well students learn
It's not hard to tell when it's college-rankings season. That's when high school seniors rush to the newsstands to check out the number next to their top pick. Parents buy extra copies of the listings. College presidents whose stars are up make demure statements, while their less fortunate counterparts issue pronouncements about "ludicrous" lists that are "bordering on fraud."
Money Magazine, Time, Newsweek, Mother Jones, and others offer rankings. But last week, the most watched of the bunch - U.S. News & World Report - weighed in (see list, right).
The magazine's annual ranking, which most colleges watch very closely, spurred the usual horse-race debate and invective over reducing a college to a number. But this year, more attention has been given to a key issue that U.S. News is unable to quantify: how well students learn at different schools.
For the first time, a critical mass of research is being devoted to determining how to measure what students learn as undergraduates.
If successful, those efforts could soon yield rankings based on the quality of undergraduate teaching, not just the number of books in the library or the size of the endowment.
Much of the frustration and recrimination, researchers say, stems from the fact that rankings like those of U.S. News only hint at what's inside a school.
"Most people assume that if a young person goes to Harvard, Wellesley, or Swarthmore they will be better off than at a state public university," says Roger Benjamin, president of the Rand Council for Aid to Education in New York. "But what the institution adds to the individual student is never really answered. There's a national movement on this subject to find out."
At least a half-dozen organizations are working on tools to assess the reading, writing, and critical-thinking skills of undergraduates in what they call an objectively quantifiable fashion.
The Educational Testing Service in Princeton, N.J., has made "breakthroughs" in developing machine-scorable writing tests for higher education, observers say.
The National Center for Higher Education Management Systems in Boulder, Colo., is also working on higher-ed measurement tools.
The Rand Council for Aid to Education is conducting a pilot study of criteria that could be used to measure critical-thinking skills in tests - and ways to motivate students to do their best on such optional testing.
The National Center for Public Policy and Higher Education is developing student-outcomes criteria to include in its biennial 50-state report card on higher ed.
Researchers at Harvard and Yale are also working on statistical tools for assessing student learning in college.
One of the most advanced efforts is the National Survey of Student Engagement (NSSE), funded by the Pew Charitable Trusts. The survey asks students how much time they spend on activities like homework, reading, and writing.
Such efforts could eventually yield a single tool that permits fair comparisons of teaching quality across a range of institutions.
But participation could be a problem. So far, only about 470 of the nation's roughly 1,400 four-year institutions have signed up for or taken the NSSE, says George Kuh, the survey's director at Indiana University in Bloomington. Still, public demand may drive success. "When you have 70 percent of high school graduates going on to college, and college being indispensable to the American dream, that has an effect," says Russell Edgerton, director of the Pew Forum on Undergraduate Learning. "Demands and pressures for real accountability are rising."