A better way to rank America's colleges

Parents and students deserve a program to create their own rankings.

July 19, 2007

Many of us in higher education dislike popular college rankings such as the annual academic beauty pageant from US News & World Report. But expecting them to go away is naive, and attempting to undermine them is unwise, since students and families could perceive that as petulant and paternalistic. Worse, it could seem as if we have something to hide.

What should higher education do instead? We should do what we do really well – educate and contribute new ideas. We dislike the rankings because they imply an objective ordering of a complex set of institutions, an ordering we then worry has an undue influence on students' choices. Our claim that useful data are already available through our reporting to the government and elsewhere is a little disingenuous, because those data are hard to access and not easily used to evaluate and compare institutions.

Instead, we in higher ed should work to build a better mousetrap, one that would decrease the importance of the existing rankings to students and families. No rankings are perfect, but given families' appetite for the judgments that rankings offer, supplying a sound alternative to the flawed commercial rankings is a wiser strategy for moderating their influence.

One way for higher ed to start would be to agree to send to a third-party nonprofit or foundation the same data that we already submit to US News and other rating organizations, along with other important data that we regularly report. The third party would then make all of it readily available on an easy-to-use public website.

Fortunately, efforts like this are under way at both the National Association of Independent Colleges and Universities (NAICU) and the Annapolis Group, a consortium of America's leading liberal arts colleges. The best possible result of these initiatives would include an easy-to-use program allowing prospective students and their parents to build, in effect, their own rankings. They could decide which variables they value and how much by assigning their own weights to the criteria they care about. If you don't care about SATs, give them a zero weight! If you care about small classes and the diversity of the student body, weight them heavily.
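To make that concrete, here is a minimal sketch of how such a build-your-own ranking could work. Everything in it – the criteria names, the sample data, the simple normalization – is a hypothetical illustration, not a description of any actual NAICU or Annapolis Group tool.

```python
# A minimal sketch of user-weighted rankings. All field names and the
# sample data are hypothetical placeholders.

def normalize(values):
    """Scale a list of raw values to the 0-1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank(colleges, weights):
    """Score each college as a weighted sum of its normalized criteria.

    `weights` maps a criterion name to the importance a family assigns
    it; a zero weight removes the criterion from the ranking entirely.
    """
    criteria = [c for c, w in weights.items() if w != 0]
    # Normalize each criterion across all colleges so that, say, SAT
    # scores and class-size percentages land on comparable scales.
    normed = {
        c: normalize([college[c] for college in colleges])
        for c in criteria
    }
    scores = []
    for i, college in enumerate(colleges):
        score = sum(weights[c] * normed[c][i] for c in criteria)
        scores.append((score, college["name"]))
    return sorted(scores, reverse=True)

# Hypothetical data for a family that ignores SATs but cares a lot
# about small classes and somewhat about student-body diversity.
colleges = [
    {"name": "College A", "sat_avg": 1400, "pct_small_classes": 70, "diversity_index": 0.55},
    {"name": "College B", "sat_avg": 1250, "pct_small_classes": 85, "diversity_index": 0.70},
    {"name": "College C", "sat_avg": 1500, "pct_small_classes": 40, "diversity_index": 0.40},
]
weights = {"sat_avg": 0, "pct_small_classes": 2.0, "diversity_index": 1.0}

for score, name in rank(colleges, weights):
    print(f"{name}: {score:.2f}")
```

With these weights, College B tops the list despite having the lowest SAT average in the toy data – exactly the kind of family-specific ordering that a single published ranking cannot provide.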

This do-it-yourself capacity would serve several functions. First, choosing the variables and weights themselves would demonstrate to students and families just how sensitive the rankings are to those choices.

A one-size ranking does not fit all, because students and families care about different things. Second, the capacity would allow families to tailor the rankings to their own particular concerns. US News has talked about making variable weights an option. But choosing variables and weights has to be central, not optional, in a ranking system that is useful to students and families.

Here is another way that one ranking system doesn't fit all colleges. What if a school doesn't use the SAT in making admissions decisions and therefore doesn't collect or report these data? In a new system, that school couldn't be ranked if a student chose a positive weight for the SATs. Students would know that the school doesn't value that piece of information. They could then run the rankings with other information (maybe class rank and other indicators of academic achievement), excluding the SAT, and see what those rankings look like. Alternatively, they could decide they actually do care about the average SATs of the student body and decide to look at other schools. Fair enough.
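Continuing the hypothetical sketch above (it reuses the rank function and the colleges list), one way such a system might handle a test-optional school is to set it aside rather than guess at a missing number:

```python
# Continuing the hypothetical sketch: a school that doesn't report a
# criterion the family has weighted positively is set aside and named,
# not silently penalized.

def rank_with_missing(colleges, weights):
    criteria = [c for c, w in weights.items() if w != 0]
    rankable, unrankable = [], []
    for college in colleges:
        (rankable if all(c in college for c in criteria) else unrankable).append(college)
    return rank(rankable, weights), [c["name"] for c in unrankable]

# A hypothetical test-optional school that reports no SAT average.
colleges.append({"name": "College D", "pct_small_classes": 90, "diversity_index": 0.65})

# With a positive SAT weight, College D can't be ranked ...
scores, left_out = rank_with_missing(
    colleges, {"sat_avg": 1.0, "pct_small_classes": 2.0, "diversity_index": 1.0})
print("Not rankable with these weights:", left_out)

# ... but zeroing the SAT weight brings it back into the list.
scores, left_out = rank_with_missing(
    colleges, {"sat_avg": 0, "pct_small_classes": 2.0, "diversity_index": 1.0})
for score, name in scores:
    print(f"{name}: {score:.2f}")
```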

US News has some data that colleges don't. In particular, the magazine conducts a reputation survey of college presidents and deans, which many people in higher education find extremely problematic. One option would be for higher ed leaders collectively to stop filling out the survey, so that US News didn't have the data either. I suspect that US News would then turn to others, such as CEOs of for-profit and nonprofit organizations, leaders in the public sector, and other employers, to fill out the survey. They could argue, after all, that they were surveying the "users" of our final product – college graduates.

Another option for higher ed leaders would be to keep filling out the reputation survey in exchange for receiving the results. With these data also included in the new rankings software, users could decide for themselves whether the reputation variable should play any role in their college decision. Reducing access to information seems counter to what we do. Demonstrating how to make good use of information seems a better strategy.

Rankings will always be limited in what they can tell consumers. Part of higher education's role in the rankings debate should be to remind students and their families that rankings are only one piece of information to take into account in deciding where to go to college. Intangibles will and should play a role in these decisions, but that doesn't mean we shouldn't also look at the tangibles.

Catharine Bond Hill is a higher education economist and the president of Vassar College.