New world university ranking puts Harvard back on top
Harvard snagged the No. 1 spot in a new world university ranking that puts the United States at the head of the pack in higher education excellence.
Boston — An international university ranking released last week bumped Harvard University from its No. 1 spot and sent other top US universities tumbling. That fueled speculation about whether the US university system is in decline.
But Thursday, a different university ranking put Harvard back at the top, followed by four other American universities. One analysis ran under the headline: “The US is the best of the best.”
The earlier rankings had 53 US schools in the top 200. Thursday’s ranking had 72.
So how are these rankings decided? Which ranking is better? And which country is better?
Quacquarelli Symonds (QS), a British higher education consulting firm, produced the earlier rankings. The ones released Thursday were produced by Times Higher Education, the UK's leading higher education news publication, and Thomson Reuters.
Those two, as well as Shanghai Jiao Tong University, produce the most influential international university rankings out there. But they don't all use the same criteria – and they weight the criteria they do share differently.
Until this year, QS and Times worked in tandem, producing one ranking. Since the split, Times has implied that the QS rankings are overly subjective.
The dominant QS criterion is a university’s reputation, as evaluated by academics. It is weighted at 40 percent. In the Times rankings, a survey of teaching reputation is the closest thing to that QS criterion, and it is weighted at only 15 percent.
Times' measures of research influence, output, revenue, and reputation account collectively for 62.5 percent of the ranking. Classroom factors such as student-faculty ratios, academic awards, and faculty salaries, along with the school's reputation, make up 30 percent. Research income from industry and the international makeup of the faculty and students also factor in slightly.
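The weighting schemes described above amount to a weighted average of normalized indicator scores. A minimal sketch of that arithmetic, using the Times weights quoted in the article (the example scores are made up, and the 2.5/5 split of the remaining 7.5 percent between industry income and international mix is an assumption, not a figure from the article):

```python
# Illustrative only: combine normalized indicator scores (0-100) into a
# composite using the Times Higher Education weights quoted above.
# Example scores are hypothetical; real indicators are normalized from
# survey and bibliometric data before weighting.

TIMES_WEIGHTS = {
    "research": 62.5,          # research influence, output, revenue, reputation
    "teaching": 30.0,          # classroom factors plus teaching reputation
    "industry_income": 2.5,    # assumed split of the remaining 7.5 percent
    "international_mix": 5.0,  # assumed split of the remaining 7.5 percent
}

def composite_score(scores: dict, weights: dict) -> float:
    """Weighted average of indicator scores; weights are percentages."""
    total_weight = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total_weight

example = {
    "research": 90.0,
    "teaching": 85.0,
    "industry_income": 70.0,
    "international_mix": 60.0,
}
print(round(composite_score(example, TIMES_WEIGHTS), 1))  # 86.5
```

The same function with QS-style weights (reputation at 40 percent) would reward a long-standing reputation far more heavily, which is the crux of the methodological dispute the article describes.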
Philip Altbach, the director of the Center for International Higher Education at Boston College and a member of Times’ board of editors, notes that the schools take the rankings seriously because of competitive concerns and because their ranking can affect public and private funding. He says that placing less emphasis on reputation is a positive thing, and argues that the Times methodology offers a sharper picture of a university's capabilities. Mr. Altbach was consulted when Times was deciding how best to evaluate the universities.
Reputation should be a factor, but one concern about weighting it heavily is that there could be a lag – a school could be ranked on a reputation that is no longer fully deserved, says Ben Wildavsky, a senior fellow at the Kauffman Foundation and author of the book “The Great Brain Race: How global universities are reshaping the world.”
Global rankings are a rapidly growing phenomenon, Mr. Wildavsky says. They began in Shanghai in 2003 because China wanted to compete with the world’s best schools, but had no idea how its universities stacked up. Since then, the global university system has become a marketplace, and in order for a market to function, it needs information. That’s where the rankings come in.
In the US, the U.S. News & World Report rankings still attract much more attention, and Americans tend to stay in the country for school – at least partly because American universities still dominate global higher education, Wildavsky says.
Countries want to see their universities rise in the rankings because of the link between education and prosperity. Governments sometimes concentrate their resources on a smaller number of schools to boost them in the rankings and raise the country’s overall academic reputation, Wildavsky says. Rankings can also help schools pinpoint areas where they lag.
But neither QS nor Times – nor any of the others that have attempted to measure universities’ quality – has figured out how to measure some of the key indicators of educational quality, particularly instructional quality, Altbach says.
In Wildavsky’s opinion, those key indicators are research measures, how much students’ knowledge and abilities increase while enrolled, teaching effectiveness, and workforce readiness. The Organization for Economic Cooperation and Development is studying ways to measure many of those factors.
“Rankings, which are highly imperfect, which have flaws, have nevertheless become the de facto measure of quality in the global university marketplace,” Wildavsky says. “If we can get the criteria right, we can really improve the universities.”