Opinion, Berkeley Blogs

Ranking Fatigue is a Worldwide Phenomenon

By John Aubrey Douglass


Ranking fatigue has finally set in, and it's a worldwide phenomenon.

A number of high-profile law schools in the US recently announced they will no longer participate in one commercial ranking; Dutch universities have begun a move away from using rankings and citation indexes to evaluate the performance of universities and their faculty.

China, home of the Shanghai Jiao Tong global ranking of world universities, has done something similar in its recent ministerial edicts.

Some universities, admittedly usually brand-name institutions with previously established international profiles, are refusing to participate in rankings – meaning they are not supplying information requested by commercial rankers, many of whom simply find other ways to obtain basic information, such as faculty-to-student ratios and research funding, to keep those institutions in their ranking products.

At the same time, the COVID pandemic acted as a disruptor, with the rush to online courses and a dissipation of the normal life of universities.

In some form, the war in Ukraine and increased international tension have also brought into question, or at least prompted a reconsideration of, the strategies of universities that have focused so heavily on improving their rankings and, with them, their sense of prestige and importance.

Global rankings of universities play two main roles, besides generating income for the rankers, that critics do not often decipher.

First, largely in the US and pioneered by U.S. News & World Report, they serve as consumer guides for prospective students and, increasingly, as a source of press releases and bragging rights in a competitive market for talent.

Second, and more significantly, they serve as indicators for national ministries of the supposed quality and productivity of a nation’s universities and, in turn, feed an obsession among many universities to do better.

The first global ranking came out in 2003, generated by Shanghai Jiao Tong University at the request of the Chinese government. It and most other influential global rankings, such as the Times Higher Education and QS's commercial products, focus on a narrow band of research productivity as the primary marker of quality and notoriety.

Over the last two decades, global rankings of universities have emerged as a substantial force in resource allocation, shaping the behaviors of university management and faculty – if not so much in the US, then in much of the world.

The good is that the ranking race, and the response of ministries, led to incentives that reshaped the internal culture of many national university systems and institutions that historically had weak internal quality and accountability policies and practices.

The bad is that it induced practices and behaviors oriented toward a vague model of global competitiveness and a poorly designed marker of prestige that are not in the best interests of the nations those universities serve.

Most starkly, this includes demands that faculty and graduate students intent on an academic career publish in recognized international academic journals – feeding a startling growth in their number, citation inflation, and gaming to move up this or that ranking.

For decades many universities have focused on global rankings and their progeny, the concept of the World Class University (WCU), to drive academic planning and resource allocation – often under pressure from ministries to climb up this or that commercial ranking.

In turn, rankings companies have built profitable businesses, including consulting services on how universities can “improve” their research productivity as measured by the rankers’ metrics. This has all been accompanied by a cavalcade of books and articles that reinforce the value of rankings and boast of successful strategies.

In this myopic race, many of these universities have lost their way, diminishing their larger mission and role in society, and hindering innovation in areas such as student learning, creative forms of research that benefit stakeholders, and their public service role.

The fact is that citations are declining as a meaningful indicator of institutional and individual research productivity.

For one, the number of journals and journal articles keeps outstripping the estimated growth in actual scientific output, a trend accelerated in part by the proliferation of online journals.

Second, the drive for better rankings has induced an internalized academic behavior that seeks legitimacy via a blizzard of publications and a predilection for often needless citations. One can barely read a journal article in the social sciences anymore without facing a barrage of meaningless citations, often on the most mundane of observations.

In addition, impact calculations used by rankers assume that the pool of journals from which citations emerge will remain constant over time. But the pool is rapidly expanding, and with it the number of citations.

There is also gaming by universities and by researchers, and by journal editors.

One example: an article in Science describes a scandal involving Chinese scientists, most of whom are under great pressure to produce publications, who contract with “article mills” to produce journal-quality manuscripts. “A growing black market is peddling fake research papers, fake peer reviews, and even entirely fake research results to anyone who will pay.”

Studies have also shown that some journal editors, and their boards, encourage authors to cite articles in the same journal they are publishing in – in turn driving up the journal’s impact score based, you guessed it, on citations.

Ever in search of profits, commercial rankers have also sliced and diced the market to create new rankings. Where once the focus was on data collected internationally and with some reliability, more and more rankings depend on information provided by the institutions themselves.

New rankings of “social impact,” or of meeting the UN’s Sustainable Development Goals, are hopelessly flawed in their methodology; many if not most universities are gaming the data. These rankings rely largely on written responses from universities about the virtues of their programmatic efforts that, one might suspect, could lead to exaggeration and embellishment.

Some universities even contract with commercial rankers for help with strategies to improve their ranking – a clear conflict of interest. Many nation-states have also gotten into the game, creating their own rankings when their leading national universities did not fare well in the commercial global rankings.

In two books, I attempted to counter the deleterious impact of global rankings by outlining an alternative model, what I called the New Flagship University, devoted equally to teaching, research and public service. It has had some, if marginal, influence in diminishing the infatuation with rankings and gaming.

At one time I also argued that the path to diminishing the influence of the three or so dominant global rankings – which sway ministries and are now deeply ingrained in many university policies and behaviors – was to encourage the generation of more rankings.

And indeed, the ranking industry continues to look for ways to monetize the data it collects and to expand its consulting services. The thought was that more rankings would diffuse the market, with universities claiming to be in the top twenty of this or that ranking, making the decline in their credibility increasingly evident.

It is clear today that ranking products will remain consumer guides for students, and an influence on the global movement of academic talent.

But as influencers of university strategies, policies and behaviors, they are thankfully on the decline.

That also means a corresponding decline in the notion of a World Class University that blatantly ignores the many important roles and activities universities need to pursue to be productive and impactful institutions.