University rankings and the rise and fall of international education

University rankings helped universities recruit international students to finance increased research. With student numbers dropping, now is the time to put less emphasis on rankings and to reduce the risks of relying too heavily on international students.

When Australia’s borders closed, a long international student boom finally ended. It had been very lucrative for Australia’s universities. Between 2000 and 2018 international student revenue increased by nearly 500 per cent in real terms, to almost $9 billion.

Government policy changes partly explain why universities looked for new discretionary revenue sources, including international students. Additional funds were needed to top up part-funded, government-supported research projects, and to pay academic staff with combined teaching and research roles, which had become harder as public funding for teaching and research was separated and distributed on different criteria.

But these explanations are only part of the story. Research expenditure has boomed this century, nearly tripling in real terms between 2000 and 2018. This is spending on a scale well beyond what was needed to fill funding gaps left by domestic policy changes. Research could only have been financed at this level by profits on international students.

To explain why universities felt the need to recruit so many international students to finance their research a global factor needs to be considered: the rise of university rankings. Universities have long been concerned with status, but the establishment of the Academic Ranking of World Universities in 2003 (often called the Shanghai Jiao Tong rankings), and the Times Higher Education Rankings in 2004, put brutally (if spuriously) precise numbers on where each university stood. Other rankings followed.

Methodological critiques soon piled up, but they made no difference. By 2005, the University of Sydney had announced ranking aspirations. By 2007, the University of Melbourne’s annual report included information on performance against its rankings goals.

Target rankings are now common. In the Group of Eight universities, the University of Sydney wants to be first in Australia in the best-known rankings. The University of Melbourne wants to be consistently in the top 40 of the ARWU and the top 25 of the THE rankings. UNSW developed a composite index of different rankings, and aims to be in the top 50. The University of Queensland wants to be ‘well inside’ the top 75.

Rankings subdivided into regions, fields of research and university ages created more potential winners and losers. By the 2010s, Australian universities with no prospect of reaching the highest ranks were showing an interest.

The trouble is that many universities around the world hold similar ambitions. In her book on the global influence of university rankings, Ellen Hazelkorn found that most university leaders she surveyed were unhappy with their institution’s position. Even if they did not personally like rankings, they could not easily ignore them. As Hazelkorn’s book argues, the decisions of students, donors, governments and prospective academic staff can be influenced by rankings. In her survey, seven out of ten university leaders had taken action to improve their university’s ranking.

This competition for an inherently limited number of top ranks means that enhancing research quality and quantity is not enough. Universities must improve by more than their competitors. Rapid growth is necessary to get ahead. This is one reason why the Group of Eight universities, which have the most ambitious research targets, ended up highly exposed to the international student market.

As the chart below shows, the pre-2020 international student boom was largely a Group of Eight and private sector affair (although many non-university higher education provider enrolments are in pathway colleges leading to a range of public universities).

[Chart: Department of Education international student enrolment data]

For years, the Group of Eight universities enjoyed a virtuous cycle. International student surveys show that Chinese students are particularly motivated by rankings; their willingness to pay high fees helped universities expand their research and boost their rankings, which in turn attracted more Chinese students. The strategy succeeded on its own terms. Two Australian universities made the ARWU’s top 100 when it began in 2003. By 2019, seven ranked in the top 100.

The risk now is that the virtuous cycle turns vicious: fewer Chinese students means less research, which means lower rankings, which means fewer Chinese students. But we will have to see how this turns out, as universities in competitor countries are also taking a big COVID-19 hit. Rankings are based on relative, not absolute, research performance.

Rankings have not had a wholly malign influence. They helped convert profits from the (then) booming Chinese economy into research that will potentially benefit a wide range of people.

But rankings also distort research priorities in favour of fields that contribute to the metrics used, which generally are biased towards science over social science or the humanities. Australian topics are disadvantaged, since research on Australia is cited less than topics of global significance or concerning countries with larger populations.

Research excellence can be measured against a standard, as our domestic Excellence in Research for Australia exercise tries to do, rather than placing exaggerated significance on the often small relative differences that drive the rankings. As the chart below shows, in the top 100 a few exceptional universities have high absolute scores, followed by a long tail of institutions, including all Australian top 100 universities, with minor differences in their scores.

[Chart: ARWU top 100 scores]

While rankings attract publicity, universities will find it hard to ignore them completely. But universities with falling positions might give their ranks less prominence in their marketing, which only encourages students to believe that rankings are reliable guides to quality.

And if rankings become less significant, universities will not feel the need to increase their international student numbers indefinitely. Yes, it is good to have international students: not just their money, but also their contribution to university and Australian life, and the value of long-term personal connections between Australia and countries in our region.

But even before COVID-19 arrived, university international student practices were attracting plenty of concern and criticism on both financial risk and academic (English language standards, soft marking, cheating, influence of the Chinese Communist Party) grounds.

Nobody wanted university priorities to be re-oriented in the rapid and destructive way that is now happening. But in the medium to long term, less emphasis on global rankings, and some moderation in international student numbers, may not be all bad.

This is an edited version of a blog post that first appeared on Andrew Norton’s higher education blog.


Andrew Norton is Professor in the Practice of Higher Education Policy at the Centre for Social Research and Methods at the Australian National University. He was previously the Higher Education Program Director at the Grattan Institute. He is the author or co-author of many articles, reports and other publications on higher education issues. These include The cash nexus: how teaching funds research in Australian universities and a reference report on higher education trends and policies, Mapping Australian higher education.

