Paper-to-Podcast

Paper Summary

Title: International University Rankings: For Good or Ill?

Source: Higher Education Policy Institute (27 citations)

Authors: Bahram Bekhradnia

Published Date: 2016-12-01

Podcast Transcript

Hello, and welcome to paper-to-podcast! Today, we'll be discussing a fascinating paper that I have read 94% of, titled "International University Rankings: For Good or Ill?" by Bahram Bekhradnia and published back in 2016. Be prepared to rethink everything you know about university rankings!

In this paper, the author examines international university rankings, focusing on the Times Higher Education (THE) and Quacquarelli Symonds (QS) rankings, in which measures of research performance account for more than 85% of the total weighting. As a result, universities seeking to improve their position in these rankings focus on research rather than teaching or outreach.

But wait, there's more! The paper reveals flaws in the data collection methods used by these rankings. Universities often supply their own data without proper auditing, leading to inconsistencies and inaccuracies. And get this: when a university doesn't provide data, QS even engages in "data scraping" from various sources, raising further concerns about data quality.

The author suggests that presenting the ranking results in bands or radar diagrams could be less misleading than ordinal lists. This way, small differences between universities wouldn't be exaggerated, and we'd get a more rounded view of an institution's strengths and weaknesses.

The paper delves into the methods and issues associated with international university rankings, comparing the criteria and weights used in each ranking system. The research also evaluates the impact of these rankings on universities and government policies, touching on problems with data quality and reputational surveys.

Positive aspects of the research include its critical examination of international university rankings and thorough analysis of the methodologies used. The author identifies limitations and shortcomings, shedding light on biases and inaccuracies that can arise from relying solely on such systems to evaluate universities.

However, the paper also exposes some serious problems: rankings rely on non-comparable data, universities provide their own data without proper auditing, and the reputation surveys are methodologically flawed. Together, these issues undermine the credibility and usefulness of international university rankings.

So, what can we do with this research? Universities can focus on improving important aspects like teaching quality and student support, rather than solely prioritizing research performance for the sake of moving up in rankings. Policymakers can design more comprehensive and fair assessment systems that capture the diverse missions and strengths of higher education institutions.

As for prospective students and their families, they can use the findings of this research to make more informed decisions when choosing a university. By being aware of the limitations and biases in international rankings, they can prioritize factors that are truly important to them, such as teaching quality and campus environment, rather than relying solely on a university's position in a ranking list.

In conclusion, this paper highlights the need for a more holistic approach to measuring and comparing university performance. It serves as a valuable resource for educators, policymakers, and students alike, encouraging them to question the validity of these rankings and to seek better ways of evaluating and comparing universities.

Thank you for joining us on this enlightening journey through university rankings! You can find this paper and more on the paper2podcast.com website. Until next time, keep questioning those rankings!

Supporting Analysis

Findings:
The paper discusses international university rankings and reveals some fascinating insights. Firstly, it highlights that more than 85% of the weighting attached to the Times Higher Education (THE) and Quacquarelli Symonds (QS) rankings relates to research performance. This means that for a university to improve its position in these rankings, it must focus on research rather than other aspects, such as teaching or outreach. This focus on research could be detrimental to the education of students and to the overall function of a university.

Secondly, the paper exposes flaws in the data collection methods used in these rankings. Universities often provide their own data without proper auditing, leading to inconsistencies and inaccuracies. QS even engages in "data scraping" from various sources when a university doesn't provide data, raising further concerns about data quality.

Lastly, the paper suggests that presenting the ranking results in bands or radar diagrams would be less misleading than ordinal lists. Ordinal lists exaggerate small differences between universities and give false impressions of superiority. Bands or radar diagrams would provide a more rounded view of an institution's strengths and weaknesses, allowing for more informed decision-making by students and policymakers.
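To make the presentation point concrete, here is a minimal Python sketch contrasting an ordinal list with the banding the author proposes. The university names and scores are invented for illustration; they are not taken from the paper.

# Minimal sketch with made-up scores: ordinal ranking vs. banding.

scores = {
    "University A": 91.8,
    "University B": 91.6,
    "University C": 91.5,
    "University D": 84.2,
    "University E": 83.9,
}

# Ordinal list: every institution gets a distinct rank, even when the
# underlying scores are nearly indistinguishable (91.8 vs. 91.5).
ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ordered, start=1):
    print(f"#{rank}: {name} ({score})")

# Banding: group scores into 5-point bands, so trivial gaps no longer
# read as differences in quality.
def band(score, width=5):
    lower = int(score // width) * width
    return f"{lower}-{lower + width}"

for name, score in ordered:
    print(f"{name}: band {band(score)}")

Run as written, the ordinal list separates University A from University C by two places on a 0.3-point gap, while the banded view places all three top institutions in the same 90-95 band. That gap-free difference in presentation is exactly the exaggeration the paper warns against.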
Methods:
The paper analyzes the methods and issues associated with international university rankings, focusing on four main ranking systems: Times Higher Education (THE), Quacquarelli Symonds Ltd (QS), Academic Ranking of World Universities (ARWU), and U-Multirank. The author examines the dimensions, indicators, and weights used by each ranking system, identifying commonalities and differences. The research involves a comparison of the criteria and weights used in each ranking system and evaluates the impact of these rankings on universities and government policies. It also explores the problems associated with the data used in rankings, such as the reliance on institutions to provide their own data, data scraping, and reputational surveys. Furthermore, the paper assesses the presentation of ranking results, highlighting the issues with ordinal lists and proposing alternative methods for presenting results, like banding and radar diagrams. The author also considers the potential improvements that could be made to ranking methodologies, such as broadening criteria beyond research performance and improving data quality and validation.
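As a rough illustration of the weighted-sum construction these ranking systems share, here is a hypothetical Python sketch. The indicator names and weights are invented to mirror the paper's point that research-related measures dominate; they are not the actual THE, QS, ARWU, or U-Multirank weights.

# Hypothetical sketch of a composite ranking score: a weighted sum of
# normalised indicator scores. Names and weights are invented.

WEIGHTS = {
    "research_reputation": 0.40,  # research-related
    "citations_per_paper": 0.30,  # research-related
    "papers_per_academic": 0.15,  # research-related
    "staff_student_ratio": 0.10,
    "international_mix": 0.05,
}

def composite_score(indicators):
    """Weighted sum of indicator scores, each normalised to a 0-100 scale."""
    return sum(weight * indicators.get(name, 0.0)
               for name, weight in WEIGHTS.items())

example = {
    "research_reputation": 80.0,
    "citations_per_paper": 75.0,
    "papers_per_academic": 70.0,
    "staff_student_ratio": 95.0,
    "international_mix": 60.0,
}
print(composite_score(example))  # 77.5

Under a scheme like this, 85% of the weight sits on the three research indicators, so even a large improvement in the staff-student ratio barely moves the composite score. That, in miniature, is why the paper argues such rankings steer universities towards research at the expense of teaching.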
Strengths:
The most compelling aspects of the research are its critical examination of international university rankings and its thorough analysis of the methodologies behind them. By identifying the limitations and shortcomings of these rankings, the author sheds light on the biases and inaccuracies that can arise from relying on such systems to evaluate universities. The best practices followed include a comprehensive comparison of different ranking systems, a detailed examination of the criteria and weights used in generating the rankings, and a clear illustration of the issues related to data integrity and presentation. The paper also explores alternative ways of presenting university rankings, such as banding and radar diagrams, which could provide a more accurate and informative picture of an institution's strengths and weaknesses. Overall, the research highlights the need for a more holistic approach to measuring and comparing university performance, one that considers not only research output but also teaching quality, student support, and other important aspects of higher education. This critical examination of current ranking systems serves as a valuable resource for educators, policymakers, and students alike, encouraging them to question the validity of these rankings and to seek better ways of evaluating and comparing universities.
Limitations:
The limitations the paper identifies concern the rankings themselves: the reliance on non-comparable data, universities providing their own data without proper auditing, and the use of flawed reputation surveys. The lack of internationally comparable data, other than for research, makes it difficult to create accurate and meaningful rankings. Universities supplying their own data without proper auditing can lead to inaccuracies and inconsistencies in the rankings. Furthermore, the practice of "data scraping" by some ranking bodies can aggravate these issues, as it involves collecting data from various sources without control over their accuracy or adherence to standard definitions. The reputation surveys used in some rankings are methodologically flawed and tend to measure research performance rather than providing a comprehensive view of university quality. Additionally, the focus on research performance in most rankings can drive universities and governments to prioritize research over other aspects of higher education, such as teaching and community outreach. Overall, these issues undermine the credibility and usefulness of international university rankings.
Applications:
The research on international university rankings has potential applications in informing higher education institutions, policymakers, and prospective students about the true value and limitations of such rankings. By understanding the shortcomings and biases present in current ranking methodologies, universities can focus on improving aspects of their performance that truly matter, such as teaching quality, student support, and community engagement, rather than solely prioritizing research performance for the sake of moving up in rankings. Policymakers can use the insights from this research to design more comprehensive and fair assessment systems that capture the diverse missions and strengths of higher education institutions, rather than relying on flawed international rankings as the sole indicator of success. This could lead to better funding allocation and support for universities that excel in areas not currently covered by popular ranking systems. Prospective students and their families can use the findings of this research to make more informed decisions when choosing a university. By being aware of the limitations and biases in international rankings, they can prioritize factors that are truly important to them, such as teaching quality, student support, and campus environment, rather than relying solely on a university's position in a ranking list.