In a surprising turn of events, recent university ranking reports have placed Kenyatta University (KU) ahead of the mighty University of Nairobi (UoN) as the top institution in Kenya, sparking widespread debate among stakeholders.
While some hailed the shift as a reflection of Kenyatta University’s growing academic stature, others questioned the credibility and consistency of the ranking systems used.
The discussions have brought to light the complexities of university evaluations, stirring both celebration and skepticism across the academic landscape.
University rankings are systematic evaluations of higher education institutions that compare them based on a variety of parameters, such as academic performance, research output, faculty quality, and international reach.
They serve as tools for prospective students, faculty, and policymakers to gauge the relative strengths of institutions. Each ranking body uses its own formula, weighing different factors according to its priorities, resulting in diverse rankings that can vary widely between systems.
Love it or hate it, university rankings are increasingly shaping the landscape of higher education, influencing everything from student decisions to institutional policies.
These rankings provide a snapshot of how universities stack up against one another globally and regionally, based on a set of criteria. However, the methodologies behind these rankings often spark debate, as stakeholders question their accuracy, fairness, and impact.
Several prominent ranking bodies are taken seriously in this context, each judged on its own track record. The QS World University Rankings is run by Quacquarelli Symonds (QS).
This system is one of the most popular, with a focus on academic and employer reputation. It also emphasises internationalisation and the employability of graduates. QS, an independent ranking body, is widely recognised for its rigorous methodology and global surveys.
The Times Higher Education (THE) World University Rankings is known for its robust focus on research and teaching quality, including metrics on research influence and income from industry partnerships. THE is part of the Times Higher Education magazine and is recognised for its research-driven ranking approach.
The Academic Ranking of World Universities (ARWU) is another ranking body. Also known as the Shanghai Ranking, it is famous for its focus on research output, particularly publications in high-impact journals and awards such as Nobel Prizes. ARWU is independently produced by the Shanghai Ranking Consultancy and is often praised for its clear, research-oriented metrics.
U-Multirank is funded by the European Union. It allows users to create personalised rankings based on what matters most to them, offering flexibility across a range of criteria from teaching to international cooperation. U-Multirank is an independent project with EU backing, which adds credibility to its unique, user-driven approach.
University rankings rely on a mix of data sources, including expert opinions, self-reported data, and independent audits, to create a holistic view of institutional performance.
However, it’s important to remember that the ranking systems may prioritise different aspects of education, meaning the data collection process can vary widely.
Much of the underlying data is collected online, often from institutional websites. Although each ranking system has its own methodology, most are built around common core parameters, which are weighted differently.
Academic Reputation is often the most heavily weighted parameter, typically at 30–40%. It is based on surveys of academics around the world, who are asked to assess which institutions they believe are excelling in their field. For example, in the QS World University Rankings, academic reputation accounts for 40% of the total score.
Employer Reputation is weighted at 10–20%. Here, rankings rely on surveys of employers, who provide feedback on which universities produce the most competent graduates. This reflects the employability of a university’s alumni in the global job market.
Research Output and Impact, measured by the number of research publications, research citations, and overall impact, are crucial indicators of a university’s contribution to knowledge creation. Times Higher Education (THE) focuses heavily on research impact, with citations alone contributing up to 30% of an institution’s score. Weightings, however, can range between 20% and 40%.
Student-to-Faculty Ratio, with a 10–20% weighting, is another parameter. A lower student-to-faculty ratio suggests a better learning environment, as students are presumed to receive more individualised attention. This metric is often used to gauge teaching quality.
With a weighting of 5–10%, Internationalisation (Faculty and Students) is an important parameter. The proportion of international faculty and students reflects a university’s global appeal and diversity. Higher international representation is viewed positively, as it enriches the campus environment and encourages cross-cultural exchange.
Industry Income and Partnerships is equally important. Universities that collaborate with industry and generate income through innovation and applied research tend to score well in this category. Rankings such as THE give weight to an institution’s ability to attract funding from the private sector, with the weighting ranging between 2% and 10%.
Faculty Quality, weighted at 10–15%, is a crucial parameter. The profile of lecturers, in terms of professorships and PhD holders, is highly relevant in university ranking.
Some rankings include the number of awards, such as Nobel Prizes or Fields Medals, won by faculty or alumni to reflect the academic excellence of the institution.
Citations per Faculty attracts a weighting of 10–20%. It measures research output and impact by considering the average number of citations per faculty member’s published work.
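To make the weighting idea concrete, here is a minimal sketch of how a composite score might be computed from normalised indicator scores. It is written in Python purely for illustration; the weights, indicator names, and university scores below are hypothetical assumptions loosely echoing the ranges discussed above, not the actual formula of QS, THE, ARWU, or any other ranking body.

```python
# Illustrative sketch of a weighted composite ranking score.
# The weights and scores below are hypothetical examples, not the
# official methodology of any ranking body.

# Hypothetical weights, roughly echoing the ranges discussed above;
# they must sum to 1.0 for a weighted average.
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "citations_per_faculty": 0.20,
    "student_to_faculty_ratio": 0.20,
    "internationalisation": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Return a weighted average of normalised (0-100) indicator scores."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"Missing indicator scores: {missing}")
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Made-up indicator scores for two hypothetical universities.
university_a = {
    "academic_reputation": 72.0,
    "employer_reputation": 65.0,
    "citations_per_faculty": 58.0,
    "student_to_faculty_ratio": 61.0,
    "internationalisation": 47.0,
}
university_b = {
    "academic_reputation": 68.0,
    "employer_reputation": 70.0,
    "citations_per_faculty": 66.0,
    "student_to_faculty_ratio": 55.0,
    "internationalisation": 52.0,
}

if __name__ == "__main__":
    for name, scores in [("University A", university_a),
                         ("University B", university_b)]:
        print(f"{name}: composite score = {composite_score(scores):.1f}")
```

Because each ranking body picks its own weights, shifting even a little weight from, say, academic reputation to citations can reverse the order of the two hypothetical institutions above, which is one reason the same universities can occupy very different positions in different ranking systems.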
With 79 universities across Kenya, there is a growing call for the Commission for University Education (CUE) to develop standardised metrics, based on the institutional audits it already carries out, to rank the nation’s higher education institutions.
Advocates argue that such metrics would encourage a competitive spirit centred on quality and best practices, allowing universities to benchmark their progress within a uniquely Kenyan context.
Establishing these rankings would provide all universities, from the oldest institutions to the newly established, a local framework for assessing performance. This initiative could reduce the reliance on international rankings, which may not fully capture the strengths and challenges specific to Kenya, while promoting a fair, inclusive approach to quality improvement.
However, university rankings, often used as a benchmark for institutional prestige, are increasingly facing criticism for their shortcomings. While they provide a snapshot of perceived academic quality, rankings tend to prioritise metrics like research output, faculty credentials, and international recognition, often at the expense of other critical factors. These systems frequently overlook the quality of teaching, student satisfaction, and the actual impact of education on local communities.
Additionally, rankings disproportionately favour well-established, resource-rich universities, putting smaller or less affluent institutions at a disadvantage. This creates a skewed perception of academic excellence, where institutions focus on improving metrics that cater to rankings rather than addressing the genuine educational needs of students.
Moreover, the heavy emphasis on research can pressure faculty to prioritise publications over teaching, further compromising the quality of education. Rankings, while informative, often fail to capture the full spectrum of what makes a university truly successful and valuable to society.
In Kenya, therefore, the recent rankings that put KU ahead of UoN are, more likely than not, a product of this era of competition, in which every institution is working hard to be in the limelight in a bid to attract more students and the best faculty.
It is indisputable that in the past two decades KU has invested heavily in some of the ranking parameters, especially under the leadership of its transformative former Vice-Chancellor, Prof. Olive Mugenda, and these investments could well have midwifed its new position.
Nevertheless, if current trends are anything to go by, the seven oldest public universities in Kenya, namely Moi, UoN, Egerton, KU, Jomo Kenyatta University of Agriculture and Technology (JKUAT), Masinde Muliro University of Science and Technology (MMUST), and Maseno, are likely to be overtaken by younger, upcoming institutions, notwithstanding local factors such as resource constraints.
As George Bernard Shaw put it, “Progress is impossible without change, and those who cannot change their minds cannot change anything.” Thus, shifts in university rankings, rather than being seen as setbacks for traditional institutions, can be viewed as signs of progress within the sector, as emerging institutions strive for excellence and visibility.
University rankings can be powerful tools for benchmarking institutional success, but they should be interpreted with caution.
Prospective students and faculty should consider the parameters that align with their priorities rather than relying solely on overall rankings. Ultimately, a holistic view that takes into account both strengths and weaknesses is key to understanding the true value of any university.