Category: Rankings

  • One Aus university in top 50 THE rankings – Campus Review

    Australia’s universities have charged up the global leaderboard in a year where many of their international peers lost ground, according to a world-renowned tertiary rankings list.

  • Breaking Away from Rankings – Edu Alliance Journal

    The Growing Movement to Reform Research Assessment and Rankings

    By Dean Hoke, September 22, 2025: For the past fifteen years, I have been closely observing what can only be described as a worldwide fascination—if not obsession—with university rankings, whether produced by Times Higher Education, QS, or U.S. News & World Report. In countless conversations with university officials, a recurring theme emerges: while most acknowledge that rankings are often overused by students, parents, and even funders when making critical decisions, few deny their influence. Nearly everyone agrees that rankings are a “necessary evil”—flawed, yet unavoidable—and many institutions still direct significant marketing resources toward leveraging rankings as part of their recruitment strategies.

    It is against this backdrop of reliance and ambivalence that recent developments, such as Sorbonne University’s decision to withdraw from THE rankings, deserve closer attention.

    In a move that signals a potential paradigm shift in how universities position themselves globally, Sorbonne University recently announced it will withdraw from the Times Higher Education (THE) World University Rankings starting in 2026. This decision isn’t an isolated act of defiance—Utrecht University had already left THE in 2023, and the Coalition for Advancing Research Assessment (CoARA), founded in 2022, has grown to 767 members by September 2025. Together, these milestones reflect a growing international movement that questions the very foundations of how we evaluate academic excellence.

    The Sorbonne Statement: Quality Over Competition

    Sorbonne’s withdrawal from THE rankings isn’t merely about rejecting a single ranking system. It appears to be a philosophical statement about what universities should stand for in the 21st century. The institution has made it clear that it refuses to be defined by its position in what it sees as commercial ranking matrices that reduce complex academic institutions to simple numerical scores.

    Understanding CoARA: The Quiet Revolution

    The Coalition for Advancing Research Assessment represents one of the most significant challenges to traditional academic evaluation methods in decades. Established in 2022, CoARA has grown rapidly to include 767 member organizations as of September 2025. This isn’t just a European phenomenon—though European institutions have been early and enthusiastic adopters.

    The Four Pillars of Reform

    CoARA’s approach centers on four key commitments that directly challenge the status quo:

    1. Abandoning Inappropriate Metrics. The agreement explicitly calls for abandoning “inappropriate uses of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index.” This represents a direct assault on the quantitative measures that have dominated academic assessment for decades.

    2. Avoiding Institutional Rankings. Perhaps most relevant to the Sorbonne’s decision, CoARA commits signatories to “avoid the use of rankings of research organisations in research assessment.” This doesn’t explicitly require withdrawal from ranking systems, but it does commit institutions not to use these rankings in their own evaluation processes.

    3. Emphasizing Qualitative Assessment. The coalition promotes qualitative assessment methods, including peer review and expert judgment, over purely quantitative metrics. This represents a return to more traditional forms of academic evaluation, albeit updated for modern needs.

    4. Responsible Use of Indicators. Rather than eliminating all quantitative measures, CoARA advocates for the responsible use of indicators that truly reflect research quality and impact, not simply output volume or citation counts.

    European Leadership

    Top 10 Countries by CoARA Membership:

    The geographic distribution of CoARA members tells a compelling story about where resistance to traditional ranking systems is concentrated. As the chart shows, European countries dominate participation, led by Spain and Italy, with strong engagement also from Poland, France, and several Nordic countries. This European dominance isn’t accidental—the region’s research ecosystem has long been concerned about the Anglo-American dominance of global university rankings and the way these systems can distort institutional priorities.

    Prestigious European universities like ETH Zurich, the University of Zurich, Politecnico di Milano, and the University of Manchester are among the members, lending credibility to the movement. However, the data reveals that the majority of CoARA members (84.4%) are not ranked in major global systems like QS, which adds weight to critics’ arguments about institutional motivations.

    CoARA Members Ranked vs Not Ranked in QS:

    The Regional Divide: Participation Patterns Across the Globe

    What’s particularly striking about the CoARA movement is the relative absence of U.S. institutions. While European universities have flocked to join the coalition, American participation remains limited. This disparity reflects fundamental differences in how higher education systems operate across regions.

    American Participation: The clearest data we have on institutional cooperation with ranking systems comes from the United States. Despite some opposition to rankings, 78.1% of the nearly 1,500 ranked institutions returned their statistical information to U.S. News in 2024, showing that the vast majority of American institutions remain committed to these systems. However, there have been some notable American defections. Columbia University is among the latest institutions to withdraw from U.S. News & World Report college rankings, joining a small but growing list of American institutions questioning these systems. Yet these remain exceptions rather than the rule.

    European Engagement: While we don’t have equivalent participation-rate statistics for European institutions, we can observe their engagement in other ways. Some 688 universities appear in the QS Europe ranking for 2024, and 162 institutions from Northern Europe alone appear in the QS World University Rankings: Europe 2025. However, European institutions have simultaneously embraced the CoARA movement in large numbers, suggesting a more complex relationship with ranking systems—continued participation alongside philosophical opposition.

    Global Participation Challenges: For other regions, comprehensive participation data is harder to come by. The Arab region has 115 entries across five broad areas of study in QS rankings, but these numbers reflect institutional inclusion rather than active cooperation rates. It’s important to note that some ranking systems use publicly available data regardless of whether institutions actively participate or cooperate with the ranking organizations.

    This data limitation itself is significant—the fact that we have detailed participation statistics for American institutions but not for other regions may reflect the more formalized and transparent nature of ranking participation in the U.S. system versus other global regions.

    American universities, particularly those in the top tiers, have largely benefited from existing ranking systems. The global prestige and financial advantages that come with high rankings create powerful incentives to maintain the status quo. For many American institutions, rankings aren’t just about prestige—they’re about attracting international students, faculty, and research partnerships that are crucial to their business models.

    Beyond Sorbonne: Other Institutional Departures

    Sorbonne isn’t alone in taking action. Utrecht University withdrew from THE rankings earlier, citing concerns about the emphasis on scoring and competition. These moves suggest that some institutions are willing to sacrifice prestige benefits to align with their values. Interestingly, the Sorbonne has embraced alternative ranking systems such as the Leiden Open Rankings, which highlight its impact.

    The Skeptics’ View: Sour Grapes or Principled Stand?

    Not everyone sees moves like Sorbonne’s withdrawal as a principled stand. Critics argue that institutions often raise philosophical objections only after slipping in the rankings. As one university administrator put it: “If the Sorbonne were doing well in the rankings, they wouldn’t want to leave. We all know why self-assessment is preferred. ‘Stop the world, we want to get off’ is petulance, not policy.”

    This critique resonates because many CoARA members are not major players in global rankings, which fuels suspicion that reform may be as much about strategic positioning as about values. For skeptics, the call for qualitative peer review and expert judgment risks becoming little more than institutions grading themselves or turning to sympathetic peers.

    The Stakes: Prestige vs. Principle

    At the heart of this debate is a fundamental tension: Should universities prioritize visibility and prestige in global markets, or focus on measures of excellence that reflect their mission and impact? For institutions like the Sorbonne, stepping away from THE rankings is a bet that long-term reputation will rest more on substance than on league table positions. But in a globalized higher education market, the risk is real—rankings remain influential signals to students, faculty, and research partners.

    Rankings also exert practical influence in ways that reformers cannot ignore. Governments frequently use global league tables as benchmarks for research funding allocations or as part of national excellence initiatives. International students, particularly those traveling across continents, often rely on rankings to identify credible destinations, and faculty recruitment decisions are shaped by institutional prestige. In short, rankings remain a form of currency in the global higher education market.

    This is why the decision to step away from them carries risk. Institutions like the Sorbonne and Utrecht may gain credibility among reform-minded peers, but they could also face disadvantages in attracting international talent or demonstrating competitiveness to funders. Whether the gamble pays off will depend on whether alternative frameworks such as CoARA’s assessment principles or ROI-based rankings achieve sufficient recognition to guide these critical decisions.

    The Future of Academic Assessment

    The CoARA movement and actions like Sorbonne’s withdrawal represent more than dissatisfaction with current ranking systems—they highlight deeper questions about what higher education values in the 21st century. If the movement gains further momentum, it could push institutions and regulators to diversify evaluation methods, emphasize collaboration over competition, and give greater weight to societal impact.

    Yet rankings are unlikely to disappear. For students, employers, and funders, they remain a convenient—if imperfect—way to compare institutions across borders. The practical reality is that rankings will continue to coexist with newer approaches, even as reform efforts reshape how universities evaluate themselves internally.

    Alternative Rankings: The Rise of Outcome-Based Assessment

    While CoARA challenges traditional rankings, a parallel trend focuses on outcome-based measures such as return on investment (ROI) and career impact. Georgetown University’s Center on Education and the Workforce, for example, ranks more than 4,000 colleges on the long-term earnings of their graduates. Its findings tell a very different story than research-heavy rankings—Harvey Mudd College, which rarely appears at the top of global research lists, leads ROI tables with graduates projected to earn $4.5 million over 40 years.
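
    To make the idea of an earnings-based ROI measure concrete, here is a minimal sketch of a net-present-value calculation over a 40-year horizon. The inputs and the 2% discount rate are illustrative assumptions for a hypothetical graduate, not Georgetown’s actual methodology or data.

    ```python
    # Illustrative sketch of an earnings-based ROI measure (hypothetical inputs).
    # This is not Georgetown CEW's actual methodology; it only shows the general
    # idea of discounting future earnings and netting out the cost of a degree.

    def discounted_earnings(annual_earnings: float, years: int, rate: float) -> float:
        """Discount a constant annual earnings stream back to present value."""
        return sum(annual_earnings / (1 + rate) ** t for t in range(1, years + 1))

    def simple_roi(annual_earnings: float, net_cost: float,
                   years: int = 40, rate: float = 0.02) -> float:
        """Discounted lifetime earnings minus the net cost of attendance."""
        return discounted_earnings(annual_earnings, years, rate) - net_cost

    if __name__ == "__main__":
        # Hypothetical graduate: $110,000 average annual earnings, $120,000 net cost.
        print(f"Illustrative 40-year ROI: ${simple_roi(110_000, 120_000):,.0f}")
    ```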

    Other outcome-oriented systems, such as The Princeton Review’s “Best Value” rankings, emphasize affordability, employment, and post-graduation success. They combine measures of financial aid, academic rigor, and post-graduation outcomes to highlight institutions that deliver strong returns for students relative to their costs. Public universities often rise in these rankings, as do specialized colleges that may not feature prominently in global research tables. Together, these approaches represent a pragmatic counterbalance to CoARA’s reform agenda, showing that students and employers increasingly want measures of institutional value beyond research metrics alone.

    Institutions like the Albany College of Pharmacy and Health Sciences illustrate this point. Although the college is virtually invisible in global rankings, its graduates report median salaries of $124,700 just ten years after graduation, placing it among the best in the nation on ROI measures. For students and families making education decisions, data like this often carries more weight than a university’s position in QS or THE.

    Together with Georgetown’s ROI rankings and the example of Harvey Mudd College, these cases suggest that outcome-based rankings are not marginal alternatives—they are becoming essential tools for understanding institutional value in ways that matter directly to students and employers.

    Rankings as Necessary Evil: The Practical Reality

    Still, reform has to operate within this practical reality. If the movement gains momentum, we could see:

    • Diversification of evaluation methods, with different regions and institution types developing assessment approaches that align with their specific values and goals

    • Reduced emphasis on competition between institutions in favor of collaboration and shared improvement

    • Greater focus on societal impact rather than purely academic metrics

    • More transparent and open assessment processes that allow for a better understanding of institutional strengths and contributions

    Conclusion: Evolution, Not Revolution

    The Coalition for Advancing Research Assessment and decisions like Sorbonne’s withdrawal from THE rankings represent important challenges to how we evaluate universities, but they signal evolution rather than revolution. Instead of the end of rankings, we are witnessing their diversification. ROI-based rankings, outcome-focused measures, and reform initiatives like CoARA now coexist alongside traditional global league tables, each serving different audiences.

    Skeptics may dismiss reform as “sour grapes,” yet the concerns CoARA raises about distorted incentives and narrow metrics are legitimate. At the same time, American resistance reflects both philosophical differences and the pragmatic advantages U.S. institutions enjoy under current systems.

    The most likely future is a pluralistic landscape: research universities adopting CoARA principles internally while maintaining a presence in global rankings for visibility; career-focused institutions highlighting ROI and student outcomes; and students, faculty, and employers learning to navigate multiple sources of information rather than relying on a single hierarchy.

    In an era when universities must demonstrate their value to society, conversations about how we measure excellence are timely and necessary. Whether change comes gradually or accelerates, the one-size-fits-all approach is fading. A more complex mix of measures is emerging—and that may ultimately serve students, institutions, and society better than the systems we are leaving behind. In the end, what many once described to me as a “necessary evil” may persist—but in a more balanced landscape where rankings are just one measure among many, rather than the single obsession that has dominated higher education for so long.


    Dean Hoke is Managing Partner of Edu Alliance Group, a higher education consultancy. He formerly served as President/CEO of the American Association of University Administrators (AAUA). Dean has worked with higher education institutions worldwide. With decades of experience in higher education leadership, consulting, and institutional strategy, he brings a wealth of knowledge on colleges’ challenges and opportunities. Dean is the Executive Producer and co-host for the podcast series Small College America.

  • Students score universities on experience – Campus Review

    Three private universities offer the best student experience out of all Australian institutions according to the latest student experience survey, with the University of Divinity ranked number one overall.

  • Australian universities fall in world rankings – Campus Review

    Two universities made the top 20 and six made the top 50, as Asian unis push to improve

    Just under 70 per cent of Australian universities have dropped compared to last year in the latest QS World University Rankings released on Thursday.

  • WSU wins impact rankings for fourth year – Campus Review

    Western Sydney University (WSU) has ranked first in the measure of delivering community impact out of over 2000 universities globally in the Times Higher Education Impact Rankings released Wednesday.

  • Melbourne has the best academic reputation of any Australian uni, Times Higher Education says

    Melbourne University Campus in Carlton.
    Picture: NCA NewsWire / David Geraghty

    The University of Melbourne has topped the list of Australia’s most prestigious higher education facilities globally.

  • Data, Decisions, and Disruptions: Inside the World of University Rankings

    University rankings are pretty much everywhere. The earliest university rankings in the U.S. date back to the early 1900s, and the modern ones to the 1983 debut of the U.S. News & World Report rankings. The kind of rankings we tend to talk about now, international or global rankings, really only dates back to 2003, with the creation of the Shanghai Academic Ranking of World Universities.

    Over the decade that followed that first publication, a triumvirate emerged at the top of the rankings pyramid: the Shanghai Rankings, run by a group of academics at Shanghai Jiao Tong University; the Quacquarelli Symonds, or QS, Rankings; and the Times Higher Education World University Rankings. Between them, these three rankings producers, particularly QS and Times Higher, created a bewildering array of new rankings, dividing the world up by geography and field of study, mainly based on metrics relating to research.

    Joining me today is the former Chief Data Officer of the Times Higher Education Rankings, Duncan Ross. He took over those rankings at a time when it seemed like the higher education world might be running out of things to rank. Under his leadership, though, the THE Impact Rankings, which are based around the 17 UN Sustainable Development Goals, were developed. And that’s created a genuinely new hierarchy in world higher education, at least among those institutions that choose to submit to the rankings.

    My discussion with Duncan today covers a wide range of topics related to his time at THE. But the most enjoyable bit by far, for me anyway, was the bit about the genesis of the Impact Rankings. Listen closely, especially when Duncan talks about how the Impact Rankings came about because THE realized that its industry rankings weren’t very reliable. Fun fact: around that time I got into a very public debate with Phil Baty, the editor of the Times Higher, on exactly that subject. Which means maybe, just maybe, I’m kind of a godparent to the Impact Rankings. But that’s just me. You may well find other points of interest in this very compelling interview. Let’s hand things over to Duncan.


    The World of Higher Education Podcast
    Episode 3.20 | Data, Decisions, and Disruptions: Inside the World of University Rankings 

    Transcript

    Alex Usher: So, Duncan, let’s start at the beginning. I’m curious—what got you into university rankings in the first place? How did you end up at Times Higher Education in 2015?

    Duncan Ross: I think it was almost by chance. I had been working in the tech sector for a large data warehousing company, which meant I was working across many industries—almost every industry except higher education. I was looking for a new challenge, something completely different. Then a friend approached me and mentioned a role that might interest me. So I started talking to Times Higher Education, and it turned out it really was a great fit.

    Alex Usher: So when you arrived at Times Higher in 2015, the company already had a pretty full set of rankings products, right? They had the global rankings, the regional rankings, which I think started around 2010, and then the subject or field of study rankings came a couple of years later. When you looked at all of that, what did you think? What did you feel needed to be improved?

    Duncan Ross: Well, the first thing I had to do was actually bring all of that production in-house. At the time, even though Times Higher had rankings, they were produced by Clarivate—well, Thomson Reuters, as it was then. They were doing a perfectly good job, but if you’re not in control of the data yourself, there’s a limit to what you can do with it.

    Another key issue was that, while it looked like Times Higher had many rankings, in reality, they had just one: the World University Rankings. The other rankings were simply different cuts of that same data. And even within the World University Rankings, only 400 universities were included, with a strong bias toward Europe and North America. About 26 or 27 percent of those institutions were from the U.S., which didn’t truly reflect the global landscape of higher education.

    So the challenge was: how could we broaden our scope and truly capture the world of higher education beyond the usual suspects? And beyond that, were there other aspects of universities that we could measure, rather than just relying on research-centered metrics? There are good reasons why international rankings tend to focus on research—it’s the most consistent data available—but as you know, it’s certainly not the only way to define excellence in higher education.

    Alex Usher: Oh, yeah. So how did you address the issue of geographic diversity? Was it as simple as saying, “We’re not going to limit it to 400 universities—we’re going to expand it”? I think the ranking now includes over a thousand institutions, right? I’ve forgotten the exact number.

    Duncan Ross: It’s actually around 2,100 or so, and in practice, the number is even larger because, about two years ago, we introduced the concept of reporter institutions. These are institutions that haven’t yet met the criteria to be fully ranked but are already providing data.

    The World University Rankings have an artificial limit because there’s a threshold for participation based on the number of research articles published. That threshold is set at 1,000 papers over a five-year period. If we look at how many universities could potentially meet that criterion, it’s probably around 3,000, and that number keeps growing. But even that is just a fraction of the higher education institutions worldwide. There are likely 30,000—maybe even 40,000—higher education institutions globally, and that’s before we even consider community colleges.

    So, expanding the rankings was about removing artificial boundaries. We needed to reach out to institutions in parts of the world that weren’t well represented and think about higher education in a way that wasn’t so Anglo-centric.

    One of the biggest challenges I’ve encountered—and it’s something people inevitably fall into—is that we tend to view higher education through the lens of our own experiences. But higher education doesn’t function the same way everywhere. It’s easy to assume that all universities should look like those in Canada, the U.S., or the UK—but that’s simply not the case.

    To improve the rankings, we had to be open-minded, engage with institutions globally, and carefully navigate the challenges of collecting data on such a large scale. As a result, Times Higher Education now has data on around 5,000 to 6,000 universities—a huge step up from the original 400. Still, it’s just a fraction of the institutions that exist worldwide.

    Alex Usher: Well, that’s exactly the mission of this podcast—to get people to think beyond an Anglo-centric view of the world. So I take your point that, in your first couple of years at Times Higher Education, most of what you were doing was working with a single set of data and slicing it in different ways.

    But even with that, collecting data for rankings isn’t simple, right? It’s tricky, you have to make a lot of decisions, especially about inclusion—what to include and how to weight different factors. And I think you’ve had to deal with a couple of major issues over the years—one in your first few years and another more recently.

    One was about fractional counting of articles, which I remember went on for quite a while. There was that big surge of CERN-related articles, mostly coming out of Switzerland but with thousands of authors from around the world, which affected the weighting. That led to a move toward fractional weighting, which in theory equalized things a bit—but not everyone agreed.

    More recently, you’ve had an issue with voting, right? What I think was called a cartel of voters in the Middle East, related to the reputation rankings. Can you talk a bit about how you handle these kinds of challenges?

    Duncan Ross: Well, I think the starting point is that we’re always trying to evaluate things in a fair and consistent way. But inevitably, we’re dealing with a very noisy and messy world.

    The two cases you mentioned are actually quite different. One is about adjusting to the norms of the higher education sector, particularly in publishing. A lot of academics, especially those working within a single discipline, assume that publishing works the same way across all fields—that you can create a universal set of rules that apply to everyone. But that’s simply not the case.

    For example, the concept of a first author doesn’t exist in every discipline. Likewise, in some fields, the principal investigator (PI) is always listed at the end of the author list, while in others, that’s not the norm.

    One of the biggest challenges we faced was in fields dealing with big science—large-scale research projects involving hundreds or even thousands of contributors. In high-energy physics, for example, a decision was made back in the 1920s: everyone who participates in an experiment above a certain threshold is listed as an author in alphabetical order. They even have a committee to determine who meets that threshold—because, of course, it’s academia, so there has to be a committee.

    But when you have 5,000 authors on a single paper, that distorts the rankings. So we had to develop a mechanism to handle that. Ideally, we’d have a single metric that works in all cases—just like in physics, where we don’t use one model of gravity in some situations and a different one in others. But sometimes, you have to make exceptions. Now, Times Higher Education is moving toward more sophisticated bibliometric measures to address these challenges in a better way.
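
    For readers unfamiliar with the fractional counting Alex raised above, here is a minimal sketch of the general idea: rather than every participating institution receiving one full credit per paper, credit is split in proportion to its share of the author list. The institutions and author counts below are invented for illustration; this is not THE’s actual bibliometric implementation.

    ```python
    # Minimal sketch of full vs. fractional counting of institutional paper credit.
    # Hypothetical data; not THE's actual bibliometric implementation.
    from collections import Counter

    # Each paper is represented by the institutional affiliation of every author.
    papers = [
        ["Uni A", "Uni A", "Uni B"],        # ordinary three-author paper
        ["Uni B"] * 10 + ["Uni C"] * 4990,  # "big science" paper with 5,000 authors
    ]

    full = Counter()        # one full credit per paper for any participating institution
    fractional = Counter()  # credit proportional to each institution's author share

    for authors in papers:
        n = len(authors)
        for inst, count in Counter(authors).items():
            full[inst] += 1
            fractional[inst] += count / n

    print("Full counting:      ", dict(full))
    print("Fractional counting:", {k: round(v, 3) for k, v in fractional.items()})
    ```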

    The second issue you mentioned—the voting behavior in reputation rankings—is completely different because it involves inappropriate behavior. And this kind of issue isn’t just institutional; sometimes, it’s at the individual academic level.

    We’re seeing this in publishing as well, where some academics are somehow producing over 200 articles a year. Impressive productivity, sure—but is it actually viable? In cases like this, the approach has to be different. It’s about identifying and penalizing misbehavior.

    At the same time, we don’t want to be judge and jury. It’s difficult because, often, we can see statistical patterns that strongly suggest something is happening, but we don’t always have a smoking gun. So our goal is always to be as fair and equitable as possible while putting safeguards in place to maintain the integrity of the rankings.

    Alex Usher: Duncan, you hinted at this earlier, but I want to turn now to the Impact Rankings. This was the big initiative you introduced at Times Higher Education. Tell us about the genesis of those rankings—where did the idea come from? Why focus on impact? And why the SDGs?

    Duncan Ross: It actually didn’t start out as a sustainability-focused project. The idea came from my colleague, Phil Baty, who had always been concerned that the World University Rankings didn’t include enough measurement around technology transfer.

    So, we set out to collect data from universities on that—looking at things like income from consultancy and university spin-offs. But when the data came back, it was a complete mess—totally inconsistent and fundamentally unusable. So, I had to go back to the drawing board.

    That’s when I came across SDG 9—Industry, Innovation, and Infrastructure. I looked at it and thought, This is interesting. It was compelling because it provided an external framework.

    One of the challenges with ranking models is that people always question them—Is this really a good model for excellence? But with an external framework like the SDGs, if someone challenges it, I can just point to the United Nations and say, Take it up with them.

    At that point, I had done some data science work and was familiar with the tank problem, so I jokingly assumed there were probably 13 to 18 SDGs out there. (That’s a data science joke—those don’t land well 99% of the time.) But as it turned out, there were more SDGs, and exploring them was a real light bulb moment.

    The SDGs provided a powerful framework for understanding the most positive role universities can play in the world today. We all know—well, at least those of us outside the U.S. know—that we’re facing a climate catastrophe. Higher education has a crucial role to play in addressing it.

    So, the question became: How can we support that? How can we measure it? How can we encourage better behavior in this incredibly important sector?

    Alex Usher: The Impact Rankings are very different in that roughly half of the indicators—about 240 to 250 across all 17 SDGs—aren’t naturally quantifiable. Instead, they’re based on stories.

    For example, an institution might submit, This is how we combat organized crime or This is how we ensure our food sourcing is organic. These responses are scored based on institutional submissions.

    Now, I don’t know exactly how Times Higher Education evaluates them, but there has to be a system in place. How do you ensure that these institutional answers—maybe 120 to 130 per institution at most—are scored fairly and consistently when you’re dealing with hundreds of institutions?

    Duncan Ross: Well, I can tell you that this year, over 2,500 institutions submitted approved data—so it’s grown significantly.

    One thing to clarify, though, is that these aren’t written-up reports like the UK’s Teaching Excellence Framework, where universities can submit an essay justifying why they didn’t score as well as expected—what I like to call the dog ate my student statistics paper excuse. Instead, we ask for evidence of the work institutions have done. That evidence can take different forms—sometimes policies, sometimes procedures, sometimes concrete examples of their initiatives.

    The scoring process itself is relatively straightforward. First, we give some credit if an institution says they’re doing something. Then, we assess the evidence they provide to determine whether it actually supports their claim. But the third and most important part is that institutions receive extra credit if the evidence is publicly available. If you publish your policies or reports, you open yourself up to scrutiny, which adds accountability.

    A great example is SDG 5—Gender Equality—specifically around gender pay equity. If an institution claims to have a policy on gender pay equity, we check: Do you publish it? If so, and you’re not actually living up to it, I’d hope—and expect—that women within the institution will challenge you on it. That’s part of the balancing mechanism in this process.

    Now, how do we evaluate all this? Until this year, we relied on a team of assessors. We brought in people, trained them, supported them with our regular staff, and implemented a layer of checks—such as cross-referencing responses against previous years. Ultimately, human assessors were making the decisions.

    This year, as you might expect, we’re introducing AI to assist with the process. AI helps us filter out straightforward cases, leaving the more complex ones for human assessors. It also ensures that we don’t run into assessor fatigue. When someone has reviewed 15 different answers to the same question from various universities, the process can get a bit tedious—AI helps mitigate that.

    Alex Usher: Yeah, it’s like that experiment with Israeli judges, right? You don’t want to be the last case before lunch—you get a much harsher sentence if the judge is making decisions on an empty stomach. I imagine you must have similar issues to deal with in rankings.

    I’ve been really impressed by how enthusiastically institutions have embraced the Impact Rankings. Canadian universities, in particular, have really taken to them. I think we had four of the top ten last year and three of the top ten this year, which is rare for us. But the uptake hasn’t been as strong—at least not yet—in China or the United States, which are arguably the two biggest national players in research-based university rankings. Maybe that’s changing this year, but why do you think the reception has been so different in different parts of the world? And what does that say about how different regions view the purpose of universities?

    Duncan Ross: I think there’s definitely a case that different countries and regions have different approaches to the SDGs. In China, as you might expect, interest in the rankings depends on how well they align with current Communist Party priorities. You could argue that something similar happens in the U.S. The incoming administration has made it fairly clear that SDG 10 (Reduced Inequalities) and SDG 5 (Gender Equality) are not going to be top priorities—probably not SDG 1 (No Poverty), either. So in some cases, a country’s level of engagement reflects its political landscape.

    But sometimes, it also reflects the economic structure of the higher education system itself. In the U.S., where universities rely heavily on high tuition fees, rankings are all about attracting students. And the dominant ranking in that market is U.S. News & World Report—the 600-pound gorilla. If I were in their position, I’d focus on that, too, because it’s the ranking that brings in applications.

    In other parts of the world, though, rankings serve a different purpose. This ties back to our earlier discussion about different priorities in different regions. Take Indonesia, for example. There are over 4,000 universities in the country. If you’re an institution like ITS (Institut Teknologi Sepuluh Nopember), how do you stand out? How do you show that you’re different from other universities?

    For them, the Impact Rankings provided an opportunity to showcase the important work they’re doing—work that might not have been recognized in traditional rankings. And that’s something I’m particularly proud of with the Impact Rankings. Unlike the World University Rankings or the Teaching Rankings, it’s not just the usual suspects at the top.

    One of my favorite examples is Western Sydney University. It’s a fantastic institution. If you’re ever in Sydney, take the train out there. Stay on the train—it’s a long way from the city center—but go visit them. Look at the incredible work they’re doing, not just in sustainability but also in their engagement with Aboriginal and Torres Strait Islander communities. They’re making a real impact, and I’m so pleased that we’ve been able to raise the profile of institutions like Western Sydney—universities that might not otherwise get the recognition they truly deserve.

    Alex Usher: But you’re still left with the problem that many institutions that do really well in research rankings have, in effect, boycotted the Impact Rankings—simply because they’re not guaranteed to come first.

    A lot of them seem to take the attitude of, Why would I participate in a ranking if I don’t know I’ll be at the top?

    I know you initially faced that issue with LERU (the League of European Research Universities), and I guess the U.S. is still a challenge, with lower participation numbers.

    Do you think Times Higher Education will eventually crack that? It’s a tough nut to crack. I mean, even the OECD ran into the same resistance—it was the same people saying, Rankings are terrible, and we don’t want better ones.

    What’s your take on that?

    Duncan Ross: Well, I’ve got a brief anecdote about this whole rankings boycott approach. There’s one university—I’m not going to name them—that made a very public statement about withdrawing from the Times Higher Education World University Rankings. And just to be clear, that’s something you can do, because participation is voluntary—not all rankings are. So, they made this big announcement about pulling out. Then, about a month later, we got an email from their graduate studies department asking, Can we get a copy of your rankings? We use them to evaluate applicants for interviews. So, there’s definitely some odd thinking at play here.

    But when it comes to the Impact Rankings, I’m pretty relaxed about it. Sure, it would be nice to have Oxford or Harvard participate—but MIT does, and they’re a reasonably good school, I hear. Spiderman applied there, so it’s got to be decent. The way I see it, the so-called top universities already have plenty of rankings they can focus on. If we say there are 300 top universities in the world, what about the other 36,000 institutions?

    Alex Usher: I just want to end on a slightly different note. While doing some background research for this interview, I came across your involvement in DataKind—a data charity that, if I understand correctly, you founded. I’ve never heard of a data charity before, and I find the idea fascinating—intriguing enough that I’m even thinking about starting one here. Tell us about DataKind—what does it do?

    Duncan Ross: Thank you! So, DataKind was actually founded in the U.S. by Jake Porway. I first came across it at one of the early big data conferences—O’Reilly’s Strata Conference in New York. Jake was talking about how data could be used for good, and at the time, I had been involved in leadership roles at several UK charities. It was a light bulb moment. I went up to Jake and said, Let me start a UK equivalent! At first, he was noncommittal—he said, Yeah, sure… someday. But I just kept nagging him until eventually, he gave in and said yes. Together with an amazing group of people in the UK—Fran Bennett, Caitlin Thaney, and Stuart Townsend—we set up DataKind UK.

    The concept is simple: we often talk about how businesses—whether in telecom, retail, or finance—use data to operate more effectively. The same is true in the nonprofit sector. The difference is that banks can afford to hire data scientists—charities often can’t. So, DataKind was created to connect data scientists with nonprofit organizations, allowing them to volunteer their skills.

    Of course, for this to work, a charity needs a few things:

    1. Leadership willing to embrace data-driven decision-making.
    2. A well-defined problem that can be analyzed.
    3. Access to data—because without data, we can’t do much.

    Over the years, DataKind—both in the U.S. and worldwide—has done incredible work. We’ve helped nonprofits understand what their data is telling them, improve their use of resources, and ultimately, do more for the communities they serve. I stepped down from DataKind UK in 2020 because I believe that the true test of something successful is whether it can continue to thrive without you. And I’m happy to say it’s still going strong. I kind of hope the Impact Rankings continue to thrive at Times Higher Education now that I’ve moved on as well.

    Alex Usher: Yeah. Well, thank you for joining us today, Duncan.

    Duncan Ross: It’s been a pleasure.

    Alex Usher: And it just remains for me to thank our excellent producers, Sam Pufek and Tiffany MacLennan. And you, our viewers, listeners, and readers for joining us today. If you have any questions or comments about today’s episode, please don’t hesitate to get in touch with us at [email protected]. Worried about missing an episode of the World of Higher Education? There’s a solution for that. Go to our YouTube page and subscribe. Next week, our guest will be Jim Dickinson. He’s an associate editor at Wonkhe in the UK, and he’s also maybe the world expert on comparative student politics. And he joins us to talk about the events in Serbia where the student movement is challenging the populist government of the day. Bye for now.

    *This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.
